These days a lot of clients have built-in support, including all GTK4 text areas, so it might not be needed. Maybe the ship has sailed on this, or there could still be value in centralising it and having a common pattern.
If we could remove the custom and duplicated code for this feature in NeoChat, Tokodon and KMail, this would make me very happy. One thing that NeoChat and Tokodon have is custom emoji support, which might be a bit tricky to implement.
Amazing. I hope we will get a proper emoji picker in Plasma 6 based on this work. This is one of the things I miss the most from Windows: there you can just press "Win+." and type, and the selected emoji is automatically pasted into the app. Here is a baby wogue video demonstrating the problem. Ignoring the smugness, the points he picks up on are actually valid.
Direct Diacritic Display is also a fantastic thing, as now I can easily and properly spell people's names.
Timely Translation Tasks and Simply Speak will greatly improve accessibility for many users.
All around an amazing job. Will be eagerly waiting for this.
In case you didn't know, this works in Plasma as well. The UX isn't the greatest, because it requires you to manually paste it into the application.
Handwriting support like in Windows 11 would be a nice - but not very important - additional input method. It would probably mean including something like Tesseract to make it work.
That's a really good question.
Right now all the examples in the blog post multiplex internally within a single application.
KWin is set up so you have one input backend. Having two InputContexts alive at once, both grabbing and inserting, wouldn't really work.
But being able to switch at runtime seems doable.
Code-wise we need three steps (a rough sketch follows the list):
1. see if we can port the code in my playground to use KDE global shortcuts instead of sniffing keys to activate
2. add some "activate me please" signal to the InputMethod manager class
3. make KWin follow this, replaying as though we had just had a focus-out on the old input context, and a focus-in to a new context on the new input method
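Very roughly, steps 1 and 2 could look something like the sketch below. KGlobalAccel::setGlobalShortcut is the real KDE Frameworks call for global shortcuts; everything else (the class name, the signal, the shortcut choice) is made up for illustration and is not KWin's or the playground's actual code.

```cpp
// Hypothetical sketch only: class, signal and shortcut are illustrative.
// Covers step 1 (a KGlobalAccel shortcut instead of key sniffing) and
// step 2 (an "activate me please" signal on an input-method manager).
#include <QAction>
#include <QObject>
#include <KGlobalAccel>

class InputMethodManager : public QObject
{
    Q_OBJECT
public:
    explicit InputMethodManager(QObject *parent = nullptr)
        : QObject(parent)
    {
        // Step 1: a proper global shortcut instead of sniffing raw keys.
        auto *toggle = new QAction(QStringLiteral("Switch input method"), this);
        toggle->setObjectName(QStringLiteral("switch_input_method"));
        KGlobalAccel::setGlobalShortcut(
            toggle, {QKeySequence(Qt::META | Qt::Key_Space)});

        connect(toggle, &QAction::triggered, this, [this] {
            // Step 2: tell the compositor which input method wants focus.
            Q_EMIT activationRequested(QStringLiteral("my-input-method"));
        });
    }

Q_SIGNALS:
    // Step 3 is KWin's side: on this signal it would replay a focus-out
    // on the old input context and a focus-in on the new one.
    void activationRequested(const QString &inputMethodId);
};
```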
The voice input especially is one of the features that would increase accessibility by a lot. Desktop Linux is lacking in the accessibility department, and that would get us a bit closer. The other features are cool too, but I think a system-wide speech-to-text system would be the most must-have of them all.
In general the demos are sick and I hope at least some of that gets into Plasma 6 in a polished state!
The first two ideas, "Convenient Clipboard Connections" and "Easy Emoji Entry", have been provided in Chinese IMEs (e.g. sogoupinyin) for decades. If we are exploring this direction, then commercial CJK IMEs, especially modern smartphone IMEs, would be a good source of inspiration.
Another example is "fig", which uses the ibus IME framework to provide autocompletion for Linux shells.
The problem is that to do good autocompletion, you need not only the text the user is currently typing, but also the surrounding text. fig does it using shell hooks. But what if we made an IME that provides autocompletion for all text boxes (probably using ChatGPT-like tech, e.g. what GitHub Copilot does in VS Code)?
Then perhaps there should be some way for the IME to get the "surrounding text" from the app.
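To illustrate the point, here is a toy sketch (not tied to any real protocol or completion engine; the names and behaviour are made up) of the shape such an interface could take: the app hands over the text around the cursor plus the cursor offset, and the completion backend uses that context rather than just the keystrokes.

```cpp
// Toy illustration only: with just the half-typed word the engine can
// offer a generic dictionary word at best; with the surrounding text it
// can rank a context-appropriate continuation instead.
#include <iostream>
#include <string>

// Stand-in for whatever backend the IME would call (a local model,
// a Copilot-style service, ...).
std::string suggestCompletion(const std::string &surroundingText,
                              std::size_t cursor)
{
    const std::string before = surroundingText.substr(0, cursor);
    const bool looksLikeLetter = before.find("Dear") != std::string::npos;
    const bool endsWithTh =
        before.size() >= 2 && before.compare(before.size() - 2, 2, "th") == 0;
    if (looksLikeLetter && endsWithTh) {
        return "thank you for your message";
    }
    return "the";
}

int main()
{
    // What a text-input protocol could hand to the IME: the text around
    // the cursor plus the cursor offset.
    const std::string surrounding = "Dear Anna, th";
    std::cout << suggestCompletion(surrounding, surrounding.size()) << '\n';
}
```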
Hi! I just recently found this discussion. The author of fcitx5, csslayer (fcitx5 being the most advanced CJK input method framework under Wayland at the moment), is an expert in developing input methods under Wayland. He has written several blog posts about Wayland input method issues:
Hi all. I'm an ex-dev whose once-promising career was stripped away by a disability that makes keyboard and mouse usage difficult to impossible. I just discovered the amazing ibus-typing-booster (any relationship between this thread's Fabian and that app's Mike Fabian?), and while doing my homework on how to use it correctly I came across this thread.
Sorry I'm a little late to the party, but hopefully somebody here will see this reply, because I see a lot of important people here, and a very important subject was brought up.
Having two InputContexts alive at once, both grabbing and inserting, wouldn't really work.
This is a MAJOR accessibility problem in Wayland. We have lots of tools that can help with accessibility, for power users and disabled users alike, but because they all exclusively grab input devices, they cannot coexist. That means there's no way to achieve the accessibility levels that are possible on X11 or Windows: unless a new solution is developed, we would need a single tool that can do literally everything.
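For anyone unfamiliar with why these tools cannot coexist, here is a minimal illustration of the exclusive-grab behaviour at the evdev level, one concrete example of the general problem (the device path is just a placeholder, and you need permission on the device node): once one process holds the grab, a second attempt fails with EBUSY, so a second tool never sees that keyboard.

```cpp
// Minimal illustration of the exclusive-grab problem on evdev devices:
// once one process holds EVIOCGRAB, a second grab fails with EBUSY.
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <linux/input.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main()
{
    const char *device = "/dev/input/event0"; // example path
    int fd = open(device, O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        std::perror("open");
        return 1;
    }
    if (ioctl(fd, EVIOCGRAB, 1) < 0) {
        // EBUSY here means another tool already grabbed the device.
        std::fprintf(stderr, "grab failed: %s\n", std::strerror(errno));
        close(fd);
        return 1;
    }
    std::puts("exclusive grab acquired; no other tool sees this device now");
    ioctl(fd, EVIOCGRAB, 0); // release the grab
    close(fd);
    return 0;
}
```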
I have recently been very lightly involved in a discussion with some application developers on GitHub regarding this problem.
If any of you have any input on this discussion or any suggestions for directions to take to solve this problem, it would be deeply appreciated. The conversation on GitHub is over here: https://github.com/kermitfrog/inputmangler/issues/2
Apologies for any interruption, but I hope some of you might find this a subject worthy of further consideration.