A lot of clients now have built-in support, including all GTK4 text areas, so it might not be needed. Maybe that ship has sailed, or there could still be value in centralising it and having a common pattern.
If we could remove the custom, duplicated code for this feature in NeoChat, Tokodon, and KMail, that would make me very happy. One thing NeoChat and Tokodon have is custom emoji support, which might be a bit tricky to implement.
Amazing. I hope we will get a proper emoji picker in Plasma 6 based on this work. This is one of the things I miss the most from Windows. In Windows you can just press "Win+." and type, and the selected emoji is automatically pasted into the app. Here is a baby wogue video demonstrating the problem. Ignoring the smugness, the points he picks up on are actually valid.
Direct Diacritic Display is also a fantastic thing, as now I can easily and properly spell people's names.
Timely Translation Tasks and Simply Speak will greatly improve accessibility for many users.
All around an amazing job. Will be eagerly waiting for this.
In case you didn't know, this works in Plasma as well. The UX isn't the greatest, because it requires you to manually paste it into the application.
Handwriting support like in Windows 11 would be a nice - but not very important - additional input method. It would probably mean including something like Tesseract to make it work.
That's a really good question.
Right now all the examples in the blog post multiplex internally within a single application.
KWin is set up so you have one input backend. Having two InputContexts alive at once, both grabbing and inserting, wouldn't really work.
But being able to switch at runtime seems doable.
Code-wise we need three steps:
seeing if we can port code in my playground to use KDE global shortcuts instead of sniffing keys to activate
add some "activate me please" signal to the InputMethod manager class
make KWin follow this, replaying as though we had just changed focus out of the old input context, and acting like a focus-in to a new context on the new input method
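The three steps above could be modelled roughly like this. This is a toy sketch, not real KWin code - the class and method names (`InputMethodManager`, `requestActivation`, the focus-replay calls) are all hypothetical stand-ins for whatever the real manager class would look like:

```cpp
#include <string>
#include <vector>

// Hypothetical model: a manager that owns several input contexts but
// keeps only one active at a time, switching by replaying focus events.
struct InputContext {
    std::string name;
    bool focused = false;
    void focusOut() { focused = false; }
    void focusIn()  { focused = true; }
};

class InputMethodManager {
public:
    void add(InputContext *ctx) { m_contexts.push_back(ctx); }

    // Step 2: the "activate me please" request, which an input method
    // would emit after being triggered by a KDE global shortcut (step 1).
    void requestActivation(const std::string &name) {
        InputContext *next = nullptr;
        for (auto *c : m_contexts)
            if (c->name == name)
                next = c;
        if (!next || next == m_active)
            return;
        // Step 3: replay as though focus changed between applications -
        // focus-out on the old context, focus-in on the new one.
        if (m_active)
            m_active->focusOut();
        next->focusIn();
        m_active = next;
    }

    InputContext *activeContext() const { return m_active; }

private:
    std::vector<InputContext *> m_contexts;
    InputContext *m_active = nullptr;
};
```

The point of routing everything through focus replay is that each input method only ever sees the same focus-in/focus-out lifecycle it already handles, so switching at runtime needs no new code paths on the input-method side.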
The voice input especially is one of the features that would increase accessibility by a lot. Desktop Linux is lacking in the accessibility department, and that would get us a bit closer. The other features are cool too, but I think a system-wide STT system would be the most essential of them all.
In general the demos are sick, and I hope at least some of that gets into Plasma 6 in a polished state!
The first two ideas, "Convenient Clipboard Connections" and "Easy Emoji Entry", have been provided by Chinese IMEs (e.g. sogoupinyin) for decades. If we are exploring this direction, then commercial CJK IMEs, especially modern smartphone IMEs, would be a good source of inspiration.
Another example is "fig", which uses the IBus IME framework to provide autocompletion for Linux shells.
The problem is that to do good autocompletion, you need not only the text the user is currently typing, but also the surrounding text. fig does it using shell hooks. But what if we made an IME that provides autocompletion for all text boxes? (Probably using ChatGPT-like tech, e.g. what GitHub Copilot does in VS Code.)
Then perhaps there should be some way for the IME to get "surrounding text" from the app.
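For what it's worth, Wayland's text-input-v3 protocol already lets a client report surrounding text and the cursor position to the compositor via `set_surrounding_text`. A toy sketch of what an autocompleting IME could do with that context - the `SurroundingText` struct and the vocabulary-based completer are invented stand-ins for a real (e.g. language-model-backed) engine:

```cpp
#include <string>
#include <vector>

// Invented stand-in for what an app would report via something like
// text-input-v3's set_surrounding_text(text, cursor, anchor).
struct SurroundingText {
    std::string text;   // text around the caret, as the app reports it
    std::size_t cursor; // offset of the caret within `text`
};

// The word fragment immediately before the cursor: the minimal context
// a completer needs beyond the current keystroke.
std::string prefixBeforeCursor(const SurroundingText &st) {
    std::size_t start = st.cursor;
    while (start > 0 && st.text[start - 1] != ' ')
        --start;
    return st.text.substr(start, st.cursor - start);
}

// Toy completion: filter a fixed vocabulary by that prefix.  A real IME
// would call out to a smarter backend here instead.
std::vector<std::string> complete(const SurroundingText &st,
                                  const std::vector<std::string> &vocab) {
    const std::string prefix = prefixBeforeCursor(st);
    std::vector<std::string> matches;
    if (prefix.empty())
        return matches;
    for (const auto &word : vocab)
        if (word.rfind(prefix, 0) == 0) // word starts with prefix
            matches.push_back(word);
    return matches;
}
```

Without the surrounding text, the IME would only ever see the keys typed since it gained focus, which is why the protocol-level plumbing matters as much as the completion engine itself.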