The current open source language models (e.g. llama2-7b-4bit) can run CPU-only with 4 GB of RAM (and I believe even smaller models are coming). So we can run a model on the user’s computer and build a KDE-integrated chatbot on top of it, like Windows Copilot.
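To make this concrete, here is a minimal sketch of what the local backend could look like, assuming the llama-cpp-python bindings and an already downloaded GGUF model file (the path, thread count and prompts below are placeholders, not a tested configuration):

```python
# Minimal sketch: CPU-only inference with a 4-bit quantized model via llama-cpp-python.
# The model path is a placeholder; the model file would be downloaded separately.
from llama_cpp import Llama

llm = Llama(
    model_path="/usr/share/kde-assistant/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder
    n_ctx=2048,    # context window
    n_threads=4,   # run on a few CPU cores, no GPU required
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a KDE Plasma assistant."},
        {"role": "user", "content": "How do I enable Night Colour?"},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```

The Plasma frontend could then talk to a small service like this over D-Bus, for example, so the model stays out of the shell process itself.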
It can enhance KDE in many ways:
KDE is often criticized for the sheer number of customization options in systemsettings and apps. With the chatbot, the user can change settings using plain English (or other languages).
The “non-runner” plugins in KRunner (e.g. calculator, weather, dict) can be moved to the chatbot, which offers a better UI for history and follow-up questions.
The chatbot can access KHelpCenter, discuss.kde.org and other sources, so it can answer questions about KDE.
KCommandBar (Ctrl-Alt-i) can be enhanced so the user doesn’t have to type the exact command; anything with the same meaning would do.
Baloo search can be similarly enhanced (e.g. using embeddings instead of a word index); see the sketch after this list.
Apps like KFind and KRename can be integrated into the chatbot, so the user can find/rename files in plain English.
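To illustrate the KCommandBar/Baloo points above, here is a minimal sketch of embedding-based matching, assuming the sentence-transformers package; the entries are made-up examples, and a real integration would of course be a proper C++/Qt implementation inside Baloo/KCommandBar rather than a Python script:

```python
# Minimal sketch: match a query by meaning rather than by exact words,
# using sentence embeddings (sentence-transformers, small CPU-friendly model).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical command names or indexed document titles.
entries = [
    "Toggle full screen mode",
    "Open the configuration dialog",
    "Split the view vertically",
]
entry_vecs = model.encode(entries, convert_to_tensor=True)

query = "make the window fill the whole display"
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, entry_vecs)[0]
best = int(scores.argmax())
print(entries[best])  # expected: "Toggle full screen mode"
```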
Possible problems:
It would triple the current Plasma memory footprint, so it probably has to be off by default. But KDE devs might be less interested in a feature that only benefits a fraction of users.
The other option, of course, is that KDE e.V. hosts the language model and provides an API. That might be quite expensive, though.
Whether the license of the language model is acceptable, and whether it should be distributed with Plasma or downloaded after installation.
I agree. ChatGPT is an awesome helper for KDE, systemd services, application installations, distro upgrades and more.
It shouldn’t need to be, but the systemd services KCM was discontinued, for example.
I think a Plasma search integration could make sense, but even more so a plugin for Kate, for writing text. It’s often a great tool for fixing code, or for generating code from a thoroughly specified request.
But I think it should be web-based, with a dedicated server at some VPS host, strong security measures, access tokens, and a pay-what-you-can subscription. The option to run it locally MAY be nice, but it’s not really suited for many people. It’s not like Firefox Translate, at least not what we currently know as ChatGPT.
A small model that only indexes Linux forums and similar sources could maybe run locally very well.
Great idea to have the bot access selected resources. This could really make the results more trustworthy.
I disagree that all the small apps should be integrated into the bot. That’s simply not necessary, at least in my eyes.
You COULD have a fully fledged “Robot, show me urgent mails for university and create an alarm in 1h to answer them”.
But things like searching more easily in Dolphin should not be outsourced to something way too complex. KFind could probably cover currently missing features like searching by file category (image, document, video, audio).
Yes, but generating/understanding large chunks of text/code would need a larger model. Commercial products might perform much better than open source models in these areas.
Yes, that is possible. We don’t even need to run our own models; instead we just call OpenAI’s paid API. It is probably cheaper, as they have more efficient hardware.
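For comparison, the integration layer for a hosted backend would be very thin; a sketch assuming the official openai Python client (the model name and prompt are just examples, and the API key would have to be provided by KDE e.V. or by the user):

```python
# Sketch: forward a chat request to OpenAI's hosted API instead of a local model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "system", "content": "You are a KDE Plasma assistant."},
        {"role": "user", "content": "Rename all photos in ~/Pictures by the date they were taken."},
    ],
)
print(response.choices[0].message.content)
```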
But a paid-subscription model seems quite unusual for an open source project like KDE. The only example that comes to mind is Mozilla’s Firefox Relay Premium.
Systemd services were supposed to have a GUI; there were two approaches, and neither of them works.
ChatGPT is extremely useful for Linux stuff. But thinking about it, most of their training data will not be Linux-related, so a model trained on high-quality inputs could really be sufficient.