Just curious. Being vehemently anti-AI, I am beginning to wonder if I should just lock updates at this point and keep things as human-made as possible. Does anybody have an idea how much AI-created code is going into these projects? Is there going to be transparency on this issue for people like me who want nothing to do with AI or the talentless lazy hacks that use it?
I’m not aware of a firm policy, but there was some discussion on a mailing list: 'Super Simple AI Policy proposal' thread - MARC
The GSoC page has this for GSoC participants:
KDE stance is the following:
- Do not use AI to generate the proposal, MR description or commit messages. Use your own words.
- Even if your English is not proficient, we prefer to talk to a human being instead of someone who just copy/pastes their thoughts in their own language and lets the computer do the translation. You will never learn if you don’t try yourself.
- Do not generate code or make MRs with AI-generated code. KDE cares about licensing, ethics and the environment, three things that AI does not care about.
Generally the KDE developers I interact with seem to be either against AI use, or cautious about trusting it too much. I’m not aware of any vibe coding (i.e. using what the AI spit out without reviewing it) being accepted into the codebase.
For me personally, I program in my free time because I enjoy it and I enjoy learning. Using AI would be pointless for my goals. I also avoid it because of the ethical and environmental issues.
Thanks. I am pleased to see KDE’s stance as stated. I am not entirely surprised that Fedora seems OK with it, as they are corporate. I have found the insane amount of AI infecting everything these past few months more than a little unsettling. I have always said I would rather seem paranoid now than be proven right later. I am gravely concerned about the immediate future, as there seem to be zero controls on AI, and that can lead nowhere good. To me, right now, the danger posed by AI far exceeds any potential benefit.
This will all go away after the bubble pops, right?
During the dot-com bubble, people thought everything needed a website and that everything would move online, but we still have big stores that aren’t online.
Yes, AI will change things, but I can guarantee it won’t replace us. Vibe coders will go extinct, but we real programmers won’t.
Until it pops, I support those who don’t support AI. I’d rather my desktop environment be abandoned than be maintained by something lacking morals and life.
Does AI even have a benefit, aside from making the wrong people rich, normalising theft of intellectual property, and misusing open source? It hasn’t provided any real benefit.
Given this stance, it’s best that I don’t answer.
I have a few scripts that I composed with the aid of AI, which I couldn’t have managed on my own.
It’s not the INPUT that counts, it’s the skill of the coder to manage and properly vet the output.
Developers must take responsibility for their patches and patches are reviewed for correctness, but there is no ban on using AI to write code. Nobody will want to review and accept your patch if it looks like trash, whether it’s made by AI or by hand.
In my view, the review processes for AI-assisted patches and entirely hand-written patches do not need to differ. The real issue would come if reviewers were overwhelmed by AI slop and had to spend a lot of time and effort to get through it all. If reviewers start to burn out, it could cause review quality for even human-made patches to go down. In some ways, having our own development platform separate from GitHub insulates us from this outcome, but we are not immune to it.
Regarding transparency, how would we even know if someone didn’t disclose it and the code wasn’t obviously AI-generated? Ironically, we might need an AI to determine whether code was AI-generated, because researching whether some licensed code was copied by an AI without attribution would take far too long. Stolen code is an issue we probably couldn’t fully stop even if LLMs had never existed. I also think that the types of people who are willing to disclose their usage of AI probably aren’t the problematic people.
There are also some gray areas, such as Visual Studio Code’s autocomplete AI. It behaves much like the autocomplete you get from Language Server Protocol plugins or whatever Visual Studio and Intellij used for many years, but it is capable of looking at the context and suggesting the next bit of code to write with a surprising though not flawless degree of accuracy. IMO, it would be ridiculous to make people disclose the usage of that since it’s so close to how older autocomplete systems are used.
The conclusion of the thread linked by @mrp is basically that policy around AI is up to the individual project maintainers’ discretion at the moment.
Therein lies the rub. If you could not have managed it yourself, then how do you know the AI did it right? To me, the danger comes from exactly what you stated.
Thanks for the reply. The problem right now is that it seems like the Wild West. There are no controls. No oversight. And given what is happening with Anthropic and the US Department of Defense (no, I won’t call it the Department of War, any more than I will call the Gulf of Mexico the Gulf of America), it looks like there never will be. Having to rely on individual ethics is about the most flawed and dangerous position I can think of.
It depends. “AI” is a tool and like any tool you need to know how to use it and not to abuse it.
“I couldn’t have managed myself” is not the same as “I don’t know what I’m doing”.
I have used “AI” a few times to write or correct scripts, but only ones I understood. I may not be able to write them from scratch myself because I only do this a few times per year, so I don’t remember the syntax and the options. But once the script is written, I understand it and I can verify it. So it was a matter of saving time, so I don’t have to look up all the syntax, options, etc.
But yeah, if someone asks ChatGPT for a script that they can’t verify and then runs it or distributes it without questioning its validity or integrity then that is a problem. I wouldn’t do that simply because I’d be afraid to run or share something I don’t understand.
This is just shortsighted. I’m an AI skeptic too, and I hate forced AI integration into everything, but if used properly AI can make your tasks easier. Yes, it’s easily abused and overused, like many things, but you can’t generalize this way. Would you call someone a talentless and lazy hack because they used a power tool instead of a manual screwdriver, or used online search tools instead of manually going through paper books?
Yes, AIs have real benefits. But most of them are small models that do not require large data centers or huge amounts of energy, and don’t even need to steal data.
These can increase the efficiency of devices such as heaters; they can be used to voice player character names and the like in games (a small voice-specific AI where voice actors are paid for every generated word); they can control complex machines such as fusion reactors far more accurately than anything else; and so on.
Not everything about AI is big-data LLM slop, even if that is hard to believe these days… (I am also tired of all the slop.)
No oversight.
There is oversight in the form of patch review, just not AI specific oversight and not all projects and patches have equal amounts of oversight.
FWIW, nobody I know does any significant degree of AI coding (I say “significant” to hedge, but actually I don’t know anyone who does at all). That doesn’t mean it couldn’t be happening somewhere, especially if the patches being produced are good enough not to cause noteworthy problems.
I avoided using the term “vibe coding” and used “AI coding” instead because I’m not sure how to precisely define “vibe coding”. It seems like a term that has become polarized with no firmly defined scope, so it could easily be misapplied unintentionally or intentionally to discredit someone or their patches.
I think that not all ways of using AI for coding are the same.
AI autocomplete is easily the least problematic form because you can judge the generated code piece-by-piece. This avoids the problem where you ask for something and now you have to understand a whole file or project in its entirety from the beginning to properly take responsibility for it. It’s also the least likely to cause your mental skills to atrophy.
The 2nd least problematic form involves creating files for things with a fair bit of boilerplate code that tends to follow a template and aren’t likely to cause big problems even if there’s a mistake in the code. This may include unit tests and some kinds of scripts. Most scripts for your personal use could also fall into this category since there is only ever one person who could be affected. If a script just for yourself deletes your home dir contents, that’s on you.
Where AI coding could become more of an issue is when you generate a project or an important part of one and release it to the public. I don’t know anyone who does this or would consider doing this. If someone who actually was a “talentless lazy hack” tried to do this, it wouldn’t surprise me if their project or patch quickly went bad without a good bit of luck. You need to be an expert who can properly review the generated code. There is also a real risk that your mental skills would atrophy if you relied on this kind of AI assistance too much.
AI in the more general sense predates even the modern internet. If I remember correctly, some chatbots even ran on the ARPANET.
It is also inevitable that AI and LLMs will change the world (even if only very slightly), but currently their use is harming freedom, privacy, intellectual property, computing, and many other things.
Even the dot-com bubble stemmed from a very real technology that changed the market. But it didn’t destroy traditional offline companies, and new online-first companies are still appearing. During the bubble it was believed that only the early entrants would survive (even though the majority died), yet we see new companies everywhere.
My point is that we shouldn’t chase the bubble until it pops.