If AI is so good at coding … where are the open source contributions?
Wasn't there recently an article on Lemmy about all the bullshit AI pull requests that FOSS maintainers have to put up with?
All over tbh, most devs are using it to some degree now.
However, it's not an "AI" FOSS contribution; it'll be Doug contributing, or Pardep, or whoever.
They just quietly used AI like a normal person for some basic parts of the code to get it done faster, then tweaked it to look better.
I'd expect most FOSS projects with contributions in the past 6 months have bits and pieces, a line here or there, written by AI.
That's the thing: when the AI performs well, you wouldn't even be able to tell AI was used.
"AI" is nowhere because it doesn't exist. Sure, there are programs that are good at summarizing Stackexchange but is that so really amazing? Maybe it saves devs a few seconds? Do we credit "AI" with amazing writing when people use grammar correction? The hype is so inane. Don't feed into it with this nonsense.
As the article explains, they haven't been able to find any meaningful contributions to actual problems. I'm sure that plagarized summaries can help with your boilerplates/etc but that's not "AI".
That's like saying search engines don't exist.
AI definitely exists. It's basically just a slightly faster way to get code from Stack Exchange, except with less context and more uncertainty.
"AI" is a very broad term. Back when I went to university, my AI course started out with Wumpus World. While this is an extremely simple problem, it's still considered "AI".
The enemies in computer games that are controlled by the computer are also considered "AI".
Machine learning algorithms, like recommender systems and image recognition, are also considered "AI".
LLMs like ChatGPT and Claude are also "AI".
None of these things are conscious, self-aware, or intelligent, yet they are all part of the field called "AI".
None of these, however, are "AGI" (Artificial General Intelligence). AGI is when a machine becomes conscious and self-aware. That's the scenario all the sci-fi movies portray, and we are still very far away from that stage.
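For anyone who hasn't seen Wumpus World: the classic agent really is just a handful of hard-coded rules over the current percepts, and that already counts as "AI" in the textbook sense. A minimal sketch, assuming a hypothetical `choose_action` helper and made-up percept strings (not from any particular textbook implementation):

```python
def choose_action(percepts, ahead_visited):
    """Pick the agent's next action from a few hard-coded rules."""
    if "glitter" in percepts:
        return "grab"          # gold on this square: pick it up
    if "stench" in percepts or "breeze" in percepts:
        return "turn_back"     # a pit or the wumpus may be adjacent
    # Safe square: explore new ground if the square ahead is unvisited.
    return "turn_left" if ahead_visited else "move_forward"


# Example: the agent feels a breeze, so it plays it safe.
print(choose_action({"breeze"}, ahead_visited=False))  # -> turn_back
```

That's the whole point: a dozen lines of if/else is "AI", and so is an LLM; the label covers a huge range of sophistication.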
If the only point you can make is picking apart that LLMs don't "count" as AI, then sorry mate, but 2022 called, it wants its discussion back.
No one really cares about this distinction anymore. It's like literally vs figuratively.
LLMs are branded under the concept of AI; arguing that they don't count is not a discussion people in the industry really care about anymore.
quietly used AI like a normal person
Is AI use normal, though? Maybe for you and many others, but the existence of these communities, articles, and folks who just don't get much out of it, despite the industry cramming it down everyone's throats, would suggest it's anything but normal.
Yeah, I mean most FOSS projects have had code copied from Stack Exchange for decades.
AI mostly just copies from Stack Exchange too, so it's really just copying from Stack Exchange with extra steps.
That’s exactly it and why I can’t take this article very seriously.
Just because AI is writing some code doesn’t mean it gets credit as the developer. A human still puts their name beside it. They get all the credit and all the responsibility.
A piece of code I struggled with for days and some vibe-coded slop look identical in a PR.
And for that reason we can be certain that tons and tons of FOSS projects are using it. And the maintainers might not even know it.
The credit should go to the author on Stack Exchange.
Well there's a huge difference between "slop" and actually fine code.
As long as the domain space isn't super esoteric and the framework is fairly mature, most LLMs will generate results that aren't half bad, enough to get you 90% of the way there.
But then that last 10% of refining and cleaning up the code, fixing formatting issues, tweaking names, etc. is what separates the slop from the "you can't even tell an AI helped with this" code.
I have projects where probably a good 5% to 10% of the code is AI-generated, but you'd never know, because I still did a second pass over it to sanity-check it and make sure it's good.
A piece of code I struggled with for days and some vibe-coded slop look identical in a PR.
TBF that doesn't say much for your coding.
Just because people use generated slop, that doesn't mean "AI" exists, much less that it's making valuable contributions beyond summarizing/plagiarizing Stack Exchange.
Quiet you, I can’t hear the echoes in this chamber anymore
I contributed some open-source code to GitHub that I made via an LLM. I still had to test it and figure out the architecture.