To make matters worse, programmers in the study would often overlook the misinformation.
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii. It looked at 517 programming questions from Stack Overflow that were then fed to ChatGPT.
“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”
Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.
“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”
If these technologies still require large amounts of human intervention to make them usable, why are we expending so much energy on them? Why not skip burning the planet to a crisp for a half-formed technology that can't give consistent results and instead just pay people a living fucking wage to do the job in the first place?
Seriously, one of the biggest jokes in computer science is that debugging other people's code gives you worse headaches than migraines.
So now we're supposed to dump insane amounts of money and energy (as in burning fossil fuels and needing so much energy they're pushing for a nuclear resurgence) into a tool that results in... having to debug other people's code?
They've literally turned all of programming into the worst aspect of programming for barely any fucking improvement over just letting humans do it.
Why do we think it's important to burn the planet to a crisp in pursuit of this when humans can already fucking make art and code? Especially when we still need humans to fix the fucking AI's work to make it functionally usable. That's still a lot of fucking work expected of humans for a "tool" that's demanding more energy sources than currently exist.
Yeah it's wrong a lot but as a developer, damn it's useful. I use Gemini for asking questions and Copilot in my IDE personally, and it's really good at doing mundane text editing bullshit quickly and writing boilerplate, which is a massive time saver. Gemini has at least pointed me in the right direction with quite obscure issues or helped pinpoint the cause of hidden bugs many times. I treat it like an intelligent rubber duck rather than expecting it to just solve everything for me outright.
I will resort to ChatGPT for coding help every so often. I'm a fairly experienced programmer, so my questions usually tend to be somewhat complex. I've found that it's extremely useful for those problems that fall into the category of "I could solve this myself in 2 hours, or I could ask AI to solve it for me in seconds." Usually, I'll get a working solution, but almost every single time, it's not a good solution. It provides a great starting point for writing my own code.
Some of the issues I've found (speaking as a C++ developer) are: Variables not declared "const," extremely inefficient use of data structures, ignoring modern language features, ignoring parallelism, using an improper data type, etc.
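The commenter's actual C++ snippets aren't shown, so here's an analogous, hypothetical sketch in JavaScript of the "extremely inefficient use of data structures" failure mode: code that works but scans an array inside a loop instead of building a set first.

```javascript
// The kind of thing a chatbot often produces: correct output, but O(n*m),
// because b.includes() rescans the whole array for every element of a.
function findCommonSlow(a, b) {
  return a.filter((x) => b.includes(x));
}

// What an experienced developer writes: build a Set once, then do O(1) lookups.
function findCommonFast(a, b) {
  const bSet = new Set(b);
  return a.filter((x) => bSet.has(x));
}
```

Both return the same answer; only the second one survives contact with large inputs, which is exactly the "works vs. good" distinction described below.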
ChatGPT is great for generating ideas, but it's going to be a while before it can actually replace a human developer. Producing code that works isn't hard; producing code that's good requires experience.
ChatGPT and GitHub Copilot are great tools, but they're like a chainsaw: if you apply them incorrectly or become too casual and careless with them, they will kick back at you and fuck your day up.
What drives me crazy about its programming responses is how awful the HTML it suggests is. The vast majority of its answers are inaccessible. If anything, an LLM should be able to process and reconcile the correct choices for semantic HTML better than a human... but it doesn't, because it's not trained on WAI-ARIA. It's trained on random Reddit and Stack Overflow results, and it packages those up in nice-sounding words. And it's not entirely that the training data wants to be inaccessible... a lot of it is just example code without any intent to be accessible anyway. Which is the problem. LLMs don't know the context for something presented as a minimal example vs. something presented as an ideal solution, at least not without careful training. These generalized models don't spend a lot of time on tuned training for a particular task because that would counteract the "generalized" capabilities.
Sure, it's annoying if it doesn't give a fully formed solution in some Python or JS or whatever to perform a task. Sometimes it'll go way overboard (it loves to tell you to extend JS object methods with slight tweaks rather than use built-in methods, for instance, which is a really bad practice but will get the job done).
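As a hypothetical illustration of that pattern (the helper name here is made up): monkey-patching a built-in prototype to do something the language already provides.

```javascript
// What the model tends to suggest — it works, but mutating built-in prototypes
// risks collisions with libraries and with future language features.
Array.prototype.lastItem = function () {
  return this[this.length - 1];
};

// The better answer: Array.prototype.at() already exists for this.
const xs = [1, 2, 3];
const viaPatch = xs.lastItem();
const viaBuiltin = xs.at(-1);
```

Both lines produce the same value; the difference is that the second one doesn't quietly change the behavior of every array in the program.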
We already have a massive issue with inaccessible web sites and this tech is just pushing a bunch of people who may already be unaware of accessible html best practices to write even more inaccessible html, confidently.
But hey, that's what capitalism is good for, right? Making money on half-baked promises and screwing over the disabled. They aren't profitable, anyway.
If you don't know what you're doing and you give it a vague request hoping it will automatically solve your problem, then you'll just have to spend even more time debugging the code it gives you.
However, if you know exactly what needs to be done and give it a good prompt, it will reward you with well-written code, a clean implementation, and comments. Consider it an intern or junior developer.
Example of bad prompt: My code won't work [paste the code], I keep having this error [paste the error log], please help me
Example of (reasonably) good prompt: This code introduces deep recursion and can sometimes cause a "maximum stack size exceeded" error in certain cases. Please help me convert it to use a while loop instead.
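A minimal sketch of the transformation that "good prompt" asks for, using made-up code since the commenter's own isn't shown: replacing deep recursion with a while loop driven by an explicit stack.

```javascript
// Before: deeply nested input can exceed the call stack here,
// because every level of nesting adds a call frame.
function sumRecursive(node) {
  if (typeof node === "number") return node;
  return node.reduce((acc, child) => acc + sumRecursive(child), 0);
}

// After: the while-loop version trades call frames for a heap-allocated
// stack array, so nesting depth is no longer limited by the call stack.
function sumIterative(root) {
  let total = 0;
  const stack = [root];
  while (stack.length > 0) {
    const node = stack.pop();
    if (typeof node === "number") total += node;
    else stack.push(...node);
  }
  return total;
}
```

The prompt works well precisely because it names the symptom ("maximum stack size exceeded"), the cause (deep recursion), and the desired fix (a while loop) rather than just pasting an error.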
I just use it to get ideas about how to do something, or ask it to write short functions for stuff I wouldn't know that well. I tried using it to create a graphical UI for a script, but that was a constant struggle to keep it on track. It managed to create something that kind of worked, but it was like trying to push two like poles of a magnet together, and I had to constantly reset the conversation after it got "corrupted".
It's a useful tool if you don't rely on it, use it correctly, and don't trust it too much.
Sure does, but even when it's wrong it still gives a good start, meaning less syntax to write by hand.
Particularly for boring stuff.
Example: my boss is a fan of useMemo in React and isn't bothered about the overhead, so for the repetitive stuff like sorting, I just write a comment such as:
// Sort members by last name ascending
And then press return a few times. Plus, with the integration into Visual Studio Professional, it will learn from your other files, so if you have coding standards it's great for that.
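As a hypothetical sketch of the kind of completion that comment typically triggers (the `members` data and field names are invented, and the useMemo wrapper is stripped out so it runs standalone):

```javascript
// Made-up sample data for illustration.
const members = [
  { firstName: "Ada", lastName: "Lovelace" },
  { firstName: "Alan", lastName: "Turing" },
  { firstName: "Grace", lastName: "Hopper" },
];

// Sort members by last name ascending
const sorted = [...members].sort((a, b) =>
  a.lastName.localeCompare(b.lastName)
);
```

In the React version described above, the sort expression would simply sit inside `useMemo(() => ..., [members])` so it only reruns when the list changes.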
Is it perfect? No. Does it save time and allow us to actually solve complex problems? Yes.
Not a programmer by any means (haven't done any since college) but I've asked it for help in writing Jira queries or Excel mess and it's been pretty solid with that stuff.
It does, but when you input error logs it does a pretty good job of finding issues.
I tried it out first by making a game of Snake that plays itself. It took some prompting to get all the features I wanted, but in the end it worked great in no time.
After that I decided to try to make a distortion VST3 plugin similar to the ZVEX Fuzz Factory guitar pedal.
It took lots of prompting to get something that actually builds without errors, but I was quickly able to fix those once I copied the error log into the prompt.
After that I kept prompting it further, e.g. "great, now it works, but the Gate knob doesn't seem to do anything and the knobs aren't centered".
In the end I got a perfectly functional distortion plugin. I haven't compared it to the actual pedal version yet.
Not that AI will just replace us all, but it can be truly powerful once you go beyond the initial answer.
Just like answers on the Internet, you have to read the output and not just paste it blindly. I find the answers are usually useful, even if they aren't completely accurate. Figuring out the last bit is why we are paid as programmers.
This is part of the "AI will replace jobs, AI will become conscious, AI can program and automate everything" narrative. It's bullshit. It's a tool to help; it's not replacing anything. If companies push it with the same slogans they used for the cloud, we'll be in trouble, because it's fake.
Anyone else tired of these clickbait headlines and studies about LLMs that center around fundamental misunderstandings of how LLMs work, or is it just me?
"ChatGPT didn't get a single answer on my algebra exam correct!!" Well yes, because LLMs work on predictive generation, not traditional calculation, so of course they're not going to do math or anything else with non-language-based patterns properly. That's what a calculator is for.
All of these articles are like complaining that a chainsaw is an inefficient tool for driving nails into wood. Yeah; because that's not the job this tool was made for.
And it's so stupid because there are a ton of legitimate criticisms about AI and the AI rollout to be had; we don't have to look for disingenuous cases of misuse for critique.
So it is incorrect and verbose, but also comprehensive and using a well-articulated language style at the same time?
Also "study participants still preferred ChatGPT answers 35% of the time", meaning that the overwhelming majority (two-thirds) did not prefer the bot answers over the human(e), correct ones, that maybe were not phrased as confidently as they could have been.
Just say it out loud: ChatGPT is style over substance, aka Fox News. 🦊