There are a handful of use cases where I've seen generative AI be useful
Searching
Pair programming (how do I...)
Chatbot (as a novelty, though it gets boring quickly, and the only reliable way of controlling it safely for business is by adding more AI)
And probably a few more.
I spent about six months deep-diving into how it all works. I was dreading that it would take my job and was determined to learn about it. What I learned is that there are many, many serious pitfalls that seem to be more or less ignored or unknown by businesses and the people covering it.
I won't say it's as bad as blockchain; there are uses for it, but the hype is pretty damn close. Businesses think it will save them billions and let them start getting rid of developers. Tech bros are lining up to say it's going to bring on the singularity.
Eh. It's cool. I wouldn't say it's going to bring the second coming of Jesus.
I don't know how to say this in a less direct way. If this is your take, then you should probably look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.
Things LLMs can do quite well:
Generate useful search queries.
Dig through provided text to determine what it contains.
Summarize text.
These are all the building blocks for searching the internet. If you're talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
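To make the "building blocks" concrete, here's a toy sketch of the retrieval half of a RAG setup: chunk local documents, score chunks against a question, and stuff the best matches into a prompt. The word-overlap scoring and all the function names here are simplified stand-ins I made up for illustration; a real system would use embeddings and a vector store.

```python
# Toy sketch of RAG retrieval: chunk documents, score chunks against a
# question by crude word overlap, and build an augmented LLM prompt.
# A real pipeline would use embeddings instead of word overlap.

def chunk(text, size=40):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Crude relevance score: fraction of question words in the passage."""
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def retrieve(question, documents, top_k=2):
    """Return the top_k most relevant chunks across all documents."""
    chunks = [c for doc in documents for c in chunk(doc)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

def build_prompt(question, documents):
    """Assemble the augmented prompt that would actually be sent to the LLM."""
    context = "\n---\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The LLM never has to "know" your documents; the code around it just hands it the relevant snippets, which is exactly the "code to fill the gaps" point.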
There may be exceptions, but everything I've seen from AI programming is next-level trash. It's like copy-pasting from Stack Overflow without the thousand comments all around it saying DO NOT DO THIS!
When ChatGPT was first released to the general public, I wanted to try it out. I had it write a script to handle some simple parsing of network log files. I was having an intermittent issue with my home network that I couldn't figure out, so I had logged a lot of data and was hoping to find the problem. But I needed to filter out all the routine stuff that would just be background noise. I could have written the script myself in about an hour, but figured hey, maybe ChatGPT can help me bang it out in a couple of minutes.
The code it wrote looked very good at a glance, and I was impressed. But as I read it, it turned out to be total nonsense. It used variables and declared them afterwards. Halfway through the script it seemed to have switched to a completely different approach, leaving some sort of weird hybrid between the two. At one point it had just inserted pseudocode instead of actual functional code. Every attempt to get it to fix its issues just made things worse. In the end I wrote the script myself.
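For what it's worth, the hand-written version of that kind of noise filter is short. Here's roughly the shape of it; the "routine" patterns below are made-up placeholders for illustration, not my actual log format:

```python
import re

# Sketch of a log noise filter: drop lines matching known-routine
# patterns and keep the rest for a human to inspect. The patterns are
# hypothetical examples, not a real router's log format.
ROUTINE_PATTERNS = [
    re.compile(r"DHCP (renew|ack)"),  # made-up routine event
    re.compile(r"NTP sync ok"),       # made-up routine event
]

def is_noise(line):
    """True if the line matches any known-routine pattern."""
    return any(p.search(line) for p in ROUTINE_PATTERNS)

def filter_log(lines):
    """Keep only the lines worth a human's attention."""
    return [line for line in lines if not is_noise(line)]
```

Twenty-odd lines, no hybrid approaches, no pseudocode in the middle.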
I've seen examples from other people who attempted to use it, and it's just bad. It's like having a junior programmer high on weed writing your code; checking and fixing it takes more time than just writing the code yourself.
Then there's the issue of copyright: a lot of the training data wasn't licensed, and tools like GitHub Copilot want to add your data to their training set if you want to use them. That's not OK on many levels, and not even possible for people working on corporate codebases.
A lot of programmers work on big codebases, with things like best practices and coding standards. Not only does the AI not know the codebase, and thus wouldn't know how to do a lot of things in it, it also doesn't know about those best practices and coding standards. So for those kinds of situations it isn't useful.
I feel like people ask it to do some first-year student programming tutorial tasks, the result looks somewhat like what one would expect, and they conclude the thing can actually write code. In reality it really can't, and probably shouldn't even if it could.
That's what I mean, though. It helps give you different ideas, maybe a different way of looking at the problem, but I don't trust the garbage it spits out. At least half the time it makes something up or gives a solution that just won't work, and even then it will double or even triple down on it.
It's decent for generating ideas or names for fiction. I've used it for tabletop stuff a couple of times to give me NPC names or lists of personality traits, and it's sometimes good for breaking writer's block when I get stuck on some detail, like what word I want to use or what to name something. You can usually get it to give you some okay suggestions, and the volume of ideas is usually enough to spark a better one of my own. The only weird thing I've noticed is that GPT-4 (or whatever flavor Bing/Copilot is currently using) REALLY likes alliteration, to a degree that is downright corny. It's kind of weird but honestly sort of funny.
They are very useful for outlining and similar "where do I start" writing projects. They help break the dam and just get some damn words on the screen, at which point it's often easy to continue and flesh things out to a complete thought.
I've refused to indulge in using them for searching. Do they cite their sources now? All I've seen are screenshots where it appears you're just supposed to take their word for it. Curious if that's changed.
Bing Chat has become my go-to search engine for situations where I'm not looking for a specific website or other such resource, but rather some kind of information or knowledge. It does a web search in the background, puts the results into its hidden context, and then builds an answer for you based on the information it dredged up, complete with links. You can then clarify your question or ask for further details and get a back-and-forth going; it's really handy. I'd recommend giving it a shot, and I believe it works without needing an account now.
Oh, I should note: don't use it like an old-school search engine where you just type in a couple of keywords. Be conversational and give context to your search. For example: "I'm planting a garden in Wichita, Kansas. What climate zone is it in, and what sorts of flowers grow well there?" And then perhaps follow up with "Are any of those attractive to hummingbirds?" Or whatever. That helps it figure out what information to look for and how to distill what you want to know from it.
Some, like Phind or Perplexity, cite their sources. And they give you the answer you're looking for directly, without having to dig it out of a mess of "subscribe to our newsletter", "other articles that may interest you", three paragraphs of "if you read this article, you will know what you want to know", "special promotion for you",…
Oh, wow. It really isn't. Axios usually does really good reporting, but that looks more like the outline / notes for a story than something ready to publish.
I strongly dislike generative LLMs (I refuse to call them AI) for a host of reasons, but the biggest reason has less to do with the tech and more to do with the people / upper brass who are trying to replace human jobs with them and expecting it to just work (while salivating at the thought of pocketing the salary of the displaced human employee).
I don't think the article really calls that out explicitly, but they are saying it's not living up to the hype. As far as progress goes, I suppose that's a good first step.
For real, it almost felt like an LLM-written article, the way it basically said nothing.
Also, the way it puts everything in bullet points is just jarring to read.
Frankly, corporations seem to have no idea how to use LLMs. They want them to be public-facing company representatives, which is exactly what LLMs can't do well. Where they excel is as an assistant.
Want to figure out what scale you're playing a song in? It's great at that. I've had it give me chords to go with scales too, or even asked for some scale options based on the feeling of the sound.
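The nice thing about the scale question is that it's mechanical enough to double-check the LLM's answer yourself. A quick sketch of what "what scale am I in" boils down to (major scales only, for brevity; minor and modal scales work the same way):

```python
# Check a set of played notes against every major scale to see which
# keys they fit. Only major scales are covered here for brevity.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def major_scale(root):
    """The seven notes of the major scale starting on `root`."""
    start = NOTES.index(root)
    return {NOTES[(start + step) % 12] for step in MAJOR_STEPS}

def matching_scales(played):
    """Every major scale that contains all of the played notes."""
    played = set(played)
    return [root for root in NOTES if played <= major_scale(root)]
```

So if the LLM tells you a riff on C, D, E, G, and A is "in C major", you can verify that C, F, and G major all fit, which is exactly the kind of "some scale options" answer it gives.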
It's also great for looking up terms in other languages. I've got some ranged weapon abilities in my tabletop RPG. I knew I wanted one of them to be called pistolero, but I didn't know the terms fusilero or escopetero and might not have found them on my own; ChatGPT came up with them right away.
I've also learned that it's great at looking up game guides and providing hints that aren't spoilers without giving the puzzle away. I had it generate results for the Lady's Maze in Planescape: Torment and the Water Temple in Ocarina of Time. Amazing hints without giving it away.
If you have your own brain and want to off-load some simple queries, it's great. If you want to use it in place of a human brain to talk to customers, you're barking up the wrong gpt.
That music example is exactly how I've used them; it really is spot on. Key, tempo, scale, overlapping scales that could be used, plus factoids included. It really can be very helpful.
AI doesn't have to be good, it only needs to be good enough. Even if it's just barely functional, if it's cheaper than paying a human, then it will be used by capitalists.