I've found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
An LLM (large language model, a.k.a. an AI whose output is natural language text based on a natural language text prompt) is useful for tasks where you're okay with 90% accuracy generated at 10% of the cost and 1,000% faster, and where the output will solely be used in-house by yourself and not served to other people. For example, if your goal is to generate an abstract for a paper you've written, AI might be the way to go, since it turns a writing problem into a proofreading problem.
The Google Search LLM, which summarises search results, is good enough for most purposes. I wouldn't rely on it for in-depth research, but like I said, it's 90% accurate and 1,000% faster. You just have to be mindful of this limitation.
I don't personally like interacting with customer service LLMs because they can only serve up help articles from the company's help pages, but they are still remarkably good at that task. I don't need help pages because the reason I'm contacting customer service to begin with is because I couldn't find the solution using the help pages. It doesn't help me, but it will no doubt help plenty of other people whose first instinct is not to read the f***ing manual. Of course, I'm not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go "off the script" and hallucinate solutions that obviously don't work, or pretend that they've scheduled a callback with a human when you request it, but they actually haven't. This is a really common problem with any sort of LLM.
At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.
Consumers don't like being tricked, and especially when it comes to creative content, I think that most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.
But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren't good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.
Not that I can’t just, you know, FIND porn, but there’s something really fun about trying to generate an image just right, tweaking settings and models until you get the result you’re after.
There's a handful of actual good use-cases. For example, Spotify has a new playlist generator that's actually pretty good. You give it a bunch of terms and it creates a playlist of songs from those terms. It's just crunching a bunch of data to analyze similarities with words. That's what it's made for.
It's not intelligence. It's a data crunching tool to find correlations. Anyone treating it like intelligence will create nothing more than garbage.
Demystifying obscure or non-existent documentation
Basic error-checking of my configs/code: input the error, ask what the cause is, double-check its work. In hour 6 of late-night homelab fixing, this can save my life
I use it to create concepts of art I later commission. Most recently I used it to concept an entirely new avatar and I'm having a pro make it in their style for pay
DnD/Cyberpunk character art generation, this person does not exist website basically
duplicate checking / spot-the-differences, like Pastebin's "differences" feature, because the MMO I play released prelim as well as full patch notes and I like to read the differences
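For anyone wanting to do this without a website, Python's standard-library `difflib` produces the same kind of diff. The patch-note text below is invented for illustration:

```python
import difflib

# Hypothetical preliminary and final patch notes for the same update
prelim = """Patch 1.2 preliminary notes
- Fireball damage reduced to 40
- New dungeon: Frost Keep""".splitlines()

final = """Patch 1.2 final notes
- Fireball damage reduced to 35
- New dungeon: Frost Keep
- Fixed crash on login""".splitlines()

# unified_diff emits only changed lines, prefixed with "-" (removed) and "+" (added)
for line in difflib.unified_diff(prelim, final, lineterm=""):
    print(line)
```

Unchanged lines (like the Frost Keep entry) get filtered to context, so only the real differences stand out.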
ChatGPT is incredibly good at helping you with random programming questions; you can even dump a full-ass error text and it'll tell you exactly what's wrong.
This afternoon I used ChatGPT to figure out what error was preventing me from updating my ESXi server. I just copy-pasted the entire error text, which was one entire terminal window's worth of shit, and it knew that there was an issue accessing the zip. It wasn't smart enough to figure out "hey dumbass, give it a full file path, not a relative one," but eventually I got there. Earlier this morning I used it to write a cross apply instead of using multiple sub-select statements. It forgot to update the order by, but that was a simple fix. I use it for all sorts of other things we do at work too. ChatGPT won't replace any programmers, but it will help them be more productive.
To me it's glorified autocomplete. I see LLMs as a potential way of drastically lowering the barrier of entry to coding. But I'm at a skill level where coercing a chatbot into writing code is a hindrance. What I need is good documentation and good IDE static analysis.
I'm still waiting on a good, IDE-integrated, local model that is capable of more than autocompleting a line of code. I want it to generate the boilerplate parts of code and get out of my way while I solve problems.
I use LLMs for multiple things, and they're useful for things that are easy to validate. E.g. when you're trying to find or learn about something, but don't know the right terminology or keywords to put into a search engine. I also use them for some coding tasks. They work OK for getting customized usage examples for libraries, languages, and frameworks you may not be familiar with (but will sometimes use old APIs or just hallucinate APIs that don't exist), and for "translation" tasks, such as converting a MySQL query to a Postgres query. I tried out GitHub Copilot for a while, but found that it would sometimes introduce subtle bugs that I would initially overlook, so I don't use it anymore.

I've had to create some graphics, and am not at all an artist, but was able to use Automatic1111, ControlNet, Stable Diffusion, and GIMP to get usable results (an artist would obviously be much better though). RemBG works pretty well for isolating the subject of an image and removing the background too. Image upsampling, DLSS, DTS Neural:X, plant-identification apps, the blind-spot warnings in my car, image stabilization, and stuff like that are pretty useful too.
To copy my own comment from another similar thread:
I’m an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full time job, a significant other, and several neglected hobbies, I don’t have time to learn to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for dalle and have as many images as I want (sure, none of them will be quite what I want but they’ll all be at least good). I could hire a programmer, or I could have chatgpt whip up a script for me since I’m already paying for it anyway since I want access to dalle for my art stuff. Since I have chatgpt anyway, I might as well use it to help flesh out my lore for the book I’ll never write. I haven’t found a good solution for music.
I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don’t have the skills necessary to do it myself, and I don’t have the money to convince anyone else to help me do it. generative AI is the only way I’m going to be able to make this work. Sure, I wish that the creators of the content that were stolen from to train the ai’s were fairly compensated. I’d be ok with my chatgpt subscription cost going up a few dollars if that meant real living artists got paid, I’m poor but I’m not broke.
These are the opinions of an idiot with no marketable skills.
It's nice to generate images of settings for my d&d campaign.
It's nice that I can replace Google/Siri with something I run and control locally, for controlling my home.
I have a local instance of Stable Diffusion that I use to make art for MtG proxies. Prior to AI my art was limited to geometric designs and edits of existing pieces. Integrating AI into my work flow has expanded my abilities greatly, and my art experience means that I can do more with it than just prompt engineering.
I thought it was pretty fun to play around with making limericks and rap battles with friends, but I haven't found a particularly useful use case for LLMs.
Personally I use it when I can't easily find an answer online. I still keep some skepticism about the answers given until I find other sources to corroborate, but in a pinch it works well.
Boilerplate code (the stuff you usually have to copy anyway from GitHub) and summarising long boring articles. That's the use case for me. Other than that I agree - and having done AI service agent coding myself for fun I can seriously say that I would not trust it to run a business service without a human in the loop
I use ChatGPT to ask programming questions; it's not always correct, but neither is Stack Overflow nowadays. At least it will point me in the right direction.
I needed instructions on how to downgrade the firmware of my UniFi UDR because they pushed a botched update. I searched for a while and could only find vague references to SSH and upgrading.
They had a “Unifi GPT” bot so I figured what the hell. I asked “how to downgrade udr firmware to stable”. It gave me effective step by step instructions on how to enable SSH, SSH in and what commands to run to do so. Worked like a charm.
So yeah, I think the problem is we’re in the hype era of LLMs. They’re being over applied at lots of things they aren’t good at. But it’s extremism in the other direction to say there aren’t functions they can do well.
They are at least better than your average canned chat/search bot or ill informed CSR at finding an answer to your question. I think they can help with lots of frustrating or opaque computer related tasks, or at least point you in the right direction or surface something you might not be able to find easily otherwise.
They just aren’t going to write programs for you or do your office job for you like execs think they will.
There are plenty of uses for it. There are also plenty of bad implementations that don't use it in a way that helps anyone.
We're going through an overhyped period currently but we'll see actual uses in a few years once the dust settles. About 10 years ago, a similar thing happened with AI vision and now everyone has filters they can use on cameras and face detection. We'll reach another plateau until the next tech hype comes about.
A friend's wife "makes" and sells AI slop prints. He had to make a twitter account so he could help her deal with the "harassment". Not sure exactly what she's dealing with, but my friend and I have slightly different ideas of what harassment is and I'm not interested in hearing more about the situation. The prints I've seen look like generic fantasy novel art that you'd see at the checkout line of a grocery store.
It looks impressive on the surface but if you approach it with any genuine scrutiny it falls apart and you can see that it doesn't know how to draw for shit.
I find it helpful to chat about a topic sometimes, as long as it's not based on pure facts. You can talk about your feelings with it.
I got high and put in prompts to see what insane videos it would make. That was fun. I even made some YouTube videos from it. I also saw some cool & spooky short videos that are basically "liminal" since it's such an inhuman construction.
But generally, no. It's making the internet worse. And as a customer I definitely never want to deal with an AI instead of a human.
There are a few uses where it genuinely speeds up editing/insertion into contracts and warns you of red flags/riders that might open you up to unintended liability. BUT the software is $$$$ and you generally need a law degree before you even need a tool like that. For those who are constantly up to their chins in legal shit, it can be helpful. I'm not, thankfully.
I made an AI song for my mom's birthday on Suno and she loved it so much she cried. So that was nice.
I don't like how people are using it to just replace artists. It would be fine if it were just to automate some things, like, "AI can tell you when ___ needs to be replaced," but it feels more like it's being used as a stick against workers. Like, "Keep acting up and I'll replace you with dun dun dun AI!"
AI is used extensively in science to sift through gigantic data sets. Mechanical turk programs like Galaxy Zoo are used to train the algorithm. And scientists can use it to look at everything in more detail.
Apart from that AI is just plain fun to play around with. And with the rapid advancements it will probably keep getting more fun.
Personally I hope to one day have an easy and quick way to sort all the images I have taken over the years. I probably only need a GPU in my server for that one.
Some of my friends enjoy fucking around with those character AIs. I never got the appeal, even as an RP nerd, RPing is a social activity to me, and computers aren't people
I have seen funny memes be made with Image Generators -- And tbqh as long as you're not pretending that being an AI prompter makes you an "artist", by all means go crazy with generating AI images for your furry porn/DnD campaign/whatever
https://goblin.tools/ is a cool little thing for people as intensely autistic as I am, and it runs off AI stuff.
Voice Recognition/Dictation technology powered by AI is a lot better than its pre-AI sibling. I've been giving it a shot lately. It helps my arthritis-ridden hands.
If you mean anything that utilizes machine learning ("AI" is a buzzword), then "AI" technology has been used to help scientists and doctors do their jobs better since the mid 90s
I use SillyTavern for character conversations, pretty fun. I have SD Forge for Pony Diffusion, and use Suno and Udio. Almost all of that goes to DnD, the rest to personal recreation.
Google and openai all fail to meet my use cases and if I cuss they get mad so fuck em.
I never use those for making money or any other personal progression, that would be wrong.
Garbage in; garbage out. Using AI tools is a skillset. I've had great use with LLMs and generative AI both, you just have to use the tools to their strengths.
LLMs are language models. People run into issues when they try to use them for things not language related. Conversely, it's wonderful for other tasks. I use it to tone check things I'm unsure about. Or feed it ideas and let it run with them in ways I don't think to. It doesn't come up with too much groundbreaking or new on its own, but I think of it as kinda a "shuffle" button, taking what I have already largely put together, and messing around with it til it becomes something new.
Generative AI isn't going to make you the next Mona Lisa, but it can make some pretty good art. It, once again, requires a human to work with it, though. You can't just tell it to spit out an image and expect 100% quality, 100% of the time. Instead, it's useful to get a basic idea of what you want in place, then take it to a proper photo editor, or inpainting, or some other kind of post-processing to refine it. I have some degree of aphantasia - I have a hard time forming and holding detailed mental images. This kind of AI approaches art in a way that finally kinda makes sense for my brain, so it's frustrating seeing it shot down by people who don't actually understand it.
I think no one likes any new fad that's shoved down their throats. AI doesn't belong in everything. We already have a million chocolate chip cookie recipes, and chatgpt doesn't have taste buds. Stop using this stuff for tasks it wasn't meant for (unless it's a novelty "because we could" kind of way) and it becomes a lot more palatable.
It helps make simple code when I'm feeling lazy at work and need to get something out the door.
In personal life, I run a local llm server with SillyTavern, and get into some kinky shit that often makes for an intense masturbation session. Sorry not sorry.
It's done a lot of bad/annoying things but I'd be lying if I said it hasn't enabled me to completely sidestep the enshittification of Google. You have to be smart about how you use it but at least you don't have to wade through all the SEO slop to find what you want.
And it's good for weird/niche questions. I used it the other day to find a list of meme songs that have very few/simple instruments so that I could find midi files for them that would translate well when going through Rust's in-game instruments. I seriously doubt I'd find a list like that on Google, even without the enshittification.
I use perplexity.ai more than Google now. I still don't love it and it's more of a testament to how far Google has fallen than the usefulness of AI, but I do find myself using it to get a start on basic searches. It is, dare I say, good at calorie counting and language learning things. It helps calculate calorie-to-gram ratios and the math is usually correct. It also helps me with German, since it's good at finding patterns and how German people typically say what I am trying to say, instead of just running it through a translator, which may or may not have the correct context.
I do miss the days where I could ask AI to talk like Obama while he’s taking a shit during an earthquake. ChatGPT would let you go off the rails when it first came out. That was a lot of fun and I laughed pretty hard at the stupid scenarios I could come up with. I’m probably the reason the guardrails got added.
I find ChatGPT useful in getting my server to work (since I'm pretty new with Linux)
Other than that, I check in on how local image models are doing around once every couple of months. I would say you can achieve some cool stuff with it, but not really any unusual stuff.
Generative AI has been an absolute game changer in my retouching work. Slightly worrying that it'll put me out of work sometime in the future, but for now it's saving me loads of time, handling the boring stuff so I can concentrate on the stuff it can't do.
I was really psyched about AI when it first hit my news feed. Now I'm less than impressed. Most generalist AI platforms get things wrong constantly. Having an LLM trained on specific things, like math or science or maybe law, I could see being useful.
We're at the "AI everything" phase instead of the "AI what makes sense" phase.
It helps when writing a lot of boilerplate or if I’m being lazy and want to solve something. However I do not need AI in everything I use. It seems everyone wants AI in their product whilst it’s doing the same thing everyone else is doing.
Regardless of how useful some might find it, there isn’t a single use case that justifies the environmental cost (not to mention the societal cost). None. Stop using it. You were able to survive and function without it 2 years ago, and you still can.
It's useful for programming from time to time, but not for asking open questions.
I’ve found having to double check is too unnerving and letting it just provide the links instantly is more my way of working.
Other than that it sometimes sketches things out when I have no idea what to do, so all in all it’s a glorified search engine for me.
Other than work I despise writing emails and reports and it fluffs them up.
I usually have to edit them afterwards so they don't look AI-made, but it adds some "substance".
It's great for parsing through the enshittified journalism. You know the classic recipe blog trope? If you ask chatgpt for a recipe, it just gives you one. Whether it's good or not is a different story, but chatgpt is leagues better at getting to the info you want than search has been for the last decade.
I ask it a lot of technical questions that are broad and non-specific. It helps to quickly get a gauge on what is the correct way to implement something.
Porn has been ruined by AI too. Jokes aside it's really a boner killer.
Idk who faps to that whack shit but it's trying so hard to make everything look baby silk smooth with unrealistic bodies most likely stolen from hentai.
I work on a 20+ year knowledge base for a big company that has had no real content management governance for pretty much that whole time.
We knew there was duplicate content in that database, but we're talking about thousands of articles, with several more added daily.
With such a small team, identifying duplicate/redundant content was just an ad-hoc thing that could never be tackled as a whole without a huge amount of resources.
AI was able to comb through everything and find hundreds of articles with duplicate/redundant content within a few hours. Now we have a list of articles we can work through and clean up.
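The comment doesn't describe how the AI did this, but as a rough illustration of the idea, here's a minimal non-AI baseline for flagging near-duplicate articles using Python's `difflib`; the article IDs, text, and 0.8 threshold are all invented for the sketch:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical snippets standing in for knowledge-base entries
articles = {
    "KB-101": "How to reset your password from the login screen.",
    "KB-102": "Steps to reset your password from the login screen.",
    "KB-203": "Configuring SSO with an external identity provider.",
}

def near_duplicates(docs, threshold=0.8):
    """Return (id_a, id_b, ratio) for article pairs whose text similarity
    exceeds the threshold."""
    pairs = []
    for (id_a, text_a), (id_b, text_b) in combinations(docs.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            pairs.append((id_a, id_b, round(ratio, 2)))
    return pairs

print(near_duplicates(articles))
```

An LLM or embedding model catches paraphrased duplicates that raw string similarity misses, which is presumably why it worked so much better on a 20-year-old database, but the pairwise-compare-and-rank structure is the same.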
I've enjoyed some of the absurd things it can come up with. Surreal videos and memes (every president as a bodybuilder wrestler). However, it's never been useful, and the cost isn't worth the benefit, to me.
I've been finding it useful for altering recipes to take my wife's allergies into account. I don't use it for much else. And certainly not for anything important.
For me, throwing a graph in and telling it to create a table from it and stuff like that is really super helpful, since I often have to do this, and by hand it's a very tedious job. Sorting and cleaning tables and translating stuff is super handy and I use it quite often. But other than that, I don't care.
When it just came out I had AI write fanfiction that no sane person would write, and other silly things. I liked that. That and trail cam photos of the Duolingo mascot.
I think my complaints are more with how capitalism treats new technology, though-- and not just the lost jobs and the toll on the climate. Greed and competition are making it worse and worse as a technology; AI itself has been enshittified within a year's span. There are use cases where it can do a world of good, though, just like everything else bad people ruin.
I used to spend one month a year where all I did was write performance reports on the people I supervise. Now I put the facts in, let AI write the first draft, do some editing, and I'm done in a week.
I have horrible spelling and sometimes write in an archaic register. I also often write in a way that sounds rather aggressive, which is not my intention most of the time. AI helps me rewrite that shit and makes me more sensitive to tone in written text.
Of course just like normal spell check and auto completion feature one still needs to read it a final time.
My primary use of AI is for programming and debugging. It's a great way to get boilerplate code blocks, bootstrap scripts, one-liner shell commands, creating regular expressions etc. More often than not, I've also learned new things because it ends up using something new that I didn't know about, or approaches I didn't know were possible.
I also find it's a good tool to learn about new things or topics. It's very flexible in giving you a high level summary, and then digging deeper into the specifics of something that might interest you. Summarizing articles, and long posts is also helpful.
Of course, it's not always accurate, and it doesn't always work. But for me, it works more often than not and I find that valuable.
Like every technology, it will follow the Gartner Hype Cycle. We are definitely in the times of "everything-AI" or AI for everything - but I'm sure things will calm down and people will find it valuable for a number of specific things.
I use it for coding (rarely pure copy paste), explaining code, use/examples, finding tools to use.
Better translation than Google translate for Japanese.
Asking for things that search engines only gives generic results for.
So I'm really bad about remembering to add comments to my code, but since I started using GitHub's AI code assistant thing in VS Code, it will make contextual suggestions when you comment out a line. I've even gone back to stuff I made ages ago and used it to figure out what the hell I was thinking when I wrote it back then 😆
It's actually really helpful.
I feel like once the tech adoption curve settles down, it will be most useful in cases like that: contextual analysis
Even before AI, corporations have been following a strategy of understaffing with the idea that software will make up for it, and it hasn't. The amount of work I now have to do for almost anything related to the private sector (as their customer, not an employee) is beyond the pale.
My corp has been very skeptical and suspicious. So far the only allowed ai is to summarize slack. For channels that I want to keep in the loop but not waste time monitoring, it creates a nice summary of recent traffic.
I was trying to help one guy who used an online ai despite it being against policy. However he was just using it as a search engine to find a code solution and it took way too long to give him the wrong answer. A search engine would have been faster but he’d have to use his own judgement to identify the wrong answer. Pretty arrogant guy despite not knowing what he was doing, so I didn’t fight it when he insisted he was going to follow what it told him
I'll use it to write scripts for repetitive tasks at my job. I never learned to code, so it's actually super helpful in that sense, but that isn't really what OP is asking, I don't think. I use AI by going to their platform and initiating the interaction. I disable every form of AI I am capable of disabling/uninstalling. Every integrated form of AI has been obnoxious.
I use chatgpt to make questions for me when my teachers refuse to give me anything to practice on before final exams. Even then, I'd take literally anything they'd give over whatever AI can generate
Tbh it’s made a pretty significant improvement in my life as a software developer. Yeah, it makes shit up/generates garbage code sometimes, but if you know how to read code, debug, and program in general, it really saves a lot of grunt work and tedious language barriers. It can also be a solid rubber duck for debugging.
Basically any time I just need a little script to take x input and give me y output, or a regex, I’ll have ChatGPT write it for me.
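As an illustration of the kind of throwaway "x input, y output" script meant here, a minimal Python sketch (the log text and date format are made up):

```python
import re

# Typical one-off task: pull every ISO-style date out of a messy log dump
log = """
2024-01-15 job started
retrying... 2024-01-16 job failed
2024-01-17 job succeeded
"""

dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)
print(dates)  # ['2024-01-15', '2024-01-16', '2024-01-17']
```

Trivial to write by hand if you know regex, but exactly the kind of thing that's faster to describe in a sentence than to look up the syntax for.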
I love chatgpt, and am dumbfounded at all the AI hate on lemmy. I use it for work. It's not perfect, but helps immensely with snippets of code, as well as learning STEM concepts. Sometimes I've already written some code that I remember vaguely, but it was a long time ago and I need to do it again. The time it would take to either go find my old code, or just research it completely again, is WAY longer than just asking chatgpt. It's extremely helpful, and definitely faster for what I'd already have to do.
I guess it depends on what you use it for ¯\_(ツ)_/¯.
I hope it continues to improve. I hope we get full open source. If I could "teach" it to do certain tasks someday, that would be friggin awesome.
I think it’s a fun toy that is being misused and forced into a lot of things it isn’t ready for.
I’m doing a lot with AI but it’s pretty much slop. I use self hosted stable diffusion, Ollama, and whisper for a discord bot, code help, writing assistance, and I pay elevenlabs for TTS so I can talk to it. It’s been pretty useful. It’s all running on an old computer with a 3060. Voice chat is a little slow and has its own problems but it’s all been fun to learn.
That's a bit of a loaded question. By AI I assume you're referring to GenAI/LLMs rather than AI broadly.
I use it to correct my spelling on longer posts and I find that it improves the clarity and helps my point come across better.
I use Dall-E to create pictures I never could have before, because despite my interest in drawing, I just never bothered to learn it myself. GenAI enables me to skip the learning and go straight to creating.
I like that it can simulate famous people and allows me to ask 'them' questions that I never could in real life. For example, yesterday I spent a good while chatting with 'Sam Harris' about the morality of lying and the edge cases where it might be justified. I find discussions like this genuinely enjoyable and insightful.
I also like using the voice mode where I can just talk with it. As a non-native English speaker, I find it to be good practice to help me improve my pronunciation.
I'm surprisingly on board with AI art. It allows you to create whatever you want without having the technical ability to do so (for example, if you want a sick wallpaper).
It significantly lowers the floor as far as creating anything art-related goes.
But for very specific purposes it's worth considering as an option.
Text-to-image generation has been worth it to get a jumping-off point for a sketch, or to get a rough portrait for a D&D character.
Regular old ChatGPT has been good on a couple of occasions for humor (again D&D related; I asked it for a "help wanted" ad in the style of newspaper personals and the result was hilariously campy).
In terms of actual problem solving... There have been a couple instances where, when Google or Stack Overflow haven't helped, I've asked it for troubleshooting ideas as a last resort. It did manage to pinpoint the issue once, but usually it just ends up that one of the topics or strategies it floats prove to be useful after further investigation. I would never trust anything factual without verifying, or copy/paste code from it directly though.
Internet search, e.g. Google, is now functionally almost completely useless. I use ChatGPT basically as a Google replacement.
I will still search for stuff - I use Kagi - but give up after half a dozen results if none of them are relevant and go to ChatGPT instead. Often, ChatGPT is more helpful. But sometimes it just makes a bunch of nonsense up.
ChatGPT is great for when you need to find something where you kind of know at least the vague shape of what you’re expecting and you have enough expertise to filter out any of the lies it makes up.
If used in the specific niche use cases it's trained for, as long as it's used as a tool and not a final product. For example, using AI to generate background elements of a complete image. The AI elements aren't the focus, and should be things that shouldn't matter, but it might be better to use an AI element rather than doing a bare-minimum element by hand. This might be something like a blurred-out environment background behind a piece of hand-drawn character art - otherwise it might just be a gradient or solid colour because it isn't important, but having something low-quality is better than having effectively nothing.
In a similar case, for multidisciplinary projects where the artists can't realistically work proficiently in every field required, AI assets may be good enough to meet the minimum requirements to at least complete the project. For example, I do a lot of game modding - I'm proficient with programming, game/level design, and 3D modeling, but not good enough to make dozens of textures and sounds that are up to snuff. I might be able to dedicate time to make a couple of the most key resources myself or hire someone, but seeing as this is a non-commercial, non-monetized project, I can't buy resources regularly. AI can be a good enough solution to get the project out the door.
In the same way, LLM tools can be good if used as a way to "extend" existing works. It's a generally bad idea to rely entirely on them, but if you use them to polish a sentence you wrote, come up with phrasing ideas, or write your long if-chain for you, then it's a way of improving or speeding up your work.
Basically, AI tools as they are should be seen by those in or adjacent to the related profession as one more tool in the toolbox rather than a way to replace the human.
There's someone I sometimes encounter in a Discord I'm in who makes a hobby of doing stuff with these models. From what I've seen, they do more than just giving a prompt and leaving it at that, at least partly because it doesn't generally give them something they're happy with initially, so they end up asking the thing to edit specific bits in different ways over and over until it does. I don't really understand what exactly this entails, as what they seem to most like making it do is code "shaders" that create unrecognizable abstract patterns, but they spend a lot of time talking at length about the technical parameters of various models and what they like and don't like about them, so I assume the guy must find something enjoyable in it all. That said, using it as a sort of strange toy isn't really the most useful use case.
I built a spreadsheet for a client that sorts their email into threads and then segments various conversations into different views based on shipment numbers mentioned in the conversations. But it's a lot of work to get something like this set up. I'm thinking of going into consulting/implementation.
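The segmentation part of a setup like that can be sketched in a few lines. This is a guess at the shape of the task, not the actual implementation, and the shipment-number format here is an assumption (a real client would have their own numbering scheme):

```python
import re
from collections import defaultdict

# Hypothetical shipment-number format, e.g. "SHP-2024-00123".
SHIPMENT_RE = re.compile(r"\bSHP-\d{4}-\d{5}\b")

def segment_by_shipment(messages):
    """Group email threads by the shipment numbers their bodies mention.

    messages: list of (thread_id, body) tuples. A message that mentions
    several shipment numbers lands in every matching bucket.
    """
    buckets = defaultdict(list)
    for thread_id, body in messages:
        for number in SHIPMENT_RE.findall(body):
            buckets[number].append(thread_id)
    return dict(buckets)
```

The "lot of work" part is everything around this: pulling mail out of the mailbox, handling threads, and dealing with all the ways people mangle a shipment number in free text.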
To me, AI is useless. It's not intelligent; it's just a blender that blends up tons of results into one hot steaming mug of "knowledge". If you toss a nugget of shit into a smoothie while it's being blended, it's gonna taste like shit. Considering the amount of misinformation on the internet, everything AI spits out is shit.
It is purely derivative, devoid of any true originality, with a vague facade of intelligence in an attempt to bypass existing copyright law.
I went for a routine dental cleaning today and my dentist integrated a specialized AI tool to help identify cavities and estimate the progress of decay. Comparing my x-rays between the raw image and the overlay from the AI, we saw a total of 5 cavities. Without the AI, my dentist would have wanted to fill all of them. With the AI, it was narrowed down to 2 that need attention, and the others are early enough that they can be maintained.
I'm all for these types of specialized AIs, and hope to see even further advances in the future.
I use ChatGPT and Copilot as search engines, particularly for programming concepts or technical documentation. The way I figure, since these AI companies are scraping the internet to train these models, it's incredibly likely that they've picked up some bit of information that Google and DDG won't surface because of SEO.
I usually keep abreast of the scene, so I'll give a lot of stuff a try. Entertainment-wise, making music and images or playing D&D with it is fun, but the novelty tends to wear off. Image gen can be useful for personal projects.
Work-wise, I mostly use it to do deep dives into things like datasheets and libraries, or to handle the boring coding bits. I verify the info and use it in conjunction with regular research, but it makes things a lot easier.
Oh, also TTS is fun. The actor who played Dumbledore reads me the news and Emma Watson tells me what exercise is next during my workout, although some might frown on using their voices without consent.
The only things I use and I know they have AI are Spotify recommendations, live captions on videos and DLSS. I don't find generative AI to be interesting, but there's nothing wrong with machine learning itself imo if it's used for things that have purpose.
I used it a decent amount at my last job to write test reports that had a lot of similar text with minor changes.
I also use it for dnd to help me quickly make the outlines of side characters & flesh out my world.
Like any new tool it is being abused to hurt the working class by the wealthy. It does have useful aspects if used properly but it's pretty overshadowed by all the awful uses imo
The applications of what you call AI are absolutely limitless. But to be clear, what you're calling "AI" isn't AI in terms of what you might want it to be; what you're referring to are large language models, or LLMs, which aren't AI. Not yet.
It's short sighted statements like this that really get my blood boiling.
If humanity actually achieves artificial intelligence it'll be the equivalent of the printing press or agriculture. It'll be like inventing the superconductor or micro transistors all over again. Our world will completely change for the better.
If your interactions with these LLMs have been negative, I can only assume that you have a strong bias against this type of technology and have simply not used it in a way that's applicable to you.
I personally use LLMs pretty much daily and they have been nothing but an excellent tool.
For the most part it's not useful, at least not the way people use it most of the time.
It's an engine for producing text that's most like the text it's seen before, or for telling you what text it's seen before is most like the text you just gave it.
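A toy sketch of that idea: a bigram counter that just returns whichever word most often followed a given word in the text it has seen. Real LLMs are enormously more sophisticated, but the flavour of the objective, predicting the most familiar continuation, is similar:

```python
from collections import Counter, defaultdict

def most_likely_continuation(corpus, word):
    """Toy illustration of 'text most like the text it's seen before'.

    Counts which word most often follows `word` in the corpus and
    returns it, or None if `word` was never seen as a predecessor.
    """
    follows = defaultdict(Counter)
    tokens = corpus.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    if not follows[word]:
        return None
    return follows[word].most_common(1)[0][0]
```

Scale the statistics up by many orders of magnitude and you get something that can also "skim the Wikipedia article" convincingly, without that making it a source of truth.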
When it comes to having a conversation, it can passably engage in small talk, or present itself as having just skimmed the Wikipedia article on some topic.
This is kinda nifty and I've actually recently found it useful for giving me literally any insignificant mental stimulation to keep me awake while feeding a baby in the middle of the night.
Using it to replace thinking or interaction gives you a substandard result.
Using it as a language interface to something else can give better results.
I've seen it used as an interface to a set of data collection interfaces, where all it needed to know how to do was tell the user what things they could ask about, and then convert their responses into inputs for the API, and show them the resulting chart. Since it wasn't doing anything to actually interpret the data, it never came across as "wrong".
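A sketch of why that setup never comes across as "wrong": the model's reply is only accepted if it maps onto a fixed allowlist of API parameters, so interpretation stays out of its hands. The metric/range fields here are hypothetical, invented for illustration:

```python
import json

# Hypothetical charting API: the only inputs the backend accepts.
ALLOWED_METRICS = {"temperature", "humidity", "pressure"}
ALLOWED_RANGES = {"24h", "7d", "30d"}

def parse_model_reply(reply):
    """Turn an LLM's JSON reply into safe inputs for the charting API.

    The model is only trusted to translate the user's words into this
    structure; anything outside the allowlist is rejected, so the
    interface can't hallucinate parameters the API doesn't have.
    """
    data = json.loads(reply)
    metric, rng = data.get("metric"), data.get("range")
    if metric not in ALLOWED_METRICS or rng not in ALLOWED_RANGES:
        raise ValueError(f"unsupported request: {data}")
    return {"metric": metric, "range": rng}
```

The chart itself comes from real data through the real API, which is why the LLM layer can be fuzzy without the result being false.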
Playing with it on my own computer, hosting it locally and running it offline, has been pretty cool. I find it really impressive when it's open source and community driven. I also think there are a lot of useful applications for problems that aren't solvable with traditional programming.
However a lot of the pushed corporate AI feels not that useful, and there's something about it that really rubs me the wrong way.
I have found ChatGPT to be better than Google for random questions I have and for asking general advice on a whole bunch of things, though I do still go to other sources as well. I also use it to extrapolate data, come up with scheduling for work (I organise some volunteer shifts), and generate lots of Excel formulae.
I have a custom agent that I ask questions to; it goes and finds sources, then answers my question. It can do math by writing Python code and using the result. I use it almost exclusively instead of regular search. AI makes coding far quicker: giving examples, remembering stuff I can't remember how to use, writing basic functions, etc.
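The "math by writing Python code" step can be sketched as a tiny harness that runs a model-generated snippet and captures what it prints. This is a toy: a real agent needs proper isolation (a subprocess, timeouts, resource limits), and the allowlist here is an illustrative assumption:

```python
import ast
import contextlib
import io

def run_generated_math(code):
    """Execute a small model-generated snippet and return its printed output.

    Parses the code first so syntax errors fail fast, then runs it with
    builtins restricted to a small allowlist. Toy sandbox only; do not
    rely on this for untrusted input in production.
    """
    ast.parse(code)  # raises SyntaxError before anything executes
    allowed = {"print": print, "abs": abs, "round": round,
               "range": range, "sum": sum, "min": min, "max": max}
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {"__builtins__": allowed})
    return buf.getvalue().strip()
```

The point of the pattern is that the arithmetic is done by the Python interpreter, not by the model, which is exactly where LLMs on their own tend to fall apart.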
Writing emails.
Making profile pictures.
I used to enjoy the tldr bot on lemmy till some fascist decided to kill it instead of just letting people block it.
I've never had AI code run straight off the bat - generally because if I've resorted to asking an AI, I've already spent an hour googling - but it often gives me a starting point to narrow my search.
There's been a couple of times it's been useful outside of coding/config - for example, finding the name of some legal concepts can be fairly hard with traditional search, if you don't know the surrounding terminology.
I’ve used it to fill in the gaps for a DND storyline. I’ll give it a prompt and a couple of story arcs, then I’ll tell it to write in a certain style, say a cowardly king or dogmatic paladin. From there it will spit out a story. If I don’t like certain effects, I’ll tell it to rewrite a section with some other detail in mind. It does a fantastic job and saves me some of the guesswork.
I have had fun with ChatGPT, but in terms of integrating it into my workflow: no. It just gives me too much garbage on a regular basis for me not to have to check and recheck anything it produces, so it's more efficient to do it myself.
And as entertainment, it's more expensive than e.g. a game, over time.
As a college student, the best experience I've had is just generating stories that you can easily tell are AI-written by their use of specific language.
Second best was when I took Pokémon from older generations with their BSTs (base stat totals), told an AI (Perplexity) that I wanted to give them gen 5 BSTs, provided a spreadsheet with all gen 5 Pokémon with BST and each individual stat, and used whatever it gave me as a baseline for making BST edits.
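For what it's worth, the deterministic core of an edit like that can be done without an AI at all. A hedged sketch, assuming a simple proportional rescale toward a target BST (the AI's actual suggestions would presumably redistribute stats less mechanically):

```python
def rescale_stats(stats, target_bst):
    """Proportionally scale a stat spread so its total hits target_bst.

    stats: dict of stat name -> base value (HP, Atk, ...).
    Rounds each scaled stat, then nudges the largest stat to absorb
    any rounding drift so the total lands exactly on target_bst.
    """
    current_bst = sum(stats.values())
    scaled = {k: round(v * target_bst / current_bst) for k, v in stats.items()}
    drift = target_bst - sum(scaled.values())
    biggest = max(scaled, key=scaled.get)
    scaled[biggest] += drift
    return scaled
```

Using the AI output only as a baseline, then hand-editing, matches the "tool, not final product" approach several other commenters describe.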
Otherwise, I wouldn't say I'm a big fan of AI since I don't have many uses for it myself.
The only one I ever use is the Meta AI built into Messenger, because my friends and I can have it make silly and often extremely cursed pictures that make us laugh.
I like messing with the locally hosted AI available. We have a locally hosted LLM trained on our command media at work that is occasionally useful. I avoid it otherwise if I didn't set it up myself or know who did.
duck.ai is very helpful for niche/specific questions I have but can’t find online. It’s also helpful for super quick questions that don’t really warrant a forum post. However, I always take things with a grain of salt.
It's an overly broad term, and the "hype" use-cases dominate the discussion in a way that lacks vision. I'm using machine learning to optimize hardware accelerated processing for particle physics. So, ya, it's not all slop. And what is, may very well evolve.
I’m not impressed with the LLMs. They do make great synonym generators.
Stable Diffusion and other image diffusers are genuinely amazing. And I’m not talking about asking Copilot to make Fortnite Shrek. There are incredibly complex ways in which you can fine-tune and tell it how to shape and refine the image. It has revolutionized graphical art and is going to keep doing so. And once the math shrinks down, it’s going to be everywhere.
I use AI every day. I think it's an amazing tool. It helps me with work, with video games, with general information, with my dog, and with a whole lot of other things. Obviously verify the claims if it's an important matter, but it'll still save you a lot of time. Prompting AI with useful queries is a skill set that everyone should be developing right now. Like it or not, AI is here and it's going to impact everyone.
It stimulates my brain, and I enjoy the randomness of it all. It's like how in nature things can be perfectly imperfect - random and still beautiful - unintentional and still emotion-inducing. Sure, I see the ethical issues with how an AI is trained and how capitalism cares more about profit than people leading to job loss or exploitation; however, those are separate issues in my mind, and I can still find joy in the random output of an AI. I could easily tunnel on the bad parts of AI and what's happening as the world devours a new technology, but I still see benefits it can bring in the medical research and engineering fields.
Yes. AI art is great. It's a new medium, and pretty much every argument against it was made against photography a century ago, and most of them against pre-mixed paints before that. Stop believing the haters who don't know what it actually is.
Kitboga has used AI (STT, LLMs, and TTS) to waste the time of Scammers.
There are AI tools being used to develop new cures which will benefit everyone.
There are AI tools being used to help discover new planets.
I use DLSS for gaming.
I run a lot of my own local AI models for various reasons.
Whisper - for Audio Transcriptions/Translations.
Different Diffusion Models (SD or Flux) - for some quick visuals to recap a D&D session.
Tesseract OCR - to scan an image and extract any text that it can find (makes it easy to pull out text from any image and make it searchable).
Local LLMs (Llama, Mixtral) for brainstorming ideas, reformatting text, etc. It's great for getting started with certain subjects/topics, as long as I verify everything that it says.
Going through data and writing letters are the only tasks I've seen AI be useful for. I still wouldn't trust it as far as I could kick its ass, and I'd check it well before submitting it for work.
I abhor it and I think anybody who does actually like it is using it unethically: for art (which they intend to profit off of), for writing papers or articles, and for writing bad code.
That said, I did find some use for ChatGPT last year. I had it explain some parts of Hawking's paper on black hole particle creation. This was only useful in this one case because Hawking had a habit of stating something is true without explaining it, and often without providing useful references. For the record, ChatGPT was not good at this task, but with enough prodding and steering I was eventually able to get it to explain some concepts well enough for my purposes. I just needed to understand a topic; I definitely wasn't asking ChatGPT to do any writing for me, since most of what it spits out is flat-out wrong.
I once spent a day trying to get it to solve a really basic QM problem, and it couldn't even keep the maths consistent from one line to another.