You summon it by chanting and fire, right?
I use it somewhat regularly to send snarky emails to coworkers: professional, buzzword-overloaded responses to mundane inquiries.
I use it every so often to help craft a professional go fuck yourself email too.
Wait, people actually try to use gpt for regular everyday shit?
I do lorebuilding shit (in which gpt's "hallucinations" are a feature not a bug), or I'll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I've given up on trying to do coding projects, because gpt is even worse at it than I am.
I use ChatGPT mainly for recipes, because I'm bad at that. And it works great, I can tell it "I have this and this and this in my fridge and that and that in my pantry, what can I make?" and it will give me a recipe that I never would have come up with. And it's always been good stuff.
And I do learn from it. People say you can't learn from using AI, but I've gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.
You should try this thing, it's pretty neat: just press Maya or Miles. It requires a microphone, though, so you may have to open it on your phone.
https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo
Using AI is helpful, but by no means does it replace your brain. Sure, it can write emails and really helps with code, but for anything beyond basic troubleshooting and short code snippets, it's an assistant, not an answer.
Yeah, I don't get the people who think it'll replace your brain. I find it useful for learning even if it's not always entirely correct but that's why I use my brain too. Even if it gets me 60% of the way there, that's useful.
I feel like it's an unpopular take, but people are like "I used ChatGPT to write this email!" and I'm like, you should be able to write an email.
I think a lot of people are excited enough to neglect core skills and let them atrophy. You should know how to communicate. It's a skill that needs practice.
This is a reality: most people will abandon those skills, and many more will never learn them to begin with. I'm actually very worried about children who will grow up learning to communicate with AI and become dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.
AI is patient, always available, predicts desires, and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This means children won't need to spell or structure sentences correctly to communicate effectively with AI, which means they won't need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills with people will be. Gen Z and Gen Alpha already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context change. This will emphasize those differences to a problematic degree.
Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history (which is inevitable, tbh) and AI improving every day, it can develop a unique communication style for each child: what amounts to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on it to communicate effectively, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.
I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and they will do it at the expense of human freedom.
I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.
I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it'll shower it and comb it all nice and pretty for me. It's not an apology, but I guess my point is that some people deserve it.
I think it is a good learning tool if you use it as such. I use it for help with Google Sheets functions (not my job or anything important, just something I'm doing), and while it rarely gets a working function out, it can set me on the right track with functions I didn't even know existed.
Is something that is so often wrong a good learning tool when there are online resources?
We used to have web forums for that, and they worked pretty okay without the costs of LLMs
This is a little off topic, but we really should, as a species, invest more heavily in public education. People should know how to read and follow instructions, like the docs that come with Google Sheets.
I'm using it to learn to code! If anyone wants to try my game let me know I'll figure out a way to send it.
The number of times I've seen a question answered with "I asked ChatGPT and blah blah blah," and the answer being complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.
Oh look, it's LLMentalist o'clock!
This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.
The stupid and the lazy.
Hey, I may be stupid and lazy, but at least I don't, uh, what were we talking about?
I don't see the point either if you're just going to copy verbatim. OP could always just ask AI themselves if that's what they wanted.
I've tried a few GenAI things, and didn't find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.
If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it's going to be bad because it is not intelligent. It does not think. And it's not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.
It's incredibly effective for task assistance, especially with information that is logical and consistent, like maths, programming languages and hard science. What this means is that you no longer need to learn Excel formulas or programming. You tell it what you want it to do and it spits out the answer 90% of the time. If you don't see the efficacy of AI, then you're likely not using it for what it's currently good at.
Developer here
Had to spend 3 weeks fixing a tiny app that a vibe coder built with AI. It required rewriting significant portions of the app from the ground up because AI code is nearly unusable at scale. Debugging is 10x harder, code is undocumented and there is no institutional knowledge of how an internal system works.
AI code can maybe be ok for a bootstrapped single-programmer project, but it's pretty much useless for real enterprise-level development.
Oh hey it's me! I like using my brain, I like using my own words, I can't imagine wanting to outsource that stuff to a machine.
Meanwhile, I have a friend who's skeptical about the practical uses of LLMs, but who insists that they're "good for porn." I can't help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don't even like it being a default on search engines, so the idea of using it just to regularly masturbate feels ... extremely selfish. I can see trying it as a novelty, but as a regular occurrence? It's an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.
Using it for porn sounds funny to me given the whole concept of "rule 34" being pretty ubiquitous. If it exists, there's porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats: surely there is a never-ending ocean of cat pictures you can search and refine, so do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That's all I can think of.
He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don't know if he's used it for images.
Now imagine growing up where using your own words is less effective than having AI speak for you. Would you have not used AI as a kid when it worked better than your own words?
Wdym “using your own words is less effective than having AI speak for you”? Learning how to express yourself and communicate with others is a crucial life skill, and if a kid struggles with that, then they should receive the proper education and support to learn, not be given an AI and told to just use that instead.
I finally played around with it for some coding stuff. At first, I tried building the start of a chess engine, and it did ok for a quick and dirty implementation. It was cool that it could create a zip file with the project files that it was generating, but it couldn't populate it with the results of some of the earlier prompts. Overall, it didn't seem that worthwhile for me (as an experienced software engineer who doesn't have issues starting projects).
I then uploaded a file from a chess engine that I had already implemented and asked for a code review, and that went better. It identified two minor bugs and was able to explain what the code did. It was also able to generate some other code to make use of this class. When I asked if there were some existing projects that I could have referenced instead of writing this myself, it pointed out a couple others and explained the ways they differed. For code review, it seemed like a useful tool.
I then asked it for help with a math problem that I had been working on related to a different project. It came up with a way to solve it using dynamic programming, and then I asked it to work through a few examples. At one point, it returned numbers that were far too large, so I asked about how many cases were excluded by the rules. In the response, it showed a realization that something was incorrect, so it gave a new version of the code that corrected the issue. For this one, it was interesting to see it correct its mistake, but it ultimately still relied on me catching it.
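For flavor, here's a toy version of that pattern (a made-up stand-in problem, not the one I was actually working on): count binary strings of length n with no two consecutive 1s. The failure mode is the same shape: drop the exclusion rule and the numbers come back far too large.

```python
# Hypothetical illustration, not my original problem: count binary
# strings of length n that never contain two consecutive 1s.
def count_valid(n: int) -> int:
    # DP state: (# strings of current length ending in 0, ending in 1)
    end0, end1 = 1, 1  # length-1 strings: "0" and "1"
    for _ in range(n - 1):
        # a 0 may follow anything; a 1 may only follow a 0
        end0, end1 = end0 + end1, end0
    return end0 + end1

# Without the exclusion rule the count would be 2**n, which is how a
# forgotten constraint shows up as numbers that are far too large.
print(count_valid(4))  # 8, versus 2**4 = 16 unconstrained
```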
Spent this morning reading a thread where someone was following chatGPT instructions to install "Linux" and couldn't understand why it was failing.
Like, which distro and version?
Hmm, I find ChatGPT is pretty decent at very basic tech support when asked with the correct jargon. Like "How do I add a custom string to cell formatting in Excel".
It absolutely sucks for anything specific, or asked with the wrong jargon.
Good for you buddy.
Edit: sorry that was harsh. I'm just dealing with "every comment is a contrarian comment" day.
Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?
There's a false sense of security coupled to the notion of "asking" an entity.
Why not engage in a community that can support answers? I've found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.
The forums of the older internet were great at this... creating community out of commonality. Plus, they were largely self-correcting in a way that LLMs are not.
So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.
And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.
I don't get how so many people carry their computer illiteracy as a badge of honor.
Chatgpt is useful.
Is it as useful as Tech Evangelists praise it to be? No. Not yet - and perhaps never will be.
But I sure do love to let it write my mails to people who I don't care for, but who I don't want to anger by sending my default 3 word replies.
It's a tool to save time. Use it or pay with your time if you willfully ignore it.
I like to take photos of plants and get it to tell me what the plant is, whether it's a good houseplant, whether I can propagate it in water, and what the symptoms on the leaves mean, and it's really good at it.
Tech illiteracy. Strong words.
I'm a sysadmin at the IT faculty of a university. I have a front row seat to witness the pervasive mental decline that is the result of chatbots. I have remote access to all lab computers. I see students copy-paste the exercise questions into a chatbot and the output back. Some are unwilling to write a single line of code by themselves. One of the network/cybersecurity teachers is a friend, he's seen attendance drop to half when he revealed he'd block access to chatbots during exams. Even the dean, who was elected because of his progressive views on machine learning, laments new students' unwillingness to learn. It's actual tech illiteracy.
I've sworn off all things AI because I strongly believe that its current state is a detriment to society at large. If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly. I will learn every skill that I need, without depending on AI. If you think that makes me an old man yelling at clouds, I have no kind words in response.
x 1000. Between the time I started and finished grad school, ChatGPT had just come out. The difference in the students I TA'd at the beginning and at the end of my career is mind-melting. Some of this has to do with COVID losses, though.
But we shouldn't just call out the students. There are professors who are writing fucking grants and papers with it. Can it be done well? Yes. But the number of papers talking about "vegetative electron microscopy," or introductions whose first sentence reads "As a language model, I do not have opinions about the history of particle models," or completely nonsensical graphics generated by spicy Photoshop, is baffling.
Some days it feels like LLMs are going to burn down the world. I have a hard time being optimistic about them, but even the ancient Greeks complained about writing. It just feels different this time, ya know?
ETA: Just as much of the onus is on grant reviewers and journal editors for uncritically accepting slop into their publications and awarding money to poorly written grants.
If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly.
I grew up, mostly, in the time of digital search, but far enough back that they still resembled the old card-catalog system. Looking for information was a process that you had to follow, and the mere act of doing that process was educational and helped order your thoughts and memory. When it's physically impossible to look for two keywords at the same time, you need to use your brain or you won't get an answer.
And while it's absolutely amazing that I can now just type in a random question and get an answer, or at least a link to some place that might have the answer, this is a real problem in how people learn to mentally process information.
A true expert can explain things in simple terms, not because they learned them in simple terms or think about them in simple terms, but because they have the ability to rephrase and reorder information on the fly to fit into a simplified model of the complex system they have in their mind. That's an extremely important skill, and it's getting more and more rare.
If you want to test this, ask people for an analogy. If you can't make an analogy, you don't truly understand the subject (or the subject involves subatomic particles, relativity, or topology, and using words to talk about it is already basically an analogy).
Speaking of being old: just as there are noticeable differences between people who grew up before or after ready internet access, I think there will be a similar divide between people who did their learning before or after LLMs.
Even if you don't use them directly, there's so much more useless slop than there used to be online. I'll make it five minutes into a how-to article before realizing it doesn't actually make any sense when you look at the whole thing, let alone have anything interesting or useful to say.
Saying you've heard of it but haven't even tried it, and then bragging about that on social media, is different from trying it and then deciding it's not worth it/more trouble than it's worth.
Do I see it as detrimental to education? Definitely, especially since teachers are not prepared for it.
As an older techy I'm with you on this, having seen this ridiculous fight so many times.
Whenever a new tech comes out that gets big attention, you have the tech companies overhyping it, saying everyone has to have it.
And you have the proud Luddites who talk like everyone else is dumb and they're the only ones capable of seeing the downsides of the tech.
"Buy an iPhone, it'll Change your life!"
"Why do I need to do anything except phone people and the battery only lasts one day! It'll never catch on"
"Buy a Satnav, it'll get you anywhere!"
"That satnav drove a woman into a lake!"
"Our AI is smart enough to run the world!"
"This is just a way to steal my words like that guy who invented cameras to steal people's souls!"
🫤
Tech was never meant to do your thinking for you. It's a tool. Learn how to use it or don't, but if you use tools right, 10,000 years of human history says that's helpful.
The thing is, some "tech" is just fucking dumb, and should have never been done. Here are just a few small examples:
"Get connected to the city gas factory, you can have gaslamps indoors and never have to bother with oil again!"
"Lets bulldoze those houses to let people drive through the middle of our city"
"In the future we'll all have vacuum tubes in our homes to send and deliver mail"
"Airships are the future of transatlantic travel"
"Blockchain will revolutionize everything!"
"People can use our rockets to travel across the ocean"
"Roads are a great place to put solar panels"
"LLMs are a great way of making things"
Not all tools are worthy of the way they are being used. Would you use a hammer that had a 15% chance of smashing you in the face when you swung it at a nail? That's the problem a lot of us see with LLMs.
That's the thing. It's a tool like any other. People who just give it a 5 word prompt and then use the raw output are doing it wrong.
It takes a lot of skill and knowledge to recognise a wrong answer that is phrased like a correct answer. Humans are absolutely terrible at this skill; it's why con artists are so successful.
And that skill and knowledge is not formed by using LLMs
But you have the tech literacy to know that. Most non-tech people that use it do not, and just blindly trust it, because the world is not used to the concept that the computer is deceiving them.
Sounds like it's a tool for wasting time.
I used the image generation of a jailbroken model running locally to drum up an AI mock-up of work I then paid a professional to do.
This was 10000x smoother than the last time I tried this, where I irritated the artist with how much they failed to understand what I meant. The AI didn't care, I was able to get something decently close to what I had in my head, and a professional took that and made something great with it
Is that a better example?
I don’t know how to feel about this. I need to ask ChatGPT.
Used it once to ask it silly questions to see what the fuss is all about, never used it again and hopefully never will.
You can always ask ChatGPT how to proceed with such trivial tasks. It is too dumb to write code but it can suggest how to get access to ChatGPT.
It isn't too dumb to write code. It's too dumb to write complex code. I use it to write code every week and it saves a ton of time. It has also greatly reduced the time it takes me to produce effective code. Right now I have an automated trading program running that was written in Python in three weeks, without my ever having known a programming language. If you are not finding AI useful, you're simply not using it for what it is useful for. I do pay for the latest and greatest ChatGPT though, and the difference from the publicly available version is significant.
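For scale, the core of a program like that isn't much code. Here's the kind of skeleton ChatGPT produces for this (a simplified, hypothetical sketch on synthetic prices; not my actual program, and no broker API involved):

```python
# Hypothetical moving-average crossover sketch on synthetic data.
# Not the actual trading program described above.
import random

def sma(prices, window):
    # Simple moving average of the last `window` prices.
    return sum(prices[-window:]) / window

random.seed(0)
prices = [100.0]
for _ in range(500):  # synthetic random-walk price series
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

cash, position = 10_000.0, 0
for i in range(50, len(prices)):
    history = prices[:i + 1]
    fast, slow = sma(history, 10), sma(history, 50)
    if fast > slow and position == 0:    # upward cross: buy
        position = int(cash // prices[i])
        cash -= position * prices[i]
    elif fast < slow and position > 0:   # downward cross: sell
        cash += position * prices[i]
        position = 0

print(f"Final portfolio value: {cash + position * prices[-1]:.2f}")
```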
AI is here to stay. Anyone who refuses to learn how to use it to benefit their lives will be hurting their future. I've used a dozen or so AI tools, use a couple regularly, and the efficacy of just ChatGPT is clear. There is no going back; AI is your future whether you want it or not. AI will become your user interface for consumer electronics, similar to how consumer electronics all seem to require smartphone apps these days. Your smartphone is now the intermediary, using whatever AI the hardware manufacturers allow, such as Apple and Google using their own LLMs.
This entire argument is predicated on the assumption that it is a benefit to my life.
What if I believe that it's not? That it is an active detriment? That I can live my life better without it?
And this is not contempt prior to investigation. I've tried it, and I honestly believe that I can do things better without it.
You know people who connect their fridge to the internet, and their front door locks to the internet, and their central heating system to the internet?
What benefit does that give me? All it does is allow -- or potentially allow -- someone to hack into my fridge, my central heating and my front door.
Why would I do that? I mean -- that would be ridiculous. I have a front door lock that's an actual lock because it is almost certainly going to be more secure.
I can write my answers, my emails, my letters better than AI can. I can write proposals at work better than AI can.
I can manage my life better than AI can because based on everything I have seen there is nothing it can do that is anywhere near as competent as I am.
so, i don’t necessarily disagree that a lot of AI shit on the market rn is useless, trite bullshit but then again so was almost every tech product between 2000 and now. some people preferred to live their lives like they did before the digital revolution. you don’t really see people claiming the internet is useless anymore, tho, do you?
sure, you believe you can do things better without it. and that might be true. unfortunately, some others believe (correctly) that they can handle a larger cognitive workload using these tools, which is their purpose. regardless of your opinion on AI, anyone well educated enough in the actual industry knows that there is an additive, non-zero nootropic benefit that can be achieved. we would say the same thing about giving someone access to Google on a school test, of course they perform better! except with AI i think there is a lot of emotionally driven thinking causing people to not come to the obvious conclusions here. just because some people can figure out how to make use of these tools in a beneficial manner and you cannot doesn’t mean the tools themselves are bad.
the anti-AI horde always likes to harp about “b-b-but my 6 fingers” and “it can only write in corpo-speak,” amongst other things. truthfully speaking, the sheer volume of work an AI is capable of doing vastly outweighs the fact that it makes mistakes in negligible proportions. i see these techs derided as “averaging machines,” people with a straight face seriously saying this as if something that does average on virtually every cognitive task at all times isn’t already handily outcompeting its human counterparts. sitting here performatively acting like they don’t work does nothing to counter the fact that the most significant minds in this field of research can all at least agree that this won’t remain the status quo for long. these technologies are in a position to vastly outpace any human being’s individual economic output, like it or not.
you are in direct competition with these individuals and technology. i, honestly, hope you understand the “pro-AI” sentiment being directed at you is less a commentary on your choice surrounding the matter and more a warning that in the future you are going to be handily outcompeted by those who do choose to use these tools and exploit them to their full benefit. it’s easy to toss stones from the comfort of the present, but, when you’ve been jobless for 5 years because no one hires the “old” kind of worker maybe you will reconsider at least keeping up with the times. i don’t mean that as scorn, truthfully. it’s a fair warning.
A good horse rider was once better than an automobile for traveling on the dirt roads that existed. I have avoided just about every novel and ridiculously useless tech trend for 20 years, but I do not believe this is the same. This is a foundational change on par with the internet or the smartphone. If you can't find a single use for AI in your life, then you will be left behind while others make significant improvements to theirs. More likely, however, it will be unavoidable in the next decade as AI slowly becomes the user interface preferred by companies, which is already happening in customer service. Having used AI and LLMs regularly for the last 3-4 months, there is no going back. You can choose to live in the past for as long as you are able, but your dependency on how you do things today will impede your ability to function in a future that makes those processes obsolete, especially as future generations grow up with AI from birth.
I’m 100% with you on this. There isn’t a single thing that generative AI can do better than a traditional method or by myself.
AI code is pretty much useless, as you spend 2-4x the time debugging and fixing the code as you would have writing it from the ground up.
AI search is useless because it regularly and predictably gives bad and/or incorrect results. A well-built traditional search engine is so much better, but those have disappeared with Microsoft and Google going all in on AI search.
AI art isn’t art, and I would never support anyone who uses it, let alone makes money from it. It fundamentally is missing three of the core pillars of art, which are creativity, uniqueness and the human experience.
AI chat bots are ruining human connection and consistently perform worse than human support reps.
Fuck that, anything “AI” worth anything is just algorithms we already had that were rebranded to take advantage of stupid people. My life is going just fine without its nonsense, thanks.
Fuck that, anything “AI” worth anything is just algorithms we already had that were rebranded to take advantage of stupid people.
While what you describe does happen (and those are the worst of the worst examples of shitty unnecessary bullshit), LLMs are not algorithms we already had.
Things like ChatGPT/Copilot are novel tech. You might not like them, and they can hallucinate answers, but they are new.
My life is going just fine without its nonsense, thanks.
The theory is that you will be left behind, not that your life is missing anything.
Picture the Native Americans before colonialism. Their lives were going just fine, but then a money-addicted, hyper-"efficient" type of culture appeared, and they weren't able to raise armies and build weapons at the rate necessary to keep their way of life.
If you + LLM can do your job more efficiently than you alone, then by supply and demand your value as an employee goes down when you refuse to adapt, and your salary will reflect your comparatively lower output relative to your peers.
No, it very much isn't. AI is a mass data analysis tool with vastly greater capabilities than any human counterpart.
show me a so-called ai that doesn't fuck up all the goddamn time and maybe I'll use it for something simple. except it fails the simplest things all the time. does it so much that they have cutesy names for it. it's not libel, it's hallucination. it's not murder, it's mortality manifestation. fuck ai. get back to making tools that actually work.
why do i have a feeling if i asked you to tell me what hallucinations are in a technical sense i would get a regurgitated answer from google?
being blind to the obvious doesn’t help anyone, man. anyone who has genuinely worked on or even just with these tools knows that they are capable of producing quality outputs. sometimes they mess up, sure, but they can also work 1000000x faster than you can. the energy problem is, in turn, a valid discussion, but this is just being oblivious to the obvious.
why do you guys all mistake the climate of early tech adoption as an indicator of the technology itself being bad? were you not alive for the rise of the internet or something? i think you guys all just hate corporatism, not AI, but for some reason can’t take the logical step to that conclusion.
That's entirely on you for using it for what it's bad at and then claiming it's bad at everything. I use an LLM literally every day for work and it's a time saver. I had to learn what it's good for and what it's not, though. I also use the better available versions, not the publicly available ones. Asking it questions about vague and subjective things isn't where it's best. Asking it to make an Excel formula that does a thing without needing to even know a function exists to do that? Priceless.