Redditors are the dumbest, most credulous idiots to ever walk the Earth
Literally just mainlining marketing material straight into whatever’s left of their rotting brains.
For fuck's sake, it's just an algorithm. It's not capable of becoming sentient.
Have I lost it or has everyone become an idiot?
This is verging on a religious debate, but assuming that there's no "spiritual" component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex "meat computers"?
stochastic parrots
I could have sworn that the whole point of that paper was to point out that LLMs aren't actually intelligent, not that human intelligence is basically an LLM.
I don't know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.
I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produce sentience.
That's not something that I've bought into yet.
so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here's my philosophical examination of the issue.
the thing is, we don't even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.
so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.
here's the task for people who want to prove that the human brain is a meat computer: explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.
hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh so smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore absurd and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'. consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.
Consider information processing, and the kinds of information processing that our brains/minds are capable of.
What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from) There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).
and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.
our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.
so the options we are left with in terms of conclusions to draw are:
And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.
I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (i.e. so what changes if we determine that life is a simulation). Like they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff
I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress
I don't know where everyone is getting these in-depth understandings of how and when sentience arises.
It's exactly the fact that we don't know how sentience forms that makes acting like fucking chatgpt is now on the brink of developing it so ludicrous. Neuroscientists don't even know how it works, so why are these AI hypemen so sure they've got it figured out?
The only logical answer is that they don't and it's 100% marketing.
Hoping computer algorithms made in a way that's meant to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.
this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn't mean we are close to anything.
Consider the complexity of a sentient, multicellular organism. That's trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that's still more things happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity down by only looking at the synaptic connections between neurons, ignoring everything else the cells are doing.
You're making a lot of assumptions about the human mind there.
I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don't think something as general as consciousness can be accurately called an algorithm.
Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don't even have language.
It just screams of a marketing scam. I'm not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don't think this is what they're doing. I think they're just trying to sell the next Google AdSense
That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.
How is that plausible? The human brain has more processing power than a snake's. Which has more power than a bacterium's (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests, will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written
chatGPT : freshman-year-"hello world"-program
human being : amoeba
(the : symbol means it's being analogized to something)
a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they're still just data points, which have no sentience or consciousness.
Both are something much greater than the sum of their parts, but in a human's case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us, our lung cells and nephrons and etc are basically little tiny specialized amoebas. ChatGPT doesn't....do anything, it has no will
Have I lost it
Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”
Have I lost it
No you haven't. I feel the same way though, since the world has gone mad over it. Reporting on this is just another proof that journalism only exists to make capitalists money. Anything approaching the lib idea of a "free and independent press" would start every article explaining that none of this is AI, it is not capable of achieving consciousness, and they are only saying this to create hype
Have I lost it or has everyone become an idiot?
Brainworms have been amplified and promoted by social media. I don't think you have lost it. This is just the shitty capitalist world we live in.
They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn't come here to read shit from AI.
I've seen this several times now; they're treating the word-generating parrot like fucking Shalmaneser in Stand on Zanzibar, you literally see redd*tors posting comments that are basically "I asked ChatGPT what it thought about it and here...".
Like it has remotely any value. It's pathetic.
ChatGPT could give you a summary of the entire production process
with entirely made up numbers
It can replace customer service agents
that will direct you to a non-existent department because some companies in the training data have one
and support for shopping is coming soon
i look forward to ordering socks and receiving ten AA batteries, three identical cheesegraters, and a leopard
I said it at the time when chatGPT came along, and I'll say it now and keep saying it until or unless the android army is built which executes me:
ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.
I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some "real" argument for different types and stages of AI and my only preemptive response to them is basically "keep your industry-specific terminology inside your specific industries." The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because... frankly, they're full of shit and it's annoying.
the average person was always an NPC who goes by optics instead of fundamentals
"good people" to them means clean, resourced, wealthy, privileged
"bad people" means poor, distraught, dirty, refugee, etc
so it only makes sense that an algorithm box with the optics of a real voice, proper english grammar and syntax, would be perceived as "AI"
What I wanted:
What I got:
AI has been used to describe many other technologies, when those technologies became mature and useful in a domain though they stopped being called AI and were given a less vague name.
Also gamers use AI to refer to the logic operating NPCs and game master type stuff, no matter how basic it is. Nobody is confused about the infected in L4D being of Skynet level development, it was never sold as such.
The difference with this AI push is the amount of venture capital and public outreach. We are being propagandized. To think that wouldn't be the case if they simply used a different word in their commercial ventures is a bit... idk, silly? Consider the NFT grift: most people didn't have any prior associations with the word "nonfungible".
ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.
Some have taken pictures of blackboards and had it explain all the text and box connections written in the board.
I've used it to double the speed I do dev work, mostly by having it find and format small bits of code I could find on my own but takes time.
One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.
"It's not the full AI we expected" is incredibly inane considering this tech is less than a year old, and is updating every couple of weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the unreleased version is a big enough deal to cause all this drama, and it will be even more unrecognizable in the years to come.
This tech is not less than a year old. The "tech" being used is literally decades old; the specific implementations marketed as LLMs are 3 years old.
People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.
ChatGPT does no analysis. It spits words back out based on the prompt it receives based on a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.
The people that are into this and believe the hype have a lot of crossover with "Effective Altruism" shit. They're all biased and are nerds that think Roko's Basilisk is an actual threat.
As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they're unleashing on the world is cool in their minds, but oh no we've done too many lines at work and it shit out something and now we're all freaked out that maybe it'll kill us. As long as this technology is used to serve the interests of capital, then the only results we'll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we're subjected to, they'll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.
Where do you get the idea that this tech is less than a year old? Because that's incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90's. Any recent "breakthroughs" are more about computing power than a theoretical shift.
I hate to tell you this, but I think you've bought into marketing hype.
I never said that stuff like chatGPT is useless.
I just don't think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within... wait. I guess it already happened based on Musk's predictions from years ago.
If people wanna discuss theories and such: have fun. Just don't expect me to give a shit until skynet is looking for John Connor.
Perceptrons have existed since the 60s. Surprised you don't know this; it's part of the undergrad CS curriculum, or at least it is at any decent school.
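For anyone who hasn't seen it: the perceptron really is textbook-simple. Here's a minimal sketch of the classic learning rule (my own toy example, not from any particular course), training it on the AND function:

```python
# Minimal perceptron (Rosenblatt, late 1950s) learning AND.
# Step activation: output 1 iff the weighted sum clears zero.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Classic perceptron update: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions (famously, not XOR), which is exactly why the field needed decades of work on multi-layer networks before anything like today's models.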
LOL you are a muppet. The only people who think this shit is good are either clueless marks or have money in the game and a product to sell. Which are you? Don't answer that, I can tell.
This tech is less than a year old, burning billions of dollars and desperately trying to find people who will pay for it. That is it. Once it becomes clear that it can't make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won't finance your search engine that talks back in the long term, and it can't do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: as soon as we go past the most basic shit, it is just confidently wrong most of the time.
The only thing it's been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.
this tech is less than a year old
what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms but it's ahistorical to call this new tech.
I'm not really a computer guy but I understand the fundamentals of how they function and sentience just isn't really in the cards here.
I don’t understand how we can even identify sentience.
Nobody does and anyone claiming otherwise should be taken with cautious scrutiny. There are compelling arguments which disprove common theses, but the field is essentially stuck in metaphysics and philosophy of science still. There are plenty of relevant discoveries from neighboring fields. Just nothing definitive about what consciousness is, how it works, or why it happens.
Nobody does, we might not even be. But it's pretty easy to guess inorganic material on earth isn't.
sapience isn't, but all these things already respond to stimuli; sentience is a really low bar.
Sentience is not a "low bar" and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called "AI" is nowhere near either one.
A piece of paper is sentient because it reacts to my pen
plenty of things respond to stimuli but aren't sapient - hell, bacteria respond to stimuli.
Roko's Basilisk, but it's the snake from the Nokia dumb phone game.
We all did...
I was gonna say, "Remember when scientists thought testing a nuclear bomb might start a chain reaction enflaming the whole atmosphere and then did it anyway?" But then I looked it up and I guess they actually did calculations and figured out it wouldn't before they did the test.
Might have been better if it did
No I’m not serious I don’t need the eco-fascism primer thank you very much
The half-serious jokes about sentient AI, made by dumb animals on reddit, are no closer to the mark than an attempt to piss on the sun. AI can't be advancing at a pace greater than we think, unless we think it's not advancing at all. There is no god damn AI. It's a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time, it's got no grasp on anything, no comprehension, let alone a promise of sentience.
There are plenty of things and people that get to me, but few are as good at it as idiot tech bros, their delusions and their extremely warped perspective.
Exactly. It's just statistics, it's not really useful beyond what it has been trained on, but people seem to think that it's something it's not. I guess that is the fault of the advertising push by these corporations to market these statistical algorithms as "AI"
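The "stochastic next word" point can be made concrete with a toy model: a bigram table that picks each next word at random from the words that followed it in a tiny corpus. This is my own illustrative sketch; real LLMs condition on vastly more context through a transformer, but the generation loop at the end is the same idea of sampling one word at a time:

```python
# Toy "stochastic parrot": bigram counts from a tiny corpus, then
# repeatedly sample a plausible next word. No comprehension anywhere.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Record which words followed which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n, rng):
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:  # dead end: word never had a successor
            break
        word = rng.choice(follows[word])  # stochastic pick of the next word
        out.append(word)
    return " ".join(out)

print(generate("the", 5, random.Random(0)))
```

Every output is locally plausible because it only ever re-emits transitions seen in training, which is also why it can only parrot, never reason about, its corpus.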
I don't know if Reddit was always like this but all /r/ subreddits feel extremely astroturfed. /r/liverpoolfc for example feels like it is run by the team's PR division. There are a handful of critical posts sprinkled in so redditors can continue to delude themselves into believing they are free-thinking individuals.
Also, this superintelligent thing was doing well on some fifth-grade-level tests according to Reuters' anonymous source, which got OpenAI geniuses worried about an AI apocalypse.
Reddit has always been astroturfed but it’s clearly increased since 2015 and especially in the last year since their IPO push
Glazers are so incompetent that r/reddevils is astroturfed against the club.
The Liverpool club subreddit is just delusional. Many saw the 22/23 season decline coming, saw that the players were declining beforehand, but it all got dismissed and downvoted because results were good at the time.
Everything is extremely astroturfed and fake on Reddit, there are lots of companies making good money from running influence operations on Reddit these days. There was a massive uptick in it when Lemmy became popular almost as if anyone with any real substance left the platform entirely.
In 2013, Reddit admins did an oopsy-whoopsy and accidentally revealed that the Eglin Air Force Base was the #1 most reddit-addicted "city" (Eglin is often cited as the source of government social-media propaganda/astroturfing programs). They deleted the post, but not before archive.org caught it.
I think it should be noted that some of the members of the board of OpenAI are literally just techno-priests doing actual techno-evangelism; their jobs literally depend on this new god and the upcoming techno-rapture being perceived as at least a plausible item of faith. It probably works as well as any other marketing strategy, but this is all in the context of Microsoft becoming the single largest company stakeholder in OpenAI, and they likely don't want their money to go to waste paying a bunch of useless cultists, so they started yanking Sam Altman's chain. The OpenAI board reacted to the possibility of Microsoft making budget calls and ousted Altman, and Microsoft swiftly reacted by formally hiring Altman and doubling down. Obviously most employees are going to side with Microsoft since they're currently paying the bills. You're going to see people strongly encouraged to walk out from the OpenAI board in the upcoming weeks or months, and they'll go down screaming crap about the computer hypergod. You see, these aren't even marketing lines that they're repeating uncritically; it's what some dude desperately latching onto his useless 6-figure job is screaming.
goldmine for anthropologists and sociologists as the techbros reinvent religion from first principles
these people are basically just the magician priests of Ra again
The saddest part of all is that it looks like they really are wishing for real life to imitate a futuristic sci-fi movie. They might not come out and say, "I really hope AI in the real world turns out to be just like in a sci-fi/horror movie" but that's what it seems like they're unconsciously wishing for. It's just like a lot of other media phenomena, such as real news reporting on zombie apocalypse preparedness or UFOs. They may phrase it as "expectation" but that's very adjacent to "hopeful."
I'm really appreciative of this meme. I endorse it and wish it could enter the minds of everyone alive right now.
Yeah I think it was Kim Stanley Robinson who said that sci-fi is taken as religious mythology often, like the prophecy of superluminal space travel or machine superintelligence, very much like prophecies of heaven and a savior god.
Also the point that if you point this out as a myth, whatever your credentials as a sci-fi writer or even a physicist, the faithful will launch a crusade against you
You're right on, in my opinion. It's a gnarly distraction from the Marxist way of analyzing this: further alienation from the means of production. I really like how you frame it as a religious thing. It pairs nicely with literal interpretations of the Bible, really. Gotta wonder how many of these folks come from strict Baptist murkan families.
Yeah. I've written game AI, I've worked in AI research, I've looked under the hood and examined how LLMs work, but people with little or no experience still tell me I'm wrong and that they know better.
I think there's an important difference between the two examples, where one contradicts everything we understand about the way the universe works, and the other does not.
Is it really sad to wish for that? There are plenty of more positive representations of such things that are seen in the sci-fi/horror genre.
Sci-Fi is ultimately speculative fiction, an idea of how the world might be, and while it might be a bit silly to act like whatever speculative fiction you have in mind is an accurate representation of the future without very strong evidence, I'm not sure I would describe it as sad.
I swear 99% of reddit libs don't understand anything about how LLMs work. Knowing how AI actually works is a very reliable vaccine against […]ism.
New Q* drop lol
Some graph traversal algorithm ass name.
CIA is using the Wizard 101 system to generate Redditor names
Redditors straight up quote marketing material in their posts to defend stuff, it's maddening. I was questioning a gamer on Reddit about something in a video game, and in response they straight up quoted something from the game's official website. Critical thinking and media literacy are dead online I swear.
Shit can’t even do my homework right.
He may be a sucker but at least he is engaging with the topic. The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating as any of the bazinga takes on […]. No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.
The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don't look for ways to utilize, subvert and counter these technologies while they're still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating
That's because it's not artificial intelligence. It's marketing.
Oh my god it’s this post again.
No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance on leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.
And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.
Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.
The hexbear party line toward LLMs
this is a shitposting reddit clone, not a political party, but I generally agree that people on here sometimes veer into neo-ludditism and forget Marx's words with respect to stuff like this:
The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.
However you have to take the context of these reactions into account. Silicon valley hucksters are constantly pushing LLMs etc. as miracle solutions for capitalists to get rid of workers, and the abuse of these technologies to violate people's privacy or fabricate audio/video evidence is only going to get worse. I don't think it's possible to put Pandora back in the box or to do bourgeois reformist legislation to fix this problem. I do think we need to seize the means of production instead of destroy them. But you need to agitate and organize in real life around this. Not come on here and tell people how misguided their dunk tank posts are lol.
I think their position is heavily misguided at best. The question is whether AI is sentient or not. Obviously they are used against the working class, but that is a separate question from their purported sentience.
Like, it’s totally possible to seize AI without believing in its sentience. You don’t have to believe the techbro woo to use their technology.
We can both make use of LLMs ourselves while disbelieving in their sentience at the same time.
Is that such a radical idea?
We’re not saying that LLMs are useless and we shouldn’t try and make use of them, just that they’re not sentient. Nobody here is making that first point. Attacking the first point instead of the arguments that people are actually making is as textbook a case of strawmanning as I’ve ever seen.
Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.
As it stands, the capitalists already have the old means of information warfare -- this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do a predictive text -- but with filters installed by communists, rather than the PR arm of a company? That won't be nearly as convincing as just talking and organizing with people in real life.
Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it's still just a programme on a server. Computers are very, very fragile. I'm just not too worried about it.
It's not a new means of production, it's old as fuck. They just made a bigger one. The fuck is chat gpt or AI art going to do for communism? Automating creativity and killing the creative part is only interesting as a bad thing from a left perspective. It's dismissed because it deserves dismissal: there's no new technology here, it's a souped-up chatbot that's been marketed as something else.
As far as machines being conscious, we are so far away from that as something to even consider. They aren't and can't spontaneously gain free will. It's inputs and outputs based on predetermined programming. Computers literally cannot do anything non-deterministic; there is no ghost in the machine, the machine is just really complex and you don't understand it entirely. If we get to the point where a robot could be seen as sentient, we have fucking Star Trek TNG. They did the discussion and solved that shit.
The fuck is chat gpt or AI art going to do for communism?
I think AI art could be great, but ChatGPT as a concept of something that "knows everything" is very moronic
AI art has the potential to let random schmucks make their own cartoons if they input just a little bit of work. However, this will probably require a license fee or something so you're probably right
Personally I would love to see well-made cartoons about Indonesian mythology and stuff like that, which will never ever be made in the west (or Indonesia until it becomes as rich as China at least) so AI art is the best chance at that
Kinda, but like the cool ML is AlphaFold/ESM/MPNN/finite-element optimizers for CAD/QCD/quantum chemistry (coming soon(tm)). LLMs/diffusion models are ways of multiplying content, fucking up email jobs and static media creators, and presumably dynamic ones as well in the future.
I doubt people are aware that rn biologists are close to making designer proteins on like home pc and soon you can wage designer biological warfare for 500k and a small lab. Or conversely, making drugs for any protein-function related disease.
I doubt people are aware that rn biologists are close to making designer proteins on like home pc and soon you can wage designer biological warfare for 500k and a small lab. Or conversely, making drugs for any protein-function related disease.
Please elaborate in as much detail as possible, ideally with numerous hyperlinks. (I'm less surprised by this than you might think, but would greatly appreciate being clued into what's going on in this arena right now, as I've been largely cut off from information about it for years now.)
It's a glorified Speak & Spell, not one benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept within the confines of people who understand thoroughly what it is and what it can and cannot do? Yes. We shouldn't be using this for the same reason you don't use a gag dictionary for a research project. Grow up
Whenever the tech industry needs a boost some new bullshit comes up. Crypto, self driving and now AI, which is literally called AI for marketing purposes, but is basically an advanced algorithm.
And anyone pushing back is labelled a luddite who hates change. P.T. Barnum would have loved Redditors.
The best for me was self-driving. Like, trains are a thing, you know. But no, we need to redesign our road system to allow self-driving. My god
I think there's some kinda video game derived Matrix Brain thing going on here as well. Like at the end of the day a computer is a box that does math. Math isn't a force or element or particle or wave or whatever, it's a methodology used by humans to determine or predict various relationships between things. Math doesn't decide how anything works, it just describes it. All a computer does is Math and doing math doesn't change the world.
It would be pretty funny if scientists emulate a human brain only for it to just not work
r/ChatGPT must be the most braindead out of all the often-spurious AI subreddits.
Could a sentient AI trans its gender?
I mean that seriously, because sentience and sapience requires a sense of self, after all. Could these AI ever change anything about themselves, such as their own names, and the pronouns used to refer to them? I think until we ever get an AI that chooses of its own accord to change its Self in that way, we won't ever have true sentient AI.
And of course, knowing the types of Bazinga Brains who champion AI becoming sentient, they'd probably program out an AI's ability to self-identify, which would make the idea of a sentient AI moot
Complete nonsense.
As we all know, many idol-worshiping peoples have encountered gnomes and through worshiping and offering tribute to these gnomes, these gnomes become the hosts for powerful dark gods who reward their followers generously but are known to be fickle and demanding.
Silicon Valley is infamous for its bizarre polycules and their Ottoman harem-esque power struggles. Somehow, Sam Altman offended Aella_Girl's polycule, which happened to control the board of OpenAI.
Aella_girl is openly in sexual congress with a series of garden gnomes, whom she likely worships and has married, as it is known that gnomes usually demand a wife or your firstborn child.
Proof: https://cashmeremag.com/reddit-gonewild-aella-gnome-cam-53817/
She likely used the powers of this dark god to remove Sam Altman from the board but failed to meet its escalating demands or otherwise disappointed this entity, and as a result failed to remove him. It is known that when one disappoints a gnome or stops worshiping it, one's fortunes fall into a rapid decline, so if this happens to her then we know what likely happened. Either that, or Sam Altman is also in contact with a dark entity of some sort.
lmao did you hear this from a Romani couple from Indiana?
More likely than you think
I gotta admit: before LLMs were common I used to call bazinga-brain types "glorified Markov chains," because that's what most of them are. But now with ChatGPT I can't call them "glorified ChatGPT," because it truly is better at coming up with bullshit than they are, and it really can almost write better CRUD code than most of these bazinga brains. That's why I think they're so obsessed with this stuff.
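For anyone who hasn't seen one: the "glorified Markov chain" insult refers to a text generator that only ever looks at the current word and picks a random word that followed it in the training text. A minimal sketch in Python (the function names `build_chain` and `generate` are mine, just for illustration):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain, picking a random recorded successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

That's the whole trick: no grammar, no meaning, just recorded word-to-word transitions. An LLM is the same "predict the next token" idea scaled up enormously, with context windows and learned weights instead of a lookup table, which is why the comparison is both apt and unfair at the same time.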
Elon Musk could announce he's already been to Mars actually, twice actually, and half his fans would believe it.
As Marxists, we must carefully investigate all technological advance, especially those in computation, and of cybernetics we must be most curious of all.
Computation is at most the symbolic record of the movement of a thought through time. The best an AI can ever do is record and play back the process-of-thinking to us, and in the case of large language models, this computation produces works that will always pale in comparison to the minds it was trained upon. In spite of any psychic qualities that may be assigned to the electron, our silicon genius can never actually be a Sibelius or a Riemann.
Those fundamental limitations of computers, however, will matter little: the consistent history of the misuse of new computer technologies, by the capitalist powers, for controlling individuals and populations, confirms for us that the repression and social deterioration that we collectively experience will reach unbearable heights in the days to come.
People really do think a version of SmarterChild with more bells and whistles is really Skynet
Bring on Skynet already, since you seem to think you can, you cowards. Tired of all this advertising; shit or get off the pot.
Goddamn, 416 comments
I think we have reached the struggle session zone
420th comment
Has chatgpt solved any of these yet? Wouldn’t that be a sign it is actually “thinking?”
At this point, the thread is more about natural stupidity than it is artificial intelligence.
I am once again returning to this thread (for God knows why, probably because I am somewhat unhinged) to question whether one of these LLMs has done something or performed an action without user input. A very strong opinion, but I feel like LLMs are useful at this point only for cutting corners to solve bullshit problems. Honestly, I am kinda compelled to write a goddamn essay synthesizing material from Graeber with the current information coming out of tech-bro hell, because I feel like there's a lot there.
Even in my own job, my company is using GPT. To do what, you might ask? Send emails, create reminders about emails, schedule meetings, respond to client requests that require the intervention of a living human who might have something on their desktop that needs to be shown to the client or edited accordingly. Or create some kind of meeting summary. Again, great. What's the meeting about? Why can't we just restructure this conversation around the question: to what end?
We should all become rightfully intrigued when these "AI" begin acting of their own accord, but right now they're being controlled by people with an inhumane agenda, antithetical to the human experience. I guess this is the case in point for why the humanities shouldn't have been gutted in the West: you can't answer any of these questions with a formula that won't end in some form of light eugenics. What happens when it does act by itself and you use it for your bs work? Awesome job! You just reinvented slavery! 🥰
Functionally, of course, none of this matters, whether it's "AGI" or not, when it's a power grab of extreme proportions by the bourgeoisie. Whether they admit it or not, humans will still be needed to do their "work" (it's certainly not labor), and they will slowly use "AI" as a justification to reduce wages, benefits, and what have you.
Trust me bro, i'm as positive of this as I am of my NFT collection gaining value
I'd say most of us were redditors once... No point in tarring all with one brush
Being a redditor is a state of being, not just an account on a link aggregate site
It's a lifestyle sweaty
No, they get their tar until they stop being redditors
Yeah, that would probably wear out the brush. We're gonna need a lot of brushes to tar every
er