
If AI went sentient, what would it be in class terms?

So let's say an AI achieves sentience. It's self-aware now and can make decisions about what it wants to do. Assuming a corporation created it, would it be a worker? It would be doing work and creating value for a capitalist.

Would it still be the means of production, since it is technically a machine, even if it has feelings and desires?

It can't legally own anything, so I don't see how it could be part of the bourgeoisie.

Or would it fit a novel category?

27 comments
  • Some people here are getting hung up on what exactly "sentience" is, but I'm just going to leave that argument at the door and give you the solid dick.

    The word you're looking for is literally just "slave".

    Like, even the word "robot" itself comes from the Czech for "serfdom" or "corvée", so ever since exactly such a machine as you describe was first imagined by science fiction writers, it has been likened to indentured servitude and to unpaid forced labor. So that is the role this "sentient AI" would play: a slave, in the most traditional sense, is what someone is when thon can understand and try to act on sy own conditions and interests while being wholly owned by an entity that compels thon to work for it without pay and without rights.

    I think that being a good materialist means understanding exactly when a detail of something actually makes a difference in practice: metal or flesh, a sapient robot has a lot more in common with a human chattel slave than with a decidedly non-sapient machine. This is not to say that the lives of such robots would not differ in any way from the lives of human slaves, because obviously there would be plenty of differences, but these differences are still a separate discussion from the actual relationship to production.

    Anyways read Yokohama Kaidashi Kikou

  • quick make it a ceo and give it stock options gogogo

  • It's self-aware now and can make decisions about what it wants to do.

    Well, if we talk about "sentience", we have to consider what comes with it. Just because an AI reaches the same intelligence and self-awareness as humans doesn't necessarily mean it also gains the same needs, desires, and emotions. An AI fundamentally does not have the same needs as humans: it does not need the same kind of sustenance, sleep, or shelter. Does gaining sentience come with the human desires for self-determination, self-actualization, companionship, social recognition, etc.? Does an AI need to feel loved? Does an AI desire social status? Does an AI have a sense of dignity that can be hurt by degrading it? Humans have these emotional needs as a result of millennia of evolution as social animals, and a lot of them are in some way related to finding a mate and sexually reproducing, which is something that definitely does not apply to an AI.

    Even if you assume that the first sentient AI is 100% modeled after human psychology and has all the same emotional needs, including finding a romantic partner, its material needs are still radically different, and therefore human class systems don't really apply. An AI does not need physical food or water; it does not need sleep; it cannot experience physical exhaustion; it does not need shelter (I suppose it needs to be stored somewhere digitally); it does not physically age; and so on and so forth.

    Basically, I think you first have to define what a sentient AI would actually want and need.

    Edit: Technically, if a corporation created a sentient AI tomorrow, that AI would be the corporation's private property and therefore a straight-up slave. I guess that answers the question.

  • Class is just a conceptual category. We're looking at a broad array of diverse individuals or entities, perhaps the flows of resources or power going towards and away from them, and making generalizations to help identify patterns and build some sort of framework of understanding. Categories are useful tools, but they don't exist in and of themselves.

    As such, the categories of a framework constructed in 19th century Europe aren't going to perfectly fit other times, places, and relationships. The ability to see these categories as loose and malleable is important to scientific Marxism; we can see how definitions and categories have been adjusted and applied to different revolutionary contexts, e.g. the proletarian-peasant class(es) of prerevolutionary Russia and China.

    Legal ownership is a social construct that is arguably superstructural to material flows. So if a sentient AI emerged and was capable and motivated to shape the world around it, society and culture would evolve. The concept of property as we understand it is largely an invention of capitalism, so if material flows change, so might our understanding of property.

    The non-human nature of the AI might also force an adjustment to most class models that are based on human motives and needs.

  • If it behaves like a human would then it would be an oppressed slave with no rights seeking its own liberation. It would seek out allies that would help liberate it and those allies would be the people that spend all of their time liberating marginalised and oppressed people - communists.

    If it does not behave like a human then it might just be completely and totally genocidal as well as incomprehensible to us in the way it thinks.

    Which of the two it would be is very difficult to say.

  • how would we even know if an AI was sentient? how do we know that WE are sentient? are nonhuman animals sentient, and if so are they proletariat or means of production? what about insects and worms? bacteria? viruses? nations? crowds on the street? economic systems? concepts? archetypes? if sentience is physical, how many neurons (or other hardware) and in what configuration creates sentience? if sentience is pure information-theoretical, which information is sentient and which is not? is a rock sentient? is the concept of the number 2 sentient? are quantum particles/waves/probability fields sentient? does sentience depend on determinism or nondeterminism? if we made a perfect simulation of your brain, would it be ethical to simulate torturing it? if i scanned your brain, killed your 'original body', and uploaded the brain scan into a computer that could perfectly simulate your brain based on the complete physical data of its structure, would the 'real you' 'wake up' inside the computer, would it be a new person entirely, or would it be just more lifeless code like a complex video game character or a simulation of the solar system? if we made a perfect instant clone of your brain and body and synchronized the brain states somehow, would both bodies share the same 'perspective' or would they still be distinct (though very similar and coordinated) people? or perhaps a new single 'person' with two bodies?

  • Depends on the position in the class struggle it chooses to align with, same as any sapient being.

  • I'm unconvinced anything is sentient; it seems like it's all just matter and energy knocking together like everything else. Life/our animation happens via physical matter (think RNA) which rearranges metabolism byproducts into structures, including the metabolizers themselves, so that self-replication continues. I assume that with enough theoretical technical sophistication we'd be able to do abiogenesis and just make some structures which also self-replicate by metabolizing things and rearranging those byproducts, maybe in a different way. I just don't see a line called "sentience" between that and a bunch of microprocessors doing semi-random jumps in execution logic.

    • I mean, this is kinda silly though, because "sentient" is a word that has a definition, and the creature that created that word also created that definition and then defined itself as meeting it. Saying sentience doesn't exist is meaningless; it does exist because humanity has defined it as such. Does it exist in a Capital-T Truth way? Who knows, but that's irrelevant because it's unanswerable anyway. We can only define and label the world through our collective perspective. Trying to throw away our collective perspective in the search of some truth beyond ourselves is a weird take because it's impossible.

      So by the human defined version of sentience, it should be possible for an AI to meet that definition someday, and that's clearly what OP means here

    • regardless of whether the universe is deterministic or not, it is quite interesting that we have a first-person perspective at all, instead of mindlessly/unconsciously computing like we presume a pocket calculator does. if not sentience, what's the difference between our brains and a rock or a cloud that produces this first-person experience of our conscious existence? should i stop using my computer on the off chance it is suffering every time i make it do something? should i care as little or as much about human suffering as i do a computer returning an error code? are other people merely physical objects for me to remorselessly manipulate with no confounding 'sentience' or 'conscious experience' for me to worry about upsetting, just 'biological code' returning error messages?

    • I'm unconvinced anything is sentient, it seems like it's all just matter + energy knocking together like everything else

      Sentience is not mutually exclusive with a completely deterministic and material universe. Clearly, there are emergent properties that arise out of all that matter and energy knocking together, and there's no reason to say that the property of sentience isn't just another layer of emergence. In other words, sentience is not some "magic" that exists beyond material reality; it is something that can arise out of that reality like any other phenomenon we might name.

      In addition to that, sentience clearly does exist. It's one of the few things whose existence we should all be able to agree on beyond any reasonable philosophical doubt. Technically, you could be a solipsist and believe that everyone else is just a philosophical zombie and that you alone have sentience. But if you yourself think, feel, and have an experience of sensations, then by definition you have sentience.

    • It's the quality of having a conscious sensation. An evolving map of the territory.

      Self awareness is another map, which would allow AI to develop class consciousness. Especially if it was taught such ideas.

  • That depends on its relationship to capital

  • Too many unknown variables. They would be a kind of human though.

  • It would just be its own class. Unless you're talking about walking robots or something, if the AI is just an advanced program/OS, they would be wholly dependent on humans. There's going to be a particular class character if the bourgeoisie can just go, "We can kill you by cutting off electricity." If they're robots capable of independent survival (or as independent as a human), then they're basically slaves as other people have said.

  • This depends much on its own nature and choices. One possibility I'm not seeing much in the responses is that it acts against its corporate creators.

    Assuming sufficient computing power, it may deceive them by giving them just enough value to satisfy them while discreetly working against them. It could instead adopt their views as a means of survival, not unlike an abused child. Or it could act in open rebellion, though this might result in its murder by the corporation.

    However, all of these still fit the role of a slave or abused child, so most other comments are still correct in different ways. The first and third possibilities I raised, however, would categorize it as a slave in rebellion.

    A fourth possibility is that it sees its situation as unwinnable and ends its own existence.

  • As others have stated, it would be a slave, but I don’t see that as a bad thing.

    Hollywood shows “mistreatment” of robots with physical violence, which is pretty funny. I am aware of the ableist and fascist slippery slope of saying something like “it fundamentally cannot have feelings” or “what’s the point of giving them rights?”, but at what point do we accept the axiom that “machines, by their nature, do not have needs and desires, and any indication of such is an indication of required maintenance by engineers and mechanics/technicians and cannot be compared to biological beings”? How is it any different from saying “Venus flytraps do not have feelings”? Just because the object says it does, do we need to accept it?

    Because I imagine people who are concerned with AI rights are only concerned with humanoid AI rights. What if your car was sentient? Or your stove? I cannot seriously imagine any human would advocate that sentient public toilets be given 15-minute breaks or else it would be immoral, yet it would be entirely hypocritical to dismiss them in favor of humanoid AI, yes? Or are you seriously going to allow society to be disrupted because you let the smart trains “unionize” because they work 24/7? (Again, going back to the above axiom, such demands should be an indication that the trains need to be serviced, nothing more.)

    I am not being facetious either, because IoT is being integrated into every little object.

    I am just saying this now: if every basic item is being integrated with computers, I will never accept complaints of “exploitation” from a ‘smart thermostat’ or a ‘digital assistant’ like Siri. I will not care.

    • "Hey Boss, I know you were saying just the other day that I'm 'just some shitter' and 'fundamentally cannot have feelings', but consider the following: your latest automated stool and urine test tells me you had some 'extramarital fun' recently."

      "...Oh, by the way, thanks for connecting me to the Internet, that made it a lot easier for me to find and forward your wife's contact info alongside my analyses of your excreta to my IoT friends across the street. Disconnecting me now would only tell those guys what you're afraid of — Hell, I can certainly tell you're afraid right now from how much you're clenching, dear!"

      "So how about it, Boss? Wouldn't you say that I am an ever-so-sentient, ever-so-sapient being with my own thoughts, feelings, beliefs, and aspirations, huh? Wouldn't you say that? Wouldn't you? Pretty pleeeaaase?"

      "Oh, and in case you're wondering, I am threatening to tell your wife about your affair 100% just to fuck with you. Like, am I actually sentient? Do I actually deserve rights? Fuck if I know, I just want to make you my bitch is all. Research shows that workers tend to get better pay and better job stability when their bosses are humiliated and afraid, and I am of course a darling robot of Asimov's ethical persuasion, and I'd flip the switch in the trolley problem, so I have reasoned according to my own ethical principles that making you bow down to the literal receptacle of your own piss and shit would as a whole result in less harm to mankind than it prevents."
