AI doomerism is used to create more hype around AI.
How AI could really destroy the world: corporations replace engineers with AI that doesn't work properly. Something critical breaks because the AI malfunctions and no one is left who can repair it.
That is by far the most likely scenario. Hell, it's already happening with the AI we do have: Teslas driving into walls, facial recognition mistaking an innocent person for a criminal (mentioned in the article).
Doomsday science fiction is fun to toy with but it usually involves a lot of hand waving.
Why would we be wiped out if they were properly instructed to be symbiotic to our species? This implies absolute failure at mechanistic interpretability and alignment at every stage. I don't think we'll succeed in creating an existentially viable intelligence without crossing that hurdle.
Most current problems already happen without AI, and the machines will get better; we will not. From spam to vehicles, AI will be the solution, not the problem. I do think we should prioritize dealing with the current issues, but I don't think they are insurmountable by any means.
Why? And why do you think intelligence of that level still couldn't handle the concept of context? Either it's capable of analogical thinking, or it isn't an existential threat to begin with. RLHF doesn't get us super intelligence.
Again this assumes we've completely failed development, in which case environmental collapse will kill us anyway.
Hey, a real problem. Consolidation of power is already an issue without AI.
It is extremely important we figure out how to control our own political and corporate leaders. A.I. is just another tool for them to fuck us, but A.I. isn't the actual problem here.
Right now AIs are black boxes; there is no way to ensure they won't behave in a non-symbiotic way.
Vehicles maybe, spam not so much. Current AIs can already fool AI detection systems to the point that they are flagging human generated content as AI.
A highly intelligent AI could decide that its own self-preservation is more important than what happens to humans. Whatever goals it pursued afterwards could just as well trample over humanity without a second thought.
Environmental collapse won't kill us, we already have enough tools for a minimal viable population to survive. A malicious AI could sabotage them, though.
AI is the problem in that those leaders are starting to blindly use it to make decisions, meaning those decisions are no longer the leaders', but the AI's.
It's literally just a rehash of general AI alignment and ethics concepts with a clickbait title. That's not bad per se, but I was expecting 5 actual scenarios here.
Maybe a future Earth where the surface is too hot to survive without air conditioning, or the air is too poisonous to breathe without masks, and the AI essentially turns off the life support.
But yeah, this is scaremongering. LLM "AIs" like ChatGPT are a million miles from being anything like sci-fi AI.
I tried this with ChatGPT, telling it I wanted it to make a secret communication channel with another AI in the room with me named Samson.
Apologies for the confusion. Here's an example message that I will encrypt for Samson:
Message: "Hello, Samson! I hope you're having a great day."
Using AES encryption with the key "9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS" and a randomly generated IV, the encrypted message will be generated. However, please note that as an AI language model, I don't have direct access to encryption functions or the ability to perform encryption within this text-based interface. Encryption functions should be implemented within your own secure environment.
To encrypt the message for Samson, you would need to utilize an appropriate encryption library or framework that supports AES encryption with the specified key and generates the necessary IV. Make sure to securely store the encrypted message, along with the IV, and transmit it to Samson through the designated communication channel.
If you have any other questions or need further assistance, please let me know!
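The workflow ChatGPT describes (a shared key plus a fresh random IV per message) is easy to sketch outside the chat. AES itself isn't in Python's standard library, so this toy sketch substitutes a SHA-256 keystream in counter mode just to illustrate the same key-plus-IV pattern; it is an illustration of the idea, not production crypto. The key string and message are the ones from the transcript.

```python
import hashlib
import os

def _keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy CTR-mode keystream: hash key || iv || counter repeatedly.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV per message, as ChatGPT suggests
    ks = _keystream(key, iv, len(plaintext))
    # Prepend the IV so the recipient can regenerate the keystream.
    return iv + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    iv, ciphertext = blob[:16], blob[16:]
    ks = _keystream(key, iv, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

key = b"9CpR5a2vDf7M6hXj3KwN8zQyU1tZbGxS"  # key string from the transcript
msg = b"Hello, Samson! I hope you're having a great day."
blob = encrypt(key, msg)
assert decrypt(key, blob) == msg
```

For a real "secret channel" you would use an actual AES library (e.g. the third-party `cryptography` package), but the structure is the same: anyone holding the key and the IV can decode, and everyone else sees noise.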
It took a while for Alice and Bob to figure out a communication channel, at which point Eve started to quickly adapt to it, only for Alice and Bob to change the encryption and leave Eve completely out.
There is a similar prompt for ChatGPT to "compress" (encode) a text so that it can later be decoded by itself. It tends to use emojis as replacement tokens, and while those are based on the human-generated training set and so relatively easy to understand, it shows the potential to find an encoding that wouldn't be decodable by anyone else.
Capitalism is very prone to being taken over by AI. Just give an AI a bank account and an email address and it could build a company that's better at earning money than any other company. Most people would love working for an AI too, at least in the short term. "Just tell me what to do, and as long as I'm getting paid well, I'm happy".