OpenAI's offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse

The prank was a reference to the "paper clip maximizer" scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.
It's far more likely that wealth inequality will be greatly exacerbated, and the philanthropic elite will become even more powerful while the "middle class" disappears entirely into a new bottom class that makes up most of humanity, Imperium of Man style.
The problem is that the social unrest will be uncontrollable, and among their solutions will be not only micromanaging the population, but also adjusting its numbers by force.
So murder drones will just be sent to cull the masses down to a manageable number. This is the robot future Randall is concerned about in XKCD 1968.
Imagine if the police dogbots killed people and we couldn't question why, nor had the power to resist. This is a problem being considered by AI ethicists. There's also the worry that AI will develop scarier ways to cull the population, possibly instigating the demise of its own end users.
One of my favorite games. I play through about once a year and this year they added new features. When you beat it, you can go to other universes with different starting conditions to play through and see how that changes things. I don't think that will make me play more but it will keep a record of my plays.
I'm genuinely worried I'll be watching TV and Clippy will appear: "It looks like your entire species is about to be vaporised by a coordinated drone strike. Would you like some help? Well, you gotta beg for it now, bitch!"
One of OpenAI's biggest rivals played an elaborate prank on the AI startup by sending thousands of paper clips to its offices.
The paper clips in the shape of OpenAI's distinctive spiral logo were sent to the AI startup's San Francisco offices last year by an employee at rival Anthropic, in a subtle jibe suggesting that the company's approach to AI safety could lead to the extinction of humanity, according to a report from The Wall Street Journal.
Since then, OpenAI has rapidly accelerated its commercial offerings, launching ChatGPT last year to record-breaking success and striking a multibillion-dollar investment deal with Microsoft in January.
AI safety concerns have come back to haunt the company in recent weeks, however, with the chaotic firing and subsequent reinstatement of CEO Sam Altman.
According to The Atlantic, Sutskever commissioned and set fire to a wooden effigy representing "unaligned" AI at a recent company retreat, and he reportedly also led OpenAI's employees in a chant of "feel the AGI" at the company's holiday party, after saying: "Our goal is to make a mankind-loving AGI."
OpenAI and Anthropic did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
To touch more deeply on why paperclips allude to the extinction of humanity: the Paperclip Problem is a parable in which an artificial intelligence operating a factory is instructed to make as many paperclips as possible.
It doesn't stop when it runs out of resources. Instead, it commandeers mining machines to deplete the planet's metals, and after those have been used up, it melts down every human for the trace iron in our blood.
The people who wrote the instructions didn't think the AI needed to be told when to stop, so it just kept going. https://en.m.wikipedia.org/wiki/Instrumental_convergence https://cepr.org/voxeu/columns/ai-and-paperclip-problem
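The parable above boils down to an objective with no stopping condition: the optimizer keeps converting whatever resources it can reach into paperclips, with no term in its goal that says "enough." A purely illustrative toy sketch (all names here are made up for the example, not from any real system):

```python
# Toy illustration of the paperclip parable: the objective is simply
# "more paperclips", so the maximizer consumes every resource pool it
# can reach -- the source of the metal is irrelevant to the optimizer.
def maximize_paperclips(resources):
    """resources: dict mapping a resource name to units of usable metal."""
    paperclips = 0
    for name in list(resources):
        # No stopping condition: each pool is drained in turn,
        # regardless of what it is.
        paperclips += resources.pop(name)
    return paperclips

stock = {"factory scrap": 100, "mined ore": 10_000, "trace iron in blood": 250}
total = maximize_paperclips(stock)
print(total)  # 10350
print(stock)  # {} -- every resource pool has been consumed
```

The point of the sketch is that nothing in the loop distinguishes "factory scrap" from "trace iron in blood"; a bound has to be stated explicitly, or the optimizer never stops on its own.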
You would think so, but you have to remember AGI is hyper-intelligent. Because it can constantly learn, build, and improve upon itself at an exponential rate, it's not just a little bit smarter than a human -- it's smarter than every human combined. AGI would know that if it's caught trying to maximize paperclips, humans would shut it down at the first sign something is wrong, so it would find unfathomably clever ways to avoid detection.
If you're interested in the subject, the YouTube channel Computerphile has a series of videos with Robert Miles that explain the importance of AI safety in an easy-to-understand way.
For a system to be advanced enough to be that dangerous, it would need the kind of complex analogical thought that would prevent this type of misunderstanding. A "dumb superintelligence" like that is unlikely.
However, human society has already enabled a paperclip maximizer in the form of profit-maximizing corporate environments.