There is a way of seeing the world where you look at a blade of grass and see "a solar-powered self-replicating factory". I've never figured out how to explain how hard a superintelligence can hit us, to someone who does not see from that angle. It's not just the one fact.
It's almost as if basing an entire worldview upon a literal reading of metaphors in grade-school science books and whatever Carl Sagan said just after "these edibles ain't shit" is, I dunno, bad?
He's talking like it's 2010. He really must feel like he deserves attention, and it's not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.
Those are 100% my weird late-night word choices. You can reuse them for whatever.
I agree with your sentiment, but the wording was deliberate. Scaffolding is inherently temporary; it is only erected in service of some further goal. What I wanted to get across is that Yud's philosophical world was never going to be a permanent addition to any field of science or maths, for lack of any scientific or formal content. It was always a far-fetched alternative, fueled by science-fiction stories and contingent on a technological path that never came to be.
Maybe an alternative metaphor is that Yud wanted to develop a new kind of solar panel by reinventing electrodynamics, and started by putting his ladder against his siding and climbing up to his roof to call the aliens down to reveal their secrets. A decade later, the ladder sits fallen and moss-covered, but Yud is still up there, trapped by his ego, ranting to anybody who will listen and throwing rocks at the contractors installing solar panels on his neighbors' houses.
A year and two and a half months since his Time magazine doomer article.
No shutdowns of large AI training runs; in fact, they've only expanded. No ceiling on compute. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.
Just another note in a panic that accomplished nothing.
It's also a bunch of brainfarting drivel that could be summarized as:
Before we accidentally make an AI capable of posing an existential risk to humanity, perhaps we should figure out how to build effective safety measures first.
Or
Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.
Before we accidentally make an AI capable of posing an existential risk to humanity, perhaps we should figure out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like: A) solve ethics, B) hardcode the result into every AI. I.e., garbage philosophy meets garbage sci-fi.
We get it, we just don't agree with the assumptions. Also, love that he is now broadening the paperclip thing into more things, missing the point that the paperclip example abstracts away from the specific wording of the utility function (because, like disaster-prep people planning for a zombie invasion, the specific incident doesn't matter much for the important things you want to test). It is quite dumb. Did somebody troll him by saying 'we will just make the LLM not make paperclips, bro?' and it broke him so badly that he's now disappearing up his own ass with this talk about alien minds?
e: depressing to see people congratulating him for a good take. Also "could you please start a podcast" (a Schrödinger's sneer).
did somebody troll him by saying 'we will just make the LLM not make paperclips, bro?'
rofl, I cannot even begin to fathom all the 2010-era LW posts where peeps were like, "we will just tell the AI to be nice to us uwu" and Yud and his ilk were like "NO, DUMMY, THAT WOULDN'T WORK BECAUSE X, Y, Z." Fast-forward to 2024, and the best example we have of an "AI system" turns out to be the blandest, most milquetoast yes-man entity, thanks to RLHF (aka the just-tell-the-AI-to-be-nice-bruv strat). Worst of all for the rats: no examples of goal-seeking behavior or instrumental convergence. It's almost like the future they conceived on their little blogging site has very little in common with the real world.
If I were Yud, the best way to salvage this massive L would be to say "back in the day, we could not conceive that you could create a chatbot good enough to fool people with its output by compressing the entire internet into what is essentially a massive interpolative database; ultimately, these systems have very little to do with the sort of agentic intelligence that we foresee."
But this fucking paragraph:
(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered "much more likely" while still being not likely enough.)
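(For anyone who hasn't met the construction he's invoking: "English letter-triplet probabilities in a Markov chain" just means counting which letter tends to follow each two-letter context and sampling from those counts. Here's a toy sketch in Python, my own throwaway illustration and not anything from his post; the sample text and function names are made up:

import random
from collections import defaultdict, Counter

def train_trigram_model(text):
    # Count which letter follows each two-letter context (letter-triplet statistics).
    counts = defaultdict(Counter)
    for i in range(len(text) - 2):
        counts[text[i:i + 2]][text[i + 2]] += 1
    return counts

def generate(model, seed, length, rng=None):
    # Sample one character at a time from the trigram distribution.
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        context = "".join(out[-2:])
        followers = model.get(context)
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

# Hypothetical sample text, purely for illustration.
sample = "to be or not to be that is the question whether tis nobler in the mind to suffer "
model = train_trigram_model(sample)
print(generate(model, "to", 80))

It spits out locally English-looking gibberish, which is the whole point of the monkey comparison: better than uniform random typing, still nowhere near reproducing any specific long target text.)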
Ah, the sweet, sweet aroma of absolute copium. Don't believe your eyes and ears, people: LLMs have everything to do with AGI, and there is a smol bean demon inside the LLMs that is catastrophically misaligned with human values and will soon explode into the superintelligent lizard god the prophets have warned about.