From what I've been able to piece together from the various theological disputes people have had with the murder cult, the only two real differences are that Ziz and friends are far more committed to nonhuman animal welfare than the average rat, and that they've decided the correct approach to any conflict is always to escalate. That makes them more aggressive about basically everything, which looks like a much deeper ideological gap than there actually is. I'm not going to evaluate whether these are reasonable conclusions to draw from the same bizarre set of premises that lead to Roko's Basilisk being a concern.
Man, after a long day this is the exact story I needed. Doing a vital public service as always, David.
To be fair, the highest-level claims of just about any conspiracy theory sound at least plausible. Even QAnon tends to start off with claims that are basically confirmed by the Epstein case before they start extending the conspiracy to more places, incorporating Jesus, and excluding their preferred Messiah figures.
OpenAI can't simply "add on" DeepSeek to its models, if only for the optics. It would be a concession, an admission that it slipped and needs to catch up, and not to its main rival...
I actually disagree here. I think Ed underestimates how craven and dishonest these people are. I expect they'll quietly integrate any efficiency improvements they can get from it and bluster through any investor questions about it. Their hope at this point has to be that more hardware is still better, and that scaling is still gonna be the thing that makes fetch happen. This, again, isn't a revolutionary new architecture, even if it is a significant improvement over anything Saltman and co have been doing.
This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.
How many times are AI people going to re-learn that LLMs don't have "awareness" or "reasoning" in a sense humans would find meaningful?
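You can demonstrate how thin that "temporal awareness" is in a few lines: the model only "knows" the date if someone injects it into the context. A minimal sketch, assuming the standard openai Python client (the model name is a placeholder):

```python
# Probe an LLM's "temporal awareness": it has none beyond whatever
# gets injected into its context window.
from datetime import date
from openai import OpenAI

client = OpenAI()

def ask_for_date(system_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What is today's date?"},
        ],
    )
    return resp.choices[0].message.content

# Without a date in the prompt, the model can only guess from training data.
print(ask_for_date("You are a helpful assistant."))

# With the date injected, it suddenly "knows", because we just told it.
print(ask_for_date(f"You are a helpful assistant. Today is {date.today()}."))
```

Same model, same weights; the only "awareness" in play is the string we handed it.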
Don't mind me, just imagining a brighter world where the people in power learn Settlers of Catan instead of Chess or Civilization.
You can't be this divorced if you've ever actually been married, since that tends to create and/or require connecting with another human being and feeling something not unlike love, at least for long enough to get the papers filed.
I don't want to say with absolute confidence that there's no scenario I can imagine to which a nuclear apocalypse would be preferable (the real kind, not the Fallout kind). But I have yet to hear one.
Goddammit why can't the murder cult story just stay morbidly fascinating? Now I've got to worry about implications and how the worst people are gonna use this as ammo.
I do actually have a mechanism for using the sharp edges of NVidia cards for dick mouse trapping purposes. And we could - hypothetically - use the extraneous power inputs to mine Bitcoin or something, maximizing efficiency!
Fascism really is "we have imperialism at home"
In the process of looking for ways to link up with homeschool parents who aren't doing it for culty reasons, I accidentally discovered the existence of a small but active subreddit for "progressive monarchists". It's titled r/progressivemonarchists, because their imagination in naming conventions only slightly outstrips their imagination for forms of government. Given how our usual sneer fodder overlaps with NRx, I figured there are others here who I can inflict this headache on.
I don't know if this is good news about the underlying question of how willing the nuts and bolts of society are to resist unlawful or monstrous policies. On the subject of complicity, I think the fact that we eventually joined the war has caused a deep cultural amnesia about how much influence the Reich had on the States and vice versa. Charles Lindbergh, Madison Square Garden, etc. We never really acknowledged how open our cultural and political structures are to authoritarianism, much less addressed those issues.
Thanks, I hate it.
Especially because Trump's legal teams have historically been more than incompetent enough to produce this kind of work on their own.
It is a long-established truth that it's significantly easier to con someone who thinks they're smarter than you. Also, as I think about it, there seems to be a reasonable corollary of their approach to Bayesian thinking: never question anything that matches your expectations, which is exactly how you get taken advantage of by the kind of grifter they're attached to. They've been thinking about the singularity for long enough that the Sams (Bankman-Fried, Altman, etc.) have a well-developed script for what they expect the first stages to look like, and it is, as demonstrated, very easy to fake that.
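To put some toy numbers on it (every figure below is invented for illustration): once a grifter has studied your script well enough to match it nearly as often as the real thing would, script-matching evidence has a likelihood ratio close to 1 and barely moves the posterior at all.

```python
# Toy Bayesian update with invented numbers: how much should
# "this matches my singularity script" actually move your belief?

prior_real = 0.01            # P(genuine singularity precursor)
p_match_if_real = 0.95       # the real thing matches the script
p_match_if_grift = 0.90      # a grifter who studied the script also matches

# Bayes' rule: P(real | match)
p_match = p_match_if_real * prior_real + p_match_if_grift * (1 - prior_real)
posterior_real = p_match_if_real * prior_real / p_match

print(f"prior {prior_real:.3f} -> posterior {posterior_real:.3f}")
# prior 0.010 -> posterior 0.011
```

A posterior that's indistinguishable from the prior means the confirming evidence was nearly worthless, so refusing to question it is a gift to anyone who knows the script.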
Dan Shipper, who has been testing Operator for a few days, found that it often cannot access websites because they have blocked OpenAI from crawling them.
Wait, so ghoulishly scraping the entire internet without regard for the performance impact on the sites you're scraping, and getting blocked by anyone with the good sense to do so, has downsides?!?
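If you're curious whether a given site has told OpenAI to get lost, GPTBot is OpenAI's published crawler user-agent, and Python's standard urllib.robotparser will read the verdict for you (the example.com URL is just a placeholder):

```python
# Check whether a site's robots.txt blocks OpenAI's crawler.
# "GPTBot" is OpenAI's documented crawler user-agent; swap in the
# site you actually care about for the placeholder URL.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

for agent in ("GPTBot", "*"):
    verdict = "allowed" if rp.can_fetch(agent, "https://example.com/") else "blocked"
    print(f"{agent}: {verdict}")
```

Note that robots.txt is purely advisory, which is why plenty of sites have moved on to blocking these user-agents at the CDN level instead.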
Is there any rundown on this backstory for people who missed it happening live over the last few years that doesn't get sidetracked into theological disputes with the murder cult?
Yeah. I mean, the AI developers obviously do have some responsibility for the systems they're creating, just like it's the architects and structural engineers who have a lot of hard, career-ending questions to answer after a building collapses. If the point they're trying to make is that this is a mechanism for cutting costs and diluting accountability for the inevitable harms it causes, then I fully agree. The best solution would be to ensure that responsibility doesn't get diluted, and to make all parties involved in the development and use of automated decision-making systems jointly and severally accountable for the decisions those systems make.
It falls into a broader type of tech hype based on the idea that if it would be good for something to work a certain way, then if we can make it work at all, it will obviously work in that optimal way. Like, it would be cool if we could get exponential growth in our rockets somehow (maybe they reproduce? Do the rockets fuck, Elon?), so therefore, assuming we can get rockets at all, we can definitely make them scale like that.
Call it the Milliways argument. Because if you've already done five thousand impossible things before breakfast, why not cap it off with lunch at the restaurant at the end of the universe?
That's why we need to combine them! AI on the blockchain can burn an unprecedented amount of electricity to impress VCs and get ever more funding! The line is going to go so up that all other lines will look down by comparison!