Architeuthis @awful.systems · Posts 5 · Comments 169 · Joined 2 yr. ago

Should be noted that it's mutual; Hanania has gone to great lengths to suck up to siskind, going back at least to the designer mouth bacteria thing.
And GPT-4.5 is terrible for coding, relatively speaking, with an October 2023 knowledge cutoff that may leave out knowledge about updates to development frameworks.
This is in no way specific to GPT-4.5, but it remains a weirdly undermentioned albatross around the neck of the entire LLM code-guessing field, probably because the less you know about what you told it to generate, the likelier you are to think it's doing a good job, and the enthusiastically satisfied customer reviews on social media that I've interacted with certainly seemed to skew toward the less-you-know types.
Even when the up-to-date version was released before the cutoff point, you are probably out of luck, since the newer version is likely way underrepresented in the training data compared to the previous versions that people may have been using for years by that point.
Nothing in my experience with LLMs or my reading of the literature has ever led me to believe that prompting one to numerically rate something and treating the result as meaningful would be a productive use of someone's time.
Still occasionally think about that bit in the o1 white paper where the openai researchers innocuously pose the question of what if our benchmarks for detecting hallucinations are shit actually, wouldn't that be something.
Implicitly assuming that the technology to terraform Mars is just around the corner is the "we'll become profitable once we hit AGI" of space exploration.
In today's ACX comment spotlight, Elon-anons urge each other to trust the plan:
in order to dissuade hypothetical agents from blackmailing you
There's also a whole thing with Yud accepting the many worlds interpretation as obvious truth that leads to (some) rationalists believing that getting killed in one timeline helps your surviving parallel selves by bolstering your case for being unblackmailable by said hypothetical agents, who are also from the future, which is why you can't negotiate with them directly.
Also "return the offense thousandfold" figures in LaVeyan satanism I think (cross referencing unsuccessful; too many black metal bands in search results) as a counterpoint to by far the least observed christian guideline of turn-the-other-cheek.
Vox happens to be downstream of EA money, coincidentally.
Lesbionest, there is no better proxy for polygenic fitness than your total word count at lesswrong and acx.
Baldness just makes you more aerodynamic, and in our eugenics enabled charter city/network state/seastead you'll be getting assigned a state sponsored waifu anyway (terms and conditions may apply, like being not unprovably not related to persons of dubious genetic heritage).
In a completely unexpected turn of events, this new experiment in mainstreaming eugenics is currently being boosted by siskind.
Could also be "don't worry about deepseek" type messaging that addresses concerns without naming names, to tell us that a drastic reduction in infrastructure costs was foretold by the writing of St Moore and was thus always inevitable on the way to immanentizing the AGI, ἀλληλούϊα.
It’s like you founded a combination of an employment office and a cult temple, where the job seekers aren’t expected or required to join the cult, but the rites are still performed in the waiting room in public view.
chef's kiss
The surface claim seems to be the opposite: he says that because of Moore's law AI rates will soon be at least 10x cheaper and, because of Mercury in retrograde, this will cause usage to increase muchly. I read that as meaning we should expect to see chatbots pushed into even more places they shouldn't be, even though their capabilities have already stagnated as per observation one.
- The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
Saltman has a new blogpost out he calls 'Three Observations' that I feel too tired to sneer at properly, but I'm sure it will be featured in pivot-to-ai pretty soon.
Of note, he seems to admit chatbot abilities have plateaued under the current technological paradigm, by way of offering the "observation" that model intelligence depends logarithmically on the resources used to train and run it (i = log(r)), so it's officially diminishing returns from now on (toy illustration at the end of this comment).
Second observation is that when a thing gets cheaper it's used more, i.e. they'll be pushing even harder to shove it into everything.
Third observation is that
The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
which is hilarious.
The rest of the blogpost appears to mostly be fanfiction about the efficiency of their agents that I didn't read too closely.
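To make the diminishing-returns reading concrete, here's a toy Python sketch (my own made-up numbers, nothing from the blogpost) of what i = log(r) actually buys you:

```python
import math

# Toy illustration (assumed numbers, not from the blogpost): if "intelligence"
# really scales as i = log10(r), then each extra unit of capability costs ten
# times the resources of the previous one.
for resources in [1, 10, 100, 1_000, 10_000]:
    intelligence = math.log10(resources)
    print(f"resources: {resources:>6} -> 'intelligence': {intelligence:.1f}")

# Capability climbs 0.0, 1.0, 2.0, 3.0, 4.0 while spend grows exponentially,
# i.e. linear gains for exponential cost: diminishing returns.
```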
a lot of kids
They had 3 kids last time they came up, which despite their posturing is not really a notable amount, and they're both nearing their 40s, so it's unlikely they'll hit quiverfull numbers.
"Genetic Enhancement: Prediction Markets for Future People" by Jonathan Anomaly
What a completely cursed presentation title. According to the first youtube transcription service that pops up on google, he means that we should use prediction markets to find out which diseases will be curable/treatable in the next however many years, so we can prioritize accordingly when doing polygenic embryo screening based family planning.
Eugenics enjoyer quotient: Mr Anomaly is an iq enthusiast who goes on to talk about how genetic screening starts with choosing a suitable partner. Also, we should establish something like a polygenic health index that represents an individual's genetic health to better systematize selection. This will be based on the individual's known genetics as well as family history, I'm assuming because getting tricked into marrying someone with a schizophrenic great uncle or an obese cousin is a serious concern for him.
This presentation came up on the subject of how Cremieux/TP0/Lasker got invited to give a talk at Stanford if he's only known for his race science bullshit while otherwise unaffiliated, and the answer is that the school of business faculty who organized the talks was into forecasting markets and almost definitely met him at this event.
So we have the broader rationalist cultic milieu to once again thank for bringing terrible people together, I guess.
Penny Arcade weighs in on deepseek distilling chatgpt (or whatever actually the deal is):
You misunderstand, they escalate to the max to keep themselves (including selves in parallel dimensions or far future simulations) from being blackmailed by future super intelligent beings, not to survive shootouts with border patrol agents.
I am fairly certain Yud has said something very close to that effect in reference to preventing blackmail from the basilisk, even though he tries to no-true-scotsman the zizians wrt his functional decision 'theory' these days.
Distilling is supposed to be a shortcut to creating a quality training dataset by using the output of an established model as labels, i.e. desired answers.
The expected end result, the new model inheriting biases from the reference model, should still hold, but using the very model you are distilling from as your base model would seem to be completely pointless.
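As a rough illustration of that pipeline, here's a minimal Python sketch (the function names are hypothetical placeholders, not any real API) of distillation as dataset bootstrapping: the teacher's answers become the labels the student is trained on.

```python
# Minimal sketch: the "teacher" is the established model whose outputs we
# treat as desired answers, the "student" is the new model trained on
# (prompt, teacher_answer) pairs instead of human-written labels.
# call_teacher and train_student are stand-ins, not a real API.

prompts = ["What is 2 + 2?", "Name a prime number greater than 10."]

def call_teacher(prompt: str) -> str:
    # Stand-in for querying the established model (e.g. over an API).
    canned = {"What is 2 + 2?": "4",
              "Name a prime number greater than 10.": "11"}
    return canned[prompt]

# Build the distillation dataset: teacher outputs become the labels.
distill_dataset = [(p, call_teacher(p)) for p in prompts]

def train_student(dataset):
    # Stand-in for ordinary supervised fine-tuning of the smaller model.
    for prompt, answer in dataset:
        print(f"fit student on: {prompt!r} -> {answer!r}")

train_student(distill_dataset)
```

Which is also why the teacher's biases tend to show up in the student: the student never sees an answer the teacher wouldn't have given.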
New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade
It can't be that the bullshit machine doesn't know 2023 from 2024; you must be organizing your data wrong (wsj)