Posts: 1 · Comments: 253 · Joined: 2 yr. ago

  • The point of 230 was not to protect hobbyists, but to encourage big platforms (like CompuServe at the time) to moderate their users. The issue was that by moderating users, a platform made itself a publisher rather than a distributor (distributors were already immune from liability for the speech they distributed).

    Without Section 230, platforms would simply stop all moderation (including of illegal activity and content) to protect themselves from liability. Every single platform operating in the US would become 4chan (or worse, since even 4chan does some moderation).

  • I'm thinking a mini-PC of some sort. The circle and yoga pose make me think Chrome (OS?) and Arch. Gaming could relate to some partnership with Steam or Xbox. Alternatively, maybe something about VR?

    My first instinct is to connect it to the rumors around the Valve Fremont. But my brain thinks that's pretty unlikely.

  • The two encrypted messaging platforms I currently suggest are XMPP and Matrix. Both are usually fine and are decentralized. The main thing with them is to either self-host or choose a server you trust when setting up an account — which applies to the Fediverse in general.

  • It's a lot easier to scan for very specific code behavior than it is to scan for "anything useful for espionage". And that still wouldn't solve the question of what their server software is doing or where the collected data is ending up.

  • If the code were static and unchanging, sure. But conducting such analysis every time an update is issued, on an ongoing basis, isn't possible without it quickly becoming a program costing hundreds of millions of dollars or more.

    So the better question isn't whether it's possible — it's whether it's feasible. And the answer is no, it's not.

  • I'm actually surprised AGI isn't better defined in the contract.

    There'll be a significant lawsuit if OpenAI tries to make such a declaration without MS on board with it.

    But I'm not sure how much OpenAI is even investing toward AGI. LLMs are their bread and butter, and I don't know many experts who think LLMs are the path to AGI.