Posts: 0 · Comments: 37 · Joined: 2 wk. ago

  • Games were standardized at $60 back around 2005. Prior to that, games cost whatever price publishers decided they should. Chrono Trigger cost $80 USD at launch in 1995: https://fantasyanime.com/squaresoft/ctabout.htm Adjusting for inflation, that would be just shy of $170 USD now (rough math sketched at the end of this comment). It was not uncommon for Nintendo 64 games to retail for $70-80: https://retrovolve.com/n64-games-were-ridiculously-expensive-when-they-first-came-out/

    Video games (particularly console and handheld games) have always been an expensive hobby. Prices also haven't been adjusted for inflation in the 20 years since they were largely standardized, which is why games have become a microtransaction hell.

    Honestly, this will likely lead to the return of video game demos. Because video games were prohibitively expensive in the 80s and 90s, demos were a huge part of the culture: you could try a game out ahead of time to get a feel for whether it was worth the price tag.
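
    A rough back-of-the-envelope version of that inflation adjustment, assuming approximate CPI-U figures (the 2025 value in particular is an assumption; swap in current BLS numbers for an exact result):

    ```python
    # Rough CPI-based inflation adjustment for Chrono Trigger's $80 launch price.
    # Both CPI values are approximate annual averages, not official figures.
    CPI_1995 = 152.4   # approximate US CPI-U, 1995
    CPI_2025 = 320.0   # approximate US CPI-U, 2025 (assumption)

    launch_price = 80.00
    adjusted = launch_price * CPI_2025 / CPI_1995
    print(f"${adjusted:.2f}")  # ~$168, i.e. just shy of $170
    ```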

  • Generally speaking, people used ChatGPT back when it first came out, had a bad experience, and never fucked with it again, so their understanding of it is frozen in time. Most people know next to nothing about the current state of AI unless they're researchers or enthusiasts. They're completely unprepared for the actual state of the industry.

  • When I say "how can you be sure you're not fancy auto-complete", I'm not talking about being an LLM or even the simulation hypothesis. I'm saying that the way LLM neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know the weights in your own nervous system aren't causing any given stimulus to always produce a specific response along the most heavily weighted pathways? That's how auto-complete works. It's just predicting the most statistically probable response to the input after it's been filtered through the neural network (toy sketch at the end of this comment). In our case it's sensory data instead of a text prompt, but the mechanics remain the same.

    And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes whose inner workings we're only barely starting to understand, just like we're only barely starting to understand our own neurology and consciousness.
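
    As a toy sketch of what "most weighted pathway" prediction looks like (the vocabulary and weights below are made up purely for illustration, not taken from any real model):

    ```python
    # Toy "fancy auto-complete": fixed weights deterministically map an input
    # (the "stimulus") to the most probable next token. Everything here is
    # invented for illustration only.
    import math

    vocab = ["the", "cat", "sat", "mat", "dog"]

    # Hypothetical learned weights: rows = context words, columns = scores
    # for each candidate next token.
    weights = [
        [0.1, 0.8, 0.2, 0.1, 0.7],  # "the"
        [0.0, 0.1, 0.9, 0.1, 0.0],  # "cat"
        [0.2, 0.0, 0.1, 0.8, 0.1],  # "sat"
        [0.1, 0.1, 0.1, 0.1, 0.1],  # "mat"
        [0.0, 0.2, 0.8, 0.0, 0.1],  # "dog"
    ]

    def next_token(context):
        """Score every candidate token and return the most probable one."""
        # Encode the context as word counts over the vocabulary.
        counts = [context.count(w) for w in vocab]
        # Weighted sum through the fixed "pathways".
        logits = [sum(c * weights[i][j] for i, c in enumerate(counts))
                  for j in range(len(vocab))]
        # Softmax turns scores into probabilities; the strongest pathway wins.
        total = sum(math.exp(z) for z in logits)
        probs = [math.exp(z) / total for z in logits]
        best = max(range(len(vocab)), key=lambda j: probs[j])
        return vocab[best], probs[best]

    print(next_token(["the", "cat"]))  # same input -> same output, every time
    ```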

  • It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or another. Are you sure that you yourself are not just fancy auto-complete? We're dealing with shit like the hard problem of consciousness and free will vs determinism. Philosophers have been debating these issues for millennia and we're not much closer to a consensus now than when they started.

    And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, which would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.

  • Not really. There is a correct choice. This is the equivalent of having two people dying who need transplants and deciding to murder someone to harvest their organs so the other two can live. What Janeway did was morally bankrupt, and the only person on that crew with any semblance of morality was The Doctor.