
Could a Large Language Model Be Conscious? Within the next decade, we may well have systems that are serious candidates for consciousness.

www.bostonreview.net

5 comments
  • I can't take this shit seriously while the authors eat flesh

    • Why not? Ethical or moral values have about as much bearing on the scientific outcome as how attractive the researchers are.

      • Ok so assuming good faith:

        There's a huge cohort of idiot-savant-type technologists who are obsessed with the idea of machine consciousness and its (according to them) incredibly important implications. Yet these same people by and large absolutely refuse to engage in any behaviour modification in response to the incredibly strong evidence of consciousness we find in extant earthlings, human and non-human alike.

        So I can't take their claims of being interested in this any more seriously than a teenager's musings on the meaning of life, because they don't actually believe anything they're saying.

        Knowledge without belief in it is not knowledge, it is mere rhetoric and wordplay.

        If you care about machine consciousness and think it would carry any weight or demand modification of our behaviour you would already be acting with urgency against humanitarian crises and animal agriculture.

  • Multimodal AI has been a goal for quite some time - eventually we'll reach a point where a multimodal system represents intelligence reasonably accurately. Approaching this from a technical perspective and trying to define intelligence is, in my opinion, a much more boring conversation than talking more broadly about intelligence.

    Most people would agree that most living things are intelligent in some fashion. In particular, if you ask whether pets (dogs, cats, etc.) can be intelligent, people will agree. The more removed a living being is from our emotions and our society, the less likely people are to call it intelligent, with the caveat that bigger life is often granted intelligence before smaller life is. Insects, for example, are often dismissed as unintelligent on account of their size. A striking counterexample is bees, which are often ranked among the smartest animals on earth because they exhibit higher-level behavior such as choosing non-violence, displaying emotions, and more. We also know that bees communicate with each other through dance, and communication is often considered a marker of higher intelligence.

    I think where people often get lost is in how to compare different modes of intelligence. Historically, we're quite bad at measuring this, even in humans. Most people are familiar with the 'Intelligence Quotient,' or IQ, as a measure of how 'smart' someone is. Unfortunately these measures are far from objective; while IQ tests have improved over the years as we've refined our Western, education-centric thinking, they still carry serious biases and chronically underestimate intelligence in inequitable ways (minority individuals still score lower than non-minority individuals from similar socioeconomic and other backgrounds). It is very difficult (one might suggest impossible) to compare one aspect of intelligence, such as visual processing, with another, such as emotional intelligence. When we extend this thinking to non-human individuals, how does one create an IQ score that includes factors we don't measure in humans, such as olfactory intelligence (how well one can identify smells)? If humans score very low in comparison with well-known discriminators such as canines, how does that factor into an overall measure of intelligence? Ultimately we must accept that measures of intelligence, while useful, are not objective in a broader scope. We can fairly accurately compare the visual-processing ability of two systems, whether we consider them living or not, but we cannot use such comparisons to measure intelligence as a whole; otherwise we would have to consider visual-processing AI intelligent.

    This is why I think the most interesting conversation is one about intelligence as a whole. On an IQ test, I could ace the visual-intelligence portion, scoring above 200 points, then completely fail every other section (get nothing correct) and still be considered a low-IQ human. Yet when an AI does the same, we don't consider it intelligent. Why is that? Why don't we apply these tests when we speak about animals? Is it because we have no good way of translating what we wish to test to a subject (an animal) that cannot respond in a language we understand? How might this thinking change if we found a way to communicate with animals, or expanded our knowledge of their languages? Somewhat ironically, the very intelligence we are questioning is providing answers to long-standing questions about the animal kingdom: AI has given us access to animal communication in much more depth, revealing that bats argue over food, distinguish between genders, have names, and that mother bats speak to their babies in an equivalent of “motherese.”

    It's easy to brush off considerations like this in the context of AI, because some of the arguments being made seem inane against the historical backdrop of how we measure intelligence. But I think many of us don't realize just how much we've internalized from society about what "intelligence" means. I would encourage everyone to re-examine the root itself, the word intelligence, and rather than deferring to existing ontologies for a definition, to consider the context in which those definitions were set and whether it may be time to redefine the word, or our relationship to it.