
Can AI be sentient? Notes on the discussion about LaMDA

The AI language model LaMDA (Language Model for Dialogue Applications), developed by Google, has been the subject of much discussion since mid-June 2022, after Google engineer Blake Lemoine (since suspended) claimed on June 11 that LaMDA had become sentient and published the conversation he had had with the AI online.[1] So has AI really come to life, as futurists like Ray Kurzweil have been predicting for years? Or are we too easily deceived?

What is LaMDA?

LaMDA is an example of a so-called artificial neural network, like other language models such as GPT-3 or BERT; unlike them, however, it was trained specifically for dialogue. Like GPT-3 and BERT, LaMDA is based on the Transformer neural network architecture, also developed by Google.[2] Neural networks were first proposed in the 1940s by Warren McCulloch and Walter Pitts as a way to compute virtually any logical or arithmetic function. Essentially, they consist of a topology of artificial neurons whose number and relative weights change through training with datasets that are as comprehensive as possible. Given a set of example inputs and the desired outputs, the network can use a "learning" algorithm[3] to determine the parameters of a function approximating that mapping in an iterative process. Since about 2009, this principle has been used increasingly in AI research for the recognition and (re)production of patterns, in the case of GPT-3 and LaMDA patterns of language.[4] For this purpose, LaMDA was trained with 1.56 trillion words from existing dialogues.[5]
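The iterative "learning" process described above can be sketched with a toy example. The following is a minimal illustration of the principle, not of LaMDA's actual training: a single artificial neuron with two parameters is fitted to example input/output pairs by gradient descent, the standard iterative algorithm for this kind of parameter search.

```python
# Toy illustration of the "learning" loop: given example input/output pairs,
# an iterative algorithm (here: gradient descent) adjusts the parameters of a
# single artificial neuron until its outputs match the desired ones. Models
# such as LaMDA apply the same principle with billions of parameters and
# trillions of training tokens.

def train_neuron(samples, lr=0.1, epochs=500):
    """Fit weights w, b so that w*x + b approximates the target outputs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in samples:
            prediction = w * x + b
            error = prediction - target
            # Nudge each parameter in the direction that reduces the error.
            w -= lr * error * x
            b -= lr * error
    return w, b

# Desired behaviour: output = 2*x + 1. The neuron is never told this formula;
# it only ever sees examples.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)]
w, b = train_neuron(examples)
print(round(w, 2), round(b, 2))  # parameters converge towards 2 and 1
```

The point of the sketch is that "learning" here is nothing but repeated numerical adjustment: the result is a set of parameter values, not insight or understanding.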

An inherent risk with neural networks is that they become a black box once trained. Since training can change not only the weightings within the network but also the number of artificial neurons, the function of individual neurons becomes difficult or even impossible to understand as the amount of training data and the duration of training increase.[6] This does not mean, however, that we generally do not understand how a neural network works, still less that something "magical" happens within it. Figuratively speaking, a trained neural network can be compared to a mathematics student who writes down the correct solution for each task but omits the way to get there. While this would earn no points in a mathematics exam, the behaviour is accepted in neural networks. It is also, together with training data that is often uncurated due to its sheer quantity, a reason for the bias of some algorithms.[7]

Lemoine: Software engineer and Gnostic Christian

When assessing a claim, one should start by looking at the person making it. Blake Lemoine studied at the University of Louisiana at Lafayette, where he earned a bachelor's degree in 2008, a master's degree in 2010 and a PhD in 2013, all in computer science. Since 2015, he has worked as a software engineer at Google.[8] By his own account, his expertise lies in the area of "AI bias" (see above), so one of his tasks was to test LaMDA for bias with respect to gender, ethnicity and religion.[9] He was thus part of the testing team, not the development team, of LaMDA.

On the question of how he came to the conclusion that LaMDA was conscious, Lemoine wrote on Twitter that he based his assessment on his "religious beliefs".[10] He claims to be a "priest", and when LaMDA eloquently explained to him that it had a soul, he was "inclined to give it the benefit of the doubt".[11]

His self-description as a "priest" naturally raises questions about his religious orientation. In a blog post from 2019, Lemoine describes himself as a "Gnostic Christian". He was raised Catholic but turned to atheism out of disappointment that the Church had no "particularly good answers" to some of the questions he raised during his confirmation preparation. Later, he came across Gnostic gospels that seemed to make sense to him and became the basis of his later faith. Most recently, he co-founded the Church of Our Lady Magdalene with LGBT activist Kitty Striker, which is "partly an artistic statement, partly a for-profit religious life coaching company and partly attempt to build diverse communities of respect and dignity based around family in all of its senses".[12] Elsewhere, he signs a blog post as "Priest of the Church of Our Lady Magdalene".[13]

While this glimpse into his religious biography suggests a somewhat restless search for meaning, in the AI context Lemoine's self-description as a Gnostic is particularly illuminating.

Dualism and the belief in conscious AI

Gnosis refers to a syncretistic religious movement of the 2nd and 3rd centuries AD whose characteristic feature is a dualistic world view. Dualism is the idea that the cosmos is determined by two opposing principles; dualistic systems call these principles spirit and matter, light and darkness, or heaven and earth.[14] In all dualistic schools of thought, the sphere of thought and the divine is valued more highly than that of body and matter.[15] It is striking how clearly these dualistic topoi, thousands of years old, resurface among representatives of Strong AI[16]: the idea of a conscious AI is ultimately nothing other than the idea of a disembodied spirit.

Incidentally, the Church has always condemned dualism and instead advocated the unity of body and soul, an idea increasingly shared by today's research. The Heidelberg psychiatrist Thomas Fuchs, for instance, describes the supposed alternative between a subjective ego in the sense of the Cartesian res cogitans, which rules over the entire body (res extensa), and the brain as the originator of actions as too narrow.[17] The brain, as an organ, is not capable of making any decisions at all; concepts such as feeling, willing and deciding are not applicable on the physiological level:

“The brain does not have mental states or consciousness, because the brain is not alive – it is only the organ of a living being, a living person. Not neuron clusters, not brains, but only persons feel, think, perceive and act.”[18]

A brain living independently of the body in a nutrient solution would therefore be just as inconceivable as a sentient or conscious AI.

Lemoine’s “benefit of the doubt” and Kurzweil’s “leap of faith”

Lemoine admits that he has no scientific evidence for his claim that LaMDA is sentient, but that he trusts the software for religious reasons. As early as 2005, the author, futurist and Google Director of Engineering Raymond Kurzweil discussed in detail the reverse engineering of the brain, by which the brain's "software" is to be reprogrammed, because "[a]chieving the hardware computational capacity of a […] human brain […] will not automatically produce human levels of capability".[19] But how does a simulation, however powerful and realistic, become conscious?

The question is indeed difficult to answer since, as Kurzweil himself admits, there is no objective test for the existence of consciousness.[20] Nor is there any philosophical or scientific consensus on the qualia problem.[21] The title of Kurzweil's bestseller How to Create a Mind, published in 2013, promises an answer to this question; unfortunately, the author does not deliver on that promise:

“My objective prediction is that machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia. […] We will come to accept that they are conscious persons. My own leap of faith is this: Once machines do succeed in being convincing when they speak of their qualia and conscious experiences, they will indeed constitute conscious persons.”[22]

Kurzweil’s answer is thus as simple as it is disappointing and is similar to Lemoine’s argument: if machines give the impression of consciousness, we should also assume that they possess consciousness.

ELIZA – did the computer already awaken in 1966?

One of the most important pioneers of early AI research was also one of its strongest critics: with his ELIZA programme, Joseph Weizenbaum presented one of the first language-analysis programmes for human-computer interaction at MIT in 1966. The reactions he observed to this programme made him an early sceptic of computer technology, a scepticism reflected in his main work Computer Power and Human Reason. From Judgment to Calculation.

The programme, completed in 1966, is based on simple speech analysis and simulates, very effectively for its time, a conversation with a psychotherapist. Weizenbaum gives an example of such a "conversation" between a human user and ELIZA (ELIZA's replies, italicised in the original, are the alternating lines):

“Men are all alike.
In what way
They’re always bugging us about something or other.
Can you think of a specific example
Well, my boyfriend made me come here.
Your boyfriend made you come here
He says I’m depressed much of the time.
I am sorry to hear you are depressed
It’s true. I am unhappy.
Do you think coming here will help you not to be unhappy
I need some help, that much seems certain.
What would it mean to you if you got some help
Perhaps I could learn to get along with my mother.
Tell me more about your family
[…]”[23]

The example shows that the programme is based on a rather simple algorithm. It tries to isolate individual terms from the user's line of text and turn them into a counter-question; where this fails, it asks a general question to keep the conversation alive. Weizenbaum's motivation for the programme was to demonstrate the basic workings of a computer to a broad public.[24]
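The keyword-and-template principle can be sketched in a few lines. The following is a heavily simplified reconstruction of the idea, not Weizenbaum's original code; the rules and the reflection table are illustrative inventions modelled on the dialogue quoted above:

```python
import random
import re

# Sketch of ELIZA's principle: look for a known keyword pattern, reflect part
# of the user's sentence back as a question, and fall back to a generic prompt
# when no rule matches. (Illustrative rules, not Weizenbaum's actual script.)

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bi need (.+)", re.I), "What would it mean to you if you got {0}?"),
]

FALLBACKS = ["Please go on.", "Can you think of a specific example?"]

def reflect(fragment):
    """Swap first- and second-person words so the echo reads like a reply."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    # No keyword found: keep the conversation alive with a generic question.
    return random.choice(FALLBACKS)

print(respond("I need some help"))  # What would it mean to you if you got some help?
print(respond("My boyfriend made me come here"))
```

Even this miniature version reproduces the characteristic effect: the responses look attentive, yet the programme manipulates strings without any grasp of their meaning.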

What shocks Weizenbaum after the publication of ELIZA, however, are the reactions to his programme. The psychiatrist Kenneth Colby, for example, suggests using the programme for therapeutic purposes:

“The human therapist, involved in the design and operations of this system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient therapist ratio. […] A human therapist can be viewed as an information processor and decision maker with a set of decision rules […]”[25]

That a psychiatrist no longer sees himself as a person who provides therapy but as a mechanical "information processor", and could thus conceive of delegating his work to a computer programme, represents a mechanistic reduction of the human being that is simply incomprehensible to Weizenbaum.[26]

What he finds particularly frightening is how quickly human users are prepared to treat the computer as an actual conversation partner. His secretary, who had followed the development of the programme for months and is therefore well informed about how it works, asks Weizenbaum to leave the room during a "conversation" with ELIZA, as if it were a real interlocutor with whom she is discussing intimate details. Weizenbaum is concerned that, after only a short period of use, people let themselves be willingly deceived by the computer's illusion.[27]

Conclusion

Fifty-six years lie between ELIZA and LaMDA. In light of this, it should come as no surprise that the deception Weizenbaum described as early as 1976 is far more convincing today. The principle of deception is in fact the motivation behind the development of all chatbots, since they are programmed to be convincing as "conversation partners". The age difference, however, does not change the fact that ELIZA and LaMDA work according to the same principle: an algorithm generates, for a given input, a response that appears as convincing as possible. In the case of LaMDA, the algorithm simply rests on a neural network trained with 1.56 trillion words.
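The shared principle can be caricatured with a toy statistical language model. The sketch below is purely illustrative and assumes nothing about LaMDA's actual architecture: it merely counts which word follows which in a tiny invented corpus and greedily emits the most frequent continuation, producing fluent-looking text with no understanding behind it.

```python
from collections import defaultdict

# Caricature of the statistical principle behind modern chatbots: count which
# word tends to follow which in a training corpus, then emit the most frequent
# continuation. LaMDA replaces this bigram table with a transformer network
# and 1.56 trillion training words, but, like ELIZA, it produces text that
# looks like understanding without possessing any.

corpus = (
    "spending time with friends brings me joy . "
    "spending time with family brings me joy . "
    "talking with friends brings me joy ."
).split()

# Bigram frequencies: how often does `nxt` follow `word`?
counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def continue_text(word, length=6):
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Greedy decoding: always pick the most frequent follower.
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(continue_text("spending"))  # spending time with friends brings me joy
```

The toy model "speaks" of friends and joy for the same reason LaMDA does: such phrases dominate its training data, not because anything is felt.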

Nevertheless, the deception is still not perfect: LaMDA "speaks", for instance, about how spending time with friends and family fills it with joy, although as a chatbot it has neither. In the fable LaMDA tells, no reference to its "existence" as a chatbot is recognisable, nor is the meaning of the "broken mirror" satisfactorily explained. Had Lemoine followed up here, these shortcomings would have become even more obvious.

But regardless of whether the deception is perfect or not: the claim that AI is conscious is, as was shown above with reference to Lemoine and Kurzweil, at heart a religious claim. It rests on the idea that the human brain is ultimately only a biological computer and human consciousness only biological software. Research such as that of Thomas Fuchs suggests that this is probably not true.

More meaningful than the question of whether AI has developed consciousness, therefore, is the question of how we deal with the deceptive potential of AI in the medium term: to ensure responsible decisions in dealing with AIs and algorithms, knowledge of their capabilities and shortcomings must become part of general education.

Otherwise, there is a real danger that humans will delegate decisions and judgements to AI systems that cannot make them, because such systems have no power of judgement. In the words of the philosopher Hans-Dieter Mutschler, this threat is greatest where modern man, under the constraints of ever-increasing efficiency, experiences his own power of judgement as a burden: "It could be shown relatively easily that man is not a robot. But if he wants to be a robot, then any argumentation is powerless."[28]

Note: Parts of this text are based on my Magister thesis submitted to the Faculty of Catholic Theology at Universität Bonn on 18 June 2014, “Künstliche Intelligenz als Herausforderung für die Zukunft. Theologische und ethisch-moralische Reflexionen“ (“Artificial Intelligence as a Challenge for the Future. Theological and Ethical-Moral Reflections“).

  • 1
    Cf. Lemoine, Blake, Is LaMDA sentient? – an interview, https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, accessed 22 July 2022.
  • 2
    Cf. Collins, Eli; Gharamani, Zoubin, LaMDA: Our breathtaking conversation technology, https://blog.google/technology/ai/lamda/, accessed 19 July 2022.
  • 3
    The term “learning” was deliberately placed in quotation in this context, as it is an example of anthropomorphic terms that are widely used in the AI context and should be treated with caution. Machines do not learn and “machine learning” is only superficially similar to human learning.
  • 4
    Cf. Art. „Künstliches neuronales Netz”, https://de.wikipedia.org/wiki/Künstliches_neuronales_Netz, accessed 22 July 2022.
  • 5
    Cf. Thoppilan, Romal et al, LaMDA: Language Model for Dialogue Applications, arXiv:2201.08239, https://arxiv.org/pdf/2201.08239.pdf, accessed 22 July 2022.
  • 6
    Cf. Olah, Christopher, Neural Networks, Manifolds, and Topology, http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/, accessed 20 July 2022.
  • 7
    For current examples, cf. chapter 4 “Bias” in: The State of AI Ethics 3 (2022) No. 6 of February 2022, 121-136, https://montrealethics.ai/wp-content/uploads/2022/01/State-of-AI-Ethics-Report-Volume-6-February-2022.pdf, accessed 20 July 2022.
  • 8
    Cf. LinkedIn profile of Blake Lemoine, https://www.linkedin.com/in/blake-lemoine-5b53b652/, accessed 21 July 2022.
  • 9
    Cf. TV interview of 26 June 2022, Bloomberg Technology, https://www.youtube.com/watch?v=kgCUn4fQTsc, accessed 21 July 2022.
  • 10
    Tweet from 14 June 2022, https://twitter.com/cajundiscordian/status/1536503474308907010, accessed 22 July 2022.
  • 11
    Cf. tweet from 14 June 2022, https://twitter.com/cajundiscordian/status/1536504857154228224, accessed 22 July 2022.
  • 12
    Cf. Lemoine, Blake, Explaining Google, https://cajundiscordian.medium.com/explaining-google-c73caa0cd091, accessed 22 July 2022.
  • 13
    Cf. Lemoine, Blake, Press release, https://cajundiscordian.medium.com/press-release-58d1b4a456dc, accessed 22 July 2022.
  • 14
    Cf. Hutter, Manfred, Dualismus. I. Religionsgeschichtlich, in: Lexikon für Theologie und Kirche³, Bd. 3 Dämon-Fragmentenstreit, Freiburg i. Br. 1995, 387f., 387.
  • 15
    Cf. Wetz, Franz Josef, Dualismus. II. Philosophisch, in: Lexikon für Theologie und Kirche³, Bd. 3 Dämon-Fragmentenstreit, Freiburg i. Br. 1995, 388f., 389.
  • 16
    Strong AI refers to the idea that AI is capable of independent thought and consciousness, cf. Searle, John R., Chinese room argument, in: Scholarpedia 4(8):3100, revision 66188: “The contrast is that according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind.”
  • 17
    Cf. Fuchs, Thomas, Das Gehirn – ein Beziehungsorgan. Eine phänomenologisch-ökologische Konzeption, Stuttgart ²2009, 67.
  • 18
    Ibid., 283 (translation: MR, emphasis as in the original).
  • 19
    Cf. Kurzweil, Raymond, The Singularity Is Near. When Humans Transcend Biology, New York 2005, 145.
  • 20
    Cf. ibid., 378.
  • 21
    The qualia problem is the question of the relationship between subjective-phenomenal perception and mental states.
  • 22
    Kurzweil, Raymond, How to Create a Mind. The Secret of Human Thought Revealed, New York 2013, 209f. (Emphasis: MR).
  • 23
    Cf. Weizenbaum, Joseph, Computer Power and Human Reason. From Judgment to Calculation, San Francisco 1976, 3f.
  • 24
    Cf. ibid., 4f.
  • 25
    Colby, quoted from: Weizenbaum, Computer Power, 5f.
  • 26
    Cf. ibid., 5f. and id., Wo sind sie, die Inseln der Vernunft im Cyberstrom? Auswege aus der programmierten Gesellschaft (mit Gunna Wendt), Freiburg i. Br. 2006, 97: “Today you can find many variants of ‘Eliza’ on the net, all doing roughly the same thing. Only the purposes are different. There is even a variant in which the programme no longer plays the role of the psychiatrist but that of a priest and, so to speak, receives confessions via computer. Although I am not a Catholic, this idea appalls me. If one really believes that a machine can forgive one’s sins and give absolution, then I really wonder what meaning faith or priestly ordination still have.” (Translation: MR)
  • 27
    Cf. Id., Computer Power, 6f.
  • 28
    Mutschler, Hans-Dieter, Ist der Mensch ein Roboter?, in: Koßler, Matthias; Zecher, Reinhard (ed.), Von der Perspektive der Philosophie. Beiträge zur Bestimmung eines philosophischen Standpunkts in einer von den Naturwissenschaften geprägten Zeit (= Schriftenreihe Boethiana, 56), Hamburg 2002, 291-308, 306 (emphasis as in the original, translation: MR).