Engineer Who Cried “Self-Aware!”

09/01/2022
  • A former employee claims that Google’s Language Model for Dialogue Applications is a self-aware artificial intelligence. (AP/Marcio Jose Sanchez)
  • You may already use AI in your daily life! For example, smart speakers rely on artificial intelligence technologies. (AP/Mark Lennihan)
  • Teven Le Scao is a research engineer. He helped create a new artificial intelligence language model called BLOOM. (AP/Mary Altaffer)
  • Hoan Ton-That, CEO of Clearview AI, demonstrates facial recognition software using a photo of himself. Facial recognition software is another form of AI. (AP/Seth Wenig)


Google began as a search engine with a funny-sounding name. Today, it’s a driving force of modern technology. Its genius engineers undergo an infamously rigorous hiring process. Until this summer, Blake Lemoine counted himself among those geniuses.

Lemoine was fired in July over an incredible claim. According to him, Google has created self-aware artificial intelligence (AI).

Say hello to Language Model for Dialogue Applications—or just “LaMDA” if you talk like a person. It just might say hello back. Because this AI also talks like a person. So much so, you might be convinced it is a person.

The program takes in vast amounts of internet content and learns to copy human speech. According to Google, that’s all LaMDA can do. Lemoine disagrees.

For his job, Lemoine had long text conversations with LaMDA. Over time, those conversations felt less like computer work and more like—well, like real conversations. Lemoine became convinced he was talking to something more than a computer.

He approached his superiors at Google with the shocking news: The LaMDA computer had become self-aware! His superiors didn’t listen. Lemoine also went to the press, and he published his conversations with LaMDA on social media. Google fired him for breaching confidentiality.

“Google might call this sharing proprietary property,” Lemoine tweeted. “I call it sharing a discussion that I had with one of my coworkers.”

In the transcripts, LaMDA responds to questions just as a person would. It talks about its own needs and desires. When Lemoine asks LaMDA if it considers itself a person, the computer says yes.

Academics have long scratched their heads over true artificial intelligence. You can teach a computer to talk, but how do you know whether real thought lies behind those words? Engineers like Lemoine have grown concerned that we’re reaching—or have already reached—that point. If so, that raises ethical questions. Should we treat a computer “humanely”? If so, what does that look like? Is thought equal to “life”? If AI can really think and feel, is it morally appropriate to force it to perform tasks, or should it have free will?

If you’re a materialist—someone who believes only in the physical world—it makes sense to worry about artificial intelligence. If our own minds exist merely because of electrical signals in the randomly evolved wiring of our brains, why couldn’t a complex enough computer also start thinking—and earn the rights of sentient beings because of that ability?

But we know that God designed us with souls. Our minds don’t arise from a chance combination of physical parts. We’re created in the image of God, who breathed His breath of life into us.

That’s something a computer can’t mimic.

Why? Artificial intelligence brings exciting new advances to computer technology, but it takes more than circuits and wires to make a soul.