April 19, 2024


No, Google's AI is not conscious

According to an eye-opening story in The Washington Post on Saturday, a Google engineer said that after hundreds of interactions with an unreleased, sophisticated artificial intelligence system called LaMDA, he believed the program had achieved a level of consciousness.

In interviews and public statements, many in the AI community have dismissed the engineer's claims, while some have pointed out that his account highlights how the technology can lead people to assign human traits to it. But the belief that Google's AI could be conscious arguably highlights both our fears and our expectations of what this technology can do.

LaMDA, which stands for Language Model for Dialogue Applications, is one of several large-scale AI systems that have been trained on large swaths of text from the internet and can respond to written prompts. They are tasked, essentially, with finding patterns and predicting which word or words should come next. Such systems have become increasingly good at answering questions and writing in ways that can seem convincingly human, and Google itself introduced LaMDA this past May in a blog post as a system that can "engage in a free-flowing way about a seemingly endless number of topics." But the results can also be wacky, weird, disturbing, and prone to rambling.
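LaMDA itself is not publicly available, but the next-word-prediction idea described above can be illustrated with a minimal sketch using the openly released GPT-2 model via the Hugging Face transformers library. This is an assumption-laden stand-in for illustration only, not Google's system or its actual code:

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# `transformers` library and the public GPT-2 model (not Google's LaMDA).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The weather in spring is"
# The model simply continues the prompt with statistically likely words.
result = generator(prompt, max_new_tokens=20)
print(result[0]["generated_text"])
```

The point of the sketch is only that such systems extend text by predicting likely continuations; whether the output reads as clever or as rambling depends on the prompt and on chance.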

Engineer Blake Lemoine reportedly told The Washington Post that he had shared evidence with Google that LaMDA was conscious, but the company did not agree. In a statement Monday, Google said its team, which includes ethicists and technologists, "reviewed Blake's concerns in accordance with our AI principles and informed him that the evidence does not support his claims."

On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave "in connection with an investigation of ethical concerns about artificial intelligence I had been raising within the company" and that he might be fired "soon." (He mentioned the experience of Margaret Mitchell, who led Google's Ethical AI team until the company pushed her out in early 2021, after she spoke out about the late-2020 exit of then co-leader Timnit Gebru. Gebru was ousted after internal squabbles, including one related to a research paper that the company's AI leadership told her to withdraw from consideration for presentation at a conference, or to remove her name from.)

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company's confidentiality policy.


Lemoine was not available for comment on Monday.

The continued emergence of powerful computing programs trained on massive amounts of data has raised concerns about the ethics governing the development and use of this technology. And sometimes developments are viewed through the lens of what may come, rather than what is currently possible.

Responses from those in the AI community to Lemoine's experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, "We have entered a new era of 'this neural network is conscious,' and this time it will take a lot of energy to refute it."
Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of LaMDA being conscious "nonsense on stilts" in a tweet. He quickly wrote a blog post noting that all such AI systems do is match patterns by pulling from enormous databases of language.

In an interview Monday with CNN Business, Marcus said the best way to think about systems like LaMDA is as a "glorified version" of the autocomplete software you might use to predict the next word in a text message. If you type "I'm really hungry so I want to go to," it might suggest "a restaurant" as your next words. But that is a prediction made using statistics.
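Marcus's autocomplete analogy can be made concrete with a toy sketch: count which word most often follows each word in a small sample of text, then "predict" by picking the most frequent follower. This is a deliberate oversimplification for illustration, not how LaMDA or any production model actually works:

```python
# Toy "autocomplete": suggest the word that most often follows the
# previous word in a tiny sample corpus. Purely illustrative.
from collections import Counter, defaultdict

corpus = "i want to go to a restaurant i want to go to sleep".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word: str) -> str:
    """Return the statistically most frequent next word, or '' if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else ""

print(suggest("to"))  # -> "go", the most common follower of "to" here
```

Real language models replace these raw counts with learned probabilities over long contexts, but the underlying move is the same: pick a likely continuation, not an understood one.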

“No one should think that autocompletion, even on steroids, is conscious,” he said.

In an interview, Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute, or DAIR, said Lemoine is a victim of numerous companies claiming that conscious AI or artificial general intelligence, an idea that refers to AI capable of performing human-like tasks and interacting with us in meaningful ways, isn't far off.
For example, Ilya Sutskever, co-founder and chief scientist at OpenAI, tweeted in February that "today's large neural networks may be slightly conscious." And last week, Google Research vice president and fellow Blaise Aguera y Arcas wrote in an article for The Economist that when he started using LaMDA last year, "I increasingly felt like I was talking to something intelligent." (That article now includes an editor's note pointing out that Lemoine has since been placed on leave after claiming in an interview with The Washington Post that LaMDA, Google's chatbot, had become "conscious.")

"What is happening is that there is such a race to use more data, more compute, to say you've created this general thing that knows everything, answers all your questions or whatever, and that's the drum you've been beating," Gebru said. "So how are you surprised when this person takes it to the extreme?"


In its statement, Google noted that LaMDA has undergone 11 "distinct AI principles reviews," as well as "rigorous research and testing" regarding quality, safety, and its ability to make statements grounded in facts. "Of course, some in the broader AI community are considering the long-term possibility of conscious or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not conscious," the company said.

"Hundreds of researchers and engineers have had conversations with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Google said.