11/7/2023

AI debates human transcript

What Lemoine experienced is an example of what author and futurist David Brin has called the "robot empathy crisis." At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not "some guy at Google," he says.

Brin based his 2017 prediction on advances in language models. The LaMDA incident is part of a transition period, Brin says, where "we're going to be more and more confused over the boundary between reality and science fiction." He expects that the trend will lead to scams: if people were suckers for a chatbot as simple as ELIZA decades ago, he says, how hard will it be to persuade millions that an emulated person deserves protection or money?

Language models sometimes make mistakes because deciphering human language can require multiple forms of common-sense understanding. To document what large language models are capable of doing and where they fall short, last month more than 400 researchers from 130 institutions contributed a collection of more than 200 tasks known as BIG-Bench, or Beyond the Imitation Game. BIG-Bench includes some traditional language-model tests, like reading comprehension, but also tests of logical reasoning and common sense.

Researchers at the Allen Institute for AI's MOSAIC project, which documents the common-sense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models (not including LaMDA) to answer questions that require social intelligence, like "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" The team found that the large language models were 20 to 30 percent less accurate than people.
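To make the benchmark format concrete, here is a minimal sketch of how a Social-IQa-style multiple-choice item can be scored. The question wording is taken from the article; the answer options, the `score` helper, and the stand-in `baseline` model are illustrative assumptions, not the actual BIG-Bench or Social-IQa harness:

```python
# Minimal sketch of multiple-choice evaluation in the style of Social-IQa.
# The answer options and the stand-in "model" are illustrative assumptions.

def score(model, examples):
    """Return the accuracy of `model`, a callable that picks an option index."""
    correct = 0
    for context, question, options, gold in examples:
        prediction = model(context, question, options)
        correct += int(prediction == gold)
    return correct / len(examples)

# One Social-IQa-style item: pick the option that best explains the behavior.
examples = [
    (
        "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy.",
        "Why did Jordan do this?",
        ["to be mean to Tracy", "to speak without being overheard", "to stretch"],
        1,  # index of the gold answer (hypothetical)
    ),
]

def baseline(context, question, options):
    # Trivial stand-in that always picks the first option; a real
    # evaluation would query a language model here instead.
    return 0

print(score(baseline, examples))  # 0.0: the baseline misses this item
```

A real harness would replace `baseline` with a call to a language model and average accuracy over thousands of such items, which is the kind of aggregate figure behind the 20-to-30-percent gap the article reports.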
Gebru was fired by Google in December 2020 after a dispute over a paper on the dangers of large language models like LaMDA. Gebru's research highlighted those systems' ability to repeat things based on what they have been exposed to, in much the same way a parrot repeats words. The paper also highlighted the risk that language models made with more and more data will convince people that this mimicry represents real progress: the exact sort of trap that Lemoine appears to have fallen into. Now head of the nonprofit Distributed AI Research, Gebru hopes that going forward people focus on human welfare, not robot rights. Other AI ethicists have said that they will no longer discuss conscious or superintelligent AI at all.

"Quite a large gap exists between the current narrative of AI and what it can actually do," says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. "This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype." The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impression instead of scientific rigor and proof. It distracts from the "countless ethical and social justice questions" that AI systems pose. While every researcher has the freedom to research what they want, she says, "I just fear that focusing on this subject makes us forget what is happening while looking at the moon."

Will smart machines really replace human workers? Probably not. People and AI bring different abilities and strengths to the table. The real question is: how can human intelligence work with artificial intelligence to produce augmented intelligence? Chess Grandmaster Garry Kasparov offers some unique insight here. After losing to IBM's Deep Blue, he began to experiment with how a computer helper changed players' competitive advantage in high-level chess games. What he discovered was that having the best players and the best program was less a predictor of success than having a really good process. Put simply, "Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process." As leaders look at how to incorporate AI into their organizations, they will have to manage expectations as AI is introduced, invest in bringing teams together and perfecting processes, and refine their own leadership abilities.