The Truth About AI Alphabet Soup (ANI, AGI, ASI) – TechTalks

This article is part of Demystifying AI, a series of articles that (attempt to) disambiguate the jargon and myths surrounding AI.

AI is often explained using the categories of Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).[1] This strange conceptual framework adds nothing real, yet it turns up in discussion after discussion.[2] If you are unfamiliar with these categories, consider yourself lucky and move on to another, more substantial article. If you are not so lucky, I urge you to keep reading.

First of all, lamenting categorizations, as I am about to do, has limited value, because categories are arbitrarily similar and distinct depending on how we choose to categorize. For example, the ugly duckling theorem demonstrates that a swan and a duckling are exactly as similar as two swans if we are free to manipulate the properties used for comparison. All differences are meaningless unless we have prior knowledge of which differences matter. Alas, this article will unpack these suspicious categories from a business perspective.
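
For readers who want to see the theorem at work, here is a minimal sketch, my own and not part of the original argument. It assumes a hypothetical three-object universe and treats every subset of that universe as a predicate, then counts, for each pair of objects, how many predicates hold for both. The count comes out identical for every pair, which is the theorem’s point: without weighting some properties over others, no two things are more alike than any other two.

```python
from itertools import chain, combinations

# Hypothetical three-object universe. Under the ugly duckling theorem, a
# "predicate" is any subset of the universe (the objects it holds true for).
objects = ["swan_1", "swan_2", "duckling"]

def all_predicates(universe):
    """Every subset of the universe, i.e., every possible predicate."""
    return chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1)
    )

def shared_predicates(a, b, universe):
    """Count the predicates that hold for both a and b."""
    return sum(1 for p in all_predicates(universe) if a in p and b in p)

# Every distinct pair shares exactly 2^(n-2) predicates: the swan and the
# duckling are precisely as "similar" as the two swans.
for a, b in combinations(objects, 2):
    print(a, b, shared_predicates(a, b, objects))
# swan_1 swan_2 2
# swan_1 duckling 2
# swan_2 duckling 2
```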

Artificial narrow intelligence (ANI) is often confused with weak artificial intelligence. John Searle, a philosopher and professor at the University of California, explained in his seminal 1980 paper, “Minds, Brains, and Programs,” that weak artificial intelligence would be any solution that is both narrow and a superficial doppelganger of intelligence. Searle explains that such research would be useful for testing hypotheses about minds, but would not be minds.[3] ANI cuts that in half and allows researchers to focus on the narrow and superficial while ignoring minds altogether. In other words, ANI purges intelligence and minds from artificial intelligence and makes “artificial intelligence” possible without doing anything at all. After all, everything is narrow, and if you squint hard enough, anything is a superficial doppelganger of intelligence.

Artificial general intelligence (AGI) is the idealized solution many have in mind when they talk about AI. While researchers work on the narrow and the superficial, they talk about AGI, which represents the single story of AI, dating back to the 1950s, with a revival in the past decade. AGI implies two things about a solution that should not apply to business-centric problem solving. First, a program has the general aptitude of human intelligence (perhaps all of human intelligence). Second, an AGI is a general problem solver or a blank slate, meaning that any knowledge of a problem is rhetorical and independent of any strategy for solving that problem.[4] Instead, the knowledge depends on some vague, ill-defined aptitude relating to the multidimensional structure of natural intelligence. If that sounds ostentatious, that’s because it is.

Artificial superintelligence (ASI) is a by-product of achieving the goal of AGI. A common belief is that general intelligence will trigger an “intelligence explosion” that will rapidly produce superintelligence. ASI is thought to be “possible” thanks to recursive self-improvement, whose limits are drawn only by the boundless imagination of a program. ASI accelerates, rapidly reaching and then exceeding the collective intelligence of all humanity. The only problem for ASI is that there are no problems left. When ASI solves one problem, another presents itself with the momentum of Newton’s cradle. An acceleration of this kind leaves us wondering what comes next, ad infinitum, until the laws of physics or the limits of theoretical computation intervene.
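
To make the claim concrete, here is a toy sketch, mine and not the article’s, of the arithmetic behind the “intelligence explosion” story: a hypothetical system that improves itself by a constant factor each generation until an assumed physical ceiling cuts the recursion off. The improvement factor and the ceiling are assumptions of the story, not facts.

```python
# Naive recursive self-improvement as believers tell it: each generation
# builds a slightly better version of itself, so capability compounds
# geometrically until some external ceiling intervenes.

def intelligence_explosion(start=1.0, factor=1.5, ceiling=1e6):
    """Yield (generation, capability) under the toy self-improvement loop."""
    generation, capability = 0, start
    while capability < ceiling:
        yield generation, capability
        capability *= factor   # each version builds a slightly better version
        generation += 1

for generation, capability in intelligence_explosion():
    print(f"generation {generation}: capability {capability:,.1f}")
# The curve says nothing about whether any single step is achievable; it
# merely restates the belief that growth continues until physics objects.
```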

University of Oxford researcher Nick Bostrom says we will have reached ASI when machines are smarter than the best humans in every field, including scientific creativity, general wisdom, and social skills.[5] Bostrom’s portrayal of ASI has religious significance. Like their religious counterparts, believers in ASI even predict specific dates when the Second Coming will reveal our savior. Curiously, Bostrom cannot explain how to create an artificial intelligence. His argument is regressive: it depends on itself for its own explanation. What will create ASI? Well, AGI. Who will create AGI? Someone else, of course. The AI categories suggest a false continuum at the end of which sits ASI, and no one seems particularly bothered by their ignorance. However, fanaticism is a dubious innovation process.

Part of our collective problem when we talk about AI is that we root our thinking in common but unnecessary dichotomies.[6] False dichotomies create an artificial sense that there is an alternative. ANI, AGI, and ASI suggest a false balance among various technologies by presenting sides of an argument that do not exist. Even if we accept the definition of ANI and ignore its triviality, there is nothing compelling about AGI or ASI. Invoking something that will never exist in order to evaluate today’s technology, dressed up with a catchier name like ANI, is odd. We don’t compare birds to griffins, horses to unicorns, or fish to sea serpents. Why compare (or scale) computation to human intelligence, or to the intelligence of all humans?

Any explanation that includes AGI or ASI distorts reality. Anchoring is a cognitive bias in which an individual relies too heavily on initial information (called an “anchor”) to make decisions. Studies have shown that anchoring is difficult to avoid, even when looking for it.[7] Even if we recognize that AGI and ASI are significantly wrong or misplaced, they can still distort reality and create misalignments. We must not be fooled by a false dichotomy and a false balance.

AI is not three things. It is not something that progresses by degrees of “intelligence” or that sorts neatly into three bins. These categories do not delineate specific technologies, highlight areas of research, or capture a continuum where one begins by working on ANI and ends with ASI. They are absurd. AI is one thing: a singular and unprecedented goal of recreating intelligence ex nihilo. However, that goal is permanently misaligned with business.

Business goals cannot be totalizing, absorbing everything around them, because business communication, which encompasses all strategy, is effective only when it cannot be misunderstood. Unless you plan to align your business with the singular and unprecedented goal of AI, you should be careful when calling your goals AI, because you can’t say “AI” these days and expect to be understood. As we call more and more things “AI,” the task of communicating purpose and direction becomes even more difficult. Saying ANI, AGI, or ASI does not help matters; it interferes with communication. The best advice for technical leaders is to avoid false continuums, false dichotomies, and false balance. As media critic Jay Rosen explains, borrowing a phrase from American philosopher Thomas Nagel, false balance is a “view from nowhere.”

[1] For example, Tim Urban, “The AI Revolution: The Road to Superintelligence,” Wait But Why, September 7, 2017, waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html.

[2] I’ve seen too many corporate and government presentations start out by explaining AI when trying to explain these categories. It never helps.

[3] John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3, no. 3 (1980): 417–24, https://doi.org/10.1017/S0140525X00005756.

[4] AI pioneer Marvin Minsky, discussing his approach to AI in a 1981 profile in The New Yorker, said, “I don’t have to tell the machine exactly what to do. That would eliminate the problem.” Eliminating the problem is a problem in AI research, which is also a problem for anyone seriously considering an AGI strategy.

[5] Nick Bostrom, “How Long Before Superintelligence?” International Journal of Futures Studies 2 (1998), http://www.nickbostrom.com/superintelligence.html.

[6] As a general rule, the number of categories erected to explain a field of research is inversely proportional to the knowledge available about that field.

[7] In one study, professors Fritz Strack and Thomas Mussweiler of the University of Würzburg gave participants a manifestly wrong anchor. Participants were asked whether Mahatma Gandhi died before or after the age of nine, or before or after the age of 140. Neither anchor can be correct, but when the two groups were asked to suggest when they thought he had died, they guessed significantly different average ages: fifty versus sixty-seven. Fritz Strack and Thomas Mussweiler, “Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility,” Journal of Personality and Social Psychology 73, no. 3 (1997): 437–46, https://doi.org/10.1037/0022-3514.73.3.437.
