Weak AI


Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind, or, as narrow AI, is focused on one narrow task. In John Searle's terms it “would be useful for testing hypotheses about minds, but would not actually be minds”. It contrasts with strong AI, which is defined as a machine with the ability to apply intelligence to any problem rather than just one specific problem, and which is sometimes considered to require consciousness, sentience, and mind.
“Weak AI” is sometimes called “narrow AI”, but the latter is usually interpreted as a subfield within the former. Hypothesis testing about minds or parts of minds is typically not part of narrow AI, which instead implements some superficial lookalike of a mental feature. Many currently existing systems that claim to use “artificial intelligence” likely operate as narrow AI focused on a specific problem, and are not weak AI in the traditional sense.
Siri, Cortana, and Google Assistant are all examples of narrow AI, but they are not good examples of weak AI, as they operate within a limited pre-defined range of functions. They do not implement parts of minds; they use natural language processing together with predefined rules. In particular, they are not examples of strong AI, as they exhibit no genuine intelligence or self-awareness. Writing on his blog in 2010, AI researcher Ben Goertzel called Siri “VERY narrow and brittle”, as evidenced by the annoying results returned when it is asked questions outside the limits of the application.
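To make concrete what “natural language processing together with predefined rules” can amount to at its simplest, here is a minimal, purely illustrative Python sketch. It is not the architecture of Siri or any actual assistant; the rule table and replies are invented for illustration. Input that matches a predefined rule receives a canned reply; anything outside the rules falls through to a fallback, mirroring the brittleness Goertzel describes.

```python
import re

# Hypothetical rule table: each entry pairs a keyword pattern with a canned
# reply. Real assistants are far more elaborate, but the principle of a
# fixed, predefined range of functions is the same.
RULES = [
    (re.compile(r"\bweather\b", re.IGNORECASE), "Today's forecast is sunny."),
    (re.compile(r"\btime\b", re.IGNORECASE), "It is 10:00 AM."),
    (re.compile(r"\b(hi|hello)\b", re.IGNORECASE), "Hello! How can I help?"),
]

def respond(utterance: str) -> str:
    """Return the reply for the first matching rule, or a fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    # No rule matches: there is no underlying understanding to fall back on.
    return "Sorry, I can't help with that."

print(respond("What's the weather like?"))  # Today's forecast is sunny.
print(respond("Why do minds exist?"))       # Sorry, I can't help with that.
```

Such a system answers only the questions its designers anticipated; it tests no hypotheses about minds and implements no part of one.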
Some commentators think weak AI could be dangerous because of this “brittleness”, failing in unpredictable ways. Weak AI could cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles. In 2010, weak AI trading algorithms led to a “flash crash”, causing a temporary but significant dip in the stock market.