The first thing to recognize is that this has not come out of nowhere. We are currently in the third wave of artificial intelligence (AI). The first wave began in 1960, although the term "artificial intelligence" was coined in 1956 by Professor John McCarthy of Dartmouth, who defined it as "the science and engineering of making intelligent machines." But what do we mean by "intelligent"?
We shouldn’t think of AI as something found only in the upper strata of the technological world. Its building blocks stem from human functions that have been applied to machines. One such building block is the algorithm, a concept derived from the field of logic.
If you have ever used an air conditioner or lights that automatically switch on when you walk into a room, you have been using AI for years. These two examples are considered "level 1" AI—out of four possible levels—because of the simplicity of the intelligence involved. Another basic form of AI is the calculator, which has been "taught" how to do mathematical calculations that humans can do only at a slower pace or with less accuracy. An AI's level is determined by the kinds of rules the machine follows and whether or not the machine is able to learn. More advanced levels of AI include robot vacuum cleaners that can learn the floor plan of a house and smartphone voice recognition technology.
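The distinction between these levels can be made concrete. A "level 1" device follows a fixed if-then rule and never learns. The sketch below is a hypothetical illustration of such a rule, in the spirit of the motion-activated light mentioned above; the threshold value is invented for the example, not taken from any real device.

```python
# A minimal sketch of "level 1" AI: a fixed if-then rule with no learning.
# The brightness cutoff below is hypothetical.

def motion_light(motion_detected: bool, ambient_lux: float) -> bool:
    """Switch the light on only when motion is seen in a dark room."""
    DARK_THRESHOLD_LUX = 50.0  # hypothetical cutoff for "dark"
    return motion_detected and ambient_lux < DARK_THRESHOLD_LUX

print(motion_light(True, 10.0))   # motion in a dark room -> True
print(motion_light(True, 300.0))  # motion in daylight -> False
```

However many such rules are stacked together, the device's behavior never changes with experience, which is what separates it from the learning systems described next.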
This sort of learning is known as “machine learning.” The more sophisticated the AI inside a machine, the less it needs to be instructed what to do; instead, it learns through trial and error.
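One classic way to picture learning through trial and error is a machine choosing repeatedly between two options and gradually favoring the one that pays off more often. The toy sketch below uses invented reward probabilities and a simple explore-versus-exploit rule; it is an illustration of the idea, not any particular product's algorithm.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Two options with hidden, invented payoff rates the machine must discover.
true_reward = {"A": 0.3, "B": 0.8}
estimates = {"A": 0.0, "B": 0.0}  # the machine's running estimates
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Explore a random option 10% of the time; otherwise exploit
    # whichever option currently looks best.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Update the running average estimate for the chosen option.
    estimates[action] += (reward - estimates[action]) / counts[action]

# After enough trials, the estimates reflect the hidden payoff rates,
# so the machine has "learned" which option is better without being told.
print(max(estimates, key=estimates.get))
```

Nobody programs in the answer; the preference for the better option emerges purely from accumulated trial and error, which is the essence of machine learning in miniature.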
So where and when do we encounter AI? Every time you check the weather or read the news, there’s a very high probability that you are being given information that has been compiled, checked, and distributed using AI. At the end of the day, speed and precision are what computers do better than people. Because of this, media giants like Bloomberg tend to hire programmers rather than journalists to write for them.
AI uses patterns to organize information and make predictions. That’s why your bank will alert you if your debit card is used to make multiple withdrawals in distant places or with a frequency that deviates from the norm. Likewise, AI alerts are used in airports to detect when lines are getting too long and more staff are needed. But this does not mean that AI is free from error. It often can only do what it has been taught or programmed to do. In one very controversial case, facial recognition technology was found to be racially biased, showing that AI—like human intelligence—has limits.
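The bank-alert example rests on a simple statistical idea: learn what "normal" looks like from past transactions, then flag anything that deviates too far from it. The sketch below is a hypothetical illustration using a standard z-score test; the transaction history and threshold are invented for the example and are far simpler than a real bank's fraud model.

```python
import statistics

def is_suspicious(history: list[float], new_amount: float,
                  threshold: float = 3.0) -> bool:
    """Flag a withdrawal that deviates strongly from past behavior.

    A z-score measures how many standard deviations the new amount
    sits from the customer's historical average.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean  # any change from a constant pattern
    z = abs(new_amount - mean) / stdev
    return z > threshold

usual = [40.0, 60.0, 55.0, 45.0, 50.0]  # invented withdrawal history
print(is_suspicious(usual, 50.0))    # typical amount -> False
print(is_suspicious(usual, 900.0))   # far outside the norm -> True
```

The same logic also shows the limits discussed above: the system can only judge against the patterns it was given, so an unusual-but-legitimate purchase gets flagged while fraud that mimics normal behavior slips through.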
A Brave New World?
World-renowned scientists such as Stephen Hawking have shared their misgivings about the dangers of AI. Many people therefore view the increasing presence of this technology as a potential threat. Others, such as Ray Kurzweil, have made it their mission to spread the word about the singularity he expects within the next 25 years, at which point technological growth will reach unfathomable proportions. Some imagine paradise, while others imagine Terminator-like scenarios. Even the business magnate Elon Musk has expressed his concerns.
However, the overwhelming majority of scientists currently working in the AI world do not believe that this is where we are headed. Some of these scientists are actively involved in making the case for ethical AI. What is certainly clear is that AI has got our attention. As we spend more time speaking to our phones using voice-activated commands and watching TV series recommended to us by algorithms, we are feeding the need, use, and sophistication of AI. Will the AI of the future make the world a better, safer place where humans and machines can peacefully coexist? For now, this question remains in our hands.
© IE Insights.