Is Machine Learning Really AI?

One of the downsides to the recent revival and popularity of Artificial Intelligence (AI) is that many vendors, professional services firms, and end users are jumping on the AI bandwagon, labeling their technologies, products, service offerings, and projects as AI when that is not necessarily the case. On the other hand, there is no well-accepted delineation between what is definitely AI and what is definitely not, because there is no well-accepted, standard definition of artificial intelligence. Indeed, there isn’t a standard definition of intelligence, period.

Perhaps it is best to start with the overall goals of what we’re trying to achieve with AI, rather than definitions of what AI is or isn’t. Since the beginning of the field in the 1950s, the goal of intelligent systems has been to mimic human cognitive abilities: the ability to perceive and understand their surroundings, learn from training and their own experiences, make decisions based on reasoning and thought processes, and develop “intuition” in situations that are vague and imprecise; basically, the world in which we live. From a delineation perspective, it’s easy to classify movements toward Artificial General Intelligence (AGI) as AI initiatives. After all, AGI efforts attempt to create systems that have all the cognitive capabilities of humans, and then some. Therefore, all AGI initiatives certainly count as AI initiatives.

On the flip side, simply automating things doesn’t make them intelligent. It may take time and effort to train a computer to tell the difference between an image of a cat and an image of a horse, or even between different breeds of dogs, but that doesn’t mean the system understands what it is looking at, learns from its own experiences, or makes decisions based on that understanding. Similarly, a voice assistant can process your speech when you ask it “What weighs more: a ton of carrots or a ton of peas?”, but that doesn’t mean the assistant understands what you are actually talking about or the meaning of your words. So, can we really argue that these systems are intelligent?

In a recent interview, MIT Professor Luis Perez-Breva argued that while these complicated, data-intensive training and learning systems are most definitely Machine Learning (ML) capabilities, that does not make them AI capabilities. In fact, he argues, most of what is currently being branded as AI in the market and media is not AI at all, but rather different versions of ML in which systems are trained to do a specific, narrow task using one of several approaches to ML, of which Deep Learning is currently the most popular. If you’re trying to get a computer to recognize an image, feed it enough data, and with math, statistics, and neural nets that weight different connections more or less heavily over time, you’ll get the results you expect. But what you’re really doing is using a human’s understanding of what the image is to create a large data set that can then be mathematically matched against inputs to verify what the human already understands.
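
To make that concrete, here is a minimal sketch of the kind of training described above: fitting a small neural network to human-labeled images. The dataset (scikit-learn’s bundled handwritten digits) and the classifier choice are illustrative assumptions, not anything Perez-Breva or this article specifies.

```python
# Minimal sketch: "learning" as fitting weights to human-labeled images.
# The digits dataset and MLPClassifier are stand-ins chosen for brevity.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images, labeled 0-9 by humans
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Training adjusts connection weights until predictions match the
# human-supplied labels; no notion of "three-ness" is involved.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("Accuracy:", model.score(X_test, y_test))
```

The trained model’s weights encode statistical regularities in the pixels, not any understanding of what a digit means, which is exactly the distinction Perez-Breva is drawing.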

How Does Machine Learning Relate to AI?

The view espoused by Professor Perez-Breva is not isolated or outlandish. In fact, when you dig deeper into these arguments, it’s hard to argue against the idea that the narrower the ML task, the less like AI it in fact is. However, does that mean that ML doesn’t play a role at all in AI? Or, at what point can you say that a particular machine learning project is an AI effort in the sense discussed above? If you read the Wikipedia entry on AI, it will tell you that, as of 2017, the industry generally accepts that “successfully understanding human speech, competing at the highest level in strategic game systems, autonomous cars, intelligent routing in content delivery networks and military simulations” can be classified as AI systems.

The line between intelligence and mere math or automation is a tricky one. If you decompose any intelligent system, even the eventual end goal of AGI, it will look like just bits and bytes: neural networks, decision trees, lots of data, and mathematical algorithms. Similarly, if you decompose the human brain, it’s just a bunch of neurons firing along electrochemical pathways. Are humans intelligent? Are zebras intelligent? Are bacteria intelligent? Where’s the delineation between intelligence in living organisms? Perhaps intelligence is not truly a well-defined thing, but rather an observation about systems that exhibit certain behaviors. Among those behaviors are understanding and perceiving one’s surroundings, learning from experiences, and making decisions based on those experiences. Seen in this light, ML definitely forms part of what is necessary to make AI work.

Over the past 60+ years there have been many approaches and attempts to get systems to understand their surroundings and learn from their experiences. These approaches have included decision trees, association rules, artificial neural networks (of which Deep Learning is one approach), inductive logic, support vector machines, clustering, similarity and metric learning (including nearest-neighbor approaches), Bayesian networks, reinforcement learning, genetic algorithms and related evolutionary computing approaches, rules-based machine learning, learning classifier systems, sparse dictionary approaches, and more. For the layperson, we want to stress that AI is not interchangeable with ML, and ML is certainly not interchangeable with Deep Learning. But ML supports the goals of AI, and Deep Learning is one way to do certain aspects of ML. To put it another way, doing machine learning is necessary, but not sufficient, to achieve the goals of AI, and Deep Learning is one approach to ML that may not be sufficient for all ML needs.
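
As a rough illustration of that distinction, here is a short sketch that applies three of the approaches listed above (decision trees, nearest neighbors, and a small neural network) to the same classification task. The dataset and model choices are stand-ins picked for brevity, not anything prescribed by the article.

```python
# Three distinct ML approaches applied to the same task; Deep Learning
# (here, a small neural net) is just one option among many.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
approaches = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=1000, random_state=0),
}
for name, model in approaches.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

Each approach learns the same narrow task to a similar standard; nothing about the task requires Deep Learning specifically.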

What Parts of AI Are Not Machine Learning?

It’s an interesting exercise to think about how you, as an adult human, gained the intelligence you have now. In some instances, you learned simply from being part of your environment: how gravity works, how to speak to others and understand what they are saying, cultural norms. In other instances, you learned in a teaching environment from instructors who knew a particular abstract subject area such as math or physics. In yet other instances, you learned by repeating a particular task over and over again to get better at it, such as music or sports. From the AI perspective, these are just different kinds of learning, and therefore different machine learning strategies: supervised learning for being taught how to do things, unsupervised learning for learning from observing the world, and reinforcement learning for learning by trial and error. Therefore, doesn’t it make sense that all forms of machine learning should be considered AI? What else could there be?
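
For readers who want to see those three strategies side by side, here is a compact sketch using toy stand-ins: scikit-learn models for the first two and a hand-rolled two-armed bandit for the third. The specific tasks are assumptions made for illustration, not examples from the article.

```python
# Supervised, unsupervised, and reinforcement learning on toy problems.
import random
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: learn from examples labeled by a "teacher".
taught = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", taught.score(X, y))

# Unsupervised: find structure by observation alone, no labels given.
observer = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster sizes:",
      sorted(int((observer.labels_ == k).sum()) for k in range(3)))

# Reinforcement: learn by trial and error from reward feedback.
# A two-armed bandit; arm 1 pays off more often than arm 0.
payoffs = [0.3, 0.7]
values = [0.0, 0.0]
random.seed(0)
for _ in range(1000):
    # Explore 10% of the time, otherwise exploit the best-known arm.
    arm = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    reward = 1.0 if random.random() < payoffs[arm] else 0.0
    values[arm] += 0.1 * (reward - values[arm])  # nudge estimate toward reward
print("reinforcement value estimates:", [round(v, 2) for v in values])
```

All three are “learning” in the ML sense, yet each remains a narrow, task-specific procedure.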

Some say that machine learning is a form of pattern recognition: understanding when a particular pattern occurs in nature, experience, or the senses, and then acting on that recognition. When you look at it from that perspective, it becomes clear that the learning part must be paired with an action part. Decision-making and reasoning are not just applying the same response to the same patterns over and over again. If that were the case, then all we’d be doing is using ML to automate better. Given the same inputs and feedback, the robot will perform the same action. But do humans really work that way? We experiment with different outcomes. We weigh alternatives. We respond differently when we’re stressed than when we’re relaxed. We prioritize. We think ahead and consider the potential outcomes of a decision. We play politics, and we don’t always say what we want to say. And the big one: we have emotions. We have self-consciousness. We have “awareness”. All of these things move us beyond the task of learning into the world of perceiving, acting, and behaving. These are the frontiers of AI.

The Moving Threshold of Intelligence

In reading this piece, you yourself are thinking and learning about Machine Learning and AI, how they relate to each other, and whether or not specific ML activities accomplish the goals of what we aim to achieve with AI. Between the extremes of the AI spectrum, those who consider only AGI to be truly AI and those who consider any application of ML to be AI, the truth lies somewhere in the middle. Some machine learning initiatives are more like automation, applications of formulas that can’t continuously evolve or respond to change, while other machine learning efforts are closer to intelligence: they can change and adapt over time with experience, improving at their task or desired outcome.

The technology industry continues to iterate on ML and to address problems previously considered too complicated and difficult. As the collection of ML activities matures, some are definitely not AI-like or particularly intelligent, while others are moving the industry down the path of AI. Eventually we’ll start to see the sort of technology evolution that has long been the goal of AI.
