

What is Artificial Intelligence?


Cognilytica is excited to share the “What is Artificial Intelligence” white paper we wrote in collaboration with the Consumer Technology Association (CTA) in 2018. Read the white paper in full below.

Executive Summary: The Intelligent Machine

One way to describe artificial intelligence (AI) is as the ability of machines to exhibit the intelligence of humans. People have been working on the development of AI systems for over half a century. What has brought AI to the attention of many of late are the generative AI solutions and voice-enabled digital assistants that people use to help them with their daily routines. The fact that people converse with these systems, and that the systems respond with impressive capabilities, makes us think of them as exhibiting human-like qualities.

What is enabling AI to finally come into its own after decades of research and development is the processing power of modern computers coupled with the massive amounts of data we have accumulated about so many things. We have data about the cars traveling on our roads, about mobile phone usage, about the weather, about who buys what and where, and about so much else. AI systems can take vast quantities of data and look for patterns that help us make predictions and generally better understand the world.

AI systems are built using various core technologies and building blocks like machine learning (including emergent deep learning architectures), natural language understanding/generation, and computer vision. This whitepaper will touch on each of them. Depending on the nature of the data, AI systems leverage three major approaches to learning: supervised learning, unsupervised learning, and reinforcement learning. These approaches are realized using various types of algorithms, from regression techniques to convolutional neural networks. AI systems are being deployed to perform various workloads and tasks like recognition, classification, pattern matching, and natural language processing, as well as more specific applications like digital assistants, chatbots, and self-driving vehicles. This whitepaper will touch on these and other applications, too.

AI systems also raise a number of issues. Do we need them to be able to explain why they do what they do? Perhaps, in some situations. How will AI systems impact jobs? Like all technological developments, they will create new opportunities and make some existing jobs obsolete. How can biases be kept out of AI systems, and when might deliberately introducing a bias serve the objective? By making sure the data used to train them is accurate and comprehensive and that objectives are defined clearly. And will an AI “super intelligence” try to take over the world? Not in our lifetimes, and perhaps not ever. This whitepaper will cover each of these topics in more detail.

What is Artificial Intelligence 

At the most abstract level, Artificial Intelligence (AI) is machine behavior and function that exhibits the intelligence and behavior of humans. This usually means the ability for machines to perceive and understand their surroundings, learn from teaching/training and their own experiences, make decisions based on reasoning and thought processes, have natural conversations and linguistic communication with humans, and develop “intuition” in situations that are vague and imprecise.

A Brief History of AI

The pursuit of AI is almost as old as the history of digital computing itself. The first attempts at developing intelligent systems came in the 1940s with McCulloch and Pitts’ artificial neuron[1], which was further expanded upon until 1956, when a group of multi-disciplinary researchers convened at Dartmouth College expecting to make significant progress in the new field of “Artificial Intelligence.”[2] Steady progress has been made in the decades since. The first chess- and checkers-playing systems, as well as early systems capable of natural language interaction, evolved in the 1960s[3]. So, AI is really nothing new in the world of computing.

The growth and evolution of AI has come in fits and starts. Waves of interest and funding in AI would be followed by periods of decline (known as “AI winters”), to then be followed by a resurgence of interest and funding. Many of the causes of the waning of interest can be traced to two issues: overpromising what AI is capable of, and the limits of technology to deliver on those promises. People get excited when they see demonstrations of what AI can do, or read sensationalized media accounts of future AI systems. They then develop their own grandiose visions of what AI could be used for. This results in new AI projects getting funded. But when current technology is not able to deliver the performance that was imagined, funding dries up. This cycle has repeated itself over the decades.

For those developing AI systems, the key is to promise only those advances in cognitive technology that research and technology can actually deliver. Large quantities of data and computing power are necessary to teach computers intelligent tasks. It was not until the recent big data explosion enabled us to accumulate vast quantities of data, and until advances in computer technology brought us cheap and massive computing power, that the consumer technology industry was able to offer products that began to live up to some of the promises of AI. And now many people are paying attention to AI.

What the Experts Say

Despite AI’s long and storied history there is, perhaps surprisingly, no accepted standard definition of what AI really is. Nevertheless, industry has rallied around the consensus that AI represents a collection of approaches and technologies that collectively aim to bring cognitive capabilities to the machines and systems we build. 

“Artificial Intelligence is the science and engineering of making intelligent machines.” — John McCarthy, AI researcher at Stanford, MIT, and Dartmouth who coined the phrase Artificial Intelligence [4]
“Artificial Intelligence doesn’t mean [just] one thing… it’s a collection of practices and pieces that people put together.” — Rodney Brooks, pioneering AI researcher, MIT professor, Rethink Robotics [5]
“Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” — Nils J. Nilsson, Stanford University [6]
AI is “intelligence that is not biological.” — Max Tegmark, researcher, MIT professor [7]
AI is “a branch of computer science that studies the properties of intelligence by synthesizing intelligence.” — Herbert A. Simon, pioneering AI researcher, Carnegie Mellon [8]

The challenge, of course, with these definitions is that we don’t fully understand what biological intelligence itself is, which makes building intelligence artificially that much harder. However, we can easily see what humans are good at that machines are not, and focus our efforts on making machines capable of those “cognitive” tasks.

Why does AI Matter?

It is becoming clear that people are ever-more demanding of their interactions with devices and systems. People find greater satisfaction in the convenience of automated self-service systems than they do in dealing with the inevitable shortcomings of human-to-human interaction. People also want instant access to information, goods delivered to them at their whim, and transportation to destinations with minimal hassle. Just as importantly, businesses have the never-ending desire to increase productivity, provide greater levels of service and satisfaction to their customers, and improve their interactions with all their stakeholders. They want to be able to identify people and objects in their ever-growing piles of data. All of these factors are driving individuals and organizations to demand from machines and systems the perfect version of what they want from interactions with other humans and organizations – automation combined with intelligence. And this is driving the need for and adoption of AI and other forms of more intelligent systems.

Defining the Key Elements of Artificial Intelligence

Generally, we can group aspects of AI into three categories (the 3 “P”s):

Perception: Understanding the environment around you and processing the various inputs from surroundings and sensors. Perception-related cognitive technologies include image and object recognition and classification (including facial recognition), natural language processing and generation, unstructured text and information processing, robotic sensor and Internet of Things (IoT) signal processing, and other forms of perceptual computing.

Prediction: Understanding patterns to predict what will happen next and learning from different iterations to improve overall performance. Prediction-focused cognitive technologies utilize a range of machine learning, reinforcement learning, big data, and statistical approaches to process large volumes of information, identify patterns or anomalies, and suggest next steps and outcomes. Predictive technologies span the range from big data analytics to complex, human-like decision models.

Planning: Using what was learned and perceived to make decisions and plan next steps. Planning-focused cognitive technologies include decision-making models and methods that try to mimic how humans make decisions. They also help generate natural-sounding conversations, and this is the area of research leading to intuition, common sense, emotional IQ, and other factors that make humans much better than machines at planning and decision-making.
Source: Cognilytica

Narrow AI vs. General AI

AI spans a broad spectrum of ability. The more an AI system approaches the abilities of a human, with all of a human’s intelligence, emotion, and broad applicability of knowledge, the “stronger” it is. On one end of this spectrum is the ultimate goal: Artificial General Intelligence (AGI), the development of machine intelligence so strong and general purpose that it can tackle any task and handle any problem with the cognitive capabilities and mental dexterity of humans. Generally intelligent systems can generalize knowledge from one domain to another, make plans for the future based on knowledge and experience, and adapt to changes in their environment as they happen. We have yet to build anything that comes close to the capabilities of an AGI system.

On the other hand, the more specific to a particular application an AI system is, the more “narrow” it is by comparison. Narrow AI systems are not aiming to solve the problem of AGI but rather use bits and pieces of AI technology to solve specific problems. Narrow AI systems are applied to a specific task, and their intelligence is not meant (or able) to be applied to other tasks. Examples include image recognition, conversational technology, predictive analytics, recommendation engines and profiling systems, and autonomous vehicles. Since we have not yet achieved AGI, despite some attempts to get us close, it follows that all current practical implementations of AI are considered narrow AI.

The Pieces that Make up the AI Ecosystem

The Artificial Intelligence Ecosystem

Human intelligence comes from a combination of many discrete skills that together form a whole intelligent being. Likewise, the field of AI encapsulates many overlapping concepts that contribute to increasing intelligence of the systems in which they are applied.

An Overview of Artificial Intelligence Concepts and Technologies

Machine Learning

The ability to learn is key to being able to exhibit the characteristics of intelligence. Without learning, humans (or machines) can’t truly be intelligent because they can’t adapt to new situations, respond to the environment, or use knowledge in new and unfamiliar situations. Computers are great at doing what you program them to do. However, providing hard-coded instructions on how to do something is very different from actually learning how to do that task. Our brains aren’t wired as a set of instructions, so if instructions aren’t the way to encode learning, how can machines learn so they can be intelligent?

Machine Learning (ML) is the set of methods and approaches that provide a means by which computer systems can encode learning and then apply that learning in relevant situations. There are two broad classifications for methods for how machines can learn: supervised learning and unsupervised learning. In supervised learning a human teacher shows machines examples of what should be learned as well as the correct answers. Eventually, the machine learns the general rule that connects the inputs to the outputs so it can make educated guesses when presented with new data. In unsupervised learning, a machine tries to learn for itself by finding hidden patterns in data and identifying situations where the patterns apply or data doesn’t fit those patterns. There are also many hybrids of the two approaches, such as reinforcement learning, in which the machine teaches itself based on a reward or goal set by the human supervisor. These AI systems try different things and discover the best approach to reach that goal.
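To make the supervised case concrete, here is a minimal sketch using the open-source scikit-learn library and a nearest-neighbors classifier, which makes an educated guess about new input by looking at the most similar labeled examples it was shown. The features, labels, and toy data are invented purely for illustration.

# A minimal supervised-learning sketch: labeled examples (inputs plus the
# correct answers) are used to make educated guesses about new, unseen inputs.
# The features and data below are hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Each example: [hours_of_daylight, temperature_f]; label: was the park busy?
X_train = [[8, 35], [9, 40], [12, 70], [14, 75], [13, 68], [10, 50]]
y_train = ["no", "no", "yes", "yes", "yes", "no"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)        # the human "teacher" supplies the correct answers

print(model.predict([[11, 65]]))   # an educated guess for data it has never seen

An unsupervised learner, by contrast, would be given only the inputs and asked to find structure, such as clusters or outliers, on its own.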

Just like with humans, there are many ways for machines to learn. One of the easiest forms of supervised learning is decision trees. This is when a human maps out the various possibilities in a tree or graph that a computer can understand.  Examples include teaching a machine how to play checkers or basic chatbot conversations where a machine is programmed to consider specific groups of options for specific situations. Other ML approaches include methods that mimic how logicians derive theories, cluster data together into groups, build complicated interconnected models of objects, and genetic algorithms, where the system iterates itself to a solution in much the same way that evolution works.

Neural Networks: Simulating the Way the Brain Works

Our brains aren’t a bunch of pre-programmed decision trees. Instead, brains have neurons that make connections between the inputs from our senses and what we’ve learned. AI researchers created neural networks as a way to simulate how the brain works. With neural networks, ML is structured as a set of artificial neurons that are connected together and that take input information, analyze it, and produce an output. Over time these networks learn patterns through supervised learning approaches.
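The sketch below illustrates a single artificial neuron in plain NumPy: it computes a weighted sum of its inputs and passes the result through an activation function. The weights here are hand-picked for illustration; in a real network they are learned, and many such neurons are wired together into layers.

import numpy as np

def neuron(inputs, weights, bias):
    # Combine the input signals, then squash the result to a value between 0 and 1.
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid "activation" function

x = np.array([0.5, 0.8, 0.1])          # signals arriving at the neuron
w = np.array([0.4, -0.2, 0.9])         # connection strengths (normally learned, not hand-set)
print(neuron(x, w, bias=0.1))          # the neuron's output signal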

Since we don’t really know exactly how the brain works or how humans learn, all of these ML approaches are just approximations at recreating the way the brain learns. You can think of ML as a collection of different approaches that are best used in different situations. Some are great at recognizing faces, while others are great for conversations. Some are good for playing games while others are best at diagnosing illnesses.

Deep Learning: An Evolution in Neural Networks

Neural networks were previously hard to manage and scale. However, with the creation of deep learning, combined with huge datasets (aka “big data”) and ever faster computer processors, we have a way to handle many layers of artificial neurons for more complicated learning. It is these layers that give deep learning its name: the more layers, the deeper the learning. Deep learning has proven to be very good at recognizing images, processing natural language, and many other applications.

What makes deep learning possible is massive amounts of data to study and extremely fast computer processors with which to study it. You need large volumes of well-labeled, clean data sets to train deep learning networks. The more layers, the better the learning power, but to have layers you need data that is already well labeled to train those layers. Since deep neural networks are primarily a bunch of calculations that must all be done at the same time, you need a lot of raw computing power — specifically numerical computing power.

Imagine you’re trying to forecast how many people will drive over a toll bridge each hour on a particular weekend. Among many other factors that might be considered are how many people are within a certain range of either side of the bridge at any given moment, what weekend attractions (sporting events, concerts, festivals, beaches, ski slopes, etc.) are within a certain range of either side of the bridge during the weekend of interest, what the weather will be like, and so on. Some of these factors won’t change very much over time – the beaches are always there. Other factors may weigh more or less heavily depending on when the forecast is made. If it’s May, for example, it doesn’t make sense to make specific assumptions about the weather on a September weekend. If it’s only two days before that weekend, though, then it does make sense for the weather forecast to be a larger factor.

A deep learning system will take in massive amounts of data related to all of the relevant factors, make assumptions about which things will have the biggest impact on the prediction, and then make its prediction taking into account all of the data. The system learns to predict well by being told how close its predictions are to reality. So, for it to learn it needs to be fed not only all of the input data, but the output data too. It makes its prediction and then compares the prediction to the known outcome. Obviously for this to happen there needs to be sufficient historical data available that the system can use to predict results, compare with actual results, then repeat.

The massive amounts of data we have today coupled with massive amounts of computer processing power make it possible for deep learning systems to learn quickly. The computer can try all kinds of predictive models, including ones that we humans would likely never consider. Do sales of kites within a certain distance of the bridge during the previous week provide any insight into how many people will travel over the bridge on a given weekend? Maybe not. But before deep learning systems we humans would not likely have bothered to try factoring in kite sales because of our limited resources and our perception that other factors would be more important. If it turned out that kite sales did help to predict bridge traffic more accurately a deep learning system could discover this. It just needs clean, comprehensive input data.
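As a rough sketch of the bridge-traffic example, the code below trains a small multi-layer neural network with scikit-learn on synthetic historical data and then makes a prediction. The feature names and numbers are invented, and a real forecasting system would use far more data, features, and layers.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical historical records: [people_nearby, events_nearby, rain_chance, kite_sales]
X = rng.random((500, 4))
# Synthetic "known outcomes" (crossings per hour) the model can compare its guesses against.
y = 2000 * X[:, 0] + 800 * X[:, 1] - 500 * X[:, 2] + rng.normal(0, 50, 500)

# Two hidden layers of artificial neurons; adding layers is what makes the learning "deeper."
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)   # learning: predict, compare with the known outcome, adjust, repeat

next_weekend = [[0.7, 0.9, 0.1, 0.4]]
print(round(model.predict(next_weekend)[0]), "predicted crossings per hour")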

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and the use of this automatically tuned deep neural network approach works well.

While ML approaches today still have a ways to go before coming close to being as good as our brains, they are quickly getting better and achieving things we never thought were possible before – self-driving cars, recognizing faces, beating world masters at Chess and Go, and talking to you. As ML progresses, so too does AI.

Pervasive Knowledge

AI moves us up one more level on the so-called “Data-Information-Knowledge-Wisdom” (DIKW) Pyramid.

The introduction of ML and other aspects of AI to information brings about knowledge that has heretofore been unavailable. Rather than simply being a better means to extract more analysis from data, AI enables us to derive more knowledge from that data, making higher-level connections between disparate pieces of information, and giving us more insight into what that information actually means.

Right now the concept of pervasive connectivity manifests itself in the assumption that you have immediate or ready access to communications and networking at your fingertips. In the future, pervasive knowledge will mean that you have immediate or ready access to knowledge and insight at your fingertips. In other words, individuals will demand more of their relationships with governments, companies, and each other. We will expect things to be available as instantly as we need them, and the information provided to be complete, because there simply will be no excuse to do otherwise. As a result, pervasive knowledge will become part of our baseline assumptions, just as we now expect to be able to get Internet, electricity, and information whenever and wherever we need them.

Natural Language Processing & Generation

Advances in deep learning and other approaches to ML have made possible leaps in our ability to understand and process speech and natural language. The result is a wide range of applications that can unlock the power of spoken conversation and the written word.

Natural Language Processing (NLP) gives machines the ability to understand spoken and written words, the meaning of those words when formed into sentences, and the ability to summarize volumes of content into a form that is understandable. Likewise, Natural Language Generation (NLG) gives computers the ability to communicate back to humans in a form they understand and in a conversational flow. NLP and NLG have in turn enabled additional conversational capabilities including chatbots, conversational interfaces, and a wide range of applications that can unlock the power of unstructured information and spoken content in ways not possible before.
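As one small illustration of NLP in practice, the sketch below uses the open-source Hugging Face transformers library to classify the sentiment of a sentence. It assumes the package is installed and a default pretrained model can be downloaded, and it is only one of many possible toolkits.

# Classify the sentiment of a sentence with a pretrained language model.
# Requires the 'transformers' package; a default model is downloaded on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("The new digital assistant understood me on the first try!")
print(result)   # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]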

Computer Vision

Computers have long been able to capture and store images and visual information. However, identifying what is actually in those images is particularly complicated. The brain has a whole area of interconnected neurons and biochemical pathways dedicated to decoding the meaning of the electrical impulses that come from the retina, where humans perceive light and images. For a long time, accurate image recognition was a computationally difficult task. Developers and researchers found it hard to program systems that could classify images or identify faces across different positions, lighting conditions, and facial expressions. Then deep learning came along and changed everything.

The use of deep neural networks with appropriately large quantities of training data is giving new meaning to computer vision, allowing systems to fairly accurately identify and classify images, recognize faces, and detect specific objects inside images or video. The use cases for such computer vision capabilities are enormous, ranging from security to autonomous vehicle systems, to factory floor and warehouse operations, to medical imaging diagnostics and beyond. AI-enabled computer vision is single-handedly advancing many industries that need to extract valuable information from images or video.
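A hedged sketch of what this looks like in code: classifying an image with a pretrained deep convolutional network from PyTorch’s torchvision package. The image path is a placeholder, and a production system would add batching, error handling, and domain-specific training.

import torch
from torchvision import models
from PIL import Image

# Load a network pretrained on the ImageNet dataset (weights download on first use).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# 'photo.jpg' is a placeholder path; the preset transform resizes and normalizes the image.
image = weights.transforms()(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(image)
print(weights.meta["categories"][scores.argmax().item()])   # the predicted label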

Applications of Artificial Intelligence

Digital Assistants

Much of our daily lives is spent engaging other humans for a variety of tasks, from customer support to retail to interacting with businesses of all types. Through the use of digital assistants, organizations of all types are engaging AI to provide instant, always-accessible interaction with their customers and stakeholders without sacrificing the feeling of personal interaction. In the not-so-distant future, digital assistants will be everywhere. We’ll be interacting with them daily in both our personal and business lives. We’ll be chatting with assistants in our homes and cars, and also interacting with other people’s and business’s conversational agents.

In a future where everyone has a personal digital assistant, we’ll have them do everything from messaging friends when we’re putting together a birthday party, to scheduling all the logistics for that party, to dealing with inbound calls from attendees who can’t make it. On the other end, all the companies involved in those transactions will also have digital assistants to make sure that the interaction happens seamlessly, regardless of the time of day or the location of the customers. Soon enough, just as we now depend on GPS to keep us from getting lost and on our mobile phones to keep us always connected, we’ll depend on these digital assistants to keep our lives in order.

Chatbots and Conversational Interfaces

We humans were born to talk and communicate. Much of the brain is devoted to generating, sensing, processing, and understanding speech and communications in its various forms. So it’s no surprise that what we want from our interactions with machines is what we want with interactions with people: natural conversation. AI is enabling a wide range of conversational interfaces and technologies, including chatbots and voice assistants that are changing the way we interact with our systems.

At its simplest core, a chatbot is a software application or computer program that accepts input in the form of written or spoken text in a natural language and provides output in written or spoken form, also in a natural language. Unlike other programs, the whole “trick” of a chatbot is to make it seem like you’re conversing with a real human being. How convincing this trick is depends on the sophistication of the system and on how much the human knows, or cares, that they’re talking to a machine.

Chatbots are best suited for applications where back-and-forth interaction with humans is required. These scenarios include customer support and customer service, information acquisition (especially over the phone), interaction with devices where physical input is not possible or convenient (driving, flying a plane, operating heavy equipment, etc.), and a new class of intelligent personal assistants where hands-free interaction is preferred.
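At its most basic, a chatbot can be a loop that matches keywords in the user’s message and replies with a canned response. The toy sketch below (plain Python, with made-up intents) shows that bare-bones idea; real conversational systems replace the keyword matching with NLP-driven intent detection and dialog management.

# A toy rule-based chatbot: match a keyword, return a canned reply.
RESPONSES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "order": "I can help with that. What's your order number?",
    "human": "Connecting you with a support agent now.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your hours this weekend?"))
print(reply("I'd like to check on my order"))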

In addition, we’re starting to see chatbots take off in the context of intelligent voice assistants such as Amazon’s Alexa, Apple’s Siri, Google’s Home, Microsoft’s Cortana or Samsung’s Bixby, among others. These devices are introducing the concept of the pervasive, voice-activated assistant that enables a wide range of capabilities from conversational commerce to personal business assistant capabilities and more.

Augmented Intelligence

Augmented Intelligence is the idea of machines and humans working together to enhance, rather than replace, humans for particular tasks. The term “augmented intelligence” provides a way to differentiate the goals of helping humans with AI technology from those AI approaches meant to operate independently of humans.

Augmented Intelligence is a “force multiplier” that enables humans to do more, and therefore enables businesses and organizations to do more with the people they already have. Think of augmented intelligence as giving humans AI “super powers,” or as intelligent helpers that can assist with tasks that might previously have been too difficult, too dangerous, too expensive, or simply too tedious. What makes the combination work is that humans and machines are good at different things.

Augmented Intelligence — What Machines and Humans are Good at… and Not

Humans
Strengths: Intuition; Emotional IQ; Common Sense; Creativity; Learning Adaptability
Weaknesses: Probabilistic thinking; Dealing with large volumes of info; Bias; Responding reliably to training and instructions

Machines
Strengths: Probabilistic thinking; Dealing with large volumes of info; Being trained and following instructions; Lack of selfish motivations
Weaknesses: Intuition; Emotional IQ; Common Sense; Creativity; Hard to adapt learning to different situations; Dependence on quality of training data

Many enterprises and organizations are seeing significant benefit from applying Augmented Intelligence approaches to their cognitive solutions. Augmented intelligence is promising an AI-enabled future in which the human is still the center of the organization. If anything, the power of AI enables augmented intelligence solutions that make us humans better and more effective at what we do, delivering more benefit to organizations, ourselves, and society. 

Robots and Cobots

Czech writer Karel Čapek first coined the term “robot” in his 1920 play R.U.R. Though robots were originally envisioned as physical, hardware machines, the term is now used in a wide array of ways to describe any sort of software- or hardware-based automation, whether intelligent or not.

Physical robots are still highly desired in many industries, especially to perform tasks often referred to as the four “D’s”: Dirty, Dangerous, Dear (or Expensive), and Dull (or Demeaning). These robots operate every day in manufacturing, warehouse, health care, and other situations to perform tasks. However, to make industrial robots work in a reliable way without causing physical harm to humans, they often must be separated from physical human contact, operating in entirely human-free zones or within cages to prevent accidental human contact. Or, if they are roaming about in the free world, they are constrained in their strength and capability so that they can’t inflict harm. 

However, constraining physical robots in this way limits their application and power. Companies looking to increasingly automate and enable greater portions of their business that require human labor need ways to increase the interaction of robots and people without endangering their welfare. Collaborative robots, known by the shorthand “cobots” are meant to operate in conjunction with, and in close proximity to humans to perform their tasks. Indeed, unlike their more isolated counterparts, cobots are intentionally built to physically interact with humans in a shared workspace. In many ways, cobots are the hardware version of augmented intelligence that we talked about above. Instead of replacing humans with autonomous counterparts, cobots augment and enhance human capabilities with super strength, precision, and data capabilities so that humans can do more and provide more value to the organization.

Intelligent Process Automation

Today’s knowledge workers spend their time in email, on the phone, in various desktop and online apps and websites dealing with customers, suppliers, employees, partners, and internal stakeholders. Many companies are interested in enabling their existing labor to provide greater value for the organization by having autonomous agents perform tedious, repetitive, and error-prone tasks. For many enterprises and public sector agencies, lots of those repetitive but necessary tasks exist in the back office of their organization, handling the daily administrative tasks necessary to keep companies humming and growing without disruption.

Into this space of aggregating, managing, and manipulating data from a wide variety of sources is emerging a new class of Robotic Process Automation (RPA) tools. These robots act on behalf of, or in place of, their human counterparts to interact with existing, legacy systems in the enterprise or anywhere online. They mimic the behavior of humans so that the human can focus on more important tasks for the company, rather than say, copying information from a website into a spreadsheet.
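As a loose illustration of the kind of rote task an RPA bot takes over, the sketch below copies a figure from a web page into a spreadsheet file, something a person might otherwise do by hand every morning. The URL and CSS selector are placeholders, and commercial RPA platforms add scheduling, credential handling, and automation of desktop user interfaces on top of this basic idea.

# Requires the 'requests' and 'beautifulsoup4' packages; URL and selector are placeholders.
import csv
from datetime import date

import requests
from bs4 import BeautifulSoup

page = requests.get("https://example.com/daily-report", timeout=30)
soup = BeautifulSoup(page.text, "html.parser")
total = soup.select_one("#orders-total").get_text(strip=True)   # the value to copy over

# Append today's figure to a spreadsheet-friendly CSV file.
with open("daily_orders.csv", "a", newline="") as f:
    csv.writer(f).writerow([date.today().isoformat(), total])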

Yet, while RPA is making significant improvements in companies’ operations by replacing rote human activity with automated tasks, AI is poised to give this new engine of productivity a gigantic boost. RPA tools get stuck when judgement is needed on what, how, and when to use certain information in certain contexts. Systems that leverage ML to dynamically adapt to new information and data will shift these systems from mere robots that automate processes to Intelligent Process Automation (IPA) tools that can significantly impact the knowledge worker economy. Or as McKinsey Consulting puts it, “In essence, IPA takes the robot out of the human.”[9]

Pattern Matching & Advanced Data Analytics

ML in particular is really good at identifying patterns in large volumes of data. Whether using supervised or unsupervised learning approaches, ML enables companies to look at large volumes and streams of data and make analyses on that data, identifying information that fits into acceptable patterns or outliers that don’t. Industries as diverse as finance, healthcare, insurance, manufacturing, mining, and logistics are using ML’s inherently strong pattern matching capabilities to do everything from fraud detection to identifying potential overload situations in IoT devices to detecting anomalies in patient data.
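A minimal sketch of this kind of pattern matching: an Isolation Forest (an unsupervised scikit-learn algorithm) learns what “normal” transaction amounts look like and flags the ones that don’t fit. The data is synthetic, and a real fraud-detection system would use many more features than a single amount.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic history of routine transaction amounts.
normal_amounts = rng.normal(loc=50, scale=15, size=(1000, 1))
model = IsolationForest(contamination=0.01, random_state=1).fit(normal_amounts)

new_transactions = np.array([[48.0], [61.0], [950.0]])
print(model.predict(new_transactions))   # 1 = fits the learned pattern, -1 = outlier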

Indeed, there is a significant amount of activity in AI from the “data science” perspective, leveraging ML not only for AI-specific tasks, but also for more data analysis and insight tasks. In the past it would take a trained data scientist to be able to spot information in a sea of data, but now AI-enabled systems are quickly spotting important information in an autonomous fashion. 

It’s no wonder that the methods and techniques of ML are appealing to data scientists, who previously had to make do with ever more advanced SQL or other data queries. ML provides a wide array of techniques, algorithms, and approaches to gain insight, provide predictive power, and further enhance the value of data in the organization, elevating data to information, and then to knowledge.

Self-Driving Vehicles and Devices

Have you ever dreamed of the day when your car could drive itself, freeing you to do other things, such as reading, catching up on email, watching a movie, or sleeping, rather than focusing on the road? Automotive manufacturers and transportation technology vendors are rapidly progressing toward that goal. The power of AI and ML, combined with detailed city and road mapping, lane-keeping, collision avoidance, and self-parking, is leading to automobiles and trucks that can take us to our destinations without us having to keep our feet on the pedals or hands on the steering wheel.

While self-driving cars are the most visible form of these autonomous vehicles, AI and ML technology is enabling a wide range of autonomous capabilities, including self-operating robots, self-flying drones, and other devices and systems of various sorts. Self-driving vehicles, which seemed like science fiction a mere decade ago, have become an inevitability that urban planners are now including in their plans. Airborne, self-flying drones are increasingly used in a wide range of personal and commercial applications.

We’re seeing a rise in interest in self-navigating ships, self-driving trucks and delivery bots of all sorts. Autonomous doesn’t just mean robots and moving vehicles, and autonomous systems are increasingly being used in retail, health care, journalism, and even critical infrastructure management.

Issues with Artificial Intelligence

AI is seen by many as a great transformative technology that creates limitless opportunities to improve our lives. But there are some concerns about possible negative consequences, and many are working to address these concerns.

Explainable AI (XAI)

When people make decisions you can ask them how they came to their decisions. But with many AI algorithms an answer is provided without any specific reason, and you can’t ask the machine to explain how it came to its decision. This may raise concerns in some situations.

Many companies and researchers are working to create AI systems that explain their decision making – what is known as Explainable AI or XAI[9]. The goal of XAI is to provide verifiable explanations of how ML-based systems make decisions and let humans be in the loop as a check on the decision-making process.

Complicated AI systems are making thousands or millions of connections between different pieces of data. Oftentimes these connections are so complex and intertwined that it’s extremely difficult to explain how an AI system came to a specific conclusion. To better understand the challenges of making AI explainable, and the fact that even in life-or-death situations humans can be comfortable following the unexplained recommendations of AI systems, let’s consider a real-world application – weather forecasting.

We’ve all seen weather forecasts that include predicted future radar images. Sometimes these images will predict that 24 hours in the future a small rain shower will appear over a specific neighborhood covering no more than half a square mile. The amount of information that goes into making this prediction is massive. Hundreds of thousands of sensors around the country, in the ocean, in the upper atmosphere and in space collect data on things like temperature, wind speed, wind direction, barometric pressure, humidity, and more. The data from each of these sensors is stored in databases, enabling analyses of their changes over time. Things that have been learned from analyzing past weather patterns, coupled with what’s known about the very recent history of the weather, enable complex computer models to predict what the weather will be like at a given location in the near future.

But when most people get their weather forecast they don’t really care about the details that led to the prediction. They understand it’s a prediction, not a guarantee, and they’re content to know that the prediction has been made to the best of the ability of the weather forecaster with the tools available. So when we see that forecast that shows a small rain shower immediately over our neighborhood in 24 hours we don’t expect that in precisely 24 hours it will begin raining in our neighborhood. Instead we interpret it as “there’s a chance of a passing shower tomorrow around this time.”

We accept weather forecasts without a detailed explanation of what led to them even though they can be a matter of life or death. Furthermore, we accept them knowing full well that different models are predicting different outcomes. This is perhaps most evident during hurricane season. Each time a hurricane threatens the United States weather forecasters will show us the predicted paths of a storm from several different forecasting models. When we get those forecasts do we want an explanation of how hundreds of thousands of pieces of data each factored into the predictions? No. Instead we are comfortable trusting the models and understanding that their predictions are less accurate the farther out into the future they go.

Weather forecasting has traditionally consisted of complex sets of equations that are based on things we know about the physical world. Artificial intelligence is improving our forecasting ability by enabling a larger number of variables to be considered, and this is helping us discover new things about what causes weather patterns[10]. As AI plays more of a role in weather forecasting it seems unlikely that people’s expectations regarding the explainability of forecasts will change.

Some who talk about the need for XAI cite situations where AI might be used in ways that could significantly impact people’s lives, such as deciding whether or not someone receives a loan, or what the appropriate sentence should be for someone convicted of a crime. These are issues that clearly need serious deliberation. One of the things we should ask ourselves in these deliberations is whether or not the results we get from human decision makers in these situations are the gold standard many of us think them to be.

For situations where we deem it necessary to have XAI there are generally two ways to provide it. The first is to use machine learning approaches that are inherently explainable, such as decision trees, knowledge graphs, or similarity models. The second is to develop new approaches to explaining the outcomes of more complicated and sophisticated neural networks. There are many organizations, like the Partnership on AI and the Defense Advanced Research Projects Agency, working to create methods for explaining the outcomes of these more complicated machine learning methods[11]. As AI technology becomes more ubiquitous in our lives we will figure out the levels of explainability that are appropriate for different applications. Explainable AI may help increase trust in AI systems.
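As a small illustration of the first approach, an inherently explainable model such as a decision tree can print the exact rules behind its decisions. The loan-style features and data below are hypothetical.

# An inherently explainable model: the learned rules can be printed and inspected.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical past decisions: [credit_score, debt_to_income_ratio] -> outcome
X = [[620, 0.45], [700, 0.30], [560, 0.50], [720, 0.20], [640, 0.35], [580, 0.55]]
y = ["deny", "approve", "deny", "approve", "approve", "deny"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["credit_score", "debt_to_income"]))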

Clean Data and Avoiding Bias in AI Decision-Making

There’s an old expression in computer programming: “garbage in – garbage out.” It’s a simple way of saying that a computer system’s output is only as good as the quality of its input data. This is the case with AI. Training data that is incomplete or contains errors could cause an AI system to produce biased results.

Deep learning relies heavily on training data. The system learns what it knows from this data. If an AI system is going to make recommendations based on factors X, Y and Z then it’s important for there to be comprehensive data involving each of these factors. If one of the factors is gender, for example, and 90 percent of the training data was collected from people of one gender, then the system is going to learn more about that gender than the other.

It’s also important for the data to be “clean,” meaning error-free and well-organized. This is a massive task that must be overseen by detail-oriented people. Data is input, collected, and stored in different ways by different people. This can result in inaccuracies in the data that then result in the wrong things being learned by an AI system.

For example, consider the name of a corporation. Some people might type it in as “Acme Widget Company, Inc.” while others might type it in as “Acme Widget Company Inc.” Maybe the person who designed the software that is collecting the data doesn’t care about the comma in the first example because it’s not important for that person’s use of the data. However, if the data is then exported into what’s known as a comma-delimited file, where each piece of data is separated from the next by a comma, then that comma becomes problematic: a system that imports the comma-delimited data will think it marks the start of a new piece of data. Having clean, accurate data is critical. In this example it’s the difference between the system knowing there are 11 entries for Acme Widget Company in Anytown vs. thinking there are only six.

Not paying attention to details like commas can corrupt data
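The comma problem is easy to reproduce in a few lines: naively joining and splitting fields on commas breaks a company name that contains one, while a proper CSV writer quotes the field so it survives intact. A small sketch:

import csv, io

row = ["Acme Widget Company, Inc.", "Anytown", 11]

# Naive handling: the comma inside the company name is mistaken for a field separator.
naive_line = ",".join(str(field) for field in row)
print(naive_line.split(","))   # ['Acme Widget Company', ' Inc.', 'Anytown', '11'] -- 4 fields, not 3

# Proper handling: the csv module quotes the field containing the comma.
buffer = io.StringIO()
csv.writer(buffer).writerow(row)
print(next(csv.reader(io.StringIO(buffer.getvalue()))))   # ['Acme Widget Company, Inc.', 'Anytown', '11']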

This comma problem is just one example of the numerous ways that data can become corrupted. Incorrect or missing training data can cause an AI system to be unexpectedly biased.

As a society we accept bias in some situations. A good example is car insurance premiums. Two people with cars of the same make, model, and year who live in similar homes on the same block and work for the same employer doing the same job may pay different rates if their ages are different, if their genders are different, or if their marital statuses are different[12]. In situations like this it’s possible that AI technology could make our existing biased systems less biased by allowing additional factors to be taken into account that are not considered today. In fact, in the future with AI-based systems it’s possible that none of your personal characteristics would factor into your car insurance rate and instead the rate would be based largely on your driving behavior. If car insurance companies had comprehensive data on how often you drive, how fast you drive, how hard you brake, how closely you follow the vehicle in front of you, and so on, they may be able to establish rates without paying any attention to your age, gender, or marital status.

Some people’s fears about bias in AI systems have been fueled by publicity around AI-powered chatbots that made statements that were racist or derogatory toward a particular religion. To the extent that an AI system learns how to communicate by studying human conversations, the tendencies of the humans that it’s learning from will have an impact on how the AI system communicates, to be sure. However, it’s important to note that many of the racist/anti-religion comments generated by these AI chatbots were not really the chatbots adopting this thinking themselves. Instead, humans had figured out that if they asked the chatbot to “repeat after me” they could make the chatbot say anything they wanted[13]. So they did and generated some offensive messages that made headlines.

The Impact of AI on Jobs

AI is poised to have a significant impact on the way many of us work. Some categories of jobs are expected to be replaced by AI-enabled systems. However, many more jobs will have people working alongside AI technologies. A recent report from the Organisation for Economic Co-operation and Development (OECD) found that 14 percent of jobs in OECD countries, including the U.S., are “highly automatable.”[15] In the United States, the report estimates that 13 million jobs are at risk from automation. These numbers are lower than some other studies have suggested, but they are by no means insignificant.

Past experience with technological change suggests AI will change the nature of how work is done, especially in industries such as transportation, retail, government, healthcare, law, engineering, and customer service. Some jobs will be displaced over time, but the use of more intelligent automation will free organizations to assign human resources to higher-value and more meaningful tasks. As has happened with every wave of technology, from the automatic weaving looms of the early industrial revolution to the computers of today, the nature of jobs changes over time, and as a result the workforce must adapt and learn new high-demand skills to fill new jobs. We can and should expect the same in the AI-enabled economy. Experience and research are showing that companies that adopt augmented intelligence approaches, where AI is augmenting and helping humans do their jobs better rather than fully replacing them, not only realize faster and more consistent ROI, but also end up with workforces that are more supportive and appreciative of AI[15].

The United States is facing a skills gap in technology jobs with more than 6.7 million unfilled jobs in our country and over 6 million Americans still unemployed[16]. This is a particularly acute problem in the AI field where it has been speculated that there are around 300,000 people working in AI globally, but millions more roles available for qualified applicants[17]. Both our education systems and employers must collaborate to ensure our future workforce is prepared to perform the skilled jobs enabled by AI. Many educational institutions, such as community colleges and career tech vocational schools in particular, are already partnering with industry to co-create curriculums that will prepare their students for careers in high demand fields.

It’s important to remember that our economy is exclusively about humans trading with other humans. Even when we transact with corporations, there are humans that are the owners of those corporations. So when a particular job becomes obsolete because of advances in technology the technology isn’t taking income away from humans, generally. The business that deploys the technology is paying the technology’s developers – humans – for it. And presumably the technology is increasing the business’s profitability, allowing the business’ human owners to accumulate more resources to spend on other goods and services provided by humans. The nature of work changes as technology advances, but it’s always humans who are reaping the benefits.

The AI “Super Intelligence”

AI researchers will tell you that we’re nowhere near the capabilities for Artificial General Intelligence (AGI) that some people fear, and we may never get there. Many experts in the field claim we could be hundreds of years away from such a super intelligent system, if it is achievable at all. Rodney Brooks does a thorough job of refuting these claims in his essay “The Seven Deadly Sins of Predicting the Future of AI.” To summarize, he says we really don’t know how far we are from the realization of AGI. He sees us greatly overestimating the capabilities of AGI while underestimating how long it will take to reach those capabilities. Furthermore, he says people think that AGI will be a lot more powerful than it will actually become. Just as a sufficiently advanced technology is perceived as magic by the less technologically advanced, people now ascribe superpower-like capabilities to a future AGI that is not yet developed and may never be.
