The Layers of Trustworthy AI


Anyone looking to use or develop AI systems needs ways to maintain trust, provide visibility and transparency, and apply processes and methods that offer greater oversight and accountability for powerful AI systems. That is what the layers of trustworthy AI address.

Why do we Need Trustworthy AI?

We all know we can trust machines to make decisions for us, right?… right?!

Well, at least we know we can trust the companies, organizations, and vendors that are building and implementing AI solutions, right?

… umm, ok maybe we have some issues.

Organizations are increasingly making use of AI systems to power their operations and enable a wide range of applications from the trivial and mundane to the mission-critical. Many of these systems are being put into applications that can potentially impact people’s daily lives and livelihoods. 

At the core of Trustworthy AI are the needs to:

  • Keep you, your customers, your employees, your partners, and your organization safe
  • Keep the trust of your customers, employees, users, and ecosystem
  • Address the fears and concerns of AI

Each of these areas needs to be taken into consideration to build trust. Because you don’t want to spend all that time, money, and resources only to have people distrust the AI systems you build and end up with a very expensive failure.

Fears and Concerns of AI

Hollywood and science fiction have provided us with many examples of bad machines that can threaten our lives, our freedoms, our control, our dignity, and even the environment in which we live. This can lead people to conjure up all sorts of ideas about what intelligent machines are able to do, even if those ideas aren’t correct.

Fears of AI

Common fears people have when it comes to AI have to do with the way people feel about the increasing use of intelligent machines doing things that people would otherwise do. These fears include:

  • Worries that superintelligent machines – Artificial General Intelligence (AGI) – will take over the world
  • Worries that AI systems will take our jobs
  • Worries that we’ll lose control over privacy 
  • Worries that our data will be used for surveillance
  • Worries that there will be too much data in too few hands

While these fears are mostly feelings, they shouldn’t be discounted. Simply telling people that superintelligence isn’t here yet, that robots haven’t taken their jobs (yet), or that they can trust companies and organizations to handle their data doesn’t make these fears go away.

Concerns of AI

People also have some real concerns about AI. These concerns are grounded in actual realities of AI systems that cause real issues around trust. Common concerns of AI include:

  • Lack of transparency (especially around deep learning)
  • Bad actors doing bad things with AI
  • AI systems being vulnerable to tampering and data security breaches
  • Susceptibility to bias in data and issues of fairness
  • Concerns around data privacy and usage
  • Laws and regulations not keeping up with technology

While AI fears might not be easily allayed by assurances or the realities of today, AI concerns can absolutely be addressed in a range of ways to give people greater confidence in the AI systems we use and build.

Bad Machines Doing Bad Things

With AI, we have this idea of bad machines that can threaten our lives, our freedoms, our control, and our dignity. These bad machines can also threaten the environment and not act in the best interests of humanity.

Bad machines are unsafe and can cause physical, financial, emotional, social, mental, and environmental harms. 

We can prevent machines from becoming “bad” by imposing limits, controls, safeguards, and guardrails. We can also monitor and manage these systems, provide controls and testing, and, of course, keep a human in the loop.
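As a concrete illustration of that last point, here is a minimal sketch of a human-in-the-loop guardrail in Python. The model interface (a scikit-learn-style predict_proba), the confidence threshold, and the review queue are all illustrative assumptions, not part of any particular framework or product:

```python
# Minimal human-in-the-loop guardrail sketch (illustrative only).
# Below a confidence threshold, the machine does not decide on its own;
# the case is routed to a human reviewer instead.

AUTO_DECIDE_THRESHOLD = 0.95  # hypothetical policy choice

def decide(model, features, human_review_queue):
    """Return an automated decision only when the model is confident;
    otherwise defer the case to a human reviewer."""
    probabilities = model.predict_proba([features])[0]
    confidence = float(probabilities.max())
    suggestion = int(probabilities.argmax())

    if confidence >= AUTO_DECIDE_THRESHOLD:
        return {"decision": suggestion, "decided_by": "model",
                "confidence": confidence}

    # Keep the human in the loop for everything the model is unsure about.
    human_review_queue.append({"features": features,
                               "model_suggestion": suggestion,
                               "confidence": confidence})
    return {"decision": None, "decided_by": "pending_human_review",
            "confidence": confidence}
```

The design choice here is that the default is deferral: the machine has to earn the right to act autonomously, case by case.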

Bad People Doing Bad Things

We can’t, however, prevent malicious actors from doing bad things as easily.

AI enables people to potentially cause greater harm through their intentional or unintentional acts, whether through malicious use or negligent activities. We don’t want AI systems, and the people that use them, violating our laws, our trust, our privacy, our freedoms, our financial health, the environment, our mental health, or even our lives.

Bad Practices of AI

Even the best organizations with the best of intentions can have poor outcomes that put everyone in danger. Many times these poor outcomes come from bad practices in the way they build, run, use, or manage AI systems.

There are a range of bad practices around AI including:

  • Lack of sufficient oversight and management
  • Improper development practices that misuse data or increase vulnerabilities
  • Lack of a good/valid reason or a positive purpose for implementing the AI system, which puts employees, customers, and users at risk
  • Removing the human from the loop and putting machines in charge, causing issues of lack of accountability, lack of ability to contest or dispute AI system results, and a general feeling by people of a lack of “agency” or ability to control their lives and destiny

Bad practices are not solved with more technology, but rather by implementing processes, procedures, training, and greater control by people over the systems that impact them.

Bad Visibility into AI Systems

Lack of visibility is another area we need to think about. Is there limited visibility into the data and the processes being used? Far too often, people are just blindly trusting AI without really knowing what went into creating it.

There is also often limited disclosure and limited consent when using AI systems. 

People are using AI systems and don’t really understand what it is they’re using, or what they agreed to give over in exchange for using the AI system.

 Users have limited to zero visibility into algorithm selection, training data, and algorithmic behavior. 

  • How transparent do you want to be? 
  • How much visibility do you want to provide? 
  • How much visibility is necessary? 

If customers and employees start asking questions like those above, how will you be able to address and answer them? These are all things that you need to take into consideration when you are building AI systems with trust in mind.

The Layers of Trustworthy AI

Trustworthy AI isn’t just one thing. It’s a collection of different things that need to come together.

Oftentimes, discussions around AI ethics, responsible AI, and trustworthy AI generally just revolve around the set of what is “right” vs “wrong” with regards to intelligent systems. 

However, Trustworthy AI needs to address people and their well-being. It needs to address AI systems not doing harm. It needs to address bad people not doing bad things with AI systems and how you will respond if and when that happens. It needs to address areas related to safety, care, and responsibility. It also relates to the use and disclosure of data, visibility into data and algorithm selection, governance, and trusting the algorithms themselves.

It turns out that all these things discussed above are not at the same level of concern in terms of who needs to worry about them and how and when you implement them.

This is why you need to think about Trustworthy AI in layers. But how many layers? And what needs to be addressed in each layer? 

At a high level, the 5 main layers of Trustworthy AI address:

  • Societal ethics – These are ethical principles that provide guidelines for AI systems to participate in society in a positive manner, address concerns about human values (such as do no harm), benefits to broad human populations, issues around bias, diversity, and inclusivity, and aspects of human control, freedom, and agency.
  • Responsible use of AI – Addresses the potential for misuse or abuse of AI, concerns around safety and privacy, trust, human accountability, and other factors that make sure that AI systems are used in appropriate ways.
  • Systemic AI transparency – Ethical principles that focus on giving human users as much visibility as possible into overall system behavior, including visibility into data and AI configuration, appropriate disclosure and user consent, means for detecting bias and mitigating it, and use of open systems.
  • AI governance – These ethical principles focus on aspects of process and control to provide predictable and reliable interaction with AI systems as well as the ability to audit and monitor AI systems, and potential third-party regulation or certification of systems.
  • Algorithmic explainability – AI systems, especially ones based on deep learning neural networks, are often accused of being black boxes that provide no understanding of how the machine arrived at its conclusions. This set of ethical principles provides guidelines for algorithmic explainability where possible and, where not, other ways of interpreting or gaining a common-sense understanding of algorithmic decision-making.

We can visualize the layers of Trustworthy AI as a stack, with societal ethics at the base and algorithmic explainability at the top.

The Ethical AI Layer

 At the base layer are societal issues. We want machines that will comply with the most fundamental of human values. We don’t want to build systems that make societal ethics problems worse. We want to avoid harm, both physical and other forms such as emotional or financial. We want to make sure AI systems will never be beyond our ability to control them. 

This layer addresses general guidelines everyone should follow, such as absolute rules that no AI system should ever cross. We need to think about all of humanity at this layer and focus on things our AI systems shouldn’t do or should avoid doing. In this layer we need to address and have appropriate answers for areas related to:

  • Human Values – Machine-based systems should exhibit the same values that we have as humans. Do no harm.
  • Dignity – AI systems should not treat humans as machines.
  • Fairness – AI systems should not favor one group over another.
  • Diversity & Inclusion – AI systems should be built for and incorporate data from the breadth of humanity.
  • Bias & Discrimination – AI systems shouldn’t further bias or discrimination.
  • Freedom & Agency – AI systems shouldn’t limit human choice or freedom of action.
  • Human Benefit – AI systems should be built for the benefit of the widest group of humanity, and not for the benefit of a few.
  • Human Control – AI systems should never operate without humans in control.
  • Respect of the Environment – AI systems should take care not to abuse or harm the environment.

The Responsible AI Layer

Next is the responsible AI layer, which addresses AI from a societal and systemic perspective. Just because you can do something, even ethically, doesn’t mean you should do it. You need to address how to do it the right way. In other words, you need to be careful about the way you do those things when it comes to AI.

This layer addresses setting out considerations for proper use versus misuse, abuse, or improper use of AI systems. Take facial recognition technology for example. The technology itself is neutral. It’s the application of that technology that makes it “good” or “bad”.

This layer also addresses issues related to laws and regulations. Responsible AI shouldn’t violate laws. Responsible AI shouldn’t violate or abuse people’s privacy. Responsible AI should focus on keeping people and their data safe. Responsible AI prevents the abuse or misuse of AI technology. Responsible AI provides a well-defined trail of human accountability.

 In this layer we need to address and have appropriate answers for areas related to:

  • Positive Purpose – AI systems must be built for some positive purpose
  • Safety & Security – AI systems should be safe and secure
  • Trust – AI systems should not violate human trust or cause people to mistrust entities
  • Human Accountability – Human individuals should be identified who are responsible and accountable for the behavior and operation of the AI systems
  • Privacy – AI systems should not violate the privacy of humans or impose state-wide surveillance
  • Misuse, Abuse, & Compliance with Laws – Humans should not misuse AI systems for any criminal or unlawful purpose, or use AI to circumvent laws or regulations
  • Lethal Autonomous Weapons – AI systems should not enable lethal autonomous weapons
  • Workforce Disruption – AI systems should not be built that have as an intentional goal the mass replacement of human workers and/or mass disruption to economies

The Transparent AI Layer

The transparent layer has to do with how we manage and run our systems. In this layer you need to address how you plan to provide visibility or transparency into how AI systems are created and applied.

AI system transparency is about providing visibility into all the aspects of what went into building an AI system, so users can understand the full context of how it was built and is used. After all, you can’t ask people to trust something when they have absolutely no idea how it was built or what actually went into training it. You don’t want to spend lots of time and money and devote resources to a solution only to have people not trust the system and not use it.
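As one lightweight illustration of providing that visibility, a “model card” published alongside a system can record what went into it. The sketch below is a minimal, hypothetical schema in Python, loosely inspired by common model-card practice; the field names and example values are assumptions, not a standard:

```python
# Minimal, hypothetical model-card sketch for AI system transparency.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    accountable_contact: str = ""  # the human accountable for the system

card = ModelCard(
    model_name="loan-approval-assistant",  # illustrative example
    version="2.3.1",
    intended_use="Assist, not replace, human loan officers",
    training_data_sources=["Anonymized 2019-2023 application records"],
    known_limitations=["Applicants under 21 are underrepresented"],
    accountable_contact="ai-governance@example.com",
)

print(json.dumps(asdict(card), indent=2))  # publish alongside the system
```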

So, in this layer of trustworthy AI we need to address and have appropriate answers for areas related to:

  • AI System Transparency – AI systems should provide visibility into the data and components of the system, and the configuration used to generate results. Human decisions on the operation, versioning, development, and use of the AI system should be disclosed and open.
  • Bias Measurement & Mitigation – AI systems should provide a means to constantly measure bias from various sources and provide means to mitigate any bias detected (see the sketch after this list).
  • Open Systems – AI systems as a whole should use open source technology, with the mechanism by which the system operates visible to all.
  • Disclosure & Consent – Organizations should disclose when AI systems are being used and when humans are interacting with AI systems. AI systems should provide a means for humans to opt out of interacting with AI systems, being included in AI models, or otherwise being impacted by the AI system.
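Here is a minimal sketch of the bias measurement idea referenced in the list above: compare the rate of favorable outcomes across groups, a simple demographic parity check. The data, group labels, and threshold are illustrative assumptions; a real system would monitor many such metrics continuously:

```python
# Illustrative demographic parity check: compare favorable-outcome rates
# across groups and flag gaps that exceed a policy threshold.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of favorable (1) outcomes for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred == 1)
    return {g: favorable[g] / totals[g] for g in totals}

# Hypothetical model outputs and group membership for audited records.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", parity_gap)
if parity_gap > 0.2:  # the threshold is an illustrative policy choice
    print("Gap exceeds policy threshold; trigger a mitigation review.")
```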

The Governed AI Layer

The governed AI layer of trustworthy AI addresses your practices and processes for managing your AI systems. This includes addressing how you audit, measure, regulate, guide, secure, and provide processes for your AI systems.

Organizations need established methods to assess the ongoing risks of AI systems and identified means to mitigate those risks. You want to address what processes, procedures, controls, audits, and governance mechanisms will be put in place to keep an eye on, and maintain control of, your systems.

In this layer we need to address and have appropriate answers for areas related to:

  • System Auditability – AI systems should provide ways to audit all aspects of operation and behavior (see the sketch after this list)
  • Contestability – AI systems need to provide ways to contest or appeal AI decisions for human review
  • Risk Assessment & Mitigation – Organizations need established methods to assess ongoing risk to AI systems and identified means to mitigate those risks
  • System Monitoring & Quality – Organizations need to make sure that AI systems are always operating within acceptable performance, usage, and other parameters
  • Education & Training – Ethical frameworks should require those who are involved in AI system creation or use to be trained in the proper development and use of those systems
  • Regulation & Certification – AI systems should comply with the requirements of regulatory bodies and third-party certifications, with regular third-party audits and certifications of ethical operation
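The following sketch illustrates the auditability and contestability items above: every automated decision is logged with enough context to review, reproduce, and appeal it later. The JSON-lines file and field names are illustrative assumptions; a production system would use tamper-evident, access-controlled storage:

```python
# Illustrative audit-trail sketch: append-only decision log that gives
# each decision an id a user can cite when contesting it.
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"  # hypothetical log location

def log_decision(model_version, inputs, output, decided_by):
    record = {
        "decision_id": str(uuid.uuid4()),  # handle for appeals
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "decided_by": decided_by,  # the model or a named human
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# A user contesting a decision can cite this id for human review.
decision_id = log_decision("2.3.1", {"income": 52000}, "denied", "model")
print("Reference for appeal:", decision_id)
```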

The Explainable AI Layer

The final layer of Trustworthy AI is that of interpretable and explainable AI. This layer addresses the technical methods that go into understanding system behavior and make black boxes less so.

Separate from the notion of transparency of AI systems is the concept of AI algorithms being able to explain how they arrived at particular decisions. The ability of AI algorithms to explain the exact cause and effect from input data to output result is known as algorithmic explainability. However, it is widely recognized that not many ML approaches are inherently explainable, deep learning in particular.

Deep learning is incredibly popular. But it’s also incredibly opaque. This opacity is why such systems are referred to as “black boxes”: it is not possible to fully explain how certain decisions were made given the inputs. This results in challenges of AI explainability that have not been fully resolved.

Users have no idea what settings and configurations went into creating the model, and they have no idea how the model actually arrived at the result it did. This is the idea of a “black box”: a system that doesn’t provide transparency or understanding of how it operates in a manner sufficient to understand how specific inputs result in specific outputs.

Relying on black box technology can be dangerous. Without understandability, we don’t have trust. To trust these systems, humans want accountability and explanation.

But, can we really understand how algorithmic decisions are being made? That’s what this layer aims to address.

Getting verifiable explanations of how machine learning systems make decisions, and letting humans be in the loop, is key.

In this layer we need to address and have appropriate answers for areas related to:

  • Understandability / Root Cause Explanations – When AI systems fail to provide expected results, they should always provide a human-understandable means to determine the root cause of the failure, even without a full algorithmic explanation
  • Algorithmic Interpretability – AI systems should provide a means to interpret AI results so that cause and effect can be understood, even with limited algorithmic explainability (see the sketch after this list)
  • Algorithmic Explainability – AI systems should use algorithms that provide a direct means to explain how outcomes were arrived at from input data
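As a sketch of the interpretability item above, the following uses permutation importance, a model-agnostic technique that estimates each feature's contribution by measuring how much the model's score drops when that feature's values are shuffled. It yields interpretation rather than a full algorithmic explanation; scikit-learn and the toy data here are assumptions for illustration:

```python
# Illustrative model-agnostic interpretability via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data and model standing in for an opaque production system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the resulting drop in model score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {importance:.3f}")
```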

Where did these Layers of Trustworthy AI come from?

Cognilytica has reviewed and analyzed over 60 different trustworthy AI frameworks. These frameworks were published by a variety of different entities, different national governments, various government agencies, NGOs, academic institutions, nonprofits, and standards bodies. 

Many different nations, from the UK to China to the United States, have been releasing trustworthy AI frameworks. And what we noticed is that they are not apples-to-apples comparisons.

Among all those groups, made up of researchers, dedicated ethicists with decades of experience, and knowledgeable AI practitioners, one would think there would be a well-defined, comprehensive Trustworthy and ethical AI framework that could easily be applied to guide the use and development of AI systems for all needs.

However, after looking at these 60+ different frameworks, we realized that there was no truly comprehensive Trustworthy, ethical, and responsible AI framework.

Many of them had huge gaps, failing to address critical topics. Some of these gaps were incredibly eye-opening and sometimes scary.

Many were just a patchwork of ethical concepts, with terminology and meaning frequently blurred across different concepts and vague in definition or application.

Individually, none of the frameworks provide enough guidance for an organization to make use of just one framework for their needs.

This is why we put together the layers of trustworthy AI. We did all the work so you don’t have to!

Cognilytica’s Trustworthy AI Framework – your roadmap to success

If people and organizations want to move forward, and they don’t want to reinvent the wheel and come up with yet another framework, then what can be done to help?

Well, in aggregate, these frameworks we reviewed provide a good picture of the totality of Trustworthy AI needs and ways in which we can apply the principles to specific needs.

So, we compared each framework against each other. 

We normalized the terminology. 

We identified the core principles shared from each published framework. 

And then we established clearer terminology to categorize their meaning, identified the best examples of each Trustworthy AI principle, and categorized each idea and concept into the appropriate layer.

The result is Cognilytica’s comprehensive Trustworthy AI framework that takes into account the full spectrum of AI concepts across the five different layers.

Our framework is comprehensive, it’s extendable, and it covers every layer and component of trustworthy AI because we want to make sure that you’re really understanding it and leaving no stone unturned. 

Just like with any framework, you need to bring the right team to the table. You need to bring in the necessary roles, decision makers, and stakeholders, and answer the tough questions at every layer.

Each framework component is a decision that needs to be made for a given project, and your organization needs to decide how to deal with each topic. Remember, at the end of the day, you need to make sure that your organization feels comfortable moving forward and is really taking trustworthy AI seriously. It’s one thing to have a framework, but it’s another to actually adapt it and use it regularly across the organization. Because, after all, Trustworthy AI is not just something to say you’re doing. There are real, substantial liabilities and risks to untrustworthy, unethical, and irresponsible AI.

To learn more about Cognilytica’s Trustworthy AI framework, the 5 layers of Trustworthy AI, and how to apply it in your organization, you can take our free Intro to Trustworthy AI course.

Get Certified in Trustworthy AI – and learn how to Apply the Layers of Trustworthy AI!

If you want to take this one level deeper, and gain a comprehensive understanding of the Five Layers of Trustworthy AI as well as learn the step-by-step approach to building a Framework for Trustworthy AI to keep your AI systems safe and ethical, consider getting our certification.

Cognilytica’s Trustworthy AI Framework Training & Certification is the most comprehensive, vendor-neutral Trustworthy AI training and certification. Learn how to build and run Trustworthy AI systems. Boost your credentials. Advance your career.

And keep your AI solutions, organization, customers, and stakeholders trustworthy. Get certified and learn how to craft Trustworthy AI frameworks that work.

Good luck on your Trustworthy AI journey!
