By Swathi Young

What, Why, and How of Ethical Design of AI Systems


Why does it matter?

When it comes to the negative and detrimental outcomes of AI systems, the world is your stakeholder. AI systems are driving decisions about entertainment, transportation, finance, shopping, supply chains, clinical trials, criminal justice, and health outcomes. AI products have moved well beyond providing functional and technical solutions; they now influence societal changes in the way people interact, work, and live.

In that context, it is imperative that we think about the bigger picture - how AI systems and their capabilities shape outcomes - especially when those outcomes perpetuate societal injustice through unfair and unjust results.

Here is an image published by the AI Now Institute, "a research institute examining the social implications of artificial intelligence."


Here are six steps to help you ask the right questions about the ethics of your AI systems:


1. Begin with the user, their needs, and objectives:

When it comes to AI/ML use cases, organizations often start with the data and dig quickly into prototyping a supervised learning algorithm. While this can produce short-term results in a siloed use case, it makes it hard to 1. scale and 2. measure outcomes and impact. An alternative is to spend sufficient time with users discussing and debating the use case: understanding the current business process, the heuristics behind it, and user expectations of outcomes.



2. Think about long-term outcomes:

Conversations among a diverse set of stakeholders are recommended - jotting down expected results, aligning the use case to organizational and departmental goals, and even thinking about second- and third-degree impacts.


3. Provide Opt-ins:

One of the things that surprises me about "2001: A Space Odyssey" is that no one thought to provide an opt-out function. The only way HAL could be disabled was to physically pull the plug. Jokes aside, design an opt-in feature for recommendation engines, especially those at increased risk of erroneous outputs. Consider a recommendation of bail versus no bail in criminal justice: if the algorithm starts producing skewed results, the user should be able to temporarily suspend the system instead of circling back to the vendor or the AI engineer.
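To make the idea concrete, here is a minimal sketch in Python of such a suspension mechanism. The `SuspendableRecommender` class, its `manual_review` fallback, and the toy bail model are all hypothetical names for illustration, not part of any specific product.

```python
from typing import Any, Callable

class SuspendableRecommender:
    """Wrap a model so operators can suspend automated recommendations
    and fall back to human review without redeploying the system."""

    def __init__(self, model: Callable[[Any], str], fallback: str = "manual_review"):
        self._model = model
        self._fallback = fallback
        self._enabled = True  # serving model outputs until suspended

    def suspend(self) -> None:
        """Operator-facing kill switch: stop serving model outputs."""
        self._enabled = False

    def resume(self) -> None:
        self._enabled = True

    def recommend(self, case: Any) -> str:
        if not self._enabled:
            return self._fallback  # route the decision to a human
        return self._model(case)

# Hypothetical usage: suspend the moment skewed results are noticed.
recommender = SuspendableRecommender(model=lambda case: "deny_bail")
print(recommender.recommend({"id": 1}))   # deny_bail
recommender.suspend()
print(recommender.recommend({"id": 1}))   # manual_review
```

The key design choice is that suspension lives with the user-facing wrapper, not inside the model, so no vendor round-trip is needed to pause the system.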

4. Explain the results differently based on accuracy/confidence levels:

Developing machine learning algorithms is only one aspect of your solution. Understanding the accuracy of the results and how they impact end users requires a highly skilled domain expert to work alongside the data scientist in interpreting the results. Ensure that less accurate results are appropriately flagged.
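One simple way to act on this is to phrase a prediction differently depending on the model's confidence. This is a minimal sketch; the function name and the 0.9/0.6 thresholds are illustrative assumptions that a domain expert would set for the actual use case.

```python
def present_prediction(label: str, confidence: float,
                       high: float = 0.9, low: float = 0.6) -> str:
    """Phrase a model's output differently depending on its confidence,
    so end users can tell firm predictions from tentative ones.
    Thresholds are illustrative and should be set per use case."""
    if confidence >= high:
        return f"Predicted: {label} (confidence {confidence:.0%})"
    if confidence >= low:
        return f"Likely {label} (confidence {confidence:.0%}) - please verify"
    return f"Low-confidence result ({confidence:.0%}) - flagged for expert review"

print(present_prediction("approve", 0.95))  # firm wording
print(present_prediction("approve", 0.40))  # flagged for review
```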

5. Examine your data (rinse and repeat):

Data is at the core of all AI products, systems, and applications. Most applications/products use historical data that is skewed due to pre-existing biases. You can use technology tools and solutions to deal with biased datasets. However, for a holistic solution, focusing on the following can help:

• Team composition – diversity of thought and backgrounds

• Data provenance – what does the data represent?

• Post-deployment monitoring – continuous monitoring of outputs and data variability

• Evaluate for inclusion
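Evaluating for inclusion can start with something as simple as counting how groups are represented in the training data. Here is a minimal sketch; the `representation_report` helper and the toy loans dataset are hypothetical, for illustration only.

```python
from collections import Counter

def representation_report(records, attribute):
    """Summarize how a given attribute is represented in a dataset,
    so under-represented groups are visible before training."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical historical dataset: the skew is obvious at a glance.
data = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
print(representation_report(data, "gender"))  # {'M': 0.8, 'F': 0.2}
```

Running a report like this after every data refresh is one way to make "rinse and repeat" a routine part of post-deployment monitoring.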

Here is a pictorial of the various aspects of data to think about:


6. Disclose the "secret sauce":

An important question to consider is: "Why did the algorithm make this decision? How can we explain the results?" This matters not only for building trust in the AI product but also for explaining its decision-making process in a court of law.

Interpretability and explainability of machine learning models is a growing research area. Depending on the type of models and algorithms being developed, you can use various methods. Here is a pictorial of some commonly used technical tools:
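As one illustration of how these tools work, here is a minimal, model-agnostic sketch of permutation importance, one commonly used explanation technique: shuffle one feature at a time and measure how far the model's score falls. The bigger the drop, the more the model leans on that feature. All names and the toy model here are illustrative assumptions.

```python
import random

def permutation_importance(predict, X, y, score, n_repeats=10, seed=0):
    """Shuffle each feature column in turn and report the average drop
    in the model's score. Works with any predict/score functions."""
    rng = random.Random(seed)
    baseline = score(predict(X), y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the target
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - score(predict(X_perm), y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model that only looks at feature 0; feature 1 should score ~zero.
predict = lambda X: [1 if row[0] > 0.5 else 0 for row in X]
accuracy = lambda preds, y: sum(p == t for p, t in zip(preds, y)) / len(y)
X = [[i / 10, (i * 7) % 10 / 10] for i in range(10)]
y = predict(X)
print(permutation_importance(predict, X, y, accuracy))
```

Libraries such as scikit-learn ship a production version of this idea (`sklearn.inspection.permutation_importance`); the sketch above just shows the mechanism behind it.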


In conclusion, there are many aspects to ethically designed AI systems; these six essential steps are a place to start.
