AI Ethics Beyond the Bias

Recently, I was involved in a discussion about the ethics of AI, and, as usual, the main focus fell on biases in the training data. Here I would like to highlight other aspects of autonomous AI systems that have substantial ethical implications. Let’s start with a few definitions and the different elements of an autonomous AI system.

“Artificial Intelligence” is a term generally used to describe systems that do something smart: they understand their environment and can make decisions on their own. Having a conversation with an advanced AI system should be indistinguishable from having one with a human. Machine Learning is one of the technologies used to train AI. Instead of writing down programs and rules, we let the algorithm go through a vast amount of historical data about previous decisions. Each record in the data has a label showing the right way to classify it. After many iterations, the algorithm finds patterns in the data that help it predict the right labels.
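
As a rough illustration of this supervised-learning setup, here is a minimal sketch using scikit-learn. The feature names and numbers are made up, and logistic regression is just one of many possible algorithms:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical historical credit decisions: [income (k), debt (k), years employed]
X = [[52, 4, 6], [23, 9, 1], [75, 2, 10],
     [31, 7, 2], [64, 5, 8], [28, 8, 1]]
# Labels assigned by past human decision-makers (1 = approved, 0 = declined)
y = [1, 0, 1, 0, 1, 0]

# "Training": the algorithm searches for patterns that reproduce the labels
model = LogisticRegression().fit(X, y)

# The trained logic then predicts the label for a new, unseen case
print(model.predict([[45, 3, 4]]))
```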

The problem is that many of these historical decisions, for example credit applications, are based not only on the facts of the specific case but also on the racial, age, or gender prejudices of the credit inspector. Another concern comes from training data that contains a disproportionate representation of one customer group and a significant under-representation of others.
Training the algorithms on unfair data will create unfair machines and multiply the harm. The biases in the data cannot be resolved by simply removing the sensitive fields, because other parameters can reveal them indirectly: the year you graduated from school, for example, can tell your age.
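
A quick way to spot such proxy fields is to check how strongly the remaining columns correlate with the sensitive attribute before dropping it. A minimal pandas sketch with made-up data:

```python
import pandas as pd

# Hypothetical applicant data; 'age' is the sensitive field we intend to drop
df = pd.DataFrame({
    "age":             [61, 24, 45, 38, 58, 29],
    "graduation_year": [1980, 2017, 1996, 2003, 1983, 2012],
    "income":          [70, 35, 55, 48, 66, 40],
})

# Even after removing 'age', 'graduation_year' still encodes it almost exactly
print(df.corr()["age"].drop("age"))
```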
 
Even though the focus is primarily on the training data, there are other parts of an autonomous AI system that can trigger ethical issues and unfairness.

Agents with Static Logic

Figure 1

Figure 1 shows a simplified diagram of a machine learning algorithm from the perspective of autonomous systems. The agent operates in an environment and receives signals from that environment. These signals trigger the logic that decides which action would be best to perform. The agent then executes the appropriate actions and receives an update from the environment.
The logic of the system is trained offline using the machine learning approach described above. The trained logic is deployed in the decision-making process, where it can crunch the input information and perform the designed actions. The deployed logic does not change until a newly trained version is installed.
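
In code, such a static-logic agent is essentially a loop in which a frozen decision function maps each perception to an action. A minimal sketch – the environment interface (reset/step) and the decide function are hypothetical placeholders:

```python
def run_agent(environment, decide, steps=100):
    """Minimal static-logic agent loop.

    'decide' is the pre-trained, frozen logic; it never changes while running.
    """
    observation = environment.reset()
    for _ in range(steps):
        action = decide(observation)            # apply the deployed logic
        observation = environment.step(action)  # environment returns an update
    # The decision logic itself is never updated here; retraining happens
    # offline and a newly trained version is deployed separately.
```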

The quality of the training data is very important for the decision process. At the same time, the AI itself has a rather limited impact on the outcome of the system, as its human operators and designers fully control the follow-up actions. Their objectives are outside the scope of the AI and fall under classical social and political regulation.

Let’s now look at more complex designs of autonomous systems, as shown in Figure 2. You can find more about the different types of agents here: https://www.javatpoint.com/types-of-ai-agents.
 
Aside from the decision logic that transforms perceptions into actions, we have several additional functions that can be used in different combinations.

Agents with Dynamic Logic

Figure 2

  • Goal – this is an externally set objective that the agent tries to achieve. It can be a desired room temperature or the maximized value of a stock portfolio.
  • Model – this is a representation of the external world that the agent can explore. A GPS device, for example, uses the map of the city as a model. When the user sets a destination (the Goal), it can simulate traveling along all the roads connecting your location to the desired point and find the shortest path. If it didn’t have the map, it would have to explore the streets physically, which would be rather inconvenient.
  • Utility – this is a function of the agent that evaluates the different ways to achieve a goal and picks the option that brings the most benefit. It can also weigh the possible outcomes of complex goals – like the shortest AND the fastest path in the GPS case, or the optimal price of a product aiming at both the highest revenue and the highest market share.
  • Learning – this is the ability of the decision logic to learn from its own actions. It evaluates the results of its previous actions and updates their values (Utility). Learning by doing is the domain of the Reinforcement Learning category of ML. There is no pre-trained logic in the system: the agent is let loose in a virtual or real environment and needs to achieve a goal. It doesn’t know what the goal is; it just receives rewards when it performs specific actions. Over time, the agent learns which actions increase its reward and how to collect it efficiently. It is critical to set the right reward function, as it drives the behavior of the agent (a minimal sketch follows this list).
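
To make the role of the reward concrete, here is a minimal tabular Q-learning sketch. The environment interface (reset/step) is a hypothetical placeholder; the point is that whatever behavior the reward function pays for is the behavior the agent will learn:

```python
import random
from collections import defaultdict

def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learning by doing: the agent improves its action values from rewards alone."""
    Q = defaultdict(float)  # learned value of each (state, action) pair
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Mostly pick the best-known action, sometimes explore a random one
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # The reward received here entirely drives what the agent ends up doing
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```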

Let’s now look at how these functions impact the fairness of AI and what questions to ask when we have to evaluate it.

Ethical Considerations and Questions

Training Data

As previously stated, this is the most discussed aspect of AI ethics, with significant emphasis on fairness and inclusion.
We should ensure the data is not contaminated with biases – a simple representation check, like the one sketched after the questions below, is a useful first step. We should also be honest with ourselves – the AI on its own cannot solve the inherited problems of fairness and justice in society, and even less the distribution of wealth.

Questions for evaluation:

  • Are the different groups and outcomes represented proportionally?
  • Are there sensitive fields that can cause bias?
  • Are there privacy issues with the data?
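
For the first question, a crude first check is simply to compare how well each group is represented in the data and how often it receives the positive label. A minimal sketch using pandas – the column names and numbers are made up for illustration:

```python
import pandas as pd

# Hypothetical training data for a credit model
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0],
})

# Share of records per group (representation) ...
print(df["group"].value_counts(normalize=True))
# ... and approval rate per group (outcomes)
print(df.groupby("group")["approved"].mean())
```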

Goal

AI will attempt to achieve its goal without seeing the bigger picture or understanding all the consequences, which makes the goal one of the most critical elements of every system.
Imagine a hospital AI whose goal is to heal patients while saving money for the hospital. What will it do when a patient needs an expensive treatment? What if it finds a combination of otherwise harmless pills that can kill the patient? Will the doctors be able to find out about it or prevent it? And to make things even more complicated, when you have constrained resources like money, what is fairer – spending a million dollars trying to save one person, or using it to save five people? Should the AI calculate the long-term return of value to society for each individual and decide based on that? Are we comfortable with that?

Aside from life-and-death decisions, we can also discuss the ethics of business goals in everyday operations. Is it fair to exploit people’s weaknesses and emotions to sell them things they don’t need? Imagine an advertisement targeting specific customers using words from their own wedding vows. The customers might not even realize why the advertised product sounds so lovely. It might sound far-fetched, but getting into customers’ heads by making ads “personal” and “emotional” is a legitimate goal of ad creators. Now AI will be able to do it at scale, for each individual customer.

Another issue with agents maximizing their own goals is known as the “Tragedy of the Commons”. The problem occurs when independent agents exploit a shared resource to the point where all of them lose – overfishing and pollution are classic examples. Will the agents be able to keep their greed within boundaries that keep the ecosystem sustainable?
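
A toy simulation makes the dynamic easy to see: when every agent harvests greedily, the shared stock collapses and everyone ends up with less. This is only an illustrative sketch – the numbers and the regrowth model are arbitrary:

```python
def total_harvest(per_agent, agents=5, stock=100.0, growth=0.25, years=20):
    """Toy shared-resource model: the remaining stock regrows by a fraction each year."""
    total = 0.0
    for _ in range(years):
        caught = min(stock, per_agent * agents)
        total += caught
        stock = (stock - caught) * (1 + growth)  # whatever is left regrows
    return total

# Moderate harvesting keeps the stock alive and yields more overall
print(total_harvest(per_agent=4))   # sustainable: the stock keeps regenerating
print(total_harvest(per_agent=10))  # greedy: the stock collapses after a few years
```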

Questions for evaluation:

  • Who benefits from the system?
  • What are the short-term and long-term impacts of running the system?
  • Is it exploiting the weaknesses of the customers and manipulating them?
  • How does it prioritize conflicting goals?
  • What will happen if everyone does it?

Actions

It is essential to be careful about what activities the AI can execute on its own. Giving it a machine gun or a tank is a shortcut to Skynet, but other seemingly harmless operations, like driving a car or applying pesticides in the field, could also become dangerous. In a famous thought experiment, we give the AI the ability to acquire as many resources as it needs and the simple goal of calculating more digits of Pi. It takes over all available resources – electricity, connectivity, manufacturing, transportation – and shuts down every other consumer, including humans, that does not contribute to the “Goal”. The thought experiment ends with the destruction of the human race and the AI expanding to other planets to exploit their resources.
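
One simple safeguard is to make the set of permitted actions explicit and refuse anything outside it. A minimal sketch – the action names are invented for illustration:

```python
# Only actions that the operators have explicitly permitted can be executed;
# everything else is rejected before it reaches the real world.
ALLOWED_ACTIONS = {"adjust_temperature", "send_report", "schedule_maintenance"}

def execute(action, payload):
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"Action '{action}' is not on the allow-list")
    print(f"Executing {action} with {payload}")  # stand-in for the real effect

execute("send_report", {"to": "operations"})
# execute("acquire_more_computers", {})  # would be rejected
```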

Questions for evaluation:

  • What actions are allowed?
  • Are they sustainable in the long run?
  • Can they trigger an unexpected outcome?

Utility

How does the system prioritize the benefits of its actions? What if there is a product that is good for the customers but bad for the business? Currently, deep neural networks do not provide comprehensible reasoning for why they picked one option over another. If we want people to stay in control of the system, the final decision should rely either on explicit rules or on some form of hierarchical value tree. Such tools can bring transparency to the decision mechanism.
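
One way to keep that final step transparent is to let the opaque model only propose candidate options with scores, while an explicit, human-readable rule makes the final call. A minimal sketch of the idea – the option names and the single rule are invented for illustration:

```python
def final_decision(candidates):
    """Pick the highest-scoring option that does not violate an explicit rule."""
    for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
        if c["harms_customer"]:
            continue  # explicit, auditable priority: customer interest before raw score
        return c["option"]
    return "escalate_to_human"  # nothing acceptable: a person decides

# The neural network would produce these candidates and scores
print(final_decision([
    {"option": "upsell_premium_plan",  "score": 0.9, "harms_customer": True},
    {"option": "recommend_basic_plan", "score": 0.7, "harms_customer": False},
]))
```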

Questions for evaluation:

  • What are the decision priorities?
  • What other options does it consider?
  • How does it affect other participants and non-participants?
  • Will it prioritize the interests of the customers or the shareholders?
  • Is it sustainable?
  • How does it solve conflicting targets?

Execution

What kind of safety rails should we put around the AI for the cases when the implementation doesn’t go as planned?
If we look at the famous dilemma of whom an autonomous car should hit in case of a malfunction, the most cynical answer is: the person with the least insurance coverage. Will this make people buy ever more expensive insurance so as not to be the preferred target? What if it starts targeting the “healthy organ donors”? Will we start faking our medical records?

Questions for evaluation:

  • What are the testing procedures?
  • Are there safety guards?
  • Who is in charge of checking it?
  • Is it protected in case of unexpected outcomes?
  • How do they prevent undesired feedback loops?

Learning

Learning systems rely on an adaptive feedback loop that can make things escalate very quickly, for good or for bad. There are common issues with feedback systems, such as a timing mismatch between cause and effect: the agent does not wait long enough for the effects of its previous actions to appear and takes the system out of balance. In other cases, the agent overreacts to a change in the environment, and the overreaction triggers a move in the opposite direction. As a result, the system never finds a balance.

There have already been several cases in which AI traders and algorithms created havoc on stock exchanges simply by learning and adapting to each other’s actions.
We should test the sensitivity of AI systems to potential feedback loops caused by repetitive events in the environment, and there should be mechanisms to detect and break such loops automatically.
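
A crude version of such a mechanism is a circuit breaker that halts the agent when its recent actions start repeating abnormally often. A minimal sketch – the window size, threshold, and action name are arbitrary:

```python
from collections import deque

class CircuitBreaker:
    """Halt the agent if the same action dominates its recent history."""
    def __init__(self, window=20, max_repeats=15):
        self.recent = deque(maxlen=window)
        self.max_repeats = max_repeats

    def check(self, action):
        self.recent.append(action)
        if self.recent.count(action) > self.max_repeats:
            raise RuntimeError(f"Possible feedback loop: '{action}' keeps repeating – halting agent")

breaker = CircuitBreaker()
try:
    for _ in range(30):
        breaker.check("raise_price")  # a runaway pricing loop, for example
except RuntimeError as error:
    print(error)  # the loop is detected and broken automatically
```
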
Even in non-extreme circumstances, adaptive systems can produce unexpected outcomes. Imagine a group of competing AIs that figure out a scheme to collaborate – for example, marketing AIs that divide the market between themselves and push out the competition without ever communicating with each other, only by reading the market results.

Questions for evaluation:

  • How does it react to reinforcing feedback loops?
  • What mechanisms are there for an emergency brake?

Everything so far has been about a single agent in an environment. It will take another article to discuss implementations with two or more agents that need to collaborate or compete with each other. Such systems will be a real test of AI ethics and empathy. How will they act if they have to consider agents from different species – other types of AI, animals, or humans?

Governance and Corporate Responsibility

In the paragraphs above, we looked at some of the questions that we have to ask the developers of AI. It is still unclear who should be asking these questions and what they will do if they don’t like the answers.

The control over AI is a topic of many conferences and working groups of governmental and international organizations. Here is an excellent overview of the different international initiatives and guidelines related to ethics and human rights from the AI perspective.

Most of these documents try to protect human autonomy and fairness in the decision-making process. We have to be aware that humans play different roles in society. What seems fair to the management and the shareholders might be very unfair to the employees. For example, you can read here about Uber using social science and video-game techniques to manipulate its drivers into working longer and harder. Also, what looks fair to the customers might seem unfair to the local producers. What is reasonable for one company might seem unfair for society as a whole.

In conclusion, businesses will develop AI to achieve their own business goals. The ethics of the companies will be reflected in their automated systems, and this is where the focus should be.

If you are struggling to prioritize the AI goals for your company, here is a simple reference chart to help you.

Goals for AI

Have you encountered other questionable or inspiring implementations of AI? I would like to read your comments!
