Ethics in artificial intelligence

20-09-2022 | News

Hidden dangers and possible solutions

by Reid Blackman


If not well trained and fed with the right data, artificial intelligence systems can produce aberrant results that reflect hidden biases in accumulated historical data. But there are methods to avert these perverse effects. Let's look at some cases.

In 2019, a study published in the journal Science revealed that Optum's artificial intelligence, used by many healthcare systems to identify high-risk patients who should receive periodic follow-up, prompted doctors to pay more attention to white patients than to Black patients. Only 18% of the people flagged by the AI were Black, while 82% were white. After looking at the data on the patients who were actually the sickest, the researchers calculated that those percentages should have been 46% and 53%, respectively. The impact was far-reaching: according to the researchers, the AI had been applied to at least 100 million patients.

Although they never intended to discriminate against Black patients, the data scientists and executives involved in creating the Optum algorithm had fallen into an extraordinarily common trap: training AI on data that reflects long-standing discrimination, which produces outputs conditioned by cultural bias. In this specific case, the data showed that Black patients had historically received less healthcare, which led the algorithm to erroneously infer that they needed less help.

There are many well-documented and widely publicized ethical risks associated with AI: unintentional bias and privacy violations are just two of the most visible. In many cases the risks are tied to specific uses, such as the possibility that self-driving cars run over pedestrians or that AI-generated newsfeeds on social media sow distrust of public institutions. Sometimes they create serious reputational, regulatory, financial and legal threats. Because AI is designed to operate at scale, when a problem arises it affects everyone exposed to the technology - for example, everyone who responds to a job posting or applies for a mortgage at a bank. If companies do not seriously address the ethical issues involved in planning and executing AI projects, they risk wasting a great deal of time and money developing software that proves too risky to use or sell, as many have already discovered.

An organization's AI strategy must answer a variety of questions:

  • What ethical risks could the AI you are designing, developing or deploying create?
  • If they are ignored, how much time and money might it cost to respond to a regulatory investigation?
  • How large a fine might you face if found guilty of violating laws or regulations?
  • How much would it cost to regain consumer and public trust, assuming money can solve the problem?

The answers to these questions give a measure of how much an AI-related ethical risk prevention program is worth. The program must start at the top and permeate every level of the organization - including, of course, the technology itself. Ideally, it should also provide for an AI ethics risk committee that brings together different figures: ethicists, lawyers, technologists, business strategists and experts in identifying bias.

How and why does AI discriminate?

The sources of bias in AI are numerous. Real-world discrimination is often reflected in the data used to train it. For example, a study carried out in 2019 revealed that banks were more likely to deny home loans to people of color than to white applicants with similar financial characteristics. In other cases the discrimination is the product of undersampling the data on the populations the AI will affect. Suppose you need data on commuters' travel habits in order to set public transport schedules, so you collect smartphone geolocation data at peak times. The problem is that roughly 15% of the population - millions of people - does not own a smartphone, often simply because they cannot afford one. The less well-off would therefore be underrepresented in the data used to train your AI, which would consequently tend to make decisions that favor more affluent neighborhoods. The same happens when studying the behavior of people with prior criminal convictions, whom an AI may associate with a higher probability of committing crimes in the future, denying them, for example, access to various kinds of employment.
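To make the undersampling effect concrete, here is a minimal sketch in Python using purely hypothetical numbers (the neighborhood sizes and smartphone-ownership rates are assumptions for illustration, not figures from the article): two neighborhoods have identical real transit demand, but because geolocation data only captures smartphone owners, the less well-off area appears to need less service.

```python
import random

random.seed(0)

# Hypothetical neighborhoods: identical real commuter demand,
# different smartphone-ownership rates.
neighborhoods = {
    "affluent":      {"commuters": 10_000, "smartphone_rate": 0.95},
    "less_well_off": {"commuters": 10_000, "smartphone_rate": 0.60},
}

# A commuter only appears in the geolocation data if they carry a smartphone.
observed = {
    name: sum(random.random() < info["smartphone_rate"] for _ in range(info["commuters"]))
    for name, info in neighborhoods.items()
}

total_true = sum(info["commuters"] for info in neighborhoods.values())
total_observed = sum(observed.values())

for name, info in neighborhoods.items():
    true_share = info["commuters"] / total_true
    observed_share = observed[name] / total_observed
    print(f"{name}: true share {true_share:.0%}, observed share {observed_share:.0%}")

# Both areas have the same real demand (50% each), but a model trained on the
# observed pings would conclude the less well-off area needs fewer buses.
```

A scheduling model trained on the observed counts inherits the skew; the remedy is not a cleverer algorithm but data that actually covers the population the decision affects.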

Dealing with problems of this kind is not easy. A company may not be able to explain the inequalities in its historical data, or may lack the resources needed to make well-informed decisions about discrimination by its AI. And these examples raise a broader question: when is it ethically acceptable to produce differential effects across subpopulations, and when is it an affront to the principle of equality? The answers vary from case to case, and they cannot be found simply by tweaking AI algorithms.

Technology is not enough

We thus arrive at the second obstacle: the inability of technology - and technologists - to effectively solve the problem of discrimination. At the highest level, AI takes a series of inputs, performs various calculations, and produces a series of outputs: you feed in data on mortgage applicants and the AI decides which applications to accept and which to reject. Feed in data on transactions - where, when and by whom they were made - and the AI assesses their legitimacy or illegality. Feed in data on criminal records, résumés and symptoms, and the AI makes predictive judgments, respectively, on the risk of recidivism, on whether candidates merit a job interview, and on people's health conditions.

One thing AI effectively does is dispense benefits: mortgages, lighter sentences, job interviews, and so on. And if you have information on the demographics of the recipients, you can see how those benefits are distributed across the various subpopulations. You may then wonder whether that distribution is fair and reasonable. And if you are a technologist, you might try to answer that question by applying one or more of the quantitative fairness metrics identified by the increasingly substantial machine learning research literature.

This approach is not without problems. Perhaps the most serious is that there are roughly two dozen quantitative fairness metrics, and they are mutually incompatible: it is impossible to satisfy all of them at once. Here the technical tools fall short. They can tell you how the changes you make to your AI will affect the various fairness metrics, but they cannot tell you which metrics to use. That requires an ethical and commercial judgment, and data scientists and engineers are not equipped to make it. The reason has nothing to do with their character; it is simply that the vast majority of them have no experience or training in dealing with complex ethical dilemmas.
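To illustrate why choosing among these metrics is a judgment call rather than a purely technical step, here is a minimal Python sketch on a hypothetical set of loan decisions (the groups, labels and outcomes are invented for illustration). It computes two common fairness metrics - demographic parity, which compares approval rates between groups, and equal opportunity, which compares approval rates only among applicants who would actually repay - and shows that they measure different things and call for different interventions.

```python
# Each record: (group, truly_creditworthy, approved) - hypothetical toy data.
decisions = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, False), ("B", False, False),
]

def approval_rate(group):
    """Demographic parity compares this: the share of the group that was approved."""
    rows = [d for d in decisions if d[0] == group]
    return sum(approved for _, _, approved in rows) / len(rows)

def true_positive_rate(group):
    """Equal opportunity compares this: the share of creditworthy applicants approved."""
    rows = [d for d in decisions if d[0] == group and d[1]]
    return sum(approved for _, _, approved in rows) / len(rows)

for group in ("A", "B"):
    print(f"group {group}: approval rate {approval_rate(group):.2f}, "
          f"TPR {true_positive_rate(group):.2f}")

# group A: approval rate 0.75, TPR 1.00
# group B: approval rate 0.25, TPR 0.50
# Closing the demographic-parity gap means approving more group-B applicants
# overall; closing the equal-opportunity gap means approving only the
# creditworthy ones. Which gap matters more is an ethical choice, not a
# computation.
```

When the groups' underlying repayment rates differ, it is in general mathematically impossible for an imperfect classifier to equalize all the standard metrics at once, which is exactly why the choice has to be made by people rather than by the model.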

A possible solution

The solution, therefore, also involves creating an ethics committee for AI-related risks, with the right expertise and the authority to have an effective impact. Its function is simple: to systematically and comprehensively identify the ethical risks of AI-based products that are developed in-house or purchased from third parties, and to help mitigate them.

An authoritative committee overseeing the ethical implications of AI is an important tool for identifying and mitigating the risks of an ultra-advanced technology that promises great opportunities. Naturally, the utmost attention must be paid to the criteria for forming the committee and to the role it will assume within the organization, so as to avoid causing even greater damage than the harm AI might unintentionally cause.

Reid Blackman is founder and CEO of Virtue, and author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press, 2022).
