Thoughts on AI, Bias, and the Need for Human Subjectivity

About a year ago, I had the privilege of serving as a panelist discussant for the Humber College – Research Analysts’ Program Spring Symposium. Each discussant was asked to comment on the following question:

What new ethical concerns are being raised in market, social, and evaluation research due to the advent of AI and automation?

The points I made in my remarks are below. But after a year, do I think differently? The answer is no. I just returned from a trip to New York, where I was working with a client that specializes in trade and economic sanctions. The client invited several subject matter experts for me to work with over two intense days as we laid the foundational steps for building a certification exam for international sanctions specialists. I learned much from these highly educated, intelligent, seasoned experts, who work for multinational financial institutions, Fortune 500 companies, and the insurance industry, and as specialized consultants. In each case, they told of their experience with artificial intelligence (AI)-driven data analytics and how their own big data is mined for red flags relating to sanctions. And in each case, they spoke of the human touch, the intuition, the experience, and the industry know-how required to investigate those red flags, which for the most part arise in the grey area around otherwise cut-and-dried, hard-and-fast policy and legislation. This experience reinforced the stance I take in the bulleted points below: AI, at least at this point in time, is a helpful tool, but it cannot supplant the person called upon to make sense of the data output.

  • I want to take an approach to ethics that may or may not be considered nefarious practice, but that is potentially life-changing nonetheless.
  • I contextualize my response in Big Data and Predictive Analytics: the biases that exist in these data result from human subjectivity, yet, as I will explain, minimizing the harm they cause simultaneously depends on moral and ethical human subjectivity. In 2016, the World Economic Forum identified nine ethical issues with AI, and by my estimation, three of them focus on biases in data.
  • So, we know Big Data-based Predictive Analytics are out there every time we run a Google search and see ads based on our recent searches. We search Amazon and get suggestions based on our previous purchases and search history. We see local restaurant ads on our smartphones wherever we are; this is a type of Big Data-driven surveillance. But what I am going to discuss is far more serious than Amazon recommending the wrong book or shampoo.
  • Big Data-based Predictive Analytics are literally and figuratively driving the AI behind autonomous vehicles. Tell me, who oversees the analysis of the troves of traffic data used to program the various decision algorithms in autonomous vehicles? People do. Think about collision avoidance algorithms: fairly straightforward, perhaps. Stay in your lane, obey the traffic laws. But what about the “no-win” situation, where there will be some sort of catastrophic impact no matter what decision is made? For example, the AI must be programmed to decide whether to hit a passenger vehicle or a school bus, or to choose between an elderly man and a youth who is not paying attention as each illegally crosses the street from a different direction. These algorithms encode human biases and require serious ethical consideration. (A deliberately crude sketch of such a decision table appears after this list.)
  • Now, take the use of Big Data-driven Analytics in predictive policing. There are many successes in predictive policing, including here in Canada. In Edmonton, AI has been used to create models that predict where the likelihood of property-based crime will spike, based on dozens of factors identified in decades of crime data analyzed by people like you. The algorithms give the police and the community tools to reduce property crime, and according to an evaluation study, they have done so by some 13% in a short pilot. In fact, for every $1.00 spent on the AI, there was a policing-related saving of $1.06 for the city.
  • Also, in Saskatchewan, AI has been used to take biased subjectivity out of case triage for Indigenous youth at risk. When humans did the triage, the rate of harm to these youths was alarming; when AI took over, incident rates fell dramatically.
  • All good, right? No. Critics of predictive policing argue that some models are biased by faulty data. The technology has raised tough questions about whether hidden biases in these systems will lead to over-policing of racialized and lower-income communities. Just this weekend, we saw two young men of colour arrested in a Philadelphia Starbucks when no crime had been committed.
  • Two issues here. First, an arrest does not equal a crime, but that arrest data is there. Second, a dangerous circle can form: increased police presence in areas identified by AI produces higher arrest rates, and as those higher arrest rates are fed back into the algorithm, they result in still more police presence. (The second sketch after this list plays this loop out.)
  • Here in Toronto, there is a strong case to be made against the use of AI and Big Data-driven Predictive Analytics for policing, as much of the recent data has been collected through the racially biased practice of carding, which has been likened to Big Data-driven surveillance. The practice was legislated out last year, but the data collected are still in play. Ostensibly objective decision algorithms based on these data are bound to reflect the biases in the data collection.
  • So, what do we do about these situations? I don’t have the answers, but I do believe the answers include your participation as Researchers and Research Analysts. Your moral compass, along with your technical skills, will shape the future of AI use in society. Objectivity is great in many situations, and we have seen how subjectivity introduces biases. But I believe some level of subjectivity, that human touch, is necessary to curb the unethical use of biased data.
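To make the autonomous-vehicle point concrete, here is a deliberately crude sketch in Python. Nothing below reflects any real vehicle’s software; the option names, harm weights, and decision rule are all invented for illustration. The point is simply that whoever writes the weights makes the moral choice long before the “objective” algorithm runs.

```python
# Hypothetical illustration only -- not any real vehicle's software.
# In a "no-win" scenario, the machine's "decision" reduces to comparing
# numbers that a human chose in advance.

# Invented harm weights: every value here is a moral judgement someone
# had to make while writing the program.
HARM_WEIGHTS = {
    "school_bus": 10.0,
    "passenger_vehicle": 5.0,
    "elderly_pedestrian": 7.0,
    "inattentive_youth": 7.0,
}

def choose_impact(unavoidable_options):
    """Pick the option with the lowest pre-assigned harm weight."""
    return min(unavoidable_options, key=lambda option: HARM_WEIGHTS[option])

# The algorithm itself is perfectly deterministic; the bias sits in the table.
print(choose_impact(["passenger_vehicle", "school_bus"]))
# -> passenger_vehicle
```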
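The “dangerous circle” in predictive policing can also be shown in a few lines. The dynamics below are assumed purely for illustration and are not any real policing model: two neighbourhoods have identical underlying crime, but one is over-represented in the historical arrest data that seeds the risk scores.

```python
# Toy feedback loop -- dynamics assumed for illustration, not a real model.
TRUE_CRIME = {"A": 10.0, "B": 10.0}   # identical underlying crime rates
risk = {"A": 5.0, "B": 15.0}          # seed scores from biased arrest history

for round_num in range(5):
    total_risk = sum(risk.values())
    # Allocate 100 patrol-hours in proportion to predicted risk.
    patrols = {n: 100 * risk[n] / total_risk for n in risk}
    # Recorded arrests depend on where police are looking, not just on crime.
    arrests = {n: TRUE_CRIME[n] * patrols[n] / 100 for n in risk}
    risk = arrests                    # yesterday's arrests feed tomorrow's risk
    print(f"round {round_num}: patrols = {patrols}")
# B draws 75% of the patrols every round despite identical crime rates:
# the seed bias never washes out.
```

Even in this gentle version, the initial disparity is locked in forever; if recorded arrests grew faster than linearly with police presence, the disparity would amplify each round instead of merely persisting.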


COMMENTS

  1. AI bias doesn’t come from AI algorithms, it comes from people. What does that mean and what can we do? … Objectivity is a philosophical concept of being true independently from individual subjectivity caused … With AI, puppeteers come in all genders, but there is always a human behind the machine.

    • Sid Rae
      May 2, 2019

      Thank you. As AI is not my area of expertise, I believe that was the point I was trying to make.
