
UF researcher describes bias in artificial intelligence and pushes for 'algorithmic justice'


Artificial intelligence powers the tools we use every day – Siri, Amazon Alexa, unlocking iPhones with facial recognition. But these tools serve some people better than others.

Tina Tallon is Assistant Professor of Artificial Intelligence in the Arts at the University of Florida School of Music. She studies what is called algorithmic justice – how linguistic, racial and gender biases are built into these technologies and how to correct them.

WUFT Report for America Corps member Katie Hyson sat down with Tallon to talk about what it means and why it matters.

This interview has been edited and condensed for clarity. Listen above or read a slightly longer version below.

TALLON: So I’m very interested in all the AI tools that are used in everyday life – I mean, we come into contact with them every time we open our phones – and the different kinds of biases that are ingrained in the tools people use.

HYSON: Can you tell us about some of those biases?

TALLON: The majority of these data sets are in English. And so you already have a bias towards English speakers, right, where people who might speak other languages are not represented in these data sets.

And then, of course, when you’re dealing with computer vision, there’s an incredible amount of racial bias. Historically, films and various photographic sensors on cameras, unfortunately, failed to image darker skin as well as lighter skin.

We also have gender biases when it comes to the audio technology, the microphones that we’re using right now, right?

I am a singer. And so, as I worked with a lot of microphones and other types of voice technology, I noticed that they didn’t work as well for me as they did for some of my colleagues.

There are inherent biases in some of these circuits, and those designs date back to the late 19th and early 20th centuries.

HYSON: So for someone who’s not into AI, not into science, who might not even be aware that many of the tools they use during the day are artificial intelligence, what would be an everyday example of how someone might interact with one of these tools and it might not serve them as well as someone else?

TALLON: A good example of this is hiring. Many people are unaware that much of the first round of resume and CV sorting actually uses AI tools. So the AI is trained on different types of search words and other kinds of data sets that might disproportionately favor someone from a specific background over someone else.

Another example – many immigration processes actually require some demonstration of language proficiency. There was a case in Australia where a native English speaker of Irish or Scottish descent took an [AI-scored] English proficiency test for her visa. The system said her command of the language was not up to par, and she failed the test even though she is a native English speaker.

I think we owe it to ourselves and everyone around us to question the underlying structures that lead to these emergent experiences that we have in everyday life.

Every time you unlock your phone or try to use Siri or Alexa, yes, all of those things are powered by AI. And every time we engage with them, a certain amount of data is passed back to those companies to reinforce the learning from those data sets.

HYSON: Is there already significant work underway to address these issues? And what are the possible solutions?

TALLON: Right now, algorithmic justice and accountability is a hot topic of conversation, and a lot of people are paying attention to it.

However, we’ve seen big tech companies like Twitter and Google actually dissolve the teams responsible for holding other members of their companies accountable, or for doing the research that supports this justice work. And so it’s difficult, because I think we were making a lot of progress, but it’s very fickle, and it just depends on who’s in power.

Ultimately, I think a lot of it comes down to broader education and the public demanding accountability from these companies.

One of the things I have called for is some sort of algorithmic FDA, right? With the actual FDA, any medical intervention, whether a therapeutic device or a drug, must be approved before it can be marketed.

And I think the same has to happen with algorithmic tools. We need to have someone come in and say, “Okay, what impact will this tool have on society? Have you demonstrated that you have taken the necessary steps to adequately check your algorithmic tool for different types of bias?”

HYSON: Can you explain why it is important that these algorithms and technologies work the same for everyone?

TALLON: Unfortunately, AI reinforces many biases that already exist. And in doing so, it reinforces the systems of discrimination that we already see negatively impacting communities around the world.

Data is a reflection of a society’s values. And I think, unfortunately, the technology that collected that data is also a reflection of a society’s values. And unfortunately, what we have seen over and over again is that the values being reflected right now are those of prejudice and discrimination.

And so we have to be very careful, because once a specific technology or idea is entrenched, you build so much on it that it’s impossible to change it.

If we don’t act now to counteract these various types of bias [in AI], they will become entrenched. And that’s even more dangerous, because the technologies we build in the future will be built on top of that. And so we have to stop this cycle somewhere, and I think now is the right time to do it.

HYSON: Is there anything else you want people to understand?

TALLON: There are many good uses for AI. There are many amazing ways AI can create accessibility tools. There are many ways to use AI to improve health outcomes. There are many ways to use AI to mitigate the impacts of climate change.

So it’s not all catastrophic.

However, we must be very critical of these technologies. Algorithmic literacy is really important. We need everyone to be involved.

And we need to make sure everyone understands what the issues are and how they can play a role in trying to use these tools to create a better future.

