
How artificial intelligence affects human rights and freedoms


Smartphones and the many technologies that make them ever smarter have had a huge impact on the way we communicate, organize and mobilize. If you’ve ever led, attended, or even considered taking part in a protest, you may have found the information you needed through smart devices like your phone or watch, but it’s also possible that you were advised to leave them at home to avoid being spotted.

Smartphones have also helped expand access to educational information and resources through online learning tools, especially when in-person learning is not possible or easily accessible. Mobile phones and the Internet have become an important element in the enjoyment of certain rights and freedoms, such as freedom of speech, freedom of assembly, and the right to protest.

However, technologies such as facial recognition and geolocation, which power your mobile phone and some of its applications, also operate beyond your personal devices: they can be built into systems such as traffic and security cameras, run by public and private entities looking for data. This was demonstrated in Hong Kong, where authorities were reported to be using data collected from security cameras and social media to identify people who had taken part in protests.

Given the increased use and capabilities of artificial intelligence (AI), there is a new demand for research into the impact of these technologies on civic space and civil rights, and everything in between the two.

Dr. Mona Sloane is a Principal Investigator at the Center for Responsible AI at New York University. She is a sociologist who studies the intersection of design, technology, and society, particularly in the context of AI design and policy. Sloane explains that most AI systems are created to make everyday decision-making processes easier and faster, but that the data behind these systems is flawed.

“Entities that develop and deploy AI often have an interest in withholding the assumptions behind a model, as well as the data it was built on and the code that embodies it,” Sloane told Global Citizen. “AI systems typically need large amounts of data to function well enough. Extractive data collection processes can invade privacy. Data is always historical and will always represent historical inequities and inequalities. Using it as the basis for making decisions about what should happen in the future therefore solidifies these inequalities.”

Researchers like Sloane focus on how these powerful technologies operate in the real world, and on how they can make it nearly impossible to break down systemic barriers.

Facial recognition in civic space

In January 2021, Amnesty International launched its global Ban the Scan campaign, which aims to end “the use of facial recognition systems, a form of mass surveillance that amplifies racist policing and threatens the right to protest”.

The campaign pointed out that algorithmic technologies, like facial recognition scanners, “are a form of mass surveillance that violates the right to privacy and threatens the rights to freedom of peaceful assembly and expression.”

In 2019, the world saw protesters in Hong Kong covering their faces and toppling lampposts fitted with face scanners to evade detection by facial recognition, or trying to find ways to use AI to their own advantage.

“One protester in particular, this Colin Cheung guy that we found, created a facial recognition tool to try to identify the police. And he didn’t actually release the product, but he says it’s because of that that the police actually targeted him,” New York Times journalist Paul Mozur reported. “When they caught him, they needed access to his phone, and so they tried to force his face in front of his phone to use the phone’s facial recognition feature to unlock it… [He] was able to quickly disable that while under attack, but this shows you how … our biometrics have become so critical to technology, that they’re kind of becoming weaponized in all these different forms.”

Luke Stark, an adjunct professor in the Faculty of Information and Media Studies at Western University in London, Ontario, studies the ethical, historical, and social impacts of computer technologies such as AI and machine learning. He uses the term “digital surveillance technologies” to cover the concept of data collection in the media, and he questions how this data is used by governments, and what this tracking and suppression means in parts of the world with different legal regimes.

He argues that while this data collection is excessive, the data is also difficult to sift through, a point he says Edward Snowden’s leaks demonstrated.

“The powers that be, the spies and intelligence agencies, and the security agencies have a lot of anxiety about going through all that data, having too much data, understanding and interpreting that data,” Stark told Global Citizen.

Unfortunately, he adds, the analysis doesn’t have to be perfect to do damage, because it is “brute force” as it is: people get stopped or arrested due to errors in the system, mistakenly identified for something other than what the system was trying to catch them doing.

“I’m thinking in particular of the growing number of cases in the United States where black men are being arrested based on the alleged identification of a facial recognition system,” he said. “And it turns out the facial recognition system picked the wrong person, including one case where the police department fed a sketch artist’s composite drawing into the system, then found someone who looked like the drawing and arrested him.”

Stark points out that there are both technical and social issues with algorithmic systems, which, when put together, “have enormous scope for abuse, both in terms of the law as it exists and from a more democratic, human rights point of view”.

Technology in the world of protest

Stark warns that these technologies have a chilling effect on protests, given how they can be used.

“The more they are integrated, the more dangerous they can be. Which does not mean that they are not dangerous when they are not integrated, [because they still] kind of have the ability to track people and identify them through facial recognition systems and then have all kinds of data on, for example, their movements. If you track things like public transport usage through digital smart cards, geolocation data through cellphones, all these different types of digital traces, a state that is willing and able to pull all of this data together can really, really crack down on dissent in an extraordinarily effective way,” he explained. “Hong Kong, I think, is a good example of how, by really cracking down on a wide range of protest leaders, it pushes everybody to be really quiet.”

He adds that in North America, too, activists advise against bringing a phone to protests so you can’t be tracked, though that also means losing a way to document the events.

The power that comes with data collection

AI impacts us not only in terms of the data it is able to collect, but also in terms of shaping what we perceive to be true.

Algorithms can be used to determine a person’s preferences, including political choices, which can then be used to influence the type of political posts someone sees on their social media feeds. A notable data collection breach in this regard involved consultancy firm Cambridge Analytica, which harvested private data from the Facebook profiles of more than 50 million users as part of its work for the 2016 campaign of former US President Donald Trump, according to the New York Times.

Jamie Susskind, author of Future Politics: Living Together in a World Transformed by Technology, makes a similar argument about where this power concentrates.

“Digital is political. Instead of approaching these technologies as consumers or capitalists, we need to approach them as citizens. Those who control and own the most powerful digital systems of the future will increasingly have great control over the rest of us,” Susskind said in an interview with Forbes.

Algorithms further allow different people to have different perceptions of reality, by showing each person the types of media that align with their politics.

Yet mobile technology has undoubtedly helped open up civic space in some circumstances, even as it creates new challenges, particularly around the tracking and monitoring of activists and protesters.

The ability of these digital surveillance technologies to suppress dissent and profile certain groups is why scholars like Sloane are calling for a different approach to AI.

“There is no magic bullet to make AI fairer; we need to approach this problem from all angles. Academics need to work in a more interdisciplinary way. Social scientists can help engineers understand how social structures are deeply connected to technologies,” she said. “Affected communities and the public need to be included (and compensated) in the AI design process. We need safeguards that support, and don’t hinder, innovation.”

Stark agrees, adding that there’s a lot of work to be done on how AI technologies interact with real people in the real world, including asking: “How do we know what assumptions we’re making? What practices do we use to make inferences about people? What kind of inference is involved? What stories are we telling when we make these inferences?”

“These have long been concerns of social scientists,” he said. “And I think, as a set of technologies, [AI] really brings this issue to the fore.”


This article is part of a series about defending advocacy and civic space, made possible with funding from the Ford Foundation.

