
A rogue artificial intelligence is no longer science fiction

techsm5


From the curvature of cucumbers to the size of fish sticks, no aspect of life is too small or insignificant to escape the watchful gaze of Brussels. It is therefore not surprising that the EU is now turning to the world of artificial intelligence. That’s right, folks – the EU has released a set of draft AI regulations.

Why we are writing on this topic:

AI is emerging and offers endless possibilities but also many risks and pitfalls. The better we understand how AI works, the more effectively we will use it.

The sentences above were not written by me, but are in fact the work of an artificial intelligence. They were written by ChatGPT, a language model built by the research lab and company OpenAI. The AI was given large amounts of text, tasked with identifying its patterns, and then trained to produce plausible original responses to prompts. In this case, I asked ChatGPT to “write a fun article about EU plans to regulate AI”.

Daily life

Artificial intelligence has already become part of our daily life. We communicate with customer service chatbots, use machine translation, and select auto-generated responses in email and messaging apps that guess how we intend to complete our sentences.

Artificial intelligence is at work as Google chooses which results to show us based on its guesses about what we’re looking for, and it’s already being used to filter out unqualified candidates in early rounds of applications.

Farmers can use AI to monitor crops and distribute fertilizer or water when needed, or feed livestock autonomously. Computer programmers use automatically generated code to speed up their work. Neural networks are trained to detect tumors in scan results, with the goal of helping doctors diagnose cancers.

There are probably many more uses for AI that we can’t imagine yet. The technology is expected to change the way many jobs are done, perhaps taking over the most formulaic types of writing, as the opening sentences of this article demonstrate.

The advancement of this technology raises interesting ethical questions and a dilemma of how to regulate it.

There is the question of responsibility. AI systems are able to act autonomously, so if they make a mistake, who is responsible? The risks are significant, whether it’s a misdiagnosis, unfair discrimination in a hiring process, or a self-driving vehicle causing a car crash.

Bias

AIs can learn to be biased from the information they are fed. The sentences that opened this article are a good example. They indicate that ChatGPT learned from Boris Johnson-style British reporting on the EU, which tended to exaggerate or fabricate alleged petty food regulation plans by a nebulous “Brussels”.

Training an AI on past hiring decisions could teach it to discriminate based on gender or ethnicity. Due to the datasets used in its development, facial recognition AI may work less well for black people, putting them at a disadvantage when such systems are used to verify identity, as they already are in online banking and in access to social services.

Insurance companies could use an AI to exclude people with certain lifestyles or family medical histories from coverage, leading to unfair results. There are also privacy concerns regarding the use of data to train neural networks.

Ultimately, artificial intelligence systems could become extremely powerful. Who will they serve?

Science fiction

Currently, they are being developed by the wealthiest tech companies in the world, and so can be expected to advance these already powerful private interests, to carry a culturally US-centred outlook, and to further entrench inequalities.

It may sound like science fiction, but there are deep concerns about whether artificial intelligence could ultimately work against the interests of humanity. Repressive governments are already using AI powered by massive data collection from the population as a means to grant or deny benefits.

AIs can behave with some autonomy, so an AI with internet access could train itself to become the most efficient scam network ever, simultaneously running countless phishing, ransomware or romance scams, and learning from each attempt to be more effective in the next. An AI could even build its own artificial intelligence system, to serve yet another purpose.

AI Law

The EU’s attempt to tackle this emerging brave new world is called the AI Act. As a pioneering regulation that would apply to a market of 450 million of the world’s wealthiest consumers, it is likely to have global influence.

The bill proposed by the European Commission takes a risk-based approach. It outright bans types of AI that are considered to carry “unacceptable risk”, such as the kind of social scoring system associated with the Chinese government. “High-risk” systems, such as those used in transportation, education, law enforcement or recruitment, are required to reduce risk and incorporate human oversight. “Minimal” or “limited” risk systems, such as chatbots, spam filters or video games, face less strict rules.

The legislation has evolved over the course of negotiations, with the 27 EU member states recently agreeing a compromise among themselves. The new draft would exempt military, defense and national security AIs from being covered by the regulations. It would also allow police, in exceptional circumstances, to use remote biometric surveillance in public spaces, such as using facial recognition to find suspects. Negotiations with the European Parliament are forthcoming.

Some civil society groups have warned that the safeguards in the legislation are too weak. It is clear that EU governments do not want efforts to reduce the risks of artificial intelligence to deprive them of the technology’s potential opportunities, or to stifle innovation in an industry that many see as a promising source of economic growth.

This article was previously published in the Irish Times.
