
Artificial intelligence is transforming our world - it's up to all of us to make it right


All major technological innovations have a series of positive and negative consequences. For AI, the spectrum of possible outcomes – from most negative to most positive – is extraordinarily wide.

It is clear that the use of AI technology can cause harm: it is already happening.

AI systems can cause harm when people use them maliciously, for example in politically motivated disinformation campaigns or to enable mass surveillance.12

But AI systems can also cause unintended harm when they behave differently than intended or fail. In the Netherlands, for example, the authorities used an AI system that falsely accused around 26,000 parents of making fraudulent claims for childcare allowance. The false accusations caused hardship for many poor families and also led to the resignation of the Dutch government in 2021.13

As AI becomes more powerful, the possible negative impacts could become much greater. Many of these risks have rightly captured the public’s attention: more powerful AI could lead to massive labor displacement or extreme concentrations of power and wealth. In the hands of autocrats, it could reinforce totalitarianism through its aptitude for mass surveillance and control.

The so-called AI alignment problem is another extreme risk. It is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk unfortunately receives little attention from the general public, but many leading AI researchers consider it extremely important.14

How could an AI escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions and “chooses” to harm us. The risk is that we ask the AI to pursue a specific goal – even a very worthwhile one – and, in pursuing that goal, it ends up harming humans. These are unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI not to do these things? It is certainly possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee every possible unintended harmful consequence. The alignment problem arises because of “the inability to properly and completely define true human goals,” as artificial intelligence researcher Stuart Russell puts it.15

Can’t we just turn off the AI? That might not be possible either. A powerful AI will know two things: it runs the risk of being turned off by humans, and it cannot achieve its goals once it has been turned off. As a result, the AI will pursue the very fundamental goal of ensuring that it is not turned off. That is why, once we realize that a highly intelligent AI is causing unintended harm in pursuit of a specific goal, it may not be possible to turn it off or change what the system is doing.16

This risk – that humanity may not be able to maintain control once AI becomes very powerful, and that this could lead to extreme catastrophe – was recognized in the earliest days of AI research, more than 70 years ago.17 The very rapid development of AI in recent years has made solving this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article cannot answer all possible questions. On the worst risks of AI systems in particular, and what we can do now to reduce them, I recommend Brian Christian’s book The Alignment Problem and Benjamin Hilton’s article “Preventing an AI-related catastrophe”.

If we manage to avoid these risks, transformative AI could also have very positive consequences. Advances in science and technology have played a crucial role in many positive developments in human history. If artificial ingenuity can augment our own, it could help us make progress on the many big problems we face: from cleaner energy, to replacing unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are exceptionally high with this technology. Reducing the negative risks and solving the alignment problem could be the difference between a healthy, flourishing and prosperous future for humanity – and its destruction.

