Not everything we call AI is actually “artificial intelligence”. Here's what you need to know


In August 1955, a group of scientists applied for US$13,500 in funding to organize a summer workshop at Dartmouth College, New Hampshire. The area they set out to explore was artificial intelligence (AI).

While the funding request was modest, the researchers’ conjecture was not: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.

Since those humble beginnings, movies and media have romanticized AI or cast it as a villain. Yet for most people, AI has remained a talking point rather than part of a conscious lived experience.

AI has arrived in our lives

Late last month, AI, in the form of ChatGPT, broke free from sci-fi speculation and research labs and arrived on the desktops and phones of the general public. It’s what’s known as “generative AI” – suddenly, a cleverly worded prompt can produce an essay, put together a recipe and shopping list, or compose a poem in the style of Elvis Presley.



Read more: ChatGPT chatbot wows people with its writing skills. An expert explains why it’s so impressive


Although ChatGPT was the most spectacular entrant in a year of generative AI successes, similar systems have shown even greater potential for creating new content, with text-to-image prompts used to produce vibrant images that have even won art competitions.

AI may not yet be sentient, nor have the theory of mind popular in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do.

Some researchers working closely with these systems have even been swayed by the prospect of sentience, as in the case of Google’s LaMDA large language model (LLM). An LLM is a model that has been trained to process and generate natural language.

Generative AI has also raised concerns about plagiarism, the exploitation of the original content used to build models, the ethics of information manipulation and breach of trust, and even “the end of programming”.

At the center of it all is the question that has become increasingly urgent since the Dartmouth summer workshop: is AI different from human intelligence?

What does “AI” really mean?

To qualify as AI, a system must exhibit some level of learning and adaptation. For this reason, decision-making systems, automation and statistics are not AI.

AI is generally defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.

The main challenge in creating a general AI is to adequately model the world, with the entirety of its knowledge, in a consistent and useful way. It’s a colossal undertaking, to say the least.

Most of what we call AI today has narrow intelligence – where a particular system solves a particular problem. Unlike human intelligence, such narrow AI is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example.

AGI, however, would work like humans do. For now, the most notable example of trying to achieve this is the use of neural networks and “deep learning” trained on large amounts of data.

Neural networks are inspired by the workings of the human brain. Unlike most machine learning models, which run calculations once over the training data, neural networks work by feeding each data point through the interconnected network one at a time, adjusting the parameters a little each time.

As more and more data passes through the network, the parameters stabilize; the end result is the “trained” neural network, which can then produce the desired output on new data – for example, recognizing whether an image contains a cat or a dog.
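
To make that loop concrete, here is a minimal sketch in Python – a single artificial neuron rather than a deep network, trained on made-up data, but the feed-one-point-through, adjust-the-parameters cycle is the same idea:

```python
import numpy as np

# Made-up training data: each row is a data point, each label its class.
X = np.array([[0.2, 0.9], [0.8, 0.1], [0.1, 0.8], [0.9, 0.2]])
y = np.array([1, 0, 1, 0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # the "parameters" (weights) being trained
b = 0.0
lr = 0.5                 # how strongly each data point adjusts them

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feed each data point through, one at a time, nudging the parameters.
for epoch in range(200):
    for x_i, y_i in zip(X, y):
        pred = sigmoid(w @ x_i + b)   # forward pass
        error = pred - y_i            # how far off was the prediction?
        w -= lr * error * x_i         # gradient step on the parameters
        b -= lr * error

# After enough passes the parameters stabilize and predictions match labels.
print(sigmoid(X @ w + b).round(2))
```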

The significant leap forward in AI today is due to technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud computing infrastructures. For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network with 175 billion parameters.
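
To get a feel for that scale, a back-of-the-envelope calculation (assuming, purely for illustration, 16-bit storage per parameter, a common choice in practice):

```python
params = 175_000_000_000   # GPT-3's parameter count
bytes_per_param = 2        # assumption: 16-bit (2-byte) precision
gigabytes = params * bytes_per_param / 1e9
print(gigabytes)           # 350.0 - roughly 350 GB just to store the weights
```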

What does AI need to work?

AI needs three things to be successful.

First, we need high-quality, unbiased data, and lots of it. Researchers building neural networks use the large datasets that have resulted from the digitization of society.

Copilot, which augments human programmers, draws its data from the billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.

Text-to-image tools, such as Stable Diffusion, DALL-E 2 and Midjourney, use image-text pairs from datasets such as LAION-5B. AI models will continue to grow in sophistication and impact as we digitize more of our lives and provide them with alternative data sources, such as simulated data or data from game settings like Minecraft.
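
To make “image-text pairs” concrete, here is a minimal sketch of what one such training record looks like (the URL and caption are hypothetical; real LAION-5B entries also carry extra metadata):

```python
from dataclasses import dataclass

@dataclass
class ImageTextPair:
    """One training example for a text-to-image model."""
    image_url: str   # where the image lives on the web
    caption: str     # the text the image was paired with

# A hypothetical record, shaped like the pairs in web-scraped datasets:
example = ImageTextPair(
    image_url="https://example.com/cat.jpg",
    caption="a ginger cat sleeping on a windowsill",
)

# Training runs over billions of such pairs, so the model learns which
# words correspond to which visual features.
print(example.caption)
```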



Read more: No, the Lensa AI app doesn’t technically steal artists’ work, but it will dramatically change the art world


AI also needs computing infrastructure for effective training. As computers become more powerful, models that now require intensive effort and large-scale computation may, in the near future, be run locally. Stable Diffusion, for example, can already be run on local computers rather than in cloud environments.
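
As an illustration, here is a minimal sketch of running Stable Diffusion locally with the open-source diffusers library – assuming a machine with a CUDA-capable GPU and the pretrained weights downloaded from Hugging Face:

```python
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download the pretrained weights (several GB) and load them onto the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision fits consumer GPUs
)
pipe = pipe.to("cuda")

# One text prompt in, one generated image out - all on the local machine.
image = pipe("a lighthouse at sunset, oil painting").images[0]
image.save("lighthouse.png")
```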

The third thing AI needs is improved models and algorithms. Data-driven systems continue to make rapid progress in one area after another once considered the territory of human cognition.

However, because the world around us constantly changes, AI systems must be retrained using new data. Without this crucial step, they will produce factually incorrect answers, or fail to take into account new information that has emerged since they were trained.
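
In terms of the earlier training sketch, retraining is simply the same parameter-adjustment loop run again over newly collected data (the parameter values and new data points below are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters saved from an earlier training run (hypothetical values).
w, b, lr = np.array([1.4, -1.3]), 0.05, 0.5

# Newly collected data points the original model never saw.
X_new = np.array([[0.15, 0.85], [0.85, 0.15]])
y_new = np.array([1, 0])

# "Retraining" = the same nudge-the-parameters loop, on the new data.
for x_i, y_i in zip(X_new, y_new):
    pred = sigmoid(w @ x_i + b)
    error = pred - y_i
    w -= lr * error * x_i
    b -= lr * error
```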

Neural networks are not the only approach to AI. Another prominent camp in AI research is symbolic AI – instead of digesting huge datasets, it relies on rules and knowledge, similar to the human process of forming internal symbolic representations of particular phenomena.
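
A toy contrast with the neural sketches above: in a symbolic system, knowledge is written down as explicit facts and rules, and new knowledge is derived rather than learned. (The facts and rule below are hypothetical, purely for illustration.)

```python
# Knowledge as explicit (subject, relation, object) facts.
facts = {("penguin", "is_a", "bird"), ("bird", "has", "feathers")}

rules = [
    # Rule: if X is a kind of Y, then X inherits whatever Y has.
    lambda fs: {
        (x, "has", prop)
        for (x, rel, y) in fs if rel == "is_a"
        for (y2, rel2, prop) in fs if y2 == y and rel2 == "has"
    },
]

# Apply the rules until no new facts appear (forward chaining).
while True:
    new = set().union(*(rule(facts) for rule in rules)) - facts
    if not new:
        break
    facts |= new

print(("penguin", "has", "feathers") in facts)  # True - derived, not learned
```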

But the balance of power has tilted heavily towards data-driven approaches over the past decade, with the “founding fathers” of modern deep learning recently awarded the Turing Award, the equivalent of the Nobel Prize in computer science.

Data, computation and algorithms form the foundation of the future of AI. All indicators show that rapid progress will be made in all three categories in the foreseeable future.
