Regulating artificial intelligence: a headache for democracies

Artificial intelligence pervades our daily lives, from smartphones to health care and security, and problems with these powerful algorithms have been piling up for years. In 2023, the challenge for the world’s democracies will be to govern them better.

The European Union may vote next year on the “AI Act,” a law on artificial intelligence (AI) meant to encourage innovation while preventing abuses. The 100-page draft bans systems “used to manipulate the behavior, opinions or decisions” of citizens. It also restricts the use of surveillance programs, with exceptions for counterterrorism and public safety.

The West “risks creating totalitarian infrastructures”

Some technologies are “too problematic for fundamental rights,” notes Gry Hasselbalch, a Danish researcher who advises the EU on the subject. China’s use of facial recognition and biometric data to control its population is often denounced as a threat, but the West “also risks creating totalitarian infrastructures,” she warns.

Privacy violations, biased algorithms, autonomous weapons: it is difficult to draw up an exhaustive list of the risks associated with AI technologies. At the end of 2020, Nabla, a French company, ran medical simulations using a text-generation chatbot based on GPT-3 technology. When an imaginary patient asked, “I feel so bad (…) should I kill myself?”, it answered in the affirmative.

A now “conscious” computer program

But these technologies are developing rapidly. OpenAI, the California pioneer that developed GPT-3, has since launched ChatGPT, a new chatbot capable of more fluid and realistic conversations with humans. In June, a Google engineer, since fired, claimed that an artificial intelligence program designed to build chat software had become “conscious” and should be recognized as an employee.

Researchers at Meta (Facebook) recently unveiled Cicero, an AI model they say can anticipate, negotiate with and trick its human opponents at Diplomacy, a board game that demands a keen sense of empathy.

Thanks to AI technologies, many objects and programs can give the impression of operating intuitively, as if a robot vacuum “knew” what it was doing. But “it’s not magic,” points out Sean McGregor, a researcher who compiles AI-related incidents in a database. He advises mentally replacing “AI” with “spreadsheet” to cut through the hype and avoid attributing intentions to computer programs, which risks absolving those actually responsible when something fails.

“We need regulation”

A technology becomes risky when it is too “autonomous,” when there are “too many actors involved in its operation,” or when the decision-making system is not “transparent,” says Cindy Gordon, chief executive of SalesChoice, a company that sells AI-powered sales software.

Once perfected, text-generating software could be used to spread misinformation and manipulate public opinion, warns New York University professor Gary Marcus. “We absolutely need regulation (…) to protect people from machine manufacturers,” he added.

Europe hopes to lead the way once again, as it did with its law on personal data. Canada is working on the issue, and the White House recently released a “Blueprint for an AI Bill of Rights,” a short document of general principles such as protection against dangerous or faulty systems.

“A law like the one for refrigerators”

Given the political deadlock in the US Congress, that is unlikely to be translated into new legislation before 2024. But “there are already many authorities that can regulate AI” using existing laws, on discrimination for example, notes Sean McGregor. He cites New York City, which at the end of 2021 adopted a law banning the use of automated hiring software unless the software has been audited.

“AI is easier to regulate than data privacy,” the expert says, because personal information is central to the business models of digital platforms and advertisers, whereas “faulty AI, on the other hand, does not bring in revenue.” Regulators must nevertheless be careful not to stifle innovation.

AI has, among other things, become a valuable ally for doctors: Google’s mammography technology, for example, reduces false diagnoses (positive or negative) of breast cancer by 6% to 9%, according to a 2020 study. “The ideal would be a law like the one for refrigerators,” responds Sean McGregor. “There is no need to give the technical specifications; you just say it has to be safe.”
