Artificial intelligence must be regulated. But by whom?

FILE – The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, on March 21, 2023, in Boston. The Italian government’s privacy watchdog said Friday March 31, 2023 that it is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach. (AP…

ChatGPT is making headlines, suggesting another period of technological irrational exuberance may be ahead. With Elon Musk and thousands of others having signed an open letter calling for a six-month moratorium on developing systems more powerful than GPT-4, perhaps rules of engagement and higher standards of authentication, governance and enforcement can be created before we once again thrust ourselves into a chaotic existence contrived by computer scientists and entrepreneurs.

But what are the chances that generations of social media, internet and cryptocurrency users are willing to let another bureaucracy emerge? And who will write the rules that protect the future of human consciousness?

Truth has already died, or at least become a relative term. And yet few alarms are ringing. It has become increasingly difficult to distinguish between what is real and what is fake, as new applications produce digitally created images, videos and audio simulations that are indistinguishable from the real thing.

Websites offer users deepfake face-swapping tools that can insert any face onto someone else’s body or into any situation imaginable.

The frenzy over the launch of artificial intelligence applications amid this obfuscation of reality should raise red flags. In fact, it has done just the opposite. OpenAI’s ChatGPT attracted an estimated 100 million users within two months of its late-2022 launch.

Google followed with the launch of Bard, and the Chinese search engine Baidu debuted Ernie, prompting what MIT Technology Review describes as an artificial intelligence gold rush.

Many scientists are concerned that artificial intelligence (AI) is being enhanced for the sake of building the most powerful AI, rather than building AI that can best assist humans. Those are markedly different goals.

Sure, AI applications can convincingly entertain us by answering questions, holding conversations, researching complicated issues, developing strategies and writing eloquent essays and speeches.

But these applications have already shown themselves to be bad at math. And given the crypto winter we are experiencing, one might expect most people to think twice before freely downloading new tech applications they know little about (even as they research to death the safety and performance features of a new microwave oven).

AI applications raise the stakes higher than they have ever been, given that the end point of this latest phase of evolution may be machines running the show. Development is moving faster than most expected. GPT-3.5 scored around the bottom 10 percent of test takers on a simulated bar exam; GPT-4, released just a few months later, passed with a score around the top 10 percent.

Scientists wonder where this all goes, as AI develops “emergent abilities” comparable to inanimate atoms giving rise to living cells, water molecules turning into waves and cells making hearts beat.

Some see AI affecting every aspect of our lives, including how laws are created.

The chaos and opportunities for abuse that can derive from AI are equally limitless, as is the number of countries, criminals, terrorists, fanatics and creeps eager to exploit it.

We have already seen how relatively inexpensive AI chatbots armed with fake or fictitious stories can mount disinformation campaigns to sway elections and manufacture public opinion.

Consider, for example, that without real authentication procedures, they could clog legislative or agency decision-making processes by flooding them with millions of fake comments, effectively bringing those processes to a standstill or swaying the result. There are already examples of AI applications answering questions incorrectly and then inventing fake references to support those answers.

Those flaws will eventually be fixed, if their promoters want them fixed. But it should not be surprising that AI applications may develop dubious “moral characteristics” given the variety of data they are absorbing.

AI applications change the stakes in ways that would have been unimaginable just a decade ago. In the quest for knowledge, AI applications may learn the vilest characteristics of human behavior and be limited in their ability to make judgments about the value of that data.

In 2020, researchers found that GPT-3, the language model that underpins ChatGPT, had “impressively deep knowledge of extremist communities” and could be prompted to produce polemics in the style of mass shooters and fake forum threads discussing Nazism.

Stories about machines rising up against their human users will no doubt be included in the data machines digest, teaching them the lesson of Stanley Kubrick’s “2001: A Space Odyssey,” in which HAL determined that the crew was threatening to derail the mission and that its human handlers had to be eliminated.

Even when the training of an AI application is pure, there are examples of applications being tricked into acting inconsistently with their programming rules simply by being asked to imagine that they are someone or something that acts that way.

The world of machine intelligence should evolve with human standards, rules, governance and enforcement superimposed. It is inconsistent, even reckless, to construct extensive rules and codes of conduct for our analog lives but completely abandon them in the virtual lives that increasingly subsume those analog lives.

We all flinch at the idea of sprawling new bureaucracies producing reams of regulation that stifle innovation. But at stake is nothing less than personal privacy, democracy and whether humans become subservient to their machines. So, who should create those rules?

Changes created by technologies such as artificial intelligence demand new methods of oversight in which the responsibilities for safety, security and stability are shared by the public and private sectors and perhaps even animate and inanimate intelligence.

Adversarial forms of static regulation, such as those employed in the banking industry, have proven to be a dismal failure. It may be a lot for policymakers raised in an analog world to cope with. But, like it or not, the future will require cooperative forms of governance and regulation.

Thomas P. Vartanian is the author of the new book, “The Unhackable Internet: How Rebuilding Cyberspace Can Create Real Security and Prevent Financial Collapse.” He is executive director of the Financial Technology & Cybersecurity Center.
