
We must address the AI risks right in front of us today

The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (AP Photo/Richard Drew, File)

ChatGPT — the generative artificial intelligence (AI) app that can answer obscure questions, write computer code, compose haikus, tutor algebra and engage in eerily humanlike conversation — has activated Silicon Valley’s hyperbole machine.

Sam Altman, chief executive of OpenAI, the start-up that designed ChatGPT with funding from Microsoft, told a TV interviewer, “This will be the greatest technology humanity has yet developed.”

Not everyone agrees. Forty-two percent of CEOs polled at a Yale University corporate summit in June said they’re concerned that artificial intelligence could destroy humanity within the next five to 10 years.

Who’s right? To answer that question, Congress is holding hearings, Senate Majority Leader Chuck Schumer (D-N.Y.) is promising a raft of legislation and President Biden is meeting with industry experts.

Let’s clear the air. First, ChatGPT and rival generative AI systems do not, in and of themselves, constitute a threat to the existence of humankind. But the technology does create serious immediate risks. These include the facilitation of political disinformation and cyberattacks, amplification of racial and gender bias, invasions of personal privacy and proliferation of online fraud.

Emphasizing these more discrete hazards makes sense even if one harbors anxiety that, if left unchecked, advancing AI may one day pose existential dangers. If we want to grapple effectively with potential threats to humanity, the best way to start is to regulate the AI risks right in front of us.

A few words of background: Generative AI developers feed mountains of data scraped from the internet into mathematical systems called neural networks, which are trained to recognize statistical patterns in the information. One type of network called a large language model (LLM) is trained to analyze all manner of online text: Reddit posts, digitized novels, peer-reviewed scientific studies, tweets, crowdsourced Wikipedia entries and much more. By observing patterns, an LLM gradually develops the ability to formulate prose, computer code and even conversation. There are also systems that generate images and audio.  
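To make “recognizing statistical patterns” a bit more concrete, here is a minimal Python sketch that trains the crudest possible language model (a table of which word tends to follow which) on a few sentences, then samples new text from it. Real LLMs replace the counting with neural networks trained on vastly more text, but the basic loop of learning next-word patterns and then sampling from them is the same idea. The tiny corpus and function names are illustrative assumptions, not how any production system is actually built.

```python
from collections import Counter, defaultdict
import random

# A tiny "training corpus" standing in for the web-scale text real LLMs ingest.
corpus = (
    "generative ai systems learn statistical patterns in text . "
    "the systems learn which word tends to follow which word . "
    "after training the systems can generate new text one word at a time ."
).split()

# "Training": count how often each word follows each other word (a bigram table,
# a drastically simplified stand-in for a neural network's learned probabilities).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start="the", n_words=12, seed=1):
    """'Generation': repeatedly sample the next word in proportion to those counts."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        counts = follows.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())
```

Scaled up by many orders of magnitude, that learn-then-sample loop is what lets systems like ChatGPT produce fluent prose without any explicit rules of grammar or fact.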

Generative AI has promising applications in health care, education and scientific research. But a new report published by the Center for Business and Human Rights at New York University’s Stern School of Business enumerates a series of societal hazards:

  • The new technology will make disinformation easier to produce and more convincing.
  • Cyberattacks against banks, power plants and government agencies will be aided by AI systems that can produce malware code in response to elementary text prompts.
  • Fraud also will proliferate as criminals learn to harness tools that allow technically unsophisticated users to personalize scams for individual victims.
  • Privacy violations will occur because the vast internet datasets used to train LLMs contain personal information that bad actors may be able to coax out of AI apps.
  • Bias and hate speech that exist online are likely to seep into the responses that LLMs offer up, leading to the victimization of marginalized groups.
  • Generative AI systems tend to “hallucinate,” or fabricate information, which creates dangers if users seek advice on topics such as medical diagnosis and treatment.

To address these dangers, tech companies can take a variety of steps, including:

  • Without exposing their core code to business rivals or bad actors, companies should disclose their data sources, the specific steps they take to reduce bias and privacy violations, and the tests they run to minimize hallucination and harmful content.
  • Generative AI systems should not be released until they are proven safe and effective for their intended use. Monitoring should continue even after release with the possibility of removing models from the marketplace if significant unanticipated dangers arise.
  • To reduce confusion and fraud, generative AI designers need to find ways to watermark or otherwise designate AI-generated content. At the same time, they and others should improve tools that can be used to detect AI-created material (a simplified illustration of how such a watermark could work follows this list).
  • Surprisingly, AI designers often don’t understand why their creations act as they do. They need to step up current efforts to solve this interpretability problem as part of the larger effort to make models safe.
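As promised above, here is a minimal sketch of one watermarking idea from recent academic research, sometimes called a “green list” watermark: the generator quietly prefers words from a list determined by the previous word, and a detector checks whether a text contains more of those preferred words than chance would predict. The toy vocabulary, the simplistic random “generator” and the function names below are illustrative assumptions, not any company’s actual system.

```python
import hashlib
import random

# Toy vocabulary; a real system would work over a model's full token vocabulary.
VOCAB = ["the", "model", "policy", "risk", "data", "text", "should",
         "system", "report", "users", "online", "safety", "tools", "new"]

def green_list(prev_word):
    """Deterministically pick a 'green' half of the vocabulary, keyed to the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def generate(n_words, watermark=True, bias=0.9, seed=0):
    """Stand-in 'generator': picks random words, favoring the green list when watermarking."""
    rng = random.Random(seed)
    words = ["the"]  # fixed start word, known to the detector in this toy example
    for _ in range(n_words):
        greens = green_list(words[-1])
        if watermark and rng.random() < bias:
            words.append(rng.choice(sorted(greens)))
        else:
            words.append(rng.choice(VOCAB))
    return words[1:]

def green_fraction(words):
    """Detector: share of words that fall in their predecessor's green list."""
    pairs = zip(["the"] + words[:-1], words)
    hits = sum(1 for prev, word in pairs if word in green_list(prev))
    return hits / len(words)

print("watermarked text:", round(green_fraction(generate(200, watermark=True)), 2))   # well above 0.5
print("unmarked text:   ", round(green_fraction(generate(200, watermark=False)), 2))  # about 0.5
```

A scheme like this only works if the developer embeds the marker at generation time and supports detection afterward, which is why the recommendation falls on the designers themselves rather than on downstream users.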

For their part, Congress and regulatory agencies can begin by ensuring that existing criminal, consumer protection and privacy laws are enforced in cases involving generative AI. Lawmakers should go further, enhancing the authority of the Federal Trade Commission or a new stand-alone agency to oversee digital industries, including AI companies.

Congress has failed in recent years to pass legislation mandating more disclosure by the social media industry. It must return to the task while broadening its field of vision to include AI. At the same time, lawmakers need to try again to pass a privacy law that would give consumers more control over their personal information. Finally, Congress should bolster public sector and academic AI computer research capacity so that regulators and university scientists can keep up with their private sector counterparts.

We can’t afford to repeat the mistakes made with social media, which grew into a virtually unregulated industry based on cheerful marketing about promoting free speech and personal connections. Without much oversight, Facebook, Twitter and YouTube became havens for misogynist and racist trolls, Russian disinformation operatives and Jan. 6 insurrectionists.

Today, we have an opportunity to get ahead of the problems that generative AI will almost certainly create. The tech industry needs to be held accountable, no matter how lucrative its innovations are.

Paul M. Barrett is the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business. Justin Hendrix is an associate research scientist and adjunct professor at NYU Tandon School of Engineering and the CEO and editor of Tech Policy Press.


