Regulating AI Favored by Large Bipartisan Majorities of Voters

Nine Major Proposals for Government Regulation of Artificial Intelligence Favored by Very Large Bipartisan Majorities of Voters

As the House’s new Task Force on Artificial Intelligence considers how the government should address AI issues, such as deepfakes in elections and bias in algorithms, a new survey finds that very large bipartisan majorities favor giving the federal government broad powers to regulate artificial intelligence (AI). Voters endorse seven domestic proposals currently under consideration in Congress and the Executive Branch for regulating AI-generated deepfakes and AI programs that make decisions with the potential for harm.

Internationally, as the United Nations adopts a US-led resolution to ensure AI does not violate human rights, voters favor the US working to establish an international agency to regulate large-scale AI projects and to create an international treaty prohibiting AI-controlled weapons.

The survey was conducted with 3,610 registered voters by the Program for Public Consultation (PPC) at the University of Maryland’s School of Public Policy. To ensure that respondents fully understood the issues around AI, as in all public consultation surveys, they were given in-depth briefings and arguments for and against each proposal, all reviewed by experts on each side of the debates.

Creating new laws for AI-generated deepfakes registered overwhelming bipartisan support. Advances in AI have made it easy to create hyper-realistic fake images, video, and audio. All three proposals surveyed garnered the support of over eight in ten Republicans and Democrats:

  • Prohibit the use of deepfakes in political campaign advertisements, such as to depict an opponent saying something they did not, or to depict an event that did not occur, as proposed by the Federal Election Commission. (National 84%, Republicans 83%, Democrats 86%)
  • Prohibit the public distribution of any pornographic deepfake that was made without the consent of the person being depicted, as proposed in the Preventing Deepfakes of Intimate Images Act and DEFIANCE Act. (National 86%, Republicans 85%, Democrats 87%)
  • Require that all deepfakes shared publicly be clearly labeled as such, as proposed in the AI Labeling Act, AI Disclosure Act, and DEEPFAKES Accountability Act. (National 83%, Republicans 83%, Democrats 85%)

Large bipartisan majorities also favor three proposals for closely regulating AI programs that make decisions with significant impacts on people’s lives, including in healthcare, banking, hiring, criminal justice, and welfare, much as the FDA regulates drugs. There is evidence that some of these programs have violated regulations, shown bias (for example, by race, gender, or age), and caused material harm to individuals.

More than seven in ten voters favor proposals that would:

  • Require that these AI programs pass a test before they can be put into use, evaluating whether they may violate regulations, make biased decisions, or have security vulnerabilities. (National 81%, Republicans 76%, Democrats 88%)
  • Allow the government to audit programs that are in use and require the AI company to fix any problems that are found. (National 77%, Republicans 74%, Democrats 82%)
  • Require AI companies to disclose information to the government, upon request, about how the decision-making AI was trained, to aid with pre-testing and audits. (National 72%, Republicans 67%, Democrats 81%)

These proposals come from the Algorithmic Accountability Act, and mirror regulations in the European Union’s Artificial Intelligence Act.

Creating a new federal agency for AI, which would enforce regulations, oversee AI development, and provide guidance on AI policy, is supported by 74% of voters (Republicans 68%, Democrats 81%). This proposal is based on the Digital Platforms Commission Act.

In the international realm, Americans also support creating an international regulatory agency for large-scale AI, modeled after the International Atomic Energy Agency, as proposed by OpenAI, NYU Professor Gary Marcus, and UN Secretary-General António Guterres. A large bipartisan majority (77%) favors creating such an agency, which would develop international standards for large-scale AI and have the authority to monitor and inspect whether those standards are being met (Republicans 71%, Democrats 84%).

Also in the international realm, Americans support creating a treaty to prohibit the development of weapons that use AI to fire on targets without human control, known as lethal autonomous weapons, as called for by the International Committee of the Red Cross and the Campaign to Stop Killer Robots. A large bipartisan majority (81%) favors the US actively working to establish such a treaty and creating an international agency to enforce the prohibition (Republicans 77%, Democrats 85%).

“Clearly Americans are seriously concerned about the current and potential harms from AI,” comments Steven Kull, director of PPC.  “Large majorities of Republicans as well as Democrats favor creating robust federal and international agencies to regulate AI and protect people from deepfakes, biased decision-making, and other potential harms from AI.”

When respondents evaluated arguments for and against each of the above proposals, the arguments in favor of regulation were found convincing by larger majorities of both Republicans and Democrats than the arguments against. However, majorities also found many of the arguments against regulation convincing, including that regulation will stifle innovation, that prohibitions violate freedom of expression, and that international agencies may abuse their power. Kull adds, “Americans are wary of government regulation, but they are clearly more wary of the unconstrained development and use of AI.”

The survey was fielded online February 16-23, 2024 with a representative non-probability sample of 3,610 registered voters provided by Precision Sample from its larger online panel. The confidence interval varies from +/- 1.4% to +/- 1.8%.
