Artificial Intelligence

Over the last few years, Artificial Intelligence (AI) technology has advanced rapidly, and its use throughout society has greatly increased. Developments in AI have brought numerous benefits, as well as concerns about the effects of its current use and its potential for future harm.

What role, if any, the Federal government should play in regulating the development and deployment of AI programs, in the US and internationally, has been the subject of much debate.

National Sample: 3,610 Registered Voters
Confidence Interval: +/- 1.3 to 1.8%
Fielded: February 16-23, 2024
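
For context, the reported interval is consistent with the standard margin-of-error formula for a sample of this size, assuming the conventional 95% confidence level (which the report does not state):

    \[
    \text{MoE} = z\sqrt{\tfrac{p(1-p)}{n}} = 1.96\sqrt{\tfrac{0.5 \times 0.5}{3610}} \approx 1.6\%
    \]

A range rather than a single figure typically reflects that the margin varies with each question's result and with the size of the subsample asked.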

  • Questionnaire (PDF): Coming Soon
  • Full Report (PDF): Coming Soon

Proposals with bipartisan support:

  • Require new decision-making AI programs to pass a government-run test before they can be deployed
  • Allow the government to audit decision-making AI programs, and if any problems are found, require they be fixed
  • Require AI companies to disclose the training data for decision-making AI to the Federal government
  • Require that deepfakes be clearly labeled as such
  • Prohibit political campaign advertisements from using deepfakes
  • Prohibit the publication of pornographic deepfakes without the consent of the individuals being depicted
  • Create a new Federal agency to enforce regulations on AI, oversee AI developments, and provide guidance on AI policy
  • Have the US actively work to create an international treaty to ban the development and use of AI-powered weapons that can fire on targets autonomously
  • Have the US actively work to create an international organization to regulate and monitor large-scale AI programs

The immediate concerns that AI programs present, which the Federal government can address domestically, were presented as follows:

First, we will address immediate concerns about AI programs that are already being used.

For example, some AI programs have:

  • violated regulations, though they were not instructed to do so 
  • provided incorrect information 
  • made flawed recommendations or decisions
  • unintentionally treated some groups in a biased way (e.g. by race or gender)

AI programs have also been purposely used to:

  • create misinformation very quickly and on a large scale
  • create fake videos of people or events that appear very real, which have misled people and damaged reputations
  • steal private data

AI programs have also been hacked and used for harmful purposes. Some of these concerns can be addressed at the national level by the Federal government. We will explore proposals for what the government might do.

Several of the proposals for regulating AI take a preventative approach, so respondents were introduced to that idea as follows:

As mentioned, there is debate about what role the government should play in regulating AI companies and AI programs.

There are two general approaches that the government can take:

One approach is for the government to take action only after a company has sold a product or service, something has gone wrong, and the product has harmed consumers in some way.

Another approach is for the government to intervene more actively in advance to try to prevent harm from happening. This is called a preventative approach. The government already uses this approach in some areas, such as healthcare, where it requires new drugs to pass a series of tests before they can be put on the market.

REGULATING DECISION-MAKING AI PROGRAMS

Respondents were introduced to the proposal for regulating decision-making AI programs by requiring that they pass government-run pre-tests before they can be deployed, as follows:

One way that the government can take a preventative approach with AI is to require that new AI programs pass a series of tests before they can be put into general use. This is called “pre-testing”. It would be similar to how the government requires that new drugs be tested.

There is now a proposal to require pre-testing of new AI programs that are going to be used to make decisions that can have significant impacts on people, including in healthcare, banking, housing, education, employment, legal services, and utilities like electricity.

For example, this would include AI programs used:

  • by banks to determine who gets accepted for a loan
  • by government agencies to determine whether a person is eligible for government benefits, such as food stamps
  • by health insurance companies to determine whether a person’s medical treatment is covered
  • by companies to determine whether a person should be hired
  • by utility companies to determine how to allocate resources, like electricity, when there is a shortage

The tests would try to ensure that the AI program:

  • follows regulations to reduce the chances that it will break the law 
  • follows best practices established by professionals, to reduce the chances it will cause harm
  • has security protections for data privacy and against hacking    
  • does not have unintended biases that result in it treating some groups worse than others, based on their race, gender, religion, age, sexual orientation, or nationality

These tests would be run by the government, or by an independent third-party verified by the government. 

If the AI program does not pass the tests, it would not be approved for general use.
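
As a concrete illustration of what the bias portion of such pre-tests might check, below is a minimal sketch of one common fairness measure: the gap in favorable-decision rates across groups. The function, data, and 5-point threshold are all hypothetical; they are not drawn from the survey or from any proposed legislation.

    # Illustrative sketch only: one simple fairness check a pre-test or
    # audit might include. All names, data, and thresholds are hypothetical.
    from collections import defaultdict

    def decision_rate_gap(decisions, groups):
        """Return the largest gap in favorable-decision rates across
        groups, plus the per-group rates.

        decisions: list of 0/1 outcomes (e.g., 1 = loan approved)
        groups: list of group labels, parallel to decisions
        """
        totals = defaultdict(int)
        favorable = defaultdict(int)
        for d, g in zip(decisions, groups):
            totals[g] += 1
            favorable[g] += d
        rates = {g: favorable[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical check: flag the program if rates differ by > 5 points.
    gap, rates = decision_rate_gap(
        decisions=[1, 0, 1, 1, 0, 1, 0, 0],
        groups=["A", "A", "A", "B", "B", "B", "B", "B"],
    )
    print(rates)       # A: ~0.67, B: 0.40 -> group B approved less often
    print(gap > 0.05)  # True -> fails this illustrative threshold

A real pre-test or audit would involve much more (statistical significance, multiple definitions of fairness, domain-specific standards), but the underlying idea of comparing outcomes across groups is the same.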

The arguments in favor were found convincing by very large and bipartisan majorities. Four of the five arguments against were also found convincing by majorities, though these majorities were much smaller and less bipartisan: more Republicans than Democrats found the con arguments convincing.

[arguments graphs q3-12 – since there are 5 pairs, maybe make one or two-page block]

Asked for their final recommendation, 81% were in favor, including 76% of Republicans and 88% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (79%) to very blue (84%).

[final rec graph q14]

Status of Legislation
The proposal was put forward in the 118th Congress in the Algorithmic Accountability Act by Sen. Wyden (D) and Rep. Clarke (D), but has not yet moved out of committee.

The proposal is also in the European Union’s Artificial Intelligence Act.

Respondents were told that “some AI programs are already in use and have not been pre-tested. AI programs can also change over time as they learn more or are updated by the company.”

They were presented a proposal to preventatively regulate decision-making AI programs that are already in use: 

Give the government the authority to audit AI programs that are already in use and that make decisions with significant impacts on people’s lives, or to contract independent third parties to audit them.

The audits would include tests on whether the program follows regulations and best practices, has data privacy and security protections, and does not have unintended biases. If the audit finds that the AI program has problems in any of those areas, the company that owns the AI program would have to fix them and redistribute the corrected version.

Asked for their final recommendation, 77% were in favor, including 74% of Republicans and 82% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (74%) to very blue (78%).

[final rec graph q16]

Status of Legislation
The proposal was put forward in the 118th Congress in the Algorithmic Accountability Act by Sen. Wyden (D) and Rep. Clarke (D), but has not yet moved out of committee.

The proposal is also in the European Union’s Artificial Intelligence Act.

Respondents were told that “if the government does pre-test or audit AI programs, another question is how much access the government would have to see how AI companies develop their programs.”

They were then presented the following proposal:

Require that AI companies provide the government with information about how the AI was trained, when the government requests it. This would include a summary of the data used to train the AI, and a description of how the data was obtained. This would not include any sensitive information about individuals, such as medical or financial records.

All of the arguments in favor and against were found convincing by bipartisan majorities, but the pro arguments did better overall and among both Democrats and Republicans.

[arguments graphs Q17-20]

Asked for their final recommendation, 72% were in favor, including 67% of Republicans and 81% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (69%) to very blue (72%).

[final rec graph q22]

Status of Legislation
The proposal was put forward in the 118th Congress in the Algorithmic Accountability Act by Sen. Wyden (D) and Rep. Clarke (D), but has not yet moved out of committee.

The proposal is also in the European Union’s Artificial Intelligence Act.

REGULATING DEEPFAKES

Respondents were presented the following proposal to regulate deepfakes by requiring that they be labeled:

Require that any deepfake image or video distributed publicly (e.g., posted online or shown on TV) must have a label stating that it is not real and was generated by AI. For videos, this label would need to be present the entire time the deepfake is on screen. Audio deepfakes would be required to include a verbal statement at the beginning.

Deepfakes that are used for entertainment purposes to impersonate a real person (such as portraying a movie actor as younger) would not be required to have a label, as long as the person being portrayed has given their consent.

The arguments in favor were found convincing by very large bipartisan majorities. The arguments against did not do as well: one was found convincing by less than half overall, including less than half of both Republicans and Democrats, and the other by just over half.

[arguments graphs q23-26]

Asked for their final recommendation, 83% were in favor, including 83% of Republicans and 85% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (81%) to very blue (85%).

[final rec graph q28]

Status of Legislation
The proposal was put forward in the 118th Congress in the following bills:

  • AI Labeling Act by Sen. Schatz (D) and Rep. Kean (R)
  • AI Disclosure Act by Rep. Torres (D)
  • DEEPFAKES Accountability Act by Rep. Clarke (D)

None of the bills has yet made it out of committee.

Respondents were told that “there have already been campaign advertisements that have used deepfakes depicting politicians doing or saying things they have not, and events that have not happened.”

They were then introduced to the following proposal: “Make it illegal for political campaigns, including PACs, to use deepfakes in their campaign advertisements.”

The arguments in favor were found convincing by overwhelming bipartisan majorities, while the arguments against did substantially worse, with each found convincing by less than half, including less than half of Republicans and Democrats.

[arguments graphs q29-32]

Asked for their final recommendation, 84% were in favor, including 83% of Republicans and 86% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (78%) to very blue (78%).

[final rec graph q34]

Status of Legislation
The proposal has been considered by the Federal Election Commission, but has not yet been codified into law.

Respondents were introduced to the topic of pornographic deepfakes as follows:

As you may know, people have created deepfake images and videos of individuals engaging in sexual activities without that person’s consent. For example, people’s faces have been put on images and videos of other people engaging in sexual acts. These deepfakes have then been posted publicly online.

The proposal was then presented:

Make it illegal to publicly distribute a deepfake of a person engaging in sexual activity, such as by posting it on the internet, without that person’s consent.

It would not apply to people who only make such deepfakes for their personal use and do not make them public.

The argument in favor was found convincing by an overwhelming bipartisan majority, while the argument against did quite poorly, with less than half finding it convincing, including less than half of Republicans and Democrats.

[argument graphs q35-36]

Asked for their final recommendation, 86% were in favor, including 85% of Republicans and 87% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (86%) to very blue (84%).

[final rec graph q38]

Status of Legislation
The proposal was put forward in the 118th Congress in the Preventing Deepfakes of Intimate Images Act by Rep. Morelle (D). It has not yet made it out of committee.

FEDERAL AGENCY

Respondents were introduced to the idea of a new Federal agency for AI as follows:

So far, we have been talking about some specific problems with AI. We are now going to explore a more general proposal for a Federal agency for AI.

Currently, a variety of Federal agencies are responding to specific concerns with AI programs arising in their own areas of expertise.

The proposal was then presented:

This proposed agency for AI would take a preventative and comprehensive approach to overseeing and regulating the development and use of AI programs. The agency would:

  • closely monitor the state of AI programs and their uses, and try to anticipate potential problems
  • define best practices for developing and using AI programs, based on input from AI experts, industry leaders, and other professionals
  • make recommendations for AI regulations to Congress and the Executive Branch
  • enforce AI regulations that have been adopted

The arguments in favor were found convincing by very large bipartisan majorities. The arguments against did not do as well, but were still found convincing by majorities overall.

[arguments graphs q39-42]

Asked for their final recommendation, 74% were in favor, including 68% of Republicans and 81% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (71%) to very blue (75%).

[final rec graph q44]

Status of Legislation
The proposal was based on the Digital Platforms Commission Act by Sen. Bennet (D) from the 118th Congress. It has not yet made it out of committee.

INTERNATIONAL TREATIES

A description of lethal autonomous weapons and concerns about their future development were presented as follows:

As you may know, AI programs have been put into weapons to assist with finding and locking onto targets. There is a concern that a weapon will be programmed not only to find a certain type of target (an enemy combatant or a military site), but also to make the decision whether to fire on that target, independent of any human choice at the time. These types of weapons are known as lethal autonomous weapons.

The reason that militaries would build lethal autonomous weapons is that they can be more efficient and effective than weapons that require some human control: thousands of them can be deployed at the same time without the need for an equivalent number of humans controlling them or making the final decision to attack targets.

There is a concern that these weapons may not always accurately distinguish the target, and may end up firing on civilians or non-military sites.

They were then presented the following proposal:

A proposal has been put forward for an international treaty that would prohibit lethal autonomous weapons.  Weapons could use AI to find and lock onto a target, but a human would have to decide whether it fires on that target.

The treaty would also have a UN agency enforce this requirement. Member nations would have to disclose information about the use of AI in their weapons systems and allow the UN agency to inspect their weapons systems.

Non-Member nations would be pressured to ban lethal autonomous weapons as well.

This proposal is modeled after other international treaties for monitoring and regulating potentially dangerous technologies, such as nuclear and biochemical weapons.

So the question is whether the US should actively work with other nations to create an international treaty to ban lethal autonomous weapons. 

All of the arguments were found convincing by bipartisan majorities, but the arguments in favor did substantially better, overall and among Republicans and Democrats.

[arguments graphs q45-48 – Q45 is pro, Q46 is con, Q47 is also a con, and Q48 is a pro]

Asked for their final recommendation, 81% were in favor, including 77% of Republicans and 85% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (77%) to very blue (83%).

[final rec graph q50]

Status of Proposal
The proposal to ban lethal autonomous weapons has been considered by UN Secretary-General António Guterres and the International Committee of the Red Cross, and has been advocated for by the Campaign to Stop Killer Robots. A treaty has not yet been drafted.

Respondents were briefed on concerns about the future development of large-scale AI programs with the potential to cause international damage:

Now let’s turn to a proposal for dealing with large-scale AI programs.

Among some AI experts, there is a concern that large-scale AI programs could be created that are highly intelligent, have advanced capabilities, and, perhaps most significantly, have a high level of autonomy. According to these experts, these AI programs could become uncontrollable by humans and engage in dangerous behavior that causes massive harm.

On the other hand, some AI experts have said that these fears of an AI program becoming so powerful and destructive independent of human control are neither realistic nor based on any evidence.

A recent survey of AI experts found that more than half believe there is at least a five percent chance that AI could be developed to the point that it could cause extremely bad outcomes, even possibly human extinction.

In addition to concerns about AI acting autonomously, there are also broad concerns that highly powerful AI programs could be hacked or misused to cause massive harm.

 The proposal for regulating large-scale AI programs was then presented:

A proposal has been put forward for an international treaty for regulating large-scale AI programs. This treaty would have two parts:

First, Member nations (those that signed the treaty) would establish a set of regulations for the development and use of large-scale AI programs, with the goal of ensuring that they:

  • can always be shut down by human operators in case they get out of control
  • have robust security measures to protect them from being hacked or misused
  • do not cause major unintended and problematic consequences

As AI technology advances and changes, member nations could establish new regulations.

Second, an international agency would be created to monitor and inspect whether nations’ large-scale AI programs are following the agreed-on regulations, and to help fix any problems that arise. Member nations would be required to disclose information about their large-scale AI programs and agree to inspections; non-Member nations would be pressured to do so as well.

This proposal is modeled after previous international treaties for monitoring and regulating potentially dangerous technologies, such as nuclear and biochemical weapons.

So, the question is whether the US should actively work with other nations to create such an international treaty to establish an agency to regulate large-scale AI programs.

The arguments in favor and against were both found convincing by bipartisan majorities, but the argument in favor did much better, overall and among Republicans and Democrats.

[argument graphs q51-52]

Asked for their final recommendation, 77% were in favor, including 71% of Republicans and 84% of Democrats.

Majorities in all types of Congressional districts were in favor, from very red (70%) to very blue (79%).

[final rec graph q54]

Status of Proposal
The proposal is based on ideas for creating an international AI agency, similar to the International Atomic Energy Agency, that have been put forward by UN Secretary-General António Guterres as well as OpenAI CEO Sam Altman. A treaty to create such an organization has not yet been drafted.