Here’s What We Learned from Our Deep-Dive on Artificial Intelligence

The use of artificial intelligence (AI) is expanding rapidly. These technological breakthroughs present both opportunity and potential peril. AI offers great hope for increasing economic opportunity, boosting incomes, accelerating life science research while reducing its costs, and simplifying the lives of consumers. With so much potential for innovation, organizations are already ramping up AI investments and productivity-boosting initiatives to remain competitive.

Like most disruptive technologies, AI can both create and displace jobs. If appropriate and reasonable protections are not put in place, AI could adversely affect privacy and personal liberties or promote bias. Policymakers must debate and resolve the questions raised by these opportunities and concerns to ensure that AI is used responsibly and ethically.

This debate must answer several core questions: What is the government’s role in promoting the kinds of innovation that allow for learning and adaptation while leveraging the American economy’s core strengths in innovation and product development? How should policymakers balance the competing interests associated with AI, weighing economic, societal, and quality-of-life improvements against privacy concerns, workforce disruption, and the built-in biases of algorithmic decision-making? And how can Washington establish a policy and regulatory environment that helps ensure continued U.S. global AI leadership while navigating between increasing regulation from Europe and competition from China’s broad-based adoption of AI?

Statement on AI Commission Report

The United States faces stiff competition from China in AI development. This competition is so fierce that it is unclear which nation will emerge as the global leader, raising significant security concerns for the United States and its allies. Another critical factor shaping the path forward in AI policymaking is how nations weigh long-standing values such as personal liberty, free speech, and privacy.

To maintain its competitive advantage, the United States and like-minded jurisdictions, such as the European Union, need to reach agreement on resolving the key legal challenges that currently impede industry growth. At this time, it is unclear whether these important allies will collaborate on a common set of rules to address these legal issues or whether a more competitive, and potentially damaging, international legal environment will emerge.

AI has the capacity to transform our economy, how individuals live and work, and how nations interact with each other. Managing the potential negative impacts of this transition should be at the center of public policy. There is a growing sense that we have a short window of opportunity to address key risks while maximizing the enormous potential benefits of AI.

The time to address these issues is now.

In 2022, the U.S. Chamber of Commerce formed the Commission on AI Competitiveness, Inclusion, and Innovation (“Commission”) to answer the questions central to this debate. The Commission, co-chaired by former representatives John Delaney (D-MD) and Mike Ferguson (R-NJ), was tasked with providing independent, bipartisan recommendations to aid policymakers. Commissioners met over the course of a year with more than 87 expert witnesses during five field hearings across the country and overseas, while also receiving written feedback from stakeholders in response to three separate requests for information posed by the Commission.

The Commission observed six major themes from its fact-finding:

Key takeaways

  • The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
  • Policy leaders must develop thoughtful laws and rules to guide responsible AI development and ethical deployment.
  • A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
  • The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
  • The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
  • Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.

Understanding the importance of these findings, the Commission also determined that the following five pillars should be at the core of AI regulatory policymaking:

Five pillars of AI regulation

Efficiency

Policymakers must evaluate the applicability of existing laws and regulations. Appropriate enforcement of existing laws and regulations provides regulatory certainty and guidance to stakeholders and helps inform policymakers in developing future laws and regulations. Moreover, lawmakers should focus on filling gaps in existing regulations to address new challenges created by AI usage.

Collegiality

Federal interagency collaboration is vital to developing cohesive regulation of AI across the government. AI use is cross-cutting, complex, and rapidly changing and will require a strategic and coordinated approach among agencies. Therefore, the government will need to draw on expertise from different agencies, allowing sector and agency experts to home in on the most important emerging issues in their respective areas.

Neutrality

Laws should be technology neutral and focus on the applications and outcomes of AI, not the technologies themselves. Laws regarding AI should be created only as necessary to fill gaps in existing law, protect citizens’ rights, and foster public trust. Rather than imposing a one-size-fits-all regulatory framework, this approach allows for the development of flexible, industry-specific guidance and best practices.

Flexibility

Laws and regulations should encourage private sector approaches to risk assessment and innovation. Policymakers should encourage soft-law and best-practice approaches developed collaboratively by the private sector, technical experts, civil society, and government. Such nonbinding, self-regulatory approaches offer the flexibility to keep pace with rapidly changing technology, unlike laws that risk quickly becoming outdated.

Proportionality

When policymakers determine that existing laws have gaps, they should adopt a risk-based approach to AI regulation. This model ensures a balanced and proportionate overall regulatory framework for AI.