
As the AI frontier advances at a rapid pace, the U.S. government is struggling to keep up. I work on AI policy in Washington, D.C., and I can tell you that before we decide how to regulate cutting-edge AI systems, we first need to see them clearly. Right now, we’re navigating in a fog.
As an AI policy fellow at the Federation of American Scientists (FAS), my role is to build bipartisan consensus around improving the government’s ability to analyze current and future AI systems. In this work, I engage with experts across government, academia, civil society, and the AI industry. What I have learned is that there is no broad consensus on how to manage the potential risks of groundbreaking AI systems without hindering innovation. There is, however, broad agreement that the U.S. government needs better information about AI companies’ technologies and practices. Without a detailed understanding of the latest AI capabilities, policymakers cannot effectively assess whether existing regulations are sufficient to prevent misuse and accidents, or whether companies need to take additional steps to safeguard their systems.
When it comes to nuclear power or aviation safety, the federal government requires private companies in those industries to provide timely information in order to protect the public’s welfare. We need the same insight into the emerging field of AI. Otherwise, this information gap could leave us vulnerable to unforeseen national security risks or lead to overly restrictive policies that stifle innovation.
It is encouraging that Congress is making incremental progress in improving the government’s ability to understand and respond to new developments in AI. Since ChatGPT debuted in late 2022, lawmakers from both parties and both chambers have taken AI more seriously. The House of Representatives established a bipartisan AI task force, and Senate Majority Leader Chuck Schumer (D-N.Y.) convened a series of AI Insight Forums to gather outside input and lay the foundation for AI policy. These efforts informed the Senate’s bipartisan AI working group’s artificial intelligence roadmap, which outlined areas of consensus, including the “development and standardization of risk testing and assessment methodologies and mechanisms” and an information sharing and analysis hub focused on artificial intelligence.
Several bills have been introduced to increase information sharing on AI and strengthen the government’s response capabilities. The Artificial Intelligence Research, Innovation, and Accountability Act would require companies to submit risk assessments to the Commerce Department before deploying AI systems that could affect critical infrastructure, criminal justice, or biometrics. Another bipartisan bill, the Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act (which FAS has endorsed), proposes a system in which independent assessors audit and verify that AI companies are following established guidelines, similar to existing practice in the financial industry. Both bills were approved by the Senate Commerce Committee in July and could receive a floor vote in the Senate before the 2024 election.
There has also been encouraging progress elsewhere in the world. In May, the governments of the United Kingdom and South Korea announced at the Seoul AI Summit that most of the world’s leading AI companies had agreed to a new set of voluntary safety commitments. These include identifying, assessing, and managing the risks involved in developing state-of-the-art AI models, drawing on the responsible scaling policies that companies first pioneered last year, which provide a roadmap for future risk mitigation as AI capabilities develop. The AI developers also agreed to be transparent about their approaches to frontier AI safety, including “sharing more detailed information that cannot be shared publicly with trusted actors, including their respective governments.”
However, these commitments lack enforcement mechanisms and standardized reporting requirements, making it difficult to assess whether companies are actually honoring them.
Even some industry leaders have voiced support for increased government oversight. OpenAI CEO Sam Altman emphasized this point in testimony before Congress early last year, saying, “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.” Anthropic CEO Dario Amodei has taken this sentiment a step further: citing the company’s responsible scaling policy, he expressed hope that the government will turn elements of the policy into a “carefully designed testing and auditing system with accountability and oversight mechanisms.”
Despite these encouraging signs from Washington and the private sector, significant gaps remain in the U.S. government’s understanding of, and response to, the rapid development of AI technologies. Three key areas in particular require immediate attention: protections for independent AI safety research, early warning systems for AI capability improvements, and comprehensive reporting mechanisms for real-world AI incidents. Closing these gaps is critical to protecting national security, promoting innovation, and ensuring that AI development advances the public interest.
A Safe Harbor for Independent AI Safety Research
AI companies often discourage or even threaten to ban researchers who identify safety flaws from using their products, creating a chilling effect on essential independent research. This leaves the public and policymakers in the dark about possible dangers from widely used AI systems, including threats to U.S. national security. Independent research is critical because it provides an outside check on the claims of AI developers, helping to identify risks or limitations that the companies themselves may not be aware of.
One important proposal to address this problem is for companies to provide legal safe harbors and financial incentives for good-faith AI safety and trustworthiness research. Congress could offer “bug bounty”-style rewards and legal protections to AI safety researchers who identify vulnerabilities, similar to the protections for experts studying social media platforms in the proposed Platform Accountability and Transparency Act. In an open letter earlier this year, more than 350 leading researchers and advocates called on companies to provide such protections for safety researchers, but no company has yet done so.
With these protections and incentives in place, thousands of U.S. researchers would be empowered to stress-test AI systems, enabling real-time evaluation of AI products and systems. Similar safeguards appear in recent draft federal guidelines on managing the risk of misuse of dual-use foundation models, and Congress should consider codifying these best practices.
An early warning system for AI capability improvements
The U.S. government has limited means of identifying and responding to potentially dangerous capabilities in cutting-edge AI systems, and it is unlikely to keep pace if AI capabilities continue to grow rapidly. This knowledge gap between industry and government leaves policymakers and security agencies unprepared to address emerging AI risks. Worse, the consequences of this asymmetry will grow over time as AI systems become riskier and their applications more widespread.
Establishing an AI early warning system would give the government the information it needs to get ahead of AI threats. Such a system would create a formal channel for AI developers, researchers, and other relevant parties to report AI capabilities that have both civilian and military uses (such as aiding biological weapons research or cyberattacks) to the government. The Commerce Department’s Bureau of Industry and Security could serve as a clearinghouse, receiving, triaging, and forwarding these reports to other relevant agencies.
This proactive approach would give government stakeholders up-to-date information about the latest AI capabilities, allowing them to assess whether current regulations are adequate or new safeguards are needed. For example, if advances in AI systems increased the risk of a biological weapons attack, the relevant agencies would be alerted immediately, enabling a rapid response to safeguard the public’s well-being.
Reporting mechanisms for real-world AI incidents
Currently, the U.S. government lacks a comprehensive picture of adverse events caused by AI systems, which hinders its ability to identify risky usage patterns, evaluate the effectiveness of government guidelines, and respond to threats. This blind spot leaves policymakers unable to craft timely, informed responses.
Establishing a voluntary national AI incident reporting hub would create a standardized channel for companies, researchers, and the public to confidentially report AI incidents, including system failures, accidents, misuse, and emerging hazards. The hub would be housed at the National Institute of Standards and Technology, leveraging its existing expertise in incident reporting and standards development while avoiding mandatory requirements; this would encourage industry collaboration and participation.
Combining this real-world data on adverse AI incidents with forward-looking capability reporting and researcher protections would allow the government to develop better-informed policy responses to emerging AI issues, and would help developers better understand the threats they face.
These three proposals strike a balance between oversight and innovation in AI development. By incentivizing independent research and increasing government awareness of AI capabilities and incidents, they support both safety and technological progress. The government can foster public trust and potentially accelerate AI adoption across sectors, while guarding against the regulatory backlash that could follow preventable, high-profile incidents. Policymakers would be able to craft targeted regulations addressing specific risks, such as AI-enhanced cyber threats or potential misuse in critical infrastructure, while retaining the flexibility needed for continued innovation in areas such as medical diagnostics and climate modeling.
Passing legislation in these areas will require bipartisan cooperation in Congress. Stakeholders from industry, academia, and civil society must advocate for and participate in the process, lending their expertise to refine and implement these proposals. There is a window for action during the remainder of the 118th Congress: some AI transparency policies could be attached to must-pass legislation such as the National Defense Authorization Act. Time is short, and swift, decisive action now would lay the foundation for better AI governance in the years ahead.
Imagine a future where our governments have the tools to understand and responsibly guide the development of AI, and where we can harness AI’s potential to solve major challenges while guarding against risks. This future is within our grasp—but only if we act now to clear the fog and strengthen our collective vision for how AI is developed and used. By improving our collective understanding and oversight of AI, we increase our chances of steering this powerful technology toward outcomes that benefit society.