In The Know: First US Regulations On AI Systems

In a recent joint statement, 87 human rights and civil rights organizations urged the US Congress to take action on the significant human rights and societal risks created and enabled by artificial intelligence (AI) technologies. The statement outlines several threats AI poses to society. Meanwhile, the first US regulations on AI systems are under preparation.

According to the organizations, “screening tools used by companies to streamline hiring, for example, have created barriers to employment for people with disabilities, women, older people, and people of color.” The easy creation of manipulated video and audio is fueling consumer fraud and extortion schemes and raising critical questions about the election-related information environment and public discourse. At the same time, the U.S. government has roughly 50 independent regulatory bodies, and many AI risks can be addressed through existing authorities. Some US cities and states have already passed legislation limiting the use of AI in areas such as police investigations and hiring. Other measures are underway.

 

Fifteen Pioneers

In July 2023, the Biden-Harris Administration secured voluntary commitments from seven companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—to manage the risks associated with AI. Approximately two months later, in September 2023, eight additional companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI—subscribed to these commitments as well.

The companies committed to ensure AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems’ capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation.

Under the proposed “AI Disclosure Act of 2023”, all material generated by artificial intelligence technology would have to include the following notice: “DISCLAIMER: This output has been generated by artificial intelligence”. The requirement would apply to videos, photos, text, audio, and any other AI-generated material. The Federal Trade Commission (FTC) would be responsible for enforcement, and violations could result in civil penalties. The bill reflects a rising fear that AI will make it far easier to create “deep fakes” and convincing disinformation, especially as the 2024 presidential campaign accelerates.


 

Protecting Civil Rights

According to Politico magazine, the Biden administration’s long-awaited executive order on AI is expected to leverage the federal government’s vast purchasing power to shape American standards for a technology that has run ahead of regulators. The White House is also expected to lean on the National Institute of Standards and Technology to tighten industry guidelines on testing and evaluating AI systems. In short, these provisions would build on the voluntary commitments on safety, security, and trust made by the 15 major tech companies (see above).

The New York Times notes that the federal government’s first regulations on AI systems will include requirements that the most advanced AI products be tested to ensure that they cannot be used to produce biological or nuclear weapons, with the findings from those tests reported to the federal government.

The order affects only American companies, but because software development happens around the world, the United States will face diplomatic challenges enforcing the regulations, which is why the administration is attempting to encourage allies and adversaries alike to develop similar rules.

The executive action will build on years of White House efforts to establish AI standards. The Trump White House issued an executive order to drive American leadership in AI in 2019. In October 2022, the Biden administration issued its non-binding AI Bill of Rights, outlining the administration’s broad stances on governing automated systems, with an emphasis on protecting civil rights.

