Sen. Ron Wyden, D-Ore., with Sen. Cory Booker, D-N.J., and Rep. Yvette Clarke, D-N.Y., introduced the Algorithmic Accountability Act of 2023 on Thursday, which would create new protections for people affected by artificial intelligence (AI) systems that already shape decisions about housing, credit, education and other high-impact areas.
The bill applies to new generative AI systems used for critical decisions, as well as other AI and automated systems.
“AI is making choices, today, about who gets hired for a job, whether someone can rent an apartment and what school someone can attend. Our bill will pull back the curtain on these systems to require ongoing testing to make sure artificial intelligence that is responsible for critical decisions actually works, and doesn’t amplify bias based on where a person lives, where they go to church or the color of their skin,” Wyden said.
“We know of too many real-world examples of AI systems that have flawed or biased algorithms: automated processes used in hospitals that understate the health needs of Black patients; recruiting and hiring tools that discriminate against women and minority candidates; facial recognition systems with higher error rates among people with darker skin; and more. The Algorithmic Accountability Act would require that automated systems be assessed for biases, hold bad actors accountable, and ultimately help to create a safer AI future,” said Senator Booker.
“Americans do not forfeit their civil liberties when they go online. But when corporations with vast resources continue to allow their AI systems to carry biases against vulnerable groups, the reality is that countless have and will continue to face prejudice in digital spaces,” said Congresswoman Clarke. “No longer can lines of code remain exempt from our anti-discrimination laws. My bill recognizes that every algorithm has an author and every bias has an origin and that, through proper regulation, we can ensure safety, inclusion, and equity are truly priorities in critical decisions affecting Americans’ lives.”
The bill requires companies to conduct impact assessments for effectiveness, bias and other factors when using artificial intelligence to make critical decisions. It also creates, for the first time, a public repository of these systems at the Federal Trade Commission, and adds 75 staff to the commission to enforce the law.
It is co-sponsored by Sens. Martin Heinrich, D-N.M., Gary Peters, D-Mich., Bob Casey, D-Pa., Ben Ray Luján, D-N.M., Tammy Baldwin, D-Wis., Jeff Merkley, D-Ore., Sheldon Whitehouse, D-R.I., Brian Schatz, D-Hawaii, Mazie Hirono, D-Hawaii, and Elizabeth Warren, D-Mass.
“From determining employment decisions to granting personal loans, algorithms are increasingly making critical decisions about Americans’ health, finances, housing, education, and access to opportunities – but they’re too often flawed and amplify harmful biases,” said Senator Warren. “This bill will help ensure greater transparency on the impacts of algorithms, and it will empower the FTC to better protect consumers.”
“Consumers deserve to know when and how important decisions, like loan approvals, are decided by automated systems,” said Luján. “That’s why I’m proud to join my colleagues in introducing the Algorithmic Accountability Act to implement safeguards, limit biases, and hold automated systems accountable.”
“As the use of AI and algorithmic decision making becomes more prevalent, particularly by companies that make critical decisions about Americans’ health, finances, housing, and educational opportunities, we must ensure that there are sufficient regulations and standards in place to protect people from bias and discrimination,” said Senator Hirono. “I am glad to cosponsor the Algorithmic Accountability Act to strengthen consumer protections and increase accountability for companies using algorithms to make decisions.”
“Algorithms play a significant role in the way society interacts and behaves every single day, whether we like it or not,” said Senator Merkley. “While some algorithms may appear harmless, there are few boundaries and little public insight into how these systems operate. Big tech companies and developers must be transparent with the public about how they use our data to shape every aspect of our lives, because we know AI has the potential to exacerbate discrimination and inequality. This bill is step one.”
“Transparency is the best way to build in both accountability and trust that artificial intelligence systems are working responsibly, especially as more industries adopt these tools,” said Senator Peters. “Americans should know when they are interacting with automated systems that are making critical decisions that could impact their health, finances, civil rights, and more.”
In the House, the bill is cosponsored by Rep. Ayanna Pressley, D-Mass., Rep. Nanette Diaz Barragan, D-Calif., Rep. Cori Bush, D-Mo., Rep. Jamaal Bowman, D-N.Y., Rep. Dwight Evans, D-Pa., Rep. Lori Trahan, D-Mass., Rep. Steve Cohen, D-Tenn., Rep. Andre Carson, D-Ind., Rep. Marc Veasey, D-Texas, Rep. Frederica S. Wilson, D-Fla., Rep. Pramila Jayapal, D-Wash., Rep. Jared Huffman, D-Calif., Rep. Robin L. Kelly, D-Ill., and Rep. Emanuel Cleaver, D-Mo.
A full summary of the bill is available here.
The Algorithmic Accountability Act is endorsed by a broad array of experts and civil society organizations: Access Now, Accountable Tech, Aerica Shimizu Banks, Anti-Defamation League, Center for Democracy and Technology (CDT), Color of Change, Consumer Reports, Encode Justice, EPIC, Fight for the Future, IEEE, Montreal AI Ethics Institute, National Hispanic Media Coalition, New America’s Open Technology Institute, Vera and US PIRG.
“As AI and generative AI systems become mainstreamed and popularized, we must ensure there are appropriate safeguards put in place,” said Yaël Eisenstat, Vice President of the Anti-Defamation League’s Center for Tech & Society. “We have already seen how social media platforms leverage AI-powered systems without any real accountability and, at times, spread hate and antisemitism as a result. We need transparency into how these systems operate and guardrails around what they can and cannot do. Senator Wyden’s Algorithmic Accountability Act is an important effort to provide the public with needed information on how these systems are being used and the assurance that companies are monitoring the impacts of these new technologies.”
“Poorly designed algorithms can result in inaccurate outcomes, inconsistent results, serious discriminatory impacts, and other harms,” said Justin Brookman, Director of Technology Policy at Consumer Reports. “The Algorithmic Accountability Act is an important foundation to provide researchers and policymakers with the tools to identify who can be impacted by these emerging technologies and how. We look forward to continuing to work with the sponsors of the bill to seek out the most effective ways to mitigate algorithmic harm.”