How Will the AI Bill of Rights Affect AI Development?

As the White House looks to protect consumers from AI abuse, will it help or hamper artificial intelligence adoption? Whatever the outcome, the stakes are high.

John Edwards, Technology Journalist & Author

January 23, 2023

5 Min Read

Last October, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights, which asserts principles and guidance around the equitable access and use of automated, or artificial intelligence, systems.

The guidelines aren't binding, but the Biden Administration hopes that the document will convince tech firms to take steps to protect consumers, including clearly explaining how and why an automated system is in use, and designing AI systems to be equitable.

The AI Bill of Rights (AIBoR) is a recommendation, not law, says Ramayya Krishnan, dean of Carnegie Mellon University's Heinz College of Information Systems and Public Policy, as well as a member of the US Department of Commerce's National Artificial Intelligence Advisory Committee.

The demand for implementing the AIBoR's measures will ultimately depend on how well it's received by developers and the public. “If there's significant adoption of some or all of these rights, this will create the demand for software libraries and modules that can contribute to creating services and capabilities,” Krishnan says. “More important,” he notes, “it will create demand for responsible AI development and deployment practices.”

In the near term, the AIBoR will have only a minimal impact, says Nicola Morini Bianzino, global CTO of business advisory firm EY. “It encourages those who have already identified trustworthy/responsible AI as a brand distinguisher, but it won’t change the operating practices of those who have different priorities.”

For the vast majority of AI developers -- those who are already operating in a safe, transparent and trustworthy manner -- the AI Bill of Rights should have minimal impact, says Wayne Butterfield, a partner with ISG Automation, a unit of technology research and advisory firm ISG. “Of course, some changes may be needed for those who presently are not adhering to these best practices -- with areas like explainability becoming not just a nice-to-have, but a must-have.”

The document provides five principles for the ethical use of AI, as well as a detailed discussion of specific use cases across different sectors of the economy, but it isn't legally binding, Bianzino observes. “The degree to which it influences engineer behavior will therefore come down to enforcement mechanisms and should be seen in the context of other regulatory agency actions.”

Transparent and Explainable

The AIBoR states that AI systems should be transparent and explainable, and not discriminate against individuals based on various protected characteristics. “This would require AI developers to design and build transparent and fair systems and carefully consider their systems’ potential impacts on individuals and society,” explains Bassel Haidar, artificial intelligence and machine learning practice lead at Guidehouse, a business consulting firm.

Creating transparent and fair AI systems could involve techniques such as feature attribution, which can help identify the factors that influenced a particular AI-driven decision or prediction, Haidar says. “It could also involve using techniques such as local interpretable model-agnostic explanations (LIME), which can help to explain the decision-making process of a black-box AI model.”
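
To make the idea concrete, here is a minimal sketch of how a developer might apply LIME to a black-box tabular classifier in Python. It assumes the open-source `lime` and `scikit-learn` packages; the dataset and random-forest model are illustrative placeholders, not anything prescribed by the AIBoR.

```python
# A minimal sketch: explaining a single prediction of a black-box
# model with LIME. The dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Train an opaque "black-box" classifier.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Build a local, model-agnostic explainer around the training distribution.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction: which features pushed the score up or down?
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output lists the handful of feature conditions that most influenced this one prediction -- the kind of per-decision explanation the AIBoR's transparency principle points toward.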

AI developers will have to thoroughly test and validate AI systems to ensure that they function properly and make accurate predictions, Haidar says. “Additionally, they will need to employ bias detection and mitigation techniques to help identify and reduce potential biases in the AI system.”
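
As one illustration of what bias detection can look like in practice, the sketch below computes a simple demographic parity check -- the gap in a model's positive-decision rates across groups -- in plain Python. The synthetic predictions, group labels, and 0.1 tolerance are assumptions made for illustration; real-world thresholds are policy choices, not something the AIBoR specifies.

```python
# A minimal sketch of one common bias check: demographic parity
# difference, i.e., the gap in positive-decision rates between groups.
# The synthetic data and the 0.1 tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)
y_pred = rng.integers(0, 2, size=1000)      # model's binary decisions
groups = rng.choice(["A", "B"], size=1000)  # protected attribute

# Positive-decision rate per group, then the worst-case gap between groups.
rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

print(f"Selection rates by group: {rates}")
print(f"Demographic parity difference: {gap:.3f}")
if gap > 0.1:  # illustrative tolerance, not a regulatory threshold
    print("Potential disparate impact -- investigate before deployment.")
```

Dedicated open-source toolkits such as Fairlearn and IBM's AIF360 package similar checks, along with mitigation algorithms, behind ready-made APIs.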

Adoption Obstacles

The AIBoR provides a framework that enterprises can use to shape their decision making around AI deployment, yet it isn't as comprehensive as the EU’s proposed AI Act, Bianzino says. Therefore, many large multinational companies operating in both the US and EU may choose to voluntarily operate under the highest global standard. “So while [AIBoR] is a welcome step toward strengthened oversight of AI applications, the EU AI Act, expected in 2024, may turn out to be the more influential legislation for large global enterprises,” he says.

Moving Forward

Despite its flaws, the AIBoR marks an important step forward, Krishnan says. “It provides a concrete statement of what AI rights are and what they could/should be.”

Krishnan observes that various federal agencies, such as the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau (CFPB), can already put some of the AIBoR's rights into place through rulemaking. “For example, the CFPB is requiring simple explanations to users when negative decisions about extending credit are made, whether the decision was made by an AI or not,” he notes. “We will be in a state of transition in the next couple of years.”

Bianzino, however, cautions that everyday users will only benefit to the extent that data scientists and enterprises are willing to implement the principles outlined in the AIBoR. “Without binding and comprehensive regulations, alongside robust monitoring and enforcement, everyday users are unlikely to see any tangible benefit in the short term,” he explains. “As regulatory agencies implement the principles and enforce against infractions, and additional complementary legislation is adopted, the benefits to the public in terms of safety and accountability will rise over time.”

The AIBoR is concise, protecting data privacy and guarding against potential algorithmic harm, Krishnan says. “It's an important and essential step and will spur work on policy and technology,” he concludes.

What to Read Next:

IBM’s Krishnan Talks Finding the Right Balance for AI Governance

Why the US Risks Falling Behind in AI Leadership

5 Ways to Embrace Next-Generation AI

About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.

