White House Blueprint Is the Starting Point for Building Responsible AI
COMMENTARY | The report brings new urgency to ongoing agency efforts.
Late last year, the White House Office of Science and Technology Policy, or OSTP, released the Blueprint for an AI Bill of Rights, instantly elevating responsible AI to the top of leadership agendas across executive branch agencies. The blueprint's themes are not entirely new; it builds on prior work including the AI in Government Act of 2020, a December 2020 executive order on trustworthy AI, and the Federal Privacy Council's Fair Information Practice Principles. Even so, the report brings new urgency to ongoing agency efforts to leverage data in ways consistent with our democratic ideals.
With a stated goal of supporting “the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems,” the blueprint is rooted in five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint also includes notes on applying the principles and a technical companion to support operationalization.
Some agencies that are less mature in their data capabilities might consider the blueprint to be of limited relevance. This is simply not the case. The OSTP has broadly defined automated systems to include even basic statistical analysis, specifically: “Any system, software or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” All federal agencies, regardless of their data and analytic maturity, are likely using systems that are subject to the blueprint.
A handful of executive agencies, including the Department of Energy and the Department of Defense, have already defined ethical principles for the development and deployment of AI systems. In both the public and private sectors, however, the gap between declaring principles and embedding them across the product lifecycle is massive. The recommendations detailed in the blueprint, such as disparity assessments, representative and robust data, privacy by design and reporting, are necessary for establishing responsible AI, or RAI, but they are far from sufficient. Agencies need a comprehensive approach to overcome the common barriers to RAI and implement the recommendations in the blueprint.
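To make one of those recommendations concrete, consider disparity assessment. The minimal sketch below compares selection rates across groups in an automated decision system; the data and the rule-of-thumb threshold are illustrative assumptions, as the blueprint does not prescribe a specific metric or implementation.

    # A minimal, illustrative disparity assessment: compare the rate of
    # favorable outcomes across groups. All data here are hypothetical.
    from collections import defaultdict

    # Hypothetical logged decisions from an automated system: (group, approved).
    records = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)      # decisions per group
    favorable = defaultdict(int)   # favorable decisions per group
    for group, approved in records:
        totals[group] += 1
        favorable[group] += int(approved)

    # Selection rate: share of favorable outcomes within each group.
    rates = {g: favorable[g] / totals[g] for g in totals}

    # Disparate impact ratio: lowest rate over highest. The "four-fifths rule"
    # from U.S. employment guidance flags ratios below 0.8 for closer review;
    # the blueprint itself does not mandate this or any particular threshold.
    ratio = min(rates.values()) / max(rates.values())
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")

In practice, an agency would run such checks on real outcome logs and pair them with the blueprint's other safeguards, such as notice and human fallback.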
Earlier this year, Boston Consulting Group and MIT Sloan Management Review surveyed more than 1,000 executives worldwide to understand the breadth and scope of RAI adoption. The results show important commonalities across the public and private sectors: roughly half of organizations see AI as a strategic priority, about half consider RAI a top item on the management agenda, and similar proportions have responsible AI efforts underway.
At a high level, then, public and private sector organizations appear to be approaching RAI in similar ways. But the perception of RAI as a check-the-box exercise is 30% more common in public sector agencies than in private firms. Furthermore, 61% of public sector managers cite lack of funding and resources for RAI initiatives as a significant impediment, more than 50% higher than the comparable private sector figure. Insufficient staff training and knowledge, lack of awareness of RAI, shortages of RAI expertise and talent, and a lack of prioritization and attention from senior leaders are also cited more frequently in the public sector.
Successfully catalyzing a broader cultural transformation requires that agencies address the gaps highlighted in the survey. Based on lessons from public and private sector organizations, there are a few key actions agencies can take. First, the leaders of government agencies must champion responsible AI and ensure sufficient resources, in terms of both budget and talent. This is consistent with the National Security Commission on Artificial Intelligence's recommendation: “Senior-level responsible AI leads should be appointed across the government to improve executive leadership and policy oversight.”
Second, given the cultural shift that RAI requires, executive agencies must invest in training staff, communicating new policies and expectations, and articulating the ultimate objectives of RAI, so that staff are enabled and empowered to apply AI principles.
Finally, it should be clear that principles are only the first step. Just as important is putting in place the governance, processes and tools that make them tangible and actionable for the teams procuring, building, deploying and leveraging AI. Leaders must develop and drive the execution of a comprehensive strategy and implementation plan.
The OSTP has done a significant service by laying out a blueprint for the responsible development and deployment of AI. The challenge of translating that blueprint into a reliable, trusted and effective operating structure now falls to the executive leaders across government. How we collectively respond to that challenge will determine the scale and nature of AI’s impact on American society for decades to come.
Steve Mills is a managing director and partner at Boston Consulting Group, where Sean Singer is a principal.