
Balancing Innovation and Oversight in the Technological Era: The AI Governance Conundrum

Nick Reese
January 29, 2024

Oyez, oyez, oyez! The AI Governance Council is now in session! All those concerned with the matters before the Council be seated.

Esteemed members of the Council, distinguished innovators, and those gathered in these hallowed halls of technology governance, we are convened this day under the solemn duty and the weighty charge bestowed upon us by the governance policies of our noble organization.

Let it be known that the Council, in its unwavering pursuit of technological governance, trust, and order, has before it today a case of grave import. We are to deliberate upon matters most serious, where the scales of governance shall be weighted with due care and diligence.

To the members of the Council, yours is a solemn challenge: you are the arbiters of innovation, the keepers of automation. Your duty is to weigh the evidence presented with neither prejudice nor partiality. Let not station nor stature of tech companies nor the whispers of the social media gallery sway your judgment, for in your hands lies the fate of the AI system who stands before us presumed mission-enhancing until proven otherwise.

Let all who are present here today remember: we are bound by the solemn duty to uphold public trust in AI and enhance the mission capabilities of our organization, and to dispense governance with an even hand.

With these words, I declare this Council open. Let us proceed with the gravity and the respect that this solemn occasion demands. The innovators may now present their case.


Is this what you think of when you think of AI governance? A hallowed hall of stern-looking, powdered-wig-wearing judges issuing a harsh ruling to an AI being? It’s what I think of, and I actually worked on AI governance for the US government back before it was in vogue. I’m sorry to report that I own not a single black robe and only three powdered wigs…just kidding, I don’t own any powdered wigs either. But AI governance is a very popular topic that has been rattling around policy-making and technology circles for at least four years, and broader technology governance for much longer. This is a difficult problem because of the ever-bedeviling reality with which policy makers must contend: two things can be true at the same time.

With the release of the Biden Artificial Intelligence Executive Order, tech circles are abuzz with talk of incoming regulation on AI. That may indeed be in our future, but it is not the same as AI governance. AI governance is about how an organization oversees its own use of AI products that are not regulated. Governance is a matter of policy, not of law or rulemaking, and it will not be uniform across organizations: it is highly specific to the organization implementing it and to the industry or sector that organization is in. Creating an effective AI governance structure that prioritizes innovation and presumes innocence is critical for any organization attempting to use AI broadly. Fundamentally, it comes down to a question of “can” versus “should.”
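To make the distinction concrete, here is a minimal sketch, in Python, of a review that asks both questions. Everything in it is illustrative: the field names, the threshold, and the example use case are my own assumptions, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """A proposed application of an AI system within an organization."""
    name: str
    meets_security_requirements: bool   # the "can" question: compliance
    mission_benefit: float              # estimated benefit of using AI here (0-1)
    misuse_risk: float                  # estimated risk if the system misbehaves (0-1)

def governance_review(case: AIUseCase) -> str:
    # "CAN we use it?" -- a compliance gate: does the system meet security requirements?
    if not case.meets_security_requirements:
        return f"{case.name}: rejected -- fails security requirements"
    # "SHOULD we use it?" -- a judgment call weighing benefit against misuse risk.
    # A simple greater-than comparison is an illustrative placeholder, not a real standard.
    if case.mission_benefit > case.misuse_risk:
        return f"{case.name}: approved -- benefit outweighs risk"
    return f"{case.name}: deferred -- revisit the use case, not just the system"

print(governance_review(AIUseCase("resume screening", True, 0.4, 0.7)))
```

The point of the structure is that a system can clear the compliance gate and still fail the second question, which no amount of security paperwork answers.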

In the US government, there is a law that will make many reading this roll their eyes. It’s the Federal Information Security Modernization Act, or FISMA. FISMA originally required federal agencies to take a “risk-based, cost-effective” approach to cybersecurity and required agencies to develop policies to ensure security in information systems. Today, that means that if a federal agency wants to use a particular piece of software or hardware, it must comply with a set of standards. This is governance that asks the question, “CAN we use this system?” Said another way, if the system in question complies with a set of security requirements, FISMA says you are allowed to use it. What FISMA does not ask is whether you SHOULD use it. In the world of information technology products, the CAN question is often good enough. Few people ask whether we SHOULD be using phones or computers to conduct business. However, AI is entirely different from the standard IT governed by FISMA, and it requires a different approach.

When I was a policy maker in the federal government, I recall proposals to slot AI systems under FISMA as the governance structure. But AI governance is not ONLY about whether an organization CAN use AI under its security policies; it is also about whether it SHOULD use AI for the application being proposed. Further, AI governance should address the conditions under which an AI system can be used, based on both the risk of using the system and the risk of not using it. For example, an AI governance council might impose a requirement of quarterly audits of the system’s output or enhanced security measures around its training data. Critically, governance of AI must be based on risk: not ONLY the risk of using AI, but also the risk of not using it.
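One way to picture conditional approval is a review that returns not a yes/no but a list of obligations scaled to both sides of the risk ledger. The thresholds and condition names in the sketch below are illustrative assumptions, not drawn from any official framework.

```python
def governance_conditions(risk_of_use: float, risk_of_not_using: float) -> list[str]:
    """Return illustrative oversight conditions, given two risk scores in [0, 1]."""
    conditions = []
    if risk_of_use > 0.3:
        conditions.append("quarterly audit of system outputs")
    if risk_of_use > 0.6:
        conditions.append("enhanced security controls on training data")
    # A high risk of NOT using the system argues for faster adoption
    # under review, not for piling on more conditions.
    if risk_of_not_using > risk_of_use:
        conditions.append("expedited deployment with a 90-day review")
    return conditions or ["standard monitoring"]

# A risky-to-use system gets heavier oversight; a risky-to-skip system gets speed.
print(governance_conditions(risk_of_use=0.7, risk_of_not_using=0.4))
print(governance_conditions(risk_of_use=0.2, risk_of_not_using=0.8))
```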

Many of the current conversations around the use of AI, particularly in the federal government, are about the risk of using AI: risks to reputation, public trust, security, and more. What’s missing from these conversations is what the risk to reputation, public trust, security, and those same interests might be if an organization decides NOT to innovate, NOT to automate, and incurs a significant incident because of human error that could have been prevented by a more detail-oriented and less error-prone AI. Would you rather have human eyes monitoring volumes of network traffic for anomalies, or an AI?
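As a toy illustration of why volume favors automation, here is a deliberately crude statistical monitor in Python. The traffic numbers are synthetic and the z-score threshold is an arbitrary assumption; this is a sketch of the idea, not a production detector.

```python
import random
import statistics

# Synthetic per-minute traffic volumes: ~10,000 normal readings plus one spike.
random.seed(42)
traffic = [random.gauss(1000, 50) for _ in range(10_000)]
traffic[7321] = 4000  # the anomaly a tired human analyst might scroll past

mean = statistics.mean(traffic)
stdev = statistics.stdev(traffic)

# Flag anything more than 4 standard deviations from the mean.
anomalies = [(i, round(v)) for i, v in enumerate(traffic) if abs(v - mean) / stdev > 4]
print(f"Scanned {len(traffic):,} samples, flagged {len(anomalies)}: {anomalies}")
```

No human team reliably spots reading number 7,321 out of ten thousand; the machine never gets tired.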

AI governance is not synonymous with regulation, but it is also not synonymous with the types of technology governance to which we’ve all grown accustomed. It requires an entirely different approach, one that asks a risk-based question of “can” or “should” from the perspective of both using AI and NOT using it. This is a nuanced approach that does not fit well into existing structures, which makes its adoption a challenge. But just like most things in life, a single structure to govern every situation is rarely effective, because more than one thing can be true at the same time. It can be true that an organization must put a governance structure in place to ensure its AI is not misused, while it is also true that if it did not use that AI system, it would see a degradation to its mission relative to its competitors. AI governance is something new, and it will not be the same for every organization, because what AI means to different entities will vary widely. Each organization will have to find its own way, but there are frameworks that can help create a structure that mitigates misuse and allows innovation to enhance mission capabilities.

A panel of powdered wigs may not be how you choose to do AI governance (or maybe it is), but governance should be dispensed with an even hand. Ours is a solemn charge, for we are truly the arbiters of innovation and the keepers of automation. AI advancements shall continue to charge forward, and the meek shall not inherit competitive advantage. Those who can govern effectively will be able to adopt quickly and effectively, while those still struggling to figure out whether they CAN use AI will watch their bottom lines dwindle.