Artificial Intelligence Between Risk and Benefit: Risk Management Is Key

An official at the National Institute of Standards and Technology (NIST) said that adopting artificial intelligence (AI) requires accepting a certain level of risk to achieve the desired results, noting that these risks cannot be avoided entirely but must be managed effectively.

Martin Stanley, an AI and cybersecurity researcher at the U.S. Department of Commerce, told FedInsider's "Smart Government" panel this week that federal agencies are typically overly cautious about risk, but that this posture cannot carry over wholesale to AI.

Stanley said: "Risks must be managed first and foremost, because the benefits of AI are compelling enough to be worth pursuing."

He pointed out that the Institute's AI Risk Management Framework overlaps in many respects with the Federal Reserve's guidance on the use of algorithmic models in the financial sector. He stressed that NIST drew on those experiences to simplify concepts and use clear language in addressing risks, in terms of both their probability of occurrence and their positive or negative impacts.

His remarks came at a time when many U.S. government agencies are working to implement new Office of Management and Budget (OMB) guidance issued under the Trump administration, which retains most features of the previous approach adopted under President Biden.

Under the new memo (M-25-21), federal agencies were required to publish their plans for managing the use of AI by the end of September, including identifying uses that have a high impact on public rights and safety and that require special risk management procedures.

Stanley emphasized the importance of striking a balance in AI governance, saying: "Governance should not be so heavy-handed or complex that it delays innovation and burdens bureaucracy."

He also praised the government's approach to risk management, saying, "The U.S. has successfully identified high-impact uses of AI that require greater oversight and additional safeguards to avoid potential risks."

Some areas of research or experimentation may not require the same level of caution, he said, but when it comes to high-impact applications, "it is necessary to look closely at the potential consequences."
