New AI guidelines signal change is coming
By Josh Davies, Principal Technical Manager, Fortra
You may be stringent about your own AI adoption, but it’s hard to tell whether your partners and suppliers are exercising the same caution. New AI security guidelines indicate that they may soon have to.
What are the new AI security guidelines?
This past November, the UK published the first global guidelines for the safe development of AI systems, with the endorsement of agencies from 18 countries, including the United States.
The UK’s National Cyber Security Centre (NCSC), a part of GCHQ, and the U.S.’s Cybersecurity and Infrastructure Security Agency (CISA) cooperated with industry experts and 21 other international agencies to create the Guidelines for Secure AI System Development, which are broken down into four main areas:
- Secure Design | Direction for each step of the AI system development lifecycle. This section deals with risks, threat modeling, and the inevitable trade-offs.
- Secure Development | Counsel for the development stage specifically; this area covers supply chain security, documentation, and the management of assets and technical debt.
- Secure Deployment | Guidance for deployment, including how to protect AI models against threats, loss, and compromise, how to develop effective incident management policies, and how to release AI models responsibly and safely into the market.
- Secure Operation and Maintenance | Principles for protecting an AI system once it has been deployed. This covers information sharing, logging and monitoring, and update management.
The guidelines also recommend the use of red teaming, delineating that “you release models, applications or systems only after subjecting them to appropriate and effective security evaluation such as benchmarking and red teaming…and you are clear to your users about known limitations.”
Red teaming future-proofs these guidelines (and other regulations), as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace at which governments can legislate.
Looking to the future is exactly what these guidelines aim to do, especially as they have no legal bite for now.
Why now?
The AI arms race and rapid adoption of generative AI and open AI systems have created concerns in the cyber security sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third-party users. These guidelines look to secure the design, development, and deployment of AI, which will help reduce the likelihood of this type of attack.
These guidelines are a step in the right direction. They pull together key AI stakeholders from nation-states and industry and call for collaboration on, and consideration of, the security of AI. Hopefully this remains a continued theme, as we’ve seen with the United States’ AI executive order, and AI systems are developed responsibly without unnecessarily stifling innovation and adoption.
As systems and nation-states become increasingly interdependent, global buy-in is crucial. We have already seen how important collective security is; without it, threats are allowed to grow, become more sophisticated, and strike global targets. Ransomware criminal families are a prime example. These guidelines level the playing field by homogenizing guidance across nation-states and limiting a race to the bottom in AI technology.
Do these guidelines have teeth?
Will we see adoption? Or do the guidelines just serve to reassure the public that AI issues are being considered? What is the consequence of not following the guidance? I would hope to see soft enforcement, with organisations that cannot show adherence to the guidance excluded from government or B2B collaborations, and I would hope to see that enforcement coming from both sides of the pond and from the other international agencies that backed the guidelines in the first place. This would leave ambitious but unsafe AI developers with few places where they could hide and still make a global market impact.
Without any punitive measures, a cynic would say organisations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy-in on reporting flaws and issues, removing the ‘black box’ nature of AI that some executives have hidden behind. This would open these leaders up to the court of public opinion if there is evidence that they were aware of a flaw, did not take appropriate action, and that this inaction resulted in a subsequent compromise and/or data breach.
Where do we go from here?
What does this mean for AI systems, developers, and the entities that use them? Nothing legally (yet), but it’s a definite shot across the bow that legislation like this is coming soon. For now, these guidelines state in clear terms the international concerns about the rapid rush to integrate AI, and what the minimum safety precautions should be as we venture into the still relatively unknown.
My personal opinion is that the real value of such collaboration will only become clear when we do see a large-scale AI compromise. But this is a good start.