On Episode 38 of The Cyber Security Matters Podcast, we discussed changes to AI governance with Patrick Sullivan, the VP of Strategy and Innovation at A-LIGN. He shared his insights on changing legislation and what that means for organisations that use AI as part of their workflow, as well as his definition of ‘AI governance’. Here’s what he said:
What does the term ‘AI governance’ actually mean?
ISACA, through COBIT, has introduced control objectives for AI and defines governance as a value-creation process. When we think about governance, we think about value creation: COBIT says that governance is creating desired outcomes at an optimised risk and cost. So we need to ask, ‘What do we want to create? What risk are we willing to bear? And what budget do we have to support all of this?’ Governance practices are the processes we employ to ensure that we’re creating the outcomes we want as an organisation in both a risk-appropriate and resource-appropriate way.
What frameworks or guidelines can organisations adopt to ensure AI systems are used responsibly and ethically, and does this vary based on the size of the organisation?
Generally, we won’t see the applicable frameworks vary based on organisational size. In the market today, there are two frameworks that most organisations are using to build AI governance systems capable of adhering to any number of regulations. For neuco as an example, we saw the EU AI Act published in the Official Journal of the European Union last week. These regulations are pressing, which means many organisations that are bound by the AI Act now need to take significant action to prepare themselves.
How do those frameworks and guidelines actually enhance trust within the supply chain?
ISO 42001 is a certifiable management system standard. Organisations that implement ISO 42001 as their AI management system can have a third-party certification body, of which A-LIGN is one, independently validate that appropriate processes are in place, that appropriate procedures and commitments have been made, and that the management system is running effectively to meet the intent of the standard. So there’s a certification mechanism that organisations can use to offer assurance to others in their supply chain and their value chain.
Many in the security space are already very familiar with security questionnaires. We’re currently seeing a lot of pressure on organisations to answer AI questions because the market is educating itself about what’s important, which in turn drives the need to respond to those questions, whether they come from or go to suppliers. While regulation will always be a pressing concern, self-policing in the market is where I see us going with responsible AI use.
How do you expect AI governance and compliance to change in the coming years?
Over the next five years, I think we’ll see the skills gap become more pronounced. I don’t know that there’s necessarily the awareness that there needs to be. We’re seeing groups come online, such as the International Association of Algorithmic Auditors, which helps new algorithmic or AI auditors understand what skills they need to be successful, and I think we’ll see more organisations like that emerge as the AI governance and AI assessment skills gap becomes more widely recognised. As that happens, the market will largely start self-policing and we’ll enter the hype cycle. Once that begins to simmer down, AI governance will become an operational process just like any other, whether risk governance or vulnerability management.
To hear more from Patrick, tune into Episode 38 of The Cyber Security Matters Podcast here.
We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.