How to Mitigate Insider Threats & Other Cyber Security Risks 

As we rely more and more on technology, our risk of cyber attacks and information leaks is also increasing. On a joint episode of The Cyber Security Matters Podcast, we spoke with Jake Bernardes, the Field CISO at anecdotes, and Ido Shlomo, the Co-founder & CTO of Token Security, about their advice for people and companies looking to secure their cyber assets. Read on for their insights on how to reduce your cyber security risks, including insider threats.

Jake: “Insider threats are divided into two categories: intent and incompetence. But insider threats are real. If I look at most attacks and incidents that I’ve worked on in my time, 90% of the insider threats have been in the incompetence category. People accidentally hard-code credentials into IDPs (identity providers), leaving the credentials for the entire customer database on a public-facing URL. But there are different ways to catch them.
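
To make that concrete, here is a rough sketch of the kind of check a simple secret scanner performs to catch hard-coded credentials before they ship. The patterns below are illustrative only; purpose-built tools use far more extensive rule sets:

```python
import re
from pathlib import Path

# Illustrative patterns only; dedicated scanners ship far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(
        r"(?i)(password|passwd|secret)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_tree(root: str):
    """Walk a source tree and report lines that look like embedded credentials."""
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    yield str(path), lineno, rule

if __name__ == "__main__":
    for file, lineno, rule in scan_tree("."):
        print(f"{file}:{lineno}: possible {rule}")
```

Run as a pre-commit hook or in CI, a check like this turns the “incompetence” category of insider threat into a routine, catchable mistake.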

There is also the compliance piece, which is where anecdotes comes in. We’re really good at identifying how people deviate from the norms and which control is best to use. We could connect a system to anecdotes and say, ‘This is what a normal VM looks like. This is what it has to look like to comply with PCI, SOC, or ISO.’ As soon as someone creates one which doesn’t comply with that regulation, our system will flag the noncompliance and show what was wrong. It gives you a chance to both logically correct it and then go and work with the person to educate them or uncover their intent. You have the visibility to fix it before it becomes an issue. That’s the key point of all compliance and regulation-based security: fixing things before you have a breach or before damage occurs.”
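
To give a feel for what norm-based compliance checking involves, here is a minimal hypothetical sketch. The baseline fields and values are invented for illustration and do not describe anecdotes’ actual product logic or any standard’s control text:

```python
# Hypothetical baseline describing what a compliant VM must look like.
BASELINE = {
    "disk_encrypted": True,
    "public_ip": False,
    "logging_enabled": True,
}

def check_vm(vm: dict) -> list[str]:
    """Compare one VM's configuration against the baseline and list deviations."""
    return [
        f"{field} is {vm.get(field)!r}, expected {expected!r}"
        for field, expected in BASELINE.items()
        if vm.get(field) != expected
    ]

# Example: a newly created VM that drifts from the norm gets flagged.
new_vm = {"disk_encrypted": True, "public_ip": True, "logging_enabled": False}
for finding in check_vm(new_vm):
    print("NONCOMPLIANT:", finding)
```

The value of this approach is exactly what Jake describes: the deviation is surfaced at creation time, when it can be corrected and used as a teaching moment, not after a breach.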

Ido: “Incompetence is a harsh word, but most of the time it’s just a lack of education or understanding. For example, when people are off-boarded from a company, the resources they created often aren’t kept track of. That’s an insider threat, but the insiders are still in the company, because the real problem is the people who don’t take care of those resources. You see a lot of those issues in the identity space. People are so passionate about technology that they make every mistake possible. They plug in their CFO’s Excel and allow it to query all of the organization’s data with zero limits on its permissions, and nobody keeps track of that. In the identity space, that’s crucial. We’ve just seen Ticketmaster, Santander Bank, and TNT suffering from those types of threats. Securing your own people is the hardest thing to do right now for security teams.”
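
As an illustration of the off-boarding problem Ido describes, a first-pass orphaned-resource check can be as simple as comparing resource owners against the current user directory. The data sources and names below are hypothetical:

```python
# Hypothetical inventories; in practice these would come from the IdP or HR
# system and from cloud or SaaS resource APIs.
active_users = {"alice@corp.com", "bob@corp.com"}

resources = [
    {"name": "prod-db-readonly-token", "owner": "carol@corp.com"},
    {"name": "finance-dashboard", "owner": "alice@corp.com"},
    {"name": "etl-service-account", "owner": "dave@corp.com"},
]

def find_orphaned(resources, active_users):
    """Flag resources whose owner has been off-boarded and is no longer accountable."""
    return [r for r in resources if r["owner"] not in active_users]

for r in find_orphaned(resources, active_users):
    print(f"Orphaned: {r['name']} (owner {r['owner']} has left the company)")
```

Even a crude report like this surfaces the tokens, service accounts, and dashboards that nobody is watching any more, which is where these incidents tend to start.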

Jake: “There are a few ways to handle insider threats, one of which is to slow down. We’re obsessed with being fast to market, so we almost encourage issues and errors. Look at the desire – and desperation – to get AI chatbots to market last year. That resulted in a flight and a car that were both bought for $1 because those tools had been improperly tested. That will have happened because someone was pressured, either internally by themselves or externally by their leadership, to deliver and develop quickly, so they either skipped steps or just didn’t do them thoroughly enough.

Another way to mitigate these threats is to understand what you’re doing. A lot of the time, people build stuff without really realising what they’re doing. It’s important to understand that a software development lifecycle goes from A to B, and none of the steps along the way should be skipped. Understanding what the end goal is means you can make sure you have those steps lined up in the process.

Finally, there’s visibility. Talking to clients about compliance and regulations always sounds boring, but it gives us a view of everything happening in security: identity issues, cloud security issues, onboarding issues, lack of training, policies not being signed, all of that stuff. Once you get a holistic view, you can educate the leadership and filter down the necessary information.”

Ido: “It is still very important to keep up the pace. You want to understand where you’re taking too big a risk, and you need to understand how to do things securely. Security teams should really invest more time in the auto-remediation of problems, not when you have an incident but long before that.”
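
To sketch what “auto-remediation before the incident” might look like in practice, here is a small illustration with hypothetical finding names and fixes, not a description of Token Security’s implementation:

```python
# Map known finding types to safe, automatic fixes. In a real system each
# fix would call the cloud provider's or IdP's API rather than mutate a dict.
REMEDIATIONS = {
    "public_ip": lambda vm: vm.update(public_ip=False),
    "logging_disabled": lambda vm: vm.update(logging_enabled=True),
}

def auto_remediate(vm: dict, findings: list[str]) -> list[str]:
    """Apply automatic fixes where one exists; return what still needs a human."""
    unresolved = []
    for finding in findings:
        fix = REMEDIATIONS.get(finding)
        if fix:
            fix(vm)
        else:
            unresolved.append(finding)
    return unresolved

vm = {"public_ip": True, "logging_enabled": False}
print(auto_remediate(vm, ["public_ip", "logging_disabled", "over_privileged_role"]))
print(vm)  # the two known issues are now fixed; the third is escalated
```

The design choice here is the point Ido makes: routine, well-understood deviations get fixed automatically and silently, so human attention is reserved for the findings that genuinely need judgment.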

To hear more about securing your cyber assets, tune into Episode 39 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.

AI Governance, Security and Compliance

On Episode 38 of The Cyber Security Matters Podcast, we discussed changes to AI governance with Patrick Sullivan, the VP of Strategy and Innovation at A-Lign. He shared his insights on changing legislation and what that means for organisations that use AI as part of their workflow, as well as his definition of ‘AI governance’. Here’s what he said:

What does the term ‘AI governance’ actually mean? 

ISACA, through COBIT, has introduced control objectives for AI and has defined governance as a value-creation process. When we think about governance, we think about value creation. COBIT says that governance is creating desired outcomes at an optimized risk and cost. So we need to ask, ‘What do we want to create? What risk are we willing to bear? And what budget do we have to support all these things?’ Our governance practices are the processes employed to ensure that we’re creating the outcomes we want as an organization in both a risk-appropriate and resource-appropriate way.

What frameworks or guidelines can organizations adopt to ensure AI systems are used responsibly and ethically, and does this vary based on the size of the organization?

Generally, we won’t see the applicable frameworks vary based on organizational size. In the market today, there are two frameworks that most organizations are using to build AI governance systems that adhere to any number of regulations. As an example, we saw that the EU AI Act was published in the Official Journal last week. These regulations are pressing, which means many organizations that are bound to the AI Act now need to take significant action to prepare themselves.

How do those frameworks and guidelines actually enhance trust within the supply chain?

ISO 42001 is a certifiable standard and management system. Organisations that implement ISO 42001 as their AI management system can have a third-party certification body, of which A-Lign is one, independently validate that appropriate processes are in place, that appropriate procedures and commitments have been made, and that the management system is running effectively to meet the intent of the standard. So there’s a certification mechanism that organisations can use to offer assurance to others in their supply chain and their value chain.

Many in the security space are already very familiar with security questionnaires. We’re currently seeing a lot of pressure on organisations to answer AI questions because the market is really educating itself about what’s important. That is driving the need to respond to those questions, and to put them to suppliers in turn. While regulation will always be a pressing concern, self-policing in the market is where I see us going with responsible AI use.

How do you expect AI governance and compliance to change in the coming years?

Over the next five years, I think we’ll see the skills gap become more pronounced. I don’t know that there’s the awareness there needs to be. We’re seeing groups come online like the International Association of Algorithmic Auditors, which helps new algorithmic auditors or AI auditors understand what skills they need to be successful, and I think we’ll see more organisations like that appear as the AI governance and AI assessment skills gap becomes more widely recognised. As that happens, the market will largely start self-policing, and we’ll enter the hype cycle. But once that begins to simmer down, AI governance will become more of an operational process, just like any other governance, risk, or vulnerability management process.

To hear more from Patrick, tune into Episode 38 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.