What Does the Future Hold for Cyber Security and Its Relationship with AI?

On Episode 26 of The Cyber Security Matters Podcast we were joined by Simon Hunt, the Chief Product Officer at Reveald. Simon is a prolific industry leader and inventor within cybersecurity and technology, specialising in protecting financial information. He also sits on a number of boards within the Cyber Security industry and volunteers with the American Red Cross. During the episode, Simon shared his insights into the relationship between Cyber Security and AI, which you can read here:

“I am super excited about the possibilities of generative AI. But let’s remember that generative AI is guessing what it thinks the most likely next word will be. It’s fascinating to me how much reasonable content it has created just by statistically predicting what word comes next. Ask ChatGPT to write a children’s story or love letters to your wife and it’s amazing.

But the eye opener for me was that the systems I built create very complicated output, and you have to have a huge amount of expertise to interpret what it generates. We do a lot of work to turn that into stories that people understand. We found that we could throw that raw data into a generative AI model and it would make a readable explanation. If I wanted to tell somebody what their problem is, it would do that perfectly for me. 

I realised I could do it in Japanese or Bahasa – I could tell it to write in any language – and it’s not translating the English output into Japanese, it’s translating the raw data into Japanese. The output is still a beautiful, understandable story. My challenge used to be taking raw data and making it simpler, because there was a huge natural language problem. Now it’s generative AI’s problem.
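As a rough sketch of the workflow Simon describes – handing raw findings straight to a generative model rather than translating finished English prose – here is a minimal example. The `build_explanation_prompt` helper, the field names and the sample findings are all invented for illustration, and the model call itself is deliberately left out:

```python
import json

def build_explanation_prompt(raw_findings: dict, language: str = "English") -> str:
    """Wrap raw security findings in an instruction for a generative model.

    Any LLM client could consume the returned prompt; the call itself is
    omitted here. Field names in the findings are purely illustrative.
    """
    return (
        f"Explain the following security findings as a short, plain-{language} "
        "story for a non-expert reader, avoiding jargon.\n\n"
        f"Raw data:\n{json.dumps(raw_findings, indent=2)}"
    )

# Hypothetical raw output of the kind an exposure-management tool emits
findings = {
    "host": "db-server-03",
    "open_ports": [22, 3306],
    "cve": "CVE-2023-1234",
    "exposure": "internet-facing",
}

prompt = build_explanation_prompt(findings, language="Japanese")
```

Because the model receives the structured data itself, asking for Japanese output is not a translation of an English report – which is the behaviour Simon observed.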

Now, of course, we have the problem of misinterpretation, but we have the opportunity to eliminate the requirement for super talented experts and make our process more scalable. That is intriguing to me. I’m not trying to automate everything; I’m saying that we should automate as much as possible and redirect human talent. 

For me, AI is not discovering new things, it’s making our discoveries consumable and actionable for a wider range of people. Who knows where it will go? But now we can take entry level people that are at the beginning of their cybersecurity awareness, and make them as powerful as the experts of today. If we can do that, then we can cut the legs off this problem. 

Fundamentally, it’s not intelligence. AI is not adding any unique insight. It’s shocking how little unique insight we need to write a two-page children’s story just by predicting the words that come next. However, we need to be careful with our expectations. You can’t ask it to cure cancer. If it came up with an answer, it would just have regurgitated something that a person has already tried.

There is a challenge. If you ask AI to compare two companies, it will generate an output that would take you hours to do by hand. As a timesaver it’s amazing, but schools are worrying because it’s becoming indistinguishable from natural language, so how do you tell it’s not plagiarism? It’s a tool that we should use to take complicated information and make it consumable by people who are not domain experts. I can solve that industry challenge with predictive text.”

To hear more from Simon, tune into Episode 26 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe.

Navigating the Fast-Paced Cyber Security Sector

On Episode 25 of The Cyber Security Matters Podcast we were joined by Jaye Tillson and John Spiegel, who are passionate cyber security evangelists and the co-hosts of the Zero Trust Forum podcast. Jaye has over 20 years of experience in the cyber security industry, across IT infrastructure and zero trust architecture, while John’s background in the industry includes overseeing major projects for global retailer Columbia Sportswear. Read on to find out their perspectives on why the cyber security industry is moving so quickly. 

John: “I talked about layering your security, which is also referred to in the industry as ‘defence in depth’. So why are people looking to move to this model? Security’s got to be simplified and streamlined. Visibility is hard when you have eight or nine point products chained together for remote access, or when your products don’t have APIs that integrate. Security is really hard when you just think about technology and don’t think about the business outcomes.

Primarily, what’s driving this change is simplified platforms which bring together technologies that were siloed. Companies are also looking to reduce their costs, not only from a vendor perspective but from an operational perspective. On top of that, both Jaye and I fell into security because of the way applications and the workforce are now distributed – you’ve got to have a different approach to security. Similarly, the way networking and security are transformed and delivered is changing.

For you to be a player in it from a vendor perspective, you have to have the full stack. You can’t just be a networking vendor and rely on another vendor for the security aspect anymore, you have to bring both together because that’s what provides visibility, simplicity and the platform effect, which is what customers are looking for. 

Another interesting piece is that David Holmes, an analyst at Forrester, did some research asking customers who had moved over to this SASE and SSE model whether they were still using the same vendors as previously. Is there any buyer’s remorse? Are they looking to go back or maintain that relationship? The answer in almost 85% of cases was ‘No, there’s no buyer’s remorse, we’re happy and we’re not looking to go backwards. This is a better approach.’ What does that mean for the industry? It means that the incumbent vendors out there are under threat. That’s why you will continue to see consolidation within the industry.”

Jaye: “I realised that having people on my network who were able to go everywhere and see everything or potentially hack everything was concerning. That’s how zero trust came about, which is built on the concept of only giving access to devices and applications that people need access to for their roles. You constantly check in, monitor and give visibility, and both SASE and SSE are based on that structure. 

Then you’ve got the consolidation element within the market. Recent statistics show that CISOs have over 100 security tools within their environment, which is impossible to manage: if you have a problem within the environment you won’t know which vendor to go to, where the gap is, what tool it is, or what you’re looking at. Consolidation is bringing more products under one banner and within one user interface, which simplifies your security. Cyber Security is a difficult place to work because you’re constantly under threat or being attacked, the legislation is constantly changing and it’s a very high-pressure environment. If you can consolidate and simplify, not only is it easier from a support perspective, it also gives a better user experience.

There’s talk that ransomware is dropping off, but that’s clearly not the case. We need to make everybody’s life simpler by reducing the attack surface and simplifying administration and the product experience for users. Zero trust is a huge thing in the USA, and the government is doing things about it which are flowing down into legislation across EMEA. Once people start to realise that their tools sit on top of that, there’s going to be a snowball effect.”

To hear more from Jaye and John about their work in the industry, tune into Episode 25 of The Cyber Security Matters Podcast here.


Facing Challenges in the Cyber Security Industry 

The Cyber Security industry faces challenges on a daily basis due to the nature of its work. However, its challenges aren’t just security threats. On Episode 24 of The Cyber Security Matters Podcast we were joined by Michele Chubirka, a Cloud Security Advocate at Google, to talk about the wider challenges in the industry. Michele has led a remarkable two-decade career in cyber security and has a background as a cloud native expert, giving her a wealth of insights into the space. Here’s what she shared with us: 

“Information security can be a struggle. There’s something called witnessing windows, or common shock, which is when we see the small violence and violation that happens in our day-to-day lives. Well, that’s information security to a tee. You have the big breaches and traumatic events – you’re reading about it now with the MOVEit hacks, ransomware, etc. – but every day you experience the vulnerabilities in your organisation. You report on them, saying ‘Hey, you have these vulnerabilities’, and they don’t get remediated. The solution technically seems very simple, but it’s really an adaptive challenge because it has a lot of dependencies and unpredictable human beings are involved.

A lot of security people experience burnout after a while, because you want to do the right things, but there’s a social issue where people don’t or won’t collaborate well enough to solve the problem. Cyber Security is a challenging field because people are drawn to doing technical things and being engineers, but then find out that they have to work with people, which is a very different skill set. When I started, teams were super small and you could solve a problem end to end yourself. That’s not the case anymore. Now you have huge teams of hundreds of people working on a single application. Now you have to worry about getting people to talk to each other. You have to resolve conflict. 

I wish somebody had taught me to improve my people skills as well as focussing on my technical skills in my professional development. The social science that I’m studying is restorative practices and restorative justice, which is about building human capital or social capital by finding ways to repair harm, restore relationships and build community. If our organisations and companies aren’t communities, we’re going to struggle to build a truly secure cyber environment. 

The problem is that people are really attached to this idea of security being like law enforcement or a military framework. We think of threats as attackers, and there’s a lot of accepted victim shaming. When something happens within an organisation and the bad guys leave, you’ve got to clean up and recover from the trauma of what happened. That’s when the blame shifts. People start asking ‘Who can we blame internally for this problem?’ Then you get some victim-perpetrator oscillation where there’s a blaming game. Then the victims are being held to account as perpetrators because they didn’t secure their systems or they didn’t do the things that you asked them to do. That’s not helpful. 

There are a lot of reasons why developers don’t always write secure code or update their dependencies. Sometimes the systems that security people put in place are not friendly or easily consumable. Developers may be under really tight timelines and they’ve got way too much on their plates, so how much is really their fault? There are often swirling, interpersonal, conflict-ridden situations that create anger and resentment, because security professionals are doing their best but they feel like they can’t make enough change. This is exactly what happens when you’re faced with these witnessing windows, where people are disempowered but aware of what’s happening. When you’re in that situation, you know what the problem is but you can’t change it, and the results are stress and eventual burnout.

That’s really the problem with information security right now. People are building great technologies and there are new techniques coming out every year, but the attacks only get worse, and the job seems to get harder. So what are we doing? I think the reason the situation is the way it is is that we’re having people problems – it’s not simply a technology problem.”

To learn more about the challenges facing the Cyber Security industry, tune into Episode 24 of The Cyber Security Matters Podcast here.


Addressing Human Behaviour in Cyber Security

In the Cyber Security industry, one of the biggest risk factors is human behaviour. On Episode 23 of The Cyber Security Matters Podcast we were joined by Ira Winkler, the Field CISO and VP at CYE. He shared his insights on the risks of human behaviour, as well as some great anecdotes from writing multiple books on cyber security. Read on to learn from his experience. 

How have you seen cyber risk progress over your career?

When I do speaking events, I always ask people ‘how many of you are security professionals?’ Most of the audience raises their hands and I go, ‘Okay, you’re all failures, because there is no such thing as security. The definition of security is being free from risk, and you’re never going to be free from risk. So technically, we’re all cyber risk managers.’ If we’re all risk managers, how are we mitigating those risks? I do what I call cyber risk optimisation, where we’re quantifying and mapping out the risks according to actual attack paths and vulnerabilities. That allows us to determine how we optimise risk by taking your potential assets, mapping them to vulnerabilities to get an actual cost, and then figuring out which are the best vulnerabilities to theoretically mitigate. 
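Ira’s description – map assets to vulnerabilities, attach a cost, then pick the best candidates to mitigate – can be sketched in a few lines. All figures, field names and the simple cost model below are invented for illustration; real attack-path analysis is far richer:

```python
# Toy illustration of "cyber risk optimisation": map assets to
# vulnerabilities, estimate an expected loss for each pairing, and rank
# which vulnerabilities are the best candidates to mitigate.

def expected_loss(asset_value: float, likelihood: float) -> float:
    """Expected loss for one asset/vulnerability pairing."""
    return asset_value * likelihood

# Entirely made-up figures for the sake of the example
vulnerabilities = [
    {"id": "VULN-1", "asset_value": 500_000, "likelihood": 0.30, "fix_cost": 20_000},
    {"id": "VULN-2", "asset_value": 100_000, "likelihood": 0.90, "fix_cost": 5_000},
    {"id": "VULN-3", "asset_value": 900_000, "likelihood": 0.05, "fix_cost": 50_000},
]

# Rank by expected loss avoided per unit of remediation spend.
ranked = sorted(
    vulnerabilities,
    key=lambda v: expected_loss(v["asset_value"], v["likelihood"]) / v["fix_cost"],
    reverse=True,
)
```

Ranking by loss avoided per unit of remediation spend is one simple way to optimise risk rather than merely enumerate it.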

Now, we’re at a point where machine learning is actually able to start doing things we were not able to do before. Everybody thinks machine learning is this really fancy thing, but it’s taking big data and putting it through mathematical calculations that were not available to us 10 years ago. Now we’re actually able to crunch data, look at trends, and come up with actual calculations of how to optimise risk. I’m finally able to take the concepts I wrote about in 1996-97 and implement them today. 

How do you balance user responsibility and the responsibility of the operating system? 

The solution I’m putting together is a human security engineering consortium, because here’s the problem: awareness is important. I wrote ‘Security Awareness for Dummies’ because awareness is a tactic. Data leak prevention can be important to stop major attacks, and anti-malware can be important to stop major attacks, so those are tactics too. The problem is that currently, when we look at the user problem, it’s being solved with individual tactics that are not coordinated through a strategy. We need a strategy that looks at it from start to finish and includes both the operating system and the user responsibilities.

You’ve got to stop and think, ‘what are my potential attack vectors? What capabilities does a user have?’ A user can only do things that you enable them to do, they only have access to data you allow them to have, they only have a computer that has the capabilities you provide them. You need to stop and think, ‘given that finite set of capabilities and data provided to a user, what is the strategy that looks at it from start to finish and best mitigates the overall risk?’ I’m not saying you can get rid of risk completely, but you need to create a strategy to mitigate as much risk as possible from start to finish, knowing the capabilities you provide to the user. 

One of my books is ‘Zen and the Art of Information Security’, which includes a concept of what makes an artist: it’s the person’s ability to look at a block of marble and see a figure in it. They can produce different pieces of art, but they’re all made the same way – there’s a repeatable process behind what they produce. In the same way, there’s a repeatable process for looking at human-related errors. You look at the potential attacks against users and ask, ‘What might users do, with good will, thinking they’re doing the right thing but accidentally causing harm?’ Most damage to computer systems is done by well-meaning users who inadvertently create harm.

You don’t go around and see people saying, ‘I’m getting in my car and crashing into another car’ – that’s why they’re called accidents. There is a science to how we design roads – literally the curvature of roads and the speed limits assigned to them – built on understanding what a driver does, what their capabilities are, and how you can mitigate that to reduce the risks. In cyber risk, you should be asking similar questions, like ‘How can I proactively analyse how the user gets into the position to potentially initiate a loss, and mitigate that proactively?’ Then you design the operating system to reduce the user’s inadvertent risks.

To learn more about human behaviour and risk in Cyber Security, tune into Episode 23 of The Cyber Security Matters Podcast here.


Inside Data Loss Prevention

In recent years there have been growing concerns around privacy and data loss. On Episode 22 of The Cyber Security Matters Podcast we spoke to Chris Denbigh-White, the Chief Security Officer at Next, about data loss and how it’s affecting the industry. Here are his thoughts: 

Data loss prevention has always been the ugly friend of cyber security. If you mention DLP to 9 out of 10 cyber professionals they’ll say, ‘this doesn’t work, but we’ve got to do it’. It’s effectively a tick-box exercise, but it’s a box that does nothing. It’s the old adage of a firewall that has allow rules going both ways. We have to do it though, because otherwise some of our users either complain massively, or are blocked from doing their job. That’s something that Next aims to address; we’re trying to provide DLP that makes sense. That means using machine learning to understand user behaviour. 

I like to understand people’s business processes and build guardrails around what they actually need for security. We’re here to ensure that people who do business and make money don’t lose all their data or have it stolen, as well as protecting them from getting massive GDPR fines. Security itself doesn’t make the business any money, but not having security can cost a business a lot. That means that we need to understand what is valuable to the business and find a way to protect it. 

That’s different from typical data loss prevention tools. We need to understand things like ‘how does this company deal with things like insider risk and insider threats?’ We’ll think outside the box, like ‘Why don’t we address risks through behavioural change and training people on better cyber practices, rather than relying on draconian controls?’ I strongly believe that what we’re doing increases business cadence and reduces friction by approaching DLP in that way. That’s something that I think AI and machine learning are going to help people understand better, because they’ll be used to understand the people around us better and therefore they’ll uncover internal and external threat actors more effectively. 

The way that we approach things is by helping companies understand what normal is, and helping them to address the question ‘Am I happy with what that normal is?’ Our solutions are built by asking things like, ‘Do I want people uploading things to this web application and not that web application?’ That’s a well-trodden path to data loss. Another common issue is the use of copy and paste. On one hand, I want users to be able to copy and paste because we’re advocates of strong and long passphrases and the use of password managers – all of which utilise copy and paste. But on the other hand, I don’t want people copying and pasting swathes of sensitive data from sensitive apps into a text file that’s then emailed off. 
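The copy-and-paste trade-off Chris describes – allow the short passphrase paste, catch the bulk or sensitive one – can be illustrated with a toy guardrail. The threshold, the single card-number pattern and the `paste_allowed` helper are our own invention, not a description of any real DLP product:

```python
import re

# One crude pattern: 13-16 digits with optional space/hyphen separators,
# roughly the shape of a payment card number.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")
MAX_BENIGN_PASTE = 256  # characters; an arbitrary illustrative threshold

def paste_allowed(text: str) -> bool:
    """Allow short pastes that don't look like card data; flag the rest."""
    if len(text) <= MAX_BENIGN_PASTE and not CARD_PATTERN.search(text):
        return True   # e.g. a passphrase from a password manager
    return False      # large or card-number-like paste: route to review
```

A real guardrail would weigh the source and destination applications too, which is exactly the behavioural context Chris says machine learning is used to build.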

We’ve moved away from just file-based data loss, because people lose data in more ways than you’d think. There are copy and pastes, web uploads, ChatGPT prompts… being able to understand and control your data in those ways is its own tool. There’s a business process where we help companies identify their normal and their risks, then we set up specialised guardrails in a super simple process. I think that’s the future of the space. Companies that develop tooling to support security that’s done with people are going to succeed moving forward, whereas increasing levels of draconian control and intrusion are going to come to an end. 

To learn more about protecting your data, tune into Episode 22 of The Cyber Security Matters Podcast here.


From National Security to Cyber Security With Mark Daniel Bowling 

The Cyber Security space is an exciting one to be part of. On The Cyber Security Matters Podcast we regularly ask our guests how they got into the industry, and on Episode 21 our guest had a fascinating answer. We were joined by the CISO of ExtraHop, Mark Daniel Bowling, who has over 20 years’ experience in Cyber Security, beginning as a special agent and cyber crimes investigator for the FBI. Since then he’s transitioned through several roles, most recently as the Chief Risk, Security, and Information Security Officer at ExtraHop. He shared the story of his unusual career path and his advice for other people who want to make a similar journey. 

How did you first get into the cybersecurity industry?

It was almost entirely a consequence of my service in the FBI. I spent six years in the United States Navy, where I was supposed to go into submarines, but I ended up on a carrier because we won the Cold War back in ‘91, so we just didn’t need as many subs. I did a little bit of time in the corporate world and didn’t love it, then I joined the FBI in 1995. That was right as cyber was becoming a thing. We didn’t even have a cyber division in the FBI back then, but we had a cyber investigation section coming out of the white collar branch. We created what was known as NIPC, or the National Infrastructure Protection Center, then eventually when Mueller came in, in 1999 or 2000, he created the cyber division. I grew up in the FBI and cyber at the same time, because I was an Electrical Engineering and Computer Engineering technologist, so it was the right place for me to go. 

I made a great career in cyber in the FBI. When I retired from the FBI I went to another agency, the Department of Education, making a transition from a very serious law enforcement and intelligence community agency to one that was more public facing. After that I retired from federal service and went into the private sector as a full-time employee, but then I started to move into the consultant track, where I had multiple great partnerships with customers, and it was really good. I went back to full-time employee status when I came to ExtraHop a couple of years ago. So that’s the route that I took, but I would say my experience in the FBI was really what pushed me into cybersecurity.

Who or what has been the biggest influence in your career?

Because much of my career was in public service, the biggest influence has been the amazing public servants that I met in my career. My role model was a man in the United States Navy named Admiral Larsen. He was a four star Admiral, and I worked for him in the Pentagon. He was just an amazing man. Anybody who knew Admiral Larsen recognises what a great leader he was. 

In the FBI there were a couple of amazing public servants too. I would say David Thomas, who was one of the early assistant directors of the cyber division, was also a great man. He helped build the cyber programme within the FBI. He was one of the great men I knew in the FBI. 

And then at the Department of Education there was a man named Chuck Cox. He was in the Air Force Office of Special Investigations before he went over to the Office of the Inspector General. He has since passed away, but he was a tremendous man. Each of those individuals modelled public service in an amazing way for me.

How do you feel your background within the FBI has shaped your career working for a security vendor like ExtraHop?

I think it’s absolutely vital that anybody who works in security understands the nature of threat and risk. If all you do is think about technology, you’re missing the boat. The job of the business is to stay in business, make money, acquire and retain customers, sell more products, provide better services and increase not just your profit margin, but also your presence in whatever sector you’re in. They don’t want to have to worry about cyber security, so the cyber security folks have to understand the threats to the business for them. 

You have to be able to see things in terms of risk, and that’s what the FBI did for me. One of the things that Mueller did when he came into the FBI was create priorities, and we created those priorities based on the risks. After 9/11, the number one priority in the FBI was counterterrorism, number two was counterintelligence, and of course, number three was cyber because of the growth of cyber attacks at that time. So what I learned in the FBI was to see things in terms of risk, understand a threat, appreciate the capabilities of the threat actors, and then turn around and prioritise your resources appropriately to reduce the threat, either by remediation or mitigation. If you can create compensating controls around the threat, it reduces the actual risk. At the FBI I learned that you can accept some threats, others you just have to remove, and some you can create compensating controls around. 

What one piece of advice would you give to someone entering the industry?

I would tell them to: one, stay humble; two, listen; and three, be willing to do things that you’re not comfortable with so that you can learn from the experience. There are different reasons for learning. You should learn how to do something you’re not comfortable doing so that you appreciate the people who do it on a daily basis. You should learn to do something to understand the level of effort that it actually takes, so that when you ask people to do it as a leader, you know what they’re going to do for you and what they’re going to have to give up to get it done. 

To learn more about Mark Daniel’s experiences and insights, tune into Episode 21 of The Cyber Security Matters Podcast here. 


Exploring the Relationship Between APIs and Cyber Security

APIs are a growing part of the tech industry, and impact a number of areas like Cyber Security. On Episode 20 of The Cyber Security Matters Podcast we spoke with Jeremy Ventura, who is the Director, Security Strategy & Field CISO at ThreatX, about how the rise of APIs is affecting the Cyber Security space. Jeremy has over 10 years’ experience in the Cyber Security industry, beginning his professional career as a security analyst for the defence manufacturing business Radian before working his way up to his current position. He’s also the host of ThreatX’s eXploring Cybersecurity podcast, making him an experienced and informed member of the Cyber Security community. Read on for his insights on APIs. 

What should a regular person know about API security and how it affects the world around them?

We use APIs every single day, but most consumers, especially if you’re not technical, won’t realise it. Let’s think about ease of use. If I want to pay a bill I’ll do it with one of the three credit cards that I have. When I’m on an app, I’m just selecting whether I want to pay with Apple Pay or my Chase card or my Amex card, whatever it might be. Those payments are all API connections. Here’s another good one: when you call an Uber or a Lyft, they’re looking for the closest driver in your geolocation and the fastest route. Those are all API connections pulling that data down. Think about your phone – when you look at the weather today in your location, that uses API connections to pull together your geolocation and the weather from different weather providers. So even though APIs are all out there, they’re pretty much hidden by design. We use APIs on an everyday basis – probably hundreds of them on a normal day. 

Now, when it comes to API security, that’s where individuals need to be conscious. Just because it’s easy to use doesn’t mean it’s always secure. APIs in general are designed to connect multiple systems together and send business logic or business data. There’s nothing inherently insecure about that. However, the transactions that are sent in the background can sometimes contain sensitive company information, or what we call PII, personally identifiable information. That’s things like usernames, passwords, credit card numbers, social security numbers, whatever it might be. That’s why the API security space is so hot right now: APIs are designed to send potentially sensitive data to each other, and if that process or transfer is not secured properly, then we have big problems. Every individual – technical or not – needs to be aware of everything they’re putting out there on APIs. Your information is being sent to and from multiple different companies or products, which is a risk.
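Jeremy’s point – API payloads are ordinary structured data that can quietly carry PII – is easy to picture with a toy check. The payload shape, field names and the two regex patterns below are invented for illustration and would be far too crude for production use:

```python
import json
import re

# Two illustrative PII shapes: US social security numbers and emails.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def fields_with_pii(payload: dict) -> list:
    """Return the keys of an API payload whose values look like PII."""
    flagged = []
    for key, value in payload.items():
        text = json.dumps(value)  # flatten nested values to searchable text
        if SSN.search(text) or EMAIL.search(text):
            flagged.append(key)
    return flagged

# A made-up ride-hailing request body of the kind Jeremy alludes to
request_body = {
    "ride_id": "abc123",
    "pickup": {"lat": 40.7, "lon": -74.0},
    "rider_contact": "jane.doe@example.com",
}
```

Here only the contact field would be flagged; the geolocation rides along unremarked, even though it too may be sensitive in context.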

What is your take on the current state of the API space generally?

APIs are nothing new – they have been around for decades now. API security, though, is fairly new. That’s where we’re starting to see a lot of security vendors incorporating technology that can help them in the API security space, and a lot of big companies being completely transparent about where they stand. 

I think with that we’re going to see a lot of acquisitions happen pretty soon as well. That’s normal when you have hot, new emerging technologies that are solving real-world problems. Why wouldn’t I want to get my hands on that if I’m the largest security vendor? This is when the market can get a little confusing, where you have a lot of different vendors saying, ‘Hey, I do API security’, but they all do it differently. My recommendation is that when you’re evaluating vendors or evaluating the space, make sure you’re getting tools, products and services built with a defence-in-depth approach. No one security tool is ever going to be perfect, so it’s important to take a layered approach. 

How much does AI affect API security?

AI in general is definitely affecting security. One thing I’ll be clear about is that attackers and hackers alike have been using AI for a long time – it’s actually nothing new. What’s happening now is that the typical security vendor may be a little bit behind, and they’re starting to ask, ‘How can I incorporate AI into my security tools? Can I incorporate AI into my products?’ 

An incident response company just announced that they've included AI in their responses. They can create playbooks on the fly based upon the data that someone enters. Maybe I've experienced a phishing incident and I need to know who to contact; the AI model within that tool will spit out the exact task, or runbook, that you need to follow. If it's used correctly, especially in security tooling, AI can be extremely powerful for end users. 

Just like anything though, AI can also create a lot of false positives. We need to be very careful about relying on AI 100% and saying 'this is the be-all and end-all', because AI isn't right all the time. AI in general security, including API security, is definitely starting to have an effect on both the security vendor side and the end user side.

To learn more about how APIs are affecting the Cyber Security space, tune into The Cyber Security Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

Securing the Cloud in Cyber Security

Securing the Cloud is a major challenge across the Cyber Security industry. On Episode 19 of The Cyber Security Matters Podcast we spoke to Abhishek Singh, the Co-Founder and CEO of Araali Networks, about how Cyber Security professionals are navigating the growing challenges of keeping the Cloud secure. Abhishek has 25 years' experience in Cyber Security, including a period in which he led a team building a data-centre-scale platform to enable micro-segmentation and security in a virtual machine environment. This wealth of experience gives him some great insights into the current issues around securing the Cloud. 

Could you explain what zero trust is and what the biggest problems are with implementing it?

Zero trust has become a buzzword. People will tell you zero trust means 'trust nothing', but fundamentally it is a networking concept, and the model it reacts against is actually very simple. Imagine the traditional castle-and-moat setup: you have a castle with a moat around it called the perimeter. Everything inside the castle is trusted; everything outside the perimeter is untrusted. To come into the castle, you come through a firewall, and then you are trusted. So that is a networking model which relies on perimeter security and an open interior.

The problem with that approach is that your perimeter has to be perfect. If one bad guy comes in, you're in trouble. If one Trojan horse seeps in, you're in trouble. If you're building a zero trust environment, you have to build your controls from the inside out: even if your environment is not pristine, every resource has to defend itself. 

The Cloud is very zero trust friendly in that it denies inbound access by default: if you want to expose anything online, you have to explicitly open it up. However, egress is open, and that is the problem with zero trust – it's too hard to close down egress. If someone is already inside, going out is free, and that is what attackers abuse. So in spite of the Cloud being very different, very novel and very well thought through up front, egress is open. And that is the fundamental problem. 
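The asymmetry described above can be sketched in a few lines. This is a toy model, not any real cloud provider's API: the rule format and function names are invented for illustration.

```python
# Toy model of the cloud default: ingress is deny-by-default (nothing
# is reachable until a rule explicitly opens it), while egress is
# allow-all unless someone goes to the trouble of restricting it.

INGRESS_RULES = [{"port": 443, "source": "0.0.0.0/0"}]  # explicitly opened

def ingress_allowed(port):
    """Inbound traffic is denied unless a rule explicitly opens the port."""
    return any(rule["port"] == port for rule in INGRESS_RULES)

def egress_allowed(port, egress_rules=None):
    """Outbound traffic: with no egress rules configured, everything
    is allowed. This open-by-default egress is the gap attackers
    abuse once they are already inside."""
    if egress_rules is None:
        return True  # the default: going out is free
    return any(rule["port"] == port for rule in egress_rules)

print(ingress_allowed(443))  # True: explicitly opened
print(ingress_allowed(22))   # False: never opened
print(egress_allowed(4444))  # True: outbound is free by default
```

Locking down egress means supplying that second rule set everywhere, for every workload, which is exactly why it so rarely gets done.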

What do you see as the biggest challenges in securing the cloud itself?

The real question is, 'Is the Cloud more secure?' That is the biggest thing people need to understand, and there is no straight answer – depending on who you ask, you will get a different answer. Many people believe the Cloud is more secure because Amazon has done a lot of good work there, and other cloud providers have followed suit. But the real rub is that it's only as secure as you make it. Security is a shared responsibility, and Amazon is very clear about that. They are saying 'we have given you the tools to make it secure', but they have not done your work for you – Amazon has not secured your stuff. Coming from an on-prem background into the Cloud, where there are new paradigms, it's very hard to fulfil your share of that responsibility. If you have not done so, the Cloud is not more secure. 

The other challenge is attackers. On-prem Windows is fertile ground for attackers, and they have not yet exploited the Cloud in the same way. At some point, though, that will change. Things like the SolarWinds supply-chain attack used to be science fiction, right? The Cloud is like that – it's waiting to explode. It's not that it's more secure, it's just that attackers have not diverted their attention to it yet. They're still going after Windows workloads on-prem. The moment they come to the Cloud, there's a lot to be had.

Why do you think businesses like Wiz have had such success over the last few years?

So the reason Wiz has been successful is simplicity. Security has been very cumbersome over the years. Orca was the first company that came out and said, 'Give us access to your Cloud account, and without any agents we'll go and survey it and show you visibility'. The ease of use itself was very compelling. My problem with that approach is that showing me my Cloud posture doesn't make me any less vulnerable. I know I'm vulnerable; I did not need to see a picture to get that insight. What I need to know is: how do I not become exploitable? How do I remediate my vulnerabilities? That is still a hard problem, because the Cloud is hard. It's difficult, which is why it is vulnerable. Showing me visibility is not helping me become less vulnerable. The thing we should focus on is remediation, and that's the language of zero trust. The reason these tools became so popular is the ease of installation in a world where Cyber Security is hard to work with. Time to value is the unspoken factor. 

To learn more about securing the Cloud, listen to Episode 19 of The Cyber Security Matters Podcast here


Tackling Talent Challenges in the Cyber Security Sector

As recruiters, we’re often faced with a number of challenges when it comes to sourcing talent in the cyber security sector. On Episode 18 of The Cyber Security Matters Podcast we spoke to Jake Bernardes, the CTO for Whistic, about his perspectives on the topic. Here are his insights: 

The reality is that there never has been a skills shortage in cyber security. That is completely fake news. The problems actually sit between the hiring manager or hiring team and the candidate, and those issues are extensive. Let's start with the kind of person the hiring manager wants: do they know what the key skills are that that person needs to have? Secondly, people are very bad at writing job descriptions. The next problem is that once you've written the job description, it gets translated into a job ad. 

We all rely on recruitment in our business. Usually HR is filling in for a recruitment function, and they don't understand what I've told them they're hiring for. Do they know what I've actually asked for? Are they translating it into something which doesn't make any sense? Are they adding things because they are standard requests – 'must be college or university educated', 'must have this qualification' and so on – when I actually don't care as a hiring manager? The problem comes when that person in HR misinterprets my request and doesn't put the right spin on it when it goes out to market. 

There are then two more problems in that situation: firstly, the description doesn't make a lot of sense, and secondly, it isn't focussing on the right keywords. We often have issues with the salary as well, because this is a highly paid field. So we go out to recruiters who can't fulfil a role where the requirements don't make sense and the salary doesn't work. It's impossible to find someone who doesn't exist, so it creates the illusion of a talent shortage. 

The flip side is that I don't have a shortage of candidates. What I have is an inability to screen candidates properly, because everyone has realised there's money in cyber, so they've made their resumes cyber-orientated. If HR does the screening, they don't have the competence to know what is or isn't relevant. They often miss potential gems because the resumes are quite simple but have one really interesting line at the bottom; they just go and find an SRE or cybersecurity analyst instead. HR puts on a layer of nonsense that they think makes sense, including a salary banding which is completely unrealistic, then throws it to recruiters and hopes they can turn carbon into diamonds. 

Our industry is a weird one. There are so many people who are very good but who, on paper, shouldn't be – on paper they should never even have been in the interview. Standard education and experience doesn't allow me to spot the people who are going to excel, but people's passion projects do. And so I stand by my statement: there is no skills shortage here. There is a fundamental disconnect and a poor process between cybersecurity leaders and the candidates who are applying. Everything between those two dots is currently broken.

To learn more about the talent challenges in the Cyber Security sector, tune into The Cyber Security Matters Podcast here


Cyber Security and AI: Insights from David Stapleton

AI has been sweeping the internet in the months since the release of ChatGPT. As the world weighs the implications of these powerful new AI models, the cyber security industry is no exception. On Episode 17 of The Cyber Security Matters Podcast we spoke to David Stapleton, the CISO at CyberGRX, who we met at the RSA Conference. With over 20 years of experience in business administration, cyber security, privacy and risk management, David has a unique expertise that makes him the perfect person to share insights on the relationship between Cyber Security and AI. Read on to hear his thoughts! 

A lot of attention has been paid to AI – with good reason. I have a mental model that if my mother is aware of something in my field, it has really reached the public zeitgeist. When she asked me a question about the security of AI, I knew it wasn't a niche topic anymore. 

Artificial intelligence is an interesting phenomenon. Conceptually, it's not that different from any other rapid technological advancement we've had in the past. Any time these things have come up, the same conversations have started to happen. The advent of cloud sparked a real fear – particularly in the cybersecurity community – around the lack of control over those platforms. We had to trust other people to do the right thing. How do I present that risk to the board and get their approval? Maybe it's a good financial decision, but are we introducing unnecessary risks? 

Another example of that may have been the movement towards Bring Your Own Device (BYOD) and allowing people to connect their personal devices to company networks and data. That sounds terrifying from a security perspective, but you can see how that opens the door to increased productivity, efficiency and flexibility. 

AI is not too dissimilar from that perspective, and we can see plenty of positive aspects to the utilisation of artificial intelligence. It's a catalyst for productivity which can draw on multiple different data points and bring together salient insights in a way that is hard for the human mind to do at that kind of speed. It can also reduce costs, bring additional value to stakeholders and potentially help companies gain competitive advantages. 

Conversely, there are potential risks. It is such a new technology, and we’re still learning about how it works as we’re using it. There’s a lot of questions from a legal perspective about the ownership of the output of different AI technologies, particularly with the tools that produce audio visual outputs. The true implementation and impact of that isn’t going to be known until the courts have worked those details out for us. 

We’re in a position now where some companies have taken a look at AI and said, ‘We don’t know enough about this, but we feel the risk is too great, so we’re going to prohibit the utilisation of these tools.’ Other companies are taking the exact opposite approach: ‘We also don’t know a whole lot about this, but we’re going to pretend this problem doesn’t exist until things work themselves out.’ 

At CyberGRX we’re taking a middle of the road approach where we’re treating AI models as another third party vendor that we’re using for work purposes. We’re going to share access or data with that tool, but we need to analyse it from a security risk and legal risk perspective before we approve its utilisation. That’s a fairly long-winded way of saying that there are amazing opportunities for AI but there are risks. 

We've already seen threat actors starting to use artificial intelligence to beef up their capabilities. You can understand logically how artificial intelligence gives a fledgling or would-be threat actor the ability to get in the game and take action sooner than they otherwise could. When ChatGPT was first released to the public, the very first thing I put into it was 'Write a keylogger in Python' – a little piece of malware that logs your keystrokes and collects things like passwords or credentials. It just did it. There it was on the screen, a perfectly functional piece of software. Since then they've tightened the controls, but there was a time when someone with bad intent could start producing different types of malicious software without even learning to code.

To learn more about the uses of AI in Cyber Security, tune into The Cyber Security Matters Podcast here
