Creating Global Connectivity in IoT 

On Episode 24 of The Connectivity Matters Podcast, we were joined by Mart Kroodo, the Founder & CEO at 1oT, which provides global connectivity for the IoT industry, to talk about the company’s innovative solutions. From day one, Mart’s goal with 1oT has been to make eSIM technology more accessible to companies worldwide and to enable them to choose far more easily between different telecom providers. To do that, he’s had to drive more communication and collaboration in the industry. Read on for an inside look at his connectivity solutions. 

“The main thing we were thinking about is that there are 2,000 telecoms worldwide, so how could we create synergy between those different companies? Back in 2016, it was clear that eSIM would be the next big thing, because if telecoms could collaborate, then swapping from one telecom to another using the same technology would be a no-brainer, given the value for IoT companies. Right away, we thought there needed to be a neutral, independent telecom middleman who gathers telecoms onto one service, because different telecoms were not getting along, and it was very hard to find synergy there. 

From day one, we have been aggregating different telecom deals on one service. In essence, we are a reseller. We negotiate with telecoms, and we resell their megabytes and gigabytes to our customers. The value we provide is that we have many telecom deals we can offer to our customers, and once they start using different telecom profiles, they still need to manage the service somehow. 

We have built a connectivity management platform that enables them to control the SIMs – from switching them on and off, sending SMS and setting data limits, to getting notifications. Customers can set up alerts that let them know if a SIM consumes too much or too little data, or turns up in a blacklisted country. The system also learns from the behaviour of each SIM, and if it detects something unusual, it notifies the customer right away. 

We put a lot of effort into building the connectivity management platform from day one, which has been super important because it’s not just about getting connectivity up and running. If you are running thousands or hundreds of thousands of devices in different countries on a daily basis, then you need to understand what’s going on. If something happens, you need to debug it, and you want to go into the details. You want to see all of the sessions, minute by minute – which network the device was on, how much it consumed, what might be the issue, and so on. You want to do everything on a self-service basis through a platform or by using APIs. 

A few years ago, we finished developing our own eSIM infrastructure as well. Now we are among the 33 companies in the world who have built that infrastructure themselves. The main thing is that the good telecom deals, the connectivity management platform and the infrastructure are all owned and developed by us.”
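
As a rough illustration of the self-service control Mart describes – switching SIMs on and off, setting data limits and getting alerts on unusual usage or location – here is a minimal sketch of what a client for such a connectivity management API might look like in Python. The base URL, endpoints and field names are hypothetical placeholders, not 1oT’s actual API.

```python
"""Minimal sketch of a SIM connectivity-management client.

Hypothetical example only: the base URL, endpoints and field names are
placeholders for illustration, not 1oT's actual API.
"""
import requests

BASE_URL = "https://api.example-connectivity.com/v1"  # hypothetical endpoint


class SimClient:
    def __init__(self, api_key: str):
        self.session = requests.Session()
        self.session.headers["Authorization"] = f"Bearer {api_key}"

    def set_status(self, iccid: str, active: bool) -> None:
        """Switch a SIM on or off."""
        resp = self.session.post(f"{BASE_URL}/sims/{iccid}/status",
                                 json={"active": active})
        resp.raise_for_status()

    def set_data_limit(self, iccid: str, megabytes: int) -> None:
        """Set a monthly data cap for the SIM."""
        resp = self.session.post(f"{BASE_URL}/sims/{iccid}/limits",
                                 json={"data_mb": megabytes})
        resp.raise_for_status()

    def check_alerts(self, iccid: str, max_mb: int, blacklist: set[str]) -> list[str]:
        """Return alert messages if usage or location looks wrong."""
        usage = self.session.get(f"{BASE_URL}/sims/{iccid}/usage").json()
        alerts = []
        if usage["data_mb"] > max_mb:
            alerts.append(f"{iccid}: used {usage['data_mb']} MB, over the {max_mb} MB limit")
        if usage["country"] in blacklist:
            alerts.append(f"{iccid}: active in blacklisted country {usage['country']}")
        return alerts
```

In a real deployment, the alerting Mart mentions runs on the platform side and pushes notifications to the customer, rather than being polled by a client as in this simplified sketch.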

To hear more about Mart and 1oT’s solutions, tune into Episode 24 of The Connectivity Matters Podcast here 

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Using AI in Connectivity 

Generative AI has already become part of the workforce. To help us understand its role in the connectivity industry, we spoke with Bruno Santos, the Global Business Development Director at Celfocus, on Episode 23 of The Connectivity Matters Podcast. He explained the impact that AI has already had on the industry and shared his opinion on the opportunities it offers. Read on for the highlights of the conversation. 

AI has come into the public eye much more significantly recently. Do you see that as a positive or a negative?

I tend to see it as positive. If it’s a new technology and it’s disruptive, it will change our behaviour. I’m not sure if it will be equivalent to the Industrial Revolution in the early 19th century, but it is something that will change our lives forever. We are also seeing a lot of impact, not only with our clients but with our own internal processes too. Our teams are using AI to accelerate their performance and delivery, like generating new test data without waiting for the client to provide it, or having ChatGPT generate code. We then take that output to the next level by fine-tuning the requirements and wording in order to build and deliver the solution. 

AI has been changing a lot of our ways of working internally. We even have a mandate from our board to make more use of ChatGPT and prompt engineering in our daily tasks. If we can use it to build a presentation, we should do that as much as we can in order to become more efficient. We are also challenging some of the assets AI produces, but I think it’s changing our ways of working and will keep changing them. 

What impact has AI had on Celfocus so far?

We’ve been leveraging AI for advanced data analytics since 2018. When it comes to the clear benefits for our clients, it’s always very hard to demonstrate and explain what we do with the data. You can predict things based on historical data, so while convincing our clients can be difficult, adopting AI is the right move. We have an extensive set of AI use cases to share with our clients, and the insights they provide have given us a very good position in the industry. 

Over the last two years, especially with all the ChatGPT hype, it was very easy for us to change because we were already using AI. Now we’re extending our capabilities and solutions to use generative AI. Our team was already fluent in this kind of technology; now we’re just using a different flavour. We’re working closely with our clients to demonstrate the benefits of using AI. 

What would you say are the biggest opportunities AI presents for the industry?

There are a few of them, and we are already seeing efficiency and cost savings. We are currently assigning people to different tasks that are more beneficial for our clients. We are also rescaling our clients’ workforces based on our solutions. Going beyond that, AI is changing things around operations and automating them in order to be cost-effective. That’s where the key decision points for our clients lie, like “What is the business case? What is the financial benefit I can get from the solution?” Those are the key drivers for applying generative AI and machine learning use cases for our clients. There is a huge set of opportunities, but they all go in the direction of making processes more effective.

What are the biggest challenges around AI in connectivity?

The biggest challenge is confidence in the solution and the technology. The privacy and ethics of what we are delivering are key topics, because everything involves a huge set of data from everywhere – not only public information but sometimes critical and confidential information from our clients. There’s always the problem of trust. Data and regulation are among the key topics that we need to address. How trustworthy is the solution that we are bringing? Sometimes it’s not straightforward. 

What we have is a first step called human-assisted AI. We provide insights and a way to automate without human intervention, but in between there is always a human validating that information. They can apply the recommendation that the engine is providing and certify its validity. The end goal is solutions based on AI algorithms that work by themselves.

To learn more about AI’s role in connectivity, listen to Episode 23 of The Connectivity Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Exploring Software Supply Chain Security

The software supply chain is ever-evolving. On Episode 32 of The Cyber Security Matters Podcast, we were joined by Luis Rodríguez Berzosa, the Chief Technology Officer at Xygeni, to explore the topic. He’s a physicist and mathematician who brings significant experience to the field of software engineering security, focusing on static analysis and software supply chain security. Here are his thoughts. 

How have you seen software supply chain security change over the last 20 years?

20 years is a long time in the IT industry, and product security has improved a lot in that time. We’ve worked through application security, composition analysis and static analysis, then API security testing and web application firewalls – some of which nobody uses anymore. Cloud-native protection has been another hot topic in recent years, and there are now better mechanisms for patching or avoiding memory-related and other low-level security flaws. However, we are no better at securing the software production process itself. 

Unfortunately, in the software supply chain, fewer resources are assigned to protecting the infrastructure of the factory where software is built and deployed. Modern build infrastructures have a large exposed attack surface, so the bad guys, who are always motivated to gain the most with the least effort, have shifted their campaigns from the better-protected applications to public packages and even the internal build and deployment systems. They attack the weaker points, so when we protect one thing, the attackers will look for another place to get in. Now they use the software supply chain as an attack amplifier. 

What was your inspiration in founding the business? 

In the summer of 2021, we realised that software infrastructure security was lagging behind the rest of the sector. We started defining the project by establishing exactly what the needs were, analysing the potential market and testing which ideas could work. Then in December 2021 came the Log4j vulnerability, which created a shockwave across the entire software industry. That was the push we needed to decide to go on. In fact, we had been looking at cloud-native security during 2020 and 2021, but we were out of our element there because we are more traditional guys. With software security, we were at home. So we started with the project and went to market last year. We are now active in marketing and selling the platform.

What are the traditional methods of securing the software supply chain, and why aren’t they enough in today’s environment?

In the past, organisations would compile software artefacts, package them, and then digitally sign them with a code signing certificate for integrity protection. They then deployed them on an update site and were done. Now, attackers can penetrate a build system, inject malware into your software dependencies and embed malicious behaviour in your source code. They have changed their tactics and techniques. The old methods do not work anymore because the attackers inject malicious code that is passed on to your customers. The problem is that the traditionally simple way of protecting integrity by code signing doesn’t work anymore.
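
To make the traditional integrity model Luis describes concrete, here is a minimal sketch of signing and verifying a build artefact with Python’s cryptography library, assuming an RSA key pair. The file paths are placeholders, and real pipelines would use a code-signing certificate and secure key storage rather than a local PEM file.

```python
"""Sketch of classic artefact signing for integrity protection.

Illustrative only: paths are placeholders and an RSA key pair is assumed;
real pipelines use code-signing certificates and protected key storage.
"""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def sign_artifact(artifact_path: str, key_path: str, sig_path: str) -> None:
    # Load the private signing key and sign the artefact's raw bytes.
    with open(key_path, "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    with open(artifact_path, "rb") as f:
        data = f.read()
    signature = private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())
    with open(sig_path, "wb") as f:
        f.write(signature)


def verify_artifact(artifact_path: str, pub_path: str, sig_path: str) -> bool:
    # Check the detached signature before deploying or installing the artefact.
    with open(pub_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(artifact_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

The limitation is exactly the one Luis points out: a valid signature only proves the artefact was not altered after signing, and says nothing about malicious code injected into dependencies or the build system before that point.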

One of the challenges within software supply chain security is keeping DevOps running whilst not falling victim to a supply chain attack. How does Xygeni solve this challenge?

You have to take a look at many different things. You have to automate those checks, compiling an inventory and context, because you have to know what is going where. You also need alignment with industry standards, because there are so many initiatives, ideas and best practices out there for supply chains. You have to take the best of them and put them on the ground, converting the generic principles into real, actionable things. 

We have to try to take all the great ideas that are arising and figure out how they could be used in the real world. We put the emphasis on topics that we feel offer the best cost-benefit trade-off, such as detecting unusual activity or misconfigurations in real time. Our business is mainly with international organisations who want to create software but feel they don’t need to secure the infrastructure. That means features like semi-automated guidance resolve a real problem for them. They are looking for things like automation workflows and so on, so we try to provide them in our platform. Our focus is on helping users cope with a huge number of issues and the complexity of modern software.

To hear more from Luis, tune into Episode 32 of The Cyber Security Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

How Tosibox is Reshaping OT Cyber Security

In Cyber Security, we are always looking for new, innovative ways to secure critical infrastructure. On Episode 29 of The Cyber Security Matters Podcast we spoke to Dmitriy Viktorov, the CTO of Tosibox, about how he’s bringing his experience with cloud protection solutions to a new market. Read on to find out more about securing data through OT networks. 

What are the main challenges associated with securing critical infrastructure?

I come mainly from the IT security world, but now I’m jumping into what we call the industrial, operational technology world. There are many similarities, but the OT and perhaps IoT domains are lagging behind. They’re more conservative compared with IT and cybersecurity in general.

One thing that is quite important for customers is operational continuity. You can take some IT systems down for a short period of time if you need to patch, update or migrate them. In OT, it’s very difficult to do that because you are providing critical services – buildings, manufacturing, carriers, you name it. You can’t take them down. If you want to apply a patch or you need to reconfigure something, that’s a big thing. 

We also know that the lifecycle for cybersecurity products in OT is way longer than you might think, because you don’t see the whole lifecycle. I remember when we were defining our lifecycle model, we said it would be a maximum of three years in OT, but it might actually be around five or even ten years in total. 

Another challenge in ICT and OT cybersecurity is the reliance on legacy systems. Several technologies used by OT customers rely on protocols that have nothing to do with TCP/IP. On the IT side, there are limited skills and technologies. It’s also about complexity and interdependencies – and again, a lack of patching and updates – and insider threats. Some infrastructures are physically exposed, which allows threats to get closer to them. 

How is Tosibox unique, and how does it solve some of those challenges?

Tosibox occupies a specific niche within OT cybersecurity: we focus on network security. We help customers with at least one – or maybe a few – particular problems when it comes to OT cybersecurity and network segmentation. We implement access control, and we make sure that our customers can do it easily, securely and more automatically. Because, as I said previously, customers might use different technologies or different protocols, our unique proposition is that our platform is protocol-agnostic and even industry-agnostic. Even if you use old legacy technologies and devices, Tosibox makes it easy to connect them to your IT network and then manage them remotely.

To hear more from Dmitriy, tune into Episode 29 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Exploring the Applications of Edge Computing 

As cloud computing has grown across the connectivity industry, so has its counterpart, edge computing. On Episode 20 of The Connectivity Matters Podcast, we were joined by Ariel Efrati, the CEO of Telco Systems Edgility, to discuss the applications of this cutting-edge technology. Read on to find out more. 

“You’ve probably experienced being in a hotel when everybody wants to go to breakfast at the same time in the morning. So what is the scarcest resource in a hotel? It’s the elevator. If you’re staying on a high floor, you don’t get an elevator. It’s not smart enough – but why is that? It’s very simple to count how many people you have on each floor and send them the right-sized elevator using basic logic. As long as you have a camera and a computer, it should be possible. That’s edge computing. 

When you talk about manufacturing cars, it’s done with robots. They are welding things while the entire line moves on a belt. When the belt is misaligned, you need to recalibrate everything because otherwise the robots will weld in the wrong places. But if you could communicate between those robots because they all have cameras and a gyroscope to identify their position in space, you wouldn’t need to shut the line down. Each robot could adjust as necessary. That’s another real-life example of how edge computing can be used. 

These are real examples that we are faced with. I think traffic management is a great one. At each junction, you could integrate several traffic lights, a radar and a camera to help the flow of people. This is edge computing by nature. Traffic light signalling hasn’t changed in 40 years – we’re still using a person to set the timers based on traffic as it was on the day they were there. That’s not where we should be. We have cameras that can count the cars, and you can integrate that data with the traffic logic to moderate the traffic. Again, that’s edge computing. 

It’s endless when you think about it, and all it requires is an operating system, a device and whatever function you want. It could be an AI function, it could be visual inspection, counting, cropping, controlling – whatever you want. It could also exist in retail today. If you put enough cameras in a large retail store, you could take something from the shelf and be automatically charged for it. There’s no cashier; the system just automatically recognises what you do and charges it to your account. 

It could also apply to digital signage. These systems could identify you and show you more relevant ads as you walk into a store. That’s personalisation. All of these examples are edge computing, with integrated cameras and devices working on a local network to process huge amounts of data without having to transmit it to a data centre, which would also add latency. With applications like traffic management, it has to be precise, because otherwise it will cause traffic accidents. This is what edge computing is, and we’re going to see a reversal in how our data is processed, moving towards edge computing.”
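
The counting examples Ariel gives – people waiting on each floor, cars at a junction – all reduce to the same local loop: sense, count, decide and act without leaving the site. Below is a minimal sketch of that pattern; the count_people function is a stand-in for a real on-device vision model, and the dispatch rule is a deliberately simple placeholder.

```python
"""Minimal sketch of the local sense-count-act loop behind the edge examples.

Illustrative only: count_people stands in for an on-device camera and vision
model, and the dispatch rule is a placeholder for real elevator logic.
"""
import random
import time


def count_people(floor: int) -> int:
    # Placeholder for an on-device camera + vision model counting waiting people.
    return random.randint(0, 12)


def dispatch_elevators(counts: dict[int, int]) -> None:
    # Act locally: send cars to the busiest floors first, with no cloud round-trip.
    for floor, waiting in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        if waiting > 0:
            print(f"Send an elevator to floor {floor} ({waiting} people waiting)")


def edge_loop(floors: range, interval_s: float = 5.0, cycles: int = 3) -> None:
    # The whole loop runs on the local network: sense, decide, act, repeat.
    for _ in range(cycles):
        counts = {floor: count_people(floor) for floor in floors}
        dispatch_elevators(counts)
        time.sleep(interval_s)


if __name__ == "__main__":
    edge_loop(range(1, 11))
```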

To find out more about the real-world applications of edge computing, tune into Episode 20 of The Connectivity Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

What Does the Future Hold for Cyber Security and Its Relationship with AI?

On Episode 26 of The Cyber Security Matters Podcast we were joined by Simon Hunt, the Chief Product Officer at Reveald. Simon is a prolific industry leader and inventor within cybersecurity and technology, specialising in protecting financial information. He also sits on a number of boards within the Cyber Security industry and volunteers with the American Red Cross. During the episode, Simon shared his insights into the relationship between Cyber Security and AI, which you can read here:

“I am super excited about the possibilities of generative AI. But let’s remember that generative AI is guessing what it thinks the most likely next word will be. It’s fascinating to me how much reasonable content it has created just by guessing what word comes next using statistics. Ask ChatGPT to write a children’s story or love letters to your wife and it’s amazing. 

But the eye-opener for me was that the systems I built create very complicated output, and you have to have a huge amount of expertise to interpret what they generate. We do a lot of work to turn that into stories that people understand. We found that we could throw that raw data into a generative AI model and it would produce a readable explanation. If I wanted to tell somebody what their problem is, it would do that perfectly for me. 

I realised I could do it in Japanese or Bahasa – I could tell it to write in any language – and it’s not translating the English output into Japanese, it’s translating the raw data into Japanese. The translated output is still a beautiful, understandable story. My challenge used to be taking raw data and making it simpler, because there was a huge natural language problem. Now it’s generative AI’s problem. 

Now, of course, we have the problem of misinterpretation, but we have the opportunity to eliminate the requirement for super talented experts and make our process more scalable. That is intriguing to me. I’m not trying to automate everything; I’m saying that we should automate as much as possible and redirect human talent. 

For me, AI is not discovering new things, it’s making our discoveries consumable and actionable for a wider range of people. Who knows where it will go? But now we can take entry level people that are at the beginning of their cybersecurity awareness, and make them as powerful as the experts of today. If we can do that, then we can cut the legs off this problem. 

Fundamentally, it’s not intelligence. AI is not adding any unique insight. It’s shocking how little unique insight it takes to write a two-page children’s story just by predicting the words that come next. However, we need to be careful with our expectations. You can’t ask it to solve cancer. If it came up with an answer, it would just be regurgitating something that a person has already tried. 

There is a challenge. If you ask AI to compare two companies, it will generate an output that would take you hours to do by hand. As a timesaver it’s amazing, but schools are worrying because it’s becoming indistinguishable from natural language, so how do you tell it’s not plagiarism? It’s a tool that we should use to take complicated information and make it consumable by people who are not domain experts. I can solve that industry challenge with predictive text.”
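
The workflow Simon describes – handing complicated raw output to a generative model and asking for a plain-language explanation in any language – can be sketched roughly as below. The findings structure, model name and prompt are assumptions for illustration, and the OpenAI Python client is used only as one example of a chat-completion style API.

```python
"""Sketch of turning raw security findings into a readable explanation.

Illustrative only: the findings, model name and prompt are assumptions;
any chat-completion style API could play the same role.
"""
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_findings = {  # stand-in for the complicated raw output Simon mentions
    "host": "db-prod-02",
    "exposed_ports": [1433, 3389],
    "missing_patches": ["example-security-update"],
    "attack_path": ["phishing", "lateral movement", "database access"],
}


def explain(findings: dict, language: str = "English") -> str:
    # Hand the raw data to the model and ask for a non-expert explanation.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for the example
        messages=[
            {"role": "system",
             "content": f"Explain these security findings in plain {language} "
                        "for a non-expert reader, in one short paragraph."},
            {"role": "user", "content": json.dumps(findings)},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(explain(raw_findings, language="Japanese"))
```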

To hear more from Simon, tune into Episode 26 of The Cyber Security Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Navigating the Fast-Paced Cyber Security Sector

On Episode 25 of The Cyber Security Matters Podcast we were joined by Jaye Tillson and John Spiegel, who are passionate cyber security evangelists and the co-hosts of “The Edge” by SSE Forum podcast. Jaye has over 20 years of experience in the cyber security industry, across IT infrastructure and zero trust architecture, while John’s background in the industry includes overseeing major projects for global retailer Columbia Sportswear. Read on to find out their perspectives on why the cyber security industry is moving so quickly. 

John: “I talked about layering your security, which is also referred to in the industry as ‘defence in depth’. So why are people looking to move to this model? Security has got to be simplified and streamlined. Visibility is hard when you have eight or nine point products chained together for remote access, or when your products don’t have APIs that integrate. Security is really hard when you just think about the technology and you don’t think about the business outcomes. 

Primarily, what’s driving this change is simplified platforms which bring together technologies that were siloed. Companies are also looking to reduce their costs, not only from a vendor perspective but from an operational perspective. On top of that, both Jaye and I fell into security because of the way applications and the workforce are now distributed, which means you’ve got to have a different approach to security. Similarly, the way networking and security are transformed and delivered is changing. 

For you to be a player in it from a vendor perspective, you have to have the full stack. You can’t just be a networking vendor and rely on another vendor for the security aspect anymore, you have to bring both together because that’s what provides visibility, simplicity and the platform effect, which is what customers are looking for. 

Another interesting piece is that David Holmes, an analyst at Forrester, did some research asking customers who had moved over to this SASE and SSE model whether they are still using the same vendors as before. Is there any buyer’s remorse? Are they looking to go back, or to maintain that relationship? The answer in almost 85% of cases was ‘No, there’s no buyer’s remorse, we’re happy and we’re not looking to go backwards. This is a better approach.’ What does that mean for the industry? It means that the incumbent vendors out there are under threat. That’s why you will continue to see consolidation within the industry.”

Jaye: “I realised that having people on my network who were able to go everywhere and see everything or potentially hack everything was concerning. That’s how zero trust came about, which is built on the concept of only giving access to devices and applications that people need access to for their roles. You constantly check in, monitor and give visibility, and both SASE and SSE are based on that structure. 

Then you’ve got the consolidation element within the market. Recent statistics show that CISOs have over 100 security tools within their environment, which is impossible to manage, because if you have a problem within the environment you won’t know which vendor to go to, where the gap is, what tool it is, or what you’re looking at. Consolidation brings more products under one banner and within one user interface, which simplifies your security. Cyber security is a difficult place to work because you’re constantly under threat or being attacked, the legislation is constantly changing and it’s a very high-pressure environment. If you can consolidate and become simpler, not only is it easier from a support perspective, it also gives a better user experience.

There’s talk that ransomware is dropping off, but that’s clearly not the case. We need to make everybody’s life simpler by removing and reducing the attack surface, and by simplifying administration, the product and efficiency for the users. Zero trust is a huge thing in the USA, and the government is doing things about it which are flowing down into legislation across EMEA. Once people start to realise that their tools sit on top of that, there’s going to be a snowball effect.”

To hear more from Jaye and John about their work in the industry, tune into Episode 25 of The Cyber Security Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Addressing Human Behaviour in Cyber Security

In the Cyber Security industry, one of the biggest risk factors is human behaviour. On Episode 23 of The Cyber Security Matters Podcast we were joined by Ira Winkler, the Field CISO and VP at CYE. He shared his insights on the risks of human behaviour, as well as some great anecdotes from writing multiple books on cyber security. Read on to learn from his experience. 

How have you seen cyber risk progress over your career?

When I do speaking events, I always ask people, ‘How many of you are security professionals?’ Most of the audience raises their hands and I go, ‘Okay, you’re all failures, because there is no such thing as security. The definition of security is being free from risk, and you’re never going to be free from risk. So technically, we’re all cyber risk managers.’ If we’re all risk managers, how are we mitigating those risks? I do what I call cyber risk optimisation, where we’re quantifying and mapping out the risks according to actual attack paths and vulnerabilities. That allows us to work out how to optimise risk: taking your potential assets, mapping them to vulnerabilities to get an actual cost, and then figuring out which vulnerabilities are the best ones to mitigate. 

Now, we’re at a point where machine learning is actually able to start doing things we were not able to do before. Everybody thinks machine learning is this really fancy thing, but it’s taking big data and putting it through mathematical calculations that were not available to us 10 years ago. Now we’re actually able to crunch data, look at trends, and come up with actual calculations of how to optimise risk. I’m finally able to take the concepts I wrote about in 1996-97 and implement them today. 
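
Ira’s description of cyber risk optimisation – mapping assets to vulnerabilities, putting a cost on each, and choosing what to mitigate first – is essentially a prioritisation calculation. The toy sketch below illustrates that idea; the assets, likelihoods and mitigation costs are invented numbers, not figures from the episode.

```python
"""Toy sketch of risk-based prioritisation, in the spirit of cyber risk optimisation.

Illustrative only: assets, likelihoods and mitigation costs are invented,
and real cyber risk quantification is far richer than this.
"""
from dataclasses import dataclass


@dataclass
class Exposure:
    asset: str
    asset_value: float      # estimated loss if the asset is compromised
    vulnerability: str
    likelihood: float       # annual probability the vulnerability is exploited
    mitigation_cost: float  # cost to fix or mitigate the vulnerability

    @property
    def expected_loss(self) -> float:
        return self.asset_value * self.likelihood

    @property
    def loss_avoided_per_cost(self) -> float:
        # Expected loss avoided per unit of mitigation spend: higher is better.
        return self.expected_loss / self.mitigation_cost


exposures = [
    Exposure("customer database", 2_000_000, "unpatched web server", 0.15, 40_000),
    Exposure("payroll system", 500_000, "weak admin passwords", 0.30, 10_000),
    Exposure("public website", 100_000, "outdated CMS plugin", 0.40, 5_000),
]

# Mitigate first the exposures that buy the most risk reduction per unit spent.
for e in sorted(exposures, key=lambda x: x.loss_avoided_per_cost, reverse=True):
    print(f"{e.asset}: expected loss {e.expected_loss:,.0f}, "
          f"{e.loss_avoided_per_cost:.1f}x return on a {e.mitigation_cost:,.0f} fix")
```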

How do you balance user responsibility and the responsibility of the operating system? 

The solution I’m putting together is a human security engineering consortium, because here’s the problem: awareness is important. I wrote ‘Security Awareness for Dummies’ because awareness is a tactic. Data leak prevention can be important to stop major attacks, and anti-malware can be important to stop major attacks, so those are tactics too. The problem is that currently, when we look at the user problem, it’s being solved with individual tactics that are not coordinated through a strategy. We need a strategy that looks at it from start to finish and includes both the operating system and the user responsibilities. 

You’ve got to stop and think, ‘what are my potential attack vectors? What capabilities does a user have?’ A user can only do things that you enable them to do, they only have access to data you allow them to have, they only have a computer that has the capabilities you provide them. You need to stop and think, ‘given that finite set of capabilities and data provided to a user, what is the strategy that looks at it from start to finish and best mitigates the overall risk?’ I’m not saying you can get rid of risk completely, but you need to create a strategy to mitigate as much risk as possible from start to finish, knowing the capabilities you provide to the user. 

One of my books is ‘Zen and the Art of Information Security’, which includes a concept of what makes an artist: it’s the person’s ability to look at a block of marble and see a figure in it. They can produce different pieces of art, but they’re all made the same way – there’s a repeatable process behind how they get the result. In the same way, there’s a repeatable process for looking at human-related errors. You look at the potential attacks against users and ask, ‘What might users do in good will, thinking they’re doing the right thing but accidentally causing harm?’ Most damage to computer systems is done by well-meaning users who inadvertently create harm. 

You don’t go around and see people saying, ‘I’m getting in my car and crashing into another car’ – that’s why they’re called accidents. There is a science to how we design roads – literally, the curvature of a road is a science, as is the way speed limits are assigned to it. In the same way, there is a science to understanding what a user does, what their capabilities are, and how you can mitigate that to reduce the risks. In cyber risk, you should be asking similar questions, like ‘How can I proactively analyse how the user gets into a position to potentially initiate a loss, and mitigate that proactively?’ Then you design the operating system to reduce the user’s inadvertent risks. 

To learn more about human behaviour and risk in Cyber Security, tune into Episode 23 of The Cyber Security Matters Podcast here 

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Improving Accessibility in NewSpace

Accessibility is a key issue in the NewSpace industry. With a number of different applications for satellite technology, there is an increasing focus on enabling smaller players to enter and access the sector. On Episode 20 of The Satellite & NewSpace Matters Podcast we spoke to Nathan Monster, the CEO and Founder of A-SpaX (which stands for Affordable Space Access), about the company’s aim to make the opportunities that space offers accessible to as many people as possible. A-SpaX offers an end-to-end service that spans from pre-launch to delivery. Nathan also shared how we can improve accessibility as an industry. 

What’s been the biggest change in the industry that has made space more accessible to date? 

Access to space has improved with transportation from Earth to low Earth orbit. There are more frequent launches going into orbit from more commercial companies who have developed their own launchers. There are hundreds of rocket companies now, and there has been a lot of investment in the space industry too, particularly going into launchers. I’m hoping that now that we’ve gotten into space, people will start to think about the return. Questions like ‘While you’re in orbit, what are you going to do there?’ are really important. For me, the answer is production and bringing the results back to people on Earth. 

What has enabled accessibility more, small satellite launches or rideshare opportunities? 

It’s a complex situation because of the amount of investment that has occurred. So many commercial companies now have the chance to create their own transportation systems, launch things and reach orbit. That should be a good thing, but it often goes wrong. Having all this competition does bring down the cost and enable a lot of commercial activity, which makes the industry more accessible, but there are downsides too. It’s the investment itself that has created more accessibility rather than rideshares or launches, but I’m interested to see which method will continue to grow accessibility in the sector. 

What are the barriers to accessibility and what needs to be done to remove them?

The biggest barrier is making sure a rocket is safe and in a good state. All these commercial companies need to have systems and checks in place to make sure they’re successful. As an industry we need to support these companies so that they have the chance to reach a certain point where these protocols are in order and their systems can mature. That requires quite a lot of capital, and there will be failures along the way, but we need to expect and allow that. We need to keep backing them until they’ve built a protocol to make sure that everything is ready before the launch and is done in a proper order.

To learn more about accessibility in space, tune into The Satellite & NewSpace Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe. 

Investing in the European Space Market

The European space market has been growing over the past few years, leading to an increase in investments from a number of firms. On Episode 17 of The Satellite & NewSpace Matters Podcast we were joined by Árisz Kecskés, who is the Business Development Manager at Remred and Investment Manager at Herius Capital, the latter of which is one of the very few space-focused venture capital firms investing in startups in the European Space ecosystem. Árisz shared his insights into the European space market, including the opportunities he sees for other investors in the sector. Read on to learn more!

Where would you recommend investing in the European space market? 

The valuation landscape in Western Europe is very different from how companies are valued in the Central Eastern European region. The trick is to find these ‘rough diamond’ companies and support them throughout their development stages. If you’re looking for early-stage startups, there are a lot of good companies in the Central Eastern European region, whereas companies in Western Europe are typically at later stages. 

What you see on the market is a different approach to the industry itself. Something we’ve noticed is that the Central Eastern European region was more research-oriented, which is tied to the heritage of how the space industry evolved in those countries. Their transition into industrialised space was a bit more difficult, which is understandable. So it depends on what you want to invest in, but there are lots of great companies out there. 

What future opportunities do you see for the space sector across Europe?

It depends on how risk averse someone is. I would say that a key opportunity lies in the Earth Observation market, which is seeing a lot of growth. There is still a lot of growth to be seen in some upstream markets, such as debris removal, and the in-orbit servicing market is something that we’re monitoring very closely too. These do pose a lot of risks, but we see a lot of initiatives and enabling technologies that make that segment very interesting to look at. I’m not sure if investing in these technologies is something that we would do as an early-stage investor, but it’s definitely an area where I see a lot of growth opportunities.

To learn more about Árisz’s work and other aspects of the European Space Market, tune into the full episode of The Satellite & NewSpace Matters Podcast here

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don’t forget to subscribe.