A recent survey of 400 chief information security officers at UK and US companies found that 72% believe AI solutions will lead to security breaches. Conversely, 80% said they intend to deploy AI tools to protect themselves against AI. It’s a fresh reminder of both the potential and the threat of AI. On the one hand, AI can enable unprecedented security measures, putting cybersecurity experts on the offensive against hackers. On the other, it can enable industrial-scale automated attacks of remarkable sophistication. For technology companies caught in the middle of this war, the big questions are how worried they should be and what they can do to protect themselves.
First, let’s take a step back and look at the current situation. According to data compiled by security firm Cobalt, cybercrime is projected to cost the global economy $9.5 trillion in 2024. Three-quarters of security professionals report an increase in cyberattacks over the past year, and the cost of these attacks is expected to rise by at least 15% annually. For individual businesses, the numbers are grim, too: IBM reports that the average cost of a data breach in 2023 was $4.45 million, up 15% from 2020.
As a result, cybersecurity insurance costs have risen by 50%, and companies are now spending $215 billion on risk management products and services. Healthcare, finance, and insurance organizations and their partners are most at risk of attack. The technology industry is particularly exposed to these challenges: startups handle large amounts of sensitive data, have limited resources compared to large multinational companies, and often have a culture of scaling quickly at the expense of IT infrastructure and procedures.
The challenge of identifying AI-driven attacks
The most striking statistic comes from CFO Magazine, which reports that 85% of cybersecurity experts attribute the rise in cyberattacks in 2024 to bad actors using generative AI. Look a little closer, however, and there are no clear statistics showing what these attacks looked like or what impact they really had. That points to one of our most pressing problems: it is very difficult to establish whether a cybersecurity incident was carried out with the help of generative AI. Generative AI can automate the creation of phishing emails, social engineering attacks, and other types of malicious content.
However, because these attacks are designed to mimic human content and responses, they are very difficult to distinguish from human-created ones. As a result, we don’t yet know the scale or effectiveness of generative AI attacks, and when we can’t quantify the problem, it’s hard to know how concerned we should be.
This means that the best course of action for startups is to focus on mitigating threats more generally. All the evidence shows that existing cybersecurity measures and solutions, backed by best-practice data governance procedures, can address the threats AI poses today.
Increasing cybersecurity risks
Ironically, the biggest existential threat to organizations is not necessarily AI being used maliciously, but employees using AI carelessly or ignoring existing security procedures. For example, if an employee shares sensitive business information with a service such as ChatGPT, there is a risk that the data could be retrieved later, leading to a leak and subsequent attacks. To mitigate this threat, organizations must put appropriate data protection systems in place and better educate users of generative AI about the associated risks.
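To make this concrete, here is a minimal sketch in Python of the kind of guardrail that could sit between employees and an external AI service: a pre-submission filter that redacts obviously sensitive patterns before a prompt ever leaves the company. The pattern list and the redact_sensitive helper are hypothetical illustrations, not a complete data loss prevention policy.

    import re

    # Hypothetical examples of patterns a company might treat as sensitive.
    # A real data loss prevention (DLP) policy would be far more extensive.
    SENSITIVE_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
    }

    def redact_sensitive(prompt: str) -> tuple[str, list[str]]:
        """Replace sensitive matches with placeholders and report what was found."""
        findings = []
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                findings.append(label)
                prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
        return prompt, findings

    if __name__ == "__main__":
        raw = "Summarise this contract for jane.doe@example.com, card 4111 1111 1111 1111."
        cleaned, found = redact_sensitive(raw)
        print(cleaned)  # prompt with placeholders instead of the raw values
        print(found)    # ['email', 'credit_card'] -- can be logged for security review

A filter like this does not replace employee education, but it turns a policy ("don’t paste sensitive data into external AI tools") into something enforceable and auditable.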
Education extends to helping employees understand AI’s current capabilities, especially how it can be used to power phishing and social engineering attacks. Recently, a finance executive at a major company was tricked into paying $25 million to scammers after being lured into a deepfake video conference call in which the fraudsters posed as the company’s CFO. So far, so scary. But read more about this case and you’ll see that, from an AI perspective, it isn’t especially sophisticated. It was just a step above a scam from a few years ago that tricked the finance departments of dozens of companies, many of them startups, into sending money to fake customer accounts by spoofing the CEO’s email address. In both cases, basic security and compliance checks, or even common sense, would have uncovered the scam quickly. Teaching employees how AI can be used to imitate other people’s voices and appearances, and how to spot these attacks, is just as important as having a robust security infrastructure.
Simply put, AI is clearly a long-term threat to cybersecurity, but until more advanced technologies emerge, strict adherence to current security measures will suffice. Companies should continue to follow rigorous cybersecurity best practices, review their processes, and educate employees as threats evolve. The cybersecurity industry has always had to adapt to new threats and attacker techniques; that part is nothing new. What companies cannot afford to do is rely on outdated security technologies and procedures.