RFA, a global provider of IT, cybersecurity, and cloud services for the financial sector, has warned that artificial intelligence (AI) is increasing cybersecurity risks for private equity firms.
In an exclusive interview with Benzinga, Global Managing Director and Chief Risk Officer (CRO) George Ralph noted that not only are these threats becoming more sophisticated, but many executives are also discussing the potential for more AI-related scams.
Advancements in AI are lowering barriers to entry for hacking, as less-skilled individuals can now execute more sophisticated cyberattacks than in the past.
"Hackers before had a specific skill and now AI helps everyone have that skill. There's more entry points in terms of human error, bad leavers. There's lower skilled threat actors who can use prompt AI to help them work out how to do malicious code or feeding errors back into AI," Ralph said.
Approximately 72% of private equity firms across the U.S. and Europe reported a serious cyber incident at one of their portfolio companies in the past three years, with an average cost of $3.4 million per incident, advisory and executive search firm Russell Reynolds wrote in a report.
Ralph said he has encountered firms that lack proper governance controls for AI and simply let employees use whatever tools are available, such as ChatGPT or Anthropic's Claude. This can lead to trouble, as people may unknowingly feed sensitive documents or information into these tools, where attackers could access it.
Only approximately 38% of PE organizations are proactively planning for technological change, even as most are introducing new digital and AI tools and products that expand their risk exposure, the Russell Reynolds report stated.
Having proper governance controls for AI is key as the technology advances. Ralph believes firms should conduct an audit, block all AI models from day one, and then create a policy governing AI usage. Once those policies are in place, firms can allow the use of AI, paired with a training component.
Phishing emails are among the most common vectors for cyberattacks, as attackers seek access to personal information. These phishing attacks are becoming more personalized because of AI, Ralph explained.
Artificial intelligence is helping hackers find more personal information, such as where you live, what kind of car you drive, and even your pet’s name, making it easier for people to fall for scams. Ralph added that hackers are also using voicemails from people's mobile phones to create deepfakes of their voices to demand access to sensitive information.
Firms are also using AI to combat AI, Ralph added. RFA's Global Security Operations Center (SOC), which helps defend against attacks for its clients, uses an agentic bot that sits behind the SOC team and acts as an auditing tool.
"It will find the source of any kind of malicious activity way faster than a human could. It doesn’t have access to the client systems and it can’t automatically remediate. We still have a human in the loop and the regulators have specified that. But we're using that to find the source of issues way faster," he noted.
Regulators are looking at AI as another database, Ralph explained.
While Europe has enacted the EU AI Act, the first comprehensive AI regulation from a major regulator, the U.S. has yet to enact comparable legislation.
Nevertheless, Ralph believes that we will see more regulation in the U.S. going forward. He thinks regulators will soon want to see audit logs detailing how firms use artificial intelligence.
Good governance and a risk management approach to technology and cyber are essential to managing cybersecurity risks, Ralph added.
"Firms should make sure AI is on that risk register/risk management process to make sure everyone is aware of it. It shows directors or board members who are responsible for managing those risks what the checks and balances are and how those risks are being mitigated," Ralph said.