
Take a trip down memory lane to 2004. The hottest computer released that year was the Apple iMac G5, and the newest email security measure to emerge was the secure email gateway (SEG). SEGs were developed as a crucial perimeter defense, safeguarding email systems by scrutinizing both incoming and outgoing messages to intercept potential threats.

In the last 20 years, the technological advancements in hardware and devices are plainly visible when today's endpoints are compared with those of years past. Less obvious, and commonly overlooked, are the leaps and bounds cyberattackers have made in improving their methods of attack. As a sign of the times, once-effective protection measures like SEGs are no longer a match for AI-based attacks.

Cybersecurity measures that safeguard the digital world must improve in parallel with technological change. For managed service providers (MSPs), the challenge of protecting client inboxes and productivity application environments against AI-fueled attacks has grown tenfold.

The most common way attackers scale phishing attacks is by abusing generative (gen) AI tools such as ChatGPT and Google Gemini (formerly Google Bard). But there are also less obvious ways adversaries use AI that MSPs and businesses should know about. With attackers leveraging both common AI-fueled phishing methods and less popular tactics to carry out attacks, MSPs and their clients need to understand both sides of the same coin to better reinforce protection.

Alexander Ivanyuk, Senior Director of Technology at Acronis, shared two common ways cybercriminals are abusing AI and two “rare but possible” methods threat actors use to mount AI-based attacks.

1. Common attack: AI-fueled phishing, adaptable malware and malicious AI services 

AI-powered phishing is one of the most widely known methods cybercriminals turn to. Misusing gen AI and natural language processing (NLP) tools helps threat actors draft phishing messages that mimic the tone, style and vocabulary of genuine, trusted individuals. The method has freed attackers from scripted, unnatural messages and increased the likelihood of duping victims. Malicious AI services such as WormGPT and FraudGPT are pioneering AI-composed phishing by helping novice hackers write grammatically perfect, convincing emails.

But cybercriminals are taking gen AI-driven attacks in a more sophisticated direction: malware generation. ChatGPT and other AI chatbots can be tricked into creating malware, revealing sensitive information or performing unethical activities. Threat actors are shedding their image as ominous hackers in hoodies scouring the dark web for illegal goods and services to launch attacks. “Malware development no longer requires immense coding skill and proficiency — and neither does jailbreaking AI tools,” said Ivanyuk. “In fact, ChatGPT jailbreak prompts are readily available and shared across the internet.”

In response, gen AI companies have implemented guardrails to block hackers from generating malware with rudimentary prompts, for example, “generate malware code.” However, cybercriminals are exploiting loopholes to outwit these prompt restrictions. By knowing the types of operations and malicious activities used in specific environments, threat actors can carefully formulate prompts that circumvent gen AI guardrails and produce specific code for use in malware. “Without context,” said Ivanyuk, “the code alone is perceived as benign by detection solutions and authorities, but in actuality they are components of malware.”

To take attacks one step further, AI enables adversaries to develop adaptable malware that can evade behavior-based detection. The attacker feeds a gen AI tool malicious code and asks it to slightly modify the code and alter the malware's behavior. This minute change is enough to prevent behavior-based security rules from recognizing the modified malware as malicious, as the toy example below illustrates.
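To see why a minute change defeats rigid rules, consider a deliberately naive sketch (the operation names are hypothetical and this is not any vendor's actual detection engine): a rule that matches an exact sequence of observed operations is evaded by inserting a single harmless step, while a looser subsequence check still catches the variant.

```python
# Toy illustration of why exact-match behavior rules are brittle.
# Operation names are hypothetical; real EDR engines are far more sophisticated.

KNOWN_BAD_SEQUENCE = ["open_address_book", "read_contacts", "mass_send_email"]

def naive_rule_matches(observed_ops: list[str]) -> bool:
    """Flag only if the known-bad sequence appears contiguously, in order."""
    n = len(KNOWN_BAD_SEQUENCE)
    return any(observed_ops[i:i + n] == KNOWN_BAD_SEQUENCE
               for i in range(len(observed_ops) - n + 1))

def subsequence_rule_matches(observed_ops: list[str]) -> bool:
    """More robust: tolerate padding by matching the bad ops as a subsequence."""
    remaining = iter(observed_ops)
    return all(op in remaining for op in KNOWN_BAD_SEQUENCE)

original = ["open_address_book", "read_contacts", "mass_send_email"]
# An AI-assisted variant: identical intent, with one harmless no-op inserted.
variant = ["open_address_book", "sleep_100ms", "read_contacts", "mass_send_email"]

print(naive_rule_matches(original))        # True  -> detected
print(naive_rule_matches(variant))         # False -> the minute change evades it
print(subsequence_rule_matches(variant))   # True  -> still caught
```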

2. Common attack: Forgery with the help of deepfake services  

Deepfakes are another prevalent AI-based attack, and in particular, deepfake services that help paying customers create deepfakes at low cost. The danger with malicious deepfake services and other AI tools is that they require no skill, are accessible to everyone and need few resources to create highly deceptive attacks. With a short audio recording or voice sample, a deepfake can be created within minutes. The biggest misconception is that these services require large amounts of data to produce convincing imitations. Alarmingly, anyone can create a deepfake with minimal voice or video data, and the results can be nearly indistinguishable from the real person.

AI-enabled phishing, deepfakes and malware generation are the obvious methods that cybercriminals gravitate toward because of the minimal investment, skill and time required. However, MSPs and businesses cannot rule out the more complex ways adversaries use AI to circumvent security measures. Although rare, the possibility of sophisticated AI attacks looms.

3. Rare but possible: Poisoning the training of good AI models 

AI poisoning attacks demand a degree of both resources and expertise from adversaries, which is why they are a rarity, but still possible. In these attacks, adversaries poison specific data to sabotage good AI models and mislead the AI into drawing wrong conclusions: data presented as “true” is in fact false. Cybercriminals can even purchase ready-made poisoned datasets.

In the cybersecurity space, AI poisoning is a growing concern for vendors that collect malicious samples from the wild to serve as the basis for protection tools. According to Ivanyuk, threat actors are buying phony or only mildly malicious samples disguised as highly malicious ones, with the objective of tricking security companies into taking the bait. When vendors build their security tools on these semi-malicious samples, the efficacy of their detection suffers.

“The samples are not highly malicious but are also not harmless,” Ivanyuk said. “Attackers want to destroy the obvious differences between benign and malicious samples to muddy the waters, making AI models less effective at recognizing malware and swaying them toward false positives.”

In another example, security companies unknowingly buy poisoned datasets under the false impression that the data is legitimate and will teach the AI to replicate good behavior. The resulting model appears to work normally, but because its inputs are compromised, it behaves in unintended or inaccurate ways. In real-life scenarios, the AI will fail to detect cyberthreats. The sketch below shows the principle in miniature.
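As a minimal sketch of that principle (synthetic data and scikit-learn stand in for a real training pipeline, and the 30% flip rate is an illustrative assumption), flipping the labels on a fraction of malicious training samples measurably drags down the trained model's detection rate:

```python
# Illustrative sketch of data poisoning via label flipping on synthetic data.
# Real AV training pipelines differ; this only demonstrates the effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a malware/benign feature dataset (label 1 = malicious).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def detection_rate(train_labels):
    """Train on the given labels; return the share of real malware caught."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    preds = model.predict(X_test)
    return (preds[y_test == 1] == 1).mean()

print(f"Clean training data:    {detection_rate(y_train):.2%} detected")

# Poisoning: relabel 30% of malicious training samples as benign, mimicking
# "semi-malicious" samples that muddy the benign/malicious boundary.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
mal_idx = np.flatnonzero(poisoned == 1)
flip = rng.choice(mal_idx, size=int(0.3 * len(mal_idx)), replace=False)
poisoned[flip] = 0

print(f"Poisoned training data: {detection_rate(poisoned):.2%} detected")
# The poisoned model still looks functional but misses more real malware.
```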

4. Rare but possible: Direct attacks on AI security products  

Direct attacks on AI security products are uncommon but feasible if the adversarial group has the funds. These attacks require significant investment and meticulous skill to compromise the infrastructure of the security vendor, manipulate its AI model and disable the security product, hindering it from detecting threats.

What is jarring about direct attacks on AI security solutions is that these breaches can go unnoticed for long stretches of time, until product end users report failed detections to the security company. The only way to know whether direct attacks on security AI are occurring is to run independent tests and analyze detection rates.
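In practice, such independent testing could be as simple as periodically scanning a fixed corpus of known-malicious test samples and alerting when the detection rate slips below its historical baseline. The sketch below is hypothetical: scan_file() is a placeholder for whatever scan API or CLI the product actually exposes, and the baseline figures are assumptions.

```python
# Hypothetical sketch of independent detection-rate monitoring for an AI
# security product. scan_file() must be wired to the product's real interface.
from pathlib import Path

BASELINE_RATE = 0.98   # historically observed detection rate (assumed)
ALERT_MARGIN = 0.05    # tolerated drop before raising an alarm (assumed)

def scan_file(path: Path) -> bool:
    """Placeholder: return True if the product flags the sample as malicious."""
    raise NotImplementedError("wire this to the product's scan API or CLI")

def run_detection_audit(corpus_dir: str) -> float:
    """Scan a corpus of known-malicious test samples; return the detection rate."""
    samples = [p for p in Path(corpus_dir).iterdir() if p.is_file()]
    detected = sum(scan_file(p) for p in samples)
    return detected / len(samples)

def audit_and_alert(corpus_dir: str) -> None:
    rate = run_detection_audit(corpus_dir)
    if rate < BASELINE_RATE - ALERT_MARGIN:
        # A sustained, unexplained drop can indicate a tampered model.
        print(f"ALERT: detection rate fell to {rate:.1%} "
              f"(baseline {BASELINE_RATE:.1%}); investigate the product/model.")
    else:
        print(f"OK: detection rate {rate:.1%} is within the expected range.")
```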

Beating AI-based attacks at their own game 

MSPs and their clients must be vigilant about the ways cybercriminals exploit AI tools and services to launch sophisticated attacks. The use of AI to create deepfakes, poison AI models and directly attack AI-based security products underscores the advances attackers have made with AI's help in recent years. The stealthy nature of these breaches means they often remain unnoticed until end users report detection failures.

Email security and collaboration application protection demand reinforcement amid AI-enabled phishing. Human-driven efforts such as anti-phishing and security awareness training help, but advanced security solutions are essential for revealing critical information and context about cyberthreats. They help IT technicians determine where a threat emanated from, uncover the specific parameters used in email transmission, and deliver sandbox and web protection capabilities that catch harmful hyperlinks before they reach recipients, as sketched below.
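As a simplified illustration of that last point (a sketch only: the blocklist is a hypothetical feed, and real gateways add sandbox detonation, reputation services and URL rewriting), an inbound message can be screened for embedded links before delivery:

```python
# Simplified sketch of pre-delivery hyperlink screening for inbound email.
import re
from email import message_from_string
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")
BLOCKLIST = {"evil.example.com", "phish.example.net"}  # hypothetical feed

def extract_urls(raw_email: str) -> list[str]:
    """Pull every http(s) URL out of the message's text and HTML parts."""
    msg = message_from_string(raw_email)
    urls = []
    for part in msg.walk():
        if part.get_content_type() in ("text/plain", "text/html"):
            payload = part.get_payload(decode=True)
            if payload:
                urls += URL_PATTERN.findall(payload.decode(errors="replace"))
    return urls

def should_quarantine(raw_email: str) -> bool:
    """Hold the message if any embedded link points to a blocklisted host."""
    return any(urlparse(u).hostname in BLOCKLIST for u in extract_urls(raw_email))
```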

Additionally, it is imperative for MSPs and clients to conduct independent tests and continuously analyze detection rates to uncover any such direct attacks. This proactive approach is essential to maintaining robust defenses against AI-powered cyberthreats and ensuring the security and integrity of IT infrastructures.

Explore advanced email security for MSPs
Register for a 1:1 demo of Acronis Advanced Email Security

Allison Ho
Content Marketing Creator, Cybersecurity
Allison Ho is Content Marketing Creator at Acronis. She develops content on cybersecurity, data protection, artificial intelligence and endpoint management while closely collaborating with thought leaders. Her technology B2B marketing experience includes expertise in SEO.
