Not just for the benefit of it: How artificial intelligence helps fraudsters

Not just for the benefit of it: How artificial intelligence helps fraudsters
0
215
7min.

Artificial intelligence is changing literally all areas, and cybersecurity is no exception. In 2024, the global market for AI-based security technologies exceeded $30 billion, and by 2030, it is projected to reach $135 billion.

The usual methods of attackers, such as social engineering, are being supplemented by large-scale disinformation campaigns and synthetic media. The Munich Security Report and the World Economic Forum have identified this as one of the main risks for the coming years. The problem became especially acute in 2024, when about 4 billion people around the world took part in the elections.

Criminals are rapidly adopting new technologies and developing attack methods. The progress of large language models contributes to the evolution of threats, making fraudulent schemes even more sophisticated.

At the same time, the barrier of access to cybercrime is getting lower. Attacks require less and less specific knowledge, and modern algorithms make it so difficult to recognize synthetic and real content that even specialized systems can get confused.

We have analyzed the key opportunities and risks of AI in digital security and prepared recommendations for protection against current threats.

Benefits of artificial intelligence for cybersecurity

Modern companies are increasingly incorporating artificial intelligence into their usual digital security tools, such as antiviruses, data leakage prevention, fraud detection, access control, and intrusion detection. Thanks to AI’s ability to process huge amounts of information, new opportunities for protection are emerging:

  • more accurate detection of real attacks compared to humans
  • reducing the number of false positives and prioritizing appropriate measures based on their real risks;
  • Labeling phishing emails and messages;
  • modeling attacks using social engineering, which allows security services to identify possible vulnerabilities;
  • fast analysis of large amounts of data related to incidents, which allows you to quickly respond to threats.

Artificial intelligence is especially useful for system penetration testing, which is a detailed study of software and network security. By developing specialized tools to work with their own infrastructure, organizations can identify weaknesses before hackers find them.

The introduction of AI technologies into security systems not only enhances data protection but also optimizes IT costs by preventing potential attacks.

How hackers misuse artificial intelligence

Cybercriminals will always find ways to use new technologies to their advantage. Let’s take a look at the main ways they use artificial intelligence to break the law.

Social engineering schemes

Social engineering schemes are based on manipulating people’s psychology to find out confidential information or encourage the victim to take actions that violate security. The most common methods include phishing, vishing, and spoofing business correspondence.

With the help of artificial intelligence, hackers can automate the creation of personalized fraudulent messages. This allows them to scale up their attacks with less time and effort, but with greater efficiency.

Password cracking

Artificial intelligence improves password cracking algorithms, making the process faster and more accurate. This encourages attackers to focus on this area to improve attack results.

A Security Hero study in 2023 showed how generative AI can help crack passwords using a database of 15 million credentials.

As a result, 51% of passwords were cracked in less than a minute. In an hour, 65% of the combinations were picked up, 71% in a day, and 81% in a month.

The experiment also revealed certain patterns. Combinations consisting of only numbers were cracked in seconds, while adding letters and variable cases slowed down the process significantly.

Deepfakes and cyber threats

This method of deception is based on the ability of AI to change audio and video content, creating a very realistic imitation. Deepfakes can spread rapidly on social media, causing panic and disorientation among users.

Such technologies are often combined with other types of fraud. A striking example is a scheme uncovered in October 2024, when criminals used fake video calls featuring images of attractive women to deceive men. Offering “investments” in cryptocurrency, the fraudsters stole more than $46 million.

Another case occurred in the Hong Kong branch of a large company. The criminals used a deepfake technology to imitate the image of the CFO during a video conference. As a result, the employee transferred $25 million to them.

Data poisoning

Criminals have found a new way of attacking – they are trying to “poison” or compromise the data used to train artificial intelligence models. Their goal is to intentionally distort information, which leads to the AI making wrong decisions.

In 2024, experts at the University of Texas discovered such an attack method targeting Microsoft Copilot systems. Attackers add specially prepared content to documents that the system can index, forcing it to issue misinformation, while referring to falsely “reliable” sources. A particular danger is that even after the malicious data is deleted, distorted information can remain in the system.

Detection of such attacks takes a lot of time, and by the time they are disclosed, the damage can already be significant.

Countering artificial intelligence threats: basics of protection

With the development of technology, the issue of digital security is becoming increasingly relevant. Different countries are actively forming legal frameworks to maximize the benefits of AI while reducing risks. However, there is still almost no complete and comprehensive legislation in this area.

At the same time, it is not necessary to radically change the principles of cybersecurity to adapt to modern threats. The main focus today is to strengthen existing security measures in important aspects.

Control over personal information, attentiveness in online communication, and checking suspicious links help to resist social engineering. For passwords, we recommend using unique and complex combinations, as well as enabling two-factor authentication.

To protect yourself from deepfakes, it is better to verify the full extent of important information through other channels, especially in financial matters. When working with AI systems, you should use only verified data sources and official datasets to reduce the risk of data poisoning.

Compliance with the basic rules of digital security is the basis of online work. But with the development of AI, protection is becoming even more important: new threats appear quickly, so preventive measures and instant response come to the fore.

Share your thoughts!

TOP