Microsoft has revealed that state-backed hackers are using large language models, such as ChatGPT, to refine their cyberattacks.
In a recently published report, Microsoft identified attempts by groups backed by Russia, North Korea, Iran, and China to use tools like ChatGPT to research targets, improve scripts, and develop social engineering techniques.
Microsoft stated in its report: “Cybercrime groups explore and test different artificial intelligence technologies as they emerge, in an attempt to understand the potential value to their operations and the security controls they may need to circumvent.”
The group linked to Russian military intelligence, tracked as Strontium, has used large language models to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.
The notorious hacking group, also known as APT28 or Fancy Bear, has been active during the Russia–Ukraine war and previously took part in targeting Hillary Clinton’s 2016 presidential campaign.
The North Korean hacking group known as Thallium has used large language models to research publicly disclosed vulnerabilities and target organizations, assist with basic scripting tasks, and draft content for phishing campaigns.
Microsoft stated that an Iranian group called Curium has used large language models to create phishing emails and write code designed to evade detection by antivirus software.
Attackers affiliated with the Chinese government have also relied on large language models for research, programming, translation, and improving their existing tools.
Concerns have grown over the use of AI in cyberattacks, especially with the emergence of tools built to generate malicious emails and hacking utilities, such as WormGPT and FraudGPT.
Last month, the National Security Agency warned that hackers are using artificial intelligence models, like ChatGPT, to make phishing messages more convincing.
So far, Microsoft has not observed any major attacks carried out using large language models, and the use of artificial intelligence in cyberattacks appears limited for now. However, the company warns of potential future uses, such as voice impersonation.
Microsoft noted that while artificial intelligence can help well-resourced attackers advance their operations, it is also aiding defenders: the company says it uses AI for protection, detection, and response against the more than 300 threat actors it tracks.