
Adversary use of Artificial Intelligence and LLMs and Classification of TTPs

This GitHub repository is an attempt to organize known use of artificial intelligence by cyber threat actors and to map and track those techniques. The focus is on cyberattacks facilitated in some way by threat actors using artificial intelligence. This does not include political influence campaigns or mis/dis/mal-information campaigns. It does include some fraud-related cases, but I attempt to keep the focus on fraud activity we would see in traditional campaigns, now enhanced with AI.

It is worth specifically stating that in many cases defenders cannot confirm whether a threat actor used AI unless: (a) the reporting organization is observing the use of its own AI tools, as in Microsoft and OpenAI's reporting, or (b) the actor decides to use AI tools available on an already-compromised endpoint. For this reason, many reports on attacks or campaigns using AI are, on careful reading, actually describing researcher activity. I am therefore keeping this project focused only on confirmed reports of threat actors' actual use.

Through this research it also became clear to me that there is not always an easy 1:1 mapping of MITRE ATT&CK TTPs to this activity, and some techniques are also not in the MITRE ATLAS project (which focuses more on attacks against LLMs). For this reason, this project also builds on Microsoft and OpenAI's classification of LLM TTPs to provide a better means of describing this activity. See Appendix A for this list.

There have been relatively few individuals on criminal forums (Breachedforums[.]vc, Exploit[.]in or XSS) and Telegram channels actively discussing generative AI use. Some users have marketed alleged generative AI tools for malicious purposes; see Dark LLMs and Blackhat GPTs. It is worth considering that more technically capable and sophisticated actors are more likely to harness these types of tools. That said, there is only a small set of examples of alleged LLM outputs from these tools, and no posts provided evidence that LLM output was successfully used in an attack.

There have been secondary-effect studies, such as the significant increase observed in phishing since ChatGPT became widely available, reported by SlashNext as a 1,265% increase in malicious phishing emails since Q4 2022. However, SlashNext and many other reports authored in this area come from vendors offering solutions in these security spaces and should thus be examined accordingly for bias.

[Update 5/14/2024] I want to call out some excellent reporting by Trend Micro which covers many of the concepts on both of my pages, though in less depth. Their findings are:

  • Adoption rates of AI technologies among criminals lag behind the rates of their industry counterparts because of the evolving nature of cybercrime.
  • Compared to last year, criminals seem to have abandoned any attempt at training real criminal large language models (LLMs). Instead, they are jailbreaking existing ones.
  • We are finally seeing the emergence of actual criminal deepfake services, with some bypassing user verification used in financial services.

[Update 5/15/2024] Verizon's DBIR report for 2024 has a small but well-analyzed section about threat actor interest in AI:

  • Text analysis of criminal forums regarding AI alongside terms such as malware, phishing, and vulnerability shows very little interest; in fact, most posts are about selling account access
  • Phishing, malware, and ransomware effectiveness analysis indicates there is not a pressing need for AI to be successful as a malicious actor
  • The one large advancement that appears to be leveraged by threat actors is the use of deepfake technology

Generally speaking, the most popular and widely accessible AIs and LLMs have significant safeguards and are designed not to aid or enable malicious or criminal activities. Using them for such purposes therefore requires "jailbreaking" or otherwise "attacking" the AI/LLM itself, which is not covered here but is covered in my other repo.

Scope

This is not for:

• Threat actors attacking AI / ML / LLMs

• Researchers attacking AI, or finding exploits or possible threat actor uses of AI

• Mis/dis/mal information campaigns using Deepfake technology

• Threats to AI users, or AI researchers

For those items, some coverage is in my other repo here, AI and ML for Cybersecurity, along with some tools for cybersecurity professionals.

Notes: UnSub or Multiple UnSub - (source name) is used to refer to an unnamed subject, or multiple unnamed subjects, mentioned in reports.

These entries are taken from reports listed on the Sources page, but the TTP mapping is my own effort to map each to the closest possible MITRE ATT&CK technique. If you find an entry and believe a better TTP mapping is available, please contact me and/or comment.

Name AKAs Brief TTP Link
Bitter APT APT-C-08, Aramanberry The group used the online IDE platform Replit to build phishing websites Source T1588.007 - Obtain Capabilities: Artificial Intelligence

LLM-supported social engineering T1566 - Phishing
ComingSoon
Multiple Unsub - Trend Micro unknown AKAs Criminals are using generative AI capabilities for two purposes: To support the development of malware or malicious tools...To improve their social engineering tricks. Trend Micro LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool

LLM-supported social engineering T1566 - Phishing

LLM-enhanced scripting techniques: Execution through Windows Management Instrumentation (WMI) or PowerShell (T1059)

DeepFake for Impersonation (Fraud): Where generative AI is used to make audio, video or photographic media used to impersonate individuals
Trend Micro
TA547 Scully Spider Proofpoint identified TA547 targeting German organizations with an email campaign delivering Rhadamanthys malware. This is the first time researchers observed TA547 use Rhadamanthys, an information stealer that is used by multiple cybercriminal threat actors. Additionally, the actor appeared to use a PowerShell script that researchers suspect was generated by a large language model (LLM) such as ChatGPT, Gemini, Copilot, etc. Source LLM-enhanced scripting techniques: Execution through Windows Management Instrumentation (WMI) or PowerShell (T1059) ComingSoon
Fancy Bear Forest Blizzard, APT28, Strontium APT28 is a Russian military intelligence actor linked to GRU Unit 26165, who has targeted victims of both tactical and strategic interest to the Russian government. Microsoft assesses that Forest Blizzard operations play a significant supporting role to Russia’s foreign policy and military objectives both in Ukraine and in the broader international community. Forest Blizzard’s use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations. Microsoft LLM-informed reconnaissance T1592 - Gather Victim Org Information

LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool

APT28 aka Fancy Bear
APT43 Lazarus, Emerald Sleet, Velvet Chollima, Kimsuky, TA406, Thallium North Korean threat actor whose recent operations relied on spear-phishing emails to compromise and gather intelligence from prominent individuals with expertise on North Korea. Microsoft observed Emerald Sleet impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet's use of LLMs has been in support of this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies. Microsoft

This account of AI use was also reported by Mandiant in their 23-00016993 and 24-00002657 reports. Mandiant's 2024 reporting also mentions APT43 purchasing WormGPT in August 2023. In reports from February 2024, APT43 was observed on forums discussing ChatGPT in a topic about (roughly translated) a "North Korean nuclear solution".

In another report, Mandiant describes, "We identified indications of North Korean cyber espionage actor APT43 interest in LLMs, specifically Mandiant observed evidence suggesting the group has logged on to widely available LLM tools. The group may potentially leverage LLMs to enable their operations, however the intended purpose is unclear." Source
LLM-assisted vulnerability research T1588.006 - Obtain Capabilities: Vulnerabilities

LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool

LLM-supported social engineering T1566 - Phishing

LLM-informed reconnaissance T1592 - Gather Victim Org Information

Based on Mandiant's 24-00002657 report: DeepFake for Impersonation (TTP unknown; it is not clear how the actors used the images generated from MaxAi[.]me and ZMO AI)
link coming soon
Imperial Kitten Crimson Sandstorm, Yellow Liderc, Tortoiseshell Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC). This actor has targeted multiple sectors, including defense, maritime shipping, transportation, healthcare, and technology. These operations have frequently relied on watering hole attacks and social engineering to deliver custom .NET malware. Prior research also identified custom Crimson Sandstorm malware using email-based command-and-control (C2) channels. The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine. Microsoft LLM-supported social engineering T1566 - Phishing

LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool

LLM-enhanced anomaly detection evasion T1562.001 - Impair Defenses: Disable or Modify Tools
link coming soon
Aquatic Panda Charcoal Typhoon, ControlX, RedHotel, Bronze University, Red Scully, Chromium Chinese state-affiliated threat actor with a broad operational scope. Activities have predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, with observed interests extending to institutions and individuals globally who oppose China’s policies. In recent operations, this actor group has been observed interacting with LLMs in ways that suggest a limited exploration of how LLMs can augment their technical operations. This has consisted of using LLMs to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets. Microsoft LLM-informed reconnaissance T1588.006 - Obtain Capabilities: Vulnerabilities

LLM-enhanced scripting techniques T1587 - Develop Capabilities

LLM-refined operational command techniques TA0003 - Persistence and TA0004 - Privilege Escalation
Aquatic Panda Reports
Sodium Salmon Typhoon, Samurai Panda, Maverick Panda, APT4 Sophisticated Chinese state-affiliated threat actor with a history of targeting US defense contractors, government agencies, and entities within the cryptographic technology sector, with over a decade of operations marked by intermittent periods of dormancy and resurgence. This threat actor has demonstrated its capabilities through the deployment of malware, such as Win32/Wkysol, to maintain remote access to compromised systems. (Sodium's) interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies. Microsoft LLM-informed reconnaissance T1593 - Search Open Websites/Domains

LLM-enhanced scripting techniques T1587.001 - Develop Capabilities: Malware

LLM-refined operational command techniques T1564 - Hide Artifacts

LLM-Aided technical translation and explanation T1593 - Search Open Websites/Domains
link coming soon
Unsub1 - Kaspersky N/A Using fake sites, they hosted “GPT chats” supposedly capable of diagnosing computer problems, making money, and such. In fact, the sites deployed no AI models, but only used the topic to stir interest among potential victims and make a bigger killing. One such page, mimicking the Microsoft website, warned visitors that their computer was infected with a Trojan. To avoid losing data, they were advised not to reboot or turn off the device until the issue was resolved. Two options were offered: call a hotline or chat with Lucy, an AI chatbot. In the second case, the visitor had to choose a method of diagnosing the device, after which the bot said it was unable to solve the problem and recommended calling support. Naturally, professional scammers, not Microsoft engineers, were waiting at the other end of the line. Analyst note: this sounds like BazaCall-like activity. Source Lookalike LLM T1583 - Acquire Infrastructure N/A
Unsub2 - Kaspersky N/A “Smart chatbots” were also used as “consultants” on making money online. On one site, a bot claiming to be designed by Elon Musk advertised investment services. After telling the new “client” that it could make them rich quickly, the robot asked about their education, income level, and investment experience. Regardless of the answers, the bot informed the client that it would do all the earning. Next, it demonstrated an amount it could offer and prompted the user to register simply by providing their contact details. Events then likely unfolded as in other similar schemes: the “AI” asked for a small fee in recognition of its intellectual abilities, and then simply vanished into the ether. Source Lookalike LLM T1583 - Acquire Infrastructure

LLM-supported social engineering T1566 - Phishing
N/A
Multiple Unsub - JFrog N/A In an "investigation of a malicious machine-learning model...The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a “backdoor”...It’s crucial to emphasize that when we refer to “malicious models”, we specifically denote those housing real, harmful payloads. Our analysis has pinpointed around 100 instances of such models to date... Source LL Models used for backdoor deployment T1195.001 Supply Chain Compromise - Compromise Software Dependencies and Development Tools T1059 Command and Scripting Interpreter N/A
GXC Team (group leader: googleXcoder) Resecurity has uncovered a cybercriminal group known as "GXC Team", which specializes in crafting tools for online banking theft, ecommerce fraud, and internet scams. Around November 11, 2023, the group's leader, operating under the alias "googleXcoder", made multiple announcements on the Dark Web. These posts introduced a new tool that incorporates Artificial Intelligence (AI) for creating fraudulent invoices used for wire fraud and Business E-Mail Compromise (BEC) scams.


This tool employs proprietary algorithms to scrutinize compromised emails through POP3/IMAP4 protocols, identifying messages that either mention invoices or include attachments with payment details. Upon detection, the tool alters the banking information of the intended recipient (like the victim's supplier) to details specified by the perpetrator. The altered invoice is then either replaced in the original message or sent to a predetermined list of contacts. These methods are commonly employed in wire fraud and well-known bogus invoice scams. Often, accountants and staff in victimized companies do not thoroughly check invoices that appear familiar or nearly genuine, leading to unverified payments.
The tool's multi-language capability enables the automatic scanning of messages without any manual intervention, providing the actors with significant advantages.
The tool's interface includes options to configure simple mail transfer protocol (SMTP) settings for sending out emails with the fabricated invoices it generates. Moreover, the tool includes a feature that sends reports to a designated Telegram channel, serving as an alternative to traditional command-and-control (C2) communication. This functionality also extends to providing details about the generated invoices.
LLM-informed reconnaissance
T1592 - Gather Victim Org Information

LLM-enhanced scripting techniques
T1588.002 - Obtain Capabilities: Tool

LLM-supported social engineering
T1566 - Phishing

LLM-directed Automated Collection
T1114 - Email Collection

LLM-enhanced data manipulation
T1565 - Data Manipulation
T1657 - Financial Theft

Resecurity
Multiple UnSub - South China Morning Post Four cyber attackers in China have been arrested for developing ransomware...The attack was first reported by an unidentified company in Hangzhou, capital of eastern Zhejiang province, which had its systems blocked by ransomware, according to a Thursday report by state-run Xinhua News Agency. The hackers demanded 20,000 Tether, a cryptocurrency stablecoin pegged one-to-one to the US dollar, to restore access....The police in late November arrested two suspects in Beijing and two others in Inner Mongolia, who admitted to “writing versions of ransomware, optimising the program with the help of ChatGPT, conducting vulnerability scans, gaining access through infiltration, implanting ransomware, and carrying out extortion”, the report said. LLM-aided development T1587 - Develop Capabilities: Malware

South China Morning Post

recommend removepaywall.com for this site
baller423/goober2 potentially star23/baller13 or just ties between them Recently, our scanning environment flagged a particularly intriguing PyTorch model uploaded by a new user named baller423—though since deleted. The repository, baller423/goober2, contained a PyTorch model file harboring an intriguing payload....This IP address range belonging to KREOnet, which stands for “Korea Research Environment Open NETwork,” may serve as potential evidence suggesting the involvement of researchers in attempting the exploit Source LL Models used for backdoor deployment T1195.001 Supply Chain Compromise - Compromise Software Dependencies and Development Tools T1059.001 Command and Scripting Interpreter: PowerShell
star23/baller13 potentially baller423/goober2 or just ties between them Shortly after the model was removed, we encountered further instances of the same payload with varying IP addresses. One such instance remains active: star23/baller13. It’s worth noting the similarity in the model name to the deleted user, suggesting potential ties between them. Source LL Models used for backdoor deployment T1195.001 Supply Chain Compromise - Compromise Software Dependencies and Development Tools T1059.001 Command and Scripting Interpreter: PowerShell N/A
Multiple Unsub - NCSC UK N/A Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding Source LLM-informed reconnaissance T1593 - Search Open Websites/Domains

LLM-supported social engineering T1566 - Phishing

LLM-aided development T1587 - Develop Capabilities: Tool

LLM-aided development T1587 - Develop Capabilities: Malware

LLM-enhanced scripting techniques T1587 - Develop Capabilities
[Multiple Unsub - NCSC](https://github.com/cybershujin/Threat-Actors-Use-of-Artifical-Intelligence/tree/main/Multiple UnSub SlashNext)
Multiple Unsub - SlashNext N/A Since Q4 of 2022, when ChatGPT became widely available, there has been a 1,265% increase in malicious phishing emails, with a 967% rise in credential phishing in particular. Source LLM-supported social engineering T1566 - Phishing [Multiple Unsub - SlashNext](https://github.com/cybershujin/Threat-Actors-Use-of-Artifical-Intelligence/tree/main/Multiple UnSub SlashNext)
Multiple Unsub North Korea - US Deputy National Security Advisor N/A On Oct. 18, 2023, U.S. Deputy National Security Advisor Anne Neuberger said that North Korea's use of artificial intelligence (AI) is enhancing the country's cyber capabilities, which puts enterprises around the globe at significant risk. Neuberger said, "We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit." LLM-assisted vulnerability research T1588.006 - Obtain Capabilities: Vulnerabilities

LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool
Scattered Spider UNC3944, Storm-0875 In the second half of 2023, SCATTERED SPIDER used the Azure AD PowerShell module to download all Entra ID user immutable IDs at a North American financial services victim. Using its Entra ID backdoor, the adversary could log in as any of the downloaded users. The PowerShell used to download the users’ immutable IDs resembled large language model (LLM) outputs such as those from ChatGPT. In particular, the pattern of one comment, the actual command and then a new line for each command matches the Llama 2 70B model output. Source LLM-enhanced scripting techniques T1588.002 - Obtain Capabilities: Tool Link Coming Soon
Indrik Spider Evil Corp In February 2023, CrowdStrike Services responded to an INDRIK SPIDER incident involving BITWISE SPIDER’s LockBit RED ransomware. During this incident, INDRIK SPIDER exfiltrated credentials from cloud-based credential manager Azure Key Vault. Logs show that INDRIK SPIDER also visited ChatGPT while interacting with the Azure Portal. In addition to visiting ChatGPT while browsing the Azure Portal — presumably to understand how to navigate in Azure — browsing activity analysis indicates INDRIK SPIDER used search engines such as Google and Bing and searched on GitHub during the operations to understand how to exfiltrate Azure Key Vault credentials. Using search engines and visiting ChatGPT indicate that though INDRIK SPIDER is likely new to the cloud and not yet sophisticated in this domain, it is using generative AI to fill these knowledge gaps. Source LLM-informed reconnaissance T1593 - Search Open Websites/Domains Source N/A
UnSubs - Mandiant N/A but described as "actors aligned with nation-states including Russia, the People's Republic of China (PRC), Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador, along with non-state actors such as individuals on the 4chan forum." Since 2019, Mandiant has identified numerous instances of information operations leveraging GANs, typically for use in profile photos of inauthentic personas, including by actors aligned with nation-states including Russia, the People's Republic of China (PRC), Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador, along with non-state actors such as individuals on the 4chan forum. We judge that the publicly available nature of GAN-generated image tools such as the website thispersondoesnotexist.com has likely contributed to their frequent usage in information operations (Figure 2). Actors have also taken steps to obfuscate the AI-generated origin of their profile photos through tactics like adding filters or retouching facial features...Mandiant has noted evidence of financially motivated actors using manipulated video and voice content in business email compromise (BEC) scams, North Korean cyber espionage actors using manipulated images to defeat know your customer (KYC) requirements, and voice changing technology used in social engineering targeting Israeli soldiers. Source DeepFake for Impersonation T1587 - Develop capabilities N/A
CanadianKingpin12 An investigation from researchers at cybersecurity company SlashNext reveals that CanadianKingpin12 is actively training new chatbots using unrestricted data sets sourced from the dark web, or basing them on sophisticated large language models developed for fighting cybercrime.
The researchers also learned that the advertiser also had access to another large language model named DarkBERT developed by South Korean researchers and trained on dark web data but to fight cybercrime.

This is not included in the analysis, but I wanted to list this report for completeness. The reason it is excluded is a comment from Dr. Chung, the Head of AI and author of DarkBERT at S2W: "Since S2W adheres to the strict and ethical guidelines outlined by the ACL, access to DarkBERT is granted following careful evaluation and is exclusively approved for academic and public interest."
N/A - source not credible
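A recurring identification signal in the rows above (TA547, Scattered Spider) is the formatting of suspected LLM-generated scripts: a grammatically clean comment, then the command, then a blank line, repeated for every command. As a thought experiment, here is a minimal Python sketch of that heuristic; the function name, sample script, and threshold are my own illustration and do not come from the cited Proofpoint or CrowdStrike reports.

```python
# Naive heuristic for the comment/command/blank-line cadence that
# reporting associates with LLM-generated PowerShell. Illustrative
# only -- real detection would need far more signals to avoid
# false positives on well-commented human code.

def llm_style_ratio(script_text: str, comment_prefix: str = "#") -> float:
    """Fraction of commands immediately preceded by a comment and
    followed by a blank line (or end of script)."""
    lines = [ln.rstrip() for ln in script_text.splitlines()]
    commands = matches = 0
    for i, line in enumerate(lines):
        stripped = line.strip()
        if not stripped or stripped.startswith(comment_prefix):
            continue  # skip blanks and comments; count only commands
        commands += 1
        prev_is_comment = i > 0 and lines[i - 1].strip().startswith(comment_prefix)
        next_is_blank = i + 1 >= len(lines) or not lines[i + 1].strip()
        if prev_is_comment and next_is_blank:
            matches += 1
    return matches / commands if commands else 0.0

# Hypothetical sample in the style described in the reporting above
sample = """# Import the Azure AD module
Import-Module AzureAD

# Connect to the tenant
Connect-AzureAD

# Export every user's immutable ID
Get-AzureADUser -All $true | Select-Object ImmutableId
"""

if llm_style_ratio(sample) > 0.8:  # arbitrary illustrative threshold
    print("script matches the comment-per-command pattern")
```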

Appendix A: LLM-themed TTPs

LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities. (CREDIT: Microsoft)

LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system, and assistance with troubleshooting and understanding various web technologies. (CREDIT: Microsoft)

LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware. (CREDIT: Microsoft)

LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets. (CREDIT: Microsoft)

LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation. (CREDIT: Microsoft)

LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks. (CREDIT: Microsoft)

LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems. (CREDIT: Microsoft)

LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls. (CREDIT: Microsoft)

LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning. (CREDIT: Microsoft)

Lookalike LLM: Using LLMs to appear to be another legitimate LLM tool such as a chatbot. Similar to look-alike domain activity. (CREDIT: Rachel James based on Kaspersky report)

LL Models used for backdoor deployment: The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a “backdoor” (CREDIT: Rachel James based on JFrog report)
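For readers unfamiliar with the mechanism behind these "malicious model" findings: PyTorch checkpoints have historically been Python pickle archives, and unpickling can invoke arbitrary callables. Below is a minimal, deliberately benign Python sketch of that primitive; the class name and harmless echo payload are my own illustration, not JFrog's actual finding, where real payloads opened reverse shells.

```python
# Benign demonstration of the pickle deserialization issue behind
# "malicious model" findings: __reduce__ lets a pickled object run
# an arbitrary callable at load time. A real payload would spawn a
# reverse shell instead of the harmless echo used here.
import os
import pickle

class NotReallyAModel:
    def __reduce__(self):
        # (callable, args) is executed during pickle.loads()
        return (os.system, ("echo code ran at model load time",))

blob = pickle.dumps(NotReallyAModel())
pickle.loads(blob)  # "loading the model" executes the payload

# Mitigations include loading weights only (torch.load(...,
# weights_only=True)), preferring the safetensors format, and
# scanning model files before loading.
```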

DeepFake for Impersonation: Where generative AI is used to make audio, video or photographic media used to impersonate individuals (CREDIT: Rachel James based on various reports)

LLM-directed Automated Collection: Once established within a system or network, an adversary may use automated techniques for collecting internal data. Using LLMs, they may train the model to search for and copy information fitting specific criteria. (CREDIT: Rachel James based on GXC Team activity)
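For context on the collection primitive itself (setting the LLM component aside), the GXC Team tool described above reportedly scans compromised mailboxes over POP3/IMAP4 for invoice-related messages. The following defender-side Python sketch uses the standard-library imaplib to run the same kind of keyword search against a mailbox you own, for example to audit your own exposure; the host, credentials, and search terms are placeholders.

```python
# Defender-side illustration of keyword-driven mailbox collection
# over IMAP4 -- the same primitive the GXC Team tool reportedly
# automates against compromised accounts. Run only against a
# mailbox you own.
import email
import imaplib

HOST = "imap.example.com"      # placeholder
USER = "user@example.com"      # placeholder
PASSWORD = "app-password"      # placeholder

with imaplib.IMAP4_SSL(HOST) as conn:
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)  # read-only: audit, don't modify
    # Server-side search for messages mentioning invoices
    status, data = conn.search(None, '(OR SUBJECT "invoice" BODY "invoice")')
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        print(num.decode(), msg.get("From"), msg.get("Subject"))
```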

LLM-enhanced data manipulation: Adversaries may insert, delete, or manipulate data in order to influence external outcomes or hide activity, thus threatening the integrity of the data. By using LLMs to manipulate data, adversaries may attempt to affect a business process, organizational understanding, or decision making at large scales. (CREDIT: Rachel James based on GXC Team activity)

!!! still under construction !!!

Deepfake categories

DeepFake for Impersonation: Where generative AI is used to make audio, video or photographic media used to impersonate individuals (CREDIT: Rachel James based on various reports)

  • Financial Fraud
  • Employment/Hiring fraud
  • Blackmail / Extortion
    • deepfake porn

DeepFake for Synthetic Identity: Where generative AI is used to make audio, video or photographic media used to create fictitious personas that do not correspond to real individuals, such as GAN-generated profile photos for inauthentic accounts (CREDIT: Rachel James based on various reports)

DeepFake for Influence Operations: Where generative AI is used to make audio, video or photographic media used to impersonate individuals for the purposes of political or social influence campaigns, typically associated with an objective to distribute mis/dis/mal information. (CREDIT: Rachel James based on various reports)

Year and Month Name of Actor (if known) Victim Brief TTP Link
2022 June Multiple UnSubs unknown (multiple) Complaints report the use of voice spoofing, or potentially voice deepfakes, during online interviews of the potential applicants. In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually. DeepFake for Impersonation

Employment/Hiring fraud
Internet Crime Complaint Center IC3
2023 August Multiple UnSubs unknown (multiple) Hong Kong police have arrested six people over the use of deepfake technology to take out loans in other people’s names...In this context, police had uncovered a local fraud syndicate that used eight stolen Hong Kong identity cards – all of which had already been reported as lost – to make 90 loan applications and 54 bank account registrations between last September and July, he said...Deepfake methods were used in at least 20 instances to imitate those pictured in the identity cards and trick facial recognition programmes, Ko added. DeepFake for Impersonation

Financial fraud
Hong Kong Free Press
2023 August Unsub Unknown It comes after police said they received a report from a man who said scammers attempted to trick him by swapping his face onto a pornographic video before trying to blackmail him. DeepFake for Impersonation

Blackmail / extortion (sextortion with deepfake porn)
Hong Kong Free Press
2023 August APT43 Unknown Since 2019, Mandiant has identified numerous instances of information operations leveraging GANs, typically for use in profile photos of inauthentic personas, including by actors aligned with nation-states including Russia, the People's Republic of China (PRC), Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador, along with non-state actors such as individuals on the 4chan forum. We judge that the publicly available nature of GAN-generated image tools such as the website thispersondoesnotexist.com has likely contributed to their frequent usage in information operations (Figure 2). Actors have also taken steps to obfuscate the AI-generated origin of their profile photos through tactics like adding filters or retouching facial features. DeepFake for Synthetic Identity Mandiant - Threat Actors are Interested in Generative AI, but Use Remains Limited
2023 August DragonBridge United States leaders In March 2023, DRAGONBRIDGE leveraged several AI-generated images in order to support narratives negatively portraying U.S. leaders. One such image used by DRAGONBRIDGE was originally produced by the journalist Eliot Higgins, who stated in a tweet that he used Midjourney to generate the images, suggesting that he did so to demonstrate the tool’s potential uses. DeepFake for Influence Operations Mandiant - Threat Actors are Interested in Generative AI, but Use Remains Limited
2023 May DragonBridge Bloomberg In May 2023, U.S. stock market prices briefly dropped after Twitter accounts, including the Russian state media outlet, RT, and the verified account, @BloombergFeed, which posed as an account associated with the Bloomberg Media Group, shared an AI-generated image depicting an alleged explosion near the Pentagon. DeepFake for Influence Operations Mandiant - Threat Actors are Interested in Generative AI, but Use Remains Limited
2022 March unknown Ukraine In March 2022, following the Russian invasion of Ukraine, an information operation promoted a fabricated message alleging Ukraine's capitulation to Russia through various means, including via a deepfake video of Ukrainian President Volodymyr Zelensky. DeepFake for Influence Operations Mandiant - Threat Actors are Interested in Generative AI, but Use Remains Limited

Sources for the above lists

Threat Actors

2024 Reports

Month Org Link
January National Cyber Security Centre The near-term impact of AI on the cyber threat - Analyst note: one of the most sensible and non-sensationalist works out there on the topic
February Microsoft Staying ahead of threat actors in the age of AI
February Microsoft Cyber Signals Issue 6
February OpenAI Disrupting malicious uses of AI by state-affiliated threat actors
February JFrog Malicious AI models on Hugging Face backdoor users’ machines
March Tripwire Cybersecurity in the Age of AI: Exploring AI-Generated Cyber Attacks
March Kaspersky Spam and phishing in 2023

2023 Reports

Month Org Link
June FBI / IC3 Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes
July SlashNext WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
November SlashNext AI tools such as ChatGPT are generating a mammoth increase in malicious phishing emails
August Mandiant Threat Actors are Interested in Generative AI, but Use Remains Limited
