
Google report warns of AI’s expanding role in cybercrime and disinformation
Swagath Bandhakavi | Tech Monitor | Published January 30, 2025
A recent report from Google's Threat Intelligence Group highlights the growing use of artificial intelligence (AI) by cybercriminals and state-sponsored groups. The report finds that these actors are using AI to enhance existing tactics, such as phishing scams and misinformation campaigns, rather than to invent entirely new methods of attack. In other words, AI has not revolutionized cybercrime, but it has made it easier and faster for bad actors to execute their plans.
The research indicates that cybercriminals are increasingly exploiting AI tools to automate tasks such as crafting deceptive emails and developing malware. Some underground marketplaces even sell modified AI models designed to bypass security safeguards, lowering the barrier to activities like business email compromise and large-scale fraud. State-backed groups are also using AI to assist espionage and reconnaissance efforts, although the report notes that these attempts have not significantly improved their capabilities.
The report also discusses how information operations (IO) groups are leveraging AI to spread propaganda and misinformation. By refining their messaging and generating more persuasive content, these groups aim to manipulate public opinion and expand their influence on social media. In response to these threats, Google says it is strengthening its AI security measures to combat the misuse of AI technologies in cybercrime and disinformation campaigns.