Google has published a list of ways threat actors are currently using AI to hack you more efficiently

As AI continues to grow and make its way into everyday life, the alleged productivity gains do appear to be showing up in some places. It just so happens that hacker groups are one of those places, and Google’s Threat Intelligence Group has listed some of the many ways they use it. Welcome to the future.

In its latest report, the group says, “In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development.”

It details that government-backed threat actors, like those reportedly in the Democratic People’s Republic of Korea (DPRK), Iran, the People’s Republic of China (PRC), and Russia, are using LLMs for “technical research, targeting, and the rapid generation of nuanced phishing lures”.

One of the fastest-growing threats is the model extraction attack. In Google’s case, this involved accessing an LLM legitimately, then attempting to extract enough information from its responses to build new models.

Google reports one case involving over 100,000 prompts intended to replicate Google Gemini’s reasoning capabilities. Naturally, this is more of a threat to companies than to the average user. However, there are more methods detailed in the report.

“Our latest GTIG AI Threat Tracker report reveals how adversaries are integrating AI into operations. We detail state-sponsored LLM phishing, AI-enabled malware like HONESTCUE, and rising model extraction attacks. Read the report: https://t.co/6GIqxYxNDF” (February 12, 2026)

One such use of AI is making hackers seem more credible in conversation. “Increasingly, threat actors now leverage LLMs to generate hyper-personalized, culturally nuanced lures that can mirror the professional tone of a target organization or local language.”

Google has also spotted AI being used in phishing scams to gather information about potential targets. “This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling.”

This is all before mentioning AI-generated code, with hackers such as APT31 using Gemini to automate vulnerability analysis and to draft plans for testing those vulnerabilities. Google also spotted ‘COINBAIT’, a phishing kit masquerading as a cryptocurrency, “whose construction was likely accelerated by AI code generation tools.”

Though mostly a proof of concept, Google has reportedly spotted a piece of malware that prompts a user’s AI assistant to write code that generates additional malware. This would make tracking down malware on a machine increasingly hard as it continues to ‘mutate’.

Google says, “The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.”

Just last week, we saw a phishing scam that uses AI to deepfake company CEOs in order to gain access to a victim’s cryptocurrency. It seems AI is becoming more than just one tool in a hacker’s toolbelt, and one has to hope defenders are gathering enough data to counter it.
