Cyber: Google expands Gemini AI use to fight malicious ads on its platform - 2025 Update
Google says it is increasingly using its Gemini AI models to detect and block harmful ads on its advertising platforms as scammers and threat actors continue to evolve their tactics to evade detection. In a new post, the company reports that it blocked or removed 8.3 billion ads and suspended 24.9 million advertiser accounts in 2025, including 602 million ads tied to scams.

Malvertising has been a long-standing problem on Google's ad network, with attackers purchasing ads that impersonate legitimate brands and services to push malware, steal cryptocurrency, or lead victims to phishing sites. These campaigns commonly use cloaking techniques and URL redirects to pose as trusted websites, including Google's own domains, legitimate software download pages, and authentication portals.

Recent campaigns reported by BleepingComputer include fake login pages designed to steal Google Ads accounts, trojanized software distributed through ads impersonating tools like Google Authenticator and Homebrew, and ads for sites posing as cryptocurrency platforms that drain visitors' wallets.

According to Google, cybercriminals are now using generative AI in these campaigns, enabling them to rapidly build larger, more sophisticated operations. To counter this, Google says it now relies heavily on Gemini-powered systems to automatically discover and block malicious ads before they are shown to users. While earlier detection systems analyzed keywords for signs of malicious behavior, Google says Gemini can analyze billions of signals, including advertiser behavior, account history, campaign patterns, and intent, to determine whether an ad is malicious.
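Google has not disclosed how its Gemini-based systems actually weigh these signals. Purely as an illustration of the multi-signal idea described above, here is a minimal sketch in Python; every signal name, threshold, and weight below is hypothetical and chosen only to show how several weak indicators can combine into one risk decision:

```python
from dataclasses import dataclass

# Hypothetical ad-review signals; Google's real Gemini pipeline and
# its actual signals are not public.
@dataclass
class AdSignals:
    account_age_days: int      # newer advertiser accounts tend to be riskier
    prior_violations: int      # past policy strikes on the account
    redirect_hops: int         # long redirect chains can indicate cloaking
    impersonates_brand: bool   # landing page mimics a known brand

def risk_score(s: AdSignals) -> float:
    """Combine weighted signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if s.account_age_days < 30:
        score += 0.3
    score += min(s.prior_violations * 0.2, 0.4)  # cap the violation penalty
    if s.redirect_hops > 2:
        score += 0.2
    if s.impersonates_brand:
        score += 0.3
    return min(score, 1.0)

# A fresh account using a long redirect chain and brand impersonation
suspicious = AdSignals(account_age_days=3, prior_violations=0,
                       redirect_hops=4, impersonates_brand=True)
print(risk_score(suspicious))  # high score -> candidate for blocking
```

The point of the sketch is the shift the article describes: rather than matching keywords, the decision aggregates independent behavioral signals (account history, redirect behavior, impersonation), so no single evasion trick defeats detection.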
Source: BleepingComputer