IT NEWS

Phishing campaign abuses Google Cloud services to steal Microsoft 365 logins

Attackers are sending very convincing fake “Google” emails that slip past spam filters, route victims through several trusted Google-owned services, and ultimately lead to a look-alike Microsoft 365 sign-in page designed to harvest usernames and passwords.

Researchers found that cybercriminals used Google Cloud Application Integration’s Send Email feature to send phishing emails from a legitimate Google address: noreply-application-integration@google[.]com.

Google Cloud Application Integration allows users to automate business processes by connecting any application with point-and-click configurations. New customers currently receive free credits, which lowers the barrier to entry and may attract some cybercriminals.

The initial email arrives from what looks like a real Google address and references something routine and familiar, such as a voicemail notification, a task to complete, or permissions to access a document. The email includes a link that points to a genuine Google Cloud Storage URL, so the web address appears to belong to Google and doesn’t look like an obvious fake.

After the first click, you are redirected to another Google‑related domain (googleusercontent[.]com) showing a CAPTCHA or image check. Once you pass the “I’m not a robot check,” you land on what looks like a normal Microsoft 365 sign‑in page, but on close inspection, the web address is not an official Microsoft domain.

Any credentials provided on this site will be captured by the attackers.

The use of Google infrastructure provides the phishers with a higher level of trust from both email filters and the receiving users. This is not a vulnerability, just an abuse of cloud-based services that Google provides.

Google’s response

Google said it has taken action against the activity:

“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration. Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”

We’ve seen several phishing campaigns that abuse trusted workflows from companies like Google, PayPal, DocuSign, and other cloud-based service providers to lend credibility to phishing emails and redirect targets to their credential-harvesting websites.

How to stay safe

Campaigns like these show that some responsibility for spotting phishing emails still rests with the recipient. Besides staying informed, here are some other tips you can follow to stay safe.

  • Always check the actual web address of any login page; if it’s not a genuine Microsoft domain, do not enter credentials.​ Using a password manager will help because they will not auto-fill your details on fake websites.
  • Be cautious of “urgent” emails about voicemails, document shares, or permissions, even if they appear to come from Google or Microsoft.​ Creating urgency is a common tactic by scammers and phishers.
  • Go directly to the service whenever possible. Instead of clicking links in emails, open OneDrive, Teams, or Outlook using your normal bookmark or app.
  • Use multi‑factor authentication (MFA) so that stolen passwords alone are not enough, and regularly review which apps have access to your account and remove anything you don’t recognize.

Pro tip: Malwarebytes Scam Guard can recognize emails like this as scams. You can upload suspicious text, emails, attachments and other files and ask for its opinion. It’s really very good at recognizing scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Disney fined $10m for mislabeling kids’ YouTube videos and violating privacy law

Disney will pay a $10m settlement over allegations that it violated kids’ privacy rights, the Federal Trade Commission (FTC) said this week.

The agreement, first proposed in September 2025, resolves a dispute over Disney’s labeling of child-targeted content on YouTube. The thousands of YouTube videos it targets at kids makes it subject to a US law called the Children’s Online Privacy Protection Act (COPPA). Enacted in 1998, COPPA is designed to protect children under the age of 13 from having their data collected and used online.

That protection matters because children are far less able to understand data collection, advertising, or profiling, and cannot understandingfully consent to it. When COPPA safeguards fail, children may be tracked across videos, served targeted ads, or profiled based on viewing habits, all without parental knowledge or approval.

In 2019, YouTube introduced a policy to help creators comply with COPPA by labeling their content as made for kids (MFK) or not made for kids (NMFK). Content labeled MFK is automatically restricted. For example, it can’t autoplay into related content, appear in the miniplayer, or be added to playlists.

This policy came about after the YouTube’s own painful COPPA-related experience in 2019, when it settled for $170m with the FTC after failing to properly label content directed at children. That still ranks as the biggest ever COPPA settlement by far.

Perhaps the two most important restrictions for videos labeled MFK are these: MFK videos should only autoplay into other kid-appropriate content, preventing (at least in theory) kids from seeing inappropriate content. And advertisers are prohibited from collecting personal data from children watching those videos.

A chastened YouTube warned content creators, including Disney, that they could violate COPPA if they failed to label content correctly. They could do this in two ways: Creators could label entire channels (Disney has about 1,250 of these for its different content brands) or individual videos. So, a channel marked NMFK could still host MFK videos, but those individual videos needed to be labeled correctly.

According to the FTC, Disney’s efforts fell short and plenty of child-targeted videos were incorrectly labeled.

The court complaint stated that Disney applied blanket NMFK labels to entire YouTube channels instead of reviewing videos individually. As a result, some child-targeted videos were incorrectly labeled, allowing data collection and ad targeting that COPPA is meant to prevent. For example, the Pixar channel was labeled NMFK, but showed “very similar” videos from the Pixar Cars channel, which was labeled MFK.

The FTC said YouTube warned Disney in June 2020 that it had reclassified more than 300 of its videos as child-directed across channels including Pixar, Disney Movies, and Walt Disney Animation Studios.

This is not Disney’s first privacy rodeo

Disney has a history of tussles with child privacy laws. In 2011, its Playdom subsidiary paid $3 million (at that point the largest COPPA penalty ever) for collecting data from more than 1.2 million children across 20 virtual world websites. In 2021, Disney also settled a lawsuit that accused it and others of collecting and selling kids’ information via child-focused mobile apps.

In the current case, the FTC voted 3-0 to refer this current case to the Department of Justice, with Commissioners Ferguson, Holyoak, and Meador citing what they described as,

“Disney’s abuse of parents’ trust.”

Under the settlement, Disney must do more than pay up. It also has to notify parents before collecting personal information from children under 13 and obtain parents’ consent to use it. Disney must also review whether individual videos should be labeled as made for kids. However, the FTC provides a get-out clause: Disney won’t have to do this if YouTube implements age assurance technologies that determine a viewer’s age (or age category).

Age assurance is clearly something the FTC is pursuing, saying:

“This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.”


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

ALPRs are recording your daily drive (Lock and Code S06E26)

This week on the Lock and Code podcast…

There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for the non-suspicion of driving a car.

Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars.

Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, like a bumper sticker, or damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed, and at what time, and by linking data from multiple ALPRs, it’s easy to determine a car’s daylong route and, by proxy, it’s owner’s daily routine.

This deeply sensitive information has been exposed in recent history.

In 2024, the US Cybersecurity and Information Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams.

But there’s another concern with ALPRs besides data security and potential vulnerability exploits, and that’s with what they store and how they’re accessed.

ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search.

In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is needing to type a single word in a basic text box. When Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, they learned that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity that is protected by the First Amendment.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALRP-tracking project DeFlock Me, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people.

“License plate readers are a hundred percent used to circumvent the Fourth Amendment because [police] don’t have to see a judge. They don’t have to find probable cause. According to the policies of most police departments, they don’t even have to have reasonable suspicion.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Grok apologizes for creating image of young girls in “sexualized attire”

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. Reportedly, in a separate post on X, Grok described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increased when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, while on one hand we see geo-blocking due to national and state content restrictions, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than what lawmakers are currently trying to regulate. In effect, centralized age‑verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results when cybercriminals would abuse this type of weakness to defraud or extort parents with fabricated explicit content of their young ones. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 29 – January 4)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

How AI made scams more convincing in 2025

This blog is part of a series where we highlight new or fast-evolving threats in consumer security. This one focuses on how AI is being used to design more realistic campaigns, accelerate social engineering, and how AI agents can be used to target individuals.

Most cybercriminals stick with what works. But once a new method proves effective, it spreads quickly—and new trends and types of campaigns follow.

In 2025, the rapid development of Artificial Intelligence (AI) and its use in cybercrime went hand in hand. In general, AI allows criminals to improve the scale, speed, and personalization of social engineering through realistic text, voice, and video. Victims face not only financial loss, but erosion of trust in digital communication and institutions.

Social engineering

Voice cloning

One of the main areas where AI improved was in the area of voice-cloning, which was immediately picked up by scammers. In the past, they would mostly stick to impersonating friends and relatives. In 2025, they went as far as impersonating senior US officials. The targets were predominantly current or former US federal or state government officials and their contacts.

In the course of these campaigns, cybercriminals used test messages as well as AI-generated voice messages. At the same time, they did not abandon the distressed-family angle. A woman in Florida was tricked into handing over thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

AI agents

Agentic AI is the term used for individualized AI agents designed to carry out tasks autonomously. One such task could be to search for publicly available or stolen information about an individual and use that information to compose a very convincing phishing lure.

These agents could also be used to extort victims by matching stolen data with publicly known email addresses or social media accounts, composing messages and sustaining conversations with people who believe a human attacker has direct access to their Social Security number, physical address, credit card details, and more.

Another use we see frequently is AI-assisted vulnerability discovery. These tools are in use by both attackers and defenders. For example, Google uses a project called Big Sleep, which has found several vulnerabilities in the Chrome browser.

Social media

As mentioned in the section on AI agents, combining data posted on social media with data stolen during breaches is a common tactic. Such freely provided data is also a rich harvesting ground for romance scams, sextortion, and holiday scams.

Social media platforms are also widely used to peddle fake products, AI generated disinformation, dangerous goods,  and drop-shipped goods.

Prompt injection

And then there are the vulnerabilities in public AI platforms such as ChatGPT, Perplexity, Claude, and many others. Researchers and criminals alike are still exploring ways to bypass the safeguards intended to limit misuse.

Prompt injection is the general term for when someone inserts carefully crafted input, in the form of an ordinary conversation or data, to nudge or force an AI into doing something it wasn’t meant to do.

Malware campaigns

In some cases, attackers have used AI platforms to write and spread malware. Researchers have documented campaign where attackers leveraged Claude AI to automate the entire attack lifecycle, from initial system compromise through to ransom note generation, targeting sectors such as government, healthcare, and emergency services.

Since early 2024, OpenAI says it has disrupted more than 20 campaigns around the world that attempted to abuse its AI platform for criminal operations and deceptive campaigns.

Looking ahead

AI is amplifying the capabilities of both defenders and attackers. Security teams can use it to automate detection, spot patterns faster, and scale protection. Cybercriminals, meanwhile, are using it to sharpen social engineering, discover vulnerabilities more quickly, and build end-to-end campaigns with minimal effort.

Looking toward 2026, the biggest shift may not be technical but psychological. As AI-generated content becomes harder to distinguish from the real thing, verifying voices, messages, and identities will matter more than ever.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

In 2025, age checks started locking people out of the internet

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.​

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.​

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.​

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.​

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.​

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.​

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumventions, signalling a willingness to attack the workarounds rather than the underlying incentives.​

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.​

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.​

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

2025 exposed the risks we ignored while rushing AI

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Malware in 2025 spread far beyond Windows PCs

This blog is part of a series highlighting new and concerning trends we noticed over the last year. Trends matter because they almost always provide a good indication of what’s coming next.

If there’s one thing that became very clear in 2025, it’s that malware is no longer focused on Windows alone. We’ve seen some major developments, especially in campaigns targeting Android and macOS. Unfortunately, many people still don’t realize that protecting smartphones, tablets, and other connected devices is just as essential as securing their laptops.

Android

Banking Trojans on Android are not new, but their level of sophistication continues to rise. These threats continue to be a major problem in 2025, often disguising themselves as fake apps to steal credentials or stealthily take over devices. A recent wave of advanced banking Trojans, such as Herodotus, can mimic human typing behaviors to evade detection, highlighting just how refined these attacks have become. Android malware also includes adware that aggressively pushes intrusive ads through free apps, degrading both the user experience and overall security.

Several Trojans were found to use overlays, which are fake login screens appearing on top of real banking and cryptocurrency apps. They can read what’s on the screen, so when someone enters their username and password, the malware steals them.

macOS

One of the most notable developments for Mac users was the expansion of the notorious ClickFix campaign to macOS. Early in 2025, I described how criminals used fake CAPTCHA sites and a clipboard hijacker to provide instructions that led visitors ro infect their own machines with the Lumma infostealer.

ClickFix is the name researchers have since given to this type of campaign, where users are tricked into running malicious commands themselves. On macOS, this technique is being used to distribute both AMOS stealers and the Rhadamanthys infostealer.

Cross-platform

Malware developers increasingly use cross-platform languages such as Rust and Go to create malware that can run on Windows, macOS, Linux, mobile, and even Internet of Things (IoT) devices. This enables flexible targeting and expands the number of potential victims. Malware-as-a-Service (MaaS) models are on the rise, offering these tools for rent or purchase on underground markets, further professionalizing malware development and distribution.

Social engineering

iPhone users have been found to be more prone to scams and less conscious about mobile security than Android owners. That brings us to the first line of defense, which has nothing to do with the device or operating system you use: education.

Social engineering exploits human behavior, and knowing what to look out for makes you far less likely to fall for a scam.

Fake apps that turn out to be malware, malicious apps in the Play Store, sextortion, and costly romance scams all prey on basic human emotions. They either go straight for the money or deliver Trojan droppers as the first step toward infecting a device.

We’ve also seen consistent growth in Remote Access Trojan (RAT) activity, often used as an initial infection method. There’s also been a rise in finance-focused attacks, including cryptocurrency and banking-related targets, alongside widespread stealer malware driving data breaches.

What does this mean for 2026?

Taken together, these trends point to a clear shift. Cybercriminals are increasingly focusing on operating systems beyond Windows, combining advanced techniques and social engineering tailored specifically to mobile and macOS.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (December 22 – December 28)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.