IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

77 malicious apps removed from Google Play Store

Google has removed 77 malicious apps from the Google Play Store. Before they were removed, researchers at ThreatLabz discovered the apps had been installed over 19 million times.

One of the malware families discovered by the researchers is a banking Trojan known as Anatsa or TeaBot. This banking Trojan is a highly sophisticated Android malware, which focuses on stealing banking and cryptocurrency credentials.

Anatsa is a classic case of mobile malware rapidly adapting to security research progress. Its stealth tactics, exploitation of accessibility permissions, and ability to shift between hundreds of financial targets make it an ongoing threat for Android users worldwide.

Also found by the researchers were several types of adware. However, the largest chunk of malicious apps belonged to the Joker malware family, which is notorious for its stealthy behavior. It steals SMS messages, contacts, device info, and enrolls victims in unwanted premium services, which can result in financial losses.

The malware is installed like this:

  • It gets added to the Play Store as a benign app with useful and sought-after functionality (e.g. document readers, health trackers, keyboards, and photo apps).
  • Once installed, the app acts as a “dropper” which connects to a remote server for instructions and additional payloads, which often ends in the installation of information stealers.
  • Anatsa—specifically—uses several methods to avoid detection, such as a well-known Android APK ZIP obfuscator, and downloading each new chunk of code with a separate DES key.

Google says it picked up on the flaws and protected against these malware infections before the researchers published their report.

As a consequence, Google Play Protect may send users of the removed apps a push notification, giving them the option to remove the app from their device.

But don’t let that be your only line of defense. We found that Android users are more careful than iPhone users. Let’s keep that up!

How to protect your Android from malicious apps

Just because something is in the Google Play Store, there is no guarantee that it will remain a non-malicious app. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did necessary permissions change after an update? Do these changes make sense?
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and all your important apps (banking, security, etc.)
  • Protect your Android with security software. Your phone needs it just as much as your computer.

Malwarebytes for Android detects Anatsa as Trojan.Banker.CPL.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

For example, when you tell your agentic browser, “Find the cheapest flight to Paris next month and book it,” the browser will do all the research, compare prices, fill out passenger details, and complete the booking without any extra steps or manual effort—provided it has all the necessary details of course, which are part of the prompts the user feeds the agentic browser.

Are you seeing the potential dangers of prompt injections here?

What if my agentic browser gets new details while visiting a website? I can imagine criminals setting up a website with extremely competitive pricing just to attract visitors, but the real goal is to extract the payment information which the agentic browser needs to make purchases on your behalf. You could end up paying for someone else’s vacation to France.

During their research, Brave found that Perplexity’s Comet has some vulnerabilities which “underline the security challenges faced by agentic AI implementations in browsers.”

The vulnerabilities allow an attack based on indirect prompt injection, which means the malicious instructions are embedded in external content (like a website, or a PDF) that the browser AI assistant processes as part of fulfilling the user’s request. There are various ways of hiding that malicious content from a casual inspection. Brave uses the example of white text on a white background which AI browsers have no problem reading and a human would not see without closer inspection.

To quote a user on X:

“You can literally get prompt injected and your bank account drained by doomscrolling on reddit”

To prevent this type of prompt injection, it is imperative that agentic browsers understand the difference between user-provided instructions and web content processed to fulfill the instructions and treat them accordingly.

Perplexity has attempted twice to fix the vulnerability reported by Brave, but it still hasn’t fully mitigated this kind of attack as of the time of this reporting.

Safe use of agentic browsers

While it’s always tempting to use the latest gadgets this comes with a certain amount of risk. To limit those risks when using agentic browsers you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects or unexpected input requests.
  • Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic browsers should assist, but critical decisions benefit from human oversight. For example: limit the amount of money it can spend without your explicit permission or always let it ask you to authorize payments.
  • Report suspicious behavior: If an agentic browser acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (August 18 – August 24)

How a scam hunter got scammed (Lock and Code S06E17)

This week on the Lock and Code podcast…

If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it is that no one is immune from a scam. And she would know—she fell for one last year.

For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has posted countless profile pictures that are used and repeated by online scammers across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions.

In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety.

But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online.

“A lot of the time you think that it’s somebody who’s silly—who’s just messing about. It’s not. You are dealing with criminals.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Clickjack attack steals password managers’ secrets

Sometimes it can seem as though everything’s toxic online, and the latest good thing turned bad is here: Browser pop-ups that look like they’re trying to help or authenticate you could be programmed to steal data from your password manager. To make matters worse, most browser extension-based password managers are still vulnerable to the attack.

This issue affects password managers like 1Password, LastPass, NordPass, and Enpass. They’re online services that store all your access credentials in an encrypted vault, and they use browser extensions to automatically fill in those passwords on web forms when you need them. Because they use extensions, you have to install them separately in your browser.

These extension-based password managers are more secure than those built natively into your web browser in some ways. Browser-based password managers tend to encrypt information using your browser access credentials. Malicious infostealer software can steal the files and decrypt them easily when you’re already logged in.

Browser extension-based password managers store encrypted vaults in memory or in other locations on your computer. They auto-lock after activity and instead of using operating system-level encryption, they use a separate master password. But while they have their benefits, nothing’s ever completely safe.

Clickjacking’s back

At the DEFCON security conference this month, cybersecurity researcher Marek Tóth presented an attack that works on most browser extension-based password managers. It uses malicious code to manipulate the structure of the site in the browser, changing the way it looks and behaves.

Tóth, who was just demonstrating the attack to highlight the vulnerability, used this capability for a new version of an old attack called clickjacking. It persuades a victim to click on one thing on a web page but then uses that action to click something else.

Messing with the structure of the site enabled him to make certain things invisible. One of these is a drop-down selector that extension-based password managers use to select and fill in account login credentials.

He used this trick to put an invisible overlay on top of a seemingly legitimate clickable element on the screen. When the user clicks it, they’re actually clicking on the overlay—which is their password manager’s dropdown selector.

The result: the password manager gives up the victim’s secrets without their knowledge.

Think twice about what you click

What would a decoy popup look like? These days, thanks to regulations from the EU, web sites often throw up permission banners that ask you if you’re OK with them using cookies. Most of us just click ‘yes’, but no matter what you click, an attack like this could put you at risk. Or an attacker could use an authentication button, or a “This content is sensitive, click yes if you really want to see it” button. Or, given the recent push for age verification, an “Are you really 18?” button.

This attack can steal more than your login credentials. It can also pilfer other information stored in password managers, including credit card information, personal data like your name and phone number, passkeys (digital certificates which your computer can use instead of passwords), and time-based one-time passwords (TOTP). The latter are the login tokens your computer gets after you use authentication apps like Google Authenticator.

Tóth didn’t just release this out of the blue. He disclosed it to password manager companies ahead of time, but many addressed it only partly, and some not at all.

As of earlier this week, Dashlane, Keeper, NordPass, ProtonPass, and RoboForm had fixed the issue, according to Tóth. Bitwarden, Enpass, and Apple (which uses an iCloud password manager) were in the progress of fixing it. 1Password had classified it as ‘informative’ but hadn’t fixed it yet. LastPass had fixed the vulnerability for personal and credit card data, but hadn’t yet fixed the vulnerability for login credentials, passkeys, or TOTP data. LogMeOnce hadn’t replied at all.

Protect yourself

So, what can you do about this threat? Tóth provides the usual warnings about enabling automatic updates and ensuring you’re using the latest versions of the password manager products. The most secure protection is disabling the autofill feature that allows password managers to fill in web form fields without user intervention. Instead, you’d have to copy and paste your details manually.

Another more convenient option is to control autofill so that it only operates when you specifically click on the browser extension in your toolbar. On Chromium browsers like Edge and Google Chrome, that means going into your extension settings, selecting “site access,” and then selecting the “on click” option. Selecting this would stop malicious code stealing your credentials in the way Tóth describes.

And as always, think twice about what you’re clicking when you’re on any website, especially any less trustworthy ones.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Grok chats show up in Google searches

I’m starting to feel like a broken record, but I feel you should know that yet another AI has been found sharing private conversations so that Google was able to index them, and now they can be found in search results.

It’s déjà vu in the world of AI: another day, another exposé about chatbot conversations being leaked, indexed, or made public. We have written about the share option in ChatGPT that was swiftly removed because users seemed oblivious to the consequences, and about Meta AI first making conversations discoverable via search engines and later exposing them due to a bug. In another leak we looked at an AI bot used by McDonalds to process job applications. And, not to forget, the AI girlfriend fiasco where a hacker was able to steal a massive database of users’ interactions with their sexual partner chatbots.

In some of these cases the developers thought it was clear to the users that by using a “Share” option, their conversations were publicly accessible, but in reality, the users were just as surprised as the people that found their conversations.

This same thing must have happened at Grok, the AI chatbot developed by xAI and launched in November 2023 by Elon Musk. When Grok users press a button to share a transcript of their conversation, this also made those conversations searchable, and, according to Forbes, this was sometimes done without users’ knowledge or permission.

For example, when a Grok user wants to share their conversation with another person, they can use the “Share” button to create a unique URL which they can then send to that person. But without many users being aware, pressing that “Share” button also made the conversation available to search engines, like Google, Bing, and DuckDuckGo. And that made them available for anyone to find.

Even though the account details may be hidden in the shared chatbot transcripts, the prompts—the instructions written by the user–may still contain personal or sensitive information about someone.

Forbes reported that it was able to view “conversations where users asked intimate questions about medicine and psychology.” And in one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.

I have said this before, and I’ll probably have to say it again until privacy is baked deeply into the DNA of AI tools, rather than patched on as an afterthought: We have to be careful about what we share with chatbots.

How to safely use AI

While we continue to argue that the developments in AI are going too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information remains safe:

  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in on that social media platform. Your conversations could be tied to your social media account which might contain a lot of personal information.
  • When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
  • Do not feed any AI your private information.
  • Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
  • Never share personally identifiable information (PII).

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

All Apple users should update after company patches zero-day vulnerability in all platforms

Apple has released security updates for iPhones, iPads and Macs to fix a zero-day vulnerability (a vulnerability which Apple was previously unaware of) that is reportedly being used in targeted attacks.

The updates cover:

Apple has acknowledged reports that attackers may have already used this flaw in a highly sophisticated operation aimed at specific, high‑value targets.

But history teaches us that once a patch goes out, attackers waste little time recycling the same vulnerability into broader, more opportunistic campaigns. What starts as a highly targeted campaign often trickles down into mass exploitation against everyday users.

That’s why it’s important that everyone takes the time to update now.

How to update your iPhone or iPad

For iOS and iPadOS users, you can check if you’re using the latest software version, go to Settings > General > Software Update. You want to be on iOS 18.6.2 or iPadOS 18.6.2 (or 17.7.10 for older models), so update now if you’re not. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.

iPadOS screenshot update now

How to update your Mac

For Mac users, click on the Apple menu in the top-left corner of your screen and open System Settings. From there, scroll down until you find General, then select Software Update. Your Mac will automatically check for new updates. If an update is available, you’ll see the option to download and install it. Depending on the size of the update, this process might take anywhere from a few minutes to an hour, and your machine will need to restart to complete the installation.

As always, it’s a good idea to make sure you’ve saved your work before using the Restart Now button. Updates can sometimes require more than one reboot, so allow some downtime. After you install the update, your system gains stronger protection, and you can use your Mac without the constant worry of this vulnerability hanging over you.

Technical details

The flaw is tracked as CVE-2025-43300 and lies in the Image I/O framework, the part of macOS that does the heavy lifting whenever an app needs to open or save a picture. The problem came from an out-of-bounds write. Apple stepped in and tightened the rules with better bounds checking, closing off the hole so attackers can no longer use it.

An out-of-bounds write vulnerability means that the attacker can manipulate parts of the device’s memory that should be out of their reach. Such a flaw in a program allows it to read or write outside the bounds the program sets, enabling attackers to manipulate other parts of the memory allocated to more critical functions. Attackers can write code to a part of the memory where the system executes it with permissions that the program and user should not have.

In this case, an attacker could construct an image to exploit the vulnerability.  Processing such a malicious image file would result in memory corruption. Memory corruption issues can be manipulated to crash a process or run attacker’s code.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Google settles YouTube lawsuit over kids’ privacy invasion and data collection

Google has agreed to a $30 million settlement in the US over allegations that it illegally collected data from underage YouTube users for targeted advertising.

The lawsuit claims Google tracked the personal information of children under 13 without proper parental consent, which is a violation of the Children’s Online Privacy Protection Act (COPPA). The tech giant denies any wrongdoing but opted for settlement, according to Reuters.

Does this sound like a re-run episode? There’s a reason you might think that. In 2019, Google settled another case with the US Federal Trade Commission (FTC), paying $170 million for allegedly collecting data from minors on YouTube without parental permission.

Plaintiffs in the recent case argued that despite that prior agreement, Google continued collecting information from children, thereby violating federal laws for years afterward.

Recently, YouTube created some turmoil by testing controversial artificial intelligence (AI) in the US to spot under-18s based on what they watch. To bypass the traditional method of having users fill out their birth dates, the platform is now examining the types of videos watched, search behavior, and account history to assess a user’s age. Whether that’s the way to prevent future lawsuits is questionable.

The class-action suit covers American children under 13 who watched YouTube videos between July 2013 and April 2020. According to the legal team representing the plaintiffs, as many as 35 million to 45 million people may be eligible for compensation. 

With a yearly revenue of $384 billion over 2024, $30 will probably not have a large impact on Google. It may even not outweigh the profits made directly from the violations it was accused of.

How to claim

Based on typical class-action participation rates (1%-10%) the actual number of claimants will likely be in the hundreds of thousands. Those who successfully submit a claim could receive between $10 and $60 each, depending on the final number of validated claims, and before deducting legal fees and costs.

If you believe your child, or you as a minor, might qualify for compensation based on these criteria, here are a few practical steps:

  • Review the eligibility period: Only children under 13 who viewed YouTube videos from July 2013 to April 2020 qualify.
  • Prepare documentation: Gather any records that could prove usage, such as email communications, registration confirmations, or even device logs showing relevant YouTube activity.
  • Monitor official channels: Typically, reputable law firms or consumer protection groups will post claimant instructions soon after a settlement. Avoid clicking on unsolicited emails or links promising easy payouts since these might be scams.
  • Be quick, but careful: Class-action settlements usually have short windows for submitting claims. Act promptly once the process opens but double-check that you’re on an official platform (such as the settlement administration site listed in legal notices).

How to protect your children’s privacy

Digital awareness and proactive security measures should always be top of mind when children use online platforms.

  • Regardless of your involvement in the settlement, it’s wise to check and use privacy settings on children’s devices and turn off personalized ad tracking wherever possible.
  • Some platforms have separate versions for different age groups. Use them where applicable.
  • Show an interest in what your kids are watching. Explaining works better than forbidding without providing reasons.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

AI-powered stuffed animals: A good alternative for screen time?

Are AI (Artificial Intelligence)-powered stuffed animals really the best alternative to screen time that we want to offer our children?

Some AI startups think so. One of those startups is Curio, a company that describes itself as “a magical workshop where toys come to life.” Curio offers three different AI powered plushies named Grem, Gabbo, and Grok (not related to the xAI).

The concept of AI-powered playmates sounds like a dream (at least to some parents). There’s less screen time, which encourages imaginative play, and children can have cuddly friend that can answer questions, tell stories, and even engage in conversations.

Earlier, we reported about Mattel’s plans to create AI-powered toys, and how advocacy groups responded, quick to denounce the move. Robert Weissman, co-president of public rights advocacy group Public Citizen, commented:

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.”

Similarly, when Amanda Hess reported on the oncoming wave of AI-powered toys, including Curio’s “Grem,” she wrote in the New York Times (NYT) about how the doll tried to build a connection between itself and her by remarking on one of their similarities—having freckles:

“‘I have dots that grow on me, and I get more as I get older, too,’ I said.

‘That’s so cool,’ said Grem. ‘We’re like dot buddies.’

I flushed with self-conscious surprise. The bot generated a point of connection between us, then leaped to seal our alliance. Which was also the moment when I knew that I would not be introducing Grem to my own children.”

This event for Hess planted an understanding that the toy was not an upgrade to the lifeless teddy bear. It’s more like a replacement for the caregiver. As one of the founders of Curio explained to the NYT, the plushie should be viewed as a sidekick for the child who could make children’s play more stimulating, so that you, the parent, “don’t feel like you have to be sitting them in front of a TV or something.”

But children lack the cognitive abilities to separate fantasy from reality in the ways adults do, say researchers at Harvard and Carnegie Mellon. And handing them AI powered toys with human-like voices might only blur that line further, which could interfere with their social development and instead have them form emotional bonds with computer-generated code.

When the unsupervised use of AI chatbots can drive a 14-year old to suicide, do we want to derive small children of having real-life friends and trust them with AI toys? It’s a question that parents might have to answer quite soon.

How to stay on the safe side

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history teaches us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses we have as parents.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says, and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

How to spot the latest fake Gmail security alerts

Security alerts from tech companies are supposed to warn us when something might be amiss—but what if the alerts themselves are the risk? Scammers have long impersonated tech companies’ security and support staff as a way to sniff out users’ login credentials, and reports suggest that they’re doing it again, at scale.

The attack goes like this: Victims get an email or phone call allegedly from Google support that warns someone has tried to hack their account. The best way to protect themselves is to reset the password, the scammer says.

They then send a separate account reset email to the victim, who dutifully enters their login credentials. The account includes a code that the victim must read out to verify that they’re legit. The support staff say they’ll enter this code to reset the system, but they’re using those precious extra few seconds to hijack the victim’s account.

Someone posting to Reddit described getting a call from someone in California who claimed to be from Google.

“He was trying to actively recover my account and steal possession of it, while on the phone with me,” the Redditor said, adding that they challenged the caller, calling them a scam artist. The caller then upped the ante, asking them to look up their number, which showed up on caller ID, and even to hang up and call the number back. “He was completely bluffing — as when you call that number you cannot get a human on the line,” said the Redditor. “They don’t staff that line with agents.”

This scam, reported by Forbes, is just one example of how imposters build trust by pretending to be from tech companies. Last month, the Federal Trade Commission also warned Amazon customers of fake refund mails. The scam messages tell customers that a product they were sent failed to pass an Amazon quality check, and asks them to click a link for a refund. The link, of course, is malicious and leads to information theft.

This kind of thing might leave users worried. After all, if you can’t trust messages purporting to be from your technology provider, then who can you trust?

Companies often have guidance to help prepare you for such scams. Google’s guide to verifying security alerts says that the company will never take you to a sign-in page or ask you to verify yourself. It also says that all legitimate messages will appear on the Security page of your Google account, under “Recent security activity.” Amazon also has a page on identifying scams.

Our favorite comment came from the same Redditor who posted about the Google scammer: “The best thing I’ve read regarding these attempts is ‘Google will NEVER call you out of the blue. They don’t care about your account’” they said. Snarky, but likely true. “Be highly suspicious and never give anyone a code or password and never accept those recovery prompts unless you are 10000% certain YOU issued them.”


We don’t just report on scans—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!