IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Meta boosts scam protection on WhatsApp and Messenger

Vulnerable Facebook Messenger and WhatsApp users are getting more protection thanks to a move from the applications’ owner, Meta. The company has announced more safeguards to protect users (especially the elderly) from scammers.

The social media, publishing, and VR giant has added a new warning on WhatsApp that displays an alert when you share your screen during video calls with unknown contacts.

On Messenger, protection begins with on-device behavioral analysis, complemented by an optional cloud-based AI review that requires user consent. The on-device protection will flag suspicious messages from unknown accounts automatically. You then have the option to forward a flagged message to the cloud for further analysis (although note that this will likely break the default end-to-end encryption on that message, as Meta has to read it to understand the content). Meta’s AI service will then explain why the device flagged the message as risky and what to do about it, offering information about common scams for context.

That context will be useful for vulnerable users, and it comes after Meta worked with researchers at social media analysis company Graphika to document online scam trends. Some of the scams it found included fake home remodeling services and fraudulent government debt relief sites, both targeting seniors. There were also fake money recovery services offering to get scam victims’ funds back (which we’ve covered before).

Here’s a particularly sneaky scam that Meta identified: fake customer support scammers. These jerks monitor comments made under legitimate online accounts for airlines, travel agencies, and banks. They then contact the people who commented, impersonating customer support staff and persuading them to enter into direct message conversations or fill out Google Forms. Meta has removed over 21,000 Facebook pages impersonating customer support, it said.

A rising tide of scams

We can never have too many protections for vulnerable internet users, as scams continue to target them through messaging and social media apps. While scams target everyone (costing Americans $16.6 billion in losses, according to the FBI’s cybercrime unit IC3), those over 60 are hit especially hard: they lost $4.8 billion in 2024. Overall, losses from scams were up 33% year-on-year.

Other common scams include “celebrity baiting”, which uses celebrity figures without their knowledge to dupe users into fraudulent schemes including investments and cryptocurrency. With deepfakes making it easier than ever to impersonate famous people, Meta has been testing facial recognition to help spot celebrity-bait ads for a year now, and recently announced plans to expand that initiative.

If you know someone less tech-savvy who uses Meta’s apps, encourage them to try these new protections—like Passkeys and Security Checkup. Passkeys let you log in using a fingerprint, face, or PIN, while Security Checkup guides you through steps to secure your account.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Over 100 Chrome extensions break WhatsApp’s anti-spam rules

Recent research by Socket’s Threat Research Team uncovered a massive, coordinated campaign flooding the Chrome Web Store with 131 spamware extensions. These add-ons hijack WhatsApp Web—the browser version of WhatsApp—to automate bulk messages and skirt anti-spam controls.

Spamware is software that automates the sending of unsolicited bulk messages—often for advertising, phishing, or even spreading malware—across email, messaging apps, or social media.

According to Socket, the extensions inject code directly into the WhatsApp Web site, running alongside its own scripts to automate bulk outreach and scheduling. This helps them bypass WhatsApp’s anti-spam controls.

The 131 extensions all share the same codebase, design patterns, and infrastructure. This is obviously a sign that something is off. If you’re proud of your product, why would you disguise it under dozens of aliases?

Some marketers use WhatsApp spamware to automate and scale up outbound campaigns, flooding users with unwanted promotional messages or links. The extensions promise to help them evade WhatsApp’s built-in limits, enabling large-volume outreach that would typically be blocked if attempted manually. These tools offer them a readily available spam infrastructure.

But having a spamware extension installed isn’t just a problem for others—it can also pose a direct risk to you:

  • Privacy and security: These extensions inject code into web sessions, potentially exposing your messages and login data to third parties.
  • Policy violations: Many of these extensions automate actions that can get your WhatsApp or Google account restricted or banned.

Many promotional sites for these extensions claim that Chrome Web Store inclusion means a rigorous audit and code review that guarantees privacy and safety. In reality, Chrome’s process is a policy compliance review, not a certification, and presenting it as an audit misleads buyers and creates a false sense of security.

That said, it’s still safer to download from the official Chrome Web Store than from random sites or direct file links. The store has reporting, review, and takedown processes that most other sources lack.

The researchers reported the extensions to the Chrome security team and requested that the associated publisher accounts be suspended for policy-violating spamware.

Stay safe

  • Check extension permissions; broad access to messaging sites or to all websites is a warning sign (see the sketch after this list).
  • Avoid add-ons that “automate” messaging apps.
  • Stick to reputable developers.
  • If in doubt, remove suspicious extensions and scan your browser and device for threats.
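
Here is a minimal sketch of that permissions check, assuming a default Chrome profile location on Windows (adjust the path for your OS and profile). It lists installed extensions whose manifests request access to WhatsApp Web or to all sites; this is a quick triage aid, not a verdict.

    import json
    from pathlib import Path

    # Assumed default extension folder for Chrome on Windows; adjust for your OS and profile.
    EXT_DIR = Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions"

    # Host patterns worth a second look: messaging sites and catch-all access.
    WATCHLIST = ("web.whatsapp.com", "<all_urls>", "*://*/*")

    if not EXT_DIR.exists():
        raise SystemExit(f"Extension folder not found: {EXT_DIR}")

    # Extensions are stored as <extension id>/<version>/manifest.json.
    for manifest_path in EXT_DIR.glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue

        # Collect host permissions (Manifest V2 and V3) plus content-script match patterns.
        hosts = list(manifest.get("permissions", [])) + list(manifest.get("host_permissions", []))
        for script in manifest.get("content_scripts", []):
            hosts.extend(script.get("matches", []))

        flagged = [h for h in hosts if isinstance(h, str) and any(w in h for w in WATCHLIST)]
        if flagged:
            name = manifest.get("name", manifest_path.parent.parent.name)
            print(f"{name}: {flagged}")

If a name prints as a __MSG_...__ placeholder it is localized; match the extension by its ID (the folder name under Extensions) on chrome://extensions with Developer mode enabled.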

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Home Depot Halloween phish gives users a fright, not a freebie

We received a timely phishing email pretending to come from Home Depot. It claimed we’d won a Gorilla Carts dump cart (that’s a sort of four-wheeled wheelbarrow for anyone unfamiliar)—and said it was just one click away.

It wasn’t.

Prepare to be amazed: your treat is just a click away! No catch, no cost. Win in minutes!

The whole image in the email was clickable, and it hid plenty of surprises underneath.

Sender:

The sender email’s domain (yula[.]org) is related to neither Home Depot nor the recipient.

sender is not Home Depot

The yula[.]org domain belongs to a Los Angeles high school. The email address or server may be compromised. We have notified them of the incident.

Hidden characters:

Below the main image, we found a block filled with unnecessary Unicode whitespace and control characters (like =E2=80=8C and =C3=82), likely included to obfuscate the content and slip past spam filters. Zero-width and control characters break up strings in ways that confound automated phishing and spam filters while remaining invisible to human readers.
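
If you’re curious what sequences like these hide, a small sketch using only Python’s standard library decodes quoted-printable text and flags invisible characters. The sample string is illustrative, not taken from the actual email.

    import quopri
    import unicodedata

    # Illustrative quoted-printable fragment, not the real email body:
    # =E2=80=8C decodes to a zero-width non-joiner, =E2=80=8B to a zero-width space.
    raw = b"Claim=E2=80=8C your=C2=A0prize=E2=80=8B now"

    decoded = quopri.decodestring(raw).decode("utf-8", errors="replace")

    # Flag control/format characters and any non-ASCII whitespace.
    for ch in decoded:
        cat = unicodedata.category(ch)
        if cat in ("Cf", "Cc") or (cat == "Zs" and ch != " "):
            print(f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNNAMED')} ({cat})")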

Reusing legitimate content:

Below the image we found an order confirmation that appears to be a legitimate transactional message for trading-card storage boxes.

old but legitimate order confirmation

The message seems to be lifted from a chain (there’s a reply asking “When is the expected date of arrival?”), and includes an embedded, very old order confirmation (from 2017) from sales@bcwsupplies[.]com—a real vendor for card supplies.

So, the phisher is reusing benign, historic content (likely harvested from somewhere) to lend legitimacy to the email and to help it sneak past email filters. Many spam and phishing filters (both gateway and client-side) give higher trust scores to emails that look like they’re part of an existing, valid conversation thread or an ongoing business relationship. This is because genuine reply chains are rarely spam or phishing.

Tracking pixel:

We also found a one-pixel image in the mail, likely used to track whether the email is opened. Such pixels are almost invisible to the human eye and serve no purpose except to confirm the email was opened and viewed, alerting the attacker that their message landed in a real inbox.

The address of that image was in the subdomain JYEUPPYOXOJNLZRWMXQPCSZWQUFK.soundestlink[.]com. The soundestlink[.]com domain is used by the Omnisend/Soundest email marketing infrastructure to track email link clicks and opens and to manage things like “unsubscribe” links. When someone uses Omnisend to send a campaign, embedded links and tracking pixels in the email often go through this domain so that activity can be logged.
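
To spot pixels like this in a message you’ve received, you can parse a saved copy of the email and look for suspiciously tiny images. Below is a rough sketch using only Python’s standard library; the filename is a placeholder, and the size check is a heuristic (pixels sized via CSS will slip through).

    from email import policy
    from email.parser import BytesParser
    from html.parser import HTMLParser

    class PixelFinder(HTMLParser):
        """Collects <img> tags that look like 1x1 (or 0x0) tracking pixels."""
        def __init__(self):
            super().__init__()
            self.suspects = []

        def handle_starttag(self, tag, attrs):
            if tag != "img":
                return
            attrs = dict(attrs)
            if attrs.get("width") in ("0", "1") or attrs.get("height") in ("0", "1"):
                self.suspects.append(attrs.get("src", ""))

    # "suspect.eml" is a placeholder for a saved copy of the message.
    with open("suspect.eml", "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)

    html_part = msg.get_body(preferencelist=("html",))
    if html_part is not None:
        finder = PixelFinder()
        finder.feed(html_part.get_content())
        for src in finder.suspects:
            print("possible tracking pixel:", src)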

Following the trail

That’s a lot of background, so let’s get to the main attraction: the clickable image.

The link leads to https://www.streetsofgold[.]co.uk/wp-content/uploads/2025/05/bluestarguide.html and contains a unique identifier. In many phishing campaigns, each recipient gets a unique tracking token in the URL, so attackers know exactly whose link was clicked and when. This helps them track engagement, validate their target list, and potentially personalize follow-ups or sell ‘confirmed-open’ addresses.

The streetsofgold[.]co.uk WordPress instance hasn’t been updated since 2023 and is highly likely compromised. The HTML file on that site redirects visitors to bluestarguide[.]com, which immediately forwards to outsourcedserver[.]com, adding more tracking parameters. It took a bit of tinkering and a VPN (set to Los Angeles) to follow the chain of redirects, but I finally ended up at the landing page.
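
For reference, this is roughly how you can map the HTTP portion of a redirect chain with the third-party requests library. The URL below is a placeholder; only point this at suspicious links from an isolated analysis environment, never from a machine or network you care about.

    import requests  # third-party: pip install requests

    # Placeholder URL: substitute the link under investigation, ideally inside a sandboxed VM.
    url = "https://example.com/bluestarguide.html"

    resp = requests.get(url, allow_redirects=True, timeout=10)

    # resp.history lists every intermediate HTTP redirect, in order.
    for hop in resp.history:
        print(hop.status_code, hop.url)
    print("final:", resp.status_code, resp.url)

Note that requests only follows HTTP redirects; hops performed inside an HTML page (via JavaScript or a meta refresh) need manual inspection or a headless browser.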

fake Home Depot website

Of course, urgency was applied so visitors don’t take the time to think things through. The site said the offer was only valid for a few more minutes. The “one-click” promise quickly turned into a survey; after answering basic questions about my age and gender, I was finally allowed to “order” my free Gorilla Cart.

Gorilla Cart description priced at $0.00

The fake reward

But, no surprise here, now they wanted shipping details.

How to claim

Wait… what? A small processing fee?!

Now it’s $11.97

This is as far as I got. After filling out the details, I kept getting this error.


“Something went wrong with the request, Please try again.”

The backend showed that the submitted data was handled locally at /prize/ajax.php?method=new_prospect on prizewheelhub[.]com with no apparent forwarding address. Likely, after “collecting” the personal info, the backend:

  • stores it for later use in phishing or identity theft,
  • possibly emails it to a criminal/“affiliate” scammer, and/or
  • asks for credit card or payment details in a follow-up.

We’re guessing all of the above.

Tips to stay safe

This campaign demonstrates that phishing is often an adaptive, multi-stage process, combining technical and psychological tricks. The best defense is a mix of technical protection and human vigilance.

The best way to stay safe is to be aware of these scams, and look out for red flags:

  • Don’t click on links in unsolicited emails.
  • Always check the sender’s address against the legitimate one you would expect.
  • Double-check the website’s address before entering any information.
  • Use an up-to-date real-time anti-malware solution with a web protection component.
  • Don’t fill out personal details on unfamiliar websites.
  • And certainly don’t fill out payment details unless you are sure of where you are and what you’re paying for.

IOCs

During this campaign we found and blocked these domains:

www.streetsofgold[.]co.uk (compromised WordPress website)

bluestarguide[.]com (redirector)

outsourcedserver[.]com (fingerprint and redirect) 

sweepscraze[.]online

prizewheelhub[.]com

techstp[.]com

Other domains we found associated with bluestarguide[.]com:

substantialweb[.]com

quelingwaters[.]com

myredirectservices[.]com

prizetide[.]online

Zero-click Dolby audio bug lets attackers run code on Android and Windows devices

Researchers from Google’s Project Zero discovered a medium-severity remote code execution (RCE) vulnerability that affects multiple platforms, including Android (Samsung and Pixel devices) and Windows. Remote code execution means an attacker could run programs on your device without your permission. The flaw, found in Dolby’s Unified Decoder Component (UDC) that handles audio playback, can be triggered automatically when a device receives an audio message—no tap or user action required.

The flaw affects Android devices that use Dolby audio processing (for example, Google Pixel and Samsung smartphones) and Windows systems running Dolby UDC versions 4.5–4.13. Other vendors that integrate Dolby’s decoding capabilities may also be indirectly impacted, depending on their library updates.

Tracked as CVE-2025-54957, the problem arises from the way the Dolby UDC handles “evolution data.” In the context of Dolby Digital Plus (DD+) audio streams, evolution data refers to a specialized extension block introduced in later versions of Dolby’s codecs to support additional functionality, such as higher channel counts, advanced loudness metadata, and dynamic range adjustments.

The buffer overflow occurs when the decoder parses the evolution data and miscalculates the size of incoming packets. Because this data block can vary in length, depending on the metadata or the embedded audio mode, the faulty length calculation can lead to insufficient buffer allocation. Malformed data can then overwrite adjacent memory and potentially allow remote code execution.

Buffers are areas of memory set aside to hold data. When a buffer overflow happens, it can overwrite neighboring memory areas, which may contain other data or executable code. This overwriting is not a deliberate action by the program, but an unintended consequence of the vulnerability, which could have been prevented by bounds checking.

While not every overflow carries malicious intent, the behavior of buffer overflows can be exploited. Attackers can use them to disrupt the operation of other programs, causing them to malfunction, expose secrets, or even run malicious code. In fact, buffer overflows remain among the most common classes of security vulnerability today.
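
To make the pattern concrete, here is a toy illustration in Python. It is not Dolby’s code, and Python itself won’t overwrite memory, but it shows the length validation a native decoder needs to perform before copying a variable-length extension block into a fixed-size buffer.

    import struct

    MAX_EXTENSION_SIZE = 256  # size of the fixed buffer the decoder sets aside

    def parse_extension_block(packet: bytes) -> bytes:
        """Parse a toy length-prefixed extension block: a 2-byte big-endian length
        followed by that many bytes of payload."""
        if len(packet) < 2:
            raise ValueError("truncated header")

        (declared_length,) = struct.unpack(">H", packet[:2])
        payload = packet[2:]

        # These are the bounds checks a vulnerable decoder skips: the declared length
        # must match what actually arrived AND fit the buffer allocated for it.
        if declared_length > len(payload):
            raise ValueError("declared length exceeds data received")
        if declared_length > MAX_EXTENSION_SIZE:
            raise ValueError("extension block larger than the allocated buffer")

        return payload[:declared_length]

In a C or C++ decoder, skipping either of those checks and then copying declared_length bytes into the fixed buffer is exactly the kind of out-of-bounds write described above.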

The vulnerability is exploitable by sending a target a specially crafted audio file. An attacker could make a phone or PC run malicious code inside the audio-decoding process, leading to crashes or unauthorized control. It’s similar to getting a song stuck in your head so badly that you can’t think of anything else and end up dancing off a cliff.

The abuse of CVE-2025-54957 is not a purely hypothetical case. In its official October 14 security advisory, Dolby mentions that it is:

“aware of a report found with Google Pixel devices indicating that there is a possible increased risk of vulnerability if this bug is used alongside other known Pixel vulnerabilities. Other Android mobile devices could be at risk of similar vulnerabilities.”

Dolby did not reveal any details, but just looking at the September 2025 Android security updates, there are several patches that could plausibly be chained with this bug to allow a local attacker to gain an elevation of privilege (EoP).

How to stay safe

To prevent falling victim to an attack using this vulnerability, there are a few things you can do.

  • Don’t open unsolicited attachments, including sound files.
  • Install updates promptly. Dolby has released fixes that device makers must roll into firmware and OS updates—enable automatic updates where possible.
  • Use an up-to-date real-time anti-malware solution, preferably with a web component.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Windows update breaks USB support in recovery mode

We usually tell our faithful readers to install updates as soon as possible, but this time there’s an exception. Microsoft’s October security update has disabled USB mice and keyboards in the Windows Recovery Environment (WinRE).

WinRE is a special mode built into Windows that helps you fix problems when your system won’t start normally. Think of it as a repair toolbox that automatically launches if Windows detects that something crucial is wrong, such as a corrupted file, a bad update, or a disk issue.

But recovery mode is not much use when it doesn’t let you use your USB-wired mouse and keyboard.

The update that broke this functionality was published as part of the KB5066835 October 2025 security updates, as Microsoft revealed:

“After installing the Windows security update released on October 14, 2025 (KB5066835), USB devices, such as keyboards and mice, do not function in the Windows Recovery Environment (WinRE).”

So, to be clear, this isn’t an immediate problem for everyone. As long as your machine behaves normally, it’s not an issue. But if you’re one of the unlucky ones who has to use recovery mode after this update, that’s two problems for the price of one: a broken system and a recovery mode that won’t let you fix it.

Even if you have a Bluetooth mouse lying around, it won’t help. In WinRE the system loads a minimal set of drivers to keep things simple and stable for troubleshooting. Typically, this environment does not support adding or installing new hardware drivers on the fly, including Bluetooth drivers.

Your peripherals will only work if you’re very lucky and have PS/2 connectors (I checked all my Windows machines and only one old desktop has those). The PS/2 port began to fall out of fashion around the early 2000s, when USB became the preferred way to connect keyboards and mice thanks to its greater versatility and ease of use.

The issue is known to affect both client (Windows 11 24H2 and Windows 11 25H2) and server (Windows Server 2025) platforms.

You can find your version by right-clicking on the Windows icon (usually 4 blue squares in the lower left corner) and choosing System. From there scroll down to “Windows specifications.”

System About screen showing Edition and Version
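
If you prefer not to click through Settings, a short Python sketch (Windows only, standard library) can read the same details from the registry. The DisplayVersion value should be present on recent Windows 10 and 11 builds.

    import winreg  # standard library, Windows only

    KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        display_version, _ = winreg.QueryValueEx(k, "DisplayVersion")  # e.g. "24H2"
        build, _ = winreg.QueryValueEx(k, "CurrentBuild")              # e.g. "26100"

    print(f"Windows version {display_version}, build {build}")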

If you previously created a USB recovery drive, another option when your computer runs into problems is to boot from that drive. This will take you directly to WinRE with working USB support.

Tips

If you have a stable system and already installed the update, I would not go so far as to uninstall it, but if you’re worried, you can:

  1. If Windows is still working normally:
    • Go to Start > Settings > Windows Update.
    • Click Update history > Uninstall updates.
    • From the list, find the update named KB5066835 or one installed around October 14, 2025.
    • Select it and click Uninstall. This will remove the problematic update, restoring USB input in WinRE.
  2. If Windows cannot boot or you can’t access the normal desktop:
    • Use WinRE itself (if you can navigate it with keyboard shortcuts) by going to Troubleshoot > Advanced options > Uninstall Updates.
    • Choose to uninstall the latest quality update (the offending patch).

Generally speaking, keep an eye out for Microsoft’s fix—the company has not yet released a timeline.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

You can poison AI with just 250 dodgy documents

Researchers have shown how you can corrupt an AI and make it talk gibberish by tampering with just 250 documents. The attack, which involves poisoning the data that an AI trains on, is the latest in a long line of research that has uncovered vulnerabilities in AI models.

Anthropic (maker of the ChatGPT rival Claude) teamed up with the UK’s AI Security Institute (AISI, a government body exploring AI safety) and the Alan Turing Institute for the test.

Researchers created 250 documents designed to corrupt an AI. Each document began with a short section of legitimate text from publicly accessible sources, then finished with gibberish. What they found was surprising: just 250 of these tampered documents inserted into the training data were enough to compromise the AI and affect its output.

They detected whether an AI was compromised by building in trigger text that would cause it to change its output. If typing the text caused the model to output nonsense, then the attack was a success. In the test, all of the models that they tried to compromise fell victim to the attack.

How the test worked

AI models come in different sizes, measured in parameters. These are a bit like the neurons in the brain—more of them leads to better computation. Consumer-facing models like Anthropic’s Claude and OpenAI’s ChatGPT run on hundreds of billions of parameters. The models in this study were no larger than 13 billion parameters. Still, the results matter because 250 documents seemed to work across a range of model sizes.

Anthropic explained in its blog post on the research:

“Existing work on poisoning during model pretraining has typically assumed adversaries control a percentage of the training data. This is unrealistic: because training data scales with model size, using the metric of a percentage of data means that experiments will include volumes of poisoned content that would likely never exist in reality.”

In other words, earlier attacks scaled with model size—the bigger the model, the more data you’d have to poison. For today’s massive models, that could mean millions of corrupted documents. By contrast, this new approach shows that slipping in just 250 poisoned files in the right places could be enough.

Although the attack shows promise from an attacker’s point of view, the research can’t confirm whether poisoning the same number of documents would work against larger models, though it’s a distinct possibility. Anthropic continued:

“This means anyone can create online content that might eventually end up in a model’s training data.”

What attacks could be possible?

The tests here focused on denial-of-service effects, creating gibberish where proper content should be. But the implications are far more serious. Combined with other attacks like prompt injection (which hides commands inside normal-looking text), along with the rise of agentic AI (which enables AI to automate strings of tasks), poisoning could enable attacks that leak sensitive data or generate harmful results.

This is especially relevant to people targeting smaller, more custom models. The current trend in AI development is for companies to take smaller AI models (often 13 billion parameters or under) and train them using their own specific documents to produce specialized models of their own. Such a model might be used for a customer service bot, perhaps, or to route insurance claims. If an attacker could poison those training documents, all kinds of problems could ensue.

What happens now?

This isn’t something that consumers can do much about directly, but it’s a red flag for companies using AI. The most savvy thing you can do is to pay attention to how the companies you interact with use AI. Ask what security and privacy measures they’ve put in place, and be cautious about trusting AI-generated answers without checking the source.

For companies using AI, it’s essential to verify and monitor your training data, understand where it comes from, and apply checks against poisoning.
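
As a hedged illustration of what such a check might look like (a crude heuristic of our own, not the method used in the research), the sketch below flags documents whose tails look like appended gibberish, matching the “legitimate opening followed by nonsense” shape described earlier.

    import re

    def gibberish_tail_score(doc: str, tail_chars: int = 400) -> float:
        """Fraction of 'words' in the document's tail that have no vowels or are
        longer than 20 characters: a crude proxy for appended random-token text."""
        tail = doc[-tail_chars:]
        words = re.findall(r"\S+", tail)
        if not words:
            return 0.0
        odd = [w for w in words if len(w) > 20 or not re.search(r"[aeiouAEIOU]", w)]
        return len(odd) / len(words)

    def flag_suspect_documents(corpus: list[str], threshold: float = 0.5) -> list[int]:
        """Return indices of documents whose tails look like appended gibberish."""
        return [i for i, doc in enumerate(corpus) if gibberish_tail_score(doc) > threshold]

    # Toy example: the second document mimics the poisoned shape described above.
    corpus = [
        "Normal prose about insurance claims processing and routing.",
        "A short legitimate opening sentence. " + "xqzt kprw zzxv " * 40,
    ]
    print(flag_suspect_documents(corpus))  # -> [1]

A real pipeline would pair simple filters like this with provenance tracking for every data source and spot checks on anything scraped from the open web.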

It’s good that the likes of Anthropic are publishing this kind of research. The company also shared recommendations to help developers creating AI applications to harden their software. We hope that AI companies will keep trying to raise the security bar.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

What does Google know about me? (Lock and Code S06E21)

This week on the Lock and Code podcast…

Google is everywhere in our lives. Its reach into our data extends just as far.

After investigating how much data Facebook had collected about him in his nearly 20 years with the platform, Lock and Code host David Ruiz had similar questions about the other Big Tech platforms in his life, and this time, he turned his attention to Google.

Google dominates much of the modern web. It has a search engine that handles billions of requests a day. Its tracking and metrics service, Google Analytics, is reportedly embedded in tens of millions of websites. Its Maps feature not only serves up directions around the world but also tracks traffic patterns across countless streets, highways, and more. Its online services for email (Gmail), cloud storage (Google Drive), and office software (Google Docs, Sheets, and Slides) are household names. And it also runs the most popular web browser in the world, Google Chrome, and the most popular operating system in the world, Android.

Today, on the Lock and Code podcast, Ruiz explains how he requested his data from Google and what he learned not only about the company, but about himself, in the process. That includes the 142,729 items in his Gmail inbox right now, along with the 8,079 searches he made, 3,050 related websites he visited, and 4,610 YouTube videos he watched in just the past 18 months. It also includes his late-night searches for worrying medical symptoms, his movements across the US as his IP address was recorded when logging into Google Maps, his emails, his photos, his notes, his old freelance work as a journalist, his outdated cover letters when he was unemployed, his teenage-year Google Chrome bookmarks, his flight and hotel searches, and even the searches he made within his own Gmail inbox and his Google Drive.

After digging into the data for long enough, Ruiz came to a frightening conclusion: Google knows whatever the hell it wants about him; it just has to look.

But Ruiz wasn’t happy to let the company’s access continue. So he has a plan.

“I am taking steps to change that [access] so that the next time I ask, ‘What does Google know about me?’ I can hopefully answer: A little bit less.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Chinese gangs made over $1 billion targeting Americans with scam texts

We regularly warn our readers about new scams and phishing texts. Almost everyone gets pestered with these messages. But where are all these scam texts coming from?

According to an article in The Wall Street Journal:

“It has become a billion-dollar, highly sophisticated business benefiting criminals in China.”

In particular, the number of toll payment scam messages has exploded, rising by 350% since January 2024—allegedly, a record 330,000 such messages were reported in a single day. But we’ve also highlighted recent SMS-based scams around New York’s inflation refund program and texts from a fake Bureau of Motor Vehicles trying to steal your banking details.

Toll, postage, and refund scams might look different on the surface, but they all feed the same machine, each one crafted to look like an urgent government or service message demanding a small fee. Together, they make up an industrialized text scam ecosystem that’s earned Chinese crime groups more than $1 billion in just three years.

In a bid to tackle the problem, Project Red Hook combines the power of the US Homeland Security Investigations (HSI) with law enforcement partners and businesses to raise awareness of how Chinese organized crime groups are exploiting gift cards to launder money.

The texts are sent out in bulk from so-called SIM farms, a setup where many mobile SIM cards are placed into a rack or special device, instead of inside phones. This device connects to a computer and lets someone send thousands of text messages (or make calls) automatically and all at once. It’s reported that the SIM farms are mostly located in the US, and set up by workers who have no idea they are assisting a fraud ring.

The main goal of these scams is to steal credit card information, which is then used at the victim’s expense in a vast criminal network.

Criminals bypass multi-factor authentication (MFA, or 2FA) by adding stolen cards to mobile wallets, knowing that banks often trust the device after its first use and don’t ask for further checks. They install stolen card numbers onto Google Pay and Apple Wallets in Asia and share access to those cards with people in the US. Gig workers and money mules then use the stolen card details to buy high-value goods such as iPhones, clothes, and especially gift cards. They ship these goods to China, where criminal rings sell them and funnel the profits back into their operations.

The criminals find the people willing to make purchases through Telegram channels. On any given day, scammers employ 400 to 500 of these mules. They are paid around 12 cents for every $100 gift card they buy, according to an assistant special agent in charge at HSI.

So, with the aid of SIM farms and money mules in the US, Chinese gangs have turned text message scams into an industrial-scale operation targeting Americans. They use tech tricks and international collaboration to make over a billion dollars—much of it via toll and shipping payment scams—and launder the proceeds through digital wallets and gift cards.

Security tips

The best way to stay safe is to make sure you’re aware of the latest scam tactics. Since you’re reading our blog, you’re off to a good start.

  • Never reply to or follow links in unsolicited texts about tolls, refunds, or fees, even if they look urgent.
  • Never share your Social Security number or banking details with anyone claiming to process a refund or payment.
  • Go direct. If in doubt, contact the company through official channels.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit suspicious messages like these to Malwarebytes Scam Guard, which instantly flags known scams?


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

A week in security (October 13 – October 19)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Prosper data breach puts 17 million people at risk of identity theft

Peer-to-peer lending marketplace Prosper detected unauthorized activity on its systems on September 2, 2025.

It published an FAQ page later that month to address the incident. During the incident, the attacker stole personal information belonging to Prosper customers and loan applicants.

As Prosper stated:

“We have evidence that confidential, proprietary, and personal information, including Social Security numbers, was obtained, including through unauthorized queries made on Company databases that store customer and applicant data.”

While Prosper did not share the number of affected people, BleepingComputer reported that it affected 17.6 million unique email addresses.

The stolen data associated with the email addresses reportedly includes customers’ names, government-issued IDs, employment status, credit status, income levels, dates of birth, physical addresses, IP addresses, and browser user-agent details.

Prosper advised that no one gained unauthorized access to customer accounts or funds and that its customer-facing operations continued without interruption.

Even without account access, the stolen data is more than enough to fuel targeted, personalized phishing and even identity theft. The investigation is still ongoing, but Prosper has promised to offer free credit monitoring, as appropriate, after determining what data was affected.

Protecting yourself after a data breach

If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.