IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

School’s AI system mistakes a bag of chips for a gun

An artificial intelligence (AI) detection system at Kenwood High School mistakenly flagged a student’s bag of potato chips as a gun, triggering a police response.

The 16-year-old had finished eating a bag of Doritos and crumpled it up in his pocket when he was done. But the school’s AI-based gun detection system mistook the crumpled foil for a firearm.

Moments later, multiple police cars arrived with officers drawing their weapons, dramatically escalating what should have been a non-event.

The student recalls:

“Police showed up, like eight cop cars, and then they all came out with guns pointed at me talking about getting on the ground. I was putting my hands up like, ‘what’s going on?’ He told me to get on my knees and arrested me and put me in cuffs.”

Systems like these scan images or video feeds for the shape and appearance of weapons. They’re meant to reduce risk, but they’re only as good as the algorithms behind them and the human judgment that follows.

Superintendent Dr. Myriam Rogers told reporters:

“The program is based on human verification and in this case the program did what it was supposed to do which was to signal an alert and for humans to take a look to find out if there was cause for concern in that moment.”

While we understand the need for safety measures against guns on school grounds, this could have been handled better. Eight police cars arriving at the scene and officers with guns drawn will certainly have had an impact on the students who witnessed it, let alone the student that was the focus of their attention.

As school principal Kate Smith said:

“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident.”

AI safety tools are designed to protect students, but they do make mistakes, and when they fail, they can create the very fear they’re meant to prevent. Until these systems can reliably tell the difference between a threat and a harmless snack, schools need stronger guardrails—and a little more human sense.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Around 70 countries sign new UN Cybercrime Convention—but not everyone’s on board

Around 70 countries have signed the new United Nations (UN) Convention against Cybercrime—the first global treaty designed to combat cybercrime through unified international rules and cooperation.

The treaty needs at least 40 UN member states to ratify it before it becomes international law. Once the 40th country does so, it will take another 90 days for the convention to become legally binding for all those who have joined.

Notably, the United States declined to sign. In a brief statement, a State Department spokesperson said:

“The United States continues to review the treaty.”

And there is a lot to review. The convention has sparked significant debate about privacy, sovereignty, and how far law enforcement powers should reach. It was created in response to the rising frequency, sophistication, and cost of cybercrime worldwide—and the growing difficulty of countering it. As cyberattacks increasingly cross borders, international cooperation has become critical.

Supporters say the treaty closes legal loopholes that allow criminals to hide in countries that turn a blind eye. It also aims to solve miscommunication by establishing common definitions of cybercrimes, especially for threats like ransomware, online fraud, and child exploitation.​

But civil rights and digital privacy advocates argue that the treaty expands surveillance and monitoring powers, in turn eroding personal freedoms, and undermines safeguards for privacy and free expression.

Cybersecurity experts fear it could even criminalize legitimate research.

Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation (EFF) stated:

“The latest UN cybercrime treaty draft not only disregards but also worsens our concerns. It perilously broadens its scope beyond the cybercrimes specifically defined in the Convention, encompassing a long list of non-cybercrimes.”

The Foundation for Defense of Democracies (FDD) goes even further, arguing that the treaty could become a platform for authoritarian states to advance ideas of state control over the internet, draw democratic governments into complicity with repression, and weaken key cybersecurity tools on which Americans depend.

“Russia and China are exporting oppression around the world and using the United Nations as legal cover.”

Even Microsoft warned that significant changes would need to be made to the original draft before it could be considered safe:

“We need to ensure that ethical hackers who use their skills to identify vulnerabilities, simulate cyberattacks, and test system defenses are protected. Key criminalization provisions are too vague and do not include a reference to criminal intent, which would ensure activities like penetration testing remain lawful.”

Those changes never came to life. Many observers now say the treaty creates a legal framework that allows monitoring, data storage, and cross-border information sharing without clear data protection. Critics argue it lacks strong, explicit safeguards for due process and human rights, particularly when it comes to cross-border data exchange and extradition.

When you think about it, the idea of having a global system to counter cybercriminals makes sense—criminals don’t care about borders, and the current patchwork of national laws only helps them hide. But to many, the real problem lies in how the treaty defines cybercrime and what governments could do in its name.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults

If you’ve had your fill of philosophical discussions with ChatGPT, CEO Sam Altman has news for you: the service will soon be able to engage in far less highbrow conversations of the sexual kind. That’s right—sexting is coming to ChatGPT. Are we really surprised?

It marks a change in sentiment for the company, which originally banned NSFW content. In an October 14 post on X, Altman said the company had kept ChatGPT “pretty restrictive” to avoid creating mental health issues for vulnerable users. But now, he says, the company has learned from that experience and feels ready to “experiment more.”

“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”

He added that by December, as age-gating expands, ChatGPT will “allow even more, like erotica for verified adults.”

This isn’t a sudden pivot. Things started to change at least as far back as May last year, when the company said in its Model Specification document that it was considering allowing ChatGPT to get a little naughty under the right circumstances.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”

It followed up on that with another statement in a February 2025 update to the document, when it starting mulling a ‘grown-up mode’ while drawing a hard boundary around things like age, sexual deepfakes, and revenge porn.

A massive market

There’s no denying the money behind this move. Analysts believe people paid $2.7 billion worldwide for a little AI companionship last year, with the market expected to balloon to $24.5 billion by 2034—a staggering 24% annual growth rate.

AI “girlfriends” and “boyfriends” already span everything from video-based virtual partners to augmented reality companions that can call you. Even big tech companies have been getting into it, with Elon Musk’s X launching a sexualized virtual companion called Ani that will apparently strip for you if you pester it enough.

People have been getting down and dirty with technology for decades, of course (phone sex lines began in the early 1980s, and cam sites have been a thing for years). But AI changes the scale entirely. There’s no limit to automation, no need for human operators, and no guarantee that the users on the other side know where the boundaries are.

We’re not judging, but the normal rules apply. This stuff is supposed to be for adults, which makes it more important than ever that parents monitor what their kids access online.

Privacy risk

Earlier this month, we covered how two AI companion apps exposed millions of private chat logs, including sexual conversations, after a database misconfiguration—a sobering reminder of how much intimate data these services collect.

It wasn’t the first time, either. Back in 2024, another AI girlfriend platform was breached, leaking users’ fantasies, chat histories, and profile data. That story showed just how vulnerable these apps can be when they mix emotional intimacy with poor security hygiene.

As AI companionship becomes mainstream, breaches like these raise tough questions about how safely this kind of data can ever really be stored.

For adults wanting a little alone time with an AI, remember to take a regular break and a sanity check. While Altman might think that OpenAI has “been able to mitigate the serious mental health issues,” experts still warn that relationships with increasingly lifelike AIs can create very real emotional risks.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

How to set up two factor authentication (2FA) on your Instagram account

Two-factor authentication (2FA) isn’t foolproof, but it is one of the best ways to protect your accounts from hackers.

It adds a small extra step when logging in, but that extra effort pays off. Instagram’s 2FA requires an additional code whenever you try to log in from an unrecognized device or browser—stopping attackers even if they have your password.

Instagram offers multiple 2FA options: text message (SMS), an authentication app (recommended), or a security key.

Instagram 2FA options

Here’s how to enable 2FA on Instagram for Android, iPhone/iPad, and the web.

How to set up 2FA for Instagram on Android

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon (three horizontal lines) in the top right.
  4. Select Accounts Center at the bottom.
  5. Tap Password and security > Two-factor authentication.
  6. Choose your Instagram account.
  7. Select a verification method: Text message (SMS), Authentication app (recommended), or WhatsApp.
    • SMS: Enter your phone number if you haven’t already. Instagram will send you a six-digit code. Enter it to confirm.
    • Authentication app: Choose an app like Google Authenticator or Duo Mobile. Scan the QR code or copy the setup key, then enter the generated code on Instagram.
    • WhatsApp: Enable text message security first, then link your WhatsApp number.
  8. Follow the on-screen instructions to finish setup.

How to set up 2FA for Instagram on iPhone or iPad

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon > Settings > Security > Two-factor authentication.
  4. Tap Get Started.
  5. Choose Authentication app (recommended), Text message, or WhatsApp.
    • Authentication app: Copy the setup key or scan the QR code with your chosen app. Enter the generated code and tap Next.
    • Text message: Turn it on, then enter the six-digit SMS code Instagram sends you.
    • WhatsApp: Enable text message first, then add WhatsApp.
  6. Follow on-screen instructions to complete the setup.

How to set up 2FA for Instagram in a web browser

  1. Go to instagram.com and log in.
  2. Open Accounts Center > Password and security.
  3. Click Two-factor authentication, then choose your account.
    • Note: If your accounts are linked, you can enable 2FA for both Instagram and your overall Meta account here.Instagram accoounts center
  4. Choose your preferred 2FA method and follow the online prompts.

Enable it today

Even the strongest password isn’t enough on its own. 2FA means a thief must have access to your an additional factor to be able to log in to your account, whether that’s a code on a physical device or a security key. That makes it far harder for criminals to break in.

Turn on 2FA for all your important accounts, especially social media and messaging apps. It only takes a few minutes, but it could save you hours—or even days—of recovery later.It’s currently the best password advice we have.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Phishing scam uses fake death notices to trick LastPass users

LastPass has alerted users about a new phishing attack that claims the recipient has died. According to the message, a family member has submitted a death certificate to gain access to the recipient’s password vault. A link in the phishing email, supposedly to stop the request, leads to a fake page that asks for the LastPass user’s master password.

Legacy request opened
Image courtesy of LastPass

“Legacy Request Opened (URGENT IF YOU ARE NOT DECEASED)

A death certificate was uploaded by a family member to regain access to the Lastpass account

If you have not passed away and you believe this is a mistake, please reply to this email with STOP”

LastPass links this campaign to CryptoChameleon (also known as UNC5356), a group that previously targeted cryptocurrency users and platforms with similar social engineering attacks. The same group used LastPass branding in a phishing kit in April 2024.

The phishing attempt exploits the legitimate inheritance process, which is an emergency access feature in LastPass that allows designated contacts request access to a vault if the account holder dies or becomes incapacitated.

Stealing someone’s password manager credentials gives attackers access to every login stored inside. We recently reported on an attempt to steal 1Password credentials.

Lastpass also notes that:

“Several of the phishing sites are clearly intended to target passkeys, reflecting both the increased interest on the part of cybercriminals in passkeys and the increased adoption on the part of consumers.”

Passkeys are a very secure replacement for passwords. They can’t be cracked, guessed or phished, and let you log in easily without having to type a password every time. Most password managers—like LastPass, 1Password, Dashlane, and Bitwarden—now store and sync passkeys across devices.

Because passkeys often protect high-value assets like banking, crypto wallets, password managers, and company accounts—they’ve become an attractive prize for attackers.

Advice for users

While passkeys themselves cannot be phished via simple credential theft, attackers can trick users into:

  • Registering a new passkey on a malicious site or a fake login page
  • Approving fraudulent device syncs or account transfers
  • Disabling passkeys and reverting to weaker login methods, then stealing those fallback credentials

LastPass and other security experts recommend:

  • Never enter your master password on links received via email or text.
  • Understand how passkeys work and keep them safe.
  • Only logging into your password manager via official apps or bookmarks.
  • Be wary of urgent or alarming messages demanding immediate action.
  • Remember that legitimate companies won’t ask for sensitive credentials via email or phone.
  • Use an up-to-date real-time anti-malware solution preferably with a web protection module.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (October 20 – October 26)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Is AI moving faster than its safety net?

You’ve probably noticed that artificial intelligence, or AI, has been everywhere lately—news, phones, apps, even in your browser. It seems like everything suddenly wants to be “powered by AI.“ If it’s not, it’s considered old school and boring. It’s easy to get swept up in the promise: smarter tools, less work, and maybe even a glimpse of the future.

But if we look at some of the things we learned just this week, that glimpse doesn’t only promise good things. There’s a quieter story running alongside the hype that you won’t see in the commercials. It’s the story of how AI’s rapid development is leaving security and privacy struggling to catch up.

And if you make use of AI assistants, chatbots, or those “smart” AI browsers popping up on your screen, those stories are worth your attention.

Are they smarter than us?

Even some of the industry’s biggest names—Steve Wozniak, Sir Richard Branson, and Stuart Russel—are worried that progress in AI is moving too fast for its own good. In an article published by ZDNet, they talk about their fear of “superintelligence,” saying they’re afraid we’ll cross the line from “AI helps humans” to “AI acts beyond human control” before we’ve figured out how to keep it in check.

These scenarios are not about killer robots or takeovers like in the movies. They’re about much smaller, subtler problems that add up. For example, an AI system designed to make customer service more efficient might accidentally share private data because it wasn’t trained to understand what’s confidential. Or an AI tool designed to optimize web traffic might quietly break privacy laws it doesn’t comprehend.

At the scale we use AI—billions of interactions per day—these oversights become serious. The problem isn’t that AI is malicious; it’s that it doesn’t understand consequences, and developers forget to set boundaries.

We’re already struggling to build basic online safety into the AI tools that are replacing our everyday ones.

AI browsers: too smart, too soon

AI browsers—and their newer cousin, the ‘agentic’ browser—do more than just display websites. They can read them, summarize them, and even perform tasks for you.

A browser that can search, write, and even act on your behalf sounds great—but you may want to rethink that. According to research reported by Futurism, some of these tools are being rolled out with deeply worrying security flaws.

Here’s the issue: many AI browsers are just as vulnerable to prompt injection as AI chatbots. The difference is that if you give an AI browser a task, it runs off on its own and you have little control over what it reads or where it goes.

Take Comet, a browser developed by the company Perplexity. Researchers at Brave found that Comet’s “AI assistant” could be tricked into doing harmful things simple because it trusted what it saw online.

In one test, researchers showed the browser a seemingly innocent image. Hidden inside that image was a line of invisible text—something no human would see, but instructions meant only for the AI. The browser followed the hidden commands and ended up opening personal emails and visiting a malicious website.

In short, the AI couldn’t tell the difference between a user’s request and an attacker’s disguised instructions. That is a typical example of a prompt injection attack, which works a bit like phishing for machines. Instead of tricking a person into clicking a bad link, it tricks an AI browser into doing it for you. Without the realization of “oops, maybe I shouldn’t have done that,” it is faster, quiet, and with access you might not even realize it has.

The AI has no idea it did something wrong. It’s just following orders, doing exactly what it was programmed to do. It doesn’t know which instructions are bad because nobody taught it how to tell the difference.

Misery loves company: spoofed AI interfaces

Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces.

According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to genuine ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot. Picture this: you open your browser, see what looks like your trusted AI helper, and ask it a question. But instead of the AI assistant helping you, it’s quietly recording every word you type.

Some of these fake sidebars even persuade users to “verify” credentials or “authorize” a quick fix. This is social engineering in a new disguise. The scammer doesn’t need to lure you away from the page, they just need to convince you that the AI you’re chatting with is legitimate. Once that trust is earned, the damage is done.

And since AI tools are designed to sound helpful, polite, and confident, most people will take their word for it. After all, if an AI browser says, “Don’t worry, this is safe to click,” who are you to argue?

What can we do?

The key problem right now is speed. We keep pushing the limits of what AI can do faster than we can make it safe. The next big problem will be the data these systems are trained on.

As long as we keep chasing the newest features, companies will keep pushing for more options and integrations—whether or not they’re ready. They’ll teach your fridge to track your diet if they think you’ll buy it.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting it with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.

Pro tip: I installed Malwarebytes’ Browser Guard on Comet, and it seems to be working fine so far. I’ll keep you posted on that.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Thousands of online stores at risk as SessionReaper attacks spread

Early September, a security researcher uncovered a new vulnerability in Magento, an open-source e-commerce platform used by thousands of online retailers, and its commercial counterpart Adobe Commerce. It sounds like something straight out of a horror movie: SessionReaper. Behind the cinematic name hides a very real and very dangerous remote code execution flaw, tracked as CVE-2025-54236. It allows attackers to hijack live customer sessions—and, in some setups, even take full control of the server that runs the store.

SessionReaper lives in a part of Magento that handles communication between the store and other services. The bug stems from improper input validation and unsafe handling of serialized data. In plain terms, Magento sometimes trusts data that no web application ever should. This lets an attacker trick the system into accepting a specially crafted “session” file as a legitimate user login—no password required.

What they can do with that login depends on how the store is configured, but researchers at SecPod warn:

“Successful exploitation of SessionReaper can lead to several severe consequences, including security feature bypass, customer account takeover, data theft, fraudulent orders, and potentially remote code execution.”

Session-stealers like this one mean a compromised store can quietly expose a shopper’s personal details, order information, or payment data to attackers. In some cases, criminals inject “skimmer” code that harvests card details as you type them in or reroutes you to phishing sites designed to look like legitimate checkouts.

A patch for the vulnerability was released on September 9, but six weeks later, roughly 62% of Magento stores reportedly remain unpatched. After someone published a proof-of-concept (PoC), cybercriminals quickly built working exploits and attacks are now spreading fast. So, while SessionReaper isn’t malware a shopper can “catch” directly, it can turn even trusted stores into possible data-theft traps until the site owners patch.

Researchers at Sansec, whose sensors monitor e-commerce attacks worldwide, report seeing more than 250 Magento stores compromised within 24 hours of the exploit code going public.

How consumers can stay safe

Web store owners should patch their Magento sites immediately. Unfortunately, regular shoppers have almost no way to tell whether a store is still vulnerable or already secured.

From a consumer’s point of view, SessionReaper is another reminder that even trusted stores can quietly become unsafe between page loads. When a platform as widespread as Magento is under active attack, the best defense often lies outside the store itself.

  • Watch out for odd behavior on a site or missing valid HTTPS, and don’t enter payment or personal data if something seems suspicious.
  • Where possible, opt for checkout options that use third-party gateways (like PayPal), as they’re isolated from the store’s servers.
  • Report suspicious e-commerce behavior to the site operator or your payment provider straight away.
  • Shop on reputable sites whenever you can, or check the reviews and reputation of any new sellers before buying.
  • Make sure your operating system, browser, and anti-malware software are up to date to protect against the latest threats.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Apple may have to open its walled garden to outside app stores

The UK’s Competition and Markets Authority (CMA) ruled that both Google and Apple have a “strategic market status.” Basically, they have a monopoly over their respective mobile platforms.

As a result, Apple may soon be required to allow rival app stores on iPhones—a major shift for the smartphone industry. Between them, Apple and Google power nearly all UK mobile devices, according to the CMA:

“Around 90–100% of UK mobile devices run on Apple or Google’s mobile platforms.”

According to analyst data cited by the BBC, around 48.5% of British consumers use iPhones, with most of the rest on Android devices. 

If enforced, this change will reshape the experience of most of the smartphone users in the UK, and we have heard similar noises coming from the EU.

Apple has pushed back, warning that EU-style regulation could limit access to new features. The company points to Apple Intelligence, which has been rolled out in other parts of the world but is not available in the EU—something Apple blames on heavy regulation.

For app developers, the move could have profound effects. Smaller software makers, often frustrated by Apple’s 15–30% commission on in-app purchases, might gain alternative distribution routes. Competing app stores might offer lower fees or more flexible rules, making the app ecosystem more diverse, and potentially more affordable for users.

Apple, however, argues that relaxing control could hurt users by weakening privacy standards and delaying feature updates.

Security and privacy

Allowing multiple app stores will undeniably reshape the iPhone’s security model. Apple’s current “closed system” approach minimizes risk by funneling all apps through its vetted App Store, where every submission goes through security reviews and malware screening. This walled approach has kept large-scale malware incidents on iPhones relatively rare compared to Android.

It remains to be seen whether competing app stores will hold the same standards or have the resources to enforce them. Users can expect more variability in safety practices, which could increase exposure to fraudulent or malware-infested software.

On the other hand, we may also see app stores that prioritize safety or cater to a more privacy-focused audience. So, it doesn’t have to be all bad—but Apple has a point when it warns about higher risk.

For most users, the safest approach will be to stick with Apple’s store or other trusted marketplaces, at least in the early days. Android’s history shows that third-party app stores often become hotspots for adware and phishing, so security education is key. Regulators and developers will need to work together to make the review process and data-handling practices transparent.

There is no set timeline for when or how the CMA will enforce these changes, or how far Apple will go to comply. The company could challenge the decision or introduce limited reforms. Either way, it’s a major step toward redefining how trust, privacy, and control are balanced in the mobile age.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Meta boosts scam protection on WhatsApp and Messenger

Vulnerable Facebook Messenger and WhatsApp users are getting more protection thanks to a move from the applications’ owner, Meta. The company has announced more safeguards to protect users (especially the elderly) from scammers.

The social media, publishing, and VR giant has added a new warning on WhatsApp that displays an alert when you share your screen during video calls with unknown contacts.

On Messenger, protection begins with on-device behavioral analysis, complemented by an optional cloud-based AI review that requires user consent. The on-device protection will flag suspicious messages from unknown accounts automatically. You then have the option to forward it to the cloud for further analysis (although note that this will likely break the default end-to-end encryption on that message, as Meta has to read it to understand the content). Meta’s AI service will then explain why the device interpreted the message as risky and what to do about it, offering information about common scams to provide context.

That context will be useful for vulnerable users, and it comes after Meta worked with researchers at social media analysis company Graphika to document online scam trends. Some of the scams it found included fake home remodeling services, and fraudulent government debt relief sites, both targeting seniors. There were also fake money recovery services offering to get scam victims’ funds back (which we’ve covered before).

Here’s a particularly sneaky scam that Meta identified: fake customer support scammers. These jerks monitor comments made under legitimate online accounts for airlines, travel agencies, and banks. They then contact the people who commented, impersonating customer support staff and persuading them to enter into direct message conversations or fill out Google Forms. Meta has removed over 21,000 Facebook pages impersonating customer support, it said.

A rising tide of scams

We can never have too many protections for vulnerable internet users, as scams continue to target them through messaging and social media apps. While scams target everyone (costing Americans $16.6 billion in losses, according to the FBI’s cybercrime unit IC3), those over 60 are hit especially hard. They lost $4.8 billion in 2024. Overall, losses from scams were up 33% across the board year-on-year.

Other common scams include “celebrity baiting”, which uses celebrity figures without their knowledge to dupe users into fraudulent schemes including investments and cryptocurrency. With deepfakes making it easier than ever to impersonate famous people, Meta has been testing facial recognition to help spot celebrity-bait ads for a year now, and recently announced plans to expand that initiative.

If you know someone less tech-savvy who uses Meta’s apps, encourage them to try these new protections—like Passkeys and Security Checkup. Passkeys let you log in using a fingerprint, face, or PIN, while Security Checkup guides you through steps to secure your account.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!