IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Grok chats show up in Google searches

I’m starting to feel like a broken record, but you should know that yet another AI has been found exposing private conversations in a way that allowed Google to index them, and now they can be found in search results.

It’s déjà vu in the world of AI: another day, another exposé about chatbot conversations being leaked, indexed, or made public. We have written about the share option in ChatGPT that was swiftly removed because users seemed oblivious to the consequences, and about Meta AI first making conversations discoverable via search engines and later exposing them due to a bug. In another leak we looked at an AI bot used by McDonald’s to process job applications. And, not to forget, the AI girlfriend fiasco, where a hacker was able to steal a massive database of users’ interactions with their sexual partner chatbots.

In some of these cases the developers thought it was clear to the users that by using a “Share” option, their conversations became publicly accessible, but in reality, the users were just as surprised as the people who found their conversations.

The same thing appears to have happened at Grok, the AI chatbot developed by xAI and launched by Elon Musk in November 2023. When Grok users pressed a button to share a transcript of their conversation, it also made those conversations searchable, and, according to Forbes, this sometimes happened without users’ knowledge or permission.

For example, when a Grok user wants to share their conversation with another person, they can use the “Share” button to create a unique URL which they can then send to that person. But, unbeknownst to many users, pressing that “Share” button also made the conversation available to search engines like Google, Bing, and DuckDuckGo. And that made it available for anyone to find.
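Mechanically, a share link is just a public page at an unguessable URL. A minimal sketch of the pattern (hypothetical and illustrative, not xAI’s actual implementation):

```python
import uuid

def make_share_url(base: str = "https://example.com/share") -> str:
    """Mint an unguessable share link for a chat transcript.

    Hypothetical sketch, not xAI's implementation. The catch: an
    unguessable URL is not a private one. Once the page is live and
    linked anywhere, search engine crawlers can find and index it
    unless the server also sends a noindex directive.
    """
    token = uuid.uuid4().hex  # 128-bit random token
    return f"{base}/{token}"
```

In other words, “only people with the link can see it” quietly becomes “anyone can find it” the moment a crawler encounters that link.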

Even though the account details may be hidden in the shared chatbot transcripts, the prompts—the instructions written by the user—may still contain personal or sensitive information about someone.

Forbes reported that it was able to view “conversations where users asked intimate questions about medicine and psychology.” And in one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.

I have said this before, and I’ll probably have to say it again until privacy is baked deeply into the DNA of AI tools, rather than patched on as an afterthought: We have to be careful about what we share with chatbots.

How to safely use AI

While we continue to argue that the developments in AI are going too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information remains safe:

  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in on that social media platform. Your conversations could be tied to your social media account which might contain a lot of personal information.
  • When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
  • Do not feed any AI your private information.
  • Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
  • Never share personally identifiable information (PII).

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

All Apple users should update after company patches zero-day vulnerability in all platforms

Apple has released security updates for iPhones, iPads and Macs to fix a zero-day vulnerability (a vulnerability which Apple was previously unaware of) that is reportedly being used in targeted attacks.

The updates cover:

  • iOS 18.6.2 and iPadOS 18.6.2
  • iPadOS 17.7.10 for older iPad models
  • the corresponding macOS versions

Apple has acknowledged reports that attackers may have already used this flaw in a highly sophisticated operation aimed at specific, high‑value targets.

But history teaches us that once a patch goes out, attackers waste little time recycling the same vulnerability into broader, more opportunistic campaigns. What starts as a highly targeted campaign often trickles down into mass exploitation against everyday users.

That’s why it’s important that everyone takes the time to update now.

How to update your iPhone or iPad

For iOS and iPadOS users: to check whether you’re using the latest software version, go to Settings > General > Software Update. You want to be on iOS 18.6.2 or iPadOS 18.6.2 (or 17.7.10 for older models), so update now if you’re not. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.


How to update your Mac

For Mac users, click on the Apple menu in the top-left corner of your screen and open System Settings. From there, scroll down until you find General, then select Software Update. Your Mac will automatically check for new updates. If an update is available, you’ll see the option to download and install it. Depending on the size of the update, this process might take anywhere from a few minutes to an hour, and your machine will need to restart to complete the installation.

As always, it’s a good idea to make sure you’ve saved your work before using the Restart Now button. Updates can sometimes require more than one reboot, so allow some downtime. After you install the update, your system gains stronger protection, and you can use your Mac without the constant worry of this vulnerability hanging over you.

Technical details

The flaw is tracked as CVE-2025-43300 and lies in the Image I/O framework, the part of Apple’s operating systems that does the heavy lifting whenever an app needs to open or save a picture. The root cause was an out-of-bounds write, which Apple fixed with improved bounds checking, closing off the hole so attackers can no longer use it.

An out-of-bounds write vulnerability means that an attacker can manipulate parts of the device’s memory that should be out of their reach. Such a flaw lets a program read or write outside the memory bounds set for it, enabling attackers to tamper with memory allocated to more critical functions. In the worst case, attackers can write code to a part of memory where the system will execute it, with permissions that the program and user should not have.

In this case, an attacker could craft an image that exploits the vulnerability. Processing such a malicious image file results in memory corruption, which attackers can leverage to crash a process or run their own code.
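As a contrived sketch (not Apple’s code), the failure mode looks like a parser trusting an attacker-controlled length field, and the fix looks like the kind of bounds check Apple describes:

```python
import struct

ROW_SIZE = 64  # fixed-size pixel buffer in our toy image format

def parse_image_row(blob: bytes) -> bytes:
    """Parse a toy image row: 4-byte big-endian length, then pixel data.

    Contrived illustration of the CVE-2025-43300 failure mode, not
    Apple's code. The declared length is attacker-controlled; without
    the check below, a C-style parser would copy that many bytes past
    the end of its 64-byte buffer (an out-of-bounds write).
    """
    (declared_len,) = struct.unpack_from(">I", blob, 0)
    if declared_len > ROW_SIZE:  # reject malformed input instead of corrupting memory
        raise ValueError("declared length exceeds row buffer")
    return blob[4:4 + declared_len]
```

Python raises an exception here rather than corrupting memory, but the validation logic is the same idea: never trust a size field from untrusted input.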


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Google settles YouTube lawsuit over kids’ privacy invasion and data collection

Google has agreed to a $30 million settlement in the US over allegations that it illegally collected data from underage YouTube users for targeted advertising.

The lawsuit claims Google tracked the personal information of children under 13 without proper parental consent, which is a violation of the Children’s Online Privacy Protection Act (COPPA). The tech giant denies any wrongdoing but opted for settlement, according to Reuters.

Does this sound like a re-run episode? There’s a reason you might think that. In 2019, Google settled another case with the US Federal Trade Commission (FTC), paying $170 million for allegedly collecting data from minors on YouTube without parental permission.

Plaintiffs in the recent case argued that despite that prior agreement, Google continued collecting information from children, thereby violating federal laws for years afterward.

Recently, YouTube created some turmoil by testing controversial artificial intelligence (AI) in the US to spot under-18s based on what they watch. To bypass the traditional method of having users fill out their birth dates, the platform is now examining the types of videos watched, search behavior, and account history to assess a user’s age. Whether that’s the way to prevent future lawsuits is questionable.

The class-action suit covers American children under 13 who watched YouTube videos between July 2013 and April 2020. According to the legal team representing the plaintiffs, as many as 35 million to 45 million people may be eligible for compensation. 

With yearly revenue of $384 billion in 2024, the $30 million will probably not have a large impact on Google. It may not even outweigh the profits made directly from the violations it was accused of.

How to claim

Based on typical class-action participation rates (1%-10%), the actual number of claimants will likely be in the hundreds of thousands. Those who successfully submit a claim could receive between $10 and $60 each, depending on the final number of validated claims, and before deducting legal fees and costs.
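A rough sketch of how the per-claimant share scales with the number of validated claims, using the $30 million figure above (gross amounts, before legal fees and administration costs):

```python
# Gross per-claimant payout for a $30M settlement at different claim
# counts (before legal fees and costs are deducted).
SETTLEMENT = 30_000_000

def payout(claimants: int) -> float:
    return round(SETTLEMENT / claimants, 2)

for claimants in (300_000, 500_000, 1_000_000, 3_000_000):
    print(f"{claimants:>9,} claims -> ${payout(claimants):.2f} each")
```

The more people file, the smaller each share gets, which is why the quoted range depends so heavily on the final number of validated claims.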

If you believe your child, or you as a minor, might qualify for compensation based on these criteria, here are a few practical steps:

  • Review the eligibility period: Only children under 13 who viewed YouTube videos from July 2013 to April 2020 qualify.
  • Prepare documentation: Gather any records that could prove usage, such as email communications, registration confirmations, or even device logs showing relevant YouTube activity.
  • Monitor official channels: Typically, reputable law firms or consumer protection groups will post claimant instructions soon after a settlement. Avoid clicking on unsolicited emails or links promising easy payouts since these might be scams.
  • Be quick, but careful: Class-action settlements usually have short windows for submitting claims. Act promptly once the process opens but double-check that you’re on an official platform (such as the settlement administration site listed in legal notices).

How to protect your children’s privacy

Digital awareness and proactive security measures should always be top of mind when children use online platforms.

  • Regardless of your involvement in the settlement, it’s wise to check and use privacy settings on children’s devices and turn off personalized ad tracking wherever possible.
  • Some platforms have separate versions for different age groups. Use them where applicable.
  • Show an interest in what your kids are watching. Explaining works better than forbidding without providing reasons.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

AI-powered stuffed animals: A good alternative for screen time?

Are AI (Artificial Intelligence)-powered stuffed animals really the best alternative to screen time that we want to offer our children?

Some AI startups think so. One of them is Curio, a company that describes itself as “a magical workshop where toys come to life.” Curio offers three different AI-powered plushies named Grem, Gabbo, and Grok (no relation to xAI’s chatbot).

The concept of AI-powered playmates sounds like a dream (at least to some parents). There’s less screen time, which encourages imaginative play, and children get a cuddly friend that can answer questions, tell stories, and even engage in conversations.

Earlier, we reported about Mattel’s plans to create AI-powered toys, and how advocacy groups responded, quick to denounce the move. Robert Weissman, co-president of public rights advocacy group Public Citizen, commented:

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.”

Similarly, when Amanda Hess reported on the oncoming wave of AI-powered toys, including Curio’s “Grem,” she wrote in the New York Times (NYT) about how the doll tried to build a connection between itself and her by remarking on one of their similarities—having freckles:

“‘I have dots that grow on me, and I get more as I get older, too,’ I said.

‘That’s so cool,’ said Grem. ‘We’re like dot buddies.’

I flushed with self-conscious surprise. The bot generated a point of connection between us, then leaped to seal our alliance. Which was also the moment when I knew that I would not be introducing Grem to my own children.”

For Hess, that moment crystallized the understanding that the toy was not an upgrade to the lifeless teddy bear. It’s more like a replacement for the caregiver. As one of the founders of Curio explained to the NYT, the plushie should be viewed as a sidekick for the child who could make children’s play more stimulating, so that you, the parent, “don’t feel like you have to be sitting them in front of a TV or something.”

But children lack the cognitive abilities to separate fantasy from reality in the ways adults do, say researchers at Harvard and Carnegie Mellon. And handing them AI-powered toys with human-like voices might only blur that line further, which could interfere with their social development and instead have them form emotional bonds with computer-generated code.

When the unsupervised use of AI chatbots can drive a 14-year-old to suicide, do we want to deprive small children of real-life friends and entrust them to AI toys instead? It’s a question that parents might have to answer quite soon.

How to stay on the safe side

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history teaches us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses we have as parents.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says, and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

How to spot the latest fake Gmail security alerts

Security alerts from tech companies are supposed to warn us when something might be amiss—but what if the alerts themselves are the risk? Scammers have long impersonated tech companies’ security and support staff as a way to sniff out users’ login credentials, and reports suggest that they’re doing it again, at scale.

The attack goes like this: Victims get an email or phone call allegedly from Google support that warns someone has tried to hack their account. The best way to protect themselves is to reset the password, the scammer says.

They then send a separate account reset email to the victim, who dutifully enters their login credentials. That email includes a code the victim must read out to verify that they’re legit. The “support staff” say they’ll enter this code to reset the system, but they’re really using those precious extra few seconds to hijack the victim’s account.

Someone posting to Reddit described getting a call from someone in California who claimed to be from Google.

“He was trying to actively recover my account and steal possession of it, while on the phone with me,” the Redditor said, adding that they challenged the caller, calling them a scam artist. The caller then upped the ante, asking them to look up their number, which showed up on caller ID, and even to hang up and call the number back. “He was completely bluffing — as when you call that number you cannot get a human on the line,” said the Redditor. “They don’t staff that line with agents.”

This scam, reported by Forbes, is just one example of how imposters build trust by pretending to be from tech companies. Last month, the Federal Trade Commission also warned Amazon customers of fake refund emails. The scam messages tell customers that a product they were sent failed an Amazon quality check and ask them to click a link for a refund. The link, of course, is malicious and leads to information theft.

This kind of thing might leave users worried. After all, if you can’t trust messages purporting to be from your technology provider, then who can you trust?

Companies often have guidance to help prepare you for such scams. Google’s guide to verifying security alerts says that the company will never take you to a sign-in page or ask you to verify yourself. It also says that all legitimate messages will appear on the Security page of your Google account, under “Recent security activity.” Amazon also has a page on identifying scams.

Our favorite comment came from the same Redditor who posted about the Google scammer: “The best thing I’ve read regarding these attempts is ‘Google will NEVER call you out of the blue. They don’t care about your account’” they said. Snarky, but likely true. “Be highly suspicious and never give anyone a code or password and never accept those recovery prompts unless you are 10000% certain YOU issued them.”


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Instagram Map: What is it and how do I control it?

Instagram Map is a new feature—for Instagram, anyway—that users may have enabled without being fully aware of the consequences. The Map feature launched in the US on August 6, 2025, and a global rollout is reportedly planned for “soon.” As of mid-August 2025, not all users outside the US, especially in Europe, have received the feature yet. Community reports confirm that the rollout is happening in stages: some users in Germany and other locations already have access, but many do not. It’s typical for Instagram features to take several weeks to reach all accounts and regions.

Basically, Instagram Map allows you to share your current location with your friends. But, already, there’s the first caveat: Are all your Instagram “friends” real friends? As in, the kind that you’d like to run into whenever they feel like it?

Add to that the (for me) always-nagging feeling that Meta will learn even more about you and your behavior, and you may want to reconsider your initial choice.

If you have been careful in selecting your friends, then it’s fine—good for you! If not, you may want to narrow the group that can see your location down to “Close friends” or select a few that you trust. Or, you could consider turning sharing off completely.

What to do the first time you use Instagram Map

  1. Open Instagram and go to your Direct Messages (DM) inbox.
  2. Find and tap the Map icon at the top (near Notes or short posts section in your inbox).
  3. If you’ve never used Map before, you’ll get a prompt explaining how it works and asking for location access. Accept if you want to use it.
  4. When prompted, choose Who can see your location. Your choices:
    • Friends: Followers you follow back.
    • Close Friends: Your preselected Close Friends list.
    • Only these friends: Select specific people manually.
    • No one: Turn location sharing off entirely (still shows tagged posts).
  5. Select your preferred group and tap Share now.
Instagram Map share options

How to make changes later

If you want to check your share settings or change them at a later point:

  1. Tap the Map feature in your DM inbox.
  2. Click the Settings icon (gear wheel) at the upper right.
  3. Choose the group to share with (Friends, Close Friends, Only these friends, or No one).
  4. Tap Done.

You can also add specific locations to a “Hidden Places” list so your real-time location never appears on the map when you visit those places. Here’s how:

  1. Open the Map feature via your DM inbox.
  2. Tap the Settings icon (gear wheel) at the top right.
  3. Tap the three-dot menu in the top corner of the settings menu.
  4. Drag a pin on the map to mark a place you want hidden.
  5. Use the slider to set a radius, which determines how large the hidden zone is.
  6. Type in the name of the place and tap Done.

Sharing your location on Instagram Map is not enabled unless you actively choose to share it. What will appear on the map are any posts with a tagged location, something that’s an option every time you add photos and videos to your Stories or your grid. So, regardless of whether you choose to share your location, you can use the map to explore location-based content.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (August 11 – August 17)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Italian hotels breached for tens of thousands of scanned IDs

The Computer Emergency Response Team (CERT) for Italy’s “Agenzia per l’Italia Digitale” (AGID) issued a warning that cybercriminals are selling stolen identity documents from hotels operating in Italy.

This summer, a criminal hacker group named “mydocs” infiltrated the booking systems of at least ten Italian hotels, stealing high-resolution scans of ID documents, including passports and national ID cards, provided by guests during check-in. These documents, amounting to tens of thousands in number (potentially up to 100,000), have been offered for sale on dark web forums at prices ranging from $1,000 to $10,000. Both Italian and foreign guests are affected, with luxury and city hotels among the breached venues.

While the incident appears to have taken place in June and July of this year, it is not clear how far back the hotels’ retained scans go, so you could be at risk even if you stayed at one of these hotels some time ago. AGID did not mention the hotels by name, but we hope the hotels will take it upon themselves to warn the people whose ID information may be for sale.

AGID warned that the stolen data could be used for:

  • Fraudulent creation of new documents.
  • Opening bank accounts or lines of credit.
  • Social engineering attacks against individuals and their contacts.
  • Digital identity theft, with serious legal and financial implications.

Authorities advise guests to contact the hotels where they stayed if they suspect their data was compromised and to stay alert for scams or phishing attempts using their information.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

National Public Data returns after massive Social Security Number leak

Remember that data broker nobody had ever heard of, which still managed to leak a database containing the data of some 2.9 billion people? It’s back, and this time with a search function.

National Public Data suffered an alleged breach in 2024 of a database that, it turned out, carried 272 million unique Social Security numbers (SSNs). Granted, there are limits to the safety of using a nine-digit ID in 2025, but the news that the folks at National Public Data have decided it’s time for a comeback made me slightly nauseous.

After the fallout of the aforementioned leak and others, the site shut down in December amid a wave of lawsuits against parent company Jerico Pictures. But the people at PCMag noticed that the domain nationalpublicdata[.]com has been brought back to life.

In an update page about the security incident, the new owner states:

“Jerico Pictures, Inc., the Florida company that suffered a major data breach in 2024, no longer operates this site. We have zero affiliation with them.”

Data brokers scrape, collect, and aggregate data, combining disparate details into comprehensive dossiers. Sometimes your information ends up there because of public records. And sometimes it’s the result of poor security, or, as we see a lot unfortunately, a leak, ransomware attack, or other type of data breach.

On their “About us” page the new owners note:

“We collect the data you find on our people search engine from publicly available sources, including federal, state, and local government agencies, social media pages, property ownership databases, and other reliable platforms. After the data is in our hands, we verify and filter it to make sure it is indeed accurate and up-to-date.”

Their goal:

“National Public Data is a people search website where you can find accurate information about US citizens. Our database gives you access to millions of public records to help you find the data you need the most for various purposes. Privacy, speed, and ease of use are at the heart of what we do. Start your search today and discover what you can learn.”

If you live in the US, it might be prudent to check what information they have about you and where they might have scraped that from. Did you know you can have a lot of that information removed?

In the meantime, the “info spillers” are back, and they seem to be making up for lost time. The real question isn’t if your data is at risk. It’s what you’re going to do about it now.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Romance scammers in Ghana charged with more than $100 million in theft

The Department of Justice (DOJ) announced the extradition and indictment of four Ghanaian nationals for allegedly stealing more than $100 million, mainly through romance scams and business email compromise.

According to a report from Comparitech, nearly 59,000 Americans fell victim to romance scams in 2024, losing an estimated $697.3 million. Our own research from last year showed that 10% of romance scam victims lose more than $10,000. The overall true cost is believed to be vastly higher than official reports, as many cases go unreported due to victims’ shame and difficulty tracing scammers.

Many of the scammers work offshore from countries where the chances of them getting apprehended are slim. But US Attorney Jay Clayton stated:

“Offshore scammers should know that we, the FBI, and our law enforcement partners will work around the world to combat online fraud and bring perpetrators to justice.”

The four men are accused of being leaders of a criminal organization based in Ghana which committed romance scams and business email compromises against individuals and businesses located across the US.

Their victims were mostly older men and women tricked into believing they were engaging in a romantic relationship online. These “relationships” sometimes start with a harmless text or a direct message on social media or dating apps. Soon the scammer will suggest taking the conversation to a “more secure” platform like WhatsApp or Telegram.

The scammers will take the time to get to know you and assess what the best approach is to deceive you. Most of the time they are after your money, but sometimes they are after information. These scammers may also use other people, who are often younger, as money mules.

The people ensnared in romance scams are courted and lavished with attention, until it’s time to cash in. Then the scammer suddenly needs money for travel, an illness, or other made-up reasons. Some scammers also lure victims with a supposedly great investment opportunity that they can’t afford to miss—which will turn out great for the scammer, not the victim.

The four Ghanaian men face multiple charges, including wire fraud, money laundering, receiving stolen money, and more. Each faces a maximum sentence of 75 years in prison if convicted on all charges.

Stay safe from romance scammers

The scale of losses from romance scams often eclipses that of many other types of reported consumer fraud or internet crime, demonstrating the high financial risk entailed in these emotional exploitation schemes.

So, it’s important to understand how these scams operate and how you can stay safe. Some of these tips may seem basic, but in these cases, it’s easy for people to mistake their online relationship with the scammer for a real one. This isn’t the fault of scam victims—it is just a symptom of how effective these scam methods are.

  • Don’t send money or disclose sensitive information to anyone you have never met in person.
  • Take it slow and read back answers. Scammers usually have a playbook, but sometimes you can spot inconsistencies in their answers.
  • Don’t do this alone. Let someone in your life know about the relationship. Their perspective may keep your feet on the ground.
  • Cut them off early. As soon as you suspect you are dealing with a scammer, stop responding. Don’t fall for sob stories or even physical threats they’ll use to keep the connection alive.
  • Check their profile picture in an online search. You may find other profiles with the same picture. This is a huge red flag.
  • The move to a “safer platform” is another red flag. They are not doing this for privacy reasons, but to stay under the radar of the platform where they first contacted you.
  • Consult with a financial advisor or investment professional who can provide an objective opinion if you’re offered an investment opportunity.
  • If you encounter something suspicious, report it to the appropriate authorities—such as local law enforcement or the FBI via its Internet Crime Complaint Center. Your action could prevent others from falling victim.  
  • Share examples (anonymized) to help others. One way to do this is to use Malwarebytes Scam Guard, which also helps you assess if a message is a scam or not.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!