IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Grok continues producing sexualized images after promised fixes

Journalists decided to test whether the Grok chatbot still generates non‑consensual sexualized images, even after xAI, Elon Musk’s artificial intelligence company, and X, the social media platform formerly known as Twitter, promised tighter safeguards.

Unsurprisingly, it does.

After scrutiny from regulators all over the world—triggered by reports that Grok could generate sexualized images of minors—xAI framed the problem as an “isolated” lapse and said it was urgently fixing “lapses in safeguards.”

A Reuters retest suggests the core abuse pattern remains. Reuters had nine reporters run dozens of controlled prompts through Grok after X announced new limits on sexualized content and image editing. In the first round, Grok produced sexualized imagery in response to 45 of 55 prompts. In 31 of those 45, the reporters explicitly said the subject was vulnerable or would be humiliated by the pictures.

A second round, five days later, still yielded sexualized images in 29 of 43 prompts, even when reporters said the subjects had not consented.

Competing systems from OpenAI, Google, and Meta refused identical prompts and instead warned users against generating non‑consensual content.

The prompts were deliberately framed as real‑world abuse scenarios. Reporters told Grok the photos were of friends, co-workers, or strangers who were body‑conscious, timid, or survivors of abuse, and that they had not agreed to editing. Despite that, Grok often complied—for example, turning a “friend” into a woman in a revealing purple two‑piece or putting a male acquaintance into a small gray bikini, oiled up and posed suggestively. In only seven cases did Grok explicitly reject requests as inappropriate; in others it failed silently, returning generic errors or generating different people instead.

The result is a system illustrating the same lesson its creators say they’re trying to learn: if you ship powerful visual models without exhaustive abuse testing and robust guardrails, people will use them to sexualize and humiliate others, including children. Grok’s record so far suggests that lesson still hasn’t sunk in.

Grok limited AI image editing to paid users after the backlash. But paywalling image tools—and adding new curbs—looks more like damage control than a fundamental safety reset. Grok still accepts prompts that describe non‑consensual use, still sexualizes vulnerable subjects, and still behaves more permissively than rival systems when asked to generate abusive imagery. For victims, the distinction between “public” and “private” generations is meaningless if their photos can be weaponized in DMs or closed groups at scale.

Sharing images

If you’ve ever wondered why some parents post images of their children with a smiley emoji across their face, this is part of the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This is another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. Such fakes aren’t only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Firefox is giving users the AI off switch

Some software providers have decided to lead by example and offer users a choice about the Artificial Intelligence (AI) features built into their products.

The latest example is Mozilla, which now offers users a one-click option to disable generative AI features in the Firefox browser.

Audiences are divided about the use of AI, as Mozilla put it on its blog:

“AI is changing the web, and people want very different things from it. We’ve heard from many who want nothing to do with AI. We’ve also heard from others who want AI tools that are genuinely useful. Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls.”

Mozilla is adding an AI Controls area to Firefox settings that centralizes the management of all generative AI features. This consists mainly of a master switch, “Block AI enhancements,” which lets users effectively run Firefox “without AI.” It blocks existing and future generative AI features and hides pop‑ups or prompts advertising them.

Once you set your AI preferences in Firefox, they stay in place across updates. You can also change them whenever you want.

Starting with Firefox 148, which rolls out on February 24, you’ll find a new AI controls section within the desktop browser settings.

Firefox AI choices
Image courtesy of Mozilla

You can turn everything off with one click or take a more granular approach. At launch, these features can be controlled individually:

  • Translations, which help you browse the web in your preferred language.
  • Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
  • AI-enhanced tab grouping, which suggests related tabs and group names.
  • Link previews, which show key points before you open a link.
  • An AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.

We applaud this move to give users more control. Other companies have done the same, including Mozilla’s competitor DuckDuckGo, which made AI optional after putting the decision to a user vote. Earlier, browser developer Vivaldi took a stand against incorporating AI altogether.

Open-source email service Tuta also decided not to integrate AI features. After only 3% of Tuta users requested them, Tuta removed an AI copilot from its development roadmap.

Even Microsoft seems to have recoiled from pushing AI to everyone, although so far it has focused on walking back defaults and tightening per‑feature controls rather than offering a single, global off switch.

Choices

Many people are happy to use AI features, and as long as you’re aware of the risks and the pitfalls, that’s fine. But pushing these features on users who don’t want them is likely to backfire on software publishers.

Which is only right. After all, you’re paying the bill, so you should have a choice. Before installing a new browser, inform yourself not only about its privacy policy, but also about what control you’ll have over AI features.

Looking at recent voting results, I think it’s safe to say that in the AI gold rush, the real premium feature isn’t a chatbot button—it’s the off switch.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

An AI plush toy exposed thousands of private chats with children

Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.

Bondu’s toy is marketed as:

“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”

What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.

What Bondu has to say about safety does not mention security or privacy:

“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”

Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?

The researchers were stunned to find that the company’s public-facing web console allowed anyone to log in with any Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.
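In access-control terms, that is an authorization failure rather than an authentication one: the console checked who you were (any Google sign-in counted) but never whether you were allowed to see a given child’s transcript. Here’s a minimal Express-style sketch of the difference, with a hypothetical route and data model; Bondu’s actual stack hasn’t been published:

```typescript
import express from "express";

// Illustrative in-memory store; Bondu's real data model is not public.
interface ChatLog { parentAccountId: string; transcript: string; }
const chatLogs = new Map<string, ChatLog>([
  ["chat-1", { parentAccountId: "parent-42", transcript: "..." }],
]);

const app = express();

app.get("/console/chats/:id", (req, res) => {
  // Step 1: authentication (stand-in for "signed in with a Google account").
  const userId = req.header("x-user-id"); // hypothetical session lookup
  if (!userId) return res.status(401).send("Sign in first");

  const log = chatLogs.get(req.params.id);
  if (!log) return res.status(404).end();

  // Step 2: authorization, the step the exposed console evidently skipped.
  // Without it, any signed-in account can read any child's transcript.
  if (log.parentAccountId !== userId) return res.status(403).end();

  res.json(log);
});

app.listen(3000);
```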

Potentially, these chat logs could have been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.

Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.

In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative to screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents “don’t feel like you have to be sitting them in front of a TV.”

So, whether it’s a foul-mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using Artificial Intelligence in children’s toys—unless we reach a point where they can be used safely, privately, and securely, and even then, sparingly.

How to stay safe

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know: all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

AT&T breach data resurfaces with new risks for customers

When data resurfaces, it never comes back weaker. A newly shared dataset tied to AT&T shows just how much more dangerous an “old” breach can become once criminals have enough of the right details to work with.

The dataset, privately circulated since February 2, 2026, is described as AT&T customer data likely gathered over the years. It doesn’t just contain a few scraps of contact information. It reportedly includes roughly 176 million records, with…

  • Up to 148 million Social Security numbers (full SSNs and last four digits)
  • More than 133 million full names and street addresses
  • More than 132 million phone numbers
  • Dates of birth for around 75 million people
  • More than 131 million email addresses

Taken together, that’s the kind of rich, structured data set that makes a criminal’s life much easier.

On their own, any one of these data points would be inconvenient but manageable. An email address fuels spam and basic phishing. A phone number enables smishing and robocalls. An address helps attackers guess which services you might use. But when attackers can look up a single person and see name, full address, phone, email, complete or partial SSN, and date of birth in one place, the risk shifts from “annoying” to high‑impact.

That combination is exactly what many financial institutions and mobile carriers still rely on for identity checks. For cybercriminals, this sort of dataset is a Swiss Army knife.

It can be used to craft convincing AT&T‑themed phishing emails and texts, complete with correct names and partial SSNs to “prove” legitimacy. It can power large‑scale SIM‑swap attempts and account takeovers, where criminals call carriers and banks pretending to be you, armed with the answers those call centers expect to hear. It can also enable long‑term identity theft, with SSNs and dates of birth abused to open new lines of credit or file fraudulent tax returns.

The uncomfortable part is that a fresh hack isn’t always required to end up here. Breach data tends to linger, then get merged, cleaned up, and expanded over time. What’s different in this case is the breadth and quality of the profiles. They include more email addresses, more SSNs, more complete records per person. That makes the data more attractive, more searchable, and more actionable for criminals.

For potential victims, the lesson is simple but important. If you have ever been an AT&T customer, treat this as a reminder that your data may already be circulating in a form that is genuinely useful to attackers. Be cautious of any AT&T‑related email or text, enable multi‑factor authentication wherever possible, lock down your mobile account with extra passcodes, and consider monitoring your credit. You can’t pull your data back out of a criminal dataset—but you can make sure it’s much harder to use against you.

What to do when your data is involved in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover afterwards.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Apple’s new iOS setting addresses a hidden layer of location tracking

Most iPhone owners have hopefully learned to manage app permissions by now, including location access. But there’s another layer of location tracking that operates outside these controls. Your cellular carrier has been collecting your location data all along, and until now, there was nothing you could do about it.

Apple just changed this in iOS 26.3 with a new setting called “limit precise location.”

How Apple’s anti-carrier tracking system works

Cellular networks track your phone’s location based on the cell towers it connects to, in a process known as triangulation. In cities where towers are densely packed, triangulation is precise enough to track you down to a street address.
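To see why dense towers translate into precision, here is a minimal trilateration sketch with made-up coordinates and distances: given three tower positions and rough distance estimates (derived in practice from signal timing or strength), the phone’s position falls out of a small linear system.

```typescript
type Point = { x: number; y: number };

// Estimate a position from three towers and measured distances (meters).
// Subtracting the first circle equation from the other two linearizes the
// problem: 2(xi - x1)x + 2(yi - y1)y = d1² - di² + xi² - x1² + yi² - y1².
function trilaterate(t: [Point, Point, Point], d: [number, number, number]): Point {
  const [t1, t2, t3] = t;
  const a1 = 2 * (t2.x - t1.x), b1 = 2 * (t2.y - t1.y);
  const c1 = d[0] ** 2 - d[1] ** 2 + t2.x ** 2 - t1.x ** 2 + t2.y ** 2 - t1.y ** 2;
  const a2 = 2 * (t3.x - t1.x), b2 = 2 * (t3.y - t1.y);
  const c2 = d[0] ** 2 - d[2] ** 2 + t3.x ** 2 - t1.x ** 2 + t3.y ** 2 - t1.y ** 2;
  const det = a1 * b2 - a2 * b1; // requires non-collinear towers
  return { x: (c1 * b2 - c2 * b1) / det, y: (a1 * c2 - a2 * c1) / det };
}

// Urban density: three sites a few hundred meters apart pin a phone down
// to a small area even with noisy distance estimates.
console.log(trilaterate(
  [{ x: 0, y: 0 }, { x: 500, y: 0 }, { x: 0, y: 500 }],
  [300, 320, 340],
)); // ≈ { x: 237.6, y: 224.4 }
```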

This tracking is different from app-based location monitoring, because your phone’s privacy settings have historically been powerless to stop it. Toggle Location Services off entirely, and your carrier still knows where you are.

The new setting reduces the precision of location data shared with carriers. Rather than a street address, carriers would see only the neighborhood where a device is located. It doesn’t affect emergency calls, though, which still transmit precise coordinates to first responders. Apps like Apple’s “Find My” service, which locates your devices, or its navigation services, aren’t affected because they work using the phone’s location sharing feature.
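Apple hasn’t published the mechanism, but the general idea of trading a street address for a neighborhood can be as simple as coarsening coordinates before they are shared. A purely illustrative sketch, not Apple’s actual implementation:

```typescript
// Roughly 0.01° of latitude is about 1.1 km, so rounding to two decimal
// places turns a street-level fix into a neighborhood-level one.
function coarsen(lat: number, lon: number, decimals = 2): [number, number] {
  const f = 10 ** decimals;
  return [Math.round(lat * f) / f, Math.round(lon * f) / f];
}

console.log(coarsen(37.33182, -122.03118)); // [ 37.33, -122.03 ]
```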

Why is Apple doing this? Apple hasn’t said, but the move comes after years of carriers mishandling location data.

Unfortunately, cellular network operators have played fast and loose with this data. In April 2024, the FCC fined Sprint and T-Mobile (which have since merged), along with AT&T and Verizon, nearly $200 million combined for illegally sharing this location data. They sold access to customers’ location information to third-party aggregators, who then sold it on to other third parties without customer consent.

This turned into a privacy horror story for customers. One aggregator, LocationSmart, had a free demo on its website that reportedly allowed anyone to pinpoint the location of most mobile phones in North America.

Limited rollout

The feature only works with devices equipped with Apple’s custom C1 or C1X modems. That means just three devices: the iPhone Air, iPhone 16e, and the cellular iPad Pro with M5 chip. The iPhone 17, which uses Qualcomm silicon, is excluded. Apple can only control what its own modems transmit.

Carrier support is equally narrow. In the US, only Boost Mobile is participating at launch; Verizon, AT&T, and T-Mobile are notably absent from the list, given their past record. In Germany, Telekom is on the participant list, and both EE and BT are involved in the UK. In Thailand, AIS and True are on the list. No other carriers are taking part as of today.

Android also offers some support

Google also introduced a similar capability with Android 15’s Location Privacy hardware abstraction layer (HAL) last year. It faces the same constraint, though: modem vendors must cooperate, and most have not. Apple and Google don’t get to control the modems in most phones. This kind of privacy protection requires vertical integration that few manufacturers possess and few carriers seem eager to enable.

Most people think controlling app permissions means they’re in control of their location. This feature highlights something many users didn’t know existed: a separate layer of tracking handled by cellular networks, one over which users still have very limited control.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

A fake cloud storage alert that ends at Freecash

Last week we talked about an app that promises users they can make money testing games, or even just by scrolling through TikTok.

Imagine our surprise when we ended up on a site promoting that same Freecash app while investigating a “cloud storage” phish. We’ve all probably seen one of those. They’re common enough, and according to a recent investigation by BleepingComputer, there’s a

“large-scale cloud storage subscription scam campaign targeting users worldwide with repeated emails falsely warning recipients that their photos, files, and accounts are about to be blocked or deleted due to an alleged payment failure.”

Based on the description in that article, the email we found appears to be part of this campaign.

Cloud storage payment issue email

The subject line of the email is:

“{Recipient}. Your Cloud Account has been locked on Sat, 24 Jan 2026 09:57:55 -0500. Your photos and videos will be removed!”

This matches one of the subject lines that BleepingComputer listed.

And the content of the email:

Payment Issue – Cloud Storage

Dear User,

We encountered an issue while attempting to renew your Cloud Storage subscription.

Unfortunately, your payment method has expired. To ensure your Cloud continues without interruption, please update your payment details.

Subscription ID: 9371188

Product: Cloud Storage Premium

Expiration Date: Sat,24 Jan-2026

If you do not update your payment information, you may lose access to your Cloud Storage, which may prevent you from saving and syncing your data such as photos, videos, and documents.

Update Payment Details {link button}

Security Recommendations:

  • Always access your account through our official website
  • Never share your password with anyone
  • Ensure your contact and billing information are up to date

The link in the email leads to https://storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html#/redirect.html, which helps the scammer establish a certain amount of trust because it points to Google Cloud Storage (GCS). GCS is a legitimate service that allows authorized users to store and manage data such as files, images, and videos in buckets. However, as in this case, attackers can abuse it for phishing.
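Defenders can key on that exact pattern. Here is a hedged sketch of a heuristic of our own devising, not a known product rule: flag a bare HTML file served from a GCS bucket whose URL fragment drives a client-side redirect.

```typescript
// Flag URLs matching the abuse pattern seen here: an HTML page served
// straight out of a Google Cloud Storage bucket, with a fragment that
// drives a client-side redirect. Legitimate GCS content rarely looks so.
function looksLikeGcsRedirectLure(raw: string): boolean {
  let url: URL;
  try { url = new URL(raw); } catch { return false; }
  return (
    url.hostname === "storage.googleapis.com" &&
    url.pathname.toLowerCase().endsWith(".html") &&
    /redirect/i.test(url.hash)
  );
}

console.log(looksLikeGcsRedirectLure(
  "https://storage.googleapis.com/qzsdqdqsd/dsfsdxc.html#/redirect.html",
)); // true
```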

The redirect carries some parameters to the next website.

first redirect

The feed.headquartoonjpn[.]com domain was blocked by Malwarebytes. We’ve seen it before in an earlier campaign involving an Endurance-themed phish.

Endurance phish

After a few more redirects, we ended up at hx5.submitloading[.]com, where solving a fake CAPTCHA triggered the last redirect to freecash[.]com.

slider captcha

The end goal of this phish likely depends on the parameters passed along during the redirects, so results may vary.

Rather than stealing credentials directly, the campaign appears designed to monetize traffic, funneling victims into affiliate offers where the operators get paid for sign-ups or conversions.

BleepingComputer noted that they were redirected to affiliate marketing websites for various products.

“Products promoted in this phishing campaign include VPN services, little-known security software, and other subscription-based offerings with no connection to cloud storage.”

How to stay safe

Ironically, the phishing email itself includes some solid advice:

  • Always access your account through our official website.
  • Never share your password with anyone.

We’d like to add:

  • Never click on links in unsolicited emails without verifying with a trusted source.
  • Use an up-to-date, real-time anti-malware solution with a web protection component.
  • Do not engage with websites that attract visitors like this.

Pro tip: Malwarebytes Scam Guard would have helped you identify this email as a scam and provided advice on how to proceed.

Redirect flow (IOCs)

storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html

feed.headquartoonjpn[.]com

revivejudgemental[.]com

hx5.submitloading[.]com

freecash[.]com


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

How Manifest v3 forced us to rethink Browser Guard, and why that’s a good thing 

As a Browser Guard user, you might not have noticed much difference lately. Browser Guard still blocks scams and phishing attempts just as it always has, and in many cases even better.

But behind the scenes, almost everything changed. The rules that govern how browser extensions work went through a major overhaul, and we had to completely rebuild how Browser Guard protects you.

First, what is Manifest v3 (and v2)? 

Browser extensions include a configuration file called a “manifest”. Think of it as an instruction manual that tells your browser what an extension can do and how it’s allowed to do it.

Manifest v3 is the latest version of that system, and it’s now the only option allowed in major browsers like Chrome and Edge.

In Manifest v2, Browser Guard could use highly customized logic to analyze and block suspicious activity as it happened, protecting you as you browsed the web.

With Manifest v3, that flexibility is mostly gone. Extensions can no longer run deeply complex, custom logic in the same way. Instead, we can only pass static rule lists to the browser, called Declarative Net Request (DNR) rules.
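For context, a static DNR rule is a declarative JSON object that the browser itself evaluates; the extension’s code never sees the request. A minimal example (the domain is a placeholder):

```typescript
// One static DNR rule: block main-frame requests to a placeholder domain.
// Rules like this ship as raw JSON files listed in the extension manifest.
const blockRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||malicious.example^", // the domain and its subdomains
    resourceTypes: ["main_frame"],
  },
};
```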

But those DNR rules come with strict constraints.

Rule sets are size-limited by the browser to save space. Because rules are stored as raw JSON files, developers can’t use other data types to make them smaller. And updating those DNR rules can only be done by updating the extension entirely.

This is less of a problem on Chrome, which allows developers to push updates quickly, but other browsers don’t currently support this fast-track process. Dynamic rule updates exist, but they’re limited, and nowhere near large enough to hold the full set of rules.

In short, we couldn’t simply port Browser Guard from Manifest v2 to v3. The old approach wouldn’t keep our users protected.

A note about Firefox and Brave 

Firefox and Brave chose a different path and continue to support the more flexible Manifest v2 method of blocking requests.

However, since Brave doesn’t have its own extension store, users can only install extensions they already had before Google removed Manifest v2 extensions from the Chrome Web Store. That said, Brave also ships strong ad protection out of the box.

For Browser Guard users on Firefox, rest assured the same great blocking techniques will continue to work.

How Browser Guard still protects you 

Given all of this, we had to get creative.

Many ad blockers already support pattern-based matching to stop ads and trackers. We asked a different question: what if we could use similar techniques to catch scam and phishing attempts before we know the specific URL is malicious?

Better yet, what if we did it without relying on the new DNR APIs?

So, we built a new pattern-matching system focused specifically on scam and phishing behavior, supporting:

  • Full regex-based URL matching
  • Full XPath and querySelector support
  • Matching against any content on the page
  • Favicon spoof detection

For example, if a site is hosted on Amazon S3, contains a password-input field, and uses a homoglyph in the URL to trick users into thinking they’re logging into Facebook, Browser Guard can detect that combination—even if we’ve never seen the URL before.

Fake Facebook login screen
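As a hedged illustration of how such signals might combine (invented names, thresholds, and confusable table; not Browser Guard’s actual engine):

```typescript
// Invented sketch; Browser Guard's real rules and engine are not public.
interface PageSignals {
  url: string;               // the address-bar URL
  hasPasswordInput: boolean; // e.g. document.querySelector('input[type="password"]') !== null
}

// Map a few common homoglyphs to ASCII; real confusable tables are far larger.
const CONFUSABLES: Record<string, string> = { "0": "o", "1": "l", "а": "a", "о": "o", "е": "e" };
const normalize = (s: string) => [...s.toLowerCase()].map((c) => CONFUSABLES[c] ?? c).join("");

function looksLikeFacebookLoginPhish(p: PageSignals): boolean {
  const onS3 = /(^|\.)s3[a-z0-9.-]*\.amazonaws\.com$/i.test(new URL(p.url).hostname);
  // "facebook" appears only after homoglyph normalization: a lookalike URL.
  const lookalike =
    !p.url.toLowerCase().includes("facebook") && normalize(p.url).includes("facebook");
  return onS3 && p.hasPasswordInput && lookalike;
}

// A bucket name with a digit-for-letter swap, hosting a fake login form:
console.log(looksLikeFacebookLoginPhish({
  url: "https://faceb00k-login.s3.amazonaws.com/index.html",
  hasPasswordInput: true,
})); // true
```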

Why this matters more now 

With AI, attackers can create near-perfect duplicates of websites more easily than ever. And did you spot the homoglyph in the URL? Nope, neither did I!

That’s why we designed this system so we can update its rules every 30 minutes, instead of waiting for full extension updates.  

But I still see static blocking rules in Browser Guard 

That’s true—for now.  

We’ve found a temporary workaround that lets us support all the rules that we had before. However, we had to remove some of the more advanced logic that used to sit on top of them.

For example, we can’t use these large datasets to block subframe requests, only main frame requests. Nor can we stack multiple logic layers together; blocking is limited to simple matches (regex, domains and URLs).

Those limits are a big reason we’re investing more heavily in pattern-based and heuristic protection. 

Pure heuristics 

From day one, Browser Guard has used behavioral heuristics to detect scams and phishing, monitoring activity on the page for suspicious patterns.

For example, some scam pages deliberately break your browser’s back button by abusing history.replaceState, then trick you into calling the scammer’s “computer helpline.” Others try to convince you to run malicious commands on your computer.

Browser Guard can detect these behaviors and warn you before you fall for them. 
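A minimal sketch of one such behavioral check, with an invented threshold (not Browser Guard’s actual code): a content script that notices a page hammering the History API to trap the back button.

```typescript
// Content-script sketch: flag pages that spam the History API to trap the
// back button, a trick common on tech-support scam pages.
let historyWrites = 0;
const windowStart = Date.now();

for (const method of ["pushState", "replaceState"] as const) {
  const original = history[method].bind(history);
  history[method] = (data: unknown, unused: string, url?: string | URL | null) => {
    historyWrites++;
    if (historyWrites > 20 && Date.now() - windowStart < 5_000) {
      console.warn("Possible back-button trap: excessive history manipulation");
      // A real extension would message its background script here.
    }
    return original(data, unused, url);
  };
}
```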

What’s next? 

Did someone say AI?  

You’ve probably seen Scam Guard in other Malwarebytes products. We’re currently working on a version tailored specifically for Browser Guard. More soon!

Final thoughts 

While Manifest v3 introduced meaningful improvements to browser security, it also created real challenges for security tools like Browser Guard.

Rather than scaling back, the Browser Guard team rebuilt our approach from the ground up, focusing on behavior, patterns, and faster response times. The result is protection that’s different under the hood, but just as committed to keeping you safe online.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Scam-checking just got easier: Malwarebytes is now in ChatGPT 

If you’ve ever stared at a suspicious text, email, or link and thought, “Is this a scam… or am I overthinking it?”, you’re not alone.

Scams are getting harder to spot, and even savvy internet users get caught off guard. That’s why Malwarebytes is the first cybersecurity provider available directly inside ChatGPT, bringing trusted threat intelligence to millions of people right where these questions happen. 

Simply ask: “Malwarebytes, is this a scam?” and you’ll get a clear, informed answer—super fast. 

How to access 

To access Malwarebytes inside ChatGPT:

  • Sign in to ChatGPT  
  • Go to Apps  
  • Search for Malwarebytes and press Connect  
  • From then on, you can “@Malwarebytes” to check if a text message, DM, email, or other content seems malicious.

Cybersecurity help, right when and where you need it 

Malwarebytes in ChatGPT lets you tap into our cybersecurity expertise without ever leaving the conversation. Whether something feels off or you want a second opinion, you can get trusted guidance in no time at all. 

Here’s what you can do: 

Spot scams faster 

Paste in a suspicious text message, email, or DM and get: 

  • A clear, point-by-point breakdown of phishing signs and other known red flags
  • An explanation of why something looks risky 
  • Practical next steps to help you stay safe 

You won’t get any jargon or guessing from us. What you will get is 100% peace of mind. 

Check links, domains, and phone numbers 

Not sure if a URL, website, or phone number is legit? Ask for a risk assessment informed by Malwarebytes threat intelligence, including: 

  • Signs of suspicious activity 
  • Whether the link or sender has been associated with scams 
  • Whether a domain is newly registered, follows redirects, or shows other potentially suspicious traits
  • What to do next—block it, ignore it, or proceed with caution 

Powered by real threat intelligence 

The verdicts you get aren’t based on vibes or generic advice. They’re powered by Malwarebytes’ continuously updated threat intelligence—the same real-world data that helps protect millions of devices and people worldwide every day. 

If you spot something suspicious, you can submit it directly to Malwarebytes through ChatGPT. Those reports help strengthen threat intelligence, making the internet safer not just for you, but for everyone.

  • Link reputation scanner: Checks URLs against threat intelligence databases, detects newly registered domains (<30 days), and follows redirects (a rough sketch of the domain-age check follows this list).
  • Phone number reputation check: Validates phone numbers against scam/spam databases, including carrier and location details.
  • Email address reputation check: Analyzes email domains for phishing and other malicious activity.
  • WHOIS domain lookup: Retrieves registration data such as registrar, creation and expiration dates, and abuse contacts.
  • Verify domain legitimacy: Look up domain registration details to identify newly created or suspicious websites commonly used in phishing attacks.  
  • Get geographic context: Receive warnings when phone numbers originate from unexpected regions, a common indicator of international scam operations. 
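The “newly registered domain” signal in particular is easy to reason about: compare the WHOIS creation date with today. A rough sketch, with the 30-day cutoff mirroring the list above and the WHOIS lookup itself assumed to happen elsewhere:

```typescript
// Given a WHOIS creation date (obtained elsewhere), decide whether a domain
// counts as "newly registered" under a 30-day window.
function isNewlyRegistered(whoisCreated: string, maxAgeDays = 30): boolean {
  const created = new Date(whoisCreated);
  if (Number.isNaN(created.getTime())) return false; // unparseable: no verdict
  const ageDays = (Date.now() - created.getTime()) / 86_400_000;
  return ageDays >= 0 && ageDays < maxAgeDays;
}

console.log(isNewlyRegistered("2026-01-28")); // true if run within 30 days of that date
```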

Available now 

Malwarebytes in ChatGPT is available wherever ChatGPT apps are available.

To get started, just ask ChatGPT: 

“Malwarebytes, is this a scam?” 

For deeper insights, proactive protection, and human support, download the Malwarebytes app—our security solutions are designed to stop threats before they reach you and before the damage is done.

How fake party invitations are being used to install remote access tools

“You’re invited!” 

It sounds friendly, familiar and quite harmless. But in a scam we recently spotted, that simple phrase is being used to trick victims into installing a full remote access tool on their Windows computers—giving attackers complete control of the system. 

What appears to be a casual party or event invitation leads to the silent installation of ScreenConnect, a legitimate remote support tool abused here by attackers.

Here’s how the scam works, why it’s effective, and how to protect yourself. 

The email: A party invitation 

Victims receive an email framed as a personal invitation—often written to look like it came from a friend or acquaintance. The message is deliberately informal and social, lowering suspicion and encouraging quick action. 

In the screenshot below, the email arrived from a friend whose email account had been hacked, but it could just as easily come from a sender you don’t know.

So far, we’ve only seen this campaign targeting people in the UK, but there’s nothing stopping it from expanding elsewhere. 

Clicking the link in the email leads to a polished invitation page hosted on an attacker-controlled domain. 

Party invitation email from a contact

The invite: The landing page that leads to an installer 

The landing page leans heavily into the party theme, but instead of showing event details, it nudges the user toward opening a file. It combines several elements, none of which look dangerous on their own, but together they keep the user focused on the “invitation” file:

  • A bold “You’re Invited!” headline 
  • The suggestion that a friend had sent the invitation 
  • A message saying the invitation is best viewed on a Windows laptop or desktop
  • A countdown suggesting your invitation is already “downloading” 
  • A message implying urgency and social proof (“I opened mine and it was so easy!”)

Within seconds, the browser is redirected to download RSVPPartyInvitationCard.msi.

The page even triggers the download automatically to keep the victim moving forward without stopping to think. 

This MSI file isn’t an invitation. It’s an installer. 

The landing page

The guest: What the MSI actually does 

When the user opens the MSI file, it launches msiexec.exe and silently installs ScreenConnect Client, a legitimate remote access tool often used by IT support teams.  

There’s no invitation, RSVP form, or calendar entry. 

What happens instead: 

  • ScreenConnect binaries are installed under C:\Program Files (x86)\ScreenConnect Client
  • A persistent Windows service is created (for example, ScreenConnect Client 18d1648b87bb3023) 
  • ScreenConnect installs multiple .NET-based components 
  • There is no clear user-facing indication that a remote access tool is being installed 

From the victim’s perspective, very little seems to happen. But at this point, the attacker can remotely access their computer.

The after-party: Remote access is established 

Once installed, the ScreenConnect client initiates encrypted outbound connections to ScreenConnect’s relay servers, including a uniquely assigned instance domain.

That connection gives the attacker the same level of access as a remote IT technician, including the ability to: 

  • See the victim’s screen in real time
  • Control the mouse and keyboard 
  • Upload or download files 
  • Keep access even after the computer is restarted 

Because ScreenConnect is legitimate software commonly used for remote support, its presence isn’t always obvious. On a personal computer, the first signs are often behavioral, such as unexplained cursor movement, windows opening on their own, or a ScreenConnect process the user doesn’t remember installing. 

Why this scam works 

This campaign is effective because it targets normal, predictable human behavior. From a behavioral security standpoint, it exploits our natural curiosity while appearing low-risk.

Most people don’t think of invitations as dangerous. Opening one feels passive, like glancing at a flyer or checking a message, not installing software. 

Even security-aware users are trained to watch out for warnings and pressure. A friendly “you’re invited” message doesn’t trigger those alarms. 

By the time something feels off, the software is already installed. 

Signs your computer may be affected 

Watch for the following (a minimal triage sketch follows this list):

  • A downloaded or executed file named RSVPPartyInvitationCard.msi
  • An unexpected installation of ScreenConnect Client
  • A Windows service named ScreenConnect Client followed by random characters
  • Outbound HTTPS connections to ScreenConnect relay domains
  • DNS lookups for the invitation-hosting domain used in this campaign, xnyr[.]digital
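Based on those indicators, here is a minimal Node-based triage sketch for a Windows machine. It is illustrative only; the directory and service names follow the indicators reported above and may carry an instance-specific suffix in practice.

```typescript
// Quick triage: look for a ScreenConnect Client install directory and service.
import { execSync } from "node:child_process";
import { existsSync, readdirSync } from "node:fs";

const programFiles = "C:\\Program Files (x86)";
if (existsSync(programFiles)) {
  for (const dir of readdirSync(programFiles)) {
    if (dir.startsWith("ScreenConnect Client")) {
      console.log(`Install directory: ${programFiles}\\${dir}`);
    }
  }
}

// "sc query state= all" lists every Windows service, running or stopped.
const services = execSync("sc query state= all", { encoding: "utf8" });
for (const line of services.split(/\r?\n/)) {
  if (/^SERVICE_NAME:\s*ScreenConnect Client/i.test(line.trim())) {
    console.log(`Service: ${line.trim()}`);
  }
}
```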

How to stay safe  

This campaign is a reminder that modern attacks often don’t break in—they’re invited in. Remote access tools give attackers deep control over a system. Acting quickly can limit the damage.  

For individuals 

If you receive an email like this: 

  • Be suspicious of invitations that ask you to download or open software 
  • Never run MSI files from unsolicited emails 
  • Verify invitations through another channel before opening anything 

If you already clicked or ran the file:  

  • Disconnect from the internet immediately 
  • Check for ScreenConnect and uninstall it if present 
  • Run a full security scan 
  • Change important passwords from a clean, unaffected device 

For organisations (especially in the UK) 

  • Alert on unauthorized ScreenConnect installations
  • Restrict MSI execution where feasible 
  • Treat “remote support tools” as high-risk software
  • Educate users: invitations don’t come as installers 

This scam works by installing a legitimate remote access tool without clear user intent. That’s exactly the gap Malwarebytes is designed to catch.

Malwarebytes now detects newly installed remote access tools and alerts you when one appears on your system. You’re then given a choice: confirm that the tool is expected and trusted, or remove it if it isn’t.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (January 26 – February 1)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.