IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

The US digital doxxing of H-1B applicants is a massive privacy misstep

Technology professionals hoping to come and work in the US face a new privacy concern. Starting December 15, skilled workers on H-1B visas and their families must flip their social media profiles to public before their consular interviews. It’s a deeply risky move from a security and privacy perspective.

According to a missive from the US State Department, immigration officers use all available information to vet newcomers for signs that they pose a threat to national security. That includes an “online presence review.” That review now requires not just H-1B applicants but also H-4 applicants (their dependents who want to move with them to the US) to “adjust the privacy settings on all of their social media profiles to ‘public.’”

An internal State Department cable obtained by CBS had sharper language: it instructs officers to screen for “any indications of hostility toward the citizens, culture, government, institutions, or founding principles of the United States.” What that means is unclear, but if your friends like posting strong political opinions, you should be worried.

This isn’t the first time that the government has forced people to lift the curtain on their private digital lives. The US State Department forced student visa applicants to make their social media profiles public in June this year.

This is a big deal for a lot of people. The H-1B program allows companies to temporarily hire foreign workers in specialty jobs. The US processed around 400,000 visas under the H-1B program last year, most of which were applications to renew employment, according to the Pew Research Center. When you factor in those workers’ dependents, we’re talking well over a million people. This decision forces them into long-term digital exposure that threatens not just them, but the US too.

Why forced public exposure is a security disaster

A lot of these H-1B workers work for defense contractors, chip makers, AI labs, and big tech companies. These are organizations that foreign powers (especially those hostile to the US) care a lot about, and that makes those H-1B employees primary targets for them.

Making H-1B holders’ real names, faces, and daily routines public is a form of digital doxxing. The policy exposes far more personal information than is safe, creating significant new risks.

This information gives these actors a free organizational chart, complete with up-to-date information on who’s likely to be working on chip designs and sensitive software.

It also gives the same people all they need to target people on that chart. They have information on H-1B holders and their dependents, including intelligence about their friends and family, their interests, their regular locations, and even what kinds of technology they use. They become more exposed to risks like SIM swapping and swatting.

This public information also turns employees into organizational attack vectors. Adversaries can use personal and professional data to enhance spear-phishing and business email compromise techniques that cost organizations dearly. Public social media content becomes training data for fraud, serving up audio and video that threat actors can use to create lifelike impersonations of company employees.

Social media profiles also give adversaries an ideal way to approach people. They have a nasty habit of exploiting social media to target assets for recruitment. The head of MI5 warned two years ago that Chinese state actors had approached an estimated 20,000 Britons via LinkedIn to steal industrial or technological secrets.

Armed with a deep, intimate understanding of what makes their targets tick, attackers stand a much better chance of co-opting them. One person might need money because of a gambling problem or a sick relative. Another might be lonely and a perfect target for a romance scam.

Or how about basic extortion? LGBTQ+ individuals from countries where homosexuality is criminalized risk exposure to regimes that could harm them when they return. Family in hostile countries become bargaining chips. In some regions, families of high-value employees could face increased exposure if this information becomes accessible. Foreign nation states are good at exploiting pain points. This policy means that they won’t have to look far for them.

Visa applications might assume they can simply make an account private again once officials have evaluated them. But adversary states to the US are actively seeking such information. They have vast online surveillance operations that scrape public social media accounts. As soon as they notice someone showing up in the US with H-1B visa status, they’ll be ready to mine account data that they’ve already scraped.

So what is an H-1B applicant to do? Deleting accounts is a bad idea, because sudden disappearance can trigger suspicion and officers may detect forensic traces. A safer approach is to pause new posting and carefully review older content before making profiles public. Removing or hiding posts that reveal personal routines, locations, or sensitive opinions reduces what can be taken out of context or used for targeting once accounts are exposed.

The irony is that spies are likely using fake social media accounts honed for years to slip under the radar. That means they’ll keep operating in the dark while legitimate H-1B applicants are the ones who become vulnerable. So this policy may unintentionally create the very risks it aims to prevent. And it also normalizes mandatory public exposure as a condition of government interaction.

We’re at a crossroads. Today, visa applicants, their families, and their employers are at risk. The infrastructure exists to expand this approach in the future. Or officials could stop now and rethink, before these risks become more deeply entrenched.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Google ads funnel Mac users to poisoned AI chats that spread the AMOS infostealer

Researchers have found evidence that AI conversations were inserted in Google search results to mislead macOS users into installing the Atomic macOS Stealer (AMOS). Both Grok and ChatGPT were found to have been abused in these attacks.

Forensic investigation of an AMOS alert showed the infection chain started when the user ran a Google search for “clear disk space on macOS.” Following that trail, the researchers found not one, but two poisoned AI conversations with instructions. Their testing showed that similar searches produced the same type of results, indicating this was a deliberate attempt to infect Mac users.

The search results led to AI conversations which provided clearly laid out instructions to run a command in the macOS Terminal. That command would end with the machine being infected with the AMOS malware.

If that sounds familiar, you may have read our post about sponsored search results that led to fake macOS software on GitHub. In that campaign, sponsored ads and SEO-poisoned search results pointed users to GitHub pages impersonating legitimate macOS software, where attackers provided step-by-step instructions that ultimately installed the AMOS infostealer.

As the researchers pointed out:

“Once the victim executed the command, a multi-stage infection chain began. The base64-encoded string in the Terminal command decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest credentials, escalate privileges, and establish persistence without ever triggering a security warning.”

This is dangerous for the user on many levels. Because there is no prompt or review, the user does not get a chance to see or assess what the downloaded script will do before it runs. It bypasses security because of the use of the command line, it can bypass normal file download protections and execute anything the attacker wants.

Other researchers have found a campaign that combines elements of both attacks: the shared AI conversation and fake software install instructions. They found user guides for installing OpenAI’s new Atlas browser for macOS through shared ChatGPT conversations, which in reality led to AMOS infections.

So how does this work?

The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide which in reality will infect a system. ChatGPT’s sharing feature creates a public link to a single conversation that exists in the owner’s account. Attackers can craft a chat to produce the instructions they need and then tidy up the visible conversation so that what’s shared looks like a short, clean guide rather than a long back-and-forth.

Most major chat interfaces (including Grok on X) also let users delete conversations or selectively share screenshots. That makes it easy for criminals to present only the polished, “helpful” part of a conversation and hide how they arrived there.

The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide that, in reality, installs malware. ChatGPT’s sharing feature creates a public link to a conversation that lives in the owner’s account. Attackers can curate their conversations to create a short, clean conversation which they can share.

Then the criminals either pay for a sponsored search result pointing to the shared conversation or they use SEO techniques to get their posts high in the search results. Sponsored search results can be customized to look a lot like legitimate results. You’ll need to check who the advertiser is to find out it’s not real.

sponsored ad for ChatGPT Atlas which looks very real
Image courtesy of Kaspersky

From there, it’s a waiting game for the criminals. They rely on victims to find these AI conversations through search and then faithfully follow the step-by-step instructions.

How to stay safe

These attacks are clever and use legitimate platforms to reach their targets. But there are some precautions you can take.

  • First and foremost, and I can’t say this often enough: Don’t click on sponsored search results. We have seen so many cases where sponsored results lead to malware, that we recommend skipping them or make sure you never see them. At best they cost the company you looked for money and at worst you fall prey to imposters.
  • If you’re thinking about following a sponsored advertisement, check the advertiser first. Is it the company you’d expect to pay for that ad? Click the three‑dot menu next to the ad, then choose options like “About this ad” or “About this advertiser” to view the verified advertiser name and location.
  • Use real-time anti-malware protection, preferably one that includes a web protection component.
  • Never run copy-pasted commands from random pages or forums, even if they’re hosted on seemingly legitimate domains, and especially not commands that look like curl … | bash or similar combinations.

If you’ve scanned your Mac and found the AMOS information stealer:

  • Remove any suspicious login items, LaunchAgents, or LaunchDaemons from the Library folders to ensure the malware does not persist after reboot.
  • If any signs of persistent backdoor or unusual activity remain, strongly consider a full clean reinstall of macOS to ensure all malware components are eradicated. Only restore files from known clean backups. Do not reuse backups or Time Machine images that may be tainted by the infostealer.
  • After reinstalling, check for additional rogue browser extensions, cryptowallet apps, and system modifications.
  • Change all the passwords that were stored on the affected system and enable multi-factor authentication (MFA) for your important accounts.

If all this sounds too difficult for you to do yourself, ask someone or a company you trust to help you—our support team is happy to assist you if you have any concerns.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

How private is your VPN?

When you’re shopping around for a Virtual Private Network (VPN) you’ll find yourself in a sea of promises like “military-grade encryption!” and “total anonymity!” You can’t scroll two inches without someone waving around these fancy terms.

But not all VPNs can be trusted. Some VPNs genuinely protect your privacy, and some only sound like they do.

With VPN usage rising around the world for streaming, travel, remote work, and basic digital safety, understanding what makes a VPN truly private matters more than ever.

After years of trying VPNs for myself, privacy-minded family members, and a few mission-critical projects, here’s what I wish everyone knew.

Why do you even need a VPN?

If you’re wondering whether a VPN is worth it, you’re not alone. As your privacy-conscious consumer advocate, let me break down three time-saving and cost-saving benefits of using a privacy-first VPN.

Keep your browsing private

Ever feel like someone’s always looking over your shoulder online? Without a VPN, your internet service provider, and sometimes websites or governments, can keep tabs on what you do. A VPN encrypts your traffic and swaps out your real IP address for one of its own, letting you browse, shop, and read without a digital paper trail following you around.

I’ve run into this myself while traveling. There were times when I needed a VPN just to access US or European web apps that were blocked in certain Asian countries. In other cases, I preferred to appear “based” in the US so that English-language apps would load naturally, instead of defaulting to the local language, currency, or content of the country I was visiting.

Watch what you want, but pay less

Some of your favorite shows and websites are locked away simply because of where you live. In many cases, subscription or pay-per-view prices are higher in more prosperous regions. With a VPN, you can connect to servers in other countries and unlock content that isn’t available at home.

For example, when All Elite Wrestling (AEW) announced its major 2022 pay-per-view featuring CM Punk vs. Jon Moxley, US fans paid $49.99 through Bleacher Report. Fans in the UK, meanwhile, watched the exact same event on FiteTV for $23 less, around half the price. Because platforms determine pricing based on your IP address, a VPN server in another region can show you the pricing available in that country. Savings like that can make a VPN pay for itself quickly.

Stay safe on coffee-shop Wi-Fi

Before you join a network named “Starbucks Guest WiFi,” remember that nothing stops a cybercriminal from broadcasting a hotspot with the same name. Public Wi-Fi is convenient, but it’s also one of the easiest places for someone to snoop on your traffic.

Connecting to your VPN immediately encrypts everything you send or receive. That means you can check email, pay bills, or browse privately without worrying about someone nearby intercepting your information. Getting compromised will cost far more in money, time, and stress than most privacy-first VPN subscriptions.

But what actually makes a VPN privacy-first?

For a VPN, “privacy-first” can’t be just a nice slogan. It’s a mindset that shapes every technical, business, and legal decision.

A privacy-first VPN:

  • Collects as little data as possible — only the minimum needed to run the service.
  • Enforces a real no-logs policy through design, not marketing.
  • Builds privacy into everything, from software to server operations.
  • Practices transparency, often through open-source components and independent audits.

If a VPN can’t explain how it handles these areas, that’s a red flag.

What is WireGuard and why is it such a big deal?

WireGuard isn’t a VPN service. It’s the protocol that powers many modern VPNs, including Malwarebytes Privacy VPN. It’s the engine that handles encryption and securely routes your traffic.

WireGuard is the superstar in the VPN world. Unlike clunkier, older protocols (like OpenVPN or IPSec) it’s deliberately lean and built for the modern internet. Its small codebase is easier to audit and leaves fewer places for bugs to hide. It’s fully open-source, so researchers can dig into exactly how it works.

Its cryptography is fast, efficient, and modern with strong encryption, solid key exchange, and lightweight hashing that reduces overhead. In practice, that means better privacy and better performance without a provider having to gather connection data just to keep speeds usable.

Of course, WireGuard is just the foundation. Each VPN implements it differently. The better ones add privacy-friendly tweaks like rotating IP addresses or avoiding static identifiers so that even they can’t link sessions back to individual users.

How to compare VPNs

With VPN usage rising, especially where new age-verification rules have sparked debate about whether VPNs might face future scrutiny, it’s more important than ever to choose providers with strong, transparent privacy practices.

When you boil it down, a handful of questions reveal almost everything about how a VPN treats your privacy:

  • Who controls the infrastructure?
  • Are the servers RAM-only?
  • Which protocol is used, and how is it implemented?
  • What laws apply to the company?
  • Have experts audited the service?
  • Do transparency reports or warrant canaries exist and stay updated?
  • Can you sign up and pay without giving away your entire identity?

If a VPN provider gets evasive about any of this, or runs its service “for free” while collecting data to make the numbers work, that tells you almost everything you need to know.

privacy online

Why infrastructure ownership matters

One of the most revealing questions you can ask is deceptively simple: Who actually owns the servers?

Most VPNs rent hardware from large data centers or cloud platforms. When they do, your traffic travels through machines managed not only by the VPN’s engineers, but also by whoever runs those facilities. That introduces an access question: Who else has their hands on the hardware?

When a VPN owns and operates its equipment, including racks and networking gear, it reduces the number of unknowns dramatically. The fewer third parties in the chain, the easier it is to stand behind privacy guarantees.

RAM-only (diskless) servers: the gold standard

RAM-only servers take this a step further. Because everything runs in memory, nothing is ever written to a hard drive. Pull the plug and the entire working state disappears instantly, like wiping a whiteboard clean. That means no logs sitting quietly on a disk, nothing for an intruder or authorities to seize, and nothing left behind if ownership, personnel, or legal circumstances change.

This setup also tends to go hand-in-hand with owning the hardware. Most public cloud environments simply don’t allow true diskless deployments with full control over the underlying machine.

Other privacy features to watch for

Even with strong infrastructure and protocols, the details still matter. A solid kill switch keeps your traffic from leaking if the connection drops. Private DNS prevents queries from being routed through third parties. Multi-hop routes make correlation attacks harder. And torrent users may want carefully implemented port forwarding that doesn’t introduce side channels.

These aren’t flashy features, but they show whether a provider has considered the full privacy landscape, not just the obvious parts.

Audits and transparency reports

A provider that truly stands behind its privacy claims will welcome outside inspection. Independent audits, published findings, and ongoing transparency reports help confirm whether logging is disabled in practice, not just in principle. Some companies also maintain warrant canaries (more on this below). None of these are perfect, but together they paint a clear picture of how seriously the VPN treats user trust.

A warrant canary in the VPN coalmine

Okay, so here’s something interesting: some companies use something called a “warrant canary” to quietly let us know if they’ve received a top-secret government request for data. Here’s the deal…it’s illegal for them to simply tell us, “Hey, the government’s snooping around.” So, instead, they publish a simple statement that says something like, “As of January 2026, we haven’t received any secret orders for your data.”

The clever part is that they update this statement on a regular basis. If it suddenly disappears or just stops getting updated, it could mean the company got hit with one of these hush-hush requests and legally can’t talk about it. It’s like the digital version of a warning signal. It is nothing flashy, but if you’re paying attention, you’ll spot when something changes.

It’s not a perfect system (and who knows what the courts will think of it in the future), but a warrant canary is one-way companies try to be on our side, finding ways to keep us in the loop even when they’re told to stay silent. So, give an extra ounce of trust to companies that publish these regularly.

Where privacy-first VPNs are heading

Expect to see continued evolution: new cryptography built for a post-quantum world, more transparency from providers, decentralized and community-run VPN options, and tighter integration with secure messaging, encrypted DNS, and whatever comes next.

It’s also worth keeping an eye on how governments respond to rising VPN use. In the UK, for example, new age-verification rules triggered a huge spike in VPN sign-ups and a public debate about whether VPN usage should be monitored more closely. There’s no proposal to restrict or ban VPNs, but the conversation is active.

If you care about your privacy online, don’t settle for slick marketing. Look for the real foundations like modern protocols, owned and well-managed infrastructure, RAM-only servers, regular audits, and a culture that treats transparency as a habit, not a stunt.

Privacy is engineered, not simply promised. With the right VPN, you stay in control of your digital life instead of hoping someone else remembers to keep your secrets safe.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

DroidLock malware locks you out of your Android device and demands ransom

Researchers have analyzed a new threat campaign actively targeting Android users. The malware, named DroidLock, takes over a device and then holds it for ransom. The campaign to date has primarily targeted Spanish-speaking users, but researchers warn it could spread.

DroidLock is delivered via phishing sites that trick users into installing a malicious app pretending to be, for example, a telecom provider or other familiar brand. The app is really a dropper that installs malware able to take complete control of the device by abusing Device Admin and Accessibility Services permissions.

Once the victim grants accessibility permission, the malware starts approving additional permissions on its own. This can include access to SMS, call logs, contacts, and audio, which gives attackers more leverage in a ransom demand.

DroidLock also leverages Accessibility Services to create overlays on other apps. The overlays can capture device unlock patterns (giving the attacker full access) and also show a fake Android update screen, instructing victims not to power off or restart their devices.

DroidLock uses Virtual Network Computing (VNC) for remote access and control. With this, attackers can control the device in real time, including starting camera, muting sound, manipulating notifications, and uninstalling apps, and use overlays to capture lock patterns and app credentials. They can also deny access to the device by changing the PIN.

The researchers warn that:

“Once installed, DroidLock can wipe devices, change PINs, intercept OTPs (One-Time Passwords), and remotely control the user interface”

Unlike regular ransomware, DroidLock doesn’t encrypt files. But by blocking access and threatening to destroy everything unless a ransom is paid, it reaches the same outcome.

ransom note
Image courtesy of Zimperium

Urgent
Last chance
Time remaining {starts at 24 hours}
After this all files wil be deleted forever!
Your files will be permanently destroyed!
Contact us immediately at this email or lose everything forever: {email address}
Include your device ID {ID}
Payment required within 24 hours
No police, no recovery tools, no tricks
Every second counts!

How to stay safe

If this campaign turns out to be successful in Spain, we’ll undoubtedly see it emerge in other countries as well. So here are a few pointers to stay safe:

  • Only install apps from official app stores and avoid installing apps promoted in links in SMS, email, or messaging apps.
  • Before installing apps, verify the developer name, number of downloads, and user reviews rather than trusting a single promotional link.
  • Protect your devices. Use an up-to-date real-time anti-malware solution like Malwarebytes for Android, which already detects this malware.
  • Scrutinize permissions. Does an app really need the permissions it’s requesting to do the job you want it to do? Especially if it asks for accessibility, SMS, or camera access.
  • Keep Android, Google Play services, and all important apps up to date to get the latest security fixes.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Malwarebytes for Mac now has smarter, deeper scans 

Say hello to the upgraded Malwarebytes for Mac—now with more robust protection, more control, and the same trusted defense you count on every day.

We’ve given our Mac scan engine a serious intelligence boost, so it thinks faster and digs deeper. The new enhanced scan searches across more of your system to hunt down even the most advanced threats, from stealthy infostealers to zero-hour malware, all while keeping the straightforward experience you love. 

But that’s not all. We’ve also achieved a major performance boost, with up to 90% lower CPU usage for Malwarebytes for Mac.

What’s new 

The upgrade comes with three new scan options designed to fit the way you work: 

Malwarebytes Mac New Scan Options 1
  • Quick scan: A speedy sweep of the usual suspects. 
  • Threat scan: A full system check that is now your default. 
  • Custom scan: Total control, letting you choose exactly what to scan, including folders and external drives. 
Malwarebytes Schedule custom mac scan

It’s smarter protection that adapts to your needs. 

What to expect 

Your first enhanced scan may take a little longer. That’s because it’s covering more of your system than ever before to make sure nothing slips through the cracks. And with external drive scanning and WiFi security alerts, there is nowhere for viruses, infostealers, or spyware to linger.

After that, you’ll notice the difference. Scans will feel faster, lighter, and more intuitive. 

In fact, the always-on, automated protection from Malwarebytes for Mac has always kept your Mac safe by monitoring every file you open, download, or save. Now, we have made it significantly more efficient. Our latest enhancements reduced CPU usage by up to 90%. What that means for you is a faster, snappier, and more responsive experience.

Mac CPU improvement

No action needed. Your protection just got better. 

You don’t have to lift a finger; your protection simply levels up. Open Malwarebytes and explore the new scan options when you’re ready. Don’t see them yet? Make sure you’re on the latest version (5.18.2) under Profile → About Malwarebytes. If you aren’t, go to the Malwarebytes menu and select Check for updates.

Welcome to the next era of Mac security from Malwarebytes. More robust coverage, harnessing the same trusted protection you know, directly in your control. 

Another Chrome zero-day under attack: update now

Google issued an extra patch for a security vulnerability in Chrome that is being actively exploited, and it’s urging users to update. The patch fixes two flaws in Chrome’s V8 engine, and for one of them Google says an exploit already exists in the wild.

Both issues are described as “type confusion” vulnerabilities in the V8 JavaScript engine. These occur when Chrome doesn’t verify the object type it’s handling and then uses it incorrectly. In other words, the browser mistakes one type of data for another—like treating a list as a single value or a number as text. That can cause unpredictable behavior and, in some cases, let attackers manipulate memory and run code remotely through crafted JavaScript on a malicious or compromised website.

So, these vulnerabilities lie in the part of Chrome you use the most: browsing. And since Chrome is by far the world’s most popular browser, with an estimated 3.4 billion users, that makes for a massive target. When Chrome has a security flaw that can be triggered just by visiting a website, billions of users are exposed until they update.

That’s why it’s important to install these patches promptly. Staying unpatched means you could be at risk just by browsing the web. Attackers often exploit these kinds of flaws before most users have a chance to update. Always let Chrome update itself, and don’t delay restarting it as updates usually fix exactly this kind of risk.

How to update Chrome

The latest version number is 144.0.7444.175/.176 for Windows, 144.0.7444.175 for Linux, and 144.0.7444.176 for macOS. So, if your Chrome is on version 144.0.7444.175 or later, it’s protected from these vulnerabilities.

The easiest way to update is to allow Chrome to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To update manually, click the More menu (three dots), then go to Settings > About Chrome. If an update is available, Chrome will start downloading it. Restart Chrome to complete the update, and you’ll be protected against these vulnerabilities.

You can also find step-by-step instructions in our guide to how to update Chrome on every operating system.

Chrome is up to date

2025 exploited zero-days in Chrome

Public reporting indicates that Chrome has seen at least seven zero-days exploited in 2025, several of them in the V8 JavaScript engine and some linked to targeted espionage.

So, 2025 has been a relatively busy year for Chrome zero‑days.

In March, a sandbox escape tracked as CVE‑2025‑2783 showed up in espionage operations against Russian targets.

May brought more bad news: an account‑hijacking flaw (CVE‑2025‑4664), followed in June by multiple V8 issues (including CVE‑2025‑5419 and CVE‑2025‑6558) that let attackers run code in the browser and in some cases hop over the sandbox boundary.

September added a V8 type‑confusion bug (CVE‑2025‑10585) serious enough to justify another out‑of‑band patch.

And with this latest update, Google has patched CVE-2025-13223, reported by Google’s Threat Analysis Group (TAG), which focuses on spyware and nation-state attackers who regularly use zero-days for espionage.

If we’re lucky, this update will close out 2025’s run of Chrome zero-days.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

December Patch Tuesday fixes three zero-days, including one that hijacks Windows devices

These updates from Microsoft fix serious security issues, including three that attackers are already exploiting to take control of Windows systems.

In total, the security update resolves 57 Microsoft security vulnerabilities. Microsoft isn’t releasing new features for Windows 10 anymore, so Windows 10 users will only see security updates and fixes for bugs introduced by previous security updates.

What’s been fixed

Microsoft releases important security updates on the second Tuesday of every month—known as “Patch Tuesday.” This month’s patches fix critical flaws in Windows 10, Windows 11, Windows Server, Office, and related services.

There are three zero‑days: CVE‑2025‑62221 is an actively exploited privilege‑escalation bug in the Windows Cloud Files Mini Filter Driver. Two are publicly disclosed flaws: CVE-2025-64671, which is a GitHub Copilot for JetBrains remote code execution (RCE) vulnerability, and CVE‑2025‑54100, an RCE issue in Windows PowerShell.

PowerShell received some extra attention, as from now on users will be warned whenever the Invoke‑WebRequest command fetches web pages without safe parameters.​

The warning is to prevent accidental script execution from web content. It highlights the risk that script code embedded in a downloaded page might run during parsing, and recommends using the -UseBasicParsing switch to avoid running any page scripts.

There is no explicit statement from Microsoft tying the new Invoke‑WebRequest warning directly to ClickFix, but it clearly addresses the abuse pattern that ClickFix and similar campaigns rely on: tricking users into running web‑fetched PowerShell code without understanding what it does.

How to apply fixes and check you’re protected

These updates fix security problems and keep your Windows PC protected. Here’s how to make sure you’re up to date:

1. Open Settings

  • Click the Start button (the Windows logo at the bottom left of your screen).
  • Click on Settings (it looks like a little gear).

2. Go to Windows Update

  • In the Settings window, select Windows Update (usually at the bottom of the menu on the left).

3. Check for updates

  • Click the button that says Check for updates.
  • Windows will search for the latest Patch Tuesday updates.
  • If you have selected automatic updates earlier, you may see this under Update history:
Successfully installed security updates
  • Or you may see a Restart required message, which means all you have to do is restart your system and you’re done updating.
  • If not, continue with the steps below.

4. Download and Install

  • If updates are found, they’ll start downloading right away. Once complete, you’ll see a button that says Install or Restart now.
  • Click Install if needed and follow any prompts. Your computer will usually need a restart to finish the update. If it does, click Restart now.

5. Double-check you’re up to date

  • After restarting, go back to Windows Update and check again. If it says You’re up to date, you’re all set!
You're up to date

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

GhostFrame phishing kit fuels widespread attacks against millions

GhostFrame is a new phishing-as-a-service (PhaaS) kit, tracked since September 2025, that has already powered more than a million phishing attacks.

Threat analysts spotted a series of phishing attacks featuring tools and techniques they hadn’t seen before. A few months later, they had linked over a million attempts to this same kit, which they named GhostFrame for its stealthy use of iframes. The kit hides its malicious activity inside iframes loaded from constantly changing subdomains.

An iframe is a small browser window embedded inside a web page, allowing content to load from another site without sending you away–like an embedded YouTube video or a Google Map. That embedded bit is usually an iframe and is normally harmless.

GhostFrame abuses it in several ways. It dynamically generates a unique subdomain for each victim and can rotate subdomains even during an active session, undermining domain‑based detection and blocking. It also includes several anti‑analysis tricks: disabling right‑click, blocking common keyboard shortcuts, and interfering with browser developer tools, which makes it harder for analysts or cautious users to inspect what is going on behind the scenes.

As a PhaaS kit, GhostFrame is able to spoof legitimate services by adjusting page titles and favicons to match the brand being impersonated. This and its detection-evasion techniques show how PhaaS developers are innovating around web architecture (iframes, subdomains, streaming features) and not just improving email templates.

Hiding sign-in forms inside non‑obvious features (like image streaming or large‑file handlers) is another attempt to get around static content scanners. Think of it as attackers hiding a fake login box inside a “video player” instead of putting the login box directly on the page, so many security tools don’t realize it’s a login box at all. Those tools are often tuned to look for normal HTML forms and password fields in the page code, and here the sensitive bits are tucked away in a feature that is supposed to handle big image or file data streams.

Normally, an image‑streaming or large‑file function is just a way to deliver big images or other “binary large objects” (BLOBs) efficiently to the browser. Instead of putting the login form directly on the page, GhostFrame turns it into what looks like image data. To the user, it looks just like a real Microsoft 365 login screen, but to a basic scanner reading the HTML, it looks like regular, harmless image handling.

Generally speaking, the rise of GhostFrame illuminates a trend that PhaaS is arming less-skilled cybercriminals while raising the bar for defenders. We recently covered Sneaky 2FA and Lighthouse as examples of PhaaS kits that are extremely popular among attackers.

So, what can we do?

Pairing a password manager with multi-factor authentication (MFA) offers the best protection.

But as always, you’re the first line of defense. Don’t click on links in unsolicited messages of any type before verifying and confirming they were sent by someone you trust. Staying informed is important as well, because you know what to expect and what to look for.

And remember: it’s not just about trusting what you see on the screen. Layered security stops attackers before they can get anywhere.

Another effective security layer to defend against phishing attacks is Malwarebytes’ free browser extension, Browser Guard, which detects and blocks phishing attacks heuristically.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Prompt injection is a problem that may never be fixed, warns NCSC

Prompt injection is shaping up to be one of the most stubborn problems in AI security, and the UK’s National Cyber Security Centre (NCSC) has warned that it may never be “fixed” in the way SQL injection was.

Two years ago, the NCSC said prompt injection might turn out to be the “SQL injection of the future.” Apparently, they have come to realize it’s even worse.

Prompt injection works because AI models can’t tell the difference between the app’s instructions and the attacker’s instructions, so they sometimes obey the wrong one.

To avoid this, AI providers set up their models with guardrails: tools that help developers stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to explain how to produce anthrax spores at scale, guardrails would ideally detect that request as undesirable and refuse to acknowledge it.

Getting an AI to go outside those boundaries is often referred to as jailbreaking. Guardrails are the safety systems that try to keep AI models from saying or doing harmful things. Jailbreaking is when someone crafts one or more prompts to get around those safety systems and make the model do what it’s not supposed to do. Prompt injection is a specific way of doing that: An attacker hides their own instructions inside user input or external content, so the model follows those hidden instructions instead of the original guardrails.

The danger grows when Large Language Models (LLMs), like ChatGPT, Claude or Gemini, stop being chatbots in a box and start acting as “autonomous agents” that can move money, read email, or change settings. If a model is wired into a bank’s internal tools, HR systems, or developer pipelines, a successful prompt injection stops being an embarrassing answer and becomes a potential data breach or fraud incident.

We’ve already seen several methods of prompt injection emerge. For example, researchers found that posting embedded instructions on Reddit could potentially get agentic browsers to drain the user’s bank account. Or attackers could use specially crafted dodgy documents to corrupt an AI. Even seemingly harmless images can be weaponized in prompt injection attacks.

Why we shouldn’t compare prompt injection with SQL injection

The temptation to frame prompt injection as “SQL injection for AI” is understandable. Both are injection attacks that smuggle harmful instructions into something that should have been safe. But the NCSC stresses that this comparison is dangerous if it leads teams to assume that a similar one‑shot fix is around the corner.

The comparison to SQL injection attacks alone was enough to make me nervous. The first documented SQL injection exploit was in 1998 by cybersecurity researcher Jeff Forristal, and we still see them today, 27 years later. 

SQL injection became manageable because developers could draw a firm line between commands and untrusted input, and then enforce that line with libraries and frameworks. With LLMs, that line simply does not exist inside the model: Every token is fair game for interpretation as an instruction. That is why the NCSC believes prompt injection may never be totally mitigated and could drive a wave of data breaches as more systems plug LLMs into sensitive back‑ends.

Does this mean we have set up our AI models wrong? Maybe. Under the hood of an LLM, there’s no distinction made between data or instructions; it simply predicts the most likely next token from the text so far. This can lead to “confused deputy attacks.”

The NCSC warns that as more organizations bolt generative AI onto existing applications without designing for prompt injection from the start, the industry could see a surge of incidents similar to the SQL injection‑driven breaches of 10—15 years ago. Possibly even worse, because the possible failure modes are uncharted territory for now.

What can users do?

The NCSC provides advice for developers to reduce the risks of prompt injection. But how can we, as users, stay safe?

  • Take advice provided by AI agents with a grain of salt. Double-check what they’re telling you, especially when it’s important.
  • Limit the powers you provide to agentic browsers or other agents. Don’t let them handle large financial transactions or delete files. Take warning from this story where a developer found their entire D drive deleted.
  • Only connect AI assistants to the minimum data and systems they truly need, and keep anything that would be catastrophic to lose out of their control.
  • Treat AI‑driven workflows like any other exposed surface and log interactions so unusual behavior can be spotted and investigated.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

EU fines X $140m, tied to verification rules that make impostor scams easier

The European Commission slapped social networking company X with a €120 million ($140 million) fine last week for what it says was a lack of transparency with its European users.

The fine, the first ever penalty under the EU’s landmark Digital Services Act, addressed three specific violations with allocated penalties.

The first was a deceptive blue checkmark system. X touted this feature, first introduced by Musk when he bought Twitter in 2022, as a way to verify your identity on X. However, the Commission accused it of failing to actually verify users. It said:

“On X, anyone can pay to obtain the ‘verified’ status without the company meaningfully verifying who is behind the account, making it difficult for users to judge the authenticity of accounts and content they engage with.”

The company also blocked researchers from accessing its public data, the Commission complained, arguing that it undermined research into systemic risks in the EU.

Finally, the fine covers a lack of transparency around X’s advertising records. Its advertising repository doesn’t support the DSA’s standards, the Commission said, accusing it of lacking critical information such as advertising topic and content.

This makes it more difficult for researchers and the public to evaluate potential risks in online advertising according to the Commission.

Before Musk took over Twitter and renamed it to X, the company would independently verify select accounts using information including institutional email addresses to prove the owners’ identities. Today, you can get a blue checkmark that says you’re verified for $8 per month if you have an account on X that has been active for 30 days and can prove you own your phone number. X killed off the old verification system, with its authentic, notable, and active requirement, on April 1, 2023.

An explosion in imposter accounts

The tricky thing about weaker verification measures is that people can abuse them. Within days of Musk announcing the new blue checkmark verifications, someone registered a fake account for pharmaceutical company Eli Lilly and tweeted “insulin is free now”, tanking the stock over 4%.

Other impersonators verifying fake accounts at the time targeted Tesla, Trump, and Tony Blair, among others.

Weak verification measures are especially dangerous in an era where fake accounts are rife. Many people have fallen victim to fake social media accounts that scammers set up to impersonate legitimate brands’ customer support.

Musk, who threatened a court battle when the EC released its preliminary findings on the investigation last year, confirmed that X deactivated the EC’s advertising account in retaliation, but also called for the abolition of the EU.

This isn’t the social media company’s first tussle with regulators. In May 2022, before Musk bought it, Twitter settled with the FTC and DoJ for $150 million over allegations that it used peoples’ non-public security numbers for targeted advertising.

There are also other ongoing DSA-related investigations into X. The EU is probing its recommendation system. Ireland is looking into its handling of customer complaints about online content.

What comes next

X has 60 working days to address the checkmark violations and 90 days for advertising and researcher access, although given Musk’s previous commentary we wouldn’t be surprised to see him take the EU to court.

Failure to comply would trigger additional periodic penalties. The DSA allows fines up to 6% of global revenue.

Meanwhile, the core problem persists: anyone can still buy a ‘verified’ checkmark from X with extremely weak verification. So if anyone with a blue checkmark contacts you on the platform, don’t take their authenticity for granted.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.