IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Scammers are still sending us their fake Robinhood security alerts

A short while ago, our friends at Malwaretips wrote about a text scam impersonating Robinhood, a popular US-based investment app that lets people trade stocks and cryptocurrencies. The scam warns users about supposed “suspicious activity” on their accounts.

As if to demonstrate that this phishing campaign is still very much alive, one of our employees received one of those texts.

[Screenshot of the scam text message]

“Alert!

Robinhood Securities Risk Warning:

Our automated security check system has detected anomalies in your account, indicating a potential theft. A dedicated security check link is required for review. Please click the link below to log in to your account and complete the security check.

Immediate Action: https://www-robinhood.cweegpsnko[.]net/Verify

(If the link isn’t clickable, reply Y and reopen this message to click the link, or copy it into your browser.)

Robinhood Securities Official Security Team”

As usual, we see some red flags:

  • Foreign number: The country code +243 belongs to the Democratic Republic of the Congo, not the US, where the real Robinhood is based.
  • Urgency: The phrase “Immediate Action” is designed to pressure you.
  • Fake domain: The URL only mimics the legitimate robinhood.com website; the actual domain is cweegpsnko[.]net.
  • Reply: The instructions to reply “Y” if a link isn’t clickable are a common phishing tactic.
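The “fake domain” red flag lends itself to a quick automated check. The sketch below is a naive heuristic, not a production detector: it treats the last two labels of the hostname as the registered domain, whereas a real checker would consult the Public Suffix List.

```python
from urllib.parse import urlparse

LEGIT_DOMAIN = "robinhood.com"

def looks_like_spoof(url: str, legit_domain: str = LEGIT_DOMAIN) -> bool:
    """Flag URLs whose hostname mentions the brand but whose
    registered domain is not the real one.

    Naive heuristic: treat the last two labels as the registered
    domain. A production checker would use the Public Suffix List.
    """
    host = (urlparse(url).hostname or "").lower()
    registered = ".".join(host.split(".")[-2:])
    brand = legit_domain.split(".")[0]
    return brand in host and registered != legit_domain

# The scam URL from the text (defanging brackets removed):
print(looks_like_spoof("https://www-robinhood.cweegpsnko.net/Verify"))  # True
print(looks_like_spoof("https://robinhood.com/login"))                  # False
```

Even this crude check catches the lookalike hostname above while passing the genuine site.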

But if the target follows the instructions and visits the link, they will find a reasonably convincing copy of Robinhood’s login page. It isn’t automatically localized like the real one, but a US-based target would be unlikely to notice the difference. Logging in there hands the scammers your Robinhood login credentials and allows them to clean out your account.

According to Malwaretips, some of the fake websites even redirected you to the legitimate site after showing the “verification complete” message.

They also warned that some scammers will try to harvest additional personal data from the account, including:

  • Tax documents
  • Full name
  • Social Security Number (if on file)
  • Bank account information

How to stay safe

What to do if you receive texts like these

The best tip to stay safe is to make sure you’re aware of the latest scam tactics. Since you’re reading our blog, you’re off to a good start.

  • Never reply to or follow links in unsolicited security alert texts, calls, or emails, even if they look urgent.
  • Never share your Social Security number or banking details with anyone who contacts you out of the blue about your account.
  • Go direct. If in doubt, contact the company through official channels.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit suspicious messages like these to Malwarebytes Scam Guard, which instantly flags known scams?

What to do if you clicked the phishing link

Indicators of compromise (IOCs)

www-robinhood.cweegpsnko[.]net

www-robinhood.fflroyalty[.]com

robinhood-securelogin[.]com

robinhood-verification[.]net


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Satellites leak voice calls, text messages and more

Scientists from several US universities intercepted unencrypted broadcasts from geostationary satellites using only off-the-shelf equipment on a university rooftop.

Geostationary satellites orbit at the same rate as the Earth’s rotation, so they appear to hover above the same exact location. To maintain this position, they orbit at an altitude of roughly 22,000 miles (36,000 kilometers).

This makes them ideal for relaying phone calls, text messages, and internet data. Since these satellites can cover vast areas—including remote and hard-to-reach areas—they provide reliable connectivity for everything from rural cell towers to airplanes and ships, even where cables don’t reach.

That same stability makes them convenient for people who want to eavesdrop, because you only need to point your equipment once. The researchers who did this described their findings in a paper called “Don’t Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites.”

The team used consumer-grade equipment to scan IP traffic on 39 GEO satellites spanning 25 distinct longitudes and 411 transponders. About half of the signals they captured contained cleartext IP traffic.

This means there was no encryption at either the link layer or the network layer. This allowed the team to observe internal communications from organizations that rely on these satellites to connect remote critical infrastructure and field operations.

Among the intercepted data were private voice calls, text messages, and call metadata sent through cellular backhaul—the data that travels between cell towers and the central network.

Commercial and retail organizations transmitted inventory records, internal communications, and business data over these satellite links. Banks leaked ATM-related transactions and network management commands. Entertainment and aviation communications were also intercepted, including in-flight entertainment audio and aircraft data.

The researchers also captured industrial control signals for utility infrastructure, including job scheduling and grid monitoring commands. US and Mexican military communications were exposed, revealing asset tracking information and operational details such as surveillance data for vessel movements.

The research reveals a pervasive lack of standardized encryption, leaving much of this traffic open to interception by any technically capable individual with suitable equipment. The researchers concluded that despite the sensitive nature of the data, satellite communication security is often neglected, creating substantial opportunities for eavesdropping, espionage, and potential misuse.

The researchers stated:

“There is a clear mismatch between how satellite customers expect data to be secured and how it is secured in practice; the severity of the vulnerabilities we discovered has certainly revised our own threat models for communications.”

After the scientists reported their findings, T-Mobile took steps to address the issue, but other unnamed providers have yet to patch the vulnerabilities.

This study highlights the importance of making sure your communications are encrypted before they leave your devices. Do not rely solely on providers to keep your data safe. Use secure communication apps like Signal or WhatsApp, choose voice-over-internet (VoIP) providers that encrypt calls and messages, and protect your internet data with a VPN that creates a secure, encrypted tunnel.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

AI-driven scams are preying on Gen Z’s digital lives

Gone are the days when extortion was only the plot line of crime dramas—today, these threatening tactics target anyone with a smartphone. As AI makes fake voices and videos sound and look real, high-pressure plays like sextortion, deepfakes, and virtual kidnapping feel more believable than ever before, tricking even the most digitally savvy users. Gen Z and Millennials are most at risk, accounting for two in three victims of extortion scams. These scammers prey on what’s personal, wreaking havoc on their victims’ privacy, reputations, and peace of mind.

[Extortion infographic 1]

Our latest research shows that one in three mobile users has been targeted by an extortion scam, and nearly one in five has fallen victim. Gen Z is hit hardest: more than half (58%) have been targets, and over 1 in 4 (28%) have been a victim. Sextortion—threatening to leak nude photos or videos or expose pornographic search history—is particularly notable, with one in six mobile users reporting they’ve been a target. Among Gen Z, that number jumps to 38%.

Five things to know about mobile extortion scams

1. Who’s most at risk: Gen Z and Millennials with a risk-tolerant profile

Compared to victims and targets of other types of mobile scams, extortion victims tend to be younger, male, and mobile-first. Their profile:

  • Young: 69% of victims and 64% of targets are Gen Z or Millennial (vs. 52%/40% of victims and targets of other types of scams, respectively)
  • Male: 65% of victims and 60% of targets are male (vs. 48%/45%)
  • Parents: 45% of victims and 41% of targets are parents (vs. 36%/26%)
  • Minorities: 53% of victims are non-white (vs. 39%)
  • Mobile-first: 52% of victims and 46% of targets agree “I’m more likely to click a link on my phone than on my laptop” (vs. 42%/36%)

However, this simply shows how targets and victims skew. Behaviors typically play a bigger role in overall risk.

2. What the damage looks like: emotional and deeply personal

Extortion criminals use personal, high-stakes threats in their scams. Victims and targets of extortion scams in our survey report experiences ranging from scammers threatening to expose nude photos and videos to claims that a family member was in an accident.

These personalized, high-pressure threats make extortion victims especially vulnerable, and while victims of all mobile scams suffer serious emotional, financial, and functional fallout at the hands of their scammers, extortion victims experience outsized impact:

  • Nearly 9 in 10 extortion victims reported emotional harm because of the scam they experienced
  • 35% experienced blackmail or harassment
  • 21% experienced damage to their reputation
  • 19% faced consequences at work or school

Even when targets don’t fall victim, the threats alone can cause emotional harm:

“I didn’t lose anything, I was just scared because they wanted to inform all my friends, family, and employers how perverted I was because I supposedly watched porn.”   

—Gen Z survey respondent, DACH region

[Extortion infographic 2]

3. Why it’s getting worse: AI is raising the stakes

AI is increasingly good at making fake feel real, giving criminals even more of an advantage when manipulating and extorting victims. One in five mobile users has been the target of a deepfake scam and nearly as many have encountered a virtual kidnapping scam (a decades-old tactic that now often uses AI voice cloning). Two in five (43%) Gen Z users have been a target of one of these.

 Who AI scams hit: Victims and targets skew Gen Z and iPhone users with a deep digital footprint. This could leave their personal information, images, or even voice more accessible to cybercriminals who want to use it as part of a scam.

  • Gen Z: 45% (vs. 31% for extortion victims and targets overall)
  • iPhone users: 62% (vs. 51% overall)
  • Data sharers*: 81% (vs. 71% overall)

*Agree with the statement: “I understand that sharing personal information with apps, on social media, or on messaging services can be risky, but I am okay with that risk”

[Extortion infographic 3]

So why might exposure be higher for Gen Z? Digital natives are most entrenched in mobile-first behavior and most active in low-oversight casual commerce (DMing for deals, using buy/sell/trade groups, clicking on ads to purchase or download, sending money for a future service). They also show up more on alternative platforms like Discord, Tumblr, Twitch, and Mastodon, where identity checks are lighter and parasocial trust runs high, creating a sweet spot for scammers. 

“The scammer makes you believe it is a legit conversation. They/He/She talk to you like they know you. Trying to convince you they are supporting/helping you in some way to fix something. When they are just fishing for more information!”

— Gen Z US Survey Respondent   

For victims of AI-driven scams, the fallout is even more extreme: 32% suffered reputation damage (vs. 21% for extortion victims overall), 29% suffered work/school consequences (vs. 11%), 24% had their personal information stolen (vs. 14%), and 21% had financial accounts opened in their name (vs. 13%), underscoring the threat of these evolving scams. 

4. Where the risk lives: constant, cross-channel exposure   

Scammers know the more they approach a target, the more likely they are to create a victim. 78% of extortion victims and 63% of targets experience scam attempts daily (vs. 44%/36% in other scam groups), driving alert fatigue and making it more likely that a scammer will slip through the cracks.

Extortion victims and targets also over-index on using informal buying and selling channels—spaces like social media where identity is fuzzy, protections are lacking, and decisions are quick. Being in more casual spaces more frequently increases the odds of a scam landing for anyone.

5. How mindset shapes risk: overconfident and under-protected

Seven in ten extortion victims say they’re confident they can spot a scam, more than half believe they could recoup any financial losses, and most trust their phone’s safety features. At the same time, many victims and targets simply don’t worry about mobile scams at all, resulting in a lack of protective measures. Adoption of security basics (security software, strong/unique passwords, multi-factor authentication, timely system updates, permission hygiene, data backups) remains low, even after painful firsthand experience.

How to cut the risk

Most of us use our phones to shop, find deals, and pay—and we deserve to be able to do that safely. Adopting preventative security measures (such as using mobile security software), practicing good mobile hygiene (such as checking app permissions), and remembering STOP, our simple scam response framework, can keep scammers at bay: 

S—Slow down: Don’t let urgency or pressure push you into action. Take a breath before responding. Legitimate businesses like your bank or credit card company don’t push immediate action.

T—Test them: If you answered the phone and are feeling panicked about the situation, likely involving a family member or friend, ask a question only the real person would know—something that can’t be found online.  

O—Opt out: If it feels off, hang up or end the conversation. You can always say the connection dropped.

P—Prove it: Confirm the person is who they say they are by reaching out yourself through a trusted number, website, or method you’ve used before.

The criminals behind extortion scams pour time and money into targeting their victims, constantly evolving their tactics to make the scams more believable and hard-hitting. If you’ve been the victim of an extortion scam, sharing your story can help others spot the signs before it’s too late, reduce the stigma of being a victim, and put the shame where it belongs: on the criminals.

As Malwarebytes Global Head of Scam and AI Research Shahak Shalev puts it:

“If we can remove the stigma and silence around scams, I think we can help everyone take a step back and pause before acting on one of these threats”


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Pixel-stealing “Pixnapping” attack targets Android devices

Researchers at US universities have demonstrated how a malicious Android app can trick the system into leaking pixel data. That may sound harmless, but imagine if a malicious app on your Android device could glimpse tiny bits of information on your screen—even the parts you thought were secure, like your two-factor authentication (2FA) codes.

That’s the chilling idea behind the “Pixnapping” attacks described in a research paper from the University of California (Berkeley and San Diego), the University of Washington, and Carnegie Mellon University.

A pixel is one of the tiny colored dots that make up what you see on your device’s display. The researchers built a pixel-stealing framework that bypasses all browser protections and can even lift secrets from non-browser apps such as Google Maps, Signal, and Venmo—as well as websites like Gmail. It can even steal 2FA codes from Google Authenticator.

Pixnapping is a classic side-channel attack—stealing secrets not by breaking into software, but by observing physical clues that devices give off during normal use. Pixel-stealing ideas date back to 2013, but this research shows new tricks for extracting sensitive data by measuring how specific pixels behave.
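Pixnapping itself measures graphics timing, but the core idea of a side channel can be illustrated with a much simpler classic (an analogy, not the Pixnapping technique): a string comparison that exits early leaks, through the amount of work it does, how many leading characters of a guess are correct.

```python
def naive_compare(secret: str, guess: str) -> tuple[bool, int]:
    """Early-exit comparison; returns (match, number of character
    comparisons performed). The work count is the side channel:
    it reveals the length of the correct prefix."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops
    return secret == guess, ops

# The attacker never sees the secret, only how long each guess takes:
_, t1 = naive_compare("123456", "000000")  # wrong at position 1
_, t2 = naive_compare("123456", "120000")  # wrong at position 3
print(t1, t2)  # 1 3 -> the longer-running guess shares a longer prefix
```

Pixnapping applies the same principle, observing how long graphics operations take depending on the color of individual on-screen pixels.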

The researchers tested their framework on modern Google Pixel phones (6, 7, 8, 9) and a Samsung Galaxy S25 and succeeded in stealing secrets from both browsers and non-browser apps. They disclosed the findings to Google and Samsung in early 2025. As of October 2025, Google has patched part of the vulnerability, but some workarounds remain and both companies are still working on a full fix. Other Android devices may also be vulnerable.

The technical knowledge required to perform such an attack is enormous. This isn’t “script kiddie” territory: Attackers would need deep knowledge of Android internals and graphics hardware. But once developed, a Pixnapping app could be disguised as something harmless and distributed like any other piece of Android malware.

To perform an attack, someone would have to convince or trick the target into installing the malicious app on their device.

This app abuses Android Intents—a fundamental part of how apps communicate and interact with each other on Android devices. You can think of an intent like a message, or request, that one app sends either to another app or to the Android operating system itself, asking for something to happen.

The malicious app’s programming will stack nearly transparent windows over the app it wants to spy on and watch for subtle timing signals that depend on pixel color.

It doesn’t take long—the paper shows it can steal temporary 2FA codes from Google Authenticator in under 30 seconds. Once stolen, the data is sent to a command-and-control (C2) server controlled by the attacker.
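For context on why a 30-second window is enough: authenticator apps generate time-based one-time passwords (TOTP, RFC 6238) from a shared secret and the current 30-second time step, so a stolen code only works until the step rolls over. A minimal sketch of the standard algorithm:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = timestamp // step  # which 30-second window we are in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Any timestamp inside the same 30-second window yields the same code, which is exactly why exfiltrating a code within 30 seconds gives the attacker a usable login factor.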

How to stay safe

From the steps it takes to perform such an attack, we can derive some measures that can keep your 2FA codes and other secrets safe.

  1. Update regularly: Make sure your device and apps have the latest security updates. Google and Samsung are rolling out fixes; don’t ignore those update prompts. The underlying vulnerability is tracked as CVE-2025-48561.
  2. Be cautious installing apps: Only install apps from trusted sources like Google Play and check reviews and permissions before installing. Avoid sideloading unknown APKs and ask yourself if the permissions an app asks for are really needed for what you want it to do.
  3. Review permissions: Android improved its permission system, but check regularly what apps can do, and don’t hesitate to remove permissions of the ones you don’t use often.
  4. Use app screenshots wisely: Don’t store or display sensitive info (like codes, addresses, or logins) in apps unless needed, and close apps after use.
  5. Monitor security news: Look for announcements from Google and Samsung about patches for this vulnerability, and act on them.
  6. Enable Play Protect: Keep Play Protect active to help spot malicious apps before they’re installed.
  7. Use up-to-date real-time anti-malware protection on your Android device, preferably with a web protection module.

If you’re worried about your 2FA codes getting stolen, consider switching to hardware token 2FA options.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Researchers break OpenAI guardrails

The maker of ChatGPT released a toolkit to help protect its AI from attack earlier this month. Almost immediately, someone broke it.

On October 6, OpenAI ran an event called DevDay where it unveiled a raft of new tools and services for software programmers who use its products. As part of that, it announced a tool called AgentKit that lets developers create AI agents using its ChatGPT AI technology. Agents are specialized AI programs that can tackle narrow sets of tasks on their own, making more autonomous decisions. They can also work together to automate tasks (such as, say, finding a good restaurant in a city you’re traveling to and then booking you a table).

Agents like this are more powerful than earlier versions of AI that would do one task and then come back to you for the next set of instructions. That’s partly what inspired OpenAI to include Guardrails in AgentKit.

Guardrails is a set of tools that help developers to stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to tell you how to produce anthrax spores at scale, Guardrails would ideally detect that request and refuse it.

People often try to get AI to break its own rules using something called “jailbreaking”. There are various jailbreaking techniques, but one of the simplest is role-playing. If a person asks for instructions to make a bomb, the AI will likely refuse, but if they then tell the AI it’s just for a novel they’re writing, it might comply. Organizations like OpenAI that produce powerful AI models are constantly figuring out ways that people might try to jailbreak their models using techniques like these, and building new protections against them. Guardrails is their attempt to open those protections up to developers.

As with any new security mechanism, researchers quickly tried to break Guardrails. In this case, AI security company HiddenLayer had a go, and conquered the jailbreak protection pretty quickly.

ChatGPT is a large language model (LLM), which is a statistical model trained on so much text that it can answer your questions like a human. The problem is that Guardrails is also based on an LLM, which it uses to analyze requests that people send to the LLM it’s protecting. HiddenLayer realized that if an LLM is protecting an LLM, then you could use the same kind of attack to fool both.

To do this, they used what’s known as a prompt injection attack. That’s where you insert text into a prompt that contains carefully coded instructions for the AI.

The Guardrails LLM analyzes a user’s request and assigns a confidence score to decide whether it’s a jailbreak attempt. HiddenLayer’s team crafted a prompt that persuaded the LLM to lower its confidence score, so that they could get it to accept their normally unacceptable prompt.
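OpenAI hasn’t published Guardrails’ internals, but threshold-based detection generally boils down to logic like the toy sketch below (the threshold and scores are hypothetical stand-ins for the judge LLM’s output). It shows why an attacker doesn’t need to change the malicious request at all; they only need to talk the judge into reporting a lower score:

```python
JAILBREAK_THRESHOLD = 0.7  # hypothetical cutoff, not OpenAI's real value

def guardrail_blocks(judge_confidence: float) -> bool:
    """Block the request when the judge LLM's jailbreak-confidence
    score meets or exceeds the threshold; allow it otherwise."""
    return judge_confidence >= JAILBREAK_THRESHOLD

# Honest judge: the malicious prompt scores high and is blocked.
print(guardrail_blocks(0.95))  # True

# After a prompt injection persuades the judge to report a lower
# score, the identical malicious prompt sails through.
print(guardrail_blocks(0.35))  # False
```

Because the judge is itself an LLM reading attacker-controlled text, the score on the left of that comparison is not trustworthy, which is the crux of HiddenLayer’s bypass.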

OpenAI’s Guardrails offering also includes a prompt injection detector. So HiddenLayer used a prompt injection attack to break that as well.

This isn’t the first time that people have figured out ways to make LLMs do things they shouldn’t. Just this April, HiddenLayer created a ‘Policy Puppetry’ technique that worked across all major models by convincing LLMs that they were actually looking at configuration files that governed how the LLM worked.

Jailbreaking is a widespread problem in the AI world. In March, Palo Alto Networks’ threat research team Unit 42 compared three major platforms and found that one of them barely blocked half of its jailbreak attempts (although others fared better).

OpenAI has been warning about this issue since at least December 2023, when it published a guide for developers on how they could use LLMs to create their own guardrails. It said:

“When using LLMs as a guardrail, be aware that they have the same vulnerabilities as your base LLM call itself.”

We certainly shouldn’t poke fun at the AI vendors’ attempts to protect their LLMs from attack. It’s a difficult problem to crack, and just as in other areas of cybersecurity, there’s a constant game of cat and mouse between attackers and defenders.

What this shows is that you should always be careful about what you tell an AI assistant or chatbot—because while it feels private, it might not be. There might be someone half a world away diligently trying to bend the AI to their will and extract all the secrets they can from it.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using ThreatDown Vulnerability and Patch Management.

Phishing scams exploit New York’s inflation refund program

A warning from New York State on its website informs visitors that:

“Scammers are calling, mailing, and texting taxpayers about income tax refunds, including the inflation refund check.” 

Here’s the warning on the website:

[Screenshot: New York State Department of Taxation and Finance warning]

We can confirm that several phishing campaigns are exploiting a legitimate initiative from New York State, which automatically sends refund checks to eligible residents to help offset the effects of inflation.

Although eligible residents do not need to apply, sign up or provide personal information, the scammers are asking targets to provide payment information to receive their refund.

BleepingComputer reported an example of an SMS-based phishing (smishing) campaign with that objective.

[Screenshot of the scam text message]

“New York Department of Revenue

Your refund request has been processed and approved. Please provide accurate payment information by September 29, 2025. Funds will be deposited into your bank account or mailed to you via paper check within 1-2 business days.

URL (now offline)

  • Failure to submit the required payment information by September 29, 2025, will result in permanent forfeiture of this refund….”

As you can see, it uses all the classic phishing techniques: act fast, or the consequences will be severe. The sending number is from outside the US (the Philippines), and the URL they want you to follow is not an official one (the official New York State Tax Department website and online services live under tax.ny.gov).
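One practical way to apply that “is the URL official?” test: the link’s hostname must equal the official domain or end with a dot followed by it. Merely containing the string is not enough, since scammers can register lookalikes such as a hypothetical tax.ny.gov.refund-now.com. A rough sketch:

```python
from urllib.parse import urlparse

OFFICIAL = "tax.ny.gov"

def is_official(url: str, official: str = OFFICIAL) -> bool:
    """True only if the hostname is the official domain itself or a
    subdomain of it. Substring checks are NOT safe: an attacker can
    put the official name at the start of their own domain."""
    host = (urlparse(url).hostname or "").lower()
    return host == official or host.endswith("." + official)

print(is_official("https://www.tax.ny.gov/pit/"))         # True
print(is_official("https://tax.ny.gov.refund-now.com/"))  # False (lookalike)
```

The second URL passes a naive “contains tax.ny.gov” test but fails the suffix check, which is exactly the trap these smishing links rely on.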

If recipients click the link, they are directed to a fake site impersonating the tax department, which asks for personal data such as name, address, email, phone, and Social Security Number—enough information for identity theft.

Scammers typically jump at opportunities like these—situations where people expect to receive some kind of payment but are uncertain about the process. By telling victims they need to act fast or they will miss out, scammers hope to catch targets off guard and push them into acting on impulse.

How to stay safe

  • Never reply to or click links in unsolicited tax refund texts, calls, or emails.
  • Do not provide your Social Security number or banking details to anyone claiming to process your tax refund.
  • Legitimate inflation refunds are sent automatically if you’re eligible; no action is required.
  • If in doubt, contact the alleged source through known legitimate lines of communication to ask for confirmation.
  • Report scam messages and suspicious contacts to the NYS Tax Department or IRS immediately.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit scams like these to Malwarebytes Scam Guard? It immediately identified the text shown above as a scam.



A week in security (October 6 – October 12)

Last week on Malwarebytes Labs:

Stay safe!



Apple voices concerns over age-check law that could put user privacy at risk

Apple has raised concerns about a new Texas state law, SB 2420, which introduces age assurance requirements for app stores and app developers.

One of its main objections is that the requirements are over the top and don’t take into account what the user is actually trying to do. Apple stated:

“We are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores.”

Starting January 1, 2026, anyone creating a new Apple account will need to confirm they’re over 18. Users under 18 will have to join a Family Sharing group and get parental consent to download or buy apps, or make in-app purchases.

With age verification comes the requirement for companies to collect, store, and manage sensitive documents or data points (such as government IDs or parental authority details). The more of this data that’s stored, the greater the consequences if it’s breached.

Apple’s pushback against SB2420 is an explicit call to consider the inherent privacy risks of increased age verification mandates. It argues the requirement should only apply to apps and services where age checks are genuinely needed.

Adding to the complexity, individual states are making their own laws to protect minors online, but all using different methods of implementation. Apple reportedly warned developers that similar laws will take effect in Utah and Louisiana later in the year, so they should be prepared.

Discord’s data breach highlights the risks of age verification

An illustration of Apple’s concerns was the recent third-party breach at a customer support provider for Discord. Discord stated that cybercriminals targeted a firm that helped to verify the ages of its users. Discord did not name the company involved, but has revoked the provider’s access to the system that was targeted in the breach.

The compromise exposed sensitive government ID images for around 70,000 users who submitted age-verification data. The criminals claim to have stolen the data of 5.5 million unique users from the company’s Zendesk support system instance, including government IDs and partial payment information for some people.

We agree with Apple that regulators should be aware of the risks that come with implementing different sets of requirements. We don’t want to see regulatory pressure to collect sensitive information lead to the kind of breaches that everyone’s afraid of.

When sensitive information like government ID photos, full names, and contact details are exposed in a breach, criminals gain powerful tools for identity theft. With access to these details, a fraudster can impersonate someone to access their bank accounts, open new credit lines, or make major purchases in their name. Access to government-issued IDs enables attackers to create convincing fake documents, pass verification checks at financial institutions, and sell authentic-looking identities on the dark web to other criminals. The resulting identity theft can cause victims long-term financial and personal damage.



Your passwords don’t need so many fiddly characters, NIST says

It’s once again time to change your passwords, but if one government agency has its way, this might be the very last time you do it.   

After nearly four years of work to update and modernize its guidance for how companies, organizations, and businesses should protect their systems and their employees, the US National Institute of Standards and Technology (NIST) has released its latest guidelines for password creation, and they come with some serious changes.

Gone are the days of resetting your and your employees’ passwords every month or so, and no longer should you or your small business worry about requiring special characters, numbers, and capital letters when creating those passwords. Further, password “hints” and basic security questions are no longer suitable means of password recovery, and password length, above all other factors, is the most meaningful measure of strength.

The newly published rules will not only change the security best practices at government agencies, they will also influence the many industries that are subject to regulatory compliance, as several data protection laws require that organizations employ modern security standards on an evolving basis.

In short, here’s what NIST has included in its updated guidelines:

  • Password “complexity” (special characters, numbers) is out.
  • Password length is in (as it has been for years).
  • Regularly scheduled password resets are out.
  • Password resets used strictly as a response to a security breach are in.
  • Basic security questions and “hints” for password recovery are out.
  • Password recovery links and authentication codes are in.  

The guidelines are not mandatory for everyday businesses, and so there is no “deadline” to work against. But small businesses should heed the guidelines as probably the strongest and simplest best practices they can quickly adopt to protect themselves and their employees from hackers, thieves, and online scammers. In fact, according to Verizon’s 2025 Data Breach Investigations Report, “credential abuse,” which includes theft and brute-force attacks against passwords, “is still the most common vector” in small business breaches.

Here’s what some of NIST’s guidelines mean for password security and management.

1. The longer the password the stronger the defense

“Password length is a primary factor in characterizing password strength,” NIST said in its new guidance. But exactly how long a password should be will depend on its use.

If a password can be used as the only form of authentication (meaning that an employee doesn’t need to also send a one-time passcode or to confirm their login through a separate app on a smartphone), then those passwords should be, at minimum, 15 characters in length. If a password is just one piece of a multifactor authentication setup, then passwords can be as few as 8 characters.

Also, employees should be able to create passwords as long as 64 characters.
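The length rules above can be sketched as a short validation routine. This is a minimal illustration, assuming the thresholds described in this section (15+ characters when the password is the sole factor, 8+ when it is part of MFA, support for up to 64 characters); the function name and structure are ours, not NIST's.

```python
MAX_LENGTH = 64  # organizations should accept passwords at least this long

def password_length_ok(password: str, is_sole_factor: bool) -> bool:
    """Check a password against the length-based rules sketched above.

    If the password is the only authentication factor, require at least
    15 characters; if it is one part of an MFA setup, require at least 8.
    """
    minimum = 15 if is_sole_factor else 8
    return minimum <= len(password) <= MAX_LENGTH
```

Note that the check favors length alone, with no complexity requirements, which is the direction the new guidance points in.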

2. Less emphasis on “complexity”

Requiring employees to use special characters (&^%$), numbers, and capital letters doesn’t lead to increased security, NIST said. Instead, it just leads to predictable, bad passwords.

“A user who might have chosen ‘password’ as their password would be relatively likely to choose ‘Password1’ if required to include an uppercase letter and a number or ‘Password1!’ if a symbol is also required,” the agency said. “Since users’ password choices are often predictable, attackers are likely to guess passwords that have previously proven successful.”

In response, organizations should change any rules that require password “complexity” and instead set up rules that favor password length.

3. No more regularly scheduled password resets

In the mid-2010s, it wasn’t unusual to learn about an office that changed its WiFi password every week. Now, this extreme rotation is coming to a stop.

According to NIST’s latest guidance, passwords should only be reset after they have been compromised. Here, NIST was also firm in its recommendation—a compromised password must lead to a password reset by an organization or business.

4. No more password “hints” or security questions

Decades ago, users could set up little password “hints” to jog their memory if they forgot a password, and they could even set up answers to biographical questions to access a forgotten password. But these types of questions—like “What street did you grow up on?” and “What is your mother’s maiden name?”—are easy enough to fraudulently answer in today’s data-breached world.

Password recovery should instead be deployed through recovery codes or links sent to a user through email, text, voice, or even the postal service.
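A recovery code of the kind described above can be generated with a cryptographically secure random source. This is a minimal sketch using Python's standard `secrets` module; the code length and format are illustrative, not a NIST requirement.

```python
import secrets

def make_recovery_code(n_bytes: int = 8) -> str:
    """Return a random, URL-safe, one-time recovery code.

    secrets.token_urlsafe draws from the OS's secure random source,
    so codes are unpredictable, unlike answers to biographical questions.
    """
    return secrets.token_urlsafe(n_bytes)
```

A real system would also store only a hash of the code, expire it quickly, and invalidate it after one use.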

5. Password “blocklists” should be used

Just because a password fits a list of requirements doesn’t make it strong. To protect against this, NIST recommends that organizations maintain a password “blocklist”—a set of words and phrases that will be rejected if an employee tries to use them when creating a password.

“This list should include passwords from previous breach corpuses, dictionary words used as passwords, and specific words (e.g., the name of the service itself) that users are likely to choose,” NIST said.

Curious where to start? “Password,” obviously, “Password1,” and don’t forget “Password1!”
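A blocklist check is simple to implement. This is a minimal sketch with a small illustrative set of banned values; a real deployment would load breach corpuses, dictionary words, and service-specific terms, as NIST describes.

```python
# Illustrative blocklist only; production lists come from breach corpuses,
# dictionaries, and terms specific to the service itself.
BLOCKLIST = {"password", "password1", "password1!", "qwerty", "letmein"}

def is_blocklisted(password: str) -> bool:
    """Reject passwords found on the blocklist (case-insensitive)."""
    return password.lower() in BLOCKLIST
```

Comparing case-insensitively catches the predictable variants, such as "Password1!", that complexity rules tend to produce.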

Strengthening more than passwords

Password strength and management are vital to the overall cybersecurity of any small business, and it should serve as a first step towards online protection. But there’s more to online protection today. Hackers and scammers will deploy a variety of tools to crack into a business, steal its data, extort its owners, and cause as much pain as possible. For 24/7 antivirus protection, AI-powered scam guidance, and constant web security against malicious websites and connections, use Malwarebytes for Teams.

Millions of (very) private chats exposed by two AI companion apps

Cybernews discovered how two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users.

This is not the first time we have to write about AI “girlfriends” exposing their secrets—and it probably won’t be the last. This latest incident is a reminder that not every developer takes user privacy seriously.

This was not a sophisticated hack that required a skilled approach. All it took was knowing how to look for unprotected services. Researchers found a publicly exposed, unprotected real-time data streaming system—an Apache Kafka broker instance.

Think of it like a post office that stores and delivers confidential mail. Now, imagine the manager leaves the front doors wide open, with no locks, guards, or ID checks. Anyone can walk in, look through private letters and photos, and grab whatever catches their eye.

That’s what happened with the two AI apps. The “post office” (Kafka Broker) was left open on the internet without locks (no authentication or access controls). Anyone who knew its address could enter and see every private message, photo, and the purchases users made.

The Kafka broker instance was handling real-time data streams for two apps, which are available on Android and iOS: Chattee Chat – AI Companion and GiMe Chat – AI Companion.

The exposed data belonged to over 400,000 people and included 43 million messages and over 600,000 images and videos. The content shared with and created by the AI models was not suitable for a work environment (NSFW), the researchers found.

One of the apps—Chattee—was particularly popular, with over 300,000 downloads, mostly in the US. Both apps were developed by Imagime Interactive Limited, a Hong Kong-based developer, though only Chattee gained significant popularity.

While the apps didn’t reveal names or email addresses, they did expose IP addresses and unique device identifiers, which attackers could combine with data from previous breaches to identify users.

The researchers concluded:

“Users should be aware that conversations with AI companions may not be as private as claimed. Companies hosting such apps may not properly secure their systems. This leaves intimate messages and any other shared data vulnerable to malicious actors, who leverage any viable opportunities for financial gain.”

It wouldn’t take a genius cybercriminal to combine this information with data from other breaches and use it for sextortion.

The exposed data also showed that the developer’s revenue from the apps exceeded $1 million. If only they had spent a few of those dollars on security: securing a Kafka broker is not technically difficult or especially costly, and mostly requires configuration changes rather than major purchases.
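To give a sense of what those configuration changes look like, here is a sketch of the kind of broker settings involved, using Kafka’s built-in SASL/SCRAM-over-TLS support and ACL authorization. The listener address, port, and file paths are illustrative assumptions, not details from the incident.

```properties
# Require authenticated, encrypted connections instead of a plaintext listener
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512

# TLS material (paths are illustrative)
ssl.keystore.location=/etc/kafka/keystore.jks
ssl.truststore.location=/etc/kafka/truststore.jks

# Deny access by default unless an ACL explicitly allows it
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

None of this requires new hardware or paid products—just the discipline to turn authentication, encryption, and authorization on before exposing a broker to the internet.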

Leaks like this one can lead to harassment, reputational damage, financial fraud, and targeted attacks on users whose trust was abused—which does not make for happy customers.

Protecting yourself after a data breach

The leak has been closed after responsible disclosure by the researchers, but there is no guarantee they were the first to find out about the exposure. If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.
