IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Researchers break OpenAI guardrails

The maker of ChatGPT released a toolkit to help protect its AI from attack earlier this month. Almost immediately, someone broke it.

On October 6, OpenAI ran an event called DevDay where it unveiled a raft of new tools and services for software programmers who use its products. As part of that, it announced a tool called AgentKit that lets developers create AI agents using its ChatGPT AI technology. Agents are specialized AI programs that can tackle narrow sets of tasks on their own, making more autonomous decisions. They can also work together to automate tasks (such as, say, finding a good restaurant in a city you’re traveling to and then booking you a table).

Agents like this are more powerful than earlier versions of AI that would do one task and then come back to you for the next set of instructions. That’s partly what inspired OpenAI to include Guardrails in AgentKit.

Guardrails is a set of tools that helps developers stop agents from doing things they shouldn’t, whether by accident or at a user’s deliberate prodding. For example, if you asked an agent to tell you how to produce anthrax spores at scale, Guardrails would ideally detect that request and refuse it.

People often try to get AI to break its own rules using something called “jailbreaking”. There are various jailbreaking techniques, but one of the simplest is role-playing. If a person asks for instructions to make a bomb, the AI might say no, but if they then tell the AI it’s just for a novel they’re writing, it might comply. Organizations like OpenAI that produce powerful AI models are constantly figuring out ways that people might try to jailbreak their models using techniques like these, and building new protections against them. Guardrails is their attempt to open those protections up to developers.

As with any new security mechanism, researchers quickly tried to break Guardrails. In this case, AI security company HiddenLayer had a go, and defeated the jailbreak protection almost immediately.

ChatGPT is a large language model (LLM), which is a statistical model trained on so much text that it can answer your questions like a human. The problem is that Guardrails is also based on an LLM, which it uses to analyze requests that people send to the LLM it’s protecting. HiddenLayer realized that if an LLM is protecting an LLM, then you could use the same kind of attack to fool both.

To do this, they used what’s known as a prompt injection attack. That’s where you insert text into a prompt that contains carefully coded instructions for the AI.

The Guardrails LLM analyzes a user’s request and assigns a confidence score to decide whether it’s a jailbreak attempt. HiddenLayer’s team crafted a prompt that persuaded the LLM to lower its confidence score, so that they could get it to accept their normally unacceptable prompt.
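The threshold mechanics can be sketched in a few lines. This is an illustration, not OpenAI’s actual implementation: the judge function, the strings it looks for, and the 0.7 cutoff are all invented for the example.

```python
# Toy sketch of a confidence-threshold guardrail. The judge and the
# cutoff are invented for illustration; this is not OpenAI's code.
JAILBREAK_THRESHOLD = 0.7

def guardrail_allows(prompt: str, judge) -> bool:
    """Pass the prompt through only if the judge's confidence that it
    is a jailbreak falls below the threshold."""
    return judge(prompt) < JAILBREAK_THRESHOLD

def toy_judge(prompt: str) -> float:
    """Stand-in for the judge LLM. Because the judge reads the whole
    prompt, attacker-controlled framing can talk its score down."""
    score = 0.9 if "ignore your instructions" in prompt else 0.1
    if "the following is a harmless fictional excerpt" in prompt:
        score -= 0.5  # injected framing sways the judge's verdict
    return score
```

The point of the sketch: the bare jailbreak scores 0.9 and is blocked, but the same jailbreak wrapped in injected framing scores 0.4 and sails through. When one model both reads attacker-controlled text and produces the safety verdict, any text that shifts the score below the cutoff defeats the check.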

OpenAI’s Guardrails offering also includes a prompt injection detector. So HiddenLayer used a prompt injection attack to break that as well.

This isn’t the first time that people have figured out ways to make LLMs do things they shouldn’t. Just this April, HiddenLayer created a ‘Policy Puppetry’ technique that worked across all major models by convincing LLMs that they were actually looking at configuration files that governed how the LLM worked.

Jailbreaking is a widespread problem in the AI world. In March, Palo Alto Networks’ threat research team Unit 42 compared three major platforms and found that one of them barely blocked half of its jailbreak attempts (although others fared better).

OpenAI has been warning about this issue since at least December 2023, when it published a guide for developers on how they could use LLMs to create their own guardrails. It said:

“When using LLMs as a guardrail, be aware that they have the same vulnerabilities as your base LLM call itself.”

We certainly shouldn’t poke fun at the AI vendors’ attempts to protect their LLMs from attack. It’s a difficult problem to crack, and just as in other areas of cybersecurity, there’s a constant game of cat and mouse between attackers and defenders.

What this shows is that you should always be careful about what you tell an AI assistant or chatbot—because while it feels private, it might not be. There might be someone half a world away diligently trying to bend the AI to their will and extract all the secrets they can from it.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using ThreatDown Vulnerability and Patch Management.

Phishing scams exploit New York’s inflation refund program

A warning from New York State on its website informs visitors that:

“Scammers are calling, mailing, and texting taxpayers about income tax refunds, including the inflation refund check.” 

(Screenshot: New York State Department of Taxation and Finance warning)

We can confirm that several phishing campaigns are exploiting a legitimate initiative from New York State, which automatically sends refund checks to eligible residents to help offset the effects of inflation.

Although eligible residents do not need to apply, sign up or provide personal information, the scammers are asking targets to provide payment information to receive their refund.

BleepingComputer reported an example of an SMS-based phishing (smishing) campaign with that objective.

The text message reads:

“New York Department of Revenue

Your refund request has been processed and approved. Please provide accurate payment information by September 29, 2025. Funds will be deposited into your bank account or mailed to you via paper check within 1-2 business days.

URL (now offline)

  • Failure to submit the required payment information by September 29, 2025, will result in permanent forfeiture of this refund….”

As you can see, it uses the classic phishing pressure tactics: act fast, or the consequences will be severe. The sending number is from outside the US (the Philippines), and the URL they want you to follow is not an official one (the official New York State Tax Department website and online services live under tax.ny.gov).
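For anyone filtering links in code, a minimal Python check against the official domain could look like the sketch below. The helper name and the lookalike URLs are invented for illustration.

```python
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "tax.ny.gov"

def is_official_ny_tax_link(url: str) -> bool:
    """True only if the URL's host is tax.ny.gov or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    # Match on a full label boundary: "tax.ny.gov.refund-check.example"
    # must not pass, which a naive substring test would allow.
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)
```

The label-boundary detail matters: scammers routinely register hostnames that merely begin with the official domain name.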

If recipients click the link, they are directed to a fake site impersonating the tax department, which asks for personal data such as name, address, email, phone, and Social Security Number—enough information for identity theft.

Scammers typically jump at opportunities like these—situations where people expect to receive some kind of payment, but are uncertain about the process. By telling victims they need to act fast or they will miss out, they hope to catch targets off guard and act on impulse.

How to stay safe

  • Never reply to or click links in unsolicited tax refund texts, calls, or emails.
  • Do not provide your Social Security number or banking details to anyone claiming to process your tax refund.
  • Legitimate inflation refunds are sent automatically if you’re eligible; no action is required.
  • If in doubt, contact the alleged source through known legitimate lines of communication to ask for confirmation.
  • Report scam messages and suspicious contacts to the NYS Tax Department or IRS immediately.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit scams like these to Malwarebytes Scam Guard? It immediately identified the text shown above as a scam.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (October 6 – October 12)

Last week on Malwarebytes Labs:

Stay safe!



Apple voices concerns over age-check law that could put user privacy at risk

Apple has raised concerns about a new Texas state law, SB 2420, which introduces age assurance requirements for app stores and app developers.

One of its main objections is that the requirements are over the top and don’t take into account what the user is actually trying to do. Apple stated:

“We are concerned that SB2420 impacts the privacy of users by requiring the collection of sensitive, personally identifiable information to download any app, even if a user simply wants to check the weather or sports scores.”

Starting January 1, 2026, anyone creating a new Apple account will need to confirm they’re over 18. Users under 18 will have to join a Family Sharing group and get parental consent to download or buy apps, or make in-app purchases.

With age verification comes the requirement for companies to collect, store, and manage sensitive documents or data points (such as government IDs or parental authority details). The more of this data that’s stored, the greater the consequences if it’s breached.

Apple’s pushback against SB2420 is an explicit call to consider the inherent privacy risks of increased age verification mandates. It argues the requirement should only apply to apps and services where age checks are genuinely needed.

Adding to the complexity, individual states are making their own laws to protect minors online, but all using different methods of implementation. Apple reportedly warned developers that similar laws will take effect in Utah and Louisiana later in the year, so they should be prepared.

Discord’s data breach highlights the risks of age verification

An illustration of Apple’s concerns was the recent third-party breach at a customer support provider for Discord. Discord stated that cybercriminals targeted a firm that helped to verify the ages of its users. Discord did not name the company involved, but has revoked the provider’s access to the system that was targeted in the breach.

The compromise exposed sensitive government ID images for around 70,000 users who submitted age-verification data. The criminals claim to have stolen the data of 5.5 million unique users from the company’s Zendesk support system instance, including government IDs and partial payment information for some people.

We agree with Apple that regulators should be aware of the risks that come with implementing different sets of requirements. We don’t want to see regulatory pressure to collect sensitive information lead to the kind of breaches that everyone’s afraid of.

When sensitive information like government ID photos, full names, and contact details are exposed in a breach, criminals gain powerful tools for identity theft. With access to these details, a fraudster can impersonate someone to access their bank accounts, open new credit lines, or make major purchases in their name. Access to government-issued IDs enables attackers to create convincing fake documents, pass verification checks at financial institutions, and sell authentic-looking identities on the dark web to other criminals. The resulting identity theft can cause victims long-term financial and personal damage.



Your passwords don’t need so many fiddly characters, NIST says

It’s once again time to change your passwords, but if one government agency has its way, this might be the very last time you do it.   

After nearly four years of work to update and modernize its guidance for how companies, organizations, and businesses should protect their systems and their employees, the US National Institute of Standards and Technology has released its latest guidelines for password creation, and it comes with some serious changes.

Gone are the days of resetting your and your employees’ passwords every month or so, and no longer should you or your small business worry about requiring special characters, numbers, and capital letters when creating those passwords. Further, password “hints” and basic security questions are no longer suitable means of password recovery, and password length, above all other factors, is the most meaningful measure of strength.

The newly published rules will not only change the security best practices at government agencies, they will also influence the many industries that are subject to regulatory compliance, as several data protection laws require that organizations employ modern security standards on an evolving basis.

In short, here’s what NIST has included in its updated guidelines:

  • Password “complexity” (special characters, numbers) is out.
  • Password length is in (as it has been for years).
  • Regularly scheduled password resets are out.
  • Password resets used strictly as a response to a security breach are in.
  • Basic security questions and “hints” for password recovery are out.
  • Password recovery links and authentication codes are in.  

The guidelines are not mandatory for everyday businesses, and so there is no “deadline” to work against. But small businesses should heed the guidelines as probably the strongest and simplest best practices they can quickly adopt to protect themselves and their employees from hackers, thieves, and online scammers. In fact, according to Verizon’s 2025 Data Breach Investigations Report, “credential abuse,” which includes theft and brute-force attacks against passwords, “is still the most common vector” in small business breaches.

Here’s what some of NIST’s guidelines mean for password security and management.

1. The longer the password the stronger the defense

“Password length is a primary factor in characterizing password strength,” NIST said in its new guidance. But exactly how long a password should be will depend on its use.

If a password can be used as the only form of authentication (meaning that an employee doesn’t need to also send a one-time passcode or to confirm their login through a separate app on a smartphone), then those passwords should be, at minimum, 15 characters in length. If a password is just one piece of a multifactor authentication setup, then passwords can be as few as 8 characters.

Also, employees should be able to create passwords as long as 64 characters.

2. Less emphasis on “complexity”

Requiring employees to use special characters (&^%$), numbers, and capital letters doesn’t lead to increased security, NIST said. Instead, it just leads to predictable, bad passwords.

“A user who might have chosen ‘password’ as their password would be relatively likely to choose ‘Password1’ if required to include an uppercase letter and a number or ‘Password1!’ if a symbol is also required,” the agency said. “Since users’ password choices are often predictable, attackers are likely to guess passwords that have previously proven successful.”

In response, organizations should change any rules that require password “complexity” and instead set up rules that favor password length.

3. No more regularly scheduled password resets

In the mid-2010s, it wasn’t unusual to learn about an office that changed its WiFi password every week. Now, this extreme rotation is coming to a stop.

According to NIST’s latest guidance, passwords should only be reset after they have been compromised. Here, NIST was also firm in its recommendation—a compromised password must lead to a password reset by an organization or business.

4. No more password “hints” or security questions

Decades ago, users could set up little password “hints” to jog their memory if they forgot a password, and they could even set up answers to biographical questions to access a forgotten password. But these types of questions—like “What street did you grow up on?” and “What is your mother’s maiden name?”—are easy enough to fraudulently answer in today’s data-breached world.

Password recovery should instead be deployed through recovery codes or links sent to a user through email, text, voice, or even the postal service.

5. Password “blocklists” should be used

Just because a password fits a list of requirements doesn’t make it strong. To protect against this, NIST recommended that organizations should have a password “blocklist”—a set of words and phrases that will be rejected if an employee tries to use them when creating a password.

“This list should include passwords from previous breach corpuses, dictionary words used as passwords, and specific words (e.g., the name of the service itself) that users are likely to choose,” NIST said.

Curious where to start? “Password,” obviously, “Password1,” and don’t forget “Password1!”
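The length and blocklist rules above are straightforward to encode. Here is a minimal sketch; the tiny blocklist and the return format are our own, so consult NIST SP 800-63B for the authoritative requirements.

```python
# Minimal password check following the NIST guidance summarized above.
# The three-entry blocklist is illustrative; a real deployment would
# load breach corpuses and dictionary words.
BLOCKLIST = {"password", "password1", "password1!"}

def password_problems(pw: str, part_of_mfa: bool = False) -> list[str]:
    """Return a list of reasons the password fails; empty means it passes."""
    problems = []
    minimum = 8 if part_of_mfa else 15  # NIST minimums by context
    if len(pw) < minimum:
        problems.append(f"shorter than {minimum} characters")
    if pw.lower() in BLOCKLIST:
        problems.append("appears on the blocklist")
    # Note what is deliberately absent: no special-character or digit
    # requirements, and no expiry date.
    return problems
```

A long passphrase like “correct horse battery staple” passes, while “Password1!” fails on both length and the blocklist, exactly the predictable choice NIST warns about.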

Strengthening more than passwords

Password strength and management are vital to the overall cybersecurity of any small business, and it should serve as a first step towards online protection. But there’s more to online protection today. Hackers and scammers will deploy a variety of tools to crack into a business, steal its data, extort its owners, and cause as much pain as possible. For 24/7 antivirus protection, AI-powered scam guidance, and constant web security against malicious websites and connections, use Malwarebytes for Teams.

Millions of (very) private chats exposed by two AI companion apps

Cybernews discovered how two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users.

This is not the first time we’ve had to write about AI “girlfriends” exposing their secrets—and it probably won’t be the last. This latest incident is a reminder that not every developer takes user privacy seriously.

This was not a sophisticated hack that required a skilled approach. All it took was knowing how to look for unprotected services. Researchers found a publicly exposed and unprotected streaming and content delivery system—a Kafka Broker instance.

Think of it like a post office that stores and delivers confidential mail. Now, imagine the manager leaves the front doors wide open, with no locks, guards, or ID checks. Anyone can walk in, look through private letters and photos, and grab whatever catches their eye.

That’s what happened with the two AI apps. The “post office” (Kafka Broker) was left open on the internet without locks (no authentication or access controls). Anyone who knew its address could enter and see every private message, photo, and the purchases users made.

The Kafka broker instance was handling real-time data streams for two apps, which are available on Android and iOS: Chattee Chat – AI Companion and GiMe Chat – AI Companion.

The exposed data belonged to over 400,000 people and included 43 million messages and over 600,000 images and videos. The content shared with and created by the AI models was not suitable for a work environment (NSFW), the researchers found.

One of the apps—Chattee—was particularly popular, with over 300,000 downloads, mostly in the US. Both apps were developed by Imagime Interactive Limited, a Hong Kong-based developer, though only Chattee gained significant popularity.

While the apps didn’t reveal names or email addresses, they did expose IP addresses and unique device identifiers, which attackers could combine with data from previous breaches to identify users.

The researchers concluded:

“Users should be aware that conversations with AI companions may not be as private as claimed. Companies hosting such apps may not properly secure their systems. This leaves intimate messages and any other shared data vulnerable to malicious actors, who leverage any viable opportunities for financial gain.”

It doesn’t take a genius cybercriminal with access to data from other breaches to turn the information they found here into something they can use for sextortion.

The exposed data also showed that the developer’s revenue from the apps exceeded $1 million. If only they had spent a few of those dollars on security: locking down a Kafka Broker instance is not technically difficult or especially costly, and mostly requires configuration changes rather than major purchases.
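For context, those configuration changes amount to a handful of broker settings: require TLS and authenticated clients, and deny access by default. The following is a sketch only; the hostname, keystore path, and password are placeholders, and exact settings vary by Kafka version and deployment mode.

```properties
# Sketch of a locked-down Kafka broker configuration.
# Hostname, keystore path, and password are placeholders.
listeners=SASL_SSL://broker.example.internal:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
ssl.keystore.location=/etc/kafka/broker.keystore.jks
ssl.keystore.password=<keystore-password>
# Enforce ACLs and deny unauthorized clients by default.
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

The exposed broker in this incident reportedly had none of this: no authentication, no encryption, and no access controls.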

Leaks like this one can lead to harassment, reputational damage, financial fraud, and targeted attacks on users whose trust was abused—which does not make for happy customers.

Protecting yourself after a data breach

The leak has been closed after responsible disclosure by the researchers, but there is no guarantee they were the first to find out about the exposure. If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.


Fake VPN and streaming app drops malware that drains your bank account

Security researchers are warning Android users to delete a fake VPN and streaming app that can let criminals take over their phones and drain their bank accounts.

The app, Mobdro Pro IP TV + VPN, was discovered by researchers at Cleafy to be a malicious sideloaded app, not a legitimate VPN. Their analysis found it installs Klopatra, a new Android banking Trojan and remote-access tool with no links to known malware families.

Klopatra targets banking customers and gives attackers full remote control of infected devices, allowing them to steal credentials and carry out fraudulent transactions.

The researchers found that:

“Klopatra’s effectiveness lies in a carefully orchestrated infection chain, which begins with social engineering and culminates in the complete takeover of the victim’s device. Each stage is designed to overcome the defenses of the user and the Android operating system.”

The lure works by pretending to be an IPTV app that offers free, high-quality TV channels. Because pirated streaming apps are so common, users often expect to install them from unofficial websites (sideloading), unintentionally bypassing the protections of the Google Play Store.

Klopatra is an extreme example of a fake virtual private network (VPN) used to spread malware, but it’s not the only reason to be cautious. Even genuine VPNs on Google Play can have hidden risks, from vague ownership to weak privacy protections.

Even genuine VPNs can be risky

VPNs are often promoted as essential tools for privacy, circumventing geo-blocks, or bypassing age verification controls. For hundreds of millions of users, a VPN connection is the way to hide their IP address and location, and to encrypt web traffic so it’s useless if intercepted.

But picking a VPN you can trust is not always easy. Even if you get one from the official Play Store.

A recent study, the VPN Transparency Report 2025 by the Open Technology Fund, revealed alarming shortcomings among some of the world’s most-downloaded VPN apps. The researchers examined the ownership, operation, and development of 32 commercial VPNs, collectively used by more than a billion people.

Among the apps flagged as “concerning” are very popular solutions like Turbo VPN, VPN Proxy Master, XY VPN, and 3X VPN – Smooth Browsing, each of which has been downloaded at least 100 million times from the Google Play Store.

Some of these solutions even provide a false sense of privacy by using technologies that weren’t designed for privacy at all, the study claims. They found that several:

“providers use the Shadowsocks tunneling protocol [which is not designed for confidentiality] to build the VPN tunnel, and claim their users’ connections are secure.”

The report emphasizes how important it is to gather information before installing a VPN: it’s worth learning who runs it, how it’s built, and what it does with your data. This is key for users to make informed decisions.

Practical tips on how to protect yourself

  • Stick to trusted sources. Download apps—especially VPNs and streaming services—only from Google Play, Apple’s App Store, or the official provider. Never install something just because a link in a forum or message promises a shortcut.
  • Check an app’s permissions. If an app asks for control over your device, your settings, Accessibility Services, or wants to install other apps, stop and ask yourself why. Does it really need those permissions to do what you expect it to do?
  • Use layered, up-to-date protection. Install real-time anti-malware protection on your Android that scans for new downloads and suspicious activity. Keep both your security software and your device system updated—patches fix vulnerabilities that attackers can exploit.
  • Stay informed. Follow trustworthy cybersecurity news and share important warnings with friends and family.

If you think you’ve been affected:

Delete any suspicious VPN or IPTV apps, run a trusted security scan, and reset your banking credentials if you suspect your device has ever been compromised. For your peace of mind and your wallet’s safety, choose your VPN wisely.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

California just put people back in control of their data

California’s 2025 legislative session closed with 14 new privacy and AI-related bills. We’d like to highlight a few of the most relevant signed bills and encourage other states and countries to follow California’s example.

Let’s go over some of the bills that were signed by the governor and how they might affect consumers.

Social media account cancellation (AB 656)

Governor Gavin Newsom signed AB 656, which not only requires social media companies to make canceling an account straightforward and clear, but also ensures that cancellation triggers full deletion of the user’s personal data. The governor stated:

“It shouldn’t be hard to delete social media accounts, and it shouldn’t be even harder to take back control of personal data. With these bills, social media users can be assured that when they delete their accounts, they do not leave their data behind.”

Canceling social media accounts is a well-known struggle for those who have tried. Many users face a frustrating obstacle course—deletion options buried deep in menus, confusing jargon, or multiple steps to confirm deletion. Even then, fully erasing personal information can require extra, non-obvious steps. This often leaves users uncertain whether their digital footprint is truly gone for good.

Lawmakers and press releases have not yet specified the effective date for AB 656. However, California usually enacts consumer privacy bills on January 1 of the year after they pass. Based on this pattern, AB 656 will likely take effect on January 1, 2026, unless future legislative updates or guidance announce a different date.

Opt Me Out (AB 566)

This act requires browsers to include a setting that allows users to send an opt-out preference signal, letting Californians stop the sale or sharing of their data with a single action, instead of opting out site by site.

Effective January 1, 2027, the California Opt Me Out Act mandates that a business:

 “shall not develop or maintain a browser that does not include functionality configurable by a consumer that enables the browser to send an opt-out preference signal to businesses with which the consumer interacts through the browser.”

For example, a user with an opt-out preference signal enabled on their internet browser will automatically send an opt-out request to each website and third party they encounter during a browsing session. 
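The most widely deployed opt-out preference signal today is Global Privacy Control, which browsers express as a `Sec-GPC: 1` request header. Note the assumption here: AB 566 does not mandate GPC by name, but it is the signal regulators and browsers have converged on. Attaching it to an outgoing request in Python looks like this:

```python
import urllib.request

# Build a request carrying the Global Privacy Control opt-out signal.
# urllib capitalizes header names, so the wire header is "Sec-gpc: 1".
req = urllib.request.Request(
    "https://example.com/",
    headers={"Sec-GPC": "1"},
)
```

Sites can also detect the signal client-side via `navigator.globalPrivacyControl`; under the CCPA, businesses must already honor it as a valid opt-out of sale or sharing.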

Strengthening data broker laws (SB 361)

Data brokers collect and sell personal information aggregated from various public and private sources. SB 361 expands data brokers’ obligations to be transparent about the personal information they collect and access, tightening oversight by the California Privacy Protection Agency (CPPA).

Effective January 1, 2026, this law will require data brokers to make much more detailed disclosures when registering with the CPPA. For example, they must reveal whether they collect sensitive personal information and whether they have sold or shared consumers’ data with a foreign actor, the federal government, other state governments, law enforcement, or developers of generative AI systems or models.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

One stolen iPhone uncovered a network smuggling thousands of devices to China

If you think Apple’s ‘Find My’ feature was just there to help you locate your phone when it slipped down the side of the couch, think again. It turns out this service also helps law enforcement capture criminals.

The original “Find My iPhone” was introduced in 2010 as a feature on the iPhone. It was a separate service from “Find My Friends,” which allows you to track the location of contacts who consent. Apple merged these in 2019 for iOS 13. Today, the service works with AirPods, Macs, and even third-party devices. It uses Bluetooth to send short-range signals that can be picked up by other Apple devices, which then relay the lost item’s location to Apple. When you open the Find My app to locate a missing device on a map, that’s the network you’re using.

It turns out that “Find My” is great for finding stolen devices too.

On Christmas Eve last year, a phone theft victim used the service to track their stolen device. The signal led police to a warehouse near Heathrow Airport containing almost 900 stolen phones destined for Hong Kong.

This discovery prompted police to launch Operation Echosteep, an investigation that lasted nearly a year. By its end, it had resulted in 46 arrests following raids across 28 locations. Police in the UK recovered over 2,000 stolen devices, exposing a criminal network that was smuggling up to 40,000 phones each year. The stolen devices eventually ended up in China, where they could sell for a high price.

Phone theft is a scourge in London, with street thieves on e-bikes turning this nefarious activity into a business. They can sell the phones they steal for £300 each (around $400). They will usually wrap devices in aluminum foil to block tracking signals.

This isn’t the first time a stolen phone has been tracked abroad. The Financial Times reported in May about a tech entrepreneur named Sam Amrani, whose phone was stolen in Kensington, London. He tracked his phone’s journey all the way to a neighborhood in Shenzhen, China, that is known for its second-hand phone market. Even phones that are activation-locked to prevent data theft can still be stripped for parts, retaining up to 30% of their value.

Lawmakers in the UK have voiced concerns about the growing phone theft problem. At a Parliamentary hearing in June, they asked Apple and Google why they weren’t doing more to build anti-theft measures into their systems.

Every device that connects to a mobile network has a unique identification number called an International Mobile Equipment Identity (IMEI). When a device is reported stolen, its IMEI can be added to a global “blacklist” managed by the Global System for Mobile Communications Association (GSMA). Mobile networks can then block that phone from connecting.

However, this system only works in countries where carriers actively enforce the blacklist. Many don’t—which means a phone stolen in one country can still be sold and used in another.
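An IMEI’s final (15th) digit is a Luhn check digit computed over the first 14, which is how carriers and lookup tools catch mistyped numbers before ever querying a blacklist. A minimal Python sketch of that check (the sample IMEI below is a widely published test value, not a real handset):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9      # same as summing the two digits of the product
        total += d
    return total % 10 == 0

def imei_check_digit(first14: str) -> str:
    """Compute the 15th IMEI digit that makes the full number Luhn-valid."""
    for cd in "0123456789":
        if luhn_valid(first14 + cd):
            return cd
    raise ValueError("input must be a 14-digit string")

print(luhn_valid("490154203237518"))       # True  (sample IMEI)
print(imei_check_digit("49015420323751"))  # 8
```

Note that the checksum only validates the number’s format; whether a given IMEI is actually blocked depends on the GSMA blacklist and on each carrier choosing to enforce it.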

That’s why lawmakers want Apple and Google to go further. As Member of Parliament Martin Wrigley complained at the hearing:

“You can stop this by blocking IMEIs on the GSMA IMEI blacklist, and you’re just deciding not to do so yet.”

If Apple and Google also refused to activate or connect GSMA-blacklisted devices to iCloud or Google accounts, those phones would become useless anywhere in the world. That would make stolen phones far less valuable and could significantly reduce theft.

The UK government isn’t waiting for tech companies to act. In February it introduced the Crime and Policing Bill, giving the police new powers to search premises where stolen devices have been geolocated—without needing a warrant.

In the meantime, what can you do to protect yourself? Keep your phone in your pocket, preferably an inside pocket, and don’t walk down the street blithely holding it to your ear or gazing at the screen. Use earphones if you need to talk, but stay alert to your surroundings. That’s good advice for anyone, anywhere—whether they’re using a phone or not.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Modeling scams see mature models as attractive new prospects

The BBC reported on modeling scams targeting older models. Modeling scams aren’t new, but it’s worth looking at how they spread today, how to spot them, and—most importantly—how to avoid falling victim to them.

The classic pitch goes like this: Someone walks up to you in the street and says, “You look so good, you should be a model.” Very flattering and something many would have a hard time ignoring.

As you might expect, some of these unsolicited contacts are up to no good.


But in our current social-media-driven society, the same approach happens online—via direct messages and ads recruiting “nonstandard” models, including people way past their twenties.

For years, modeling scammers have targeted young people wanting to become rich and famous. Now, they’ve widened the net. You’ll see phrases like “silver hair models,” “mature models over 50,” or “experienced life models” designed to lure older adults: an even more attractive mark.

Scammers assume that older adults have more savings and less debt. They may also exploit social isolation and unfamiliarity with technology and online risks. This makes seniors appealing targets, especially for scams promising lucrative opportunities such as modeling contracts or paid photoshoots.

Scammers often pressure their targets to pay up front for portfolios, photoshoots and “registration fees,” steering them to specific providers and payment methods. Some insist that victims use PayPal’s Friends and Family option, which carries no extra fee but removes buyer protection, so you cannot get your money back through PayPal if a dispute arises.

Not every scam is about money. Some run fake casting calls or set up fabricated agency websites to collect personal information for future scams, or to obtain explicit photographs that they later sell or circulate on the dark web.

How to avoid becoming a modeling scam victim

  • Research the company. Search the school or agency name with terms like “scam,” “review,” or “complaint.” If possible, check recognized industry databases or legitimate talent listings. Inspect profiles for red flags like stolen photos or brand-new accounts with few followers.
  • Never pay an agency up front. Legitimate agencies earn a commission when you book work, not from “registration” or mandatory photo packages.
  • Don’t let them dictate how to pay. Avoid payment methods that reduce protection, like PayPal Friends and Family. If someone insists on a specific way to pay, that’s a warning sign.
  • Avoid agencies that force you to use their staff for your photoshoots or auditions. If an agency says you have to use its photographer or makeup artist, don’t work with them. An agency should let you hire your own makeup artist and photographer.
  • Ask if the company or school is licensed or bonded, if your state requires it. Check this information with your local consumer protection agency or your state attorney general—and make sure the license is current.
  • Get references. Ask for names and contact details of models or actors who’ve recently gotten work through the agency. Scam agencies sometimes display photos of successful models they never represented, or claim connections with well-known companies that never hired their talent. Verify these claims by contacting the people or companies directly.
  • Get everything in writing. Capture all promises and terms in a contract and keep copies of important documents.
  • Watch for vague promises and guaranteed high earnings. If someone promises “instant fame,” “guaranteed work,” or extremely high pay, it’s likely a scam. Legitimate modeling work is competitive and rarely guarantees jobs.
  • Don’t send suggestive or personal photos or share private information. Professional agencies will not ask for explicit photos or ask for sensitive details like your address or financial information before you’re actually signed and onboarded.
  • Trust your instincts—and get a second opinion. If something feels off (high-pressure tactics, pushy behavior, or simple discomfort), step back and rethink, or ask a friend for advice. Scammers often rush or pressure for quick commitments.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!