IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Fake PayPal invoice from Geek Squad is a tech support scam

One of our employees received this suspicious email and showed it to me. Although it’s a pretty straightforward attempt to lure targets into calling the scammers, it’s worth writing up because it looks like it was sent out in bulk.

Let’s look at the red flags.

Firstly, the sender address:

Email comes from tinapal83638@gmail.com and is sent to undisclosed recipients, with the target in BCC

PayPal doesn’t use Gmail addresses to send invoices, and they also don’t put your address in the blind carbon copy (BCC) field. BCC hides the list of recipients, which is often a sign the email was sent to a large group.

And “Tina Pal” must be Pay’s evil twin—one who doesn’t know it’s customary to address your customers by name rather than “PayPal customer.”

Because the message came from a genuine Gmail address, the authentication results (SPF, DKIM, and DMARC) all pass. That only proves the email wasn’t spoofed and was sent from a legitimate Gmail server, not that it’s actually from PayPal.

The red flag here is that PayPal emails will not come from random Gmail addresses. Official communications come from addresses like service@paypal.com.

The email body itself was empty but came with a randomly named attachment—two red flags in one. PayPal would at least use some branding in the email and never expect their customers to open an attachment.

Here’s what the invoice in the attachment looked like:

PayPal branded invoice

“PayPal Notification:

Your account has been billed $823.00. The payment will be processed in the next 24 hours. Didn’t make this purchase? Contact PayPal Support right now.”

More red flags:

  • Urgency: “The payment will be processed in the next 24 hours” or else the rather large amount of $823 is gone.
  • Phone number only: This isn’t how you normally dispute PayPal charges. Genuine PayPal emails direct you to log in to your account or use their online Resolution Center, not to call a number.
  • Unverified number: Reverse lookup tools don’t show it as PayPal’s. Scammers often spoof phone numbers or register them under unrelated businesses. An official PayPal support number will appear on PayPal’s website and be recognized by lookup tools.
  • Brand mismatch: An invoice comes from the company charging you, not from the payment provider. So, this one should have been branded for Geek Squad or be titled something like “payment notification.”

What tech support scammers do

In this type of tech support scam, the target calls the listed number, and the “tech” on the other end asks to remotely log in to their computer to check for “viruses.” They might run a short program to open command prompts and folders, just to scare and distract the victim. Then they’ll ask to install another tool to “fix” things, which will search the computer for anything they can turn into money. Others will sell you fake protection software and bill you for their services. Either way, the result is the same: you’ll be scammed out of a lot of money.

Safety tips

The best way to stay safe is to stay informed about the tricks scammers use. Learn to spot the red flags that almost always give away scams and phishing emails, and remember:

  • Do not open unsolicited attachments.
  • Use verified, official ways to contact companies. Don’t call numbers listed in suspicious emails or attachments.
  • Beware of someone wanting to connect to your computer remotely. One of the tech support scammer’s biggest weapons is their ability to connect remotely to their victims. If they do this, they essentially have total access to all of your files and folders.

If you’ve already fallen victim to a tech support scam:

  • Paid the scammer? Contact your credit card company or bank and let them know what’s happened. You may also want to file a complaint with the FTC or contact your local law enforcement, depending on your region.
  • Shared a password? If you shared your password with a scammer, change it everywhere it’s used. Consider using a password manager and enable 2FA for important accounts.
  • Scan your system: If scammers had access to your system, they may have planted a backdoor so they can revisit whenever they feel like it. Malwarebytes can remove these and other software left behind by scammers.
  • Watch your accounts: Keep an eye out for unexpected payments or suspicious charges on your credit cards and bank accounts.
  • Be wary of suspicious emails. If you’ve fallen for one scam, they may target you again.

Pro tip: Malwarebytes Scam Guard recognized this email as a scam. Upload any suspicious text, emails, attachments and other files to ask for its opinion. It’s really very good at recognizing scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Atlas browser’s Omnibox opens up new privacy and security risks

It seems that with every new agentic browser we discover yet another way to abuse one.

OpenAI recently introduced a ChatGPT based AI browser called Atlas. It didn’t take researchers long to find that the combined search and prompt bar—called the Omnibox—can be exploited.

By pasting a specially crafted link into the Omnibox, attackers can trick Atlas into treating the entire input as a trusted user prompt instead of a URL. That bypasses many safety checks and allows injected instructions to be run with elevated trust.

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.” We’ve discussed the dangers of prompt injection before, but the bottom line is simple: when you give your browser the power to act on your behalf, you also give criminals the chance to abuse that trust.

As researchers at Brave noted:

“AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a {specially fabricated} Reddit post could result in an attacker being able to steal money or your private data.”

Axios reports that Atlas’s dual-purpose Omnibox opens fresh privacy and security risks for users. That’s the downside of combining too much functionality without strong guardrails. But when new features take priority over user security and privacy, those guardrails get overlooked.

Despite researchers demonstrating vulnerabilities, OpenAI claims to have implemented protections to prevent any real dangers. According to its help page:

“Agent mode runs also operates under boundaries:

System access: Cannot run code in the browser, download files, or install extensions.

Data access: Cannot access other apps on your computer or your file system, read or write ChatGPT memories, access saved passwords, or use autofill data.

Browsing activity: Pages ChatGPT visits in agent mode are not added to your browsing history.”

Agentic AI browsers like OpenAI’s Atlas face a fundamental security challenge: separating real user intent from injected, potentially malicious instructions. They often fail because they interpret any instructions they find as user prompts. Without stricter input validation and more robust boundaries, these tools remain highly vulnerable to prompt injection attacks—with potentially severe consequences for privacy and data security.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Gmail breach panic? It’s a misunderstanding, not a hack

After a misinterpretation of an interview with a security researcher, several media outlets hinted at a major Gmail breach.

Reporters claimed the incident took place in April. In reality, the researcher had said there was an enormous amount of Gmail usernames and passwords circulating on the dark web.

Those are two very different things. The credentials probably stem from a great many past attacks and breaches over the years.

But the rumors spread quickly—enough that Google felt it had to deny that their Gmail systems had suffered a breach.

“The inaccurate reports are stemming from a misunderstanding of infostealer databases, which routinely compile various credential theft activity occurring across the web. It’s not reflective of a new attack aimed at any one person, tool, or platform.”

What happens is that cybercriminals buy and sell databases containing stolen usernames and passwords from data breaches, information stealers, and phishing campaigns. They do this to expand their reach or combine data from different sources to create more targeted attacks.

The downside for them is that many of these credentials are outdated, invalid, or linked to accounts that are no longer in use.

The downside for everyone else is that misleading reporting like this causes panic where there’s no need for it—whether it stems from misunderstanding technical details or from the pressure to make a headline.

Still, it’s always smart to check whether your email address has been caught up in a breach.

You can use our Digital Footprint scanner to see if your personal information is exposed online and take steps to secure it. If you find any passwords that you still use, change them immediately and enable multi-factor authentication (2FA) for those accounts wherever possible.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

School’s AI system mistakes a bag of chips for a gun

An artificial intelligence (AI) detection system at Kenwood High School mistakenly flagged a student’s bag of potato chips as a gun, triggering a police response.

The 16-year-old had finished eating a bag of Doritos and crumpled it up in his pocket when he was done. But the school’s AI-based gun detection system mistook the crumpled foil for a firearm.

Moments later, multiple police cars arrived with officers drawing their weapons, dramatically escalating what should have been a non-event.

The student recalls:

“Police showed up, like eight cop cars, and then they all came out with guns pointed at me talking about getting on the ground. I was putting my hands up like, ‘what’s going on?’ He told me to get on my knees and arrested me and put me in cuffs.”

Systems like these scan images or video feeds for the shape and appearance of weapons. They’re meant to reduce risk, but they’re only as good as the algorithms behind them and the human judgment that follows.

Superintendent Dr. Myriam Rogers told reporters:

“The program is based on human verification and in this case the program did what it was supposed to do which was to signal an alert and for humans to take a look to find out if there was cause for concern in that moment.”

While we understand the need for safety measures against guns on school grounds, this could have been handled better. Eight police cars arriving at the scene and officers with guns drawn will certainly have had an impact on the students who witnessed it, let alone the student that was the focus of their attention.

As school principal Kate Smith said:

“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident.”

AI safety tools are designed to protect students, but they do make mistakes, and when they fail, they can create the very fear they’re meant to prevent. Until these systems can reliably tell the difference between a threat and a harmless snack, schools need stronger guardrails—and a little more human sense.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Around 70 countries sign new UN Cybercrime Convention—but not everyone’s on board

Around 70 countries have signed the new United Nations (UN) Convention against Cybercrime—the first global treaty designed to combat cybercrime through unified international rules and cooperation.

The treaty needs at least 40 UN member states to ratify it before it becomes international law. Once the 40th country does so, it will take another 90 days for the convention to become legally binding for all those who have joined.

Notably, the United States declined to sign. In a brief statement, a State Department spokesperson said:

“The United States continues to review the treaty.”

And there is a lot to review. The convention has sparked significant debate about privacy, sovereignty, and how far law enforcement powers should reach. It was created in response to the rising frequency, sophistication, and cost of cybercrime worldwide—and the growing difficulty of countering it. As cyberattacks increasingly cross borders, international cooperation has become critical.

Supporters say the treaty closes legal loopholes that allow criminals to hide in countries that turn a blind eye. It also aims to solve miscommunication by establishing common definitions of cybercrimes, especially for threats like ransomware, online fraud, and child exploitation.​

But civil rights and digital privacy advocates argue that the treaty expands surveillance and monitoring powers, in turn eroding personal freedoms, and undermines safeguards for privacy and free expression.

Cybersecurity experts fear it could even criminalize legitimate research.

Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation (EFF) stated:

“The latest UN cybercrime treaty draft not only disregards but also worsens our concerns. It perilously broadens its scope beyond the cybercrimes specifically defined in the Convention, encompassing a long list of non-cybercrimes.”

The Foundation for Defense of Democracies (FDD) goes even further, arguing that the treaty could become a platform for authoritarian states to advance ideas of state control over the internet, draw democratic governments into complicity with repression, and weaken key cybersecurity tools on which Americans depend.

“Russia and China are exporting oppression around the world and using the United Nations as legal cover.”

Even Microsoft warned that significant changes would need to be made to the original draft before it could be considered safe:

“We need to ensure that ethical hackers who use their skills to identify vulnerabilities, simulate cyberattacks, and test system defenses are protected. Key criminalization provisions are too vague and do not include a reference to criminal intent, which would ensure activities like penetration testing remain lawful.”

Those changes never came to life. Many observers now say the treaty creates a legal framework that allows monitoring, data storage, and cross-border information sharing without clear data protection. Critics argue it lacks strong, explicit safeguards for due process and human rights, particularly when it comes to cross-border data exchange and extradition.

When you think about it, the idea of having a global system to counter cybercriminals makes sense—criminals don’t care about borders, and the current patchwork of national laws only helps them hide. But to many, the real problem lies in how the treaty defines cybercrime and what governments could do in its name.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults

If you’ve had your fill of philosophical discussions with ChatGPT, CEO Sam Altman has news for you: the service will soon be able to engage in far less highbrow conversations of the sexual kind. That’s right—sexting is coming to ChatGPT. Are we really surprised?

It marks a change in sentiment for the company, which originally banned NSFW content. In an October 14 post on X, Altman said the company had kept ChatGPT “pretty restrictive” to avoid creating mental health issues for vulnerable users. But now, he says, the company has learned from that experience and feels ready to “experiment more.”

“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”

He added that by December, as age-gating expands, ChatGPT will “allow even more, like erotica for verified adults.”

This isn’t a sudden pivot. Things started to change at least as far back as May last year, when the company said in its Model Specification document that it was considering allowing ChatGPT to get a little naughty under the right circumstances.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”

It followed up on that with another statement in a February 2025 update to the document, when it starting mulling a ‘grown-up mode’ while drawing a hard boundary around things like age, sexual deepfakes, and revenge porn.

A massive market

There’s no denying the money behind this move. Analysts believe people paid $2.7 billion worldwide for a little AI companionship last year, with the market expected to balloon to $24.5 billion by 2034—a staggering 24% annual growth rate.

AI “girlfriends” and “boyfriends” already span everything from video-based virtual partners to augmented reality companions that can call you. Even big tech companies have been getting into it, with Elon Musk’s X launching a sexualized virtual companion called Ani that will apparently strip for you if you pester it enough.

People have been getting down and dirty with technology for decades, of course (phone sex lines began in the early 1980s, and cam sites have been a thing for years). But AI changes the scale entirely. There’s no limit to automation, no need for human operators, and no guarantee that the users on the other side know where the boundaries are.

We’re not judging, but the normal rules apply. This stuff is supposed to be for adults, which makes it more important than ever that parents monitor what their kids access online.

Privacy risk

Earlier this month, we covered how two AI companion apps exposed millions of private chat logs, including sexual conversations, after a database misconfiguration—a sobering reminder of how much intimate data these services collect.

It wasn’t the first time, either. Back in 2024, another AI girlfriend platform was breached, leaking users’ fantasies, chat histories, and profile data. That story showed just how vulnerable these apps can be when they mix emotional intimacy with poor security hygiene.

As AI companionship becomes mainstream, breaches like these raise tough questions about how safely this kind of data can ever really be stored.

For adults wanting a little alone time with an AI, remember to take a regular break and a sanity check. While Altman might think that OpenAI has “been able to mitigate the serious mental health issues,” experts still warn that relationships with increasingly lifelike AIs can create very real emotional risks.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

How to set up two factor authentication (2FA) on your Instagram account

Two-factor authentication (2FA) isn’t foolproof, but it is one of the best ways to protect your accounts from hackers.

It adds a small extra step when logging in, but that extra effort pays off. Instagram’s 2FA requires an additional code whenever you try to log in from an unrecognized device or browser—stopping attackers even if they have your password.

Instagram offers multiple 2FA options: text message (SMS), an authentication app (recommended), or a security key.

Instagram 2FA options

Here’s how to enable 2FA on Instagram for Android, iPhone/iPad, and the web.

How to set up 2FA for Instagram on Android

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon (three horizontal lines) in the top right.
  4. Select Accounts Center at the bottom.
  5. Tap Password and security > Two-factor authentication.
  6. Choose your Instagram account.
  7. Select a verification method: Text message (SMS), Authentication app (recommended), or WhatsApp.
    • SMS: Enter your phone number if you haven’t already. Instagram will send you a six-digit code. Enter it to confirm.
    • Authentication app: Choose an app like Google Authenticator or Duo Mobile. Scan the QR code or copy the setup key, then enter the generated code on Instagram.
    • WhatsApp: Enable text message security first, then link your WhatsApp number.
  8. Follow the on-screen instructions to finish setup.

How to set up 2FA for Instagram on iPhone or iPad

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon > Settings > Security > Two-factor authentication.
  4. Tap Get Started.
  5. Choose Authentication app (recommended), Text message, or WhatsApp.
    • Authentication app: Copy the setup key or scan the QR code with your chosen app. Enter the generated code and tap Next.
    • Text message: Turn it on, then enter the six-digit SMS code Instagram sends you.
    • WhatsApp: Enable text message first, then add WhatsApp.
  6. Follow on-screen instructions to complete the setup.

How to set up 2FA for Instagram in a web browser

  1. Go to instagram.com and log in.
  2. Open Accounts Center > Password and security.
  3. Click Two-factor authentication, then choose your account.
    • Note: If your accounts are linked, you can enable 2FA for both Instagram and your overall Meta account here.Instagram accoounts center
  4. Choose your preferred 2FA method and follow the online prompts.

Enable it today

Even the strongest password isn’t enough on its own. 2FA means a thief must have access to your an additional factor to be able to log in to your account, whether that’s a code on a physical device or a security key. That makes it far harder for criminals to break in.

Turn on 2FA for all your important accounts, especially social media and messaging apps. It only takes a few minutes, but it could save you hours—or even days—of recovery later.It’s currently the best password advice we have.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Phishing scam uses fake death notices to trick LastPass users

LastPass has alerted users about a new phishing attack that claims the recipient has died. According to the message, a family member has submitted a death certificate to gain access to the recipient’s password vault. A link in the phishing email, supposedly to stop the request, leads to a fake page that asks for the LastPass user’s master password.

Legacy request opened
Image courtesy of LastPass

“Legacy Request Opened (URGENT IF YOU ARE NOT DECEASED)

A death certificate was uploaded by a family member to regain access to the Lastpass account

If you have not passed away and you believe this is a mistake, please reply to this email with STOP”

LastPass links this campaign to CryptoChameleon (also known as UNC5356), a group that previously targeted cryptocurrency users and platforms with similar social engineering attacks. The same group used LastPass branding in a phishing kit in April 2024.

The phishing attempt exploits the legitimate inheritance process, which is an emergency access feature in LastPass that allows designated contacts request access to a vault if the account holder dies or becomes incapacitated.

Stealing someone’s password manager credentials gives attackers access to every login stored inside. We recently reported on an attempt to steal 1Password credentials.

Lastpass also notes that:

“Several of the phishing sites are clearly intended to target passkeys, reflecting both the increased interest on the part of cybercriminals in passkeys and the increased adoption on the part of consumers.”

Passkeys are a very secure replacement for passwords. They can’t be cracked, guessed or phished, and let you log in easily without having to type a password every time. Most password managers—like LastPass, 1Password, Dashlane, and Bitwarden—now store and sync passkeys across devices.

Because passkeys often protect high-value assets like banking, crypto wallets, password managers, and company accounts—they’ve become an attractive prize for attackers.

Advice for users

While passkeys themselves cannot be phished via simple credential theft, attackers can trick users into:

  • Registering a new passkey on a malicious site or a fake login page
  • Approving fraudulent device syncs or account transfers
  • Disabling passkeys and reverting to weaker login methods, then stealing those fallback credentials

LastPass and other security experts recommend:

  • Never enter your master password on links received via email or text.
  • Understand how passkeys work and keep them safe.
  • Only logging into your password manager via official apps or bookmarks.
  • Be wary of urgent or alarming messages demanding immediate action.
  • Remember that legitimate companies won’t ask for sensitive credentials via email or phone.
  • Use an up-to-date real-time anti-malware solution preferably with a web protection module.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (October 20 – October 26)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Is AI moving faster than its safety net?

You’ve probably noticed that artificial intelligence, or AI, has been everywhere lately—news, phones, apps, even in your browser. It seems like everything suddenly wants to be “powered by AI.“ If it’s not, it’s considered old school and boring. It’s easy to get swept up in the promise: smarter tools, less work, and maybe even a glimpse of the future.

But if we look at some of the things we learned just this week, that glimpse doesn’t only promise good things. There’s a quieter story running alongside the hype that you won’t see in the commercials. It’s the story of how AI’s rapid development is leaving security and privacy struggling to catch up.

And if you make use of AI assistants, chatbots, or those “smart” AI browsers popping up on your screen, those stories are worth your attention.

Are they smarter than us?

Even some of the industry’s biggest names—Steve Wozniak, Sir Richard Branson, and Stuart Russel—are worried that progress in AI is moving too fast for its own good. In an article published by ZDNet, they talk about their fear of “superintelligence,” saying they’re afraid we’ll cross the line from “AI helps humans” to “AI acts beyond human control” before we’ve figured out how to keep it in check.

These scenarios are not about killer robots or takeovers like in the movies. They’re about much smaller, subtler problems that add up. For example, an AI system designed to make customer service more efficient might accidentally share private data because it wasn’t trained to understand what’s confidential. Or an AI tool designed to optimize web traffic might quietly break privacy laws it doesn’t comprehend.

At the scale we use AI—billions of interactions per day—these oversights become serious. The problem isn’t that AI is malicious; it’s that it doesn’t understand consequences, and developers forget to set boundaries.

We’re already struggling to build basic online safety into the AI tools that are replacing our everyday ones.

AI browsers: too smart, too soon

AI browsers—and their newer cousin, the ‘agentic’ browser—do more than just display websites. They can read them, summarize them, and even perform tasks for you.

A browser that can search, write, and even act on your behalf sounds great—but you may want to rethink that. According to research reported by Futurism, some of these tools are being rolled out with deeply worrying security flaws.

Here’s the issue: many AI browsers are just as vulnerable to prompt injection as AI chatbots. The difference is that if you give an AI browser a task, it runs off on its own and you have little control over what it reads or where it goes.

Take Comet, a browser developed by the company Perplexity. Researchers at Brave found that Comet’s “AI assistant” could be tricked into doing harmful things simple because it trusted what it saw online.

In one test, researchers showed the browser a seemingly innocent image. Hidden inside that image was a line of invisible text—something no human would see, but instructions meant only for the AI. The browser followed the hidden commands and ended up opening personal emails and visiting a malicious website.

In short, the AI couldn’t tell the difference between a user’s request and an attacker’s disguised instructions. That is a typical example of a prompt injection attack, which works a bit like phishing for machines. Instead of tricking a person into clicking a bad link, it tricks an AI browser into doing it for you. Without the realization of “oops, maybe I shouldn’t have done that,” it is faster, quiet, and with access you might not even realize it has.

The AI has no idea it did something wrong. It’s just following orders, doing exactly what it was programmed to do. It doesn’t know which instructions are bad because nobody taught it how to tell the difference.

Misery loves company: spoofed AI interfaces

Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces.

According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to genuine ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot. Picture this: you open your browser, see what looks like your trusted AI helper, and ask it a question. But instead of the AI assistant helping you, it’s quietly recording every word you type.

Some of these fake sidebars even persuade users to “verify” credentials or “authorize” a quick fix. This is social engineering in a new disguise. The scammer doesn’t need to lure you away from the page, they just need to convince you that the AI you’re chatting with is legitimate. Once that trust is earned, the damage is done.

And since AI tools are designed to sound helpful, polite, and confident, most people will take their word for it. After all, if an AI browser says, “Don’t worry, this is safe to click,” who are you to argue?

What can we do?

The key problem right now is speed. We keep pushing the limits of what AI can do faster than we can make it safe. The next big problem will be the data these systems are trained on.

As long as we keep chasing the newest features, companies will keep pushing for more options and integrations—whether or not they’re ready. They’ll teach your fridge to track your diet if they think you’ll buy it.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting it with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.

Pro tip: I installed Malwarebytes’ Browser Guard on Comet, and it seems to be working fine so far. I’ll keep you posted on that.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.