IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Website Updates & Privacy Policy Refresh at Mako Logics

We’re excited to share some recent updates to makologics.com designed to improve your experience and better reflect our commitment to security, transparency, and service quality.

What’s New on Our Website

Over the past few weeks, we’ve been working behind the scenes to modernize and refine our website. The goal is simple: make it easier for visitors to find the information they need, understand our services, and connect with our team.

Here’s what you’ll notice:

  • Improved content and clearer service pages to better explain how we help businesses with Managed IT Services, Cloud Solutions, Cybersecurity, and IT Consulting

  • Better organization and navigation so you can find solutions faster

  • Updated messaging that reflects how we support growing businesses across Houston, The Woodlands, and surrounding areas

  • A more professional, modern look and feel aligned with the level of service we deliver to our clients every day

These updates are part of our ongoing effort to make makologics.com a more useful resource for both current clients and businesses exploring better IT solutions.

Updates to Our Privacy Policy

Along with the website refresh, we’ve also updated our Privacy Policy to make it clearer and more transparent, and to explain how we collect, use, and protect information.

Your trust matters to us. The updated Privacy Policy explains:

  • What information we collect and when

  • How we use and protect that information

  • How we handle security, cookies, and third-party services

  • Your rights and choices when interacting with our website and services

We’ve rewritten the policy in clearer language and updated it to reflect current best practices around data protection, security, and online privacy.

Our Commitment to Security and Transparency

As an IT services provider, security and trust aren’t just services we offer—they’re principles we operate by. Whether it’s protecting your business infrastructure or safeguarding information shared through our website, we take data protection seriously.

These updates are another step in our commitment to:

  • Transparency in how we operate

  • Strong security practices

  • Clear communication with our clients and partners

Take a Look

We invite you to explore the updated website and review our new Privacy Policy. If you have any questions about the changes or how we protect your information, our team is always happy to talk.

Thank you for trusting Mako Logics as your IT partner. We’re excited about what’s ahead and look forward to continuing to support your business with secure, reliable, and forward-thinking technology solutions.

Age verification vendor Persona left frontend exposed, researchers say

Researchers investigating Discord’s age-verification checks say they discovered an exposed frontend belonging to Persona, the identity-verification vendor used by Discord. It revealed a far more expansive surveillance and financial intelligence stack than a simple “teen safety” tool.

A short while ago we reported that Discord will limit profiles to teen-appropriate mode until you verify your age. That means anyone who wants to continue using Discord as before has to let it scan their face, and the internet was far from happy.

To analyze these scans, Discord uses biometric identity verification startup Persona Identities, Inc., a venture that offers Know Your Customer (KYC) and Anti-Money Laundering (AML) solutions that rely on biometric identity checks to estimate a user’s age.

To demonstrate the privacy implications, researchers took a closer look and found a publicly exposed Persona frontend on a US government–authorized server, with 2,456 accessible files.

You read that right. According to researcher “Celeste,” the exposed code, which has now been removed, sat at a US government-authorized endpoint that appears to have been isolated from its regular work environment.

In those files, the researchers found details about the extensive surveillance Persona software performs on its users. Beyond checking their age, the software performs 269 distinct verification checks, runs facial recognition against watchlists and politically exposed persons, screens “adverse media” across 14 categories (including terrorism and espionage), and assigns risk and similarity scores.

Persona collects—and can retain for up to three years—IP addresses, browser and device fingerprints, government ID numbers, phone numbers, names, faces, plus a battery of “selfie” analytics like suspicious-entity detection, pose repeat detection, and age inconsistency checks.



At a time when age verification is very much a hot topic, this is not the kind of news to persuade privacy advocates that age verification is in our best interest. Sending data obtained during age verification checks to data brokers and foreign governments (Persona was reportedly tested by Discord in the UK) will not instill the level of trust needed for users to feel comfortable submitting to this kind of scrutiny.

This comes amid broader questions about whether age verification is actually doing what it’s supposed to do. Euronews looked at the effect of Australia’s world-leading ban on social media for under-16s. Australia’s new rules have only been in force for six weeks, but while the country’s internet regulator says it has shut down about 4.7 million accounts held by under‑16s on platforms like TikTok, Instagram, Snapchat, YouTube, X, Twitch, Reddit, and Threads, children and parents describe a very different reality. Interviews with teenagers, parents and researchers indicate that many children are still accessing banned apps through simple workarounds.

According to The Rage, Discord has stated it will not continue to use Persona for age verification. However, other platforms reported to use Persona include:

  • Roblox: Uses Persona’s facial age estimation and ID verification as the core of its “age checks to chat” system.
  • OpenAI / ChatGPT: OpenAI’s help center explains that if you need to verify being 18+, “Persona is a trusted third-party company we use to help verify age,” and that Persona may ask for a live selfie and/or government ID.
  • Lime: The shared e-scooter and bike company deploys custom age verification flows with Persona to meet each region’s unique requirements.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Facebook ads spread fake Windows 11 downloads that steal passwords and crypto wallets

Attackers are running paid Facebook ads that look like official Microsoft promotions, then directing users to near-perfect clones of the Windows 11 download page. Click Download Now and instead of a Windows update, you get a malicious installer—one that silently steals saved passwords, browser sessions, and cryptocurrency wallet data.

“I just wanted to update Windows”

The attack starts with something completely ordinary: a Facebook ad. It looks professional, uses Microsoft branding, and promotes what appears to be the latest Windows 11 update. If you have been meaning to keep your PC current, it feels like a convenient shortcut.

Fraudulent Windows 11 update ads found on Facebook

Click the ad and you land on a site that looks almost identical to Microsoft’s real Software Download page. The logo, layout, fonts, and even the legal text in the footer are copied. The only obvious difference is in the address bar. Instead of microsoft.com, you’ll see one of these lookalike domains:

  • ms-25h2-download[.]pro
  • ms-25h2-update[.]pro
  • ms25h2-download[.]pro
  • ms25h2-update[.]pro

The “25H2” in domain names is deliberate. It mimics the naming convention Microsoft uses for Windows releases—24H2, the current version, was on everyone’s lips when this campaign launched, making the fake domains look plausible at a glance.
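Why do lookalikes such as these work? A naive check (“does the name contain microsoft?”) would actually reject them, but they don’t need to pass such a check: they only need to look plausible to a human. For software that does validate hostnames, the reliable test is a strict suffix match against the registered domain. The sketch below is illustrative; the helper name and allowlist are our own, not from any real product.

```python
# Sketch: strict suffix matching correctly rejects the campaign's
# lookalike domains, while still accepting real Microsoft subdomains.

def is_official(host: str, registered: str = "microsoft.com") -> bool:
    """True only for the registered domain itself or its subdomains."""
    host = host.lower().rstrip(".")
    return host == registered or host.endswith("." + registered)

fakes = [
    "ms-25h2-download.pro",
    "ms-25h2-update.pro",
    "ms25h2-download.pro",
    "ms25h2-update.pro",
]

for fake in fakes:
    assert not is_official(fake)          # all four campaign domains fail

assert is_official("www.microsoft.com")   # genuine subdomain passes
assert not is_official("microsoft.com.evil.example")  # prefix spoof also fails
```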

Geofencing: only the right targets get the payload

This campaign does not blindly infect everyone who visits the site.

Before delivering the malware, the fake page checks who you are. If you connect from a data center IP address—often used by security researchers and automated scanners—you get redirected to google.com. The site looks harmless.

Only visitors who appear to be regular home or office users receive the malicious file.

This technique, known as geofencing combined with sandbox detection, is what allowed this campaign to run for as long as it did without being caught and shut down by automated systems. The infrastructure is configured to evade automated security analysis.
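The gating logic described above can be sketched in a few lines. This is an illustration of the technique, not the campaign’s actual code: the IP ranges and response strings below are assumptions for demonstration.

```python
# Minimal sketch of server-side visitor gating ("cloaking"): requests
# from known data-center IP blocks get a harmless redirect; everyone
# else is served the payload. Ranges here are examples only.
import ipaddress

DATACENTER_BLOCKS = [
    ipaddress.ip_network("198.51.100.0/24"),  # RFC 5737 documentation range
    ipaddress.ip_network("192.0.2.0/24"),     # RFC 5737 documentation range
]

def choose_response(client_ip: str) -> str:
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in DATACENTER_BLOCKS):
        return "redirect:https://google.com"  # researchers see a harmless site
    return "serve:payload"                     # ordinary visitors get the file

print(choose_response("198.51.100.7"))  # redirect
print(choose_response("203.0.113.5"))   # payload
```

Because automated scanners overwhelmingly originate from data-center address space, this single check filters out most security tooling while leaving home and office users exposed.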

When a targeted user clicks Download now, the site triggers a Facebook Pixel “Lead” event—the same tracking method legitimate advertisers use to measure conversions. The attackers are monitoring which victims take the bait and optimizing their ad spend in real time.


A 75 MB “installer” served straight from GitHub

If you pass the checks, the site downloads a file named ms-update32.exe. At 75 MB, it feels like a legitimate Windows installer.

The file is hosted on GitHub, a trusted platform used by millions of developers. That means the download arrives over HTTPS with a valid security certificate. Because it comes from a reputable domain, browsers do not automatically flag it as suspicious.

The installer was built using Inno Setup, a legitimate tool often abused by malware authors because it creates professional-looking installation packages.

What happens when you run it

Before doing anything damaging, the installer checks whether it is being watched. It looks for virtual machine environments, debugger software, and analysis tools. If it finds any of them, it stops. This is the same evasion logic that lets it slip past many automated security sandboxes—those systems run inside virtual machines by design.

On a real user’s machine, the installer proceeds to extract and deploy its components.

The most significant component is a full Electron-based application installed to C:\Users\<USER>\AppData\Roaming\LunarApplication. Electron is a legitimate framework used by apps like Slack and Visual Studio Code. That makes it a useful disguise.

The choice of name is not accidental. “Lunar” is a brand associated with cryptocurrency tooling, and the application comes bundled with Node.js libraries specifically designed to create ZIP archives—suggesting it collects data, packages it up, and sends it out. Likely targets include cryptocurrency wallet files, seed phrases, browser credential stores, and session cookies.

At the same time, two obfuscated PowerShell scripts with randomised filenames are written to the %TEMP% folder and executed with a command line that deliberately disables Windows script-signing protections:

powershell.exe -NoProfile -NoLogo -InputFormat Text -NoExit -ExecutionPolicy Unrestricted -Command -

Hiding in the registry, covering its tracks

To survive reboots, the malware writes a large binary blob to the Windows registry under: HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\TIP\AggregateResults.

The TIP (Text Input Processor) registry path is a legitimate Windows component, which makes it less likely to raise suspicion.

Telemetry also shows behavior consistent with process injection. The malware creates Windows processes in a suspended state, injects code into them, and resumes execution. This allows the malicious code to run under the identity of a legitimate process, reducing the chance of detection.

Once execution is established, the installer deletes temporary files to reduce its forensic footprint. It can also initiate system shutdown or reboot operations, potentially to interfere with analysis.

The malware uses multiple encryption and obfuscation techniques, including RC4, HC-128, XOR encoding, and FNV hashing for API resolution. These methods make static analysis more difficult.
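Hashed API resolution is worth a quick illustration. Instead of storing telltale strings like “CreateProcessW”, malware of this kind stores only a hash of each API name and compares hashes of exported function names at runtime. The sketch below implements the standard 32-bit FNV-1a algorithm named in the report; the export list and variable names are illustrative.

```python
# Sketch of FNV-1a-based API resolution: the binary carries only a
# 32-bit hash, then hashes every export name until one matches, so the
# API name never appears as a string in the file.

FNV_OFFSET = 0x811C9DC5
FNV_PRIME = 0x01000193

def fnv1a32(name: str) -> int:
    h = FNV_OFFSET
    for byte in name.encode("ascii"):
        h ^= byte
        h = (h * FNV_PRIME) & 0xFFFFFFFF
    return h

# The binary would carry only the hash...
WANTED = fnv1a32("CreateProcessW")

# ...and resolve it at runtime by hashing each export until one matches.
exports = ["CloseHandle", "CreateProcessW", "VirtualAlloc"]
resolved = next(e for e in exports if fnv1a32(e) == WANTED)
assert resolved == "CreateProcessW"
```

This is why static string scans find nothing useful: analysts must either recognize the hashing constants or brute-force hashes of known API names to recover the imports.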

The Facebook ads angle

The use of paid Facebook advertising to distribute malware is worth pausing on. This is not a phishing email that lands in a spam folder, or a malicious result buried in a search page. These are paid Facebook ads appearing alongside posts from friends and family.

The attackers ran two parallel ad campaigns, each pointing to separate phishing domains. Each campaign used its own Facebook Pixel ID and tracking parameters. If one domain or ad account gets shut down, the other can continue running.


What to do if you think you’ve been affected

This campaign is technically polished and operationally aware: its infrastructure is built to evade common security research and sandboxing techniques. The attackers understand how people download software and chose Facebook advertising as their delivery vector precisely because it reaches real users in a context where trust is high.

Remember: Windows updates come from Windows Update inside your system settings—not from a website and never from a social media ad. Microsoft does not advertise Windows updates on Facebook.

And a pro tip: Malwarebytes would have detected and blocked the identified payload and associated infrastructure.

If you downloaded and ran a file from either of these sites, treat the system as compromised and act quickly.

  • Do not log into any accounts from that computer until it has been scanned and cleaned.
  • Run a full scan with Malwarebytes immediately.
  • Change passwords for important accounts like email, banking, and social media from a different, clean device.
  • If you use cryptocurrency wallets on that machine, move funds to a new wallet with a new seed phrase generated on a clean device.
  • Consider alerting your bank and enabling fraud monitoring if any financial credentials were stored on or accessible from that device.

For IT and security teams:

  • Block the phishing domains at DNS and web proxy
  • Alert on PowerShell execution with -ExecutionPolicy Unrestricted in non-administrative contexts
  • Hunt for the LunarApplication directory and randomized .yiz.ps1 / .unx.ps1 files in %TEMP%
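The third hunting step can be scripted. The sketch below is a starting point, not a complete detection: the paths and filename patterns come from this report, and the directory locations are assumptions about a default Windows profile.

```python
# Hedged hunting sketch: look for the LunarApplication directory and
# the randomized *.yiz.ps1 / *.unx.ps1 droppers described above.
import os
from pathlib import Path

def hunt(temp_dir: str, appdata_dir: str) -> list[str]:
    """Return paths of artifacts matching this campaign's known names."""
    hits = []
    lunar = Path(appdata_dir) / "LunarApplication"
    if lunar.is_dir():
        hits.append(str(lunar))
    for pattern in ("*.yiz.ps1", "*.unx.ps1"):
        hits += [str(p) for p in Path(temp_dir).glob(pattern)]
    return hits

if __name__ == "__main__":
    # On Windows these resolve to %TEMP% and %APPDATA%; elsewhere they
    # fall back to /tmp so the sketch stays runnable for testing.
    temp = os.environ.get("TEMP", "/tmp")
    roaming = os.environ.get("APPDATA", "/tmp")
    for hit in hunt(temp, roaming):
        print("SUSPICIOUS:", hit)
```

Filename matches alone are weak evidence, so treat any hit as a trigger for a full scan rather than proof of compromise.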

Indicators of Compromise (IOCs)

File hash (SHA-256)

  • c634838f255e0a691f8be3eab45f2015f7f3572fba2124142cf9fe1d227416aa (ms-update32.exe)

Domains

  • ms-25h2-download[.]pro
  • ms-25h2-update[.]pro
  • ms25h2-download[.]pro
  • ms25h2-update[.]pro
  • raw.githubusercontent.com/preconfigured/dl/refs/heads/main/ms-update32.exe (payload delivery URL)

File system artifacts

  • C:\Users\<USER>\AppData\Roaming\LunarApplication (Electron application directory)
  • Randomized *.yiz.ps1 and *.unx.ps1 scripts in %TEMP%
  • ms-update32.exe (initial installer)

Registry

  • HKEY_LOCAL_MACHINE\SYSTEM\Software\Microsoft\TIP\AggregateResults (large binary data — persistence)

Facebook advertising infrastructure

  • Pixel ID: 1483936789828513
  • Pixel ID: 955896793066177
  • Campaign ID: 52530946232510
  • Campaign ID: 6984509026382

AI-generated passwords are a security risk

Using Artificial Intelligence (AI) to generate your passwords is a bad idea. The AI is likely to hand that same password to a criminal, who can then use it in a dictionary attack, in which an attacker runs through a prepared list of likely passwords (words, phrases, patterns) with automated tools until one of them works, instead of trying every possible combination.

AI cybersecurity firm Irregular tested ChatGPT, Claude, and Gemini and found that the passwords they generate are “highly predictable,” and not truly random. When they tested Claude, 50 prompts produced just 23 unique passwords. One string appeared 10 times, while many others shared the same structure.

This could turn out to be a problem.

Traditionally, attackers build or download wordlists made of common passwords, real‑world leaks, and patterned variants (words plus numbers and symbols) to use in dictionary attacks. It takes almost no effort to add the thousand or so passwords that AI chatbots commonly produce.
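A dictionary attack is mechanically simple, which is exactly why predictable passwords are dangerous. The sketch below shows the core loop against an unsalted hash; the wordlist and the “stolen” password are made up for illustration, and real systems should use salted, stretched hashes, which slow this down but do not defeat a short wordlist.

```python
# Minimal dictionary-attack sketch: hash each candidate from a wordlist
# and compare against a stolen hash. Predictable AI-generated strings
# simply become more wordlist entries.
import hashlib

def sha256(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

stolen_hash = sha256("Sunshine2024!")  # hypothetical leaked hash

wordlist = ["password123", "Welcome1!", "Sunshine2024!", "Dragon#7"]

cracked = next((w for w in wordlist if sha256(w) == stolen_hash), None)
print(cracked)  # "Sunshine2024!" -- recovered in 3 tries, no brute force needed
```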

AI chatbots are trained to provide answers based on what they’ve learned. They are good at predicting what comes next based on what they already have, not at inventing something completely new.

As the researchers put it:

“LLMs work by predicting the most likely next token, which is the exact opposite of what secure password generation requires: uniform, unpredictable randomness.”

In the past, we explained why computers are not very good at randomness in the first place. Password managers get around this fact by using dedicated cryptographic random number generators that mix in real‑world entropy, instead of the pattern‑based text generation you see with LLMs.

In other words, a good password manager doesn’t “invent” your password the way an AI does. It asks the operating system for cryptographic random bits and turns those directly into characters, so there’s no hidden pattern for attackers to learn.
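That approach is easy to reproduce yourself. The sketch below uses Python’s standard `secrets` module, which draws from the operating system’s cryptographic random source, the same class of generator password managers rely on; the length and character set are arbitrary choices.

```python
# Password-manager-style generation: every character comes from the OS
# cryptographic RNG (secrets), not from a pattern-predicting model.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different, patternless output on every run
```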

A website or platform where you submit such passwords may tell you they’re strong, but the same basic reasoning as to why you shouldn’t reuse passwords applies. What use is a strong password if cybercriminals already have it?

As always, we prefer passkeys over passwords, but we realize this isn’t always an option. If you have to use a password, don’t let an AI make one up for you. It’s just not safe. And if you already did, consider changing it and adding multi-factor authentication (2FA) to make the account more secure.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Intimate products maker Tenga spilled customer data

Tenga confirmed reports published by several outlets that the company notified customers of a data breach.

The Japanese manufacturer of adult products appears to have fallen victim to a phishing attack targeting one of its employees. Tenga reportedly wrote in the data breach notification:

“An unauthorized party gained access to the professional email account of one of our employees.”

This unauthorized access exposed the contents of said account’s inbox, potentially including customer names, email addresses, past correspondence, order details, and customer service inquiries.

In its official statement, Tenga said a “limited segment” of US customers who interacted with the company were impacted by the incident. Regarding the scope of the stolen data, it stated:

“The information involved was limited to customer email addresses and related correspondence history. No sensitive personal data, such as Social Security numbers, billing/credit card information, or TENGA/iroha Store passwords were jeopardized in this incident.”

From the wording of Tenga’s online statement, it seems the compromised account was used to send spam emails that included an attachment.

“Attachment Safety: We want to state clearly that there is no risk to your device or data if the suspicious attachment was not opened. The risk was limited to the potential execution of the attachment within the specific ‘spam’ window (February 12, 2026, between 12am and 1am PT).”



We reached out to Tenga about this “suspicious attachment” but have not heard back at the time of writing. We’ll keep you posted.

Tenga proactively contacted potentially affected customers. It advises them to change passwords and remain vigilant about any unusual activity. We would add that affected customers should be on the lookout for sextortion-themed phishing attempts.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.
  • Use our free Digital Footprint scan to see whether your personal information has been exposed online.

What do cybercriminals know about you?

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.

Meta patents AI that could keep you posting from beyond the grave

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. Microsoft patented a chatbot in 2020 that would let you talk to AI versions of deceased individuals; its own AI general manager called it disturbing, and it never went into production. Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio in 2022, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Betterment data breach might be worse than we thought

Betterment LLC is an investment advisor registered with the US Securities and Exchange Commission (SEC). The company disclosed a January 2026 incident in which an attacker used social engineering to access a third‑party platform used for customer communications, then abused it to send crypto‑themed phishing messages and exfiltrate contact and identity data for more than a million people.

What makes this particularly concerning is the depth of the exposed information. This isn’t just a list of email addresses. The leaked files include retirement plan details, financial interests, internal meeting notes, and pipeline data. It’s information that gives cybercriminals real context about a person’s finances and professional life.

What’s worse is that the ransomware group Shiny Hunters says that, because Betterment refused to pay its ransom demand, it is now publishing the stolen data.

Shiny Hunters claim

While Betterment has not revealed the number of affected customers in its online communications, the general consensus is that data belonging to 1.4 million customers was involved. And now every cybercriminal can download this information at their leisure.

We analyzed some of the data and found one particularly worrying CSV file with detailed data on 181,487 people. This file included information such as:

  • Full names (first and last)
  • Personal email addresses (e.g., Gmail)
  • Work email addresses
  • Company name and employer info
  • Job titles and roles
  • Phone numbers (both mobile and work numbers)
  • Addresses and company websites
  • Plan details—company retirement/401k plans, assets, participants
  • Survey responses, deal and client pipeline details, meeting notes
  • Financial needs/interests (e.g., requesting a securities-backed line of credit for a house purchase)


This kind of data is a gold mine for phishers, who can use it in targeted attacks. It has enough context to craft convincing, individually tailored phishing emails. For example:

  • Addressing someone by their real name, company, and job title
  • Referencing the company’s retirement or financial plans
  • Impersonating Betterment advisors or plan administrators
  • Initiating scam calls about financial advice

Combined with data from other breaches, this information could do even more damage and lead to identity theft.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Job scam uses fake Google Forms site to harvest Google logins

As part of our investigation into a job-themed phishing campaign, we came across several suspicious URLs that all looked like this:

https://forms.google.ss-o[.]com/forms/d/e/{unique_id}/viewform?form=opportunitysec&promo=

The subdomain forms.google.ss-o[.]com is a clear attempt to impersonate the legitimate forms.google.com. The “ss-o” is likely meant to evoke “single sign-on” (SSO), an authentication method that lets users securely log in to multiple independent applications or websites with a single set of credentials (username and password).
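The trick works because browsers read hostnames left to right, but ownership is determined by the labels at the end. A short sketch makes this visible; the naive last-two-labels split below is an illustrative simplification (real code should consult the Public Suffix List to handle suffixes like .co.uk).

```python
# Sketch: the registrable domain is at the END of the hostname, so
# "forms.google." is just a subdomain chosen by whoever owns ss-o.com.
from urllib.parse import urlsplit

def registrable_domain(url: str) -> str:
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:])  # naive: fine for .com-style suffixes

phish = "https://forms.google.ss-o.com/forms/d/e/abc123/viewform"
real = "https://forms.google.com/forms/d/e/abc123/viewform"

print(registrable_domain(phish))  # ss-o.com -- controlled by the phishers
print(registrable_domain(real))   # google.com
```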

Unfortunately, when we tried to visit the URLs we were redirected to the local Google search website. This is a common phishing tactic to prevent victims from sharing their personalized links with researchers or online analysis tools.

After some digging, we found a file called generation_form.php on the same domain, which we believe the phishing crew used to create these links. The landing page for the campaign was: https://forms.google.ss-o[.]com/generation_form.php?form=opportunitysec

The generation_form.php script does what the name implies: It creates a personalized URL for the person clicking that link.

With that knowledge in hand, we could check what the phish was all about. Our personalized link brought us to this website:

Fake Google Forms site

The greyed out “form” behind the prompt promises:

  • We’re Hiring! Customer Support Executive (International Process)
  • Are you looking to kick-start or advance your career…
  • The fields in the form: Full Name, Email address, and an essay field “Please describe in detail why we should choose you”
  • Buttons: “Submit” and “Clear form.”

The whole web page emulates Google Forms, including logo images, color schemes, a notice about not “submitting passwords,” and legal links. At the bottom, it even includes the typical Google Forms disclaimer (“This content is neither created nor endorsed by Google.”) for authenticity.

Clicking the “Sign in” button took us to https://id-v4[.]com/generation.php, which has now been taken down. The domain id-v4.com has been used in several phishing campaigns for almost a year. In this case, it asked for Google account credentials.

Given the “job opportunity” angle, we suspect links were distributed through targeted emails or LinkedIn messages.

How to stay safe

Lures that promise remote job opportunities are very common these days. Here are a few pointers to help keep you safe from targeted attacks like this:

  • Do not click on links in unsolicited job offers.
  • Use a password manager; it won’t autofill your Google username and password on a fake website, because the domain doesn’t match the real one.
  • Use an up-to-date, real-time anti-malware solution with a web protection component.

Pro tip: Malwarebytes Scam Guard identified this attack as a scam just by looking at the URL.

IOCs

id-v4[.]com

forms.google.ss-o[.]com

forms.google.ss-o[.]com blocked by Malwarebytes

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered questions about the investment, projected returns, and ultimately steered victims into sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But because “Google Coin” has appeared in scams before, anyone checking it out might think it’s real. And the chatbot was very convincing.

Google Coin Pre-Market

AI as the closer

The chatbot introduced itself as:

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”
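The bot’s numbers are at least internally consistent, which is part of what makes the pitch feel credible. A quick check of its own arithmetic:

```python
# The scam bot's quoted figures: the arithmetic checks out, but the
# listing price is pure invention, so the "projection" is meaningless.
presale_price = 3.95    # claimed presale price per token (USD)
listing_price = 27.55   # claimed listing price per token (USD)
tokens = 100

investment = tokens * presale_price         # the quoted $395
projected = tokens * listing_price          # the promised $2,755
multiple = listing_price / presale_price    # the "approximately 7x"
```

Internal consistency is the only real thing about these numbers: there is no exchange listing, so the $27.55 figure is fabricated.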

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

Fake Gemini chatbot

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot didn’t get confused or break character. It looped back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Scripted fake Gemini chatbot

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.
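The tier logic behind that upsell is trivial to model. In the sketch below, the 100-token/5% floor and 100,000-token/30% ceiling come from the scam site; the intermediate thresholds are hypothetical illustrations:

```python
# Tiered bonus lookup: highest threshold met wins.
# 100 tokens -> 5% and 100,000 -> 30% are from the scam site;
# the two middle tiers are assumed for illustration.
TIERS = [(100_000, 0.30), (10_000, 0.20), (1_000, 0.10), (100, 0.05)]

def bonus_rate(tokens: int) -> float:
    for threshold, rate in TIERS:
        if tokens >= threshold:
            return rate
    return 0.0  # below 100 tokens: no bonus
```

The bonus costs the scammers nothing to grant, because the tokens have no value to begin with; its only function is to make larger Bitcoin payments feel like better deals.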

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

Waiting for payment

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions; scam bots try to avoid them (and if they do answer, verify it).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like “stage 5 ends soon,” “listing date approaching,” and “limited presale” are designed to push you into making fast decisions.

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claims on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY



Chrome “preloading” could be leaking your data and causing problems in Browser Guard

This article explains why Chrome’s “preloading” feature can cause scary-looking blocks in Malwarebytes Browser Guard and how to turn it off.

Modern browsers want to provide content instantly. To do that, Chrome includes a feature called page preloading. When this is enabled, Chrome doesn’t just wait for you to click a link. It guesses what you’re likely to click next and starts loading those pages in the background—before you decide whether to visit them.

That guesswork happens in several places. When you type a search into the address bar, Chrome may start preloading one or more of the top search results so that, if you click them, they open almost immediately. It can also preload pages that are linked from the site you’re currently on, based on Google’s prediction that they’re “likely next steps.” All of this happens quietly, without any extra tabs opening, and often without any obvious sign that more pages are being fetched.

From a performance point of view, that’s clever. From a privacy and security point of view, it’s more complicated.

Those preloaded pages can run code, drop cookies, and contact servers, even if you never actually visit them in the traditional sense. In other words, your browser can talk to a site you didn’t consciously choose to open.
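Site operators can actually tell these requests apart: Chrome marks speculative loads with a Sec-Purpose request header (older Chrome versions used Purpose: prefetch). A minimal server-side sketch, assuming you just want to flag such requests; exact header values vary by Chrome version, so it matches loosely:

```python
def is_speculative(headers: dict) -> bool:
    # Chrome sets "Sec-Purpose: prefetch" (or "prefetch;prerender")
    # on speculative requests; legacy prefetches used "Purpose: prefetch".
    # Lookup is case-sensitive here for simplicity; real servers should
    # normalize header names first.
    purpose = headers.get("Sec-Purpose") or headers.get("Purpose") or ""
    return "prefetch" in purpose or "prerender" in purpose
```

This is how privacy tools and analytics platforms distinguish a genuine visit from a page the browser fetched on its own initiative.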

Malwarebytes Browser Guard inspects web traffic and blocks connections to domains it considers malicious or suspicious. So, if Chrome decides to preload a search result that leads to a site on our blocklist, Browser Guard will still do its job and stop that background connection. The result can be confusing: You see a warning page (called a block page) for a site you don’t recognize and are sure you never clicked.

Nothing unusual is happening there, and it does not mean your browser is “clicking links by itself.” It simply means Chrome’s preloading feature made a behind-the-scenes request, and Browser Guard intercepted it as designed. Other privacy tools take a similar approach. Some popular content blockers disable preloading by default because it leaks more data and can contact unwanted sites.

For now, the simplest way to stop these unexpected block pages is to turn off preloading in Chrome’s settings, which prevents those speculative background requests.

How to manage Chrome’s preloading setting

We recommend turning off page preloading in Chrome to protect your browsing privacy and to stop seeing unexpected block pages when searching the web. If you don’t want to turn off page preloading, you can try using a different browser and repeating your search.

To turn off page preloading:

  1. In Chrome’s address bar, enter: chrome://settings
  2. In the left sidebar, click Performance.
  3. Scroll down to Speed, then toggle Preload pages off.
How to turn preload pages on and off

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.