IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Malware-infected printer delivered something extra to Windows users

You’d hope that spending $6,000 on a printer would give you a secure experience, free from viruses and other malware. However, in the case of Procolored printers, you’d be wrong.

The Shenzhen-based company sells UV printers, which can print on a variety of materials including wood, acrylic, tile, and plastic. They come with all kinds of attractive features. However, as reviewer Cameron Coward found out, they also came with malware (at least, until recently).

Coward received a review model of the Procolored V11 pro DTO UC printer that came with software on a USB thumb drive. “One of those was the Microsoft Visual C++ Redistributable in a zip folder,” he said in a review of the product. “But as soon as I unzipped it, Windows Defender quarantined the files and informed me that it found a Floxif virus.”

Floxif is a family of malware that infects a computer and installs a backdoor, giving the attacker control of the machine and allowing them to download other malware onto the system.

Coward also tried to download the control software for the printer from Procolored’s website, which linked to the mega.nz file sharing site. When he tried to download it, Google Chrome detected a virus and blocked it.

He checked in with the vendor, who denied that there was any malware and said the antivirus software was reporting a false positive (mistakenly identifying legitimate software as malicious).

Getting a second opinion

Coward asked for help on Reddit, and Karsten Hahn, principal malware researcher for cybersecurity company G Data CyberDefense, investigated the issue. After scanning 8 GB of software files for the Procolored products, all maintained on mega.nz, Hahn found no evidence of Floxif, he reported in an account of the investigation.

He did find two malware strains in the files, though. Win32.Backdoor.XRedRAT.A is a backdoor that first cropped up in other analyses last year. It gives the attacker complete control over the victim’s PC, including letting them enter command-line instructions, log keystrokes, and download or delete files.

The second, MSIL.Trojan-Stealer.CoinStealer.H, steals cryptocurrency from victims’ machines. It replaces cryptocurrency addresses in the clipboard with the attacker’s own address, which has already received around $100,000 in presumably ill-gotten funds.
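Because clipboard stealers substitute a complete replacement address, even a partial comparison of what you copied against what lands in the destination field exposes the swap. The sketch below is purely illustrative, not CoinStealer’s actual code; the function name and addresses are our own examples:

```python
def address_spot_check(copied: str, about_to_paste: str, n: int = 4) -> bool:
    """Return True if the address about to be pasted still matches the
    one the user originally copied.

    Checking both the first and last n characters matters: many
    addresses share a common prefix (e.g. Bitcoin bech32 addresses all
    start with "bc1q"), so a prefix-only glance can miss the swap.
    """
    copied, about_to_paste = copied.strip(), about_to_paste.strip()
    return (
        copied[:n] == about_to_paste[:n]
        and copied[-n:] == about_to_paste[-n:]
    )

# Example: the attacker's address shares the "bc1q" prefix, but the
# differing tail reveals the tampering.
wallet = "bc1qw508d6qejxtdg4y5r3zarvary0c5xw7kv8f3t4"
attacker = "bc1qar0srrr7xfkvy5l643lydnw9re59gtzzwf5mdq"
```

The same habit works without any code: before confirming a transfer, eyeball the first and last few characters of the pasted address against your intended recipient.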

Both malware files were detected by Malwarebytes’ machine learning component DDS as Generic.Malware.AI.DDS, so Malwarebytes and ThreatDown customers were protected against these threats.

After Hahn confronted Procolored with the evidence, the company responded:

“The software hosted on our website was initially transferred via USB drives. It is possible that a virus was introduced during this process.”

The organization said that it had taken steps to solve the problem, including temporarily taking down all software from its website and scanning all of its files.

“Only after passing stringent virus and security checks will the software be re-uploaded. This is a top priority for us, and we are taking it very seriously.”

However, Procolored hadn’t taken things seriously before that point. Searching the internet, Coward found that many owners of Procolored machines had reported the same issue. The infected files had been up for months.

A history of bundled malware

You might not think that this story applies to you. After all, only a small subset of our readers would be interested in buying such a specialist printer. However, this isn’t the only time a manufacturer has shipped a product riddled with malware.

In 2017, IBM accidentally shipped malware on a USB key containing initialization software for its storage devices. In 2018, Schneider Electric had to warn customers that some of the USB drives shipped with its battery monitoring software were infected with malware.

In 2019, we discovered that a US government program providing Android phones to low-income users was shipping them with malware.

Some of these malicious products were shipped on purpose by people who should have known better. In 2005, Sony shipped hidden software on its audio CDs that installed itself on Windows computers to stop them making digital copies. Removing it rendered the Windows installation useless.

The takeaway is this: just because a company has a respected brand doesn’t mean they can’t make mistakes. Take just as much care when installing something from a ‘reliable’ source as you would when doing anything else. Security software and caution go a long way.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

23andMe and its customers’ genetic data bought by a pharmaceutical org

The bankrupt genetic testing company 23andMe has been scooped up by drug producer Regeneron Pharmaceuticals for $256 million.

But why would a pharmaceutical company like Regeneron buy a bankrupt genetics testing company like 23andMe for such a large amount of money?

Well, Regeneron is a leading biotechnology company that invents, develops, and monetizes life-transforming medicines for people with serious diseases. So, it seems obvious that Regeneron’s primary interest lies in the genetic data collected by 23andMe, and the situation raises complex ethical, privacy, and security concerns that customers should understand and address.

Regeneron has pledged to uphold data privacy and security, working closely with a court-appointed Customer Privacy Ombudsman, acknowledging the importance of customer data protection and the ethical use of genetic information.

Dr. George Yancopoulos, Regeneron’s president, said in a statement:

“We believe we can help 23andMe deliver and build upon its mission to help people learn about their own DNA and how to improve their personal health, while furthering Regeneron’s efforts to improve the health and wellness of many.”

However, the scenario is less grim than the fears voiced by Senator Cassidy, chair of the US Senate Health, Education, Labor, and Pensions Committee, who expressed concerns about foreign adversaries, including the Chinese Communist Party, acquiring the sensitive genetic data of millions of Americans through 23andMe.

Regeneron already manages genetic data from nearly three million people, so 23andMe’s 15 million customers significantly expand this resource. Besides the genetic data itself, Regeneron likely values the consumer genetics business infrastructure and research services that 23andMe built, which can complement Regeneron’s pharmaceutical pipeline and personalized medicine efforts.

Genetic data is uniquely sensitive because it contains deeply personal information about an individual’s health risks, ancestry, and even family relationships. Unlike traditional medical records protected under HIPAA, 23andMe’s genetic data is covered primarily by consumer privacy laws, which offer weaker protections.

What can consumers do to protect their data?

Customers should actively manage their data on 23andMe by reviewing policies, deleting data if desired, and staying vigilant about how their sensitive genetic information is used.

People who have submitted samples to 23andMe have three different options, each providing a different level of privacy.

1. Delete your genetic data from 23andMe

For 23andMe customers who want to delete their data from 23andMe:

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • You will be asked to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (make sure you’re using a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

2. Destroy your 23andMe test sample

If you previously opted to have your saliva sample and DNA stored by 23andMe, but want to change that preference, you can do so from your account settings page, under “Preferences.”

3. Revoke permission for your genetic data to be used for research

If you previously consented to 23andMe and third-party researchers using your genetic data and sample for research, you may withdraw consent from the account settings page, under Research and Product Consents.

Check if you were caught up in the 23andMe data breach

Additionally, you may want to check if your data was exposed in the 2023 data breach. We recommend that you run a scan using our free Digital Footprint Portal to see if your data was exposed in the breach, and then to take additional steps to protect yourself (we’ll walk you through those).


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

How Los Angeles banned smartphones in schools (Lock and Code S06E10)

This week on the Lock and Code podcast…

There’s a problem in class today, and the second largest school district in the United States is trying to solve it.

After looking at the growing body of research that has associated increased smartphone and social media usage with increased levels of anxiety, depression, suicidal thoughts, and isolation—especially amongst adolescents and teenagers—Los Angeles Unified School District (LAUSD) implemented a cellphone ban across its 1,000 schools for its more than 500,000 students.

Under the ban, students who are kindergartners all the way through high school seniors cannot use cellphones, smartphones, smart watches, earbuds, smart glasses, and any other electronic devices that can send messages, receive calls, or browse the internet. Phones are not allowed at lunch or during passing periods between classes, and, under the ban, individual schools decide how students’ phones are stored, be that in lockers, in magnetically sealed pouches, or just placed into sleeves at the front door of every classroom, away from students’ reach.

The ban was approved by the Los Angeles Unified School District through what is called a “resolution”—which the board voted on last year. LAUSD Board Member Nick Melvoin, who sponsored the resolution, said the overall ban was the right decision to help students.  

“The research is clear: widespread use of smartphones and social media by kids and adolescents is harmful to their mental health, distracts from learning, and stifles meaningful in-person interaction.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with LAUSD Board Member Nick Melvoin about the smartphone ban, how exceptions were determined, where opposition arose, and whether it is “working.” Melvoin also speaks about the biggest changes he has seen in the first few months of the cellphone ban, especially the simple reintroduction of noise in hallways.

“[During a school visit last year,] every single kid was on their phone, every single kid. They were standing there looking, texting again, sometimes texting someone who was within a few feet of them, and it was quiet.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Update your Chrome to fix serious actively exploited vulnerability

Google released an emergency update for the Chrome browser to patch an actively exploited vulnerability that could have serious ramifications.

The update brings the Stable channel to versions 136.0.7103.113/.114 for Windows and Mac and 136.0.7103.113 for Linux.

The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To manually get the update, click Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is restart the browser for the update to complete, and for you to be safe from this vulnerability.

Chrome is up to date at version 136.0.7103.114

This update is crucial since it addresses an actively exploited vulnerability that could allow an attacker to steal information you share with other websites. Google says it’s aware that an exploit for CVE-2025-4664 exists in the wild. While Google stopped short of confirming active exploitation, the Cybersecurity and Infrastructure Security Agency (CISA) added the vulnerability to its Known Exploited Vulnerabilities catalog—a strong indication that it is being used out there.

Technical details

The vulnerability, tracked as CVE-2025-4664, lies in the Chrome Loader component, which handles resource requests. When you visit a website, your browser often needs to load additional pieces of that site, such as images, scripts, or stylesheets, which may come from various sources. The Loader manages these requests to fetch and display those resources properly.

While it does that, it should enforce security policies that prevent one website from accessing data belonging to another website, a principle known as the “same-origin policy.”

The vulnerability lies in the fact that those security policies were not applied properly to Link headers. This allowed an attacker to set a referrer-policy in the Link header that tells Chrome to include full URLs, including sensitive query parameters.

This is undesirable since query parameters in full URLs often contain sensitive information such as OAuth tokens (used for authentication), session identifiers, and other private data.

Imagine you visit a website related to sensitive or financial information, and the URL includes a secret code in the address bar that proves it’s really you. Normally, when your browser loads images or other content from different websites, it keeps that secret code private. But because of this Chrome Loader flaw, a successful attacker can trick your browser into sending that secret code to a malicious website just by embedding an image or other resource there.

The attacker could, for example, embed a hidden image hosted at their own server, and harvest the full URLs. This means they can steal your private information without you realizing it, potentially letting them take over your account or other online services.
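The difference between Chrome’s normal behavior and the attacker-forced policy can be sketched with a small model. This is a simplified illustration of two referrer policies for cross-origin requests, not Chrome’s actual Loader code; the function name and URLs are our own:

```python
from urllib.parse import urlsplit

def referer_sent(policy: str, page_url: str) -> str:
    """Simplified model of the Referer value a browser sends with a
    cross-origin subresource request (an embedded image, say)."""
    if policy == "no-referrer":
        return ""
    if policy == "unsafe-url":
        # What an attacker-controlled referrer-policy could force:
        # the full URL, query parameters and all, is sent to the
        # server hosting the embedded resource.
        return page_url
    # Chrome's default (strict-origin-when-cross-origin) for
    # cross-origin requests: the origin only, secrets stripped.
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}/"

# A page URL carrying a secret in its query string:
page = "https://bank.example/callback?session=SECRET123"
```

Under the default policy, the server behind the embedded image sees only `https://bank.example/`; under the forced `unsafe-url` policy, it receives the session token as well.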


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (May 12 – May 18)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Data broker protection rule quietly withdrawn by CFPB

The Consumer Financial Protection Bureau (CFPB) has decided to withdraw a 2024 rule to limit the sale of Americans’ personal information by data brokers.

In a Federal Register notice published yesterday, the CFPB said it “has determined that legislative rulemaking is not necessary or appropriate at this time to address the subject matter”.

The data brokerage industry generates an estimated $300 billion in annual revenue. Data brokers actively collect and sell your Personally Identifiable Information (PII), including financial details, personal behavior, and interests, for profit. They often do this without seeking your consent or without making it clear that you have given consent.

The CFPB proposed the rule in December 2024 to curb data brokers from selling Americans’ sensitive personal and financial information. By restricting the sale of personal identifiers such as Social Security Numbers (SSNs) and phone numbers, the rule aimed to ensure that companies share financial data, like income, only for legitimate purposes, such as facilitating a mortgage approval, rather than selling it on to scammers who target people in financial distress.

The proposal sought to make data brokers comply with federal law and address serious threats posed by current industry practices. It targeted not only national security, surveillance, and criminal exploitation risks, but also aimed to limit doxxing and protect the personal safety of law enforcement personnel and domestic violence survivors.

The CFPB intended to treat data brokers like credit bureaus and background check companies, requiring them to comply with the Fair Credit Reporting Act (FCRA) regardless of how they use financial information. The proposal would also have required data brokers to obtain much more explicit and separately authorized consumer consent.

Set up this way, the rule wouldn’t have interfered with the existing pathways created for and by the FCRA, while offering more consumer protection.

However, acting CFPB Director Russell Vought said the agency had determined the rule was not appropriate for now, pointing to “updates to Bureau policies.”

Watchdog groups have a different view on the matter though. Matt Schwartz, a policy analyst at Consumer Reports, stated it would leave consumers vulnerable:

“Data brokers collect a treasure trove of sensitive information about virtually every American and sell that information widely, including to scammers looking to rip off consumers.”

If data brokers were required to comply with the FCRA:

  • They would have to ensure the accuracy and privacy of the data they collect and share.
  • Consumers would have to be provided with mechanisms to dispute and correct inaccurate information.
  • Consumers would be notified when their data was used for decisions about credit, insurance, or employment.
  • Data brokers could face enforcement actions and penalties for non-compliance, as the Federal Trade Commission (FTC) and CFPB have imposed in the past.


Meta sent cease and desist letter over AI training

EU privacy advocacy group NOYB has clapped back at Meta over its plans to start training its AI model on European users’ data. In a cease and desist letter to the social networking giant’s Irish operation signed by founder Max Schrems, the non-profit demanded that it justify its actions or risk legal action.

In April, Meta told users that it was going to start training its generative AI models on their data.

Schrems uses several arguments against Meta in the NOYB complaint:

1. Meta’s ‘legitimate interests’ are illegitimate

NOYB continues to question Meta’s use of opt-out mechanisms rather than excluding all EU users from the process and requiring them to opt in to the scheme. “Meta may face massive legal risks – just because it relies on an “opt-out” instead of an “opt-in” system for AI training,” NOYB said on its site.

Companies that want to process personal data without explicit consent must demonstrate a legitimate interest to do so under GDPR. Meta hasn’t published information about how it justifies those interests, says Schrems. He has trouble seeing how training a general-purpose AI model could be deemed a legitimate interest, because it violates a key GDPR principle: limiting data processing to specific goals.

NOYB doesn’t believe that Meta can enforce GDPR rights for personal data like the right to be forgotten once an AI system is trained on it, especially if that system is an open-source one like Meta’s Llama AI model.

“How should it have a ‘legitimate interest’ to suck up all data for AI training?” Schrems said. “While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users.”

2. What you don’t know can hurt you

Schrems warns that people who don’t have a Facebook account but just happen to be mentioned or caught in a picture on a user’s account will be at risk under Meta’s AI training plans. They might not even be aware that their information has been used to train AI, and therefore couldn’t object, he argues.

3. Differentiation is difficult

NOYB also worries that the social media giant couldn’t realistically separate people whose data is linked on the system. For example, what happens if two users are in the same picture, but one has opted out of AI training and one hasn’t? Or they’re in different geographies and one is protected under GDPR and one isn’t?

Trying to separate data gets even stickier when trying to separate ‘special category’ data, which GDPR treats as especially sensitive. This includes things like religious beliefs or sexual orientation.

“Based on previous legal submissions by Meta, we therefore have serious doubts that Meta can indeed technically implement a clean and proper differentiation between users that performed an opt-out and users that did not,” Schrems says.

Other arguments

People who have been entering their data into Facebook for the last two decades could not have been expected to know that Facebook would use their data to train AI now, the letter said. That data is private because Facebook tries hard to protect it from web scrapers and limits who can see it.

In any case, other EU laws would make the proposed AI training illegal, NOYB warns. It points to the Digital Markets Act, which stops companies from cross-referencing personal data between services without consent.

Meta, which says that it won’t train its AI on private user messages, had originally delayed the process altogether after pushback from the Irish Data Protection Commission. Last month the company said that it had “engaged constructively” with the regulator. There has been no further news from the Irish DPC on the issue aside from a statement thanking the European Data Protection Board for an opinion on the matter handed down in December. That opinion left the specifics of AI training policy up to national regulators.

“We also understand that the actions planned by Meta were neither approved by the Irish DPC nor other Concerned Supervisory Authorities (CSAs) in the EU/EEA. We therefore have to assume that Meta is openly disregarding previous guidance by the relevant Supervisory Authorities (SAs),” Schrems’ letter said.

NOYB has asked Meta to justify itself or sign a cease and desist order by May 21. Otherwise, it threatens legal action by May 27, which is the date that Meta is due to start its training. If it brings an action under the EU’s new Collective Redress Scheme, it could obtain an injunction from different jurisdictions outside Ireland to shut down the training process and delete the data. A class action suit might also be possible, Schrems added.

In a statement to Reuters, Meta called NOYB “wrong on the facts and the law”, saying that it gives users adequate opt-out options.



Google to pay $1.38 billion over privacy violations

The state of Texas reached a mammoth financial agreement with Google last week, securing $1.375 billion in payments to settle two three-year-old lawsuits.

The Office of Texas Attorney General Ken Paxton originally filed the first lawsuit against Google in January 2022, complaining that the tech giant collected users’ geolocation data. It alleged that Google had continued to track users’ locations even after they thought they had disabled the feature, and then used the data to serve them advertisements.

Then in May 2022, the state updated that lawsuit to include another allegation—that the company wasn’t being fully up front about the data it collected from users in private browsing mode, also known as Incognito Mode.

Google warned users on its Incognito Mode splash page that their ISPs, employers or schools, and the websites they visited in this mode might still collect data about their activity. The suit called this “insufficient to alert Texans to the amount, kind, and richness of data-collection that persists during Incognito mode.” By promising private browsing, Google created an expectation of privacy that it didn’t fulfil, it alleged.

In October that year, Texas launched another suit that accused Google of collecting biometric information including voice prints and records of facial geometry, using services like Google Photos and the Google Assistant product along with its Nest Hub Max product.

A collection of 40 states had pursued Google over the location tracking issue and had secured a $391.5m payout in 2022, but others had gone it alone. Arizona settled for $85m. Paxton played his own game of Texas Hold’em and won. In a press release on Friday he proudly compared his settlement to the multi-state suit, pointing out that it was “almost a billion dollars less than Texas’s recovery”.

This isn’t the first time that Paxton has trounced Google in a legal fight. In 2023 Texas was part of an all-state $700m settlement with the company over anti-competitive practices in its Play store. It also settled for $8m that year over a deceptive advertising claim. Google paid DJs to promote a new Pixel phone model even though it hadn’t been released yet and they had never used it, the state said.

Last August, it also won a four-year legal battle with Google over monopolistic search practices.

In financial terms, this is Paxton’s second-largest victory by a whisker against Big Tech companies. Texas settled for a $1.4bn payout from Facebook and Instagram owner Meta last July, after suing it for capturing biometric data on Texans. The suit specifically targeted the company’s use of facial recognition to power its Tag Suggestions feature, which enabled users to easily identify and tag people in photos.

The allegations of biometric data misuse against both Meta and Google were brought under the Texas Capture or Use of Biometric Data Act, which the state introduced in 2009. The geolocation and private browsing accusations were brought under another Texas law, the Deceptive Trade Practices Act. At a federal level, there is still no cohesive consumer data privacy law, despite several efforts to introduce one on the Hill.

Emboldened by earlier victories, Paxton set up a legal swat team to pursue Big Tech last June. The team, which works within his office’s consumer division, will go after companies that play fast and loose with consumer data, Paxton said. He warned:

“As many companies seek more and more ways to exploit data they collect about consumers, I am doubling down to protect privacy rights.”



Android users bombarded with unskippable ads

Researchers have discovered a very versatile ad fraud network—known as Kaleidoscope—that bombards users with unskippable ads.

Normally, ad fraud is not a concern for users of infected devices. They might experience some sluggish behavior on their device, but often that’s the extent of it. Ad fraud is a type of scam aimed at companies, causing them to pay for advertisements that nobody actually sees or clicks on. Instead of real people viewing or clicking on ads, fraudsters use automated programs (bots) or other tricks to generate fake views, clicks, or interactions.

As a result, the advertising company pays for ads without receiving any real value in return. Users of infected devices usually don’t notice anything, since the malicious activity takes place in the background. This also helps the malware avoid detection.

However, the newly discovered ad fraud operation, dubbed Kaleidoscope, is different. Kaleidoscope targets Android users through seemingly legitimate apps in the Google Play Store, as well as malicious lookalikes distributed through third-party app stores.

Both versions of the app share the same app ID. Researchers found over 130 apps associated with Kaleidoscope, resulting in approximately 2.5 million fraudulent installs per month.

Advertisers believe they are paying for ads shown in the “legitimate” app, while users who download versions from third-party app stores are bombarded with the same ads—but they can’t skip them. Because both apps use the same app ID, advertisers never know the difference.
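Why the shared app ID blinds advertisers can be modeled as a toy attribution ledger: the ad network credits impressions to an app ID, which is all it sees, so installs from the official store and from the malicious lookalike are indistinguishable. All names here are hypothetical; real ad-attribution systems are far more involved:

```python
from collections import Counter

def attribute_impressions(events):
    """Toy ad-attribution ledger: each event is (app_id, install_source),
    but impressions are credited by app ID alone, because the install
    source never reaches the ad network."""
    return Counter(app_id for app_id, _source in events)

# One legitimate install and two lookalike installs from a third-party
# store, all reporting the same app ID:
events = [
    ("com.example.legitapp", "google-play"),
    ("com.example.legitapp", "third-party-store"),
    ("com.example.legitapp", "third-party-store"),
]
ledger = attribute_impressions(events)
```

From the advertiser’s side of the ledger, all three impressions look like traffic from the “legitimate” app, which is exactly the confusion Kaleidoscope exploits.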

Kaleidoscope is very similar to, and appears to be built on, the CaramelAds ad fraud network, which also used duplicate apps and shares similarities in code and underlying infrastructure.

The researchers explain:

“The malicious app delivers intrusive out-of-context ads under the guise of the benign app ID in the form of full-screen interstitial images and videos, triggered even without user interaction.”

How to protect your device

Google Play Protect automatically protects users against apps that engage in malicious behavior. As a result, the researchers didn’t find any malicious Kaleidoscope versions on the Google Play Store.

To keep your devices free from ad fraud-related malware:

  • Get your apps from the Google Play store whenever you can.
  • Be careful about the permissions you grant a new app. Does it really need those permissions for what it’s supposed to do? In this case, the “Display over other apps” permission should raise a red flag.
  • Dubious ad sites often request permission to display notifications. Allowing this will increase the number of ads as they push them to the device’s notification bar.
  • Use up-to-date and active security software on your Android.

Malwarebytes detects malware from the Kaleidoscope family as Adware.AdLoader.EXTNXN.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

A week in security (May 4 – May 10)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!

