IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Tracking down a trojan: An inside look at threat hunting in a corporate network

At Malwarebytes, we talk a lot about the importance of threat hunting for SMBs—and for good reason. Consider the fact that when a threat actor breaches a network, they don’t attack right away. The median amount of time between system compromise and detection is 21 days.

By that time, it’s often too late. Data has been harvested or ransomware has been deployed.

Threat hunting helps find and remediate highly obfuscated threats like these that quietly lurk in the network, siphoning off confidential data and searching for credentials to access the “keys to the kingdom.”

The bad news for small-to-medium-sized businesses (SMBs): Manually intensive and costly threat-hunting tools usually restrict this practice to larger organizations with an advanced cybersecurity model and a well-staffed security operations center (SOC).

That’s where Malwarebytes Managed Detection and Response (MDR) comes in.

Malwarebytes MDR is a service that provides around-the-clock monitoring of an organization’s environment for signs of a cyberattack.

But talk is cheap: let’s look at a real case in which Malwarebytes MDR helped a company detect and respond to a potent banking Trojan known as QBot.

The Incident

On a date left undisclosed for security reasons, a reputable oil and gas company we’ll refer to as Company 1 experienced an intrusion in their network. The culprit was Qakbot (also known as QBot).

QBot is notorious for its ability to steal sensitive information, like login credentials, financial data, and personal information, and even create backdoors for additional malware to infiltrate the compromised system. It also facilitates remote access to compromised machines.

QBot has recently been observed being distributed as part of a phishing campaign using PDFs and Windows Script Files (WSF).


The QBot campaign illustrated (Source: Jerome Segura | Malwarebytes Labs)

QBot attacks start with a reply-chain phishing email, in which threat actors reply to an existing chain of emails with a malicious link or attachment.


A sample reply-chain phishing email in French, carrying a PDF attachment disguised as a cancellation letter. (Source: BleepingComputer)

Once someone in the email chain opens the attached PDF, they see a message saying, “This document contains protected files, to display them, click on the ‘open’ button.” Clicking the button downloads a ZIP file containing the WSF script.


The heavily obfuscated script contains a mix of JS and VBScript code that, when run, launches a PowerShell command that downloads the QBot DLL from a list of hardcoded URLs. The script tries each URL in turn until a file is successfully downloaded to the Windows Temp folder (%TEMP%) and executed.

Once QBot runs, it issues a PING command to check for an internet connection. It then injects itself into wermgr.exe, the legitimate Windows Error Reporting manager, to run quietly in the background.

The Infection

The initial infection at Company 1 was traced to a laptop in their network. The Qakbot malware used a Windows Script File (WSF), executed by WSCRIPT.EXE, to launch a PowerShell script encoded in Base64.


The Process Graph tile under the Suspicious Activity page in Nebula shows a visual representation of the files or processes touched by the suspicious activity.


Clicking on the node to view more details, we see that WSCRIPT.EXE was used to execute a Windows Script File, which spawned an instance of PowerShell executing a Base64-encoded command.


Node detail showing malicious encoded PowerShell script.
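For analysts who want to inspect such a command themselves: Base64-encoded PowerShell commands of this kind are typically passed via the -EncodedCommand parameter, which expects Base64 over UTF-16LE text, so they can be decoded in a couple of lines. Here is a minimal sketch in Python (the encoded string below is a harmless stand-in, not the actual command observed in this incident):

    import base64

    # Harmless stand-in for the long Base64 blob seen in the process details above.
    encoded = base64.b64encode("Start-Sleep -Seconds 4".encode("utf-16-le")).decode()

    # PowerShell's -EncodedCommand expects Base64 over UTF-16LE text,
    # so decode the Base64 and then re-interpret the bytes as UTF-16LE.
    decoded = base64.b64decode(encoded).decode("utf-16-le")
    print(decoded)  # Start-Sleep -Seconds 4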

This script was designed to be patient and stealthy.

It first initiated a waiting period of 4 seconds before creating an array of URLs, presumably leading to malicious websites. The malware then attempted to download a file from each URL, checking each file for a minimum size of 100,000 bytes as a crude guarantee that meaningful content had been retrieved. If a download failed, the script would wait 4 seconds before moving on to the next URL.

The downloaded files were executed using the RUNDLL32.EXE Windows utility, which was invoked from the PowerShell instance. This allowed the downloaded file, dubbed “FreeformOzarkite.marseillais,” to load and execute its malicious payload.


RUNDLL32.EXE was invoked from the previous instance of PowerShell to execute a malicious payload or module that is stored in the file “FreeformOzarkite.marseillais” in the temporary folder of the infected user. 

The Malicious DLL

A specific DLL file, identified as zibkwyxdtpcrqshpuqkoomcoba.dll, was found to be one of the malicious components executed by the Qakbot infection.


Node detail showing the malicious DLL (zibkwyxdtpcrqshpuqkoomcoba.dll) being executed.

Analysis of this DLL revealed several nefarious functions, including:

  • Code injection into other processes.
  • Harvesting of sensitive data, like Chrome and Outlook passwords, Wi-Fi passwords, and Bitcoin wallets.
  • Capturing screenshots.
  • Modifying system settings, like disabling the User Account Control (UAC), to make the system more vulnerable to further attacks.
  • Communication with a remote command and control (C&C) server for data exfiltration and remote command execution.

The team also saw system enumeration utilizing WHOAMI.EXE and IPCONFIG.EXE:

  • whoami /all
  • ipconfig /all
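Parent-child relationships like these are exactly what threat hunters key on. The sketch below is purely illustrative (the event dictionaries and field names are our own stand-ins, not Nebula's actual telemetry schema); it flags two of the behaviors described above: RUNDLL32.EXE loading a module without a .dll extension, and discovery commands spawned from an unusual parent such as wermgr.exe, which QBot injects into.

    # Illustrative hunting heuristics; event fields are hypothetical stand-ins
    # for whatever telemetry your EDR actually exposes.
    DISCOVERY_COMMANDS = {"whoami.exe", "ipconfig.exe"}

    def is_suspicious(event: dict) -> bool:
        image = event["image"].lower()
        parent = event["parent_image"].lower()
        cmdline = event["command_line"].lower()

        # rundll32.exe loading a module that does not look like a normal DLL
        # (e.g. "FreeformOzarkite.marseillais" dropped in %TEMP%).
        if image.endswith("rundll32.exe") and ".dll" not in cmdline:
            return True

        # Discovery commands launched by wermgr.exe, the process QBot injects into.
        if image.split("\\")[-1] in DISCOVERY_COMMANDS and parent.endswith("wermgr.exe"):
            return True

        return False

    events = [
        {"image": r"C:\Windows\System32\rundll32.exe",
         "parent_image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
         "command_line": r"rundll32.exe C:\Users\user\AppData\Local\Temp\FreeformOzarkite.marseillais"},
        {"image": r"C:\Windows\System32\whoami.exe",
         "parent_image": r"C:\Windows\System32\wermgr.exe",
         "command_line": "whoami /all"},
    ]

    for e in events:
        if is_suspicious(e):
            print("ALERT:", e["command_line"])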

Data Exfiltration and Remediation

The malware attempted to send the collected data to a known Qakbot C2 IP address. This is presumably where the stolen data would be accumulated and analyzed by the malicious actors.

However, the Malwarebytes MDR team promptly detected and contained this threat, taking steps such as cleaning the system of the infection, informing Company 1 of the incident, and providing actionable recommendations to prevent future compromises.

Threat hunting with MDR


How Malwarebytes MDR works

Threat hunting is essential for small-and-medium-sized businesses, as attackers can remain undetected for weeks (a median of 21 days) after compromising a network.

Unfortunately, threat hunting is complicated and requires a dedicated SOC and seasoned cybersecurity staff, barring most SMBs from utilizing this important security practice. 

In this article, we’ve outlined the significant role that Malwarebytes MDR can play in uncovering, managing, and remediating threats like Qakbot, helping you avoid business disruption and financial loss.

Want to learn more about Malwarebytes MDR and threat hunting? Click the link below for a quote. 

Stop Qbot attacks today

Employee guilty of joining ransomware attack on his own company

A 28-year-old IT Security Analyst pleaded guilty and will consequently be convicted of blackmail and unauthorized access to a computer with intent to commit other offences.

It all started when the UK gene and cell therapy company Oxford BioMedica fell victim to a cybersecurity incident which involved unauthorized access to part of the company’s computer systems on 27 February 2018. The intruder notified senior staff members at the company and demanded a ransom. As an IT Security Analyst at the company, Ashley Liles was tasked with investigating the incident.

He worked alongside colleagues and the police in an attempt to mitigate the incident. But at some point he must have decided to use the circumstances to enrich himself. According to the South East Regional Organised Crime Unit (SEROCU), Liles commenced a separate and secondary attack against the company.

As part of his plan he changed the Bitcoin payment address of the attacker to his own in emails to the board members. He also set up an email address very similar to that of the attacker, and from that address he began emailing his employer to pressure the company into paying the ransom.

Unfortunately for Liles, a payment was never made, and the unauthorized access to the private emails was noticed during the investigation. Thanks to some poor choices when it came to his own security, the police were able to arrest Liles and search his home.

The unauthorized access to the emails could be traced back to his home address, which gave the police sufficient grounds to seize a computer, laptop, phone, and a USB stick. Despite his attempts to wipe the data from his devices, the police were able to recover enough data to serve as evidence of his crimes and establish his direct involvement.

Liles denied any involvement for five years. But on May 17, 2023 during a hearing at Reading Crown Court, he changed his plea to guilty. The case has now been adjourned for sentencing at the same court on July 11, 2023.

While this definitely qualifies as an insider threat, this one seems to have been opportunistic rather than premeditated. The term is often associated with disgruntled employees, but insiders can also be coerced, or jump on an opportunity that presents itself, as Liles did. The case emphasizes the need for effective access control policies, even when an emergency presents itself. You do not want to widen the scope of an incident by relaxing your access policies during an investigation.

Access to resources should always be limited to what is needed to get the job done. And incidental access should be revoked when the need is no longer there. We’re not saying that every employee should be treated as a suspect or potential insider threat. That will result in an unworkable situation. But you should have measures in place to limit the damage and find any culprit.

How to avoid ransomware

  • Block common forms of entry. Create a plan for patching vulnerabilities in internet-facing systems quickly; and disable or harden remote access like RDP and VPNs.
  • Prevent intrusions. Stop threats early before they can even infiltrate or infect your endpoints. Use endpoint security software that can prevent exploits and malware used to deliver ransomware.
  • Detect intrusions. Make it harder for intruders to operate inside your organization by segmenting networks and assigning access rights prudently. Use EDR or MDR to detect unusual activity before an attack occurs.
  • Stop malicious encryption. Deploy Endpoint Detection and Response software like Malwarebytes EDR that uses multiple different detection techniques to identify ransomware, and ransomware rollback to restore damaged system files.
  • Create offsite, offline backups. Keep backups offsite and offline, beyond the reach of attackers. Test them regularly to make sure you can restore essential business functions swiftly.
  • Don’t get attacked twice. Once you’ve isolated the outbreak and stopped the first attack, you must remove every trace of the attackers, their malware, their tools, and their methods of entry, to avoid being attacked again.

Malwarebytes EDR and MDR remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

AI generated Pentagon explosion photograph goes viral on Twitter

Twitter’s recent changes to checkmark verification continue to cause chaos, this time in the realm of potentially dangerous misinformation. A checkmarked account claimed to show images of explosions close to important landmarks like the Pentagon. These images quickly went viral despite being AI generated and containing multiple errors that were obvious to anyone looking closely at the supposed photographs.

How did this happen?

Until recently, the social media routine when an important news story broke was as follows:

  • Something happens, and it’s reported on by verified accounts on Twitter
  • This news filters out to non-verified accounts

“Verified” accounts are now paid for by anybody who wants to sign up to the $8-a-month Twitter Blue service. There’s no real guarantee that a checkmarked video game company, celebrity, or news source is in fact who they claim to be. There have been many instances of this new policy injecting mayhem into social media already; a fake Nintendo account dispensing offensive images and the infamous “Insulin is free” tweet causing a stock dive spring to mind.

People have taken the “anything goes in checkmark land” approach and are running with it.

What’s happening now is:

  • Fake stories are promoted by checkmarked accounts
  • Those stories filter out to non-checkmarked accounts
  • People in search of facts try to find non-checkmarked (but real) journalists and news agencies while ignoring the checkmarked accounts.

This is made more difficult by changes to how Twitter displays replies, as paid accounts “float” to the top of any conversation. As a result, a situation where a checkmarked account goes viral through a combination of real people, genuine “verified” accounts, and those looking to spread misinformation can potentially result in disaster.

In this case, several checkmarked accounts made claims of explosions near the Pentagon and then the White House. 

Bellingcat investigators quickly exposed the imagery for what it was: poorly done, with errors galore.

The images looked odd, with no people, mashed-up railings, and walls that melt into one another, but it made no difference. The visibility of the bogus tweets rocketed, and soon there was the possibility of a needless terror-attack panic taking place.

Many US Government, law enforcement, and first responder accounts no longer have a checkmark as they declined to pay for Twitter Blue. Thankfully some have the new grey Government badge, and Arlington County Fire Department was able to confirm that there was no explosion.

What’s interesting about this one is that it highlights how you can post terrible, amateur imagery with no attempt to polish it and enough people will still believe it to make it go viral. In this case, it went viral to the extent that the Pentagon Force Protection Agency had to help debunk it. As Bleeping Computer notes, the PFPA isn’t even verified anymore.

There is no easy answer or collection of tips for avoiding this kind of thing on social media. At least, not on Twitter in its current setup. A once valuable source for breaking, potentially critical warnings about dangerous weather or major incidents simply cannot be trusted as it used to be.

The very best you can do is follow the Government or emergency response accounts which sport the grey badge. There are also gold checkmarks for “verified organisations”, but even there problems remain. A fake Disney Junior account was recently granted a gold check mark out of the blue and chaos ensued.

No, South Park is not coming to Disney Junior.

As for the aim of the accounts pushing misinformation, it’s hard to say. Many paid accounts simply want to troll. Others could be part of dedicated dis/misinformation farms, run by individuals or collectives. It’s also common to see accounts go viral with content, and then switch to something else entirely once enough reach has been gained. It might be about a different topic, or it could be something harmful.

Even outside the realm of paid accounts, misinformation and fakes can flourish. Just recently, Twitter experienced a return of fake NHS nurses, after having experienced a similar wave back in 2020.

Should any of the fake nurse accounts decide to pay $8 a month, they’ll have the same posting power as the profiles pushing fake explosions. Spam is becoming a big problem in both public posts and private messages.

AI is already capable of producing realistic looking images, yet the spammers and scammers are using any old picture without care for how convincing it looks. The combination of “breaking news” messaging and an official looking checkmark easily tips it over the edge, and those liable to fall for it simply don’t examine imagery in detail in the first place. Twitter is going to have to invest some serious time into clamping down on spam and bots which naturally help feed the disinformation waves. The big question is: Can the embattled social media giant do it?



Google to pay $40m for “deceptive and unfair” location tracking practices

Google is going to pay $39.9 million to Washington State to put to rest a lawsuit about its location tracking practices which has been in play since last year. Google was accused of “misleading consumers” by State Attorney General Bob Ferguson. From the AG press release:

Attorney General Bob Ferguson today announced Google will pay $39.9 million to Washington state as a result of his office’s lawsuit over misleading location tracking practices. Google will also implement a slate of court-ordered reforms to increase transparency about its location tracking settings.

Ferguson’s lawsuit against Google asserted that the tech giant deceptively led consumers to believe that they have control over how Google collects and uses their location data. In reality, consumers could not effectively prevent Google from collecting, storing and profiting from their location data.

The lawsuit itself, announced back in January 2022, claimed Google used a “number of deceptive and unfair practices” to obtain user consent for tracking. Practices highlighted included “hard to find” location settings, misleading descriptions of location settings, and “repeated nudging” to enable location settings alongside incomplete disclosures of Google’s location data collection.

These practices were set alongside the large amount of profit Google generated from using consumer data to sell advertising. Google made close to $150 billion from advertising in 2020, and the case pointed out that location data is a key component of said advertising. As per the Attorney General:

(Google) has a financial incentive to dissuade users from withholding access to that data.

The location-based argument focuses on the discrepancy between what Google claimed to store when location data was turned off and what it obtained in practice:

When users enable a setting called “Location History,” Google saves data on users’ location to, as it says in its account settings, “give you personalised maps, recommendations based on places you’ve visited, and more.”

Google told users that when Location History was disabled, the company did not continue to store the user’s location. For years, Google’s help page stated, “With Location History off, the places you go are no longer stored.” That statement was false. For example, the company collects location data under a separate setting — “Web & App Activity” — that is defaulted “on” for all Google Accounts.

The consent decree filed on Wednesday means Google will need to be more transparent with regard to tracking. The search engine giant will also need to provide more detailed information in cases where location technologies are involved.

AG Ferguson had this to say:

Google denied Washington consumers the ability to choose whether the company could track their sensitive location data, deceived them about their privacy options and profited from that conduct. Today’s resolution holds one of the most powerful corporations accountable for its unethical and unlawful tactics.

Google has been on the receiving end of legal action led by Ferguson for some time now. Just last month, he partnered with the US Department of Justice and a bipartisan group of attorneys general for an antitrust lawsuit aiming to break up Google’s monopolisation of display advertising. There have also been other antitrust lawsuits in this space, and in 2021 Google paid $423,659.76 in relation to violating the state’s campaign finance disclosure law.

We still don’t know how these proposed changes will take shape in terms of what consumers will see. “…with no federal law governing online privacy in the United States, state regulators are forced to make do with what they have” according to Android Central. With Ferguson showing no signs of letting up, Washington State is taking that philosophy to the max.



Update now! Apple issues patches for three actively used zero-days

Apple has rolled out security updates for Safari 16.5, watchOS 9.5, tvOS 16.5, iOS 16.5, iPadOS 16.5, iOS 15.7.6, iPadOS 15.7.6, macOS Big Sur 11.7.7, macOS Ventura 13.4, and macOS Monterey 12.6.6.

Among the security updates were patches for three actively exploited zero-day vulnerabilities, all of which are directly related to the WebKit browser engine.

WebKit is the engine that powers the Safari web browser on Macs as well as all browsers on iOS and iPadOS (all web browsers on iOS and iPadOS are obliged to use it). It is also the web browser engine used by Mail, App Store, and many other apps on macOS, iOS, and Linux.

Devices impacted by the identified exploits include:

  • All iPad Pro models
  • iPad Air (3rd generation and later)
  • iPad (5th generation and later)
  • iPad Mini (5th generation and later)
  • iPhone 6s and later models
  • Mac workstations and laptops running macOS Big Sur, Monterey, and Ventura
  • Apple Watch (series 4 and later)
  • Apple TV 4K and HD

The updates may already have reached you in your regular update routines, but it doesn’t hurt to check if your device is at the latest update level. If a Safari update is available for your device, you can get it by updating or upgrading macOS, iOS, or iPadOS.

The Common Vulnerabilities and Exposures (CVE) database lists publicly disclosed computer security flaws. The CVE containing the information about the new zero-day is:

  • CVE-2023-32409: An issue where a remote attacker may be able to break out of the Web Content sandbox was addressed with improved bounds checks.

The notes about the security updates also revealed some information about Apple’s Rapid Security Response (RSR) update we reported about earlier this month.

RSR is a new type of software patch delivered between Apple’s regular, scheduled software updates. Previously, Apple security fixes came bundled along with features and improvements, but RSRs only carry security fixes. They’re meant to make the deployment of security improvements faster and more frequent.

We now know that the CVEs patched in that RSR update are listed as:

  • CVE-2023-28204: An out-of-bounds read issue in WebKit was addressed with improved input validation. Processing web content may disclose sensitive information.
  • CVE-2023-32373: A use-after-free issue in WebKit which was addressed with improved memory management. Processing maliciously crafted web content may lead to arbitrary code execution.

An out-of-bounds write or read flaw makes it possible to manipulate parts of the memory which are allocated to more critical functions. This could allow an attacker to write code to a part of the memory where it will be executed with permissions that the program and user should not have.

Use after free (UAF) is a vulnerability due to incorrect use of dynamic memory during a program’s operation. If after freeing a memory location a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using Malwarebytes Vulnerability and Patch Management.

Malvertising via brand impersonation is back again

Web search is about to embark on a new journey thanks to artificial intelligence technology that online giants such as Microsoft and Google are experimenting with. Yet, there is a problem when it comes to malicious ads displayed by search engines that AI likely won’t be able to fix.

In recent months, numerous incidents have shown that malvertising is on the rise again and affecting the user experience and trust in their favorite search engine. Indeed, Search Engine Results Pages (SERPs) include paid Google ads that in some cases lead to scams or malware.

One particularly devious kind of malvertising is brand impersonation where criminals are buying ads and going as far as displaying the official brand’s website within the ad snippet. We previously reported several incidents to Google and it appeared that those ads using official URLs were no longer getting through. However, just recently we noticed a surge in new campaigns again.

Brand abuse: Scammers exploit users’ trust

It only takes a few seconds between a search and a click on a result, and most of the time that click happens to be on whatever shows up first. This is why advertisers are buying ads on search engines, not only to drive traffic towards their brands but also to outpace potential competitors. Unfortunately, not all advertisers have good intentions and the worst of them will exploit anything they can to put out ads that are malicious.

We spent about a week pulling examples, focusing on Amazon-related searches since Amazon is a popular search term (although other popular brands are affected as well). The ads we found were not only claiming to be Amazon’s official website, they also displayed the amazon.com URL in the ad.

Figure 1a: Malicious advert impersonating Amazon
Figure 1b: Related network traffic

Figure 2a: Malicious advert impersonating Amazon
Figure 2b: Related network traffic

Figure 3a: Malicious advert impersonating Amazon
Figure 3b: Related network traffic

Figure 4a: Malicious advert impersonating Amazon
Figure 4b: Related network traffic

Figure 5a: Malicious advert impersonating Amazon
Figure 5b: Related network traffic

Below is an animation showing what happens when a victim clicks on one of those ads:

Figure 6: Animation showing a click on a malicious advert leading to a tech support scam page

While most of the brand impersonations we have seen recently are pushing tech support scams, this is not the only threat facing consumers. For example, we saw an ad that pretended to be Amazon’s login page but instead redirected users to a phishing site, which first stole their password before collecting their credit card number.

Figure 7: A malicious ad leading to a phishing site

How are these criminals evading detection?

Ad URL

Part of the problem here is that advertisers can be legitimate affiliates and associated with the Amazon brand. Here’s an example of a seller that is advertising on Google and has their own page as an affiliate on Amazon:

Figure 8: An advertiser leveraging the Amazon brand correctly

The problem comes when an advertiser that displays a brand’s official URL within the ad snippet (i.e. https://www.amazon.com) is allowed to submit an ad URL that has nothing to do with that brand. We have seen many examples that include URL shorteners, cloaking services or domains freshly registered for the sole purpose of malicious activity.


Figure 9: Incidents related to Amazon searches tracked in malvertising spreadsheet

The screenshot above is part of a document we have shared with Google where we and other researchers track new malvertising campaigns ranging from scams to malware distribution.

Anti-bot traffic funneling and cloaking

Threat actors often rely on traffic filtering services to push malicious content exclusively to intended victims. Practically all the malicious ads we showed earlier used a kind of traffic distribution and filtering system. This market is a bit of a gray area with some companies advertising as anti-bot or anti-fraud providers while others are shamelessly advertising in places frequented by online criminals.

The goal is not only to game Google and other ad networks, but also to ensure that only qualified traffic is allowed to come through. For most malvertising delivered via paid search ads, the practice comes down to something called cloaking.

With cloaking, two types of URLs are used: the legitimate URL (or decoy) and the money URL (the malicious one). In the picture below we see such parameters as well as the threat actor’s money page, which contains folders for its Amazon (amz) and YouTube (ytb) malvertising campaigns; YouTube is another keyword abused by malvertisers:


Figure 10: Cloaking parameters showing the money page

In this specific case we discovered a number of domains registered by the scammer, serving more or less the same purpose. One important thing to remember is that these domains are not immediately visible to Google. For example, the traffic filtering service will detect whether a click comes from a real user or from a machine, and it can then forward a suspected-bot click to Amazon’s real website, thereby maintaining its cover.


Figure 11: Infrastructure used to redirect Google ads to tech support scams

For real traffic, these domains act as intermediaries to the payload pages, which tend to be highly disposable and ever-changing. The reason is simple: the payload pages are clearly malicious and will get reported and taken down. However, it is rare for the malvertising infrastructure itself to be disrupted, because it sits further upstream and is rarely documented properly. This allows threat actors to continue their malicious ad campaigns and simply swap out payload pages.
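One practical way researchers probe for this kind of cloaking is to request the same ad destination with two different client fingerprints and compare where each request ends up. The sketch below is a simplified illustration of that idea, not a tool used in this investigation; the URL is a placeholder and the header-based fingerprints are assumptions (real cloakers also look at IP reputation, referrers, and more):

    import requests  # third-party library: pip install requests

    AD_URL = "https://ad-redirect.example/click"  # placeholder, not a real indicator

    # One profile that looks like a scanner, one that looks like a real browser.
    PROFILES = {
        "scanner-like": {"User-Agent": "python-requests"},
        "browser-like": {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                          "(KHTML, like Gecko) Chrome/113.0 Safari/537.36",
            "Referer": "https://www.google.com/",
        },
    }

    final_urls = {}
    for name, headers in PROFILES.items():
        # Follow redirects and record where each profile finally lands.
        resp = requests.get(AD_URL, headers=headers, allow_redirects=True, timeout=10)
        final_urls[name] = resp.url

    print(final_urls)
    if final_urls["scanner-like"] != final_urls["browser-like"]:
        print("Different destinations for different fingerprints: possible cloaking.")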

Can Bard fix Google’s malvertising problem?

We asked Google’s AI chatbot Bard if it could fix the malvertising problem that seems to be plaguing its search engine. At first Bard said it was not able to solve this issue:


Figure 12: Bard answering a query about malvertising

However, on a second attempt Bard claimed it could after all help to fix the malvertising problem:


Figure 13: Bard answering the same question in a different way

Regardless, malvertising is a complex issue and given the billions of daily ad impressions, it’s easy for someone nefarious to abuse any given platform. But we don’t need AI to identify certain elements that allow threat actors to impersonate brands. Also, while educating users about malvertising is important, we can’t blame them for clicking on paid ads that are supposedly verified as trusted.

Needless to say, these incidents will encourage users to install ad blockers, to the chagrin of publishers whose revenues are heavily dependent on advertising. In the end, it comes down to the user experience and ensuring that it comes first, before anything else.

We then asked for some tips to protect against malvertising. We couldn’t help but notice that Bard suggested using an ad blocker, although a small disclaimer at the bottom clearly states that Bard may display information that does not represent Google’s views. Indeed, advertising accounts for almost 80% of Google’s revenues.


Figure 14: Bard offers some tips on how to protect from malvertising

Malvertising has been a problem for many years and it’s unlikely to change any time soon. It’s important for users to be aware that criminals can buy ads and successfully bypass security mechanisms all the while impersonating well-known brands. If you decide to type the URL in the address bar instead, remember to be careful not to make a typo. This is another area that is highly targeted by typosquatters and can also involve malvertising redirects.

All of the ads mentioned in this blog post have been reported to Google. We would like to thank the people working in the ad unit for their continued support.

Indicators of Compromise

Redirects:

tinyurl[.]com/amzs10
tinyurl[.]com/amz01111

Cloaking domains:

601rajilg[.]xyz
hesit[.]xyz
maydoo[.]xyz
pizz[.]site
ferdo[.]xyz
tableq[.]xyz
veast[.]site
amazonsell[.]pro
amaazoon[.]org
atzipfinder[.]com

Tech support scam domains:

ryderlawns[.]xyz
akochar[.]site
gerots[.]s3.eu-north-1[.]amazonaws[.]com
pay-pal-customer-helpline-app-tt6y3[.]ondigitalocean[.]app
micrwindow-app-38sqh[.]ondigitalocean[.]app
fekon[.]s3.ap-south-1[.]amazonaws[.]com

Malwarebytes Browser Guard provides additional protection to standard ad-blocking features by covering a larger area of the attack chain all the way to domains controlled by attackers. Thanks to its built-in heuristic engine it can also proactively block never-before-seen malicious websites.

We always recommend using a layered approach to security and for malvertising you will need web protection combined with anti-malware protection. Malwarebytes Premium for consumers and Endpoint Protection for businesses provide real-time protection against such threats.

TRY NOW

Webinar recap: EDR vs MDR for business success

Did you miss our recent webinar on EDR vs. MDR? Don’t worry, we’ve got you covered!

In this blog post, we’ll be recapping the highlights and key takeaways from the webinar hosted by Marcin Kleczynski, CEO and co-founder of Malwarebytes, and featuring guest speaker Joseph Blankenship, Vice President and research director at Forrester.

  • Introducing EDR and MDR: The webinar began with an overview of EDR and MDR. The speakers explained that EDR provides visibility into endpoint activity, while MDR offers 24/7 monitoring and management of security technologies and incident response services. They also pointed out that EDR solutions can be challenging for businesses without dedicated security teams and that building an in-house SOC can be expensive and difficult.
  • Limitations of Endpoint Protection and EDR: The speakers discussed the limitations of endpoint protection and EDR, specifically when it comes to advanced threats like ransomware or Advanced Persistent Threats (APTs) that use Living off the Land (LOTL) attacks and fileless malware. These threats can hide in memory and blend in with normal activity, making them difficult to detect without trained specialists who are proactively hunting for them.
  • How MDR Can Help: To address these challenges, the speakers spoke about outsourcing to an MDR provider. MDR providers work with clients to understand their security technology stack, make recommendations, and agree on response actions to take. Incident response and threat hunting are part of the MDR service, and the provider will have a plan in place to shut down threats, contain them, and eradicate them so businesses can get back to.. erm… business.
  • Which Is Right for Your Business? The choice between EDR and MDR comes down to the resources you have available and the level of security you require. If you have a dedicated security team and the resources to manage and maintain an EDR solution, EDR may be the right choice for you. However, if you lack dedicated security resources, MDR may be a better option as it provides continuous monitoring and incident response services.

Want to learn more about EDR and MDR and which is right for your business? Be sure to watch the full webinar recording on-demand and get valuable insights from industry experts on how to improve your security operations and protect against ransomware and fileless malware.

Watch now!

ChatGPT: Cybersecurity friend or foe?

If you haven’t heard about ChatGPT yet, perhaps you’ve just been thawed from cryogenic slumber or returned from six months off the grid. ChatGPT—the much-hyped, artificial intelligence (AI) chatbot that provides human-like responses from an enormous knowledge base—has been embraced practically everywhere, from private sector businesses to K–12 classrooms.

Upon its launch in November 2022, tech enthusiasts quickly jumped at the shiny new disruptor, and for good reason: ChatGPT has the potential to democratize AI, personalize and simplify digital research, and assist in both creative problem-solving and tackling “busywork.” But the security community and other technology leaders have started raising the alarm, worried about the program’s potential to write malware and spread mis- and disinformation.

Do you think your organization should embrace ChatGPT? Or do you believe implementing the platform will compromise your company’s cybersecurity posture? Read on to learn more about the pros, cons, conversations, and controversies surrounding ChatGPT, including a call to (halt) action from technology leaders.

Why ChatGPT security concerns are both overblown and valid

Artificial intelligence (AI) was once a science fiction cautionary tale—stories consistently warned against designing machines capable of surpassing human ingenuity. But over the last 20 years, AI has transitioned from the stuff of nightmares to mainstream adoption. From the first Roomba in 2002 to the first virtual assistant (Siri) in 2011, AI has slowly and steadily penetrated the consumer technology market, often with little comprehension from buyers that artificial intelligence is actually powering the functionality behind their favorite devices.

As of November 2022, people can no longer ignore the artificial elephant in the room. Enter: ChatGPT. Developed by OpenAI, ChatGPT (Chat Generative Pre-Trained Transformer) is an AI-powered chatbot that uses an enormous and sophisticated language model to generate human-like responses in text format. ChatGPT amassed an incredible 100 million users within two months of its explosive launch, and its website receives over 1 billion visitors per month.

Unlike other chatbots, which are typically only able to produce canned replies on a few subjects, ChatGPT can mimic the style of celebrity CEOs, craft business pitches, compose music and song lyrics, answer test questions, simulate an entire chat room, write computer code, and much more. For example, Malwarebytes asked ChatGPT to write the opening paragraph of a novel about an antiquated antivirus program that relies on signature-based detection to stop new and emerging threats.

Here’s what the program came back with:

“The antivirus program blinked to life, its archaic interface flickering on the outdated CRT monitor. The program’s once cutting-edge signature-based detection technology was now decades out of date, unable to keep pace with the rapidly evolving landscape of digital threats. But for the small team of programmers responsible for maintaining the program, it was still their best line of defense against the unrelenting tide of new and unknown viruses that threatened to breach their systems. As they watched the program scan through the code, line by line, they knew that they were fighting an uphill battle – but they had no other choice but to keep trying.”

Such articulate, detailed responses caused a near frenzy upon ChatGPT’s initial release. Many corporate customers and app developers have rushed to implement the platform, and its newer Dall-E program for generating images, into their business processes and products. However, the security community and those wary of artificial intelligence’s steady drumbeat forward have warned organizations to exercise caution over a myriad of potential risks.

Because of its meteoric rise into public consciousness and rapid adoption, the generative AI chatbot has been the subject of continuing, complex conversations about its impact on the cybersecurity industry, threat landscape, and humanity as a whole. Will ChatGPT be the sentient harbinger of death some have claimed? Or is it a unicorn that’s going to solve every business, academic, and creative problem? The answer, as usual, lies somewhere in the gray.

Security pros of ChatGPT

AI can be a powerful tool for cybersecurity and information technology professionals. It will change the way we defend against cyberattacks by improving the industry’s ability to detect and respond to threats in real time. And it will help businesses shore up their IT infrastructure to better withstand the constant stream of increasingly-sophisticated attacks. Most effective security solutions today, including Malwarebytes, already employ some form of machine learning. That’s why some in the security community argue that generative AI tools can be safely deployed to strengthen an organization’s cybersecurity posture as long as they’re implemented according to best practices.

Increases efficiency

ChatGPT can increase efficiency for cybersecurity staff on the front lines. For one, it can significantly reduce notification fatigue, a growing concern within the field. With companies grappling with limited resources and a widening talent gap, a tool like ChatGPT could simplify certain labor-intensive tasks and give defenders back valuable time to commit to higher-level strategic thinking. ChatGPT can be trained to identify and mitigate network security threats like DDoS attacks when used in conjunction with other technologies. It can also help automate security incident analysis and vulnerability detection, as well as more accurately filter spam.
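As a simple, concrete illustration of that last point, the sketch below asks the OpenAI chat API to label an email as phishing or legitimate. This is our own toy example rather than a Malwarebytes workflow: it assumes the current openai Python SDK, an OPENAI_API_KEY environment variable, and a model name that may change over time.

    from openai import OpenAI  # pip install openai; the client reads OPENAI_API_KEY from the environment

    client = OpenAI()

    email_body = (
        "Your account has been locked. Click http://example.invalid/verify "
        "within 24 hours to restore access."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a security assistant. Reply with exactly one word, "
                        "PHISHING or LEGITIMATE, for the email the user provides."},
            {"role": "user", "content": email_body},
        ],
    )

    print(response.choices[0].message.content)  # e.g. PHISHING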

Assists engineers

Malware analysts and reverse engineers could also benefit from ChatGPT’s assistance on traditionally challenging tasks, such as writing proof-of-concept code, comparing language- or platform-specific conventions, and analyzing malware samples. The chatbot can also help engineers learn how to write in different programming languages, master difficult software programs, and understand vulnerabilities and exploit code.

Trains employees

ChatGPT’s security applications aren’t limited to Information Security (IS) personnel. The program can help close the security knowledge gap by assisting in employee training. Cybersecurity training is crucial for organizations interested in mitigating cyberattacks and fraud, yet IT departments are often far too busy to offer more than a single course per year. ChatGPT can step in to offer insights on identifying the latest scams, avoiding social engineering pitfalls, and setting stronger passwords in concise, conversational text that may be more effective than a lecture or slide presentation.

Aids law enforcement

Finally, ChatGPT has the potential to assist law enforcement with investigating and anticipating criminal activities. In a March 2023 report from Europol, subject matter experts found that ChatGPT and other large language models (LLMs) opened up “explorative communication” for law enforcement to quickly gather key information without having to manually search through and summarize data from search engines. LLMs can significantly speed up the learning process, enabling a much faster gateway into technological comprehension than was previously thought possible. This could help officers get a leg up on cybercriminals, whose understanding of emerging technologies has typically outpaced their own.

Security concerns overblown

Not long after ChatGPT was first introduced, the inevitable hand-wringing by technology decision-makers took hold. In a February survey of IT professionals by BlackBerry, 51 percent predicted we are less than a year away from a successful cyberattack being credited to ChatGPT, and 71 percent believed nation states are likely already using the technology for malicious purposes.

The following month, thousands of tech leaders, including Steve Wozniak and Elon Musk, signed an open letter to all AI labs calling on them to pause the development of systems more powerful than the latest version of ChatGPT for at least six months. The letter cites the potential for profound risks to society and humanity that arise from the rapid development of advanced AI systems without shared safety protocols. More than 27,500 signatures have since been added to the letter.

However, even when ChatGPT is engaged in ominous activities, the outcomes at present are rather harmless. Since OpenAI allows developers to build on its official APIs, some have tested a few nefarious theories by creating ChaosGPT, an internet-connected “evil” version that autonomously plans and executes actions in pursuit of a goal. One user commanded the AI to destroy humanity, and it planned a nuclear winter, all while maintaining its own Twitter account, which was ultimately suspended.

ChaosGPT tweet

So maybe ChatGPT isn’t going to take over the world just yet—what about some of the more realistic security concerns being voiced, like the ability to develop malware or phishing kits?

When it comes to writing malicious code, ChatGPT isn’t yet ready for prime time. In fact, the platform is a terrible programmer in general. It’s currently easier for an expert threat actor to create malware from scratch than to spend time correcting what ChatGPT has produced. The fear that ChatGPT would hand script kiddies the programming power to produce thousands of new malware strains is unfounded, as amateur cybercriminals lack the knowledge to pick up on minor errors in code, as well as the understanding of how code works.

One of our researchers recently embarked on an experiment to get ChatGPT to write ransomware, and despite the chatbot’s initial protests that it couldn’t “engage in activities that violate ethical or legal standards, including those related to cybercrime or ransomware,” with a little coaxing, ChatGPT eventually complied. The result: snippets of ransomware code that switched languages throughout, stopped short after a certain number of characters, dropped features at random, and were essentially incoherent and useless.

Since the primary focus of ChatGPT’s training was in language skills, security pros have been most anxious about its ability to generate believable phishing kits. While the chatbot can produce a clean phishing email that’s free from grammatical or spelling errors, many modern phishing samples already do the same. The AI tool’s phishing skills begin and end with writing emails because, again, it lacks the coding talent to produce other elements like credential harvesters, infected macros, or obfuscated code. Its attempts so far have been rudimentary at best—and that’s with the assistance of other tools and researchers.

ChatGPT can only pull from what’s already in its public database, and it has only been trained on data up until 2021. Even today, there are simply not enough well-written phishing scripts in the wild for ChatGPT to surpass what cybercriminals have already developed. In addition, OpenAI has safety protocols that explicitly prohibit the use of its models for malware development, fraud (including spam and scams), and invasions of privacy. Unfortunately, that hasn’t stopped crafty individuals from “jailbreaking” ChatGPT to get around them.

ChatGPT security cons

Just because some of the worst fears about ChatGPT are overhyped doesn’t mean there are no justifiable concerns. According to the NIST AI Risk Management Framework published in January, an AI system can only be deemed trustworthy if it adheres to the following six criteria:  

  1. Valid and reliable
  2. Safe
  3. Secure and resilient
  4. Accountable and transparent
  5. Explainable and interpretable
  6. Fair with harmful biases managed

However, risks can emerge from socio-technical tensions and ambiguity related to how an AI program is used, its interactions with other systems, who operates it, and the context in which it is deployed.

Racial and gender bias

There are many inherent uncertainties in LLMs that render them opaque by nature, including limited explainability and interpretability, and a lack of transparency and accountability, including insufficient documentation. Researchers have also reported multiple cases of harmful bias in AI, including crime prediction algorithms that unfairly target Black and Latino people and facial recognition systems that have difficulty accurately identifying people of color. Without proper controls, ChatGPT could amplify, perpetuate, and exacerbate toxic stereotypes, leading to undesirable or inequitable outcomes for certain communities and individuals.

Lack of verifiable metrics

AI systems suffer from a deficit of verifiable measurement metrics, which would help security teams determine whether a particular program is safe, secure, and resilient. What little data exists is far from robust and lacks consensus among AI developers and security professionals alike. What’s worse, different AI developers interpret risk in different ways and measure it at different intervals in the AI lifecycle, which could yield inconsistent results. Some threats may be latent at one time but increase as AI systems adapt and evolve.

Cybercriminal experimentation

Despite its struggles with malicious code, ChatGPT has already been weaponized by enterprising cybercriminals. By January, threat actors in underground forums were experimenting with ChatGPT to recreate malware variants and techniques described in research publications. Criminals shared malicious tools, such as an information stealer, an automated exploit, and a program designed to phish for credentials. Researchers also discovered cybercriminals exchanging ideas about how to create dark web marketplaces using ChatGPT that sell stolen credentials, malware, or even drugs in exchange for cryptocurrency.

Vulnerabilities and exploits

There are few ways to know in advance if an LLM is free from vulnerabilities. In March, OpenAI temporarily took down ChatGPT because of a bug that allowed some users to see the titles of other people’s chat histories and first messages of newly-created conversations. After further investigation, OpenAI discovered the vulnerability had exposed some user payment and personal data, including first and last names, email addresses, payment addresses, the last four digits of credit card numbers, and card expiration dates. While OpenAI claims, “We are confident that there is no ongoing risk to users’ data,” there’s no way (at present) to confirm or deny whether personal information was exfiltrated for criminal purposes.

Also in March, OpenAI massively expanded ChatGPT’s capabilities to support plugins that allow access to live data from the web, as well as from third-party applications like Expedia and Instacart. In code provided to ChatGPT customers interested in integrating the plugins, security analysts found a potentially serious information disclosure vulnerability. The bug can be leveraged to capture secret keys and root passwords, and researchers have already seen attempted exploits in the wild.

Privacy concerns

Compounding worries that vulnerabilities could lead to data breaches, several top brands recently chastised employees for entering sensitive business data into ChatGPT without realizing that all messages are saved on OpenAI’s servers. When Samsung engineers asked ChatGPT to fix errors in their source code, they accidentally leaked confidential notes from internal meetings and performance data in the process. An executive at another company cut-and-pasted the firm’s 2023 strategy into ChatGPT to create a slide deck, and a doctor submitted his patient’s name and medical condition for ChatGPT to craft a letter to his insurance company.


Both privacy and security concerns have prompted major banks, including Bank of America, JPMorgan Chase, Goldman Sachs, and Wells Fargo, to restrict or all-out ban ChatGPT and other generative AI models until they can be further vetted. Even private companies like Amazon, Microsoft, and Walmart have issued warnings to their staff to refrain from divulging proprietary information or sharing personal or customer data on ChatGPT as well.

Social engineering

Finally, cybercriminals wouldn’t be cybercriminals if they didn’t capitalize on ChatGPT’s wild popularity. Because of its accelerated growth, ChatGPT was forced to throttle its free tool and launch a $20/month paid tier for those wanting unlimited access. This gave threat actors the ammunition to develop convincing social engineering schemes that promised uninterrupted, free access to ChatGPT but really lured users into entering their credentials on malicious webpages or unknowingly installing malware. Security researchers also found more than 50 malicious Android apps on Google Play and elsewhere that spoof ChatGPT’s icon and name but are designed for nefarious purposes.

ChatGPT’s disinformation problem

While vulnerabilities, data breaches, and social engineering are valid concerns, what’s causing the most anxiety at Malwarebytes is ChatGPT’s ability to spread misinformation and disinformation on a massive scale. That which enamors the public most—ChatGPT’s ability to generate thoughtful, human-like responses—is the very same capability that could lull users into a false sense of security. Just because ChatGPT’s answers sound natural and intelligent doesn’t mean they are accurate. Incorrect information and associated biases are often incorporated into its responses.

OpenAI CEO Sam Altman himself expressed worries that ChatGPT and other LLMs have the potential to sow widespread discord through extensive disinformation campaigns. Altman said the latest version, GPT-4, is still susceptible to “hallucinating” incorrect facts and can be manipulated to produce deceptive or harmful content. “The model will boldly assert made-up things as if they were completely true,” he told ABC News.

In the age of clickbait journalism and social media, it can be challenging to discern the difference between fake and authentic content, propaganda or legitimate fact. With ChatGPT, bad actors can use the AI to quickly write fake news stories that mimic the voice and tone of established journalists, celebrities, or even politicians. For example, Malwarebytes was able to get ChatGPT to write a story in the voice of Barack Obama about the earthquake in Turkey, which could easily be modified to spread disinformation or collect fraudulent payments through fake donation links.

Educational concerns

In education, mis- and disinformation are especially troubling byproducts of ChatGPT that have led some of the biggest school districts in the US to ban the program from K–12 classrooms. From its lack of cultural competency to its potential to undermine human teachers, academia is understandably apprehensive. For every student using ChatGPT to research debate prompts or develop study guides, there’s another abusing the platform to plagiarize essays or take exams.

The education industry might be willing (for now) to let teachers use ChatGPT for simple tasks like creating lesson plans and emailing parents, but the tool will likely remain off-limits for students, or at least highly regulated in public schools. Educators are aware that over-reliance on AI-powered tools and generated content could lead to a decrease in problem solving, creativity, and critical thinking—the very skills teachers and administrators aim to develop in students. Without them, it’ll be that much harder to recognize and avoid misinformation.

Final verdict

Suggesting that ChatGPT is low risk and unworthy of the security community’s attention is like putting your head in the sand and pretending AI doesn’t exist. ChatGPT is only the start of the generative AI revolution. Our industry should take its potential for disruption—and destruction—seriously and focus on developing safeguards to combat AI threats. Halting “dangerous” research on advanced models ignores the reality of rampant AI use today. Instead, it’s better to demand NIST’s criteria for trustworthiness and establish regulation around the development of AI through both government intervention and corporate security innovation.

Some artificial intelligence regulation is already in motion: the proposed 2022 Algorithmic Accountability Act would require US businesses to assess critical AI algorithms and provide public disclosures for increased transparency. The legislation was endorsed by AI advocates and experts, and it sets the stage for future government oversight. With AI laws proposed in Canada and Europe as well, we’re one step closer to providing some important guardrails for AI. In fact, expect to see changes (aka limitations) implemented to ChatGPT in the near future in response to a country-wide ban by the Italian government.

Just as cybersecurity relies on commercial software to defend people and businesses, so too might generative AI models. New companies are already springing up that specialize in AI vulnerability detection, bot mitigation, and data input cleansing. One such company, Kasada Pty, has been tracking ChatGPT misuse and abuse. Another new tool from Robust Intelligence, modeled after VirusTotal, scans AI applications for security flaws and tests whether they’re as effective as advertised or if they have issues around bias. And Hugging Face, one of the most popular repositories of machine learning models, has been working with Microsoft’s threat intelligence team on an application that scans AI programs for cyberthreats.

As organizations look to integrate ChatGPT—whether to augment employee tasks, make workflows more efficient, or supplement cyberdefenses—it will be important to note the program's risks alongside its benefits, and recognize that generative AI still requires an appreciable amount of oversight before large-scale adoption. Security leaders should consider AI-related vulnerabilities across their people, processes, and technology—especially those related to mis- and disinformation. With the right safeguards in place, generative AI tools can be used to support existing security infrastructures.

Awareness alone won't solve the more nebulous threats associated with ChatGPT. To bring disparate security efforts together, the AI community will need to adopt a modus operandi similar to that of traditional software, which benefits from an entire ecosystem of government, academia, and enterprise that has developed over more than 20 years. That ecosystem is in its infancy for LLMs like ChatGPT today, but continued diligence, plus a learning model of its own, should integrate cybersecurity into a symbiotic relationship with generative AI. The benefits of ChatGPT are many, and there's no doubt that generative AI tools have the potential to transform humanity. In what way remains to be seen.


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

Identity crisis: How an anti-porn crusade could jam the Internet, featuring Alec Muffett: Lock and Code S04E11

On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States. 

The changes are, ostensibly, over pornography. 

In Louisiana today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID.

But the bigger problem with all these proposals is not that they would make a new Internet only for children; it's that they would make a new Internet for everyone.

Look no further than Utah. 

On May 1, after new rules came into effect requiring porn sites to verify the ages of their users, PornHub refused to comply with the law and instead blocked access to its site for anyone visiting from an IP address based in Utah. If you're in Utah right now and connecting to the Internet with a Utah-based IP address, you cannot access PornHub. Instead, you're presented with a message from adult film star Cheri Deville, who explains that:

“As you may know, your elected officials have required us to verify your age before granting you access to our website. While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he’s seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology. 

“The battle cry of these people has always been—either directly or mocked as being—'Could somebody think of the children?' And I'm thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity.”

Muffett continued:

“I’m trying to protect that for her. I’d like to see more people grasping for that.”

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.


Malwarebytes Privacy VPN can encrypt your connection when using public WiFi, and it can block companies and websites from seeing your IP address and location to identify who you are, where you live, or what you’re doing on the Internet.

TRY NOW


Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Additional Resources and Links for today’s episode:

“A Sequence of Spankingly Bad Ideas.” – An analysis of age verification technology presentations from 2016. Alec Muffett.

“Adults might have to buy £10 'porn passes' from newsagents to prove their age online.” – The United Kingdom proposed an “adult pass” for purchase in 2018 to comply with earlier efforts for online age verification. Metro.

“Age verification won't block porn. But it will spell the end of ethical porn.” – An independent porn producer explains how compliance costs for age verification could shut down small outfits that make, film, and sell ethical pornography. The Guardian.

“Minnesota's Attempt to Copy California's Constitutionally Defective Age Appropriate Design Code is an Utter Fail.” – Age verification creeps into US proposals. Technology and Marketing Law Blog, run by Eric Goldman.

“Nationwide push to require social media age verification raises questions about privacy, industry standards.” – Cyberscoop.

“The Fundamental Problems with Social Media Age Verification Legislation.” – R Street Institute.

YouTube's age verification in action. – Various methods and requirements shown in Google's Support center for ID verification across the globe.

“When You Try to Watch Pornhub in Utah, You See Me Instead. Here's Why.” – Cheri Deville's call for specialized phones for minors. Rolling Stone.

A week in security (May 15-21)

Last week on Malwarebytes Labs:

Stay safe!


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW