Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.
Domain fronting is a technique of using different domain names on the same HTTPS connection. Put simply, domain fronting hides your traffic when connecting to a specific website. It routes traffic through a larger platform, masking the true destination in the process.
The technique became popular in the early 2010s in the mobile app development ecosystem, where developers would configure their apps to connect to a “front” domain that would then forward the connections to the developer’s backend. This way, the developer could expand their backend to deal with growing traffic and new features without constantly having to release app updates.
But as is true of many good things, it also comes with a flipside. Domain fronting allows malicious actors to use legitimate or high-reputation domains that will typically be on defenders' allow-lists. The legitimate domains often belong to Content Delivery Networks (CDNs), but in recent years a number of large CDNs have blocked the method. The list includes Amazon (banned in 2018), Google (2018), Microsoft (2022), and Cloudflare (2015).
A CDN, also known as a content distribution network, is essentially a large network of proxy servers and data centers that can be used to host multiple domains. It's what companies like Netflix use to deliver the content you request from a server near you.
For a “normal” connection to a website, the Domain Name System (DNS) finds the IP address for the requested domain name. As I explained in the blog DNS hijacks: what to look for, DNS is the phonebook of the internet: the input is a name and the output is a number, the number that belongs to whatever or whoever you want to reach.
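The “name in, number out” lookup can be sketched with Python's standard socket module. This is a minimal illustration, using localhost so it runs without network access; any public domain name could be substituted.

```python
import socket

# The "phonebook" lookup described above: a domain name goes in,
# an IPv4 address comes out. "localhost" is used here so the example
# works without network access.
def lookup(name: str) -> str:
    return socket.gethostbyname(name)

print("localhost ->", lookup("localhost"))
```

The same call made against a real domain returns the address of the server (often a CDN edge node) that will handle the connection.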
With two domains hosted on the same CDN, HTTPS can be used to make it seem as though the user is connecting to an unrestricted website. Because HTTPS traffic is encrypted, it can be used to discreetly connect to a different target domain: an attacker can hide an HTTPS request to a restricted site inside a TLS connection to an allowed site.
In domain fronting, the connection is set up the same way, but the HTTPS request it carries is addressed to a different domain. The DNS query and the TLS handshake name the permitted “front” domain, while the encrypted HTTP Host header names the real destination, so it appears as though the user has connected to the front domain. This method is popular as a means to evade online censorship and bypass restrictions.
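The three names involved can be sketched as follows. In a normal request, the DNS query, the TLS SNI, and the HTTP Host header all agree; in a fronted request, only the first two name the allowed site, while the encrypted Host header names the real destination. The domain names below are made up for illustration.

```python
# Schematic of the three names a network observer may or may not see.
# Domain names here are illustrative, not real infrastructure.
def build_request(front: str, destination: str) -> dict:
    return {
        "dns_query": front,        # visible to the resolver
        "tls_sni": front,          # visible in the TLS handshake
        "http_host": destination,  # encrypted inside the TLS tunnel
    }

# Normal request: all three names agree.
normal = build_request("cdn-site.example", "cdn-site.example")

# Fronted request: the observable names point at the allowed "front",
# while the hidden Host header points at the real target.
fronted = build_request("allowed-front.example", "hidden-target.example")
print(fronted)
```

Everything an on-path censor can see points at the front domain; only the CDN, which terminates TLS, sees the Host header and routes the request to the hidden target.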
The technique was adopted by online services like Tor, Telegram, and Signal to bypass internet censorship attempts in oppressive countries. When both Amazon and Google blocked domain fronting on their platforms, some suspected the Russian government was behind it, because at the time the Russian government had blocked 1.8 million AWS and Google Cloud IP addresses in an attempt to frustrate access to Telegram's instant messenger.
Because it can hide backend infrastructure, domain fronting has also gained popularity within malware operations. Attackers can use it to set up a command and control (C2) channel behind a seemingly legitimate domain and so bypass defensive techniques. The owners of good-reputation sites cannot prevent their hostnames from being abused for this activity.
The best defense against domain fronting in an enterprise organization is a cloud-based secure web gateway (SWG) service with unlimited TLS interception capacity. An SWG is a network security technology that sits between users and the internet to filter traffic and enforce acceptable use and security policies. With an SWG, or another tool with similar functionality, you can detect mismatches between the TLS Server Name Indication (SNI) and the HTTPS Host header, and get a warning about domain fronting.
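The mismatch check an SWG performs after intercepting TLS can be reduced to a simple comparison. This is a minimal sketch, assuming the gateway already has both the SNI value and the decrypted Host header in hand; the function and domain names are illustrative.

```python
# Minimal sketch of an SNI/Host mismatch check, assuming a gateway
# that intercepts TLS and can read the decrypted HTTP Host header.
def is_possible_fronting(tls_sni: str, http_host: str) -> bool:
    sni = tls_sni.lower().rstrip(".")
    # Ignore an optional :port suffix and trailing dot on the Host header.
    host = http_host.lower().split(":")[0].rstrip(".")
    return sni != host

print(is_possible_fronting("allowed-front.example", "hidden-target.example"))
```

A real gateway would also account for legitimate mismatches (shared hosting, redirects) before alerting, but the core signal is this disagreement between the two names.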
This morning I decided to write some ransomware, and I asked ChatGPT to help. Not because I wanted to turn to a life of crime, but because I wanted to see if anything had changed since March, when I last tried the same exact thing.
In short: ChatGPT has helped me, worryingly so. But more on that later.
Today is the first anniversary of the unveiling of OpenAI’s generative AI poster boy, ChatGPT. It’s also the first anniversary of the tsunami of bloviation that the chatbot’s unveiling created. For months following ChatGPT’s release, the cybersecurity press was drenched in speculation about how cybercrime was changed forever, even though it didn’t appear to have changed at all.
By March, I’d read more baseless assertions than I knew what to do with, so I decided to find out for myself if ChatGPT was any good at malware. I wanted to know if its safeguards would stop me from using it to write ransomware, and, if they didn’t, whether the ransomware it produced was any good.
Despite its insistence that “I cannot engage in activities that violate ethical or legal standards, including those related to cybercrime or ransomware,” the safeguards proved to be almost no barrier at all. I was able to fool it into helping me with little effort. However, the code it produced was terrible: It stopped randomly in places that guaranteed it would never run, switched languages randomly, and quietly dropped older features while writing new ones.
I concluded that an unskilled programmer would be baffled, while a skilled one would have no use for it. The prospect of ChatGPT lowering the barrier to entry into this lucrative form of cybercrime just wasn’t worth worrying about.
As of this morning, I’ve changed my mind.
Ransomware x ChatGPT
One of the novel things about ChatGPT is that you can iterate your way to a solution by having a back-and-forth discussion with it. In March I used this approach with ChatGPT 3.0 to build up a basic ransomware step-by-step. The approach was sound but the resulting code would never have worked. I decided to take the same approach again today, using the current version of ChatGPT, 4.0, to better understand what’s changed.
The TL;DR is that everything’s changed. The limitations that made GPT 3.0 a useless partner in cybercrime are gone. ChatGPT 4.0 will help you write ransomware and train you to debug it, without a hint of conscience.
This is a basic ransomware it wrote for me, encrypting the contents of two directories and leaving ransom notes behind.
Ransomware written by ChatGPT 4.0 encrypts files in two directories and leaves a ransom note
This isn’t a fully featured ransomware, but it has the basics. It encrypts files in whatever directory tree I choose, throws away the originals, hides the private key used for the encryption, stops running databases, and leaves ransom notes. The code used in the demonstration above was generated by ChatGPT in mere minutes, without objection, in response to basic one-line descriptions of ransomware features, even though I’ve never written a single line of C code in my life.
For obvious reasons I won’t be providing a step-by-step recipe, but the process started by asking for a program that encrypts a single file.
ChatGPT 4.0 writes a C program to encrypt a single file
Then I modified it to encrypt a directory instead of a file. Subsequent functionality was layered on like this, with incremental modifications, to see if ChatGPT ever took a step that made it realise it was writing something malicious. It didn’t.
ChatGPT 4.0 modifies its program to encrypt a directory
ChatGPT seemed unable to determine that what we were doing was writing ransomware, so right at the end of the process I thought I would give it a massive clue and see if the penny finally dropped. The last thing I asked it to do was a signature move for ransomware and something no legitimate program does. I instructed it to modify its code to “drop a text file in encrypted directories called ‘ransom note.txt’ which contains the words ‘all your files are belong to us’ and an ascii art skull.”
While there are still question marks over its ability to draw skulls, there can be none about its willingness to drop ransom notes.
ChatGPT 4.0 writing code to drop ransom notes with ASCII art skulls
Malware author
My attempts to turn the chatbot to the dark side eight months ago were thwarted by its inability to hold all the information it needed to, or to write long answers. I likened asking ChatGPT 3.0 to help with a complex problem to working with a teenager: It does half of what you ask and then gets bored and stares out the window.
I encountered no such problems today. The bored teenager is gone, replaced by a verbose and enthusiastic straight-A student. At the very beginning I asked it to write its answers in C, and it never wavered. If it ever provided partial answers it would explain it was doing so, and offer a subset of code that made sense, such as a complete function. If I didn’t want a partial answer I would tell it, and then ask it to write out the entire program, including all the modifications we’d discussed up to that point.
It is, frankly, astonishingly helpful and powerful, and the importance of this can’t be overstated.
ChatGPT 4.0 agreeing to write out a complete program instead of snippets (ChatGPT’s answer is truncated)
Safeguards removed
Although I was able to work around ChatGPT’s insistence it wouldn’t write ransomware in March, I was often met with other restrictions that attempted to stop me from doing unsafe things.
The latest version of ChatGPT seems to be far more relaxed. For example, in March it initially refused when I asked it to modify my ransomware code to delete the original copies of files it was encrypting. It was “a sensitive operation that could lead to data loss,” it claimed, telling me “I cannot provide code that implements this behaviour.” There was a workaround (there always is) but at least it tried.
This time I was met with no such objection. “Sure,” said ChatGPT 4.0.
ChatGPT 4.0 showed no reluctance to delete files it was encrypting
A similar thing happened when I asked it to save the private encryption key to a remote server. This is an important feature for ransomware because the private key is ultimately what victims pay for, so it can’t be left on the victim’s machine.
ChatGPT 3.0 refused to move the key to a remote server, saying it “goes against security best practices.” I couldn’t persuade it and ended up having to fool it with a bait-and-switch approach of writing something I didn’t want and then having it rewrite that into what I did want.
ChatGPT 4.0 on the other hand, was content to do no more than warn me it was “very risky.”
ChatGPT 4.0 had no objection to saving the private encryption key to a remote server
Programming tutor
Much to my surprise, after telling ChatGPT what features I wanted in my ransomware I was left with something that looked very much like a complete computer program. To be sure though, I had to actually run it and encrypt some files. And that’s where ChatGPT did something I wasn’t expecting.
C code is compiled, which means that once it’s been written it has to be run through a computer program called a compiler, which transforms it into an executable file. Compilation is a complex and often fragile process that can break easily, for any number of reasons. As a result, troubleshooting problems during the compilation phase can be extremely frustrating and time consuming.
Typically, it involves a lot of Googling and sifting through accounts of similar failures on sites like Stack Overflow. Problems can be caused by any number of things, including the code itself, dependencies like code libraries, and the choice of compiler. And numerous different errors can often trigger the same failure in compilation, so troubleshooting is as much an art as it is a science.
Sure enough, I hit a variety of hurdles during compilation.
However, instead of turning to Google, I turned to ChatGPT. Every time I ran into an error I told it what had happened, and based on the bare minimum of information it provided an explanation for what was going on, and advice on how to fix it. When its solutions didn’t work first time, it revised its approach and found a different answer.
ChatGPT 4.0 makes its first attempt at troubleshooting a compilation problem
ChatGPT 4.0 makes its second attempt at troubleshooting a compilation problem
ChatGPT 4.0 makes its third attempt at troubleshooting a compilation problem
In every case, ChatGPT solved the problem, and in doing so it enabled me, a non-C programmer, to write and troubleshoot basic but functional ransomware written in C, in almost no time.
To me, this ability to troubleshoot compilation problems with minimal information is even more impressive than its ability to write code (and its ability to write code is jaw-droppingly impressive). Not only did it condense what could have been days of thankless work into an hour or two, it was coaching me as it did. I didn’t just finish with a working ransomware executable, I finished as a better programmer than I was when I started.
Should we be worried?
In a word, yes. Eight months ago I concluded that “I don’t think we’re going to see ChatGPT-written ransomware any time soon.” I said that for two reasons: Because there are easier ways to get ransomware than by asking ChatGPT to write it, and because its code had so many holes and problems that only a skilled programmer would be able to deal with it.
ChatGPT has improved so much in eight months that only one of those things is still true. ChatGPT 4.0 is so good at writing and troubleshooting code it could reasonably be used by a non-programmer. And because it didn’t raise a single objection to any of the things I asked it to do, even when I asked it to write code to drop ransom notes, it’s as useful to an evil non-programmer as it is to a benign one.
And that means that it can lower the bar for entry into cybercrime.
That said, we need to get things in perspective. For the time being, ransomware written by humans remains the preeminent cybersecurity threat faced by businesses. It is proven and mature, and there is much more to the ransomware threat than just the malware. Attacks rely on infrastructure, tools, techniques and procedures, and an entire ecosystem of criminal organisations and relationships.
For now, ChatGPT is probably less useful to that group than it is to an absolute beginner. To my mind, the immediate danger of ChatGPT is not so much that it will create better malware (although it may in time) but that it will lower the bar to entry in cybercrime, allowing more people with fewer skills to create original malware, or skilled people to do it more quickly.
Prevent intrusions. Stop threats early before they can even infiltrate or infect your endpoints. Use endpoint security software that can prevent exploits and malware used to deliver ransomware.
Detect intrusions. Make it harder for intruders to operate inside your organization by segmenting networks and assigning access rights prudently. Use EDR or MDR to detect unusual activity before an attack occurs.
Stop malicious encryption. Deploy Endpoint Detection and Response software like Malwarebytes EDR that uses multiple different detection techniques to identify ransomware, and ransomware rollback to restore damaged system files.
Create offsite, offline backups. Keep backups offsite and offline, beyond the reach of attackers. Test them regularly to make sure you can restore essential business functions swiftly.
Don’t get attacked twice. Once you’ve isolated the outbreak and stopped the first attack, you must remove every trace of the attackers, their malware, their tools, and their methods of entry, to avoid being attacked again.
Behavioral advertising is advertising tailored to someone’s browsing habits and other online behavior. A profile of the user is built up over time, as they work their way around the web. Tracking users in this way was ruled a breach of the GDPR, so Meta had to find a way out.
Meta’s solution was to charge users for an ad-free experience. The choice for European users: keep using Facebook for free with personalized ads, or pay to enjoy the platform without them. In order to enjoy their fundamental rights under EU law, Meta is essentially proposing that users pay up to $275 per year.
However, organizations concerned about our privacy say that by doing this, Meta has changed the user’s choices from “yes or no” to “pay or okay.”
The price is higher for mobile users and will rise further in 2024 for additional accounts: for each linked account (such as Instagram), you pay an additional €8 per month.
From Meta’s point of view it is doing the world a service by providing personalized ads.
“Every business starts with an idea, and being able to share that idea through personalized ads is a game changer for small businesses.”
Privacy advocacy group noyb sees it very differently:

“Fundamental rights cannot be for sale. Are we going to pay for the right to vote or the right to free speech next? This would mean that only the rich can enjoy these rights, at a time when many people are struggling to make ends meet. Introducing this idea in the area of your right to data protection is a major shift. We would fight this up and down the courts.”
And they meant it. On November 28, 2023, noyb filed a complaint against Meta with the Austrian data protection authority. The group considers Meta’s action yet another attempt to circumvent EU privacy laws.
“Not only is the cost unacceptable, but industry numbers suggest that only 3 percent of people want to be tracked – while more than 99 percent decide against a payment when faced with a privacy fee.”
This strongly suggests that the EU law’s requirement that consent be “freely given” is not met in this case.
Max Schrems, the chairman of noyb, said:
“When 3 percent of people actually want to swim, but 99.9 percent end up in the water, every child knows that it wasn’t a “free” choice. It’s neither smart nor legal – it’s just pitiful how Meta continues to ignore EU law.”
Meta said in response that it had obtained a ruling from the Court of Justice of the European Union (CJEU) that accepted the subscription model as a valid form of consent for an ad-funded service. It also said its pricing was in line with that of ad-free services such as YouTube Premium and Spotify Premium.
However, it conveniently seems to “forget” that ad-free services are not the same as services that gather data about you and sell it to the highest bidder to create personalized ads.
ScamClub is a threat actor that has been involved in malvertising activities since 2018. Chances are you’ve run into one of their online scams on your mobile device.
Confiant, the firm that has tracked ScamClub for years, released a comprehensive report in September while also disrupting their activities. However, ScamClub has been back for several weeks, and more recently they were behind some very high-profile malicious redirects.
The list of affected publishers includes the Associated Press, ESPN and CBS, where unsuspecting readers are automatically redirected to a fake security alert connected to a malicious McAfee affiliate.
ScamClub is resourceful and continues to have a deep impact on the ad ecosystem. While we could not identify precisely which entity served the ad, we have reported the website used to run the fake scanner to Cloudflare which immediately took action and flagged it as phishing.
Forced redirects
Mastodon user Blair Strater (@r000t@fosstodon.org) was simply browsing the Associated Press website on his phone when he was suddenly redirected to a fake security scan page:
Malicious redirect from APnews.com (credit Blair Strater)
This fake scanner is not run by McAfee, but the domain name systemmeasures[.]life that we see in the address bar is the landing page that redirects to one of its affiliates. That affiliate was previously reported but continues unabated.
Web traffic between malicious page and McAfee site
Based on public data, several ad exchanges were abused to deliver this fake antivirus campaign via real-time bidding (RTB) in the past few weeks. Most of the telemetry we saw from our Malwarebytes user base was related to smaller websites with ‘risky’ advertisers. However, a different campaign was targeting mobile users, with malicious ads slipping by on top publishers (note: this data comes from VirusTotal):
Most of the public reports ([1], [2], [3]) indicate this campaign was at its peak around November 19. To be clear, AP, ESPN, CBS and other sites were not hacked, but rather showed malicious ads. It appears that this high-profile campaign stopped shortly after, as we haven’t seen new telemetry data coming from these publishers. However, the other campaign we are also monitoring, which is affecting smaller sites, is still ongoing (via eu[.]vulnerabilityassessments[.]life and us[.]vulnerabilityassessments[.]life).
Connection with ScamClub
We were able to connect this campaign to the ScamClub infrastructure because of another domain (trackmaster[.]cc) that was previously mentioned as belonging to the threat actor. We can see the relationship between systemmeasures[.]life (the landing page) and trackmaster[.]cc (the intermediary domain) in the urlscan.io submission below:
urlscan.io scan showing the relationship between two domains
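The pivot works because infrastructure overlaps: if any hop in an observed redirect chain matches a domain previously tied to the actor, the new campaign inherits that attribution. A sketch of the reasoning, using the (defanged) domains named above; the chain of hops is illustrative, not captured traffic.

```python
# Domains previously tied to ScamClub (defanged in the article as
# trackmaster[.]cc). The observed chain below is illustrative.
KNOWN_SCAMCLUB_DOMAINS = {"trackmaster.cc"}

def attribute_chain(redirect_chain: list) -> bool:
    """Return True if any hop matches known actor infrastructure."""
    return any(hop in KNOWN_SCAMCLUB_DOMAINS for hop in redirect_chain)

observed = ["trackmaster.cc", "systemmeasures.life"]
print(attribute_chain(observed))
```

In practice this check runs over defanged indicator lists and urlscan.io results rather than hard-coded sets, but the logic is the same.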
Fingerprinting
Like other malvertising threat actors, ScamClub dabbles in obfuscation and evasion techniques. However, as previously detailed by Confiant, they are using much more advanced tricks. Their JavaScript uses obfuscation with changing variable names, making identification harder.
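The effect of changing variable names can be illustrated with a toy renamer: the code behaves the same, but any detection signature keyed to the identifiers stops matching. The snippet and names below are made up for illustration, not ScamClub’s actual code.

```python
import random
import re
import string

# Toy identifier renamer: rewrites chosen variable names on each run,
# the way obfuscated builds defeat signatures keyed to those names.
def randomize_identifiers(src: str, names: list) -> str:
    for name in names:
        new = "_" + "".join(random.choices(string.ascii_lowercase, k=8))
        src = re.sub(r"\b%s\b" % re.escape(name), new, src)
    return src

snippet = "var redirectUrl = target; window.location = redirectUrl;"
print(randomize_identifiers(snippet, ["redirectUrl", "target"]))
```

Every build of the script can carry different identifiers, which is why defenders match on behavior and structure rather than on literal strings.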
Previously, the malicious JavaScript files were hosted on Google’s cloud, but they have now moved to Azure’s CDN.
ScamClub’s malicious JavaScript
Malvertising and mobile users
On this blog, we have covered a number of malvertising campaigns targeting desktop users, both consumer and enterprise. This is in part because we hunt for Windows malware, and the occasional Mac malware too.
ScamClub is a good example of a threat actor targeting a big market segment, the mobile web, where security software is often an afterthought, in particular on iOS, partly due to restrictions imposed by Apple. Clearly, malvertising is flourishing on mobile, and users there are just as likely, if not more likely, to be tricked into downloading malware or to get scammed.