IT NEWS

A week in security (Oct 11 – Oct 17)

Last week on Malwarebytes Labs

Other cybersecurity news

Stay safe, everyone!

The post A week in security (Oct 11 – Oct 17) appeared first on Malwarebytes Labs.

Multiple vulnerabilities in popular WordPress plugin WP Fastest Cache

Multiple vulnerabilities have been found in the popular WordPress plugin WP Fastest Cache during an internal audit by the Jetpack Scan team.

Jetpack reports that it found an Authenticated SQL Injection vulnerability and a Stored XSS (Cross-Site Scripting) via Cross-Site Request Forgery (CSRF) issue.

WP Fastest Cache

WP Fastest Cache is a plugin that is most useful for WordPress-based sites that attract a lot of visitors. To save the RAM and CPU time needed to render a page, the plugin creates caches of static HTML files, so that pages do not need to be rendered separately for every visit.

This results in a speed improvement which in turn improves the visitor experience and the SEO ranking of the site. WP Fastest Cache is open source software and comes in free and paid versions.

WP Fastest Cache currently has more than a million active installations according to its WordPress description page.

Authenticated SQL Injection vulnerability

This particular vulnerability can only be exploited on sites where the Classic Editor plugin is both installed and activated. Classic Editor is an official plugin maintained by the WordPress team that restores the previous (“classic”) WordPress editor and the “Edit Post” screen.

SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database, and has become a common issue with database-driven web sites. This bug could grant attackers access to privileged information from the affected site’s database, such as usernames and (hashed) passwords.
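Jetpack did not publish the vulnerable query, so here is a general sketch of the pattern (hypothetical `users` table, SQLite standing in for WordPress’s MySQL) showing why splicing input into a query leaks data while a bound parameter does not:

```python
import sqlite3

# Minimal sketch of SQL injection vs. parameterization. The vulnerable
# version splices user input straight into the query string; the safe
# version lets the driver bind it as data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x9f2...')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the injected OR clause matches every row.
leaked = conn.execute(
    "SELECT username FROM users WHERE username = '%s'" % malicious
).fetchall()

# Safe: the input is treated as a literal string, so nothing matches.
safe = conn.execute(
    "SELECT username FROM users WHERE username = ?", (malicious,)
).fetchall()

print(leaked)  # [('alice',)] -- attacker dumped the table
print(safe)    # []
```

The fix is the same in any language: never build queries by string concatenation; always bind untrusted input as a parameter.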

Stored XSS issue

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-24869 and received a CVSS score of 9.6 out of 10.

Cross-site request forgery (CSRF), also known as one-click attack or session riding, is a type of exploit of a website where unauthorized commands are submitted from a user that the web application trusts. A CSRF attack forces an end user to execute unwanted actions on a web application in which they’re currently authenticated. With a little help of social engineering, an attacker may trick the users of a web application into executing actions of the attacker’s choosing. If the victim is an administrative account, CSRF can compromise the entire web application.
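Jetpack didn’t publish the plugin’s patched code, but the standard CSRF defense looks like this sketch (hypothetical helper names): the server ties a secret token to the user’s session and rejects any state-changing request that does not echo it back, which a forged cross-site request cannot do.

```python
import hmac
import hashlib
import secrets

def issue_token(session_secret: bytes) -> str:
    # Derive a per-session token; real frameworks also rotate and expire these.
    return hmac.new(session_secret, b"csrf", hashlib.sha256).hexdigest()

def is_valid_request(session_secret: bytes, submitted_token: str) -> bool:
    expected = issue_token(session_secret)
    # compare_digest avoids timing side channels.
    return hmac.compare_digest(expected, submitted_token)

secret = secrets.token_bytes(32)
token = issue_token(secret)
print(is_valid_request(secret, token))     # True: genuine form submission
print(is_valid_request(secret, "forged"))  # False: cross-site forgery
```

WordPress implements this idea with its nonce functions; the vulnerability existed because such a check was missing or insufficient.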

Cross-Site Scripting (XSS) is a vulnerability that exploits the client environment within the browser, allowing an attacker to inject arbitrary code onto the target’s instance and environment. Basically the application does not process received information as intended. An attacker can use such a vulnerability to create input that allows them to inject additional code into a website.
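A minimal illustration (hypothetical values, Python’s `html` module standing in for WordPress’s escaping functions) of why unescaped output enables stored XSS:

```python
import html

# Untrusted input embedded in a page must be escaped for its context.
user_comment = "<script>stealCookies()</script>"

unsafe_page = "<p>%s</p>" % user_comment            # script would execute
safe_page = "<p>%s</p>" % html.escape(user_comment)  # rendered as text

print(safe_page)  # <p>&lt;script&gt;stealCookies()&lt;/script&gt;</p>
```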

In this case the attack was possible due to a lack of validation during user privilege checks, which allowed a potential attacker to perform any desired action on the target website. An adversary could even store malicious JavaScript code on the site, which, in the case of an online shop, could be a web skimmer designed to retrieve customer payment information.

Mitigation

Website owners should download and install the latest version of the WP Fastest Cache plugin (version 0.9.5) in which these vulnerabilities have been fixed. Jetpack recommends users update as soon as possible, as both vulnerabilities have a high technical impact if exploited. At the time of writing 650,000 instances were still on a vulnerable version.

For more general tips on how to secure your CMS, we recommend reading our article on How to secure your content management system.

Stay safe, everyone!


“Killware”: Is it just as bad as it sounds?

On October 12, after interviewing US Secretary of Homeland Security Alejandro Mayorkas, USA TODAY’s editorial board warned its readers about a dangerous new form of cyberattack under this eye-catching headline:

“The next big cyberthreat isn’t ransomware. It’s killware. And it’s just as bad as it sounds.”

But while “killware” sounds scary, the term itself is unhelpful when describing the many types of cyberattacks that, as USA TODAY wrote, “can literally end lives,” and that’s because nearly any type of hack, no matter the intention, can result in death. Complicating this is the fact that the known cyberattacks that have allegedly led to deaths already have a category: ransomware. Further, the term “killware” can confuse antivirus customers seeking reassurance that their own vendor is protecting them from this threat, but antivirus vendors do not stop attacks based on intent; they stop attacks based on method.

As an example, Malwarebytes Director of Threat Intelligence Jerome Segura said that Malwarebytes does not have any specific Indicators of Compromise (IOCs) for “killware” and that, instead, “we continue to protect our customers with our different layers of protection.”

“Many of our layers are ‘payload indifferent’ meaning we block the attack regardless of what it is meant to do (it could be to ransom, it could be to destroy MBRs, or anything in between). We don’t focus on that end payload so much as blocking how an attacker might get there.”

Think of it like this: Locksmiths don’t develop one set of locks to prevent robberies and another set of locks to prevent assault—they develop locks to primarily prevent break-ins, no matter what an invader has planned.

“Killware” is too loose a term to be useful

In February, an employee for a water treatment facility in Oldsmar, Florida, saw the mouse on his computer screen moving around without his involvement. The employee, according to Wired, thought this was somewhat normal, as his workplace used a tool that allowed for remote employees and supervisors to take control of computers at the plant itself. But when the employee saw the cursor move around a second time in the same day, he reportedly saw an attempt by an intruder to maliciously increase the chemical levels at the water treatment facility, upping the amount of sodium hydroxide—which can be corrosive in high quantities—to dangerous levels.

In USA TODAY’s article about “killware,” Secretary Mayorkas pointed directly to this cyberattack. It was different than other cyberattacks, Mayorkas said, because it “was not for financial gain but rather purely to do harm.”

But if the attack was truly meant to harm or even kill people—which it very well may have—what good does it do to associate it with this new “killware” category? “Killware,” after all, still has the “ware” suffix in it, meaning that it should have at least some relationship to a piece of software, or a program, or perhaps many lines of code.

The breach at the Oldsmar water plant, however, may have involved no malware at all. No spear-phishing attack against an executive’s personal device. No surreptitious implantation of spyware to collect admin credentials. No initial breach and lateral movement. Instead, there’s a frustratingly simpler theory: Reused passwords across the entire water treatment plant for a crucial, remote access tool.

Following the attack at the Oldsmar facility, the state of Massachusetts issued a cybersecurity advisory notice to public water suppliers, detailing a few basic cybersecurity flaws that may have played a role in the attack. As the state said in its advisory:

“The unidentified actors accessed the water treatment plant’s [supervisory control and data acquisition (SCADA)] controls via remote access software, TeamViewer, which was installed on one of several computers the water treatment plant personnel used to conduct system status checks and to respond to alarms or any other issues that arose during the water treatment process. All computers used by water plant personnel were connected to the SCADA system and used the 32-bit version of the Windows 7 operating system. Further, all computers shared the same password for remote access and appeared to be connected directly to the Internet without any type of firewall protection installed.”

Further, in testifying about the attack to the House Committee on Homeland Security, former Cybersecurity and Infrastructure Security Agency Director Chris Krebs said that the attack was “very likely” caused by “a disgruntled employee,” wrote Washington Post reporter Ellen Nakashima.

So, the attack may have come from a former employee, who may already have possessed the remote access credentials, which were already the same credentials for every user at the water treatment facility, which also lacked firewall protections.

What part of this attack chain, then, should be labeled “killware”?

Truthfully, none, and that’s because labeling anything as “killware” ignores the basic facts about cybersecurity defenses. Cybersecurity vendors do not categorize or identify attacks based on their final intentions. A reused password is a bad idea, but it isn’t a bad idea that can only be used to harm people. Missing firewall protections, similarly, are poor practice, but they aren’t poor practice that can only be used to threaten people’s lives.

In fact, even if cybersecurity vendors wanted to categorize attacks by intention, how could they?

Earlier this year, a bereaved mother filed a lawsuit against a hospital in Alabama that, she claims, failed to provide adequate care to her baby because the hospital was hamstrung by a ransomware attack. The hospital’s inability to properly care for her baby, the lawsuit said, eventually led to her child’s death. Nearly a year prior, a patient’s death during a ransomware attack on a German hospital brought similar allegations—though no lawsuits—but those allegations fell apart in the months following the attack, as the chief public prosecutor tasked with investigating the attack concluded that, even without the treatment delays caused by the ransomware attack, the patient likely would have died.

Neither of these situations involved hackers whose end goal was purely to harm or kill people. The intent, as is clear in almost every single ransomware attack, is to get paid. Ransomware attacks on hospitals, specifically, may use the threat of death as leverage for their end goal, but even the threat of death does not alter the end goal, which is to get paid potentially millions of dollars. If we even tried to use the “killware” term on these attacks, they wouldn’t fit, despite the end result.

Finally, labeling attacks as “killware” does a disservice to both cybersecurity vendors and the public because, if “killware” is a term that requires understanding an attacker’s intent, then “killware” must be applied after an attack has already happened. Good cybersecurity tools don’t just clean up an attack after it’s happened, they actually prevent attacks from happening in the first place. How then, possibly, could a cybersecurity provider prevent an attack that, by its definitional nature, cannot be determined until it’s already happened?

Remember the human

“Killware,” as a term, helps no one and it only increases panic. It conjures up images of hackers gone amok and dark-web-trained serial killers who work with nothing but a laptop—images that might actually be a better fit for over-dramatized procedural cop dramas on TV.

Importantly, “killware” fails to recognize that, already, attacks on computers, machines, devices, and networks have a dramatic impact on the people who use them. Ransomware attacks already cause tremendous emotional and mental harm to the people tasked with cleaning them up. Online scams already ruin people’s lives by emptying their bank accounts.

We do not need a new term that focuses even more on the attacker in cyberthreats. What we need is to remember that cyberattacks, already, are attacks against people, no matter their intent.


What is an .exe file? Is it the same as an executable?

You may often see .exe files but you may not know what they are. Is it the same as an executable file? The short answer is no. So what’s the difference?

What is an .exe file?

Exe in this context is a file extension denoting an executable file for Microsoft Windows. Windows file names have two parts: the file’s name, followed by a period and then the extension (suffix). The extension is usually a three- or four-letter abbreviation that signifies the file type.

I hear some advanced users moaning in the back of the class, because there are many exceptions. But as a general rule, everything behind the last period in the filename is the extension. For example, because Windows default settings don’t always show the extension of a file, some malware authors name their files really_trustworthy.doc.exe, hoping that the user’s Windows settings cause it to hide the .exe part and have the user believe this is a document they can safely open.

By using this trick in filenames like YourTickets.pdf.exe, malware like Cryptolocker was mailed to millions of potential victims. The icon was the same as that of legitimate PDF files, so it was hard for some recipients to spot the difference. Usually the emails pretended to be from a worldwide courier service, but they also masqueraded as coming from a travel agency.
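As an illustration (a rough heuristic, not an actual detection rule from any product), a few lines of Python can flag the double-extension trick, since only the last suffix is the real extension:

```python
import os

# Hypothetical heuristic: a document-like suffix immediately followed by
# an executable one is a classic social-engineering pattern.
EXECUTABLE = {".exe", ".scr", ".com", ".bat"}
DOCUMENT_LIKE = {".pdf", ".doc", ".docx", ".txt"}

def looks_deceptive(filename: str) -> bool:
    # splitext only strips the LAST suffix, mirroring Windows' behavior.
    stem, last = os.path.splitext(filename.lower())
    _, second_last = os.path.splitext(stem)
    return last in EXECUTABLE and second_last in DOCUMENT_LIKE

print(looks_deceptive("YourTickets.pdf.exe"))  # True
print(looks_deceptive("setup.exe"))            # False
print(looks_deceptive("notes.pdf"))            # False
```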

Wait, what? Is a .exe file a virus?

An .exe file can be a virus, but that is certainly not true for all of them. In fact, the majority are safe to use or even necessary for your Windows system to run. It all depends on what is in an .exe file. Basically .exe files are programs that have been translated into machine code (compiled). So, whether an .exe file is malicious or not depends on the code that went into it.

Most normal .exe files adhere to the Portable Executable (PE) file format. The name “Portable Executable” refers to the fact that the format is not architecture specific, meaning it can be used on both 32-bit and 64-bit versions of Windows operating systems. Under this standard format, the actual code can be found in the .text section(s) of an executable.
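Those structural markers can be checked programmatically. A hedged sketch, built on a tiny fake header in memory rather than a real binary: a PE file starts with an “MZ” DOS header whose field at offset 0x3C (e_lfanew) points to the “PE\0\0” signature.

```python
import struct

# Construct a minimal fake header for demonstration purposes only.
fake = bytearray(0x48)
fake[0:2] = b"MZ"                          # DOS magic
struct.pack_into("<I", fake, 0x3C, 0x40)   # e_lfanew -> PE header offset
fake[0x40:0x44] = b"PE\x00\x00"            # PE signature

def is_pe(data: bytes) -> bool:
    # Check the DOS magic, then follow e_lfanew to the PE signature.
    if data[:2] != b"MZ" or len(data) < 0x40:
        return False
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"

print(is_pe(bytes(fake)))   # True
print(is_pe(b"not a PE"))   # False
```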

How do I open an .exe file?

This is an ambiguous question that deserves two answers.

To use an .exe file you can usually just double click it. You may get a security prompt before it actually runs, but technically you will have initiated running the program inside the .exe file.

If you want to look at what is inside an .exe file, that is a much more complicated question. It depends on why you want to look inside. Examining files without running them is called static analysis, whereas dynamic analysis is done by executing the program you want to study. As mentioned before, .exe files have been compiled into machine code, so you need special programs to do static analysis. The most well-known program to do this is IDA Pro, which translates machine code back into assembly code. This makes an .exe more understandable, but it still takes a special skillset to make the step from reading assembly code to understanding what a program does.
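As an analogy only (Python bytecode and the standard-library dis module, not x86 machine code and IDA Pro), this sketch shows the core idea of static analysis: inspecting what compiled code does without ever executing it.

```python
import dis
import io

def suspicious():
    return "secret" * 2

# Disassemble the compiled function into a human-readable listing.
# Nothing in suspicious() actually runs.
listing = io.StringIO()
dis.dis(suspicious, file=listing)
print(listing.getvalue())
```

The listing even reveals that the compiler pre-computed the string constant, which is the kind of detail static analysis surfaces without running the code.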

Difference to an executable

The definition of an executable file is: “A computer file that contains an encoded sequence of instructions that the system can execute directly when the user clicks the file icon.” Executable files commonly have an .exe file extension, but there are hundreds of other executable file formats.

So, every true .exe file is an executable, but not every executable file has the .exe extension. We mentioned before that .exe files are commonly intended for use on systems running a Windows OS. That doesn’t mean you can’t open an .exe file on, say, your Android device, but you will need an emulator or something similar to make that happen. The same is true if you are wondering how to open an .exe file on a system running macOS.

Are .exe files safe to open?

It’s not safe to open just any .exe file you encounter. Just like with any other file, whether you can trust it depends on its source. If you receive an .exe file from an untrusted source, you should use your anti-malware scanner to scan the file and find out whether it is malicious or not. If you’re still in doubt, get a second opinion by uploading it to VirusTotal to check whether any of the participating vendors detects the file.
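One practical trick: you can often check a file against VirusTotal by its hash without uploading it at all. A sketch (a throwaway temp file stands in for the suspicious .exe) of computing the SHA-256 that most scanners index on:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    # Hash in chunks so large files don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; in practice, point this at the .exe you received.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"abc")
digest = sha256_of(tmp.name)
os.unlink(tmp.name)
print(digest)  # paste this hash into VirusTotal's search box
```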

Can an .exe file run itself?

Any executable file needs a trigger to run. A trigger can be a user double-clicking the file, but it can also come from the Windows registry, for example when Windows starts up. So the closest an .exe file can come to running itself is by creating a copy in a certain location and then pointing a startup registry key to that location, or by dropping the copy or a shortcut in the Startup folder, since all the files in that folder get run when Windows starts.

But there are other triggers. For example, Windows has Autoplay and Autorun options that execute when, for example, a USB device is connected. Malware can also hide in a device’s firmware and execute once the device is connected. This is one reason not to trust USB sticks you find in a parking lot or that get handed out as swag. You do not want to be responsible for the next cyber incident in your organization, right?

Other executable files

All the potentially bad stuff I have written about .exe files is just as true for almost all other executable files, so it’s not true that .exe files are bad by nature or that they should be trusted the least. The same dangers can be associated with other executable files. Unfortunately, other operating systems have their own viruses which use their own executable files, but that’s for another day.

Stay safe, everyone!


Inside Apple: How Apple’s attitude impacts security

Last week saw the fourth edition of the Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Because of the control it exerts over its ecosystem, understanding Apple’s attitude to security—and its willingness to act as a security “dance partner”—is crucial to securing Apple systems, and developing Apple security software.

I was at OBTS, and this is what I learned about Apple’s current attitude to privacy, security, and communication.

Apple’s not great at working with security researchers

It’s no great surprise to anyone that Apple has a rocky relationship with many security researchers. Years ago, well-known researcher and co-author of the book “The Mac Hacker’s Handbook”, Charlie Miller, figured out how to get a “malicious” proof-of-concept app into the App Store, and reported this to Apple after having achieved it. His reward? A lifetime ban from Apple’s developer program.

This says a lot about Apple’s relationship with third-party security researchers. Unfortunately, things haven’t changed much over the years, and this is a constant cause of strain in the relationship between Apple and the people trying to tell it about security issues. During the conference, Apple got booed several times by the audience following reports from OBTS speakers of mismanaged bug reports and patches.

What is it that Apple has been accused of doing? There have been multiple offenses, unfortunately. First, a number of security researchers have reported getting significantly lower bug bounties from Apple’s bug bounty program than they should have earned. For example, Cedric Owens (@cedowens) discovered a bug in macOS that would allow an attacker to access sensitive information. Apple’s bug bounty program states that such bugs are worth up to $100,000. They paid Cedric $5,000, quibbling over the definition of “sensitive data.” (For the record: Cedric’s bug absolutely gave access to what any security researcher or IT admin would consider sensitive data… more on this later.)

Other researchers have reported similar issues, with significantly reduced payments for bugs that should have qualified for more. Further, there is often a significant wait for the bounties to be paid after the bugs have been fixed—sometimes six months or more. Apple also has a tendency to “go silent,” not responding to researchers appropriately during the process of handling bug reports, and has repeatedly failed to properly credit researchers, or even mention important bugs, in its release notes.

All this leaves a sour taste in many researchers’ mouths, and some have decided to either publicly release their vulnerabilities—as in the case of David Tokarev, who published three vulnerabilities after Apple failed to act on them for many months—or to sell those vulnerabilities on the “gray market,” where they can earn more money.

Disclosure of three 0-day iOS vulnerabilities and critique of Apple Security Bounty program
Screenshot of David Tokarev’s blog, disclosing three 0-day vulnerabilities

Keep in mind here that Apple is one of the richest companies in the world. Paying out the highest prices for security bugs would be pennies compared to Apple’s yearly profits.

A patching myth busted

It has long been a rule of thumb that Apple supports the current system, plus the previous two, with security-related patches. Currently, that would mean macOS 11 (Big Sur), plus macOS 10.15 (Catalina) and macOS 10.14 (Mojave).

However, this is not something Apple has ever stated. I honestly couldn’t tell you where this idea came from, but I’ve heard it echoed around the Mac community for nearly two decades. Although researchers and some IT admins have questioned for years whether this “conventional wisdom” is actually true, many believe it. Josh Long (@theJoshMeister) did a lot of research into this, and presented his findings at the conference.

There have been many bugs in the last year that were fixed for only some of the “current three” systems. This was known to a degree, but Josh’s data was eye-opening as to the extent to which it was happening. Folks who were aware of some of these discrepancies theorized that some of these bugs may not have affected all three systems, and that may explain why patches were never released for them.

However, Josh was able to track down security researchers who had found these bugs, and confirmed that, in at least one case, Mojave was affected by a bug that had been patched in Catalina and Big Sur only. Thus, we know now that this rule of thumb is false. This confirmed many people’s suspicions, but there are many others who have continued to believe in the myth. It’s echoing around Apple’s own forums, among other places.

The fact that this speculation persisted for years, and that research was even necessary to prove it false, is a major failing on the part of Apple. Microsoft tells its users whether a system is still supported or not. Why can’t Apple do the same? Staying silent, and allowing people to believe the myth of the “three supported systems,” means that some machines are left vulnerable to attack.

At this point, you should assume that only the most current system—Big Sur at the moment, but soon to be Monterey—is fully patched, and that there may be known vulnerabilities left unpatched in all others. This means you should feel a bigger sense of urgency about upgrading when a new system like Monterey comes out, rather than waiting for months to upgrade.

Apple loves privacy, but you can still be tracked

Apple is well-known for its strong stance on privacy. (I say that as if Apple isn’t well-known otherwise, and you might say, “What’s the name of that company that really likes privacy?”) However, we heard plenty of talk about data access and tracking despite this. (Or maybe because of Apple’s views on privacy, it’s more interesting when we learn how to violate it?)

Eva Galperin (@evacide) talked about how stalkers can track you on iOS, despite Apple’s protections. From a technical perspective, spyware—defined as software running on the device that surveils and tracks you—is not much of a thing, because of Apple’s restrictions on what apps can do, plus the fact that you can’t hide an app on iOS.

However, Eva showed how spyware companies are nonetheless capable of enabling you to creep on your ex. Many of these companies provide web portals where you enter your stalking victim’s Apple ID and password, which enables tracking via iCloud’s features. iCloud email can be read, as well as notes, reminders, files on iCloud Drive, and more. Find My can provide the victim’s location. Photos synced up to iCloud can be viewed. And so on.

You might say, “But wait! This requires me to know my victim’s Apple ID password, and have access to their two-factor authentication! Therefore, this is a non-issue.”

However, keep in mind that in many domestic abuse situations, the attacker has exactly this kind of information. Further, Apple ID credentials can easily be found in data breaches, for potential victims who have used the same password for Apple ID that they’ve used elsewhere, and there are techniques attackers can use to capture two-factor authentication codes.

Plus, let’s all remember the situation a few years back where someone was able to trick Apple support into helping them gain access to celebrity accounts, in order to steal their nude photos from iCloud.

On a different topic, Sarah Edwards (@iamevltwin) talked about the Apple Wallet. As a forensics expert, Sarah has a deep understanding of data and how to access it, and demonstrated the kind of data that could be obtained with access to iPhone backups. If an attacker could gain access to those backups, there’s a wealth of information about your daily activities, places that you frequent, and many other things to be harvested.

Apple has gone bananas… and who is Keith?

The most amusing part of the conference came during Sarah Edwards’ talk, when she discussed the data found in a particular database for Apple Wallet. This database contained hundreds of tables, and most of them were named after fruit. Yes, you heard me correctly—bananas, oranges, lemons, …durians! These are all the names of tables in a database relating to your wallet.

At first glance, this is quite puzzling. But it does make a certain amount of sense. If you’re trying to extract some data from this database, you’re going to have to put in a lot of work to figure out how to find it. The table names are not going to help you at all. That’s a pretty good thing, although I don’t envy the developers who have to keep all those tables straight. (“Where did we put the data on library cards again? Oh, yeah, in ‘kiwis!’”)

Although many of those tables are still a mystery, Sarah had been able to determine the purpose of some of them, through experimentation and observation. Still, many tables contained only things like identification numbers and timestamps, which by themselves are meaningless.

(As an aside, if the “durians” table doesn’t contain information relating to pay toilet transactions, I’ll be extremely disappointed!)

All privacy-related discussions aside, these table names remind me of Apple’s fun and playful side, which we so rarely get to see these days. Everyone knows Apple’s secretive facade, and security researchers often experience Apple’s sharp edges.

However, long-time Apple users know and love the “fun Apple.” This is the Apple that inscribed the signatures of all the engineers on the inside of the early one-piece Mac cases, where only a few would ever see them. Or the Apple that included a calendar file containing a history of Tolkien’s Middle Earth hidden in every copy of macOS. Or the Apple that used to Rickroll you on their Apple Watch support page!


Especially amusing was the discovery that, buried in the midst of all the fruit, there was a database simply named “keith.” Who is this Keith, and why is he in the wallet? Inquiring minds want to know!

For all of Apple’s flaws that we love to complain about, the discovery of this database brought back memories of the Apple that I love, and reminded me that it’s not just a faceless corporation, but is also a company full of people who also know and love the same Apple that I do.


Adblocker promises to block ads, injects them instead

Researchers at Imperva uncovered a new ad injection campaign based on an adblocker named AllBlock. At the time of writing, the AllBlock extension was available for Chrome and Opera in their respective web stores.

While disguising your adware as an adblocker may seem counterintuitive, it is actually a smart thing to do. But let’s have a look at what they did and how, first.

AllBlock

As we mentioned, AllBlock is advertised as an adblocker on its site. It promises to block advertisements on YouTube and Facebook, among others.

AllBlock website

When you’re installing the Chrome extension, the permissions it asks for make sense for an adblocker.

extension permissions

Even though that may seem like a lot to allow, and it is almost carte blanche, any adblocker that you expect to work effectively will need a full set of permissions to at least “read and change all your data on all websites.”

What Imperva found is that the extension replaces all the URLs on the site a user is visiting with URLs that lead to an affiliate. This ad injection technique means that when the user clicks on any of the modified links on the webpage, they will be redirected to an affiliate link. Via this affiliate fraud, the attacker earns money when specific actions like registration or sale of the product take place.
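To make the mechanism concrete, here is a hedged Python sketch of link rewriting (the aff_id parameter is hypothetical, and the real AllBlock code runs as JavaScript inside the extension, not server-side Python):

```python
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse

def inject_affiliate(url: str, tag: str) -> str:
    # Rewrite a link so any resulting purchase credits the attacker.
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["aff_id"] = [tag]  # attacker's affiliate code (made-up name)
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(inject_affiliate("https://shop.example/item?id=42", "attacker123"))
# https://shop.example/item?id=42&aff_id=attacker123
```

The user still lands on a working product page, which is why this kind of fraud can go unnoticed for a long time.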

Ad injection

Ad injection is the name for a set of techniques by which ads are inserted in webpages without getting the permission of site owners or paying them. Some of the most commonly seen tactics are:

  • Replacing existing ads with ads provided by the attacker
  • Adding ads to sites that normally have none
  • Adding or changing affiliate codes so the attacker gets paid instead of the affiliate that had permission to advertise on a site

To pull this off, malicious browser extensions, malware, and stored cross-site scripting (XSS) are the most commonly found techniques.

In this case it was a malicious extension that used some interesting methods.

To make the extension look legitimate, the developers actually implemented ad blocking functionality. Further, the code was not obfuscated and nothing immediately screams malware.

All the URLs that are present in a visited website are sent to a remote server. This server replies with a set of URLs to replace them with. The reading and replacing of the URLs is done by the extension which was given permissions to do so.

To avoid detection, the threat actor has taken a few more measures besides looking harmless. The malicious JavaScript file detects debugging, it clears the debug console every 100 ms, and major search engines (with a special focus on Russian engines) are excluded.

A part of the code in the bg.js script that is part of the extension makes an HTTP request to allblock.net/api/stat/?id=nfofcmcpljmjdningbllljenopcmdhjf and receives a JSON response with two base64 encoded properties “data” and “urls”. The “data” part is the code that gets injected on every site the affected browser opens, and the “urls” part looks like this:

{"youtubeInput":["*://*.youtube.com/get_video_info*adunit*","*://*.g.doubleclick.net/pagead*","*://*.youtube.com/pagead*","*://*.googlesyndication.com/pagead*","*://*.google.com/pagead*","*://*.youtube.com/youtube*ad_break*"],"vkInput":["https://vk.com/al_video.php?act=ad_event*","https://vk.com/al_video.php?act=ads_stat*","https://vk.com/ads_rotate*","https://ad.mail.ru/*","https://ads.adfox.ru/*"]}
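Decoding such a response is straightforward; a sketch (the sample payload is made up, since we are not fetching the live endpoint) of unpacking the two base64 properties:

```python
import base64
import json

# Made-up stand-in for the JSON the extension receives from allblock.net.
sample = {
    "data": base64.b64encode(b"console.log('injected')").decode(),
    "urls": base64.b64encode(json.dumps({"youtubeInput": []}).encode()).decode(),
}

# "data" decodes to the JavaScript injected into every tab;
# "urls" decodes to the URL filter lists shown above.
injected_js = base64.b64decode(sample["data"]).decode()
url_filters = json.loads(base64.b64decode(sample["urls"]))

print(injected_js)   # console.log('injected')
print(url_filters)   # {'youtubeInput': []}
```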

Conclusion

The extension the Imperva team found actually blocks ads, but it also runs a background script that injects a snippet of JavaScript code into every new tab the user opens in the affected browser. The end goal is to make money by replacing legitimate URLs on the website with URLs of the attackers’ own. These URLs include affiliate codes, so the attackers get paid if you click on one of those links, and they benefit from any sales that come out of those clicks.

Ad blockers that are able to block advertisements on popular social media like YouTube and Facebook may seem like the holy grail to some users. To those who are interested in ad blocking and haven’t found the right program yet, please read “How to block ads like a pro.”

And as we have mentioned before, it makes sense to give ad blockers the permissions they need to do their job. So we feel the need to emphasize that you should only grant those permissions to extensions you actually trust, not just because you think “it” needs them.

Ad blocker campaigns

The Imperva team writes on its blog that it believes a larger campaign is taking place, one that may use different delivery methods and more extensions.

In our own research at Malwarebytes, we have found a series of ad blockers that were pushed out through websites showing fake alerts like this one.

FakeFlash
If you keep stumbling over alerts like these, you might even welcome the offer of an ad blocker when you click on one of them, right?

We could not find anything wrong with these extensions, and we also found that they were all using the publicly available Adguard blocklist. So we didn’t really follow up on them because, like the one described above, they looked legitimate. The only thing that made them look suspicious was that they were promoted through these “fake alert” sites.

For now it is hard to tell whether we have been tracking the same or similar campaigns. Since I haven’t seen the bg.js script before, they may be completely different, but I will try to contact the Imperva team and compare notes. If anything interesting comes out of that, we will let you know.

Stay safe, everyone!

The post Adblocker promises to block ads, injects them instead appeared first on Malwarebytes Labs.

Patch now! Microsoft fixes 71 Windows vulnerabilities in October Patch Tuesday

Yesterday we told you about Apple’s latest patches. Today we turn to Microsoft and its Patch Tuesday.

Microsoft tends to provide a lot of information around its patches, so there’s a lot to digest and piece together to give you an overview of the most important ones. In total, Microsoft has fixed 71 Windows vulnerabilities, or 81 if you include those for Microsoft Edge.

One of the vulnerabilities immediately jumps out, since it was used in the wild as part of the MysterySnail attacks, attributed by the researchers who discovered it to a Chinese-speaking APT group called IronHusky.

MysterySnail

Earlier this month, researchers discovered that a zero-day exploit was used in widespread espionage campaigns against IT companies, military contractors, and diplomatic entities. The payload of these MysterySnail attacks is a Remote Access Trojan (RAT). The actively exploited vulnerability allows malware or an attacker to gain elevated privileges on a Windows device. So far, the MysterySnail RAT has only been spotted on Windows Servers, but the vulnerability can also be used against non-server Windows Operating Systems.

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-40449, a Win32k Elevation of Privilege (EoP) vulnerability, which means the vulnerability allows a user to raise their permissions.

PrintNightmare

I scared you by mentioning PrintNightmare, right? Well, that scare may not be entirely unwarranted. The same researchers who discovered the PrintNightmare vulnerability have found yet another vulnerability in Microsoft’s Windows Print Spooler. This one is listed as CVE-2021-36970, a Windows Print Spooler spoofing vulnerability. Exploitation is known to be easy and the attack may be initiated remotely. No authentication is needed for successful exploitation, but it does require some action by the intended target. We may be hearing more about this one.

Exchange again

An Exchange bug that gets a CVSS score of 9.0 out of 10 is enough to make my hair stand on end. Listed as CVE-2021-26427, this one is a Microsoft Exchange Server Remote Code Execution (RCE) vulnerability. The exploitation appears to be easy and the attack can be initiated remotely. A single authentication is required for exploitation, so the attacker will need to have some kind of access to exploit this one, which may be why Microsoft listed it as “exploitation less likely.” Exchange Servers are an attractive target and so we have seen a lot of attacks. One worrying flaw reveals users’ passwords and might provide attackers with the credentials they need to use this vulnerability.

Critical Microsoft Word vulnerability

One of the three vulnerabilities classified as critical is an RCE vulnerability in Word, listed as CVE-2021-40486. The vulnerability could allow a remote attacker to trick a victim into opening a specially crafted file, which executes arbitrary code on their system.

The other two critical vulnerabilities are RCE flaws in Windows Hyper-V, the virtualization component built into Windows. These vulnerabilities are listed as CVE-2021-38672 and CVE-2021-40461.

Windows DNS Server RCE

The last one is only of interest if you are running a server that is configured to act as a DNS server. Listed as CVE-2021-40469, this is a Windows DNS Server Remote Code Execution vulnerability. Exploitation is known to be easy and the attack may be launched remotely, but successful exploitation requires an elevated level of authentication. The vulnerability was disclosed in the form of a Proof-of-Concept (PoC). While it may not be up to you to maintain or patch a DNS server, it’s good to know that this vulnerability exists, in case we see weird connection issues as a result of a DNS hijack or denial of service.

While many details are still unknown, we have tried to list the vulnerabilities we expect to surface as real-world problems if they are not patched as soon as possible.

Stay safe, everyone!

The post Patch now! Microsoft fixes 71 Windows vulnerabilities in October Patch Tuesday appeared first on Malwarebytes Labs.

“Free Steam game” scams on TikTok are Among Us

TikTok has long since evolved beyond being thought of as “just” dance clips, also becoming a home for educational and informative content presented in a fun and casual way. There are accounts themed around pretty much any interest you can think of, and one of the biggest is gaming.

It’s not all entirely innocent, however. Sometimes we observe new twists on old scams, or slick videos designed to obscure some sleight of hand. Shall we take a look?

Free Steam game accounts: be careful what you wish for

Games are expensive. Even without the costs of downloadable content (DLC), you also have things like season passes, in-game currency frequently purchased with real money, lootboxes, and more. FOMO (fear of missing out) is a big driver for timed exclusives and must-have items, and all of these constant pressures drive gamers to want a bit of a discount. Where it tends to go wrong is with the promise of everything being free. If it’s too good to be true, and so on.

What we sometimes see on TikTok is gaming-themed accounts making many of the same promises you see on other platforms. Free games, free items, free stuff. Everything is definitely free with no strings attached. Would RandomAccountGuy3856 lie to you?

The answer is, of course, “Yes, RandomAccountGuy3856 absolutely would lie to you”.

Taking a walk through free game town

This is a typical free game account which you’ll find on TikTok:

tiktok0

As you can see, it’s pretty minimal and is simply a stack of the same video uploaded repeatedly. The site claims to offer free games and keys.

tiktok00

The site itself appears to have recently been taken offline. Thanks to the magic of cached content, we can still piece things together and figure out the process.

The front page splash at the start of last month looked as follows:

tiktok1

They’re claiming to offer up free versions of the incredibly popular Among Us game. However, they also claim to have special hacked versions up for grabs. These versions let the player cheat in various ways. There’s also the reassurance you won’t get banned, which is used as further encouragement to download the altered editions.

This process involves selecting which edition you want, and then hitting the download button. They claim to offer Android, PC, and iOS flavours.

No matter what button you hit, you see the below pop-up. You may well be familiar with these from years of surfing:

tiktok3

The text reads as follows:

Before downloading, we need to make sure you are a real user, and not an automated bot. This helps us keep making these kind of hacks and keep them on Google for a long time

Hitting the “verify now” button opens a new tab with a new destination. Unfortunately, it’s not a very good one. As our detection page states, we have that particular URL blocked because it is associated with malvertising.

Running down the timer on TikTok fakeouts

These are old tricks, essentially given a fresh lick of paint and an enticing video to go with them. There’s just something a bit more personal about what looks like real people telling you genuine-sounding things in a short video clip. It all feels very informal and casual, and that’s exactly the kind of ambience a scammer would look to hit you with alongside their dubious websites and offers.

Even when accounts like the above aren’t purged by TikTok, the sites they link to are often here today, gone tomorrow. Everything is purely geared towards driving as much ad/malvertising traffic as possible.

As tempting as the promise of free gaming is, please be on your guard. There are risky games, and then there are risky games.

The post “Free Steam game” scams on TikTok are Among Us appeared first on Malwarebytes Labs.

The joy of phishing your employees

Many companies set up phishing test programs for their employees, often as part of a compliance requirement involving ongoing employee education on security topics. The aim of these programs is to train employees on how to spot a malicious link, not click it, and forward it on to the appropriate responder, but most of these programs do not meaningfully achieve this. Let’s take a look at some common pitfalls and how to step around them.

You’re annoying your employees

Click-through rates on a real phish average between 10 and 33 percent among untrained users, depending on which security vendor you ask. But test phishes are sent to everyone, indiscriminately, taking time and energy away from those who are already more or less doing the right thing.

And while an organizational baseline is useful, and compliance can mandate a certain degree of repetition, repeatedly testing all employees without any sort of targeting can create a certain degree of security blindness on their part. There’s also often a lack of real-world tactics on the part of the tester due to a need to hit large quantities of people at the same time.

A better solution is to conduct infrequent, all-hands tests as a baseline, then take a look at your failures. Do you have clusters, and where are they? What job function is most common in the failures, and how does that map to overall security risk? A repeated failure in Marketing has a different impact than one in Finance.
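
As a sketch of this kind of triage, the baseline results can be reduced to per-department failure rates; the names, departments, and outcomes below are invented for illustration.

```python
from collections import Counter

# Hypothetical baseline test results: (employee, department, failed?).
# All entries are invented for illustration.
results = [
    ("alice", "Finance",   True),
    ("bob",   "Finance",   True),
    ("carol", "Marketing", True),
    ("dave",  "Marketing", False),
    ("erin",  "IT",        False),
    ("frank", "Finance",   False),
]

def failure_rates(results):
    """Return the per-department failure rate from baseline test results."""
    totals, failures = Counter(), Counter()
    for _, dept, failed in results:
        totals[dept] += 1
        if failed:
            failures[dept] += 1
    return {dept: failures[dept] / totals[dept] for dept in totals}

# Departments with the highest rates are the candidates for more
# frequent, more realistic targeted tests.
for dept, rate in sorted(failure_rates(results).items(), key=lambda kv: -kv[1]):
    print(f"{dept}: {rate:.0%}")
```

Mapping those rates against each department’s security impact then gives the prioritized list of problem areas to target.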

With a good grasp of where your risk is, you can start focusing on problem areas of the organization with challenging, more frequent tests that use real-world tactics. While an all-hands phish might be an untargeted credential harvester, a high-risk phish test might look more like a malicious invoice sent by a fake vendor to a select group in the Finance department.

You’re not including execs

Executives are frequently not included in enterprise security testing, most likely due to the difficulty of getting buy-in on a topic that some C-suites view as esoteric. They are also the population most likely to engage in off-channel communications like SMS or bring your own device (BYOD) mobile mail using unsupported clients. However, executives who are successfully phished can cause more significant dollar losses to the organization than anyone else. While a single compromised credential pair at the ground level is typically a recoverable incident, business email compromise (BEC) aimed at an executive has caused up to $121 million in single-incident losses.

Successful inclusion of executives in a phishing training program involves spearphishing rather than a canned phish. The key indicator of a well-formed phish is that it mirrors the tactics found in the wild, so your high-value targets require a high-effort pitch. Make sure that your phish test vendor includes a markup editor for constructing custom phishes from scratch, so that you can alternate between a canned mass mailer and a laser-focused spearphish as needed.

You’re not changing your approach

Just as security staff can get alert fatigue and start missing important alarms from their tooling, non-technical staff can get test fatigue and start associating threats with one particular phish format that you use too much. Best practice should include frequent rotation of pitch type and threat type; malicious link, malicious attachment, and pure scam threats present differently and have their own threat ecosystems that warrant their own test formats.

If you’ve been using your test failures to highlight problem areas, that’s a great place to start varying how you conduct your tests. A failure cluster in a Finance department would respond fairly well to attachment-based phish tests, with pitch text focused on payment-themed keywords. Given that the impact of a breach in that department would also be high, more frequent and more difficult tests give better outcomes over the long term. The key point is that phish tests are sensors for organizational risk and should be tuned for accuracy frequently.
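
One simple way to guarantee that rotation is to cycle through every combination of pitch and threat type so no single format dominates; the pitch labels below are invented for illustration, while the threat types are the three named above.

```python
import itertools

# Invented pitch labels for illustration; a real program would draw
# these from tactics currently seen in the wild.
pitch_types = ["invoice", "password reset", "shared document", "delivery notice"]
threat_types = ["malicious link", "malicious attachment", "pure scam"]

# Cycle through every pitch/threat combination so that staff never get
# the chance to associate threats with one particular phish format.
schedule = itertools.cycle(itertools.product(pitch_types, threat_types))

def next_test():
    """Return the (pitch, threat) pair to use for the next test round."""
    return next(schedule)

for _ in range(3):
    pitch, threat = next_test()
    print(f"Next test: a {pitch!r} pitch delivering a {threat}")
```

In practice the rotation would also be weighted toward the formats a given department fails most often, rather than a fixed round-robin.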

You’re not using the data

Okay, so you’ve checked that compliance tick box, created a test schedule that ratchets in on your problem areas over time, and you’re running custom spearphishes against your execs. You can call it a day, right?

Hitting these marks can get you a large security advantage over other companies, but to really realize the full advantages of a security training program, you need to start sifting through the data that the program generates.

A great place to start is looking at where your failures sit. Are they evenly distributed, or do they cluster in particular departments? Are they individual contributors, or management? More importantly, which types of phishes do they click on most?

All of these questions can drive identification of high risk areas of the company, as well as prioritize which security controls should be implemented first. Rather than a top-down command approach, looking at the impact of a simulated attack can provide a clear view of where to start with a broader security improvement program.

If it’s not fun, you’re doing it wrong

Last and most importantly, this should be fun. The more creativity and variety injected into the process by security staff, the more effective the user awareness will be. And that doesn’t just extend to phish variety—user reports can and should be acknowledged at the organizational level.

Users can submit phish pitches or preferred organizational targets. Some phish test vendors even include stats broken out by department or manager that lend themselves very well to friendly competition. Engaging employees beyond “Don’t do that” not only creates better security outcomes, but it tends to create better communication outcomes throughout a company.

Most corporate phishing programs do not meet their stated goals. The reasons for this include overweighting compliance goals to the exclusion of others, complacency in test format, vendor choices that make it tough to analyze data from the program, and failure to give dedicated resources to testing. These problems are largely avoidable if an organization shifts the focus of its testing programs from checkbox compliance to risk analysis.

Overall, folding phish testing into a broader look at cyber risk can provide hard data that can drive security controls and increase organizational buy-in.

The post The joy of phishing your employees appeared first on Malwarebytes Labs.

ExpressVPN made a choice, and so did I: Lock and Code S02E19

On September 14, the US Department of Justice announced that it had resolved an earlier investigation into an international cyber hacking campaign coming from the United Arab Emirates that has reportedly impacted hundreds of journalists, activists, and human rights defenders in Yemen, Iran, Turkey, and Qatar. The campaign, called Project Raven, has been in clandestine operation for years, and it has relied increasingly on a computer system called “Karma.”

But in a bizarre twist, this tale of surveillance abroad tapered inwards into a tale of privacy at home, as one of the three men named by the Department of Justice for violating several US laws—and helping build Karma itself—is Daniel Gericke, the chief information officer at ExpressVPN.

Today, on Lock and Code, host David Ruiz explores how these developments affected his personal choice of VPN service. For years, Ruiz had been a paying customer of ExpressVPN, but a deep interest in surveillance and a background in anti-surveillance advocacy forced him to reconsider.

Tune in to hear the depth of the unveiled surveillance campaign, who it affected, for how long, and what role, specifically, Gericke had in it, on this week’s Lock and Code podcast, by Malwarebytes Labs.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

The post ExpressVPN made a choice, and so did I: Lock and Code S02E19 appeared first on Malwarebytes Labs.