IT NEWS

Trojan Source: Hiding malicious code in plain sight

Researchers at the University of Cambridge, UK, have released details of a cunning and insidious new class of software vulnerability that allows attackers to hide code in plain sight, within the source code of computer programs. The techniques demonstrated by the researchers could be used to poison open source software, and the vast software supply chains they feed, by adding flaws, vulnerabilities, or malicious code that are invisible to human code reviewers.

The new class of vulnerabilities, dubbed “Trojan Source”, affects a who’s who of the world’s most widely-used programming languages, including the five most popular (Python, Java, JavaScript, C#, and C), putting enormous numbers of computer programs at risk.

How it works

Most computer code starts life as a set of instructions written in a so-called “high level” language, like Python or Java, which is designed to be easy for humans to read, write and understand. These high level language instructions are then processed by a computer program—an interpreter or a compiler—into a low-level language, such as bytecode or machine code.

A lot of source code looks very much like English, and is written using the same limited set of letters, numbers, and punctuation I used to create this article. However, it can potentially include any of the roughly 150,000 characters included in the Unicode standard, a sort of grand-unified human alphabet. Unicode provides a unique number (called a code point) for almost all the characters we use to communicate—from Kanji and currency symbols to Roman numerals and emojis.

To a computer, every Unicode character is just a different number, but humans are less discerning. Some Unicode characters are invisible to humans, and many of them look very similar to one another.
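A quick illustration of the difference, in Python (my example, not the paper’s): the Latin a and the Cyrillic а may look identical, but to the machine they are simply two different numbers.

print(ord("a"), ord("а"))   # 97 1072: Latin a (U+0061) vs Cyrillic а (U+0430)
print("a" == "а")           # False: different code points, however alike they look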

Trojan Source attacks exploit the fact that humans and compilers may interpret the same source code in two different ways. By playing on those differences, it’s possible for attackers to create malicious source code that appears harmless to human eyes.

Trojan Source attacks come in two flavours:

Homoglyph attacks

Homoglyphs are sets of characters that look identical or very similar. They are already widely used by scammers to create lookalike web addresses and app names, such as FаⅽeЬoοk.com and WhatѕAрp. (I’ve used examples that deliberately look odd, so you can see what I mean, but attackers aren’t so charitable.)

The Trojan Source paper shows that the same trick can be used to mislead humans when they read source code, by using lookalike class names, function names, and variables. The researchers use the example of a malicious edit to an existing codebase that already contains a function called hashPassword, which might be called during a login process. The paper imagines an attacker inserting a similar-looking function called hаshPassword (the a has been replaced with a lookalike character), which calls the original function but also leaks the user’s password.

Would a busy code reviewer spot the imposter? I suspect not. The authors suspect not too, and say they were able “…to successfully implement homoglyph attack proofs-of-concept in every language discussed in this paper; that is, C, C++, C#, JavaScript, Java, Rust, Go, and Python”.
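Here’s roughly what that looks like in Python, one of the languages the researchers list. This is a minimal sketch using hypothetical names rather than the paper’s actual proof-of-concept; the а in the second function name is the Cyrillic letter U+0430, not the Latin a.

def hashPassword(password):
    # The legitimate function: returns a hash of the password.
    return hash(password)

def hаshPassword(password):   # note: the "а" here is Cyrillic U+0430
    # The lookalike imposter: produces the same result as the real function,
    # but also leaks the plaintext password (print() stands in for exfiltration).
    print(f"leaked: {password}")
    return hashPassword(password)

# Both names are legal, distinct identifiers to the interpreter,
# yet indistinguishable to a human reader.
print(hashPassword("hunter2") == hаshPassword("hunter2"))   # True

Any call that is meant to go to the real function can be quietly routed through the imposter instead, and nothing in a visual review gives the game away.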

Bidi attacks

Alongside the characters you can see, Unicode also contains a number of invisible control characters that indicate to computer programs how things should be interpreted or displayed. The most obvious and often used are probably the carriage return and line feed characters that mark the end of a line of text you write. Chances are, you use them every day without realising.

Among its invisible control characters, Unicode also includes characters for setting the direction of text, so that it can handle languages that are read left-to-right like English, and languages that are read right-to-left, like Hebrew, and mixtures of the two. The control characters allow a phrase like left-to-right to be reversed, so it reads thgir-ot-tfel, or for it to be rearranged so that chunks of left-to-right text are arranged in a right-to-left order (or vice versa), so it reads right-to-left, for example.
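These characters are easy to miss precisely because they produce no visible glyph of their own. A small Python illustration (mine, not the paper’s) of one hiding inside a string:

visible  = "right-to-left"
tampered = "right-to-\u200Eleft"     # U+200E LEFT-TO-RIGHT MARK, an invisible control

print(len(visible), len(tampered))   # 13 14: one extra, invisible character
print(visible == tampered)           # False, even though both can render identically
print(repr(tampered))                # repr() exposes the hidden '\u200e'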

Since these control characters are about arranging text for human consumption, the text editors used for reading source code tend to respect them, but compilers and interpreters don’t. And while compilers and interpreters tend not to allow control characters in the source code itself, they often do allow them in the comments that document the code, and in text strings processed by the code.

That difference between the way that humans and compilers “see” the source can be used to hide malicious code.

The researchers show that an attacker could use bidirectional control characters in comments to completely change the meaning of a piece of code, which they illustrate with a simple example.

In our fictional scenario, attackers have disabled a line of code that should only run if the user is an admin, by putting it in comments. The compiler sees this:

/* if (isAdmin) { begin admins only */

The attacker knows that a human code reviewer should identify this as a security problem, so they add some bidirectional control characters to rearrange the code for human eyes, making it look as if they have simply added a comment before the admin check, and that the check still works. The code reviewer sees this:

/* begin admins only */ if (isAdmin) {

This is a simple example to illustrate the point, but it’s not difficult to imagine that an adversary with time and money could come up with attacks that are far more subtle and much harder to spot.

Of course the attacks only work if attackers have access to the source code, but that doesn’t present the barrier you might expect. Modern software projects are often complex jigsaw puzzles composed of other, smaller projects in absurdly convoluted supply chains (although “supply webs” might be a more accurate description). Those supply webs invariably include some open source code, somewhere, and open source projects often allow anyone to make a contribution to their code, provided it gets past the watchful eye of a human reviewer.

Tinfoil hat time?

With so much software potentially at risk from Trojan Source, you might be tempted to throw your computers in the river, hide in the cellar, and put on your tinfoil headgear, but don’t.

For a level-headed perspective, I spoke to Malwarebytes’ security researcher and Director of Mac and Mobile, Thomas Reed. Reed’s perspective is, yes, it’s a supply-chain threat, but the problem isn’t this specific vulnerability so much as the fragility of the supply chain itself.

“The biggest danger from my perspective is usage in open-source projects that are used by commercial software, which I imagine isn’t all that unique a perspective. The danger is there, though, with or without Trojan Source, because a lot of open source projects aren’t getting any kind of in depth source reviews.”

This isn’t the first research to find a vulnerability that could affect basically everything. In fact, they’re surprisingly common. You may remember KRACK, the 2017 research that revealed that our Wi-Fi security was broken, everywhere. Or the Spectre and Meltdown vulnerabilities from a year later that affected generations of hard to patch, and hard to replace, processor chips. And what did we do? We patched and moved on, just like we always do.

The good news is that the work of fixing the problem has already started, with an extensive process of vulnerability disclosure that began in July, when the researchers contacted nineteen separate organizations about their findings. They have since contacted more organizations, including the CERT Coordination Center, and have been issued a pair of CVEs: CVE-2021-42574 and CVE-2021-42694.

There are plenty of choke points where Trojan Source attacks might be picked up, such as public code repositories like GitHub, code editors and Integrated Development Environments, static analysis tools, and the compilers themselves. A lot of code will have to pass through several of these choke points before going live, so we will soon have plenty of defence in depth.

And spotting or stopping the attacks should be fairly easy, now we know to look for them. The researchers suggest several methods, starting with the simplest: banning the use of bidirectional control characters in comments entirely. Where it’s humans rather than computers that are reading the code, text editors could add a visual marker to control characters, just as word processors can be made to show paragraph marks and other invisible characters.
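A ban like that is easy to enforce automatically. Below is a rough sketch of such a check in Python; the choice of characters to flag and the output format are illustrative, not a definitive implementation.

import sys

BIDI_CONTROLS = {
    "\u202A", "\u202B", "\u202C", "\u202D", "\u202E",   # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",             # LRI, RLI, FSI, PDI
    "\u200E", "\u200F",                                 # LRM, RLM
}

def scan(path):
    # Report the position of every bidirectional control character in a file.
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                if ch in BIDI_CONTROLS:
                    print(f"{path}:{lineno}:{col}: bidi control U+{ord(ch):04X}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)

Run over a repository before changes are accepted, something like this would flag any file containing the control characters for a human to double check.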

If you want to know more about the research, check out the research paper Trojan Source: Invisible Vulnerabilities, by Nicholas Boucher and Ross Anderson.


What is Twitch?

Twitch is primarily a site dedicated to live streaming content. It also offers the ability to chat via text with others in the stream you happen to be watching. The primary draw of Twitch streams is video games and e-sports, leading to the rise of many big-name streamers and content creators.

Is Twitch just for gaming?

In addition to gaming streams, Twitch also offers user-generated content on a wide variety of themes and subjects. Everything from watching somebody sleep, to musical events, to walking around the streets of Japan shopping for clothes is available.

What age is Twitch for?

Statistics show a heavy leaning towards younger age ranges, with 41% of users in the 16-24 bracket and 32% in the 25-34 demographic. The proliferation of younger users makes it an appealing target for scammers.

Is it free? What is Twitch Prime?

The default Twitch experience is free to use. You can open up the Twitch website or download the app and start watching content right away. There’s no payment required to do this. However, Twitch does have paid options in the form of subscriptions, and also Prime Gaming (often referred to as “Twitch Prime”). Being a subscriber supports specific channels and also adds functionality for the user, such as emotes. Paid features and services make Twitch accounts an attractive proposition for scammers.

What are the dangers of Twitch?

It’s a variety of malware, phishing pages, and social engineering:

  1. Fake spam blogs, which may or may not claim to be official Twitch sources, offer up some kind of “fix”. It could be related to stream quality, or audio, or broken emotes (for example). In one case, we found malware served up as an “audio fix”. This file actually steals the streamer’s Stream Key and gives it to the malware author. From there, they’re able to take control of the Stream and send out whatever they want to their audience, as well as change the channel name.
  2. Bogus video plugins are also a popular way of tricking people into running files that are not necessary to use Twitch. We found an imitation Twitch site offering up a “video player plugin” required to stream the site’s content. In actuality, the file is an installer manager which we detect as a PUP (Potentially Unwanted Program). The program offers a variety of installs, and also opens a streaming site unrelated to Twitch. Though listed as “free”, these types of sites often require a paid monthly subscription to view the content – only registering on the site is “free”.
  3. Fake “bombing” tools. Twitch bombing is where bots jump into someone’s channel and entice viewers away to another stream. This is a bad enough thing to happen, but the waters are muddied further when you discover fake tools claiming to help you “bomb” are actually just Trojans or other forms of PUP.
  4. Discord/Twitch crossovers. We often see bots in Discord channels, claiming to be from Twitch bearing free gifts. These generally direct potential victims to phishing pages hunting for Discord credentials.

Has Twitch ever been compromised?

Yes. Data was exposed to the internet after a server configuration change. This alteration was taken advantage of by a third party. Although no payment or address data was found to be leaked, a number of security practices were advised in any case. The data was classed as “Part 1”, leading some to suspect a second data dump containing said payment or address data. At time of writing, no such data has materialised. Users of Twitch should be on their guard for any kind of scam or social engineering regardless. We’re too close to the incident to know for sure if everything is now back to normal. As far as regular Twitch use goes, however, you’re almost certainly good to go.

Is Twitch safe?

A lot of the tricks above are used on many other websites whether related to gaming or not. If you make use of Twitch security settings, and keep up to date with the latest security happenings along the way, in theory you should be fine.

There’s always the possibility of a service being compromised, and as we’ve seen, this happened to Twitch itself not long ago. However, this kind of attack is out of your hands. Keep things locked down, make use of 2FA, and steer clear of the “something for nothing” scams. Nobody can possibly fault you for doing the very best you can to keep your account and Twitch itself safe from harm.


Google patches zero-day vulnerability, and others, in Android

Google has issued security patches for the Android Operating System. In total, the patches address 39 vulnerabilities. There are indications that one of the patched vulnerabilities may be under limited, targeted exploitation.

The most severe of these issues is a critical security vulnerability in the System component that could enable a remote attacker using a specially crafted transmission to execute arbitrary code within the context of a privileged process.

Let’s have a closer look at the vulnerabilities that might seem interesting from a cybercriminal’s perspective.

The zero-day

Google has issued a patch for a possibly actively exploited zero-day vulnerability in the Android kernel. The vulnerability, listed under CVE-2021-1048, could allow an attacker with limited access to a device, for example through a malicious app, to elevate their privileges (EoP). Further details about this vulnerability have not been provided by Google, except that it is caused by a use-after-free (UAF) weakness and that it may be under limited, targeted exploitation.

Use after free is a vulnerability caused by incorrect use of dynamic memory during a program’s operation. If, after freeing a memory location, a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program. In this case, that means they could run malicious code with the permissions granted to the legitimate program.

Android TV

The most severe vulnerability in Android TV could enable a proximate attacker to silently pair with a TV and execute arbitrary code with no privileges or user interaction required. This vulnerability, listed under CVE-2021-0889, lies in Android TV’s remote service component.

CVE-2021-0918 and CVE-2021-0930

In the System section of the security bulletin we can find two Remote Code Execution (RCE) vulnerabilities that are rated as Critical. The severity assessment is based on the effect that exploiting the vulnerability would possibly have on an affected device, assuming the platform and service mitigations are turned off for development purposes or if successfully bypassed.

The most severe vulnerability in this section could enable a remote attacker using a specially crafted transmission to execute arbitrary code within the context of a privileged process. At this point it is unclear whether this description applies to CVE-2021-0918 or CVE-2021-0930, since both are listed as critical RCEs.

No more details were provided, but Google has used the description “a specially crafted transmission” for Bluetooth vulnerabilities in the past.

Chipsets

Besides vulnerabilities in the Android code, Google has fixed vulnerabilities introduced by some of the chipset manufacturers that Android uses. This round, we spotted fixes for MediaTek and Qualcomm closed-source components. Two of the vulnerabilities in the Qualcomm software, CVE-2021-1924 and CVE-2021-1975, have been rated as critical. The severity assessment of these issues is provided directly by Qualcomm.

CVE-2021-1975 is located in the data modem component and can be exploited remotely. It is a possible heap overflow due to an improper length check of the domain while parsing the DNS response. This vulnerability has a CVSS rating of 9.8.

The heap is the name for a region of a process’s memory which is used to store dynamic variables. A buffer overflow is a type of software vulnerability that exists when an area of memory within a software application reaches its address boundary and writes into an adjacent memory region. In software exploit code, two common areas that are targeted for overflows are the stack and the heap.

Android patch levels

Security patch levels of 2021-11-06 or later address all of these issues. To learn how to check a device’s security patch level, see Check and update your Android version.

Google releases at least two patch levels each month, and for November, they are 2021-11-01, 2021-11-05, and 2021-11-06.

For those who see an update alert marked as 2021-11-01, it means that they will get the following:

  • November framework patches
  • October framework patches
  • October vendor and kernel patches

Those who see either 2021-11-05 or 2021-11-06 patch levels will receive all of the above, plus the November vendor and kernel patches.

Stay safe, everyone!


Zuckerberg’s Metaverse, and the possible privacy and security concerns

The news is currently jam-packed with tales of Facebook’s Meta project. Of particular interest to me is Facebook’s long-stated desire to introduce adverts into the VR space, and what this may mean for Meta too. I’ve talked about the privacy and legal aspects of adverts in gaming and other tech activities many times down the years.

An advert in every home

Back in the Xbox 360 days, I explained how even in 2009 console dashboards were increasingly filled with adverts. A few years later I also highlighted how gamers resorted to using HOSTS files or OpenDNS to block advertisers from placing adverts onto the screen. Sure, they ended up with lots of black empty boxes but they felt it was preferable to the alternative.

Adverts and tracking in gaming have never gone away, and in many cases have only become worse. In 2017, I presented findings on what gamers could expect to see in many EULAs and privacy policies. I also covered, in detail, what kind of things you should expect with regards to advertising in VR/AR platforms.

The Advergaming wilderness years

Things sort of fizzled out in VR/AR for advergaming for a few years. The technology has been there, but the big push has been around advertising in VR more generally. Advergaming is still pretty niche, and VR headsets always seem to be on the cusp of becoming the next big thing…but then not quite getting there.

What this realm has been crying out for is a massive platform push. Step up to the plate, Facebook. Now with all new Meta.

A frosty Meta reception

The promotional material for Meta hasn’t had the best of receptions. There are still a lot of things in there which simply don’t make sense, and provide no real indication of how it’s going to work. Even so, something VR/AR-centric is definitely going to be the end result; we just don’t know what specific form it’s going to take. But what we do know is that advertising will be a big part of it. Some of the basic ideas already thrown around suggest a gamification of reality, seen through the lens of Meta.

We’ve been down this privacy road before with Google Glass and other AR specs. What are some of the possible concerns and issues related to privacy and security in this new world of virtual augmented realities?

Avoiding the physical risks of VR

If you’re going to spend a lot more time in headsets, it pays to be mindful of your surroundings. There’s already been one VR death that we know of, and we don’t need any more. I’ve spent a fair amount of time with a headset on for advergaming research, and below are the rules I generally follow to keep myself safe. We don’t know what Meta will say in terms of physical security yet, but encouraging a big push into VR should probably be accompanied by suggestions similar to these:

  1. Some VR games require you to stand up, or move around. They’re quite physical. Others are fine to play sitting down, and you might use a mouse and keyboard or a controller. If you’re doing the latter, you won’t want to accidentally hit your screen. You’re not looking at it anyway, so consider turning it around so it faces away from you. If your layout doesn’t allow for this, you can often set the “front view” of the game (what you see, in other words) to face a different direction from the TV or monitor the PC is plugged into, so you’re still facing away from the screen. Note that this will only work if you’re using a controller or wands. You can’t really sit at a right angle to your screen if you still need the mouse and keyboard.
  2. Wire safety is crucial. It’s incredibly easy to get your legs tangled up and then have a head/floor incident. Some people install overhead hooks to manage wires. Where this isn’t possible, cable ties are also handy. If all else fails, there are apps you can use which will show you if cords are tangling while in-game.
  3. Some platforms use “chaperone” modes. These map out the safe floorspace area while playing.
  4. I’ve seen many “Oh no, I bashed my toddler on the head with my wand” type posts down the years. There used to be no easy way to get the attention of someone in a headset without risking a bash from a flailing arm or leg. Thankfully there are safeguards which can be used. For example, the Steam “knock knock” feature.
  5. Orientation is another problem. I don’t remember where I got this tip from, but placing a fan next to wherever your TFT or TV is located means you’ll always know where everything in the room is in relation to your position. Finally, if you’re on carpet then put down a rubber mat or similar so you know where the safe zone is. If you’re on wood, then a few squares of carpet or a rug will do.

That’s the physical side of things covered, though there’s probably room for improvement. Now we move onto the digital concerns. Let’s start the ball rolling with what is probably the biggest problem for Facebook/Meta specifically:

Advertising in Facebook related VR realms just isn’t that popular

In June, we looked at what happened when Facebook announced it was going to do some advert testing in games. The title selected for this was something called Blaston. Although the adverts arguably stuck out badly from the game’s futuristic environment, the ad tracking side of things was pretty non-invasive. No movement data was used to determine ad success, no information was processed or stored locally, and conversation content was not recorded. Compared to the kind of deep-dive practices which happen on your desktop every time you open your browser, this is an incredibly light touch.

Despite this, the test didn’t seem to go very well. The developers were told by players “We don’t want this” and they decided not to do it anymore. Like many popular VR games, it’s a paid title and not a freebie. Ads in expensive console and PC games tend to get a rough time of things by default. It seems the same is true for VR titles. The fact that players on some VR platforms would see these ads as opposed to others pretty much sealed their fate.

There’s no easy way round this, and Facebook/Meta has a big hill to climb here.

Data breaches are still a thing even in VR land

Users of a pornography-based VR app were in the news back in 2018. Researchers found it was possible to view information including email addresses and device names for app users along with download details for anyone who’d paid using PayPal. Even though you’re interacting with a virtual or augmented world via headset or mobile, your data is still ending up somewhere other than the visor on your head.

It’s never been easier to pick up cheap DIY tools and get making some VR apps. We often wonder how much security work goes into cheap IoT devices and regular mobile apps, and the same thing applies to VR and AR. At this point, we simply don’t know what the future holds in this respect. If Meta allows for third party apps somewhere down the line, we need to know what security measures are in place to protect user data, and also screen for potentially malicious or insecure apps.

Augmented reality specs are on thin ice regarding privacy concerns

Look, we’ve been here before. People were so carried away with the idea of tiny digital lenses on their face that we soon ended up with lots of privacy invading overreach. Oh no, my fancy glasses are banned from public restrooms. Ah, this eatery won’t let me sit inside with other customers. Whoops, the local cinema has accused me of recording a movie and sent me to space prison.

And so on.

Any maker of AR glasses must surely be aware of the privacy furore just waiting to explode again the moment someone does something bad with their branded specs and ends up in the accompanying news stories.

Facebook seems to be conscious of the Glass issues of years prior, but some of its solutions to these privacy problems are arguably a little bit lacking in solid details so far. Making real-world product functionality dependent on social media accounts is also generally risky. We need to see a lot more meat on the bone where addressing safety and privacy issues arising from AR glasses is concerned. Whoever manages to crack this problem will reap the benefits, but will they be able to pull it off in the first place?

The privacy concerns issue isn’t really helped by some of the commentary from Mark Zuckerberg himself. He commented that a “killer use case” for AR glasses is being able to do something the person you’re talking to is unaware of.

We’re in a time where privacy-focused people have seen years of awful tech practices. At this current moment, we’re all waiting for the next privacy fallout from a data breach. With the myriad ways bad actors can abuse people through technology placed in their homes, the stakes for real/digital crossovers have never been higher.

And then, in all of this, we have the man at the forefront of a new, unreleased real/virtual crossover normalising a (mildly) deceptive use of technology towards people unaware that it’s happening.

This seems like a bad idea.

Don’t make it easy for criminals

Another selling point of Meta is being able to reproduce your home inside the VR space. This sounds cool, but there are already plenty of VR apps and desktop-based programs you can do this in. Yes, I made my home in Fallout 4. Yes, I blew it up shortly afterwards.

The difference is, the only person able to see it before it went kaboom was me.

There’s almost certainly going to be a social dimension to Meta’s home building. Friends will want to come and hang out at your (digital) place, right?

Where this could be a cause for concern is privacy settings. We need to make sure people are able to make their homes private, or inaccessible to strangers. I’ve seen similar situations in games where your home can be opened to the public. Sometimes you can port accessibility restrictions from house to house. Other times, homes or apartments are listed in public databases in-game and you’re free to visit wherever you want.

VR and AR allow for a lot more realistic homebuilding in digital spaces. There are furniture store apps which let you use AR to place items in your home and see if they fit the space intended for them. Could we see people scanning portions of their home and inserting them into Meta spaces? How about accurate replicas of rooms and their furniture?

The danger is we’ll be making scale models which could be used for any dubious purpose you care to mention. What if you’re able to make the outside of your home resemble the real thing too? Why stop at your home, when you can port in the whole street via public map databases?

Now you have a proper digital replica of your everyday life which strangers can visit. They can use this data and OSINT (open source intelligence) to figure out where you live. A dubious character might keep an eye on your social media feeds till you say you’re on holiday for 2 weeks. At that point, you might have your first burglary using VR as a launchpad…and an incredibly accurate floorplan of your home for reference while doing it.

Making Meta mountains out of molehills?

This is all wild speculation, but it’s very easy to see a way several unrelated aspects of VR/AR could unintentionally help people up to no good. If the right privacy tools don’t exist, if users aren’t given warnings as to why doing x or y in VR isn’t safe, it could be bad. A senior lecturer in digital cultures recently said “Facebook’s VR push is about data, not gaming”. I’d have to respectfully disagree.

All of the proposed coolest looking features seen so far are indeed all about gaming. If it isn’t Force ghost chessplayers, it’s Force ghost fencing battles. Wanting to make your own home digital and show it off is gamifying the experience. You can’t get any more gamey than oft-frustrated attempts to jam adverts into popular video game titles.

The games are absolutely the hook, and the way in, to vast quantities of data. Regardless of which direction Meta goes in with this, it’s up to the people wearing the headsets and glasses to be comfortable with their choices and be aware of the privacy perils of VR and AR.

It’s a whole new digital world out there.


This Steam phish baits you with free Discord Nitro

Weeks ago, we talked about the one effective lure that could get a Discord user to consider clicking on a scam link they were generously given, either by a random user or a legitimate contact who also happened to have fallen for the same ploy: free Discord Nitro subscriptions.

And similar to how scammers repeatedly prey on Discord users, they also prey on Steam users (Remember that “I accidentally reported you” scam?).

There’s novelty, however, in scammers preying on both at the same time. It’s not something you come across every day.

This Discord scam is not after your Discord credentials

There’s a fresh, active scam circulating in Discord right now that is propagated by either bot accounts or accounts controlled by scammers. Below is a sample screenshot of what you might find sitting in your direct messages:

[Image: a fake Discord direct message]
This is just one variant of the scam.

See, here free nitro for 1 month, just link your Steam account and enjoy –
{partially redacted URL}

Once Discord users click the link, they are directed to a website that was made to look and feel like a legitimate Discord page.

[Image: the fake Discord Nitro website]
Since when did Steam start giving away free Discord Nitro?

Clicking the “Get Nitro” button opens something that deceptively resembles a Steam pop-up, when in fact it’s not a separate window at all: the pop-up is part of the website itself.

This tactic is similar to that used by fraudsters about two years ago, described here by Reddit user /Bangaladore. In the post, he describes in detail how he (or his friend) found out that the pop-up is actually not a pop-up: “If you try to drag the window off of the parent chrome window, what happens? You can’t. It just stops at the edge. If you scroll up and down on the original page, the Steam sign in the [sic] window goes with it. A normal pop up does not act like this.”

[Image: the fake Steam sign-in pop-up, with broken page elements]
Uhh…

As you can see above, this particular pop-up had a bit of a problem loading the elements, thus the borked look. But we’d like to point out that, while the websites we visited and analyzed related to this scam use the same interface, there are just times when the code breaks and the spoofed URL in the fake address bar doesn’t show as it should. Here’s a better example from a related scam website that perfectly loaded up everything:

[Image: a fully loaded fake pop-up from a related scam website]
Note that the fake pop-up window displays the proper “steamcommunity.com” domain—but do not be fooled. This is just another way for scammers to make fake things look believably real.

When Discord users key in their Steam credentials in the fake pop-up, it shows them an error message saying “The account name or password that you have entered is incorrect”. Behind the scenes, though, their Steam credentials have already been captured by the scam website.

Below is a clip of the scam in action (Kudos to Stefan Dasic who analyzed the URLs and recorded this clip):

[Video: the Steam phish in action]

Malwarebytes already blocks 195[dot]133[dot]16[dot]40, the IP of this scam. We also found more than a hundred other scammy domains sitting on this IP. Here’s a sampling:

1nitro.club
appnitro-discord.com
asstralissteam.org.ru
discord-steam-promo.com
discordgifte.com
dicsord-ticket.com
discord-appnitro.com
ds-nitro.com
nitro-discordapp.com
nitrodsgiveways.com
steam-nitro.online

Stay safe out there! And please don’t just click links that came out of the blue.


Is Apple’s Safari browser the last, best hope for web privacy?

What browser do you use?

There’s a good chance, roughly two in three, that it’s Google Chrome. And even if you prefer a different browser, there’s a good chance that you’re using something that’s based on Google Chrome, such as Edge, Vivaldi, Chromium, Brave, or Opera.

After a decade and a half of relatively healthy competition between vendors, the World Wide Web is trending towards a browser monoculture. We’ve been there before and history suggests it’s bad news.

Last time it was Microsoft in the driver’s seat, and open standards and security were left tumbling about in the rear without a seat belt. This time Google has its hands on the wheel, and it’s our privacy in the back seat, being taken for a ride.

Chrome needs a counterweight and, thankfully, it still has one in Apple’s Safari browser. It’s imperfect, for sure, and its glacial pace of development might even be holding us all up, as Scott Gilbertson thoughtfully illustrated in a recent article on The Register. But it might also be the last, best hope for browser privacy we have.

Hear me out…

How Chrome ate the web

Google Chrome first appeared in 2008 and rapidly established itself as a browser that couldn’t be ignored, thanks to some catchy marketing on Google’s massive advertising platform. It was an excellent product with a ravenous appetite for market share, and its noisy focus on speed and security forced its rivals to take notice and compete on the same terms. Everyone benefitted.

And because none of the major browser vendors had enough market share to “embrace, extend and extinguish”, as Microsoft had attempted when Internet Explorer was dominant, everyone was forced to follow the same open standards. This meant that web applications mostly worked the same way, no matter what browser you used.

However, as Chrome’s popularity increased, Google was able to exert more and more influence on the web in service of its ad-based business model, to the detriment of users’ privacy.

For example, in 2016 Google introduced AMP, a set of web standards that were designed to make websites faster on mobile devices. In a move that could have come straight out of Redmond circa 1996, the AMP rulebook was written by Google and varied wildly from the open standards everyone had been working towards for the past fifteen years or so.

AMP was superficially open, but there was no AMP without Google. To use AMP your pages had to load code from Google-owned domains, debugging your code required Google-owned tools, your pages were stored in a Google-owned cache, and they were displayed under a Google-owned domain, so that users weren’t really on your website anymore, they were looking at your web pages on Google, thank you very much.

To incentivise the use of AMP, Google leveraged its search monopoly by creating “reserved” slots at the top of its mobile search rankings that were only available to AMP pages. If you wanted to top the search rankings, you had to play the AMP game.

Google pulled another bullish move in 2018 when it decided that logging into and out of a Google website like GMail or YouTube was the same as logging into the Chrome browser, because it could. So instead of being logged into the giant surveillance monster while you were using its websites, you were logged into the giant surveillance monster all the time, unless you remembered to log out of the browser, which of course you didn’t, because people just don’t think about logging in and out of their browser.

And then this year we had a great illustration of the bind that Google’s in even when it tries to do the right thing. It’s got the message that users want less tracking and more privacy, but unlike Firefox and Safari, Chrome can’t simply block the third-party cookies used for tracking, because Google’s advertising business model (and therefore Chrome’s very existence) depends on them.

Chrome is planning to ban third-party cookies, but not until at least 2023—years after Safari—because it needs to establish a replacement tracking tech.

The replacement is called Federated Learning of Cohorts (FLoC), and it’s designed to thread the needle of enabling targeted ads while keeping users anonymous, by lumping similar users into great big groups, called Cohorts. It may yet deliver ads that disrespect your privacy less, but it’s a brand new technology and it’s off to a slow, rocky start.

FLoC shows us why even a benign Google monoculture would hold back user privacy, and why Chrome needs a counterweight.

The other candidates

Edge

On the face of it, Microsoft seems a good potential counterweight to Google (stop sniggering at the back, a counterweight doesn’t need to be perfect, it just needs to have different weaknesses and be hard to kill).

Everyone who uses Windows gets its browser for free, and Microsoft has been happy to use privacy as a stick to beat its rival when it suits. For example, when it launched Internet Explorer 10, Microsoft enabled the nascent Do Not Track feature by default, a pro-privacy step that it knew Google couldn’t follow without cutting off its ad revenue. (Admittedly, it probably crashed the entire Do Not Track program in the process, but it was a terrible idea that was never going to work.)

Unfortunately, Microsoft handed in its big stick when it adopted Chrome as the basis for its own Edge browser, effectively removing one of the last pillars holding up the open standards-based web.

Mozilla Firefox

Mozilla Firefox is my favourite browser and I would love to be talking it up as a potential counterweight to Chrome. After all, it walks the walk in terms of pro-privacy features, and it has already ended one browser monopoly, in 2002, when it emerged to challenge Internet Explorer’s lazy grip on the web.

Unfortunately, as good as it is, Firefox is on shaky ground. It costs a fortune to keep Firefox in the browser game, and the vast majority of the money it needs comes from Google, which pays hundreds of millions of dollars a year for the privilege of being Firefox’s default search engine. The deal is up in 2023 and Firefox’s market share is dwindling.

Our counterweight can’t stand in Google’s way while also depending on its largesse.

The case for Safari

Apple’s Safari is very much the “also ran” in the pantheon of modern browsers. It has never been cutting edge, or coveted, it’s only ever been, well, there. It isn’t my favourite browser. It’s not even my second favourite browser.

Gilbertson’s Register article rightly points out that Safari is a laggard when it comes to new features, saying “Apple’s Safari lags considerably behind its peers in supporting web features … well behind the competition”. But how much does that matter, really? The web was mostly feature complete years ago, and modern web standards are often complex definitions of things that almost nobody needs.

It may be a bit “low energy”, but we don’t actually need Safari to be better than Chrome at web standards, or to become the best or most popular browser. It just needs to be good where Chrome is bad, too big to ignore, and unlikely to fail.

Well, Apple is good where Google is bad: its business model doesn’t rely on advertising, so it can be unabashedly pro-privacy. And it’s been pro-privacy long enough for us to judge it on its track record, which is actually pretty good, recent hiccups notwithstanding.

For example, where Chrome can’t afford to block third-party cookies for another year or more, Safari has been going one better since 2017, when it introduced Intelligent Tracking Prevention, a clever box of tricks that blocks third-party cookies and other forms of cross-site tracking. And there’s plenty more besides.

And, yes, Safari is currently too big to ignore, and even getting a bit bigger. In fact it’s the only major browser that’s gained market share since the arrival of Chrome.

Statcounter puts Safari’s share of the desktop browser market at a steady 9.5 percent, and its share of the mobile browser market at about 25 percent. Even its modest share of the desktop market is too large to be ignored by anyone serious about building a web app, but it’s the iPhone that’s most likely to be a thorn in the side of anyone thinking of ignoring Apple’s browser.

According to Statista, the iPhone had a 14 percent global market share in the second quarter of 2021, but its data also shows that the iPhone’s global market share jumps to 20 percent in the last quarter of each and every year, presumably because of Christmas sales. This speaks to the platform’s continued desirability, which has always been Apple’s bulwark against cheaper and more capable competitors.

iPhone users also spend more money than Android users, and in rich countries like the USA, where you’ll find enormous software markets and lots of startups, the iPhone has a whopping 50 percent of the market or more.

The people who build the websites you use tend to like Apple, and whether you like it or not, that matters.

When it comes to protecting privacy on the web, the most important thing might be the phones in the pockets of the web developers and the CEO.


A week in security (Oct 25 – Oct 31)

Last week on Malwarebytes Labs

Other cybersecurity news

Stay safe, everyone!


Celebrity jewelry house Graff falls victim to ransomware

Data on countless celebrities, including politicians, is apparently now in the hands of ransomware attackers after a group using the Conti variant compromised systems of one of the world’s most exclusive jewelry houses, Graff.

Despite what mathematicians like to think, there is an exception to every rule. When we wrote in our Demographics of Cybercrime Report that money (or its absence) changes our sense of safety, that wasn’t meant to imply that the rich feel like they’re bigger targets. Quite the opposite: those that don’t have money were found to feel less safe online. But it is, of course, true that the rich are more attractive targets.

High-end targets

The personal information of celebrities like Oprah Winfrey, David and Victoria Beckham, Tom Hanks, and Melania and Donald Trump was stolen during a ransomware attack on Graff. The Conti ransomware gang has claimed responsibility.

[Image: the Conti News site]
The Conti News site claims to have published 1% of the stolen data

Conti is one of the gangs that, besides encrypting files, exfiltrate data from the compromised systems. When the victim refuses to pay the ransom, the gang publishes the exfiltrated data, or sells it to the highest bidder. Conti recently announced that it will also publish data as soon as details or screenshots of the ransom negotiation process are leaked to journalists.

The Conti gang also made the news recently when it put access to compromised networks up for sale, as well as when some underpaid turncoat leaked its manuals, technical guides, and software on an underground forum.

According to Graff, the vast majority of clients have not been the victim of personal data loss and those that were affected have been informed by mail.

The target

From the all-caps official statement on its site, Graff is shaken but not stirred.

“PLEASE BE ASSURED THAT WE REACTED SWIFTLY TO SHUT DOWN OUR NETWORK AND DIRECTLY INFORMED THOSE INDIVIDUALS WHOSE PERSONAL DATA WAS AFFECTED, ADVISING THEM ON APPROPRIATE STEPS TO TAKE. WE ALSO NOTIFIED THE INFORMATION COMMISSIONER’S OFFICE AND CONTINUE TO WORK WITH LAW ENFORCEMENT AGENCIES. FORTUNATELY, THANKS TO OUR ROBUST BACK-UP FACILITIES, NO DATA WAS IRREVOCABLY LOST. WE WERE ABLE TO REBUILD AND RESTART OUR SYSTEMS WITHIN DAYS TO CONTINUE TO OPERATE EFFECTIVELY AND ALL OUR SHOPS AND ECOMMERCE PLATFORM WERE UNAFFECTED AND CONTINUED TO OPERATE WITHOUT INTERRUPTION.“

The investigation

A spokesman for the UK’s Information Commissioner’s Office (ICO), which can impose fines of up to 4% of a company’s turnover for failing to comply with the Data Protection Act, said:

“We have received a report from Graff Diamonds Ltd regarding a ransomware attack. We will be contacting the organization to make further enquiries in relation to the information that has been provided.”

Unfortunately, knowing who did it and knowing who to arrest, and how, are two very different things when it comes to cybercrime. Sometimes attribution is hard, but even in cases where law enforcement knows who is behind the attack, it doesn’t make it easy to apprehend the evil-doers.

In this case, the group that was behind the attack made a public confession and published proof, but we don’t know the real names of the people in this group. We have good reason to assume that they are in Russia, but even of that we can’t be sure.

It is only in rare cases that cybercriminals travel to countries where they run the risk of being extradited to the US or another country where there is a warrant out for them.

What’s next?

In the case of high-end jeweler Graff, it doesn’t sound as if they have plans to pay the ransom, so it is highly likely that more of the exfiltrated data will be published on the Conti leak site.

The data that were stolen do not seem to be of an alarmingly private nature. Conti has been known to attack targets in the public health sector, where far more delicate information is to be found. But with this attack, it may have angered some people who have the power to make things happen.

Want to know more about Conti?


Lessons from a real-life ransomware attack

Ransomware attacks, despite dramatically increasing in frequency this summer, remain opaque for many potential victims. It isn’t anyone’s fault, necessarily, since news articles about ransomware attacks often focus on the attack, the suspected threat actors, the ransomware type, and, well, not much else. Sadly, there’s rarely discussion about the lengthy recovery, which, according to the Ransomware Task Force, can last an average of 287 days, or about the complicated matter that backups, the biggest claimed defense against ransomware attacks, often fail.

There also isn’t enough coverage about the human impact from ransomware. These cyberattacks do not just hit machines—they hit businesses, organizations, and the people who help those places run.

To better understand the nuts and bolts of a ransomware attack, we spoke to Ski Kacaroski, a systems administrator who, in 2019, helped pull his school district out of a ransomware nightmare that encrypted crucial data, locked up vital systems, and even threatened employee pay. Kacaroski spoke at length on our Lock and Code podcast, which can be heard in full below, offering several insights for those who may not know the severity of a ransomware attack.

Here are some of the most surprising and insightful lessons that he shared with us.

The first few hours are critical

At 11:37 pm on the night of September 20, 2019, cybercriminals launched a ransomware attack against the Northshore School District, which is north of Seattle in Washington State. The cybercriminals deployed the Ryuk ransomware against the school district, which relied on a datacenter of 300 Windows and Linux black box servers. The district also managed 4,000 staff members’ devices, including Windows, Mac, and Chromebook workstations, along with many iPad tablets.

The morning after the attack, Kacaroski got a phone call from one of the school district’s database administrators about problems with the database server. Shortly after logging into his employer’s VPN and poking around, Kacaroski learned that the server had been hit with ransomware. He saw one, unencrypted file—a ransomware note from the threat actors—and countless .ryuk file extensions nearly everywhere else.

These first few hours after the attack, Kacaroski said, are when he made a crucial mistake.

“If I was to redo this again, the minute I saw the first one [hit], I would’ve just pulled the power on every single box, ASAP,” Kacaroski said. “I definitely cost us probably a few boxes by not doing that quickly enough. But you never think you’re going to be hit by ransomware, so that’s not usually the first thing you consider when somebody reports the system is not working right.”

Kacaroski said that his school district’s cyber insurance provider later told his team that ransomware operators often target only Windows machines in these attacks. That kind of knowledge could have helped Kacaroski prioritize his and his colleague’s immediate reactions, protecting the Windows machines without worrying about any real threats to the Linux and Mac machines.

Your backups may not work 

In the immediate aftermath of the attack, Kacaroski said he and his colleague, another sysadmin who works on Windows, were dealing with “an incredible amount of uncertainty.” They did not know what critical services had been hit, they were still trying to figure out which drives were operational by pinging them, and they were still working under the assumption that all of their devices—not just Windows machines—could be threatened.

But at least initially, Kacaroski said he and his colleague were feeling somewhat confident. After all, Kacaroski said, his school district had implemented proper backups. Or so he thought.

“We have a very good backup system, or at least what we thought was an extremely solid, rock-solid backup system,” Kacaroski said. “And then we find out, at about 4 or 5 hours after the attack, that our backup system is completely gone.”

Kacaroski’s situation is, believe it or not, somewhat common. Earlier this year, despite having a backup system in place, the meat supplier JBS still decided to pay $11 million to its attackers to obtain a decryption key after getting hit with ransomware. The biggest mistake that organizations make in setting up their backups, as we discussed in a separate episode of Lock and Code, is that those backups are not properly and regularly tested.

This moment of realization, Kacaroski said, hit him and his colleague hard.

“It started to really sink in that I’m going to have to rebuild 180 Windows servers, and more importantly, rebuild Active Directory from scratch, with all those accounts and groups, and everything in it,” Kacaroski said. “That part really, really hurt us.”

A ransomware attack can be a months-long process

The attack against Northshore School District was not an overnight decision by a single group of hackers. In fact, it wasn’t even the work of one group of hackers.

According to Kacaroski, after both the FBI and the Department of Homeland Security helped investigate the attack on Northshore, employees learned about a months-long process that most likely led to the eventual ransomware infection. The initial breach into Northshore’s servers likely began in March 2019, six months before the final attack, and it involved a group of hackers simply installing Emotet to gain access to Northshore’s servers. Once access had been gained, that first group of hackers then sold its access off to another group of hackers who, according to Kacaroski’s learnings from the FBI, then installed TrickBot to obtain domain credentials. Once those credentials were swiped, the group that deployed TrickBot sold that information to yet another group of hackers, which were believed to be the same group that pushed the Ryuk ransomware onto the school district’s machines.

Interestingly, Kacaroski said that the school district was told that the attack was likely uncoordinated between the three different groups, with the groups acting independently and simply leveraging the prior group’s access.

What also surprised Kacaroski is that the Ryuk ransomware gangs operate like a franchise.

“What we’ve been told is the Ryuk group is a franchise like McDonalds,” Kacaroski said. “There’s the Ryuk group that runs the West Coast, the one that does the East Coast, the one that does something in between, and they don’t actually pay for access to the Ryuk stuff unless they have a successful attack, so they basically pay a fee back to the people that wrote it every time that they have a successful attack.”

There are more ransomware attacks than you’ve heard about—far more

The week after Northshore School District was hit with ransomware, its cyber insurance providers said four additional payments were made to other ransomware victims. That’s just one week in late 2019. With the number of attacks being reported on today, and the recorded, increased frequency of known attacks, we can safely assume that the number of undisclosed ransomware attacks has simply skyrocketed.

In immediate recovery, first prioritize and then look for “surprise” systems

In responding to the crisis of a ransomware attack, organizations need to prioritize what systems need to go back online first. Often, that work is made “easy” for an organization because ransomware will often hit just days—or hours—before crucial deadlines.

For Northshore School District, their ransomware attack happened just days before employees were scheduled to be paid. That’s a deadline that simply can’t be missed, Kacaroski said.

“Payroll has to run—it is a legal thing. You can not not pay people. You have to pay them, which means four days after the attack, we had to have payroll up and running,” Kacaroski said. “That was the most critical thing.”

The school district then prioritized getting Active Directory and the student record system back online, as those systems were used countless times each day to simply help the school run. The student record system, Kacaroski said, was used by teachers, parents, and students themselves, and it needed to go back online quickly.

Finally, Kacaroski warned about what he called “surprise” systems—systems that are in place that an organization may not know about or may not understand are crucial until they’re gone. For Northshore School District, that system was for the school’s cafeteria and payment records.

“We had no clue that [the food services system] did 10,000 meals a day and 30,000 dollars… a day. We had no clue if the students had paid for their meals or haven’t paid or they owed us money,” Kacaroski said. “That one took a long time to get up and working because it was a distributed system and it had no backups at all.”

Avoid chokepoints during a long, collaborative recovery

The Northshore School District sysadmins are a small team of two, and in responding to the ransomware attack, there was only so much they could do—literally. Employees need to go home to sleep, and they need time to eat—as simple and basic as that sounds. Further, when recovering from a ransomware attack, there will almost always be what Kacaroski called a “system admin chokepoint.”

Because system administrators know how the systems themselves work, they can often become the single points of contact for rebuilding the entire business, piece by piece. Those system administrators can then get overburdened by too many teams coming to them repeatedly for information, sign-offs, and verifications.

To help move the recovery process forward, Kacaroski said organizations should find ways to free up their sysadmins, either by finding ways to rebuild systems independently, or by adding more sysadmins temporarily.

For Northshore School District, both methods were used.

After the attack, Kacaroski said his school district called up a local hosting firm that had done good work on small jobs that the school district itself couldn’t—or didn’t have the time to—do. Right after getting off the phone, that firm sent three additional sysadmins to help clean up the problem, Kacaroski said.

“We called them up. They gave us… essentially full-time, experienced sysadmins,” Kacaroski said. “We went from two to five. A huge increase.”

Kacaroski said that the beefed-up sysadmin team also gained some valuable breathing room when the school district found a paper-based workaround for its food services system. The school pared down its offerings and began providing only three options for school lunches for children. Each day during this temporary fix, the school could easily mark down, on paper, how many lunch options of each type were purchased by the students, still keeping accurate records while giving the school extra time to rebuild any digital services. Further, the school district decided to move its student record system, which was comprised of 27 Windows servers, to a SaaS solution, Kacaroski said.

“We had a vendor that we had a good relationship with, they dropped everything, and what is normally a six-month migration, they did in six days,” Kacaroski said. “But the most critical part is it didn’t have to go through the system admin chokepoint. That was a whole different group and they could just work on it on their own.”

All along the way, Kacaroski stressed the importance of strong relationships. Aided by local vendors, other school districts, parents, and other teams inside the school district itself, Northshore was able to recover about 80 – 85 percent of its systems and files in just two months, Kacaroski said.

“Like I say,” Kacaroski said, “relationships were the most critical thing.”


Listen to our full conversation on Lock and Code below

[Embedded Lock and Code podcast player]


Update your OptinMonster WordPress plugin immediately

WordPress, the incredibly popular content management platform, is currently dealing with a nasty plugin bug which allows redirects.

What is a WordPress plugin?

Like most blogging platforms, WordPress allows you to change up its default functionality. This is done by adding bits of kit called plugins. Some will be from WordPress itself, others are created and maintained by third parties. Any plugin can be potentially unsafe, or coded poorly, or compromised in some way. It’s also entirely possible for rogues to make their own innocent looking plugin and cause chaos.

Plugins are often in the news for these kinds of problems. Just this month, we covered a WordPress plugin susceptible to multiple vulnerabilities. Last month, it was a plugin leaving shoppers vulnerable to cross site scripting bugs and a form of JavaScript injection. There are so many plugins that it’s a surefire bet another plugin will be the latest compromise before long. And even when it’s not possible to be 100% sure a plugin was involved in an attack, you can end up with a bad situation very quickly. Shall we see what’s happened this time?

Bug causes problems for up to 1 million sites

Yes, an astonishing 1 million WordPress sites have been affected this time around. OptinMonster is a plugin designed to make your site “sticky”: that is, keep people around for longer, convert interest into sales, sign visitors up to newsletters, build up elements of your site, and more.

This plugin relies on API endpoints to do its job. An API is an Application Programming Interface, and you can read a fantastic plain-English description of what an API is and does here.
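To give a feel for what that means in practice, here is a deliberately generic Python sketch of a program talking to an API endpoint; the URL and key scheme are made up for illustration and are not OptinMonster’s actual API.

import json
import urllib.request

# A hypothetical API call: request a URL, get structured data back,
# with an API key identifying (and authorising) the caller.
req = urllib.request.Request(
    "https://api.example.com/v1/campaigns",             # made-up endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},    # made-up key scheme
)
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)
print(data)

Whoever presents a valid key is, as far as the service is concerned, the legitimate account owner, which is why exposed or poorly validated keys matter so much.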

Sadly, it seems some of the endpoints weren’t secure, and attackers with API keys designed for use with the OptinMonster service could get up to no good. Changes could be made to accounts, or malicious code could be placed on the site without a visitor’s knowledge.

CVE-2021-39341

The bug, known as CVE-2021-39341 and discovered at the end of September, has been addressed by the OptinMonster developers. Stolen API keys have been invalidated, and a patch was released on October 7. It’s possible more updates may appear over the next few weeks.

What should I do if I have OptinMonster on my website?

If your API key has been revoked, you’ll have to create a new one. You should also ensure your plugin is kept up to date. In fact, you should be doing this for all of your plugins. It may be worth checking if they’re still maintained, and browsing the latest reviews to see if people are suddenly complaining about peculiar activity.

If you have plugins installed which you don’t use at all, or only very rarely, it may be worth having a spring clean. Often we rush to install dozens of plugins on a new website, and before we know it, we’ve forgotten what half of them are. There they sit, for months or years, just waiting for a juicy vulnerability to come along. Why take the risk?

There’s a number of ways you can keep your WordPress site safe from harm where plugins are concerned. Our advice is to devote some time to digging through the weeds and see what exactly you have lurking in the undergrowth.
