
A week in security (October 12 – October 18)

Last week on Malwarebytes Labs, we looked at journalism’s role in cybersecurity on our Lock and Code podcast, gave tips for safer shopping on Amazon Prime Day, and discussed an APT attack springing into life as academia returned to the real and virtual campus environment. We also dug into potential FIFA 21 scams, the return of QR code scams, Covid fatigue, and the absence of deepfakes from the 2020 US election.

Other cybersecurity news

  • Coronavirus SMS spoof risk: Researcher warns that genuine messages can be impersonated (Source: The Register)

Stay safe, everyone!


Deepfakes and the 2020 United States election: missing in action?

If you believed reports in the news, impending deepfake disaster was headed our way in time for the 2020 United States election. Political intrigue, dubious clips, mischief and mayhem were all promised. We’d need to be careful around clips of the President issuing statements about being at war, or politicians making defamatory statements. Everything was up for grabs, in play, and at stake. Then, all of a sudden…it wasn’t.

Nothing happened. Nothing has continued to happen. Where did our politically charged deepfake mayhem go to? Could it still happen? Is there time? With all the increasingly surreal things happening on a daily basis, would anybody even care?

The answer is a cautious “no, they probably wouldn’t.” As we’ve mentioned previously, there are two main schools of thought on this. Shall we have a quick refresher?

Following the flow

Stance 1: Catastrophe and chaos rain down from the heavens. The missiles will launch. Extreme political shenanigans will cause skulduggery and intrigue of the highest order. Democracy as we know it is imperilled. None of us will emerge unscathed. Deepfakes imperil everything.

Stance 2: Deepfakes have jumped the shark. They’d have been effective political tools when nobody knew about them. They’re more useful for subversive influence campaigns off the beaten track. You have to put them in the places you least expect, because people quite literally expect them. They’re yesterday’s news.

Two fairly diverse stances, and most people seem to fall into one of the two camps. As far as the US election goes, what is the current state of play?

2020 US election: current state of play

Imagine our surprise when, instead of deepfaked election chaos, we got a poorly distorted GIF you can make on your phone. It was heralded as the first strike of deepfakes “for electioneering purposes”.

It’s dreadful. Something you’d see in the comment section of a Myspace page, as pieces of face smear and warp this way and that. People are willing to call pretty much anything a deepfake to add weight to their points. The knock-on effect of this is hype overload and gradual disinterest: because everything in sight gets called a deepfake, things many would consider genuine deepfakes end up being turned away at the door.

This is a frankly ludicrous situation. Even so, outside of the slightly tired clips we’ve already seen, there doesn’t appear to be any election inroad for scammers or those up to no good.

What happened to my US election deepfakes?

The short answer is that people seem to be much more taken with pornographic possibilities than with bringing down governments. According to Sensity data, the US is the most heavily targeted nation for deepfake activity. That’s some 45.4%, versus the UK in second place with just 10.4%, South Korea with 9.1%, and India at 5.2%. The most popular targeted sector is entertainment with 63.9%, followed by fashion at 20.4%, and politics with a measly 4.5%.

We’ve seen very few (if any) political deepfakes aimed at South Korean politicians. For all intents and purposes, they don’t exist. What does exist, in incredible quantities, are pornographic fakes of South Korean K-pop singers shared on forums and marketplaces. This probably explains South Korea’s appearance in third place overall and is absolutely contributing to the high entertainment-sector figure.

Similarly adding to both the US and entertainment tallies are US actresses and singers. Again, most of those clips tend to be pornographic in nature. This isn’t a slow trickle of generated content. It’s no exaggeration to say that a single site will generate pages of new fakes per day, with even more in the private/paid-for sections of their forums.

This is awful news for the actresses and singers currently doomed to find themselves uploaded all over these sites without permission. Politicians, for the most part, get off lightly.

What are we left with?

Besides the half dozen or so clips from professional orgs saying “What if Trump/Obama/Johnson/Corbyn said THIS” with a clip of said politician saying it (and they’re not that great either), it’s basically amateur hour out there. There’s a reasonably consistent drip-feed of parody clips on YouTube, Facebook, and Twitter. It’s not Donald Trump declaring war on China. It isn’t Joe Biden announcing an urgent press briefing about Hillary Clinton’s emails. It’s not Alexandria Ocasio-Cortez telling voters to stay home because the local voting station has closed.

What it is, is Donald Trump and Joe Biden badly lip-syncing their way through Bohemian Rhapsody on YouTube. It’s Trump and Biden talking about a large spoon edited into the shot with voices provided by someone else. I was particularly taken by the Biden/Trump rap battle doing the rounds on Twitter.

As you may have guessed, I’m not massively impressed by what’s on offer so far. If nothing else, one of the best clips for entertainment purposes I’ve seen so far is from RT, the Russian state-controlled news network. 

Big money, minimal returns?

Consider how much money RT must have available for media projects, and what they could theoretically sink into something they clearly want to make a big splash with. And yet, for all that…it’s some guy in a Donald Trump wig, with an incredibly obviously fake head pasted underneath it. The lips don’t really work, the face floats around the screen a bit, evidently not sharing the same frame of reference as the body. The voice, too, has a distinct whiff of fragments stitched together.

So, a convincing fake? Not at all. However, is that the actual aim? Is it deliberately bad, so they don’t run a theoretical risk of getting into trouble somehow? Or is this quite literally the best they can do?

If it is, to the RT team who put it together: I’m sorry. Please, don’t cry. I’m aiming for constructive criticism here.

They’re inside the walls

Curiously, instead of a wave of super-dubious deepfakes making you lose faith in the electoral system, we’ve ended up with…elected representatives slinging the fakes around instead.

By fakes, I don’t mean typical “cheapfakes”, or photoshops. I mean actual deepfakes.

Well, one deepfake. Just one.

“If our campaign can make a video like this, imagine what Putin is doing right now”

Bold words from Democratic candidate Phil Ehr, in relation to a deepfake his campaign team made showing Republican Matt Gaetz having a political change of heart. He wants to show how video and audio manipulation can influence elections and other important events.

Educating the public in electioneering shenanigans is certainly a worthwhile goal. Unfortunately, I have to highlight a few problems with the approach:

  1. People don’t watch things from start to finish. Whole articles go unread beyond the title and maybe the first paragraph. TV shows progress no further than the first ad break. People don’t watch ad breaks. It’s quite possible many people will get as far as Matt Gaetz saying how cool he thinks Barack Obama is, then abandon ship under the impression it was all genuine.
  2. “If we can make a video like this” implies what you’re about to see is an incredible work of art. It’s terrible. The synthetic Matt Gaetz looks like he wandered in off the set of a PlayStation 3 game. The voice is better, but still betrayed by that halting, staccato lilt so common in audio fakery. One would hope the visuals being so bad would take care of 1), but people not really paying attention or with a TV on in the background are in for a world of badly digitised hurt.

An acceptable use of technology?

However you stack this one up, I think it’s broadly unhelpful to normalise fakes in this way during election cycles, regardless of intention. Note there’s also no “WARNING: THIS IS FAKE” type message at the start of the clip. This is bad, considering you can detach media from Tweets and repurpose it.

It’s the easiest thing in the world to copy the code for the video and paste it into your own Tweet minus his disclaimer. You could just as easily download it, edit out the part at the end which explains the purpose, and put it back on social media platforms. There are so many ways you can get up to mischief with a clip like this, it’s not even funny.

Bottom line: I don’t think this is a good idea.

Fakes in different realms

Other organisations have made politically-themed fakes to cement the theoretical problems posed by deepfakes during election time, and these ones are actually quite good. You can still see the traces of uncanny valley in there though, and we must once again ask: is it worth the effort? When major news cycles rotate around things as basic as conspiracy theories and manipulation, perhaps fake Putin isn’t the big problem here.

If you were in any doubt as to where the law enforcement action is on this subject: it’s currently pornography. Use of celebrity faces in deepfakes is now officially attracting the attention of the thin blue line. You can read more on deepfake threats (political or otherwise) in this presentation by expert Kelsey Farish.

Cleaning up the house

That isn’t to say things might not change. Depending on how fiercely the US election battle is fought, strange deepfake things could still be afoot at the eleventh hour. Whether it makes any difference or not is another thing altogether, and if low-grade memes or conspiracy theories are enough to get the job done then that’s what people will continue to do.

Having said that: you can keep a watchful eye on possible foreign interference in the US election via this newly released attribution tracker. Malign interference campaigns will probably continue as the main driver of GAN-generated imagery. Always be skeptical, regardless of suspicions over AI involvement. The truth is most definitely out there…it just might take a little longer to reach than usual.


How Covid fatigue puts your physical and digital health in jeopardy

After six months of social distancing, sheltering in place, working from home, distance learning, mask-wearing, hand-washing, and plenty of hand-wringing, people are pretty damn tired of COVID-19. And with no magic bullet (yet) and no end in sight, annoyance has turned into exasperation and even desperation.

Doctors and mental health professionals call this Covid fatigue.

Covid fatigue, not to be confused with fatigue as a symptom of the COVID-19 infection, can be characterized by denial, defeatism, and careless or reckless behavior in response to feeling overwhelmed and exhausted by a constant stream of pandemic-related information. And since COVID-19’s impact on our lives has been both profound and long-lasting, the fatigue is further pronounced by such prolonged exposure to intense stress. Conflicting information about the seriousness of the virus does little to provide relief. Instead, emotions are extra muddied by uncertainty about how stressed we should really be feeling.

Those of us in cybersecurity recognize this emotional response well. We’ve seen it play out in the digital realm in the form of security fatigue and alert fatigue, or what some doctors call “caution fatigue.” And we understand that if it isn’t addressed, it can lead to dangerous choices for the health and safety of people in the real world and online.

COVID-19 has upended nearly every facet of our lives, driving us into the open arms of the Internet like never before. Yet, as we struggle with anxiety and burnout related to the pandemic, our fatigue spills over into our online behavior. And with so many working and schooling from home, the stakes have never been higher.

So, when we see users exhibiting classic symptoms of Covid fatigue, security fatigue, or other caution fatigue, we feel their pain but recognize that this behavior can’t go on unchecked. If you think that you, your friends and family, or coworkers might be experiencing Covid fatigue, read on to learn how to recognize the symptoms, why they are dangerous, and what can be done to fight against it.

What is Covid fatigue?

To understand Covid fatigue, it helps to first zoom out and consider that fatigue is a natural response to any ongoing stressful situation or threat. When you couple that with the need to take specific actions to protect against that threat, you get caution fatigue. In an interview for a WebMD special report, Jacqueline Gollan, Associate Professor of Psychiatry and Behavioral Science at Northwestern’s Feinberg School of Medicine, explains what she means by the term caution fatigue:

“[Caution fatigue] is really low motivation or interest in taking safety precautions. It occurs because the constant state of being [on] alert for a threat can activate a stress hormone called cortisol, and that can affect our health and our brain function…When we’re subjected to high levels of stress, we start to desensitize to that stress. And then we begin to pay less attention to risky situations.”

Caution fatigue, then, can apply to numerous situations where individuals are under siege for an extended period of time and grow tired of being required to employ protective measures. This is especially true when the threat is not perceived as imminent or direct, and even more prominent when the threat is invisible. Other factors that increase caution fatigue include:

  • Lack of transparency into the threat or the reasons for the restrictions
  • Unfair or overly complicated restrictions or recommendations for safety precautions
  • Inconsistent actions and mixed messages about which measures are effective
  • Unpredictable changes to safety measures, including using subjective criteria to alter directions

Looking at this list in the context of the coronavirus pandemic, it appears we’ve checked off all the boxes, turning what was strong public support for COVID-19 response strategies into a collective case of the Mondays. According to an October report by the World Health Organization (WHO), pandemic fatigue has reached over 60 percent in some parts of Europe. In the United States, a July 2020 Kaiser Family Foundation poll found that 53 percent of Americans believed the pandemic had harmed their mental health.

WHO says that Covid fatigue is expressed through an increasing number of people not sufficiently following recommendations and restrictions, decreasing their effort to stay informed about the pandemic, and having lower risk perceptions related to COVID-19. Previously effective core messages about washing hands, wearing face masks, practicing proper hygiene, and maintaining physical distance may now be lost in the shuffle. Instead, vigilance is replaced by denial (I won’t get infected) or nihilism (we’re all screwed anyway, so I might as well do what I want).

What does Covid fatigue have to do with cybersecurity?

Covid fatigue shares characteristics with another form of fatigue that has long plagued the cybersecurity industry: security fatigue. In 2017, the National Institute of Standards and Technology (NIST) published a study stating that security fatigue was the threshold at which users found it too hard or burdensome to maintain security, a phenomenon affecting 63 percent of its participants.

The NIST report went further to say, “People are told they need to be constantly on alert, constantly ‘doing something,’ but they are not even sure what that something is or what might happen if they do or do not do it.”

Security fatigue and its cousin alert fatigue (which technicians are likely already familiar with) prevent users from taking definitive steps to protect themselves while connected to the Internet. Every news story on ransomware, a major breach of personally identifiable information (PII), or a nation-state cyberattack comes with its own set of “here’s how to protect against this” steps to follow.

Some of those instructions may be complex or incredibly specific, contributing to confusion (especially for those who aren’t tech savvy). Likewise, the constant pinging from alert notifications on security software may result in IT teams dismissing those alerts altogether.

Although there have been efforts to reduce security and alert fatigue, they likely make themselves known on a regular basis to anyone working in IT and security. For other users, security fatigue might flow as an undercurrent or barely register. But when you add Covid fatigue to the recipe, you get a dangerous cocktail of weary indifference.

Now, those with Covid fatigue aren’t just endangering themselves by ignoring best health practices and tuning out the latest news. They’re also letting their fatigue-influenced behavior spill over into other areas, including conducting business (or pleasure) online.

Because COVID-19 has forced much of the globe to spend a lot more time online, it has opened up the floodgates for cybercriminal activity, misinformation, and digital infection. Here, at the crossroads of Covid, security, and alert fatigue, people might find themselves in just as much danger on the Internet as they would be at a packed rally of maskless, cheering crowds.

Caroline Wong, CSO of pentest-as-a-service company Cobalt, recently spoke to Malwarebytes employees at a virtual fireside chat about Covid fatigue.

“One of the things that I worry about the most is anxiety and burnout and what that means for human error,” she said. “When we’re anxious, maybe we’re more likely to fall for a phishing scam. When I’m burnt out, maybe I’m more likely to purposefully or accidentally take some kind of a shortcut. Every behavior of an employee affects the security posture of the company.”

And behaviors have changed drastically for both users and cybercriminals since the onset of COVID-19. Here are a few examples of how threat actors are taking advantage of fatigued users:

  • Now that more people are shopping online to avoid crowded stores, cybercriminals have stepped up their credit card skimming efforts on legitimate sites. In just the first month of sheltering in place, digital skimming was up 26 percent. Users were previously told that a site secured by “https” and a lock icon should be safe. Those rules are now out the window.
  • Threat actors have weaponized information on COVID-19, using it as a hook to lure phishing victims, from SBA scams to nation-state espionage. Just consuming information about COVID-19 from the wrong source, then, could compromise users’ safety.
  • Students are distance learning, often on their own devices. And parents/individuals are mostly working from home, again using their (unprotected) personal devices to conduct work, or work devices to conduct personal errands. Cybercriminals look to capitalize on these risky choices by targeting employees on insecure devices and infiltrating business/school networks in the process.

“I think the biggest threat from Covid fatigue comes down to the massive distraction it causes,” said Adam Kujawa, Director of Malwarebytes Labs. “People who are so desperate for hope might scrutinize less and end up falling into a trap or exposing themselves to cyberthreats, just for the idea of relief.”

Combine this with the general malaise brought on by Covid fatigue, and you get an exponentially higher chance of infecting your home and business networks, rendering your devices unusable, having your PII stolen and sold on the black market, opening the door for nation-state actors to spy on your organization, or even inviting threat actors to seize company files and ransom them for a hefty price.

How to fight Covid fatigue

If one of the symptoms of fatigue is feeling overwhelmed by a heavy dose of information and advice about what to do to combat a threat, how do you go about giving important information and advice about what to do to combat that threat? One method would be to consider the factors that are causing stress and fatigue and then deliver simple, actionable instructions to counter those factors. For example, if a constantly changing outlook on the future of the pandemic and other mixed messages are creating anxiety, consider only visiting a small selection of websites to find answers.

In researching for this article, I came across dozens of different recommendations for combatting Covid and security fatigue. Rather than overwhelm readers with too many choices, I opted to boil down all instructions to the three most pertinent. For battling Covid fatigue, try:

  1. Turning to a coping mechanism. Take a five-minute break from the screen or TV if COVID-19 news is getting you down. If you need more time, spend it absorbed in a favorite hobby to re-energize.
  2. Lowering your expectations. This may sound crude, but what it really means is give yourself a break. If you’re forgetting words or taking a long time to complete a project, forgive yourself. And if you think a vaccine will definitely be here in January 2021, perhaps consider placing your hopes elsewhere.
  3. Talking to someone. COVID-19 has been isolating for all of us. When loneliness strikes, schedule a virtual happy hour with a close friend, jump on a phone call with family members, or book an appointment with a trusted counselor.

In addition, remember these key preventative measures for keeping the virus at bay, recommended by leading scientists:

  1. Wear a mask in public. That includes not just stores and workplaces, but at any gathering with people outside your household.
  2. Wash your hands frequently. Especially after being around other people or handling any objects that came from outside your home.
  3. Practice social distancing. When in doubt, stay at least six feet away from others. Refrain from gathering in large groups, especially indoors in poorly-ventilated areas.

And finally, to ensure you don’t let Covid fatigue transform into security fatigue, remember these three important rules:

  1. Use a password manager. To avoid re-using passwords across accounts or having to remember 27 different ones, a password manager will keep your account credentials encrypted inside a digital vault, which can only be opened by a single master password. For extra protection, employ multi-factor authentication.
  2. Use security software on all of your devices, including your mobile phone. (iPhones don’t allow for external antivirus protection, but they do let users download robocall blockers and apps that secure mobile browsers.)
  3. Use common sense. We’ve learned that “trust but verify” doesn’t work for the Internet. If it seems too good to be true…you know the rest.


QR code scams are making a comeback

Just when we thought the QR code was on its way out, the pandemic has led to a return of the scannable shortcut. COVID-19 has meant finding a digital equivalent to things normally handed out physically, like menus, tour guides, and other paperwork, and many organizations have adopted the QR code to help with this. And so, it would seem, have criminals. Scammers have dusted off their book of tricks that abuse QR codes, and we’re starting to see new scams. Or maybe just old scams in new places.

What is a QR code again?

A quick recap for those that missed it. A Quick Response (QR) code is nothing more than a two-dimensional barcode. This type of code was designed to be read by robots that keep track of items in a factory. As a QR code takes up a lot less space than a legacy barcode, its usage soon spread.

Smartphones can easily read QR codes—all it takes is a camera and a small piece of software. Some apps, like banking apps, have QR code-reading software incorporated to make it easier for users to make online payments. In some other cases, QR codes are used as part of a login procedure.

QR codes are easy to generate and they are hard to tell apart. To most human eyes, they all look the same. More or less like this:

[Image: a sample QR code, in this case linking to the author’s contributor profile]
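
Part of the problem is how trivially anyone can produce one of these codes. Below is a minimal sketch in Python, assuming the third-party qrcode package (and its Pillow dependency) is installed; the URL and file name are just examples:

    import qrcode

    # Any text can be encoded; a URL is the most common payload, and nothing in the
    # resulting image tells you whether that URL is benign or malicious.
    img = qrcode.make("https://blog.malwarebytes.com/")
    img.save("example_qr.png")

Swap in any URL you like and the output looks, to a human eye, just like every other QR code.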

Why are QR codes coming back?

For some time, these QR codes were mainly in use in industrial environments to help keep track of inventory and production. Later they gained some popularity among advertisers, because it was easier for consumers to scan a code than to type a long URL. But people couldn’t tell from a QR code where scanning would lead them, so they got cautious, and QR codes started to disappear. Then along came the pandemic, and entrepreneurs had to get creative about protecting their customers against a real-life virus infection.

To name one example: for fear of spreading COVID-19 through many people touching the same menu in a restaurant, businesses placed QR codes on their tables so customers could scan the code and open the menu in the browser on their phone. Clean and easy. Unless a previous visitor with bad intentions had replaced the QR code with their own. Enter QR code scams.

Some known QR code scams

The easiest QR code scam to pull off is clickjacking. Some people get paid to lure others into clicking on a certain link. What better way than to replace the QR codes at a popular monument, for example, where people expect to find background information about the landmark by following the link in the QR code? Instead, the replaced QR code takes them to a sleazy site, and the clickjacking operator gets paid his fee.

Another trick is the small advance payment scam. For some services, it’s accepted as normal to make an advance payment before you can use that service. For example, to rent a shared bike, you are asked to make a small payment to open the lock on the bike. The QR code to identify the bike and start the payment procedure is printed on the bike. But these legitimate QR codes can be replaced by criminals who are happy to receive those small payments into their own accounts.

Phishing links can just as easily be disguised as QR codes. Phishers place QR codes where it makes sense for the user. So, for example, if someone is expecting to log in to start a payment procedure or to get access to a certain service, the scammers may place a QR code there. We’ve also seen phishing emails equipped with fraudulent QR codes.

[Image: phishing email containing a fraudulent QR code, courtesy of Proofpoint]

The email shown above instructed the receiver to install the “security app” from their bank to avoid their account being locked down. However, it pointed to a malicious app hosted outside of the official app store. The user had to allow installs from unknown sources to do this, which should have been a huge red flag, but some people still fell for it.

Lastly, there’s the redirect payments scam, which was used by a website that facilitated Bitcoin payments. While the user entered a Bitcoin address as the receiver, the website generated a QR code for a different Bitcoin address to receive the payment. It’s yet another scam that demonstrates that QR codes are too hard for humans to read.

How to avoid QR code scams

There are a few common-sense methods to avoid the worst QR code scams:

  • Do not trust emails from unknown senders.
  • Do not scan a QR code embedded in an email. Treat them the same as links because, well, that’s what they are.
  • Check to see whether a different QR code sticker was pasted over the original and, if so, stay away from it. Or better yet, ask if it’s OK to remove it.
  • Use a QR scanner that checks or displays the URL before it follows the link (a minimal sketch of such a check follows this list).
  • Use a scam blocker or web filter on your device to protect you against known scams.
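
As a rough illustration of the “check the URL before following it” advice above, here is a minimal sketch in Python. It assumes the OpenCV bindings (opencv-python) are installed and that menu_qr.png is a photo of the code you want to inspect; it is a sketch of the idea, not a replacement for a proper scanner app:

    import cv2

    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(cv2.imread("menu_qr.png"))

    if not data:
        print("No QR code found in the image")
    elif data.lower().startswith("https://"):
        # Show the destination so it can be checked before anything is opened
        print("This code points to:", data)
    else:
        print("Unexpected payload, not opening:", data)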

Even if an email from a bank looks legitimate, you should at least double-check with the bank (using a contact number you’ve found on a letter or their website) if they ask you to log in on a site other than their own, install software, or pay for something you haven’t ordered.

As an extra precaution, do not use your banking app to scan QR codes if they fall outside of the normal pattern of a payment procedure.

Do I want to know what’s next?

Maybe not, but forewarned is forearmed. One method in development to replace QR codes is the Near Field Communication (NFC) tag. Like QR codes, NFC tags don’t require a dedicated app on more modern devices: most recent iPhones and Android phones can read third-party NFC tags without extra software, although older models may need an app to read them.

NFC tags are also impossible for humans to read, but they do require physical presence, i.e. they can’t be sent by email. But with the rise in popularity of contactless payments, we may see more scams focusing on this type of communication.

Stay safe, everyone!


Caught in the payment fraud net: when, not if?

Sometimes, I think there are three certainties in life: death, taxes, and some form of payment fraud. Security reporter Danny Palmer experienced this a little while ago, and has spent a significant amount of time tracking the journey of his card details from the UK to Suriname. His deep-dive confirmed that it is easy to become tangled up in fraud, even if you’re very careful. I myself have experienced one of the more peculiar forms of credit card theft, detailed below.

Sometimes it’s you…

Right off the bat, let’s clarify that there are ways to both help and hinder the security of your payment information.

Maybe you switched something off while traveling for easy access and forgot to turn it back on at the other end. Perhaps there was some ancient Hotmail account still tied to something important with a password on six hundred thousand password dumps. Maybe you did one of those “Without giving your exact date of birth, please tell us something you’d recognise from your childhood and also your exact date of birth and credit card number” things bouncing around on social media.

These are all ways you can inadvertently generate problems for yourself at a later date.

Sometimes it isn’t you

On the other hand, instead of winding up in one of the above examples, let’s say you successfully navigated all perils.

You secured your desktop, installed some security software, followed the advice to keep your system up to date, and avoided all dubious installs. Locking down your phone was a great idea. Reading some blogs on password managers was the icing on the cake. You’ve done it all, and anything going wrong after this will have to be one heck of a fight.

There is, however, a third path outside of what you do or don’t do to keep data secure.

Occasionally, the issue is elsewhere

Maybe people you don’t know, who you entrusted with the well-being of your card data, did something wrong. Perhaps a Point of Sale terminal is missing vital patches. The store across town didn’t keep an eye on their ATM, and the company responsible for it didn’t have a means to combat the skimmer strapped across the card slot. The clothing store you bought your jacket from did a terrible job of locking down payment data and everything is sitting in the clear.

This is absolutely one of those “whatever will be, will be” moments.

The…good?…news about hacks outside of your control is, they can happen to anyone. Including people who work in security. As a result, you shouldn’t feel like you’ve done something wrong. In many cases, you almost certainly haven’t. It’s way beyond time to normalise the notion that huge servings of guilt aren’t a prerequisite for data theft.

Setting the scene: My experience with card fraud

When I received my fraud missive through the post, it was shortly after an incredibly time-consuming and complicated continent-spanning house move. Did I make a multitude of payments in all directions? You bet. Shipping, storage, local transportation, and a terrifyingly long list of general administrative and paperwork duties from one end of a country to another.

I avoided using my banking debit card throughout the process, relying on my credit card instead. There’s a reason for this.

Interlude: why I used a credit card

If you buy something with your debit card and the money ends up with a scammer, you may have problems recovering your funds. You may well have to endure a lengthy dispute process, or prove you weren’t being negligent, in order to get your money back.

Increasingly, banks are making this a little harder to do.

If you bank online, you’ll almost certainly have seen a digital caveat any time you go to transfer money. They’re usually along the lines of waiving the ability to reclaim your money if you’re tricked into sending your cash to a scammer. They’ll ask you to confirm you know who you’re sending the money to, or place the responsibility for transferring funds directly on your own shoulders. Perhaps they’ll try and get out of paying up if your PC was compromised by malware. If you pay by cheque, you could get into all sorts of tedious wrangling behind the scenes too.

Even without all of the above, your bank may well have a number of minimum best practices for you to follow. Unless you want to run into potential pitfalls, try and keep things ship-shape there too.

Meanwhile, the credit card is a fast track to getting your money back, because it’s the incredibly large and powerful credit company getting their money back. You’re just there for the ride, as it were. This in no way removes your requirement to be responsible with your details, but from experience, I’ve had more success righting a cash-related wrong where it involved credit rather than debit. It’s an added form of leverage and protection. The real shame is that this isn’t usually the case when paying with your own money. Once again, we’re back in the land of “whatever will be, will be”.

End of interlude: when things go wrong

I don’t know exactly what happened with my card, or who took the details. I’ve no idea if the details were swiped from an insecure database, or a store had Point of Sale malware on a terminal. I can’t say if it was cloned from one of the few times I had to use an ATM.

Stop and think about the places you frequently buy items from. Maybe even draw up a list on a map. You’ll almost certainly have a handful of stores you use regularly, with a few random places thrown in for good measure. Perhaps you avoid ATMs completely, opting for cashback in stores instead. You probably shop online at the same places too, with a few more off-the-beaten-track sites popping up here and there.

You may get lucky and discover one of them has had a breach. If they’re small shops or family businesses, sorry…you probably won’t read about it in the news. Website compromises can lie undetected for a long time. Same for Point of Sale malware on physical terminals. Your shopping circle of trust only extends so far and is only useful for figuring out a breach up to a point. After that, it’s guesswork, and for various reasons your bank/credit card company won’t disclose investigation information.

The scammers strike

What I do know is that a letter came through the door telling me someone had tried to make a purchase of around 14 thousand pounds on my credit card. Their big plan was to order a huge supply of wine from a wine merchant. What I was told by the bank is that these aren’t places you can typically wander in off the street and throw some wine in a shopping trolley. These are organisations which sell directly to retailers.

Logic suggests that card fraud circles around small, inconspicuous transactions to remain off the grid. Nothing screams small, inconspicuous transactions like “a purchase more than the limit on your card for a bulk supply of rare, expensive wine from a direct to store wine merchant unavailable to the public”.

Though this is outside my realm of experience, my guess is a successful purchase would’ve resulted in the wine being sold on in ways which obscure the source of the original funds. By the time anyone has figured out what happened, the scammer has turned a profit and I’m left holding the incredibly large wine bag.

Luckily for me, “Make small inconspicuous transactions” doesn’t appear to have been in their playbook. Even if the fraud detection team had somehow missed this utterly out of character purchase, the scammers also managed to blow past my credit card limit. I assume the big fraud detection machine exploded and required a bit of a lie down afterwards to recover.

Dealing with the aftermath

I was very lucky, if you can call it that, because of the baffling way the scammers tried to rip me off. If the ludicrous size of the attempted payment hadn’t set alarm bells ringing, the unusual items purchased probably would have given the same end result. Similarly, Danny Palmer’s card flagged the fraud tripwires before any money was taken. Banks and credit card companies are constantly adding new ways to detect dubious antics and also make logging into banking portals a safer experience.

All the same, we shouldn’t rely on others too much to ensure our metaphorical bacon is saved at the last minute. Keep locking things down, be observant when using ATMs, and familiarise yourself with the security procedures for your payment method of choice. We can’t stop everything from going wrong, but we can certainly help tip the odds a little bit more in our favour.

I probably won’t crack open a bottle of wine to celebrate, though.


Lock and Code S1Ep16: Investigating digital vulnerabilities with Samy Kamkar

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Samy Kamkar, chief security officer and co-founder of Openpath, about the digital vulnerabilities in our physical world.

If you look through a recent history of hacking, you’ll find the clear significance of experimentation. In 2015, security researchers hacked a Jeep Cherokee and took over its steering, transmission, and brakes. In 2019, researchers accessed medical scanning equipment to alter X-ray images, inserting fraudulent, visual signs of cancer in a hypothetical patient.

Experimentation in cybersecurity helps us learn about our vulnerabilities.

Today, we’re discussing one such experiment—a garage door opener called “Open Sesame,” developed by Kamkar himself.

Tune in to hear about the “Open Sesame,” how it works, what happened after its research was presented, and how the public should navigate and understand a world rife with potential vulnerabilities on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research on:

Other cybersecurity news:

  • Threat intelligence researchers from Group-IB have outed a new Russian-speaking ransomware gang called OldGremlin, which has been targeting big companies in Russia. (Source: CyberScoop)
  • Tyler Technologies, a software vendor for US states and counties during election seasons, recently admitted that an unknown party had hacked its internal systems. (Source: Reuters)
  • Graphika unearthed a campaign they called Operation Naval Gazing, which is aimed at supporting China’s territorial claim in the South China Sea. (Source: TechCrunch)
  • As the US elections draw near, the FBI and CISA warn voters against efforts and interference from foreign actors potentially spreading disinformation regarding election results. (Source: The Internet Crime Complaint Center (IC3))
  • Activision, the video game publisher for Call of Duty (CoD), denied that it had been hacked after reports that more than 500,000 accounts have had their login information leaked. (Source: Dexerto)

Stay safe, everyone!


Taurus Project stealer now spreading via malvertising campaign

For the past several months, Taurus Project—a relatively new stealer that appeared in the spring of 2020—has been distributed via malspam campaigns targeting users in the United States. The macro-laced documents spawn a PowerShell script that invokes certutil to run an AutoIt script ultimately responsible for downloading the Taurus binary.

Taurus was originally built as a fork by the developer behind Predator the Thief. It boasts many of the same capabilities as Predator the Thief, namely the ability to steal credentials from browsers, FTP, VPN, and email clients, as well as cryptocurrency wallets.

Starting in late August, we began noticing large malvertising campaigns, including one in particular that we dubbed Malsmoke, which distributes Smoke Loader. Over the past few days, we observed a new infection chain pushing the Taurus stealer.

Campaign scope

Like the other malvertising campaigns we covered, this latest one is also targeting visitors to adult sites. Victims are mostly from the US, but also from Australia and the UK.

Traffic is fed into the Fallout exploit kit, probably one of the most dominant drive-by toolsets at the moment. The Taurus stealer is deployed onto vulnerable systems running unpatched versions of Internet Explorer or Flash Player.

[Figure 1: Traffic capture showing the malvertising chain into Fallout EK loading Taurus]

Because of code similarities, many sandboxes and security products will detect Taurus as Predator the Thief.

[Figure 2: The string ‘TAURUS’ as seen in the malware binary]

The execution flow is indeed pretty much identical: scraping the system for data to steal, exfiltrating it, and then loading additional malware payloads. In this instance, we observed SystemBC and QBot.

Stealer – loader combo continues to be popular

Stealers are a popular malware payload these days and some families have diversified to become more than plain stealers, not only in terms of advanced features but also as loaders for additional malware.

Even though the threat actors behind Predator the Thief appear to have handed over a fork of their original creation and disappeared, the market for stealers is still very strong.

Malwarebytes users are protected against this threat via our anti-exploit layer which stops the Fallout exploit kit.

We would like to thank Fumik0_ for background information about Predator the Thief and Taurus.

Indicators of Compromise

Malvertising infrastructure

casigamewin[.]com

Redirector

89.203.249[.]76

Taurus binary

84f6fd5103bfa97b8479af5a6db82100149167690502bb0231e6832fc463af13

Taurus C2

111.90.149[.]143

SystemBC

charliehospital[.]com/soc.exe
c08ae3fc4f7db6848f829eb7548530e2522ee3eb60a57b2c38cd1bdc862f5d6f

QBot

regencymyanmar[.]com/nt.exe
3aabdde5f35be00031d3f70aa1317b694e279692197ef7e13855654164218754
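
For anyone who wants to make use of these indicators, here is a minimal sketch of how the SHA-256 hashes above could be checked against a file on disk; the file path is hypothetical and the hashes are copied straight from the list above:

    import hashlib

    TAURUS_CAMPAIGN_HASHES = {
        "84f6fd5103bfa97b8479af5a6db82100149167690502bb0231e6832fc463af13",  # Taurus binary
        "c08ae3fc4f7db6848f829eb7548530e2522ee3eb60a57b2c38cd1bdc862f5d6f",  # SystemBC
        "3aabdde5f35be00031d3f70aa1317b694e279692197ef7e13855654164218754",  # QBot
    }

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if sha256_of("suspicious_sample.bin") in TAURUS_CAMPAIGN_HASHES:
        print("File matches a known indicator from this campaign")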


Sandbox in security: what is it, and how it relates to malware

To better understand modern malware detection methods, it’s a good idea to look at sandboxes. In cybersecurity, the use of sandboxes has gained a lot of traction over the last decade or so. With the plethora of new malware coming our way every day, security researchers needed something to test new programs without investing too much of their precious time.

Sandboxes provide ideal, secluded environments to screen certain malware types without giving that malware a chance to spread. Based on the observed behavior, the samples can then be classified as harmless, malicious, or “needs a closer look.”

Running programs in such a secluded environment is referred to as sandboxing, and the environments the samples are allowed to run in are called sandboxes.

Definition of sandboxing

Let’s start with a definition so we know what we are talking about. There are many definitions around but I’m partial to this one:

“Sandboxing is a software management strategy that isolates applications from critical system resources and other programs. Sandboxing helps reduce the impact any individual program or app will have on your system.”

I’m not partial to this definition because it is more correct than other definitions, but because it says exactly what we want from a sandbox in malware research: No impact on critical system resources. We want the malware to show us what it does, but we don’t want it to disturb our monitoring or infect other important systems. Preferably, we want it to create a full report and be able to reset the sandbox quickly so it’s ready for the next sample.

Malware detection and sandboxing

Coming from that definition, we can say that a cybersecurity sandbox is a physical or virtual environment used to open files or run programs without the chance of any sample interfering with our monitoring or permanently affecting the device it is running on. Sandboxing is used to test code or applications that could be malicious before serving them up to critical devices.

In cybersecurity, sandboxing is used as a method to test software which would end up being categorized as “safe” or “unsafe” after the test. In many cases, the code will be allowed to run and a machine learning (ML) algorithm or another type of Artificial Intelligence (AI) will be used to classify the sample or move it further upstream for closer determination.

Malware and online sandboxes

As sandbox technology development further progressed and as the demand for a quick method to test software arose, we saw the introduction of online sandboxes. These are websites where you can submit a sample and receive a report about the actions of the sample as observed by the online sandbox.

It still takes an experienced eye to determine from these reports whether the submitted sample was malicious or not, but for many system administrators in a small organization, it’s a quick check that lets them decide whether they want to allow something to run inside their security perimeter.

Some of these online sandboxes have even taken this procedure one step further and allow user input during the monitoring process.

[Image: the Any.run interactive sandbox]

This is an ideal setup for those types of situations where the intended victim needs to unzip a password-protected attachment and enable content in a Word document. Or those pesky adware installers that require you to scroll through their End User License Agreement (EULA) and click on “Agree” and “Install.” As you can imagine, these will not do much on a fully automated sandbox, but for a malware analyst, these samples would fall into the category that requires human attention anyway.

Sandbox sensitivity

In the ongoing “arms race” between malware writers and security professionals, malware writers started to add routines to their programs that check if they are running in a virtual environment. When the programs detect that they are running in a sandbox or on a virtual machine (VM), they throw an error or just stop running silently. Some even perform some harmless task to throw us off their track. Either way, these sandbox-evading malware samples don’t execute their malicious code when they detect that they are running inside a controlled environment. Their main concern is that researchers would be able to monitor the behavior and come up with counter strategies, like blocking the URLs that the sample tries to contact.

Some of the methods that malware uses to determine whether it is running in a sandbox are listed below (a simplified illustration of one such check follows the list):

  • Delaying execution to make use of the time-out that is built into most sandboxes.
  • Hardware fingerprinting. Sandboxes and Virtual Machines can be recognized as they are typically different from physical machines. A much lower usage of resources, for example, is one such indicator.
  • Measuring user interaction. Some malware requires the user to be active for it to run, even if it’s only a moving mouse-pointer.
  • Network detection. Some samples will not run on non-networked systems.
  • Checking other running programs. Some samples look for processes that are known to be used for monitoring and refuse to run when they are active. The absence of other software may also be considered an indicator of running in a sandbox.
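
To make the hardware-fingerprinting idea above more concrete, here is a deliberately simplified sketch in Python (using the third-party psutil package) of the kind of environment check described; the thresholds are arbitrary values chosen for illustration, and real malware families use far more elaborate logic:

    import psutil

    def looks_like_analysis_environment():
        # Analysis VMs are often provisioned with minimal virtual hardware
        few_cpus = psutil.cpu_count(logical=True) <= 2
        little_ram = psutil.virtual_memory().total < 4 * 1024 ** 3   # under 4 GB
        # A freshly reset sandbox tends to run far fewer processes than a real desktop
        quiet_system = len(psutil.pids()) < 50
        return few_cpus and little_ram and quiet_system

    if looks_like_analysis_environment():
        print("Environment looks like a sandbox or VM")
    else:
        print("Environment looks like an ordinary host")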

Sandboxes and virtual machines

In the previous paragraph we referenced both virtual machines and sandboxes. However, while sandboxes and virtual machines share enough characteristics to get them confused for one another, they are in fact two different technologies.

What really sets them apart is that a virtual machine always acts as if it were a complete system, while a sandbox can be made much more limited. For instance, a sandbox can be made to run only in the browser, and none of the other applications on the system would notice it was even there. On the other hand, a virtual machine that is entirely separated from the rest of the world, including its host, would be considered a sandbox.

To make the circle complete, so to speak, we have seen malware delivered in the form of a VM. This type of attack was observed in two separate families, Maze and Ragnar Locker. The Maze threat actors bundled a VirtualBox installer and the weaponized VM virtual drive inside an MSI file (Windows Installer package). The attackers then used a batch script called starter.bat to launch the attack from within the VM.


If you’d like to know more technical details about these attacks, here’s some recommended reading: Maze attackers adopt Ragnar Locker virtual machine technique


The future of sandboxing

Keeping in mind that containerization and virtual machines are becoming more common as a replacement for physical machines, we wonder whether cybercriminals can afford to cancel their attack when they find out they are running on a sandbox or virtual machine.

On the other hand, the malware detection methods developed around sandboxes are getting more sophisticated every day.

So, could this be the field where the arms race is in favor of the good guys? Only the future will be able to tell us.

Stay safe, everyone!


Phishers spoof reliable cybersecurity training company to garner clicks

“It happens to the best of us.”

And, indeed, no adage is better suited to a phishing campaign that recently made headlines.

Fraudsters used the brand of KnowBe4—a trusted cybersecurity company that offers security awareness training for organizations—to gain recipients’ trust, their Microsoft Outlook credentials, and other personally identifiable information (PII). This is according to findings from our friends at Cofense Intelligence, who did a comprehensive analysis of the campaign, and of course, KnowBe4, who first reported on it.

[Image: screenshot of the phishing email, courtesy of KnowBe4]

Email details are as follows:

Subject: Training Reminder: Due Date

Message body:

Good morning

Your Security Awareness Training will expire within the next 24hrs. You only have 1 day to complete the following assignment:

– 2020 KnowBe4 Security Awareness Training

Please note this training is not available on the employee training Portal. You need to use the link below to complete the training:

hxxps://training[.]knowb[.]e4[.]com/auth/saml/4d851fef35c0f

This training link is also available on Security Awareness Training.

Use the URL: training[.]knowbe[.]4[.]com/login if you like to access the training outside of the network. Please use your email on the initial KnowBe4 login screen. Once the browser directs you to authentication page, please enter your username, password, and click the “Sign in” button to access the training.

Your training record will be available within 30 days after the campaign is concluded.

Thank you for helping to keep our organization safe from cybercrime.

Information Security Officer

“Poor English” is usually a hallmark of a scam email, according to the majority of cybersecurity experts, and phishing emails are notorious for it. The above training-themed email may have fooled several recipients who are quite forgiving of English errors—after all, typos do happen.

However, we should also remember to look at the URLs closely, both as they appear in the email and where they really lead when you hover a mouse pointer over each one. Granted, this is a straightforward, unsophisticated scam, which makes it easier to spot. It also suggests that whoever the campaign is trying to bag, they’re only after those who aren’t careful enough to look closely or critical enough to perceive that something is amiss.

To the untrained eye, the URLs in the email may seem genuine, but they’re not. If you’re familiar with a URL’s structure, you’ll quickly realize that they’re not even close to being genuine. Take, for example, training[.]knowb[.]e4[.]com. The main domain here is e4[.]com. As for training[.]knowbe[.]4[.]com, the main domain is 4[.]com. Basic familiarity with URLs can save you from falling for scams like this.
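
If you want more than an eyeball check, a few lines of code can pull out the registered domain (what we call the main domain above) for you. A minimal sketch, assuming the third-party tldextract package is installed; the first URL is what a legitimate KnowBe4 training address would look like, the second is the look-alike from the phishing email:

    import tldextract

    for url in ("https://training.knowbe4.com/login",
                "https://training.knowb.e4.com/auth/saml/4d851fef35c0f"):
        ext = tldextract.extract(url)
        print(url, "->", ext.registered_domain)

    # The first resolves to knowbe4.com; the second to e4.com, which has nothing
    # to do with KnowBe4 despite looking similar at a glance.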

Once users click any of the links, they are directed to a destination that doesn’t bear the KnowBe4 brand but instead appears to be a Microsoft Outlook sign-in page asking for credentials.

[Image: screenshot of the first Outlook 365 phishing page, courtesy of KnowBe4]

Again, take note of the URL in the address bar.

According to Cofense, similar phishing pages have been hosted on at least 30 sites since April of this year. They also found traces of other current or previous phishing campaigns that were themed around sexual harassment training, another learning course many organizations require their employees to take.

Going back: once the Outlook username and password combination is provided and the user clicks “Sign in”, they are directed to another Outlook page, this time asking for details that are more personal, such as date of birth and physical address.

[Image: screenshot of the second Outlook 365 phishing page, courtesy of Cofense]

As the phishing kit had already been taken down at the time of writing, testing couldn’t show what happens after clicking “Verify Now”. But since the sexual harassment training phishing campaign, which used the same kit, redirected victims to a legitimate sexual harassment training page, it’s logical to conclude that users here would likewise be directed to a security awareness training website, which may or may not be KnowBe4’s.

This isn’t the first time the KnowBe4 brand—or other cybersecurity brands, for that matter—has been abused to defraud people. The company’s name was previously used in phishing campaigns in September 2018 and January 2019.

In February of this year, a NortonLifeLock phishing scam was found in the wild, wherein threat actors pushed a remote access Trojan (RAT) onto victim systems by making a malformed Word document appear to be password-protected by NortonLifeLock.

In April 2019, sophisticated Office 365 credential stealers didn’t just craft fake Microsoft alerts around certain Microsoft products; they also mimicked the return path of Barracuda Networks, a well-known email security provider, and included it in the phishing email’s Received header, making the email appear to have passed through Barracuda servers. This made it seem like it could be trusted, and thus safe to open, when—upon closer inspection—it wasn’t.

Every organization has a brand to protect. And the first step to do this is to realize early on that their brand could be misused or abused by those who want to make illicit gains. That said, no brand is truly safe. Heck, even Malwarebytes has doppelgängers.

Businesses must be actively looking for those banking on their names online. Customers, on the other hand, must know and accept that online criminals can get to them through the services they use by pretending to be these companies. It’s no longer enough to readily trust emails based on the logos they purport to bear. It’s time to start carefully reading emails you care about and scrutinizing them, from the supposed sender to the email links and/or attachments.

Never attempt to click anything in dubious emails or visit the destinations by copying and pasting them into a browser unless you’re in a virtual machine. And if you don’t have time to do the investigative work yourself, ask. Give your service provider a call or report a potential phishing attempt. This way, you’re not only helping yourself but also alerting your provider and helping those who would have fallen for a scam if not for your efforts.

Stay safe!


A week in security (September 14 – 20)

Last week on Malwarebytes Labs, we looked at Fintech industry developments, specifically the differences between Europe and the US, and we analyzed how some charities and the advertising industry are tied together. We also told readers about what companies can do to counter domain name abuse.

In our Lock and Code podcast we talked to Pieter Arntz about safely using Google Chrome Extensions.

Other cybersecurity news

  • Researchers discovered the Zerologon Windows exploit, which lets attackers instantly become admins on enterprise networks. (Source: TechSpot)
  • A technology firm linked to the Chinese Communist Party has created and mined a global database of 2.4 million individuals. (Source: The Diplomat)
  • Five Chinese nationals and two Malaysian nationals linked to APT41 were charged in connection with a global hacking campaign. (Source: Cyberscoop)
  • How do stolen credit cards get used halfway around the world? Danny Palmer tried to find out. (Source: ZDNet)
  • Cybersecurity companies noticed a surge in DDoS attacks targeting the education and academic sector. (Source: BleepingComputer)
  • A Bluetooth vulnerability dubbed BLURtooth that overwrites Bluetooth encryption keys was reported last week by two research groups. (Source: TechXplore)
  • The US Department of the Interior (DoI) failed its latest computer security assessment, mostly for a lack of Wi-Fi defenses. (Source: The Register)
  • A woman in Germany died during a ransomware attack on a hospital, in what may be the first death directly linked to a cyberattack on a hospital. (Source: The Verge)
  • In a transformation of the threat portfolio, web-phishing targeting various online services almost doubled during the COVID-19 pandemic. (Source: Security Affairs)
  • UK business owners were targeted by a phishing scam that attempts to gain sensitive information by impersonating Her Majesty’s Revenue and Customs (HMRC). (Source: Infosecurity Magazine)

Stay safe, everyone!
