IT NEWS

Apple shines and buffs Mac security—Is it enough to stop today’s malware?

There’s a lot going on in the Mac security world lately.

Over the last few months, Apple has ramped up security efforts across its platforms. From the endpoint security framework introduced in macOS Catalina to the phasing out of kernel extensions, the tech giant has been battening down the hatches—especially around macOS and Mac hardware.

Despite Apple’s best efforts—or perhaps as a result of them—the Mac threat landscape has become even more dangerous. But instead of welcoming allied assistance via third-party security vendors, Apple is closing the gate. And cybercriminals are closing the gap.

A crack in the Mac door

It seems like only yesterday there weren’t many breaking news stories on Mac security threats to bite into. In fact, news on Apple cyberthreats wasn’t just infrequent—it was inconsequential. But over the last few years, credible threats, exploits, and hacks of Apple products have become more persistent. There was KeRanger ransomware in 2016. Several effective Mac-facing miners joined the crypto-rush in 2018. The bootrom vulnerability exploited by checkm8 rattled quite a few cages in late 2019.

However, from the start of 2020 onward, the malicious momentum has been building. In the 2020 State of Malware Report, Malwarebytes researchers found that Mac malware—primarily backdoors, data stealers, and cryptominers—had risen by 61 percent over the previous year.

2020 served Apple users with a number of targeted attacks using RATs and APTs developed by nation-state actors from China, North Korea, and Vietnam. Some of these made their way into the wild; others appeared on journalists’ iPhones. ThiefQuest, a Mac malware masquerading as ransomware, was discovered in mid-2020.

Despite having the most locked-down security system of Apple’s platforms, iOS was particularly pummelled in the last year. A zero-click exploit remained unpatched for six months of 2020, leaving innocent iPhone users unaware that anyone nearby could completely take over their device without touching it. In November 2020, Apple released patches for three zero-day vulnerabilities in iOS and iPadOS that were being actively exploited in the wild.

Unfortunately, 2021 is proving to be similarly rotten for Apple. Just last week, the company released a patch for iPhone, iPad, and MacBook for a bug that could allow code execution through websites hosting malicious code. Reading between the lines, this means its browsers were vulnerable to exploits that could be launched from malicious website content, including images and ads.

While Apple didn’t comment on whether this particular vulnerability had been discovered by cybercriminals, the company released patches for three separate security bugs that were being actively exploited in January 2021. (Note: These are a different three vulnerabilities than the zero-days found in November.) And just a couple weeks ago, there was Silver Sparrow.

Silver Sparrow is a new Mac malware that swooped in on February 18 and was found on nearly 40,000 endpoints by Malwarebytes detection engines. At first considered a reasonably dangerous threat (researchers now believe it’s a form of adware), Silver Sparrow is nevertheless a malware family of intrigue for showcasing “mature” capabilities, such as the ability to remove itself, which is usually reserved for stealth operations.

One of Silver Sparrow’s more advanced features is the ability to run natively on the M1 chip, which Apple introduced to macOS in November. The M1 chip is central to Apple’s latest security features for Mac computers, and that makes it central to the apparent security paradigm shift happening within the company’s walls.

Apple security paradigm shift

And what paradigm shift is that? Macs running the M1 chip now support the same degree of robust security Apple consumers expect from their iOS devices, which means features like Kernel Integrity Protection, Fast Permission Restrictions (which help mitigate web-based or runtime attacks), and Pointer Authentication Codes. There are also several data protections and a built-in Secure Enclave. Put plainly: Apple have baked security directly into the hardware of their Macs.
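The Pointer Authentication Codes mentioned above are worth a closer look. On Apple silicon, the CPU signs a pointer with a secret key and tucks a short authentication code into the pointer’s unused upper bits; the pointer is verified before use, so a corrupted or attacker-forged pointer is rejected. The sketch below is a toy model of that sign-then-verify idea only, written in Python with a keyed MAC standing in for the hardware; Apple’s real key handling, code sizes, and instruction-level details differ.

```python
# Toy model of the Pointer Authentication idea, for illustration only.
# Real M1/arm64e hardware signs pointers with dedicated CPU instructions and
# per-process keys; here a keyed MAC stands in for that machinery.
import hmac
import hashlib
import secrets

KEY = secrets.token_bytes(16)  # stand-in for the secret signing key

def sign_pointer(ptr: int, context: int) -> int:
    """Pack a short authentication code into the unused upper bits of a 64-bit pointer."""
    mac = hmac.new(KEY, f"{ptr}:{context}".encode(), hashlib.sha256).digest()
    pac = int.from_bytes(mac[:2], "big")        # toy: a 16-bit code
    return (pac << 48) | ptr                    # assume the pointer itself fits in 48 bits

def authenticate_pointer(signed: int, context: int) -> int:
    """Strip and verify the code; refuse to hand back a pointer that fails the check."""
    ptr = signed & ((1 << 48) - 1)
    pac = signed >> 48
    mac = hmac.new(KEY, f"{ptr}:{context}".encode(), hashlib.sha256).digest()
    if pac != int.from_bytes(mac[:2], "big"):
        raise ValueError("pointer failed authentication (possible corruption or forgery)")
    return ptr

return_address = 0x100002000
signed = sign_pointer(return_address, context=0xBEEF)
tampered = signed ^ 0x10                        # an attacker flips a bit in the pointer
try:
    authenticate_pointer(tampered, context=0xBEEF)
except ValueError as err:
    print("blocked:", err)
```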

But the security changes aren’t limited to the M1 chip or even macOS. On February 18, the company released its Platform Security Guide, which details the changes in iOS 14, iPadOS 14, macOS Big Sur, tvOS 14, and more—and there are many. From an optional password manager feature in Safari that looks out for saved passwords involved in data breaches to new digital security for car keys on Apple Watches and the iPhone, the security sweep appears to be comprehensive. In the guide preamble, Apple touts:

“Apple continues to push the boundaries of what’s possible in security and privacy. Apple silicon forms the foundation for…system integrity features never before featured on the Mac. These integrity features help prevent common attack techniques that target memory, manipulate instructions, and use JavaScript on the web. They combine to help make sure that even if attacker code somehow executes, the damage it can do is dramatically reduced.”

Looking at the collective security improvements made to Macs over the last several months—the M1 chips, changes to system extensions, an entirely new endpoint security framework—it appears Apple is making great strides against the recent uptick in cyberattacks. In fact, they should be commended for developing many beneficial technologies that help Mac (and iPhone) users stay more secure. However, not all of the changes are for the better.

Securing themselves in the foot

Unlike their Microsoft counterparts, Apple have been historically far more reticent about working with others—and that extends to third-party antivirus programs and security researchers alike. Their recent security upgrades for macOS and MacBook hardware are, unfortunately, right on brand.

The security components of M1-based Macs are harder to analyze and verify for those looking in from the outside. Security researchers and the tools they use may be thwarted by a less-than-transparent environment. Essentially, the new developments have hidden Mac defenses behind castle walls, which could make it more difficult for users, businesses, or analysts to know whether their devices have been compromised.

In a recent article in the MIT Technology Review, journalist Patrick Howell O’Neill said that Apple’s security succeeds in keeping almost all of the usual bad guys out, but when the most advanced hackers do break in, “Apple’s extraordinary defenses end up protecting the attackers themselves.” Those threat actors with the resources to develop or pay for a zero-day exploit can vault over the Apple security wall and, once inside, move around largely undetected because of its locked-down, secretive nature.

Mac system extensions and the endpoint security framework introduced in Catalina are similarly problematic. Third-party software developers must apply to Apple for system extensions, and they aren’t just handing them out like masks and sanitizer. Once a developer gets a system extension approval from Apple, though, that developer’s software is protected by System Integrity Protection—and it’s nearly impossible to remove the extension unless you’re the owner of the software.

That’s great for legitimate third-party software programs, like Malwarebytes for Mac, especially in protecting against outside threats that might try to disable security software during an attack. But not every company that applies for system extensions is legitimate.

There have already been a few examples of developers known for cranking out potentially unwanted programs (PUPs) getting extensions from Apple. Because of this, some PUPs can no longer be fully removed by Malwarebytes (or any other security vendor) from Mac computers running Catalina or Big Sur. And while there are some ways that users can manually remove these programs, they are by no means straightforward or intuitive.

No matter the malware

There’s been much fuss made about “actual” Mac malware in the press (and in this very article), but PUPs and adware are a significant issue for Mac computers. Cue the classic rebuttal: “But it’s only PUPs!” While many like to trivialize them, PUPs and adware open the door for more vulnerabilities, making an attack by malicious software even easier. Adware, for example, can host malicious advertising (malvertising), which can push exploits or redirects to malicious websites. If the most recent vulnerability patched by Apple wasn’t already being exploited, that would have been a perfect opportunity for cybercriminals to penetrate the almighty Apple defenses.

As discovered in the State of Malware Report, PUPs represented more than 76 percent of Mac detections in 2020. Adware accounted for another 22 percent. Actual malware designed for Macs is but a small slice of the apple. But it’s a growing slice for businesses with Mac endpoints.

In 2020, Mac threat actors decided to take a page out of the Windows cybercriminal book and turn their attention toward larger organizations instead of individuals. To that end, Mac malware increased on business endpoints by 31 percent in 2020—remote work and all. There may not be as many “actual” malware attacks on Mac endpoints as on Windows, but the share of Macs in business environments has been increasing, especially since the start of the pandemic.

Apple has developed some impressive armor for its Macs, but it doesn’t protect against the full scope of what’s out there. Further, Apple only uses static rule definitions for its anti-malware protection, which means it won’t stop malware it doesn’t already recognize. A security program that uses behavioral detection methods (heuristic analysis), like Malwarebytes Endpoint Detection and Response, has the potential to catch a lot of bad apples that Apple hasn’t seen yet.
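To make that distinction concrete, here is a minimal sketch of the two approaches. It is illustrative only: the hash, event types, and the single behavioral rule are invented, and real detection engines use far richer telemetry and logic.

```python
# Minimal sketch contrasting static signature matching with a behavioral heuristic.
# The hash, process events, and the rule itself are invented for illustration.
import hashlib

KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder hash
}

def static_scan(file_bytes):
    """Signature approach: flag only files whose hash is already on the list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

def behavioral_scan(events):
    """Heuristic approach: flag suspicious behavior, even from unknown files.
    Example rule: a process reads user documents, then opens an outbound connection."""
    read_docs = any(e["type"] == "file_read" and "/Documents/" in e["path"] for e in events)
    exfil = any(e["type"] == "net_connect" and e.get("port") in (80, 443) for e in events)
    return read_docs and exfil

# A brand-new sample has no known hash, so the static scan misses it...
print(static_scan(b"freshly compiled, never-seen-before binary"))   # False
# ...but its runtime behavior still trips the heuristic.
print(behavioral_scan([
    {"type": "file_read", "path": "/Users/alice/Documents/passwords.txt"},
    {"type": "net_connect", "port": 443},
]))                                                                  # True
```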

As time goes on, we’re increasingly in danger of a major attack waged against Macs. There are still a myriad of Mac users who don’t install any third-party security. Fundamentally, Macs still aren’t all that difficult to infect—even with all the bells and whistles. And by closing their systems, Apple is limiting the capabilities of additional third-party security layers to assist in stopping that major attack from doing major damage.

Apple’s days of sitting on the security fence are certainly over. Time will tell if their fortress-like defenses win out, or if they’ll eventually need to depend on their allies for assistance.


Careers in cybersecurity: Malwarebytes talks to teachers and students

Every year, I take part in talks for universities and schools. The theme is often breaking into infosec. I give advice to teens considering pursuing tech as a further area of study. I explain to undergraduates what a typical working day looks like. Sometimes I’m asked to give examples of conference talks. I get to dust off some oldies and give a snapshot of security research circa [insert year of choice here].

I’ve been doing this for about five years now, and it’s incredibly helpful for me and (hopefully) students too. I see real concerns from people who’ll end up being the next wave of researchers, writers, and communicators.

Get involved: benefits for the education space

If you work in security research and are considering doing something similar, you should! It’s helpful for many reasons:

  • It gives you a solid idea of what the next generation find interesting, research-wise. Which bits of tech do they love? What do they think will be an issue down the line? Maybe they prefer virtual machines to bare metal. Perhaps we’ll have an hour-long debate over the rights and wrongs of paying malware authors. You won’t know until you try it!
  • If you do any amount of public speaking, interviews, talks, whatever: it keeps you from going rusty. The pandemic has shut down many conferences and sent more than a few online. If you’re unsure about doing online talks when your background is “real world only”, it’s helpful practice. Want to know what works in virtual spaces? This will definitely help.
  • Schools and universities really get a lot from these events. It’s usually quite difficult for them to get people booked in to speak about things. From experience, educators will absolutely appreciate any outreach or help you can give their students. It’s a win-win for everybody.

“I thought it was all code”

Something I emphasise is that information security has a huge number of different backgrounds in its overall makeup. I’ve met many despondent students who felt their coding skills weren’t up to scratch. The students’ impression is that everything is 100% coding or programming.

It’s true, coding and programming can be incredibly difficult things to understand. Skills like reverse engineering malware can take years to perfect. There’s no guarantee of being able to keep pace with malware developments in the meantime, either.

Well, there are lots of fun ways issues like that can be addressed.

Even so, “I thought you had to be a qualified coder / programmer” is something I hear all the time. If not that, they often feel a lack of skills in one area negates everything else they’re good at.

It’s quite a relief for them to find out this doesn’t have to be the case.

The myth of the “expert at everything”

In media, security researchers are often presented as experts on all topics imaginable. The reality is people excel in their own little niche field and have a variable skillset for everything else. Experienced security pros know when to ask for help, and there’s absolutely nothing wrong with it. You really don’t have to know everything, all the time. This is another concern relayed to me by many students over the last few years.

The many paths to the security industry

When doing these sessions, a few key talking points come up time and again. Quite a few students have to be convinced that lots of security folk don’t necessarily even have technology qualifications. There are also many roles that don’t involve any coding whatsoever. However, these are roles students haven’t considered, often because they had no idea they existed.

Some of the deepest hardware knowledge I’ve come across is from people in sales teams. Do you like the idea of public-facing research? There’s blog and press opportunities for that. Is the idea of promoting your company’s research to a wide audience an exciting one? There’s probably a spot in marketing for you. At the furthest reaches of “no tech involvement whatsoever”, security organisations need people to design things. Maybe it’s time to dust off that design degree and start sending in your resume?

Whatever your skillset as a student, there is absolutely something you can do. That talent of yours will be a benefit to an organisation in the security space.

Thinking outside the box

One of the most interesting things about fresh talent is watching it pull apart new technology and highlight unforeseen dangers.

Look at some of the things we dig into on our very own blog. Web beacons, virtual/augmented reality, the Internet of Things, deepfakes, malign influence campaigns, securing accounts after someone’s died, and much more. The industry as a whole is more open to new / different research than it’s ever been. It has to be, or bad people will be getting away with virtual murder while everyone twiddles their thumbs.

In the last few days we’ve seen a spate of art-related NFT theft. Try telling someone that 12 months ago and see what the reaction would be. Someone out there has an idea for a solution to this kind of problem. They just don’t know it yet. It’s up to us to encourage them and see what kind of cool solutions they can come up with.

Talking with teachers: Holly Smylie

Computer Science teacher Holly Smylie, who sat in one of our talks, has given some insight into how the industry can help students:

Open days and talks are great in terms of giving students access to positive role models from the industry such as yourself. It essentially gives them an exposure to experiences of infosec that they may otherwise not have had from their environment, meaning that it can make a massive difference in terms of their future career aspirations and later life chances. 

I think that one of the greatest take away from your talk for my students was that although qualifications are obviously important, they aren’t the be all and end all. There are still other routes into the sector without the “usual qualifications”. It allows them to think beyond an exact route into something they want to know more about. Also, I think that there is more that our industry could do in terms of addressing the gender imbalance – whether this is providing talks or networking between students and female experts in the industry.

Again, these role models for students at school and even uni-level via talks, open days, visiting companies, etc can often be the tipping point for female students who do not believe that they would succeed in this industry (as it is still very male dominated). Again, I think this just fits in with broadening female students’ horizons to the world of infosec and giving them confidence that they will be just as valued as our male colleagues.

Closing thoughts

According to some predictions, there’s a huge number of jobs which will go unfilled into the next year. I’m not convinced the numbers will be as big as that. Even so, helping students of all ages with paths into the security industry can only be a good thing. The pandemic hasn’t made technology learning easy over the last year. I’m glad we at Malwarebytes have been able to pitch in and give students some possible careers to think about.

Special thanks to Holly, and the schools and universities we’ve run these sessions for. We wish your students success in the years to come.


ProxyLogon PoCs trigger a game of whack-a-mole

As we reported recently, the use of the Microsoft Exchange Server ProxyLogon vulnerabilities has gone from “limited and targeted attacks” to a full-size panic in no time.

Criminal activities, ranging in severity from planting crypto-miners to deploying ransomware, and conducted by numerous groups, have quickly followed the original exploitation by APT groups to spy on organizations.

With the focus of many security and IT professionals now firmly fixed on the world’s vulnerable Exchange servers, proof-of-concept exploits (PoCs) have surfaced left and right.

Some argue that since some attackers already possess exploit code, it’s only right for defenders to have it too, so they can test their systems by simulating what those attackers might do. Others say that PoC code doesn’t redress the balance because it’s a leg up for everyone, including criminals who haven’t created their own exploits yet.

And while most researchers deliberately omit specific components of a PoC, others feel compelled to publish full working exploits, enabling even the most technically challenged script-kiddies to use them maliciously.

All of which explains why some people in the computer security community are busy trying to publish ProxyLogon PoCs, while others are trying to stop them.

Purposely broken exploit

Bleeping Computer reports that a security researcher has released a proof-of-concept exploit that requires slight modification to install web shells on Microsoft Exchange servers vulnerable to the actively exploited ProxyLogon vulnerabilities.

“Firstly, the PoC I gave can not run correctly. It will be crashed with many of errors. Just for trolling the reader,” Jang told BleepingComputer.

Soon after the PoC was published, the publication reports that Jang received an email from Microsoft-owned GitHub stating that the PoC was being taken down as it violated the site’s Acceptable Use Policies.

GitHub under fire

GitHub received a ton of criticism for removing the proof-of-concept exploit. In a statement, the site said it took down the PoC to protect devices that are being actively exploited.

“We understand that the publication and distribution of proof of concept exploit code has educational and research value to the security community, and our goal is to balance that benefit with keeping the broader ecosystem safe. In accordance with our Acceptable Use Policies, GitHub disabled the gist following reports that it contains proof of concept code for a recently disclosed vulnerability that is being actively exploited.”

The main reason for criticism was that the vulnerability has a patch, so Microsoft had no reason to have the PoC removed. Some researchers also claimed GitHub has a double standard, since it has allowed PoC code for patched vulnerabilities affecting other organizations’ software in the past.

We have some sympathy with Microsoft here: a patch may be available but that doesn’t mean everyone is protected. A patch is only useful once it has been applied, and tens of thousands of servers are still unpatched.

Reverse engineering an exploit

To demonstrate how researchers go about turning a vulnerability into an exploit, Praetorian posted their methodology for a ProxyLogon attack chain.

By examining the differences (diffing) between a pre-patch binary and post-patch binary they were able to identify exactly what changes were made. These changes were then reverse engineered to assist in reproducing the original bug.
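As a rough illustration of the idea only (not Praetorian’s actual workflow, and not the real Exchange patch), the sketch below diffs a pre-patch and post-patch disassembly listing with Python’s difflib. The instructions and function names are invented; real patch diffing works on the actual binaries with purpose-built tooling.

```python
# Rough illustration of patch diffing. The disassembly listings and names are
# invented; real workflows diff the actual pre- and post-patch binaries.
import difflib

pre_patch = """\
mov  rax, [rbp+request_cookie]
call parse_backend_target
mov  rdi, rax
call forward_request
""".splitlines()

post_patch = """\
mov  rax, [rbp+request_cookie]
call parse_backend_target
test rax, rax
jz   reject_request        ; validation added by the patch
mov  rdi, rax
call forward_request
""".splitlines()

# Lines that appear only in the post-patch listing point straight at the fix,
# and therefore at the code path the original vulnerability abused.
for line in difflib.unified_diff(pre_patch, post_patch,
                                 fromfile="pre-patch", tofile="post-patch", lineterm=""):
    print(line)
```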

Cat is out of the bag

The problem with removing PoCs from a platform like GitHub is that the code will just re-surface elsewhere. It is very hard to make the Internet, as a collective brain, forget something.

Even if the author doesn’t post it somewhere else, there will always be that individual that has already copied the content before it was removed. Or another who is inspired to try to create their own.

For Malwarebytes Labs, one size doesn’t fit all. Sometimes a PoC can help to improve security, and sometimes some restraint is needed. Each situation needs to be judged on its merits.

The current situation is a crisis, and despite efforts to take down the emerging ProxyLogon PoCs, or neuter them by making them less than fully functional, you can bet they will be put to use by criminals while the owners of the remaining unpatched systems are still scrambling to save what they can.


Stay safe, everyone!


Royal Mail scam says your parcel is waiting for delivery

Expecting a delivery? Watch out for phishing attempts warning of held packages and bogus shipping fees. This Royal Mail delivery scam begins with a text message out of the blue, claiming:

Your Royal Mail parcel is waiting for delivery. Please confirm the settlement amount of 2.99 GBP via:

Uk(dot)royalmail-bill(dot)com

Lots of folks may assume this text message is genuine, along with the URL. This would be a mistake. What we have is a simple but effective phish. It takes advantage of several real-world factors to ensure it’s possibly a bit more believable than other missives landing in mailboxes.
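Part of what makes the URL convincing is that it contains the brand name, even though “royalmail-bill.com” is an entirely different registered domain from “royalmail.com”. The sketch below shows the kind of check a cautious reader (or a filter) can apply. It is deliberately simplified: the domain parsing ignores multi-part suffixes such as .co.uk, and the list of legitimate domains is our own example, not Royal Mail’s.

```python
# Rough sketch of a lookalike-domain check. The parsing is deliberately simplified
# (it doesn't handle multi-part TLDs such as .co.uk); production code would use a
# public-suffix-list library instead.
from urllib.parse import urlparse

LEGITIMATE = {"royalmail.com"}   # example allow-list, not an official one

def registered_domain(url):
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

for link in ("https://www.royalmail.com/track-your-item",
             "https://uk.royalmail-bill.com/pay"):
    ok = registered_domain(link) in LEGITIMATE
    print(link, "->", "looks legitimate" if ok else "NOT the real Royal Mail domain")
```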

What are they up to? Let’s find out.

“If you do not pay this your package will be returned”

The link leads to a fake Royal Mail page which as good as repeats the message from the text, with one important addition:

If you do not pay this your package will be returned to sender

[Screenshot: the fake Royal Mail page, warning that the package will be returned to sender]

It doesn’t mention how long is left until the package is returned. (There’s nothing like a bit of sudden pressure to make people jump through some hoops.)

The phishing page has two sections. The first asks for a lot of personal details like name, address, phone number, and email address. Clicking the continue button leads to a request for payment information, in order to pay the non-existent fee.

[Screenshot: the fake Royal Mail payment details form]

If the victim continues, the phisher has both their personal information and their payment card details.

Why this phishing attack works

This is a smart scam, for a number of reasons.

  1. The phish carries the usual markers of urgency and a request for information. It also doesn’t provide any clue about what’s in the non-existent package or who it’s from, tweaking victims’ fear of missing out, while promising to make that information available for a reasonably small and realistic fee.
  2. The endless pandemic ensures huge numbers of people are buying everything online. It’s not uncommon for households to have a steady army of delivery people at the door. A week’s shopping, clothes, entertainment items, schoolbooks for the kids, and more besides are all conveyor-belting their way into homes daily. It’s quite easy to forget which parcels have been ordered and which have already arrived.
  3. Delivery companies long ago abandoned the practice of texting from a single “official” number, and numbers are easy to spoof anyway. If you’re waiting on a parcel, you could get a message from pretty much any number at all, including the driver’s personal mobile, so checking whether the number looks official is no help.
  4. In the UK, Brexit is causing no end of confusion over delivery charges. People and organisations simply don’t seem to know what to expect, and this kind of phishing scam plays off that confusion to the max. If you’re waiting on something from outside the UK and find out a parcel is almost within reach? It’s likely you may be tempted to fill in the payment information request so as not to risk having the package returned to sender.

Next steps

If you or anyone you know has been caught by this, contacting banks or credit card companies is a priority. This would also be a good time to explore our in-depth look at phishing tactics. It’s a particularly unpleasant scam to be caught out by, when a majority of people are reliant on postal services. If you’re in doubt over the status of a parcel, go directly to your delivery service’s website. What you’ll lose in time, you’ll more than make back in terms of your bank account remaining safe and sound.


How your iPhone could tell you if you’re being stalked

The latest iOS beta suggests that Apple’s next big update will include an iPhone feature that warns users about hidden, physical surveillance of their location. The feature detects AirTags, Apple’s answer to trackable fobs made by Tile, and serves to block the potential abuse of the much-rumored product.

While the feature represents great potential, digital surveillance experts said that they were left with more questions than answers, including whether surveilled iPhone users will be pointed to helpful resources after receiving a warning, how the feature will integrate with non-Apple products—if at all—and whether Apple coordinated with any domestic abuse advocates on the actual language included in the warnings.

Erica Olsen, director of Safety Net at the National Network to End Domestic Violence, emphasized the sensitivities of telling anyone—particularly domestic abuse survivors—about unknown surveillance that relies on a hidden device.

“It could be extremely scary to get a notification about a device and have no idea where to start to locate and disable it,” Olsen said. “That’s not to say that it’s a bad thing; it just needs to be thorough.”

Apple did not respond to questions regarding the language of its notifications or about the company’s potential outreach to external domestic abuse advocates in crafting the feature. Members of the Coalition Against Stalkerware—of which Malwarebytes is a founding partner—said they were open to collaborating with Apple on the feature.

New “Item Safety Alerts”

According to 9to5Mac, the latest beta version for iOS 14.5 includes an update to the “Find My” app, which helps users locate iPhones, iPads, iPod Touches, and Mac computers that may have been lost or stolen. Importantly, while each of those devices can run the Find My app for their respective operating systems, it is only the iPhone version of the app—as witnessed in the iOS 14.5 beta—that includes a new setting called “Item Safety Alerts.”

The setting is turned on by default, and, according to Apple blogger and iOS developer Benjamin Mayo, any attempts to turn off the setting will result in a warning that reads:

“The owner of an unknown item will be able to see your location and you will no longer receive notifications when an unknown item is found moving with you.”

As the iOS update is still in beta, there is limited information, and the “notifications” referenced in the Item Safety Alerts advisory have not been revealed. However, the advisory itself reveals the purpose of the alerts: to warn iPhone users when a separate, unknown tracked device is in close, frequent proximity to their iPhone.

In theory, this type of surveillance has been possible for years. By abusing the intentions of Apple’s Find My app, a stalker or a domestic abuser could plant a device that can be tracked by Find My, such as an iPhone or an iPod touch, onto a victim and track their movements. But, while this type of location monitoring was possible, it also had some obvious obstacles. One, purchasing a capable device could be expensive, and two, the actual devices that can be tracked are rather easy to find, even to unsuspecting victims. After all, it isn’t every day that someone just happens to find an entirely different phone in their gym bag.

Those obstacles could fade away, though, if Apple follows through on releasing its next, rumored product.

According to multiple tech news outlets, Apple will release physical location-tracking tags in 2021, dubbed “AirTags.” The devices could directly compete with the company Tile, which makes small, physical squares of plastic that can be slipped into personal items like luggage, purses, backpacks, wallets, and other important items that could be lost or stolen.

Unfortunately, the smaller a location-tracking device is, the easier it is to use it against someone without their consent, as revealed by a woman in Houston who said her ex stalked her after planting a Tile device in her car. The woman, who remained anonymous for her safety, told ABC 13 news in an interview:

“It was shocking. In a million years, it never occurred to me that could be possible and instantly everything made sense. I think that’s what’s important that for people who are in a domestic violence situation or stalking situation to know that should be a consideration.”

The iOS 14.5 beta feature, then, makes much more sense when accounting for a potential future with Apple’s AirTags. Malicious users could purchase AirTags and sneak them into a person’s purse or their backpack without their knowledge.

The new “Item Safety Alerts” could curb that type of abuse, though, warning users about unrecognized devices that are located in the same vicinity as their current device, but are not registered through their own Find My app.

Important considerations for Apple

Several representatives from members of the Coalition Against Stalkerware said that Apple’s new feature has real potential to help users, but without more details, many questions remain.

Tara Hairston, head of public affairs for North America at Kaspersky, said she wanted to know more about how Find My could work with third-party devices, so that clandestine surveillance could be detected beyond the use of Apple’s rumored AirTags, and beyond the use of an iPhone, too. According to 9to5Mac, the updates to Find My include a new “Item” tab to track third-party accessories, but questions from Malwarebytes Labs to Apple about the extent of that cross-functionality went unanswered.

Hairston also expressed concerns about the development of the feature.

“A question I have is whether Apple has discussed the alert’s language with professionals and advocates that work with domestic violence survivors to ensure that it is not re-traumatizing for them,” Hairston said. “Furthermore, does Apple plan to provide information regarding what someone should do if they confirm that they are being tracked, especially if they are a survivor? Accounting for these types of safety considerations would result in more holistic support for vulnerable populations.”

These are routine considerations for the Coalition Against Stalkerware, which was intentionally built as a cross-disciplinary group to help protect users from the threats of stalkerware. For the same reason that the coalition’s domestic violence advocates are not the experts on technological sample detection, the coalition’s cybersecurity vendors are not the experts on protecting survivors from domestic abuse. But when the members work together, they can do informed, great things, like developing a new way to detect stalkerware that works outside of a compromised device—a critical need that many cybersecurity vendors did not know about until joining the coalition.

At Malwarebytes Labs, we await the release of Apple’s feature, and we are eager to learn about the work that went into it. Any company taking steps to limit non-consensual surveillance is a good thing. Let’s work together to make it great.


The Malwarebytes 2021 State of Malware report: Lock and Code S02E04

This week on Lock and Code, we discuss the top security headlines generated right here on Labs. In addition, we tune in to a special presentation from Adam Kujawa about the 2021 State of Malware report, which analyzed the top cybercrime goals of 2020 amidst the global pandemic.

If you just pay attention to the numbers from last year, you might get the wrong idea. After all, malware detections for both consumers and businesses decreased in 2020 compared to 2019. That sounds like good news, but it wasn’t. Behind those lowered numbers were more skillful, more precise attacks that derailed major corporations, hospitals, and schools with record-setting ransom demands.

Tune in to hear about how cybercrime has changed, along with examples of some of the most nefarious malware upgrades, on the latest episode of Lock and Code, with host David Ruiz.

https://feed.podbean.com/lockandcode/feed.xml

You can also find us on the Apple iTunes store, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Stay safe!


150,000 Verkada security cameras hacked—to make a point

Hackers were able to gain access to camera feeds from Verkada, a tech company that specializes in video security and physical access control, to demonstrate how prevalent surveillance is, reports say.

Unfortunately, it also exposed the inner workings of hospitals, clinics, and mental health institutions; banks; police departments; prisons; schools; and companies like Tesla and Cloudflare, after at least 150,000 cameras were compromised as part of this demonstration.

Verkada is still investigating the scale and scope of the breach.

The attack

Swiss hacker and member of the hacking collective “APT-69420 Arson Cats,” Tillie Kottmann, claimed credit for the Verkada hack. When asked why, they told Bloomberg: “lots of curiosity, fighting for freedom of information and against intellectual property, a huge dose of anti-capitalism, a hint of anarchism—and it’s also just too much fun not to do it.”

Kottmann was also credited with breaching Intel in August 2020 and Nissan Motors in January 2021.

All of Kottmann’s tweets related to the Verkada hack contain the #OperationPanopticon hashtag, which references the panopticon, a prison architecture that allows a supervisor to have full view of its inmates without them knowing that they’re being watched. It is also a metaphor used to illustrate surveillance technology.

It isn’t clear if this operation is a name for just the Verkada hack, or a name for a series of breaches against surveillance companies that could affect millions, with Verkada just the first company to be targeted and breached.

Speaking to Bloomberg, Kottmann said this incident “exposes just how broadly we’re being surveilled, and how little care is put into at least securing the platforms used to do so, pursuing nothing but profit. It’s just wild how I can just see the things we always knew are happening, but we never got to see.”

Twitter suspended Kottmann’s account after they leaked Tesla security footage.

When asked how they were able to breach Verkada, Kottmann claimed they found an administrator account credential with “super admin” rights that had, for some reason, been left publicly exposed online, giving them access to any camera belonging to any of the company’s clients.

IPVM reports that a source “with direct knowledge” discovered that “basically every team member” at Verkada, including executives, had super-admin privileges.

IPVM also reports that super-admin access went further than simply letting the hackers see whatever they wanted:

Not only did Super Admin provide access to video feeds … it provided access to the root shell inside the cameras running inside each customer’s facility.

The response

In a statement about the incident, Verkada confirmed IPVM’s reporting, admitting that attackers had “gained access to a tool that allowed the execution of shell commands on a subset of customer cameras”.

According to the company, attackers gained access via a Jenkins server “used by our support team to perform bulk maintenance operations on customer cameras”, which gave them access to “video and image data from a limited number of cameras from a subset of client organizations”. Attackers also gained access to lists of client account administrators and sales orders.

Seeking to reassure customers, the company said it had now secured its systems.

First, we have identified the attack vector used in this incident, and we are confident that all customer systems were secured as of approximately noon PST on March 9, 2021. If you are a Verkada customer, no action is required on your part.

This isn’t Verkada’s first bout with negative publicity. In October 2020, three employees were fired after they abused Verkada’s own video surveillance system to capture and pass on media of female colleagues with sexually explicit jokes in one of the company’s Slack rooms.

Vice’s Motherboard was able to interview a Verkada employee who was unimpressed by the whole incident, saying “the big picture for me having worked at the company is that it has opened my eyes to how surveillance can be abused by people in power.”

The fallout

The hack raises serious questions about who had access to what, and why, and highlights both the security and privacy risks that come with admin and super-admin accounts. Simply, the more administrators there are, the more targets there are.

Administrator or super-administrator accounts should only be issued to people who need them to do their job, and those people should only use them if an account with lower privileges can’t be used. They should never be used for convenience.
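As a sketch of what such a review might look like in practice, the snippet below flags super-admin accounts whose role doesn’t obviously need the privilege, or whose elevated rights haven’t been used recently. The account data, role names, and 90-day threshold are invented for illustration; a real audit would pull this from the identity provider.

```python
# Hypothetical least-privilege audit: flag super-admin accounts that either belong
# to roles that shouldn't need the privilege or haven't used it recently.
# Account data, role names, and the threshold are invented for illustration.
from datetime import date

accounts = [
    {"user": "alice", "role": "support engineer", "super_admin": True, "last_elevated_use": date(2020, 9, 1)},
    {"user": "bob",   "role": "sales",            "super_admin": True, "last_elevated_use": None},
    {"user": "carol", "role": "security lead",    "super_admin": True, "last_elevated_use": date(2021, 3, 8)},
]

ROLES_THAT_NEED_SUPER_ADMIN = {"security lead"}
STALE_AFTER_DAYS = 90

for account in accounts:
    if not account["super_admin"]:
        continue
    reasons = []
    if account["role"] not in ROLES_THAT_NEED_SUPER_ADMIN:
        reasons.append("role does not require super-admin")
    last = account["last_elevated_use"]
    if last is None or (date.today() - last).days > STALE_AFTER_DAYS:
        reasons.append("elevated rights unused for 90+ days")
    if reasons:
        print(f"review {account['user']}: " + "; ".join(reasons))
```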

Speaking to Bloomberg about the consent and privacy implications, Eva Galperin, the Electronic Frontier Foundation’s director of cybersecurity, made the point that companies who use a network of cameras may not expect that someone other than the company’s security team are watching them.

“There are many legitimate reasons to have surveillance inside of a company,” Galperin said in a Bloomberg interview. “The most important part is to have the informed consent of your employees.”

Finally, it should not be forgotten that Verkada and its customers were the victims of a crime. Accessing other people’s computers without their consent is still illegal, no matter how good your point is.


Ransomware is targeting vulnerable Microsoft Exchange servers

The Microsoft Exchange attacks using the ProxyLogon vulnerability, and previously associated with the dropping of malicious web shells, are taking on a ransomware twist. Until now, the name of the game has been compromise and data exfiltration, with a bit of cryptomining on the side.

To summarise: In ten days we’ve gone from “limited and targeted attacks” by a nation-state actor, to countless attacks by a number of groups against anyone with a vulnerable server. And in the space of a week the severity has escalated from unused web shells to ransomware. Depending on how the uptake in patching goes, this could well evolve again.

The danger of this pivot to ransomware is the sheer number of potential targets. Needless to say, it is essential that you install the Exchange updates required to keep your systems safe from harm.

The scale of the problem

Internet intelligence group Shadowserver has attempted to quantify the problem of exposed Exchange servers by scanning the Internet looking for vulnerable machines.

It has made two startling conclusions. The first is that as many as 68,500 servers may have been compromised by the so-called Hafnium threat actor before Microsoft released patches for its Exchange zero-days.

The total dataset distributed includes over 68500 distinct IP addresses. Of these IP addresses, there is high certainty that 8911 IP addresses were compromised. However, the remaining IP addresses included in the report are also very likely compromised too, since they were targeted with the OWA 0-day exploit before Microsoft publicly released patches for Exchange.

The group’s second insight is that at the time of its most recent scan, three days ago, 64,088 unique IP addresses were assessed as “still having exposed Microsoft Exchange Server vulnerabilities”. According to the group, the USA has by far the largest population of vulnerable servers, with almost 17,500.

The group’s research partner, the Dutch Institute for Vulnerability Disclosure, reported separately that nearly 20% of the 250,000 servers it scanned were vulnerable.

Whichever way you slice it, there are still a lot of vulnerable Exchange servers out there, and history suggests it will take a considerable time to patch them all.

With that out of the way: what, exactly, is the ransomware angle to this latest round of ProxyLogon attacks?

Introducing DearCry ransomware

Bad actors are now using Exchange exploits to gain entry to networks, before manually running DearCry ransomware.

This is an indicator of how easy Exchange exploitation is becoming. For years, targeted ransomware attacks have been synonymous with brute-force attacks on RDP ports. It’s such a common tactic, it’s easy to forget that criminals were simply using the easiest method of entry available.

The ransomware, first reported by BleepingComputer, has been dubbed “DearCry”. This is because it uses “DEARCRY!” as a file marker inside every encrypted file.
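That marker gives responders a quick way to triage a suspect server. The sketch below scans a directory for files carrying the “DEARCRY!” string; the path is a made-up example, and exactly where the marker sits can vary by sample, so it checks the first few kilobytes rather than a fixed offset.

```python
# Quick triage sketch: look for the "DEARCRY!" marker the ransomware writes into
# files it has encrypted. The directory below is a hypothetical example path.
from pathlib import Path

MARKER = b"DEARCRY!"

def looks_encrypted(path):
    try:
        with path.open("rb") as handle:
            return MARKER in handle.read(4096)   # check the first few KB only
    except OSError:
        return False

suspect = [p for p in Path("/data/mailstore").rglob("*") if p.is_file() and looks_encrypted(p)]
print(f"{len(suspect)} files carry the DEARCRY! marker")
```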

Malwarebytes and Microsoft have both independently confirmed that ProxyLogon is the entry vector for DearCry.

At the time of writing, it seems there is no way to decrypt the files without payment. As ever, prevention is better than cure, but if you are attacked successfully you’ll wish you’d secured your off-site backups and put a disaster recovery plan in place.

Once encryption takes place, the inevitable ransom note is deployed.

With backups and plans to restore them in place, victims can choose to ignore the attackers and carry on as normal. However, it is possible copies of the compromised files remain in the hands of the ransomware authors. This is how you get leaks further down the line.

According to BleepingComputer, a demand for $16,000 was made to one victim for the safe decryption of their files. There isn’t enough information available at this stage to determine if $16,000 is the going rate for DearCry attacks, or if there’s some variance to the amounts requested.

What’s certain is that other ransomware gangs will happily charge vastly greater sums, and if Exchange exploitation proves easier than RDP access, they will use it.

It’s time to update

If you haven’t already patched your systems, please do so right away and search your systems for signs of compromise.

Malwarebytes detects web shells planted on compromised Exchange servers as Backdoor.Hafnium. When the ransomware was still unknown, DearCry attacks would have been detected proactively as Malware.Ransom.Agent.Generic.

[Screenshot: the detection as shown in the Malwarebytes Nebula console]

We’ll update the timeline in our first article on this topic as more developments and fresh information comes to light.


Police credit “unlocked” SKY ECC encryption for organized crime bust

At the moment, I’m really torn, and I need your help. Let me tell you what is going on. I read these statements and they can’t both be true, right?

“The continuous monitoring of the illegal Sky ECC communication service tool by investigators in three countries has provided invaluable insights into hundreds of millions of messages exchanged between criminals.”

“SKY ECC platform remains secure and no authorized Sky ECC device has been hacked.”

I’ll give you some more background and then you can help me decide.

Arrests made

It was reported today that Belgian police invaded 200 locations and arrested 48 people (this was a big headline in Belgium). Two of those people are suspected of being corrupt cops in the Antwerp police force. The police stated they were able to make these arrests because they were able to intercept and read messages on encrypted phones provided by SKY ECC.

Europol claims “invaluable insights”

Europol released a statement about the background of these actions, which started:

Judicial and law enforcement authorities in Belgium, France and the Netherlands have in close cooperation enabled major interventions to block the further use of encrypted communications by large-scale organised crime groups (OCGs), with the support of Europol and Eurojust. The continuous monitoring of the illegal Sky ECC communication service tool by investigators in the three countries involved has provided invaluable insights into hundreds of millions of messages exchanged between criminals. 

It went on to describe the operations as “an essential part of the continuous effort of judiciary and law enforcement in the EU and third countries to disrupt the illegal use of encrypted communications”.

SKY ECC says it “remains secure”

Sky ECC advertises itself as “most secure messaging platform you can buy”, and has around 170,000 users worldwide.

In response to the articles published in the Dutch and Belgian press, SKY ECC let the public know that all allegations that Belgian and/or Dutch authorities have cracked or hacked SKY ECC encrypted communication software are false, stating:

SKY ECC is built on “zero-trust” security principles which assumes every request as a breach and verifies it by employing layers of security to protect its users’ messages. All SKY ECC communications are encrypted through private tunnels via private distributed networks. All messages are encrypted with today’s highest level of encryption.

[Screenshot: SKY ECC’s statement, as published on the SKY ECC website]

Unlocked encryption

Are you still with me? Now, if we think hard, there are some scenarios where both statements could be true. Maybe the police are talking about analysing unencrypted meta data, or had access to a limited number of decryption keys. Or maybe they had someone on the inside feeding them information. But those go out of the window when we read the Europol statement and find the sentence “By successfully unlocking the encryption of Sky ECC…”

Who can you trust?

“Who do we trust?” is an important question in many security and privacy related matters. It may be the way I was raised, but I tend to trust the police in these matters, even if not every police force is equipped to deal with modern cybercrimes.

Of course, there is a chance that whoever drafted the Europol statement made an error, or that “unlocking the encryption” is a deliberate red herring to protect another source. But I cannot overlook that Europol and Eurojust (European Union Agency for Criminal Justice Cooperation) happen to have an excellent track record in this field.

SKY ECC on the other hand has every reason to deny it has been breached. Proof that it has been could be devastating for a company whose customers are invested in trusting its equipment and services.

A third possibility

There is a third possibility too, raised in the SKY ECC statement. In it, the company says (my emphasis) “distributors in Belgium and the Netherlands brought to our attention that a fake phishing application falsely branded as SKY ECC was illegally created, modified and side-loaded onto unsecure devices, and security features of authorized SKY ECC phones were eliminated in these bogus devices which were then sold through unauthorized channels.”

If the police hacked, or even created, an insecure imposter device they can monitor—one that fools potential criminals into believing they have the real thing—then it is possible for both sides to be telling at least a partial truth.

Is the proof in the pudding?

Arrests in these countries are not made lightly, so the police force must have had some information to go on. And the sheer number of arrests made leads us to believe that this was not the result of the police having access to one device (one server may be a more likely option, or many fake devices).

As you can tell, I seem to have made up my mind along the way. But we appreciate your thoughts on the matter.

If any side decides to reveal more information, we will keep you updated.

Stay safe, everyone!


5 common VPN myths busted

Virtual Private Networks (VPNs) are popular but often misunderstood. There are many misconceptions about them—misconceptions that may be stopping people from adding a useful layer to their security and privacy defenses.

So, let’s do some myth busting.

1. VPNs are for illegal activity

Some people think that VPNs are only useful for doing things like torrenting, accessing geo-locked content, or getting around work/school/government firewalls. While they certainly are used for those activities, that doesn’t mean that’s all they’re good for or that everyone who uses a VPN is planning to do something illegal or immoral with it.

As awareness of corporate surveillance and criminal hacking has grown, so have concerns about personal privacy. Many people believe that it should be their choice when and how they give up some of their privacy, and don’t want prying eyes on their normal, legitimate behavior. A VPN gives them more control over what they share and with whom, and a little less to worry about.

2. I don’t need a mobile VPN

Some people think they don’t need a mobile VPN because their carrier looks after their security, or has a lot to say about privacy.

While it is true that carriers and ISPs have a secure telecom infrastructure because they are bound by law to do so, many ISPs have shown they are also very interested in tracking their subscribers and profiting off their data. Even though it feels like you’re protected behind that special IP address that is automatically assigned to you by the ISP when you take up the service, your ISP can, themselves, see exactly:

  • When you log on and off
  • The websites you visit
  • How much time you spend on those sites.
  • and more… depending on your habits and the apps you use

Using a VPN shifts your trust from your ISP to your VPN provider, so you can choose to use your carrier’s secure telecom infrastructure without giving your carrier access to your browsing data.

3. VPNs will slow down my internet connection

Since a VPN sends your network traffic on a bit of a detour, it has to travel further than it would without one. Technically that means your traffic is slower, but that doesn’t necessarily mean it has to be noticeably slower. Most VPN providers let you choose a server near you, which keeps the detour small.

[Screenshot: choosing from a list of nearby VPN server locations]

Also, encrypting and decrypting data takes time. However, there is a benefit to using a next-gen VPN with modern encryption compared to older VPNs. The technology has improved over the years and VPNs have become faster and more efficient.

4. My VPN won’t let me watch Netflix

Many streaming sites and apps don’t like it when you use a VPN to watch their content and some just outright ban it (because they have an obligation to lock certain content based on region), which leaves some people believing they have to choose between privacy or entertainment.

Now, if getting around locked content is not your main purpose for using a VPN, simply look for one that offers a bypass feature, otherwise known as a split tunnel. This basically tells your VPN that certain apps get a pass and can connect without being encrypted, thus “splitting” the traffic in two: one path that is private and one that is not.

So, you can have your banking app running, shielded by your VPN, and watch Netflix.
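As a toy illustration of that per-app decision, the sketch below routes traffic based on a user-defined bypass list. The app names and the list are invented; real VPN clients implement split tunneling at the operating system’s routing or socket level, and every client configures it differently.

```python
# Toy illustration of a split-tunnel decision. App names and the bypass list are
# invented; real VPN clients make this choice at the OS routing/socket level.
BYPASS_APPS = {"netflix"}   # apps the user has excluded from the tunnel

def route_for(app):
    return "direct (outside the VPN)" if app.lower() in BYPASS_APPS else "encrypted VPN tunnel"

for app in ("banking app", "Netflix"):
    print(f"{app}: {route_for(app)}")
```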

5. VPNs are for geeks and power users

While this may have been true in the past, VPNs have become easier to use over the years. With the introduction of paid VPNs, vendors have taken it upon themselves to lower prices and improve quality, and to make their products easier to use.

You should expect a straightforward installation process and intuitive functionality that makes using a VPN just as easy as checking your mail or browsing social media (safely).

Stay safe, everyone!
