IT NEWS

Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Adam Kujawa, security evangelist and director of Malwarebytes Labs, about “security hubris,” the simple phenomenon in which businesses are less secure than they believe themselves to be.

Ask yourself, right now, on a scale from one to ten, how cybersecure are you? Now, do you have any reused passwords for your online accounts? Does your home router still have its default password? If your business rolled out new software for you to use for working from home (WFH), do you know if those software platforms are secure?

If your original answer is looking a little shakier now, don’t be surprised. That is security hubris.

Tune in to hear about the dangers of security hubris to a business, how to protect against it, and about how Malwarebytes found it within our most recent report, “Enduring from home: COVID-19’s impact on business security,” on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

Other cybersecurity news:

  • The US government issued a warning about North Korean hackers targeting banks worldwide. (Source: BleepingComputer)
  • A team of academics from Switzerland has discovered a security bug that can be abused to bypass PIN codes for Visa contactless payments. (Source: ZDNet)
  • For governments and armed forces around the world, the digital domain has become a potential battlefield. (Source: Public Technology)
  • A new hacker-for-hire group is targeting organizations worldwide with malware hidden inside malicious 3Ds Max plugins. (Source: Security Affairs)
  • The Qbot trojan evolves to hijack legitimate email threads. (Source: BetaNews)

Stay safe, everyone!

The post Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa appeared first on Malwarebytes Labs.

Apple’s notarization process fails to protect

In macOS Mojave, Apple introduced the concept of notarization, a process that developers can go through to ensure that their software is malware-free (and must go through for their software to run on macOS Catalina). This is meant to be another layer in Apple’s protection against malware. Unfortunately, it’s starting to look like notarization may be less security and more security theater.

What is notarization?

Notarization goes hand-in-hand with another security feature: code signing. So let’s talk about that first.

Code signing is a cryptographic process that enables a developer to provide authentication to their software. It both verifies who created the software and verifies the integrity of the software. By code signing an app, developers can (to some degree) prevent it from being modified maliciously—or at the very least, make such modifications easily detectable.
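Conceptually, a code signature is just a cryptographic hash of the software, signed with the developer's private key. The toy Python sketch below (using deliberately tiny, insecure RSA numbers purely for illustration; real code signing uses certificates chained to Apple's root CA) shows why any modification to signed code is detectable:

```python
import hashlib

# Toy RSA keypair -- tiny primes for illustration only, never for real use.
p, q = 61, 53
n = p * q  # modulus (3233)
e = 17     # public exponent
d = 413    # private exponent (e*d == 1 mod lcm(p-1, q-1))

def sign(code: bytes) -> int:
    # Hash the code, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(digest, d, n)

def verify(code: bytes, signature: int) -> bool:
    # Recompute the hash and compare it against the recovered signature.
    digest = int.from_bytes(hashlib.sha256(code).digest(), "big") % n
    return pow(signature, e, n) == digest

app = b"#!/bin/sh\necho hello"
sig = sign(app)
print(verify(app, sig))  # True -- intact code verifies
# A modified binary fails verification (with overwhelming probability),
# because its hash no longer matches the signed digest:
print(verify(app + b" # tampered", sig))
```

The same principle, scaled up to real key sizes and certificate chains, is what lets macOS detect a maliciously modified app.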

The code signing process has been integral to Mac software development for years. The user has to jump through hoops to run unsigned software, so little mainstream Mac software today comes unsigned.

However, Mac software that is distributed outside the App Store never had to go through any kind of checks. This meant that malware authors could obtain a code signing certificate from Apple (for a mere $99) and use it to sign their malware, enabling it to run without trouble. Of course, once such malware is discovered, Apple can revoke the code signing certificate, thus neutralizing it. However, malware can often go undiscovered for years, as best illustrated by the FruitFly malware, which went undetected for at least 10 years.

In light of this problem, Apple created a process they call “notarization.” This process involves developers submitting their software to Apple. That software goes through some kind of automated scan to ensure it doesn’t contain malware, and then is either rejected or notarized (i.e., certified as malware-free by Apple—in theory).

In macOS Catalina, software that is not notarized is prevented from running at all. If you try, you will simply be told “do not pass Go, do not collect $200.” (Or in Apple’s words, it can’t be opened because “Apple cannot check it for malicious software.”)

The message displayed by Catalina for older versions of Spotify

There are, of course, ways to run software that is not signed or not notarized, but there’s no indication as to how this is done from the error message, so as far as legitimate developers are concerned, it’s not an option.

So how’s that working out so far?

The big question on everyone’s minds when notarization was announced at Apple’s WWDC conference in 2019 was: “How effective is this going to be?” Many were quite optimistic that this would spell the end of Mac malware once and for all. However, those of us in the security industry did not drink the Kool-Aid. Turns out, our skepticism was warranted.

There are a couple tricks that the bad guys are using, in light of the new requirements. One is simple: Don’t sign or notarize the apps at all.

We’re seeing quite a few cases where malware authors have stopped signing their software, and have instead been shipping it with instructions to the user on how to run it.

Unsigned Mac malware

As can be seen from the above screenshot, the malware comes on a disk image (.dmg) file with a custom background. That background image shows instructions for opening the software, which is neither signed nor notarized.

The irony here is that we see lots of people getting infected with this malware—a variant of the Shlayer or Bundlore adware, depending on who you ask—despite the minor difficulty of opening it. Meanwhile, the installation of security software on macOS has gotten to be so difficult that we get a fair number of support cases about it.

The other option, of course, is for threat actors to get their malware notarized.

Notarize malware?! Say it ain’t so!

In theory, the notarization process is supposed to weed out anything malicious. In practice, nobody really understands exactly how notarization works, and Apple is not inclined to share details. (For good reason—if they told the bad guys how they were checking for malware, the bad guys would know how to avoid getting caught by those checks.)

All developers and security researchers know is that notarization is fast. I’ve personally notarized software quite a few times at this point, and it usually takes less than a couple of minutes between submission and receipt of the email confirming successful notarization. That means there’s no human intervention involved in the process, as there is with App Store reviews; whatever the process is, it’s entirely automated.

I’ve assumed since notarization was first introduced that it would turn out to be fallible. I’ve even toyed with the idea of testing this process, though the risk of getting my developer account “Charlie Millered” has prevented me from doing so. (Charlie Miller is a well-known security researcher who created a proof-of-concept malware app and got it into the iOS App Store in 2011. Even though he notified Apple after getting the app approved, Apple still revoked his developer account and he has been banned from further Apple development activity ever since.)

It turns out, though, that all I had to do was wait for the bad guys to run the test for me. According to new findings, Mac security researcher Patrick Wardle has discovered samples of the Shlayer adware that are notarized. Yes, that’s correct. Apple’s notarization process has allowed known malware to pass through undetected, and to be implicitly vouched for by Apple.

How did they do that?

We’re still not exactly sure what the Shlayer folks did to get their malware notarized, but increasingly, it’s looking like they did nothing at all. On the surface, little has changed.

Comparison of two Shlayer installers

The above screenshot shows a notarized Shlayer sample on the left, and an older one on the right. There’s no difference at all in the appearance. But what about when you dive into the code?

Comparison of the code of two Shlayer samples

This screenshot is hardly a comprehensive look into the code. It simply shows the entry point, and the names of a number of the functions found in the code. Still, at this level, any differences in the code are minor.

It’s entirely possible that something in this code, somewhere, was modified to break any detection that Apple might have had for this adware. Without knowing how (if?) Apple was detecting the older sample (shown on the right), it would be quite difficult to identify whether any changes were made to the notarized sample (on the left) that would break that detection.

This leaves us facing two distinct possibilities, neither of which is particularly appealing. Either Apple was able to detect Shlayer as part of the notarization process, but breaking that detection was trivial, or Apple had nothing in the notarization process to detect Shlayer, which has been around for a couple years at this point.
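We can only speculate about what the notarization scan actually looks for, but the first possibility is easy to illustrate. The hypothetical Python sketch below (an assumption purely for illustration; there is no evidence Apple's checks work this way) shows how a scanner that matches exact file hashes is defeated by changing a single byte:

```python
import hashlib

# Hypothetical blocklist of known-bad sample hashes. The sample bytes here
# are stand-ins, not real Shlayer code.
known_bad_hashes = {
    hashlib.sha256(b"OLD_SHLAYER_SAMPLE_BYTES").hexdigest(),
}

def naive_scan(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known-bad entry."""
    return hashlib.sha256(sample).hexdigest() in known_bad_hashes

old_sample = b"OLD_SHLAYER_SAMPLE_BYTES"
new_sample = b"OLD_SHLAYER_SAMPLE_BYTES!"  # one byte appended

print(naive_scan(old_sample))  # True -- the known sample is caught
print(naive_scan(new_sample))  # False -- a single changed byte evades the scan
```

Any exact-match detection, whether of whole files or of individual functions, shares this weakness: the smallest cosmetic change produces a different hash.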

What does this mean?

This discovery doesn’t change anything from my perspective, as a skeptical and somewhat paranoid security researcher. However, it should help “normal” Mac users open their eyes and recognize that the Apple stamp does not automatically mean “safe.”

Apple wants you to believe that their systems are safe from malware. Although they no longer run the infamous “Macs don’t get viruses” ads, Apple never talks about malware publicly, and loves to give the impression that its systems are secure. Unfortunately, the opposite has been proven to be the case with great regularity. Macs—and iOS devices like iPhones and iPads, for that matter—are not invulnerable, and their built-in security mechanisms cannot protect users completely from infection.

Don’t get me wrong, I still use and love Mac and iOS devices. I don’t want to give the impression that they shouldn’t be used at all. It’s important to understand, though, that you must be just as careful with what you do with your Apple devices as you would be with your Windows or Android devices. And when in doubt, an extra layer of anti-malware protection goes a long way in providing peace of mind.

The post Apple’s notarization process fails to protect appeared first on Malwarebytes Labs.

Missing person scams: what to watch out for

Social media has a long history of people asking for help or giving advice to other users. One common feature is the ubiquitous “missing person” post. You’ve almost certainly seen one, and may well have amplified such a Facebook post, or Tweet, or even blog.

The sheer reach and virality of social media is perfect for alerting others. It really is akin to climbing onto a rooftop with a foghorn and blasting out your message to the masses. However, the flipside is an ugly one.

Social media is also a breeding ground for phishers, scammers, trolls, and domestic abusers working themselves into missing person narratives. When this happens, there can be serious consequences.

“My friend is missing, please retweet…”

Panicked, urgent requests for information are how these missing person scams spread. They’re very popular on social media and can quickly spread across the specific geographic demographic the message needs to reach.

If posted to platforms other than Twitter, they may well also come with a few links which offer additional information. The links may or may not be official law enforcement resources.

Occasionally, links lead to dedicated missing person detection organisations offering additional services.

You may well receive a missing person notice or request through email, as opposed to something posted to the wider world at large.

All useful ways to get the word out, but also very open to exploitation.

How can this go wrong?

The ease of sharing on social media is also the biggest danger where missing person requests are concerned. If someone pops up in your timeline begging for help to find a relative who went missing overnight, the impulse to share is very strong. It takes less than a second to hit Retweet or share, and you’ve done your bit for the day.

However.

If you’re not performing due diligence on who is doing the sharing, this could potentially endanger the person in the images. Is the person sharing the information directly a verified presence on the platform you’re using, or a newly created throwaway account?

If they are verified, are they sharing it from a position of personal interest, or simply retweeting somebody else? Do they know the person they’re retweeting, or is it a random person? Do they link to a website, and is it an official law enforcement source or something else altogether?

Even if the person sharing it first-hand is verified or they know the person they’re sharing content from, that doesn’t mean what you’re seeing is on the level.

What if the non-verified person is a domestic abuser, looking for an easy way to track down someone who’s escaped their malign presence? What if the verified individual is the abuser? We simply don’t know, but by the time you may have considered this the Tweet has already been and gone.

When maliciousness is “Just a prank, bro”

Even if the person asking to find somebody isn’t some form of domestic abuser, there’s a rapidly sliding scale of badness waiting to pounce. Often, people will put these sorts of requests out for a joke, or as part of a meme. They’ll grab an image most likely well known in one geographic region but not another, and then share asking for information. This can often bleed into other memes.

“Have you seen this person, they stole my phone and didn’t realise it took a picture” is a popular one, often at the expense of a local D-list celebrity. In the same way, people will often make bad-taste jokes related to missing children. To avoid the gag being punctured early, they may avoid using imagery from actual abduction cases and grab a still from a random YouTube clip or something from an image resource.

A little girl, lost in Doncaster?

One such example of this happened in the last few weeks. A still image appeared to show a small child in distress, bolted onto a “missing” plea for help.

Well, she really was in distress…but as a result of an ice hockey player leaving his team in 2015, not because she’d gone missing or been abducted. A link was provided claiming to offer CCTV footage of a non-existent abduction, though reports don’t say where it took eager clickers.

A panic-filled message supplied with a link is a common tactic in these realms. The same thing happened with a similar story in April of 2019. Someone claimed their 10-year-old sister had gone missing outside school after an argument with her friend. However, it didn’t take long for the thread to unravel. Observant Facebook users noted that schools would have been closed on the day it supposedly happened.

Additionally, others mentioned that they’d seen the same missing sister message from multiple Facebook profiles. As with the most recent fake missing story, we don’t know where the link wound up. People understandably either steered clear or visited but didn’t take a screenshot and never spoke of it again.

“My child is missing”: an eternally popular scam

There was another one doing the rounds in June this year, once more claiming a child was missing. The page, with its seemingly US-centric language, appeared for British users in Lichfield, Bloxwich, Wolverhampton, and Walsall. Mentioning “police captains” and “downtown” fairly gave the game away, hinting at its generic cut-and-paste origins. The fact it cites multiple conflicting dates as to when the kidnapping took place is also a giveaway.

This one was apparently a Facebook phish, and was quite successful in 2020. So much so that it first appeared in March, and then May, before putting in its June performance. Scammers continue to use it because it’s easy to throw together, and it works.

Exploiting a genuine request

It’s not just scammers taking the lead and posting fake missing person appeals. They’ll also insert themselves into other people’s misery and do whatever they can to grab some ill-gotten gains. An example of this dates back to 2013, when someone mentioned that they’d tried to reunite with their long-lost sister via a “Have you seen this person” style letter.

The letter was published in a magazine, and someone got in touch. Unfortunately, that person claimed they were holding the sister hostage and demanded a ransom. The cover story quickly fell apart after they claimed certain relatives were dead who were in fact alive, and the missing person scam was foiled.

Here’s a similar awful scam from 2016, where Facebook scammers claimed someone’s missing daughter was a sex worker in Atlanta. They said she was being trafficked and could be “bought back” for $70,000. A terrible thing to tell someone, but then these people aren’t looking to play fair.

Fake detection agencies

Some of these fakes will find you via your post-box, rather than merely lurking online. There have been cases where so-called “recovery bureaus” drop you a note claiming to be able to lead you to missing people. When you meet up with arranged contacts, though, the demands for big slices of cash start coming. What information they do have is likely publicly sourced or otherwise easily obtainable (and not worth the asking price).

Looking for validation

Helping people is great and assisting on social media is a good thing. We just need to be careful we’re aiding the right people. While it may not always be possible for a missing person alert to come directly from an official police source, it would be worth taking a little bit of time to dig into the message, and the person posting it, before sharing further.

The issue of people going missing is bad enough; we shouldn’t look to compound misery by unwittingly aiding people up to no good.

The post Missing person scams: what to watch out for appeared first on Malwarebytes Labs.

Good news: Stalkerware survey results show majority of people aren’t creepy

Back in July, we sent out a survey to Malwarebytes Labs readers on the subject of stalkerware—the term used to describe apps that can potentially invade someone’s privacy. We asked one question: “Have you ever used an app to monitor your partner’s phone?” 

The results were reassuring.

We received 4,578 responses from readers all over the world to our stalkerware survey and the answer was a resounding “NO.” An overwhelming 98.23 percent of respondents said they had not used an app to monitor their partner’s phone.

Chart Q1 200820

For our part, Malwarebytes takes stalkerware seriously. We’ve been detecting apps with monitoring capabilities for more than six years—now Malwarebytes for Windows, Mac, or Android detects, and allows users to block, applications that attempt to monitor their online behavior and/or physical whereabouts without their knowledge or consent. Last year, we helped co-found the Coalition Against Stalkerware with the Electronic Frontier Foundation, the National Network to End Domestic Violence, and several other AV vendors and advocacy groups.

It stands to reason that a readership comprised of Malwarebytes customers and people with a strong interest in cybersecurity would say “no” to stalkerware—we’ve spoken up about the potential privacy concerns associated with using these apps and the danger of equipping software with high-grade surveillance capabilities for a long time. We didn’t want to assume everyone agreed with us, but the data from our stalkerware survey shows our instincts were right.

No to stalkerware

Beyond a simple yes or no, we also asked our survey-takers to explain why they answered the way they did. The most common answer by far was mutual respect for, and trust in, their partner. In fact, “respect,” “trust,” and “privacy” were the three most commonly used words by our participants in their responses:

“My partner and I share our lives … To monitor someone else’s phone is a tragic lack of trust.”

Many of those surveyed cited the Golden Rule (treat others the way you want to be treated) as their reason for not using stalkerware-type apps:

“I wouldn’t want anyone to monitor me so I therefore I would not monitor them.”

Others saw it as a clear-cut issue of ethics:

“People are entitled to their privacy as long as they do not do things that are illegal. Their rights end at the beginning of mine.”

Some respondents shared harrowing real-life accounts of being a victim of stalkerware or otherwise having their privacy violated:

“I have been a victim of stalking several times when vicious criminals used my own surveillance cameras to spy on my activity then used it to break into my apartment.”

Stalkerware vs. location sharing vs. parental monitoring

Many of those surveyed, answering either yes or no, made a distinction between stalkerware-type apps writ large and location-sharing apps like Apple’s Find My Phone and Google Maps. Location sharing was generally considered acceptable because users volunteered to share their own information and sharing was limited to their current location.

“My wife & myself allow Apple Find My Phone to track each other if required. I was keen that should I not arrive home from a run, she could find out where I was in the case of a health issue or accident.”

Also considered okay by our respondents were the types of parental controls packaged in by default with their various devices. Many respondents specifically mentioned tracking their child’s location:

“It would not be ok with me if someone was monitoring me and I would never do it to anyone else, the only thing I would like is be able to track my child if kidnapped.”

Some parents admitted to using monitoring of some kind with their children, but it wasn’t clear how far they were willing to go and if children were aware they were being monitored:

“The only reason I have set up parental control for my son is for his safety most importantly.”

This is the murky world of parental-monitoring apps. On one end of the spectrum there are the first-party parental controls like those built into the iPhone and Nintendo Switch. These controls allow parents to restrict screen time and approve games and additional content on an ad hoc basis. Then there are third-party apps, which provide limited capabilities to track one thing and one thing only, like, say, a child’s location, or their screen time, or the websites they are visiting.

On the other end of the spectrum, there are apps in the same parental monitoring category that can provide a far broader breadth of monitoring, from tracking all of a child’s interactions on social media to using a keylogger that might even reveal online searches meant to stay private. 

You can hear more about our take on these apps in our latest podcast episode, but the long and the short of it is that Malwarebytes doesn’t recommend them, as they can feature much of the same high-tech surveillance capabilities of nation-state malware and stalkerware, but often lack basic cybersecurity and privacy measures.

Who said ‘yes’ to stalkerware?

Of course, our stalkerware survey analysis would not be complete without taking a look at the 81 responses from those who said “yes” to using apps to monitor their partners’ phones.
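As a quick sanity check, the two numbers reported above are consistent with each other:

```python
# Checking the survey arithmetic: 81 "yes" answers out of 4,578 responses.
total_responses = 4578
yes_responses = 81

no_pct = (total_responses - yes_responses) / total_responses * 100
print(f"{no_pct:.2f}% answered no")  # → 98.23% answered no
```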

Again, the majority of respondents made a distinction between consensual location-sharing apps and the more intrusive types of monitoring that stalkerware can provide. Many of those who answered “yes” to using an app to monitor their partner’s phone said things like:

“My wife and I have both enabled Google’s location sharing service. It can be useful if we need to know where each other is.”

And:

“Only the Find My iPhone app. My wife is out running or hiking by herself quite often and she knows I want to know if she is safe.”

Of the 81 people who said they use apps to monitor their partners’ phones, only nine cited issues of trust, cheating, “being lied to” or “change in partner’s behavior.” Of those nine, two said their partner agreed to install the app.

NortonLifeLock’s online creeping study

The results of the Labs stalkerware survey are especially interesting when compared to the Online Creeping Survey conducted by NortonLifeLock, another founding member of the Coalition Against Stalkerware.

This survey of more than 2,000 adults in the United States found that 46 percent of respondents admitted to “stalking” an ex or current partner online “by checking in on them without their knowledge or consent.”

Twenty-nine percent of those surveyed admitted to checking a current or former partner’s phone. Twenty-one percent admitted to looking through a partner’s search history on one of their devices without permission. Nine percent admitted to creating a fake social media profile to check in on their partners.

When compared to the Labs stalkerware survey, it would seem that online stalking is considered more acceptable when couched under the term “checking in.” For perspective, if one were to swap the word “diary” for “phone,” we don’t think too many people would feel comfortable admitting, “Hey, I’m just ‘checking in’ on my girlfriend/wife’s diary. No big deal.”

Stalkerware in a pandemic

Finally, we can’t end this piece without at least acknowledging the strange and scary times we’re living in. Shelter-in-place orders at the start of the coronavirus pandemic became de facto jail sentences for stalkerware and domestic violence victims, imprisoning them with their abusers. No surprise, The New York Times reported an increase in the number of domestic violence victims seeking help since March.

For some users, however, the pandemic has brought on a different kind of suffering. One survey respondent best summed up the current malaise of anxiety, fear, and depression: 

“No partner to monitor lol.”

We like to think, dear reader, that they’re not laughing at themselves and the challenges of finding a partner during COVID. Rather, they’re laughing at all of us.

Stalkerware resources

As mentioned earlier, Malwarebytes for Windows, Mac, or Android will detect and let users remove stalkerware-type applications. And if you think you might have stalkerware on your mobile device, be sure to check out our article on what to do when you find stalkerware or suspect you’re the victim of stalkerware.

Here are a few other important reads on stalkerware:

Stalkerware and online stalking are accepted by Americans. Why?

Stalkerware’s legal enforcement problem

Awareness of stalkerware, monitoring apps, and spyware on the rise

How to protect against stalkerware

The post Good news: Stalkerware survey results show majority of people aren’t creepy appeared first on Malwarebytes Labs.

The cybersecurity skills gap is misunderstood

Nearly every year, a trade association, a university, an independent researcher, or a large corporation—and sometimes all of them and many in between—push out the latest research on the cybersecurity skills gap, the now more-than-decade-old idea that the global economy faces a large and growing shortage of cybersecurity professionals who simply cannot be found.

It is, as one report said, a “state of emergency.” It would be nice, then, if the numbers made more sense.

In 2010, according to one study focused on the United States, the cybersecurity skills gap included at least 10,000 individuals. In 2015, according to a separate analysis, that number was 209,000. Also, in 2015, according to yet another report, that number was more than 1 million. Today, that number is both a projected 3.5 million by 2021 and a current 4.07 million, worldwide.

PK Agarwal, dean of the University of California Santa Cruz Silicon Valley Extension, has followed these numbers for years. He first followed the data out of personal interest, and then more deeply when building programs at Northeastern University Silicon Valley, the educational hub opened by the private Boston-based university, where he most recently served as regional dean and CEO. During his research, he uncovered something.

“In terms of actual numbers, if you’re looking at the supply and demand gap in cybersecurity, you’ll see tons of reports,” Agarwal said. “They’ll be all over the map.”

He continued: “Yes, there is a shortage, but it is not a systemic shortage. It is in certain sweet spots. That’s the reality. That’s the actual truth.”

Like Agarwal said, there are “sweet spots” of truth to the cybersecurity skills gap—there can be difficulties in filling an immediate need on deadline-driven projects, or in finding professionals trained in a crucial software tool when a company cannot spend time training current employees on it.

But more broadly, the cybersecurity skills gap, according to recruiters, hiring managers, and academics, is misunderstood. Rather than a lack of talent, there is sometimes, on behalf of companies, a lack of understanding in how to find and hire that talent.

By posting overly restrictive job requirements, demanding contradictory skillsets, refusing to hire remote workers, offering non-competitive rates, and failing to see minorities, women, and veterans as viable candidates, businesses could miss out on the very real, very accessible cybersecurity talent out there.

In other words, if you are not able to find a cybersecurity expert for your company, that doesn’t mean they don’t exist. It means you might need help in finding them.

Number games

In 2010, the Center for Strategic & International Studies (CSIS) released its report “A Human Capital Crisis in Cybersecurity.” According to the paper, “the cyber threat to the United States affects all aspects of society, business, and government, but there is neither a broad cadre of cyber experts nor an established cyber career field to build upon, particularly within the Federal government.”

Further, according to Jim Gosler, a then-visiting NSA scientist and the founding director of the CIA’s Clandestine Information Technology Office, only 1,000 security experts were available in the US with the “specialized skills to operate effectively in cyberspace.” The country, Gosler said in interviews, needed 10,000 to 30,000.

Though the cybersecurity skills gap was likely spotted before 2010, the CSIS paper partly captures a theory that draws support today—the skills gap is a lack of talent.

Years later, the cybersecurity skills gap reportedly grew into a chasm. It would soon span the world.  

In 2016, the Enterprise Strategy Group called the cybersecurity skills gap a “state of emergency,” unveiling research that showed that 46 percent of senior IT and cybersecurity professionals at midmarket and enterprise companies described their departments’ lack of cybersecurity skills as “problematic.” The same year, separate data compiled by the professional IT association ISACA predicted that the entire world would be short 2 million cybersecurity professionals by the year 2019.

But by 2019, that prediction had already come true, according to a survey published that year by the International Information System Security Certification Consortium, or (ISC)2. The world, the group said, employed 2.8 million cybersecurity professionals, but it needed 4.07 million more.

At the same time, a recent study projected that the skills gap in 2021 would be lower than the (ISC)2 estimate for today—instead predicting a need of 3.5 million professionals by next year. Throughout the years, separate studies have offered similarly conflicting numbers.

The variation can be dizzying, but it can be explained by a variation in motivations, said Agarwal. He said these reports do not exist in a vacuum, but are rather drawn up for companies and, perhaps unsurprisingly, for major universities, which rely on this data to help create new programs and to develop curriculum to attract current and prospective students.

It’s a path Agarwal went down years ago when developing a Master’s program in computer science at Northeastern University’s Silicon Valley campus. The data, he said, supported the program, showing some 14,000 Bay Area jobs that listed a Master’s degree as a requirement, while neighboring Bay Area schools were on track to produce fewer than 500 Master’s graduates that year.

“There was a massive gap, so we launched the campus,” Agarwal said. The program garnered interest, but not as much as the data suggested.

Agarwal remembered thinking at the time: “What the hell is going on?” 

It turns out, a lot was going on, Agarwal said. For many students, the prospect of more student debt for a potentially higher pay was not enough to get them into the program. Further, the salaries for Bachelor’s graduates and Master’s graduates were close enough that students had a difficult time seeing the value in getting the advanced degree.

That wariness toward a Master’s degree in computer science also plagues cybersecurity education today, Agarwal said, comparing it to an advanced degree in Biology.

“Cybersecurity at the Master’s level is about the same as in Biology—it has no market value,” Agarwal said. “If you have a BA [in Biology], you’re a lab rat. If you have an MA, you’re a senior lab rat.”

So, imagine the confusion for cybersecurity candidates who, when applying for jobs, find Master’s degrees listed as requirements. And yet, that is far from uncommon. The requirement, like many others, can drive candidates away.

Searching differently

For companies that feel like the cybersecurity talent they need simply does not exist, recruiters and strategists have different advice: Look for cybersecurity talent in a different way. That means more lenient degree and certification requirements, more openness to working remotely, and hiring for the aptitude of a candidate, rather than going down a must-have wish list.

Jim Johnson, senior vice president and Chief Technology Officer for the international recruiting agency Robert Half, said that, when he thinks about client needs in cybersecurity, he often recalls a conference panel he watched years ago. A panel of hiring experts, Johnson said, was asked a simple question: How do you find people?

One person, Johnson recalled, said “You need to be okay hiring people who know nothing.”

The lesson, Johnson said, was that companies should hire for aptitude and the ability to learn.

“You hire the personality that fits what you’re looking for,” Johnson said. “If they don’t have everything technically, but they’re a shoo-in for being able to learn it, that’s the person you bring up.”

Johnson also explained that restrictive job requirements can actually scare some candidates away. His advice for companies: understand what you’re looking for, but don’t make the requirements for the job so restrictive that they cause potential candidates to hesitate.

“You might miss a great hire because you required three certifications and they had one, or they’re in the process of getting one,” Johnson said.

Similarly, Thomas Kranz, longtime cybersecurity consultant and current cybersecurity strategy adviser for organizations, called job requirements that specifically demand degrees “the biggest barrier companies face when trying to hire cybersecurity talent.”

“This is an attitude that belongs firmly in the last century,” Kranz wrote. “‘Must have a [Bachelor of Science] or advanced degree’ goes hand in hand with ‘Why can’t we find the candidates we need?’”

This thinking has caught on beyond the world of recruiters.

In February, more than a dozen companies, including Malwarebytes, pledged to adopt the Aspen Institute’s “Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce.”

The very first principle requires companies to “widen the aperture of candidate pipelines, including expanding recruitment focus beyond applicants with four-year degrees or using non-gender biased job descriptions.”

At Malwarebytes, the practice of removing strict degree requirements from cybersecurity job descriptions has been in place for cybersecurity hires for at least a year and a half.

“I will never list a BA or BS as a hard requirement for most positions,” said Malwarebytes Chief Information Security Officer John Donovan. “Work and life experience help to round out candidates, especially for cybersecurity roles.” Donovan added that, for more junior positions, there are “creative ways to broaden the applicant pool,” such as using the recruiting programs YearUp, NPower, and others.

Organizations like these help transition individuals to tech-focused careers, offering training classes, internships, and access to a corporate world that was perhaps previously beyond reach.

These types of career development groups can also help a company looking to broaden its search to include typically overlooked communities, including minorities, women, disabled people, and veterans.

Take, for example, the International Consortium of Minority Cybersecurity Professionals, which creates opportunities for women and minorities to advance in the field, or the nonprofit Women in CyberSecurity (WiCyS), which recently developed a veterans’ program. WiCyS primarily works to cultivate the careers of women in cybersecurity by offering training sessions, providing mentorship, granting scholarships, and working with interested corporate partners.

“In cybersecurity, there are challenges that have never existed before,” said Lynn Dohm, executive director for WiCyS. “We need multitasking, diversity of thought, and people from all different backgrounds, all genders, and all ethnicities to tackle these challenges from all different perspectives.”

Finally, for companies still having trouble finding cybersecurity talent, Robert Half’s Johnson recommended broadening the search—literally. Cybersecurity jobs no longer need to be filled by someone located within a 40-mile radius, he said, and if anything, the current pandemic has reinforced this idea.

“The effect of the pandemic, which has shifted how people do their jobs, has made us now realize that the whole working remote thing isn’t as scary as we thought,” Johnson said.

But companies should understand that remote work is as much a boon to them as it is to potential candidates. Qualified candidates are no longer limited to jobs they can physically commute to; they can now apply for more appealing positions much farther from where they live.

And that, of course, will have an impact on salary, Johnson said.

“While Bay Area salaries or a New York salary, while those might not change dramatically, what is changing is the folks that might be being recruited in Des Moines or in Omaha or Oklahoma City, who have traditionally been limited [regionally], now they’re being recruited by companies on the coast,” Johnson said.

“That’s affecting local companies, which are paying those $80,000 salaries. Now those candidates are being offered $85,000 to work remotely. Now I’ve got to compete with that.”

Planning ahead

The cybersecurity skills gap need not frighten a company or a senior cybersecurity manager looking to hire. There are many actionable steps that a business can take today to broaden its search and find the talent that other companies may be overlooking.

First, stop including hard degree requirements in job descriptions. The same goes for cybersecurity certifications. Second, start accepting the idea of remote work for these teams. The value of “butts in seats” means next to nothing right now, so get used to it. Third, understand that remote work means potentially better pay for the candidates you’re trying to hire, so look at the market data and pay appropriately. Fourth, connect with a recruiting organization, like WiCyS, if you want extra help in creating a diverse and representative team. Fifth, consider looking inward, as your next cybersecurity hire might actually be a cybersecurity promotion.

And the last piece of advice, at least according to Robert Half’s Johnson? Hire a recruiter.

The post The cybersecurity skills gap is misunderstood appeared first on Malwarebytes Labs.

A week in security (August 17 – 23)

Last week on Malwarebytes Labs, we looked at the impact of COVID-19 on healthcare cybersecurity, dug into some pandemic stats in terms of how workforces coped with going remote, and served up a crash course on malware detection. Our most recent Lock and Code podcast explored the safety of parental monitoring apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 17 – 23) appeared first on Malwarebytes Labs.

‘Just tell me how to fix my computer:’ a crash course on malware detection

Malware. You’ve heard the term before, and you know it’s bad for your computer—like a computer virus. Which begs the question: Do the terms “malware” and “computer virus” mean the same thing? How do you know if your computer is infected with malware? Is “malware detection” just a fancy phrase for antivirus? For that matter, are anti-malware and antivirus programs the same? And let’s not forget about Apple and Android users, who are probably wondering if they need cybersecurity software at all.

This is the point where your head explodes.

All you want to do is get your work done, Zoom your friends/family, Instacart a bottle of wine, and stream a movie till you go to bed. But it’s during these everyday tasks that we let our guard down and are most susceptible to malware, which includes such cyberthreats as ransomware, Trojans, spyware, stalkerware, and, yes, viruses.

To add insult to injury, cybercriminals deliver malware using sneaky social engineering tricks, such as fooling people into opening email attachments that infect their computers or asking them to update their personal information on malicious websites pretending to be legitimate. Sounds awful, right? It sure is!

The good news is that staying safe online is actually fairly easy. All it takes is a little common sense, a basic understanding of how threats work, and a security program that can detect and protect against malware. Think of it like street smarts but for the Internet. With these three elements, you can safely avoid the majority of the dangers online today.

So, for the Luddites and the technologically challenged among our readership, this is your crash course on malware detection. In this article, we’ll answer all the questions you wish you didn’t have to ask, like:

  • What is malware?
  • How can I detect malware?
  • Is Windows Defender good enough?
  • Do Mac and mobile devices need anti-malware?
  • How do you remove malware?
  • How do you prevent malware infections?

What is malware?

Malware, or “malicious software,” is a catchall term that refers to any malicious program that is harmful to your devices. Targets for malware can include your laptop, tablet, mobile phone, and WiFi router. Even household items like smart TVs, smart fridges, and newer cars with lots of onboard technology can be vulnerable. Put it this way: If it connects to the Internet, there’s a chance it could be infected with malware.

There are many types of malware, but here’s a gloss on the more infamous and/or popular examples in rotation today.

Adware

Adware, or advertising-supported software, is software that displays unwanted advertising on your computer or mobile device. As stated in the Malwarebytes Labs 2020 State of Malware Report, adware is the most common threat to Windows, Mac, and Android devices today.

While it may not be considered as dangerous as some other forms of malware, such as ransomware, adware has become increasingly aggressive and malicious over the last couple years, redirecting users from their online searches to advertising-supported results, adding unnecessary toolbars to browsers, peppering screens with hard-to-close pop-up ads, and making it difficult for users to uninstall.

Computer virus

A computer virus is a form of malware that attaches itself to another program (such as a document) and, once initially executed through human interaction, can replicate and spread on its own. But computer viruses aren’t as prevalent as they once were. Cybercriminals today tend to focus their efforts on more lucrative threats like ransomware.

Trojan

A Trojan is a program that hides its true intentions, often appearing legitimate but actually conducting malicious business. There are many families of malware that can be considered Trojans, from information-stealers to banking Trojans that siphon off account credentials and money.

Once active on a system, a Trojan can quietly steal your personal info, spam other potential victims from your account, or even load other forms of malware. One of the more effective Trojans on the market today is called Emotet, which has evolved from a basic info-stealer to a tool for spreading other forms of malware to other systems—especially within business networks.

Ransomware

Ransomware is a type of malware that locks you out of your device and/or encrypts your files, then forces you to pay a ransom to get them back. Many high-profile attacks against businesses, schools, and local government agencies over the last four years have included ransomware of some kind. Some of the more notorious recent strains of ransomware include Ryuk, Sodinokibi, and WastedLocker.

How can I detect malware?

There are a few ways to spot malware on your device. It may be running slower than usual. You may have loads of ads bombarding your screen. Your files may be frozen or your battery life may drain faster than usual. Or there may be no sign of infection at all.

That’s why good malware detection starts with a good anti-malware program. For our purposes, “good” anti-malware is going to be a program that can detect and protect against any of the threats we’ve covered above and then some, including what’s known as zero-day or zero-hour exploits. These are new threats developed by cybercriminals to exploit vulnerabilities, or weaknesses in code, that have not yet been detected or fixed by the company that created them. (That’s why when companies do fix these vulnerabilities, they issue patches, or updates, and notify users immediately.)

Antivirus and other legacy cybersecurity software rely on something called signature-based detection in order to stop threats. Signature-based detection works by comparing every file on your computer against a list of known malware. Each threat carries a signature that functions much like a set of fingerprints. If your security program finds code on your computer that matches the signature of a known threat, it’ll isolate and remove the malicious program.

While signature-based detection can be effective for protecting against known threats, it is time-consuming and resource-intensive for your computer. To continue our fingerprint analogy, signature-based detection can only spot threats with an established rap sheet. Brand-new malware, zero-day, and zero-hour exploits are free to spread and cause damage until security researchers identify the threat and reverse-engineer it, adding its signature to an increasingly bloated database.

This is where heuristic analysis comes in. Heuristic analysis relies on investigating a program’s behavior to determine whether a bit of computer code is malicious or not. In other words, if a program is acting like malware, it probably is malware. After demonstrating suspicious behavior, files are quarantined and can be manually or automatically removed—without having to add signatures to the database.
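A toy sketch of the heuristic idea might weight observed behaviors and quarantine anything that scores too high. The behavior names, weights, and threshold below are entirely made up for illustration, not drawn from any real engine:

```python
# Hypothetical behavior events a monitor might record for a running program.
# Weights and threshold are illustrative only.
SUSPICION_WEIGHTS = {
    "modifies_autorun_keys": 3,     # persistence mechanism
    "encrypts_many_user_files": 5,  # ransomware-like activity
    "disables_security_tools": 4,
    "contacts_known_c2_domain": 5,
    "reads_browser_passwords": 4,
    "writes_temp_executable": 2,
}

QUARANTINE_THRESHOLD = 6

def suspicion_score(observed_behaviors):
    """Sum the weights of every suspicious behavior actually observed."""
    return sum(SUSPICION_WEIGHTS.get(b, 0) for b in observed_behaviors)

def verdict(observed_behaviors):
    """Quarantine when the combined behavior score crosses the threshold."""
    if suspicion_score(observed_behaviors) >= QUARANTINE_THRESHOLD:
        return "quarantine"
    return "allow"
```

Note that no signature database appears anywhere in this sketch: a brand-new file that encrypts user documents and kills security tools would be flagged on its behavior alone.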

The best anti-malware programs, then, can protect against new and emerging zero-day/zero-hour threats using heuristic analysis, as well as threats we already know about using traditional signature-based detection. If your antivirus or anti-malware relies on signature-based malware detection alone to keep your system safe—you’re not really safe.

Is Windows Defender good enough?

Maybe you’re using Windows Defender because your computer came with it preinstalled. It seems fine, but you’ve never looked at other options. Or maybe you have Windows Defender and your computer somehow got infected anyway. Either way, here’s something to consider: Defender is one of the security programs most frequently targeted by cybercriminals, and there are whole categories of threats that Windows Defender doesn’t protect against.

The majority of threats detected today are found using signature-less technologies, and there are several other methods of malware detection that, when layered together, offer stronger protection than Windows Defender alone. Malwarebytes Premium, for example, uses a layered approach to threat detection that includes heuristic analysis technology as just one of its components. Other major components include ransomware protection and rollback, web protection, and anti-exploit technology.

Do Mac and mobile devices need anti-malware?

In 2019 for the first time ever, Macs outpaced Windows PCs in number of threats detected per endpoint. Over the last few years, Mac adware has exploded, debunking the myth that Macs are safe from cyberthreats. While Macs’ built-in AV blocks some malware, Mac adware has become so aggressive that it warrants extra anti-malware protection.

Meanwhile, Mac’s mobile counterpart, the iPhone, does not allow outside anti-malware programs to be downloaded. (Apple says its own built-in iOS protection is enough.) However, there are some privacy apps, web browser protection, and scam call blockers users can try for added safety.

As for Android, malware attacks from threats such as adware, monitoring apps, and other potentially unwanted programs (PUPs) are more common. At best, PUPs serve up annoying ads you can’t get rid of. At worst, they’ll discreetly steal information from your phone.

Also, because the Android environment allows for third-party downloads, it’s a bit more vulnerable to malware and PUPs than the iPhone. So we recommend a good anti-malware solution for your Android device as well.

How can I remove malware?

Malware detection is the important first step for any cybersecurity solution. But what happens next? If you get a malware infection on one of your devices, the good news is you can easily remove it. The process of identifying and removing cyberthreats from your computer systems is called “remediation.”

To conduct a thorough remediation of your device, download an anti-malware program and run a scan. Before doing so, make sure you back up your files. Afterwards, change all of your account passwords in case they were compromised in the malware attack. And if you’re dealing with a tough infection, you’re in luck: Malwarebytes has a rock-solid reputation for removing malware that other programs can’t even detect, let alone remove.

If you need to clean an infected computer now, download Malwarebytes for free, review these tips for remediation, and run a scan to see which threats are hiding on your devices.

How do I protect against malware?

Yes, it’s possible to clean up an infected computer and fully remove malware from your system. But the damage from some forms of malware, like ransomware, cannot be undone. If it’s encrypted your files and you haven’t backed them up, the jig is up. So your best defense is to beat the bad guys at their own game—by preventing infection in the first place.

There are a few ways to do this. Keeping all devices updated with the latest software patches will block threats designed to exploit older vulnerabilities. Automating backups of files to an encrypted cloud storage platform won’t protect against ransomware attacks, but it will ensure that you needn’t pay the ransom to retrieve your files. Training on cybersecurity best practices, including how to spot a phishing attack, tech support scam, or other social engineering technique, also helps stave off insider threats.

However, the best way to prevent malware infection is to use an antivirus/anti-malware program with layered protection that stops a wide range of cyberthreats in real time—whether it’s a malicious website or a brand-new malware family never before seen “in the wild.”

But if you have antivirus already and threats are getting through, maybe it’s time to move on to a program that’ll “just fix your computer” so you can stop worrying about malware detection and start…participating in distance learning classes? Ordering groceries? Having your virtual doctor’s appointment? Developing a vaccine? Literally anything else.

The post ‘Just tell me how to fix my computer:’ a crash course on malware detection appeared first on Malwarebytes Labs.

20 percent of organizations experienced breach due to remote worker, Labs report reveals

It is no surprise that moving to a fully remote work environment due to COVID-19 would cause a number of changes in organizations’ approaches to cybersecurity. What has been surprising, however, are some of the unanticipated shifts in employee habits and how they have impacted the security posture of businesses large and small.

Our latest Malwarebytes Labs report, Enduring from Home: COVID-19’s Impact on Business Security, reveals some unexpected data about security concerns with today’s remote workforce.

Our report combines Malwarebytes product telemetry with survey results from 200 IT and cybersecurity decision makers from small businesses to large enterprises, unearthing new security concerns that surfaced after the pandemic forced US businesses to send their workers home.

The data showed that since organizations moved to a work from home (WFH) model, the potential for cyberattacks and breaches has increased. While this isn’t entirely unexpected, the magnitude of this increase is surprising. Since the start of the pandemic, 20 percent of respondents said they faced a security breach as a result of a remote worker. This in turn has increased costs, with 24 percent of respondents saying they paid unexpected expenses to address a cybersecurity breach or malware attack following shelter-in-place orders.

We noticed a stark increase in the use of personal devices for work: 28 percent of respondents admitted they’re using personal devices for work-related activities more than their work-issued devices. Beyond that, we found that 61 percent of respondents’ organizations did not urge employees to use antivirus solutions on their personal devices, further compounding the increase in attack surface with a lack of adequate protection.

We found a startling contrast between IT leaders’ confidence in their security during the transition to work from home (WFH) environments and their actual security postures, demonstrating a continued problem of security hubris. Roughly three-quarters (73.2 percent) of our survey respondents gave their organizations a score of 7 or above on preparedness for the transition to WFH, yet 45 percent of respondents’ organizations did not perform security and online privacy analyses of necessary software tools for WFH collaboration.

Additional report takeaways

  • 18 percent of respondents admitted that, for their employees, cybersecurity was not a priority, while 5 percent said their employees were a security risk and oblivious to security best practices.
  • At the same time, 44 percent of respondents’ organizations did not provide cybersecurity training that focused on potential threats of working from home (like ensuring home networks had strong passwords, or devices were not left within reach of non-authorized users).
  • While 61 percent of respondents’ organizations provided work-issued devices to employees as needed, 65 percent did not deploy a new antivirus (AV) solution for those same devices.

To learn more about the increasing risks uncovered in today’s remote workforce population, read our full report:

Enduring from Home: COVID-19’s Impact on Business Security

The post 20 percent of organizations experienced breach due to remote worker, Labs report reveals appeared first on Malwarebytes Labs.

The impact of COVID-19 on healthcare cybersecurity

As if stress levels in the healthcare industry weren’t high enough due to the COVID-19 pandemic, risks to its already fragile cybersecurity infrastructure are at an all-time high. From increased cyberattacks to exacerbated vulnerabilities to costly human errors, if healthcare cybersecurity wasn’t circling the drain before, COVID-19 sent it into a tailspin.

No time to shop for a better solution

As a consequence of being too occupied with fighting off the virus, some healthcare organizations have found themselves unable to shop for different security solutions better suited for their current situation.

For example, the Public Health England (PHE) agency, which is responsible for managing the COVID-19 outbreak in England, decided to prolong their existing contract with their main IT provider without allowing competitors to put in an offer. They did this to ensure their main task, monitoring the widespread disease, could go forward without having to worry about service interruptions or other concerns.

Extending a contract without looking at competitors is not only a recipe for getting a bad deal, but it also means organizations are unable to improve on the flaws they may have found in existing systems and software.

Attacks targeting healthcare organizations

Even though there were some early promises of removing healthcare providers as targets after COVID-19 struck, cybercriminals just couldn’t be bothered to do the right thing for once. In fact, we have seen some malware attacks specifically target healthcare organizations since the start of the pandemic.

Hospitals and other healthcare organizations have shifted their focus and resources to their primary role. While this is completely understandable, it has placed them in a vulnerable situation. Throughout the COVID-19 pandemic, an increasing amount of health data is being controlled and stored by the government and healthcare organizations. Reportedly this has driven a rise in targeted, sophisticated cyberattacks designed to take advantage of an increasingly connected environment.

In healthcare, it’s also led to a rise in nation-state attacks, in an effort to steal valuable COVID-19 data and disrupt care operations. In fact, the sector has become both a target and a method of social engineering advanced attacks. Malicious actors taking advantage of the pandemic have already launched a series of phishing campaigns using COVID-19 as a lure to drop malware or ransomware.

COVID-19 has not only placed healthcare organizations in direct danger of cyberattacks, but some have become victims of collateral damage. There are, for example, COVID-19-themed business email compromise (BEC) attacks that might be aiming for exceptionally rich targets. However, some will settle for less if it is an easy target—like one that might be preoccupied with fighting a global pandemic.

Ransomware attacks

As mentioned before, hospitals and other healthcare organizations run the risk of falling victim to “spray and pray” attack methods used by some cybercriminals. Ransomware is only one of the possible consequences, but arguably the most disruptive when it comes to healthcare operations—especially those in charge of caring for seriously ill patients.

INTERPOL has issued a warning to organizations at the forefront of the global response to the COVID-19 outbreak about ransomware attacks designed to lock them out of their critical systems in an attempt to extort payments. INTERPOL’s Cybercrime Threat Response team detected a significant increase in the number of attempted ransomware attacks against key organizations and infrastructure engaged in the virus response.

Special COVID-19 facilities

During the pandemic, many countries constructed or refurbished special buildings to house COVID-19 patients. These were created to quickly increase capacity while keeping the COVID patients separate from others. But these ad-hoc COVID-19 medical centers now have a unique set of vulnerabilities: They are remote, they sit outside of a defense-in-depth architecture, and the very nature of their existence means security will be a lower priority. Not only are these facilities prone to be understaffed in IT departments, but the biggest possible chunk of their budget is deployed to help the patients.

Another point of interest is the transfer of patient data from within the regular hospital setting to these temporary locations. It is clear that the staff working in COVID facilities will need the information about their patients, but how safely is that information being stored and transferred? Is it as protected in the new environment as the old one?

Data theft and protection

A few months ago, when the pandemic proved to be hard to beat, many agencies reported on targeted efforts by cybercriminals to lift coronavirus research, patient data, and more from the healthcare, pharmaceutical, and research industries. Among these agencies were the National Security Agency, the FBI, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, and the UK’s National Cyber Security Centre.

In the spring, many countries started discussing the use of contact tracing and/or tracking apps in an effort to help keep the pandemic under control: apps that would warn users if they had been in the proximity of an infected person. Understandably, many privacy concerns were raised by advocates and journalists.

There is so much data being gathered and shared with the intention of fighting COVID-19, but there’s also the need to protect individuals’ personal information. So, several US senators introduced the COVID-19 Consumer Data Protection Act. The legislation would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, device, geolocation, and proximity data. The bill would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.

The impact

Even though such a protection act might be welcome and needed, the consequences for an already stressed healthcare cybersecurity industry might be too overwhelming. One could argue that data protection legislation should not be passed on a case by case basis, but should be in place to protect citizens at all times, not just when extra measures are needed to fight a pandemic.

In the meantime, we at Malwarebytes will do our part to support those in the healthcare industry by keeping malware off their machines—that’s one less virus to worry about.

Stay safe everyone!

The post The impact of COVID-19 on healthcare cybersecurity appeared first on Malwarebytes Labs.

Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Emory Roane, policy counsel at Privacy Rights Clearinghouse, about parental monitoring apps.

These tools offer parents the capabilities to spot where their children go, read what their kids read, and prevent them from, for instance, visiting websites deemed inappropriate. And, for the likely majority of parents using these tools, their motives are sympathetic—being online can be a legitimately confusing and dangerous experience.

But where parental monitoring apps begin to cause concern is just how powerful they are.

Tune in to hear about the capabilities of parental monitoring apps, how parents can choose to safely use these with their children, and more, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research on:

Other cybersecurity news

  • Intel experienced a leak due to “intel123”—the weak password that secured its server. (Source: Computer Business Review)
  • Fresh Zoom vulnerabilities for its Linux client were demonstrated at DEFCON 2020. (Source: The Hacker News)
  • Researchers saw an increase in scam attacks against users of Netflix, YouTube, HBO, and Twitch. (Source: The Independent)
  • TikTok was found collecting MAC addresses from mobile devices, a tactic that may have violated Google’s policies. (Source: The Wall Street Journal)
  • Several ads for apps labeled “stalkerware” can still be found in Google Play’s search results, even after the search giant’s advertising ban took effect. (Source: TechCrunch)

Stay safe, everyone!

The post Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane appeared first on Malwarebytes Labs.