
A week in security (August 31 – September 6)

Last week on Malwarebytes Labs, we dug into security hubris on the Lock and Code podcast, explored ways in which Apple’s notarization process may not be hitting all the right notes, and detailed a new web skimmer. We also explained how to keep distance learners secure, talked about PCI DSS compliance, and revealed that SMB security posture is weakened by COVID-19.

Other cybersecurity news

  • School’s out for cyber attacker: Arrests made after multiple DDoS attacks target district networks (Source: Miami-Dade Office of Communications)
  • Long arm of the law: British citizen extradited to the US regarding $2m in scam charges (Source: The Register)
  • Warning signs: Your servers could be at risk should you spot cryptomining activity taking place (Source: Help Net Security)
  • Election threats: How ransomware could spell trouble for the upcoming US election (Source: GovTech)
  • Lloyd’s bank phish warning: A scam SMS attack is the order of the day for this bank’s customers (Source: Computer Weekly)
  • COVID-19 scammers play on data breach fears: An interesting look at how old breach data is being repackaged to coax payment information from potential victims (Source: The Record)
  • Fake ASDA mails in circulation: Missives offering entry into a competition for a £1,000 gift card should be ignored (Source: My London)
  • Ad scams on TikTok: Researchers look at some of the ways bad ads make their way to the person holding the device (Source: Tenable)
  • I can’t dance to this: Warner music group stores compromised by hackers (Source: Bleeping Computer)
  • Fakes on Facebook: The social media giant takes down fake content run by a US-based PR firm (Source: BuzzFeed)

Stay safe, everyone!

The post A week in security (August 31 – September 6) appeared first on Malwarebytes Labs.

SMB cybersecurity posture weakened by COVID-19, Labs report finds

In August, Malwarebytes Labs analyzed the damage caused by COVID-19 to business cybersecurity. Because of immediate, mandated transitions to working from home (WFH), businesses across the United States suffered more data breaches, lost more dollars, and increased their overall attack surfaces, all while experiencing a worrying lack of cybersecurity awareness on the part of workers and IT and security directors.

Today, we have parsed the data to understand the pandemic’s effect on, specifically, small- and medium-sized businesses (SMBs).

The data on SMB cybersecurity is troubling.

Despite smart maneuvering by some SMBs—like those that provided cybersecurity trainings focused on WFH threats, or those that refrained from rolling out a new software tool because of its security or privacy risks—28 percent of SMBs still paid unexpected expenses to address a malware attack, and 22 percent suffered a security breach due to a remote worker.

Those numbers are higher than the averages we found for companies of all sizes in August—by 4 and 2 percentage points, respectively.

The numbers don’t look good. But perhaps more worrying than the incidents that befell our respondents are the actions they might fail to take themselves. For example, while a majority of SMBs said that they planned to install a more permanent WFH model for employees in the future, a similar majority said they did not plan to deploy an antivirus solution that can specifically protect those distributed workforces.

Further, while SMBs widely agreed that they were using more video conferencing, online communication, and cloud storage platforms during WFH—thus expanding their online attack surface—a worrying number of respondents said they did not complete any cybersecurity or online privacy reviews of those software tools before making them available to employees.

The cybersecurity posture of organizations of all sizes, including SMBs, can and should be taken seriously—especially as WFH becomes the new normal.

A closer look at SMB cybersecurity

Today’s data represents a follow-up to our August report, Enduring from Home: COVID-19’s Impact on Business Security, in which we surveyed more than 200 IT and cybersecurity executives, directors, and managers from businesses of all sizes. Our analysis today takes a magnifying glass to the more than 100 respondents who work for companies that have between 100 and 1,249 employees.

We separated the data into three bands according to company size: companies with 100–349 employees; companies with 350–699 employees; and companies with 700–1,249 employees.

At times, certain patterns or unique findings emerged within those bands.

For example, larger SMBs had far greater concerns about the effectiveness of a remote IT workforce. When asked about their biggest cybersecurity concerns with employees now working remotely, 50 percent of respondents working at companies with 700–1,249 employees said “our IT support may not be as effective in supporting remote workers.”

Respondents from smaller organizations, however, were not as concerned. Only 27.3 percent of respondents from the smallest businesses we surveyed (100–349 employees) and 21.6 percent of midsized companies (350–699 employees) answered the same.

Intuitively, this makes sense—larger companies have more employees and more potential opportunities for ad-hoc cybersecurity and IT issues that should be addressed. But without an office, those issues might be ignored by employees. Similarly, those issues might become so frequent that they overwhelm remote IT workers.

Elsewhere in the data, in at least one situation, we found a potential correlation between company size and pandemic impact.

Like we said above, across all SMBs, 28 percent said they paid unexpected expenses to address a malware attack.

But that percentage increased depending on the size of the company affected. Surprise malware expenses hit 21.2 percent of companies with 100–349 employees, 29.7 percent of companies with 350–699 employees, and 30.4 percent of companies with 700–1,249 employees.

Maybe, then, there is some truth to the age-old saying: The bigger they are, the harder they fall.

Not every discovered trend was worrying, though.

Good trends in SMB cybersecurity

The immediate transition to WFH hit businesses everywhere, no matter their size. With no preparation time and sometimes lacking clarity from local and state governments for what was considered safe, businesses were forced to chart their own paths.

Despite these pressures, many SMBs rose to the occasion to protect their businesses and their employees, while also providing their workers with the tools and software necessary to succeed in their roles.

For example, 58.2 percent of respondents said their business provided work-issued devices as needed, and 41.4 percent said their business deployed previously unused software tools to maintain communication and productivity. Further, 56.9 percent of respondents said their business performed a cybersecurity and online privacy analysis of newly deployed software tools, while 21.6 percent said that those reviews led to a decision to not deploy a software tool.

Finally, 55.2 percent of respondents said their business provided cybersecurity trainings focused on the specific cybersecurity threats of WFH, with information on the importance of secured home networks, strong passwords, and unauthorized device access.

As SMBs showed promising action in the immediate transition to WFH, they also responded with encouraging preparations for the future.

More than half—56.9 percent—of respondents said their business would “develop stronger remote security policies,” 50 percent said their business would “host more cybersecurity trainings tailored for working from home,” and 48.2 percent said their business would “develop cybersecurity and online privacy reviews for new, necessary software in the transition to working from home.”

That last point is a welcome one. Though, as we showed, 56.9 percent of respondents said their business “performed a cybersecurity and online privacy analysis of any newly-deployed software tools,” those reviews may have been ad-hoc. Codifying these types of reviews into a broader set of policies is a good practice.

While all of these are encouraging trends, we cannot neglect some of the more worrying data points. In fact, one of our survey respondents accurately described some of the same risks that we uncovered.

“Employees are not as vigilant as they would be working from home about potential cyber attacks,” said a Florida IT director at a company of 100–349 employees. “We’ve seen some lax efforts from some of our better, more observant employees in the last few months.”

Conflicting postures in SMB cybersecurity

In our main report in August, we found potential cases of security hubris—the simple phenomenon in which a business believes it is more secure than it actually is. In our deeper analysis of SMB cybersecurity, similar trends emerged.

For example, when we asked SMB respondents to rank their preparedness to transition to WFH on a scale from 1–10, a majority ranked themselves highly—62 percent gave their business an 8 or higher, and 74.1 percent gave their business a 7 or higher.

However, our respondents’ actual transition to WFH did not involve the type of preparation and cybersecurity protection that would typically warrant such high evaluations.

Yes, 55.2 percent said they provided cybersecurity trainings focused on the specific cybersecurity threats of WFH, but think about the 44.8 percent who did not respond that way. Yes, 57 percent said they performed a cybersecurity and online privacy analysis of new software tools, but that likely means that more than 40 percent did not. Also, only 34.5 percent of respondents said they deployed a new antivirus tool for devices provided by the organization, which leaves us scratching our heads about the roughly 65 percent who did not say the same. What gives?

Amidst the transition to WFH, our SMB respondents entirely agreed on one aspect—they are using more tools, more frequently.

We found that 81.9 percent of SMB respondents said their usage of video conferencing platforms like Zoom and Microsoft Teams had increased “slightly more” or “significantly more,” 75 percent said the same about their increased use of online instant messaging platforms, and 69.8 percent said the same about their increased use of cloud storage platforms. Relatedly, 33 percent of respondents said they are using personal devices for work more often than their work-issued device, compared to the time before the pandemic.

Put into perspective, more software tools being used more frequently, with some employees reporting more frequent personal device usage, all points to one big problem—an increased attack surface.

And yet, even with this hard data showing an increased attack surface, 65.5 percent of respondents said their organizations were at least “equally secure” as they were before the pandemic; within those numbers, 35.4 percent went further, saying their business was actually “slightly more” or “significantly more” secure.

On our podcast Lock and Code, security evangelist and Malwarebytes Labs director Adam Kujawa explained why these positions are likely impossible to square.

“For the most part, I don’t see how people can actually say they’re more secure,” Kujawa said about the results from our broader COVID-19 report in August. “There may be an idea that, because folks are distributed—because remote workers are no longer located in a single, physical space—that they are somehow decentralized, and therefore harder to gain access to by cybercriminals.”

Kujawa continued: “The reality is that that is complete baloney.”

The clearest discrepancy between the words and the actions of SMBs came in the responses to their future. When asked about future plans to protect their businesses, 54.3 percent of SMB respondents said they would “install a more permanent work-from-home model for employees who do not need to be in the office every day.” However, just 38.8 percent said they would “deploy an antivirus solution that can better handle a more dispersed, remote workforce.”

This is disappointing because it seems so obvious. Any plan for a more permanent remote workforce must include a plan to protect that workforce.

Future proof

The advice that we offer to bolster SMB cybersecurity is similar to the advice we had for businesses of all sizes that were hit by the pandemic. Companies can come in many, many sizes, but none of those sizes are too small to care about cybersecurity.

You can read the full report to get a better understanding of those steps. In the meantime, though, if you’re stumped, seriously consider an antivirus solution.


PCI DSS compliance: why it’s important and how to adhere

PCI DSS is short for Payment Card Industry Data Security Standard. Every party involved in accepting credit card payments is expected to comply with the PCI DSS. The PCI Standard is mandated by the card brands, but administered by the Payment Card Industry Security Standards Council (PCI SSC). The standard was created to increase controls around cardholder data to reduce credit card fraud.

The PCI Security Standards Council’s mission is to enhance global payment account data security by developing standards and supporting services that drive education, awareness, and effective implementation by stakeholders.

Compliance helps a company uphold a positive image and build consumer trust. It also helps build consumer loyalty, since customers are more likely to return to a service or product from a company they consider trustworthy.

What exactly is PCI DSS?

PCI DSS is an international security standard that was developed in cooperation between several credit card companies. The PCI DSS tells companies how to keep their card and transaction data safe.

When the PCI DSS was published in 2004, it was expected that organizations would achieve effective and sustainable compliance within about five years. Some 15 years later, less than half of organizations maintain programs that prevent PCI DSS security controls from falling out of place within a few months after formal compliance validation. According to the 2019 Verizon Payment Security Report, PCI sustainability has been trending downward since 2017.

An increase in online transactions

One of the side effects of the COVID-19 pandemic has been an increase in online transactions. As more people worldwide have started to work from home and practice social distancing to combat the spread of COVID-19, businesses must prepare to handle a higher percentage of online transactions.

After all, it is likely that these online customers will continue to shop online when they learn to appreciate the ease of use, especially if they are confident about the security of their online transactions. However, with this rise in the frequency of digital payments comes the increased threat of data breaches and digital fraud.

The elements of compliance

A recent Bank of America report states that small businesses are protecting themselves by implementing industry security standards, like PCI compliance. Specifically, PCI Compliance Requirement 5 indicates that you must protect all systems against malware and regularly update anti-malware software. PCI DSS Requirement 5 has four distinct elements that, in effect, need to be addressed daily:

  • 5.1: For a sample of system components, including all operating system types commonly affected by malicious software, verify that anti-malware software is deployed.
  • 5.2.b: Examine anti-malware configurations, including the master installation of the software, to verify anti-malware mechanisms are configured to perform automatic updates and periodic scans.
  • 5.2.d: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware’s software log generation is enabled, and logs are retained per PCI DSS Requirement 10.7.
  • 5.3.b: Examine anti-malware configurations, including the master installation of the software and a sample of system components, to verify that the anti-malware software cannot be disabled or altered by users.

Basically, this boils down to our regular advice pillars:

  • Make sure software (including anti-malware) is updated.
  • Perform automatic and/or periodic scans for malware.
  • Log and retain the results of those scans.
  • Make sure protection software (especially anti-malware) can’t be disabled.
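Those pillars can be reduced to a per-endpoint checklist. Below is a minimal sketch of that idea; the `EndpointStatus` fields and the seven-day scan window are illustrative assumptions of ours, not anything the standard prescribes:

```python
from dataclasses import dataclass

@dataclass
class EndpointStatus:
    hostname: str
    signatures_current: bool   # anti-malware definitions up to date (5.2.b)
    last_scan_age_days: int    # days since the last periodic scan (5.2.b)
    scan_logs_retained: bool   # scan logs generated and retained (5.2.d)
    tamper_protected: bool     # users cannot disable the software (5.3.b)

def requirement5_gaps(ep: EndpointStatus, max_scan_age_days: int = 7) -> list:
    """Return the list of advice pillars this endpoint currently fails."""
    gaps = []
    if not ep.signatures_current:
        gaps.append("automatic updates")
    if ep.last_scan_age_days > max_scan_age_days:
        gaps.append("periodic scans")
    if not ep.scan_logs_retained:
        gaps.append("log retention")
    if not ep.tamper_protected:
        gaps.append("tamper protection")
    return gaps

# A healthy endpoint reports no gaps; a neglected one names its failures.
print(requirement5_gaps(EndpointStatus("web-01", True, 2, True, True)))   # []
print(requirement5_gaps(EndpointStatus("db-01", True, 30, False, True)))
```

Running a check like this daily, across every endpoint in your inventory, is essentially what the requirement asks for.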

Common problems and objections

The first requirement (5.1) requires an organization to maintain an accurate inventory of its devices and the operating systems on those devices. However, configuration management database (CMDB) solutions are notorious for never being completely implemented. As a result, it can be quite an exercise to determine whether every system that needs anti-malware software actually has it installed. If that describes your situation, look for a solution that provides an inventory of protected endpoints for you. You can use such an inventory to audit your CMDB and verify compliance.
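At its core, that audit is a set comparison between what the CMDB tracks and what the protection console reports. A quick sketch, with hypothetical host names:

```python
# Hypothetical inventories: one exported from the CMDB,
# one exported from the endpoint protection console.
cmdb_hosts = {"web-01", "web-02", "db-01", "mac-kiosk"}
protected_hosts = {"web-01", "db-01", "build-agent"}

# In the CMDB but missing anti-malware coverage: a Requirement 5 gap.
unprotected = cmdb_hosts - protected_hosts

# Protected but absent from the CMDB: the inventory itself is incomplete.
untracked = protected_hosts - cmdb_hosts

print(sorted(unprotected))  # ['mac-kiosk', 'web-02']
print(sorted(untracked))    # ['build-agent']
```

Either set being non-empty means work to do: the first is a protection gap, the second an inventory gap.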


The next hurdle with requirement 5.1 is that we still run into pushback from macOS and Linux users/administrators over their need to run an antivirus solution. Yet, a review of the CVE database debunks those claims.

Yes, these OSes have fewer vulnerabilities than Windows. However, they would still be “commonly affected,” given the number of vulnerabilities and the frequency with which those vulnerabilities are published. And as we have reported in the past, Mac threat detections are on the rise and actually outpace Windows in sheer volume. Using a solution that can cover all the operating systems in use in your organization can help you organize and control all your devices without adding extra software.

Sometimes, you will get pushback from server administrators who swear that any antivirus solution takes too much CPU to run and adversely affects server performance. While the situation is improving, we still regularly encounter people who make this claim but then fail to provide documented proof. (Not that we doubt them entirely; several legacy antivirus programs can indeed drag down performance.)

However, in most cases, the person is making these claims based on past experiences and not on trials of a more contemporary solution. No matter how you look at this, you will have to deploy anti-malware on Windows, macOS, and Linux Server endpoints to meet the PCI DSS.

Why compliance matters

Data from the Verizon Threat Research Advisory Center (VTRAC) demonstrates that a compliance program without the proper controls to protect data has a more than 95 percent probability of not being sustainable and is more likely to be the target of a cyberattack.

The costs of a successful cyberattack are not limited to liabilities and loss of reputation. There are also repairs to be made, and reorganizations may be necessary, especially when you are dealing with ransomware or a data breach.

A data breach also involves lost opportunities and competitive disadvantages that are nearly impossible to quantify. The 2019 IBM/Ponemon Institute study calculated the cost of a data breach at $242 per stolen record, and more than $8 million for an average breach in the US. Ransomware is the biggest financial threat of all cyberattacks, causing an estimated $7.5 billion in damage in 2019 in the US alone.
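As a rough sanity check, those two IBM/Ponemon figures together imply the record count of an "average" US breach (treating the roughly $8 million as exact for the arithmetic):

```python
cost_per_record = 242        # IBM/Ponemon 2019: cost per stolen record (USD)
avg_breach_cost = 8_000_000  # approximate average US breach cost (USD)

# Implied size of an average US breach, in records.
records = avg_breach_cost / cost_per_record
print(round(records))  # about 33,000 records
```

In other words, the "average" breach in that study exposed tens of thousands of records, which is why per-record costs add up so quickly.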

For those companies engaged in online transactions, reputational damage can be fatal. Imagine customers shying away from the payment portal as soon as they spot your logo. PCI compliance, then, is not just a regulation—it could quite literally save your company’s bacon.

So stay safe (which in this case means staying compliant)!


How to keep K–12 distance learners cybersecure this school year

With the pandemic still in full swing, educational institutions across the US are kicking off the 2020–2021 school year in widely different ways, from re-opening classrooms to full-time distance learning. Sadly, as schools embracing virtual instruction struggle with compounding IT challenges on top of an already brittle infrastructure, they are nowhere near closing the K-12 cybersecurity gap.

Kids have no choice but to continue their studies within the current social and health climate. On top of this, they must get used to new learning setups—possibly multiple ones—whether they’re full-on distance learning, homeschooling, or a hybrid of in-class and home instruction.

Regardless of which of these setups school districts, parents, or guardians decide are best suited for their children, one thing should remain a priority: the overall security of students’ learning experience during the pandemic. For this, many careful and considerable preparations are needed.

New term, new terms

Parents in the United States are participating in their children’s learning like never before—and that was true even before the pandemic forced their hand. Now more than ever, it’s important for them to become familiar with the different educational settings and consider which is best suited for their family.

Full-on distance learning

Classes are held online while students are safe in their own homes. Teachers may offer virtual classes out of their own homes as well, or they may be using their empty classrooms for better bandwidth.

This setup requires families to have, ideally, a dedicated laptop or computer students can use for class sessions and independent work. In addition, a strong Internet connection is necessary to support both students and parents working from home. However, children in low-income families may have difficulties accessing this technology, unless the school is handing out laptops and hot spot devices for Wi-Fi. Often, there are delays distributing equipment and materials—not to mention a possible learning curve thanks to the Digital Divide.

Full-on distance learning provides children with the benefit of teacher instruction while being safe from exposure to the coronavirus.

Homeschool learning or homeschooling

Classes are held at home, with the parent or guardian acting as teacher, counselor, and yes, even IT expert to their kids. Nowadays, this setup is often called temporary homeschooling or emergency homeschooling. Although this is a viable and potentially budget-friendly option for some families, note that unavoidable challenges may arise along the way. This might be especially true for older children who are more accustomed to using technology in their studies.

This isn’t to say that the lack of technology use when instructing kids would result in low quality of learning. In fact, a study from Tilburg University [PDF] comparing traditional learning and digital learning among kids ages 6 to 8 showed that children perform better when taught the traditional way—although, the study further noted, they are more receptive to digital learning methods. But perhaps the most relevant implication from the study is this: The role of teachers (in this article’s context, the parents and guardians) in achieving desirable learning outcomes continues to be a central factor.

Parents and guardians may be faced with the challenge of out-of-the-box thinking when it comes to creating valuable lessons for their kids that target their learning style while keeping them on track for their grade level.

Hybrid learning

This is a combination of in-class and home instruction, wherein students go to school part-time with significant social distancing and safety measures, such as wearing masks, regular sanitizing of facilities and property, and regular hand-washing. Students may be split into smaller groups, have staggered arrival times, and spend only a portion of their week in the classroom.

For the rest of students’ time, parents or guardians are tasked with continuing instruction at home. During these days or hours, parents or guardians must grapple with the same stressors on time, creativity, patience, and digital safety as those in distance learning and homeschooling models.

New methods of teaching and learning might be borne out of the combination of any or all three setups listed above. But regardless of how children must continue their education—with the worst or best of circumstances in mind—supporting their emotional and mental well-being is a priority. To achieve peace of mind and keep students focused on instruction, parents must also prioritize securing their children’s devices from online threats and the invasion of privacy.

Old threats, new risks

It’s a given that the learning environments that expose children to online threats and risk their privacy the most involve the use of technology. Some are familiar, and some are born from the changes introduced by the pandemic. Let’s look at the risk factors that make K-12 cybersecurity essential in schools and in homes.

Zoombombing. This is a cyberthreat that recently picked up steam due to the increased use of Zoom, a now-popular web conferencing tool. Employees, celebrities, friends, and family have used this app (and apps like it) to communicate in larger groups. Now it’s commonly adopted by schools for virtual instruction hours.

Since shelter-in-place procedures were enforced, stories of Zoombombing incidents have appeared left and right. Take, for example, the case of the unknown man who hacked into a Berkeley virtual class over Zoom to expose himself to high school students and shout obscenities. What made this case notable was the fact that the teacher of that class followed the recommended procedures to secure the session, yet a breach still took place.

Privacy issues. When it comes to children’s data, privacy is almost always the top issue. And there are many ways such data can be compromised: from organizational data breaches—something we’re all too familiar with at this point—to accidental leaks to unconsented data gathering by tools and/or apps introduced in a rush.

An accidental leaking incident happened in Oakland when administrators inadvertently posted hundreds of access codes and passwords used in online classes and video conferences to the public, allowing anyone with a Gmail account to not only join these classes but access student data.

In April 2020, a father filed a case against Google on behalf of his two kids for violating the Children’s Online Privacy Protection Act (COPPA) and the Biometric Information Privacy Act (BIPA) of Illinois. The father, Clinton Farwell, alleges that Google’s G Suite for Education service collects the data—their PII and biometrics—of children, who are aged 13 and below, to “secretly and unlawfully monitor and profile children, but to do so without the knowledge or consent of those children’s parents.”

This happened two months after Hector Balderas, the attorney general of New Mexico, filed a case against the company for continuing to track children outside the classroom.

Ransomware attacks. Educational institutions aren’t immune to ransomware attacks. Panama-Buena Vista Union School. Fort Worth Independent. Crystal Lake Community High School. These are just some of the districts—covering 284 schools in all—affected by ransomware from the start of 2020 through the first week of April. Unfortunately, the pandemic won’t make them less of a target—only more.

With a lot of K-12 schools adjusting to the pandemic—often introducing tools and apps that cater to remote learning without conducting security audits—it is almost expected that something bad is going to happen. The mad scrambling to address the sudden change in demand only shows how unprepared these school districts were. It’s also unfortunate that administrative staff have to figure things out and learn by themselves how to better protect student data, especially if they don’t have a dedicated IT team. And, often, that learning curve is quite steep.

Phishing scams. In the context of the education industry, phishing scams are an ever-present threat. According to Doug Levin, the founder and president of the K-12 Cybersecurity Resource Center, schools are subjected to “drive-by” phishing in particular.

“Scammers and criminals really understand the human psyche and the desire for people to get more information and to feel in some cases, I think it’s fair to say in terms of coronavirus, some level of panic,” Levin said in an interview with EdWeek. “That makes people more likely to suspend judgment for messages that might otherwise be suspicious, and more likely to click on a document because it sounds urgent and important and relevant to them, even if they weren’t expecting it.”

Security tips for parents and guardians

To ensure distance learning and homeschooled students have an uninterrupted learning experience, parents or guardians should make sure that all the tools and gadgets their kids use to start school are prepared. In fact, doing so is similar to how to keep work devices secure while working from home. For clarity’s sake, let’s flesh out some general steps, shall we?

Secure your Wi-Fi

  • Make sure that the router or the hotspot is using a strong password. Not only that, switch up the password every couple of months to keep it fresh.
  • Make sure that all firmware is updated.
  • Change the router’s admin credentials.
  • Turn on the router’s firewall.

Secure their device(s)

  • Make sure students’ computers or other devices are password-protected and lock automatically after a short period of time. This way, work won’t be lost by a pet running wild or a curious younger sister smashing some buttons.

    For schools that issue student laptops, the most common operating system is ChromeOS (Chromebooks). Here’s a simple and quick guide on how parents and guardians can lock Chromebooks. The password doesn’t need to be complicated; you and your child should both be able to remember it. Decide on a passphrase together, but don’t share it with the other kids in the house.

  • Ensure that the firewall is enabled on the device.
  • Enforce two-factor authentication (2FA).
  • Ensure that the device has endpoint protection installed and running in real time.
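On the 2FA point: most authenticator apps display time-based one-time passwords (TOTP). For the curious, the whole mechanism fits in a few lines of standard-library Python. This is a curiosity-level sketch of RFC 6238, not a replacement for a real authenticator app:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30, at=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

Both the server and the app derive the same six digits from a shared secret and the current 30-second window, which is why the codes expire so quickly.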

Secure your child’s data

  • Schools use a learning management system (LMS) to track children’s activities. It is also what kids use to access resources that they need for learning.

    Make sure that your child’s LMS password follows the school’s guidelines on how to create a high-entropy password. If the school doesn’t specify strong password guidelines, create a strong password yourself. Password managers can usually do this for you if you feel that thinking up a complicated one and remembering it is too much of a chore.

  • It also pays to limit the device your child uses for studying to schoolwork only. If there are other devices in the house, they can be used to access social media, YouTube, video games, and other recreational activities. This will lessen their chances of encountering an online threat on the same device that stores all their student data.
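If you're wondering what "high entropy" means in practice, this is what a password manager is doing under the hood. The 16-character length and 94-character alphabet below are illustrative choices, not school policy:

```python
import math
import secrets
import string

def generate_password(length=16):
    """Pick characters using a cryptographically secure random generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def entropy_bits(length, alphabet_size):
    """Theoretical entropy of a uniformly random password, in bits."""
    return length * math.log2(alphabet_size)

pw = generate_password()
print(pw)  # random every run, e.g. a jumble of letters, digits, and symbols
print(round(entropy_bits(16, 94)))  # 105
```

Sixteen random characters from a 94-symbol alphabet gives roughly 105 bits of entropy, which is far beyond practical guessing attacks.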

Secure your child’s privacy

There was a case in which a school accidentally turned on the cameras of school-issued devices the students were using. It blew up in the news because it was a serious privacy violation. Although this may be considered a rare incident, you can’t be too careful when the device your kid uses has a built-in camera.

Students are often required to show their faces on video conference software so teachers know they are paying attention. But for all the other time spent on assignments, it’s a good idea to cover up built-in cameras. There are laptop camera covers parents or guardians can purchase to slide across the lens when it’s not in use.

New challenges, new opportunities to learn

While education authorities have had their hands full for months now, parents and guardians can do their part, too, by keeping their transition to a new learning environment as safe and frictionless as possible. As you may already know, some states have relaxed their lockdown rules, allowing schools to re-open. However, the technology train has left the station.

Even as in-person instruction continues, educational tech will become even more integral to students’ learning experiences. Keeping those specialized software suites, apps, communication tools, and devices safe from cyberthreats and privacy invasions will be imperative for all future generations of learners.

Safe, not sorry

While IT departments in educational institutions continue to wrestle with current cybersecurity challenges, parents and guardians have to step up their efforts and contribute to K-12 cybersecurity as a whole. Lock down your children’s devices, whether they use them in the classroom or at home. True, it will not guarantee 100 percent protection from cybercriminals, but at the very least, you can be assured that your kids and their devices will be far harder targets.

Stay safe!

The post How to keep K–12 distance learners cybersecure this school year appeared first on Malwarebytes Labs.

New web skimmer steals credit card data, sends to crooks via Telegram

The digital credit card skimming landscape keeps evolving, often borrowing techniques used by other malware authors in order to avoid detection.

As defenders, we look for any kind of artifacts and malicious infrastructure that we might be able to identify to protect our users and alert affected merchants. These malicious artifacts can range from compromised stores to malicious JavaScript, domains, and IP addresses used to host a skimmer and exfiltrate data.

One such artifact is a so-called “gate,” which is typically a domain or IP address where stolen customer data is being sent and collected by cybercriminals. Typically, we see threat actors either stand up their own gate infrastructure or use compromised resources.

However, there are variations that involve abusing legitimate programs and services, thereby blending in with normal traffic. In this blog, we take a look at the latest web skimming trick, which consists of sending stolen credit card data via the popular instant messaging platform Telegram.

An otherwise normal shopping experience

We are seeing a large number of e-commerce sites attacked either through a common vulnerability or stolen credentials. Unaware shoppers may visit a merchant that has been compromised with a web skimmer and make a purchase while unknowingly handing over their credit card data to criminals.

Skimmers insert themselves seamlessly within the shopping experience and only those with a keen eye for detail or who are armed with the proper network tools may notice something’s not right.

Figure 1: Credit card skimmer using Telegram bot

The skimmer will become active on the payment page and surreptitiously exfiltrate the personal and banking information entered by the customer. In simple terms, things like name, address, credit card number, expiry, and CVV will be leaked via an instant message sent to a private Telegram channel.

Telegram-based skimmer

Telegram is a popular and legitimate instant messaging service that offers end-to-end encryption (in its secret chats). A number of cybercriminals abuse it for their daily communications, but also for automated tasks found in malware.

Attackers have used Telegram to exfiltrate data before, for example via traditional Trojan horses such as the Masad stealer. However, security researcher @AffableKraut shared, in a Twitter thread, the first publicly documented instance of a credit card skimmer using Telegram.

The skimmer code keeps with tradition in that it checks for the usual web debuggers to prevent being analyzed. It also looks for fields of interest, such as billing, payment, credit card number, expiration, and CVV.

Figure 2: First part of the skimmer code

The novelty is the presence of the Telegram code to exfiltrate the stolen data. The skimmer’s author encoded the bot ID and channel, as well as the Telegram API request with simple Base64 encoding to keep it away from prying eyes.

Figure 3: Skimming code containing Telegram’s API
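Base64 is an encoding, not encryption, so this obfuscation layer is trivial for an analyst to peel off. A hypothetical Python sketch of the idea (the endpoint string below is a made-up placeholder, not data recovered from the actual skimmer):

```python
import base64

# Placeholder for the kind of Base64 blob embedded in the skimmer code;
# <BOT_TOKEN> stands in for a real Telegram bot credential.
obfuscated = base64.b64encode(
    b"https://api.telegram.org/bot<BOT_TOKEN>/sendMessage"
).decode()

def deobfuscate(blob: str) -> str:
    """Reverse the skimmer's single layer of Base64 obfuscation."""
    return base64.b64decode(blob).decode()

print(deobfuscate(obfuscated))
# -> https://api.telegram.org/bot<BOT_TOKEN>/sendMessage
```

A single decode call is enough to surface the Telegram API endpoint, which is why this trick only keeps the data away from casual eyes, not from researchers.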

The exfiltration is triggered only if the browser’s current URL contains a keyword indicative of a shopping site and when the user validates the purchase. At this point, the browser will send the payment details to both the legitimate payment processor and the cybercriminals.
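In outline, that gating amounts to two checks before anything is sent. A simplified Python sketch (the keyword list is invented for illustration; the real skimmer’s keywords and structure may differ):

```python
# Hypothetical keywords; the real skimmer's list is not reproduced here.
SHOPPING_KEYWORDS = ("checkout", "cart", "payment", "onepage")

def should_exfiltrate(current_url: str, purchase_submitted: bool) -> bool:
    """Fire only on shopping-related pages, and only once the
    customer actually submits the purchase form."""
    url = current_url.lower()
    return purchase_submitted and any(k in url for k in SHOPPING_KEYWORDS)

print(should_exfiltrate("https://shop.example.com/checkout", True))   # True
print(should_exfiltrate("https://shop.example.com/about", True))      # False
```

Waiting for the purchase to be validated keeps the skimmer quiet during ordinary browsing, which is part of why these attacks are hard to spot in network traffic.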

Figure 4: A purchase where credit card data is stolen and exfiltrated

The fraudulent data exchange is conducted via Telegram’s API, which posts payment details into a chat channel. That data was previously encrypted to make identification more difficult.

For threat actors, this data exfiltration mechanism is efficient and doesn’t require them to keep up infrastructure that could be taken down or blocked by defenders. They can even receive a notification in real time for each new victim, helping them quickly monetize the stolen cards in underground markets.

Challenges with network protection

Defending against this variant of a skimming attack is a little trickier, since it relies on a legitimate communication service. One could obviously block all connections to Telegram at the network level, but attackers could easily switch to another provider or platform (as they have done before) and still get away with it.

Malwarebytes Browser Guard will identify and block this specific skimming attack without disabling or interfering with the use of Telegram or its API. So far we have only identified a couple of online stores that have been compromised with this variant, but there are likely several more.

Figure 5: Malwarebytes blocking this skimming attack

As always, we need to adapt our tools and methodologies to keep up with financially-motivated attacks targeting e-commerce platforms. Online merchants also play a huge role in derailing this criminal enterprise and preserving the trust of their customer base. By being proactive and vigilant, security researchers and e-commerce vendors can work together to defeat cybercriminals standing in the way of legitimate business.

The post New web skimmer steals credit card data, sends to crooks via Telegram appeared first on Malwarebytes Labs.

Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Adam Kujawa, security evangelist and director of Malwarebytes Labs, about “security hubris,” the simple phenomenon in which businesses are less secure than they believe themselves to be.

Ask yourself, right now, on a scale from one to ten, how cybersecure are you? Now, do you have any reused passwords for your online accounts? Does your home router still have its default password? If your business rolled out new software for you to use for working from home (WFH), do you know if those software platforms are secure?

If your original answer is looking a little shakier now, don’t be surprised. That is security hubris.

Tune in to hear about the dangers of security hubris to a business, how to protect against it, and about how Malwarebytes found it within our most recent report, “Enduring from home: COVID-19’s impact on business security,” on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

Other cybersecurity news:

  • The US government issued a warning about North Korean hackers targeting banks worldwide. (Source: BleepingComputer)
  • A team of academics from Switzerland has discovered a security bug that can be abused to bypass PIN codes for Visa contactless payments. (Source: ZDNet)
  • For governments and armed forces around the world, the digital domain has become a potential battlefield. (Source: Public Technology)
  • A new hacker-for-hire group is targeting organizations worldwide with malware hidden inside malicious 3Ds Max plugins. (Source: Security Affairs)
  • The Qbot trojan evolves to hijack legitimate email threads. (Source: BetaNews)

Stay safe, everyone!

The post Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa appeared first on Malwarebytes Labs.

Apple’s notarization process fails to protect

In macOS Mojave, Apple introduced the concept of notarization, a process that developers can go through to ensure that their software is malware-free (and must go through for their software to run on macOS Catalina). This is meant to be another layer in Apple’s protection against malware. Unfortunately, it’s starting to look like notarization may be less security and more security theater.

What is notarization?

Notarization goes hand-in-hand with another security feature: code signing. So let’s talk about that first.

Code signing is a cryptographic process that enables a developer to provide authentication to their software. It both verifies who created the software and verifies the integrity of the software. By code signing an app, developers can (to some degree) prevent it from being modified maliciously—or at the very least, make such modifications easily detectable.
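The integrity half of that guarantee boils down to comparing cryptographic digests. Real code signing goes further, putting a public-key signature over those digests so they can’t be forged along with the code, but a minimal Python sketch of the tamper-detection idea:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest standing in for the hashes a code signature covers."""
    return hashlib.sha256(data).hexdigest()

original = b"the application's code, as shipped by the developer"
recorded = digest(original)  # fixed at signing time

# Verification recomputes the digest and compares it to the recorded value.
print(digest(original) == recorded)                        # True: untampered
print(digest(original + b" + injected code") == recorded)  # False: modified
```

Any change to the signed bytes, however small, produces a completely different digest, which is what makes malicious modification easily detectable.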

The code signing process has been integral to Mac software development for years. The user has to jump through hoops to run unsigned software, so little mainstream Mac software today comes unsigned.

However, Mac software distributed outside the App Store never had to go through any kind of checks. This meant that malware authors could simply obtain a code signing certificate from Apple (for a mere $99) and use it to sign their malware, enabling it to run without trouble. Of course, once such malware is discovered, Apple can revoke the certificate, thus neutralizing it. However, malware can often go undiscovered for years, as illustrated best by the FruitFly malware, which went undetected for at least 10 years.

In light of this problem, Apple created a process they call “notarization.” This process involves developers submitting their software to Apple. That software goes through some kind of automated scan to ensure it doesn’t contain malware, and then is either rejected or notarized (i.e., certified as malware-free by Apple—in theory).

In macOS Catalina, software that is not notarized is prevented from running at all. If you try, you will simply be told “do not pass Go, do not collect $200.” (Or in Apple’s words, it can’t be opened because “Apple cannot check it for malicious software.”)

The message displayed by Catalina for older versions of Spotify

There are, of course, ways to run software that is not signed or not notarized, but there’s no indication in the error message as to how this is done, so as far as the average user is concerned, it’s not an option.

So how’s that working out so far?

The big question on everyone’s minds when notarization was announced at Apple’s WWDC conference in 2019 was: “How effective is this going to be?” Many were quite optimistic that this would spell the end of Mac malware once and for all. However, those of us in the security industry did not drink the Kool-Aid. It turns out our skepticism was warranted.

There are a couple of tricks the bad guys are using in light of the new requirements. One is simple: don’t sign or notarize the apps at all.

We’re seeing quite a few cases where malware authors have stopped signing their software, and have instead been shipping it with instructions to the user on how to run it.

Unsigned Mac malware

As can be seen from the above screenshot, the malware comes on a disk image (.dmg) file with a custom background. That background image shows instructions for opening the software, which is neither signed nor notarized.

The irony here is that we see lots of people getting infected with this malware—a variant of the Shlayer or Bundlore adware, depending on who you ask—despite the minor difficulty of opening it. Meanwhile, the installation of security software on macOS has gotten to be so difficult that we get a fair number of support cases about it.

The other option, of course, is for threat actors to get their malware notarized.

Notarize malware?! Say it ain’t so!

In theory, the notarization process is supposed to weed out anything malicious. In practice, nobody really understands exactly how notarization works, and Apple is not inclined to share details. (For good reason—if they told the bad guys how they were checking for malware, the bad guys would know how to avoid getting caught by those checks.)

All that developers and security researchers know is that notarization is fast. I’ve personally notarized software quite a few times at this point, and it usually takes less than a couple of minutes between submission and receipt of the e-mail confirming success of notarization. That means there’s definitely no human intervention involved in the process, as there is with App Store reviews. Whatever it is, it’s solely automated.

I’ve assumed since notarization was first introduced that it would turn out to be fallible. I’ve even toyed with the idea of testing this process, though the risk of getting my developer account “Charlie Millered” has prevented me from doing so. (Charlie Miller is a well-known security researcher who created a proof-of-concept malware app and got it into the iOS App Store in 2011. Even though he notified Apple after getting the app approved, Apple still revoked his developer account and he has been banned from further Apple development activity ever since.)

It turns out, though, that all I had to do was wait for the bad guys to run the test for me. According to new findings, Mac security researcher Patrick Wardle has discovered samples of the Shlayer adware that are notarized. Yes, that’s correct. Apple’s notarization process has allowed known malware to pass through undetected, and to be implicitly vouched for by Apple.

How did they do that?

We’re still not exactly sure what the Shlayer folks did to get their malware notarized, but increasingly, it’s looking like they did nothing at all. On the surface, little has changed.

Comparison of two Shlayer installers

The above screenshot shows a notarized Shlayer sample on the left, and an older one on the right. There’s no difference at all in the appearance. But what about when you dive into the code?

Comparison of the code of two Shlayer samples

This screenshot is hardly a comprehensive look into the code. It simply shows the entry point, and the names of a number of the functions found in the code. Still, at this level, any differences in the code are minor.

It’s entirely possible that something in this code, somewhere, was modified to break any detection that Apple might have had for this adware. Without knowing how (if?) Apple was detecting the older sample (shown on the right), it would be quite difficult to identify whether any changes were made to the notarized sample (on the left) that would break that detection.

This leaves us facing two distinct possibilities, neither of which is particularly appealing. Either Apple was able to detect Shlayer as part of the notarization process, but breaking that detection was trivial, or Apple had nothing in the notarization process to detect Shlayer, which has been around for a couple of years at this point.

What does this mean?

This discovery doesn’t change anything from my perspective, as a skeptical and somewhat paranoid security researcher. However, it should help “normal” Mac users open their eyes and recognize that the Apple stamp does not automatically mean “safe.”

Apple wants you to believe that their systems are safe from malware. Although they no longer run the infamous “Macs don’t get viruses” ads, Apple never talks about malware publicly, and loves to give the impression that its systems are secure. Unfortunately, the opposite has been proven to be the case with great regularity. Macs—and iOS devices like iPhones and iPads, for that matter—are not invulnerable, and their built-in security mechanisms cannot protect users completely from infection.

Don’t get me wrong, I still use and love Mac and iOS devices. I don’t want to give the impression that they shouldn’t be used at all. It’s important to understand, though, that you must be just as careful with what you do with your Apple devices as you would be with your Windows or Android devices. And when in doubt, an extra layer of anti-malware protection goes a long way in providing peace of mind.

The post Apple’s notarization process fails to protect appeared first on Malwarebytes Labs.

Missing person scams: what to watch out for

Social media has a long history of people asking for help or giving advice to other users. One common feature is the ubiquitous “missing person” post. You’ve almost certainly seen one, and may well have amplified such a post on Facebook, Twitter, or even a blog.

The sheer reach and virality of social media is perfect for alerting others. It really is akin to climbing onto a rooftop with a foghorn and blasting out your message to the masses. However, the flipside is an ugly one.

Social media is also a breeding ground for phishers, scammers, trolls, and domestic abusers working themselves into missing person narratives. When this happens, there can be serious consequences.

“My friend is missing, please retweet…”

Panicked, urgent requests for information are how these missing person scams spread. They’re very popular on social media and can quickly reach the specific geographic demographic the message needs to go to.

If posted to platforms other than Twitter, they may well also come with a few links which offer additional information. The links may or may not be official law enforcement resources.

Occasionally, links lead to dedicated missing person detection organisations offering additional services.

You may well receive a missing person notice or request through email, as opposed to something posted to the wider world at large.

All useful ways to get the word out, but also very open to exploitation.

How can this go wrong?

The ease of sharing on social media is also the biggest danger where missing person requests are concerned. If someone pops up in your timeline begging for help to find a relative who went missing overnight, the impulse to share is very strong. It takes less than a second to hit Retweet or share, and you’ve done your bit for the day.

However.

If you’re not performing due diligence on who is doing the sharing, this could potentially endanger the person in the images. Is the person sharing the information directly a verified presence on the platform you’re using, or a newly created throwaway account?

If they are verified, are they sharing it from a position of personal interest, or simply retweeting somebody else? Do they know the person they’re retweeting, or is it a random person? Do they link to a website, and is it an official law enforcement source or something else altogether?

Even if the person sharing it first-hand is verified or they know the person they’re sharing content from, that doesn’t mean what you’re seeing is on the level.

What if the non-verified person is a domestic abuser, looking for an easy way to track down someone who’s escaped their malign presence? What if the verified individual is the abuser? We simply don’t know, but by the time you may have considered this the Tweet has already been and gone.

When maliciousness is “Just a prank, bro”

Even if the person asking to find somebody isn’t some form of domestic abuser, there’s a rapidly sliding scale of badness waiting to pounce. Often, people will put these sorts of requests out for a joke, or as part of a meme. They’ll grab an image most likely well known in one geographic region but not another, and then share asking for information. This can often bleed into other memes.

“Have you seen this person? They stole my phone and didn’t realise it took a picture” is a popular one, often at the expense of a local D-list celebrity. In the same way, people will often make bad-taste jokes related to missing children. To avoid the gag being punctured early, they may avoid using imagery from actual abduction cases and instead grab a still from a random YouTube clip or something from an image resource.

A little girl, lost in Doncaster?

One such example of this happened in the last few weeks. A still image appeared to show a small child in distress, bolted onto a “missing” plea for help.

Well, she really was in distress…but as a result of an ice hockey player leaving his team in 2015, and not because she’d gone missing or been abducted. There’s a link provided claiming to offer CCTV footage of a non-existent abduction, though reports don’t say where the links took eager clickers.

A panic-filled message supplied with a link is a common tactic in these realms. The same thing happened with a similar story in April of 2019. Someone claimed their 10-year-old sister had gone missing outside school after an argument with her friend. However, it didn’t take long for the thread to unravel. Observant Facebook users noted that schools would have been closed on the day it supposedly happened.

Additionally, others mentioned that they’d seen the same missing sister message from multiple Facebook profiles. As with the most recent fake missing story, we don’t know where the link wound up. People understandably either steered clear or visited but didn’t take a screenshot and never spoke of it again.

“My child is missing”: an eternally popular scam

There was another one doing the rounds in June this year, once more claiming a child was missing. The page appeared for British users in Lichfield, Bloxwich, Wolverhampton, and Walsall, yet its seemingly US-centric language, mentioning “police captains” and “downtown,” fairly gave the game away, hinting at generic cut-and-paste origins. The fact that it cites multiple conflicting dates for when the kidnapping took place is also a giveaway.

This one was apparently a Facebook phish, and quite a successful one in 2020. So much so that it first appeared in March, and then May, before putting in its June performance. Scammers continue to use it because it’s easy to throw together, and it works.

Exploiting a genuine request

It’s not just scammers taking the lead and posting fake missing person posts. They’ll also insert themselves into other people’s misery and do whatever they can to grab some ill-gotten gains. An example of this dates to 2013, when someone mentioned that they’d tried to reunite with their long-lost sister via a “Have you seen this person?” style letter.

The letter was published in a magazine, and someone got in touch. Unfortunately, that person claimed they held the sister hostage and demanded a ransom. The cover story quickly fell apart after they claimed certain relatives were dead who were in fact alive, and the missing person scam was foiled.

Here’s a similar awful scam from 2016, where Facebook scammers claimed someone’s missing daughter was a sex worker in Atlanta. They said she was being trafficked and could be “bought back” for $70,000. A terrible thing to tell someone, but then these people aren’t looking to play fair.

Fake detection agencies

Some of these fakes will find you by post, as opposed to merely lurking online. There have been cases where so-called “recovery bureaus” drop you a note claiming to be able to lead you to missing people. When you meet up with the arranged contacts, though, the demands for big slices of cash start coming. What information they do have is likely publicly sourced or otherwise easily obtainable (and not worth the asking price).

Looking for validation

Helping people is great and assisting on social media is a good thing. We just need to be careful we’re aiding the right people. While it may not always be possible for a missing person alert to come directly from an official police source, it would be worth taking a little bit of time to dig into the message, and the person posting it, before sharing further.

The issue of people going missing is bad enough; we shouldn’t look to compound misery by unwittingly aiding people up to no good.

The post Missing person scams: what to watch out for appeared first on Malwarebytes Labs.

Good news: Stalkerware survey results show majority of people aren’t creepy

Back in July, we sent out a survey to Malwarebytes Labs readers on the subject of stalkerware—the term used to describe apps that can potentially invade someone’s privacy. We asked one question: “Have you ever used an app to monitor your partner’s phone?” 

The results were reassuring.

We received 4,578 responses from readers all over the world to our stalkerware survey and the answer was a resounding “NO.” An overwhelming 98.23 percent of respondents said they had not used an app to monitor their partner’s phone.


For our part, Malwarebytes takes stalkerware seriously. We’ve been detecting apps with monitoring capabilities for more than six years; Malwarebytes for Windows, Mac, and Android now detects, and allows users to block, applications that attempt to monitor their online behavior and/or physical whereabouts without their knowledge or consent. Last year, we helped co-found the Coalition Against Stalkerware with the Electronic Frontier Foundation, the National Network to End Domestic Violence, and several other AV vendors and advocacy groups.

It stands to reason that a readership comprised of Malwarebytes customers and people with a strong interest in cybersecurity would say “no” to stalkerware—we’ve spoken up about the potential privacy concerns associated with using these apps and the danger of equipping software with high-grade surveillance capabilities for a long time. We didn’t want to assume everyone agreed with us, but the data from our stalkerware survey shows our instincts were right.

No to stalkerware

Beyond a simple yes or no, we also asked our survey-takers to explain why they answered the way they did. The most common answer by far was mutual respect for and trust in their partner. In fact, “respect,” “trust,” and “privacy” were the three most commonly used words in our participants’ responses:

“My partner and I share our lives … To monitor someone else’s phone is a tragic lack of trust.”

Many of those surveyed cited the Golden Rule (treat others the way you want to be treated) as their reason for not using stalkerware-type apps:

“I wouldn’t want anyone to monitor me, so therefore I would not monitor them.”

Others saw it as a clear-cut issue of ethics:

“People are entitled to their privacy as long as they do not do things that are illegal. Their rights end at the beginning of mine.”

Some respondents shared harrowing real-life accounts of being a victim of stalkerware or otherwise having their privacy violated:

“I have been a victim of stalking several times when vicious criminals used my own surveillance cameras to spy on my activity then used it to break into my apartment.”

Stalkerware vs. location sharing vs. parental monitoring

Many of those surveyed, answering either yes or no, made a distinction between stalkerware-type apps writ large and location-sharing apps like Apple’s Find My iPhone and Google Maps. Location sharing was generally considered acceptable because users volunteered to share their own information and sharing was limited to their current location.

“My wife & myself allow Apple Find My Phone to track each other if required. I was keen that should I not arrive home from a run, she could find out where I was in the case of a health issue or accident.”

Also considered okay by our respondents were the types of parental controls packaged in by default with their various devices. Many respondents specifically mentioned tracking their child’s location:

“It would not be ok with me if someone was monitoring me and I would never do it to anyone else, the only thing I would like is be able to track my child if kidnapped.”

Some parents admitted to using monitoring of some kind with their children, but it wasn’t clear how far they were willing to go and if children were aware they were being monitored:

“The only reason I have set up parental control for my son is for his safety most importantly.”

This is the murky world of parental-monitoring apps. On one end of the spectrum there are the first-party parental controls like those built into the iPhone and Nintendo Switch. These controls allow parents to restrict screen time and approve games and additional content on an ad hoc basis. Then there are third-party apps, which provide limited capabilities to track one thing and one thing only, like, say, a child’s location, or their screen time, or the websites they are visiting.

On the other end of the spectrum, there are apps in the same parental monitoring category that can provide a far broader breadth of monitoring, from tracking all of a child’s interactions on social media to using a keylogger that might even reveal online searches meant to stay private. 

You can hear more about our take on these apps in our latest podcast episode, but the long and the short of it is that Malwarebytes doesn’t recommend them, as they can feature much of the same high-tech surveillance capabilities of nation-state malware and stalkerware, but often lack basic cybersecurity and privacy measures.

Who said ‘yes’ to stalkerware?

Of course, our stalkerware survey analysis would not be complete without taking a look at the 81 responses from those who said “yes” to using apps to monitor their partners’ phones.

Again, the majority of respondents made a distinction between consensual location-sharing apps and the more intrusive types of monitoring that stalkerware can provide. Many of those who answered “yes” to using an app to monitor their partner’s phone said things like:

“My wife and I have both enabled Google’s location sharing service. It can be useful if we need to know where each other is.”

And:

“Only the Find My iPhone app. My wife is out running or hiking by herself quite often and she knows I want to know if she is safe.”

Of the 81 people who said they use apps to monitor their partners’ phones, only nine cited issues of trust, cheating, “being lied to” or “change in partner’s behavior.” Of those nine, two said their partner agreed to install the app.

NortonLifeLock’s online creeping study

The results of the Labs stalkerware survey are especially interesting when compared to the Online Creeping Survey conducted by NortonLifeLock, another founding member of the Coalition Against Stalkerware.

This survey of more than 2,000 adults in the United States found that 46 percent of respondents admitted to “stalking” an ex or current partner online “by checking in on them without their knowledge or consent.”

Twenty-nine percent of those surveyed admitted to checking a current or former partner’s phone. Twenty-one percent admitted to looking through a partner’s search history on one of their devices without permission. Nine percent admitted to creating a fake social media profile to check in on their partners.

When compared to the Labs stalkerware survey, it would seem that online stalking is considered more acceptable when couched under the term “checking in.” For perspective, if one were to swap the word “diary” for “phone,” we don’t think too many people would feel comfortable admitting, “Hey, I’m just ‘checking in’ on my girlfriend/wife’s diary. No big deal.”

Stalkerware in a pandemic

Finally, we can’t end this piece without at least acknowledging the strange and scary times we’re living in. Shelter-in-place orders at the start of the coronavirus pandemic became de facto jail sentences for stalkerware and domestic violence victims, imprisoning them with their abusers. Unsurprisingly, The New York Times reported an increase in the number of domestic violence victims seeking help since March.

For some users, however, the pandemic has brought on a different kind of suffering. One survey respondent best summed up the current malaise of anxiety, fear, and depression: 

“No partner to monitor lol.”

We like to think, dear reader, that they’re not laughing at themselves and the challenges of finding a partner during COVID. Rather, they’re laughing at all of us.

Stalkerware resources

As mentioned earlier, Malwarebytes for Windows, Mac, or Android will detect and let users remove stalkerware-type applications. And if you think you might have stalkerware on your mobile device, be sure to check out our article on what to do when you find stalkerware or suspect you’re the victim of stalkerware.

Here are a few other important reads on stalkerware:

Stalkerware and online stalking are accepted by Americans. Why?

Stalkerware’s legal enforcement problem

Awareness of stalkerware, monitoring apps, and spyware on the rise

How to protect against stalkerware

The post Good news: Stalkerware survey results show majority of people aren’t creepy appeared first on Malwarebytes Labs.

The cybersecurity skills gap is misunderstood

Nearly every year, a trade association, a university, an independent researcher, or a large corporation—and sometimes all of them and many in between—push out the latest research on the cybersecurity skills gap, the now decade-plus-old idea that the global economy faces a large and growing shortage of cybersecurity professionals who simply cannot be found.

It is, as one report said, a “state of emergency.” It would be nice, then, if the numbers made more sense.

In 2010, according to one study focused on the United States, the cybersecurity skills gap included at least 10,000 individuals. In 2015, according to a separate analysis, that number was 209,000. Also, in 2015, according to yet another report, that number was more than 1 million. Today, that number is both a projected 3.5 million by 2021 and a current 4.07 million, worldwide.

PK Agarwal, dean of the University of California Santa Cruz Silicon Valley Extension, has followed these numbers for years. He followed the data out of personal interest, and he followed it more deeply when building programs at Northeastern University Silicon Valley, the educational hub opened by the private Boston-based university, where he most recently served as regional dean and CEO. During his research, he uncovered something.

“In terms of actual numbers, if you’re looking at the supply and demand gap in cybersecurity, you’ll see tons of reports,” Agarwal said. “They’ll be all over the map.”

He continued: “Yes, there is a shortage, but it is not a systemic shortage. It is in certain sweet spots. That’s the reality. That’s the actual truth.”

Like Agarwal said, there are “sweet spots” of truth to the cybersecurity skills gap—there can be difficulty filling immediate needs on deadline-driven projects, or finding professionals already trained in a crucial software tool when a company cannot spare the time to train current employees.

But more broadly, the cybersecurity skills gap, according to recruiters, hiring managers, and academics, is misunderstood. Rather than a lack of talent, there is sometimes, on the part of companies, a lack of understanding of how to find and hire that talent.

By posting overly restrictive job requirements, demanding contradictory skillsets, refusing to hire remote workers, offering non-competitive rates, and failing to see minorities, women, and veterans as viable candidates, businesses could miss out on the very real, very accessible cybersecurity talent out there.

In other words, if you are not able to find a cybersecurity expert for your company, that doesn’t mean they don’t exist. It means you might need help in finding them.

Number games

In 2010, the Center for Strategic & International Studies (CSIS) released its report “A Human Capital Crisis in Cybersecurity.” According to the paper, “the cyber threat to the United States affects all aspects of society, business, and government, but there is neither a broad cadre of cyber experts nor an established cyber career field to build upon, particularly within the Federal government.”

Further, according to Jim Gosler, a then-visiting NSA scientist and the founding director of the CIA’s Clandestine Information Technology Office, only 1,000 security experts were available in the US with the “specialized skills to operate effectively in cyberspace.” The country, Gosler said in interviews, needed 10,000 to 30,000.

Though the cybersecurity skills gap was likely spotted before 2010, the CSIS paper partly captures a theory that draws support today—the skills gap is a lack of talent.

Years later, the cybersecurity skills gap reportedly grew into a chasm. It would soon span the world.  

In 2016, the Enterprise Strategy Group called the cybersecurity skills gap a “state of emergency,” unveiling research that showed that 46 percent of senior IT and cybersecurity professionals at midmarket and enterprise companies described their departments’ lack of cybersecurity skills as “problematic.” The same year, separate data compiled by the professional IT association ISACA predicted that the entire world would be short 2 million cybersecurity professionals by the year 2019.

But by 2019, that prediction had already come true, according to a survey published that year by the International Information System Security Certification Consortium, or (ISC)2. The world, the group said, employed 2.8 million cybersecurity professionals, but it needed 4.07 million.

At the same time, a recent study projected that the skills gap in 2021 would be lower than the (ISC)2 estimate for today—instead predicting a need of 3.5 million professionals by next year. Throughout the years, separate studies have offered similarly conflicting numbers.

The variation can be dizzying, but it can be explained by a variation in motivations, said Agarwal. He said these reports do not exist in a vacuum, but are rather drawn up for companies and, perhaps unsurprisingly, for major universities, which rely on this data to help create new programs and to develop curricula that attract current and prospective students.

It’s a path Agarwal went down years ago when developing a Master’s program in computer science at the Northeastern University Silicon Valley extension. The data, he said, supported the program, showing some 14,000 Bay Area jobs that listed a Master’s degree as a requirement, while neighboring Bay Area schools were on track to produce fewer than 500 Master’s graduates that year.

“There was a massive gap, so we launched the campus,” Agarwal said. The program garnered interest, but not as much as the data suggested.

Agarwal remembered thinking at the time: “What the hell is going on?” 

It turns out, a lot was going on, Agarwal said. For many students, the prospect of more student debt for a potentially higher pay was not enough to get them into the program. Further, the salaries for Bachelor’s graduates and Master’s graduates were close enough that students had a difficult time seeing the value in getting the advanced degree.

That wariness towards a Master’s degree in computer science also plagues cybersecurity education today, Agarwal said, comparing it to an advanced degree in Biology.

“Cybersecurity at the Master’s level is about the same as in Biology—it has no market value,” Agarwal said. “If you have a BA [in Biology], you’re a lab rat. If you have an MA, you’re a senior lab rat.”

So, imagine the confusion for cybersecurity candidates who, when applying for jobs, find Master’s degrees listed as requirements. And yet, that is far from uncommon. The requirement, like many others, can drive candidates away.

Searching different

For companies that feel like the cybersecurity talent they need simply does not exist, recruiters and strategists have different advice: Look for cybersecurity talent in a different way. That means more lenient degree and certification requirements, more openness to working remotely, and hiring for the aptitude of a candidate, rather than going down a must-have wish list.

Jim Johnson, senior vice president and Chief Technology Officer for the international recruiting agency Robert Half, said that, when he thinks about client needs in cybersecurity, he often recalls a conference panel he watched years ago. A panel of hiring experts, Johnson said, was asked a simple question: How do you find people?

One person, Johnson recalled, said “You need to be okay hiring people who know nothing.”

The lesson, Johnson said, was that companies should hire for aptitude and the ability to learn.

“You hire the personality that fits what you’re looking for,” Johnson said. “If they don’t have everything technically, but they’re a shoo-in for being able to learn it, that’s the person you bring up.”

Johnson also explained that, for some candidates, restrictive job requirements can actually scare them away. His advice for companies: understand what you’re looking for, but don’t make the requirements for the job so restrictive that they cause potential candidates to hesitate.

“You might miss a great hire because you required three certifications and they had one, or they’re in the process of getting one,” Johnson said.

Similarly, Thomas Kranz, a longtime cybersecurity consultant and current cybersecurity strategy adviser for organizations, called job requirements that specifically demand degrees “the biggest barrier companies face when trying to hire cybersecurity talent.”

“This is an attitude that belongs firmly in the last century,” Kranz wrote. “‘Must have a [Bachelor of Science] or advanced degree’ goes hand in hand with ‘Why can’t we find the candidates we need?’”

This thinking has caught on beyond the world of recruiters.

In February, more than a dozen companies, including Malwarebytes, pledged to adopt the Aspen Institute’s “Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce.”

The very first principle requires companies to “widen the aperture of candidate pipelines, including expanding recruitment focus beyond applicants with four-year degrees or using non-gender biased job descriptions.”

At Malwarebytes, the practice of removing strict degree requirements from cybersecurity job descriptions has been in place for cybersecurity hires for at least a year and a half.

“I will never list a BA or BS as a hard requirement for most positions,” said Malwarebytes Chief Information Security Officer John Donovan. “Work and life experience help to round out candidates, especially for cybersecurity roles.” Donovan added that, for more junior positions, there are “creative ways to broaden the applicant pool,” such as using the recruiting programs YearUp, NPower, and others.

These organizations, and many others like them, help transition individuals to tech-focused careers, offering training classes, internships, and access to a corporate world that was perhaps beyond reach.

These types of career development groups can also help a company looking to broaden its search to include typically overlooked communities, including minorities, women, disabled people, and veterans.

Take, for example, the International Consortium of Minority Cybersecurity Professionals, which creates opportunities for women and minorities to advance in the field, or the nonprofit Women in CyberSecurity (WiCyS), which recently developed a veterans’ program. WiCyS primarily works to cultivate the careers of women in cybersecurity by offering training sessions, providing mentorship, granting scholarships, and working with interested corporate partners.

“In cybersecurity, there are challenges that have never existed before,” said Lynn Dohm, executive director for WiCyS. “We need multitasking, diversity of thought, and people from all different backgrounds, all genders, and all ethnicities to tackle these challenges from all different perspectives.”

Finally, for companies still having trouble finding cybersecurity talent, Robert Half’s Johnson recommended broadening the search—literally. Cybersecurity jobs no longer need to be filled by someone located within a 40-mile radius, he said, and if anything, the current pandemic has reinforced this idea.

“The effect of the pandemic, which has shifted how people do their jobs, has made us now realize that the whole working remote thing isn’t as scary as we thought,” Johnson said.

But companies should understand that remote work is as much a boon to them as it is to potential candidates. No longer are qualified candidates limited to jobs they can physically get to—now, they can apply for more appealing jobs much farther from where they live.

And that, of course, will have an impact on salary, Johnson said.

“While Bay Area salaries or a New York salary, while those might not change dramatically, what is changing is the folks that might be being recruited in Des Moines or in Omaha or Oklahoma City, who have traditionally been limited [regionally], now they’re being recruited by companies on the coast,” Johnson said.

“That’s affecting local companies, which are paying those $80,000 salaries. Now those candidates are being offered $85,000 to work remotely. Now I’ve got to compete with that.”

Planning ahead

The cybersecurity skills gap need not frighten a company or a senior cybersecurity manager looking to hire. There are many actionable steps that a business can take today to help broaden their search and find the talent that perhaps other companies are ignoring.

First, stop including hard degree requirements in job descriptions. The same goes for cybersecurity certifications. Second, start accepting the idea of remote work for these teams. The value of “butts in seats” means next to nothing right now, so get used to it. Third, understand that remote work means potentially better pay for the candidates you’re trying to hire, so look at the market data and pay appropriately. Fourth, connect with a recruiting organization, like WiCyS, if you want some extra help in creating a diverse and representative team. Fifth, also considering looking inwards, as your next cybersecurity hire might actually be a cybersecurity promotion.

And the last piece of advice, at least according to Robert Half’s Johnson? Hire a recruiter.

The post The cybersecurity skills gap is misunderstood appeared first on Malwarebytes Labs.