IT NEWS

Lock and Code S1Ep7: Sounding the trumpet on web browser privacy with Pieter Arntz

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Pieter Arntz, malware intelligence researcher at Malwarebytes, about web browser privacy—an often neglected subcategory of data privacy. Without the proper restrictions, browsers can allow web trackers to follow you around the Internet, resulting in that curious ad seeming to find you from website to website. But, according to Arntz, there are ways to fight back.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, or on whichever podcast platform you prefer.

We cover our own research on:

Plus other cybersecurity news:

Stay safe, everyone!


Going dark: encryption and law enforcement

UPDATE, 05/22/2020: With the advent of the EARN IT Act, the debate on government subversion of encryption has reignited. Given that the material conditions of the technology have not changed, and that the arguments given in favor of the bill are not novel, we’ve decided to republish the following blog outlining our stance on the subject.

Originally published July 25, 2017

We’re hearing it a lot lately: encryption is an insurmountable roadblock between law enforcement and keeping us safe. They can’t gather intelligence on terrorists because they use encryption. They can’t convict criminals because they won’t hand over encryption keys. They can’t stop bad things from happening because bad guys won’t unlock their phones. Therefore—strictly to keep us safe—the tech industry must provide them with means to weaken, circumvent, or otherwise subvert encryption, all for the public good. No “backdoors”, mind you; they simply want a way for encryption to work for good people, but not bad. This is dangerous nonsense, for a lot of reasons.

1. It’s technically incorrect


Encryption sustains its value by providing end-to-end protection of data in transit, as well as protection of what we call “data at rest.” Governments have asked for both the means to observe data in transit and the means to retrieve data at rest from devices of interest. They also insist that they have no interest in weakening encryption as a whole, but only in retrieving the information they need for an investigation. From a technical perspective, this is contradictory gibberish. An encryption algorithm either encodes sensitive data or it doesn’t—the only ways to give a third party access to plain-text data are to hand over the private keys of the communicants in question or to maintain an exploitable flaw in the algorithm that a third party could take advantage of. Despite government protestations to the contrary, this makes intuitive sense: how could you possibly build encryption that is secure against one party (hackers) but not another (government)? Algorithms cannot discern good intentions, so they must be secure against everyone.
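
To illustrate that last point, here is a minimal sketch using the third-party Python “cryptography” package (an assumed choice; any modern authenticated-encryption library behaves the same way): decryption works only with the exact key, and there is no parameter that grants a “trusted” third party selective access.

    from cryptography.fernet import Fernet, InvalidToken

    # Generate a symmetric key and encrypt a message with it.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"meet at the usual place")

    # Only the holder of the exact key can recover the plaintext.
    print(Fernet(key).decrypt(ciphertext))

    # Anyone else, hacker or government, gets an error, not a "lawful access" path.
    wrong_key = Fernet.generate_key()
    try:
        Fernet(wrong_key).decrypt(ciphertext)
    except InvalidToken:
        print("decryption failed: the algorithm cannot tell good intentions from bad")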

2. They have a myriad of other options to get what they need


Let’s assume for a moment that a government entity has a reasonable suspicion that a crime has been committed, a reasonable certainty that a certain person did it, and a reasonable suspicion that evidence leading to a conviction lies on an encrypted device. Historically, government entities have not checked all these boxes before attempting to subvert encryption, but let’s give them the benefit of the doubt for the moment. Options available to various levels of law enforcement and/or intelligence include, but are not limited to:

  • Eavesdropping on unencrypted or misconfigured comms of a suspect’s contact
  • Collecting unencrypted metadata to characterize the encrypted data
  • Detaining the suspect indefinitely until they “voluntarily” decrypt the device
  • Geolocation to place the suspect in proximity to the crime
  • Link analysis to place the suspect in social contact with confirmed criminals
  • Grabbing unencrypted data at rest from compliant third party providers
  • Eavesdropping on other channels where the suspect describes the encrypted data
  • Wrench decryption

Given the panoply of tools available to the authorities, why would they need to start an investigation by breaking the one tool available to the average user that keeps their data safe from hackers?

3. They’re not really “going dark”


In 1993, a cryptographic device called the “clipper chip” was proposed by the government to encrypt data while holding private keys in a “key escrow” controlled by law enforcement. Rather than breaking the encryption, law enforcement would have simply had a decryption key available. For everyone. An academic analysis of why this was a stunningly bad idea can be found here.

Given that this program was shuttered in response to overwhelmingly negative public opinion, have law enforcement and intelligence agencies been unable to collect data for the past 24 years? Or have they turned to the other investigatory tools available to them as appropriate?

4. If we do give them a backdoor, what would they do with it?

1984-style heavy-handed tactics are unlikely at the present time, but a government breach that results in loss of control of the backdoor? Much more likely. The breach at OPM most likely endangered the information of up to a third of adult Americans, depending on who and how you count. (We don’t know for sure because the government didn’t say how it counted.) That breach involved sensitive data on valuable government employees. Would they do any better with a backdoor that impacts technology used by pretty much everyone?

No, they wouldn’t.

Let’s take a look at how they secure their own networks, post OPM. Oh dear….

If the most powerful and richest government in the world cannot secure their own classified data, why should we trust them with ours? The former head of the FBI once called for an “adult conversation” on encryption. We agree. So here’s a modest counter-proposal:

  • Stop over-classifying cyberthreat intelligence. The security community cannot fix what it does not know. Threat intelligence over a year old is effectively worthless.
  • Send subject matter experts to participate in ISACs, not “liaisons.”
  • Collaborate in the ISACs in good faith: shared intelligence should have context and collaboration should extend beyond lists of IOCs.
  • Exchange analytic tradecraft: analysts in the government often use techniques that, while obscure, are not classified. This will improve tradecraft on both sides.
  • Meet the DHS standard for securing your own machines, classified or otherwise. No one would trust someone with a key escrow if those keys are held in a leaky colander.

We think these are reasonable requests that can help keep people safe, without breaking the encryption the world relies on daily to do business, conduct private conversations, and on occasion, express thoughts without fear of reprisal. We hope you agree.


Shining a light on “Silent Night” Zloader/Zbot

When it comes to banking Trojans, ZeuS is probably the most famous one ever released. Since its source code leaked in 2011, several new variants have proliferated online. That includes a fork called Terdot Zbot/Zloader, which we covered extensively in 2017.

But recently, we observed another bot, with a design reminiscent of ZeuS, that seems to be fairly new (a 1.0 version was compiled at the end of November 2019), and is actively developed.

We decided to investigate.

Since the specific name of this malware was unknown among researchers for a long time, it came to be referenced by the generic term Zloader/Zbot (a common name used to refer to any malware related to the ZeuS family).

Our investigation led us to find that this is a new family built upon the ZeuS heritage, being sold under the name “Silent Night,” perhaps in reference to a biochemical weapon used in the 2002 movie xXx.

The initial sample is a downloader, fetching the core malicious module and injecting it into various running processes. We can also see several legitimate components involved, just like in Terdot’s case.

In our newly published paper, which we produced in collaboration with HYAS, we take a deep dive into the functionality of this malware and its Command-and-Control (C2) panel. We provide a way to cluster the samples based on the values in the bot’s config files, while also comparing “Silent Night” with some other Zbots that have been popular in recent years, including Terdot.

Download the full report (PDF)


10 best practices for MSPs to secure their clients and themselves from ransomware

Lockdowns and social distancing may be on, but when it comes to addressing the need for IT support—whether by current or potential clients—it’s business as usual for MSPs.

And, boy, is it a struggle.

On the one hand, they keep an eye on their remote workers to ensure they’re still doing their jobs securely and safely in the comfort of their own homes. On the other hand, they must also address the ever-present threat of cybercrime. Although some threat actors were vocal about easing off on targeting hospitals and other organizations key to helping societies move forward again, sadly, not all of them followed suit.

Letting up and turning a blind eye to such groups is tantamount to leaving security out of the picture when safeguarding your organization’s future. Ransomware, in particular, has impacted the business world—MSPs included—unlike any other malware type. For business-to-business (B2B) companies, not protecting themselves or their clients against it is simply not an option.

Why abide by best cybersecurity practices

The majority of what impacts MSPs in the event of a breach is not that different from what affects other B2B entities that keep client data. MSPs are preferred targets because a single successful infiltration promises threat actors a cascade of downstream victims. Traditionally, cybercrime groups target multiple companies, usually fashioning their campaigns based on intel they have gleaned about them. For attackers, hitting one MSP is tantamount to hitting multiple companies at the same time, with significantly lower effort and exponentially higher gain.

In the event of a ransomware attack, MSPs will have to face:

  • Potential loss of data. Attacks threaten not just the data that belongs to the MSP, but also that of its clients.
  • Cessation of services. An MSP suffering from a ransomware attack can’t provide service to its many business clients, who in turn still need support for their IT needs. The lack of support leaves them vulnerable to attacks.
  • Loss of time. Time is an asset best spent providing the best service an MSP can offer. The more time spent attempting to recover from a ransomware attack, the less MSPs earn.
  • High financial cost. Mitigating and remediating a ransomware attack can be exorbitantly expensive. A lot of hardware may need replacing; third-party companies, fines, penalties, and lawsuits may need paying; and a good PR firm may need hiring to help salvage the company’s reputation post-breach.
  • A crisis of credibility. Post-breach, customers decide whether to stay with their current MSP or move to a new, more secure one. Losing clients can deal a heavy blow to any business, and it can get worse if word gets out that an MSP has done nothing to address its problems.

To serve and protect: a call for MSPs, too

To best protect their clients, MSPs must first protect themselves. Here are 10 best practices we advise them to take.

Educate your employees. Education shouldn’t stop with clients; it should start in your own backyard. Remember, what employees don’t know may get the company in trouble.

Employees should undergo cybersecurity training for two reasons: [1] to further aid clients, as more and more expect MSPs to provide this kind of service in addition to what they already offer, and [2] to gain a general knowledge of basic computing hygiene, which, when practiced, greatly helps protect the MSP from online threats such as phishing.

Keeping employees apprised of the latest threats also puts MSPs in a better position to support clients. Continuously simulating threats within their environment will keep employee knowledge sharp and adaptable when a real situation calls for it.

Invest in solutions that will protect you at your weak points. Threat actors see MSPs as low-hanging fruit due to their sometimes poor security hygiene and outdated systems. Needless to say, MSPs must protect their assets like any other business.

To take this step, MSPs must first recognize what their assets are and find out where they lack protection. This may mean hiring a third party to do an audit or conduct a penetration test. For example, if a security inspection reveals that the MSP is not using a firewall to protect its servers, it may be advised to place one at the perimeter of high-risk networks, and to place firewalls between endpoints within the network to limit host-to-host communication.

A full security suite that actively scans for malware, blocks potentially dubious URLs, quarantines malicious threats, and protects employees from emails with malicious attachments and potentially harmful media can help nip online threats that target MSPs in the bud.

Back up sensitive files and data regularly. While backing up files is expected to be a staple service from MSPs, it is unfortunately largely overlooked.

According to a 2017 report from The 2112 Group and Barracuda, only 29 percent of MSPs back up data.

Now more than ever, it is essential for MSPs to prioritize a backup strategy in their repertoire if they want to better protect their clients and address the complications posed by ransomware. We recommend an effective three-point plan to guide you further.

Patch, patch, and patch some more. Some MSPs may simply be inexperienced at protecting their own systems and thus miss out on updating their operating systems and other software.

According to a 2018 study by the Ponemon Institute, 57 percent of companies that suffered a breach in the previous year said the breach was possibly caused by poor patch management. Worse, 34 percent of these had already known of their software vulnerabilities before they were attacked. This suggests that even when a patch is available for software an MSP uses, they either don’t apply it or manage patching poorly.

As you may already know, it’s not difficult for anyone to walk through an open door, in the same way that it doesn’t take a genius to find and exploit a software flaw—there’s a tool for that, after all.

MSPs should create a sound patch management strategy and stick to it. If that is too much to handle in-house, they can scope out a good third-party provider to do the job just as well.

Restrict or limit accounts with clients. It’s tragic that many companies hit with ransomware—MSPs included—are confirmed to have been compromised through stolen credentials, which are gained primarily via phishing. The point here is twofold: MSPs must know their limits when inside a client network, and clients in turn must ensure that their MSP adheres to the password and permission management best practices the company already has in place.

There are several ways organizations can limit what MSPs are authorized to do and how deep into the network they’re allowed to go. MSP accounts must be removed from enterprise administrator (EA) and domain administrator (DA) groups, and given only the bare minimum access to the systems they service. Client organizations should also apply time-based restrictions: setting an expiration date and time for MSP accounts based on the end-of-contract date, temporarily disabling accounts until their work is needed, and restricting MSP service to business hours where required.

Isolate networks with servers housing sensitive information. MSPs should know better than to connect all their servers, including those holding extremely sensitive customer data and logs, to one network that is also public facing. Not only does this put their data at risk of being affected in the event of a breach, there is also the possibility that someone without ill intent may stumble across the data online—especially if the MSP hasn’t secured it properly.

In the event of a threat actor successfully infiltrating an MSP, network segmentation will serve as a barrier between them and the MSP’s critical servers. Done right, this will not only prevent a potential outbreak from spreading further within the network but also hinder the bad guys—and malicious insiders—from viewing or grabbing sensitive data.

Monitor network activity continuously. Knowing that they are now targets, MSPs must invest in resources that provide 24/7 network monitoring and logging. This way, anomalies and unusual behavior within the network—often an indication of a possible attack—are much easier to spot and investigate. To benchmark what normal traffic looks like within their environments, MSPs may need the aid of third-party platforms to create a baseline and alert them to activity that falls outside of it.
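
To make the baseline idea concrete, here is a minimal, hypothetical Python sketch (standard library only, and no substitute for a real monitoring platform): it learns the mean and spread of historical per-minute traffic volumes and flags intervals that deviate sharply from that norm.

    from statistics import mean, stdev

    def build_baseline(samples):
        """samples: historical bytes-per-minute counts gathered during normal operation."""
        return mean(samples), stdev(samples)

    def is_anomalous(value, baseline, threshold=3.0):
        """Flag any interval more than `threshold` standard deviations from the mean."""
        avg, spread = baseline
        if spread == 0:
            return value != avg
        return abs(value - avg) / spread > threshold

    # Illustrative numbers only: a week of quiet traffic, then a sudden spike.
    history = [1200, 1150, 1300, 1280, 1190, 1240, 1210]
    baseline = build_baseline(history)
    for observed in (1260, 9800):
        if is_anomalous(observed, baseline):
            print(f"alert: {observed} bytes/min is far outside the baseline")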

Enable multi-factor authentication (MFA). Username and password combinations are no longer enough to secure the kinds of sensitive data that MSPs are expected to protect. A layered approach to putting data under lock and key is essential, and there are multiple authentication methods MSPs can couple with those credentials.
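
As one example of such a layer, the sketch below pairs the usual password check with a time-based one-time password, using the third-party pyotp library (an assumed choice; hardware keys or push-based MFA work just as well).

    import pyotp

    # Generated once per user at enrollment and stored server-side; the same
    # secret is loaded into the user's authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    def login(password_ok: bool, submitted_code: str) -> bool:
        # The password alone is never enough; the current one-time code must also match.
        return password_ok and totp.verify(submitted_code)

    print(login(True, totp.now()))   # both factors check out
    print(login(True, "000000"))     # rejected: wrong one-time code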

Disable or remove inactive accounts. You’d think it would only be practical to remove or disable the accounts of former employees. Yet it is easy to forget or procrastinate on spring-cleaning accounts, especially when the MSP is already swamped with high-priority tasks. Perhaps they have forgotten that this task, although simple, is also critical.

Having a good account management system or process in place should keep this sorted. After all, threat actors only need a tiny opening to exploit, and an MSP’s goal—like any other business’s, cybersecurity-wise—is to make itself a hard target by making it as difficult as possible for threat actors to infiltrate it.

Avoid shortcuts. While it’s tempting to cut corners or take unsecured shortcuts, especially when a situation seems to invite them, MSPs should step back and realize that such measures may deliver the expected benefit while also increasing risk.

For example: a stressed and busy MSP employee uses a remote access utility instead of loading up a VPN to apply an update to a client’s server, but this time forgets to close the opening the utility created. The assigned task is done, but the client’s server has been left vulnerable.

Great expectations

Clients expect a lot from MSPs, relying on them for everything and looking to them as the technology and service experts who understand their security needs and how to address them. Yet those expectations often aren’t met.

MSPs, it’s time to gain a competitive advantage in your space by ensuring that your company is as secure as it can be, so you can better provide security advice, measures, and aid to the clients you serve. In the end, you can’t offer security if you don’t have it yourself.


When the coronavirus infodemic strikes

Social media sites are stepping up their efforts in the war against misinformation… specifically, the coronavirus/COVID-19 infodemic. There’s a seemingly endless stream of potentially dangerous misinformation flying around online related to the COVID-19 pandemic, and that could have fatal results.

It’s boomtown in fake-news land, riding high on a wave of people left with their tech devices 24/7. I regularly see everything posted online from “hand gel is an immunizer” (nope) and “children can’t be affected” (not true) to “UK rules mean domestic abuse survivors have to stay with their abusive spouse” (absolutely not true at all, and hugely dangerous to claim).

We even have engineers being spat on thanks to conspiracy theories linking 5G to coronavirus, with the spitting itself potentially resulting in transmission. It turns out a global pandemic is a lightning rod for pushing people towards conspiracy theories galore, to the extent that some folks have to go hunting for guides to wean their family members off internet fake-outs. There are serious consequences taking shape, via every source imaginable—no matter how baffling.

What is being done to tackle these tall tales online?

YouTube: We begin with the video monolith, which removed multiple “coronavirus is caused by 5G” videos (one of which had more than 1.5m views) after an investigation by PressGazette. Some of the other clips about Bill Gates, the media, and related subjects came from verified accounts with big subscriber numbers—often with adverts overlaid from advertisers who didn’t want their promotions associated with said content. While YouTube claims to have removed thousands of videos “since early February,” the video giant and many others are under intense pressure to take things up a notch or two.

While the top search results for “5G coronavirus” in YouTube currently bring back a variety of verified sources debunking the many conspiracy claims, filtering videos by what was posted “today” results in an assortment of freshly uploaded clips of people filming 5G towers and tagging them with “Coronavirus” in the titles. Should you see something specifically pushing a conspiracy theory, the report options are still quite generic:

  • Sexual content
  • Violent or repulsive content
  • Hateful or abusive content
  • Harmful or dangerous acts
  • Spam or misleading

While you’d likely select the last option, there’s still nothing specifically about the pandemic itself. That’s concerning, considering a recent study by BMJ Global Health found that one in four of the most popular videos about the pandemic contained misinformation: 62 million views across 19 dubious videos, out of 69 popular videos sampled on a single day.

Twitter: This is an interesting one, as Twitter is looking to flag tweets and/or accounts pushing bad information in relation to COVID-19. While this is a good move, it appears to be something done entirely at Twitter’s end; if you try to flag a tweet yourself as COVID-19 misinformation, there’s no option to do so in the reporting tab. “It’s suspicious or spam” and “It’s abusive or harmful” are the closest, but there’s nothing specific in the follow-up options tied to either of those selections.

This feels a bit like a missed opportunity, though there will be reasons why this isn’t available as an option. Perhaps they anticipate false flag and troll reporting of valid data, though one would hope their internal processes for flagging bad content would be able to counteract this possibility.

Facebook: The social media giant came under fire in April for their approach to the misinformation crisis, with large view counts, bad content not flagged as false, and up to 22 days for warnings to be issued, leading one campaign director at a crowdfunded activist group to claim they were “at the epicentre of the misinformation crisis.”

Ouch.

Facebook decided to start notifying users who’d interacted with what can reliably be called “the bad stuff” to try and push back on content rife in groups and elsewhere. Facebook continues to address the problem with multiple initiatives including tackling bad ads, linking people to credible information, and combating rogue data across multiple apps. The sheer size of their user base suggests this fight is far from over, though.

TikTok: Thinking that conspiracy theories and misinformation wouldn’t pop up on viral music/clip sensation TikTok would be a mistake. In some cases it has flourished on the platform, away from the eyes of serious researchers still focused on the bigger social media platforms such as Twitter and Facebook.

While TikTok is somewhat unique in having COVID-19 misinformation as a specific reporting category, it hasn’t exactly been plain sailing. Popular hashtag categories seemingly have more than their fair share of bad content, tying bad data and poorly sourced claims to cool songs and snappy soundbites.

Internet Archive: Even the Internet Archive isn’t safe from coronavirus shenanigans, as people use saved pages to continue spreading bad links online. Even if a bad site is taken down, flagged as harmful, or removed from search engines, scooping it up and placing it on Archive.org forevermore is a way for the people behind such sites to keep pushing those links. For its part, the Internet Archive is fighting back with clear warning messages on some of the discredited content.

Beware a second Infodemic wave

Although some major online platforms were slow to respond to the bogus information wave, most of them now seem to at least have some sort of game plan in place. It’s debatable how much of it is working, but something is likely better than nothing and tactics continue to evolve in response to those hawking digital shenanigans.

However, it seems at least some warnings of the present so-called infodemic went unheeded for many years, and now we’re reaping the whirlwind. From governments and healthcare organisations to the general public and online content-sharing platforms, we’ve all been caught on the back foot to varying degrees. The genie is out of the bottle and won’t be going back in anytime soon, so it’s up to all of us to think about how we could do better next time—because there will absolutely be a next time.


A week in security (May 11 – May 17)

Last week on Malwarebytes Labs, we explained why RevenueWire has to pay $6.7 million to settle FTC charges, how CVSS works: characterizing and scoring vulnerabilities, and we talked about how and why hackers hit a major law firm with Sodinokibi ransomware.

We also launched another episode of our podcast Lock and Code, this time speaking with Chris Boyd, lead malware intelligence analyst at Malwarebytes, about facial recognition technology—its early history, its proven failures at accuracy, and whether improving the technology would actually be “good” for society.

Other cybersecurity news

  • A new attack method was disclosed that targets devices with a Thunderbolt port, allowing an evil maid attack. (Source: SecurityWeek)
  • Almost four million users of MobiFriends, a popular Android dating app, have had their personal and log-in data stolen by hackers. (Source: IT Security Guru)
  • Cognizant estimates that the April ransomware attack that affected its internal network will cost the IT services firm between $50 and $70 million. (Source: GovInfoSecurity)
  • The database for the defunct hacker forum WeLeakData is being sold on the dark web and exposes the private conversations of hackers who used the site. (Source: BleepingComputer)
  • The U.S. government released information about three new malware strains used by state-sponsored North Korean hackers. (Source: The Hacker News)
  • Details were published about PrintDemon, a vulnerability in the Windows printing service that impacts all Windows versions going back to Windows NT 4. (Source: ZDNet)
  • US intel agencies expressed the need for a concerted campaign to patch the top 10 most exploited vulnerabilities. (Source: CBR online)
  • Magellan Health, the Fortune 500 insurance company, has reported a ransomware attack and a data breach. (Source: ThreatPost)
  • Researchers found a new cyber-espionage framework called Ramsay, developed to collect and exfiltrate sensitive files from air-gapped networks. (Source: DarkReading)
  • The EFF called attention to the many ways in which the EARN IT Act would be a disaster for Internet users’ free speech and security. (Source: Electronic Frontier Foundation)

Stay safe, everyone!


Sodinokibi drops greatest hits collection, and crime is the secret ingredient

When a group of celebrities asks to speak with their lawyer, they usually don’t have to call in a bunch of other people to go speak with that lawyer. In this case, however, it may well come to that a little down the line. A huge array of musicians, including Bruce Springsteen, Lady Gaga, Madonna, Run DMC, and many more, have had documents galore pilfered by the Sodinokibi gang.

Around 756GB of files, including touring details, music rights, and correspondence, were stolen – some of which was sitting pretty on a site accessible through Tor as proof of the sticky-fingered shenanigans. The law firm affected is Grubman Shire Meiselas & Sacks, a major player handling huge contracts for global megastars on a daily basis. Although the firm handles TV stars, actors, sports personalities, and more, so far the only data referenced online appears to relate to singers and songwriters.

Why?

The assumption is that the data is being displayed as a preview of things to come: pay a ransom, or the data gets it (and by “gets it,” we mean “everything is published online in disastrous fashion”). The Sodinokibi gang are not to be trifled with, having already brought the walls crashing down upon Travelex not so long ago.

Hot targets…

Legal firms are becoming a hot target for malware-focused criminals as those criminals realise the value of the data the firms are sitting on. Break in, exfiltrate the files, then send a few ransom notes to show them you A) have the files and B) mean business. If they refuse to pay up, drop the files and walk away from the inevitable carnage of reputational damage and compromised clients.

Who or what is Sodinokibi?

Put simply, a devastatingly successful criminal group with a penchant for ransomware, data theft, and extortion. Sporting a popular ransomware-as-a-service business model, they spiked hard in May 2019 with a ramp-up in attacks on businesses and (to some degree) consumers. Their ransomware went a long way towards filling the void left by the GandCrab group’s “retirement,” and multiple smaller spikes took place until an eventual decline for both consumers and businesses towards the end of July.

There were six versions of Sodinokibi released into the wild between April and July alone, helping to keep the security industry and targets on their toes over a very condensed period. The gang exploited vulnerabilities, ran phishing campaigns using malicious links, leaned on malvertising, and even compromised MSPs to help launch its ransomware waves. You should absolutely lock down your MSP, by the way.

Technical details on the attack?

This is a breaking story, and for various reasons the affected parties aren’t going to spill the beans just yet, especially with investigations ongoing. Having said that, there’s every probability that this was a targeted attack and that ransomware was used to get the job done. How is Sodinokibi ransomware faring at the moment?

Sodinokibi ransomware statistics

This likely isn’t part of any huge spam wave. Our monthly data for consumer and business shows the last big spike in Ransom.Sodinokibi back in December:

Chart: overall Ransom.Sodinokibi detections by month, 2019 and 2020

Business detections hovered between 200 and 280 from September to November 2019, before exploding over December to just under 7,000. They quickly dropped back down to 260 in February 2020, with a slight spike of 1,447 in April.

Consumer detections, meanwhile, followed a slightly more convoluted path, with a peak of just over 600 in November 2019 and numbers ranging from 293 in July 2019 to 228 in March 2020, with generally low numbers elsewhere (76 in August 2019, 70 in December 2019, and 109 in April 2020).

In conclusion, then, ensure your anti-ransomware armory is fully stocked and ready to go, whether you’re sitting on lots of incredibly valuable entertainer documents or indeed anything at all. Whether hit by random attacks or targeted mayhem, the end result is still the same: lots of headaches, and quite a few calls to legal.

Or, in this case, many calls to legal.


How CVSS works: characterizing and scoring vulnerabilities

The Common Vulnerability Scoring System (CVSS) provides software developers, testers, and security and IT professionals with a standardized process for assessing vulnerabilities. You can use the CVSS to assess the threat level of each vulnerability, and then prioritize mitigation accordingly.

This article explains how the CVSS works, including a review of its components, and describes the importance of using a standardized process for assessing vulnerabilities.

What is a software vulnerability?

A software vulnerability is any weakness in the codebase that can be exploited. Vulnerabilities can result from a variety of coding mistakes, including faulty logic, inadequate validation mechanisms, or lack of protection against buffer overflows. Unprotected APIs and issues contributed by third-party libraries are also common sources of vulnerabilities.
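
As a hypothetical illustration of an inadequate validation mechanism, the Python snippet below builds a file path directly from user input, so a request for “../../etc/passwd” walks out of the intended directory (a classic path traversal flaw); the safer version resolves and checks the path first.

    from pathlib import Path

    BASE_DIR = Path("/srv/app/public")

    def read_file_vulnerable(filename: str) -> bytes:
        # No validation: "../../etc/passwd" escapes BASE_DIR (path traversal).
        return (BASE_DIR / filename).read_bytes()

    def read_file_safe(filename: str) -> bytes:
        # Resolve the requested path and refuse anything outside the intended directory.
        target = (BASE_DIR / filename).resolve()
        if not target.is_relative_to(BASE_DIR.resolve()):  # Python 3.9+
            raise ValueError("path escapes the public directory")
        return target.read_bytes()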

Regardless of the source of the vulnerability, all present some risk to users and organizations. Until vulnerabilities are discovered and patched, or fixed in a software update, attackers can exploit them to damage systems, cause outages, steal data, or deploy and spread malware.

How vulnerabilities are reported

The way a vulnerability is reported depends on the type of software it is discovered in and the type of vulnerability it appears to be. In addition, the perceived importance of the vulnerability to its finder is a factor in how it’s reported.

Typically, vulnerabilities are found and reported by security researchers, penetration testers, and users themselves. Security researchers and penetration testers may work full-time for organizations or they may function as freelancers working under a bug bounty program.

When vulnerabilities are minor or can be easily fixed by the user without vendor or community help, issues are more likely to go unreported. Likewise, if a severe issue is discovered by a black hat researcher, or cybercriminal, it may not be reported. Generally, however, vulnerabilities are reported to organizations or developers when found.

If a vulnerability is found in proprietary software, it may be reported directly to the vendor or to a third-party oversight organization, such as the non-profit security organization, MITRE. If one is found in open-source software, it may be reported to the community as a whole, to the project managers, or to an oversight group.

When vulnerabilities are reported to a group like MITRE, the organization assigns the issue an ID number and notifies the vendor or project manager. The responsible party then has 30 to 90 days to develop a fix or patch the issue before the information is made public. This reduces the chance that attackers can exploit the vulnerability before a solution is available.

What is CVSS?

The Common Vulnerability Scoring System (CVSS) is a set of free, open standards maintained by the Forum of Incident Response and Security Teams (FIRST), a non-profit security organization. The standards use a scale of 0.0 to 10.0, with 10.0 representing the highest severity. The most recent version is CVSS 3.1, released in June 2019.

These standards are used to help security researchers, software users, and vulnerability tracking organizations measure and report on the severity of vulnerabilities. CVSS can also help security teams and developers prioritize threats and allocate resources effectively.

How CVSS scoring works

CVSS scoring is based on a combination of several subsets of scores. The only requirement for categorizing a vulnerability with a CVSS score is the completion of the base score components. However, it is recommended that reporters also include temporal scores and environmental metrics for a more accurate evaluation.

The CVSS base score is assessed using an exploitability subscore, an impact subscore, and a scope subscore. These three contain metrics for assessing, respectively, how an attack can be carried out, the importance of the impacted data and systems, and the impact of the attack on seemingly unaffected systems.

Base score

The base score is meant to represent the inherent qualities of a vulnerability. These qualities should not change over time, nor should they depend on individual environments. To calculate the base score, reporters must calculate the composite of three subscores.

Exploitability subscore

The exploitability subscore measures the qualities of a vulnerable component. These qualities help researchers define how easily a vulnerability can be exploited by attackers. This subscore is composed of the following metrics:

Metric, what it measures, and its scale:

  • Attack vector (AV): how easy it is for attackers to access a vulnerability. Scale, from low to high: Physical (presence), Local (presence), Adjacent (connected networks), Network (remote).
  • Attack complexity (AC): what prerequisites are necessary for exploitation. Scale: Low, High.
  • Privileges required (PR): the level of privileges needed to exploit the vulnerability. Scale: None, Low, High.
  • User interaction (UI): whether exploitation requires action from a user other than the attacker. Binary: either None or Required.

Impact subscore

The impact subscore measures the effects that successful exploitation has on the vulnerable component. It defines how a component is affected based on the change from pre- to post-exploit. This subscore is composed of the following metrics:

Metric, what it measures, and its scale:

  • Confidentiality (C): loss of data confidentiality in the component or wider systems. Scale: None, Low, High.
  • Integrity (I): loss of data integrity throughout the component system. Scale: None, Low, High.
  • Availability (A): loss of availability of the component or attached systems. Scale: None, Low, High.

Scope subscore

The scope subscore measures the impact a vulnerability may have on components other than the one affected by the vulnerability. It tries to account for the overall system damage an attacker can cause by exploiting the reported vulnerability. This is a binary score, with scope being either changed or unchanged.
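
To make the arithmetic behind these three subscores concrete, here is a minimal Python sketch of the published CVSS v3.1 base score equations, with metric weights taken from the FIRST specification; for real assessments, use FIRST’s official calculator rather than a hand-rolled script.

    import math

    # Metric weights from the CVSS v3.1 specification.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
    AC = {"L": 0.77, "H": 0.44}                          # attack complexity
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required, scope unchanged
    PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}       # privileges required, scope changed
    UI = {"N": 0.85, "R": 0.62}                          # user interaction
    CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # confidentiality/integrity/availability

    def roundup(x):
        # CVSS-style rounding: up to one decimal place.
        return math.ceil(x * 10) / 10

    def base_score(av, ac, pr, ui, scope, c, i, a):
        changed = scope == "C"
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15 if changed else 6.42 * iss
        exploitability = 8.22 * AV[av] * AC[ac] * (PR_CHANGED if changed else PR)[pr] * UI[ui]
        if impact <= 0:
            return 0.0
        total = 1.08 * (impact + exploitability) if changed else impact + exploitability
        return roundup(min(total, 10))

    # Example: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
    print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))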

Temporal score

The temporal score measures aspects of the vulnerability according to its current status as a known vulnerability. This score includes the following metrics:

Metric, what it measures, and its scale (from low to high):

  • Exploit code maturity (E): the availability of tools or code that can be used to exploit the vulnerability. Scale: Unproven, Proof of concept, Functional, High, Not defined.
  • Remediation level (RL): the level of remediation currently available to users. Scale: Official fix, Temporary fix, Workaround, Unavailable, Not defined.
  • Report confidence (RC): the degree of confidence in the accuracy of the vulnerability report. Scale: Unknown, Reasonable, Confirmed, Not defined.

Environmental metrics

Environmental metrics measure the severity of the vulnerability adjusted for its impact on individual systems. These metrics are customizations of the metrics used to calculate the base score. Environmental metrics are most useful when applied internally by security teams calculating severity in relation to their own systems.

The importance of standardization

CVSS provides comprehensive guidelines for assessing vulnerabilities. This scoring system is used by many and has a wide range of applications. However, perhaps the most important aspect of the CVSS is that it provides a unified standard for all relevant parties. Standardization is crucial when responding to risks and prioritizing mitigation.

CVSS scores are more than just a means of standardization. These scores have practical applications and can have a significant impact in helping security teams and product developers prioritize their efforts. 

Within an organization, security teams can use CVSS scores to efficiently allocate limited resources. These resources may include monitoring capabilities, time devoted to patching, or even threat hunting to determine if a vulnerability has already been exploited. This is particularly valuable for small teams who may not have the resources necessary to address every vulnerability.

CVSS scores can also be useful for security researchers. These scores can help highlight components that are especially vulnerable or tactics and tools that are particularly effective. Researchers can then apply this knowledge to developing new security practices and tools to help detect and eliminate threats from the start. 

Finally, CVSS scores can be informative for developers and testers in preventing vulnerabilities in the first place. Careful analysis of high-ranking vulnerabilities can help software development teams prioritize testing. It can also highlight areas where code security best practices can be improved. Rather than waiting until their own product is discovered to be vulnerable, teams can learn from others’ mistakes.


RevenueWire to pay $6.7 million to settle FTC charges

What can you do as a scammer when no legitimate payment provider wants to process your payments anymore? Or, what if you are growing sick and tired of these same payment providers reimbursing disgruntled customers who claim that your products didn’t fix computers, like—you know—you said they would?

Simple. You rely on some novel help. That is, until you get caught.

Let us tell you a story of intrigue and wrongdoing that resulted in a multi-million-dollar settlement issued by the US Federal Trade Commission.

How do tech support scammers work?

Some of the worst internet criminals are those who prey on the weakest groups in our society. Communities that are uncomfortable and less experienced with computers are already at a disadvantage, and tech support scammers make shameless use of these circumstances, demanding payments for bogus solutions to entirely non-existent tech problems. First comes the hook: There is something wrong with your computer. Then comes the sell: Only we can fix it.

But that money stream can dry up because of a legitimate link in the chain—payment processors. Scammers may at first receive services from legitimate payment processors, but soon those providers notice a high volume of complaints and wise up to the actions of their customer, consequently kicking them out and warning their colleagues to stay away from said customer.

It’s a real problem for many tech support scammers, and one that has pushed some into accepting even gift cards as payment, just to get around being refused by practically every payment provider.

But for a small group of call centers and software makers recently investigated by the FTC, there was a better option than gift cards.

Enter RevenueWire Inc., a Canadian company doing business as “SafeCart.”

Solution: start your own payment provider

The setup was clever.

First, RevenueWire entered into contracts with banks and payment processors in the US in order to obtain and open merchant accounts, thus allowing it to accept debit and credit payments. RevenueWire then entered into contracts with tech support call centers with a less-than-stellar track record. Further, according to the FTC, RevenueWire worked with separate, third-party software companies, including PC Cleaner Inc. and Boost Software Inc., which would direct consumers of their own software to any tech support call centers that were now working with RevenueWire.

In essence, RevenueWire engaged in a miniature, controlled economy, gaining vast insight into an entire ecosystem that included making software, selling it, providing tech support services, and funneling payments made along the way.

This made for a complete alignment of businesses. According to the FTC, the stakeholders in this organization closely cooperated with and participated in companies that acted as telemarketers, software builders, website designers, and call center operators. Plus, they now had their own payment provider.

What they did

The organization’s objective, according to the FTC, was to swindle customers out of their money while pretending to be tech support operators.

When you are a tech support scammer, there are three main angles you can work to reach your “clients.” Calling the clients and trying to sell them your services is one method, and they didn’t shy away from it. But as you can imagine, the chances of deceiving a client are much higher when you get the client to actually call you. To achieve this, tech support scammers can:

  • Pretend to represent a legitimate company; Microsoft and Apple are probably the most well-known examples, but Malwarebytes has been the victim of impersonation a few times as well.
  • Use advertising to get prospects to visit websites showing the number you want them to call, even using browser lockers and fake online scanners to convince visitors their computer has serious issues that only you know how to solve—at a steep price.
  • Release fake anti-malware software that prominently displays your number as a help resource, again convincing users that they need to buy the software to fix those problems, and probably some extra services to boot.

In the case at hand, the tech support call centers working with RevenueWire were run by Vast Tech Support, LLC (“Vast”) and Inbound Call Experts, LLC (“ICE”). The FTC filed legal actions against both these companies in the past.

What charges did RevenueWire receive?

RevenueWire was charged with laundering credit card payments for, and assisting and facilitating, two tech support scams previously sued by the FTC. You guessed it: those two tech support scammers were Vast and ICE.

As Andrew Smith, director of the FTC’s Bureau of Consumer Protection, said in a news release when the settlement was announced:

“Finding ways to get paid – without getting caught – is essential for scammers who steal money from consumers. And that’s exactly what RevenueWire did for tech support scammers when it laundered their transactions through the credit card system.”

The FTC said that RevenueWire violated the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” RevenueWire also, according to the FTC, violated the Telemarketing Sales Rule, “provided substantial assistance and support to one or more sellers or telemarketers, whom they knew, or consciously avoided knowing, were violating” sections of the Telemarketing Sales Rule, and they “submitted charges through RevenueWire’s merchant account for companies that made false statements to consumers.”

What about the people that worked for RevenueWire?

Only a few people who worked at RevenueWire knew about the complete business model. Most employees never learned—and may have been shocked to learn—about the company’s true nature when it was named in the FTC settlement. Some may have figured out what was going on while they were working for RevenueWire, but almost no one was told up front. A fraud analyst working for RevenueWire repeatedly warned executives about dealings with “crooks,” according to evidence published in the FTC’s report. This information was shared directly with Roberta Leach, RevenueWire’s CEO and a named defendant in the FTC case.

The good news

In a recent press release, the FTC announced that RevenueWire and its CEO, Roberta Leach, will pay $6.75 million to settle charges that they laundered credit card payments for, and assisted and facilitated, two tech support scams previously shut down by the FTC.

The FTC stipulated:

“Consumers throughout the country have been injured by tech support scams in which fraudsters deceptively market services to ‘fix’ purported problems on consumers’ computers. The FTC and state law enforcers have brought cases against the software sellers and call centers involved in these scams, including call centers operated by Vast Tech Support, LLC (‘Vast’) and Inbound Call Experts, LLC (‘ICE’). FTC v. Boost Software, Inc., No. 14-81397 (S.D. Fla. filed Nov. 10, 2014); FTC v. Inbound Call Experts, LLC, No. 14-81395 (S.D. Fla. filed Nov. 10, 2014). RevenueWire, Inc. and its Chief Executive Officer (collectively, ‘Defendants’) have played a key role in many of these scams, including the Vast and ICE scams. Using a business model named ‘Call Stream,’ the Defendants have provided lead generation, business development, payment processing, and money distribution services to numerous tech support fraudsters, leading to hundreds of millions of dollars of consumer injury.”

We are pleased to learn that the FTC successfully went after this enabler and payment provider, especially since in this case they knew what they were doing and the FTC could build on its earlier cases against the scammers themselves.

Malwarebytes’ fight against tech support scammers

Malwarebytes has been involved in the fight against tech support scammers since the beginning of our company, even though it is not something that turns a profit for us. We feel that tech support scammers give the industry a bad name by proxy and, as pointed out earlier, some of them even pretend to represent our company. We also care about everyone’s safety, not just the safety of our paying customers.

If you want to be sure to get help from our actual support team, don’t contact just any number you find while searching, but reach out to us through our Support portal.

Stay safe, everyone, and remain vigilant!


Lock and Code S1Ep6: Recognizing facial recognition’s flaws with Chris Boyd

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Chris Boyd, lead malware intelligence analyst at Malwarebytes, about facial recognition technology—its early history, its proven failures at accuracy, and whether improving the technology would actually be “good” for society.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, or on whichever podcast platform you prefer.

We cover our own research on:

Plus other cybersecurity news:

Stay safe, everyone!
