IT NEWS

Beware password-spraying fancy bears

The NSA, FBI, and CISA, in cooperation with the UK’s National Cyber Security Centre (NCSC), have issued a report that describes in detail why, and how, they think that a Russian military unit is behind large-scale brute-force attacks on the cloud-IT resources of government and private sector companies around the world. The report states:

Since at least mid-2019 through early 2021, Russian General Staff Main Intelligence Directorate (GRU) 85th Main Special Service Center (GTsSS), military unit 26165, used a Kubernetes® cluster to conduct widespread, distributed, and anonymized brute force access attempts against hundreds of government and private sector targets worldwide.

The agencies are pointing their collective fingers at military unit 26165 of the Russian General Staff Main Intelligence Directorate (commonly known as the GRU), which is often referred to as Fancy Bear, APT28, Strontium, and various other names.

The targets

Most of the named activity is aimed at organizations using Microsoft Office 365 cloud services, but the attacks are certainly not limited to those. They also target other service providers and on-premises email servers using a variety of different protocols. I use the present tense on purpose, as these attacks are almost certainly still ongoing.

The campaign is said to have targeted hundreds of US and foreign organizations, including US government and defense entities. While the sum of the targeting is global in nature, it has predominantly focused on entities in the US and Europe.

The method

The report includes a graphic that explains the most prevalent attack method.

Brute Forski

Some attacks used known vulnerabilities that allowed remote code execution (RCE), while others started by trying to identify valid credentials through password spraying. Password spraying involves trying a short list of commonly used passwords against a large number of accounts, a brute-force tactic that’s useful if you don’t really care which accounts you take over.
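To make the pattern concrete, here is a purely illustrative Python sketch (run against a dummy function, not any real service); the password list, account list, and try_login helper are all hypothetical:

# Classic brute force hammers one account with many passwords and quickly trips a
# lockout. Password spraying inverts the loops: a few common passwords are tried
# against many accounts, so each account only sees a handful of attempts.
COMMON_PASSWORDS = ["Summer2021!", "Password1", "Welcome123"]   # hypothetical list
ACCOUNTS = [f"user{i}@example.com" for i in range(1000)]        # hypothetical targets

def try_login(account: str, password: str) -> bool:
    """Stand-in for an authentication attempt; always fails in this sketch."""
    return False

for password in COMMON_PASSWORDS:           # outer loop: passwords
    for account in ACCOUNTS:                # inner loop: accounts
        if try_login(account, password):
            print(f"valid credentials found: {account}")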

The attacks were launched from a Kubernetes cluster. A Kubernetes cluster is a “container orchestration” system for running a large number of containerized applications. The applications in the cluster used TOR and commercial VPN services to avoid revealing their IP addresses.

Once initial access had been secured, attackers used a variety of well-known tactics, techniques, and procedures (TTPs) to escalate privileges, establish persistence, move laterally, and collect additional information.

If any of the cloud service credentials the attackers discovered were sufficiently privileged, they were used to exfiltrate data. Where this was not an option, for example when mail was not handled in the cloud, the threat actor used a modified and obfuscated version of the reGeorg web shell to maintain persistent access on a target’s Outlook Web Access server. The reGeorg web shell creates a SOCKS proxy for intranet penetration, and as such can be used as a means to gain persistence.

Mitigation and detection

The report contains a number of mitigation methods but makes a special plea for multi-factor authentication (MFA).

Network managers should adopt and expand usage of multi-factor authentication to help counter the effectiveness of this capability. Additional mitigations to ensure strong access controls include time-out and lock-out features…

MFA stops password spraying and other forms of brute-force attacks in their tracks. It doesn’t matter how long your password list is, or how many password attempts you make—if you don’t have a user’s second authentication factor, such as a numeric code or fingerprint, you cannot get access. Time-out and lock-out features are useful for protecting accounts with weak passwords by reducing the number of guesses an attacker can make.

The report also mentions the mandatory use of strong passwords as a useful mitigation. It is, but if that were easy, the other mitigations wouldn’t be necessary. Aim for strong passwords, but plan for bad ones.

Other mitigation methods mentioned in the report include:

  • Implementing a Zero-Trust security model.
  • Captcha, when human interaction is required.
  • Analytics for detecting anomalous authentication activity (see the sketch after this list).
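On that last item, here is a minimal sketch of what such analytics can look like, assuming you can export failed-authentication events as (timestamp, source IP, username) tuples; the window size and threshold are arbitrary and should be tuned to your own telemetry:

# Flag source IPs whose failed logins hit an unusually large number of *distinct*
# accounts within a time window -- the signature of password spraying, which
# per-account lockout counters tend to miss.
from collections import defaultdict
from datetime import datetime, timedelta

def spraying_sources(events, window=timedelta(hours=1), min_accounts=20):
    """events: iterable of (timestamp: datetime, source_ip: str, username: str) failures."""
    step = window.total_seconds()
    buckets = defaultdict(set)  # (source_ip, time bucket) -> distinct usernames
    for ts, ip, user in events:
        buckets[(ip, int(ts.timestamp() // step))].add(user)
    return {ip for (ip, _), users in buckets.items() if len(users) >= min_accounts}

# Example: one IP failing against 25 different accounts in the same hour gets flagged.
now = datetime.now()
print(spraying_sources([(now, "203.0.113.7", f"user{i}") for i in range(25)]))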

For detection purposes the report lists a few incomplete or truncated versions of legitimate User-Agent strings that the attackers used:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:63.0) Gecko/20100101 Firefox/63.0
Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0.1 Safari/605.1.15
Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7162; Pro
Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7166; Pro)
Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7143; Pro)
Microsoft Office/15.0 (Windows NT 6.1; Microsoft Outlook 15.0.4605; Pro)
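These strings are weak indicators on their own (the complete ones also occur in plenty of legitimate traffic), but searching access logs for exact matches can surface leads worth investigating. Here is a minimal sketch, assuming combined-format web server logs where the User-Agent is the last quoted field; the log path is a placeholder:

import re

# Suspect User-Agent strings from the advisory (add the rest of the list above).
SUSPECT_UAS = {
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.",
    "Microsoft Office/14.0 (Windows NT 6.1; Microsoft Outlook 14.0.7162; Pro",
}

UA_FIELD = re.compile(r'"([^"]*)"\s*$')  # last quoted field of a combined-format log line

with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_FIELD.search(line)
        if match and match.group(1) in SUSPECT_UAS:
            print(line.rstrip())  # candidate brute-force traffic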

The report also provides a Yara rule that matches the reGeorg variant web shell used by the actors.

rule reGeorg_Variant_Webshell
{
    strings:
        $pageLanguage = "<%@ Page Language=\"C#\""
        $obfuscationFunction = "StrTr"
        $target = "target_str"
        $IPcomms = "System.Net.IPEndPoint"
        $addHeader = "Response.AddHeader"
        $socket = "Socket"
    condition:
        5 of them
}

The report warns that the rule does not uniquely identify GRU activity, since the web shell is publicly available.
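If you want to hunt with that rule yourself, here is a minimal sketch using the yara-python package (an assumption about your tooling); the rule file name and web root are placeholders, and a match points to the publicly available web shell rather than proving GRU involvement:

import os
import yara  # pip install yara-python

# Compile the rule shown above (saved to a local .yar file) and walk a web root.
rules = yara.compile(filepath="reGeorg_variant.yar")

for root, _dirs, files in os.walk(r"C:\inetpub\wwwroot"):  # example web root
    for name in files:
        path = os.path.join(root, name)
        try:
            if rules.match(filepath=path):
                print(f"possible reGeorg-style web shell: {path}")
        except yara.Error:
            pass  # skip unreadable or locked files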


PrintNightmare 0-day can be used to take over Windows domain controllers

In a rush to be the first to publish a proof-of-concept (PoC), researchers have published a write-up and a demo exploit to demonstrate a vulnerability that has been dubbed PrintNightmare, only to find out that they had accidentally alerted the world to a new 0-day vulnerability.

What happened?

In June, Microsoft patched a vulnerability in the Windows Print Spooler that was listed as CVE-2021-1675. At first it was classified as an elevation of privilege (EoP) vulnerability, which means that someone with limited access to a system could raise their privilege level, giving them more power over the affected system. This type of vulnerability is serious, especially when it is found in a widely used service like the Windows Print Spooler. A few weeks after the patch, Microsoft reclassified it as a remote code execution (RCE) vulnerability. RCE vulnerabilities allow a malicious actor to execute their code on a different machine on the same network.

As usual, the general advice was to install the patch from Microsoft and be done with it. Fast forward another week, and a researcher announced he’d found a way to exploit the vulnerability to achieve both local privilege escalation and remote code execution. This actually happens a lot when researchers reverse engineer a patch.

Only in this case it had an unexpected consequence. A different team of researchers had also found an RCE vulnerability in the Print Spooler service. They called theirs PrintNightmare and believed it was the same as CVE-2021-1675. They were working on a presentation to be held at the Black Hat security conference. But now they feared that the other team had stumbled over the same vulnerability, so they published their work, believing it was covered by the patch already released by Microsoft.

But the patch for CVE-2021-1675 didn’t seem to work against the PrintNightmare vulnerability. It appeared that PrintNightmare and CVE-2021-1675 were in fact two very similar but different vulnerabilities in the Print Spooler.

And with that, it looked as if the PrintNightmare team had, unwittingly, disclosed a new 0-day vulnerability irresponsibly. (Disclosure of vulnerabilities is considered responsible if a vendor is given enough time to issue a patch.)

Since then, some security researchers have argued that CVE-2021-1675 and PrintNightmare are the same, and others have reported that the CVE-2021-1675 patch works on some systems.

Whether they are the same or not, what is not in doubt is that there are live Windows systems where PrintNightmare cannot be patched. And unfortunately, it seems that the systems where the patch doesn’t work are Windows Domain Controllers, which is very much the worst case scenario.

PrintNightmare

The Print Spooler service is embedded in the Windows operating system and manages the printing process. It is running by default on most Windows machines, including Active Directory servers.

It handles preliminary functions of finding and loading the print driver, creating print jobs, and then ultimately printing. This service has been around “forever” and it has been a fruitful hunting ground for vulnerabilities, with many flaws being found and fixed over the years. Remember Stuxnet? Stuxnet also exploited a vulnerability in the Print Spooler service as part of the set of vulnerabilities the worm used to spread.

PrintNightmare can be triggered by an unprivileged user attempting to load a malicious driver remotely. Using the vulnerability, researchers have been able to gain SYSTEM privileges and achieve remote code execution with the highest privileges on a fully patched system.

To exploit the flaw, attackers would first have to gain access to a network with a vulnerable machine. Although this provides some measure of protection, it is worth noting that there are underground markets where criminals can purchase this kind of access for a few dollars.

If they can secure any kind of access, they can potentially use PrintNightmare to turn a normal user into an all-powerful Domain Admin. As a Domain Admin they could then act almost with impunity, spreading ransomware, deleting backups and even disabling security software.

Mitigation

Considering the large number of machines that may be vulnerable to PrintNightmare, and that several methods to exploit the vulnerability have been published, it seems likely there will soon be malicious use-cases for this vulnerability.

There are a few things you can do until the vulnerability is patched. Microsoft will probably try to patch the vulnerability before the next Patch Tuesday (July 13), but until then you can:

  • Disable the Print Spooler service on machines that do not need it. Please note that stopping the service without disabling it may not be enough (see the sketch after this list).
  • For systems that do need the Print Spooler service running, make sure they are not exposed to the internet.
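As a sketch of the first item: the service can be stopped and disabled from an elevated prompt with built-in Windows tools, wrapped in Python here only for consistency with the other examples in this issue, and assuming the machine genuinely does not need to print:

import subprocess

def disable_print_spooler() -> None:
    # Stop the running service (ignore the error if it is already stopped).
    subprocess.run(["net", "stop", "Spooler"], check=False)
    # Set the startup type to "disabled" so it stays off after a reboot.
    # Note: sc.exe requires the space after "start=".
    subprocess.run(["sc", "config", "Spooler", "start=", "disabled"], check=True)

if __name__ == "__main__":
    disable_print_spooler()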

I realize the above will not be easy or even feasible in every case. For those machines that need the Print Spooler service and also need to be accessible from outside the LAN, limit and monitor access and permissions very carefully. And at all costs, avoid running the Print Spooler service on any domain controller.

For further measures, it is good to know that the exploit works by dropping a DLL in a subdirectory under C:\Windows\System32\spool\drivers, so system administrators can create a “Deny to modify” rule for that directory and its subdirectories so that even the SYSTEM account cannot place a new DLL in them.
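A minimal sketch of that workaround follows, using the built-in icacls tool to add an inheritable deny-modify entry for SYSTEM on the drivers directory. The exact ACL string is an assumption on my part, the change can break printing and driver updates, and it should be reverted once a proper patch is installed:

import subprocess

SPOOL_DRIVERS = r"C:\Windows\System32\spool\drivers"

def deny_system_modify(path: str) -> None:
    """Add a Deny-Modify ACE for SYSTEM, inherited by subdirectories and files."""
    subprocess.run(["icacls", path, "/deny", "SYSTEM:(OI)(CI)(M)"], check=True)

if __name__ == "__main__":
    deny_system_modify(SPOOL_DRIVERS)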

This remains a developing situation and we will update this article if more information becomes available.


SMS authentication code includes ad: a very bad idea

SMS authentication codes are back in the news, and the word I’d use to summarise their reappearance is “embattled.”

I can still remember a time when two-factor authentication (2FA), authentication grids, regional lockouts, YubiKeys, and offline authentication apps simply did not exist. And if they did, the people sitting next to you, or on the bus, or in your office, typically did not use them. If you were phished, that was it. Your account was gone unless a recovery process that wasn’t too convoluted was available.

Then, two-factor authentication slowly became a thing. The uptake still isn’t great, but it’s an improvement. If you’re phished now, it (probably) won’t hurt you because the attacker also needs your authentication code. If they don’t have the code, they’ll sit and stare forlornly at your password then give up.

You’re going to ask me about the “probably” bit now, aren’t you.

Which flavor of two-factor do you prefer?

There are caveats to two-factor, and it largely depends on which kind of two-factor we’re talking about.

Most people I know, and a majority of people I encounter online, use one of the following types:

  • SMS codes. These are sent to your mobile, via your carrier. The code is punched into the website after you enter your password, and the two together let you log in. Codes typically expire after a short period of time to ensure lots of valuable codes aren’t left lying around all over the place.
  • Authenticator apps. These are apps which generate codes at short intervals, and they work offline. Do you find yourself in a location where you have no carrier signal? It doesn’t matter with an authenticator app. I’ve known people who changed complex passwords to very basic ones they could remember when going overseas so they could still use their accounts. That wasn’t great, but it was a common workaround in the days before apps became widely available.

Of the two, apps are recommended as the more secure approach.
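For the curious, the reason authenticator apps work offline is that both the app and the server derive each code from a shared secret and the current time (TOTP, RFC 6238). A minimal sketch, using an example secret that is obviously not one you should ever reuse:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second steps since the epoch
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints a 6-digit code with no network connection needed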

What’s the problem with SMS?

You’ve had your account password stolen. You’re still safe. They can’t get in without the SMS code. You’re still safe. The attacker’s decided to contact your network provider. You’re still…wait, what? They’re on the phone to customer support, pretending to be you. You’re…possibly in a bit of trouble here. They claim to have lost your phone, and could the network please redirect authentication codes to a “replacement” device.

You’re basically doomed, sorry.

An Authenticator victory is (mostly) assured

With authenticators, there’s nobody outside of your control at the phone company who can be phished. This stacks the odds heavily in your favour. However, I don’t want to give you the wrong impression. Nothing is 100 percent bulletproof. Apps can occasionally fall foul of the most inventive of schemes.

Of the two, using an authenticator is still the best way to do things. So, then.

Embattled.

Of SMS codes and carrier ads

A developer tweeted that they had encountered a bizarre situation with a Google SMS code. Namely: the Google verification code came with an advert for a VPN service bolted on. You could even “tap to load preview”.

The initial thought was “Why is Google placing ads on these codes?” That was quickly cleared up, however. Google wasn’t responsible for this. The network carrier was to blame.

Introducing doubt into security practices

Consider that we’re talking about codes designed to make your security stronger, with whatever privacy friendly enhancements such a thing may bring. The aim of the game is retreating to your hidey-hole and watching the attacks pass by harmlessly.

The aim of the game is not to have adverts bolted on to security code texts, from carriers able to read everything. Depending on ad tech used, it’s entirely possible to make the ads “relevant” or targeted to the content of the SMS. Of all the things the ad could’ve been about, it’s interesting that it happens to be about VPNs and staying safe from hackers.

This also opens up discussions on consent, who the ad network is, what’s happening to your data/messages, and what kind of say you have in the matter. Worst case scenario, the ad leads to a rogue page or phishing site. There can’t be many more ways to damage the reputation of using SMS codes as an added layer of security.

Keep using codes, but consider migrating to apps

Bottom line: this is a bad idea, will certainly put people off an already beleaguered security measure, and shouldn’t be happening.

Authenticator apps are available on pretty much all mobile platforms at this point, and there’s never been a better moment to consider making the switch. This is not a great thing to see happening, but hopefully Google casting an “Oh dear, what are you up to” eye over the carrier will deter others from doing the same. Should you happen to see ads bolted on to your SMS codes, consider politely reaching out to some of the many Google security folks on Twitter.

We don’t need to see any more adverts attached to authentication codes.


Microsoft exec reveals “routine” secrecy orders from government investigators

Microsoft executive Tom Burt told Congressional lawmakers Wednesday that Federal law enforcement agencies send “routine” secret orders for customer information from the Seattle-based company, numbering anywhere from 2,400 to 3,500 such requests a year.

“While the recent news about secret investigations is shocking, most shocking is just how routine secrecy orders have become when law enforcement targets an American’s email, text messages, or other sensitive data stored in the cloud,” said Burt, Microsoft’s corporate vice president for customer security and trust, at a hearing held by the House of Representatives Judiciary Committee. “This abuse is not new. It is also not unique to one Administration and is not limited to investigations targeting the media and Congress.”

Burt’s comments come amidst a roiling crisis of government overreach and press freedom, and as several members of Congress explore legislative options to limit federal law enforcement powers within secret investigations.

In May, several newspapers and outlets revealed that the Department of Justice during President Donald Trump’s administration secretly obtained the phone and email records of journalists at The Washington Post, The New York Times, and CNN, and just one month later, some of those same publications revealed that in 2018, the Department of Justice also secretly subpoenaed Apple for the data of two Democrats on the House Intelligence Committee—as well as the data of the Democrats’ current and former staffers and family members.

Apple complied with the subpoena but said it did not know that the data being requested belonged to two members of Congress. Further, Apple said it could not disclose the request because of a secrecy order that came with it.

“In this case, the subpoena, which was issued by a federal grand jury and included a nondisclosure order signed by a federal magistrate judge, provided no information on the nature of the investigation and it would have been virtually impossible for Apple to understand the intent of the desired information without digging through users’ accounts,” said Apple spokesperson Fred Sainz in the statement. “Consistent with the request, Apple limited the information it provided to account subscriber information and did not provide any content such as emails or pictures.”

Burt’s testimony on Wednesday highlighted the many perceived problems with such secretive orders.

One problem, Burt said, is the sheer volume. If Microsoft alone received about 3,500 requests in one year, then “add the demands likely served on Facebook, Apple, Google, Twitter, and others, and you get a frightening sense of the mountain of secrecy orders used by federal law enforcement in recent years.”

Second, Burt said, is the problem that obtaining a secrecy order is simply too easy. According to Burt, the template that the Department of Justice relies on to apply for a secrecy order “does not even require facts justifying the need for secrecy.”

And finally, one of the largest problems with secrecy orders is that many of them, Burt asserted, are misguided:

“Examples of some of the recent abuse we’ve seen are secrecy orders when the account holder was a victim, not a target of the investigation. Or when the investigation targets just one account at a reputable company, government, or university, but the secrecy order bars notice to anyone in that organization. Or where the government has secretly demanded records in order to evade an ongoing discovery dispute.”

Burt said that Microsoft takes it upon itself to scrutinize and challenge certain secrecy orders in court, but, he added, “litigation is no substitute for legislative reform.”

Burt likely found a sympathetic audience within the House Judiciary Committee, as members of both political parties on the committee have voiced support for stripping secretive authorities away from the Department of Justice. During the hearing held Wednesday, Committee Chairman Jerrold Nadler made his stance clear:

“We cannot trust the department to police itself.”


Second colossal LinkedIn “breach” in 3 months, almost all users affected

LinkedIn has reportedly been breached—again—following reports of a massive sale of information scraped from 500 million LinkedIn user profiles in the underground in May. According to Privacy Shark, the VPN company that first reported on this incident, a seller called TomLiner showed them he was in possession of 700 million LinkedIn user records. That means almost all (92 percent) of LinkedIn’s users are affected.

The underground seller known as TomLiner is in possession of the 700 million LinkedIn records on sale. They’re also classed as a “GOD User”, which could suggest that their name carries weight in the underground market. (Source: Privacy Shark)

RestorePrivacy, an information site about privacy, examined the proof the seller put out and found the following information, scraped from LinkedIn user profiles:

  • Email addresses
  • Full names
  • Phone numbers
  • Physical addresses
  • Geolocation records
  • LinkedIn username and profile URL
  • Personal and professional experience/background
  • Genders
  • Other social media account usernames

Note that account credentials and banking details don’t appear to be part of the proof. This suggests that the data was scraped rather than breached. Scraping happens when somebody uses a computer program to pull public data from a website, using the website in a way it wasn’t intended to be used. Each individual request or visit is similar to a real user visiting a web page, but the sum total of all the visits leaves the scraper with an enormous database of information.
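To illustrate what “using the website in a way it wasn’t intended” means in practice, here is a deliberately generic sketch; the URL and data layout are hypothetical, and running something like this against a real site would typically violate its terms of service:

import time
import requests

collected = []
for profile_id in range(1, 6):
    # Each request looks like an ordinary visit to a public page...
    resp = requests.get(f"https://example.com/profiles/{profile_id}", timeout=10)
    if resp.ok:
        collected.append({"id": profile_id, "html": resp.text})  # parse out fields as needed
    time.sleep(1)  # slow, patient requests are harder to tell apart from real visitors

# ...but the sum of millions of such visits is an enormous database of "public" data.
print(f"collected {len(collected)} profiles")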

How was the seller able to scrape hundreds of millions of records? According to RestorePrivacy, the seller abused LinkedIn’s API, a similar tactic to the one used in the almost-as-enormous April LinkedIn “breach”, and the huge Facebook “breach” in the same month.

The seller confirmed that they abused LinkedIn’s API to scrape the data, which they are selling for $5,000 USD. (Source: RestorePrivacy)

In a statement Privacy Shark obtained from Leonna Spilman, who spoke on behalf of LinkedIn, the company says there was no breach: “While we’re still investigating this issue, our initial analysis indicates that the dataset includes information scraped from LinkedIn as well as information obtained from other sources. This was not a LinkedIn data breach and our investigation has determined that no private LinkedIn member data was exposed. Scraping data from LinkedIn is a violation of our Terms of Service and we are constantly working to ensure our members’ privacy is protected.”

Spilman’s statement echoes the one LinkedIn released after the April “leak” came to light: “We have investigated an alleged set of LinkedIn data that has been posted for sale and have determined that it is actually an aggregation of data from a number of websites and companies. It does include publicly viewable member profile data that appears to have been scraped from LinkedIn. This was not a LinkedIn data breach, and no private member account data from LinkedIn was included in what we’ve been able to review.”

A redacted shot from a small bit of the “proof-of-breach” sample given by the underground seller. (Source: RestorePrivacy)

What to do?

Some may read news stories like this and think “Eh, they just got my info that I wanted to be public. It’s not a big deal, right?”

Look at it this way: having, say, your email address or contact number available for everyone—even strangers—to see is risky. If scammers know these two things about you, you become a prime target for spam campaigns: email, SMS, and robocalls. We don’t know anyone who likes receiving those.

To make matters worse, the more that scammers know about you, the more plausible and enticing they can make their messages, and the easier it is for them to pretend to be you when scamming others.

If you’re a LinkedIn user, and you’re worried about the possible repercussions, now is a good time to sit down and audit your LinkedIn profile.

Start with security: Make sure you have two-factor authentication (2FA) enabled. You may also want to check whether your email address or phone numbers are on HaveIBeenPwned (LinkedIn suffered a genuine breach in 2012, and over 100 million passwords were stolen).


Don’t know what HaveIBeenPwned is? Check out our writeup about what it is and how to use it here!
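As a side note, HaveIBeenPwned’s free Pwned Passwords API can be queried without an account, and it uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine. (Checking an email address against known breaches uses a separate, key-protected API not shown here.) A minimal sketch:

import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # a password this common shows up a very large number of times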


Take a look at your LinkedIn profile and decide which bits of it you’d rather make private. After all, if a company shows interest in hiring you, you can give them some of your info, such as your contact number, if they ask for it. Better yet, consider setting up a Zoom call with them instead. Remember that you, as a LinkedIn user, get to decide which information to show or hide, and who gets to see it.

Stay safe!


Babuk ransomware builder leaked following muddled “retirement”

In the last days of April 2021, the operators of Babuk ransomware announced they were going to focus on demanding a ransom for information stolen from compromised networks, leaving the encryption part of their operation behind. It meant that they no longer needed ransomware at all.

“Babuk changes direction, we no longer encrypt information on networks, we will get to you and take your data, we will notify you about it if you do not get in touch we make an announcement”

And now, in one of the last days of June, a researcher discovered the Babuk builder, which is used to create the ransomware’s unique payloads and decryption modules, on VirusTotal.

Confusion

There is some doubt about how the Babuk operators planned to proceed, because they contradicted their own announcement by also announcing that they planned to switch to the Ransomware-as-a-Service (RaaS) model and so-called “double extortion”. Double extortion entails both encrypting a victim’s data and threatening to leak it. A threat actor operating the RaaS model provides the infrastructure, including the ransomware, for other threat actors to use.

This business model makes it hard to fathom why RaaS customers would be interested in working with the Babuk operators if they have abandoned the encryption part of the model. Extortion by threatening to release stolen data does not require the same specialized knowledge or infrastructure as encrypting data.

History of Babuk

The Babuk operators surfaced at the end of 2020 and managed to make a name for themselves by attacking Washington DC’s Metropolitan Police Department (MPD), after which they released the personal data of several MPD officers. Shortly after that, they announced they would terminate their operation.

“The babuk project will be closed, its source code will be made publicly available, we will do something like Open Source RaaS, everyone can make their own product based on our product.”

At the time, many suspected they were making this move to dodge the heat that was turned up as a result of their attack on the MPD.

It needs to be said that the Babuk operators were always a bit fickle in their communications. One moment they would announce something, only to delete it shortly after and issue a new statement. As our esteemed colleague Adam Kujawa, director of Malwarebytes Labs, said when Maze announced its retirement:

“Ransom actors are professional liars and scammers; to believe anything they say is a mistake.”

How did the builder end up on VirusTotal?

That is the puzzling question here. VirusTotal (VT) is often used as a quick way for interested parties to check whether a file is malicious or not. But it has been a while since malware authors were foolish enough to upload their work to VT to check whether it would be detected by the anti-malware industry. The vendors that cooperate on VT have access to any files uploaded there, so if a freshly created piece of malware was not detected immediately, it would be soon after. These days, malware authors have their own services to run these checks without sharing their work with the anti-malware vendors.

By uploading the builder to VirusTotal, whoever did it was basically making the source code available. There are a few possible scenarios for why someone would upload the Babuk builder:

  1. Someone received or found the file and did not trust it, so they checked it for malware on VT. It is very unlikely that someone would get this file without knowing what it is. And if a cybercriminal wanted to check who detects this, they would use a service that does not share it with anti-malware vendors. But accidents happen and we have all heard the stories of important documents getting uploaded to VT to check whether they were clean.
  2. Someone wanted to destroy the Babuk operation by throwing their builder under the (VT) bus. This only seems likely if one of their competitors or associates wanted to ensure that the Babuk operators would really stop the encryption part of their business, or at least wanted to slow it down for some time.
  3. The Babuk operators chose this as an odd way to make the source code available. This seems very unlikely as they would certainly have made this known through their usual channels, if this was the plan.

Maybe we have missed the scenario that describes what really happened. As always, our comments are open for your ideas.

Another fact that may be of consequence, somehow, is that researchers found several defects in Babuk’s encryption and decryption code. These flaws show up when an attack involves ESXi servers and they are severe enough to result in a total loss of data for the victim.

Decryption

It will take a thorough analysis of the Babuk builder before we know whether it contains enough information to create software that can decrypt files encrypted by Babuk ransomware. That would be nice for the victims that did not pay the ransom. We will keep you posted.


Police seize DoubleVPN data, servers, and domain

A coordinated effort between global law enforcement agencies—led by the Dutch National Police—shut down a VPN service that was advertised on cybercrime forums. The VPN company promised users the ability to double- and triple-encrypt their web traffic to obscure their location and identity.

The service, called DoubleVPN, had its domain page seized on June 29. According to a splash page that has replaced DoubleVPN’s domain, in seizing the VPN’s infrastructure, law enforcement also seized “personal information, logs, and statistics kept by DoubleVPN about all of its customers.”

“Servers were seized across the world where DoubleVPN had hosted content, and the web domains were replaced with a law enforcement splash page,” Europol said in a press release issued Wednesday. The takedown effort received support from law enforcement and judicial authorities in The Netherlands, Germany, the United Kingdom, Canada, the United States, Sweden, Italy, Bulgaria, and Switzerland, along with coordination from Europol and Eurojust.

According to an archive of DoubleVPN’s domain before it was seized, the company offered “simple,” “double,” and “triple” encryption to customers. Like any VPN service, DoubleVPN told its users that their web activity would first be encrypted through a VPN tunnel before connecting them to the Internet. The additional layers of encryption advertised by the company—which came in costlier monthly subscription plans—came from additional connections to VPN servers that DoubleVPN controlled.

In its press release, Europol said DoubleVPN “was heavily advertised on both Russian and English-speaking underground cybercrime forums as a means to mask the location and identities of ransomware operators and phishing fraudsters.” A screen capture taken by the news outlet BleepingComputer appears to support this. In the image, a hacker forum user is answering a question about the “best, fully anonymous” VPN service and they offer two options. One of those options is DoubleVPN.


Hear the story of how a cyberstalker who hid his activity through a VPN was eventually caught



The takedown now marks at least the third time this year that law enforcement agencies across the world have come together to stop cybercrime.

In January, Europol was also involved in taking down the infrastructure of the Emotet botnet, and just two weeks ago, Ukrainian law enforcement officials—aided internationally—arrested several individuals allegedly involved in money laundering for the Clop ransomware gang.


Fired by algorithm: The future’s here and it’s a robot wearing a white collar

Black Mirror meets 1984. Imagine that your employer uses a bot to keep track of your “production level”, and when this bot finds that you are an under-performer, it fires off a contract-termination email. Does this sound like the world you live in? Unfortunately, for some people it is.

The case

Amazon.com has used algorithms for many years to manage the millions of third-party merchants on its online marketplace. In those years, many sellers have been booted for selling counterfeit goods or jacking up prices, which makes sense when it’s justified. But who do you argue with if the deciding party is a bot?

Now, according to an investigation by Bloomberg, Amazon is dealing with its Flex drivers in the same way. Flex drivers are “gig” workers who handle packages that haven’t made it on to an Amazon van but need to be delivered the same day.

Tracking the workflow

So, being fired by a bot is not something that we want to warn you about because it might happen in the future. It already happens. As we have reported before, many employers find it necessary to spy on their workforce, especially now that working from home (WFH) is up for discussion. Should we continue to work from home now that it looks like offices are slowly opening up again in many countries? Can we find some middle ground now that we have found out that WFH works much better than we expected? By now many organizations have the tools and infrastructure in place to allow WFH where and when possible. Do workers even want to continue working from home? I imagine many will be happy to return to the office even if they won’t say it out loud. Does being monitored, be it at home or in the office, make any of this easier?

Doomsday scenario?

So, what does workflow tracking have to do with bots firing real people? Well, in Amazon’s case the algorithm received information about the times the drivers were active, how many deliveries they made in that time, and whether delivered packages fell victim to theft by so-called “porch pirates”. These numbers were crunched into a rating for each individual driver. One too many bad ratings and the driver could expect to get the email telling them their services were no longer needed.
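To be clear about what is and isn’t known: Bloomberg describes the inputs and the outcome, not the formula. The toy sketch below is purely hypothetical, but it illustrates the structural problem with any metrics-only score: there is no field for context, so a porch pirate or a locked apartment building looks the same as a driver who simply didn’t show up.

from dataclasses import dataclass

@dataclass
class DriverWeek:
    hours_active: float
    deliveries: int
    reported_stolen: int  # thefts by "porch pirates", counted against the driver

def weekly_rating(week: DriverWeek) -> float:
    """Toy score: deliveries per active hour, penalized for reported thefts."""
    rate = week.deliveries / max(week.hours_active, 1.0)
    return rate - 2.0 * week.reported_stolen

def should_terminate(ratings: list[float], threshold: float = 1.0, strikes: int = 3) -> bool:
    """One too many bad weeks and the termination email goes out, no questions asked."""
    return sum(r < threshold for r in ratings) >= strikes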

Bloomberg interviewed 15 Flex drivers, including four who say they were wrongly terminated, as well as former Amazon managers who say the largely automated system is insufficiently aware of the challenges drivers face every day.

Blame the method, not the bot

Some will argue that computers are heartless machines, and they are right. But what about the managers who leave these kinds of decisions to the machines? Are they hiding behind the decision the algorithm made because they are not brave enough to make those decisions themselves? Or is hiring and firing such a legal minefield that it’s easier to leave it to an algorithm?

It’s not even the blind trust in the algorithm that is infuriating. It’s the shrug when such a life-changing decision is left to a machine. And how would management be able to find out whether there are flaws in the algorithm without thorough investigations? According to Bloomberg, many Amazon Flex drivers did not take their dispute to arbitration because of a $200 fee and little expectation of success. In doing so, they may also have denied the algorithm the kind of “false positive” data it would need in order to improve.

Artificial Intelligence and human decisions

In several business functions, such as marketing and distribution, artificial intelligence (AI) has been able to speed up processes and provide decision-makers with reliable insights. In my opinion that describes how this should work. The algorithm can produce all the numbers it wants and a human decision maker can assess whether there is a reason to talk to the employee that seems to be performing below par. Find out what is going on. What is the reason for the lack of results? Discuss how performance can be brought back to a satisfactory level. Have a conversation that empowers the employee. Research has shown that when employees feel empowered at work, this results in stronger job performance, job satisfaction, and commitment to the organization. That sounds a lot better than getting caught up in arbitration cases.

The underlying problem

Former Amazon managers who spoke to Bloomberg accuse their old employer of knowing that delegating work to algorithms would lead to mistakes and damaging headlines. Instead, they say, Amazon decided it was cheaper to trust the algorithms than pay people to investigate mistaken firings, so long as the drivers could be replaced easily.

Those who were fired by the bot and did take the trouble to challenge their poor ratings say they got automated responses. At least, they were unable to tell whether they were communicating with real people. According to Bloomberg, a former employee at a driver support call center claims dozens of part-time seasonal workers with little training were overseeing issues for millions of drivers.

Algorithms

Amazon has automated its human-resources operation more than most companies. Maybe these are teething troubles, or maybe it overdid it. What’s certain is that, whether it’s at Amazon or elsewhere, the use of algorithms to make decisions that have a big impact on people’s lives is making headway. Before we go any further in turning Black Mirror from a work of fiction into a documentary series, it may be wise to think about how impactful we will allow these decisions to be, and whether there are any red lines we shouldn’t cross.


Binance receives the ban hammer from UK’s FCA

Binance, the world’s largest and most popular cryptocurrency exchange network, has had a rough few days.

First, Japan’s financial regulator, the Financial Services Agency (FSA), issued its second warning to Binance on Friday, 25 June, for operating in the country without permission (The first warning was issued in 2018).

That same day, Binance withdrew its services from Ontario, Canada, after the Ontario Securities Commission (OSC) published a Notice of Hearing and Statement of Allegations against Bybit, another crypto trading platform based in Singapore; Binance apparently took this as a sign to bail. The OSC has accused Bybit of noncompliance with provincial regulations.

Then on Saturday, 26 June, the UK’s own financial regulator, the Financial Conduct Authority (FCA), ordered Binance to cease activities in the UK. The warning reads:

“Most firms advertising and selling investments in cryptoassets are not authorised by the FCA. This means that if you invest in certain cryptoassets you will not have access to the Financial Ombudsman Service or the Financial Services Compensation Scheme if things go wrong.

While we don’t regulate cryptoassets like Bitcoin or Ether, we do regulate certain cryptoasset derivatives (such as futures contracts, contracts for difference and options), as well as those cryptoassets we would consider ‘securities’. […] A firm must be authorised by us to advertise or sell these products in the UK.”

Binance Markets Limited, Binance’s unit in the UK, filed a registration with the FCA but withdrew its application in May due to not meeting anti-money laundering requirements.

According to the FCA’s Financial Services Register page for Binance Markets Limited, Binance must put up a public notice on its website and apps stating to its UK users that Binance Markets Limited is banned from offering its services. The FCA also ordered Binance to “not promote or accept any new applications for lending by retail customers through the operation of its Electronic Lending System, and must cease marketing any reference to EddieUK/Binance/BinanceUK being an FCA regulated platform for buying and trading cryptocurrencies.”

Binance troubles in the first half of 2021

In March, Bloomberg reported that the US Commodity Futures Trading Commission (CFTC) was investigating whether Binance, which isn’t registered with the agency, allowed US citizens to buy and sell derivatives—something that the CFTC regulates. As of this writing, Binance hasn’t been charged with any wrongdoing. That said, Changpeng Zhao, CEO of Binance, took to Twitter to air his thoughts.

The following month, the Federal Financial Supervisory Authority—or BaFin, Germany’s financial regulator—issued a warning to Binance for potentially violating securities laws by offering “stock tokens” without the correct documentation. This means that Binance allegedly failed to issue a prospectus.

A prospectus is an official document that tells investors what a particular investment is about so they can make an informed decision. It gives potential investors information about the security being offered, the company offering the investment, and the financial risks that accompany it.

The stock tokens on offer, which tracked the movement of shares in (at that time) MicroStrategy, Tesla, and Coinbase, represent securities that require a prospectus. These tokens are bought and sold using Binance’s own cryptocurrency.

Magic words

As the FCA issued a warning to British consumers about Binance Markets Limited, the financial regulator also offered words of wisdom to anyone interested in investing in cryptocurrency assets: Do your research.

It’s very easy to get caught up in the hype, and the loudest drones can be enough to drown out more sensible voices. Doing your research, reading up on the company you’re going to be investing in and on what you’re investing in, and reading stories about both successes and failures in such investments can give you the perspective you need to avoid hasty decisions. Furthermore, make sure the firm is legally recognized to conduct business in your country, else no one will back you up if or when things go south—and sometimes they do pretty quickly.

“Check with Companies House to see if the firm is registered as a UK company and for directors’ names. To see if others have posted any concerns, search online for the firm’s name, directors’ names and the product you are considering,” the FCA urges the British public, “Always be wary if you are contacted out of the blue, pressured to invest quickly or promised returns that sound too good to be true.”


A week in security (June 21 – June 27)

Last week on Malwarebytes Labs:

Other cybersecurity news:

Stay safe, everyone!
