IT NEWS

Would real identities make social media safer?

“Use real identities to reduce abuse online” is a talking point you’ve almost certainly seen over the years. It seems to come around like clockwork every other month, and it is currently a hot topic in the UK after prominent journalists and media personalities raised the issue.

It’s an interesting idea, but the devil is in the details. Simply declaring that “verified identities solve the problem” ignores the new problems such an approach creates. Is it possible to make this work, or is it all just pie in the sky?

Real users still behave badly

Think back to some of the worst arguments you’ve seen on social media. They almost certainly involve verified accounts somewhere in the mix. Often they initiate the aggression, or wade into replies and make it worse.

They may also utilise platform features to spread the argument further afield. Accounts with large followings on Twitter will do this via quote tweeting. They may simply retweet a stance they disagree with to initiate a so-called “pile on”, or retweet other people arguing, or quote tweet adding their own commentary along the way. They may even retweet their own replies.

Once this happens, it’s often game over for the other person whose notifications are essentially ruined with a flood of angry responses. I could be wrong, but I don’t believe I’ve ever seen a verified account banned for causing a pile on. I have, however, seen small accounts targeted by such things delete their profile completely. On balance, this doesn’t seem particularly fair.

Realness doesn’t equate to accuracy

Going back to Twitter, this is somewhat a problem of their own making. Whether an accurate assumption or not, the verified system was originally where you assumed all the celebrities you liked ended up. Twitter expanded it to include other people of note, for example authors, athletes, scientists, and so on. Then lots of folks were handed verification simply for working in news / media orgs. Alongside this, for a period of time you could submit a request to be verified and if you passed the bar, you got your checkmark.

Already, you can see how the system was torn between notions of “Is this a badge of notability, identity, or something else altogether?” Things became even more confusing as for a few years, the Twitter verification information page insisted verification was not currently happening…while new checkmarks continued to be given out.

The scheme is currently undergoing renovation, but it remains to be seen what happens with it.

Whether intentional or not, people seem to treat verified accounts as trustworthy voices of reason. This is not sensible, as people will tweet whatever they feel like, verified or not. If we’re asking, “Does verification help reduce abuse or misinformation?”, it can be argued that no, it does not. A drop of 73% in election misinformation after Twitter suspended Donald Trump, one of the most prominent verified accounts on the platform, is a frankly staggering statistic: the checkmark did nothing to stem that flow.

This alone should be a fatal blow to the “Use a real identity and things will magically be better somehow” idea.

Facebook’s foray into real names

Facebook already requires you to register an account with your legal name. The problem is if they think your name is not real, you’re locked out and have to try and regain access. This has had very mixed results, causing problems over everything from “fake” names to Star Wars.

Consider all the effort involved in policing this, and the hassle for site users, and then compare that with the number of accounts who are happily pushing large-scale propaganda campaigns via fake profiles on Facebook anyway.

Is it really worth all that effort? Is it helping?

Access denied

If we want everyone online with a real ID, there are many privacy issues up for debate if identity documents are involved. There’s also the massive problem of access. The international gold standard for ID is the passport. Many verification schemes ask for scans of your passport at some point.

Problem: lots of people don’t have passports, because it’s not a mandatory document. Depending on country, it might be very expensive. It could involve a complicated process or have its own barriers to entry. Live in a different country to the one you were born in? You may only have a residence permit. It’s possible your passport has expired. Will they even accept an expired passport?

In 2018, around 76% of people had a passport in the UK. That compares with 42% of Americans and 66% of Canadians. That leaves an awful lot of people out of the loop across just 3 locations. This is before you factor the rest of the world in.

Unless passports are somehow made free worldwide, or a universal form of ID is created, people will lose out. When crucial services like banking, tax, municipal services, gas and electricity are all moved online, this seems irresponsible. We typically don’t need to show our energy company a scan of our passport to use their service online. Does it make sense that the bar to entry is so much higher to post on a social network?

There are limited circumstances where a social network currently may ask to see a form of identification. That’s mostly tied to issues of death and memorialisation. Similarly, some verification processes involve passport scans.

Scanning everybody, though? That’s going to cause additional problems…

All the eggs, in the biggest basket

Any social media app containing something approaching the whole world’s passports is instantly a massive target for hacks and scams. It’s debatable if they could keep it all secure and locked down—they only have to fail once. For comparison, the UK’s Home Office deals with a frankly unimaginable volume of personal data. Passports, birth certificates, wedding certificates, photographs, personal emails, biometrics, the works. Some of this is outsourced to third parties.

It is incredibly important this data is kept under lock and key. This is now the point where we mention a 120% rise in data loss incidents. With 4,204 incidents “in the last financial year” alone, that’s an awful lot of problems related to paper documents and electronic devices. If this is the scale of the issue for UKGOV despite their best efforts, imagine the problem for a much less wealthy social media site. It just seems too much of a leap of faith to think this would end in anything but disaster.

This leads us neatly on to…

Data theft fallout

When people say that losing their anonymity online “isn’t a problem”, or “wouldn’t bug me”, that’s great for them. But just because something isn’t in their threat model doesn’t mean it can’t hurt someone else, as the EFF’s Eva Galperin recently pointed out on Twitter.

Some people are at risk from domestic violence or racial abuse. For some, anonymity is built into aspects of their job. For others, their stay in a country might be conditional, but they’d like to speak up on the issues affecting them without feeling they’re jeopardising their status.

“You’re not living in a repressive regime” should not be the bar to clear for privacy. Your right to keep yourself safe from data abuse isn’t a special exemption, kept out of reach except in the direst of emergencies, and treating it as one normalises the idea of privacy and safety as an exception. You know who loves it when privacy and safety are treated as abnormal?

People who’d rather you have as little of it as possible, that’s who.

Same again next time?

I’ve seen this discussion come around many, many times now. No matter the circumstance, it tends to fizzle out and be resurrected a few months later. In the UK, at least, “everyone should supply ID” will collapse under the weight of sheer impossibility. The task there is made harder by the fact that there is no nationally issued, mandatory identity card system in operation.

Things are a little more complicated in the US, where anonymous online speech is concerned. The legal provision that protects free speech online—Section 230—is under increasing scrutiny. It remains to be seen how things will play out there.

Having said that, this talking point will return. When it does, you’ll be armed with the knowledge that data privacy is incredibly important. Due to a variety of social, legal, and practical problems in this particular realm, social media sites won’t be asking you for verification any time soon.


Credit card skimmer piggybacks on Magento 1 hacking spree

Back in the fall of 2020, threat actors started massively exploiting a vulnerability in the no-longer-maintained Magento 1 software branch. As a result, thousands of e-commerce shops were compromised and many of them were injected with credit card skimming code.

While monitoring activities tied to this Magento 1 campaign, we identified an e-commerce shop that had been targeted twice by skimmers. This in itself is not unusual; multiple infections on the same site are common.

However, this case was different. The threat actors devised a version of their script that is aware of sites already injected with a Magento 1 skimmer. That second skimmer simply harvests credit card details from the already existing fake form injected by the previous attackers.

In the incident we describe in this post, the threat actors also took into account that an e-commerce site may get cleaned up from a Magento 1 hack. When that happens, an alternate version of their skimmer injects its own fields that mimic a legitimate payments platform.

Mass Magento 1 infections

The Magento 1 end-of-life coupled with a popular exploit turned out to be a huge boon for threat actors. A large number of sites have been hacked indiscriminately just because they were vulnerable.

RiskIQ attributed these incidents to Magecart Group 12, which has a long history of web skimming using various techniques including supply-chain attacks.

Figure 1: Skimming code injected in Magento 1 sites

This skimmer is rather lengthy and contains various levels of obfuscation that make debugging it more challenging. Although there are variations, the format and decoy payment form are very much the same.

No honor among thieves

Costway is a retailer that started to sell its own name-brand products via platforms such as Amazon and later rolled out costway.com and subsequent localized online stores. Their French portal (costway[.]fr) attracted about 180K visitors last December.

Our crawlers identified that the websites for Costway France, UK, Germany and Spain, which run the Magento 1 software, had been compromised around the same time frame.

We can see the credit card skimmer injection directly on the checkout page for costway[.]fr as it stands out in English while the rest of the site is in French. This is not surprising considering that the Magento 1 hacking campaign is automated and fairly indiscriminate.

Figure 2: Costway site already hacked with Magento 1 skimmer

But what’s more interesting is that another skimmer is also present on the site (loaded externally from securityxx[.]top) and targeting the Magento 1 skimmer.

It’s possible that the threat actors’ levels of access to e-commerce sites differ. The former exploit a core vulnerability that grants them root access, while the latter can perhaps only perform specific types of injections. If that is the case, it would explain why they simply leave the fake form alone and grab credentials from it.

There’s an additional twist here where the criminals also planned for the scenario where the e-commerce site gets cleaned up from the Magento 1 injection.

Figure 3: Costway site cleaned up from Magento 1 hack but with external skimmer

The skimmer creates its own form fields which closely resemble the legitimate ones from the Adyen payments platform that Costway uses. Visually, only a very small style change (font size) gives it away, but there are more significant implications under the hood.

Figure 4: External skimmer mimics Adyen payments form

Adyen encrypts the form fields using their proprietary technology. The threat actors wanted to recreate the same look and feel from Adyen but be able to harvest the credit card information in their own way.

To summarize, from a victim’s perspective, there are 3 different skimmers that get loaded when they proceed to the checkout page.

  1. Magento 1 hack skimmer injected directly in checkout page
  2. Custom skimmer (securityxx[.]top/security.js) that steals from Magento 1 skimmer
  3. Custom skimmer (securityxx[.]top/costway.js) that alters legitimate payment iframe
Figure 5: Web traffic showing all 3 skimmers

Previous skimmer

The same threat actors were already busy working on Costway’s compromise at least as early as late December 2020, as recorded in this urlscan.io crawl. They used the custom domain costway[.]top to host their code.

Figure 6: Earliest documented instance of compromise via custom domain

The domain costway[.]top is related to a family we have come across before. There is overlap with the skimmer code they use, naming conventions and even infrastructure.

Figure 7: Relationship graph showing previous connections

At the moment, this group is quite active and continues to use the same techniques we saw several months ago.

Competing for resources

A large number of Magento 1 sites have been hacked but are not necessarily being monetized yet. Other threat actors that want access will undoubtedly attempt to inject their own malicious code. When that happens, we see criminals trying to access the same resources and sometimes fighting with one another.

We informed Costway during our investigation but also witnessed their site getting reinfected. The costway[.]top domain was discarded in favor of securityxx[.]top, where the threat actors customized the skimmer specifically for this victim.

At the time of writing, costway[.]fr is still compromised but Malwarebytes users are protected thanks to our Browser Guard extension and general web protection available in our software.

We would like to thank Jordan Herman over at RiskIQ for sharing additional indicators with us.

Indicators of Compromise (IOCs)

securityxx[.]top
costway[.]top
hdpopulation[.]com
cdnanalyze[.]com
hdenvironement[.]com
crazyvaps[.]info
cdnchecker[.]org
cdnoptimize[.]com
hdanalyse[.]com
cdnapis[.]org
cookiepro[.]cloud
cdndoubleclick[.]net

149[.]248[.]7[.]219
95[.]179[.]142[.]28
45[.]76[.]75[.]35
136[.]244[.]110[.]105
149[.]248[.]0[.]74
149[.]28[.]64[.]156
95[.]179[.]139[.]29
209[.]250[.]246[.]214


A week in security (January 25 – January 31)

January 28 was Data Privacy Day, but for Malwarebytes Labs, it was Data Privacy Week. As such, we’re packed with more privacy coverage than you can shake a stick at, starting with some practical steps on how to make your online life private and secure, and why privacy is core to a safer internet. We also covered news on Grindr facing a huge GDPR fine due to privacy concerns, and Google’s new privacy-friendly technology, FLoC (Federated Learning of Cohorts), which could replace the cookies used by cross-site ad trackers.

To cap the week off, we invited a panel of experts from Mozilla, DuckDuckGo, and EFF to talk about Internet users’ experiences with the internet and online privacy, in a special episode of the Lock & Code podcast.

Lastly, we touched on DDoS attacks spawned by the abuse of RDP, the mighty takedown of the Emotet botnet, and the Emotet update written by law enforcement that’s meant to remove the malware from infected computers.

Other cybersecurity news

Stay safe, everyone!


Fonix ransomware gives up life of crime, apologizes

Ransomware gangs deciding to pack their bags and leave their life of crime is not new, but it is a rare thing to see indeed.

And the Fonix ransomware (also known as FonixCrypter and Xinof), one of those ransomware-as-a-service (RaaS) offerings, is the latest to join the club.

Fonix was first observed in mid-2020, but it only started turning heads around September-October of that year. Believed to be of Iranian origin, it is known to use four methods of encryption—AES, Salsa20, ChaCha, and RSA—but because it encrypts all non-critical system files, it’s slower compared to other RaaS offerings.

Encrypted files usually bear the .FONIX and .XINOF (Fonix spelled backwards) file extensions; however, the .repter extension was also used. The desktop wallpaper of the affected system is changed to the Fonix logo.

A variant of the Fonix ransomware note displayed to victims (Courtesy of Malware Intelligence Analyst Marcelo Rivero)

The same account that announced the end of Fonix later tweeted an apology, along with a promise to “make up for our mistakes”.

That promise came in the form of the master decryption keys needed to decrypt .FONIX and .XINOF files, and an administration tool, which can only decrypt one file at a time. Cautious readers may want to wait for more useful decryption tools, written by more legitimate organisations, before trusting code released by known cybercriminals.

This isn’t the first time a ransomware group has displayed a conscience—that is, assuming we take their word that they will continue to “use our abilities in positive ways”. In 2018, the developers of the GandCrab ransomware (another RaaS, one that also publicly announced the shutdown of its operations in mid-2019) made a U-turn and released decryption keys for all of its victims in Syria after a Syrian father took to Twitter to plead with them. GandCrab had infected his system and encrypted photos of his two sons, who had been taken by the war.

In 2016, when TeslaCrypt made an exit from the RaaS scene, a security researcher reached out to its developers and asked if they would release the encryption keys. They did, releasing for free the master key that decrypts affected systems.

It remains to be seen if the Fonix gang will keep their word. If some or all of them change their minds and go back to a life of crime, they wouldn’t be the first ransomware gang to do so. Any ransomware group packing up and leaving is good news. However, while Fonix appears to have left the building, it was only one small player in a vast criminal ecosystem. The threat of ransomware remains.


RDP abused for DDoS attacks

We have talked about RDP many times before. It has been a popular target for brute force attacks for a long time, but attackers have now found a new way to abuse it.

Remote access has become more important during the pandemic, with as many people as possible trying to work from home, which makes it all the more important to configure RDP services in a secure way.

Quick recap of RDP

RDP is short for Remote Desktop Protocol. Remote desktop is exactly what the name implies: an option to control a computer system remotely, so that it almost feels as if you are actually sitting in front of that computer. Because of the current pandemic, many people are working from home and may be doing so for a while to come.

All this working from home has the side effect of more RDP ports being opened, not only to enable the workforce to access company resources from home, but also to enable IT staff to troubleshoot problems on workers’ devices; a lot of enterprises rely on tech support teams using RDP for exactly that.

We warned about one of the consequences of exposing RDP in our post Brute force attacks increase due to more open RDP ports. And we provided some security measures in our post How to protect your RDP access from ransomware attacks. But this time we are going to talk about a different kind of attack that makes use of open RDP ports.

RDP as a DDoS attack vector

The RDP service can be configured by Windows systems administrators to run on TCP (usually port 3389) and/or on UDP (also port 3389). When enabled on UDP, the Microsoft Windows RDP service can be abused to launch UDP reflection attacks with an amplification ratio of 85.9:1.

The traffic set off by this amplification attack is made up of non-fragmented UDP packets sourced from UDP port 3389 and directed towards UDP ports on the victim’s IP address(es). In logs, these attack-induced packets are readily discernible from legitimate RDP session traffic, because the amplified attack packets are consistently 1,260 bytes in length and are padded with long strings of zeroes.
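As a rough illustration of how that signature could be used defensively, here is a minimal sketch (not taken from the advisory; the function name and the zero-run threshold are assumptions) that flags a captured UDP payload matching the described pattern:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Heuristic for the reflection traffic described above: a UDP payload of
       exactly 1,260 bytes whose tail is a long run of zero bytes. The minimum
       zero-run length is an assumed threshold, not a documented value. */
    static bool looks_like_rdp_reflection(const uint8_t *payload, size_t len)
    {
        const size_t expected_len = 1260;
        const size_t min_zero_run = 1024;   /* assumption */
        size_t zero_run = 0;

        if (payload == NULL || len != expected_len)
            return false;

        /* Count trailing zero bytes. */
        for (size_t i = len; i > 0 && payload[i - 1] == 0; i--)
            zero_run++;

        return zero_run >= min_zero_run;
    }

    int main(void)
    {
        /* Toy payload: a few header-like bytes followed by zero padding. */
        uint8_t packet[1260] = { 0x03, 0x00, 0x01 };

        printf("%s\n", looks_like_rdp_reflection(packet, sizeof(packet)) ? "flagged" : "ignored");
        return 0;
    }

Fed with payloads extracted from a packet capture, a check like this helps separate the amplified attack traffic from legitimate RDP session traffic, which does not show that fixed size and padding.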

Open RDP ports

At the time of writing, the Shodan search engine, which indexes online devices and their services, lists over 3.6 million results in a search for “remote desktop” and NetScout identified 33,000 Windows RDP servers that could potentially be abused in this type of DDoS attack.

Shodan search results for remote desktop

The consequences of such an attack

The owner of the destination IP address(es) will experience a DDoS attack. DDoS stands for Distributed Denial of Service. It is a network attack that involves hackers forcing numerous systems to send network communication requests to one specific server. If the attack is successful, the receiving server will become overloaded by nonsense requests. It will either crash or become so busy that normal users are unable to use it.

A DDoS attack can cause:

  • Disappointed users
  • Loss of data
  • Loss of revenue
  • Lost work hours/productivity
  • Damage to the business’s reputation
  • Breach of contract between a victim and its users

We have discussed preventive measures for DDoS targets in our post DDoS attacks are growing: What can businesses do?

But there are consequences for the abused service owners as well. These may include an interruption or slow-down of remote-access services, as well as additional service disruption due to an overload of additional network hardware and services.

How to avoid helping a DDoS attack

There are a few things you can do to avoid being roped into an RDP DDoS attack. They are also useful against other RDP related attacks.

  • Put RDP access behind a VPN so it’s not directly accessible.
  • Use a Remote Desktop Gateway Server, which provides some additional security and operational benefits like 2FA, for example. Also, the logs of the RDP sessions can prove especially useful.
  • If RDP servers offering remote access via UDP cannot immediately be moved behind VPN concentrators, it is recommended to disable RDP via UDP.

Logging of the traffic will not be effective as a preventive measure, but it will enable you to figure out what might have happened and assist you in closing any gaps in your defenses.

Stay safe, everyone!


Cleaning up after Emotet: the law enforcement file

This blog post was authored by Hasherezade and Jérôme Segura

Emotet has been the most wanted malware for several years. The large botnet is responsible for sending millions of spam emails laced with malicious attachments. The one-time banking Trojan turned loader was responsible for costly compromises due to its relationship with ransomware gangs.

On January 27, Europol announced a global operation to take down the botnet behind what it called the most dangerous malware by gaining control of its infrastructure and taking it down from the inside.

Shortly thereafter, Emotet controllers started to deliver a special payload containing code to remove the malware from infected computers. At the time, this had not been formally confirmed and some details around it were not quite clear. In this blog we will review this update and how it is meant to work.

Discovery

Shortly after the Emotet takedown, a researcher observed a new payload pushed onto infected machines with code to remove the malware on a specific date.

That updated bot contained a cleanup routine responsible for uninstalling Emotet after the April 25 2021 deadline. The original report mentioned March 25 but since the months are counted from 0 and not from 1, the third month is in reality April.

This special update was later confirmed by the U.S. Department of Justice in the affidavit accompanying its press release.

On or about January 26, 2021, leveraging their access to Tier 2 and Tier 3 servers, agents from a trusted foreign law enforcement partner, with whom the FBI is collaborating, replaced Emotet malware on servers physically located in their jurisdiction with a file created by law enforcement.

BleepingComputer mentions that the foreign law enforcement partner is Germany’s Federal Criminal Police (Bundeskriminalamt or BKA).

In addition to the cleanup routine, which we describe in the next section, this “law enforcement file” contains an alternative execution path that is followed if the same sample runs before the given date.

The uninstaller

The payload is a 32-bit DLL. It has a self-explanatory name (EmotetLoader.dll) and 3 exports, which all lead to the same function.
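As a rough sketch of that layout, the loader is shaped roughly as follows. The export names are invented placeholders; only the DLL name and the “3 exports, one function” structure come from the analysis.

    #include <windows.h>

    /* Sketch only: three exports that all funnel into one internal routine. */

    static void common_entry(void)
    {
        /* cleanup / alternative execution path lives here */
    }

    __declspec(dllexport) void ExportA(void) { common_entry(); }
    __declspec(dllexport) void ExportB(void) { common_entry(); }
    __declspec(dllexport) void ExportC(void) { common_entry(); }

    BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
    {
        (void)inst; (void)reason; (void)reserved;
        return TRUE;
    }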

If we look inside this exported function, we can see 3 subroutines.

The first one is responsible for the aforementioned cleanup. Inside, we can find the date check.

If the deadline has already passed, the uninstall routine is called immediately. Otherwise, a thread is run that repeatedly does the same time check, eventually calling the deletion code once the date has passed.


The current time is compared with the deadline in a loop. The loop exits only once the deadline has passed, and execution then proceeds to the uninstall routine.
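A minimal sketch of that waiting logic is shown below. The names, the sleep interval, and the choice of time APIs are assumptions made for illustration; this is not the decompiled code.

    #include <windows.h>

    static int deadline_passed(void)
    {
        SYSTEMTIME now, deadline = {0};
        FILETIME now_ft, deadline_ft;

        /* April 25, 2021 (the corrected deadline discussed above). */
        deadline.wYear = 2021; deadline.wMonth = 4; deadline.wDay = 25;

        GetLocalTime(&now);
        SystemTimeToFileTime(&now, &now_ft);
        SystemTimeToFileTime(&deadline, &deadline_ft);

        return CompareFileTime(&now_ft, &deadline_ft) >= 0;
    }

    static DWORD WINAPI waiting_thread(LPVOID param)
    {
        (void)param;

        /* Re-check periodically; the loop only exits once the deadline has
           passed. The real payload then calls its uninstall routine
           (sketched in the next section). */
        while (!deadline_passed())
            Sleep(60 * 1000);

        return 0;
    }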

The uninstall routine itself is very simple. It deletes the service associated with Emotet, deletes the run key, and then exits the process.

Inside the function: “uninstall_emotet”

As we know from observing regular Emotet, it achieves persistence in one of two ways.

Run key

HKCU\Software\Microsoft\Windows\CurrentVersion\Run

This type of installation does not require elevation. In such a case, the Emotet DLL is copied into %APPDATA%.

System Service

HKLM\System\CurrentControlSet\Services\<emotet random name>

If the sample was run with Administrator privileges, it installs itself as a system service. The original DLL is copied into C:\Windows\SysWOW64.

For this reason, the cleanup function has to take both scenarios into account.
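Based purely on the behavior described above, a minimal sketch of such a cleanup could look like the following. The service name, the run key value name, and the HKCU hive choice are placeholders and assumptions; this is not the decompiled law enforcement code.

    #include <windows.h>

    /* Sketch: remove the service, remove the Run key value, then exit. */
    static void uninstall_sketch(const wchar_t *service_name, const wchar_t *run_value_name)
    {
        /* 1. Delete the Windows service (elevated installation). */
        SC_HANDLE scm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS);
        if (scm) {
            SC_HANDLE svc = OpenServiceW(scm, service_name, DELETE);
            if (svc) {
                DeleteService(svc);
                CloseServiceHandle(svc);
            }
            CloseServiceHandle(scm);
        }

        /* 2. Delete the Run key value (non-elevated installation). */
        HKEY run;
        if (RegOpenKeyExW(HKEY_CURRENT_USER,
                          L"Software\\Microsoft\\Windows\\CurrentVersion\\Run",
                          0, KEY_SET_VALUE, &run) == ERROR_SUCCESS) {
            RegDeleteValueW(run, run_value_name);
            RegCloseKey(run);
        }

        /* 3. Terminate the current process. */
        ExitProcess(0);
    }

Deleting the service through the Service Control Manager covers the elevated installation, while removing the Run value covers the non-elevated one.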

We noticed the developers made a mistake in the code that’s supposed to move the law enforcement file into the %temp% directory:

GetTempFileNameW(Buffer, L"UPD", 0, TempFileName) 

The “0” should have been a “1” (or any other non-zero value), because according to the documentation: “If uUnique is not zero, you must create the file yourself. Only a file name is created, because GetTempFileName is not able to guarantee that the file name is unique.”


The intention was to generate a temporary path, but because the wrong value was used for the uUnique parameter, not only was the path generated, the file was also created. That led to a name collision and, as a result, the file was not moved.
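The difference between the two uUnique values is easy to reproduce in a small standalone program (this is a demonstration of the API behavior, not part of the law enforcement file):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        wchar_t dir[MAX_PATH], name[MAX_PATH];

        GetTempPathW(MAX_PATH, dir);

        /* uUnique == 0: the API picks a unique name AND creates an empty file,
           so a later MoveFileW() to that path collides with the existing file. */
        if (GetTempFileNameW(dir, L"UPD", 0, name))
            wprintf(L"0: file created on disk: %ls\n", name);

        /* uUnique != 0: only a name is generated and no file is created, but
           uniqueness then becomes the caller's responsibility. */
        if (GetTempFileNameW(dir, L"UPD", 1, name))
            wprintf(L"1: name only, no file:   %ls\n", name);

        return 0;
    }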

However, this does not change the fact that the malware has been neutered and is harmless since it won’t run as its persistence mechanisms have been removed.

If the aforementioned deletion routine is called immediately, the other two functions from the initial export never run (the process terminates at the end of the routine by calling ExitProcess). But this happens only if the sample is run after April 25.

The alternative execution path

Now let’s take a look at what happens in the alternative scenario when the uninstall routine isn’t immediately called.


After the waiting thread is run, the execution reaches two other functions. The first one enumerates running processes, and searches for the parent process of the current one.

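A minimal sketch of that parent lookup, using the Toolhelp API, might look like this (illustrative placeholder code, not the decompiled routine):

    #include <windows.h>
    #include <tlhelp32.h>

    /* Walk the process list and return the parent PID of the current process. */
    static DWORD get_parent_pid(void)
    {
        DWORD my_pid = GetCurrentProcessId();
        DWORD parent = 0;

        HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE)
            return 0;

        PROCESSENTRY32W pe;
        pe.dwSize = sizeof(pe);

        if (Process32FirstW(snap, &pe)) {
            do {
                if (pe.th32ProcessID == my_pid) {
                    parent = pe.th32ParentProcessID;
                    break;
                }
            } while (Process32NextW(snap, &pe));
        }

        CloseHandle(snap);
        return parent;
    }

The name of the process behind that parent PID can then be checked, as described next.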

Then it checks whether the parent’s process name is “explorer.exe” or “services.exe”, and reads the parameters given to the parent.

Running the next stage

The next routine decrypts and loads a second stage payload from a hardcoded buffer. The buffer is decrypted in a loop and then executed.

The flow is then redirected to the decrypted buffer (via “call edi”), revealing the next PE: X.dll. Execution continues at the beginning of the revealed buffer, which starts with a jump.

This jump leads to a reflective loader routine. After mapping the DLL into its virtual format in a freshly allocated area of memory, the loader redirects execution there.


First, the DllMain of X.dll is called (it is used for the initialization only). Then, the execution is redirected to one of the exported functions – in the currently analyzed case it is Control_RunDll.

The execution is continued by the second dll (X.dll). The functions inside this module are obfuscated.


The payload that is called now looks very similar to the regular Emotet payload. An analogous DLL, also named X.dll, could be found in earlier Emotet samples (without the cleanup routine), for example in this sample.

The second stage payload: X.dll

The second stage payload, X.dll, is a typical Emotet DLL, loaded if the hardcoded deadline has not yet passed.

This DLL is heavily obfuscated and all the APIs it uses are loaded dynamically. Their parameters are not readable either: they are calculated dynamically before use, sometimes with the help of a long chain of operations involving many variables.

This type of obfuscation is typical for Emotet’s payloads, and it is designed to confuse researchers. Yet, thanks to tracing we were able to reconstruct what APIs are being called at what offsets.

The payload has two alternative paths of execution. First it checks if it was already installed. If not, it follows the first execution path, and proceeds to install itself. It generates a random installation name, and moves itself under this name into a specific directory, at the same time adding persistence. Then it re-runs itself from the new location.

If the payload detects that it was run from the destination path, it takes an alternative execution path instead. It connects to the C2 and communicates with it.


The current sample sends a request to one of the sinkholed servers. Content:

L"DNT: 0rnReferer: 80.158.3.161/i8funy5rv04bwu1a/rnContent-Type: multipart/form-data; boundary=--------------------GgmgQLhRJIOZRUuEhSKorn"

Web traffic from a system infected via a malicious document shows the special update file being downloaded, and the machine reaching back to the command and control server owned by law enforcement.

Motives behind the uninstaller

The version with the uninstaller is now pushed via the channels that were meant to distribute the original Emotet. Although the deletion routine won’t be called until the deadline passes, the infrastructure behind Emotet is already controlled by law enforcement, so the bots are not able to perform their malicious actions.

For victims with an existing Emotet infection, the new version will come as an update, replacing the former one. This is how it will be aware of its installation paths and able to clean itself once the deadline has passed.

Pushing code via a botnet, even with good intentions, has always been a thorny topic mainly because of the legal ramifications such actions imply. The DOJ affidavit makes a note of how the “Foreign law enforcement agents, not FBI agents, replaced the Emotet malware, which is stored on a server located overseas, with the file created by law enforcement”.

The lengthy delay for the cleanup routine to activate may be explained by the need to give system administrators time for forensics analysis and checking for other infections.


3 tips to top up your privacy

It’s Data Privacy Day—the perennial event that many internet users may never have heard of, even though they have strong feelings and opinions about the very things that birthed it in the first place.

Originally created to help businesses learn about why online privacy matters, the event has since extended its reach to other public organizations, governments, communities, and families on a global scale—yes, even those who continue to say “I have nothing to hide!”

Many high-traffic websites have improved on the aspects of security and privacy these past few years, so it shouldn’t surprise you to see privacy features when you visit your account settings. You just have to make use of them.

Here are three simple, practical, and sensible steps you can take now, to achieve a more private—and secure—online life.

1. Check your browser’s privacy options

Your browser is your gateway to the Internet. Unfortunately, few of them have ideal privacy and security settings set by default, even if they’re present.

It is in your best interest, then, to go ahead and tinker with your browser’s settings, carefully making sure that options are set in a way that is acceptable to you, privacy-wise.

You can read about some popular browsers’ privacy settings here:

While you’re reviewing your settings, you may want to clear out your browser history, too, and review your extensions—you might actually find one or several there that you had already forgotten about—and remove those you hardly, or never, use. Vulnerable or malicious add-ons can easily become a privacy and security risk.

Do a browser settings review on your mobile devices as well. You can learn more about them here:

Now, if you find that what’s in there by default lacks the privacy and security settings you hope for, it’s time to ditch that browser for a new one.

Thankfully, most (if not all) desktop browsers that have made taking care of your privacy their business also have mobile versions. Start by looking up Firefox, Brave, DuckDuckGo, and even the Tor Browser on the Google Play and Apple App stores.

2. Review your social privacy settings

If you use a lot of different social media sites, choose one platform you’re most active on and start there.

(It’s Facebook, isn’t it?)

With privacy in mind, update the settings of certain fields in your profile so that they are less likely to make you a target of identity theft. You might also want to limit the way other users of that platform can reach you, such as a total stranger who doesn’t have connections within your closest circle adding you as a friend. To learn more about your options, read Facebook’s basic privacy settings and tools page.

Disable that feature wherein anyone can look you up using an email address or phone number tied to you. Lastly, if you have a friend or family member who likes tagging you on every photo they upload (even if you’re not on the photo), feel free to un-tag yourself. You won’t regret it.

3. Start sharing with caution

Sharing might be caring—not to mention, fun—but in some cases, that doesn’t really apply, especially in social media.

I think by now we’re quite familiar with the scenario of someone publicly sharing their vacation plans on social media only to find themselves a victim of robbery when they got back.

Yes, we should think twice before sharing such information. And not only that, we should also make it a habit to ask permission when sharing photos with other people in them, or stories that involve somebody else. This is not only polite, but this also demonstrates that you care about other people’s privacy, too. They are your friends and family after all.

Every day is data privacy day

Data Privacy Day may only be one day, but looking after our personal data and keeping it safe should be an everyday affair for every Internet user.

You have the tools; we have equipped you with the know-how. Improving your data privacy doesn’t have to be rocket science.

So, let’s take a little bit of our busy time to review and make changes to those settings. These changes might be slight but are incredibly significant overall.

Remember, eyes open, and stay safe!


$12m Grindr fine shows GDPR’s got teeth

As thoughts turn to data privacy in a big way this week, GDPR illustrates that it isn’t an afterthought. Grindr, the popular social network and dating platform, will likely suffer a $12 million USD fine due to privacy-related complaints. What happened here, and what are the implications for future cases?

What is GDPR?

The General Data Protection Regulation is a robust set of rules for data protection created by the European Union (EU), replacing much older rules from the 1990s. It was adopted in 2016 and enforcement began in 2018. It’s not a static thing, and is often updated. There are plenty of rules and requirements for things such as data breaches or poor personal data notifications. Crucially, should you get your data protection wrong somewhere along the way, big fines may follow.

Although mostly spoken of in terms of the EU, its impact is global. Your data may be sitting under the watchful eye of GDPR right now without you knowing it, which…would be somewhat ironic. Anyway.

The complaint

On 24 January, Norway’s Data Protection Authority (NDPA) gave Grindr advance notification [PDF] of its intention to levy a fine. This is because it claims Grindr shared user data with third parties “without legal basis”. From the document:

Pursuant to Article 58(2)(i) GDPR, we impose an administrative fine against Grindr LLC of 100 000 000 – one hundred million – NOK for

– having disclosed personal data to third party advertisers without a legal basis, which constitutes a violation of Article 6(1) GDPR and

– having disclosed special category personal data to third party advertisers without a valid exemption from the prohibition in Article 9(1) GDPR

That doesn’t sound good. What does it mean in practice?

Noticing the notification

The Norwegian Consumer Council, in collaboration with the European Center for Digital Rights, put forward 3 complaints on behalf of a complainant. The complaints themselves related to third-party advertising partners. The privacy policy stated that Grindr shared a variety of data with third-party advertising companies, such as:

[…] your hashed Device ID, your device’s advertising identifier, a portion of your Profile Information, Location Information, and some of your demographic information with our advertising partners

Personal data shared included the below:

Hardware and Software Information; Profile Information (excluding HIV Status and Last Tested Date and Tribe); Location and Distance Information; Cookies; Log Files and Other Tracking Technologies.

Additional Personal Data we receive about you, including: Third-Party Tracking Technologies.

Where this all goes wrong for Grindr is that the NDPA objects to how consent was gained for the various advertising partners. Users were “forced to accept the privacy policy in its entirety to use the app”. They weren’t asked specifically if they wanted to share with third parties. Your mileage may vary on whether this is worth the fine currently on the table, but it is a valid question.

Untangling the multitude of privacy policies

Privacy policies can cause headaches for developers and users alike, in lots of different areas besides dating. For example, there are games in mobile land with an incredible number of linked privacy policies and data sharing agreements. Realistically, there’s no way to genuinely read all of it [PDF, p.4], because it’s too complicated to understand.

Does the developer roll with a “blanket” agreement via one privacy policy to combat this, because thousands of words across multiple policies is too much? If so, how do they cope at a granular level where smaller decisions exist for each individual advertiser?

Removing an advertiser from a specific network might warrant a notification from an app, to let the user know things have changed. Even more so if replaced by another advertiser, entirely unannounced. Does the developer pop notifications every single time an ad network changes, or hope that their blanket policy covers the alteration?

Considering the imminent fine, many organisations may be racing to their policy teams to carve out an answer. A loss of approximately 10% of estimated global revenue isn’t the best of news for Grindr. It seems likely the fine will stick.

Batten down the data privacy hatches

Data privacy, and privacy policies, are an “uncool” story for many. Everyone wants to see the latest hacks, or terrifying takeovers. Yet much of the bad old days of adware/spyware from 2005 to 2008 depended on bad policies and leaky data sharing. While companies would occasionally be brought before the FTC, this was rare.

GDPR is a lot more omnipresent than the FTC in terms of showing up at your door and handing you a fine. With data being so crucial to regulatory requirements and basic security hygiene, GDPR couldn’t be clearer: it’s here, and it isn’t going away.


Why Data Privacy Day matters: A Lock and Code special with Mozilla, DuckDuckGo, and EFF

You can read our full-length blog here about the importance of Data Privacy Day and data privacy in general.

Today is a special day, not just because January 28 marks Data Privacy Day in the United States and in several countries across the world, but because it also marks the return of our hit podcast Lock and Code, which closed out last year with an episode devoted to educators and the struggles of distance learning.

For Data Privacy Day this year, we knew we had to do something big.

After all, data privacy is far from a new topic for Malwarebytes Labs, which ramped up its related coverage more than two years ago, giving readers in-depth analyses of the current laws that shape their data privacy rights, the proposed legislation that could grant them new rights, the corporate heel-turns on privacy, the big-name mishaps, and the positive developments in the space, whether enacted by companies or authored by Congress members.

Along the way, Malwarebytes also released products that can help bolster online privacy, and we at Labs wrote about some of the many best practices and tools that people can use to maintain their privacy online.

We’ve been in this space. We know its actors and advocates. So, for Lock and Code, we thought we’d give them the opportunity to talk.

Today, in the return of our Lock and Code podcast, we gathered a panel of data privacy experts that includes Mozilla Chief Security Officer Marshall Erwin, DuckDuckGo Vice President of Communications Kamyl Bazbaz, and Electronic Frontier Foundation Director of Strategy Danny O’Brien.

Together, our guests talk about the state of online privacy today, why online privacy information can be so hard to find, and how users can protect themselves. Tune in to hear all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.


Why Data Privacy Day matters

Our Lock and Code special episode on Data Privacy Day, featuring guests from Mozilla, DuckDuckGo, and Electronic Frontier Foundation can be listened to here.

Today, January 28, is Data Privacy Day, the annual, multinational event in which governments, companies, and schools can inform the public about how to protect their privacy online.

While we at Malwarebytes Labs appreciate this calendar reminder to address data privacy head-on, the truth is that data privacy is not a 24-hour talking point—it is a discussion that has evolved for years, shaped by public opinion, corporate mishap, Congressional inquiry, and an increasingly hungry online advertising regime that hoovers up the data of unsuspecting Internet users every day. And that’s not even mentioning the influence of threat actors.

The good news is that there are many ways that users can reclaim their privacy online, depending on what they hope to defend. For users who want to prevent their personally identifiable information from ending up in the hands of thieves, there are best practices in avoiding malicious links and emails. For users who want to hide their activity from their Internet Service Provider, VPNs can encrypt and obscure their traffic. For users who want to prevent online ads from following them across the Internet, a variety of browser plug-ins provide strong guardrails against this activity, and several privacy-forward web browsers include similar features by default. And for those who want to keep their private searches private, there are services online that do not use search data to serve up ads. Instead, they simply give users what they want: answers.

Today, as Malwarebytes commemorates Data Privacy Day, so, too, do many others. First conceived in 2007 by the Council of Europe (as National Data Protection Day), the United States later adopted this annual public awareness campaign in 2009. It is now observed in Canada, Israel, and 47 other countries.

Importantly, Data Privacy Day serves as a reminder that data privacy should be a right, exercisable by all. It is not reserved for people who have something to hide. It is not a sole function for covering up wrong-doing.

It is, instead, for everyone.

Why does data privacy matter?

Privacy is core to a safer Internet. It protects who you are and what you look at, and it empowers you to go online with confidence. By protecting your data privacy, the sites you visit, the videos you watch, even the devices you favor, will be nobody’s business but your own.

Unfortunately, data privacy today is not the default.

Instead, everyday online activities lead to countless non-private moments for users, often by design. In these “accidentally unprivate” moments, someone, somewhere, is making a dollar off your compromised privacy.

When you sign up to use a major social media platform or mobile app, the companies behind them require you to sign an end-user license agreement that gives them near-total control over how your data is collected, stored, and shared.

Just this week, the editorial board for The New York Times zeroed in on this power imbalance between companies and their users, in which companies “may feel emboldened to insert terms that advantage them at their customers’ expense.”

“That includes provisions that most consumers wouldn’t knowingly agree to: an inability to delete one’s own account, granting companies the right to claim credit for or alter their creative work, letting companies retain content even after a user deletes it, letting them gain access to a user’s full browsing history and giving them blanket indemnity.”

Separate from potentially over-bearing user agreements, whenever you browse the Internet to read the news, shop online, watch videos, or post pictures, a cadre of data brokers slowly amass information to build profiles about your search history, age, location, interests, political affiliations, religious beliefs, sexual orientation, and more. In fact, some data brokers scour the web for public records, collating information about divorce and traffic records and tying it to your profile. The data brokers then serve as a middleman for advertisers, selling the opportunity to place an ad in front of a specific type of user.

Further, depending on where you live, your online activity may become of interest to your government, which could request more information about your Internet traffic from your Internet Service Provider. Or perhaps you’re attending a university from which you would like to shield your Internet traffic, as you may be questioning your sexuality or personal beliefs. Who we are online has increasingly blurred with who we are offline, and you deserve as much privacy in one realm as in the other.

In every situation described above, users are better equipped when they know who is collecting their data and where that data is going. Without that knowledge, users risk entering into skewed agreements with the titans of the web, who have more resources and more time to enforce their rules, whether or not those rules are fair.

Are you fighting alone?

You are not alone in fighting to preserve your data privacy. In fact, there are four major bulwarks aiding you today.

First, many tools can help protect your online privacy:

  • Certain browser plug-ins can prevent online ad tracking across websites, and they can warn you about malicious websites looking to steal your sensitive information
  • VPNs can prevent ISPs from getting detailed information about your Internet traffic
  • Private search engines can keep your searches private and your search data away from any advertising schemes
  • Privacy-forward web browsers can default to the most private setting, preventing advertisers from following you around the web and profiling your activity

Second, several lawmakers across the United States have heeded the data privacy call. Since mid-2018, Senators and Representatives have introduced at least 10 data privacy bills that aim to provide meaningful data privacy protections for Americans. Even more state lawmakers have forwarded statewide data privacy bills in the same time period, including proposals in Washington and Nevada, and in Maine, which successfully turned its bill into law in 2019.

Across the world, the legislative appetite for data privacy rights has outpaced the United States. Since May 2018, more than 450 million Europeans have been protected by the General Data Protection Regulation (GDPR), which demands strict controls over how their data is used and stored, and violations are punishable by stringent fines. That law’s impact cannot be overstated. Following its passage, many countries began to follow suit, extending new rights of data protection, access, portability, and transparency to their residents.

Third, a variety of organizations routinely defend user rights by engaging directly with Congress members, advocating for better laws, and building grassroots coalitions.  Electronic Frontier Foundation, American Civil Liberties Union, Fight for the Future, Common Sense Media, Privacy International, Access Now, and Human Rights Watch are just a few to remember.

Fourth, a handful of companies increasingly recognize the value of user privacy. Apple, Mozilla, Brave, DuckDuckGo, and Signal, among others, have become privacy darlings for some users, implementing privacy features that have angered other companies, and sometimes pushing one another to do better. Companies that have taken missteps on user privacy, on the other hand, have drawn the ire of Congress and suffered dips in user numbers.

Through many of these developments, Malwarebytes has been there—providing thoughtful analysis on the Malwarebytes Labs blog and releasing products that can directly benefit user privacy. We know the companies who care, we talk to the advocates who fight, and we embrace a pro-user stance to guide us.

Which is why we’re proud to present today a special episode of our podcast, Lock and Code, which you can listen to here.

The future of data privacy

Data privacy has only increased in importance for the public with every passing year. That means that tomorrow, just like today and just like the many yesterdays, Malwarebytes will be there to defend and advocate for data privacy.

We will cover the developments that could help—or could be detrimental—to data privacy. We will release tools that can provide data privacy. We will talk to the experts in this field and we will routinely take pro-user stances because it is the right thing to do.

We look forward to helping you in this fight.  
