Archive for author: makoadmin

$12m Grindr fine shows GDPR’s got teeth

As thoughts turn to data privacy this week in a big way, GDPR is a reminder that privacy isn’t an afterthought. Grindr, the popular social network and dating platform, will likely suffer a $12 million USD fine over privacy-related complaints. What happened here, and what are the implications for future cases?

What is GDPR?

The General Data Protection Regulation is a robust set of data protection rules created by the European Union (EU), replacing much older rules from the 1990s. It was adopted in 2016 and enforcement began in 2018. It’s not a static thing: guidance around it is regularly updated. There are plenty of rules and requirements covering things such as data breach notifications and the handling of personal data. Crucially, should you get your data protection wrong somewhere along the way, big fines may follow.

Although mostly spoken of in terms of the EU, its impact is global. Your data may be sitting under the watchful eye of GDPR right now without you knowing it, which…would be somewhat ironic. Anyway.

The complaint

On 24 January, Norway’s Data Protection Authority (NDPA) gave Grindr advance notification [PDF] of its intention to levy a fine, claiming that Grindr shared user data with third parties “without legal basis”. From the document:

Pursuant to Article 58(2)(i) GDPR, we impose an administrative fine against Grindr LLC of 100 000 000 – one hundred million – NOK for

– having disclosed personal data to third party advertisers without a legal basis, which constitutes a violation of Article 6(1) GDPR and

– having disclosed special category personal data to third party advertisers without a valid exemption from the prohibition in Article 9(1) GDPR

That doesn’t sound good. What does it mean in practice?

Noticing the notification

The Norwegian Consumer Council, in collaboration with the European Center for Digital Rights, put forward three complaints on behalf of a complainant. The complaints related to third-party advertising partners. The privacy policy stated that Grindr shared a variety of data with third-party advertising companies, such as:

[…] your hashed Device ID, your device’s advertising identifier, a portion of your Profile Information, Location Information, and some of your demographic information with our advertising partners

Personal data shared included the below:

Hardware and Software Information; Profile Information (excluding HIV Status and Last Tested Date and Tribe); Location and Distance Information; Cookies; Log Files and Other Tracking Technologies.

Additional Personal Data we receive about you, including: Third-Party Tracking Technologies.

Where this all goes wrong for Grindr is that the NDPA objects to how consent was obtained for the various advertising partners. Users were “forced to accept the privacy policy in its entirety to use the app”. They weren’t asked specifically if they wanted to share data with third parties. Your mileage may vary on whether this warrants the fine currently on the table, but it is a valid question.

Untangling the multitude of privacy policies

Privacy policies can cause headaches for developers and users alike, in lots of different areas besides dating. For example, there are games in mobile land with an incredible number of linked privacy policies and data sharing agreements. Realistically, there’s no way to genuinely read all of it [PDF, p.4], because it’s too long and too complicated to understand.

Does the developer roll with a “blanket” agreement via one privacy policy to combat this, because thousands of words across multiple policies is too much? If so, how do they cope at a granular level where smaller decisions exist for each individual advertiser?

Removing an advertiser from a specific network might warrant a notification from an app, to let the user know things have changed. Even more so if replaced by another advertiser, entirely unannounced. Does the developer pop notifications every single time an ad network changes, or hope that their blanket policy covers the alteration?

Considering the imminent fine, many organisations may be racing to their policy teams to carve out an answer. A loss of approximately 10% of estimated global revenue isn’t the best of news for Grindr. It seems likely the fine will stick.

Batten down the data privacy hatches

Data privacy, and privacy policies, are an “uncool” story for many. Everyone wants to read about the latest hacks, or terrifying takeovers. Yet much of the bad old days of adware/spyware from 2005–2008 depended on bad policies and leaky data sharing. While companies would occasionally be brought before the FTC, this was rare.

GDPR has a far more omnipresent role than the FTC in terms of showing up at your door and handing you a fine. With data being so crucial to regulatory requirements and basic security hygiene, GDPR couldn’t be clearer: it’s here, and it isn’t going away.

The post $12m Grindr fine shows GDPR’s got teeth appeared first on Malwarebytes Labs.

Google FLoC puts ad trackers on a cookie-free diet

Cookie tracking is dying and Google needs a replacement. It’s betting on FLoC, an ad tracking technology that lets it understand people’s behaviour while respecting their privacy.

Google has announced that its tests show promising signs that FLoC is working. Is this a milestone on the road to more privacy, or just better concealed tracking technology? Let’s have a look.

What are cookies?

Cookies are small pieces of information that websites store in your browser. If they contain a unique ID, they can be used to track you. That tracking can be used to provide information about your browsing behavior to the websites that you visit. On the one hand, cookies are useful for making your Internet experience more efficient. It is how you automatically get logged in on sites you’ve already visited, even if you closed the browser tab, for example. But on the other hand, cookies are a critical part of the advertising ecosystem that knows which ads are most likely to draw your attention.
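As a minimal sketch of how that tracking works (using only Python’s standard library, with a hypothetical `uid` cookie name chosen for illustration), here is a site issuing a cookie that carries a unique ID, and why that ID makes return visits linkable:

```python
from http.cookies import SimpleCookie
import uuid

# Server side: issue a cookie carrying a unique, hypothetical "uid" value
issued = SimpleCookie()
issued["uid"] = uuid.uuid4().hex                 # unique per visitor
issued["uid"]["max-age"] = 60 * 60 * 24 * 365    # persists for a year

# The header a real server would send with its response:
print(issued.output(header="Set-Cookie:"))

# On a later request the browser echoes the cookie back, so the server
# can recognise the same visitor and link this visit to previous ones
returned = SimpleCookie(issued.output(header="").strip())
assert returned["uid"].value == issued["uid"].value
```

The same mechanism that keeps you logged in is what lets an ad network with cookies on many sites stitch your visits together.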

Why replace cookies at all?

Several browsers, including Google Chrome, have announced privacy changes that aim to share less data with ad companies and other third parties. And cookies are essential to the way third-party data is gathered and used by the websites you visit. It’s also worth knowing that Chrome trails the competition, mainly Firefox and Safari, in this regard. And not only that, but privacy-focused browsers are becoming more popular, and more of them are entering the browser landscape.

What is FLoC?

The Federated Learning of Cohorts (FLoC) is a privacy-focused solution intent on delivering relevant ads “by clustering large groups of people with similar interests”. Accounts are anonymized, grouped into interests, and most importantly, user information is processed on-device rather than broadcast across the web.

FLoC runs in the browser and uses machine learning algorithms to analyze a user’s browsing history. According to Google, it might base its grouping on “the URLs of the visited sites, on the content of those pages, or other factors.”

It then bundles the user with thousands of others into a group, called a Cohort. The data gathered locally from the browser is never shared. Instead, websites can ask the browser what Cohort it belongs to. In this way, the data about the much wider group of thousands of people is shared, instead of the individual user, and used to target ads.
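The actual FLoC proposal used a SimHash of browsing history to assign cohorts; the toy sketch below (with made-up domain names) only illustrates the shape of the idea: the browser reduces a private history to a small group number, and that number is all a website ever sees.

```python
import hashlib

def toy_cohort_id(history, num_cohorts=1024):
    """Toy stand-in for FLoC's cohort assignment. Real FLoC used SimHash;
    here we just hash the set of visited domains into one of num_cohorts
    buckets. The point: a site learns a small, shared group number,
    never the browsing history itself."""
    canonical = "|".join(sorted(set(history)))
    digest = hashlib.sha256(canonical.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_cohorts

# Hypothetical browsing history, processed entirely on-device
history = ["news.example", "cooking.example", "travel.example"]
cohort = toy_cohort_id(history)
assert 0 <= cohort < 1024   # the only value a website would receive
```

Because thousands of users map to each bucket, the cohort ID on its own identifies a crowd rather than a person.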

How do users benefit?

Does that mean that sites are going to run advertisements based on what your Cohort is interested in, and not targeted at you, the individual? Ideally, yes, and that would be progress. But cookies aren’t the only way to track somebody. It may be possible to convert collective data into personalized data by using fingerprinting techniques. Browser fingerprints include details such as browser name, operating system, timezone, and much more. So, will these details be blocked as well?
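To see why fingerprinting worries privacy advocates, consider this sketch (the attribute names are illustrative, not any real browser API): individually mundane details combine into a stable, near-unique identifier, no cookie required.

```python
import hashlib

def browser_fingerprint(attrs):
    # Serialize the attributes in a fixed order and hash the result.
    # No single value identifies you, but the combination often does.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

base = {"browser": "Chrome 88", "os": "Windows 10",
        "timezone": "UTC+1", "screen": "1920x1080"}
tweaked = dict(base, timezone="UTC+2")

# The same configuration always produces the same fingerprint,
# while changing a single detail yields a completely different one
assert browser_fingerprint(base) == browser_fingerprint(dict(base))
assert browser_fingerprint(base) != browser_fingerprint(tweaked)
```

A tracker that can compute such a value on every page it appears on can follow a user across sites regardless of cookie settings.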

Once advertisers have figured out how FLoC’s machine learning algorithms operate, they will become smarter at showing you the advertisements that are the most effective based on your interests. Informed readers will remember how popular SEO poisoning was before Google improved its search algorithms.

FLoC will make it harder for advertisers to find out any personal information about you. But that is something you can accomplish right now, by using other tools like a more privacy oriented browser or an ad tracking blocker, which are still more trustworthy companions in our opinion.

What are the downsides?

Of course, there is always a downside. The FLoC solution should be designed so that nobody can access your personal data before it is anonymized and grouped. That includes the users themselves, which denies them any control over the data stored locally. As annoying as some of us may find them, cookies are easy to control.

You are grouped with people of similar interests, but machine learning is a “black box”, so it’s likely there will be no way of you knowing what the criteria were. Does one wrong click get you in a group with interests that you find repulsive? Bad luck, and good luck figuring out how to get out of that group.

Advertisers, and the sites that earn revenue from ads, may feel that Chrome is taking some of their power away in order to take control of those visitors itself. As it is, this may be Google’s compromise between owning a browser and living off advertising. A compromise other tech giants didn’t have to make, since they live predominantly on one side of the privacy fence or the other.

What’s the verdict?

FLoC will open for testing in March. For now, let’s wait and see how this pans out. Advertisers and users will have something to say as the technology is worked out, fine-tuned, and implemented. Trying to please both sides is sure to end in compromise. There is no way yet to try out the final version, but at least now you have some idea about what’s on the horizon, and why.

Guard your privacy, everyone!

The post Google FLoC puts ad trackers on a cookie-free diet appeared first on Malwarebytes Labs.

Pow! Emotet’s down. Is it out?

In a coordinated action, multiple law enforcement agencies have seized control of the Emotet botnet. Agencies from eight countries worked together to deliver what they hope will be a decisive blow against one of the world’s most dangerous and sophisticated computer security threats.

The Emotet threat

In a statement announcing the action, Europol described Emotet as “one of the most significant botnets of the past decade” and the world’s “most dangerous” malware.

The malware has been a significant thorn in the side of victims, malware researchers, and law enforcement since it first emerged in 2014. Originally designed as a banking Trojan, the software became notorious for its frequent shapeshifting and its ability to evade detection. This led to it being used as a gateway for other kinds of malware. Emotet’s criminal operators succeeded in infiltrating millions of Windows machines, and then sold access to those machines to other malware operators.

Taking down Emotet’s infrastructure not only hobbles Emotet, it also disrupts an important pillar of the malware delivery ecosystem.

The takedown

Successful botnets are typically highly distributed and very resilient to takedown attempts. Effective law enforcement cooperation is therefore vital, so that all parts of the system are tackled at the same time, ensuring the botnet can’t reemerge from any remnants that go untouched.

In this case, that meant tackling hundreds of servers simultaneously. Describing the level of cooperation required, Malwarebytes’ Director of Threat Intelligence, Jerome Segura said:

Going after any botnet is always a challenging task, but the stakes were even higher with Emotet. Law Enforcement agencies had to neutralize Emotet’s three different botnets and their respective controllers.

Although it gives few details, the Europol press release hints that a novel and sophisticated approach was used in the action, stating that the Emotet botnet was compromised “from the inside”. According to the agency, “This is a unique and new approach to effectively disrupt the activities of the facilitators of cybercrime.”

Segura added:

Unlike the recent and short-lived attempt to take down TrickBot, authorities have made actual arrests in Ukraine and have also identified several other individuals that were customers of the Emotet botnet. This is a very impactful action that likely will result in the prolonged success of this global takedown.

It remains to be seen if this is the final chapter of the Emotet story, but even if it is, we aren’t quite at the end just yet.

This action removes the threat posed by Emotet, by preventing it from contacting the infrastructure it uses to update itself and deliver malware. However, the infections remain, albeit in an inert state. To complete the eradication of Emotet, those infections will need to be cleaned up too.

The knockout?

In a highly unusual step, it looks as if the clean up isn’t going to be left to chance. A few hours after the takedown was announced, ZDNet broke the news that law enforcement in the Netherlands are in the process of deploying an Emotet update that will remove any remaining infections on March 25th, 2021.

The post Pow! Emotet’s down. Is it out? appeared first on Malwarebytes Labs.

Why Data Privacy Day matters: A Lock and Code special with Mozilla, DuckDuckGo, and EFF

You can read our full-length blog here about the importance of Data Privacy Day and data privacy in general

Today is a special day, not just because January 28 marks Data Privacy Day in the United States and in several countries across the world, but because it also marks the return of our hit podcast Lock and Code, which closed out last year with an episode devoted to educators and the struggles of distance learning.

For Data Privacy Day this year, we knew we had to do something big.

After all, data privacy is far from a new topic for Malwarebytes Labs, which ramped up its related coverage more than two years ago, giving readers in-depth analyses of the current laws that shape their data privacy rights, the proposed legislation that could grant them new rights, the corporate heel-turns on privacy, the big-name mishaps, and the positive developments in the space, whether enacted by companies or authored by Congress members.

Along the way, Malwarebytes also released products that can help bolster online privacy, and we at Labs wrote about some of the many best practices and tools that people can use to maintain their privacy online.

We’ve been in this space. We know its actors and advocates. So, for Lock and Code, we thought we’d give them the opportunity to talk.

Today, in the return of our Lock and Code podcast, we gathered a panel of data privacy experts that includes Mozilla Chief Security Officer Marshall Erwin, DuckDuckGo Vice President of Communications Kamyl Bazbaz, and Electronic Frontier Foundation Director of Strategy Danny O’Brien.

Together, our guests talk about the state of online privacy today, why online privacy information can be so hard to find, and how users can protect themselves. Tune in to hear all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, Google Play Music, and Spotify, plus whatever preferred podcast platform you use.

The post Why Data Privacy Day matters: A Lock and Code special with Mozilla, DuckDuckGo, and EFF appeared first on Malwarebytes Labs.

Why Data Privacy Day matters

Our Lock and Code special episode on Data Privacy Day, featuring guests from Mozilla, DuckDuckGo, and Electronic Frontier Foundation can be listened to here.

Today, January 28, is Data Privacy Day, the annual, multinational event in which governments, companies, and schools can inform the public about how to protect their privacy online.

While we at Malwarebytes Labs appreciate this calendar reminder to address data privacy head-on, the truth is that data privacy is not a 24-hour talking point—it is a discussion that has evolved for years, shaped by public opinion, corporate mishaps, Congressional inquiry, and an increasingly hungry online advertising regime that hoovers up the data of unsuspecting Internet users every day. And that’s not even mentioning the influence of threat actors.

The good news is that there are many ways that users can reclaim their privacy online, depending on what they hope to defend. For users who want to prevent their personally identifiable information from ending up in the hands of thieves, there are best practices in avoiding malicious links and emails. For users who want to hide their activity from their Internet Service Provider, VPNs can encrypt and obscure their traffic. For users who want to prevent online ads from following them across the Internet, a variety of browser plug-ins provide strong guardrails against this activity, and several privacy-forward web browsers include similar features by default. And for those who want to keep their private searches private, there are services online that do not use search data to serve up ads. Instead, they simply give users what they want: answers.

Today, as Malwarebytes commemorates Data Privacy Day, so, too, do many others. First conceived in 2007 by the Council of Europe (as National Data Protection Day), the United States later adopted this annual public awareness campaign in 2009. It is now observed in Canada, Israel, and 47 other countries.

Importantly, Data Privacy Day serves as a reminder that data privacy should be a right, exercisable by all. It is not reserved for people who have something to hide, nor is it solely a means of covering up wrongdoing.

It is, instead, for everyone.

Why does data privacy matter?

Privacy is core to a safer Internet. It protects who you are and what you look at, and it empowers you to go online with confidence. By protecting your data privacy, the sites you visit, the videos you watch, even the devices you favor, will be nobody’s business but your own.

Unfortunately, data privacy today is not the default.

Instead, everyday online activities lead to countless non-private moments for users, often by design. In these “accidentally unprivate” moments, someone, somewhere, is making a dollar off your compromised privacy.

When you sign up to use a major social media platform or mobile app, the companies behind them require you to sign an end-user license agreement that gives them near-total control over how your data is collected, stored, and shared.

Just this week, the editorial board for The New York Times zeroed in on this power imbalance between companies and their users, in which companies “may feel emboldened to insert terms that advantage them at their customers’ expense.”

“That includes provisions that most consumers wouldn’t knowingly agree to: an inability to delete one’s own account, granting companies the right to claim credit for or alter their creative work, letting companies retain content even after a user deletes it, letting them gain access to a user’s full browsing history and giving them blanket indemnity.”

Separate from potentially overbearing user agreements, whenever you browse the Internet to read the news, shop online, watch videos, or post pictures, a cadre of data brokers slowly amasses information to build profiles about your search history, age, location, interests, political affiliations, religious beliefs, sexual orientation, and more. In fact, some data brokers scour the web for public records, collating information about divorce and traffic records and tying it to your profile. The data brokers then serve as a middleman for advertisers, selling the opportunity to place an ad in front of a specific type of user.

Further, depending on where you live, your online activity may become the interest of your government, which could request more information about your Internet traffic from your Internet Service Provider. Or perhaps you’re attending a university that you would like to shield from your Internet traffic, as you may be questioning your sexuality or personal beliefs. Who we are online has increasingly blurred with who we are offline, and you deserve as much privacy in one realm as in the other.

In every situation described above, users are better equipped when they know who is collecting their data and where that data is going. Without that knowledge, users risk entering into skewed agreements with the titans of the web, who have more resources and more time to enforce their rules, whether or not those rules are fair.

Are you fighting alone?

You are not alone in fighting to preserve your data privacy. In fact, there are four major bulwarks aiding you today.

First, many tools can help protect your online privacy:

  • Certain browser plug-ins can prevent online ad tracking across websites, and they can warn you about malicious websites looking to steal your sensitive information
  • VPNs can prevent ISPs from getting detailed information about your Internet traffic
  • Private search engines can keep your searches private and your search data away from any advertising schemes
  • Privacy-forward web browsers can default to the most private setting, preventing advertisers from following you around the web and profiling your activity

Second, several lawmakers across the United States have heeded the data privacy call. Since mid-2018, US Senators and Representatives have introduced at least 10 data privacy bills that aim to provide meaningful data privacy protections for Americans. Even more state lawmakers have put forward statewide data privacy bills in the same period, including proposals in Washington, Nevada, and Maine, which successfully turned its bill into law in 2019.

Across the world, the legislative appetite for data privacy rights has outpaced the United States. Since May 2018, more than 450 million Europeans have been protected by the General Data Protection Regulation (GDPR), which demands strict controls over how their data is used and stored, and violations are punishable by stringent fines. That law’s impact cannot be overstated. Following its passage, many countries began to follow suit, extending new rights of data protection, access, portability, and transparency to their residents.

Third, a variety of organizations routinely defend user rights by engaging directly with Congress members, advocating for better laws, and building grassroots coalitions. Electronic Frontier Foundation, American Civil Liberties Union, Fight for the Future, Common Sense Media, Privacy International, Access Now, and Human Rights Watch are just a few to remember.

Fourth, a handful of companies increasingly recognize the value of user privacy. Apple, Mozilla, Brave, DuckDuckGo, and Signal, among others, have become privacy darlings for some users, implementing privacy features that have angered other companies, and sometimes pushing one another to do better. Companies that have taken missteps on user privacy, on the other hand, have drawn the ire of Congress and suffered dips in user numbers.

Through many of these developments, Malwarebytes has been there—providing thoughtful analysis on the Malwarebytes Labs blog and releasing products that can directly benefit user privacy. We know the companies who care, we talk to the advocates who fight, and we embrace a pro-user stance to guide us.

Which is why we’re proud to present today a special episode of our podcast, Lock and Code, which you can listen to here.

The future of data privacy

Data privacy has only increased in importance for the public with every passing year. That means that tomorrow, just like today and just like the many yesterdays, Malwarebytes will be there to defend and advocate for data privacy.

We will cover the developments that could help—or could be detrimental—to data privacy. We will release tools that can provide data privacy. We will talk to the experts in this field and we will routinely take pro-user stances because it is the right thing to do.

We look forward to helping you in this fight.  

The post Why Data Privacy Day matters appeared first on Malwarebytes Labs.

A week in security (January 18 – January 24)

Last week on Malwarebytes Labs, we looked at changes to WhatsApp’s privacy policy, we provided information about Malwarebytes being targeted by the same threat actor that was implicated in the SolarWinds breach, we told the story of ZeroLogon, looked at the pros and cons of Zoom watermarking, studied the vulnerabilities in dnsmasq called DNSpooq, asked if TikTok’s new settings are enough to keep kids safe, and looked at how Google Chrome wants to make your passwords stronger.

Other cybersecurity news

  • The European Medicines Agency (EMA) revealed that some of the unlawfully accessed documents relating to COVID-19 medicines and vaccines have been leaked on the internet. (Source: EMA website)
  • Some laptops provided by the UK’s Department for Education (DfE) came with malicious files identified as the Gamarue worm. (Source: InfoSecurity Magazine)
  • Cisco emitted patches for four sets of critical-severity security holes in its products, along with other fixes. (Source: The Register)
  • The Brave team has been working with Protocol Labs on adding InterPlanetary File System (IPFS) support to its desktop browser. (Source: Brave website)
  • Sharing an eBook with your Kindle could have let hackers hijack your account. (Source: The Hacker News)
  • Attackers behind a phishing campaign exposed the credentials they had stolen to the public Internet, across dozens of drop-zone servers. (Source: CheckPoint)
  • QNAP urged customers to secure their NAS devices against a malware campaign that infects and exploits them to mine bitcoins. (Source: BleepingComputer)
  • Singapore widened its security labelling to include all consumer IoT devices. (Source: ZDNet)
  • Thousands of Business Email Compromise (BEC) lures used Google Forms in a recon campaign. (Source: SCMagazine)

Stay safe, everyone!

The post A week in security (January 18 – January 24) appeared first on Malwarebytes Labs.

Are TikTok’s new settings enough to keep kids safe?

TikTok, the now widely popular social media platform that allows users to create, share, and discover amateur short clips—usually something akin to music videos—has been enjoying explosive growth since it appeared in 2017. Since then, it hasn’t stopped growing—more so during the current pandemic. Although the latest statistics continue to show that in the US the single biggest age group (32.5 percent, at the time of writing) is users between 10 and 19 years of age, older users (aged 25 to 34 years) in countries like China, Indonesia, Malaysia, Saudi Arabia, and the UAE are quickly overtaking their younger counterparts.

Suffice it to say, we can no longer categorize TikTok as a “kids’ app”.

This, of course, further reinforces the many concerns parents already have about the app. We’re not even talking about the possibilities of young children, tweens, and teens seeing dangerous challenges and trends, or pre-teens lip-synching to songs that make grown-up eyes go wide, or watching some generally inappropriate content. We’re talking about potential predators befriending your child, cyberbullies who are capable of following targeted kids from one social media platform to another, and a stream of unrestricted content from users they don’t even follow, or aren’t even friends with.

Limitations and guardrails

Eric Han, TikTok’s Head of Safety in the US, announced last week that all registered accounts of users aged 13 to 15 years have been set to private. This means that people who want to follow those accounts need to be pre-approved before they can see a user’s videos. It’s a way for TikTok to give tweens an opportunity to make informed choices about who they welcome into their account.

Furthermore, TikTok will be rolling out more changes and adjustments, such as:

  • Limitations to video commenting. Users within this age group will be able to decide whether they want their friends, or no one, to comment. Currently, anyone can comment, by default.
  • Limitations to availability of Duet and Stitch. In September last year, TikTok introduced two editing tools: Duet and Stitch. These were made available only to users ages 16 years and above. TikTok also limited the use of video clips to Friends only, among 16 to 17-year-old users.
  • Limitations to video downloads. Only users aged 16 and over can download content within TikTok’s app. The feature is turned off by default for 16- to 17-year-old users, but they have the option to enable it.

  • Limitations to suggested accounts. Users who are 16 years and under are not allowed to suggest their TikTok account to others.
  • Limitations to direct messaging and live streaming. Users who are 16 years and under are not allowed to live stream, and can’t be messaged privately by anyone.
  • Limitations in virtual gifting. Only users who are 18 years and over can purchase, send, and receive virtual gifts.

Growing pains

This isn’t the first time TikTok has tried to prove that they’re serious about making and implementing such changes for the benefit of their userbase. Here is a rundown of the social media platform’s security and privacy growth and challenges from a couple of years back.

  • After making a $5.7 million USD settlement with the Federal Trade Commission (FTC) in 2019, for violating the Children’s Online Privacy Protection Act by failing to seek parental consent for users under the age of 13, TikTok set out to delete the profiles of users within this age bracket.
  • TikTok introduced account linking for parents and/or guardians in April 2019. Called Family Pairing, the feature lets responsible grown-ups connect their TikTok accounts with their teen’s, enabling them to remotely modify the settings of their child’s account.
  • In December 2019, TikTok teamed up with Family Online Safety Institute (FOSI) to host internet safety seminars. Its aim was “to help parents better understand the tools and controls they have to navigate the digital environment and the resources FOSI offers through its Good Digital Parenting initiative.”
  • In January 2020, TikTok updated their community guidelines, to clarify how it moderates harmful or unsafe content. It said it wanted to “maintain a supportive and welcoming environment”, so that “users feel comfortable expressing themselves openly”.
  • In February 2020, the company partnered with popular content creators in the US to create videos reminding users to, essentially, stop scrolling their phone and take a break—in true TikTok fashion. This is part of their “You’re in Control” initiative, a user-centric series of videos that tries to inform users of TikTok’s “safety features and best practices”.
  • At the same time, TikTok was also trying to curb online misinformation (which is rampant on social media platforms) by working with third-party fact-checking and media literacy organizations, such as the Poynter Institute.

Are TikTok’s changes enough?

Tools provided by social media platforms like TikTok can be helpful and useful. However, these companies can only do so much for their users. Parents and/or guardians should never expect their child’s favorite social network to do all the heavy lifting when it comes to keeping young users safe. More than anything, grown-ups should be more involved in their children’s digital lives. Not just as an observer, but by being an active participant in one form or another.

There is no substitute for educating yourself about social media. Look into the pros and cons of using it, and then educate your kids about it.

Tell them it’s okay to say “no”, to not follow the herd, that although something may look fun and cool, to stop and think about it first before reacting (or doing).

Everything starts in the home. Choosing security and privacy is no different. You, not those default settings, are their first line of defense. So let’s take up that mantle.

The post Are TikTok’s new settings enough to keep kids safe? appeared first on Malwarebytes Labs.

Chrome wants to make your passwords stronger

A common sentiment, shared by many people down the years, is that storing passwords in browsers is a bad idea. Malware, for example, would specifically target password storage in browsers and plunder everything in sight.

Password managers weren’t exactly flying off the shelves back in 2007; your only real options were home-grown. People ended up saving logins in all sorts of odd places: text files, email accounts…you name it. Naturally, security-minded folks gravitated towards saving passwords in browsers, because what else were they going to do?

The browser password wars

Even just eight years ago, it was still a hotly contested debate. The problem then was that passwords were stored in plain text. They aren’t now, but if the device you’re using is compromised, that doesn’t matter much: malware can decrypt your passwords, or simply wait for you to do it. So, however recently you look, many of the same threats to browser passwords still exist, and new ones keep emerging, like rogue advertisers trying to grab autofill data.

Let’s be clear: things are better now for passwords in browsers than they used to be. Even something as basic as having to enter your Windows password to view or copy saved passwords is reassuring. Making use of encryption, instead of leaving data lying around in plaintext, is excellent. Browsers going a step beyond simple storage and checking whether saved passwords have turned up in breaches is great. Real-time phishing protection is the icing on an ever-growing cake.
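The breach checks mentioned above generally rely on some form of anonymised hash lookup so your actual password never leaves the device. Chrome’s real protocol is more involved, but the widely used k-anonymity scheme popularised by Have I Been Pwned’s Pwned Passwords API gives the flavour: hash the password, send only the first five hex characters of the hash, and compare the returned suffixes locally. A minimal sketch (the endpoint URL in the comment is the public HIBP one; no network call is made here):

```python
import hashlib

def hibp_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest into the 5-character prefix that is
    sent to the server and the suffix that stays on the device. The server
    never sees the full hash, let alone the password."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_query("password123")
# A real client would now GET https://api.pwnedpasswords.com/range/<prefix>
# and check locally whether <suffix> appears among the returned suffixes.
print(prefix, suffix)
```

Because hundreds of real hashes share any given 5-character prefix, the server learns almost nothing about which password was checked.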

With that in mind, Chrome continues to make inroads in the name of beefing up browser password safety.

Weak password? Chrome 88 can help

Beginning with Chrome version 88, you can now check for weak passwords (open Settings and search for “Passwords”) and alter them on the fly, with just a few clicks. The “Change password” button doesn’t alter anything inside the browser, which may disappoint. It simply takes you to the site where you use that feeble password. At this point, you’ll have to manually alter the details. The browser should then detect you’ve altered the password and update its password database, as it normally would.

If you really want to know what the stored password is but can’t remember it, you’ll need your Windows login, as mentioned earlier.

There’s not a huge amount to add about this new feature, as it is indeed incredibly simple to use. A list of all your potentially weak passwords is displayed, and off you go to fix them all. This is to its benefit. It’s easy to get bogged down in password minutiae and end up not bothering.

You don’t need bells and whistles while looking for weak passwords. You just want a list of sites, and to be told where there’s a problem. In this regard, the new functionality more than delivers.

Browser or password manager?

Having said all of that…you may still wish to ignore all the above and stick with a dedicated password manager. No matter what password features are added to browsers, some folks will never want anything to do with them. There is a wealth of choices available, totally offline or with online functionality: the choice really is yours. I’d be surprised if there isn’t something for everyone in the options available. But if you really don’t want a password manager, then browsers are a better solution than nothing at all.

Do you prefer to keep all your tools in the browser basket, or cast passwords away into dedicated password managers? Either way, we wish you many years of secure password management to come.

The post Chrome wants to make your passwords stronger appeared first on Malwarebytes Labs.

DNSpooq bugs haunt dnsmasq

The research team at JSOF found seven vulnerabilities in dnsmasq and have dubbed them DNSpooq, collectively. Now, some of you may shrug and move on, probably because you haven’t heard of dnsmasq before. Well, before you go, you should know that dnsmasq is used in a wide variety of phones, routers, and other network devices, besides some Linux distributions like Red Hat. And that’s just a selection of what may be affected.

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). The vulnerabilities disclosed by the JSOF team have been listed as CVE-2020-25687, CVE-2020-25683, CVE-2020-25682, CVE-2020-25684, CVE-2020-25685, CVE-2020-25686 and CVE-2020-25681.

What is DNSpooq?

DNSpooq is the name the researchers gave to a collection of seven vulnerabilities they found in dnsmasq, an open-source DNS forwarding software in common use. Dnsmasq is very popular, and so far JSOF has identified approximately 40 vendors that use it in their products, as well as some major Linux distributions. DNSpooq includes some DNS cache poisoning vulnerabilities, and buffer overflow vulnerabilities that could potentially be used to achieve remote code execution (RCE).

Domain Name System (DNS) is an internet protocol that translates human-readable domain names, such as malwarebytes.com, into numeric IP addresses, allowing a computer to identify a server without the user having to remember and input its actual IP address. Basically, you could say DNS is the phonebook of the internet. DNS name resolution is a complex process that can be interfered with at many levels.
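That phonebook lookup is something any program can ask the operating system to do. A minimal illustration using Python’s standard library (on many home networks, the system resolver this call consults sits behind a dnsmasq cache on the router):

```python
import socket

def resolve(hostname: str) -> str:
    """Ask the system resolver to translate a name into an IPv4 address,
    exactly the step that DNS (and any cache in between) performs."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # typically 127.0.0.1
```

Everything between this one-line call and the authoritative nameserver, including any forwarding cache like dnsmasq, is a place where resolution can be tampered with.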

Dnsmasq (short for DNS masquerade) is free software that can be used for DNS forwarding and caching, and DHCP services. It is intended for smaller networks and can run under Linux, macOS, and Android. In essence, dnsmasq accepts DNS queries and either answers them from a local cache or forwards them to an actual DNS server.

What is DNS cache poisoning?

If you have ever moved your website to a different server, you will have noticed how long it can take before everyone actually lands on the new IP address. This happens because DNS records are normally cached in a number of different places, for performance. Records can be cached in your browser, by your operating system, on your network, by your ISP, and so on. When a cache entry expires it will update from the next upstream cache. Because of this, it can take a while for new records to get updated in all the places they’re stored. This phenomenon is referred to as DNS propagation.

If false information is added to a compromised DNS cache, that information can spread downstream to other caches. This method of providing a false IP address is called DNS cache poisoning. Cache poisoning can happen at every level: local, router, and even at the DNS server itself.
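The mechanics above can be sketched as a toy forwarding cache. This is not dnsmasq’s actual code, just a minimal model of the behaviour described: answers are served from the cache until their TTL expires, after which the cache trusts whatever the upstream resolver says, poisoned or not (the hostname and addresses below are made-up documentation values):

```python
import time

class DnsCache:
    """Toy resolver cache: serves entries until the TTL expires, then
    refreshes from upstream, faithfully caching whatever comes back."""

    def __init__(self, upstream, ttl=300):
        self.upstream = upstream   # callable: name -> IP address string
        self.ttl = ttl             # seconds an answer stays cached
        self._entries = {}         # name -> (ip, expiry timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry and entry[1] > now:
            return entry[0]                      # still fresh: from cache
        ip = self.upstream(name)                 # expired/missing: ask upstream
        self._entries[name] = (ip, now + self.ttl)
        return ip

# If the upstream record is poisoned, the bad answer is cached and
# re-served downstream until its TTL runs out.
records = {"example.test": "192.0.2.1"}
cache = DnsCache(lambda name: records[name], ttl=300)
print(cache.resolve("example.test", now=0))    # 192.0.2.1 (legitimate)
records["example.test"] = "203.0.113.66"       # upstream gets poisoned
print(cache.resolve("example.test", now=100))  # still 192.0.2.1, cached
print(cache.resolve("example.test", now=400))  # TTL expired: 203.0.113.66
```

This is also why propagation is slow in the benign case and why a poisoned record can linger: the cache has no way to tell a legitimate update from a malicious one.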

What is a buffer overflow?

A buffer overflow is a type of software vulnerability that exists when a program writes past the boundary of an allocated area of memory into an adjacent memory region. Buffer overflows can be used to overwrite useful data, cause crashes, or plant arbitrary code in memory that the instruction pointer later executes. In that last case, it may offer an opportunity for RCE.

Who should worry?

JSOF has identified over 40 companies and respective products they believe are using dnsmasq. You can find a complete list on their website about DNSpooq, under Vendors. Some names worth mentioning: Asus, AT&T, Cisco, Dell, Google, Huawei, Linksys, Motorola, Netgear, Siemens, Ubiquiti, and Zyxel. Check out the list if you want to verify whether you are using one of the affected devices.

What can be done about DNSpooq?

For users of dnsmasq, the quickest fix is to update to version 2.83 or above.

In the long run it would be better for all of us if we started using a less vulnerable method than plain DNS, like DNSSEC, which protects against cache poisoning. Unfortunately, it is still not very widely deployed. Neither is HSTS, a web security policy mechanism that helps protect websites against man-in-the-middle attacks.

Stay safe, everyone!

Header image and research courtesy of JSOF

The post DNSpooq bugs haunt dnsmasq appeared first on Malwarebytes Labs.

Zoom watermarking: pros and cons

Metadata, which gives background information on pieces of data, is typically hidden. It becomes a problem when accidentally revealed. Often tied to photography mishaps, it can be timestamps, location data, or, in some cases, material for log analysis. Many tutorials exist to strip this information out, because it can reveal more than intended when it hits the public domain. Default settings are often to blame. For example, a mobile photography app or camera may embed GPS data by default.

Some people may find this useful; quite a few more may object to it as a creepy privacy invasion.

Well, that’s metadata. Now you have an idea of the kind of things that can lurk there without your knowledge. Let’s see what happens when we deliberately enable a data/tagging-related function.

Watermarking: what’s the deal?

An interesting story has recently emerged on The Intercept, of voluntary data (in the form of watermarks) wrapped into Zoom recordings, which could cause headaches in unexpected ways. Watermarks aren’t hidden—they’re right there by design, if people choose to use them. And the visual side of this data is supposed to be viewable during the call.

The Intercept talks about accidental identity reveals, via data embedded into calls, in relation to the ever-present videoconferencing tool. You’d be forgiven for thinking the identity reveal referenced in the article had something to do with the watermarks, but no.

The reveal happened because someone recorded a video call and dropped it online, with participants’ faces on display. The people involved appear to be at least reasonably well known. The secret identity game was up regardless of what was under the hood.

Cause and effect

What the rest of the article is about is theorising on the ways embedded metadata could cause issues for participants. Zoom allows for both video and audio watermarking, with video of course being visual and so easier to spot. The video watermark displays a portion of a user’s email address when someone is sharing their screen. The audio watermark embeds the details of anyone recording the call into the audio itself, and Zoom can tell you who shared it. You must ask Zoom to do this, and the clip has to be more than two minutes long.

Essentially, video watermarking is to help you know who is sharing and talking during the call. Audio watermarking is to allow you to figure out if someone is sharing without permission. The Intercept explores ways this could cause problems where confidentiality is a concern.

Some identity caveats

If Zoom content is shared online without permission, it may not matter much whether revealing metadata is included, unless the call is audio only. This is because people can be easy to identify visually. Is a public figure of some sort involved? The game is already lost. If they’re not normally a public-facing persona, people could still find them via reverse image search or other matching tools. And if they can’t, a well-known location or a name badge could give them away. There are so many variables at work, only the participants may know for sure.

Hunting the leaker: does it matter?

While the other concern of identifying the leaker is still important, your mileage may vary in terms of how useful it is, versus how much of an inadvertent threat it presents. It’s possible the leaker may not care much if they’re revealed. They may have used a fake identity, or even compromised a legitimate account in order to do the leaking.

It’s also possible that someone with a grudge could leak something then pretend they’d been compromised. If this happened, would you have a way of being able to determine the truth of the matter? Or would you simply take their word for it?

Weighing up the risk

All good questions, and a valuable reminder to consider which videoconferencing tools you want to make use of. For some organisations and individuals, there’s a valid use for the metadata dropped into the files. For others, it might be safer on balance to leave it out. It might even be worth using a virtual background instead of something which reveals personal information. It might be worth asking if you even need video at all, depending on the sensitivity of the call.

The choice, as always, is yours.

The post Zoom watermarking: pros and cons appeared first on Malwarebytes Labs.