IT NEWS

Google to pay $40m for “deceptive and unfair” location tracking practices

Google will pay $39.9 million to Washington State to settle a lawsuit over its location tracking practices that has been in play since last year. Google was accused of “misleading consumers” by State Attorney General Bob Ferguson. From the AG’s press release:

Attorney General Bob Ferguson today announced Google will pay $39.9 million to Washington state as a result of his office’s lawsuit over misleading location tracking practices. Google will also implement a slate of court-ordered reforms to increase transparency about its location tracking settings.

Ferguson’s lawsuit against Google asserted that the tech giant deceptively led consumers to believe that they have control over how Google collects and uses their location data. In reality, consumers could not effectively prevent Google from collecting, storing and profiting from their location data.

The lawsuit itself, announced back in January 2022, claimed Google used a “number of deceptive and unfair practices” to obtain user consent for tracking. Practices highlighted included “hard to find” location settings, misleading descriptions of location settings, and “repeated nudging” to enable location settings alongside incomplete disclosures of Google’s location data collection.

These practices were set alongside the large amount of profit Google generated from using consumer data to sell advertising. Google made close to $150 billion from advertising in 2020, and the case pointed out that location data is a key component of said advertising. As per the Attorney General:

(Google) has a financial incentive to dissuade users from withholding access to that data.

The location-based argument focuses on the discrepancy between what data Google claims to store with location settings turned off, and what it obtains in practice:

When users enable a setting called “Location History,” Google saves data on users’ location to, as it says in its account settings, “give you personalised maps, recommendations based on places you’ve visited, and more.”

Google told users that when Location History was disabled, the company did not continue to store the user’s location. For years, Google’s help page stated, “With Location History off, the places you go are no longer stored.” That statement was false. For example, the company collects location data under a separate setting — “Web & App Activity” — that is defaulted “on” for all Google Accounts.

The consent decree filed on Wednesday means Google will need to be more transparent with regard to tracking. The search engine giant will also need to provide more detailed information in cases where location technologies are involved.

AG Ferguson had this to say:

Google denied Washington consumers the ability to choose whether the company could track their sensitive location data, deceived them about their privacy options and profited from that conduct. Today’s resolution holds one of the most powerful corporations accountable for its unethical and unlawful tactics.

Google has been on the receiving end of legal action led by Ferguson for some time now. Just last month, he partnered with the US Department of Justice and a bipartisan group of attorneys general for an antitrust lawsuit aiming to break up Google’s monopolisation of display advertising. There have also been other antitrust lawsuits in this space, and in 2021 Google paid $423,659.76 in relation to violating the state’s campaign finance disclosure law.

We still don’t know how these proposed changes will take shape in terms of what consumers will see. “…with no federal law governing online privacy in the United States, state regulators are forced to make do with what they have” according to Android Central. With Ferguson showing no signs of letting up, Washington State is taking that philosophy to the max.


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

ChatGPT: Cybersecurity friend or foe?

If you haven’t heard about ChatGPT yet, perhaps you’ve just been thawed from cryogenic slumber or returned from six months off the grid. ChatGPT—the much-hyped, artificial intelligence (AI) chatbot that provides human-like responses from an enormous knowledge base—has been embraced practically everywhere, from private sector businesses to K–12 classrooms.

Upon its launch in November 2022, tech enthusiasts quickly jumped at the shiny new disruptor, and for good reason: ChatGPT has the potential to democratize AI, personalize and simplify digital research, and assist in both creative problem-solving and tackling “busywork.” But the security community and other technology leaders have started raising the alarm, worried about the program’s potential to write malware and spread mis- and disinformation.

Do you think your organization should embrace ChatGPT? Or do you believe implementing the platform will compromise your company’s cybersecurity posture? Read on to learn more about the pros, cons, conversations, and controversies surrounding ChatGPT, including a call to (halt) action from technology leaders.

Why ChatGPT security concerns are both overblown and valid

Artificial intelligence (AI) was once a science fiction cautionary tale—stories consistently warned against designing machines capable of surpassing human ingenuity. But over the last 20 years, AI has transitioned from the stuff of nightmares to mainstream adoption. From the first Roomba in 2002 to the first virtual assistant (Siri) in 2011, AI has slowly and steadily penetrated the consumer technology market, often with little comprehension from buyers that artificial intelligence is actually powering the functionality behind their favorite devices.

As of November 2022, people can no longer ignore the artificial elephant in the room. Enter: ChatGPT. Developed by OpenAI, ChatGPT (Chat Generative Pre-Trained Transformer) is an AI-powered chatbot that uses an enormous and sophisticated language model to generate human-like responses in text format. ChatGPT amassed an incredible 100 million users within two months of its explosive launch, and its website receives over 1 billion visitors per month.

Unlike other chatbots, which are typically only able to produce canned replies on a few subjects, ChatGPT can mimic the style of celebrity CEOs, craft business pitches, compose music and song lyrics, answer test questions, simulate an entire chat room, write computer code, and much more. For example, Malwarebytes asked ChatGPT to write the opening paragraph of a novel about an antiquated antivirus program that relies on signature-based detection to stop new and emerging threats.

Here’s what the program came back with:

“The antivirus program blinked to life, its archaic interface flickering on the outdated CRT monitor. The program’s once cutting-edge signature-based detection technology was now decades out of date, unable to keep pace with the rapidly evolving landscape of digital threats. But for the small team of programmers responsible for maintaining the program, it was still their best line of defense against the unrelenting tide of new and unknown viruses that threatened to breach their systems. As they watched the program scan through the code, line by line, they knew that they were fighting an uphill battle – but they had no other choice but to keep trying.”

Such articulate, detailed responses caused a near frenzy upon ChatGPT’s initial release. Many corporate customers and app developers have rushed to implement the platform, and its newer Dall-E program for generating images, into their business processes and products. However, the security community and those wary of artificial intelligence’s steady drumbeat forward have warned organizations to exercise caution over a myriad of potential risks.

Because of its meteoric rise into public consciousness and rapid adoption, the generative AI chatbot has been the subject of continuing, complex conversations about its impact on the cybersecurity industry, threat landscape, and humanity as a whole. Will ChatGPT be the sentient harbinger of death some have claimed? Or is it a unicorn that’s going to solve every business, academic, and creative problem? The answer, as usual, lies somewhere in the gray.

Security pros of ChatGPT

AI can be a powerful tool for cybersecurity and information technology professionals. It will change the way we defend against cyberattacks by improving the industry’s ability to detect and respond to threats in real time. And it will help businesses shore up their IT infrastructure to better withstand the constant stream of increasingly-sophisticated attacks. Most effective security solutions today, including Malwarebytes, already employ some form of machine learning. That’s why some in the security community argue that generative AI tools can be safely deployed to strengthen an organization’s cybersecurity posture as long as they’re implemented according to best practices.

Increases efficiency

ChatGPT can increase efficiency for cybersecurity staff on the front lines. For one, it can significantly reduce notification fatigue, a growing concern within the field. With companies grappling with limited resources and a widening talent gap, a tool like ChatGPT could simplify certain labor-intensive tasks and give defenders back valuable time to commit to higher-level strategic thinking. ChatGPT can be trained to identify and mitigate network security threats like DDoS attacks when used in conjunction with other technologies. It can also help automate security incident analysis and vulnerability detection, as well as more accurately filter spam.
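To make that concrete, here is a minimal sketch of the kind of glue code a team might write to have a ChatGPT model triage an alert through OpenAI’s chat completions REST API. The alert text, model name, and prompt are placeholders for illustration, not a recommended production setup:

    # Minimal sketch: ask a ChatGPT model to triage a security alert.
    # Assumes an OPENAI_API_KEY environment variable; the alert text is a placeholder.
    import os
    import requests

    alert = (
        "Multiple failed RDP logins for user 'svc-backup' from 203.0.113.7, "
        "followed by a successful login and creation of a scheduled task."
    )

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a SOC analyst. Be concise."},
                {"role": "user", "content": f"Triage this alert and suggest next steps:\n{alert}"},
            ],
        },
        timeout=30,
    )
    print(response.json()["choices"][0]["message"]["content"])

Output like this still needs a human in the loop, but it shows how routine triage notes could be drafted automatically.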

Assists engineers

Malware analysts and reverse engineers could also benefit from ChatGPT’s assistance on traditionally challenging tasks, such as writing proof-of-concept code, comparing language- or platform-specific conventions, and analyzing malware samples. The chatbot can also help engineers learn how to write in different programming languages, master difficult software programs, and understand vulnerabilities and exploit code.

Trains employees

ChatGPT’s security applications aren’t limited to Information Security (IS) personnel. The program can help close the security knowledge gap by assisting in employee training. Cybersecurity training is crucial for organizations interested in mitigating cyberattacks and fraud, yet IT departments are often far too busy to offer more than a single course per year. ChatGPT can step in to offer insights on identifying the latest scams, avoiding social engineering pitfalls, and setting stronger passwords in concise, conversational text that may be more effective than a lecture or slide presentation.

Aids law enforcement

Finally, ChatGPT has the potential to assist law enforcement with investigating and anticipating criminal activities. In a March 2023 report from Europol, subject matter experts found that ChatGPT and other large language models (LLMs) opened up “explorative communication” for law enforcement to quickly gather key information without having to manually search through and summarize data from search engines. LLMs can significantly speed up the learning process, enabling a much faster gateway into technological comprehension than was previously thought possible. This could help officers get a leg up on cybercriminals whose understanding of emerging technologies have typically outpaced their own.

Security concerns overblown

Not long after ChatGPT was first introduced, the inevitable hand wringing by technology decision-makers took hold. In a February survey of IT professionals by Blackberry, 51 percent predicted we are less than a year away from a successful cyberattack being credited to ChatGPT, and 71 percent believed nation states are likely already using the technology for malicious purposes.

The following month, thousands of tech leaders, including Steve Wozniak and Elon Musk, signed an open letter to all AI labs calling on them to pause the development of systems more powerful than the latest version of ChatGPT for at least six months. The letter cites the potential for profound risks to society and humanity that arise from the rapid development of advanced AI systems without shared safety protocols. More than 27,500 signatures have since been added to the letter.

However, even when ChatGPT is engaged in ominous activities, the outcomes at present are rather harmless. Since OpenAI allows developers to modify its official APIs, some have tested a few nefarious theories by creating ChaosGPT, an internet-connected “evil” version that runs actions users do not intend. One user commanded the AI to destroy humanity, and it planned a nuclear winter, all while maintaining its own Twitter account, which was ultimately suspended.

ChaosGPT tweet

So maybe ChatGPT isn’t going to take over the world just yet—what about some of the more realistic security concerns being voiced, like the ability to develop malware or phishing kits?

When it comes to writing malicious code, ChatGPT isn’t yet ready for prime time. In fact, the platform is a terrible programmer in general. It’s currently easier for an expert threat actor to create malware from scratch than to spend time correcting what ChatGPT has produced. The fear that ChatGPT would hand script kiddies the programming power to produce thousands of new malware strains is unfounded, as amateur cybercriminals lack the knowledge to pick up on minor errors in code, as well as the understanding of how code works.

One of our researchers recently embarked on an experiment to get ChatGPT to write ransomware, and despite the chatbot’s initial protests that it couldn’t “engage in activities that violate ethical or legal standards, including those related to cybercrime or ransomware,” with a little coaxing, ChatGPT eventually complied. The result: snippets of ransomware code that switched languages throughout, stopped short after a certain number of characters, dropped features at random, and were essentially incoherent and useless.

Since the primary focus of ChatGPT’s training was in language skills, security pros have been most anxious about its ability to generate believable phishing kits. While the chatbot can produce a clean phishing email that’s free from grammatical or spelling errors, many modern phishing samples already do the same. The AI tool’s phishing skills begin and end with writing emails because, again, it lacks the coding talent to produce other elements like credential harvesters, infected macros, or obfuscated code. Its attempts so far have been rudimentary at best—and that’s with the assistance of other tools and researchers.

ChatGPT can only pull from what’s already in its public database, and it has only been trained on data up until 2021. Even today, there are simply not enough well-written phishing scripts in the wild for ChatGPT to surpass what cybercriminals have already developed. In addition, OpenAI has safety protocols that explicitly prohibit the use of its models for malware development, fraud (including spam and scams), and invasions of privacy. Unfortunately, that hasn’t stopped crafty individuals from “jailbreaking” ChatGPT to get around them.

ChatGPT security cons

Just because some of the worst fears about ChatGPT are overhyped doesn’t mean there are no justifiable concerns. According to the NIST AI Risk Management Framework published in January, an AI system can only be deemed trustworthy if it adheres to the following six criteria:  

  1. Valid and reliable
  2. Safe
  3. Secure and resilient
  4. Accountable and transparent
  5. Explainable and interpretable
  6. Fair with harmful biases managed

However, risks can emerge from socio-technical tensions and ambiguity related to how an AI program is used, its interactions with other systems, who operates it, and the context in which it is deployed.

Racial and gender bias

There are many inherent uncertainties in LLMs that render them opaque by nature, including limited explainability and interpretability, and a lack of transparency and accountability, including insufficient documentation. Researchers have also reported multiple cases of harmful bias in AI, including crime prediction algorithms that unfairly target Black and Latino people and facial recognition systems that have difficulty accurately identifying people of color. Without proper controls, ChatGPT could amplify, perpetuate, and exacerbate toxic stereotypes, leading to undesirable or inequitable outcomes for certain communities and individuals.

Lack of verifiable metrics

AI systems suffer from a deficit of verifiable measurement metrics, which would help security teams determine whether a particular program is safe, secure, and resilient. What little data exists is far from robust and lacks consensus among AI developers and security professionals alike. What’s worse, different AI developers interpret risk in different ways and measure it at different intervals in the AI lifecycle, which could yield inconsistent results. Some threats may be latent at one time but increase as AI systems adapt and evolve.

Cybercriminal experimentation

Despite its struggles with malicious code, ChatGPT has already been weaponized by enterprising cybercriminals. By January, threat actors in underground forums were experimenting with ChatGPT to recreate malware variants and techniques described in research publications. Criminals shared malicious tools, such as an information stealer, an automated exploit, and a program designed to phish for credentials. Researchers also discovered cybercriminals exchanging ideas about how to create dark web marketplaces using ChatGPT that sell stolen credentials, malware, or even drugs in exchange for cryptocurrency.

Vulnerabilities and exploits

There are few ways to know in advance if an LLM is free from vulnerabilities. In March, OpenAI temporarily took down ChatGPT because of a bug that allowed some users to see the titles of other people’s chat histories and first messages of newly-created conversations. After further investigation, OpenAI discovered the vulnerability had exposed some user payment and personal data, including first and last names, email addresses, payment addresses, the last four digits of credit card numbers, and card expiration dates. While OpenAI claims, “We are confident that there is no ongoing risk to users’ data,” there’s no way (at present) to confirm or deny whether personal information was exfiltrated for criminal purposes.

Also in March, OpenAI massively expanded ChatGPT’s capabilities to support plugins that allow access to live data from the web, as well as from third-party applications like Expedia and Instacart. In code provided to ChatGPT customers interested in integrating the plugins, security analysts found a potentially serious information disclosure vulnerability. The bug can be leveraged to capture secret keys and root passwords, and researchers have already seen attempted exploits in the wild.

Privacy concerns

Compounding worries that vulnerabilities could lead to data breaches, several top brands recently chastised employees for entering sensitive business data into ChatGPT without realizing that all messages are saved on OpenAI’s servers. When Samsung engineers asked ChatGPT to fix errors in their source code, they accidentally leaked confidential notes from internal meetings and performance data in the process. An executive at another company cut-and-pasted the firm’s 2023 strategy into ChatGPT to create a slide deck, and a doctor submitted his patient’s name and medical condition for ChatGPT to craft a letter to his insurance company.

Chat with ChatGPT

Both privacy and security concerns have prompted major banks, including Bank of America, JPMorgan Chase, Goldman Sachs, and Wells Fargo, to restrict or all-out ban ChatGPT and other generative AI models until they can be further vetted. Even private companies like Amazon, Microsoft, and Walmart have issued warnings to their staff to refrain from divulging proprietary information or sharing personal or customer data on ChatGPT as well.

Social engineering

Finally, cybercriminals wouldn’t be cybercriminals if they didn’t capitalize on ChatGPT’s wild popularity. Because of its accelerated growth, ChatGPT was forced to throttle its free tool and launch a $20/month paid tier for those wanting unlimited access. This gave threat actors the ammunition to develop convincing social engineering schemes that promised uninterrupted, free access to ChatGPT but really lured users into entering their credentials on malicious webpages or unknowingly installing malware. Security researchers also found more than 50 malicious Android apps on Google Play and elsewhere that spoof ChatGPT’s icon and name but are designed for nefarious purposes.

ChatGPT’s disinformation problem

While vulnerabilities, data breaches, and social engineering are valid concerns, what’s causing the most anxiety at Malwarebytes is ChatGPT’s ability to spread misinformation and disinformation on a massive scale. That which enamors the public most—ChatGPT’s ability to generate thoughtful, human-like responses—is the very same capability that could lull users into a false sense of security. Just because ChatGPT’s answers sound natural and intelligent doesn’t mean they are accurate. Incorrect information and associated biases are often incorporated into its responses.

OpenAI CEO Sam Altman himself expressed worries that ChatGPT and other LLMs have the potential to sow widespread discord through extensive disinformation campaigns. Altman said the latest version, GPT-4, is still susceptible to “hallucinating” incorrect facts and can be manipulated to produce deceptive or harmful content. “The model will boldly assert made-up things as if they were completely true,” he told ABC News.

In the age of clickbait journalism and social media, it can be challenging to discern the difference between fake and authentic content, propaganda or legitimate fact. With ChatGPT, bad actors can use the AI to quickly write fake news stories that mimic the voice and tone of established journalists, celebrities, or even politicians. For example, Malwarebytes was able to get ChatGPT to write a story in the voice of Barack Obama about the earthquake in Turkey, which could easily be modified to spread disinformation or collect fraudulent payments through fake donation links.

Educational concerns

In education, mis- and disinformation are especially troubling byproducts of ChatGPT that have led some of the biggest school districts in the US to ban the program from K–12 classrooms. From its lack of cultural competency to its potential to undermine human teachers, academia is understandably apprehensive. For every student using ChatGPT to research debate prompts or develop study guides, there’s another abusing the platform to plagiarize essays or take exams.

The education industry might be willing (for now) to let teachers use ChatGPT for simple tasks like creating lesson plans and emailing parents, but the tool will likely remain off-limits for students, or at least highly regulated in public schools. Educators are aware that over-reliance on AI-powered tools and generated content could lead to a decrease in problem solving, creativity, and critical thinking—the very skills teachers and administrators aim to develop in students. Without them, it’ll be that much harder to recognize and avoid misinformation.

Final verdict

Suggesting that ChatGPT is low risk and unworthy of the security community’s attention is like putting your head in the sand and pretending AI doesn’t exist. ChatGPT is only the start of the generative AI revolution. Our industry should take its potential for disruption—and destruction—seriously and focus on developing safeguards to combat AI threats. Halting “dangerous” research on advanced models ignores the reality of rampant AI use today. Instead, it’s better to demand NIST’s criteria for trustworthiness and establish regulation around the development of AI through both government intervention and corporate security innovation.

Some artificial intelligence regulation is already on the books: the 2022 Algorithmic Accountability Act requires US businesses to assess critical AI algorithms and provide public disclosures for increased transparency. The legislation was endorsed by AI advocates and experts, and it sets the stage for future government oversight. With AI laws proposed in Canada and Europe as well, we’re one step closer to providing some important guardrails for AI. In fact, expect to see changes (aka limitations) implemented to ChatGPT in the near future in response to a country-wide ban by the Italian government.

Just as cybersecurity relies on commercial software to defend people and businesses, so too might generative AI models. New companies are already springing up that specialize in AI vulnerability detection, bot mitigation, and data input cleansing. One such company, Kasada Pty, has been tracking ChatGPT misuse and abuse. Another new tool from Robust Intelligence, modeled after VirusTotal, scans AI applications for security flaws and tests whether they’re as effective as advertised or if they have issues around bias. And Hugging Face, one of the most popular repositories of machine learning models, has been working with Microsoft’s threat intelligence team on an application that scans AI programs for cyberthreats.

As organizations look to integrate ChatGPT—whether to augment employee tasks, make workflows more efficient, or supplement cyberdefenses—it will be important to note the program’s risks alongside its benefits, and recognize that generative AI still requires an appreciable amount of oversight before large-scale adoption. Security leaders should consider AI-related vulnerabilities across their people, processes, and technology—especially those related to mis- and disinformation. By putting the right safeguards in place, generative AI tools can be used to support existing security infrastructures.

Awareness alone won’t solve the more nebulous threats associated with ChatGPT. To bring disparate security efforts together, the AI community will need to adopt a similar modus operandi to traditional software, which benefits from an entire ecosystem of government, academia, and enterprise that has developed over more than 20 years. That system is in its infancy for LLMs like ChatGPT today, but continued diligence—plus a learning model of its own—should integrate cybersecurity in a symbiotic relationship.  The benefits of ChatGPT are many, and there’s no doubt that generative AI tools have the potential to transform humanity. In what way, remains to be seen.


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

Webinar recap: EDR vs MDR for business success

Did you miss our recent webinar on EDR vs. MDR? Don’t worry, we’ve got you covered!

In this blog post, we’ll be recapping the highlights and key takeaways from the webinar hosted by Marcin Kleczynski, CEO and co-founder of Malwarebytes, and featuring guest speaker Joseph Blankenship, Vice President and research director at Forrester.

  • Introducing EDR and MDR: The webinar began with an overview of EDR and MDR. The speakers explained that EDR provides visibility into endpoint activity, while MDR offers 24/7 monitoring and management of security technologies and incident response services. They also pointed out that EDR solutions can be challenging for businesses without dedicated security teams and that building an in-house SOC can be expensive and difficult.
  • Limitations of Endpoint Protection and EDR: The speakers discussed the limitations of endpoint protection and EDR, specifically when it comes to advanced threats like ransomware or Advanced Persistent Threats (APTs) that use Living off the Land (LOTL) attacks and fileless malware. These threats can hide in memory and blend in with normal activity, making them difficult to detect without trained specialists who are proactively hunting for them.
  • How MDR Can Help: To address these challenges, the speakers spoke about outsourcing to an MDR provider. MDR providers work with clients to understand their security technology stack, make recommendations, and agree on response actions to take. Incident response and threat hunting are part of the MDR service, and the provider will have a plan in place to shut down threats, contain them, and eradicate them so businesses can get back to.. erm… business.
  • Which Is Right for Your Business? The choice between EDR and MDR comes down to the resources you have available and the level of security you require. If you have a dedicated security team and the resources to manage and maintain an EDR solution, EDR may be the right choice for you. However, if you lack dedicated security resources, MDR may be a better option as it provides continuous monitoring and incident response services.

Want to learn more about EDR and MDR and which is right for your business? Be sure to watch the full webinar recording on-demand and get valuable insights from industry experts on how to improve your security operations and protect against ransomware and fileless malware.

Watch now!

Identity crisis: How an anti-porn crusade could jam the Internet, featuring Alec Muffett: Lock and Code S04E11

On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States. 

The changes are, ostensibly, over pornography. 

In Louisiana, today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID.

But the larger problem with all these proposals is not that they would make a new Internet only for children; it is that they would make a new Internet for everyone.

Look no further than Utah. 

On May 1, after new rules came into effect to make porn sites verify the ages of their users, the site PornHub decided to refuse to comply with the law and instead block access to the site for anyone visiting from an IP address based in Utah. If you’re in Utah right now and connecting to the Internet with an IP address located in Utah, you cannot access PornHub. Instead, you’re presented with a message from adult film star Cheri Deville, who explains that:

“As you may know, your elected officials have required us to verify your age before granting you access to our website. While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he’s seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology. 

“The battle cry of these people has always been—either directly or mocked as being—’Could somebody think of the children?’ And I’m thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she’s an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity.”

Muffett continued:

“I’m trying to protect that for her. I’d like to see more people grasping for that.”

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.


Malwarebytes Privacy VPN can encrypt your connection when using public WiFi, and it can block companies and websites from seeing your IP address and location to identify who you are, where you live, or what you’re doing on the Internet.

TRY NOW


Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Additional Resources and Links for today’s episode:

“A Sequence of Spankingly Bad Ideas” – An analysis of age verification technology presentations from 2016. Alec Muffett.

“Adults might have to buy £10 ‘porn passes’ from newsagents to prove their age online” – The United Kingdom proposed an “adult pass” for purchase in 2018 to comply with earlier efforts at online age verification. Metro.

“Age verification won’t block porn. But it will spell the end of ethical porn” – An independent porn producer explains how compliance costs for age verification could shut down small outfits that make, film, and sell ethical pornography. The Guardian.

“Minnesota’s Attempt to Copy California’s Constitutionally Defective Age Appropriate Design Code is an Utter Fail” – Age verification creeps into US proposals. Technology and Marketing Law Blog, run by Eric Goldman.

“Nationwide push to require social media age verification raises questions about privacy, industry standards” – Cyberscoop.

“The Fundamental Problems with Social Media Age Verification Legislation” – R Street Institute.

YouTube’s age verification in action – Various methods and requirements shown in Google’s Support center for ID verification across the globe.

“When You Try to Watch Pornhub in Utah, You See Me Instead. Here’s Why” – Cheri Deville’s call for specialized phones for minors. Rolling Stone.

A week in security (May 15-21)

Last week on Malwarebytes Labs:

Stay safe!


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

Update now: 9 vulnerabilities impact Cisco Small Business Series

Nine vulnerabilities have been found and fixed in the web-based user interface of various Cisco products in the Small Business Series. In a worst-case scenario, these issues could lead to denial of service (DoS) conditions or arbitrary code execution.

Affected products

The vulnerabilities affect all of the below if running vulnerable firmware:

  • 250 Series Smart Switches
  • 350 Series Managed Switches
  • 350X Series Stackable Managed Switches
  • 550X Series Stackable Managed Switches
  • Business 250 Series Smart Switches
  • Business 350 Series Managed Switches
  • Small Business 200 Series Smart Switches
  • Small Business 300 Series Managed Switches
  • Small Business 500 Series Stackable Managed Switches

Exploits

  • CVE-2023-20159: Cisco Small Business Series Stack Buffer Overflow
  • CVE-2023-20160: Cisco Small Business Series Switches Unauthenticated BSS Buffer Overflow Vulnerability 
  • CVE-2023-20161: Cisco Small Business Series Switches Unauthenticated Stack Overflow Vulnerability
  • CVE-2023-20189: Cisco Small Business Series Switches Unauthenticated Stack Buffer Overflow Vulnerability

The four vulnerabilities above could allow an unauthenticated remote attacker to execute arbitrary code on an affected device. This is because of improper validation of requests sent to the web interface. A crafted request sent through the web interface could result in the attacker executing arbitrary code with root privileges on an affected device.

  • CVE-2023-20024: Cisco Small Business Series Switches Unauthenticated Heap Buffer Overflow Vulnerability
  • CVE-2023-20156: Cisco Small Business Series Switches Unauthenticated Heap Buffer Overflow Vulnerability
  • CVE-2023-20157: Cisco Small Business Series Switches Unauthenticated Heap Buffer Overflow Vulnerability
  • CVE-2023-20158: Cisco Small Business Series Switches Unauthenticated Denial-of-Service Vulnerability

The four vulnerabilities above could allow for a denial of service (DoS) condition on an affected device. As above, this is due to crafted requests being improperly validated when sent to the web interface.

  • CVE-2023-20162: Cisco Small Business Series Switches Unauthenticated Configuration Reading Vulnerability

This final vulnerability could allow a remote attacker to read unauthorised information on an affected device. This is, as with the other flaws, due to improper validation of requests sent to the web interface.

Mitigation

Two products confirmed as not being vulnerable to these issues are:

  • 220 Series Smart Switches
  • Business 220 Series Smart Switches

However, for those web-based user interfaces that are affected, Cisco has released software updates to fix the vulnerabilities. Cisco states that product users “should obtain security fixes through their usual update channels”.

There are no workarounds to address these vulnerabilities. In other words, if you’re unable to apply an update for the time being, your devices will remain vulnerable until the updates are applied.


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

Zip domains, a bad idea nobody asked for

If you heard a strange and unfamiliar creaking noise on May 3, it may have been the simultaneous rolling of a million eyeballs. The synchronised ocular rotation was the less than warm welcome that parts of the IT and security industries—this author included—gave to Google’s decision to put .zip domains on sale.

Google Registry actually announced eight new top-level domains (TLDs) that day: .dad, .phd, .prof, .esq, .foo, .zip, .mov, and .nexus, but it was dot zip and dot mov that had security eyeballs looking skywards, because of their obvious similarity to the extremely popular and long-lived .zip and .mov file extensions.

TLDs are the letters that come after the dot at the end of the domain name in an Internet address, like example.com, example.org, and example.zip.

File extensions are the letters that come after the dot at the end of a file name, like example.docx, example.ppt, and example.zip.

You see the problem?

Domain names and filenames are not the same thing, not even close, but both of them play an important role in modern cyberattacks, and correctly identifying them has formed part of lots of basic security advice for a long, long time.

The TLD is supposed to act as a sort of indicator for the type of site you’re visiting. Dot com was supposed to indicate that a site was commercial, and dot org was originally meant for non-profit organizations. Despite the fact that both dot com and dot org have been around since 1985, it’s my experience that most people are oblivious to this idea. Against that indifference, it seems laughable that dot zip will ever come to indicate that a site is “zippy” or fast, as Google intends.

When you’re offering services where speed is of the essence, a .zip URL lets your audience know that you’re fast, efficient, and ready to move.

Meanwhile, plenty of users already have a clear idea that .zip means something completely different. Since the very beginning, files on Windows computers have used an icon, and a filename ending in a dot followed by three letters to indicate what kind of file you’re dealing with. If the three letters after the dot spell z-i-p, then that indicates an archive full of compressed—”zipped up”—files. The icon even includes a picture of a zipper on it (because reinforcement is good, and confusion is bad.)

As it happens, cybercriminals love .zip files and the last couple of years has seen an explosion in their use as malicious email attachments. Typically, the zip file is first in a sequence of files known as an “attack chain”. In a short chain, the zip file might simply contain something bad. In a longer chain it might contain something that links to something bad, or contain something that contains something that links to something bad, or contain something that links to something that contains something that links to something bad. You get the idea.

The key to it all is misdirection. The attack chain is there to confuse (there’s that word again) and mislead users and security software.

Criminals use other forms of misdirection in file extensions too. An old favourite is giving malicious files two file extensions, like evil.zip.exe. The first one, .zip in this case, is there to fool you. The second is the real one: a dangerous executable type, .exe in this example. Given a choice of two, users have to decide which one to believe. Most aren’t even faced with that choice though. Hilariously, Windows helps the subterfuge along by hiding the second file extension, the one you really should be paying attention to, by default.
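Spotting that particular trick programmatically is simple enough. The snippet below is a hypothetical illustration (not a Malwarebytes detection rule) that flags filenames pairing a harmless-looking extension with an executable one; both extension lists are made up for the example:

    # Flag filenames that pair a "harmless" decoy extension with a hidden executable one,
    # e.g. evil.zip.exe or invoice.pdf.scr. The extension lists are purely illustrative.
    from pathlib import Path

    EXECUTABLE = {".exe", ".scr", ".bat", ".cmd", ".js", ".vbs"}
    DECOY = {".zip", ".pdf", ".doc", ".docx", ".jpg", ".txt"}

    def looks_deceptive(filename: str) -> bool:
        suffixes = [s.lower() for s in Path(filename).suffixes]
        return len(suffixes) >= 2 and suffixes[-1] in EXECUTABLE and suffixes[-2] in DECOY

    for name in ["evil.zip.exe", "report.docx", "holiday.jpg.scr"]:
        print(name, "->", "suspicious" if looks_deceptive(name) else "ok")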

Domain names get the same treatment. Criminals make extensive use of open redirects for example—web pages that will redirect you anywhere you want to go—to make it look as if their malicious URLs are actually links to Google, Twitter or other respectable sites. Less sophisticated criminals just throw words like “paypal”, or anything else you might recognise, into the link and hope you’ll notice that bit and ignore the rest.

Against that backdrop, Google inexplicably decided to introduce something that will generate no useful revenue but will give cybercrooks an entirely new form of file and domain name misdirection, to add to all the others we’re still wrestling with.

What could criminals do with this new toy? There is no better example than that provided by security researcher Bobby Rauch, in his excellent article The Dangers of Google’s .zip TLD. In it, Rauch challenges readers to identify which of the following two URLs “is a malicious phish that drops evil.exe?”

https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.27.1.zip
https://github.com∕kubernetes∕kubernetes∕archive∕refs∕tags∕@v1.27.1.zip

It’s the bottom one.

The top one would open a zip file called v1.27.1.zip from the github.com domain. The second would go to the domain v1.27.1.zip, which in this hypothetical example triggers the download of the evil.exe file.
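You can see the sleight of hand for yourself with Python’s standard library. The sketch below assumes the look-alike character is U+2215 (DIVISION SLASH), one of several characters that resemble a forward slash, and asks the URL parser which host each address really points at:

    # The second URL uses a look-alike character (U+2215) instead of "/", so everything
    # before the "@" is treated as user info and the real hostname becomes v1.27.1.zip.
    from urllib.parse import urlsplit

    fake_slash = "\u2215"
    legit = "https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.27.1.zip"
    phish = "https://github.com" + fake_slash.join(
        ["", "kubernetes", "kubernetes", "archive", "refs", "tags", ""]) + "@v1.27.1.zip"

    print(urlsplit(legit).hostname)  # github.com
    print(urlsplit(phish).hostname)  # v1.27.1.zip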

If you figured it out, well done, but remember you knew that one of them was bad. Would you have spotted it if you hadn’t been forewarned? And if you didn’t spot it, don’t feel bad, that’s the whole point. It’s hard to read URLs even if you know you’re looking for something out of place.

Of course, the invention of dot zip domains didn’t suddenly make URLs hard to read; they already were. But that’s no excuse.

Google does an awful lot of really good stuff for computer security, for which it deserves enormous credit, and this is a small and uncharacteristic misstep. The search giant was under absolutely no pressure to create a dot zip TLD and it hardly seems destined to become a major income stream.

Dot zip domains are not yet a serious problem. At the time of writing, a little fewer than 4,000 have been registered, some of which were almost certainly bought by security researchers wanting to demonstrate what a bad idea they are, or to deprive criminals of some of the more dangerous names.

Criminals may yet decide they don’t need the built-in confusion of the dot zip domain (or at least, not today). They already have a whole bag of tricks that work very well and if a new one doesn’t make their life easier or richer, they won’t use it.

It is also possible that dot zip will simply die on the vine if enough companies choose to block it. Last week, Citizen Lab’s John Scott-Railton urged his nearly 200,000 Twitter followers to simply “block it all”, saying “The chance that new .zip and .mov domains mostly get used for malware attacks is 100%.”

It’s for you and your organisation to decide if you should block it, but I will point out that if you are going to, the best time to do it is now: Almost nobody is currently using it, and nobody is going to use it in future if it’s routinely blocked.


Malwarebytes EDR and MDR removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

APT attacks: Exploring Advanced Persistent Threats and their evasive techniques

Cyber criminals come in all shapes and sizes.

On one end of the spectrum, there’s the script kiddie or inexperienced ransomware gang looking to make a quick buck. On the other end are state-sponsored groups using far more sophisticated tactics—often with long-term, strategic goals in mind.

Advanced Persistent Threat (APT) groups fall into this latter category.

Well-funded and made up of an elite squadron of hackers, these groups target high-value entities like governments, large corporations, or critical infrastructure. They often deploy multi-stage, multi-vector approaches with a high degree of obfuscation and persistence.

But for every small-to-medium-sized business (SMB) out there asking itself “Why would an APT group care about me?”, we have the answer.

SMBs can be stepping stones to bigger targets—especially if they’re in a supply chain or serve larger entities. A whopping 93% of SMB execs even think nation-state hackers are using businesses like theirs as a backdoor into the country’s digital defenses.

In this post, we’ll break down how APT groups work, explain their tactics and evasive techniques, and show how to detect APT attacks.

How APT groups work

The aim of APT groups is not a quick hit, but a long-term presence within a system, allowing them to gather as much information as they can while remaining undetected.

APTs stand apart from typical cybercriminals in several key ways:

  • Motive: Unlike ordinary cybercriminals, APTs are primarily driven by the acquisition of intelligence. While they might engage in activities that yield financial gains, their primary funding comes from the state they serve, not from their operations.
  • Tools: APTs have access to advanced tools and zero-day vulnerabilities. They keep these under wraps for as long as they can, only resorting to destructive malware when necessary.
  • Crew: APTs consist of experienced and motivated individuals who work in close coordination with one another. This is a stark contrast to traditional cybercriminals, where distrust often prevails.

An example of APT reconnaissance (RedStinger) as observed by the Malwarebytes Threat Intelligence Team.

So, how does an APT work its dark magic? Here’s a quick rundown:

  • Step 1: Reconnaissance. This could be anything from figuring out whether there’s sensitive data or information worth stealing to making a hit list of employees or ex-employees.
  • Step 2: Infiltration. Usually, this involves some crafty social engineering, like spear phishing or setting up a watering hole to deliver custom malware.
  • Step 3: Establishing a foothold. APTs need someone inside the target’s network to run their malware.
  • Step 4: Expanding their reach. This might involve further deployment of malware, reconnaissance of the network, or other activities aimed at consolidating their position.
  • Step 5: Data acquisition. The ultimate goal is to acquire the desired data. They might need to get more access in the network to do this.
  • Step 6: Maintaining presence. Once they’re in, they might need to create more entry points or even leave a backdoor open for a return visit. If they’re done, they’ll clean up their mess to cover their tracks.

While not all these steps are required in every case, and the time and effort expended on each can vary widely, this provides a general framework for understanding how APTs operate.

Evasive techniques of APT attacks

Alright, now that we know the basics of how APTs operate, let’s dive into the specifics of their tools, techniques, and procedures (TTPs).

Common TTPs, mapped to MITRE ATT&CK:

  • Phishing (Spear-phishing Attachment, Spear-phishing Link): APT groups frequently initiate targeted spear-phishing attacks, often combined with social engineering and exploitation of software vulnerabilities, to gain initial access to a target network.
  • Execution through API (T1059.005) or User Execution (T1204): Once inside a network, APTs use legitimate system tools and processes to carry out their activities in a way that blends in with normal network activity and avoids detection.
  • Exploitation for Client Execution (T1203): APT groups frequently discover and exploit zero-day vulnerabilities — these are software flaws unknown to the software’s vendor at the time of exploitation.
  • Lateral Movement (Tactic ID: TA0008): After gaining initial access, APTs use lateral movement techniques, such as Pass the Hash (PtH), to explore the network, elevate their privileges, and gain access to more systems.
  • Exfiltration Over C2 Channel (T1041): APTs typically employ advanced, stealthy techniques for stealing data, such as splitting it into small packets, encrypting it, or sending it out during normal business hours to blend in with regular traffic.
  • Establish Persistence (Tactic ID: TA0003): APT groups use techniques like multiple backdoors, rootkits, and even firmware or hardware-based attacks to maintain access to a network even after detection and remediation efforts.
  • Supply Chain Compromise (T1195): APTs sometimes compromise software or hardware vendors to exploit the trust relationships between those vendors and their customers, thereby gaining access to the customers’ systems.

In a word, APT groups use methods like “living off the land” (utilizing built-in software tools to carry out their activities), fileless malware (malware that resides in memory rather than on disk), encryption (to hide their communication), and anti-forensic measures (to cover their tracks). 

Breakdown of different APT groups

Attribution is always a bit thorny when it comes to different APT groups, but some groups are rather well-known and their origin has become clear. A naming convention that not everyone follows is: Chinese APT actors are commonly known as “Pandas,” Russian APTs as “Bears,” and Iranian APTs as “Kittens”.

Some examples:

  • APT28 aka Fancy Bear (Russia)
  • Nemesis Kitten (Iran) a sub-group of Iranian threat actor Phosphorus (APT35)
  • APT1 aka Comment Panda aka unit 61398 of the People’s Liberation Army (China)

Countries typically have different groups that focus on different targets, but generally speaking, some of the most frequently hit sectors are governments, aerospace, and telecommunications. 

According to the cyber threat group list compiled by MITRE ATT&CK, we’re aware of over 100 APT groups worldwide. The majority of these groups have ties to China, Russia, and Iran. In fact, China and Russia alone are reportedly connected to nearly 63% of all these known groups.

For the purposes of this article, I compiled data on 37 different APT groups listed by American cybersecurity firm Mandiant and broke them down by country. I also ran the numbers on the most frequently mentioned target industries; as this data comes from a relatively small sample, treat these as rough estimates.


Detecting Advanced Persistent Threats (APTs)

You’ve got a few tricks up your sleeve when it comes to detecting APTs on your network.

You can use things like Intrusion Detection and Prevention Systems, or IDS/IPS for short, which keep an eye on your network traffic. Regular check-ups on your logs and network can also give you clues.

Then there’s following breadcrumbs known as Indicators of Compromise (IoCs) and watching for any weird behavior from users or end devices. But here’s the thing: these threats are getting smarter and trickier.
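To make the IoC idea concrete before moving on, here is a deliberately simplified sketch that hashes files in a folder and compares them against a list of known-bad SHA-256 values. The hash list and folder path are placeholders; a real feed would come from threat intelligence and also cover domains, IP addresses, and registry keys:

    # Minimal IoC sweep: hash files and compare against known-bad SHA-256 values.
    # KNOWN_BAD and the scan directory are placeholders, not real indicators.
    import hashlib
    from pathlib import Path

    KNOWN_BAD = {"0" * 64}  # replace with hashes from a threat intelligence feed

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    for item in Path("C:/Users/Public/Downloads").rglob("*"):
        if item.is_file() and sha256_of(item) in KNOWN_BAD:
            print(f"IoC match: {item}")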

That’s where Endpoint Detection and Response (EDR) comes in. Let’s take a look at how EDR can help level up your defense game against these APTs.

Consider, for example, the fairly common case of an APT group using Mimikatz, an open source tool for Windows security and credential management, to extract credentials from memory and perform privilege escalation. MITRE lists at least 8 APT groups observed to use Mimikatz for this exact purpose. 
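To give a rough idea of the kind of behaviour a defender might hunt for here, the following hypothetical sketch walks running processes with the psutil library and flags command lines containing strings commonly associated with in-memory credential theft. The pattern list is illustrative only, not an official Malwarebytes or MITRE rule set:

    # Flag running processes whose command lines contain strings often seen when
    # PowerShell is used to download and run credential-dumping tools in memory.
    # The SUSPICIOUS list is illustrative; real EDR logic is far more nuanced.
    import psutil

    SUSPICIOUS = [
        "invoke-mimikatz",
        "sekurlsa::logonpasswords",
        "downloadstring(",
        "-encodedcommand",
    ]

    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or []).lower()
        if any(marker in cmdline for marker in SUSPICIOUS):
            print(f"Suspicious process {proc.info['pid']} ({proc.info['name']}): {cmdline}")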

Using Malwarebytes EDR, we can find suspicious activity like this and quickly isolate the endpoint with which it’s associated.


Clicking into a high-severity alert, we’ll see a categorization of rules to help a newer or less experienced analyst understand what’s going on with this process.

What we see here is the actual categorization of behaviors that Malwarebytes witnessed in this process. Each of these little bubbles has been color coded to help you understand the severity of this issue.


At the bottom, we have a detailed process timeline as well. Clicking into any of these nodes, we get a lot of rich context information about what this process did.

As a security analyst or an IT admin, the first questions you typically ask when an incident occurs are: What happened? Do we know if it’s malicious? What is the actual extent of the potential damages? And so on.


We can see the exact time that it ran and the file hashes, so if we needed to do further investigation, we have those available. And most importantly, we’ve highlighted below the command line actually used to execute this technique on our machine.

This is really suspicious-looking code that could definitely be a sign of an APT on the network. This PowerShell command is downloading and executing Mimikatz from a remote server. Let’s remediate ASAP!

Closing this view out, we’ll find a “Respond” option in the upper right-hand corner with a drop-down menu to “Isolate Endpoint”.


We have three layers of isolation that we can provide: network isolation, process isolation, and desktop isolation.

The network and process isolations are intended to give us the ability to quarantine that machine and prevent it from doing anything that is not authorized by Malwarebytes.

What this means is, we can still use our Malwarebytes console to trigger scans to perform other tasks and to review data, but the machine otherwise can’t communicate or run anything else. 

Bam! This potential APT threat is blocked, all in a matter of minutes.

Want to see Malwarebytes EDR in action? Learn more here.

Respond to APT attacks quickly and effectively

Managed Detection and Response (MDR) services provide an attractive option for organizations without the expertise to manage EDR solutions. MDR services offer access to experienced security analysts who can monitor and respond to threats 24/7, detect and respond to APT attacks quickly and effectively, and provide ongoing tuning and optimization of EDR solutions to ensure maximum protection.

Stop APT attacks today

Child safety app riddled with vulnerabilities: Update now!

An app designed to restrict screen time and add a “kids’ mode” for children on smart devices has been found to have a broad range of security issues.

The app, “Parental Control – Kids Place”, is an incredibly popular Android app, sporting 5M+ downloads on its Google Play page. In terms of what the app does with users’ data, Play’s Data Safety page has this to say:

  • No data shared with third parties 

  • Precise location, name and email, installed apps and other actions, crash logs, and device / other IDs may be collected 

  • Data is encrypted in transit 

  • You can request that data be deleted 

Despite this, the five flaws discovered by the SEC Consult researchers would give most parents quite the headache in terms of device, account, and child safety. The explanations given for the various flaws are quite technical. Fear not, because below we’ll explain how these flaws affected app users without wandering into the coding weeds.

  • Passwords were being stored insecurely, in a way which would be potentially easy for an attacker to crack using automated methods (a sketch of how password storage is normally hardened follows this list).
  • The parent’s web dashboard was insecure and vulnerable to attack.
  • This same dashboard could be exploited to send download links to the child’s device which could contain malware.
  • Finally, the child could potentially bypass the restriction features without anyone noticing. This last one involves a couple of steps which includes booting into safe mode. While a child may not figure the flow out themselves, it’s the kind of thing which routinely ends up on social media and streaming sites as a “cool hack”. 
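For contrast, here is a minimal sketch (not taken from the app or from SEC Consult’s report) of how password storage is normally hardened: a random salt plus a deliberately slow key-derivation function, which makes cracking stored values with automated tools far more expensive. The iteration count and example PIN are illustrative:

    # Salted, slow password hashing with Python's standard library.
    # The iteration count and example PIN are illustrative placeholders.
    import hashlib
    import hmac
    import os
    from typing import Optional, Tuple

    def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        return hmac.compare_digest(hash_password(password, salt)[1], expected)

    salt, stored = hash_password("parent-pin-1234")
    print(verify_password("parent-pin-1234", salt, stored))  # True
    print(verify_password("guess", salt, stored))            # False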

The vendor was notified mid-November 2022, with the app creators responding that “most” of the vulnerabilities had been fixed. Several rounds of back and forth communication ensued, with the SEC researchers having to go back and explain that certain issues had still not been addressed by the start of January 2023. 

The vendor again replied that everything had now been fixed mid-February, and this time around the fixes got the job done. 

What does this all mean in practice if you’re a user of this app? Well, good news: the updates did indeed fix the flaws. The way to keep your app and your child safe is to download the latest version of Parental Control – Kids Place from the Google Play store. 

You must be running at least version 3.8.50 in order to be safe from the issues listed above. 

There are no workarounds available to address the five security vulnerabilities if you’re running something lower than this, and you’ll potentially be at risk until you update the app. 

To update a Google Play app, there are a few options available: 

Update all Android apps automatically: 

  • Open the Play Store app 

  • In the top right corner, press the profile icon 

  • Tap Settings > Network Preferences > Auto-update apps 

  • Select “over any network”, or “over Wi-Fi only” 
     

Update individual apps automatically: 

  • Open the Play Store app 

  • In the top right corner, press the profile icon 

  • Tap Manage apps and device 

  • Tap Manage, and then find the desired app 

  • Tap the app to open the app’s Details page 

  • On the Details page, tap More (typically represented by three vertical dots) 

  • Turn on Enable auto-update 

You may need to restart your device to complete the process. 


We don’t just report on Android security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your Android devices by downloading Malwarebytes for Android today.

KeePass vulnerability allows attackers to access the master password

KeePass is a free open source password manager, which helps you to manage your passwords and stores them in encrypted form. In fact, KeePass encrypts the whole database, i.e. not only your passwords, but also your user names, URLs, notes, etc.

That encrypted database can only be opened with the master password. You absolutely do not want an attacker to get hold of your master password, since that is basically the key to your kingdom—aka “all your passwords are belong to us.”

However, a researcher has worked out a way to recover a master password, and has posted KeePass 2.X Master Password Dumper on GitHub.

The description of the vulnerability (CVE-2023-32784) says:

“In KeePass 2.x before 2.54, it is possible to recover the cleartext master password from a memory dump, even when a workspace is locked or no longer running. The memory dump can be a KeePass process dump, swap file (pagefile.sys), hibernation file (hiberfil.sys), or RAM dump of the entire system. The first character cannot be recovered. In 2.54, there is different API usage and/or random string insertion for mitigation.”

The issue was reported to the developer of KeePass on May 1, 2023 and relies on the way that Windows processes the input of a text box. 

Since the developer has fixed the issue, this would normally be the place where we tell you to update KeePass. Unfortunately, a release for the new update (2.54) is not expected for a few months, since the developer is still working on a few other security related features.

However, there is no reason for most KeePass users to immediately panic and switch to a different password manager, because it would be very difficult for an attacker to get their hands on a memory dump of your system without you noticing. That being said, the gravity of the situation is different for people that are afraid their system might be confiscated and submitted to forensic analysis.

Protection

There are a few things you can do if you’re worried about this vulnerability.

  • KeePass can be used with a YubiKey. A YubiKey is a USB security key which, when inserted into a USB slot on your computer, enters the password for you at the press of a button. This keeps the password out of the text box, so it doesn’t end up in the system memory.
  • Scan your system for malware. It is feasible that malware could be used to remotely fetch a memory dump from an infected system.
  • Turn on device encryption to keep unauthorized users from accessing your system.

For those with the more serious threat model of system confiscation that we mentioned earlier, the researcher that found the issue posted the advice to follow these steps:

  • Change your master password
  • Delete hibernation file
  • Delete pagefile/swapfile
  • Overwrite deleted data on the HDD to prevent carving (e.g. Cipher with /w on Windows)
  • Restart your computer

Or just overwrite your hard disk drive (HDD) and do a fresh install of your operating system (OS).

That looks a bit over the top for most users, and most will not need to do it. However we do advise all KeePass users to keep an eye out and to update to KeePass 2.54 or higher once it is available.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using Malwarebytes Vulnerability and Patch Management.