IT NEWS

Google to auto-enrol users, YouTubers into 2SV

Google’s announced some changes to how it’s helping millions of its users stay safe and secure. The biggest of those changes is that it plans to auto-enrol its users into two-step verification, or 2SV.

2SV adds an extra layer when logging into your account and the additional step happens after you’ve entered your password. For Google users, it involves just tapping a notification on their phone to confirm it’s them. It’s simple, and it dramatically decreases the chance of someone else accessing an account.

AbdelKarim Mardini, Group Product Manager for Chrome, and Guemmy Kim, Director of Account Security and Safety, wrote in a blog post:

2SV has been core to Google’s own security practices and today we make it seamless for our users with a Google prompt, which requires a simple tap on your mobile device to prove it’s really you trying to sign in. And because we know the best way to keep our users safe is to turn on our security protections by default, we have started to automatically configure our users’ accounts into a more secure state.

By the end of 2021, we plan to auto-enroll an additional 150 million Google users in 2SV and require 2 million YouTube creators to turn it on.

It’s been a long time coming—Google announced its intentions to auto-enrol users into 2SV back in May. Then, in August, Google’s official YouTube Twitter account told content creators they would have to enable 2SV in order to log in.

For those who, for some reason, cannot use the 2SV option, Google says it’s “working on technologies that provide a convenient, secure authentication experience and reduce the reliance on passwords in the long-term.”

Google has a handy Security Checkup that’s worth going through, to make sure your account is as secure as it can be, and ready for 2SV.

Lastly, Google has shared other methods of securing accounts, such as building security keys into Android devices; creating the Google Smart Lock app for Apple users; creating the Titan Security Key, a physical 2SV key; and creating the Google Identity Service, a way to verify identities using tokens instead of passwords.

The post Google to auto-enrol users, YouTubers into 2SV appeared first on Malwarebytes Labs.

Stop. Do you really need another security tool?

The last few years have seen a mushrooming of the number and type of security tools that organizations can use to protect themselves. You can have tools, tools to integrate the tools, tools to monitor the tools, APIs, dashboards (so many dashboards), and machine learning with everything. And yet, against this backdrop of rapidly escalating security sophistication, the ransomware epidemic has got measurably worse. Moreover, as 2021 comes to a close, criminals are also still regularly exploiting vulnerabilities that their victims could have patched three years ago.

The orthodox explanation for this is that we are, collectively, not sophisticated enough—we are simply failing to adopt new technology quickly enough to head off the latest threats. For some organizations that is what’s happening, but is that all there is to it?

Too much of a good thing

A year ago, IBM’s annual Cyber Resilient Organization Report (which is based on a survey of 3,400 IT and security professionals by the Ponemon Institute) unearthed an interesting consequence of all this tooling. Too many tools weaken cyber resilience, it said:

The study revealed that the number of security solutions and technologies an organization used had an adverse effect on its ability to detect, prevent, contain and respond to a cybersecurity incident.

IBM’s isn’t the only recent research to identify this problem. Earlier this year, security services provider Reliaquest collaborated with IDG on a report about technology sprawl, in which it pointed out much the same thing:

The majority of survey respondents (92%) agree there’s a tipping point where the number of security tools in place negatively impacts security. Seventy-eight percent said they’ve reached this tipping point.

And there may be another, related problem too.

Over on social media, at around the same time as Reliaquest released its report, ubiquitous security influencer Kevin Beaumont was barking up an adjacent tree. To nods of approval from security professionals, he pointed out “a common trip up in cybersecurity” was “buying the best solutions … and then not having the resources/skills/whatever to actually use the solution”.

“You can buy the best – can you run the best? If not, it ain’t the best.”

The view from the trenches

To understand more about these issues I spoke to Crystal Green, Malwarebytes’ Director of Customer Success.

In fact, the Customer Success team’s very existence suggests that there is substance to these ideas. As Green explained to me, part of her work involves making sure customers aren’t left behind: “As the threat landscape changes, security software providers have to constantly improve, adding new features and protections … it is our job [in the Customer Success team] to ensure that customers are educated on the best practices for deploying and maintaining our solutions, and are getting the most value and protection from their investment.”

I started by asking if she had encountered the problem identified by IBM and Reliaquest, of some companies having too many tools. After all, we’ve all been preaching “defence in depth” for years, so isn’t a variety of tools a good thing?

“While a layered security approach is necessary, the more tools that are in the security stack, the greater the potential for conflicts between the tools increases. Additionally, key features and functionality may be intentionally, or mistakenly disabled, causing a gap in protection.”

And what about the issue that Beaumont and his followers raised, of companies buying capable software that they then struggle to implement? I wondered if that was just the social media echo chamber at work or if she’d seen it for herself.

“We see a lot of companies that purchase software but don’t actually deploy or use the software. Sometimes it doesn’t get deployed at all, other times key features aren’t used.”

The reasons will sound familiar to anyone who has worked in a corporate IT department. Green explains: “This happens for many reasons, including conflicts with time, other priority projects that make the implementation of software a lesser priority, or there may not be a complete understanding of how to best use the solution.”

So can companies simply freeze their security solutions in time and stop updating? Sadly, no. Threat actors aren’t standing still, she explains, and modern tools are important. It’s just that simply owning the tools isn’t enough.

“We’ve all been seeing numerous companies in the news this year being hit by ransomware attacks. It is critical that business (and individuals) have the right tools in place. But those tools must also be implemented, configured, and maintained correctly.”

Security as a process

Green recommends that businesses manage their security tools as an ongoing process, not a project, no matter what their vendor says about how easy the software is to set up.

“Each environment is different, and environments change over time, so it’s important that administrators complete regular reviews of each tool to ensure that the configuration is meeting their current security needs and if a particular functionality is turned off, that the risks associated with that decision are understood.”

That’s all very well, but administrators have a lot on their plate. What about the ones who don’t know what they don’t know?

“We deal with this through having business reviews with our customers where we will showcase what is going well, as well as pointing out gaps, including features and functionality that are not being utilized.”

Green sees that kind of relationship building as crucial to being “cyber smart” and tackling the problem of technology sprawl, and she thinks vendors need to be open to letting customers shape the software they use, sitting on advisory boards, and even speaking to engineering teams directly.

As Beaumont said, security isn’t about the tools you can afford; it’s about the tools you can operate effectively.


US Navy ship Facebook page hijacked to stream video games

The official Facebook page of the US Navy’s destroyer-class warship, USS Kidd, has been hijacked. According to Task & Purpose, who first reported on the incident, the account has done nothing but stream Age of Empires, an award-winning, history-based real-time strategy (RTS) video game wherein players get to grow civilizations by progressing them from one historical time frame to another.

The official Facebook page of the USS Kidd, one of the US Navy’s warships. Its last post before the compromise, dated September 22, announced the ship’s return from a mission.

In an interview with Task & Purpose, Cmdr. Nicole Schwegman, a Navy spokesperson, confirmed the hijacking: “The official Facebook page for USS Kidd (DDG 100) was hacked. We are currently working with Facebook technical support to resolve the issue.”

As we write, the US Navy has yet to regain control of the account.

The hijacked account started streaming the video game live on October 4 for four hours. That session was followed by five more streams one after the other, each lasting for up to two hours.

USS Kidd Facebook account’s final stream before it went quiet. Note that “POSC” could be slang.

Yes, the poor fellow couldn’t get past the Stone Age.

Official accounts of the US military getting compromised is rare but not unheard of. A year ago, the administrator responsible for the Fort Bragg Twitter account forgot to switch from that account to his own personal Twitter account before posting lewd comments on a model’s page.

How to avoid Facebook hijacking

Whether you’re an organization or an individual who’d like to secure their accounts from such hijacking incidents, make sure you take full advantage of Facebook’s suite of security and privacy settings. Make sure you understand the settings for how your account is used, secured, and viewed by others. Don’t just accept the default settings.

And let us not forget passwords. Yes—make it a good, strong one. Better yet, let your password manager create and, well, manage all password-related tasks for you.
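For the curious, the generation half of that job is simple: what matters is drawing from a cryptographically secure random source. A sketch using Python’s secrets module (illustrative only; a real password manager also stores and fills the passwords for you):

```python
import secrets
import string

def generate_password(length=20):
    # Use a CSPRNG (the secrets module), never the random module,
    # for anything security-sensitive.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

A 20-character password drawn from this alphabet is far beyond practical brute-force reach, which is why letting software generate it beats anything memorable.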

Two-factor authentication is a relatively simple option to turn on for your Facebook account, and makes it much harder for anyone else to log in as you.

And if you manage a business’s social media accounts, please be mindful of the account you’re currently handling before pushing posts to the public. If it helps, use Twitter or Facebook in the browser for your business accounts and the Twitter or Facebook app for your personal accounts.

A social media disaster? Not today.


What special needs kids need to stay safe online

Online safety is hard enough for most adults. We reuse weak passwords, we click on suspicious links, and we love to share sensitive information that should be kept private and secure. (Just go back a few months to watch adults gleefully sharing photos of their vaccine cards.) The consequences of these failures are predictable and, for the most part, proportional—a hacked account, a visit to a scam website, maybe some suspicious texts asking for money.

But for an often-ignored segment of the population, online safety is more about discerning lies from truth and defending against predatory behavior. These are the threats posed specifically to children with special needs, who, depending on their disabilities, can have trouble understanding emotional cues and self-regulating their emotions and their relationship with technology.

This year, for National Cybersecurity Awareness Month, Malwarebytes Labs spoke with Alana Robinson, a special education technology and computer science teacher for K–8, to learn about the specific online risks posed to special needs children, how parents can help protect their children with every step, and how teachers can best educate special needs children through constant reinforcement, “gamification,” and tailored lessons built around their students’ interests.

Importantly, Robinson said that special needs education for online safety is not about a handful of best practices or tips and tricks, but rather a holistic approach to equipping children with the broad set of skills they will need to safely navigate any variety of risks online.

“Digital citizenship, information literacy, media literacy—these are all topics that need to be explicitly taught [to children with special needs],” Robinson said. “The difference is, as adults, we think that you should know this; you should know that this doesn’t make sense.”

Whether adults actually know those things, however, can be disputed.

“I mean, as I said,” Robinson added, “it is also challenging for adults.”

Our full conversation with Robinson took place on our podcast Lock and Code, with host David Ruiz.

The large risk of disinformation and misinformation

The risks posed to children online are often similar and overlapping, no matter a child’s disability. Cyberbullying, encountering predatory behavior, interacting with strangers, and posting too much information on social media platforms are all legitimate concerns.

But for children with behavioral challenges, processing challenges, and speech and language challenges in particular, Robinson warned about one enormous risk above all: The risk of not being able to discern fact from fiction online.

“Misinformation and disinformation online [are] a great threat to our students,” Robinson said. “There were many times [my students] would come in and say ‘I saw this online’ and we would get into discussions because they were pretty adamant that what they saw is correct.”

Those discussions have increased dramatically in frequency, Robinson said, as her students—and children all over the world—watch videos at an impossibly fast rate on platforms like YouTube, which, according to the company’s 2017 statistics, streams more than one billion hours of video a day. That video streaming firehose becomes a problem when those same platforms have to consistently play catch-up to stop the wildfire-like spread of disinformation and conspiracy theories online, as YouTube just did last week when it implemented new bans on vaccine misinformation.

“I have students pushing back and telling me, no, we never landed on the moon, that’s fake,” Robinson said. “These are the things they’re consuming on these platforms.”

To help her students understand how misinformation can spread so easily, Robinson said she shows them how it can be daylight outside her classroom, but at the same time, if she wanted, she could easily post a video online saying that it is instead nighttime outside her classroom.

Robinson said she also encourages her students to ask if they’re seeing these claims made elsewhere, and she steers them to what are called “norm-based reputable sources”—trustworthy websites that can provide fact-checks while also removing her students from the progression of recommended online videos that are fed to them through algorithms that prioritize engagement above all else.

“This is what we call building digital habits,” Robinson said, emphasizing the importance of digital literacy in today’s world.

Constant reinforcement

The promise of a “solution” to misinformation and disinformation online almost feels too good to be true, whether that solution equips special needs children with the tools necessary to investigate online sources or whether it helps adults without special needs defend against hateful content that is allegedly prioritized by one enormous technology company to boost its own profits.

So, when Robinson was asked directly whether these teaching models work, she said yes, but that the models require constant reinforcement from many other people in a child’s life.

Comparing digital literacy education to math education, Robinson said that every single year, students revisit the topics they learned the year before. She called this return to past topics “spiraling.”

“Part of developing digital students into really successful, smart, discernible, digital adults is the ongoing, constant spiraling and teaching of these concepts,” Robinson said. “If you can collaborate with other content area educators in your building, you’re infusing these topics through subject areas.”

Essentially, Robinson said, teaching online safety and cybersecurity to special needs children needs to be the responsibility of more than just a single technology teacher. It needs to be taken on by several subject matter educators and by parents at home.

For parents who want to know how they can help, Robinson suggested finding teaching moments in everyday, common mistakes. If a parent falls for a phishing scam, Robinson said, they can take it as an opportunity to teach their children about spotting online scams.

“It’s an ongoing work and it never stops,” Robinson said.

Teach kids about what they like using

To help special needs children understand and take interest in online safety education, Robinson said she always pays attention to what her students are using and what they’re interested in. This simple premise makes lessons both applicable and interesting to all students—not just those with special needs—and it provides a way for children to immediately understand what they’re learning, why they’re learning it, and how it can be applied.

As an example, since so many of her students watch videos on TikTok, Robinson spoke to her students last year about the US government’s reported plans to ban the enormously popular app.

“The federal government was thinking of not allowing TikTok to be used here because it might’ve been a safety risk, and so we had that discussion, and I said ‘What happens if you couldn’t use TikTok anymore?’” Robinson said.

Robinson said this tailored approach also gives teachers and parents an opportunity to help kids not just stay safe online, but also learn about the tools they use every day to view online content. The tools themselves, Robinson said, can greatly impact how a child with special needs feels on any given day—sad, happy, worried, scared, anything goes—and that children with special needs can often use guidance in self-regulating and understanding their own emotions.

Robinson added that many of her lessons about online tools and platforms have a similar message: If a game or website or tool makes her students feel uncomfortable, they should tell an adult.

It’s a rule that could likely help even adults when they find themselves gearing up to get into an online argument for little legitimate reason.

Embrace the game

Finally, Robinson said that many of her students enjoy using online games to learn about online safety, and she specifically mentioned Google’s Internet safety game called “Interland,” which parents can find here.

Google’s Interland leads kids through several short “games” on online safety, with lessons centered around the topics of “Share with Care,” “It’s Cool to Be Kind,” and “Don’t Fall for Fake.” The browser-based games ask kids to go through a series of questions with real scenarios, and each correct answer earns them points while their digital character jumps from platform to platform. The website works with most browsers, but Malwarebytes Labs found that it ran most smoothly on Google Chrome and Safari.

Interestingly, when it comes to lessons that Robinson’s special needs students excel at, she said they are excellent at creating strong passwords—and at calling people out for using weak ones.

“I teach 100 students, 10 classes, [and] I used not a very strong password for every student in this one class … and I said ‘By the way, everyone has this [password],’ and they’re like, when I said everyone has this same password, they’re like ‘Oh no no! That’s not a strong password, oooh,’” Robinson said, laughing. “They literally let me have it.”




Twitch compromised: What we know so far, and what you need to do

Update, 7th October: Twitch has now confirmed the breach. The company’s statement is as follows:

We have learned that some data was exposed to the internet due to an error in a Twitch server configuration change that was subsequently accessed by a malicious third party.

At this time, we have no indication that login credentials have been exposed. We are continuing to investigate.

Additionally, full credit card numbers are not stored by Twitch, so full credit card numbers were not exposed.

Original post:

Big, breaking news going around at the moment. If you have a Twitch account, you may wish to perform some security due diligence. There are multiple reports of the site being compromised. And they absolutely do mean compromised:

There’s no independent verification from Twitch itself yet. However, multiple people have confirmed that the leaked details, which include streamer revenue numbers, match what they have in fact generated.

What has happened?

A 128GB torrent was released on the 4chan message board. The poster claims it incorporates all of Twitch, including:

  • Source code for desktop, mobile, and console clients
  • 3 years of creator payouts
  • Some form of unreleased Steam competitor
  • Various bits of data on several Twitch properties
  • Internal security tools

The leak is marked as “part 1”. The current data appears to contain nothing in the way of passwords or related data, but passwords could potentially be included in whatever comes next. This is something we may well find out from Twitch if and when it makes a statement.

In the meantime, we’d strongly suggest taking some proactive steps.

What should Twitch users do?

Log into your Twitch account and change your password to something else. If you’ve used the password on other services then you need to change them there too. Then enable two-factor authentication on Twitch, if you’re not already using it.

One small reassurance regarding the leaking of passwords is that there hasn’t been any visible “strange” activity from big-name accounts. One would assume all sorts of dubious message shenanigans would follow in the wake of such a data grab. However, it’s possible that stolen passwords are being kept under lock and key until any such “part 2” arrives.

This makes it all the more crucial to take some action now and start locking things down.

We’ll be updating this post with more information as we get it, so if you’re a Twitch user please feel free to check back every so often.


Patch now! Apache fixes zero-day vulnerability in HTTP Server

The Apache HTTP Server 2.4.49 is vulnerable to a flaw that allows attackers to use a path traversal attack to map URLs to files outside the expected document root. If files outside of the document root are not protected by “require all denied”, these requests can succeed. This issue is known to be exploited in the wild.

The vulnerability

The Apache HTTP Server Project started out as an effort to develop and maintain an open-source HTTP server for modern operating systems, including UNIX and Windows. It aims to provide a secure, efficient, and extensible server that delivers HTTP services in sync with the current HTTP standards.

The flaw (listed as CVE-2021-41773) was introduced by a change made to path normalization in Apache HTTP Server 2.4.49. So, earlier versions are not vulnerable, nor are servers that are configured to “require all denied”.

Unfortunately, “require all denied” is off in the default configuration. This is the setting that typically shows an error that looks like this:

“Forbidden. You don’t have permission to access {path}.”

Path traversal attack

Path traversal attacks are done by sending requests to access backend or sensitive server directories that should be out of reach for unauthorized users. While normally these requests are blocked, the vulnerability allows an attacker to bypass the filters by using percent-encoded characters (such as %2e for a dot) in the URL.

Using this method an attacker could gain access to files such as CGI scripts that are active on the server, which could potentially reveal configuration details useful in further attacks.
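The general failure mode is a filter that checks the raw URL before percent-decoding it. A toy sketch (a hypothetical filter for illustration, not Apache’s actual code):

```python
from urllib.parse import unquote

def naive_filter(url_path):
    # The mistake: reject traversal sequences *before* percent-decoding.
    return "../" not in url_path

# Encoded dots (%2e) hide the "../" sequences from the raw-string check
payload = "/cgi-bin/.%2e/.%2e/.%2e/.%2e/etc/passwd"

print(naive_filter(payload))        # True  -- the raw path looks harmless
print("../" in unquote(payload))    # True  -- after decoding it climbs out of the root
```

The fix is the usual one: normalize and decode the path first, then apply access checks to the result.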

Impact

The Apache HTTP Server Project was launched in 1995, and it’s been the most popular web server on the Internet since April 1996. In August 2021 there were some 49 million active sites running on Apache server. Obviously we do not know which server every domain is using, but of the sites where we can identify the web server, Apache is used by 30.9%.

A Shodan search by Bleeping Computer showed that there are over a hundred thousand Apache HTTP Server 2.4.49 deployments online, many of which could be vulnerable to exploitation.

Security researchers have warned that admins should patch immediately.

Another vulnerability

There’s a second vulnerability tackled by this patch—CVE-2021-41524—a null pointer dereference detected during HTTP/2 request processing. This flaw allows an attacker to perform a denial of service (DoS) attack on the server. This requires a specially crafted request.

This flaw also only exists in Apache HTTP Server 2.4.49, but differs from the first vulnerability in that, as far as we know, it is not under active exploitation. It was discovered three weeks ago, fixed late last month, and the fix is included in version 2.4.50.

Mitigation

All users should install the latest version as soon as possible, but:

  • Users that have not installed 2.4.49 yet should skip this version in their update cycle and go straight to 2.4.50.
  • Users that have 2.4.49 installed should configure “require all denied” if they do not plan to patch quickly, since this blocks the attack that has been seen in the wild.

A full list of vulnerabilities in Apache HTTP Server 2.4 can be found here.

Stay safe everyone!


Windows 11 is out. Is it any good for security?

Windows 11, the latest operating system (OS) from Microsoft, launches today, and organizations have begun asking themselves when and if they should upgrade from Windows 10 or older versions. The requirements and considerations of each organization will be different, and many things will inform the decisions they make about whether to stick or twist. One of those things will be whether or not Windows 11 makes them safer and more secure.

I spoke to Malwarebytes’ Windows experts Alex Smith and Charles Oppermann to understand what’s changed in Windows 11 and what impact it could have on security.

A higher bar for hardware

If you’ve read anything about Windows 11 it’s probably that it will only run on “new” computers. Microsoft’s latest OS sets a high bar for hardware, with the aim of creating a secure platform for all that’s layered on top of it. In effect, Microsoft is making its existing Secured-core PC standards the new baseline, so that a range of technologies that are optional in Windows 10 are mandatory, or on by default, in Windows 11.

In reality the hardware requirements will only seem exacting for a short period. Moore’s Law and the enormous Windows install base mean that yesterday’s stringent hardware requirements will rapidly turn into today’s minimum spec.

Three of the new OS’s hardware requirements play major, interlocking roles in security:

All hail the hypervisor

At a minimum, Windows 11 requires a 64-bit, 1 GHz processor with virtualization extensions and at least two cores, and HVCI-compatible drivers. In practice that means it requires an 8th generation Intel processor, an AMD Zen 2, or a Qualcomm Snapdragon 8180.

This is because Virtualization Based Security (VBS) has become a keystone concept in Microsoft’s approach to security. VBS runs Windows on top of a hypervisor, which can then use the same techniques that keep guest operating systems apart to create secure spaces that are isolated from the main OS. Doing that requires hardware-based virtualization features, and enough horsepower that you won’t notice the drag on performance.

Noteworthy security features that rely on VBS include:

  • Kernel Data Protection, which uses VBS to mark some kernel memory as read only, to protect the Windows kernel and its drivers from being tampered with.
  • Memory Integrity (a more digestible name for HVCI), which runs code integrity checks in an isolated environment, which should provide stronger protection against kernel viruses and malware.
  • Application Guard, a protective sandbox for Edge and Microsoft Office that uses virtualization to isolate untrusted websites and office documents, limiting the damage they can cause.
  • Credential Guard runs the Local Security Authority Subsystem Service in a virtual container, which stops attackers dumping credentials and using them in pass-the-hash attacks.
  • Windows Hello Enhanced Sign-In uses VBS to isolate biometric software, and to create secure pathways to external components like the camera and TPM.

Unified Extensible Firmware Interface (UEFI)

UEFI is a specification for the firmware that controls the first stages of booting up a computer, before the operating system is loaded. (It’s a replacement for the more widely-known BIOS.) From a security standpoint, UEFI’s key feature is Secure Boot, which checks the digital signatures of the software used in the boot process. It protects against bootkits that load before the operating system, and rootkits that modify the operating system.

Trusted Platform Module 2.0 (TPM 2.0)

TPM is a tamper-resistant technology that performs cryptographic operations, such as creating and storing cryptographic keys, where they can’t be interfered with. It’s probably best known for its role in Secure Boot, which ensures computers only load trusted boot loaders, and in BitLocker disk encryption. In Windows 11 it forms the secure underpinning for a host of security features, including Secure Boot’s big brother, Measured Boot; BitLocker (Device Encryption on Windows Home); Windows Defender System Guard; Device Health Attestation; Windows Hello; and more.
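The primitive behind Measured Boot is the TPM’s PCR “extend” operation: each boot stage’s hash is folded into a register, so the final value depends on the whole chain. A simplified sketch (illustrative of the hash-chaining idea, not a real TPM interface):

```python
import hashlib

def pcr_extend(pcr, measurement):
    # TPM 2.0-style extend: new PCR = H(old PCR || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, stage)

# If any stage is altered, the final PCR value changes, which remote
# attestation (e.g. Device Health Attestation) can detect.
```

Because extend is one-way and order-sensitive, software can only add measurements, never rewrite history, which is what makes the final value trustworthy evidence of what booted.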

New in Windows 11

Windows 11 has some new tricks up its sleeve too.

Hardware-enforced Stack Protection

Windows 11 extends the Hardware-enforced Stack Protection introduced in Windows 10 so that it protects code running in kernel mode as well as in user mode. It’s designed to prevent control-flow hijacking by creating a “shadow stack” that mirrors the call stack’s list of return addresses. When control is transferred to a return address on the call stack it’s checked against the shadow stack to ensure it hasn’t changed. If it has, something untoward has happened and an error is raised.
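The check can be sketched in a toy model (purely illustrative; the real feature is enforced in hardware, where the shadow stack is inaccessible to normal writes):

```python
call_stack, shadow_stack = [], []  # in hardware, the shadow copy is write-protected

def on_call(return_addr):
    # Every CALL pushes the return address onto both stacks
    call_stack.append(return_addr)
    shadow_stack.append(return_addr)

def on_return():
    # Every RET compares the two; a mismatch means the call stack was tampered with
    addr, shadow = call_stack.pop(), shadow_stack.pop()
    if addr != shadow:
        raise RuntimeError("control-flow hijack detected")
    return addr

on_call(0x401000)
call_stack[-1] = 0x414141  # simulated buffer-overflow overwrite of the return address
try:
    on_return()
except RuntimeError as err:
    print(err)  # the mismatch with the shadow stack stops the hijack
```

An attacker who can corrupt the ordinary stack cannot also fix up the shadow copy, so the overwritten return address is caught before control transfers.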

Pluton

Windows 11 comes ready to embrace the impressively-named Pluton TPM architecture. It’s been a feature of the Xbox One gaming console since 2013, but doesn’t exist in PCs… yet.

Pluton sees the security chip built directly into the CPU, which prevents physical attacks that target the communication channel between the CPU and the TPM. And while Pluton is backwards-compatible with existing TPMs, it’ll do more if you let it. According to Microsoft, “Pluton also provides the unique Secure Hardware Cryptography Key (SHACK) technology that helps ensure keys are never exposed outside of the protected hardware, even to the Pluton firmware itself”.

Microsoft Azure Attestation (MAA)

No discussion about security in 2021 would be complete without somebody mentioning Zero Trust, so here it is. Windows 11 comes with out-of-the-box support for MAA, which can verify the integrity of a system’s hardware and software remotely. Microsoft says this will allow organizations to “enforce Zero Trust policies when accessing sensitive resources in the cloud”.

Evolution, not revolution

For several years, Microsoft’s approach to Windows security has been to create a chain of trust that ensures the integrity of the entire hardware and software stack, from the ground up. The latest version of Windows seeks to make that approach the default, and demands the hardware necessary to make it work. With Windows 11, Microsoft is making an aggressive attempt to raise the security floor of the PC platform, and that’s a good thing for everyone’s security.

Make no mistake: threat actors will adapt, as they have done before. Advanced Persistent Threat (APT) groups are well-funded enough to find a way through tough defences, ransomware gangs are notoriously good at finding the lowest-hanging fruit, and lucrative forms of social engineering like BEC remain stubbornly resistant to technology solutions.

And you can add to that the interlocking problems of increasing complexity, backwards compatibility, and technical debt. Operating systems and the applications they must support are behemoths, and while Microsoft pursues its laudable aim of eliminating entire classes of vulnerabilities, new bugs will appear and a lot of legacy code will inevitably come along for the ride.

Decisions about whether to adopt Windows 11 will doubtless be impacted by the fact it won’t run on a lot of otherwise perfectly good computers. We expect this to have a chilling effect on organizations’ willingness to migrate away from Windows 10.

And there are other headwinds too. These days, new Windows operating systems are rarely greeted with great enthusiasm unless they’re putting right the wrongs of a particularly disliked predecessor. The bottom line is that Windows 10 works and OS upgrades are painful, so it is difficult to imagine that anyone will conclude they need Windows 11.

Migration away from older versions of Windows is inevitable eventually, and by the time mainstream support for Windows 10 ends in October 2025, users will undoubtedly be more secure. But we expect organizations to move away from Windows 10 slowly, which will delay the undoubted security benefits that will follow from wide-scale adoption of Windows 11.

The post Windows 11 is out. Is it any good for security? appeared first on Malwarebytes Labs.

Criminals were inside Syniverse for 5 years before anyone noticed

“A global privacy disaster”, “espionage gold”, and “a state-sponsored wet dream” are just some of the comments one can read regarding the breach at Syniverse, a key player in the tech/telecommunications industry that calls itself the “center of the connected world.”

In a filing with the US Securities and Exchange Commission, Syniverse said the breach affected more than 200 of its clients, which collectively serve billions of cellphone users worldwide. Syniverse’s clients include Verizon, AT&T, T-Mobile, Vodafone, China Mobile, Telefonica, and America Movil, to name a few.

The company revealed that it first noticed the breach in May 2021, but that the access had begun in May 2016—a whole five years before.

According to Motherboard, which first wrote about this story, Syniverse receives, processes, stores, and transmits electronic customer information, which includes billing information among carriers globally, records about calls and data usage, and other potentially sensitive data. It processes more than 740 billion SMS messages per year, routing text messages between users of two different carriers (both in the US and abroad).

The filing said that “Syniverse’s investigation revealed that the individual or organization gained unauthorized access to databases within its network on several occasions, and that login information allowing access to or from its Electronic Data Transfer (“EDT”) environment was compromised for approximately 235 of its customers.”

In an email interview with Motherboard, Karsten Nohl, a security researcher, is quoted as saying: “Syniverse systems have direct access to phone call records and text messaging, and indirect access to a large range of Internet accounts protected with SMS 2-factor authentication. Hacking Syniverse will ease access to Google, Microsoft, Facebook, Twitter, Amazon and all kinds of other accounts, all at once.”

A telecom industry insider who spoke to Motherboard said: “With all that information, I could build a profile on you. I’ll know exactly what you’re doing, who you’re calling, what’s going on. I’ll know when you get a voicemail notification. I’ll know who left the voicemail. I’ll know how long that voicemail was left for. When you make a phone call, I’ll know exactly where you made that phone call from.”

“I’ll know more about you than your doctor.”

Motherboard asked Syniverse whether the hackers had accessed or stolen personal data on cellphone users, but Syniverse declined to answer. 

Syniverse said all EDT customers have had their credentials reset or inactivated, whether they were part of the breach or not. The company says no further action is required from those customers.

“We have communicated directly with our customers regarding this matter and have concluded that no additional action is required. In addition to resetting customer credentials, we have implemented substantial additional measures to provide increased protection to our systems and customers.” 

The post Criminals were inside Syniverse for 5 years before anyone noticed appeared first on Malwarebytes Labs.

Facebook shoots own foot, hits Instagram and WhatsApp too

Mark Zuckerberg was left counting the personal cost of bad PR yesterday (about $6 billion, according to Bloomberg) on a day when his company couldn’t get out of the news headlines, for all the wrong reasons.

The billionaire Facebook CEO’s bad day at the office started with whistleblower Frances Haugen finally revealing her identity in a round of interviews that looked set to lay siege to the Monday headlines. Anonymous revelations by the former Facebook product manager had fuelled an entire Wall Street Journal series about the harm inflicted or ignored by Instagram and Facebook, and her unmasking was its denouement. It was supposed to be big news, and for a while it was.

But then something even bigger happened.

Facebook, Instagram, and WhatsApp completely disappeared. For six hours.

Despite losing access to the world’s favourite confirmation bias apparatus, conspiracy theorists didn’t miss a beat. Putting two and two together to make five, they decided that it was all too convenient and that Facebook was using the dead cat strategy to rob Haugen of the spotlight!

It was a convenient theory, but there is no evidence for it besides an interesting coincidence, and it ignores the fact that Facebook taking itself out to silence a whistleblower is a far more interesting story than Facebook simply taking itself out by accident. I’m afraid that in the absence of more compelling information, Hanlon’s Razor will have to suffice: “Never attribute to malice that which is adequately explained by stupidity”.

BGP

What we can say for sure is that Facebook took itself and its stablemates out with a spectacular self-inflicted wound, in the form of a toxic Border Gateway Protocol (BGP) update.

The Internet is a patchwork of hundreds of thousands of separate networks, called Autonomous Systems, that are stitched together with BGP. To route data across the Internet, Autonomous Systems need to know which IP addresses other Autonomous Systems either control or can route traffic to. They share this information with each other using BGP.

According to Cloudflare—which has published an excellent explanation of what it saw—Facebook’s trouble started when its Autonomous System issued a BGP update withdrawing routes to its own DNS servers. Without DNS servers, the address facebook.com stopped working. In Cloudflare’s words: “With those withdrawals, Facebook and its sites had effectively disconnected themselves from the Internet.”
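The mechanics of that withdrawal can be illustrated with a much-simplified model of a BGP routing table. This is a sketch, not real BGP (which involves path attributes, peering sessions, and propagation between routers); the prefix and AS number below are illustrative stand-ins for Facebook’s.

```python
import ipaddress

class RouteTable:
    """A single router's view: which AS originates which prefix."""

    def __init__(self):
        self.routes = {}  # network prefix -> origin AS number

    def announce(self, prefix: str, origin_as: int):
        self.routes[ipaddress.ip_network(prefix)] = origin_as

    def withdraw(self, prefix: str):
        self.routes.pop(ipaddress.ip_network(prefix), None)

    def lookup(self, address: str):
        """Longest-prefix match, as real routers do."""
        ip = ipaddress.ip_address(address)
        matches = [p for p in self.routes if ip in p]
        if not matches:
            return None  # no route: packets to this address are dropped
        return self.routes[max(matches, key=lambda p: p.prefixlen)]

table = RouteTable()
table.announce("129.134.30.0/24", 32934)  # a prefix holding DNS servers
assert table.lookup("129.134.30.12") == 32934

# A bad update withdraws the route to the DNS servers...
table.withdraw("129.134.30.0/24")
assert table.lookup("129.134.30.12") is None  # ...and they vanish
```

Once no router on the Internet has a route to the DNS servers, nobody can resolve facebook.com, even though the web servers themselves are still running.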

Cloudflare appears to have noticed the problem almost straight away, so we can assume that Facebook did too. So why did it take six more hours to fix it? The social media scuttlebutt, later confirmed in Facebook’s own terse explanation, was that the outage disabled the very tools Facebook’s enormous number of remote workers would normally rely on to both communicate with each other and to fix the problem.

The underlying cause of this outage also impacted many of the internal tools and systems we use in our day-to-day operations, complicating our attempts to quickly diagnose and resolve the problem.

The unconfirmed part of the same scuttlebutt is that Facebook is so 21st century that folks were locked out of offices, and even a server room, which had to be entered forcibly in order to fix the configuration issue locally.

Of course that could just be another conspiracy theory, but as somebody who has themselves been stranded outside a building, forced to look through a glass door at the very computer that controls that door attempting and failing to boot from the broken network I had come to investigate, let me assure you that it’s not an outrageous suggestion.

The Facebook Empire withdrawing itself from the Internet didn’t stop people looking for it though. In fact, it made them look much, much harder (just imagine everyone, everywhere, frustrated, hitting “refresh” or reinstalling Instagram until they’re bored, and you get the idea). Unanswered DNS requests spiked, and DNS resolvers groaned, as computers groped around in the dark looking for the now non-existent facebook.com domains.

When they weren’t pummelling DNS resolvers, the rest of the Facebook diaspora was forced to find other forms of entertainment or other means of communication. Some local mobile phone operators reported being overwhelmed, and encrypted messaging app Signal said it welcomed “millions” of new users as people looked for alternatives to WhatsApp.

And let’s not forget that there are companies that rely on Facebook, Instagram, and WhatsApp to drive business, and there are services that use Facebook logins for authentication. And then there’s the influencers. All of them had to stop. For six hours. Won’t somebody think of the influencers?

When it finally sank in that nobody could use Facebook, Instagram, or WhatsApp, it started to dawn on us all just how much so many of us have put Facebook and its products at the centre of our lives.

And then we all went to Twitter to tell everyone else how good or bad it all was. Thankfully, it withstood the onslaught.

Which leads us to the “so what?” part of our story. This is a security blog after all, and if this wasn’t a cyberattack you may be wondering what all of this has to do with security. Where’s the lesson in all of this?

Single points of failure, people.

That’s it. That’s the tweet.

The post Facebook shoots own foot, hits Instagram and WhatsApp too appeared first on Malwarebytes Labs.

Does Cybersecurity Awareness Month actually improve security?

October is Cybersecurity Awareness Month, formerly known as National Cybersecurity Awareness Month. The idea is to raise awareness about cybersecurity, and provide resources for people to feel safer and more secure online.

The month is a collaboration between the Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Alliance (NCSA) and it focusses on four themes, in turn: “Be Cyber Smart”, “Phight the Phish”, “Explore. Experience. Share”, and “Cybersecurity First”. Some of these are perhaps a little interchangeable or vague, but it’s certainly a dedicated effort. The question is, is anybody listening?

Cybersecurity Awareness Month is a fixture of the calendar now, as are Data Privacy Day, World Password Day, and a host of other well-intentioned privacy and security themed events. There are so many of them now, and they come around so often, that some of the Malwarebytes Labs team were feeling a little jaded about this month’s event.

So, in the spirit of the event’s first theme, “Be Cyber Smart”, we asked two of our Malwarebytes Labs blog team, Chris and Jovi, whether the smart thing to do was to forget about it altogether.

The pros and cons of awareness campaigns

Jovi: I don’t see that anyone can have a problem with events such as this. It’s good to have regular reminders about our responsibility to keep ourselves and our families safe. It’s also a good opportunity to learn something new about security and privacy.

Chris: I mean, are they really learning something new? From experience, the content in these events doesn’t tend to differ much from year to year. A lot of it is the same basic information you see on mainstream news reports, or blogs. I’ve been involved with events like this since 2005, and one time at a panel with reps from the FTC and the NYAG…

(several minutes of completely unrelated factoids from the dawn of time follow)

Jovi: …I’m surprised that didn’t end with you tying an onion to your belt.

Chris, oblivious to onions: If it was worthwhile, you’d think there’d be some tangible, visible improvement in security by this point. Or at least a bunch of people saying “Wow, that ‘event-name-goes-here’ really helped me with this one problem I had. Hooray for ‘event-name-goes-here’.”

Jovi: True, but then again, not everyone sees every relevant news report or even reads blogs. Some people get a lot of their security information from sources like Twitter, direct from infosec pros. Who then end up directing them to events like this anyway. There’s always a churn of new people who haven’t seen any of this before, so I don’t think it’s a problem to repeat some of the basics every year. Not everything has to be groundbreaking. If it’s easy to understand and helpful, that’s okay too.

Chris: Possible, but I also think many people have burnout from this kind of thing. How many times can you hear a major event, backed by Homeland Security, say “watch out for suspicious links” before you start to demand something a bit more involved? Admittedly, we don’t know what specifically is going to be covered during the month itself yet. It might be a mix of basic information and more complicated processes, which would be great! Another major event saying “don’t run unknown files”, though? Do we really need that? Or is there still a place for it?

Jovi: I once again direct you to “a churn of new people who haven’t seen any of this before”.

Chris: Ouch.

Jovi: You may be right about the fatigue aspect, though. I imagine it’s likely very difficult for anyone to really care that much about a month-long event. If you’re directly involved in some way, then fine. If you’re one of the many random people it’s aimed at? I think it’s probable they simply won’t care very much by week 3.

Chris: It may also be exacerbated if the thing they really want to do or look at happens during the final week. Will they even remember to go back by the tail-end of October to check it out?

Jovi: This is where the web resources for the event will be crucial, alongside lots of activity on social media. Handy little reminders to go back and check it out will work wonders.

Chris: Might work wonders.

Jovi: Ouch.

Chris: One novel thing I’ll definitely highlight is that they’re doing a whole bit about careers in tech. This is good. Not every event does this. There’s a lot of resources available and the opportunity for security companies, researchers, and anyone else to give tips on how to break into the industry. This will be particularly helpful for students about to graduate, and people thinking about a change in career.

Jovi: I’m mostly interested in the phishing week. You can’t go wrong with phish advice, especially when so many people are still working from home and potentially isolated from their security teams.

Chris: Is that any better than any other event doing a phish week though?

Jovi: It certainly doesn’t hurt to have them. I reckon big organisations and governments saying “we’re interested in this and you should be too” ultimately helps more than it hurts. We’d definitely feel their absence.

Chris: I’ll give you that. I’m not 100% convinced these events are making as much impact as some may think. This is what, the 18th one of these now? I’d be interested to know what the organisers think about how successful they are, what difference they’ve made. Even so, you’re likely right that we’re better served by having them than not at all.

Jovi: Amazing—did we finally agree?

Chris: Yes, please inform the DHS I’ve given permission for the event to go ahead.

Jovi: I’m sure they’ll be relieved.

Chris: This somehow feels like sarcasm.

Jovi: Definitely not.

Winding down

Whether you think events like this are a big boon to security discourse or too much like repeating ourselves for diminishing returns, they’re here to stay. We can all play a part in ensuring these annual reminders stay relevant. Whether you’re flying solo at home, part of an organisation, a security vendor, an SME, or a collection of interested students? Get involved!

Let the organisers know what you’d most like to see—if not at this event, then perhaps the next one. If these awareness campaigns exist in a vacuum, they’ll assume they’re getting everything right. Let’s help them along to fix the bits we’re not sure about and make it work for everyone.

The post Does Cybersecurity Awareness Month actually improve security? appeared first on Malwarebytes Labs.