IT NEWS

Update now! Apple patches another privilege escalation bug in iOS and iPadOS

Apple has released a security update for iOS and iPadOS that addresses a critical vulnerability reportedly being exploited in the wild.

The update has been made available for iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation).

The vulnerability

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-30883 and allows an application to execute arbitrary code with kernel privileges, by way of a memory corruption issue in the “IOMobileFrameBuffer” component.

Kernel privileges are a serious matter as they offer an attacker more than administrator privileges. In kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system.

Researchers have already found that this vulnerability is exploitable from the browser, which makes it extra worrying.

Watering holes are used as a highly targeted attack strategy. The attacker infects a website that they know the intended victim(s) visit regularly. Depending on the nature of the infection, the attacker can single out their intended target(s) or simply infect anyone who visits the site unprotected.

IOMobileFrameBuffer

IOMobileFramebuffer is a kernel extension for managing the screen framebuffer. An earlier vulnerability in this extension, listed as CVE-2021-30807 was tied to the Pegasus spyware. This vulnerability also allowed an application to execute arbitrary code with kernel privileges. Coincidence? Or did someone take the entire IOMobileFramebuffer extension apart and save up the vulnerabilities for a rainy day?

Another iPhone exploit called FORCEDENTRY was found to be used against Bahraini activists to launch the Pegasus spyware. Researchers at Citizen Lab disclosed this vulnerability and code to Apple, and it was listed as CVE-2021-30860.

Undisclosed

As is usual for Apple, both the researcher who found the vulnerability and the circumstances under which it was used in the wild are being kept secret. Apple didn’t respond to a query about whether the previously found bug was being exploited by NSO Group’s Pegasus surveillance software.

Zero-days for days

Over the last few months Apple has had to close quite a few zero-days in iOS, iPadOS, and macOS. Seventeen, if I have counted correctly.

  • CVE-2021-1782 – iOS-kernel: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1870 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1871 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1879 – WebKit: Processing maliciously crafted web content may lead to universal cross site scripting. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30657 – Gatekeeper: A malicious application may bypass Gatekeeper checks. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30661 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30663 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution.
  • CVE-2021-30665 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30666 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30713 – TCC: A malicious application may be able to bypass Privacy preferences. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30761 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30762 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30807 – IOMobileFrameBuffer: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. Tied to Pegasus (see above).
  • CVE-2021-30858 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30860 – CoreGraphics: Processing a maliciously crafted PDF may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. This is FORCEDENTRY (see above).
  • CVE-2021-30869 – XNU: A malicious application may be able to execute arbitrary code with kernel privileges. Reportedly being actively exploited by attackers in conjunction with a previously known WebKit vulnerability.

And last but not least, the latest addition—CVE-2021-30883—which means that of the 17 zero-days that were fixed over the course of a handful of months, at least 16 were found to be actively exploited.

Update

Apple advises users to update to iOS 15.0.2 and iPadOS 15.0.2, which can be done through the automatic update function or iTunes.

Stay safe, everyone!

The post Update now! Apple patches another privilege escalation bug in iOS and iPadOS appeared first on Malwarebytes Labs.

Ransom Disclosure Act would mandate ransomware payment reporting

In an effort to better understand and clamp down on the ransomware economy and its related use of cryptocurrencies, US Senator and past presidential hopeful Elizabeth Warren and US House Representative Deborah Ross introduced a new bill last week that would require companies and organizations to report any paid ransomware demands to the Secretary of the Department of Homeland Security.

“Ransomware attacks are skyrocketing, yet we lack critical data to go after cybercriminals,” said Senator Warren in a prepared release. “My bill with Congresswoman Ross would set disclosure requirements when ransoms are paid and allow us to learn how much money cybercriminals are siphoning from American entities to finance criminal enterprises—and help us go after them.”

If passed, the “Ransom Disclosure Act” would require a broad set of companies, local governments, and nonprofits that actually pay off ransomware demands to report those payments to the government. Companies would need to report this information within 48 hours of paying a ransom.

Specifically, those affected by the bill would need to tell the Secretary of the Department of Homeland Security:

  • The date on which such ransom was demanded
  • The date on which such ransom was paid
  • The amount of such ransom demanded
  • The amount of such ransom paid

Companies would also need to disclose what currency they paid the ransom in, including whether the payment was made with any cryptocurrency. Companies would also have to offer “any known information regarding the identity of the actor demanding such ransom.”
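Sketched as data, the required disclosure might look like the record below. The field names are illustrative, not taken from the bill's text; only the listed data points and the 48-hour reporting window come from the bill.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RansomDisclosure:
    """Illustrative record of the data points the bill would require."""
    date_demanded: datetime   # when the ransom was demanded
    date_paid: datetime       # when the ransom was paid
    amount_demanded: float    # amount demanded
    amount_paid: float        # amount actually paid
    currency: str             # e.g. "USD" or "BTC" (crypto or not)
    actor_info: str           # any known info on the actor demanding ransom

    @property
    def report_deadline(self) -> datetime:
        # Under the bill, reports are due within 48 hours of payment
        return self.date_paid + timedelta(hours=48)

d = RansomDisclosure(
    date_demanded=datetime(2021, 10, 1),
    date_paid=datetime(2021, 10, 3),
    amount_demanded=5_000_000,
    amount_paid=4_400_000,
    currency="BTC",
    actor_info="unknown",
)
print(d.report_deadline)  # 2021-10-05 00:00:00
```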

The bill’s focus on cryptocurrencies acknowledges the technology’s core role in ransomware today; for years, virtually every major ransomware payment has been made in crypto. But that reliance seems to finally be catching up with ransomware criminals: while cryptocurrency provides reasonably good pseudonymity, it also leaves a detailed, permanent record of transactions, and international police are getting very good at following that record.

In June, the US Department of Justice announced that, after following a series of cryptocurrency transactions across cyberspace, it eventually retrieved much of the ransomware payment that Colonial Pipeline paid to recover from its own ransomware attack in May. And earlier in October, Europol said it provided “crypto-tracing support” when the FBI, the French National Gendarmerie, and the Ukrainian National Police seized $375,000 in cash and another $1.3 million in cryptocurrencies during related arrests against “two prolific ransomware operators known for their extortionate ransom demands (between €5 to €70 million).”

This work, while encouraging, still largely happens in the dark, as ransomware payments made by companies are kept considerably private. The Ransom Disclosure Act seeks to shine a light on that darkness to better aid the fight. Said US House Representative Ross:

“Unfortunately, because victims are not required to report attacks or payments to federal authorities, we lack the critical data necessary to understand these cybercriminal enterprises and counter these intrusions.”

The Ransom Disclosure Act would also require the Secretary of Homeland Security to develop penalties for non-compliance and to, one year after the passage of the bill, publish a database on a public website that includes ransom payments made in the year prior. That database must be accessible to the public, and it must include the “total dollar amount of ransoms paid” by companies, but the companies’ identifying information must be removed. The information gleaned from the incoming reports must also be packaged into a study by the Secretary of Homeland Security that specifically explores “the extent to which cryptocurrency has facilitated the kinds of attacks that resulted in the payment of ransoms by covered entities,” and the Secretary of Homeland Security must also then present the findings of that study to Congress.

Finally, according to the bill, individuals who make ransomware payments after personally being hit with ransomware must also have a way to voluntarily report their information to the government if they so choose.


Inside Apple: How macOS attacks are evolving

The start of fall 2021 saw the fourth Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Conferences like this are important for understanding how attackers and their methods are evolving. Like all operating systems, macOS presents a moving target to attackers as it acquires new features and new forms of protection over time.

OBTS was a great opportunity to see how attacks against macOS are evolving. Here’s what I learned.

Transparency, Consent, and Control bypasses

Transparency, Consent, and Control (TCC) is a system for requiring user consent to access certain data, via prompts confirming that the user is okay with an app accessing that data. For example, if an app wants to access something like your contacts or files in your Documents folder on a modern version of macOS, you will be asked to allow it before the app can see that data.

A TCC prompt asking the user to allow access to the Downloads folder

In recent years, Apple has been ratcheting down the power of the root user. Once upon a time, root was like God—it was the one and only user that could do everything on the system. It could create or destroy, and could see all. This hasn’t been the case for years, with things like System Integrity Protection (SIP) and the read-only signed system volume preventing even the root user from changing files across a wide swath of the hard drive.

TCC has been making inroads in further reducing the power of root over users’ data. If an app has root access, it still cannot even see—much less modify—a lot of the data in your user folder without your explicit consent.

This can cause some problems. For example, antivirus software such as Malwarebytes needs to be able to see everything it can in order to best protect you. But even though some Malwarebytes processes are running with root permissions, they still can’t see some files. Thus, apps like this often have to require the user to give a special permission called Full Disk Access (FDA). Without FDA, Malwarebytes and other security apps can’t fully protect you, but only you can give that access.

This is generally a good thing, as it puts you in control of access to your data. Malware often wants access to your sensitive data, either to steal it or to encrypt it and demand a ransom. TCC means that malware can’t automatically gain access to your data if it gets onto your system, and may be a part of the reason why we just don’t see ransomware on macOS.

TCC is a bit of a pain for us, and a common point of difficulty for users of our software, but it does mean that we can’t get access to some of your most sensitive files without your knowledge. This is assuming, of course, that you understood the FDA prompts and what you were agreeing to, which is debatable. Apple’s current process for assigning FDA doesn’t make that clear, and leaves it up to the app asking for FDA to explain the consequences. This makes tricking a user into giving access to something they shouldn’t pretty easy.

However, social engineering isn’t the only danger. Many researchers presenting at this year’s conference talked about bugs that allowed them to get around TCC in macOS without getting user consent.

Andy Grant (@andywgrant) presented a vulnerability in which a remote attacker with root permissions can grant a malicious process whatever TCC permissions are desired. The process involves creating a new user on the system, then using that user to grant the permissions.

Csaba Fitzl (@theevilbit) gave a talk on a “Mount(ain) of Bugs,” in which he discussed another vulnerability involving mount points for disk image files. Normally, when you connect an external drive or double-click a disk image file, the volume is “mounted” (in other words, made available for access) within the /Volumes directory. For example, if you connect a drive named “backup”, it becomes accessible on the system at /Volumes/backup. This is the disk’s “mount point.”

Title slide of Csaba Fitzl’s “Mount(ain) of Bugs” talk

Csaba was able to create a disk image file containing a custom TCC.db file. This file is a database that controls the TCC permissions that the user has granted to apps. Normally, the TCC.db file is readable, but cannot be modified by anything other than the system. However, by mounting this disk image while also setting the mount point to the path of the folder containing the TCC.db file, he was able to trick the system into accepting his arbitrary TCC.db file as if it were the real one, allowing him to change TCC permissions however he desired.
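To make the attack concrete: TCC.db is an ordinary SQLite database. The sketch below builds a toy, heavily simplified stand-in (the real schema has many more columns, varies between macOS versions, and the real file is protected by the system) and shows the kind of row an attacker-controlled TCC.db would contain.

```python
import sqlite3

# Toy stand-in for TCC.db's "access" table. The real schema is more
# complex and version-dependent; this only illustrates the idea.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE access (service TEXT, client TEXT, allowed INTEGER)")

# A crafted TCC.db mounted over the real one could simply ship a row
# like this, granting an attacker's app Full Disk Access.
db.execute(
    "INSERT INTO access VALUES "
    "('kTCCServiceSystemPolicyAllFiles', 'com.attacker.app', 1)"
)

for service, client, allowed in db.execute("SELECT * FROM access"):
    print(service, client, "allowed" if allowed else "denied")
```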

There were other TCC bypasses mentioned as well, but perhaps the most disturbing is the fact that there’s a fairly significant amount of highly sensitive data that is not protected by TCC at all. Any malware can collect that data without difficulty.

What is this data, you ask? One example is the .ssh folder in the user’s home folder. SSH is a program used for securely gaining command-line access to a remote Mac, Linux, or other Unix system, and the .ssh folder is where the keys used to authenticate those connections are stored. This makes the data in that folder a high-value target for an attacker looking to move laterally within an organization.

There are other similar folders in the same location that can contain credentials for other services, such as AWS or Azure, which are similarly wide open. Also unprotected are the folders where data is stored for any browser other than Safari, which can include credentials if you use a browser’s built-in password manager.
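A minimal sketch of what “wide open” means in practice: any process, with no TCC prompt at all, can simply look in the conventional credential locations. The paths below are the well-known defaults; this sketch only checks for their presence.

```python
from pathlib import Path

home = Path.home()

# Conventional credential locations that TCC does not protect
unprotected = [
    home / ".ssh",    # SSH keys for remote access
    home / ".aws",    # AWS CLI credentials
    home / ".azure",  # Azure CLI tokens
]

for path in unprotected:
    print(path, "->", "present" if path.exists() else "absent")
```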

Now, admittedly, there could be some technical challenges to protecting some or all of this data under the umbrella of TCC. However, the average IT admin is probably more concerned about SSH keys or other credentials being harvested than about an attacker being able to peek inside your Downloads folder.

Attackers are doing interesting things with installers

Installers are, of course, important for getting malware onto a system. Often, users must be tricked into opening something in order to infect their machine. Presenters discussed a variety of techniques that attackers use to do this.

One common method for doing this is to use Apple installer packages (.pkg files), but this is not particularly stealthy. Knowledgeable and cautious folks may choose to examine the installer package, as well as the preinstall and postinstall scripts (designed to run exactly when you’d expect by the names), to make sure nothing untoward is going on.

However, citing an example from the recent Silver Sparrow malware, Tony Lambert (@ForensicITGuy) discussed a sneaky method for getting malware installed: the oft-overlooked Distribution file.

The Distribution file is found inside Apple installer packages and is meant to convey information and options for the installer. However, JavaScript code can also be inserted in this file to run at the beginning of the installation, ostensibly to determine whether the system meets the requirements for the software being installed.
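A stripped-down Distribution file shows where that JavaScript lives. The `installation-check` hook below is Apple's documented mechanism for pre-install requirement checks; the contents here are simplified for illustration.

```xml
<?xml version="1.0" encoding="utf-8"?>
<installer-gui-script minSpecVersion="1">
    <title>Example Package</title>
    <!-- Runs as soon as the user clicks Continue in the installer,
         before any files are copied or preinstall scripts run -->
    <installation-check script="checkRequirements()"/>
    <script>
    function checkRequirements() {
        // Legitimate use: verify OS version, free disk space, etc.
        // Silver Sparrow abused this stage to fetch and run its payload.
        return true;
    }
    </script>
</installer-gui-script>
```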

In the case of Silver Sparrow, however, the installer used this script to download and install the malware covertly. If you clicked Continue in the dialog shown below, you’d be infected even if you then opted not to continue with the installation.

An Apple installer asking the user to allow a program to run to determine if the software can be installed.
Click Continue to install malware

Another interesting trick Tony discussed was the use of payload-free installers. These are installers that actually don’t contain any files to be installed, and are really just a wrapper for a script that does all the installation (likely via the preinstall script, but also potentially via Distribution).

Normal installer scripts will leave behind a “receipt,” which is a file containing a record of when the installation happened and what was installed where. However, installers that lack an official payload, and that download everything via scripts, do not leave behind such a receipt. This means that an IT admin or security researcher would be missing key information that could reveal when and where malware had been installed.

Chris Ross (@xorrior) discussed some of these same techniques, but also delved into installer plugins. These plugins are used within installer packages to create custom “panes” in the installer. (Most installers go through a specific series of steps prescribed by Apple, but some developers add additional steps via custom code.)

These installer plugins are written in Objective-C, rather than scripting languages, and therefore can be more powerful. Best of all, these plugins are very infrequently used, and thus are likely to be overlooked by many security researchers. Yet Chris was able to demonstrate techniques that could be used by such a plugin to drop a malicious payload on the system.

Yet another issue was presented in Cedric Owens’ (@cedowens) talk. Although not related to an installer package (.pkg file), a vulnerability in macOS (CVE-2021-30657) could allow a Mac app to entirely bypass Gatekeeper, which is the core of many of Apple’s security features.

On macOS, any time you open an app downloaded from the Internet, you should at a minimum see a warning telling you that you’re opening an app (in case it was something masquerading as a Word document, or something similar). If there’s anything wrong with the app, Gatekeeper can go one step further and prevent you from opening it at all.

By constructing an app that was missing some of the specific components usually considered essential, an attacker could create an app that was fully functional, but that would not trigger any warnings when launched. (Some variants of the Shlayer adware have been seen using this technique.)
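As a hypothetical illustration (not Apple's actual checks), an audit script might flag app bundles that lack an Info.plist, one of the normally essential components whose absence was central to this bypass:

```python
import tempfile
from pathlib import Path

def has_info_plist(bundle: Path) -> bool:
    """Hypothetical audit helper: flag .app bundles missing an Info.plist,
    one of the normally essential components whose absence let crafted
    apps slip past Gatekeeper in CVE-2021-30657."""
    return (bundle / "Contents" / "Info.plist").is_file()

# Build a deliberately incomplete bundle layout to demonstrate
bundle = Path(tempfile.mkdtemp()) / "Fake.app"
(bundle / "Contents" / "MacOS").mkdir(parents=True)

print(has_info_plist(bundle))  # False: the shape such crafted apps used
```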


A week in security (Oct 4 – Oct 10)

Last week on Malwarebytes Labs

Other cybersecurity news


Google warns some users that FancyBear’s been prowling around

APT28, also known as FancyBear, is at the heart of another targeted campaign. This time, it’s sniffing around users of Google services. Some 14,000 people have been notified about a spear phishing attempt looking to compromise their accounts and access their files.

When did this happen?

Sometime in late September, according to the folks at Google. They didn’t go into detail about which industries were key targets, but this campaign “comprised 86% of the batch of warnings we sent for this month”.

Did Google catch all the malicious missives?

Shane Huntley, Director of Google’s Threat Analysis Group, said that Google blocked all of the emails sent, which seems pretty conclusive. He went into more detail in a Twitter thread.

In his view, these warnings are primarily a prompt to batten down the hatches ahead of the next attack, whenever that might be.

Google has more information on this type of warning over on its security blog. If you ever see the below message, it’s definitely time to take action:

Government backed attackers may be trying to steal your password.

There’s a chance this is a false alarm, but we believe we detected government-backed attackers trying to steal your password. This happens to less than 0.1% of all Gmail users. We can’t reveal what tipped us off because the attackers will take note and change their tactics, but if they are successful at some point they could access your data or take other actions using your account.

Google recommends those affected join its Advanced Protection Program, which it says is its strongest protection for users at risk of targeted attacks.

What is the Advanced Protection Program?

Google’s Advanced Protection Program is another layer of security on top of regular Google protection, for those who need it. Physical security keys are a big feature of the program. The Chrome browser also scans any and all files downloaded to a device, and on Android the program blocks apps from untrusted or unknown sources and makes it more difficult for rogue apps to gain permissions on the device.

What else is Google doing in this realm?

Well, Google is very much about auto-enrolment for things like 2FA these days. Take-up on 2FA is quite low across many services on the web, and something like this can only help boost everyone’s security a bit more.

There’s also Google’s Security Checkup feature. At a glance, this will tell you about logged in devices, recent security activity, whether or not you have 2FA enabled, and your Gmail settings including which addresses you may have blocked. Many of the tabs reveal more and more information as you go. The 2-step column will tell you about phones using sign-in prompts, which Authenticator app you’re using and when it was added, phone numbers, and backup codes.

Don’t forget, you can also see a list of IP addresses using your Gmail account on the desktop in the bottom right hand corner (“last account activity”). This shows the type of access (web? mobile?), location/IP address, and the date/time of said activity.

These are all useful things to help ward off compromise, and also perhaps figure out where something might have gone wrong.

Should I be worried?

As noted above, for most people the risk from something like FancyBear is as good as negligible. If you work in a high-risk occupation, or deal with sensitive data you feel governments may be interested in, then yes, you could potentially be a target, though even then the odds are slim. If you’re a journalist, an activist, a lawyer, or work in human rights or some form of national security role, you may want to sign up for the Advanced Protection Program.

Everyone else should realistically be more concerned about common-or-garden malware, scams, phishes, and so on. The good news is that a lot of the basic security practices that help ward off these attacks will also go some way towards warding off the big stuff. There’s no downside to adopting them; it’s win-win.

Do yourself a favour, and start digging through the multitude of security features Google has available. You’ll be surprised how easy it is to set most of it up, and you’ll be strengthening the security of your data at the same time.


Firefox reveals sponsored ad “suggestions” in search and address bar

Mozilla is trying a novel experiment in striking a balance between ad revenue generation and privacy protection by implementing a new way to deliver ads in its Firefox web browser: presenting them as “suggestions” whenever users type into the dual-use search and URL address bar.

The advertising experiment lies within a feature called “Firefox Suggest,” which was announced in September. According to Mozilla, Firefox Suggest “serves as a trustworthy guide to the better web, finding relevant information and sites to help you accomplish your goals.”

Much like other browsers, Firefox already offers users a bevy of suggestions depending on what they type into the search and address bar. That has included suggestions based on users’ bookmarks, browser histories, and their open tabs. But with the new Firefox Suggest feature, users will also receive suggestions from, according to Mozilla, “other sources of information such as Wikipedia, Pocket articles, reviews, and credible content from sponsored, vetted partners and trusted organizations.”

Though the explanation seems simple, the implementation is not.

That’s because there appear to be two different levels of suggestions for Firefox Suggest, which are only referred to by Mozilla as “Contextual suggestions,” and “improved results for Contextual Suggestions.”

On its support page for Firefox Suggest, Mozilla explicitly said that “contextual suggestions are enabled by default, but improved results through data sharing is only enabled when you opt-in.” That data sharing, covered in more detail below, broadly includes user “location, search queries, and visited sites,” Mozilla said.

How that additional data produces separate results, however, is unclear, because Mozilla remains frustratingly vague about the experience that users can expect if they have the default “contextual suggestions” enabled compared to users who have opted-in to “improved results for Contextual Suggestions.”

Under the heading “What’s on by default,” Mozilla said that, starting with Firefox version 92, users “will also receive new, relevant suggestions from our trusted partners based on what you’re searching for. No new types of data are collected, stored, or shared to make these new recommendations.”

Under the heading, “Opt-in Suggestions,” however, Mozilla only said that a “new type of even smarter” suggestion is being presented for some users that the company hopes will “enhance and speed up your searching experience.” Mozilla said that it “source[s] and partner[s] with trusted providers to serve up contextual suggestions related to your query from across the web,” which sounds confusingly similar to the default contextual suggestions that come from the company’s “trusted partners” and are “based on what you’re searching for.”

Fortunately, Mozilla offered a way for users to check whether they’ve opted in to the data sharing required for improved contextual suggestions. Unfortunately, when Malwarebytes Labs installed the latest version of Firefox (93.0 for macOS), we could not find the exact language described on Mozilla’s support page.

Mozilla said that, for those who go into Firefox’s preferences:

“If you see ‘Contextual suggestions’ checked with the string ‘Firefox will have access to your location, search queries, and visited sites’, you have opted in. If you do not see that label then the default experience is enabled with no new kinds of data sharing.”

As shown in the image below, though we did find this setting in Firefox’s preferences, we did not find the exact language about “location, search queries, and visited sites.”

Firefox Suggest options

When Malwarebytes Labs tested Firefox Suggest, we could not produce any sponsored content results. We did, however, receive a Wikipedia suggestion on our search of “Germany” and a Firefox Pocket suggestion on our search of “chicken soup,” as shown below.

Firefox Germany
Firefox chicken soup

During our testing, we also could not find a way to opt-in to improved contextual suggestions. According to Mozilla, opting-in seems to currently rely on a notification message from Firefox asking users to specifically agree to sharing additional data. During our testing of Firefox Suggest, we did not receive such a message.

New model, new data

Firefox’s experiment represents a sort of double-edged sword of success.

In 2019, Mozilla decided to turn off third-party tracking cookies by default in its then-latest version of Firefox. It was a bold move at the time, but just months later, the privacy-forward browser Brave launched out of beta with similar anti-tracking settings turned on by default, and in 2020, Safari joined the anti-tracking effort, providing full third-party cookie blocking.

The anti-tracking campaign seems to have largely worked, as even Google has contemplated life after the third-party cookie, but this has put privacy-forward browsers in a difficult position. Advertising revenue can be vital to browser development, but online advertising is still rooted firmly in surreptitious data collection and sharing—the very thing these browsers fight against.

For its part, Brave has responded to this problem with its own advertising model, offering “tokens” to users who opt in to advertisements that show up as notifications when using the browser. The tokens can be used to tip websites and content creators. Similar to Mozilla, Brave also vets the companies who use its advertising platform.

As to the role of advertising partners in Firefox Suggest, Mozilla said it attempts to limit data sharing as much as possible. “The data we share with partners does not include personally identifying information and is only shared when you see or click on a suggestion,” Mozilla said.

To run improved suggestions, Mozilla does need to collect new types of data, though. According to the company’s page explaining that data collection:

“Mozilla collects the following information to power Firefox Suggest when users have opted in to contextual suggestions.

  • Search queries and suggest impressions: Firefox Suggest sends Mozilla search terms and information about engagement with Firefox Suggest, some of which may be shared with partners to provide and improve the suggested content.
  • Clicks on suggestions: When a user clicks on a suggestion, Mozilla receives notice that suggested links were clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.”

Based on the types of data Mozilla collects for improved contextual suggestions, we might assume that users who opt in will see, at the very least, suggestions that have some connection to their location, like perhaps sponsored content for an auto shop in their city when they’re looking up oil changes. The data on a user’s suggestion clicks might also help Mozilla deliver other suggestions that are similar to the clicked suggestions, as they may have a higher success rate with a user.

As to whether the entire experiment works? It’s obviously too early to tell, but in the meantime, Mozilla isn’t waiting around to generate some cash. Just this year, the company released a standalone VPN product, currently the only product Mozilla makes that has a price tag.

The post Firefox reveals sponsored ad “suggestions” in search and address bar appeared first on Malwarebytes Labs.

GnuPG fixes a problem with Let’s Encrypt certificate chain validation

Despite advance warnings that a root certificate provided by Let’s Encrypt would expire on September 30, users reported issues with a variety of services and websites once that deadline hit. So what happened?

The problem

A number of high-profile tech and security companies noticed their products and services were affected by the certificate expiration, including cloud computing services from Amazon, Google, and Microsoft, IT and cloud security services from Cisco, and Shopify, where sellers were unable to log in.

When a user’s browser arrives at your website, one of the first things it checks is the validity of the SSL certificate. An SSL certificate is a digital certificate that authenticates a website’s identity and enables an encrypted connection. SSL certificates are issued by a Certificate Authority (CA), and most browsers will accept certificates issued by hundreds of different CAs. Let’s Encrypt is a non-profit CA that provides digital certificates for free, and millions of websites rely on its services.

If the certificate, or the root certificate that signed it, has expired, the browser issues a warning that the site may not be secure or that the connection is not private. At least 2 million people saw an error message on their phones, computers, or smart gadgets due to the certificate issue.
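The check a browser performs can be sketched in a few lines. The snippet below is a minimal illustration, not a browser’s actual implementation: the TLS handshake in `check_site` fails outright if any certificate in the chain is expired or untrusted, and `days_until_expiry` parses the leaf certificate’s `notAfter` field (the hostname is a placeholder).

```python
import socket
import ssl
from datetime import datetime, timezone

# "notAfter" in a parsed certificate looks like "Sep 30 14:01:15 2021 GMT"
CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate's notAfter timestamp (negative if expired)."""
    expires = datetime.strptime(not_after, CERT_DATE_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def check_site(hostname, port=443):
    """Fetch a site's leaf certificate and return days until it expires.

    The handshake itself raises ssl.SSLCertVerificationError if any
    certificate in the chain is expired or untrusted -- which is exactly
    what happened to clients with an outdated Let's Encrypt root.
    """
    ctx = ssl.create_default_context()  # uses the system's trusted CA roots
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed leaf certificate
    return days_until_expiry(cert["notAfter"])
```

Running `check_site("example.com")` against a live host exercises the full chain validation; clients whose root store still contained only the expired root would fail at the handshake, never reaching the expiry arithmetic.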

GnuPG

GnuPG is a free implementation of the OpenPGP standard as defined by RFC4880 (also known as PGP). GnuPG allows you to encrypt and sign your data and communications; it features a versatile key management system, along with access modules for all kinds of public key directories.

GnuPG is a command line tool without any graphical user interface that is often used as the actual crypto backend of other applications.

Even though many organizations had not forgotten about the certificate expiration, GnuPG did not handle it well. And since many were unaware they were even using GnuPG, because it functions as the backend of another application, it took some organizations a while to figure out and correct the problem. Without knowing the cause, it’s a difficult problem to diagnose: for the affected companies, it’s not that everything is down, but they certainly experienced all sorts of service issues.

The update

The new version of GnuPG 2.2.32 (LTS) fixes the problem with Let’s Encrypt certificate chain validation, and this update should restore access to many web resources (e.g. Web Key Directory and key servers). “LTS” is short for long term support, and this series of GnuPG is guaranteed to be maintained at least until the end of 2024.

SSL/TLS certificate management

Digital certificates are the primary vehicle by which people and machines are identified and authenticated. As the number of identities in a company grows, so does the difficulty of managing and protecting certificates at scale. The adoption of BYOD and IoT makes certificate management more critical than ever.

Like passwords and keys, certificates also go through a cycle. They’re created, provisioned into the infrastructure, and have a finite validity period after which they expire. Certificate Management is usually concerned only with certificates issued by mutually trusted Certificate Authorities. Once the digital certificates have been issued, they must be managed diligently through their entire validity period.
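Keeping on top of that validity period is straightforward to automate. A minimal sketch, assuming a hypothetical inventory mapping certificate names to expiry dates and an illustrative 30-day warning threshold:

```python
from datetime import date

# Hypothetical inventory: certificate name -> expiry date.
# In practice this would be populated from your CA or a network scan.
INVENTORY = {
    "www.example.com": date(2024, 12, 31),
    "mail.example.com": date(2021, 10, 15),
}

def expiring_soon(inventory, today, days_ahead=30):
    """Return certificate names that are expired, or expire within days_ahead days."""
    return sorted(
        name for name, expires in inventory.items()
        if (expires - today).days <= days_ahead
    )
```

A scheduled job running a check like this, and alerting on a non-empty result, is often all it takes to avoid being surprised by an expiry deadline you were warned about months in advance.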

If this incident has shown one thing, it is how important it is to keep track of all the digital certificates that your organization relies on.

The post GnuPG fixes a problem with Let’s Encrypt certificate chain validation appeared first on Malwarebytes Labs.

Discord scammers lure victims with promise of free Nitro subscriptions

A number of bogus offers are doing the rounds in Discord land at the moment. Discord, a group text chat/VoiP app of choice for many gaming communities, is having a bit of trouble with phishing links.

You may recall we’ve covered a lot of Discord scams previously. Service users can create bots, those bots can be invited into channels, and then they get to work spamming. The messages run the gamut from free games and discounted sign-ups for services to plain old fake login screens.

You’ll also frequently see bots pushing offers for things which simply don’t exist anymore. Their purpose is to hit the channels and drift forever, spamming all and sundry until they get a few hits. This week it’ll be a bot promoting a “red hot” offer from 2018. Next week it’ll be promoting crossover deals with a service which went out of business a year ago.

While many gamers who know their stuff won’t fall for those kinds of things, plenty of others will. They could stand to lose their gaming accounts, their logins for other services, some money, or perhaps a combination of all three. Depending on the scam, they could also be used to send spam messages to an even bigger audience. You definitely don’t want any of this clogging up the channels you use on a daily basis.

What’s happening?

Spam messages are sent to other Discord users. As is common with this kind of attack, they’re themed around “Nitro”. This is a paid Discord service which offers added functionality in the servers along with some other features. At one point, games were included in some of these deals, and those were a big target for scammers even after the games were no longer available. The scammers are just banking on nobody checking before clicking the links.

Here’s what some of the current messages going around look like:

Note that this isn’t being sent from bots (as in, chatbots specifically coded to send spam links). As the Tweeter points out, this is all being sent by friends. Those friends have likely been compromised earlier in the chain, and are now being used for malicious purposes.

As for the messages themselves? They’re a mixed bunch. One claims a friend has sent the recipient a Nitro subscription. The others claim the recipient “has some Nitro left over”, tied to a URL which mentions billing and promotions.

When sneaky sites go phishing…

The sites here use a common trick: they switch out the letter i for an L in the URL. As a result, you’re not visiting Discord, you’re visiting something along the lines of dLscord instead (we’re using the uppercase L here purely for visual clarity).

discord phish 1
Hunting for phish
discord phish 3
If it seems too good to be true…

From there, it’s a case of phishing the victim’s logins.
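A lookalike swap like this is easy to catch programmatically if you know which domains you expect. Here’s a minimal sketch; the allowlist and character map are illustrative examples, not Discord’s actual defenses:

```python
# Characters scammers commonly swap because they look alike in many fonts:
# lowercase "l" or digit "1" for "i" (discord.com -> dlscord.com),
# and digit "0" for the letter "o".
LOOKALIKES = str.maketrans({"l": "i", "1": "i", "0": "o"})

# Example allowlist of domains you expect to see.
LEGIT_DOMAINS = {"discord.com", "discord.gg"}

def looks_spoofed(domain):
    """True if the domain isn't legitimate itself, but normalizes to one that is."""
    domain = domain.lower()
    if domain in LEGIT_DOMAINS:
        return False
    normalized = domain.translate(LOOKALIKES)
    return normalized in LEGIT_DOMAINS
```

So `looks_spoofed("dlscord.com")` flags the swap, while the genuine `discord.com` and unrelated domains pass through untouched. Real homoglyph detection has to cover a much larger character set (including Unicode confusables), but the principle is the same.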

Tackling the Discord phishers

Sometimes these sites already have multiple red flags thrown up along the way:

discord phish 2
Caught!

Other times, you’re reliant on the site being taken down or your security tools stopping the scam in its tracks. Either way, if you’ve entered your details into one of these sites (or similar!), then change your login as soon as possible.

How to protect your Discord account

Discord offers some tips on how to keep your account safe:

  1. Use a strong password, and one that is unique to your Discord account. A password manager can help generate and store strong passwords for you, because it’s very, very difficult to remember them yourself
  2. Set up two-factor authentication (2FA) on your account
  3. Set up message scanning, which automatically scans and deletes any explicit content. You can choose to do this for all messages or just those from people not on your Friends List
  4. Block users if you need to. Discord offers more information on how to do that in tip 4.

Stay safe out there!

The post Discord scammers lure victims with promise of free Nitro subscriptions appeared first on Malwarebytes Labs.

Making better cybersecurity training: Q&A with Malwarebytes expert Kelsey Prichard

If you hadn’t noticed by now, we are in the first week of National Cybersecurity Awareness Month, which, according to the Cybersecurity and Infrastructure Security Agency (CISA) in the United States, means that we should all consider how people, organizations, and businesses can “be cyber smart” this year and ahead.

While there are countless ways to interpret exactly how to “be cyber smart”—like adopting cybersecurity best practices around strong password use, two-factor authentication, and remote desktop protocol ports—we at Malwarebytes Labs wanted to take a step back and consider: How do you train people to be cyber smart in the first place?

After all, cybersecurity training is likely the first and most important step in cybersecurity awareness, whether at home or in the office. But developing engaging, actionable cybersecurity training programs can be a difficult endeavor, as those who develop the training have to potentially meet their organization’s compliance requirements while considering their audience’s interests, needs, awareness level, and time available to actually complete training programs.

To better understand how to make smart, engaging cybersecurity training, and to help businesses everywhere roll out their own, we asked Kelsey Prichard, security awareness program manager at Malwarebytes, to share her insights. At Malwarebytes, Prichard develops the security awareness programs and compliance training for the company’s employees—which are sometimes affectionately called “Malwarenauts.” She has developed seven “microlearning modules” and one security compliance training course—with another soon to come—and she has organized multiple in-house security webinars.

Prichard’s programs have also taken advantage of what she described as a “playful culture” at Malwarebytes, as each October, she has structured the annual security training to be “based around a different popular sci-fi movie.” The themed training programs have found a perfect home at the company, as its Star Wars-themed Santa Clara headquarters includes multiple conference rooms named after popular characters and its hallways are adorned with plenty of movie art.

The following Q&A with Prichard has been edited for clarity and length.

When you first joined Malwarebytes, you were tasked with something quite intimidating: Developing a cybersecurity training program for hundreds of company employees. Where do you even start with a task this large? 

This was quite the challenge, as this role was my first formal introduction to the world of security. My background’s in learning and development, and I used to work for Tesla developing their body repair training. So much of the material was new to me. Luckily, the security team here is fantastic and gave me a lot of the security frameworks I needed to get started. I think being a “beginner” in security helped give me a clarity I’m not sure I would’ve had otherwise. The first few months consisted of a lot of Googling, online training courses, and trial and error. As I learned, I developed courses and wrote down ideas. It was extremely important to me that I didn’t start a program that people didn’t want or weren’t interested in, so a huge aspect of that was learning how to make it fun. Malwarebytes has a lot of very smart individuals, and this is a security company, so I had to develop content that was interesting and yet also met compliance requirements, so everyone took training in a timely manner.

How did you measure the cybersecurity familiarity of Malwarebytes employees to ensure that the training programs you built would fit their level of understanding? 

We have a huge range of security knowledge here at Malwarebytes, so we’ve tried to incorporate variability in the content we upload. Some formats, like our training modules, are catered to Malwarenauts who may have less security understanding, while others, like our monthly webinars, are more technical. We also have a Security Champions program where our security experts in the company come together to learn from each other and our security team so that they can help educate their fellow Malwarenauts. There are some things, however, like our compliance training that we need to roll out to everyone, so this needs to cover a broad spectrum of security knowledge.

How did developing these training programs specifically for employees at a cybersecurity company influence, if at all, the development process? 

Lucky for me, working at a cybersecurity company has meant more engagement in security training than you’d see at other companies. However, it also makes our mandatory trainings more difficult since we have such a broad level of security knowledge and it’s odd knowing that you may be training someone with more security knowledge than yourself. That being said, I really love that there are so many people around me that are knowledgeable and excited about cybersecurity. It means I have a lot of people to learn from and I get a lot of support from upper management, but it was definitely intimidating at first! 

When deciding what topics to prioritize, I imagine you had an enormous list. Can you describe what was on that early list? 

Yes! The first thing I needed to do was set up our first annual security training, which was easy to prioritize for compliance reasons. Cybersecurity Awareness Month was also a big priority because I used it as the launch of our security awareness program and it’s the optimal time to make a big deal of cybersecurity. Creating a plan for the year on topics to be covered was also very helpful, as it allowed for getting the expert speakers for those topics. It requires a lot of coordination.

How did you narrow down the first few topics you developed training programs for? Why did you choose those topics? 

My security teammates were hugely valuable. They were aware of the biggest threats to our organization, so I initially developed training to highlight and help our employees prevent these threats from occurring. From there, we really wanted to cover the “cybersecurity basics” to set a knowledge groundwork for all employees.  

In developing the training programs, was there any practice you knew you wanted to avoid? 

I am very aware that “learning fatigue” is easy to succumb to with mandatory training modules. Because of this, I wanted to ensure that all training programs were split up to take no longer than 15 minutes at a time. This is why you’ll see our mandatory training is 30 minutes in total, but is split into three separate courses that are combined into one learning plan. This gives learners the option to complete a course and return to the learning plan as needed.

I also aim for story-based training, where it makes sense, to simplify otherwise complex content and make it relatable. 

Finally, what is your top tip for other cybersecurity trainers who want to make smart training programs for their organizations? 

Keep it engaging. I think as cybersecurity trainers we tend to get wrapped up in what the content is and forget how crucial it is to make the learning entertaining. If your audience doesn’t engage in the training you create, all it’s doing is checking a compliance box. 

The post Making better cybersecurity training: Q&A with Malwarebytes expert Kelsey Prichard appeared first on Malwarebytes Labs.

At long last, Microsoft is disabling Excel 4.0 macros by default

Sometimes good news in the security world comes unexpectedly. This is one of those times. After three decades of macro viruses, and three decades of trying to convince every single Excel user individually to disable macros, Microsoft is going to disable Excel 4.0 macros for everyone. Better late than never, right?

Talk about a big sigh of relief.

Excel 4.0 macros, aka XLM macros, were first added to Excel in 1992. They allowed users to add commands into spreadsheet cells that were then executed to perform a task. Unfortunately, we soon learned that (like any code) macros could be made to perform malicious tasks. Office documents have been a favorite hiding place of malicious code ever since.

For backward compatibility reasons the feature was never removed, despite being superseded by Visual Basic for Applications (VBA) just one year after it was introduced.

I understand the argument in favor of keeping it back then, but why keep it enabled by default for so long after, when so few people use it? Microsoft could have made it so that those that needed Excel 4.0 macros had to turn the feature on, and the rest of us (the overwhelming majority of Excel users) could have been more secure without having to remember to turn it off.

Good news? What happened?

Microsoft announced plans to disable Excel 4.0 macros in an email sent to customers. The feature will be disabled for all Microsoft 365 users by the end of the year, but the exact schedule depends on which kind of customer you are:

  • Insiders-Slow: Complete in early November.
  • Current Channel: Complete by mid-November.
  • Monthly Enterprise Channel: Complete by mid-December.

Trust me, it’s not easy to make all security professionals happy at once. Most feel this should have been done long ago. For some the glass is half full, while others are asking “why has this glass been half empty for so long?”

Will you miss it?

It is very, very unlikely you will miss Excel 4.0 macros. XLM was the default macro language for Excel through Excel 4.0, but beginning with version 5.0, Excel recorded macros in VBA by default, although XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel are capable of running XLM macros, though Microsoft discourages their use.

Now—almost 30 years after they were made obsolete—it’s fair to say that the biggest users of Excel 4.0 macros are probably malicious threat actors.

Abuse cases

Attackers have always liked Office macros because they provide a simple and reliable method to spread malware using legitimate features, and without relying on any vulnerability or exploit. XLM macros have been used to drop many well known malware families, including ZLoader, TrickBot, BitRat, QBot, Dridex, FormBook and StrRat, among others.

And in just the last month, Malwarebytes Labs has seen XLM macros weaponized to deliver threat-actor-favorite Cobalt Strike, and a malware campaign using XLM macros to deliver a .NET payload under the cover of an Excel spreadsheet full of stats about US airstrikes on the Taliban regime.

Disable manually

Should you feel the need to disable this feature right now, you can do so in the Trust Center. In July Microsoft added a new checkbox setting, “Enable Excel 4.0 macros when VBA macros are enabled”, which allows users to individually configure the behavior of XLM macros without impacting VBA macros.

Microsoft Excel Trust Center settings
Image courtesy of Microsoft

Security over backward compatibility

Despite the shared joy about this security-enhancing roll-out, it raises the question of when security should overrule backward compatibility. Microsoft must have better things to do than fix obsolete features from the past century. Wouldn’t it have been preferable if the step up to VBA in 1993 had been less steep, so we could all forget about 4.0 and move on to the latest version without having to look over our shoulder? Or perhaps Microsoft could have disabled this potentially dangerous feature decades ago and left it to those who actually wanted it to turn it back on?

If history has taught us anything, it’s that the incentive to enable something you need is a lot stronger than the incentive to disable something that might be potentially dangerous.

Stay safe, everyone!

The post At long last, Microsoft is disabling Excel 4.0 macros by default appeared first on Malwarebytes Labs.