IT News

macOS 11’s hidden security improvements

A deep dive into macOS 11’s internals reveals some security surprises that deserve to be more widely known.

Contents

  1. Introduction
    1. Disclaimers
  2. macOS 11’s better known security improvements
    1. Secret messages revealed?
  3. CPU security mitigation APIs
    1. The NO_SMT mitigation
    2. The TECS mitigation
    3. Who benefits from NO_SMT and TECS?
  4. Endpoint Security API improvements
    1. More message types
    2. More notifications, less polling
    3. More metadata
    4. Improved performance
  5. A vulnerability quietly fixed
  6. O_NOFOLLOW_ANY
  7. Conclusion
  8. Endnotes

Introduction

When a new release of an operating system comes out, normal people find out what’s new by attending developer conferences and reading release notes, changelogs, and reviews.

Me, I download the software development kit (SDK) for the new version, and diff it with the current version.

This is not uncommon on, say, Windows: There are entire websites dedicated to large-scale, long-term, differential reverse engineering, that tell you what new functions appeared in what version of Windows, how their relationship with other functions has changed, how internal data structures have evolved, etc. On macOS, nobody seems to do it (at least not in public), and something as simple as diffing the includes from one SDK version to the next, then patiently going through the result file by file, can reveal interesting features nobody knows (or at least talks) about.

Comparing the macOS 11 and macOS 10.15 SDKs, I found several intriguing surprises that deserve to be more widely known.

Disclaimers

In this article, I describe poorly-documented, or completely undocumented, features that could stop working as advertised or disappear completely without notice in future releases of macOS. Use common sense, assess the risks, choose, and take responsibility for your choice.

Note that I’m just a developer, neither a security researcher nor an exploit writer, and my descriptions of security issues and their mitigations might fall between “slightly incorrect” and “completely wrong”. I welcome corrections.

macOS 11’s better known security improvements

At WWDC 2020, Apple made a big deal of several new macOS and iOS features that were, in fact, big deals. This article was supposed to come out much earlier, and I don’t expect anyone still remembers what the fuss was about over a year later, so I’ll give you a brief recap.

The major new security features that would debut in macOS 11 were:

  • Pointer Authentication Codes (PAC), hardware-enforced Control Flow Integrity (CFI), implemented by Apple’s homegrown 64-bit ARM processor, the M1. Currently limited to system code and kernel extensions, but open to all third-party developers for experimentation.
  • Device isolation, another M1-only feature, uses the more powerful IOMMU of that platform to make sure hardware devices can only share memory with the operating system, not with each other. Cross-device memory sharing is a historical custom, based on a blind, unfounded trust in hardware.
  • Write XOR Execute (W^X) finally came to macOS, in a hardware-enforced form (yes, another M1-only feature). Memory pages can now be either writable or executable, never both at the same time; no exceptions. Just-in-time (JIT) compilers will need to be redesigned around this limitation to run on ARM Macs, but special APIs are provided to make the work easier.
  • Signed System Volume (SSV) cryptographically sealed the boot volume and made it tamper-evident. (macOS has booted from a read-only volume since 10.15.) Apple’s Protecting data at multiple layers article briefly describes SSV, but Howard Oakley has an even more detailed write-up on his blog, with illustrations; a must-read. You should also check out Andrew Cunningham’s review of macOS 11.

These technologies have justly earned the attention of the press and security researchers, and they’ve been discussed in great detail elsewhere. (The Apple video Explore the new system architecture of Apple silicon Macs, session 10686 of WWDC 2020, has a good overview of most of the new security features, and more.)

There’s really nothing I could add to what the excellent resources out there say about these topics, but there are other security improvements that everyone seems to have missed, and that Apple seems to be shy about.

Secret messages revealed?

On second thoughts, maybe my rummaging approach can add something novel (although incredibly trivial) to the publicly-disclosed security improvements: I’ve not seen anybody mention the fact that the cryptographically sealed filesystem underlying SSV is internally code-named “Cryptex”. The cryptex(5) man page claims that “The name ‘cryptex’ is a portmanteau for ‘CRYPTographically-sealed EXtension’.”

… but I know, we know, they know that they took the name from Dan Brown’s best-selling, award-winning birdcage liner The Da Vinci Code. The otherwise forgettable (and best-forgotten) airport thriller introduced the intriguing concept of a cryptex: a secret message, sealed by a combination lock, that would self-destruct if opened by force.

There are intriguing hints in cryptex(5) that suggest a wider Cryptex Cinematic Universe, like references to a cryptexctl(1) command and a cryptexd(8) daemon, but those man pages are nowhere to be found, nor are the two binaries part of macOS. A placeholder man page for libcryptex(3) has literally nothing to say about the “Cryptex management library”, except an interesting detail: A copyright date of 19 October, 2018, suggesting that SSV had been in development for a long time before materializing as an end user feature.

The SDK includes the import libraries for libcryptex, libcryptex_core and libcryptex_interface, but not the libraries themselves, so we have the lists of exported symbols but not the code behind them. The libraries, too, are not part of macOS, which makes me think that the scattered Cryptex artefacts found in the SDK probably escaped, no idea how, from an Apple private code corral.

All that the symbol lists can tell us is that the politically correct “CRYPTographically-sealed EXtension” revisionism can be put to rest: To me, functions with names like codex_install_pack (exported by libcryptex_interface) unquestionably prove a Brownian origin of the name!

CPU security mitigation APIs

Developers are taught to think of the CPU as a perfect, mathematical abstraction. In 2018, the year of microarchitectural vulnerabilities (Spectre and Meltdown to name the most infamous), we were set straight: CPUs run on code; CPU developers weren’t preternaturally capable of writing multithreaded code without race conditions; and the CPUs they made were buggy, unreliable and traitorous, conspiring with applications against the operating system (OS) to bypass access controls in undetectable ways.

The issues that could be fixed were fixed. The remaining issues could only be mitigated, either in the OS or the compiler, at the cost of performance. I was aware of the mitigations rolled out by Microsoft as Windows updates, the new MSVC compiler option /Qspectre, the changes to the Chrome JIT compiler to prevent it from generating exploitable code from malicious Javascript, etc.

But, I was surprised to discover new, unannounced, completely undocumented mitigations in macOS 11.

As far as I can tell, this is the first public article ever written that describes them. The new APIs return virtually no hits in grep.app or GitHub code search—or Google, for that matter.

Two kinds of mitigations are provided, codenamed NO_SMT and TECS. Let’s have a closer look at them.

The NO_SMT mitigation

What is it?

NO_SMT disables simultaneous multithreading (SMT), the CPU feature better known under Intel’s trade name of “Hyper-Threading”. SMT allows a CPU core to execute two or more threads at the same time, for improved performance at the cost of contention for per-core resources, such as caches and TLBs.

Letting multiple threads share invisible resources carries the risk of letting a malicious thread steal secrets from a “sibling” thread running on the same core—a risk that over the years has materialized into multiple attacks, like TLBleed, PortSmash, Fallout, ZombieLoad, RIDL. A straightforward mitigation for this entire family of attacks, past and future, is then to simply disable SMT, which is what NO_SMT does.

How to use NO_SMT

In C/C++, #include <libproc.h>; no extra library necessary. From <libproc.h>:

/*
 * NO_SMT means that on an SMT CPU, this thread must be scheduled alone,
 * with the paired CPU idle.
 *
 * Set NO_SMT on the current proc (all existing and future threads)
 * This attribute is inherited on fork and exec
 */
int proc_set_no_smt(void) __API_AVAILABLE(macos(11.0));

/* Set NO_SMT on the current thread */
int proc_setthread_no_smt(void) __API_AVAILABLE(macos(11.0));

Simply call proc_set_no_smt to enable NO_SMT for the entire process (existing and future threads alike), or proc_setthread_no_smt to enable it for the calling thread only. Like the comments say, fork(2) children inherit the parent process’s NO_SMT state, and exec(2) won’t reset it.

Note that “libproc” is a misnomer, and these aren’t library functions but thin C wrappers over the private system call process_policy(2).
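
To make the calling convention concrete, here’s a minimal sketch (my own, assuming the usual libproc convention of returning 0 on success and a non-zero error code on failure):

#include <libproc.h>
#include <stdio.h>

int main(void) {
    /* Enable NO_SMT for this process: all existing and future threads.
     * Inherited across fork(2) and exec(2); cannot be undone. */
    if (proc_set_no_smt() != 0) {
        fprintf(stderr, "proc_set_no_smt failed\n");
        return 1;
    }
    /* From here on, no thread of this process shares a CPU core. */
    return 0;
}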

NO_SMT also extends posix_spawn(2), so that we can enable mitigations for a new process without setting them for the current process, or spawning a short-lived fork(2) child (ideally, we should never call fork(2) again in any new code, on any OS. Ever). From <spawn.h>:

int     posix_spawnattr_setnosmt_np(const posix_spawnattr_t * __restrict attr) __API_AVAILABLE(macos(11.0));

posix_spawnattr_setnosmt_np(3) performs the equivalent of proc_set_no_smt on the new process. In the name of the function, the “_np” suffix stands for “non-portable”: A customary way to mark OS-specific extensions to posix_spawn(2).

“But I already use fork(2) and I can’t stop using it! How do I enable NO_SMT after exec(2) without enabling it for the fork child?” You’re in luck, because macOS has you covered: Any posix_spawn(2) feature is automatically available to exec(2) thanks to the non-standard flag POSIX_SPAWN_SETEXEC, which can be set on a posix_spawnattr_t using posix_spawnattr_setflags(3), and makes posix_spawn(2) behave like exec(2), replacing the current process instead of creating a new one.
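
Putting those pieces together, here’s a sketch of a hypothetical wrapper (exec_with_no_smt is my name, not a system function) that replaces the current process image with NO_SMT enabled:

#include <spawn.h>

extern char **environ;

/* Replace the current process image, exec(2)-style, with NO_SMT
 * enabled in the new image. Returns only on failure. */
int exec_with_no_smt(const char *path, char *const argv[]) {
    posix_spawnattr_t attr;
    int err = posix_spawnattr_init(&attr);
    if (err != 0) return err;

    /* POSIX_SPAWN_SETEXEC: behave like exec(2), not like spawn */
    err = posix_spawnattr_setflags(&attr, POSIX_SPAWN_SETEXEC);
    if (err == 0)
        err = posix_spawnattr_setnosmt_np(&attr);
    if (err == 0)
        err = posix_spawn(NULL, path, NULL, &attr, argv, environ);

    posix_spawnattr_destroy(&attr); /* reached only on failure */
    return err;
}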

How does it work?

NO_SMT is implemented as a per-task (not per-process[1]) flag, named TF_NO_SMT, or a per-thread scheduling flag named TH_SFLAG_NO_SMT. The flag is copied from tasks to their children, tasks and threads alike; it’s a write-once flag that, once set, cannot be removed. The flag is then copied from each thread to the CPU it’s currently running on (field processor_t::current_is_NO_SMT).

NO_SMT is implemented by the dualq scheduling algorithm, in a pretty straightforward way: A NO_SMT thread cannot share a CPU core with any other thread.

NO_SMT can be disabled system-wide with boot argument disable_NO_SMT_threads, which causes the kernel variable sched_allow_NO_SMT_threads to be initialized with 0 instead of 1. The current value of sched_allow_NO_SMT_threads can be queried with sysctl kern.sched_allow_NO_SMT_threads:

tstudent@MAC-67C2FA8CA4EC ~ % sysctl kern.sched_allow_NO_SMT_threads
kern.sched_allow_NO_SMT_threads: 1

The equivalent of NO_SMT can be forced on system-wide at the firmware level, by setting NVRAM variable SMTDisable to %01, as described in Apple support article HT210108.
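
For reference, per HT210108 that boils down to a single command, run from Terminal in macOS Recovery and followed by a restart:

nvram SMTDisable=%01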

Why you probably shouldn’t use NO_SMT

Enabling NO_SMT through the API, instead of configuring the firmware to boot the machine with SMT support disabled, provides limited protection, as the sched_allow_NO_SMT_threads variable is writable at runtime by the superuser:

tstudent@MAC-67C2FA8CA4EC ~ % sudo sysctl kern.sched_allow_NO_SMT_threads=0
Password:
kern.sched_allow_NO_SMT_threads: 1 -> 0

This instantly disables NO_SMT system-wide. I wonder why they bothered making it a write-once flag, only to make it so trivial to disable.

The TECS mitigation

What is it?

I have been unable to figure out what TECS stands for. Closest I could get was this comment from the source code of the kernel (osfmk/kern/task.h, from XNU 7195.50.7.100.1):

#define TF_TECS                 0x00020000                              /* task threads must enable CPU security */

Thread Enable CPU Security? Even if it’s the correct interpretation, it doesn’t help us understand what it does.

In its current incarnation (the generic name suggests the specifics might change in the future), TECS flushes certain internal CPU buffers before returning from kernel mode to user mode. It’s a mitigation for the Rogue Data Cache Load (RDCL) family of attacks (like Meltdown) and the Microarchitectural Data Sampling (MDS) family of attacks (like RIDL and Fallout).

How to use TECS

Unlike NO_SMT, TECS doesn’t have a dedicated API, but it’s enabled through a generic API called CPU Security Mitigations (CSM), that can also enable NO_SMT. In C/C++, #include <libproc.h>; no extra library necessary. From <libproc.h>:

/*
 * CPU Security Mitigation APIs
 *
 * Set CPU security mitigation on the current proc (all existing and future threads)
 * This attribute is inherited on fork and exec
 */
int proc_set_csm(uint32_t flags) __API_AVAILABLE(macos(11.0));

/* Set CPU security mitigation on the current thread */
int proc_setthread_csm(uint32_t flags) __API_AVAILABLE(macos(11.0));

/*
 * flags for CPU Security Mitigation APIs
 * PROC_CSM_ALL should be used in most cases,
 * the individual flags are provided only for performance evaluation etc
 */
#define PROC_CSM_ALL         0x0001  /* Set all available mitigations */
#define PROC_CSM_NOSMT       0x0002  /* Set NO_SMT - see above */
#define PROC_CSM_TECS        0x0004  /* Execute VERW on every return to user mode */

As with the dedicated NO_SMT API, we can enable mitigations for the entire current process, using proc_set_csm, or just the calling thread, with proc_setthread_csm. CSM functions, too, are wrappers for process_policy(2).
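
For example, here’s a minimal sketch (mine, with the same caveats as the NO_SMT example) that enables every available mitigation for the current process:

#include <libproc.h>
#include <stdio.h>

int main(void) {
    /* PROC_CSM_ALL = NO_SMT + TECS + whatever Apple adds later */
    if (proc_set_csm(PROC_CSM_ALL) != 0) {
        fprintf(stderr, "proc_set_csm failed\n");
        return 1;
    }
    return 0;
}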

Finally, just like NO_SMT, CSM also extends posix_spawn(2). From <spawn.h>:

/*
 * Set CPU Security Mitigation on the spawned process
 * This attribute affects all threads and is inherited on fork and exec
 */
int     posix_spawnattr_set_csm_np(const posix_spawnattr_t * __restrict attr, uint32_t flags) __API_AVAILABLE(macos(11.0));
/*
 * flags for CPU Security Mitigation attribute
 * POSIX_SPAWN_NP_CSM_ALL should be used in most cases,
 * the individual flags are provided only for performance evaluation etc
 */
#define POSIX_SPAWN_NP_CSM_ALL         0x0001
#define POSIX_SPAWN_NP_CSM_NOSMT       0x0002
#define POSIX_SPAWN_NP_CSM_TECS        0x0004

The meaning of the flags is identical to the similarly named <libproc.h> flags, and posix_spawnattr_set_csm_np(attr, POSIX_SPAWN_NP_CSM_NOSMT) is 100% identical to and interchangeable with posix_spawnattr_setnosmt_np(attr).

How does it work?

Just like NO_SMT, TECS is a write-once, enable-only flag that is copied from task to task, from task to thread, and from thread to CPU. The task flag is TF_TECS. Below the task level, the flag becomes architecture-specific, x86-64-only, morphing into a mitigation codenamed SEGCHK. Thus, the thread flag is boolean field machine_thread::mthr_do_segchk, and the CPU flag is boolean field cpu_data::cpu_curthread_do_segchk, also known as CPU_NEED_SEGCHK in assembler code.

SEGCHK is implemented entirely in assembler, in kernel-to-user return routines ks_64bit_return and ks_32bit_return. If CPU_NEED_SEGCHK is set for the current CPU, they execute a VERW instruction shortly before the final SYSEXIT/SYSRET/IRET. VERW is an obscure and largely obsolete instruction that checks if the specified segment (the user mode stack segment, in the case of SEGCHK) is writable; but more importantly, it has the side effect of flushing the caches exploited by the RDCL and MDS families of attacks, mitigating them.

TECS is only enabled if it’s supported by the CPU, or if it’s been forced on by default. CPU support for TECS is only checked on x86-64, where it corresponds to whether SEGCHK is supported; the checks are performed at boot time.

Using VERW as a mitigation was initially suggested by two of the discoverers of RIDL (page 200), but it seems it proved insufficient, and CPU vendors had to enhance the instruction to act as a proper mitigation. macOS doesn’t trust the un-enhanced VERW.

SEGCHK can be forced on system-wide with a boot parameter. Again, see support article HT210108. Similarly, it can be forced off system-wide with undocumented boot parameter cwad (CPU workaround disable), which has the same syntax as cwae (CPU workaround enable). cwae has priority over cwad.

Unlike NO_SMT, SEGCHK/TECS has no firmware-level equivalent, nor can it be disabled after boot.

Who benefits from NO_SMT and TECS?

Google.

I’ve looked everywhere and no one else seems to use these mitigation APIs. The only source code match (outside of the macOS 11 and 12 SDKs, and the XNU source code itself) is Chromium. The only binary matches on my macOS 11 machine (outside of system libraries) are the Chrome and Electron frameworks, i.e. Chromium. Not even Safari seems to use them!

In Chromium, when compiling for macOS, the base::LaunchOptions structure passed to function base::LaunchProcess contains a boolean field named enable_cpu_security_mitigations; if set, the macOS implementation of base::LaunchProcess launches the new process with CSM flags POSIX_SPAWN_NP_CSM_ALL. If I understand the code correctly, mitigations are enabled for renderer and plugin host sub-processes, and disabled for all other kinds of sub-processes (another possible reading of the code suggests that the feature is implemented, but unused. Honestly, I haven’t dug too deep).

It’s hard not to wonder why Apple went through the effort of implementing these mitigations and exposing them as APIs, only to neither document nor use them. If they are ineffective, the question becomes why Google bothers to use them. Either way, we are left with no clear answer.

Endpoint Security API improvements

Endpoint Security probably needs no introduction to the audience of this article, but I’ll still give a brief one.

This C API, first introduced in macOS 10.15, replaced and made obsolete the pre-existing patchwork of archaic auditing, monitoring and policing APIs (among which OpenBSM, KAUTH, Socketfilter and the venerable acct(2)—est. 1979[2]).

The design of Endpoint Security combined the near-absolute visibility and veto power over system state of a MAC[3] policy module with the safety properties of a client-server model, topped by a really nice and pretty-well-documented API. In short, it was the perfect API for a large variety of security applications.

Or was it? Unfortunately, Endpoint Security wasn’t without its own shortcomings, but they’re gradually being rectified. Let’s have a look at the most important improvements that macOS 11 and 12 make to Endpoint Security, only some of which were officially documented.

More message types

More operations can now be detected and/or vetoed, such as fcntl(2), searchfs(2), ptrace(2), remounting a filesystem, IOServiceOpen, task_name_for_pid, process suspension and process resumption.

Interestingly, process suspension includes private system call pid_shutdown_sockets, which doesn’t actually suspend processes, but only shuts down their network connections after they’ve already been suspended. The system call was originally only available on iOS, where it’s part of how apps are sent to the background.

macOS 12 adds some more notifications: setuid(2), setgid(2), seteuid(2), setegid(2), setreuid(2) and setregid(2).

More notifications, less polling

Some process metadata that used to be available only on request, and necessitated polling and/or diffing to detect changes, now generates change events.

ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED messages notify that a process’s code signature has gone invalid (i.e. CS_VALID flag no longer set) but the process is allowed to keep running (i.e. CS_HARD flag not set). Previously, it was only pollable through private system calls csops or csops_audittoken with operation code CS_OPS_STATUS.

ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE messages notify the creation of remote (i.e. inter-process) threads. Previously, this information was only available at low fidelity and with great effort, either by polling and diffing the data returned by Mach task method task_info with flavor TASK_EXTMOD_INFO, or by monitoring syslog for com.apple.kernel.external_modification messages.
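
To give an idea of what consuming the new notifications looks like, here’s a bare-bones sketch of an Endpoint Security client subscribing to both event types. It assumes the standard client setup, which also requires the com.apple.developer.endpoint-security.client entitlement and root privileges:

#include <EndpointSecurity/EndpointSecurity.h>
#include <bsm/libbsm.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    es_client_t *client = NULL;
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            /* msg->process is the process that performed the action */
            pid_t pid = audit_token_to_pid(msg->process->audit_token);
            if (msg->event_type == ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED)
                printf("pid %d: code signature invalidated\n", (int)pid);
            else if (msg->event_type == ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE)
                printf("pid %d: created a thread in another process\n", (int)pid);
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS)
        return 1;

    es_event_type_t events[] = {
        ES_EVENT_TYPE_NOTIFY_CS_INVALIDATED,
        ES_EVENT_TYPE_NOTIFY_REMOTE_THREAD_CREATE,
    };
    es_subscribe(client, events, (uint32_t)(sizeof events / sizeof events[0]));
    dispatch_main(); /* never returns; messages arrive on the block */
}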

More metadata

exec(2) messages now include the new process’s working directory (es_event_exec_t::cwd field).

Process metadata for all messages now includes:

  • The process’s controlling terminal, if any (es_process_t::tty field).
  • The process’s “start time”, i.e. the time when its process identifier was allocated by fork(2) (es_process_t::start_time field). Previously only available through sysctl(2) with the kern.proc.pid.<process identifier> OID.
  • The “responsible process” (es_process_t::responsible_audit_token field), i.e. the process that the notorious (to us developers) Transparency, Consent & Control (TCC) framework blames for an operation subject to user consent. Often, this is the client process that caused a daemon/agent process to be launched, which in an auditing context should be considered the “true” parent of a process (instead of “placeholder” xpcproxy(8)). Previously only available through the private—and completely undocumented—“responsibility” API of MAC policy module Quarantine (e.g. responsibility_get_responsible_for_pid).

Finally, for the first time ever in a macOS auditing API, all messages now report not just the process that caused the message to be generated, but the exact thread as well (es_message_t::thread field).

Improved performance

It’s now possible to process messages asynchronously without the overhead of es_copy_message/es_free_message (equivalent to a sequence of malloc, memcpy and free): Messages are now reference counted (see new functions es_retain_message/es_release_message), and can be moved across threads almost for free. es_copy_message and es_free_message have been outright deprecated and should no longer be used, except for backwards compatibility with macOS 10.15. They won’t be missed by me or my spindump traces.
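
The zero-copy pattern looks roughly like this (queue and handle_message are placeholder names of mine, not part of the API; error handling elided):

#include <EndpointSecurity/EndpointSecurity.h>
#include <dispatch/dispatch.h>

static dispatch_queue_t queue;

static void handle_message(const es_message_t *msg) {
    (void)msg; /* real processing goes here */
}

static void start_client(es_client_t **client) {
    queue = dispatch_queue_create("worker", DISPATCH_QUEUE_SERIAL);
    es_new_client(client, ^(es_client_t *c, const es_message_t *msg) {
        es_retain_message(msg);      /* take a reference; no copying */
        dispatch_async(queue, ^{     /* get off the delivery thread fast */
            handle_message(msg);
            es_release_message(msg); /* drop the reference when done */
        });
    });
}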

A vulnerability quietly fixed

Sometimes, diffing SDK versions can even reveal security holes that were quietly fixed. Such is the case for fcntl(2) command F_SETSIZE.

F_SETSIZE is used to change the maximum disk space allocated to a file: If it’s smaller than the current size, the file is truncated; if it’s larger, the file is extended. What stops a malicious process from extending a file so that it fills the entire disk, and then reading from the extended file to carve deleted files out of what was previously free space? Very simple: F_SETSIZE fills the new file space with all zeroes to conceal what it used to contain. As an optimization, a superuser process (effective user id 0) is allowed to extend a file without zeroing out, because a superuser process is assumed to have access to that data anyway.
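
In code, extending (or shrinking) a file with F_SETSIZE looks something like the sketch below; the pointer-to-off_t argument is my reconstruction of the calling convention from the XNU sources, so treat it with due suspicion:

#include <fcntl.h>
#include <stdio.h>

/* Resize fd's allocation to 'size' bytes. On macOS 11 and later, any
 * newly-added space is zero-filled, superuser or not. */
int set_file_size(int fd, off_t size) {
    if (fcntl(fd, F_SETSIZE, &size) == -1) {
        perror("fcntl(F_SETSIZE)");
        return -1;
    }
    return 0;
}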

However, macOS has gradually made the UNIX security model irrelevant. For example, even the superuser is only allowed to access the private documents of a regular user with the user’s permission—permission that is given on a per-application basis, through that protector of users and bane of developers known as the Transparency, Consent & Control (TCC) framework. This reflects the new meaning that macOS has given to the “root” superuser: No longer the administrator of a multi-user system, as it was originally meant on UNIX, but either a temporary identity assumed by each user for system administration tasks (e.g. by way of sudo(8)), or the anonymous user under which daemons run[4].

In this new security model, the superuser can no longer be assumed to have unrestricted access to everything. However, not zeroing space when extending a file would let a superuser process with no entitlements at all recover any file that had been deleted.

macOS 11 fixes this by no longer handling the superuser as a special case for F_SETSIZE. The man page for fcntl(2) now says:

F_SETSIZE. Deprecated. In previous releases, this would allow a process with root privileges to truncate a file without zeroing space. For security reasons, this operation is no longer supported and will instead truncate the file in the same manner as truncate(2).

Even the comments in <sys/fcntl.h> were amended. Before (macOS 10.15 SDK):

#define F_SETSIZE       43              /* Truncate a file without zeroing space */

And after (macOS 11 SDK):

#define F_SETSIZE       43              /* Truncate a file. Equivalent to calling truncate(2) */

As far as I can tell, this information disclosure vulnerability was never assigned a CVE, nor was it publicly acknowledged in any other way before it was silently fixed in macOS 11.

O_NOFOLLOW_ANY

Even primeval APIs like system call open(2) can still have room for improvement. macOS 11 introduces a new flag for it, O_NOFOLLOW_ANY, that mitigates an entire family of potential vulnerabilities, especially in security applications.

Endpoint Security provides the full, resolved (no symlinks), normalized path of each file involved in an auditable event (what OpenBSM veterans/victims like me used to know as “vnode kpath”), but how can applications be sure that the path still identifies the same file by the time they open it? With O_NOFOLLOW_ANY set, open(2) will fail with error ELOOP if any symlink is encountered anywhere in the path: A stronger version of O_NOFOLLOW that applies to the entire path, not just the final component.
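
Usage is as simple as it sounds; a minimal sketch:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/* Open a path obtained from, say, an Endpoint Security message,
 * refusing to traverse a symlink anywhere in the path. */
int open_resolved_path(const char *path) {
    int fd = open(path, O_RDONLY | O_NOFOLLOW_ANY);
    if (fd == -1 && errno == ELOOP)
        fprintf(stderr, "%s: symlink in path, possible race\n", path);
    return fd;
}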

Conclusion

What did I learn from my rummaging? Apple still likes its secrets; the Chromium source code still is the best documentation on the mitigations and sandboxing features provided by all major operating systems; diffing releases remains the best way to find hidden features; and some secrets can stay hidden in plain sight for a long time.

I wrote this article in part as a “look at this cool thing”, and in part as a sort of public service, so that the new, hidden macOS features no longer return a deafening silence when queried on search engines. Even if I got some details wrong, at least the topic can now be debated.

Endnotes

1 The relationship between tasks and processes in macOS can be roughly summarized as: Each (BSD) process corresponds exactly to one (Mach[5]) task, and vice versa, until the process calls exec(2). exec(2) terminates the current task, creates a new one and associates it to the process, replacing the dead one. macOS old timers may object that exec(2) actually keeps the same task, resetting its state. It used to work like that, but it was a fragile design, that was dealt a fatal blow by Google researcher Ian Beer in 2016. Starting from XNU 3789.21.4 (macOS 10.12.1, iOS 10.1), exec(2) creates a new task.

2 What’s the use for such an ancient API? It may sound incredible, but until Endpoint Security, macOS had no reliable auditing mechanism to log process deaths. Except acct(2), that is, which to describe as “archaic” would be a compliment. It logs fixed-size records to a global log file, it truncates process names to 9 characters, it logs user ids but not process ids, and the timestamp format for process exit times is a literally unbelievable $\frac{1}{64} \cdot 8^{\mathit{exponent}} \cdot \mathit{mantissa}$ seconds since the process started (good for about eight and a half years of non-stop running, but with a variable precision that drops below the second at the 2:16:30 mark), encoded in 16 bits as a 3-bit $\mathit{exponent}$ and a 13-bit $\mathit{mantissa}$; the process start time is a saner 32-bit count of seconds since the UNIX epoch (good for about 17 years from now). We opted not to use acct(2).

3 Mandatory Access Control, no relation to “Mac” as an abbreviation for “Macintosh”. Historically referring to security models patterned after military document classification practices, MAC is nowadays a generic term for any policy-based security model, distinct from and orthogonal to permission-based security models (also known as DAC, or Discretionary Access Control). macOS inherited its modular MAC framework from the TrustedBSD project and uses it with great gusto (I count no fewer than seven policy modules on my macOS 11 machine). The Linux Security Modules framework is the Linux equivalent. Windows has limited MAC in the form of the capabilities system for UWP apps. The bitter irony is that, being limited to a small subset of applications, it’s a “mandatory” access control system that operates on an opt-in basis—to say nothing of its complete non-extensibility.

4 If daemons have to run under an anonymous user, why must it be root, which, while still bound by MAC policies, can bypass all ACLs, send kill signals to any process, invoke sysctls in write mode and in general do a lot of damage? A little known fact is that while system daemons run as root, they do run inside extremely strict sandboxes based on the Seatbelt framework (internally based on, you guessed it, a MAC policy module, unimaginatively named Sandbox). In a sadly predictable twist, while extremely powerful and enabling incredibly granular access control, Seatbelt is almost completely undocumented (notice a pattern yet?). In a less predictable but sadder twist, it’s also been marked as deprecated with no replacement since OS X 10.8. Nevertheless, with all system daemons using it, plus third-party users of the caliber of Google Chrome and Mozilla Firefox, Seatbelt seems unlikely to disappear any time soon.

5 Mach (no relation to “Macintosh”) was a research project to replace the BSD kernel with a microkernel. The design proved impractical, and the most famous real-world implementation of Mach, NeXTSTEP (later “Darwin”, later “OS X”, later “macOS”), actually runs a Mach/BSD hybrid kernel with very little “micro”. Mach was an extremely influential design: Windows NT (later “Windows”) is a Mach clone too, except redesigned to work alongside a VMS-like kernel instead of a BSD one. Windows NT, too, flirted with a microkernel architecture, but it proved to be no more practical in the 90s than it had been in the 80s.

How to spot a DocuSign phish and what to do about it

Phishing scammers love well known brand names, because people trust them, and their email designs are easy to rip off. And the brands phishers like most are the ones you’re expecting to hear from, or wouldn’t be surprised to hear from, like Amazon or DHL. Now you can add DocuSign to that list.

DocuSign is a service that allows people to sign documents in the Cloud. Signing documents electronically saves a lot of paper and time. It also cuts back on human contact, which is particularly useful for remote working, or when everyone is locked down in a pandemic. Google searches for DocuSign almost doubled during March 2020, and stayed there, as so many people around the world started working from home.

Earlier this year, DocuSign specifically warned about phishing campaigns using its brand.

Bad signs to look for

DocuSign phishing emails have many of the tell-tale signs of other phishing attacks: Fake links, fake senders, misspellings, and the like. Recipients can check links by hovering their mouse pointer over the document link in the email. If it is an actual DocuSign document it will be hosted at docusign.net. In the spam campaigns we have seen, documents were hosted at docs.google.com, feedproxy.google.com, and some documents came as attachments, which DocuSign does not do.

Also, the sender address should belong to docusign.net, but that alone is not enough: We have seen spoofed messages coming from that address, so check for other indicators. You can read an exhaustive list of things to look out for, as well as addresses to report suspicious activity on DocuSign’s incident reporting page (although we recommend you simply opt for the safe document access option, described below).

Remember, if you’re in doubt, it is not stupid or rude to contact a sender by direct mail or another method, and verify the email’s authenticity (just don’t hit “reply”).

We’ve included some examples of DocuSign phishing campaigns below.

Fake DocuSign invoice emails

An example of a phishing email using the DocuSign brand

Signs:

  • “Dear Receiver”? If the sender does not use your actual name, that is a red flag.
  • The security code is way too short.
  • DocuSign links will read “REVIEW DOCUMENT” if it is a document that needs to be signed.
  • An extra space in “inquiries , contact” and other sloppy spelling.
  • Document was hosted at feedproxy.google.com, not docusign.net.
An example of a phishing email using the DocuSign brand

Signs:

  • “Dear Recipient”? Again, if the sender does not use your actual name, that is a red flag.
  • The security code is way too short.
  • Sentences are slightly off, like you would expect from a non-native speaker.
  • Document was hosted at docs.google.com.

Fake DocuSign attachments

Another sample from our spam honeypot came with an attachment pretending to be from DocuSign.

An example of a phishing email with an attachment, claiming to be an invoice from DocuSign

The sender address was spoofed. Opening the attachment presents the user with a fake Microsoft login screen, hoping to harvest the target’s password. The information would have been sent to a command and control server through the use of a compromised WordPress website.

A fake Microsoft login screen triggered by a fake DocuSign invoice

Safe document access

Rather than trying to identify whether or not an email is bad, it’s often safer (and no less convenient) to assume it’s bad and ignore its links completely.

We recommend that you use the “Alternate Signing Method” mentioned in legitimate DocuSign mails. If you get a DocuSign email, visit docusign.com, click ‘Access Documents’, and enter the security code provided in the email. It will have a format similar to this one: EA66FBAC95CF4117A479D27AFB9A85F01. (Don’t bother, it’s invalid.) If a scammer sends you a fake code it simply won’t work. There is no need to trust the sender, or the links in their email.

However, to complicate matters, phishers have now been discovered sending legitimate DocuSign emails from legitimate DocuSign accounts.

Real DocuSign emails used for phishing

Security vendor Avanan recently spotted a new DocuSign campaign that bypasses most of the advice provided above, by using real DocuSign accounts.

In this new attack, scammers upload a file to a real DocuSign account (either a free one or one stolen from somebody else) and share it with the target’s email address.

As a result, the recipient will receive a legitimate DocuSign mail with an existing and functional security code that leads to the malicious file. Sharing malicious documents is hard to do, because DocuSign does have protection against weaponized attachments: Uploaded document files are converted into static .pdf files, which removes malicious content like embedded macros. It remains to be seen if attackers will find ways around DocuSign’s protections. It probably won’t be necessary, though.

Hyperlinks are carried across to the shared document, after it’s converted to a PDF, and remain clickable for the recipient. So all an attacker has to do is make the victim click a link to a phishing site in the DocuSign-hosted document. And scammers are very good at getting people to click on links.

The protection methods against attacks using existing DocuSign accounts are:

  • You can determine if the email is legitimate by contacting the alleged sender using something other than email.
  • If you fall for the scam, anti-malware software will warn you if you try to go to a known phishing site; it should recognize and block malicious files that get downloaded; and its exploit protection will stop malicious documents from deploying their payload.
  • If the phishing site is unknown, a password manager can help. A password manager will not provide credentials for a site that it does not recognize, and while a phishing site might fool the human eye, it won’t fool a password manager. This helps keep users from getting their passwords harvested.

Keep your passwords safe!

Cars and hospital equipment running Blackberry QNX may be affected by BadAlloc vulnerability

Following an announcement by BlackBerry, the U.S. Food & Drug Administration (FDA) and the Cybersecurity & Infrastructure Security Agency (CISA) have put out alerts that vulnerabilities found in the BlackBerry QNX real-time operating system (RTOS) may introduce risks for certain medical devices.

Manufacturers are assessing which devices may be affected by the BlackBerry QNX cybersecurity vulnerabilities, evaluating the risk, and developing mitigations, including deploying patches from BlackBerry.

FDA and CISA warnings

The FDA, in its warning that certain medical devices may be affected by BlackBerry QNX cybersecurity vulnerabilities, points to the CISA alert. CISA mentions CVE-2021-22156, an integer overflow vulnerability in the calloc() function of the C runtime library of BlackBerry® QNX Software Development Platform (SDP) 6.5.0SP1 and earlier, QNX OS for Medical 1.1 and earlier, and QNX OS for Safety 1.0.1 and earlier, which could allow an attacker to perform a denial of service or execute arbitrary code.

BlackBerry’s QNX is an RTOS, a term that describes an operating system (OS) intended to serve real-time applications that process data as it comes in. Typically, this type of software is deployed in devices that require immediate interaction based on incoming information. The best example in this case may be the driver assistance options that many car manufacturers provide nowadays.

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). CISA notes that CVE-2021-22156 is part of a collection of integer overflow vulnerabilities known as BadAlloc.

What is BadAlloc?

In April 2021, the Azure Defender for IoT security research group uncovered a series of critical memory allocation vulnerabilities in IoT and OT devices that adversaries could exploit to bypass security controls in order to execute malicious code or cause a system crash.

These Remote Code Execution (RCE) vulnerabilities were dubbed BadAlloc and they were found to affect a wide range of domains, from consumer and medical IoT to Industrial IoT, Operational Technology (OT), and industrial control systems. Given the pervasiveness of IoT and OT devices, these vulnerabilities, if successfully exploited, represent a significant potential risk for organizations of all kinds.
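
To make the bug class concrete, here’s a generic illustration of the BadAlloc pattern (my own toy code, not BlackBerry’s): a calloc-style allocator that multiplies its arguments without checking for overflow, next to a fixed version.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Vulnerable pattern: nmemb * size can silently wrap around, so the
 * caller gets a much smaller buffer than requested, and later writes
 * into it overflow the heap. */
void *bad_calloc(size_t nmemb, size_t size) {
    void *p = malloc(nmemb * size);
    if (p) memset(p, 0, nmemb * size);
    return p;
}

/* Fixed pattern: reject multiplications that would overflow. */
void *good_calloc(size_t nmemb, size_t size) {
    if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;
    void *p = malloc(nmemb * size);
    if (p) memset(p, 0, nmemb * size);
    return p;
}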

We blogged about BadAlloc back in April if you are interested in more details.

Blackberry

If you are in my age group, you may remember Blackberry as a producer of smartphones that went the same way as VHS tapes and vinyl records. Appreciated by a few but hardly a serious competitor for the big guns.

Nowadays Blackberry produces software that is widely used—for example, in two hundred million cars, along with critical hospital and factory equipment. Automakers use BlackBerry® QNX® software in their advanced driver assistance, digital instrument clusters, connectivity modules, handsfree, and infotainment systems that appear in multiple car brands, including Audi, BMW, Ford, GM, Honda, Hyundai, Jaguar, Land Rover, KIA, Maserati, Mercedes-Benz, Porsche, Toyota, and Volkswagen.

Keep it under the hood

Back when BadAlloc was made public, BlackBerry kept quiet. But now BlackBerry has announced that old but still widely used versions of one of its flagship products, an operating system called QNX, contain a vulnerability that could let hackers cripple devices that use it.

Insiders have accused BlackBerry of purposefully keeping this information to itself at first. BlackBerry initially even denied that BadAlloc impacted its products at all, and later resisted making a public announcement, even though it couldn’t identify and inform all of the customers using the software.

Mitigation

CISA strongly encourages critical infrastructure organizations and other organizations developing, maintaining, supporting, or using affected QNX-based systems to patch affected products as quickly as possible.

  • Manufacturers of products that incorporate vulnerable versions should contact BlackBerry to obtain the patch.
  • Manufacturers of products who develop unique versions of RTOS software should contact BlackBerry to obtain the patch code. Note: in some cases, manufacturers may need to develop and test their own software patches.
  • End users of safety-critical systems should contact the manufacturer of their product to obtain a patch. If a patch is available, users should apply the patch as soon as possible. If a patch is not available, users should apply the manufacturer’s recommended mitigation measures until the patch can be applied. Note: installation of software updates for RTOS frequently may require taking the device out of service or to an off-site location for physical replacement of integrated memory.

A full list of affected QNX products and versions is available at the QNX website.

Unlike computers, Internet-connected devices can be difficult, or even impossible, to update. When these devices require internet access for their operation, this poses a big security risk. All you can do is try to reduce the attack surface by minimizing or eliminating exposure of vulnerable devices to the internet; implementing network security monitoring to detect behavioral indicators of compromise; and strengthening network segmentation to protect critical assets.

Stay safe, everyone!

Analysts “strongly believe” the Russian state colludes with ransomware gangs

“We have the smoke, the smell of gunpowder and a bullet casing. But we do not have the gun to link the activity to the Kremlin.” This is what Jon DiMaggio, Chief Security Strategist for Analyst1, said in an interview with CBS News following the release of its latest whitepaper, entitled “Nation State Ransomware“. The whitepaper is Analyst1’s attempt to identify the depth of human relationships between the Russian government and the ransomware threat groups based in Russia.

“We wanted to have that, but we believe after conducting extensive research we came as close as possible to proving it based on the information/evidence available today,” DiMaggio concluded.

Here are some of the key players and connections identified by Analyst1:

Evgeniy “Slavik” Bogachev

Hailed as “the most prolific bank robber in the world“, Bogachev is best known for creating ZeuS, one of the most prolific banking information stealers ever seen. According to the report, Bogachev created a “secret ZeuS variant and supporting network” on his own, without the knowledge of his closest underground associates—The Business Club. This ZeuS variant, which is a modified GameOver ZeuS (GOZ), was designed specifically for espionage, and it was aimed at governments and intelligence agencies connected with Ukraine, Turkey, and Georgia.

Analyst1, too, believes that, at some point, Bogachev was approached by the Russian government to work for them in exchange for their blessing to have him continue his fraud operations.

The United States officially indicted Bogachev in May 2014. Seven years on, Russia still refuses to extradite Bogachev. The Ukraine Interior Ministry had provided the reason why: Bogachev was “working under the supervision of a special unit of the FSB.” That is, the Federal Security Service, Russia’s security agency and successor to the Soviet Union’s KGB.

EvilCorp

The Business Club, the underground criminal gang that Bogachev himself put together, continued their operations. In fact, under the new leadership of Maksim “Aqua” Yakubets, Bogachev’s successor, the criminal enterprise rebranded and started calling themselves EvilCorp. Some cybersecurity companies recognize or name them Indrik Spider. Since then, they have been behind campaigns involving the harvesting of banking credentials in over 40 countries using sophisticated Trojan malware known as Dridex.

Yakubets was hired by the FSB in 2017 to directly support the Russian government’s “malicious cyber efforts”. He’s also the likely candidate for this job due to his relationship with Eduard Bendesky, a former FSB colonel who is also his father-in-law. It was also in 2017 that EvilCorp started creating and using ransomware—BitPaymer, WastedLocker, and Hades—for their financially-motivated campaigns. In addition, Dridex had been used to drop ransomware onto victim machines.

SilverFish

SilverFish was one of those threat actors who were quick enough to take advantage of the SolarWinds breach that was made public in mid-December of 2020. As you may recall, multiple companies that use SolarWinds’ Orion software were reportedly compromised via a supply-chain attack.

SilverFish is a known Russian espionage attacker and is said to be related to EvilCorp: the two groups used similar tools and techniques against at least one shared victim, including the same command and control (C&C) infrastructure and a unique CobaltStrike Beacon. SilverFish even attacked the same organization a few months after EvilCorp attacked it with their ransomware.

Wizard Spider

Wizard Spider is the gang behind the Conti and Ryuk ransomware strains. Analyst1 has previously profiled Wizard Spider as one of the groups operating as part of a ransomware cartel. DiMaggio and his team believe that Wizard Spider is responsible for managing and controlling TrickBot.

EvilCorp has a history of using TrickBot to deliver its BitPaymer ransomware to victim systems. This suggests that a certain level of relationship is at play between the two groups.

Does it matter?

While the Analyst1 report contains some interesting findings, we agree that it doesn’t deliver a smoking gun. That doesn’t mean there isn’t a smoking gun, somewhere, of course. But even if there is, unless you’re an intelligence agency like the NSA, establishing the intent of a potential attacker can be a waste of time and effort.

Does that mean you shouldn’t care about attribution at all? No. It’s sensible to update your threat model in response to tactics used by real-world threat actors. But it often doesn’t matter who is doing the attacking. Ransomware is a well-established and well-resourced threat to your business, whether it’s state-funded or run by criminal gangs living off several years of multi-million dollar payouts and a Bitcoin boom.

You can read more about attribution in our two-part series on the subject, starting with when you should care.

A week in security (August 9 – August 15)

Last week on Malwarebytes Labs:

Other cybersecurity news:

Stay safe, everyone!

How to troubleshoot hardware problems that look like malware problems

Sometimes it’s hard to figure out what exactly is going wrong with your computer. What do you do if you’ve run all the scans, checked all the files, and everything says the PC is malware free? Here’s a list of common problems that resemble cybersecurity issues, but could be caused by something hardware-related instead.

My computer is overheating

Some types of malware try very hard to go unnoticed, but others can be CPU hogs capable of turning your keyboard into a waffle iron. The encryption routines in ransomware demand a lot of resources, for example. But there are other, far more obvious signs of a ransomware problem, so if you’ve got this far, it’s not that. So perhaps it’s a cryptominer grinding away in your browser or System32 folder. If your antivirus says “no” though, it’s more likely to be one of the problems below:

  • One or more of your fans aren’t working. If you have a PC, you should be able to follow the wire connecting the problem fan to the motherboard / associated socket. Sometimes there are so many wires in there, they can get nudged out of place. This is especially common when removing the panel on the side of the motherboard to clean behind the wires.
  • A software change has affected your fan profile. A fan profile is software that exerts a specific amount of control over your fans. It tells them when to ramp up, and how. Sometimes updates to your fan control program or associated hardware can do odd things to settings. You’ll have to go back in and set them to your liking.
  • Your thermal paste needs a refresh. A layer of thermal paste sits between your heat sink and your processor and conducts the heat—that would otherwise engulf the CPU—into the heat sink. It’s possible your paste needs replacing. This is quite a precise process however, so watch a few tutorial videos before attempting it.
  • Your graphics card is about to die. This is the worst case scenario. If you’re lucky, a good clean may solve the issue, though you should be looking to regularly clean everything inside your PC anyway. Dust build up? Get rid of it sooner rather than later. Contacting your PC / parts supplier at this stage is also a good idea.

My computer keeps restarting / Blue screen of death

Plenty of malware files make PCs restart or trigger the dreaded blue screen of death (BSOD). Plenty of other things do too though. Here are some alternative causes to think about:

  • Loose or faulty RAM sticks. I’ve had machines which restarted, popped a BSOD, or simply stuttered and staggered while on the desktop. Check to make sure all of your RAM sticks are in securely. If one seems a little loose, remove and reinsert it correctly. You can also run diagnostic tests on your sticks if the machine runs long enough for you to do so. If not, the long-winded approach is to remove one stick at a time and see if the problem magically goes away. If it does, there’s a good chance you’ve identified the problem.
  • Peripheral devices left in at shutdown can cause odd issues when you boot up. There’s no real rhyme or reason to this. I’ve seen USB sticks, cameras, phones, and even a digital keyboard cause a PC to not load correctly or act strangely after booting up. I’ve also seen PCs refuse to boot because of a peripheral one minute, and ignore it entirely the next. If in doubt, just take it out.
  • You might have a Windows-specific issue going on under the hood. You should consider sorting out various recovery tools and backup plans now.
  • Your PSU (power supply) may not be working correctly, or on the verge of failure. This is a bit of a tricky one to test, because messing around with PSUs and electricity can be incredibly dangerous. If the thought of paperclip tests or getting out the multimeter fills you with dread, you’re better off asking the company you bought the PC from for help or switching it out for a different PSU.

I can’t see my files / my hard drive is missing

Yes, some malware will happily scrub all of your saved documents. Most won’t. There can be other explanations:

  • Check your wires. I’ve seen PCs where the caddy holding the drive has broken, the hard drive has fallen to the bottom of the case, and a wire has been dislodged. Reattaching the wire and securing the caddy was all that was needed to stop the drive randomly disappearing and reappearing whenever it felt like it.
  • Check your Windows. Some people reported files going missing after upgrading to Windows 10, or (occasionally) other updates. Considering Windows 11 is on the way, it might be worth revisiting what happened.
  • The files might be hiding, or somewhere else. If files aren’t where they’re supposed to be but your hard drive usage suggests everything is still present, never fear. Fire up an app which tells you exactly how much space is being used, and what is using it. A relative of mine had some files go walkabout after a system update, and they were able to find them with a third party tool.
  • Check the drive for signs of corruption or imminent failure. Sometimes hardware just fails. This is a mechanical issue and not something you can hope to prevent. Back everything up as soon as you can, if you aren’t already.

Conclusion

Computers are often surprisingly delicate, and their rugged cases don’t accurately reflect the 24/7 juggling operation taking place down on the motherboard. There are many other hardware problems, but the ones listed above tend to be the first port of call for budding hardware fixers.

If you can deal with both software and hardware issues as they arise, there’ll be no stopping you the next time a relative gives you a call at Christmas with a “small problem…”

Katie Moussouris hacked Clubhouse. Her emails went unanswered for weeks: Lock and Code S02E15

Nearly one year after the exclusive app Clubhouse launched on the iOS store, its popularity skyrocketed. The app, which is now out of beta, lets users drop into spontaneous audio conversations that, once they are over, are over. With COVID lockdown procedures separating many people around the world last year, Clubhouse offered its users immediate, unplanned, conversational magic that maybe they lost in shifting to a work from home environment.

At the time, it was perhaps an app to find a feeling.

And in 2021, Luta Security CEO and founder Katie Moussouris found a crucial vulnerability in it. But when she tried to tell Clubhouse about the flaw—which let her hide her presence inside a listening “room” so she could eavesdrop on conversations—the company failed to listen to her for weeks. Her emails went unanswered, and the vulnerability that she discovered could be exploited with a simple trick. Perhaps most frustrating of all, Clubhouse had actually set up what’s called a “bug bounty” program, in which a company pays independent researchers to come forward with evidence and reporting of vulnerabilities in its products.

With a bug bounty program in effect, why then did Clubhouse delay on fixing its flaw?

“[Clubhouse] is too large, too popular, and too well-funded to be in the denial stage of the five stages of vulnerability response grief,” Moussouris said on the most recent episode of Lock and Code, with host David Ruiz.

Tune in to learn about the vulnerability itself, how Moussouris discovered it, how Clubhouse delayed in moving forward, and whether bug bounty programs are actually the right tool for developing secure software.


You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Phishing campaign goes old school, dusts off Morse code

In an extensive report about a phishing campaign, the Microsoft 365 Defender Threat Intelligence Team describes a number of encoding techniques that were deployed by the phishers. And one of them was Morse code.

While Morse code may seem like ancient communication technology to some, it does have a few practical uses in the modern world. We just didn’t realize that phishing campaigns were one of them!

Let’s look at the campaign, and then we’ll get into the novel use of an old technology.

The campaign

Microsoft reports that this phishing campaign has been ongoing for at least a year. It’s being referred to as the XLS.HTML phishing campaign, because it uses an HTML email attachment of that name, although the name and file extension are modified in variations like these:

  • xls.HTML
  • xslx.HTML
  • Xls.html
  • .XLS.html
  • xls.htML
  • xls.HtMl
  • xls.htM
  • xsl_x.h_T_M_L
  • .xls.html
  • ._xslx.hTML
  • ._xsl_x.hTML
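
The pattern behind these variations is simple enough to catch programmatically. Here is a minimal sketch of our own (not taken from Microsoft’s report) of a heuristic that flags such attachment names; the function name and regex are illustrative assumptions:

    import re

    def looks_like_xls_html_lure(filename: str) -> bool:
        # Hypothetical heuristic, not from Microsoft's report:
        # strip the dots and underscores the phishers sprinkle in,
        # lowercase the result, then look for a spreadsheet-like token
        # followed later by an "htm" extension fragment.
        normalized = re.sub(r"[._]", "", filename).lower()
        return re.search(r"(xls|xslx).*htm", normalized) is not None

    # All of the variants listed above are flagged; a real .xlsx is not.
    for name in ["xsl_x.h_T_M_L", "._xslx.hTML", "xls.htM", "report.xlsx"]:
        print(name, looks_like_xls_html_lure(name))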

The phishers use variations of XLS in the filename in the hope that the receiver will expect an Excel file and open the attachment. When they open the file, a fake Microsoft Office password dialog box prompts the recipient to re-enter their password, because their access to the Excel document has supposedly timed out. This dialog box is placed on a blurred background that displays parts of the “expected” content.

Opening the email attachment triggers a fake Microsoft Office password dialog prompting users to “re-enter” their password.

The script in the attachment fetches the logo of the target user’s organization and displays their user name, so all the victim has to do is enter the password, which is then sent to the attacker’s phishing kit running in the background.

After trying to log in, the victim sees a fake page with an error message and is prompted to try again.

While the user’s password is passed on to the attacker, the dialog insists it was incorrect.

The information the phishers use about the target, such as the email address and company logo, makes it easy to tell that these phishing mails are part of a targeted campaign that required some preparation to reach this step.

The phishing campaign is also another step in gathering more data about its victims. In the latest waves, the phishers fetch the user’s IP address and country data, and send that data to a command and control (C2) server along with the usernames and passwords.

Encoding

The phishing campaign has been seen using different types of encoding, and combinations of encodings. For example, in one of the waves the user mail ID was encoded in Base64. Meanwhile, the links to the JavaScript files were encoded in ASCII before being encoded again, with the rest of the HTML code, in Escape.

Encodings seen in the campaign included:

  • ASCII, a basic character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices.
  • Base64, a group of binary-to-text encoding schemes that represent binary data in an ASCII string format. Because they use only a small set of ASCII characters, Base64 strings are easy to embed in text formats, and a URL-safe variant allows binary data to be included in URLs.
  • Escape or URL-encoding, originally designed to translate special characters into some different but equivalent form that is no longer dangerous in the target interpreter.
  • Morse code, more about that below.

Note that encoding is different from encryption. Encoding turns data from one format into another, with no expectation of security or secrecy. Encryption transforms data in a way that can only be reversed by somebody with specific knowledge, such as a password or key.

Encoding methods won’t hide anything from a security researcher, so why bother? Changing the encoding methods around makes it harder for spam filters trained on earlier versions of the campaign to spot the later ones.
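
To make the distinction concrete, here is a short illustration of our own in Python (the values are made up, not taken from the campaign), showing that both Base64 and URL/Escape encoding are trivially reversible by anyone:

    import base64
    from urllib.parse import quote, unquote

    payload = "victim@example.com"  # illustrative value only

    # Base64: binary-to-text encoding, reversible without any secret
    encoded = base64.b64encode(payload.encode()).decode()
    print(encoded)                              # dmljdGltQGV4YW1wbGUuY29t
    print(base64.b64decode(encoded).decode())   # victim@example.com

    # URL/Escape encoding: special characters become %XX sequences
    escaped = quote(payload, safe="")
    print(escaped)                              # victim%40example.com
    print(unquote(escaped))                     # victim@example.com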

Morse code

Morse code is a communication system developed by Samuel Morse, an American inventor, in the late 1830s. The code uses a combination of short and long pulses, which can be represented by dots and dashes that correspond to letters of the alphabet.

Famously, the Morse code for “SOS” is ... --- ... (three dots, three dashes, three dots).

The International Morse Code encodes the 26 letters of the English alphabet, so the phishers had to come up with their own encoding for numbers. Morse code also doesn’t include special characters and can’t distinguish between upper and lower case, which makes it harder to use than other types of encoding.

So, technically, they didn’t use Morse code but a custom encoding scheme that borrowed Morse code’s dashes and dots to represent characters.

This is how the JavaScript section for the Morse code decoding looked:

Embedded JavaScript including Morse code
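
Microsoft’s report shows the script only as a screenshot, but the underlying idea is easy to sketch. Below is a minimal, hypothetical Python equivalent of such a decoder, assuming space-separated dot/dash tokens with “/” between words; a real kit would extend the lookup table with its own made-up codes for digits and special characters:

    # Hypothetical Morse-style decoder, modeled on (not copied from)
    # the campaign's embedded JavaScript.
    MORSE_TABLE = {
        ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
        "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
        "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
        ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
        "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
        "--..": "z",
    }

    def decode_morse(encoded: str) -> str:
        # "/" separates words; whitespace separates letters
        words = encoded.strip().split("/")
        return " ".join(
            "".join(MORSE_TABLE.get(token, "?") for token in word.split())
            for word in words
        )

    print(decode_morse("... --- ..."))  # prints "sos"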

In one wave, links to the JavaScript files were encoded using ASCII, then Morse code. In other cases, the domain name of the phishing kit URL was encoded in Escape before the entire HTML code was encoded using Morse code.

Addendum

During our own research for this article, we also came across files that used the pdf.html filename and similar variations on the theme we saw with the xls.html extension. These HTML files produced the same kind of prompt, this time to log into Outlook because the sign-in had supposedly timed out.

These samples were named using the format: {company}-payroll-{date}-pdf.HtmL

For more information about phishing and how to protect yourself and your company, please have a look at our page about phishing. For a full description of the phishing campaign, take a look at the Microsoft blog.

... - .- -.-- / ... .- ..-. . --..-- / . ...- . .-. -.-- --- -. . -.-.--

The post Phishing campaign goes old school, dusts off Morse code appeared first on Malwarebytes Labs.

Cyberbullying 101: A Primer for kids, teens, and parents

At some point in our lives, we have likely either been bullied, stood back and watched others bullying, or participated in the act. Playing the role of offender, offended, and bystander has become easier, thanks to the Internet and the technologies that make it possible to keep us connected.

In this article, we aim to arm you with the basics. From there you can decide for yourself if you want to further expand your knowledge so you know what to do to help someone—a family member, a peer—who might be involved in incidents of cyberbullying.

What is cyberbullying?

Cyberbullying is a term used to describe the act of bullying someone using electronic and digital means. Bullying involves two things: intent and persistence. An offender intentionally says or does something negative to the offended, and does so repeatedly over a period of time. This sets cyberbullying apart from, say, a one-time encounter with someone being mean or rude.

Cyberbullying is often used interchangeably with the terms “online bullying”, “digital bullying”, “online aggression”, or “electronic aggression”.

Note that cyberbullying and physical bullying could happen to an individual at the same time.

Examples of cyberbullying

Cyberbullying can take many forms, can happen anywhere online, and can target anyone, including adults in the workplace. It is probably most commonly associated with kids and teens who send hurtful text messages to their victims, or spread rumors about them on social media. Some bullies share non-consensual images and video recordings of victims doing something in private.

Again, we’d like to stress that what classifies something as bullying isn’t a specific act or platform, but the willfulness of the bully, and the repeated harm they inflict on their victim.

What are the effects of cyberbullying?

The effects of bullying can manifest in someone physically, emotionally, mentally, and socially. And cyberbullying doesn’t just affect the victim and the offender; it also affects those who stand by and watch as the bullying takes place.

Studies have shown that those involved in bullying—whether they’re the abuser, the abused, or a bystander—can experience headaches, recurring stomach pains, and difficulty sleeping. They can also have problems concentrating, behavioral issues, and can find it difficult to get along with others. Emotionally and mentally, those who are abused can feel sad, angry, frustrated, scared, and worthless, and can experience suicidal thoughts.

The effects of bullying can also manifest as depression or a sudden change in attitude, such as not wanting to go to school or avoiding smartphones.

Is cyberbullying the same as cyber violence?

Cyber violence appears to be short for “cyber violence against women and girls (VAWG)”. It is a term used to describe violent online behaviors aimed specifically at women and girls. Usually, the victims are subjected to domestic abuse by a former or current partner.

According to UNESCO (United Nations Educational, Scientific and Cultural Organization) [PDF], “Violent online behaviour ranges from online harassment and public shaming to the desire to inflict physical harm including sexual assaults, murders and induced suicides.”

In UNESCO’s eyes, the tragic case of Amanda Todd, the 15-year-old Canadian teen who committed suicide after posting an emotional video on YouTube about the bullying she had suffered at the hands of a pedophile, is a crime rooted in cyber violence.

Is cyberbullying illegal?

All US states have some form of law that covers or addresses bullying behavior. You can learn more about this by visiting the Cyberbullying Research Center’s Bullying Laws Across America map.

How do you report cyberbullying?

Reporting an individual or a group for cyberbullying is one way to stop online harassment.

If you or someone you know is experiencing negative behavior that could escalate to cyberbullying, let a trusted adult know. Keep evidence of the online bullying, such as screenshots, in a secure place. If the platform where the bullying takes place allows it, block the bully.

You can also reach out to the websites and platforms where the bullying is taking place. The Cyberbullying Research Center has a huge list of contact details that direct you to the right place for reporting bullying on a wide variety of platforms, including social media sites and games.

If you’re anywhere in the US or Canada, remember that you have the Crisis Text Line where you can reach a Crisis Counselor at any time, 24/7. Simply text HOME to 741-741. This free support can also be reached via WhatsApp at 443-SUPPORT. Additionally, residents in Canada can also contact Kids Help Phone by texting CONNECT to 686-868.

Residents in the UK and Ireland can text SHOUT to 85-258 and HELLO to 50-808, respectively.

The post Cyberbullying 101: A Primer for kids, teens, and parents appeared first on Malwarebytes Labs.

VPN Test: How to check if your VPN is working or not

The primary function of a Virtual Private Network (VPN) is to enhance your online privacy and security. It should do this without slowing your Internet too noticeably. Performing a VPN test or two can help you ensure that it’s up to the mark.

VPN privacy test

Your Internet Service Provider (ISP) assigns a unique IP address to your router, the device that connects the computers, phones, and tablets in your house to the Internet. Every device in your home that connects through that router uses its IP address on the Internet. The IP address is allocated from a pool of addresses your ISP controls, so it can change from time to time, but it probably doesn’t change very often.

IP addresses are necessary for getting your Internet traffic to the right place, and getting the responses back to you, but they have a couple of drawbacks:

  • They are allocated geographically, so they can be used for a form of crude geolocation.
  • Because you have to tell all the websites and services you use what your IP address is, it can be used by advertising and tracking services to track you across the web, either on its own or as part of a fingerprint.

When you use a VPN, you create an encrypted tunnel between your computer and your VPN provider, and then join the Internet from one of your VPN provider’s computers. This protects your privacy in a few different ways.

  • Because your connection joins the Internet from your VPN provider, you use an IP address assigned by your VPN provider, rather than your router’s, on the Internet.
  • The encrypted tunnel between you and your VPN provider stops your ISP, rogue Wi-Fi hotspots, or other interlopers from snooping on your traffic. In particular, it stops them from looking at your DNS traffic, which can reveal which websites you’re visiting.

VPN leaks

Part of a VPN’s privacy protection comes from hiding your real IP address, so it’s important to understand that IP addresses can “leak”. You can leak your IP address via DNS if your DNS traffic passes through the encrypted tunnel, where your ISP can’t see it, exits your VPN, and then goes back to your ISP’s DNS servers for resolution.

You can also leak your IP address via WebRTC, a real-time communication protocol your web browser uses for things like video calls.

IP leaks are rare on reputable and secure VPN services, because the best VPN companies put safeguards in place to reduce their likelihood. Avoid free VPNs: your privacy is often not their priority.

Checking for basic IP address leaks

  1. Ensure that your VPN is disconnected and visit a search engine like DuckDuckGo. Type “what is my IP address.” Hit enter and then note down your IP address.
  2. Launch your VPN client and connect to a VPN server. Double-check to see that you’re connected, and note down the IP address the VPN has given you (if it tells you).
  3. Repeat step one and note down what your IP address is now. If your IP address hasn’t changed from step one, your IP address is not being masked. If it matches the one you noted in step two, your IP address is being masked. (A scripted version of this check is sketched below.)
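
If you’d rather script the comparison, here is a small sketch of our own using Python’s standard library and the public api.ipify.org echo service (any similar service works); run it once with the VPN off and once with it on:

    import urllib.request

    def public_ip() -> str:
        # Ask a public echo service which IP address our traffic
        # appears to come from.
        with urllib.request.urlopen("https://api.ipify.org") as response:
            return response.read().decode().strip()

    # Run once with the VPN disconnected, then again connected.
    # If both runs print the same address, the VPN isn't masking your IP.
    print(public_ip())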

Testing for DNS and WebRTC leaks

Even if your VPN passes the basic IP leak test, you should run tests for DNS and WebRTC leaks. You can test for IP address leaks via DNS on websites like DNSLeakTest or DNSLeak. You can test for IP leaks via WebRTC on websites like browserleaks.com. You may have to disable WebRTC to stop the leak.

The post VPN Test: How to check if your VPN is working or not appeared first on Malwarebytes Labs.