IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

What is an .exe file? Is it the same as an executable?

You may often see .exe files, but you may not know what they are. Is an .exe file the same as an executable file? The short answer is no. So what’s the difference?

What is an .exe file?

Exe in this context is a file extension denoting an executable file for Microsoft Windows. Windows file names have two parts: the file’s name, followed by a period, followed by the extension (suffix). The extension is a three- or four-letter abbreviation that signifies the file type.

I hear some advanced users moaning in the back of the class, because there are many exceptions. But as a general rule, everything behind the last period in the filename is the extension. For example, because Windows default settings don’t always show the extension of a file, some malware authors name their files really_trustworthy.doc.exe, hoping that the user’s Windows settings will hide the .exe part and make the user believe this is a document they can safely open.
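To see why only the last period matters, here is a quick illustration using Python’s standard path utilities (the filename is just the example from above):

```python
import os

# Only the text after the LAST period counts as the extension
name, ext = os.path.splitext("really_trustworthy.doc.exe")
print(name)  # really_trustworthy.doc
print(ext)   # .exe  <- Windows treats this file as a program, not a document
```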

By using this trick in filenames like YourTickets.pdf.exe, malware like CryptoLocker was mailed to millions of potential victims. The icon was the same as that of legitimate PDF files, so it was hard for some recipients to spot the difference. The emails usually pretended to be from a worldwide courier service, but they also masqueraded as messages from a travel agency.

Wait, what? Is an .exe file a virus?

An .exe file can be a virus, but that is certainly not true for all of them. In fact, the majority are safe to use or even necessary for your Windows system to run. It all depends on what is in an .exe file. Basically .exe files are programs that have been translated into machine code (compiled). So, whether an .exe file is malicious or not depends on the code that went into it.

Most normal .exe files adhere to the Portable Executable (PE) file format. The name “Portable Executable” refers to the fact that the format is not architecture specific, meaning it can be used in 32-bit and 64-bit versions of Windows operating systems. Under this standard format, the actual code can be found in the .text section(s) of an executable.
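If you want to peek at that structure yourself, the third-party pefile Python library parses PE headers and sections. A minimal sketch, assuming a sample file on disk (the file name is a placeholder):

```python
import pefile  # pip install pefile

pe = pefile.PE("example.exe")  # hypothetical sample

# The PE file header records the target architecture
print(f"Machine type: {hex(pe.FILE_HEADER.Machine)}")

# List the sections; the actual code usually lives in .text
for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode(errors="replace")
    print(f"{name:10} virtual addr: {hex(section.VirtualAddress)} "
          f"raw size: {section.SizeOfRawData}")
```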

How do I open an .exe file?

This is an ambiguous question that deserves two answers.

To use an .exe file you can usually just double click it. You may get a security prompt before it actually runs, but technically you will have initiated running the program inside the .exe file.

If you want to look at what is inside an .exe file, that is a much more complicated question. It depends on why you want to look inside. Examining files without running them is called static analysis, whereas dynamic analysis is done by executing the program you want to study. As mentioned before, .exe files have been compiled into machine code, so you need special programs to do static analysis. The most well-known program for this is IDA Pro, which translates machine code back into assembly code. This makes an .exe more understandable, but it still takes a special skillset to make the step from reading assembly code to understanding what a program does.
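One simple static-analysis step that doesn’t require a disassembler is dumping the printable strings from the binary, much like the classic strings utility; embedded URLs or ransom notes often turn up this way. A rough sketch in Python (the sample file name is hypothetical):

```python
import re

def extract_strings(path, min_len=6):
    """Yield runs of printable ASCII, similar to the classic `strings` tool."""
    with open(path, "rb") as f:
        data = f.read()
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

for s in extract_strings("example.exe"):  # hypothetical sample
    print(s)
```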

Difference from an executable

The definition of an executable file is: “A computer file that contains an encoded sequence of instructions that the system can execute directly when the user clicks the file icon.” Executable files commonly have an .exe file extension, but there are hundreds of other executable file formats.

So, every true .exe file is an executable, but not every executable file has the .exe extension. We mentioned before that .exe files are commonly intended for use on systems running a Windows OS. That doesn’t mean you can’t open an .exe file on, say, your Android device, but you will need an emulator or something similar to make that happen. The same is true if you are wondering how to open an .exe file on a system running macOS.

Are .exe files safe to open?

It’s not safe to open every .exe file you encounter. Just like with any other file, whether you can trust it depends on the source of the file. If you receive an .exe file from an untrusted source, you should use your anti-malware scanner to scan the file and find out whether it is malicious. If you’re still in doubt, get a second opinion by uploading it to VirusTotal to check whether any of the participating vendors detects the file.
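If the file might contain sensitive data, you don’t even have to upload it: you can compute its SHA-256 hash and search VirusTotal for that hash instead. A quick sketch (file name is a placeholder):

```python
import hashlib

def sha256_of(path):
    """Hash the file in chunks so large executables don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Paste the result into VirusTotal's search box to see existing scan reports
print(sha256_of("suspicious.exe"))  # hypothetical file name
```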

Can an .exe file run itself?

Any executable file needs a trigger to run. A trigger can be a user double-clicking the file, but it can also come from the Windows registry, for example when Windows starts up. So the closest an .exe file can come to running itself is by creating a copy in a certain location and then pointing a startup registry key to that location, or by dropping the copy or a shortcut in the Startup folder, since all the files in that folder get run when Windows starts.
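You can inspect one of those startup triggers yourself with Python’s standard winreg module (Windows only). A minimal sketch that lists the per-user Run key, one of several autostart locations:

```python
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

# Entries here are started automatically every time this user logs in
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]  # number of values in the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")
```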

But there are other triggers. For example, there are AutoPlay and AutoRun options in Windows that get executed when, for example, a USB device is connected. Malware can also hide in the firmware of devices and get executed once the device is connected. This is one reason not to trust USB sticks you find in a parking lot or that get handed out as swag. You do not want to be responsible for the next cyber incident in your organization, right?

Other executable files

All the potentially bad stuff I have written about .exe files is just as true for almost all other executable files, so it’s not the case that .exe files are bad by nature or that they deserve the least trust. The same dangers can be associated with other executable files. Unfortunately, other operating systems have their own malware, which uses their own executable file formats, but that’s a story for another day.

Stay safe, everyone!

The post What is an .exe file? Is it the same as an executable? appeared first on Malwarebytes Labs.

Inside Apple: How Apple’s attitude impacts security

Last week saw the fourth edition of the Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Because of the control it exerts over its ecosystem, understanding Apple’s attitude to security—and its willingness to act as a security “dance partner”—is crucial to securing Apple systems and developing Apple security software.

I was at OBTS, and this is what I learned about Apple’s current attitude to privacy, security, and communication.

Apple’s not great at working with security researchers

It’s no great surprise to anyone that Apple has a rocky relationship with many security researchers. Years ago, well-known researcher and co-author of the book “The Mac Hacker’s Handbook”, Charlie Miller, figured out how to get a “malicious” proof-of-concept app into the App Store, and reported this to Apple after having achieved it. His reward? A lifetime ban from Apple’s developer program.

This says a lot about Apple’s relationship with third-party security researchers. Unfortunately, things haven’t changed much over the years, and this is a constant source of strain in the relationship between Apple and the people trying to tell it about security issues. During the conference, Apple got booed several times by the audience following reports from OBTS speakers of mismanaged bug reports and patches.

What is it that Apple has been accused of doing? There have been multiple offenses, unfortunately. First, a number of security researchers have reported getting significantly lower bug bounties from Apple’s bug bounty program than they should have earned. For example, Cedric Owens (@cedowens) discovered a bug in macOS that would allow an attacker to access sensitive information. Apple’s bug bounty program states that such bugs are worth up to $100,000. They paid Cedric $5,000, quibbling over the definition of “sensitive data.” (For the record: Cedric’s bug absolutely gave access to what any security researcher or IT admin would consider sensitive data… more on this later.)

Other researchers have reported similar issues, with significantly reduced payments for bugs that should have qualified for more. Further, there is often a significant wait for the bounties to be paid, after the bugs have been fixed—sometimes six months or more. Apple also has a tendency to “go silent,” not responding to researchers appropriately during the process of handling bug reports, and has repeatedly failed to properly credit researchers, or even mention important bugs, in its release notes.

All this leaves a sour taste in many researchers’ mouths, and some have decided to either publicly release their vulnerabilities—as in the case of David Tokarev, who published three vulnerabilities after Apple failed to act on them for many months—or to sell those vulnerabilities on the “gray market,” where they can earn more money.

Screenshot of David Tokarev’s blog post disclosing three 0-day iOS vulnerabilities and critiquing the Apple Security Bounty program

Keep in mind here that Apple is one of the richest companies in the world. Paying out the highest prices for security bugs would be pennies compared to Apple’s yearly profits.

A patching myth busted

It has long been a rule of thumb that Apple supports the current system, plus the previous two, with security-related patches. Currently, that would mean macOS 11 (Big Sur), plus macOS 10.15 (Catalina) and macOS 10.14 (Mojave).

However, this is not something Apple has ever stated. I honestly couldn’t tell you where this idea came from, but I’ve heard it echoed around the Mac community for nearly two decades. Although researchers and some IT admins have questioned for years whether this “conventional wisdom” is actually true, many believe it. Josh Long (@theJoshMeister) did a lot of research into this, and presented his findings at the conference.

There have been many bugs in the last year that were fixed for only some of the “current three” systems. This was known to a degree, but Josh’s data was eye-opening as to the extent to which it was happening. Folks who were aware of some of these discrepancies theorized that some of these bugs may not have affected all three systems, and that may explain why patches were never released for them.

However, Josh was able to track down security researchers who had found these bugs, and confirmed that, in at least one case, Mojave was affected by a bug that had been patched in Catalina and Big Sur only. Thus, we know now that this rule of thumb is false. This confirmed many people’s suspicions, but there are many others who have continued to believe in the myth. It’s echoing around Apple’s own forums, among other places.

The fact that this speculation persisted for years, and that research was even necessary to prove it false, is a major failing on the part of Apple. Microsoft tells its users whether a system is still supported or not. Why can’t Apple do the same? Staying silent, and allowing people to believe the myth of the “three supported systems,” means that some machines are left vulnerable to attack.

At this point, you should assume that only the most current system—Big Sur at the moment, but soon to be Monterey—is fully patched, and that there may be known vulnerabilities left unpatched in all others. This means you should feel a greater sense of urgency about upgrading when a new system like Monterey comes out, rather than waiting for months.

Apple loves privacy, but you can still be tracked

Apple is well-known for its strong stance on privacy. (I say that as if Apple isn’t well-known otherwise, and you might say, “What’s the name of that company that really likes privacy?”) However, we heard plenty of talk about data access and tracking despite this. (Or maybe because of Apple’s views on privacy, it’s more interesting when we learn how to violate it?)

Eva Galperin (@evacide) talked about how stalkers can track you on iOS, despite Apple’s protections. From a technical perspective, spyware—defined as software running on the device that surveils and tracks you—is not much of a thing, because of Apple’s restrictions on what apps can do, plus the fact that you can’t hide an app on iOS.

However, Eva showed how spyware companies are nonetheless capable of enabling you to creep on your ex. Many of these companies provide web portals where you enter your stalking victim’s Apple ID and password, which enables tracking via iCloud’s features. iCloud email can be read, as well as notes, reminders, files on iCloud Drive, and more. Find My can provide the victim’s location. Photos synced up to iCloud can be viewed. And so on.

You might say, “But wait! This requires me to know my victim’s Apple ID password, and have access to their two-factor authentication! Therefore, this is a non-issue.”

However, keep in mind that in many domestic abuse situations, the attacker has exactly this kind of information. Further, Apple ID credentials can easily be found in data breaches, for potential victims who have used the same password for Apple ID that they’ve used elsewhere, and there are techniques attackers can use to capture two-factor authentication codes.

Plus, let’s all remember the situation a few years back where someone was able to trick Apple support into helping them gain access to celebrity accounts, in order to steal their nude photos from iCloud.

On a different topic, Sarah Edwards (@iamevltwin) talked about the Apple Wallet. As a forensics expert, Sarah has a deep understanding of data and how to access it, and demonstrated the kind of data that could be obtained with access to iPhone backups. If an attacker could gain access to those backups, there’s a wealth of information about your daily activities, places that you frequent, and many other things to be harvested.

Apple has gone bananas… and who is Keith?

The most amusing part of the conference came during Sarah Edwards’ talk, when she discussed the data found in a particular database for Apple Wallet. This database contained hundreds of tables, and most of them were named after fruit. Yes, you heard me correctly—bananas, oranges, lemons, …durians! These are all the names of tables in a database relating to your wallet.

At first glance, this is quite puzzling. But it does make a certain amount of sense. If you’re trying to extract some data from this database, you’re going to have to put in a lot of work to figure out how to find it. The table names are not going to help you at all. That’s a pretty good thing, although I don’t envy the developers who have to keep all those tables straight. (“Where did we put the data on library cards again? Oh, yeah, in ‘kiwis!’”)

Although many of those tables are still a mystery, Sarah had been able to determine the purpose of some of them, through experimentation and observation. Still, many tables contained only things like identification numbers and timestamps, which by themselves are meaningless.

(As an aside, if the “durians” table doesn’t contain information relating to pay toilet transactions, I’ll be extremely disappointed!)

All privacy-related discussions aside, these table names remind me of Apple’s fun and playful side, which we so rarely get to see these days. Everyone knows Apple’s secretive facade, and security researchers often experience Apple’s sharp edges.

However, long-time Apple users know and love the “fun Apple.” This is the Apple that inscribed the signatures of all the engineers on the inside of the early one-piece Mac cases, where only a few would ever see them. Or the Apple that included a calendar file containing a history of Tolkien’s Middle Earth hidden in every copy of macOS. Or the Apple that used to Rickroll you on their Apple Watch support page!


Especially amusing was the discovery that, buried in the midst of all the fruit, there was a database simply named “keith.” Who is this Keith, and why is he in the wallet? Inquiring minds want to know!

For all of Apple’s flaws that we love to complain about, the discovery of this database brought back memories of the Apple that I love, and reminded me that it’s not just a faceless corporation, but is also a company full of people who also know and love the same Apple that I do.

The post Inside Apple: How Apple’s attitude impacts security appeared first on Malwarebytes Labs.

Adblocker promises to block ads, injects them instead

Researchers at Imperva uncovered a new ad injection campaign based on an adblocker named AllBlock. The AllBlock extension was available at the time of writing for Chrome and Opera in the respective web stores.

While disguising your adware as an adblocker may seem counterintuitive, it is actually a smart thing to do. But let’s have a look at what they did and how, first.

AllBlock

As we mentioned, AllBlock is advertised as an adblocker on its site. It promises to block advertisements on YouTube and Facebook, among others.

AllBlock website

When you’re installing the Chrome extension, the permissions it asks for make sense for an adblocker.

extension permissions

Even though that may seem like a lot to allow, and it is almost carte blanche, any adblocker that you expect to work effectively will need a full set of permissions, including at least the ability to “read and change all your data on all websites.”

What Imperva found is that the extension replaces all the URLs on the site a user is visiting with URLs that lead to an affiliate. This ad injection technique means that when the user clicks on any of the modified links on the webpage, they will be redirected to an affiliate link. Via this affiliate fraud, the attacker earns money when specific actions, like a registration or a product sale, take place.

Ad injection

Ad injection is the name for a set of techniques by which ads are inserted in webpages without getting the permission of site owners or paying them. Some of the most commonly seen tactics are:

  • Replacing existing ads with ads provided by the attacker
  • Adding ads to sites that normally have none
  • Adding or changing affiliate codes so the attacker gets paid instead of the affiliate that had permission to advertise on a site

To pull this off, malicious browser extensions, malware, and stored cross-site scripting (XSS) are the most commonly found techniques.

In this case it was a malicious extension that used some interesting methods.

To make the extension look legitimate, the developers actually implemented ad blocking functionality. Further, the code was not obfuscated and nothing immediately screams malware.

All the URLs that are present in a visited website are sent to a remote server, which replies with a set of URLs to replace them with. The reading and replacing of the URLs is done by the extension, which was given the permissions to do so.

To avoid detection, the threat actor has taken a few more measures besides looking harmless: the malicious JavaScript file detects debugging, clears the debug console every 100 ms, and excludes major search engines (with a special focus on Russian ones).

Part of the code in the extension’s bg.js script makes an HTTP request to allblock.net/api/stat/?id=nfofcmcpljmjdningbllljenopcmdhjf and receives a JSON response with two Base64-encoded properties, “data” and “urls”. The “data” part is the code that gets injected into every site the affected browser opens, and the “urls” part looks like this:

{"youtubeInput":["*://*.youtube.com/get_video_info*adunit*","*://*.g.doubleclick.net/pagead*","*://*.youtube.com/pagead*","*://*.googlesyndication.com/pagead*","*://*.google.com/pagead*","*://*.youtube.com/youtube*ad_break*"],"vkInput":["https://vk.com/al_video.php?act=ad_event*","https://vk.com/al_video.php?act=ads_stat*","https://vk.com/ads_rotate*","https://ad.mail.ru/*","https://ads.adfox.ru/*"]}

Conclusion

The extension the Imperva team found actually blocks ads, but it also runs a background script that injects a snippet of JavaScript code into every new tab that a user opens in the affected browser. The end goal is to make money by replacing legitimate URLs on the website with URLs of their own. These URLs include affiliate codes, so the attackers get paid if you click on one of those links, and they benefit from any sales that may come out of these clicks.

Ad blockers that are able to block advertisements on popular social media like YouTube and Facebook may seem like the holy grail to some users. To those that are interested in ad blocking and haven’t found the right program yet, please read “How to block ads like a pro.”

And as we have mentioned before, it makes sense to give ad blockers the permissions they need to do their job. So we feel the need to emphasize that you should only give those permissions to extensions that you actually trust, not just because you think “it” needs them.

Ad blocker campaigns

The Imperva team writes on their blog that they believe that there is a larger campaign taking place that may utilize different delivery methods and more extensions.

In our own research at Malwarebytes, we have found a series of adblockers that were pushed out through websites showing fake alerts like the one below.

One of the fake alerts used to push these adblockers
If you keep stumbling over alerts like these, you might even welcome the offer of an adblocker, right?

We could not find anything wrong with these extensions, and we also found that they were all using the publicly available AdGuard blocklist. So we didn’t really follow up on them because, like the one described above, they looked legitimate. The only thing that really made them look suspicious was that they were promoted through these “fake alert” sites.

For now it is hard to tell whether we have been tracking the same or similar campaigns. Since I haven’t seen the bg.js script before, they may be completely different, but I will try to contact the Imperva team and compare notes. If anything interesting comes out of that, we will let you know.

Stay safe, everyone!

The post Adblocker promises to block ads, injects them instead appeared first on Malwarebytes Labs.

Patch now! Microsoft fixes 71 Windows vulnerabilities in October Patch Tuesday

Yesterday we told you about Apple’s latest patches. Today we turn to Microsoft and its Patch Tuesday.

Microsoft tends to provide a lot of information around its patches, so there’s a lot to digest and piece together to give you an overview of the most important ones. In total, Microsoft has fixed 71 Windows vulnerabilities, or 81 if you include those for Microsoft Edge.

One of the vulnerabilities immediately jumps out, since it was used in the wild as part of the MysterySnail attacks, attributed by the researchers that discovered it to a Chinese-speaking APT group called IronHusky.

MysterySnail

Earlier this month, researchers discovered that a zero-day exploit was used in widespread espionage campaigns against IT companies, military contractors, and diplomatic entities. The payload of these MysterySnail attacks is a Remote Access Trojan (RAT). The actively exploited vulnerability allows malware or an attacker to gain elevated privileges on a Windows device. So far, the MysterySnail RAT has only been spotted on Windows Servers, but the vulnerability can also be used against non-server Windows Operating Systems.

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-40449, a Win32k Elevation of Privilege (EoP) vulnerability, which means the vulnerability allows a user to raise their permissions.

PrintNightmare

I scared you by mentioning PrintNightmare, right? Well, that may not be completely in vain. The same researchers that discovered the PrintNightmare vulnerability have found yet another vulnerability in Microsoft’s Windows Print Spooler. This one is listed as CVE-2021-36970, a Windows Print Spooler spoofing vulnerability. The exploitation is known to be easy, and the attack may be initiated remotely. No form of authentication is needed for a successful exploitation, but it does require some action by the intended target. We may be hearing more about this one.

Exchange again

An Exchange bug that gets a CVSS score of 9.0 out of 10 is enough to make my hair stand on end. Listed as CVE-2021-26427, this one is a Microsoft Exchange Server Remote Code Execution (RCE) vulnerability. The exploitation appears to be easy and the attack can be initiated remotely. A single authentication is required for exploitation, so the attacker will need to have some kind of access to exploit this one, which may be why Microsoft listed it as “exploitation less likely.” Exchange Servers are an attractive target and so we have seen a lot of attacks. One worrying flaw reveals users’ passwords and might provide attackers with the credentials they need to use this vulnerability.

Critical Microsoft Word vulnerability

One of the three vulnerabilities classified as critical is an RCE vulnerability in Word, listed as CVE-2021-40486. The vulnerability could allow a remote attacker to trick a victim into opening a specially crafted file, executing arbitrary code on their system.

The other two critical vulnerabilities are RCE flaws in Windows Hyper-V, the virtualization component built into Windows. These vulnerabilities are listed as CVE-2021-38672 and CVE-2021-40461.

Windows DNS Server RCE

The last one is only of interest if you are running a server that is configured to act as a DNS server. Listed as CVE-2021-40469, this is a Windows DNS Server Remote Code Execution vulnerability. The exploitation is known to be easy. The attack may be launched remotely, but exploitation requires successful authentication at an elevated level. The vulnerability was disclosed in the form of a Proof-of-Concept (PoC). While it may not be up to you to maintain or patch a DNS server, it’s good to know that this vulnerability exists in case we see weird connection issues as a result of a DNS hijack or denial-of-service.

While many details are still unknown, we have tried to list the vulnerabilities we expect to surface as real-world problems if they are not patched as soon as possible.

Stay safe, everyone!

The post Patch now! Microsoft fixes 71 Windows vulnerabilities in October Patch Tuesday appeared first on Malwarebytes Labs.

“Free Steam game” scams on TikTok are Among Us

TikTok has long since evolved beyond being thought of as “just” dance clips, also becoming a home for educational and informative content presented in a fun and casual way. There are accounts themed around pretty much any interest you can think of, and one of the biggest is gaming.

It’s not all entirely innocent, however. Sometimes we observe new twists on old scams, or slick videos designed to obscure some sleight of hand. Shall we take a look?

Free Steam game accounts: be careful what you wish for

Games are expensive. Even without the costs of downloadable content (DLC), you also have things like season passes, in-game currency frequently purchased with real money, lootboxes, and more. FOMO (fear of missing out) is a big driver for timed exclusives and must-have items, and all of these constant pressures drive gamers to want a bit of a discount. Where it tends to go wrong is with the promise of everything being free. If it’s too good to be true, and so on.

What we sometimes see on TikTok is gaming-themed accounts making many of the same promises you see on other platforms. Free games, free items, free stuff. Everything is definitely free with no strings attached. Would RandomAccountGuy3856 lie to you?

The answer is, of course, “Yes, RandomAccountGuy3856 absolutely would lie to you”.

Taking a walk through free game town

This is a typical free game account which you’ll find on TikTok:

A typical “free games” account on TikTok

As you can see, it’s pretty minimal and is simply a stack of the same video uploaded repeatedly. The site claims to offer free games and keys.

The account promotes a site claiming to offer free games and keys

The site itself appears to have recently been taken offline. Thanks to the magic of cached content, we can still piece things together and figure out the process.

 The front page splash at the start of last month looked as follows:

The site’s front page, offering regular and “hacked” versions of Among Us

They’re claiming to offer up free versions of the incredibly popular Among Us game. However, they also claim to have special hacked versions up for grabs. These versions let the player cheat in various ways. There’s also the reassurance you won’t get banned, which is used as further encouragement to download the altered editions.

This process involves selecting which edition you want, and then hitting the download button. They claim to offer Android, PC, and iOS flavours.

No matter what button you hit, you see the below pop-up. You may well be familiar with these from years of surfing:

The “human verification” pop-up

The text reads as follows:

Before downloading, we need to make sure you are a real user, and not an automated bot. This helps us keep making these kind of hacks and keep them on Google for a long time

Hitting the verify now button opens a new tab, with a new destination. Unfortunately, it’s not a very good one. As our detection page states, we have that particular URL blocked because it is associated with malvertising.

Running down the timer on TikTok fakeouts

These are old tricks, essentially given a fresh lick of paint and an enticing video to go with it. There’s just something a bit more personal about having what looks like real people telling you genuine-sounding things in a short video clip. It all feels very informal and casual, and that’s exactly the kind of ambience a scammer would look to hit you with alongside their dubious websites and offers.

Even when accounts like the above aren’t purged by TikTok, the sites they link to are often here today, gone tomorrow. Everything is purely geared towards driving as much ad/malvertising traffic as possible.

As tempting as the promise of free gaming is, please be on your guard. There are risky games, and then there are risky games.

The post “Free Steam game” scams on TikTok are Among Us appeared first on Malwarebytes Labs.

The joy of phishing your employees

Many companies set up phishing test programs for their employees, often as part of a compliance requirement involving ongoing employee education on security topics. The aim of these programs is to train employees on how to spot a malicious link, not click it, and forward it on to the appropriate responder, but most of these programs do not meaningfully achieve this. Let’s take a look at some common pitfalls and how to step around them.

You’re annoying your employees

Click-through rates on a real phish average between 10 and 33 percent among untrained users, depending on which security vendor you ask. But test phishes are sent to everyone, indiscriminately, taking time and energy away from those who are more or less doing the right thing already.

And while an organizational baseline is useful, and compliance can mandate a certain degree of repetition, repeatedly testing all employees without any sort of targeting can create a certain degree of security blindness on their part. There’s also often a lack of real-world tactics on the part of the tester due to a need to hit large quantities of people at the same time.

A better solution is to conduct infrequent, all-hands tests as a baseline, then take a look at your failures. Do you have clusters, and where are they? What job function is most common in the failures, and how does that map to overall security risk? A repeated failure in Marketing has a different impact than one in Finance.

With a good grasp of where your risk is, you can start focusing on problem areas of the organization with challenging, more frequent tests that use real-world tactics. While an all hands phish might be an untargeted credential harvester, a high-risk phish test might look more like a malicious invoice sent by a fake vendor to a select group in the Finance department.

You’re not including execs

Executives are frequently not included in enterprise security testing, most likely due to difficulty getting buy-in on a topic that some C-suites view as esoteric. They are also the population most likely to engage in off-channel communications like SMS or bring your own device (BYOD) mobile mail using unsupported clients. However, executives—if successfully phished—can cause more significant dollar losses to the organization than anyone else. While a single compromised credential pair at the ground level is typically a recoverable incident, business email compromise (BEC) aimed at an executive has caused up to $121 million in single-incident losses.

Successful inclusion of executives in a phishing training program would involve spearphishing, rather than a canned phish. The key indicator of a well-formed phish is mirroring the tactics found in the wild, so your high-value targets require a high effort pitch. Make sure that your phish test vendor includes a markup editor to construct custom phishes from scratch so that you can alternate between a canned mass mailer and a laser-focused spearphish, as needed.

You’re not changing your approach

Just as security staff can get alert fatigue and start missing important alarms from their tooling, non-technical staff can get test fatigue and start associating threats with one particular phish format that you use too much. Best practice should include frequent rotation of pitch type and threat type; malicious link, malicious attachment, and pure scam threats present differently and have their own threat ecosystems that warrant their own test formats.

If you’ve been using your test failures to highlight problem areas, that’s a great place to start varying how you conduct your tests. A failure cluster in a Finance department would respond fairly well to attachment-based phish tests, with pitch text focused around payment themed keywords. Given that impact of a breach to that department would also be high risk, more frequent and more difficult tests give better outcomes over the long term. The key point is that phish tests are sensors for organizational risk and should be tuned for accuracy frequently.

You’re not using the data

Okay, so you’ve checked that compliance tick box, created a test schedule that ratchets in to your problem areas over time, and you’re running custom spearphishes against your execs. You can call it a day, right?

Hitting these marks can get you a large security advantage over other companies, but to really realize the full advantages of a security training program, you need to start sifting through the data that the program generates.

A great place to start is looking at where your failures sit. Are they evenly distributed, or do they cluster in particular departments? Are they individual contributors, or management? More importantly, which types of phishes do they click on most?

All of these questions can drive identification of high risk areas of the company, as well as prioritize which security controls should be implemented first. Rather than a top-down command approach, looking at the impact of a simulated attack can provide a clear view of where to start with a broader security improvement program.
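As a sketch of what that analysis can look like: assuming your phish test vendor can export per-user results with columns for department, role, phish type, and whether the user clicked (all hypothetical names here), a few lines of pandas will surface the clusters:

```python
import pandas as pd

# Hypothetical export from your phish test vendor
# assumed columns: department, role, phish_type, clicked (0 or 1)
results = pd.read_csv("phish_test_results.csv")

# Click rate by department and phish type: where are the clusters?
by_dept = (results.groupby(["department", "phish_type"])["clicked"]
                  .mean()
                  .sort_values(ascending=False))
print(by_dept.head(10))

# Individual contributors vs. management
print(results.groupby("role")["clicked"].mean())
```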

If it’s not fun, you’re doing it wrong

Last and most importantly, this should be fun. The more creativity and variety injected into the process by security staff, the more effective the user awareness will be. And that doesn’t just extend to phish variety—user reports can and should be acknowledged at the organizational level.

Users can submit phish pitches, or preferred organization targets. Some phish test vendors even include stats broken out by department or manager that lend themselves very well towards friendly competition. Engaging employees beyond “Don’t do that” not only creates better security outcomes, but it tends to create better communication outcomes throughout a company.

Most corporate phishing programs do not meet their stated goals. The reasons for this can include overweighting compliance goals to the exclusion of others, complacency in test format, vendor choices that make it tough to analyze data from the program, and failure to give dedicated resources to testing. These are largely avoidable if an organization shifts the focus of its testing program from a checkbox to risk analysis.

Overall, folding phish testing into a broader look at cyber risk can provide hard data that can drive security controls and increase organizational buy-in.

The post The joy of phishing your employees appeared first on Malwarebytes Labs.

ExpressVPN made a choice, and so did I: Lock and Code S02E19

On September 14, the US Department of Justice announced that it had resolved an earlier investigation into an international cyber hacking campaign coming from the United Arab Emirates that has reportedly impacted hundreds of journalists, activists, and human rights defenders in Yemen, Iran, Turkey, and Qatar. The campaign, called Project Raven, has been in clandestine operation for years, and it has relied increasingly on a computer system called “Karma.”

But in a bizarre twist, this tale of surveillance abroad tapered inwards into a tale of privacy at home, as one of the three men named by the Department of Justice for violating several US laws—and helping build Karma itself—is Daniel Gericke, the chief information officer at ExpressVPN.

Today, on Lock and Code, host David Ruiz explores how these developments impacted his personal decision in a VPN service. For years, Ruiz had been a paying customer of the VPN, but a deep interest in surveillance and a background in anti-surveillance advocacy forced him to reconsider.

Tune in to hear the depth of the unveiled surveillance campaign, who it affected, for how long, and what role, specifically, Gericke had in it, on this week’s Lock and Code podcast, by Malwarebytes Labs.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

The post ExpressVPN made a choice, and so did I: Lock and Code S02E19 appeared first on Malwarebytes Labs.

Update now! Apple patches another privilege escalation bug in iOS and iPadOS

Apple has released a security update for iOS and iPadOS that addresses a critical vulnerability reportedly being exploited in the wild.

The update has been made available for iPhone 6s and later, iPad Pro (all models), iPad Air 2 and later, iPad 5th generation and later, iPad mini 4 and later, and iPod touch (7th generation).

The vulnerability

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services). This one is listed as CVE-2021-30883 and allows an application to execute arbitrary code with kernel privileges. Kernel privileges can be achieved by using a memory corruption issue in the “IOMobileFrameBuffer” component.

Kernel privileges are a serious matter as they offer an attacker more than administrator privileges. In kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system.

Researchers have already found that this vulnerability is exploitable from the browser, which makes it extra worrying.

Watering holes are used as a highly targeted attack strategy. The attacker infects a website that they know the intended victim(s) visit regularly. Depending on the nature of the infection, the attacker can single out their intended target(s) or just infect anyone that visits the site unprotected.

IOMobileFrameBuffer

IOMobileFramebuffer is a kernel extension for managing the screen framebuffer. An earlier vulnerability in this extension, listed as CVE-2021-30807, was tied to the Pegasus spyware. That vulnerability also allowed an application to execute arbitrary code with kernel privileges. Coincidence? Or did someone take the entire IOMobileFramebuffer extension apart and save up the vulnerabilities for a rainy day?

Another iPhone exploit called FORCEDENTRY was found to be used against Bahraini activists to launch the Pegasus spyware. Researchers at Citizen Lab disclosed this vulnerability and code to Apple, and it was listed as CVE-2021-30860.

Undisclosed

As is usual for Apple, both the researcher that found the vulnerability and the circumstances under which the vulnerability was used in the wild are kept secret. Apple didn’t respond to a query about whether the previously found bug was being exploited by NSO Group’s Pegasus surveillance software.

Zero-days for days

Over the last few months, Apple has had to close quite a few zero-days in iOS, iPadOS, and macOS. Seventeen, if I have counted correctly.

  • CVE-2021-1782 – iOS-kernel: A malicious application may be able to elevate privileges. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1870 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1871 – WebKit: A remote attacker may be able to cause arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-1879 – WebKit: Processing maliciously crafted web content may lead to universal cross site scripting. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30657 – Gatekeeper: A malicious application may bypass Gatekeeper checks. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30661 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30663 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution.
  • CVE-2021-30665 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30666 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30713 – TCC: A malicious application may be able to bypass Privacy preferences. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30761 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30762 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30807 – IOMobileFrameBuffer: An application may be able to execute arbitrary code with kernel privileges. Apple is aware of a report that this issue may have been actively exploited. Tied to Pegasus (see above).
  • CVE-2021-30858 – WebKit: Processing maliciously crafted web content may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited.
  • CVE-2021-30860 – CoreGraphics: Processing a maliciously crafted PDF may lead to arbitrary code execution. Apple is aware of a report that this issue may have been actively exploited. This is FORCEDENTRY (see above).
  • CVE-2021-30869 – XNU: A malicious application may be able to execute arbitrary code with kernel privileges. Reportedly being actively exploited by attackers in conjunction with a previously known WebKit vulnerability.

And last but not least, the latest addition—CVE-2021-30883—which means that of the 17 zero-days that were fixed over the course of a handful of months, at least 16 were found to be actively exploited.

Update

Apple advises users to update to iOS 15.0.2 and iPadOS 15.0.2, which can be done through the automatic update function or iTunes.

Stay safe, everyone!

The post Update now! Apple patches another privilege escalation bug in iOS and iPadOS appeared first on Malwarebytes Labs.

Ransom Disclosure Act would mandate ransomware payment reporting

In an effort to better understand and clamp down on the ransomware economy and its related use of cryptocurrencies, US Senator and past presidential hopeful Elizabeth Warren and US House Representative Deborah Ross introduced a new bill last week that would require companies and organizations to report any paid ransomware demands to the Secretary of the Department of Homeland Security.

“Ransomware attacks are skyrocketing, yet we lack critical data to go after cybercriminals,” said Senator Warren in a prepared release. “My bill with Congresswoman Ross would set disclosure requirements when ransoms are paid and allow us to learn how much money cybercriminals are siphoning from American entities to finance criminal enterprises—and help us go after them.”

If passed, the “Ransom Disclosure Act” would require a broad set of companies, local governments, and nonprofits that actually pay off ransomware demands to report those payments to the government. Companies would need to report this information within 48 hours of paying a ransom.

Specifically, those affected by the bill would need to tell the Secretary of the Department of Homeland Security:

  • The date on which such ransom was demanded
  • The date on which such ransom was paid
  • The amount of such ransom demanded
  • The amount of such ransom paid

Companies would also need to disclose what currency they paid the ransom in, including whether the payment was made with any cryptocurrency. Companies would also have to offer “any known information regarding the identity of the actor demanding such ransom.”

The bill’s focus on cryptocurrencies acknowledges the technology’s core role in ransomware today; likely not a single big ransomware payment has been made in years in anything other than crypto. But this reliance on cryptocurrency seems to finally be catching up with ransomware criminals: while it provides somewhat decent pseudonymity, it also provides an incredible trail of records. And international police are now excelling at following those records.

In June, the US Department of Justice announced that, after following a series of cryptocurrency transactions across cyberspace, it eventually retrieved much of the ransomware payment that Colonial Pipeline paid to recover from its own ransomware attack in May. And earlier in October, Europol said it provided “crypto-tracing support” when the FBI, the French National Gendarmerie, and the Ukrainian National Police seized $375,000 in cash and another $1.3 million in cryptocurrencies during related arrests against “two prolific ransomware operators known for their extortionate ransom demands (between €5 to €70 million).”

This work, while encouraging in the fight against ransomware, largely happens in the dark, though, as ransomware payments made by companies are still kept considerably private. The Ransom Disclosure Act, then, seeks to shine a light on that darkness to better aid the fight. Said US House Representative Ross:

“Unfortunately, because victims are not required to report attacks or payments to federal authorities, we lack the critical data necessary to understand these cybercriminal enterprises and counter these intrusions.”

The Ransom Disclosure Act would also require the Secretary of Homeland Security to develop penalties for non-compliance and, one year after the passage of the bill, to publish a database on a public website that includes ransom payments made in the year prior. That database must be accessible to the public, and it must include the “total dollar amount of ransoms paid” by companies, with the companies’ identifying information removed. The information gleaned from the incoming reports must also be packaged into a study by the Secretary of Homeland Security that specifically explores “the extent to which cryptocurrency has facilitated the kinds of attacks that resulted in the payment of ransoms by covered entities.” The Secretary of Homeland Security must then present the findings of that study to Congress.

Finally, according to the bill, individuals who make ransomware payments after personally being hit with ransomware must also have a way to voluntarily report their information to the government if they so choose.

The post Ransom Disclosure Act would mandate ransomware payment reporting appeared first on Malwarebytes Labs.

Inside Apple: How macOS attacks are evolving

The start of fall 2021 saw the fourth Objective by the Sea (OBTS) security conference, which is the only security conference to focus exclusively on Apple’s ecosystem. As such, it draws many of the top minds in the field. This year, those minds, having been starved of a good security conference for so long, were primed and ready to share all kinds of good information.

Conferences like this are important for understanding how attackers and their methods are evolving. Like all operating systems, macOS presents a moving target to attackers as it acquires new features and new forms of protection over time.

OBTS was a great opportunity to see how attacks against macOS are evolving. Here’s what I learned.

Transparency, Consent, and Control bypasses

Transparency, Consent, and Control (TCC) is a system for requiring user consent to access certain data, via prompts confirming that the user is okay with an app accessing that data. For example, if an app wants to access something like your contacts or files in your Documents folder on a modern version of macOS, you will be asked to allow it before the app can see that data.

A TCC prompt asking the user to allow access to the Downloads folder

In recent years, Apple has been ratcheting down the power of the root user. Once upon a time, root was like God—it was the one and only user that could do everything on the system. It could create or destroy, and could see all. This hasn’t been the case for years, with things like System Integrity Protection (SIP) and the read-only signed system volume preventing even the root user from changing files across a wide swath of the hard drive.

TCC has been making inroads in further reducing the power of root over users’ data. If an app has root access, it still cannot even see—much less modify—a lot of the data in your user folder without your explicit consent.

This can cause some problems. For example, antivirus software such as Malwarebytes needs to be able to see everything it can in order to best protect you. But even though some Malwarebytes processes are running with root permissions, they still can’t see some files. Thus, apps like this often have to require the user to give a special permission called Full Disk Access (FDA). Without FDA, Malwarebytes and other security apps can’t fully protect you, but only you can give that access.

This is generally a good thing, as it puts you in control of access to your data. Malware often wants access to your sensitive data, either to steal it or to encrypt it and demand a ransom. TCC means that malware can’t automatically gain access to your data if it gets onto your system, and may be a part of the reason why we just don’t see ransomware on macOS.

TCC is a bit of a pain for us, and a common point of difficulty for users of our software, but it does mean that we can’t get access to some of your most sensitive files without your knowledge. This is assuming, of course, that you understood the FDA prompts and what you were agreeing to, which is debatable. Apple’s current process for assigning FDA doesn’t make that clear, and leaves it up to the app asking for FDA to explain the consequences. This makes tricking a user into giving access to something they shouldn’t pretty easy.

However, social engineering isn’t the only danger. Many researchers presenting at this year’s conference talked about bugs that allowed them to get around the Transparency, Consent, and Control (TCC) system in macOS, without getting user consent.

Andy Grant (@andywgrant) presented a vulnerability in which a remote attacker with root permissions can grant a malicious process whatever TCC permissions are desired. The process involves creating a new user on the system, then using that user to grant the permissions.

Csaba Fitzl (@theevilbit) gave a talk on a “Mount(ain) of Bugs,” in which he discussed another vulnerability involving mount points for disk image files. Normally, when you connect an external drive or double-click a disk image file, the volume is “mounted” (in other words, made available for access) within the /Volumes directory. For example, if you connect a drive named “backup”, it becomes accessible on the system at /Volumes/backup. This is the disk’s “mount point.”

Title slide of Csaba Fitzl’s “Mount(ain) of Bugs” talk

Csaba was able to create a disk image file containing a custom TCC.db file. This file is a database that controls the TCC permissions that the user has granted to apps. Normally, the TCC.db file is readable, but cannot be modified by anything other than the system. However, by mounting this disk image while also setting the mount point to the path of the folder containing the TCC.db file, he was able to trick the system into accepting his arbitrary TCC.db file as if it were the real one, allowing him to change TCC permissions however he desired.
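For the curious, TCC.db is just a SQLite database, and you can inspect the per-user copy on your own machine to see which apps hold which permissions. A read-only sketch; the path and column names match recent macOS versions and may differ on older ones, and (somewhat ironically) reading the database may itself require Full Disk Access:

```python
import sqlite3
from pathlib import Path

# Per-user TCC database on recent macOS versions
db_path = Path.home() / "Library/Application Support/com.apple.TCC/TCC.db"

conn = sqlite3.connect(str(db_path))
# On Big Sur and later the column is auth_value; older systems used "allowed"
for service, client, auth_value in conn.execute(
        "SELECT service, client, auth_value FROM access"):
    print(f"{service:45} {client:40} auth_value={auth_value}")
conn.close()
```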

There were other TCC bypasses mentioned as well, but perhaps the most disturbing is the fact that there’s a fairly significant amount of highly sensitive data that is not protected by TCC at all. Any malware can collect that data without difficulty.

What is this data, you ask? One example is the .ssh folder in the user’s home folder. SSH is a program used for securely gaining command-line access to a remote Mac, Linux, or other Unix system, and the .ssh folder is the location where the keys used to authenticate the connection are stored. This makes the data in that folder a high-value target for an attacker looking to move laterally within an organization.

There are other similar folders in the same location that can contain credentials for other services, such as AWS or Azure, which are similarly wide open. Also unprotected are the folders where data is stored for any browser other than Safari, which can include credentials if you use a browser’s built-in password manager.
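The gap is easy to demonstrate: a script with no special permissions at all can walk these folders without triggering a single consent prompt. A harmless sketch that only lists file names (which folders exist will vary by machine):

```python
from pathlib import Path

# None of these reads trigger a TCC prompt, unlike e.g. ~/Documents
for folder in (".ssh", ".aws", ".azure"):
    path = Path.home() / folder
    if path.is_dir():
        print(f"--- {path} ---")
        for entry in path.iterdir():
            print(entry.name)
```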

Now, admittedly, there could be some technical challenges to protecting some or all of this data under the umbrella of TCC. However, the average IT admin is probably more concerned about SSH keys or other credentials being harvested than in an attacker being able to peek inside your Downloads folder.

Attackers are doing interesting things with installers

Installers are, of course, important for malware to get installed on a system. Often, users must be tricked into opening something in order to infect their machine. There are a variety of techniques attackers can use that were discussed.

One common method for doing this is to use Apple installer packages (.pkg files), but this is not particularly stealthy. Knowledgeable and cautious folks may choose to examine the installer package, as well as the preinstall and postinstall scripts (designed to run exactly when you’d expect by the names), to make sure nothing untoward is going on.

However, citing an example used in the recent Silver Sparrow malware, Tony Lambert (@ForensicITGuy) discussed a sneaky method for getting malware installed: the oft-overlooked Distribution file.

The Distribution file is found inside Apple installer packages, and is meant to convey information and options for the installer. However, JavaScript code can also be inserted in this file, to be run at the beginning of the installation, meant to be used to determine if the system meets the requirements for the software being installed.

In the case of Silver Sparrow, however, the installer used this script to download and install the malware covertly. If you clicked Continue in the dialog shown below, you’d be infected even if you then opted not to continue with the installation.

An Apple installer asking the user to allow a program to run to determine if the software can be installed.
Click Continue to install malware
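One way to check an installer package for this trick before running it is to expand it with Apple’s pkgutil tool and look for script content in the Distribution file. A rough triage sketch in Python (the package name is a placeholder):

```python
import subprocess
import tempfile
from pathlib import Path

pkg = "suspicious.pkg"  # hypothetical sample

with tempfile.TemporaryDirectory() as tmp:
    expanded = Path(tmp) / "expanded"
    # pkgutil --expand unpacks the package without running anything in it
    subprocess.run(["pkgutil", "--expand", pkg, str(expanded)], check=True)

    dist = expanded / "Distribution"
    if dist.exists():
        text = dist.read_text(errors="replace")
        if "<script" in text:
            print("Distribution contains script code -- review before installing:")
            print(text[:500])
```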

Another interesting trick Tony discussed was the use of payload-free installers. These are installers that actually don’t contain any files to be installed, and are really just a wrapper for a script that does all the installation (likely via the preinstall script, but also potentially via Distribution).

Normal installer scripts will leave behind a “receipt,” which is a file containing a record of when the installation happened and what was installed where. However, installers that lack an official payload, and that download everything via scripts, do not leave behind such a receipt. This means that an IT admin or security researcher would be missing key information that could reveal when and where malware had been installed.

Chris Ross (@xorrior) discussed some of these same techniques, but also delved into installer plugins. These plugins are used within installer packages to create custom “panes” in the installer. (Most installers go through a specific series of steps prescribed by Apple, but some developers add additional steps via custom code.)

These installer plugins are written in Objective-C, rather than scripting languages, and therefore can be more powerful. Best of all, these plugins are very infrequently used, and thus are likely to be overlooked by many security researchers. Yet Chris was able to demonstrate techniques that could be used by such a plugin to drop a malicious payload on the system.

Yet another issue was presented in Cedric Owens’ (@cedowens) talk. Although not related to an installer package (.pkg file), a vulnerability in macOS (CVE-2021-30657) could allow a Mac app to entirely bypass Gatekeeper, which is the core of many of Apple’s security features.

On macOS, any time you open an app downloaded from the Internet, you should at a minimum see a warning telling you that you’re opening an app (in case it was something masquerading as a Word document, or something similar). If there’s anything wrong with the app, Gatekeeper can go one step further and prevent you from opening it at all.

By constructing an app that was missing some of the specific components usually considered essential, an attacker could create an app that was fully functional, but that would not trigger any warnings when launched. (Some variants of the Shlayer adware have been seen using this technique.)

The post Inside Apple: How macOS attacks are evolving appeared first on Malwarebytes Labs.