IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Millions of Chrome users quietly added to Google’s FLoC pilot

Last month, Google began a test pilot of its Federated Learning of Cohorts—or FLoC—program, which the company has advertised as the newest, privacy-preserving alternative in Google Chrome to the infamous third-party cookie.

Sounds promising, right? Well, about that.

Despite Google’s rhetoric about maintaining user privacy, its FLoC trial leaves much to be desired. Google Chrome users had no choice in whether they were included in the FLoC trial, they received no individualized notification, and, currently, they have no option to specifically opt out; instead, they have to block all third-party cookies in their Google Chrome browsers to leave the trial.

The Electronic Frontier Foundation (EFF), which analyzed Google’s published materials and Chromium’s source code to better understand FLoC, lambasted the pilot program and the technology behind it.

“EFF has already written that FLoC is a terrible idea,” the digital rights organization said. “Google’s launch of this trial—without notice to the individuals who will be part of the test, much less their consent—is a concrete breach of user trust in service of a technology that should not exist.”

What is FLoC?

Labored acronyms aside, FLoC is part of Google’s broader plan to develop its idea of a more private web, as the search giant struggles with the death of the most important digital advertising tool in the history of the Internet—the third-party cookie.

We should be clear at the outset here: first-party cookies help the Internet function. They knit together different visits to pages on the same website and help sites remember useful information such as your settings, what’s in your shopping cart, and—most importantly—whether you are logged in or not.

Third-party cookies can also benefit Internet users, but for years, this technology primarily served as a sort of “tree of life” for the digital advertising economy, allowing advertisers to knit together web page visits from many different websites.

Implanted on millions of popular websites, tracking code that relies on third-party cookies has enabled the profiling of nearly every single Internet user by their age, gender, location, shopping interests, political affiliations, and religious beliefs. Third-party cookies also ushered in the era of “Real-Time Bidding,” in which businesses compete for the opportunity to deliver you ads based on those user profiles. And as online publishers like newspapers struggled to maintain in-print advertising revenue in their decade-long transition to digital, third-party cookies provided a sometimes necessary bargain for those publishers: Sell ad placements not to individual companies, but scale ad revenue rapidly by harnessing the results of mass user profiling.

Without the third-party cookie, much of this activity would either have been delayed or limited. So, too, would the money being made by the developers of those third-party cookies, which include many digital advertising companies and, as it just so happens, one notable Silicon Valley giant—Google.

The obvious question about FLoC technology, then, is: Why would Google create an alternative to the technology that helps it generate billions of dollars in ad revenue every year?

Because the third-party cookie is dying. As users increasingly protect their online privacy, they continue to install browser plug-ins that block the type of online tracking enabled by third-party cookies. Further, several browsers—including Apple’s Safari and Mozilla’s Firefox—began blocking third-party cookies by default years ago.

If anything, FLoC is Google’s answer to a future that we all know is coming, in which the third-party cookie has lost its power.

Alright but what actually is FLoC?

FLoC differs from third-party cookies primarily in that it creates profiles of groups of users rather than of individuals. If FLoC becomes the norm, Google Chrome users will have their activity tracked by Chrome itself. Based on that browsing activity—including which sites are visited and which searches are made—Chrome will group users into “cohorts.” When you visit a website, it will be able to ask your browser which cohorts you belong to and deliver ads that advertisers have targeted at those cohorts.
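
According to EFF’s technical analysis, the trial version of FLoC derives each cohort ID from a SimHash of the domains a user has visited, so that similar browsing histories produce similar IDs. The toy Python sketch below illustrates that grouping idea only; it is our own reconstruction, not Chrome’s actual code, and every name in it is ours.

import hashlib

def simhash(domains, bits=16):
    # Each visited domain votes +1 or -1 on every bit position;
    # the sign of each total becomes one bit of the cohort ID.
    counts = [0] * bits
    for domain in domains:
        digest = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            counts[i] += 1 if (digest >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if counts[i] > 0)

# Users with mostly overlapping histories land in nearby cohorts.
user_a = {"news.example", "shoes.example", "recipes.example"}
user_b = {"news.example", "shoes.example", "travel.example"}
print(simhash(user_a), simhash(user_b))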

This means that the broader digital advertising ecosystem will remain, but the wheels that churn to move it forward will undergo some changes.

In its FLoC announcement, Google explained that it is trying to find a balance between what it believes is the usefulness and the harm of third-party cookies.

“Keeping in mind the importance of ‘and,’ FLoC is a new approach to interest-based advertising that both improves privacy and gives publishers a tool they need for viable advertising business models,” the company said.

According to Google, FLoC technology will not share your individual browsing history with anyone or any company, including Google. Instead, that activity will be grouped into the activity of thousands of users in a cohort. Further, Google said that its Chrome browser will not create cohorts based on “sensitive topics.” So, that hopefully means that there will not be cohorts for people searching for aid in suicide prevention, domestic abuse, drug addiction, or private medical diagnoses, for example.

According to EFF, though, Google’s FLoC technology includes multiple privacy problems, such as the ability to use FLoC findings in conjunction with browser fingerprinting to reveal information about users, and the potentially never-ending quest to gather user data as a first-stage requirement only to then “unlearn” that user data if it could lead to the creation of a sensitive cohort.

The technical concerns with FLoC are many, but they’re difficult for the average user to grasp. What is easy to understand, however, is how those average users are left behind in Google’s FLoC trial.

A quiet trial

For such a seismic shift in the Internet’s infrastructure, many might assume that Google would announce the FLoC trial with more safeguards.

That’s not what happened.

Google’s FLoC trial announcement gave Google Chrome users no option to opt out before the trial began. Instead, Google silently pushed FLoC technology to Chrome users in the US, Canada, Mexico, Australia, New Zealand, Brazil, India, Japan, Indonesia, and the Philippines. While Google described the trial as affecting a “small percentage of users,” according to EFF, that percentage could be as high as 5 percent.

That sounds small at first, but take into account that nearly-ancient estimates (circa 2016) put active Google Chrome users around 2 billion, meaning that the FLoC trial could affect up to 100 million people. That is an enormous number of people to subject to a data analysis experiment without their prior consent.

Google also said that users can opt out of the FLoC trial by disabling third-party cookies in Google Chrome. It’s good that such an option exists, but it’s unfortunate that users will need some basic understanding of FLoC and third-party cookies to remove themselves from a trial they might not even know about.

Compounding the issue is that turning off all third-party cookies could remove a good deal of functionality from a user’s web experience. That seems both imprecise and unfair.

Finally, the FLoC trial affects more than browser users—it affects websites, too. Remember those publishers that Google said it would like to help? According to Google, “websites that don’t opt out will be included in the FLoC calculation if Chrome detects that they load ads-related resources”. Some of them have already opposed being automatically included in a technology trial that will result in the profiling of their readers—even if that profiling is supposedly less privacy-invasive.

Julia Angwin, editor-in-chief of the investigative news outlet The Markup, said that her organization chose to opt out of FLoC.

“We @themarkup opted out of Google’s newfangled cookie-less tracking system (FLoC) so our readers will not be targeted with ads based on visiting our site,” Angwin wrote on Twitter. “Others who care about reader privacy might want to do the same.”

Angwin is just one of many journalists who have reported on FLoC technology, most of whom have authored FAQs, explainers, and detailed guides on just what it is Google is trying to do with its recent experiment.

All of those explainers, in fact, point to the biggest problem here: users are being included, with no say in the matter beforehand, in something they did not know about and that will affect how they are treated on the Internet.

A private web can incorporate many things. At the very least, it should include user respect.

The post Millions of Chrome users quietly added to Google’s FLoC pilot appeared first on Malwarebytes Labs.

Cryptomining containers caught coining cryptocurrency covertly

In traditional software development, programmers code an application in one computing environment before deploying it to a similar, but often slightly different, environment. This leads to bugs or errors that only show up when the software is deployed—exactly when you need them least. To solve this, modern developers often bundle an application together with all of the configuration files, libraries, and other pieces of software it needs to run into “containers” hosted in the cloud. This method, called containerization, allows them to create and deploy the entire computing environment, so there are no surprises.

Because a lot of projects rely on many of the same dependencies, developers can get their projects off to a flying start by building on top of pre-configured container images, which can be downloaded from online image repositories like Docker Hub. Those images may in turn be built on top of other images, and so on. So, for example, a developer building a plugin for the WordPress content management system might base their project on a container image containing WordPress; that container might be built on top of another image that includes a web server and database, which may in turn be built on a container image for a popular operating system, like Ubuntu.

Container images provide a simple way to distribute software at the expense of transparency.
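
One way to win back a little of that transparency is to inspect an image’s layer history before trusting it. Here is a minimal sketch using the docker-py client (pip install docker); it assumes a running Docker daemon, and the image name is just an example.

import docker  # docker-py; assumes a local Docker daemon is running

client = docker.from_env()
image = client.images.pull("wordpress", tag="latest")

# Each history entry is a build step inherited from this image
# or from one of the images it was built on top of.
for layer in image.history():
    step = (layer.get("CreatedBy") or "")[:80]
    print(layer.get("Id", "<missing>")[:19], step)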

Now imagine if a malicious actor could hide a crypto-jacker in a popular source image, one that might get used and reused thousands of times. They could end up with a huge number of systems mining cryptocurrency for them for free.

Docker images

Docker Hub is the world’s largest library and community for container images and therefore a very attractive target for attackers. Luckily, tampering with containers is not easy and Docker has a strong focus on “Trusted Delivery” which is supposed to guarantee an untampered app. But there is a lot more to be found in container images than just the app.

Uncovered by researchers

In the last several years, Unit 42 researchers have uncovered cloud-based crypto-jacking attacks in which miners are deployed using an image in Docker Hub. Containerization is almost always conducted in a cloud environment, because that contributes to its scalability—behind the scenes, popular web applications or services often rely on huge numbers of identical containers. This has some advantages for the crypto-jackers:

  • There are many instances for each target.
  • Container environments are hard to monitor, so miners can run undetected for a long time.

The researchers uncovered 30 images from 10 different Docker Hub accounts that accounted for over 20 million “pulls” (downloads).

The favorite cryptocurrency

The most popular cryptocurrency for attackers to mine is Monero. Monero is a cryptocurrency designed for privacy, promising:

“all the benefits of a decentralized cryptocurrency, without any of the typical privacy concessions”.

Contrary to what many people think, no cryptocurrency is truly anonymous, but there are other reasons why crypto-jackers favor Monero:

  • Many crypto-mining algorithms run significantly better on ASICs or GPUs, but Monero mining algorithms run better on CPUs, which matches what the crypto-jacker can expect to find in a containerized environment.
  • Besides Bitcoin, Monero is one of the better known cryptocurrencies and therefore is expected to hold its value.

Cryptocurrencies are pseudonymous at best, which means that users hide behind a pseudonym, like one or more wallet IDs. Their activities can be tracked—forever—so keeping their identity secret depends on how well they can separate their real identity from their wallet IDs.

XMRig

In most of the recorded attacks that mined Monero, the attackers used XMRig. XMRig is a popular Monero miner and is preferred by attackers because it is easy to use, efficient, and, most importantly, open source, which allows attackers to modify its code. In some images, the researchers found several different types of cryptominer, possibly to let the attacker choose the best one for the victim’s hardware.

The consequences

Having a crypto-miner in your container will not only lead to a higher bill or lower performance; there could be other consequences too, because many cloud service providers explicitly forbid mining for cryptocurrencies.

OVH terms for customers

Mitigation

Stopping crypto-jackers from taking advantage of popular images can be done at a few levels:

Image providers need to perform regular checks against tampering, container repositories should monitor for irregularities, and cloud service providers can check outgoing connections for mining-related activity.

Container users should avoid downloading containers from untrusted sources, scan images for malware at the build stage, check the integrity of images before and after copying them, and monitor runtime activity and network communication.
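
For that integrity check, comparing an image’s content-addressable ID before and after a copy is a reasonable starting point. A minimal sketch with docker-py, assuming a hypothetical image name:

import docker

client = docker.from_env()

def image_digest(reference):
    # The image ID is a sha256 over the image's content, so any
    # tampering with the image changes it.
    return client.images.get(reference).id

trusted = image_digest("registry.example/app:1.0")  # record at build time

# ...later, on another host, after copying the image:
if image_digest("registry.example/app:1.0") != trusted:
    raise RuntimeError("image digest changed; possible tampering")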

Since containers are just another way of arranging software stacks—including operating systems, applications, and libraries—all the usual precautions apply too, such as patching vulnerabilities promptly.

Stay safe, everyone!

The post Cryptomining containers caught coining cryptocurrency covertly appeared first on Malwarebytes Labs.

Zoom zero-day discovery makes calls safer, hackers $200,000 richer

Two Dutch white-hat security specialists entered the annual computer hacking contest Pwn2Own, managed to find a Remote Code Execution (RCE) flaw in Zoom, and are $200,000 USD better off than they were before.

Pwn2Own

Pwn2Own is a high-profile event organized by the Zero Day Initiative that challenges hackers to find serious new vulnerabilities in commonly used software and mobile devices. The event is held to demonstrate that popular software and devices come with flaws and vulnerabilities, and it offers a counterweight to the underground trade in vulnerabilities.

The “targets” volunteer their software and devices and offer a reward for successful attacks. Fans are treated to a hacking spectacle, successful hackers get kudos and no small amount of cash (in this case the reward was a whopping $200,000 USD), and vendors find nasty bugs that might otherwise be sold to criminals.

Pwn2Own 2021 runs from 6 April to 8 April. The full schedule for this year can be found on their site. This year the event has focused on software and devices used when working from home (WFH), including Microsoft Teams and Zoom, for obvious reasons.

The white hats

Daan Keuper and Thijs Alkemade, who are employed by the cybersecurity company Computest, combined three vulnerabilities to take over a remote system on the second day of the Pwn2Own event. The vulnerabilities require no interaction from the victim, who just needs to be on a Zoom call.

The vulnerability

In line with responsible disclosure, the full details of the method have been kept under wraps. What we do know is that it was a Remote Code Execution (RCE) flaw: a class of software security flaw that allows a malicious actor to execute code of their choosing on a remote machine over a LAN, WAN, or the Internet.

We also know that the method works on the Windows and Mac versions of the Zoom software, but does not affect the browser version. It is unclear whether the iOS and Android apps are vulnerable, since Keuper and Alkemade did not look into those.

The Pwn2Own organization has tweeted a GIF demonstrating the vulnerability in action. You can see the attacker open the calculator on the system running Zoom. Calc.exe is often the program hackers open on a remote system to show that they can run code on the affected machine.

A Zoom RCE being used to open the Windows calculator

Not patched yet

Understandably, Zoom has not yet had the time to issue a patch for the vulnerability. It has 90 days to do so before details of the flaw are released, but it is expected to act well before that period is over. The fact that the researchers came out with this vulnerability on the second day of the Pwn2Own event does not mean they figured it out in those two days; they will have put in months of research to find the different flaws and combine them into an RCE attack.

Security done right

This event, and the procedures and protocols that surround it, demonstrate very nicely how white-hat hackers work and what responsible disclosure means: keep the details to yourself until protection in the form of a patch is readily available for everyone involved (with the understanding that vendors will do their part and produce a patch quickly).

Mitigation

For now, the two hackers and Zoom are the only ones who know how the vulnerability works. As long as it stays that way, there is not much that Zoom users have to worry about. For those who worry anyway, the browser version is said to be safe from this vulnerability. For everyone else, keep your eyes peeled for the patch and update at your earliest convenience after it comes out.

Stay safe, everyone!

The post Zoom zero-day discovery makes calls safer, hackers $200,000 richer appeared first on Malwarebytes Labs.

Fake Trezor app steals more than $1 million worth of crypto coins

Several users of Trezor, a small hardware device that acts as a cryptocurrency wallet, have been duped by a fake app with the same name. The app was available on Google Play and Apple’s App Store and also claimed to be from SatoshiLabs, the creators of Trezor.

According to the Washington Post, the fake Trezor app, which was on the App Store for at least two weeks (from 22 January to 3 February), was downloaded 1,000 times before it was taken down. A fake Trezor app on the Play Store was downloaded by a similar number of users, but it’s not clear how long it was available on the platform.

Those victimized by the fake app couldn’t tell that they were downloading a dodgy app. Apart from the mimicked name and visual style of the Trezor brand, victims also reported seeing highly rated reviews—155 reviews giving it close to a 5-star rating—a common tactic of criminal app developers looking to gain the trust of users.

Phillipe Christodoulou, the owner of a dry-cleaning service, was one of the many Trezor users who downloaded the fake Trezor app from the App Store. He wanted to check his cryptocurrency balance on his phone and decided to search for and download an app instead of plugging the device into his computer via a USB connection. He lost 17.1 Bitcoins, worth $600,000 USD at the time. At the time of writing, that is worth more than $1 million USD.

A similar incident happened to James Fajcz, a reliability engineer, in December 2020. He bought Ethereum and Bitcoin worth $14,000 USD with his savings after seeing the price of digital tokens rising that same month. To ensure his investment was secure, he bought a Trezor, and then downloaded its purported app on his iPhone. When the app didn’t connect to his hardware wallet, he assumed that the app didn’t work. After buying a second round of cryptocurrencies weeks later, he checked the balance on his Trezor device using his computer, but it was empty. He realized he had been conned out of his digital currencies when he reached out to the Trezor community on Reddit.

Neither man knew that an official Trezor app doesn’t exist, and both blamed Apple for letting a fake app into the App Store, a space touted by Apple as “the most trusted marketplace for apps.”

In January 2021, the official Trezor account on Twitter warned Android users of a malicious app posing as one from Trezor and SatoshiLabs. This isn’t the first time that criminals have impersonated Trezor.

Both Google and Apple screen apps before they’re added to their app stores, but these incidents remind us that no form of screening is perfect. Successful criminals are good at finding and exploiting loopholes, or at using malicious techniques that are hard to screen for. We don’t know how this malicious app worked, but we can guess that it might simply transfer victims’ cryptocurrency to a wallet that happens to be owned by the app’s creator—which is very similar to what a legitimate app would do.

With cryptocurrencies continuing to gain popularity, expect more scammers to bank on this wave. In May last year, Harry Denley, a cybersecurity researcher specializing in cryptocurrencies, revealed that he discovered almost 75 malicious Google Chrome extensions designed to steal money from cryptocurrency wallets.

Last month, CoinDesk went on a crypto scam hunt and found fake crypto wallet apps in both popular app stores.

Cryptocurrency owners are advised to be more vigilant than ever about phishing campaigns in the form of apps and extensions. Trezor users, in particular, should be aware that while there is no app for their hardware wallet now, there will be an official one in the future. Watch the company’s official website and Twitter account for news on that and, until then, avoid downloading Trezor apps and heed the company’s advice: never share your seed until your device asks you to do so.

The post Fake Trezor app steals more than $1 million worth of crypto coins appeared first on Malwarebytes Labs.

SAP warns of malicious activity targeting unpatched systems

A timely warning to keep systems patched has appeared, via a jointly released report from Onapsis and SAP. The report details how threat actors are “targeting and potentially exploiting unprotected mission-critical SAP applications”. Some of the vulnerabilities used were weaponised fewer than 72 hours after patches were released. In some cases, a newly deployed SAP instance could be compromised in just under a week if it isn’t patched.

Old threats cause new problems

The vulnerabilities being exploited were patched months or even years ago. Sadly, when organisations don’t patch and update, compromise is only a step away. This isn’t a new phenomenon, by any means. It doesn’t matter if we’re talking about software or hardware fixes, replacing an insecure Windows XP box on the network, or running the updates you’ve been putting off for that old mobile phone in your drawer. Erratic update routines, or worse still, abandoning them altogether, can lead to serious consequences.

In its own press release on the subject, SAP warns that a failure to patch could give cybercriminals “full control of the unsecured SAP applications”, while pointing out that its cloud-based solutions are not at risk:

The scope of impact from these specific vulnerabilities is localized to customer deployments of SAP products within their own data centers, managed colocation environments or customer-maintained cloud infrastructures. None of the vulnerabilities are present in cloud solutions maintained by SAP.

The US Department of Homeland Security’s CISA lists some of the serious end-results of failing to make use of the available SAP patches, in an announcement that followed the release of the report:

  • Financial fraud
  • Disruption to business
  • Sensitive data theft
  • Ransomware
  • Halt of operations

Patch early, patch often

From the above list, ransomware alone could lead to any of the other security issues. The data in the threat intelligence report is incredibly useful for anybody who thinks they could be affected. Thanks to SAP and Onapsis, we know how brief the window can be for those tasked with defending systems to do something about a new patch. The report also highlights how both security and compliance are at risk, along with some of the techniques attackers will try to use out in the wild.

Regular readers will know we’re big on patching and updating. Some of the most undesirable threats around thrive on a lack of regular updates. Manual, as opposed to automatic, updates can also bring headaches for organisations struggling to get up to speed with best practices. It’s certainly not easy, and some organisations simply choose to never patch at all.

A lack of patching may lead to disaster

That risky strategy of little-to-no patching stands a good chance of going horribly wrong. A 2019 study of 340 security professionals found that 27% of organisations worldwide, and 34% in Europe, said they’d experienced breaches due to unpatched vulnerabilities. If an inability to patch promptly is compounded by delays in detecting new systems added to networks and a lack of regular vulnerability scanning, attackers are left with a lot of room to work with.

If your organisation is a touch lax on patching, or making it up as you go along – fear not! There’s still time to get a grip on this difficult subject. Whether you use any of the systems mentioned in the threat report up above or not, timely patching is the way to go. The threats to your business may not come knocking at the door today, or even tomorrow, but that won’t be the case forever.

The post SAP warns of malicious activity targeting unpatched systems appeared first on Malwarebytes Labs.

Aurora campaign: Attacking Azerbaijan using multiple RATs

This post was authored by Hossein Jazi

As tensions between Azerbaijan and Armenia continue, we are still seeing a number of cyber attacks taking advantage of the situation. On March 5th, 2021, we reported on an actor that used steganography to drop a new .Net Remote Administration Trojan. Since that time, we have been monitoring this actor and were able to identify new activity in which the threat actor switched their RAT from .Net to Python.

Document Analysis

The document targets the government of Azerbaijan, using a SOCAR letter template as a lure. SOCAR is the State Oil Company of the Azerbaijan Republic, the country’s state oil and gas company. The letter is dated 25th March 2021 and, concerning the export of a catalyst for analysis, is addressed to the Ministry of Ecology and Natural Resources. The document’s creation time is 28th March 2021, close to the date mentioned on the letter. Based on these dates, we believe this attack happened between the 28th and 30th of March 2021.

Figure 1: Document lure

The embedded macro in this document is very similar to the one we reported on before, with some small differences. We will cover the similarities between the two documents in a later section.

The macro has two main functions, “Document_Open” and “Document_Close”. In “Document_Open”, after defining the required variables, it creates a directory (%APPDATA%\Roaming\nettools48) for its Python RAT.

Figure 2: Document_Open

It then copies itself in a new format to the file path defined before in order to be able to extract the required data from an embedded PNG file (image1.png).

Figure 3: Embedded image

To extract the embedded data, it calls the “ExtractFromPng” function to identify the chunk that has the embedded data. After finding the chunk, it extracts the files from the PNG file and writes them into “tmp.zip”.

Figure 4: Chunk identification

The “tmp.zip” is then extracted into the “%APPDATA%\Roaming\nettools48” directory. It contains the Python 3.6 interpreter, the NetTools Python library, the Python RAT, the RAT C2 config, as well as runner.bat.

Figure 5: Application directory

The Python RAT is executed when the document is closed. “Document_Close” first delays execution, to bypass security detection mechanisms, by running a junk loop 100 times, and then executes runner.bat by calling the Shell function.

Figure 6: Document_Close

The runner.bat also delays execution, waiting a random interval of between 60 and 80 seconds, and then calls Python to execute the Python RAT (vabsheche.py):

REM Pick a random delay between 60 and 80 seconds
SET /A num=%RANDOM% * (80 - 60 + 1) / 32768 + 60
REM Wait that many seconds before doing anything else
timeout /t %num%
REM %~dp0 expands to the folder this batch file lives in
set DIR=%~dp0
"%DIR%python" "%DIR%vabsheche.py"

Python RAT Analysis

The Python RAT used by the attacker is not obfuscated and is pretty simple. It uses the platform library to identify the victim’s OS type.

Figure 7: OS identification
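
The check boils down to Python’s standard platform module. A minimal sketch of that kind of branching (our illustration, not the RAT’s own code):

import platform

os_name = platform.system()  # "Windows", "Linux" or "Darwin"
if os_name == "Windows":
    print("Windows", platform.release(), "- take the Windows code path")
else:
    print("non-Windows host:", os_name)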

The C2 domain and port are hardcoded in a config file in the RAT directory. The RAT opens this file and extracts the host and port from it.

Figure 8: Reads C2 config

In the next step, if the victim is running Windows, the RAT makes itself persistent by creating a scheduled task. It first checks whether a scheduled task with a name matching “paurora*” already exists. If it does not, it reads the content of the bg.txt file, creates a bg.vbs file, and then adds the created VBS file to the list of scheduled tasks.

Figure 9: Creates Scheduled task

The created VBS file calls the runner.bat to execute the Python RAT.

Figure 10: Scheduled task
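
The hardcoded task name makes a convenient indicator of compromise. As a hedged, defender-side sketch, the following Python queries Windows scheduled tasks and flags any whose name contains “paurora” (assumes an English-language Windows host):

import csv
import io
import subprocess

# Query all scheduled tasks in CSV form (Windows only).
output = subprocess.run(
    ["schtasks", "/query", "/fo", "csv"],
    capture_output=True, text=True, check=True,
).stdout

for row in csv.DictReader(io.StringIO(output)):
    name = row.get("TaskName") or ""
    if "paurora" in name.lower():
        print("suspicious task found:", name)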

The main functionality of the RAT sits in a loop that starts by creating a secure SSL connection to the server, using a certificate file (cert.pem) that was extracted from the PNG file and dropped into the RAT directory.

Figure 11: Makes secure connection to server

After building the secure connection to the server, it enters a loop that receives messages from the server and executes different commands based on the message type.

Figure 12: Executes commands

Here is the list of commands that can be executed by the RAT:

  • OPEN_NEW_CONNECTION: Sends a message to the server with False as content
  • HEART_BEAT: Sends a message to the server that the victim is alive
  • USER_INFO: Collects victim info including OS Name, OS Version and User Name
  • SHELL: Executes shell commands received from the server
  • PREPARE_UPLOAD: Checks whether it can open a file to write the data received from the server into, and if so sends a “Ready” message to the server
  • UPLOAD: Receives a buffer from the server and writes it into the file
  • DOWNLOAD: Archives files and sends them to the server

Similarity Analysis

In this section we describe the similarities between the two documents and the TTPs used with them. This will help hunters identify future campaigns associated with this actor.

TTPs similarities

  • Used steganography to embed RATs within embedded images.
  • Used scheduled tasks for persistence. In both cases, a VBS file was created to execute the batch runner.
  • Used a batch file with the same name (runner.bat) to execute the final RAT.
  • Used the same technique to exfiltrate data (archive files and send them to the server).

Documents similarities

  • Both have been obfuscated using the same obfuscation techniques: inserting random characters into meaningful names to obfuscate function and variable names. After deobfuscation, the function graphs of the two documents are almost identical.
Figure 13: Socar.doc
Figure 14: telebler.doc
  • Both use a similar method to obfuscate strings: a “MyFunc23” function that receives an array of numbers and decodes it into a string.

Other similarities

  • Both C2 domains have resolved to the same IP address.
  • There are overlaps between the commands used by both .Net and Python RATs.

Conclusion

Due to tensions between Azerbaijan and Armenia, cyber attacks against these countries have been increasing in the past year. The Malwarebytes Threat Intelligence Team constantly monitors actors that are targeting these countries and was able to identify an actor that has targeted Azerbaijan using different RATs. This actor has used .Net and Python RATs to infect victims and steal data from them, with spear phishing as the initial vector and steganography used to drop variants of its RATs.

IOCs

socar.doc 42f5f5474431738f91f612d9765b3fc9b85a547274ea64aa034298ad97ad28f4
runner.bat 82eb05b9d4342f5485d337a24c95f951c5a1eb9960880cc3d61bce1d12d27b72
vabsheche.py e45ffc61a85c2f5c0cbe9376ff215cad324bf14f925bf52ec0d2949f7d235a00
bg.vbs 1be8d33d8fca08c2886fa4e28fa4af8d35828ea5fd6b41dcad6aeb79d0494b67
C2 Domain pook.mywire[.]org
C2 IP 111.90.150.37
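
Defenders can put these hashes to work directly. A minimal sketch that sweeps a directory and flags any file whose SHA-256 matches an IOC above (the folder path is just an example):

import hashlib
from pathlib import Path

IOCS = {
    "42f5f5474431738f91f612d9765b3fc9b85a547274ea64aa034298ad97ad28f4",  # socar.doc
    "82eb05b9d4342f5485d337a24c95f951c5a1eb9960880cc3d61bce1d12d27b72",  # runner.bat
    "e45ffc61a85c2f5c0cbe9376ff215cad324bf14f925bf52ec0d2949f7d235a00",  # vabsheche.py
    "1be8d33d8fca08c2886fa4e28fa4af8d35828ea5fd6b41dcad6aeb79d0494b67",  # bg.vbs
}

def sha256(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

for f in Path.home().joinpath("Downloads").rglob("*"):
    if f.is_file() and sha256(f) in IOCS:
        print("IOC match:", f)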

The post Aurora campaign: Attacking Azerbaijan using multiple RATs appeared first on Malwarebytes Labs.

Pre-installed auto installer threat found on Android mobile devices in Germany

Users primarily located in Germany are experiencing malware that downloads and installs on their Gigaset mobile devices—right out of the box! The culprit installing these malware apps is the Update app, package name com.redstone.ota.ui, which is a pre-installed system app. This app is not only the mobile device’s system updater, but also an Auto Installer known as Android/PUP.Riskware.Autoins.Redstone.


Infected devices and other important notes

Although this issue seems to be primarily found on Gigaset mobile devices, we have also found other manufacturers involved. Here is a list of make/model/OS version of mobile devices found with Android/PUP.Riskware.Autoins.Redstone:

  • Gigaset GS270; Android OS 8.1.0
  • Gigaset GS160; Android OS 8.1.0
  • Siemens GS270; Android OS 8.1.0
  • Siemens GS160; Android OS 8.1.0
  • Alps P40pro; Android OS 9.0
  • Alps S20pro+; Android OS 10.0

We should note that the names Gigaset and Siemens have considerable overlap—Gigaset was formerly known as Siemens Home and Office Communications Devices. We listed both to erase any confusion.

It is important to realize that every mobile device has some type of system update app. Unless you are experiencing the exact behaviors described in the next section, you are most likely not infected. Another key point is that this pre-installed update app is not the same as what is described in Android “System Update” malware steals photos, videos, GPS location. In that case, the malware is simply hiding as an update app, but is not a pre-installed system app.

Malware behavior

For most Gigaset users experiencing this infection, com.redstone.ota.ui installs three versions of Android/Trojan.Downloader.Agent.WAGD. The package name of this malware always starts with “com.wagd.” and is followed by the name of the app. Here are some examples:

  • Package name: com.wagd.gem
  • App name: gem
  • Package name: com.wagd.smarter
  • App name: smart
  • Package name: com.wagd.xiaoan
  • App name: xiaoan

According to forum users and analysis, Android/Trojan.Downloader.Agent.WAGD is capable of sending malicious messages via WhatsApp, opening new tabs in the default web browser to gaming websites, downloading more malicious apps, and possibly other malicious behaviors. The malicious WhatsApp messages are most likely a way to spread the infection to other mobile devices.

In addition, some users also experience Android/Trojan.SMS.Agent.YHN4 on their mobile devices. This SMS agent is downloaded and installed when Android/Trojan.Downloader.Agent.WAGD visits gaming websites that host malicious apps. Thereupon, the mobile device contains malware capable of sending malicious SMS messages which, like the malicious WhatsApp messages, can further spread the infection.


Awaiting resolution

Because com.redstone.ota.ui is a system app, you cannot remove it using traditional methods. Further, past evidence from Adups and other variants shows that disabling pre-installed update apps is either impossible or that they re-enable shortly after being disabled. Therefore, just as in the case of UMX back in January 2020, it is up to the device manufacturer to push an update to truly fix this issue. Keep in mind that even after the manufacturer fixes the issue, it can push out yet another update in the future to re-infect. There is some evidence that this has recently been the case with UMX, but that is another blog for another day.

In the case of Gigaset, German blogger Günter Born of the blog Borncity has already gotten the ball rolling by contacting Gigaset for a resolution. In the meantime, in an “Attention” note pinned at the bottom of his blog post, Mr. Born suggests the following (translated from German to English using Google Translate):

Attention: I recommend all Gigaset Android device owners to heed the information in the blog post Malware attack: What Gigaset Android device owners should do now and to lay the device dead. At least until Gigaset has responded and the process has been completely clarified.

A safe workaround

The aforementioned recommendation to, quote, lay the device dead, may not be an option for some users if this is their only mobile device. Allow me to suggest another option that still gives users the ability to use their Gigaset mobile devices safely.

Yes, it is true that you cannot remove the app using traditional methods, but we have a workaround!

We can use the method below to uninstall Update (com.redstone.ota.ui) for the current user (details in the link below):

https://forums.malwarebytes.com/topic/216616-removal-instructions-for-adups/

From the tutorial above, use this command during step 7 under Uninstalling Adups via ADB command line to remove:

adb shell pm uninstall -k --user 0 com.redstone.ota.ui

At this point, run a Malwarebytes for Android scan to remove any remaining malware apps.

Checking for updates

Here is the kicker. Remember that the Update app is also the mobile device’s only way to update the system. Thus, if and when Gigaset comes up with a resolution, you will need to check for system updates by re-installing Update.

You can re-install using this command:

adb shell pm install -r --user 0 <full path of the apk>

The two full paths of the APKs we have seen so far are as follows:

/system/priv-app/ThirdPartyRSOTA/ThirdPartyRSOTA.apk

/system/app/Rsota/Rsota.apk

If neither of these paths works, you can find the correct path, even after uninstalling for the current user, by running this command:

adb shell pm list packages -f -u

Copy/paste the output into a text editor (like Notepad) and search for com.redstone.ota.ui to find the correct path.
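
If you would rather skip the text editor, this small Python sketch runs the same command and prints the APK path for the package (assuming adb is on your PATH and the device is connected):

import subprocess

# Same command as above: -f shows each APK path, -u includes
# packages uninstalled for the current user.
out = subprocess.run(
    ["adb", "shell", "pm", "list", "packages", "-f", "-u"],
    capture_output=True, text=True, check=True,
).stdout

# Lines look like: package:/system/app/Rsota/Rsota.apk=com.redstone.ota.ui
for line in out.splitlines():
    line = line.strip()
    if line.endswith("=com.redstone.ota.ui"):
        print("APK path:", line[len("package:"):].rsplit("=", 1)[0])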

If there are no updates to install or if the update that does install does not resolve the issue, remember to once again uninstall Update for the current user.

Never ending battle

Assisting customers with resolving pre-installed malware is a recurring task for me and our mobile support staff. Fortunately, in the case of Gigaset users, there is a workable resolution. If you are experiencing similar or other mobile malware issues, you can reach us on our Malwarebytes Forum or, for more thorough support, submit a support ticket. As always, stay safe out there!

The post Pre-installed auto installer threat found on Android mobile devices in Germany appeared first on Malwarebytes Labs.

Research claims Google Pixel phones share 20 times more data than iPhones

If you’re an Android phone user, now might be a good time to invest in a good pair of ear plugs. Fans of iPhones aren’t known for being shy when it comes to telling Android users that Apple products are superior, and things may be about to get worse, thanks to a new research paper (PDF).

Researchers at the School of Computer Science and Statistics at Trinity College Dublin, Ireland, decided to investigate what data iOS on an iPhone shares with Apple, and what data Google’s Android on a Pixel phone shares with Google. While it may not be the smoking gun some think it is (we think the sheer amount of telemetry data may come as a surprise to both sides of the argument), it didn’t go well for Android.

Research outline 

To get fair results, a researcher needs to define experiments that can be applied uniformly to the handsets studied, to allow for direct comparisons, and the experiments need to generate reproducible behavior. The research team decided to focus on the handset operating system itself, separate from optional services such as maps, search engines, cloud storage, and other services provided by Google and Apple. Although these come with practically every device, privacy-conscious minds are prone to disable them.

The user profile was set to mimic a privacy-conscious but busy, non-technical user, who, when asked, does not select options that share data with Apple and Google. Otherwise, the handset settings were left at their default values.

Test moments 

Data transfer was measured at six specific points of action during the phones’ normal use:

  • On first startup following a factory reset 
  • When a SIM was inserted/removed 
  • When a handset was left idle 
  • When the settings screen was viewed 
  • When geolocation services were enabled/disabled 
  • When the user logged in to the pre-installed app store 

Test results 

Both iOS and Google Android transmit telemetry, despite the user’s settings. According to the research, both Android and iOS handsets shared data with Google and Apple servers every 4.5 minutes, on average.

Android handsets, however, seem to share 20 times more telemetry data than iPhones. During the first 10 minutes of startup, the Pixel handset in the test sent around 1MB of data to Google, compared with the 42KB of data the iPhone sent to Apple. When the handsets were sitting idle, the Pixel sent roughly 1MB of data to Google every 12 hours, compared with the iPhone’s 52KB sent to Apple.

We should be careful not to draw too many conclusions from the size of the data alone, though. The quantity of data can be affected by things like the choice of protocols and whether or not compression is used. What matters far more is the type of information being shared.

Type of information 

Researchers noted that devices on default privacy settings share information related to the IMEI, SIM serial number, phone number, hardware serial number, location, cookies, local IP address, nearby WiFi MAC addresses, and advertising ID. When a user has not yet logged in, Android phones don’t send location, IP address, and nearby WiFi MAC addresses, while iPhones don’t send their own WiFi MAC address. 

Unused apps and services 

Several of the pre-installed apps and services were also observed making network connections, despite never having been opened or used. On iOS these include Siri, Safari, and iCloud. On Google Android they include the YouTube app, Chrome, Google Docs, Safetyhub, Google Messaging, the Clock, and the Google Search bar.

Concerns 

The collection of so much data by Apple and Google raises some major concerns. First, this device data can be fairly easily linked to other data sources. This is certainly no hypothetical concern, since both Apple and Google operate payment services, supply popular web browsers, and benefit commercially from advertising.

Second, every time a handset connects with a back-end server, it necessarily reveals the handset’s IP address, which is a rough proxy for location. The high frequency of network connections made by both iOS and Google Android (on average, every 4.5 minutes) therefore potentially allows Apple and Google to track device locations over time.

And last but not least, there is the apparent inability of users to opt out. In the report, the head researcher outlines a method to prevent the vast majority of the data sharing, but notes that it needs to be tested against other types of handsets. From my perspective, it is not easy to pull off, and it would not stop everything.

Apple and Google do not agree 

The head researcher sent his findings to both companies. Google offered some clarifications and expressed its intention to publish documentation on the telemetry data collection soon. 

Apple noted that the report gets many things wrong. For instance, the company says that personal data sent to Apple is protected, and that it doesn’t collect data that can be associated with a person without their knowledge or consent. Google, meanwhile, called into question the methods used to determine the telemetry volume on Android and iOS. It claims the study didn’t capture UDP/QUIC traffic, nor did it look at whether the data was compressed, which could skew the results.

The post Research claims Google Pixel phones share 20 times more data than iPhones appeared first on Malwarebytes Labs.

Has Facebook leaked your phone number?

Unless you keep your social media at more than arm’s length, you have probably heard that an absolutely enormous dataset—containing over 500 million phone numbers—has been made public. These phone numbers have been in the hands of some cybercriminals since 2019, thanks to a vulnerability in Facebook that allowed personal data to be scraped from the social media platform until it was patched in 2019.

But now some miscreant has posted the entire dataset on a hacking forum, so every lowlife out there has access.

When did this happen?

In an apparent attempt to play down the seriousness of the situation, Facebook spokesperson Liz Bourgeois tweeted Saturday that the leak involved “old data that was previously reported on in 2019.” Some reports say the data was scraped in 2019; others talk about early 2020. To be honest, between scraping vulnerabilities dating back to 2010 and the Cambridge Analytica scandal, an old data breach is still a data breach, and you’re probably still going to need to pay attention to it, whether you like it or not.

If you are, or were, a Facebook user this may very well concern you.

Why it still matters

Access to personal data allows cybercriminals to seem more believable when they pretend to be somebody, making social engineering and ID theft easier, and unlike passwords, most personal details can’t be changed. There are countless examples of how personal information helps criminals; here are three to give you a sense of what’s at stake.

The first thing that comes to mind is a scam where people text you pretending to be a relative or dear friend. First, they tell you they have a new phone number and then they ask you to transfer some money on their behalf.

The scam is more likely to succeed if the threat actor has some private information that can convince you they are who they claim to be. And with the correlation between your Facebook profile and your telephone number, depending on your settings, they can look up:

  • Who your family and friends are
  • How you phrase your responses to each other
  • Some events from your life to talk about

Together with your phone number, that gives them an excellent attack vector for this type of scam.

Another devilish scheme, known as SIM swapping, can unfold if criminals have enough information about you to convince your telephone company that they are the cell phone’s owner. This can usually be done by providing the carrier with a phone number, a home address, and the last four digits of a Social Security number.

Or you could become the victim of a text variant of Business Email Compromise (BEC), one of the most profitable phishing scams, which is easier to pull off if the threat actor has more information available.

Limiting what you share

First off, cybercriminals don’t care where or how they get your information, so take care to hide your personal information on Facebook from profile visitors who are not friends. Facebook has a help page for this called Control Who Can See What You Share.

Facebook privacy settings

Go through that list and ask yourself if everyone needs to see all of that, and what you would rather hide from prying eyes.

Also, now that you know the information is out there, be vigilant, especially about unsolicited texts and phone calls. If any new tactics evolve from this you can always read about it right here.

How to check if your phone number is involved

There are a few sites that offer you the chance to look up your phone number and see if it’s been leaked. One that we trust, and that allows visitors to look up phone numbers from every country, is the well-known Have I Been Pwned?

Troy Hunt, the security guru who runs HaveIBeenPwned, explains in detail on his blog why he decided to include this dataset as a searchable entity. If you are curious and want to dive right in, please note that you need to enter your phone number in the E.164 international standard format, which is not as hard as it sounds: replace the leading 0 with your country code, use only numbers, and you should be good to go.
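
If you would rather not do the conversion by hand, the third-party phonenumbers library handles E.164 formatting. A small sketch (the Dutch number is made up for illustration):

import phonenumbers  # pip install phonenumbers

number = phonenumbers.parse("06 12345678", region="NL")  # hypothetical number
print(phonenumbers.format_number(number, phonenumbers.PhoneNumberFormat.E164))
# prints: +31612345678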

Stay safe, everyone!

The post Has Facebook leaked your phone number? appeared first on Malwarebytes Labs.

A week in security (March 29 – April 4)

Last week on Malwarebytes Labs, our podcast featured Malwarebytes senior security researcher JP Taggart, who talked to us about why you need to trust your VPN.

You’ve likely heard the benefits of using a VPN: You can watch TV shows restricted to certain countries, you can encrypt your web traffic on public WiFi networks, and, importantly, you can obscure your Internet activity from your Internet Service Provider, which may use that activity for advertising.

But obscuring your Internet activity—including the websites you visit, the searches you make, the files you download—doesn’t mean that a VPN magically disappears those things. It just means that the VPN itself gets to see that information instead.

On Malwarebytes Labs, we also wrote about six social media safety sins to say goodbye to, and we advised Steam users not to fall for the “I accidentally reported” scam that is making rounds right now. We also covered how a 5G slicing vulnerability could be used in DoS attacks, the one reason your iPhone needs a VPN, what you need to know about malicious commits found in PHP code repository, the latest ransomware attacking schools, called PYSA, and we tried to report on the npm netmask vulnerability in a way that anyone can actually understand it.

Finally, we looked at the latest Android “System Update” malware that steals photos, videos, GPS location, and we thought it was time to cool down some fervor and say that, you know what, Internet password books are OK.

Other Cybersecurity news:

Stay safe!

The post A week in security (March 29 – April 4) appeared first on Malwarebytes Labs.