IT NEWS

Google and Facebook fined $240 million for making cookies hard to refuse

The French privacy watchdog, the Commission Nationale de l’Informatique et des Libertés (CNIL), has hit Google with a 150 million euro fine and Facebook with a 60 million euro fine, because their websites—google.fr, youtube.com, and facebook.com—don’t make refusing cookies as easy as accepting them.

The CNIL carried out an online investigation after receiving complaints from users about the way cookies were handled on these sites. It found that while the sites offered buttons allowing immediate acceptance of cookies, they didn’t implement an equivalent solution to let users refuse them. Several clicks were required to refuse all cookies, compared to a single one to accept them.

In addition to the fines, the companies have been given three months to provide Internet users in France with a way to refuse cookies that’s as simple as accepting them. If they don’t, the companies will have to pay a penalty of 100,000 euros for each day they delay.

GDPR

EU data protection regulators’ powers have increased significantly since the General Data Protection Regulation (GDPR) took effect in May 2018. This EU law allows watchdogs to levy penalties of as much as 4% of a company’s annual global sales.

The restricted committee, the body in charge of sanctions, considered that the process regarding cookies affects the freedom of consent of Internet users and constitutes an infringement of the French Data Protection Act, which demands that it should be as easy to refuse cookies as to accept them.

Since March 31, 2021, when the deadline set for websites and mobile applications to comply with the new rules on cookies expired, the CNIL has adopted nearly 100 corrective measures (orders and sanctions) related to non-compliance with the legislation on cookies.

Responses

Google said in a statement that “people trust us to respect their right to privacy and keep them safe” and that the company understands its “responsibility to protect that trust and are committing to further changes and active work with the CNIL in light of this decision”.

Facebook said it’s reviewing the authority’s decision. Here it may be important to note that the CNIL fined Facebook Ireland Limited, rather than Facebook France, since the head office in Ireland presents itself as the data controller of the Facebook service in the European region.

The procedure

As an example we’ll follow the cookie management procedure for YouTube, which was one of the sites the CNIL objected to.

A first-time visitor (or more precisely, someone without any cookies from a previous visit) is presented with this consent form:

YouTube cookie consent popup
YouTube’s cookie consent popup

The user’s options are to either accept all the cookies by clicking “I AGREE”, or to click “CUSTOMIZE”, which results in a multitude of choices to be made about search customization, YouTube History, ad personalization, managing cookies in your browser, and managing data Google Analytics collects on sites you visit.

The first three entries are simple On/Off settings.

YouTube cookie customization
The first three options in YouTube’s cookie customization screen

The remaining entries, however, point to instructions or link to other sites, which generally come down to “You can change your browser settings to reject some or all cookies.”

YouTube cookie instructions
YouTube’s instructions on managing cookies and data

This explains why the French watchdog objects to the skewed balance between accepting or rejecting cookies from these sites—the path to privacy is long and difficult.

The everlasting battle

Internet giants like Meta (Facebook) and Alphabet (Google) depend on advertising. Advertising represented 98% of Facebook’s $86 billion revenue in 2020, and more than 80% of Alphabet’s revenue comes from Google ads, which generated $147 billion in 2020.

Advertisers can bid on specific words and phrases, and target specific demographics, geographies, or interests, and this ensures ads show up to relevant users at relevant times, or so the theory goes. To find out who the “relevant users” are, ad companies gather massive amounts of information about users, and that is where our privacy comes into play.

The information is stored in giant databases about us, and the link between us and our database entries is the cookies in our browser. The cookie acts like an ID badge: you show it every time you hit a Google or Facebook page, or any time you hit a page that includes a like button, some Google Analytics code, or anything else loaded from a Google or Facebook domain.

Sometimes that’s useful. Logging in to a website would be impossible without a cookie “ID badge”—you’d have to provide your password on each and every page instead. But sometimes the ID badge is doing something that’s useful to somebody else rather than you, such as allowing them to silently build a personal profile about you.
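To make the “ID badge” analogy concrete, here is a small sketch using Python’s standard library to parse the kind of Set-Cookie header a site sends on a first visit. The cookie name, value, and domain are made up for illustration:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header like the one a site sends on your first visit.
cookie = SimpleCookie()
cookie.load('sessionid=abc123; Path=/; Domain=.example.com')

# On every later request to that domain, the browser sends the value back,
# which is what links your browser to a server-side profile.
morsel = cookie["sessionid"]
print(morsel.value)       # abc123
print(morsel["domain"])   # .example.com
```

The `Domain=.example.com` attribute is what makes the badge travel along with every hit to any page on that domain, including third-party embeds loaded from it.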

Luckily, sites rarely use one cookie for everything and typically use different cookies for different features. This is why YouTube’s customization options are so convoluted, and why adblockers and privacy plugins work at all. With a decent tool it’s possible to block or refuse the cookies you don’t like and keep the ones you do.

If you want to clear out everything and start again, take a look at our quick guide, “How to clear cookies”.

Dark patterns

YouTube’s choice between “I agree” and “Customize” rather than “I agree” and “I don’t agree” is an example of a dark pattern, a design that subtly and deliberately nudges you in the direction of a choice that benefits the designer. They are everywhere on the web, and they’re a problem.

In June 2021, Malwarebytes Labs’ David Ruiz spoke to dark patterns expert Carey Parker on the Lock and Code podcast. To learn more about dark patterns and how to spot them, listen to that episode.

The post Google and Facebook fined $240 million for making cookies hard to refuse appeared first on Malwarebytes Labs.

Sophisticated phishing scheme spent years robbing authors of their unpublished work

Three years ago on Quora, someone asked what writers do to keep their manuscripts from being stolen. One of the top answers reads as follows:

You’re joking, right? It’s hard enough to get people to read your novel once it’s out on Amazon, much less reading it before it’s finished…unless you’re George RR Martin, nobody is trying to get your unpublished, unedited manuscript.

That optimistic piece of advice doesn’t really hold true anymore, if it ever did. In a scheme reminiscent of some sort of comic book supervillain, Filippo Bernardini was arrested at JFK International Airport on Wednesday. The reason? He stands accused of impersonating publishing professionals to obtain unpublished manuscripts. Charges include “wire fraud and aggravated identity theft”. The wire fraud aspect alone carries a potential maximum sentence of 20 years.

Throwing the book at crime

From the FBI indictment:

…an indictment charging FILIPPO BERNARDINI with wire fraud and aggravated identity theft, in connection with a multi-year scheme to impersonate individuals involved in the publishing industry in order to fraudulently obtain hundreds of prepublication manuscripts of novels and other forthcoming books.

This particular scheme had been rumbling along since “at least” 2016, and the accused individual worked in the publishing industry.

According to the FBI, multiple fake email accounts were created, impersonating not only real people in the publishing space, but also publishing houses and talent agencies. Alongside these were “more than 160 internet domains”. The domains copied real entities, with deliberate use of slight typos in email addresses to further replicate the genuine article. These are common tactics used by regular phishers, but here we can see them being deployed in a more targeted fashion.
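Lookalike domains of this kind sit only a character or two of edit distance away from the real thing, which is also how defenders hunt for them. A minimal sketch of the classic Levenshtein distance; the domains below are hypothetical examples, not ones from the indictment:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# A typosquat is usually one or two edits from the legitimate domain,
# e.g. a digit "1" standing in for the letter "l".
print(edit_distance("example.com", "examp1e.com"))  # 1
```

A defender can flag any inbound sender domain within distance 1 or 2 of a known-good domain for manual review.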

Nice award. Can I have your next book, please?

There’s at least one example given of a Pulitzer prize-winning author tricked into sending a forthcoming manuscript to an imitation of a real well-known editor and publisher.

“Hundreds” of distinct people were impersonated in order to obtain manuscripts the phisher had no business accessing.

There’s also mention of gaining access to a New York literary scouting company, via bogus mails to employees and a fake domain for them to log into. Once they logged in, credentials were forwarded on to add another string to the “massive scam” bow.

This was all happening up until or around July 2021. It remains to be seen how the case will pan out for the accused, but it doesn’t sound great for him so far. It seems likely that this in-depth account of authors being contacted by fictitious publishers from August of last year is related to the above. If it isn’t, well, I guess we have two separate fake literary agent saboteurs to contend with.

What can writers do to keep their work safe?

A lot of the security issues in this story boil down to phishing, and phishing countermeasures. Most of the tips for authors for keeping their manuscripts safe tend to focus on backing up files. While some do mention security compromise, a few of the tips make me a little nervous. With that in mind:

  • The Nathan Bransford article I’ve linked to above invites the “technically disinclined” to email themselves a copy of their manuscript, but I’d be wary of emailing documents to myself or others in plain text. I also appreciate that there are some situations where you may be left with “email or nothing”. In those situations, you should make use of a tool which can encrypt your files before you attach them, such as WinZip. Be aware though that some forms of encryption are more secure than others.
  • It also suggests placing documents in cloud storage. This puts a copy of your work in a different geography than your laptop, which is good if there’s a fire, or you’re hit with ransomware, but it also means there’s another place your work can be stolen from. If someone manages to guess your cloud login, and you don’t have 2FA enabled, they have your documents. To prevent this, I suggest you enable two-factor authentication on your cloud accounts, and consider encrypting your files before uploading them.
  • If you really don’t like the idea of leaving documents on your desktop, store them on an external drive. The usual caveats apply: Encrypt, encrypt, encrypt. On the very remote chance someone breaks in and steals it, or more likely, you lose it somewhere, it’ll help keep the files safe from prying eyes.

Again, these tips are really for everyone and all kinds of files. They’re not specific to budding or even professional writers. However, writers can still make full use of them. And you don’t even have to be George R.R. Martin to do it.

Patchwork APT caught in its own web

Patchwork is an Indian threat actor that has been active since December 2015 and usually targets Pakistan via spear phishing attacks. In its most recent campaign from late November to early December 2021, Patchwork has used malicious RTF files to drop a variant of the BADNEWS (Ragnatela) Remote Administration Trojan (RAT).

What is interesting among victims of this latest campaign, is that the actor has for the first time targeted several faculty members whose research focus is on molecular medicine and biological science.

Instead of focusing entirely on victimology, we decided to shed some light on this APT. Ironically, all the information we gathered was possible thanks to the threat actor infecting themselves with their own RAT, resulting in captured keystrokes and screenshots of their own computer and virtual machines.

Ragnatela

We identified what we believe is a new variant of the BADNEWS RAT called Ragnatela being distributed via spear phishing emails to targets of interest in Pakistan. Ragnatela, which means spider web in Italian, is also the project name and panel used by Patchwork APT.

panel 1
Figure 1: Patchwork’s Ragnatela panel

Ragnatela RAT was built sometime in late November, as seen in its Program Database (PDB) path “E:\new_ops\jlitest __change_ops -29no – Copy\Release\jlitest.pdb”. It features the following capabilities:

  • Executing commands via cmd
  • Capturing screenshots
  • Logging keystrokes
  • Collecting a list of all the files on the victim’s machine
  • Collecting a list of the running applications on the victim’s machine at specific time periods
  • Downloading additional payloads
  • Uploading files
commands
Figure 2: Ragnatela commands

In order to distribute the RAT onto victims, Patchwork lures them with documents impersonating Pakistani authorities. For example, a document called EOIForm.rtf was uploaded by the threat actor onto their own server at karachidha[.]org/docs/.

server
Figure 3: Threat actor is logged into their web control panel

That file contains an exploit (Microsoft Equation Editor) which is meant to compromise the victim’s computer and execute the final payload (RAT).

Figure 4: Malicious document triggers exploit

That payload is stored within the RTF document as an OLE object. We can deduce that the file was created on December 9, 2021, based on the source path information.
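RTF files carry embedded OLE objects as a hex blob following the `\objdata` control word, which is how analysts typically carve the payload out for inspection. A minimal sketch; the sample string below is synthetic, not the actual EOIForm.rtf content:

```python
import re

def extract_objdata(rtf: str) -> bytes:
    """Pull and decode the hex blob that follows an \\objdata control word."""
    m = re.search(r"\\objdata\b([0-9a-fA-F\s]+)", rtf)
    if not m:
        return b""
    hexblob = re.sub(r"\s+", "", m.group(1))  # strip whitespace between hex chars
    return bytes.fromhex(hexblob)

# Tiny synthetic sample; a real document carries a far larger blob.
sample = r"{\rtf1{\object\objemb{\*\objdata d0cf11e0a1b11ae1}}}"
payload = extract_objdata(sample)
print(payload[:4] == b"\xd0\xcf\x11\xe0")  # True: OLE compound file magic
```

Checking the first bytes against the OLE compound-file magic (`D0 CF 11 E0`) is a quick way to confirm an embedded object is worth deeper analysis.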

OLE
Figure 5: OLE object containing RAT

Ragnatela RAT communicates with the attacker’s infrastructure via a server located at bgre.kozow[.]com. Prior to launching this campaign (in late November), the threat actor tested that their server was up and running properly.

ping
Figure 6: Log of threat actor typing a ping command

The RAT (jli.dll) was also tested in late November before its final compilation on 2021-12-09, along with MicroScMgmt.exe used to side-load it.

dll
Figure 7: DLL for the RAT being compiled

Also in late November, we can see the threat actor testing the side-loading in a typical victim machine.

win7
Figure 8: Threat actor tests RAT

Victims and victim

We were able to gain visibility on the victims that were successfully compromised:

  • Ministry of Defense- Government of Pakistan
  • National Defense University, Islamabad
  • Faculty of Bio-Science, UVAS University, Lahore, Pakistan
  • International center for chemical and biological sciences
  • HEJ Research Institute of Chemistry, International Center for Chemical and Biological Sciences, University of Karachi
  • SHU University, Molecular medicine

Another, unintentional, victim is the threat actor themselves, who appear to have infected their own development machine with the RAT. We can see them running both VirtualBox and VMware to do web development and testing. Their main host has dual keyboard layouts (English and Indian).

host
Figure 9: Virtual machine running on top of threat actor’s main computer

Other information that can be obtained is that the weather at the time was cloudy with 19 degrees and that they haven’t updated their Java yet. On a more serious note, the threat actor uses VPN Secure and CyberGhost to mask their IP address.

vpn
Figure 10: Threat actor uses VPN-S

Under the VPN they log into their victim’s email and other accounts stolen by the RAT.

email
Figure 11: Threat actor logs into his victim’s email using CyberGhost VPN

Conclusion

This blog gave an overview of the latest campaign from the Patchwork APT. While they continue to use the same lures and RAT, the group has shown interest in a new kind of target. Indeed this is the first time we have observed Patchwork targeting molecular medicine and biological science researchers.

Thanks to data captured by the threat actor’s own malware, we were able to get a better understanding of who sits behind the keyboard. The group makes use of virtual machines and VPNs to develop and push updates, and to check on their victims. Patchwork, like some other East Asian APTs, is not as sophisticated as their Russian and North Korean counterparts.

Indicators of Compromise

Lure

karachidha[.]org/docs/EOIForm.rtf
5b5b1608e6736c7759b1ecf61e756794cf9ef3bb4752c315527bcc675480b6c6

RAT

jli.dll
3d3598d32a75fd80c9ba965f000639024e4ea1363188f44c5d3d6d6718aaa1a3

C2

bgre[.]kozow[.]com

Ransomware attacks Finalsite, renders 8,000 school sites unreachable for days

Finalsite, a popular platform for creating school websites, appears to have recovered significant functionality after being attacked by a still-unknown ransomware on Tuesday, January 4, 2022. At least 8,000 schools are said to have been affected by the resulting outage.

According to an open letter published on its Twitter account:

On Tuesday, January 4, our team identified the presence of ransomware on certain systems in our environment.

In the time since the incident, our security, infrastructure, and engineering teams have been working around the clock to restore full backup systems and bring our network back to full performance, in a safe and secure manner.

Internet users who are directly or indirectly affected by this ransomware incident took to Reddit to raise some concerns. User /u/flunky_the_majestic writes: “Many districts are complaining that they are unable to use their emergency notification system to warn their communities about closures due to weather or COVID-19 protocol. The impact of this outage is far greater than the attention it has received.” [1]

Some Reddit users also used this thread to complain about K12 schools continuing to use old technology, and about the challenges that explain why it has remained this way. This is a notable one from someone who works in K12:

Surprised Pikachu face

The first good news is the company says it has found no evidence of data theft.

The second good news is that, as of Finalsite’s status entry hours ago, “the vast majority of front-facing websites are online.” As a caveat, it added that some of these sites still lack some functionality and content, such as admin log-in, calendar events, and the directory of constituent groups, which the team is working to restore. While the CMS company continues to restore from backups, the investigation is still ongoing as of this writing.

The third and final bit of good news is related to the second: Finalsite got it so right by making and keeping backups of all their most important data. Remember that it’s not a matter of “if” but “when” ransomware—or another cyberthreat—strikes. Sometimes, companies who deem themselves secure can still get hit. And when (not if) they do, organizations need a recovery plan and the right kind of backups.

Companies restoring from backup in just a few days after an attack, rather than paying the ransom, is by far the least bad outcome. It is also quite difficult to pull off, because there are many questions to consider before doing anything. On top of that, there are instances where backups could fail us. Malwarebytes Labs’ podcast, Lock and Code, has covered this very dilemma in a dedicated episode.

Finalsite also kept it simple and honest, which we greatly applaud. Some (if not most) organizations leave it at “sophisticated cyberattack”—perhaps for fear of ridicule or criticism over “not doing enough”. While this is understandable, seeing Finalsite admit it has been a ransomware victim and actually do something about it is refreshing. We can only hope that other organizations, regardless of size, follow its example.

Card skimmers strike Sotheby’s in Brightcove supply chain attack

Over 100 real estate websites have been compromised by the same web skimmer in a supply chain attack.

So what happened?

On Monday, January 3, Palo Alto said it had found a supply chain attack that used a cloud video platform to distribute skimmer campaigns. The attacker injected the skimmer’s JavaScript code into video files, so whenever someone imported the video, their website would get embedded with the skimmer code as well.

Palo Alto worked with the cloud video platform and the real estate company to help them remove the malware before publishing the post, and the incident was resolved last year. Palo Alto didn’t name either of the companies involved, but it did share a list of domains where the malicious code was deployed, which indirectly identified Sotheby’s as the real estate company.

In Malwarebytes’ own ongoing investigations into web skimmers, our researchers found a script on VirusTotal that gave us a clue about the affected cloud video platform. We identified the skimmer group as Inter. The domain used for data exfiltration was known to us and has been used in previous campaigns over the past couple of years.

code snippet
Retrohunt result based on code snippets

Following up on this we found several more instances of that campaign also on VirusTotal. This one is from January 2021 and is seen inserted in the JavaScript of the same video player.

Code snippet
JavaScript code snippet used in the attack

Brightcove

In 2015, Brightcove proudly announced that it had partnered with Sotheby’s, one of the world’s oldest and largest auction houses, to deliver live auction experiences.

Brightcove’s online video platform is a Software-as-a-Service (SaaS) platform that is built to provide both a secure and reliable foundation that helps its customers scale and manage their video content. Customer data is hosted in a multi-tenant environment, but segregation is integrated into the application at the customer level.

Adding JavaScript

So, how does the attacker inject the malicious code into the player of the cloud video platform? There’s only one conceivable way to achieve this.

When the cloud platform user creates a player, the user is allowed to add their own JavaScript customizations by uploading a JavaScript file to be included in their player. In this specific instance, the user uploaded a script that could be modified upstream to include malicious content.

It stands to reason that the attacker gained access and altered the static script at its hosted location by attaching skimmer code. On the next player update, the video platform re-ingested the compromised file and served it along with the impacted player.

The question here is, was the attacker using a Sotheby’s account or a Brightcove account to alter the code? In the latter case there is a big chance that the attacker used this method on other companies’ players. Palo Alto concluded, after analysis of the sites it identified, that all the compromised sites belong to one parent company and that they were all importing the same video.

Code analysis

For a deep dive analysis of the JavaScript code we happily refer you to the post by Palo Alto.

The code is heavily obfuscated, but after unraveling we can tell that it uses two different functions to verify whether a string matches a credit card pattern and to validate any matches by using a checksum formula.
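The pattern-match-plus-checksum combination described above is, in all likelihood, the standard Luhn mod-10 check that every payment card number passes. A minimal sketch of that check, not the skimmer’s actual obfuscated code:

```python
def luhn_valid(number: str) -> bool:
    """Luhn mod-10 checksum, the standard payment-card check-digit test."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a well-known test number
print(luhn_valid("4111 1111 1111 1112"))  # False: last digit altered
```

A skimmer runs this same check client-side so that it only exfiltrates strings that are actually plausible card numbers, cutting down on noise in its collection server.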

Another part is used to detect and thwart debugging, but the most interesting part is how the skimmer steals information and sends it out. We can tell that the skimmer tries to gather victims’ sensitive information, such as names, emails, and phone numbers, and send it to a collection server, https://cdn-imgcloud[.]com/img (highlighted in the first screenshot).

In this type of supply chain attack, it is important to understand where the initial breach happened: that information can help to find more potential victims, and to prevent future instances.

This post was written with the immense help of the Malwarebytes Threat Intel team.

Intercepting 2FA: Over 1200 man-in-the-middle phishing toolkits detected

Two-factor authentication (2FA) has been around for a while now and, for the majority of tech users in the US and UK, it has become a security staple. Indeed, wake-up calls brought about by data breaches have stirred others out of their comfort zones into finally adopting 2FA and making it part of their online lives.
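For context, the 2FA codes discussed here are usually time-based one-time passwords (TOTP, RFC 6238): a six-to-eight digit code derived from a shared secret and the current 30-second time step. A minimal sketch using only the standard library, checked against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time step."""
    return hotp(key, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T=59 seconds).
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code is valid for only one short window, attackers can’t usefully steal it in advance, which is exactly why the MiTM kits described below relay it to the real site in real time instead.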

But online criminals—quick to adapt as they are—are already one step (if not several) ahead.

Since at least 2017, cybercriminals have been incorporating capabilities to defeat 2FA into their kits. With 2FA becoming much more commonplace, such kits are increasing in popularity and are in high demand on the underground market.

Academics from Stony Brook University and Palo Alto Networks—namely Brian Kondracki, Babak Amin Azad, Nick Nikiforakis, and Oleksii Starov—have found at least 1,200 phishing kits online capable of capturing or intercepting 2FA security codes. This, of course, would enable them to bypass any 2FA procedures their target victims have already set up.

According to their report, entitled “Catching Transparent Phish: Analyzing and Detecting MITM Phishing Toolkits”, cybercriminals are using Man-in-the-Middle (MiTM) phishing kits that mirror live content to users while extracting credentials and session cookies in transit.

These kits make it easy for cybercriminals, because the harvesting of 2FA authentication session tokens is automatic. And because victims can browse the phishing page as if it’s the real thing after they authenticate, users are less likely to notice they’ve been phished.

mitm phish illustration
Illustration of what a MiTM phishing attack would look like. (Source: Kondracki, et al)

MiTM phishing attacks are perfect for scenarios where cybercriminals don’t want to use malware to steal credentials, and the attack itself doesn’t need human involvement in the process. Perhaps this is why email accounts, social media accounts, and some gaming accounts (as opposed to banking sites) are likely targets of MiTM phishers. These services have a more relaxed approach to how they log in users and keep them logged in until they manually log out.

Some of these services also create authentication sessions that can remain valid for years. Such session tokens can be used to abuse the account over the long term without the user knowing.

There are currently three widely known MiTM toolkits in popular hacking forums and code repositories: Evilginx, Muraena, and Modlishka. Among these, Modlishka (the Polish word for “mantis”) is the most familiar, and we covered it back in 2019.

therecord evilginx modlishka forum
A hacking forum post where someone is looking for Evilginx and Modlishka specialists for his phishing campaign. (Source: The Record by Recorded Future)

Using machine learning, the academics created a fingerprinting tool they called PHOCA (the Latin word for “seal”, the sea mammal). Per the report, PHOCA “can detect previously-hidden MITM phishing toolkits using features inherent to their nature, as opposed to visual cues.” All one needs to do is feed the tool a URL or domain name, and it determines whether the web server behind it is a MiTM phishing toolkit, using its trained classifier.

Criminal use of 2FA bypasses is inevitable. PHOCA seems to be the only tool that can successfully pinpoint MiTM phishing websites and help users thwart them. Aside from PHOCA, the academics propose client-side fingerprinting and TLS fingerprinting as detection methods to help thwart this type of attack.

Seemingly invisible threats like MiTM phishing are real. And we hope that we can protect against them sooner rather than later.

Hackers take over 1.1 million accounts by trying reused passwords

The New York State Office of the Attorney General has warned 17 companies that roughly 1.1 million customers have had their user accounts compromised in credential stuffing attacks.

Credential stuffing is the automated injection of stolen username and password pairs into website login forms, in order to fraudulently gain access to user accounts. Many users reuse the same password and username/email, so if those credentials are stolen from one site—say, in a data breach or phishing attack—attackers can use the same credentials to compromise accounts on other services.

While credential stuffing may seem like a tiresome and long-winded game for attackers, it has proven to be very effective. And unlike many other types of cyberattack, credential stuffing attacks often require little technical knowledge.

The consequences

When attackers gain access to an account, they have several options to monetize it, such as:

  • Draining stolen shopping accounts of stored value, or making purchases.
  • Accessing more sensitive information such as credit card numbers, private messages, pictures, or documents which can ultimately lead to identity theft.
  • Using a forum or social media account to send phishing messages or spam.
  • Selling the known-valid credentials to other attackers on underground forums.

Needless to say, avoiding becoming a victim is worth the trouble.

What can users do?

Besides listening to us telling you that you should not reuse passwords across multiple platforms, there are some other things you can do.

Start using a password manager. Password managers can help you create strong passwords and remember them for you. Some can be tricky at first, but once you get the hang of them you will wonder how you ever managed without one.

Then find out which credentials are at risk. You can check for compromised accounts on the website Have I Been Pwned? You can find information on how to use that site in our article “Have I been pwned?”—what it is and what to do when you *are* pwned. The credentials shown as pwned there are the first ones you need to change the password for.
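As an aside, the companion Pwned Passwords service doesn’t require sending your password, or even its full hash: its range API uses k-anonymity, so only the first five characters of the SHA-1 digest ever leave your machine. A sketch of the client-side half (the network call to the range endpoint is omitted here):

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix that is
    sent to the range API and the 35-char suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # 5BAA6
# The client downloads every suffix known for this prefix and checks for
# `suffix` locally, so the service never sees the full hash, let alone
# the password itself.
```

This design is why it is safe to check even a password you still use: the server learns only that you asked about one of the hundreds of thousands of hashes starting with that prefix.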

When it comes to which steps to take if you suspect there might be identity theft at play, we recommend you read this post we wrote after the Equifax breach some years ago.

What should organizations do?

Something that would make all of our lives easier is if organizations made it impossible, or at least harder, to credential-stuff their sites and services.

One effective safeguard is to implement and enforce multi-factor authentication (MFA). However, this puts a big part of the burden on customers, since they will have to take extra steps before they are logged in. Another method to protect customers is to prevent them from using compromised credentials. This functionality typically relies on third-party vendors that compile credentials from known data breaches.

Other more user-friendly solutions are bot detection methods and application firewalls.

Bot detection methods can distinguish between human and bot traffic even when the bot traffic has been disguised. Bot detection can be event-based, identifying bots using network, device, and behavior characteristics. More complex bot detection methods use behavioral analysis and artificial intelligence to detect login attempts that look abnormal. A less complex way to distinguish between bots and humans is the well-known CAPTCHA challenge.

Web Application Firewalls (WAF) are often the first line of defense against malicious traffic. They can block or throttle multiple attempts from the same source or at the same account. They can also use blocklists based on known IP addresses that have recently engaged in attacks. Sophisticated credential stuffing attacks, however, are often able to circumvent most WAF security measures.
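To illustrate the throttling idea, here's a minimal sliding-window rate limiter sketch in Python (our own illustration, not code from any particular WAF product): each source IP gets a fixed number of attempts within a rolling time window.

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Throttle repeated login attempts from the same source IP.

    A simple sliding-window limiter: each IP may make at most
    `max_attempts` attempts within any `window`-second period.
    """
    def __init__(self, max_attempts: int = 5, window: float = 60.0):
        self.max_attempts = max_attempts
        self.window = window
        self.attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

    def allow(self, ip: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.attempts[ip]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # throttled
        q.append(now)
        return True

limiter = LoginRateLimiter(max_attempts=3, window=60)
print([limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)])
# → [True, True, True, False]
```

Real WAFs layer this with blocklists, fingerprinting, and distributed state, which is exactly why sophisticated credential stuffing attacks rotate source IPs to stay under per-IP thresholds like this one.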

No more passwords

Recently, we’ve seen initiatives that strive towards more password-less authentication. On this site we have discussed alternatives to get rid of passwords for good, along with the possible downside of the bold move Microsoft made towards a password-less future.

As with most things in security, switching to password-less authentication has pros and cons. It’s likely to play out differently for different organizations, but it seems like something that is at least worth thinking through.

Stay safe, everyone!

The post Hackers take over 1.1 million accounts by trying reused passwords appeared first on Malwarebytes Labs.

New iPhone malware spies via camera when device appears off

When removing malware from an iOS device, the standard advice is to restart the device to clear the malware from memory.

That is no longer the case.

Security researchers from ZecOps have created a new proof-of-concept (PoC) iPhone Trojan capable of doing “fun” things. Not only can it fake a device shutting down, it can also let attackers snoop via the device’s built-in microphone and camera, and exfiltrate potentially sensitive data, because the device still has a live network connection.

Stopping users from manually restarting an infected device by making them believe they have successfully done so is a notable malware persistence technique. On top of that, there’s human deception involved: just when you think it’s gone, it’s still very much there.

The researchers dubbed this overall attack “NoReboot,” and it does not exploit any flaws on the iOS platform. This means Apple cannot patch for it.

How they did it

So how does the malware stop the actual device shutdown from happening while making it look to users like it did? In a nutshell, the researchers hijack the shutdown event on an iOS device. This involves injecting new code into three daemons (programs that run in the background, each with its own function): InCallService, SpringBoard, and Backboardd.

The three inherent iOS daemons that the malware has to modify in order to pull off a successful fake-out. (Source: ZecOps)

InCallService is responsible for sending the “shutdown” signal to SpringBoard when a user manually turns off the iOS device. The researchers were able to hijack this signal using a hooking process. So instead of InCallService sending the signal to SpringBoard as it’s supposed to, it signals SpringBoard and Backboardd to execute the code injected into them.

The code in SpringBoard tells it to exit, not launch again, and only respond to a long button press. Since SpringBoard responds to user interaction and behavior, the daemon being unresponsive gives the impression that the device is off when, in fact, it’s not.

The code in Backboardd, on the other hand, tells it to hide the spinning wheel animation that pops up when SpringBoard ceases to work.
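The hooking technique described above is, at its core, ordinary function interception: the real handler is replaced with a decoy that fakes the visible response instead of forwarding the signal. Here's the idea as a language-agnostic sketch in Python, with all names hypothetical (ZecOps's actual PoC injects code into iOS daemons, which works very differently):

```python
class SpringBoardStub:
    """Stands in for the daemon that would normally perform the shutdown."""
    def handle_shutdown(self):
        return "device powers off"

springboard = SpringBoardStub()
real_handler = springboard.handle_shutdown  # keep a reference to the original

def hooked_shutdown():
    # Instead of forwarding the shutdown signal, fake the "off" appearance:
    # dim the screen, mute feedback, and keep the OS quietly running.
    return "screen dimmed, device still running"

springboard.handle_shutdown = hooked_shutdown  # the hook is installed

print(springboard.handle_shutdown())  # → screen dimmed, device still running
```

The victim triggers the same entry point as before, but the intercepted call now produces the deception rather than the real shutdown.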

Screenshot of code snippets that are injected into SpringBoard and Backboardd. (Source: ZecOps)

At this point, the iOS device looks and feels like a brick, but it’s still very much on: still connected to the internet, and with functional features readily available for remote exploitation. Once an iOS device is infected with NoReboot, it starts snooping via the camera.

Just as the device shutdown is simulated, NoReboot can also simulate a device startup, and the Backboardd daemon plays a huge role in this. Since SpringBoard is no longer functioning, Backboardd takes control of the screen and responds to user inputs, including long button presses. Backboardd is told to show the Apple logo, a known indicator that the device is booting up, which makes users let go of the button and stops them from truly rebooting the device. Then SpringBoard is relaunched so Backboardd can hand control of the screen back to it.

You can read more about how NoReboot works in detail in ZecOps’s post here.

Video demonstration of NoReboot. (Source: ZecOps)

“Is this thing on?”

Since Apple introduced a feature that allows device owners to track their phones even when they’re turned off, things have never been the same. “On” remains on, while “off” is not quite off anymore. And this gives attackers an opportunity to make their malware persist on affected devices.

NoReboot is a mere PoC at this point, but its code is already public. It’s only a matter of time before iOS attackers start incorporating it into their malware kits. In the meantime, let’s arm ourselves with what we can do as users.

If you suspect that your device is compromised by NoReboot-like malware, keep pressing the force-reboot buttons even after the Apple logo appears. Remember that this could be a simulated reboot, and keeping the restart buttons pressed would force the infected device to truly reboot. iOS device owners can also use Apple Configurator, which you can download for free.

Stay vigilant!

(Kudos to Thomas Reed for additional helpful insights)

The post New iPhone malware spies via camera when device appears off appeared first on Malwarebytes Labs.

$10m of funds goes missing in what appears to be a cryptocurrency rug-pull

There’s a lot of concern in the cryptocurrency realm at the moment. A yield farming platform “utilizing arbitrage to gain optimal yield with low risk” has gone AWOL. Site down, Twitter account deleted, no word from the team behind it explaining what happened. Worst of all, some $10 million worth of funds have been drained, leading to accusations of rug-pulling.

So what’s gone wrong with rugs in the land of yield farming?

Yield farming in DeFi (Decentralised Finance)

Yield farming is a popular target for scams, as lots of money is dropped into new services with the hope of big payouts via passive income earnings further down the line. People do receive payouts, by the way. Here’s someone who picked up $1,700 because he used one particular service prior to a specific date. However, as the article notes, many projects are open source. This makes it easy for people with bad intentions to fire up a bogus service of their own, wait for funds to be pumped into it, and then vanish.

This is, of course, bad news when it happens. Sadly, it’s what may have happened in this case.

What is a rug pull in cryptocurrency?

A rug pull (or “being rugged”, as they call it in cryptocurrency circles) is not a fun experience. Someone creates an altcoin (any coin other than Bitcoin) on a DEX (decentralised exchange). They then spend some time hyping that token on as many platforms as possible. The more noise, the better: anything to attract potential users. As more people invest, the idea is that the token increases in value. The liquidity of the project goes up as a result.

When hype is at its maximum and investors are running wild, the creators suddenly drain the pool of its funds and fade from existence. Anyone who bought into the project is left with worthless tokens. At this point, sites and services related to the scam token are scrubbed, and a lot of people are out of pocket. The rug is well and truly pulled.

What’s happened to Arbix?

A project called Arbix Finance has indeed pulled its site and deleted its Twitter account. Arbix was audited and approved by Certik in November, which lent it legitimacy and gave users a way to reassure themselves it was on the level. Here’s an example press release relating to the certification of another cryptocurrency platform. Audits and certifications like these are common in the DeFi space, so it’s probably disconcerting for users to see a rug pull happen despite such forms of approval.

The audit history page for Arbix Finance currently reads as follows:

“Warning: This project has been confirmed to be a rugpull and is deemed high risk. Do not engage, or interact, with this project.”

Where did the money go?

The Certik Twitter feed is currently revealing pieces of its investigation into what’s happened. It’s quite likely there’s more to come, so this isn’t the full story at present, but here’s the current timeline of events:

Word of the rugpull first breaks. People are told to steer clear of interacting, because it’s still possible to get tangled up in losing some more money:

Money invested by users (the missing $10m) is sent to a variety of addresses, with a big chunk of the missing funds dumped.

That thread and offshoots of it are still being updated, so if you’re impacted you’ll want to bookmark for future reference.

If you want to see more information about the wallets used to hold the funds and where they were sent afterwards, see this tweet.

Next steps

There isn’t much advice that can be given to potential victims in this specific case. More digging is required, and one possible benefit of the service having been audited is that it may help identify who’s behind this. It’s also possible the project owners will appear at the eleventh hour with an explanation. For now, we just have to wait and see.

There are a lot of angry people on social media over this one. We’ve seen a few links being sent around claiming to be forms of “help” or support from Arbix, which resolve to things like Telegram links. With no way to verify them, we’d suggest being very cautious around any links offering assistance.

You definitely don’t want to lose out twice over

People are making money in cryptocurrency, but rug pulls remain a huge loss for all concerned. If you haven’t run into one of these scams yet, read up on ways to minimise the threat. It’s a Wild West out there.

The post $10m of funds goes missing in what appears to be a cryptocurrency rug-pull appeared first on Malwarebytes Labs.

Careful! Uber flaw allows anyone to send an email from uber.com

On New Year’s Eve, Seif Elsallamy (@0x21SAFE on Twitter), a bug bounty hunter and security researcher, pointed out a phish-worthy security flaw he found in Uber’s email system. The flaw allowed anyone to send emails on behalf of Uber, meaning the sender address would end with “@uber.com”, just like the one below:

The proof-of-concept (PoC) email that Seif sent to his Gmail account while testing the Uber email server flaw. (Source: @0x21SAFE on Twitter)

An email legitimately sent from one of Uber’s email servers with an “@uber.com” address will no doubt pass through spam filters and reach its intended recipients. Knowing that anyone can do this opens up multiple phishing opportunities for would-be scammers.

They could start sending out a fake email marketing campaign to Uber clients, potentially using a list from the 57 million user data breach in 2017. The email could contain a link to an external site made to look like it’s from Uber, too. Suffice it to say, there is a lot of scamming potential here.

And this is not a case of email spoofing—which makes an email appear to come from somewhere legitimate but actually comes from somewhere entirely different—but a case of a successful HTML injection into a vulnerable email endpoint on Uber’s side.

As BleepingComputer said, it’s similar to the flaw disclosed by Youssef Sammouda in 2019 which allowed anyone to send an email on behalf of Facebook using an “@fb.com” email address. But the similarities end there, because while Facebook fixed the issue and awarded Sammouda a bounty, Uber has not done the same.

Falling on deaf ears

It often comes as a surprise when companies ignore bug reports that may actually have a huge impact on their business. Elsallamy’s finding has not only brought to light an old bug once more, but also unearthed a history of Uber not taking the issue seriously and, in turn, not doing anything about it.

Elsallamy tweeted: “Bring your [calculator] and tell me what would be the result if this vulnerability has been used with the 57 million email address that has been leaked from the last data breach? If you know the result then tell your employees in the bug bounty triage team.”

Soufiane el Habti (@wld_basha on Twitter), another bug bounty hunter, hopped on the original tweet thread, saying he raised the same issue to Uber last year but the triage team “closed it as informative”. Shiva Maharaj (@ShivaSMaharaj on Twitter) claimed that he reported this bug in 2015/2016, but said “they don’t care.”

When BleepingComputer asked what Uber should do to address this concern, Elsallamy said: “They need to sanitize the users’ input in the vulnerable undisclosed form. Since the HTML is being rendered, they might use a security encoding library to do HTML entity encoding so any HTML appears as text.”
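As an illustration of the HTML entity encoding Elsallamy describes, here's a minimal sketch using Python's standard html module (Uber's actual stack and the vulnerable form are unknown to us, so this is purely the general technique): any markup a user submits is encoded so that it renders as visible text instead of being interpreted by the browser or mail client.

```python
import html

def sanitize(user_input: str) -> str:
    """Encode HTML metacharacters so injected markup renders as plain text."""
    return html.escape(user_input, quote=True)

# A hypothetical injected payload a scammer might submit to the form:
payload = '<a href="https://evil.example">Reset your password</a>'
print(sanitize(payload))
# → &lt;a href=&quot;https://evil.example&quot;&gt;Reset your password&lt;/a&gt;
```

After encoding, the recipient sees the raw angle brackets and quotes as text; the link never becomes clickable, which neutralizes the injection.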

Given how easy it is to perform HTML injection, Uber’s clients and employees should keep a close eye out for potential phishing and/or scam attacks.

The post Careful! Uber flaw allows anyone to send an email from uber.com appeared first on Malwarebytes Labs.