IT NEWS

Awful 4chan chat bot spouts racial slurs and antisemitic abuse

“A robot may not injure a human being or, through inaction, allow a human being to come to harm”

Science fiction readers, and many others, will recognize Asimov’s first law of robotics. After reading about a bot called GPT-4chan, I found myself wondering whether we should add another:

“A bot may not insult a human being or, through interaction, allow a human being to be discriminated against”

GPT-4chan was based on an AI instance trained on 3.3 million threads from 4chan’s infamously toxic Politically Incorrect /pol/ board. Once trained, the creator released the chat bot back onto 4chan. And, no surprise here, the AI was just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads.

While many outside the industry may have found the experiment interesting, serious AI researchers commented that it did not qualify as a serious experiment, only as an unethical one.

Déjà vu

Reading the above may cause some people to think they have seen this before. What you may remember reading about is a Microsoft Twitter AI chat bot that went rogue in less than 24 hours. The more someone chats with Tay (the name of the chat bot), said Microsoft, the smarter it gets, learning to engage people through casual and playful conversation.

However, Twitter users quickly proved that artificial intelligence (AI) and machine learning (ML) adhere to the “garbage in, garbage out” law of computer science: they managed to turn Tay into a racist and misogynist in less than a day.

GPT-3

The name GPT-4chan was partly based on the Generative Pre-trained Transformer 3 (GPT-3) language model, which uses deep learning to produce human-like text. In January 2022, OpenAI introduced a new version of GPT-3 that was intended to do away with some of the most toxic issues that plagued its predecessor.

Large language models like GPT-3 use vast bodies of text for training. Often these texts originate from the internet. In these texts they encounter the best and worst of what people put down in words. As such, the training material includes toxic language as well as falsehoods. Filtering out offensive language from the training set can make models perform less well, especially in cases where the training data is already sparse. In its new InstructGPT model, OpenAI tries to align language models with user intent on a wide range of tasks by fine-tuning with human feedback.

Accidental bias

Despite the obvious potential, recent events have exposed how automated systems can both intentionally and unintentionally lead to bias. For example, women see fewer advertisements about entering science and technology professions than men do. Not because companies are preferentially targeting men, but as a side effect of the economics of ad sales.

Simply put, when an advertiser pays for digital ads, including postings for jobs in science, technology, engineering, and mathematics, it is more expensive to get female views than male ones. So the algorithm targets men to maximize the number of eyeballs per dollar spent.

Another well-known example is an algorithm that selected new candidates for a job based on the current population of employees. By doing this, the algorithm amplified the outdated notion that some jobs are predominantly done by men, or by women.

As AI becomes a mandatory strategic tool across multiple industries, companies using AI as part of their strategies need to accept their roles and responsibilities in reducing the risk and impact of bias inherent in their products and services.

Regulation

As you may have guessed, my call for regulation was not a novel idea. In 2020, Google CEO Sundar Pichai stated he felt that AI needed regulation in order to prevent the potential negative consequences of tools including deepfakes and facial recognition. In his mind, this was not a conversation to save for tomorrow while AI tools are being built and deployed today. But by their nature, laws and regulations are mostly created in response to abuse, rather than as a visionary anticipation of what could go wrong.

An ongoing discussion

The responses to the GPT-4chan experiment are another step in an ongoing discussion to determine whether AI and ML are here to save the world or whether they will destroy what’s left of it. This discussion seems pointless. The focus should not be on the product, but on the way in which we use it. As with every new development, we obtain a new tool, which we can wield for good, for evil, or just for profit.

As we pointed out in our 2019 Labs report “When artificial intelligence goes awry: separating science fiction from fact”,

“There’s a crucial period in artificial intelligence’s development—in fact, in any technology’s development—where those bringing this infant tech into the world have a choice to develop it responsibly or simply accelerate at all costs.”

To some, one of the biggest issues with artificial intelligence and machine learning is their impact on the climate. The big issue is that many high-profile ML advances require a staggering amount of computation.

On that note, at best the GPT-4chan experiment was a waste of energy producing the kind of garbage that humanity, unfortunately, does not need help with.

Don’t be like GPT-4chan!

The post Awful 4chan chat bot spouts racial slurs and antisemitic abuse appeared first on Malwarebytes Labs.

Coffee app in hot water for constant tracking of user location

A mobile app violated Canada’s privacy laws with some pretty significant overreach in its tracking of device owners. The violation will apparently not bring the app’s owner, Tim Hortons, any form of punishment. However, the fallout from this incident will hopefully serve as a warning to others with an app soon to launch. That’s one theory, anyway. In reality, this level of data collection is not as uncommon as is being suggested.

The app collects how much data?

It all begins in June 2020, when a reporter finds the Tim Hortons app is going above and beyond what one would expect as a reasonable level of tracking. Despite an FAQ claiming tracking only takes place “with the app open”, reporter James McLeod submits a request under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA). He discovers the app has recorded his longitude and latitude coordinates “more than 2,700 times in less than five months”, and not just when the app was in use.

In fact, he’d never have known this level of tracking was taking place save for a notification saying the app had collected his location. The twist: he hadn’t used the app in hours. This one tiny mobile notification quickly snowballed into the story we have today.

The notification was due to an Android system update giving users the option to limit an app’s access to location information. When people and organisations say it’s a good idea to update your device, this story is a perfect example of why that is.

How can apps collect data?

We’ve previously covered Bluetooth beacons and geofencing on this site. These are a staple diet of Out of Home (OOH) advertising. If you’re unfamiliar with how this technology typically operates, here’s a brief rundown:

  1. You enable Bluetooth on your phone. It’s not a major battery drain, and it’s more useful to mobile users than ever before, so this isn’t a hassle for most people.
  2. Stores you enter may have a Bluetooth beacon which fires out a rapid pulse signal. If you have an app for the store you’re in and have granted it permission to interact, this is where the fun begins. The store can track your movements, and figure out which items you hovered in front of and which you ignored completely. The store can then offer discounts, flash sales, and even optimal item placement based on this data.
  3. Geofencing will help get you to the store in the first place. With the app and its permissions enabled, you may well have adverts sent directly to your phone when driving. You may even experience digital billboard geofence marketing.
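If you’re wondering what a geofence check actually involves, here’s a minimal sketch (the store coordinates and fence radius are made up for illustration): the app compares the device’s reported position against a circular fence around a point of interest, and fires an “event” when the device is inside it.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in metres."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device, fence_centre, radius_m):
    """True when the device's reported position falls within the fence."""
    return haversine_m(*device, *fence_centre) <= radius_m

# Hypothetical 150 m fence around a store in downtown Toronto
store = (43.6532, -79.3832)
print(inside_geofence((43.6535, -79.3830), store, 150))  # a few doors down: True
print(inside_geofence((43.7000, -79.4000), store, 150))  # across town: False
```

A real geofencing platform layers persistence, dwell-time detection, and server-side event reporting on top of a check like this, but the core idea is the same: every location fix the app receives gets tested against a list of fences.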

It’s not just about coffee

The biggest concern here for McLeod wasn’t that the app was tracking him on coffee runs. That was expected behaviour. What really stood out was the kind of deep-dive data collection that was generating “events” everywhere he went and building up a picture of his daily life.

The app, which made use of the geofencing platform Radar, flagged trips in and out of the home. It tried to distinguish between home and office. There was even an event fired for walking past a KFC in Morocco. In fact, the app seemed to spring into life any time McLeod walked past a rival business: McDonald’s, Starbucks, A&W, and more all triggered events.

A spokesperson for Tim Hortons said this was to “tailor marketing and promotional offers” inside the app, and that no data was shared with the other companies. This wouldn’t be enough to avert some pretty serious conclusions drawn from the app investigation.

The investigation findings

Tim Hortons stopped continuous tracking in 2020 after Government investigations began, but there were still concerns over the data collected. Tim Hortons’ contract with a third-party location services supplier allowed for the possibility of selling “de-identified” data. De-anonymisation is a big problem.

Despite explanations from Tim Hortons, the investigation concluded that

“…continual and vast collection of location information was not proportional to the benefits Tim Hortons may have hoped to gain from better targeted promotion of its coffee and other products.”

It also found the app continued collecting large amounts of location data for a year after deciding against using it for targeted advertising, despite there being no need to do so. The four privacy authorities involved recommended Tim Hortons:

  • Delete any remaining location data and direct third-party service providers to do the same;
  • Establish and maintain a privacy management program that: includes privacy impact assessments for the app and any other apps it launches; creates a process to ensure information collection is necessary and proportional to the privacy impacts identified; ensures that privacy communications are consistent with, and adequately explain, app-related practices; and
  • Report back with the details of measures it has taken to comply with the recommendations.

Tim Hortons agreed.

The full findings on this case can be seen in the report here.

Climbing under the fences: Tips for avoiding tracking

There are several ways to avoid or opt-out from tracking which you may feel is overly invasive.

  1. Keep your mobile device up to date. It’s the difference between having basic “on/off” privacy settings or waking up to find you have multiple granular controls for all aspects of app use.
  2. Not using Bluetooth? Turn it off. You won’t enjoy a massive battery bump, but you will go some way towards staying below the beacon radar.
  3. Think carefully about agreeing to GPS permissions for apps. It’s as specific a way to track your movements as can be, and some apps/services save this data online for you to view at a later date. This isn’t great if the service or account is compromised, so always ensure there’s an option to delete historical data. Depending on mobile device or OS, you may have very basic location options or several options tied to different services. It’s well worth taking some time to see what’s in there.
  4. Introduce some security to your mobile ecosystem. Mobile ad blockers and privacy and anonymity tools will all help prevent advertising profiles from being tied to your real-world location and identity. It may not just be the app, but also the other sites, services, and ad networks it plugs into that you have to consider.
  5. Always read the EULA. It’s a pain, but it’s really worth checking out the privacy policies and EULAs of the apps you use. See how they share data, how long information is stored for, and which advertising networks the apps partner with. Of course, this may have limited use considering portions of the Tim Hortons app FAQ were incorrect, but it’s a good way to get up to speed on an app more broadly.

The post Coffee app in hot water for constant tracking of user location appeared first on Malwarebytes Labs.

Rotten apples banned from the App store

Apple’s App Review process may have received ill wishes from many benevolent developers, but Apple has now revealed how effective it is and why it is so stringent.

According to its review of the year 2021, Apple protected customers from nearly $1.5 billion in potentially fraudulent transactions, and stopped over 1.6 million risky and vulnerable apps and app updates from defrauding users.

Bad apples

In 2021, Apple rejected or removed over 835,000 problematic new apps, and an additional 805,000 app updates. Some were removed because they were found to be unfinished or contained bugs that impeded functionality, others because they needed improvements in their moderation mechanisms for user-generated content.

The App Review team also rejected over 343,000 apps for requesting more user data than necessary or mishandling the data they already collected.

To put these numbers in perspective, 107,000 new developers managed to get their apps onto the store. Some of these may have been rejected on earlier occasions, but received a stamp of approval in the end.

Apple infographic showing App store statistics
Image courtesy of Apple

Rotten apples

Over the same year, the App Review team rejected more than 34,500 apps for containing hidden or undocumented features. They also rejected upward of 157,000 apps because they were found to be spam, copycats, or misleading to users, for example, by manipulating them into making a purchase.

Also, Apple removed over 155,000 apps from the App Store because the developers altered the concept or functionality of the app after initially receiving approval. Altering an app after release is a method threat actors can use to try to bypass the App Review process.

Fraudulent accounts

When developer accounts are used for fraudulent purposes, the offending developer’s Apple Developer Program account and any related accounts are terminated.

As a result of these efforts, Apple terminated over 802,000 developer accounts in 2021. Apple rejected an additional 153,000 developer enrollments over fraud concerns, preventing these threat actors from ever submitting an app to the store.

Financial fraud

Using both human and tech review, Apple stopped more than 3.3 million stolen cards from being used to make potentially fraudulent purchases. Nearly 600,000 accounts were banned from ever transacting again. In total, Apple protected users from nearly $1.5 billion in potentially fraudulent transactions in 2021.

User concerns

If users have concerns about an app, they can report it using the Report a Problem feature on the App Store or by calling Apple Support. Developers can use either of those methods, or additional channels such as Feedback Assistant and Apple Developer Support.

As part of the App Review process, any developer who feels they have been incorrectly flagged for fraud may file an appeal to the App Review Board.

Passwords

Apple also announced at its annual Worldwide Developers Conference (WWDC) that it will introduce support for third-party two-factor authentication apps with the built-in Passwords feature in the Settings app.

iOS 16, which is expected to be released in September 2022, will permit users to edit strong passwords suggested by Safari to adjust for site‑specific requirements.

Apple also confirmed it’s bringing support for passkeys in the Safari web browser, a next-generation passwordless sign-in standard that allows users to log in to websites and apps across platforms using Touch ID or Face ID for biometric verification.

Passkeys never leave your device and are specific to the site you created them for, which makes phishing for them almost impossible. The passkey mechanism was established by the FIDO Alliance and is already backed by Google and Microsoft. It aims to replace standard passwords by providing unique digital keys stored locally on the device.

The post Rotten apples banned from the App store appeared first on Malwarebytes Labs.

Hackers can take over accounts you haven’t even created yet

Account hijacking has sadly become a regular, everyday occurrence. But when it comes to hijacking accounts before they are even created? That’s something you’d never think possible—but it is.

Two security researchers, Avinash Sudhodanan and Andrew Paverd, call this new class of attack a “pre-hijacking attack.” Unfortunately, many websites and online services, including high-traffic ones, are not immune to it. In fact, the researchers found that more than 35 of the 75 most popular websites are vulnerable to at least one pre-hijacking attack.

Sudhodanan and Paverd identified five types:

Classic-Federated Merge (CFM)

This exploits a flaw in how two account creation routes interact. Two accounts can be created using the same email address—one normal account by the user (deemed the “classic route”) and one federated identity by the hijacker (deemed the “federated route”)—allowing both to access the account.

This attack is most successful when the user signs in with single sign-on (SSO), so they never change the account password the hijacker set.

Non-Verifying Identity Provider (NV)

This is a mirror image of the CFM attack. Using the same email address, the hijacker creates an account using the classic route while the user takes the federated route. The hijacker then uses an identity provider (IdP) that doesn’t verify ownership of an email address. If the website or online service incorrectly merges the two accounts based on the email address, both hijacker and user will have access to the account.

Unexpired Email Change (UEC)

This exploits a flaw where the website or online service fails to invalidate an email change request when the user resets their password.

The hijacker creates an account with the victim’s email address and then submits a request to change the email address to their own, but doesn’t confirm it. When the victim performs a password reset and starts using the account, the hijacker confirms the still-valid email change, allowing them to assume control of the account.

Unexpired Session (US)

This exploits a flaw in which authenticated users are not signed out of an active account after a password reset.

After creating an account with the victim’s email address, the hijacker keeps the session active using an automated script. Even after the victim creates an account using the same email address and resets the password, the hijacker retains access through the unexpired session.

Trojan Identifier (TID)

This is a combination of CFM and US attacks.

Issues in common

These attacks vary in severity, but they were all caused by websites failing to verify that a supplied identifier actually belongs to the person creating the account before allowing the account to be used.

Many websites and online services do verify, but, as the researchers noted, they do so asynchronously, which improves website usability but unfortunately opens the door to pre-hijacking attempts.

From the report:

“As with account hijacking, the attacker’s goal in account pre-hijacking is to gain access to the victim’s account. The attacker may also care about the stealthiness of the attack, if the goal is to remain undetected by the victim.

The impact of account pre-hijacking attacks is the same as that of account hijacking. Depending on the nature of the target service, a successful attack could allow the attacker to read/modify sensitive information associated with the account (e.g., messages, billing statements, usage history, etc.) or perform actions using the victim’s identity (e.g., send spoofed messages, make purchases using saved payment methods, etc.).”

How account pre-hijacking works

Attackers attempting to pre-hijack must already know some unique identifiers related to the target whose account they want to take over. These identifiers could be an email address, phone number, or other information that can be retrieved via scraping social media accounts or leaked data.

From here, attackers can then use any of the five attack types. Regardless, everything boils down to the hijacker and the user having concurrent access to the same account.

In their case studies, the researchers mentioned a handful of known online brands vulnerable to pre-hijacking attacks. These include Dropbox, Instagram, LinkedIn, WordPress, and Zoom.

Pre-hijacking attacks are preventable

Although the root cause of pre-hijacking attacks stems from weaknesses on the side of the websites and online services, protecting against them is never one-sided.

The researchers advise website and service owners to do the following:

  • Require verification of an email address used in registration to be completed before allowing any features of the website or service to be used. A similar approach must be adopted when using other verification means, such as SMS or automated phone calls.
  • If the website or online service uses an IdP, ensure the IdP performs the verification process or conducts additional verification steps.
  • When a user requests a password reset, the website or service should sign out all active sessions and invalidate all authentication tokens.
  • Set the validity period of change confirmation emails as low as possible. Doing this doesn’t remove the risk of an attack altogether, but it minimizes it.
  • Delete unverified accounts regularly.
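The session and token hygiene in the list above can be sketched in a few lines of Python (a toy model to show the idea, not any real service’s code; all class and method names here are hypothetical): a password reset should clear every active session and discard any pending, unconfirmed email change.

```python
import secrets
import time

class Account:
    """Toy account model illustrating pre-hijacking mitigations."""

    def __init__(self, email):
        self.email = email
        self.email_verified = False
        self.password_hash = None
        self.sessions = set()             # active session tokens
        self.pending_email_change = None  # (new_email, expiry timestamp)

    def new_session(self):
        token = secrets.token_hex(16)
        self.sessions.add(token)
        return token

    def request_email_change(self, new_email, ttl_s=900):
        # Keep the confirmation window short, per the researchers' advice
        self.pending_email_change = (new_email, time.time() + ttl_s)

    def reset_password(self, new_password_hash):
        # Invalidate everything that could let a pre-hijacker back in:
        # live sessions and any unconfirmed email-change request
        self.sessions.clear()
        self.pending_email_change = None
        self.password_hash = new_password_hash
```

With this behaviour, the Unexpired Session and Unexpired Email Change attacks both fail: the hijacker’s kept-alive session and pending email change evaporate the moment the legitimate user resets the password.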

Microsoft has listed some in-depth steps on its website for further mitigation.

Users can also protect themselves from pre-hijacking attacks using multi-factor authentication (MFA) if the website or online service supports this feature.

Stay informed and stay safe!

The post Hackers can take over accounts you haven’t even created yet appeared first on Malwarebytes Labs.

Ransomware Task Force priorities see progress in first year

This blog is part of our live coverage from RSA Conference 2022:

US President Joseph R. Biden Jr., The White House, and law enforcement agencies across the world paid close attention last year when a group of more than 60 cybersecurity experts launched the Ransomware Task Force, heeding the group’s advice on how to defend against ransomware attacks and deny cybercriminals their ill-gotten riches.

Of the Ransomware Task Force’s initial 48 recommendations—published in their report last year—12 have resulted in tangible action, while 29 have resulted in preliminary action, said Philip Reiner, chief executive officer of the Institute for Security and Technology and a member of the Ransomware Task Force.

The progress, while encouraging, is not the end, Reiner said.

“Not enough has been done,” Reiner said. “There is still a great deal of work that remains to be done on this front to blunt the trajectory of this threat.”

At RSA Conference 2022, Reiner moderated a panel of other Ransomware Task Force members which included Cyber Threat Alliance President and CEO Michael Daniel, Institute for Security and Technology Chief Strategy Officer Megan Stifel, and Resilience Chief Claims Officer Michael Phillips. The four discussed how separate levels of the government responded and acted on the five priority recommendations made by the Ransomware Task Force last year.

In short, many promising first steps have been made, the panelists said.

“Look at what the US government has done in the past year—the impressive speed at which [they’ve] organized and focused on the ransomware threat,” Daniel said. “Everything from presidential statements, to work in the international area, to convening a ransomware task force inside the government to start working on this issue.”

He continued: “I think it’s clear that governments are really engaged in this issue in a way that they weren’t just a couple of years ago.”

Last year, governments across the world collaborated on taking down ransomware threat actors. In June 2021, Ukrainian law enforcement worked with investigators from South Korea to arrest members affiliated with the Clop ransomware gang, and months later, members of the FBI, the French National Gendarmerie, and the Ukrainian National Police arrested two individuals—and seized about $2 million—from an unnamed ransomware group.

Around the same time as the undisclosed arrests, President Biden traveled to Switzerland to speak at a cybersecurity summit that was also attended by Russian President Vladimir Putin. When the two met, Biden reportedly told Putin that the United States was willing to take “any necessary action” to defend US infrastructure. The US President’s statement came shortly after the ransomware attack on Colonial Pipeline, attributed to the cybercriminal group DarkSide, which is believed to be located in Russia.

“I’m gonna be meeting with President Putin and so far there is no evidence, based on our intelligence people, that Russia is involved,” President Biden said of the attack at the time, according to reporting from the BBC. But, Biden added, “there’s evidence that the actors’ ransomware is in Russia—they have some responsibility to deal with this.”

Separately, Stifel from the Institute for Security and Technology welcomed recent developments—which may take many more years to solidify—to create a standardized format and timeline for companies and organizations to report ransomware attacks.

“It will be some time, and some of you may be retired by the time it’s in place,” Stifel said, “but it’s there. You have to start somewhere.”

The panelists also acknowledged recent government efforts to appropriate cybersecurity recovery and response funds in the latest infrastructure bill. While the Ransomware Task Force specifically asked for funds for ransomware recovery and response, a broad package of millions of dollars for overall cybersecurity events is still considered a win.

One underdeveloped priority area that every panelist stressed was the need for faster, more accurate data on ransomware attacks and recovery costs. Without a centralized database—and without a requirement to report both attacks and ransom payments—the government and cybersecurity companies are working with limited information.

The panelists also lamented the difficulties posed in trying to remove safe havens for ransomware actors. As the governments that already provide cover for ransomware groups have little to no impetus to change their positions, it’s up to global governments to start working together.

“I can see the US government trying to, internationally, build a coalition of countries—not just US agencies, but multiple agencies across multiple jurisdictions at the same time,” Daniel said.

He continued: “This threat has become so large that no government can really just ignore it.”

The post Ransomware Task Force priorities see progress in first year appeared first on Malwarebytes Labs.

A week in security (May 30 – June 5)

Last week on Malwarebytes Labs:

Stay safe!

The post A week in security (May 30 – June 5) appeared first on Malwarebytes Labs.

FBI warns of scammers soliciting donations for Ukraine

The FBI recently issued an announcement about a fraudulent scheme that proves there is no low that’s too low for scammers.

“Criminal actors are taking advantage of the crisis in Ukraine by posing as Ukrainian entities needing humanitarian aid or developing fundraising efforts, including monetary and cryptocurrency donations,” the FBI said.

Scammers have always followed where the money is, even if that money is for aiding those most in need. In this case, fraudsters have banked on the widespread sympathy for Ukraine as a way to make a buck.

Malwarebytes Labs has seen its fair share of Ukraine charity-centric scam sites popping up.

Days after Russia invaded Ukraine, we spotted a spam campaign titled “Donate to Help Children in Ukraine.” Apart from a stretched Ukrainian flag as the email header, there is almost nothing you can criticize about the email itself, as the usual red flags are missing.

A month later, fundraising scams were all over the place. We weren’t surprised to see phishers and scammers leading the pack when it comes to registering domains with “Ukraine” in them, as reported by Tessian. The company noted a 210 percent increase in such domain registrations compared to last year, with 77 percent of them appearing suspicious based on early indicators.

In late April, our Threat Intelligence team spotted a fake USA for UNHCR (United Nations High Commissioner for Refugees) website, which was part of a phishing campaign that started as a spam email using a spoofed address, calling on recipients to donate to Ukraine. The fake site asks for a potential donor’s full name, email address, and country of residence. Unlike its legitimate counterpart, this fake site also wants you to donate bitcoins.

The FBI listed some tips so users can protect themselves against such scams:

  • Be suspicious of emails, SMS messages, and social media posts from organisations encouraging you to donate. (You can check them against a database of legitimate charities, with their actual URLs.)
  • If a donation site asks you to donate in cryptocurrency, double-check the wallet address against official cryptocurrency wallets before donating.
  • Never reply to correspondence from anyone purporting to be a Ukrainian entity asking for humanitarian aid.

Lastly, if you think you have been a scam victim, file a report with the FBI’s Internet Crime Complaint Center (IC3).

Stay safe!

The post FBI warns of scammers soliciting donations for Ukraine appeared first on Malwarebytes Labs.

Microsoft Autopatch is here…but can you use it?

Updating endpoints on a network can be a daunting task. Testing before rollout can take time. Delays to patches going live can cause all manner of headaches. Windows Autopatch aims to tackle some of these issues, and is now live for public preview. The release comes with a few caveats which you’ll want to keep in mind.

Fixing a patchy experience

First announced in April and slated for general release come July, Windows Autopatch is designed to free stressed sysadmins from some of the heavy lifting around updates. Billed as a managed service available to (some) users of Microsoft products, the software giant had this to say about it:

The development of Autopatch is a response to the evolving nature of technology. Changes like the pandemic-driven demand for increased remote or hybrid work represent particularly noteworthy moments but are nonetheless part of a cycle without a beginning or end. Business needs change in response to market shifts.

This service will keep Windows and Office software on enrolled endpoints up-to-date automatically, at no additional cost. IT admins can gain time and resources to drive value. For organizations who select this option, the second Tuesday of every month will be ‘just another Tuesday’.

This automated patching setup is complemented by four so-called “testing rings”, a way of dividing up all of an organisation’s devices to allow for efficient testing and updating. The smallest is the initial “test ring”, which has an unspecified minimum number of devices. It’s followed by the “first”, “fast”, and “broad” rings, which comprise 1%, 9%, and 90% of devices under management respectively.

Assuming all is well after a validation period in one of the rings, the updates filter out to the next ring for more testing. All the while, performance is monitored to ensure everything works at least as well as it did pre-update.

The result, according to Microsoft, is a “rollout cadence that balances speed and efficiency, optimising product uptime”.
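Microsoft hasn’t published the mechanics, but the ring split described above can be sketched as a simple partition of a device inventory (the minimum test-ring size here is an assumption, since Microsoft doesn’t specify one, and applying the 1%/9%/90% split to the remaining devices is one plausible reading):

```python
def autopatch_rings(devices, test_ring_size=5):
    """Partition a device inventory into test/first/fast/broad rings."""
    test, rest = devices[:test_ring_size], devices[test_ring_size:]
    first_n = round(len(rest) * 0.01)   # 1% of remaining devices
    fast_n = round(len(rest) * 0.09)    # 9%
    return {
        "test": test,
        "first": rest[:first_n],
        "fast": rest[first_n:first_n + fast_n],
        "broad": rest[first_n + fast_n:],  # the remaining 90%
    }

fleet = [f"device-{i:04d}" for i in range(1005)]
rings = autopatch_rings(fleet)
print({name: len(ring) for name, ring in rings.items()})
# {'test': 5, 'first': 10, 'fast': 90, 'broad': 900}
```

Updates would then promote from one ring to the next only after the validation period in the previous ring passes cleanly.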

But not without caveats

It would be unrealistic to think all networks and devices can simply switch on this new service. Indeed, there’s quite a list of requirements to meet before you can get anywhere near this process. There are no hardware requirements, though you can’t use it in conjunction with a “bring your own device” (BYOD) policy.

From Microsoft’s blog:

Intune only:

  • Azure Active Directory (Azure AD)
  • Microsoft Intune
  • Windows 10/11 supported versions

Co-management:

  • Hybrid Azure AD-Joined or Azure AD-joined only
  • Microsoft Intune
  • Configuration Manager, version 2010 or later
  • Switch workloads for device configuration, Windows Update and Microsoft 365 Apps from Configuration Manager to Intune (min Pilot Intune)
  • Co-management workloads

What are the licensing requirements for Windows Autopatch?

  • Windows 10/11 Enterprise E3 and up
  • Azure AD Premium (for co-management)
  • Microsoft Intune (includes Configuration Manager, version 2010 or greater via co-management)

Not a magic fix for everything

Patching is incredibly important to the well-being of your network and devices. However, as useful as Autopatch will no doubt be, it can’t fix everything. Sometimes vulnerabilities like the Follina zero-day emerge with no patch immediately available. When that happens, you need workarounds, mitigations, and defence in depth.

Security tools and smart security practices by device users are two additional ways to keep compromise at bay until updates are released. If you’ve been waiting for Microsoft Autopatch since it was first announced, stay tuned for upcoming Microsoft announcements. Just keep those caveats, and your security setup, in mind should you make the leap.

The post Microsoft Autopatch is here…but can you use it? appeared first on Malwarebytes Labs.

RSA 2022: Prometheus ransomware’s flaws inspired researchers to try to build a near-universal decryption tool

Prometheus—a ransomware build based on Thanos that locked up victims’ computers in the summer of 2021—included a major “vulnerability” that led security researchers at IBM to try to build a one-size-fits-all ransomware decryptor that could work against multiple ransomware variants, including Prometheus, AtomSilo, LockFile, Bandana, Chaos, and PartyTicket.

Though the IBM researchers managed to undo the work of multiple ransomware variants, the panacea dream decryptor never materialized.

IBM global head of threat intelligence Andy Piazza said that the team’s efforts revealed that even though some ransomware families can be reverse-engineered to develop a decryption tool, no company should rely on decryption itself as a response to a ransomware attack.

“Hope is not a strategy,” Piazza said at RSA Conference 2022, held in San Francisco in person for the first time in two years.

IBM security researcher Aaron Gdanski, who was aided by security researcher Anne Jobman, said his interest in building a Prometheus decryption tool began after one of IBM Security’s clients was hit with the ransomware. He began by trying to understand the ransomware’s behavior: Did it persist in the environment? Did it upload files anywhere? And how, specifically, did it generate the keys that were used to encrypt files?

By using the DS-5 debugger and disassembler, Gdanski found that Prometheus’ encryption algorithm relied on both “a hardcoded initialization vector which did not change between samples” and the uptime of the computer. Gdanski also learned that Prometheus created its seeds by relying on a random number generator that, by default, used Environment.TickCount.

These discoveries revealed a key vulnerability in Prometheus, Gdanski said. If he could find when Prometheus encrypted files on the system, he could likely regenerate the same seed that Prometheus had used for encryption.

“If I could obtain the seed at the time of encryption, I could use the same algorithm Prometheus did to regenerate the key it uses,” Gdanski said.

Equipped with the boot time of an affected machine and the recorded timestamp on an encrypted file, Gdanski had a starting point to narrow down his search. After some additional calculations, he generated a candidate seed the same way Prometheus would have and tested it on portions of encrypted files.

With some fine-tuning, Gdanski’s work paid off.

Gdanski also learned, though, that the seed changed depending on the time when a file was encrypted. That meant that one single decryption key would not work, but by sorting the encrypted files by the last write time on the machine, he was able to slowly build a series of seeds that could be used for decryption.

The success, Gdanski said, could be applied to other ransomware families that similarly relied on flawed random number generators.

“Any time a non-cryptographically secure random number generator is used, you’re probably able to recreate a key,” Gdanski said.

But Gdanski emphasized that, in his experience, this flaw is rare. As Piazza reiterated, the best defence against ransomware isn’t hoping that the ransomware involved in an attack has a sloppy implementation—it’s preventing a ransomware attack before it happens.

For the latest on current ransomware activity, read our May ransomware review here. You can also read about some lessons from the real-life ransomware attack on Northshore School District here.


Tor’s (security) role in the future of the Internet, with Alec Muffett

Tor has a storied reputation in the world of online privacy. The open-source project lets people browse the Internet more anonymously by routing their traffic across different nodes before making a final connection between their device and a desired website. It’s something we’ve discussed previously on Lock and Code, and something that, sometimes, gets a bad reputation because of its relationship to the “dark web.”

But for all the valid discussion about online anonymity, encryption, and privacy, Tor has an entirely different value proposition for people who build and maintain websites, and that is one of security. As explained by our guest Alec Muffett on today’s episode of Lock and Code, hosted by David Ruiz, utilizing Tor can provide organizations with an entirely separate networking stack. And this isn’t just a boon for networking diversity, but also security, Muffett explains.

Under our current system that relies on TCP/IP and HTTP (and increasingly HTTPS), whenever a user types a URL into an address bar in their web browser, multiple security risks are present. A user’s traffic can be intercepted, redirected to another server, routed through another country and surveilled, and, as Muffett explained, for website operators, their DNS servers can be tampered with.

“There are so many security risks up the stack,” Muffett said. “Whereas with onion networking, with Tor networking, the thing that you type into the web browser bar is the cryptographic key of the website that you want to talk to.”

Muffett continued:

“It’s from you to them, end-to-end secure.”

Today, on the Lock and Code podcast, we speak with Muffett about the security benefits of onion networking, why an organization would want to launch an onion site for its service, and whether every site in the future should utilize Tor.
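Muffett’s point that the thing you type into the address bar “is the cryptographic key of the website” is literal for v3 onion services: under Tor’s rendezvous specification, the address is the service’s 32-byte ed25519 public key plus a two-byte checksum and a version byte, base32-encoded. A minimal sketch of that derivation:

```python
# Deriving a v3 .onion address from a service's ed25519 public key,
# following the encoding in Tor's rend-spec-v3:
#   address = base32(pubkey || checksum || version) + ".onion"
#   checksum = SHA3-256(".onion checksum" || pubkey || version)[:2]
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Encode a 32-byte ed25519 public key as a v3 onion address."""
    assert len(pubkey) == 32, "expects a raw 32-byte ed25519 public key"
    version = b"\x03"
    checksum = hashlib.sha3_256(
        b".onion checksum" + pubkey + version
    ).digest()[:2]
    raw = pubkey + checksum + version  # 35 bytes -> 56 base32 characters
    return base64.b32encode(raw).decode().lower() + ".onion"
```

Because the address encodes the key itself, a client that reaches the right address holds the right key by construction; there is no DNS record or CA certificate in the path to tamper with, which is the security property Muffett highlights.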

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Show notes, resources, and credits:

Why and How you should start using Onion Networking

How WhatsApp uses metadata analysis for spam and abuse fighting

Alec Muffett’s blog and about page

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
