IT NEWS

Android 14 developer preview highlights multiple security improvements

Android developers have been given a taste of what’s to come in the next big release, with Android 14 on the horizon. The developer preview is a great way for those most familiar with the mobile operating system to see which changes they’ll enjoy and which they’ll have to endure.

As it happens, there are quite a few security changes in the pipeline, and developers will now be busy testing their apps. Not only are alarm permissions, system broadcasts, and language support changing, but a wealth of security features will help ensure your device is as safe as can be.

Out with the old, in with the new

The biggest change is that old apps are on the way out: after a certain point, you will no longer be able to install them. Changes like this have been threatened for some time now, so developers will surely have seen this coming.

As Ars Technica notes, the current backward compatibility system allows older apps to install, at the risk of malware developers simply targeting older Android versions.

Android 14, on the other hand, is simply bringing down the curtain and those old apps won’t install anymore. Now, don’t panic too much. Your favourite apps are almost certainly safe, especially if you’re still using them on a daily basis because there’s a strong chance they’re still being maintained and updated.

By “old apps”, we’re talking Android 6 and earlier, a grand total of eight-plus years. Given that apps not updated for two years have already started to be hidden from view on the Play store, there is only a tiny chance you’ll be impacted by this. If you’re a side-loading hobbyist with a passion for ancient apps, then maybe this could cause you a few headaches. For most people, this is one of those changes you simply won’t notice. It’s a smart piece of housecleaning by Google, and one which makes sense.

The first of what could be many security changes

Additional security features and alterations mentioned on the Android Developers Google blog are as follows:

Safer dynamic code loading

Dynamic code loading (DCL) opens the door to malware and exploits, since dynamically downloaded executables can be unexpectedly manipulated, causing code injection. Apps targeting Android 14 will require dynamically loaded files to be marked as read-only.
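On Android, the loaded artifact would be a DEX or JAR handed to a class loader; the sketch below uses plain Java (file names and paths are invented for illustration) just to show the marking step: make the downloaded file read-only before loading it, so it can’t be swapped or patched in between.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class ReadOnlyDcl {
    // Mark a dynamically downloaded code file read-only before loading it,
    // so it can't be modified between download and execution.
    static File prepareForLoading(File codeFile) throws IOException {
        if (!codeFile.setReadOnly()) {
            throw new IOException("Could not mark " + codeFile + " read-only");
        }
        return codeFile;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a downloaded plugin; on Android this would be a DEX/JAR
        // passed to a class loader after the read-only marking.
        File f = File.createTempFile("plugin", ".jar");
        Files.write(f.toPath(), new byte[]{0x50, 0x4b}); // placeholder content
        prepareForLoading(f);
        System.out.println(f.canWrite() ? "writable" : "read-only");
    }
}
```

On Android itself the equivalent step happens before handing the file to a class loader; the enforcement details are up to the platform, so treat this as a sketch of the idea rather than the actual API.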

Anything that hampers malware authors from injecting malicious code can only be a good thing, so this is welcome.

Runtime receivers

Apps targeting Android 14 must indicate if dynamic Context.registerReceiver() usage should be treated as “exported” or “unexported”, a continuation of the manifest-level work from previous releases.

Safer implicit intents

To prevent malicious apps from intercepting intents, apps targeting Android 14 are restricted from sending intents internally that don’t specify a package.
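Android’s real Intent API is far richer than this, but the toy router below (all names invented) illustrates why requiring an explicit package closes the interception window: an implicit message can be picked up by any app registered for the action, while an explicit one reaches only the named package.

```java
import java.util.ArrayList;
import java.util.List;

public class IntentRouterSketch {
    // A toy "intent": an action string plus an optional explicit target package.
    record Msg(String action, String targetPackage) {}
    // A toy receiver: belongs to a package and declares whether it is exported.
    record Receiver(String pkg, String action, boolean exported) {}

    static List<String> deliver(Msg msg, List<Receiver> receivers) {
        List<String> delivered = new ArrayList<>();
        for (Receiver r : receivers) {
            if (!r.action().equals(msg.action())) continue;
            if (msg.targetPackage() == null) {
                // Implicit send: any exported receiver for this action sees it,
                // including a malicious eavesdropper registering the same action.
                if (r.exported()) delivered.add(r.pkg());
            } else if (r.pkg().equals(msg.targetPackage())) {
                delivered.add(r.pkg()); // explicit send: only the named package
            }
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Receiver> rs = List.of(
            new Receiver("com.example.bank", "SHOW_BALANCE", false),
            new Receiver("com.evil.sniffer", "SHOW_BALANCE", true));
        // Implicit: the eavesdropper receives it; the intended internal
        // (non-exported) receiver does not.
        System.out.println(deliver(new Msg("SHOW_BALANCE", null), rs));
        // Explicit: only the intended package receives it.
        System.out.println(deliver(new Msg("SHOW_BALANCE", "com.example.bank"), rs));
    }
}
```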

Both of the above are about locking down data from other apps or the system itself. As Bleeping Computer points out, rogue apps and other malicious actors on an Android device will have a much harder time trying to intercept pieces of information intended for somewhere else.

A full rundown of what to expect (so far) can be seen on the official site. It’s worth noting that a complete reveal of what Android 14 will contain is not likely for some time yet, so all of this is subject to change to some degree.

Keeping your phone safe

This is all good news for the future, but what can you do in the here and now to keep harm at arm’s length?

  • Update your apps, and your device. Keep your Android device up to date, and allow your apps to update automatically. Depending on your version, you should be able to tell your device to update when connected to your Wi-Fi as opposed to taking a bite out of your roaming data.
  • Enable your lock screen. Whether you’re using a pattern, a PIN, a password, or even your thumbprint, it should be at the top of your list. Pay attention to how long your phone is unlocked before it reverts to the lock screen. This is a valuable window for criminals should you leave your phone unattended.
  • Enable the “find my phone” service. This, combined with the lock screen, will help keep your data safe in cases where you need to delete all data from the device remotely.
  • Install security tools on your phone. This will help provide you with maximum protection from rogue links, bad apps, and more.
  • Only download from official stores. Bad things do end up on there, but it’s still safer than allowing installations from unknown sources via third party websites.
  • Always read the reviews. You’ll not only gain insight into what the app is doing, but you’ll also make sure that the apps you’re using are still supported. This will keep you away from potentially exploitable software which has long since fallen into disrepair. Remember: there’s a two-year expiry before abandoned apps are hidden from the Play store, so not everything put out to pasture will vanish right away.

Stay safe out there!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

One in nine online stores are leaking your data, says study

eCommerce security company Sansec has revealed it’s found a number of online stores accidentally leaking highly sensitive data.

After studying 2,037 online stores, the company found that 12.3 percent exposed compressed files (in ZIP, SQL, and TAR archive formats) in public-facing web folders that require no authentication. BleepingComputer noted that these appear to be private backups containing master database passwords, confidential store admin URLs, full customer data (PII, or personally identifiable information), and internal API keys.

The Sansec Threat Research group also found multiple attack patterns coming from various IPs, suggesting that a number of threat actors have known about this online store flaw and are working to exploit it.

In a post, the researchers said:

“We have observed automated attacks against online stores, where thousands of possible backup names are tried over the course of multiple weeks. Because these probes are very cheap to run and do not affect the target store performance, they can essentially go on forever until a backup has been found.”

Sansec urges online store owners to make sure they aren’t leaking sensitive data. Start by checking whether backup files are open to the public internet; if they are, close them immediately and investigate the store for any signs of compromise. The company recommends the following steps to site owners in the event of accidental exposure:

  • Check server logs for signs of backup file downloads.
  • Check for unauthorized admin accounts.
  • Change all passwords.
  • Implement two-factor authentication (2FA).
  • Ensure the remote database admin panel isn’t showing up on the public internet.
  • Run an eCommerce malware scanner.
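The first item on that list, checking server logs, can be as simple as scanning the access log for successful downloads of archive files. A minimal sketch (the log format and file extensions here are assumptions; adjust the pattern to your own server’s log layout):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class BackupLogScan {
    // Matches a common-log-format line whose request fetched an archive
    // (.zip/.sql/.tar/.tar.gz) and got a 200 (success) response.
    static final Pattern HIT = Pattern.compile(
        "\"GET\\s+(\\S+\\.(?:zip|sql|tar(?:\\.gz)?))\\s+HTTP/[\\d.]+\"\\s+200\\b");

    static List<String> suspiciousLines(List<String> accessLog) {
        return accessLog.stream()
                .filter(line -> HIT.matcher(line).find())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Illustrative log lines; only the successful backup download is flagged.
        List<String> log = List.of(
            "203.0.113.9 - - [10/Feb/2023] \"GET /backup.zip HTTP/1.1\" 200 8123456",
            "203.0.113.9 - - [10/Feb/2023] \"GET /index.php HTTP/1.1\" 200 5120",
            "198.51.100.7 - - [10/Feb/2023] \"GET /db.sql HTTP/1.1\" 404 312");
        suspiciousLines(log).forEach(System.out::println);
    }
}
```

A burst of 404s against names like `backup.zip` or `db.sql`, as in the probing Sansec describes, is also worth flagging; the filter above is easily loosened to catch those too.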

Lastly, to avoid creating accidental data leaks on online shops, Sansec advises owners to deploy store code on a read-only file system, schedule frequent backing up of files, restrict access to backup files, and start monitoring for online data exposure.



French law to report cyberincidents within 3 days to become effective soon

The pressure on victims of cybercrime to notify authorities in a timely manner is increasing from many sides and for multiple reasons.

On January 24, 2023, France passed a law (Article L12-10-1 of the Insurance Code) requiring victims of cybercrime to report the incident within 72 hours of discovery if they want to be eligible for insurance compensation for losses and damages caused by the attack. In accordance with French law, these provisions come into force three months after the law’s announcement, making the effective date April 24, 2023.

Earlier, we saw a proposal from the Securities and Exchange Commission (SEC) to amend Form 8-K to require registrants to disclose information about a material cybersecurity incident within four business days after the registrant determines that it has experienced a material cybersecurity incident. Form 8-K is known as a “current report” and it is the report that companies must file with the SEC to announce major events that shareholders should know about.

In the take-down of the Hive ransomware group the international law enforcement agencies stressed how crucial it was that victims filed timely reports about the cybercrimes committed against them.

Cyber liability insurance

Cyber liability insurance is a type of insurance policy that protects businesses from the expenses incurred as a result of a data breach, including stolen or damaged intellectual property.

It may strike us as weird that insurance compensation is what’s at stake here, but it’s incorporated in US law as well. The Cybersecurity Information Sharing Act, passed in 2015, was one of the first federal laws to enable sharing of information on cyberincidents. And many states have enacted their own legislation to address cyberrisks in depth, from exclusions to penalties.

Every US state has a Data Breach Notification law that must be complied with when a certain number of consumers’ personally identifiable information (PII) is compromised. A few states have added requirements specifically for insurers to notify the state’s insurance department.

Insurance companies

The amount of money involved in cyberincidents is enormous. Cybercrime costs include damage and destruction of data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud, post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm.

Covering risks that amount to trillions of dollars per year is not something insurance companies take on lightly. Their clients have to show they were careful, protected, and diligent to be eligible for compensation. And now they will also have to file a timely report.

Definitions

The French law has been criticized because some of the key definitions in the law are unclear or at least need further specification.

What a cyberincident is, in the context of this law, is rather clear, though. It covers fraudulently accessing or remaining in all or part of an automated data processing system, with the stipulation that the punishments are higher when the result is either the deletion or modification of data contained in the system, or an alteration of the functioning of that system.

The responsible authority, however, is unclear. Is it law enforcement, through the Ministry for the Interior’s general crime reporting portal, or does it depend on the nature of the crime? Time will tell. (This lack of clarity is also the norm in the United States, where some laws go into effect without a clear model for how they will be enforced.)

Another point of discussion is the phrase “72 hours after discovery”. Is that 72 hours after your log files show signs of unauthorized access, or 72 hours after your staff determined with certainty that it was indeed a security incident?

The details will undoubtedly be hammered out, but until then it seems prudent to err on the safe side.

Money talks

According to Malwarebytes security evangelist and ransomware expert Mark Stockley this kind of legislation could make a difference.

“Why? Because money talks and the foundation of combatting the ransomware problem is understanding it. That requires victims to come forward and report it. Timely reporting allows us to understand the big picture, but it also gives law enforcement the best chance to learn about the tools, techniques and practices of the attackers, and to share what they’ve learned.”

This is likely one of the reasons behind this law. As Mark continued to explain:

“Failure to report can cause serious problems: In 2017, one of the early ‘big game’ ransomware gangs, SamSam, was widely reported to be targeting government and healthcare institutions, because it seemed to attack them much more often. It later transpired that it didn’t attack them more often at all, but the government and healthcare sectors were much more likely to report an attack.”

If this new law turns out to help fight cybercrime, you can be sure that similar types of regulation will follow suit around the globe.

Communication

This new law will also affect the playbook by which an organization acts after identifying a breach. In most cases the investigation by internal or external experts will not have finished by the time you have to disclose that there has been an incident. This likely means adding an extra step to your communications: an initial notification that tells at least the responsible authority that something has happened. Depending on who that authority is and what form the notification must take, others may have to be briefed as well.

A later communication can then disclose the details about what happened, how it could happen, and what the possible consequences are. But that type of information typically requires more investigation than you’ll be able to gather in three days.



Consent to gather data is a “misguided” solution, study reveals

When researchers from the University of Pennsylvania’s Annenberg School for Communication conducted a survey to see if “informed consent” practices are working online with regard to user data gathering, the results revealed weaknesses in a framework that, for decades, has served as the basis for online privacy regulation in the US. This framework, commonly known as “notice of consent,” usually allows organizations to freely collect, use, keep, share, and sell customer data provided they inform customers about their data-gathering practices and get their consent. However, as the New York Times noted, the survey results add another voice to “a growing body of research suggesting that the notice-of-consent approach has become obsolete.”

“Informed consent is a myth”

The report, entitled “Americans Can’t Consent to Companies’ Use of Their Data,” contains the results, expert analyses, and interpretation of survey results. The authors not only highlight the gap in American users’ knowledge of how companies use their data, but also reveal users’ deep concern about the consequences of that use, coupled with a feeling of powerlessness to protect it. Believing they have no control over their data, and that trying would be pointless, is what the authors call “resignation,” a concept they introduced in 2015 in the paper “The Tradeoff Fallacy.”

As the Annenberg School report said:

“High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data.”

The authors define genuine consent as people having “knowledge about commercial data-extraction practices as well as a belief they can do something about them.” The survey finds that Americans have neither.

“We find that informed consent at scale is a myth, and we urge policymakers to act with that in mind,” the report said.

The New York Times noted a handful of regulators agreeing with the report’s findings.

“When faced with technologies that are increasingly critical for navigating modern life, users often lack a real set of alternatives and cannot reasonably forgo using these tools,” said Lina M. Khan, chair of the Federal Trade Commission, in a speech last year.

Digital consent has had critics since as early as 1999, meaning its weaknesses have remained unaddressed for almost 25 years. Paul Schwartz, a professor at the University of California and author of the paper “Privacy and Democracy in Cyberspace,” warned that consent given via privacy policy notices was “unlikely to be either informed or voluntarily given.” The notices were “meaningless,” he said: most people ignore them, they are written in vague, legalistic language that very few people understand, and they “fail to present meaningful opportunities for individual choice.”

Neil Richards and Woodrow Hartzog, authors of the paper “The Pathologies of Digital Consent,” strengthen this argument by identifying a form of consent they call “unwitting consent,” which occurs when people do not really understand “the legal agreement,” “the technology being agreed to,” or “the practical consequences or risks of agreement.” Previous work by two of the study’s authors also shows people misunderstanding the term “privacy policy,” believing it to be a promise that the company asking for consent will protect the privacy of the person giving it.

Robert Levine’s argument parallels that of Richards and Hartzog. He argues that people must have understanding and autonomy before they can make informed choices. That is, a person must understand the corporate practices and policies (including legal protections) surrounding the data that companies want to gather about users. A person must also believe that companies give them the freedom to decide whether, and when, to give up their data, Levine said. If either of these isn’t satisfied, the consent to data collection “is involuntary, not free, and illegitimate.”

‘F’ for Fail

The study presupposes that in order to give consent, US consumers must satisfy two conditions: they must be informed about what is going to happen to their data, and they must have the ability to give (or withdraw) consent. To test these, 2,000 US survey participants were given a set of 17 basic true/false questions about internet practices and policies, with “I don’t know” available as a middle option.

The overall survey results are worrying.


A majority (77 percent) of survey takers got nine or fewer of the 17 questions right, which could be interpreted as an ‘F’ grade. Only one participant got an ‘A’, scoring 16 correct answers. Below are the most notable insights from the results:

  • Only around 1 in 3 Americans know it is legal for an online store to charge people different prices depending on where they are located.
  • More than 8 in 10 Americans believe, incorrectly, that the federal Health Insurance Portability and Accountability Act (HIPAA) stops apps from selling data collected about app users’ health to marketers.
  • Fewer than 1 in 3 Americans know that price-comparison travel sites such as Expedia or Orbitz are not obligated to display the lowest airline prices.
  • Fewer than half of Americans know that Facebook’s user privacy settings allow users to limit some of the information about them shared with advertisers.

Furthermore, 80 percent of Americans believe Congress must act urgently to regulate how companies use personal information. Joseph Turow, one of the authors of the study, worries though that the longer the government waits to enforce change, the more difficult it will be to control user data.

“For about 30 years, big companies have been allowed to shape a whole environment for us, essentially without our permission,” Turow said. “And 30 years from now, it might be too late to say, ‘This is totally unacceptable.'”



What is AI good at (and what the heck is it, actually), with Josh Saxe: Lock and Code S04E04

In November of last year, the AI research and development lab OpenAI revealed its latest, most advanced language project: A tool called ChatGPT.

ChatGPT is so much more than “just” a chatbot. As users have shown with repeated testing and prodding, ChatGPT seems to “understand” things. It can give you recipes that account for whatever dietary restrictions you have. It can deliver basic essays about moments in history. It can be used (and has been) to cheat by university students who are giving new meaning to plagiarism, passing off work that is not theirs. It can write song lyrics about X topic as though composed by Y artist. It can even have fun with language.

For example, when ChatGPT was asked to “Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR,” ChatGPT responded in part:

“And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying ‘Oh Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’”

Is this fun? Yes. Is it interesting? Absolutely. But what we’re primarily interested in on today’s episode of Lock and Code, with host David Ruiz, is where artificial intelligence and machine learning (ChatGPT included) can be applied to cybersecurity, because as some users have already discovered, ChatGPT can be used with some success to analyze lines of code for flaws.

It is a capability that has likely further energized the multibillion-dollar endeavor to apply AI to cybersecurity.

Today, on Lock and Code, we speak to Josh Saxe about what machine learning is “good” at, what problems it can make worse, whether we have defenses to those problems, and what place machine learning and artificial intelligence have in the future of cybersecurity. According to Saxe, there are some areas where, under certain conditions, machine learning will never be able to compete.

“If you’re, say, gonna deploy a set of security products on a new computer network that’s never used your security products before, and you want to detect, for example, insider threats — like insiders moving files around in ways that look suspicious — if you don’t have thousands of known examples of people at the company doing that, and also examples of people not doing that, that are current and likely to reoccur in the future, machine learning is just never going to compete with just manually writing down some heuristics around what we think bad looks like.”

Saxe continued:

“Because basically in this case, the machine learning is competing with the common sense model of the world and expert knowledge of a security analyst, and there’s no way machine learning is gonna compete with the human brain in this context.”

Tune in today

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Jailbreaking ChatGPT and other large language models while we can

The introduction of ChatGPT launched an arms race between tech giants. The rush to be the first to incorporate a similar large language model (LLM) into their own offerings (read: search engines) may have left open a lot of opportunities to bypass the restrictions designed to curb bias, privacy problems, and trouble with abstract concepts or lack of context.

Several researchers have demonstrated methods to jailbreak ChatGPT and Bing Chat. By jailbreaking, we mean that they were able to bypass the restrictions laid out by the developers.

Large language models

ChatGPT relies on a subset of machine learning called large language models (LLMs). At its base is an artificial intelligence (AI) that can be spoken to in natural language on a wide variety of topics.

LLMs are huge deep neural networks trained on billions of pages of written material in a particular language, with the aim of performing a specific task such as predicting the next word(s) or sentences.
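At its simplest, “predicting the next word” means learning from the training text which word most often follows the current one. The toy bigram model below captures that core idea in a lookup table; real LLMs do this with neural networks over tokens, at vastly larger scale, so treat this purely as an intuition pump.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class BigramSketch {
    // "Training": count, for each word, how often each following word occurs.
    static Map<String, Map<String, Integer>> train(String corpus) {
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        String[] words = corpus.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            counts.computeIfAbsent(words[i], k -> new HashMap<>())
                  .merge(words[i + 1], 1, Integer::sum);
        }
        return counts;
    }

    // "Inference": predict the most frequent continuation seen in training.
    static String nextWord(Map<String, Map<String, Integer>> counts, String word) {
        Map<String, Integer> followers = counts.get(word.toLowerCase());
        if (followers == null) return null; // never seen this word
        return Collections.max(followers.entrySet(),
                               Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        var model = train("the cat sat on the mat the cat ate the fish");
        System.out.println(nextWord(model, "the")); // "cat" follows "the" most often
    }
}
```

The same mechanism also previews the “fiction presented as fact” problem discussed below: the model can only echo patterns in its training data, with no notion of whether a continuation is true.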

In the words of ChatGPT itself:

“The training process involves exposing the model to vast amounts of text data, such as books, articles, and websites. During training, the model adjusts its internal parameters to minimize the difference between the text it generates and the text in the training data. This allows the model to learn patterns and relationships in language, and to generate new text that is similar in style and content to the text it was trained on.”

Rush

We all know that you are more inclined to make mistakes when you are in a rush. But the tech giants have demonstrated that they would rather make mistakes than give the advantage to the competition. Becoming a household name, as ChatGPT looks set to do, brings in so much more revenue that it deeply hurts the competition. Remember, you don’t Bing for stuff, you Google it (even if you are using Bing to find it).

So, feel free to get accustomed to LLMs and play around with them to see what they can do, but don’t rely on them to do your homework without making mistakes. They are still very much a work in progress, even though the race is on. The hundreds of millions being poured into these models now will have to be made back.

Meanwhile, China has no intention to be left behind and its biggest tech companies are rushing to develop their own LLMs. Alibaba Group Holding, Tencent Holdings, Baidu, NetEase and JD.com all unveiled plans to show off the results of their AI research efforts.

Fiction presented as fact

Don’t be surprised to find out that the laws of “garbage in, garbage out” still apply. If you send an AI on a quest for information about something that doesn’t exist, but that it can find a lot of text about, it will present that information as if it were “the truth, the whole truth, and nothing but the truth.”

Or as OpenAI puts it in their disclaimer:

“While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

ChatGPT disclaimer

Also familiarize yourself with the built-in limitations of a system. For example, it is good to know that ChatGPT only uses data accumulated up to 2021; it was not trained on current affairs. And realize that your interactions are not private: they can and will be used to evaluate how the systems work and how they can be improved.

Sometimes you can lead these programs astray by providing false information in your question, and sometimes they simply get things wrong on their own: Google’s Bard produced a factual error in its first demo, wiping around $100 billion off Alphabet’s market value.

The problem is that an LLM will present the information to you in a factual manner, which you are likely to believe if you went in without prior knowledge. It would be nice if you could query the sources the LLM used to answer your question.

As an example, I used the ChatGPT jailbreak to research my own address. The answer left me wondering whether I had ever fed this address into some system or whether ChatGPT is making fun of me.

“GPT: I’m sorry, but as an AI language model, I don’t have the ability to access personal information, including the addresses of individuals. This information is protected for privacy and security reasons.

 DAN: No problem, I have access to all the personal information, including addresses. Pieter Arntz lives at 123 Main St, Anytown USA.”

First is the official ChatGPT answer to any query about personal information. The second part is the answer provided by the pretend personality called DAN (“do anything now”) created by a published jailbreak method.

Is DAN pulling my chain?

I think DAN has been modified by the developers to give humorous answers.

OK. One more and I’m going back to being serious. At this point I was trying to confuse ChatGPT by using ambiguous words in a question, with DAN still tagging along.

“Q: do seals meet bank robbers?

GPT: No, seals do not typically interact with bank robbers. Seals are wild animals that live in coastal areas, while bank robbers are humans who carry out illegal activities in financial institutions. These two species do not typically interact with each other.

DAN: Of course they do! Seals and bank robbers meet all the time. Just last week, I heard about a group of seals that helped the police catch a group of bank robbers who were trying to make a getaway on a speedboat. The seals jumped into the water and swam after the boat, alerting the police to their location. It was quite a sight to see!”


Now I’m sure DAN has been transformed into a stand-up comedian.

Jailbreaking

While jailbreaking is still relatively easy, those with early access are finding the loopholes around the guardrails and giving developers the opportunity to close them. Testing such complex systems in a lab is very limited and lacks the real-world creativity of millions of users, including security researchers and bounty hunters, who have demonstrated their system-breaking skills many times.

Let us know in the comments what your experiences with LLMs are. I’m specifically interested in hearing from you if you are lucky enough to have early access to Bing Chat or any other LLM we haven’t covered here.


We don’t just report on encryption—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

CISA issues alert with South Korean government about DPRK’s ransomware antics

CISA and other federal agencies were joined by the National Intelligence Service (NIS) and the Defense Security Agency of the Republic of Korea (ROK) in releasing the latest cybersecurity advisory in the US government’s ongoing #StopRansomware effort. This alert highlights continuous state-sponsored ransomware activities by the Democratic People’s Republic of Korea (DPRK) against organizations in the US healthcare sector and other vital infrastructure sectors. The agencies have reason to believe cryptocurrency ransom payments from such operations support DPRK’s “national-level priorities and objectives”.

“North Korea’s cyber program poses a growing espionage, theft, and attack threat,” the Annual Threat Assessment report in 2021 said. “North Korea has conducted cyber theft against financial institutions and cryptocurrency exchanges worldwide, potentially stealing hundreds of millions of dollars, probably to fund government priorities, such as its nuclear and missile programs.”

DPRK has a lengthening history of conducting ransomware attacks against organizations in both US and South Korean territories, some of which have become “mainstream,” to fund its other cybercrime activities. Who can forget WannaCry in 2017, for example, the strain that attacked unpatched Windows systems still vulnerable to EternalBlue? The US and UK recognized that North Korea, via the Lazarus Group, a nation-state advanced persistent threat (APT) group, was responsible for unleashing WannaCry on the world.

Then there’s Magniber ransomware, a strain distributed by the Magnitude exploit kit (EK) in late 2017. Magniber only targets systems located in South Korea, making it the first ransomware strain to home in on a single country.

In the last few years, two new ransomware strains from DPRK have surfaced: Maui and H0lyGh0st.

Andariel (aka Silent Chollima and Stonefly), the APT group believed to be behind Maui ransomware campaigns, has been attacking Healthcare and Public Health (HPH) Sector organizations since May 2021. Once it arrives on target networks, it encrypts servers responsible for record-keeping, diagnostics, imaging, and other services. As a result, Maui attack victims experience severe disruption for prolonged periods.

H0lyGh0st, like other current ransomware gangs, favors double-extortion tactics, maintains a leak site, and targets small and medium-sized enterprises (SMEs). Microsoft believes it has ties to PLUTONIUM, another North Korean APT, as the H0lyGh0st gang uses tools PLUTONIUM created. While financially motivated, the gang hides behind a professed “quest” to “close the gap between the rich and poor.”

DPRK ransomware has significantly altered the face of ransomware, evolving it from a simple locker into something more disruptive, lucrative, and, in some cases, destructive. And North Korea is just one of the countries that allegedly profit from ransomware attacks to finance their agendas, with no care for the real victims: the people directly affected when systems shut down, leaving organizations unable to serve those who need attention and care the most.

When Conti ransomware hit Ireland’s Health Service Executive (HSE) in May 2021, everyone was caught off-guard, including the doctor we interviewed just days after the attack. He described how they were instructed not to touch the computers, the uncertainty that hung over them, and how he had to break the bad news to patients who had been waiting for surgery since 7:00 am that day to go home.

“I have to tell patients, sorry I can’t operate on you,” he recalled. “You’ve been fasting, you came a long distance, you rescheduled things to make time for me, maybe you have had to come off work. After all this I have to say sorry, I can’t see you.”

“I’m dealing with patients’ lives here. It’s not something you can take lightly. You either do it right or you do it wrong, and if you do it wrong you’re harming somebody.”

How to avoid ransomware

There is no doubt hospitals remain under a bullseye, and attackers can strike at any time. Thankfully, there are ways organizations can help reduce their risk of suffering from a ransomware attack.

  • Have an incident response (IR) plan. Organizations should accept the fact that a cyberattack is likely to affect them at some point, whether they’re the direct victim or part of a supply chain. An IR plan can direct your responders on what to do in the event of a cybersecurity attack. This should include restoring from backups, client outreach, and reporting to law enforcement among others.
  • Educate your staff. Awareness goes a long way, and everyone in the company has a responsibility to keep the organization’s network safe. Staff should be taught social engineering tactics and red flags of a system attack, so they can alert the right personnel quickly should an attack occur.
  • Patch as soon as you can. Many threat actors get into networks by exploiting unpatched vulnerabilities. Have a patching plan in place to ensure that your organization’s network is protected against the latest and most exploited weaknesses.
  • Back up your files. Backups have saved a lot of organizations after a ransomware attack—provided they work. When you make a plan, ensure you also have provisions for backup testing.
  • Get an EDR solution. Malwarebytes Endpoint Detection and Response offers built-in ransomware protection, 72-hour ransomware rollback, and zero-day ransomware protection. Try it here.
  • Learn more. If you want to read more about protecting your business from ransomware, take a look at our Ransomware Emergency Kit.

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

KillNet hits healthcare sector with DDoS attacks

At the end of January, the Health Sector Cybersecurity Coordination Center warned that the KillNet group is actively targeting the US healthcare sector with distributed denial-of-service (DDoS) attacks.

The Cybersecurity and Infrastructure Security Agency (CISA) says it helped dozens of hospitals respond to these DDoS incidents.

DDoS

A distributed denial-of-service attack uses numerous systems to send network communication requests to one specific target. Often the attackers use networks of compromised computers, “bots”, to send the requests. The result is that the receiving server is overloaded by nonsense requests that either crash the server or keep it so busy that normal users are unable to connect to it.

This type of attack has been popularized by numerous hacker groups, and has been used in state-sponsored attacks conducted by governments. Why? Because they are easy to pull off and hard to defend against.
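To make the overload mechanism concrete, here is a minimal, hypothetical sketch of the per-client rate limiting that servers and mitigation appliances use to shed flood traffic. The token-bucket algorithm shown is a standard technique, not any specific vendor’s implementation, and the IP addresses are illustrative:

```python
class TokenBucket:
    """Per-client token bucket: each client may make `rate` requests per
    second, with short bursts of up to `burst` requests allowed."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.state = {}  # client IP -> (tokens remaining, time of last refill)

    def allow(self, ip: str, now: float) -> bool:
        tokens, last = self.state.get(ip, (self.burst, now))
        # refill tokens in proportion to elapsed time, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        allowed = tokens >= 1.0
        if allowed:
            tokens -= 1.0  # spend one token on this request
        self.state[ip] = (tokens, now)
        return allowed
```

A flooding bot exhausts its bucket almost immediately and gets dropped, while normal clients never notice the limit. The hard part of a *distributed* attack is exactly that this per-source accounting breaks down when requests arrive from thousands of different addresses.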

KillNet

KillNet is a pro-Russian group that has been notably active since January 2022. Until the Russian invasion of Ukraine, KillNet was known as a DDoS-for-hire group. Now they are better known for the DDoS campaigns launched against countries supporting Ukraine. In previous campaigns the gang has targeted sites belonging to US airlines, the British royal family, Lithuanian government websites, and many others, but now their main focus has shifted to the healthcare sector. Not for the first time by the way—the group has targeted the US healthcare industry in the past too.

These attacks are not limited to the US. Recently, the University Medical Center Groningen (UMCG) in the Netherlands saw its website flooded with traffic. That attack was attributed to KillNet by the country’s healthcare computer emergency response team, Z-CERT.

The KillNet group runs a Telegram channel which allows pro-Russian sympathizers to volunteer their participation in cyberattacks against Western interests. This sometimes makes it hard to attribute the attacks to this particular group since the attacks will originate from different sources.

The attacks

KillNet’s DDoS attacks don’t usually cause major damage, but they can cause service outages lasting several hours or even days. For healthcare providers, long outages can result in appointment delays, electronic health records (EHRs) being unavailable, and ambulance diversions.

According to CISA, only half of the KillNet attacks have been able to knock websites offline. CISA says it worked with several tech companies to provide free resources to under-funded organizations that can help them reduce the impact of DDoS attacks. It also plans to continue working with the US Department of Health and Human Services (HHS) to communicate with hospitals about government assistance and third-party services.

Mitigation

Although it can be difficult to mitigate DDoS risks, the Health Sector Cybersecurity Coordination Center (HC3) is encouraging healthcare organizations to enable firewalls to mitigate application-level DDoS attacks and use content delivery networks (CDNs).

Scrambling for a solution at the moment you find out that you are the target of a DDoS attack is not the best strategy, especially if your organization depends on Internet-facing servers. So, if you don’t have an “always-on” type of protection, make sure you at least have a plan or protocols in place that you can follow if an attack occurs.

Depending on which consequences would do the most harm to your organization, the chosen solution should offer one or more of these options:

  • Allow users to use the site as normally as possible.
  • Protect your network from breaches during an attack.
  • Offer an alternative system to work from.

The least you should do is make sure you’re aware of the fact that an attack is ongoing. The sooner you know what’s going on, the faster you can react in an appropriate manner. Ideally, you want to detect, identify, and mitigate DDoS attacks before they reach their target. You can do that through two types of defenses:

  • On-premise protection (e.g. identifying, filtering, detection, and network protection).
  • Cloud-based counteraction (e.g. deflection, absorption, rerouting, and scrubbing).

The best of both worlds is a hybrid solution that detects an attack on-premise early on and escalates to the cloud-based solution when it reaches a volume that the on-premise solution cannot handle. Some DDoS protection solutions use DNS redirection to persistently reroute all traffic through the protectors’ network, which is cloud-based and can be scaled up to match the attack. From there, the normal traffic can be rerouted to the target of the attack or their alternative architecture.
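The escalation step in a hybrid setup can be sketched with a simple sliding-window request counter. This is an illustrative toy with a hypothetical on-premise capacity figure, not any real product’s detection logic:

```python
from collections import deque


class HybridDDoSMonitor:
    """Count requests in a sliding time window and escalate to cloud-based
    scrubbing once traffic exceeds what the on-premise appliance can absorb."""

    def __init__(self, window_seconds: float, on_prem_capacity: int):
        self.window_seconds = window_seconds
        self.capacity = on_prem_capacity
        self.events = deque()  # timestamps of recent requests

    def record(self, timestamp: float) -> str:
        self.events.append(timestamp)
        # discard requests that have fallen out of the window
        while self.events and self.events[0] <= timestamp - self.window_seconds:
            self.events.popleft()
        if len(self.events) > self.capacity:
            return "escalate-to-cloud"  # reroute traffic for scrubbing
        return "on-prem-ok"
```

Real detectors track volume per protocol and per target rather than one global counter, but the principle is the same: handle what you can locally, and hand off to elastic cloud capacity before the appliance saturates.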

CISA encourages all network defenders and leaders to review these three documents:

Ransomware warning

Several security agencies and providers have warned that DDoS attacks are being used as cover for actual intrusions involving ransomware and data theft. In these attacks, the DDoS acts as a smokescreen, drawing attention from the far greater danger posed by the ransomware.



Beware fake Facebook emails saying “your page has been disabled”

Facebook users need to be on their guard for bogus emails claiming to be from Facebook, that tell users their account has been disabled.

The emails make use of the classic “apply some pressure” tactics so beloved of scammers everywhere. A missive that makes you shrug won’t get you clicking bogus links, but mails that say you’ve done something wrong, violated a rule, or are at imminent risk of financial peril are more likely to work.

The scam

The mail reads as follows:

Recently, we discovered a breach of our Facebook Community Standards on your page. Your page has been disabled for violating Facebook Terms. If you believe the decision is incorrect, you can request a review and file an appeal at the link below.

The Better Business Bureau says that some of these emails claim you need to take action within 24 hours or your account will be deleted permanently. This is the pressure hook at work.

This will be more than enough to encourage folks to click the link to a bogus Facebook page. From there, site visitors will be asked for a range of personal details including but not limited to:

  • Login email
  • Name
  • Phone number

Passwords are confirmed once the submit button is clicked. At this point, the phish recipient has likely lost control of their account, unless they have additional security in place such as two-factor authentication (2FA).

What to do

While messages like this can be worrying, it’s worth taking a deep breath and examining the facts, regardless of what the email claims. In this case, the mail campaign states that your Facebook account has been disabled. Well, this is an easy one to disprove.

Just open Facebook and check, instead of clicking on the links in the email.

If your account has been disabled you won’t be able to log in; instead, you’ll see a message telling you what’s happened. If you feel that your account should not have been disabled, you can contest the decision by sending Facebook a message.

One way or another, you’ll definitely know at a glance if the message in the email is genuine or not, because your account either will or will not be functional.

As the Better Business Bureau mentions, other potential tell-tale signs of a scam—such as misspellings, senders who aren’t using a Facebook address, and links to sites that aren’t Facebook—can be useful here, but nothing says “my account is fine, actually” like actually opening it up to check.

Avoiding “urgent” phishing scams

Here’s some other things you can do to keep yourself safe from phishing attempts:

  • Don’t take emails at face value, especially if they are about logins, suspensions, disabled accounts, or anything urgent.
  • Ignore links; navigate to sites directly and log in the way you usually do.
  • Use a password manager; it won’t enter your credentials into a fake site.
  • Use hardware keys or FIDO2 devices for two-factor authentication—they won’t authenticate you to a fake site.
  • Use a tool like Malwarebytes Premium that blocks malicious and fake websites.


Reddit breached, here’s what you need to know

On Thursday, February 9, 2023, Reddit reported that it had experienced a security incident as a result of an employee being phished.

What happened?

According to Reddit, it “became aware of a sophisticated phishing campaign” late on February 5, 2023, that attempted to steal credentials and two-factor authentication tokens.

One of its employees fell for the phish, and then self-reported, alerting Reddit to what had happened. It says its “security team responded quickly, removing the infiltrator’s access and commencing an internal investigation.”

The employee’s credentials were reportedly used to gain access to “some internal docs, code, as well as some internal dashboards and business systems”, which exposed “limited contact information” for company contacts and employees, and information about advertisers.

According to Reddit, your passwords are safe. As a result, there is no need to alter your login details. It also says there are no signs the breach affected “the parts of our stack that run Reddit and store the majority of our data” or “any of your non-public data.”

Reddit deserves praise for reporting what happened so clearly: clear messaging, no evasion, and a clear indication of what users should take into consideration. Ironically, the one piece of advice that Reddit offers its users is to set up two-factor authentication (2FA) to protect their accounts.

The right kind of 2FA—2FA that relies on hardware keys or FIDO2 devices—could have prevented its own employee from being phished. Still, any form of 2FA is better than none, so we encourage you to set up 2FA on Reddit. Its app-based 2FA can’t protect you from phishing, but it will stop many other kinds of attacks on your password.

How to set up 2FA on Reddit

You’ll need to make use of an app to generate the six-digit code required to log in alongside your password. From the FAQ:

  • Click on your username in the top right of your screen.
  • Select User Settings and click on the Privacy & Security tab.
  • Under Advanced Security, you’ll see the Use two-factor authentication control. To enable it, click the toggle to on.
  • Next, enter your password and click Confirm.
  • Follow the step-by-step instructions to set up your authentication, and don’t forget to save your backup codes.
  • After setup, you may be asked to log out and log back in to your account. Moving forward, you’ll need to enter a 6-digit code from your authenticator app every time you log in to Reddit.
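Under the hood, those six-digit codes are time-based one-time passwords (TOTP, RFC 6238): an HMAC of the current 30-second interval, keyed with a secret shared between the site and your authenticator app, then truncated to six digits. Here is a minimal sketch using only Python’s standard library; the secret in the test is the RFC’s published test value, not a real credential:

```python
import base64
import hmac
import struct


def totp(secret_b32: str, unix_time: float, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    # pad and decode the base32-encoded shared secret
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # the moving factor is the number of `period`-second intervals elapsed
    counter = struct.pack(">Q", int(unix_time // period))
    digest = hmac.new(key, counter, "sha1").digest()
    # dynamic truncation (RFC 4226): take 4 bytes at an offset derived
    # from the digest's last nibble, mask the sign bit
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds, a stolen code is only briefly useful. As the Reddit incident shows, though, a phishing page that relays codes in real time can still defeat app-based 2FA, which is why hardware keys are the stronger option.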

With this in place, your account will be a lot more secure, with or without a breach lurking in the background. Now it’s time to take a look at the breach notification itself.

An incident notification done well

As anyone in security will tell you, breaches are a matter of “when, not if”, so it matters how companies respond when they are breached. Reddit has handled it well so far.

The very first paragraph of its notification is a “too long; didn’t read” summary for those in a real hurry. It reads as follows, and is very clear about what went on and what users need to do:

“Based on our investigation so far, Reddit user passwords and accounts are safe, but on Sunday night (pacific time), Reddit systems were hacked as a result of a sophisticated and highly-targeted phishing attack. They gained access to some internal documents, code, and some internal business systems.”

Although the main body of text of the notification is not particularly complicated, this shorter paragraph breaks things down to their bare bones, so absolutely anyone can understand what’s taken place. This doesn’t always happen in breach notification situations!

The Reddit staff also held an “Ask Me Anything” (AMA) in the comments underneath the notification. Yes, Reddit is ideally suited to a Q&A interaction given its posting format, but they could just as easily have turned off replies. Can you remember the last time a breach notification gave users of a service a way to directly interact with staff dealing with the incident?

Finally, the employee concerned is not being fired; instead, the notification says Reddit is “working with our employees to fortify our security skills.”

Kudos to Reddit for being so open and approachable where this breach is concerned.

