IT NEWS

What is encryption? And why it matters in a VPN

Encryption is a term used to describe the methods that hide the true meaning of messages using code, especially to prevent unauthorized access to the information in the messages.

Not all users of virtual private networks (VPNs) care about encryption, but many are interested in, and benefit from, strong end-to-end encryption. So let’s have a look at the different types of encryption and what makes them tick.

We have discussed the different types of VPN protocols elsewhere, and pointed out that a big factor in many of the important properties of a VPN is the type and strength of encryption. To accomplish end-to-end encryption, a process called VPN tunneling is needed.

What is a VPN tunnel?

A VPN tunnel is an encrypted link between your device and an outside network. But there are significant differences between VPN tunnels and not all of them are equally effective in protecting your online privacy. The strength of a tunnel depends on the type of protocol your VPN provider uses. One of the key factors is the type of encryption.

What is encryption used for?

Encryption is used to hide the content of traffic from unauthorized readers. This is often referred to as end-to-end encryption since usually only the sender at one end and the receiver at the other end are authorized to read the content.

Privacy of Internet traffic is, or should be, a major concern, because we use the Internet in all its forms to send a lot of sensitive information to others. For example:

  • Personal information.
  • Information about your organization.
  • Bank and credit card information.
  • Private correspondence.

Since human-devised codes are far too easy for modern computers to crack, we rely on computers to encrypt and decrypt our sensitive data.

Types of encryption

“What are the types of encryption?”, you may ask. Computerized encryption methods generally belong to one of two types of encryption:

  • Symmetric key encryption
  • Public key encryption

Public-key cryptography is sometimes called asymmetric cryptography. It is an encryption scheme that uses two mathematically related, but not identical, keys. One is a public key and the other a private key. Unlike symmetric key algorithms that rely on one key to both encrypt and decrypt, each key performs a unique function. The public key is used to encrypt and the private key is used to decrypt. The mathematical relation makes it possible to encode a message using a person’s public key, and to decode it you will need the matching private key.

Symmetric-key encryption

This type of encryption is called symmetric because you need to have the same substitution mapping to encrypt text and decrypt the encoded message. This means that the key which is used in the encryption and decryption process is the same.

Symmetric key encryption requires that you know which computers will be talking to each other so you can install the key on each one. This way each computer has the secret key it can use to encrypt a packet of information before sending it over the network to the other computer. Basically, it is a secret code that each of the two computers must know in order to decode the information. But since this design necessitates sharing of the secret key, it is considered a weakness when there is a chance of the key being intercepted.
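As an illustrative sketch (a toy construction, not a real cipher), a symmetric scheme can be as simple as XORing data with a key stream derived from the shared secret. The crucial property is that exactly the same key both encrypts and decrypts:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a repeatable pseudo-random stream from the shared key.
    (Toy construction for illustration only -- not a secure cipher.)"""
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt or decrypt: XOR is its own inverse, so the same
    function (and, crucially, the same key) works both ways."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

shared_key = b"installed on both computers"
ciphertext = xor_crypt(b"a packet of information", shared_key)
assert xor_crypt(ciphertext, shared_key) == b"a packet of information"
```

Note that anyone who intercepts `shared_key` can decrypt everything, which is exactly the weakness described above.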

Advanced Encryption Standard (AES)

The best example of symmetric encryption is probably AES, which the US government adopted in 2001. The government classifies information into three categories: Confidential, Secret, and Top Secret. All three AES key lengths (128, 192, and 256 bits) can be used to protect information at the Confidential and Secret levels. Top Secret information requires either 192- or 256-bit keys.

How is AES encryption done?

The AES algorithm performs a series of transformations, organized in rounds, on data stored in an array. The first transformation substitutes bytes using a substitution table; the second shifts the data rows; the third mixes the columns. The final transformation of each round combines each column with a different part of the expanded encryption key. The key length matters because longer keys require more rounds to complete: 10 rounds for 128-bit keys, 12 for 192-bit, and 14 for 256-bit.
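The relationship between key length and round count is fixed by the AES standard, so it can be captured in a tiny sketch (the helper name here is our own, purely illustrative):

```python
# Number of AES rounds is fixed by key length (per the AES standard).
AES_ROUNDS = {128: 10, 192: 12, 256: 14}

def round_count(key_bits: int) -> int:
    """Look up how many rounds AES performs for a given key size."""
    if key_bits not in AES_ROUNDS:
        raise ValueError("AES keys must be 128, 192, or 256 bits")
    return AES_ROUNDS[key_bits]

# Longer keys mean more passes of the substitution, row-shifting,
# column-mixing, and round-key transformations described above:
assert round_count(256) > round_count(128)
```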

Public-key encryption

To deal with the possibility of a symmetric key being intercepted, the concept of public-key encryption was introduced. Public-key encryption uses two different keys at once: a combination of a private key and a public key. The private key is known only to your computer, while the public key is provided by your computer to any computer that wants to communicate securely with it.

To decode an encrypted message, a computer must use the public key, provided by the originating computer, and its own private key. The key pair is based on very large prime numbers. This makes the system extremely secure, because there is essentially an infinite supply of prime numbers, meaning there are nearly infinite possibilities for keys.
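The mechanics can be seen in a toy RSA sketch built from textbook-sized primes (real keys use primes hundreds of digits long; these tiny values are purely illustrative):

```python
# Toy RSA keypair from two small primes -- never use sizes like this in practice.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # used to derive the private exponent
e = 17                    # public exponent, shared freely
d = pow(e, -1, phi)       # private exponent, kept secret (Python 3.8+)

def encrypt(message: int, public=(e, n)) -> int:
    """Anyone holding the public key can encrypt."""
    exp, mod = public
    return pow(message, exp, mod)

def decrypt(ciphertext: int, private=(d, n)) -> int:
    """Only the private-key holder can decrypt."""
    exp, mod = private
    return pow(ciphertext, exp, mod)

plaintext = 65
assert decrypt(encrypt(plaintext)) == plaintext
```

Recovering `d` from the public key alone would require factoring `n` into its primes, which is what makes properly sized keys so hard to break.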

VPNs use public-key encryption to protect the transfer of AES keys. The server uses the public key of the VPN client to encrypt the key and then sends it to the client. The client program on your computer then decrypts that message using its own private key.
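That handshake can be sketched in a few lines, assuming a toy RSA keypair for the client (textbook-sized numbers, illustrative only): the server wraps a random session key with the client's public key, and only the client's private key can unwrap it.

```python
import secrets

# Client's toy RSA keypair (textbook values -- real keys are far larger).
n, e = 3233, 17          # public key, known to the VPN server
d = 2753                 # private key, known only to the client

# Server side: pick a random symmetric session key and wrap it
# with the client's public key before sending it over the wire.
session_key = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key, e, n)

# Client side: unwrap with the private key.
unwrapped = pow(wrapped, d, n)
assert unwrapped == session_key

# Both ends now share the session key and can switch to fast
# symmetric (e.g. AES) encryption for the rest of the session.
```

An eavesdropper who sees only `wrapped` cannot recover the session key without `d`, which never leaves the client.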

Why is end-to-end encryption important?

End-to-end encryption is important to create a secure line of communication that blocks third-party users from intercepting data. It limits the readability of transmitted data to the recipient. Most VPN services use asymmetric encryption to exchange a new symmetric encryption key at the start of each VPN session. The data is only encrypted between you and the VPN server. This secures it from being inspected by any server in-between you and the VPN, such as your ISP or an attacker operating a rogue WiFi hotspot. The data transferred between the VPN server and the website you’re visiting is not encrypted, unless the website uses HTTPS.

This is why we said in an earlier post that using a VPN is shifting your trust to a new provider. When you use a VPN you transfer access to your traffic to a third party, the VPN provider. All that visibility that users balk at relinquishing to their ISP has now been handed over to their VPN provider. Careful consideration should be given to the trustworthiness of said VPN provider.

The post What is encryption? And why it matters in a VPN appeared first on Malwarebytes Labs.

What is Incognito mode? Our private browsing 101

Incognito mode is the name of Google Chrome’s private browsing mode, but it’s also become the catch-all term used to describe this type of web surfing, regardless of the browser being used. Some call it Private Mode, others call it Private Browsing. Apple almost certainly got there first, yet Chrome’s 2008 creation has largely become the generic name for all private browsing activity.

What’s the difference between Private browsing and Incognito Mode?

This is an important distinction to make. People can often get lost in options settings when reading articles about incognito mode because some aspects may be Chrome specific. This won’t help when trying to select something in options related to Safari on a Mac. With that in mind, everything we talk about below will be in relation to Chrome’s actual Incognito Mode. If we’re being more general, or referring to privacy modes in other browsers, we’ll also explain which ones.

How to go Incognito

In Chrome, Incognito is a privacy-focused option available from the dropdown menu in the top right hand corner. It’s a brand new, fresh out of the box, temporary version of your regular web browser. We’ll explain the key differences, and possible drawbacks, below.

Edge follows the same process. Click “Settings and more…” and this leads to what they call an InPrivate window.

You won’t be surprised to learn things are the same in Firefox. Its Private browsing is also opened up by the dropdown icon on the right hand side, then picking “New Private Window”.

Safari on a Mac works a little differently than the rest. You need to click on File / New Private Window from the dropdown options at the top of the screen.

What is Incognito mode?

In Incognito mode, your browsing history, cookies, site data, and information entered into forms are not saved on your device. This means that when you start an Incognito window, you’re not logged into anything from your other session(s). You can be logged into your Amazon account, your email accounts, social media, and anything else in your “main” browser. That won’t be the case with the Incognito window when you open it up. It is completely separate from whatever you’re doing elsewhere. You don’t need to close your other browser(s) while using an Incognito window. They’ll co-exist quite happily.

Why use Incognito mode?

Incognito mode is primarily designed to keep your information private from other users of the same computer. It isn’t designed to keep your information private from the websites you visit, although that is sometimes a side effect.

The old joke is that it’s “pornography mode”, for people wanting to hide more personal aspects of their browsing. While this is no doubt true for some, there’s a lot more scope to Incognito mode and its uses than people give it credit for.

People may share computers. “Switch your login to another account” may be the first suggestion, but it’s not typically a realistic one in every scenario. What if you want to buy a surprise gift for a loved one? Nobody wants to play a game of “endlessly hide your Amazon history” while casually surfing. This is why people will look for gifts in Incognito mode, copy the URL, then drop it into their regular browser session afterwards to make the purchase. From there, they can delete it from their actual, logged-in history before forgetting about it. One additional bonus is that they won’t have dozens of similar gift items showing up in purchase suggestions. Again, this is very useful for accidental over-the-shoulder gift spoilage.

Avoid getting personal with private browsing

There’s a desire to avoid “cross-pollination” of data related to people logged in on their main browser. Sure, your Google account may know a lot about you. It’s still possible to isolate your most personal details from services you use. Suppose you don’t want your Google account to get a read on where you live, or go to work, or perhaps know the name of your children’s school. This is, again, doable. However, when your child falls ill and you can’t remember the school’s number, punching it into your logged-in account may be something you were trying to avoid. Same goes for a quick Google Maps route from your house to your office when roadworks cause delays. These are all things people who compartmentalise bits and pieces of crucial personal information like to avoid. There’s always the possibility of something going wrong in search engine land, and steps to mitigate issues like this are wise.

Is Incognito mode totally private?

Please note that the below applies to all browsers when talking about Incognito / Privacy modes. The short answer is “no”, but how private it is largely depends. Depends on what, you may ask?

If you’re on a corporate network, or on a home network with logging enabled? The person with access to the logs might not be able to see the site content, but they may be able to see URLs and can almost certainly see the names of the sites. As the text in the Incognito mode window at launch states, your ISP and websites themselves may see what you’re doing.

There’s also an option to enable third-party cookies (off by default in Incognito), though this may be something most people would naturally avoid in private browsing mode. Google has made statements about most of the above already. In fact, some of this has become quite a headache for the search giant.

Private browsing should not be used as a replacement for tools like a VPN, which are designed to solve a very different set of privacy problems. Some folks like to take things a step further. Otherwise, private browsing modes are a useful thing to have, but certainly not a one-stop fix for all privacy problems. Keep this in mind and your Incognito surfing sessions will hopefully be free from worry.

The post What is Incognito mode? Our private browsing 101 appeared first on Malwarebytes Labs.

Colonial Pipeline attack spurs new rules for critical infrastructure

Following a devastating cyberattack on the Colonial Pipeline, the Transportation Security Administration—which sits within the government’s Department of Homeland Security—will issue its first-ever cybersecurity directive for pipeline companies in the United States, according to exclusive reporting from The Washington Post.

The directives are expected to arrive within the week and will require pipeline companies in the US to report any cyberattacks they suffer to the TSA and the Cybersecurity and Infrastructure Security Agency. Such attacks will be reported by newly designated “cyber officials” to be named by every pipeline company, who will be required to have 24/7 access to the government agencies, The Washington Post reported. Companies that refuse to comply with the directives will face penalties.

The regulations represent a tidal shift in how the TSA has approached pipeline security in the country for more than a decade. Though the agency is best known for its 20-year mandate protecting flight safety in the country, pipeline security fell under its purview following a government restructuring after the attacks of September 11, 2001. More than a decade after the attacks, the agency leaned on voluntary collaboration with private pipeline companies for cybersecurity protection, sometimes offering to perform external reviews of a company’s networks and protocols. Sometimes, The Washington Post reported, those offers were declined.

But after the ransomware group Darkside attacked the East Coast oil and gas supplier Colonial Pipeline, which led to an 11-day shut-down and gas shortages in the Eastern US, it appears that the federal government is no longer satisfied with private industry’s lagging cybersecurity protections. Already, President Joe Biden has signed an Executive Order to place new restrictions on software companies that sell their products to the federal government. Those rules were reportedly refined after the Colonial Pipeline attack, and are expected to become an industry norm as more technology companies vie to include the government as a major customer.

The TSA’s new rules for pipeline companies fall into the same trend.

In speaking with The Washington Post, Department of Homeland Security spokeswoman Sarah Peck said:

“The Biden administration is taking further action to better secure our nation’s critical infrastructure. TSA, in close collaboration with [the Cybersecurity and Infrastructure Security Agency], is coordinating with companies in the pipeline sector to ensure they are taking all necessary steps to increase their resilience to cyber threats and secure their systems.”

Though the first directive from TSA is expected this week, follow-on directives could come later. Those directives are reported to include more detailed rules on how pipeline companies protect their own networks and computers against a potential cyberattack, along with guidance on how to respond to cyberattacks after they’ve happened. Further, pipeline companies will be forced to assess their own cybersecurity against a set of industry standards. These directives, like the one expected this week, will also be mandatory, though one element is expected to remain voluntary: whether a pipeline company must actually fix any issues it finds during a required cybersecurity assessment.

The new rules will bring the private pipeline industry into a small group of regulated sectors of US infrastructure, including bulk electric power grids and nuclear plants. These sectors are the outliers in US infrastructure, as most components—including water dams and wastewater plants—have no mandatory cybersecurity protections.

Several hurdles remain for the TSA’s rules to be effective, including a dearth of staff at the agency itself. According to The Washington Post, the TSA’s pipeline security division had just one staff member in 2014, and according to testimony in 2019, that number had grown to only five. To address the problem, the Department of Homeland Security is expected to hire 16 more employees at TSA and 100 more employees at CISA.

The post Colonial Pipeline attack spurs new rules for critical infrastructure appeared first on Malwarebytes Labs.

Insider threats: If it can happen to the FBI, it can happen to you

If you’re worried about the risk of insider threats, you’re not alone. It can affect anyone, even the FBI. A federal grand jury has just charged a former intelligence analyst with stealing confidential files from 2004 to 2017. That’s an incredible 13 years of “What are you doing with that pile of classified material?”. Even more so, considering the indictment states the defendant did not “…have a ‘need to know’ in most, if not all, of the information contained in those materials”.

There’s lots of ways this kind of data collection and retention could go wrong. What happens if the person hoarding the documents decides to sell to the highest bidder? Or even just starts giving it away to specific entities? Could it all be digital? What happens when a random third party compromises the PC / storage the files are located on?

How about a plain old burglary, with unsuspecting thieves swiping an inconspicuous looking external hard drive?

However you look at it, this is not a great situation for those files to be in.

The safe zone is compromised

Organisations have multiple problems dealing with the issue of insider threats. They feel more comfortable locking down their data from outside entities. Mapping out ways to keep the soft underbelly of the organisation protected from its own employees is more difficult.

This makes sense. It’s frankly overwhelming for many businesses to figure out where to even begin. How many physical security experts do people know? What about social engineers? Hardware lockdown specialists? The IT department should know their way around firewall configuration. However, there may be weak spots in auditing folks with privileged IT access.

Is there someone at a business who has an idea that printer security is even a thing? If not, that could spell trouble.

Anyone can be a security risk

There are many forms of insider threat, which we’ve explored in great detail. They differ greatly, and their motivations can vary considerably from individual to individual. If you’ve never considered the difference between intentional and unintentional insiders, and all the different varieties thereof, then now is a great time to start.

If your approach is simply “a bad person wants to steal my files”, any potential defences likely won’t contain enough nuance to be sufficient in the first place. It’s a big, complicated problem. There are lots of moving parts. It needs the same level of thought and attention given to other areas of business security elsewhere.

Some additional reading

This FBI insider threat story is quite timely, given how much attention the subject is experiencing recently. Some additional reading for your consideration:

This is hopefully just the splash of light reading material required to get you up to speed on this insidious form of data exfiltration.

The post Insider threats: If it can happen to the FBI, it can happen to you appeared first on Malwarebytes Labs.

VPN Android apps: What you should know

Months ago, we told readers about the importance of using a VPN on their iPhones, and while those lessons do apply to Android devices—a VPN for Android will encrypt your Android’s web activity and app traffic, and it will stop your mobile carrier from monetizing your data—Android users should be cautious about one particular risk: that of the free VPN app.

In just the past year, free VPN for Android apps have exposed the data of as many as 41 million users, revealing consumers’ email addresses, payment information, clear text passwords, device IDs, and more. Investigations into one of those free VPN Android apps also revealed that it may have been part of a larger web of Android VPNs all operating under the same company—a company that was nearly impossible to reach for customer support, borrowed liberally from other company privacy policies, and failed to meet its promises to keep “no logs” of user activity. And while poorly built VPNs are not reserved only for Android devices, Android users in particular should wade cautiously through the Google Play Store, where countless VPN apps market themselves with bland terminology such as “ultimate,” “super,” “fast,” and, of course, “free.”

In reality, a secure, trustworthy VPN Android app is rarely, if ever, free, and that’s largely because the actual work that goes into running a secure VPN service costs money. As Malwarebytes senior security researcher JP Taggart said on our podcast Lock and Code:

“Deploying a VPN service is, you know, it requires infrastructure. It requires servers, it requires staff, it requires coders to make sure that it’s done properly or that it’s done the way you want it to work,” Taggart said. “All of that has to be paid. All these people that work on [the VPN service], nobody is going to do it for free. No one is that altruistic.”

There is no best free VPN for Android

Searching for a VPN app shouldn’t be so hard, but it is. A quick query in the Google Play store conjures up at least 250 results, and, without any knowledge of the VPN industry, it can be difficult to know which app to trust. For users taking their first steps into learning about VPNs, the temptation to download any of the countless free VPN Android apps is high.

But some of those free apps are the same ones with a poor track record of protecting user data.

In February of this year, a cybercriminal claimed to have stolen user data from three separate VPN apps available on the Google Play Store: SuperVPN, GeckoVPN, and ChatVPN. The cybercriminal said on an online hacking forum that they’d managed to swipe email addresses, usernames, full names, country names, randomly generated password strings, payment-related data, and whether a user was a “Premium” member, along with that “Premium” membership’s expiration date. Follow-on reporting from the tech outlet CyberNews also revealed that the stolen data included device serial numbers, phone type and manufacturer information, device IDs, and device IMSI numbers.

The impact of such a data breach is hard to measure, because it goes beyond just the harm caused to the victims. At risk here is also the trust that users are expected to place in a service that is specifically advertised as a privacy and security measure.

Troy Hunt, the founder of the data breach website HaveIBeenPwned, called the breach “a mess” on Twitter, saying that it was a “timely reminder of why trust in a VPN provider is so crucial.”

“This level of logging isn’t what anyone expects when using a service designed to *improve* privacy,” Hunt said, “not to mention the fact they then leaked all the data.”

But for one of the VPN Android apps, SuperVPN, it was actually the second time it had been named in a cybersecurity mishap.

In July 2020, cybersecurity researchers at vpnMentor published a report that showed that seven VPN Android apps had left 1.2 terabytes of private user data exposed online. According to the report, the data belonged to as many as 20 million users, and it included email addresses, clear text passwords, IP addresses, home addresses, phone models, device IDs, and Internet activity logs.

Particularly upsetting in this discovery was the fact that all of the seven VPN Android apps had promised to keep “no logs” of user activity—a provably false claim since vpnMentor actually found user logs in its research. The VPNs named in the report were UFO VPN, Fast VPN, Free VPN, Super VPN, Flash VPN, Secure VPN, and Rabbit VPN.

In its investigation, vpnMentor also proposed that the seven VPN Android apps were likely made by the same developer, as the VPN services shared a common Elasticsearch server, along with the same payment recipient, Dreamfii HK Limited. Three of the VPN apps also featured branding and website layouts that looked similar to one another.

These are known privacy and security failures, and they just so happen to afflict free VPN for Android apps. A free VPN may cost nothing out of your pocket, but it could cost your privacy a lot more.  

We can’t tell you the best VPN for Android, free or not free

We’ve told you the bad news—free Android VPNs are too big a risk to take. Now, understandably, you might ask about the good news—what VPN Android app should I use?

Unfortunately, we can’t recommend any VPN Android app, and that’s because the varying privacy protections that VPNs offer are not uniformly valuable to every user.

For instance, for users who want to protect their Internet activity while connecting to a public WiFi hotspot, VPNs offer a strong solution, as VPN services encrypt web traffic and make it incomprehensible to digital eavesdroppers. For users who want to access content that is geo-restricted, VPNs also offer a helpful workaround, as they can make a user’s Internet traffic appear as though it is originating from another location.

But where VPN value starts to differentiate is in the realm of privacy, and that’s because, as we’ve learned in recent years, privacy could mean something different for every user. For some users, privacy might mean hiding their Internet traffic from their Internet Service Provider, which a VPN can do. But for other users, privacy might mean keeping their sensitive data from today’s enormous social media companies, which a VPN cannot do. Or it might mean stopping cross-site tracking across the Internet, which, again, a VPN cannot do.

But do not worry if you’re still looking for help, because we can recommend the same advice we did earlier this year for anyone looking for the right VPN for themselves.

Think about how you’ll use the VPN service and look for a variety of features, like the ease of use, the connection speed, any potential data limits, the availability of customer support, and the VPN’s policy on keeping user logs. With the right info, you’ll be protecting yourself in no time.

Just remember, if you’re willing to take your privacy seriously, you should also be willing to spend a little money on it.

The post VPN Android apps: What you should know appeared first on Malwarebytes Labs.

A week in security (May 17 – May 23)

Last week on Malwarebytes Labs, we looked at a banking trojan full of nasty tricks, explained some tips and pointers for using VirusTotal, and dug into how an authentication vulnerability was patched by Pega Infinity. We also explored how a Royal Mail phish deploys evasion tricks to avoid analysis, and gave a rundown of how Have I been Pwned works. The human cost of the HSE ransomware attack was explored, new Android patches hit the streets, and Apple confirmed that Macs get malware.

Other Cybersecurity news

Stay safe, everyone!

The post A week in security (May 17 – May 23) appeared first on Malwarebytes Labs.

Shining a light on dark patterns with Carey Parker: Lock and Code S02E09

This week on Lock and Code, we speak to cybersecurity advocate and author Carey Parker about “dark patterns,” which are subtle tricks online to get you to make choices that might actually harm you.

Dark patterns have been around for years, and the tricks they’re based on are even older. Ever bought a pretty much useless concert ticket warranty? Ever paid for 12 months at a gym when you were really just interested in a trial membership? Ever been fooled into spending just a little more money than you planned?

Well, those tricks exist online, too, and they often show up in hidden, visual cues that make you think that one option is better for you than another. But, lo and behold, the option that looks appealing to you might actually be the option that best serves a company. You could be tricked into staying subscribed to a newsletter. You could find it exceedingly difficult to delete an account entirely. And you may be signing away your data privacy protections without even knowing it.

But, as Parker helps explain in today’s episode, even those lowered privacy protections are a means of making money for some of today’s largest social media companies:

“They want to know as much about you, they want to know about everyone you know, so they use dark patterns to trick you into providing way more personal data than any sane human would ever want to provide. And that’s how they make more money.”

Tune in to learn about dark patterns—how to spot them, what any future fixes might look like, and what one company is doing to support you—on the latest episode of Lock and Code, with host David Ruiz.

https://feed.podbean.com/lockandcode/feed.xml

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

The post Shining a light on dark patterns with Carey Parker: Lock and Code S02E09 appeared first on Malwarebytes Labs.

A doctor reveals the human cost of the HSE ransomware attack

“It’s cracking, the whole thing.”

The words were delivered quickly, but in a thoughtful and measured way. As if the person saying them was used to delivering difficult news. Little surprise, given they belonged to a doctor. But this doctor wasn’t describing a medical condition—this was their assessment of the situation on the ground in the hospital where they’re working today, in Ireland.

Since May 14, Ireland’s Health Service Executive (HSE) has been paralysed by a cyberattack. In the very early hours of Friday morning, a criminal gang activated Conti ransomware inside HSE’s computer systems, sparking a devastating shutdown.

Government officials were quick to reassure people that emergency services remained open and the country’s vaccine program was unaffected. The story echoed around the world, and then, outside of Ireland at least, the news moved on. Just as it had moved on from the Colonial Pipeline attack that preceded HSE, and the attack on AXA insurance that followed it.

But the HSE attack isn’t over.

Daniel (not his real name) sat with Malwarebytes Labs on condition of anonymity, to explain how this cyberattack is continuing to affect the lives of vulnerable patients, and the people trying to treat them. Throughout our interview he speaks quickly, but with control and understatement. He has the eyes and slightly exaggerated movements of somebody substituting adrenaline for sleep.

A 21st century health system runs on computers, but the computers in Daniel’s hospital have notes on them saying they cannot be used, and should not be restarted. While those computers are dormant, simple things become difficult; everything takes longer; complex surgeries have to be cancelled.

Daniel told us that before the attack he would go through a system linked to HSE for each of his appointments, looking for GP referrals by email, checking blood results, accessing scans, reading notes linked to each patient. That is gone now.

“Before surgery I review [each patient’s] scans. Or even during the surgery. Legally I have to look at the scans.”

“I can’t even check my hospital mail. Our communication with everyone has been affected… They can’t ring me. The whole thing is just breaking apart.” The GDPR, which is designed to protect patients’ data, prevents him from using his personal email or other messaging systems for hospital business. A generation of staff raised on computers are back to pen and paper. “You don’t know who’s looking for who, who wants to see who.”

I ask him how he first learned about the attack and he tells me about coming to work on Friday totally unprepared for what he’d encounter. The only nurse he sees asks “did you hear?”. He had not. The systems he relies on to stay informed aren’t working. “I didn’t get a heads up. All computers are not allowed to be touched. Do not restart.”

He describes how uncertainty hung over them until midday, when he had to tell a patient who had been waiting for surgery since 7 am that her operation was cancelled. “She’s been fasting. With her stress up I had to tell her to go home.”

The staff are in the dark. “We were optimistic it would get done over the weekend. We thought it might get done the same day. Then we thought maybe Monday.” It has been this way since Friday and he is not optimistic that it will be sorted any time soon. “There is no official timeline but we’re thinking it will take at least a week or so. We are not optimistic about it.”

As he says this to me all I can think about is a statistic from the recent Ransomware Task Force report. According to the report, the average downtime after a ransomware attack is 21 days. The time to fully recover is over nine months. I can’t bring myself to mention it.

I ask him about the impact on patients.

“I have to tell patients, sorry I can’t operate on you. You’ve been fasting, you came a long distance, you rescheduled things to make time for me, maybe you have had to come off work. After all this I have to say sorry, I can’t see you.”

“I’m dealing with patients’ lives here. It’s not something you can take lightly. You either do it right or you do it wrong, and if you do it wrong you’re harming somebody.”

But not harming people requires access to information he no longer has. Delays can be life threatening. “If I reschedule a patient and they come back a few weeks or a few months later with a tumour that I couldn’t assess from the paperwork…”, he stops there. He doesn’t need to finish the thought. Those that don’t get worse while they’re delayed are still suffering too. They will stay that way until they can be seen.

And it’s obvious from my conversation with Daniel that it isn’t only the patients who are being put at risk. There are grinding, corrosive effects on the hospital staff too. Everything takes longer, which requires more work, and nobody knows when it will be over.

It is a wicked burden for a medical profession that has spent the last year grappling with a once-in-a-century pandemic. “Our backlog just became tremendous”, Daniel says, before explaining that over the last few months he and his colleagues have performed surgeries at nighttime and weekends to work through the backlog of operations and appointments delayed by the response to COVID.

And now there is another reason to work late.

Because of the ransomware attack, he must put in hours of extra effort after his day’s work is done just to determine which of tomorrow’s appointments he will have to cancel for lack of information. And then he must deal with those anguished, sometimes angry patients, telling them their appointment cannot go ahead.

“Imagine the scenario,” he says. “Patients will wait literally two years to see us. After two years they get a call saying ‘I’m sorry I can’t see you and I have to reschedule you and I can’t say when, because of the ransomware’. They know it’s not my fault but they are upset and very annoyed.” Daniel’s understatement kicks in. “They teach us ways to speak to angry patients, but it’s not nice.”

And hanging over all these interactions is the spectre of litigation. Whichever way he turns, his decisions have consequences and his decision making process is in tatters.

I ask him if he thinks they should pay the ransom (the Irish Taoiseach does not). I am expecting rage and anger. A defiant “no”. I am projecting. His first thought is for the health of the people being denied care.

“I think they will pay the ransom. I don’t think there is another way around it. The pressure will build up, they will have to do what has to be done. This can’t go on. This is disastrous.” If it was his decision, would he pay? “I would. There is no money you can pay to take somebody’s life away. I would make my system more robust so this doesn’t happen again.”

I ask him if there’s anything he’d say to his attackers if he could.

“If your loved one was sick, would you do this? If you had somebody you cared about, would you do this to them? That’s what I’d ask them.”

“I think they lost their humanity.”

The post A doctor reveals the human cost of the HSE ransomware attack appeared first on Malwarebytes Labs.

Android patches for 4 in-the-wild bugs are out, but when will you get them?

In the Android Security Bulletin of May 2021, published at the beginning of this month, you can find a list of roughly 40 vulnerabilities in several components that might concern Android users. According to info provided by Google’s Project Zero team, four of those Android security vulnerabilities are being exploited in the wild as zero-day bugs.

The good news is that patches are available. The problem with Android patches and updates though is that you, as a user, are dependent on your upstream provider for when these patches will reach your system.

Android updates and upgrades

It is never clear to Android users when they can expect to get the latest updates and upgrades. An Android device is in many respects a computer, and it needs regular refreshes, whether to patch against the latest vulnerabilities or to add new features as they become available.

An update is when an existing Android version gets improved, and these come out regularly. An upgrade is when your device gets a later Android version. Usually a device can function just fine without getting an upgrade as long as it stays safe by getting the latest updates.

Depends on brand and type

Google is the company that developed the Android operating system (which is itself a type of Linux) and the company also keeps it current. It is also the company that creates the security patches. But then the software is turned over to device manufacturers that create their own versions for their own devices.

So whether, and when, you will get the latest updates depends on the manufacturer of your device. Some manufacturers’ devices may never see another update because Google is not allowed to do business with them.

The critical vulnerabilities

In a note, the bulletin states that there are indications that CVE-2021-1905, CVE-2021-1906, CVE-2021-28663, and CVE-2021-28664 may be under limited, targeted exploitation. Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. The four that may be being abused in the wild are:

  • CVE-2021-1905 Possible use after free due to improper handling of memory mapping of multiple processes simultaneously, in Snapdragon Auto, Snapdragon Compute, Snapdragon Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wearables.
  • CVE-2021-1906 Improper handling of address de-registration on failure can lead to new GPU address allocation failure, in Snapdragon Auto, Snapdragon Compute, Snapdragon Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wearables.
  • CVE-2021-28663 The Arm Mali GPU kernel driver allows privilege escalation or information disclosure because GPU memory operations are mishandled, leading to a use-after-free. This affects Bifrost r0p0 through r28p0 before r29p0, Valhall r19p0 through r28p0 before r29p0, and Midgard r4p0 through r30p0.
  • CVE-2021-28664 The Arm Mali GPU kernel driver allows privilege escalation or a denial of service (memory corruption) because an unprivileged user can achieve read/write access to read-only pages. This affects Bifrost r0p0 through r28p0 before r29p0, Valhall r19p0 through r28p0 before r29p0, and Midgard r8p0 through r30p0.

Use after free (UAF), the class of vulnerability behind CVE-2021-1905, is caused by incorrect use of dynamic memory during a program’s operation. If a program frees a memory location but does not clear the pointer to it, an attacker can use the stale pointer to manipulate the program.

Snapdragon is a suite of system on a chip (SoC) semiconductor products for mobile devices designed and marketed by Qualcomm Technologies Inc.

Arm Mali GPU is a graphics processing unit for a range of mobile devices from smartwatches to autonomous vehicles developed by Arm.

Mitigation

You can tell whether your device is protected by checking the security patch level.

  • Security patch levels of 2021-05-01 or later address all issues associated with the 2021-05-01 security patch level.
  • Security patch levels of 2021-05-05 or later address all issues associated with the 2021-05-05 security patch level and all previous patch levels.

We would love to tell you to patch urgently, but as we explained, this depends on the manufacturer. Some users who haven’t switched to newer devices that still receive monthly security updates may not be able to install these patches at all.

Stay safe, everyone!


Apple confirms Macs get malware

Anyone following the court case between Epic and Apple is undoubtedly already aware of the “bombshell” dropped by Apple’s Craig Federighi yesterday. For those not in the know, Federighi, as part of his testimony relating to the security of Apple’s mobile device operating system, iOS, stated that “we have a level of malware on the Mac that we don’t find acceptable.”

This, of course, broke the internet.

Years ago, Apple promoted the idea that Macs don’t get viruses, as part of a flashy series of Get a Mac ads featuring Justin Long as a Mac and John Hodgman as a PC.

The irony of this 180 degree turnaround has caused a huge amount of snide commentary. Of course, these ads last played more than a decade ago, and things have changed significantly between then and now, so this isn’t exactly a sudden change of heart.

On the contrary, we should not be surprised by this. Apple’s actions over the last ten years speak volumes. It has implemented increasingly strict code signing requirements as a means for controlling some malware. It implemented Notarization requirements as a means of checking apps distributed outside the App Store for malware. (One could argue about the efficacy of these measures, but the intent is clear.)

Another recent addition is a series of access restrictions that must be approved on a per-app basis, such as access to the Documents or Desktop folders. (Ironically, there was a similar security feature in Windows that Apple mocked in another of the Get a Mac ads.) Admittedly, Apple really only talks about the privacy aspect of these restrictions, but the security aspect is pretty obvious.

Apple also implemented a new EndpointSecurity framework in macOS 10.15 (Catalina), in order to better support third-party antivirus software that—until then—was reliant on ageing, deprecated functionality provided by macOS. This was essentially an official acknowledgement from Apple that Macs get malware, and that there is a need for third-party antivirus software for the Mac.

It has also recently started adding information to its security updates, disclosing when it’s aware of a fixed bug being actively exploited in the wild by malware.

macOS Big Sur 11.3.1 release notes

All this and more shows very clearly that Apple has been aware of the malware issue for a long time. It may not make a lot of public statements acknowledging the malware problem, but actions speak louder than words. In the end, this all boils down to mocking Apple for publicly acknowledging something it has been mocked for years for not acknowledging. The irony!

Is a macOS lockdown imminent?

Not all of the hot takes out there have to do with mocking Apple. Others are taking Federighi’s words in a different light. By pointing out the weaknesses in macOS as a means for illustrating the security of iOS, some fear this is a sign that Apple intends to lock down the Mac in the same way that it has iOS.

However, this also isn’t indicated by Apple’s actions. First, consider Notarization, which is intended to curb distribution of malicious apps outside the App Store. Its efficacy can be called into question, since many pieces of malware have managed to get a clean bill of health from the Notarization process, but that’s not the question here. If Apple’s intent were to shove all developers into the App Store, why would it spend time, effort, and money on an attempt to improve the user experience with apps distributed outside the App Store?

Another point to consider is the EndpointSecurity framework. Apple has put a lot of effort into this. It had conversations with security companies to find out what they needed. It did a great job of implementing something that was able to deliver what was requested, and it spent time bringing antivirus developers to Apple HQ to teach them how to use the new framework.

Antivirus software on iOS is impossible, due to Apple restrictions. So, if it had plans to lock down macOS in the same way, why would it spend all that time, effort, and money on better supporting antivirus software? It doesn’t make sense.

If you still need convincing, just consider Federighi’s own words during his testimony. He said that an iOS device was something that anyone—even an infant—could operate safely. He compared the Mac to a car, something that could be operated safely but that required caution, saying, “You can take it off road if you want, and you can drive wherever you want.”

This, to me, embodies what I perceive to be Apple’s stance on macOS and iOS. The Mac is the workhorse, used to really get things done and “go off road.” It’s the only platform it supports for writing both Mac and iOS apps. There would be no iOS if not for the Mac. The Mac is for those who “think different,” while the nature of iOS does not encourage that.

The future of macOS?

Obviously, I don’t represent Apple and all I can do is speculate based on evidence at hand. That said, I don’t see any reason to think that macOS is going down exactly the same road as iOS. That also means that we will likely continue to have problems with malware on macOS. As long as there is money to be made from increasing numbers of Macs, creators of malware will continue to target Macs.
