IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Apple Lockdown Mode helps protect users from spyware

Apple has announced a new feature of iOS 16 called Lockdown Mode. This new feature is designed to provide a safer environment on iOS for people at high risk of what Apple refers to as “mercenary spyware.” This includes people like journalists and human rights advocates, who are often targeted by oppressive regimes using malware like NSO Groups’ Pegasus spyware.

NSO is an Israeli software firm known for developing the Pegasus spyware. Pegasus has been in the spotlight frequently in recent years, and although NSO claims to limit who is allowed to have access to its spyware, each new finding shows it is used by authoritarian nations bent on monitoring and controlling critics. In one of the more infamous cases, and a classic example of how Pegasus has been used, the United Arab Emirates reportedly used Pegasus to spy on people close to murdered journalist Jamal Khashoggi.

How is this possible? iPhones don’t get viruses!

Contrary to popular belief, it is, in fact, possible for an iPhone to become infected with malware. However, this usually involves a fair bit of effort, typically the use of one or more vulnerabilities in iOS, the operating system for the iPhone. Such vulnerabilities have been known to sell for prices in the million-dollar range, which puts such malware out of the reach of common criminals.

However, companies like NSO can purchase – or pay expert iOS reverse engineers to find – these vulnerabilities and incorporate them into a deployment system for their malware. They then sell access to this malware to make money.

Deploying the malware typically happens via something like a malicious text message, phone call, website, etc. For example, NSO has been known to use one-click vulnerabilities in Apple’s Messages app to infect a phone if a user taps a link. It has also used more powerful zero-click vulnerabilities, capable of infecting a phone just by sending it a text. Because of this, Apple made changes in iOS 14 to harden Messages.

Messages is not the only possible avenue of attack, however, and it’s impossible to build a wall for the “walled garden” that doesn’t have some holes in it. Every bug is a potential avenue of attack, and all software has bugs. Thus, Apple has taken things to a new level by providing Lockdown Mode to high-risk individuals.

What is Lockdown Mode?

Lockdown Mode puts the iPhone into a state where it is more difficult to attack. It does this by limiting certain aspects of how the phone can be used, to block potential avenues of attack. The improvements, per Apple’s press release, include:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked when iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enroll into mobile device management (MDM), while Lockdown Mode is turned on.
Screenshot of a phone being put into Lockdown Mode.
Source: Apple

Although Apple refers to Lockdown Mode as “an extreme, optional protection,” the limitations don’t actually sound particularly difficult to live with. Most users probably won’t turn this on, but these restrictions sound like something that the average user could adapt to fairly easily.

Backing it up with big bucks

To further bolster the security of Lockdown Mode, Apple is offering an unprecedented $2 million bug bounty to anyone who can find a qualifying vulnerability that can be exploited while an iPhone is in Lockdown Mode. Vulnerability hunters everywhere will be pounding on Lockdown Mode, looking for a bug that will score them the big payout. However, this bounty will also help to ensure that the vulnerability gets disclosed to Apple rather than being sold to a company like NSO. It’s easy to do the right thing when it also earns you $2 million!

Should I use Lockdown Mode?

If you’re a journalist and you cover topics that may put a target on your back, absolutely! If you’re a defender of human rights and a critic of countries that trample on those rights, don’t think twice.

Average people, though, will have little chance of ever encountering “mercenary spyware.” However, the extra security definitely wouldn’t hurt, and it looks like it won’t cost you a lot in terms of lost functionality.

The post Apple Lockdown Mode helps protect users from spyware appeared first on Malwarebytes Labs.

Verified Twitter accounts phished via hate speech warnings

Verified Twitter accounts are once again under attack from fraudsters, with the latest phish attempt serving up bogus suspension notices.

Hijacking verified accounts on any platform is a big win for fraudsters. It gives credibility to their scams, especially when the accounts have large followings. This has been a particularly popular tactic to promote NFTs and other crypto-centric scams.

Most recently, we saw hijacked verified accounts pushing messages claiming other verified users had been flagged for spamming. In that instance, compromised accounts were made to look like members of Twitter’s support team.

Hate speech warnings via DM

This time around, the attack is less publicly visible, working its magic via DM instead of posting out in the open. The message sent to a Bleeping Computer reporter reads as follows:

Hey

Your account has been flagged as inauthentic and unsafe by our automated system, spreading hate speech is against our terms of service. We at Twitter take the security of our platform very seriously. That’s why were are suspending your account in 48h if you don’t complete the authentication process. To authenticate your account, follow the link below.

The site, hidden behind a URL shortening service, claims visitors are logging in to “Twitter help center”. Making use of Twitter APIs to call up the reporter’s test account name, it then asks for their password. A “welcome back” message alongside an image of the reporter’s profile picture makes it all seem that little bit more real.

The phishing site then asks for an email address, and appears to be checking behind the scenes to ensure you’re entering valid details. No spamming the database with deliberately incorrect information here!

The fake site displays a message which claims the account has been proven to be authentic (and in a very twisted way, it has). At this point, the phished victim likely assumes all is well and goes about their day. Meanwhile, the phisher is free to do whatever they want with the now stolen account.

Be careful out there

Whether verified or not, treat warning messages claiming to be from anyone on social media with suspicion. If they’re providing login links tied to threats of suspension, you’re better off visiting the site and contacting support directly.

The post Verified Twitter accounts phished via hate speech warnings appeared first on Malwarebytes Labs.

Google to delete location data of trips to abortion clinics

The historic overturning of Roe v. Wade in June prompted lawmakers and technology companies to respond with deep concern over the future of data. Google is one of those companies.

In a post to “The Keyword” blog last week, Google said it will act further in protecting its users’ privacy by automatically deleting historical records of visits to sensitive locations. These include abortion clinics, addiction treatment facilities, counseling centers, domestic violence shelters, fertility centers, and other places deemed as sensitive locations.

Once Google determines that a user visited a sensitive place, it will delete this data from Location History after the visit. This change will take effect in the coming weeks.

Google’s Location History is off by default, but if a user has turned it on, the company has already provided the tools they can use to easily delete part or all of their data.

Google also has plans to roll out an update allowing Fitbit users who track their periods to delete multiple menstruation logs at once.

However, in a post-Roe America, these changes may still not be enough. As The Verge points out, Google still collects a lot of user activity data, such as Search and YouTube histories, which can be used as evidence in investigations. Google didn’t mention anything about this in its blog.

Remember also that Google provides user data when served a valid court order. This is enough to get someone’s entire search history in the hands of police for investigation. Although it would not prove guilt, it’s “a liability” for women seeking assistance with abortion, especially in states where abortion has been deemed illegal.

Furthermore, Google is not the only source law enforcement could go to for evidence. Police can also access women’s health records, as HIPAA does not protect against court-issued warrants. There are also data brokers selling location data of people visiting abortion clinics. Though the datasets are allegedly anonymized, according to Vice, it’s possible to de-anonymize users from aggregate data.

The post Google to delete location data of trips to abortion clinics appeared first on Malwarebytes Labs.

IconBurst software supply chain attack offers malicious versions of NPM packages

Researchers discovered evidence of a widespread software supply chain attack involving malicious JavaScript packages offered via the npm package manager. The threat actors behind the IconBurst campaign used typosquatting to mislead developers looking for very popular packages.

npm

npm is short for Node package manager, though the name no longer quite covers what it does. npm is a package manager for the JavaScript programming language maintained by npm, Inc. It consists of a command line client, also called npm, and an online database of public and paid-for private packages, called the npm registry. The free npm registry has become the center of JavaScript code sharing and, with more than one million packages, is the largest software registry in the world.

Even non-developers may have heard of Node.js, an open-source, cross-platform, back-end JavaScript runtime environment built on Chrome’s V8 engine, which executes JavaScript code outside a web browser. npm is the default package manager for Node.js.

Malicious fakes

Researchers at ReversingLabs identified more than two dozen such npm packages. The packages dated back up to six months and contained obfuscated JavaScript designed to steal form data from individuals using applications or websites that included the malicious packages.

The malicious packages serviced downstream mobile and desktop applications as well as websites. In one case, a malicious package had been downloaded more than 17,000 times. The attackers used a typosquatting technique to trick developers into using the malicious packages.

Typosquatting

Typosquatting is a term you may have seen when reading about Internet scams. In essence it relies on users making typos when entering a site or domain name. Sometimes typosquatting includes techniques like URL hijacking and domain mimicry, but mostly it relies on intercepting typos, hence the name.

In this case, the attackers offered up packages via public repositories with names that are very similar to legitimate packages like umbrellajs and packages published by ionic.io.
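To make the trick more concrete, here is a minimal sketch (not ReversingLabs’ tooling) of how a developer might scan their own project for dependency names that sit suspiciously close to well-known packages. The list of popular names and the edit-distance threshold are illustrative assumptions.

```typescript
import { readFileSync } from "fs";

// Illustrative list of well-known package names; extend as needed.
const popularPackages = ["umbrellajs", "ionicons", "lodash", "react"];

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Read the project's declared dependencies from package.json.
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });

// Flag anything that is almost, but not exactly, a popular name.
for (const dep of deps) {
  for (const popular of popularPackages) {
    const distance = editDistance(dep, popular);
    if (distance > 0 && distance <= 2) {
      console.warn(`Possible typosquat: "${dep}" looks a lot like "${popular}"`);
    }
  }
}
```

A dependency within an edit distance of one or two of a package you never meant to install is not proof of anything, but it is a cheap signal worth a closer look.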

Supply chain attack

A supply chain attack, also called a value-chain or third-party attack, occurs when someone attacks you or your system through an outside partner or provider. Attackers can deploy supply chain attacks to exploit trust relationships between a target and external parties.

This attack can be categorized as a supply chain attack because the developer falling for the typosquatting trick is not the victim. Ultimately, the user filling out a form on a website created by the developer that used a contaminated package is the actual victim of the attack.

Obfuscated code

The researchers’ attention was drawn by the use of the same obfuscator in a wide range of npm packages over the past few months. Obfuscation, although uncommon, is not unheard of in open source development. Often obfuscation techniques aim to hide the underlying code from prying eyes, but the JavaScript obfuscator used in this attack also reduces the size of JavaScript files.

Following the obfuscation trail, the researchers found similarly named packages that could be connected to one of a handful of npm accounts.

The goal

After deobfuscation, it became clear that the authors had integrated a known login-stealing script into the malicious npm packages. The script, designed to steal information from online forms, originates from a hacking tool called “Hacking PUBG i’d”. PUBG is an online multiplayer shooter with an estimated one billion players. Some of these packages are still available for download at the time of writing.

Once again, this attack shows that the extent to which developers rely on the work of others is not matched by reliable ways to detect malicious code within open source libraries and modules.

The researchers’ blog contains a list of the malicious packages and their associated hashes, for developers who suspect they may have fallen victim to this attack.
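As a rough sketch of how such a list could be put to work, the snippet below hashes every JavaScript file under a project’s node_modules folder and compares the digests against a set of known-bad SHA-256 hashes. The hash shown is a placeholder; the real values would come from the researchers’ published indicators.

```typescript
import { createHash } from "crypto";
import { readFileSync, readdirSync, statSync } from "fs";
import { join } from "path";

// Placeholder entry; substitute the SHA-256 hashes published by the researchers.
const knownBadHashes = new Set<string>([
  "0000000000000000000000000000000000000000000000000000000000000000",
]);

// Recursively walk a directory, yielding every .js file path.
function* jsFiles(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      yield* jsFiles(full);
    } else if (full.endsWith(".js")) {
      yield full;
    }
  }
}

// Hash each file and report any match against the indicator list.
for (const file of jsFiles("node_modules")) {
  const digest = createHash("sha256").update(readFileSync(file)).digest("hex");
  if (knownBadHashes.has(digest)) {
    console.warn(`Known-bad file found: ${file}`);
  }
}
```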

Stay safe, everyone!

The post IconBurst software supply chain attack offers malicious versions of NPM packages appeared first on Malwarebytes Labs.

Discord Shame channel goes phishing

A variant of a popular piece of social media fraud has made its way onto Discord servers.

Multiple people are reporting messages of an “Is this you” nature, tied to a specific Discord channel.

The message reads as follows:

heyy ummm idk what happened of its really you but it was your name and the same avatar and you sent a girl erm **** stuff like what the ****? [url] check #shame and youll see. anyways until you explain what happened im blocking you. sorry if this is a misunderstanding but i do not wanna take risks with having creeps on my friendslist.

The server is called Shame | Exposing | Packing | Arguments.

Visitors to the channel are asked to log in via a QR code, and users of Discord are reporting losing access to their account after taking this step. Worse still, their now compromised account begins sending the same spam message to their own contacts.

Discord itself warned users over two years ago to only scan QR codes taken directly from their browser, and to not use codes sent by other users. Unfortunately this has been a concern for unwary Discord users for some time now.

Tips to keep your Discord account secure

  1. Enable two-factor authentication (2FA). While you’re doing this, download your backup codes too. Should you land on a regular phishing page and hand over login details, the attacker will still need your 2FA code to do anything with your account. Note: Some phishers are now stealing 2FA codes too, so this isn’t foolproof, but it is a good security step to have anyway.
  2. Turn on server wide 2FA for channel admins. This means that only admins with 2FA enabled will be able to make use of their available admin powers. This should hopefully keep the channels you’re in that little bit more secure.
  3. Use Privacy and Safety settings. Tick the “Keep me safe” box under “Safe direct messaging”. This means all direct messages will be scanned for age restricted content. You can also toggle “Allow direct messages from server members” to restrict individuals who aren’t on your friends list.
  4. Make use of the block and friend request features. You can tell Discord who, exactly, is able to send you a friend request. Choose from “Everyone”, “Friends of friends”, and “Server members”.
  5. Report hacked and suspicious accounts. Pretty much every option you can think of is available in the Trust & Safety section for reporting rogue accounts and bad behaviour. Individual messages can be reported, and you can see how bad actors are prevented from scraping your user data for nefarious purposes. Finally, a form exists for you to report specific bots sending harmful links.

The post Discord Shame channel goes phishing appeared first on Malwarebytes Labs.

Cloud-based malware is on the rise. How can you secure your business?

There are a lot of reasons to think the cloud is more secure than on-prem servers, from better data durability to more consistent patch management — but even so, there are many threats to cloud security businesses should address. Cloud-based malware is one of them.

Indeed, while cloud environments are generally more resilient to cyberthreats than on-prem infrastructure, malware delivered over the cloud increased by 68% in early 2021 — opening the door for a variety of different cyber attacks.  

But you might be asking yourself: Doesn’t my cloud provider take care of all of that cloud-based malware? Yes and no.

Your cloud provider will protect your cloud infrastructure in some areas, but under the shared responsibility model, your business is responsible for handling many security threats, incidents, responses, and more. That means, in the case of a cloud-based malware attack, you need to have a game plan ready.

In this post, we’ll cover four ways you can help secure your business against cloud-based malware.

How can malware enter the cloud?

One of the main known ways malware can enter the cloud is through a malware injection attack, in which a hacker attempts to inject a malicious service, code, or even virtual machines into the cloud system.

The two most common malware injection attacks are SQL injection attacks, which target vulnerable SQL servers in the cloud infrastructure, and cross-site scripting attacks, which execute malicious scripts on victim web browsers.  Both attacks can be used to steal data or eavesdrop in the cloud.
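As a quick illustration of the first attack class, what usually decides whether SQL injection is possible is how the query is built. The sketch below assumes a Postgres database reached through the node-postgres (pg) client; the table and column names are made up.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection details are read from environment variables

// Vulnerable: attacker-controlled input is spliced straight into the SQL text.
// Supplying  ' OR '1'='1  as the email would return every row in the table.
async function findUserUnsafe(email: string) {
  return pool.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Safer: the value is passed as a parameter, so the database always treats it
// as data, never as SQL syntax.
async function findUserSafe(email: string) {
  return pool.query("SELECT * FROM users WHERE email = $1", [email]);
}
```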

Malware can also get into the cloud through file uploads.

Most cloud storage providers today feature file-syncing, which is when files on your local devices are automatically uploaded to the cloud as they’re modified. So, if you download a malicious file on your local device, there’s a route from there to your business’ cloud — where it can access, infect, and encrypt company data.

In fact, malware delivered through cloud storage apps such as Microsoft OneDrive, Google Drive, and Box accounted for 69% of cloud malware downloads in 2021.

Four best practices to prevent cloud-based malware

1. Fix the holes in your cloud security

As we covered in our post on cloud data breaches, there are multiple weak points that hackers use to infiltrate cloud environments — and once they find a way into your cloud, they can drop cloud-based malware such as cryptominers and ransomware.

Fixing the holes in your cloud security should be considered one of your first lines of defense against cloud-based malware. One key best practice:

  • Set up your cloud storage correctly: This is relevant if your cloud storage is provided as Infrastructure-as-a-Service (like Google Cloud Storage or Microsoft Azure Cloud Storage). By not correctly setting up your cloud storage, you risk becoming one of the many companies that suffer a cloud data breach due to a misconfiguration.

2. Protect your endpoints to detect and remediate malware before it can enter the cloud

Let’s say you’re the average small to mid-sized company with up to 750 total endpoints (including all company servers, employee computers, and mobile devices). Let’s also say that a good chunk of these endpoints are connected to the cloud in some way — via Microsoft OneDrive, for example.

At any time, any one of these hundreds of endpoints can become infected with malware. And if you can’t detect and remediate the malware as soon as an endpoint gets infected, there’s a chance it can sync to OneDrive — where it can infect more files.

This is why endpoint detection and response is a great “second line of defense” against cloud-based malware.

Three features of endpoint detection and response that can help track and get rid of malware include:

  • Suspicious activity monitoring: EDR constantly monitors endpoints, creating a “haystack of data” that can be analyzed to pinpoint any Indicators of Compromise (IoCs).
  • Attack isolation: EDR prevents lateral movement of an attack by allowing isolation of a network segment, of a single device, or of a process on the device.  
  • Incident response: EDR can map system changes associated with the malware, thoroughly remove the infection, and return the endpoints to a healthy state.

3. Use a second-opinion cloud storage scanner to detect cloud-based malware

Even if you have fixed all the holes in your cloud security and use a top-notch EDR product, the reality is that malware can still make it through to the cloud — and that’s why regular cloud storage scanning is so important.

No matter what cloud storage service you use, you likely store a lot of data: a mid-sized company can easily have over 40TB of data stored in the form of millions of files.

Needless to say, it can be difficult to monitor and control all the activity in and out of cloud storage repositories, making it easy for malware to hide in the noise as it makes its way to the cloud. That’s where cloud storage scanning comes in.

Cloud storage scanning is exactly what it sounds like: it’s a way to scan for malware in cloud storage apps like Box, Google Drive, and OneDrive. And while most cloud storage apps have malware-scanning capabilities, it’s important to have a second-opinion scanner as well.

A second-opinion cloud storage scanner is a great second line of defense for cloud storage because it’s very possible that your main scanner will fail to detect a cloud-based malware infection that your second-opinion one catches.

4. Have a data backup strategy in place

The worst case scenario: You’ve properly configured your cloud, secured all your endpoints, and regularly scan your cloud storage — yet cloud-based malware still manages to slip past your defenses and encrypt all your files.

You should have a data backup strategy in place for exactly this kind of ransomware scenario. 

When it comes to ransomware attacks in the cloud — which can cause businesses to lose critical or sensitive data — a data backup strategy is your best chance at recovering the lost files.

There are several important things to consider when implementing a data backup strategy, according to recommendations from the Cybersecurity and Infrastructure Security Agency (CISA). In particular, CISA recommends using the 3-2-1 strategy.

The 3-2-1 strategy means that, for every file, you keep:

  • One on a workstation, stored locally for editing or on a local server, for ease of access.
  • One stored on a cloud backup solution.
  • One stored on long-term storage such as a drive array, replicated offsite, or even an old school tape drive.

Prevent cloud-based malware from getting a hold on your organization

Cloud-based malware is one of many threats to cloud security that businesses should address, and since cloud providers operate under a shared responsibility model, you need to have a game plan ready in the case of a cloud-based malware attack. In this article, we outlined how malware can enter the cloud and four things you can do to better secure your business against it. 

Interested in reading about real-life examples of cloud-based malware? Read the case study of how a business used Malwarebytes to help eliminate cloud-based threats. 

The post Cloud-based malware is on the rise. How can you secure your business? appeared first on Malwarebytes Labs.

TikTok is “unacceptable security risk” and should be removed from app stores, says FCC

Brendan Carr, a commissioner of the FCC (Federal Communications Commission), called on the CEOs of Apple and Google to remove TikTok from their app stores. In a letter dated June 24, 2022, Carr told Tim Cook and Sundar Pichai that “TikTok poses an unacceptable national security risk due to its extensive data harvesting being combined with Beijing’s apparently unchecked access to that sensitive data.”

Carr also said:

But it is also clear that TikTok’s pattern of conduct and misrepresentations regarding the unfettered access that persons in Beijing have to sensitive US user data … puts it out of compliance with the policies that both of your companies require every app to adhere to as a condition of remaining available on your app stores.

Therefore, I am requesting that you apply the plain text of your app store policies to TikTok and remove it from your app stores for failure to abide by those terms.

In a Twitter thread, Carr pointed out the national security risks TikTok poses.

Excessive data collection

TikTok is said to collect “everything”: search and browsing histories; keystroke patterns; biometric identifiers, including faceprints (something that might be used in “unrelated facial recognition technology”) and voiceprints; location data; draft messages; metadata; and data stored on the clipboard, including text, images, and videos.

Carr cited several incidents as evidence that TikTok has been dodgy about its data collection practices.

Relation to the CCP (Communist Party of China)

ByteDance, a company based in Beijing, developed TikTok. In China, it is known as Douyin. Carr mentioned in his letter to Apple and Google that ByteDance “is beholden to the Communist Party of China and required by Chinese law to comply with the PRC‘s surveillance demands.”

Senate and House committee members, cybersecurity researchers, and privacy and civil rights groups have flagged this as a concern. In 2019, two senators labeled TikTok as a “potential counterintelligence threat we cannot ignore”. The American Civil Liberties Union (ACLU) is also concerned about the social platform’s “vague” policies, especially in collecting and using biometric data.

Unclear use of collected data

It’s a non-issue for apps that are clear about collecting data, but these must also say how they use the data they collect. TikTok, it appears, is not one of those apps.

“Numerous provisions of the Apple App Store and Google Play Store policies are relevant to TikTok’s pattern of surreptitious data practices—a pattern that runs contrary to its repeated representations,” the letter reads.

“For instance, Section 5.1.2(i) of the Apple App Store Review Guidelines states that an app developer ‘must provide access to information about how and where the data [of an individual] will be used’ and ‘[d]ata collected from apps may only be shared with third parties to improve the app or serve advertising.’”

Is TikTok a “sophisticated surveillance tool”?

TikTok didn’t sit on its hands when news spread of the FCC calling for its removal from major app stores.

Speaking with CNN’s “Reliable Sources”, Michael Beckerman, VP, Head of Public Policy, Americas at TikTok, disputed a large chunk of the FCC’s claims against the social media company, predicated on the notion that Carr isn’t an expert on such issues and that the FCC doesn’t have jurisdiction over national security.

“He’s pointing out a number of areas that are simply false in terms of information that we’re collecting, and we’re happy to set the record straight,” Beckerman said.

When asked about the inaccuracies in Carr’s claims, Beckerman responded: “He’s mentioning we’re collecting browser history, like we’re tracking you across the internet. That’s simply false. It is something that a number of social media apps do without checking your browser history across other apps. That is not what TikTok does.”

“He’s talking about faceprints—that is not something we collect,” he said, explaining that the technology in their app is not for identifying individuals but for the purpose of filters, such as knowing when to put glasses or a hat on a face/head.

Concerning keystroke patterns, Beckerman said, “It’s not logging what you’re typing. It’s an anti-fraud measure that checks the rhythm of the way people are typing to ensure it’s not a bot or some other malicious activity.”

When challenged if the CCP has seen any non-public user data, he said, “We have never shared information with the Chinese government nor would we […] We have US-based security teams that manage access, manage the app, and, as actual national security agencies like the CIA during the Trump administration pointed out, the data that’s available on TikTok—because it’s an entertainment app—is not of a national security importance.”

Politicians and privacy advocates have criticized TikTok for potentially exposing US user data to China for years. To allay fears, TikTok teamed up with Oracle and began routing data of its American users to US-based servers.

This, however, doesn’t answer some questions raised when Buzzfeed News broke the story about TikTok employees in China “repeatedly” accessing US user data for at least several months. Such incidents reportedly occurred from September 2021 to January 2022, months before the Oracle data rerouting.

There is also an allegation that a member of TikTok’s trust and safety department said in a meeting that “Everything is seen in China”. A director in another meeting allegedly claimed that a colleague in China is a “Master Admin” who “has access to everything.”

“We want to be trusted,” Beckerman said during the CNN interview. “There’s obviously a lack of trust across the Internet right now, and for us, we’re aiming for the highest, trying to be one of the most trusted apps, and we’re answering questions and being as transparent as we can be.”

The post TikTok is “unacceptable security risk” and should be removed from app stores, says FCC appeared first on Malwarebytes Labs.

Update now! Chrome patches ANOTHER zero-day vulnerability

Google has released version 103.0.5060.114 for Chrome, now available in the Stable Desktop channel worldwide. The main goal of this new version is to patch CVE-2022-2294.

CVE-2022-2294 is a high-severity heap-based buffer overflow weakness in the Web Real-Time Communications (WebRTC) component which is being exploited in the wild. This is the fourth Chrome zero-day to be patched in 2022.

Heap buffer overflow

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database. Its goal is to make it easier to share data across separate vulnerability capabilities (tools, databases, and services).

A buffer overflow is a type of software vulnerability that exists when a program writes data past the boundary of an allocated area of memory, spilling into an adjacent memory region. In software exploit code, two common areas that are targeted for overflows are the stack and the heap.

The heap is an area of memory made available for use by the program. The program can request blocks of memory for its use within the heap. In order to allocate a block of some size, the program makes an explicit request by calling the heap allocation operation.
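JavaScript itself is memory-safe, so the sketch below can only simulate the idea: two logical “allocations” carved out of one block of memory, and a copy routine that trusts the caller instead of checking bounds, so writing past the end of the first region silently corrupts its neighbour.

```typescript
// One underlying block of memory standing in for a region of the heap.
const heap = new Uint8Array(16);

// Two adjacent "allocations" within it, 8 bytes each.
const bufferA = heap.subarray(0, 8);
const bufferB = heap.subarray(8, 16);

bufferB.set([1, 2, 3, 4, 5, 6, 7, 8]); // bufferB holds unrelated data

// A careless copy that trusts the caller's length instead of bufferA's size.
function copyIntoA(data: number[]): void {
  for (let i = 0; i < data.length; i++) {
    heap[i] = data[i]; // no bounds check against bufferA's 8-byte limit
  }
}

// Writing 12 bytes "into" an 8-byte buffer spills into the next allocation.
copyIntoA([9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9]);

console.log(Array.from(bufferB)); // [9, 9, 9, 9, 5, 6, 7, 8]  (corrupted)
```

In a real heap overflow, the overwritten neighbour might be heap metadata or another object entirely, which is what attackers abuse to gain control.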

The vulnerability

WebRTC on Chrome is the first true in-browser solution to real-time communications (RTC). It allows video, voice, and generic data to be sent between peers, letting developers build powerful voice- and video-communication solutions. The technology is available on all modern browsers as well as on native clients for all major platforms.

A WebRTC application will usually go through a common application flow: access the media devices, open peer connections, discover peers, and start streaming. Since Google does not disclose details about a vulnerability until everyone has had ample opportunity to install the fix, it is unclear at which stage the vulnerability exists.
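For context, this is roughly what that flow looks like in browser code, using the standard WebRTC APIs: grab the media devices, open a peer connection, attach tracks, and create an offer. The signaling step (how the offer reaches the other peer) is application-specific, so it is stubbed out here, and the STUN server is just a commonly used public example.

```typescript
// Placeholder for the app's own signaling mechanism (WebSocket, HTTP, etc.).
function sendViaSignalingChannel(description: RTCSessionDescriptionInit): void {
  console.log("send this offer to the remote peer:", JSON.stringify(description));
}

// 1. Access the media devices.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

// 2. Open a peer connection.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// 3. Attach the local tracks so they can be streamed to the remote peer.
for (const track of stream.getTracks()) {
  pc.addTrack(track, stream);
}

// 4. Render whatever the remote peer eventually sends back.
pc.ontrack = (event) => {
  const video = document.querySelector<HTMLVideoElement>("#remote");
  if (video) video.srcObject = event.streams[0];
};

// 5. Create an offer and hand it to the signaling channel so the remote peer
//    can answer and streaming can begin.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
sendViaSignalingChannel(offer);
```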

How to protect yourself

If you’re a Chrome user on Windows or Mac, you should update as soon as possible.

The easiest way to update Chrome is to allow it to update automatically, which basically uses the same method as outlined below but does not require your attention. But you can end up lagging behind if you never close the browser or if something goes wrong, such as an extension stopping you from updating the browser.

So, it doesn’t hurt to check now and then. And now would be a good time, given the severity of the vulnerability. My preferred method is to have Chrome open the page chrome://settings/help, which you can also find by clicking Settings > About Chrome.

If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is relaunch the browser in order for the update to complete.


After the update the version should be 103.0.5060.114 or later.


Since WebRTC is a Chromium component, users of other Chromium-based browsers may see a similar update.

Stay safe, everyone!

The post Update now! Chrome patches ANOTHER zero-day vulnerability appeared first on Malwarebytes Labs.

“Free UK visa” offers on WhatsApp are fakes

A student friend recently shared a WhatsApp message, unsure if it was a scam. The message claims to offer an easy route to free visas, housing, accommodation, and access to medicine.

Here’s how we know it was a scam, and where it led.

It read as follows:

UK GOVERNMENT JOB RECRUITMENT 2022: This is open to all Individuals who wants to work in UK, Here is a great chance for you all to work conveniently in the UK. UK needs over 132,000 workers in 2022. Over 186,000 Jobs are Open for applying. THE PROGRAM COVERS: Travel expense. Housing. Accommodation. Medical facilities. Applicant must be 16 years or above. Can speak basic English. BENEFIT OF THE PROGRAM: Instant work permit. Visa application assistance. All nationalities can apply. Open to all individuals and students who want to work and study. Apply here [url removed]

As you might suspect, there are multiple red flags in the above claims for anyone considering signing up.

The bogus visa claim checklist

The site gives the impression of being operated by UK Visas and Immigration, and repeats some of the errors and red flags from the WhatsApp message.

Screenshot of the scam site’s “We are hiring” quiz.

It said:

We are urgently looking for foreigners to apply for the thousands of jobs already available in the United Kingdom. This application is free and upon approval you will be given a work permit, visa, plane tickets and accommodation in the UK for free.

With even the most cursory of glances, the claims on this website simply don’t add up. Let’s dissect some of them:

  1. UKVI applications all begin here, on a gov.uk address. This scam site is not hosted on a gov.uk address.
  2. The Home Office does not cover the cost of flights or accommodation for visa applicants coming to the UK.
  3. There is no free access to medical services in the UK. Visa holders pay an annual immigration health surcharge to access NHS services. This is paid upfront at visa application time.
  4. The minimum age requirement for a skilled work visa is 18, not “16 years or above”.
  5. In most cases, applicants need to be able to demonstrate English ability through one of several available qualifications, not just “can speak basic English”.
  6. You won’t get an “Instant work permit” without paying an additional fee to make use of same day / next day processing.

It’s website quiz time

The site posed two questions, covering marital and employment status. It then asked for a first and last name, email address, and phone number. No matter what I entered, or even if I left all the form elements blank, I always got the following message:

After checking your applications, You have been approved to work in the United Kingdom 2022

I most certainly had not. It continued:

–Your UK VISA FORM will be available immediately after you click the “Invite Friends/Group” button below to share this information with 15 friends or 5 groups on WhatsApp so That They Can Also be Aware of the PROGRAM.

Unlike other sites along the same lines, this one didn’t check if I really was sending the link to people on WhatsApp, so I faked it. Did I get my visa after “sharing” the scam?

No, I did not.

A distinct lack of visa forms

The clue is definitely in the title. Instead of the promised form, I was greeted by the following message:

Screenshot of the bogus visa job recruitment page: no visas yet.

To facilitate the downloading of your UK VISA FORM you must complete this final step of Nationality Verification!

1.Click the Continent you are from and complete the given tasks, verification is by phone number or downloading, registering e.t.c.

(Remember, this step is very important, add your phone number to verify)

Do not skip any step..

I think my favourite part about all of this is that they used the logo for VISA, the financial multinational corporation. At this point, why not?

Of redirects and surveys

Whether I chose “Africa” or “Other Continent”, I was directed to several sites selling drones and watches or asking for mobile numbers, alongside yet more quizzes. The visa forms? Not so much.

Screenshot of a survey ad: this is not what a visa form looks like.

While digging around for more information on the site involved in this, I came across this article from the last day or so. Clearly, this site is doing the rounds in WhatsApp circles. There are also some additional UK visa-related scams listed there, one of which was bouncing around a few months ago. All in all, this is yet another “if it’s too good to be true” escapade and should be avoided.

The post “Free UK visa” offers on WhatsApp are fakes appeared first on Malwarebytes Labs.

5 pro-freedom technologies that could change the Internet

In the digital era, freedom is inextricably linked to privacy. After a good start, the Internet-enabled, technological revolution we are living through has hit some bumps in the road. We have already lost a lot of control over who and what has access to our data, and there are further threats to our freedom on the horizon.

It doesn’t have to be that way though, and it is not inevitable that the trend will continue. To celebrate Independence Day we want to draw your attention to five technologies that could improve life, liberty and the pursuit of happiness on the Internet.

The technologies are listed in a rough order of simplest, soonest, and most likely to happen, to most complex, furthest out, and least likely to happen.

DNS encryption

DNS encryption plugs a gap that makes it easy to track the websites you visit.

The domain name system (DNS) is a distributed address book that lists domain names and their corresponding IP addresses. When you visit a website, your browser sends a request to a DNS resolver, which responds with the IP address of the domain you’re visiting. The request is sent in plain text, which is the computer networking equivalent of yelling the names of all the websites you’re visiting out loud.

Anyone, or anything, on the same local network as you can see your DNS lookups, as can your ISP, which will happily sell your browsing history to the highest bidder. And any machine-in-the-middle (MitM) attackers between you and the DNS resolver—such as rogue Wi-Fi access points—can also silently change your plain text DNS requests and use them to direct you to malicious websites.

DNS encryption restores your privacy by making it impossible for anything other than the DNS resolver to read and respond to your queries. You still have to trust the resolver you send your requests to, but the eavesdroppers are out in the cold.

DNS encryption is new, and still relatively rare, but it is supported natively by modern versions of Windows, macOS, Android, and iOS, as well as a number of different DNS clients, proxies and applications, including the DNS Filtering module for the Malwarebytes Nebula platform. Its ascendancy seems assured.
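To make the contrast with plain text lookups concrete, here is a minimal sketch of a DNS query sent over HTTPS instead of unencrypted UDP, using Cloudflare’s public resolver and its JSON API. The domain being looked up is just an example, and error handling is left out.

```typescript
// Resolve a hostname over DNS-over-HTTPS rather than plain text port 53.
async function dohLookup(name: string): Promise<string[]> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`;
  const response = await fetch(url, {
    headers: { accept: "application/dns-json" }, // ask for the JSON wire format
  });
  const result = await response.json();
  // "Answer" holds the returned records; "data" is the IP address for A records.
  return (result.Answer ?? []).map((record: { data: string }) => record.data);
}

dohLookup("example.com").then((addresses) => console.log(addresses));
```

An on-path eavesdropper sees only an HTTPS connection to the resolver, not which names were looked up.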

Passwordless authentication

Passwordless authentication could usher in a world where we no longer rely on passwords, and that could be an enormous, unabashed win for security and peace of mind. The trouble is, that has been true for a very long time indeed, and it hasn’t happened yet.

There is reason to hope that things are finally about to change though.

Passwords are a great idea in theory that fail horribly in practice. Humans are poorly equipped to create and remember them, and demonstrably poor at building systems that handle them securely. And yet almost every Internet account requires one. The inevitable result is an epidemic of poor passwords and an entire criminal industry preying on them with relentless automated attacks.

For a long time, the successor to the password was widely presumed to be some form of biometric authentication—such as face or fingerprint recognition—but nobody could agree which one. With multiple novel, competing, costly, and incompatible alternatives, passwords remained the clear winner.

The solution to that gridlock was FIDO2.

FIDO2 is a specification that uses public key encryption for authentication. This allows users to log in to websites without sharing a secret that needs to be secured like a password. There is nothing for a programmer to secure, nothing for an attacker to guess, and nothing that can be stolen in a data breach.

The sensitive encryption work all happens on a device owned by the user, which can be a specialist hardware key, a phone, a laptop, or any other compatible device. FIDO2 doesn’t specify what the device is, or how it should be secured, only that a user must make a “gesture” to approve the authentication. This leaves device manufacturers free to use whatever “gesture” works best for them: PIN numbers, swipe patterns, and any and all forms of biometrics. The end result is a technology that allows you to log in to a website securely using Windows Hello, Apple’s Touch ID, and any number of other methods that exist now or could be created in the future.
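In the browser, this surfaces as the WebAuthn API. The sketch below shows the registration step in rough outline; the relying party details, the user handle, and the locally generated challenge are all placeholders, since in a real deployment the challenge and user information come from the server.

```typescript
// The challenge must be random bytes issued by the server; it is generated
// locally here only to keep the sketch self-contained.
const challenge = crypto.getRandomValues(new Uint8Array(32));

const credential = await navigator.credentials.create({
  publicKey: {
    challenge,
    rp: { name: "Example Site" }, // the relying party (the website)
    user: {
      id: new TextEncoder().encode("user-1234"), // placeholder user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: { userVerification: "required" }, // the FIDO2 "gesture"
  },
});

// The credential contains only the public key. It is sent to the server, which
// stores it and uses it to verify future logins; no shared secret ever leaves
// the user's device.
console.log(credential);
```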

Passwordless authentication is possible today but still extremely rare. However, it took a big step forward in May this year when Google, Microsoft, and Apple made simultaneous, coordinated pledges to increase their adoption of the FIDO2 standard.

Onion networking

Onion networking, the technology behind Tor and the “dark web”, has been around for twenty years, so it might seem an odd candidate for an emerging technology that could change everything—but what if that’s just because we’ve been thinking about it the wrong way?

Tor is a network of servers that allows software clients (like web browsers) and services (like websites) to communicate securely and anonymously. Although the software is extremely good at what it does, today it services a narrow niche of users who put privacy and security above all, and it has become strongly associated with ransomware, illegal drug markets, and other forms of unsavoury criminal activity.

According to security evangelist Alec Muffett, we are overlooking a very important aspect of this technology though. Muffett was previously a security engineer at Facebook, where he was responsible for putting the social network on Tor. Speaking to David Ruiz on a recent Malwarebytes Lock and Code podcast, he explained how he sees Tor as “a brand new networking stack for the Internet” that can “guarantee integrity, and privacy, and unblockability of communication.”

Every Tor address is also the cryptographic public key of the service you want to talk to. For example, the Facebook address is:

www.facebookwkhpilnemxj7asaniu7vnjjbiltxjqhye3mhbshg7kx5tfyd.onion

Having the public key act as the address provides cryptographic assurance that you are talking to the service you want to talk to, bypassing several layers of the OSI model, and cutting out fundamental Internet vulnerabilities, such as BGP hijacking.

We should stop thinking about Tor as just an anonymity tool, says Muffett. It should be attractive to anyone who cares about the integrity of their brand and what it has to say:

If you are in the position of providing a forum, a messenger service, or news to a mass public … where your brand name is a really important part of your value proposition, then onion networking is for you, because you can make sure that no one can mess with your traffic.

Alec Muffett, speaking to Lock and Code.

Although mainstream organizations like The New York Times, Pro Publica, Facebook, and Twitter have already embraced Tor, having a .onion site is still very much the exception. In all likelihood, it will take something quite dramatic to change that, but that doesn’t mean it can’t happen.

In 2013, Edward Snowden’s revelations about pervasive Internet surveillance triggered a huge global effort to make encrypted web traffic the norm, rather than the exception.

A similar stimulus today could tip onion networking from its niche into the mainstream.

Cryptocurrencies

People may be surprised to see cryptocurrencies appearing in our list. If cryptotrading sites are naming stadia and buying Super Bowl ads, then cryptocurrencies are already mainstream and hardly a technology for the future, surely.

Its presence near the bottom of our list tells you that isn’t how we see it.

Cryptocurrencies face a number of cyclone-force headwinds, starting with the current, across-the-board, price crash. The market cap of the biggest currencies, Bitcoin and Ether, is shrinking fast, and some cryptocurrencies have already disappeared completely; the free flow of venture capital money is likely to dry up; there are issues with scalability, scams, rug pulls, thefts from exchanges, and environmental damage; and the pseudo-anonymity blockchains provide is challenged by our ever-improving capacity to identify patterns in payments.

More importantly, from the perspective of life, liberty, and the pursuit of happiness, almost nobody is using these currencies as actual currencies—nobody is paid in Bitcoin, and nobody is using Ether to buy groceries. Remember, Bitcoin was supposed to be a peer-to-peer electronic cash system not a vehicle for speculative trading.

So why is it on our list at all?

For all the reasons to dislike them or write them off, cryptocurrencies are hard to ignore. At its core, the original cryptocurrency, Bitcoin, was supposed to be a trustless, borderless payment system that was built on top of the Internet.

What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party.

“Satoshi Nakamoto”, from Bitcoin: A Peer-to-Peer Electronic Cash System

It was a vision of what freedom might look like in the digital age.

That desire for freedom propelled Bitcoin in its early days, and the attractiveness of a private, peer-to-peer currency is undimmed, even if nobody has managed to actually build one that works yet.

The current crash will pass and the strongest ideas and technology will survive. We suspect that Satoshi’s original vision will be one of them, even if Bitcoin isn’t.

Homomorphic encryption

The cornerstone of digital privacy, security, and freedom is encryption, and the last item in our list is one of its holy grails: Encryption that never needs to be undone.

Encryption protects your data if your phone is stolen, and it makes your emails, credit card details, and WhatsApp messages tamper proof as they whizz around the Internet. And it’s what underpins all of the other things in our list.

All of the examples above have something in common: they are examples of encryption used to protect data at rest or data in transit. Moving or storing data only gets you so far though; sooner or later it has to be used. It can’t be used unless it’s decrypted, and you need to trust whatever system has access to that decrypted data.

Homomorphic encryption algorithms allow mathematical operations to be performed on encrypted data, so that it doesn’t need to be decrypted at all, ever, even when it’s being used.

The result of performing a mathematical operation on the encrypted data is the same as if the data was decrypted, subject to a mathematical operation, and the answer encrypted.
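A small taste of this property already exists in textbook (“unpadded”) RSA, which happens to be homomorphic for multiplication: multiply two ciphertexts together and you get the encryption of the product of the two plaintexts. The sketch below uses tiny toy numbers and is only an illustration of the property, not a usable scheme and certainly not fully homomorphic.

```typescript
// Modular exponentiation: (base ** exp) mod m, using BigInt throughout.
function modPow(base: bigint, exp: bigint, m: bigint): bigint {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if ((exp & 1n) === 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

// Textbook toy RSA key (p = 61, q = 53): far too small for real use.
const n = 3233n; // p * q
const e = 17n;   // public exponent
const d = 2753n; // private exponent

const encrypt = (m: bigint) => modPow(m, e, n);
const decrypt = (c: bigint) => modPow(c, d, n);

const a = 7n;
const b = 3n;

// Multiply the ciphertexts, without ever decrypting them...
const productOfCiphertexts = (encrypt(a) * encrypt(b)) % n;

// ...and decrypting the result yields the product of the plaintexts.
console.log(decrypt(productOfCiphertexts));                 // 21n
console.log(decrypt(productOfCiphertexts) === (a * b) % n); // true
```

Fully homomorphic schemes extend this idea to arbitrary computations, which is what makes them so much harder, and so much slower, to build.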

This incredible act of needle threading needs to ensure that you can’t learn anything about the data from the ciphertext (the encrypted version of the data), and that you can’t learn anything at all about it by observing the mathematical operations performed on it.

If you had access to homomorphic encryption you wouldn’t have to trust anyone you share your data with, whether they are the vendors in your organization’s supply chain, or your favorite, data-hungry social network.

Almost unbelievably, homomorphic encryption algorithms already exist. The reason you don’t have access to their almost magical properties though is that they are prohibitively slow. It currently takes days for them to perform actions that we expect to take seconds.

Although slow, these algorithms are already millions of times quicker than they were just a few years ago. And while that rate of improvement will surely decelerate, the processing power of computers is still doubling every few years.

At some point in the not-too-distant future, when these two trends meet, it could change how we think about trust and freedom in the digital age completely.

The post 5 pro-freedom technologies that could change the Internet appeared first on Malwarebytes Labs.