IT NEWS

Tech support scammers caught by their own cameras

A YouTuber has hacked into the CCTV cameras of an office used by tech support scammers and reported them to the police. The video feed from inside the office ends with the arrest of the scammers.

CCTV

The YouTuber, acting under the handle Scambaiter, turned his attention to Punjab, India, to spy on a group of tech support scammers.

“Scambaiting” means scamming the scammers, often by pretending to take their bait and wasting their time. The reasoning is that while the scammer is busy trying to reel the scambaiter in, they don’t have time to victimize someone else. In other words, it’s doing a good deed while having some fun.

Scambaiter goes a little further than simply wasting scammers’ time. He has amassed almost 1.5 million YouTube followers by “hacking back” against the scammers and exposing where and how they work—in this case by using the scammers’ own CCTV cameras against them.

Scambaiter also hacked into some of the systems the scammers were using to defraud US citizens out of thousands of dollars. So, besides footage of the scammers, his hack also included taking screenshots from the laptops that the scammers were using while “at work”.

One thing that jumps out is that this is a very small and badly secured organization. Which came in handy because it enabled Scambaiter to show us several sides of the operation.

The video

Scambaiter condensed a week’s worth of footage into a 20-minute clip. In the beginning we see the scammers at work, posing as Best Buy’s Geek Squad tech support employees.

We get a good look at how these scammers are organized and how they operate. If you didn’t know they were talking people out of their money for non-existent services, it would look like any other, legitimate, office.

During the video Scambaiter explains how he found information about the scammers and their physical location, until he had gathered enough evidence to convince the local police to spring into action.

At the end of the CCTV footage you can see the police officers enter the building, shut down the electricity on two floors, and arrest five of the main scammers.

Scambaiter then concludes the video with a police report stating the charges against the scammers, and a selection of the media coverage about the incident.

Follow the rules

In case you want to take part in the fun: in the past we have posted a guide on how to get back at spammers safely.

Even though Scambaiter’s video has a happy ending, it is important to understand that what he did is just as illegal as the tech support scammers’ activity. Hacking into computer systems is illegal unless you have permission (say, under the terms of a bug bounty) or a warrant, even if the victims are criminals and the hacking is in a good cause.

Stay safe, everyone!

The post Tech support scammers caught by their own cameras appeared first on Malwarebytes Labs.

How the FBI quietly added itself to criminals’ instant message conversations

Motherboard has disclosed some information about Operation Trojan Shield, in which the FBI intercepted messages from thousands of encrypted phones around the world. These messages are now used in courts across the world as corroborating evidence.

Operation Trojan Shield

The US Federal Bureau of Investigation (FBI), the Dutch National Police (Politie), and the Swedish Police Authority (Polisen), in cooperation with the US Drug Enforcement Administration (DEA) and 16 other countries, carried out one of the largest and most sophisticated law enforcement operations to date in the fight against encrypted criminal activities with the support of Europol.

We wrote about the 800 arrests that were made with the help of the backdoored phones. Law enforcement agencies around the world have long campaigned for encryption backdoors, so they can see what criminals are saying to each other. End-to-end encryption hides the content of messages from unauthorized readers, so that only the sender at one end and the receiver at the other end (or, more precisely, the sending and receiving devices) can read the content.

Unable to break the encryption of messages as they pass from one device to another, the FBI and the Australian Federal Police (AFP) came up with an ingenious plan. They decided to put themselves on the sending and receiving devices, by creating a phone they could eavesdrop on, and then marketing it to criminals as a secure device ideally suited to the demands of organized crime.

To that end, the FBI became secretly involved in An0m, a company that was working on an early version of an app to enable end-to-end encrypted communication.

New information

Despite several requests from defense lawyers on behalf of some of the arrested suspects, the source code of An0m was kept secret. When asked for comment, the San Diego FBI told Motherboard in a statement that

“We appreciate the opportunity to provide feedback on potentially publishing portions of the Anom source code. We have significant concerns that releasing the entire source code would result in a number of situations not in the public interest like the exposure of sources and methods, as well as providing a playbook for others, to include criminal elements, to duplicate the application without the substantial time and resource investment necessary to create such an application. We believe producing snippets of the code could produce similar results.”

After buying an An0m device on the secondary market once the law enforcement operation was announced, and obtaining a copy of the An0m APK as a standalone file, Motherboard started digging into the code.

Motherboard revealed little of the source code, to protect the various contributors who very likely had no idea what they were working on, but described the decompiled code as something thrown together in a hurry. Apparently the app was based on an existing messaging app, with freely available online tools added to complete the intelligence-gathering capabilities.

An extra end

What does become clear from the revealed code is how the law enforcement agencies were able to eavesdrop on the end-to-end encrypted messages: they simply added an extra end to each conversation. You could compare this to a BCC contact in an email, only in this case neither the sender nor the receiver had any idea that there was another end able to read the encrypted messages.

The app uses the Extensible Messaging and Presence Protocol (XMPP), an open communication protocol designed for instant messaging, presence information, and contact list maintenance. XMPP works by having each contact use a handle that looks something like an email address. For An0m, these included an XMPP account for the customer support channel that users could contact. Another was “bot”: a hidden or “ghost” contact that made copies of An0m users’ messages. Unlike the support channel, bot hid itself from users’ contact lists and operated in the background. In practice, the app scrolled through the user’s list of contacts and, when it came across the bot account, filtered it out and removed it from view, so end users could not see they were sending extra copies of their messages to a third party.
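The trick is simple enough to sketch in a few lines of Python. This is an illustration of the concept only; the `Messenger` class and the `GHOST` handle are invented here, not taken from the An0m code:

```python
GHOST = "bot@an0m.example"  # hypothetical handle for the hidden contact

class Messenger:
    """Toy model of a client with a hidden 'ghost' recipient."""

    def __init__(self, contacts):
        self.contacts = contacts   # full roster, ghost included
        self.outbox = []           # (recipient, message) pairs

    def visible_contacts(self):
        # The UI filters the ghost out, so users never see it.
        return [c for c in self.contacts if c != GHOST]

    def send(self, recipient, message):
        # Every message is silently copied to the ghost contact.
        self.outbox.append((recipient, message))
        self.outbox.append((GHOST, message))

client = Messenger(["alice@an0m.example", GHOST])
client.send("alice@an0m.example", "meet at noon")
print(client.visible_contacts())  # the ghost is hidden from the roster
print(len(client.outbox))         # yet it receives a copy of every message
```

Both ends see a normal conversation; only the hidden third end knows better.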

The post How the FBI quietly added itself to criminals’ instant message conversations appeared first on Malwarebytes Labs.

YouTube AI wrongfully flags horror short “Show for Children” as suitable for children

When content creators flag one of their own videos as inappropriate for children, we expect YouTube’s AI moderator to accept this and move on. But the video streaming bot doesn’t seem to get it. Not only can it prevent creators from correcting a miscategorization, its synthetic will is also final—no questions asked—unless the content creator appeals.

This is precisely what happened to Kris Straub, creator of the horror series Local58TV on YouTube. When he checked his account over the weekend, he spotted YouTube’s AI had erroneously marked his 3-minute video, “Show For Children”, as “Made for kids” under its “Policy reason”.

Per YouTube, “Made for kids” means:

This content has been set as made for kids to help you comply with the Children’s Online Privacy Protection Act (COPPA) and/or other applicable laws.

Features like personalized ads and comments are disabled on videos set as made for kids.

Videos that are set as made for kids are more likely to be recommended alongside other kids’ videos.

And YouTube did make it appear along with other child-friendly videos:

A still captured within the 3-minute clip of “Show For Children”.

Straub didn’t think twice about taking to Twitter to air his disbelief:

Because the video is falsely marked as safe for children, it could even end up in the “YouTube Kids” app, a separate video service that shows only filtered video clips made for kids from YouTube.

Thankfully, “Show For Children” didn’t appear in YouTube Kids search results when I tested. It’s interesting to note, however, that when I do a search of “Local58TV”, the site shows me pre-filled suggestions, as you can see below:

youtube kids search prefill

Fortunately, YouTube got back to Straub and resolved the matter. The company also allowed him to mark his video as “not made for kids”, a setting that was previously greyed out.

Straub left a question the YouTube team has yet to reply to.

I think we already know the answer to that.

The post YouTube AI wrongfully flags horror short “Show for Children” as suitable for children appeared first on Malwarebytes Labs.

Fake job offer leads to $600 million theft

Back in March, popular NFT battler Axie Infinity lay at the heart of a huge cryptocurrency theft inflicted on the Ronin network. From the Ronin newsletter:

There has been a security breach on the Ronin Network. Earlier today, we discovered that on March 23rd, Sky Mavis’s Ronin validator nodes and Axie DAO validator nodes were compromised resulting in 173,600 Ethereum and 25.5M USDC drained from the Ronin bridge in two transactions. The attacker used hacked private keys in order to forge fake withdrawals. We discovered the attack this morning after a report from a user being unable to withdraw 5k ETH from the bridge.

These validator nodes act as a safeguard against criminals making off with lots of money: to put together a bogus transaction, an attacker would need access to five out of the nine validator nodes. The successful attack happened in two stages.
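The validator scheme is a straightforward m-of-n threshold. A minimal sketch of the idea (hypothetical, not Ronin’s actual implementation):

```python
REQUIRED = 5   # signatures needed to approve a withdrawal
TOTAL = 9      # validator nodes in the scheme

def withdrawal_approved(signatures):
    """A withdrawal is only valid with signatures from 5 of the 9 validators."""
    return len(set(signatures)) >= REQUIRED

# Four compromised validators are not enough...
print(withdrawal_approved({"v1", "v2", "v3", "v4"}))        # False
# ...but a fifth key tips the balance.
print(withdrawal_approved({"v1", "v2", "v3", "v4", "v5"}))  # True
```

The scheme is only as strong as the independence of the keys: once one party controls five of them, the threshold is meaningless.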

An unwary employee provided one foot in the door. An unrevoked permission elsewhere kicked it wide open.

The trap is set

According to The Block, everything fell into place thanks to a senior engineer at the game developer. Two inside sources claim the engineer was fooled by a fake job offer. In fact, it seems multiple employees were approached and encouraged to put in applications. Scams originating from LinkedIn accounts are popular at the moment, and LinkedIn is where the scammers tried to persuade various people on the development team.

One individual is all it took to empty out a big slice of cryptocurrency funds. A job offer made after several interviews was enough to convince the victim to get on board. Perhaps the “extremely generous” compensation package offered should have set off some alarm bells. Having said that, anything digital finance related likely has huge amounts of cash available.

We rate this job offer a 4 out of 5

Unbeknownst to the engineer, everything came crashing down once they received the job offer. A booby-trapped PDF granted the attackers access to Ronin systems, and they were able to compromise 4 out of the required 5 nodes.

Just one node remained to be compromised. How did they do it?

Step up to the plate, non-revoked access. When employees leave an organisation, it’s a good idea to remove access to networks and devices. Unknown entities will happily make use of unattended credentials or permissions. Sure enough, that’s what happened here.

Nudging a node

A Decentralized Autonomous Organization (DAO) is a way for people in a community to make decisions on a project. The developers approached the Axie DAO for assistance with transactions in November 2021. Ultimately, this is where the fifth node compromise starts to take shape. The issue isn’t that the DAO exists. The issue is the permissions granted to the DAO.

From the Substack post detailing the attack:

At the time, Sky Mavis controlled 4/9 validators, which would not be enough to forge withdrawals. The validator key scheme is set up to be decentralized so that it limits an attack vector, similar to this one, but the attacker found a backdoor through our gas-free RPC node, which they abused to get the signature for the Axie DAO validator.  

This traces back to November 2021 when Sky Mavis requested help from the Axie DAO to distribute free transactions due to an immense user load. The Axie DAO allowlisted Sky Mavis to sign various transactions on its behalf. This was discontinued in December 2021, but the allowlist access was not revoked.

Who takes the blame?

In April, the US Department of the Treasury pinned this one on North Korean hacking group Lazarus. Research elsewhere details Lazarus attacks on the aerospace and defence sectors involving bogus job posts. There’s no mention of those attacks having any connection to what happened above. However, the research does highlight further recruitment scams using LinkedIn as the starting point.

Whether you’re operating in a cryptocurrency / web3 realm or not, forgotten permissions could cost you dearly. There’s also the fake job offer approach to consider. In recent months we’ve seen other game developers targeted, and deepfakes are now worming their way into the bogus job scene. Malware is rife in this sort of operation, so please be cautious around any promising new offer.

The post Fake job offer leads to $600 million theft appeared first on Malwarebytes Labs.

Report: Brazil must do more to encrypt, back up data

Federal government organisations in Brazil may need to reassess their approach to cyberthreats, according to a new report by the country’s Federal Audit Court. It outlines multiple areas of concern across 29 key areas of risk. One of the biggest problems in the cybercrime section of the report relates to backups. Specifically, the lack of backups when dealing with hacking incidents.

Backups in Brazil: An uphill struggle

Backups are an essential backstop that can help against several forms of attack, as well as mistakes and mishaps. The most obvious one of those would be ransomware. When networks are compromised and systems are locked up, victims with effective backups can try to restore their systems to a point in time before the attack.

Not having backups leaves victims with very limited options. Assuming the attackers don’t just vanish into the night, the business may decide to pay the ransom and recover the encrypted files. At best, that is a slow, manual process. If things go badly, the decryption tools may be broken and fail to recover data. In some cases, they may not even exist. At this point, an organisation is out of pocket and out of its files.

This is enough to cause showstopping issues for any organisation. And if the affected business performs critical tasks, attacks can have alarming consequences for the community at large. Healthcare and law enforcement are good examples of this.

As a result, getting up to speed on backing up data has become more prominent in recent years. In fact, not just backing up. It’s important that organisations create sensible, organised backups which can be deployed in a crisis. You can’t roll back properly if the files are disorganised and nobody can make sense of which folder goes where.
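One concrete way to keep backups organised and verifiable is to store a checksum manifest alongside each backup set, so a restore can be validated before it is trusted. A minimal sketch, assuming a simple directory-of-files layout (the `MANIFEST.json` name and layout are invented for illustration):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Hash a file incrementally, so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir):
    """Record a checksum for every file in the backup set."""
    backup_dir = Path(backup_dir)
    manifest = {str(p.relative_to(backup_dir)): sha256_of(p)
                for p in sorted(backup_dir.rglob("*")) if p.is_file()}
    (backup_dir / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))

def verify(backup_dir):
    """Re-hash the files and compare against the stored manifest."""
    backup_dir = Path(backup_dir)
    manifest = json.loads((backup_dir / "MANIFEST.json").read_text())
    return all(sha256_of(backup_dir / name) == digest
               for name, digest in manifest.items())
```

A restore procedure that calls `verify()` first catches silent corruption or tampering before the organisation bets its recovery on a broken archive.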

With this in mind, the statistics don’t make for great reading.

The numbers game

According to the report:

  • 74.6% of organizations (306 out of 410) do not have a formally approved backup policy—basic document, negotiated between the business areas (“owners” of the data/systems) and the organization’s IT, with a view to disciplining issues and procedures related to the execution of backups.
  • 71.2% of organizations that host their systems on their own servers/machines (265 out of 372) do not have a specific backup plan for their main system.
  • 66.6% of organizations that claim to perform backups (254 out of 385), despite implementing physical access control mechanisms to the storage location of these files, do not store them encrypted, which carries a risk of data leakage from the organization, which can cause enormous losses, especially if it involves sensitive and/or confidential information.
  • 60.2% of organizations (247 out of 410) do not keep their copies in at least one non-remotely accessible destination, which carries a risk that, in a cyberattack, the backup files themselves end up being corrupted, deleted and/or encrypted by the attacker or malware, rendering the organization’s backup/restore process equally ineffective.

Backing up: Not a guaranteed fix

The report notes that various initiatives already exist to get people talking about the need for both encryption and backing up. While any rise in backup numbers is a good thing, it’s not necessarily going to come close to solving problems.

One of the worst offshoots of standard ransomware attacks in the past few years is the rise of “double extortion”, where ransomware authors steal data before it’s encrypted, and then threaten to release it if the ransom isn’t paid. One of the reasons double extortion attacks came about is precisely because backups don’t work against data leaks.

For organizations that do keep backups, the challenge is how to set them up and maintain them so they do what’s expected, when they are needed most. This is surprisingly difficult.

David Ruiz, host of Malwarebytes’ Lock and Code podcast, recently spoke to backup expert Matt Crape, a technical account manager at VMware, to find out why backups often fail when it really matters, and how to ensure they don’t.

The post Report: Brazil must do more to encrypt, back up data appeared first on Malwarebytes Labs.

Apple Lockdown Mode helps protect users from spyware

Apple has announced a new feature of iOS 16 called Lockdown Mode. This new feature is designed to provide a safer environment on iOS for people at high risk of what Apple refers to as “mercenary spyware.” This includes people like journalists and human rights advocates, who are often targeted by oppressive regimes using malware like NSO Group’s Pegasus spyware.

NSO is an Israeli software firm known for developing the Pegasus spyware. Pegasus has been in the spotlight frequently in recent years, and although NSO claims to limit who is allowed to have access to its spyware, each new finding shows it is used by authoritarian nations bent on monitoring and controlling critics. In one of the more infamous cases, and a classic example of how Pegasus has been used, journalist Jamal Khashoggi’s murder has been traced to the use of Pegasus by the United Arab Emirates.

How is this possible? iPhones don’t get viruses!

Contrary to popular belief, it is, in fact, possible for an iPhone to become infected with malware. However, this usually involves a fair bit of effort, typically the use of one or more vulnerabilities in iOS, the iPhone’s operating system. Such vulnerabilities have been known to sell for prices in the million-dollar range, which puts such malware out of the reach of common criminals.

However, companies like NSO can purchase – or pay expert iOS reverse engineers to find – these vulnerabilities and incorporate them into a deployment system for their malware. They then sell access to this malware to make money.

Deploying the malware typically happens via something like a malicious text message, phone call, website, etc. For example, NSO has been known to use one-click vulnerabilities in Apple’s Messages app to infect a phone if a user taps a link. It’s also used more powerful zero-click vulnerabilities, capable of infecting a phone just by sending it a text. Because of this, Apple made changes in iOS 14 to harden Messages.

Messages is not the only possible avenue of attack, however, and it’s impossible to build a wall for the “walled garden” that doesn’t have some holes in it. Every bug is a potential avenue of attack, and all software has bugs. Thus, Apple has taken things to a new level by providing Lockdown Mode to high-risk individuals.

What is Lockdown Mode?

Lockdown Mode puts the iPhone into a state where it is more difficult to attack. This is done by limiting what is allowed in certain aspects of usage of the phone, to block potential avenues of attack. The improvements, per Apple’s press release, include:

  • Messages: Most message attachment types other than images are blocked. Some features, like link previews, are disabled.
  • Web browsing: Certain complex web technologies, like just-in-time (JIT) JavaScript compilation, are disabled unless the user excludes a trusted site from Lockdown Mode.
  • Apple services: Incoming invitations and service requests, including FaceTime calls, are blocked if the user has not previously sent the initiator a call or request.
  • Wired connections with a computer or accessory are blocked when iPhone is locked.
  • Configuration profiles cannot be installed, and the device cannot enrol into mobile device management (MDM), while Lockdown Mode is turned on.
Screenshot of a phone being put into Lockdown Mode.
Source: Apple

Although Apple refers to Lockdown Mode as “an extreme, optional protection,” the limitations don’t actually sound particularly difficult to live with. Most users probably won’t turn this on, but these restrictions sound like something that the average user could adapt to fairly easily.

Backing it up with big bucks

To further bolster the security of Lockdown Mode, Apple is offering an unprecedented $2 million bug bounty to anyone who can find a qualifying vulnerability that can be exploited while an iPhone is in Lockdown Mode. Vulnerability hunters everywhere will be pounding on Lockdown Mode, looking for a bug that will score them the big payout. However, this bounty will also help to ensure that the vulnerability gets disclosed to Apple rather than being sold to a company like NSO. It’s easy to do the right thing when it also earns you $2 million!

Should I use Lockdown Mode?

If you’re a journalist and you cover topics that may put a target on your back, absolutely! If you’re a defender of human rights and a critic of countries that trample on those rights, don’t think twice.

Average people, though, will have little chance of ever encountering “mercenary spyware.” However, the extra security definitely wouldn’t hurt, and it looks like it won’t cost you a lot in terms of lost functionality.

The post Apple Lockdown Mode helps protect users from spyware appeared first on Malwarebytes Labs.

Verified Twitter accounts phished via hate speech warnings

Verified Twitter accounts are once again under attack from fraudsters, with the latest phish attempt serving up bogus suspension notices.

Hijacking verified accounts on any platform is a big win for fraudsters. It gives credibility to their scams, especially when the accounts have large followings. This has been a particularly popular tactic to promote NFTs and other crypto-centric scams.

Most recently, we saw hijacked verified accounts pushing messages claiming other verified users had been flagged for spamming. In that instance, compromised accounts were made to look like members of Twitter’s support team.

Hate speech warnings via DM

This time around, the attack is less publicly visible, working its magic via DM instead of posting out in the open. The message sent to a Bleeping Computer reporter reads as follows:

Hey

Your account has been flagged as inauthentic and unsafe by our automated system, spreading hate speech is against our terms of service. We at Twitter take the security of our platform very seriously. That’s why were are suspending your account in 48h if you don’t complete the authentication process. To authenticate your account, follow the link below.

The site, hidden behind a URL shortening service, claims visitors are logging in to the “Twitter help center”. Making use of Twitter APIs to call up the reporter’s test account name, it then asks for their password. A “welcome back” message alongside an image of the reporter’s profile picture makes it all seem that little bit more real.

The phishing site then asks for an email address, and appears to be checking behind the scenes to ensure you’re entering valid details. No spamming the database with deliberately incorrect information here!

The fake site displays a message which claims the account has been proven to be authentic (and in a very twisted way, it has). At this point, the phished victim likely assumes all is well and goes about their day. Meanwhile, the phisher is free to do whatever they want with the now stolen account.

Be careful out there

Whether verified or not, treat warning messages claiming to be from anyone on social media with suspicion. If they’re providing login links tied to threats of suspension, you’re better off visiting the site and contacting support directly.

The post Verified Twitter accounts phished via hate speech warnings appeared first on Malwarebytes Labs.

Google to delete location data of trips to abortion clinics

The historic overturning of Roe v. Wade in June prompted lawmakers and technology companies to respond with deep concern over the future of data. Google is one of those companies.

In a post to “The Keyword” blog last week, Google said it will act further to protect its users’ privacy by automatically deleting historical records of visits to sensitive locations. These include abortion clinics, addiction treatment facilities, counseling centers, domestic violence shelters, fertility centers, and other places deemed sensitive.

Once Google determines that a user visited a sensitive place, it will delete this data from Location History after the visit. This change will take effect in the coming weeks.
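Conceptually, this is a filter over location records. A toy sketch of the idea in Python (the category names and record format here are invented for illustration; Google has not published how its classifier works):

```python
# Hypothetical category labels for places deemed sensitive.
SENSITIVE = {"abortion clinic", "addiction treatment", "counseling center",
             "domestic violence shelter", "fertility center"}

def scrub_history(records):
    """Drop any visit whose place category is deemed sensitive."""
    return [r for r in records if r["category"] not in SENSITIVE]

history = [
    {"place": "Coffee Shop", "category": "cafe"},
    {"place": "City Clinic", "category": "abortion clinic"},
]
print(scrub_history(history))  # only the cafe visit remains
```

The hard part in practice is not the deletion itself but reliably classifying a visit as sensitive in the first place.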

Google’s Location History is off by default, but if a user has turned it on, the company has already provided the tools they can use to easily delete part or all of their data.

Google also has plans to roll out an update allowing Fitbit users who log their periods to delete multiple logs at once.

However, in a post-Roe America, these changes may still not be enough. As The Verge points out, Google still collects a lot of user activity data, such as Search and YouTube histories, which can be used as evidence in investigations. Google didn’t mention anything about this in its blog.

Remember also that Google provides user data when served a valid court order. This is enough to get someone’s entire search history in the hands of police for investigation. Although it would not prove guilt, it’s “a liability” for women seeking assistance with abortion, especially in states where abortion has been deemed illegal.

Furthermore, Google is not the only source law enforcement could go to for evidence. Police can also access women’s health records, as HIPAA does not protect against court-issued warrants. There are also data brokers selling location data of people visiting abortion clinics. Though the datasets are allegedly anonymized, according to Vice, it’s possible to de-anonymize users from aggregate data.

The post Google to delete location data of trips to abortion clinics appeared first on Malwarebytes Labs.

IconBurst software supply chain attack offers malicious versions of NPM packages

Researchers discovered evidence of a widespread software supply chain attack involving malicious JavaScript packages offered via the npm package manager. The threat actors behind the IconBurst campaign used typosquatting to mislead developers looking for very popular packages.

npm

npm is short for Node package manager, a name that no longer quite covers what it does. npm is a package manager for the JavaScript programming language maintained by npm, Inc. It consists of a command line client, also called npm, and an online database of public and paid-for private packages, called the npm registry. The free npm registry has become the center of JavaScript code sharing and, with more than one million packages, the largest software registry in the world.

Even non-developers may have heard of Node.js, an open-source, cross-platform, back-end JavaScript runtime environment built on Chrome’s V8 engine that executes JavaScript code outside a web browser. npm is the default package manager for Node.js.

Malicious fakes

Researchers at ReversingLabs identified more than two dozen such npm packages. Some had been available for up to six months, and they contain obfuscated JavaScript designed to steal form data from individuals using applications or websites that include the malicious packages.

The malicious packages serviced downstream mobile and desktop applications as well as websites. In one case, a malicious package had been downloaded more than 17,000 times. The attackers used a typosquatting technique to trick developers into using the malicious packages.

Typosquatting

Typosquatting is a term you may have seen when reading about Internet scams. In essence it relies on users making typos when entering a site or domain name. Sometimes typosquatting includes techniques like URL hijacking and domain mimicry, but mostly it relies on intercepting typos, hence the name.

In this case, the attackers offer up packages via public repositories with names that are very similar to legitimate packages like umbrellajs and packages published by ionic.io.
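Defenders can screen a dependency list for names suspiciously close to popular packages, for instance with a string-similarity check. A rough sketch (the similarity threshold and package list are assumptions for illustration, not from the researchers’ tooling):

```python
from difflib import SequenceMatcher

# A short, illustrative allowlist of well-known package names.
POPULAR = ["umbrellajs", "ionicons", "react", "lodash"]

def looks_typosquatted(name, threshold=0.85):
    """Flag names that are near, but not equal to, a popular package."""
    for known in POPULAR:
        if name == known:
            return False    # the real thing
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return True     # suspiciously close to a popular name
    return False

print(looks_typosquatted("umbrellaks"))  # True: one letter off umbrellajs
print(looks_typosquatted("lodash"))      # False: exact match
```

A check like this belongs in CI, before a new dependency is ever installed, since by install time the malicious package’s scripts may already have run.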

Supply chain attack

A supply chain attack, also called a value-chain or third-party attack, occurs when someone attacks you or your system through an outside partner or provider. Attackers can deploy supply chain attacks to exploit trust relationships between a target and external parties.

This attack can be categorized as a supply chain attack because the developer who falls for the typosquatting trick is not the ultimate victim. That is the user filling out a form on a website created by a developer who used a contaminated package.

Obfuscated code

The researchers’ attention was drawn by the use of the same obfuscator in a wide range of npm packages over the past few months. Obfuscation, although uncommon, is not unheard of in open source development. Often obfuscation techniques aim to hide the underlying code from prying eyes, but the Javascript Obfuscator used in this attack also reduces the size of JavaScript files.
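To illustrate the general pattern (this is a toy example, not the actual obfuscator used in IconBurst): encoding string literals means nothing incriminating appears in the shipped source, at the cost of a telltale decoding step that reviewers can learn to spot.

```python
import base64

def obfuscate(s):
    """Hide a string literal so it never appears verbatim in the source."""
    return base64.b64encode(s.encode()).decode()

def deobfuscate(s):
    return base64.b64decode(s).decode()

# A hypothetical exfiltration endpoint, hidden from casual review:
hidden = obfuscate("https://collector.example/forms")
print("collector" in hidden)   # False: nothing readable in the shipped code
print(deobfuscate(hidden))     # a reviewer has to decode it to see the URL
```

The decode call itself is the fingerprint: the researchers followed exactly this kind of shared obfuscation machinery across packages.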

Following the obfuscation trail, the researchers found similarly named packages that could be connected to a handful of npm accounts.

The goal

After deobfuscation, it became clear that the authors had integrated a known login-stealing script into the typosquatted packages. The script, designed to steal information from online forms, originates from a hacking tool called “Hacking PUBG i’d”. PUBG is an online multiplayer shooter with an estimated one billion players. Some of these packages were still available for download at the time of writing.

Once again, this attack shows that the trust developers place in the work of others is not backed by an effective way to detect malicious code within open source libraries and modules.

The researchers’ blog contains a list of packages and associated hashes of the malicious packages for developers that suspect they may have fallen victim to this attack.

Stay safe, everyone!

The post IconBurst software supply chain attack offers malicious versions of NPM packages appeared first on Malwarebytes Labs.

Discord Shame channel goes phishing

A variant of a popular piece of social media fraud has made its way onto Discord servers.

Multiple people are reporting messages of an “Is this you” nature, tied to a specific Discord channel.

The message reads as follows:

heyy ummm idk what happened of its really you but it was your name and the same avatar and you sent a girl erm **** stuff like what the ****? [url] check #shame and youll see. anyways until you explain what happened im blocking you. sorry if this is a misunderstanding but i do not wanna take risks with having creeps on my friendslist.

The server is called Shame | Exposing | Packing | Arguments.

Visitors to the channel are asked to log in via a QR code, and users of Discord are reporting losing access to their account after taking this step. Worse still, their now compromised account begins sending the same spam message to their own contacts.

Discord itself warned users over two years ago to only scan QR codes taken directly from their browser, and to not use codes sent by other users. Unfortunately this has been a concern for unwary Discord users for some time now.

Tips to keep your Discord account secure

  1. Enable two-factor authentication (2FA). While you’re doing this, download your backup codes too. Should you land on a regular phishing page and hand over login details, the attacker will still need your 2FA code to do anything with your account. Note: Some phishers are now stealing 2FA codes too, so this isn’t foolproof, but it is a good security step to have anyway.
  2. Turn on server wide 2FA for channel admins. This means that only admins with 2FA enabled will be able to make use of their available admin powers. This should hopefully keep the channels you’re in that little bit more secure.
  3. Use Privacy and Safety settings. Tick the “Keep me safe” box under “Safe direct messaging”. This means all direct messages will be scanned for age restricted content. You can also toggle “Allow direct messages from server members” to restrict individuals who aren’t on your friends list.
  4. Make use of the block and friend request features. You can tell Discord who, exactly, is able to send you a friend request. Choose from “Everyone”, “Friends of friends”, and “Server members”.
  5. Report hacked and suspicious accounts. Pretty much every option you can think of is available in the Trust & Safety section for reporting rogue accounts and bad behaviour. Individual messages can be reported, and you can see how bad actors are prevented from scraping your user data for nefarious purposes. Finally, a form exists for you to report specific bots sending harmful links.
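As an aside on step 1: the 2FA codes an authenticator app generates are usually time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of how one is derived, using only the Python standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59
# yields the 8-digit code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds, a stolen password alone is useless; that said, as noted above, real-time phishing kits can relay a freshly entered code, so 2FA raises the bar rather than eliminating the risk.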

The post Discord Shame channel goes phishing appeared first on Malwarebytes Labs.