IT NEWS

Potential cybersecurity impacts of Russia’s invasion of Ukraine

On Thursday night, Russia launched a military invasion of its neighbor and former Soviet Union member Ukraine, drawing a broad rebuke from international leaders, along with significant protest from the Russian public.

The toll of human life from this war is unknown, and, like the many international acts of aggression that have preceded it, future figures and statistics will not, alone, make sense of it. The threats and dangers posed by this conflict will be borne by the combatants and the people of Ukraine, and they are in our thoughts. Our collective priority must be people’s physical safety, but Russia’s assault could also produce a range of cybersecurity-related risks that organizations and people will need to protect themselves against, starting today.

Here are some of the ways in which Russia’s invasion of Ukraine may impact cybersecurity, and what organizations can do to stay safe in a continually evolving crisis.

The risk of increased stakes

In tandem with the physical strikes against Ukraine, a piece of wiper malware first detected by researchers at Symantec and ESET has been targeting organizations in Ukraine. Analyzed by SentinelOne, the malware has been named HermeticWiper, and it differentiates itself from typical malware in one important way: those responsible for it aren’t looking for any payment—they just want to do damage.

(AV-Comparatives quickly tested several known anti-malware and antivirus products against HermeticWiper and its variants and found that Malwarebytes, among others, detected the malware.)

Current analyses of HermeticWiper reveal that the malware is being delivered in highly targeted attacks in Ukraine, Latvia, and Lithuania. Its operators appear to exploit vulnerabilities in external-facing servers and use compromised account credentials to gain access and spread the malware further.

These tactics are nothing new, and familiar cybersecurity best practices around privileged access still hold true. But here, the stakes have changed. Even in the worst-case scenario of a ransomware attack, there is at least the promise (which could admittedly be false) of a decryption key that can be purchased for a price. With wiper malware, there is no such opportunity.

As described by Brian Krebs on his blog:

“Having your organization’s computers and servers locked by ransomware may seem like a day at the park compared to getting hit with ‘wiper’ malware that simply overwrites or corrupts data on infected systems.”

The risk of collateral damage

Russia’s proclivity for cyber warfare is well documented. In the past, the country has been credibly blamed or proven responsible for several cyberattacks against Ukraine and its neighbors, including DDoS attacks in Estonia in 2007, Georgia in 2008, and Kyrgyzstan in 2009. Russia is also believed to have been responsible for an email spam campaign against Georgia in 2008, and for the delivery of the “Snake” malware against Ukraine’s government in 2014. And in 2015 and 2016, when Ukraine’s power grid suffered two separate shutdowns caused by the malware variants BlackEnergy and Industroyer/CrashOverride, much of the evidence reportedly pointed back to Russia.

Though these attacks, like the current attacks involving HermeticWiper, were highly targeted, the idea of “tidy” cyber warfare is a farce.

In June 2017, Russia—as concluded by the CIA just months later—unleashed a cyberattack on Ukraine that spilled out into the world. The cyberattack involved a piece of malware reportedly developed by Russia’s military intelligence agency the GRU, called NotPetya. Though it presented itself as a common piece of ransomware, it actually worked more like a wiper, destroying the data of its victims, which included banks, energy firms, and government officials.

But the attack, which was reportedly carried out to harm Ukraine’s financial system, spread out, hitting networks in Denmark, India, and the United States.

It was at the time the most devastating cyberattack in history, costing the shipping company Maersk a reported $300 million, and the pharmaceutical giant Merck a reported $870 million.

Though it’s impossible to predict what type of collateral damage could occur, the US Cybersecurity and Infrastructure Security Agency has released a cybersecurity guide for all organizations in the US to follow during this turbulent time. You can read that guide, called Shields Up, here.

The risk of escalation

As Ukraine defends itself against Russian forces, world leaders face a difficult decision. Should they deliver material support to Ukraine, Russia may retaliate against them with cyberattacks of its own, and the brunt of those attacks is unlikely to fall on world leaders themselves. Instead, the “crossfire” between national cyber-fronts will likely inflict harm on everyday individuals and businesses.

Already, this decision has produced a wrinkle, as world leaders are not just defending themselves against Russia’s cyber-offensive regimes, but also against known ransomware gangs that have quickly sworn allegiance to Russia’s cause.

On February 25, the Conti ransomware group announced that it would retaliate against any known physical or cyberattacks against Russia. As we wrote on Malwarebytes Labs:

“Any doubt that some of the world’s most damaging ransomware groups were aligned with the Kremlin, this sort of allegiance will put an end to it.”

The group issued a clarification about an hour later, attempting to reframe its “full support of Russian government” into “we do not ally with any government”, but there can be no doubt about the threat the group poses.

Unfortunately, escalation seems likely, as countries ramp up economic sanctions against Russia and as the US walks a delicate line on its own cyber initiatives. On February 24, multiple White House officials denied an earlier NBC News report that the Biden Administration was considering multiple “options” for cyber engagement “on a scale never before contemplated.”

According to White House Press Secretary Jen Psaki, who wrote on Twitter, NBC’s “report on cyber options being presented to @POTUS is off base and does not reflect what is actually being discussed in any shape or form.”

These denials, however, preceded a more recent statement made by President Joe Biden this week, in which he said that “If Russia pursues cyberattacks against our companies, our critical infrastructure, we’re prepared to respond. For months, we’ve been working closely with the private sector to harden our cyber defenses [and] sharpen our response to Russian cyberattacks.”

The risk of misinformation

Already, countless videos have begun circulating online that either make unproven claims or repeat claims that have been specifically debunked. Earlier today, a video that purported to show a Ukrainian fighter pilot shooting down Russian aircraft was proven to be fake: a product of a simulation game called Digital Combat Simulator.

Though that video was developed as an “homage” to the so-called “Ghost of Kyiv,” social media companies have been combatting a Kremlin-backed disinformation campaign taking place on Twitter, Facebook, YouTube, and TikTok.

According to recent reporting from Politico:

“Russia-backed media reports falsely claiming that the Ukrainian government is conducting genocide of civilians ran unchecked and unchallenged on Twitter and on Facebook. Videos from the Russian government — including speeches from Vladimir Putin — on YouTube received dollars from Western advertisers. Unverified TikTok videos of alleged real-time battles were instead historical footage, including doctored conflict-zone images and sounds.”

Users should treat any viral videos and news with caution, particularly during this conflict, as the primary aggressor has a proven history of information warfare. It is also worth remembering that during wartime, even reporting from reputable sources may be based on inaccurate, incomplete, or out-of-date information.

The risk of scams

In 2020, as COVID-19 infections surged to the point of being declared a global pandemic, online scammers pounced, sending bogus emails asking for donations to fake charities and registering thousands of COVID-19-related domains to trick unwitting victims out of their money or their account credentials.

With Russia’s invasion of Ukraine, the same strategy will likely resurface, as online scammers constantly seek the latest crisis to exploit.

When asked on Twitter for advice on which organizations to donate to in order to help Ukraine, the user @RegGBlinker said that, after she’d read through a list of such organizations, she found many that raised suspicions.

The same Twitter user has already compiled a thread that links to multiple other Twitter users who have personally offered their cybersecurity help to small-to-medium-sized businesses (SMBs) affected by the attacks in Ukraine.

At the same time, several companies and organizations have begun offering their own support. F-Secure, for example, is offering its VPN tool for free to anyone in Ukraine, and The Tor Project has released a support channel for Russian-speaking users who want help in setting up Tor.

The full thread on support can be found here.

For any other donation requests that users think might be a scam, apply the same rules that protect against phishing emails: are there any misspellings, grammar mistakes, unknown senders, or unknown charities involved in the request? Check before handing over any money.
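Those checks can even be roughly mechanized. The sketch below is a toy illustration, not a vetted tool: the charity domains and the 0.8 similarity threshold are assumptions chosen for demonstration, and a real check should rely on verified charity registries rather than a hardcoded list.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist for demonstration only; a real check should use
# a verified charity registry, not a hardcoded set.
TRUSTED = {"redcross.org", "unicef.org", "savethechildren.org"}

def check_donation_url(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTED:
        return "trusted"
    for good in TRUSTED:
        # A domain that is almost, but not quite, a trusted one is a
        # classic typosquat and deserves extra suspicion.
        if SequenceMatcher(None, domain, good).ratio() > 0.8:
            return f"suspicious lookalike of {good}"
    return "unknown: verify independently before donating"

print(check_donation_url("https://www.redcross.org/donate"))
print(check_donation_url("https://redcros.org/donate"))
```

The same "near miss is worse than no match" idea underlies many real typosquatting detectors; the point of the sketch is simply that an almost-familiar name deserves more scrutiny, not less.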

The risk of focusing too heavily on Ukraine

While Ukraine is in crisis, several online threat actors have continued their own assault campaigns.

On February 24, multiple outlets reported that a ransomware gang that the cybersecurity firm Mandiant tracks as “UNC2596” was exploiting vulnerabilities in Microsoft Exchange to deliver its preferred ransomware, colloquially dubbed “Cuba.” On the same day, the US Cybersecurity and Infrastructure Security Agency (CISA) announced that it had spotted “malicious cyber operations by Iranian government-sponsored advanced persistent threat (APT) actors known as MuddyWater.” Those attacks were targeting both government and private-sector organizations in Asia, Africa, Europe, and North America.

An international humanitarian crisis is in no way a cause for a pause among online threat actors. Organizations should follow the same guidance as before to protect themselves from the most common online threats.

As CISA Director Jen Easterly warned on Twitter:

“Even as we remain laser-focused on Russian malicious cyber activity, we cannot fail to see around the corners.”

The post Potential cybersecurity impacts of Russia’s invasion of Ukraine appeared first on Malwarebytes Labs.

“Ethnicity recognition” tool listed on surveillance camera app store built by fridge-maker’s video analytics startup

The bizarre promotional video promises “Face analysis based on best of breed Artificial Intelligence algorithms for Business Intelligence and Digital Signage applications.” What follows is footage of a woman pushing her hair behind her ears, a man grimacing and baring his teeth, and an actor in a pinstripe suit being slapped in the face against a green screen. Digitally overlaid on each person’s face are colored outlines of rectangles with supposed measurements displayed: “F 25 happiness,” “caucasian_latin,” “M 38 sadness.”

The commercial reel advertises just one of the many video analytics tools available for download on an app store run by the Internet of Things startup Azena, itself a project of the German kitchen appliance maker Bosch.

Bosch, known more for its line of refrigerators, ovens, and dishwashers, also develops and sells an entire suite of surveillance cameras. Those surveillance cameras have become increasingly “smart,” according to recent reporting from The Intercept, and to better equip them with smart capabilities, Bosch has tried to emulate the success of the smartphone—offering an app store through Azena where users can download and install new, developer-created tools onto Bosch camera hardware.

According to Bosch and Azena, the apps are safe, the platform is secure, and the entire project is innovative.

“I think we’re just at the beginning of our development of what we can use video cameras for,” said Azena CEO Hartmut Schaper, in speaking with The Intercept.

Facial recognition’s flaws

Many of the available apps on the Azena app store claim to provide potentially useful analytics, like alerting users when fire or smoke is detected, monitoring when items are out of stock on shelves, or checking for unattended luggage at an airport. But others veer into the realm of pseudoscience, claiming to be able to scan video footage to detect signs of “violence and street fighting,” and, as The Intercept reported, offering up “ethnicity detection, gender recognition, face recognition, emotion analysis, and suspicious behavior detection.”

Such promises on video analysis have flooded the market for years, but their accuracy has always been suspect.

In 2015, the image recognition algorithm rolled out in Google Photos labeled Black people as gorillas. In 2018, the organization Big Brother Watch found that the facial recognition technology used by the UK’s Metropolitan Police at the Notting Hill Carnival registered a mismatch 98 percent of the time. In the same year, the American Civil Liberties Union scanned the face of every member of the US Congress against a database of alleged criminal mugshots using Amazon’s facial recognition technology and found that it made 28 erroneous matches.

When it comes to analyzing video footage to produce more nuanced results, like emotional states or an unfounded calculation of “suspicion,” the results are equally bad.

According to a recent report from Article 19, an organization that defends the global right to freedom of expression, “emotion recognition technology is often pseudoscientific and carries enormous potential for harm.”

One need look no further than the promotional video described earlier. In the span of less than one second, the actor being slapped in the face goes from being measured as “east_asian” and “M 33 sadness” to “caucasian_latin” and “M 37 sadness.”

Of equal concern for the apps are the security standards put into place by Azena on its app store.

Security and quality concerns

According to documentation viewed by The Intercept, Azena reviews incoming, potential apps for their “data consistency” and the company also “performs ‘a virus check’ before publishing to its app store. ‘However,’ reads the documentation, ‘we do not perform a quality check or benchmark your app.’”

That process is a little different from the Apple App Store and the Google Play Store.

“When it comes to Apple, there’s definitely more than just a virus scan,” said Thomas Reed, director of Mac and Mobile at Malwarebytes. “From what I understand, there’s a multi-step process designed to flag both App Store rule violations and malicious apps.”

That doesn’t mean that junk apps don’t end up on the Apple App Store, Reed said—it just means that there’s a known, public process about what types of apps are and are not allowed. And that same premise is true for the Google Play Store, as Google tries to ensure that submitted apps do not break an expansive set of policies meant to protect users from being scammed out of money, for example, or from invasive monitoring. In 2020, for instance, Google implemented stricter controls against stalkerware-type applications.

According to The Intercept’s reporting on Azena though, the company’s review process relies heavily on the compliance of its developers. The Intercept wrote:

“Bosch and Azena maintain that their auditing procedures are enough to weed out problematic use of their cameras. In response to emailed questions, spokespeople from both companies explained that developers working on their platform commit to abiding by ethical business standards laid out by the United Nations, and that the companies believe this contractual obligation is enough to rein in any malicious use.

At the same time, the Azena spokesperson acknowledged that the company doesn’t have the ability to check how their cameras are used and doesn’t verify whether applications sold on their store are legal or in compliance with developer and user agreements.”

The Intercept also reported that the operating system used on modern Bosch surveillance cameras could potentially be out of date. The operating system is a “modified version of Android,” The Intercept reported, which feasibly means that Bosch’s cameras could receive some of the same updates that Android receives. But when The Intercept asked a cybersecurity researcher to take a look at the updates that Azena has publicized, that researcher said the updates only accounted for vulnerabilities patched as late as 2019.

In speaking with The Intercept, Azena’s Schaper denied that his company is failing to install necessary security updates, and he explained that some of the vulnerabilities in the broader Android ecosystem may not apply to the cameras’ operating system because of features that do not carry from one device to another, like Bluetooth connectivity.

A bigger issue

Malwarebytes Labs has written repeatedly about invasive surveillance—from intimate partner abuse to targeted government spying—but the mundane work of security camera analysis often gets overlooked.

It shouldn’t.

With the development of the Azena app platform and its many applications, an entire class of Internet of Things devices—surveillance cameras—has become a testing ground for video analysis tools that have little evidence to support their claims. Emotion recognition tools are nascent and largely unscientific. “Ethnicity recognition” seems forever stuck in the past, plagued by earlier examples of a video game console that couldn’t recognize dark-skinned players and a soap dispenser that famously failed to work for a Facebook employee in Nigeria. And “suspicious behavior” detection relies on someone, somewhere, determining what “suspicious” is, without having to answer why they feel that way.

Above all else, the very premise of facial recognition has failed to prove effective, with multiple recent experiments showing embarrassing failure rates.

This is not innovation. It’s experimentation without foresight.


Hive ransomware: Researchers figure out a method to decrypt files

Files encrypted by ransomware can’t be recovered without the decryption key, provided the encryption has been done properly. But that doesn’t seem to be the case for Hive ransomware. Researchers from Kookmin University in South Korea have published a method for decrypting the data scrambled by Hive.

Under normal circumstances, victims have to pay a ransom to get the private key that enables them to decrypt their files. But the researchers managed to exploit a flaw in the encryption routine that allowed them to recover the master key, making it possible to decrypt all of a victim’s files that were encrypted in the same session.

Hive ransomware

Hive ransomware has been around since June 2021. It is a typical targeted ransomware-as-a-service (RaaS) operation that uses the threat of publishing exfiltrated data as extra leverage to get victims to pay. The group is known to work with affiliates that use various methods to compromise company networks.

In August 2021, the FBI published a warning about Hive ransomware, sharing tactics, techniques, and procedures (TTPs), indicators of compromise (IOCs), and mitigation advice.

The flaw

The cryptographic vulnerability identified by the researchers lies in the mechanism by which the master keys are generated and stored. A master key is generated as one of the first steps in the encryption process. This master key is then used to generate a keystream for the data encryption process.

Rather than encrypting each file in full, the ransomware encrypts only selected portions of it. Two keystreams are extracted from the master key at two random offsets, then combined with XOR to create the encryption keystream. When a file is encrypted, pointers to the locations of the keystreams within the master key are stored in the filename.

Since the keystreams get partially reused for every encrypted file, the researchers figured out that with enough data they could “guess” the keystreams. But to successfully decrypt the files they also needed:

  • Some of the original files corresponding to encrypted files; or
  • Several encrypted files with known signatures, such as .pdf, .xlsx, or .hwp.

If the researchers had either of those, they could collect keystreams and begin recovering the master key. Finding corresponding unencrypted files is easier than you might think because, unlike other ransomware, Hive encrypts the Program Files, Program Files (x86), and ProgramData directories, which commonly store software unrelated to the operating system. These software packages and installation files can easily be obtained on the Internet.
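The core weakness the researchers exploited, keystream reuse, can be illustrated with a toy example. The sketch below is not Hive’s actual file format or key sizes; it simply shows why a known plaintext/ciphertext pair under a reused XOR keystream reveals that keystream, which then decrypts other files encrypted with it.

```python
# Toy illustration of keystream reuse, not Hive's actual file format or
# key sizes: a known plaintext/ciphertext pair under a reused XOR
# keystream reveals that keystream, which then decrypts other files.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(64))                       # stand-in for key material
known_plain = b"%PDF-1.7 header of a file we can also find unencrypted"
secret_plain = b"contents of some other file hit in the same session"

ct_known = xor_bytes(known_plain, keystream)
ct_secret = xor_bytes(secret_plain, keystream)

# Step 1: a plaintext/ciphertext pair leaks the keystream (P XOR C = K).
recovered_ks = xor_bytes(known_plain, ct_known)

# Step 2: the recovered keystream decrypts the other ciphertext.
recovered = xor_bytes(ct_secret, recovered_ks)
assert recovered == secret_plain[: len(recovered)]
print(recovered.decode())
```

The researchers’ actual work is harder than this, since they had to solve many such XOR relations at once to reassemble a partially known master key, but the reuse of key material is what makes it possible at all.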

Decryption success rate

By running some experiments, the researchers estimated how accurately they could reconstruct the master key, and how many encrypted files could be recovered with such a partially known master key.

When 92% of the master key was recovered, the researchers decrypted approximately 72% of the files. When 96% of the master key was restored, they decrypted around 82% of the files, and when 98% was restored, approximately 98% of the files were successfully decrypted.

Using the method proposed by the researchers, usually more than 95% of the master key used for generating the encryption keystream was recovered, and a majority of the encrypted files could be recovered by using the recovered master key.

How does this help victims?

The researchers said:

“The decryption method is feasible without access to the attacker’s information, using just encrypted files. We obtained the master key by solving numerous equations for XOR operations acquired from the encrypted files. We expect that our method will be helpful for individuals and enterprises damaged by the Hive ransomware.”

This research may seem very theoretical for now, but you can rest assured that others are already working to turn it into a practical decryptor that victims of the Hive ransomware can use to get their files back.

Often, working decryptors are posted on the NoMoreRansom website, a project in which law enforcement and IT security companies have joined forces to disrupt cybercriminal businesses built on ransomware.

We will keep you updated if a working decryptor is created based on this research.


How to update your drivers and when you need to

Many software vendors have a driver updater in their arsenal. But is it really that important to have the latest computer drivers? Where do you get them? And how do you go about updating?

Driver updates fix security and compatibility problems, errors, broken code, and sometimes even add features to the hardware. But we tend to forget about the need for the latest drivers as long as our systems are working fine, which is understandable as the procedure is not always clear and we all know the risk of making things worse.

How do I know if my drivers need updating?

Generally speaking, if your system is working properly and you don’t get prompted about an update, you will hardly ever feel the need to update your drivers. And as long as there are no security issues with the drivers you have, that is fine.

Device drivers are essential pieces of software that help different hardware components work smoothly on your system. These device drivers have often been installed by the system manufacturer. If your system has a hardware issue, it is likely to be a device driver problem. For devices that you connect to your system, for example a USB mouse, the Operating System (OS) can usually automatically check if there are (new) drivers available for those devices. For example, Windows Update can be set to look for updated drivers.

How do I update my drivers in Windows 10?

To quickly update device drivers using Windows Update, use these steps:

  1. Open Settings.
  2. Click on Update & Security.
  3. Click on Windows Update.
  4. Click the Check for updates button.
  5. Click the View optional updates option.
  6. Click the Driver updates tab.
  7. Select the driver you want to update.
  8. Click the Download and install button.
Once you complete the steps, the newer driver will be downloaded and installed automatically on your device. Many device drivers will require a reboot to complete the installation.

Do Windows 10 and 11 install drivers automatically?

In Windows 10 and 11, you can choose whether to let Windows automatically download driver software or do it yourself. Automatic updating is the default and the easiest method: Windows routinely checks for driver updates and installs them. So, unless you are using some niche devices, the built-in Windows Update service on your PC generally keeps most of your drivers up to date in the background.

Manually installing drivers on macOS

Many drivers on macOS systems are installed simply by updating your Mac, but third-party devices often require an additional driver installation.

  1. Download the correct driver from the manufacturer’s site.
  2. Double-click on the driver and extract it.
  3. Open the folder and run the .pkg install file.
  4. In some cases, a warning message will pop up. For a properly signed and notarized installer, this message is shown only when the Allow apps downloaded from setting in System Preferences > Security & Privacy > General is set to App Store only. To get past it, go to System Preferences > Security & Privacy and click Open Anyway to allow the driver installer to run.
  5. After the driver installer is allowed, it will be installed automatically. During the process, an authentication window will pop up asking for a username and password; use an administrator account on your Mac.
  6. Click Install Software to continue the process.
  7. Click Continue Installation, and Restart if prompted, to finish the installation process.
  8. A driver on macOS is often implemented as either a kernel extension or a system extension. Either of those requires an extra step: go to System Preferences > Security & Privacy > General, unlock the pane by clicking the lock in the lower-left corner and entering your password, then click the Allow button.

On recent versions of macOS, when the installer is not properly signed and notarized, the user won’t be able to open it at all. There are workarounds to be found, but we would advise you to shy away from them.

Device drivers on Linux

In Linux, when a device is connected to the system, a device file is created in the /dev directory. The hardware devices are treated like ordinary files, which makes it easier for the software to interact with the device drivers. Most of the available hardware drivers will already be on your computer, included along with the kernel, graphics server, and print server.

However, some manufacturers provide their own, closed-source, proprietary drivers. How you install proprietary drivers largely depends on your Linux distribution. On a few distributions installing drivers is relatively easy. On Ubuntu and Ubuntu-based distributions, there’s an Additional Drivers tool. Open the dash, search for Additional Drivers, and launch it. It will detect which proprietary drivers you can install for your hardware and allow you to install them. Linux Mint has a Driver Manager tool that works similarly.
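On Linux, a quick way to see which kernel-supplied drivers are active is to read /proc/modules, the file that the lsmod command pretty-prints. The short Python sketch below is illustrative only; your distribution’s driver tools remain the usual route for installing proprietary drivers.

```python
# Illustrative sketch: on Linux, most drivers ship with the kernel as
# loadable modules, and /proc/modules lists what is loaded right now
# (the lsmod command pretty-prints this same file).
from pathlib import Path

def loaded_modules(proc_path: str = "/proc/modules") -> list[str]:
    # Each line reads "name size refcount users state address";
    # the first field is the module (driver) name.
    text = Path(proc_path).read_text()
    return [line.split()[0] for line in text.splitlines() if line.strip()]

if Path("/proc/modules").exists():
    print(loaded_modules()[:10])
```

If a device is misbehaving, checking whether its module appears in this list is a reasonable first diagnostic step before hunting for proprietary drivers.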

Don’t get led astray

A number of download sites offer files that pretend to be the drivers you need but actually host malware, hoping to trick users into installing it on their systems. Other sites bundle the driver you need with adware or a potentially unwanted program that gets installed alongside it.

And, unfortunately, a lot of advertised driver updater software exaggerates its scan results in order to get users to buy it. Needless to say, paying for such software is a waste of money. As is the case with a lot of potentially unwanted programs, it offers functionality that is already built into Windows.


Yik Yak “cyberbullying”: What can be done?

In August 2021, Yik Yak, the once-popular anonymous social media platform on Android and iOS, made a comeback after shutting its doors in 2017. Six months after its return, it’s started to gain attention once more, as a result of cyberbullying—the main reason why it declined years ago.

However, this new Yik Yak has a new commitment: the new owners say they will make it “a fun place free of bullying, threats, and all sort of negativity.”

Background: Yik Yak’s success and downfall

Yik Yak was the brainchild of two Furman University graduates who decided to put their careers on hold—one was supposed to go to medical school; the other was already in finance—to give starting a business of their own a shot.

Aimed at college students, it didn’t take long for the Yik Yak app to take off, spreading from one campus to another and then on to high schools. What made it so popular among students was its anonymity and hyperlocal context (messages posted on Yik Yak can be viewed by people using the app within a five-mile radius). Within a year of its official release in November 2013, Yik Yak secured $75 million from investors, and at its peak it was valued at $400 million.

But its popularity nosedived as quickly as it had come. Many attributed the platform’s demise to the growing number of cyberbullying, harassment, and threat incidents within the app, fueled by the very features the platform thrived on. Such incidents brought a wave of complaints from parents and school administrators, which led schools to ban the app from being accessed on school networks. On top of that, Yik Yak itself cut off its high-school users—its second-largest group of users—from using the app while on campus.

In 2015, Yik Yak was caught automatically downvoting posts containing the names of competing brands like Fade, Sneek, and Unseen, a tactic that TechCrunch noted was entirely against its mission of providing an “organic, unfiltered feed of news.”

Amid all the challenges and a painful 76 percent decline in its userbase, Yik Yak made changes to save its business. In January 2016, it introduced a web version, but probably the most notable change came in August 2016, when the company did away with anonymity entirely while keeping the app’s hyperlocality. The changes weren’t enough, however, and by the end of 2016 the company had laid off 60 percent of its team. Then, in May 2017, the inevitable happened: Yik Yak’s founders said their goodbyes, and the servers were shut down a month later.

Reactions to the Yak being back

Then, after nearly five years, Yik Yak made an unexpected comeback, with a $6.25M USD seed funding to boot.

In the comeback announcement, “Team Yik Yak” introduced themselves as the new owners, saying that they had bought back the rights to redevelop Yik Yak after it was sold to Square, Inc. (now Block, Inc.).

“We’re bringing Yik Yak back because we believe the global community deserves a place to be authentic, a place to be equal, and a place to connect with people nearby. We’re committed to making Yik Yak a fun place free of bullying, threats, and all sort of negativity.”

Yik Yak appears to be staying true to the mission of the original platform, which is to make Yik Yak a place for people within five miles of each other to connect, free from labels and risk. The company re-affirmed its stance against bullying and hate speech, saying it has a “one strike and you’re out” rule if someone violates the Community Guardrails or Terms of Service.

But with anonymity still in place on the platform, people are still concerned.

“It is time to have that conversation with kids about online behavior and etiquette. Again,” wrote Director of Technology Josh Sumption for the Southwest West Central (SWWC) Service Cooperative, a Minnesota-based educational service agency, in an article entitled, “Yik Yak, Yuk.”

Sumption points out that, unlike its previous incarnation, the current Yik Yak doesn’t have geofencing enabled, so anyone with a phone in a school can use it, whether they are college students or not. On top of that, he also revealed that Yik Yak’s downvote feature is being used to bury the positive statements school administrators and student champions post to counter negative conversation in their area.

Some of those who were around when Yik Yak made it big have some public Twitter thoughts to share about its comeback, too:

Anonymity isn’t bad

Not everyone is against the anonymity that Yik Yak has to offer. In fact, some argue that being anonymous might encourage people to step in when they see something horrible happening.

“Bullying will unfortunately always happen online and offline, however, being able to remain anonymous helps motivate people to aid victims of harassment,” wrote Rey Junco, Harvard alum and currently a senior researcher at the Center for Information and Research on Civic Learning and Engagement (CIRCLE) at Tufts University, in a Wired article in 2015. “Unfortunately, it is very difficult for bystanders to remain anonymous in offline spaces; thereby making it less likely bystanders will intervene when they see someone being bullied.”

Junco also argued that an anonymous space gives students the ability to “take creative risks” and develop their identities. “For instance, a student who is exploring a gay identity often feels more comfortable exploring the coming out process anonymously in online spaces because of an increased feeling of safety.”

People were abusing anonymity long before the dawn of social media, and ending it is not the solution (Techdirt highlighted three reasons why in a post last year). Many studies have also shown that anonymity alone isn’t the reason people behave badly online. There are plenty of cases on Twitter, for example, where users post harassing replies under their real names, with real photos of their faces.

Dealing with Yik Yak and anonymous apps

“Instead of being afraid of Yik Yak, campus professionals should embrace it as not only a way for young people to explore creativity and develop their identities, but also as a way for professionals to learn more about the campus environment through students’ eyes,” Junco advised in the same article. Great advice, if you ask me. It is wise for parents and carers to take heed as well.

Things may have been different seven years ago, but it stands to reason that the upsides of anonymity remain the same. That being said, everyone in the community, not just parents and school administrators, can shape the conversation exchanged within their five-mile radius.

There will always be negativity in the online spaces we frequent, no doubt, but there are several ways to combat it. As we’ve already seen, talking to our kids about Yik Yak, and anonymous apps in general, is essential. Parents, carers, and school administrators can start by pointing out that no one is ever truly anonymous on Yik Yak: posts are tied to accounts linked to phone numbers, and numbers can be traced, especially by law enforcement. That alone should make a student think twice about posting threats of bodily harm online. Take the case of the Louisiana State University (LSU) student who was arrested for terrorizing after falsely warning of a campus shooting, even though he used someone else’s phone to post on Yik Yak.

Regularly posting positive messages, offering genuine help to those who need it, showing support to someone who has decided to be vulnerable and share their story, and starting conversations on interesting topics can also shift the conversation away from the negative. Foster empathy within your herd, and promote bystander intervention for when members encounter cyberbullying, hate, or other nasty yaks aimed at a person or group.

And if some Yik Yak users are actively downvoting such tone-changing posts, report them.

Yik Yak could have been better the first time around. Now it’s back, and if you’re a Yik Yak user who stands behind what it’s for, the opportunity to make the platform what it was designed to be is here. Own it, yak responsibly, and, together with your herd, shape it into a space that benefits everyone.

The post Yik Yak “cyberbullying”: What can be done? appeared first on Malwarebytes Labs.

Cyclops Blink malware: US and UK authorities issue alert

According to a joint security advisory published yesterday by US and UK cybersecurity and law enforcement agencies, a new malware called Cyclops Blink has surfaced to replace the VPNFilter malware attributed to the Sandworm group, which is widely regarded as Russian state-sponsored.

Cyclops Blink

The alert issued by the Cybersecurity & Infrastructure Security Agency (CISA) and an analysis published by the UK’s National Cyber Security Centre (NCSC) provide Indicators of Compromise (IOCs) and Tactics, Techniques, and Procedures (TTPs) for this new malware.

Cyclops Blink has primarily been deployed to networking hardware company WatchGuard’s devices. According to WatchGuard, Cyclops Blink may have affected approximately 1% of active firewall appliances, which are devices mainly used by business customers.

Cyclops Blink has been found in WatchGuard firewall devices since at least June 2019, but the NCSC warns that Sandworm is likely capable of compiling the same or very similar malware for other architectures and firmware. The attackers were able to infect devices via a WatchGuard vulnerability that was patched in a May 2021 update.

The analysis says Cyclops Blink malware also comes with modules specifically developed to upload/download files to and from its command and control server, collect and exfiltrate device information, and update the malware. The presence of a Cyclops Blink infection does not mean that an organization is the primary target, but its machines could be used to conduct attacks on others. Either way, it is in your best interest to disconnect and remediate any affected devices.
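In practice, defenders operationalize advisories like this by matching the published IOCs against firewall, DNS, or proxy logs. As a rough sketch of that workflow (the indicator values and log format below are placeholders for illustration, not real Cyclops Blink IOCs; substitute the lists from the CISA and NCSC publications):

```python
# Minimal IOC matcher: flags log lines mentioning any known-bad indicator.
# The indicators below are PLACEHOLDERS, not real Cyclops Blink IOCs --
# replace them with the values published in the CISA/NCSC advisory.
KNOWN_BAD_INDICATORS = {
    "203.0.113.10",        # placeholder C2 IP (documentation range)
    "c2.example.invalid",  # placeholder C2 domain
}

def find_ioc_hits(log_lines):
    """Return (line_number, indicator) pairs for lines containing an IOC."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for indicator in KNOWN_BAD_INDICATORS:
            if indicator in line:
                hits.append((lineno, indicator))
    return hits

logs = [
    "2022-02-23 10:01:02 ALLOW tcp 192.168.1.5 -> 203.0.113.10:443",
    "2022-02-23 10:01:07 ALLOW udp 192.168.1.6 -> 8.8.8.8:53",
]
print(find_ioc_hits(logs))  # -> [(1, '203.0.113.10')]
```

A hit doesn’t prove compromise on its own, but it is a strong cue to isolate the device and work through the vendor’s remediation guidance.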

Sandworm

In light of world news, it’s important to note that the Sandworm group has been known to target Ukrainian companies and government agencies. They were held responsible for destroying entire Ukrainian networks, triggering blackouts by targeting electrical utilities in Ukraine (the BlackEnergy malware), and releasing the NotPetya malware. NotPetya is the name given to a later version of the Petya malware that spread rapidly, with infections concentrated in Ukraine before spreading across Europe and beyond.

Among the latest attacks on Ukraine was a distributed denial of service (DDoS) attack. Cyberattacks such as DDoS attacks fall under the traditional categories of sabotage, espionage, and subversion. So far, the results have included several Ukrainian bank and government department websites crashing, and earlier this week some 70 Ukrainian government websites suffered the same fate.

As we learned from NotPetya, these attacks can spread around the world. NotPetya affected computer networks worldwide, targeting hospitals and medical facilities in the United States, and costing more than US$1 billion in losses.

VPNFilter

CISA and the NCSC both describe the Cyclops Blink malware as a successor to an earlier Sandworm tool known as VPNFilter, which infected half a million routers to form a global botnet before it was identified by Cisco and the FBI in 2018 and largely dismantled. It never fully disappeared, and the Sandworm group has since shown limited interest in existing VPNFilter footholds, instead preferring to retool.

VPNFilter was deployed in stages, with most functionality in the third-stage modules. These modules enabled traffic manipulation, destruction of the infected host device, and likely enabled downstream devices to be exploited.

Mitigation and detection

WatchGuard firewall appliances are not at risk if they were never configured to allow unrestricted management access from the Internet, which is the default setting for all of WatchGuard’s physical firewall appliances. Internet access to the management interface of any device is a security risk.

All WatchGuard appliances should be updated to the latest version of Fireware OS.

When it comes to infected appliances, Cyclops Blink persists on reboot and throughout the legitimate firmware update process. So, affected organizations should take steps to remove the malware. WatchGuard customers and partners can eliminate the potential threat posed by malicious activity from the botnet by immediately enacting WatchGuard’s 4-Step Cyclops Blink Diagnosis and Remediation Plan.

Owners of infected appliances will also need to update the passphrases for the Status and Admin device management accounts and replace any other secrets, credentials, and passphrases configured on the appliance. All accounts on infected devices should be assumed to be compromised.

Heightened awareness of Cyclops Blink, and of other malware attacks that may be aimed at Ukraine, is required. That goes for everyone involved in cybersecurity, by the way, not just owners of WatchGuard appliances.

Stay safe, everyone!

The post Cyclops Blink malware: US and UK authorities issue alert appeared first on Malwarebytes Labs.

CISA offers guidance on dealing with information manipulation

Malicious actors use influence operations, like spreading false information, to shape public opinion, undermine trust, amplify division, and create dissension. In response, the Cybersecurity & Infrastructure Security Agency (CISA) has released CISA Insights: Preparing for and Mitigating Foreign Influence Operations Targeting Critical Infrastructure, which provides proactive steps organizations can take to assess and mitigate the risks of information manipulation.

The Insights document is designed for critical infrastructure owners and operators, to ensure they are aware of the risks of influence operations leveraging social media and online platforms.

False information

Instead of “false information,” CISA uses the term “MDM,” which covers misinformation, disinformation, and malinformation. CISA’s definitions of these three types of misleading information are worth spelling out:

  • Misinformation is false, but not created or shared with the intention of causing harm.
  • Disinformation is deliberately created to mislead, harm, or manipulate.
  • Malinformation is based on fact, but used out of context to mislead, harm, or manipulate.

CISA warns that threat actors both inside and outside the USA use MDM campaigns to cause chaos, confusion, and division.

Foreign actors

In its report, CISA focuses on foreign actors that engage in MDM to bias the development of policy and undermine the security of the USA and its allies. By using social media, MDM threat actors have means at their disposal unlike any in history. It warns that while a single MDM narrative can seem innocuous, when narratives are promoted consistently to targeted audiences, and reinforced by peers and social media influencers, it can have compounding effects.

Modern foreign influence operations demonstrate how a strategic and consistent exploitation of divisive issues, and a knowledge of the target audience and who they trust, can increase the potency and impact of an MDM narrative. Furthermore, current social factors, including the USA’s heightened polarization, and the ongoing global pandemic, increase the risk and potency of influence operations to the USA’s critical infrastructure, especially by experienced threat actors.

CISA insights goal

Beyond raising that awareness, the document encourages organizations to take steps internally and externally to ensure swift coordination in information sharing, as well as the ability to communicate accurate and trusted information in order to bolster resilience.

Mitigation

The CISA Insights document lists some proactive actions that can limit or mitigate the influence of MDM campaigns:

  • Identify your vulnerabilities. CISA urges organizations to ask themselves what narratives or incidents have the potential to negatively affect their critical functions.
  • Secure social media. Hijacked accounts and defaced websites can be used to influence public opinion, so organizations should educate their staff on securing their personal social media accounts.
  • Practice smart email hygiene. Organizations should practice smart email hygiene and watch for phishing attacks.
  • Prepare communication channels in advance. CISA suggests that preparing communication channels and establishing contacts before MDM incidents occur allows organizations to respond quickly, and share accurate and verifiable information.
  • Review and update your website. Organizations need to make information as clear, transparent, and accessible as possible.
  • Review and update your social media. Organizations need to stay on top of their social media, and make sure they’re verified on each platform, so they can be identified as official accounts.
  • Anticipate MDM. Clear, consistent, and relevant communications that respond to and anticipate MDM can help organizations maintain security and build public confidence.
  • Review existing communications channels. Organizations should look at how they communicate—such as newsletters, reports, blog posts, events, social media content, podcasts, or other activities—and identify opportunities for improvement.
  • Coordinate with other organizations. Working with other organizations in your sector can amplify and reinforce messaging, and create a strong network of trusted voices.
  • Maintain contact with key outlets. Communications professionals should maintain contact with key communications outlets.

An incident response plan

CISA goes on to provide some more details about what it takes to have an effective incident response plan.

  • Establish a clear internal communications channel. Designate an individual to oversee the MDM incident response process and associated crisis communications.
  • Establish roles and responsibilities for MDM response, including but not limited to responding to media inquiries, issuing public statements, communicating with your staff, engaging your previously identified stakeholder network, and implementing physical security measures.
  • Ensure your communication systems are set up to handle incoming questions. Phones, social media accounts, and centralized inboxes should be monitored by multiple people on a rotating schedule to avoid burnout.
  • Identify and train staff on reporting procedures to social media companies, government, and/or law enforcement.
  • Consider your internal coordination channels and processes for identifying incidents, delineating information sharing and response. Foreign actors can combine influence operations with cyber activities, requiring additional coordination to facilitate a whole-of-organization response.

Stay safe, everyone, and verify your sources!

The post CISA offers guidance on dealing with information manipulation appeared first on Malwarebytes Labs.

Facebook sued for siphoning facial recognition data without consent

Ken Paxton, the Attorney General of Texas, recently filed a lawsuit against Facebook’s parent company, Meta, for harvesting the facial recognition data of millions of Texan residents—for a decade.

Paxton filed the lawsuit on Monday in the state’s Harrison County District Court. The suit argues that Facebook’s now-defunct photo-tagging feature illegally collected data about Texans’ faces, including those of non-Facebook users who were tagged by someone on the platform, without asking for consent. Facebook collected this data via its face recognition technology.

Six years ago, Yann LeCun, currently Chief AI Scientist at Meta, gave Facebook users an idea of how his team approached its work in artificial intelligence research and facial recognition.

Facebook introduced face recognition technology in 2010 to make tagging friends and family on photos more manageable. The company updated and implemented this technology in December 2017 to assist users in managing their identity on the platform by helping them “find photos that you’re not tagged in and help you detect when others might be attempting to use your image as their profile picture.” Not only was the tool intended to protect users’ identities on Facebook, it was also supposed to help visually impaired users to identify the people in the photos they encountered on Facebook.

Three months ago, Jerome Pesenti, Meta’s VP of artificial intelligence, announced that Facebook would be shutting down its face recognition system as part of a company-wide move to limit its use in the company’s products. This change, in turn, will affect services for blind and visually impaired users that identify names in photos. Facebook users who are wary that their faces might appear in pictures or videos posted by other users, especially those who create fake Facebook profiles and use stolen photos, will no longer be notified. But the most significant effect of all is that the “facial recognition templates” of more than a third of Facebook’s users will be deleted, the company says.

“We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used,” wrote Pesenti. “There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

The lawsuit Paxton has filed alleges that Facebook, before shutting down its AI tool, had already amassed “biometric identifiers of Texans for a commercial purpose without their informed consent, disclosed those identities to others, and failed to destroy collected identifiers within a reasonable time,” all in violation of the Texas Capture or Use of Biometric Identifier Act (CUBI). Paxton also alleges that the social media platform engaged in deceptive acts, breaking the Texas Deceptive Trade Practices Consumer Protection Act (DTPA).

Paxton doesn’t see Facebook’s face recognition technology as a means of protecting its users’ identities. Instead, he sees it as a deceptive scheme against Texans: the system captured their immutable data as they innocently shared photos and videos with friends and family members on the platform, all while the company profited from the data, trained its AI, and put its users at risk.

Meta may have pulled its face recognition tool from Facebook, but Meta has other platforms. And, according to the lawsuit, Facebook “made no such commitment with respect to any of the other platforms or operations under its corporate umbrella, such as Instagram, WhatsApp, Facebook Reality Labs, and its upcoming virtual reality metaverse.”

According to the Texas Tribune, this lawsuit against Facebook is just the latest in a string of Paxton’s actions against big-name companies. In January, he sued Google for “deceptively tracking users’ location without consent” and opened a probe into Twitter over its content moderation practices. Earlier this month, his office joined a brief accusing Apple of violating antitrust laws (among others). And last week, he opened an investigation into GoFundMe for pulling the “Freedom Convoy” fundraiser, a campaign for Canadian truckers protesting pandemic restrictions.

The post Facebook sued for siphoning facial recognition data without consent appeared first on Malwarebytes Labs.

A week in security (February 14 – February 20)

Last week on Malwarebytes Labs:

And don’t forget to listen to our recent podcast about the world’s most coveted spyware, Pegasus.


You can also find the Lock and Code podcast on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Stay safe!

The post A week in security (February 14 – February 20) appeared first on Malwarebytes Labs.

Watch out for this bump in LinkedIn phishing

LinkedIn is sometimes forgotten in more general coverage of phishing attacks. Social media sites such as Facebook, Twitter, and Instagram receive regular attention: cryptowallet customer support scams run wild in the replies to any cryptocurrency-themed tweet, Facebook users can often be found dealing with compromised accounts asking for money, and Instagram has a wave of influencers having their accounts held to ransom. The big question is: have you ever wondered what’s on LinkedIn?

Presenting: What’s on LinkedIn

It’s not just endless spam for unsuitable job positions and motivational speeches. It turns out there’s a whole lot of phishing happening behind the scenes, too. At the beginning of February, Brian Krebs reported that scammers are using “Slinks” to redirect to phishing pages. Worse still, that particular technique has been around since 2016. In the most recent example, the phishing attempts seen in the wild were not hunting LinkedIn accounts specifically. Even so, tying bad URLs to reassuringly convincing LinkedIn redirects will always end badly for someone.

More recently…

Phishing by increasingly large numbers

Research claims that bogus LinkedIn-branded mails have increased by around 232% since the beginning of February. Overfamiliarity with a stream of genuine messages about profile views, new messages, and suitable employment opportunities may be lulling people into clicking through. Times are tough out there, and given that LinkedIn is a natural fit for networking and job hunting, it’s understandable that some folks will click everything in sight.

I’m a professional (phisher)

The mails are convincingly branded, look realistic, and emulate the real thing in a way that may drift past people’s sense of caution. The research points out that the fake mails also piggyback on other genuine brands to make themselves look even more convincing. CVS Carepoint and American Express are two of the brands named as being spoofed in the fake mails.

Should someone click through to the phishing pages and start entering details, they may well lose their login credentials. Unlike the attacks from the beginning of February, these mails are specifically looking for LinkedIn username and password combinations. The research doesn’t say what the scammers do with the accounts once harvested, but it’s a good bet they’ll be used for spamming, social engineering, or simply more phishing.
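One simple heuristic for spotting mails like these before clicking is to compare a link’s visible text with the destination its href actually points to. The sketch below is illustrative only: the domains are examples, and a real-world check should compare registrable domains using a public-suffix list rather than this naive hostname comparison.

```python
from urllib.parse import urlparse

def looks_like_mismatch(display_text, href):
    """Flag links whose visible text names one domain but whose href
    points somewhere else (a common phishing pattern).
    NOTE: naive hostname comparison, for illustration only; production
    code should compare registrable domains via a public-suffix list."""
    display_host = urlparse(display_text if "://" in display_text
                            else "https://" + display_text).hostname or ""
    real_host = urlparse(href).hostname or ""
    return not (real_host == display_host
                or real_host.endswith("." + display_host))

# A link that claims to be LinkedIn but leads elsewhere:
print(looks_like_mismatch("www.linkedin.com", "https://evil.example/login"))     # True
# Text and destination agree:
print(looks_like_mismatch("www.linkedin.com", "https://www.linkedin.com/feed"))  # False
```

Mail clients and secure email gateways apply far more sophisticated versions of this check, but even the naive version catches the classic trick of LinkedIn-looking text wrapped around a non-LinkedIn URL.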

Avoiding the LinkedIn scammers

These mails appear to be getting past at least some email security defences and precautions. It’s nice to know people are checking out your profile, and it’s helpful that there are awesome jobs out there for you, but be careful: you don’t have to click the latest email in your mailbox. Consider navigating directly to LinkedIn yourself and seeing what’s there.

Bogus messages and jobs referenced in the fake mails won’t be waiting for you on the site itself. That doesn’t rule out being sent bogus messages and job offers on LinkedIn too, but going there yourself and seeing what lies in wait at least neutralizes the threat of the phishing mails.

The post Watch out for this bump in LinkedIn phishing appeared first on Malwarebytes Labs.