IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

New AgentTesla variant steals WiFi credentials

AgentTesla is a .NET-based infostealer that can steal data from different applications on victim machines, such as browsers, FTP clients, and file downloaders. The actor behind this malware maintains it constantly, adding new modules; one of the newer additions is the capability to steal WiFi profiles.

AgentTesla was first seen in 2014 and has been used frequently by cybercriminals in malicious campaigns ever since. During March and April 2020, it was actively distributed through spam campaigns in various formats, such as ZIP, CAB, MSI, and IMG files, as well as Office documents.

Newer variants of AgentTesla seen in the wild have the capability to collect information about a victim’s WiFi profile, possibly to use it as a way to spread onto other machines. In this blog, we review how this new feature works.

Technical analysis

The variant we analyzed was written in .NET. It has an executable embedded as an image resource, which is extracted and executed at runtime (Figure 1).

Figure 1. Extract and execute the payload

This executable (ReZer0V2) also has a resource that is encrypted. After doing several anti-debugging, anti-sandboxing, and anti-virtualization checks, the executable decrypts and injects the content of the resource into itself (Figure 2).

Figure 2. Decrypt and execute the payload

The second payload (owEKjMRYkIfjPazjphIDdRoPePVNoulgd) is the main component of AgentTesla that steals credentials from browsers, FTP clients, wireless profiles, and more (Figure 3). The sample is heavily obfuscated to make the analysis more difficult for researchers.

Figure 3. Second payload

To collect wireless profile credentials, a new “netsh” process is created with “wlan show profile” passed as its argument (Figure 4). Available WiFi network names are then extracted by applying the regex “All User Profile * :  (?<profile>.*)” to the process’s stdout output.

Figure 4. Creating the netsh process

In the next step, for each wireless profile, the following command is executed to extract the profile’s credentials: “netsh wlan show profile PROFILENAME key=clear” (Figure 5).

Figure 5. Extract WiFi credentials

String encryption

All the strings used by the malware are encrypted and are decrypted with the Rijndael symmetric encryption algorithm in the “<Module>.u200E” function. This function receives a number as input and generates three byte arrays containing the input to be decrypted, the key, and the IV (Figure 6).

Figure 6. u200E function snippet

For example, in Figure 5, “119216” is decrypted into “wlan show profile name=” and “119196” is decrypted into “key=clear”.

In addition to WiFi profiles, the executable collects extensive information about the system, including FTP clients, browsers, file downloaders, and machine info (username, computer name, OS name, CPU architecture, RAM) and adds them to a list (Figure 7).

Figure 7. List of collected info

The collected information forms the body of an SMTP message in HTML format (Figure 8):

Figure 8. Collected data in HTML format in the message body

Note: If the final list has fewer than three elements, the malware won’t generate an SMTP message. If everything checks out, a message is finally sent via smtp.yandex.com, with SSL enabled (Figure 9):

Figure 9. Building the SMTP message

The following diagram shows the whole process explained above, from extraction of the first payload from the image resource to exfiltration of the stolen information over SMTP:

Figure 10. Process diagram

Popular stealer looking to expand

Since AgentTesla added the WiFi-stealing feature, we believe the threat actors may be considering using WiFi as a mechanism for spread, similar to what was observed with Emotet. Another possibility is using the WiFi profile to set the stage for future attacks.

Either way, Malwarebytes users were already protected from this new variant of AgentTesla through our real-time protection technology.


Indicators of compromise

AgentTesla samples:

91b711812867b39537a2cd81bb1ab10315ac321a1c68e316bf4fa84badbc09b
dd4a43b0b8a68db65b00fad99519539e2a05a3892f03b869d58ee15fdf5aa044
27939b70928b285655c863fa26efded96bface9db46f35ba39d2a1295424c07b

First payload:

249a503263717051d62a6d65a5040cf408517dd22f9021e5f8978a819b18063b

Second payload: 

63393b114ebe2e18d888d982c5ee11563a193d9da3083d84a611384bc748b1b0


Mass surveillance alone will not save us from coronavirus

As the pattern-shattering truth of our new lives drains heavy—as coronavirus rends routines, raids our wellbeing, and whiplashes us between anxiety and fear—we should not look to mass digital surveillance to bring us back to normal.

Already, governments have cast vast digital nets. South Koreans are tracked through GPS location history, credit card transactions, and surveillance camera footage. Israelis learned last month that their mobile device locations were surreptitiously collected for years. Now, the government rummages through this enormous database in broad daylight, this time to track the spread of COVID-19. Russians cannot leave home in some regions without scanning QR codes that restrict their time spent outside—three hours for grocery shopping, one hour to walk the dog, half that to take out the trash.

Privacy advocates around the world have sounded the alarm. This month, more than 100 civil and digital rights organizations urged that any government’s coronavirus-targeted surveillance mechanisms respect human rights. The groups, which included Privacy International, Human Rights Watch, Open Rights Group, and the Chilean nonprofit Derechos Digitales, wrote in a joint letter:

“Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response.”

The groups are right to worry.

Particularly in the United States, our country’s history of emergency-enabled surveillance has failed both to respect Americans’ right to privacy and to provide measurable, increased security. Not only did rapid surveillance authorization in the US permit the collection of, at one point in time, nearly every American’s call detail records, it also created an unwieldy government program that, two decades later, became ineffective, economically costly, and repeatedly noncompliant with the law.

Further, some of the current technology tracking proposals—including Apple and Google’s newly announced Bluetooth capabilities—either lack the evidence to prove them effective or require a degree of mass adoption that no country has shown possible. Other private proposals come from untrusted actors, too.

Finally, the tech-focused solutions alone cannot fill severe physical gaps, including a lack of personal protective equipment for medical professionals, non-existent universal testing, and a shortage of intensive care unit beds that could force potentially fatal triage decisions in a country-wide outbreak.

We understand how today feels. In less than one month, the world has emptied. Churches, classrooms, theaters, and restaurants lay vacant, sometimes shuttered by wooden planks fastened over doorways. We grieve the loss of family and friends, of 17 million American jobs and the healthcare benefits they provided, of national, in-person support networks displaced into cyberspace, where the type of vulnerability meant for a physical room is now thrust online.

For a seemingly endless time at home, we curl and wait, emptied all the same.

But mass, digital surveillance alone will not make us whole.

Governments expand surveillance to track coronavirus

First detected in late 2019 in the Hubei province of China, COVID-19 has now spread across every continent except Antarctica.

To limit the spread of the virus and to prevent healthcare systems from being overburdened, governments imposed a variety of physical restrictions. California closed all non-essential businesses, Ireland restricted outdoor exercise to 1.2 miles from the home, El Salvador placed 30-day quarantines on Salvadorans entering the country from abroad, and Tunisia imposed a nightly 6:00 p.m. – 6:00 a.m. curfew.

A handful of governments took digital action, vacuuming up citizens’ cell phone data, sometimes including their rough location history.

Last month, Israel unbuttoned a once-secret surveillance program, allowing it to reach into Israelis’ mobile phones not to provide counter-terrorism measures—as previously reserved—but to track the spread of COVID-19. The government plans to use cell phone location data that it had been privately collecting from telecommunications providers to send text messages to device owners who potentially come into contact with known coronavirus carriers. According to The New York Times, the parliamentary subcommittee meant to approve the program’s loosened restrictions never actually voted.

The Lombardy region of Italy—which, until recently, suffered the largest coronavirus swell outside of China—is working with a major telecommunications company to analyze reportedly anonymized cell phone location data to understand whether physical lockdown measures are proving effective at fighting the virus. The Austrian government is doing the same. Similarly, the Pakistani government is relying on provider-supplied location information to send targeted SMS messages to anyone who has come into close, physical contact with confirmed coronavirus patients. The program can only be as effective as it is large, requiring data on massive swaths of the country’s population.

In Singapore, the country’s government publishes grossly detailed information about coronavirus patients on its Ministry of Health public website. Ages, workplaces, workplace addresses, travel history, hospital locations, and residential streets can all be found with a simple click.

Singapore’s coronavirus detection strategy also included a separate, key component.

Last month, the government rolled out a new, voluntary mobile app for citizens to download called TraceTogether. The app relies on Bluetooth signals to detect when a confirmed coronavirus patient comes into close physical proximity with device owners using the same app. It is essentially a high-tech approach to the low-tech detective work of “contact tracing,” in which medical experts interview those with infectious illnesses and determine who they spoke to, what locations they visited, and what activities they engaged in for several days before presenting symptoms.

These examples of increased government surveillance and tracking are far from exceptional.

According to a Privacy International analysis, at least 23 countries have deployed some form of telecommunications tracking to limit the spread of coronavirus, while 14 countries are developing or have already developed their own mobile apps, including Brazil and Iceland, along with Germany and Croatia, which are both trying to make apps that are GDPR-compliant.

While some countries have relied on telecommunications providers to supply data, others are working with far more questionable private actors.

Rapid surveillance demands rapid, shaky infrastructure

Last month, the push to digitally track the spread of coronavirus came not just from governments, but from companies that build potentially privacy-invasive technology.

Last week, Apple and Google announced a joint effort to provide Bluetooth contact tracing capabilities between the billions of iPhone and Android devices in the world.

The two companies promised to update their devices so that public health experts could develop mobile apps that allow users to voluntarily identify if they have tested positive for coronavirus. If a confirmed coronavirus app user comes into close enough contact with non-infected app users, those latter users could be notified about potential infection, whether they own an iPhone or Android.

Both Apple and Google promised a privacy-protective approach. App users will not have their locations tracked, and their identities will remain inaccessible to Apple, Google, and governments. Further, devices will automatically change users’ identifiers every 15 minutes, a step toward preventing identification of device owners. Data that is processed on a device will never leave it unless the user chooses to share it.

In terms of privacy protection, Apple and Google’s approach is one of the better options today.

According to Bloomberg, the Israeli firm NSO Group pitched a variety of governments across the world about a new tool that can allegedly track the spread of coronavirus. As of mid-March, about one dozen governments began testing the technology.

A follow-on investigation by VICE revealed how the new tool, codenamed “Fleming,” actually works:

“Fleming displays the data on what looks like an intuitive user interface that lets analysts track where people go, who they meet, for how long, and where. All this data is displayed on heat maps that can be filtered depending on what the analyst wants to know. For example, analysts can filter the movements of a certain patient by their last location or whether they visited any meeting places like public squares or office buildings. With the goal of protecting people’s privacy, the tool tracks citizens by assigning them random IDs, which the government—when needed—can de-anonymize[.]”

These are dangerous, invasive powers for any government to use against its citizens. The privacy concerns only grow when looking at NSO Group’s recent history. In 2018, the company was sued over allegations that it used its powerful spyware technology to help the Saudi Arabian government spy on and plot the murder of former Washington Post writer and Saudi dissident Jamal Khashoggi. Last year, NSO Group was hit with a major lawsuit from Facebook, alleging that the company sent malware to more than 1,400 WhatsApp users, who included journalists, human rights activists, and government officials.  

The questionable private-public partnerships don’t stop there.

According to The Wall Street Journal, the facial recognition startup Clearview AI—which claims to have the largest database of public digital likenesses—is working with US state agencies to track those who tested positive for coronavirus.

The New York-based startup has repeatedly boasted about its technology, saying previously that it helped the New York Police Department quickly identify a terrorism suspect. But when Buzzfeed News asked the police department about that claim, it denied that Clearview participated in the case.

Further, according to a Huffington Post investigation, Clearview’s history involves coordination with far-right extremists, one of whom marched in the “Unite the Right” rally in Charlottesville, another who promoted debunked conspiracy theories online, and another who is an avowed Neo-Nazi. One early adviser to the startup once viewed its facial recognition technology as a way to “identify every illegal alien in the country.”

Though Clearview told The Huffington Post that it separated itself from these extremists, its founder Hoan Ton-That appears unequipped to grapple with the broader privacy questions his technology invites. When interviewed earlier this year by The New York Times, Ton-That looked flat-footed in the face of obvious questions about the ability to spy on nearly any person with an online presence. As reporter Kashmir Hill wrote:

“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.

Asked about the implications of bringing such a power into the world, Mr. Ton-That seemed taken aback.

“I have to think about that,” he said. “Our belief is that this is the best use of the technology.”

One company’s beliefs about how to “best” use invasive technology are too low a bar for us to build a surveillance mechanism upon.

Should we deploy mass surveillance?

Amidst the current health crisis, multiple digital rights and privacy organizations have tried to answer the question of whether governments should deploy mass surveillance to battle coronavirus. What has emerged, rather than wholesale approvals or objections to individual surveillance programs across the world, is a framework to evaluate incoming programs.

According to Privacy International and more than 100 similar groups, government surveillance to fight coronavirus must be necessary and proportionate, must continue only for as long as the pandemic does, must be used only to respond to the pandemic, must account for potential discrimination caused by artificial intelligence technologies, and must allow individuals to challenge any data collection, aggregation, retention, and use, among other restrictions.

Electronic Frontier Foundation, which did not sign Privacy International’s letter, published a somewhat similar list of surveillance restrictions, and boiled down its evaluation even further to a simple, three-question rubric:  

  • First, has the government shown its surveillance would be effective at solving the problem?
  • Second, if the government shows efficacy, we ask: Would the surveillance do too much harm to our freedoms?
  • Third, if the government shows efficacy, and the harm to our freedoms is not excessive, we ask: Are there sufficient guardrails around the surveillance? (Which the organization detailed here.)

We do not claim keener insight than our digital privacy peers. In fact, much of our research relies on theirs. But by focusing on the types of surveillance installed currently, and past surveillance installed years ago, we err cautiously against any mass surveillance regime developed specifically to track and limit the spread of coronavirus.

Flatly, the rapid deployment of mass surveillance to protect the public has rarely, if ever, worked as intended. Mass surveillance has not provably “solved” a crisis, and in the United States, one emergency surveillance regime grew into a bloated, ineffective, noncompliant warship, apparently rudderless today.

We should not take these same risks again.

The lessons of Section 215

On October 4, 2001, less than one month after the US suffered the worst attack on American soil when terrorists felled the World Trade Center towers on September 11, President George W. Bush authorized the National Security Agency to collect certain phone content and metadata without first obtaining warrants.

According to an NSA Inspector General’s working draft report, President Bush’s authorization was titled “Authorization for specified electronic surveillance activities during a limited period to detect and prevent acts of terrorism within the United States.”

In 2006, the described “limited period” powers continued, as Attorney General Alberto Gonzales argued before a secretive court that the court should retroactively legalize what the NSA had been doing for five years—collecting the phone call metadata of nearly every American, potentially revealing the numbers we called, the frequency we dialed them, and for how long we spoke. The court later approved the request.

The Attorney General’s arguments partially cited a separate law passed by Congress in 2001 that introduced a new surveillance authority for the NSA titled Section 215, which allows for the collection of “call detail records”—logs of phone calls, but not phone call content. Though Section 215 received significant reforms in 2015, it lingers today. Only recently has the public learned about collection failures under its authority.

In 2018, the NSA erased hundreds of millions of call and text detail records collected under Section 215 because the NSA could not reconcile their collection with the actual requirements of the law. In February, the public also learned that, despite collecting countless records across four years, only twice did the NSA uncover information that the FBI did not already have. Of those two occasions, only once did the information lead to an investigation.

Complicating the matter is the fact that the NSA shut down the call detail record program in the summer of 2019, but the program’s legal authority remains in limbo, as the Senate approved a 77-day extension in mid-March, but the House of Representatives is not scheduled to return to Congress until early May.

If this sounds frustrating, it is, and Senators and Representatives on both sides have increasingly questioned these surveillance powers.

Remember, this is how difficult it is to dismantle a surveillance machine with proven failures. We doubt it will be any easier to dismantle whatever regime the government installs to fight coronavirus.

Separate from our recent history of over-extended surveillance is the matter of whether data collection actually works at tracking and limiting coronavirus.

So far, results range from unclear to mixed.

The problems with location and proximity tracking

In 2014, government officials, technologists, and humanitarian groups installed large data collection regimes to track and limit the spread of the Ebola outbreak in West Africa.

Harvard’s School of Public Health used cell phone “pings” to chart rough estimates of callers’ locations based on the cell towers they connected to when making calls. The US Centers for Disease Control and Prevention similarly looked at cell towers which received high numbers of emergency phone calls to determine whether an outbreak was occurring in near real-time.

But according to Sean McDonald of the Berkman Klein Center for Internet and Society at Harvard University, little evidence exists to show whether location tracking helps prevent the spread of illnesses at all.

In a foreword to his 2016 paper “Ebola: A big data disaster,” McDonald analyzed South Korea’s 2014 response to Middle East Respiratory Syndrome (MERS), a separate coronavirus. To limit the spread, the South Korean government grabbed individuals’ information from the country’s mobile phone providers and implemented a quarantine on more than 17,000 people based on their locations and the probabilities of infection.

But the South Korean government never opened up about how it used citizens’ data, McDonald wrote.

“What we don’t know is whether that seizure of information resulted in a public good,” McDonald wrote. “Quite the opposite, there is limited evidence to suggest that migration or location information is a useful predictor of the spread of MERS at all.”

Further, recent efforts to provide contact tracing through Bluetooth connectivity—which is not the same as location tracking—have not been tested on a large enough scale to prove effective.

According to a mid-March report from The Economist, just 13 percent of Singapore’s population had installed the country’s contact tracing app, TraceTogether. The low number looks even worse when gauging the app’s success in fighting coronavirus.

According to The Verge, if Americans installed a Bluetooth contact tracing app at the same rate as Singaporeans, the likelihood of being notified because of a chance encounter with another app user would be just 1.44 percent—roughly the square of the adoption rate, since a notification requires both parties in an encounter to have the app installed.

Worse, according to Dr. Farzad Mostashari, former national coordinator for health information technology at the Department of Health and Human Services, Bluetooth contact tracing could create many false positives. As he told The Verge:

“If I am in the wide open, my Bluetooth and your Bluetooth might ping each other even if you’re much more than six feet away. You could be through the wall from me in an apartment, and it could ping that we’re having a proximity event. You could be on a different floor of the building and it could ping.”

This does not mean Bluetooth contact tracing is a bad idea, but it isn’t the silver bullet some imagine. And until we know whether even location tracking works, we should assume the same of proximity tracing.

Stay safe

Today is exhausting, and, sadly, tomorrow will be, too. We don’t have the answers to bring things back to normal. We don’t know if those answers exist.

What we do know is that, understandably, now is a time of fear. That is normal. That is human.

But we should avoid letting fear dictate decisions with such significance as this. In the past, mass surveillance has grown unwieldy, lasted longer than planned, and proved ineffective. Today, it is being driven by opportunistic private actors who we should not trust as the sole gatekeepers to expanded government powers.

We have no proof that mass surveillance alone will solve this crisis. Only fear lets us believe it will.


Keep Zoombombing cybercriminals from dropping a load on your meetings

While shelter in place has left many companies struggling to stay in business during the COVID-19 epidemic, one company in particular has seen its fortunes rise dramatically. Zoom, the US-based maker of teleconferencing software, has become the web conference tool of choice for employees working from home (WFH), friends coming together for virtual happy hour, and families trying to stay connected. Since March 15, Zoom has occupied the top spot on Apple’s App Store. Only one week prior, Zoom was the 103rd-most popular app. 

Even late-night talk show hosts have jumped on the Zoom bandwagon, with Samantha Bee, Stephen Colbert, Jimmy Fallon, and Jimmy Kimmel using a combination of Zoom and cellphone video to produce their respective shows from home. 

In an incredibly zeitgeisty moment, everyone and their parents are Zooming. Unfortunately, opportunistic cybercriminals, hackers, and Internet trolls are Zooming, too.

What is Zoombombing?

Since the call for widespread sheltering in place, a number of security flaws have been discovered in Zoom’s software. Most notably, a technique called Zoombombing has risen in popularity, whether for pure mischief or more criminal purposes.

Zoombombing, also known as Zoom squatting, occurs when an unauthorized user joins a Zoom conference, either by guessing the Zoom meeting ID number, reusing a Zoom meeting ID from a previous meeting, or using a Zoom ID received from someone else. In the latter case, the Zoom meeting ID may have been shared with the Zoombomber by someone who was actually invited to the meeting or circulated among Zoombombers online.  

The relative ease by which Zoombombing can happen has led to a number of embarrassing and offensive episodes.

In one incident, a pornographic video appeared during a Zoom meeting hosted by a Kentucky college. During online instruction at a high school in San Diego, a racist word was typed into the classroom chat window while another bomber held up a sign that said the teacher “Hates Black People.” And in another incident, a Zoombomber drew male genitalia on screen while a doctoral candidate defended his dissertation.

Serious Zoombombing shenanigans

The Zoombombing problem has gotten so bad that the US Federal Bureau of Investigation has issued a warning.

That said, it’s the Zoombombs that no one notices that are most worrying, especially for Zoom’s business customers. Zoombombers can discreetly enter a Zoom conference and capture screenshots of confidential screenshares and record video and audio from the meeting. While it’s not likely for a Zoom participant to put up a slide with their username and password, the information gleaned from a Zoom meeting can be used in a phishing or spear phishing attack.

As of right now, there hasn’t been a publicly disclosed data breach as a result of a Zoombomb, but the notion isn’t far-fetched.

Numerous organizations and educational institutions have announced they will no longer be using Zoom. Of note, Google has banned the use of Zoom on company-owned devices in favor of their own Google Hangouts. The New York City Department of Education announced they’d no longer be using Zoom for remote learning. And Elon Musk’s SpaceX has banned Zoom, noting “significant privacy and security concerns” in a company-wide memo.

“Most Zoombombing incidents can be prevented with a little due diligence on the part of the user,” Malwarebytes Head of Security John Donovan said. “Anyone using Zoom, or any web conference software for that matter, is strongly encouraged to review their conference settings and minimize the permissions allowed for their conference attendees.”

“You can’t walk into a high school history class and start heckling the teacher. Unfortunately, the software lets people do that if you’re not careful,” he added.

For their part, Zoom has published multiple blog posts acknowledging the security issues with their software, changes the company has made to shore up security, and tips for keeping conferences private.

How to schedule a meeting in Zoom safely: set your meeting ID to generate automatically and always require a password.

Keep your Zoom meetings secure

Here are our tips for keeping your Zoom meetings secure and free from Zoombombers. Keep in mind that many of these tips apply to other teleconferencing tools as well. 

  1. Generate a unique meeting ID. Using your personal ID for meetings is like having an open-door policy—anyone can pop in at any time. Granted, it’s convenient and easy to remember. However, if a Zoombomber successfully guesses your personal ID, they can drop in on your meetings whenever they want or even share your meeting ID with others.
  2. Set a password for each meeting. Even if you have a unique meeting ID, an invited participant can still share your meeting ID with someone outside your organization. Adding a password to your meeting is one more layer of security you can add to keep interlopers out.
  3. Allow signed-in users only. With this option, it won’t matter if Zoombombers have the meeting ID—even the password. This setting requires everyone to be signed in to Zoom using the email they were invited through.
  4. Use the waiting room. With the waiting room enabled, the meeting doesn’t start until the host arrives and admits everyone. Attendees can’t communicate with one another while they wait. This gives you one additional layer of manual verification before anyone can join your meeting.
  5. Enable the chime when users join or leave the meeting. Besides giving you a reason to embarrass late arrivals, the chime ensures no one can join your meeting undetected. The chime is usually on by default, so you may want to check to make sure you haven’t turned it off in your settings.
  6. Lock the room once the meeting has begun. Once all expected attendees have joined, lock the meeting. It seems simple, but it’s another easy way to keep Zoombombing at bay.
  7. Limit screen sharing. Before the meeting starts, you can restrict who can share their screen to just the host. And during the meeting, you can change this setting on the fly, in case a participant ends up needing to show something.

A special note for IT administrators: As a matter of company policy, many of these Zoom settings can be set to default. You can even further lock down settings for a particular group of users with access to sensitive information (or those with a higher learning curve on cybersecurity hygiene). For more detailed information, see the Zoom Help Center.

Remember, Zoombombing isn’t just embarrassing—it’s a big security risk. Sure, the Zoombombing incidents making headlines at the moment seem to be about trolling people more than anything else, but the potential for more serious abuse exists.

No matter which web conferencing software you use, take a moment to learn its settings and make smart choices about the data you share in your meetings. Do this, and you’ll have a safe and happy socially-distanced gathering each time you sign on.


Lock and Code S1Ep4: coronavirus and responding to computer viruses with Akshay Bhargava

This week on Lock and Code, we discuss the top security headlines generated right here on Labs and around the Internet. In addition, we talk to Akshay Bhargava, Chief Product Officer of Malwarebytes, about the similarities between coronavirus and computer viruses. We discuss computer virus prevention, detection, and response, and the simple steps that consumers and businesses can take today to better protect themselves from a spreading cyberattack.

Tune in for all this and more on the latest episode of Lock and Code, with host David Ruiz.

You can also find us on the Apple iTunes store, on Google Play Music, or on whatever podcast platform you prefer.


Stay safe, everyone!


APTs and COVID-19: How advanced persistent threats use the coronavirus as a lure

The coronavirus (COVID-19) has become a global pandemic, and this is a golden time for attackers to take advantage of our collective fear to increase the likelihood of a successful attack. True to form, they’ve been doing just that: running spam and spear phishing campaigns that use coronavirus as a lure against government and non-government entities.

From late January on, several cybercriminal and state-sponsored advanced persistent threat (APT) groups have been using coronavirus-themed phishing as their infection vector to gain a foothold on victim machines and launch malware attacks. Mirroring the spread of the virus itself, the attacks targeted China first and then followed the outbreak worldwide.

In the following paper, we provide an overview of APT groups that have been using coronavirus as a lure, and we analyze their infection techniques and eventual payloads. We categorize the APT groups based on four different attack vectors used in COVID-19 campaigns: template injection, malicious macros, RTF exploits, and malicious LNK files.

You can view the full report on APTs using COVID-19 HERE.

Attack vectors

  • Template injection: Template injection refers to a technique in which the actors embed a script moniker in the lure document—a link, placed in the document’s XML settings, to a malicious remote Office template. Upon opening the document, the remote template is fetched and executed. The Kimsuky and Gamaredon APTs used this technique (a rough detection sketch follows this list).
  • Malicious macros: Embedding malicious macros is the most popular method used by threat groups. In this technique, a macro is embedded in the lure document that will be activated upon opening. Konni (APT37), APT36, Patchwork, Hades, TA505, TA542, Bitter, APT32 (Ocean Lotus) and Kimsuky are the actors using this technique.
  • RTF exploits: RTF is a flexible text format that allows embedding any object type within it, which leaves RTF files vulnerable to many OLE object-related vulnerabilities. Several Chinese threat actors use RTF files, among them the Calypso group and Winnti.
  • Malicious LNK files: An LNK file is a shortcut file used by Microsoft Windows and is a Shell item type that can be executed. Mustang Panda is a Chinese threat actor that uses this technique to drop either a variant of the PlugX RAT or Cobalt Strike onto victims’ machines. Higaisa is a North Korean threat group that also uses this method.
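
To make the first vector concrete: a template-injected document carries a relationship entry that points its attached template at a remote URL. The rough Python sketch below—our own detection idea, not tooling from the report—unzips a .docx and flags any attachedTemplate relationship with an http(s) target:

    import re
    import sys
    import zipfile

    def has_remote_template(path):
        # Office documents are ZIP archives; relationship files (*.rels)
        # declare, among other things, the document's attached template.
        with zipfile.ZipFile(path) as doc:
            for name in doc.namelist():
                if not name.endswith(".rels"):
                    continue
                rels = doc.read(name).decode("utf-8", errors="replace")
                for rel in re.finditer(r"<Relationship\b[^>]*>", rels):
                    tag = rel.group(0)
                    if "attachedTemplate" not in tag:
                        continue
                    target = re.search(r'Target="([^"]+)"', tag)
                    # A remote template is the red flag. Real lures may use
                    # other schemes, so this check is deliberately naive.
                    if target and target.group(1).lower().startswith(("http://", "https://")):
                        return True
        return False

    if __name__ == "__main__":
        print(has_remote_template(sys.argv[1]))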

We expect that in the coming weeks and months, APT threat actors will continue to leverage this crisis to craft phishing campaigns using the techniques mentioned in the paper to compromise their targets.

The Malwarebytes Threat Intelligence Team is monitoring the threat landscape and paying particular attention to attacks trying to abuse the public’s fear around the COVID-19 crisis. Our Malwarebytes consumer and business customers are protected against these attacks, thanks to our multi-layered detection engines.


Online credit card skimming increased by 26 percent in March

Crisis events such as the current COVID-19 pandemic often lead to a change in habits that captures the attention of cybercriminals. With the confinement measures imposed in many countries, for example, online shopping has soared and along with it, credit card skimming. According to our data, web skimming increased by 26 percent in March over the previous month.

While this might not seem like a dramatic jump, digital credit card skimming was already on the rise prior to COVID-19, and this trend will likely continue into the near future.

While many merchants remain safe despite the increased volume of processed transactions, shoppers’ exposure to compromised e-commerce stores is greater than ever.

Change in habits translates into additional web skimming attempts

Web skimming—known under various names, but made popular thanks to the ‘Magecart’ moniker—is the process of stealing customer data, including credit card information, from compromised online stores.

We actively track web skimmers so that we can protect our customers running Malwarebytes or Browser Guard (the browser extension) when they shop online.

The stats presented below exclude any telemetry from our Browser Guard extension and reflect a portion of the overall web skimming landscape, per our own visibility. For instance, server-side skimmers will go unaccounted for, unless the merchant site itself has been identified as compromised and is blacklisted.

One trend we have noticed for a while is that the number of skimming blocks is at its highest on Mondays, tapers off in the second half of the week, and reaches its lowest point on weekends.


The second observation is that the number of web skimming blocks increased slightly from January to February (2.5%) and then more sharply from February to March (26%). While the latter is still a moderate increase, we believe it marks a trend that will become more apparent in the coming months.


The final chart shows that we record the most skimming attempts in the US, followed by Australia and Canada. This trend coincides with the quarantine measures that began rolling out in mid-March.


Minimizing risks: a shared responsibility

As we see with other threats, there isn’t one answer to mitigate web skimming. In fact, it can be fought from many different sides starting with online merchants, the security community and shoppers themselves.

A great number of merchants do not keep their platforms up to date and also fail to respond to security disclosures. Oftentimes, the last recourse for reporting a breach is to go public and hope that the media attention bears fruit.

Many security vendors actively track web skimmers and add protection capabilities into their products. This is the case with Malwarebytes: web protection is available in both our desktop product and browser extension. Sharing our findings and attempting to disrupt skimming infrastructure is effective at tackling the problem at scale, rather than on an individual (per-site) basis.

Shopping online is convenient but not risk-free. Ultimately, users are the ones who can make savvy choices and avoid many pitfalls. Here are some recommendations:

  • Limit the number of times you have to manually enter your credit card data. Rely on platforms where that information is already stored in your account or use one-time payment options.
  • Check if the online store displays properly in your browser, without any errors or certain red flags indicating that it has been neglected.
  • Do not take trust seals or other indicators of confidence at face value. Because a site displays a logo saying it’s 100% safe does not mean it actually is.
  • If you are unsure about a site, you can use certain tools to scan it for malware or to see if it’s already on a blacklist.
  • More advanced users may want to examine a site’s source code, using Developer Tools for instance, which as a side effect may cause a skimmer to turn itself off when it notices it is being inspected (a naive sketch of this kind of inspection follows this list).
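
For that last tip, here is a naive Python sketch—ours, for illustration only—that lists the external domains a page loads scripts from, so an unexpected host can stand out. Many skimmers are obfuscated, injected dynamically, or server-side, so an empty result proves nothing:

    import re
    import sys
    from urllib.parse import urlparse
    from urllib.request import urlopen

    def external_script_hosts(url):
        # Fetch the page and collect the hosts of all <script src=...>
        # tags that differ from the page's own host.
        html = urlopen(url).read().decode("utf-8", errors="replace")
        page_host = urlparse(url).netloc
        hosts = set()
        for src in re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE):
            host = urlparse(src).netloc
            if host and host != page_host:
                hosts.add(host)
        return hosts

    if __name__ == "__main__":
        for host in sorted(external_script_hosts(sys.argv[1])):
            print(host)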

We expect web skimming activity to keep trending upward in the coming months as the online shopping habits forged during this pandemic continue well beyond it. For more tips, please check out Important tips for safe online shopping post COVID-19.

The post Online credit card skimming increased by 26 percent in March appeared first on Malwarebytes Labs.

Copycat criminals abuse Malwarebytes brand in malvertising campaign

While exploit kit activity has been fairly quiet for some time now, we recently discovered a threat actor creating a copycat—fake—Malwarebytes website that was used as a gate to the Fallout EK, which distributes the Raccoon stealer.

The few malvertising campaigns that remain are often found on second- and third-tier adult sites, leading to the Fallout or RIG exploit kits, as a majority of threat actors have moved on to other distribution vectors. However, we believe this faux Malwarebytes malvertising campaign could be payback for our continued work with ad networks to track, report, and dismantle such attacks.

In this blog, we break down the attack and possible motives.

Stolen template includes malicious code

A few days ago, we were alerted about a copycat domain name that abused our brand. The domain malwarebytes-free[.]com was registered on March 29 via REGISTRAR OF DOMAIN NAMES REG.RU LLC and is currently hosted in Russia at 173.192.139[.]27.


Examining the source code, we can confirm that someone stole the content from our original site but added something extra.

A JavaScript snippet checks which kind of browser you are running, and if it happens to be Internet Explorer, you are redirected to a malicious URL belonging to the Fallout exploit kit.

Infection chain for copycat campaign

This fake Malwarebytes site is actively used as a gate in a malvertising campaign via the PopCash ad network, which we contacted to report the malicious advertiser.


Fallout EK is one of the newer (or perhaps last) exploit kits that is still active in the wild. In this sequence, it is used to launch the Raccoon stealer onto victim machines.

A motive behind decoy pages

The threat actor behind this campaign may be tied to others we’ve been tracking for a few months. They have used similar fake copycat templates before that act as gates. For example, this fake Cloudflare domain (popcashexhange[.]xyz) also plays on the PopCash name:


There is no question that security companies working with providers and ad networks are hindering cybercriminals and wasting the effort and money they put into these campaigns. We’re not sure if we should take this plagiarism as a compliment or not.

If you are an existing Malwarebytes user, you were already safe from this malvertising campaign, thanks to our anti-exploit protection.


Copycat tactics have long been used by scammers and other criminals to dupe online and offline victims. As always, it is better to double-check the identity of the website you are visiting and, if in doubt, access it directly either by punching in the URL or via bookmarked page/tab.

Indicators of compromise

Fake Malwarebytes site

malwarebytes-free[.]com
31.31.198[.]161

Fallout EK

134.209.86[.]129

Raccoon Stealer

78a90f2efa2fdd54e3e1ed54ee9a18f1b91d4ad9faedabd50ec3a8bb7aa5e330
34.89.159[.]33


Cybersecurity labeling scheme introduced to help users choose safe IoT devices

The Internet of Things (IoT) is a term used to describe a wide variety of devices that are connected to the Internet to improve user experience. For example, a doorbell becomes part of the IoT when it connects to the Internet and allows users to see visitors outside their door.

But the way in which some of these IoT devices connect invites serious security and privacy concerns. This has led to pleas for laws and regulation in the production and marketing of IoT devices, including increased security features and better visibility into the security of those features.

Our loyal readers have seen our regular complaints about the built-in security of IoT devices and know how concerned we are about products that are designed to optimize functionality and cost over security. Many manufacturers expect consumers to care more about ease-of-use than about security.

But while this may be true for many consumers, the apparent indifference can also be explained by a lack of comparable options. If consumers were given the choice between a device that’s cheap, easy to use, and insecure and a device that’s a bit more costly but keeps users protected—our bet is there’d be a good chunk of consumers who’d select the more secure option.

While some states and countries do have laws demanding manufacturers produce “safe” products, this doesn’t help consumers in making a choice. At best, it limits their choice as some unsafe products will not make it to the market. To help users make an informed decision, some countries have decided to introduce a new cybersecurity labeling scheme (CLS) that provides consumers with information about the security of connected smart devices.

Countries introducing a cybersecurity labeling scheme

In November 2019, Finland became the first country in Europe to grant information security certificates to devices that passed the required tests. Their reasoning was that the security level of devices in the market varies a lot, and there’s no easy way for consumers to know which products are safe and which are not. As a service to the public, a website was launched to make it easy to find information about the devices that have been awarded the label.

On January 27, 2020, the UK’s Digital Minister Matt Warman announced a new law to protect millions of IoT users from the threat of cyberattack. The plan is to make sure that all consumer smart devices sold in the UK adhere to rigorous security requirements for the Internet of Things (IoT).

Shortly after the UK, the Cyber Security Agency of Singapore (CSA) announced plans to introduce a new Cybersecurity Labeling Scheme (CLS) later this year to help consumers make informed purchasing choices about network-connected smart devices.

As part of the initiative, CLS will address the security of IoT devices, a growing area of concern. The CLS, which is a first for the Asia-Pacific region, will first be introduced to two product types: WiFi routers and smart home hubs.


Recommended reading: 8 ways to improve security on smart home devices


The goals of a cybersecurity labeling scheme

The cybersecurity labeling scheme will be aligned to globally-accepted security standards for consumer Internet of Things products. It will mean that robust security standards will be introduced from the design stage and not bolted on as an afterthought.

The scheme proposes that such devices should carry a security label to help consumers navigate the market and know which devices to trust, and to encourage manufacturers to improve security. The idea is that—similar to how Bluetooth and WiFi labels help consumers feel confident their products will work with wireless communication protocols—a security label will instill confidence in consumers that their device was built according to security standards.

The Singapore CLS is a first-of-its-kind cybersecurity rating system in the APAC region, and is primarily aimed at helping the consumers make informed choices. The rating of a product will be decided on a series of assessments and tests including, but not limited to:

  • Meeting basic security requirements (e.g. unique default passwords)
  • Adherence to software and hardware security-by-design principles
  • Absence of common software security vulnerabilities
  • Resistance to basic penetration testing activity

The same is true for the law that is under preparation for the UK. Their primary security requirements are:

  • All consumer Internet-connected device passwords must be unique and not resettable to any universal factory setting.
  • Manufacturers of consumer IoT devices must provide a public point of contact so anyone can report a vulnerability, and it will be acted on in a timely manner.
  • Manufacturers of consumer IoT devices must explicitly state the minimum length of time for which the device will receive security updates at the point of sale, either in store or online.

As you can see, in both cases the main worry was the omnipresence of default passwords that were identical across a whole series of devices. On top of that, users were not clearly informed that they needed to change the default password, and changing it was often hard for the average user.

Optimizing the CLS

We applaud the efforts made by governments to improve on the overall security of IoT devices, but there are some improvements we would like to suggest.

  • The Finnish site is available in Finnish and Swedish. For an outsider, it is hard to make out which products are approved and why. An English version would be a big step forward.
  • The laws in the UK and California are a good start but could have been more restrictive. And they don’t inform a customer about the security of a device when they are looking to buy from a web shop that might be abroad.
  • The Singapore CLS for now focuses on routers and smart home hubs because they consider them the gateways to the rest of the household. While this makes sense, it is a limited scope.

What all these regulations have in common is that they only inform the customer whether a device has passed muster in a certain state or country. Certainly, we can come up with a global scheme that gives customers a security level between “don’t buy this” and “very safe,” like we have for energy efficiency in the EU.

EU energy labels

But let’s rejoice for now that these governments are making a start in a much-needed effort to improve devices and inform customers. Let us hope that the various security labeling schemes will help consumers make an informed choice and drive manufacturers to focus more on security. And that other governments will follow their examples.

Stay safe, everyone!


A week in security (March 30 – April 5)

Last week on Malwarebytes Labs, we offered readers tips for safe online shopping now that cybercriminals are ramping up Internet-based attacks, showed the impact that GDPR has around the world, and helped users understand how social media platforms mine their personal data. We also hosted our bi-weekly podcast, Lock and Code, with guest Adam Kujawa, who discussed the state of data privacy today.

Other cybersecurity news:

  • Two zero-day vulnerabilities were used by two different groups to infiltrate DrayTek Vigor enterprise routers and switch devices. (Source: SCMagazine)
  • An organisation, Cyber Volunteers 19 (CV19), is being set up to help people volunteer their IT security expertise and services to healthcare. (Source: Graham Cluley)
  • Organizations globally are exposing their networks to risk by using insecure RDP and VPN to go remote due to COVID-19. (Source: Hot for Security)
  • Houseparty is offering a $1 million reward to anyone providing proof it was the victim of a paid commercial smear campaign. (Source: TechSpot)
  • The Marriott hotel chain announced that it had suffered another data breach exposing 5.2 million guest records. (Source: SiliconRepublic)
  • Online threats have risen by as much as six times their usual levels over the past four weeks as the COVID-19 pandemic provides new ballast for cyberattacks. (Source: InfoSecurity)
  • The Internet is rife with online communities where users can go and share Zoom conference codes to organize Zoom-bombing raids. (Source: ZDNet)
  • After being criticized about several problems, Zoom itself decided to dedicate all the resources needed to better identify, address, and fix issues proactively. (Source: Zoom Blog)

Stay safe everyone!


How social media platforms mine personal data for profit

It’s almost impossible not to rely on social networks in some way, whether for personal reasons or business. Sites such as LinkedIn continue to blur the line, adding more social functions over time with features and services resembling less formal sites, such as Facebook. Can anyone imagine not relying on, of all things, Twitter to catch up instantly on breaking coronavirus news around the world? The trade-off is your data, and how these platforms profit from it.

Like it or not—and it’s entirely possible it’s a big slab of “not”—these services are here to stay, and we may be “forced” to keep using them. Some of the privacy concerns that lead people to say, “Just stop using them” are well founded. The reality, however, is not quite so straightforward.

For example, in many remote regions, Facebook or Twitter might be the only free Internet access people have. And with pockets of restriction on free press, social media often represents the only outlet for “truth” for some users. There are some areas where people can receive unlimited Facebook access when they top up their mobiles. If they’re working, they’ll almost always use Facebook Messenger or another social media chat tool to stay in touch rather than drain their SMS allowance.

Many of us can afford to walk away from these services; but just as many of us simply can’t consider it when there’s nothing else to take its place.

Mining for data (money) has never been so profitable.

But how did this come to be? In the early days of Facebook, it was hard to envision the platform being used to spread disinformation, assist in genocide, or sell user data to third-parties. We walk users through the social media business model and show how the inevitable happens: when a product is free, the commodity is you and your data.

Setting up social media shop

Often, venture capital backing is how a social network springs into life. This is where VC firms invest lots of money in promising-looking services/technology with the expectation they’ll make big money and gain a return on investment in the form of ownership stakes. When the company is bought out or goes public, it’s massive sacks of cash for everybody. (Well, that’s the dream. The reality is usually quite a bit more complicated.)

It’s not exactly common for these high-risk gambles to pay off, and what often happens is the company never quite pops. They underperform, or key staff leave, and they expand a little too rapidly with the knock-on effect that the CEO suddenly has this massive service with millions of users and no sensible way to turn that user base into profit (and no way to retain order on a service rife with chaos).

At that point, they either muddle along, or they look to profit in other ways. That “other way” is almost always via user data. I mean, it’s all there, so why not? Here are just some of the methods social networks deploy to turn bums on seats into massive piles of cash.

Advertising on social media

This is the most obvious one, and a primary driver for online revenue for many a year. Social media platforms tend to benefit in a way other more traditional publishers cannot, and revenue streams appear to be quite healthy in terms of user-revenue generation.

Advertising is a straight-forward way for social media networks to not only make money from the data they’ve collected, but also create chains where external parties potentially dip into the same pool, too.

At its most basic, platforms can offer ad space to advertisers. Unlike traditional publishing, social media ads can be tailored to personalized data the social network sees you searching for, talking about, or liking daily. If you thought hitting “like” (or its equivalent) on a portal was simply a helpful thumbs up in the general direction of someone providing content, think again. It’s quite likely feeding data into the big pot of “These are the ads we should show this person.” 

Not only is everything you punch into the social network (and your browser) up for grabs, but so is everything your colleagues and associates do, tying you up in a neat little bow of social media profiling. All of it can then be mined to make associations and estimations, which feed back into ad units and, ultimately, profit.

Guesstimates are based on your interests and those of your family, your friends, and your friends’ friends, plus other demographic-specific clues, such as your job title, pictures of your home, travel experiences, cars, and marital status. All of these data points likely help the social network neatly estimate your income, yet another way to figure out which specific adverts to send your way.
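
To make the mechanics concrete, here is a deliberately oversimplified sketch of interest-based ad selection, written in Python. It is purely illustrative: the categories, weights, and function names are invented for this example, and real platforms use far more sophisticated models over vastly more signals.

    # Hypothetical sketch: aggregate "like" signals (yours and, more weakly,
    # your network's) into an interest profile, then pick the best-matching ad.
    from collections import Counter

    def build_interest_profile(liked_posts, friend_profiles):
        """Combine a user's own likes with a weaker signal from friends."""
        profile = Counter()
        for post in liked_posts:
            profile[post["category"]] += 1.0        # direct signal: your likes
        for friend in friend_profiles:
            for category, weight in friend.items():
                profile[category] += 0.1 * weight   # indirect signal: your network
        return profile

    def pick_ad(profile, ad_inventory):
        """Choose the ad whose category scores highest in the profile."""
        return max(ad_inventory, key=lambda ad: profile[ad["category"]])

    likes = [{"category": "travel"}, {"category": "travel"}, {"category": "cars"}]
    friends = [{"travel": 3.0, "home_improvement": 5.0}]
    ads = [{"id": "ad-1", "category": "travel"},
           {"id": "ad-2", "category": "home_improvement"}]

    print(pick_ad(build_interest_profile(likes, friends), ads))  # -> the travel ad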

After all, if they send you the wrong ads, they lose. If you’re not clicking through and landing on a promo page, the advertisers aren’t really winning. All that ad investment is essentially going to waste unless you’re compelled to make use of it in some way.

Even selling your data to advertisers or other marketing firms could be on the table. Depending on the terms of service, it’s entirely possible the social platforms you use can anonymise their treasure trove and sell it for top dollar to third parties. Even in cases where the data isn’t sold, simply having it out there is always a bit risky.

There have been many unrelated, non-social-media instances where supposedly anonymous data turned out not to be anonymous at all. There are always people who can come along afterwards and piece it all together, and they don’t have to be Sherlock Holmes to do it. And all of this is before you consider that social media sites, and platforms with social components, aren’t immune to the perils of theft, leakage, and data scraping.
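
As a minimal illustration of how that piecing-together works, the sketch below links an “anonymized” dataset to a public one via shared quasi-identifiers, a so-called linkage attack. Every record and field name here is invented; the technique is the point.

    # Hypothetical sketch of re-identification via a linkage attack.
    # "Anonymized" records released by a platform (names removed):
    anonymized = [
        {"zip": "02139", "birth_date": "1970-07-31", "interest": "health forums"},
        {"zip": "90210", "birth_date": "1985-01-02", "interest": "car auctions"},
    ]

    # A separate public dataset (e.g. a voter roll) that does include names:
    public_records = [
        {"name": "A. Example", "zip": "02139", "birth_date": "1970-07-31"},
    ]

    # Joining on the shared quasi-identifiers re-attaches identities
    # to the supposedly anonymous rows.
    for anon in anonymized:
        for person in public_records:
            if (anon["zip"], anon["birth_date"]) == (person["zip"], person["birth_date"]):
                print(person["name"], "matches the record about", anon["interest"])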

As any cursory glance at a security news source will tell you, there are plenty of rogue advertisers out there to offset the perfectly legitimate ones. Whether by purchase or by stumbling upon data leaked online, scammers are happy to take social media data and tie it up in email and phone scams and additional fake promos. At that point, even data generated through theoretically legitimate means is being (mis)used by unscrupulous individuals, which only harms the ad industry further.

Apps and ads

The shift from desktop to mobile is a smart move for social networks, and if they’re able to have you install an app, so much the better (for them). Depending on the mobile platform, they may be able to glean additional information about sites, apps, services, and preferred functionalities that wouldn’t necessarily be available if you simply used a mobile web browser.

If you browse for any length of time on a mobile device, you’ll almost certainly be familiar with endless pop-ups and push notifications telling you how much cooler and awesome the app version of site X or Y will be. You may also have experienced the nagging sensation that websites seem to degrade in functionality over time on mobile browsers.

Suddenly, the UI is a little worse. The text is tiny. Somehow, you can no longer find previously overt menu options. Certain types of content no longer display correctly or easily, even when it’s something as basic as a jpeg. Did the “Do you want to view this in the app?” popup reverse the positions of the “Yes” and “No” buttons from the last time you saw it? Are they trying to trick you into clicking the wrong thing? It’s hard to remember, isn’t it?

A cynic would say this is all par for the course, but it’s something you’ve almost certainly experienced when trying to do anything in social land on a mobile device without the app.

Once you’re locked into said app, a brave new world appears in terms of intimately detailed data collection and a huge selection of adverts to choose from. Some of them may lead to sponsored affiliate links, widening the data harvesting net still further, or to additional third-party downloads. Some of these may be on official platform stores, while others may sit on unofficial third-party websites, with all the implied risk such a thing carries.

Even the setup of how apps work on the website proper can drive revenue. Facebook caught some heat back in 2008 for its US$375 developer fee. Simply having a mass of developers making apps for the platform, whether verified or not, generates data that a social network can make use of and then tie back to its users.

It’s all your data, wheeling around in a tumble drier of analytics.

Payment for access/features

Gating access to websites behind paywalls is not particularly popular with the general public. Therefore, most sites with a social networking component will usually charge only for additional services, and those services might not even be directly related to the social networking bit.

LinkedIn is a great example of this: the social networking part is there for anybody to use because it makes all those hilariously bad road warrior lifestyle posts incredibly sticky, and humorous replies are often the way people first land on a profile proper. However, what you’re paying for is increased core functionality unrelated to the “Is this even real?” comedy posts elsewhere.

Some social networking platforms have instead gated access behind a login rather than a payment. Orkut, for example, required a login to access any content. Part of the thinking was that a gated community could keep the bad things out. In reality, when data theft worms started to spread, it just meant the attacks were contained within the walls and hit the gated communities with full force.

The knock-on effect was that security researchers were delayed in analysing and tackling these threats, because many of the affected services were niche or specific to certain regions. As a result, finding out about these attacks often came down to being informed by random people that “X was happening over in Y.”

These days, access is much more granular, and it’s up to users to decide what they display publicly, with additional content requiring you to be logged in to view.

Counting the cost

Of the three approaches listed above, payment/gating is the least popular technique for encouraging a revenue stream. Straight-up traditional advertising isn’t as fancy as app/site/service integration, but it’s something pretty much anybody can use, which is handy for devs without the mobile know-how or the funds to make that integration happen.

Even so, nothing quite compares to the flexibility provided by mobile apps, integrated advertising, and the potential for additional third-party installs. With the pulling power of social media influencers boosting sticky installs, it’s possibly never been harder for key demographics to resist tapping “install.”

The most important question, then, turns out to be one of the most common: What are you getting in return for loading an app onto your phone?

It’s always been true for apps generally, and it’ll continue to be a key factor in social media mobile data mining for the foreseeable future. “You are the product” might be a bit long in the tooth at this point, but where social media is concerned, it’s absolutely accurate. With billions of people worldwide creating all of the content posted, how could it be anything else?

The post How social media platforms mine personal data for profit appeared first on Malwarebytes Labs.