
NFTs explained: daylight robbery on the blockchain

Did you hear about the JPG file that sold for $69 million?

Here’s some more detail: the JPG file is a piece of digital art made by Mike Winkelmann, the artist known as Beeple. The file was sold on Thursday by Christie’s in an online auction for $69.3 million, a record for artwork that exists only digitally. For many people this raised the question: what’s to stop me from copying it and becoming an owner as well? After all, digital files can be copied ad infinitum, with no loss of quality.

Which is where non-fungible tokens (NFTs or “nifties”) come in. NFTs are the latest, most eyebrow-raising use of blockchain technology.

Non-fungible means the token has unique properties, so it cannot be interchanged with something else. Money, for example, is fungible: you can break down a dollar or a bitcoin into change and it will still have the same value. An artwork is more like a house; each one is unique and can’t be broken into useful fractions. (Although for houses, sometimes it is only the location that distinguishes one from its neighbors.)

I chose that analogy because for houses we have a ledger to keep track of who owns what. If you want to know who owns a house, you look it up in the ledger. You can think of an NFT as a certificate of ownership for a unique object, virtual or tangible.

Art and technology

While the combination of art and technology may have sounded strange a century ago, nowadays it is no longer rare. The term digital art was first used in the early 1980s, when computer engineers devised a paint program used by the pioneering digital artist Harold Cohen. This became known as AARON, a robotic machine designed to make large drawings on sheets of paper placed on the floor.

Andy Warhol and David Hockney may be more familiar names, even to those who are not that into art. Warhol created digital art on a Commodore Amiga when the computer was publicly introduced at the Lincoln Center in New York in July 1985, and Hockney is a huge fan of the iPad.

Art and NFTs

The maintenance of the digital ledger to keep track of who owns a digital work of art is done using blockchain technology. Blockchains make it almost impossible to forge records.

Copies of the blockchain are kept on thousands of computers, and each item in the blockchain is cryptographically linked to the item before it, so every later item depends on it. Forging a record in a blockchain ledger means re-doing the transaction you want to forge, and every subsequent transaction, on a majority of all the copies in existence, at the same time.
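A minimal sketch in Python shows why this works. This is not a real blockchain (no proof-of-work, no network, no consensus); it only demonstrates the hash-chaining idea, with all names and transactions invented for the example:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    """Build a chain where each block stores the hash of the previous one."""
    chain = []
    prev = "0" * 64  # genesis placeholder
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    """Re-derive every hash; an edited block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["Alice pays Bob", "Bob pays Carol"])
assert is_valid(chain)
chain[0]["tx"] = "Alice pays Mallory"  # forge the first record...
assert not is_valid(chain)             # ...and every later block now disagrees
```

Changing one record changes its hash, which no longer matches the `prev_hash` stored in the next block, and so on down the chain. A forger would have to recompute every subsequent block, on a majority of copies, simultaneously.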

Unlike bitcoins, each NFT is unique and can contain details like the identity of its owner or other metadata. NFTs also include smart contracts. Smart contracts store code instead of data in a blockchain, and execute when particular conditions are met. An example of an NFT smart contract might give an artist a percentage of future sales of their work.
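The royalty idea can be sketched in plain Python (real smart contracts are code executed on the blockchain itself, typically in a language like Solidity; the function, rate, and amounts here are invented purely for illustration):

```python
def settle_sale(price_cents: int, royalty_bps: int = 1000) -> dict:
    """Toy stand-in for an on-chain royalty clause.

    royalty_bps is the artist's cut in basis points (1000 = 10%).
    A real smart contract would run this split automatically on
    every resale recorded on the blockchain.
    """
    artist_cut = price_cents * royalty_bps // 10_000
    seller_cut = price_cents - artist_cut
    return {"artist": artist_cut, "seller": seller_cut}

# A hypothetical $1,000 resale: the artist receives $100.
print(settle_sale(100_000))  # {'artist': 10000, 'seller': 90000}
```

The point of putting such logic in a smart contract rather than a conventional program is that it executes automatically whenever the sale condition is met, without either party being able to skip it.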

But to answer the original question, this doesn’t stop anyone from copying a digital masterpiece and enjoying it at home. The NFT ledger only shows who the owner of the original is.

Stolen NFTs

Even though the blockchain technology itself is secure, the applications that are built on or around it, such as websites or smart contracts, don’t inherit that security, and that can cause problems.

Users of the digital art marketplace Nifty Gateway reported hackers had taken over their accounts and stolen artwork worth thousands of dollars over the weekend.

A victim complaining on Twitter:

Someone stole my NFTs today on @niftygateway and purchased $10K++ worth of today’s drop without my knowledge. NFTs were then transferred to another account.

Some victims reported that the digital assets stolen from their accounts were then sold on the chat application Discord or on Twitter. The underlying problem, according to many claims, was that the thieves hacked the owner’s accounts. They then used the accounts to sell, buy, and re-sell NFTs.

This is possible because blockchain security is designed to prevent forgery, not theft. If somebody steals your NFT and sells it, the blockchain will faithfully record the sale, irreversibly.

Art turned into NFT without the artist’s knowledge

Some artists are reporting their work has been stolen and sold on NFT sites without their knowledge or permission. In some cases, the artist only learned about the theft weeks or even years later, having stumbled upon their work on an auction site. The people creating the NFT had no ownership and probably just copied the artwork from the artist’s website.

Identifying the original file

The way NFTs are set up now, they depend too much on URLs that might break at some point, or get hijacked by a clever threat actor. Jonty Wareing analyzed how Nifty Gateway references the original artwork and was not impressed, expressing his concerns on Twitter. He found that both the URL of the JSON metadata file the NFT token points to, and the IPFS gateway that file references, are set up by the seller. (IPFS is a distributed system for storing and accessing files, websites, applications, and data.)

The NFT token you bought either points to a URL on the internet, or an IPFS hash. In most circumstances it references an IPFS gateway on the internet run by the startup you bought the NFT from.

Which means when the startup who sold you the NFT goes bust, the files will probably vanish from IPFS too
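The distinction Wareing draws can be illustrated with a rough triage of what a token’s metadata pointer actually is. This is a heuristic sketch, not how any marketplace really validates tokens, and all URLs are invented: a content-addressed `ipfs://` reference survives as long as someone pins the data, while gateway and plain URLs die with their host.

```python
from urllib.parse import urlparse

def classify_pointer(token_uri: str) -> str:
    """Rough, illustrative triage of an NFT metadata pointer."""
    parsed = urlparse(token_uri)
    if parsed.scheme == "ipfs":
        return "content-addressed"     # durable while anyone pins the data
    if parsed.scheme in ("http", "https"):
        if "/ipfs/" in parsed.path:
            return "ipfs-via-gateway"  # hash is durable, gateway host is not
        return "plain-url"             # vanishes if the host goes away
    return "unknown"

assert classify_pointer("ipfs://QmExampleHash") == "content-addressed"
assert classify_pointer("https://gateway.example.com/ipfs/QmExampleHash") == "ipfs-via-gateway"
assert classify_pointer("https://api.example.com/token/42") == "plain-url"
```

Only the first category identifies the file by its content rather than by where it happens to be hosted, which is the crux of Wareing’s complaint.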

Problems with art and NFTs

The reported crimes are made possible by three apparent flaws in the way the system was set up.

  • It is possible to create more than one NFT for the same work of art. This creates separate chains of ownership for the same work of art.
  • If no NFT exists for a certain work of art, creating one does not require you to be the owner. This creates false chains of ownership.
  • The references defining the original depend too heavily on URLs that are vulnerable and could vanish at some point.

To circle back to our real-estate analogy: the only way a ledger can be expected to give an accurate account of ownership is to have one central ledger that checks whether the first owner actually bought the object from its creator. Registering a new object should also include a check that no existing registration covers the same object, to avoid creating duplicates. And for digital files we need a better way to identify them: storing URLs in the blockchain protects the URL, not the underlying file.

The post NFTs explained: daylight robbery on the blockchain appeared first on Malwarebytes Labs.

Mother charged with using deepfakes to shame daughter’s cheerleading rivals

A Pennsylvania woman reportedly sent doctored photos and videos of her daughter’s cheerleader rivals to their coaches, in an attempt to embarrass them and get them kicked off the team. She’s alleged to have used deepfake technology to create photo and video depictions of the girls naked, drinking, and vaping, law enforcement officials said.

The woman—50-year-old Raffaela Spone—was arrested in early March and charged with multiple misdemeanor counts of cyberbullying and three counts of harassment, after targeting three teen girls in Victory Vipers, her daughter’s cheerleading squad. She was later released on the condition that she attend her preliminary hearing on March 30.

A deepfake is a realistic fake image or video that uses machine learning to replace the original subject with somebody else’s likeness. The usual recipe for creating one: a deepfake tool (these are becoming widely accessible online), the original image or video, and one or more photos of the person being added to it.

According to reports, Spone likely used images from the girls’ social media accounts to create the fake media. She also anonymously sent harassing text messages from multiple fake phone numbers to the girls, their parents, and the owners of the gym where the cheerleading squad practiced. Some messages contained deepfakes, and some messages urged them to kill themselves, according to The Philadelphia Inquirer.

Police were able to identify that the fake numbers Spone used belonged to an app called Pinger. This allowed them to acquire the IP address messages were coming from, and then use the IP to acquire Spone’s home address and phone carrier. Further searches on Spone’s phone revealed evidence tying her to the deepfakes.

Per court records, there was no indication that her daughter knew what her mother was doing.

“Here are some of my concerns in this case,” said Bucks County District Attorney Matt Weintraub during a news conference Monday, “[deepfake] tech is now available to anyone with a smartphone. Your neighbor down the street, somebody who holds a grudge—you’ll just have no way of knowing. This is prevalent.”

He continued, “This is also another way for an adult to now prey on children, as is the case of the allegations in this instance.”

Crimes like Spone’s were something Henry Ajder, a deepfake researcher, saw coming. Speaking to The New York Times, Ajder, who anticipates that deepfake depictions will become more realistic in the next five years, said he is concerned they could be used to “attack individuals, create political disinformation … conduct fraud and manipulate stock markets”.

Robert Birch, Spone’s attorney, revealed to WPVI-TV, a local network, that his client has received death threats after reports about the deepfakes appeared in the press.

Victory Vipers apologized to all individuals involved in this case. “Victory Vipers has always promoted a family environment and we are sorry for all individuals involved. We have very well-established policies, and a very strict anti-bullying policy in our program,” said Mark McTague and Kelly Cramer in a statement.

“When this incident came to our attention last year we immediately initiated our own internal investigation and took the appropriate action at the time. This incident happened outside of our gym. When the criminal investigation ensued, we fully cooperated with law enforcement. All athletes involved are no longer a part of our program.”


The post Mother charged with using deepfakes to shame daughter’s cheerleading rivals appeared first on Malwarebytes Labs.

Teen behind 2020 Twitter hack pleads guilty

The so-called “mastermind” behind the 2020 Twitter hack that compromised the accounts of several celebrities and public figures—including President Barack Obama, Bill Gates, and Elon Musk—pleaded guilty to several charges on Tuesday in a Florida court.

As part of an agreed-upon plea deal with prosecutors, Graham Clark will serve three years in juvenile prison, with an additional three years spent under probation.

First reported by 10 Tampa Bay WTSP-TV, Clark’s plea deal will include restrictions to “electronic devices,” with access only permitted by the Florida Department of Law Enforcement and by those supervising Clark during his eventual probation. According to 10 Tampa Bay, at 18 years old, Clark will also be sentenced as a “youthful offender,” which could allow him to serve some of his prison time in a “boot camp.” He will also earn credit for the 229 days that he has already spent in jail.

Clark’s plea deal represents a reversal of his earlier position on August 4, 2020, when he pleaded not guilty to 30 charges of fraud brought against him by state prosecutors in Florida for allegedly stealing Bitcoin payments from countless victims. According to Hillsborough State Attorney Andrew Warren at the time, the charges filed against Clark were for “scamming people across America.”

“These crimes were perpetrated using the names of famous people and celebrities, but they’re not the primary victims here,” Warren said. “This ‘Bit-Con’ was designed to steal money from regular Americans from all over the country, including here in Florida. This massive fraud was orchestrated right here in our backyard, and we will not stand for that.” 

Last year, Clark allegedly worked with two other individuals to compromise the accounts of about 130 Twitter users in a broader scheme to steal Bitcoin payments from unsuspecting victims. On July 15, the Twitter accounts of several celebrities and industry leaders began tweeting nearly the exact same message: Sparked by sudden gratitude, anyone who donated payments to a specific Bitcoin address would receive double those payments in return.

According to the public bitcoin ledger, at the time, the hackers conned people out of more than $100,000.

Nearly two weeks later, Clark was arrested at his apartment in Tampa. Two other men—Mason Sheppard from the UK and Nima Fazeli of Orlando—were also charged in connection with the hack. Sheppard was charged with wire fraud and money laundering, while Fazeli was charged with aiding and abetting.

At the time of the attack, many asked how such a small operation—led by a teenager—could have successfully breached the security of a major technology company. According to an investigation by The New York Times, Clark’s Twitter hack was not the work of an experienced hacker, but of a tried-and-true fraudster. Having bilked victims out of small sums of about $50 for years, Clark is alleged to have eventually worked his way into a scam that involved the theft of $856,000 worth of Bitcoin, at the age of 16.

After the theft, Clark posted photos of himself on Instagram wearing a Rolex watch.

To compromise Twitter, Clark used his practiced social engineering skills to gain access to an employee control panel. From there he was able to change users’ email addresses, and to use those new email addresses to reset passwords and disable two-factor authentication, giving him access to numerous user accounts, and their millions of followers.

The post Teen behind 2020 Twitter hack pleads guilty appeared first on Malwarebytes Labs.

FBI warns of increase in PYSA ransomware attacks targeting education

On March 16, the Federal Bureau of Investigation (FBI) issued a “Flash” alert on PYSA ransomware after an uptick in attacks this month against institutions in the education sector, particularly higher ed, K-12, and seminaries. According to the alert [PDF], the United Kingdom and 12 US states have already been affected by this ransomware family.

PYSA, also known as Mespinoza, was first spotted in the wild in October 2019, when it was initially used against large corporate networks.

CERT France issued an alert a year ago about PYSA widening its reach to include French government organizations, as well as other governments and institutions outside of France. PYSA was categorized as one of the big-game hunters, joining the ranks of Ryuk, Maze, and Sodinokibi (REvil). “Big-game” ransomware attacks target entire organizations, with threat actors operating their ransomware manually after spending time breaking into an organization’s networks and conducting reconnaissance.

PYSA/Mespinoza can arrive on victims’ networks either via phishing campaigns or by brute-forcing Remote Desktop Protocol (RDP) credentials to gain access.

Before downloading and detonating the ransomware payload, the threat actors behind this ransomware were also found to conduct network reconnaissance using open-source tools like Advanced Port Scanner and Advanced IP Scanner. They also install other tools, such as Mimikatz, Koadic, and PowerShell Empire (to name a few), to escalate privileges and move laterally.

The threat actors deactivate security protection on the network, exfiltrate files, and upload the stolen data to Mega.nz, a cloud-storage and file-sharing service. After this, PYSA is then deployed and executed. All encrypted files in Windows and Linux, the two platforms this ransomware primarily targets, will have the .pysa suffix.

The FBI report also reveals a possible double extortion tactic that might be used against victims: “In previous incidents, cyber actors exfiltrated employment records that contained personally identifiable information (PII), payroll tax information, and other data that could be used to extort victims to pay a ransom.”

In the last six months, the FBI and other law enforcement organizations have been warning the education sector about increased threat activity against them. And this isn’t just limited to ransomware attacks. Phishing campaigns and domain typosquatting also come into play.

The FBI’s “Flash” alert includes these recommended mitigations for potential targets.

To prevent attacks:

  • Install security updates for operating systems, software, and firmware as soon as they are released.
  • Use multi-factor authentication wherever possible.
  • Avoid reusing passwords for different accounts and implement the shortest acceptable timeframe for password changes.
  • Disable unused RDP ports and monitor remote access/RDP logs.
  • Audit user accounts with administrative privileges and configure access controls with the lowest privileges you can.
  • Use up-to-date anti-virus and anti-malware software on all hosts.
  • Only use secure networks and avoid using public Wi-Fi networks. Consider installing and using a VPN.
  • Consider adding an email banner to messages coming from outside your organization.
  • Disable hyperlinks in received emails.
  • Provide users with training on information security principles and techniques as well as emerging cybersecurity risks.

To mitigate the effects of an attack:

  • Back up data and use air gaps and passwords to make them inaccessible to attackers.
  • Use network segmentation to make lateral movement harder.
  • Implement a recovery plan and keep multiple copies of sensitive or proprietary data in physically separate, segmented, secure locations.

The post FBI warns of increase in PYSA ransomware attacks targeting education appeared first on Malwarebytes Labs.

Apple shines and buffs Mac security—Is it enough to stop today’s malware?

There’s a lot going on in the Mac security world lately.

Over the last few months, Apple has ramped up security efforts across its platforms. From an endpoint security framework overhaul of macOS Catalina to phasing out kernel extensions, the tech giant has been battening down the hatches—especially of macOS and Mac computer hardware.

Despite Apple’s best efforts—or perhaps as a result of them—the Mac threat landscape has become even more dangerous. But instead of welcoming allied assistance via third-party security vendors, Apple is closing the gate. And cybercriminals are closing the gap.

A crack in the Mac door

It seems like only yesterday there weren’t many breaking news stories on Mac security threats to bite into. In fact, news on Apple cyberthreats wasn’t just infrequent—it was inconsequential. But over the last few years, credible threats, exploits, and hacks of Apple products have become more persistent. There was KeRanger ransomware in 2016. Several effective Mac-facing miners joined the crypto-rush in 2018. The iOS vulnerability exploited by checkm8 rattled quite a few cages in late 2019.

However, from the start of 2020 onward, the malicious momentum has been building. In the 2020 State of Malware Report, Malwarebytes researchers found that Mac malware—primarily backdoors, data stealers, and cryptominers—had risen by 61 percent over the previous year.

2020 served Apple users with a number of targeted attacks using RATs and APTs developed by nation-state actors from China, North Korea, and Vietnam. Some of these made their way into the wild; others appeared on journalists’ iPhones. ThiefQuest, a Mac malware masquerading as ransomware, was discovered in mid-2020.

Despite having the most locked-down security system of Apple’s platforms, iOS was particularly pummelled in the last year. A zero-click exploit remained unpatched for six months of 2020, leaving innocent iPhone users unaware that anyone nearby could completely take over their device without touching it. In November 2020, Apple released patches for three zero-day vulnerabilities in iOS and iPadOS that were being actively exploited in the wild.

Unfortunately, 2021 is proving to be similarly rotten for Apple. Just last week, the company released a patch for iPhone, iPad, and MacBook for a bug that could allow code execution through websites hosting malicious code. Reading between the lines, this means its browsers were vulnerable to exploits that could be launched from malicious website content, including images and ads.

While Apple didn’t comment on whether this particular vulnerability had been discovered by cybercriminals, the company released patches for three separate security bugs that were being actively exploited in January 2021. (Note: These are a different three vulnerabilities than the zero-days found in November.) And just a couple weeks ago, there was Silver Sparrow.

Silver Sparrow is a new Mac malware that swooped in on February 18 and was found on nearly 40,000 endpoints by Malwarebytes detection engines. At first considered a reasonably dangerous threat (researchers now believe it’s a form of adware), Silver Sparrow is nevertheless a malware family of intrigue for showcasing “mature” capabilities, such as the ability to remove itself, which is usually reserved for stealth operations.

One of Silver Sparrow’s more advanced features is the ability to run natively on the M1 chip, which Apple introduced to macOS in November. The M1 chip is central to Apple’s latest security features for Mac computers, and that makes it central to the apparent security paradigm shift happening within the company’s walls.

Apple security paradigm shift

And what paradigm shift is that? Macs running the M1 chip now support the same degree of robust security Apple consumers expect from their iOS devices, which means features like Kernel Integrity Protection, Fast Permission Restrictions (which help mitigate web-based or runtime attacks), and Pointer Authentication Codes. There are also several data protections and a built-in Secure Enclave. Put plainly: Apple have baked security directly into the hardware of their Macs.

But the security changes aren’t limited to the M1 chip or even macOS. On February 18, the company released its Platform Security Guide, which details the changes in iOS 14, iPadOS 14, macOS Big Sur, tvOS 14, and more—and there are many. From an optional password manager feature in Safari that looks out for saved passwords involved in data breaches to new digital security for car keys on Apple Watches and the iPhone, the security sweep appears to be comprehensive. In the guide preamble, Apple touts:

“Apple continues to push the boundaries of what’s possible in security and privacy. Apple silicon forms the foundation for…system integrity features never before featured on the Mac. These integrity features help prevent common attack techniques that target memory, manipulate instructions, and use JavaScript on the web. They combine to help make sure that even if attacker code somehow executes, the damage it can do is dramatically reduced.”

Looking at the collective security improvements made to Macs over the last several months—the M1 chips, changes to system extensions, an entirely new endpoint security framework—it appears Apple is making great strides against the recent uptick in cyberattacks. In fact, they should be commended for developing many beneficial technologies that help Mac (and iPhone) users stay more secure. However, not all of the changes are for the better.

Securing themselves in the foot

Unlike their Microsoft counterparts, Apple have been historically far more reticent about working with others—and that extends to third-party antivirus programs and security researchers alike. Their recent security upgrades for macOS and MacBook hardware are, unfortunately, right on brand.

The security components of M1-based Macs are harder to analyze and verify for those looking in from the outside. Security researchers and the tools they use may be thwarted by a less-than-transparent environment. Essentially, the new developments have hidden Mac defenses behind castle walls, which could make it more difficult for users, businesses, or analysts to know whether their devices have been compromised.

In a recent article in the MIT Technology Review, journalist Patrick Howell O’Neill said that Apple’s security succeeds in keeping almost all of the usual bad guys out, but when the most advanced hackers do break in, “Apple’s extraordinary defenses end up protecting the attackers themselves.” Those threat actors with the resources to develop or pay for a zero-day exploit can pole jump over the Apple security wall and, once inside, move around fairly undetected because of its locked-down, secretive nature.

Mac system extensions and the endpoint security framework introduced in Catalina are similarly problematic. Third-party software developers must apply to Apple for system extensions, and they aren’t just handing them out like masks and sanitizer. Once a developer gets a system extension approval from Apple, though, that developer’s software is protected by System Integrity Protection—and it’s nearly impossible to remove the extension unless you’re the owner of the software.

That’s great for legitimate third-party software programs, like Malwarebytes for Mac, especially in protecting against outside threats that might try to disable security software during an attack. But not every company that applies for system extensions is legitimate.

There have already been a few examples of developers known for cranking out potentially unwanted programs (PUPs) getting extensions from Apple. Because of this, some PUPs can no longer be fully removed by Malwarebytes (or any other security vendor) from Mac computers running Catalina or Big Sur. And while there are some ways that users can manually remove these programs, they are by no means straight-forward or intuitive.

No matter the malware

There’s been much fuss made about “actual” Mac malware in the press (and in this very article), but PUPs and adware are a significant issue for Mac computers. Cue the classic rebuttal: “But it’s only PUPs!” While many like to trivialize them, PUPs and adware open the door for more vulnerabilities, making an attack by malicious software even easier. Adware, for example, can host malicious advertising (malvertising), which can push exploits or redirects to malicious websites. If the most recent vulnerability patched by Apple wasn’t already being exploited, that would have been a perfect opportunity for cybercriminals to penetrate the almighty Apple defenses.

As discovered in the State of Malware Report, PUPs represented more than 76 percent of Mac detections in 2020. Adware accounted for another 22 percent. Actual malware designed for Macs is but a small slice of the apple. But it’s a growing slice for businesses with Mac endpoints.

In 2020, Mac threat actors decided to take a page out of the Windows cybercriminal book and turn their attention toward larger organizations instead of individuals. To that end, Mac malware increased on business endpoints by 31 percent in 2020—remote work and all. There may not be as many “actual” malware attacks on Mac endpoints as on Windows, but the share of Macs in business environments has been increasing, especially since the start of the pandemic.

Apple has developed some impressive armor for its Macs, but it doesn’t protect against the full scope of what’s out there. Further, Apple only uses static rules definitions for its anti-malware protection, which means it won’t stop malware it doesn’t already recognize. A security program that uses behavioral detection methods (heuristic analysis), like Malwarebytes Endpoint Detection and Response, has the potential to catch a lot of bad apples that Apple hasn’t seen yet.

As time goes on, we’re increasingly in danger of a major attack waged against Macs. There are still a myriad of Mac users who don’t install any third-party security. Fundamentally, Macs still aren’t all that difficult to infect—even with all the bells and whistles. And by closing their systems, Apple is limiting the capabilities of additional third-party security layers to assist in stopping that major attack from doing major damage.

Apple’s days of sitting on the security fence are certainly over. Time will tell if their fortress-like defenses win out, or if they’ll eventually need to depend on their allies for assistance.

The post Apple shines and buffs Mac security—Is it enough to stop today’s malware? appeared first on Malwarebytes Labs.

Careers in cybersecurity: Malwarebytes talks to teachers and students

Every year, I take part in talks for universities and schools. The theme is often breaking into infosec. I give advice to teens considering pursuing tech as a further area of study. I explain a typical working day for degree undergraduates. Sometimes I’m asked to give examples of conference talks. I get to dust off some oldies and give a snapshot of security research circa [insert year of choice here].

I’ve been doing this for about five years now, and it’s incredibly helpful for me and (hopefully) students too. I see real concerns from people who’ll end up being the next wave of researchers, writers, and communicators.

Get involved: benefits for the education space

If you work in security research and are considering doing something similar, you should! It’s helpful for many reasons:

  • It gives you a solid idea of what the next generation find interesting, research-wise. Which bits of tech do they love? What do they think will be an issue down the line? Maybe they prefer virtual machines to bare metal. Perhaps we’ll have an hour-long debate over the rights and wrongs of paying malware authors. You won’t know until you try it!
  • If you do any amount of public speaking, interviews, talks, whatever: it keeps you from going rusty. The Pandemic has shut down many conferences and sent more than a few online. If you’re unsure about doing online talks when your background is “real world only”, it’s helpful practice. Want to know what works in virtual spaces? This will definitely help.
  • Schools and universities really get a lot from these events. It’s usually quite difficult for them to get people booked in to speak about things. From experience, educators will absolutely appreciate any outreach or help you can give their students. It’s a win-win for everybody.

“I thought it was all code”

Something I emphasise is that the information security field is made up of people from a huge number of different backgrounds. I’ve met many despondent students who felt their coding skills weren’t up to scratch. Their impression is that everything is 100% coding or programming.

It’s true, coding and programming can be incredibly difficult things to understand. Skills like reverse engineering malware can take years to perfect. There’s no guarantee of being able to keep pace with malware developments in the meantime, either.

Well, there are lots of fun ways issues like that can be addressed.

Even so, “I thought you had to be a qualified coder / programmer” is something I hear all the time. If not that, they often feel a lack of skills in one area negates everything else they’re good at.

It’s quite a relief for them to find out this doesn’t have to be the case.

The myth of the “expert at everything”

In media, security researchers are often presented as experts on all topics imaginable. The reality is people excel in their own little niche field and have a variable skillset for everything else. Experienced security pros know when to ask for help, and there’s absolutely nothing wrong with it. You really don’t have to know everything, all the time. This is another concern relayed to me by many students over the last few years.

The many paths to the security industry

When doing these sessions, a few key talking points come up time and again. Quite a few students have to be convinced that lots of security folk don’t necessarily have technology qualifications. There are also many roles which don’t involve any coding whatsoever. However, these are roles students haven’t considered, because they had no idea they existed.

Some of the deepest hardware knowledge I’ve come across is from people in sales teams. Do you like the idea of public-facing research? There’s blog and press opportunities for that. Is the idea of promoting your company’s research to a wide audience an exciting one? There’s probably a spot in marketing for you. At the furthest reaches of “no tech involvement whatsoever”, security organisations need people to design things. Maybe it’s time to dust off that design degree and start sending in your resume?

Whatever your skillset as a student, there is absolutely something you can do. That talent of yours will be a benefit to an organisation in the security space.

Thinking outside the box

One of the most interesting things about fresh talent is watching it pull apart new technology and highlight unforeseen dangers.

Look at some of the things we dig into on our very own blog. Web beacons, virtual/augmented reality, the Internet of Things, deepfakes, malign influence campaigns, securing accounts after someone’s died, and much more. The industry as a whole is more open to new / different research than it’s ever been. It has to be, or bad people will be getting away with virtual murder while everyone twiddles their thumbs.

In the last few days we’ve seen a run of art-related NFT thefts. Try telling someone that 12 months ago and see what the reaction would be. Someone out there has an idea for a solution to this kind of problem. They just don’t know it yet. It’s up to us to encourage them and see what kind of cool solutions they come up with.

Talking with teachers: Holly Smylie

Computer Science teacher Holly Smylie, who sat in one of our talks, has given some insight into how the industry can help students:

Open days and talks are great in terms of giving students access to positive role models from the industry such as yourself. It essentially gives them an exposure to experiences of infosec that they may otherwise not have had from their environment, meaning that it can make a massive difference in terms of their future career aspirations and later life chances. 

I think that one of the greatest take away from your talk for my students was that although qualifications are obviously important, they aren’t the be all and end all. There are still other routes into the sector without the “usual qualifications”. It allows them to think beyond an exact route into something they want to know more about. Also, I think that there is more that our industry could do in terms of addressing the gender imbalance – whether this is providing talks or networking between students and female experts in the industry.

Again, these role models for students at school and even uni-level via talks, open days, visiting companies, etc can often be the tipping point for female students who do not believe that they would succeed in this industry (as it is still very male dominated). Again, I think this just fits in with broadening female students’ horizons to the world of infosec and giving them confidence that they will be just as valued as our male colleagues.

Closing thoughts

According to some predictions, there’s a huge number of jobs which will go unfilled into the next year. I’m not convinced the numbers will be as big as that. Even so, helping students of all ages with paths into the security industry can only be a good thing. The pandemic hasn’t made technology learning easy over the last year. I’m glad we at Malwarebytes have been able to pitch in and give students some possible careers to think about.

Special thanks to Holly, and the schools and Universities we’ve run these sessions for. We wish your students success in the years to come.

The post Careers in cybersecurity: Malwarebytes talks to teachers and students appeared first on Malwarebytes Labs.

ProxyLogon PoCs trigger a game of whack-a-mole

As we reported recently, the use of the Microsoft Exchange Server ProxyLogon vulnerabilities has gone from “limited and targeted attacks” to a full-scale panic in no time.

Criminal activities, ranging in severity from planting crypto-miners to deploying ransomware, and conducted by numerous groups, have quickly followed the original exploitation by APT groups to spy on organizations.

With the focus of many security and IT professionals now firmly fixed on the world’s vulnerable Exchange servers, proof-of-concept exploits (PoCs) have surfaced left and right.

Some argue that since some attackers already possess exploit code, it’s only right for defenders to have it too, so they can test their systems by simulating what those attackers might do. Others say that PoC code doesn’t redress the balance because it’s a leg up for everyone, including criminals who haven’t created their own exploits yet.

And while most researchers deliberately omit specific components of a PoC, others feel compelled to publish full working exploits, enabling even the most technically challenged script-kiddies to use them maliciously.

All of which explains why some people in the computer security community are busy trying to publish ProxyLogon PoCs, while others are trying to stop them.

Purposely broken exploit

Bleeping Computer reports that a security researcher has released a proof-of-concept exploit that requires slight modification to install web shells on Microsoft Exchange servers vulnerable to the actively exploited ProxyLogon vulnerabilities.

“Firstly, the PoC I gave can not run correctly. It will be crashed with many of errors. Just for trolling the reader,” Jang told BleepingComputer.

Soon after the PoC was published, the publication reports that Jang received an email from Microsoft-owned GitHub stating that the PoC was being taken down as it violated the site’s Acceptable Use Policies.

GitHub under fire

GitHub received a ton of criticism for removing the proof-of-concept exploit. In a statement, the site said it took down the PoC to protect devices that are being actively exploited.

“We understand that the publication and distribution of proof of concept exploit code has educational and research value to the security community, and our goal is to balance that benefit with keeping the broader ecosystem safe. In accordance with our Acceptable Use Policies, GitHub disabled the gist following reports that it contains proof of concept code for a recently disclosed vulnerability that is being actively exploited.”

The main reason for criticism was that the vulnerability has a patch, so Microsoft had no reason to have the PoC removed. Some researchers also claimed GitHub has a double standard, since it has allowed PoC code for patched vulnerabilities affecting other organizations’ software in the past.

We have some sympathy with Microsoft here: a patch may be available but that doesn’t mean everyone is protected. A patch is only useful once it has been applied, and tens of thousands of servers are still unpatched.

Reverse engineering an exploit

To demonstrate how researchers go about turning a vulnerability into an exploit, Praetorian posted their methodology for a ProxyLogon attack chain.

By examining the differences (diffing) between a pre-patch binary and post-patch binary they were able to identify exactly what changes were made. These changes were then reverse engineered to assist in reproducing the original bug.
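Patch diffing of this kind is usually done at the function level with dedicated tools such as BinDiff or Diaphora, but the core idea can be shown with a toy sketch: compare a “pre-patch” and “post-patch” byte sequence and report exactly which offsets changed. The bytes below are made-up examples, not real Exchange binaries.

```python
# Toy illustration of binary diffing: walk two builds of the same
# code byte-by-byte and yield the offsets where they differ.
def diff_offsets(pre: bytes, post: bytes):
    """Yield (offset, old_byte, new_byte) for every differing byte."""
    for i, (a, b) in enumerate(zip(pre, post)):
        if a != b:
            yield (i, a, b)

pre_patch  = bytes([0x90, 0x90, 0x74, 0x05, 0x90])  # e.g. a jz (0x74) branch
post_patch = bytes([0x90, 0x90, 0xEB, 0x05, 0x90])  # patched to jmp (0xEB)

print(list(diff_offsets(pre_patch, post_patch)))    # [(2, 116, 235)]
```

In this hypothetical example, the single changed byte (a conditional jump rewritten as an unconditional one) points a researcher straight at the code path the vendor considered dangerous, which is why a patch alone can be enough to reconstruct an exploit.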

Cat is out of the bag

The problem with removing PoCs from a platform like GitHub is that the code will just re-surface elsewhere. It is very hard to make the Internet, as a collective brain, forget something.

Even if the author doesn’t post it somewhere else, there will always be that individual that has already copied the content before it was removed. Or another who is inspired to try to create their own.

For Malwarebytes Labs, one size doesn’t fit all. Sometimes a PoC can help to improve security, and sometimes some restraint is needed. Each situation needs to be judged on its merits.

The current situation is a crisis, and despite efforts to take down the emerging ProxyLogon PoCs, or neuter them by making them less than fully functional, you can bet they will be put to use by criminals. Meanwhile, the owners of the remaining unpatched systems are scrambling to save what they can.

Other Malwarebytes posts on the ProxyLogon vulnerability:

Stay safe, everyone!

The post ProxyLogon PoCs trigger a game of whack-a-mole appeared first on Malwarebytes Labs.

Royal Mail scam says your parcel is waiting for delivery

Expecting a delivery? Watch out for phishing attempts warning of held packages and bogus shipping fees. This Royal Mail delivery scam begins with a text message out of the blue, claiming:

Your Royal Mail parcel is waiting for delivery. Please confirm the settlement amount of 2.99 GBP via:

Uk(dot)royalmail-bill(dot)com

Lots of folks may assume this text message, along with the URL, is genuine. This would be a mistake. What we have is a simple but effective phish. It takes advantage of several real-world factors that make it a bit more believable than other missives landing in mailboxes.

What are they up to? Let’s find out.

“If you do not pay this your package will be returned”

The link leads to a fake Royal Mail page which essentially repeats the message from the text, with one important addition:

If you do not pay this your package will be returned to sender

[Screenshot: the fake Royal Mail page warning that the package will be returned to sender]

It doesn’t mention how long is left until the package is returned. (There’s nothing like a bit of sudden pressure to make people jump through some hoops.)

The phishing page has two sections. The first asks for a lot of personal details like name, address, phone number, and email address. Clicking the continue button leads to a request for payment information, in order to pay the non-existent fee.

[Screenshot: the fake Royal Mail payment details form]

If the victim continues, the phisher has both their personal information and their credit card.

Why this phishing attack works

This is a smart scam, for a number of reasons.

  1. The phish carries the usual markers of urgency and a request for information. It also doesn’t provide any clue about what’s in the non-existent package or who it’s from, tweaking victims’ fear of missing out, while promising to make that information available for a reasonably small and realistic fee.
  2. The endless pandemic ensures huge numbers of people are buying everything online. It’s not uncommon for households to have a steady army of delivery people at the door. A week’s shopping, clothes, entertainment items, schoolbooks for the kids, and more besides are all conveyor-belting their way into homes daily. It’s quite easy to forget which parcels have been ordered and which have already arrived.
  3. Delivery companies long ago abandoned the practice of texting from a single “official” number, and numbers are easy to spoof anyway. If you’re waiting on a parcel, you could get a message from pretty much any number at all, including the personal mobile of the driver, so checking whether the number is official is no help.
  4. In the UK, Brexit is causing no end of confusion over delivery charges. People and organisations simply don’t seem to know what to expect, and this kind of phishing scam plays off that confusion to the max. If you’re waiting on something from outside the UK and find out a parcel is almost within reach? It’s likely you may be tempted to fill in the payment information request so as not to risk having the package returned to sender.

Next steps

If you or anyone you know has been caught by this, contacting banks or credit card companies is a priority. This would also be a good time to explore our in-depth look at phishing tactics. It’s a particularly unpleasant scam to be caught out by, when a majority of people are reliant on postal services. If you’re in doubt over the status of a parcel, go directly to your delivery service’s website. What you’ll lose in time, you’ll more than make back in terms of your bank account remaining safe and sound.

The post Royal Mail scam says your parcel is waiting for delivery appeared first on Malwarebytes Labs.

How your iPhone could tell you if you’re being stalked

The latest iOS beta suggests that Apple’s next big update will include an iPhone feature that warns users about hidden, physical surveillance of their location. The feature detects AirTags, Apple’s answer to trackable fobs made by Tile, and serves to block the potential abuse of the much-rumored product.

While the feature represents great potential, digital surveillance experts said that they were left with more questions than answers, including whether surveilled iPhone users will be pointed to helpful resources after receiving a warning, how the feature will integrate with non-Apple products—if at all—and whether Apple coordinated with any domestic abuse advocates on the actual language included in the warnings.

Erica Olsen, director of Safety Net at the National Network to End Domestic Violence, emphasized the sensitivities of telling anyone—particularly domestic abuse survivors—about unknown surveillance that relies on a hidden device.

“It could be extremely scary to get a notification about a device and have no idea where to start to locate and disable it,” Olsen said. “That’s not to say that it’s a bad thing; it just needs to be thorough.”

Apple did not respond to questions regarding the language of its notifications or about the company’s potential outreach to external domestic abuse advocates in crafting the feature. Members of the Coalition Against Stalkerware—of which Malwarebytes is a founding partner—said they were open to collaborating with Apple on the feature.

New “Item Safety Alerts”

According to 9to5Mac, the latest beta version for iOS 14.5 includes an update to the “Find My” app, which helps users locate iPhones, iPads, iPod Touches, and Mac computers that may have been lost or stolen. Importantly, while each of those devices can run the Find My app for their respective operating systems, it is only the iPhone version of the app—as witnessed in the iOS 14.5 beta—that includes a new setting called “Item Safety Alerts.”

The setting is turned on by default, and, according to Apple blogger and iOS developer Benjamin Mayo, any attempts to turn off the setting will result in a warning that reads:

“The owner of an unknown item will be able to see your location and you will no longer receive notifications when an unknown item is found moving with you.”

As the iOS update is still in beta, there is limited information, and the “notifications” referenced in the Item Safety Alerts advisory have not been revealed. However, the advisory itself reveals the purpose of the alerts: to warn iPhone users when a separate, unknown trackable device is in close, frequent proximity to their iPhone.

In theory, this type of surveillance has been possible for years. By abusing the intentions of Apple’s Find My app, a stalker or a domestic abuser could plant a device that can be tracked by Find My, such as an iPhone or an iPod touch, onto a victim and track their movements. But, while this type of location monitoring was possible, it also had some obvious obstacles. One, purchasing a capable device could be expensive, and two, the actual devices that can be tracked are rather easy to spot, even for unsuspecting victims. After all, it isn’t every day that someone just happens to find an entirely different phone in their gym bag.

Those obstacles could fade away, though, if Apple follows through on releasing its next, rumored product.

According to multiple tech news outlets, Apple will release physical location-tracking tags in 2021, dubbed “AirTags.” The devices could directly compete with the company Tile, which makes small, physical squares of plastic that can be slipped into personal items like luggage, purses, backpacks, wallets, and other important items that could be lost or stolen.

Unfortunately, the smaller a location-tracking device is, the easier it is to use it against someone without their consent, as revealed by a woman in Houston who said her ex stalked her after planting a Tile device in her car. The woman, who remained anonymous for her safety, told ABC 13 news in an interview:

“It was shocking. In a million years, it never occurred to me that could be possible and instantly everything made sense. I think that’s what’s important that for people who are in a domestic violence situation or stalking situation to know that should be a consideration.”

The iOS 14.5 beta feature, then, makes much more sense when accounting for a potential future with Apple’s AirTags. Malicious users could purchase AirTags and sneak them into a person’s purse or their backpack without their knowledge.

The new “Item Safety Alerts” could curb that type of abuse, though, warning users about unrecognized devices that are located in the same vicinity as their current device, but are not registered through their own Find My app.

Important considerations for Apple

Several representatives from members of the Coalition Against Stalkerware said that Apple’s new feature has real potential to help users, but without more details, many questions remain.

Tara Hairston, head of public affairs for North America at Kaspersky, said she wanted to know more about how Find My could work with third-party devices, so that clandestine surveillance could be detected beyond the use of Apple’s rumored AirTags, and beyond the use of an iPhone, too. According to 9to5Mac, the updates to Find My include a new “Item” tab to track third-party accessories, but questions from Malwarebytes Labs to Apple about the extent of that cross-functionality went unanswered.

Hairston also expressed concerns about the development of the feature.

“A question I have is whether Apple has discussed the alert’s language with professionals and advocates that work with domestic violence survivors to ensure that it is not re-traumatizing for them,” Hairston said. “Furthermore, does Apple plan to provide information regarding what someone should do if they confirm that they are being tracked, especially if they are a survivor? Accounting for these types of safety considerations would result in more holistic support for vulnerable populations.”

These are routine considerations for the Coalition Against Stalkerware, which was intentionally built as a cross-disciplinary group to help protect users from the threats of stalkerware. For the same reason that the coalition’s domestic violence advocates are not the experts on technological sample detection, the coalition’s cybersecurity vendors are not the experts on protecting survivors from domestic abuse. But when the members work together, they can do informed, great things, like developing a new way to detect stalkerware which can happen outside of a compromised device—a critical need that many cybersecurity vendors did not know about until joining the coalition.

At Malwarebytes Labs, we await the release of Apple’s feature, and we are eager to learn about the work that went into it. Any company taking steps to limit non-consensual surveillance is a good thing. Let’s work together to make it great.

The post How your iPhone could tell you if you’re being stalked appeared first on Malwarebytes Labs.

The Malwarebytes 2021 State of Malware report: Lock and Code S02E04

This week on Lock and Code, we discuss the top security headlines generated right here on Labs. In addition, we tune in to a special presentation from Adam Kujawa about the 2021 State of Malware report, which analyzed the top cybercrime goals of 2020 amidst the global pandemic.

If you just pay attention to the numbers from last year, you might get the wrong idea. After all, malware detections for both consumers and businesses decreased in 2020 compared to 2019. That sounds like good news, but it wasn’t. Behind those lowered numbers were more skillful, more precise attacks that derailed major corporations, hospitals, and schools with record-setting ransom demands.

Tune in to hear about how cybercrime has changed, along with examples of some of the most nefarious malware upgrades, on the latest episode of Lock and Code, with host David Ruiz.

https://feed.podbean.com/lockandcode/feed.xml

You can also find us on the Apple iTunes store, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

We cover our own research on:

Other cybersecurity news

Stay safe!

The post The Malwarebytes 2021 State of Malware report: Lock and Code S02E04 appeared first on Malwarebytes Labs.