IT NEWS

World Password Day must die

The continued existence of World Password Day is a tell that something has gone badly wrong in cybersecurity.

Now in its tenth year, the day is supposed to act as an annual reminder for people to follow good password hygiene: Don’t reuse passwords; use long passwords; no, longer passwords than that; use a collection of random words; no, not those words; use a phrase; use a collection of phrases; don’t forget the weird characters; etc., etc.

This is bad. Critical technology should not require an annual pep talk to function correctly. There is no annual “how to avoid nuclear meltdown” day.

And make no mistake, password authentication is critical technology. It is the bedrock on which security is built. Fail at authentication and it doesn’t matter how “military-grade” your encryption is or if you patch twice a day before flossing, you’re toast.

The existence of World Password Day is a symptom of two problems.

The first is that password authentication is a terrible design. Its success hinges on humans being good at something humans are really bad at: Creating and remembering long strings of random characters.

In an environment where users must now remember about 100 passwords each, it is impossible to use passwords well without assistance. The only chance you have of making it work is to outsource the “creating and remembering” part you’re really bad at to a computer, in the form of some password management software.

Password managers are great—apart from where they aren’t, like when you’re logging in to Windows—but from what we can tell, most people still don’t use password managers, and those who do are almost certainly the most security-aware among us; in other words, the folks who need the help the least.

And when I write “impossible” I am not being hyperbolic. You cannot remember 100 different, strong passwords. You just can’t. Almost all of us run into serious problems juggling fewer than ten. (If you’re still doubtful, read Why (almost) everything we told you about passwords was wrong, it’s got more details and links to the research.)

The second problem is that for too long we made passwords a problem for users to solve instead of a problem for IT or security. Dispersing the responsibility like this created an enormous headache that has consumed untold resources. A system is only as strong as its worst password choice, but we almost never know what the worst choice is or who made it. That creates a situation where improving security rests on our ability to improve every single user in the hope that we’ll reach the worst.

Attempts to level up users often boil down to edicts about how to do passwords better, such as making sure each password includes a mixture of uppercase and lowercase letters, and that passwords are not reused.

It’s like we asked the janitor to configure the firewall rules and then tried to fix our terrible mistake by having a firewall expert constantly lecture the janitor about not messing up the firewall.

Repeated password breaches over decades—which show us real users’ password choices—suggest that these edicts are having little effect. This shouldn’t be a surprise. Reusing passwords and making passwords simpler may be bad for security, but they make perfect sense if your most pressing problem is working out how to juggle an unmanageably large portfolio of passwords.

Our experiment in shifting responsibility and blame to users hasn’t worked. Ransomware gangs rely routinely on phished, stolen, or guessed passwords to break into corporate networks through VPNs or remote desktops, causing untold damage and disruption.

The good news is that while there isn’t much we can do about problem number one, number two was a choice, and it’s a choice we can un-make. There is another way, but it requires a shift in mindset.

Instead of thinking about how to get users to choose stronger passwords, businesses should focus on protecting themselves from users’ poor password choices.

The most powerful way to do this is to remove passwords entirely. Thankfully, after decades of false starts, a slew of technologies like Apple’s Touch ID, Windows Hello, and FIDO2 has appeared that now make this a viable option in a number of areas.

Passwords are going to be with us for a long time yet though, so we still need ways to cope with bad ones where passwordless authentication is unavailable.

Where you can’t abandon passwords, the next best option is multi-factor authentication (MFA). In 2019, Microsoft’s Alex Weinert wrote that “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

MFA comes in different flavors and your choice of flavor makes a difference: Hardware keys are better than push notifications from an app, which are better than One-Time Password (OTP) codes from an app, which are better than OTP codes over SMS. But the improvements that come in the steps between the different forms of MFA are incremental. The step between MFA of any kind and no MFA at all is transformational.

More than any other choice or technology, MFA puts the responsibility for password security back into the hands of IT and security specialists where it belongs.

There are other measures, too. When you go to an ATM you don’t have to type in a 14-character password with octillions of possible combinations to get your money—a 4-digit PIN with a paltry 10,000 possible combinations will do.

Why? Because the ATM isn’t going to give an attacker 10,000 chances to guess the correct PIN, it’s going to give them three, and then it’s going to eat the card. The same thing happens on your iPhone. Six wrong guesses and you’re on the naughty step. Ten wrong guesses and your data can self-destruct.

No normal user is going to make hundreds of guesses at their password before phoning support, so take a leaf out of your bank’s playbook and give your users a handful of chances to enter their password correctly.

Like MFA, account lockouts allow users to stay secure even with truly awful password choices. (After all, EVERY 4-digit PIN is a terrible password choice.)
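The ATM-style lockout described above is simple to implement. Here is a minimal Python sketch of the idea (the Account class and the three-attempt limit are illustrative, not any particular vendor’s implementation; a real system would also store a salted password hash, not the plaintext):

```python
# Minimal sketch of ATM-style account lockout: a small, fixed number of
# attempts before the account locks, regardless of password strength.
MAX_ATTEMPTS = 3  # the ATM gives you three guesses, then eats the card

class Account:
    def __init__(self, password: str):
        self._password = password  # real systems store a salted hash instead
        self.failed = 0
        self.locked = False

    def try_login(self, attempt: str) -> bool:
        """Return True on success; lock the account after too many failures."""
        if self.locked:
            return False
        if attempt == self._password:
            self.failed = 0
            return True
        self.failed += 1
        if self.failed >= MAX_ATTEMPTS:
            self.locked = True  # "eat the card"
        return False

acct = Account("1234")
for guess in ("0000", "1111", "2222"):
    acct.try_login(guess)
print(acct.locked)             # True: three wrong guesses and you're out
print(acct.try_login("1234"))  # False: even the correct PIN won't work now
```

Note that the attacker’s odds barely depend on the PIN’s strength: with three guesses against 10,000 combinations, the lockout does the heavy lifting.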

In the interests of defense in depth, businesses may still want to ensure that users are making strong passwords, or at least avoiding weak ones. Here, the thinking has changed in the last decade, and that change is enshrined in the National Institute of Standards and Technology (NIST) Digital Identity Guidelines.

Forcing people to create passwords to a formula (at least one uppercase letter, at least one special character, and so on) is out. And so are periodic password resets. Both are far more effective at annoying users than they are at improving security.

Instead, NIST says, it’s more effective to simply stop users choosing known bad passwords, such as passwords that have appeared in breaches or that are based on dictionary words.
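The NIST approach boils down to a blocklist lookup at password-creation time. A minimal sketch, assuming a toy blocklist (a real deployment would screen candidates against a large corpus of breached passwords, not this illustrative handful):

```python
# NIST-style password screening: reject candidates that appear on a
# blocklist of known-breached or dictionary passwords.
# The blocklist contents here are illustrative, not a real breach corpus.
BLOCKLIST = {"password", "123456", "qwerty", "letmein", "iloveyou"}

def is_acceptable(candidate: str, min_length: int = 8) -> bool:
    """Apply NIST 800-63B-style checks: a length floor plus a blocklist lookup."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in BLOCKLIST

print(is_acceptable("letmein"))                       # False: on the blocklist
print(is_acceptable("correct horse battery staple"))  # True: long and not known-bad
```

The point is that the check targets what attackers actually try (known-bad passwords), rather than forcing users through composition rules that mostly produce "Password1!".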

If you are going to insist on strong passwords, please make a password manager part of the standard software suite on all your organization’s machines, and make sure employees actually know how to use it. Many users simply don’t trust password managers, and unless you’ve sat with somebody using one for the first time, you may not appreciate how difficult it can be for people to make sense of them.

The measures I’ve suggested in this article are not interchangeable or equally effective: You should start at the top and work down. If you do that, you can improve password security, remove the need for toothless edicts, and perhaps we can finally get rid of these annual pep talks.


Malwarebytes removes all remnants of ransomware and prevents you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

TRY NOW

The one and only password tip you need

OK, it’s time for me to keep a promise.

Back in October 2022, I wrote an article called Why (almost) everything we told you about passwords was wrong. The article summarizes how a lot of what you’ve been told about passwords over the years was either wrong (change your passwords as often as your underwear), misguided (choose long, complicated passwords), or counterproductive (don’t reuse passwords).

Most damningly of all, the vast effort involved in dispensing this advice over decades has generated little discernible improvement in people’s password choices. If it hasn’t quite been a wasted effort, it has certainly represented a galactically inefficient use of resources.

We know that this advice isn’t what it’s cracked up to be thanks to intrepid researchers, such as the folks at Microsoft Research, who made it their business to discover what actually makes a difference to password security in the real world, and what doesn’t.

If you want the full, three-course meal version of why all the password advice you’ve been told stacks up to much less than the sum of its parts you can read the original article. Here’s the snack version:

How strong, long, and complicated your password is almost never matters in the real world. The most common type of password attack is credential stuffing, which uses passwords stolen in data breaches. It works because it’s so common for people to reuse the same password in two places and it is completely unaffected by password strength. The next most common attack is password spraying, where criminals use short lists of very simple passwords on as many computers as possible. In both situations a laughably simple but unique password is good enough to defeat the attack.

There are rare types of attack—offline password guessing—where a strong password might help, but the trade-off is that strong passwords are far harder for people to remember, which leads them to use the same password for everything, which makes them much more vulnerable to credential stuffing. Notebooks are a really good, simple solution to the password reuse problem, but for years people were ridiculed for using them. Password managers are also a good solution but they are much harder to use than notebooks and a majority of people don’t use them, and don’t trust them, despite years of positive press and advocacy.

OK, back to the promise I mentioned.

As somebody who has done his fair share of dispensing this kind of advice, I ended my Why (almost) everything we told you about passwords was wrong article with a mea culpa in the form of a promise. Never again would I dish out laundry lists of things you should do to your password. I would instead focus my energy on getting you to do one thing that really can transform your password security, which is using two-factor authentication (2FA):

So, from now on, my password advice is this: If you have time and energy to spare, find somewhere you’re not using 2FA and set it up. If you do I promise never to nag you about how weak your passwords are or how often you reuse them ever again.

Well, today is World Password Day, and it’s time to make good on that promise. I was asked to write a list of password tips, so here they are:

  • Set up 2FA somewhere.

To explain why I’m all-in for 2FA I can’t do any better than quote Microsoft’s Alex Weinert from his 2019 article, Your Pa$$word doesn’t matter. (He calls it MFA but he means the same thing; I’ll explain why below.)

Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.

Yes, he wrote 99.9%, and he wasn’t exaggerating. 2FA defeats credential stuffing, password spraying, AND password reuse, AND a bunch of other attacks.

Even if you don’t know what 2FA is, you’ve probably used it. If you’ve ever typed in a code from an email, text message or an app alongside your password you’ve used 2FA.

In the real world, 2FA just means “do two different things to prove it’s you when you log in”. One of those things is almost always typing a password. The other thing is often typing a six-digit code you get from your phone, but it might also be responding to a notification on your phone or plugging in a hardware key (a small plastic dongle that plugs into a USB port and does some fancy cryptographic proving-its-you behind the scenes).

2FA is very widely supported and any popular website or app you use is likely to offer it. In an ideal world those sites and apps would take responsibility for your security and just make 2FA a mandatory part of their account setup process. Unfortunately, we don’t live in an ideal world, and the tech giants that know better than anyone else how much 2FA can protect you have left it for you to decide if you need it.

To make your life a little harder still, they also give it different names. You’ve already met MFA, which means multi-factor authentication, while Google, WhatsApp, Dropbox, Microsoft, and others brand their version of 2FA with a slightly altered name: two-step verification (2SV).

If you have a choice, the best form of 2FA is a password and hardware key, but you’ll need to buy a hardware key. They are worth the small investment and not nearly as intimidating as they can seem.

If you aren’t ready for that, the next best form of 2FA uses an app that prompts you with a notification on your phone. Next best after that is 2FA that uses a code from an app on your phone, and the least good version of 2FA uses a code sent over SMS.

However, don’t let anyone tell you any form of 2FA is “bad.” It’s all relative. Adopt any one of them and you can safely ignore the rest of the password advice you were probably ignoring already.

To help you get started, here are links to the 2FA setup instructions for the five most visited websites:



How small businesses can secure employees’ mobile devices

Fact: 77% of organizations are convinced they’re capable of protecting their mobile devices—smartphones, tablets, and laptops (including Chromebooks)—from cybersecurity threats.

Another fact: A third of those organizations aren’t protecting their mobile devices at all.

And that matters—in its Mobile Security Index 2022 report, Verizon reported that 45 percent of businesses suffered a major mobile-related compromise with lasting repercussions.

The increase in companies’ reliance on mobile devices that began with the pandemic persists today. Many employees are working on their mobile devices more, and it follows that more mobile devices (53 percent) have access to sensitive data than before the pandemic. We recognize how critical such devices are to our work, and yet, confident or not, we continue to treat their defense against cyberattacks as an afterthought.

So what can small business owners do to quickly turn things around?

Start by recognizing that the mobile space has become a battleground, so protecting it is a must. And then, develop a mobile security policy that touches on essentials for securing employee mobile devices.

A cybersecurity policy is essentially a high-level plan detailing how a company will protect its physical and digital assets. In the context of mobile devices, that means protecting the sensitive data they store and have access to, and stopping non-employees from physically accessing such devices.

The policy doesn’t have to be complicated or perfect, but it must be solid and effective. The document must evolve with changing technologies and attack trends to prevent it from becoming outdated. For a policy to be effective, it should clearly and explicitly state responsibilities for the organization and its employees.

Here’s a list of some organizational duties you might want to include in your mobile security policy, to help you get started.

  • Use a mobile device management (MDM) platform. IT teams use MDM to provision, deploy, and manage mobile devices. It allows an administrator to perform remote tasks, such as troubleshooting and wiping devices after a theft. More importantly, an MDM can be used to enforce strong password practices and deploy software updates.
  • Use a mobile endpoint security solution to provide real-time protection to employee devices.
  • Ensure employees use a VPN to connect to the company network. Your small business may have adopted a flexible working scheme that allows employees to work anywhere. In that case, it’s vital to encrypt data in transit, so you don’t have to worry about your employees using public Wi-Fi.
  • Use FIDO2 two-factor authentication (2FA). FIDO stands for Fast Identity Online, a globally-recognized standard for passwordless authentication. Employees using mobile devices to read their emails are particularly vulnerable to phishing. Unlike other forms of 2FA, FIDO2 devices can’t be phished.
  • Set clear Bring Your Own Device (BYOD) guidelines, explaining whether employees are allowed to use their personal devices for work and what their obligations are if they do.
  • Educate employees on best practices for mobile security. Employees are your first line of defense—arm them with the tools and know-how they need to fulfill their role.

By creating a strong mobile security policy, a small business is better placed to prevent cyberattacks, and better prepared should one occur.

Good luck!



AI-powered content farms start clogging search results with ad-stuffed spam

A recent study by NewsGuard, trackers of online misinformation, makes some alarming discoveries about the role of artificial intelligence (AI) in content farm generation. If you’ve previously held your nose at the content mill grind, it’s probably going to become a lot more unpleasant.

Content farms are the pinnacle of search engine optimisation (SEO) shenanigans. Take a large collection of likely underpaid writers, set up a bunch of similar looking sites, and then plaster them with adverts. The sites are covered with articles expressly designed to float up to the top of search rankings, and then generate a fortune in ad clicks.

If you’ve ever searched for something and walked into a site which spends about 4 paragraphs slowly describing your question back to you before (maybe) answering it, congratulations. I share your pain.

The worst part about this kind of content production is that in recent years many otherwise legitimate sites now write like this too. The pattern to look out for is as follows:

  • A paragraph or two describing your problem back to you as if you’re ten years old.
  • A paragraph break with a large advert.
  • Another 3 paragraphs which may or may not answer your question.

On top of that, sites don’t just populate with reasonable, genuine questions. They now fill up with ludicrous questions, or answer the questions badly. Not only is garbage like this unhelpful itself, it also keeps you away from the good stuff.

This is the current state of play before we throw AI-generated content into the mix. What did NewsGuard find?

NewsGuard identified 49 news and information sites that appear to be “almost entirely written by artificial intelligence software”. There’s a broad spread of languages used on these sites, ranging from Chinese and Tagalog to English and French. This helps ensure the content is seen by as many people as possible, as well as clogging up search engines that little bit more. Some of the key points:

  • Lack of disclosure of ownership / control, making it hard to assess bias or possible political leanings.
  • Topics include entertainment, finance, health, and technology.
  • “Hundreds of articles per day” published on some of the sites.
  • False narratives are pushed by some of the sites.
  • High advertising saturation.
  • Generic names like “News Live 79” and “Daily Business Post”.

As for the actual written content itself, it is said to be filled with “bland language and repetitive phrases”. This is one of the key indicators of AI-generated content. Additionally, many of the sites began publishing just as AI content creation tools like ChatGPT became available to the public. Quite a coincidence!

Other strong indicators include:

  • Phrases in articles which are often used by AI in response to prompts. One example given is “I am not capable of producing 1500 words… However, I can provide you with a summary of the article”.
  • No bylines given for authors. Reverse image searches for a handful of supposed authors reveal that images have been scraped from other sources.
  • Generic and incomplete About Us or Privacy Policy pages, some of which even link to About Us page generation tools.
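Telltale phrases like these are trivial to scan for. Here is a toy Python sketch of the idea (the phrase list and sample text are illustrative only; real detection is much harder than a substring match):

```python
# Toy scan for stock AI-model responses left behind in published articles.
# The phrase list is illustrative, not an exhaustive or authoritative set.
TELLTALES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "i am not capable of producing",
]

def telltale_phrases(article: str) -> list[str]:
    """Return any telltale phrases found in the article text."""
    text = article.lower()
    return [phrase for phrase in TELLTALES if phrase in text]

sample = "As an AI language model, I cannot complete this prompt."
print(telltale_phrases(sample))  # flags both stock responses in the sample
```

A match like this is only a smoking gun when the phrase appears verbatim in a published article, which is exactly what NewsGuard found.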

If a smoking gun were even required at this point, the dead giveaway would be the inclusion of actual error messages produced by AI text generation tools. One example, from an article published in March of this year, includes the following text:

“As an AI language model”, and “I cannot complete this prompt”.

Despite this, site owners remain cautious about admitting to any use of AI in producing the content. In April of this year, NewsGuard attempted to get some answers from the websites as to who, or what, is creating the content. The results are not encouraging.

Of the 49 sites studied, NewsGuard contacted the 29 sites which included some form of contact details. Two sites confirmed use of AI, 17 did not respond, eight provided invalid contact details, and two didn’t answer the questions provided.

Since the story broke, Google has removed adverts from some pages across the various sites flagged. Ads were removed completely from sites where the search giant found “pervasive violations”. Although two dozen sites were reported to be making use of Google’s ad services, the use of AI-generated content is “not inherently a violation” of ad policies.

Nonetheless, given the content created is likely to be low value and little more than clickbait, it seems likely that this kind of site is not long for Google’s ad world. A number of other ad-based organisations pulled their ads when contacted by Bloomberg. Even so, this is very much a game of whack-a-mole, with the SEO spammers in the driving seat.

It’s very likely we’ll see campaigns like the above dedicated to other unpleasant online activities. What if the spam-filled SEO magnet sites churn out endless content to lure visitors to phishing pages? Or bogus sign-up forms? It’s not a stretch to imagine dozens of sites fired out by AI generators linking to fake downloads and bogus browser extensions.

As many people have noted in the above linked articles, the high speed and low cost of generation here are key to getting bad things online as quickly as possible. When you can register sites in bulk and have AI bots filling all of them with a text firehose, the fear is that advertising networks and abuse departments may not be able to keep up. All this happened in the same week that AI “Godfather” Geoffrey Hinton left Google, warning of the dangers posed by rogues misusing AI.

If you run an advertising division, now is probably a very good time to check if AI-generated content is addressed by your policies and update accordingly. Just don’t run it through an AI first.



Google Authenticator WILL get end-to-end encryption. Eventually.

Following criticism, Google has decided to bring end-to-end encryption (E2EE) to its Google Authenticator cloud backups. The search giant recently introduced a feature that allows users to back up two-factor authentication (2FA) tokens to the cloud, but the lack of encryption caused some commentators to warn people off using it.

Google Authenticator is an authenticator app used to generate access codes, called one-time passwords (OTPs). These OTPs are only valid for a short period and are generated on demand. They serve as an additional form of authentication by proving that you have access to the device generating the OTP. Google Authenticator is one of the most well-known authenticators. Although it’s made by Google it’s not limited to Google’s own services, but can also be used with Facebook, Twitter, Instagram, and many more.
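Under the hood, authenticator apps like this generate codes with the TOTP algorithm (RFC 6238): an HMAC over a 30-second time counter, truncated to six digits. Here is a minimal sketch (the secret shown is the RFC’s published test key, not a real one; real secrets come from the QR code you scan when enrolling, and are exactly what the cloud backup stores):

```python
# Sketch of how authenticator apps derive OTPs (RFC 6238 TOTP):
# HMAC-SHA1 over a 30-second time counter, dynamically truncated to 6 digits.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Return the TOTP code for the given secret at Unix time `at` (default: now)."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 gives 287082
print(totp(b"12345678901234567890", at=59))
```

Because the code depends only on the shared secret and the clock, both your phone and the server can compute it independently, with no network round trip. It also shows why the backed-up secrets are so sensitive: anyone holding them can mint valid codes forever.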

On April 24, 2023, Google announced an update across both iOS and Android, which added the ability to safely backup the secrets used to generate OTPs to your Google Account. This allows users to create a backup which they can use if their device is lost, stolen, or damaged. Since OTPs in Google Authenticator were previously only stored on a single device, a loss of that device locked you out of any service where you used it to log in.

Shortly after the new feature was rolled out, security researchers at Mysk advised against turning it on. They analyzed the network traffic that occurs when the app syncs the secrets, and found that the traffic was not end-to-end encrypted. This means that in the case of a data breach, or if someone obtained access to your Google Account, all of your OTP secrets would be compromised, and the attacker could generate OTPs as if they were you.

The likelihood of someone stealing the secret seeds from Google’s servers is relatively small, but since it is better to be safe than sorry, and one less problem is always welcome, users asked Google to add a passphrase to protect the secrets. This would introduce an extra safeguard that makes them accessible only to their owner.

Google’s primary objection to this method was that it heightens the risk of users getting completely locked out of their own data: if you lost your device and the passphrase, you would lose all access to your accounts.

Google Group Product Manager Christiaan Brand tweeted that end-to-end encryption (E2EE) will be made available for Google Authenticator down the line, but they are rolling out this feature carefully.

According to Google, the option to use the app offline will remain an alternative for those who prefer to manage their backup strategy themselves. But, if you want to try the new Authenticator with Google Account synchronization, simply update the app and follow the prompts.



Google takes CryptBot to the woodshed

Google is in the midst of a legal campaign designed to take down the creators of a very persistent piece of malware called CryptBot. This malware, which Google claims compromised roughly 670k computers, set about infecting users of the Chrome browser. Unfortunately for the malware campaign operators, Google’s not impressed.

This legal campaign focuses on shutting down domains associated with the stealer. The lawsuit unsealed this week reveals Google’s line of approach for tackling CryptBot’s alleged primary distributors, located in Pakistan.

It’s easy to see what piqued Google’s interest in this infection campaign. A big part of the CryptBot tactics on display involved offering up cracked or modified versions of popular Google products. The products were secretly infected with CryptBot, which would then go on to try and plunder credentials from the infected systems. From the complaint document:

(The) defendants’ criminal scheme is perpetrated via a pay-per-install (“PPI”) network known as “360installer,” which fosters the creation of websites that offer illegally modified software (“Cracked Software Sites”).

These websites offer software infected with CryptBot malware, such as maliciously modified versions of Google Chrome and Google Earth Pro, and also cracked third party software. The Malware Distribution Enterprise operated by Defendants in this case is one of the primary means of spreading the CryptBot malware to new victims.

Google highlights that CryptBot targets users of Chrome. When it notices Chrome is installed on a PC, it attempts to “locate, collect, and extract user credentials saved to Chrome”. This can include logins, authentication methods, private data, and several types of payment information, such as card details and cryptocurrencies.

This attempt at a takedown by Google isn’t just focused on the code side of things. There’s also a trademark component, and the search giant is none too happy about their familiar product icons being used for malware-related purposes. From the blogpost:

The legal complaint is based on a variety of claims, including computer fraud and abuse and trademark infringement. To hamper the spread of CryptBot, the court has granted a temporary restraining order to bolster our ongoing technical disruption efforts against the distributors and their infrastructure. The court order allows us to take down current and future domains that are tied to the distribution of CryptBot. This will slow new infections from occurring and decelerate the growth of CryptBot.

As The Register notes, this goes beyond the usual restraining order approach, where URL registries falling under the court’s jurisdiction must shut down rogue domains. Hardware and virtual machines can be turned off, network providers can kill server connections powering CryptBot, and steps can be taken to keep the infrastructure offline permanently.

In other words, the CryptBot folks are in a lot of trouble. The complaint states that this action is being brought under the Racketeer Influenced and Corrupt Organizations (RICO) Act, the Computer Fraud and Abuse Act (CFAA), the Lanham Act, and New York state common law. RICO alone, intended to deal with the dismantling of organised crime, should be enough to give the ringleaders pause for thought. Everything else is just a bonus.



Oracle WebLogic Server vulnerability added to CISA list as “known to be exploited”

On May 1, 2023 the Cybersecurity and Infrastructure Security Agency (CISA) added three new vulnerabilities to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation.

This means that Federal Civilian Executive Branch (FCEB) agencies are obliged to remediate the vulnerabilities by May 22, 2023. For the rest of us it means “pay attention”: anyone else running a vulnerable system should remediate as fast as possible too.

The Common Vulnerabilities and Exposures (CVE) database lists publicly disclosed computer security flaws. The CVEs added by CISA were:

  • CVE-2023-1389 is a vulnerability in TP-Link Archer AX21 (AX1800) firmware versions before 1.1.4 Build 20230219. Affected versions contain a command injection vulnerability in the country form of the /cgi-bin/luci;stok=/locale endpoint on the web management interface. Specifically, the country parameter of the write operation was not sanitized before being used in a call to popen(), allowing an unauthenticated attacker to inject commands, which would be run as root, with a simple POST request.
  • CVE-2021-45046 is an Apache Log4j2 deserialization of untrusted data vulnerability, disclosed back in 2021, that still works on enough unpatched servers to be listed.
  • CVE-2023-21839 affects Oracle WebLogic Server. It can lead to an unauthenticated attacker with network access gaining unauthorized access to “critical data or complete access to all Oracle WebLogic Server accessible data.”
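The TP-Link bug in the first bullet is a textbook command injection: untrusted input spliced into a shell command string. Here is a hedged Python sketch of the bug class and its fix (the function names and the uci-style command are illustrative, not the router’s actual firmware code, which is written in C):

```python
# Illustration of the bug class behind CVE-2023-1389: building a shell
# command from an unsanitized request parameter.
import shlex

def build_cmd_unsafe(country: str) -> str:
    # Vulnerable pattern: attacker-controlled input concatenated straight
    # into a shell string. "US; reboot" smuggles in a second command.
    return f"uci set wireless.region={country}"

def build_cmd_safe(country: str) -> str:
    # Safer pattern: validate the input against the expected shape,
    # and shell-quote it as defense in depth.
    if not (country.isalpha() and len(country) == 2):
        raise ValueError("invalid country code")
    return f"uci set wireless.region={shlex.quote(country)}"

print(build_cmd_unsafe("US; reboot"))  # the injected command rides along
print(build_cmd_safe("US"))            # validated input builds a clean command
```

In the real flaw the string reached popen() running as root, so the injected command executed with full privileges; validation before the shell boundary is the fix.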

We would like to zoom in on that last vulnerability for a few reasons.

  • First of all, because Oracle WebLogic is a very widespread Java application server and has always been a popular entry point into networks for cybercriminals.
  • The vulnerability is easily exploitable, even for copycats, since there are proof-of-concepts (PoCs) available and exploits are incorporated in pen-testing tools.
  • The scope of the vulnerability: there is a real risk that a remote, unauthenticated attacker can fully compromise the server to steal confidential information, install ransomware, and pivot to the rest of the internal network.

Oracle WebLogic Suite is an application server for building and deploying enterprise Java EE applications which is fully supported on Kubernetes. That makes it easy to use on-premises or in the cloud. The companies using Oracle WebLogic are most often found in the United States and in the Information Technology and Services industry.

In Oracle’s January security advisory you will notice that five researchers are credited with finding and reporting CVE-2023-21839. This may be because Oracle issues patches on a quarterly cycle, where many other vendors publish updates monthly. Researchers therefore have more time to find new vulnerabilities, but they also have to keep quiet about them for longer. Nevertheless, five independent discoveries could indicate that this vulnerability was not hard to find.

What’s even worse is that the vulnerability is easy to exploit. The published exploits target the Listen Port for the Administration Server. The protocol used with this port is T3—Oracle’s proprietary Remote Method Invocation (RMI) protocol, which transfers information between WebLogic servers and other Java programs. An unauthenticated attacker with remote access can send a crafted request to a vulnerable WebLogic server and upload a file via an LDAP server, essentially allowing the attacker to open a reverse shell on the target. A reverse shell or “connect-back” shell opens communications with the attacker and allows them to execute commands, which enables them to take control of the system.
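If you want a quick way to check whether a host is exposing T3 at all, the protocol answers a plain-text handshake with a HELO line that also leaks the server version. A hedged sketch—the handshake and reply format follow publicly documented probes (for example, nmap's weblogic-t3-info script), so verify against your own environment before relying on it:

```python
import re
import socket

# Plain-text T3 handshake as documented in public WebLogic probes.
T3_HANDSHAKE = b"t3 12.2.1\nAS:255\nHL:19\n\n"

def parse_t3_helo(reply: bytes):
    """Extract the WebLogic version from a T3 HELO reply, or return None
    if the service did not answer like a T3 listener."""
    m = re.match(rb"HELO:(\d+(?:\.\d+)+)", reply)
    return m.group(1).decode() if m else None

def probe_t3(host: str, port: int = 7001, timeout: float = 3.0):
    """Send the T3 handshake to a host and parse whatever comes back."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(T3_HANDSHAKE)
        return parse_t3_helo(s.recv(1024))
```

A non-None result from a probe against your internet-facing address space means T3 is reachable from outside—exactly the exposure Oracle's hardening guidance (below) tells you to close off.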

Update now

Affected versions of Oracle WebLogic Server are 12.2.1.3.0, 12.2.1.4.0, and 14.1.1.0.0. A patch for this vulnerability is available on the Oracle support site for those that have an Oracle account.

Oracle always strongly recommends that you do not expose non-HTTPS traffic (T3/T3s/LDAP/IIOP/IIOPs) outside of the external firewall. You can control this access using a combination of network channels and firewalls.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using Malwarebytes Vulnerability and Patch Management.

How to keep your ChatGPT conversations out of its training data

Last week, OpenAI announced it had given ChatGPT users the option to turn off their chat history. ChatGPT is a “generative AI”, a machine learning algorithm that can understand language and generate written responses. Users can interact with it by asking questions, and the conversations users have with it are in turn stored by OpenAI so they can be used to train its machine learning models. This new control lets users decide whether their conversations may be used to train OpenAI’s models.

“Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” the company said in the announcement. “When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”

Prior incidents involving ChatGPT may have prompted these changes. Early this month, reports revealed Samsung employees had erroneously shared confidential company information with ChatGPT. Before this, OpenAI took ChatGPT offline after it exposed some chat histories to others using the tool at the same time. This incident earned the attention of a data protection agency in Italy, which then ordered a temporary ban for the AI, pending an investigation.

Along with its announcement, OpenAI also revealed a ChatGPT Business subscription that will keep users’ input out of its training data. “ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default,” the company said.

How to opt out of OpenAI’s training data

Log in to ChatGPT and click the three dots next to your name to open a menu.

ChatGPT hamburger menu button

Choose Settings from the menu.

ChatGPT menu

The Settings menu will appear in the middle of the screen. Click Show next to Data Controls to expand the window, and then toggle the switch next to Chat History & Training to the off position to stop your data being used to train ChatGPT.

Users can also export their chat history for local storage by clicking the Export data text in the expanded Settings window. Users will receive an email with a button link to the file containing all of their conversations.

ChatGPT settings menu

Note that disabling Chat History & Training also turns off ChatGPT’s conversation history feature. Chats created after disabling the option won’t appear in the history sidebar, but cached conversations found in the sidebar of the page remain.

ChatGPT chat history is off



Upcoming webinar: Is EDR or MDR better for your business?

Don’t miss our upcoming webinar on EDR vs. MDR!

In the webinar, Marcin Kleczynski, CEO and co-founder of Malwarebytes, and guest speaker Joseph Blankenship, Vice President and research director at Forrester, discuss topics such as:

  • The difference between EDR and MDR, how EDR solutions can be challenging for businesses without dedicated security teams, and why building an in-house SOC can be expensive and difficult.
  • The limitations of Endpoint Protection and EDR, specifically when it comes to advanced threats like ransomware that use Living off the Land (LOTL) attacks and fileless malware.
  • How MDR providers work with clients to understand their security technology stack, make recommendations, and agree on response actions to take.
  • Whether EDR or MDR is better for your business, based on the resources you have available and the level of security you require.

Want to learn more about EDR and MDR and which is right for your business? Be sure to catch the full webinar on Wednesday, May 10, 2023 at 10 am PT / 1 pm ET and get valuable insights from industry experts on how to improve your security operations and protect against ransomware and fileless malware.

Register now!

Read also:

How to choose an MDR vendor: 6 questions to ask

Is an outsourced SOC worth it? Looking at the ROI of MDR

Cyber threat hunting for SMBs: How MDR can help

Is it OK to train an AI on your images, without permission?

Website owners are once again at war with tools designed to scrape content from their sites.  An AI scraper called img2dataset is scouring the Internet for pictures that can be used to train image-generating AI tools.

These generators are increasingly popular text-to-image services, where you enter a prompt (“A superhero in the ocean, in the style of Van Gogh”) and they produce a matching visual. Since the system’s “understanding” of images is a direct result of what it was trained on, there is an argument that what it produces consists of bits and pieces of all that training data. There may well be legal issues to consider, too. This is a major point of contention for artists and creators of online content generally. Visual artists don’t want their work being sucked up by AI tools (that make someone else money) without permission.

Unfortunately for the French creator of img2dataset, website owners are very much dissatisfied with his approach to harvesting images.

The free program “turns large sets of image URLs into an image dataset”. It’s claimed the tool can “download, resize, and package 100 million URLs in 20 hours on one machine”. That’s a lot of URLs.

What’s aggravating site owners is that the tool is ignoring assumed good netiquette rules. Way back in 1994, “robots.txt” was created as a polite way to let crawlers know which bits of a website they were allowed to pay a visit to. Search engines could be told “Yes please”. Other kinds of crawlers could be told “No thank you”. Many rogues would simply ignore a site’s robots.txt file, and end up with a bad reputation as a result.
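For crawlers that do want to behave, honoring robots.txt takes only a few lines of standard-library Python. The rules below are illustrative—a site welcoming search engines while asking the image scraper to stay away:

```python
from urllib import robotparser

# An illustrative robots.txt: search engines allowed, img2dataset not.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow:

User-agent: img2dataset
Disallow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT)

print(rp.can_fetch("Googlebot", "https://example.com/photos/cat.jpg"))    # True
print(rp.can_fetch("img2dataset", "https://example.com/photos/cat.jpg"))  # False
```

A polite crawler calls `can_fetch()` with its own user agent before every request; ignoring the answer is exactly the behavior that earns a tool a bad reputation.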

This is one of the main complaints about img2dataset. Website owners contend that it isn’t feasible to tell every tool in existence that they wish to opt out; rather, the tool should be opt-in. This is a reasonable concern, especially as site owners would otherwise be responsible for adding ever more opt-out entries to their sites on an ongoing basis.

One site owner had this to say, in a mail sent to Motherboard:

I had to pay to scale up my server, pay extra for export traffic, and spent part of my weekend blocking the abuse caused by this specific bot.

Elsewhere, you can see a deluge of complaints from site owners on the tool’s “Issues” discussion page. Issues of consent, custom headers, even talk of the creator being sued: It’s chaos over there.

If you’re a site owner who isn’t keen on img2dataset paying a visit, there are a number of ways you can tell it to keep a respectful distance. From the opt-out directives section:

Websites can use these HTTP headers: “X-Robots-Tag: noai”, “X-Robots-Tag: noindex”, “X-Robots-Tag: noimageai”, and “X-Robots-Tag: noimageindex”. By default, img2dataset will ignore images with such headers.
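On the scraper side, respecting these directives just means inspecting response headers before keeping an image. A minimal sketch—the header names come from the passage above, while the case-insensitive lookup and comma-separated value handling are assumptions about how a well-behaved client would generalize:

```python
# Directive values from the opt-out documentation quoted above.
NOAI_DIRECTIVES = {"noai", "noindex", "noimageai", "noimageindex"}

def may_keep_image(headers: dict) -> bool:
    """Return False if any X-Robots-Tag directive opts the image out.
    Header names are matched case-insensitively, and a value like
    'noai, noimageai' counts as multiple directives."""
    for name, value in headers.items():
        if name.lower() == "x-robots-tag":
            directives = {d.strip().lower() for d in value.split(",")}
            if directives & NOAI_DIRECTIVES:
                return False
    return True

print(may_keep_image({"Content-Type": "image/jpeg"}))   # True
print(may_keep_image({"X-Robots-Tag": "noai"}))         # False
```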

However, the FAQ also says this for users of the img2dataset tool:

To disable this behaviour and download all images, you may pass “--disallowed_header_directives ‘[]’”

This does exactly what it suggests, ignoring the “please leave me alone” warning and grabbing all available images. It’s no wonder, then, that website owners are currently so hot and bothered by this latest slice of website scraping action. With little apparent interest in robots.txt from the creator, and workarounds to ensure users can grab whatever they like, this is sure to rumble on.

