IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

LinkedIn bots and spear phishers target job seekers

Microsoft’s social network for professionals, LinkedIn, is an important platform for job recruiters and seekers alike. It’s also a place where criminals go to find new potential victims.

Like other social media platforms, LinkedIn is no stranger to bots attracted to special keywords and hashtags. Think “I was laid off”, “I’m #opentowork” and similar phrases that can wake up a swarm of bots hungry to scam someone new.

Bots are problematic as they not only create a poor user experience but also present real security risks. Perhaps even more insidious are customized phishing attempts, where fraudulent LinkedIn accounts directly reach out to their victim via the premium InMail feature. In this case, it’s all about harvesting personal information from targets of interest.

In this article, we review recent observations and provide tips for job seekers and users of the platform in general.

Hungry bots

Online bots are so common that they transcend every possible industry: advertising, music and concerts, social media, games, and more. There are even companies whose entire business model is to disrupt and contain bots. The impact of bots depends on which point of view you are taking, as it can range from simple nuisance, to opinion swaying, costly fraud, and a lot more in between.

We recently observed fake LinkedIn accounts that prey on those just laid off. Within minutes of a post, dozens of accounts start replying with links or requests to be added as a connection.

image fcdb62

The use of certain hashtags was already known to attract bots, and #opentowork is no different. Ironically, even a recruiter was previously swamped with similar messages and questioned whether they should refrain from ever using the hashtag again:

image 9344be

The battle between HR departments and the bots was featured in a Brian Krebs article a couple of years ago. While the accounts we saw in the most recent campaign were not labeled as recruiters directly, they often pointed to other profiles that were. In the majority of cases, scammers used the names and pictures of real people to create new accounts.

image f511d8

It appears their primary goal may be to gain connections by pretending to help a job seeker. This may increase the supposed authenticity of their profile and make it harder to shut them down.

LinkedIn did take action sometime after we witnessed the original spam wave. Many of the accounts indeed disappeared and comments were removed. It’s unclear whether this was a result of user reports, LinkedIn’s own algorithms, or a combination of both.

Fine-tuning anti-fraud algorithms requires constant calibration and isn’t without casualties. Some content creators have been banned due to false positives, eroding their trust in the platform despite the dedication they put into it.

You’ve got InMail

While bots are annoying, they are usually so predictable and noisy that they can be spotted from miles away, especially when they duplicate their own comments on the same post. More dangerous are personalized requests that come directly into a user’s inbox.

It’s the same fake-recruiter idea, but the profile looks more credible and the scammers are using paid accounts. In fact, the ability to send a message to a user who’s not in your circle of contacts is one of LinkedIn’s premium features, called InMail.

image 37e768

In the image above, an alleged Amazon recruiter going by the name “Kay Poppe” sent a direct message about a unique job opportunity at Amazon Web Services. The so-called recruiter’s profile picture looks to be AI-generated, and the name Kay Poppe vaguely reminds us of “K-pop”, the Korean pop music phenomenon. Perhaps this is a bit of a stretch, but we couldn’t help but think of North Korea’s relentless phishing attempts.

This was not a standard, copy-paste message but a carefully crafted one based on the victim’s job profile. The shortened link they used referenced the victim’s current position and was the hook to get them to visit a fake LinkedIn page showing a number of documents related to that role. None of the links to the documents actually load what they claim to; instead they are meant to be a segue to a page hosting a phishing kit.

image 481f0b

In this particular instance, it was the Rockstar2FA phishing-as-a-service toolkit, used to harvest Google credentials. As more and more people use two-factor authentication, criminals have come up with their own methods to bypass 2FA. While it is recommended to avoid SMS-based verification and instead use a one-time password (OTP) app, users can still be socially engineered into entering the temporary code into a phishing page.
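For context on why a relayed code works for the attacker: a time-based one-time password (TOTP, RFC 6238) is computed from a shared secret and the current time window, so any code the victim types into a phishing page remains valid on the real site for the rest of that window. A minimal standard-library sketch of the computation:

```python
import hashlib
import hmac
import struct

def totp(key: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP sketch; a real authenticator app does
    essentially this with the secret it shares with the service."""
    counter = timestamp // step                        # 30-second time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code proves only possession of the secret during the current window, a phishing kit that relays it to the real site within those ~30 seconds passes verification. That is exactly the weakness such toolkits exploit.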

image 59e9b3

Stealing a Google account is usually only the first step in a longer chain leading to a full compromise. Many people use their Google email address as a recovery address for a number of other online accounts. This can allow a criminal to reset as many passwords as they can get their hands on before the victim even realizes what’s happened. It’s also not unusual to get locked out of your account and then struggle to regain control of it.

Fish in a larger pond

Scammers are notorious for targeting the vulnerable, and one could say that after losing a job you probably feel this way. All too eager to regain employment, you may jump at the first opportunity and engage in a conversation that could end badly.

Many of the bots spamming via comments tie back to some kind of fraud such as the advance-fee scam where you need to pay an up-front fee in order to receive goods or services. Some job offers are also too good to be true, and you could unknowingly participate in illegal activities by helping to funnel and launder money.

The more targeted phishing attempts are dangerous not only for the individual in question but also for the company they work for. Even if you are not actively looking for a new job, compromising you could have further consequences, such as giving attackers an entry point into an entire organization.

Whether you are looking for a job or already have one, you should expect to get contacted by some unknown third-party at some point. Treat every such inquiry with suspicion and caution. Remember that on the internet, not everyone is who they pretend to be.

Also, consider passkeys, a newer form of authentication that was specifically designed to move away from passwords and to be less prone to phishing attacks. They rely on a public-private key exchange between a device and the service’s login page, removing the need to enter passwords or codes.
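The property that makes passkeys phishing-resistant is that the response is cryptographically bound to the site’s origin and a fresh server challenge. The toy model below illustrates only that property: real passkeys use asymmetric WebAuthn signatures, and HMAC stands in here solely because Python’s standard library has no asymmetric signing. All function names are illustrative, not a real API.

```python
import hashlib
import hmac
import secrets

def sign(credential_key: bytes, origin: str, challenge: bytes) -> bytes:
    # The authenticator binds its response to both the site origin and the
    # server's one-time challenge, so it cannot be replayed elsewhere.
    return hmac.new(credential_key, origin.encode() + challenge,
                    hashlib.sha256).digest()

def server_verify(credential_key: bytes, origin: str,
                  challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(sign(credential_key, origin, challenge), response)

key = secrets.token_bytes(32)          # credential created at registration
challenge = secrets.token_bytes(16)    # fresh challenge per login attempt
good = sign(key, "https://accounts.example.com", challenge)
# A lookalike phishing origin produces a response that fails verification:
phished = sign(key, "https://accounts.example-login.com", challenge)
```

Unlike an OTP, there is no human-readable code for a victim to retype into the wrong page; a response produced for the wrong origin simply never verifies.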

If you ever fall victim to a scam, time is of the essence. Immediately:

  • be on the lookout for unusual account changes
  • proactively do a full password sweep
  • reach out to your bank and credit card company
  • inform your contacts who may receive fraudulent messages coming from you

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Upload a video selfie to get your Facebook or Instagram account back

Meta, the company behind Facebook and Instagram, says it’s testing new ways to use facial recognition—both to combat scams and to help restore access to compromised accounts.

The social media giant is testing the use of video selfies and facial recognition to help users get their hijacked accounts back. Social media accounts are often lost when users forget their password, switch devices, or when they inadvertently or even willingly give their credentials to a scammer.

Another reason for Meta to use facial recognition is what it calls “celeb-bait ads.” Scammers often try to use images of public figures to trick people into engaging with ads that lead to fraudulent websites.

Since it’s trivial to set up an account that looks like a celebrity, scammers use this to attract visitors for various reasons, ranging from like-farming (a method to raise the popularity of a site or domain) to imposter scams, where accounts that seem to belong to celebrities reach out to you in order to defraud you.

5 Jon Hamm Facebook accounts with a different selfie
Several accounts that seem to belong to the same actor

Meta’s existing ad review system uses machine learning to review the millions of ads run across Meta platforms every day. With a new facial recognition addition to that system, Meta can compare faces in an ad to the public figure’s Facebook and Instagram profile pictures, and block the ad if it’s fake.

According to Meta:

“Early testing with a small group of celebrities and public figures shows promising results in increasing the speed and efficacy with which we can detect and enforce against this type of scam.”

Over the coming weeks, Meta intends to start informing a larger group of celebs who have been used in scam ads that they will be enrolled into the new scheme and allow them to opt out if that’s what they want.

The problem of celeb-bait ads is a big one and I applaud Meta for trying to do something about it. The account recovery by video selfie, however, is something I’m far less fond of.

The idea of using facial recognition on social media is not new. In 2021, Meta shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in their products.

In the newly-announced system, the user can upload a video selfie, and Meta will use facial recognition technology to compare the selfie to the profile pictures on the account they’re trying to access. This is similar to identity verification tools you might already use to unlock your phone or access other apps. 

I do have a few questions though:

  • With the current development of deepfakes, how long will it take for this technology to be used for the exact opposite? Stealing your account by showing the platform a deepfake video of your face.
  • Do I want to provide Meta with even more material that might end up getting used to train its Artificial Intelligence (AI) models? Although Meta claims to delete the facial data after comparison, there are concerns about the collection and temporary storage of biometric information.
  • People have a tendency to post their best pictures and not change them as they grow older. Is a comparison always possible?
  • Is normalizing the use of biometrics for something as trivial as social media really necessary? Right now I only use a video selfie to approve bank transfers of over 1000 Euro (US$ 1075).  

There are likely good reasons why Meta is not implementing this option in the UK or the EU, where it says it needs to “continue conversations with regulators” first. The same is true for Illinois and Texas, likely due to stricter privacy laws in those states.

Surely there are better ways to reclaim a stolen account. What do you think? Let us know in the comments.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

This industry profits from knowing you have cancer, explains Cody Venzke (Lock and Code S05E22)

This week on the Lock and Code podcast

On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer.

This is because of the largely unchecked “data broker” industry.

Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

This is just a glimpse of what is happening to essentially every single adult who uses the Internet today.

So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.

“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Internet Archive attackers email support users: “Your data is now in the hands of some random guy”

Those who hacked the Internet Archive haven’t gone away. Users of the Internet Archive who have submitted helpdesk tickets are reporting replies to the tickets from the hackers themselves.

Internet Archive, best known for its Wayback Machine, is a digital library that allows users to look at website snapshots from the past. It is often used for academic research and data analysis. Earlier in October, the Internet Archive suffered a data breach and DDoS attack.

During that breach the attackers were able to steal a user authentication database containing 31 million records.

While the Wayback Machine is almost fully functional again, in a recent turn of events the attackers have started replying to users who have opened a support ticket with the Internet Archive.

This is one of the replies a user reported:

“It’s dispiriting to see that even after being made aware of the breach 2 weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets.

As demonstrated by this message, this includes a Zendesk token with perms to access 800K+ support tickets sent to info@archive.org since 2018.

Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine—your data is now in the hands of some random guy. If not me, it’d be someone else.

Here’s hoping that they’ll get their shit together now.”

An Application Programming Interface (API) token is like a special pass that allows a computer program or app to access and use services provided by another program or website. It is used as proof that the user or app has permission to access the service.
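To make that concrete, here is a hedged sketch of a token-authenticated API call. The endpoint URL and token are made up, and real services differ in detail (Zendesk’s API has its own authentication scheme), but the principle is the same: whoever presents the token acts with its permissions, which is why rotating a leaked token matters.

```python
from urllib import request

# Illustrative only: the host, path, and token below are invented.
API_TOKEN = "s3cr3t-example-token"  # never hard-code real tokens in source

req = request.Request(
    "https://support.example.com/api/v2/tickets.json",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
# Calling request.urlopen(req) would now act with whatever permissions
# the token grants -- reading or replying to tickets, in Zendesk's case.
```

Rotating the token on the server side invalidates every stolen copy at once, which is exactly the step the attackers taunted the Internet Archive for skipping.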

It appears as if the Internet Archive uses Zendesk to manage its support tickets. Having the Internet Archive’s Zendesk token would certainly explain why the hackers can reply to customer tickets.

Changing a Zendesk API token is not very hard, but it can have unexpected consequences, so it may require some advance planning to minimize disruption. This could be why the Internet Archive hasn’t gotten round to it yet. But leaving unrotated API keys that grant attackers access to important infrastructure like Zendesk would be a serious omission.

On October 18, 2024, Internet Archive founder Brewster Kahle posted an update stating that the stored data of the Internet Archive is safe and that work on resuming services safely is in progress.

“We’re taking a cautious, deliberate approach to rebuild and strengthen our defenses. Our priority is ensuring the Internet Archive comes online stronger and more secure.”

So far, the Internet Archive has not responded to these new developments, and the motivation for the attacks on the Internet Archive remains unclear. We’ll keep you posted.

A week in security (October 14 – October 20)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Unauthorized data access vulnerability in macOS is detailed by Microsoft

The Microsoft Threat Intelligence team disclosed details about a macOS vulnerability, dubbed “HM Surf,” that could allow an attacker to gain access to the user’s data in Safari. The data the attacker could access without users’ consent includes browsed pages, along with the device’s camera, microphone, and location.

The vulnerability, tracked as CVE-2024-44133, was fixed in the September 16 update for Mac Studio (2022 and later), iMac (2019 and later), Mac Pro (2019 and later), Mac Mini (2018 and later), MacBook Air (2020 and later), MacBook Pro (2018 and later), and iMac Pro (2017 and later).

It is important to note that this vulnerability would only impact Mobile Device Management (MDM) managed devices. MDM managed devices are typically subject to centralized management and security policies set by the organization’s IT department.

By exploiting this vulnerability, an attacker could bypass the macOS Transparency, Consent, and Control (TCC) technology and gain unauthorized access to a user’s protected data.

Users may notice Safari’s TCC in action when they browse a website that requires access to the camera or the microphone. They may see a prompt like this one:

Safari TCC prompt
Image courtesy of Microsoft

What Microsoft discovered was that Safari keeps its own separate TCC policy in various local files.

Microsoft then figured out it was possible to modify those sensitive files by swapping the current user’s home directory back and forth. The files are protected by TCC, but by changing the home directory, modifying the files, and then restoring the original home directory, an attacker can make Safari use the modified files.

The exploit only works on Safari because third-party browsers such as Google Chrome, Mozilla Firefox, or Microsoft Edge do not have the same private entitlements as Apple applications. Therefore, those apps can’t bypass the macOS TCC checks.

Microsoft noted that it observed suspicious activity in the wild associated with the Adload adware that might be exploiting this vulnerability. But it could not be entirely sure whether the exact same exploit was used.

“Since we weren’t able to observe the steps taken leading to the activity, we can’t fully determine if the Adload campaign is exploiting the HM surf vulnerability itself. Attackers using a similar method to deploy a prevalent threat raises the importance of having protection against attacks using this technique.”

We encourage macOS users to apply these security updates as soon as possible if they haven’t already.


Malwarebytes for Mac takes out malware, adware, spyware, and other threats before they can infect your machine and ruin your day. It’ll keep you safe online and your Mac running like it should.

23andMe will retain your genetic information, even if you delete the account

Deleting your personal data from 23andMe is proving to be hard.

There are good reasons for people wanting to delete their data from 23andMe: The DNA testing platform has a lot of problems, so let’s start with a recap.

A little over a year ago, cybercriminals put up information belonging to as many as seven million 23andMe customers for sale on criminal forums following a credential stuffing attack against the genomics company.

In December 2023, we learned that the attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users (about 14,000 of its 14 million customers). That alone would not affect many people, but with the breached accounts at their disposal, the attacker used 23andMe’s opt-in DNA Relatives (DNAR) feature—which matches users with their genetic relatives—to access information about millions of other users.

For a subset of these accounts, the stolen data contained health-related information based upon the user’s genetics.

In January 2024, 23andMe had the audacity to lay the blame at the feet of victims themselves in a letter to legal representatives of victims. 23andMe reasoned that the customers whose data was directly accessed re-used their passwords, gave permission to share data with other users on 23andMe’s platform, and that the medical information was non-substantive.

And in September 2024, we found out that the company would pay $30 million to settle a class action lawsuit, as that was all that 23andMe could afford to pay. And that’s only because the expectation was that cyberinsurance would cover $25 million.

As a result, the value of 23andMe plummeted. And last month the company said goodbye to all its board members except for CEO Anne Wojcicki who stood by her plans to take the company private.

This uncertainty about the future of the company and, with that, who will be the future holder of all the customer personal information, has caused a surge of users looking to close their accounts and delete their data.

However, it turns out it’s not as easy as just asking for the data to be removed. You can delete your data from 23andMe, but the company says it will retain some of that data (including genetic information) to comply with its legal obligations, according to its privacy policy.

“23andMe and/or our contracted genotyping laboratory will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations, including the federal Clinical Laboratory Improvement Amendments of 1988 (CLIA), California Business and Professions Code Section 1265 and College of American Pathologists (CAP) accreditation requirements, even if you chose to delete your account. 23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”

In addition, any information you previously provided and consented to be used in 23andMe research projects cannot be removed from ongoing or completed studies, although the company says it will not use it in any future ones.

This is unfortunate, and is yet another reminder about how once you give information away you cannot always get it back. Let’s hope the policy gets changed and customers are allowed to fully delete their data soon.

It’s still worth deleting as much as possible, though. So here’s how to do that.

How to delete (most of) your data from 23andMe

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • It will ask you to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which, if there is any, personal data you’d like to download from the company (onto a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

When you set up your 23andMe account, you had the options to either have the saliva sample that you sent to them securely destroyed or to have it stored for future testing. If you chose to store your sample but now want to delete your 23andMe account, the company says it will destroy the sample for you as part of the account deletion process.

Check your digital footprint

If you want to find out whether your personal data was exposed through the 23andMe breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you used to register at 23andMe) and we’ll send you a free report.



“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them

Millions of people are turning normal pictures into nude images, and it can be done in minutes.

Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no sure way to find out how many of those users are unique, it’s appalling, and highly likely there are many more bots than the ones they found.

The history of nonconsensual intimate image (NCII) abuse—as the use of explicit deepfakes without consent is often called—started near the end of 2017. Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The person who claimed to be responsible for the video went by the username “deepfake,” which gave the technique its name.

Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.

However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.

These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or that render low-quality images.

Besides being disturbing, the use of AI to generate explicit content is costly, there are no guarantees of privacy (as we saw recently when AI Girlfriend was breached), and you can even end up getting infected with malware.

The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, let alone the creation of sexual child abuse material. Italian scientists found explicit nonconsensual deepfakes to be a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.

To combat this type of sexual abuse there have been several initiatives:

  • The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
  • Some platform policies (e.g. Google banned involuntary synthetic pornographic footage from search results).

However, so far these steps have shown no significant impact on the growth of the market for NCIIs.

Keep your children safe

We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.

We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.

  • Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
  • Metadata: Users often forget that the images they upload to social media also contain metadata like, for example, where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn’t intend.
  • Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
  • Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
  • Facial recognition: Although facial recognition is not the hot topic it once was, it still exists. And actions or statements attributed to your images (real or not) may be linked to you.
  • Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
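The metadata point above can be made concrete: EXIF data (location, camera model, timestamps) travels inside the image file itself, in a JPEG APP1 segment. A rough standard-library sketch that merely detects such a segment (a real tool would use a proper image library to read or strip the data):

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte string carries an EXIF (APP1) segment.

    Illustrative stdlib-only sketch: the point is that the metadata is
    embedded in the file you upload, not stored separately.
    """
    if not data.startswith(b"\xff\xd8"):               # JPEG Start Of Image marker
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = int.from_bytes(data[i + 2:i + 4], "big")   # includes the 2 length bytes
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                                # APP1 segment holding EXIF
        i += 2 + length
    return False

# Two hand-built minimal byte sequences (not real photos):
with_exif = b"\xff\xd8" + b"\xff\xe1\x00\x08Exif\x00\x00"
without_exif = b"\xff\xd8" + b"\xff\xdb\x00\x04\x00\x00"
```

Some platforms strip this metadata on upload and some don’t, so it is safer to remove it yourself before sharing a photo.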

If you want to continue using social media platforms that is obviously your choice, but consider the above when uploading pictures of you, your loved ones, or even complete strangers.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Tor Browser and Firefox users should update to fix actively exploited vulnerability

Mozilla has announced a security fix for its Firefox browser which also impacts the closely related Tor Browser.

The new version fixes one critical security vulnerability which is reportedly under active exploitation. To address the flaw, both Mozilla and Tor recommend that users update their browsers to the most current versions available.

Firefox users who have automatic updates enabled should have the new version available as soon as, or shortly after, they open the browser. Once you’re updated, your version number will be 131.0.3 or higher.

Other users can update their browser by following these instructions:

  • Click the menu button (3 horizontal stripes) at the right side of the Firefox toolbar, go to Help, and select About Firefox/Tor Browser. The About Mozilla Firefox/About Tor Browser window will open.
  • Firefox/Tor Browser will check for updates automatically. If an update is available, it will be downloaded.
  • You will be prompted when the download is complete, then click Restart to update Firefox/Tor Browser.

To update the Tor Browser you have to Connect first or it will fail to fetch the update. The latest version of Tor is 13.5.7.

Tor Browser is up to date
Version number should be 13.5.7 or higher

The vulnerability, tracked as CVE-2024-9680, allows attackers to execute malicious code within the browser’s content process, which is the environment where it loads and renders web content.

About the vulnerability, Mozilla said:

“An attacker was able to achieve code execution in the content process by exploiting a use-after-free in Animation timelines. We have had reports of this vulnerability being exploited in the wild.”

Use-after-free (UAF) is a class of vulnerability that results from incorrect handling of dynamic memory during a program’s operation. If a program frees a block of memory but does not clear the pointer to it, the pointer dangles; an attacker who can get the allocator to reuse that block can then make the program operate on attacker-controlled data.

The AnimationTimeline interface of the Web Animations Application Programming Interface (API) represents the timeline of an animation: a source of time values used for synchronization purposes.

Exploitation is said to be relatively easy, requires no user interaction, and can be executed over the network.



AI scammers target Gmail accounts, say they have your death certificate

Several reputable sources are warning about a very sophisticated Artificial Intelligence (AI) supported type of scam that is bound to trick a lot of people into compromising their Gmail account.

The most recent warning comes from Y Combinator CEO Garry Tan, who posted on X that the scammers use AI voices to tell you someone has issued a death certificate for you and is trying to recover your account.

The scammers claim to be checking that you are alive and whether they should disregard a filed death certificate. If you click “Yes, it’s me” on the fake account recovery screen then you’ll likely lose access to your Google account.

In another recent example, Windows expert Sam Mitrovic was targeted by a very similar AI recovery scam.

He explained how the scam unfolds: It starts when he receives a notification of an alleged Gmail account recovery attempt, followed 40 minutes later by a call. The first time Sam misses the call, but when they try the same thing a week later, Sam answers.

In both cases, the notifications come from the US but the calls show “Google Sydney” as the caller. A polite American voice claims there’s been suspicious activity on Sam’s Gmail account and asks whether Sam was travelling.

The caller says there’s been a login attempt from Germany which raises suspicions, given that Sam is at home in the US. The caller says the login has been successful, and that an attacker has had access to Sam’s account for a week and downloaded account data.

Sam remembers the email and missed call from last week, and has the presence of mind to quickly check the caller ID. It looks like a legitimate Google Assistant number.

But knowing how easy it is to spoof a telephone number and pretend to be calling from it, Sam asks for an email to confirm that the caller actually works for Google. After some typing against the typical background noise of a call center, the email arrives.

confirmation mail sent by the attacker to prove they are working for the Google Account Security Team
Image courtesy of Sam Mitrovic

The email looks convincing. It comes from a Google domain, has a case number, claims to be from the Google Account Security Team, and it confirms the phone number and the name the caller is using.

While Sam reviews the email, the caller repeatedly says “Hello”. From the pronunciation and the spacing, Sam realizes it’s an AI voice and hangs up.

Inspecting the email, Sam found that the scammers were using the legitimate Salesforce CRM (customer relationship management) tool, which allows senders to set the “from” address to whatever they like and deliver the mail over Gmail/Google servers.

Other targets who took the scam a little further were asked to verify their 2FA, so it stands to reason that the scammers are looking to take over your Google account, but this time for real.

The need to confirm an account recovery, or a password reset, is a notorious lure used in phishing attacks. The scammers usually try to trick the target into opening a fake login portal and entering their credentials, supposedly to report the request as one they did not initiate.

Prompt asking: Is it you trying to recover your account?

How to stay safe

There are a few signs you can use to identify this type of scam.

The “To” field of the confirmation email Sam received contained an email address cleverly named GoogleMail[@]InternalCaseTracking[.]com, which is not a Google domain.

Google Assistant calls usually come from an automated system, and only in some cases from a manual operator. Google Support, on the other hand, will not contact you unsolicited.

To verify if a security alert is from Google, users can check their Recent security activity:

  • Tap your Gmail profile photo in the top right corner
  • Tap Manage your Google Account
  • Select the Security tab
  • You will see something similar to this:
Review security activity
Here you can find the Review Security Activity button

Any message claiming to be a security alert from Google that is not listed there did not come from Google.

Do not entertain these scammers for longer than necessary. It doesn’t take them long to fingerprint your voice, which would allow their AI to impersonate you.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Cyrus, powered by Malwarebytes.