Inside the North Korean IT job scandal

Overview

A shocking new investigation reveals how North Korean operatives used generative AI tools to pose as remote tech workers, land real jobs, and send stolen salaries (and potentially sensitive data) back to the DPRK. In this episode of Today in Tech, host Keith Shaw speaks with Brett Winterford, VP of Okta Threat Intelligence, about the rise of “wage mole” campaigns, deepfake video interviews, and how companies were tricked into hiring fake personas with stolen identities.

Winterford details how these operations scale through AI-powered applicant tracking exploits, fraudulent laptop farms in the U.S., and how some actors even tested their fake résumés by creating fake companies to gather intel on what works. Learn the red flags to watch for, what industries are now being targeted, and how your organization can protect itself from this growing global threat.

📌 Topics Covered:
* How generative AI enabled DPRK’s remote job scams
* Deepfake interviews & mock ATS testing
* Laptop farms and U.S.-based facilitators
* Red flags recruiters and HR teams should watch for
* Why this isn’t just a U.S. tech problem: it’s global

🔗 Watch now and subscribe to stay ahead of the latest cybersecurity threats.

#cybersecurity #AIscams #NorthKorea #remotework #Deepfakes #HRtech #TodayInTech #Okta #AITools #fraudprevention #ATS #identityverification #infosec

Transcript

Keith Shaw: The North Korean IT job scandal has shaken a lot of companies’ hiring practices to the core, exposing poor processes and revealing serious data security vulnerabilities.

On this episode of Today in Tech, we're going to talk about what happened, what went wrong, and what lessons companies can take from these events. Hi everybody, welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Brett Winterford.

He is the Vice President of Okta Threat Intelligence, which recently published a report on this topic. Welcome to the show, Brett.

Brett Winterford: Thank you so much for having me, Keith.

Keith: And you're calling in from Australia, so this is truly an international episode today.

Brett: I hope you're having a good morning, as I'm having a good evening.

Keith: So, your company recently published findings examining how North Korean scammers are using generative AI tools to apply for, and secure, remote technical roles across the globe. I think you refer to these as “wage mole campaigns,” a very interesting term.

Once employed, these scammers continue to use generative AI to maintain their jobs, act as agents, and raise funds for the North Korean state. U.S. agencies have also identified several outlier cases in which system access granted for employment was later used for espionage or data extortion.

This has been in the news for a while now, but I want to start by asking: Can you give me a quick overview of what the North Koreans were doingβ€”and how they were able to operate at such a global scale?

Brett: If we outline why this scam exists in the first place, it’s because the DPRK (the Democratic People’s Republic of Korea) has very limited opportunities to generate revenue in global markets. They’re heavily sanctioned across the world. But one area where they have a natural advantage is in their technical talent.

They train a lot of highly skilled computer science professionals, but there's not much internal demand for them in North Korea. However, they can use those skills abroad to earn revenue. Sometimes that's through hacking, as we’ve seen.

The regime has been behind some of the most daring cyber heists in history. I’m thinking of the Central Bank of Bangladesh breach about a decade ago, and more recently, the Bybit crypto exchange.

But there’s also a subset of individuals who are assigned simply to apply for roles in Western companies, typically remote technical jobs. Once they gain employment, most of their earnings go back to the state. These workers often hold multiple jobs simultaneously and are required to meet revenue quotas.

At Okta, what we wanted to uncover, using our data holdings and cases identified by law enforcement and other trusted third parties, was how they were able to succeed. Because frankly, it's hard for many people to believe they could ever unknowingly hire a DPRK national.

So we asked: What are these individuals doing that makes them so effective?

Keith: All right, and why was generative AI such a powerful enabler for these objectives?

Brett: Because of how they’re trying to operate: at scale. Generative AI is what makes these scams scalable. Let me break it down.

If you're a facilitator or handler managing these operatives, you’re applying for jobs at dozens or even hundreds of companies, on behalf of dozens or hundreds of fake personas. These personas are based on stolen identities, so the complexity is high. Now, imagine managing communication for all those personas.

Some recruiters prefer email, others SMS, messaging apps, even social media. The scammers need to manage all of that in one place, and they do that using generative AI tools and dashboards that consolidate all communications. It’s an IT management problem, and AI solves it for them.
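To make that consolidation problem concrete, here is a minimal sketch of what such a dashboard has to do at its core: normalize messages arriving over different channels into one queue per persona. The Message shape and channel names are invented for illustration and do not reflect the actors’ actual tooling.

```python
# Minimal sketch: fold recruiter messages from many channels into one
# inbox per fake persona. All names and fields here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    persona: str   # which identity the message is addressed to
    channel: str   # "email", "sms", "linkedin", ...
    sender: str
    body: str

def consolidate(messages: list[Message]) -> dict[str, list[Message]]:
    """Group every inbound message under the persona it belongs to."""
    inbox: dict[str, list[Message]] = defaultdict(list)
    for msg in messages:
        inbox[msg.persona].append(msg)
    return dict(inbox)
```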

Then there's the employment application process itself. These DPRK operatives are relentless in testing and refining their applications. If they just guess what a good résumé or cover letter looks like, they'll likely fail.

But instead, they create fake companies, advertise roles identical to real job postings, and use real candidate submissions to test what gets through applicant tracking systems (ATS).

Keith: That’s so frustrating, especially when real people are struggling to get jobs because of AI screeners and ATS filters. For North Koreans to game this system so effectively, it’s mind-boggling.

Brett: It really is. And what they’re doing is clever.

Once they’ve refined their materials through testing, they apply for the actual roles, typically remote technical positions like software engineering, where there’s a skills gap and remote work is accepted. We also saw them logging into systems typically used by employers or recruiters, not just candidates. It was confusing at first.

But what they were doing was setting up fake companies to post real-looking job ads. Then they’d collect the résumés and cover letters people submitted and analyze what worked.

Keith: So they were using fake companies to reverse-engineer ATS filters?

Brett: Exactly. They’d use that insight to optimize their own applications. It’s clever, and unsettling.
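For a sense of what that testing loop was optimizing against, here is a minimal Python sketch of the keyword-overlap scoring that ATS products are commonly understood to apply. The tokenizer, stopword list, and scoring rule are illustrative assumptions, not any specific vendor’s algorithm.

```python
import re

def ats_keyword_score(resume_text: str, job_posting: str) -> float:
    """Fraction of distinct job-posting terms that also appear in the resume."""
    def terms(text: str) -> set[str]:
        # Keep tech-style tokens like "ci/cd" and "c++" intact.
        return set(re.findall(r"[a-z][a-z+#./-]*", text.lower()))

    stopwords = {"a", "an", "and", "for", "in", "of", "or", "the", "to", "with"}
    wanted = terms(job_posting) - stopwords
    return len(wanted & terms(resume_text)) / len(wanted) if wanted else 0.0

posting = "Senior software engineer: Python, Kubernetes, CI/CD pipelines, remote"
resume = "Built CI/CD pipelines in Python; ran workloads on Kubernetes"
print(f"{ats_keyword_score(resume, posting):.0%}")  # 50% of posting terms matched
```

Against a scorer like this, feeding many real candidate résumés through a fake posting reveals exactly which phrasings clear the bar, which is the feedback loop Brett describes.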

Keith: And it’s much easier to fabricate experience for a fake person.

Brett: Yes, which sometimes backfires. We’ve seen them trip up during interviews. But even there, they’re using AI again. We saw them practicing with mock interview platforms powered by generative AI.

These tools assess posture, tone, lighting, and even test for deepfake overlays. If the AI didn’t detect the fake, they assumed a human probably wouldn’t either. They also used LLMs to prepare answers for common technical questions, and they practiced extensively to sound natural during live conversations.

Keith: Were you able to tell if these interviews used live video or just AI deepfakes?

Brett: In some cases, yes. But in general, we couldn’t definitively match tools to specific interviews. We did see deepfake overlays in use.

Sometimes they didn’t bother, especially if the fake persona had a similar appearance to the real actor. If the persona required a completely different look, then they’d resort to video manipulation.

Keith: And this wasn’t just U.S. companies being targeted, right?

Brett: Correct. It started with U.S. tech firms, because they were doing the most hiring at high salaries for these in-demand skills. But over time, we’ve seen them expand to healthcare, professional services, and other industries globally.

This is no longer just a U.S. tech sector problem; it’s a global issue. Every chief security officer, and importantly, every HR and talent acquisition team needs to understand what these scams look like.

Keith: So once these individuals actually got the job, how were they able to maintain the ruse? Some of them were paid for months before being discovered, right?

Brett: Exactly. Typically, we’ve seen most of them last only a few pay cycles, one or two months.

Not because they lack technical skills; in many cases, they’re quite capable. The real problem is sustainability. If you're working seven jobs, 16–18 hours a day, six days a week, performance drops. Managers begin to notice if you avoid video calls, won’t show your background, or constantly have technical excuses.

They exploit the remote nature of technical jobs. Another issue is that many companies don’t have consistent interviewers across all hiring stages. It’s possible a DPRK worker could be represented by someone else, often located in the West, for part of the process. This impersonation isn’t unique to DPRK scams either.

Overemployment and proxy interviewing are a growing issue. There’s even a market where people can hire someone to sit in on technical interviews for them. Verifying identity throughout the hiring and onboarding process is a major gap these scammers exploit.

Keith: And were these workers just after a paycheck, or were they also stealing data or acting as spies?

Brett: From our research, we didn’t see direct evidence of espionage, but others in the cybersecurity community have.

Often, when their performance drops or they’re about to be terminated, that's when they grab sensitive data. The idea is to use it later for extortion. There’s also a belief that many of these IT workers were educated alongside North Korea’s cyberespionage operatives.

So while some may only be focused on income generation, the risk of access being handed over to more malicious actors is very real.

Keith: Another aspect I found disturbing was the use of U.S.-based facilitators: rooms full of laptops, mailing addresses for onboarding gear. How did that work?

Brett: There have been U.S. indictments against people who knowingly, or unknowingly, served as facilitators.

If an employer required a managed device to be shipped domestically, they’d need a U.S. address. These facilitators would receive the devices and set up remote access tools like IP KVMs (keyboard-video-mouse switches reachable over the internet), allowing DPRK workers to log in as if they were physically in the U.S.

Some of these facilitators are now facing serious prison time. Many claimed they didn’t know they were working for the North Korean state, but the legality becomes murky when they’re interacting with a sanctioned entity.
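One defensive check this model suggests, sketched below under stated assumptions, is comparing where a managed device was shipped with where it actually connects from, and flagging anonymizing infrastructure. The geolocate_ip() helper and its demo data are hypothetical stand-ins for whatever IP-intelligence feed an organization already uses.

```python
# Toy sketch: flag corporate-device logins that disagree with the
# region the device was shipped to, or that arrive via anonymizers.
# geolocate_ip() and its data are fabricated for illustration.
def geolocate_ip(ip: str) -> dict:
    demo = {
        "203.0.113.7": {"region": "US-NJ", "is_anonymizer": False},
        "198.51.100.9": {"region": "US-CA", "is_anonymizer": True},
    }
    return demo.get(ip, {"region": "unknown", "is_anonymizer": False})

def is_suspect_login(source_ip: str, shipped_region: str) -> bool:
    geo = geolocate_ip(source_ip)
    return geo["is_anonymizer"] or geo["region"] != shipped_region

print(is_suspect_login("203.0.113.7", "US-NJ"))   # False: matches shipment
print(is_suspect_login("198.51.100.9", "US-NJ"))  # True: anonymizer, wrong region
```

Note that a laptop parked at a facilitator’s address and driven over an IP KVM would pass this particular check, which is why investigators also look for remote-management hardware and other behavioral signals.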

Keith: Are these operations still ongoing, or has the media coverage slowed them down?

Brett: I don’t expect it to slow down. In fact, I think it will spread into new industries.

As long as North Korea lacks other revenue options and has a pool of technically skilled individuals under quota pressures, they’ll keep applying for remote tech jobs. The U.S. tech industry is catching on, but other sectors, like healthcare and finance, need to learn these same lessons.

That includes HR, talent teams, and hiring managers being trained to spot red flags and verify identity throughout the hiring process. At Okta, we’ve even rolled out new features in response, like requiring a government-issued ID and liveness check before someone can create an account or authenticate.
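In generic terms, that control amounts to gating provisioning on two independent checks. The sketch below is one hypothetical shape for such a gate; the verify_* functions stand in for an identity-verification vendor's API, and none of this reflects Okta's actual interface.

```python
# Generic sketch: block account creation until a government-ID check
# and a liveness check both pass. All functions here are placeholders.
def verify_government_id(document_image: bytes) -> bool:
    return True  # placeholder: call your identity-verification provider

def verify_liveness(selfie_video: bytes) -> bool:
    return True  # placeholder: call your identity-verification provider

def create_account(username: str, document_image: bytes, selfie_video: bytes) -> str:
    if not verify_government_id(document_image):
        raise PermissionError("government ID failed verification")
    if not verify_liveness(selfie_video):
        raise PermissionError("liveness check failed")
    # Provision credentials only after both checks succeed.
    return f"account created for {username}"
```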

Keith: Let’s talk about red flags. What should companies look for when hiring remotely?

Brett: A few key signs:

* A strong preference for chat-based communication over voice or video
* Delayed responses during interviews, possibly due to AI-generated replies
* Inconsistencies between background checks and verbal answers
* Resistance to showing video or background during meetings
* Last-minute changes to shipping addresses for devices
* Requests for unconventional payment methods or early changes in payment details
* Odd working hours or inability to join team meetings regularly

On their own, some of these might seem harmless. But in combination, they should raise concern, as the sketch below illustrates.
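A simple way to operationalize that "combination" point is a weighted score over observed flags. The weights and escalation threshold below are invented for illustration, not calibrated guidance.

```python
# Toy risk scorer: individually weak signals become actionable when
# several co-occur. Weights and threshold are illustrative only.
RED_FLAG_WEIGHTS = {
    "prefers_chat_only": 1,
    "delayed_interview_answers": 1,
    "background_check_mismatch": 3,
    "refuses_camera": 2,
    "late_shipping_address_change": 2,
    "payment_detail_change": 3,
    "never_joins_team_meetings": 1,
}

def hiring_risk_score(observed_flags: set[str]) -> int:
    """Sum the weights of every red flag observed for one candidate."""
    return sum(RED_FLAG_WEIGHTS.get(flag, 0) for flag in observed_flags)

flags = {"prefers_chat_only", "late_shipping_address_change", "payment_detail_change"}
score = hiring_risk_score(flags)
print(f"risk={score}, escalate={score >= 5}")  # no single flag crosses the bar alone
```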

Keith: Could we avoid this by ditching ATS systems and AI screening, maybe even fly candidates in for interviews again?

Brett: It could help. Many tech companies now require physical onboarding or in-person verification for this reason.

But the reality is, most platforms already integrate AI and ATS screening; it’s deeply embedded in the hiring ecosystem. Some companies will need to weigh the risk of fraud against the challenge of a smaller talent pool when requiring physical onboarding.

Keith: One frustrating aspect is that these scammers were so successful. Can job seekers use some of these tools, ethically, to navigate the system?

Brett: Yes. Many tools being abused in these scams exist for legitimate reasons.

If you Google “how to beat an ATS,” you’ll find tons of services aimed at frustrated, long-term job seekers trying to get noticed. There’s a difference between using tools ethically to level the playing field, and creating fake companies or personas to game the system. That crosses a line.

Keith: Are the bad guys ahead of the good guys in this space?

Brett: The DPRK workers are incentivized to master AI tools better than we do. Some come from government agencies focused entirely on AI research. And ironically, some companies developing AI tools may have unknowingly hired them.

The demand for skilled software and data engineers is high, and North Korea is filling that gap in illicit ways.

Keith: Brett, this has been a fascinating discussion. Where can people find the full report?

Brett: You can read the report at our security site: sec.okta.com

Keith: Thanks again for joining us, Brett.

Brett: Cheers, Keith. Thanks for the chat.

Keith: That’s all the time we have for today’s episode.

Be sure to like the video, subscribe to the channel, and leave a comment if you’re watching on YouTube. Join us each week for new episodes of Today in Tech. I’m Keith Shaw. Thanks for watching!