A shocking new investigation reveals how North Korean operatives used generative AI tools to pose as remote tech workers, land real jobs, and send stolen salaries (and potentially sensitive data) back to the DPRK. In this episode of Today in Tech, host Keith Shaw speaks with Brett Winterford, VP of Okta Threat Intelligence, about the rise of "wage mole" campaigns, deepfake video interviews, and how companies were tricked into hiring fake personas with stolen identities. Winterford details how these operations scale through AI-powered applicant tracking exploits, fraudulent laptop farms in the U.S., and how some actors even tested their fake résumés by creating fake companies to gather intel on what works. Learn the red flags to watch for, what industries are now being targeted, and how your organization can protect itself from this growing global threat.

Topics Covered:
* How generative AI enabled the DPRK's remote job scams
* Deepfake interviews and mock ATS testing
* Laptop farms and U.S.-based facilitators
* Red flags recruiters and HR teams should watch for
* Why this isn't just a U.S. tech problem; it's global

Watch now and subscribe to stay ahead of the latest cybersecurity threats.

#cybersecurity #AIscams #NorthKorea #remotework #Deepfakes #HRtech #TodayInTech #Okta #AITools #fraudprevention #ATS #identityverification #infosec
Keith Shaw: The North Korean IT job scandal has shaken a lot of companies' hiring practices to the core, exposing poor processes and revealing serious data security vulnerabilities.
On this episode of Today in Tech, we're going to talk about what happened, what went wrong, and what lessons companies can take from these events. Hi everybody, welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Brett Winterford.
He is the Vice President of Okta Threat Intelligence, which recently published a report on this topic. Welcome to the show, Brett.

Brett Winterford: Thank you so much for having me, Keith.

Keith: And you're calling in from Australia, so this is truly an international episode today.
Brett: I hope you're having a good morning, as I'm having a good evening.
Keith: So, your company recently published findings examining how North Korean scammers are using generative AI tools to apply for, and secure, remote technical roles across the globe. I think you refer to these as "wage mole campaigns," a very interesting term.
Once employed, these scammers continue to use generative AI to maintain their jobs, act as agents, and raise funds for the North Korean state. U.S. agencies have also identified several outlier cases in which system access granted for employment was later used for espionage or data extortion.
This has been in the news for a while now, but I want to start by asking: Can you give me a quick overview of what the North Koreans were doingβand how they were able to operate at such a global scale?
Brett: If we outline why this scam exists in the first place, it's because the DPRK, the Democratic People's Republic of Korea, has very limited opportunities to generate revenue in global markets. They're heavily sanctioned across the world. But one area where they have a natural advantage is in their technical talent.

They train a lot of highly skilled computer science professionals, but there's not much internal demand for them in North Korea. However, they can use those skills abroad to earn revenue. Sometimes that's through hacking, as we've seen.

The regime has been behind some of the most daring cyber heists in history. I'm thinking of the Central Bank of Bangladesh breach about a decade ago, and more recently, the Bybit crypto exchange.

But there's also a subset of individuals who are assigned simply to apply for roles in Western companies, typically remote technical jobs. Once they gain employment, most of their earnings go back to the state. These workers often hold multiple jobs simultaneously and are required to meet revenue quotas.

At Okta, what we wanted to uncover, using our data holdings and cases identified by law enforcement and other trusted third parties, was how they were able to succeed. Because frankly, it's hard for many people to believe they could ever unknowingly hire a DPRK national.
So we asked: What are these individuals doing that makes them so effective?
Keith: All right, and why was generative AI such a powerful enabler for these objectives?

Brett: Because of how they're trying to operate: at scale. Generative AI is what makes these scams scalable. Let me break it down.

If you're a facilitator or handler managing these operatives, you're applying for jobs at dozens or even hundreds of companies, on behalf of dozens or hundreds of fake personas. These personas are based on stolen identities, so the complexity is high. Now, imagine managing communication for all those personas.

Some recruiters prefer email, others SMS, messaging apps, even social media. The scammers need to manage all of that in one place, and they do that using generative AI tools and dashboards that consolidate all communications. It's an IT management problem, and AI solves it for them.

Then there's the employment application process itself. These DPRK operatives are relentless in testing and refining their applications. If they just guess what a good résumé or cover letter looks like, they'll likely fail.
But instead, they create fake companies, advertise identical roles to real job postings, and use real candidate submissions to test what gets through applicant tracking systems (ATS).
Keith: That's so frustrating, especially when real people are struggling to get jobs because of AI screeners and ATS filters. For North Koreans to game this system so effectively, it's mind-boggling.

Brett: It really is. And what they're doing is clever.

Once they've refined their materials through testing, they apply for the actual roles, typically remote technical positions like software engineering, where there's a skills gap and remote work is accepted. We also saw them logging into systems typically used by employers or recruiters, not just candidates. It was confusing at first.

But what they were doing was setting up fake companies to post real-looking job ads. Then they'd collect the résumés and cover letters people submitted and analyze what worked.

Keith: So they were using fake companies to reverse-engineer ATS filters?

Brett: Exactly. They'd use that insight to optimize their own applications. It's clever, and unsettling.
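To make the reverse-engineering concrete: many ATS filters do, at their simplest, keyword matching against a job description. The sketch below is a toy illustration of that kind of screen, the thing the operatives were probing with their fake job postings. Real ATS products are proprietary and far more sophisticated; the keywords, threshold, and function names here are invented for illustration.

```python
# Toy sketch of keyword-based resume screening. Everything here
# (keywords, threshold, names) is an illustrative assumption, not
# any real ATS product's logic.
REQUIRED_KEYWORDS = {"python", "kubernetes", "ci/cd", "microservices"}
PASS_THRESHOLD = 0.75  # fraction of required keywords that must appear

def screen_resume(resume_text: str) -> bool:
    """Return True if the resume matches enough required keywords."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS) >= PASS_THRESHOLD

resume = "Senior engineer: Python microservices on Kubernetes, CI/CD pipelines."
print(screen_resume(resume))  # True: all four keywords present
```

By submitting many résumé variants to a filter like this and observing which ones pass, an attacker can infer the keyword list without ever seeing the code, which is exactly the feedback loop the fake job postings gave them.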
Keith: And it's much easier to fabricate experience for a fake person.

Brett: Yes, which sometimes backfires. We've seen them trip up during interviews. But even there, they're using AI again. We saw them practicing with mock interview platforms powered by generative AI.
These tools assess posture, tone, lighting, and even test for deepfake overlays. If the AI didn't detect the fake, they assumed a human probably wouldn't either. They also used LLMs to prepare answers for common technical questions, and they practiced extensively to sound natural during live conversations.
Keith: Were you able to tell if these interviews used live video or just AI deepfakes?

Brett: In some cases, yes. But in general, we couldn't definitively match tools to specific interviews. We did see deepfake overlays in use.

Sometimes they didn't bother, especially if the fake persona had a similar appearance to the real actor. If the persona required a completely different look, then they'd resort to video manipulation.
Keith: And this wasn't just U.S. companies being targeted, right?

Brett: Correct. It started with U.S. tech firms, because they were doing the most hiring at high salaries for these in-demand skills. But over time, we've seen them expand to healthcare, professional services, and other industries globally.

This is no longer just a U.S. tech sector problem; it's a global issue. Every chief security officer, and importantly, every HR and talent acquisition team, needs to understand what these scams look like.
Keith: So once these individuals actually got the job, how were they able to maintain the ruse? Some of them were paid for months before being discovered, right?

Brett: Exactly. Typically, we've seen most of them last only a few pay cycles, one or two months.

Not because they lack technical skills; in many cases, they're quite capable. The real problem is sustainability. If you're working seven jobs, 16 to 18 hours a day, six days a week, performance drops. Managers begin to notice if you avoid video calls, won't show your background, or constantly have technical excuses.

They exploit the remote nature of technical jobs. Another issue is that many companies don't have consistent interviewers across all hiring stages. It's possible a DPRK worker could be represented by someone else, often located in the West, for part of the process. This impersonation isn't unique to DPRK scams either.

Overemployment and proxy interviewing are a growing issue. There's even a market where people can hire someone to sit in on technical interviews for them. Verifying identity throughout the hiring and onboarding process is a major gap these scammers exploit.
Keith: And were these workers just after a paycheck, or were they also stealing data or acting as spies?

Brett: From our research, we didn't see direct evidence of espionage, but others in the cybersecurity community have.

Often, when their performance drops or they're about to be terminated, that's when they grab sensitive data. The idea is to use it later for extortion. There's also a belief that many of these IT workers were educated alongside North Korea's cyberespionage operatives.
So while some may only be focused on income generation, the risk of access being handed over to more malicious actors is very real.
Keith: Another aspect I found disturbing was the use of U.S.-based facilitators: rooms full of laptops, mailing addresses for onboarding gear. How did that work?

Brett: There have been U.S. indictments against people who knowingly, or unknowingly, served as facilitators.

If an employer required a managed device to be shipped domestically, they'd need a U.S. address. These facilitators would receive the devices and set up remote access tools like IP KVMs, allowing DPRK workers to log in as if they were physically in the U.S.

Some of these facilitators are now facing serious prison time. Many claimed they didn't know they were working for the North Korean state, but the legality becomes murky when they're interacting with a sanctioned entity.
Keith: Are these operations still ongoing, or has the media coverage slowed them down?

Brett: I don't expect it to slow down. In fact, I think it will spread into new industries.

As long as North Korea lacks other revenue options and has a pool of technically skilled individuals under quota pressures, they'll keep applying for remote tech jobs. The U.S. tech industry is catching on, but other sectors, like healthcare and finance, need to learn these same lessons.

That includes HR, talent teams, and hiring managers being trained to spot red flags and verify identity throughout the hiring process. At Okta, we've even rolled out new features in response, like requiring a government-issued ID and a liveness check before someone can create an account or authenticate.
Keith: Let's talk about red flags. What should companies look for when hiring remotely?
Brett: A few key signs:
* A strong preference for chat-based communication over voice or video
* Delayed responses during interviews, possibly due to AI-generated replies
* Inconsistencies between background checks and verbal answers
* Resistance to showing video or background during meetings
* Last-minute changes to shipping addresses for devices
* Requests for unconventional payment methods or early changes in payment details
* Odd working hours or inability to join team meetings regularly

On their own, some of these might seem harmless.
But in combination, they should raise concern.
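The "harmless alone, concerning in combination" logic can be expressed as a simple weighted score. This is a minimal sketch under assumed weights and an assumed threshold; the flag names and numbers are illustrative, not Okta's methodology or any vendor's product.

```python
# Hypothetical red-flag weights; a real program would tune these
# with far more context. All names and values are assumptions.
RED_FLAGS = {
    "chat_only_communication": 1,
    "delayed_interview_responses": 1,
    "background_check_inconsistencies": 3,
    "refuses_video_or_background": 2,
    "last_minute_shipping_change": 3,
    "unusual_payment_requests": 3,
    "cannot_attend_team_meetings": 1,
}
REVIEW_THRESHOLD = 4  # one weak flag alone should not trigger review

def needs_review(observed_flags: set) -> bool:
    """Escalate to manual identity verification when the combined
    weight of observed red flags crosses the threshold."""
    return sum(RED_FLAGS.get(f, 0) for f in observed_flags) >= REVIEW_THRESHOLD

# One weak signal alone does not trigger review...
print(needs_review({"chat_only_communication"}))  # False
# ...but a combination of stronger signals does.
print(needs_review({"refuses_video_or_background",
                    "last_minute_shipping_change"}))  # True
```

The point of the design is the one Brett makes: react to the combination, not to any single signal, so ordinary remote workers who merely prefer chat are not flagged.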
Keith: Could we avoid this by ditching ATS systems and AI screening, maybe even fly candidates in for interviews again?

Brett: It could help. Many tech companies now require physical onboarding or in-person verification for this reason.

But the reality is, most platforms already integrate AI and ATS screening; it's deeply embedded in the hiring ecosystem. Some companies will need to weigh the risk of fraud against the challenge of a smaller talent pool when requiring physical onboarding.
Keith: One frustrating aspect is that these scammers were so successful. Can job seekers use some of these tools, ethically, to navigate the system?

Brett: Yes. Many tools being abused in these scams exist for legitimate reasons.

If you Google "how to beat an ATS," you'll find tons of services aimed at frustrated, long-term job seekers trying to get noticed. There's a difference between using tools ethically to level the playing field, and creating fake companies or personas to game the system. That crosses a line.
Keith: Are the bad guys ahead of the good guys in this space?

Brett: The DPRK workers are incentivized to master AI tools better than we do. Some come from government agencies focused entirely on AI research. And ironically, some companies developing AI tools may have unknowingly hired them.

The demand for skilled software and data engineers is high, and North Korea is filling that gap in illicit ways.
Keith: Brett, this has been a fascinating discussion. Where can people find the full report?

Brett: You can read the report at our security site: sec.okta.com

Keith: Thanks again for joining us, Brett.

Brett: Cheers, Keith. Thanks for the chat.

Keith: That's all the time we have for today's episode. Be sure to like the video, subscribe to the channel, and leave a comment if you're watching on YouTube. Join us each week for new episodes of Today in Tech. I'm Keith Shaw, thanks for watching!