The advent of virtual interviews and remote onboarding has streamlined recruitment in today’s digital age. At the same time, it has made the issue of deepfake job candidates more prevalent with far-reaching repercussions. Once hired, these impostors present significant security and financial risks. Combating this threat will require a multifront approach from both HR and IT teams.
The Rise of Deepfake Job Candidates
Since its emergence in 2017, deepfake technology has been used for general entertainment. These synthetic representations leverage generative AI to create new media using existing video and audio clips. At the time, deepfakes were easy to spot — characters didn’t blink normally, their lip-syncing was off and the finer details were clearly lacking.
However, as AI and machine learning technologies advance and become more accessible, identifying these counterfeits has become increasingly difficult. Studies suggest people can correctly distinguish deepfakes from authentic images only about 61% of the time, raising concerns about the pervasiveness of fraudulent activities like scam job applications.
How Do Deepfake Job Scams Work?
Cybercriminals typically start by harvesting real candidate information through fake job listings. This allows them to craft realistic, impressive resumes that catch recruiters' attention. Once they secure an interview, they use deepfake technology to attach a fabricated face and voice to the candidate profile.
It might sound like a complex process, but it's surprisingly simple. In fact, scammers need only a photograph and one minute of audio to generate a convincing deepfake. Sophisticated tools can match the natural contours of a face and automatically track the movements of the lips and eyes.
Deepfake job candidates are a severe issue for HR professionals. While their motives vary, no one invests that much time and effort into faking an entire personality and appearance with good intentions. Most commonly, these impostors seek employee-level access to sensitive proprietary information for financial gain.
These scams became so prevalent that the FBI issued a public warning in 2022 detailing how deepfake candidates target remote work jobs to access corporate databases. According to the report, jobs related to computer programming, software and financial management are the most targeted.
How Businesses Can Train Recruiters to Identify Deepfake Job Candidates
While deepfake technologies have become increasingly sophisticated, they’re not infallible. Training recruiters and HR teams about the growing existence of deepfake job candidates and the risks they pose is key. Leaders should provide specialized programs for recognizing the telltale signs of an AI-generated impostor applicant.
Observe the Eye Reflections
Face swapping is one of the most common methods used in deepfakes, and while advanced systems can render near-perfect facial features, eye reflections are often a giveaway. When a real person looks at something, the reflections in both eyes match and fit naturally with their facial expressions.
This is frequently not the case with AI-generated recreations: the reflections in the two eyes may differ or be missing entirely, either because the technology isn't quite there yet or because the model simply overlooks such a fine detail.
Check for Shadows
A moving person casts shadows even in a well-lit room. Deepfakes often lack natural-looking shadows because the face is synthesized rather than lit by the actual environment.
Look for a Lack of Coherence
Unnatural head position, irregular blinking, and mismatched lip coordination are glaring red flags. Watch for blurry face borders that bob in and out of the background, like when a person uses an artificial background during a video call.
Look for inconsistencies between the sound of the words and the shape and movement of the mouth. For example, pronouncing words containing 'b,' 'p,' 'm' or 'w' requires the lips to close; if the lip movement doesn't match the sound, that's a strong sign of deepfake imagery.
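The blinking cue above can even be checked programmatically. Below is a minimal sketch that counts blinks from a sequence of per-frame eye-aspect-ratio (EAR) values, which a face-landmark library such as MediaPipe or dlib could supply; the function names, EAR threshold and "typical" blink-rate range are illustrative assumptions, not established standards.

```python
def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR) values.

    A blink is a run of at least `min_frames` consecutive frames where the
    EAR drops below `threshold` (eyes closed), followed by reopening.
    Threshold values here are assumptions for illustration.
    """
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # count a blink still in progress at clip end
        blinks += 1
    return blinks


def blink_rate_suspicious(ear_series, fps=30, low=8, high=40):
    """Flag a clip whose blink rate falls outside a roughly typical human
    range (assumed here as 8-40 blinks per minute)."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < low or rate > high
```

For example, a one-minute clip at 30 fps with no EAR dips at all (no blinks) would be flagged, while a clip with a natural cadence of dips would pass. A real screening pipeline would combine this with other cues rather than rely on blink rate alone.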
Conduct Technical Assessments
Asking technical questions during the initial interview can filter out fake candidates because they’ll likely struggle to answer them correctly. Of course, there’s always the possibility that threat actors will hire actual subject matter experts to attend the interview and simply overlay the deepfake image onto their faces. That’s why employers cannot afford to rely on just one assessment method.
In-Person Interviews for the Win
Whenever possible, recruiters should conduct face-to-face interviews, especially for positions that involve unfettered access to confidential company information. Organizing a physical meeting will likely be much cheaper than the potential financial and reputational losses from onboarding a deepfake candidate.
Where in-person interviews are not feasible, structured video sessions can be a suitable alternative. This involves giving candidates specific, mandatory instructions for how the interview will be conducted. For example, candidates may be required to sit in a well-lit room with their faces fully visible throughout. A standardized format makes it easier to spot subtle cues like visual warping and audio distortion that indicate a deepfake video.
Using AI to Weed Out the Fakes
Just as threat actors leverage AI to generate deepfakes, business leaders should train HR teams to use the technology to detect potential issues and raise alerts immediately. For example, these systems can analyze historical data to identify inconsistencies in the information applicants provide. Similarly, AI- and ML-powered tools can automate cybersecurity incident response, providing a more robust and adaptable security framework.
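As a simple illustration of automated consistency checking, the sketch below scans an applicant's claimed employment history for reversed or overlapping date ranges, one of the easiest red flags to verify mechanically. The data shape (employer, start, end) and function name are assumptions for this example, not part of any specific HR product.

```python
from datetime import date


def find_date_inconsistencies(jobs):
    """Flag reversed or overlapping employment periods in an applicant's
    claimed history. `jobs` is a list of (employer, start_date, end_date)
    tuples; this shape is an assumption for the sketch."""
    issues = []

    # Reversed ranges: the end date comes before the start date.
    for employer, start, end in jobs:
        if end < start:
            issues.append(f"{employer}: end date precedes start date")

    # Overlaps: sort by start date, then compare each job with the next.
    ordered = sorted(jobs, key=lambda j: j[1])
    for (emp_a, _, end_a), (emp_b, start_b, _) in zip(ordered, ordered[1:]):
        if start_b < end_a:
            issues.append(f"{emp_a} overlaps with {emp_b}")

    return issues
```

In practice, such a check would be one small module in a larger screening system that also cross-references public records and flags mismatches for a human recruiter to review.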
Why Training HR on Deepfakes Is Critical
Four years ago, fewer than 10,000 deepfakes were detected online; today, that number is in the millions. Cybercriminals are getting smarter and changing how they attack unsuspecting businesses. Instead of tricking an employee into clicking a malicious link, they work to get hired themselves, securing continued access to enterprise systems and siphoning sensitive data for as long as they stay employed.
Currently, there's no single foolproof way to filter out deepfake attempts. Even detection tools are not 100% accurate and can give a false sense of security. HR teams must therefore employ a mix of detection and verification strategies during recruitment and onboarding to adequately mitigate the growing risk of deepfakes.
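One way to operationalize "a mix of strategies" is to treat each check as a weighted signal and combine them into a single risk score. The sketch below does exactly that; the signal names and weights are illustrative assumptions, not an industry standard.

```python
def deepfake_risk_score(signals):
    """Combine independent screening signals into a 0-1 risk score.

    `signals` maps a signal name to a (triggered, weight) pair. The score
    is the weighted fraction of triggered signals; names and weights here
    are assumptions for illustration only.
    """
    total = sum(weight for _, weight in signals.values())
    if total == 0:
        return 0.0
    hit = sum(weight for fired, weight in signals.values() if fired)
    return hit / total
```

For instance, if a lip-sync mismatch (weight 3.0) and a failed technical check (weight 2.0) both fire but the eye-reflection check (weight 2.0) does not, the score is 5/7, or about 0.71, well above a plausible escalation threshold. The key design choice is that no single signal decides the outcome, mirroring the article's point that no one check is foolproof.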
Dodging the Deepfake Candidate Menace
With deepfake applications becoming increasingly sophisticated, employers must strengthen their cybersecurity culture to avoid falling victim. HR leaders play a critical role in steering their brands through this evolving minefield by learning the intricacies of deepfake technology and prioritizing proven mitigation measures.