Generating a fake person for KYC
This guy doesn't exist 


Introduction

Know Your Customer (KYC) has long been the cornerstone of financial and identity verification processes. Traditional KYC relies heavily on documents, photographs, and historical data to confirm a customer's identity. However, in the age of Generative AI and deep fakes, these once-reliable markers are becoming increasingly untrustworthy. 

Traditional KYC procedures require a mix of personal identification documents, financial statements, and sometimes even biometric verification. The process has been mostly effective in curbing fraud and meeting regulatory requirements. However, it's not without its limitations, including bureaucratic delays and the risk of human error.

The Rise of Generative AI

Generative AI technologies like Generative Adversarial Networks (GANs) have the capability to create highly realistic images, videos, and even histories. From synthesising photo-realistic images of people who don't exist to generating fake but plausible financial histories, the power of Generative AI is both fascinating and alarming.

The very features that make Generative AI revolutionary are those that put traditional KYC methods in jeopardy. Imagine a scenario where fraudulent accounts could be created with realistic images and financial histories that stand up to cursory inspections. Not only could this lead to financial fraud on an unprecedented scale, but it could also undermine the trust in financial institutions and regulatory bodies.

The increasingly sophisticated landscape of artificial data generation calls for a fundamental reassessment of KYC protocols. Financial institutions and regulatory bodies will need to consider the following:

  • Integrating multi-layered verification processes
  • Employing real-time analytics and AI-driven checks
  • Collaborative efforts to develop new identity verification standards
  • Regularly updating KYC procedures to adapt to emerging technologies
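To make the layered approach concrete, here is a minimal sketch of a multi-layered verification pipeline. Every name, field, and threshold is hypothetical; a real KYC system would draw these signals from document scanners, biometric providers, and reverse-image-search services rather than plain numbers.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Applicant:
    # Hypothetical signals a layered KYC pipeline might collect
    document_score: float    # confidence from document inspection (0-1)
    liveness_score: float    # biometric liveness check (0-1)
    account_age_days: int    # age of the applicant's online presence
    image_reuse_hits: int    # reverse-image matches under other identities

def document_check(a: Applicant) -> float:
    return a.document_score

def liveness_check(a: Applicant) -> float:
    return a.liveness_score

def history_check(a: Applicant) -> float:
    # A long but thin online history is exactly what a drip-fed
    # synthetic identity produces, so age alone earns limited credit.
    return min(a.account_age_days / 3650, 1.0)

def provenance_check(a: Applicant) -> float:
    # Penalise photos that also appear under other identities.
    return 0.0 if a.image_reuse_hits > 0 else 1.0

CHECKS: list[Callable[[Applicant], float]] = [
    document_check, liveness_check, history_check, provenance_check,
]

def verify(a: Applicant, threshold: float = 0.7) -> bool:
    # Every layer must clear the bar: one strong signal cannot
    # compensate for a failed check, which is the point of layering.
    return all(check(a) >= threshold for check in CHECKS)
```

The design choice worth noting is the `all(...)`: a synthetic identity that aces the document and history layers still fails the moment its photo surfaces under another name.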

The Experiment: Using ChatGPT to Generate a Life Story and Image Prompts

In a recent experiment, I utilised ChatGPT to generate a list of key life events for a fictional man. The AI model created a compelling timeline, complete with milestones like birthdays, graduations, job changes, and even personal experiences like travel and family events.

To bring these events to life visually, I used Generative AI to produce image prompts based on these milestones.
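The step from text timeline to image prompts is mechanical. A minimal sketch of that step follows; the events, prompt template, and subject description are invented for illustration (in the experiment the timeline came from ChatGPT), and a real pipeline would send each resulting string to an image model.

```python
# Turn a fictional life-event timeline into image-generation prompts.
# The events below are invented examples for illustration.

STYLE = "candid photo, natural lighting, shot on a consumer camera, photorealistic"

life_events = [
    (1987, "born in a small coastal town"),
    (2005, "high-school graduation ceremony"),
    (2009, "first office job, team photo"),
    (2014, "backpacking trip through Southeast Asia"),
    (2019, "wedding day in a country garden"),
]

def to_prompt(year: int, event: str, subject: str = "a brown-haired man") -> str:
    """Compose one image prompt, ageing the subject to match the year."""
    age = year - 1987
    return f"{subject}, aged {age}, {event}, {year}, {STYLE}"

prompts = [to_prompt(year, event) for year, event in life_events]
for p in prompts:
    print(p)
```

Ageing the subject consistently across prompts is what makes the resulting image set read as one life rather than a collection of strangers.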

Feeding each prompt into Midjourney then produced the images below.

Though I limited myself to 10 images for the purpose of the experiment, the potential to create hundreds of such realistic images in less than an hour is staggering. While these images may not be directly employed in the KYC process, their existence poses a critical question for the future of identity verification.

The experiment highlights a deeply unsettling possibility: Could social media platforms be getting seeded right now with AI-generated life histories and images, laying the groundwork for elaborate KYC frauds years down the line? A fraudster could feasibly build a "credible" online history, complete with realistic photos and life events, to bypass traditional KYC checks. The data, though artificially generated, would seem perfectly plausible to anyone conducting a cursory social media background check.

This isn't a scheme that requires a quick payoff. By slowly drip-feeding artificial data onto social media platforms over a period of years, a fraudster could create a persona that withstands even the most thorough scrutiny. By the time they decide to use this fabricated identity for financial gains, tracking the origins of the fraud becomes an immensely complex task.

It's a family affair

The possibilities for deception extend far beyond creating a single individual. Using Generative AI, one can also fabricate an entire network of relatives and close contacts for the fictional person. Fathers, mothers, siblings, and extended family can all be artificially generated, complete with their own distinct life events and milestones.

The deception can be made even more intricate by generating voices for these AI-created individuals. These voices can be used to leave voicemails or participate in phone calls, adding another layer of credibility to the faux family network. Further, videos capturing special family occasions like weddings, birthdays, or holidays could be created to fortify this fictional world.

Conclusion

The advent of Generative AI is a double-edged sword; while it offers exciting prospects, it simultaneously challenges the bedrock of identity verification processes like KYC. Financial institutions, technology companies, and regulatory bodies must work in concert to overhaul KYC methods. The imperative is not just to adapt but to stay ahead of the curve, preserving trust and security in a rapidly changing landscape.