
Is There a Good Use Case for Deepfakes?

  • Writer: Samantha Pillay
  • Jun 14, 2025
  • 3 min read

Updated: Sep 2, 2025

And Why I Created Disney-Me Instead




I’ve been following the rapid advances in avatar generation — some so realistic they’re indistinguishable from the real person. But I didn’t want to create a realistic avatar of myself.


Not because I couldn’t — but because I believe it would undermine my authenticity and create mistrust.


In the work I do — in healthcare, education, and advocacy — trust is everything. If people are left wondering whether it’s really me, the message loses its meaning.


Maybe one day, I’ll change my mind. Maybe one day, AI avatars will be so commonplace that no one will question them. But for now, I wanted to explore the technology in a way that felt safe, fun, and transparent.


So instead of trying to look exactly like myself, I created a cartoon version — friendly, expressive, clearly fictional. Spoiler alert: the video is not the real me; it’s Disney-Me.

Disney-Me can speak in over a hundred languages, including English, French, Spanish, and Mandarin.


It’s easy to imagine how deepfakes could be used for harm — we’ve already seen examples of misinformation, fraud, and manipulated media.


But this exercise left me asking the harder question:


What are the good use cases for deepfakes?


Could realistic avatars — used ethically and with consent — actually help people?


Avatars With a Purpose: How Could They Help?

I believe two things are essential when it comes to realistic AI avatars:

  1. Consent: No one should ever create or use an avatar of someone else without their permission.

  2. Transparency: People deserve to know when they're seeing or hearing an AI-generated avatar.


But under those conditions — and used thoughtfully — here’s how a realistic avatar of yourself might actually empower people who face barriers to communication, confidence, or expression.


Who Might Benefit From an AI Avatar of Themselves?


• Therapists and clients experimenting with behavioural modelling — watching yourself set boundaries, regulate emotions, or rehearse healthy responses could support mental health and recovery. Could this help in therapy for anxiety, depression, addiction, anger management, domestic violence, or eating disorders?


• People with speech disorders, like stuttering or aphasia, who could use their avatar for therapy, self-practice, or daily communication.


• Those using robotic voice devices (like an electrolarynx) who want to speak with a more natural-sounding voice.


• People who’ve temporarily lost their voice due to illness, surgery, or trauma — using AI to maintain their identity and agency.


• Neurodivergent individuals who find face-to-face or camera communication overwhelming.


• People experiencing social anxiety or fear of public speaking, especially on video.


• Anyone uncomfortable with their appearance due to medical treatment, injury, or body image concerns.


• Those who can't afford a camera, microphone, lighting, makeup, or even clean clothes for filming.


• People temporarily displaced from their home or living in unstable housing without a private or quiet recording space.


• Those living in crowded, noisy environments who can’t access a quiet moment to record.


• Busy people — from professionals to parents of small children — who don’t have the time or energy for multiple takes or post-production.


• People creating multilingual content, wanting to reach new audiences.


A Final Thought on Trust and Awareness


The technology is already here — and more people are using deepfakes every day.

That’s why it’s important to understand it, discuss it, and use it responsibly.

Because just like you can’t always trust what you read,

you can’t always trust what you see.


Verification and reliable sources are more important than ever.


Some ways to mitigate the risks of deepfakes and avatar misuse could include:

• Clear disclaimers or watermarks indicating AI-generated content

• Platform policies that require disclosure when avatars are used

• Code words or pre-arranged phrases between individuals or teams to verify authenticity

• Trusted third-party verification services for high-stakes communication (especially in medicine, law, finance)


Can you think of other ways a realistic AI avatar might help someone in need?


