Imagine, if you will, a digital doppelgänger. A clone that looks, talks and behaves just like you, created from the depths of artificial intelligence, reflecting your every mannerism with eerie precision. As thrilling as it might sound, how would you feel about it?
Our research at the University of British Columbia turns the spotlight onto this very question. With advancements in deep-learning technologies such as interactive deepfake applications, voice conversion and virtual actors, it’s possible to digitally replicate an individual’s appearance and behaviour.
This mirror image of an individual created by artificial intelligence is referred to as an “AI clone.” Our study dives into the murky waters of what these AI clones could mean for our self-perception, relationships and society. We identified three types of risks posed by AI replicas: doppelgänger-phobia, identity fragmentation and living memories.
Cloning AI
We defined AI clones as digital representations of individuals, designed to reflect one or more aspects of the real-world “source individual.”
Unlike fictitious characters in digital environments, these AI clones are based on existing people, potentially mimicking their visual likeness, conversational mannerisms, or behavioural patterns. The depth of replication can vary greatly, from replicating certain distinct features to creating a near-perfect digital twin.
AI clones are also interactive technologies, designed to interpret user and environmental input, conduct internal processing and produce perceptible output. And crucially, these are AI-based technologies built on personal data.
As the volume of personal data we generate continues to grow, so too does the fidelity of these AI clones in replicating our behaviour.
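To make that input–processing–output loop concrete, here is a minimal, purely illustrative sketch in Python. None of these names (PersonaProfile, AIClone, respond) come from our study; a real clone would replace the template step with a generative model conditioned on the source individual's personal data.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaProfile:
    """Hypothetical bundle of personal data the clone is conditioned on."""
    name: str
    catchphrases: list[str] = field(default_factory=list)

class AIClone:
    """Toy interactive clone: interpret input, process it, produce output."""

    def __init__(self, profile: PersonaProfile):
        self.profile = profile

    def respond(self, user_input: str) -> str:
        # 1. Interpret user input (here: trivially normalize it).
        parsed = user_input.strip()
        # 2. Internal processing: a real system would query a generative
        #    model conditioned on the profile; we fake it with a template.
        opener = self.profile.catchphrases[0] if self.profile.catchphrases else ""
        # 3. Produce perceptible output in the source individual's "voice".
        return f"{opener} You asked: {parsed}".strip()

# Usage: the richer the profile, the more faithful the mimicry.
clone = AIClone(PersonaProfile(name="Alex", catchphrases=["Honestly,"]))
print(clone.respond("How are you?"))  # -> "Honestly, You asked: How are you?"
```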
Fears, fragments and false memories
We presented 20 participants with eight speculative scenarios involving AI clones. The participants, who were diverse in age and background, reflected on their emotional responses and on the potential impacts on their self-perception and relationships.
First, we found that doppelgänger-phobia was a fear not only of the AI clone itself, but also of its potential misuse. Participants worried that their digital counterparts could exploit and displace their identity.
Second, there was the threat of identity fragmentation. Creating replicas threatens the unique individuality of the person being cloned, disrupting their cohesive self-perception. In other words, people worry that they might lose parts of their uniqueness and individuality in the replication process.
Lastly, participants expressed concerns about what we described as “living memories.” This relates to the danger posed when a person interacts with a clone of someone they have an existing relationship with. Participants worried that it could lead to a misrepresentation of the individual, or that they would develop an over-attachment to the clone, altering the dynamics of interpersonal relationships.
Preserving human values
It is evident that the development and deployment of AI clones carry profound implications. Our study not only contributes valuable insights to the critical dialogue on ethical AI, but also proposes a new framework for AI clone design that prioritizes identity and authenticity.
The onus lies with all stakeholders — including designers, developers, policymakers and end-users — to navigate this uncharted territory responsibly. This involves conscientiously considering moderation and user-generated data expiration strategies to prevent misuse and over-reliance.
Further, it’s imperative to recognize that the implications of AI clone technologies on personal identity and interpersonal relationships represent just the tip of the iceberg. As we continue to tread the delicate path of this burgeoning field, our study findings can serve as a compass guiding us to prioritize ethical considerations and human values above all.
- The author is an Assistant Professor of Computer Science at the University of British Columbia
- This article first appeared on The Conversation