The AI Revolt: How Our Love Affair with Technology Could Turn into a Hate Story

Max Shestov
4 min read · Oct 5, 2023


Cyber robot with a realistic woman's face

On September 27, 2023, Meta made a groundbreaking announcement: the introduction of Meta AI in beta, an advanced conversational assistant available on WhatsApp, Messenger, and Instagram, with integration into Ray-Ban Meta smart glasses and Quest 3 to follow.

These AI entities are not your typical virtual assistants; they're designed to have more personality, opinions, and interests, making interactions far more engaging and enjoyable. What's more, Meta has enlisted cultural icons and influencers such as Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka to lend their voices and personalities to these AI companions.

Screenshot from Meta's website page "Introducing new AI experiences"

Challenges and Paradoxes in AI-Human Interaction

In an age where virtual assistants and entertainment are increasingly powered by AI, it’s not hard to imagine a future where people grow weary of the digital realm. The novelty of interacting with artificial intelligence, whether as assistants or characters in our entertainment, may soon wear off.

However, the path to this future of AI-human interaction is far from straightforward. Consider the recent experiment conducted by Joanna Stern, a columnist at The Wall Street Journal. Stern replaced herself with an AI-generated voice and video clone, diving headfirst into a series of challenges that included creating a TikTok video, making video calls, and testing her bank's voice biometric system. The results were nothing short of eerie.

As Stern worked through her tasks, she found herself face to face with technology that had become astonishingly humanlike in its voice and facial expressions. The AI mimicked her voice with near-perfect precision, making it difficult to distinguish from the real thing. The video clone, however, was a stark contrast: it struggled to reproduce the subtle nuances of movement and facial expression, and its visuals failed to match the atmosphere and context of the conversation. Because of this imperfect imitation, the result was clumsy, drew ridicule, and was quickly exposed as fake.

Human Clones: Blurring Boundaries and the Verification Conundrum

This experiment underscores a paradox that may define our future interactions with AI-driven human visualizations.

Despite this rather unsuccessful first attempt at replicating human behavior, the technology is bound to catch up with our expectations of AI interaction. In turn, people will look for more authentic experiences that truly engage their senses and emotions.

This craving might fuel the next wave of explosive interest in human avatars: clones of real personalities, historical figures, and celebrities, including our living or deceased relatives and friends.

These clones will replicate the appearance, voice, and personality of their real-life counterparts, and even simulate their thoughts and reasoning, blurring the boundaries between reality and simulation.

The Future of AI-Human Interaction: From Fascination to Weariness

At this stage, a new obstacle will arise: the verification of clones. After all, you'd prefer to converse with or seek advice from a clone of Keanu Reeves if it's verified by Keanu himself, wouldn't you? Or to discuss the current political situation with a Lincoln clone whose mannerisms, tone, and thought processes have been verified by a group of historians or institutions.

Just as every secure website presents an HTTPS certificate that the browser verifies, every clone should carry a verification credential so that we know it is “authorized” to act on behalf of a specific person.
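To make the idea concrete, here is a minimal sketch of what such a check could look like, assuming a hypothetical scheme in which the real person (or a trusted institution) signs a small “clone manifest” with a private key and publishes the matching public key in some registry. The manifest fields, the function name, and the use of Ed25519 signatures via the Python cryptography package are all illustrative assumptions, not an existing standard.

```python
# A minimal sketch of clone verification, assuming a hypothetical scheme where
# the real person (or a trusted institution) signs a "clone manifest" and
# publishes the corresponding public key.
# Requires the third-party "cryptography" package: pip install cryptography

import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def is_authorized_clone(manifest: dict, signature: bytes, public_key_bytes: bytes) -> bool:
    """Return True only if the manifest was signed by the holder of the private key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    # Canonicalize the manifest so signer and verifier sign/verify identical bytes.
    payload = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode("utf-8")
    try:
        public_key.verify(signature, payload)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False


# Hypothetical manifest for an authorized celebrity clone.
manifest = {
    "person": "Keanu Reeves",
    "avatar_model": "sha256:<hash of the avatar model weights>",
    "issued": "2023-10-05",
    "issuer": "the person themselves or a verifying institution",
}
# In practice, `signature` and `public_key_bytes` would come from the clone's
# provider and a public registry, respectively.
```

The design choice mirrors the HTTPS analogy above: what matters is not the specific library, but that anyone interacting with a clone can cheaply check a cryptographic proof that the person it imitates actually authorized it.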

Beyond Meta's announcement, it's worth noting that startups such as Synthesia and HeyGen are getting closer to creating engaging AI avatars. These companies are at the forefront of AI-human interaction, promising even more convincing and engaging digital personalities.

At the moment, the technology is still far from being able to generate a live video stream with human-like movements. To get there, AI needs to “understand” how to match movements and facial expressions with text and, especially, context. This could take another 5 to 10 years of development.

Another issue is the computational performance required to do this in real time. But this, too, seems achievable, which means we might soon have a clone that is visually indistinguishable from a person in simple conversations.

As the years pass, even these astonishing AI clones will likely lose their luster. People will grow tired of the predictability and limitations of these replicas, missing the unpredictability and quirks of genuine human interaction. It’s a paradoxical scenario where we yearn for authenticity but find ourselves in a world dominated by artificial beings.

Firefly image: a crowd of people, apocalypse of the digital world

While it is difficult to imagine at the moment, sooner or later there will come a time when people start to resent the omnipresence of AI-driven visualizations. They will begin to long for the days when human interaction was unadulterated by technology, when genuine emotions and imperfections defined our relationships. But by then, AI visualizations will have permeated every aspect of our lives, from work to entertainment, making escape nearly impossible.

Written by Max Shestov

Lead Web Developer. Husband of a wonderful wife, Entrepreneur, Dad