Is AI actually you?
This seems like the right time to ask, because the question may not matter for much longer. The way things are heading, the blurring of identity reality and identity fiction may soon become so extreme that we simply stop asking what authentic personhood is.
Case in point: a story out of Paris in July outlines not a dystopian future but a troubling present reality. The piece examines computer-generated YouTube storytime videos. The genre is pretty much what it sounds like:
"Storytime videos are a micro-genre of YouTube content. They are informal and intimate, one person facing a camera in their living room or bedroom, recounting something that happened to them. They span a broad spectrum of topics, from the mundane (I found my dream purse while shopping) to the insane (I survived a plane crash) or even traumatic (I watched my friend die)."
But the piece goes on to discuss fake (or are they?) storytime videos produced by Miquela, a brand-new type of social media influencer.
You see, Miquela isn’t human; she is a computer-generated virtual influencer. The Paris piece recounts Miquela’s story of a sexual assault during a rideshare to the beach. Because Miquela is a product of artificial intelligence, her assault was a fiction -- but was it fully so? If it reflects an experience many real people have had, is it also a mirror on and of reality?
Dr. Stuart Watt, who holds a Ph.D. in Artificial Intelligence and has co-founded two AI companies, argues that AI is indeed "one giant (black) mirror onto our society." Taken in context, this is both AI’s greatest strength and its most glaring weakness.
This makes sense on multiple levels. If there is even a chance that AI is us in any conceivable way, then it will mirror our weaknesses and amplify the worst elements of who we are, both individually and as a society.
"Our biases, our prejudices, our lack of imagination of other possibilities, all of them are immediately absorbed. And it doesn’t hugely matter what actual technology you use. The big language models are probably the clearest -- the article that led to Timnit Gebru’s firing from Google was on exactly this; she was very rightly critical of how much of our evil side gets absorbed into these models, not just as individuals, but as a society," suggests Watt.
Dr. Watt adds:
"But in the same vein, this highlights one of AI’s greatest strengths: the opportunity to focus this intelligence inward to help us identify our own flaws. In both cases I think we tend to underestimate how -centric we are, both egocentric in tending to think of the world from our own point of view, and anthropocentric too -- the human point of view is what matters. We need our mirrors to see ourselves as we truly are."
This is all going to be confusing for some time to come. While each of us has a sense of who we are and what defines us, we struggle to separate the parts of our identity that are naturally ours from those that are artificial. The infusion of technology into this mix makes who we are a matter of debate -- ideally a public one, rather than something left in the hands of tech giants whose acts and omissions can decide it for us.
"You might have a slightly deeper sense of 'is it you' in mind -- is there some kind of a conscious entity there that is, in some sense, you! That’s way more debatable. Personally, I don’t think so. We are very much made up of our history, and it’s that history (including evolution as well as childhood and formative experiences) that defines who we are. An artificial construct will have a different history, so it’ll be different. And that’s fine," concludes Watt.
While the difference may, in some ways, be fine, it is amplified by the fact that all of this is happening in an environment where there is no shortage of AI hype.
While the Googlegentsia are now sufficiently emboldened to publicly opine that AI will be more impactful than fire (we can assume the wheel, insulin, and the discovery that the Earth is round are included), in practice, wherever AI and our personhood come into close proximity, the results remain full of friction.
In Canada, the federal government quietly tested facial recognition on unsuspecting travelers at one of the world’s busiest airports, Toronto’s YYZ -- no permission asked and absolutely none given. Facial recognition dramatically increases the risk of false positives for marginalized people, with dire consequences. And it’s everywhere. This wasn’t only an AI mess; it was a privacy mess as well.
Josh Geist, a Pittsburgh lawyer specializing in personal injury and liability issues, reminds us that there is an important legal consideration to the role AI can play in defining who we are:
"In a world where AI plays an increasing role in our daily lives, privacy concerns are at an all-time high. As the line between the reality we know and a reality created by artificial intelligence becomes less clearly defined, privacy will need to be front of mind."
That privacy mandate only becomes more important when who we are is based, even in small part, on little fictions. I spoke with Igor Bonifacic, a journalist with Engadget, who wrote about the use of AI in Roadrunner, the new Anthony Bourdain documentary, which relies in part on "deepfaked" audio of Bourdain’s voice.
"There were a few sentences that Tony wrote that he never spoke aloud," the film’s director explained. "With the blessing of his estate and literary agent we used AI technology. It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive." Representatives for the project claim the AI technology is used for less than 60 seconds of the film.
Bonifacic observes:
"I find it interesting that the director says he wanted Bourdain's quotes to 'come alive' for the ones that weren't recorded. If anything, I feel like the fact that an AI program generates his voice there makes them far less alive."
Bonifacic emphasized that while there is a notion out there that Bourdain would have been fine with the role of AI in the film, this is arguable at best. Ottavia Bourdain, his widow, disputes it: "I certainly was NOT the one who said Tony would have been cool with that."
This is a position that resonates with many of us. It reminds us that the A in AI is "Artificial." Maybe, ultimately, the artificial technological aspects of our own selves will be part of a larger discussion on digital rights. But there’s a greater chance that these discussions either won’t happen at all or won’t be inclusive, democratic, or representative of who we are without AI and who we want to be with it.
Image credit: Sergey Tarasov / Shutterstock
Aron Solomon, JD, is the Head of Strategy for Esquire Digital. He has taught entrepreneurship at McGill University and the University of Pennsylvania, and was the founder of LegalX, the world’s first legal technology accelerator. Aron’s work has been featured in TechCrunch, Fortune, Venture Beat, The Independent, TechCrunch Japan, Yahoo!, ABA Journal, Law.com, The Boston Globe, The Hill, and many other popular publications, including Today’s Esquire.