The Digital Afterlife Industry Is Burgeoning—And It’s Worrying A.I. Ethicists

When Devan Leos’ uncle, Darren Harris, a commander and spokesperson for the Santa Clarita Police Department, took his own life in late 2023, it was a huge blow. “It was just so unexpected because there were just no signs,” Leos told Observer. “We were all just unprepared. We wanted answers. We wanted closure. We just wanted to tell him how much we loved him and cared about him.” In the midst of his grief, Leos turned to ElevenLabs, a generative voice A.I. platform, for solace and a way to process what had happened. He fed the A.I. some of his uncle’s videos from his time at the sheriff’s department and had a brief conversation with the resulting voice bot.

“I created some messages just so that I could hear his voice and hear him say things that I would otherwise never get to hear him say and hearing my uncle’s voice through A.I. kind of made me feel like he was still alive. And it kind of provided this strange but somewhat comforting connection to this person. It was just so weird, but at the same time, it was mixed for me because it was a bit therapeutic,” he said.

“Deadbots” and “griefbots” are trained on the language patterns and digital footprints of those who are no longer with us. Platforms like Seance AI, StoryFile, Replika, HereAfter and others use videos, voice recordings and written content to simulate real people, both living and dead, and let users hold conversations with those simulations. The practice is increasingly common, and it is undeniably uncanny.
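To make the mechanics concrete, the sketch below shows roughly what the voice-cloning step looks like against ElevenLabs’ public REST API, the same service Leos used: upload a clean recording of the speaker, receive a voice ID, then synthesize new speech in that voice. The endpoint paths, field names and model ID follow ElevenLabs’ published documentation but may change; the API key and file names are placeholders, and this is an illustrative sketch, not production code.

```python
# Minimal sketch of instant voice cloning via the ElevenLabs REST API.
# Paths and fields follow the public docs at the time of writing; the
# API key and file names below are placeholders, not real values.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder
BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": API_KEY}

# 1. Create a cloned voice from a recording of the speaker.
with open("uncle_interview.mp3", "rb") as f:  # hypothetical sample file
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "Memorial voice"},
        files={"files": ("uncle_interview.mp3", f, "audio/mpeg")},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]

# 2. Generate new speech in the cloned voice.
tts = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={"text": "I love you, and I'm proud of you.",
          "model_id": "eleven_multilingual_v2"},
)
tts.raise_for_status()
with open("message.mp3", "wb") as out:
    out.write(tts.content)  # MP3 audio in the cloned voice
```

The striking part, and the part that worries ethicists, is how little this requires: a few minutes of audio and a couple of HTTP calls stand between a family’s home videos and a convincing synthetic voice, whether the subject is a private citizen or a public figure.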

For example, movie producers have digitally resurrected Carrie Fisher and James Dean, while other innovators have tried to recreate the mind of Ruth Bader Ginsburg. OpenAI recently had to pull its voice bot, Sky, because it sounded eerily similar to a very much alive Scarlett Johansson. In a recent paper out of the University of Cambridge, A.I. ethicists worry that, without proper governance and data privacy protections, the information that people upload in grief could be used to ‘haunt’ the living in the form of ads pushing everything from meal delivery to A.I. subscription services. The paper details several possible scenarios, including a hypothetical one in which a woman asks a griefbot for her late mother’s pasta recipe and is instead served an ad to order the dish from a local restaurant. All of this raises serious ethical questions, from the rights of the dead to the data and privacy concerns of the living, and the policies meant to protect users are all over the map.

A burgeoning yet unregulated industry

Few laws or regulations limit the use of the data that goes into training large language models, or LLMs, and many find that especially worrying when it comes to grief tech and “ghost” bots, A.I. bots created in the likeness of those who have passed. Only California and New York have laws around what’s known as post-mortem publicity, and those laws mostly protect celebrities. While data privacy laws in 15 states have begun to evolve to meet the rapidly changing A.I. landscape, the coverage is still scattershot.

Most A.I. companies follow standard data collection practices, though some are cagey about the details. OpenAI, for example, explicitly says that ChatGPT does not share user data for marketing purposes, but it does share data with “trusted third parties.” StoryFile, which declined to be interviewed for this article, discloses how user data is used but does not mention marketing or re-marketing in its policy. Smaller operations like Seance AI, however, don’t make their privacy and data collection practices clear beyond a small blurb in the FAQs at the bottom of the homepage. We had to reach out to Seance AI directly to locate its more detailed privacy policy, which is not linked on the page.

On top of this, the grief tech business is growing at a rapid pace, though reliable figures on the industry’s size are hard to find; some estimates put its global value at around $123 billion as of 2023. Death “technopreneurship,” a term coined by the A.I. ethics researcher Tamara Kneese to describe the industry, saw a particularly meteoric rise during the Covid-19 pandemic, as the world grappled with more than seven million sudden losses. It has also become more popular as young people consider what their own digital afterlives might look like.

An ethical dilemma

Professor Emily M. Bender is a computational linguist at the University of Washington whose work has raised awareness of ethical issues in natural language processing and A.I. She says the digital afterlife, and afterlife A.I. in particular, is fraught, noting two vulnerabilities: how A.I. companies handle dignity as a human right after someone has passed, and how living people may interact with ghost bots that can persist long after death. “If this is a choice made by the person who is dying, they’re setting something in motion that they’re not going to be able to do anything about,” she told Observer.

Large companies are thinking about the ethical implications of A.I. in the grief tech space, too. Kristi Boyd, a tech ethicist and senior trustworthy A.I. specialist at the analytics platform SAS, notes that afterlife A.I. companies quickly gather large amounts of highly sensitive information to create an avatar, then keep building on the user’s interactions. “Think about how you talk to your loved ones,” Boyd told Observer, “sharing secrets and intimate thoughts that are not meant for anyone else’s eyes. You probably don’t intend for that information to be used in external applications or to generate targeted advertising. So, for all these companies, there is a significant need to ensure that privacy policies that protect the human agency of the deceased and their family members remain.”

Dr. Jodi Halpern is a bioethics and medical humanities professor at the University of California, Berkeley, the co-director of the Kavli Center for Ethics, Science, and the Public, and a leading voice in A.I. ethics and mental health who has spoken to world leaders at the World Economic Forum in Davos about the ethics of A.I. She combines the study of psychiatry, philosophy, and affective forecasting to examine how technologies like A.I. transform relationships and society in unexpected ways. She says that grief is a key component of what makes us who we are and that, while these technologies could serve as a way to evoke and work through grief and assist the healing process, they should be regulated. “I do not think that this should be in the hands of private companies and unregulated,” she told Observer. “I think that this should be a medical product. I think it should be FDA-regulated.”

Yet regulations frequently trail the real-world experience of people like Leos who suffer great losses. Just 25, Leos is no stranger to A.I.: a former child actor, he is the founder of Undetectable AI, a platform that helps users determine whether text is A.I.- or human-generated. Before starting his company, he said, he had considered moving into the afterlife A.I. space but decided against it after his uncle’s death because he didn’t feel right about the industry’s lack of ethics. He still has privacy concerns about the data he fed ElevenLabs to recreate his uncle’s voice.
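One widely used signal in A.I.-text detection generally is statistical predictability: passages a language model finds too easy to predict are weak evidence of machine generation. The toy sketch below, which is purely illustrative and not Undetectable AI’s actual product or method, scores a text’s perplexity under the small open-source GPT-2 model; real detectors combine many such signals with calibrated thresholds.

```python
# Toy perplexity scorer: one common (and fallible) A.I.-text detection
# signal. Purely illustrative; not Undetectable AI's method.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # loss is the mean cross-entropy over next-token predictions
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

# Lower perplexity = more predictable text, which detectors treat as
# weak evidence of machine generation; human prose tends to score higher.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```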

Only recently did Leos share the recordings with his mother, who was deeply affected by her brother’s death. He said they helped her, and that she wanted to share them with the rest of the family as a way to continue easing the pain of loss.

“Is using A.I. to clone the voices of our lost loved ones good or bad?” Leos asked. “In my case, it was therapeutic and cathartic. I do have concerns that it could make letting loved ones go and moving on harder for some people. In my situation, this was my uncle, and even though my mother enjoyed hearing the A.I. version of Uncle D, I waited months before I showed her, long after she started her grief counseling classes.”
