U.S. Surgeon General Dr. Vivek Murthy released a report this month addressing the nation’s loneliness epidemic. Loneliness, he wrote, is associated with a “greater risk of cardiovascular disease, dementia, stroke, depression, anxiety, and premature death.” He also noted that the mortality impact of social isolation is similar to that “caused by smoking up to 15 cigarettes a day and even greater than that associated with obesity and physical inactivity.”
Undoubtedly, this epidemic is a palpable crisis across all layers of society, from schools and workplaces to households.
Fortunately, increased awareness around mental health has spurred significant innovation and investment in new remedies and treatment modalities. One novel proposal is to use artificial intelligence to help resolve the ongoing crisis. However, while AI may provide significant benefits, innovators and policymakers would be wise to embrace it cautiously, given its inherent risks.
With the advent of generative AI, conversational AI and natural language processing, the idea of using artificial intelligence systems to provide human companionship has become mainstream.
Google Cloud, which is at the forefront of developing scalable AI solutions, offers an in-depth explanation of what conversational AI really is: natural language processing and machine learning algorithms are used to train systems on large amounts of data, including text and speech. With enough training, the systems can understand and process human language and respond accordingly. Because these systems continually learn from new interactions and data, their response quality naturally improves over time.
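To make that loop concrete, here is a minimal sketch in plain Python. The toy string-matching “model,” the seeded exchanges and every name in it are illustrative stand-ins for the large neural language models Google Cloud describes; it shows only the dynamic above: match the input to known exchanges, and treat each new interaction as additional training data.

```python
# Toy illustration of the conversational-AI feedback loop: respond by
# matching against previously seen exchanges, and "improve" as more
# interactions are logged. Real systems use large neural language models.
from difflib import SequenceMatcher

class ToyConversationalAI:
    def __init__(self):
        # Seed "training data": (user utterance, system response) pairs.
        self.exchanges = [
            ("hello", "Hi there! How are you feeling today?"),
            ("i feel lonely", "I'm sorry to hear that. Want to talk about it?"),
        ]

    def respond(self, utterance: str) -> str:
        # Find the most similar utterance seen so far and reuse its reply.
        best = max(
            self.exchanges,
            key=lambda pair: SequenceMatcher(None, utterance.lower(), pair[0]).ratio(),
        )
        return best[1]

    def learn(self, utterance: str, reply: str) -> None:
        # Continued interactions become new training data, so response
        # quality improves over time -- the dynamic noted above.
        self.exchanges.append((utterance.lower(), reply))

bot = ToyConversationalAI()
print(bot.respond("Hello!"))          # matches the seeded greeting
bot.learn("good morning", "Good morning! What's on your mind?")
print(bot.respond("good morning"))    # now uses the learned exchange
```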
This means that with enough data, training and interactions, it is plausible that these systems will not only replicate human language, but eventually draw on billions of data points and evidence-based guidelines to provide medical advice and therapy.
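As a hypothetical illustration of that grounding idea (the guideline snippets and function below are invented placeholders, not real medical content), such a system could be required to cite an evidence-based guideline for every piece of advice and to decline when it finds no supporting evidence:

```python
# Hypothetical sketch: only offer health advice that can be grounded in
# an evidence-based guideline; otherwise defer to a human clinician.
# The guideline text here is a placeholder, not real medical content.
GUIDELINES = {
    "sleep hygiene": "Keep a consistent sleep schedule and limit screens before bed.",
    "physical activity": "Adults should aim for regular moderate exercise most days.",
}

def grounded_advice(question: str) -> str:
    for topic, guidance in GUIDELINES.items():
        if topic in question.lower():
            # Every answer cites the guideline it was drawn from.
            return f"{guidance} (Source: '{topic}' guideline)"
    # No supporting evidence found: decline rather than guess.
    return "I don't have guidance on that. Please consult a healthcare professional."

print(grounded_advice("Any tips on sleep hygiene?"))
print(grounded_advice("Should I change my medication?"))
```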
Indeed, companies such as Google, Amazon and Microsoft are investing billions of dollars in this very technology, recognizing that they are just steps away from convincingly replicating human language and conversation. Once these companies perfect it, the potential is unlimited: everything from customer service to companionship and human relationships could become AI-driven.
In fact, trial systems already exist. Take, for example, Pi, a personal artificial intelligence system developed by the company Inflection AI. Pi “was created to give people a new way to express themselves, share their curiosities, explore new ideas and experience a trusted personal AI.”
Mustafa Suleyman, CEO and co-founder of Inflection AI, explains: “Pi is a new kind of AI, one that isn’t just smart but also has good EQ. We think of Pi as a digital companion on hand whenever you want to learn something new, when you need a sounding board to talk through a tricky moment in your day, or just pass the time with a curious and kind counterpart.”
Suleyman’s co-founder is Reid Hoffman, who also co-founded the professional networking company LinkedIn. Inflection AI has raised hundreds of millions of dollars in seed funding to support its technology.
However, this incredible technology brings with it many concerns. While artificial intelligence certainly has the capability to address access inequities, deliver healthcare services conveniently and even provide companionship to those who most need it, it must be developed with guardrails in place, for numerous reasons.
For one, in a realm as sensitive as mental health, patient privacy and data security must be of utmost importance. Using AI in this capacity means collecting a significant amount of sensitive patient information. Developers must ensure that this data is never compromised and that patient privacy is always the top priority, especially amid growing cybersecurity threats.
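As one concrete example of such a safeguard, conversation transcripts can be encrypted at rest so that a database breach exposes only ciphertext. The sketch below uses the third-party Python cryptography package; the transcript text is illustrative, and a production system would load keys from a managed secrets store rather than generate them in code:

```python
# Encrypt conversation transcripts at rest so a database breach does not
# expose plaintext patient data. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: load from a key-management service
cipher = Fernet(key)

transcript = "Patient reports feeling isolated since March."
token = cipher.encrypt(transcript.encode())   # what actually gets stored
print(token)                                  # unreadable ciphertext
print(cipher.decrypt(token).decode())         # recoverable only with the key
```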
Moreover, perhaps the most important concern is an existential one: How far should humanity go with this? While the benefits of AI are certainly numerous, innovators must be mindful of these systems’ limitations. Notably, a system is only as good as the models and datasets it learns from, which means that in the wrong hands it could easily provide incorrect or dangerous recommendations to vulnerable populations. Hence, corporations must enforce strict practices around responsible development.
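One simplified way to picture such a guardrail: screen every user message and draft reply before it is delivered, and escalate crisis language to a human. The keyword lists and function below are hypothetical stand-ins; real deployments use trained safety classifiers rather than keyword matching:

```python
# Hypothetical guardrail: check each exchange before the AI's reply
# reaches the user, routing crisis indicators to a human and blocking
# unvetted medical advice. Keyword matching is only a stand-in here.
CRISIS_TERMS = ("suicide", "self-harm", "hurt myself")
BLOCKED_ADVICE = ("stop taking", "double your dose")

def guardrail(user_message: str, draft_reply: str) -> str:
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        return "ESCALATE: route to a human crisis counselor immediately."
    if any(phrase in draft_reply.lower() for phrase in BLOCKED_ADVICE):
        return "BLOCK: reply contains medication advice; require clinician review."
    return draft_reply

print(guardrail("I can't sleep lately", "Try keeping a regular bedtime."))
print(guardrail("I've been thinking about self-harm", "Try keeping a regular bedtime."))
```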
Finally, as a general social commentary, combating mental health issues and a loneliness epidemic with artificial intelligence systems sets a dangerous precedent. No system can (yet) replicate the intricacies of human nature, interaction, emotion and feeling. Healthcare leaders, regulators and innovators must remember this underlying tenet and should prioritize viable, sustainable measures to resolve the mental health crisis, such as training more mental health professionals and increasing patient access to care.
Ultimately, whatever the solution may be, the time to act is now—before this epidemic becomes too catastrophic to manage.