Google’s DeepMind is testing new AI tools designed to act as personal life coaches, handling tasks ranging from giving life advice to offering tutoring tips. The company’s own AI safety experts have warned about the emotional attachments some people form with chatbots, making the tools’ use in such personal roles questionable.
DeepMind, Google’s AI research lab, is experimenting with new AI tools that could act as personal life coaches, according to the New York Times. The tools are designed to perform a variety of tasks, such as giving life advice and providing planning instructions.
Workers testing the tools are evaluating, among other things, the assistant’s ability to respond to personal questions about challenges in people’s lives. For instance, they are testing how the chatbot would respond to a user asking for advice on how to tell a close friend that they cannot attend the friend’s destination wedding due to financial constraints.
This initiative reflects Google’s drive to remain competitive in the AI field and its growing confidence in entrusting AI systems with sensitive tasks. However, Google’s own AI safety experts have previously cautioned against the risks of people forming strong emotional attachments to chatbots.
Google’s AI safety experts have raised concerns that users who rely too heavily on AI for life advice could suffer negative effects on their health, well-being, and sense of control. They have also warned that some users may mistakenly believe the technology is sentient. Google’s chatbot Bard, launched in March, is prohibited from offering medical, financial, or legal advice; if users express mental distress, it instead directs them to mental health resources.
As Breitbart News previously reported, Google has also been testing a tool for journalists that can generate news articles, rewrite them, and suggest headlines. The company has been promoting the software, named Genesis, to executives at the Times, the Washington Post, and News Corp, the parent company of the Wall Street Journal.
The company’s AI safety experts have also expressed concerns about the potential economic harms of generative AI, arguing that it could lead to the deskilling of creative writers. Other tools under testing can draft critiques of arguments, explain graphs, and generate quizzes, word puzzles, and number puzzles.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan