With long waiting lists and rising costs in healthcare systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. About 1 in 6 American adults already use chatbots for health advice at least monthly, according to one recent survey.
But placing too much confidence in chatbots' outputs can be risky, in part because people struggle to know what information to give chatbots to get the best possible health recommendations, according to a recent Oxford-led study.
"The study revealed a two-way communication breakdown," Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told TechCrunch. "Those using [chatbots] didn't make better decisions than participants who relied on traditional methods like online searches or their own judgment."
For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. Participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to figure out possible courses of action (e.g., seeing a doctor or going to the hospital).
Participants used the default AI model powering ChatGPT, GPT-4o, as well as Cohere's Command R+ and Meta's Llama 3, which once underpinned the company's Meta AI assistant. According to the authors, the chatbots not only made participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.
Mahdi said that participants often omitted key details when querying the chatbots or received answers that were difficult to interpret.
"[T]he responses they received [from the chatbots] frequently combined good and poor recommendations," he added. "Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users."
The findings come as tech companies increasingly push AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can dispense advice related to exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for "social determinants of health." And Microsoft is helping build AI to triage messages sent to care providers by patients.
But as TechCrunch has previously reported, both professionals and patients are mixed on whether AI is ready for higher-risk health applications. The American Medical Association recommends against physicians using chatbots like ChatGPT for assistance with clinical decisions, and major AI companies, including OpenAI, warn against making diagnoses based on their chatbots' outputs.
"We would recommend relying on trusted sources of information for healthcare decisions," Mahdi said. "Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users. Like clinical trials for new medications, [chatbot] systems should be tested in the real world before being deployed."
