
AI in Medicine: Can a Chatbot Beat a Human in a Cognitive Test? – News Directory 3
Roy Dayan
In a recent study conducted in Israel, neurologists tested several advanced AI chatbots to see how well they could perform on cognitive tests designed for humans. The research highlighted some surprising shortcomings in these AI systems, particularly in areas that require human-like understanding and empathy. The study, published in a medical journal, aimed to shed light on the limitations of artificial intelligence in healthcare settings.
The researchers used a well-known cognitive screening tool called the Montreal Cognitive Assessment (MoCA), which is widely used to detect cognitive decline in people. They evaluated five leading AI models, including ChatGPT-4 and Google's Gemini, on a range of cognitive tasks. While the models performed well on memory and attention tests, they struggled with tasks requiring visual and spatial reasoning, as well as with interpreting emotional context.
One particularly telling result was the chatbots' failure to recognize a potentially dangerous situation involving a child. This lack of awareness mirrors cognitive impairments seen in conditions like frontotemporal dementia, underscoring that AI does not perceive the world the way humans do. The findings suggest that while AI can assist in clinical decision-making, it cannot replace the essential human qualities of empathy and understanding in medical practice.