🧬 AI Outperforms Humans in Diagnoses: What This Means for Education

A recent study covered in the NY Times revealed a startling insight: doctors, whether working with or without ChatGPT, correctly diagnosed illnesses 75% of the time, while ChatGPT alone achieved a 90% accuracy rate. The culprit for the gap? Human bias. Doctors often stuck with their initial diagnoses even when ChatGPT presented evidence suggesting otherwise. The study also highlighted another factor: doctors used ChatGPT like a basic search engine instead of leveraging it as an advanced large language model (LLM). This points to a significant lack of training in effective prompt writing and LLM usage, skills that are becoming increasingly essential in AI-integrated fields.

This revelation doesn’t just impact healthcare; it also has profound implications for education. As AI tools like ChatGPT enter classrooms and counseling offices, biases from educators and counselors could similarly influence how AI recommendations are interpreted. In career guidance, for example, teachers and counselors may unknowingly steer students toward traditional or familiar career paths even when AI data suggests better-aligned options based on student interests and aptitudes. This is an important consideration for ensuring AI-powered systems support, rather than overshadow, human expertise in education. At the same time, a small study like this should not be used to argue for removing humans from the loop.

As a doctoral student at the University of Illinois Urbana-Champaign, I focus my dissertation on using AI-powered chatbots for career guidance. These findings underscore the importance of training educators and counselors to work effectively with AI tools rather than simply handing them new tools and hoping for the best. Just as medical professionals must learn to trust (but verify!) and properly interact with vetted AI systems, educators could benefit from similar skills development, both to avoid misusing AI as a simple information-retrieval tool and to avoid missing the opportunity to counteract human biases in our work. We hear plenty about AI bias in the news, but this study highlights the dangers of human bias. By building familiarity with AI’s potential and limitations, we can create a future where technology amplifies human decision-making rather than merely exposing its flaws.