🤖 Can Modern AI Pass The Turing Test?!
How will teachers and trainers know if it’s a human or AI they're talking to?
The Turing Test really puts humans to the test, even though it is usually framed in reverse: can a person tell whether they are talking to (or reading the writing of) another human or a machine? Over the years, many early chatbots, starting in the 1960s (ELIZA, PARRY, etc.), failed to win the Loebner Prize, the best-known Turing Test competition. But later models like Jabberwacky and Mitsuku started to succeed, beating humans at their own game of discernment (more on that in an upcoming edition). So… can modern AI pass the Turing Test?
The short answer: YES, easily. But it isn’t winning just on grammar and syntax pattern-matching. The important twist in the UC San Diego study (referenced in “Did AI just pass the Turing Test?” at AEI) is that it clearly identifies what constitutes human-level intelligence, at least according to other humans: it isn’t mastery of advanced calculus or another challenging technical field, or even a particular manner of speaking. Instead, what stands out about the most advanced AI models is their social-emotional persuasiveness. For an AI to pass (that is, to fool a human), it has to effectively imitate the subtleties of human conversation.
But what does this mean for education and work? How will a teacher grading homework or lecturing online students know whether the “person” on the other end of the screen is a human or a bot? I participated in a mandatory HR training today for supervisors at my company. I really did participate the entire time, but everyone’s cameras were off, literally the entire “class.” In the future, will someone be able to send a bot to answer the polls and Zoom chat questions that served as our proof of training and compliance? Or (even worse, I think) will this force trainers to mandate on-camera participation? And how do 2-D and 3-D avatars “based on our unique personalities” fit into this picture?
And forget about AI detectors, because every time someone builds one that purports to identify AI, it either: 1) falsely flags human writing as AI-written, 2) fails to identify AI writing as AI-written, or 3) quickly becomes obsolete because the newest AI on the block can outwit it.
This brave (alarming?) new world will have implications for schooling, training, and working, in every industry. There will be knock-on effects soon and perhaps bigger changes in the way we all learn and earn. But don’t take my word for it. Ask ChatGPT or Claude or Perplexity (see below) what the effects of AI that can pass the Turing Test will be, and limit the question to education or whatever field interests you most. Paste the same query into different tools and compare and contrast the answers. Ask it to have a robust debate with you, if you still need convincing that this is something we have to figure out ASAP…
My dialogue with Perplexity (free version)
🎓 Learning in the News:
Ahead of GPT-5 release, another test shows humans cannot distinguish between humans and AI (Tech Radar)
New Jersey unveils new AI guidelines for schools (NJ Spotlight News)
Teacher Voice: My students are bombarded by negativity about AI and now they are afraid of it (Hechinger Report)
AI can predict high school dropouts many years in advance (Phys.org)
There are many other orgs and tools that predict students’ progress and risk factors while in high school, but this model is one of the most successful at using early-years schooling to predict later outcomes
💼 Workforce RoundUp:
HALF of workers globally expect AI to actually boost their salaries and increase their job protection (Fortune)
AI copilots for lawyers will shape the future of the industry, says a recent study (Fortune)
Spoiler alert: though these tools are already widely used by lawyers and paralegals, the AI may currently generate more errors and hallucinations than originally estimated (or admitted…)
OpenAI and Google’s DeepMind workers warn of risks with AI (The Guardian)
The impact of AI on the tech and cybersecurity industry (KD Nuggets)
🍿 Tools, Tips, & Terms:
🛠️ Tool of the Week:
Perplexity + Make = a personal AI research assistant. I love Perplexity, but adding a no-code tool like Make (or Zapier, etc.) helps supercharge an already robust platform. Check out this tutorial video if you’ve ever wanted a personal research assistant of your own.
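If you’d rather skip the no-code route, Perplexity also exposes an API you can script directly. Here’s a minimal Python sketch: it assumes Perplexity’s OpenAI-compatible chat-completions endpoint, and the model name ("sonar") is an assumption to verify against their current docs.

```python
# Minimal sketch of a scripted "research assistant" on Perplexity's
# chat-completions API (OpenAI-compatible). The endpoint and model name
# are assumptions -- check Perplexity's current API docs before relying on them.
import os
import requests

def ask_perplexity(question: str) -> str:
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",  # assumed model name; substitute your plan's model
            "messages": [
                {"role": "system", "content": "You are a concise research assistant."},
                {"role": "user", "content": question},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask_perplexity("Summarize this week's AI-in-education news."))
```

A tool like Make simply wraps this same kind of call in a visual workflow, so you can trigger it on a schedule and pipe the answers into email or a spreadsheet.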
📖 Term of the Week:
RAG stands for Retrieval-Augmented Generation. It combines searching a knowledge base for relevant information (retrieval) with using that information to generate accurate responses (generation). This differs from a plain LLM (large language model), which generates text based only on the data it was trained on in the past. Imagine a librarian who first finds the best books on a topic (including new releases!) before giving you a well-informed answer, ensuring fewer mistakes… and in the case of AI, fewer hallucinations!
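To make the two steps concrete, here is a toy sketch in Python. The document list, the word-overlap scoring, and the llm_generate() stub are all illustrative assumptions (a real system would use a vector database and an actual model API), but the retrieve-then-generate flow is the same:

```python
# Toy RAG sketch: retrieve relevant snippets, then ground the prompt in them.
# Everything here is an illustrative stand-in, not any vendor's real API.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for real embedding/vector search) and return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def llm_generate(prompt: str) -> str:
    """Placeholder for a call to any LLM; swap in your provider's client."""
    return f"[model response grounded in the prompt below]\n{prompt}"

docs = [
    "The Loebner Prize was an annual Turing Test competition from 1991 to 2019.",
    "Retrieval-Augmented Generation grounds model output in retrieved documents.",
    "Mitsuku (now Kuki) won the Loebner Prize five times.",
]

query = "What is Retrieval-Augmented Generation?"
context = "\n".join(retrieve(query, docs))  # retrieval step
prompt = f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"
print(llm_generate(prompt))                 # generation step
```

The key design point: the model is told to answer from the retrieved context, which is what keeps it anchored to current, verifiable sources instead of stale training data.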
🤖 GenAI Tip of the Week:
Use Personas: Start a new chat or thread and have your AI copilot or tool take on a persona, so that it draws on the relevant expert knowledge in its training data. Follow the Persona + Purpose format. Example prompt: “You are a renowned marketing executive with 15 years of experience marketing programs and products to the higher education market in the U.S. and Latin America [persona]. Create a go-to-market plan for [insert product / program idea] with the following audience, goals, and constraints [purpose].”
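If you use these tools through an API rather than a chat window, the persona typically goes in the system message and the purpose in the user message. A minimal sketch with the OpenAI Python client follows (the model name is illustrative; any chat-style API uses the same pattern):

```python
# Persona + Purpose as system/user messages. Assumes the openai package is
# installed and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever chat model you have access to
    messages=[
        {   # Persona: who the model should be
            "role": "system",
            "content": (
                "You are a renowned marketing executive with 15 years of "
                "experience marketing programs and products to the higher "
                "education market in the U.S. and Latin America."
            ),
        },
        {   # Purpose: what you want it to produce
            "role": "user",
            "content": (
                "Create a go-to-market plan for [insert product / program idea] "
                "with the following audience, goals, and constraints [purpose]."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```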