Persona bots, which can be found on a number of platforms, allow users to have real-time conversations with bots purporting to be historical figures, world leaders, and even fictional characters. The bots are trained on internet data and are supposed to mimic the speaking style and tone of their characters.
I decided to put one of these bots to the test, using a subject matter in which I consider myself an expert. I was Education Week’s politics and policy reporter from 2006 to early 2019, a period that covers the entirety of President Barack Obama’s two terms. In that role, I did in-depth reporting on Obama’s K-12 policies. I never interviewed the president directly, but I spoke multiple times with both of his education secretaries, Arne Duncan and John King.
Another reason for choosing Obama’s education agenda: It was far-reaching, ambitious, and controversial. There were complexities, nuances, subtle shifts in position. Would a chatbot be able to capture them? The answer: No, or at least not very well. (Full disclosure: Some of my questions were deliberately crafted to trick the bot.)
As you’ll see, the bot got a few facts right. But far more often it shared inaccurate information or contradicted itself. Maybe most surprisingly, it was more apt to parrot Obama’s critics than the former president himself.
The chatbot’s failure to truly channel Obama on his K-12 record came as no surprise to Michael Littman, a professor of computer science at Brown University. Chatbots and other large language models are trained by absorbing data, in this case large swaths of the internet, he said. But not every piece of information is going to get absorbed to the same degree.
“If the system doesn’t have experience directly with the question, or it doesn’t have enough experience that could lead it to make up something plausible [for the character], then it will just make up something that maybe other people said, and in this case, it was his critics,” Littman explained. “[The bot] didn’t have anything else to draw on.”
It’s also not particularly unusual that the bot contradicted itself so often, Littman said.
“One of the hardest things to do when making things up is remain self-consistent because you can easily make a statement, then later, you want to say something that doesn’t really agree with that statement,” he explained. “And then you’re stuck. The bots are always making stuff up, so they get into that situation a lot. Sometimes they don’t even bother noticing. They just continue forward.”
The bottom line: The conversation is a good example of why character bots are useful for teaching about AI, but not very good, at least not yet, at helping students find accurate information. Also, if you’re teaching a course on education policy history, definitely steer clear of referring students to this chat!