From the article:
This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.
Another way of looking at it: “AI successfully used to manipulate people’s opinions on certain topics.” If it can persuade them to stop believing conspiracy theories, AI can also be used to make people believe conspiracy theories.
“Great! Billy doesn’t believe 9/11 was an inside job, but now the AI made him believe Bush was actually president in 1942 and that Obama was never president.”
In all seriousness, I think an “unbiased” AI might be one of the few ways to reach people about this stuff, because any Joe Schmoe who tries to confront a conspiracy is just dismissed with “you’re believing what they want you to believe!”
Given the inherent biases in any LLM’s training data, the hallucination problem you’ve brought up, and the fact that the cost of running an LLM at scale is prohibitive for anyone but private–state partnerships, do you think this will allay conspiracists’ valid concerns about the centralization of information access, à la the decline in Google search result quality over the past decade and a half?
I think those people might not, but I was once a “conspiracy nut,” had a circle of friends who were as well, and know that for a lot of those kinds of people YouTube is the majority of the “research” they do. For those people I think this could work as long as it’s not hallucinating and can point to proper sources.
Let me guess, the good news is that conspiracism can be cured but the bad news is that LLMs are able to shape human beliefs. I’ll go read now and edit if I was pleasantly incorrect.
Edit: They didn’t test the model’s ability to inculcate new conspiracies; obviously that’d be a fun day at the office for the ethics review board. But I bet a malign LLM could do it.
A piece of paper dropped on the ground can “shape human beliefs.” Propaganda leaflets are literally a tool of warfare.
The news here is that conspiratorial thinking can be relieved at all.
“AI is just a tool” is a bit naïve. The power and scope of this tool give it devastating potential. It’s a good idea to be concerned and talk about it.