Yes, but only in rare cases. MAXQDA’s AI features primarily rely on your provided data as input, rather than using large language models as a knowledge database. This significantly reduces the risk of “hallucinations”—that is, generating incorrect information. In fact, users have expressed surprise at how reliably MAXQDA’s AI Assist avoids hallucinations:
“Given many previous examples using AI chatbots, such as Bard, ChatGPT, etc., we were surprised to find no evidence of hallucination with AI Assist. Where we were expecting to see the AI generated false content, in our validation we found no instances in which AI Assist hallucinated.” (Loxton, 2024, p. 30)
However, hallucinations may occur when a query extends beyond the analyzed data, for example when you ask about theories that were not provided or request literature references. Similarly, when AI Assist suggests subcodes for a code with very little coded data, the suggestions may go beyond what the data actually supports.
In general, providing well-defined input and phrasing your queries clearly minimizes the likelihood of incorrect output.