Artificial intelligence systems are becoming increasingly sophisticated, capable of generating text that can at times be indistinguishable from writing produced by humans. However, these powerful systems are not infallible. One recurring issue is known as "AI hallucination," in which a model produces output that is factually false yet presented fluently and confidently. This can occur when a model fills gaps in its knowledge with plausible-sounding but fabricated content rather than signaling uncertainty.