Q: 13
What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?
Options
Discussion
Option C is tempting if you focus only on the word itself, but D is the better answer: hallucination is about generating false information, not visualizing images.
Yeah, it's definitely D. Hallucination is when the model makes stuff up and acts like it's true.
Careful: if the scenario mentioned multimodal LLMs with image output, C might fit, but standard usage points to D.
C or D? If the question means making up incorrect information, D fits; but if the scenario sneaks in something about image generation, I'd worry about C. Not sure, since the wording feels a bit open.