What is 'hallucination' in the context of LLMs?
- A: The model generating extremely long outputs
- B: The model producing plausible-sounding but factually incorrect or fabricated information
- C: The model refusing to answer sensitive questions
- D: The model outputting garbled text due to tokenization errors