When artificial intelligence models say they are unsure of their answers, people become warier of their output and are ultimately more likely to seek out accurate information elsewhere. But since no AI model is currently capable of judging its own accuracy, some researchers question whether making AIs express doubt is a good idea.
While the large language models (LLMs) behind chatbots like ChatGPT create impressively believable outputs, it has been shown time and time again that they can simply make up…