Feb 9, 2024
Yes, you make a valid point that LLMs are currently far from perfect and may not be capable of performing certain tasks. That's an important consideration. In many of the uses I am seeing, explainability of the reasoning is a key requirement: if I ask the machine to assess whether a news article supports a certain thesis, I do want the machine to explain why it came to its conclusion - it helps me gain confidence that the reasoning is sound (or not).
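
To make that concrete, here is a minimal sketch of the kind of thesis-support check I have in mind, written against the OpenAI Python client as one example; the model name, prompt wording, and verdict labels are my own illustrative assumptions, not a prescribed method. The key idea is simply to require the explanation alongside the verdict so the reasoning can be inspected:

```python
# Minimal sketch: ask a model whether an article supports a thesis,
# requiring an explicit, citable explanation before the final verdict.
# Assumptions: OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assess_support(article: str, thesis: str) -> str:
    """Return the model's reasoning plus a SUPPORTS/CONTRADICTS/NEUTRAL verdict."""
    prompt = (
        f"Article:\n{article}\n\n"
        f"Thesis: {thesis}\n\n"
        "Does the article support the thesis? First explain your reasoning "
        "step by step, quoting the passages you rely on, then end with a "
        "final verdict on its own line: SUPPORTS, CONTRADICTS, or NEUTRAL."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Reading the quoted passages in the explanation is what lets me judge whether the verdict rests on sound reasoning or on a misreading of the article.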