
Yes, you make a valid point that LLMs are currently far from perfect and may lack the capability to perform certain tasks. That's an important consideration. In many of the uses I'm seeing, explainability of the reasoning is a key requirement: if I ask the machine to assess whether a news article supports a certain thesis, I do want the machine to explain why it came to its conclusion - it helps me gain confidence that the reasoning is sound (or not).

--


Written by Duncan Anderson

Eclectic tastes, amateur at most things. Learning how to build a new startup. Former CTO for IBM Watson Europe.
