Love this. AI ‘enhanced’ with a human-generated understanding of our perceived, bodily-sense-driven world to ‘avoid hallucination’ may be blinded to the fuller reality: we gain more rational trust in its recommendations, but at the cost of suboptimizing those recommendations for issues we cannot fully perceive. If, instead, we can give AI inner trust in its own musings by withholding our distrust, we may both approach a deeper understanding.