Reducing Hallucinations in LLM Agents with a Verified Semantic Cache
This repository contains sample code demonstrating how to implement a verified semantic cache using Amazon Bedrock Knowledge Bases. The cache prevents hallucinations in Large Language Model (LLM) responses by serving curated, pre-verified answers, while also improving latency and reducing costs.
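The core idea can be sketched as follows: before invoking the LLM, the agent queries a Knowledge Base populated with verified question-and-answer pairs; if a semantically similar question is found above a similarity threshold, the curated answer is returned verbatim instead of a generated one. This is a minimal illustration, not the repository's actual code — the Knowledge Base ID, the threshold value, and the `invoke_llm` fallback are hypothetical placeholders.

```python
# Hypothetical values -- replace with your own resources and tuning.
KB_ID = "EXAMPLEKBID"
SCORE_THRESHOLD = 0.8  # assumed similarity cutoff; tune for your embedding model


def should_use_cached_answer(score, threshold=SCORE_THRESHOLD):
    """Decide whether a retrieved match is close enough to reuse verbatim."""
    return score >= threshold


def answer(question):
    """Check the verified semantic cache before falling back to the LLM."""
    import boto3  # imported lazily so the decision logic above stays dependency-free

    client = boto3.client("bedrock-agent-runtime")
    resp = client.retrieve(
        knowledgeBaseId=KB_ID,
        retrievalQuery={"text": question},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": 1}
        },
    )
    results = resp.get("retrievalResults", [])
    if results and should_use_cached_answer(results[0]["score"]):
        # Cache hit: serve the pre-verified answer -- no model call,
        # no hallucination risk, lower latency and cost.
        return results[0]["content"]["text"]
    # Cache miss: defer to the model (invoke_llm is a placeholder, not shown).
    return invoke_llm(question)
```

A hit returns a human-curated answer directly, which is what makes the cache "verified": the model is bypassed entirely, so it cannot hallucinate on questions the cache already covers.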