KnowEdit
A benchmark for evaluating the knowledge editing capabilities of large language models.
Tags: Common Product, Others, Knowledge Editing, Large Language Models
KnowEdit is a knowledge editing benchmark specifically designed for large language models (LLMs). It provides a comprehensive evaluation framework for testing and comparing how effectively different knowledge editing methods modify the behavior of LLMs within specific domains while preserving overall performance on other inputs. The KnowEdit benchmark comprises six distinct datasets covering a range of editing types, including fact manipulation, sentiment modification, and hallucination generation. It aims to help researchers and developers better understand and improve knowledge editing techniques, thereby advancing the continued development and application of LLMs.
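As a rough illustration of what such an evaluation checks, the sketch below runs two of the metrics typically reported by knowledge editing benchmarks: edit efficacy on the edited prompt and locality on an unrelated prompt. It is not the official KnowEdit harness; the record fields, the stand-in gpt2 model, and the greedy_continuation helper are illustrative assumptions rather than the benchmark's actual schema.

# Minimal sketch (assumed setup, not KnowEdit's official evaluation code) of the
# two checks a knowledge-editing benchmark typically runs after an edit is applied:
#   - efficacy: does the model now produce the new target for the edited prompt?
#   - locality: are answers to unrelated prompts left unchanged?
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Illustrative edit record; field names are assumptions, not the benchmark schema.
record = {
    "prompt": "The capital of France is",
    "target_new": "Lyon",                       # counterfactual edit target
    "locality_prompt": "The capital of Japan is",
    "locality_answer": "Tokyo",
}

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def greedy_continuation(prompt: str, max_new_tokens: int = 5) -> str:
    """Greedy-decode a short continuation for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# In a real run, a knowledge-editing method (e.g., ROME or MEMIT) would modify
# `model` at this point; this sketch only measures the unedited baseline.
efficacy = record["target_new"] in greedy_continuation(record["prompt"])
locality = record["locality_answer"] in greedy_continuation(record["locality_prompt"])
print(f"efficacy: {efficacy}, locality preserved: {locality}")

A full evaluation would aggregate these checks (plus generalization to paraphrased prompts and overall fluency) across all six datasets rather than a single record.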
KnowEdit Visits Over Time
Monthly Visits: 1,663
Bounce Rate: 46.55%
Pages per Visit: 1.8
Visit Duration: 00:00:42