Wav2Lip is an open-source project that uses deep learning to accurately synchronize a speaker's lip movements in a video with arbitrary target speech. The project provides complete training code, inference code, and pre-trained models, and works with any identity, voice, and language, including CGI faces and synthetic voices. The technology behind Wav2Lip is based on the paper "A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild," published at ACM Multimedia 2020. The project also features an interactive demo and a Google Colab notebook so users can get started quickly. In addition, it offers new, reliable evaluation benchmarks and metrics, along with instructions for computing the metrics reported in the paper.
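As a rough illustration of how the released inference code is typically invoked, the sketch below follows the command-line pattern from the project's README; the checkpoint, video, and audio paths are placeholders you would substitute with your own files, and flag availability may vary between repository versions.

```shell
# Clone the repository and install its Python dependencies.
git clone https://github.com/Rudrabha/Wav2Lip.git
cd Wav2Lip
pip install -r requirements.txt

# Run lip-sync inference: --face is the input video (or image) of the
# speaker, --audio is the target speech, and --checkpoint_path points to
# a downloaded pre-trained model. Paths here are illustrative placeholders.
python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face input/speaker.mp4 \
  --audio input/speech.wav
```

The synced output video is written to the repository's results directory by default; since the model is identity- and language-agnostic, the same command works whether the face is a real person or a CGI character and whether the audio is recorded or synthetic.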