contrastors is a contrastive learning toolkit that enables researchers and engineers to train and evaluate contrastive models efficiently. Built on Flash Attention, it supports multi-GPU training and GradCache for large-batch training in memory-constrained environments. It also integrates with Hugging Face, allowing common models to be loaded seamlessly. Additionally, it supports masked language modeling (MLM) pretraining and Matryoshka representation learning.
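
To give a sense of the two training objectives mentioned above, here is a minimal, hypothetical PyTorch sketch of an in-batch contrastive (InfoNCE) loss combined with Matryoshka-style truncation. It is not the contrastors API; all function names, dimensions, and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(query_emb, doc_emb, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: each query's positive is the
    document at the same batch index; all other documents act as negatives."""
    query_emb = F.normalize(query_emb, dim=-1)
    doc_emb = F.normalize(doc_emb, dim=-1)
    logits = query_emb @ doc_emb.T / temperature            # (batch, batch) similarity matrix
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

def matryoshka_loss(query_emb, doc_emb, dims=(64, 128, 256, 768)):
    """Matryoshka-style objective: apply the contrastive loss to truncated
    prefixes of the embedding so shorter vectors remain useful on their own."""
    return sum(info_nce(query_emb[:, :d], doc_emb[:, :d]) for d in dims) / len(dims)

# Toy usage with random tensors standing in for encoder outputs.
queries = torch.randn(32, 768, requires_grad=True)
documents = torch.randn(32, 768, requires_grad=True)
loss = matryoshka_loss(queries, documents)
loss.backward()
```

In practice the embeddings would come from the Flash Attention-backed encoders the toolkit trains, and techniques such as GradCache would be layered on top to scale the effective batch size.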