The proposed Headless Language Model removes the need for prediction over the vocabulary space, instead reconstructing input embeddings through a contrastive objective, and can be integrated directly into classical language-model codebases. Moderate-scale experiments show that it outperforms classical methods while delivering a 20-fold gain in computational efficiency.
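To make the idea concrete, the sketch below shows one plausible form of such a contrastive reconstruction loss: instead of projecting hidden states onto the vocabulary and computing a softmax over it, each output representation is trained to match the input embedding of its target token against in-batch negatives (an InfoNCE-style objective). This is a minimal illustration in PyTorch; the function name, the normalization, and the in-batch-negative formulation are assumptions for exposition, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def contrastive_embedding_loss(hidden_states: torch.Tensor,
                               target_embeddings: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """Contrastive reconstruction of input embeddings (illustrative sketch).

    hidden_states:     (N, dim) model outputs, one per predicted position
                       (batch and sequence dimensions flattened together).
    target_embeddings: (N, dim) input-embedding vectors of the target tokens.

    Each output must score its own target embedding higher than every
    other embedding in the batch, so no vocabulary-sized projection or
    softmax over the full vocabulary is ever computed.
    """
    h = F.normalize(hidden_states, dim=-1)
    e = F.normalize(target_embeddings, dim=-1)
    logits = (h @ e.T) / temperature                 # (N, N) similarity matrix
    targets = torch.arange(h.size(0), device=h.device)  # diagonal is positive
    return F.cross_entropy(logits, targets)

# Usage with stand-in tensors: 8 positions, 64-dim representations.
h = torch.randn(8, 64, requires_grad=True)
e = torch.randn(8, 64)
loss = contrastive_embedding_loss(h, e)
loss.backward()
```

The efficiency gain comes from the shape of `logits`: the loss scales with the number of in-batch positions rather than with the vocabulary size, which is where a classical language-model head spends most of its output-layer compute.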