New research from Renmin University warns that data augmentation in contrastive learning calls for caution: positive samples that are too strongly aligned may actually harm generalization. Although stronger augmentation tends to improve downstream-task performance, it degrades alignment between positive pairs. The study analyzes the mechanisms by which augmentation shapes contrastive learning and proposes searching for better augmentation strategies from information-theoretic and spectral perspectives.
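The trade-off described above can be illustrated with a toy computation (a hypothetical sketch, not the paper's code): alignment is commonly measured as the mean distance between normalized embeddings of positive pairs, so augmentation that perturbs representations more strongly produces a worse alignment score even if it helps downstream tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(z):
    # Project embeddings onto the unit hypersphere.
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def alignment(z1, z2):
    # Mean squared L2 distance between positive pairs (lower = better aligned).
    return np.mean(np.sum((z1 - z2) ** 2, axis=1))

def augment(z, strength, rng):
    # Hypothetical stand-in for data augmentation: Gaussian perturbation
    # of the embedding, with `strength` controlling augmentation intensity.
    return normalize(z + strength * rng.standard_normal(z.shape))

# 512 samples with 64-dimensional embeddings (arbitrary toy sizes).
z = normalize(rng.standard_normal((512, 64)))

# Two augmented "views" per sample form a positive pair.
weak = alignment(augment(z, 0.1, rng), augment(z, 0.1, rng))
strong = alignment(augment(z, 1.0, rng), augment(z, 1.0, rng))

# Stronger augmentation yields a larger pair distance, i.e. worse alignment.
print(f"weak-aug alignment: {weak:.3f}, strong-aug alignment: {strong:.3f}")
```

This only models augmentation as embedding noise; the paper's point is the analogous effect with real image augmentations, where the alignment-versus-downstream-performance tension motivates its information-theoretic and spectral analysis.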