SegMoE is a method for merging Stable Diffusion models into a Mixture-of-Experts model without any additional training. Three pre-mixed checkpoints are offered: one combining 2 SDXL models, one combining 4 SDXL models, and one combining 4 SD 1.5 models. The main advantage of a mixed model is that it can accommodate a wider range of styles than any single expert; however, generation quality and inference speed still need improvement. SegMoE is a novel approach, but its overall performance and effectiveness require further refinement. A usage sketch follows below.
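As a rough illustration of how one of the pre-mixed checkpoints is loaded and used, the sketch below relies on the `segmoe` Python package and its `SegMoEPipeline` class; the checkpoint ID `segmind/SegMoE-4x2-v0` and the call signature are assumptions based on Segmind's published usage and should be checked against the current release.

```python
# Minimal sketch: generate an image with a pre-mixed SegMoE checkpoint.
# Assumes the segmoe package is installed and a CUDA GPU is available.
from segmoe import SegMoEPipeline

# Load the 4-expert SDXL mix (checkpoint ID is an assumption)
pipeline = SegMoEPipeline("segmind/SegMoE-4x2-v0", device="cuda")

prompt = "cosmic canvas, orange city background, painting of a chubby cat"
negative_prompt = "nsfw, bad quality, worse quality"

# Diffusers-style call; returns a list of PIL images
image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=1024,
    width=1024,
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("segmoe_sample.png")
```

Because the experts are combined without training, loading is quick, but the merged model is larger than a single SDXL model, which is one reason inference speed remains a weak point.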