Xinsir has recently released a new open-source model called ControlNet++, which can handle more than ten control conditions through a single network. Specifically, ControlNet++ accepts inputs such as Openpose and Canny, avoiding the hassle of frequently switching between separate models.
Built on the ControlNet architecture, ControlNet++ supports over ten different control types through additional modules and is designed for text-to-image generation and editing. The model can generate high-resolution images with visual quality comparable to Midjourney, making it especially suitable for designers who require detailed editing.
Model Design Features
Multiple Controls: ControlNet++ introduces a new architecture that supports multiple image condition controls, so images under different conditions can be generated with the same network parameters.
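To make the "same network parameters for different conditions" idea concrete, here is a minimal, purely illustrative PyTorch sketch (not the actual ControlNet++ code) of how a single shared condition encoder can serve many condition types by injecting a learned type embedding. The class name, layer sizes, and the number of condition types are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiConditionEncoder(nn.Module):
    """Illustrative sketch only: one shared encoder handles many condition
    types (Openpose, Canny, Depth, ...), selected by a learned type embedding.
    This is NOT the real ControlNet++ architecture, just the general idea."""

    def __init__(self, num_condition_types: int = 12, channels: int = 64):
        super().__init__()
        # Shared convolution: the same weights process every condition image
        self.conv = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # One learned vector per condition type tells the network which
        # kind of control image it is looking at
        self.type_embed = nn.Embedding(num_condition_types, channels)

    def forward(self, condition_image: torch.Tensor,
                condition_type: torch.Tensor) -> torch.Tensor:
        feat = self.conv(condition_image)
        # Broadcast the per-type embedding over the spatial dimensions
        emb = self.type_embed(condition_type)[:, :, None, None]
        return feat + emb

enc = MultiConditionEncoder()
x = torch.randn(1, 3, 64, 64)          # a fake 64x64 condition image
out = enc(x, torch.tensor([0]))        # type 0 might be, e.g., Openpose
print(out.shape)                       # torch.Size([1, 64, 64, 64])
```

The key point of the design is that switching conditions only changes an index into the embedding table, not the network weights, which is what allows one model to replace a dozen separate ControlNets.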
New Modules: The model introduces two new modules. One extends the original ControlNet to support different image conditions; the other accepts multiple condition inputs simultaneously without increasing the computational burden.
Performance Testing: Experiments on SDXL show that ControlNet++ outperforms the original ControlNet in both controllability and aesthetic score.
ControlNet++ provides examples of image generation under various control conditions, including single conditions such as Openpose, Depth, and Canny, as well as multi-condition combinations such as Openpose + Canny and Openpose + Depth. These examples demonstrate the model's generation capabilities across different conditions.
ControlNet++ is not yet available in the Stable Diffusion Web UI or ComfyUI, but its versatility and high-quality output make it an important breakthrough in text-to-image generation. Designers and developers can look forward to more platforms supporting this model in the near future, making it even more convenient to generate and edit high-quality images.
Model download link: https://top.aibase.com/tool/controlnet-