Renowned expert Jerry Davos recently released a new YouTube tutorial detailing how to use ComfyUI to alter the lighting of videos and render them.
Installation and Preparation
First, download Jerry Davos' workflow file, which is linked at the end of this article. After downloading, simply drag it into ComfyUI, which will flag any missing nodes. No worries: a one-click installation via the Manager is quick and simple.
Should you encounter any issues during installation, Jerry Davos also provides a help link where you can find solutions. Here is Jerry Davos' detailed tutorial:
Workflow Overview
The workflow is cleverly divided into several groups, each with its unique function, from input settings to output saving, making it clear at a glance.
Input Settings: Includes loop settings and source video settings.
AnimateDiff Group: Loads the AnimateDiff motion model that keeps the animation temporally consistent.
Compositing Group: Composites the IC-Light conditioning with the light map.
KSampler Group: Controls the number of steps used to render the images.
Refiner: Enhances the resolution of the final render.
Prompt Input: Describes your image and the new lighting conditions.
Input Settings
There are 5 settings:
Sampler Steps: Determines the total number of steps the KSampler takes to render the image. It should not be changed. [Default value: 26]
Detail Enhancer: Enhances fine details in the final render. [Use values between 0.1 and 1]
Seed: Controls the generation seed for each KSampler.
Sampler CFG: Controls the CFG value for the KSampler.
Refine Upscale: Acts similarly to a high-resolution fix value. [Use values between 1.1–1.6 for best results]
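To make the Refine Upscale setting concrete, here is a small illustrative sketch (not part of the workflow itself, and the rounding rule is an assumption): it shows how an upscale factor such as 1.4 maps a base render size to the refined output size, snapping to multiples of 8 as SD1.5 models expect.

```python
# Illustrative sketch: how a "Refine Upscale" factor could map a base
# render size to the refined output size (rounding rule is assumed).
def refined_size(width: int, height: int, upscale: float = 1.4) -> tuple[int, int]:
    """Scale both dimensions, rounding to multiples of 8 as SD1.5 expects."""
    def to_multiple_of_8(v: float) -> int:
        return int(round(v / 8) * 8)
    return to_multiple_of_8(width * upscale), to_multiple_of_8(height * upscale)

print(refined_size(768, 512, 1.5))  # -> (1152, 768)
```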
Prompt Input
Positive Prompt: Enter the prompt that best describes the image under new lighting.
Negative Prompt: Configured to provide the best results. Feel free to edit.
Clip Text Encoding Node: Helps encode text to maximize quality. Keep it as "full".
Models and Loras
Checkpoint: Select any realistic SD1.5 model for realistic results, or any stylized SD1.5 model for stylized results.
Loras: [Optional] You can select any loras from the list if needed. Do not use at full strength. Using values around 0.5-0.7 yields the best results.
Basic Settings Analysis
Each setting serves a purpose: the KSampler steps determine the detail level of the rendered image, and the detail enhancer improves the small details in the final render.
Seed: Controls the generation seed for each KSampler.
Refiner: Similar to a high-resolution fix; enhances image quality.
Prompt Input: Describes your image; the system is configured for optimal results.
Model and Video Selection
Select your checkpoint model, whether realistic or cartoon style. The Lora strength is recommended to be set between 50% and 70% for the best results.
Video Upload and Settings
Upload your source video, noting that the file size should not exceed 100MB and the duration should be between 15 to 20 seconds to avoid memory issues.
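As a sanity check before uploading, the tutorial's limits can be expressed as a tiny helper (purely illustrative; the function name and return format are my own):

```python
# Hypothetical pre-flight check mirroring the tutorial's limits
# (<= 100 MB file size, ~15-20 s duration) to avoid memory issues.
def check_source_video(size_bytes: int, duration_s: float) -> list[str]:
    warnings = []
    if size_bytes > 100 * 1024 * 1024:
        warnings.append("file larger than 100 MB")
    if not (15 <= duration_s <= 20):
        warnings.append("duration outside the recommended 15-20 s range")
    return warnings

print(check_source_video(80 * 1024 * 1024, 18))  # -> []
```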
Light Mapping Video
Upload your light mapping video, ensuring its duration is no shorter than the source video's; otherwise, errors will occur.
Rendering Settings
Adjust the rendering resolution. Whether landscape or portrait, the longer side will not exceed the set value. Values between 800 and 1200 are recommended for the best results.
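The "maximum resolution" behavior described above can be sketched as follows (an illustration, not the node's actual code): the longer side is capped at the chosen value and the aspect ratio is preserved.

```python
# Sketch of the max-resolution cap: the longer side never exceeds
# max_side, and the aspect ratio is preserved.
def fit_to_max(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

print(fit_to_max(1920, 1080, 1200))  # -> (1200, 675)
```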
Light Map
Upload Light Map: Click and upload the light map video you want.
It will automatically scale to the size of the source video.
Ensure its duration is greater than or equal to the source video's; otherwise, errors will occur.
Light Map ControlNet: The light map also drives a light-based ControlNet. Keep the ControlNet strength moderate; values that are too high cause overexposure or overly sharp light transitions.
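The duration requirement above can be illustrated with a frame-count check (a hypothetical helper, not part of the workflow): a shorter light map raises an error, and a longer one is simply trimmed to the source length.

```python
# Illustration only: the light map must be at least as long as the
# source video; extra light map frames are ignored.
def align_light_map(source_frames: int, light_frames: int) -> int:
    if light_frames < source_frames:
        raise ValueError("light map video is shorter than the source video")
    return source_frames  # use only the first source_frames frames

print(align_light_map(480, 600))  # -> 480
```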
AnimateDiff
Load AnimateDiff Model: You can use any motion model to achieve different effects.
AnimateDiff Other Nodes: Changing the other settings requires some understanding of AnimateDiff [You can find them here].
smZ Settings: This node improves the quality of the model pipeline; all settings are pre-configured to run well.
Composition of Light Map and IC Adjustment
The gray adjustment node at the top controls the IC-Light regulation, reducing contrast and controlling brightness.
Generate New Background: When disabled, it will input the original image and attempt to map details similar to the source video background based on the "background prompt" (if present in the positive prompt box).
[1 girl, sunny, sunset, white shirt, black shorts, indoors, room]
When generating a new background is enabled: it will generate a new background based on depth.
[1 girl, sunny, sunset, natural scenery in the background, sky]
Additionally, the strength and end percentage of the depth ControlNet are reduced to 45% to leave open areas in the background.
Top Light Map: When True, the light map will be on top of the source video and more dominant; when False, the source will be on top, more dominant, and brighter.
Subject Influence Area: Two blending modes work best:
Darken: darkens shadow areas based on the top or bottom light map.
Brighten: brightens shadow areas based on the top or bottom light map.
The blend factor controls the intensity.
Overall Adjustment: This controls the brightness, contrast, gamma, and hue of the final processed light map from above.
Image Remapping: Use this node to control the overall brightness and darkness of the entire image.
Higher minimum values make the scene brighter.
Lower maximum values will darken the scene and can distort brighter areas into artifacts, similar to QR Code Monster ControlNet effects.
Typically, use a minimum value of 0.1 or 0.2 to slightly brighten the scene.
A minimum value of 0 will result in black pixels in the light map appearing as pitch black shadows.
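The remapping described above amounts to a linear rescale of normalized pixel values into a [minimum, maximum] range (a sketch under that assumption; the node's internals may differ): raising the minimum lifts shadows, lowering the maximum compresses highlights.

```python
# Sketch of the image remapping: normalized pixel values (0-1) are
# linearly remapped into [minimum, maximum]. Raising the minimum
# brightens shadows; lowering the maximum darkens highlights.
def remap(pixel: float, minimum: float = 0.1, maximum: float = 1.0) -> float:
    return minimum + pixel * (maximum - minimum)

print(remap(0.0, 0.1, 1.0))  # pure black is lifted to 0.1
print(remap(0.0, 0.0, 1.0))  # minimum of 0 leaves pitch-black shadows
```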
KSamplers (Original and Refined)
IC Raw KSampler: Unlike other samplers, due to the IC-Light conditioning it starts denoising from a later step instead of zero (e.g., frames start denoising from step 8).
For example, with an end step of 20, the start step determines the light map's effect:
0 will have no light map effect.
5 will have roughly a 50% effect.
10 will have a 100% effect.
Therefore, around 3-8 is a good testing range.
When generating a new background is TRUE, you can set the value below 5 for better results.
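Reading the numbers above as a roughly linear relationship (an interpolation I am assuming, based only on the three data points given), the light map's influence can be estimated from the start step like this:

```python
# Back-of-envelope mapping of the tutorial's numbers (end step 20):
# the light map's influence grows roughly linearly with the start
# step, reaching full effect at step 10. Purely illustrative.
def light_map_effect(start_step: int) -> float:
    """Approximate light map effect as a percentage."""
    return min(start_step, 10) / 10 * 100

print(light_map_effect(5))   # -> 50.0
print(light_map_effect(10))  # -> 100.0
```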
KSampler Refine: Works like an Img2Img refiner applied after the IC raw sampling.
For an end step of 25, the start step behaves as follows:
10 and below works like the original sampler and may give you distorted objects.
15 works like a true refiner.
20 will not work properly.
Above 20 produces chaotic results.
Therefore, the default of 16 is good.
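One way to build intuition for the refiner guidance above (my interpretation, not stated in the tutorial): the later the start step, the less denoising remains, so the refiner reworks the image less aggressively.

```python
# Rough intuition: with an end step of 25, the start step sets the
# fraction of denoising that remains, i.e. how strongly the refiner
# reworks the image. Illustrative assumption, not workflow code.
def remaining_denoise(start_step: int, end_step: int = 25) -> float:
    return (end_step - start_step) / end_step

print(remaining_denoise(16))  # default start step -> 0.36
```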
Face Restoration
Upscale to Fix Face: If you are not satisfied with the restored face, increase the upscale value to around 1.2-1.6 for a better face.
Positive Prompt: You can enter a facial prompt here. The default setting is "smiling". You can change it.
Face Denoising: Use around 0.35-0.45. Higher values on the face may result in incorrect rendering and can also cause face-sliding artifacts.
Save
Once all frames are rendered, they are exported to the output folder in the ComfyUI directory. You can change the output location or even batch-render longer videos.
Advanced Tips
If you wish, you can run this workflow in the cloud via RunComfy, which has a dedicated workflow page with full documentation, one-click operation, and no need to install any nodes or models.
Workflow Address:
https://drive.google.com/drive/folders/16Aq1mqZKP-h8vApaN4FX5at3acidqPUv
Run this workflow in RunComfy without setup:
https://www.runcomfy.com/comfyui-workflows/comfyui-ic-light-workflow-for-video-relighting
Original Video Link: https://www.youtube.com/watch?v=q__YTKxtQAE