This week, I’d like to share further tests of the segmentation workflow I’ve been developing.
Previously, I demonstrated how SDXL + ADE20K segmentation models give us more control by letting us select specific categories to include in a scene.
While this approach still has limitations (the categories remain heavily influenced by the prompt and context, for example), it is highly effective for laying out the general objects in a scene using tools like #Photoshop or #Rhino.
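For anyone curious how this translates into code, here is a minimal sketch of the idea using the diffusers library. It is not the exact setup used here: the segmentation ControlNet checkpoint ID, file names, and parameter values are placeholders.

```python
# Minimal sketch: SDXL guided by an ADE20K-style segmentation ControlNet via diffusers.
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Hypothetical segmentation ControlNet checkpoint; substitute the one you actually use.
controlnet = ControlNetModel.from_pretrained(
    "path/to/sdxl-ade20k-seg-controlnet", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The segmentation map: an ADE20K colour-coded image, e.g. painted by hand in
# Photoshop or exported from a Rhino viewport capture.
seg_map = load_image("seg_map.png")

prompt = "Massive XXXXX, reflective floor, people, fog,"  # XXXXX swapped per render
image = pipe(
    prompt,
    image=seg_map,
    controlnet_conditioning_scale=0.9,  # higher values let the segmentation dominate
    num_inference_steps=30,
).images[0]
image.save("out.png")
```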
Here are a few quick examples showcasing the potential of this method.
For the final images, I upscaled the results with Flux.1 and then animated one of the spaces with RunwayML.
To let the #segmentation map control the image, the prompt was kept deliberately short, with only one word (XXXXX) changed between renders:
"Massive XXXXX, reflective floor, people, fog,"