Character Consistency in Stable Diffusion (Part 1)

Summary

One of the questions that comes up most often regarding Stable Diffusion is how to maintain character consistency when we want to create more than a single image. Facial characteristics are the most important, followed by body shape, clothing style, setting, and so on. In the AI image/video world, this is the sought-after holy grail as of now (mid-2023).

One route to achieve this goal is a LoRA (Low-Rank Adaptation), a training method that inserts small trainable weight matrices into an existing AI model's layers to bias it towards a defined outcome. Most LoRAs, however, are trained on real-life people (images of famous actors, personal photographs) or on styles, not on AI-generated personas. This raises the question: what if I want to create a character based on the output of the model itself, so that I have a completely unique fabricated persona built around my own character description? How do I achieve that?
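To make the LoRA idea concrete, here is a minimal sketch (not from the article) of the underlying low-rank update in PyTorch: a frozen linear layer gets an added trainable term `(alpha/r) * B @ A`, where `A` and `B` are small rank-`r` matrices. The class name `LoRALinear` and the hyperparameters are illustrative assumptions, not any library's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base weights plus a trainable
    low-rank update, W_effective = W + (alpha / r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original model weights stay frozen
        # A projects the input down to rank r; B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so training begins from the unmodified model.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(16, 8), r=4)
x = torch.randn(2, 16)
out = layer(x)  # identical to the base layer while B is still zero
```

During fine-tuning only `A` and `B` receive gradients, which is why LoRA files are tiny compared to the full model and can be mixed into an existing checkpoint at load time.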

Feel free to share your thoughts on this article in the comments.
