My baby boy just got two bottom teeth, and it is surprising how hard he can bite. Let's jump into this week's AI deep dive!

Generative AI art has a problem with character consistency. You can make awesome images, but the images are going to be different each time, due to the nature of diffusion models (explained here). This is good if you want to create billions of unique images. This is bad if you want to create one character in a variety of situations. Being able to create the same character repeatedly is a requirement for comics, books, stories, film storyboards, etc.

We covered how to create consistent characters with Dalle2 and Midjourney here; this week we are focusing on the most powerful of the Big 3 image generators, Stable Diffusion. Stable Diffusion has the most ways to create consistent characters.

The easiest method is to use standard characters. Run image2image variations to get closer to the character.

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3803847970, Size: 512x512, Model hash: d3c225cbc2, Model: comic-diffusion-V2

But what if you don't want your character to look like a famous person? You can either use a different method (6 more covered below!), or use a trick a clever Reddit user figured out. You can base your character on famous people, but then use a sex and ethnicity swap to generate pretty consistent characters that don't look exactly like famous people.

To create female characters use this prompt: as a 25 year old sexy gorgeous thai female mechanic, blue hair, wispy bangs, ((thicc)), (((dirty clothes))), smiling, stunningly beautiful, zeiss lens, half length shot, ultra realistic, octane render, 8k

Negative: Male, man, cartoon, 3d, video game, unreal engine, illustration, drawing, digital illustration, painting, digital painting, sketch, black and white

To create male characters use this prompt: as a 25 year old jacked handsome Jamaican male mechanic, buzzed haircut, chiseled jaw, ((swole)), ((huge biceps)), (((dirty clothes))), smiling, stunningly handsome, zeiss lens, half length shot, ultra realistic, octane render, 8k

Consistent character across varied environments. Here I based my female scientist off of Chris Pratt and Henry Cavill gender swapped, and we get a consistent female character that doesn't look like a famous person.

Out of all the methods, Dreambooth works the best. The downside to Dreambooth is that you need to train a whole model to be able to create one concept. It is the most 'expensive' in terms of file size and GPU requirements, but it is very effective at creating consistent characters. This means if you do a lot of training you have multiple 2-4 GB models for each person, object, or style you are training. It also means that if you find a cool new Stable Diffusion model (article about model creation and remixing coming soon!) you have to train it using Dreambooth on your character. For example, if you wanted to use the model Arcane Diffusion, which was trained on images from Netflix's show, to show your own face, you would need to use Dreambooth to train the Arcane model with your face. Sometimes this training can wipe out the concept that the model was tuned to. Since Textual Inversion can be applied on top of any model, that is one solution, but Textual Inversion quality is sometimes low. Luckily for us, a new training method has entered the chat.

Imagine you want to create a consistent character for a comic book. You can mix and match all the different methods we have covered so far to help you get it done. You could start by using standard characters as a base, doing the swap so they don't look completely standard. Then you could add more prompt details to narrow the character down and add the specific details you want. Then you could generate a bunch of images to find your character in the crowd, and run variations to make sure they match up well and are consistent.
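The celebrity-swap trick above boils down to two things: a fixed set of generation settings (especially the seed) and a prompt that blends famous faces with a swapped identity. Here is a minimal sketch of that recipe in plain Python. The helper name `build_swap_prompt` and the `SETTINGS` dict are my own illustration, not part of any Stable Diffusion tool's API; the values mirror the settings quoted in this article.

```python
# Generation settings quoted in the article (comic-diffusion-V2 run).
# Reusing the same seed and settings is what keeps renders consistent.
SETTINGS = {
    "steps": 20,
    "sampler": "Euler a",
    "cfg_scale": 7,
    "seed": 3803847970,
    "width": 512,
    "height": 512,
}

def build_swap_prompt(celebrities, description):
    """Blend two famous faces, then append a swapped identity so the
    result is consistent without looking like either celebrity."""
    base = " and ".join(celebrities)
    return f"{base} {description}"

prompt = build_swap_prompt(
    ["Chris Pratt", "Henry Cavill"],
    "as a 25 year old sexy gorgeous thai female mechanic, blue hair, "
    "wispy bangs, ((thicc)), (((dirty clothes))), smiling, stunningly "
    "beautiful, zeiss lens, half length shot, ultra realistic, "
    "octane render, 8k",
)

# Negative prompt from the article, passed alongside the main prompt.
NEGATIVE = (
    "Male, man, cartoon, 3d, video game, unreal engine, illustration, "
    "drawing, digital illustration, painting, digital painting, sketch, "
    "black and white"
)

print(prompt)
```

You would paste `prompt` and `NEGATIVE` into your Stable Diffusion frontend of choice with those settings; swapping only the `description` while keeping the celebrities and seed fixed is what holds the character steady across images.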