Depending on how far you have progressed with Scripthea and Stable Diffusion, here are some example scenarios (guidelines rather than recipes). Usually people do not know what they want or, on rare occasions, what they need.
What are you doing? |
How do you do it? |
0. Begin from the beginning: installations, some reading and how to run Scripthea with Stable Diffusion. Once you have Stable Diffusion WebUI (SDW) up and running, try some SDW options following this intro or the GitHub page you downloaded the installation from. If you prefer video tutorials, here are some. |
|
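If you plan to connect Scripthea to SDW over the API (see scenario 1 below), the WebUI must be started with its API enabled. A minimal sketch for a standard Automatic1111 install; your paths may differ:

```shell
# Linux/macOS: launch Automatic1111 WebUI with its HTTP API enabled
./webui.sh --api

# Windows: add the flag to webui-user.bat instead, then run it as usual:
#   set COMMANDLINE_ARGS=--api
```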
1. Some initial experimentation: cues, modifiers, single queries. Are you trying to figure out how all this works? You may have a vague idea of what you want to accomplish, and you have seen some examples here and there, but doing it yourself is a bit more complicated. You need to experiment with different cues, modifiers and SDW options. The third one is trickier, because it depends on how you connect to SDW. If the connection is via the API, you can adjust SD parameters from the Stable Diffusion tab (on the right in the Composer). If you are running a script connection to SDW, you need to reset the connection every time you change an SD parameter in the WebUI. |
Once the previous section is done, you can start some initial experiments and tests.
|
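To get a feel for the SD parameters involved, the API connection mentioned above can also be exercised by hand. Below is a minimal sketch against Automatic1111's `/sdapi/v1/txt2img` endpoint; the helper names and the default address are illustrative assumptions, not part of Scripthea:

```python
import base64
import json
from urllib import request

API = "http://127.0.0.1:7860"   # default SDW address; adjust if yours differs

def build_payload(prompt, steps=20, cfg_scale=7.0, width=512, height=512):
    """SD parameters that would otherwise be set on the Stable Diffusion tab."""
    return {
        "prompt": prompt,        # cue + modifiers, as composed in Scripthea
        "steps": steps,          # sampling steps
        "cfg_scale": cfg_scale,  # how strongly the prompt is enforced
        "width": width,
        "height": height,
    }

def txt2img(prompt, **params):
    """POST one query to SDW (the WebUI must be running with --api)."""
    data = json.dumps(build_payload(prompt, **params)).encode("utf-8")
    req = request.Request(API + "/sdapi/v1/txt2img", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        reply = json.load(resp)
    # images come back base64-encoded
    return [base64.b64decode(img) for img in reply["images"]]
```

Changing a value in `build_payload` and re-sending the same prompt is the quickest way to see what each SD parameter does.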
2. More systematic approach: scan mode, scannable modifiers and review. As established earlier, a prompt = cue + modifiers. In scan mode you can combine a number of cues with a number of modifiers. There are several ways to do that, using two types of modifiers: fixed and scannable. The simplest is to use some cues with fixed modifiers only: the result is a list of all selected cues, each with the fixed modifiers appended at the end. Then you add scannable modifiers to the mix: the result is the combination of every cue with all the fixed modifiers plus one or more scannable modifiers (check the Sample number in the Modifiers options). The best way to learn is to play a bit with both types of modifiers and the sample number. The overall aim is to be able to create predictable results: maybe not exactly what you have in mind, but with some level of predictability. |
After some experimentation you will have a sense of how the software works and what to expect from the prompts you have composed.
|
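The scan combinatorics described above can be sketched in a few lines. This is an illustration of the idea only; the names (`fixed_mods`, `scannable_mods`, `sample_number`) mirror the text, not Scripthea's actual internals:

```python
import itertools

def scan(cues, fixed_mods, scannable_mods, sample_number=1):
    """prompt = cue + modifiers: every selected cue is paired with all
    fixed modifiers plus each group of sample_number scannable modifiers."""
    prompts = []
    for cue in cues:
        for group in itertools.combinations(scannable_mods, sample_number):
            prompts.append(", ".join([cue, *fixed_mods, *group]))
    return prompts
```

With 2 cues, 1 fixed modifier and 3 scannable modifiers at sample number 1, this yields exactly 2 × 3 = 6 prompts, which is what makes the results predictable.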
3. Full systematic approach: cues from an image depot, next-level iterations and having something in mind ...coming |
|
4. Final preparation: upscaling, final touches and publishing. 1. Upscaling is always optional; your generator may already produce images of sufficient quality and resolution for your purposes. If not, see on the right. 2. Final touches usually involve some photo-editing software. 3. There are plenty of public repositories specializing in AI-generated images and/or photography, and the variety is impressive. Still, if you decide to do it yourself, see on the right. |
1. For the ComfyUI upscaler, follow these video instructions. In Automatic1111, go to the Extras tab, pick an upscaling method, set a scale factor and give it some time; you can also process a whole batch of images (a folder). For more, see this video clip. 2. Photoshop is very popular and powerful, but it is not cheap and can be complicated at times. Other tools (including PS plugins) can change the lighting or other meta characteristics. 3. Keeping all the images you would like to publish in one image depot will allow you to export them into a generated webpage using the Scripthea Export utility (some options are available). |
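If you would rather script the upscaling step, Automatic1111 also exposes the Extras upscaler over its API (the WebUI must run with --api). A sketch under that assumption; the endpoint and field names follow the public A1111 API, and the upscaler name is only an example of one commonly installed method:

```python
import base64
import json
from urllib import request

API = "http://127.0.0.1:7860"   # default SDW address

def upscale_request(png_path, scale=2, upscaler="R-ESRGAN 4x+"):
    """Payload for SDW's Extras upscaler (same job as the Extras tab)."""
    with open(png_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return {
        "image": encoded,            # input image, base64-encoded
        "upscaling_resize": scale,   # the scale factor from the Extras tab
        "upscaler_1": upscaler,      # any upscaling method installed in SDW
    }

def upscale(png_path, **params):
    """POST one image to the Extras endpoint; returns upscaled PNG bytes."""
    data = json.dumps(upscale_request(png_path, **params)).encode("utf-8")
    req = request.Request(API + "/sdapi/v1/extra-single-image", data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return base64.b64decode(json.load(resp)["image"])
```

Looping this over a folder reproduces the batch mode mentioned on the left.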
All of these scenarios assume using the prompts provided by Scripthea, or some variations of them. Still, if you decide to write your own prompts from scratch, here are some useful tips:
Be specific: The more specific and detailed your prompt is, the more specific and detailed your generated image will be.
Mood: What emotions do you want the image to evoke? Do you want it to be peaceful, dramatic, joyful, mysterious, etc.?
Style: Do you have a specific artistic style in mind? For example, do you want the image to be realistic, impressionistic, whimsical, abstract, etc.? You can reference famous artists or art movements for inspiration.
Use descriptive language: Use vivid and descriptive language to help the model understand what you want the image to look like. For example, instead of saying "a black dog," you could say "a sleek, black dog with a shiny coat and a wagging tail."
Provide context: Give the model some context about the image you want to generate. For example, if you're generating an image of a dog, you could provide some information about the breed, age, and environment.
Use relevant keywords: Include relevant keywords in your prompt to help the model understand the topic of the image you want to generate. For example, if you're generating an image of a dog, you could include keywords like "dog," "puppy," "bone," "fetch," etc.
Be creative: Don't be afraid to get creative with your prompt! You can use metaphors, similes, and other literary devices to help the model generate a unique and interesting image.
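The tips above can be folded into a simple prompt template. The breakdown into subject, details, mood, style and keywords below is just one possible structure for illustration, not a Scripthea feature:

```python
def compose_prompt(subject, details=(), mood=None, style=None, keywords=()):
    """Assemble a prompt from the checklist above: a specific subject,
    descriptive details, a mood, a style reference and relevant keywords."""
    parts = [subject, *details]
    if mood:
        parts.append(f"{mood} mood")
    if style:
        parts.append(f"in the style of {style}")
    parts.extend(keywords)
    return ", ".join(parts)
```

For example, `compose_prompt("a sleek black dog with a shiny coat", details=["wagging tail"], mood="joyful", style="impressionism", keywords=["dog", "fetch"])` combines several of the tips at once into a single specific, descriptive prompt.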
For more detailed guides, search the internet (or YouTube, if you prefer video); there are plenty of them.
You could also ask a chatbot (e.g. ChatGPT) to write the prompt for you, but then you have to explain to the chatbot what you would like the image to be, and there is the catch-22...