Depending on your current proficiency with
Scripthea and Stable Diffusion, here are some example scenarios to
guide your exploration. These are not strict tutorials but rather flexible
pathways to help you experiment and learn.
Each scenario below explains what you are doing and how you do it.
0. Starting from scratch: installation, some reading, and running Scripthea with Stable Diffusion

Once you have Stable Diffusion with WebUI (SDW) working, try some SDW options by following the introduction or the GitHub page you downloaded the installation from. If you prefer video tutorials, several are available.
1. Some initial experimentation: cues, modifiers, single queries

Are you trying to figure out how all this works? You may have a vague idea of what you wish to accomplish; you have seen some examples here and there, but doing it yourself is a bit more complicated. You may try what the introduction calls the traditional approach. You need some level of clarity about what you want the picture to look like, maybe not in detail but as a general impression. Some may call it style, others ambience, but the point is that you are looking for something specific to satisfy your sense of expression (or maybe art). See Prompt writing tips or Extract from an external collection below. You need to experiment with different cues, modifiers and SDW options. The third one is a bit trickier, and it depends on how you connect to SDW. If it is via the API, you can adjust SD parameters from the Stable Diffusion tab (on the right of the Composer).

Once the previous scenario is done, you can start some initial experimentation and tests.
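When SDW runs with its API enabled (the `--api` flag in Automatic1111), a single query is just an HTTP POST. As a rough sketch of what happens behind the scenes, the snippet below composes a prompt from a cue plus modifiers and builds a request body in the Automatic1111 `/sdapi/v1/txt2img` convention. The endpoint URL, parameter defaults, and the helper name are illustrative assumptions, not Scripthea's internals.

```python
import json

def build_txt2img_payload(cue, modifiers, steps=20, cfg_scale=7.0,
                          width=512, height=512):
    """Compose prompt = cue + modifiers and wrap it in a txt2img
    request body (Automatic1111 WebUI API convention; illustrative)."""
    prompt = ", ".join([cue] + list(modifiers))
    return {
        "prompt": prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
    }

payload = build_txt2img_payload(
    "a lighthouse at dusk", ["oil painting", "dramatic lighting"])
body = json.dumps(payload)

# To actually send it (requires SDW running locally with --api):
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:7860/sdapi/v1/txt2img",
#     data=body.encode(), headers={"Content-Type": "application/json"})
# resp = urllib.request.urlopen(req)  # response JSON holds base64 images
```

Scripthea does all of this for you when connected via the API; the point is only that the prompt you compose ends up as one field of such a request.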
2. More systematic approach: scan mode, scannable modifiers and review

As we established earlier, a prompt = cue + modifiers. In scan mode you can combine a number of cues with a number of modifiers. There are different ways to do that, using two types of modifiers: fixed and scannable. The simplest is to use some cues with some fixed modifiers: the result is a list of all selected cues with the fixed modifiers appended at the end. Then you add scannable modifiers to the mix: the result is the combination of every cue with all fixed modifiers plus one or more scannable modifiers (check the Sample number in Modifiers options). The best way to learn is to experiment a bit with both types of modifiers and the sample number. The overall aim is to be able to create predictable results: maybe not exactly what you expected, but with some level of predictability.

After some experimentation you will have a sense of how the software works and what to expect from the prompts you have composed.
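The scan logic described above amounts to a cross product of cues and sampled scannable modifiers. The sketch below illustrates the idea; it is not Scripthea's actual implementation, and treating the sample number as "how many scannable modifiers per prompt" is my reading of the behaviour.

```python
import itertools

def scan_prompts(cues, fixed, scannable, sample_number=1):
    """Every cue gets all fixed modifiers, plus each possible group of
    `sample_number` scannable modifiers (illustrative sketch only)."""
    prompts = []
    for cue in cues:
        for combo in itertools.combinations(scannable, sample_number):
            prompts.append(", ".join([cue] + fixed + list(combo)))
    return prompts

prompts = scan_prompts(
    cues=["a castle on a cliff", "a foggy harbour"],
    fixed=["highly detailed"],
    scannable=["watercolor", "ink sketch", "cinematic"],
    sample_number=1)
# 2 cues x 3 scannable modifiers -> 6 prompts in total
```

This is why scans grow quickly: the number of images is the number of cues times the number of scannable-modifier groups, which is worth keeping in mind before starting a long run.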
3. Full systematic approach: ratings and iterations towards your goal

After experimenting with scenario 2, use the resulting image depot for the next iteration. The simplest way is to delete unwanted images from the image depot (the Delete key in the viewer, or ID Master). A better way is to rate your images and then use ID Master to copy all images above a certain rating into a new image depot. That gives you your first iteration image depot. Then open that depot in the Image Depot tab (next to Editor) or import it into a cue list in the Editor. Either way, use the iteration depot, with some modifications, to scan and create a second iteration. The modifications could be more modifiers or some SD parameter adjustments, including an alternative model. Then delete and rate again, and you see where this is going. Do as many iterations as you have the time and effort for. It's good practice to keep all intermediate image depot iterations in case you fail to improve or feel stuck.

Execute the first five steps from scenario 2 (see above), call the resulting image depot iteration 0, and then iterate as described.
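The rate-then-copy step is essentially a threshold filter over the depot. Here is a minimal sketch of that idea, assuming a hypothetical filename-to-rating mapping (how Scripthea and ID Master actually store ratings is not shown here):

```python
import shutil
from pathlib import Path

def select_keepers(ratings, min_rating):
    """Return the filenames rated at or above the threshold."""
    return sorted(name for name, r in ratings.items() if r >= min_rating)

def next_iteration(ratings, min_rating, src, dst):
    """Copy every kept image from the current depot folder `src`
    into a new iteration folder `dst` (ID Master does this for you)."""
    dst = Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    kept = select_keepers(ratings, min_rating)
    for name in kept:
        shutil.copy2(Path(src) / name, dst / name)
    return kept

# Example (paths are hypothetical):
# kept = next_iteration({"a.png": 4, "b.png": 2, "c.png": 5},
#                       min_rating=4, src="depot0", dst="depot1")
```

Each pass shrinks the depot to the images worth refining, which is exactly what makes the iteration loop converge toward your goal.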
4. Final preparation: upscaling, final touches and publishing

1. Upscaling is always optional; your generator may already produce images of sufficient quality and resolution. If not, see below. 2. Final touches usually involve some photo-editing software. 3. There are plenty of public repositories specialized in AI-generated images and/or photography; the variety is impressive. Still, if you decide to publish yourself, see below.

1. For the ComfyUI upscaler, follow these video instructions. In Automatic1111, go to the Extras tab, pick an upscaling method, set a scale factor and give it some time; you can process a batch of images (a folder) as well. For more, see this video clip. 2. Photoshop is very popular and powerful, but it is not cheap and can be complicated at times. A good free alternative to PS is GIMP (see a demo clip). 3. Once all of your images are in one image depot, you may want to publish them (or a selected subset) on your website. The Scripthea Export utility will let you export them into a generated webpage.
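The Export utility generates the webpage for you. Purely as an illustration of what such an export amounts to, here is a minimal sketch that builds a static gallery page for a list of image files; the layout, styling, and function name are my own assumptions, not Scripthea's output format.

```python
from pathlib import Path

def export_gallery(image_names, title="My Scripthea depot"):
    """Build a minimal static HTML gallery for the given image files."""
    items = "\n".join(
        f'  <figure><img src="{name}" alt="{name}" width="256">'
        f"<figcaption>{name}</figcaption></figure>"
        for name in image_names)
    return (f"<!DOCTYPE html>\n<html><head><title>{title}</title></head>\n"
            f"<body>\n<h1>{title}</h1>\n{items}\n</body></html>")

html = export_gallery(["a.png", "b.png"])
# Path("gallery.html").write_text(html)  # upload alongside the images
```

A page like this, plus the images themselves, is all a basic self-hosted gallery needs.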
Prompt writing tips

All of these scenarios assume using the prompts provided by Scripthea, or some variation of them. Still, if you decide to write your own prompts from scratch, here are some useful tips:
Search the internet, or YouTube if you prefer, for more detailed guides; there is no shortage of them. Here is a YouTube example.
Extract from an external collection

External collections are a great source of prompts. Their combined size of more than 1.5 million unique prompts is impressive and overwhelming at the same time. Individual external collections range from a few thousand to a couple of hundred thousand prompts, while an individual cue list typically holds between a few tens and a few hundred cues. The extraction dialog provides some basic and some sophisticated methods to extract a cue list from an external collection, according to what you intend to do next or simply your general preferences. You can start with the couple of collections that come by default; the external collection manager (the round wavy button) will help you download and install many more. For best results, I would recommend semantic matching, which requires the semantic extension, downloadable via the same manager.
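As a rough picture of what basic extraction does (the extraction dialog's real filters, and especially semantic matching, are more sophisticated), here is a sketch that pulls a cue list out of a large prompt collection by keyword; the function and its parameters are illustrative, not Scripthea's code.

```python
def extract_cues(collection, keywords, limit=100):
    """Return up to `limit` unique prompts containing every keyword
    (case-insensitive) -- an illustrative stand-in for the extraction
    dialog's basic filters."""
    wanted = [k.lower() for k in keywords]
    cues, seen = [], set()
    for prompt in collection:
        low = prompt.lower()
        if all(k in low for k in wanted) and low not in seen:
            seen.add(low)
            cues.append(prompt)
            if len(cues) >= limit:
                break
    return cues

cues = extract_cues(
    ["Misty forest at dawn", "city street at night", "misty mountain lake"],
    keywords=["misty"])
```

Keyword filtering narrows hundreds of thousands of prompts down to a workable cue list; semantic matching goes further by matching on meaning rather than exact words.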
Filter options for prompt extraction: