Set Context

Setting a context simply means that, when generating images from the preview panel, every prompt will be prefixed by your chosen context. That way you can set a style, mood, or atmosphere in a much more elaborate way than with modifiers alone. Of course, you can use both context and modifiers: the context plays the principal role in fashioning your prompts, while the modifiers give you the flexibility to vary them within a scan. That way you can "fine tune" the dominant function of the context with the variability of the modifiers.
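For illustration only, here is a minimal sketch of how a final prompt might be assembled from context, cue, and modifiers. The function and variable names are hypothetical and do not reflect Scripthea's internal code.

    def build_prompt(context: str, cue: str, modifiers: list[str]) -> str:
        """Prefix the cue with the chosen context and append any modifiers.

        Hypothetical illustration of the idea; Scripthea's internals may differ.
        """
        parts = [context.strip(), cue.strip()] if context else [cue.strip()]
        if modifiers:
            parts.append(", ".join(modifiers))
        return ", ".join(p for p in parts if p)

    # The same context is prefixed to every cue in the scan,
    # while the modifiers vary from prompt to prompt.
    print(build_prompt("dark fantasy oil painting, moody lighting",
                       "a lighthouse on a cliff",
                       ["ultra detailed", "wide angle"]))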
A combo box with example contexts (10 so far) allows you to pick
your context (prefix).
- If you want to modify an existing context, select it, edit it, and click the update button (top left).
- If you want to add your own context, write it in the edit box and click the big plus button.
- The minus button removes the currently selected context from the list.
If you would like to include a comment (as a title or description) in the context, place # in front of that line; the comment text will be saved but ignored when the context is used.
Context fashioning works well, but you need to balance the lengths of the context text and the cue text. So as not to overwhelm the prompt with the context, the rule of thumb is that, on average, the context should be shorter than the cue.
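As a rough sketch of both points above (the # comment lines and the length rule of thumb), assuming the context is measured in words and that comments are simply filtered out; the actual way Scripthea parses contexts may differ:

    def usable_context(raw_context: str) -> str:
        """Drop lines that start with '#': comments are stored with the context
        but left out when the context is prefixed to a prompt (assumption)."""
        kept = [ln.strip() for ln in raw_context.splitlines()
                if not ln.lstrip().startswith("#")]
        return " ".join(ln for ln in kept if ln)

    def context_too_long(context: str, cue: str) -> bool:
        """Rule of thumb from above: the context should be shorter than the cue
        (measured here in words, which is an assumption)."""
        return len(context.split()) >= len(cue.split())

    ctx = "# Moody seascape preset\ndramatic seascape, stormy sky, cinematic light"
    print(usable_context(ctx))  # the comment line is dropped
    print(context_too_long(usable_context(ctx), "a lighthouse on a cliff at dusk"))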
The operations over the list of contexts and the list of questions for the LLM (update, add, and remove) are exactly the same for both.
Ask LLM

A much more flexible (and sophisticated) way to tailor your prompts is to ask an LLM to rewrite them in a particular way. The idea is very simple: generate a prompt list (with or without modifiers) in the Scan Preview panel, then ask the LLM to tailor your prompts (only the cue part) in a distinctive way. The examples show different ways to instruct the LLM to rewrite your cues. Let's borrow some of the accumulated skills of the LLM for your creative process.
First you need to launch LM Studio (Run LLM server). Then you can ask the LLM to fashion the selected prompt (green button) or all the checked ones (Auto Ask checkbox). The Show both checkbox refers to showing both the original cue and the fashioned one. The Temperature control allows you to introduce (or not) some variability/randomness. The max number of tokens sets the maximum length of the LLM response.
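To make the mechanics concrete, here is a minimal sketch of the kind of request a client could send to a running LM Studio server. It assumes LM Studio's OpenAI-compatible local endpoint at its usual default address (http://localhost:1234/v1/chat/completions); the address, the instruction text, and the helper function are illustrative assumptions, not Scripthea's actual code.

    import json
    import urllib.request

    # Assumed default address of LM Studio's OpenAI-compatible local server.
    LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

    def rewrite_cue(cue: str, instruction: str, temperature: float = 0.7,
                    max_tokens: int = 120) -> str:
        """Ask the loaded model to rewrite a cue; temperature and max_tokens
        correspond to the Temperature and max-tokens controls described above."""
        payload = {
            "model": "ibm/granite-4-h-tiny",  # should match the model loaded in LM Studio
            "messages": [
                {"role": "system",
                 "content": "You are an assistant helping with text-to-image generation."},
                {"role": "user", "content": f"{instruction}\n\n{cue}"},
            ],
            "temperature": temperature,
            "max_tokens": max_tokens,
        }
        req = urllib.request.Request(
            LMSTUDIO_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        return body["choices"][0]["message"]["content"].strip()

    # Example: fashion a cue in a particular style.
    # print(rewrite_cue("a lighthouse on a cliff",
    #                   "Rewrite this image prompt as a dramatic film-noir scene, one sentence."))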
Before you start using an LLM you need to install and set one up. At present only LM Studio is available as a local LLM server (Ollama may follow later).
- Download and install LM Studio from here. After installation you don't need to switch to Developer mode unless you want more control and information about the current state of LM Studio. When you launch LM Studio from the configuration or from the preview panel, Scripthea will try to start it in server mode using the command-line argument --server (see the sketches after this list). In some versions of LM Studio that is not enough to run the server; in that case, go to the server page of LM Studio and start the server from there.
- Download an LLM model from within LM Studio. For a start I would recommend "ibm/granite-4-h-tiny"; it is relatively small (5.7 GB), which means it should run on a machine with 16 GB of RAM, and it is very good (quick and adequate) for its size.
- Set the LM Studio executable location in the Scripthea configuration dialog (gear button, top left). After you set the path to the executable, launch it. That will give you an indication that LM Studio is there and its API is open for business.
- Launching LM Studio will populate the LM Studio model combo box with all available models. Choose one and load it.
- Setting a system prompt (context) instructs the model to act (as in a role) from a specific perspective (e.g. "you are an assistant helping with text-to-image generation").
- At the end you may test it by asking a random question and checking the response.
- Occasionally some models will include additional information in their response that you don't really need. First, try to instruct the LLM (via the system prompt) not to do that. If that does not work, go to the Config folder of Scripthea and find the file "omit-llm.txt", which is the list of all pieces of text you want Scripthea to omit when reading the LLM response (see the sketch after this list).
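As a rough sketch of what that omit list does: the filename comes from the step above, but the reading and filtering code here is an illustrative assumption, not Scripthea's source.

    from pathlib import Path

    def clean_llm_response(response: str, omit_file: Path) -> str:
        """Strip every piece of text listed in omit-llm.txt (one per line)
        from the LLM response."""
        if omit_file.exists():
            for piece in omit_file.read_text(encoding="utf-8").splitlines():
                piece = piece.strip()
                if piece:
                    response = response.replace(piece, "")
        return response.strip()

    # Example with a placeholder path to Scripthea's Config folder:
    # cleaned = clean_llm_response(raw_response, Path("Config") / "omit-llm.txt")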
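Likewise, the server-mode launch from the first step above amounts to starting the executable with the --server argument, which Scripthea attempts to do for you; a placeholder sketch (the path is not a real install location):

    import subprocess

    # Placeholder path: use the LM Studio executable location set in the
    # Scripthea configuration dialog.
    LM_STUDIO_EXE = r"C:\path\to\LM Studio.exe"

    # Start LM Studio in server mode, as Scripthea attempts automatically.
    subprocess.Popen([LM_STUDIO_EXE, "--server"])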