Played around a bit with it today.
First of all, though, the install really is as easy as claimed.
If you already have Krita, update to 5.2.1 (or just run your Ninite); otherwise do a fresh install.
Download the zip, put it in the folder, go through the configuration menu as described, and grab a coffee.
I don't have any experience with Krita; it's just on my machine because, hey, it's free and why not. But I understand how a basic graphics program with brushes and layers and so on works.
Now I've got Stable Diffusion on my computer, which is already a big leap for someone like me, because before this I always thought "yeah, maybe I should try it, but am I really going to wade neck-deep into some Python dependency hell just to install something that maybe won't work anyway?"
So this is really the installer for know-nothings.
And yeah, it works on my laptop with a puny 6 GB of VRAM, which is the minimum requirement. I don't play computer games or do 3D, so I don't have some tower PC with a big fat graphics card.
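(If you want to check what your own card has before trying, and you happen to have Python with PyTorch around, a two-line query will tell you. This is purely optional; the plugin ships its own environment, and the snippet below is a generic PyTorch call, nothing plugin-specific.)

```python
import torch

# Report the GPU name and total VRAM, or warn if no CUDA device is found.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU detected")
```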
Of course I'm not using high resolutions, just starting off with 512x512 or 512x640.
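For a sense of what the plugin is doing behind the curtain at that point: a bare-bones text-to-image call with the diffusers library looks roughly like the sketch below. The model name is the public SD 1.5 checkpoint and the prompt is just an example; whether the plugin does exactly this internally, I can't say.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load SD 1.5 in half precision so it fits on a small (~6 GB) card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_attention_slicing()  # trades a little speed for less VRAM

# Generate one 512x512 image from a text prompt.
image = pipe(
    "a stone temple by the sea at sunset, film still",
    height=512, width=512,
).images[0]
image.save("out.png")
```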
And of course I have not been trying to use the live mode.
Now the simple test is, of course, to just use it with a blank canvas and some prompt - using Krita as an SD user interface for absolute dummies.
(Actually, if I run it with a blank canvas and no prompt, it always returns a close-up indoor portrait of a young woman. Probably the default assumption of what a user wants to see if no input is given.)
But as I understand from the YouTube tutorials I've seen, the basic idea is that it picks up from, say, a rather simple outline drawing and elaborates it into the AI rendering. So my attempts have been: make a simple drawing, or load a simple pre-existing line drawing, set it up as a source (this may be an issue, as the UI I see is not the same as in the tutorials), and then hit 'generate'.
That hasn't really been working so far for me.
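One thing I've picked up from reading around, which may explain it: image-to-image has a strength (denoising) setting, and at high values the model more or less ignores the input drawing, while at low values it barely repaints it. A rough diffusers equivalent of the drawing-as-source workflow (the file name and the 0.6 value are placeholders I made up):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the line drawing and scale it to the working resolution.
sketch = Image.open("line_drawing.png").convert("RGB").resize((512, 512))

# strength controls how much gets repainted: near 0.0 returns the
# drawing almost unchanged, near 1.0 ignores it almost completely.
image = pipe(
    prompt="a knight on horseback, oil painting",
    image=sketch,
    strength=0.6,
).images[0]
image.save("img2img_out.png")
```

So if the plugin has a similar slider, that would be the first knob to experiment with.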
What does work is the pose control.
I can definitely tell the AI 'that arm needs to go that way and that leg needs to go here'.
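As far as I understand, this pose control is a ControlNet under the hood. For the curious, the raw diffusers version with the public OpenPose ControlNet checkpoint would look something like this (the stick-figure pose image is assumed to exist as pose_skeleton.png):

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# The OpenPose ControlNet reads a stick-figure pose image and forces
# the generated figure into that pose.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

pose = Image.open("pose_skeleton.png")  # hypothetical pose input
image = pipe("a dancer on a beach, film still", image=pose).images[0]
```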
Also, I can use a previously rendered-from-prompt-only AI image as an input; it does pick up on that. So if successive generations gave me versions of a scene ranging from "Stonehenge vibe" to "Tulum vibe", I can fix it on the Tulum version if I want.
So far this seems to work for "locking down" successive AI generations, but I've failed at getting it to make sense of any freshly drawn input. Not giving up yet though.
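To make the "locking down" concrete: feeding a liked generation back in as the image-to-image input with a low strength keeps the composition, and the prompt then only nudges the details. A self-contained sketch (file name, prompt, and the 0.35 value are all made up):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from the generation I want to keep ("the Tulum version").
previous = Image.open("tulum_version.png").convert("RGB")

image = pipe(
    prompt="ruined coastal temple, golden hour, film still",
    image=previous,
    strength=0.35,  # low strength = stay close to the input image
).images[0]
image.save("locked_variation.png")
```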
One observation from my first attempts: if I'm rendering (from prompt only) a full figure using SD 1.5, it picks up the vibe of a scene very well, but the figures are teratogenic monstrosities. I'm not even talking about extra fingers here; the faces look like the worst things you'd find preserved in formaldehyde in an 18th-century anatomical museum.
The results I get from switching to SD XL are far superior. This model also seems to have a much more decisive interpretation of a prompt; successive generated images don't diverge as randomly. It's also less "cute": SD 1.5 really likes to go for the "anime girl mutagenized by a gamma-ray burst" look. However, in my futile attempts to work from an original drawing, XL was even worse...
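For completeness, switching checkpoints outside the plugin is the same pattern, just a much bigger model. This is a sketch with the public SDXL base checkpoint; the offload call is what lets it squeeze onto a small card, at the cost of speed:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Keep only the currently active model parts on the GPU; slow, but
# it is what makes SDXL feasible on a ~6 GB laptop card.
pipe.enable_model_cpu_offload()

# SDXL's native resolution is 1024x1024; this may still be tight on 6 GB.
image = pipe(
    "full figure of a dancer on a beach, film still",
    height=1024, width=1024,
).images[0]
image.save("sdxl_out.png")
```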
Even with these early steps though I can see how the process obviously gives me more control than just using a web interface and typing a prompt.
And yeah, this thing spontaneously comes up with lighting, costumes, background architecture and so forth that just 'feel right', in the sense that it does look like a still from some film. Even if it's a B-movie, nothing wrong with that.
So there is going to be a lot of trial and error here. Which is to be expected for a complete novice...