with just a 4 GB graphics card!
AI-generated image of a 22-year-old woman, room, landscape
In 2022, AI image creation was for expensive PCs only, but things have changed.
I have a 10-year-old PC with a power supply that only offers 6-pin connectors, which limits me to a handful of possible cards. I chose a $200 graphics card, a Palit GTX 1650, which is silent during desktop use and still nearly silent when rendering. And yes, it can render AI images like the one above!
Install Python 3.10 and Git first. To check that both work, open a command prompt (cmd) and type
py
- you should get the output "Python 3.10.10". Type quit() or press CTRL+Z to exit the Python prompt.
Then type
git
- you should get the Git help text.
(The installers put py.exe in C:\Windows, and git.exe in C:\Program Files\Git\cmd.)
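If one of the commands is not found, you can check which executables Windows actually finds on the PATH with the built-in where command (an optional check, nothing more):
where py
where git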
Now download Stable Diffusion WebUI:
C:
cd \
mkdir sd
cd sd
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
webui-user.bat
webui-user.bat installs everything it needs on the first run. When it is done, open 127.0.0.1:7860 in your browser.
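If rendering fails with out-of-memory errors on a 4 GB card, you can pass the WebUI's low-VRAM switches through webui-user.bat. The following is only a sketch of that file in its default layout; --medvram and --lowvram are existing WebUI options, and whether you need them depends on your card:
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem reduce VRAM usage; try --lowvram if --medvram is not enough for your card
set COMMANDLINE_ARGS=--medvram
call webui.bat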
Prompt:
woman, eye level, full shot. detailed background, coast with flowers.
Negative prompt:
deformed, bad anatomy, disfigured, poorly drawn face,
mutation, mutated, extra limb, ugly, disgusting, poorly drawn hands,
missing limb, floating limbs, disconnected limbs, malformed hands, blurry,
((((mutated hands and fingers)))), watermark, watermarked, censored,
distorted hands, amputation, missing hands, obese, doubled face, double hands
Sampling steps: 30
Width / Height: 512
Batch count: 4
CFG scale: 8
With my graphics card, this takes 90 seconds per image. It will create 4 images, so I have to wait 6 minutes.
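The finished images are also written to disk. With the default output settings they end up below the WebUI folder, so (assuming the C:\sd location used above) you can open the folder with:
explorer C:\sd\stable-diffusion-webui\outputs\txt2img-images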
Example outputs:
They don't look very interesting yet, because the prompt was so short. You have to add prompt details for pose and environment to make them more interesting.
Especially when rendering humans, many of the resulting images can be ugly, with distorted limbs and faces. It can happen that you have to render 5 to 10 images to get a single good result. You also have to add details to the negative prompt for better results.
The rendered images have a resolution of 512x512 pixels; a 4 GB card cannot handle more. ("Hires. fix" directly in the WebUI will not help.) But AI also allows magic upscaling, where details are added based on what the upscaled image should look like. I did not find the upscalers in the WebUI satisfying, as the results looked too artificial. But there is a free tool called Upscayl which produces very good results in the mode "General Photo (Ultramix Balanced)".
(Image comparison: 512x512 original next to the 2048x2048 Upscayl result.)
By default, Stable Diffusion WebUI uses a 4 GB model file named v1-5-pruned-emaonly.ckpt, which contains the trained model weights used for image generation.
But the best model file in my opinion is Deliberate, available on civitai. So do these steps:
Download deliberate_v2.safetensors from civitai, and store it in stable-diffusion-webui\models\Stable-diffusion, next to v1-5-pruned-emaonly.ckpt.
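For reference, the copy step can also be done in the command prompt; this assumes the browser saved the download to your Downloads folder and that you cloned the WebUI to C:\sd as above:
move "%USERPROFILE%\Downloads\deliberate_v2.safetensors" "C:\sd\stable-diffusion-webui\models\Stable-diffusion"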
That's it. Select the new model in the "Stable Diffusion checkpoint" dropdown at the top of the WebUI, and you can continue writing prompts as before; now the improved image data of the new model is used.
iView is a free, lightweight image viewer optimized for walking through generated images. You can quickly sort images into three different target folders, and press F3 to view the prompt text contained in an image.