Stable Diffusion AMD Linux ROCm
Stable-Diffusion-AMD-Linux-Low-VRAM
This repository contains instructions on how to host your own AI for image generation using stable diffusion with an 8GB VRAM AMD GPU on Linux. This is an affordable and efficient alternative to using Google Colab, which can be quite expensive.
Watch this YouTube video to learn how to install stable diffusion and make it work on your AMD GPU using ROCm. Please note that each GPU is unique, and the launch parameters required may vary. However, the launch parameters used in the video are as follows:
--no-half --always-batch-cond-uncond --opt-sub-quad-attention --medvram --disable-nan-check
For a complete list of launch parameters, check out the Optimizations wiki.
If you want to download my VAE, you can.
Prerequisites
To get started, you'll need the following:
- Linux OS (a Debian-based distro is recommended)
- AMD GPU (8GB or more; you may try with less VRAM)
- Python 3
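If you're unsure which AMD GPU you have, you can list it before downloading a driver (a standard Linux command, nothing specific to this guide):
lspci | grep -iE 'vga|3d|display'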
Installation
- Download the driver for your AMD GPU from AMD's website.
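On a Debian-based distro, the driver download is a .deb package that provides the amdgpu-install helper used in the steps below. A minimal sketch, assuming the file name of the package you downloaded:
sudo apt install ./amdgpu-install_*.deb   # substitute the actual file name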
- Add yourself to the render and video groups using the following commands:
sudo usermod -a -G render yourusername
sudo usermod -a -G video yourusername
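You can verify the group changes with the command below (they only take effect after logging out and back in, or rebooting):
groups yourusername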
- Confirm that you have Python 3 installed by typing the following command into the terminal:
python3 --version
- Install ROCm by running the following command:
sudo amdgpu-install --usecase=rocm --no-dkms
- Reboot your system using the following command:
sudo reboot
- After rebooting, confirm that your GPU is recognized by running the following command:
rocminfo
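rocminfo prints a long report; if you only want to confirm the card shows up, you can filter for the device names:
rocminfo | grep -i 'marketing name'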
- Install git and clone the stable diffusion GUI repository:
sudo apt-get install git
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
- If you have Python 3.8 installed, make sure you have VENV capabilities by running the following command (replace with your Python version if necessary):
sudo apt install python3.8-venv
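To sanity-check that the venv module is available, you can run the following; it should print usage text rather than an error:
python3 -m venv --help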
- Install pip3 and wheel, and update them using the following commands:
sudo apt install python3-pip
python3 -m pip install --upgrade pip wheel
- Download any stable diffusion model you like and put it in the models/Stable-diffusion folder. You can find models at CivitAI, which is also a great source of prompts.
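For example, a checkpoint can be downloaded straight into that folder with wget; the URL below is only a placeholder, substitute the download link of the model you picked:
wget -O models/Stable-diffusion/model.safetensors 'https://example.com/path/to/model.safetensors'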
- For better performance, upgrade to the latest stable kernel by running the following commands:
sudo apt-get update
sudo apt-get dist-upgrade
- Reboot your system again using the following command:
sudo reboot
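After the reboot, you can confirm which kernel you are running with:
uname -r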
- Go to the SD directory, then create and activate its virtual env:
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
- Install the PyTorch machine learning library for AMD:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
Note: This is to be installed in the VENV, not on the OS!
- After installation, check your version numbers with the command:
pip list | grep 'torch'
The output should show torch, torchvision, and torchaudio version numbers with rocm tagged at the end (e.g. 2.0.1+rocm5.4.2).
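You can also confirm from inside the venv that PyTorch actually sees your GPU; ROCm builds of PyTorch expose the device through the torch.cuda API, so this check is the same as on NVIDIA:
python3 -c 'import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))'
If this prints False, revisit the ROCm installation before launching the webui.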
- Optimize VRAM usage with the --medvram and --lowvram launch arguments. Use --always-batch-cond-uncond with --lowvram and --medvram to prevent bad quality. If your results turn out to be black images, your card probably does not support float16, so use --precision full (see the variants sketched below).
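A few illustrative launch variants (these are standard stable-diffusion-webui flags; pick one combination rather than stacking them all):
python launch.py --medvram                               # moderate VRAM savings
python launch.py --lowvram --always-batch-cond-uncond    # aggressive savings, slower
python launch.py --no-half --precision full              # for cards with broken float16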
- Benchmark different options. The options that enabled generating nice images (1024x1024) upscaled to 4K were:
--no-half --always-batch-cond-uncond --opt-sub-quad-attention --medvram --disable-nan-check
- Launch with the following command:
python launch.py --opt-sub-quad-attention --medvram --disable-nan-check --always-batch-cond-uncond --no-half
Note: These options may differ for other graphics card models.
The time taken to generate a 1024x1024 img2img was 1m 16.33s, and a 1024x1024 hires fix took 1m 39s. Generating base images takes 10-20s.
- If something does not work, check the torch, torchvision, and torchaudio version numbers with the command:
pip list | grep 'torch'
The version numbers should have rocm tagged at the end.
Usage
After install, every time you want to launch stable diffusion, go back to the venv where all dependencies are installed and start it with the following commands (creating the venv with python -m venv is only needed the first time; afterwards activating it is enough):
cd stable-diffusion-webui
source venv/bin/activate
python launch.py --opt-sub-quad-attention --medvram --disable-nan-check --always-batch-cond-uncond --no-half
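For convenience, the whole launch sequence can be wrapped in a shell alias; the name and path are just an example, adjust them to where you cloned the repo:
alias sd='cd ~/stable-diffusion-webui && source venv/bin/activate && python launch.py --opt-sub-quad-attention --medvram --disable-nan-check --always-batch-cond-uncond --no-half'
Add that line to your ~/.bashrc to make it persistent.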
- Watch out for VRAM usage and system temps with the following commands:
sudo radeontop
watch -n 1 sensors
Adjust your fan curve if your temps get too high (above 70°C).
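With the amdgpu driver, sensors usually reports the GPU temperature as 'edge' (plus 'junction' and 'mem' on newer cards); to watch just the GPU block you can filter the output:
watch -n 1 'sensors | grep -A 5 -i amdgpu'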
In this YouTube video, I show the process of generating images with high details and a 4K end size using the same setup (an AMD GPU with 8GB).