{"id":76,"date":"2023-02-16T14:38:45","date_gmt":"2023-02-16T06:38:45","guid":{"rendered":"https:\/\/my.leaf.hair\/?p=76"},"modified":"2023-02-17T14:46:24","modified_gmt":"2023-02-17T06:46:24","slug":"stable-diffusion-web-ui","status":"publish","type":"post","link":"https:\/\/my.di.cloudns.asia\/index.php\/2023\/02\/16\/76.html","title":{"rendered":"Stable Diffusion web UI"},"content":{"rendered":"<h1>Stable Diffusion web UI<\/h1>\n<p>A browser interface based on Gradio library for Stable Diffusion.<\/p>\n<p><div class='fancybox-wrapper lazyload-container-unload' data-fancybox='post-images' href='\/wp-content\/uploads\/2023\/02\/post-76-63ef22c122084.png'><img class=\"lazyload lazyload-style-1\" src=\"data:image\/svg+xml;base64,PCEtLUFyZ29uTG9hZGluZy0tPgo8c3ZnIHdpZHRoPSIxIiBoZWlnaHQ9IjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgc3Ryb2tlPSIjZmZmZmZmMDAiPjxnPjwvZz4KPC9zdmc+\"  decoding=\"async\" data-original=\"\/wp-content\/uploads\/2023\/02\/post-76-63ef22c122084.png\" src=\"data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB\/AAffA0nNPuCLAAAAAElFTkSuQmCC\" alt=\"\" \/><\/div><\/p>\n<h2>Features<\/h2>\n<p><a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Features\">Detailed feature showcase with images<\/a>:<\/p>\n<ul>\n<li>Original txt2img and img2img modes<\/li>\n<li>One click install and run script (but you still must install python and git)<\/li>\n<li>Outpainting<\/li>\n<li>Inpainting<\/li>\n<li>Color Sketch<\/li>\n<li>Prompt Matrix<\/li>\n<li>Stable Diffusion Upscale<\/li>\n<li>Attention, specify parts of text that the model should pay more attention to\n<ul>\n<li>a man in a ((tuxedo)) &#8211; will pay more attention to tuxedo<\/li>\n<li>a man in a (tuxedo:1.21) &#8211; alternative syntax<\/li>\n<li>select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous 
user)<\/li>\n<\/ul>\n<\/li>\n<li>Loopback, run img2img processing multiple times<\/li>\n<li>X\/Y\/Z plot, a way to draw a 3-dimensional plot of images with different parameters<\/li>\n<li>Textual Inversion\n<ul>\n<li>have as many embeddings as you want and use any names you like for them<\/li>\n<li>use multiple embeddings with different numbers of vectors per token<\/li>\n<li>works with half precision floating point numbers<\/li>\n<li>train embeddings on an 8GB video card (also reports of 6GB working)<\/li>\n<\/ul>\n<\/li>\n<li>Extras tab with:\n<ul>\n<li>GFPGAN, neural network that fixes faces<\/li>\n<li>CodeFormer, face restoration tool as an alternative to GFPGAN<\/li>\n<li>RealESRGAN, neural network upscaler<\/li>\n<li>ESRGAN, neural network upscaler with a lot of third party models<\/li>\n<li>SwinIR and Swin2SR (<a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/pull\/2092\">see here<\/a>), neural network upscalers<\/li>\n<li>LDSR, Latent diffusion super resolution upscaling<\/li>\n<\/ul>\n<\/li>\n<li>Resizing aspect ratio options<\/li>\n<li>Sampling method selection\n<ul>\n<li>Adjust sampler eta values (noise multiplier)<\/li>\n<li>More advanced noise setting options<\/li>\n<\/ul>\n<\/li>\n<li>Interrupt processing at any time<\/li>\n<li>4GB video card support (also reports of 2GB working)<\/li>\n<li>Correct seeds for batches<\/li>\n<li>Live prompt token length validation<\/li>\n<li>Generation parameters\n<ul>\n<li>parameters you used to generate images are saved with that image<\/li>\n<li>in PNG chunks for PNG, in EXIF for JPEG<\/li>\n<li>can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI<\/li>\n<li>can be disabled in settings<\/li>\n<li>drag and drop an image\/text-parameters to promptbox<\/li>\n<\/ul>\n<\/li>\n<li>Read Generation Parameters Button, loads parameters in promptbox to UI<\/li>\n<li>Settings page<\/li>\n<li>Running arbitrary python code from UI (must run with --allow-code to 
enable)<\/li>\n<li>Mouseover hints for most UI elements<\/li>\n<li>Possible to change defaults\/min\/max\/step values for UI elements via text config<\/li>\n<li>Tiling support, a checkbox to create images that can be tiled like textures<\/li>\n<li>Progress bar and live image generation preview\n<ul>\n<li>Can use a separate neural network to produce previews with almost no VRAM or compute requirement<\/li>\n<\/ul>\n<\/li>\n<li>Negative prompt, an extra text field that allows you to list what you don&#8217;t want to see in the generated image<\/li>\n<li>Styles, a way to save parts of a prompt and easily apply them via dropdown later<\/li>\n<li>Variations, a way to generate the same image but with tiny differences<\/li>\n<li>Seed resizing, a way to generate the same image but at a slightly different resolution<\/li>\n<li>CLIP interrogator, a button that tries to guess the prompt from an image<\/li>\n<li>Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to anime girl midway<\/li>\n<li>Batch Processing, process a group of files using img2img<\/li>\n<li>Img2img Alternative, reverse Euler method of cross attention control<\/li>\n<li>Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions<\/li>\n<li>Reloading checkpoints on the fly<\/li>\n<li>Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one<\/li>\n<li><a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Custom-Scripts\">Custom scripts<\/a> with many extensions from community<\/li>\n<li><a href=\"https:\/\/energy-based-model.github.io\/Compositional-Visual-Generation-with-Composable-Diffusion-Models\/\">Composable-Diffusion<\/a>, a way to use multiple prompts at once\n<ul>\n<li>separate prompts using uppercase <code>AND<\/code><\/li>\n<li>also supports weights for prompts: <code>a cat :1.2 AND a dog AND a penguin :2.2<\/code><\/li>\n<\/ul>\n<\/li>\n<li>No token limit for prompts (original 
stable diffusion lets you use up to 75 tokens)<\/li>\n<li>DeepDanbooru integration, creates danbooru style tags for anime prompts<\/li>\n<li><a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Xformers\">xformers<\/a>, major speed increase for select cards (add --xformers to commandline args)<\/li>\n<li>via extension: <a href=\"https:\/\/github.com\/yfszzx\/stable-diffusion-webui-images-browser\">History tab<\/a>: view, direct and delete images conveniently within the UI<\/li>\n<li>Generate forever option<\/li>\n<li>Training tab\n<ul>\n<li>hypernetworks and embeddings options<\/li>\n<li>Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)<\/li>\n<\/ul>\n<\/li>\n<li>Clip skip<\/li>\n<li>Hypernetworks<\/li>\n<li>Loras (same as Hypernetworks but prettier)<\/li>\n<li>A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt.<\/li>\n<li>Can select to load a different VAE from settings screen<\/li>\n<li>Estimated completion time in progress bar<\/li>\n<li>API<\/li>\n<li>Support for dedicated <a href=\"https:\/\/github.com\/runwayml\/stable-diffusion#inpainting-with-stable-diffusion\">inpainting model<\/a> by RunwayML.<\/li>\n<li>via extension: <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui-aesthetic-gradients\">Aesthetic Gradients<\/a>, a way to generate images with a specific aesthetic by using CLIP image embeds (implementation of <a href=\"https:\/\/github.com\/vicgalle\/stable-diffusion-aesthetic-gradients\">https:\/\/github.com\/vicgalle\/stable-diffusion-aesthetic-gradients<\/a>)<\/li>\n<li><a href=\"https:\/\/github.com\/Stability-AI\/stablediffusion\">Stable Diffusion 2.0<\/a> support &#8211; see <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Features#stable-diffusion-20\">wiki<\/a> for instructions<\/li>\n<li><a href=\"https:\/\/arxiv.org\/abs\/2211.06679\">Alt-Diffusion<\/a> 
support &#8211; see <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Features#alt-diffusion\">wiki<\/a> for instructions<\/li>\n<li>Now without any bad letters!<\/li>\n<li>Load checkpoints in safetensors format<\/li>\n<li>Eased resolution restriction: generated image&#8217;s dimensions must be multiples of 8 rather than 64<\/li>\n<li>Now with a license!<\/li>\n<li>Reorder elements in the UI from settings screen<\/li>\n<\/ul>\n<h2>Installation and Running<\/h2>\n<p>Make sure the required <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Dependencies\">dependencies<\/a> are met and follow the instructions available for both <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Install-and-Run-on-NVidia-GPUs\">NVidia<\/a> (recommended) and <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Install-and-Run-on-AMD-GPUs\">AMD<\/a> GPUs.<\/p>\n<p>Alternatively, use online services (like Google Colab):<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Online-Services\">List of Online Services<\/a><\/li>\n<\/ul>\n<h3>Automatic Installation on Windows<\/h3>\n<ol>\n<li>Install <a href=\"https:\/\/www.python.org\/downloads\/windows\/\">Python 3.10.6<\/a>, checking &quot;Add Python to PATH&quot;<\/li>\n<li>Install <a href=\"https:\/\/git-scm.com\/download\/win\">git<\/a>.<\/li>\n<li>Download the stable-diffusion-webui repository, for example by running <code>git clone https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui.git<\/code>.<\/li>\n<li>Place a Stable Diffusion checkpoint (<code>model.ckpt<\/code>) in the <code>models\/Stable-diffusion<\/code> directory (see <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Dependencies\">dependencies<\/a> for where to get it).<\/li>\n<li>Run <code>webui-user.bat<\/code> from Windows Explorer as a normal, non-administrator 
user.<\/li>\n<\/ol>\n<h3>Automatic Installation on Linux<\/h3>\n<ol>\n<li>Install the dependencies:\n<pre><code class=\"language-bash\"># Debian-based:\nsudo apt install wget git python3 python3-venv\n# Red Hat-based:\nsudo dnf install wget git python3\n# Arch-based:\nsudo pacman -S wget git python<\/code><\/pre>\n<\/li>\n<li>To install in <code>\/home\/$(whoami)\/stable-diffusion-webui\/<\/code>, run:\n<pre><code class=\"language-bash\">bash &lt;(wget -qO- https:\/\/raw.githubusercontent.com\/AUTOMATIC1111\/stable-diffusion-webui\/master\/webui.sh)<\/code><\/pre>\n<\/li>\n<\/ol>\n<h3>Installation on Apple Silicon<\/h3>\n<p>Find the instructions <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Installation-on-Apple-Silicon\">here<\/a>.<\/p>\n<h2>Contributing<\/h2>\n<p>Here&#8217;s how to add code to this repo: <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\/Contributing\">Contributing<\/a><\/p>\n<h2>Documentation<\/h2>\n<p>The documentation was moved from this README over to the project&#8217;s <a href=\"https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui\/wiki\">wiki<\/a>.<\/p>\n<h2>Credits<\/h2>\n<p>Licenses for borrowed code can be found in <code>Settings -&gt; Licenses<\/code> screen, and also in <code>html\/licenses.html<\/code> file.<\/p>\n<ul>\n<li>Stable Diffusion &#8211; <a href=\"https:\/\/github.com\/CompVis\/stable-diffusion\">https:\/\/github.com\/CompVis\/stable-diffusion<\/a>, <a href=\"https:\/\/github.com\/CompVis\/taming-transformers\">https:\/\/github.com\/CompVis\/taming-transformers<\/a><\/li>\n<li>k-diffusion &#8211; <a href=\"https:\/\/github.com\/crowsonkb\/k-diffusion.git\">https:\/\/github.com\/crowsonkb\/k-diffusion.git<\/a><\/li>\n<li>GFPGAN &#8211; <a href=\"https:\/\/github.com\/TencentARC\/GFPGAN.git\">https:\/\/github.com\/TencentARC\/GFPGAN.git<\/a><\/li>\n<li>CodeFormer &#8211; <a 
href=\"https:\/\/github.com\/sczhou\/CodeFormer\">https:\/\/github.com\/sczhou\/CodeFormer<\/a><\/li>\n<li>ESRGAN &#8211; <a href=\"https:\/\/github.com\/xinntao\/ESRGAN\">https:\/\/github.com\/xinntao\/ESRGAN<\/a><\/li>\n<li>SwinIR &#8211; <a href=\"https:\/\/github.com\/JingyunLiang\/SwinIR\">https:\/\/github.com\/JingyunLiang\/SwinIR<\/a><\/li>\n<li>Swin2SR &#8211; <a href=\"https:\/\/github.com\/mv-lab\/swin2sr\">https:\/\/github.com\/mv-lab\/swin2sr<\/a><\/li>\n<li>LDSR &#8211; <a href=\"https:\/\/github.com\/Hafiidz\/latent-diffusion\">https:\/\/github.com\/Hafiidz\/latent-diffusion<\/a><\/li>\n<li>MiDaS &#8211; <a href=\"https:\/\/github.com\/isl-org\/MiDaS\">https:\/\/github.com\/isl-org\/MiDaS<\/a><\/li>\n<li>Ideas for optimizations &#8211; <a href=\"https:\/\/github.com\/basujindal\/stable-diffusion\">https:\/\/github.com\/basujindal\/stable-diffusion<\/a><\/li>\n<li>Cross Attention layer optimization &#8211; Doggettx &#8211; <a href=\"https:\/\/github.com\/Doggettx\/stable-diffusion\">https:\/\/github.com\/Doggettx\/stable-diffusion<\/a>, original idea for prompt editing.<\/li>\n<li>Cross Attention layer optimization &#8211; InvokeAI, lstein &#8211; <a href=\"https:\/\/github.com\/invoke-ai\/InvokeAI\">https:\/\/github.com\/invoke-ai\/InvokeAI<\/a> (originally <a href=\"http:\/\/github.com\/lstein\/stable-diffusion\">http:\/\/github.com\/lstein\/stable-diffusion<\/a>)<\/li>\n<li>Sub-quadratic Cross Attention layer optimization &#8211; Alex Birch (<a href=\"https:\/\/github.com\/Birch-san\/diffusers\/pull\/1\">https:\/\/github.com\/Birch-san\/diffusers\/pull\/1<\/a>), Amin Rezaei (<a href=\"https:\/\/github.com\/AminRezaei0x443\/memory-efficient-attention\">https:\/\/github.com\/AminRezaei0x443\/memory-efficient-attention<\/a>)<\/li>\n<li>Textual Inversion &#8211; Rinon Gal &#8211; <a href=\"https:\/\/github.com\/rinongal\/textual_inversion\">https:\/\/github.com\/rinongal\/textual_inversion<\/a> (we&#8217;re not using his code, but we are using his 
ideas).<\/li>\n<li>Idea for SD upscale &#8211; <a href=\"https:\/\/github.com\/jquesnelle\/txt2imghd\">https:\/\/github.com\/jquesnelle\/txt2imghd<\/a><\/li>\n<li>Noise generation for outpainting mk2 &#8211; <a href=\"https:\/\/github.com\/parlance-zz\/g-diffuser-bot\">https:\/\/github.com\/parlance-zz\/g-diffuser-bot<\/a><\/li>\n<li>CLIP interrogator idea and borrowing some code &#8211; <a href=\"https:\/\/github.com\/pharmapsychotic\/clip-interrogator\">https:\/\/github.com\/pharmapsychotic\/clip-interrogator<\/a><\/li>\n<li>Idea for Composable Diffusion &#8211; <a href=\"https:\/\/github.com\/energy-based-model\/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch\">https:\/\/github.com\/energy-based-model\/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch<\/a><\/li>\n<li>xformers &#8211; <a href=\"https:\/\/github.com\/facebookresearch\/xformers\">https:\/\/github.com\/facebookresearch\/xformers<\/a><\/li>\n<li>DeepDanbooru &#8211; interrogator for anime diffusers <a href=\"https:\/\/github.com\/KichangKim\/DeepDanbooru\">https:\/\/github.com\/KichangKim\/DeepDanbooru<\/a><\/li>\n<li>Sampling in float32 precision from a float16 UNet &#8211; marunine for the idea, Birch-san for the example Diffusers implementation (<a href=\"https:\/\/github.com\/Birch-san\/diffusers-play\/tree\/92feee6\">https:\/\/github.com\/Birch-san\/diffusers-play\/tree\/92feee6<\/a>)<\/li>\n<li>Instruct pix2pix &#8211; Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) &#8211; <a href=\"https:\/\/github.com\/timothybrooks\/instruct-pix2pix\">https:\/\/github.com\/timothybrooks\/instruct-pix2pix<\/a><\/li>\n<li>Security advice &#8211; RyotaK<\/li>\n<li>Initial Gradio script &#8211; posted on 4chan by an Anonymous user. 
Thank you Anonymous user.<\/li>\n<li>(You)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Stable Diffusion web UI A browser interface based on Gr [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[],"class_list":["post-76","post","type-post","status-publish","format-standard","hentry","category-deeplearning"],"_links":{"self":[{"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/posts\/76","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/comments?post=76"}],"version-history":[{"count":3,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/posts\/76\/revisions"}],"predecessor-version":[{"id":81,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/posts\/76\/revisions\/81"}],"wp:attachment":[{"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/media?parent=76"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/categories?post=76"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/my.di.cloudns.asia\/index.php\/wp-json\/wp\/v2\/tags?post=76"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}