Large Language Models

ComfyUI

Published 2024-01-21.
Time to read: 6 minutes.

This page is part of the llm collection.

After playing around a bit with Stable Diffusion WebUI by AUTOMATIC1111, I was impressed by its power, and by how absolutely horrible its user interface is. Documentation is fragmented, and the state of the art is moving so quickly that by the time you get up to speed, everything has changed and you must start the learning process over again.

ComfyUI is a next-generation user interface for Stable Diffusion. Instead of dozens of tabs, buttons and small text boxes that may or may not provide utility, ComfyUI uses a graph/node interface.

Technical people might find this new interface natural; however, non-technical people might take longer to feel comfortable with this approach.

ComfyUI is so new that it still has sharp edges.

My major complaint is that ComfyUI currently has no provision for contextual help or ability to self-diagnose. With so many types of nodes, many of which have multiple inputs and outputs, wiring problems often result in a Python stack trace, which is difficult for regular users to understand.

The ComfyUI Community Docs do not contain much, but they have this useful gem:

To make sharing easier, many Stable Diffusion interfaces, including ComfyUI, store the details of the generation flow inside the generated PNG. Many of the workflow guides you will find related to ComfyUI will also have this metadata included.

To load the associated flow of a generated image, simply load the image via the Load button in the menu, or drag and drop it into the ComfyUI window. This will automatically parse the details and load all the relevant nodes, including their settings.
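
If you are curious, you can inspect that embedded metadata yourself; ComfyUI writes it into PNG text chunks. Here is a minimal sketch, assuming exiftool is installed and that your generated image is named output.png:

Shell
$ sudo apt install libimage-exiftool-perl  # Provides exiftool
$ exiftool output.png | grep -iE 'prompt|workflow'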

The option to run ComfyUI in a Jupyter notebook seems awesome, and I hope to give that a try soon.

Installation

Assuming you are familiar with WSL/Ubuntu or native Ubuntu, this article should contain all the information you need to install, configure, and run ComfyUI on either of them.

As usual, because I generally only run native Ubuntu and WSL2 / Ubuntu, my notes on installation do not mention Macs or native Windows any more than necessary.

The official installation instructions are here. Those instructions are insufficient for most people. The installation guide published by stable-diffusion-art.com is better, but the Windows-related instructions are somewhat scrambled.

Sibling Directories

I installed ComfyUI next to stable-diffusion-webui. The instructions in this article assume that you will do the same. Both of these user interfaces are changing rapidly, from week to week. You will likely need to be able to switch between them. Sharing data between them is not only possible but is encouraged.

Your computer needs to have stable-diffusion-webui installed somewhere before following the instructions in this article. If this is not the case, please work through the Stable Diffusion WebUI by AUTOMATIC1111 now before returning to this article.

I defined an environment variable called llm, which points to the directory root of all my LLM-related programs. You could do the same by adding a line like the following in ~/.bashrc:

~/.bashrc
export llm=/mnt/f/work/llm

Activate the variable definition in the current terminal session by typing:

Shell
$ source ~/.bashrc

After I installed ComfyUI, stable-diffusion-webui, and some other programs that I am experimenting with, the $llm directory had the following contents (your computer probably does not have all of these directories):

Shell
$ tree -dL 1 $llm
/mnt/f/work/llm
├── ComfyUI
├── Fooocus
├── ollama-webui
└── stable-diffusion-webui 

At this point you just need to make $llm (the parent of stable-diffusion-webui) the current directory. After following these instructions, your computer should have a ComfyUI directory next to stable-diffusion-webui.

Git Clone ComfyUI

Shell
$ cd $llm # Change to the directory where my LLM projects live
$ git clone https://github.com/comfyanonymous/ComfyUI
Cloning into 'ComfyUI'...
remote: Enumerating objects: 9623, done.
remote: Counting objects: 100% (2935/2935), done.
remote: Compressing objects: 100% (231/231), done.
remote: Total 9623 (delta 2775), reused 2709 (delta 2704), pack-reused 6688
Receiving objects: 100% (9623/9623), 3.79 MiB | 1.50 MiB/s, done.
Resolving deltas: 100% (6510/6510), done.
$ cd ComfyUI

Let's take a look at the directories that got installed within the new ComfyUI directory:

Shell
$ tree -d -L 2
.
├── app
├── comfy
│   ├── cldm
│   ├── extra_samplers
│   ├── k_diffusion
│   ├── ldm
│   ├── sd1_tokenizer
│   ├── t2i_adapter
│   └── taesd
├── comfy_extras
│   └── chainner_models
├── custom_nodes
├── input
├── models
│   ├── checkpoints
│   ├── clip
│   ├── clip_vision
│   ├── configs
│   ├── controlnet
│   ├── diffusers
│   ├── embeddings
│   ├── gligen
│   ├── hypernetworks
│   ├── loras
│   ├── style_models
│   ├── unet
│   ├── upscale_models
│   ├── vae
│   └── vae_approx
├── notebooks
├── output
├── script_examples
├── tests
│   ├── compare
│   └── inference
├── tests-ui
│   ├── tests
│   └── utils
└── web
    ├── extensions
    ├── lib
    ├── scripts
    └── types 

Python Virtual Environment

I created a Python virtual environment (venv) for ComfyUI, inside the ComfyUI directory that was created by git clone. You should too. The .gitignore file already has an entry for venv, so use that name, as shown below.

For some reason the Mac manual installation instructions guide the reader through the process of doing this, but the Windows installation instructions make no mention of Python virtual environments. It is quite easy to make a venv, and you definitely should do so:

Shell
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $

Notice how the shell prompt changed after the virtual environment was activated. This virtual environment should always be activated before running ComfyUI. The comfyui script presented later in this article takes care of that automatically.

Install Dependent Python Libraries

With a venv in place for ComfyUI, I followed the manual installation instructions for installing the dependent Python libraries. These libraries are stored within the virtual environment, which prevents version conflicts with the Python libraries used by other projects.

Shell
$ pip install torch torchvision torchaudio \
  --extra-index-url https://download.pytorch.org/whl/cu121
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch
  Downloading https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp311-cp311-linux_x86_64.whl (2200.7 MB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.7/2.2 GB 117.5 MB/s eta 0:00:13
Collecting torchvision
  Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp311-cp311-linux_x86_64.whl (6.8 MB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.8/6.8 MB 80.7 MB/s eta 0:00:00
Collecting torchaudio
  Downloading https://download.pytorch.org/whl/cu121/torchaudio-2.1.2%2Bcu121-cp311-cp311-linux_x86_64.whl (3.3 MB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 64.7 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (3.13.1)
Requirement already satisfied: typing-extensions in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (4.9.0)
Requirement already satisfied: sympy in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (1.12)
Requirement already satisfied: networkx in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (3.1)
Requirement already satisfied: jinja2 in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (3.1.2)
Requirement already satisfied: fsspec in /home/mslinn/anaconda3/lib/python3.11/site-packages (from torch) (2023.10.0)
Collecting triton==2.1.0 (from torch)
  Downloading https://download.pytorch.org/whl/triton-2.1.0-0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.2/89.2 MB 56.4 MB/s eta 0:00:00
Collecting numpy (from torchvision)
  Obtaining dependency information for numpy from https://files.pythonhosted.org/packages/5a/62/007b63f916aca1d27f5fede933fda3315d931ff9b2c28b9c2cf388cd8edb/numpy-1.26.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached numpy-1.26.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
Collecting requests (from torchvision)
  Obtaining dependency information for requests from https://files.pythonhosted.org/packages/70/8e/0e2d847013cb52cd35b38c009bb167a1a26b2ce6cd6965bf26b47bc0bf44/requests-2.31.0-py3-none-any.whl.metadata
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Obtaining dependency information for pillow!=8.3.*,>=5.3.0 from https://files.pythonhosted.org/packages/66/9c/2e1877630eb298bbfd23f90deeec0a3f682a4163d5ca9f178937de57346c/pillow-10.2.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata
  Using cached pillow-10.2.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (9.7 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Obtaining dependency information for MarkupSafe>=2.0 from https://files.pythonhosted.org/packages/d3/0a/c6dfffacc5a9a17c97019cb7cbec67e5abfb65c59a58ecba270fa224f88d/MarkupSafe-2.1.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading MarkupSafe-2.1.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Obtaining dependency information for charset-normalizer<4,>=2 from https://files.pythonhosted.org/packages/40/26/f35951c45070edc957ba40a5b1db3cf60a9dbb1b350c2d5bef03e01e61de/charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Obtaining dependency information for idna<4,>=2.5 from https://files.pythonhosted.org/packages/c2/e7/a82b05cf63a603df6e68d59ae6a68bf5064484a0718ea5033660af4b54a9/idna-3.6-py3-none-any.whl.metadata
  Using cached idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Obtaining dependency information for urllib3<3,>=1.21.1 from https://files.pythonhosted.org/packages/96/94/c31f58c7a7f470d5665935262ebd7455c7e4c7782eb525658d3dbf4b9403/urllib3-2.1.0-py3-none-any.whl.metadata
  Using cached urllib3-2.1.0-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Obtaining dependency information for certifi>=2017.4.17 from https://files.pythonhosted.org/packages/64/62/428ef076be88fa93716b576e4a01f919d25968913e817077a386fcbe4f42/certifi-2023.11.17-py3-none-any.whl.metadata
  Using cached certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)
Collecting mpmath>=0.19 (from sympy->torch)
  Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached pillow-10.2.0-cp311-cp311-manylinux_2_28_x86_64.whl (4.5 MB)
Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Using cached fsspec-2023.12.2-py3-none-any.whl (168 kB)
Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Using cached networkx-3.2.1-py3-none-any.whl (1.6 MB)
Using cached numpy-1.26.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Using cached certifi-2023.11.17-py3-none-any.whl (162 kB)
Using cached charset_normalizer-3.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (140 kB)
Using cached idna-3.6-py3-none-any.whl (61 kB)
Downloading MarkupSafe-2.1.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (28 kB)
Using cached urllib3-2.1.0-py3-none-any.whl (104 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, triton, requests, jinja2, torch, torchvision, torchaudio
Successfully installed MarkupSafe-2.1.4 certifi-2023.11.17 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2023.12.2 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.3 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.1.2+cu121 torchaudio-2.1.2+cu121 torchvision-0.16.2+cu121 triton-2.1.0 typing-extensions-4.9.0 urllib3-2.1.0 

Installing the other dependencies is easy, but takes a few minutes to complete:

Shell
$ pip install -r requirements.txt
Requirement already satisfied: torch in ./.comfyui_env/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (2.1.2+cu121)
Collecting torchsde (from -r requirements.txt (line 2))
  Obtaining dependency information for torchsde from https://files.pythonhosted.org/packages/dd/1f/b67ebd7e19ffe259f05d3cf4547326725c3113d640c277030be3e9998d6f/torchsde-0.2.6-py3-none-any.whl.metadata
  Using cached torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Requirement already satisfied: torchvision in ./.comfyui_env/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (0.16.2+cu121)
Collecting einops (from -r requirements.txt (line 4))
  Obtaining dependency information for einops from https://files.pythonhosted.org/packages/29/0b/2d1c0ebfd092e25935b86509a9a817159212d82aa43d7fb07eca4eeff2c2/einops-0.7.0-py3-none-any.whl.metadata
  Downloading einops-0.7.0-py3-none-any.whl.metadata (13 kB)
Collecting transformers>=4.25.1 (from -r requirements.txt (line 5))
  Obtaining dependency information for transformers>=4.25.1 from https://files.pythonhosted.org/packages/20/0a/739426a81f7635b422fbe6cb8d1d99d1235579a6ac8024c13d743efa6847/transformers-4.36.2-py3-none-any.whl.metadata
  Downloading transformers-4.36.2-py3-none-any.whl.metadata (126 kB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 126.8/126.8 kB 6.9 MB/s eta 0:00:00
Collecting safetensors>=0.3.0 (from -r requirements.txt (line 6))
  Obtaining dependency information for safetensors>=0.3.0 from https://files.pythonhosted.org/packages/be/49/985429e1d1915df64d01603abae3c0c2e7deb77fd9c5d460768ad626ff84/safetensors-0.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading safetensors-0.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Collecting aiohttp (from -r requirements.txt (line 7))
  Obtaining dependency information for aiohttp from https://files.pythonhosted.org/packages/69/8d/769a1e9cdce1c9774dd2edc8f4e94c759256246066e5263de917e5b22a0a/aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.4 kB)
Collecting pyyaml (from -r requirements.txt (line 8))
  Obtaining dependency information for pyyaml from https://files.pythonhosted.org/packages/7b/5e/efd033ab7199a0b2044dab3b9f7a4f6670e6a52c089de572e928d2873b06/PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Requirement already satisfied: Pillow in ./.comfyui_env/lib/python3.11/site-packages (from -r requirements.txt (line 9)) (10.2.0)
Collecting scipy (from -r requirements.txt (line 10))
  Obtaining dependency information for scipy from https://files.pythonhosted.org/packages/d4/b8/7169935f9a2ea9e274ad8c21d6133d492079e6ebc3fc69a915c2375616b0/scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 kB 3.3 MB/s eta 0:00:00
Collecting tqdm (from -r requirements.txt (line 11))
  Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata
  Using cached tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
Collecting psutil (from -r requirements.txt (line 12))
  Obtaining dependency information for psutil from https://files.pythonhosted.org/packages/c5/4f/0e22aaa246f96d6ac87fe5ebb9c5a693fbe8877f537a1022527c47ca43c5/psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (21 kB)
Requirement already satisfied: filelock in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (3.13.1)
Requirement already satisfied: typing-extensions in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (4.9.0)
Requirement already satisfied: sympy in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (1.12)
Requirement already satisfied: networkx in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (3.2.1)
Requirement already satisfied: jinja2 in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (3.1.3)
Requirement already satisfied: fsspec in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (2023.12.2)
Requirement already satisfied: triton==2.1.0 in ./.comfyui_env/lib/python3.11/site-packages (from torch->-r requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: numpy>=1.19 in ./.comfyui_env/lib/python3.11/site-packages (from torchsde->-r requirements.txt (line 2)) (1.26.3)
Collecting trampoline>=0.1.2 (from torchsde->-r requirements.txt (line 2))
  Using cached trampoline-0.1.2-py3-none-any.whl (5.2 kB)
Requirement already satisfied: requests in ./.comfyui_env/lib/python3.11/site-packages (from torchvision->-r requirements.txt (line 3)) (2.31.0)
Collecting huggingface-hub<1.0,>=0.19.3 (from transformers>=4.25.1->-r requirements.txt (line 5))
  Obtaining dependency information for huggingface-hub<1.0,>=0.19.3 from https://files.pythonhosted.org/packages/3d/0a/aed3253a9ce63d9c90829b1d36bc44ad966499ff4f5827309099c8c9184b/huggingface_hub-0.20.2-py3-none-any.whl.metadata
  Using cached huggingface_hub-0.20.2-py3-none-any.whl.metadata (12 kB)
Collecting packaging>=20.0 (from transformers>=4.25.1->-r requirements.txt (line 5))
  Obtaining dependency information for packaging>=20.0 from https://files.pythonhosted.org/packages/ec/1a/610693ac4ee14fcdf2d9bf3c493370e4f2ef7ae2e19217d7a237ff42367d/packaging-23.2-py3-none-any.whl.metadata
  Using cached packaging-23.2-py3-none-any.whl.metadata (3.2 kB)
Collecting regex!=2019.12.17 (from transformers>=4.25.1->-r requirements.txt (line 5))
  Obtaining dependency information for regex!=2019.12.17 from https://files.pythonhosted.org/packages/8d/6b/2f6478814954c07c04ba60b78d688d3d7bab10d786e0b6c1db607e4f6673/regex-2023.12.25-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached regex-2023.12.25-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (40 kB)
Collecting tokenizers<0.19,>=0.14 (from transformers>=4.25.1->-r requirements.txt (line 5))
  Obtaining dependency information for tokenizers<0.19,>=0.14 from https://files.pythonhosted.org/packages/15/3b/879231a9a80e52a2bd0fcfc7167d5fa31324463a6063f04b5945667a8231/tokenizers-0.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Downloading tokenizers-0.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.7 kB)
Collecting attrs>=17.3.0 (from aiohttp->-r requirements.txt (line 7))
  Obtaining dependency information for attrs>=17.3.0 from https://files.pythonhosted.org/packages/e0/44/827b2a91a5816512fcaf3cc4ebc465ccd5d598c45cefa6703fcf4a79018f/attrs-23.2.0-py3-none-any.whl.metadata
  Using cached attrs-23.2.0-py3-none-any.whl.metadata (9.5 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->-r requirements.txt (line 7))
  Using cached multidict-6.0.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (117 kB)
Collecting yarl<2.0,>=1.0 (from aiohttp->-r requirements.txt (line 7))
  Obtaining dependency information for yarl<2.0,>=1.0 from https://files.pythonhosted.org/packages/9f/ea/94ad7d8299df89844e666e4aa8a0e9b88e02416cd6a7dd97969e9eae5212/yarl-1.9.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached yarl-1.9.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (31 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->-r requirements.txt (line 7))
  Obtaining dependency information for frozenlist>=1.1.1 from https://files.pythonhosted.org/packages/b3/c9/0bc5ee7e1f5cc7358ab67da0b7dfe60fbd05c254cea5c6108e7d1ae28c63/frozenlist-1.4.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
  Using cached frozenlist-1.4.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->-r requirements.txt (line 7))
  Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Requirement already satisfied: idna>=2.0 in ./.comfyui_env/lib/python3.11/site-packages (from yarl<2.0,>=1.0->aiohttp->-r requirements.txt (line 7)) (3.6)
Requirement already satisfied: MarkupSafe>=2.0 in ./.comfyui_env/lib/python3.11/site-packages (from jinja2->torch->-r requirements.txt (line 1)) (2.1.4)
Requirement already satisfied: charset-normalizer<4,>=2 in ./.comfyui_env/lib/python3.11/site-packages (from requests->torchvision->-r requirements.txt (line 3)) (3.3.2)
Requirement already satisfied: urllib3<3,>=1.21.1 in ./.comfyui_env/lib/python3.11/site-packages (from requests->torchvision->-r requirements.txt (line 3)) (2.1.0)
Requirement already satisfied: certifi>=2017.4.17 in ./.comfyui_env/lib/python3.11/site-packages (from requests->torchvision->-r requirements.txt (line 3)) (2023.11.17)
Requirement already satisfied: mpmath>=0.19 in ./.comfyui_env/lib/python3.11/site-packages (from sympy->torch->-r requirements.txt (line 1)) (1.3.0)
Using cached torchsde-0.2.6-py3-none-any.whl (61 kB)
Downloading einops-0.7.0-py3-none-any.whl (44 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 44.6/44.6 kB 4.5 MB/s eta 0:00:00
Downloading transformers-4.36.2-py3-none-any.whl (8.2 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.2/8.2 MB 89.1 MB/s eta 0:00:00
Downloading safetensors-0.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 61.7 MB/s eta 0:00:00
Using cached aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
Using cached PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (757 kB)
Downloading scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (38.4 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.4/38.4 MB 34.0 MB/s eta 0:00:00
Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)
Downloading psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (288 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 288.2/288.2 kB 23.5 MB/s eta 0:00:00
Using cached attrs-23.2.0-py3-none-any.whl (60 kB)
Using cached frozenlist-1.4.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (272 kB)
Using cached huggingface_hub-0.20.2-py3-none-any.whl (330 kB)
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Using cached regex-2023.12.25-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (785 kB)
Downloading tokenizers-0.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.8 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.8/3.8 MB 26.8 MB/s eta 0:00:00
Using cached yarl-1.9.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (328 kB)
Installing collected packages: trampoline, tqdm, scipy, safetensors, regex, pyyaml, psutil, packaging, multidict, frozenlist, einops, attrs, yarl, huggingface-hub, aiosignal, torchsde, tokenizers, aiohttp, transformers
Successfully installed aiohttp-3.9.1 aiosignal-1.3.1 attrs-23.2.0 einops-0.7.0 frozenlist-1.4.1 huggingface-hub-0.20.2 multidict-6.0.4 packaging-23.2 psutil-5.9.8 pyyaml-6.0.1 regex-2023.12.25 safetensors-0.4.1 scipy-1.12.0 tokenizers-0.15.0 torchsde-0.2.6 tqdm-4.66.1 trampoline-0.1.2 transformers-4.36.2 yarl-1.9.4 

Configuration

I was happy to see a section in the installation instructions that discussed Sharing models between AUTOMATIC1111 and ComfyUI. The documentation says:

You can use this technique to share LoRA, textual inversions, etc between AUTOMATIC1111 and ComfyUI.

You only need to do two simple things to make this happen.

First, rename extra_model_paths.yaml.example in the ComfyUI directory to extra_model_paths.yaml by typing:

Shell
$ mv extra_model_paths.yaml{.example,}
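
The Bash brace expansion in the above command expands to mv extra_model_paths.yaml.example extra_model_paths.yaml; in other words, it simply renames the file.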

The file extra_model_paths.yaml will look like this:

extra_model_paths.yaml
#Rename this to extra_model_paths.yaml and ComfyUI will load it

#config for a1111 ui
#all you have to do is change the base_path to where yours is installed
a111:
    base_path: path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    configs: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    hypernetworks: models/hypernetworks
    controlnet: models/ControlNet

#config for comfyui
#your base path should be either an existing comfy install or a central folder where you store all of your models, loras, etc.
#comfyui:
#    base_path: path/to/comfyui/
#    checkpoints: models/checkpoints/
#    clip: models/clip/
#    clip_vision: models/clip_vision/
#    configs: models/configs/
#    controlnet: models/controlnet/
#    embeddings: models/embeddings/
#    loras: models/loras/
#    upscale_models: models/upscale_models/
#    vae: models/vae/

#other_ui:
#    base_path: path/to/ui
#    checkpoints: models/checkpoints
#    gligen: models/gligen
#    custom_nodes: path/custom_nodes
😁

The example file as provided contains relative paths, but the documentation shows absolute paths. After some experimentation, I found that this software works with both relative and absolute paths.
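
For example, given the sibling-directory layout described earlier, either of these base_path values should work (the absolute form assumes my /mnt/f/work/llm layout):

extra_model_paths.yaml
a111:
    base_path: ../stable-diffusion-webui/
    # base_path: /mnt/f/work/llm/stable-diffusion-webui/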

Second, you should change path/to/stable-diffusion-webui/ in this file to ../stable-diffusion-webui/. Here is an incantation that you can use to make the change:

Shell
$ OLD=path/to/stable-diffusion-webui/
$ NEW=../stable-diffusion-webui/
$ sed -i -e "s^$OLD^$NEW^" extra_model_paths.yaml

Configuration complete!

Aside: At first, it was unclear to me whether the comments in extra_model_paths.yaml meant that I had to make more changes to it in order to share files between the UIs. Then I noticed that the ComfyUI directory that holds checkpoints is models/checkpoints, while the corresponding AUTOMATIC1111 directory is models/Stable-diffusion. The configuration file, extra_model_paths.yaml, uses the AUTOMATIC1111 names (for example, checkpoints: models/Stable-diffusion), which indicates that setting base_path causes all the relative paths that follow to refer to the AUTOMATIC1111 directories. This makes sense.

😁

When ComfyUI starts up it generates a detailed message that indicates exactly where it is reading files from. I found that the message clearly showed that the stable-diffusion-webui files were being picked up by ComfyUI.
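
The same information is written to comfyui.log in the ComfyUI directory, so a command along these lines should list the shared search paths after a run (the exact paths will differ on your machine):

Shell
$ grep 'Adding extra search path' $llm/ComfyUI/comfyui.log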

Extensions and Updating

The ComfyUI Manager extension allows you to update ComfyUI and easily add more extensions. This section discusses how to install it.

You are probably not running ComfyUI on your computer yet. If this is the case, change to your ComfyUI installation directory and skip to the git clone step.

Otherwise, if ComfyUI is already running in a terminal session, stop it by typing CTRL-C into the terminal session a few times:

Shell
$ CTRL-C
Stopped server 
$ CTRL-C
$ 
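
If ComfyUI is running in another terminal session, or in the background, something like the following should stop it. This assumes that no unrelated process has python main.py in its command line, so check with pgrep first:

Shell
$ pgrep -af 'python main.py'  # Preview which processes would match
$ pkill -f 'python main.py'   # Stop them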

Now clone the ComfyUI-Manager git repository into the custom_nodes/ subdirectory of ComfyUI by typing:

Shell
$ git clone https://github.com/ltdrdata/ComfyUI-Manager.git \
  custom_nodes/ComfyUI-Manager
Cloning into 'custom_nodes/ComfyUI-Manager'...
remote: Enumerating objects: 5820, done.
remote: Counting objects: 100% (585/585), done.
remote: Compressing objects: 100% (197/197), done.
remote: Total 5820 (delta 410), reused 489 (delta 384), pack-reused 5235
Receiving objects: 100% (5820/5820), 4.36 MiB | 1.83 MiB/s, done.
Resolving deltas: 100% (4126/4126), done. 

The Mac installation instructions tell readers to download the Stable Diffusion v1.5 model. That is a good idea; however, because I wanted to share that model with the stable-diffusion-webui user interface, I typed the following:

Shell
$ DEST=../stable-diffusion-webui/models/Stable-diffusion/
$ HUG=https://huggingface.co
$ SDCK=$HUG/runwayml/stable-diffusion-v1-5/resolve/main
$ URL=$SDCK/v1-5-pruned-emaonly.ckpt
$ wget -P "$DEST" $URL
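
Once the download completes, it is worth confirming that the checkpoint landed where both user interfaces expect it:

Shell
$ ls -lh "$DEST"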

Running ComfyUI

The ComfyUI help message is:

Shell
$ python main.py -h
** ComfyUI startup time: 2024-01-22 20:19:38.581302
** Platform: Linux
** Python version: 3.11.5 (main, Sep 11 2023, 13:54:46) [GCC 11.2.0]
** Python executable: /home/mslinn/anaconda3/bin/python
** Log path: /mnt/f/work/llm/ComfyUI/comfyui.log
Prestartup times for custom nodes: 0.0 seconds: /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager
usage: main.py [-h] [--listen [IP]] [--port PORT] [--enable-cors-header [ORIGIN]] [--max-upload-size MAX_UPLOAD_SIZE] [--extra-model-paths-config PATH [PATH ...]] [--output-directory OUTPUT_DIRECTORY] [--temp-directory TEMP_DIRECTORY] [--input-directory INPUT_DIRECTORY] [--auto-launch] [--disable-auto-launch] [--cuda-device DEVICE_ID] [--cuda-malloc | --disable-cuda-malloc] [--dont-upcast-attention] [--force-fp32 | --force-fp16] [--bf16-unet | --fp16-unet | --fp8_e4m3fn-unet | --fp8_e5m2-unet] [--fp16-vae | --fp32-vae | --bf16-vae] [--cpu-vae] [--fp8_e4m3fn-text-enc | --fp8_e5m2-text-enc | --fp16-text-enc | --fp32-text-enc] [--directml [DIRECTML_DEVICE]] [--disable-ipex-optimize] [--preview-method [none,auto,latent2rgb,taesd]] [--use-split-cross-attention | --use-quad-cross-attention | --use-pytorch-cross-attention] [--disable-xformers] [--gpu-only | --highvram | --normalvram | --lowvram | --novram | --cpu] [--disable-smart-memory] [--deterministic] [--dont-print-server] [--quick-test-for-ci] [--windows-standalone-build] [--disable-metadata] [--multi-user]
options:
  -h, --help            show this help message and exit
  --listen [IP]         Specify the IP address to listen on (default: 127.0.0.1). If --listen is provided without an argument, it defaults to 0.0.0.0. (listens on all)
  --port PORT           Set the listen port.
  --enable-cors-header [ORIGIN]
                        Enable CORS (Cross-Origin Resource Sharing) with optional origin or allow all with default '*'.
  --max-upload-size MAX_UPLOAD_SIZE
                        Set the maximum upload size in MB.
  --extra-model-paths-config PATH [PATH ...]
                        Load one or more extra_model_paths.yaml files.
  --output-directory OUTPUT_DIRECTORY
                        Set the ComfyUI output directory.
  --temp-directory TEMP_DIRECTORY
                        Set the ComfyUI temp directory (default is in the ComfyUI directory).
  --input-directory INPUT_DIRECTORY
                        Set the ComfyUI input directory.
  --auto-launch         Automatically launch ComfyUI in the default browser.
  --disable-auto-launch
                        Disable auto launching the browser.
  --cuda-device DEVICE_ID
                        Set the id of the cuda device this instance will use.
  --cuda-malloc         Enable cudaMallocAsync (enabled by default for torch 2.0 and up).
  --disable-cuda-malloc
                        Disable cudaMallocAsync.
  --dont-upcast-attention
                        Disable upcasting of attention. Can boost speed but increase the chances of black images.
  --force-fp32          Force fp32 (If this makes your GPU work better please report it).
  --force-fp16          Force fp16.
  --bf16-unet           Run the UNET in bf16. This should only be used for testing stuff.
  --fp16-unet           Store unet weights in fp16.
  --fp8_e4m3fn-unet     Store unet weights in fp8_e4m3fn.
  --fp8_e5m2-unet       Store unet weights in fp8_e5m2.
  --fp16-vae            Run the VAE in fp16, might cause black images.
  --fp32-vae            Run the VAE in full precision fp32.
  --bf16-vae            Run the VAE in bf16.
  --cpu-vae             Run the VAE on the CPU.
  --fp8_e4m3fn-text-enc
                        Store text encoder weights in fp8 (e4m3fn variant).
  --fp8_e5m2-text-enc   Store text encoder weights in fp8 (e5m2 variant).
  --fp16-text-enc       Store text encoder weights in fp16.
  --fp32-text-enc       Store text encoder weights in fp32.
  --directml [DIRECTML_DEVICE]
                        Use torch-directml.
  --disable-ipex-optimize
                        Disables ipex.optimize when loading models with Intel GPUs.
  --preview-method [none,auto,latent2rgb,taesd]
                        Default preview method for sampler nodes.
  --use-split-cross-attention
                        Use the split cross attention optimization. Ignored when xformers is used.
  --use-quad-cross-attention
                        Use the sub-quadratic cross attention optimization. Ignored when xformers is used.
  --use-pytorch-cross-attention
                        Use the new pytorch 2.0 cross attention function.
  --disable-xformers    Disable xformers.
  --gpu-only            Store and run everything (text encoders/CLIP models, etc... on the GPU).
  --highvram            By default models will be unloaded to CPU memory after being used. This option keeps them in GPU memory.
  --normalvram          Used to force normal vram use if lowvram gets automatically enabled.
  --lowvram             Split the unet in parts to use less vram.
  --novram              When lowvram isn't enough.
  --cpu                 To use the CPU for everything (slow).
  --disable-smart-memory
                        Force ComfyUI to agressively offload to regular ram instead of keeping models in vram when it can.
  --deterministic       Make pytorch use slower deterministic algorithms when it can. Note that this might not make images deterministic in all cases.
  --dont-print-server   Don't print server output.
  --quick-test-for-ci   Quick test for CI.
  --windows-standalone-build
                        Windows standalone build: Enable convenient things that most people using the standalone windows build will probably enjoy (like auto opening the page on startup).
  --disable-metadata    Disable saving prompt metadata in files.
  --multi-user          Enables per-user storage.

Useful Options

--auto-launch
This option opens the default web browser with the default ComfyUI project on startup.
--highvram
By default models will be unloaded to CPU memory after being used. This option keeps them in GPU memory. This speeds up multiple iterations.
--output-directory
The default output directory is ~/Downloads. Change it with this option.
--preview-method
Use --preview-method auto to enable previews. The default installation includes a fast but low-resolution latent preview method. To enable higher-quality previews with TAESD, download the taesd_decoder.pth (for SD1.x and SD2.x) and taesdxl_decoder.pth (for SDXL) models and place them in the models/vae_approx folder (a download example follows this list). Once they are installed, restart ComfyUI to enable high-quality previews.
--temp-directory
The default directory for temporary files is the installation directory, which is unfortunate. Change it with this option. For example: --temp-directory /tmp
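
The TAESD decoder downloads mentioned under --preview-method might look something like the following. This is a sketch, assuming the files are still published in the madebyollin/taesd GitHub repository; verify the URLs before relying on them:

Shell
$ cd $llm/ComfyUI
$ wget -P models/vae_approx \
    https://github.com/madebyollin/taesd/raw/main/taesd_decoder.pth \
    https://github.com/madebyollin/taesd/raw/main/taesdxl_decoder.pth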

Comfyui Script

Below is a script called comfyui, which runs ComfyUI.

  • Sets the output directory to the current directory.
  • Uses the GPU if present, and keeps the models loaded in GPU memory between iterations.
  • Activates the ComfyUI Python virtual environment (venv) before launching.
  • Creates a temporary directory, which is automatically removed after ComfyUI exits.
  • Requires an environment variable called comfyui to be set, which points at the directory that ComfyUI was installed into. You could set the variable in ~/.bashrc, like this:
    ~/.bashrc
    export comfyui=/mnt/f/work/llm/ComfyUI

    I broke the definition into two environment variables:
    ~/.bashrc
    export llm="/mnt/f/work/llm"
    export comfyui="$llm/ComfyUI"

And now, the comfyui script, in all its glory:

#!/bin/bash

RED='\033[0;31m'
RESET='\033[0m' # No Color

if [ -z "$comfyui" ]; then
  printf "${RED}Error: The comfyui environment variable was not set.
Please see https://mslinn.com/llm/7400-comfyui.html#installation${RESET}
"
  exit 2
fi
if [ ! -d "$comfyui" ]; then
  printf "${RED}Error: The directory that the comfyui environment variable points to, '$comfyui', does not exist.${RESET}\n"
  exit 3
fi
if [ ! -f "$comfyui/main.py" ]; then
  printf "${RED}\
The directory that the comfyui environment variable points to, '$comfyui', does not contain main.py.
Is this actually the ComfyUI directory?${RESET}
"
  exit 3
fi

OUT_DIR="$(pwd)"

WORK_DIR="$(mktemp -d)"
if [[ ! "$WORK_DIR" || ! -d "$WORK_DIR" ]]; then
  printf "${RED}Error: Could not create temporary directory.${RESET}\n"
  exit 1
fi

function cleanup {
  rm -rf "$WORK_DIR"
}

# register the cleanup function to be called on the EXIT signal
trap cleanup EXIT

cd "$comfyui" || exit

# Activate the Python virtual environment created during installation, if present
if [ -f venv/bin/activate ]; then
  source venv/bin/activate
fi

python main.py \
  --auto-launch \
  --highvram \
  --output-directory "$OUT_DIR" \
  --preview-method auto \
  --temp-directory "$WORK_DIR"
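
To invoke the script as shown in the next section, save it as comfyui somewhere on your PATH and make it executable. For example, assuming ~/.local/bin is on your PATH:

Shell
$ chmod +x comfyui
$ mv comfyui ~/.local/bin/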

Launching

This is the output from launching ComfyUI with the comfyui script:

Shell
$ comfyui
** ComfyUI startup time: 2024-01-22 18:08:35.283187
** Platform: Linux
** Python version: 3.11.6 (main, Oct  8 2023, 05:06:43) [GCC 13.2.0]
** Python executable: /mnt/f/work/llm/ComfyUI/.comfyui_env/bin/python
** Log path: /mnt/f/work/llm/ComfyUI/comfyui.log

Prestartup times for custom nodes:
    0.1 seconds: /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 12288 MB, total RAM 7942 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Adding extra search path checkpoints ../stable-diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs ../stable-diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae ../stable-diffusion/stable-diffusion-webui/models/VAE
Adding extra search path loras ../stable-diffusion/stable-diffusion-webui/models/Lora
Adding extra search path loras ../stable-diffusion/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models ../stable-diffusion/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models ../stable-diffusion/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models ../stable-diffusion/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings ../stable-diffusion/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks ../stable-diffusion/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet ../stable-diffusion/stable-diffusion-webui/models/ControlNet
### Loading: ComfyUI-Manager (V2.2.5)
## ComfyUI-Manager: installing dependencies
  Collecting GitPython (from -r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 1))
    Obtaining dependency information for GitPython from https://files.pythonhosted.org/packages/45/c6/a637a7a11d4619957cb95ca195168759a4502991b1b91c13d3203ffc3748/GitPython-3.1.41-py3-none-any.whl.metadata
    Downloading GitPython-3.1.41-py3-none-any.whl.metadata (14 kB)
  Collecting matrix-client==0.4.0 (from -r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2))
    Downloading matrix_client-0.4.0-py2.py3-none-any.whl (43 kB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.5/43.5 kB 8.2 MB/s eta 0:00:00
  Requirement already satisfied: transformers in ./.comfyui_env/lib/python3.11/site-packages (from -r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (4.36.2)
  Requirement already satisfied: huggingface-hub>0.20 in ./.comfyui_env/lib/python3.11/site-packages (from -r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 4)) (0.20.2)
  Requirement already satisfied: requests~=2.22 in ./.comfyui_env/lib/python3.11/site-packages (from matrix-client==0.4.0->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2)) (2.31.0)
  Collecting urllib3~=1.21 (from matrix-client==0.4.0->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2))
    Obtaining dependency information for urllib3~=1.21 from https://files.pythonhosted.org/packages/b0/53/aa91e163dcfd1e5b82d8a890ecf13314e3e149c05270cc644581f77f17fd/urllib3-1.26.18-py2.py3-none-any.whl.metadata
    Downloading urllib3-1.26.18-py2.py3-none-any.whl.metadata (48 kB)
      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 16.0 MB/s eta 0:00:00
  Collecting gitdb<5,>=4.0.1 (from GitPython->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 1))
    Obtaining dependency information for gitdb<5,>=4.0.1 from https://files.pythonhosted.org/packages/fd/5b/8f0c4a5bb9fd491c277c21eff7ccae71b47d43c4446c9d0c6cff2fe8c2c4/gitdb-4.0.11-py3-none-any.whl.metadata
    Using cached gitdb-4.0.11-py3-none-any.whl.metadata (1.2 kB)
  Requirement already satisfied: filelock in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (3.13.1)
  Requirement already satisfied: numpy>=1.17 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (1.26.3)
  Requirement already satisfied: packaging>=20.0 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (23.2)
  Requirement already satisfied: pyyaml>=5.1 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (6.0.1)
  Requirement already satisfied: regex!=2019.12.17 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (2023.12.25)
  Requirement already satisfied: tokenizers<0.19,>=0.14 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (0.15.0)
  Requirement already satisfied: safetensors>=0.3.1 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (0.4.1)
  Requirement already satisfied: tqdm>=4.27 in ./.comfyui_env/lib/python3.11/site-packages (from transformers->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 3)) (4.66.1)
  Requirement already satisfied: fsspec>=2023.5.0 in ./.comfyui_env/lib/python3.11/site-packages (from huggingface-hub>0.20->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 4)) (2023.12.2)
  Requirement already satisfied: typing-extensions>=3.7.4.3 in ./.comfyui_env/lib/python3.11/site-packages (from huggingface-hub>0.20->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 4)) (4.9.0)
  Collecting smmap<6,>=3.0.1 (from gitdb<5,>=4.0.1->GitPython->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 1))
    Obtaining dependency information for smmap<6,>=3.0.1 from https://files.pythonhosted.org/packages/a7/a5/10f97f73544edcdef54409f1d839f6049a0d79df68adbc1ceb24d1aaca42/smmap-5.0.1-py3-none-any.whl.metadata
    Using cached smmap-5.0.1-py3-none-any.whl.metadata (4.3 kB)
  Requirement already satisfied: charset-normalizer<4,>=2 in ./.comfyui_env/lib/python3.11/site-packages (from requests~=2.22->matrix-client==0.4.0->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2)) (3.3.2)
  Requirement already satisfied: idna<4,>=2.5 in ./.comfyui_env/lib/python3.11/site-packages (from requests~=2.22->matrix-client==0.4.0->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2)) (3.6)
  Requirement already satisfied: certifi>=2017.4.17 in ./.comfyui_env/lib/python3.11/site-packages (from requests~=2.22->matrix-client==0.4.0->-r /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt (line 2)) (2023.11.17)
  Downloading GitPython-3.1.41-py3-none-any.whl (196 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 196.4/196.4 kB 51.6 MB/s eta 0:00:00
  Using cached gitdb-4.0.11-py3-none-any.whl (62 kB)
  Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.8/143.8 kB 53.2 MB/s eta 0:00:00
  Using cached smmap-5.0.1-py3-none-any.whl (24 kB)
  Installing collected packages: urllib3, smmap, gitdb, matrix-client, GitPython
    Attempting uninstall: urllib3
      Found existing installation: urllib3 2.1.0
      Uninstalling urllib3-2.1.0:
        Successfully uninstalled urllib3-2.1.0
  Successfully installed GitPython-3.1.41 gitdb-4.0.11 matrix-client-0.4.0 smmap-5.0.1 urllib3-1.26.18
## ComfyUI-Manager: installing dependencies done.
### ComfyUI Revision: 1923 [ef5a28b5] | Released on '2024-01-20'
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json

Import times for custom nodes:
    6.1 seconds: /mnt/f/work/llm/ComfyUI/custom_nodes/ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json 

If a browser window does not open automatically, point your web browser to http://localhost:8188. You will see the default project.

The ComfyUI Queue panel will have a button for the Manager extension that you just installed.

Clicking on the Manager button will open the ComfyUI Manager Menu panel:

As ComfyUI runs, it creates a file called comfyui.log in its directory, and backs up any previous file of that name to comfyui.prev.log and comfyui.prev2.log.

The Linux Filesystem Hierarchy Standard states that log files should be placed in /var/log. ComfyUI does not make any provision for that. This deficiency means that production installations are more likely to have permission-related security problems.
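
Until that changes, a partial workaround is to make the log visible from the standard location with a symlink. This is just a sketch, and assumes the comfyui environment variable described earlier:

Shell
$ sudo mkdir -p /var/log/comfyui
$ sudo ln -sf "$comfyui/comfyui.log" /var/log/comfyui/comfyui.log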

This is a good video on first steps with ComfyUI:

More Videos

The following videos are by Scott Detweiler, a quality assurance staff member at stability.ai. Scott's job is to ensure that Stable Diffusion and related software works properly. He is obviously an experienced user, although he can be hard to understand because he speaks rather quickly and often does not articulate words clearly. Check out his portfolio.

Usage Notes

The ComfyUI README has a section entitled Notes. These are actually usage notes, with suggestions on how to make prompts, including wildcard and dynamic prompts.
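
For example, at the time of writing, those notes describe weighting parts of a prompt with syntax like (masterpiece:1.2) and randomizing prompts with wildcard syntax like {day|night}.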
