Leveraging SwarmUI & Stable Diffusion 3 on Cloud Platforms: A Guide to Kaggle (No-Cost), Massed Compute & RunPod


by Furkan Gözükara, July 5th, 2024

Too Long; Didn't Read

This tutorial demonstrates the installation and usage of #SwarmUI on various cloud platforms. SwarmUI operates on the #ComfyUI backend. This instructional video will enable you to utilize SwarmUI on cloud GPU services as seamlessly as on your personal computer. I'll guide you through using Stable Diffusion 3 (#SD3) in the cloud environment.

This tutorial demonstrates the installation and usage of SwarmUI on various cloud platforms. For those lacking a high-performance GPU or seeking enhanced GPU capabilities, this guide is invaluable. You'll discover how to set up and leverage SwarmUI, a cutting-edge Generative AI interface, on Massed Compute, RunPod, and Kaggle (which provides complimentary dual T4 GPU access for 30 hours per week).


This instructional video will enable you to utilize SwarmUI on cloud GPU services as seamlessly as on your personal computer. Additionally, I'll guide you through using Stable Diffusion 3 (#SD3) in the cloud environment. SwarmUI operates on the ComfyUI backend.


🔗 Comprehensive Public Post (no registration required) Featured In The Video, Including All Relevant Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985


🔗 Windows Guide: Mastering SwarmUI Usage ➡️


🔗 Tutorial: Rapid Model Download for Massed Compute, RunPod, and Kaggle, plus Swift Model/File Upload to Hugging Face ➡️


🔗 Join SECourses Discord Community ➡️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388


🔗 Stable Diffusion GitHub Repository (Please Star, Fork, and Watch) ➡️ https://github.com/FurkanGozukara/Stable-Diffusion


Exclusive Discount Code for Massed Compute: SECourses


Valid for Alt Config RTX A6000 and standard RTX A6000 GPUs


  • 0:00 Overview of SwarmUI cloud services tutorial (Massed Compute, RunPod & Kaggle)
  • 3:18 SwarmUI installation and usage on Massed Compute virtual Ubuntu machines
  • 4:52 ThinLinc client synchronization folder setup for Massed Compute virtual machine access
  • 6:34 Connecting to and initiating Massed Compute virtual machine post-initialization
  • 7:05 One-click SwarmUI update on Massed Compute prior to use
  • 7:46 Configuring multiple GPUs on SwarmUI backend for simultaneous image generation
  • 7:57 GPU status monitoring using nvitop command
  • 8:43 Pre-installed Stable Diffusion models on Massed Compute
  • 9:53 Model download speed assessment on Massed Compute
  • 10:44 Troubleshooting 4 GPU backend setup errors
  • 11:42 Monitoring all 4 GPUs' operational status
  • 12:22 Image generation and step speed analysis on RTX A6000 (Massed Compute) for SD3
  • 12:50 CivitAI API key configuration for accessing gated models
  • 13:55 Efficient bulk image download from Massed Compute
  • 15:22 Latest SwarmUI installation on RunPod with precise template selection
  • 16:50 Port configuration for SwarmUI connectivity post-installation
  • 17:50 RunPod SwarmUI installation via sh file download and execution
  • 19:47 Resolving backend loading issues through Pod restart
  • 20:22 Relaunching SwarmUI on RunPod
  • 21:14 Stable Diffusion 3 (SD3) implementation on RunPod
  • 22:01 Multi-GPU backend system configuration on RunPod
  • 23:22 RTX 4090 generation speed analysis (SD3 step speed)
  • 24:04 Bulk image download technique for RunPod
  • 24:50 SwarmUI and Stable Diffusion 3 setup on free Kaggle accounts
  • 28:39 Modifying SwarmUI model root folder path on Kaggle for temporary storage utilization
  • 29:21 Secondary T4 GPU backend addition on Kaggle
  • 29:32 SwarmUI restart procedure on Kaggle
  • 31:39 Stable Diffusion 3 model deployment and image generation on Kaggle
  • 33:06 RAM error troubleshooting and resolution on Kaggle
  • 33:45 Disabling one backend to prevent RAM errors with dual T5 XXL text encoder usage
  • 34:04 Stable Diffusion 3 image generation speed evaluation on Kaggle's T4 GPU
  • 34:35 Comprehensive image download process from Kaggle to local device


In this comprehensive article, we will explore how to use SwarmUI, Stable Diffusion 3, and other Stable Diffusion models on various cloud computing platforms. This guide is designed to help users who don't have access to powerful GPUs locally leverage cloud resources for running these advanced AI image generation models. We'll cover three main platforms: Massed Compute, RunPod, and Kaggle.

1.1 Overview of Platforms

1.1.1 Massed Compute

Massed Compute is introduced as the cheapest and most powerful cloud server provider. It offers pre-installed SwarmUI and the latest versions of necessary software, making it easy to start generating images quickly.

1.1.2 RunPod

RunPod is another cloud service provider that offers access to high-performance GPUs. This platform allows users to deploy custom environments and install SwarmUI manually.

1.1.3 Kaggle

Kaggle, a popular platform for data science and machine learning, offers free GPU access. This article demonstrates how to use SwarmUI on a free Kaggle account, utilizing the provided T4 GPUs.

1.2 Prerequisites

Before diving into the specifics of each platform, it's strongly recommended to watch the 90-minute SwarmUI tutorial mentioned in the article. This comprehensive guide covers the details of using SwarmUI and is essential for understanding the full capabilities of the software.

Using SwarmUI on Massed Compute

2.1 Registration and Deployment

To begin using SwarmUI on Massed Compute, follow these steps:


  • Use the provided registration link to create an account.
  • Enter your billing information and load balance to your account.
  • Navigate to the deployment section.
  • Select the RTX A6000 or RTX A6000 Alt config based on availability.
  • Choose the "Creator" category and the "SECourses" image.
  • Apply the special coupon code "SECourses" and verify that the discounted hourly rate is applied.
  • Click "deploy" to create your instance.

2.2 Connecting to the Virtual Machine

After deploying your instance, you'll need to connect to it:


  • Download and install the ThinLinc client appropriate for your operating system.
  • Configure the ThinLinc client: go to "Options" > "Local devices" and uncheck all options except "Drives".
  • Add a folder for synchronization to upload/download files.
  • Use the provided login IP address and credentials to connect to your virtual machine.

2.3 Updating and Starting SwarmUI

Once connected to your Massed Compute virtual machine:

  • Double-click the updater button to automatically update SwarmUI to the latest version.
  • Wait for the update to complete and for SwarmUI to start.
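
The updater button is the easiest route, but the same thing can be done from a terminal: pull the latest SwarmUI code and relaunch it. A minimal sketch, assuming SwarmUI lives under ~/apps/StableSwarmUI as in the preconfigured image (the exact folder name on your machine may differ):

```bash
# Manually update and relaunch SwarmUI on the Massed Compute VM (path is an assumption)
cd ~/apps/StableSwarmUI
git pull                  # fetch the latest SwarmUI version
bash launch-linux.sh      # rebuilds if needed and starts the web UI (default port 7801)
```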

2.4 Configuring Multiple GPUs

If you've deployed multiple GPUs, you can configure SwarmUI to use them all:

Go to "Server" > "Backends"


Add additional ComfyUI self-starting backends


Set unique GPU IDs for each backend to ensure proper distribution across available GPUs
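
Each additional backend needs its own GPU ID (0, 1, 2, ...). A quick terminal check shows how many GPUs the instance exposes and which ID each one has; nvitop, mentioned in the video, adds live monitoring:

```bash
# List GPUs and their IDs so each ComfyUI backend can be given a unique one
nvidia-smi --list-gpus    # e.g. "GPU 0: NVIDIA RTX A6000 (...)", "GPU 1: ..."
nvitop                    # live per-GPU utilization and VRAM usage while generating
```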

2.5 Generating Images

With SwarmUI set up on Massed Compute, you can now start generating images:


  • Select your desired model (e.g., Stable Diffusion 3, SDXL, etc.).
  • Choose your preferred sampler and scheduler.
  • Enter your prompt and set the number of images to generate.
  • Click "Generate" to start the process.

2.6 Downloading Generated Images

To download your generated images from Massed Compute:

Navigate to the "Files" folder


Go to "apps" > "Stable SwarmUI" > "output"


Copy the output folder to your synchronization folder


Access the synchronized files on your local machine
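
The same copy can be done from a terminal. ThinLinc mounts the folder you selected for synchronization under ~/thinclient_drives, so copying SwarmUI's output folder there makes it appear on your local machine. Both paths below are assumptions — check them in the file manager first:

```bash
# Copy SwarmUI's generated images into the ThinLinc-synchronized folder (paths are assumptions)
cp -r ~/apps/StableSwarmUI/Output ~/thinclient_drives/<your-sync-folder>/
```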

2.7 Using CivitAI API

A new feature allows you to download gated CivitAI models:

  • Obtain your CivitAI API key from your account settings.
  • In SwarmUI, go to "User" and enter your API key.
  • Use the model downloader in "Utilities" to access CivitAI models.

Using SwarmUI on RunPod

3.1 Registration and Pod Deployment

To use SwarmUI on RunPod:

  • Register using the provided link.
  • Set up billing and load credits to your account.
  • Go to "Pods" and click "Deploy Pod".
  • Select Community Cloud, or set up permanent storage (refer to the separate tutorial for this).
  • Choose your desired GPU configuration (e.g., 3x RTX 4090).
  • Select the "RunPod PyTorch 2.1 with CUDA 11.8" template.
  • Set the disk volume and expose proxy port 7801 for SwarmUI.
  • Deploy your pod.

3.2 Installing SwarmUI

Once your pod is running:

  • Connect to JupyterLab.
  • Upload the provided "install_linux.sh" file.
  • Open a terminal and run the installation commands (sketched below).
  • Wait for the installation to complete.
  • Restart the pod once after the first installation.
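
The exact commands are shared in the linked post, but the installation boils down to running the uploaded script from the pod's persistent /workspace directory. A hedged sketch — the file name install_linux.sh matches the video, and details of the current script may differ:

```bash
# Run the uploaded SwarmUI install script from the pod's volume directory (names from the video)
cd /workspace
bash install_linux.sh     # installs SwarmUI and its ComfyUI backend; takes several minutes
```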

3.3 Starting SwarmUI

After restarting:

  • Connect to JupyterLab again.
  • Run the provided start commands in the terminal (see the sketch below).
  • Access SwarmUI through the HTTP port connection.
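
After the restart, SwarmUI has to listen on all interfaces so RunPod's HTTP proxy on port 7801 can reach it. A minimal sketch of the relaunch, assuming SwarmUI was installed into /workspace/SwarmUI (the start commands provided in the post may differ slightly):

```bash
# Relaunch SwarmUI so it is reachable through RunPod's proxied port 7801 (path and flags assumed)
cd /workspace/SwarmUI
bash launch-linux.sh --launch_mode none --host 0.0.0.0 --port 7801
```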

3.4 Downloading Additional Models

To use models like Stable Diffusion 3 on RunPod:

Go to "Utilities" > "Model Downloader"


Use the provided direct download link to add new models
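
If you prefer a terminal over the built-in downloader, the same direct link can be fetched straight into SwarmUI's checkpoint folder. A sketch with the link left as a placeholder, assuming a /workspace/SwarmUI install and the standard Models/Stable-Diffusion layout:

```bash
# Fetch a checkpoint directly into SwarmUI's model folder (URL is a placeholder, paths are assumptions)
cd /workspace/SwarmUI/Models/Stable-Diffusion
wget -O sd3_medium_incl_clips_t5xxlfp8.safetensors "<direct-download-link-from-the-post>"
```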

3.5 Configuring Multiple GPUs

Similar to Massed Compute, configure multiple backends:

Go to "Server" > "Backends"


Add ComfyUI self-starting backends


Set unique GPU IDs for each backend

3.6 Generating Images

Follow the same process as described for Massed Compute to generate images using your chosen models and settings.

3.7 Downloading Generated Images

To download images from RunPod:

  • Navigate to the SwarmUI folder.
  • Right-click on the output folder and download it as an archive.
  • Alternatively, use runpodctl or upload to Hugging Face (refer to the separate tutorial for these methods); a terminal sketch follows below.
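
Archiving first is much faster than downloading thousands of individual files. A sketch assuming the same /workspace/SwarmUI location; runpodctl comes preinstalled on RunPod pods:

```bash
# Archive the generated images, then download the zip from JupyterLab or send it with runpodctl
cd /workspace/SwarmUI
apt-get update -qq && apt-get install -y zip   # the PyTorch template may not ship zip
zip -r /workspace/outputs.zip Output
runpodctl send /workspace/outputs.zip          # prints a one-time code; run "runpodctl receive <code>" locally
```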

Using SwarmUI on Kaggle

4.1 Setting Up Kaggle Notebook

To use SwarmUI on a free Kaggle account:

  • Register for a free Kaggle account and verify your phone number.
  • Download the provided Kaggle notebook file.
  • Create a new notebook on Kaggle and import the downloaded file.
  • Select GPU T4 x2 as your accelerator.

4.2 Installing SwarmUI

Follow the steps in the notebook to:

  • Download the required models.
  • Execute the installation cells.
  • Configure model paths and backends (see the storage note below).
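
The reason the notebook moves SwarmUI's model root off /kaggle/working is disk space: Kaggle caps the working directory at roughly 20 GB, while the rest of the session disk is larger but temporary. A hedged sketch of the idea, to be run from a notebook cell with a leading ! or %%bash — the folder names and download URL are placeholders, and the real paths are the ones the notebook sets and mirrors in SwarmUI's server settings:

```bash
# Keep multi-gigabyte checkpoints on Kaggle's larger temporary disk (paths and URL are placeholders)
mkdir -p /kaggle/temp/models/Stable-Diffusion
wget -O /kaggle/temp/models/Stable-Diffusion/sd3_medium.safetensors "<model-download-link>"
# Then point SwarmUI's model root folder at /kaggle/temp/models in the server settings
```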

4.3 Using SwarmUI on Kaggle

After installation:

  • Access SwarmUI through the provided link.
  • Configure the backends to use both T4 GPUs.
  • Generate images using the available models.

4.4 Managing RAM Limitations

When using Stable Diffusion 3 on Kaggle:

  • Be aware of potential RAM limitations.
  • Use only one backend if you encounter memory errors; the T5 XXL text encoder is too large to load on both backends at once.

4.5 Downloading Generated Images

To download images from Kaggle:

  • Use the provided cell to zip all generated images (a sketch of what that cell does is below).
  • Refresh the file list and download the zip file.
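
The download cell essentially zips SwarmUI's output folder into /kaggle/working so it shows up in the notebook's file browser. A minimal sketch of that step, run from a notebook cell with a leading ! or %%bash — the SwarmUI folder location is an assumption, so adjust it to the notebook's actual path:

```bash
# Zip everything SwarmUI generated into /kaggle/working so it appears in the Kaggle file list
cd /kaggle/working/SwarmUI
zip -r /kaggle/working/generated_images.zip Output
```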

Additional Features and Resources

5.1 CivitAI Integration

SwarmUI now supports CivitAI API integration:

  • Obtain your CivitAI API key.
  • Enter the key in the SwarmUI user settings.
  • Use the model downloader to access CivitAI models.
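
Outside SwarmUI's built-in downloader, the same API key also works for direct downloads, since CivitAI's download endpoints accept it as a token parameter. A hedged sketch with placeholder values — the version ID is shown on the model's CivitAI page:

```bash
# Download a gated CivitAI model using your API key (version ID and file name are placeholders)
wget -O downloaded_model.safetensors \
  "https://civitai.com/api/download/models/<model-version-id>?token=<your-civitai-api-key>"
```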


This comprehensive guide provides detailed instructions on using SwarmUI, Stable Diffusion 3, and other Stable Diffusion models on Massed Compute, RunPod, and Kaggle. By following these steps, users without powerful local GPUs can leverage cloud resources to generate high-quality AI images. Remember to refer to the recommended tutorials and resources for more in-depth information on specific topics and advanced usage scenarios.