
ComfyUI Tutorial: Install & Create Your First AI Image

Angry Shark Studio
12 min read
ComfyUI AI Stable Diffusion Tutorial Beginner Node-Based Workflow Image Generation Installation Guide AI Art

ComfyUI for Beginners: Install & Create Your First AI Image

Or: How I Learned to Stop Worrying and Love the Nodes

You have been using AI image generators for a while now. Maybe you started with online tools where you type a prompt, click a button, and wait for results. Perhaps you moved to something more powerful like Automatic1111, with its walls of sliders, dropdowns, and parameters that you adjust through trial and error.

But there is a limitation to these approaches. You are not really controlling the process. You are just adjusting settings on a machine you do not fully understand.

ComfyUI takes a different approach. Instead of mysterious black boxes and hidden processes, you can see exactly how your image is being generated. You build your own image generation pipeline, piece by piece, using a node-based system.

Welcome to ComfyUI, an introduction to node-based workflows.

Free Blog Tutorials:

  • Tutorial #1: Welcome to ComfyUI (this post)
  • Tutorial #2: Understanding the Canvas
  • Tutorial #3: Mastering Generation Parameters

Person confused at complex UI vs enlightened at node-based workflow, split comparison illustration


What is ComfyUI?

ComfyUI is a node-based interface for Stable Diffusion and other AI image generation models.

Instead of hiding the process behind a “Generate” button, ComfyUI makes every step visible. Each step of the image generation process is represented as a node on your canvas. You can see:

  • Where your prompt goes in
  • How the AI model processes it
  • When and how the image gets refined
  • Where the final result comes out

Think of it like cooking. Most AI image tools are like microwave dinners. They’re convenient, but you have no idea what is really happening inside that box. ComfyUI is like a cooking show where the chef shows you every ingredient, every technique, and every step. Except instead of making soufflĂ©, you are making art.

Key Concept:

ComfyUI uses a node-based workflow where each step of image generation is represented as a visual “node” that you can see, edit, and connect together. If you can connect LEGO blocks, you can use ComfyUI.


Understanding Nodes: Why This Approach Works

If you are already comfortable with your current tool, you might wonder why you would switch to a node-based system. Here is the key difference.

The Slider Problem

Traditional interfaces give you sliders and dropdowns. CFG Scale. Steps. Sampling Method. Denoise Strength. You adjust these settings, but do you really know what they do? Or are you just following a Reddit tutorial and hoping for the best?

Here’s what’s happening behind the scenes in traditional tools:

[BLACK BOX OF MYSTERY]
        ↓
      Magic
        ↓
    Your Image

You can’t see the process. You can’t modify the process. You can’t understand the process.

The Node Solution

With ComfyUI, the process is visible. You can see and control each step:

Prompt → Model → Scheduler → Sampler → Decoder → Image
   ↓       ↓         ↓          ↓          ↓
[You can see and modify each step]

Beyond visibility, you can modify the process itself. You can:

  • Add extra steps in the middle
  • Route data in different directions
  • Use multiple models at once
  • Create feedback loops
  • Build custom workflows

Side by side comparison of locked black box vs open toolbox with visible components, isometric view

Note:

Nodes can look complicated at first glance. Looking at a workflow for the first time might feel like staring at a circuit diagram. You don’t need to understand everything at once. You’ll start with simple workflows (5-6 nodes) and build up from there. By Tutorial #3, you’ll be comfortable connecting nodes.


Who is ComfyUI for?

ComfyUI is not for everyone. It is designed for users who:

Good Fit For ComfyUI

  • Want to understand the process, not just push buttons
  • Like experimenting and tinkering with creative tools
  • Feel limited by other AI image tools
  • Enjoy learning how things work (technical background not required)
  • Want reproducible results you can share and iterate on
  • Plan to use this regularly for projects or creative work

Might Not Be The Right Tool If You

  • Only want to type a prompt and get immediate results
  • Are satisfied with online generators
  • Prefer simple interfaces over customizable workflows
  • Need the fastest possible generation time every time

ComfyUI has a learning curve. You will invest a few hours up front to understand the basics, but that investment provides significantly more control over your image generation.

Cartoon character at crossroads with two paths - Quick & Simple vs Powerful & Flexible

Common Questions:

Q: Do I need to know how to code? A: No. ComfyUI workflows are visual. You connect boxes with your mouse. No coding or programming knowledge required.

Q: Will this work on my computer? A: If you can run Stable Diffusion on your machine, you can run ComfyUI. We will cover requirements in detail below, but most modern gaming PCs and many laptops can handle it.

Q: I’m not technical. Is this over my head? A: If you can follow a recipe (combine ingredients in a specific order), you can use ComfyUI. It is more visual puzzle than technical challenge.

Q: How is this different from Automatic1111? A: Automatic1111 presents you with sliders and settings for a pre-built pipeline. ComfyUI shows you the pipeline itself as individual nodes that you can see, understand, and rearrange. Both generate images, but ComfyUI gives you visibility into the process and the ability to modify it.


Installation Guide

Now we will install ComfyUI on your machine. We will walk through this step by step.

You can download ComfyUI from the official website: https://www.comfy.org/download

System Requirements

First, check that your computer can handle this:

Minimum Requirements:

  • GPU: NVIDIA graphics card with at least 4GB VRAM (or Apple Silicon M1/M2/M3)
  • RAM: 16GB system RAM recommended (8GB might work but will be slow)
  • Storage: 20GB free space (for ComfyUI + models)
  • OS: Windows 10/11, macOS 12+, or Linux

Recommended Setup:

  • GPU: NVIDIA RTX 3060 or better (8GB+ VRAM)
  • RAM: 32GB
  • Storage: SSD with 50GB+ free space
  • OS: Windows 11 or macOS Sonoma

Key Concept:

Why does GPU matter so much? AI image generation is basically asking your computer to solve billions of math problems very quickly. Your GPU (Graphics Processing Unit) is specifically designed for this kind of parallel processing. Think of it like this: your CPU is one really smart person solving problems, your GPU is a thousand people solving problems simultaneously. For AI, we need that thousand-person team.
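You never need to touch code to use ComfyUI, but if you are curious whether your GPU will be picked up before you install anything, here is a rough sketch of a check using PyTorch. It assumes you already have PyTorch installed somewhere on your system; the ComfyUI installer bundles its own copy either way.

# Rough sketch: see what hardware PyTorch can find on this machine.
# Assumes PyTorch is installed; the ComfyUI installer ships its own copy,
# so this is only a pre-install sanity check.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"NVIDIA GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
elif torch.backends.mps.is_available():
    print("Apple Silicon GPU (Metal) is available")
else:
    print("No supported GPU found; generation would fall back to the CPU and be slow")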


Windows Installation

Windows installation is straightforward with the official installer.

Step 1: Download ComfyUI

  1. Go to the official download page: https://www.comfy.org/download
  2. Download the Windows installer
  3. Wait for the download to complete

Step 2: Run the Installer

  1. Double-click the downloaded installer file
  2. Follow the installation wizard
  3. Choose your installation location (we recommend C:\ComfyUI)
  4. Complete the installation

Step 3: Launch ComfyUI

  1. Find ComfyUI in your Start Menu or desktop shortcut
  2. Double-click to launch
  3. ComfyUI will open as a desktop application
  4. You are in.

Note: ComfyUI Manager comes preinstalled, so you can manage custom nodes and updates directly from the interface.

ComfyUI desktop application icon in Start Menu with arrow annotation


Mac Installation (Apple Silicon & Intel)

Mac users, your installation is straightforward with the official installer.

  1. Go to the official download page: https://www.comfy.org/download
  2. Download the macOS DMG file
  3. Open the DMG file and drag ComfyUI to your Applications folder
  4. Launch ComfyUI from Applications
  5. ComfyUI will open as a desktop application

Note: ComfyUI Manager comes preinstalled. Apple Silicon (M1/M2/M3) users benefit from Metal acceleration, which can be faster than some dedicated GPUs.


Running for the First Time: What to Expect

You have launched ComfyUI and the desktop application has opened. On first launch, you should see a prompt to download the SD1.5 model.

The First-Run Experience:

  • ComfyUI desktop application opens
  • A dialog appears asking you to download the SD1.5 model
  • After confirming, the model downloads in the background
  • Once complete, the default workflow appears

Model download dialog showing SD1.5 download prompt with highlighted confirm button

Common First-Launch Issues:

Problem                   What You’ll See                       The Fix
Application won’t start   Error message or crashes on launch    Restart your computer, check system requirements
Out of memory             Image generation fails or freezes     Close other programs, use smaller image sizes
Model not found           “Error loading checkpoint”            Relaunch ComfyUI to trigger the model download popup

Note:

The SD1.5 model download happens automatically on first launch. If you do not see the download popup, or if it was accidentally dismissed, simply restart ComfyUI and it will appear again.

Cheerful character at desk with computer showing success, green checkmarks floating around


Tour of the Interface

ComfyUI is now running and displaying the interface. Here’s a breakdown of what you are seeing.

Full ComfyUI interface with numbered areas - Canvas, Menu Bar, Queue Area, Side Panel

The Canvas: Where It All Happens

That big empty space in the middle? That’s your canvas, your workspace where you will build workflows. Think of it like:

  • A whiteboard where you can draw connections
  • A flowchart where you design processes
  • A LEGO building plate where you snap blocks together

What you can do on the canvas:

  • Drag to move your view around
  • Scroll to zoom in and out
  • Right-click to add new nodes
  • Click and drag from a node’s output to connect it to another node’s input
  • Click a node to select it and see its properties

Those colorful boxes you see? Those are nodes. Each one does something specific in your image generation pipeline. The lines connecting them? That’s data flowing from one step to the next.

Key Concept:

Nodes are like workers in a factory. Each worker has a specific job (understand text, generate image, save file). They pass their work to the next worker through connections. The workflow is just organizing these workers in the right order to produce what you want.

The Queue Bar (Bottom Bar)

Look at the very bottom center of the screen, and you will see a control bar with a blue “Run” button. This is your Queue Bar, where you control image generation.

The Queue Bar contains:

  • Run button (also called “Queue Prompt”): Click this to start generating your image
  • Queue number: Shows how many times it will run the generation (the number next to the Run button)
  • Cancel/Clear buttons: Stop the current generation or clear the queue

When you click “Run,” ComfyUI reads your workflow (following the node connections) and executes each node in order. You will see a progress bar and preview of what is being generated.

Close-up of Queue Bar showing Run button, queue number, and cancel/clear buttons with labels
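A side note for the curious (you will never need this to follow the tutorials): the Run button is really just handing your workflow to a small local server that ComfyUI runs in the background. The sketch below assumes the server is listening on 127.0.0.1:8188, which is the usual default for the standalone server, and that you exported your workflow in API format; both details may differ in the desktop app, so treat them as assumptions.

# Hedged sketch: queue a workflow through ComfyUI's local HTTP API.
# Assumes the server listens on 127.0.0.1:8188 (the usual default) and that
# "workflow_api.json" was exported from the UI in API format.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # the reply includes a prompt_id for tracking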

The Top Menu Bar

At the top of the interface, you will find the main toolbar:

  • ComfyUI Button (C icon): The leftmost button; click it to access Settings and app options
  • Workflow dropdown: Shows your current workflow name (like “default”)
  • Manager button: Install and update custom nodes (blue button on the right side)
  • Other tools: Star icon (favorites), notifications, and utility buttons

To access Settings, click the ComfyUI button (C icon) at the top left.

In addition to the Queue Bar at the bottom, ComfyUI has sidebar panels that you can open for browsing and management. These panels have several tabs:

  • Queue tab: Your generation history, every image you have previously generated, stored with its exact workflow. This is different from the Queue Bar (which just has the Run button); the Queue tab is where you browse past generations.
  • Node Library tab: Browse and search all available nodes organized by category
  • Model Library tab: Browse and manage your downloaded models
  • Workflows tab: Save, load, and organize your workflow files
  • Templates tab: Access pre-made workflow templates to get started quickly

The Queue tab in the sidebar is particularly useful because every image you generate is stored with its exact workflow. Found a generation you loved from yesterday? Open the Queue tab to see it and load that workflow.
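If you are wondering how that works: ComfyUI writes the workflow into the saved PNG file itself, so every output carries its own recipe. Purely optional, but here is a small sketch of peeking at that metadata outside ComfyUI. It assumes Pillow is installed and that the text chunks are named “prompt” and “workflow”, which is what ComfyUI normally writes; verify on your own files.

# Optional sketch: read the workflow ComfyUI embeds in a saved PNG.
# Assumes Pillow is installed (pip install pillow); "workflow" and "prompt"
# are the text chunks ComfyUI normally writes, but check your own files.
from PIL import Image

image = Image.open("ComfyUI_00001_.png")  # example output filename
for key in ("workflow", "prompt"):
    data = image.info.get(key)
    if data:
        print(f"{key}: found, {len(data)} characters of embedded JSON")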

Adding Nodes to Your Workflow

There are several ways to add nodes to the canvas:

Double-Click (Fastest): Double-click anywhere on the canvas to bring up a search box. Start typing the node name you want (like “sampler” or “load”) and press Enter to add it. This is the quickest method once you know what you’re looking for.

Node search box showing search results when double-clicking on canvas

Right-Click Menu: Right-click anywhere on the canvas to see the full categorized node menu. This is useful when you’re browsing or not sure of the exact node name.

Right-click context menu showing categorized list of nodes

Node Library Tab: Open the Node Library tab in the sidebar to browse all available nodes organized by category.

Exercise:

Before moving on, try this:

  1. Zoom in and out on the canvas using your scroll wheel
  2. Drag the canvas around by clicking and dragging
  3. Double-click on the canvas to bring up the node search, then press Escape to close it
  4. Right-click to open the full node menu, then press Escape to close it
  5. Find the Run button in the Queue Bar at the very bottom of the screen
  6. If you can access the sidebar panels, click through the different tabs (Queue, Node Library, etc.)

Get comfortable with the navigation. You will be spending a lot of time here.


Generating Your First Image

Now we will generate an image.

ComfyUI comes with a default workflow already set up. Now that you have downloaded the SD1.5 model, you are ready to generate your first image.

You should see the default workflow with these nodes already connected on your canvas:

Simple default ComfyUI workflow with labeled nodes showing Load Checkpoint, CLIP Text Encode, KSampler, VAE Decode, Save Image

Follow these steps:

  1. Find the “CLIP Text Encode (Prompt)” nodes. You’ll see two of them in the default workflow

  2. In the positive prompt node (usually top), type something simple:

    a beautiful sunset over mountains, professional photography
    
  3. In the negative prompt node (usually bottom), type:

    blurry, low quality, distorted
    
  4. Click the “Run” button in the Queue Bar at the bottom center of the screen

  5. Watch the generation happen

You will see nodes light up as they process, a progress bar will fill, and within 10-30 seconds (depending on your hardware), you will see an image appear in the “Save Image” node.

You have generated your first ComfyUI image. More importantly, you can see how it happened:

  • Your prompt went into the text encoder
  • The model processed it
  • The sampler created the image gradually
  • The VAE decoder made it viewable
  • The save node exported it

Celebration scene with happy character at computer, arms raised in victory, sparkles and confetti

Key Concept:

Notice what just happened. You did not just click a button and get an image. You saw the process unfold. Each node lighting up shows you what step is happening. This visibility is the foundation for understanding how image generation works.
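One last aside before we wrap up the first generation: everything on your canvas is ordinary data under the hood. If you export the default workflow in API format (the exact menu name varies between ComfyUI versions), it looks roughly like the Python sketch below. The node IDs, the checkpoint filename, and the parameter values are illustrative assumptions; export your own copy to see the real thing.

# Rough sketch of the default text-to-image workflow in ComfyUI's API format,
# written out as a Python dict. Node IDs, filenames, and values are
# illustrative; export your own workflow in API format to see the real graph.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a beautiful sunset over mountains, professional photography",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality, distorted",
                     "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
# Every ["node_id", output_index] pair is one of the connection lines you see
# drawn between nodes on the canvas.

You will never need to type this by hand. The takeaway is simply that a workflow is a small, shareable piece of data, which is also why the Queue tab can reload any past generation exactly as it ran.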


What We Will Cover in This Series

You have completed your first generation. Over the next 2 tutorials, we will progressively build your understanding of ComfyUI fundamentals.

Here is what is coming in the free blog series:

Tutorial #2: Understanding the Canvas

  • Master canvas navigation and workflow organization
  • Learn to connect and manipulate nodes efficiently
  • Customize your workspace for productivity
  • Build and save your own workflows

Tutorial #3: Mastering Generation Parameters

  • Understand sampling methods and when to use them
  • Control quality with CFG Scale, Steps, and Denoise
  • Optimize generation speed and image quality
  • Fine-tune parameters for consistent results

Want More? Our complete ComfyUI book includes all 15 tutorials covering:

  • Tutorials 4-5: Model Management, Advanced Sampling
  • Tutorials 6-10: ControlNet, Inpainting, LoRAs, Custom Workflows
  • Tutorials 11-15: Batch Processing, Animation, Custom Nodes, Performance Optimization

By completing this 3-tutorial series, you will have a solid foundation in ComfyUI and be ready to create your own custom workflows.

Creative journey roadmap showing winding path from START to EXPERT with milestone markers


Chapter Challenge: Make It Yours

Before we wrap up Tutorial #1, let’s make sure you are comfortable with what we have covered.

Challenge #1: Environment Check

  1. ComfyUI is installed and launches successfully
  2. The desktop application opens without errors
  3. SD1.5 model downloaded on first launch
  4. You’ve generated at least one image

Challenge #2: Interface Exploration

  1. Zoom in and out on the canvas
  2. Drag the canvas around
  3. Open the right-click node menu
  4. Find the Queue tab in the sidebar (your generation history)
  5. Locate where your generated images are saved (hint: look in Documents\ComfyUI\output\ on Windows)

Challenge #3: First Experiments

Try generating images with these prompts and see how different they look:

  1. Realistic photography:

    professional portrait photo of a friendly robot, studio lighting, 8k uhd
    
  2. Fantasy art:

    magical forest with glowing mushrooms, fantasy illustration, vibrant colors
    
  3. Abstract:

    abstract geometric patterns, colorful, symmetrical, digital art
    

Don’t worry if they’re not perfect. We’ll learn how to improve results in the next tutorials.


What’s Next?

In Tutorial #2: Understanding the Canvas, we will master the ComfyUI workspace and workflow building. You will learn:

  • How to navigate the canvas efficiently
  • Connect and organize nodes like a pro
  • Save and load custom workflows
  • Customize your workspace for maximum productivity

By the end of Tutorial #2, you will be building and organizing workflows with confidence.

Coming in Tutorial #3: Mastering Generation Parameters, you will learn to control sampling methods, CFG Scale, and Steps, and to optimize image quality.

Want More? Our complete ComfyUI book includes all 15 tutorials covering advanced techniques like ControlNet, custom workflows, animation, batch processing, and performance optimization.

Key Concept:

You have completed the foundational steps. You installed ComfyUI, launched it, and generated your first image. This forms the basis for everything else in this tutorial series.


Resources:

File Locations (for reference - managed automatically by installer):

Windows:

  • Installation: C:\ComfyUI\
  • Models: C:\ComfyUI\models\checkpoints\
  • Output Images: Documents\ComfyUI\output\
  • Custom Nodes: C:\ComfyUI\custom_nodes\

Mac/Linux:

  • Models: ComfyUI/models/checkpoints/ (exact path may vary based on installation)
  • Output Images: ComfyUI/output/
  • Custom Nodes: ComfyUI/custom_nodes/
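If you are ever unsure which file a generation ended up in, a few lines of Python can list the newest outputs. Adjust OUTPUT_DIR to the path for your platform above; the Windows default shown here is an assumption about a standard install.

# Small sketch: list the most recent images in ComfyUI's output folder.
# Adjust OUTPUT_DIR to match your install; the paths listed above are the
# usual defaults.
from pathlib import Path

OUTPUT_DIR = Path.home() / "Documents" / "ComfyUI" / "output"  # Windows default
images = sorted(OUTPUT_DIR.glob("*.png"), key=lambda p: p.stat().st_mtime, reverse=True)
for image_path in images[:5]:
    print(image_path.name)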

Troubleshooting:

  • Can’t launch? Restart your computer and check system requirements
  • No image generating? Make sure the SD1.5 download completed on first launch
  • Slow performance? Close other programs and consider smaller image sizes

Character standing confidently with backpack and map, looking toward horizon with determination


Continue to Tutorial #2, where we’ll examine how the default workflow functions.

