
Best Stable Diffusion performance on Mac M2 (Reddit roundup)

• My daily driver is an M1, and Draw Things is a great app for running Stable Diffusion. It's fast, free, and frequently updated.
• I want to know: if I use ComfyUI, is the performance better? Can the image size be larger? How can the UI make a difference in speed and memory usage? Are workflows like mov2mov and infizoom possible in ComfyUI?
• Laptop GPUs work fine as well, but they are often more VRAM-limited, and you essentially pay a huge premium over a similar desktop machine.
• I'm using SD with Automatic1111 on an M1 Pro, 32 GB, 16" MacBook Pro.
• Can you recommend it performance-wise for normal SD inference? I am thinking of getting such a RAM beast, as I am contemplating running a local LLM on it as well, and they are quite RAM-hungry.
• However, if SD is your primary consideration, go with a PC and a dedicated NVIDIA graphics card.
• When fine-tuning SDXL at 256x256, it consumes about 57 GiB of VRAM at a batch size of 4. Compare that to fine-tuning SD 2.1 at 1024x1024, which consumes about the same at a batch size of 4, for 16x the pixel area.
• I'm very interested in using Stable Diffusion for a number of professional and personal (ha, ha) applications. I need to upgrade my Mac anyway (it's long overdue), but if I can't work with Stable Diffusion on a Mac, I'll probably switch to a PC, which I'm not keen on, having used Macs for the… It's fine. Sooner or later, I will need to upgrade my 2015 MBP anyway.
• Python / SD is using at most 16 GB of RAM; not sure what it was before the update.
• But the M2 Max gives me somewhere between 2-3 it/s, which is faster, but doesn't really come close to the PC GPUs on the market. Generating a 512x512 image now puts the iteration speed at about 3 it/s, which is much faster than the M2 Pro, which gave me 1 it/s or 2 s/it, depending on the mood of the machine.
• But people are making optimisations all the time, so things can change.
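The fine-tuning VRAM comparison above is largely pixel-count arithmetic. A minimal sketch of that reasoning (function names are mine, and the linear-scaling assumption is a simplification: attention layers can scale worse, and model and optimizer weights are a fixed cost on top):

```python
# Rough activation-memory scaling for fine-tuning, assuming memory grows
# linearly with pixel count and batch size. This is an illustrative
# simplification, not a real VRAM estimator.

def pixel_area(width: int, height: int) -> int:
    """Total pixels per image."""
    return width * height

def relative_cost(res_a: tuple[int, int], res_b: tuple[int, int],
                  batch_a: int = 4, batch_b: int = 4) -> float:
    """How many times more pixels per batch config B pushes through vs A."""
    return (pixel_area(*res_b) * batch_b) / (pixel_area(*res_a) * batch_a)

# SDXL fine-tuned at 256x256 vs SD 2.1 fine-tuned at 1024x1024, batch 4 each:
print(relative_cost((256, 256), (1024, 1024)))  # 16.0
```

So at the same batch size, 1024x1024 pushes 16x the pixels of 256x256; that SD 2.1 fits in roughly the same ~57 GiB reflects SDXL's much larger UNet eating the difference.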
• Among the several issues I'm having now, the one below is making it very difficult to use Stable Diffusion. I find…
• I am thinking of getting a Mac Studio M2 Ultra with 192 GB RAM for our company. I have Automatic1111 installed.
• M1 Max MBP here, and SD definitely runs on my machine, but… if you're going to be going deep into generative AI and LLMs, there are a lot of things that require CUDA and NVIDIA cards, and, as of today, I have not heard of any Mac solutions on the horizon.
• Oct 23, 2023: I am benchmarking these three devices using ml-stable-diffusion: MacBook Air M1, MacBook Air M2, and MacBook Pro M2. I found the MacBook Air M1 is fastest.
• Been playing with it a bit, and I found a way to get a ~10-25% speed improvement (tested on various output resolutions and SD v1.5-based models, Euler a sampler, with and without hypernetwork attached).
• A Stable Diffusion model, say, takes a lot less memory than an LLM.
• M1 Max, 24 cores, 32 GB RAM, running the latest Monterey 12.6 OS.
• Even the M2 Ultra can only do about 1 iteration per second at 1024x1024 on SDXL, where the 4090 runs around 10-12 iterations per second, from what I can see in the vladmandic collected data.
• Read through the other tutorials as well.
• Mac Mini M2, 16 GB RAM: A1111 takes about 10-15 sec, and Vlad and ComfyUI about 6-8 seconds, for a Euler a 20-step 512x512 generation.
• Just updated, and now running SD for the first time I have gone from about 2 s/it to 20 s/it. To the best of my knowledge, the WebUI install checks for updates at each startup. Edit: if anyone sees this, just reinstall Automatic1111 from scratch.
• My own benchmarks: EasyDiffusion is fast but doesn't have all the capabilities of Automatic.
• Downsides: closed source, missing some exotic features, has an idiosyncratic UI.
• "Stable Diffusion speed on the M2 Pro Mac is insane!" I mean, is it though? It costs like $7k, but my 1500€ PC with an RTX 3070 Ti is way faster.
• Here's AUTOMATIC1111's guide: Installation on Apple Silicon. It already supports SDXL.
• The contenders are: 1) Mac Mini M2 Pro, 32 GB shared memory, 19-core GPU, 16-core Neural Engine, vs. 2) Studio M1 Max, 10-core, with 64 GB shared RAM.
• I can't add/import any new models (at least, I haven't been able to figure it out).
• SDXL is a totally different beast, though, but we found a good path forward with a smaller SDXL model.
• So, I'm wondering: what kind of laptop would you recommend for someone who wants to use Stable Diffusion on a midrange budget? There are two main options I'm considering: a Windows laptop with an RTX 3060 Ti 6 GB VRAM mobile GPU, or a MacBook Air with an M2 chip and 16 GB RAM.
• With the help of a sample project, I decided to use this opportunity to learn SwiftUI and create a simple app to use Stable Diffusion, all while fighting COVID (bad idea in hindsight). I convert Stable Diffusion models (DreamShaper XL 1.0) from PyTorch to Core ML.
• Some friends and I are building a Mac app that lets you connect different generative AI models in a single platform.
• I am currently set up on a MacBook Pro M2, 16 GB unified memory.
• Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW.
• I copied his settings and, just like him, made a 512x512 image with 30 steps; it took 3 seconds flat (no joke), while it takes him at least 9 seconds.
• How to install and run Stable Diffusion on Apple Silicon M1/M2 Macs. And when you're feeling a bit more confident, here's a thread on how to improve performance on M1/M2 Macs that gets into file tweaks.
• My intention is to use Automatic1111 to be able to use more cutting-edge solutions than (the excellent) Draw Things allows.
• It's OK.
• I'm using a MacBook Pro 16, M1 Pro, 16 GB RAM, and a 4 GB model to get a 512x768 pic, but it costs me about 7 s/it, much slower than I expected. And before you ask, no, I can't change it.
• But just to get this out of the way: the tools are overwhelmingly NVIDIA-centric, you're going to have to learn to do conversion of models with Python, and…
• Got the Stable Diffusion WebUI running on my Mac (M2).
• DiffusionBee converts Stable Diffusion models to a Mac version so it can fully use the Metal Performance Shaders (MPS) and all available compute units (CPU, GPU, Neural Engine). Haven't looked into Fooocus yet; my guess is CPU-only???
• What is the fastest macOS option for Stable Diffusion? My Intel 2019 iMac isn't M1/M2-based, and so there are few options.
• I'm quite impatient, but generation is fast enough to make 15-25-step images without too much frustration.
• I've looked at the "Mac mini (2023) Apple M2 Pro @ 3.5 GHz (12 cores)" but don't want to spend that money unless I get blazing SD performance.
• The first image I run after starting the UI goes normally. Works fine after that. But I have a MacBook Pro M2.
• I looked at DiffusionBee to use Stable Diffusion on macOS, but it seems broken.
• Hello everybody! I am trying out the WebUI Forge app on my MacBook Air M1 16GB, and after installing following the instructions, adding a model and some LoRAs, and generating an image, I am getting processing times of up to 60 minutes!
• Download here. Free and open source. Exclusively for Apple Silicon Mac users (no web apps). Native Mac app using Core ML (rather than PyTorch, etc.).
• Not a studio, but I've been using it on a MacBook Pro 16 M2 Max.
• We're looking for alpha testers to try out the app and give us feedback, especially around how we're structuring Stable Diffusion/ControlNet workflows.
• Anybody here run a new(ish) Mac Mini with an M2 CPU who could tell me how long a general batch usually takes? Currently, when I run a 512x768 4x4 batch, it can take around 45 min to 1 hr, so I am curious what I could expect if I were to upgrade.
• I was looking into getting a Mac Studio with the M1 chip, but several people told me that if I wanted to run Stable Diffusion, a Mac wouldn't work, and I should really get a PC with an NVIDIA GPU.
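Several comments here mention MPS: PyTorch on Apple Silicon runs on the Metal Performance Shaders backend rather than CUDA, and tools fall back to CPU when it's missing. A minimal device-selection sketch (the function name is mine; the import is guarded so the snippet also runs where torch isn't installed):

```python
def pick_device() -> str:
    """Return the best available torch device name, with 'cpu' as fallback.

    Prefers CUDA (NVIDIA), then MPS (Apple Silicon), then CPU. If torch
    isn't installed at all, we just report 'cpu' for illustration.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    # torch.backends.mps.is_available() is the standard Apple Silicon check
    # (available in torch 1.12+).
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())  # 'mps' on an M1/M2 Mac with a recent torch build
```

This is essentially what WebUIs like Automatic1111 do internally when they decide where to place the model on a Mac.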
• When launching SD via Terminal, it says: "To create a public link, set `share=True` in `launch()`." But where do I find the file that contains "launch" or the "share=false"?
• I'm running the A1111 WebUI through Pinokio. Now I want to be able to use my phone's browser to play around.
• M2 CPUs perform noticeably better but are still very overpriced when all you care about is Stable Diffusion.
• However, since I'm also interested in Stable (+Video) Diffusion, what if I upgrade to an M3 Max with a 16-core CPU, 40-core GPU, and 64/128 GB of unified memory? THX <3
• By comparison, "the conventional method of running Stable Diffusion on an Apple Silicon Mac is far slower, taking about 69.8 seconds to generate a 512×512 image at 50 steps using Diffusion Bee in our tests on an M1 Mac Mini."
• The Draw Things app is the best way to use Stable Diffusion on Mac and iOS.
• For software development purposes, the M2 chip would work just fine.
• I personally use Draw Things. Background: I love making AI-generated art; I made an entire book with Midjourney, but my old MacBook cannot run Stable Diffusion.
• Stable Diffusion Benchmarked: Which GPU Runs AI Fastest (Updated) | Tom's Hardware (tomshardware.com); SD WebUI Benchmark Data (vladmandic.github.io)
• Recommend MochiDiffusion (a really, really good and well-maintained app by a great developer), as it runs natively and with Core ML models.
• I do both, and memory, GPU, and local storage are going to be the three factors with the most impact on performance.
• If Stable Diffusion is just one consideration among many, then an M2 should be fine. Macs are pretty far down the price-to-performance chart, at least the older M1 models.
• Am going to try to roll back the OS; this is madness.
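For context on the roughly 69.8-second DiffusionBee figure: at 50 steps, that works out to about 1.4 seconds per iteration. A quick back-of-the-envelope check (pure arithmetic):

```python
# Implied per-step time from a quoted wall-clock benchmark:
# DiffusionBee, 512x512, 50 steps, M1 Mac Mini (figure quoted above).
total_seconds = 69.8
steps = 50

sec_per_it = total_seconds / steps
print(round(sec_per_it, 3))       # 1.396  (s/it)
print(round(1 / sec_per_it, 2))   # 0.72   (it/s)
```

At ~0.7 it/s, that M1 baseline is roughly a quarter the speed of the ~3 it/s quoted for the M2 Max elsewhere in this thread, and far from a desktop RTX 4090's 10+ it/s.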