ComfyUI SDXL upscale not working. Edit: you can try the workflow and see it for yourself.


Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

I use SDXL as my hires fix these days, then refine on 1.5. We recommend using a mix between SD1.5 and SDXL; make sure to adjust prompts accordingly. You can pretty much do a normal AnimateDiff workflow in ComfyUI with an SDXL model you would use with AnimateDiff, but you merge that model with SDXL Turbo first.

If you want a fully latent upscale, make sure the denoise on the second sampler after your latent upscale is above 0.5. Just checking: I saw the problem with the tiled sampler and am converging on the same issue. Improving your prompting not only gets you better results with less GPU time, but you'll also find your ability to form concepts in your mind improves.

Bottom line: it does NOT work, it merely gets ignored/skipped instead of crashing Comfy. The latent upscale image comes out a slightly different size, and even with ControlNets, if you simply upscale and then de-noise latents you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. My 5520x4296 ComfyUI SDXL upscale / hires fix took forever, and I might have made some simple misstep somewhere, like not unchecking the 'nightmare fuel' checkbox. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. I am looking for good upscaler models to be used for SDXL in ComfyUI; comparing against comfyworkflows.com, my result is about the same size. I too use SUPIR, but just to sharpen my images on the first pass.
SD 1.5 was trained on low-res images, so some tools like ResAdapter or Kohya Deep Shrink may be necessary. Both are quick and dirty tutorials without too much rambling; no workflows included, because of how basic they are. It works well with standard SD 1.5 output, but appears to work poorly with external (e.g. natural or Midjourney) images. Maybe it needs to be trained specifically for the Turbo model.

Parameters not found in the original repository: upscale_by, the number to multiply the width and height of the image by.

My workflow: SDXL + FaceDetail + 2x SD1.5 Refine + Upscale (without ControlNet), waiting for your advice. What I'm looking to do is generate with SDXL and then pass that image through a 1.5 model, as if I was doing an img2img in A1111. Use the Refiner; I'm still not sure about all the values, but from here it should be tweakable. I'm running ComfyUI + SDXL on Colab Pro. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. Basically, I want a simple workflow (with as few custom nodes as possible) that uses an SDXL checkpoint to create an initial image and then passes that to a separate "upscale" section that uses an SD1.5 checkpoint.

I switched to ComfyUI not too long ago, but am falling more and more in love with it. Each upscale model has a specific scaling factor (2x, 3x, 4x, ...) that it is optimized to work with. One fix in Automatic1111 is "hires fix", which makes a low-res version of the image and then upscales it; or just make images at the lower resolution and upscale with whatever upscaling workflow works for you. Image sizes of 768x768 and 512x512 are also supported, but the results aren't as good. With regards to close-up portraits and less complex scenes, SDXL is already quite good, and hence not much fixing/refinement is required on those types of images.
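As a concrete illustration of the `upscale_by` parameter described above, here is a minimal sketch of how such a multiplier is typically interpreted (the function name is hypothetical, not a ComfyUI API; dimensions are kept divisible by 8 since latent sizes require it):

```python
# Hypothetical helper: upscale_by multiplies both dimensions of the image.
def apply_upscale_by(width: int, height: int, upscale_by: float) -> tuple[int, int]:
    # Keep results divisible by 8 so they stay valid latent-space sizes.
    new_w = int(round(width * upscale_by / 8) * 8)
    new_h = int(round(height * upscale_by / 8) * 8)
    return new_w, new_h

print(apply_upscale_by(1024, 1024, 1.5))  # (1536, 1536)
```

So a 1.5x pass on a 1024x1024 SDXL image lands on 1536x1536, which a second sampler can then refine.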
I went back to a good working flow that I had this morning and it seems to be working a lot better again; there must have been a wrong connection somewhere. I will now change it to 25/10 steps instead of 20/20, and maybe that even improves it more; after that I will try it with the dog again at 0.51 denoising.

All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain amount of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget. This is just a simple node built off what's given and some of the newer nodes that have come out. My upscale chain: Upscale with Upscale Model, then Tiled Diffusion or UltimateSDUpscale, then ADetailer. (Found in the top left of the ComfyUI Manager menu; do not forget to turn the channel setting back to default again afterwards.)

Created by Matt Weaver: simple image generation, then repeated 1.5 upscales; an SDXL face-detail workflow. Hence, it appears necessary to apply FaceDetailer. As you can see, I defined the upscale_by value to be 1.5. The aspect ratio of 16:9 is the same in the empty latent and anywhere else that image sizes are used (SD XL 1.0 Alpha + SD XL Refiner 1.0). Yeah, I was doing that to make some skies and backgrounds that I needed for work. It seems to be impossible to find a working img2img workspace for ComfyUI.

Always use the latest version of the workflow JSON file with the latest version of the nodes. Download t5-v1_1-xxl-encoder-gguf and place the model files in the comfyui/models/clip directory. One user suggests installing an upscaling node to create 4K images. I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow.
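The "Base/Refiner Step Ratio" idea above can be sketched in a few lines. The exact formula in the widget isn't shown in this thread, so this assumes the ratio is simply the fraction of total steps given to the Base model, with the Refiner taking the rest:

```python
# Assumed interpretation of a Base/Refiner step ratio (not the widget's actual code).
def split_steps(total_steps: int, base_ratio: float) -> tuple[int, int]:
    base_steps = round(total_steps * base_ratio)
    refiner_steps = total_steps - base_steps
    return base_steps, refiner_steps

# e.g. 35 total steps at an 80% base ratio -> 28 base + 7 refiner
print(split_steps(35, 0.8))  # (28, 7)
```

A 25/10 split like the one mentioned above corresponds to a base ratio of about 0.71 over 35 total steps.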
🍬 #HotshotXL AnimateDiff experimental video using only the prompt scheduler in a #ComfyUI workflow, with post-processing using Flowframes and an audio add-on.

For whatever reason I can only upscale using the base model, not the refiner model. Could you post a screenshot of your ComfyUI workflow? I use 1.5 models and I don't get good results with the upscalers either when using SD1.5. I thought SDXL was very fast, but after trying it out I realized it was very slow and lagged my PC (RX 6650 XT with 8 GB VRAM, roughly an RTX 3060-70; Ryzen 5 5600).

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). Notice that the ControlNet conditioning can work in conjunction with the XY Plot function, the Refiner, and the Detailers (Hands and Faces). There seem to be way more SDXL variants now, and although many if not all work with A1111, quite a few do not work with ComfyUI.

OK, so your checkpoint and VAE folders are probably empty in the main ComfyUI portable folder. The only approach I've seen so far is using the Hires Fix node, where its latent input comes from AI upscale > downscale image nodes. Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far better results. Switching the upscale method to bilinear should stop it being distorted. In general, most work OK.

While the preview is always shown for the KSampler (Efficient) node, these other nodes start each run not showing a preview. I don't do much with SDXL, so I'm just guessing about that. The same concepts we explored so far for SD 1.5 and SD 2.x are valid for SDXL.
I get an empty list. EDIT: nvm, I deleted ComfyUI Manager and did a manual git pull; it's working. Nice, some of the refined images have a bit too much noise (like the background behind the orc), but the details are really good.

I wonder if I have been doing it wrong: right now, when I do latent upscaling with SDXL, I add an Upscale Latent node after the refiner's KSampler node and pass the result on. Barebones TurboXL (not an XL finetune merged with TurboXL) can produce decent quality in just 3 steps, which means a latent upscale refinement pass with a KSampler should be able to finish the job fairly easily. For a 2x upscale, a ComfyUI upscale workflow would just use a Load Upscale Model node.

I just released version 4.0 of my workflow. I have heard the large upscale models (typically 5 to 6 GB each) should work, but is there a source with a more reasonable file size? The upscale model loader throws an UnsupportedModel exception. I'm working on a basic SDXL workflow. I upscale by 1.5 times so the image isn't too large, and left all other options at their default values. Actually, Ultimate SD tiled upscale did a lot of the heavy lifting on some of these images.

Here are the three recommended SDXL workflows for ComfyUI, discussed in more detail. I was just looking for an SDXL inpainting setup in ComfyUI; nothing seems to work, ComfyUI doesn't seem to load them.

* The result should best be in the resolution space of SDXL (1024x1024). If you are not interested in an upscaled image completely faithful to the original, you can create a draft with the base model in just a bunch of steps, then upscale the latent and apply a second pass with the base and a third pass with the refiner. Warning: the workflow does not save images generated by the SDXL Base model.
I made a preview of each step to see how the image changes after going from SDXL to SD 1.5. My modest contribution (the ComfyUI workflow I use): SDXL + FaceDetail + 2x SD1.5. I could not find an example of how to use stable-diffusion-x4-upscaler. Hi, I tried running your workflow, but the process stopped when it got to the Load ControlNet node. Let's generate our first image!

Yeah, so basically I'm first making the images with SDXL, then upscaling them with Ultimate SD Upscale and a 1.5 model. Each of the upscalers below is hit or miss in any specific situation, but one of them should work in any one case. I don't suppose you know a good way to get a latent upscale (hires fix) working in ComfyUI with SDXL? I have been trying for ages with no luck. Images come out too blurry and lack detail; it's like upscaling any regular image with traditional methods. It may be the same with the original implementation. Try to use resolutions that are multiples of 64 or 128.

Indeed SDXL is better, but it's not yet mature; models for it are just appearing, and the same goes for LoRAs. Not sure about the other file formats, as I've not had to use them. The latent upscaler is okayish for XL, but in conjunction with perlin noise injection the artifacts coming from upscaling get reinforced so much that the second sampler needs a lot of denoise. I gave up on latent upscale; Fooocus came up with a way that delivers pretty convincing results. The 1.5 checkpoints are really damn good. If you don't want that, I can send you two workflows, one with upscale and the other without.
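The "multiples of 64 or 128" advice is easy to make concrete. A small helper like the following (a hypothetical name, not a ComfyUI node) rounds any dimension to the nearest valid multiple:

```python
# Round a dimension to the nearest multiple of 64 (or 128), never going below one unit.
def snap(v: int, multiple: int = 64) -> int:
    return max(multiple, round(v / multiple) * multiple)

print(snap(1000), snap(700, 128))  # 1024 640
```

So a requested 1000px side becomes 1024, and 700px snapped to 128 becomes 640, which keeps samplers and upscalers from choking on odd sizes.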
ComfyUI updated and the memory issues plaguing me are gone, so I can now run various workflows. I did some experiments and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself. Once all is installed you should see something like this: follow the installation guides for each, and then you can find my workflow here.

SDXL Config ComfyUI Fast Generation. Custom nodes and workflows for SDXL in ComfyUI. 1.5 models for archvis have better quality than SDXL. But where do I put it in ComfyUI? As AI tools continue to improve, image upscalers have become a necessary aid for anyone working with images. It seems to produce faces that don't blend well with the rest of the image when used after combining SDXL and SD1.5 (SDXL 1.0 + Refiner). This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0. The RAUNet component may not work properly with ControlNet. Hi! I'm new to ComfyUI; does the sample image work as a "workflow save", as if it were a JSON with all the nodes? Tried with a standard SDXL LoRA; it didn't work with the 1.5 model. I have no clue what is going on; I don't want to use SDXL because it's not great with details like some trained 1.5 checkpoints. The quality of the output is much better.

Second, you will need the Detailer SEGS or FaceDetailer nodes from ComfyUI-Impact Pack. If you're really married to the tech-first approach, all the more reason. I only have 4 GB VRAM, so I haven't gotten SUPIR working on my local system. I have been using 4x-UltraSharp for as long as I can remember, but I'm just wondering what everyone else is using, and for which use case? I tried searching the subreddit, but the other posts are from earlier this year or 2022, so I am looking for updated information.
Here you can select your scheduler, sampler, seed and CFG as usual. Everything above these three windows is not really needed; if you want to change something in this workflow yourself, you can continue your work here. All the features: Text2Image with the SDXL 1.0 Base and Refiner models, an automatic calculation of the steps required for both the Base and Refiner models, and a quick selector for the right image width/height combinations based on the SDXL training set. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to learn.

I'm using Ultimate SD Upscale with SDXL Lightning without any issues. The preview on the custom nodes I named does not work at each launch; nope, reinstalling didn't fix it either. Here's what you see in the console window: "WARNING: shape mismatch when trying to apply embedding, embedding will be ignored 768 1280". Here are a few things I've learned along the way, some through experimentation and others through tips found around the web.

Upscale to unlimited resolution using SDXL Tile with no VRAM limitations. Make sure to adjust prompts accordingly. This workflow creates two outputs with two different sets of settings. The TTPlanet SDXL model is not an upscaler in itself; it is a ControlNet used in conjunction with Ultimate SD Upscale to keep the tiled upscale from hallucinating too much in each section at higher denoising strengths. Set the tiles to 1024x1024 (or your SDXL resolution) and set the tile padding to 128.

Sadly, I only have a V100 for training this checkpoint, which can only train with a batch size of 1 at a slow speed. It's mostly an outcome of personal wants and of attempting to learn ComfyUI. 90% of the workflows I downloaded are not working.
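The 1024x1024-tile advice above is mostly about how many diffusion passes a tiled upscale costs. A rough sketch (node internals differ; this just counts tiles, ignoring the 128px padding overlap) shows why SDXL-sized tiles keep the tile count manageable:

```python
import math

# Rough tile count for an Ultimate-SD-Upscale-style pass (padding overlap ignored).
def tile_grid(width: int, height: int, tile: int = 1024) -> tuple[int, int]:
    return math.ceil(width / tile), math.ceil(height / tile)

cols, rows = tile_grid(4096, 4096, tile=1024)
print(cols * rows)  # 16 tiles for a 4096x4096 output
```

Dropping to 512px tiles on the same output would mean 64 sampler passes instead of 16, with far more seams for the ControlNet to hold together.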
All I can see working is putting the upscaler node right after the refiner. Works with SDXL and SDXL Turbo as well as earlier versions like SD1.5. I've got a tiled ControlNet and PatchModelAddDownscale going. Third pass: further upscale 1.5x-2x. We recommend using a mix between SD1.5 and SDXL for the diffusion, but you are free to use whichever model you like. I was getting 'NoneType' object has no attribute 'copy' errors. We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px, going to about 1.5x original size with minimal changes to image content. We use the add-detail LoRA to create new details during the generation process. Is AnimateDiff the best/only way to do vid2vid for SDXL in ComfyUI?

The CR_SDXLAspectRatio node is designed to adjust image dimensions according to a specified aspect ratio. It lets the user pick from a predefined list of aspect ratios or enter custom dimensions, offers an option to swap the dimensions, and applies an upscale factor to the resulting image size. Its main purpose is to ensure the output conforms to the desired aspect ratio.

And bump. You can't upscale an 832x1216 image to 1080x2800 without seriously stretching and distorting the image. What is the recommended tile size for upscaling a 768x768 image by 2x? I try to use ComfyUI to upscale with SDXL 1.0; it works well to generate a 6 MP image in SDXL on 8 GB VRAM (25.5 GB RAM and 16 GB GPU RAM), yet I still run out of memory when generating images. The current checkpoint is only trained for a small number of steps and thus performs poorly; let me know if you have spotted the same. In a base+refiner workflow, though, upscaling might not look straightforward. After trying various out-of-the-box solutions, I struggled to generate what I wanted. SDXL has a different structure altogether from SD 1.5.
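The 832x1216 vs 1080x2800 complaint above comes down to mismatched aspect ratios: no resize can bridge them without stretching. A two-line check makes this visible (plain arithmetic, no ComfyUI involved):

```python
# Compare aspect ratios to see why a direct resize must distort.
def aspect(w: int, h: int) -> float:
    return w / h

print(round(aspect(832, 1216), 3))   # 0.684
print(round(aspect(1080, 2800), 3))  # 0.386
```

Since 0.684 and 0.386 are nowhere near each other, the non-distorting path is to upscale toward the target width and then crop or outpaint the extra height, rather than stretching.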
The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio. The NNLatentUpscale node is meant to be used in a workflow where the initial image is generated in lower resolution, the latent is upscaled, and the upscaled latent is fed to the next sampler. Details about most of the parameters can be found here.

I'm new to ComfyUI and struggling to get an upscale working well. I did once get some noise I didn't like, but rebooted and all was good on the second try (instead of using the VAE that's embedded in SDXL 1.0). Is there any way I can iterate on the output of SDXL Turbo using ComfyUI, upscaling while adding a "detailed faces" positive clip as input to the upscaler? I'm new to ComfyUI, so not an expert; some help would be greatly appreciated. It didn't work out. However, the SDXL refiner obviously doesn't work with SD1.5. Is that the best way to install ControlNet? Because when I tried doing it manually, it failed.

OpenPose SDXL not working. The SDXL Config ComfyUI Fast Generation workflow is ideal for beginners just getting started. One node does an image upscale and the other a latent upscale. Not really. For SDXL models (specifically, Pony XL V6), the HighRes-Fix Script constantly distorts the image, even with the KSampler's denoise kept low. But fortunately, CLIPTextEncode is not a custom node, it's one of the defaults, so you don't need to download anything. The key observation here is that by using the EfficientNet encoder from Hugging Face, you can immediately obtain what your image should look like after stage C. I have a much lighter assembly, without detailers, that gives a better result.
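The "same amount of pixels, different aspect ratio" rule above can be computed directly. This sketch (the helper name is made up) picks a width/height near a target aspect ratio while keeping roughly the SDXL pixel budget of 1024*1024 and dimensions divisible by 64:

```python
# Pick a width/height near `aspect` with ~1 megapixel total, snapped to multiples of 64.
def sdxl_size(aspect: float, budget: int = 1024 * 1024) -> tuple[int, int]:
    h = (budget / aspect) ** 0.5
    w = aspect * h
    return round(w / 64) * 64, round(h / 64) * 64

print(sdxl_size(16 / 9))  # (1344, 768)
```

A 16:9 request lands on 1344x768, which matches the kind of wide resolutions recommended elsewhere in this thread (e.g. 1536x640, 896x1152 for other ratios).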
Not sure about this specifically, but I do know that some of the syntax used in A1111 doesn't always work in Comfy. It has many upscaling options, such as img2img upscaling and Ultimate SD Upscale. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. I've already tried stable-diffusion-x4-upscaler. Both did not solve this; everything is separated now, and SD1.5 still misbehaves. You might have to resize your input picture first (upscale?), and you should use CLIPTextEncodeSDXL for your prompts. The pixel upscale is OK, but it doesn't hold a candle to the latent upscale for adding detail.

Krita AI Plugin - ComfyUI Custom Workflows: Krita workflows are used for Krita + Krita AI Diffusion + Krita AI Tools. Switch Tile to the SDXL version (in the Upscale sub-workflow), switch NNLatentUpscale to its SDXL mode, and set the Ultimate SD Upscale tile size to 1024. Hotshot-XL vibes. They're awesome for a one-second generation, but not usable in my project because of the disfigured, deformed faces. It has 5 parameters which allow you to easily change the prompt and experiment, a toggle for whether the seed should be included in the file name, and upscaling to 2x and 4x in multiple steps. These comparisons are done using ComfyUI with default node settings and fixed seeds. The first two samplers together form a hires fix. I know it's simple for now. If you want to specify an exact width and height, use the "No Upscale" version of the node.

Here is a workflow that I use currently with Ultimate SD Upscale. I then VAE-encode back to a latent and pass that through the base/refiner again. A simple ComfyUI img2img upscale workflow, workflow included, but it can be changed to whatever. I'm having some issues with (as the title says) the HighRes-Fix Script. Download ae.safetensors, place it in the comfyui/models/vae directory, and rename it to flux_ae.safetensors.
The workflow is kept very simple for this test: Load Image, Upscale, Save Image. I'm creating some cool images with some SD1.5 models in ComfyUI, but they're 512x768 and as such too small a resolution for my uses. At the moment I generate my image with a detail LoRA at 512 or 786 to avoid weird generations, then latent upscale by 2 with nearest and run them at around 0.5 denoise.

Anyone figured out how to get a good 2x latent upscale working with SDXL? I just get weird artifacts in the image when I try it in ComfyUI, and it doesn't turn out well with hands either. See the notes field for suggested "knobs and levers". [SDXL + Ultimate SD Upscale] Nature droids, workflow included. Upscale your output and pass it through a hand detailer in your SDXL workflow. Upscale smaller images to at least 1024x1024 before you put them in to be inpainted. Searge-SDXL: EVOLVED v4.x.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and that's often too big to process), then send it back through VAE Encode and sample it again. It's nothing spectacular, but it gives good, consistent results. I have good results with SDXL models, the SDXL refiner, and most 4x upscalers (especially with SDXL, which can work in plenty of aspect ratios).
What you need is to either copy and drop the models there, or use a symbolic link (symlink) in Windows to basically shortcut the folder and link it to your other SD installs: mainly the Vlad, EasyDiffusion, or Automatic1111 directory.

Alright, it depends what you mean by "massive" :) Feeding an SD1.5 pass into an SDXL workflow ended up quite nice, but now I am trying out SDXL in ComfyUI directly. Every time I generate an image, it takes up more and more RAM (GPU RAM utilization remains flat). Copy the nodes to the "/custom_nodes/" directory inside ComfyUI. It uses a ControlNet tile with Ultimate SD Upscale. I played for a few days with ComfyUI and SDXL 1.0. A user on the SD Discord channel, working with Comfy himself, released a workflow.

If your image is 512x512 and you upscale to 2048x2048 and then run FaceDetailer, it's going to render the face at the resolution of the original render, not the upscale, and then just basic-scale it to fit the dimensions of the final image. I believe it should work with 8 GB VRAM provided your SDXL model and upscale model are not super huge; e.g. use a 2x upscaler model.
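The symlink tip above can be sketched in a few lines. All paths here are made up for the demo; on Windows the equivalent is `mklink /D` (or the same `os.symlink` call with appropriate privileges):

```python
import os
import tempfile

# Point a ComfyUI "checkpoints" entry at an existing A1111 model folder
# instead of copying multi-gigabyte files.
def link_models(a1111_dir: str, comfy_models_dir: str) -> list[str]:
    os.makedirs(comfy_models_dir, exist_ok=True)
    link = os.path.join(comfy_models_dir, "checkpoints")
    os.symlink(a1111_dir, link, target_is_directory=True)
    return sorted(os.listdir(link))  # ComfyUI now sees the A1111 checkpoints

with tempfile.TemporaryDirectory() as root:
    src = os.path.join(root, "a1111", "models", "Stable-diffusion")
    os.makedirs(src)
    open(os.path.join(src, "model.safetensors"), "w").close()
    seen = link_models(src, os.path.join(root, "ComfyUI", "models"))
    print(seen)  # ['model.safetensors']
```

The same trick works for the VAE and upscale-model folders, so every UI shares one copy of each model.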
You can upscale in SDXL and run the image through img2img in Automatic using SD 1.5. I normally do a 1.5x upscale, but I tried 2x and voila: with the higher resolution, the smaller hands are fixed a lot better. OK, solved it. Personally, in my opinion, your setup is heavily overloaded with stages that are incomprehensible to me. Since I started using ComfyUI, I have downloaded tons of workflows, but only around 10% of them work.

SDXL upscale tests: I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. The ComfyUI workflow is here; if anyone sees any flaws in my workflow, please let me know. It works well with a 1.5 model, but here it's not working. This workflow aims to provide upscaling and face restoration with sharp results. Installing a separate version of ComfyUI to work with Krita is recommended. Additionally, I need to incorporate FaceDetailer into the process, although in this instance I'm using them to help refine my SDXL model. Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

Hi there. I upscale with 1.5 Realistic Vision V4.0; that's the reason I first want to start with low denoising and then go higher, to keep the SDXL look. This is done after the refined image is upscaled and encoded into a latent. MoonRide workflow v1. I also tried moving them to a different folder in A1111 (models\embeddings) and changing the names of the textual inversions (and using those names in the prompt). It includes support for LoRAs and can be easily modified to work with SD1.5. Use a 2x upscaler model. There is an SDXL loader and sampler that might work better. Reinstalling the extension and Python does not help. The "KSampler SDXL" node produces your image. I try to use this model during upscale, or Photon v1.0, to get more realistic skin and faces. Hi! I'm having problems with loading upscale models.
...which is why it looks blurry and crappy. Hi! I have a very simple SDXL Lightning workflow with an OpenPose ControlNet, and the OpenPose doesn't seem to do anything. I wanted a flexible way to get good inpaint results with any SDXL model. It's doable, but if you are new and just want to play, it's difficult. I then use a tiled ControlNet and Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp. Although we suggest keeping this one to get the best results, you can use any SDXL LoRA. Hotshot-XL is a motion module used with SDXL that can make amazing animations.

ComfyUI — SDXL Advanced — Daemon + Meta. This works best with Stable Cascade images, and might still work with SDXL or SD1.5. It does not work as a final step, however. It's not the case in ComfyUI: you can load different checkpoints and LoRAs for each KSampler, Detailer, and even some upscaler nodes. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. In the ComfyUI Manager, select "Install Models" and scroll down to see the ControlNet models to download. Most of the models in the package from lllyasviel for SDXL do not work in Automatic1111. It's a bit of a mess at the moment working out what works with what. But it's weird: run the KSampler, feed that into FaceDetailer, then into the upscale workflow. Hello! I'm using SDXL base 1.0 with Automatic1111 and the refiner extension.

Sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216. Thank you, community! Not all aspect ratios work with the MSW-MSA attention node. He used 1.5. Download clip_l.safetensors and place the model files in the comfyui/models/clip directory. For example, the alternating syntax of [man|dog] in A1111 would make the program alternate between a man and a dog each step, but in ComfyUI it doesn't work at all for some wack reason. Here is my current hacky way of getting a latent-type upscale, but it is slow. Work on your prompting. No attempts to fix JPG artifacts, etc.
SD1.5 has its own CLIP negative and positive that go to the pipe, and it still won't upscale the face with SD1.5. For SD1.5 there is ControlNet inpaint, but so far nothing for SDXL. You could try a starting resolution of 472x1224, though. I get double mouths/noses.

Just remember, for best results you should use a detailer after you upscale. I try to upscale SDXL output images and want to use stable-diffusion-x4-upscaler. Dreamshaper is amazing, but the SDXL version of it is way behind, because there's just not as much to work with yet and because of the time it's going to take to train all the newer stuff. I tried all the possible upscalers in ComfyUI: LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), and the iterative latent upscale.

Are you using Ultimate SD Upscale under ComfyUI? I'm trying to make it work like Tiled Diffusion under A1111. SDXL Ultimate Workflow is the best and most complete single workflow that exists for SDXL 1.0; it also has full inpainting support to make custom changes to your generations. I did Install Missing Custom Nodes, Update All, etc., but there are many issues every time I load the workflows, and it looks pretty complicated to solve them. I VAE-decode to an image and use UltraSharp-4x to pixel upscale. I was getting some good results generating with SDXL in ComfyUI and then doing an img2img in A1111, but it would be nice to be able to do it all at once in ComfyUI. I tested with different SDXL models and tested without the LoRA, but the result is always the same: 4 steps with a CFG of 1, RealVisXL V4.0. Turbo-SDXL 1-step results + a 1-step hires-fix upscaler. Hopefully A1111 will get sorted out, because that's the kind of layout I consider 'comfortable' lol. 512x512 from the Civitai on-site generator, upscaled 8x with added detail. Fooocus is also one of the easiest Stable Diffusion interfaces for starting to explore Stable Diffusion, and SDXL specifically.
:) Do you have ComfyUI Manager? For example, 896x1152 or 1536x640 are good resolutions. ComfyUI handles .pth and .safetensors upscale models. This is the third and final part of stage two of the ComfyUI tutorial series, covering intermediate use; today we look at upscaling and the basic architecture of SDXL, and how XL differs from the basics we covered earlier.

Finally made a workflow for ComfyUI to do img2img with SDXL, workflow included. Your math doesn't work. You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (using an upscale node with 0.5 denoise). Now you can fully fine-tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer. I use an SD upscaler and upscale from that.

Tutorial 6 - Upscaling. I have ComfyUI Manager; it's just not working when I try to install missing models. SDXL Examples. I guess saying "upscaling" wasn't really the right term. Every time I try to create an image at 512x512, it is very slow but eventually finishes, giving me a corrupted mess like this. Upscaling from 2K to 4K is no problem, using 2K tiles, half-tile seam fix and Chess mode, with denoise set to 0.2. Switch to 1.5, and then, after upscale and face fix, you'll be surprised how much it changed. I couldn't make it work for the SDXL Base+Refiner flow.
You guys have been very supportive, so I'm posting here first.

Searge-SDXL: EVOLVED v4.x for ComfyUI; Table of Content; Version 4.x.

This SDXL-then-SD1.5 approach is only slightly slower than just SDXL (Refiner -> CCXL), but faster than SDXL (Refiner -> Base -> Refiner, or Base -> Refiner), and gives me a massive improvement in scene setup, character-to-scene placement, and scale, while not losing out on final detail.

Now you can full fine-tune / DreamBooth Stable Diffusion XL (SDXL) with only 10.3 GB VRAM via OneTrainer; both the U-Net and Text Encoder 1 are trained; compared a 14 GB config vs the slower 10.3 GB config (more info in the comments).

But I probably wouldn't upscale by 4x at all if fidelity is important. Most SDXL checkpoints work best with an image size of 1024x1024.

I'm trying to find a way of upscaling the SD video up from its 1024x576. Check whether you are using the right OpenPose model (SD1.5 vs SDXL) for your current checkpoint type.

ComfyUI SDXL-Turbo extension with upscale nodes.

I tested with Ultimate SD Upscale and ImpactPack's FaceDetailer nodes.
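Since most SDXL checkpoints work best near a 1024x1024 pixel budget, here is a quick helper for picking a width/height at a given aspect ratio (a hypothetical helper, assuming roughly 1 MP total and sides snapped to multiples of 64):

```python
import math

def sdxl_resolution(aspect, total_pixels=1024 * 1024, multiple=64):
    """Snap an aspect ratio (width/height) to an SDXL-friendly size near ~1 MP."""
    def snap(v):
        return max(multiple, round(v / multiple) * multiple)
    height = math.sqrt(total_pixels / aspect)
    return snap(height * aspect), snap(height)

print(sdxl_resolution(1.0))         # (1024, 1024)
print(sdxl_resolution(896 / 1152))  # (896, 1152)
```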
This one has been fixed to work in fp16 and should fix the issue. Recommended: download 4x-UltraSharp (67 MB) and copy it into ComfyUI/models/upscale. The way I've done it is sort of like that, as latent upscale doesn't work brilliantly.

Contribute to runtime44/comfyui_upscale_workflow development on GitHub.

To reproduce the preview not launching: launch ComfyUI using run_nvidia_gpu.

Just a simple upscale using Kohya Deep Shrink (workflow included). I have ComfyUI Manager; it's just not working when I try to install missing models.

SDXL Examples. I guess saying "upscaling" wasn't really the right term. ComfyUI - SDXL Advanced - Daemon +Meta.

Every time I try to create an image at 512x512, it is very slow but eventually finishes, giving me a corrupted mess like this.

Upscaling from 2K to 4K is no problem, using 2K tiles, half-tile seam fix and Chess mode, with denoise set to 0.28.

I refine on SD1.5, and after upscale and face fix you'll be surprised how much that changes. Couldn't make it work for the SDXL Base+Refiner flow. Upscale 1.5x-2x with either SDXL Turbo or SD1.5. The default layout should work fine with SD1.5.

You can't upscale an 832x1216 image to 1080x2800 without seriously stretching it.

Tutorial 7 - Lora Usage. This repository includes a custom node for ComfyUI for upscaling the latents quickly, using a small neural network, without needing to decode and encode with the VAE.

This workflow creates two outputs with two different sets of settings. I upscaled it to a resolution of 10240x6144 px for us to examine the results. PS: If someone has access to Magnific AI, can you please upscale and post the result for 256x384 (5 JPEG quality) and 256x384 (0 JPEG quality)?

What's the best upscale model? Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.
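The 4x-model-for-a-2x-upscale advice works because a model with a fixed scaling factor runs at its native factor first, and you resize down afterwards. A sketch of the arithmetic (the function name is made up for illustration):

```python
def fixed_factor_upscale(src_w, src_h, model_factor, target_scale):
    """Run a fixed-factor upscale model (e.g. 4x), then resize to the target scale."""
    up = (src_w * model_factor, src_h * model_factor)  # model's native output size
    resize = target_scale / model_factor               # e.g. 2 / 4 = 0.5 downscale
    final = (int(up[0] * resize), int(up[1] * resize))
    return up, resize, final

print(fixed_factor_upscale(1024, 1024, model_factor=4, target_scale=2))
# ((4096, 4096), 0.5, (2048, 2048))
```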
As for this SDXL (Refiner -> CCXL) -> SD 1.5 approach: depending on the workflow, swapping it in may or may not work if there are other nodes in the workflow expecting an SDXL model.

Real-time prompting with SDXL Turbo and ComfyUI running locally.

I know this is an old thread (in the world of AI), but I thought I would add my thoughts here, since I have been working with Ultimate Upscale a lot lately, with very good results. Please do share which revision solved the issue.
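For the SD1.5 refine pass in this kind of approach, the denoise value controls roughly how much of the sampler's schedule actually runs on the upscaled image. A rough sketch of that relationship (a common approximation, not exact for every sampler or scheduler):

```python
def refine_steps(total_steps, denoise):
    """Roughly how many sampler steps an img2img refine pass actually runs."""
    skipped = round(total_steps * (1 - denoise))  # early steps replaced by the input image
    return total_steps - skipped

print(refine_steps(20, 0.50))  # 10 of 20 steps run
print(refine_steps(30, 0.28))  # 8 of 30 steps run
```

This is why low denoise values (around 0.2-0.3) mostly sharpen what is already there, while values above 0.5 let the second model repaint much more of the image.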