ComfyUI face detection not working: troubleshooting notes

These notes collect the recurring causes and fixes reported by the community for face detection, FaceDetailer, and face swap problems in ComfyUI (Impact Pack, ReActor, IPAdapter FaceID, and related custom nodes).
If this is your first dive into ComfyUI, it can look intimidating with all the nodes and noodles, and face detection is one of the first places where things quietly fail: a detailer that never fires, a swap that leaves the face untouched, or a workflow that errors out because nothing was detected.

How the face pipeline fits together. Face restoration models (CodeFormer, GFPGAN) only work with cropped face images, so a face detection model runs first and sends a crop of each face it finds to the restoration model. The output of the detection model is just the location of the face: a bounding box plus landmark key points such as the positions of the eyes, nose, and mouth. That data is used to crop and align the detected face in the original image before it is restored or swapped, and if no face is detected the output is simply None, which is why downstream nodes fail or skip. When tag-based detection is used, an empty result is also produced when no objects match the tags, and ComfyUI cannot handle an empty list, which leads to the failure.

Where the models live. The small face detection models are downloaded automatically and placed in models/facedetection the first time each one is used. Face restore models have to be placed in ComfyUI\models\facerestore_models yourself; after adding them, restart ComfyUI and click "Refresh" to clear the cache. If more than one face appears in the source or input image, make sure the face indexes are set correctly (0 when there is only one face). An optional mask input can be used to focus face detection on specific areas of the input image, which improves both the accuracy and the relevance of the detection.

Before blaming the workflow, it is worth checking that a detector can actually see a face in your image at all.
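A minimal sketch of such a check, run outside ComfyUI, using the insightface package that ReActor and the FaceID nodes build on. The buffalo_l model name, the providers list, and the image path are assumptions for a default install, not any node's own code:

```python
# Check whether insightface can find a face in an image, outside of ComfyUI.
# Assumes `pip install insightface onnxruntime opencv-python`; the buffalo_l
# model pack is downloaded automatically on first use.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))   # detector input resolution

img = cv2.imread("source_face.png")          # hypothetical test image
faces = app.get(img)
print(f"detected {len(faces)} face(s)")
for i, face in enumerate(faces):
    # each Face carries a bounding box, landmarks, and a detection score
    print(i, "bbox:", face.bbox.astype(int), "score:", round(float(face.det_score), 3))
```

If this prints zero faces for your source image, no amount of node rewiring will help; fix the image (lighting, size, crop) first.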
Classical detectors: dlib, YuNet, and Haar cascades. One repository of custom nodes handles face detection, restoration, and visualization with YuNet and dlib models. For dlib you download the Shape Predictor, the 5-landmark and 81-landmark Face Predictor models, and the Face Recognition model and place them in the dlib directory; a precompiled dlib build for Windows is available, which saves fighting with a compiler. The DLIB_MODEL parameter has no specific minimum, maximum, or default value, it just has to point at a valid dlib model. The usual dlib tutorial splits detection into two scripts: hog_face_detection.py, which applies dlib's HOG + Linear SVM detector, and cnn_face_detection.py, which uses dlib's MMOD CNN detector; running both on a set of images shows when to use each detector in a given situation.

Other nodes use OpenCV's Haar cascade classifier instead. FRED_AutoCropImage_SDXL_Face_Detect, for example, is a custom node that automatically crops an image to fit the SDXL aspect ratio, with the face detection done by a Haar Cascade Classifier. Two parameters matter there: scale_factor, which rescales the image between detection passes so that faces at different scales are found (a typical value is 1.2), and min_neighbors, which controls how much corroboration a candidate detection needs before it counts. A minimum-size setting ensures that only faces within a certain size range are detected, which is useful when you only care about large, prominent faces. All of these classical detectors share the same weaknesses: they might not work well in low-light environments, and they are built for roughly frontal, upright faces.
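A short sketch of those two Haar parameters using OpenCV's bundled frontal-face cascade. The image path is a placeholder, and this illustrates the parameters rather than reproducing any node's actual code:

```python
import cv2

# OpenCV ships the standard Haar cascades alongside the Python package
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("portrait.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor ~1.2 rescans the image in 20% size steps; minNeighbors sets how
# many overlapping candidate windows a detection needs to be kept.
faces = detector.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                  minSize=(48, 48))
print(f"{len(faces)} face(s): {[tuple(int(v) for v in f) for f in faces]}")
```

Raising scaleFactor makes detection faster but can skip faces; raising minNeighbors removes false positives at the cost of missing borderline faces.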
The face_recognition Python library also turns up in several of these threads, usually from people experimenting with facial recognition in Python before wiring anything into ComfyUI and having trouble getting dlib to work. The classic failure, typically on Windows, is that pip3 install face_recognition still does not work even after installing cmake, because dlib has to compile unless a prebuilt wheel is used. The library itself is a thin wrapper around dlib's detectors: face_locations() runs the HOG detector by default and can be switched to the CNN one.
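The snippet quoted in fragments throughout this page, reassembled into a runnable form (the filename is whatever image you want to test):

```python
import face_recognition

image = face_recognition.load_image_file("My_Image.png")
# model="hog" is the default (fast, CPU-only); model="cnn" uses dlib's MMOD CNN
face_locations = face_recognition.face_locations(image)

print("I found {} face(s) in this photograph.".format(len(face_locations)))
for top, right, bottom, left in face_locations:
    print("Face at top={}, right={}, bottom={}, left={}".format(top, right, bottom, left))
```

With model="cnn" the MMOD CNN detector is used instead, which is slower but handles harder poses.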
Which detection model to pick. Most detailer workflows use the YOLO-based detection models: face_yolov8s, hand_yolov8n, and person_yolov8s, plus the larger face_yolov8m, hand_yolov8s, person_yolov8m, and deepfashion2_yolov8s variants, which offer better detection for their intended target but take a little longer. In practice the bbox face YOLO models have the best hit rate, roughly a 95% detection rate even when the face is partially covered by some object, far away, or close to the camera, while the matching segmentation files (segm/face_yolov8m-seg_60.pt and segm/hair_yolov8n-seg_60.pt) are much harder to find, which regularly breaks workflows that reference them. Swapping the detection model is a common fix: one user reports that changing models (YOLOv5l in their case) solved 95% of the "face too close to the camera is not detected" cases, and a ReActor feature request (#445) asks for a supported way to change the detection model and a folder to drop the .pt file into. If you update ComfyUI and all extensions at once, also expect occasional breakage in UltralyticsDetectorProvider itself until the node packs catch up.

MediaPipe and insightface are the other two options. MediaPipe is open source from Google and can be used commercially in a deployment, but its face detection is not as good as insightface's, and on Windows it cannot run on the GPU (it is, however, much faster on CPU). Insightface's detection and recognition models mostly just work out of the box, and one of the bundled detectors advertises 99.8% accuracy. None of them are magic: the detectors consistently fail on a wide-open mouth or strongly contorted features even with the threshold at its minimum, they are poor at upside-down or sideways faces (someone lying on their side, for instance; a rotation-aware face detailer is being worked on), and when nothing is detected the whole ComfyUI workflow can stop instead of passing the image through.
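To see what one of the face_yolov8* detectors actually finds in your image, it can be run outside ComfyUI with the Ultralytics package. The weight filename and folder are assumptions; use whichever .pt file your detector provider points at (Impact Pack style installs usually keep them under ComfyUI/models/ultralytics/bbox):

```python
# Sketch: run a YOLO face detection model directly to inspect its output.
from ultralytics import YOLO

model = YOLO("face_yolov8m.pt")            # hypothetical local path to the weight file
results = model("portrait.png", conf=0.3)  # conf roughly corresponds to the bbox threshold
boxes = results[0].boxes
print(f"{len(boxes)} face(s)")
for box in boxes:
    print("xyxy:", box.xyxy[0].tolist(), "conf:", float(box.conf[0]))
```

Lowering conf finds more (and smaller, more occluded) faces; raising it avoids false positives on things like hands or ornaments.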
Insightface, IPAdapter FaceID, and installation pain. To get IP-Adapter-FaceID working with ComfyUI IPAdapter plus you need insightface, and insightface is not installed automatically; a lot of people have had trouble installing it. If you have a working compile environment it can be as easy as pip install insightface, and the portable build has an equivalent command for its embedded Python; one guide gives installation links for Python 3.10 and 3.11, and there are Colab notebooks that handle it too. One Paperspace user simply runs !pip install insightface before starting ComfyUI and checks that comfyui/models/insightface contains the buffalo model folder; installing onnxruntime explicitly is another commonly missing piece. A ModuleNotFoundError on "from insightface ... import Face" means the insightface package itself is broken or half-installed, not the node.

A few FaceID-specific gotchas: the error "insightface model is required for FaceID models" is reported when the ip-adapter-plus-face_sd15 model is loaded where a FaceID model is expected; models placed under ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models are sometimes not detected when ComfyUI runs under StabilityMatrix (#244); and on Windows it can be necessary to replace the root parameter in the FaceAnalysis class with the complete, absolute path to the insightface folder. The FaceID code wraps insightface in a small FaceAnalysis subclass (FaceAnalysis2) so that things like the ONNX execution providers can be set, with alternatives to the default buffalo_l pack. Some users also report plain compatibility problems between ReActor and FaceID installed side by side. The advice on updates is contradictory: several issues were fixed simply by updating ComfyUI to the latest version, while other boards suggest not running the bundled update script because it creates requirement conflicts between PyTorch and everything else. A frequently reported symptom is "the face remains the same, no transfer, only a colour change"; when the likeness is weak, one workable recipe is to inpaint the face with FaceIDv2 again, using enough denoise to add detail back but not enough to change the face.
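A quick environment check for the insightface / onnxruntime stack that FaceID, ReActor, and similar nodes depend on. The folder layout is an assumption for a default portable install:

```python
# Sketch: verify that the Python packages and the buffalo_l model pack are present.
import os

try:
    import onnxruntime as ort
    print("onnxruntime", ort.__version__, "providers:", ort.get_available_providers())
except ImportError as e:
    print("onnxruntime missing:", e)

try:
    import insightface
    print("insightface", insightface.__version__)
except ImportError as e:
    print("insightface missing:", e)

# FaceAnalysis(root=...) looks for <root>/models/buffalo_l
buffalo_dir = os.path.join("ComfyUI", "models", "insightface", "models", "buffalo_l")
print("buffalo_l pack:", "found" if os.path.isdir(buffalo_dir) else "not found", "->", buffalo_dir)
```

If onnxruntime imports but lists no GPU provider, the nodes will silently fall back to CPU, which is slow but should still detect faces.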
Face swap with ReActor (and friends). ReActor (Gourieff/comfyui-reactor-node, "Fast and Simple Face Swap Extension Node for ComfyUI") is usually installed through the ComfyUI Manager's custom node list, or by cloning it into custom_nodes and running its install script. It swaps the face only; it does not do head or person swaps. The core swapping is done by the inswapper_128.onnx model, which works with 128x128 pixel face images as a balance of quality and speed; that is why the swapped face often looks blurry while the hair looks fine. Face restoration is the optional step that fixes this: CodeFormer or GFPGAN regenerate and synthesize the low-resolution face at higher resolution to restore the details, and the restore models have to be placed in ComfyUI\models\facerestore_models (the mav-rik/facerestore_cf custom node provides the same face restore option as the AUTOMATIC1111 webui, including the CodeFormer fidelity parameter). Remember to create the models folder and place the onnx model in it; the models are the same ones FaceFusion uses and can be found in the facefusion assets. The full pipeline is: a load stage detects the faces, alignment models line up the facial features, the swap runs, and the resulting image can then be passed to a Face Detailer and/or upscalers if enabled.

If the output is unchanged and the console shows no errors, ReActor either flagged the image as NSFW or simply could not detect a face at all. A healthy run logs something like "STATUS - Working: source face index [0], target face index [0]" followed by "Using Ready Source Face(s) Model" and "Analyzing Target". With a single face in both images the source and target face indexes should be 0; with several faces (for example a video full of different people) you pick the person via the indexes, the detect_gender_source / detect_gender_input options (the gender of the source face image and of the input image being swapped into), or a gender filter node that takes the collection of detected faces and keeps only the requested gender; the quality of the input faces directly affects that filtering. Hair is not transferred, so prompt for the hair, and prompting for similar face features also helps a lot.

A saved face model is not similar to a checkpoint or a LoRA; it is less of a model and more like a "face preset" that only works with ReActor and other nodes using the same technology. Face models can be saved as safetensors files in ComfyUI\models\reactor\faces and loaded back; to build one from a folder of photos, feed a Load Image Batch From Dir (Inspire) node into a Save Face Model node, give it a face_model_name, and queue the prompt. Recent ReActor versions also moved the "input_image" input to the first position (which makes bypassing the node behave correctly) and added support for AuraFace as an alternative to insightface. On quality: the face upscaler takes roughly four times as long as the swap itself on video frames, faces can warp when there is a lot of motion plus upscaling, and for large batches of videos or photos the standalone roop tool scales to higher quality but misses out on img2img. Alternatives include ComfyUI-InstaSwap (pitched as a faster way to do face swaps) and imb101/ComfyUI-FaceSwap (add a FaceSwapNode, give it an image of a face and an image to swap the face into, then pick which source and target faces to use); there are also shared workflows, such as a SeaArt quick tool combining a Face Detector with IP Adapter and a workflow by Ning that solves face mismatch with ReActor and only needs "Queue Prompt" after loading the image. Swapping faces onto animal characters (in children's storybooks, say) remains an open question, since these detectors are trained on human faces.
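A small sketch of how "face index 0" and the gender options map onto raw detector output. Here faces is the list returned by insightface's FaceAnalysis.get(); the left-to-right sort and the sex attribute check are assumptions about how such filtering is typically done, not ReActor's exact code:

```python
def pick_face(faces, index=0, gender=None):
    """Return one detected face by index, optionally filtered by gender."""
    if gender is not None:
        # insightface Face objects expose sex as "M"/"F" on recent versions
        faces = [f for f in faces if getattr(f, "sex", None) == gender]
    faces = sorted(faces, key=lambda f: float(f.bbox[0]))   # left-to-right order
    if index >= len(faces):
        return None                                         # mirrors "no face detected -> None"
    return faces[index]
```

The point is simply that an index only means something once the face list has a deterministic order, which is why mismatched indexes so often show up as "the wrong person got swapped".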
Input "input_image" goes first now, it gives a correct bypass and also it is right to have the main input first; You can now save face models as "safetensors" files (ComfyUI\models\reactor\faces) and load them into ReActor implementing Created by: Ning: What this workflow does 👉 Solve the problem of face mismatch when using ReActor node. I tested with an Ai-Thinker and an ESP-EYE and there is nothing in the serial monitor or in the stream. pth 和RealESRGAN_x2plus. Download (1. ipynb . # The original code for Face detection was written by WASasquatch for the was-node-suite-comfyui project: from PIL import Image: import cv2 It may or may not make its way to ComfyUI, but much of the backend could potentially be handled with a node layout. FAQ. You switched accounts on another tab or window. pt file in the same huggingface directory as above. The mask parameter is optional and allows you to provide a mask image. device = torch. 15. After downloading the model, you need to place it in the ComfyUI\models\facerestore_models folder. Face detection: Method of detecting faces, you can choose according to your actual situation, but I usually choose the first one. Cropping Issues: Installing ComfyUI ReActor problem I realise this may not be the place to seek such answers but I’m looking to cast as wide a net as possible, and so far this community has been the go-to spot for technical answers with Automatic1111, so I figure maybe I’ll find some answers to a -The face detection is simply not very good. Is basically like buying a creality 3d printer, instead of printing what you want you are tinkering with I'm using ADetailer with automatic1111, and it works great for fixing faces. KEY_POINT. 5 models sdxlfacedetail workflow. For examples please refer to InsightfaceExample. and the SAMDetectorCombined node is used to find the segment related to the detected face. Notifications You must be signed in to change notification settings; Fork 33; Star 593. Each face in the collection is analyzed to determine its gender. Included is face_yolov8m hand_yolov8s person_yolov8m deepfashion2_yolov8s They should offer better detection for their intended target but maybe take a little longer. Click the Manager button modifications and enhancements, ensuring that changes are applied only where needed. it took a while to adjust the right setting in the face detailer, but im pretty happy with the outcome. It's not about the hardware in your rig, but the software in your heart! Join us in celebrating and promoting tech, knowledge, and the best gaming, study, and work platform there exists. Hello~ I’m working with the impact facedetailer nodes but can’t seem to make them detect large faces, only medium and small size faces I have tried setting the guide size and max size to 1024 but it still will only select medium / small faces DeepFuze is a state-of-the-art deep learning tool that seamlessly integrates with ComfyUI to revolutionize facial transformations, lipsyncing, video generation, voice cloning, face swapping, and lipsync translation. 4 . Usage. My workflow for facetailer doesn't work anymore. 53. 1 . About ComfyUI nodes for the roop extension originally for a1111 stable-diffusion-webui Hey this is my first ComfyUI workflow hope you enjoy it! I've never shared a flow before so if it has problems please let me know. About In order to run face detailer to fix a face from an image, you can download this basic workflow on OpenArt, then load in it ComfyUI and install any missing custom node. 10 and 3. 
Video, AnimateDiff, and lip sync. A straightforward AnimateDiff workflow that upscales and then applies FaceDetailer tends not to work as-is. The fix reported again and again is to bypass the AnimateDiff Loader when feeding the model into FaceDetailer, routing the original model loader into the To Basic Pipe node instead; otherwise you get noise on the face, because the AnimateDiff loader does not work on single images (it needs a handful of frames) while FaceDetailer handles one image at a time. For the frames themselves, put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine so the detailer runs per frame; the detailed faces are not always consistent from frame to frame, which is why some people still export the frames and fix the face with ADetailer in Automatic1111. Keep in mind that with batched images the detected regions, their quantity, and their sizes vary for each image, and processing a batch simultaneously requires every image to have exactly the same size; combining everything into one pass can cost quality on the other images.

For talking heads there is a Wav2Lip custom node that takes an input video and an audio file and generates a lip-synced output video (adjust the face_detect_batch size if needed), and DeepFuze integrates with ComfyUI for face swapping, lip-syncing, lipsync translation, voice cloning, and video generation with synchronized facial movements.
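A sketch of what the ImageBatchToImageList / ImageListToImageBatch pair does conceptually: ComfyUI IMAGE tensors are shaped [batch, height, width, channels], and the detailer is applied per frame before the frames are re-stacked for Video Combine. Function names here are illustrative only:

```python
import torch

def batch_to_list(images: torch.Tensor) -> list:
    return [img.unsqueeze(0) for img in images]      # B tensors of shape [1, H, W, C]

def list_to_batch(frames: list) -> torch.Tensor:
    return torch.cat(frames, dim=0)                  # back to [B, H, W, C]

frames = batch_to_list(torch.rand(8, 512, 512, 3))   # fake 8-frame batch
processed = [f for f in frames]                       # stand-in for per-frame FaceDetailer
video_batch = list_to_batch(processed)
print(video_batch.shape)                              # torch.Size([8, 512, 512, 3])
```

This is also why the "same size" rule matters: torch.cat only works when every per-frame result has identical height and width.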
Pose preprocessors, DensePose, and DirectML. The OpenPose/DWPose preprocessors download models for face and hand detection the first time the node runs, but several users report never seeing their output: only the body information shows up in the preview. The controlnet_aux developer has acknowledged a bug here; the relevant call is poses = self.detect_poses(detected_map, input_image, include_hand, include_face), and editing the file by hand with controlnet_aux already installed has not helped. On DensePose, the ComfyUI DensePose ControlNet does not produce the Magic Animate look by itself (that is Magic Animate's own doing); if you want to generate denseposes you can install detectron2 on Linux or under WSL2 until the node is fixed.

DirectML users get the roughest ride: DML support in ComfyUI is very basic, and the best DirectML support is in SD.Next followed by the DML fork of auto1111 (there is really only one DML UI developer). The "1 GB" of VRAM ComfyUI prints on DML is a hard-coded value, not a reading from the actual hardware. The usual sequence is to update to PyTorch 2.x first, and if device selection keeps crashing, several users got things running by deleting the auto-detection lines and hard-coding the device to either torch.device('cpu') or torch.device('privateuseone'). When any of this blows up, the traceback ends in ComfyUI's execution.py (recursive_execute); that part only says a node failed, so read the lines above it for the actual cause.
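A sketch of that device workaround in one place: use torch-directml's device if the package is installed, otherwise fall back to CPU. The torch_directml import is an assumption about the setup, not something ComfyUI does for you:

```python
import torch

try:
    import torch_directml                  # pip install torch-directml (assumed present)
    device = torch_directml.device()       # reports as a "privateuseone" device
except ImportError:
    device = torch.device("cpu")           # slow but always works for face detection

print("running face detection on:", device)
```

CPU-only detection is noticeably slower but avoids the DirectML edge cases entirely, which is often the quickest way to confirm the rest of a workflow is sound.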
Other nodes worth knowing about. The face parsing nodes use a face parsing model to parse the face and provide detailed segmentation, and to improve segmentation accuracy a YOLOv8 face model first extracts the face from the image; there are auxiliary nodes for image and mask processing, including one that identifies and masks skin regions. A webcam/screen-capture extension adds nodes that grab images from your webcam or screen and feed them straight into ComfyUI for processing. ComfyUI-Portrait-Maker installs through the ComfyUI Manager (just search for it), and ComfyUI-FaceChain keeps its own list of common issues and solutions. The DZ FaceDetailer author describes its aggressive MediaPipe-plus-template-masking face detection as a key feature, though the project is still under development. One face-restore update added automatic downloading of detection_Resnet50_Final.pth and RealESRGAN_x2plus.pth; on first use, leave the realesrgan and face_detection_model menus set to 'none' so the download triggers.

A few scattered reports: PuLID Flux failing to load even after reinstalling, with the advice that circulates alongside it being to check the model files, update ComfyUI, and rename the .sft files to .safetensors; ComfyUI refusing to start after installing the WAS Node Suite with a complaint about "a problem with a YAML file loading" (odd, since WAS-NS does not use any YAML its author is aware of), which in one case was cured by reinstalling pip, after which the face detailer worked again; KjLP working fine in the current ComfyUI portable build, with possible compatibility issues in future upgrades; and on Apple Silicon Macs it is better to quit other applications and restart ComfyUI from the terminal.
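Since so many of these reports come down to a model file sitting in the wrong folder, a small check like the following can save a lot of node rewiring. The folder names follow the paths mentioned on this page; the exact layout depends on the node packs you have installed, so treat them as assumptions:

```python
import os

checks = {
    "face restore models":        os.path.join("ComfyUI", "models", "facerestore_models"),
    "auto-downloaded detectors":  os.path.join("ComfyUI", "models", "facedetection"),
    "ultralytics bbox detectors": os.path.join("ComfyUI", "models", "ultralytics", "bbox"),
    "ReActor face presets":       os.path.join("ComfyUI", "models", "reactor", "faces"),
}
for label, path in checks.items():
    files = os.listdir(path) if os.path.isdir(path) else []
    print(f"{label}: {path} -> {len(files)} file(s)")
```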
Finally, wherever a face_enhance_model parameter appears, it simply defines the model used to enhance the detected faces. Most of the "face detection not working" reports above come down to one stage of the chain misbehaving (detection, alignment, swap, or restoration), so checking each stage's output separately is the fastest route to a fix. As one frustrated user put it, for any of this to become a tool it needs to be functional, not something you spend your time working to make functional.