Comfyui pony workflow reddit

Hi. Welcome to the unofficial ComfyUI subreddit. Please keep posted images SFW.

The "workflow" is different, but if you're willing to put in the effort to thoroughly learn a game like that and enjoy the process, then learning ComfyUI shouldn't be that much of a challenge.

That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Very proficient in furry, feet, and almost all NSFW stuff.

Besides that, if you have a large workflow built out but want to add in a section from someone else's workflow, open the other workflow in another tab. You can hold Shift and select each node individually to select a bunch, or hold Ctrl and drag around a group of nodes you want to copy.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

Hello good people! I need your advice, or some ready-to-go workflow, to recreate this A1111 workflow in Comfy. Step 1: generating images while adding some (2-3) additional LoRAs.

An all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img. But it's reasonably clean to be used as a learning tool, which is and will always remain the main goal of this workflow.

For your all-in-one workflow, use the Generate tab.

It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

Just load your image and prompt, and go. Less-is-more approach. Nobody needs all that, LOL.

But mine do include workflows, for the most part, in the video description. I'm not sure if IP-Adapter will.

It was one of the earliest to add support for Turbo, for example.
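Since several comments above talk about wiring nodes together, here is a minimal sketch of what a ComfyUI graph looks like in the API "prompt" format (node-id mapped to `class_type` and `inputs`, with links written as `[source_node_id, output_index]`). The node class names are core ComfyUI nodes; the checkpoint filename and prompt text are placeholders for illustration.

```python
import json

def txt2img_graph(ckpt: str, pos: str, neg: str, seed: int = 0) -> dict:
    """Build a minimal text-to-image graph in ComfyUI's API 'prompt' format:
    each key is a node id; a link is [source_node_id, output_index]."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": ckpt}},
        # CheckpointLoaderSimple outputs are MODEL (0), CLIP (1), VAE (2).
        "2": {"class_type": "CLIPTextEncode", "inputs": {"text": pos, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode", "inputs": {"text": neg, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0], "seed": seed, "steps": 25, "cfg": 4.0,
                         "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "pony"}},
    }

# Hypothetical checkpoint name, just to show the shape of the graph.
graph = txt2img_graph("ponyDiffusionV6XL.safetensors", "score_9, 1girl", "blurry")
print(json.dumps(graph, indent=2)[:80])
```

Copy-pasting a section from someone else's workflow is essentially merging a few of these node entries into your own dict, with the link ids renumbered.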
The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

I use a lot of the merges on Civitai, and one other key I've found is using a low CFG.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

There are plenty of ways; it depends on your needs. Too many to count.

It's become such a different model that most of the LoRAs don't work with it.

A sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint. Hopefully this will be useful to you.

I need an img2img Pony workflow.

Mar 23, 2024 · My review of Pony Diffusion XL: skilled in NSFW content. Offers various art styles.

So I'm happy to announce today: my tutorial and workflow are available.

I've been especially digging the detail in the clothing more than anything else.

This is gonna replace Lightning LoRAs when used with Pony, at least for me.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it.

The UI feels professional and directed. Help me make it better!

Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows.

Flux Schnell is a distilled 4-step model.
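The "low CFG" advice above is about classifier-free guidance, which pushes the conditioned prediction away from the unconditioned one at each denoising step. A minimal sketch of the combining formula, using plain floats in place of the real noise-prediction tensors:

```python
def cfg_combine(uncond: float, cond: float, scale: float) -> float:
    """Classifier-free guidance: amplify the difference between the
    prompt-conditioned and unconditioned predictions by `scale`.
    scale=1.0 means no extra guidance."""
    return uncond + scale * (cond - uncond)

# A higher CFG scale exaggerates the prompt's pull on each step, which is
# why overcooked-looking output often improves at lower values:
print(cfg_combine(0.10, 0.30, 7.0))  # strong guidance
print(cfg_combine(0.10, 0.30, 3.0))  # the low-CFG regime Pony merges tend to like
```

The numbers here are toy values; in a real sampler `uncond` and `cond` are full latent tensors and the same formula is applied elementwise.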
A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

I'm finding it hard to stick with one, and I'm constantly trying different combinations of LoRAs with checkpoints.

It's not for beginners, but that's OK. It's simple and straight to the point.

Here goes the philosophical thought of the day: yesterday I blew up my ComfyUI (gazillions of custom nodes that had wrecked the install; half of the workflows did not work, because the dependency differences between the packages those workflows needed were so huge that I had to do basically a full-blown reinstall). Nothing fancy.

Then Ctrl+C, and in your workflow Ctrl+V to paste the copied nodes.

What I'm thinking of is setting up a workflow that uses Pony, then running it back for a second pass with IP-Adapter img2img using the image from the Pony pipeline, and seeing how that goes.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Using the basic Comfy workflow from Hugging Face, the sd3_medium_incl_clips model, the latest version of Comfy, and all default workflow settings, on an M3 Max MBP, all I can produce are these noise images.

Hey Reddit! I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/

Pony Diffusion and EpicRealism seem to be my "go to" options, but then I try something like Juggernaut or RealVis and I'm back to racking my brain.

Number 1: This will be the main control center.

The graphic style, I think, was 3DS Max.

ComfyUI needs a standalone node manager, IMO; something that can do the whole install process and make sure the correct install paths are being used for modules.

You can't change clip skip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded.

I've color-coded all related windows so you always know what's going on.
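For the "constantly trying different combinations of LoRAs with checkpoints" problem, enumerating every pairing up front (the same idea as an XY-plot grid) makes the comparison systematic instead of ad hoc. A small sketch, with hypothetical model names standing in for your own:

```python
from itertools import product

# Hypothetical checkpoint and LoRA names, purely for illustration.
checkpoints = ["ponyDiffusionV6XL", "epicrealism", "juggernautXL"]
loras = ["detail_tweaker", "style_ghibli"]

# Every checkpoint/LoRA pairing, so each combination gets rendered once
# with the same seed and prompt for a fair side-by-side comparison.
grid = [(ckpt, lora) for ckpt, lora in product(checkpoints, loras)]
for ckpt, lora in grid:
    print(f"{ckpt} + {lora}")
```

In ComfyUI itself the equivalent is an XY Plot node (several custom-node packs provide one), but generating the list outside the graph also works for batch scripts against the API.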
The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".

I really, really love how lightweight and flexible it is.

What samplers should I use? How many steps? What am I doing wrong?

Some people there just post a lot of very similar workflows, just to show off the picture, which makes it a bit annoying when you want to find new, interesting ways to do things in ComfyUI.

What's New in 4.0? This workflow can use LoRAs and ControlNets; it enables negative prompting with KSampler, dynamic thresholding, inpainting, and more.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know where to go from there.

After all, the default workflow still uses the general CLIP encoder, CLIPTextEncode.

Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description as my account awaits verification.

Take a LoRA of person A and a LoRA of person B, and place them into the same photo (SD1.5, not XL).

Not a specialist, just a knowledgeable beginner.

If you see any red nodes, I recommend using ComfyUI Manager's "install missing custom nodes" function.

Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.
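On the clip skip point above: A1111 counts clip skip as a positive number starting at 1, while ComfyUI's CLIPSetLastLayer node expects a negative `stop_at_clip_layer`. A tiny sketch of the mapping:

```python
def a1111_to_comfy_clip_skip(clip_skip: int) -> int:
    """Convert an A1111 'Clip skip' value (1, 2, ...) into the negative
    stop_at_clip_layer value used by ComfyUI's CLIPSetLastLayer node.
    A1111's 1 (use the last CLIP layer) corresponds to ComfyUI's -1."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

print(a1111_to_comfy_clip_skip(2))  # -2, the value Pony-family models expect
```

So replicating a Pony setup from A1111 means inserting a CLIPSetLastLayer node set to -2 between the checkpoint loader's CLIP output and the text encoders.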
A CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix.

Aug 2, 2024 · You can then load or drag the following image into ComfyUI to get the workflow; the image itself contains the workflow (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png).

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

Specializes in adorable anime characters.

I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner, LoRA selector, upscaler, and sharpener.

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units, which are represented as nodes.

YMMV, but lower CFG with Pony has TREMENDOUSLY reduced my frustration with it.

Anyone have a workflow to do the following?

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones.

3 - At least to my eyes, a 2-step LoRA @ 5 steps is better than a 4-step LoRA @ 5 steps.

How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

So, up until today, I figured the "default workflow" was still always the best thing to use.

I was confused by the fact that I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into an empty ComfyUI.
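Dropping a PNG into ComfyUI works because ComfyUI embeds the workflow JSON in the image's PNG text chunks (under keys like "workflow" and "prompt"), typically as tEXt. A minimal sketch of pulling those chunks out with only the standard library; the demo PNG at the bottom is fabricated in-memory just to exercise the parser:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword, NUL separator, latin-1 text
            key, _, text = body.partition(b"\x00")
            out[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Fake minimal PNG carrying a "workflow" chunk, the way ComfyUI outputs do.
demo = PNG_SIG + make_chunk(b"tEXt", b'workflow\x00{"nodes": []}') + make_chunk(b"IEND", b"")
print(png_text_chunks(demo))  # {'workflow': '{"nodes": []}'}
```

This also explains the "metadata is not complete" complaints elsewhere in the thread: if a site recompresses or strips the PNG text chunks, there is no workflow left for ComfyUI to load.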
I don't have much time to type, but: the first option is to use a model upscaler, which will work off of your image node; you can download those from a website that lists dozens of models. A popular one is ESRGAN 4x.

I share many results, and many ask me to share the workflow.

I wanted a very simple but efficient & flexible workflow.

It's becoming very overwhelming and counterproductive to my workflow.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; looks like the metadata is not complete.

For a dozen days I've been working on a simple but efficient workflow for upscaling.

Hey everyone, we've built a quick way to share ComfyUI workflows through an API and an interactive widget.

I just released version 4.0 of my AP Workflow for ComfyUI.

It shines with LoRAs, but I personally haven't used Pony itself for months. Any suggestions?

That's awesome! ComfyUI has been one of the two repos I keep installed: the SD-UX fork of Auto, and this.

2 - At least with Pony, Hyper seems better.

ComfyUI's inpainting and masking ain't perfect.

Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

ComfyUI is a completely different conceptual approach to generative art.

In your workflow, HandsRefiner works as a detailer for properly generated hands; it is not a "fixer" for wrong anatomy. I say this because I have the same workflow myself (unless you are trying to connect some depth ControlNet to that detailer node).

Hi everyone, I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed by the number of ways to do upscaling.

So, I just made this ComfyUI workflow.
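A detail behind the two-pass (hires fix) upscaling discussed above: SD latents are 1/8 the pixel resolution, so second-pass target sizes are normally snapped to multiples of 8. A small sketch of that rounding:

```python
def hires_dims(width: int, height: int, scale: float) -> tuple:
    """Scale a base resolution for a hires-fix second pass, snapping each
    side to the nearest multiple of 8 (SD latents are 1/8 pixel resolution)."""
    snap = lambda px: int(round(px * scale / 8)) * 8
    return snap(width), snap(height)

print(hires_dims(832, 1216, 1.5))    # (1248, 1824)
print(hires_dims(1024, 1024, 1.37))  # (1400, 1400) - 1402.88 snapped down
```

Latent-scaling methods resize this 1/8-size tensor directly before the second KSampler pass, while non-latent (pixel/model) upscaling decodes to an image, upscales with something like ESRGAN, and re-encodes.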
I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

May 19, 2024 · Download the workflow and open it in ComfyUI. (I've also edited the post to include a link to the workflow.)

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual-area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only tile ControlNet blurs the image).

Pony is weird.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) Tutorial | Guide. I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

It is much more coherent, and it relies heavily on the IP-Adapter source image, as you can see in the gallery.

BTW, 1-step LoRAs are unusable on both.

A higher clip skip (in A1111 terms; lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail in CLIP (not to be confused with detail in the image).

I hope that having a comparison was useful nevertheless.

2-step LoRAs @ 2 steps are also very bland; 4-step LoRAs @ 4 steps, same.

ComfyUI is usually on the cutting edge of new stuff.

Like 2.5-5 most of the time.

Just my two cents.
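The "Under 4K" chain above is really just a composition of stages, each taking an image and handing it to the next. A toy sketch of that composition, with hypothetical stage functions that only record what they would do (in ComfyUI each stage is a group of nodes):

```python
from functools import reduce

# Hypothetical stages; each takes and returns an "image" (here a dict we
# annotate so the flow of the pipeline is visible).
def base_gen(img):    return {**img, "steps": img["steps"] + ["base SDXL gen"]}
def inpaint(img):     return {**img, "steps": img["steps"] + ["face/hand inpaint"]}
def upscale(img):     return {**img, "steps": img["steps"] + ["UltraSharp 4x"]}
def second_pass(img): return {**img, "steps": img["steps"] + ["tile-ControlNet KSampler"]}

pipeline = [base_gen, inpaint, upscale, second_pass]
# Thread the image through every stage in order.
result = reduce(lambda img, stage: stage(img), pipeline, {"steps": []})
print(result["steps"])
```

Thinking of the graph this way makes it easier to lift one stage (say, the upscale leg) out of someone else's workflow and splice it into your own.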
I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to Civitai and other sites. I'm hoping to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I'm hoping someone can point me toward a resource for finding some of the better-developed Comfy workflows.

You can just use someone else's SDXL 0.9 workflow (the one from Olivio Sarikas' video works just fine) and just replace the models with 1.0.

I know you can do this by generating an image of 2 people using 1 LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them.

Jul 9, 2024 · How the workflow progresses: initial image generation -> hands fix -> watermark removal -> Ultimate SD Upscale -> eye detailer -> save image. This workflow contains custom nodes from various sources, and they can all be found using ComfyUI Manager.

I have a question about how to use Pony V6 XL in ComfyUI. SD generates blurry images for me.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.
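The workflow-sharing services mentioned throughout this thread build on ComfyUI's built-in HTTP API: a running instance (by default on port 8188) accepts a JSON body of the form `{"prompt": graph}` at the `/prompt` endpoint. A sketch, assuming a local server; the demo at the bottom only builds the payload, so nothing needs to be running:

```python
import json
import urllib.request

def queue_prompt(graph: dict, host: str = "http://127.0.0.1:8188") -> bytes:
    """POST an API-format workflow graph to a locally running ComfyUI
    server's /prompt endpoint; the response JSON includes a prompt_id."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(f"{host}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Build the payload without sending it (no server required for this check).
demo_payload = json.dumps({"prompt": {"1": {"class_type": "KSampler", "inputs": {}}}})
print(demo_payload)
```

This is the same graph format you get from ComfyUI's "Save (API Format)" export, so a saved workflow JSON can be queued in a loop for batch runs.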