File: 17674645852890807.png (4.8 MB)
ComfyUI Apprentice portable installer v4 (torch 2.9.1 / cuda 13.0 / python 3.12):
:::::::: https://files.catbox.moe/rbwu4v.rar
:::::::: https://mega.nz/file/rpQgFZSA#f91Ki6aGzdJZd7OpBooEUe4xiFD8nmVP49GWU1WxICw (mirror)

NOTE: Updated installer, hopefully the huggingface_hub error is now fixed. Added SVI loras and workflow into the pack. Also flash-attn and sage3-attn (sage3 only works on 50xx cards)

>What is it?
My version of ComfyUI portable. Aimed at new apprentices who wanna jump into the AI world. Includes:
-Installs the latest ComfyUI @ python 3.12, torch 2.9.1, cuda 13.0 (tested against v0.18.1-3-gda6edb5a when making this thread)
-Auto-downloader for all the files needed to run my WAN, SDXL, ZiT and QWEN workflows
-Installs the most commonly used custom nodes
-Installs Triton, Flash, Sage 2 & 3, and Nunchaku attentions
-My workflows for various purposes
-ReActor patched for NSFW
-LosslessCut program for video merging
>>
>How
Download the package and extract it on a fast drive that has a good amount of free space (100gb at least). Then
----Run 1stAIdBag_install.bat
--------Install ComfyUI and Manager. This is the bare minimum you need to install. You can install a workflow's missing nodes via the manager. Some user input is needed during this.
--------Check if your ComfyUI runs after this. If it does....proceed
--------Install custom nodes. This installs all the custom nodes my workflows need, plus nodes commonly used by other workflows (this might take some time)
--------Install attentions. Highly recommended, as they give a speed boost to generations.
--------Download the essential WAN files and one of the WAN GGUF models (Q8, Q5 or Q2). The better your rig, the bigger the Q to get
------------NOTE: downloading these models takes time as they are several gigs each
--------OPTIONAL: Download QWEN, GIT, or SDXL stuff if you wanna play with image workflows

Run comfyui. On the first run it will take some time to load, as some nodes like reactor download the models they need. Once loaded, open the latest version of the workflow and re-select all the models, vae, clip and loras. This is something you need to do every time you open up a new workflow, so the files point to locations on your computer. See the vid: https://screenrec.com/share/mkgZDHG5uI

After that, check other settings, type in prompt and hit run.

Buy me a beer: https://ko-fi.com/1staidbag
>>
>Already got an old install?
----Backup (cut) your models and user folders somewhere
----Delete the old ComfyUI_windows_portable folder. Then do as guided above
----Paste the models and user folders back into your fresh ComfyUI folder after the install

>What do i need?
-Windows, an RTX card, preferably at least 8GB of VRAM, 32GB of RAM, drivers up to date
-You can run stuff on lower specs, but then you'll need lower models, lower resolution, lower clip duration, etc
-There is some info/help for running stuff on ATI cards in prev threads
-NOTE1: Major change to the pack: nothing is pre-installed, all nodes are now installed via 1stAIdbag.bat. The pack is now much smaller, 7 megs

>If/when you need help....
A) take a screenshot of your whole workflow (and the log console window if possible)
B) state which workflow you are using (i'm not going to try to fix others' workflows for you)
C) state your rig's specs

>Where to get loras?
https://civitai.com/search/models?baseModel=Wan%20Video&baseModel=Wan%20Video%202.2%20I2V-A14B&baseModel=Wan%20Video%202.2%20T2V-A14B&baseModel=Wan%20Video%2014B%20t2v&modelType=LORA&sortBy=models_v9%3AcreatedAt%3Adesc
(you need to make account and enable show adult content)
https://civarchive.com/search?type=LORA&base_model=Wan+Video+14B+i2v+480p&is_nsfw=true&sort=newest&page=2
(old 2.1 loras that got removed from civitai due to their nsfw bullshit policy)

Old thread(s)
https://archived.moe/r/search/subject/ComfyUI%20portable%20for%20newbies/
>>
Hey AId Bag, thanks for all your work and for teaching me this magic. Any suggestions for audio or talking or lip sync using comfyui?
>>
>>20483176

I'm not Mr. first responder (1st aid)

..but some genners use LTX; it's easy and fast but very inconsistent with any type of sex gens, and requires a bit of GPU power. Others use SVI or S2V, and there are some tricked-out workflows that can do it with Wan and possibly even add sound to existing gens.. but I don't have experience with any of those.
>>
>>20483225
I have heard of this but cannot find a good local download? i keep ending up back at the website and it wants me to pay. can u share?
>>
>>20483231

For LTX? Just go into the comfy templates, click on LTX 2.3, download the models and read the boxes where it tells you to put them. At least that is what I did.

I'm sure there are a zillion tutorials on reddit/utube/huggingface too.
>>
>>20483251
oh shit thanks, didn't realize it was just a comfyui thing when people talked about it.

thanks
>>
>>20483176
well i use ltx 2.3 (locally; if wan ggufs run, you can run ltx as well)

https://huggingface.co/QuantStack/LTX-2.3-GGUF

like anon said above....it has pretty limited nsfw support off the bat. But let's say you start with some nsfw, load some ltx nsfw lora, then prompt the woman to moan or something like that; that works ok. you can make 30sec clips, quality is ok, and it is much faster than wan.

but getting a nude man to appear in frame and shove his penis into a talking woman's mouth...that is a bit trickier task

for wan 2.2 audio, there is the s2v (sound2video) model....but this kinda works the other way around: you first need to make the soundfile and wan will try to gen a video from that with help from your prompt
>>
File: fl15153153153.mp4 (897.1 KB)
>>20483307
no i just want my school teacher to talk dirty while she uses the fleshlight. nothing too crazy
>>
>>20483328
i would guess that is possible....well maybe not that fleshlight part, as i doubt the model or loras understand what that is....but you could use endframes or middleframes to force that transparent fucktube in
>>
File: Audio_00322_.mp4 (1.2 MB)
>>20483328

That is definitely doable. Having to gen a penis from scratch is where LTX usually has issues, but if the penis is already in the image it should not be a problem.
>>
File: 256151515.jpg (211.7 KB)
>>20483366
Yesss i want to make her talk dirty before she gives head. ill have to work on it.

thank you
>>
File: Audio_00128_.mp4 (1.0 MB)
>>20483328

Will take a bit of finagling with the prompts/loras and playing with CFG/multiple runs, but I got this in about 5 min. Also I used a cocksleeve in the starting image.

LTX isn't like Wan; it needs you to describe all objects/motions like you are explaining them to a 5 year old when you prompt, and I didn't feel like going into all that for an example vid without the cocksleeve already in the photo (you will likely need to describe it VERY well if it's not in the initial image)
>>
File: ltx2_00159-audio-.mp4 (1.6 MB)
>>20483328
>>
>>20483169
hey, just wanted to say thanks, you're an actual fucking hero
>>20483366
lmao
>>
>>20483617
This is awesome!!! THANKS
>>
>>20483617
thought maybe the workflow was baked in but it's not? care to share?
>>
Is flash attention not supposed to work with a 3090? I'm getting multiple lines of this when generating:

Flash Attention failed, using default SDPA: schema_.has_value() INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\pytorch\\aten\\src\\ATen/core/dispatch/OperatorEntry.h":84, please report a bug to PyTorch. Tried to access the schema for which doesn't have a schema registered yet
>>
>>20484390

Cut and paste that line into google and it gives you a few things to try
>>
>>20484517
Looks like it was the RES4LYF node pack, I disabled it and the spam stopped. Apparently it's incompatible with Flash Attention.
>>
Are you guys using a normal text encoder or an uncensored one?
>>
>>20484963

For LTX? I downloaded like 100gb of various alt models/text encoders/Gemmas but have been using the default ones linked in the template with no problem, just using NSFW loras.
>>
>>20484390
stupid question, but did you install it? also, are you using my pack or some other comfyui desktop/portable?

flash attn should work ok on almost every card. Before sage, flash was the best speed-up attention to use.
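A quick way to check whether flash-attn even imports in the portable env (a sketch; run it with the pack's embedded python.exe, and note `flash_attn` is the pip package name, nothing here is specific to this pack):

```python
def flash_attn_ok():
    # an ImportError / DLL load error here usually explains the
    # "Flash Attention failed, using default SDPA" console spam
    try:
        import flash_attn
        return True, getattr(flash_attn, "__version__", "unknown")
    except Exception as exc:
        return False, str(exc)

ok, info = flash_attn_ok()
print("flash-attn usable" if ok else "flash-attn import failed", "-", info)
```

If the import fails here, the ComfyUI-side errors are just downstream of that.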
>>
>>20484963
for wan/ltx the default text encoders work just fine...uncensored versions have some finetunes/edits that make them understand some nsfw phrasing better....but sometimes it makes things worse too. tldr; they are not mandatory for making nsfw content
>>
>>20484963
>>20485307
thanks, then i'll just stick with the default one
>>
>>20485296
Your pack. I didn't install anything extra besides your pack and a few checkpoints and loras.
>>
>>20485479
did you install attentions via the installer?
>>
>>20485609
Yeah, the only steps I skipped were from SDXL to Losslesscut since I don't use those. I solved the issue by disabling RES4LYF.
>>
>>20486117
ok....did you launch with the flash attention bat?
also does sage work ok for you?

also with res4lyf disabled, does flash load ok? (can be seen in the console)

i dunno why res4lyf would be an issue, but the more info i get, the better
>>
Hi, trying to make this work for the first time but not having a lot of luck.
No errors, I just followed the guide and tracked down all the resources for the WAN_SVI2_pro_1.2 workflow (the guide said most recent, but they're not ordered by date so I guessed), tried gguf 2 and 5, but it tried for an hour and then died.

I half expected something like this because I'm running on a basic 3060 12vram + 32 ram
The thing is I'm a total newbie so other than redownloading the gguf thing I have no idea how to lighten the load to make it functional on my potato.

I don't wanna inconvenience you with spoonfeeding, I can normally poke around until I get it right on my own, but I can't tinker and improve if it won't even spit out any results.
Could I get some tips or perhaps an example of a lightweight workflow to use as reference?

Oh and this LTX thing, is it just faster or potentially lighter? would that be better than WAN for vramlets such as myself?

Thank you for the high effort guide and resources btw, Aldbag you're the goat for doing all this
>>
>>20486144
Yeah, launching with flash attention bat.

I thought sage attention was only for 40/50xx, I have a 3090, will it work?

Yes, disabling res4lyf fixes flash, there is a guide to fix it but since I'm not using it I just disabled it: https://note.com/198619891990/n/n1c7082aa1c6e

Apparently some of the stuff res4lyf installs breaks Flash Attention by default.
>>
>>20486163
well you should get shit running with those specs...might be slow, but it should work

if you have discord, lemme know and i'll contact you; much easier to troubleshoot/help you out (and not spam the thread)

>Oh and this LTX thing, is it just faster or potentially lighter? would that be better than WAN for vramlets such as myself?

yes and no....the dev version of the ltx model is about 40 gigs, plus the text encoder and loras.....gguf versions are doable on local machines (12gb vram / 32gb ram). They take a bit to load but the gen is much faster....the time it takes to make 5sec in wan = 20 secs in ltx (but again, for nsfw purposes ...ltx is not there, yet)
>>
>>20486193
sage 2.1 works ok with rtx30xx and rtx40xx cards

sage 3.x is for blackwell chipset cards (rtx50xx)

sage 1.0 is for when everything else fails, and for the rest (rtx20xx, rtx10xx)

>>20486193
also, what version of comfyui?....heard that the latest major update fucked up some nodes. I will make a fresh pack when this thread goes beyond bump, make a fresh install myself and see what works or not
>>
>>20486259
Got it, I will try 2.1 then, thanks.

As for the version, the one your script installed is ComfyUI 0.19.3.
>>
>>20486269
yeah, the installer always installs the latest repo

might change that in the future.....

>>20486269
if you installed attentions via the installer, sage 2 (and 3) should be installed
>>
>>20486210
Thanks for the help
I just need a few quick pointers though. I noticed it stalls hard during the tripleksampler process; i lurked and read that someone with a 3070 (8gb?) swapped the ksampler for a different (moe?) version. I don't know if it helps quality or speed though.

Could you just tell me out of all the workflows in the portable pack, which one would be the lightest for tests? If I can get it running I'll figure it out.

perhaps someone with a similar shitrig has some input.
>>
>>20486319
just disable the 3rd sampler....set the output reso to 360-480, just to see if you get anything going

also post a screenshot of your workflow...i've made about a dozen so far, so it's hard to pin down which one you are referring to
>>
>>20486354
Ok I'm a retard, so sorry. I didn't realize "output longest side in pixels" had to be changed to set the resolution.
It managed to produce something, and lo and behold the problem: it was set to 5 seconds but the workflow extends 3 times, so it was trying to gen 20 seconds at 720p; that's why it was dying.

I'm sorry, I don't even know how to screenshot this massive thing in a legible way. Looking around, although I don't think it's necessary anymore.
It's pretty much the default WAN_SVI2_pro_1.2 workflow.

Now I just need to set it up for 5 seconds, or 10 seconds with 1 extension. Is it as simple as bypassing the extend nodes/deleting them?

Thanks for your help, seriously, I've never even used comfy prior to this, just the other baby ones for images.
>>
>>20486532
>so it was trying to gen 20 seconds at 720p, that's why it was dying.

not correct....you are making 5 sec clips four times (they are all separate gens)

making one 20sec clip at 720p in wan 2.2 would kill it for sure....

for screencaps....search in your windows search...type "snipping"; that gives you the tool to take a screenshot of an area of your choice. use the mouse scroll to zoom your workflow out to where most things are visible (output stuff not important), all text readable, then snip it
>>
File: workflow.jpg (3.4 MB)
>>20486573
ok it was right click > workflow image > export > png
just had to resize and compress

You've done enough, but is there an easy way to disable extend 1 and extend 2?
I'm assuming the purple extends at the bottom are disabled?
I'm confused about very basic things for now
I just want simpler shorter gens so I can tinker and learn a bit faster.

Would it be better to look for a different workflow or should I try adapting this?
>>
>>20486651
OH my fucking god I just saw the nodes for "enable extend"
Sorry disregard this I suck dicks.

Thank you again for setting all this up, no more questions, ty
>>
>>20486656
yeah...if you just want basic 5sec clips, no need to use the SVI workflow.

try my wan 2.2 1.5d...or another basic wan workflow you can find on civitai...you have all the needed components now; choose the workflow you find easiest to use

another note; looking at your prompt....there is not going to be any fingering unless you load some fingering lora
>>
>>20486670
>looking at your prompt....there is not going to be any fingering unless you load some fingering lora

Oh that's just the default prompt, I haven't gotten to that yet. I did track down the loras anyhow, so plenty to play with

Thanks again for all you do
>>
>>20486679
also note, when you are dealing with the 5 second limit....it's impossible to have the woman undress, spread her legs, and finger herself all in one clip

this is where the SVI workflow comes in.....or make one 5sec clip where she does X, save its last frame and use that as the new input for your next gen where she does Y (...etc....etc)
>>
>>20486720
Yeah, I caught on that it's broken down into timed stages and stitched together.
I disabled 1 extend and it gave me 13 seconds (12.5?), which is agreeable.

Can non-SVI handle 10 seconds or so? I read somewhere that the longer it goes the more it degrades.
>>
File: 2026-04-27 042908.png (103.6 KB)
>>20486856
generally wan does 5sec (90% of loras are trained from 5sec samples)...after that, no matter what you prompt...wan does what it wants; most commonly it starts to reverse the action (aka it goes back to a point where it knew what to do)

example prompt "woman opens her shirt, revealing her nude tits"

on 5secs....that works fine, she will undress and the scene ends

on 8secs....she undresses within the first 5sec, then spends the last 3secs putting the shirt back on

>I disabled 1 extend and it gave me 13 seconds (12.5?) which is agreeable.

note the settings....just one clip (aka just the initial scene) at 5sec, interpolation 2, speed 1.0 should be 5sec....if speed is set to 0.5 --> 10sec and so on
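The length arithmetic from that note, as a tiny sketch (the rule as described here: the speed setting stretches playback, so 0.5 turns a 5sec gen into 10sec of video; interpolation only adds in-between frames and doesn't change the length by itself):

```python
def output_seconds(base_seconds, speed=1.0):
    # slowing playback (speed 0.5) stretches a 5s gen to 10s of video
    return base_seconds / speed

print(output_seconds(5, 1.0))  # 5.0
print(output_seconds(5, 0.5))  # 10.0
```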
>>
>>20486930
to continue (and confuse you more)

there are ways to make single long clips;

this one is using the last frame option

you provide an image as input and a second image as the last frame.....this kinda forces wan to do the prompt within the time you give it. It knows where to start (FIRST FRAME)...it knows what to do (PROMPT)....guiding it all toward your LAST FRAME

one good example of this: if you have a pic where the woman is sucking the penis, use that on BOTH the first and last frame, prompt it "woman sucking penis, blowjob" (with some lora loaded ofc), set the length to 10-15sec....it keeps the face mostly unchanged as it has the end frame reference
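The save-the-last-frame trick can be scripted too; a sketch assuming ffmpeg is on PATH (the filenames are placeholders, not files from the pack):

```python
import shutil
import subprocess

def last_frame(video_path, out_png):
    """Grab the final frame of a clip to use as the next gen's input image."""
    if shutil.which("ffmpeg") is None:
        raise RuntimeError("ffmpeg not found on PATH")
    # -sseof -0.1 seeks to 0.1s before end-of-file; -frames:v 1 keeps one frame
    subprocess.run(
        ["ffmpeg", "-y", "-sseof", "-0.1", "-i", video_path,
         "-frames:v", "1", "-update", "1", out_png],
        check=True, capture_output=True,
    )

# usage: last_frame("she_does_X.mp4", "start_for_Y.png")
```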
>>
>>20486656

Alternatively, I think you could just "bypass" the extend nodes, then re-enable them when you want to use them.
>>
Bump for bag
>>
>>20484124
sorry for the long delay, missed your post. im using this:
https://civitai.red/models/2354193/ltx-23-all-in-one-workflow-for-rtx-3060-with-12-gb-vram-32-gb-ram

though i modified it quite a bit: added a nag node to make negatives work (ltx likes to insert subtitles that look like nonsense; this fixes that). Also added the option to use different samplers and schedulers (the default ones are the fastest though), plus an option to add a 3rd pass to do some additional upscale.

my version:
https://files.catbox.moe/39xii7.json
>>
up
>>
AIdbag, thank you for your service. everything has helped me a lot in understanding things better, even though i've still barely scratched the surface. also big props for keeping the general running; it's great to always see it up when i am interested in reading more.

i checked out the workflow tab a bit and there's a lot of workflows; what are the differences and which should i focus on?
>>
up we go
>>
>>20490882
the latest version is always the better one (in my eyes); i know some like to use older ones. there are also some workflows for image creation...like qwen, sdxl, z-image.....if images are your thing, check them out.

the svi one is the latest i've made, and what i mainly use...it might look confusing if you are new to comfyui, so start with simpler workflows
>>
Anyone know where to find a fleshlight lora?
>>
Bumps
>>
I need help bros. I got a pc two weeks ago with a Ryzen 7 9700X & RTX 5060 Ti 16GB. I've tried multiple simple workflows from civit and haven't gotten any good videos. do you guys have any guides or videos I can watch to better understand this shit?
>>
>>20483169
can a 1060 6gb even try?
>>
>>20494768
Post an image of your workflow so ppl can see the settings; also post a sample of the kind of video it gens
>>
>>20488885
any chance of reup?
>>
>>20495504
https://limewire.com/d/SnGDT#XKAZUYwmHz
>>
Hey bag, what's a good thing to prompt when you want the woman to keep her eyes half-closed? Like sexily looking at something.
>>
File: 15742_00001.mp4 (1.6 MB)
>>20496199
try "languid look", "seductive gaze", "heavy-lidded eyes"
>>
>>20496261
Thanks fren.
>>
File: 1777518267031903.mp4 (1.8 MB)
Can someone do me a solid and identify this lora?
All the sex loras I tried are full body, and the penetration close-up seems nicer for lower res stuff.
>>
File: 1777437950622385.mp4 (518.6 KB)
Another example.
I have seen some of you use similar ones. A name or a similar one is just fine. Thanks
>>
>>20497398
>>20497400
those are the "blink" loras from iGoonHard, you can find most of them on civarchive
>>
>>20497458
I half suspected as much, but I could only find cowgirl and missionary. Unless this is the missionary "with her legs up" sub-variant.

They're also hard cuts, while igoon's blink loras do this unique "hyper-zoom" effect, so maybe not iGoon.
I just want to know what lora creates a penetration focus position
>>
>>20497500
>search/models?baseModel=Wan%20Video&baseModel=Wan%20Video%202.2%20I2V-A14B&baseModel=Wan%20Video%202.2%20T2V-A14B&baseModel=Wan%20Video%2014B%20t2v&modelType=LORA&sortBy=models_v9%3AcreatedAt%3Adesc
>>20497458
>>20497400
>>20497398
nvm, I'm 90% sure I figured it out. It's smash cut + side missionary; both seem to be on civit. Testing, but I'm pretty sure this is it
>>
>>20497398
Middle part is smash cut, last part is most likely igoon's creampie lora...... Note that you need an svi workflow to make these kinds of gens
>>
>>20498394
I tried with svi, but I'm assuming smash cut works like blink? (triggering during sections, not in between).

I set up different things, but they all came out as corrupted, random, garbled hallucinations. I don't know if it's the t2v missionary lora (although I saw examples on civit that were an i2v smash cut) or the cinematic smashcut civit lora being the wrong one.
I have no idea what I'm doing.

Could I bother you or someone else for a catbox gen example or workflow of a functional smashcut gen, so I can take notes on what I'm doing wrong?
If it includes something like >>20497398 >>20497400, some penetration focus scene like this, it would be 10/10, but a smashcut reference is enough.

My card is shit, 15-20 mins a gen on svi lol; trial and error is a bit hard. Thanks either way
>>
>>20498773
i meant if you want smashcut PLUS creampie seamlessly in the same gen, then i would use svi

but yeah, smashcut works just like igoon's blink loras (you don't need any other loras with it as it's kinda a one trick pony)
>>
File: 2026-05-01 014024.png (472.2 KB)
>>20498911
the general prompt is this
>>
File: 15742_00001(1).mp4 (1.2 MB)
>>20498964
>>
>>20498966
That's a nice gen!

How much time does it take, and what are your specs? I will follow your tutorial now; rocking a 3080 12gb, so I'm kinda limited on time/model I guess
>>
>>20498964
Could I get your workflow to play with?
>>
>>20498966
>>20498964
I can't thank you enough Ald, you're the MVP
>>
>>20498982
well, time depends mainly on 3 things; output resolution, the length and how many steps you use

i'm on a 4070 (12gb) with 64gb of ram....i would say a 5sec clip at 720p using 8 steps with lightx loras and sage attention gives a speed around 20-25 sec / step....note that this is only the generation speed....a good amount of time goes to loading and unloading the models into your card (this is where more ram comes in handy), then there is decoding that eats time....id say 3-5 mins per 5sec clip

on longer svi gens i usually use a bit lower resolution
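A back-of-envelope estimator using the numbers in this post (8 steps at ~20-25 sec/step, plus an assumed couple of minutes of load/unload and decode overhead; the overhead figure is a guess, not a measurement):

```python
def gen_minutes(steps=8, sec_per_step=22, overhead_sec=120):
    # sampling time plus (assumed) model load/unload + VAE decode overhead
    return (steps * sec_per_step + overhead_sec) / 60

print(round(gen_minutes(), 1))  # roughly 4.9 min for a 5sec 720p clip
```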
>>
File: 2026-05-01 021918.png (908.9 KB)
>>20498911
same smashcut idea but with happy ending using 3 scene svi
>>
>>20499039
>>
>>20499041
>>20499009
This is me asking for the workflow: does this help with making portrait photos into sex scenes?

A lot of what I run into: I use selfies as reference photos and it takes a while for the scene to transition, or it doesn't at all and the guy is fucking her belly while she just stands like the photo and smiles.
>>
>>20499009
they are in the pack...no need to install, unpack, then unpack the addon.rar, workflows are in user folder

https://limewire.com/d/xBOCD#jb5f0eB6Ir (workflow embeded)
>>
File: 1777453541961346.mp4 (3.6 MB)
any ideas why my gens look cartoonish? especially the eyes are usually fucked up.
suspecting upscale? or maybe it's cause i am on Q4? But I've seen gens by people claiming to be on Q4 which were way more crisp
>>
>>20499045
if you use selfies as your input and want to see action in 5sec, then hardcut/blink loras are your only option.

for example, some i2v missionary lora assumes you use an input image where that action is already happening, where it sees the penis and the vagina....those loras basically just add the motion to your input.

>>20499054
i don't like to guess without seeing the settings...many things might cause that; most common is that you output at too low a resolution and the model has trouble making the details (a low quality model like q4 has an effect too). try cropping your input to a 1:1 ratio, increase your output resolution, and increase the steps some......even if you use lightx loras that can make gens with only 4 steps...try 8-12...whatever your card can handle

at the end of the day, it's a question of how much time you are willing to wait
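The 1:1 crop tip as a tiny helper, if you'd rather script it than eyeball it (a sketch; the returned box is the left/top/right/bottom you'd pass to e.g. PIL's `Image.crop`):

```python
def center_square(width, height):
    # largest centered square that fits inside a width x height image
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return left, top, left + side, top + side

print(center_square(1280, 720))  # (280, 0, 1000, 720)
```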
>>
>>20499076
>if you use selfies as your input and want to see action in 5sec, then hardcut/blink loras are your only option.

would your suggestion be to do t2i first then? If so, do you have a t2i workflow to play with too? I made my own, but anytime i created a photo either her face would overlap the dick for like a BJ, and anything near her head would get blurry. Also if the image had a tongue out or cum, the reference's face looked glued on with transparency issues.
>>
>>20499076
unfortunately the question is rather when am i getting OOM errors with my card. a 12gb 3060 isn't built for genning
>>
>>20499104
thats why i stated "whatever your card can handle"

you could at least try Q5 @ 640p, 5secs....should be doable. How much ram do you have?

(my vid is using q8)
>>
>>20499084
did you mean i2i first?.....i mean you could do t2i with sdxl or flux and then faceswap your girl into it, but new image models like klein/qwen can do both and the quality looks pretty ok
>>
>>20499115
Oh, I didn't know that was a thing; I am new to this. Do you have an i2i workflow then?

The problems I ran into might have been because I was using t2i techniques
>>
File: 2026-05-01 034458.png (831.4 KB)
>>20499115
example
>>
>>20499178
Is that workflow available anywhere? that looks a lot better than what I have achieved
>>
>>20499112
just 16 gigs. I started with Q8 and could do like 2 gens, then ran oom. Afterwards I wasn't able to run a single Q8 without running into OOM, even with unloading and all. Had to set up Comfy from scratch due to some issues, and can gen Q4 5 second vids in about 10 mins now using lightx.
>>
up
>>
What's the best way to faceswap? Tried reActor for I2I, it's garbage. Wananmita for V2V, it's slow as fuck and not convincing.
>>
>>20500466
Was meant to say
>Wan Animate

Also, best nudify workflow ?
>>
>>20499494
https://limewire.com/d/bL6hh#gDclCJwdTN
note that my klein workflow is pretty basic...there are plenty of klein workflows on civit that have more options.

>>20500468
>nudify workflow ?

This seems to confuse ppl a lot......a workflow is NOT a template to do just ONE certain thing....a workflow is a scheme where every element needed by the MODEL is loaded and chained up in the correct way. You make the action (undressing, blowjob, dildoing, cumshots..etc etc) by A) loading additional loras via the loader provided in the workflow B) prompting the action.

once you've done your gen....you can save your video somewhere, and when sometime in the future you want to make something similar, you can just drag and drop the video file into your comfyui --> it loads the workflow and all the settings and prompts
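This works because ComfyUI embeds the workflow JSON in its outputs' metadata (for PNG images it's a text chunk keyed `workflow`; video nodes stash it in the container's metadata instead). A stdlib-only sketch for the PNG case, with a placeholder filename:

```python
import json
import struct

def read_png_text(path):
    """Return the tEXt key/value pairs from a PNG file."""
    text = {}
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            raise ValueError("not a PNG")
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, val = data.partition(b"\x00")
                text[key.decode("latin-1")] = val.decode("latin-1")
            if ctype == b"IEND":
                break
    return text

# usage: wf = json.loads(read_png_text("ComfyUI_00001_.png")["workflow"])
```

Dragging an output onto the canvas does essentially this read for you.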
>>
>>20500794
>workflow is NOT a template to do just ONE certain thing
I mean it can be though, right?

>once you done your gen....you can save your video somewhere and once somewhere in the future you want to make something similar, you can just drag and drop the video file into your comfyui --> it loads the workflow and all the settings and prompts

When people ask for a specific workflow that does _____ (ie a nudify workflow), isn't that what they are asking for? Something already set up with those loras and prompts so they don't have to tweak settings, or only slightly adjust them, and can just use whatever image they want to animate?

Is there better terminology for it when looking for a premade workflow that does a specific thing?
>>
>>20501125
>I mean it can be though, right?
of course it can, if that's what you want; set it up once and do shit over and over again...

But here comes the whole idea behind why i've made these threads/tools/workflows over the past year, trying to get ppl to move away from those token-driven template sites that do just that....Because you can do it on your own, tweak it, bend it, without restriction; just use your imagination, load up different loras and prompt everything differently on every gen you make. world is your oyster.

>When people ask for a specific workflow that does _____, (ie a nudify workflow) isn't that what they are asking for? Something already set up with those loras and prompt so they don't have to tweak settings or just slightly adjust settings, and then can just use whatever image they want to animate?

this works if the asking party has a similar image to what the "template" uses...they would at least know how to edit the prompt. On those token based nsfw sites there is an image analyzer script that checks your input image (is it full body, is the face towards the side/front, etc) and picks some premade prompts based on that

people want stuff to be easy, i get that...But that is never the case in the comfyui/ai world, and that is what makes it fun (and frustrating)
>>
Is this just for videos?
>>
>>20501161
>you can do it your own, tweak it, bend it, without restriction, just use your imagination, load up different loras and prompt everything differently on every gen you make, world is your oyster.

Oh sure, I get that, but at the same time it is nice to have an example of where to start when you're looking to do a certain thing, like breast expansion or maybe kissing or whatever, and then you can change it to suit your needs. Maybe tweaking settings on an example workflow will lead to inspiration to do something different/better than what was in the original workflow. A lot of times a general purpose workflow with no guidance just makes things confusing, and you maybe don't know where to start. An example at least gives a starting point that you can test with your own images.

Anyway, didn't mean to argue about this or anything lol, just wanted to point out that the idea of a premade, specific workflow can be really useful, especially for beginners. Appreciate what you've done, man. I'm sure you've helped hundreds of would-be genners.
>>
can i install both qwen edit and qwen edit nunchaku, or will that fuck it up?
i want to try both
>>
Do loras not marked as 14b loras work with 14b?
Do I need a special workflow or some other setup?
Sorry for the dumb question
>>
taking a shot at doing this. I downloaded comfy ui and manager, custom nodes, attentions, wan (q8), losslesscut etc.

When I run comfy UI, all of the folders for nodes, models, workflows etc are empty. any idea where i've gone wrong? total amateur here
>>
Hello aIdbag, thank you for your service.
Have you ever considered sharing a repository of loras? Or even a list of preferred loras? Or has anyone else?
I personally like to browse around a bit, but I always have the feeling I'm missing out on some good ones that experienced users are using. To be honest, I'll usually only download the most popular ones since I have no clue how to tell which lora is good
>>
>>20501197
videos, images, audio, etc

>>20501271
yes, you can install both if you wish

>>20501300
no....the model dictates which loras to use....if your model is WAN 2.x 14B.........then use 14B loras for wan 2.2 or wan 2.1. wan 2.2 has a smaller version model called 5B, so for that you would need to find 5B loras (and there aren't that many)

>>20501336
which folders are empty....do you mean in the comfyui sidebar view or inside the comfyui folder? if they were empty, i doubt you've gotten comfyui running....
>>
Number 1, thank you for the guide.
Number 2, I haven't really used tools like this before. It seems that all the different workflows only give me errors for missing models. Which of the workflows can I use without actually adding any of the optional stuff? Where do I add my downloaded models?
>>
>>20502040
Did you use my installer? It automatically downloads stuff into right places. Addition stuff you download yourself, mainly loras goes into comfyui/models/lora/ folder

Note that every time you open a new workflow, you need to reselect all the model, vae, clip, lora etc files so those files point to locations on your computer
>>
>>20502040
Also every workflow needs some kind of model, clip and vae (or a checkpoint, which basically means a model that has the clip and vae built in)
>>
>>20502070
Yes... It turns out I was just stupid and didn't read everything until the end. I just finished my first 3 second video. Do you have any recommendations for workflows/models and maybe even example prompts etc that are easy to start using for blowjobs, maybe even two cocks at once?
>>
>>20502070
And again, thank you so much for taking the time to do this. Another question I have would be, is it possible to simply change the models in the workflow you provided, or is it specifically altered to the exact models supplied?

I see tons of things needing LTXV 2.3 for example, maybe it would be possible to use something like that with some... god damnit I am too bad at this. I need a base model (LTXV, then another thing on top of that) for this one as an example? https://civitai.red/models/1811313?modelVersionId=2747549
>>
>>20502100
My bag has no ltx workflow or files included yet....next version will.

I've posted my version of ltx separately in this thread.

But I think I get what you are asking..... You can't use Wan workflows with ltx models... Or the other way around
>>
Is there maybe a summary of each of the workflows and some example prompts?
>>
>>20502094
>>20502100

Civitai is a good resource for some things. Find a vid like what you want to achieve and copy their prompts/loras. Not exactly a workflow but gives you an idea where to start.

For Wan 2.2 you can use other Wan 2.2 base models like the template model, SmoothMix, FastMove. For LTX you have to use LTX base models (think there are like 3 or 4 Daiswa, MrX and the template one).

Just a warning, people have a love/hate relationship with LTX. Personally I think the longer vids and sound trumps the issues but be prepared to spend a lot of time tweaking and re-running gens on it as it is absolutely terrible at making dudes.

The new UltimateBJ-All in one Lora seems to help with the mutant looking guys/cocks but the motion and anything generated not in the start image can be.. iffy.

Some people's workflows are pretty tricked out and use specific nodes, so when Comfy pushes an update it could break everything if it is a node that Comfy decided to mess with. Read through what the workflow needs before installing, so that if/when a Comfy update breaks it you have an idea where to troubleshoot (or just do a full reinstall I suppose)
>>
anyone know if there is a way to make a batch of videos from all pics in a folder? instead of having to choose each image individually?
>>
>>20502375

I don't do it myself but here are a few ideas

https://www.reddit.com/r/StableDiffusion/comments/1o0jftx/wan22_generate_videos_from_batch_images/

I'm sure if you do a deeper dive into huggingface/reddit/youtube there are people that have figured it out.
>>
>>20502375
how many images are you going to load in your queue? are you making vids or pics? i would want to edit the prompt for each input image....but there are some custom nodes that allow you to load folders, then you would need to edit the batch count on your sampler to match the # of images
>>
>>20502375
I use the Load Image Batch node from WAS Node Suite. Just set what order you want the images to go through (I do incremental, but you can do random too), then set whether you want a fixed, incremental or random seed. Then finally set your folder path.

Keep in mind it will use all the same settings/prompt for all those images, but I assume you know that. I don't use any upscalers or anything that alters the input images, so I try to make sure all the images in the folder are the same dimensions. I think you can queue 100 at a time? Not sure if that's a hard limit, but that's more than enough if I'm genning overnight or away for hours at a time.

Oh and you can have multiple folders set up and switch to a different tab in comfy if you want to gen stuff with other settings/prompts
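The incremental folder-batch idea above can be sketched in plain Python. This is only an illustration of how a "load image batch" node typically walks a folder in stable filename order, not the actual WAS Node Suite code; the function name and extension list are my own:

```python
from pathlib import Path

# Extensions a batch-image loader would typically accept (assumption).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def list_batch_images(folder, order="incremental"):
    """Collect image files from a folder the way an incremental
    batch-load node would: filtered by extension, sorted by name."""
    files = [p for p in Path(folder).iterdir()
             if p.suffix.lower() in IMAGE_EXTS]
    if order == "incremental":
        files.sort(key=lambda p: p.name)  # stable filename order
    return files
```

Each returned path would then be queued with the same prompt/settings, which is exactly why keeping every image in the folder at the same dimensions matters.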
>>
>>20501555
all the loras most wizards use can be found at:
>>
>>20499178
This is great, thanks!

Any idea on how to do i2i with different people and also use nsfw loras on them? Tried your workflow and by itself it works great, I get good results. Using nsfw loras with it gives pretty bad results, not your problem of course, just trying to figure it out.
>>
>>20500794
Ok, let me rephrase
What's the best tool to nudify I2I
same for best tool to V2V faceswap
I'm not asking for a json file, just point me in a direction
>>
>>20502637
>What's the best tool to nudify I2I
klein 9b or z-image with some general nsfw lora, just open your image and prompt her naked

>same for best tool to V2V faceswap
id say two options; wan animate, or a program called facefusion (has limitations like blowjobs and needs the nsfw patch)
>>
>>20502495
>>20502534
>>20502581
thanks for replying guys. i am doing i2v

my goal is using the exact same prompt for a bunch of different pics
>>
>>20483169
Newbie Apprentice here. I added reactor to your qwen workflow and COCKQWEN as a lora. It seems like the face is good only if she is fully looking at the camera and doesn't do any facial expressions.

Also I have been feeding my gens into my I2V workflow and have not gotten great results. I had to turn off the lightning loras cause I would just get fuzz as my result. Are there any newbie loras you would recommend and i2i prompts that would set them up?

Also correct me if I am saying anything dumb or that isn't best practice.

https://files.catbox.moe/x4tgyt.json
>>
File: Screenie.png (199.0 KB)
Any idea what could be causing this blur/fuzz? I get it in basically every video... Maybe I could reload the json that you first gave me, in case I have somehow messed something up and changed some setting when scrolling or something...
>>
>>20503610
I think I finally figured it out. I was putting the Loras in the Load Lora rather than the high/low noise Lora. At least the latest result came out pretty darn crisp. Too bad I wasted a full night of runs on bad settings
>>
Ok, last post. Just wanted to say a huge thank you to OP for putting all of this together.

I have gotten some pretty great results using the following loras:
https://civitai.red/models/2095336/ultimate-bbc-deepthroat-i2v-wan22-video-lora-k3nk

https://civitai.red/models/1874153/oral-insertion-wan-22?modelVersionId=2121297 (insanely good results with this)

https://civitai.red/models/1972311?modelVersionId=2232429 (good results, but a bit harder to get it right, especially with smaller tits (so just like irl I guess kekw)).

Hoping to try some more out too, will try this one next: https://civitai.red/models/1855263?modelVersionId=2200389

Thanks a ton OP, finally I can stop trying to put shit together in AE/Premiere Pro/Photoshop lol
>>
Newbie here too (well I played a lot on the early days, and now I try to catch up)
I want to do some video (wan / ltx)
Would a 4070/5070 12Gb be enough?
Is a 4060Ti/5060Ti 16Gb better because of more VRAM?
Cause I have a 2060 6Gb, and it was barely enough for images...
I would have to tweak your files a bit cause my gen server is running linux, but if it works I will share my container with you.
>>
>>20502643
Many thanks !
>>
>>20503804

How do you have your workflow setup? Ive tried


https://civitai.red/models/2095336/ultimate-bbc-deepthroat-i2v-wan22-video-lora-k3nk

&

https://civitai.red/models/1874153/oral-insertion-wan-22?modelVersionId=2121297

and have gotten really bad results
>>
>>20504131

Some positions are better with some loras than others. Also, if the penis is not in the initial image you need to prompt for it and possibly add a lora. A lot of gens are trial and error with the prompts/loras because each image has a different position/angle/photo dimensions, so you need to craft your prompt and action based on that.

Don't use 3rd person prompt/lora for a 3rd person shot, etc.
>>
>>20504192

Meant to say don't use 3rd person for 1st person/pov shot.
>>
>>20504192
Could I possibly get your workflow? I want to compare it to mine. I feel it's not prompt related
>>
>>20504195

For what kind of shot? POV? 3rd person? What kind of action? Deepthroat? Facial? Facefuck? Man in shot or no? Setting? Is there stuff in the way? Is it a car selfie? All this determines the prompt/loras.

Post an example of the image you are starting with and I will screenshot a basic example for you. I don't use catbox or whatever and 4chan does not allow workflows with metadata to be posted here.
>>
>>20504206
Here is my current setup. I have done selfies, full body pics, outside, inside, cars, AI Gens, etc

The video is very slow moving, and all she does is just put the tip in her mouth. I will post the video
>>
File: 157666_00029.mp4 (353.6 KB)
>>20504206
>>
>>20504215

Okay got it now. So that is a REALLY bad position for most blowjob loras. Loras determine the action and most blowjob loras are trained from side/pov shot. You are doing a lean-over type deal. Think there is a 69 lora that may be helpful there, and a facefuck lora.

Also that image is hella wonky. Large black border around entire image and photo is vertically stretched a lot.

I would prompt something like "woman opens her mouth and the mans penis is inserted into womans mouth. Man repeatedly thrusts his penis into the womans mouth. He fucks her mouth like a cocksleeve."

Use:
FastMove model (quicker action)
-Facefuck
-ManandlivingCocksleeve (helps with deep thrust motions)

Gimme 10 min to tinker with it.
>>
>>20504213
>>
File: Wan2.2Base-Workflow.png (687.6 KB)
>>20504254

So basically just 3 loras, probably could use just 2.

- Wan NSFW general
- Ultimate Deepthroat V3
- DR3LAY

Prompt was pretty simple but your image is potato quality, like 280x600 or whatever and didn't feel like upscaling it. Ignore the spaghetti mess going to top.
>>
File: SmoothMix.mp4 (754.9 KB)
>>20504215
>>
File: FastMove.mp4 (938.3 KB)
>>20504262
>>
File: Wan2.2BaseModel.mp4 (560.7 KB)
>>20504263
>>
>>20504260

Also ignore the negative prompts.. was using an old workflow and just tweaked it for your image.
>>
>>20504263
Thank you, that action looks a lot better.

Question: do you have any loras that work well with portraits? I have selfie, full body, nudes, etc. I ask cause I feel like using AI gens for I2V gives it an uncanny look. I see some wizards have extremely high quality videos and maybe their I2I is just much higher quality than my setup.

I have a 4090 and 64gb of very high end ram
>>
File: 001.mp4 (3.1 MB)
>>20504289

You have a few options with portraits/selfies. You can crop the image to just show the face and prompt "camera pans out to show mans penis entering from the (bottom/left/bottom left, etc) side of frame. Mans penis is thrusted deeply into womans mouth".

There are also a bunch of smash-cut loras that basically takes the womans face and immediately cuts to a blowjob scene (like this one). The loras are further up in this thread provided by Mr. First Responder himself (1st Aid).

Attached vid is a smashcut example. The loras are usually named "iBlink" something.. there is a series of like 10 or so of them.
>>
>>20504289

Image resolution and genning at highest resolution your rig can do is key to getting sharp gens. If you are just doing it for peeps on here I wouldn't sweat it as usually 720 quality is good enough. Start with a high res photo, then gen it at a decent resolution. If you are working with crap photos there are AI upscalers you can use but I dont usually bother with all that.
>>
Thanks again, I have used some Blink loras and it's been a mixed bag for me. I'll keep trying to perfect them but the only ones that really work for me are the facial and bj ones.

I'll mess with some more later and take in your edits to see if i can get them running
>>
>>20504307

Also some Wiz's use post gen vid upscalers and interpolation programs (Topaz/Flowframes, etc) to make the vids look smoother and recompress to fit on the board here.
>>
File: dhdfhdfh.png (1007.8 KB)
>>
>>20504308

If the face is at a bad angle or position AI has trouble recognizing it. Also helps to crop the face for them as the iBlink just uses an AI body and mostly ignores the persons body in the photo.

Takes about 100 hours or so to get an understanding and comfortable with all the settings/loras/models, etc. Then you can quickly figure out what lora/prompt works with what position/photo.
>>
>>20503953
always go for more vram when possible. i went from 3060 12gb to 5060 ti 16gb last year and while the small speedup in genning images is nice i think the 4 more gb of vram is much nicer to have because that opens up more flexibility with which models i can run without a severe gen speed penalty because something wouldn't completely fit into the vram. i do image gen and llms, not as interested in vid gen yet.
>>
>>20504411
Thanks for the answer.
I will try to grab a 16Gb model then.
>>
>>20503953
i do my stuff with a 4070 (12gb) and 64gb ram....of course the more vram/ram you have the better. most of the new ai video models are huge, 40 gigs, so no "consumer level" card fits them whole, but the more you can fit in vram == better, as this means less loading from ram->vram...and if the model exceeds the ram+vram, then stuff gets loaded into your page file, and this slows things down even more....latest comfyui (0.19.x) promises fewer OOMs, haven't tested it yet

but note: for speed, the # of cuda cores and mem bandwidth on the card play a big part as well.

sadly the world situation has pushed chip/ram prices way way up....those who are thinking upgrading their rigs, wait up
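The vram -> ram -> pagefile hierarchy described above can be written out as a back-of-envelope check. This is an assumption-laden sketch for illustration, not ComfyUI's actual memory manager; the 2 GB OS headroom figure is my own guess:

```python
# Rough sketch: where does a model land, given card/system memory?
def placement(model_gb, vram_gb, ram_gb, os_overhead_gb=2.0):
    free_vram = vram_gb - os_overhead_gb   # OS/desktop eats some VRAM
    if model_gb <= free_vram:
        return "vram"       # fastest: whole model stays on the card
    if model_gb <= free_vram + ram_gb:
        return "vram+ram"   # works, but blocks shuttle ram -> vram
    return "pagefile"       # disk-backed swapping, slowest by far

# a 40 GB video model on a 12 GB card with 64 GB ram:
print(placement(40, 12, 64))  # vram+ram
```

Same model on a 24 GB card would still spill to RAM, which is why quantized (GGUF) versions exist: shrinking the model is the only way to move it up a tier.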
>>
>>20504777
Well, thanks for this info to.
I have a ~$300 voucher, so I can buy a card up to ~$600, and at this price a 5060Ti is the most VRAM I can get (some 5070s sometimes are in this price bracket, more powerful, but less VRAM)

>those who are thinking upgrading their rigs, wait up
Because it's more expensive now or because you more or less know it will be better in a month or two?
>>
>>20504937
Don't forget about the potential hidden cost of buying a new power supply if there's cable incompatibility
>>
>>20504937
who knows....all the war shit going on. AI is a big thing now, so many ai farm companies have hogged up most of the ram chips out there. Demand is larger than supply --> prices go up....big boys like samsung and kingston are selling out stuff that isn't even made yet....i would say, instead of months, think a couple of years...

example....a bit over a year ago i bought new ddr5 2 x 32gb at 80$; the 6 month old 2 x 16gb i've got in the drawer now goes for over 250$ used

that being said if you get good deal now, take it
>>
>>20504974

There are some deals on pre-built rigs bundled with cards, so that could be an option to consider. I usually see regular deals knocking 500-1k off a complete system. Rarer to find nowadays than in late 2025, so I guess the supply crunch is still getting worse.
>>
>>20504957
Yeah, I have a beefy power supply from my old mining rig.
I could never use it because the first GPU price soaring happened just when I was about to buy the GPU for said rig...
The special cable for new GPU is something to remember tho, you are right.

>>20504974
Yeah, I feel like you, it's uncertain, but as long as AI is going, prices will go up.
Call me crazy, but I wonder if the governments won't try to reduce personal computational power too...
It would be as easy as asking to be a company to buy "high power, high carbon emission good" here...
But I digress.
I'll take a good deal if I have one.
Fortunately, a friend of mine lent me his old computer, and it's plenty powerful for AI (except for the GPU, which I had to supply of course)
Anyway, thanks, and let's see what I can do next week.
>>
I wanna do some prepwork edits for video, I tried Qwen-Nunchaku edits but it just runs out of memory at least with the stuff downloaded from the .bat
With 12GB should I be using something else for images? thanks
>>
>>20505485
Post a screenshot of your workflow. This goes for everyone asking for help... POST A SCREENSHOT. 99% of cases get solved that way, by me or other AI-wise anons, just by looking at your screencap
>>
File: error.jpg (225.4 KB)
Hey, I suspect there may be an issue with the bat fetching custom nodes.
I previously erased the portable for disk space reasons (it was working fine, I just wanted a clean install on a different place).

Upon reinstalling and hitting run it throws pic related.
I reinstalled clean 3 times to be sure (Manager,run, custom nodes, attentions, WAN), tried previously working workflows and a clean workflow each time. Tried Flash attention and normal each time. Same thing.
No errors during install, always same node error when running.

Maybe it just hates me now or something, but if you find the time please try to see if the install process is actually bonked by doing a clean separate install, It's pointing at WanImageMotion, but since the process stops I can't tell you if anything else is messed up as well.
>>
>>20505612
And here's the last workflow, just so you know I'm not trying anything funny.
This used to work on the previous install a few days ago
>>
File: unlikely.jpg (201.1 KB)
>>20505614
The only other thing "out of the ordinary" is this, which happens after every run, but I doubt it's related. Some out of date UI stuff I may not have paid attention to prior.
I'm not very technical, maybe there's a stupid fix to it, I just know it doesn't work off the bat (no pun).
>>
File: prev.jpg (6.6 KB)
>>20505617
Actually, changing this makes it run, derp

Temporary fix, but I don't think it's good for SVI genning. Maybe it's fine for I2V since the image is static? was this supposed to be off during the initial scene? I don't know, it's activated by default though.
>>
>>20505624
>>20505617
>>20505614
>>20505612
Seems that was it, please ignore these posts. Prev samples for the initial scene as false did the trick. Prev samples can be true on extensions.

It makes sense I just have no idea why it worked before, since when I load old workflows for metadata it's set as true on initial scene.
All working fine
>>
>>20505485
Qwen 2.5 alone is 12gb; I don't think you can even load it, let alone with the other models. It should be for 16gb+ only unless I'm missing something.
Don't know about the other ones. Qwen image gens seem to be a lot more expensive than video gen, ironically
>>
Looking for some help setting up my Blink workflow.

I ran the settings blink recommended and got a very fuzzy video that looks cartoony with how fast it is.
>>
File: Video Project 10.gif (2.8 MB)
>>
>>20504262
Is this the smoothmix checkpoint or the smoothmix Animation lora?
>>
>>20506309
what is mainou lora? disable it and try again?
>>
>>20506931
I did and its a model I trained. same outcome.
>>
>>20506950
cant see the models....sure you got high and low loaded...not 2 x high?

also why reactor? have you tried newer versions of bag's workflow?
>>
>>20506555

It's a base model like the Wan2.2 hi/low model. Check Civitai for it. Think it's like 12-16gb but believe there are quants available also
>>
>>20506990
I switched to blink's workflow and got it to work. I was missing the low and high models for the lightning loras he was using. Now the clip is pretty clear but very short (2 seconds) and very sped up
>>
File: iGOON_00001_ (1).mp4 (620.3 KB)
I feel like this workflow is a little basic so I am going to try to get it to work on the 1st.ald workflow
>>
>>20506555

https://civitai.red/models/1995784/smooth-mix-wan-22-14b-i2vt2v. Add the files to correct folder in comfy, then select Smoothmix on Drop-Down instead of Wan2.2i2v
>>
>>20506999

Check your FPS and number of frames if it is running fast and brief (short). Some loras (most) are trained at 16fps, so if you put 60 it's usually gonna speed up
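The fps/frame-count relationship above is simple arithmetic. A quick sketch, using an 81-frame clip as the example length (a common Wan setting; the numbers are illustrative):

```python
# Clip duration is just frames divided by playback fps. A lora
# trained at 16 fps produces motion paced for 16 frames per second,
# so playing the same frames back at 60 fps compresses that motion
# into roughly a quarter of the time -- the "sped up" look.
def clip_seconds(num_frames, fps):
    return num_frames / fps

print(clip_seconds(81, 16))  # 5.0625 -> ~5 s, as trained
print(clip_seconds(81, 60))  # 1.35   -> same motion, ~4x faster
```

To keep motion natural, either render at the training fps or add interpolated frames afterwards instead of raising the playback rate.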
>>
File: workflow (1).png (709.6 KB)
hello, i am having an issue i've never had before, but i recently re-installed windows and this is the first time i've gotten around to getting this installed again. using default wan workflows, regardless of version, i get noisy brown output immediately as the video starts. i am posting the workflow image and then the output.
>>
File: 157666_00001.mp4 (3.1 MB)
>>20508036
>>
File: Untitledwf.png (495.3 KB)
>>20508036
try setting this to 0
>>
I get the following error with 1stAid SVI workflow after hardware applied - proper models have all been selected on the main page of the workflow:

TypeError: IAMCCS_HwSupporter.apply() missing 2 required positional arguments: 'reserved_vram_auto_headroom_gb' and 'reserved_vram_auto_max_gb'

RTX 5080 with 32GB Ram
>>
>>20508336
Try setting apply reserved vram to false.... Kinda weird error as those missing settings are set
>>
>>20508043
dude, thank you so much, this worked. i wasted so many gens trying to figure it out just for it to be so simple!
>>
>>20508511
I'm also getting the same error, tried to set reserved VRAM to false and still the same.

Not sure if the code will help:

TypeError: IAMCCS_HwSupporter.apply() missing 2 required positional arguments: 'reserved_vram_auto_headroom_gb' and 'reserved_vram_auto_max_gb'


File "E:\WindowsOS\rbwu4v\ComfyUI_apprentice_portable\ComfyUI\execution.py", line 535, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "E:\WindowsOS\rbwu4v\ComfyUI_apprentice_portable\ComfyUI\execution.py", line 335, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "E:\WindowsOS\rbwu4v\ComfyUI_apprentice_portable\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\py\metadata_collector\metadata_hook.py", line 171, in async_map_node_over_list_with_metadata
results = await original_map_node_over_list(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "E:\WindowsOS\rbwu4v\ComfyUI_apprentice_portable\ComfyUI\execution.py", line 309, in _async_map_node_over_list
await process_inputs(input_dict, i)

File "E:\WindowsOS\rbwu4v\ComfyUI_apprentice_portable\ComfyUI\execution.py", line 297, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
>>
>>20508511
I've tried that with no success getting past that same node returning the error. I tried to use different profiles to ensure there is enough headroom for it to run and get the same error code.
>>
File: 2026-05-06 020023.png (271.2 KB)
>>20508967
you could try replacing those nodes with good old sage patch from kj

>>20508986
the headroom part means how many GB comfyui should leave free for other system use, ie if you are watching a movie while genning, it's good to leave a gig free
>>
>>20509022
The screenshot got me going from my last post here >>20508986

Thanks!
>>
File: Peachlight.mp4 (2.0 MB)
>>20493231

You could just work backwards with Qwen image edit and a handjob lora. I made this by creating the end frame in Qwen first, then just used Remix AIO since it has loras baked in. Probably should have prompted the guy's feet to stay still as well.

prompt:

she picks up a large thick rubbery plastic cylindrical object with her left hand (it has a rubbery slit in the bottom), she puts the cylinder over the tip of penis and pauses for a moment, she then pushes it down the penis with slight resistance as the penis slides in, she starts moving the cylinder quickly up and down the penis shaft.
>>
>>20508967
according to grok

The IAMCCS_HwSupporter.apply() function (from the IAMCCS-nodes custom node pack) was recently updated to require two new parameters:

reserved_vram_auto_headroom_gb
reserved_vram_auto_max_gb

Your ComfyUI-Lora-Manager (specifically its metadata hook in metadata_hook.py) is still calling the old version of that function without these arguments

you could try to update both IAMCCS-nodes and ComfyUI-Lora-Manager
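For anyone curious what that error class actually means, here is a minimal self-contained reproduction. The class is a stand-in that just mirrors the names in the message; it is not the real IAMCCS or Lora-Manager code:

```python
# A node method gains two new required parameters in an update...
class HwSupporter:
    def apply(self, model,
              reserved_vram_auto_headroom_gb,
              reserved_vram_auto_max_gb):
        return reserved_vram_auto_headroom_gb, reserved_vram_auto_max_gb

# ...but an out-of-date hook still calls it with the old argument
# list, so Python itself raises the TypeError before any node logic
# runs:
try:
    HwSupporter().apply("some_model")
except TypeError as e:
    print(e)  # names the two missing parameters
```

That is why updating both node packs, so caller and callee agree on the signature again, is the usual fix.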
>>
Thank you so much for the guides and details. I was able to get my machine running, and after some adjustments I can make videos in less than 6 minutes now on my 3060 12gb vram.

Now, I'm trying to figure out image upscaling. I've tried using the included Qwen flows, but I'm not sure which would be the optimal flow for realistic upscaling, and I don't know much about the models themselves. Is qwen the best option for this? Or is there a realistic upscaler that's better to run on 12gb vram
>>
File: Screenshot (143).png (971.6 KB)
It just makes her fat...so there isn't a one-stop shop to nudify any pic? no workflows out there?
>>
File: 1000364795.jpg (53.4 KB)
here's the original
>>
>>20511599
>>20511596

you may want to throw a differential diffusion node right before your ksampler, your CFG is very low, and you likely don't need to do 40 steps, I'd stick with somewhere like 24-30 steps

Also I'd like to throw out that the quality of the image is awful. SD1.5 likes to work with 512x512 images
SDXL likes to work with 1024x1024 images

you can use any image size you want technically, but you may not get the results you want.

SDXL doesn't use prompting like the newer models; it doesn't understand what "exact same woman" is, for example. Inpainting is taking the surrounding pixels and blending in and drawing over them, to put it simply. It has no concept of what likeness is or isn't

you're better off using a prompt like

woman, naked, nude, breasts, detailed nipples, photorealistic, hand on hip

then tweak it as you see fit. still, there are better ways to inpaint, or if you have the vram use a larger model like QWEN or Flux Klein 9b to just "remove the clothes to show her naked body"
>>
>>20511617
Plus I'm using zluda 8gb vram 32gb total on a laptop. Any AMD tips/tricks easily listed? Seems like there should be more saved (working) workflows around? I'll try those suggestions, thanks.
>>
>>20511836
I'm on Nvidia, so I can't help you there.
Honestly you should probably look into Flux Klein 9b; with the gguf version and some offloading, it should do the job
>>
File: test.jpg (1.2 MB)
Another newbie here, I stole this guy's modded workflow here >>20503272 (thanks)

It worked fine except for the filmgrain node throwing this error:
>TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

I fixed it easily by removing the node and reconnecting upscale to save image, but I'd like to get better and understand what's wrong with it. It seems like a simple post processing effect, do nodes with no reference need any associated models or resources separately, not specified in the node?

Posting workflow for reference just in case, but it's pretty much the same as >>20503272
>>
>>20511596
I don't think the juggernaut checkpoints do NSFW... Use another checkpoint like biglust.... Also there are speedup Loras for sdxl that do images in 4-8 steps.... 40-50 is a bit overkill, you are basically over-sampling it.... Causes the gen to get burned and oversaturated
>>
>>20512000
adding an "Image to Device" node and setting it to cpu fixed it for me. you wanna add it after the upscale, before the film grain.
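The underlying error is generic PyTorch behaviour, shown here with a tiny standalone tensor (no GPU needed to run this; the device node above just automates the `.cpu()` hop):

```python
import torch

# numpy can only read host (CPU) memory. A tensor still sitting on
# cuda:0 must be copied back before any numpy-based node (like film
# grain) can touch it -- otherwise torch raises:
#   TypeError: can't convert cuda:0 device type tensor to numpy.
#   Use Tensor.cpu() to copy the tensor to host memory first.
frame = torch.rand(4, 4)       # stand-in for a decoded video frame
arr = frame.cpu().numpy()      # .cpu() is a no-op if already on CPU
print(arr.shape)               # (4, 4)
```

Nodes that only use torch ops never hit this, which is why the error appears only at the post-processing step.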
>>
>>20512755
Thanks anon <3
>>
>>20498964
Please drop your workflow for this, dude.
>>
>>20512571
Steer me to any videos, forums, guides for this i2i nudify for AMD? Discord NSFW comfyui? Step by step for a dummy?
>>
Thank you for this guide sir. I've gotten Qwen running well. And I've just genned my first image to vid with a wan 2.2 template.

I have not yet figured out how to apply more nodes properly and also how to apply the loras that I downloaded from civitai.

But I'm getting there!
>>
>>20515192
i don't think the tutorials differ any whether you are using amd (rocm) or nvidia. The latter is just generally faster... Using sd/sdxl models is a bit of an outdated way to nudify, but i understand for some this is the only option due to smaller model sizes and vram limitations.

even if you are at 8/32gb, i would give klein 9b a try (klein 4b is much smaller model but also less quality and lora support). There are so called GGUF versions of it out there that might get you going.

is nudify the only thing you are after? if so, i can make a workflow for that in a few minutes

start here:
1. get the model https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF/tree/main (start with Q8, go lower if out of memory issues) save in comfyui/models/unet/
2. get the text-encoder https://huggingface.co/Qwen/Qwen3-8B-GGUF/tree/main (again Q8, lower if mem issues) save in the text-encoders folder
3. get vae file https://huggingface.co/Comfy-Org/flux2-dev/blob/main/split_files/vae/flux2-vae.safetensors ...goes into vae folder
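The three destinations above, mapped out. The folder names follow standard ComfyUI layout as described in the steps; the left-hand labels are just descriptions of each download, not exact file names:

```python
from pathlib import Path

# Standard ComfyUI model-folder layout for the three klein downloads.
COMFY = Path("ComfyUI")  # adjust to your own install root
DESTS = {
    "klein 9B GGUF (model)":        COMFY / "models" / "unet",
    "Qwen3-8B GGUF (text encoder)": COMFY / "models" / "text_encoders",
    "flux2-vae.safetensors (vae)":  COMFY / "models" / "vae",
}
for label, folder in DESTS.items():
    print(f"{label:30s} -> {folder}")
```

If a model doesn't show up in a loader dropdown, a wrong destination folder is the first thing to check.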
>>
File: basicklein.png (1.0 MB)
>>20515478
https://files.catbox.moe/jklnz5.json

here is the most basic flow i could think of using those gguf files above
>>
>>20515499
https://civitai.red/models/1972981/sex-nudes-other-fun-stuff-snofs

pretty good nsfw lora for klein 9b
>>
>>20515319
what nodes would you need to add? and why?....adding loras should be quite simple...just press the add lora button ;)
>>
File: simpleklein2.png (1.1 MB)
>>20515499
prompting something like this with SDXL and inpainting and controlnets would be a task....
>>
File: ht.jpg (153.4 KB)
i'll give my 0.2 cents on nudify too, still figuring things out. forgetard here, 1st tried qwen img edit models in img2img and then a 2nd img2img on qwen's outputs with sdxl detailers. results were hit and miss and the process is retarded.

working on this img >>20512431 i tried flux 2 klein for the img2img "make her naked" thing. f2k4b is faster than any qwen or z-image models i've messed with (5060 ti 16gb) but the nsfw bits are as bad or worse than those. looked for f2k loras, only seem to be loras for 9b.

flux2Klein_9b.safetensors is 18 gb and forge would crash out on me more often than not trying to load it.
https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF/tree/main - grabbed the q8_0.gguf there, it's 2x slower than 4b but it doesn't crash my forge neo.

for this image snofs wasn't removing her thumb but it also wasn't removing the part of the bikini her thumb is pulling on so i tried this lora and had better luck
https://civitai.red/models/2592290/sexgod-klein-9b-image-edit-female-nudity-helper

in the end i still had to use qwen and composite a couple images together because everything i tried in f2k would remove her thumb and mangle the hand a bit and i didn't want to mess around with "move her hand" kinda stuff.
>>
Would this be because of my low memory? or something I didn't install?
TextEncodeQwenImageEditPlus

RuntimeError: GET was unable to find an engine to execute this computation


# ComfyUI Error Report
## Error Details
- **Node ID:** 11
- **Node Type:** TextEncodeQwenImageEditPlus
- **Exception Type:** RuntimeError
- **Exception Message:** RuntimeError: GET was unable to find an engine to execute this computation

## Stack Trace
```
File "C:\ComfyUI-Zluda\execution.py", line 535, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
>>
File: simpleklein3.png (1.2 MB)
>>20515585
forge is pretty much a dead project. i haven't used forge neo, but i think it might have similar issues with loading models ram-->vram back and forth....

move to comfyui for better model/memory handling. i know it's a jump, i started with forge as well since i found comfyui confusing

as for anatomy...still in year 2026 most of the ai models don't know how limbs and fingers work, it's a fact we have to live with.

for beginners 3 out of 10 images/videos turn out ok...when you learn how to prompt around the major issues, then maybe 6 out of 10 come out ok....accepting the fact that ai can't read your mind is kinda the first step you need to take on your ai-journey
>>
>>20515596
not an out-of-mem error.....take a pic of your comfyui screen....you are not missing nodes right? no red borders anywhere?

but don't get your hopes up....i know next to nothing about zluda/amd comfyui, what nodes work and what don't
>>
File: Screenshot (147).png (892.3 KB)
892.3 KB
>>
>>20515596
leave me your disc and i'll contact you, i might have few ideas to try
>>
>>20515627
@smokinmeat69
>>
>>20515612
>https://huggingface.co/Comfy-Org/flux2-dev/blob/main/split_files/vae/flux2-vae.safetensors

Slow but i expected that, Thanks again.
>>
>>20515499
This is incredible, and it's so fast compared to the stuff I was using. Thank you Ald!
Quick question, is there a way to use 2 or 3 images like on Qwenedit workflows?
>>
>>20515836
yes...basically even more if you wish, but i've never found a use for more than 3 desu
>>
>>20515839
Nice! I'll try to put it together as an exercise using qwen edit as reference, I don't want to be over-reliant on you
Thanks again!
>>
File: 2026-05-09 081147.png (1.1 MB)
1.1 MB
>>20515894
just loading two more images and connecting them to that prompt-node is enough...if you want it simple
>>
nunchaku doesn't work, i've deleted the folder and reinstalled from ur .bat but that thing won't work
>>
>>20516147
ok....with all that extra info you gave me, i'll get right into it
>>
>>20516008
Oh, it's that easy? thanks lmao I was overcomplicating things
>>
>>20516165
i get this in the terminal when i launch
ComfyUI-nunchaku version: 1.2.1
'nunchaku_versions.json' not found, Node will start in minimal mode. Use 'update node' to fetch versions.
i went down a rabbit hole with chatgpt, it made me change the numpy version and tons of other things, gave up, deleted everything and reinstalled fresh from ur bat again. i tried to update from manager and it doesn't work
>>
>>20516175
well yeah.....the other way is to encode all the input images into noise and pour them into the same latent noise "pool" and then prompt it out.

this method has basically no limit on how many images you could use, but the more mud (noise) you pour into the pool, the less clear it gets.

but like i said, usually 1 or 2 additional images is all you ever need
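the "latent noise pool" idea above, sketched in plain numpy. this is illustrative only, not actual comfyui node code: encode each input image to a latent, average them (optionally weighted), and sample from the mix. the function name and shapes are hypothetical.

```python
import numpy as np

def blend_latents(latents, weights=None):
    """Average several image latents into one 'pool'.
    The more inputs you blend, the muddier the starting point gets,
    which is why quality drops as you add images."""
    latents = np.stack(latents)              # (n, c, h, w)
    if weights is None:
        weights = np.full(len(latents), 1.0 / len(latents))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()        # normalize so the mix stays in range
    return np.tensordot(weights, latents, axes=1)  # (c, h, w)

# toy latents standing in for two VAE-encoded images
a = np.ones((4, 8, 8))
b = np.zeros((4, 8, 8))
pool = blend_latents([a, b])
print(pool.mean())  # 0.5 -> an equal mix of both inputs
```

weighting lets you bias the pool toward one input, e.g. `blend_latents([a, b], weights=[3, 1])` keeps more of image `a`.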
>>
>>20516193
try installing the attentions again from the installer.....also don't ask ai to fix your comfyui, it will most likely work the opposite way. also also, does the nunchaku custom node load ok, or does it show import failed?

if import failed, then click on that red button to see why (usually it is missing some dependency). I haven't used nunchaku for ages. I know it was a pain to install...
>>
>>20516224
if that doesn't work then the latest comfyui isn't liking the nunchaku wheel my installer is using....i will take a look at that in the next version. try to work without it for the meanwhile
>>
>>20516008
which workflow is that?
>>
>>20516269
a standard qwen image edit wf.
>>
>>20516269
This >>20515499

Is for one image... But it seems you just need to add a few load image nodes
>>
Looking for help. The video produced isn't doing the act
>>
File: 157666_00331 (1).mp4 (3.1 MB)
3.1 MB
>>20517072
>>
>>20517072

Says "no jumpcut" in your negative prompts sir.

Take that out and abracadabra.. Blink jump cut
>>
>>20517181
Removed "no jumpcut" and got the exact same result for the video
>>
>>20517242
If anyone wants to see the workflow more clearly

https://files.catbox.moe/hjk4hz.json
>>
>>534779983 test
>>
>>20517242

Change the noise seed, make sure the iBlink "trigger word" starts at beginning of clip and is exactly as it is shown on the lora model, then add whatever you want after. Putting stuff in the middle of the trigger word has a high chance of lora not being triggered.

Aside from that try restarting comfy/your computer and see if that helps.
>>
File: 157666_00340.mp4 (3.0 MB)
3.0 MB
>>20517453
I dont see any trigger word on their page:

https://civitaiarchive.com/models/2187182?modelVersionId=2462696

I am using one of the three example prompts:

https://files.catbox.moe/zrgzck.json
>>
>>20517704

https://civarchive.com/models/2187182?modelVersionId=2462696
>>
>>20517715
Yeah, that is what I just posted. where are the trigger words?
>>
>>20517704

You shouldn't have to do it but try putting your Hi CFG at 1.4. Outside of that, try it with another image and see if it works then. Beyond that it's outside my wheelhouse sir.
>>
>>20517720

No, you got the prompt right.. the trigger "word" for Iblink is that whole paragraph so no idea why it won't work for you.
>>
>>20517704
she's just not in the mood rn bro. a.i. is evolving
>>
>>20517834
Made me laugh... Not going to lie
>>
>>20498966
Hi, which lora are you using for this gen?
>>
>>20499104
It's possible, i'm on a 3060 and made this gen. Not great, not terrible
>>
>>20517704
No lightx2v/lightning lora on 8 steps? that may be your issue.
>>
Anybody got any high-end I2I workflows I can use?
>>
I bought a 5080 3 weeks ago to learn to ai porn but every time I get horny and think about how much I want to make some ai porn, I just bust to regular porn and then lose all motivation to actually learn it. I have to publicly declare that I am not going to fap or look at porn until I learn how to use comfyui
>>
>>20518855

1stAid is gonna need a screenshot of that to be able to help you out
>>
>>20518274
what needs to be "high-end" about it? what are you trying to achieve?
>>
>>20519980
I feel like most wizards have way better looking outputs with their I2I than I have been able to achieve with the apprentice workflows that come packaged in this thread
>>
>>20520211
Certain wizards use multiple processes, like photoshop and AI, different models, different workflows, and different loras.

Since there are so many variables, if you're looking for a specific quality, it would be better to ask the person posting these high quality photos.
>>
>>20520232
pretty much this

I always tell everyone the same thing when working with workflows

set a goal for yourself and make your own workflows from zero. It'll help you understand what's happening in others' workflows and how you can integrate that into your own work

You should make it a goal to learn what you need to do to get from point A to point B, if you get stuck, some of the more advanced wizards will be more than happy to help

Knowledge should never be gatekept. I personally am more than happy to help someone if they're stuck, or if they want to learn
>>
>>20520211

Also many Wiz's use post gen processing stuff or include it in their workflow like upscaling and frame interpolation which helps gens look smoother/more polished.

I mean if its a turd gen, its just gonna be a polished turd but at least its shiny
>>
>>20520211
Quality comes from:
Input image quality (use sharp image, if it's not run it throu some image ai model) ,

model you use (at least Q8)

steps (with lightx only 4steps needed, thou steps gives better)

Output resolution (use at least 720p)

Loras... Most of the loras tend to push quality down, more loras you load, more likely quality will take a hit
>>
bumps before ZzZ-land
>>
>>20521037
Some of my best gens have been made with kinda low-quality starting images. I especially love it when the video model interprets depth of field into the videos. It gives the gens extra realism.
>>
amateur learner here. Any tips for maintaining resemblance and introducing realistic anatomy via prompt/loras?

In this example the woman looks completely different. But in the soon-to-follow example she looks the same but is not following the prompt/lora closely.

Qwen was easy to use and maintained resemblance/nudify etc. But it is my understanding that flux is better for nsfw anatomy? So i tried to build this with flux.1 dev.

Using a 4090.
>>
>>20523396

slightly different settings
>>
>>20520211
I've noticed the same

For SVI I think it's either

- maybe the Q8 models that are trash and need more than 64GB RAM to not be slow as fuck. Switched to fp8 and results were 3x faster with way better adherence, cleaner results, no corruption in frames, etc.
- or the IAMCCS nodes that are buggy.
- or both

Sorry 1st Aid but something is broken in your workflow, and I don't want to advertise someone else's workflow here but check SVI kenpechi on civit, it's working way better

check my gens here >>20519887
15m on 5060Ti 16 VRAM 64 RAM
>>
>>20523424
No need to apologize. If you have a good workflow, share it with others...that's kinda the point of these threads.

My workflows are built for my needs....and stuff gets broken at random intervals as nodes and comfyui update. Some other anon with a 50xx card also had issues with the iamccs nodes. Try replacing those with kj's patch sage attention node
>>
>>20523396
flux 2 or flux 2 klein....flux.1 dev is a couple years old and works differently than qwen or flux2....it cannot transform your input into the latent like qwen/flux2 can. as in your example, when denoise is 0.95 it basically just follows the prompt and works as t2i; if you set denoise low it tries to keep your input image unchanged...aka denoise 0.0 == it does nothing

as for anatomy, it still sucks in flux, but give flux2 a try. the dev model is quite big, but there is a smaller distilled (4 step) version that has been working ok for me.
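a rough sketch of why denoise behaves that way in classic img2img (the flux.1/sd style): the denoise value decides how far up the noise schedule you start, so 1.0 re-noises the input completely (pure t2i) and 0.0 runs zero steps (input returned untouched). the function name here is illustrative, not a comfyui internal.

```python
def img2img_steps_run(total_steps: int, denoise: float) -> int:
    """How many of the sampler's steps actually run in img2img.
    denoise=1.0 -> full schedule (input is all noise, behaves like t2i)
    denoise=0.0 -> zero steps (input image comes back unchanged)."""
    denoise = min(max(denoise, 0.0), 1.0)  # clamp to the valid range
    return round(total_steps * denoise)

print(img2img_steps_run(20, 0.95))  # 19 steps: prompt dominates, input nearly ignored
print(img2img_steps_run(20, 0.3))   # 6 steps: input image mostly preserved
print(img2img_steps_run(20, 0.0))   # 0 steps: does nothing
```

so for an edit that keeps the subject recognizable you want a middling denoise, not the near-1.0 value that makes the sampler behave like plain text-to-image.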
>>
>>20523768
thx sir. I started out trying to use flux 2 klein but ran into some errors. And when I researched the error it was suggested by AI to use flux1.dev for better compatibility. I'll try klein again
>>
I have a 6800XT GPU, any changes to the installation process I should be aware of? I'm not really tech-savvy with this tech so would really appreciate the help
>>
>>20524077
>6800XT
my installer is for win / nvidia cards. AMD user should try this guide
https://github.com/patientx-cfz/comfyui-rocm (same guy who is maintaining comfyui-zluda)
seem more straightforward than zluda
>>
>>20524118
Thanks I managed to get it installed and got to the main UI. Is there a comprehensive guide on what to do/where to learn what I should do next?
>>
>>20524390
Mostly I'm not entirely sure where to find the CustomNodes, Attentions, Workflows, and WANs that came with your original packet since I did a fresh download from github
>>
>>20524419
>https://github.com/patientx-cfz/comfyui-rocm
comes with supported attentions (so it claims)...i would also assume it installs the manager? <-- if so, you just manually install missing nodes via manager as you need them (aka when you open up a new workflow)

you can use my installer to download the model files into their right places (place my install bat inside your comfy folder, where the other bats are)
DO NOT INSTALL ATTENTIONS OR COMFY from my installer, that will break your freshly installed amd-rocm-comfy

sadly i don't know much about amd cards + comfyui
>>
Bumps
>>
^^
