Thread #8827652
File: OP06022026.png (2.1 MB)
2.1 MB PNG
Responsible And Diligent Sensei Edition
Previous Thread: >>8815821
>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge | Stable commit: 6964ceb
Comfy: https://github.com/comfyanonymous/ComfyUI | https://comfyanonymous.github.io/ComfyUI_examples
>NOOBAI-XL
https://civitai.com/models/833294/noobai-xl-nai-xl
https://huggingface.co/Laxhar
>NOOBAI SHITMIXES
102d custom: https://civitai.com/models/1201815?modelVersionId=1491533
291h: https://civitai.com/models/1301670/291h
>RESOURCES
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Easy booru tag export userscript: https://github.com/Takenoko3333/Danbooru-Tags-Sort-Exporter
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag | https://tagexplorer.github.io/#/
Inpaint: https://files.catbox.moe/fbzsxb.jpg | https://huggingface.co/Wenaka/NoobAI_XL_Inpainting_ControlNet_Full
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
ControlNet: https://rentry.org/dummycontrolnet (OLD) | https://civitai.com/models/136070 (look at the green links in the description)
>TRAINING
Guide: https://rentry.org/yahbgtr (WIP)
Anon's scripts: https://mega.nz/folder/VxYFhAYb#FQZn8iz_SxWV3x1BBaJGbw
Trainers: https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
>NEWS
https://huggingface.co/circlestone-labs/Anima
https://huggingface.co/ChenkinRF/ChenkinNoob-XL-v0.2-Rectified-Flow
OP Template/Logo: https://rentry.org/hggop/edit | https://files.catbox.moe/om5a99.png
>>
File: 00063-3180376270.png (2.6 MB)
2.6 MB PNG
>Why are you doing [thing]
Why not, I am just that bored so anything goes I guess
>Why is so [thing]
Because it's me
>Are you going to do [thing]
Maybe I will maybe I won't
>>
File: 00009-1854146137.jpg (1.1 MB)
1.1 MB JPG
>>
File: 00008-3906899065.jpg (496.5 KB)
496.5 KB JPG
>>8827658
I forgor about that one, fuck
>>
File: 00091-1687842187.png (3.2 MB)
3.2 MB PNG
This one didn't come out quite like I wanted, boobs look weird
>>
File: 00031-3976810895.png (1.9 MB)
1.9 MB PNG
Have you seen any of the epstein files? crazy stuff, seeing and reading some of those made me remember all of those really old and obscure vocaloid songs, not cool
>>
File: 00143-4083351731_result.png (3.3 MB)
3.3 MB PNG
I was blessed with this seed the other day, it looked funny so even if I already had a few too many gens like this I decided to complete it anyway
>>
By the way, here is the gen of the other day, you know after doing more of those I think they are not that big of a deal anymore
It's a little fried since I didn't intend to share it but eh, whatever, world's ending
>https://files.catbox.moe/mm1l50.png
>>
this is anima thread now
>>
File: 00035-357018711.png (3.6 MB)
3.6 MB PNG
I have done so much shit over the past two years on this hobby chat, kind of nuts
Hope the new models, once fully trained and tuned, will be great, as you may know I am more than fine with SDXL but new toys are always welcome
>>
sex with JKs
>>
File: 00090-3287242668.jpg (1 MB)
1 MB JPG
Despite everything, my favorite shitmix is still 102d custom, r3mix being second place and bakaschizo "finetune" third place for the rad styles it can do (when it works after layers and layers of hacks)
>>
File: 00068-754088371.png (2 MB)
2 MB PNG
I love 1girl so much, you have no idea, I thought about going back to posting on /e/ but it's full of very autistic childs nowadays, not worth it
Peak /e/ was back then on the golden days of NAI v3 when the other baker was around, I wonder where he is now
>>
>>8827677
the meltdown over the past two weeks or whatever on /e/ has been entertaining. carebear schizos have been outed as being control freaks who are only okay with things when they're the ones doing it and a few of the remaining "regular posters" have gone mask off schizo-mode when anyone says something they don't like.
meanwhile /adt/ is completely fucking dead with the last two threads being schizo hijacks and the only real poster left being a specific nai user, with occasional schizos that "migrated" to edg making light appearances.
>>
File: 00068-3201542375.png (3.2 MB)
3.2 MB PNG
What's better than 1girl? 2 of them
I love yurishit and futashit and really really want a place to post about it but for some reason, those two always attract a very particular set of autists that ruin the fun for everyone else, I don't want to see girls with giga cocks or balls, I want to see girls make out and/or imply that they are fucking the other one
>>
File: 00025-3364129963_result.png (2.9 MB)
2.9 MB PNG
>>8827684
I also have thought about going back to /adt/ but I hate forcing myself to do sfw when I am clearly not in the mood of doing it and "barely" sfw would get me banned on this dogshit website
>>
Speaking of it, I still want a "low level" bestiality thread, where the main topic (and focus) are the girls taking animal cocks
>https://files.catbox.moe/j6se2o.png
>https://files.catbox.moe/qxsxzf.png
>https://files.catbox.moe/kpqk03.png
>>
>>
>>8827684
>the meltdown over the past two weeks or whatever on /e/ has been entertaining. carebear schizos have been outed as being control freaks who are only okay with things when they're the ones doing it and a few of the remaining "regular posters" have gone mask off schizo-mode when anyone says something they don't like.
I feel like it's mostly the IA schizo and maybe 1-2 of his most loyal minions. Most of the other slop posters don't really give a shit what happens around.
>>
File: anima_00773_.png (1.3 MB)
1.3 MB PNG
it's so over
>>
Loving Anima so far but I'm having trouble with style consistency. It's like I either gotta put in an artist as a tag, or just get random shit every time?
I don't want it to look hand drawn, I just want it to look crisp and detailed in every image, like default Illustrious type style. anyone know how I can prompt that?
>>
>>
>>8827783
>like default Illustrious type style
Default illustrious is like this too, you're thinking of WAI or some other mix. It's similar to having an artist prompted at all times with low emphasis, someone like dikko. You can also run a low-strength style lora.
>>
File: animaslop.png (2.1 MB)
2.1 MB PNG
Even with NL I can't get the guy to grab the breasts in a way that presses them against his dick.
>>
>>8827792
not perfect, but still, it's a prompt issue. think of another way to approach what you want to see and write it out.
>>
File: ChatGPT Image Feb 7, 2026, 09_53_55 PM.png (964.6 KB)
964.6 KB PNG
>>8827796
eh, not terrible for a first pass, i'll try messing with the tags a bit more
>>
>>8827783
the model not having a baked in default style is a good thing, slop-kun
literally just put together an artist mix for a style you like or download
https://civitai.com/models/723360/ai-styles-dump-animaillustriousrouweinoob?modelVersionId=2663471
>>
>>8827833
>girl is looking straight at the viewer, she is indoors and the background is blurry. There is sunlight streaming through the window, and her hand is on her chin with a finger on her lips. Her large breasts are visible, and she has puffy nipples. Across her breasts in large, permanent marker is the words
I hate natural language
>>
My biggest problem with Anima is that it can't really do "textures", as in artist styles with more detailed skin textures or shading that is neither super soft nor completely flat. I'm guessing that's a consequence of training on 512x, so I presume it will be fixed for the full release if they train on 1024x
>>
>>
How to lora anima? google search gives these repos
https://github.com/bluvoll/diffusion-pipe/tree/main
https://github.com/FHfanshu/Anima_Trainer
one is bluvoll one is chinese. not exactly confidence inspiring
>>
>>8827893
Yeah, stuff like Yunsang or nanaken nana and just today I tried Mirei
They end up looking more flat than they should, I'm guessing because the image gets compressed down to fit 512x so the detail is lost when training
>>
File: 00066-2860182781.png (3.6 MB)
3.6 MB PNG
>>8827982
Yes yes I know, to be honest I don't want to comment anything about it but he asked for it so lol
Back to 1girl and sugoi hentai
>>
File: 00188-413525928.png (3.7 MB)
3.7 MB PNG
Let's all just have happy thoughts
>>
File: ComfyUI_temp_zljgp_00036_.png (1.4 MB)
1.4 MB PNG
Hope texts gets better as well. would rather not switch to qwen for that
>>
https://www.reddit.com/r/StableDiffusion/comments/1qyk4fd/anima_2b_style_explorer_visual_database_of_900/
https://thetacursed.github.io/Anima-Style-Explorer/
>>
>>8828057
"sometimes"
yes, it's as vague as that. also if you're trying to update kohya directly to access new training implementations you'll be locked to having to use kohya directly and not using the GUI at all(since the GUI won't just magically update to include all of the specific new shit).
>>
>>8828158
It's definitely the best SDXL model we have, but anima is a new arch, and despite being undertrained and mostly trained at 512x512 so far it's a bit more interesting to play around with. They dropped around the same time so it got overshadowed
>>
File: 00145-3516660384.png (3.4 MB)
3.4 MB PNG
>>8828158
Quick 0 effort raw upscaled gen, not bad I guess
>>
File: 00026-2741226280.png (3.3 MB)
3.3 MB PNG
>>8828165
Alright, slightly more effort this time, also tested how well inpainting works and it does fine, maybe I'll use it a bit more to really see the extent of the updated dataset used but I really really hate using the suggested samplers
>>
File: 00001-3241480020.png (3.5 MB)
3.5 MB PNG
>>8828176
Style-wise it doesn't seem to be much different from the goat 102d custom, it's more blurry and sketchy but I bet all my horses that it's because of the suggested gen config (quality tags, sampler, CFG, scheduler and steps)
>>
File: anima-preview.safetensors_00001_.png (824.6 KB)
824.6 KB PNG
>>8828199
latter
>>
>>8828199
you just need to prompt the way you normally would with danbooru tags. when you think you have everything ready, translate them into natural language, but don’t delete the booru tags. So I’d say booru tags + NLP is the best way to use Anima. Remember to specify who is where and who is doing what. futa warning: https://litter.catbox.moe/1biift1c7w24e3ss.png
>>
File: sample_b340c8e808aeefbe2bb23bd43dd73297.jpg (53 KB)
53 KB JPG
I dont know if this is the right thread for it but im asking anyway. the artist of this pic used to have a large rule34 collection online but i cant find it anymore. artist is mino_dev. it cant possibly have all been nuked. anyone know where i can find it.
>>
>>8828322
Yeah, but check last thread. It's not quite when you try to upscale, it's when the model works at too high resolution. You can still do a 1.3x or so, or you can use tiled upscaling where each tile is only 1024 and go as high as you want.
Although for the issues anon probably meant in >>8828319, all it would take is inpainting a few key areas.
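For reference, the tiling scheme described above can be sketched in a few lines. This is a hypothetical helper, not code from any actual extension; the 1024px tile size and 128px overlap are illustrative values (the overlap is what lets seams blend):

```python
def tile_origins(size: int, tile: int = 1024, overlap: int = 128):
    """Left/top coordinates of tiles covering one image axis, so
    each tile stays at the model's native working size and
    neighboring tiles overlap enough to blend seams."""
    if size <= tile:
        return [0]
    stride = tile - overlap
    origins = list(range(0, size - tile, stride))
    origins.append(size - tile)  # snap the last tile to the edge
    return origins

# Upscaling to 2048x2048 with 1024px tiles:
print(tile_origins(2048))  # [0, 896, 1024]
```

Run the same function over both axes and you get the grid of 1-megapixel squares the model sees one at a time, which is why the gen never works "at too high resolution".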
>>
File: 1758567970520679.png (1.7 MB)
1.7 MB PNG
>>8828331
forgot my slop
>>
>>8828333
This will be true once we get rid of VAE compression altogether, only halving it is sometimes not enough. Depends on the image, the usual issues of faces too far from the camera, fine details like zippers or jewelry, lace, etc.
>>
damn
>>
>>
File: 00110-3600524339.png (2.5 MB)
2.5 MB PNG
>>8828329
>thread so good dumb mf has to blame his own schizophrenia to justify his stupidity
love to see it and welcome, I will be around more often so feel free to come and say hi
>>
>>8828404
https://litter.catbox.moe/6ii3kr4yddjgohev.png
>>
>half feel like actually doing something
>except my chair has been slowly breaking over the last two weeks and really fucking broke this past wednesday
>had ordered a chair that was supposed to deliver same day, never came
>didn't come the day after, either
>cancelled because it was clearly lost in shipping, ordered another chair that was supposed to come today
>nope it's coming tuesday
>meanwhile I'm sitting here in a chair that's tilted 45 degrees backwards so I'm just sitting in a fucking V with no back support and doing anything productive is no fun at all
I hate amazon so fucking much it's unreal. how does a logistics company fuck up logistics that badly.
be sore to lik and scribscrub for moar blog
>>
>>8828481
there just has to be something wrong with how it parses multiple artist tags
like it's similar to when you use an artist tag without the @, it kind of just doesn't understand what to do and so it gives you something random
>>
>>8828499
Consider the following: Very few images on boorus have multiple artists, and those that do are usually artist collaborations where each artist draws in their own style. That means that a sufficiently smart model with an intelligent text encoder will understand this and will not mix styles when it sees multiple artist tags. At best it might start drawing parts of your image in one style, and another part in another style. This is what the data distribution teaches it.
Artist mixed images are simply out of distribution, unless you train on SDXL slop.
>>
>>
>>8828511
I highly doubt that will work. There are no such labels in the data, and even artist collaborations are likely just split by character or maybe background and characters, not parts of a single character. If you are very lucky, the model will generalize to "background drawn by ..." or maybe "asuka drawn by ...", but that's a very long shot and I'd be surprised.
>>
>>8828511
Doesn't work, it just mixes them image-wide like usual.
You can still do it with regional prompter+controlnet, once we get one. I've tried that on noob, made a raita skeleton with a zankuro moeblob face. But it's not what we're looking for here.
>>8828492
>when you use an artist tag without the @
In this case it's because without the @ you're prompting a different tag, the TE doesn't leak like old CLIP. Or perhaps not as much.
>>
File: conditioning.png (78.1 KB)
78.1 KB PNG
>>8828508
>>8828511
>>8828523
What if we use a ConditioningAverage
>>
>>8828524
My expectation is that this will often result in unpredictable results. It might sometimes work to some degree, at least if both artist tags have the same number of tokens. Doing it for a full prompt with multiple artists will be difficult.
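At its core that node is just a weighted average of the two conditioning tensors, which is why it behaves unpredictably: you're interpolating between embeddings token-by-token, not blending styles semantically. A toy sketch of the math (pure Python stand-in; the real node operates on torch tensors and handles pooled outputs separately):

```python
def conditioning_average(cond_a, cond_b, strength=0.5):
    """Elementwise lerp between two flattened conditioning vectors:
    strength * a + (1 - strength) * b. This is the gist of what a
    conditioning-average node does under the hood."""
    return [strength * x + (1.0 - strength) * y
            for x, y in zip(cond_a, cond_b)]

# Toy 4-dim "embeddings" standing in for two artists' conditionings:
artist_a = [1.0, 0.0, 1.0, 0.0]
artist_b = [0.0, 1.0, 0.0, 1.0]
print(conditioning_average(artist_a, artist_b, 0.25))
# [0.25, 0.75, 0.25, 0.75]
```

Note the token-alignment problem anon mentions: the lerp is positional, so if the two artist prompts tokenize to different lengths, unrelated tokens get averaged together.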
>>
Anima is weird. It doesn't seem to recognize tons of popular artists, and at the same time I think it's the first model that knows Hanabi (Starmine18) out of the box (which throws the anatomy out of the window though) https://litter.catbox.moe/g02msogpkkatierf.png
>>
File: ComfyUI_temp_cjtpb_00024_.png (2.5 MB)
2.5 MB PNG
anima is really good but the lack of artist mixing is a bummer.
>>
>>
File: conditioningaverage.jpg (398.1 KB)
398.1 KB JPG
I don't think ConditioningAverage does anything meaningful
>>
>>8828556
It has recognized literally every artist I've tried that Noob also knows. Make sure you have @ in front of the tag, and convert underscores to spaces. Make sure you escape parentheses in Comfy. For artists that have different spellings between Danbooru and Gelbooru, use the Gelbooru spelling. All these things can completely fuck up the artist recognition if you do it wrong.
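The conversions listed above are mechanical enough to script. A hypothetical helper (not from any real userscript) that applies them in order, assuming the Comfy convention of backslash-escaping parentheses:

```python
def to_anima_artist_tag(danbooru_tag: str) -> str:
    """Normalize a Danbooru artist tag for Anima prompting:
    underscores -> spaces, escape parens for Comfy, prepend '@'."""
    tag = danbooru_tag.replace("_", " ")
    tag = tag.replace("(", "\\(").replace(")", "\\)")
    return "@" + tag

print(to_anima_artist_tag("hood_(james_x)"))  # @hood \(james x\)
```

It won't catch the Danbooru-vs-Gelbooru spelling differences, that part you still have to check by hand.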
>>
File: ComfyUI_temp_yonaq_00018_.png (852.2 KB)
852.2 KB PNG
>>8828561
this is it mixed normally
>@hood \(james x\), @ahonoko
which looks pretty okay so maybe artist mixing isn't a complete disaster
>>
I spent 40 usd on a NSFW app just to make shitty hentai images of some girls I know.
Fuck
>>
File: 1760409220829094.png (71.4 KB)
71.4 KB PNG
>>8828680
my nigga playing tibia
good for him
>>
File: Princess Solange de Cumslut.png (3.6 MB)
3.6 MB PNG
She was meant to be the crescendo of the parade of the newly captured, but it seems the stagehands had had their fill of her before she was even revealed.
>>
>>8828758
That's obvious: names are very useful for describing who goes where and does what to who. What I'm more curious about instead is how are you supposed to prompt nameless non-characters you made up. They don't have names so how do you address them in the NLP portion of the prompt? You can get away with "the boy" and "the girl" if it's just 1boy, 1girl, but what if there are two girls?
>>
>>8828808
>vibe transfer/ip adapter
Dogshit slop snake oil, what we need are more generalist reference images that could be used for more than just styles, mainly for characters, but also for concepts (poses, items, etc).
>>
>>8828817
>doesn't do anything
There are two ways to achieve this:
>actually do nothing
>try to do something and fail so spectacularly nobody will ever use it
IPA is the second. Both are snake oils, just that the second one is by choice rather than by nature.
>>
>>8828820
Have you tried the noob trained one? It's still pretty cool, I use it on occasion. For transferring weird background designs or as added style bias on top. I recall cathag mentioning it too a couple times.
It's just not accurate enough to fully capture a style, and nowhere near enough for characters.
>>
>>
File: ComfyUI_temp_smdiz_00005_.png (1.1 MB)
1.1 MB PNG
>>
we will never be graced by epstein levels of devotion to a single general endlessly highlighting our posts again, or the accompanying instructions to escort discord users off the board with blocks of conveniently deleted posts like someone was trying to tell us something
>waow
>why seethe
>decline
no one will ever wear that crown the same
but there must always be
a highlights
>>
File: actual.jpg (1.2 MB)
1.2 MB JPG
*smooch*
>>
File: let the madness unfold.jpg (1.4 MB)
1.4 MB JPG
>>
>>8828840
>how did almighty and greatest folks at NovelAI corporation figured it out then?
like you said, they didn't.
Also I haven't used anima at all because I'm not bothering with cumrag shit but a few things to point out.
First, you should be checking one artist at a time, no mixing. then you should add an artist and see what/how much changed. then swap to a different artist, then add an artist, etc. Point is, going straight to a mix doesn't really help or tell you anything.
Second, the "mix" isn't exactly something tangible to begin with. Any and all artists are going to be heavily impaired when trying to just replicate from the model and not from a lora. And "mixing" them is nebulous and unpredictable until observed(read: you will have no idea what it looks like until you know what it looks like) and that's on a per model basis to begin with.
either way the "solution" is that with anima at least you can just train a lora. of course, you're going to need to have enough "high quality" output images to form a dataset but surely if you care that much you've spent the time cleaning, fixing errors and unslopifying your gens, right?
>>
how about something simple to cope with artist mixing? using prompt scheduling doing something like
[(@artist1:0.75), (@artist2:0.95)|(@artist1:0.95), (@artist2:0.75)].
alternating the weights between artist tags with each step
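What the `[A|B]` alternation syntax actually does is swap which variant conditions each denoising step. A toy sketch of the schedule (hypothetical weights copied from the example above; the real scheduler lives inside the UI's prompt parser):

```python
def alternating_prompt(step: int) -> str:
    """Emulate [A|B] prompt alternation: even-numbered steps are
    conditioned on the first variant, odd-numbered steps on the
    second, so each artist dominates half the denoising steps."""
    variants = [
        "(@artist1:0.75), (@artist2:0.95)",
        "(@artist1:0.95), (@artist2:0.75)",
    ]
    return variants[step % 2]

print(alternating_prompt(0))  # (@artist1:0.75), (@artist2:0.95)
print(alternating_prompt(1))  # (@artist1:0.95), (@artist2:0.75)
```

Whether the model averages this into a coherent mix or just flickers between the two styles is an empirical question, which is the whole point of trying it.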
>>
File: anima_00892_.png (1.3 MB)
1.3 MB PNG
>>
>>8828948
I haven't actually looked at the clusterfuck of anima and what vram usage actually looks like but the model is only 4.18GB and the TE is 1.19GB with a 256MB VAE.
so I would guess maybe? probably? at least batch 1 should work? worst case it might oom by some small enough amount that it'd still be feasible to do a train overnight. No idea how unet only training on it would work in terms of function but that would also improve the chances of it working, probably by a lot.
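The back-of-envelope math on those component sizes, for what it's worth (the 4x-weights rule of thumb for a naive full-precision AdamW finetune — weights + gradients + two optimizer moments — is a rough floor, before activations; actual LoRA training needs far less):

```python
# On-disk component sizes quoted above, in GB.
model_gb = 4.18   # main diffusion model weights
te_gb = 1.19      # text encoder
vae_gb = 0.256    # VAE

# Just holding the weights in VRAM:
weights_total = model_gb + te_gb + vae_gb
print(f"weights alone: {weights_total:.2f} GB")

# Rough floor for naive full AdamW finetuning of the main model
# (weights + grads + two moment buffers at the same precision):
print(f"naive AdamW floor: {model_gb * 4:.2f} GB")
```

Which is why "at least batch 1 should work" on a mid-range card is a plausible guess: the weights themselves are small, and everything past that depends on optimizer choice, precision, and what you freeze.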
>>
File: 00146-1517456864.png (3.4 MB)
3.4 MB PNG
>>
>>8828968
And you're missing out on potentially tens of thousands of mixes, if you have to spend time generating the dataset then training for each one.
I'll still do it though, got a pair of mixes I've been dragging along since early NAIv3.
>>
This question has probably been asked a dozen times by now, but what are the (((optimal))) settings for Anima? Anything other than er_sde hurts the artstyle, and anything other than simple DRAMATICALLY hurts prompt adherence in my experience. What about the tags? I'm currently using "masterpiece, best quality, highres, very awa, score_9, absurdres, @artist" and THE schizoprompt for negs
>>
>>8828985
i like the full monty of pos/neg but no pos/neg is also pretty good for artist replication. and i'm using the same thing as always, 3m sde with high steps (50) and it works fine. you don't need the ratings tags ever, it gets them from context, if you do 1:1 with explicit and nothing it will look basically the same and the other ratings tags either have minimal or way too big and unpredictable effects. artist tags in negs are extremely powerful so be careful.
the most important negs by far are the two non-anime datasets: ye-pop (photos) and deviantart (deviantart). if you don't have these in your negs you are fucking stupid. everything else is negotiable.
like, both of these are kinda correct and kinda accurate to the artists prompted, partially, but they look quite different
https://files.catbox.moe/4qde6a.png
https://files.catbox.moe/tz9avo.png
>>
>>8828840
4.5 can look alright, it's just that it doesn't look good by default unlike v3 and it's mostly used by completely clueless people. Artists do mix but the mixes are unstable depending on the prompt. It's still kinda better than anima mixing (although I only prompted anima for like 2 hours or so. Trying shit like prompt scheduling might be worth a try later)
>>
File: anima_01185_.png (1.5 MB)
1.5 MB PNG
>>8829034
hilarious and original
>>
File: 00052-1835812201.png (2.9 MB)
2.9 MB PNG
This turned out pretty good.
>>
>>
File: 00054-3846737727.jpg (726.8 KB)
726.8 KB JPG
Another one
>>
File: 1671492803314331.jpg (33.5 KB)
33.5 KB JPG
I thought claude was some crypto bro meme but so far I
>made a chrome extension that lets me fully automate my novelAI set creation
>can give it a bunch of images from a previous set I made and just swap the character prompt out
>Made an extension that lets me streamline censoring and watermarking images
I may actually be able to triple my grifting with this and just make two spinoff patreons with different styles and nobody would be the wiser.
>>
File: ComfyUI_temp_nrhgx_00001_.png (2.7 MB)
2.7 MB PNG
>>
>>
>>8829291
Enjoy your pure anime btw. Properly tagged and filtered e621 would have no negative effect whatsoever, but that would make the schizo angry regardless. People have ptsd from pony, but that piece of garbage wasn't an anime model to begin with, it was a sepia cartoon porn model.
>>
>>8829342
and anima is already contaminated with deviantart and ye-pop (whatever the fuck that is) for some reason, those two shitholes literally add nothing but poison the dataset. at least e621 makes the nsfw part better.
>>
File: Insatiable Solange.png (3.8 MB)
3.8 MB PNG
>>
File: Princess Press.png (2.2 MB)
2.2 MB PNG
What pose/position to do next?
>>
>>8829406
base illustrious still had some sort of leftover sdxl knowledge and also didn't have danbooru realistic pics thoroughly removed like it happened with nai. honestly I really liked illu 0.1 aesthetic even if it was mostly only useful for 1girl.
>>
File: 1759566544451611.png (2.5 MB)
2.5 MB PNG
>>8829224
But plain fucking is boring.
>>
File: anima_01524_.png (1.3 MB)
1.3 MB PNG
>>
File: 1757690221380893.png (296.8 KB)
296.8 KB PNG
Hold the fuck up
Anima doesn't have the e621 dataset? Into the TRASH it goes.
Wake me up when NoobAnima (Full) releases.
>>
>>8829623
Anything other than er_sde @ simple produces terrible results for me, but I'm 70% sure that it's a prompt-related skill issue on my part, I'm still experimenting with it
>>8829627
50 steps with this thingie at default values https://github.com/Jasonzzt/ComfyUI-CacheDiT is faster than 30 steps without
>>
>>
>>8829692
huh? cachedit spits out errors for me when i try to use it with anima. Also have any of you found a good upscaling solution? straight latent upscaling gives me grid like artifacts when used with er_sde or euler a
>>
>>8829695
>straight latent upscaling gives me grid like artifacts
It's not just latent, it's any time the model does img2img at over 2 megapixel res.
I use classic SD1.5 era tiled upscaling >>8826986, this way it works with 1mpix squares one at a time.
Refreshed catbox https://litter.catbox.moe/blx4e9ui79wgdmyz.png
Would be a lot easier sharing knowledge if we were all in one thread.
Multidiffusion >>8829725 is a newer version of the same thing, does it work now? Last time I checked it was not yet updated for the new Qwen stuff that Anima uses, and would just crash.
>>
File: 00058-4160158878.png (3.6 MB)
3.6 MB PNG
Why are we on page 6 wtf
>>
>>8829778
It makes generating more involved character interactions significantly easier than trying to tardwrangle SDXL and CLIP for sure, and if you don't like the style it produces or want to use your sikret special artist mix you can just use the anima gen as a CN for Noob. The problem is that genitals are not as good as noob due to no e621 and it's missing those extra specific tags too which aren't easy to replicate with NL.
>>
File: 00086-560191612.png (813.1 KB)
813.1 KB PNG
>>8829935
I was just fooling around with it and the speech bubble, the juices and the visible pussy lips were purely from the description, no tags at all. That said it also seems to be quite literal, if you don't tag or describe a background it won't generate one, it's also much more susceptible to shitty tags
>>
>>
>>8829935
mixing artists doesn't really seem to be all that difficult. You can just slap in @ARTIST, @ARTIST and it'll do something, and do things different if you append them with strengths. It's a lot more noticeable with vastly different artists' styles of course. it's definitely a lot fucking cleaner with what it gens though, it definitely does a better job of masking that kinda smudgy vectorized look of other models, kinda shocking that this is a smallish preview model
>>
Is it too late to add e621 and shit to improve the model's NSFW performance? Is the preview a snapshot of the full model from mid-training or more of a proof of concept and rather than be a direct continuation, the full model will be trained from scratch, potentially with an expanded dataset?
>>
Everyone whining about anima being terrible is just wrong. What people don't get is that the quality of hands, anatomy, genitals or x is highly dependent on what artist/style you're using. As a monster girl genner this model is miles better than the shit I could gen with Noob/Illu.
https://litter.catbox.moe/st4rl1q0fd7b9mfx.png
>>
>>8829968
very funny, but actually i will note that closeups in general are very good on anima. and oral in general. illu has some serious problems differentiating a deepthroat and something else but this model doesn't.
>>
>>8829965
yeah the misalign thing does happen but it's not that bad, as for detailed genitals I dunno, cocks look like cocks whether the gens have them censored or not, gaping vaginas are kinda silly looking as it doesn't seem to render cervixes well but it does well enough for your standard penetration. Tentacles decidedly generate penetrating inward as opposed to pushing outward from orifices like I used to see a lot of, although it does need some prodding to generate monster tentacles over squid tentacles
>>
File: 20260211064424-2482976829.png (1.2 MB)
1.2 MB PNG
>>
Quick browse through the thread (hey, actual discussion for once) so if I read everything correct
>every Forge but Pancho's now supports Anima?
>base anima gens can only kinda somewhat be upscaled through boomer SD Upscale?
>>
File: 00332-1412630908.png (3.6 MB)
3.6 MB PNG
>>8830096
What do I have to do with that dogshit place?
>>
File: 00043-90740503.png (2.2 MB)
2.2 MB PNG
>>8830107
I also post hags wtf
>>
File: 00065-4068565420.png (3.5 MB)
3.5 MB PNG
>>8830112
So it's not about the content but the style? lol
>>
File: 00027-3715350491.png (856 KB)
856 KB PNG
>>8830099
does this look young to you
>>
File: 1740337329509328.png (1.3 MB)
1.3 MB PNG
here is your 1girl missionary pov slop sirs please calm down
>>
File: 1752682139452102.png (1.9 MB)
1.9 MB PNG
Did the anima guys say anywhere if they are planning to train CNs for their model?
>>
File: {8F8ED6FA-B19A-4E63-8221-2668C1CD35EE}.png (27.8 KB)
27.8 KB PNG
where can i get the optional t5 stuff? will it make training better?
>>
>>
File: 1757271711095476.png (2.5 MB)
2.5 MB PNG
>>
File: 00054-439229568.jpg (802.7 KB)
802.7 KB JPG
>>8830490
Yes I did
>>
It's weird how on anima some characters will not register unless you add some specifier to their appearance. Like kanzaki_ranko on its own gives me a nondescript girl but if I add even two or three related tags she gens her spot on.
>>
File: 20260212061804-1868829929.png (1.5 MB)
1.5 MB PNG
>>
File: 00079-476997217.png (3.6 MB)
3.6 MB PNG
Chat, should I buy muse dash with the dlc or pump it up rise?
I have played both before and I like both
>>
File: 1744433532087316.png (1.9 MB)
1.9 MB PNG
>>8830803
You mean Ultimate SD Upscale? It's in the Scripts menu at the bottom.
>>
>>8830778
score_9 score_8 score_7 masterpiece best quality newest recent
worst quality low quality score_1 score_2 score_3 old early blurry jpeg artifacts sepia
testing these now since i am no longer forced to use a DOGSHIT ui without convenient x/y plotting
you should always neg out ye-pop deviantart, that's easy to see
>>
>>8830783
>>8830807
I only use
>best quality, masterpiece
in that order
best quality because it had minimal impact on style but gave more consistent anatomy, and masterpiece after because otherwise it would overpower composition and style but going after best quality it only adds a bit of flair
don't use score tags, they're fucked and will give you shit shading and just make everything more 3D looking
>>
>>8830838
there's no practical difference between score tags and the NL quality tags, they are associated with the same images.
just did an x/y on negs and some of them totally rape the quality while the ones you'd expect (score 1 and old) are good
https://litter.catbox.moe/vxud24atlpn0vnbx.jpg
https://litter.catbox.moe/j43erhb686v91712.jpg
>>
>>
>>8830842
>there's no practical difference between score tags and the NL quality tags, they are associated with the same images
Unlikely to be true, masterpiece and best quality and so on are based on the image's score on danbooru while the pony score tags used astra's aesthetics rater
If you want to see the real difference between them don't prompt any artists.
File: image(1).jpg (1.1 MB)
1.1 MB JPG
>>8830851
Here's a really simple test, all images include highres and absurdres
no quality tags, masterpiece, best quality, score_7, score_8, score_9, score_9, score_8, score_7
it is what it is
https://litter.catbox.moe/p3zvyoa63gsyn16b.jpg
File: finetune.png (23.1 KB)
23.1 KB PNG
>>8830714
30k steps
>>8831025
best quality and masterpiece both have a very high impact on anatomy quality and coherence imo, higher than on almost any other model i've used
i usually like to stop using quality tags once i've trained a lora i like but i'm having trouble doing that with anima
File: 00009-2275583744.jpg (908.4 KB)
908.4 KB JPG
File: asd_result.png (1.3 MB)
1.3 MB PNG
>chat would rather train a lora on my images (again) than create a lora of chinatsu or velvet
i am so done with life
>>8831266
doesn't have eps grey or vpred fry
dataset is updated to early 2025
prompt comprehension is good due to qwen vae
artists are accurate
what's not to like?
only problem is it's a 512x preview version so extremities can be kinda bad, and it's undertrained especially for characters
>>8831277
Regional prompting by itself is a cope that's janky to use and simply doesn't work more often than it does. If you want it to be useful, you NEED either NAI's macros (target#, source#, mutual#) or NL, otherwise you're still at RNG's mercy for the actual interactions.
>>8831281
>vae
You mean UNet and text encoder. The VAE is responsible for decoding the already generated latents into full-resolution pixels, so a better VAE means better details, but it has no effect on the overall composition of the image, which depends on both the UNet (visual representations) and the text encoder (textual representations).
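As a rough shape sketch of that split (the 8x spatial factor and 4 latent channels are assumptions based on SDXL-style VAEs, not anima's confirmed config): the VAE only maps the finished latent to pixels, so composition is already fixed before it runs.

```python
# Hypothetical SDXL-style shapes: the text encoder conditions the
# denoiser, the denoiser works in latent space, and the VAE only
# decodes the finished latent to pixels (assumed 8x spatial factor).
latent_c, latent_h, latent_w = 4, 128, 128   # latent after sampling
scale = 8                                    # assumed VAE up/down factor
pixels = (3, latent_h * scale, latent_w * scale)
print(pixels)   # -> (3, 1024, 1024)
```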
How good is Anima's natural prompt comprehension? Is prompt bleeding finally solved because now you can just specify "the girl on the left has blonde hair, big breasts and wears a red micro bikini while the girl on the right has red hair, flat chest and wears a blue slingshot swimsuit"?
File: ComfyUI_Panel_26.jpg (1.1 MB)
1.1 MB JPG
As there's this discussion about regional prompting and poses and scenes: I'm having fun with the ComfyUI Panels node, though it's a bet on getting good enough gacha rolls 4 times in a row.
It works well enough with simple scenes, but add 2 very different girls for example and it becomes nightmarish to keep character consistency across scenes. I'm using Illustrious.
The original example uses Qwen and nodes that seemingly let you keep that consistency across scenes by feeding it base images of characters. I don't know much about Qwen other than it being xbox hueg; I think it's still not SDXL-anime-booru-friendly, right?
From searching on how to keep consistency with those models, it seems I'm out of luck. ControlNet/IPAdapter/somethingIDface or whatever exist, but I feel like I'll be chasing a chimera.
Is consistency the Aimeme endgame? Any tips on achieving that without it being overly complicated?
>>8831366
Just genned a batch of 4 pics, no cherrypicking. Catbox because not /h/, also I'm too lazy to combine them
https://files.catbox.moe/ye7j5s.png
https://files.catbox.moe/5uu6w3.png
https://files.catbox.moe/ovp2xm.png
https://files.catbox.moe/ere8al.png
>>8831415
yeah i'm trying to figure this out. the gui is currently never ever and i want to try porting my loras to anima for testing. all these .py files and scripts are really confusing, and even the doc that's supposed to explain it is empty: train_network.py guide -> dataset preparation is literally file not found in the docs. guess i'll wait for more guides or someone passing a full config for this .py thing.
>>8831419
it's actually quite shrimple: you just make a text file with the extension .ps1, put all the arguments and shit in it, then open it in powershell and run it. the toml is just another text file, same shit
>>8831726
>natural language and newer dataset are convenient imo, but you could achieve the same with loras and controlnet
Yeah, but it's great qol and reduces how often you have to reroll to get certain specific things that aren't well covered by booru tags.
File: ComfyUI_temp_xybgr_00001_.jpg (498.3 KB)
498.3 KB JPG
i tried to do an inset with gendou's face in it but surprisingly noob can't really do his face
>>8831772
Any findings? I've been jumping between ER SDE and DPM++ 2M, both with SGM Uniform. I seemed to have fewer finger/toe issues with Euler A++, but it does indeed lean a bit towards that 2.5d look. Not overwhelming but notable. Of course I'm on Neo, so you probably have many more options on comfy; sampler options on Neo are limited compared to Reforge.
>>8831781
I remember this one https://github.com/DenOfEquity/webUI_ExtraSchedulers/tree/neg and this https://github.com/Koishi-Star/Euler-Smea-Dyn-Sampler and this https://github.com/MisterChief95/sd-forge-extra-samplers seems to be something new for forge neo.
File: 1759055035706551.png (853.2 KB)
853.2 KB PNG
>>8831868
>>8831872
I tried throwing in hyouuma into a mix myself but it overpowered all the other artists easily. So far, the prompt scheduling niggas are on the right track with anima. It's the only consistent way I've gotten mixes to 'work'. It's no noob but at least it shows something is happening and it's not just 1 artist overpowering the other 2/3.
>>8831966
accelerate launch --num_cpu_threads_per_process 1 anima_train_network.py \
--pretrained_model_name_or_path="<path to Anima DiT model>" \
--qwen3="<path to Qwen3-0.6B model or directory>" \
--vae="<path to Qwen-Image VAE model>" \
--dataset_config="my_anima_dataset_config.toml" \
--output_dir="<output directory>" \
--output_name="my_anima_lora" \
--save_model_as=safetensors \
--network_module=networks.lora_anima \
--network_dim=8 \
--learning_rate=1e-4 \
--optimizer_type="AdamW8bit" \
--lr_scheduler="constant" \
--timestep_sampling="sigmoid" \
--discrete_flow_shift=1.0 \
--max_train_epochs=10 \
--save_every_n_epochs=1 \
--mixed_precision="bf16" \
--gradient_checkpointing \
--cache_latents \
--cache_text_encoder_outputs \
--vae_chunk_size=64 \
--vae_disable_cache
the only thing i changed was disabling cached te outputs so shuffling works
I've seen conflicting information on setting discrete_flow_shift=1.0 vs discrete_flow_shift=3.0. Is there anybody who has tested both and has a clear preference? I should also test that myself and compare but anima loras take too long...
>>8832023
larger shift emphasizes noisier timesteps, meaning the model has relatively more time to learn medium-large features
shift=1.0 is technically sort of how pony/illustrious/noob/any other ddpm models were trained
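The shift itself is easy to eyeball if anima follows the usual rectified-flow convention (an assumption; this is the SD3-style shift mapping used by flow-matching trainers, not something confirmed from anima's code):

```python
def shift_timestep(t: float, shift: float) -> float:
    # Map a uniform timestep t in [0, 1]; shift > 1 concentrates
    # samples toward t -> 1 (noisier steps), i.e. relatively more
    # training signal on medium-large features. shift = 1 is identity.
    return shift * t / (1 + (shift - 1) * t)

for t in (0.25, 0.5, 0.75):
    print(t, shift_timestep(t, 1.0), shift_timestep(t, 3.0))
```

With shift=3.0 the midpoint t=0.5 maps to 0.75, so half the training budget lands on the noisiest quarter of the schedule; shift=1.0 leaves the distribution untouched, which is the closest analogue to how the old ddpm-era models sampled timesteps.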
>The composition utilizes the golden ratio to position the figure against the vast urban sunset, creating a powerful silhouette that speaks to ambition and reflection. Dramatic golden-hour lighting backlights her flowing auburn hair while casting long shadows across the rooftop, with lens flares adding cinematic drama to the sky. Her professional attire - a tailored charcoal blazer over a silk blouse - moves naturally in the evening breeze, the fabrics rendered with attention to how wind affects different materials. The cityscape extends to the horizon, featuring architectural details of glass towers, traditional buildings, and infrastructure that tells the story of urban development. The artistic approach combines architectural photography principles with character-focused narrative illustration
bros.. we've been outskilled.
>kneepit chads will never go hungry again
bros.. the mad scientist autism tagging I'd have to do to even get the penis in the vicinity of the kneepit. Now I just ask it and it gives me the full squeeze and pump experience.
https://files.catbox.moe/58en5m.png
File: 20260215071108-2017146512.png (1.5 MB)
1.5 MB PNG
File: 20260215061724-3563225648.png (1.9 MB)
1.9 MB PNG
File: 2026-02-15074723_stealthmeta.png (1.1 MB)
1.1 MB PNG
>>8832054
sure, but there's really not much to it.
[general]
resolution = [1024, 1024]
enable_bucket = true
min_bucket_reso = 512
max_bucket_reso = 2048
bucket_reso_steps = 64
[[datasets]]
[[datasets.subsets]]
caption_extension = ".txt"
image_dir = ""
num_repeats = 1
shuffle_caption = true
keep_tokens = 1
So Anima is pretty damn impressive even in its current state. I was curious if it could do artists like Fugtrup or even Nyl, as the model page states it can't do 3d well. Just tagging them alone you're not going to get far. But load up the quality slop tags, especially those score_'s, a '3d' here, a 'ye-pop' in the negs there, and you get a very impressive fugtrup and a very close Nyl. The latter is the one I'm more impressed by, as noob (base, not the shiny shitmixes) struggles with those: it can do fugtrup's 2d art but not the style used in animations. The full model with a suite of cn upscaling methods should be nuts.
>>8832281
2x ultimate sd upscale works fine as long as you don't go above 0.4 denoise. Also "out of frame" in the positives somewhat helps the model understand that the processed tile does not contain everything described in the prompt
File: 1753606564586779.png (3.7 MB)
3.7 MB PNG
>>8832282
>prompt Yunsang on NAI
>bang-on his style
>prompt Yunsang on Anima
>generic flat slop
I hope it's just the 512p training not giving enough detail.
>>8832294
Artists with less than 400 pics on danbooru are very hit or miss and, honestly, I don't think additional training will improve much beyond that. Model can only learn that much with that few parameters. I just hope it'll get better at understanding concepts, poses and angles, you can always train a style lora later.
>>8832300
>Model can only learn that much with that few parameters
It's 2b parameters compared to noob's 2.6. Nai should be around the same size if you believe kurumuz claims of it being smaller than sdxl. Also a smaller model doesn't translate to there being some arbitrary learning cutoff.
File: ComfyUI_temp_rvuba_00047_.png (1.5 MB)
1.5 MB PNG
>>8832294
You're doing something wrong if you only get generic flat shading.
it knows yunsang, it's just that since it's trained at 512x most detail is lost
>>8832329
>You're doing something wrong
I've noticed that the model is quite finicky about samplers, negs and quality tags. you have to give it a very specific baseline for it to even attempt to get a style right. just using the artist tag and cranking up the weight gets you nowhere
File: ComfyUI_temp_xqctr_00004_.png (1.2 MB)
1.2 MB PNG
>>8832331
>>8832334
I just use good old trusty Euler A, Normal, 25 steps, 4 CFG
and
>best quality, masterpiece
in that order, but you can opt out of masterpiece, I just use it for some extra flavor
and make sure to use @s, obviously, or else the artist tag just won't work at all
>>8832369
It's this dataset https://huggingface.co/datasets/Ejafa/ye-pop which itself is based on this dataset https://laion.ai/blog/laion-pop/ which is 600k ai slop images sourced from midjourney
>>8832378
>huge difference here folks
It is unironically a huge difference. It's not synthetic images, it's real internet images that roughly align with the types of things people generate on midjourney.
I don't fucking get it. People have been saying shit like "man I wish Noob was better at more traditional and painterly styles in addition to anime". Somebody actually trains a model specifically to do that, and you complain.
>>8832387
every furry I've seen complaining about anima has been trying to get the dev to stop training on deviantart and laion pop, and replace them with e621 instead. it's not the same people, furries only want e621 and nothing else because they are degenerate brainrotted gooners
>>8832397
>forget that we ruined the last 3 models bro
what models?
pony was not ruined, it was a shitslop cartoon porn model made for furries, anime was second class citizen to begin with
noob's issues were not due to e621, eps was fine, it was vpred where they started doing weird shit
what's the third one even?
>>8832403
NTA
ponyv6 was good for its time, but would have been even better if 25% of its dataset wasn't MLP and another 25% western cartoons
ponyv7 was a complete failure
noob, it's impossible to say what effect e621 had on it, but it cost $70k to train and probably would have been a bigger leap over illustrious had it not been contaminated by all the furshit
chroma anatomy is completely broken for anything other than 1girl standing because half the dataset is e621. it's seen more paws than human hands
>>8832403
>pony was not ruined, it was a shitslop cartoon porn model made for furries, anime was second class citizen to begin with
well imagine if astra wasn't a furry and a brony and wrong in the head
>noob's issues were not due to e621, eps was fine, it was vpred where they started doing weird shit
eps 1.0 was also shit. even if we pretend that the e621 dataset had no negative influence, its addition still prompted them to unfreeze the te which ruined the bake
>what's the third one even?
nai v4.5 also has a furry dataset
been playing with anima
it's better than illustrious base but that isn't saying much compared to all the fine tunes. Unironically wai is still massively ahead.
It could be a great model, but for now it's just waiting on fix #13123
>>8832427
sure, but it's probably the best example of a stable popular anime model right now.
Anima won't pick up steam until it consistently makes better images than it. It has the potential (natural language works far better in it), but it isn't there yet at all
File: 1747028973604622.png (2 MB)
2 MB PNG
>>8832469
Left is Euler a + Normal, right is ER SDE + Beta, the target artist is Yunsang. You can certainly see his shading on the left even though the style isn't 100% on point whereas the right is just generic slop completely unrecognizable as him.
>>8832468
Wouldn't say that's an absolute. ER SDE (I prefer dpm++ 2m over it, although they're similar) helps with styles that have more detail or a sketchier look. Euler A smooths things out a bit, which can help with the anatomy issues the model currently has, but you lose plenty of detail. But as I posted here >>8832282 it absolutely is the best sampler for anything leaning towards 2.5d and overall smoother textures. This anon >>8832331 is right in that currently samplers and pos/neg tags overwhelmingly influence your gen. It's nice to have the option, but it requires extra autism.
>>8832543
Some anons have mentioned boomer methods like ultimate sd upscale. Last time I tried that, ages ago, I kept getting seams all over and a few anons have complained about that with anima so I'm personally just coasting on 1.3x hires until we get some decent cn support.
>>8832549
>It looks pretty damn good minus it's still a resolution for ants in 2026
I don't care my monitor is 1080p, I only want to upscale so I can add a bit more detail to some areas, particularly eyes since I like to gen them with detailed eyelashes
>>8832549
reposting for the third time https://litter.catbox.moe/jy06t8qikipkt6he.png
from my earlier tests, hiresfix or any other img2img is fine roughly up to about 2 megapixel (1420 square) before it starts to break down
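For reference, that 2-megapixel ceiling works out to about 1414x1414 at square aspect; a tiny helper for other ratios (the 2 MP budget is just the observation above, not a hard model limit, and this doesn't snap to the 64px buckets some UIs want):

```python
import math

def max_dims(aspect_w: int, aspect_h: int, budget_px: int = 2_000_000):
    # Largest width x height at the given aspect ratio that stays
    # within the pixel budget (rounded down).
    scale = math.sqrt(budget_px / (aspect_w * aspect_h))
    return int(aspect_w * scale), int(aspect_h * scale)

print(max_dims(1, 1))       # -> (1414, 1414)
print(max_dims(832, 1216))  # a common SDXL portrait ratio
```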
>>8832553
Ah yeah. I remember seeing this but didn't pay much attention at first, as I was still on reforge and only made the move to Neo on Friday. I'll try to parse the spaghetti code later to see what settings you used exactly. Currently still slopping about with tags and samplers.
>>8832474
https://litter.catbox.moe/pcj7cjgsan5e6gb0.png
It does. It's just a bit creative as the model page says. Reminds me of genning with AYS on early illu/noob.
>>8832563
Genitals in general. We need the snakeoil illu penis lora to return.
>>8832565
>anime screencap
unc wildin'
>>8832577
>returning to loras
Grim. If there was one thing about noob, most styles were achievable (few exceptions sure) with tagging provided your shitmix wasn't slopped to WAI levels. It was nice deleting that giga huge pony lora folder.
>>8832647
Based anima requester.
>>8832617
It's far from a 1:1, but considering the limitations of the preview model and that it's not even supposed to do 3d, it's a decent imitation. Use the same sampler/scheduler, as those do the 2.5d look best.
https://files.catbox.moe/008fqo.png
>The composition utilizes the golden ratio to position the figure against the vast urban sunset, creating a powerful silhouette that speaks to ambition and reflection. Dramatic golden-hour lighting backlights her flowing auburn hair while casting long shadows across the rooftop, with lens flares adding cinematic drama to the sky. Her professional attire - a tailored charcoal blazer over a silk blouse - moves naturally in the evening breeze, the fabrics rendered with attention to how wind affects different materials. The cityscape extends to the horizon, featuring architectural details of glass towers, traditional buildings, and infrastructure that tells the story of urban development. The artistic approach combines architectural photography principles with character-focused narrative illustration
There you go, lil bro. Just replace the words like mad libs.
File: 2026-02-16080336_stealthmeta.png (1.8 MB)
1.8 MB PNG
File: 1758796250806712.png (689.8 KB)
689.8 KB PNG
>>8832872
>2girls
>try to describe each girl's features separately
>horrible bleeding that's only (semi)fixable with regional prompting
No. Never again. Anima is literally perfect for being able to describe the general scene and composition with booru tags and then adding smaller details with natural language.
File: 00021-547314378.png (1.5 MB)
1.5 MB PNG
>Do some idle prompting
>For some reason the dude looks mildly worried
I wonder if there are any stories out there that start as your usual can't win against cock story just for the dude to wonder if he went too far and regretting the entire thing.
File: ComfyUI_temp_xzjzs_00026_.png (1.1 MB)
1.1 MB PNG
>>8832874
Huh. Little effort too. I'm impressed. Still needs some improvements, but this is quite promising.
File: 1745331184312059.png (2.1 MB)
2.1 MB PNG
gacha whore netorare has never been so easy!
>>8832925
>it's not something that I couldn't do on current SDXL
That's true for most of the stuff you can do in anima, but as you say the difference is in consistency and in not needing to fiddle with extensions and hacks to force SDXL and CLIP to not be retarded.
File: 00040-55014921.png (1.1 MB)
1.1 MB PNG
>>8833004
The literal first result on a search engine says they are going to tighten up the safeguards due to copyright and other sensitive topics.
Man, idk if it's because I'm used to how sloppy illustrious used to be, but all the artists I use seem to have a big (and negative) impact on the anatomy rather than just the rendering
What artists are you guys using?
File: 00010-2358231038.jpg (765.1 KB)
765.1 KB JPG
>>8833070
More or less yeah but since you were willing to see any gen I took the shot
>>8833045
It really is devoid of a lot of features, and quite a few extensions I used on reforge don't work on it. Nothing major, but a few QoL things like the auto ratio dimension picker come to mind. I'm only using it for anima and reforge for everything else.
File: 00005-2516547972.jpg (385.5 KB)
385.5 KB JPG
>>8833111
Must you smirch him when he has done no wrong?
File: ComfyUI_30542_.png (1.4 MB)
1.4 MB PNG
>>8833071
Base illu isn't sloppy. Only the later versions, and merges on top of illu.
I like the anatomical impact. But score tags negate it to some degree, and you should be able to avoid it using prompt scheduling. Overall body shape is determined in the very early steps.
File: 00083-3341511986.png (852.9 KB)
852.9 KB PNG
>>8833123
>I like the anatomical impact
I do too, but some styles that worked great for me before now make it all wonky. I suppose prompt editing is the way to go.
>But score tags negate it to some degree
Any consensus on the score tag business btw? Not that this place has ever reached consensus before, but it doesn't hurt to ask
Just a heads up to my fellow reforge refugee bros. All upscalers now go in the ESRGAN folder, no matter the class. Was malding thinking it was yet another thing Neo didn't support, but fortunately did a bit of github reading and saw they merged all upscalers into 1 folder.
https://files.catbox.moe/othesi.png
File: 1771276668845279.png (1.7 MB)
1.7 MB PNG
>>8827652
FRIENDLY REMINDER THAT IF YOU USE ANY OF THESE OVERCOOKED PIECES OF SHIT, THE BEST THING YOU CAN DO IS EUTHANIZE YOURSELF AND DONATE YOUR ORGANS AND GPU TO PEOPLE WHO NEED THEM.
File: 20260217093454-1399990167.png (1.7 MB)
1.7 MB PNG
File: 20260217095316-3285918164.png (1.6 MB)
1.6 MB PNG
>>8831967
i'm having trouble copy pasting this, I keep getting this error:
At line:1 char:3
+ --vae_chunk_size=64
+ ~
Missing expression after unary operator '--'.
At line:1 char:3
+ --vae_chunk_size=64
+ ~~~~~~~~~~~~~~~~~
Unexpected token 'vae_chunk_size=64' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingExpressionAfterOperator
I tried removing line breaks, removing \, changing \ to ^, making everything in a single line and nothing worked. I also have venv activated with the green (venv) on sd scripts.
I also already made a toml from the same guy posting it, and searching posts about it turned up nothing.
>>8833571
>green (venv)
what? win11 got colors in cmd or something?
try pasting this with updated file paths: accelerate launch --num_cpu_threads_per_process 1 anima_train_network.py --pretrained_model_name_or_path="E:/sd-webui-forge-neo21/models/Stable-diffusion/anima-preview.safetensors" --qwen3="E:/sd-webui-forge-neo21/models/text_encoder/qwen_3_06b_base.safetensors" --vae="E:/sd-webui-forge-neo21/models/VAE/qwen_image_vae.safetensors" --dataset_config="E:/sd-scripts/config/dataset.toml" --output_dir="E:/sd-scripts/output" --output_name="anima_test" --save_model_as=safetensors --network_module=networks.lora_anima --network_dim=16 --learning_rate=1e-4 --optimizer_type="AdamW" --lr_scheduler="cosine" --timestep_sampling="sigmoid" --discrete_flow_shift=2.0 --max_train_epochs=10 --save_every_n_epochs=1 --mixed_precision="bf16" --gradient_checkpointing --cache_latents --vae_chunk_size=64 --vae_disable_cache
>>8833580
>>8833577
almost there, now it's giving this:
Error on parsing TOML config file. Please check the format. / TOML config_util.py:693
形式の設定ファイルの読み込みに失敗しました。文法が正しいか確認してくださ
い。: D:\kohya\sdtoml\Anima.toml
Traceback (most recent call last):
File "D:\kohya\sd-scripts\venv\Lib\site-packages\toml\decoder.py", line 511, in loads
ret = decoder.load_line(line, currentlevel, multikey,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kohya\sd-scripts\venv\Lib\site-packages\toml\decoder.py", line 778, in load_line
value, vtype = self.load_value(pair[1], strictly_valid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\kohya\sd-scripts\venv\Lib\site-packages\toml\decoder.py", line 866, in load_value
raise ValueError("Reserved escape sequence used")
ValueError: Reserved escape sequence used
thanks guys.
>>8833597
this isn't the exact error i got, but one version of sd-scripts had an error caused by the logic that's supposed to automatically swap between jp and eng messages depending on system lang; it hits the jp path and breaks. fixed it by deleting the jp text in the affected python files
>>8833627
it's giving this error:
Traceback (most recent call last):
File "C:\Users\ThisPC\AppData\Local\Programs\Python\Python312\Lib\threading .py", line 1052, in _bootstrap_inner
self.run()
File "C:\Users\ThisPC\AppData\Local\Programs\Python\Python312\Lib\threading .py", line 989, in run
self._target(*self._args, **self._kwargs)
File "D:\LoRA_Easy_Training_Scripts\main_ui_files\MainUI.py", line 189, in start_training_thread
self.train_helper(url, Path("queue_store/temp.toml"))
File "D:\LoRA_Easy_Training_Scripts\main_ui_files\MainUI.py", line 197, in train_helper
response = requests.post(f"{url}/validate", json=True, data=json.dumps(final_args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\api.py" , line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\api.py" , line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\session s.py", line 575, in request
prep = self.prepare_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\session s.py", line 484, in prepare_request
p.prepare(
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\models. py", line 367, in prepare
self.prepare_url(url, params)
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\models. py", line 438, in prepare_url
raise MissingSchema(
requests.exceptions.MissingSchema: Invalid URL '/validate': No scheme supplied. Perhaps you meant https:///validate?
I don't get it, comfy, kohya gui and pancho sd all installed and work fine, but never these.
>>8833696
that's set to true in the config, with the backend url set to blank.
and i've read the install instructions plenty of times and it still ends up with the same error. the gui seems to work until I press start training; after that, stop training changes from start and it won't work, and pressing stop gives the same error.
>>8833707
it gave this now:
File "D:\LoRA_Easy_Training_Scripts\main_ui_files\MainUI.py", line 197, in train_helper
response = requests.post(f"{url}/validate", json=True, data=json.dumps(final_args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\api.py" , line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\api.py" , line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\session s.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\session s.py", line 697, in send
adapter = self.get_adapter(url=request.url)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\LoRA_Easy_Training_Scripts\venv\Lib\site-packages\requests\session s.py", line 792, in get_adapter
raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for '127.0.0.1:8000/validate'
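Both tracebacks boil down to the same thing: requests picks its connection adapter from the URL scheme, so the backend URL needs an explicit http:// prefix. 127.0.0.1:8000 alone gives InvalidSchema, and a blank field gives the earlier MissingSchema. A stdlib sanity check (the URL is just illustrative; match it to whatever the GUI's backend field expects):

```python
from urllib.parse import urlsplit

backend = "http://127.0.0.1:8000"   # note the explicit scheme
parts = urlsplit(backend + "/validate")
print(parts.scheme, parts.netloc, parts.path)   # -> http 127.0.0.1:8000 /validate
```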
File: 2026-02-17191833_stealthmeta.png (3.2 MB)
3.2 MB PNG
File: 2026-02-17201740_stealthmeta.png (1.6 MB)
1.6 MB PNG
made an anima artist style listing
https://gitgud.io/gayshit/anima-artists-prompts
not really a replacement for the other ones since i have fewer artists, but it exists
i plan on adding a third prompt aimed more at nsfw and maybe some stuff similar to tagexplorer
looked at what i baked overnight and training on anima is great, it really picked up the finer details better than illu. gotta do more testing but it's really nice. i'm also seeing fairly minimal concept lora style influence, or maybe it's just that artists work really well in anima
>>8834042
I see.
Well, my first thought when using comfy was that I need a node to see metadata, like PNG Info or whatever it's called in Forge. The second step would be, once it's installed, how to insert/add that node into the workflow.
There has to be a compilation somewhere of essential or basic nodes covering the features the forge UI has or similar. I feel like an utter brainlet again.
How much of a difference does flash attention make in lora baking? Anima takes so long, an hour or above, where I can bake noob loras in less than 30 mins.
And holy shit, the trouble you have to go through to even install it.
>>8834044
Not sure if there is a good one, I use https://sprites.neocities.org/metadata/viewer
Many images you can also just drag into the UI. For comfy this imports the workflow, for simple A1111 stuff it will try to produce one (though it will set incorrect clip skip on SDXL, because it's saved incorrectly in webUI)
Here's mine with all the common features, labeled and annotated. Using the bare minimum custom nodes.
https://files.catbox.moe/w67rvs.png
File: {36240DC2-1018-4C04-A4B1-F0D562CB3BBF}.png (168.9 KB)
168.9 KB PNG
>Ask claude to put @ in front of the artists in the auto complete csv
>this nigga builds an entire app including drag and drop file upload
pretty sick ngl
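If anyone wants the same tweak without the vibecoded app, it's a few lines of stdlib. The column layout below (tag name first) is an assumption about the autocomplete csv, and the sample rows are made up, so check your own file first:

```python
import csv
import io

# Made-up sample rows standing in for the autocomplete csv.
src = "kantoku,1,120000\nmika_pikazo,1,95000\n"
out = io.StringIO()
writer = csv.writer(out, lineterminator="\n")
for row in csv.reader(io.StringIO(src)):
    row[0] = "@" + row[0]          # prefix the artist/tag column
    writer.writerow(row)
print(out.getvalue())
```

For a real file you'd read from and write to disk instead of StringIO, keeping a backup of the original csv.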
>>8834233
What does it even mean that it's 512x?
Btw I'm not sure the inconsistency is a defect per se. When it comes to style mixing, for sure, but when it comes to a single artist I think the inconsistency is a byproduct of how good the prompt adherence is.
Y'know how in lora training, if you feed the model several works by the same artist but in different artstyles, it'll produce a bad lora, so the goal is to give it only one consistent artstyle? With how well the model sticks to a tag, I think it acts kinda like that. So artists that have several distinct artstyles will get inconsistent results, or artists constantly tied to the same franchise (like the big ones: kubo, oda, kishimoto, toriyama), or ones with enough posts to have their development as an artist recorded.
>>8833834
oh https://thetacursed.github.io/Anima-Style-Explorer/index.html changed the prompt but now it only has a blank white background so you cant see how well the artists do backgrounds
also he has a fucking donation link at the top.
>>8834243
trained at 512x512 instead of 1024x1024, so it's missing small details during training, though it's still better off than SDXL because of the improved vae; that's also why it has trouble upscaling
dont quote me on this though, i dont finetune shit
>>8834251
I mean it can hiresfix up to 8K, as long as you keep the denoise low so it doesn't have enough leeway to fuck up anatomy. I actually downloaded based64 again just to check. Anima starts breaking apart at 2mpix and makes square artifacts, regardless of how low your denoise is.
>>
File: 1742062912322072.png (1.4 MB)
1.4 MB PNG
>>8833834
Mostly finished vibecoding the site for this.
https://anima-artists-prompts-957be3.gitgud.site/
You can search for artists and if you click on an image it copies the artist name to your clipboard.
It doesn't have a donation link so it's at least 0.3% better than the other dude's.
>>8834324
It kind of resembles his style if you add the "one piece, one piece card game" tags but that bakes in the card game logo obviously.
>>
File: ComfyUI_temp_vgobf_00006_.png (1.8 MB)
1.8 MB PNG
I miss artist mixing
>>8833834
>>8834367
>i plan on adding a third prompt more aimed to nsfw
would be great
I think there is a lot more to a style than just the "style": an artist's style affects so much more, like how poses are set up, composition, anatomy, and body types
I think what the other guy has done, with just a girl facing the viewer on a white background, is a mistake, as it purely showcases the base style and nothing else. It's not very useful unless you're only interested in how an artist draws faces
>>
File: 1759778199257914.png (1000.6 KB)
1000.6 KB PNG
>>8834396
yeah, I didn't like either of his choices for representative prompts. I tried to make something a bit more general with the rosemi prompt, showing full-body proportions, facial proportions, and backgrounds all at once, while the minimal prompt shows what the artist is most naturally biased to output
I was thinking of something like this for the nsfw images, but this pose doesn't show anus, so I'll have to workshop it
also, you can mix artists, it's just really wonky. it feels like bringing a weight up is more effective when mixing than bringing a weight down
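for the record, the mixing itself is just string formatting if you're on a1111-style emphasis syntax; toy helper (artist names and weights here are placeholders, not a recommended recipe):

```python
def mix_artists(weights):
    """Build an A1111-style weighted prompt fragment like
    '(artist a:1.2), (artist b:0.8)' from a {name: weight} dict.
    Weight 1.0 is left bare since that's the default emphasis."""
    parts = []
    for name, w in weights.items():
        parts.append(name if abs(w - 1.0) < 1e-9 else f"({name}:{w:g})")
    return ", ".join(parts)

print(mix_artists({"artist a": 1.2, "artist b": 0.8, "artist c": 1.0}))
```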
>>
File: 20260219101048-3815855533_cleanup.png (2.1 MB)
2.1 MB PNG
>>
>>8834367
You should add sorting or filtering by post count imo
But it's pretty good
>It kind of resembles his style if you add the "one piece, one piece card game" tags but that bakes in the card game logo obviously.
Yeah, I think that artists that are heavily associated with copyright tags don't work properly
>>8834396
I miss it too, haven't found a single artist that fulfills all I want
>>