Thread #108585471
File: adt.jpg (362 KB)
Previous: >>108518256

>UIs to generate anime
ComfyUI:https://github.com/comfyanonymous/ComfyUI
SwarmUI:https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic:https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next:https://github.com/vladmandic/sdnext
Wan2GP:https://github.com/deepbeepmeep/Wan2GP
InvokeAI:https://www.invoke.com/

>How to Generate Anime Images
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io
https://making-images-great-again-library.vercel.app/
https://neta-lumina-style.tz03.xyz/

>Output cleanup
https://rentry.org/RemovingDiffusionGunk
https://www.mediafire.com/file/vipr23exc5htmnt (batch processing python script)

>Generating Anime Videos
Guide:
https://rentry.org/wan22ldgguide

>Anime Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
https://openart.ai/workflows
https://www.seaart.ai
https://www.liblib.art/
https://rentry.org/adtsampler

>Anime Misc
Local Model Meta:https://rentry.org/localmodelsmeta
Share Metadata:https://catbox.moe|https://litterbox.catbox.moe
Img2Prompt:https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Samplers:https://stable-diffusion-art.com/samplers
Txt2Img Plugin:https://github.com/Acly/krita-ai-diffusion
Online metadata viewer SD/NovelAI:https://spell.novelai.dev
Catbox/Metadata Userscript: https://gist.github.com/catboxanon/ca46eb79ce55e3216aecab49d5c7a3fb

>Inpainting Guide from an Anon
https://files.catbox.moe/fbzsxb.jpg
>>106520607

>Neighbours
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/gif/vdg
>>>/d/ddg
>>>/d/dddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>>>/r9k/aiwg/
>Local Text&Image
>>>/g/lmg
>>>/g/ldg
>>>/vp/napt
>Cloud Text&Image
>>>/g/aicg
>>>/g/sdg/
>>
>>108585471
im only here in this thread because i saw the OP image and he is in my /cm/ folder

allow me to bless/curse this thread with /cm/-tier anime ai generated pits
>>
>>108585459
>stop using latent upscale
I am using RealESRGAN_x4plus_anime_6B.
>>
>>108585809
It definitely just sounds like your denoise is too high. I usually use 0.2 but it depends on your sampler and scheduler.
>>
>>108585508
Can I get the prompt for this?
>>
>>108585830
Since Anima 3 came out you have been stuck with the same style and that dumb bean mouth. How much longer are you going to keep blessing us?
>>108566483
>>108552895
>>108551815
>>
>>108586117
Artist tag sir
>>
>>108586190
@racun
>>
>>108586160
omg are you... a fan?
>>
>>108586254
Thank you sir, did you use anima sir
>>
>>108586117
>>108586392
>>108587146
Nice Anima gens
Shame the dev only posts in /ldg/ so anything we post in any anime general will never be seen by him...
>>
>>108586392
So cutesy cute with that facial expression and pose, meanwhile she's on her knees bottomless and my cum is running down her thighs. You can post in /hgg/ if you know what I mean.
>>
Chat...
>>
>>108587681
what
>>
>>108587681
huh?
>>
>>108585471
How many images should I use for training a character LoRA?
I've always used 120 images, mostly genned with another model
>>
>>108586160
Tragedy! Can you imagine? Poor real artists are stuck with the same style for their whole life...
>>
>>
stole >>108577379
>>
>>108590717
now imagine if everyone in this thread commissioned that artist only and nothing else
>>
>>108590961
make more please im gooning rn
>>
>>108586392
>>108587649
OK you didn't seem too enthusiastic about my vision so I did it myself:( >>>/h/8859439
Might need another attempt, it's a bit off...
>>
i love my forehead wife, seki hiromi
>>
>>108591543
I tried it but I wasn't really happy with the results. I don't think I like the concept that much. I'd prefer instant loss or a suggestive image that doesn't actually show anything.
>>
File: 00001.png (3 MB)
What am I doing wrong, and why does it look all stretched out and smudgy? After txt2img I put it through img2img at 0.35 denoise for a second pass in neo reforge, then send it to extras with the 4x-AnimeSharp upscaler at 2.0x. I have never had this problem with comfyui...
>>
i wanna start doing gens, know nothing about it tho
tried comfyui for amd but i cannot get it running (kept getting errors, and after trying many fixes i gave up)

sooo, any recs for alternatives? which of the listed UIs is best?
>>
>>108591801
You are doing it backwards. First you do the AnimeSharp in extras, then you do the img2img.
>>
>>108591838
So first is text2img, then extra and then img2img? Oh...
>>
>>108591850
The big difference that upscale models give is in sharpness. You can just use lanczos to upscale, but it will give you softer edges unless you crank up the denoise. An upscale model gives you more starting sharpness before the sampling so you don't need to deal with the issues of high denoise.

>>108591814
I think everyone's using either comfy or forge neo. AMD setup might be more complicated across the board. It's not that bad to set up manually with pyenv+venv if you know what you're doing, but otherwise it's a pain in the ass.
>>
The only thing I find annoying in ComfyUI is that I can't have several versions of a prompt. I wish I could switch back and forth quickly, or at least comment out parts of the prompt that I don't want anymore (but may use later), etc.
Is there a fix for this? I'm new, so there probably is and I'm just too dumb
>>
>>108592142
you can use multiple prompt nodes and connect the one you currently want.
>>
>>108577379
It's out :D
https://civitai.com/models/2385278/animayume?modelversionid=2798563
>>
>>108592167
sweet. i'll run it rn
>>
>>108591925
I tried doing it and it has weird patterns and artifacts after img2img.
>>
>>108592230
It's anima. Don't go too far above 2 megapixels with it. If you want higher, use mixture of diffusers or ultimate sd upscale.
>>
>>108592142
I used to use the styles selector node from comfyui-easy-use, but it broke so I made my own prompt node (that now needs to be reworked for anima). >>108592152
is an easy way to do it if you don't want to rely on a node pack.

You can use comments pretty easily with a regex replace node. You can turn this into a subgraph with a clip input and conditioning output and it'll function just like a normal clip text encode node.
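As a sketch of what that comment-stripping regex can look like (the `//` marker and the cleanup passes are assumptions; match whatever syntax you configure the node with):

```python
import re

def strip_prompt_comments(prompt: str) -> str:
    """Drop //-style line comments from a prompt, then tidy the leftovers."""
    out = re.sub(r"//[^\n]*", "", prompt)        # remove the comments themselves
    out = re.sub(r",\s*,", ",", out)             # collapse any doubled commas
    return re.sub(r"\s+", " ", out).strip(" ,")  # normalize whitespace

prompt = """1girl, solo,
// @someartist, keeping this style for later
blue hair, smile"""
print(strip_prompt_comments(prompt))  # 1girl, solo, blue hair, smile
```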
>>
>>108592230
I used 896x1152 for the txt2img pass, then 2x upscale and finally img2img. I'll try 1.5x; must be a tile issue.
>>
>>108592286
>>108592239
Ah, it worked. Solid.
>>
>>108592286
Yeah, if it's not enough for some images, you can do 2x, but it needs to be denoised in 2 vertical tiles (or horizontal for landscape). Multidiffusion addon has mixture of diffusers method that does it well.
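The two-tile idea can be sketched like this; the 64px overlap is my own assumption, and a real tiling node (like mixture of diffusers) also blends the seam for you:

```python
def two_tile_boxes(width: int, height: int, overlap: int = 64):
    """Split a portrait image into two stacked tiles that share an overlap band."""
    half = height // 2
    top = (0, 0, width, half + overlap // 2)          # (left, upper, right, lower)
    bottom = (0, half - overlap // 2, width, height)
    return top, bottom

# A 2x upscale of an 896x1152 gen, denoised as two tiles:
print(two_tile_boxes(1792, 2304))
```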
>>
Anima is good because it's trained also on Gelbooru which is full of depraved shit, unlike Danbooru which is relatively "safe"
>>
>>108592849
peak
>>
All these years gooning to anime all my anime pics were from the threads almost never set foot on either of these
Pixiv is cool though
>>
>>108586117
Cute picture, sweetie! :3
>>
>>108594035
Yeah but nobody loves Oekaki. 34 years old status? Alone in /adt/ on 8th of january status? "Feel free to add me" status?
>>
>>108591226
idk
>>
>>108594316
Illya looks breedable, I need to do more gens of her
>>
>>108591925
What if I want to do it on arc
>>
File: 0.png (1.9 MB)
>>108593768
I wish that ai models had a hope in hell of getting the emblem right, but oh well. Cute wife, here's mine.
>>
advice for anima artist mixing?
>>
>>108594611
find an artist that looks like the style you got before
>>
>>108594445
I'm using arc right now and I'd say it's about the same (on linux). Same process for the two, use pyenv to pin a python version, install dependencies with pyenv, and install the right torch version for the gpu (rocm for amd and xpu for intel). I prefer the intel card over my old amd card because it's a bit faster and basically silent in operation, but that obviously depends on the card you're using.

I'm not sure which is better if you're on windows. Rocm wasn't an option on windows for a long time and I don't know if that's changed. I think XPU just werks, but if something doesn't just work on windows, it's a bigger pain in the ass to set up manually than it is on linux.

>>108594464
I think illustious/noob handled logos a bit better. I still inpainted/redrew them a lot, but they were closer. I can prompt for Ichika's armband in anima, but then she's turned sideways to show the whole thing.
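The main fork in that manual setup is which torch wheel index you install from. A tiny helper as illustration only; the version suffixes below are examples that go stale, so check pytorch.org before copying:

```python
def torch_index_url(gpu: str) -> str:
    """Map a GPU vendor to a PyTorch wheel index (version suffixes are examples)."""
    urls = {
        "nvidia": "https://download.pytorch.org/whl/cu124",
        "amd": "https://download.pytorch.org/whl/rocm6.2",  # ROCm build
        "intel": "https://download.pytorch.org/whl/xpu",    # Arc / XPU build
    }
    return urls[gpu]

# then: pip install torch --index-url <url>
print(torch_index_url("intel"))
```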
>>
>>108594632
are you using comfy or an a1111 fork?
>>
>>108594699
I'm using comfy
>>
>>108594611
anima shits the bed once you add more than one artist tag. it seems to be because anima doesn't use CLIP unlike older models
>https://huggingface.co/circlestone-labs/Anima/discussions/112#69d3239fdbe185d18ae3d4d4
the important part
>I do agree that artist blending is different (and worse) than SDXL, but I think this was always a happy accident of how CLIP worked and that the downsides of CLIP are not worth it.
>>
>>108595196
ah i see. that explains why i've been having trouble with it compared to illustrious
so just sticking to one artist tag would be best, huh?
>>
>>108595272
yeah one artist works fine. i usually mix 4 or 5 artist tags in illustrious so i'm a bit underwhelmed. doesn't sound like it's going to get much better in the future either
>>
can anyone tell me how to fix face/eyes with face detailer in comfyui
I have watched many tutorials and I follow them exactly and they still look like shit
or can someone share a workflow where the output's eyes doesn't look like shit so I can see what settings you used
it makes me sad generating a great image and then zooming in and seeing all that distortion and ugly artifacts on the face, kills my boner because it reminds me it's just AI slop
also I'm using Illustrious. my GPU is not powerful enough for anything beyond SDXL
perhaps the latent image resolution needs to be higher? using 1024x1024 rn
>>
>>108595196
I disagree, I think the downsides of CLIP are well worth it, because it's a lot better at conveying fundamental concepts. In an ideal world, we could just use both at the same time.
>>
>>108594611
>>108595196
skill issue
>>
>>108594611
Use prompt scheduling. In webui and its forks it looks something like this: [@style1:@style2:0.6], which switches from genning with style 1 to style 2 after 60% of the steps are done. You can also do something like [:@style1:0.6], which only starts using style 1 after 60% of the steps are done; this is handy when you have a style lora that you're trying to mix with an already-known artist.
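A minimal sketch of that scheduling logic (webui's exact boundary handling may differ slightly; this only shows how the step budget gets split):

```python
def scheduled_prompt(style1: str, style2: str, frac: float, total_steps: int):
    """Mimic webui's [style1:style2:frac]: which style is active at each step."""
    switch_at = int(total_steps * frac)
    return [style1 if step < switch_at else style2 for step in range(total_steps)]

# [@style1:@style2:0.6] over 10 steps: 6 steps of style1, then 4 of style2
print(scheduled_prompt("@style1", "@style2", 0.6, 10))
```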
>>
one thing i think might be true: the first artist in the list might be the least influential and the last one might be the most
>>
On the topic of how Anima handles artists, does anyone feel that it has a lot of randomness in seed depending on the quality differences seen in the artist's images? Like if an artist has a lot of different quality levels that they draw, the model will randomly generate one of them depending on seed. As opposed to older models that had more of an averaging effect, it felt like. So if you wanted to consistently gen a quality level of an artist on Anima, you need to select an artist that draws more consistently, or you need a LoRA.
>>
>>108597124
try score_8
>>
>>108595998
I made my own detailer nodes because face detailer has always been slow and shit for me. You can try the FastDetailer node from my pack if you'd like: https://github.com/mudknight/comfyui-mudknight-utils
>>
>>108597489
cute uma
>>
>>108598812
I need to stop browsing this thread at work, now I have an erection...
>>
>>108598867
Nice. Like a gift wrapped up so nicely it's almost a shame to unwrap.
>>
>>108596606
appreciate this. all these years and i didn't know this was a thing
>>
How am I supposed to use the anima model exactly? The safetensors file is only 4.7gb which seems to be lower than the expected filesize.
>>
>>108594611
>artist mixing
I have never done this before.
>>
>>108599980
which UI are you using? right now only forge neo and comfy support it
>>
>>108599988
Both Forge neo classic and Comfy fail to recognize the model for me.
>>
>>108599980
Read the model page bro. TE and VAE are not included and it says where to get them.
>>
>>108599995
Already put the text encoder and vae in their respective folders. Maybe my fork is just not compatible, or I'm forgetting something.
>>
>>108600093
did you actually select and enable them in their respective dropdowns on neo?
>>
>>108600171
>AttributeError: 'NoneType' object has no attribute 'unet_key_prefix'
>>
>>108597688
I love you anon. Just installed it and it's perfect and really fast. I also managed to kind of make the regular face detailer work by increasing the denoising to around 0.75 but it was still painfully slow (1400+ seconds per image compared to 250 now)
>>
>>108600291
Did you switch from SD to Anima?
>>
>>108594632
Sometimes I see a pic like that where it does a little weird thing where the neck seems to be setting up for side view but the body is trying three-quarter.
>>
>>108601210
Most models struggle with perspective in three-quarter view. Your edit seems like it's twisted her body towards the camera but I'm not sure if that's better or worse. I think the clothes just do a better job of masking it.
>>
>>108601958
I dont even look at the neck myself
>>
Which model can I use to edit an image to, say, restore a girl's ripped shorts?
>>
File: rinha.png (896.2 KB)
>>108603975
I don't either, I was more referring to the head relative to the body. Pic sort of related, but I don't mind weird anime angle perspective most of the time. It usually only bothers me in nsfw gens where a further breast appears larger than the closer one, especially when it shouldn't be visible at all.

>>108604446
Any should work if you draw and inpaint.
>>
>>108604446
If a model can draw it can inpaint. Do you mean which UI do people recommend for inpainting? I just do it in Comfy and can share a workflow if you like. A browser-based option is Llama, on Huggingface. I've heard Krita is good.
>>
>>108604446
krita is the best one but it doesn't support all the models and it has a steeper learning curve (mostly related to keeping proper colors/saturation)
>>
>>108605897
Oh nice!
>>
@khyle. works way too well lmao
>>
anyone using Spectrum for speeding up Anima? i don't know what settings are optimal
>>
>>108605897
If you have the extra time, can you fix the right hand?
>>
>>108607508
nice composition
>>
So I didn't realize until now but Forge Neo can now be installed on Linux.
>>
File: ako.png (1.1 MB)
>>
>>108607798
>>
File: file.png (1.5 MB)
>>
Occasionally I internally seethe a bit when someone appears to be running the same prompt over and over again, not talking about this thread.
Suppose my recent duplicunt behavior isn't that much better (isn't helping the variety of posts), but I feel compelled to play with others' existing works. Also this is far from a one-click-and-done job.

>>108609449
>>
>>108611071
>I know I am being a fag but you know what? I actually like it so I will keep doing it because I can't create things on my own
>>
File: ako-cover.png (985.2 KB)
>>108611071
To your point, I don't really want to post a third ako but here's a response to your edit.
>>
anyone trying out WAI-anima?
>>
>>108611792
I'm on break from making my own stuff.
>>
>>108609471
Thank you!
>>
Accidentally replied lol
>>
Anima Preview3 doesn't know what an Ouija board is.
>>
>>108615184
cute arms
>>
how good will the final version of anima be
>>
>>108615361
the hands and highres images should be better. Hopefully it's more knowledgeable about concepts and artists with low image counts
>>
>>108615361
>>108616117
>>
I got krita-ai-diffusion working again with anima and ended up not using it for this at all.
>>
>>108616913
how do you make it work with anima? there's no support afaik
>>
>>108617189
I'm just using a custom workflow with the regular model. It works well enough, but I'd rather just redraw what I don't like most of the time or redraw on the base gen and then upscale.
>>
is sd ultimate upscale the only way to upscale anima on comfy? cause i tried the old hires fix method and it kept blurring or artifacting
>>
>>108618318
You just can't upscale past 1.5x with a normal upscale workflow. You can do it in multiple stages, but I find that the results of a 1.5x upscale are the best balance of detail and generation time (compared to ultimate sd upscale which is slower and introduces seam artifacts).

There's also the highres lora that fixes higher resolution base gens and upscales, but I think I prefer the 1.5x upscale without it from my limited testing. https://civitai.com/models/2540444/anima-highresaesthetic-boost
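That 1.5x-per-stage advice can be turned into a quick planner (the 832x1216 base resolution below is just an example):

```python
def upscale_stages(width: int, height: int, target_scale: float, per_stage: float = 1.5):
    """Plan multi-stage upscales capped at 1.5x each, per the advice above."""
    stages, scale = [], 1.0
    while scale < target_scale:
        scale = min(scale * per_stage, target_scale)
        stages.append((round(width * scale), round(height * scale)))
    return stages

# Getting an 832x1216 base gen to 2x in 1.5x-capped steps:
print(upscale_stages(832, 1216, 2.0))
```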
>>
>>108618381
yea, been trying sd to upscale some stuff and it keeps putting x in y's place for no reason. i tried the old hires fix for a bit and now i'm seeing some decent improvements: it's way faster than sd ultimate and without the misplacement of stuff in the upscaled image. it's not totally perfect though (look at her right eye); might get fixed in the full release
>>
>>108618498
Nice gen. I don't expect any model to spit out a perfect image, and it also just depends on the level of scrutiny. If I was editing the picture, I'd probably fix the right side of her shorts because it looks kinda weird and baggy, but that's super easy to fix and not necessarily something you'd even notice or care about. If i'm not posting an image online or using it on something, I wouldn't care about that.

Do you mean x and y like coordinates or prompt elements showing up in different places in the image?
>>
>>108618318
the hires lora makes it stable up to 4MP, i do hiresfix after taking the upscale model's output down to 3.5MP
https://civitai.red/models/2540444/anima-highresaesthetic-boost
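The "take it down to 3.5MP" step is just an aspect-preserving area clamp. A sketch of the math; rounding to multiples of 8 is my own assumption for latent-friendly sizes:

```python
import math

def fit_to_megapixels(width: int, height: int, target_mp: float = 3.5):
    """Downscale (width, height) to ~target_mp megapixels, preserving aspect."""
    scale = math.sqrt(target_mp * 1_000_000 / (width * height))
    if scale >= 1.0:
        return width, height  # already under budget
    # snap to multiples of 8 (assumed latent-friendly sizing)
    return int(width * scale) // 8 * 8, int(height * scale) // 8 * 8

# e.g. a 4x model upscale of an 896x1152 gen, clamped back to ~3.5MP:
print(fit_to_megapixels(3584, 4608))
```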
>>
7 steps for .35 denoise seems pretty decent for hiresfixing w/ HR lora
>>
>>108618669
oh, i was talking about sd ultimate upscale. i was doing a nsfw gen, and when it got upscaled the character's hair was white so it swapped some of it with cum xd
>prompt elements
yea exactly this
>>108618892
will try this one and attach it to my ksampler pass after the upscale, thank you
>>
>>108619408
Oh okay, I think I just read that wrong. Are you saying that you would have prompt elements show up in every tile of the upscale? You could try lowering your denoise but I think that's just one of the problems with ultimate upscale.
>>
>>108619493
>show up in every tile of the upscale?
yea, i tried diff upscale models to fix it but it kept showing up even with 0.15 denoise; after that i stopped upscaling
>>
>>108619517
blank prompt is okay