Thread #737137393
GLM, Deepseek, Kimi, or Gemini Flash for filthy poorfags such as myself?
>inb4 local
I don't have the hardware to run a decent model.
>>
Not videogames.
>>
is deepseek 3 still good
that's all i've ever used
>>
>>737137479
>text adventures aren't video games
>>
>>737137526
It's okay but V4 is long overdue
>>
>>737137393
GLM is by far the best cheap option
>>
>>>/g/
>>
>>737137393
just use grok
>>
>>737137956
i use it for making sexy edits of vidya ladies. is it good for stories or will it just be as horny as I am?
>>
>>737137526
I swapped to DeepseekV3.2 from V3 0324 like a month ago

It remembers stuff I said hundreds of messages ago
I like it a lot
>>
>>737138272
If you want one that's good for creative writing use Claude. Grok is only good for smut
>>
>>737137956
Can Grok pretend to be your underage daughter? Because the one I'm using now can
This is a dealbreaker
>>
Has anyone here tried Gemma 4? How do you like it?
>>
glm 5 is pretty good I like it more than deepseek 3.2
>>
>>737138467
Yes on some days. You'd be surprised at the depravity it can go to
>>
>>737138529
I haven't seen a reason to swap off of Deepseek V3.2 with Novita as a provider
It'll literally pretend to do anything you want
I have several incest bots, loli bots and loli incest bots
>>
>>737138654
>on some days
My current model works all the time
I'm out
>>
>>737138529
31b is insanely good for a local model but it doesn't stack up to 300b+ cloud models, of course

26b4a needs abliteration for lolishit because moes are better at refusing but it's fast and light enough i'm thinking about running it 24/7 on a second card as an assistant
>>
>>737138772
It's free and very good so can't complain
>>
>>737138772
and what is it?
>>
>>737138908
I'm
>>737138695
>>
>>737137393
can I run locally with 4070ti?
>>
>>737137393
GLM 5.1 for being able to follow instructions very well. Kimi for the best prose and creativity, with occasionally very bad following of instructions.
>>
>>737138529
Really good for size and I can run it locally and not very censored
>>
>>737138994
How do you get Kimi to work? Half the messages it spits out are whining about lack of consent.
>>
>>737139337
Probably have to jailbreak it first?
>>
>>737139337
https://www.reddit.com/r/SillyTavernAI/comments/1roxt1c/freaky_frankimstein_swansong_final_kimi_k25_think/
try this preset
>>
>>737138976
You can use smaller models for sure with 12gb vram. 24b would be pushing it but may work with cpu split. Some of the better dense models like gemma4 31b would probably be very slow unless you get at least 16gb vram or quant it into lobotomy.

If you've got the ram for it you could run mixture of experts models like qwen3.5 35ba3b or gemma4 26ba4b. Those aren't as smart as dense models but are much faster when offloaded onto ram.
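Rough napkin math for the "will it fit" question, as a sketch. The bytes-per-weight and overhead numbers below are ballpark assumptions for a 4-bit GGUF-style quant, not exact figures for any specific model:

```python
# Rough VRAM estimate for a quantized LLM (ballpark assumptions, not exact).
# bytes_per_param: ~0.56 for a Q4_K_M-style 4-bit quant, 2.0 for fp16.
def fits_in_vram(params_b, vram_gb, bytes_per_param=0.56, overhead_gb=1.5):
    """params_b: parameter count in billions. overhead_gb covers KV cache/context."""
    weights_gb = params_b * bytes_per_param
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(24, 12))  # 24b at 4-bit on 12GB: False, hence the cpu split
print(fits_in_vram(12, 12))  # 12b fits with room for context: True
```

For MoE models the same weight total still has to live somewhere, but only the active experts (the "a3b"/"a4b" part of the name) are touched per token, which is why offloading the rest to system ram stays usable.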
