Thread #108616187
>paid users get rate limited
>whinge that nobody wants to pay for AI
>>
>>108616187
thats because your subscription needs to be 100 times more expensive before they start pulling a profit.
its over for the plebs. only governments and big business will be using AI
>>
>>108616204
>thats because your subscription needs to be 100 times more expensive before they start pulling a profit.
When can we expect the AI services to go bankrupt?
>>
>>108616204
I was happy to pay money. Just don't fuck me with rate limits. Not my problem if you can't make money, capitalism.
>>
>>108616187
>paying for AI
>>
>>108616290
When Discord or Facebook does
>>
>>108616204
>governments
In other words: my taxes will go to civil servants too lazy to do any real work.
Great.
>>
>unwilling to pay cost via API access
>buy a massively subsidized service with a quota
>bitch about it
>>
>>108616187
how come no one whines about whatever subscription openai/google or the chink offers?
>>
>>108616462
Running AI locally still requires you to spend money for the hardware.
>>
>>108617293
comes for free if you are a GAYMER thoughever bait
>>
>>108617318
Even a 5090 is limited when it comes to LLMs, too little VRAM. The best local models are all 70B.
Good for image gen though.
>>
>>108617370
I mean, they aren't SOTA for enterprise usage, but those local llms are still pretty fucking neat. I got one to fix and implement some stuff on some random c++ abandonware from github. I have surface-level knowledge of python at best, I can read error tracebacks and haphazardly try to fix stuff, not actually code. Getting bugs fixed and features implemented in c++ is not something I could've done in a few minutes, if I'd even been capable of it at all, but that small local model did it.
>>
>>108616204
>only governments and big business will be using AI
I miss the good ol' days when the only filter between individuals and institutions was having to interact with a retard.
>>
their free messages seem to have dried up, i just asked two or three questions and got cut off completely. chatgpt is the same, all being choked off
>>
>>108616187
LLMs are stupid expensive to operate. They're all losing money. Turns out, making a datacenter create something for $50 that your brain can do for the price of half a banana and a sip of water is a bad idea, but sunk cost fallacy rules modern tech.
>>
>>108617370
running a local model on a 5090 would far surpass anything these "state of the art" models can do, even on the lowest paid tier. they're shoveling shit out the door because they know it's expensive to run the hardware.
Go download Gemma4-31b and you can get better responses, even without a gpu, it will just be slow
>>
>>108618474
delusional
>Go download Gemma4-31b and you can get better responses, even without a gpu, it will just be slow
this is about as capable as Haiku, Anthropic's budget model
>>
>>108618497
>Anthropic's budget model
that's what I'm talking about. low tier is still complete shit and they're selling it to you like it's their best stuff.
>>
How can I get Claude to hack anthropic and steal more usage from them
>>
>>108616344
Not their problem if you want no rate limits.
>>
>>108619279
Did you know people can just be paid to lie?
>>
>>108619279
If you pay for Max it's very doable. I went from carefully judging whether a request was worth sending to Claude on the Pro plan to actively trying to figure out how to keep it busy on Max.
>>
>>108619279
>Kirkenuinely
reddit
>>
>>108619305
>200 burgers a month plan
ai is a powder keg waiting to go off and take others along with it
>>
>>108616204
100 times $0 is $0.
Based Anthropic jannies taxing paypigs to subsidize the freeloaders.
>>
File: claude.png (303.1 KB)
>>108616187
claudetrannies on suicide watch
>>
>whine that no one wants to pay for AI
Anthropic isn't complaining about that. Almost the opposite: they complain they have too many paying users to serve, and it's the fastest-growing user base in history, so they can't scale up fast enough to serve everyone.

To give you some indication, the user base has doubled every month for the past 28 months in a row.
>>
>>108616187
Haha, their logo is a butthole.
>>
>>108621032
Costs scale linearly with users; the more users they have, the more money they lose.
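Toy unit economics of that claim (all numbers are made-up placeholders, not anything Anthropic has published): if inference cost per subscriber exceeds the subscription price, every new user widens the loss, linearly.

```python
# Hypothetical numbers only: price and cost_per_user are assumptions
# for illustration, not real figures from any provider.
def monthly_loss(users: int, price: float, cost_per_user: float) -> float:
    """Net monthly loss when each user costs more to serve than they pay."""
    return users * (cost_per_user - price)

# 1M subscribers paying $20/mo, each costing $30/mo to serve:
print(monthly_loss(1_000_000, 20.0, 30.0))  # 10000000.0 -> $10M lost per month
```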
>>
>>108617370
You can run 70B on a gaymurr CPU. It needs a good CPU, but with 64 or, better, 128gb of ram it'll be fine
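Rough back-of-envelope for why 64gb is about the floor (assumptions: ~4.5 effective bits per weight for a typical 4-bit quant, ~10% overhead for quantization scales and metadata; KV cache and OS usage not counted):

```python
# Sketch of a quantized-model RAM estimate; bits_per_weight and the 10%
# overhead factor are assumptions, not exact figures for any real format.
def model_ram_gb(params_b: float, bits_per_weight: float, overhead: float = 0.10) -> float:
    """Approximate resident size in GB for a params_b-billion-param model."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# 70B at ~4.5 bits/weight: roughly 43 GB of weights, so it fits in 64 GB
# of system RAM but not in a 32 GB GPU.
print(round(model_ram_gb(70, 4.5), 1))  # 43.3
```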
>>
>>108617370
Gemma4 shits on every 70b, your info is outdated as fuck by literal weeks
>>
>>108616204
In 5 years a normal GPU will have enough VRAM to run AI and all these companies will go bankrupt.
