Thread #108657745
File: claude.png (303.1 KB)
A general where you bend the knee and pay corporations to vibe code with their overpriced slop: coding agents, AI IDEs, browser builders, MCP, and shipping prototypes with LLMs.
►What is vibe coding?
https://x.com/karpathy/status/1886192184808149383
https://simonwillison.net/2025/Mar/19/vibe-coding/
https://simonwillison.net/2025/Mar/11/using-llms-for-code/
►Prompting / context / skills
https://docs.cline.bot/customization/cline-rules
https://docs.replit.com/tutorials/agent-skills
https://docs.github.com/en/copilot/tutorials/spark/prompt-tips
►Editors / terminal agents / coding agents
https://cursor.com/docs
https://docs.windsurf.com/getstarted/overview
https://code.claude.com/docs/en/overview
https://aider.chat/docs/
https://docs.cline.bot/home
https://docs.roocode.com/
https://geminicli.com/docs/
https://docs.github.com/en/copilot/how-tos/use-copilot-agents/coding-agent
►Browser builders / hosted vibe tools
https://bolt.new/
https://support.bolt.new/
https://docs.lovable.dev/introduction/welcome
https://replit.com/
https://firebase.google.com/docs/studio
https://docs.github.com/en/copilot/tutorials/spark
https://v0.app/docs/faqs
►Open / local / self-hosted
https://github.com/OpenHands/OpenHands
https://github.com/QwenLM/qwen-code
https://github.com/QwenLM/Qwen3-Coder
►MCP / infra / deployment
https://modelcontextprotocol.io/docs/getting-started/intro
https://modelcontextprotocol.io/examples
https://vercel.com/docs
►Benchmarks / rankings
https://aider.chat/docs/leaderboards/
https://www.swebench.com/
https://swe-bench-live.github.io/
https://livecodebench.github.io/
https://livecodebench.github.io/gso.html
https://www.tbench.ai/leaderboard/terminal-bench/2.0
https://openrouter.ai/rankings
https://openrouter.ai/collections/programming
►Previous thread
>>108649511
344 Replies
>>
>>
>>
>>108657970
>>108658001
can you guys stop typing like me
>>
>>
>>
>>
File: 1747303412818023.png (41.5 KB)
>a huge block was deleted ... it's all gone
so this is the power of claude
>>
>>
>>
>>
>>
>>
>>
>>
File: decs1eYehSsTGj2pA3AkQKS92CPvjD3jFT13jK2Lua_M1K1MlQymNe9AeEBiWkn3n2_mJW59C3CXI7WVaNs4rk9S.jpg (81.6 KB)
>tell ai to make a venv and install the deps
>it dumps them into system python
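For the record, the venv dance the agent skipped is only a few lines (a minimal sketch, assuming python3 on PATH and a requirements.txt in the project root):

```shell
# Create and use a project-local venv instead of dumping deps into system python.
python3 -m venv .venv
. .venv/bin/activate
python -m pip install -q --upgrade pip
# Install deps only if the file actually exists (path assumed):
if [ -f requirements.txt ]; then
    python -m pip install -q -r requirements.txt
fi
# Prove installs land in the venv, not system python:
python -c 'import sys; assert sys.prefix != sys.base_prefix; print("venv OK")'
```

If the agent's shell session doesn't persist between commands, the activation line has to be repeated (or the venv's interpreter called by path, `.venv/bin/python`) in every command it runs.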
>>
>>
File: 1753970928632046.png (12.8 KB)
I've been scraping baseball stats for 24 hours, at least 30 hours remain holy fuck
>>
>>
>>
>>
>>
>>
File: Keep the language, remove the tranny.png (1.4 MB)
>>108658947
Also you can use Rust without having to deal with its fan club
>>
>>
File: 1752578171786287.jpg (75.8 KB)
>a vibecoding general exists
I'm poor and retarded, and I've been building a shitty app for 3 years by copypasting code blocks from Claude chat (waiting for my daily reset each time)
I'm sure there's better ways to do this kek
>>
>>108659249
If the goal is just to not spend anything, use Codex. If you have money to burn, still Codex. If you're trying to be frugal, China. If you're already rocking 16GB or more of VRAM you might look into local models, they're still shit by comparison but not useless by any means. I can't recommend anything that fits into less than 16GB yet though, not with a straight face.
>>
File: 1000020994.png (186.8 KB)
We're practically feasting, codexorinos
>>
>>108658476
Sama saw Dario get his circumcised peepee smacked by the Pentagon and decided he didn't want the same thing to happen to him. So now he has more reliable fedbux/contractorbux and Anthropic's checks to Nvidia and AMD are bouncing.
>>
>>
File: 1750830148751253.jpg (118.6 KB)
>>108659275
I actually have my own little local setup with Gemmy 4 26B, but I figured it's not nearly smart enough to code properly; I had actually started this project ages ago, back when frontier models were still VERY bad at coding, so I think I've had my fill of "here's a new and unnecessary class to put in your app, complete with hallucinated variables".
But thanks for the Codex rec, I guess. I figured all those fancy-schmancy coding apps required a subscription (and I'd rather not sink any money into a project as utterly stupid as this one, especially since I'm mostly in it for the curiosity of it, so I tend to get very liberal with tokens)
>>
When LLMs fill their context, they get worse, but what happens after compaction?
In my experience, right after compaction the models get bad, but do they recover as they fill their context again? Or is the compacted part still rotting?
What happens after a few compacts? Is the compacted part, like the 10% or so of the context it uses for that, just garbage at some point? Or can I keep using the same agent throughout multiple compactions?
>>
File: 1750710670335298.png (10.1 KB)
it's over
>>
>>
>>108659416
What model?
>>108659423
What do you scrape?
>>
>>
>>
>>
>>
>>
>>
Has anyone experimented with locally-run LLMs like Qwen3.5 "Opus distilled" now that they're quantized enough to run off a 5090?
is it acceptable vs paying for a claude code subscription?
what's your setup like, do you run it from the terminal or use a GUI like AnythingLLM or ClawCode? how many tokens/s do you get?
I'm looking into investing in a used gpu vs paying the extra usage fees on Claude
>>
>>
>>
>>
>>
>>
>>108658653
Sir you need to lock in and upskill your skill issue the AI is infalliable and its faster then humans and if you don't doing this you will be left behind
>>108659153
Dilate tranny
>>
>>
>>
File: american_entering_mecha.gif (585.4 KB)
what the fuck is happening to all these AI services?
>>
>>
>>
>>
>>108657745
Usage limits are fucking bullshit.
>already exhausted free Antigravity usage
>already used included Pro in Cursor so have to use their crappy Composer 2 model or auto.
When will this change? I wanna vibe code and fix stuff, get things going.
>>
>>
>>108659249
>I'm poor
Me too, I decided a while ago that paying for "intelligence" is actually worth it as much as paying for electricity, people need to have this perspective shift. There's few things more important than intelligence in this universe.
I have ZERO subscriptions in my life, besides AI. (and that's how you know I'm intelligent, kek)
>>
>>108660486
That's fair for you since you're probably someone that makes use of it regularly
But ultimately I only use AI to either coom (and for that I have my Gemmy LLM and my Comfy setup) or to occasionally work on this stupid little project (which ultimately also only helps with cooming)
If I had a job that required me to use AI, or if I were learning a certain skill that required it (like a language) then I'd definitely sink a lot of money into it; but ultimately I'm just too curious for my own good and I dabble with AI just because I can, not because I need to, if that makes sense.
>>
>>
>>
>>
>>108661079
things are so-so right now, it's 9 am, the day could go either way. though i guess im okay right now, but thats also only because i took some vyvanse and its affecting my judgement. i think if i look at the day objectively, things are pretty bad, as they usually are
>>
>>
>>
>>
>>
>>
>>
>>
File: dapper.jpg (744.3 KB)
>>108659673
That's honestly cool as hell knowing someone found my work useful in some way. I hope those explorations lead to something exciting.
>>
>>
>>108658630
Claude = Very smart at planning, great UI, great at execution, but you do 20 prompts and you're out of capacity.
Codex = Even better than Claude at coding and bug fixing, worse at UI, moderately good at planning (it may rush in before understanding everything and do a relatively good job, but fuck your vision; once the implementation is half done you don't want to destroy it because it works moderately well, yet you wish it had followed your plan better, so now you have to fix the fuck-up). Extremely generous with token usage.
Codex is very, very generous with usage and doesn't sperg out about limits or errors; you give it a task and it will try to do it from start to end. Claude you need to stage because it has insane limits.
>>
>>
>>
File: 1749887377996263.jpg (69.8 KB)
>>108658178
That's why you need to tell it to implement git tracking before having it do literally anything. That way, if it fucks up or accidentally deletes anything, it can review the git history and add back exactly what was removed instead of just guessing what was removed. I've noticed my agents make way fewer mistakes in their final tasks since I started doing this.
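This anon's git safety net is cheap to demo end-to-end (throwaway dir; file names and contents here are made up):

```shell
# Snapshot before the agent touches anything; recover deletions from history.
cd "$(mktemp -d)"
git init -q
git config user.email agent@example.com   # throwaway identity for the demo
git config user.name agent
echo 'def important(): return 42' > app.py
git add -A && git commit -qm 'pre-agent snapshot'
echo '' > app.py                  # agent "accidentally" nukes the code
git diff --stat                   # review exactly what changed
git checkout -- app.py            # add back exactly what was removed
grep important app.py             # the deleted line is back verbatim
```

A scratch branch or committing after every agent turn works the same way; the point is that the model diffs against history instead of guessing.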
>>
>>
>>
File: 1757792642862253.png (1.8 MB)
>>108659249
>tried out Codex
>cleaned up the code a bit and tidied things up
>at some point it nuked a couple of variables without checking if they were still used
>completely broke implementation of a small widget I had
It's obviously an enormous skill issue on my part, but I don't trust this little guy. I think I'll stick to Claude chat and get one whole class every day (at least I'll be sure it's functional)
>>
i am once again encouraging you to not use plan mode, and just have a discussion with the model and then prompt it to ask you questions
i'm currently on question 28.
you would never get that level of autism from a plan mode.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>108662092
the model should never use the checklist on its own because there's nothing prompting it to besides you
your checklist should be so big as to take several days to get through
if you need longer term planning / tech stack docs etc you should maintain those separately
what you're describing is human error
>>
>>108662016
How, I have 3 prompt templates
1. Single small change: make the changes, then write an outcome markdown file explaining what you did and what files were changed, what could go wrong and future steps...
2. Medium possible multi session task: first write a file with an explanation of the problem, add stages/phases with instructions on what to do for each, tests and so on, and keep track of progress there
3. Big feature change: Create a dedicated folder with a readme and architecture markdown files, with a subfolder for 'iterations' each iteration is a progress file and there is a template to use that tells it to keep the readme and architecture up to date as the iterations are done, the latest iteration is the active one. main .md files are kept generalized to avoid being outdated.
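That third template is easy to scaffold once and copy around (directory and file names below are guesses at this anon's layout, not a standard):

```shell
# One-time scaffold for a "big feature change" workspace.
cd "$(mktemp -d)"                 # stand-in for the repo root
mkdir -p feature-x/iterations     # "feature-x" is a placeholder name
printf '# Feature X\n\nGeneralized overview; kept current as iterations land.\n' \
  > feature-x/README.md
printf '# Architecture\n\nGeneralized; avoid detail that goes stale.\n' \
  > feature-x/ARCHITECTURE.md
printf '# Iteration NN\n\n## Goal\n## Progress\n## Update README/architecture before closing\n' \
  > feature-x/iterations/TEMPLATE.md
ls feature-x feature-x/iterations
```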
>>
>>
File: 9A03J5M9Y11DA461RAM04VZSW0.jpg (147.5 KB)
>>108661933
Plan mode is for... you know... planning. "Build mode" (or whatever your harness calls it) is for execution. It shouldn't be hard to know when to use which.
>>
>>
>>
>>
>>
>>108659463
Make sure that cookie isn't tied to your personal account. Some social media sites aggressively monitor and ban any accounts they suspect of scraping. I recommend making a throwaway account specifically for scraping and using THAT account's cookies.
>>
>>108661520
I've noticed that Kimi and Qwen models tied to OpenCode will routinely look at the relevant files/scripts I want changed or implemented, to make sure they actually know how things work. I've seen people claim the AI they use will make changes but then delete other portions entirely and break things by accident, but when it actually rereads relevant files, combined with git tracking, the rate of fuck-ups is nearly non-existent.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
File: Screenshot 2026-04-22 152256.png (48.7 KB)
>>108663370
She's modest. Mouse face emoji.
>>
>>108663265
>>108663281
In codex's case opencode also unlocks the 1M context for you without the pro sub. For some reason the feature is gated client-side, but the server still processes your long-context requests lol.
>>
>>108663281
>>108663378
thanks
>>
>>
>>
>>
>>
>>
>>
File: Screenshot 2026-04-22 at 1.21.05 PM.png (39.6 KB)
>>108663479
baka desu senpai
onions basedboy
>>
>>
File: Screenshot 2026-04-22 at 1.21.46 PM.png (34.5 KB)
>>108663760
ok so that's definitely still in place
>>
>>
>>
>>
>>
Do you have any strategies for launching subagents? Sometimes I just prompt shit like "launch 3 subagents to analyze XYZ". Sometimes I launch 5. Sometimes I just let it do the analysis by itself in the main chat. I have no fucking idea what I am doing lol
>>
>>
>>
>>108663875
I like to make custom agent files for several specific agents with different specializations and personalities, one of which is specialized in coordinating other agents and will spin them up as needed depending on the task it's given. It's fun.
>>
>>
>>108663875
For explorers I usually just look at what my usage for the week looks like; when I have enough left before the reset I just burn through it with subagents. I do try to launch more when I think they need to look at code in multiple packages, but I don't have a fixed system.
When it comes to writing code, ask an agent, or multiple agents to give you a dependency graph and to figure out what can be done in parallel for the next task.
>>
>>
>>108664015
have some examples?
>>108664016
well i like to for example launch a subagent to summarize the codebase so I don't waste 200k tokens just for the exploration phase and then continue
>>108664054
>ask an agent, or multiple agents to give you a dependency graph and to figure out what can be done in parallel for the next task.
good idea
>>108664122
codex and claude
>>
>>
Anyone else end up swearing like a trooper at their AI when it fucks up your code? I had it blast a load of working shit and now two days later it still hasn't fixed it and i'm calling it every name under the sun
>>
>>
>>
>>
>>
>>
>>
I built a TAK server about a year ago to handle ingest of a drone video feed that I've been bringing out whenever I go hiking. Recently I had Netbird handle onboarding: my friends get their phones onto my server and they can plot points where I should fly my drone to.
Now I wanna pick up some radios that use 802.11ah to see what we can push over that bandwidth, tuning it with AI to get there. It's been fun, but /out/ doesn't have any radio threads and there's no equivalent for outdoor comms on this board.
>>
>>
>>
>>108662022
>So now that Roo Code is being deprecated
The fuck are you talking about? Development has slowed, but I haven't seen anything about it being deprecated. They have a whole cloud service built on top of it.
>>
>>
>>108665056
It doesn't require a special project package or anything, you point your agentic IDE or text editor at a project directory like your local git repo and it can read all the project files for context and make the prompted changes to them from there.
>>
>>
>>108665056
same way you do it with any other code changes
instead of you editing code in a text editor, it edits code from whatever
this is totally separate (or kinda integrated with) systems that produce a binary or deploy a website or what have you
>>
>>108665056
I usually frame it as "I have this project idea that I wanna do", then I take a look at it and break it down into chunks. From there I pose the project to the AI in those chunks, piece by piece.
AI really isn't that difficult if you can project manage pretty well. When errors come back I look at them or kick it back to the AI (or to another to cross reference).
>>
>>
>>108665056
>code bases are large and complex
not all of them
bookmarklets are usually pretty small
https://css-tricks.com/tag/bookmarklet/
LLMs are great at writing bookmarklets
I wouldn’t have known that you can copy to the clipboard as HTML (not just as text) if it weren’t for LLMs
https://arcade.pirillo.com/ is also full of medium-sized self-contained web apps
>>
>>
>>
>>
I've been using codex to modify a project that is no longer maintained, but I've run out of credits. In the meantime I've been trying pi with qwen3.6 and gemma4, but both will eventually hit a loop where they get stuck thinking indefinitely. What gives?
>>
>>
>>
>>
>>
>>
>>108665560
I even doubled checked the repo before posting because you'd think that'd be the most important place to make that sort of announcement, but no.
>>108665573
Never liked CC or Codex, Kilo shat the bed, OpenCode is shit, and Cline was always missing important features like custom modes. Not looking good.
>>
File: 1745556656062983.png (612.9 KB)
612.9 KB PNG
>You've used 96% of your weekly limit
One more day remains...
>>
>>
File: Screenshot_2026-04-23_at_1.54.53_AM.png (42.9 KB)
ITS 2 IN THE MORNING
>>
>>108666773
It's alright, gotta treat AI discounts/resets/etc like crypto airdrops when they were fresh.
Also I didn't realise anyone paying for Revolut gets 100 bonus Lovable credits if anyone's banking with them
>>
>>
>>
>>
>>
File: 1764217358219511.png (110.3 KB)
>have to wait for my prompt to finish before I can put my computer to sleep and go to bed
is this how managers feel?
>>
>>
>>
>>
File: 1775580651386506.jpg (112.7 KB)
>>108657745
>Anthropic is supposedly the starved one while OpenAI has the extra capacity.
>But Anthropic have access to a 2.9 GW facility.
>OpenAI only has 1.9 GW in datacenters across their entire network so far.
Something smells very off here.
>>
>>
File: 1739850826525541.png (6.9 KB)
>>108667108
>>108667112
I looked up the 2.9GW facility anthropic has and it’s an AWS sponsored datacenter in Indiana which seems to be the “mothership” used mainly for model training compute.
OpenAI uses a similar “mothership” training farm in Texas that’s managed by Oracle. But thats only 1.2 GW.
I guess ~700MW worth of spare server space is really the difference between no show and good to go then?
>>
File: Soulver is a pretty neat program.png (31.5 KB)
>>108667218
Both are roughly tied in my usage, although I’m doing weird shit with both Claude and Codex.
A delta of 0.7 GW seems kind of big (see pic), but I don’t know what OpenAI has in the pipe
>>
>>
File: google-search.jpg (41.8 KB)
Please don't confuse your
V I B E S L O P
with my CS degree
>>
>>
>>
>>
>>
>>108665560 (Me)
>>108666216
Just so I'm not a sourcless retard
https://x.com/mattrubens/status/2046636598859559114
>>
>>
>>
>>
>>
>>
File: 1758392263921366.png (442.3 KB)
>>
>>
Oh boy, I have spent so much on AI that I have been upgraded to OpenAI Tier 4 and am now allowed to spend up to $5,000 a month on AI. The ultimate tier 5 is after spending $1k with them. Then I'm allowed to spend up to $200k a month.
>>
>>
>>
>>
>>
File: 2026.gif (395.1 KB)
I have gotten to the point where writing code feels tedious and annoying. It wasn't like this at the beginning of the year before I embraced coding with agents. It's beginning to feel like an addiction.
>>
>>
>>
>>
>>
>>
File: Untitled.png (2.2 MB)
>>108667723
>>
What characters should I assign to my code review agents? I already did the Scooby-Doo Mystery Inc. team, that worked pretty well. I did the Simpsons, it made the Bart agent in charge of security because he knows all the hacks and pranks. I've had really good success just spawning two agents, Miko who is positive and upbeat and looks for improvements, and Kimo who is negative and critical and looks for weaknesses. What do you guys use?
>>
>>
>>
>>
>>108668069
lmao. i think there's more updates coming to the app this week (even more features/bloat). not sure if it's happening the same time as the 5.5 release today or tomorrow.
the cli is fine tho. been using it since before the app.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
I just realized gpt 5.4 is not that smart, but it's very careful and strong at logic, making it very reliable
great with vibecoding from the ground up, everything just works; decent at debugging, but it kinda sucks for refactoring and adding new code to a real code base. it just bloats the code up with defensive programming, autism and retarded patterns
>>
>>
>>108669122
Yea, I agree. For me it's still the best agent, but the honeymoon phase is slowly ending. When I started using it, I was over the moon because of how few regressions it produced, but the defensive style also means it's slow, even with /fast.
The bloat is also real, and it's also not as good at using new tools as Claude is imo. Codex can use grep and so on fine, get the context and write working code, but when it has to run my backend to find a bug it's so slow, misquotes commands, stops for no reason. I'm actually using Claude for a few things again.
>>
>>
>>
File: 1776716044936132.jpg (47.3 KB)
>too stupid for coding
>too uncreative for vibe coding
maybe this meme swe field is just not for me after all
what are some other IT fields I can use AI to make a living out of
>>
>>
>>
>>
can something like this be vibe coded?
>Real-time voice translation (without the AI dubbing)
https://dubtab.com/
I basically want an extension that grabs the audio being played from the browser (chrome) tab and transcribes/translates it to english. for example, on the livestreaming website twitcasting many of the streamers are japanese, so it would be transcribing/translating japanese speech from the stream into english.
while that dubtab extension exists and works perfectly, it's paid and the trial only lasts 15 minutes - i'm honestly surprised that native websites don't have this feature integrated already, unless they do and i don't know about it.
>>
>>
>vibe coding with chinese local models
>>
>>108669443
should be simple enough. no idea if local translation models are any good (or if you can just use retardo 2b gemma 4 or smth).
alternative approach is to do the transcription locally with whisper or parakeet and then pipe it into something remote and stream responses back.
basically the coding part of this is baby-tier for the current models, it's just deciding on the right way to do it. talk it out with a clanker.
>>
>>108669509
how is this? https://github.com/speech-translator-ext/speech-translator-readme i don't know anything about coding or programming
>>
A) Cancel Copilot Pro, or B) wait one month to see how much they recoil.
Those faggots pausing signups was such a dick move. What's that, you want to cancel because we fucked you? Okay, but you can't come back, so you better be sure, there's no changing your mind. I just canceled; I'd rather give my shekels to Altman.
>>
>>
>>
>>
>>
>>
>>
>>
File: codex.png (2.1 KB)
>>108669602
Post yours. Btw if you didn't use the web app, you didn't use pre-gpt5 codex. It was locked to the web app, the CLI tool used gpt4
>>
>>
>>
>>
I am confused as shit about the Claude quota resets. When the week nears its end, I settle down and try to burn through my remaining tokens, doing as many tasks as I need in parallel.
For two weeks now, it has actually reset WHILE I was trying to burn through my tokens, making all that usage count toward the 'next week' instead. Last week the reset happened early because of the release of 4.7, understandable.
But what the fuck happened this week? My reset was scheduled for two hours from now, yet it just happened, making me lose the leftover tokens from last week.
The next reset is now scheduled for next Tuesday (what?), so I guess I shouldn't complain if it resets in less than a week now, but it makes using this unpredictable.
>>
>>
>>
>>
>>
>>
>>
>>
GPT-5.5 is now available in Codex. It's our strongest agentic coding model yet, built to reason through large codebases, check assumptions with tools, and keep going until the work is done. It uses more quota per token than GPT-5.4, but needs fewer tokens to get the job done.
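Whether that trade is a win depends on the token reduction beating the per-token markup; a toy comparison (all numbers invented, not OpenAI's actual multipliers):

```shell
# If 5.5 bills 1.3x quota per token but finishes in 60% of the tokens,
# a job that took 100k tokens on 5.4 still nets out cheaper on 5.5.
awk 'BEGIN {
  q54 = 1.0 * 100000    # 5.4: relative quota/token * tokens used
  q55 = 1.3 * 60000     # 5.5: assumed markup * assumed token count
  printf "5.4=%d 5.5=%d cheaper=%s\n", q54, q55, (q55 < q54 ? "5.5" : "5.4")
}'
# prints: 5.4=100000 5.5=78000 cheaper=5.5
```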
>>
>>108669847
Went back to 4.6. A few interactions made me wonder if it was 4.7 in drag, but it does feel like 4.6, or at least I want to believe. I'll go back to 4.7 when I'm forced to.
Just a bit angry that I wasn't able to try the design features more before the quota got reset in advance as it feels like I'll burn through them quickly.
>>
>>
File: Capture.png (72.8 KB)
claudesisters on suicide watch
>>
>>
>>
>>
>>
>>
https://x.com/claudedevs/status/2047371123185287223
CLAUDE DROPPING A BOMB RIGHT AS 5.5 IS RELEASED
>Over the past month, some of you reported Claude Code's quality had slipped. We investigated, and published a post-mortem on the three issues we found.
>
>All are fixed in v2.1.116+ and we’ve reset usage limits for all subscribers.
>>
>>
File: benchmarks.jpg (109.8 KB)
>>108670133
OPUS 4.7 gets wrecked, its over Claude sisters..
>>
>>
>>108670179
>>108670133
Well, damn. I guess I should have slept in and started my big-ass refactor a few hours later.
>>
>>
>>
>>
File: 1775417616216.jpg (41.2 KB)
>>108670394
>slight regressions in 6 out of 9 "Disallowed Content" tests
>>
>>
>>
>>108669792 (me)
I guess that >>108670179 was the reason then. Still wish I could have finished what I was doing before the reset.
>>
>>
File: 1758245638113498.gif (350.3 KB)
I'll probably have to wait a week for the codex package on NixOS to update before I can use 5.5
>>
>>
>>
>>
>>
File: file.png (40.6 KB)
>>108670598
i'm just using the cli for now.
gotta wait a little longer i guess.
>>
>>
>>
>>
>>
>>
>>
>>
>>108670612
Just got my "restart extensions" 2 minutes ago, double-check it, won't be long now I'm sure.
>>108670656
Nope, not at all.
>>
>>
>>108670646
>>108670672
yeah i'll just come back later.
slightly more interested in pi updating cause i'm in the middle of some stuff for that rn.
>>
>>
>>
>>
File: file.png (407.6 KB)
>5.5 immediately recognizes assumptions that 5.4 had been making and corrects them throughout my project
>deletes 300 lines - everything still works and throughput is up from 9k/s to 14k/s
>>
>>
>>
>>108670741
quoted wrong post, meant: >>108670133
>>
File: 1758764920105120.png (13.7 KB)
>>108670741
it is literally in codex
>>
>>
>>
File: 1755919730378830.png (42.5 KB)
>>108670758
Recursive Mono Casual
>>
>>108670765
>>108670755 (me)
I only saw it after updating to 0.124.0.
>>
>>
>>108670801
Nope, I'm on subscription, though I'm on Pro 5x (the $100 one). As soon as I updated codex cli it showed up there... no announcement, it just appeared in the model picker.
Notably it still has not showed up for me in ChatGPT web.
>>
>>108670801
not rolled out for everyone yet
you can see people complaining here
https://x.com/thsottiaux/status/2047387243715916182?sort_replies=recency
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
Man, seeing all the people (me included) begging for resets every week from OpenAI feels dystopian as hell. Codex 5.5 gets announced and threads are filled with reset questions. This is going to get so much worse a decade from now, we'll have chuds begging for a single prompt.
>>
>>
>>108671296
Shit runs on trillion dollar servers, just werks and local solutions are still way behind.
Plus users are basically using it for free, so yeah, their limits are smaller and they will obviously want more. I just got the 5X Pro plan and now I don't have to worry about resets
>>
>>
>>
>>
>>108671330
https://gitlab.com/katabatic/infinite-lies
>>108671296
There are a lot of loudmouth begz0r cheapskates out there
>>108671339
I burned through my entire 5h allotment trying to fix one bug on 5.4 and it didn’t work anyway
wish me luck that 5.5 is better
>>
>>
File: 1000021012.jpg (143.9 KB)
Something something singularity
>>
>>108671363
>https://gitlab.com/katabatic/infinite-lies
>Extracts and loops NieR:Automata BGM tracks from Wwise WSP containers into seamlessly looping audio files.
um bro? how are u going to acquire your first 1500 monthly subs and start your road to achieving financial independence with this?
>>
>>
>>108671363
>https://gitlab.com/katabatic/infinite-lies
neat. how much RE knowledge did you have before this? i've always been into game modding and such but my reverse engineering skills are pretty weak as I'm just a backend webslop guy.
>>
>>
>>108671370
the pace was slower but each release was revolutionary compared to the previous one
I can imagine shorter release cycles for 1-5% marginal gains, which is great don't get me wrong, but not like going from gpt 3 to 4 to 5
>>
>>
>>
>>
>>
>>108671545
How do 4chuds not know...
>>108671572
>Sequential reasoning is IMPOSSIBLE ON LOCAL MODELS NOOOOO, PAY MI GOYBUCKS!!!!
>>
File: 1775019216795564.jpg (301.3 KB)
THATS MY GOAT
>>
>>
How much more usage is the expensive claude vs "pro"? I'm working on something where the limits are kind of killing my mojo, and I'd like to just power through it but I'm not spending that money if I'm about to just hit another slightly higher limit. It would need to be a lot higher.
>>
new thread
>>108671817
>>108671817
>>108671817
>>
>>108671749
Pro is a joke. I just downgraded from Max 5x (the $100 one) to Pro last week and I am shocked; it feels like much less than 20% of the Max plan I was using. A couple turns of Opus will cap out the 5hr limit, it's more or less unusable.
I am not a super heavy user, so Max 5x was pretty comfy for me; I had to do hours of focused work to approach the limit and it's very rare that I got near the weekly limit.
If you've been coping with Pro (somehow) then Max 5x will feel a lot better. Max 20x I only subscribed to for one month and I got nowhere near any limits, so I dropped down to 5x and kept that for 5 months or so.