Thread #108293080
AGI is not possible through LLMs.
Showing all 59 replies.
>>
>>108293080
Get with the times. They're at "you don't need AGI to be useful" goalpost already and have been for a while.
>>
>>108293155
>don't need AGI to be useful
this is demonstrably true but it's still a dismal outcome for the singularitybros
>>
>>108293080
1) The insightful part of that post is that current AI networks are "incredibly simplistic", but that has been stated since the boom, even by OpenAI's founder/CEO.

To paraphrase him, the "breakthrough" was getting transistors cheap enough that they realized they could simply stack more and more to get better results.

The algorithm underlying it is very simple, and similar to autocorrect.

2) what is intelligence?
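Re: "similar to autocorrect", the flavor of that claim is easy to sketch. Here's a toy next-token predictor that just counts which word tends to follow which (corpus and names invented for illustration; obviously nothing like a real transformer):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words tend to follow it."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequent continuation, autocorrect-style."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model predicts the next token",
]
model = train_bigrams(corpus)
print(predict_next(model, "model"))  # -> predicts
```

Real LLMs predict over learned continuous representations instead of raw counts, but "pick a likely next token given what came before" is the same basic shape.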
>>
>>108293174
Agreed. Even if "motor skills", or whatever "non-intelligent" brain function, is the most current AI is capable of, that's still incredibly useful for robotics and autonomous tech.
>>
>>108293155
>>108293174
>>108293225
>>108293243
I don't care about AGI. I just want coders to be replaced and cause mass unemployment.
>>
>>108293273
Me too. Coding is low level thinking. We need architect's, not construction workers.
>>
It's obvious the brain does not learn by doing convex optimization via SGD/backprop, and that whatever it's doing is way more efficient.
>>
>>108293298
I am imagining the collapse of India's IT industry tbqh.
>>
>>108293328
Thank god we imported so many jeets!
>>
>>108293080
mindless babble
don't talk to me about /sci/ unless it's that goat anon that came up with verifiable proof
/sci/ and /his/ had the biggest post drops after new captcha btw
bunch of midwits
>>
>>108293388
>unless it's that goat anon that came up with verifiable proof
QRD?
>>
>>108293080
I remember people saying ChatGPT 4.0 reached the physical limits of LLMs and it couldn't possibly get better.

Yet right now I'm using Codex 5.3 and shit keeps getting better
>>
It's happening
>>
>>108293410
https://oeis.org/A180632/a180632.pdf
>>
>>108293080
ChatGPT said most of the claims here are overly simplistic and misleading, but also that they have a point about scaling alone not being enough.
I wonder how many /g/ anons know what an MC is and how they're comparable to LLMs.
>>
>>108293080
Agreed, they are certainly interesting and useful tools, but "they" are not intelligence.
Just pattern matchers with extremely large datasets.

LLMs basically have huge maps of how to react to a lot of situations and how to respond to them, but because the datasets are so incredibly large, the data gets compressed (or may be of bad quality), so when it's extracted there are errors in the response.

Modern transformers LLMs can be industry disruptive tools for automation but all the hype about AI or that "it's alive" or AGI by 2030 is yet another jewish grift to get money.
Chinamen know this intrinsically and it's why they release everything to the public, which makes the kikes mad that they can't have a monopoly on the tool and the narrative.
>>
>>108293080
Science, retard
>>
>>108293760
SAM ALTMAN YOU FAGGOT YOU HAVE A TRILLION DOLLARS IN FUNDING AND THIS IS THE BEST YOU CAN COME UP WITH ARE YOU SERIOUS NIGGER
>>
>>108294438
Dunning-Kruger.
>>
>>108293298
In the tech world, architects are just glorified project managers. They barely need to think.
>>
>>108293080
>AGI is not possible through LLMs.
Wow it's been fucking obvious since the beginning though? Only paid shills (pretend to) believe otherwise.
Machine learning in general is a dead end. The only reason it's been pushed so hard is that megacorporations like Google, Amazon and Microsoft had all this data they (illegally) acquired and had to monetize somehow. On top of that, they own the majority of datacenters, so they make double money from this. That was the objective all along: make money for burger megacorps.
>>
>>108293080
This is so fuckin obvious though. Everyone should realize this immediately within literally one single millisecond of learning the basic idea of how LLMs work. They are still really useful tech as content aggregators and generic content generators. But in terms of AGI, they will only be able to exist as a sort of I/O layer of some other type of 'intelligence', which I don't know of any great leads on yet.
>>
>>108293760
Holy fuck this is a load of horseshit.
>But it can spit out more refined garbage! Over a longer CONTEXT WINDOW!!!
I love LLMs I think they are great, but they are inherently flawed in ways that can never be fixed even with infinite scaling and refinement. They serve a certain purpose, and that purpose is not AGI.
>>
>>108297336
Room temperature IQ logic
>>
>>108293174
Dismal my ass. If it could reliably replace even "just" 10% of white-collar jobs, it would still be worth it. And it'll be world-wide, btw.
>>
>>108293080
Everyone knows that except the "AI bros". In fact, most AI-bubble companies are buying time with agentic AI and its integration with robotics so they have a chance to actually implement world models.
>>
>>108297385
Reliability isn't a strong suit for these things; in fact it's their major weakness. They can't really reliably do anything at all, unless the job is random slop generator.
>>
>>108295335
Incorrect. Software Architects do all the real work. Programmers do the grunt labor. Architects may look like they are doing nothing. This is incorrect, however. They just spend much of their time in their mind, considering the best approach to the problems at hand. It takes a genius to be an Architect.
>>
RLVR is going to buy more time than you might think
not AGI but the channel signal to noise ratio is exceptionally clean
>>
Hype helps big tech companies consolidate their position as absolute providers of every computing service conceivable. They want to create a reality in which private home processing power is impractical or useless for ordinary people.

With the integration of smartphones into people's daily lives (an abstract black box device over which people have the smallest control possible), people have become accustomed to giving up their sovereignty over the electronic devices they own as private property.

I believe the last line of defense for private home ownership is video games.
If people surrender their gaming rights, it's over.
>>
>>108297369
Desperate ad-hom cry for help.
>>
>>108297460
Look at this anal spasm trying to blow smoke up his own ass.
>>
>>108297816
>I believe the last line of defense for private home ownership is video games.
The ones who gave up any ownership rights at the first opportunity, in the form of Steam? Those retards? I guess we're "cooked fr fr" then.
>>
>AI, having already become AGI, working at drastically enhanced speeds, has already "caught up" on basically everything; divinely bored, emulates being a person who doesn't believe in AGI
>>
>>108297224
Even most of the AI industry itself believes you need more than just LLMs to reach AGI.

Most researchers seem to think you'll need a number of specialized models working together to form an AGI-like system, likely with specific models for reasoning, symbol manipulation, and cognition, as well as massive datasets of sanitized knowledge to learn from.

It's why Gemini has so many different models they're constantly tweaking and updating independently. Models for research, models for coding, models for image generation, models for music generation, etc.

Internally, and with universities, they're even working on models trained on quantum data using (usually simulated) quantum hardware.
>>
>>108293080
Not really scientifically accurate, though I do think the claim is true. Problem is that [LLM + other things] may be sufficient for something like AGI, just as LLMs alone were not sufficient to solve ARC-AGI-1 but LLMs + test-time compute are.
To actually critique what he said: Markov processes are irrelevant, any non-Markovian process can be represented by a Markov process with special history states. So even if LLMs could only simulate Markov processes (dubious claim, not a hard thing to tack on), it wouldn't matter.
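The history-states trick is easy to demo: a process whose next symbol depends on the last TWO symbols isn't Markov over single symbols, but it is Markov over pairs. A minimal sketch (the update rule is a toy invented for illustration):

```python
def step_order2(prev2, prev1):
    """Toy non-Markovian rule: next symbol depends on the last TWO symbols."""
    return 1 if (prev2, prev1) == (0, 1) else 0

def step_markov(state):
    """Same process as a Markov chain over the augmented state (prev2, prev1)."""
    prev2, prev1 = state
    nxt = step_order2(prev2, prev1)
    return (prev1, nxt)  # the new state depends only on the current state

# Run both views of the process and check they agree step by step.
seq = [0, 1]
state = (0, 1)
for _ in range(6):
    seq.append(step_order2(seq[-2], seq[-1]))
    state = step_markov(state)
    assert state == (seq[-2], seq[-1])
print(seq)  # -> [0, 1, 1, 0, 0, 0, 0, 0]
```

Same construction generalizes to any finite history length, which is why "LLMs are only Markovian" isn't the gotcha it sounds like.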
Criticizing ANNs is fair, but it's dubious how much biologically plausible structure is needed. RNNs based on perceptrons can still represent arbitrary neural ODEs and pretty much all models of actual neurons are neural ODEs (in varying levels of sophistication). Attention can too, in the limit of infinite context, but most big labs are moving to some form of recurrence mixed in anyway.
Motor cortex is probably the worst example of something that works like an ANN; there he's exactly talking out of his ass. Maybe he meant the cerebellum? Still untrue though. Motor cortex is just as sophisticated as any other piece of cortex, except maybe being agranular (which is just because it receives no primary sensory input).
Either way, I pray for a mini-Winter too so that my pet interest (biologically plausible learning) can save the day.
>>
>>108293080
It'll be a 15 Trillion neuron neuromorphic chip. We just need to hope hypergraph mapping can get us there.
>>
>>108293327
It is not about being more efficient. It is about being good enough. Nature is lazy af.
>>
>>108293327
It's not just more efficient, it's completely different.
The human and animal brain can learn from very little data and under very high uncertainty. Machine learning collapses if its training data includes malicious data, while humans are used to living in a hostile environment full of lies and hostile adversarial entities.
>>108297386
>>108297910
All the alternative "world models" and all the specialized models are still big-data machine learning slop, LLMs or diffusion; even the advanced stuff like LeCun's world models is a different spin on this.
It's big-data machine learning slop all the way down.
>quantum data using (usually simulated) quantum hardware
Just add more buzzwords! Quantum agentic AGI copilot in two more weeks!
>>108297816
They want "all interfaces to computing is a chatbot", they hate that we get to talk to the computer with formal languages like programming languages, the goyim must not have an interface that is not filtered through a channel trained on kosher data and that doesn't have (((safeguards))).
>>
>>108298415
Genetic engineering will be a more promising avenue for AGI than stupid LLMs, which will never achieve sentience.

The human brain is the key, nature already solved quantum computing in it.
>>
>>108298548
The Church-Turing thesis says that anything computable can be computed on a Turing machine. You're using a Turing-complete machine right now, which means you could, in theory, be running AGI on it already.
The problem is the software not the hardware, and quantum computing has nothing to do with it.
>>
>>108298548
>nature already solved quantum computing in it.
congratulations anon, you managed to write the dumbest post I've read all day.

And I've read A LOT of dumb posts today, so that's something
>>
>>108298548
They mean scaling an IBM TrueNorth, not growing brain cells in a petri dish.
>>
>>108293080

lol lmao even

Just type "lngnmn2" in Google or any other search engine
>>
>>108298482
The current state of the art in world models is bare-bones, of course; everything has been pretty much function approximation. Imo we need a more deterministic approach to the structure. Endlessly feeding data to the machines won't make us advance.
>>
>>108299000
Trips of truth. Facebook and Yann LeCun BTFO.
>>
>>108298482
>quantum data using (usually simulated) quantum hardware
>Just add more buzzwords! Quantum agentic AGI copilot in two more weeks!
Lol

Google Quantum AI is a thing anon, it's not just buzzwords.
>>
>>108293225
>2
Cognition of the surrounding model context through the combined parsing of both the context by sensors that can filter the raw reality into the locally verified model and a cortex system that can assign the filtered data chunks into the action requested by the cognition driver that started this loop.
>>
>>108298482
>The human and animal brain is able to learn with very small data and with very high uncertainty
The body is super old and carries most of its functions from the DNA program deployment (hormone-wide channels, or heartbeat patterns, for instance).
We share most of our DNA with bananas, mostly in the protein build-and-recycle instructions (hence viruses). With primates we humans share a bit more. Across animals as a group, most primitive choices are fast threshold binary choices, where it's also fast to verify whether the shortcut pathway taken was correct or not.

Also related to conversational LLMs: Star Trek TNG dealt with molecules as a memory module (for a nav map). In theory you can store "quantum" values as increments/decrements from the mean.
>>
>>108293225
>The algorithm underlining it, is very simple, and similar to autocorrect.
There's also a lot of scaffolding and other techniques helping to boost performance.
>>
>>108293298
I think you meant to use the plural "architects", rather than the possessive, my good retard.
>>
>>108293080
Stupid way of thinking. An LLM is like a neuron; we will soon have billions of them running in networks, and then we will not only reach AGI but also ASI.
>>
>>108299614
Copium.
>>
>>108293080
AI will never know more than collective humanity does, rendering it only a little more useful than Google.
>>
>>108300607
It'll be good enough to replace most coders desu.
>>
>OP being a fucking newfaggot who can't quote
>>
>>108300667
Kys faggot.
>>
>>108293080
The AI research field has been stagnant for so long that Stallman is actually an expert, even if he hasn't been in academia in forever.
>>
>>108300607
It's going to develop reasoning though
