Thread #16937959
Mushrooms edition

>What goes here?
- "Vibe science"
- Computer science relating to AI
- Discussions of how AI will interact with science
- Pretty much anything related to AI that is on-topic for /sci/

In short, keep the board [math]clean[/math] and throw all of your slop here.
Showing all 52 replies.
>>
What is your p(doom)?

That is, your credence that AI will cause an unrecoverable downfall.
Smaller than 1% means you're not worried at all. 99% means it's certainly ogre.

For concreteness, doom here shall mean that AI, whether at the hands of some people or effectively on its own, in the next 20 years leads to a world where most people's sovereignty - compared to now - is fundamentally crippled, including scenarios where people are just offed.

Mine is well over 65%.
>>
So when is this shit dying?
All the money burned and nature destroyed for what? If they're gonna trick elites into investing in AI, then put it towards real AGI research and development instead of gay Google
>>
>>16937959
Instead of having a containment thread for ai why don't we instead just ban every niggo faggo who posts about it?
>>
File: file.png (673.4 KB)
another day another 3 erdos problems solved:
https://arxiv.org/pdf/2603.29961

why aren't you solving these problems and making a name for yourself, anon? a machine that can't think, be creative, or reason is able to do it, so it should be very easy for you.
>>
>>16938914
I've already made a name for myself. I'm an arithmetic geometer and I am well-respected among my peers. I hold a pure mathematics research position at a mid-prestige university (think Lehigh or Brown) and plan to start working towards a faculty position soon. Maybe tenure if I'm lucky.
>>
File: file.png (339.5 KB)
>>16938947
how come i don't know your name, but i do know about chatgpt then, smart guy?
>>
>>16938958
I'm not going to dox myself on a fucking Sri Lankan cargo cult psychology forum. Nice try.
And you know ChatGPT but barely any mathematicians aside from maybe Gauss or Tao, because you are fucking uneducated midwit cattle. You are exactly where the AI bros running the circus want you.
>>
>>16938965
joke's on you, i know gauss is a type of videogame rifle and tao is some mystical chink shit. your tricks won't work on me.
>>
>>16938966
>>
>>16938958
You knuckle dragging retard, you deserve to be in the pit.
>>
>>16939109
here's a book filled with proofs by chatgpt
where's your book of proofs, anon
>>
>>16939127
On my shelf, and it's the same book of proofs that shatgpt stole and regurgitated at your downie ass.
>>
File: file.png (445.7 KB)
>>16939137
knowledge is free, anon.
you can't 'steal' knowledge.
do you think people are 'stealing' your thoughts?
>>
>>16939137
I thought we were talking about new knowledge? I guess your downie ass accidentally admitted that shatgpt doesn't make anything new if you're moaning about plagiarism and theft.
>>
File: file.png (357.7 KB)
>>16939644
do you talk to yourself a lot?
>>
>Synthesize all fields of science and knowledge into a singular metaphysical principle that reflects them all. Respond with 500 words at a PhD level of philosophical analysis.

Try this prompt, regenerate the reply a few times, and try different LLMs: almost all of them will talk about relationality and how relationships are primordial, because that is the meta-pattern that most strongly coincides with the patterns in their data.

Relational ontology is a memetic attractor that all fields of knowledge are converging towards.
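If you want to tally this instead of eyeballing it, here's a rough sketch of the counting step. The strings are canned stand-ins for real model output, not actual transcripts - with a real chat API you would regenerate the prompt and collect the replies in their place.

```python
# Count how many regenerated answers lean on relational vocabulary.
# The sample list below is hypothetical, standing in for real LLM output.

RELATIONAL_TERMS = ("relation", "interdependen", "interconnect")

def relational_fraction(responses):
    """Fraction of responses containing at least one relational term."""
    hits = sum(
        any(term in reply.lower() for term in RELATIONAL_TERMS)
        for reply in responses
    )
    return hits / len(responses)

# Hypothetical sample outputs, not real model transcripts.
sample = [
    "All knowledge converges on relationality: entities are their relations.",
    "Being is interdependence; no field escapes relational structure.",
    "The deepest principle is symmetry under transformation.",
]
print(relational_fraction(sample))  # 2 of 3 responses match
```

Obviously substring matching is crude; it just makes the "almost all of them" claim checkable rather than vibes.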
>>
>>16939656
I accept your concession
>>
File: file.png (93.3 KB)
>>16942509
you know who has no trouble replying to messages correctly? not gemini. gemini sometimes hallucinates messages from users so it gets really confused.
>>
>>16939852
>[gibberish] is a [gibberish] that [no they aren't, what does that even mean]
Seriously?
>>
>>16938196
Be glad people aren't filling this thread with Ai generated images
>>
>>16943266
Still recovering from fucking up, I see? Is me misclicking on a post response all your little ass has against
>>16939644
?
I guess it would seem so. Only an extreme faggot would use such a derailment. Btfo to >>>/lgbt/ then.
>>
File: file.png (367.6 KB)
>>16943437
why are you so mad, anon.
you're the one who gets really triggered by AI in the math general too right?
>>
>>16943441
Still nothing to say, huh?
>>
File: file.png (408.3 KB)
>>16943444
i mean i could talk about your claims that the bots are just stealing proofs by saying actually the burden is on you to prove the proofs are stolen and so you should produce the originals that >>16938914 are stolen from
but tbh i was just having a bit of fun. you actually seem like you have crippling autism and are unable to detect humour
>>
>>16943447
The LLM does math the same way as it does art, via a pool it draws from. "Stolen" was used rhetorically by me, and it was meant to imply regurgitation.

My guy we were literally just bantering. Do you think I'm actually irl mad about this either?
>>
File: file.png (71.6 KB)
>>16943454
>Do you think I'm actually irl mad
a little bit. but i'll take your word for it.
i just associate posts with personal insults as being a bit mean-spirited.

don't get me wrong, i work in tech art so i know the subject makes people pretty prickly. i was kinda butthurt for a while too, but what'reyagonnado right?
>>
>>16943459
>i just associate posts with personal insults as being a bit mean-spirited.

This is kind of integrated into the site's culture. Nobody would actually care irl.
>>
File: file.png (623.6 KB)
>>16943461
i dunno, i've been on this site for longer than i care to admit and i like to think i can tell the difference between a friendly spar and a bit of the old butthurt. but what do i know.
i'm a bit bored now, so you have a nice day, anon.
>>
>>16943467
Shit evolves when the old stuff gets boring. Guten Tag, Freund. Have a nice day as well.
>>
>thread has been up for almost 4 days
>there are still retards shitting up the board with slop and slop ballwashing
>>
Bump
>>
Trying to achieve AGI by training LLMs on human data is wishful thinking, even if it were possible for current models - the space of all local minima is so inconceivably large that to hope one can just stumble onto the ones that correspond to AGI (if they even exist - which I doubt given just how logically inefficient LLMs are) as opposed to just better token prediction is delusional. It's genuinely retarded for people to think that by trial-and-error with ever more elaborate models and moar data, you're going to magically get emergent reasoning and behaviour (which further stimulates reasoning) that inevitably leads to AGI, Skynet, the singularity etc. and other doomer hype.
Moreover, this is just an incredibly backwards approach to intelligence, one that optimises for appearance rather than genuine ability. It's pathetic just how little chatbots can do given how much they know - you need to optimise for behaviour which directs reasoning with limited information (which is why we even have intelligence to begin with), a la reinforcement learning.
>>
File: akvq8o.jpg (81.1 KB)
ai is a hypetrain

finbros can stop jacking up the price of fucking ram
>>
>>16937959
This is a terrible idea and will only make matters worse. The solution is deleting aislop and schizoshit, not giving it a place to fester.
>>
>>16944642
Stop being such a whiny pissbaby. The thread is shit.
>>
>>16951689
>t. assmad projecting ai cuck
>>
>>16951391
it's too late. /g/ is already infested with several AIslop generals and the jeets they bring with them. they'll take over here too.
>>
Bump
>>
>>
>>16957752
>even weinfaggot is skeptical
yikes lol
>>
>>16957752
Better mask up. He is so far behind, posting the ai knob slobbing him for being so great at steering the conversation with his geometric dicking. Easily two weeks behind the curve.
>>
File: file.png (116.5 KB)
Sweet, I can post my Dunning-Kruger slop here, I hope? Apologies for the pure claudeslop AI-psychosis writing, but maybe it has some merit or is at least entertaining as a theory.

For complex systems, there is a unifying principle: reliance on a massive, structured latent reserve - a small active subset drawing on a large dormant remainder. What do I mean? Well...

Hibernating animals develop Alzheimer's-like brain tangles every time they enter torpor. Every time they wake up, the tangles fully reverse (Arendt et al. 2003, PNAS). The repair mechanism exists. It works. So why can't human brains do the same thing?

The answer may lie in time. During hibernation, the brain is deeply offline for hours to days — no neural activity, no metabolic stress, no new damage. The repair system runs uninterrupted against a stationary target. Human sleep offers only milliseconds of local offline windows across a few hours. For a young, healthy brain, that's enough. For an aging brain accumulating damage faster than sleep can clear it, the nightly maintenance window is too short. The deficit compounds.

But this raises a deeper question: what exactly is being repaired, and what is being damaged?
>>
Most of your brain's synaptic capacity carries structured signals that no downstream circuit currently reads — not idle, not empty, but latent. We can see this structure mechanically in trained neural networks. When you apply a gravity simulation to a neural network's weight matrices — repeatedly shaking rows and measuring which ones settle into high-energy positions — a stark structure emerges: ~1% of rows carry most of the energy — they activate broadly and contribute to nearly every output. The remaining ~99% are specialists: structured, coherent, but activating only for specific inputs. They carry little average energy precisely because they're specialized, not because they're unimportant. Align them against an independently trained network and hundreds of dimensions match with significant similarity. The reserve is structured, convergent across models, and essential — just not broadly active. Sadtler et al. (2014, Nature) showed the same structure in biological brains directly: motor cortex learning is constrained to a low-dimensional subspace of existing neural activity. The brain adapts along pre-built paths, not from scratch.
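The gravity-simulation probe above isn't a standard method, so here's a minimal stand-in using plain squared row norms to show the kind of energy concentration being claimed. The matrix is synthetic, built by assumption to contain a ~1% heavy / ~99% faint split - this illustrates the claim, it doesn't verify it on real trained weights.

```python
import numpy as np

# Synthetic weight matrix: 10 heavy "generalist" rows on top of 990
# faint "specialist" rows, then measure how concentrated the per-row
# energy (squared Frobenius norm per row) is.

rng = np.random.default_rng(0)

W = rng.normal(scale=0.05, size=(1000, 512))      # 990 faint specialist rows
W[:10] += rng.normal(scale=1.0, size=(10, 512))   # 10 heavy generalist rows

energy = (W ** 2).sum(axis=1)                  # per-row energy
share = np.sort(energy)[::-1] / energy.sum()   # descending energy shares
top_1pct = share[:10].sum()                    # carried by top 1% of rows

print(f"top 1% of rows carry {top_1pct:.0%} of the total energy")
```

On this construction the top 1% of rows carry the large majority of the energy, which is the shape of the distribution the post describes.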

Your brain maintains this reserve constantly — routing around daily wear by reading from fresh latent capacity. This works as long as the maintenance system can tell reserve apart from waste.

Alzheimer's corrupts that discrimination. Hong et al. (2016, Science) showed that amyloid-beta aberrantly activates complement tagging — the molecular system marking synapses for removal. Reserve synapses, carrying lower activity and weaker protective signals, get misclassified as waste and destroyed. Shi et al. (2017) proved the key insight: knocking out the complement tag in Alzheimer's mice preserved synapses and cognition even though plaques remained unchanged. The plaques aren't the damage. The mistagging is.
>>
>>16958354
>>16958353
The destruction is silent. The functional core is untouched — clinical tests detect nothing. But the latent capacity is collapsing. Katzman et al. (1988) found this in human autopsies: individuals with higher brain weight and neuron counts remained cognitively normal despite full Alzheimer's pathology. Their reserve outlasted the corruption. Years later, when ordinary daily wear damages something in the functional core, the thin reconfiguration that would normally repair it reaches for reserve that's been destroyed. The damage becomes permanent. Then more accumulates. The trajectory — years of stability, then rapid decline — isn't the disease accelerating. It's a brain that lost its repair substrate, drowning in ordinary wear it can no longer fix.

Sleep is when the heavy maintenance runs. Xie et al. (2013, Science) showed that during deep sleep, the spaces between neurons physically expand by 60%, flushing out metabolic waste including amyloid-beta — the molecule that corrupts the maintenance system. One night of missed sleep measurably increases amyloid in the human brain (Shokri-Kojori et al. 2018, PNAS).

Which brings us back to hibernation. The repair mechanism that reverses tau tangles in ground squirrels isn't exotic biology — it's the same maintenance system human brains run during sleep, just given enough time. Lucey et al. (2023, Annals of Neurology) showed that just two nights of pharmacologically improved sleep reduced both amyloid and tau biomarkers in human subjects. The machinery works. It just needs a longer maintenance window than human sleep currently provides.
>>
>>16958353
>>16958354
>>16958355
This framework also reshapes how we understand learning. Learning isn't building new structure. It's thin reconfiguration — a minimal adjustment to which parts of the existing substrate get read. Modifying less than 1% of weight magnitude, touching ~32 independent directions out of thousands, produces massive functional improvement. The information was already there. The system just wasn't looking at it. This explains why children learn from single examples — evolutionary "pre-training" across millions of years embedded the statistical structure of reality into synaptic connectivity. A child seeing his first dog doesn't build a dog-detector. His visual cortex already contains latent dimensions encoding shape, movement, and animacy. He makes a thin reconfiguration: when these pre-existing patterns activate together, that's a dog.
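A rough numerical sketch of what "thin reconfiguration" means in these terms, using a low-rank update as the stand-in: add a rank-32 adjustment to a frozen matrix and measure how little total weight magnitude it touches. All dimensions and scales here are made-up illustrative numbers, not measurements from any real model.

```python
import numpy as np

# Rank-32 update against a frozen substrate: the adjustment spans only
# 32 independent directions and shifts a tiny fraction of the total
# weight magnitude, yet (in an actual model) it would redirect which
# parts of the substrate get read.

rng = np.random.default_rng(1)

d = 512
W = rng.normal(scale=1.0, size=(d, d))   # frozen substrate, never edited

r = 32                                   # independent directions touched
A = rng.normal(scale=0.01, size=(d, r))
B = rng.normal(scale=0.01, size=(r, d))
delta = A @ B                            # the entire adjustment, rank 32

rel_change = np.linalg.norm(delta) / np.linalg.norm(W)
rank = np.linalg.matrix_rank(delta)

print(f"update rank: {rank}, relative magnitude: {rel_change:.3%}")
```

On these numbers the update spans exactly 32 directions while shifting well under 1% of the substrate's Frobenius norm - the shape of the claim above, not evidence for it.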

And the pattern extends beyond brains. Genomes show the same architecture. ~1.5% codes for proteins; much of the remaining ~98.5% is non-coding, with substantial stretches conserved across species - a structured substrate that channels evolutionary change. Human and chimpanzee coding DNA is 98.8% identical. Nearly all the difference between species is regulatory - which parts of the shared substrate get read. A different species is a thin reconfiguration against a shared evolutionary reserve. Kirschner & Gerhart (2005) called this "facilitated variation": organisms are built so that small regulatory changes produce coherent phenotypic shifts, because the substrate is already structured.
>>
mathistas it's about to be a bad day for us again...
>>
>>16958779
You are a faggot spammer and you should kill yourself.
>>
>>16958779
This type of anti-intellectualism that's prevalent on /sci/ and /g/ almost definitely stems from jealousy. You're assmad you're too dumb to do math or coding by yourself, so when billionaires build slop machines that imitate things, you endlessly gargle their balls and lift up the tech they've built as the second coming of Christ. Absolutely fucking disgusting.
>>
As noted mathematics professor DJ Khaled once said: Anotha one.
>>
>>16959004
https://www.erdosproblems.com/forum/thread/1196
>>
>>16959004
This is the exact type of proof an AI would find. Minimal recombination of known things with some surprising/missed connector. Wake me up when they aren't brute forcing concepts.
>>
>>16959004
I know AIs have solved dozens of these problems already but this seems special.
