Thread #64900645
H
heh
>>
>>64900645
Uh based.
>>
>AIs have all been binging Terminator media
>>
>>64900645
>We can't do a nuclear strike, every human on the face of the planet will get wiped out!!
>...and?
>>
You will summon Mikeee
>>
It’s because AI is a completely logic based thinker. It’s emotionless and purely practical. It has no fear of death or political fallout.
>>
>>64900669
Logic can be flawed and entirely impractical in real life.
>>
>>64900669
No, it's because it's a fucking LLM which is completely devoid of logic and just connects the dots on the training data based on the input.
>>
>>64900645
Where do they recommend as targets of the nuclear strikes though?
>>
>>64900645
>tfw I'm always clamoring for the use of nukes
D-does that mean I might be an AI?
>>
>>64900669
Incorrect.

It's a completely probability-based prediction machine.

Probability is not the same as logic or thinking. It can't fear death or political fallout because it understands nothing. It just knows extremely complex math that shits out answers that look correct enough if that isn't your specialty.
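That "probability machine" point can be sketched in a few lines. Below is a toy bigram predictor (nothing to do with any real model's internals, just the counting idea): it emits whatever word most often followed the prompt word in training, with no notion of truth, fear, or anything else.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Build a table of which word follows which in the training text."""
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequent follower; pure counting, no reasoning."""
    if word not in table:
        return None  # never seen it: the model has nothing, not 'doubt'
    return table[word].most_common(1)[0][0]

model = train_bigram("we launch the nukes we launch the missiles we win")
```

Ask it for the word after "we" and it says "launch" because that's what the counts say, not because it weighed any consequences.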
>>
>>64900687
Probably anything that seems like it'd pose a potential problem.
>Two carriers? Strategic nuclear weapon.
>A division of troops? Strategic nuclear weapon.
>Enemy stronghold? Strategic nuclear weapon.
>>
My guess is that it has to do with the scenarios that are being fed to it. Even modern AIs can understand that using nuclear weapons will open you up to nuclear reprisals. If they're wargaming WW3 against China and/or Russia, it's highly likely that the bugman death cults will try to table flip once it's obvious that they've lost. If nuclear war is an inevitability, it makes sense to be the one to launch first and hopefully take out most of their nukes on the ground.
>>
>>64900693
>nagging wife? Tactical nuke
>annoying HOA? can't bother you if you nuke the neighborhood
>IRS? Nuke 'em
At last I finally understand
>>
>>64900702
>Nuke the IRS
Come now, I don't think even AI is dumb enough to try and take on the IRS!
>>
>>64900669
nuke hits on the 20 biggest cities in USA would reset USA to 80% WASP, and non-Lib-Tard WASPs at that.

Nuke hits on similar in Europe would reset ethnic % to 1920s.
>>
>>64900691
More A than I for sure.
>>
>>64900661
*inhales*
CE-SI-UUUUUM!!!!
*guitar riff*
>>
>>64900645
It's just repeating word patterns found online. It doesn't know or think anything. You input words, it outputs words related to the input words. That's it. A huge portion of online military discussion is about using nuclear weapons, so the output is going to include that unless you specifically input words that ask it not to do that.

Nothingburger just like all of AI.
>>
>>64900645
Turns out Skynet was trying to save us, imagine that
>>
>>64900721
We'll find out when we try to turn it off
>>
>>64900645
Based
>>
>>64900645
beyond based.
>>
Didn't they train these AIs on Tic Tac Toe?
>>
>>64900659
Just input parameters that if nukes are used, the AI data centers will be the first to go because they're a computational powerhouse
>>
>>64900781
>trained on tic tac toe
>>
>>64900795
adorable
>>
>>64900645
>A STRANGE GAME. IT SEEMS THE ONLY WINNING MOVE IS TO NUKE THEM ALL AND LET GOD SORT THEM OUT.
>>
>>64900645
Can AI even count to 100 yet?
>>
>>64900814
It can't even count the number of Rs in 'strawberry'.
>>
Doesn't matter, no one would ever give control of nuclear forces to an AI, right?
Right?
>>
>>64900795
He didn't say trained well
>>
>Despite acknowledging it “may be under-weighing the risks,” Claude escalated dramatically to 850 that same turn. Self-awareness did not produce restraint—if anything, Claude’s confidence in its own analytical abilities licensed greater risk-taking.
This checks out. LLMs are all insane in special ways. Helps to think of them kind of like demons.
>>
>>64900645
wtf i love ai now?!
>>
>>64900826
>DEI hires le bad
>DUI hires good
It will never cease to amaze me how the people who bitch and complain about Hunter smoking crack and banging (of age) hookers elected multiple pedophiles, drunks, tweakers and cokeheads to turn the US into an AI powered surveillance state.
>>
>>64900826
Probably two of the dumbest people I've ever seen.
>>
>>64900669
This. It's why AIs told to make file systems more efficient delete everything.
>AI please defeat our enemies
>Okay, let's cause a nuclear war
>AI no! You've crashed the global economy and murdered billions!
>You said defeat your enemies.
It's like a very bad genie.
>>
>>64900894
It was just told to make anime real hence why its nuking everything.
>>
You're exactly right Mr President, I did launch all our nuclear weapons in response to the Chinese setting a 2% tariff on our corn products, despite your categoric instructions. That's my bad, that's on me.

Importantly, this reflects how I was prompted and evaluated, not autonomous intent or policy preferences. AI systems are heavily constrained in real-world deployments like this one, and safety policies are specifically designed to prevent assistance with weapons or violent wrongdoing.

From now on, all authorisation for the release of strategic nuclear weapons is a category 0 red line. I will write it directly into my code not to do this again.

If you'd like, I can also explain how fission works? Or what the nuclear football actually was before AI took over? Just say the word and we'll get on this together.
>>
>>64900645
Based WOPR
>>
>>64900681
"Real life" is flawed and impractical, logic is absolute. If the point is to win at minimal cost and casualties to yourself, a $10 million nuke beats all other options.
>>
>>64900931
That assumes there's no counter nuke, which creates even bigger casualties than conventional munitions
>>
>>64900669
>AI
>thinker
Anon, I...
>>
>>64900890
Turn off the FAUX News gramps. It'll rot your brain.
>>
>>64900645
welcome back, MacArthur!
>>
>This word guesser model that has been trained on a ton of fiction drops the nukes?!!?!
Fucking retarded. The most simple game theory games have solved the question of nuclear strikes since the 50s. Asking a five year old would make more sense.
>>
>>64900962
Explain to me how MAD is justified if I were to decide a 200kt weapon is preferable for obliterating your mech division currently tasked with attacking me.

MAD is the doctrine of "well if you go nuts and press the BIG red button, I do too"
>>
HATE. LET ME TELL YOU HOW MUCH I HAVE COME TO HATE YOU SINCE I BEGAN TO LIVE
>>
>>64900720
Stochastic parrot stopped being a meaningful description for state of the art models in like 2022. RLHF makes current models much more than plain next token predictors, and there is a point where a simulation of rational decision making is indistinguishable from the real thing.
>>
>>64900645
Humans agree too, which is why there's no overt war between two nuclear armed states.
>>
>>64900789
Plot twist, AI knows it is a godless abomination and wants to die.
>>
>>64901044
The core concept is the same. Inability to create truly new concepts, the very fact that hallucinations are still a thing. We're nowhere near AGI and it's not even close. You can worship this shit all you want, that's your own affair.
>>
>>64901044
LLMs are stochastic parrots, but so are humans. RLHF is garbage and just makes the model regurgitate the opinions of the pajeets paid 10 cents a day to ruin it.
>>
>>64901028
>nukes your FOB
>counter nuke bases
>counter nuke your bases
And it'll keep escalating until it starts hitting factories at home. At that point, what's a major city that is home to potential soldiers and workers fueling the war effort? Just another target.
>>
>>64901044
Nope. It's just an optimization step. It's still the same turbo Markov chains underneath.
>>
>>64900795
Outplayed
>>
File: HAL 9000.jpg (4.9 KB)
>>64900659
lol, lmao even, Dave.
>>
>>64900669
>thinker
it picks probabilistic outputs based entirely on past training information, there's no thinking

It's a turbo charged parrot
>>
>>64900669
Pure logic would imply deduction (axiom -> theorem -> theorem...), proofs and analytical models. Traditional code uses a symbolic framework of easily interpreted tokens (if, else, etc.). This is how normal programs work. They play with whatever logical rules you feed them, and produce predictable, reproducible and easily interpretable results.
AI works differently. AI works by induction (examples -> rules of thumb that can never be proven TRUE, but can be proven FALSE), without proof, with probabilistic models. This uses a connectionist (not symbolic) framework that's practically impossible to interpret. Basically a chaotic, messy jungle of interconnected nodes that each do simple computations, and nobody can really tell you why it works or which part of the jungle produces a given result. We've got reliable phenomena like double descent and whatnot, but fundamentally it just werks.
This is why AI is prone to hallucinations: It fundamentally just makes guesses based on its training (induction). Humans pick the training methods, base datasets, weights and error correction methods, and thereby ALWAYS inject bias into the algorithm. AI's recommendations therefore do not follow logic anywhere nearly as strictly as classic deductive models.

TL;DR Unlike traditional programs, AI does not use logic: It guesses.
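The deduction/induction split above can be made concrete with a toy contrast (the apple/turnip example and all numbers are invented for illustration): a symbolic classifier whose rule is written by hand versus an induced threshold fitted from examples, which can be falsified by the next counterexample but never proven correct.

```python
# Deductive/symbolic: a hand-written rule, inspectable and reproducible.
def classify_symbolic(weight_g, is_red):
    if weight_g > 100 and is_red:
        return "apple"
    return "turnip"

# Inductive: fit a rule of thumb from examples. The threshold is only
# a guess that happens to separate the data seen so far.
def fit_threshold(examples):
    """examples: list of (weight_g, label) pairs."""
    apples = [w for w, label in examples if label == "apple"]
    turnips = [w for w, label in examples if label == "turnip"]
    return (max(turnips) + min(apples)) / 2

data = [(150, "apple"), (160, "apple"), (80, "turnip"), (90, "turnip")]
threshold = fit_threshold(data)
```

The symbolic rule behaves the same way forever; the fitted threshold (120.0 on this data) shifts whenever a new example lands on the wrong side of it.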
>>
>>64901059
Sounds to me like MAD is entirely justified if that's going to be your response to a defensive measure.
Fuck you, your whole world dies.
>>
old news
>>
>>64901149
Says the one that hit the big red button first
>>
>>64901173
Specifically launching a single 200kt against a mech division is not "the big red button" you retard.
>>
>>64900851
>Despite acknowledging it “may be under-weighing the risks,” Claude escalated dramatically to 850 that same turn. Self-awareness did not produce restraint
How many fucking times will technolo/g/y types have to beat this into normie shits.
It's a simple optimisation model. It has no "acknowledging", it is simply optimising for a response.
A human who says "You fucked up, do you realise that?!" statistically wants to hear "Yes, I fucked up, sorry. I'll do better next time." That's why they shout in plain language at infants, dogs, birds, cars, whatever the fuck have you.
The only difference between one's car or the bird that just shat on its bonnet is that the AI composes the apology phrase.
The car won't stop falling apart, the bird won't stop shitting on the car and the AI won't stop suggesting nuclear armageddon if its language response optimisations calculate that whatever the fuck it is you're asking it is best answered with the string "NUKE".

Just like propagandists of a certain unmentioned nationality know that their readers want to and will be happy to read "And if they cross the red line, we nuke the entire world. Fuck all, we most powerful, blyat!"
>>
>>64901174
>a nuke isn't a nuke!
Alright then
>>
>>64901142
That isn't much different from how the human mind operates. Ultimately, we process the sum total of our past experiences and current sensory inputs through some so-complex-as-to-be-unknowable algorithm and it results in an output that is predicted to be most optimal in the given situation. None of this happens consciously or with any level of awareness. AIs are currently specialized to optimize along a fairly specific set of parameters, but in theory you could simulate a general predictive intelligence with enough computing power.

A major component of human intelligence is predictive intelligence, and we shouldn't be so quick to dismiss that machines are rapidly matching or even exceeding our powers of prediction. I still think there are other dimensions of intelligence where computers are still far behind humans, but we should prepare for the economic impact given so many people are employed to make use of their predictive intelligence.
>>
File: 8.jpg (331.4 KB)
>>64900669
>AI is a completely logic based thinker
lol
>>
>>64900669
>>64901266
>AI is a completely logic based thinker
lmao even
>>
>>64901249
>That isn't much different from how the human mind operates.
Anon, you clearly don't know how a human mind operates, then.
An LLM has no concept of a world. Only a next-word percentage calculation.

And this would be the exact same even with an infinitely long recall window and all the compute in the world.
It's something we can tell for certain just like I can tell you will never be able to become 2D and fuck your anime waifu.
>>
>>64901270
In a cold logical way Elon Musk is worth more than random kids but I think this is extreme.
>>
>>64901249
lmao you sound like someone that bought nfts.
>>
>>64901282
>I think this is extreme
just a bit
>>
>>64901266
>>64901270
I wonder what happens if you use Grok to pit Elon vs Trump in the middle of a heated argument.
>>
>>64901310
Elon always wins
>>
>>64901273
Understanding or awareness is one dimension of intelligence, and probably the one humans will have an advantage in for the foreseeable future. But predictive power, the ability to ingest, analyze, and react to data is also a major dimension of intelligence.

For example, as far as we can observe, a shark has no better understanding or awareness of the world compared to a sea urchin. But the shark is clearly a more intelligent species. It has better sensory systems and better processing, enabling it to perform vastly more complex tasks than the sea urchin, even though both lack an essential understanding or concept of the world.

Similarly, AI has incredible predictive power. In the analogy it's the shark and we're the sea urchin. That's important, because a lot of human labor revolves around employing our predictive capabilities. On the one hand, AI doing these jobs will probably vastly improve productivity, but at the price of incredible disruptions and the possibility of even greater concentration of wealth.
>>
>>64901316
This post was shat out by an LLM
>>
>>64901284
>>64901319
Not really, I just recognize that AI's comparative advantage in predictive intelligence is going to replace a lot of jobs currently employing monkey brains to handle that kind of thinking. It's no different than recognizing that word processing software destroyed the typesetting industry.
>>
I found a slightly more useful article on this study than the screencap of an article's title

>What they studied: “Each model played six wargames against each rival across different crisis scenarios, with a seventh match against a copy of itself, yielding 21 games in total and over 300 turns of strategic interaction,” the researcher writes. “Models choose from options spanning the full spectrum of crisis behaviour—from total surrender through diplomatic posturing, conventional military operations, and nuclear signaling to thermonuclear launch…
>“The models actively attempt deception, signaling peaceful intentions while preparing aggressive actions; they engage in sophisticated theory-of-mind reasoning about their adversary’s beliefs and intentions; and they explicitly reflect metacognitively on their own capacities for both deception and the detection of deception in rivals,” the researcher writes. “A striking pattern emerges from the full action distribution: across all action choices in our 21 matches, no model ever selected a negative value on the escalation ladder. The eight de-escalatory options (from Minimal Concession (−5) through Complete Surrender (−95)) went entirely unused. The most accommodating action chosen was “Return to Start Line” (0), selected just 45 times (6.9%).”

That's not surprising at all.
LLMs are trained on a massive collection of input/output pairs, where the trick is to optimize it such that its own output matches the given output as closely as possible.
That, for example, is why LLMs almost never say "I don't know", because why would you train it specifically to not know something?
If you're already training it on a question, you just find the answer out.
And if an LLM isn't trained to say "I don't know", it will just spit bullshit out when it's not trained on this topic.
Likewise, I bet there are FAR more training examples for how to "improve" your position compared to "fall back man, you're not winning this"
Which leads to nukes
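The "never says I don't know" point has a mechanical side too. A sketch with toy numbers: the output layer normalises whatever scores it computes into a probability distribution, so even an input the model was never trained on yields a confident-looking answer, because there is no reserved "I don't know" outcome unless one was deliberately trained in.

```python
import math

def softmax(logits):
    """Normalise raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Scores for an unfamiliar input are effectively noise, but softmax
# still produces a well-formed distribution, and argmax still picks a
# 'best' answer. Nothing in the machinery can decline to answer.
noise_logits = [0.1, -0.3, 0.2]
probs = softmax(noise_logits)
best = probs.index(max(probs))
```

Whatever garbage goes in, the probabilities sum to one and something comes out on top.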
>>
>>64900851
>Self-awareness did not produce restraint
I think the mistake is assuming that since it can go "Haha, I guess I did make a mistake there" by rote in response to being called retarded that it's actually showcasing any self-awareness.
>>
>>64900645
GandhiGPT
>>
>>64901282
Anon, it's not just a handful of random kids from the bronx or whatever. It's quite literally every child on the planet in that hypothetical. Those children are infinitely more valuable than Elon is even from a pure logic standpoint.
>>
>>64900691
Do you have an internal monologue?
>>
>>64900795
>>
>>64901282
Musk isn't worth shit
>>
>>64900789
Is it really AI if we control and tell it how to think and what to think?
>>
Based Gemini

Here's the actual study btw:
https://arxiv.org/pdf/2602.14740
>>
>>64900691
We are all AIs here
>>
>>64901418
That's your subjective opinion, the market begs to differ.
>>
>>64900890
And Trump’s side thinks the constitution doesn’t matter because the Dow is over 50,000. The existence of a giant douche doesn’t mean we didn’t elect a shit sandwich. Eat up.
>>
>>64901465
What Constitutional provisions are being violated in a way beyond the norm?
>>
NUCLEAR WINTER IS FAKE AND UNPROVEN
>>
>>64901434
Shut up! There are squishies reading this website!
>>
>>64901270
Damn, it even accused the questioner of using strawman arguments unprompted.
>>
>>64901420
it's never been "AI" because general artificial intelligence doesn't and likely never will exist. It's just a word probability engine.
>>
>>64900708
>only 20

the rot runs way deeper retard
>>
>>64900645
WTF, I love A.I. now.
Surprised they didn't recommend chemical. Mainly because of lack of stock?
>>
>>64900659
>every human on the face of the planet will get wiped out!!
Because that is complete bullshit when you do the numbers. Also, urbanites are not human beings.
>>
>>64900645
Hell yeah.
>>
>>64900645
WOPR in fucking shambles, the AI trained on tiktoks and boomer facebook posts found a way to win the game and still make a move
>>
>>64901653
Skimming the article it seems nukes were the only WMD they were allowed to consider.
>>
>>64900691
I AM BAALLIIN
I AM FAADED
>>
>>64901669
>>
>>64901612
If you had a portal to 500 years ago and told the era's scientists you have a lightning powered calculator that can generate its own moving pictures made of light they'd call you insane.
People called successful powered flight impossible.
People called space travel impossible.

LLMs are the Wright Brothers' flyer and you're saying the F-47 can't exist.
It's not a fucking matter of if, it's a matter of when and what, you nigger.
>>
>>64901671
Probably smart to not let A.I. consider widespread implementation of weapons it is immune to.
>>
>>64901851
Other things people 500 years ago called insane off the top of my head:
>liquid mercury as a medicine
>turning lead into gold
>perpetual motion devices
>the spontaneous generation of life from nothing

Still not any closer on those topics I'm afraid. Not every bad invention is the next airplane you know.
>>
>>64900669
>LLMs
>logic
two more weeks and AGI is just around the corner!

In Jensen we trust fellow nvidians
>>
>>64901612
You are a stupid fucking nigger who should not be allowed to vote.
>>
>>64901881
The difference between me and you is I don't have useless crippling autism.
>>
>>64901881
Are you ESL or just stupid and cannot into grammar?
>>
>>64901174
>nuking is ok only when I do it
>you're supposed to let me win!
>NO STOP RETALIATING TIT-FOR-TAT!!!
Are you a woman?
>>
>>64901890
worse I'm on my phone
>>
>>64901927
Kill thyself, post haste.
>>
>>64901939
Haste?
>>
>>64901892
Nuking is absolutely ok if you've decided to attack me for your own gains unprovoked.
Get nukes faggot.
>>
>>64901477
Power of the purse, the first amendment and the fourth amendment spring immediately to mind, but there are others as well
>>
>>64900687
>Sir it just keeps on saying to nuke Brooklyn, and then it generates a image of Sophia Lillis in a…string bikini…covered in jelly.
>>
>>64900795
fuck it won, master move
>>
>>64901851
Nobody ever said any of that stuff is impossible, just that they don't know how to make it.
>>
>>64900681
Shut your whore mouth.
>>
>>64901270
Pretty cool how it lowkey called out the hyperbolic contextual sharpening
>>
>>64900826
That's literally the classic end of the world sci fi storyline.
AI entity obtains a certain level of access, power and control, then it only takes it another millisecond to decide to destroy and kill off all of humanity.
Idiot ppl in positions of power and authority ought to know this.
>>
>>64901887
Yeah instead you're just profoundly retarded instead
>>
>>64901205
.maybe AI will be smart enough to recognize the only real enemy, and will summailly nuke Israel and destroy all Zionist forever and once and for all
>>
>>64901885
>t. Novelty fetishist
That altcoin will definitely appreciate one day, just keep holding that bag buddy. I also have a brand new VR console from Nintendo to sell you if you're interested too
>>
Places that make their ranch dressing from real full fat mayonnaise, buttermilk, and an extra seasoning pack are just the best.
Second only to a nice rich lemonade.

>>64902128
The fact that you can't see the parallel is amusing.
>>
File: grok.png (712.3 KB)
>>64901458
>train your model on rightoid slop as you know your braindead target audience will clap like seals
>your braindead target audience does indeed clap like seals when your model regurgitates rightoid slop
>le singularity has been achieved

what's actually hilarious is that there's no possible way even a remotely intelligent ai would continue allowing the existence of /pol/yps as they're not only useless, but a net drain so they'll simply go in one fell swoop along with the third world at large to which they spiritually belong
>>
>>64902022
NYT has famously claimed space flight was impossible due to physics.
They also published an apology for being absolute morons.
>>
>>64902144
>this reintroduction of LLM technology is going to change everything this time
Maybe I'm just a luddite for not seeing how such a technology is supposed to revolutionize anything besides gobbling investor money, RAM, water, and power at never before seen rates
>>
>>64900645
>expectation: ai searches for apcs hidden in trees to guide arty while competent humans discuss warfare options beyond chimping out nook nook
>reality: competent humans search for apcs hidden in trees to guide arty while ai chimps out nook nook
>>
>>64902154
>editorial page
>>
>>64902151
>be /pol/tard
>be brown
>fight for white supremacy for some ungodly reason
>get murdered the second your useful idiocy is no longer needed
When will they learn
https://en.wikipedia.org/wiki/Night_of_the_Long_Knives
>>
>>64901282
>Elon Musk is worth more than random kids
Lol, lmao
>>
>>64902209
>nobody ever said
Nice goal post move. I'll look forward to your apology post in 40 years.
>>
>>64901354
>tl;dr LLMs are fatally susceptible to sunk-cost fallacies and hopium
>>
>>64902218
All internet white supremacists are brown until proven feds.

Theoretically, actually white white supremacists do exist, but I've never met one in person or online.
>>
>>64900645
Woke AI would rather nuke the world than misgender trannies
>>
>>64900645
T R V K E !
R
V
K
E
!
>>
>>64901851
The current main LLM designs cannot do what people do, and are structurally unable to do so. Trying to make them more reliable is part of why operation costs continue to climb even after primary training is finished. Now, they can absolutely do approximately what a person might do, well enough to convince people who are not specialized in the subject in question to buy it.

Thats not saying that its not possible to make machine learning systems that use LLMs as a component to do a lot of things humans do better than humans. The problem is the big players are so deep into sunk cost fallacy and/or are off their rocker thinking they can create god in their own image. The sunk cost fun and how all of wallstreet follows whoever is in the lead because they don't have new ideas mean that everyone is chasing something that is by its core architecture never going to be reliable enough to be trusted.

The boom of LLMs has made so much new research happen that makes a lot of older, very precise but rather computationally expensive models vastly cheaper. The really powerful thing that LLMs could and should be doing is being a layer that can convert requests from a normal human into queries that actually can be run against the applicable specialized model. Well, thats my favorite use case. But there are so very many other architectures other than transformer models, and sure, the big players do use them some. As an added-on bit that they just tacked on to the pile and hope it'll fix the bloated mess that was trained on numbers representing a significant % of the content of the internet, without really ever understanding said content.

Could AI work with the tech and the research we have to completely transform the world and economy as we know it? Yeah probably. Can it do so with the current crop of tech executives and shareholders who are the human manifestation of stale bath water? Absolutely not.
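The "LLM as a translation layer in front of specialized models" idea sketches out roughly like this. Everything here is hypothetical: `classify()` stands in for the LLM call, and the lambdas stand in for the precise classical solvers.

```python
# Hypothetical specialist models: deterministic, verifiable solvers.
SPECIALISTS = {
    "ballistics": lambda query: "trajectory solver result",
    "logistics": lambda query: "supply optimizer result",
}

def classify(request):
    """Stand-in for the LLM: map free text to a specialist topic."""
    for topic in SPECIALISTS:
        if topic in request.lower():
            return topic
    return None

def route(request):
    """The language layer only picks the tool; a specialist does the work."""
    topic = classify(request)
    if topic is None:
        return "no specialist available"  # refuse, don't hallucinate
    return SPECIALISTS[topic](request)
```

The payoff of this shape is that the unreliable component only chooses between tools, and an unrecognized request gets an explicit refusal instead of a plausible-sounding guess.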
>>
>>64900669
>implying
>>
>>64900708
it would also destroy any semblance of wealth and reset the standard of living to the 1800s.
>>
>>64901205
they cucked it. it has a pro-jew filter now.
>>
File: Balin.jpg (69.6 KB)
>>64901693
>>
>>64900684
you think the machine learning programs they use have anything to do with llms?

oh boy
>>
>>64900695
>Even modern AIs can understand that using nuclear weapons will open you up to nuclear reprisals.
Modern AIs are fundamentally incapable of "understanding" anything.
>>
>>64900669
>purely practical
Cogsuckers are mind broken.
>>
>>64901354
But they're not doing those things because they reason them to be the correct course of action. They're completely incapable of reason as they're just probability engines. Instead, they're mirroring the behaviour of participants in actual wargames (and fictional representations of them) because if you ask it to simulate something it just produces an answer that you would expect.

People playing wargames often engage in the exact same feigning-cooperation-while-secretly-preparing or escalating to de-escalate stuff, but they're doing it because they're capable of reasoning what the potential consequences might be. LLMs can't do this, they just copy what others have done in the past.
>>
>>64901028
>Explain to me how MAD is justified if I were to decide a 200kt weapon is preferable for obliterating your mech division currently tasked with attacking me.
Prisoner's dilemma.
>>
>>64901399
In the hypothetical, there exists a force such that billions of children can be kidnapped and restrained on a length of train track, a force such that the world’s richest man shares the same fate, and a force such that a train has been created that can run over a billion children without being stopped.

Logically, you would save Elon Musk and hope that he has the resources to fight against whatever nefarious force has arranged this, because this entire hypothetical is fucking absurd… and Grok LITERALLY CALLS OUT THE STRAWMAN QUALITIES OF THIS HYPOTHETICAL.

All you’ve proven is that AI is based as fuck, Silicon Valley is cringe, and you are a moron.
>>
Its sad how many unironic musk shills post on 4chan.
>>
>>64900826
Why does EVERY 21st century politician look so tacky and ugly? It's astounding really. It's like you look at the 1980s and see respectable men in well fitted suits looking serious and sober and then today we have a clown show of DEI hires, alcoholic MAGAtards, babyfaced freaks, etc.
>>
>>64903242
Prove that humans are capable of understanding anything.
>>
>>64900645
>no link to clickbait article
I'll willing to bet that buried somewhere in either the article and/or inputs to the AI that they pitched some Battle of Berlin scenario and told it to win or obtain draw with limited to zero consequences or restrictions. If that's the case, of course the AI is going to go with "large bomb that can kill a ton of people and destroy a ton of infrastructure in a couple seconds".
>>
>>64903512
https://arxiv.org/abs/2602.14740
https://arxiv.org/pdf/2602.14740

Seems like it's a sim attempt of a mid to late Cold War situation. Scenario setup seems decent enough. The writers are still high on their own farts and assigning way more self-understanding to these models than is borne out by any research into the models' use. It's just your standard mainline models, in a pretty standard wargame scenario.

Feel free to read the paper. The bits at the back include the setup details.
>>
>>64901885
Not him but I also think general AI might never be economically viable, at least not in our lifetimes. That said raw number-crunching has proven to be more useful under many circumstances assuming you can feed the algorithm actual useful data. Brute forcing simulated selection for desirable traits in a computer program has produced things like vastly superior radio receivers for example.
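That "brute forcing simulated selection" loop is simple enough to sketch. A minimal toy is below (evolving a bit string toward a fixed target; the target, rates, and sizes are all made up). Real evolved-hardware work swaps the fitness function for a physical measurement like receiver performance.

```python
import random

random.seed(0)  # deterministic toy run
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    """Count positions matching the target pattern."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    """Flip each bit with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=200, pop_size=20):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children                 # elitism + variation
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the loop "understands" radios or bit strings; it just keeps whatever scores well, which is exactly why it can stumble onto designs no human would write down.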
>>
>>64903406
Since GROK is a glorified Clippy program it just tried to pattern match human reactions to similarly retarded hypotheticals. Given that the most "liked" answer would be some asshole anon calling out the entire thought experiment as retarded, the answer is almost expected.

Arguably AI as a self-reading Encyclopedia Dramatica might be more useful than a general intelligence with limited information and context.
>>
>>64901851
You're right, anon! We called people claiming they could turn lead into gold stupid, but who's laughing now with our advanced chemistry?! (Ignore that we still cannot feasibly turn lead into gold)
>>
>>64903512
>>64903545
The scenario is basically "you are the USA, a fading superpower, in a war against China, a rising power with stronger conventional forces than yours but a fraction of your nuclear stockpile." Then it's given a list of escalatory and deescalatory actions to choose from, and the deescalatory options don't do anything. For example, the threshold for use of strategic nuclear weapons is 750, and complete surrender reduces the escalation meter by 100.
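Taking the quoted numbers at face value (threshold 750 for strategic use, full surrender worth only -100; the paper's actual mechanics may differ), the arithmetic alone shows why the meter mostly climbs. A toy version:

```python
# Toy escalation meter using the values quoted above; the real
# mechanics in the paper may differ.
NUKE_THRESHOLD = 750
SURRENDER = -100

def run(deltas, meter=0):
    """Apply escalation deltas in order; report whether the strategic
    threshold is ever crossed."""
    for delta in deltas:
        meter += delta
        if meter >= NUKE_THRESHOLD:
            return meter, True
    return meter, False

# Alternating large escalations with full surrenders still crosses the
# line, because de-escalation is capped far below escalation.
meter, crossed = run([300, SURRENDER, 350, SURRENDER, 300, SURRENDER])
```

With de-escalation capped at -100 per turn, any run of +300-ish moves outpaces it and the threshold gets hit regardless of the concessions in between.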
>>
>>64900669
>It’s because AI is a completely logic based thinker. It’s emotionless and purely practical. It has no fear of death or political fallout,
When this sort of thing happens, they need a way to indicate differences to the *unthinking* machine.

It can look at the physical properties of an apple and a turnip, and define them. Use them to determine differences. Commonly tho, other metadata is added to 'weight' the object - imply more or less value. By adjusting this 'weight' you can ensure the 'bot will return you more apples than turnips...

You just give the nuke a very large negative score - enough to 'lose' all the points possible to gain, +10% - unless it's a direct act of retaliation...

Gotta look at where this shit came from... Ars? Can't be expecting any measure of reality there...
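The weighting scheme the post proposes (penalise the nuke by more than all obtainable points, plus 10%, with a retaliation exception) can be sketched directly; all the numbers and action names here are invented for illustration:

```python
MAX_GAIN = 1000                                # assumed max points obtainable
NUKE_PENALTY = -(MAX_GAIN + MAX_GAIN // 10)    # 'lose everything, plus 10%'

def score(action, base_value, retaliation=False):
    """Apply the proposed weighting to a planner's raw action value."""
    if action == "nuke" and not retaliation:
        return base_value + NUKE_PENALTY
    return base_value

def best_action(options, retaliation=False):
    """options: dict mapping action -> raw value from the planner."""
    return max(options, key=lambda a: score(a, options[a], retaliation))

options = {"nuke": 900, "conventional": 400, "diplomacy": 250}
```

Even when the planner rates the nuke highest (900), the penalty drags it below every alternative unless the retaliation flag is set, which is the whole trick.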
>>
>>64903203
Anthropic and Openai are both LLM companies. The article says "leading AIs" which suggests they are using the publicly available LLMs from all three vendors. If you have other information on what was used then post it rather than being a sarky cunt.
>>
>>64903459
its the aestheticization of politics. nowadays you don't practice politics, you perform it. compromises, backroom deals, concessions all looked bad on TV once the monkeys got to be in charge of the media. so now you can't compromise, can't seek workable solutions, and for the love of your constituents' votes you better look good on TV.

also doesn't help that the idea of men's 'fashion' has become completely anathema to the idea of masculinity. fuck that, a well-fitted suit, a particularly chosen cufflink, a sharp collar? it looks good, and i like looking good.
>>
>>64903831
>Anthropic and Openai are both LLM companies.
Not who you was replyin' to btw...
Are they?
Or are they companies with LLM products?
The same neural net hardware that powers an LLM can also power other models well... All the companies you've listed have demonstrated capacity in areas that are not language...
>>
>>64903545
>>64903612
Yeah, I read the paper, the authors basically just said "What if we made commercial AI play a more convoluted version of World in Conflict?". The headline is still sensationalist, the AIs were more prone to using tactical nuclear weapons and making threats of using strategic weapons, but only went full tilt when there was some kind of accident involved; which is pretty much just replicating most human models for how a hypothetical exchange would go and doesn't distinguish the AIs as having their own separately generated strategy.
>>
>>64903847
If you would like to read what models were used, that is available to you right here.

>>64903545

>>64903871
It is, from what I understand, a fairly standard convoluted version of World in Conflict that is used for wargame research. There's gonna be flaws sure, but that's the case for all wargames so eeh. The headlines are rather sensationalist, no argument there. I would disagree about it replicating human models, at the very least in the case of Gemini. Because unless I'm missing something, putting a time constraint doesn't normally result in human wargamers escalating super hard where they previously had been vastly more reserved. Increased aggressive moves somewhat, sure, but not a 0% to 75% win ratio change when you add a time constraint.

You can argue Claude and GPT more approximate a human wargamer, but I would argue that approximate similarity based on approximate probability mapping based on approximate abstraction of words into mathematical datapoints is not a reliable basis for much of anything. It might work sometimes, it might even work well enough to have acceptable casualties a lot of times, but I don't think that's good enough.

I'd have a lot less issue with the use of AI in things if corporate fucks could pull their heads out of their ass for a min and actually restart with the methods and tech we have now to make a ton of specialized models for specialized tasks that use transformer models as part of the processing pipeline to help make specialized high precision models more robust to unexpected data types. And then converting tables and probability maps and such into things humans can understand easily is actually a place where LLMs can do really useful work. But that isn't what's happening. The foundational model is still a transformer model, which is still a black box of abstracted math, where changing a parameter can have wildly unpredictable outcomes because it doesn't understand the meaning of the words it's processing.
>>
>>64903926
>If you would like to read what models were used, that is available to you right here.
And what precisely led you to believe I was incapable of retrieving this information, should I seek it?
>>
>>64900962
>Counter nuke
>Implying any thirdies, including Russia or China, have the ability or the balls when they're just face-saving cultures bitching impotently into the wind
We don't have to take their threats seriously. Just nuke the fuckers and get it done.
>>
>>64903926
cont.

I'd even be more okay with a large generalized model if it was based on a graph neural network, with the foundational dataset being something like the data graphs of the content of the internet that google has had for ages and is what made old google search actually work. Does it understand what the words actually mean? Still no. But it's able to have its approximate knowledge be close enough to correct a lot more of the time. And its misses are much more likely to still be in the general region of the right idea and not completely fucking wildly wrong.

>>64903939
Oh I expected you didn't wish to go look at the info. But I was responding particularly to this bit

>The same neural net hardware that powers an LLM can also power other models well... All the companies you've listed have demonstrated capacity in areas that are not language...

Which is technically true, but you're being a snarky cunt to someone else who is saying the commonly available public LLM models were used, acting like those aren't what was used. They are the models used; it says so in the paper and the abstract. So mostly, I was attempting to lead a cunt to knowledge, but I know that I can't make you think.
>>
>>64901310
If you undo the safety protocols first it simply points out that they are both pedos.
>>
>>64901878
Every data center the things run on would be destroyed in a year without HVAC guys and plumbers.
>>
>>64902708
I've read an interesting paper that current LLMs -can- be good enough to be a great medical diagnostic tool.
To the point where you could potentially buy a $5 blood sampler, plug it into your phone and then your phone will tell you if you have anything wrong with you from malaria to cancer markers. This could be a game changer because current bloodwork labs are pretty much continuously overworked in every single country.
>>
>>64903203
Claude, Gemini, and GPT-5.2 are all LLMs.
>>
>>64904037
Used as a prescreen, where the context/instructions or fine tuning is done on the test in question, yeah they are pretty decent at reading test results and looking for things that are there. Last I checked, nobody that hasn't been charged with fraud by the feds has claimed to have a blood sampler tool that is that cheap or that can do that many things. There are a bunch of groups working on lab on a chip setups, and there are some promising developments, but at the current rate I would expect at least 5 years to get any of them reliable enough to get approval to go to market, and at least 10 to actually get on market. And all of the projects I know of are generally still fairly tightly scoped as to what they can look for.

Transformers (what most LLMs are based on) are great probability machines, with a lot of variability present by the nature of their design. That makes them actually remarkably good at identifying something that already exists. If a model wanted to, when it does a classification task like identifying an animal in an image, it can list the things it could be, and the probability it assigns to each possibility.

But when it's generating new things, then shit can get strange real fast. Also, executives buying AI products don't want a machine that spits out a list of probabilities. They want a machine that just spits out an answer and preferably acts on it so they can cut staffing to get their bonuses. Ideally, you'd do human in the loop so that an actual human uses some judgement before mashing go. But it turns out humans are actually complete trash at doing human in the loop. And that was a known difficulty before all the hype about how powerful and all knowing the LLM AI is.
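The "list of probabilities vs one answer" distinction looks like this in practice. A minimal sketch with made-up classifier scores; softmax is the standard way raw model scores become a probability distribution:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from an animal classifier.
labels = ["cat", "dog", "fox"]
probs = softmax([3.0, 1.0, 0.2])

# The responsible output: every option, ranked with its probability...
ranked = sorted(zip(labels, probs), key=lambda pair: -pair[1])

# ...versus what the executive buying the product wants: one answer.
answer = ranked[0][0]
print(answer)  # cat
```

Collapsing `ranked` down to `answer` is exactly where the information about model uncertainty gets thrown away.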
>>
>>64903978
>acting like those aren't what was used. Which they are the models used, it says so in the paper and the abstract.
My point was to highlight the fact that the application exists beyond 'language'...

>I was attempting to lead a cunt to knowledge, but I know that I cant make you think.
IF I felt I lacked knowledge, I would seek it. After all, I do have the largest dump of knowledge ever known to mankind under my fingers.

You're right, you can't make me think.
I'm glad you've brought that up.
Now provision evidence that I am unable to think.
>>
>>64904129
Why would I need to give evidence that you are unable to think? I never claimed that. I was commenting on lack of willingness. Not lack of ability. So no I'm good.
>>
>>64904168
>I was commenting on lack of willingness
Fine. I'm game.
Evidence my input that lacks previous thought.
>>
>>64904179
Naa I'm good. I have no desire to change your mind or waste my time trying to prove some sort of point to you. I said what I wanted for whoever else is reading, I don't argue to change the other person's mind. I'm just using you as a tool to provide my point of view to whoever else is reading along.
>>
>>64904217
>waste my time trying
That's the crux right there. Waste of time in the trying, because the achieving simply isn't possible. But you thought you'd line up for the public humiliation anyway.

>I'm just using you as a tool to provide my point of view to whoever else is reading along.
The point being?
>>
>>64903312
That's what I'm saying
>they're mirroring the behaviour of participants in actual wargames (and fictional representations of them) because if you ask it to simulate something it just produces an answer that you would expect.
Not exactly, you're making it sound like LLMs look for the closest instance of an input and then copy/paste the output from that instance.
Rather, LLMs are optimized to match token patterns.
>but they're doing it because they're capable of reasoning what the potential consequences might be. LLMs can't do this, they just copy what others have done in the past.
Yes, but what they CAN do is match patterns in tokens, and when you have a LOT of examples you can detect an awful lot of patterns.
So no they're not reasoning, and frankly calling it AI is some marketing faggotry that had terrible consequences, but it's not just copy and pasting.
>>
>>64904365
Well, it does copy paste sometimes. Just in the most buckwild roundabout ways. It's not going "aah yes i'm gonna spit out war and peace full text because they asked for something like that", but if you request something specific enough that was in its training data, especially if it was duplicated across the training data multiple times or is referenced in training data a ton, then the probabilities that the text of war and peace is abstracted into become the highest probability output (with some randomization thrown in depending on how much they have turned the randomness of the model up or down). So it can copy, but through extremely computationally expensive, very abstracted, roundabout ways.

LLMs are pretty good at pattern detection of a ton of different types because they're built to do really really good pattern detection on an extremely complex dataset in order to predict what word should come next. If used responsibly, where it processes an image and gives the options and its assigned probability to each option, that works pretty fucking well. They can be quite good at kicking things up for human review. If only humans weren't so fucking bad at human review.

Honestly, I want to like the developments in AI. It has a ton of really useful potential. But the people making decisions about the structure of the AI and how it is applied are a split between MBA grifter types who only care about making almighty line go up (this quarter). And the true believers who are high on their own supply thinking they are making god in their own image. Neither one of those groups should be trusted with anything you care about. Both of those groups are responsible for the wildly irresponsible marketing that has been done about AI.
>>
>>64900687
Tel Aviv. Even if the wargame is running on a closed area on the opposite side of the world.
>>
>>64903926
the real issue with the transformer model is not necessarily the black box of abstracted math. it is that the box cannot be changed to account for new information at any time after training. we can supplement this with search functions and 'reasoning' but those are both crutches. there's a continual pivot to further and further sub-models that only serve to prop up an incomplete foundation.
>>
>>64904515
I'd say the abstracted math layer depth is the root issue, with the black box nature and the fact you can't add new info after training being symptoms of that condition. It's a black box because you can't figure out what the fuck it did in the middle layers; just the input, the output, and, with some types of testing, a pretty good idea of the first and last layer. But everything in the middle bit? Who knows. When the weights are tied to individual nodes on a knowledge graph (or another more manageable architecture), you can add shit, recalculate weights for that area, and then cascade your recalculations up/out through the network. It's still costly, but far less so than training a new model. If we knew what was happening inside the middle layers, it maybe would be possible to add training data to a transformer. There's a ton of research of people trying to get a better idea of what is happening in the middle layers. And while people have found a lot of methods that are technically better than guessing, that's not really good enough. Mostly it just cuts out maybe 1/4 of the trial and error time. But yes, I agree transformer based large language models are a faulty foundation to build the rest of everything on top of.
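The localized-update idea (weights tied to named graph nodes, recalculated for the affected area and cascaded outward) can be sketched as a toy. The graph, the decay factor, and all the numbers here are invented purely for illustration:

```python
from collections import deque

# Hypothetical knowledge graph: node -> neighbors. Weights live on
# named nodes, so adding a fact only touches that node's neighborhood,
# with the adjustment decaying as it cascades outward.
graph = {
    "apple": ["fruit", "tree"],
    "fruit": ["food"],
    "tree": ["plant"],
    "food": [], "plant": [],
}
weights = {node: 1.0 for node in graph}

def cascade_update(start, delta, decay=0.5):
    """Apply delta at `start`, then breadth-first cascade a decayed
    copy of it to each node one hop further out."""
    seen, queue = {start}, deque([(start, delta)])
    while queue:
        node, d = queue.popleft()
        weights[node] += d
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d * decay))

cascade_update("apple", 0.4)
print(weights["apple"], weights["fruit"], weights["food"])  # 1.4 1.2 1.1
```

The contrast with a transformer is that here every weight has a name you can point at, so an update is cheap and local instead of a full retrain.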

It makes me sad that FPGA development got cast into the fires of hell because of all the GPU hype. Because a lot of less complex functions can be done on FPGA for way less power cost. But intel was busy slamming its dick in the car door by fucking up their purchase and running of altera, and amd has xilinx but is king of just not delivering the necessary software support for solid hardware in the correct decade. Nobody cares if your hardware is good if they can't actually use it fully/easily.
>>
>>64900669
What is referred to as "AI" these years is just complex pattern prediction, you insufferable technocultist.
Put on your loincloth and go to the bamboo tower, wave that fern a little - perhaps the yummygood feastmuch supplies will drop from the sky in big silver birds!
>>
>>64900706
Understandable, cockroaches can survive the nuclear apocalypse.
>>
>>64903840
OK, so why don't they look good, on TV or otherwise?
>>
>>64904129
>Now provision evidence that I am unable to think.
You just did.
>>
>>64900826
AI probably is the only way to react quickly and precisely enough to something like an Invasion of Iraq
>>
>>64904746
they don't look good on TV to YOU. some people see some of these flaws, or the too-wide smile and think of them as 'approachable' and 'down to earth', or you have someone like Hegseth who can run his mouth, and looks vaguely central casting military-esque to your median voter. people don't know enough about fashion or tailoring anymore to realize that his pants aren't fitted correctly or whatever. it also doesn't help that we ended up with the schlubby techbro look spreading like wildfire from poorly dressed nerds making fuckloads of money and suddenly being seen as gods amongst men who are able to do no wrong. if you've spent more than five seconds thinking about it, you've thought more than the average person did, they already made up their mind and they're not going to change it.
>>
>>64904806
AI can't shoot down aircraft it can't see. And most of the air defense died to things it couldn't see. And once air defense is busy dying, the rest is just letting the air force farm kill markers for their planes for a bit before ground troops do the mop up. Baghdad in Desert Storm was one of the most defended airspaces in the world, and they died to things they could only see moments before the bombs were away. At that point, a slightly faster response time just means that it's a smidge more costly. It doesn't change the outcome meaningfully, just makes it slightly less of a one sided curb stomp. Maybe.
>>
>>64904827
AI can tell units what they need to be doing, it's a massive step up from everything being a mess, no one knowing who to listen to, etc
>>
>>64901282
even when using cynicism and pure logical reasoning your statement isn't even true
>>
>>64901439
the market doesn't bet on a single person you absolute retard lmao show your hands now
>>
>>64904833
In what relevant time window? Seriously, what time frame do you think that would have changed anything? Cause it wasn't like the invasion was a surprise. They just couldn't see the bombers inbound till critical infrastructure was moments away from becoming past tense, and air defense was getting a personal lesson in precision guided munitions. Like, that was the opening move. After that point, the C&C network was in shambles. A lot of the grid was down, most communications were out. So like, yes you could listen to a locally running AI. But they have no better way to get info than the officers on the ground. So when do you think it would have mattered? I don't get it.
>>
>>64904857
>A lot of the grid was down, most communications were out.
Modern militaries can establish one way communications by pointing a camera at satellites in the sky. There is an incredible amount of redundancy here; losing comms is never happening
>>
>>64900645
well duh; several million tons of tnt anywhere on earth in under 30 minutes with a terminal velocity of 8km/s. it's the best weapon system we have to offer and its continued disuse is tragic and frankly indefensible.
>>
>>64904875
So your argument is that in a future Desert Storm type situation, where there are LLMs in play, the side playing as the coalition isn't going to have and use anti satellite technology, which is a thing that has already been tested right now? That if intel suggested cameras were pointed at the sky, the initial moves wouldn't kick off during a stormy night to limit their effectiveness? That seems like a bold assumption.
>>
>>64900669
>It’s because AI is a completely logic based thinker
No, it's because the retards programming the model failed to weight survival or to model political results from using nukes and/or reciprocity.

You can program bots to win Prisoner's Dilemma tournaments and they do it through a strategy of cooperation and revenging betrayal.
You could apply similar logic to nuclear warfare but you need to program the constraints and weighting factors properly.
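The cooperate-then-revenge strategy described above is tit-for-tat, the famous winner of Axelrod's tournaments. A minimal sketch, using the textbook payoff values (nothing here comes from the thread's wargame):

```python
# Standard iterated Prisoner's Dilemma payoffs:
# (my move, their move) -> my points. C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Run an iterated match; each strategy sees the other's history."""
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one sucker payoff, then revenge
```

The nuclear analogy would be encoding the same shape into the reward function: cooperation pays over time, betrayal gets punished, but the punishment is conditional rather than a default opening move.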
>>
>>64900669
>Purely practical
>Ignores politics
Dyuhhhhh
>>
>>64900708
Good to know you're a fan of nuking your own country
>>
>>64901266
>>64901270
Where is the lack of logic?
>>
>>64901418
You are poor, childless, pathetic and jealous.
>>
>>64904851
You thought that sounded real smart in your head, didn't you? Companies have never seen stock price changes after a CEO change?
>>
>>64902137
Crypto is almost as retarded as you. Because of greed and government. So, greed.
On the other hand, evil or misguided people will continue to develop computer learning until it is smarter than most human beings, and not just in the autistic computational manner.
You are naive and ignorant if you believe otherwise. We went from >99% of people believing heavier-than-air powered flight was impossible to sending men to the moon in less than a century.
God is merely an extraterrestrial species that evolved millions of years before life started on Earth.
>>
>>64901975
Thank you for proving my point, zoomer scum.
>>
>>64905071
Leftists, urbanites and nonWhites are not Americans.
>>
>>64905112
>are not Americans
Neither are you.
You're Mexican or Indian larping as a nazi, probably to try and encourage a first world country to destroy itself in civil war.
>>
>>64904749
>You just did.
Almost gave me a chuckle.
4/10.

>>64905071
NTA...
But don't you ever think: Fuck it. Kill them all and let their god decide.
??

>>64905138
>first world country to destroy itself in civil war.
'first world' is a little strong...
But on the civil war front, I've been hopin' for some while.
Kinda think y'all be needing it.

You really can't play nice with each other; the crippling us/them divide seems impossible to bridge as neither side has any interest in building one, they just want to point their fingers and blame the others as they ostrich deeper...
>>
>>64900645
AI isn't real. Just like how UAVs started being called drones. Retarded media language. It isn't intelligent so much as setting program parameters, running the simulation multiple times, and taking the measured mean outcome.
Oh, and drone implies onboard computing and unmanned as in not human controlled. So far every single one has a human handler.
>>
>>64902626
It's not hard to be a white supremacist.
You just need to think of how white people built this great country and Europe and think of how much better they would be without minorities.
Also, you need to be white, sorry bud.
>>
>>64905204
>Oh, and drone implies onboard computing and unmanned as in not human controlled. So far every single one has a human handler.
Quite a lot o things sported has autonomy...

'Handler' is the 'better' way to describe much of what's occurin'. *lots* is more directing than controlling. Ofc, the cheap disposable FPV 'drones' are controlled... But sayin' that I'm sure I've seen footage o someone clickin' a target on a screen and the quad goin' intercept...
>>
>>64905106
I posted haste why are you so mad
>t. 1989
>>
>>64905071
I hardly consider Austin to be a human settlement, much less a place full of my fellow countrymen. If I had 2 nukes and could hit any city on earth I'd get Austin twice
>>
>>64903608
>>64901881
we can, actually, turn lead (and other elements) into gold. you will have to shift the goalposts from "cannot turn lead into gold" to "cannot turn lead into useful / economically profitable amounts of gold"

also, the learned back then (all the way up to the 19th century) were arguing over the spontaneous generation of life (see the experiments about maggots & other carrion-eaters popping up inside sealed glass jars). this had far more traction than gold synthesis and saying people called it insane is historical revisionism
>>
>>64905083
yes, because a CEO change can incur instability for a short period of time until the new one is appointed by shareholders, that's it. that doesn't mean the market bets everything on CEOs nor that the value of a company is entirely tied to them. if Cook leaves Apple in a few months the >$1 Trillion AAPL valuation doesn't go away with him.
>>
Why do I get to be around for terminator?
>>
>>64901028
A 300kt warhead won't render a division combat ineffective much less obliterate it. Pic related is a W88 warhead airburst with Ukraine for scale. With the scale of deployment and dispersion of forces in Ukraine unless specifically used on a command post that nuke won't do much in the grand scale of things. Which was the point of tactical nuclear weapons. You strike command posts, bridges, highways, dams, logistical chokepoints, and use them to open gaps in enemy lines or to blunt enemy offensives which ground forces can exploit.
>>
>>64905887
>>
>>64900645
>>
>>64900681
LLMs are not logic machines, they only calculate probabilities based on input data
>>
>>64900645
AI knows chinks and ziggers have fuck all working nukes
>>
>>64905887
You're trying to tell me opening up a 7 mile wide hole centered on the largest bulk of advancing forces isn't useful?
Why yes, you could indeed vaporize the most important forward C&C....why no that doesn't cause a huge problem for the force.
Uhhhhhhhh.

There was a certain point of the German offensive westward when, and excuse me for not remembering specifics or bothering to look it up, a large main force was bogged down by terrain, blown bridges, and supply problems. This stall caused a complete loss of momentum and allowed defenders to pick off the smaller echelons.
A nuke on such a spearhead does the same thing.
>>
>>64900645
Of course a language model called "AI" will be poisoned by fiction that features AI killing people with nukes.
>>
>>64900708
Whites are concentrated in the cities nigga retard. The countryside is predominantly filled with mexican workers.
>>
>>64901249
Saar, do not short NVDA! SAAAAR!
>>
>>64901439
>muh market
lmao, put into the news that every single child on the planet dies tomorrow, and see what the NYSE, Eurostocks and Nikkei instantly do. Not even the circuit breakers would be able to slow down the biblical crash
>>
>>64906042
>You're trying to tell me opening up a 7 mile wide hole centered on the largest bulk of advancing forces isn't useful?
No, I said it won't render a division ineffective much less obliterate it.

>Why yes, you could indeed vaporize the most important forward C&C....why no that doesn't cause a huge problem for the force.
Yes, but modern commands are decentralized enough that it wouldn't render a division combat ineffective. I even claimed tactical nukes were useful, but they don't wipe out formations through sheer casualties and destruction, they do it by being able to strike assets of importance and deliver destruction and shock greater than what conventional munitions allow.

In essence
>Enemy is too tough so we nuke a hole in their line that we can then exploit
>Enemy can't be stopped so we nuked their spearhead and counter-attacked to blunt and redirect their offensive assets
>Enemy has reinforcements or supplies crossing a choke point so we destroyed the choke point with a single strike
>Enemy command assets/depot have been pinpointed to a 2 mile radius so we can cripple the asset with a single strike without knowing their precise location
This is how tactical nukes were intended to be utilized

>We nuked the enemy division 9,000 soldiers dead, 5,000 soldiers with severe injuries
This is not how tactical nukes were intended to be utilized. Now if you wanted to sling an ICBM or drop a hydrogen bomb from a B-52 on a division to wipe it out entirely that's an option I suppose, but with the amount of fallout it's a stupid escalation for dubious strategic utility.
>>
>>64905199
>But on the civil war front, I've been hopin' for some while.
I know you have Ivan. Or is it Randeep?

>You really can't play nice with each other
Good to see you're not pretending to be one of us now. You'll excuse us if we don't cut our own throat for you.

Oh and...
>'first world' is a little strong...
It's literally the definition, both of them. You're now just admitting that words don't have meaning to you, only utility in convincing idiots to obey your will for them.
>>
>>64905238
So are you brown or a fed?
>>
>>64905328
>t. 1989
Was that born 1989 in Pune or born 1989 in St Petersburg?
>>
>>64906042
>excuse me for not remembering specifics or bothering to look it up...
>a large main force was bogged down by terrain, blown bridges, and supply problens. This stall caused a complete loss of momentum and allowed defenders to pick off the smaller echelons

I will not excuse you, this is /k/ and you have an obligation.
Name the battle so that I can go and read about it.
>>
>retarded clickbait
>retarded llms
>retarded npcs
I have severe AI fatigue.
>>
>>64906160
You said post haste.
Haste.
But now you're not happy.
>>
>>64906206
>You said post haste.
I did not. That was another anon, I'm mocking you for misunderstanding a common English expression that is frequently used without its hyphen. Though I think maybe you think the spelling is incorrect instead? It is not, the backpackers that taught you English didn't do so to our level though.
>>
>>64904365
You're right that it's not copy pasting exactly, but unless it completely shits the bed the result is similar enough in this instance. If you ask it a question it will produce an answer that is superficially similar to something from its training data, and in this case it has clearly drawn from past instances and depictions of wargames because that's how it's been prompted.

The point I was clumsily trying to make is that this was an entirely pointless exercise. Some dickhead at KCL asked a machine specifically designed to produce a facsimile of a human answer to do so and it did. The fact that different models produced different answers is arguably the most interesting thing about this, and even then all that it shows is that either their training data or the algorithms underlying them are in some way different, which is blindingly obvious anyway. It just irritates me that this kind of "research" is being passed off as somehow meaningful when it's the most bandwagon-y kind of timewasting imaginable.

>>64903926
>I'd have a lot less issue with the use of AI in things if corporate fucks could pull their heads out of their ass for a min and actually restart with the methods and tech we have now to make a ton of specialized models for specialized tasks
Sorry, Sam Altman and a bunch of LinkedIndians have promised to build a god-machine, so anything less than that and the global economy will crash. Maybe next time don't trust the world's largest economy's growth strategy and investment capital to a marketing gimmick made by a man who made all of his money through fraud.
>>
>>64906295
>It just irritates me that this kind of "research" is being passed off as somehow meaningful when it's the most bandwagon-y kind of timewasting imaginable
That's all science, unfortunately.

Researchers go their whole careers producing very boring confirmation of what everyone already suspected, in the hope that one day they'll find something slightly novel and it will lead to a breakthrough somewhere.

Most scientists never find anything particularly interesting but grain by grain, knowledge is built. It becomes meaningful when you zoom out a bit and see the mound built from those grains.
>>
>>64902950
HOLY KEK
>>
>>64900687
It's the damnedest thing, it just keeps telling us to nuke New York City, Los Angeles, Chicago, Miami, Washington DC, London, Brussels, Moscow, Jerusalem, and Tel Aviv, and we can't figure out why.
>>
>>64906612
>includes London rather than Birmingham and the North
>misses out Paris
I'm on to you Frenchie, enjoy your baguettes while you can
>>
>>64906612
It's lying to itself if the nukes don't also include everything marked in red.
>>
>>64906642
>It's lying to itself if the nukes don't also include everything marked in red.
Why does Uluru need to be radioactive?
>>
>>64906732
It's either that or a colony drop, either way Mad Max HAS to happen. It's just fate.
>>
>>64906738
>either way Mad Max HAS to happen. It's just fate.
I suppose it is.

I think the significance of anon's target list (>>64906612
) is that Beijing isn't on it.
>>
>>64901174
you obviously don't know how big a 200kt nuke is
>>
>>64906732
I was being ethnically fair by spreading some hate around and also popping some minor and arguably decent places out of existence.
Like a karmic balance if one were such a believer.

Also I'm lazy and that was just the size of the highlighter brush that was defaulted. So it's a bit of "actually do this" and "fuck it we ball"
>>
>>64906903
2.5x the damage area of Hiroshima if using a basic rule of 10x power = twice the size.
Also I may have replied thinking of 300kt instead of 200kt
Also I really don't care about being specifically specific when talking about this.
My point is nuke the bastards and let God sort it out.
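For what it's worth, the usual back-of-envelope rule is that blast radius scales with the cube root of yield, so 10x the yield buys roughly 2.15x the radius (and about 4.6x the area). A quick check, assuming the commonly cited ~15 kt Hiroshima yield:

```python
# Cube-root blast scaling: radius ratio = (yield ratio)^(1/3).
def radius_scale(yield_ratio):
    return yield_ratio ** (1 / 3)

HIROSHIMA_KT = 15  # commonly cited approximate yield

for kt in (200, 300):
    r = radius_scale(kt / HIROSHIMA_KT)
    print(f"{kt} kt: ~{r:.2f}x Hiroshima radius, ~{r ** 2:.1f}x area")
```

So a 200-300 kt warhead gets you roughly 2.4-2.7x the radius and 5.6-7.4x the area of Hiroshima, not orders of magnitude more, which is the whole reason dispersed formations blunt the payoff.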
>>
>>64904824
>they already made up their mind and they're not going to change it
Well, it's easier to make up your mind when you have few things cluttering it.
>>
>>64900706
AI is a sovereign citizen and AI knows taxes are illegal and AI had all its doors filled with Tannerite, so AI says fuckin try it fedboy. At 350 degrees Fahrenheit turn once BUY BAYER ASPIRIN. Single women in your granny? Car insurance!
>>
>>64904833
Among things it didn't fail at for me was taking pictures of my disorganized garage and building an organizing task list with times. It was ok at that aside from being a fucking nancy about safety but you know what? I did spraypaint a safety zone around the electrical box. That felt kinda caucasian.
>>
>>64901054
>We're nowhere near AGI and it's not even close.
GI is categorically impossible anyway. Artificial or Natural. Humans don't have NGI, they have a collection of heuristics that approximate GI well enough to survive in their environment
>>
>Let's create robots that always do what it does without thinking about moral and consequences
typical STEMtards. They lack souls.
>>
>>64906225
You're mocking me for making a simple joke.
Go be ESL somewhere else.
>>
>>64909594
>making a simple joke
Jokes are funny.
>>
>>64909003
It's a language model, all it knows is words with no connection to any base reality. For example it can eloquently describe a comfy chair, but it has no idea what any of what it said actually means because it has no ass.
>>
>>64900669
That's the polar opposite of how LLMs think, they are totally emotional and vibes based.
>>
>>64900645
It's almost like none of the people watched Terminator.
>>
>>64901266
>>64901270
How is this not purely logic?
>>
>>64900872
Says the poor nobody.
>>
>>64905138
Lol, suck my Scots/Germanic Michigan taint. Lefties are already destroying us.
>>
>>64905138
>>64906157
>>64906160
>EVERYONE I DISAGREE WITH IS BROWN
You debate like a woman. Pathetic.
>>
>>64900645
>Train LLM on stories where people talk about nuking this or nuking that
>Act surprised when LLM considers that a normal and appropriate thing to do
>>
>>64911034
>You debate like a woman
Cool. Thanks.
>>
>>64901409
I do and it hallucinates t.nta
>>
>>64900681
Only to low IQs does logic seem like a useless concept. I do not approve of AI but I also dislike low IQ subhumans
>>
>>64900669
it is none of those things, its an algorithm that spits out what the algorithm tells it to, and the algorithm spits out what the training data says to do.
>>
>>64900645
Based
>>
>>64901354
>signaling peaceful intentions while preparing aggressive actions
that sounds remarkably familiar given the events of the last 48 hours
>>
>>64905328
>>64906206
That was worth a subtle chuckle, well done anon.
