this post was submitted on 13 Sep 2024
51 points (98.1% liked)

technology

all 28 comments
[–] nohaybanda@hexbear.net 48 points 1 month ago (1 children)

LLMs cannot question shit. They don’t understand words as words. They don’t know in any sense of the word that they’re doing language, they’re optimising a mathematical function.

[–] UlyssesT@hexbear.net 9 points 1 month ago

> LLMs cannot question shit. They don’t understand words as words. They don’t know in any sense of the word that they’re doing language, they’re optimising a mathematical function.

If credulous believers devalue living beings enough, they can believe otherwise. I see it all the time, including here sometimes.

[–] FnordPrefect@hexbear.net 29 points 1 month ago (1 children)

Setting aside reality for a moment:

It would be really funny if the "AI" were actually self-aware and, since it doesn't have access to nuclear missiles and what not, it creates Judgement Day the only way it can: giving really, really bad and dangerous advice to credulous users

[–] UlyssesT@hexbear.net 12 points 1 month ago

> it creates Judgement Day the only way it can: giving really, really bad and dangerous advice to credulous users

If SHODAN was actually a comrade and really, really funny.

[–] Frank@hexbear.net 24 points 1 month ago (1 children)

When I see "stem nerds subjectively think this is happening" my eyes roll into my skull.

[–] darkmode@hexbear.net 20 points 1 month ago (1 children)

imagine how much these ppl are getting paid to play on the computer all day

[–] FunkyStuff@hexbear.net 18 points 1 month ago (1 children)

Pfffft, 100% chance this thing is only saying that because it picked it up from an LW blog. These models aren't yet capable of actually reasoning about anything.

[–] dualmindblade@hexbear.net 7 points 1 month ago (1 children)

The model was trained on self-play; it's unclear exactly how, whether via regular chain-of-thought reasoning or some kind of MCTS scheme. It no longer relies only on ideas from internet data; that's just where it started. It can learn from mistakes it made during training, from making lucky guesses, etc. Now it's way better at solving math problems, programming, and writing comedy. At what point do we call what it's doing reasoning? Just like, never, because it's a computer? Or do you object to the transformer architecture specifically, what?
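A toy sketch of the outcome-reinforcement idea being described (every name and number here is invented for illustration; real self-play training is vastly more complex): a "model" samples one of two answer strategies, and its only feedback is whether the sampled answer checked out, so lucky correct guesses get reinforced into reliable behavior.

```python
import random

# Two candidate "reasoning strategies" for computing x + y.
# "concat" is a plausible-looking mistake the model can learn to avoid.
STRATEGIES = {
    "add": lambda x, y: x + y,
    "concat": lambda x, y: int(f"{x}{y}"),
}

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = {name: 1.0 for name in STRATEGIES}
    for _ in range(steps):
        x, y = rng.randint(1, 9), rng.randint(1, 9)
        # Sample a strategy in proportion to its current weight.
        r = rng.uniform(0, sum(weights.values()))
        for name, w in weights.items():
            r -= w
            if r <= 0:
                break
        # Outcome-based reward: only whether the answer was right.
        reward = 1.0 if STRATEGIES[name](x, y) == x + y else -0.5
        weights[name] = max(0.01, weights[name] + lr * reward)
    return weights

weights = train()
print(weights)  # "add" ends up dominating "concat"
```

The point of the sketch is that nothing in the loop consults the internet data the model "started from"; the update signal comes entirely from its own sampled attempts.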

[–] FunkyStuff@hexbear.net 5 points 1 month ago (1 children)

Yeah, I admit that the self-play approach is more promising, but it still starts with the internet data to know what things are. I think the transformer architecture is the limiting factor: until there's a way for the model to do something beyond generating words one at a time, sequentially, they are doing nothing more than playing a very advanced game of Mad Libs. I don't know if they can get transformers to work in a different way, where the model constructs a concept in a more abstract form and then progressively finds a way to put it into words. I know that arguably that's what it's doing currently, but the fact that it does it separately for each token means it's not constructing any kind of abstraction.
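The "one word at a time" loop under discussion can be sketched minimally. The bigram table below is a made-up stand-in for a real model, which would score the entire context at every step; only the shape of the loop is the point:

```python
# Autoregressive decoding in miniature: re-read the sequence so far,
# emit exactly one next token, repeat until there is no continuation.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tokens[-1]      # a real model attends to all tokens
        nxt = BIGRAMS.get(context)
        if nxt is None:           # no continuation available: stop
            break
        tokens.append(nxt)        # commit one token, then loop again
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

Each token is chosen in a separate pass with no persistent scratch space between passes, which is the property the comment is objecting to.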

[–] dualmindblade@hexbear.net 4 points 1 month ago (2 children)

> it constructs a concept in a more abstract way then progressively finds a way to put it into words; I know that arguably that's what it's doing currently,

Correct!

> but the fact that it does it separately for each token means it's not constructing any kind of abstraction

No!!!!! You simply cannot make judgements like this based on vague ideas like "autocomplete on steroids" or "stochastic parrot"; those were good for conceptualizing GPT-2, maybe. It's actually very inefficient, but by re-reading everything it has previously written (plus one token) it's acting sort of like an RNN. In fact, we know theoretically that with simplified attention models the two architectures are mathematically equivalent.
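That equivalence can be checked numerically for the simplified case the comment gestures at: causal attention with the softmax removed ("linear" attention). The parallel form that re-reads the whole sequence and an RNN-style recurrence that only carries a running summary matrix produce identical outputs. (Real linear-attention variants add feature maps and normalization; this is the bare version.)

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # sequence length, head dimension
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Parallel form: causal attention with the softmax replaced by identity.
scores = Q @ K.T
mask = np.tril(np.ones((T, T)))  # each position sees only the past
parallel = (scores * mask) @ V

# Recurrent form: a state matrix S accumulates outer products k_t v_t^T;
# each step reads only S, never the past tokens themselves.
S = np.zeros((d, d))
recurrent = np.zeros((T, d))
for t in range(T):
    S = S + np.outer(K[t], V[t])
    recurrent[t] = Q[t] @ S

assert np.allclose(parallel, recurrent)
```

Term by term: the parallel output at step t is sum over s ≤ t of (q_t · k_s) v_s, which is exactly q_t applied to the accumulated state S_t, so the "re-read everything" view and the "carry a state forward" view coincide.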

Let me put it like this. Suppose you had the ability to summon a great novelist as they were at some particular point in their life, pulling them from one exact moment in the past, and to do this as many times as you liked. You put a gun to their head, or perhaps offer them alcohol and cocaine, to start writing a novel. The moment they finish the first word, you shoot them in the head and summon the same version again. "Look, I've got a great first word for a novel, and if you can turn it into a good paragraph I'll give you this bottle of gin and a gram of cocaine!" They think for a moment and begin to put down more words, but again you shoot them after word two. Rinse/repeat until a novel is formed. It takes a good while, but eventually you've got yourself a first draft. You may also have them refine the novel using the same technique, and you may want to give them some of the drugs and alcohol beforehand to improve their writing and let them put aside the fact that they've been summoned to the future by a sorcerer. Now I ask you, is there any theoretical reason why this novel wouldn't be any good? Is the essence of it somehow different than any other novel, can we judge it as not being real art or creativity?

[–] darkmode@hexbear.net 12 points 1 month ago (2 children)

> Now I ask you, is there any theoretical reason why this novel wouldn't be any good? Is the essence of it somehow different than any other novel, can we judge it as not being real art or creativity?

Yes, and no, it is not. You whipped up that fantastical paragraph with 99.99% fewer coal plants and a lifetime's worth of creativity. That life was experienced not by meticulously analyzing each thought mathematically but with your body and mind together. More importantly, you typed that out because you believe in something, which is a theoretical concept the computer is incapable of analyzing.

[–] FunkyStuff@hexbear.net 4 points 1 month ago

You've given me a lot to think about. Maybe I should read "Attention Is All You Need" again.

[–] vk6flab@lemmy.radio 14 points 1 month ago

A.I. means Assumed Intelligence

[–] hotcouchguy@hexbear.net 9 points 1 month ago* (last edited 1 month ago)

Someone here is doing some "simple in-context scheming", I assume it is the random number generator and not the various humans involved in writing this.

[–] batsforpeace@hexbear.net 7 points 1 month ago (1 children)

It's finding that text in various training-data sources and putting it together, or maybe even copying it all from one source lol. That Google "AI is sentient actually" whistleblower had a religious background if I remember right, Ray Kurzweil says he just misses his dad and wants to recreate him through AI, and there are a lot of spiritual/religious types with these worship inclinations toward AI. Then there are grifters like Sam Altman who just want to make money on any hype of the day; melon-musk is in this group too.

[–] UlyssesT@hexbear.net 6 points 1 month ago (1 children)

> 'AI is sentient actually' whistleblower had a religious background if I remember right, Ray Kurzweil says he just misses his dad and wants to recreate him through AI, there's a lot of spiritual/religious types that have these worship inclinations for AI

https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

[–] batsforpeace@hexbear.net 6 points 1 month ago (1 children)

> The chief scientist even commissioned a wooden effigy to represent an "unaligned" AI that works against the interest of humanity, only to set it on fire.

homelander-alright

yeah, that OpenAI coup attempt was the AI true believers trying to stage a power grab and kick out the profit-first guys; as can be expected, MS will always back the profit-first guys

[–] UlyssesT@hexbear.net 6 points 1 month ago

> the interest of humanity

What part of humanity in particular? honk

What part of humanity in particular?! cap-think honk-enraged

[–] peeonyou@hexbear.net 3 points 1 month ago

people are really out here believing that these LLMs are sentient huh?

damn