AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
END my suffering
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
definitely interested
wrong place for this. joint probabilities joke was kinda fire though
Before 2030, do you consider it more likely than not that current AI techniques will scale to at least average human level in at least 25% of the domains that humans can do?
There is no set of domains over which we can quantify to make statements like this. “at least 25% of the domains that humans can do” is meaningless unless you willfully adopt a painfully modernist view that we really can talk about human ability in such stunningly universalist terms, one that inherits a lot of racist, ableist, eugenicist, white supremacist, … history. Unfortunately, understanding this does not come down to sitting down and trying to reason about intelligence from techbro first principles. Good luck escaping though.
Rest of the questions are deeply uninteresting and only become minimally interesting once you’re already lost in the AI religion.
I am just now learning about Urbit 🤔
Omfg I have a coworker who writes stuff like this it’s actually uncanny
The next comment is so peak tech hubris to me.
It’s “just” predicting the next token so it means nothing
This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.
The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing that accomplishes a more complex task. And emergence is everywhere in nature as well.
This is the part of the AGI Discourse I hate, because anyone can approach it with aesthetics and analogies from any field at all to make any argument about AI, and it's just mind-grating.
This form of argument should raise red flags for everyone. It is an argument against the possibility of emergence, that a sufficient number of simple systems cannot give rise to more complex ones. Human beings are “just” a collection of cells. Calculators are “just” a stupid electric circuit.
I’ve never seen a non-sequitur more non. The argument is that predicting the next token is categorically not what language is. That is, it’s not that there is nothing emerging, but that what is emerging is just straight up not language.
The fact is, putting basic components together is the only way we know how to make things. We can use those smaller components to make a more complex thing that accomplishes a more complex task. And emergence is everywhere in nature as well.
“Look! This person thinks predicting the next token is not consciousness. I bet they must also not believe that humans are made of cells, or that many small things can make complex things. I bet they also believe the soul exists and lives in the pineal gland just like old NON-SCIENCE PEOPLE.”
There isn’t a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn’t also fail
Man it’s so sad how this is so so so so close to the point. They could have correctly concluded that this means GI as a concept is meaningless. But no, they have to maintain their sci-fi web of belief, so they choose to believe LLMs Really Do Have A Cognitive Quality.
boooooo fucking hooooooooooooooooo
I’m just as clueless. I think there are three syllogisms that tech brains orbit around.
dont worry once we get AGI it’ll figure out how to run itself on an intel 8080 trust me i thought about it really hard
Happy birthday!