According to a recent TechCrunch article -
Nvidia’s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away
See - https://techcrunch.com/2024/03/19/agi-and-hallucinations/
I don't think it's impossible to achieve AGI, but I do know it's not possible to predict when this will occur. I very much doubt it will be in the next five years, but if it is, it's pure coincidence. Note that claims of this sort tend to come in digital intervals. (If it's not five fingers from now, it's ten...) We've been hearing claims about strong AI for fifty years or more now. As usual, the person writing the headline appears not to have read the content. What the article states is:
- “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within 5 years,” Huang explains. He suggests that the tests could be a legal bar exam, logic tests, economic tests or perhaps the ability to pass a pre-med exam. Unless the questioner is able to be very specific about what AGI means in the context of the question, he’s not willing to make a prediction.
There's a very big if here.
Watch out for this 'stipulative definition' move. This is how AI came to mean what it does today. Fifty years ago, AI meant only one thing. Take ALL claims about AGI with a pinch of salt. Wait for more explicit claims about high-IQ, sentient intelligence, etc.
AGI In Five Years
Unknowable. Unprovable. Unlikely. Unconvincing. There are too many uns. I could go on. When in doubt about the feasibility of AGI, always remember The Chinese Room argument. Syntax isn't evidence of semantics.
What does that mean? In short, it means that an empty can can make a lot of noise. There's no conscious content. There are no thoughts, even if the machine gives a damn good impression of thinking. There is no introspection. There are no emotions. There is no sentience. No mind. No thinking thing. I don't use the term soul lightly, but let's just say there's no soul either.
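To make the point concrete, here's a toy version of Searle's room in code - a minimal sketch, assuming nothing beyond a lookup table. It produces plausible-looking replies by pure symbol matching, with nothing anywhere in the system that understands a word of what it 'says':

```python
# A toy 'Chinese Room': pure symbol manipulation, zero understanding.
# The rulebook pairs input shapes with output shapes; the translations in
# the comments exist only for the reader, not for the program.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢!",      # "How are you?" -> "I'm fine, thanks!"
    "天空是什么颜色?": "蓝色的。",    # "What colour is the sky?" -> "Blue."
}

def room(symbols: str) -> str:
    # Match the incoming string of symbols, return the paired string.
    # Syntax only: no meaning is represented anywhere in this function.
    return RULEBOOK.get(symbols, "请再说一遍?")  # "Say that again, please?"

print(room("你好吗?"))  # Fluent-looking output from an empty room.
```

Scale the rulebook up by a few billion parameters and the replies get far more impressive, but the in-principle point stands: fluency of output is not evidence of a mind behind it.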
And if all this doesn't convince you, keep in mind (in the real, human sense...) that the people working on AI aren't neuroscientists or neurologists, and that while AI output can be an eerily convincing imitation of the output of human general intelligence, the illusion is achieved through the use of technology that bears no actual resemblance to the human brain in structure. Arguably, the connection between biological brains and computers is metaphorical. (See - Why your brain is not a computer | Neuroscience | The Guardian)
The illusion that a network inevitably leads to consciousness is reflected in ideas like the notion that the internet is alive. Similar thinking sees the solar system as a giant consciousness, ant and termite colonies as 'hive minds', jungles as giant green brains, etc. The illusion is strengthened because the computer 'mind' does in fact create output - from pictures to paragraphs - that makes sense to us, and because the machine, using our input (i.e., the input from our minds), can improve its ability to produce what we regard as a decent outcome. If we train our AI models with wonky data, the output will be wonky too. The traditional term for this is GIGO: garbage in, garbage out.
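For the sake of illustration, here's a minimal sketch of GIGO, assuming nothing about any real AI system: a toy next-word 'model' that can only ever reproduce the statistics of whatever we feed it.

```python
# GIGO in miniature: a toy next-word predictor with no knowledge beyond
# the co-occurrence counts of its training sentences.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which - the model's entire 'knowledge'."""
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            table[a][b] += 1
    return table

def generate(table, start, length=5):
    """Follow the most likely next word at each step; no understanding involved."""
    word, out = start, [start]
    for _ in range(length):
        if word not in table:
            break
        word = table[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

clean = ["the sky is blue", "the grass is green"]
wonky = ["the sky is green", "the grass is blue"]  # deliberately wrong data

print(generate(train(clean), "the"))  # -> "the sky is blue"
print(generate(train(wonky), "the"))  # -> "the sky is green" - wonky in, wonky out
```

The model can't tell the clean corpus from the wonky one; it faithfully reproduces whichever it was given.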
We have certainly reached a turning point in data presentation. But don't fall for the cozy, unconvincing language of Bing's Copilot, which tries to speak in the first person and uses emotional language to talk about topics such as the recent death of a celebrity. This is all presentation-layer stuff. It's a double bluff. There are two layers: the outer presentation layer - the language, the style, and the subtleties of the syntax - and the content or data layer, which is the information being generated (more often than not, regurgitated through rapid Google searches) and presented via layer one.
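To see how thin that outer layer can be, here's a hedged, hypothetical sketch (none of this is how Copilot is actually built, and the function names are mine): the 'content layer' is just retrieved data, and all of the first-person warmth lives in a string template.

```python
# All names hypothetical - a two-layer chatbot in miniature.

def content_layer(query: str) -> str:
    # Layer two: the data. A stand-in for search results or model output.
    facts = {"capital of France": "Paris"}
    return facts.get(query, "no data")

def presentation_layer(fact: str) -> str:
    # Layer one: the 'personality'. It lives entirely in string formatting.
    return f"Oh, I'm so glad you asked! I believe the answer is {fact}."

print(presentation_layer(content_layer("capital of France")))
# The warmth belongs to the template, not to any mind behind it.
```

Swap the template and the 'personality' changes completely while the data layer never notices. That's the double bluff.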
A Quick Note On AI Hallucinations
I don’t work in AI. I know nothing more about AI than the average mildly techy guy, but for what it’s worth, far from being a problem with AI, the realm of so-called AI hallucinations (AI just making stuff up) is precisely where I’d be looking for the first signs of anything like AGI processes.
I’ve seen what AI can do when generating imagery with minimal prompts. It’s quite beautiful. The dreamlike quality is unmistakable. For some examples of my low-grade AI art/animation experiments head here: Tentative Steps In AI Animation - by Gavan Stockdale (substack.com)
Try it yourself at https://pika.art/
See - The Chinese Room Argument
GS