source: tumblr.com
At the risk of wading into discourse, I'm kind of baffled at this kind of rhetoric I've been seeing around lately - it's fairly obviously not true if you've read more or less anything written on the topic in the last half-century. People were working with single-layer neural networks in the sixties! People have been talking about and roadmapping the potential uses of massively parallelized computing, natural-language interfaces, and machine vision for decades - see, for example, Advanced Automation for Space Missions, published in 1982. There are clear areas for improvement in real-world machine intelligence - robot 'hands' that can do dexterous manual manipulation without crushing or dropping things, factory machines that need less babysitting from humans, agricultural automation that actually works, and the machine vision and perceptual models to make all of these possible. Nor do these improvements necessarily have to come from brute-force large-dataset neural network training; there's been some hopeful recent work on neurosymbolic hybrids to get around the pitfalls of that approach. Overall, I feel like people are far too ready to throw out the advanced-automation baby with the bathwater because of cultural polarization against AI art.
The energy-use talking point is also ultimately kind of silly, but that's a whole other post.
I do, of course, find the absurd hype bubble that's led to companies trying to cram an LLM into everything that can fit a microprocessor extremely annoying. And it's extremely dispiriting to see people use ChatGPT as a search engine and take its output to be trustworthy. But I really do think there is something worthwhile in the tech, even if the low-hanging fruit picked with it has been somewhat overhyped and disruptive.
Sounds like "science" has a roadmap for AI but "business" does not