Why People Don't Understand AI
Hordes of people have flocked to the latest buzzing trend, yet so few actually understand what it is or how it works.
Introduction
Just recently, world leaders gathered to discuss the looming threat of Artificial Intelligence and how best to deal with it. A global convergence of the most important people discussing AI as if they were the domain experts and knew everything about it (maybe with the exception of Elon Musk). This convergence is leading to policy changes that could shape the future of how businesses build products with AI. The problem is that these people know little of how AI actually works, decisions are seemingly being made to aid large existing corporations, and newer companies are being left by the wayside. Add to this a common pattern of grifters being presented as the experts of the whole thing and you start to see cracks emerge.
So what the hell is it anyway?
In short… a statistical prediction model - that’s it. Don’t get me wrong, the likes of ChatGPT are mind-bogglingly huge statistical models, but they are still, essentially, statistical models. For this set of words, in this specific order, the most likely output is [this result].
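To make "most likely output" concrete, here is a deliberately toy sketch of that idea: a bigram model that counts which word most often follows another in a corpus. This is my own illustration, not how ChatGPT actually works (real models use neural networks over billions of learned weights), but the core notion - predict the most probable continuation from statistics over past text - is the same.

```python
from collections import Counter, defaultdict

# A tiny toy corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

No thinking, no intent: just counting and picking the highest-probability continuation.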
So, with this in mind, what real threat does AI pose? I’ve written about the dangers of quantity; but ultimately predictive outcomes themselves do not pose much real threat. I doubt we will see a singularity event occur with this as the current approach.
A predictive model is not a form of intelligence that will aim to take over the world. It doesn’t think for itself; it simply produces the most likely outcome in response to a particular input. Maybe my brain simply can’t move past the models and functions that operate under the hood, how the weighting is calling the shots - but honestly, I don’t see a monumental danger in AI taking over the world right now.
Where the money is…
One thing that has struck me over the past year or so is the sheer volume of people who have jumped onto the AI bandwagon. Don’t get me wrong, it’s a common theme in the tech industry to jump on opportunities, much like the crypto rush or the dotcom boom; when the scent of money is in the air, the hordes of charlatans will accordingly appear. But AI seems to have taken this to an extreme I have never seen before.
It’s always struck me as surprisingly odd that an industry that was built upon the work of engineers and technologists, that was fiercely elitist in nature, and that created one of the most meritocratic systems ever should give away its wares to useless morons who seem to want to capitalise on the whole thing.
But alas, here we are, watching presenters in mainstream media who claim to understand AI intrinsically because they wrote a blog post once or saw a YouTube video. Or how about the legions of LinkedIn maniacs who splurge out streams of nonsense about how they are AI gurus? Have they ever created a statistical model, worked with model weights, built anything with TensorFlow or PyTorch, or even looked through Hugging Face? I doubt it.
What we currently have is a bunch of people trying to capitalise on the latest technology trend, eager for their share of the pie, and a press industry that facilitates them without prejudice.
And this is ultimately why we now have global meetings over fears that AI will destroy the world, and therefore must be regulated - heavily.
Conclusion
So is there a danger?
I do see AI having a profound effect on mundane and generic work that could be outsourced to it: journalism, for example, along with copywriting, image content, and the like. The sea of awful websites that provide nothing of value other than clickbait SEO traps - these will be AI’s domain.
There is, however, one major concern I have with AI, and that is quantity. It’s always been hard to get noticed within creative industries, but at least the sea of products people were competing