Why people don't understand AI
Hordes of people have flocked to the latest tech trend, but few actually understand what it is or how it works.
Introduction
In a recent high-profile summit, I observed world leaders and influential policymakers come together to tackle the increasingly pertinent issue of Artificial Intelligence (AI). This gathering, reminiscent of the forums at Davos, included a mix of personalities, with controversial tech visionaries like Elon Musk participating in pivotal discussions about AI's future.
As I followed the proceedings, it became evident that there were concerns about the depth of AI understanding among many attendees. The summit's aim to shape policies guiding AI integration into business and product development seemed overshadowed by apprehensions that those making these decisions might not fully comprehend AI's complex nature.
From my perspective, insiders suggested that the emerging policies might favor well-established corporations, potentially sidelining startups and smaller firms. This development raises questions about the equitable distribution of AI's benefits across the business landscape.
Furthermore, the summit has been critiqued for occasionally highlighting individuals with more nominal than substantive AI expertise. This trend of showcasing so-called experts, whose credentials often seem more aligned with public relations than technological acumen, has led to skepticism about the effectiveness of the proposed policies.
So what the hell is it anyway?
In short… a statistical prediction model - that's it. Don't get me wrong, the likes of ChatGPT are mind-bogglingly huge statistical models, but at their core they are still statistical models: for this set of words, in this specific order, the most likely output is [this result].
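To make that concrete, here is a deliberately tiny sketch of the same principle: a bigram model that counts which word most often follows another, then "predicts" by picking the most frequent continuation. The corpus and function names are mine for illustration; real LLMs operate on tokens, with billions of learned weights rather than raw counts, but the idea of statistically likely next output is the same.

```python
from collections import Counter, defaultdict

# A toy corpus; real models train on trillions of tokens, not one sentence.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

There is no intent or understanding in that function, only frequency; scaling the counts up to learned weights over vast data changes the quality of the predictions, not the nature of the mechanism.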
So with this in mind, what real threat does AI pose? I've written about the dangers of quantity, but ultimately predictive outcomes themselves do not pose much real threat. I doubt we will see a singularity event occur with this as the current approach.
It's crucial to remember that a predictive model doesn't equate to a form of intelligence with ambitions of world domination. These models don't possess independent thought; they merely generate the most probable response to a given input. Perhaps my view is influenced by my understanding of the underlying models and functions, where algorithmic weightings are key. But from this perspective, the fear of AI taking over the world seems more speculative than imminent.
Where the money is…
Over the past year, a notable trend has emerged in the tech industry: the rapid influx of enthusiasts and opportunists onto the AI bandwagon. This phenomenon isn't entirely new; the tech sector has historically witnessed similar rushes, like the crypto craze and the dotcom boom. Whenever there's a hint of financial opportunity, a certain contingent invariably emerges, eager to capitalize. Yet, the scale at which this has happened with AI is unprecedented and, frankly, astonishing.
Historically, the tech industry, particularly the AI sector, was the bastion of engineers and technologists, known for its elitism and meritocratic ethos. It's somewhat ironic, then, to see this field now inundated with individuals whose expertise is, at best, superficial. We're witnessing a scenario where mainstream media presenters claim deep AI knowledge based merely on a blog post they wrote or a YouTube video they watched. The same can be said for numerous self-proclaimed "AI gurus" on platforms like LinkedIn, many of whom likely lack hands-on experience with creating statistical models, working with model weights, or using advanced tools like TensorFlow, PyTorch, or Hugging Face.
This influx of novices and profiteers in the AI space has not only diluted the pool of genuine expertise but also influenced the broader narrative around AI. As a result, there's a growing clamor for stringent regulation, fueled by exaggerated fears that AI could potentially wreak havoc on a global scale. This sentiment has reached such a fever pitch that it has led to high-level international meetings, with leaders debating over how to contain this perceived threat.
In essence, the current landscape of AI is a blend of genuine innovation and opportunistic exploitation, a dichotomy that reflects the complexities and contradictions of the tech world. The challenge now is to navigate this terrain, ensuring that the development and application of AI are guided by informed perspectives rather than sensationalist rhetoric.
So is there a danger?
The advancing capabilities of AI are poised to profoundly transform various industries. Journalism, copywriting, stock imagery and artwork, and stock music production are all increasingly becoming domains where AI can excel. This technological shift presents both opportunities and challenges.
One significant concern is the potential for AI to exacerbate the proliferation of low-quality, clickbait-heavy websites. These sites, which contribute little value beyond optimising for search engine visibility, could multiply rapidly as AI simplifies content creation. Additionally, the rise of AI-generated fake reviews, tweets, and articles is a looming issue. These developments threaten to further obscure the line between authenticity and fabrication, making it increasingly challenging to discern genuine content on the internet.
These are real issues that should be at the forefront of discussions when world leaders convene to deliberate on AI. However, a critical gap exists: the experts who truly understand these nuances and implications often seem absent from these high-level discussions. This absence raises concerns about the direction and effectiveness of any policies or strategies that may be devised. For meaningful progress, it's imperative that those with a deep understanding of AI's impact, both positive and negative, are included in these vital conversations.