ChatGPT heralds a new era for artificial intelligence, but the same rules apply
A branch of AI research called generative AI has reached a tipping point, generating images and writing text responses to user prompts that surpass the sophistication and accuracy of anything machines have produced before.
This culminated in OpenAI’s release of ChatGPT – a free-to-use text chatbot – late in 2022, which has shown remarkable results ranging from answers to essay questions to writing song lyrics in the style of Taylor Swift.
But what is really game-changing here?
What impact will it have on business models big and small?
Are investors right to join the public excitement or are there reasons to temper our enthusiasm?
Stepping back, we can trace ChatGPT’s abilities to Google’s 2017 breakthrough in a deep learning technique called a ‘transformer’, a form of machine learning used for training generative AI models.
The transformer enabled speech and text to be converted into machine-readable representations, and it kicked off exponential growth in the size of AI models trained on massive data sets that could then generate text, images, even music.
As the number of model parameters expanded, improvements in the accuracy and sophistication of the models’ outputs dramatically increased.
The early generative AI models started with around 100 million parameters.
The GPT-3 model that led to ChatGPT’s chatbot service was originally built on 175 billion parameters.
Newer models are over 500 billion, and some even approach 1 trillion parameters.
None of this comes cheap.
Reports on Microsoft’s investments in its partnership with OpenAI suggest over $11bn has been spent to date, enabling OpenAI to offer the ChatGPT service for free to the general public – arguably the greatest step change the project has delivered, opening the floodgates of innovation to all.
Computer servers used for AI require significantly higher-performance, more expensive semiconductors to cope with the demands of ingesting and making sense of the massive datasets scraped from the public internet.
This huge upfront cost has until now been a block to widespread access to powerful AI tools.
Public clouds drive innovation and competition – a redux
The recent history of public cloud evolution provides a blueprint for how the AI economy will grow and prosper.
In the last decade, the public clouds have pursued a dual track: building their own software applications and tools native to their clouds, while at the same time partnering with third-party vendors that build competing, often superior, products.
Datadog’s observability tools compete with AWS’s CloudWatch; Netflix competes with Amazon Prime, and so on.
The largest infrastructure providers and disruptive new entrants can both thrive, as exponentially improving capabilities sustain improvements in existing products and create whole new markets yet to be monetised.
Speedbumps will inevitably temper growth
A dose of caution is needed though.
As AI is trained on vast open datasets, often without human supervision, the risks of bias and prejudice are ever present.
OpenAI’s improvements to their models have included reinforcement learning with human feedback (RLHF) to ‘teach’ the models not to be racist, misogynist or otherwise biased.
Cyber security analysts are concerned that generative AI models will raise both the quantity and the sophistication of phishing and malware attacks used to steal security credentials and sensitive data.
Companies looking to adopt AI are wrestling with a balance: go all-in on AI, with all the risk of reputational damage and regulatory penalties that entails, or move to market more slowly while addressing the models’ failings, running the risk that more reckless competitors steal a march on them.
The infrastructure spend is just getting started
ChatGPT is reportedly built on 10,000 graphics processing units (GPUs) designed by Nvidia and manufactured by TSMC.
The largest infrastructure providers go even further to profit from the scale of the opportunity by designing their own application-specific chips, such as Google’s TPU or AWS’s Trainium, with leading foundry TSMC again the manufacturer of choice.
Generative AI’s promise is so vast, and the dissemination of these tools so rapid, that we see wild optimism and deep fear in equal measure.
These risks may put the brakes on the ‘AI will solve everything’ zeitgeist. Even so, the productivity gains to the whole economy from widespread adoption of general-purpose AI tools for automating digital processes, of which ChatGPT is just one of many, offer a long runway of investment and innovation that investors should very much be paying attention to.
Matthew Ward is head of global technology equities and Colin Moar is global technology analyst on Barings’ technology sector team