Just over a year ago we were all enthralled by the emergence of a technology that promised to disrupt every facet of our lives. What started as a live social experiment – opening a large Natural Language Processing (NLP) model, in the form of ChatGPT, to the public's unscripted questions – soon proved so effective that many now see it as a technology with the potential for hard-to-even-comprehend disruption. 

Most of us are currently observing, with both interest and anxiety, the rapid rise of this technology, wondering how it will impact and reshape the industries we work in.  

Many observers’ enthusiasm was somewhat curbed in recent months by the realisation that physical limits exist to building larger and ever more capable models. It turns out, when we ask ChatGPT a question online, somewhere a machine is whirring and crunching numbers. It is, after all, not magic. Nor is it a tireless sentient being answering our questions – it is machines doing complex maths. The more we ask of it, the bigger it needs to be. 

Despite clear physical and hardware challenges, to many it seems the future domination of Artificial Intelligence (AI) through Large Language Models (LLMs) is inevitable. Others are more sceptical. First, it is costly. Building, training and maintaining these models requires enormous capital, expensive manpower, access to vast datasets and access to advanced chip-making infrastructure. In fact, Sam Altman, CEO of OpenAI, the creators of ChatGPT, is reportedly looking to raise up to $7 trillion to boost the supply of GPU chips, which are critical to building more capable algorithms able to deliver the truly disruptive potential of AI. That number is greater than Apple and Google’s market caps combined, and while other executives (notably the CEO of Nvidia, the leading chip maker) have been critical of these numbers, most estimates suggest a chip-making industry worth several trillion dollars in the near future. 

Apart from hardware limitations, other key drawbacks include the environmental costs of training and running these energy-hungry algorithms, the inherent biases that undocumented training data could embed in model responses with limited recourse, the inability of modellers to reverse-engineer model parameters (making it a black-box process by design), as well as the lack of ethical scrutiny required to ensure it is safely deployed in society.

Proponents of AI’s unbridled growth might dismiss these as fixable problems; eggs broken in the pursuit of a sentient omelette. But a bigger problem might lurk in its current design, one that should give even the most optimistic pause: the lack of intelligence in AI. People are mesmerised by Generative AI’s output and the illusion of understanding it projects. It also doesn’t help that researchers and companies at the forefront of its development have strong incentives to feed this illusion, with anthropomorphic language like “learning”, “intelligence” and “reasoning” becoming mainstream. 

But at its core, the models we interact with are simply computer algorithms that take text as input (think all the public Reddit, Wikipedia, etc. pages) and produce answers by predicting what word (or what pixel, when drawing) should come next. It is, ultimately, super-efficient predictive text strung together in a way that, given its vast library of human conversational data for training, seems intelligent and human-like. But it is a parrot, not a mind. A remarkable achievement in mimicry and information collation, but not sentient and completely devoid of understanding. It is simply pattern matching at unprecedented scale, meaning its current design will always scupper its ability to “think” outside the (very black) box. And even if a methodology is identified that could, in principle, lead to the holy grail of Artificial General Intelligence, theoretical possibility does not guarantee achievement. Think of the decades-long pursuit of nuclear fusion, which is theoretically possible yet still unrealised.
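To make the “super-efficient predictive text” idea concrete, here is a deliberately toy sketch: a bigram model that counts, in a tiny made-up corpus, which word most often follows each word, and “generates” by picking the most frequent successor. This is an illustrative assumption, orders of magnitude simpler than the neural networks behind ChatGPT, but it captures the same core mechanic of next-word prediction from observed patterns.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" – it follows "the" twice, more than any other word
```

Scaled up from a dozen words to trillions, and from word pairs to long contexts weighted by billions of learned parameters, this pattern-completion mechanic is what produces the fluent, seemingly intelligent output described above.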

While we believe the destination of broad-based adoption of some form of AI across industries is still a way off, we have been thrust upon a path to discover its full potential. But the path still needs to be paved, sidewalks erected and streetlamps put in. The biggest current opportunities therefore likely lie with companies able to harness their size and scale to access the best minds and the computer hardware needed to build the foundation for tomorrow’s applications. Small and nimble are not traits that help with securing trillion-dollar chip deals or access to enormous computing warehouses. 

A sensible proxy for investing in the future of AI may very well be an index composed of larger, more established tech companies, like the Nasdaq 100. Despite it returning more than 50% in US$ in 2023, if the future is indeed broad-based adoption of AI, which sentient being currently dares bet against the established tech giants in the Nasdaq? 

By Nico Katzke, Head of Portfolio Solutions at Satrix


Disclaimer

Satrix Investments (Pty) Ltd is an approved FSP in terms of the Financial Advisory and Intermediary Services Act (FAIS). The information does not constitute advice as contemplated in FAIS. Use or rely on this information at your own risk. Consult your Financial Adviser before making an investment decision. Satrix Managers is a registered Manager in terms of the Collective Investment Schemes Control Act, 2002.

While every effort has been made to ensure the reasonableness and accuracy of the information contained in this document (“the information”), the FSPs, their shareholders, subsidiaries, clients, agents, officers and employees do not make any representations or warranties regarding the accuracy or suitability of the information and shall not be held responsible and disclaim all liability for any loss, liability and damage whatsoever suffered as a result of or which may be attributable, directly or indirectly, to any use of or reliance upon the information.