Over the last few weeks, you might have heard about a “revolutionary” piece of artificial intelligence (AI) software called ChatGPT, devised by a San Francisco-based startup called OpenAI and partially funded by the software giant Microsoft. The software was released to the public in November of ’22, and ~1 million people signed up within five days. Two months later, ~100 million people had tried or were actively using ChatGPT, making it one of the most successful consumer product launches of all time.
Pundits and futurists have already dubbed ChatGPT the next “big thing”: a disruptive new technology that will both aid and displace humans in the workforce and fundamentally change how we interact with technology.
Microsoft v Google
Microsoft plans to incorporate ChatGPT into its Bing search engine to create an unbeatable value proposition and take back market share from Google, the industry leader that commands ~93% of the search market. (Bing has ~3% market share, followed by Yahoo, with ~1%.) The stakes are high. Google generated over $160 billion in revenue last year from search; every percentage point of market share Microsoft can claw back is worth ~$2 billion in ad revenue.
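That ~$2 billion figure is easy to sanity-check. Here is a quick back-of-the-envelope calculation using only the rough numbers cited above (Google's ~$160 billion in annual search revenue and ~93% market share):

```python
# Rough estimate of what one percentage point of search market share
# is worth in annual ad revenue, using the figures cited in the text.

google_search_revenue = 160e9  # ~$160 billion/year from search
google_market_share = 0.93     # ~93% of the search market

# Implied size of the total search ad market
total_market = google_search_revenue / google_market_share

# Annual revenue represented by a single point of share
one_share_point = 0.01 * total_market
print(f"~${one_share_point / 1e9:.1f} billion per point of share")
```

This works out to roughly $1.7 billion per point of share, in the same ballpark as the ~$2 billion figure quoted above.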
People are paying attention. The stocks of all things AI, including Microsoft, have surged in recent weeks, and many companies are announcing products that compete with or complement ChatGPT. Google, which released a beta version of its own AI software called Bard, has lagged.
Are the “experts” right? What exactly is ChatGPT, and what does it do? We will provide more details later in this post, but if the thought of AI-enabled bots displacing humans makes you shiver, take solace.
Seemingly whenever a technological breakthrough is upon us, analysts predict that jobs will be displaced, people will suffer, and bad actors will use the technology to commit crimes. History has taught us that these predictions are usually partially right, but mostly wrong.
While a small minority of directly affected workers do, in fact, sometimes become dispensable, and most new technologies are harnessed for nefarious purposes by a small number of miscreants, new technologies and inventions typically prove beneficial to the great majority of society and spur demand for new types of jobs.
(Economists refer to predictions of innovation destroying jobs as the Luddite Fallacy. The word Luddite is now used to describe somebody who is opposed to new technology. Originally, the Luddites were English mill workers in the early 1800s who banded together and destroyed innovative machinery, specifically knitting frames and looms, which they worried would threaten their livelihoods.)
Here are but a few prominent examples:
• The printing press (mid-1400s): Before the printing press was invented by a German goldsmith named Johannes Gutenberg, scribes were employed to copy books by hand and perform related functions. As a result, books were extremely rare, expensive, and accessible only to the wealthy. The printing press obviated the need for humans to undertake the painstakingly laborious process of copying books. The result: scribes were displaced, but people were hired to build and service the new machines. Importantly, books suddenly became accessible to the masses. By ~1500, 20 million books were in circulation; just ~10 years later, that number was 200 million, and people became more literate and knowledgeable.
• Automated teller machines (~1970s): ATMs existed in the ’60s, but they were not connected to mainframe computers and could therefore perform only a few predetermined functions. Networking began in the ’70s, and the number of ATMs grew exponentially. The magnetic stripe (we now use chips) was also introduced around this time. The combination of magnetic stripes (for access) and ATMs ushered in an era of 24/7 banking - something we take for granted today and can now complete on a smartphone - without human contact. After the ATM became ubiquitous, pundits questioned the need for bank tellers; predictions abounded about the teller’s downfall. As it turned out, ATMs proved complementary to bank tellers, not detrimental. Tellers not only survived, they proliferated. Today ATMs (and phones) handle the mundane tasks of dispensing cash and taking deposits, while tellers focus mostly on higher-value, customer service-oriented transactions.
• Global Positioning System (1978): Before GPS, a satellite system called the Navy Navigation Satellite System was developed in the 1960s to help guide nuclear submarines. In 1978, the first GPS satellite was launched into orbit, and in the late 1990s a few consumer products were enabled with GPS. (Your correspondent will never forget seeing one in a car for the first time in 1996; I was in awe.) In 2000, Bill Clinton (finally) granted civilians full access to satellite-enabled GPS. As a result, map makers lost jobs and atlases (remember trying to read one while driving?) became obsolete, but an entire ecosystem of workers was, and still is, needed to build, service, and market all things GPS. Most significantly, cheap navigation and related services became available to billions of individuals and greatly benefited people, rich and poor, throughout the world.
ChatGPT: Writing My Own Obituary?
As it turns out, the theory and applications that underpin ChatGPT are neither new nor “revolutionary.” The idea of machines being able to “think” was theorized by British mathematician Alan Turing in the 1950s. In 1996, a company called Ask Jeeves – you can now find them at the end of the internet (that was a joke) – pioneered the idea of incorporating AI into internet searches to answer general queries. (Ask Jeeves, renamed Ask, was eventually acquired by internet conglomerate IAC Inc.)
ChatGPT benefits from what Ask Jeeves and other predecessors lacked: a robust internet infrastructure, vastly more recorded human knowledge to draw on, superior algorithmic models, and, perhaps most importantly, exponentially more computer processing power to help it “learn.”
ChatGPT can write articles, blog posts (it did not write this one), essays, and even books; answer trivia questions; summarize text (think CliffsNotes); provide detailed answers to customer service inquiries; and much more. According to the Economist, it can even pass medical and legal exams! And unlike humans, ChatGPT can work 24/7, is not entitled to paid sick days, and does not have to worry about childcare.
Humans v Bots
This raises the question: will ChatGPT put millions of professionals out of work, relegate students writing their own essays to history, and fundamentally change society as we know it? The answer: conceivably, but probably not anytime soon.
Cheerleaders of ChatGPT highlight what the software can do, and it is impressive. However, it is equally important to focus on where ChatGPT comes up woefully short. Despite confidence bordering on bravado in its answers, ChatGPT often makes mistakes. It also incorporates the internet’s misinformation and bias into its replies, some of which can be downright comical or even offensive.
Said financial journalist Tae Kim of Barron's: “Artificial intelligence still can’t match human intelligence in terms of accuracy, creativity, or originality.” Even Sam Altman, the CEO of OpenAI, conceded that ChatGPT and other related AI offerings are currently “impressive but not robust…At first when you use them, they seem incredible, but after you use them 100 times, you see the weaknesses.”
ChatGPT is remarkable and will certainly be a useful tool for consumers, academics, businesses, and others, but it is not a panacea. Perhaps at some point it will be. But there is often a considerable lag between the invention of a disruptive technology and the moment it materially alters the economic and social fabric of society.
As the Economist points out, “even the most powerful new tech takes time to change an economy. James Watt patented his steam engine in 1769, but steam power did not overtake water as a source of industrial horsepower until the 1830s in Britain and 1860s in America…The silicon chip was invented in 1961 but it was not until the mid-1990s that a computer-powered productivity boom eventually emerged in America.”
Today, information flows faster and business cycles are shorter. The time lag between AI and its impact on society will probably be measured in years, not decades. For now, humans can breathe a collective sigh of relief.