Artificial Intelligence (AI) in the form of ChatGPT was opened to public use in late 2022. ChatGPT is a product of OpenAI LP, a for-profit company controlled by the non-profit OpenAI Incorporated. The funding sources of OpenAI Incorporated are somewhat obscure and reportedly include names such as Microsoft and Elon Musk. ChatGPT immediately garnered huge attention and millions of subscribers, and it prompted the accelerated introduction of several competing AI platforms that had not yet been scheduled for release (Microsoft's Bing Chat, Google's Bard, and others). After a few months, the features and limitations of these programs are beginning to become clearer.
In the brief period since its introduction, much has been written about what is generally referred to as AI, or Artificial Intelligence. Opinions range from wildly negative (it will destroy civilization) to equally wildly positive (it will save civilization). I've made a brief (and somewhat superficial) foray into the topic, simply because it is interesting. I have some impressions and some second-hand stories. I have some cautions for users of AI, and general concerns about its prospective evolution. I think it is fair to say that AI will change things for many folks, very soon and in ways not yet known.
On the OpenAI platform, there are multiple systems and subscription levels. ChatGPT is free. Other more sophisticated or more focused systems (GPT-4, DALL-E, and others) are paid based on usage. I'll focus on the free system, since it appears to be the most widely used. While some of the paid systems have access to the internet, the free system relies on its own database, which was last updated in late 2021 and so far remains static. That database consists of some 300 billion words, mostly pulled from the internet up to the cutoff date.
In more than one education session on AI, I heard the word "predictive" used to describe ChatGPT, and the term seems to apply to most if not all AI. "Predictive" does not mean that the AI can predict the future: the stock market, the weather, election results, and whatnot. The term refers more narrowly to the selection of the next word in the generated text, be it poetry, a report on tax law, or a report on any of the subjects found in its prodigious database.
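To make the idea of "predicting the next word" concrete, here is a toy sketch (in Python) of the simplest possible version: counting which word most often follows each word in a tiny sample text, then "predicting" by picking the most frequent follower. The sample sentence and function names are illustrative inventions; real systems like ChatGPT use vastly larger models and data, but the basic task, choosing the next word, is the same.

```python
from collections import defaultdict

# Tiny stand-in for a training corpus (illustrative only).
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word that most frequently follows `word` in the corpus."""
    candidates = following[word]
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next("the"))  # prints "cat" -- it follows "the" twice, "mat" only once
```

Note that nothing in this procedure checks whether the output is *true*; it only checks what is *likely to come next*, which hints at why made-up but plausible-sounding text can emerge.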
One of the more curious aspects of this "predictive" nature of the system shows up when the system MAKES STUFF UP FROM NOTHING. It has been widely reported that ChatGPT will frequently make things up. In one well-reported example, the system was asked to write an essay on the oldest and youngest governors of a particular state (Montana, I think). ChatGPT reported fairly accurately on the oldest governor but simply invented a story about the youngest governor, complete with a name, age, and brief biography. The made-up story looked plausible on its face but was pure fiction, and the write-up contained no disclaimer that it was, in fact, made up.
The proponents and advocates of AI have fostered the use of the term "hallucinating" to describe this problem, I guess to make it sound unintentional and therefore more innocent, i.e., more human. I think this is a massive flaw in the system that needs to be corrected as quickly as possible. It appears that the instances of making stuff up are not just incidental, but inherent in the predictive nature of the systems, and they are not readily turned off or programmed out. OpenAI and other producers of AI have expressed interest in the hallucination problem. It appears the fix is not as simple as an instruction to "tell no lies." Most of the work I've seen that purports to mitigate the problem refers to "avoidance" of hallucinations and depends heavily on modification of the prompts provided by the user. So it seems that the tendency to make stuff up is indeed inherent in the system and may be difficult to extinguish.
I think this hallucination problem makes ChatGPT, and possibly any other "predictive" AI system, fundamentally untrustworthy. In one presentation on the use of AI in a tax environment, the system returned answers to a tax question that were "nearby, and close" but wrong. The presenter then illustrated how to refine the questions to coax and coach the system into giving the right answer. Curious, isn't it, that you need to know the right answer in order to coach the system into giving it? What we are left with is a system that can be coaxed into producing a stream of text that will pass for a thoughtful exposition of almost any subject. But this leaves ChatGPT as not really functional in answering questions. Its function at this point is to format the answer based on prompting by the user. I can see how, for some if not many folks, this automated production and organization of text can be valuable.
Now consider the consequences of introducing hallucinated material into the domain of publicly available information on the internet. It seems likely that this will make efforts to validate information found online nearly impossible.
Anthropomorphism, the attribution of human traits, emotions, or intentions to non-human entities, is considered an innate tendency of human psychology. It seems to me that this is exactly what the creators of the current state of AI are fostering and encouraging. One is far more inclined to trust something that seems human. This is not to suggest a sinister conspiracy; it is just a truism. Part of the objective and challenge of AI is to make it as human as possible in order to facilitate interaction with people. Part of this humanization lies in making the system conversational: you ask a question, and the AI provides a plainly worded answer. What ChatGPT seems to be really good at is constructing intelligent-appearing text. In fairness, OpenAI makes several (general, not specific) disclosures, disclaimers, and warnings regarding potential inaccuracies in materials produced by its systems.
Ethical and Commercial Considerations
This content was entirely human-created, without the use of any AI system.
The table below shows the returns through June 30, 2023, for selected investment asset classes. In most cases, the results below are appropriate benchmarks for the related mutual funds in your investment portfolio.