We humans are among the latest in a long line of mostly extinct species on earth, and to the best of our knowledge, we are the smartest.
But are we the final step, or are we just another interim species? Will we be replaced by an even smarter species?
Nature has tried millions of experiments. There have been notable experiments with size, with the dinosaurs taking center stage.
The first dinosaurs emerged during the Triassic Period, 252 to 201 million years ago. At the end of the Triassic, many large land animals went extinct, leaving more opportunity for the dinosaurs during the Jurassic Period (201 to 145 million years ago).
During the Cretaceous Period (145 to 66 million years ago) dinosaurs continued to evolve, and the biggest dinosaurs emerged. Argentinosaurus huinculensis is among the biggest dinosaurs ever found.
And then they died.
Except for the whales, nature’s experiment with size ended, replaced by the experiment with intelligence, which featured the mammals.
While many dinosaurs were warm-blooded and had large brains, both traits that facilitate intelligence, our hands and upright stature seem to have brought us to the apex of intelligence.
So far, at least, for the experiment continues.
The big news in intelligence is artificial intelligence (AI) as demonstrated in chatbots.
IBM says, “A chatbot is a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.”
If you use Siri or Alexa, you are using a basic chatbot. You ask a question in plain language and get an answer in plain language. So ubiquitous are these programs and devices that we often take for granted the technological miracle they represent.
I ask my tiny wristwatch a question, and despite my midwestern accent, and the variety of ways I phrase it, the watch searches the internet and within mere seconds, delivers an answer in a language of my choosing — both in audio and in print.
It is a miracle, but it is yesterday’s miracle. Today’s technology has taken the concept much further.
Today, you can ask a chatbot to develop an original treatise on a subject.
The chatbot will draw on the vast amounts of text it was trained on (and, in some versions, on live Internet searches) and create a paper containing information and a reasoned discussion.
In that sense, it operates much like you would if given the same assignment.
Chatbots learn via “machine learning,” AI trial and error, to provide “better” responses (meaning more accurate and more human).
Being computer programs, chatbots can conduct millions of trials and learn from millions of errors in a relatively (compared to you and me) short time. They can work 24/7, don’t tire, and they don’t forget.
Thus, through time, chatbots continually become “smarter.”
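That trial-and-error idea can be sketched in miniature. The following toy program is my own hypothetical illustration, not any real chatbot’s algorithm: it guesses a hidden “correct answer,” measures its error, and keeps only the adjustments that reduce that error.

```python
import random

# Toy illustration of trial-and-error learning (a hypothetical sketch,
# not any real chatbot's algorithm).
TARGET = 0.73   # the "correct answer" the learner cannot see directly
guess = 0.0
step = 0.1      # maximum size of each random adjustment

for trial in range(10_000):   # a real system would run vastly more trials
    candidate = guess + random.uniform(-step, step)
    # Keep the adjustment only if it reduces the error.
    if abs(TARGET - candidate) < abs(TARGET - guess):
        guess = candidate

print(f"After many trials, the learner's answer is {guess:.4f}")
```

After thousands of trials the guess lands very close to the target. Real systems do something analogous with billions of adjustable parameters, which is why speed, tirelessness, and perfect memory matter so much.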
Although chatbot responses can seem eerily human, they still lack what you might call “common sense,” a basic understanding of reality — but they are learning.
Cosmos magazine published an article about “Chatbot blunders.”
Here are some excerpts:
It’s taken just a few days for Google AI chatbot Bard to make headlines for the wrong reasons.
Google shared a GIF showing Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”
One of Bard’s answers – that the telescope “took the very first pictures of a planet outside of our own solar system” – is more artificial than intelligent.
A number of astronomers have taken to Twitter to point out that the first exoplanet image was taken in 2004 – 18 years before Webb began taking its first snaps of the universe.
No one should be surprised that machines make mistakes, some of which can be hilarious. But we rely on them to be perfect, and they are — at a basic level. They copy and paste much better than we do. They can compute our income taxes flawlessly.
This essential perfection can lead us to believe in an overall perfection that does not exist and never will.
Google’s embarrassment over this mistake is compounded by the fact that it’s Bard’s first answer ever… and it was wrong! Bard is Google’s rushed answer to Microsoft-backed ChatGPT.
Both Bard and ChatGPT are powered by large language models (LLMs) – deep learning algorithms that can recognize and generate content based on vast amounts of data.
The problem is that, sometimes, these chatbots simply make stuff up. There have even been reports that ChatGPT has produced made-up references.
“Wrong answers.” “Make stuff up.” Apparently, ChatGPT is even more human than some might have imagined.
The chatbot isn’t deliberately lying, because the AI itself is not conscious; nevertheless, these errors are called “hallucinations.” They are the result of the software trying to fill in gaps and to make things sound natural and accurate.
It’s a well-known problem for LLMs and was even acknowledged by ChatGPT developer OpenAI in its release statement on November 30, 2022: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
“Not conscious.” “Trying to make things sound accurate.” That sounds like some of the economists I know.
Experts say even the responses to the “successes” of artificial intelligence chatbots need to be tempered by an element of restraint.
The fundamental problem has to do with where the chatbots get their information. Remember the old computer mantra, “Garbage in, garbage out”?
That still applies. It applies to human responses, and it applies to computer responses. Why would machines be any more accurate?
In a paper published last week, University of Minnesota Law School researchers subjected ChatGPT to four real exams at the university. The exams were then graded blind.
After answering nearly 100 multiple-choice questions and 12 essay questions, ChatGPT received an average score of C+ – a low but passing grade.
C+ is pretty impressive, assuming the scorers were correct. If we have a chatbot grade the answers given by another chatbot, how will we know the “correct” grade?
Are we to assume human grading is more accurate?
Another team of researchers put ChatGPT through the United States Medical Licensing Exam (USMLE) – a notoriously difficult series of three exams.
A pass grade for the USMLE is usually around 60 percent. The researchers found that ChatGPT, tested on 350 of the 376 public questions available from the June 2022 USMLE release, scored between 52.4 and 75.0 percent.
I wonder how ChatGPT scored between 52.4 and 75.0 percent. Did they give the test repeatedly? Who determined which answers were correct?
In medicine, as in most sciences, much of what was thought to be correct yesterday now has been found incorrect, and tomorrow, that will change again.
It’s called “science,” the purpose of which is to identify and correct yesterday’s misunderstandings.
The authors claim in their research, published in PLOS Digital Health, that “ChatGPT produced at least one significant insight in 88.9% of all responses.”
In this case, “significant insight” refers to something in the chatbot’s responses that is new, non-obvious, and clinically valid.
How were “new,” “non-obvious,” and “clinically valid” determined? If a chatbot disagrees with a human, who is right?
But Dr. Simon McCallum, a senior lecturer in software engineering at New Zealand’s Victoria University of Wellington, says that ChatGPT’s performance isn’t even the most impressive of AI trained in medical settings.
Google’s Med-PaLM, a specialist arm of the chat tool Flan-PaLM, is another LLM focused on medical texts and conversations.
“ChatGPT may pass the exam, but Med-PaLM is able to give advice to patients that is as good as a professional GP. And both of these systems are improving.”
And who determines that advice is “as good as a professional GP”? It would be informative to learn how that was determined.
I don’t have access to a sophisticated chatbot, so if you do, I would appreciate your asking it such questions as:
- “What do United States federal taxes pay for?”
- “Who will have to pay off the federal debt?”
- “Is the federal debt too high?”
- “How does the federal government borrow money?”
- “Does federal deficit spending cause inflations?”
I chose the above questions because I suspect even the current level of chatbot technology merely regurgitates the common beliefs on any subject and does not analyze the way humans do.
I asked my Siri question #1, and she (it) answered, “Here’s what I found: Governments can use tax revenue to provide public services such as social security, healthcare, national defense, and education.”
The keywords are “Here’s what I found.” Siri isn’t thinking. Siri merely is playing back.
It gave the standard answer, which would be correct for state, county, and city governments, but it is not valid for the U.S. federal government. Siri has not yet learned about Monetary Sovereignty.
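The “playing back” idea can be sketched in miniature. The toy model below is my own illustration, nothing like Siri’s actual code: it merely counts which word most often follows which in its “training” text, then regurgitates the most common continuation.

```python
from collections import Counter, defaultdict

# A drastically simplified sketch of generating text from patterns in
# training data. Real LLMs use neural networks with billions of
# parameters, not word counts, but the "predict the likeliest
# continuation" principle is the same.
training_text = (
    "the federal government spends money the federal government taxes money "
    "the state government taxes money"
)

# Count which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("federal"))   # "government" follows "federal" every time
```

Like Siri, the toy model isn’t thinking; it is replaying the most common pattern it found. A model trained only on the majority view will play back the majority view.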
But what if Siri did learn about Monetary Sovereignty (MS)? Ask most economists, and they will tell you the federal government does borrow money, an answer with which MS strongly disagrees. Many, if not most, economists disagree with MS’s precepts.
The MS answers to the above questions are:
- Federal taxes pay for nothing. They help the government control the economy by taxing what it wishes to discourage and by giving tax breaks to what it encourages. That’s the theoretical purpose. The real goal is to make the rich richer by widening the income/wealth/power Gap between the rich and the rest.
- The so-called “debt” is paid off by returning dollars already in T-security accounts to the owners of those accounts.
- No, the federal debt (i.e., the total of T-securities) is not too high. Decreasing the debt causes recessions and depressions. Increasing the federal debt would help increase the Gross Domestic Product (GDP), i.e., grow the economy.
- The federal government never borrows money. It creates all the dollars it needs by pressing computer keys.
- No, shortages of critical goods and services, usually oil and food, cause inflations. Federal spending doesn’t cause shortages or inflations.
I suspect that chatbots, which use AI to learn the correct answers, will not provide the MS answers, as those answers will be the minority view. Siri, for instance, told me the federal government borrows to pay its bills.
Chatbots are giant data-gathering machines. They really are good at that. We humans are data-gathering machines, too. We analyze data the way chatbots do, by comparing it with what we already know.
But humans function differently. I suspect the more creative among us are more receptive or willing to examine minority concepts.
I suspect we are more likely to investigate the rejected, the impossible, the already “proved” wrong, and the crazy “what if” ideas that AI is designed to winnow out.
Our thinking is what differentiates us from the rest of life on earth. We imagine. We visualize. We dream. We hope. We aspire. We dare to be different.
If nature has a plan, was the plan for us to be smart enough to create artificial intelligence?
Today, we drift toward a “Terminator” world. As we simultaneously birth, rule over, and battle our machines, will there come a time when our electronic children replace us?
Are we nature’s interim species, on earth to pave the way for the next experiment?
Rodger Malcolm Mitchell
Twitter: @rodgermitchell Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell
The Sole Purpose of Government Is to Improve and Protect the Lives of the People.