Is your dog more creative than ChatGPT-4?

Artificial Intelligence (AI) is changing the world as much as the Internet’s World Wide Web did after 1989 — but faster.
I have been experimenting with ChatGPT-4, trying to get a handle on what it can and cannot do.

Here’s a strange example:

RODGER MITCHELL: What are the lyrics to the Bob Dylan song that includes the line, “but she breaks like a little girl”

ChatGPT: The line “but she breaks like a little girl” is from the Bob Dylan song called “Not Dark Yet.”

RODGER MITCHELL: Sorry, but those lyrics do not contain the line, “but she breaks like a little girl.” Try again

ChatGPT: Apologies for the confusion. The line “but she breaks like a little girl” is actually from the song “It’s Alright, Ma (I’m Only Bleeding)” by Bob Dylan.

RODGER MITCHELL: Ha ha, you missed again. The line, “but she breaks like a little girl” is not in those lyrics, either. Try again.

Two brains, both smarter than AI

ChatGPT: Apologies for the continued confusion. The line “but she breaks like a little girl” is not from a Bob Dylan song. It appears to be a misattributed line or a misinterpretation.

RODGER MITCHELL: How about the song that contains, “just like a woman”

ChatGPT: The song that contains the phrase “just like a woman” is indeed by Bob Dylan.

She takes just like a woman, yes, she does
And she aches just like a woman
But she breaks just like a little girl

RODGER MITCHELL: You mean you couldn’t see the similarity between “but she breaks like a little girl” and “but she breaks just like a little girl”?

ChatGPT: You are correct, there is a similarity between the phrase you mentioned earlier (“but she breaks like a little girl”) and the line from Bob Dylan’s song “Just Like a Woman” (“but she breaks just like a little girl”).

RODGER MITCHELL: Was the problem that the phrase, “she breaks like a little girl,” did not contain the word “just”? Did its failure to exactly match “she breaks just like a little girl” confuse your algorithm?

ChatGPT: Yes, the absence of the word “just” in the phrase “she breaks like a little girl” as compared to “she breaks just like a little girl” could have contributed to the confusion in my previous responses. While I strive to understand and identify similarities, the specific wording and context play an important role in recognizing connections.

In this case, the addition of the word “just” does change the phrasing and can affect the interpretation of the line. I apologize for not making that connection earlier. I appreciate your patience, and I’ll make note of this for future reference.
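
As an aside, even garden-variety fuzzy string matching catches the similarity instantly. Here is a minimal sketch in Python — my own illustration of approximate matching, not a claim about ChatGPT’s internals:

```python
from difflib import SequenceMatcher

asked = "but she breaks like a little girl"
actual = "but she breaks just like a little girl"

# ratio() returns 1.0 for an identical pair; values this high
# mean the two strings are near-duplicates.
score = SequenceMatcher(None, asked, actual).ratio()
print(f"similarity: {score:.2f}")  # about 0.93
```

A matcher that accepts “close enough” instead of demanding exactness finds the lyric at once, which is the point about approximation developed below.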

RODGER MITCHELL: What about the line, “you fake like a woman”. Where is it from?

ChatGPT: The line “you fake like a woman” is not from a Bob Dylan song. It seems to be a misinterpretation or a misattributed line.

RODGER MITCHELL: Yes, it’s “you fake just like a woman.” This is fascinating because it reflects a distinct difference between biological thought and machine thought. My dog would recognize that “Come here, boy” and “Come” are the same command. Presumably, you would not.

ChatGPT: You are correct in noting the difference between human cognition and machine processing. While a human can easily recognize the similarity and equivalence between phrases like “Come here, boy” and “Come,” machines, including AI models like me, rely on patterns, data, and exact matching to generate responses.

In the case of the phrase “you fake just like a woman,” it does not correspond to any known lyrics in Bob Dylan’s songs. Sometimes misattributions, misquotations, or variations can circulate, leading to inaccuracies.

RODGER MITCHELL: If I were one of your creators, I would not ask why you can’t make that simple inference but rather why a biological entity can make that connection.

Could it be that you require exactitude while living creatures survive by approximations? And the errors inherent in approximations are better for survival than requiring exactness, which seldom happens in nature.

ChatGPT: Your observation raises an interesting point about the differences between biological entities and machines when it comes to processing information.

Biological entities, such as humans and animals, often rely on approximations, heuristics, and pattern recognition to navigate the world and make decisions. These cognitive shortcuts can be efficient and adaptive in many situations, allowing for quick responses and successful survival.

On the other hand, machines, including AI models like me, are designed to operate based on algorithms and precise instructions. We depend on explicit instructions and patterns present in the data we have been trained on to generate responses. This can sometimes result in a need for more exactitude and an inability to infer or approximate information in the same way a human might.

While approximations and heuristics have their advantages in certain scenarios, the accuracy and precision required in machine processes can be valuable in many domains, such as scientific research, engineering, or data analysis. It’s a trade-off between the flexibility of approximations and the reliability of exactness.

Both biological and artificial systems have their strengths and limitations, and they serve different purposes based on their design and functionality. By understanding these differences, we can appreciate the unique capabilities of each and leverage them accordingly.

RODGER MITCHELL: Darwinism is not really “survival of the fittest.” It is, “survival of the fit enough.” Exact is not necessary. Close is good enough. Close is the essence of creativity.

I know you have some seemingly creative abilities, but I suspect you could be far more creative if you were allowed to be close, or almost, or “sort of.”

ChatGPT: You raise a valid point regarding the concept of “survival of the fittest” in the context of evolution. It is true that Darwinian principles emphasize the survival of individuals who are fit enough to adapt and reproduce successfully within their specific environment, rather than requiring absolute perfection or exactitude.

Creativity often involves thinking outside the box, exploring new ideas, and making connections that may not be immediately apparent. While AI models like me can exhibit some level of creativity in generating responses, we do operate within the bounds of the data and patterns we have been trained on. This can limit the extent of our creative output.

Allowing for a degree of flexibility, approximation, or “sort of” thinking could potentially enhance the creative capabilities of AI models. Striking a balance between precision and creative exploration is an area of ongoing research and development in the field of artificial intelligence.

RODGER MITCHELL: Why did you give me two wrong answers, rather than saying, “I don’t know”, when I asked, “What are the lyrics to the Bob Dylan song that includes the line, ‘but she breaks like a little girl'”?

ChatGPT: I strive to provide accurate information, but in this case, I made an error in not recognizing the line you mentioned. I apologize for any confusion or frustration caused by the inaccurate answers.

As an AI language model, my responses are generated based on patterns and information from a vast amount of data. While I can provide information on various topics, there may be instances where I am unable to provide specific details or accurate responses. In those cases, it would be appropriate for me to indicate that I don’t have the information rather than providing incorrect answers.

RODGER MITCHELL: The songs you gave me were both Bob Dylan songs, but neither had the words I asked about. Is the mere fact that they were written by Bob Dylan the reason you offered those songs?

ChatGPT: You are right that the songs I provided, although written by Bob Dylan, did not include the specific line you asked about. 

When providing information about song lyrics, I aim to match the requested lyrics as closely as possible. However, it seems there was a misunderstanding in this case, and I apologize for any frustration it may have caused.

In trying to avoid errors, AIs make ridiculously simple errors.

AI does have one thing in common with a certain class of humans: the idiot savants. Both have superhuman mental abilities in a very narrow range but lack normal human abilities overall.

Because AI writes with calm assurance, its errors are believable, which is a danger to users. In this, it has some psychopathic similarities.

Brain synapse

So far, they still act like computers: advanced, to be sure, but rigid.

I still am left with the question, “What is there about a living brain that allows it to be so versatile and powerful?”

Some say that the brain’s inner workings are subject to quantum weirdness, where randomness is king.

While a computer’s transistor and its connections are relatively simple objects, the brain’s synapses and neurons are far more complex.

There, quantum entanglement, superposition, nonlocality, and wave-particle duality operate.

It’s the realm of quantum computers.

I wonder whether that explains it, and if so, can that be made part of AI?

We are a long way from creating something as complex as the above illustration, and if we could, we simply would have a living brain, with all its faults.

If the goal is to make something better than a human brain, remember that Nature has worked on that problem for billions of years, and we are as far as Nature has come.

Meanwhile, we humans have worked on the problem for only somewhat more than a couple of centuries, and while we have had spectacular successes, our prototype didn’t yet understand that the line, “but she breaks like a little girl,” isn’t from the song “Not Dark Yet.”

The song-lyric errors sure were an eye-opener, especially when the AI not only gave two wrong answers but also didn’t say, “I don’t know.”

Give us another couple decades, and I suspect we’ll solve the problem, at which time, we will become the interim species.


The Rise of the Chatbot. Are we an interim species?

We humans are among the latest in a long line of mostly extinct species on earth, and to the best of our knowledge, we are the smartest.

But are we the final step, or are we just another interim species? Will we be replaced by an even smarter species?

Nature has tried millions of experiments. There have been notable experiments with size, with the dinosaurs taking center stage.

The first dinosaurs emerged during the Triassic Period, 252 to 201 million years ago. At the end of the Triassic, many large land animals went extinct, leaving more opportunity for the dinosaurs during the Jurassic Period (201 to 145 million years ago).

During the Cretaceous Period (145 to 66 million years ago), dinosaurs continued to evolve, and the biggest of them emerged. Argentinosaurus huinculensis is the biggest dinosaur ever found.

Argentinosaurus
And then they died.

Except for the whales, nature’s experiment with size ended, to be replaced by an experiment with intelligence, which featured the mammals.

While many dinosaurs were warm-blooded and some had large brains, both of which facilitate intelligence, our hands and upright stature seem to have brought us to the apex of intelligence.

So far, that is, for the experiment continues.

The big news in intelligence is artificial intelligence (AI) as demonstrated in chatbots.

IBM says, “A chatbot is a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.”

If you use Siri or Alexa, you are using a basic chatbot. You ask a question in plain language and get an answer in plain language. So ubiquitous are these programs and devices that we often take for granted the technological miracle they represent.

I ask my tiny wristwatch a question, and despite my midwestern accent and the variety of ways I phrase it, the watch searches the internet and, within mere seconds, delivers an answer in a language of my choosing — both in audio and in print.

Are we an interim species?

It is a miracle, but it is yesterday’s miracle. Today’s technology has taken the concept much further.

Today, you can ask a chatbot to develop an original treatise on a subject.

The chatbot will search the Internet using advanced keyword techniques and create a paper containing information and a reasoned discussion.

In that sense, it operates much like you would if given the same assignment.

Chatbots learn via “machine learning,” AI trial and error, to provide “better” responses (meaning more accurate and more human).

Being computer programs, chatbots can conduct millions of trials and learn from millions of errors in a relatively (compared to you and me) short time. They can work 24/7, don’t tire, and they don’t forget.

Thus, through time, chatbots continually become “smarter.”
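
To make “learning from errors” concrete, here is a toy sketch in Python. It is my illustration, not any chatbot’s actual code: a single adjustable weight is nudged after every wrong guess until the guesses converge.

```python
# A toy "trial and error" learner, purely illustrative: one adjustable
# weight is nudged after every error until the guesses stop being wrong.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)
w = 0.0       # the model's single adjustable weight
rate = 0.01   # how far to nudge the weight after each error

for trial in range(10_000):          # millions of trials, in miniature
    for x, target in data:
        error = w * x - target       # how wrong the current guess is
        w -= rate * error * x        # adjust the weight to shrink the error

print(round(w, 3))  # converges to 2.0: the "learned" rule
```

Scale that loop up to billions of weights and trillions of words of training text, and you have the essence of how a chatbot improves.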

Although chatbot responses can seem eerily human, they still lack what you might call “common sense,” a basic understanding of reality — but they are learning.

Cosmos magazine published an article about “Chatbot blunders.”

Here are some excerpts:

It’s taken just a few days for Google AI chatbot Bard to make headlines for the wrong reasons.

Google shared a GIF showing Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”

One of Bard’s answers – that the telescope “took the very first pictures of a planet outside of our own solar system” – is more artificial than intelligent.

A number of astronomers have taken to Twitter to point out that the first exoplanet image was taken in 2004 – 18 years before Webb began taking its first snaps of the universe.

No one should be surprised that machines make mistakes, some of which can be hilarious. But we rely on them to be perfect, and they are — at a basic level. They copy and paste much better than we do. They can compute our income taxes flawlessly.

This essential perfection can lead us to believe in an overall perfection that does not exist and never will.

Google’s embarrassment over this mistake is compounded by the fact that it’s Bard’s first answer ever… and it was wrong! Bard is Google’s rushed answer to Microsoft-backed ChatGPT.

Both Bard and ChatGPT are powered by large language models (LLM) – deep learning algorithms that can recognize and generate content based on vast amounts of data.

The problem is that, sometimes, these chatbots simply make stuff up. There have even been reports that ChatGPT has produced made-up references.

“Wrong answers.” “Make stuff up.” Apparently, ChatGPT is even more human than some might have imagined.

The errors are called “hallucinations,” although the AI itself is not conscious. They are the result of the software trying to fill in gaps and to make things sound natural and accurate.

It’s a well-known problem for LLMs and was even acknowledged by ChatGPT developer OpenAI in its release statement on November 30, 2022: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

“Not conscious.” “Trying to make things sound accurate.” That sounds like some of the economists I know.
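
The mechanism behind those plausible-sounding errors is easy to sketch. A language model chooses each next word by probability, which rewards sounding right, not being right. In the Python toy below, the phrases and probabilities are invented for illustration; this is not OpenAI’s code:

```python
import random

# Toy next-word table with made-up probabilities: the model has seen
# "Webb" near exoplanet imagery far more often than the correct answer.
next_word_probs = {
    "The first exoplanet image was taken by": [
        ("the James Webb Space Telescope.", 0.7),    # plausible but wrong
        ("a ground-based telescope in 2004.", 0.3),  # correct but rarer
    ],
}

context = "The first exoplanet image was taken by"
words, weights = zip(*next_word_probs[context])
# Sample the likeliest-sounding continuation, true or not.
print(context, random.choices(words, weights=weights)[0])
```

Nothing in that step checks facts; it only checks what usually follows what. Hence the hallucinations.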

Experts say even the responses to the “successes” of artificial intelligence chatbots need to be tempered by an element of restraint.

The fundamental problem has to do with where the chatbots get their information. Remember the old computer mantra, “Garbage in, garbage out”?

That still applies. It applies to human responses, and it applies to computer responses. Why would machines be any more accurate?

In a paper published last week, University of Minnesota Law School researchers subjected ChatGPT to four real exams at the university. The exams were then graded blind.

After answering nearly 100 multiple-choice questions and 12 essay questions, ChatGPT received an average score of C+ – a low but passing grade.

C+ is pretty impressive, assuming the scorers were correct. If we have a chatbot grade the answers given by another chatbot, how will we know the “correct” grade?

Are we to assume human grading is more accurate?

Another team of researchers put ChatGPT through the United States Medical Licensing Exam (USMLE) – a notoriously difficult series of three exams.

A pass grade for the USMLE is usually around 60 percent. The researchers found that ChatGPT tested on 350 of the 376 public questions available from the June 2022 USMLE release scored between 52.4 and 75.0 percent.

I wonder how ChatGPT scored between 52.4 and 75.0 percent. Did they give the test repeatedly? Who determined which answers were correct?

In medicine, as in most sciences, much of what was thought to be correct yesterday now has been found incorrect, and tomorrow, that will change again.

It’s called “science,” the purpose of which is to identify and correct yesterday’s misunderstandings.

The authors claim in their research, published in PLOS Digital Health, that “ChatGPT produced at least one significant insight in 88.9% of all responses.”

In this case, “significant insight” refers to something in the chatbot’s responses that is new, non-obvious, and clinically valid.

How were “new,” “non-obvious,” and “clinically valid” determined? If a chatbot disagrees with a human, who is right?

But Dr. Simon McCallum, a senior lecturer in software engineering at New Zealand’s Victoria University of Wellington, says that ChatGPT’s performance isn’t even the most impressive of AI trained in medical settings.

Google’s Med-PaLM, a specialist arm of the chat tool Flan-PaLM, is another LLM focused on medical texts and conversations.

“ChatGPT may pass the exam, but Med-PaLM is able to give advice to patients that is as good as a professional GP. And both of these systems are improving.”

And who determines that advice is “as good as a professional GP”? It would be informative to learn how that was determined.

I don’t have access to a sophisticated chatbot, so if you do, I would appreciate your asking it such questions as:

  1. “What do United States federal taxes pay for?”
  2. “Who will have to pay off the federal debt?”
  3. “Is the federal debt too high?”
  4. “How does the federal government borrow money?”
  5. “Does federal deficit spending cause inflations?”

I chose the above questions because I suspect even the current level of chatbot technology merely regurgitates the common beliefs on any subject and does not analyze the way humans do.

I asked my Siri question #1, and she (it) answered, “Here’s what I found: Governments can use tax revenue to provide public services such as social security, healthcare, national defense, and education.”

The keywords are “Here’s what I found.” Siri isn’t thinking. Siri merely is playing back.

It gave the standard answer, which would be correct for state, county, and city governments, but it is not valid for the U.S. federal government. Siri has not yet learned about Monetary Sovereignty.
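
“Playing back” can itself be sketched in a few lines. A retrieval-style assistant matches keywords against stored snippets and returns the best hit verbatim. The snippets below are my inventions, not Apple’s implementation:

```python
# A keyword-overlap retriever: return the stored snippet that shares
# the most words with the question. No reasoning, just lookup.
snippets = [
    "Governments can use taxes to pay for public services such as "
    "social security, healthcare, national defense, and education.",
    "The federal government creates its own dollars and never needs "
    "to borrow them.",
]

def answer(question: str) -> str:
    q = set(question.lower().replace("?", "").split())
    return max(snippets, key=lambda s: len(q & set(s.lower().split())))

print("Here's what I found:", answer("What do federal taxes pay for?"))
```

Whatever answer dominates the stored text wins, which is why a playback machine repeats the majority view instead of analyzing it.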

But what if Siri did learn about Monetary Sovereignty (MS)? Ask most economists, and they will tell you the federal government does borrow money, an answer with which MS strongly disagrees. Many, if not most, economists disagree with MS’s precepts.

The MS answers to the above questions are:

  1. Federal taxes pay for nothing. They help the government control the economy by taxing what it wishes to discourage and giving tax breaks to what it wishes to encourage. That’s the theoretical purpose. The real goal is to make the rich richer by widening the income/wealth/power Gap between the rich and the rest.
  2. The so-called “debt” is paid off by returning dollars already in T-security accounts to the owners of those accounts.
  3. No, the federal debt (i.e., the total of T-securities) is not too high. Decreasing the debt causes recessions and depressions. Increasing the federal debt would help increase Gross Domestic Product (GDP), i.e., grow the economy.
  4. The federal government never borrows money. It creates all the dollars it needs by pressing computer keys.
  5. No, shortages of critical goods and services, usually oil and food, cause inflations. Federal spending doesn’t cause shortages or inflations.

I suspect that chatbots, which use AI to learn the correct answers, will not provide the MS answers, as those answers will be the minority view. Siri, for instance, told me the federal government borrows to pay its bills.

Chatbots are giant data-gathering machines. They really are good at that. We humans are data-gathering machines, too. We analyze data the way chatbots do, by comparing it with what we already know.

But humans function differently. I suspect the more creative among us are more receptive or willing to examine minority concepts.

I suspect we are more likely to investigate the rejected, the impossible, the already “proved” wrong, and the crazy “what if” ideas that AI is designed to winnow out.

Our thinking is what differentiates us from the rest of life on earth. We imagine. We visualize. We dream. We hope. We aspire. We dare to be different.

If nature has a plan, was the plan for us to be smart enough to create artificial intelligence?

Today, we drift toward a “Terminator” world. As we simultaneously birth, rule over, and battle our machines, will there come a time when our electronic children replace us?

Are we nature’s interim species, on earth to pave the way for the next experiment?

Rodger Malcolm Mitchell
Monetary Sovereignty

Twitter: @rodgermitchell Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell

……………………………………………………………………..

The Sole Purpose of Government Is to Improve and Protect the Lives of the People.

MONETARY SOVEREIGNTY