Twitter: @rodgermitchell; Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell


Our latest election results make me wonder again about human intelligence.

What is intelligence? A quick visit to Wikipedia produced this description:

Intelligence (includes) one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity and problem solving.

It can be the ability to perceive information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

Logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, problem solving, perceive and retain and apply knowledge toward adaptive behaviors. Whew!

If all of that is intelligence, are ants intelligent? Do ants have “logic, understanding, self-awareness,” etc.?

Quanta Magazine: Give a colony of garden ants a week and a pile of dirt, and they’ll transform it into an underground edifice about the height of a skyscraper in an ant-scaled city.

Without a blueprint or a leader, thousands of insects moving specks of dirt create a complex, sponge-like structure with parallel levels connected by a network of tunnels.

How do insects with tiny brains engineer such impressive structures? It turns out that ants perform these complex tasks by obeying a few simple rules.

Guy Theraulaz, a behavioral biologist at the Research Center on Animal Cognition in Toulouse, France, discovered that three basic guidelines, governing when and where ants pick up and drop off their building materials, are sufficient to create sophisticated, multilayered structures.

–The ants picked up grains at a constant rate, approximately 2 grains per minute;
–They preferred to drop them near other grains, forming a pillar;
–and they tended to choose grains previously handled by other ants, probably because of marking by a chemical pheromone.

The researchers used these three rules to build a computer model that mimicked the nest-building behavior.

In the model, virtual ants moved randomly around a three-dimensional space, picking up pieces of virtual sand soaked in a virtual pheromone. The model ants created pillars that looked just like those made by their biological counterparts.

The researchers could alter the pillars’ layout by changing how quickly the pheromone evaporates, which could explain why different environmental conditions, such as heat and humidity, influence the structure of ant nests.
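The three rules above can be sketched in a few lines of code. This is a minimal toy version, not the researchers’ actual model: it uses a one-dimensional ring of sites instead of a 3-D space, and the grid size, decay rate, and step count are illustrative assumptions.

```python
import random

# Toy version of the three ant rules on a 1-D ring of sites.
# SITES, DECAY, and the step count are illustrative assumptions.
SITES = 50
DECAY = 0.95  # fraction of pheromone remaining each step
random.seed(42)

grains = [1.0 if random.random() < 0.3 else 0.0 for _ in range(SITES)]
pheromone = list(grains)  # handled grains carry scent

def step(grains, pheromone):
    # Rule 1: an ant picks up a grain at a roughly constant rate.
    loaded = [i for i, g in enumerate(grains) if g > 0]
    src = random.choice(loaded)
    grains[src] -= 1.0
    # Rules 2 and 3: drop near existing grains, preferring
    # pheromone-marked ones; weight each site by neighbors + scent.
    weights = []
    for i in range(SITES):
        neighbors = grains[i - 1] + grains[(i + 1) % SITES]
        weights.append(0.01 + neighbors + pheromone[i])
    dst = random.choices(range(SITES), weights=weights)[0]
    grains[dst] += 1.0
    pheromone[dst] += 1.0
    # Pheromone evaporates; changing DECAY changes the pillar layout.
    for i in range(SITES):
        pheromone[i] *= DECAY

for _ in range(2000):
    step(grains, pheromone)

# After many steps the grains cluster into a few tall "pillars".
print(f"tallest pillar: {max(grains):.0f} grains at one site")
```

Even this crude sketch shows the point: no ant (real or virtual) holds a blueprint, yet local rules plus a decaying chemical signal produce concentrated structure.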

“For the longest time, people never would have believed this is possible,” said Chris Adami, a physicist and computational biologist at Michigan State University, who was not involved in the study. “When looking at complex animal behavior, people assumed they must be smart animals.”

The ants created functional nests, under wildly varying conditions, using just three rules. So, are ants intelligent?

What about computers? Computers, using massive amounts of data and a few rules, beat the world’s best humans at chess, Jeopardy! and the extremely complex game of Go.

Was that intelligence? Was it “logic, understanding, self-awareness, creativity,” etc.?

What if intelligence is “none of the above”? What if intelligence is nothing more than vast amounts of data acted upon by a few simple rules?

New Scientist Magazine: The road to artificial intelligence: A case of data over theory
Computers that could simulate human intelligence were once a futuristic dream. Now they are all around us – but not in the way their pioneers expected

While its goals have remained essentially the same, the methods of creating Artificial Intelligence (AI) have changed dramatically.

The instinct of early engineers was to program machines from the top down. They expected to generate intelligent behaviour by first creating a mathematical model of how we might process speech, text or images, and then by implementing that model in the form of a computer program, perhaps one that would reason logically about those tasks.

They were proven wrong.

By the early 1990s, with little to show for decades of work, most engineers started abandoning the dream of a general-purpose top-down reasoning machine. They started looking at humbler projects, focusing on specific tasks that were more likely to be solved.

Some early success came in systems to recommend products. While it can be difficult to know why a customer might want to buy an item, it can be easy to know which item they might like on the basis of previous transactions by themselves or similar customers.

If you liked the first and second Harry Potter films, you might like the third. A full understanding of the problem was not required for a solution: you could detect useful correlations just by combing through a lot of data.
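That kind of correlation-only recommendation is easy to sketch. The code below is a hypothetical illustration, not any real retailer’s system; the purchase histories are made-up examples.

```python
from collections import Counter
from itertools import combinations

# "Data over theory" recommendation: count how often items were
# bought together, with no model of WHY. Histories are made up.
histories = [
    {"HP1", "HP2", "HP3"},
    {"HP1", "HP2"},
    {"HP1", "HP2", "HP3", "LOTR1"},
    {"LOTR1", "LOTR2"},
]

pair_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def recommend(owned, k=1):
    """Suggest the k items most often co-purchased with what you own."""
    scores = Counter()
    for item in owned:
        for (a, b), n in pair_counts.items():
            if a == item and b not in owned:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

# Liked the first two Harry Potter films? The data alone suggests
# the third, because HP3 co-occurs most often with HP1 and HP2.
print(recommend({"HP1", "HP2"}))
```

Nothing in the code knows what a film is, who made it, or why anyone likes it; the useful correlation lives entirely in the data.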

This pragmatic attitude produced success in speech recognition, machine translation and simple computer vision tasks such as recognising handwritten digits.

By the mid-2000s, with success stories piling up, the field had learned a powerful lesson: data can be stronger than theoretical models. A new generation of intelligent machines had emerged, powered by a small set of statistical learning algorithms and large amounts of data.

Researchers also ditched the assumption that AI would provide us with further understanding of our own intelligence.

Try to learn from algorithms how humans perform those tasks, and you are wasting your time: the intelligence is more in the data than in the algorithm.

Consider geometry. Euclid listed only five postulates that provided all the “algorithms” necessary for the entire field of plane geometry — a huge field of mathematics built on just five basic “guidelines.”

The ants had three to build their houses; Euclid had five to build plane geometry.

We humans each are complex, difficult-to-predict creatures. But psychologists know that the single best predictor of anyone’s behavior is what they have done in the past.

Psychology fails when it tries to analyze cause and effect; the true “cause” is almost impossible to discern for any individual.  

What will person “A’s” effects be if he accidentally hits his thumb with a hammer? Will the effect be swearing? Screaming? Holding his breath until the pain passes? Punching a wall? Kicking the dog? Crying? Laughing? Fainting? Throwing the hammer? Continuing to hammer?

We have no idea what the effect will be. How will we find out? What is our plan?

The true cause of the above-mentioned effects is not the hammer. Hammers don’t cause laughter, etc. The true cause of those effects happens somewhere in person “A’s” nervous system.

To locate that cause, in an effort to predict the effect, we might try to analyze all the trillions of molecules in person “A’s” brain and body and, using that information, try to determine all the chemical interactions that will activate certain electrical circuits, ultimately producing a particular effect.

Or will we simply analyze the data to determine what person “A” did the last five times he accidentally hurt himself, and predict he will do much the same?

To predict, we don’t need to know why, if we can infer what from history.
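“Infer what from history” can be reduced to a one-line rule: predict whatever was done most often before. A minimal sketch, with a made-up reaction log:

```python
from collections import Counter

# Made-up log of person "A's" past reactions to hurting himself.
past_reactions = ["swore", "swore", "laughed", "swore", "threw hammer"]

def predict(history):
    """Predict the next behavior as the most frequent past behavior,
    with no model of the nervous system producing it."""
    return Counter(history).most_common(1)[0][0]

print(predict(past_reactions))  # "swore": 3 of 5 past incidents
```

The predictor knows nothing about hammers, pain, or nerves, yet it will often beat a causal analysis that tries to trace trillions of molecules.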

That is the foundation of learning, “machine learning,” and it also is what we call “intelligence.”

When George Santayana wrote, “Those who cannot remember the past are condemned to repeat it,” he was talking about intelligence, though he may not have understood it quite that way.

Intelligence is a vast amount of data plus a few rules, and greater intelligence is much more data, also plus a few rules. 

Consider how the spam filter in your mailbox decides to quarantine some emails on the basis of their content.

Every time you drag an email into the spam folder, you enable it to estimate the probability that messages from a given sender, or containing a given word, are unwanted. Combining this information for all the words in a message allows it to make an educated guess about new emails.

No deep understanding is required – just counting the frequencies of words.
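The word-counting filter described above is essentially naive Bayes, and it fits in a few lines. This is an illustrative sketch, assuming a toy vocabulary and made-up training messages, not a production filter.

```python
from collections import Counter
import math

# Made-up training messages the user has already sorted.
spam = ["win money now", "free money offer", "win free prize"]
ham  = ["meeting at noon", "project report attached", "lunch at noon"]

def word_counts(msgs):
    c = Counter()
    for m in msgs:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Log-odds that a message is spam, from word frequencies alone."""
    score = math.log(len(spam) / len(ham))  # prior odds
    for w in message.split():
        # Laplace smoothing so unseen words don't zero out the product.
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + len(vocab))
        p_ham  = (ham_counts[w] + 1) / (sum(ham_counts.values()) + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free money"))    # positive score: looks like spam
print(spam_score("noon meeting"))  # negative score: looks legitimate
```

The filter never “understands” an email; it only counts which words appeared in messages you previously discarded.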

But when these ideas are applied on a very large scale, something surprising seems to happen: machines start doing things that would be difficult to program directly, like being able to complete sentences, predict our next click, or recommend a product.

Taken to its extreme conclusion, this approach has delivered language translation, handwriting recognition, face recognition and more. Contrary to the assumptions of 60 years ago, we don’t need to precisely describe a feature of intelligence for a machine to simulate it.

The (machine) has no internal representation of why it does what it does.

Every time you access the internet to read the news, do a search, buy something, play a game, or check your email, bank balance or social media feed, you interact with this infrastructure.

It isn’t just a physical one of computers and wires, but also one of software, including social networks and microblogging sites.

The challenges AI might present us with include surveillance, discrimination, persuasion, unemployment and possibly even addiction.

Year 2009: Google researchers publish an influential paper called “The unreasonable effectiveness of data”. It declares that “simple models and a lot of data trump more elaborate models based on less data.”

We humans generally agree that we are the most intelligent living species. Our intelligence has allowed us to dominate the earth, despite the greater physical size, strength, and numbers of many other living things.

Being intelligent means being able to receive, store and access data, and then apply certain rules to that data. But which data? We receive so very much data every second.

As you sit here now, your body is receiving trillions of data bits, from the soles of your feet to the top of your head. At any moment in time, you aren’t conscious of them all, but your skin alone has millions of sensors, allowing you to feel heat, cold, pain, itch, tickle and other subtle sensations, over every inch of your body, right this minute.

And that doesn’t count all the other sensory inputs from your eyes, ears, nose, mouth, and your insides — millions, billions, trillions of data bits, all of which are sensed, many of which are stored and relatively few of which will be retrieved.

Which data will be retrieved? This is where probability comes into play, for we can’t say, with much confidence, which data will escape the “forget filter.” Yesterday, you remembered someone’s name; today you can’t remember; tomorrow again, you will. Why?

Psychologist World: Forgetting
Why do we forget information? Find out in this fascinating article exploring the purpose of forgetting.

In Short-Term Memory: There are three ways in which you can forget information in the STM:

  1. Decay
    This occurs when you do not ‘rehearse’ information, ie you don’t contemplate it. The physical trace of such memory is thought to fade over time.
  2. Displacement
    Displacement is quite literally a form of forgetting in which new memories replace old ones. Everyone knows the potentially vast capacity of memory, particularly long-term memory, but research using the serial probe technique has shown that new numbers being memorised can displace old ones.
  3. Interference
    It’s sometimes difficult to remember information if you’ve been trying to memorise stuff that’s similar, eg words which sound similar. Interference can either be proactive (when old memories interfere with new ones) or retroactive, when new information distorts old memories.

Long-Term Memory is supposed to be limitless in capacity and duration. Still, we can forget information through decay (as in short-term forgetting) and through interference from other memories.

Intelligence relies on the availability of data. But the availability of data in the human brain has uncertain characteristics — not random, but clearly uncertain. There are data you probably will recall, and other data you probably will not recall.

Time and importance are factors, but you might recall something seemingly unimportant from elementary school, while forgetting something important your boss told you to do, today.

Human recollection seems based on probabilities, unlike machine memory, which approaches perfection.

What then is intelligence? The answer: Fundamentally, intelligence is data, manipulated by simple rules.  And the recollection of data is based on statistical probability.

The illusion of intelligence is merely the application of a limited number of rules to a vast and ever-changing assemblage of data. While we tend to think the rules themselves are intelligence, it is the application to data that is real intelligence.

Probability is the essence of intelligence. Quantum mechanics tells us that everything in the universe, every atom and every particle, is a function of probability, which means the entire universe has what we could call “intelligence.”

Probability, data, data recovery and a few instructions have “intelligently” created all we know and all we are.

Are ants clever in their ability to build complex structures under widely varying circumstances? Or do they robotically follow a few simple “If/Then” instructions built into their brains? Are they nothing more than machines?

Are we?

As discussed, we receive truly vast amounts of data from our eyes, ears, noses, mouths, and skin. We have multiplied this data by creating huge communication systems, from sophisticated speech and writing to electronics — radio, TV, the Internet. And we have developed the most effective data storage and retrieval systems on earth, from books to computer databases.

Humans receive more data, communicate data better, store data better, and retrieve data better than any other living creature, in our brains and in our infrastructure.

And each helped multiply the other. Early on, our slightly better brains helped create slightly better communication and storage, which evolution used to improve our brains, which we used to improve our reception, communication, and storage, and the process kept repeating.

Within the past few thousand years, the last hundred years, and particularly the past twenty years, the process has changed. It’s doubtful that our living brains have continued to improve; they may even have begun to regress.

Under the “use it or lose it” reality of evolution, our brains today no longer need to store as much in memory, retrieve as much, or even calculate as much. We use reading and writing, books and machines for those activities. (When I can’t remember something, I first ask Siri.)

While our brains may or may not be declining in intelligence, our partners, the books and computers, are vastly growing in intelligence, i.e., data storage, retrieval, and usage, so the “team” is becoming more intelligent — or what we term “intelligent.”

Will our machines ever become more intelligent than we are? That’s like asking, “Will our machines ever become stronger than we are?” Or “faster?” Or “longer-lived?”

The answer: They already are. Though currently, machines are our slaves, they already can do, or can be taught to do, many of the physical or intellectual tasks we can do, and do them faster and more accurately.

Our brains have one advantage: size. Within just three pounds of flesh, nature has packed a massive amount of computing power, based not just on molecules, but all the way down to quantum effects. So that spongy little organ not only can do most of our thinking but, while it’s at it, run our bodies, too.

Also, there are a lot of us, and we communicate well, though that advantage over computers is disappearing.

The closest thing machines have to the community of human brains is the Internet, and I suspect that if all the computers on earth were truly interconnected, a few key instructions instantly would make them much smarter than any of us.

Our human brains apply built-in programming to vast amounts of data, to produce seemingly new ideas. What we term “creativity” is the application of some instructions to some data, in a way that has not existed before. The instructions and data already must exist. We cannot imagine from a blank slate.

We cannot imagine the universe. Or ten dimensions. Or infinity. Or a color we never have seen. Or a note beyond our range of hearing. Or the space/time continuum.

Our difficulty understanding quantum dynamics relates to our never having known anything like it.

In quantum mechanics, everything new is visualized in terms of something familiar. Light is described as a particle and a wave, though it is neither. Einstein described gravity as being like a rubber sheet, because we cannot visualize what gravity really is.

How do we deal with data we cannot imagine? We use mathematics.

The universe and everything in it seems to be represented by mathematics. And since everything that exists is based on probability, there has been some speculation that the universe actually is composed of mathematics.

Is the Universe Made of Math?  by Max Tegmark on January 10, 2014

In this excerpt from his new book, Our Mathematical Universe, M.I.T. professor Max Tegmark explores the possibility that math does not just describe the universe, but makes the universe.

Why does our universe seem so mathematical, and what does it mean? In my new book “Our Mathematical Universe”, I argue that it means that our universe isn’t just described by math, but that it is math in the sense that we’re all parts of a giant mathematical object.

When we derive the consequences of a theory, we introduce new concepts and words for them, such as “protons”, “atoms”, “molecules”, “cells” and “stars”, because they’re convenient. It’s important to remember, however, that it’s we humans who create these concepts; in principle, everything could be calculated without this baggage.

The Mathematical Universe Hypothesis implies that we live in a relational reality, in the sense that the properties of the world around us stem not from properties of its ultimate building blocks, but from the (mathematical) relations between these building blocks. 

When we ask the question, “Of what is the universe made?” and we answer, “It’s made of mathematics,” have we taken a step too far?

Intelligence is real, but its reality is mathematical. Having no physical reality, but only a mathematical reality, does that mean intelligence is an illusion?

Intelligence does not do what we generally believe it does. Intelligence only applies our built-in instructions to a few of the many data we have stored and recovered.

Are your instructions different from mine? I don’t know. Surely your data is different, and that is why one of us is more “intelligent” than the other.

Although that may be an illusion.

Rodger Malcolm Mitchell
Monetary Sovereignty



•Those, who do not understand the differences between Monetary Sovereignty and monetary non-sovereignty, do not understand economics.

•Any monetarily NON-sovereign government — be it city, county, state or nation — that runs an ongoing trade deficit, eventually will run out of money.

•The more federal budgets are cut and taxes increased, the weaker an economy becomes.

•No nation can tax itself into prosperity, nor grow without money growth.

•Cutting federal deficits to grow the economy is like applying leeches to cure anemia.

•A growing economy requires a growing supply of money (GDP = Federal Spending + Non-federal Spending + Net Exports)

•Deficit spending grows the supply of money

•The limit to federal deficit spending is an inflation that cannot be cured with interest rate control.

•The limit to non-federal deficit spending is the ability to borrow.

•Until the 99% understand the need for federal deficits, the upper 1% will rule.

•Progressives think the purpose of government is to protect the poor and powerless from the rich and powerful. Conservatives think the purpose of government is to protect the rich and powerful from the poor and powerless.

•The single most important problem in economics is the Gap between the rich and the rest.

•Austerity is the government’s method for widening the Gap between the rich and the rest.

•Everything in economics devolves to motive, and the motive is the Gap between the rich and the rest.