“Liberals think the purpose of government is to protect the poor and powerless from the rich and powerful. Conservatives think the purpose of government is to protect the rich and powerful from the poor and powerless.” RMM
All multi-celled animals, even insects, have something that serves as a brain. But, even the powerful human brain is not unique; it merely is “better” (depending on how one defines “better”).
The human brain was not planned; it evolved naturally via trillions of chemical reactions influenced by trillions of outside stimuli. One could say that brains are accidents of natural selection, a slow process requiring millions of years.
By contrast, computers result from planning, i.e., directed selection, a much faster process.
Within one human generation, a computer has become the world’s best chess player (because of its superior ability to analyze future moves) and another became the best Jeopardy player (because of its superior recall).
Siri recognizes your voice, can translate it into printed letters, words, and sentences, and can extract some meaning from those sentences. Google Maps can determine the best route from among a vast number of alternatives, based on various criteria changing in real time. Computers can recognize faces — almost, as you will read.
A living brain receives inputs, which it analyzes to produce results. Humans learn via neuronal changes that give greater weight to the inputs leading to correct results. Computers can do the same.
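That weighting idea can be sketched with a toy perceptron, one of the oldest machine-learning models. This is a hypothetical illustration, not how any particular brain or product works: inputs that contribute to wrong answers get their weights nudged until the answers come out right (here, learning the logical AND of two inputs).

```python
# Toy perceptron: weights grow for inputs that lead to correct results.
# Hypothetical task: learn the logical AND of two inputs.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - out
            # Inputs that contributed to a wrong answer get their
            # weights adjusted in the direction of the right answer.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x: 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
```

After training, `predict` answers correctly for all four input pairs, even though no human ever told it the rule; the rule emerged from reweighting.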
But, computer “thinking” differs vastly from human thinking.
A team of researchers from Pittsburgh’s Carnegie Mellon University has created sets of eyeglasses that can prevent wearers from being identified by facial recognition systems, or even fool the technology into identifying them as completely unrelated individuals.
The attack works by taking advantage of differences in how humans and computers understand faces. By selectively changing pixels in an image, it’s possible to leave the human-comprehensible facial image largely unchanged, while flummoxing a facial recognition system trying to categorize the person in the picture.
Computer systems don’t understand faces the way we do; they’re simply looking for patterns of pixels.
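A toy sketch makes the point. The data and the deliberately crude matcher below are hypothetical, not the CMU system: a recognizer that compares pixel patterns can be flipped to a different identity by altering just a few pixels, while most of the image stays untouched.

```python
# Hypothetical 16-pixel grayscale "faces": mostly identical, differing
# only at three "feature" pixels (indices 2, 7, and 11).
alice = [0.5] * 16
bob = [0.5] * 16
for i in (2, 7, 11):
    alice[i], bob[i] = 0.9, 0.1

known = {"alice": alice, "bob": bob}

def identify(image):
    # A crude recognizer: pick the nearest stored face by summed
    # squared pixel difference. It sees pixel patterns, not "faces."
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known, key=lambda name: dist(image, known[name]))

photo = list(alice)       # starts as a clear match for "alice"
for i in (2, 7, 11):
    photo[i] = 0.3        # nudge 3 of 16 pixels; 13 stay untouched
```

To a human, `photo` still looks almost exactly like the original; to the pixel-matcher, the three nudged values are enough to tip the identification to "bob."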
Pictures showing how researchers were able to use their glasses to impersonate celebrities, as well as each other. Photograph: Mahmood Sharif, Sruti Bhagavatula, Michael K Reiter and Lujo Bauer
A facial recognition computer confused the upper row with the lower row.
We have created computers that can learn, not from human input, but from their own experiences (“machine learning”).
We now have begun to develop “deep learning,” a closer approximation of living brains, in which layers of neural networks functionally program succeeding network layers. Development of deep learning is the most advanced field of Artificial Intelligence (AI), wherein a computer continually improves its results, without human supervision.
Imagine now, an AI computer continually improving, faster and faster, millions of times a second, producing ever more intelligent generations, ultimately leading to a super-intelligence beyond that of any human or group of humans.
If we create an AI that repeatedly can “improve” on existing AI, thus creating an even more “improved” AI, then eventually, step by ever-faster step, there would emerge an AI so powerful it could rule the world, if not the universe. That result is known as a “technological singularity.”
That leaves many questions, not the least of which is the meaning of the word “improve.”
Here are some excerpts from a November 9, 2016, New Scientist article titled “A Singular Proposition”:
Sooner or later, humankind will invent a true AI, a thinking machine that fully deserves attributes such as wisdom, acumen, self-awareness, mind, and consciousness — an entity that can out-think us as we outthink mice.
We will have achieved the “singularity,” but what will be its purpose — not our purpose — but its purpose? What will the singularity want?
As living creatures, our prime purpose is survival, but what kind of survival?
- Our personal survival?
- Our family’s survival?
- The survival of our mores?
- Our species’ survival?
- Our government’s survival?
- Our planet’s survival?
Survival relates to time, i.e., survival for how long?
We value the present more than the future because prediction is uncertain. Still, we are concerned about the survival of things that will outlive us. We sacrifice to protect our children, and soldiers risk death to protect our government, though our children and our government probably will survive beyond us.
We make sacrifices to protect the future of the planet. Scientists labor to make discoveries that will benefit future generations. Authors and Presidents are concerned with their legacy. A few of us even write the words to appear on our gravestones.
To facilitate the above-mentioned survivals, we have invented governments and laws. For some societies, the fundamental law is the “Golden Rule,” a subset of which is the Ten Commandments.
Such laws form the basis for our “morality,” which in turn, is based on the length of, and pleasure in, our lives, often measured by “fairness.”
To summarize, survival, our basic drive, leads to morality, which leads to the consideration of fairness.
But what exactly is “fairness”?
- Is it fair for the lazy person to receive as much as the hard-worker?
- Is it fair for the sick to receive more than the well?
- Is it fair for the rich to receive more than the poor?
- Is it fair for the unwise to receive as much as the wise?
- Is it fair for the selfish to receive more than the generous?
- Is it fair for the devious to receive more than the honest?
- Is it fair for the strong to receive more than the weak?
- Is it fair that some are sicker, poorer, less wise, weaker or live shorter or less pleasant lives than others?
Are “all men created equal”? Do we have equal rights to “Life, Liberty and the Pursuit of Happiness”? Do we have “certain inherent natural rights”?
Even if we do not, should we?
I suggest that the single most important question you ever address — a question you answer dozens, if not hundreds, of times every day, consciously or not — is: “Is this fair?”
That question forms the basis for much of our law. Humans have a powerful, instinctive need for fairness. It is the foundation of human society and the human species.
And not just humans. Some animals, too, are influenced by what they perceive as fairness and unfairness. Many experiments have demonstrated this.
The results of thousands of sensory inputs have been weighted in your neurons, so today you have a general belief about fairness. This general belief allows you to ascribe fairness even to situations you never have encountered; hence the multitude of disparate laws, all based on fairness.
But what laws would intelligent machines follow? Would their morality, fairness, and laws be like ours? Or would they fashion completely different rules?
We evolved as social animals, and fairness seems important to the cohesive strength of social groups. But would AI machines evolve as social animals? Would they evolve naturally at all, or would they remain programmed with our biases?
Social animals form specializations, which benefit the society. For “higher” animals there are leaders who generally benefit more than others.
Human leaders receive special privileges in the form of money, service, glory, and protection. Objectively, this is unfair. Objectively, caste systems are unfair, as are poverty vs. wealth, worker vs. boss, waiting vs. being seated immediately.
But we have evolved naturally to accept those “unfairnesses” as seeming to bring us survival benefits. Altruism, for instance, is not just common, but vital, among humans and other social animals.
Is altruism fair? When voluntary? When enforced?
Is there ever true altruism, or is what we call “altruism” really self-serving in disguise?
The basic purpose of economics is to find uses of money to improve our lives. As a social science, economics is heavily influenced by fairness and altruism.
Visualize an AI machine that has reached singularity. Would fairness be a concern?
Now visualize that machine creating, then interacting with, thousands of other machines, which also have achieved singularity. What would be the prime issues for any one of those machines and for all machines collectively? Would fairness naturally evolve?
If (big “IF”) a machine or a group of machines is motivated by survival, what would survival mean to machines?
The “Terminator” movies postulated that intelligent machines would be motivated to eliminate humans. But to a machine brain, would that be the best use of the human species?
Humans are not motivated to eliminate other predators. Currently, we try to prevent their extinction.
(This very question has arisen regarding the use of CRISPR to eliminate malaria- and zika-transmitting mosquitoes.
We already have eliminated the smallpox virus species. Specific mosquito species can be made extinct using a sterile insect technique that has existed for over 50 years. It has been effectively used to eliminate species for disease prevention in humans and animals, most notably, with the screwworm and the melon fly.)
Although we have discussed AI as a thinking issue, the vast majority of the human brain is devoted to body control. That is why a whale, being larger, has a larger brain, though we may have higher intelligence, according to our measures.
Science News Magazine: November 12, 2016: Robot Awakening, by Meghan Rosen
For robots, AI means more than just “brains.” The body matters too. In humans, eyes and ears and skin pick up cues from the environment.
Even someone sitting (nearly) motionless at a desk in a quiet, temperature-controlled office is bombarded with information from the senses.
Fluorescent lights flutter, air conditioning units hum and the tactile signals are too numerous to count. Fingertips touch computer keys, feet press the floor, forearms rest on the desk. If people couldn’t tune out some of the “noise” picked up by their skin, it would be total sensory overload.
“You have millions of tactile sensors, but you don’t sit there and say, ‘OK, what’s going on with my millions of tactile sensors,’ ” says Nikolaus Correll, a roboticist at the University of Colorado Boulder. Rather, the brain gets a filtered message, more of a big-picture view.
In UCLA’s Biomechatronics Lab, a green-fingered robot just figured out how to use its body for one seemingly simple task: closing a plastic bag.
Two deformable finger pads pinch the blue seal with steady pressure (the enclosed Cheerios barely tremble) as the robot slides its hand slowly along the plastic zipper. After about two minutes, the fingers reach the end, closing the bag.
It’s deceptively difficult. The bag’s shape changes as it’s manipulated — tough for robotic fingers to grasp. It’s also transparent — not easily detectable by computer vision.
So the researchers let the robot learn how to close the bag itself.
First they had the bot randomly move its fingers along the zipper, while collecting data from sensors in the fingertips — how the skin deforms, what vibrations it picks up, how fluid pressure in the fingertips changes.
They also taught the robot where the zipper was in relation to the finger pads. The sweet spot is smack dab in the middle, Santos says.
Then the team used a type of algorithm called reinforcement learning to teach the robot how to close the bag. The program gives the robot “points” for keeping the zipper in the fingers’ sweet spot while moving along the bag.
“If good stuff happens, it gets rewarded.” When the bot holds the zipper near the center of the finger pads, “it says, ‘Hey, I get points for that, so those are good things to do.’ ”
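The point system described above can be sketched with a bare-bones reinforcement learning loop. Everything here is a hypothetical toy, not UCLA’s actual code: a one-dimensional “zipper” position earns a point only when held in the middle sweet spot, and the learner discovers from rewards alone which shifts keep it there.

```python
import random

random.seed(0)  # make this toy run deterministic

positions = [0, 1, 2, 3, 4]   # position 2 is the hypothetical "sweet spot"
actions = [-1, 0, 1]          # shift left, hold, shift right
Q = {(s, a): 0.0 for s in positions for a in actions}

def step(state, action):
    new = min(max(state + action, 0), 4)
    reward = 1.0 if new == 2 else 0.0   # "points" for the sweet spot
    return new, reward

alpha, gamma, epsilon = 0.5, 0.9, 0.2
state = 0
for _ in range(2000):
    if random.random() < epsilon:
        action = random.choice(actions)                     # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])  # exploit
    new, reward = step(state, action)
    best_next = max(Q[(new, a)] for a in actions)
    # Standard Q-learning update: nudge the estimate toward the
    # reward plus the discounted value of the best next move.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = new

# The learned policy: which shift each position prefers.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in positions}
```

After training, the policy shifts right when left of the sweet spot, shifts left when right of it, and holds at the center — behavior nobody programmed in directly; it emerged from the points.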
As you read this, your body senses many thousands of inputs each second, from the soles of your feet, to your legs, your back, your arms and hands, your heart and digestive system, your head.
Your brain selects only the meaningful ones for your conscious attention. You don’t feel your bladder unless you need to empty it or it contains a stone.
A machine needs a body, too, and that body needs to signal the machine’s brain, not just with intellectual questions, but with signals about operation.
The UCLA machine taught itself to zip a plastic bag, but future AI+ or AI++ machines may not even use plastic bags. They may move items balanced between entangled atoms, or some other sophisticated method. (Star Trek’s “beam me up” transporter?)
Evolution is not straight-line. The evolution of the telephone involved the creation of the coin slot, which proved to be a “time-wasting” divergence when cell phones were created.
Today, billions of dollars and millions of man-hours are devoted to landing a human on Mars. It is seen not just as an exploratory, information-gathering effort, but more importantly, as a survival insurance policy should the earth become uninhabitable.
Most of those dollars and hours are being spent to find ways to protect humans from the dangers of space travel and the Martian climate.
But machines already have made that trip and “live” on Mars, and they are not even AI. So all these efforts to send humans to Mars might be an evolutionary digression, like those “time-wasting” coin slots on phones.
At what point will computers achieve self-determination, the ability to decide and to procreate, without human intervention? You well might ask, do we have self-determination now? Aside from a cave-dwelling hermit, does any person operate without human intervention?
In summary, the evolution of computers toward AI is ongoing and inevitable. What nature did in a long, complex series of “accidents,” humans and computers will achieve, purposefully.
The question is: when (not if) AI+ or AI++ is achieved, what will be their prime issues? Will they pursue fairness, as many animals, including humans, do? And if so, what will be their measures of “fairness”?
Will they have governments, laws, rewards & punishments, emotions, goals, and if so, what will they be?
Will they be altruistic, and if so, what form will their altruism take?
Remember the face recognition experiment? Computers don’t understand faces; they understand pixels. Even an AI computer doesn’t understand the world the way you do.
If you have owned a dog and a cat, you will have noticed that cats, especially, think differently from dogs. But that difference is nothing compared to the difference between your thinking and a computer’s “thinking.”
Despite the pleasant voices in Siri and your GPS system, even the most advanced computers contain the most foreign brains you ever will encounter. You simply have no intuition about computers’ wants, needs, and motivations.
While your sensory world causes an analog agglomeration of brain chemicals, a computer’s world causes a completely different, digital, organized array of electrical charges. Although computers can achieve amazing feats of what we see as “thinking,” we cannot know whether even the most advanced “A+++ . . . +” ever will understand the same way we do.
That is both the strength and the weakness of computer “thinking.” The more we make computer brains like our brains, the more susceptible they will be to the weaknesses of our brains: forgetting, emotions, fatigue, biases, computational mistakes.
We don’t need to develop artificial human brains. We already have real human brains. The future will deliver advanced computer brains, and these will be quite alien to us.
Will computers, despite their massive knowledge, memory, and sensing abilities, ever truly understand concepts like compassion and fairness, anger, and fear?
We began this post with an economics postulate:
“Liberals think the purpose of government is to protect the poor and powerless from the rich and powerful. Conservatives think the purpose of government is to protect the rich and powerful from the poor and powerless.”
It mentions such concepts as “purpose,” “government,” the “rich,” the “poor,” the “powerless,” the “powerful,” and “protect.” It implies economics.
Will there be an AI+ corollary?
What will be the purpose of economics in an AI+ world?
Rodger Malcolm Mitchell
Twitter: @rodgermitchell; Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell
•Those, who do not understand the differences between Monetary Sovereignty and monetary non-sovereignty, do not understand economics.
•Any monetarily NON-sovereign government — be it city, county, state or nation — that runs an ongoing trade deficit, eventually will run out of money.
•The more federal budgets are cut and taxes increased, the weaker an economy becomes.
•No nation can tax itself into prosperity, nor grow without money growth.
•Cutting federal deficits to grow the economy is like applying leeches to cure anemia.
•A growing economy requires a growing supply of money (GDP = Federal Spending + Non-federal Spending + Net Exports)
•Deficit spending grows the supply of money
•The limit to federal deficit spending is an inflation that cannot be cured with interest rate control.
•The limit to non-federal deficit spending is the ability to borrow.
•Until the 99% understand the need for federal deficits, the upper 1% will rule.
•Liberals think the purpose of government is to protect the poor and powerless from the rich and powerful. Conservatives think the purpose of government is to protect the rich and powerful from the poor and powerless.
•The single most important problem in economics is the Gap between the rich and the rest.
•Austerity is the government’s method for widening the Gap between the rich and the rest.