The human brain can do far more, using less energy and with less mass, than any known computer.
No computer, from your little Apple Watch to Japan’s huge Fugaku (the world’s fastest computer, which can perform more than 415 quadrillion computations a second), can compare to what the human brain can accomplish.
So far as we currently know, and excepting certain past and present political leaders, your brain is the most complex and sophisticated object in the universe.
Yes, the mechanical monsters can do math really fast, and they almost never forget where their keys are, but they still lack the power to operate every function of the human body, along with the brain-body interface, as your brain does.
Consider just the roughly 20 square feet of skin your brain oversees. Your skin, just one of the body’s many organs, contains about 300 million cells, of which 30,000 die and are replaced every minute.
While all this creation, destruction, and replacement are happening, every inch of your skin senses the exact location of heat, cold, several kinds of touch, and pain, all of which are interpreted, every second, by your brain. Try to visualize a machine able to accomplish this.
Visualize driving your car at top speed, while thousands of parts are being replaced every minute.
And that’s just your approximately 8 lbs. of skin. Consider the rest of you. Various types of sight, sound, touch, heat, pain, and taste receptors are sprinkled throughout your body, all monitored by your brain, and all with multiple functions. (Yes, there are taste receptors all over, even in your lungs.)
And though no computer in existence could handle even your body’s sensing and response needs, that is child’s play compared to the more sophisticated psychological tasks your brain handles.
What computer can feel fear, hatred, loneliness, joy, empathy, compassion, love, greed, or disgust? How many different kinds of love can your brain feel?
And even that is child’s play compared to your more sophisticated brain tasks: Self-consciousness, pride, sangfroid, the creation of quantum mechanics, general relativity, evolution, math, and other sciences.
With all that computing power tucked in your skull, still you use computers for specialized tasks. Your brain was designed specifically to help you survive here, in this tiny environment you know as “earth,” in the year 2021.
But your brain includes desires, so you wish to do more than just survive in the here and now.
You wish to survive the new diseases that may come your way, and the meteors, and the comets, and the storms and the solar storms, and the global warming and cooling.
As a species, we wish to survive long term on our moon and on other moons and on other planets. And of course, we wish to survive our own foibles — the wars and prejudices our brains instigate.
And to accomplish that long-term survival, we will have to become smarter, and not just smarter, but better.
By “better,” I mean moral.
The purpose of morality is group survival, but morality is a mixed survival mechanism.
Short term, the least moral among us may have advantages. A crook acquires; a murderer eliminates competition. Even long-term, immorality can benefit the individual, though morality can support the survival of our species.
To accomplish all of our short- and long-term needs, we have two alternatives: Improve our brains or augment/replace our brains.
Nature has not improved our brains for millennia, so we currently focus on augmentation/replacement, which involves the use of computers.
Discover Magazine: The Singularity Might Redefine What It Means to Be Human and Machine
Ever since computers took shape — first filling rooms, then office desks, then pockets — they have been designed by human minds. Over the years, plenty of people have asked: What would happen if computers designed themselves?
Someday soon, an intelligent computer might create a machine far more powerful than itself. That new computer would likely make another, even more powerful, and so on. Machine intelligence would ride an exponential upward curve, attaining heights of cognition inconceivable to humans.
This, broadly speaking, is the singularity.
The singularity is a formidable proposition. Superintelligent computers might leap forward from nanotechnology to immersive virtual reality to superluminal space travel.
Instead of being left behind with our cell-based brains, humans might merge themselves with AI, augmenting our brains with circuits, or even digitally uploading our minds to outlive our bodies.
The result would be a supercharged humanity, capable of thinking at the speed of light and free of biological concerns.
Philosopher Nick Bostrom thinks this halcyon world could bring a new age entirely. “It might be that, in this world, we would all be more like children in a giant Disneyland — maintained not by humans, but by these machines that we have created,” says Bostrom, the director of Oxford University’s Future of Humanity Institute and the author of Superintelligence: Paths, Dangers, Strategies.
There’s the classic sci-fi nightmare of a robot revolution, of course, where machines decide they’d rather be in control of the Earth.
But perhaps more likely is the possibility that the moral code of a superintelligent AI — whatever that may be — simply doesn’t line up with our own.
An AI responsible for fleets of self-driving cars or the distribution of medical supplies could cause havoc if it fails to value human life the same way we do.
There are ways we might teach human morality to a nascent superintelligence. Machine learning algorithms could be taught to recognize human value systems, much like they are trained on databases of images and texts today.
Or, different AIs could debate each other, overseen by a human moderator, to build better models of human preferences.
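The idea of training a machine to recognize human value systems can be sketched with a toy Bradley-Terry-style preference model: the machine observes which of two options a human prefers and infers a hidden “value” vector. Everything here — the function names, the features (“lives saved,” “cost”), and the simulated human — is an invented illustration, not any real alignment system.

```python
# Toy preference learning: infer a hidden "value" vector from
# pairwise human choices (a Bradley-Terry-style logistic model).
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def learn_values(comparisons, n_features, lr=0.1, epochs=200):
    """comparisons: list of (features_a, features_b, a_preferred),
    where a_preferred is 1 if the human chose option A, else 0."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for fa, fb, label in comparisons:
            # Model: P(A preferred) = sigmoid(w . (fa - fb))
            diff = [a - b for a, b in zip(fa, fb)]
            p = sigmoid(sum(wi * di for wi, di in zip(w, diff)))
            # Gradient ascent on the log-likelihood
            for i in range(n_features):
                w[i] += lr * (label - p) * diff[i]
    return w

# Hypothetical data: each option described by (lives_saved, cost).
# The simulated human always prefers the option saving more lives.
random.seed(0)
data = []
for _ in range(200):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    data.append((a, b, 1 if a[0] > b[0] else 0))

w = learn_values(data, n_features=2)
# The learned weight on "lives_saved" should dominate "cost".
print(w[0] > abs(w[1]))
```

The sketch only shows the mechanism: preferences go in, a value model comes out. It also shows the catch the post raises — the model learns whatever values the human trainer happens to hold.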
But morality cuts both ways.
There may soon be a day, Bostrom says, when we’ll need to consider not just how an AI feels about us, but simply how it feels. “If we have machine intelligences that become artificial, digital minds,” he continues, “then it also becomes an ethical matter [of] how we affect them.”
In this age of conscious machines, humans may just have a newfound moral obligation to treat digital beings with respect.
Call it the 21st-century Golden Rule.
One problem: Teaching morality to a machine becomes a question of “whose morality.”
For instance, consider this man’s moral beliefs:
Indian police have arrested a man and accused him of decapitating his own teenage daughter in a rage over her relationship with another man he didn’t like, in what appears to be the latest gruesome case of so-called “honor killing” in the Asian nation. Police in the northern state of Uttar Pradesh said Sarvesh Kumar was arrested as he walked toward the local police station carrying his daughter’s head.
Honor crimes are a major problem in India, neighboring Pakistan, and other countries where family members — most often women and girls — are attacked and even killed by their relatives for bringing perceived shame onto the family.
Such crimes are more common in rural communities where centuries-old traditions and deep-rooted cultural norms still dictate the rules of everyday life.
While that murder may seem immoral to you, and definitely seems immoral to me, what would you call turning off a computer that is so advanced it is sentient? How different is that from murder?
We humans face an infinite number of moral dilemmas, some of which are addressed by laws and some of which are addressed ad hoc. The very existence of dilemmas indicates the absence of clear “right-or-wrong” answers.
Machines, by their very nature, can be physically much stronger and more durable than we humans are. If we also make them smarter and more imaginative, who will rule whom?
Won’t it be vital to give them a moral imperative, while we still can?
But again, whose moral imperative? If a choice arises between killing a human and turning off a computer, which will a sentient computer make? And by the way, when exactly does a computer become sentient?
Fortunately, the human brain operates on a completely different system from the electronic artificial brain. So even with quantum computers being developed, I suspect computers will, for at least several human generations, continue to lag well behind us in their overall capabilities.
That suspicion could change suddenly, however. Science does not work in a straight line.
Previously, the fastest vaccine ever created was the mumps vaccine, which required four years of development. Yet months, perhaps years, of COVID-19 vaccine development were eliminated via the shortcut of mRNA technology.
Tomorrow, a new way to create a superior artificial brain could be announced, and we immediately would face the possibility of sentient computers, and all the dilemmas they would bring: turning them on, ruling them, putting them into hazardous or unpleasant situations, and turning them off.
The First Steps Toward a Quantum Brain: An Intelligent Material That Learns by Physically Changing Itself
An intelligent material that learns by physically changing itself, similar to how the human brain works, could be the foundation of a completely new generation of computers.
Radboud physicists working toward this so-called “quantum brain” have made an important step. They have demonstrated that they can pattern and interconnect a network of single atoms, and mimic the autonomous behavior of neurons and synapses in a brain.
Says project leader Alexander Khajetoorians, Professor of Scanning Probe Microscopy at Radboud University, “This requires not only improvements to technology, but also fundamental research in game-changing approaches.
Our new idea of building a ‘quantum brain’ based on the quantum properties of materials could be the basis for a future solution for applications in artificial intelligence.”
For artificial intelligence to work, a computer needs to be able to recognize patterns in the world and learn new ones.
Today’s computers do this via machine learning software that controls the storage and processing of information on a separate computer hard drive. “Until now, this technology, which is based on a century-old paradigm, worked sufficiently. However, in the end, it is a very energy-inefficient process,” says co-author Bert Kappen, Professor of Neural Networks and Machine Intelligence.
The physicists at Radboud University discovered that by constructing a network of cobalt atoms on black phosphorus they were able to build a material that stores and processes information in similar ways to the brain, and, even more surprisingly, adapts itself.
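A material that stores information by changing its own structure is loosely analogous to Hebbian learning in software, where every signal passing through a connection alters that connection’s strength in place. The sketch below is purely illustrative — the class, patterns, and numbers are invented, and this is not the Radboud group’s actual physics:

```python
# A loose software analogy for a substrate that "learns by physically
# changing itself": Hebbian updates, where presenting a pattern
# modifies the network's own connections.

class HebbianNetwork:
    def __init__(self, n):
        # n nodes, all-to-all connections, initially neutral
        self.w = [[0.0] * n for _ in range(n)]

    def present(self, pattern):
        """Passing a pattern through the network modifies the network
        itself: co-active nodes strengthen their link (Hebb's rule)."""
        for i, xi in enumerate(pattern):
            for j, xj in enumerate(pattern):
                if i != j:
                    self.w[i][j] += xi * xj

    def recall(self, partial):
        """Complete a corrupted pattern from the stored structure."""
        n = len(partial)
        return [1 if sum(self.w[i][j] * partial[j]
                         for j in range(n)) >= 0 else -1
                for i in range(n)]

net = HebbianNetwork(4)
stored = [1, -1, 1, -1]
for _ in range(3):
    net.present(stored)

# A corrupted cue recovers the stored pattern from the weights alone.
cue = [1, -1, 1, 1]  # last element flipped
print(net.recall(cue))  # → [1, -1, 1, -1]
```

The point of the analogy: there is no separate hard drive here. The “memory” is the changed connections themselves, which is what makes the quantum-brain approach so different from today’s store-then-process architecture.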
Science, human survival, and morality march to different drum beats. We developed atomic energy first. Since then we have struggled to use it without destroying ourselves.
From corn to cows to trees to your pet dog, almost every living thing we touch has been subject to some measure of our evolutionary tinkering. But our evolutionary tinkering required years of trial and error, and sometimes didn’t work at all.
Now, with CRISPR-Cas9, we can do in a day what formerly took years. It has given us the power to eliminate malaria simply by eliminating the Anopheles stephensi mosquito via what is known as a “gene drive.”
But should we eliminate that species? We already have, many times. But should we do it intentionally? That question is addressed in the New York Times Magazine, here.
The elimination of an entire species is no small decision. No one can say for certain what the side effects will be, or what other animals will be affected, and how.
And if we intentionally eliminate one species, will we eliminate another unpopular species the same way?
But then, in reality, there is no question at all. If it’s possible to do it, someone will do it, and then someone else will do it. And the question will have been answered.
The title of this post is: “Should we build a moral computer?”
But in reality, a better title might be, “When will we build a moral computer, and how will we survive it?”
Rodger Malcolm Mitchell
Monetary Sovereignty Twitter: @rodgermitchell Search #monetarysovereignty Facebook: Rodger Malcolm Mitchell
THE SOLE PURPOSE OF GOVERNMENT IS TO IMPROVE AND PROTECT THE LIVES OF THE PEOPLE.
The most important problems in economics involve:
- Monetary Sovereignty describes money creation and destruction.
- Gap Psychology describes the common desire to distance oneself from those “below” in any socio-economic ranking, and to come nearer those “above.” The socio-economic distance is referred to as “The Gap.”
Wide Gaps negatively affect poverty, health and longevity, education, housing, law and crime, war, leadership, ownership, bigotry, supply and demand, taxation, GDP, international relations, scientific advancement, the environment, human motivation and well-being, and virtually every other issue in economics. Implementation of Monetary Sovereignty and The Ten Steps To Prosperity can grow the economy and narrow the Gaps:
Ten Steps To Prosperity:
- Eliminate FICA
- Federally funded Medicare — parts A, B & D, plus long-term care — for everyone
- Social Security for all or a reverse income tax
- Free education (including post-grad) for everyone
- Salary for attending school
- Eliminate federal taxes on business
- Increase the standard income tax deduction, annually.
- Tax the very rich (the “.1%”) more, with higher progressive tax rates on all forms of income.
- Federal ownership of all banks
- Increase federal spending on the myriad initiatives that benefit America’s 99.9%
The Ten Steps will grow the economy and narrow the income/wealth/power Gap between the rich and the rest.