–Why the cloud is not the future. A return to face-to-face roots

Twitter: @rodgermitchell; Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell

Mitchell’s laws:
●The more federal budgets are cut and taxes increased, the weaker an economy becomes.
●Austerity is the government’s method for widening the gap between rich and poor,
which leads to civil disorder.
●Until the 99% understand the need for federal deficits, the upper 1% will rule.
●To survive long term, a monetarily non-sovereign government must have a positive balance of payments.
●Those who do not understand the differences between Monetary Sovereignty and monetary non-sovereignty do not understand economics.
●The penalty for ignorance is slavery.
●Everything in economics devolves to motive.

=====================================================================

I’m the first to admit it: I’m a long way from geek status. Oh, about a million years ago I did some rudimentary programming, but that came before a whole lot of yesterdays. So I hope you’ll straighten me out if you think I’m headed in the wrong direction.

This “cloud” thing may be all the rage these days, especially with the proliferation of “smart” phones and pads, but I think the cloud is a dead-end street.

Essentially, the so-called “cloud” is a remote computer. You input a problem to your local computer, whether a PC, a smart phone or some similar device. Your device sends the problem to a remote computer (the “cloud”), which deals with the problem and returns the results to your local computer.

The “cloud” offers several advantages:

1. People can get the speed and storage advantages of a huge computer, while they themselves need own only a minimal, comparatively inexpensive, send/receive device.
2. Efficiency: Because of worldwide time differences, one central cloud computer can service people ’round the clock. When New York works at its maximum, Tokyo may work at its minimum.
3. The costs of maintaining and updating a cloud computer can be shared by thousands of users, who themselves don’t need to buy updates and service continually.
4. Cloud users have access to thousands of different, specialized computers, each providing unique services.

I can sit at home with my little PC and, using the cloud, peer into all corners of this planet, including a ground-level view of most streets in America, while listening to thousands of music selections and keeping track of the stock market in real time. No way could my little smart phone handle that kind of assignment by itself.

To those who predict the cloud to be the future of the Internet, the following article may give pause:

When Will the Internet Reach Its Limit (and How Do We Stop That from Happening)?
By Larry Greenemeier

The number of smartphones, tablets and other network-connected gadgets will outnumber humans by the end of the year. Global mobile data grew 70 percent in 2012. Yet the capacity of the world’s networking infrastructure is finite, leaving many to wonder when we will hit the upper limit, and what to do when that happens.

There are ways to boost capacity, of course, such as adding cables, packing those cables with more data-carrying optical fibers and off-loading traffic onto smaller satellite networks, but these steps simply delay the inevitable.

Imagine you’d like to find the cube root of 24,875. You have choices. One is to use your own computer which, if it has a root-calculating function, will give you the answer: 29.19136230407+.

Or you can use the cloud. You can transmit the problem to a computer somewhere in Asia. It will do the calculation and transmit the answer back to you, thereby using up a tiny fraction of the world’s Internet capacity.

If enough people use the cloud instead of their own devices, the Internet will hit that “upper limit” Mr. Greenemeier mentions.

The solution is to make the infrastructure smarter. Two main components would be needed: computers and other devices that can filter their content before tossing it onto the network, along with a network that better understands what to do with this content, rather than numbly perceiving it as an endless, undifferentiated stream of bits and bytes.

Said simply, the first solution is little more than “do it yourself.” Instead of asking the cloud to compute that cube root, go back to the old way: Use your own device.
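In Python, the “do it yourself” option is a two-line sketch: nothing but local arithmetic, zero bytes of Internet traffic.

    # Compute the cube root locally; no network round trip at all.
    x = 24_875
    print(f"{x ** (1 / 3):.11f}")  # prints 29.19136230407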

The second solution is far more sophisticated.

Scientific American recently spoke with Markus Hofmann, head of Bell Labs Research in New Jersey:

How do we know we are approaching the limits of our current telecom infrastructure?
The signs are subtle, but they are there. When I use Skype to send my parents in Germany live video of my kids playing hockey, the video sometimes freezes at the most exciting moments. All in all, it happens more frequently lately—a sign that networks are becoming stressed by the amount of data they’re asked to carry.

We know there are certain limits that Mother Nature gives us—only so much information you can transmit over certain communications channels. That phenomenon is called the nonlinear Shannon limit. Within the next four or five years, we will reach the Shannon limit.

How do you keep the Internet from reaching “the limit”?
The most obvious way is to increase bandwidth by laying more fiber. Instead of having just one transatlantic fiber-optic cable, for example, you have two or five or 10. It’s very expensive—you need to dig up the ground and lay the fiber, you need multiple optical amplifiers, integrated transmitters and receivers, and so on.

Mr. Hofmann then goes on to describe a “smarter” Internet, which led to the following question:

Even if a smarter Net can move data around more intelligently, content is growing exponentially. How do you reduce the amount of traffic a network needs to handle?
We might move to a model where decisions are made about data before it is placed on the network. For example, if you have a security camera at an airport, you would program the camera or a small computer server controlling multiple cameras to perform facial recognition locally, based on a database stored in a camera or server.

Instead of bottlenecking the network with a stream of images, the camera would communicate with the network only when it finds a suspect.

See where this is headed? The solution to the inherent, physical limits of the cloud is to use non-cloud computing.
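As an illustration, here is a rough Python sketch of that camera-side filtering pattern. Every name in it (the functions, the alert URL) is a hypothetical stand-in for a real camera feed, an on-device recognition model and a central alert service:

    import time
    import requests  # third-party HTTP library

    ALERT_URL = "https://security.example.com/alerts"  # stand-in endpoint

    def capture_frame(camera_id):
        """Grab one frame from the local camera (stand-in)."""
        ...

    def match_watchlist(frame):
        """Compare faces in the frame against an on-device database;
        return a suspect ID, or None (stand-in for a real model)."""
        ...

    def monitor(camera_id):
        while True:
            frame = capture_frame(camera_id)
            suspect = match_watchlist(frame)  # the heavy work stays local
            if suspect is not None:
                # The only network traffic: one tiny alert, not a video stream.
                requests.post(ALERT_URL, json={"camera": camera_id,
                                               "suspect": suspect,
                                               "ts": time.time()})
            time.sleep(0.1)  # roughly 10 frames per second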

The cloud is a small, though fundamental, step for the Internet, something that always will be with us. But I predict that, for longer-term growth, we’ll return to our roots: more and more local computing.

Today, the cloud involves massive two-way communication between your little, relatively “dumb” local computer and huge, geographically distant mainframes.

But one day, your refrigerator, your stove, your bathtub, every piece of clothing in your closet, every board, wire and pipe in your house and, yes, even your own body will contain microchips that will calculate and communicate with each other at short range, and together will comprise one big, local computer with a rated speed in petaflops (a thousand trillion floating-point operations per second). Each of your neighbors will have the same. The world will be filled with billions of personal “mainframe” computers.

And if your personal computer doesn’t know the answer, it first would ask your neighbor’s computer, in a short “off-the-grid” communication. And your neighbor’s computer might ask its neighbor’s computer, in a kind of local, crowd-sourcing exercise.
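In code, that cascade might be nothing more than trying the nearest sources first. A minimal sketch, with the three resolver functions as hypothetical stand-ins:

    def ask_local(query):
        """Try the devices in your own house first (stand-in)."""
        return None  # None means "don't know"

    def ask_neighbors(query):
        """Try nearby machines over a short-range, off-the-grid link (stand-in)."""
        return None

    def ask_cloud(query):
        """Last resort: the distant cloud (stand-in)."""
        return "answer from the cloud"

    def resolve(query):
        # Nearest resolver first; escalate only when local sources come up empty.
        for ask in (ask_local, ask_neighbors, ask_cloud):
            answer = ask(query)
            if answer is not None:
                return answer

    print(resolve("cube root of 24875"))  # falls through to the cloud here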

The cloud will be like a highway between distant cities — used seldom and only as a last resort — while the vast majority of traffic will be within each city block.

The future is not to devote all the time, money and brains to building an ever-wider highway for distant communication. Yes, we may need that, but the future is not the cloud. The future is to devote that time, money and brains to merging our surroundings and ourselves into individual, local computers, which will communicate locally, the way people have communicated for thousands of years.

That’s what I think.

Rodger Malcolm Mitchell
Monetary Sovereignty

=====================================================================

Nine Steps to Prosperity:
1. Eliminate FICA
2. Medicare — parts A, B & D — for everyone
3. Send every American citizen an annual check for $5,000 or give every state $5,000 per capita
4. Long-term nursing care for everyone
5. Free education (including post-grad) for everyone
6. Salary for attending school
7. Eliminate corporate taxes
8. Increase the standard income tax deduction annually
9. Increase federal spending on the myriad initiatives that benefit America’s 99%

10 Steps to Economic Misery:
1. Maintain or increase the FICA tax.
2. Spread the myth that Social Security, Medicare and the U.S. government are insolvent.
3. Cut federal employment in the military, post office, other federal agencies.
4. Broaden the income tax base so more lower income people will pay.
5. Cut financial assistance to the states.
6. Spread the myth that federal taxes pay for federal spending.
7. Allow banks to trade for their own accounts; save them when their investments go sour.
8. Never prosecute any banker for criminal activity.
9. Nominate arch conservatives to the Supreme Court.
10. Reduce the federal deficit and debt.

No nation can tax itself into prosperity, nor grow without money growth. Monetary Sovereignty: Cutting federal deficits to grow the economy is like applying leeches to cure anemia.
Two key equations in economics:
1. Federal Deficits – Net Imports = Net Private Savings
2. Gross Domestic Product = Federal Spending + Private Investment and Consumption – Net Imports

#MONETARY SOVEREIGNTY

24 thoughts on “–Why the cloud is not the future. A return to face-to-face roots”

  1. “The cloud” is nothing more than remote, centralized computing. Cloud providers like Amazon mostly don’t use mainframes; they use servers. Cloud computing takes advantage of the exponentially increasing network capacity to allow an application running on a single server or cluster of servers connected to the Internet or an intranet to serve users in a remote location. Physically, it is no different from you using your PC to update your blog, or using your phone to check your bank account balance. Whether that blog server is in a “cloud”, or your bank’s mainframe is in one of their own data centers, makes no difference to you as a user.

    The cloud has the same advantages that traditional centralized computing has always had, in that it allows a single IT staff and infrastructure to maintain and upgrade the hardware and OS software for multiple groups of customers. Outsourcing has the same advantages for customers that are unrelated companies. And cloud computing has the same disadvantages, notably a loss of control and longer network latency. The network latency problem is orders of magnitude less than it was 30 years ago. But for users who are not in the IT business, outsourcing and the cloud make a great deal of sense. Managing a data center to provide bulletproof security, availability, and performance is a hard thing to do, and a manufacturing company, for instance, might rather not develop and pay for the expertise to do it.

    It’s not a dead end, and you can expect end user devices to get smaller and cheaper, but they will also get faster and be able to hold more data. Letting your neighbors connect to your computer will continue to be a security threat, and is not a likely solution as long as your neighbor might be using a Microsoft operating system. The Internet already has devices in it that cache data close to end users, so that not every transaction has to go all the way back to the server. That sort of thing will grow, and tend to mitigate the bandwidth requirement. Routers are also capable of managing multiple routes from here to there, and choosing one based on various parameters, including how much you paid, how much you’re using lately, and how busy the other routers in the network are at the moment. Thus a network can provide consistently good performance at much higher utilizations than any single line could do.

    Bluetooth and local wireless networks will continue to proliferate, especially in the home environment. Your devices will get smarter, and will communicate with each other. Maybe even with your embedded microchip. iTV and iWatch are only the beginning.

    When fiber is laid in the ground, they put in extra, and they put it in conduits and tunnels so that more fiber can be added later without digging it up again. Adding fiber isn’t a big problem, except maybe in undersea cables. I expect they would have included more spare strands for those.

    Calculations like the cube root will continue to be done locally, for the most part. The cloud is about sharing data and applications, and human support resources, more than sharing processing capacity.


    1. Thanks, golfer,

      The lead for the above-mentioned Scientific American article read:

      To keep the Web from collapsing under the weight of ever more data, the network needs to radically change the way it handles information, says the head of Bell Labs Research

      The cloud is about billions of relatively dumb little computers sending questions and receiving answers, with the calculations being done remotely.

      All those questions and answers take up lots of transmission space. It is transmission space, not calculations or memory, that constitutes the Internet limitation.


      1. “All those questions and answers take up lots of transmission space.”

        Not really. Video—especially high definition video—dwarfs nearly every other common sort of data. E.g., from
        http://arstechnica.com/business/2012/05/bandwidth-explosion-as-internet-use-soars-can-bottlenecks-be-averted/ :

        “Lots of applications are driving this growth, but most notable is video. Video surpassed peer-to-peer file sharing as the largest type of Internet traffic in 2010. It’s expected to account for more than 50 percent of consumer Internet traffic by sometime this year. By 2015, on-demand video traffic will be the equivalent of three billion DVDs per month, and one million minutes worth of video will cross global IP networks every second.

        It’s not all for consumers, either; videoconferencing is “growing at pretty much the fastest rate from a traffic perspective, more than any other business application,” Thomas Barnett, a service provider marketing manager for Cisco, told Ars.”

        Most of these uses—like the Scientific American article’s video-by-Skype—can’t be improved by making the endpoints “smarter” in the sense suggested. (Improving video compression codecs might still be possible, though.)

        Clouding will be of minor significance to the transmission space except as it affects video (much of which is already effectively clouded anyway).


        1. I remember seeing somewhere that Netflix now accounts for 1/3 of all Internet traffic. With quality of service controls, that should not impact applications that companies are putting on the cloud, and paying for. But when bandwidth limitations are met, the Netflix users may get jittery video and pauses, unless they implement some sort of buffering technique. They could run just fine as a background app, with proper buffering.


      2. I think what’s different about cloud computing vs. traditional client/server is that it uses the Internet as its network, vs. a private network or a secure tunnel (VPN). Companies are also using internal clouds, and I’m not sure how that differs from the old way. I’ve been out of the business since before “cloud” became popular, but the new thing then was that a client could ask for an application by name, instead of the server name on which it was running, in order to establish the connection. Maybe that protocol is what became cloud computing. Of course, you could also do that with a simple directory lookup. It sure would make it easier to outsource an application, and then change your mind about to whom you outsource the hosting of it.
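        In effect, that’s a name-to-location lookup. As a toy sketch (all names made up):

            # Toy service directory: clients ask for an application by name;
            # the directory maps the name to whichever host currently runs it.
            REGISTRY = {
                "payroll": "host-a.example.com",
                "inventory": "host-b.example.com",
            }

            def connect(app_name):
                host = REGISTRY[app_name]  # re-point here when the app moves hosts
                return f"connecting to {app_name} at {host}"

            print(connect("payroll"))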

        When you say “relatively dumb little computers” and “calculations done remotely”, I think “application server”. That’s not new, and not unique to cloud computing.

        Network speed and bandwidth have been issues since the first users moved out of the data center. The proliferation of clients and users and applications is what is putting a strain on Internet (and ISP, in local situations) capacity, not the cloud protocols or the idea of remote application servers.

        The idea of doing the facial recognition near the cameras is the exact same thought that led to distributed computing (on cheap server hardware) replacing some centralized computing (on expensive mainframes) in the 1980’s. For highly data-intensive or compute-intensive applications it makes a lot of sense, although it introduces a different set of problems involving data replication, change management, performance management, reliability, availability, and security. Nevertheless, it is a good answer to network latency and server capacity problems for such things as a facial recognition application that would attempt to identify every unknown face that passed by the camera. Another type of application that simply compared one face at a time to a stored image of who the person claimed to be would not put much of a burden on either the server or the network.


      3. I should have done this before, but I looked up “cloud” on Wikipedia. Some interesting statements there:

        “Cloud Computing is a jargon term without a commonly accepted non-ambiguous scientific or technical definition.”

        “The popularity of the term Cloud computing can be attributed to its use in marketing to sell hosted services in the sense of Application Service Provisioning that run Client server software on a remote location.”

        “The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframes became available in academia and corporations, accessible via thin clients / terminal computers, often referred to as “dumb terminals”, because they were used for communications but had no internal computational capacities.”

        So, when you say “The cloud is about billions of relatively dumb little computers sending questions and receiving answers, with the calculations being done remotely” and I say ““The cloud” is nothing more than remote, centralized computing”, we’re both saying the same thing.

        Plus ça change, plus c’est la même chose.


  2. Clouding is not about efficiency; it’s about business models, control and monetization. From consumers’ perspective, the carrot is convenience and the stick will be the combination of “intellectual property” laws and oligopoly making (authorized) alternatives unavailable.

    “And if your personal computer doesn’t know the answer, it first would ask your neighbor’s computer, in a short “off-the-grid” communication. And your neighbor’s computer might ask its neighbor’s computer, in a kind of local, crowd-sourcing exercise.”

    See: BitTorrent.

    The problem with this is that consumer Internet connections are optimized for download traffic being many times greater than upload traffic. This is a reasonable engineering decision, since most people consume far more data than they create; but it does make peer-to-peer a technologically inefficient way to distribute data. Server/client is less strain on networks; the only reasons peer-to-peer exists as more than a novelty are social and legal.

    Server/client doesn’t preclude, and often uses, multiple servers located strategically throughout the network; but this is still unlike peer-to-peer in that the server machines and their connections are optimized for very different traffic patterns than home Internet connections. In traditional server/client, though, content or applications can be downloaded to the client and used there as needed. Clouding means going back to the server every time you want to hear the same song or use the same program: that’s all about monetization.


    1. Well, since TJ Watson thought he would sell a maximum of 5 computers when IBM began selling them, I’m not sure predictions about 10-20-50 years are worth a whole lot. Like the weather, it’s easy to say that tomorrow will be pretty much like today, except that any trends in place will continue. There will be more traffic, more devices using computers, including many devices that exist today but don’t use them, more communication between devices and more remote control. Some things that exist today as demonstrations, like “smart houses”, will become commonplace. Cars already can see behind you and stop themselves (there’s a standalone app for you!). It seems reasonable that cars will be able to almost drive themselves, getting data feeds from highway monitors about congestion and optimum speed, on very local, wireless networks.

      Google Glass puts your computer in your eyeglass frame, and lets you control it with eye movements. There will be more and more “out of the box” ideas coming to fruition as machines get smaller and smaller. IBM already can store and read data on atomic levels, one atom = one bit. Even the binary system may become obsolete, as a single atom can have multiple states, not just “on” and “off”. Nanotechnology products are available; it’s only a matter of time until we build nano-machines, and then nano-machines that build and repair other nano-machines.

      Sci-fi books are a good source of wild ideas that can become possible as technology progresses, even along the lines of already established trends, like Moore’s Law and miniaturization. Computers can translate written words into other languages, and can recognize speech and reproduce it, so is the Star Trek universal voice translator really that unbelievable? We have 3D printers, so is the Star Trek replicator so impossible? I’m not sure I’d want to eat food from it, but almost anything else seems reasonable.

      For household applications, the local wireless network will carry lots more traffic than the Internet, so those will continue to develop even if Internet bandwidth can’t keep up.

      Government will be able to gather more data than ever, and store and transmit it on their private networks. The threat to civil liberties will grow as more and more monitoring of public places takes place. The IRS scandal is only the beginning of what a government run amok could do with more and more information about your movements, associations, and activities. Not that governments haven’t done such things before, but improved technology makes a larger scale that much easier. OTOH, public safety could improve. Cameras caught the Boston Marathon bombers planting their pressure cookers, and if that video could have been analyzed in time the incident might have been averted.

      The savior of the human race, I think, will be solar energy. We are running out of fresh water, and the only reasonable source that is large enough is desalination, but it takes too much energy. If energy is free, nearly anything is affordable, and there is less reason for conflict, whether individual or governmental.

      Unfortunately, I don’t see how any of this will help us do macroeconomics any better.


  3. Prediction:

    As cloud-based services (especially video-on-demand) become more prevalent, broadband providers will introduce more problematic data usage caps and overage penalties. The conflict will be resolved by abandoning net neutrality: e.g., Amazon will pay Comcast some gigantic monthly fee, and Amazon cloud traffic will no longer count toward Comcast customers’ allotted data use. Of course, this will have the side-effect of putting smaller, independent web services and content providers at a disadvantage—users will be wary of web sites that aren’t on their broadband provider’s “free data use” list—but that’s not a bug, it’s a feature.

    All this will have nothing to do with technological limitations and everything to do with maximizing cash flow for monopolists and near-monopolists. The herd will remain obediently bewildered.


  4. You are correct that video is a huge transmission hog, but there are different kinds of video.

    For commercial movies, the solution might be merely to pass them along, like ripples in a bucket, with the movie never entering the Internet. Someone would send it directly to 20 neighbors, each of whom would send it to 20 neighbors.

    After a half dozen iterations, the movie might reach almost everyone on earth.
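    (For scale, assuming no overlap: each round multiplies the audience by 20, so after k rounds there are 20^k copies. 20^6 is 64 million; 20^7 is 1.28 billion. Overlap and non-participants would stretch that out, but the growth really is that explosive.)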

    Live streaming is a different problem, but I can visualize a couple of solutions:

    1. In any live-streaming video, the vast majority of the picture doesn’t change from second to second. Think of a football game. The field doesn’t change, and even the players change only in pieces. Perhaps, only the changes could be transmitted.

    2. Predictive. All live video is highly repetitious. Think of a video phone. Not only do the faces change very little and in small increments, but the changes can be predicted.

    Even current rudimentary programming predicts what I am about to type into Google. And an audio processor can predict what the speaker’s face looks like.


    1. “1. In any live-streaming video, the vast majority of the picture doesn’t change from second to second. Think of a football game. The field doesn’t change, and even the players change only in pieces. Perhaps, only the changes could be transmitted.

      2. Predictive. All live video is highly repetitious. Think of a video phone. Not only do the faces change very little and in small increments, but the changes can be predicted.”

      All lossy video compression (which is essentially all digital video) since at least 1988 makes use of these characteristics. See:
      http://en.wikipedia.org/wiki/H.261
      for a description of an early digital video codec; refinements include the methods used for VCDs, DVDs and Blu-Rays, as well as any current streaming Internet video.
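      To make the idea concrete, here is a toy numpy sketch of inter-frame coding: transmit the first frame whole, then only the pixels that changed. (Real codecs add motion compensation, transforms and entropy coding on top.)

          import numpy as np

          def delta_encode(frames):
              """Yield the first frame whole, then (positions, values) of changed pixels."""
              prev = frames[0]
              yield prev, None
              for frame in frames[1:]:
                  changed = np.nonzero(frame != prev)  # only the pixels that moved
                  yield changed, frame[changed]
                  prev = frame

          def delta_decode(stream):
              stream = iter(stream)
              frame, _ = next(stream)
              frame = frame.copy()
              yield frame.copy()
              for changed, values in stream:
                  frame[changed] = values  # patch the changes onto the last frame
                  yield frame.copy()

          # A mostly static 4x4 "field" with one moving bright "player" pixel:
          frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
          for i, f in enumerate(frames):
              f[0, i] = 255
          decoded = list(delta_decode(delta_encode(frames)))
          assert all(np.array_equal(a, b) for a, b in zip(frames, decoded))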

      You’re correct, but about 25 years behind.


    2. “For commercial movies, the solution might be merely to pass them along, like ripples in a bucket, with the movie never entering the Internet. Someone would send it directly to 20 neighbors, each of whom would send it to 20 neighbors.

      After a half dozen iterations, the movie might reach almost everyone on earth.”

      Again, please see BitTorrent:
      http://en.wikipedia.org/wiki/BitTorrent

      I’m not sure what you mean by “never entering the Internet” though—you’re postulating some kind of infrastructure that doesn’t exist today? Local area networks (like those we use to share one Internet connection among all devices in one home) that, in addition to Internet connectivity, also provide connectivity to some number of neighboring local area networks without using an Internet Service Provider?

      In any case, the fundamental problem (as usual) with this isn’t technical, it’s social/political/economic: how would the copyright holders monetize this sort of transmission? So long as they can extract money more efficiently via the current system, they will use all of their quite considerable power as oligopolists and lobbyists to keep things as they are.


      1. They don’t need oligopolists and lobbyists, they can use technology to prevent copying of their property, and do already. If they cannot continue to manage the losses that way, they would simply not publish it in electronic form.

        You don’t need new networks, you probably can already see some of your neighbors’ wireless networks, and they can see yours. That’s why it’s important to secure your network. If you don’t secure it, or if you allow your neighbors access, they would be able to share your data.


    3. 1. That is exactly the technique used by video compression algorithms: transmit only the changes from the previous frame.

      2. This web site is hilarious. It’s a collection of things the predictive application put into people’s text messages. There’s still a lot of work to be done on this technique.

      http://www.damnyouautocorrect.com/13603/the-25-funniest-autocorrects-of-dyacs-first-year/

      As for passing along commercial movies, that is what won’t happen as long as IP continues to be protected. Some people do it today, and some go to jail for it. Music, too.


  5. Most of the responses fall into two categories:
    1. Ho hum, that’s already been done
    2. No, it can’t be done.

    Yet, experts tell us the Internet is reaching saturation, so something will need to be done.

    The two primary issues are volume and speed. Volume isn’t an issue if we wish to pave the world with cable. Speed always will be an issue.

    And yes, predicting technology is a fool’s game — but interesting.


    1. Networks have been approaching saturation since they were first built. It’s not a new problem. In the ’90s a huge buildout of the networks used by the Internet occurred. The investment was too much for many telephone companies to bear, and only the largest of them survived it. Now that capacity is being filled up. More investment will occur, and better technology (LTE).

      As for speed, I believe that to a large extent a given hardware link, whether copper or fiber, can be used at higher speed by changing out the transmitters and receivers at each end. That is where the technology is changing: ever-faster processors and smaller form factor storage are making the routers capable of more volume at higher speed. So capacity can be increased quite a bit without paving the world. And, when they put in all that fiber in the ’90s, if they needed 3 strands they put in 300, so a good deal of it is already there, and there is room for more as fiber technology changes. The cable is cheap; it’s the routers and management of them that are not so cheap.

      But, as they say, if a trend cannot continue, it will not. If the Internet does get too congested to be useful, then pricing and QoS will remove the chaff, and local networks will absorb some of the traffic.


  6. The speed limit is light speed. Otherwise we could simply use satellites to solve all volume problems.

    Some high-speed stock traders have positioned their computers as close as possible to trading floor computers, so they can reduce the microsecond time delay and get their trades in before the competition.

    A primary reason to make chips smaller is to reduce the distance a signal must travel within the chip.

    Light speed is the brick wall of electronics — another advantage of local networks.
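    (Rough numbers: at a 3 GHz clock rate, one cycle lasts about a third of a nanosecond, and in that time light travels only about 10 centimeters; signals on a chip move slower still. Shrinking the chip shrinks that travel time.)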


    1. True, the word “speed” is misleading. All data moves at the same speed (c). Maybe “data rate” is better. The bits move at the speed of light, but the time between bits is variable, so you can have more of them per second on a “faster” link. Human response is about 200 ms, and in that time light could travel farther than the circumference of the earth. If you’re getting 1 second response time from a web server in the US (oh, don’t we wish!), the speed of light is not your limiting factor.
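      (The arithmetic: light in vacuum covers 300,000 km/s × 0.2 s = 60,000 km in 200 ms, half again the Earth’s roughly 40,000 km circumference. In optical fiber the signal is about a third slower, which still works out to some 40,000 km.)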

      Chips move data orders of magnitude faster than TP links, and with more parallelism and less noise, so their data transfer capacity does vary noticeably with distance.

      Stock traders co-locate, so that they can use direct connections rather than TP links. Every router in the path (even the one that creates your local network) is like a red light on the street, the data must “pause” there. Switches don’t have that delay, so if you can connect on the same LAN segment, you’re much faster than anyone going through a router.


  7. Since you wrote “So I hope you’ll straighten me out if you think I’m headed in the wrong direction,” I will.

    Your definitions are faulty. The “cloud” is the name for big servers that maintain your data previously stored on individual computers (if you so agree) and that can be in any jurisdiction around the world, so you lose legal control of it; technically, jurisdiction is virtual.

    It’s a big hub, for whatever reason: Maybe so that all your devices can download the same data on-demand (or maybe it’s ‘pushed’ to you). Maybe for convenience—you’re tired of buying hard drives for backup. Maybe to sell you software as a “service” for a subscription fee, software that you would previously install or download, and the software vendor promises to give you updates for free, after you pay the fee. Maybe it’s where your email is stored if you use gmail, yahoo, or hotmail. Facebook is stored on the cloud. You’re separating the internet from the cloud. The cloud is the internet. It’s just that they figured out how to monetize their big data and backup servers (because storage is now dead cheap), how to give you a reason to trust them more than your home or business hard drives, and how to charge you money for storing your stuff on theirs. All in the name of connectivity, and a gullible name for it, the CLOUD, which is marketing 101 for making it less threatening.

    Here’s another analogy: 1910 AT&T telephone system through a switchboard. That’s all the internet is. Now, let’s say AT&T wants to capture your phone numbers and contacts, and your conversations, and what you think of Molly and her deadbeat husband, and store it (cheaply) in downtown Manhattan in one of their big buildings, and they hit upon the idea of calling their collecting hub the “Cloud” back in 1910. That’s all it is. Bigtime centralization.

    I heartily recommend you watch these two videos, which will not disappoint. The first one is 45 minutes, the second 15 min. It’s Eben Moglen, the guy who wrote the first networked email program when he was 16, lawyer to the GNU project, and visionary guru to even 17-year-old programmers. The guy who represented Phil Zimmerman with PGP encryption against the govt in 1995, and won: he threatened to print the code in a book and ship it overseas which would bypass the govt’s espionage charge against Zimmerman for shipping code out of the country. The 45 minute one is seminal. It started a movement.

    “Freedom in the Cloud” http://www.youtube.com/watch?v=QOEMv0S8AcA (45 min) [Explains the structure of the web that no one knows in a clear un-tech highly entertaining way… and the mesh (neighborhood) networking you write about.] “The Alternate Net We Need” http://www.livestream.com/pdf2011/video?clipId=pla_8ad51bab-a440-4e9b-87c8-6e0b9e196903 (15 min)

    As for Markus Hofmann, he either doesn’t know what he is talking about or he’s being disingenuous (a favorite thing to do when Scientific American comes calling, considering the disrespect it’s fallen into). (Disclaimer: I worked at Bell Labs) It’s $300 mill to build and lay a new fiber optic cable from NYC to London…peanuts. It’s a de minimis operation to up the data reception capability at the nodes at each end of the cable (in NY and London)…just move the switch–that’s how volume is done now–but there’s no guarantee the landlines can handle it yet (a commercial consideration more than anything since all fibre optic cable power is throttled, cut down, otherwise we’d all have TV-level speed and quality reception on our iPhones and iPads.) We’ve had holographic storage since 1977, and quantum computing since the 90s–I’ve seen it–they’re just burying it until ancient dead-end digital finishes its commercial run. ‘Nuff said.

    I gotta run, but don’t forget that US broadband is sub-standard, third-world. We are years behind South Korea or Japan, even Malaysia, because the US govt gave up its duty to protect our airwaves and licensed (privatized) those rights to AT&T, Cox, Comcast, and Time-Warner, who have no legal requirement to bring the technology up to current levels, or even to supply it to rural areas. Basic infrastructure.

    More: When The Speed Of Light Is Too Slow: Trading at the Edge http://www.kurzweilai.net/when-the-speed-of-light-is-too-slow

    Submarine Cable Map http://www.submarinecablemap.com

    Andrew Blum: author of Tubes


  8. You wrote “All those questions and answers take up lots of transmission space. It is transmission space, not calculations or memory, that constitutes the Internet limitation.”

    Then why don’t we have this issue with television? It’s exactly the same technology.


    1. As I’m sure you know, television broadcasts one signal on a dedicated channel. There are no other users, nobody else whose usage can delay you, because the entire bandwidth is dedicated to the single user. The Internet is shared bandwidth, with millions of simultaneous users.


  9. golfer1john wrote: “As I’m sure you know, television broadcasts one signal on a dedicated channel. There are no other users, nobody else whose usage can delay you, because the entire bandwidth is dedicated to the single user. The Internet is shared bandwidth, with millions of simultaneous users.”

    You’re talking about ‘last mile’ technology for residential service, not the backbone, not the global network. Further, cable internet service uses the cable television infrastructure; again, as a ‘last mile’ technology. Ditto DSL or ADSL using the 100-year-old copper telephony infrastructure. And it’s not “millions of simultaneous users.” It’s by neighborhood or region, usually the last 19-99 miles. Digital TV travels along the exact same backbone as the net does as data.

    The fact of the matter is that a huge amount of the fiber infrastructure *already installed* in the US is not in use. I mean in the ground right now. An enormous amount. It’s called ‘dark fiber’ in the biz, because it’s dark, unused. Because of labor costs, companies and countries installed–and install–excess capacity to handle demand until 2100 AD and sometimes beyond. For example, when the city of Las Vegas started building out in 1989, they installed fiber optic cabling to all their residential areas. It was cheap to add extra human-hair-width fiber in the one-inch pipe snaking up to each house. They knew that the need would be there. So Las Vegas in the 90s had the capacity to handle telecommunications, internet, and television technologies that hadn’t even been invented. And that’s still the case. It’s one of the most fiber-rich cities in the US, and the majority of the capacity goes unused.

    When huge civil engineering firms (or railroads in the EU) build, they add extra capacity with an eye to leasing it out a decade or two–or maybe seven–down the road. But it’s still dark; it’s sitting there. All over the world.

    That’s why what the Bell Labs guy said makes no sense, or the reporter didn’t understand what he was talking about.


    1. I was talking about broadcast TV.

      The cable used for both cable TV and internet is also shared, as is the internet backbone. I have had “jittery” cable TV sometimes, but I assume the cable company would use QoS controls to make sure its TV signal was always higher priority than the internet signal, so that any congestion would not affect the TV.

      I agree, one or the other is being misleading, whether intentional or not. I had never heard of the Shannon limit, but I looked it up and it seems to relate to the signal to noise ratio on a fixed capacity carrier. Since we’re constantly upping the carrier capacity, we’re also constantly moving the Shannon limit, so it makes no sense to say we will reach it in some fixed time. Plus, as we have both mentioned, there is still lots of unused capacity already installed and easily added to.
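      (For reference, the single-channel formula is C = B log2(1 + S/N), with capacity C in bits per second, bandwidth B in hertz, and S/N the signal-to-noise ratio. Add fiber strands or usable bandwidth and the ceiling rises with them; the “nonlinear” variant for optical fiber also accounts for the fiber’s nonlinear response at high signal powers.)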

