Key Takeaways
- Edge computing represents a major new architectural option for enterprise computing, but its rapid adoption should not be surprising.
- Geoffrey West of the Santa Fe Institute wrote a book, Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies, that explains the evolution of flow-centric systems like distributed computing.
- Key elements of distributed computing, namely processing power and network performance, scale at a power law of less than one relative to one another.
- The limitations those elements hit at scale make edge computing a logical addition to data centers and end-user computing for large-scale software.
As every enterprise finds its software portfolio merging into a single, large-scale distributed computing architecture, we are also seeing that architecture evolve. “New” patterns are trending that are purported to accommodate the sheer scale of digitizing corporate operations. For some, the rise of edge computing is one of the greatest changes to the way computing gets done since cloud computing began replacing private data centers.
But the truth is that edge computing is an easily predictable pattern as digital operations scale, especially for geographically distributed businesses such as national retail chains, insurance brokerages, or even modern manufacturing supply chains. There is science, coming from the field of complex adaptive systems, that explains these patterns. It is worth understanding the basics of this science, as it will help you make better decisions about what to run centrally and what to run “on the edge”.
First, let’s take a second to look at this amorphous beast we call edge computing. Normally my next sentence would be something like “What exactly is edge computing?”, but having been through the cloud wars, I know better than to launch yet another debate on definitions and semantics. Instead, “the edge” can be seen as the set of deployment locations that are not in the data center. This is good enough for our purposes.
Now, let’s look at what complexity science can tell us about the evolution of the edge. Geoffrey West is a theoretical physicist and distinguished professor at the Santa Fe Institute, which has been the epicenter of complexity science for the last three decades. West studied the ways systems handle flow: the movement of shared resources between the agents of the system. Think blood vessels delivering oxygen to cells, the electric grid delivering electrons to machines and appliances, or even city infrastructure delivering passengers to homes and businesses.
What West noticed is that these systems seem to evolve in very analogous patterns. Our circulatory system, for example, delivers blood throughout the body via huge central blood vessels that feed smaller vessels, which in turn do the same, until you reach the capillaries: tiny vessels that each serve a very specific set of cells.
Our electric grid has evolved such that large, central generators feed a massive core transmission infrastructure (often operating at hundreds of thousands of volts!), which in turn feeds more localized distribution infrastructure (working at thousands or tens of thousands of volts), which again passes through step-down transformers at the neighborhood level to become the local standard voltage that runs in our homes. (Even our homes are one further step-down point, as the incoming current is controlled and distributed to allow the same utility lines to serve dozens of different devices simultaneously.)
Cities are especially interesting to West (and to me, frankly), as vehicular infrastructure has evolved into a very similar pattern. Interstates bring traffic to highways and major thoroughfares, which in turn feed local streets and neighborhood lanes. Air infrastructure has major international hubs feeding smaller national airports which may even feed tiny regional airports. The shipping industry has massive ships that feed railroads and giant semi-trucks that in turn feed local distribution warehouses that feed (often smaller) trucks that feed local stores.
This pattern of large, core “trunk” flows with “limbs, branches, and leaves” is incredibly common in complex systems that handle flow. So much so, in fact, that West and his colleagues discovered that there are mathematical patterns in the way these systems are structured. While some elements scale very quickly and others scale very slowly relative to the system as a whole, all scale according to a mathematical relationship known as a “power law”.
Power Laws and Limits
A power law is defined as follows by Wikipedia:
In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four.
In flow systems, components have power law relationships with each other. The nature of those relationships can tell us a lot about how systems will continue to evolve over time. Systems in which key resources scale at a power law of less than one with respect to the infrastructure that delivers those resources will hit limits that preclude further growth.
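A quick way to see why a sub-linear power law implies a limit (the notation here is mine, not West’s): if a shared resource R grows with system size N as a power law with an exponent less than one, the share of that resource available to each agent shrinks as the system grows.

    R \propto N^{k}, \quad 0 < k < 1
    \qquad\Longrightarrow\qquad
    \frac{R}{N} \propto N^{k-1} \to 0 \quad \text{as } N \to \infty

Each additional agent therefore receives less of the shared resource than the one before it, and at some size the per-agent share falls below what is needed to keep the system growing.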
In mammals, metabolism increases with a ¾ power law relative to size. Thus, a larger animal has a higher overall metabolic rate (the energy used to maintain cellular function per unit of time) than its smaller counterparts, but the difference is not directly proportional. As the animal gets larger, the change in metabolic rate gets lower for each additional kilogram. So, on a per-cell basis, the metabolism actually gets lower in larger animals than in smaller animals, but because the number of cells is so much higher, it adds up to a larger metabolic rate.
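As a rough worked example of that ¾ relationship (known as Kleiber’s law), using round numbers chosen purely for illustration: an animal 100 times heavier than another has a total metabolic rate only about 32 times higher, because

    \left(\frac{m_{\text{large}}}{m_{\text{small}}}\right)^{3/4} = 100^{3/4} \approx 32

so each gram of the larger animal’s tissue burns energy at roughly a third of the smaller animal’s per-gram rate.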
Combine this with the interesting fact that all mammals have roughly the same average number of heartbeats in their lifespan. Tiny animals that live short lives have to beat their tiny hearts much faster to maintain blood flow through their blood vessels. Large animals can ship more blood a longer distance with more efficiency (thanks to a bigger pump), and thus the heart doesn’t have to beat as often to maintain the same blood pressure at the capillaries (where oxygen and carbon dioxide are exchanged with the cells). However, there is a limit there, too, as the physics of animal blood flow hit a wall at about the size of an elephant for land animals, or a blue whale for ocean-dwelling mammals.
Interestingly, the slower metabolic rate per cell in larger animals explains another limit. Ever wonder why large animals live longer than small ones? Well, that slow cellular metabolic rate means cells in large animals wear out slower than those in smaller animals, thus contributing to a longer overall life span. (There are obviously many details that I am leaving out, but read West’s book if you are curious to know more.)
Power Laws and Computing
So, let’s apply this line of thinking to another system in which a resource (data) must flow between agents (computers) at great scale. The Internet is an amazing example of such a system, and the application portfolio (and supporting infrastructure) of the average Fortune 2000 company is a great subset of it. As the scale of these systems increases, how might we expect the patterns of computing architecture to change with it?
We can start with the simple fact that two fundamental elements of distributed computing aren’t changing anytime soon: the time-per-instruction-executed on modern CPUs (which has levelled out in the last decade or so), and the speed of light. Anything we do to scale distributed computing will be limited by these constants. This means that a) the only way to gain more computing power is to add more processors, and b) operational latency will naturally suffer if you increase geographic distance between processors.
Thus, as computing power grows, each additional unit of capacity will deliver a smaller net gain in performance than the last. At some point, the gain in processing power will largely be offset by the loss in network performance, and the application won’t be able to scale effectively any further.
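A toy model makes the tradeoff visible. The two-term cost function and every constant below are my own simplification for illustration; they do not come from West or from any particular platform. The point is only the shape of the curve.

    # Toy model: a request needs `work_s` seconds of pure compute, which can be
    # split across `n_nodes` processors, but every extra node also adds
    # coordination round trips over the network.
    def response_time(work_s, n_nodes, rtt_s, trips_per_node=2):
        compute = work_s / n_nodes                    # compute parallelizes
        network = n_nodes * trips_per_node * rtt_s    # coordination does not
        return compute + network

    # The speed of light in fiber imposes a floor of roughly 1 ms of round-trip
    # time per 100 km of distance, no matter how good the hardware gets.
    for n in (1, 4, 16, 64, 256):
        print(n, round(response_time(1.0, n, rtt_s=0.005), 3))
    # Response time falls at first, bottoms out, then rises again as the
    # network term comes to dominate the shrinking compute term.

The constants don’t matter; what matters is that the compute term shrinks hyperbolically while the network term grows linearly, so there is always a point beyond which adding processors makes things worse rather than better.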
Edge and Data Centers Are Both Essential
How do you address that? Well, the first obvious answer might be to reduce the distance between compute nodes that depend on each other. The more “chatter” between nodes, the less network distance there should ideally be between them. So centralizing everything in data centers should be perfect, right?
It turns out that some of the highest-volume chatter in a distributed application is between the end nodes (things like mobile phones, laptops, and even digital sensors) and the servers that deliver data and user interfaces to them. For latency-sensitive applications, having everyone in the world reach one or two data centers leaves some percentage of the population with less than optimal performance.
OK, so distribute everything to computers as close as possible to those end nodes, you might think. Well, now you have the problem that the backend services that depend most on each other are talking over longer distances (and through more network hops, etc.). This may make the entire system less efficient.
There is a third option, however. Place as much as possible of the system that directly interfaces with end nodes on computers located as close to those end nodes as possible. Then place the services (and data) that are heavily dependent on one another in data centers (hopefully also distributed, but not as extensively). This creates a “trunk, limb, branch” model in which the direct interaction with end nodes is handled by local edge nodes, while the exchange of shared data, events, and other interactions between services is optimized in central computing locations (data centers).
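A rough back-of-the-envelope comparison shows why that hybrid placement wins. The hop counts and round-trip times below are invented for illustration (they are not measurements from any real system): a user-facing tier that exchanges 20 round trips with its clients, and a backend tier whose services exchange 50 round trips with each other.

    # Assumed round-trip times, in seconds, for three network distances.
    WAN, METRO, LAN = 0.080, 0.010, 0.001

    def total_latency(user_trips, user_rtt, backend_trips, backend_rtt):
        return user_trips * user_rtt + backend_trips * backend_rtt

    print("all in one data center:", round(total_latency(20, WAN,   50, LAN), 2))  # 1.65
    print("everything at the edge:", round(total_latency(20, METRO, 50, WAN), 2))  # 4.2
    print("trunk, limb, and branch:", round(total_latency(20, METRO, 50, LAN), 2))  # 0.25

Placing the chatty user-facing work on edge nodes and the chatty service-to-service work in a (regional) data center keeps both kinds of traffic over short distances, which neither extreme manages on its own.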
This mimics what happens in nature in very real ways, and it is why I believe the future architectures of enterprise and commercial systems will follow this pattern (though it will morph a little from this strict interpretation). We’ll see massive growth in edge computing because of it (though anyone using an application or content delivery network is already doing it). And you’ll see the data center business continue, though more and more of that business is likely to move to public cloud providers (with a few very large-scale exceptions).
While I assume that just about any company with distributed physical operations has some form of edge computing today, I would be surprised if more than 30% or so actually have an edge computing strategy. We all should have one, however, as this is a natural evolution of the world’s great digital complex adaptive system: Internet-based computing.
About the Author
James Urquhart is a Global Field CTO with VMware Tanzu. Mr. Urquhart brings over 25 years of experience in distributed applications development, deployment, and operations, focusing on software as a complex adaptive system, cloud native applications and platforms, and automation. Mr. Urquhart has also written and spoken extensively about software agility and the business opportunities it affords. Prior to joining VMware, Mr. Urquhart held leadership roles at Pivotal, AWS, SOASTA, Dell, Cisco, Cassatt, Sun and Forte Software. Mr. Urquhart was also named one of the ten most influential people in cloud computing by both the MIT Technology Review and the Huffington Post, and is a former contributing author to GigaOm and CNET.
"flow" - Google News
May 08, 2020 at 07:20PM
https://ift.tt/2YNYt6a
Edge Computing and Flow Evolution - InfoQ.com
"flow" - Google News
https://ift.tt/2Sw6Z5O
https://ift.tt/2zNW3tO
Bagikan Berita Ini
0 Response to "Edge Computing and Flow Evolution - InfoQ.com"
Post a Comment