Categories of hybrid cloud: symmetric and asymmetric

The advantages of public cloud are well-known: self-serve, API-oriented resources and large scale allow enormous elasticity and agility. Compute and storage resources can be spun up and down quickly in response to changing needs, whether they be load spikes or simply the convenience of not paying for development machines on weekends. Public cloud resources are available across the world, making it easier to run applications where they are needed and to expand businesses quickly into new markets. And certain technologies, because of their scale and elasticity, are particularly suited to public cloud: big data and machine learning being obvious examples.

But not everything is appropriate for public cloud. Some hardware or applications may simply be unavailable in public cloud, or in the necessary geo-region. Regulation or corporate policy may restrict what can be run outside of a region or data center. Or latency requirements may force applications to be closer to customers, to plants or devices, or to databases or hardware that cannot be moved for other reasons. Sometimes it is just easier and less expensive to keep using existing hardware where it is (if it ain’t broke, don’t fix it).

The hybrid cloud model covers scenarios where code and applications run both in a hyperscale public cloud like Microsoft Azure and in a traditional data center or co-location facility. We see many common patterns for hybrid cloud. Dev and test are often done in public cloud before final deployment to a private data center. As mentioned above, the requirements for developer and test machines are often uneven: reduced in the evenings and over weekends, spiking when certain types of tests are running. Because development is usually not done against production, the security, compliance and latency needs may not be as severe, making it easier to use public cloud without upsetting the security team or auditors.

In talking with customers, partners, analysts and my colleagues in Azure, I find that the range of hybrid scenarios is so wide that it becomes confusing. The requirements for one hybrid scenario are so different from another that we end up talking about different things. It helps to break hybrid down into some patterns and categories.

Here I will not enumerate all the hybrid cloud patterns, but rather point out that all of them seem to fall into two broad categories, which are often conflated. In the first category, the same code is running in both the public cloud and the local datacenter. In the second category, different code is running in the two different places. These categories I call symmetric and asymmetric.

Dev/test, cloud bursting, and failover for availability are examples of symmetric hybrid patterns: the same code must be running in both places to enable these scenarios. Most people I talk to think first of symmetric patterns when discussing hybrid cloud.

Asymmetric scenarios are usually more complex, though cloud backup is a common simple use case. Another easily understood example is where, for compliance or hardware reasons, production data must stay in a local datacenter but can be appropriately queried from a cloud-hosted web interface. IoT edge scenarios are often extreme examples of asymmetric hybrids. If all the devices are talking directly to public cloud, that’s a very asymmetric edge scenario because the devices don’t resemble the cloud services at all. But often it makes sense for devices to talk to field gateways and local datacenters, where aggregation and filtering are done before cloud-appropriate data is sent to the public cloud for further analytics and longer-term storage. These field gateways start to look more like cloud servers, and in cases like Azure Stack may be very close…falling into the symmetric category.
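The aggregate-and-filter role of a field gateway can be sketched roughly like this. This is a minimal illustration of the pattern, not any particular IoT SDK; all the names and the threshold are hypothetical:

```python
# Hypothetical field-gateway step: reduce a batch of raw per-device readings
# to one compact, cloud-appropriate record before sending it upstream.
from statistics import mean

def summarize(readings, threshold=100.0):
    """Collapse raw sensor readings into a single cloud-bound summary record."""
    valid = [r for r in readings if r is not None]  # drop sensor dropouts locally
    if not valid:
        return None
    return {
        "count": len(valid),
        "mean": round(mean(valid), 2),
        "max": max(valid),
        "alerts": sum(1 for r in valid if r > threshold),  # forward only anomaly counts
    }

batch = [42.0, 41.5, None, 180.2, 43.1]
record = summarize(batch)
# Instead of five raw readings, one small record goes to the cloud:
# {'count': 4, 'mean': 76.7, 'max': 180.2, 'alerts': 1}
```

The interesting design point is that the gateway’s code looks nothing like the cloud-side analytics code, which is exactly what makes the scenario asymmetric.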

Of course, some scenarios have symmetric and asymmetric elements, with some code being the same in both places, and some different. It is also true that even if the same production code is running in two places, the control plane APIs may be different (also true of multi-cloud), so that symmetric usage patterns may have asymmetric operations.

Describing every hybrid pattern is beyond the scope of this blog post. In fact, many hybrid patterns are unknown and still being developed, especially asymmetric hybrids. Here I simply want to introduce this terminology, because we have found it useful and clarifying in our discussions with customers and among ourselves.

Lies, damned lies, and (bad) AIs

Along with the recent advances in machine learning have come a series of ethical and security concerns. For example, there is a whole body of ongoing research on corrupting training datasets in order to cause specific, incorrect inferences. If you haven’t seen it, glance over this paper where they were able to cause image recognition to mis-identify physical road signs by attaching stickers to them: Robust Physical-World Attacks on Deep Learning Models. Other papers have talked more generally about the problems of safety and security in machine learning: https://blog.acolyer.org/2017/11/29/concrete-problems-in-ai-safety/.

Meanwhile, all sorts of unfair, unpleasant, and even potentially deadly (think ML for diagnosis and treatment as just one example) forms of (presumably) unintentional bias have slipped into our ML models: The Ugly Truth About Ourselves and Our Robot Creations: The Problem of Bias and Social Inequity.

I am not a trained statistician, but my father taught statistics and social science for many years, and I learned about the pragmatic problems of statistics from him at the breakfast table. Dad would read the newspaper, come across a report of some new scientific study or poll, and call out all the problems with it: everything from lack of control groups, phrasing of questions, and insufficient sample size to more subtle statistical fallacies like Simpson’s Paradox. As I’ve said elsewhere, I had a bit of a weird childhood, but in this case it left me with both an appreciation for and a great amount of skepticism of statistics.
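For anyone who hasn’t run into Simpson’s Paradox: a treatment can win within every subgroup and still lose in the aggregate, simply because the subgroups have different sizes. The numbers below are the oft-cited kidney-stone treatment figures, used purely as an illustration:

```python
# Simpson's Paradox in a few lines: treatment A beats treatment B within
# every severity subgroup, yet loses overall, because A was given mostly
# to the harder (severe) cases.

def rate(successes, trials):
    return successes / trials

# (successes, trials) by case severity
a_mild, b_mild = (81, 87), (234, 270)        # A: 93% vs B: 87%
a_severe, b_severe = (192, 263), (55, 80)    # A: 73% vs B: 69%

assert rate(*a_mild) > rate(*b_mild)         # A wins on mild cases
assert rate(*a_severe) > rate(*b_severe)     # A wins on severe cases

# Pool the subgroups and the ranking flips:
a_overall = rate(81 + 192, 87 + 263)         # 273/350 ≈ 0.78
b_overall = rate(234 + 55, 270 + 80)         # 289/350 ≈ 0.83
assert b_overall > a_overall                 # ...but B wins overall
```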

In many ways, statistics are a way of compressing or summarizing a dataset, and almost by definition this is a lossy process: Same stats, different graphs: generating datasets with varied appearance and identical statistics through simulated annealing. But you have to ask yourself: how do you know what you lost in that summarization?
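Anscombe’s quartet, the ancestor of that paper, makes the lossiness concrete. Here are two of its y-series: they agree on mean and variance to two decimal places, but one is a noisy line and the other a clean parabola. The summary statistics simply lost the shape:

```python
# Two of Anscombe's quartet y-series: near-identical summary statistics,
# completely different shapes when plotted against x = 4..14.
from statistics import mean, variance

y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(round(mean(y1), 2), round(mean(y2), 2))          # 7.5 7.5
print(round(variance(y1), 2), round(variance(y2), 2))  # 4.13 4.13
```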

So, naively, it seems to me that we should not be surprised that machine learning, which is largely based on various statistical techniques, needs to be viewed with a similar degree of skepticism. One of the great discoveries of the past few years is that if you throw a ton of data and processing power at a problem, you can often get excellent results from relatively simple algorithms, whether that problem be vision, playing games like Chess and Go, or interpreting chest X-rays. These algorithms can find correlations and causation that would not occur to a human researcher.

Unfortunately, the outputs of these algorithms are often just big matrices full of unexplained numbers. And this is exactly where the issues above come into play: without any explanation for why an AI works as it does, how can we know if it is really correct?

Statisticians have traditionally addressed this by modeling. The model has served to frame the input, guide the computation, and explain the results of experiments and polls. Of course this risks restricting the results too much; as mentioned above, many of the most exciting results of ML have been unexpected and could not have been predicted beforehand by a model. But I think we do need to have explainable models as outputs of these algorithms. This is a very active area of research, and it will be fun to see how it advances in the next few years.

 

 

Top 10 developer reasons for procrastination

It is so easy, especially in recent years, to not get around to the code you mean to write. Without further ado, the top excuses:

Special Mention: “Just waiting for the tests to finish” — This one probably accounts for as much time as any, but the alternative (“Uh, I didn’t run the tests”) is worse. So I’m leaving it aside. You all know, though, that adding one comment is not an excuse to run 20 minutes of unit tests…again.

  1. “Meeting time”
  2. “E-mail/Slack/IRC”
  3. “Github (or internal source code control system) is down”
  4. “Installing…”
    • “Operating system updates”
    • “Visual Studio/Eclipse/Xcode”
    • “Latest SDK/Compiler/Runtime”
    • “Slack/Teams/IRC”
    • “Docker update”
  5. “Waiting for an answer on Stackoverflow”
  6. “The build is broken (not me!)”
  7. “Compiling”
  8. “Need to fix this one (low priority) bug first”
  9. “Just a quick peek at Hacker News” and of course…
  10. “Gonna need some caffeine before I start!”

btw — The answer for me today is “installing Visual Studio 2017!”

Living in a world of Science Fiction

Have any of you noticed that we now basically live in a world of Science Fiction? We’ve got electric cars and self-driving cars; we have private companies building spacecraft and planning trips to Mars; we have the Pluto probe; we have drones…when I drive home on the freeway at night, I see drones flying over Lake Washington. Look at the advances in machine learning, voice recognition, image recognition…you can log into your computer using your fingerprint or by having the computer recognize your face. We have virtual reality and augmented reality. And of course we have the smart phone, which is basically the old Star Trek communicator…except much more advanced.

I’ve talked to a lot of my friends, and even my young friends, ages 10 to 20, feel the rate of change has picked up in the last few years. I asked my 89-year-old mother what was the biggest technological change she had seen in her lifetime. “Indoor plumbing” she answered. But after that, she said it was the cell phone. My mom does not use a cell phone. But her little-old-lady friends and their kids and their kids all do, and mom thinks it has changed how people behave, how they interact with each other, more than any other technology she has seen in her lifetime.

About me (2017 edition)

I am an architect on the Azure Core team. Basically the Azure Core team writes the software that runs the Azure data centers, from low-level things like firmware for NICs to the basic fabrics for compute, networking and storage, up to the public APIs for these most basic layers of the cloud…the IaaS layers, basically. The APIs that let you create VMs, networks, blobs, and manage the infrastructure. The Azure Core team does build some verticals (like IoT), but generally, other teams in Azure build higher-level, more vertical services on top of our horizontals: unsurprisingly, the SQL team builds the Azure SQL service; another team in our database group builds our HDInsight Hadoop/Spark service; other teams build services for web sites and mobile backends; etc., all built on top of the low-level infrastructure services built by Azure Core.

So if Azure Core is the bottom of the stack, I work on the top of the bottom of the stack. I helped standardize our REST APIs, and was the architect for Azure Resource Manager, the latest version of our control plane APIs, as well as helping design the public APIs for some of our compute services. More recently, I have been focused on getting non-Microsoft technologies to work well on Azure: so I work with the Linux vendors to make sure Red Hat and Ubuntu work well on Azure, and I’ve worked with Puppet, Chef, Docker, Hashicorp, Pivotal and others so that those systems can be used with Azure.

 

Keeping up with the cloud

The pace of change in the cloud business is incredible. There is significant news about basic technology, products and the state of the business almost every day. It’s a lot of fun and makes for the most interesting job I’ve ever had. I try to spend a minimum of one hour in the morning just reading about the latest. Here’s a set of links I visit:

Hacker News is a little hipster at times, but something worthwhile almost every visit

DataTau is “Hacker News for data scientists” though far less active

Ars Technica covers a wide range of topics and has some of the best writers working in the business

Techmeme is a more mainstream aggregator

High Scalability regularly provides reports on interesting large scale systems, and their “Stuff the Internet says about scalability” reports always contain interesting links

I spend a lot of time working with containers and dev ops technologies, and for this the best daily source is The New Stack

The Register delivers great stories overlaid with a healthy (usually) degree of snark

For keeping track of Linux news, nothing beats LWN.net

Docker Scoop-It! aggregates Docker-related stories

Trending projects on Github is a good way to catch a new project early

In my current job I am highly focused on non-Microsoft technologies, but the easiest way to keep my finger on the pulse of .NET and related topics is The Morning Brew

And of course there is Scott Hanselman’s blog

With the rapid pace of change in cloud computing, big data systems and machine learning, the lines between basic research, applied research, advanced development and product development are very blurred. I find it very useful to keep track of the research literature in a way I haven’t since I left university (a *long* time ago).

The read I most look forward to every day is Adrian Colyer’s The Morning Paper.

Adrian introduced the concept of reviewing a paper a day here. As he says there, it is a cumulative thing. At first the papers (even Adrian’s summaries) can be tough going. But quickly the context builds. New papers start to refer to old papers you’ve already read and the connections start becoming apparent. Then suddenly, something you thought was theoretical and cutting edge comes up in a practical work problem. I highly recommend this investment of time.

If you enjoy Adrian’s stuff, you may also like Murat Demirbas’s blog

 

There are a lot of other sites in my bookmarks, but most of them either aren’t as relevant or don’t update as often. If you follow the above links every day, one or more of them will very often lead to new posts on other sites and blogs.

Introduction

How did famously cloudy, rainy Seattle get to be known as the home of Cloud Computing? The obvious answers are that Amazon and Microsoft are based here. But why did these two companies and this city become the pioneers of cloud computing when so much else in the tech world revolves around Silicon Valley and the many companies based there?

I can’t answer these questions, but I do have some thoughts. I grew up in the area, embedded in Seattle culture. My father lived here in the 1930s (which is not long by some standards, but means my family has lived here about half the time since Seattle’s founders arrived in 1851) and worked on the B-17 production line in the early 40s.

It seems to me the public cloud is a bizarre cross between the cultures of several Seattle-born companies, ultimately expressing themselves in others. Boeing is a huge engineering company operating at world scale. Nordstrom is famously service-obsessed and takes the “customer is always right” motto to an extreme.

Cloud computing brings to my mind two images: the service-obsessed staff at Nordstrom transformed into online experiences, and the giant airplane hangars of Boeing transformed into enormous data centers. The analogy is imperfect and almost certainly false: the real world is vastly more complex, and origin stories are famously applied retroactively. But it seems equally unlikely that the Amazon and Microsoft employees working on the Amazon Store and Microsoft Search in the early 2000s weren’t influenced, consciously or unconsciously, by the cultures and everyday presence of these iconic Seattle companies.

I don’t expect this blog to offer much more speculation in this vein. I intend to talk about my thoughts about working in the center of the cloud computing world, and the rapid changes I see every day in the technology and business. This is more a culture blog than a technical blog, but I will point out interesting technology I am working on as well as broader trends. We’ll see how it evolves. Thanks for reading this far!