In the turbulent race of digital transformation, the hybrid landscape of corporate IT is becoming increasingly unmanageable. Could the control plane of a hyperscaler provide a lasting remedy? And would it require special hardware?
Cloud hyperscalers are all rushing headlong into the corporate data center. Microsoft, however, enjoys a home advantage: from Office 365 applications to BI workloads to the operating system, the cloud giant has a presence in virtually every organization.
The software giant’s preferred route to a private cloud in the corporate data center leads via Azure services on certified hardware.
With Azure Stack Hub, previously known simply as Azure Stack, Microsoft has created an ecosystem of solutions for the “cloudification” of corporate IT resources. It consists of three main components:
- Azure Stack Hub: this Azure extension brings Microsoft Azure cloud services into the corporate data center. With Azure Stack Hub, companies can set up an autonomous Azure cloud on their own IT infrastructure in order to run cloud-native Azure apps locally while maintaining data sovereignty.
- Azure Stack Edge: this cloud-managed edge appliance is designed to facilitate the delivery of ML and IoT workloads.
- Azure Stack HCI: this hyperconverged infrastructure supports the delivery of scalable virtualization workloads and scalable storage for high-performance local workloads.
To realize this type of “cloudification,” Microsoft has chosen to work with strategic infrastructure partners. In addition to Dell and HPE, Avanade, Cisco, Fujitsu, Lenovo and Wortmann AG have had their own systems certified by Microsoft for Azure Stack. These companies build workload-optimized server, network and storage stacks on which, among other things, a limited version of Azure’s control software runs, with Microsoft’s blessing, on a managed appliance deployed on site in the data center.
HPE, for example, offers its ProLiant servers in an edition optimized for Microsoft Azure Stack. Azure Stack HCI runs on the HPE Apollo 4200 Gen10 in all-flash, hybrid SAS, SSD and NVMe configurations and, thanks to its high storage density, can handle massive big-data workloads with a comparatively small data center footprint.
Dell EMC has developed its own cloud-optimized, hyperconverged infrastructure for Azure. The offering features RDMA networking, persistent Intel Optane memory and integration with Microsoft Windows Admin Center and System Center Virtual Machine Manager. High-performance SSD and NVMe storage, advanced backup features and encryption services round out the package.
In turn, the Dell EMC Tactical System for Microsoft Azure Stack Hub enables companies to populate their edge locations.
Stumbling blocks alert
Azure’s data-center-local hub appliance is still subject to limitations. The Software Development Kit (SDK), for example, is only an evaluation version. Unlike the Azure public cloud, Kubernetes support on the hub requires a specialized Azure Kubernetes engine, which affected users report can take days to configure.
Creating and managing VMs in the hub comes with its own list of challenges. Among them: the appliance can only run first-generation VMs, and Microsoft has not yet provided any means of converting these compute instances into second-generation ones that could run in the Azure cloud.
The hub’s storage environment also brings a host of stumbling blocks. File storage either does not exist at all or can only be retrofitted with additional software from one of the certified Azure partners. Unlike Azure, hub storage cannot handle snapshot copies or storage tiers. Azure Active Directory authentication for storage is likewise still under development. And the list goes on.
Microsoft and the seven hubs
As a side effect, enthusiasm for the hub among large corporate Azure users has cooled for the time being.
Has Microsoft now concluded that an HCI appliance with the same Azure Stack branding would be easier to support, and could therefore serve both customers and partners better as an on-ramp to Azure public cloud services?
A partnership to design infrastructure components for instantiating hubs in the corporate data center has essentially resulted in seven different editions of the platform. While all partners provide basically the same Azure cloud services on the hub, each does so in its own way. This is hardly the last word in wisdom, because it forces Microsoft to continuously run a validation process on seven different vendor-specific platforms.
In parallel to the Azure Stack Hub, Microsoft also has an HCI (hyperconverged infrastructure) appliance with the same Azure Stack branding in its quiver. Confusingly, these systems do not bring Azure services into the data center; on the contrary, they extend a customer’s local compute environment into the Azure cloud, serving as a kind of fast track into Azure.
The Azure architecture is unique among public clouds, and Microsoft has been refining it continuously since its debut about a decade ago. Now Azure’s control layer is ready to move into the corporate data center, unforeseen stumbling blocks notwithstanding.
The control plane of the Azure cloud is based on the so-called Azure Fabric Controller. This software manages the provisioning and reclamation of resources, from virtual machines to database instances, from Hadoop to Kubernetes clusters, throughout the life of each deployment. Every time a resource such as a VM is provisioned, scaled, stopped or terminated, the process passes through the Fabric Controller.
Each resource in Azure continuously reports its current status to the Fabric Controller. Between the Fabric Controller and the resources sits a further abstraction layer, the Azure Resource Manager (ARM), which automates the resource life cycle.
Microsoft has developed a resource provider for each of the services executed in Azure. Azure users can declare the configuration of the required resources using an ARM template, a simple text file.
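What such an ARM template looks like can be sketched quickly. The following Python snippet assembles a minimal template for a single storage account and prints it as JSON; the parameter name, location and `apiVersion` are illustrative placeholders, not taken from the article.

```python
import json

# Minimal ARM template declaring one storage account.
# Parameter name, location and apiVersion are illustrative placeholders.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {"type": "string"}
    },
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2022-09-01",
            "name": "[parameters('storageAccountName')]",
            "location": "westeurope",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

print(json.dumps(template, indent=2))
```

Since the template only *declares* the desired end state, the same file can be deployed repeatedly; the corresponding resource provider reconciles what actually exists against what is declared.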
At the heart of Microsoft’s strategy for moving Azure into a data center near you is a management service called Azure Arc. With Azure Arc, Microsoft has extended support for the Azure Resource Manager (ARM) to resources located outside its Azure data centers, including Linux and Windows servers and entire Kubernetes clusters in multi-cloud and edge environments.
To the Fabric Controller, a physical Windows or Linux server running in a corporate data center or at the edge looks like a native Azure resource. Whether the server is behind a corporate firewall and proxy is irrelevant; as long as it is running the Azure agent, it can be controlled through the Azure control layer.
A Kubernetes cluster managed by Azure Arc then looks as if it belonged to the Azure Kubernetes Service (AKS). Even VMs running on VMware vSphere, Amazon EC2 or Google Compute Engine can be registered with the Azure Resource Manager and managed like native Azure resources.
Azure Arc thus functions as an extension of the Azure control layer for the corporate data center and the edge.
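The underlying idea can be illustrated with a deliberately simplified toy model (this is not the real Azure API): one control plane keeps a registry of resources, and once an external machine has registered via its agent, it is tracked and managed exactly like a native one.

```python
from dataclasses import dataclass, field

# Toy model of the Azure Arc idea: external machines register with the same
# control plane as native resources and are then managed uniformly.
# Class and resource names are invented for illustration only.

@dataclass
class ControlPlane:
    resources: dict = field(default_factory=dict)

    def register(self, name: str, origin: str) -> None:
        # origin "azure" marks a native resource; anything else is Arc-enabled
        self.resources[name] = {"origin": origin, "status": "registered"}

    def report_status(self, name: str, status: str) -> None:
        # every resource reports status the same way, wherever it runs
        self.resources[name]["status"] = status

arm = ControlPlane()
arm.register("vm-native-01", origin="azure")
arm.register("onprem-server-17", origin="on-premises")  # behind the firewall
arm.register("ec2-legacy-app", origin="aws")

arm.report_status("onprem-server-17", "running")

# All three resources are visible through one management surface.
print(sorted(arm.resources))
```

The point of the sketch is the single registry: whether a resource lives in Azure, on premises or in another cloud changes its `origin` label, not the way it is managed.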
With Azure Lighthouse, Microsoft went even a step further.
Light it up, baby, in Azure blue
Azure Lighthouse enables Microsoft’s partners to customize the Azure user experience with their own services. Wolfgang Grausam, Vice President for Managed Cloud Services for Microsoft Cloud at T-Systems, sums it up: “Azure Lighthouse is an innovation that simplifies our processes and helps hundreds of corporate customers transform their business.”
T-Systems offers Azure services in a public, private or hybrid cloud offering. This path is also generally open to other data center operators.
With Azure Lighthouse, colocation data centers, system integrators and other service providers can handle cross-customer management of infrastructure modules in large-scale deployments. Lighthouse can manage and automate any infrastructure, from the data center to the edge, location-agnostically from one central place. The real highlight, however, is the ability for providers to differentiate their services through operational efficiency or their degree of automation.
Steve Tack, SVP Product Manager at Dynatrace, a provider of cloud monitoring systems founded in Austria and headquartered in the US state of Massachusetts, explains the advantages from his perspective: “The combination of Azure Lighthouse and proprietary Dynatrace features such as management zones, into which Lighthouse integrates, offers differentiated access control in company-wide microservice and container environments with a single credential.” Dynatrace management zones provide an information-partitioning mechanism designed to facilitate collaboration and the sharing of relevant team-specific data while enforcing secure access control.
With Azure Lighthouse, a service provider can manage the resources of several customers from within its own Azure AD tenant, i.e. the representation of an organization in Azure Active Directory. Most management tasks and services can be performed on delegated Azure resources across these managed tenants in exactly this way.
This approach enables the logical projection of resources from one organization, i.e. one Azure AD tenant, into another. Authorized users in the service provider’s tenant can then perform management operations across different Azure AD tenants on behalf of the associated Azure enterprise customers.
Service providers can, for example, perform the required management operations in delegated customer subscriptions and resource groups without having an account in the customer’s tenant. Azure logs all of the service provider’s activities in the activity log of the managed tenant; users there can inspect this log to identify the operations performed.
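The delegation itself is declared as an ARM resource of type `Microsoft.ManagedServices/registrationDefinitions`, which the customer deploys in its own tenant. The following sketch builds such a definition in Python; the tenant and principal GUIDs are placeholders, the `apiVersion` is illustrative, and the `roleDefinitionId` shown is Azure’s built-in Contributor role.

```python
import json
import uuid

# Built-in Contributor role (fixed GUID across Azure tenants)
CONTRIBUTOR_ROLE_ID = "b24988ac-6180-42a0-ab88-20f7382dd24c"

# Hedged sketch of a Lighthouse registration definition. The customer deploys
# this in its own tenant to delegate resources to the provider's tenant.
# managedByTenantId and principalId are placeholder GUIDs.
registration_definition = {
    "type": "Microsoft.ManagedServices/registrationDefinitions",
    "apiVersion": "2019-09-01",
    "name": str(uuid.uuid4()),  # definition names are GUIDs
    "properties": {
        "registrationDefinitionName": "Managed by Example MSP",
        "description": "Delegated resource management via Azure Lighthouse",
        "managedByTenantId": "11111111-1111-1111-1111-111111111111",
        "authorizations": [
            {
                "principalId": "22222222-2222-2222-2222-222222222222",
                "principalIdDisplayName": "MSP Operators",
                "roleDefinitionId": CONTRIBUTOR_ROLE_ID,
            }
        ],
    },
}

print(json.dumps(registration_definition, indent=2))
```

Note that the authorization grants a role to a principal in the *provider’s* tenant; no account is ever created in the customer’s directory, which is precisely the projection mechanism described above.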
The various functions of Azure Lighthouse apply consistently across marketplace offerings, Azure services, APIs and even licensing models. The concept could best be summed up as “reverse managed colocation in the as-a-service delivery model, with value-added services.”
The use of Azure Lighthouse is free of charge; only the underlying services, such as Azure Monitor Log Analytics or Security Center, incur the usual usage fees.
The engineers from Redmond also have a solution for connecting customer sites to Azure, aptly named Azure ExpressRoute. It provides a private connection to Microsoft Azure, Office 365 and Dynamics 365 via dedicated infrastructure from a network provider such as EdgeConneX, Inc.
All communication via Azure ExpressRoute draws a wide arc (no pun intended) around the public Internet, which benefits both performance and data security.