Windows Server 2012 from an Architecture Point of View


Customers often ask me and my colleagues for innovative solutions based on proven technology. While that sounds like a contradictio in terminis, we always deliver. The key behind these successes is researching and implementing new technology at an early stage. From there, we combine these new technologies, where appropriate, with our proven architectural principles and project methods.

One of the new technologies on the horizon is Windows Server 2012, the server flavor of Windows 8. At this moment the product has reached the Release Candidate stage and will be released to customers soon. We’ve been looking at and deploying Windows Server 2012 since it was still Windows Server “8”.

In this blog post we’ll take a look at Windows Server 2012 from an architecture point of view and examine whether the architectural principles that form the foundation of our implementations still hold.

When we look at current trends in IT, like Bring Your Own (BYO), Consumerization of IT (CoIT) and Anyplace, Anytime, Any device (Any*), a bigger trend can be spotted. Lew Tucker (CTO at Cisco) describes this trend crystal clearly in his presentation ‘The Time Is Now’. Although I don’t agree with his opinion 100%, the contents of slide 6 are carved into my memory:

[Figure: slide 6 from Lew Tucker’s presentation ‘The Time Is Now’]

This figure illustrates the transition occurring in IT today. For the last couple of years, IT departments have deployed increasingly powerful systems in increasingly complex environments to deliver highly available applications. These organizations have embraced the Enterprise Approach.

On the other side of the spectrum, suppliers and customers took the high road, where they used vast quantities of cheap, highly standardized systems to offer information, regardless of the location and platform of the people consuming it (with or without applications).

Let’s analyze the slide point by point:

Disadvantages of scale-up as your model

Hardware suppliers like HP and Dell offer systems like the ProLiant DL980 and PowerEdge R910. These machines, with over 100 logical processors and terabytes of RAM, are real workhorses. You can confidently deploy them as data crunchers and database servers. Windows Server has been able to run on this type of machine for years; Windows Server 2008 R2 already supported 256 logical processors, as this screenshot by Intel clearly shows.

Yet this type of hardware also has disadvantages. It is naïve to think it scales linearly. These systems, for instance, use the same type of RAM (DDR3) as our current desktops and lower-spec’d servers. Network interfaces, even at 10 Gb/s, don’t keep pace with the processing power of these machines either. The result of this combination is long startup and boot times and slow memory allocation.

In the same 8U of rack height you would allocate to one of these systems, you might be better off with eight PowerEdge R620s or eight ProLiant DL360s. Doing so switches your architectural principle from scale-up to scale-out. Windows Server 2012 helps with managing large quantities of servers. Server Manager Remoting is the perfect example and embodies the “The power of many servers, the simplicity of one” tagline: from one Server Manager instance you can install and configure Server Roles (like Active Directory Domain Services) and Server Features (like WINS) on multiple Windows Server 2012 installations.
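To give an idea of what this multi-server deployment looks like, here is a minimal sketch using the PowerShell cmdlets that ship with Windows Server 2012; the server names are placeholders, not systems from this article.

```powershell
# Install the AD DS role and the WINS feature on several remote servers
# from a single management machine. Server names are placeholders.
$servers = 'SRV01', 'SRV02', 'SRV03'

foreach ($server in $servers) {
    # Install-WindowsFeature is the Windows Server 2012 successor to
    # Add-WindowsFeature and accepts a remote -ComputerName.
    Install-WindowsFeature -Name AD-Domain-Services, WINS `
                           -ComputerName $server `
                           -IncludeManagementTools
}
```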

This architectural principle is also the reason Task Manager in Windows Server 2012 shows only 64 logical processors: Microsoft expects most systems running Windows Server 2012 not to have more than 64 logical processors.

Fail-safe or safe-fail

Lots of enterprise environments are based on the principle of fail-safe. In these environments every part of a complex chain is over-engineered to safeguard it from failing. The result, however, is hardly ever a flawless chain, since the weakest link determines the strength of the chain. Often, this way of thinking results in ridiculously over-engineered and highly redundant IT that has lost all contact with the business and still is not 100% available.

Adopting safe-fail as an architectural principle results in a different IT solution: one where individual parts are safe to fail, or are even anticipated to fail. The resulting degradation is only marginal, so the business would not notice. This way, organizations can create fault-tolerant IT systems that self-heal from system faults and failures of hardware components.

In earlier Windows Server versions, Microsoft focused on Clustering Services. This functionality allowed IT departments to create active-passive IT functionality, which brings distinct advantages in terms of availability. More and more, Microsoft is introducing active-active clustering into its products and technologies. File servers running Windows Server 2012 can be used in an active-active setup, so both file server cluster nodes can serve the same shared information simultaneously.
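As a rough sketch of one way to set this up, using the Scale-Out File Server role on an existing Windows Server 2012 failover cluster; the role name, share name, path and group below are placeholders, not values from this article.

```powershell
# On an existing Windows Server 2012 failover cluster, add the active-active
# Scale-Out File Server role and publish a continuously available share.
# All names and paths below are placeholders.
Import-Module FailoverClusters

Add-ClusterScaleOutFileServerRole -Name 'SOFS01'

# Place the share on a Cluster Shared Volume so every node can serve it.
New-SmbShare -Name 'AppData' `
             -Path 'C:\ClusterStorage\Volume1\AppData' `
             -FullAccess 'CONTOSO\File-Admins' `
             -ContinuouslyAvailable $true
```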

It’s the employees, not the applications

Many thought leaders talk about application delivery these days. In their minds, the productivity of an employee revolves around the time they can spend in these applications.

I share the opinion that businesses can benefit from employees having access to their applications from any place, on any device, at any time. From a technology point of view, all the ingredients are already here. However, entering vast amounts of data into a Win32 application from a touch device is another matter. Most of the time, Anyplace, Anytime, Any device does not result in a consistent experience for employees.

With its “three screens and the cloud” vision, Microsoft set out to streamline the end-user experience. Elements from the Windows Phone Metro interface therefore also show up in Windows 8, Windows Server 2012, the Xbox 360, Bing and even Facebook:

[Figure: Metro interface elements as they appear across these products]

This vision goes beyond the information-centric point Lew Tucker makes and is perhaps already more of a Web 2.0 phenomenon.

In many organizations, employees already utilize their own hardware, bandwidth, communication devices, power, online storage and off-hours to take on their business challenges. Nowadays we call this BYO. Information, and efficient access to that information, are the only two things these organizations have to worry about. Using Dynamic Access Control (DAC) in Windows Server 2012, businesses can configure this access with more granularity than ever before: not just based on username and group membership, but on any attribute in Active Directory and on file classification. Of course, this technology extends into Active Directory Rights Management Services (RMS).
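To give an idea of the building blocks involved, a hedged sketch using the Active Directory cmdlets that ship with Windows Server 2012; the claim, rule and policy names are invented for illustration, and the central access rule itself is assumed to have been authored already (for example in Active Directory Administrative Center).

```powershell
# Illustrative DAC building blocks; all names below are made up.
Import-Module ActiveDirectory

# Publish a user claim type sourced from the existing 'department' attribute,
# so it can be used in access conditions alongside groups and classifications.
New-ADClaimType -DisplayName 'department' -SourceAttribute 'department'

# Bundle an already-authored central access rule (here assumed to be named
# 'Finance Documents Rule') into a central access policy. The policy is then
# targeted at file servers through Group Policy.
New-ADCentralAccessPolicy -Name 'Finance Policy'
Add-ADCentralAccessPolicyMember -Identity 'Finance Policy' -Members 'Finance Documents Rule'
```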

In the future, applications and platforms will become even more intertwined in terms of experience. Facebook and Bing can both be seen as platforms and as applications. Making the Office apps a built-in part of Windows RT (Windows on ARM) is another step in the direction of vertical experiences.

Commodity systems

There is a good reason why many organizations are getting rid of their mainframes. These powerhouses cost truckloads of money compared to x86-based systems in terms of depreciation, maintenance and power consumption. Switching to commodity systems is a logical choice. Although organizations face (migration) pains in the short run, in the long run this switch pays for itself.

With commodity systems like Dell’s PowerEdge R620 and server virtualization, practically all standard datacenter functionality can be offered effectively. Standardizing hardware, software and processes leads to vast cost reductions. Microsoft offers a solution here: for the majority of organizations, with Windows-standardized environments, Hyper-V in Windows Server 2012 offers the same functionality as VMware’s vSphere, at a fraction of the cost.
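As a small sketch of how little it takes to get started on such a commodity host, using the in-box Hyper-V cmdlets; the VM name, paths, sizes and the virtual switch are placeholders (the switch is assumed to exist already, e.g. created with New-VMSwitch).

```powershell
# Enable the Hyper-V role on a commodity host, then create and start a VM.
# VM name, paths and sizes are placeholders.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

New-VM -Name 'FS01' `
       -MemoryStartupBytes 4GB `
       -NewVHDPath 'D:\VMs\FS01\FS01.vhdx' `
       -NewVHDSizeBytes 60GB `
       -SwitchName 'External'   # assumes an external virtual switch exists

Start-VM -Name 'FS01'
```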

NIC Teaming is another great feature of Windows Server 2012. It allows you to bundle Network Interface Cards (NICs) for bandwidth or redundancy from within the operating system, and allows for greater flexibility when deploying commodity systems.
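A minimal sketch, assuming a server with two onboard adapters; the team name, interface names and teaming settings are placeholders you would adapt to your own hardware and switches.

```powershell
# Team two NICs for bandwidth aggregation and redundancy.
# Check your actual interface names with Get-NetAdapter first.
New-NetLbfoTeam -Name 'Team1' `
                -TeamMembers 'Ethernet', 'Ethernet 2' `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm TransportPorts
```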

You’ll be surprised by Microsoft’s stack when you also install and configure System Center 2012 on top of Windows Server 2012. The combination of these two products allows business users themselves to create, configure and even phase out virtual servers, desktops and applications, while the IT department behind the scenes solves the availability, security, compliance, auditing and even chargeback challenges with ease. With these two products, Elastic IT is within reach.

 

Concluding

Microsoft has a reputation as an enterprise software company. With Windows Server 2012, Microsoft joins the ranks of companies whose products can be implemented based on the web approach.

One Response to Windows Server 2012 from an Architecture Point of View


    Hi,

    Regarding your statement "It’s not a problem, since the Hyper-V Guest is run in the memory of two Hyper-V hosts at the same time. Failure of one of the nodes no longer leads to downtime of restarts of the Hyper-V guest. The business will not suffer."

    My understanding of Hyper-V Replica is taken from Aidan Finn's blog here:
    http://www.aidanfinn.com/?p=12147
    In it he explains that Hyper-V Replica is an asynchronous replication mechanism and therefore you would incur data loss and downtime if a Hyper-V host should fail. It is not an equivalent to VMware's Fault Tolerance feature; it's more like their Site Recovery Manager product. Can you clarify? Thanks.

    Regards,
    Mike

