Enabling Linux Support on Windows Server 2012 R2 Hyper-V

This post is part of the nine-part "What's New in Windows Server & System Center 2012 R2" series that is featured on Brad Anderson's _In the Cloud_ blog. Today's blog post covers Linux support on Windows Server 2012 R2 and how it applies to Brad's larger topic of "Transform the Datacenter". To read that post and see the other technologies discussed, read today's post: "What's New in 2012 R2: Enabling Open Source Software."

The ability to provision Linux on Hyper-V and Windows Azure is one of Microsoft's core efforts towards enabling great Open Source Software support. As part of this initiative, the Microsoft Linux Integration Services (LIS) team pursues ongoing development of enlightened Linux drivers that are directly checked in to the Linux upstream kernel, thereby allowing direct integration into upcoming releases of major distributions such as CentOS, Debian, Red Hat, SUSE and Ubuntu.

The Integration Services were originally shipped as a download from Microsoft's sites. Linux users could download and install these drivers and contact Microsoft for any requisite support. As the drivers have matured, they are now delivered directly through the Linux distributions. Not only does this approach avoid the extra step of downloading drivers from Microsoft's site, but it also allows users to leverage their existing support contracts with Linux vendors.

For instance, Red Hat has certified the enlightened drivers for Hyper-V on Red Hat Enterprise Linux (RHEL) 5.9, and certification of RHEL 6.4 should be complete by summer 2013. This will allow customers to directly obtain Red Hat support for any issues encountered while running RHEL 5.9/6.4 on Hyper-V.

To further the goal of providing great functionality and performance for Linux running on Microsoft infrastructure, the following new features are now available on Windows Server 2012 R2 based virtualization platforms:

  1. Linux Synthetic Frame Buffer driver – Provides enhanced graphics performance and superior resolution for Linux desktop users.
  2. Linux Dynamic Memory support – Provides higher virtual machine density per host for Linux hosters.
  3. Live Virtual Machine Backup support – Provides uninterrupted backup support for live Linux virtual machines.
  4. Dynamic expansion of fixed sized Linux VHDs – Allows expansion of live mounted fixed sized Linux VHDs.
  5. Kdump/kexec support for Linux virtual machines – Allows creating kernel dumps of Linux virtual machines.
  6. NMI (Non-Maskable Interrupt) support for Linux virtual machines – Allows delivery of manually triggered interrupts to Linux virtual machines running on Hyper-V.
  7. Specification of the Memory Mapped I/O (MMIO) gap – Provides fine-grained control over available RAM for virtual appliance manufacturers.

All of these features have been integrated into SUSE Linux Enterprise Server 11 SP3, which can be downloaded from the SUSE website (https://www.suse.com/products/server/). In addition, integration work is in progress for the upcoming Ubuntu 13.10 and RHEL 6.5 releases.

 Further details on these new features and their benefits are provided in the following sections:

**1. Synthetic Frame Buffer Driver**

The new synthetic 2D frame buffer driver provides solid improvements in graphics performance for Linux virtual machines running on Hyper-V. Furthermore, the driver provides full HD mode resolution (1920x1080) capabilities for Linux guests hosted in desktop mode on Hyper-V.

One other noticeable impact of the Synthetic Frame Buffer Driver is the elimination of the double cursor problem. While using desktop mode on older Linux distributions, several customers reported two visible mouse pointers that appeared to chase each other on screen. This distracting issue is now resolved through the synthetic 2D frame buffer driver, thereby improving the visual experience for Linux desktop users.

**2. Dynamic Memory Support**

The availability of dynamic memory for Linux guests provides higher virtual machine density per host. This will bring huge value to Linux administrators looking to consolidate their server workloads using Hyper-V. In-house test results indicate a 30-40% increase in server capacity when running Linux machines configured with dynamic memory.

The Linux dynamic memory driver monitors the memory usage within a Linux virtual machine and reports it back to Hyper-V on a periodic basis. Based on the usage reports, Hyper-V dynamically orchestrates memory allocation and deallocation across the various virtual machines being hosted. Note that the user interface for configuring dynamic memory is the same for both Linux and Windows virtual machines.

The Dynamic Memory driver for Linux virtual machines provides both Hot-Add and Ballooning support and can be configured using the Startup RAM, Minimum RAM and Maximum RAM parameters as shown in Figure 1.

Upon system start the Linux virtual machine is booted up with the amount of memory specified in the Startup RAM parameter.

If the virtual machine requires more memory, then Hyper-V uses the Hot-Add mechanism to dynamically increase the amount of memory available to the virtual machine.

On the other hand, if the virtual machine requires less memory than allocated, then Hyper-V uses the ballooning mechanism to reduce the memory available to the virtual machine to a more appropriate amount.
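In addition to the UI shown in Figure 1, these parameters can be set from PowerShell using the built-in Set-VMMemory cmdlet. The following is a minimal sketch; the virtual machine name is hypothetical, and the values mirror the test configuration described later in this section:

# Enable dynamic memory on a Linux VM (run while the VM is powered off)
Set-VMMemory -VMName "LinuxVM01" -DynamicMemoryEnabled $true -StartupBytes 786MB -MinimumBytes 500MB -MaximumBytes 8GB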

_Figure 1 Configuring a Linux virtual machine with Dynamic Memory_

An increase in virtual machine density is an obvious advantage of using dynamic memory. Another great application is the use of dynamic memory in scaling application workloads. The following paragraphs illustrate an example of a web server that was able to leverage dynamic memory to scale operations in the face of an increasing client workload.

For illustrative purposes, two Apache servers hosted within separate Linux virtual machines were set up on a Hyper-V server. One of the Linux virtual machines was configured with a static RAM of 786 MB whereas the other Linux virtual machine was configured with dynamic memory. The dynamic memory parameters were set up as follows: Startup RAM was set to 786 MB, Maximum RAM was set to 8 GB and Minimum RAM was set to 500 MB. Next, both Apache servers were subjected to a monotonically increasing web server workload through a client application hosted in a Windows virtual machine.

Under the static memory configuration, as the Apache server becomes overloaded, the number of transactions/second that can be performed by the server continues to fall due to high memory demand. This can be observed in Figure 2 and Figure 3. Figure 2 represents the initial warm-up period when there is ample free memory available to the Linux virtual machine hosting Apache. During this period the number of transactions/second is as high as 103 with an average latency/transaction of 58 ms.

_Figure 2 Server and client statistics during the initial warm-up period for the Linux Apache server configured with static RAM_

As the workload increases and the amount of free memory becomes scarce, the number of transactions/second drops to 32 and the average latency/transaction increases to 485 ms. This situation can be observed in Figure 3.

_Figure 3 Server and client statistics for an overloaded Linux Apache server configured with static RAM_

Next, consider the example of the Apache server hosted in a Linux virtual machine configured with dynamic memory. Figure 4 shows that for this server the amount of available memory quickly ramps up through Hyper-V's hot-add mechanism to over 2 GB, and the number of transactions/second is 120 with an average latency/transaction of 182 ms during the warm-up stage itself.

_Figure 4 Server and client statistics during the startup phase of the Linux Apache server configured with Dynamic RAM_

As the workload continues to increase, over 3 GB of free memory becomes available and therefore the server is able to sustain the number of transactions/second at 130 even though average latency/transaction increases to 370 ms. Notice that this memory gain can only be achieved if there is enough memory available on the Hyper-V host. If the Hyper-V host memory is low, then any demand for more memory by a guest virtual machine may not be satisfied and applications may encounter no-free-memory errors.

_Figure 5 Overloaded Linux Apache server configured with Dynamic RAM_

**3. Live Virtual Machine Backup Support**

A much requested feature from customers running Linux on Hyper-V is the ability to create seamless backups of live Linux virtual machines. In the past, customers had to either suspend or shut down the Linux virtual machine to create backups. Not only is this process difficult to automate, but it also leads to an increase in downtime for critical workloads.

To address this shortcoming, a file-system snapshot driver is now available for Linux guests running on Hyper-V. Standard backup APIs available on Hyper-V can be used to trigger the driver to create file-system consistent snapshots of VHDs attached to a Linux virtual machine without disrupting any operations in execution within the virtual machine.

The best way to try out this feature is to take a backup of a running Linux virtual machine through Windows Server Backup. The backup can be triggered from the Windows Server Backup UI as shown in Figure 6. As can be observed, the live virtual machine labeled OSTC-Workshop-WWW2 is going to be backed up. Once the backup operation completes, a message screen similar to Figure 7 should be visible.
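The backup can also be scripted. The sketch below uses the Windows Server Backup PowerShell module and is illustrative only; it assumes the Windows Server Backup feature is installed and that a dedicated backup volume E: exists:

# Build a one-time backup policy containing the running Linux virtual machine
$policy = New-WBPolicy
$vm = Get-WBVirtualMachine | Where-Object { $_.VMName -eq "OSTC-Workshop-WWW2" }
Add-WBVirtualMachine -Policy $policy -VirtualMachine $vm

# Point the policy at the backup volume and run the backup
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -VolumePath "E:")
Start-WBBackup -Policy $policy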

_Figure 6 Using Windows Server Backup to back up a live Linux virtual machine_

_Figure 7 Completion of the backup operation for a live Linux virtual machine_

One important difference between the backups of Linux virtual machines and Windows virtual machines is that Linux backups are file-system consistent only, whereas Windows backups are file-system and application consistent. This difference is due to the lack of standardized Volume Shadow Copy Service (VSS) infrastructure in Linux.

**4. Dynamic Expansion of Live Fixed Sized VHDs**

The ability to dynamically resize a fixed sized VHD allows administrators to allocate more storage to the VHD while keeping the performance benefits of the fixed size format. The feature is now available for Linux virtual machines running on Hyper-V. It is worth noting that Linux file-systems are quite adaptable to dynamic changes in the size of the underlying disk drive. To illustrate this functionality, let us look at how a fixed sized VHD attached to a Linux virtual machine can be resized while it is mounted.

First, as shown in Figure 8, a 1 GB fixed sized VHD is attached to a Linux virtual machine through the SCSI controller. The amount of space available on the VHD can be observed through the df command as shown in Figure 9.

Figure 8 Fixed Sized VHD attached to a Linux virtual machine through the SCSI Controller

Figure 9 Space usage in the Fixed Sized VHD

Next, a workload is started to consume more space on the fixed sized VHD. While the workload is running, when the amount of used space goes beyond the 50% mark (Figure 10), the administrator may increase the size of the VHD to 2 GB using the Hyper-V Manager UI as shown in Figure 11.
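The same expansion can also be performed from PowerShell with the built-in Resize-VHD cmdlet. This is a minimal sketch; the VHD path is hypothetical, and for an online resize the disk must be attached through the virtual SCSI controller as described above:

# Expand the mounted data disk from 1 GB to 2 GB while the VM keeps running
Resize-VHD -Path "D:\VHDs\LinuxDataDisk.vhd" -SizeBytes 2GB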

Figure 10 Amount of used space goes beyond 50% of the current size of the Fixed Sized VHD

Figure 11 Expanding a Fixed Sized VHD from 1 GB to 2 GB

Once the VHD is expanded, the df command will automatically update the amount of disk space to 2 GB as shown in Figure 12. It is important to note that both the disk and the file-system adapted to the increase in size of the VHD while it was mounted and serving a running workload.

Figure 12 Dynamically adjusted df statistics upon increase in size of the Fixed Sized VHD

**5. Linux kdump/kexec Support**

One particular pain point for hosters running Linux on Windows Server 2012 and Windows Server 2008 R2 environments is that legacy drivers (as mentioned in KB 2858695) must be used to create kernel dumps for Linux virtual machines.

In Windows Server 2012 R2, the Hyper-V infrastructure has been changed to allow seamless creation of crash dumps using the enlightened storage and network drivers, and therefore no special configuration is required anymore. Linux users are free to dump core over the network or to the attached storage devices.

**6. NMI Support**

If a Linux system becomes completely unresponsive while running on Hyper-V, users now have the option to panic the system by using a Non-Maskable Interrupt (NMI). This is particularly useful for diagnosing systems that have deadlocked due to kernel or user mode components.

The following paragraphs illustrate how to test this functionality. As a first step, check whether any NMIs are pending in your Linux virtual machine by executing the command shown in Figure 13 in a Linux terminal session:

Figure 13 Existing NMIs issued to the Linux virtual machine

Next, issue an NMI from a PowerShell window using the command shown below:

Debug-VM -Name <Virtual Machine Name> -InjectNonMaskableInterrupt -ComputerName <Hyper-V host name> -Confirm:$False -Force

Next, check whether the NMI has been delivered to the Linux VM by repeating the command shown in Figure 13. The output should be similar to what is shown in Figure 14 below:

Figure 14 New NMIs issued to the Linux virtual machine

**7. Specification of the Memory Mapped I/O (MMIO) Gap**

Linux-based appliance manufacturers use the MMIO gap (also known as the PCI hole) to divide the available physical memory between the Just Enough Operating System (JeOS) that boots up the appliance and the actual software infrastructure that powers the appliance. Inability to configure the MMIO gap causes the JeOS to consume all of the available memory, leaving nothing for the appliance's custom software infrastructure. This shortcoming inhibits the development of Hyper-V based virtual appliances.

The Windows Server 2012 R2 Hyper-V infrastructure allows appliance manufacturers to configure the location of the MMIO gap. Availability of this feature facilitates the provisioning of Hyper-V powered virtual appliances in hosted environments. The following paragraphs provide technical details on this feature.

The memory of a virtual machine running on Hyper-V is fragmented to accommodate two MMIO gaps. The lower gap is located directly below the 4 GB address. The upper gap is located directly below the 128 GB address. Appliance manufacturers can now set the lower gap size to a value between 128 MB and 3.5 GB. This indirectly allows specification of the start address of the MMIO gap.

The location of the MMIO gap can be set using the following sample PowerShell script functions:

############################################################################
#
# GetVmSettingData()
#
# Get a VM's system settings data from the Hyper-V host server
#
############################################################################
function GetVmSettingData([String] $name, [String] $server)
{
    $settingData = $null

    if (-not $name)
    {
        return $null
    }

    $vssd = gwmi -n root\virtualization\v2 -class Msvm_VirtualSystemSettingData -ComputerName $server
    if (-not $vssd)
    {
        return $null
    }

    foreach ($vm in $vssd)
    {
        if ($vm.ElementName -ne $name)
        {
            continue
        }

        return $vm
    }

    return $null
}

###########################################################################
#
# SetMMIOGap()
#
# Description: Validate and set the MMIO gap on the Linux VM
#
###########################################################################
function SetMMIOGap([INT] $newGapSize)
{
    #
    # Get the VM settings ($vmName and $hvServer are assumed to be
    # defined at script scope)
    #
    $vssd = GetVmSettingData $vmName $hvServer
    if (-not $vssd)
    {
        return $false
    }

    #
    # Create a management object
    #
    $mgmt = gwmi -n root\virtualization\v2 -class Msvm_VirtualSystemManagementService -ComputerName $hvServer
    if (-not $mgmt)
    {
        return $false
    }

    #
    # Set the new MMIO gap size
    #
    $vssd.LowMmioGapSize = $newGapSize

    $sts = $mgmt.ModifySystemSettings($vssd.GetText(1))

    if ($sts.ReturnValue -eq 0)
    {
        return $true
    }

    return $false
}
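For completeness, the functions above could be invoked as follows. This usage sketch assumes that $vmName and $hvServer are the script-level variables referenced by SetMMIOGap, and that the gap size is specified in MB (consistent with the 128 MB to 3.5 GB range mentioned earlier); the names below are hypothetical:

$vmName = "LinuxApplianceVM"    # hypothetical virtual machine name
$hvServer = "localhost"         # Hyper-V host to manage

# Request a 512 MB lower MMIO gap
if (SetMMIOGap 512)
{
    Write-Output "MMIO gap successfully set"
}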

The location of the MMIO gap can be verified by searching for the keyword "pci_bus" in the post-boot dmesg log of the Linux virtual machine. The output containing the keyword should provide the start memory address of the MMIO gap. The size of the MMIO gap can then be verified by subtracting the start address from 4 GB represented in hexadecimal.
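As a worked illustration of that arithmetic (the start address below is hypothetical), if dmesg reports a pci_bus window beginning at 0xF8000000, the lower gap size comes out to 128 MB:

# Hypothetical MMIO gap start address taken from the guest's dmesg output
$gapStart = 0xF8000000

# Subtract from the 4 GB boundary and convert bytes to MB; prints 128
(4GB - $gapStart) / 1MB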

Summary

Over the past year, the LIS team added a slew of features to enable great support for Linux virtual machines running on Hyper-V. These features will not only simplify the process of hosting Linux on Hyper-V but will also provide superior consolidation and improved performance for Linux workloads. The team is now actively working with various Linux vendors to bring these features into newer distribution releases. The team is eager to hear customer feedback and invites any feature proposals that will help improve the Linux hosting experience on Hyper-V. Customers may get in touch with the team through linuxic@microsoft.com or through the Linux Kernel Mailing List (https://lkml.org/).

To see all of the posts in this series, check out the What's New in Windows Server & System Center 2012 R2 archive.