WARNING: ESXi 10-15-2014 patches disconnect VMs with an E1000 NIC from the network

Do not install ESXi updates with a release date of 15 October 2014. Virtual machines with E1000 network interface cards will disconnect from the network.

This issue has been reported by multiple VMware customers in this VMware communities thread.

To resolve the issue, roll back to a previous version of ESXi. This can be done by rebooting the ESXi host and pressing Shift+R during boot. The procedure is explained here.

After installing the updates, the build is reported as ESXi 5.5.0 build 2143827.

The working build, based on ESXi 5.5.0 Update 2, is 2068190.
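If you want to check which build your hosts are running and which virtual machines would be affected, a small pyVmomi script such as the sketch below can do it. This is only an illustration and not from the patch documentation: the vCenter name, credentials and the unverified SSL context are placeholders you will need to adapt.

```python
#!/usr/bin/env python
# Minimal pyVmomi sketch (illustrative only): report the build of every ESXi
# host and list VMs that still use an E1000/E1000e adapter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
content = si.RetrieveContent()

# Build numbers of interest: 2068190 = 5.5 Update 2,
# 2143827 = the 15 October 2014 patch level discussed above.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in hosts.view:
    print(host.name, host.summary.config.product.build)

# VMs with at least one E1000 or E1000e network adapter.
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    if vm.config is None:
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, (vim.vm.device.VirtualE1000,
                            vim.vm.device.VirtualE1000e)):
            print(vm.name, type(dev).__name__)

Disconnect(si)
```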

The image below shows the affected patches.

[Image: esxi-updates-break-e1000]

Oracle wants customers to license multiple vSphere 5.1 clusters when managed by a single vCenter Server

Oracle is notorious for its licensing. The company is very unclear about how its software needs to be licensed correctly, which leaves room for multiple interpretations. Oracle takes advantage of that in many cases by claiming the customer did not license the Oracle software in the correct way. Part of its business model is to generate additional income from such claims.

Many customers are unaware of the details, are afraid of court cases, or do not want the hassle, and simply settle the claim.

Recently Oracle Germany announced that all CPUs in all ESXi hosts in multiple clusters managed by the same vCenter Server need to be licensed for Oracle. The reason for this change is that since the introduction of vSphere 5.1, virtual machines can be vMotioned to other clusters.

[Image: oracle-vsphere51]

So before vSphere 5.1, Oracle claimed customers needed to license all CPUs of the VMware cluster, even when Oracle VMs never ran on all of that cluster's hosts.

Now Oracle even goes a step further.

The new Oracle statement was published on the website of the German Oracle Users Group (DOAG) on September 18.

These statements by Oracle are legally incorrect. Not a single official Oracle document clearly states that a host CPU requires licensing when Oracle software has never run on it.

Dave Welch, CTO of House of Brick, an Oracle consultancy firm in the US, wrote a blog about Oracle licensing and mentioned the new statement.

There is a thread on the VMware Communities about this as well.

In the past I wrote many blog posts on Oracle licensing. See here, here and here.

If you are in doubt, make sure to contact a really independent Oracle licensing expert.

VMware will disable Transparent Page Sharing by default in future ESXi releases

VMware ESXi has an interesting feature called Transparent Page Sharing (TPS). TPS deduplicates host memory. Typically, virtual machine guest operating systems load a lot of identical code. TPS scans host memory for duplicate pages and keeps only a single instance in memory, while the memory pointers of the guests reference that one instance.

The effect is savings in host memory and a higher VM density. The result is lower costs per virtual machine.
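To get an idea of how much memory TPS is actually saving in your environment, vSphere exposes a sharedMemory quick stat per virtual machine. The pyVmomi sketch below queries it; the connection details are placeholders and the script is just an illustration, not something VMware ships.

```python
# Minimal pyVmomi sketch (illustrative only): print how much memory each VM
# currently shares through TPS, based on the sharedMemory quick stat.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=context)
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in vms.view:
    stats = vm.summary.quickStats
    # sharedMemory is reported in MB; higher values mean more guest pages are
    # deduplicated by TPS on the host.
    print(f"{vm.name}: {stats.sharedMemory} MB shared of "
          f"{vm.summary.config.memorySizeMB} MB configured")

Disconnect(si)
```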

However, VMware announced that it will disable TPS by default in future ESXi releases because of security concerns.

VMware has released a knowledge base article saying:

This article acknowledges the recent academic research that leverages Transparent Page Sharing (TPS) to gain unauthorized access to data under certain highly controlled conditions and documents VMware’s precautionary measure of no longer enabling TPS in upcoming ESXi releases. At this time, VMware believes that the published information disclosure due to TPS between virtual machines is impractical in a real world deployment.
Published academic papers have demonstrated that by forcing a flush and reload of cache memory, it is possible to measure memory timings to try and determine an AES encryption key in use on another virtual machine running on the same physical processor of the host server if Transparent Page Sharing is enabled. This technique works only in a highly controlled system configured in a non-standard way that VMware believes would not be recreated in a production environment.
Even though VMware believes information being disclosed in real world conditions is unrealistic, out of an abundance of caution upcoming ESXi Update releases will no longer enable TPS between Virtual Machines by default.

 

Andrea Mauro published a very well-written blog about TPS, explaining some other caveats as well.

This paper explains the security concerns of using TPS in detail. The abstract of the paper reads:

 

[Image: TPS]

VMworld 2014 Europe announcements

This is a summary of the announcements made at VMworld Europe in Barcelona during the keynote on Tuesday, October 14.

The recording of the keynote can be seen here.

The announcements made at VMworld 2014 US can be read in this post.

Tuesday October 14 announcements

  • HP and Hitachi will soon deliver EVO:RAIL systems as well. The HP product is called HP ConvergedSystem 200-HC
  • VMware vCloud Air will be available in a Germany-based datacenter
  • vRealize CodeStream announced
  • vRealize Air Compliance announced: a new SaaS-based tool to quickly report on the configuration compliance of a vSphere infrastructure and take proactive action
  • introduction of the vRealize Suite
  • announcement of Horizon Flex, which enables running virtualized desktops on offline clients. Kit Colbert of VMware has written a blog about it.
  • EVO:RAIL comes with the vCloud Air – Disaster Recovery service
  • CloudVolumes is now VMware App Volumes. It will be available this quarter and free of charge with VMware Horizon Enterprise. Sign up for the Early Access Program here.
  • A partnership between VMware and Palo Alto Networks: announcement of the Palo Alto Networks VM-1000-HV, designed specifically for VMware NSX interoperability. It is expected to be available in vCloud Air in the first half of 2015.

Nutanix announces all-flash model and NOS 4.1 with Metro Availability

Nutanix made two new announcements today: 

  • a new hardware model: the NX-9240, with only flash capacity and no spinning disks
  • the release of Nutanix OS 4.1 with Metro Availability

 

The NX-9240 is Nutanix's first all-flash appliance. It has 20 TB of raw flash storage.

The main purpose of the NX-9240 is running Tier 1 applications like Oracle and SQL Server.

Unlike the other Nutanix models, which can be mixed, the NX-9240 cannot be a member of a cluster that has non-NX-9240 members.

The NX-9240 all-flash hyper-converged system is available today, with list prices beginning at $110,000 per node.

The Nutanix press release is here.

Nutanix also announced the release of Nutanix Operating System (NOS) 4.1.

New features in 4.1 are:

  • Metro Availability
  • Cloud Connect
  • One-click hypervisor upgrade
  • Microsoft System Center integration
  • Data at rest encryption
  • SMI-S support for System Center

 

Metro Availability enables Nutanix clusters to stretch over two sites. The functionality is very similar to, for example, NetApp MetroCluster or EMC VPLEX.

Metro Availability allows for a seamless failover of virtual machines when the Nutanix cluster or a complete site is unavailable. The solution has a Recovery Point Objective of zero (0) and a very low Recovery Time Objective: in case of an unplanned failover, the time to recover will basically be the time required to boot the virtual machines.

Unlike other solutions, Metro Availability is very simple to set up. It requires a latency of at most 5 ms and a fiber network, which limits the distance between the two sites to something like 100-150 km.
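As a rough sanity check of that distance figure (assuming the 5 ms is meant as a round-trip budget, which is not stated explicitly), the propagation delay of light in fiber can be worked out as follows:

```python
# Back-of-envelope check, assuming the 5 ms figure is a round-trip budget.
# Light travels through optical fiber at roughly 200,000 km/s (~2/3 c),
# i.e. about 5 microseconds of one-way propagation delay per kilometer.
FIBER_SPEED_KM_PER_S = 200_000
RTT_BUDGET_S = 0.005                      # 5 ms round-trip budget

rtt_per_km = 2 / FIBER_SPEED_KM_PER_S     # ~10 µs of RTT per km of fiber
max_distance_km = RTT_BUDGET_S / rtt_per_km

print(f"Propagation-only limit: {max_distance_km:.0f} km")   # ~500 km
# Switching, serialization and protocol overhead plus a safety margin are why
# the practical supported distance is much shorter, in the 100-150 km range.
```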

Mixing models in both sites is supported.

Metro Availability will be part of the Ultimate software edition. Both the primary site and the secondary site will need to have an Ultimate license.

Details in this Nutanix blog.

Cloud Connect was announced in August. It will create a hybrid cloud infrastructure with many more features to come. In NOS 4.1, Cloud Connect is still limited to storing backup data in the public cloud. Initially, Amazon Web Services is supported; Azure and Google Compute are on the roadmap.

Other future use cases are disaster recovery and cloud bursting.

http://www.nutanix.com/blog/2014/08/19/announcing-nutanix-cloud-connect/

 

One-click hypervisor upgrade allows for a simple upgrade of any Nutanix-supported hypervisor: ESXi, Hyper-V and KVM.

System Center integration enables Virtual Machine Manager and Operations Manager to detect Nutanix nodes and report on performance and health.

Data at rest encryption enables the use of self-encrypting drives. Initially, this feature is supported on the 3000 and 6000 series. It is an often-requested feature in finance, healthcare and government environments.

More information here:

Nutanix 4.1 Features Overview (Beyond Marketing) – Part 1

Nutanix 4.1 Features Overview (Beyond Marketing) – Part 2

Nutanix 4.1 Features Overview (Beyond Marketing) – Part 3