VMware ThinApp 5.1 released

On September 9, VMware released ThinApp 5.1.

VMware ThinApp 5.1 is the latest version of ThinApp and includes the following enhancements.

The release notes are available here.

ThinApp is part of the VMware Horizon Suite. Download it here.

ThinApp Package Management

In earlier versions of ThinApp, to change certain Package.ini parameters you had to edit the configuration file, save it, and rebuild the package. With the new ThinApp package management feature, you can dynamically reconfigure the attributes of deployed ThinApp packages at runtime. ThinApp package management helps IT administrators manage ThinApp packages and define a group policy for each package. Each package to be managed must have an associated group policy defined using its inventory name.
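
For illustration, here is a minimal Package.ini sketch showing where the inventory name lives; the value shown is just an example:

    [BuildOptions]
    ; The group policy for a managed package is defined against this
    ; inventory name (the value here is an example, not a requirement).
    InventoryName=Mozilla Firefox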

When you install ThinApp 5.1, a new folder named Policy is created in the installation directory. This folder contains tools and templates for managing ThinApp packages and contains the following files:

  • AppPolicy.exe
  • README.TXT
  • ThinAppBase.adml
  • ThinAppBase.admx
  • ThinAppGeneric.adml
  • ThinAppGeneric.admx

ThinApp determines policy precedence when you rebuild and deploy a package. If a package is managed by a group policy, ThinApp gives that policy precedence over the package's Package.ini configuration.

Group Policy Administrative Templates

ThinApp 5.1 introduces group policy administrative template (ADMX/ADML) files. With these template files you can reconfigure group policy settings for applications packaged with ThinApp. The template files work on domain controllers that run in the following environments:

  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server 2012

Administrative template files contain markup that describes registry-based Group Policy settings. They are divided into language-neutral files (.admx) and language-specific resources (.adml) that are available to all Group Policy administrators, which allows Group Policy tools to adjust the user interface to the administrator's configured language.
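
As a rough, hypothetical illustration of how such a pair fits together (the policy, category, and string names below are invented, not taken from the ThinApp templates):

    <!-- Fragment of a hypothetical .admx file: the language-neutral policy
         definition, mapping the setting to a registry key and value. -->
    <policy name="ExamplePolicy" class="Machine"
            displayName="$(string.ExamplePolicy)"
            key="SOFTWARE\Policies\ExampleVendor\ExampleApp"
            valueName="ExampleSetting">
      <parentCategory ref="ExampleCategory" />
      <supportedOn ref="windows:SUPPORTED_WindowsVista" />
    </policy>

    <!-- Matching entry in the hypothetical .adml file: the language-specific
         display text that $(string.ExamplePolicy) resolves to. -->
    <string id="ExamplePolicy">Example policy display name</string>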

Reconfiguring Attributes of Deployed ThinApp Packages

In addition to ThinDirect, ThinApp 5.1 lets you reconfigure or manage the following attributes of a deployed package:

  • AppSync
  • AppLink
  • Entry-Point Shortcuts

Note: For instructions on reconfiguring these package attributes, see the ThinApp 5.1 User's Guide.

ThinDirect

In ThinApp 5.1, the following enhancements have been made to the ThinDirect plug-in:

    • Support for updating ThinDirect settings at specified time intervals

In ThinApp 5.1, the ThinDirect functionality has been enhanced to periodically poll for ThinDirect setting changes. Because the settings are now detected dynamically, users do not need to restart the browser to pick up the changes.

    • Support for dynamic changes to ThinDirect via ADM

In ThinApp 5.1, you can use the ThinDirect.ADM file to manage a ThinDirect-enabled Firefox.

    • Support for overriding ThinDirect settings through a GPO

In the ThinApp installation directory, locate the ThinDirect.admx and ThinDirect.adml files. Use these files to manage the ThinDirect settings by defining a group policy object. If ThinDirect is configured through a GPO, those settings override the text-file-based (thindirect.txt) redirection.
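
As a purely illustrative sketch of the text-file approach (the section names and pattern syntax here are hypothetical; the exact thindirect.txt format is in the ThinApp documentation), the idea is a mapping of URL patterns to the browser that should handle them:

    ; Hypothetical thindirect.txt sketch: URLs matching the pattern are
    ; opened in the named virtual browser instead of the native one.
    [VirtualProcesses]
    Firefox ThinDirect=Mozilla Firefox.exe

    [Mozilla Firefox.exe]
    *://intranet.example.com/*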

    • Support for redirection between two virtual browsers

ThinApp 5.1 supports redirection between two virtual browsers.

    • Support for ThinDirect in Mozilla Firefox

In ThinApp 5.1, ThinDirect has been enhanced to work with Mozilla Firefox version 22 and later. In earlier versions of ThinApp, ThinDirect was limited to Internet Explorer.

New Package.ini Parameter Introduced

ThinApp 5.1 introduces the SandboxWindowClassName parameter. When you set SandboxWindowClassName=1, ThinApp sandboxes, or isolates, the application-defined window class names created and used within the ThinApp package.
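
A minimal Package.ini sketch; placing the parameter under [BuildOptions], the usual location for build parameters, is an assumption here:

    [BuildOptions]
    ; Per the release notes: 1 sandboxes (isolates) the window class names
    ; the application creates inside this ThinApp package.
    SandboxWindowClassName=1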

Extracting an existing ThinApp project to a system

ThinApp 5.1 allows you to extract an existing ThinApp project to a capture and build operating system by using the snapshot.exe and snapshot64.exe commands.

Prerequisites
Before you extract an existing ThinApp project to a capture and build operating system, ensure that the following conditions are met:

  • Verify that the architecture and type of the capture operating system are the same as those of the deployed operating system.
  • Perform the extraction of an existing ThinApp project on a clean capture and build machine.
  • Ensure that the user profile in the existing virtual project is the same as that of the capture and build machine.

ThinApp 5.1 has the following command line options to extract existing projects to capture and build operating systems.

  • snapshot.exe: Extracts an existing ThinApp project to a 32-bit capture and build operating system
  • snapshot64.exe: Extracts an existing ThinApp project to a 64-bit capture and build operating system

Note: For more information about extracting existing ThinApp projects, see the ThinApp 5.1 User's Guide.

MAPI Support

ThinApp 5.1 supports the Messaging Application Programming Interface (MAPI) on the following Microsoft Windows platforms:

  • Windows 7
  • Windows 8 32-bit
  • Windows 8 64-bit
  • Windows 8.1 32-bit
  • Windows 8.1 64-bit

ThinApp 5.1 provides the DefaultEmailProgram option in Package.ini to register a virtual email client as the host's default email program. You have to enable this option to register the default email program. MAPI is not supported on the Windows XP x86 operating system. For more information, see KB article 2087898.
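
A minimal Package.ini sketch; treating 1 as the enabled value and placing the parameter under [BuildOptions] are assumptions:

    [BuildOptions]
    ; Assumption: 1 registers the packaged virtual email client as the
    ; host's default email program (not supported on Windows XP x86).
    DefaultEmailProgram=1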

Support for Internet Explorer 10 and Internet Explorer 11

ThinApp 5.1 supports Internet Explorer 10 and Internet Explorer 11 only on the Windows 7 operating system.

Support for Windows 8.1

ThinApp 5.1 works on the Windows 8.1 August update (Update 2).

For additional information about ThinApp 5.1, visit the following Web site:

VMware Virtual SAN & EVO:RAIL do support Tier 1 applications

VMware recently announced EVO:RAIL, a combination of server hardware, VMware software, and vendor support bundled as an appliance. The use cases communicated by VMware are general-purpose workloads, VDI, ROBO, and virtual private clouds.

The software of EVO:RAIL consists of vSphere 5.5 Enterprise Plus Edition, VSAN, vCenter Server Appliance, and Log Insight, combined with a nice HTML interface for initial configuration and daily management.

The image below shows the use cases for EVO:RAIL as communicated by VMware.

evorail-usecases

You might think: what about Tier 1 applications like Exchange Server, SQL Server, and Oracle? Can I run those on EVO:RAIL?

So far, VMware marketing and technical communications have not mentioned Tier 1 as a use case for EVO:RAIL. However, on September 7 a tweet from a VMware-managed Twitter account mentioned running a Tier 1 app (Exchange) on EVO:RAIL.

evorail-exchange

I was surprised by that tweet, as I had not heard 'EVO:RAIL' and 'Exchange' mentioned in the same sentence before.

The next day, Duncan Epping of VMware, who works on EVO:RAIL, provided some insight on running Tier 1 apps on EVO:RAIL in this post.

Basically Duncan says:

“Running Tier-1 applications on top of VSAN (or EVO:RAIL) is fully supported as it stands today however … your application requirements and your service level agreement will determine if EVO:RAIL or VSAN is a good fit.”

VMware initially targets EVO:RAIL (and VSAN 1.0) at the use cases shown in the image above. As with any new technology, customers will have to gain confidence in the solution. Only a few customers dared to run their business-critical applications on VMware GSX Server when it was released in 2001, and even today there are people who are afraid to run their Tier 1 apps on a hypervisor.

Another reason VMware initially does not have a priority marketing focus on running Tier 1 on EVO:RAIL is the lack of synchronous replication support in, for instance, Site Recovery Manager. Business-critical applications typically require site resiliency.

Mind you, many such applications do not depend on storage replication or other infrastructure-based replication solutions for site resiliency. Exchange Server can use stretched Database Availability Groups when running on vSphere; in fact, a stretched DAG is the recommended way to protect Exchange against site failures.

Conclusion
In a technical sense, EVO:RAIL fully supports any application, including Tier 1, as long as the vendor supports running it on VMware vSphere. Whether it is wise to run a Tier 1 application on EVO:RAIL depends mainly on the application requirements and on whether the additional tooling and features meet those requirements.

Combine the best of two worlds: vSphere stretched cluster and Site Recovery Manager

For Business Continuity and Disaster Recovery in a vSphere infrastructure, most customers choose between two options: VMware Site Recovery Manager (SRM) or a vSphere Metro Stretched Cluster.

While both are great options for BC/DR, both also have some disadvantages.

At VMworld 2014, VMware announced that it is working on integrating SRM with vSphere Metro Stretched Clusters.

I am sure these new features are not in SRM 5.8, which was announced during VMworld. Jason Boche made three videos demoing the new release; in the demo of the tech preview, some errors were encountered.

In May 2012 I predicted a couple of the enhancements now announced. Good to see they are now becoming a reality.

This is a summary of breakout session BCO1916. You can watch this interesting session yourself here.

So the two options for BC/DR in a vSphere datacenter are:

-option 1: two datacenters, both running production in an active-active configuration with stretched storage and networking. We call this a vSphere Metro Stretched Cluster

-option 2: two datacenters in an active-passive configuration. One runs production, the other test/dev. If the production site fails, VMware Site Recovery Manager is used to perform an orchestrated recovery of the virtual machines in the recovery site. Alternative tools are vSphere Replication and Zerto Virtual Replication.

Option 1 is great for disaster avoidance, balancing of resources, and planned maintenance. When IT knows in advance that one of the datacenters might become unavailable (because of a hurricane, a power outage, SAN maintenance, and so on), virtual machines can be vMotioned to the alternate datacenter.

In case of an unplanned event like a fire or earthquake, VMware HA takes care of restarting the virtual machines. The advantage is that up-to-date virtual machine disk files are available in the recovery site, so both RPO and RTO are low.

However, VMware HA is not designed for large-scale recovery of a complete site. VMware HA does not offer runbooks for an automated recovery, is not aware of application dependencies, is not site aware, and does not offer granular control over VM start priority. Also, a failover cannot be tested, so we cannot shut down and reboot a VM without taking a production VM down.

srm-stretched-clusters-issues

Another restriction is that, because of the synchronous replication at the storage layer, the distance between the two datacenters is limited to about 100 km. A vSphere Metro Stretched Cluster is typically deployed in a metro area, and a hurricane or earthquake is likely to hit a larger area, so both sites might be hurt.

Option 2 does offer orchestration using runbooks, aka recovery plans, and IT can test a recovery without disturbing the production environment.

Currently, combining a vSphere Metro Stretched Cluster with VMware Site Recovery Manager is not possible.

Quite a few blog posts, VMworld sessions, and whitepapers have been written about the advantages and disadvantages of both scenarios. Duncan Epping wrote a blog post about this in 2013, titled SRM vs Stretched Cluster solution. VMware also published a great whitepaper on this topic.


As said, VMware is working on a tech preview of SRM that enables using SRM in a vSphere Metro Stretched Cluster.

Some of the requirements for such a setup have been announced at VMworld:

  • vSphere 6.0 will enhance long-distance vMotion by supporting a round-trip latency of 100 ms. This enables a much larger distance between the two datacenters.
  • vMotion will be possible between two different vCenter Servers. Two vCenters are a requirement for SRM.

In the future, SRM will allow organizations using a vSphere Metro Stretched Cluster to orchestrate planned failovers: SRM will use a recovery plan to initiate vMotion of virtual machines, monitor the vMotion progress, and report success or failure. A vMotion will not always succeed, for example because of latency issues; in that case SRM allows a rerun of the planned failover runbook. If a vMotion fails, SRM will shut down the VM on the production site and restart it on the recovery/secondary site.

For SRM to understand stretched storage, vendors will need to develop new Storage Replication Adapters (SRAs).

Storage Profile Protection Groups (SPPG) will be a new component of SRM. The idea is that once a protection group (PG) is created, storage profiles are added to the PG. Any virtual machine or datastore that is part of the storage profile will automatically be protected.

srm-protectiongroups

In case of an unplanned failover SRM will obviously not use vMotion as the production site is down. SRM will take care of restarting VMs in the recovery site according to the recovery plan.

srm-unplannedfailover


One of the new features of combining SRM with a stretched cluster is the ability to perform test failovers. A test failover does not actually perform a vMotion; it powers on the virtual machines in the recovery site using an isolated network.


srm-testfailover


It is also possible to reprotect: a reprotect makes the recovery site the primary site, and the site that was originally the protected site becomes the recovery site. Failback is supported as well.

srm-reprotect


VMware did not reveal the release number of the SRM version that will support stretched clusters, nor did they reveal a release date. My guess is that this feature will be in SRM 6.0, to be released near the GA of vSphere 6.0.


VMworld 2014 session STO1965 – Virtual Volumes Technical Deep Dive

This was a great session on Virtual Volumes. This post will provide a summary of the information given in this session.

You can watch the recording of the session on YouTube here. I assure you this will be one hour well spent!

It will take some time for VMware customers to fully understand the concept of Virtual Volumes.

Currently, virtual machines running on vSphere are stored on LUNs, and LUNs have restrictions. All VMDKs on a LUN are treated the same with regard to LUN capabilities: for instance, you cannot replicate just a single VMDK at the storage level; it is all or nothing. LUNs are also restricted in size, and we cannot attach more than 256 LUNs to an ESXi host.

vvol-slide1

VMware has been working on a more granular way to manage virtual machine disk files. This technology is known as Virtual Volumes (VVOLs). Virtual Volumes are nothing more than the disk files associated with a virtual machine. Think Virtual Volume = VMDK.

When using VVOLs we can forget the concepts of LUNs and VMFS; neither is used with Virtual Volumes.

Virtual Volumes provide:

  • abstraction
  • simplicity of management
  • granularity of applying storage capabilities (data services) to virtual machines

Today a LUN provides two functions: data storage and an access point for hosts. With VVOLs these two functions are split.

vvol-architecture

Three concepts of VVOLs are important to understand.

  • The Protocol Endpoint (PE) is the endpoint of the data stream from the host. ESXi talks to the protocol endpoint, which understands all storage protocols (block and file based). The PE is created by the storage admin.
  • VASA provider. VASA stands for vSphere Storage APIs – Storage Awareness. This piece of software is provided by the storage array vendor and publishes the storage capabilities to the ESXi host. The ESXi host exposes a set of VASA APIs, and each storage vendor has to support the API. VASA 1.0 is currently used in vSphere 5 and 5.5; Virtual Volumes in vSphere 6.0 will use VASA 2.0.
  • Storage container. This is a collection of storage objects. Each array has at least one storage container. A storage container has capabilities assigned, like replication, snapshot retention times, and encryption. The maximum number of storage containers is determined by the storage vendor. Storage containers cannot span multiple storage arrays. Each storage container has one or more protocol endpoints associated with it.

There is no size limit on a storage container; it can be as big as the total storage capacity of the storage array. If the array contains LUNs accessed by physical hosts, the storage admin can restrict the size of the storage container.

Protocol Endpoints (PEs) are used to zone storage containers. Today, which host has access is managed at the LUN level; with VVOLs this function is handled by the PE.

Data services like replication, encryption, and deduplication are all provided by the storage array.

Virtual Volumes need neither a LUN nor a VMFS file system.

When using VVOLs we still use and see datastores in the vCenter Server console. Many features of vSphere are based on the concept of datastores, and a storage container is presented as a datastore.

Virtual Volumes will allow storage policy based management. VMware Virtual SAN already allowed placing virtual machine disk files using storage policies. With VVOLs, policies can be created based on storage capabilities like:

  • which disk types are used for the storage container
  • which snapshot capabilities the storage container provides
  • deduplication
  • replication
  • encryption

Virtual Volumes allow a per-VMDK policy. Admins can now, for example, snapshot only the virtual disk containing application data, while the system disk of the virtual machine is not snapshotted.

There are five types of Virtual Volume objects:

  1. VMDK
  2. snapshot
  3. swap files
  4. vendor specific
  5. metadata

Migration of virtual machines from VMFS to VVOLs is done using vMotion and Storage vMotion.

Virtual Volumes does have a couple of restrictions since it is a 1.0 release. It will not support Storage DRS. SRM will also not support VVOLs yet, but VMware is working hard to support it in later versions of SRM.

VVOLs itself does not support RDMs.

A lot of storage vendors will support VVOLs when the feature becomes GA, but not every storage array is likely to support it. It is up to the storage vendor to decide which arrays will receive a firmware update with VVOL support.

The slide below shows the vendors currently supporting VVOLs.

vvol-partners

What is new in vSphere 6.0

This post is part of a series of blog posts on VMworld. For an overview of VMworld announcements, see this post.

———————————————————————————————————————————————————————————

At the August 26 keynote of VMworld, VMware announced three new features of vSphere 6.0. The next version of vSphere is currently in a public beta; everyone interested can download the bits and try them. However, the program is under NDA, so information on features cannot be publicly shared.

So VMware gave a sneak preview of vSphere 6.0 yesterday, and a couple of blogs immediately published more information on the revealed features. VMware also revealed some information on vSphere 6.0, including vCenter, in various breakout sessions. Derek Seaman wrote some great blogs covering breakout sessions; for example, here is his blog on a session about vCenter. The session to look for is INF2311.

Eric Sloof did an interview with Mike Foley of VMware, in which Mike explains some of the new features.

Let's have a look at what is new in vSphere 6.0.

ESXi 6.0

  • Fault Tolerance support for 4 vCPUs. According to a breakout session at VMworld 2014 US, Fault Tolerance has been rewritten from the ground up. Info here
  • Long-distance vMotion. More info here and here. Long-distance vMotion now supports a 100 ms round trip; this used to be 10 ms.
  • vMotion across vCenters, vMotion using routed vMotion networks, and vMotion across virtual switches
  • Using VMware NSX, network properties will now be vMotioned as well when using long distance vMotion.
  • Better vSphere Web Client performance and added features. It is still Flex based, but response time is greatly improved. The user interface of the web client is now much more similar to the C# client, so there is less searching for a feature you know how to find in the C# client.
  • Virtual Volumes. Info here
  • Virtual Datacenters. More info here
  • The vSphere Client (the C# one installed on Windows) will remain available in vSphere 6.0. According to VMware, vSphere 6.0 will be the last release to support the C# client. Haven't we heard that before ;-)?

vCenter Server 6.0

Julian Wood also blogged about vCenter Server 6.0. Read his blog here.

  • vCenter 6.0 will support Fault Tolerance: 4 vCPUs, separate storage for primary/secondary VMs, and 64 GB of RAM
  • Content Library. A way to centrally store VM templates, vApps, ISO images, and scripts. The function is similar to the Content Library of vCAC. Content Libraries are replicated across vCenter Server instances. The advantage is a centrally managed repository, preventing, for instance, several copies of templates for the same guest OS.
  • Certificate management will be greatly improved by a new tool
  • Platform Services Controller (PSC), which will provide licensing, certificate management, and SSO for solutions like vCenter, vCOps, vCloud Director, and vCloud Automation Center
  • The PSC will support HA, Fault Tolerance, and load balancing for high availability.
  • vCenter Server Appliance (VCSA) 6.0 will support 1000 hosts and 10,000 powered on VMs
  • The VCSA is still limited to Oracle as the only external database; there is no ODBC driver on Linux for Microsoft SQL Server.
  • Improved installer. It will ask for configuration details, check them, and then do the install in one go. No separate installations required.

The features below were not announced during the keynote but were announced by partners. While there is no confirmation that these features will be part of vSphere 6.0, it is very likely they will be.

  • NVIDIA GRID vGPU. More info and background here. Announced by VMware for 2015. 
  • vSphere APIs for IO Filtering (VAIO)
  • VMFork. A method to  instantly clone running virtual machines.

The VAIO feature was not revealed during the keynote but was mentioned in a session at Tech Field Day. vSphere I/O Filters is a VMware-supported method that lets third-party vendors intercept the storage traffic flowing between a VM and the ESXi kernel. This allows for features like replication and host-based read and write caching. The latter allows VMware partners to develop solutions like PernixData's; PernixData developed its own kernel module for host-based read and write caching.


VMware has a course available for developers who would like to learn to develop vSphere I/O filters. The first two courses are scheduled for mid-October and will be held in Palo Alto. The program is explained in this short video.

EMC just announced a new product named 'EMC RecoverPoint for Virtual Machines', which uses VAIO.