Jul 12, 2011
 

Bye bye ESX and the Service Console!! The long-talked-about removal of the service console from the vSphere product set means vSphere 5.0 is now a converged platform offering (ESXi).

So what platform enhancements are in the vSphere 5.0 release?

Image Profiles – An image profile allows for the creation of bespoke ESXi images. A common issue with previous versions was that the stock ESXi image could be missing drivers or plugins such as the Cisco Nexus 1000V (N1KV). An image profile describes all the components in an ESXi installation image, but not its configuration. The Image Builder is a set of PowerCLI command-line utilities. Image profiles are kept in a depot, a repository (library) of images and VIBs.
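As a rough sketch of how this looks with the Image Builder PowerCLI cmdlets (the depot path, profile names and the VEM package name below are illustrative placeholders, not real build names):

```powershell
# Load an offline bundle into the session as a software depot
Add-EsxSoftwareDepot C:\depot\ESXi-5.0.0-offline-bundle.zip

# See which image profiles the depot contains
Get-EsxImageProfile

# Clone a stock profile and add an extra VIB (e.g. a Nexus 1000V VEM package)
New-EsxImageProfile -CloneProfile "ESXi-5.0.0-standard" -Name "ESXi50-Custom" -Vendor "HomeLab"
Add-EsxSoftwarePackage -ImageProfile "ESXi50-Custom" -SoftwarePackage "cisco-vem-esx"

# Export the bespoke profile as an installable ISO (or use -ExportToBundle for a zip depot)
Export-EsxImageProfile -ImageProfile "ESXi50-Custom" -ExportToIso -FilePath C:\depot\ESXi50-Custom.iso
```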

What is Auto-Deploy?

ESXi is downloaded and run from memory. The VMware vSphere Auto Deploy virtual appliance loads the hypervisor software onto a host as it boots, using an image profile obtained from the depot and delivering it via PXE. Combining the features of host profiles, Image Builder, and PXE, VMware vSphere Auto Deploy simplifies the task of managing ESXi installation and upgrade for hundreds of machines. New hosts are automatically provisioned based on rules defined by the user. Rebuilding a server to a clean slate is as simple as a reboot, because the physical memory is cleared. To move between ESXi versions, you update a rule using the Auto Deploy PowerCLI and perform a test compliance and repair operation.
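A hedged PowerCLI sketch of that rule workflow (the rule names, IP range, host names and profile names are made up for illustration):

```powershell
# Create a rule assigning a custom image profile and a host profile to hosts in an IP range
New-DeployRule -Name "Rack1" -Item "ESXi50-Custom", (Get-VMHostProfile -Name "Rack1-HostProfile") `
    -Pattern "ipv4=192.168.10.50-192.168.10.99"
Add-DeployRule -DeployRule "Rack1"        # make the rule active in the working rule set

# To move hosts to a newer ESXi build, repoint the rule at the new image profile,
# then check compliance and remediate
Copy-DeployRule -DeployRule "Rack1" -ReplaceItem "ESXi50u1-Custom"
$result = Test-DeployRuleSetCompliance -VMHost (Get-VMHost "esx01.lab.local")
Repair-DeployRuleSetCompliance $result
```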

Auto-Deploy allows for rapid provisioning and re-provisioning of new hosts and enables simplified updates.

The big bonus with Auto-Deploy is that no local storage is needed for the host hypervisor. The same goes for Boot from SAN.


Recipe for Auto-Deploy:

Custom ESXi image + network boot (PXE) infrastructure + Auto Deploy virtual appliance + configuration + host profiles + runtime state information + vCenter Server + recording of events + syslog and netdump servers

Firewall

Used to be iptables – not any more. The new firewall is service-aware but not stateful, and it supports rules for 3rd-party components.

Rules can be added through the GUI or in XML format.
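The service rulesets can also be inspected and toggled from PowerCLI; a minimal sketch, assuming an example host name (custom rules themselves live as XML files under /etc/vmware/firewall/ on the host):

```powershell
# List the firewall rulesets currently enabled on a host
Get-VMHostFirewallException -VMHost "esx01.lab.local" |
    Where-Object { $_.Enabled } |
    Select-Object Name, IncomingPorts, OutgoingPorts, Protocols

# Enable the syslog ruleset so the host can forward logs to a remote collector
Get-VMHostFirewallException -VMHost "esx01.lab.local" -Name "syslog" |
    Set-VMHostFirewallException -Enabled:$true
```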

New Virtual Machine Capabilities

VM scaling:

Hardware version 8: 32 virtual CPUs per VM, 1TB RAM per VM.

Support for up to 512 virtual machines. vSphere 5.0 supports up to 512 virtual machines totalling a maximum of 2048 virtual CPUs per host.

Support for larger systems. vSphere 5.0 supports systems with up to 160 logical CPUs and up to 2TB RAM.


  • New Virtual machine capabilities. ESXi 5.0 introduces a new generation of virtual hardware with virtual machine hardware version 8, which includes the following new features:
    • 32-way virtual SMP. ESXi 5.0 supports virtual machines with up to 32 virtual CPUs, which lets you run larger CPU-intensive workloads on the VMware ESXi platform.
    • 1TB virtual machine RAM. You can assign up to 1TB of RAM to ESXi 5.0 virtual machines.
    • Nonhardware accelerated 3D graphics for Windows Aero support. ESXi 5.0 supports 3D graphics to run Windows Aero and Basic 3D applications in virtual machines.
    • USB 3.0 device support. ESXi 5.0 features support for USB 3.0 devices in virtual machines with Linux guest operating systems. USB 3.0 devices attached to the client computer running the vSphere Web Client or the vSphere Client can be connected to a virtual machine and accessed within it. USB 3.0 devices connected to the ESXi host are not supported at this time.
    • UEFI virtual BIOS. Virtual machines running on ESXi 5.0 can boot from and use the Unified Extensible Firmware Interface (UEFI).
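As a hedged example of putting the new hardware version 8 maximums to use, a "monster" VM can be created directly from PowerCLI (the VM, host, datastore and guest OS values here are placeholders, and the -MemoryGB/-DiskGB parameters assume a vSphere 5.0-era PowerCLI):

```powershell
# Create a hardware version 8 VM with the new maximums: 32 vCPUs and 1TB of RAM
New-VM -Name "monster-vm01" -VMHost "esx01.lab.local" -Datastore "datastore1" `
    -Version v8 -NumCpu 32 -MemoryGB 1024 -DiskGB 100 -GuestId "windows7Server64Guest"
```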

Other new features – UI for multi-core virtual CPUs, extended VMware Tools compatibility, and support for Mac OS X Server.

STORAGE

Storage DRS (SDRS):

  • Initial virtual disk placement (Pod – pool of datastores)
  • Out-of-space avoidance
  • I/O load balancing
  • Virtual disk affinity/anti-affinity
  • New managed object (the datastore cluster)
  • Storage equivalent of DRS clusters
  • Consists of similar datastores
  • Storage load-balancing domain
  • Storage Policy Based Management (SPBM)
  • 2TB+ LUN support. vSphere 5.0 provides support for 2TB+ VMFS datastores.

VMFS-5 – Scalability and performance improvements; increased file-system limits (the maximum file-system size grows as the file system is extended); reduced SCSI reservations thanks to VAAI primitives; plumbing for UNMAP (space reclamation on thin-provisioned LUNs). It also sets the stage for future-proofing, with a more efficient snapshot facility within VMFS and support for files larger than 2TB.

vStorage APIs for Storage Awareness (VASA) – allow array vendors to provide more information about the storage array (drives, RAID level, etc.).

VAAI – now covers NAS and thin provisioning (passing information down to the array so it can free up space), plus the ability to warn when the array is running out of space.

Storage vMotion enhancements – it has been sped up, and VMs with snapshots can now be Storage vMotioned.

Storage vMotion snapshot support – Allows Storage vMotion of a virtual machine in snapshot mode with associated snapshots. You can better manage storage capacity and performance by leveraging flexibility of migrating a virtual machine along with its snapshots to a different data store.
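A minimal PowerCLI sketch of this (the VM and datastore names are illustrative); in previous releases this operation would fail while snapshots were present:

```powershell
# Storage vMotion a running VM, snapshots and all, to a different datastore
Get-VM "appvm01" | Move-VM -Datastore (Get-Datastore "silver-datastore01")
```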

Network

LLDP (Link Layer Discovery Protocol) – a bit like CDP.

NetFlow – covers inter-VM traffic and VM-to-physical-infrastructure traffic (the vDS sends records to 3rd-party collectors such as NetQoS and NetScout).

DVMirror (port mirroring) – can monitor inter-VM or intra-VM traffic.

NIOC (Network I/O Control) – network prioritisation (I/O shares) at the VM level – QoS extended to the network infrastructure – workload isolation between tenants – done through vDS resource pools and the shares within them.

 

vCenter

vMotion – support for up to four 10Gbps or sixteen 1Gbps NICs; a single vMotion can now scale over multiple NICs (load balancing). The new Slowdown During Page Send (SDPS) feature throttles busy VMs (slowing the VM slightly so the memory copy can complete), reducing timeouts and improving success rates. vMotion can now ensure a switchover time of less than 1 second in almost all cases. It also supports higher-latency networks (up to 10ms), offers improved error reporting, and has better resource pool integration – vMotion now puts VMs into the proper resource pool.
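A rough idea of the multi-NIC setup from PowerCLI, assuming two vMotion port groups on a standard vSwitch (the host name, port group names and addressing are placeholders, and the per-port-group active/standby NIC teaming still has to be set separately so each port group uses a different uplink):

```powershell
$vmhost = Get-VMHost "esx01.lab.local"

# Two vMotion-enabled vmkernel ports, one per vMotion port group/uplink
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-A" `
    -IP 10.10.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch "vSwitch1" -PortGroup "vMotion-B" `
    -IP 10.10.1.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
```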

DRS – integration with vShield, with support for vShield agents (Zones, Edge, App, Endpoint) running in a DRS/DPM cluster. Resource pool management is now consistent for clustered and non-clustered hosts managed by vCenter: resource pool settings are stored in vCenter for non-clustered hosts (same as clustered hosts), which enables support for stateless ESXi hosts running in standalone/non-clustered mode and prevents direct host access to resource pool settings managed by vCenter. There is also improved behaviour when a host is disconnected from vCenter – if a host loses access to vCenter, users can connect directly to the host and override vCenter to take control, and this no longer requires restarting the VPXA agent.

Cloud Level Scalability:

  • 1000s of VDCs
  • 50,000+ VMs
  • 2000 hosts
  • 150,000 Objects
  • 200+ concurrent administrators

vCenter can now run on Linux and also comes as a downloadable appliance.

FDM (Fault Domain Manager) – the replacement agent underneath VMware HA; no more AAM.

  • More reliable
    • Deploys and reconfigures within seconds, regardless of cluster size.
    • Uses multiple channels for agent-to-agent communication: both network and storage
    • Removes dependencies on commonly misconfigured services (e.g., DNS)
  • New and Improved features
    • Management network partition support (new)
    • Single HA log file per host and syslog integration (new)
    • Host isolation response (improved)
    • Admission control (improved)
    • Agent error reporting (improved)

Flex Based Client:

  • Empowering the Administrator
    • Centralize all VMware Administrative User Interfaces
    • Seamlessly integrate with all VMware solutions
    • Create a common user experience
    • Common “Look and Feel”
    • Single Sign On (SSO)
    • Scale to the Cloud
    • Support multiple platforms
    • Provide for ease of extensibility

It's worth noting this is better than the web clients prior to vSphere 5.0, but it still has a little way to go to match the vSphere Client.

Update Manager – can now do delayed and staggered upgrades, and can upgrade Virtual Appliances.

And for VDI:

Accelerator. An accelerator has been delivered for specific use with View (VDI) workloads. With this option configured in ESXi, a read cache is constructed in memory that is optimised for recognising, handling, and de-duplicating VDI client images. The cache is managed from within View Composer and delivers a significant reduction, as high as 90% by early estimates, in IOPS from each ESXi host to the storage platform holding client images. This reduction in IOPS allows the number of clients to scale much higher when the I/O storms typical of large VDI deployments occur.
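On the host side this surfaces as advanced settings; a minimal sketch, assuming the CBRC (Content-Based Read Cache) advanced options exposed on ESXi 5.0 and an example host name, though in practice the cache is driven from View Composer as noted above:

```powershell
# Check whether the host-side read cache (CBRC) is enabled, then turn it on
Get-VMHostAdvancedConfiguration -VMHost "esx01.lab.local" -Name "CBRC.Enable"
Set-VMHostAdvancedConfiguration -VMHost "esx01.lab.local" -Name "CBRC.Enable" -Value $true
```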

 

References

(I’ll keep updating as I find them):
