Thursday, 17 December 2009

PowerCLI to alter VM BIOS setting



I've been asked by a client how to change the BIOS boot order of a Virtual Machine so that they can PXE boot it first to build it, and then change it to boot from the hard disk afterwards.

So far I haven't discovered any way to actually change the order of the boot devices, but it IS possible to allow/disallow certain devices. The code is as follows:


# Connect to vCenter (or directly to the host) - replace ipaddress with your server
Connect-VIServer ipaddress

# Build a config spec that limits the BIOS boot devices to the network only
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.extraConfig += New-Object VMware.Vim.OptionValue
$spec.extraConfig[0].key = "bios.bootDeviceClasses"
$spec.extraConfig[0].value = "allow:net"

# Apply the spec to the VM through the vSphere API
(Get-View (Get-VM -Name "Virtual Machine").ID).ReconfigVM_Task($spec)
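
Once the VM has been built, the same pattern can be used to point the BIOS back at the hard disk. This is only a sketch - I'm assuming "hd" is the device class name for the hard disk (the classes appear to be along the lines of hd, cd, net and fd), so check the value against the VMware documentation for your version:

# Sketch: restrict the BIOS boot devices back to the hard disk after the build
# ("hd" is assumed to be the device class for the local disk - verify first)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.extraConfig += New-Object VMware.Vim.OptionValue
$spec.extraConfig[0].key = "bios.bootDeviceClasses"
$spec.extraConfig[0].value = "allow:hd"

(Get-View (Get-VM -Name "Virtual Machine").ID).ReconfigVM_Task($spec)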

Thursday, 3 December 2009

vSphere client crashes with C++ error



If you have installed Converter onto vCenter for vSphere 4, make sure that you don't have the VirtualCenter 2.5 client installed alongside the vSphere Client, as the client will crash with a C++ error.


I haven't tested it, but http://communities.vmware.com/thread/118140?start=30&tstart=0 states that you can run the two versions if you follow the instructions.

Update Manager Proxy settings



Just discovered something important about creating proxy settings in Update Manager for vSphere 4.

You must make sure that the account you are using for the proxy server

a) exists!!
b) is the service account that Update Manager is running under.

Friday, 6 November 2009

ESXi Architecture



This is my summary of the following VMware article:

www.vmware.com/files/pdf/vmware_esxi_architecture_wp.pdf




ESXi Components

1. VMkernel

A POSIX-like operating system. It provides functionality such as process creation and control, signals, a file system, and process threads. It also provides the resource scheduling, I/O stacks and device drivers needed to run multiple virtual machines.

2. File System

The VMkernel provides a simple in-memory file system to hold configuration files, log files and staged patches. The file system layout is similar to that of the ESX Service Console.

The file system is independent of the VMFS file system, and if all VMFS file systems for the host are on shared storage, the host can be a diskless machine. However, if the host will be a member of an HA cluster, refer to this KB article, as you will need to configure the location of your swap file:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004177

Remote access to the file system is managed by HTTP GET and PUT requests, with authentication against local users' and groups' privileges.

The file system is not persistent, and thus log files will be lost on reboot, so it is definitely worth configuring a syslog server. However, for ESXi embedded, configuration information is written to a readable and writable memory location. This memory is persistent, and is read from on boot.
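
For what it's worth, the remote syslog server can be set from PowerCLI. This is only a sketch - it assumes the Set-VMHostAdvancedConfiguration cmdlet from the older PowerCLI releases and the Syslog.Remote.Hostname setting used by ESXi 3.5/4.0 (later releases use Syslog.global.logHost instead), and the host name and IP address below are made up:

# Point an ESXi host at a remote syslog server (host name and IP are examples)
$esx = Get-VMHost -Name "esxi01.example.local"
Set-VMHostAdvancedConfiguration -VMHost $esx -Name "Syslog.Remote.Hostname" -Value "192.168.1.50"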

3. Users and Groups

Users and groups can be defined locally for use with the VI Client, the VIM API, or the remote command line.

4. User Worlds

"User World" is a process running in the VMkernel environment. The user world environment is very limited compared to normal POSIX-like environments. For Example:

  • The set of available signals is limited.
  • The system API is a subset of POSIX.
  • The /proc file system is very limited.
  • A single swap file is available for all user world processes. If a local disk exists, the swap file is created automatically in a small VFAT partition. Otherwise, the user is free to set up a swap file on one of the attached VMFS datastores by changing the advanced parameter ScratchConfig.ConfiguredScratchLocation (see the KB link above for more details, and the sketch after this list). For each host, the file is about 1GB in size, and each ESXi host requires a unique directory name for its swap file location. Probably the best approach is to have a single LUN for all swap files (say 10GB for 8 hosts).
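
As a rough illustration (not something I've tested here), the scratch location mentioned above can also be set from PowerCLI - the cmdlet is assumed from the older PowerCLI releases, and the datastore and directory names are hypothetical; each host needs its own directory on the shared LUN:

# Sketch: point the scratch/swap location at a per-host directory on a shared VMFS datastore
$esx = Get-VMHost -Name "esxi01.example.local"
Set-VMHostAdvancedConfiguration -VMHost $esx -Name "ScratchConfig.ConfiguredScratchLocation" -Value "/vmfs/volumes/SwapLUN/.locker-esxi01"
# The new location only takes effect after the host has been rebooted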

The user world is a very limited set of functionality aimed at only running the processes required, and no more.

5. Direct Console User Interface

The DCUI runs in a user world and provides a configuration and management interface through the system console. It is used for initial basic configuration of the host. It runs as the system user dcui in the VMkernel so it can identify itself when communicating with other processes. The DCUI can:

  • Set administrator password
  • Configure networking, or set it to DHCP
  • Perform some network tests
  • View logs
  • Restart agents
  • Restore defaults

It is possible to give individual users access to the DCUI by adding them to the localadmin group, thus removing the need to hand out the root password.
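
I haven't tried it myself, but something along these lines should create a host-local account and drop it into the localadmin group when connected directly to the host with PowerCLI - the New-VMHostAccount parameters (particularly -AssignGroups) vary between releases, so treat this purely as a sketch:

# Sketch only: create a host-local user and add it to localadmin for DCUI access
# (account name and password are examples; -AssignGroups may not exist in all versions)
Connect-VIServer esxi01.example.local -User root
New-VMHostAccount -Id "opsuser" -Password "Passw0rd!" -AssignGroups "localadmin"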

6. Other User World Processes

  • hostd - provides the interface to the VMkernel. Tracks users' and groups' privileges and provides authentication.
  • vpxa - used for connection to vCenter. Runs as special system user vpxuser and interfaces between hostd and vCenter.
  • HA agent - runs in its own user world
  • syslog daemon - used to forward logs to syslog server
  • iSCSI discovery - is performed in its own user world. After discovery, the VMkernel handles all traffic. The iSCSI network interface is the same as the main VMkernel network interface.
  • NTP - has a process to manage NTP time synchronisation
  • SNMP - has a process to manage SNMP monitoring & alerts

7. Startup

Upgrades to ESXi are easy to perform because the ESXi image is only 32MB, so it is simply replaced when upgrading or patching. There are two banks for ESXi packages, and either can be used to boot the ESXi host, so if an upgrade is performed and there is a problem, the previous package can be used from the alternate bank. This can happen either automatically, or the administrator can choose which bank to boot from.

8. CIM

The Common Information Model (CIM) is an open standard that defines how computing resources can be represented and managed. It enables a framework for agentless, standards-based monitoring of hardware resources for ESXi. This framework consists of a CIM object manager, often called a CIM broker, and a set of CIM providers.

The CIM providers provide a way to gain management access to device drivers and hardware. Both hardware manufacturers and VMware have written specific CIM providers. These providers are packaged with ESXi and can be installed at runtime. The CIM broker takes all the information from the CIM providers and presents it via standard APIs.

9. VI API

The VI API provides a common interface for vCenter, ESX and ESXi, enabling bespoke applications and functionality to be developed, although obviously certain functions will only work against certain targets.
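
As a quick illustration, PowerCLI's Get-View is just a wrapper around the VI API, and the same call works whether you are connected to vCenter, ESX or ESXi (the server and host names below are made up):

# Pull some hardware details straight from the VI API via Get-View
Connect-VIServer vcenter.example.local
$hostView = Get-View -ViewType HostSystem -Filter @{"Name" = "esxi01.example.local"}
$hostView.Summary.Hardware | Select-Object Vendor, Model, NumCpuCores, MemorySize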

Tuesday, 31 March 2009

VMware View Notes



Things to take note of when using VMware View




  • When using linked clones for desktops, DNS is key. Make sure that the master desktop can communicate with the View Manager server using its FQDN, otherwise the provisioned desktops will just sit at Customizing and eventually fail.

  • If you fail to connect to a desktop when using the View Client, it is possible that the Windows firewall is blocking the RDP traffic. The View Agent is meant to configure the firewall, but that didn't seem to work for me (although it may have been user error :)). My solution was to turn off the firewall before shutting the desktop down and snapshotting it. You could then use a GPO to manage the firewall when desktops are deployed.

Friday, 6 March 2009

ESX Boot from SAN




  • ESX v3.x: RDMs are supported with v3, but not with v2

  • HBA should be plugged into the lowest PCI bus and slot number to help the drivers detect the HBA quickly.

  • For Active/Passive arrays, the SP WWN that is specified in the HBA's BIOS must be active. If not, no boot!!

  • Ensure that boot LUNs are masked to only the ESX server that needs it.

  • Must be in SAN switched fabric. Direct connect and FC arbitrated loop are not supported.

  • For IBM eServer BladeCenter, IDE drives must be disabled on the blades.

  • The HBA BIOS must designate the FC card as the boot controller.

  • The FC card must be configured to initiate a primitive connection to the target boot LUN.

To set up BfS, see the SAN Config guide.

Thursday, 5 March 2009

Storage Best Practice



Excerpt from the Scalable Storage Performance for ESX 3.5 document.

Things that affect scalability

Throughput

  • Fibre Channel link speed
  • Number of outstanding I/O requests
  • Number of disk spindles
  • RAID type
  • SCSI reservations
  • Caching or prefetching algorithms

Latency

  • Queue depth or capacity at various levels
  • I/O request size
  • Disk properties such as rotational, seek, and access delays
  • SCSI reservations
  • Caching or prefetching algorithms.

Factors affecting scalability of ESX storage

Number of active commands

  • SCSI device drivers have a configurable parameter called LUN queue depth, which determines how many commands can be active against a given LUN at any one time.
  • QLogic Fibre Channel HBAs support up to 256 outstanding commands, Emulex up to 128.
  • The default value in ESX is 32 for both.
  • Any excess commands are queued in the VMkernel, which increases latency.
  • When VMs share a LUN, the total number of outstanding commands permitted from all VMs to that LUN is governed by Disk.SchedNumReqOutstanding. If this is exceeded, commands are queued in the VMkernel. The maximum recommended figure is 64. For LUNs with a single VM, this figure does not apply and the HBA queue depth is used.
  • Disk.SchedNumReqOutstanding should be the same value as the LUN queue depth.
  • n = Maximum Outstanding I/O Recommended for array per LUN (this figure should be obtained with help from the storage vendor)
  • a = Average active SCSI Commands per VM to shared VMFS
  • d = LUN queue depth on each ESX host
  • Max number VMs per ESX host on shared VMFS = d/a
  • Max number VMs on shared VMFS = n/a
  • To establish a, look at QSTATS in esxtop and add active commands to queued commands to get the total number of outstanding commands (a worked example of these formulas follows this list).
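
To make the formulas concrete, here's a worked example with made-up numbers, plus the matching change to Disk.SchedNumReqOutstanding (the Set-VMHostAdvancedConfiguration cmdlet and the host name are assumptions based on the older PowerCLI releases):

# Worked example: d = LUN queue depth (32), a = average active commands per VM (4),
# n = maximum outstanding I/O per LUN recommended by the array vendor (256)
$d = 32; $a = 4; $n = 256
"Max VMs per ESX host on the shared VMFS: {0}" -f [math]::Floor($d / $a)   # 8
"Max VMs in total on the shared VMFS: {0}" -f [math]::Floor($n / $a)       # 64

# Keep Disk.SchedNumReqOutstanding in line with the LUN queue depth
Set-VMHostAdvancedConfiguration -VMHost (Get-VMHost "esx01.example.local") -Name "Disk.SchedNumReqOutstanding" -Value 32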

SCSI Reservations

  • Reservations are created by creating/deleting virtual disks, extending a VMFS volume, and creating/deleting snapshots. All of these result in metadata updates to the file system using locks.
  • Recommendation is to minimise these activities during the working day.
  • Perform these tasks on the same ESX host that runs your I/O-intensive VMs. Because the SCSI reservations are issued by that host, it will see no reservation conflicts (it is the one generating the reservations), while I/O-intensive VMs on other hosts will be affected for the duration of the task.
  • Limit the use of snapshots. It is not recommended to run many virtual machines from multiple servers that are using virtual disk snapshots on the same VMFS. Snapshot files grow in 16MB chunks, so for VMDKs with lots of changes this file will grow quickly, and every 16MB chunk of growth generates a SCSI reservation.

Total available link bandwidth

  • Make sure you have enough FC links with enough capacity (1/2/4Gbps) to all VMFS volumes.
  • e.g. with each VM on a different VMFS volume, each using 45MBps, a 2Gbps link will be saturated by 4 VMs (see the quick check after this list).
  • Balancing the VMs across 2 links means 8 VMs will be able to perform.
  • It is recommended to balance VMFS LUNs across HBA links.
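
A quick sanity check of that example (nothing vendor-specific, just the arithmetic):

# 2Gbps is roughly 200-250MBps of usable bandwidth, so four VMs at ~45MBps each
# are already close to saturating one link
$vmCount = 4; $vmMBps = 45
"{0} VMs x {1}MBps = {2}MBps per 2Gbps link" -f $vmCount, $vmMBps, ($vmCount * $vmMBps)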

Spanned VMFS Volumes

  • A VMFS volume is spanned if it includes multiple LUNs.
  • Done by using extents.
  • Good for adding storage to an existing VMFS datastore on the fly.
  • Hard to calculate performance.
  • SCSI reservations only lock the first LUN in a spanned volume, therefore potentially improving performance.

Multipathing

For active/active arrays, it's important to find out whether they are asymmetric or symmetric. Some really good information here: http://frankdenneman.wordpress.com/2009/02/09/hp-continuous-access-and-the-use-of-lun-balancing-scripts/

For asymmetric active/active arrays, or ALUA arrays (such as the HP EVA 4x00, 6x00 & 8x00 - see here for more info), multipathing should be configured on each host so that the "owning" storage processor is on the primary preferred path for each LUN on all hosts. The non-owning processor is then only used as a backup, and no cross-talk between the SPs will happen, thus reducing the latency of requests.
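
Here's a rough PowerCLI sketch of pinning a LUN to a preferred path - the host name, the canonical name and the choice of path are all hypothetical, and in practice you'd pick the path that goes through the owning SP rather than simply the first one:

# Sketch: set a fixed policy and a preferred path for one LUN on one host
$esx = Get-VMHost -Name "esx01.example.local"
$lun = Get-ScsiLun -VmHost $esx -CanonicalName "vmhba1:0:12"
Set-ScsiLun -ScsiLun $lun -MultipathPolicy "Fixed"
$path = Get-ScsiLunPath -ScsiLun $lun | Select-Object -First 1   # substitute the path via the owning SP
Set-ScsiLunPath -ScsiLunPath $path -Preferred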

Here is another good article that helps http://virtualgeek.typepad.com/virtual_geek/2009/02/are-you-stuck-with-a-single-really-busy-array-port-when-using-esx-script-for-balancing-multipathing-in-esx-3x.html

Unload unnecessary drivers

  • Unload VMFS-2 driver if it's not required. Command vmkload_mod -u vmfs2
  • Unload NFS drivers if not required. Command vmkload_mod -u nfsclient

How many VMs per LUN?

Depends .... on

  • LUN queue depth limit, i.e. the sum of all active SCSI commands from all VMs on a single ESX host sharing the same LUN should not consistently exceed the LUN queue depth
  • Determine the maximum number of outstanding I/O commands to the shared LUN. The array vendor may be able to supply this value. A latency of 50 milliseconds is the tipping point; you don't really want it any higher.

Zoning

  • Single-initiator hard zoning is what I'd recommend. For a description of what this means, see here and here.

Friday, 20 February 2009

Installing ESXi onto Intel Optiplex 745



What a fuss!!


So I got the machine and plugged in a cheap 1TB Samsung HD103UJ disk along with the 80GB disk that came with it. I connected the 80GB into SATA 0, the 1TB into SATA 1, and the CD into SATA 4. The 745 comes with SATA 0, 1, 4 & 5.


So, first I tried installing ESX, but this failed as the storage controller wasn't recognised. I then put the ESXi installer onto a USB stick (here's how to do this) and tried installing, but again this failed for the same reason.


So, a bit of surfing later, I discovered that the OEM.TGZ in the root of the 3i installer, which contains the PCI IDs of all the devices ESXi supports (here's the list), doesn't have the IDs for the Intel controllers on my motherboard, which seem to be 8086:2820 (4 port SATA IDE Controller (ICH8)) and 8086:2825 (2 port SATA IDE Controller (ICH8)). To discover the PCI IDs of the devices, I booted the 3i installer, hit F1 once it had booted, logged in as root with no password, and entered lspci -v or lspci -p. This lists all the PCI IDs; fortunately the storage devices were at the end of the list, as I couldn't pipe the output to more for some reason.


Fortunately, the post here provides an updated OEM.TGZ that contains entries for my controllers in the simple.map file.


So, I just copied this OEM.TGZ onto the USB stick to replace the existing one.


I then booted my machine, but the install still failed. Looking at the output of lspci -p, it seemed that the ata_piix driver had been loaded for the 8086:2825 but not for the 8086:2820 - no idea why. So I unplugged the 80GB disk, plugged the 1TB disk into SATA 4, enabled this connection in the BIOS, rebooted and bingo, the install worked!!


So, now we are on to the next problem. The OEM.TGZ is only used when running the installer. As soon as I unplugged the USB stick and rebooted so that ESXi would boot, it failed as it didn't recognise the storage controller. So I need to either go into this install and replace the OEM.TGZ file that has been installed, or replace it in the installer files contained on the memory stick.


Now I've decided that it will be better to try and make a memory stick bootable with 3i, rather than trying to get the OEM.TGZ into the file system of the newly installed machine.

The instructions that Duncan gives (here) helped me get the standard 3i install onto the memory stick, but without the new OEM.TGZ for the storage controllers. So, after you have extracted the VMware-VMvisor-big-3.5.0_Update_3-123629.i386.dd file, start WinImage, select Disk -> Convert Virtual Hard Disk Image, select the .dd file, and create a .vhd image.

You then need to open this new file with WinImage and connect to the first partition, which should be labelled Hypervisor 1. All you then need to do is inject the new OEM.TGZ and overwrite the existing one.

Booting my machine with this memory stick now works like a dream, and the on-board NIC works too!!

This method of editing the .dd disk file should also work with updating the installer on the memory stick as described above, but I haven't tried this yet.