Monday, August 24, 2015

Summary of the SQL Server on VMware Best Practices Guide

The following is a summary of the SQL Server on VMware Best Practices Guide:
  • VMware Capacity Planner™ collects the essential performance metrics (processor, disk, memory, and network statistics, as well as SQL Server-specific counters) from the existing database servers in the environment, and analyzes that data to recommend server consolidation opportunities. This exercise helps you understand what resources your current physical SQL Server instances actually use, makes it easier to create a short list of SQL Server instances to virtualize, and helps you determine the order in which to virtualize them.
  • We strongly recommend deploying virtualized SQL Server instances on VMware's enterprise-class hypervisor, vSphere, even for development and test environments.

CPU

  • When maximizing performance is the primary goal for these workloads, provide CPU resources by maintaining a 1:1 ratio of physical cores to vCPUs. For example, if the workload runs on a machine with 4 physical CPU cores, allocate 4 vCPUs to the virtual machine on the vSphere host.
  • For small SQL Server virtual machines, allocate a number of vCPUs equal to or less than the number of cores in a single physical NUMA node. For wide SQL Server virtual machines, size the vCPU count to align with physical NUMA boundaries, and configure vNUMA so that SQL Server's NUMA optimizations can manage memory locality (a query to check the topology the guest sees follows this list).
  • Hyperthreading Sharing, on the Properties tab of a virtual machine, controls whether the virtual machine may be scheduled to share a physical core when hyperthreading is enabled on the host. Choose one of the following settings:
    • Any – This is the default setting. The vCPUs of this virtual machine can freely share cores with other virtual CPUs of this or other virtual machines.
    • None – The vCPUs of this virtual machine have exclusive use of a physical core whenever they are scheduled to it. Selecting None in effect disables hyperthreading for your virtual machine; consider it when maximizing performance is the primary goal for the workload.
    • Internal – This option is similar to None. Virtual CPUs from this virtual machine cannot share cores with virtual CPUs from other virtual machines, but they can share cores with other virtual CPUs from the same virtual machine.
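
To confirm what the guest actually sees after sizing and vNUMA configuration, the topology SQL Server reports can be checked from inside the virtual machine. This is a minimal sketch assuming SQL Server 2008 or later; the DMVs used are standard:

    -- CPU topology as SQL Server sees it inside the guest
    SELECT cpu_count,            -- logical processors visible to SQL Server
           hyperthread_ratio,    -- logical processors per physical package, as exposed to the guest
           scheduler_count
    FROM   sys.dm_os_sys_info;

    -- One ONLINE row per NUMA node that SQL Server recognizes
    SELECT node_id,
           memory_node_id,
           online_scheduler_count
    FROM   sys.dm_os_nodes
    WHERE  node_state_desc = 'ONLINE';

A small virtual machine sized within one physical NUMA node should show a single ONLINE node; a wide virtual machine aligned to physical NUMA boundaries should show one ONLINE node per underlying NUMA node.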

Memory

  • To avoid the latency of remote memory access, size a SQL Server virtual machine's memory so it is less than the amount of memory available per NUMA node.
  • Large pages can significantly improve the performance of this workload compared to running it with small pages. If the virtual machine is dedicated to SQL Server (or all of its major services can benefit from large pages), then ❶ enable large pages in Windows and ❷ enable large pages in SQL Server (a verification query follows this list). SQL Server can use large pages when the following conditions are met:
    • You are using SQL Server Enterprise Edition.
    • The computer has 8GB or more of physical RAM.
    • The Lock Pages in Memory privilege is granted to the service account. I personally see Lock Pages in Memory as a dangerous option to play with; it should be granted only after very careful consideration.
  • When calculating the amount of memory to provision for the virtual machine, use the following formulas:
    VM Memory = SQL Max Server Memory + ThreadStack + OS Mem + VM Overhead
    ThreadStack = SQL Max Worker Threads * ThreadStackSize
    The value for 'max worker threads' can be found in the max_workers_count column of sys.dm_os_sys_info. ThreadStackSize is 1 MB on x86, 2 MB on x64, and 4 MB on IA64. OS Mem: allow 1 GB for each 4 GB of RAM installed from 4-16 GB, and then 1 GB for every 8 GB of RAM installed above 16 GB. A worked sizing sketch follows this list.
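
As a sketch of both the large-page check and the sizing arithmetic, the following assumes SQL Server 2008 or later on an x64 guest, with a hypothetical 64 GB virtual machine and max server memory set to 52 GB. The OS memory allowance (3 GB for the 4-16 GB band plus 6 GB for the 48 GB above 16 GB) is one reading of the rule above, and the 1 GB VM overhead is only a placeholder:

    -- ❶/❷ verification: non-zero values mean locked pages (and, if enabled, large pages) are in use
    SELECT locked_page_allocations_kb,
           large_page_allocations_kb
    FROM   sys.dm_os_process_memory;

    -- Sizing sketch for a hypothetical 64 GB x64 virtual machine
    DECLARE @ThreadStackSizeMB int = 2;      -- 1 MB on x86, 2 MB on x64, 4 MB on IA64
    DECLARE @SqlMaxServerMemGB int = 52;     -- hypothetical 'max server memory' setting
    DECLARE @OsMemGB           int = 3 + 6;  -- 12 GB in the 4-16 GB band -> 3 GB, plus 48 GB above 16 GB -> 6 GB
    DECLARE @VmOverheadGB      int = 1;      -- placeholder; use the overhead figure for the VM's actual configuration

    DECLARE @ThreadStackGB decimal(9,2);
    SELECT  @ThreadStackGB = max_workers_count * @ThreadStackSizeMB / 1024.0
    FROM    sys.dm_os_sys_info;              -- max_workers_count = configured 'max worker threads'

    SELECT  @SqlMaxServerMemGB + @ThreadStackGB + @OsMemGB + @VmOverheadGB AS vm_memory_gb_estimate;

For a 4 vCPU x64 instance (512 worker threads by default), ThreadStack works out to roughly 1 GB, giving an estimate of about 63 GB for this hypothetical configuration.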

Storage

  • A single Iometer thread running in a virtual machine can saturate the bandwidth of the respective networks for all four storage protocols (NFS, SW iSCSI, HW iSCSI, and Fibre Channel), for both read and write. Fibre Channel throughput performance is higher because of the higher bandwidth of the Fibre Channel link. For the IP-based protocols, there is no significant throughput difference for most block sizes.
  • It is preferable to deploy virtual machine files on shared storage to take advantage of vSphere vMotion, vSphere HA, and vSphere DRS. This is considered a best practice for mission-critical SQL Server deployments that are often installed on third-party, shared-storage management solutions. VMware recommends that you set up a minimum of four paths from a vSphere host to a storage array. This means that each host requires at least two HBA ports.
  • Create VMFS partitions from within VMware vCenter; they are aligned by default. For heavy I/O workloads, align the guest data disks using diskpart; with Windows Server 2008 and later, new disks are automatically aligned to a 1 MB boundary.
  • If you are deploying a heavy workload SQL Server, VMware recommends that you consider using eagerzeroedthick disks for SQL Server data, transaction log, and tempdb files. An eagerzeroedthick disk can be configured by selecting the Thick Provision Eager Zeroed option in vSphere 5.x.
  • Place SQL Server binary, log, and data files into separate VMDKs (virtual machine disks). In addition to the performance advantage, separating the SQL Server binaries from the data and log files also provides better flexibility for backup: the OS/SQL Server binary VMDK can be backed up with snapshot-based backups, such as VMware Data Recovery, while the SQL Server data and log files can be backed up through traditional database backup solutions. A file-placement sketch follows this list.
  • Maintain 1:1 mapping between VMDKs and LUNs. When this is not possible, group VMDKs and SQL Server files with similar I/O characteristics on common LUNs.
  • Use multiple vSCSI adapters. Placing the SQL Server binaries, data, and log files on separate vSCSI adapters optimizes I/O by distributing the load across multiple target devices.
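
As a sketch of the file-placement side of these storage recommendations, the database name and the D: and L: drive letters below are hypothetical, each drive assumed to be a separate VMDK attached through its own vSCSI adapter:

    -- Hypothetical layout: D: = data VMDK, L: = transaction log VMDK
    CREATE DATABASE SalesDB
    ON PRIMARY
    (
        NAME = N'SalesDB_data',
        FILENAME = N'D:\SQLData\SalesDB_data.mdf',
        SIZE = 10GB, FILEGROWTH = 1GB
    )
    LOG ON
    (
        NAME = N'SalesDB_log',
        FILENAME = N'L:\SQLLog\SalesDB_log.ldf',
        SIZE = 4GB, FILEGROWTH = 1GB
    );

tempdb files can be relocated to their own VMDK in the same spirit with ALTER DATABASE tempdb MODIFY FILE, which takes effect after the instance restarts.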

Networking

  • Use two physical NICs per vSwitch, and if possible, uplink the physical NICs to separate physical switches.
  • Use separate physical NICs for management traffic (vSphere vMotion, FT logging, VMkernel) and virtual machine traffic.
  • If using iSCSI, the network adapters should be dedicated to either network communication or iSCSI, but not both.
