Configure SQL Server in Azure VMs for scale and performance



In this tutorial, we will learn how to configure SQL Server in Azure VMs for scale and performance, and how to optimize SQL Server performance in Microsoft Azure Virtual Machines.

Getting started

  • If you are creating a new SQL Server on Azure VM and are not migrating a current source system, create your new SQL Server VM based on your vendor requirements. The vendor requirements for a SQL Server VM are the same as what you would deploy on-premises. However, if you are creating a new SQL Server VM with a new application built for the cloud, you can easily size your SQL Server VM as your data and usage requirements evolve.
  • Secondly, the recommended minimum for a production OLTP environment is 4 vCore, 32 GB of memory, and a memory-to-vCore ratio of 8. For new environments, start with 4 vCore machines and scale to 8, 16, 32 vCores or more when your data and compute requirements change.
  • Further, use the SQL Server VM marketplace images with the storage configuration in the portal, as this makes it easier to properly create the storage pools necessary to get the size, IOPS, and throughput required for your workloads (a deployment sketch follows this list).
  • Next, SQL Server data warehouse and mission critical environments will often need to scale beyond the 8:1 memory-to-vCore ratio. For medium environments, you may want to choose a memory-to-vCore ratio of 16, and a ratio of 32 for larger data warehouse environments.
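
For example, here is a minimal Az PowerShell sketch of deploying a SQL Server marketplace image at the recommended 4 vCore / 32 GB starting point. The resource group, VM name, region, image URN, and size below are illustrative placeholders rather than values from the article, and the storage configuration would still be completed in the portal or with additional cmdlets.

# Minimal sketch, assuming the Az module is installed and you are signed in.
# Standard_E4ds_v4 provides 4 vCores and 32 GiB of memory (the 8:1 baseline ratio).
$params = @{
    ResourceGroupName = 'sql-rg'      # placeholder
    Name              = 'sqlvm01'     # placeholder
    Location          = 'eastus'
    Image             = 'MicrosoftSQLServer:sql2019-ws2022:standard-gen2:latest'
    Size              = 'Standard_E4ds_v4'
    Credential        = (Get-Credential -Message 'Local administrator for the VM')
}
New-AzVM @params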

VM size guidance

For transferring a current on-premises SQL Server database to SQL Server on Azure VMs, use the vCPU and RAM settings from your source server as a baseline. Bring your SQL Server core license to Azure to get the Azure Hybrid Benefit and save money on SQL Server licensing.
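
If the VM already exists and is registered with the SQL Server IaaS Agent extension, a hedged sketch of switching its license type to the Azure Hybrid Benefit with the Az.SqlVirtualMachine module might look like the following; the resource group and VM names are placeholders.

# Sketch: change the SQL Server VM resource to bring-your-own-license (AHUB).
Update-AzSqlVM -ResourceGroupName 'sql-rg' -Name 'sqlvm01' -LicenseType 'AHUB'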

Memory optimized

Memory-optimized virtual machine sizes are the primary target for SQL Server VMs, and Microsoft recommends them. These sizes offer higher memory-to-CPU ratios and medium-to-large cache options.

M and Mv2 series

  • The M-series offers vCore counts and memory for some of the largest SQL Server workloads.
  • Secondly, the Mv2-series has the highest vCore counts and memory and is recommended for mission critical and data warehouse workloads. Mv2-series instances are memory optimized VM sizes providing unparalleled computational performance to support large in-memory databases and workloads with a high memory-to-CPU ratio.

Edsv4-series

  • The Edsv4-series is designed for memory-intensive applications. Moreover, these VMs have a large local storage SSD capacity, strong local disk IOPS, up to 504 GiB of RAM, and improved compute compared to the previous Ev3/Esv3 sizes with Gen2 VMs.
  • Secondly, this VM series is ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage.
  • Lastly, the Edsv4-series virtual machines support premium storage, and premium storage caching.

DSv2-series 11-15

  • Firstly, the DSv2-series 11-15 has the same memory and disk configurations as the previous D-series. This series has a consistent memory-to-CPU ratio of 7 across all virtual machines.
  • Secondly, the DSv2-series 11-15 supports premium storage and premium storage caching, which is strongly recommended for optimal performance.

Storage optimized

The storage-optimized VM sizes are intended for specific use cases. These virtual machines are designed for high disk throughput and IO, and this series is well suited to large transactional databases, data warehousing, and big data scenarios.

Lsv2-series

  • The Lsv2-series features high throughput, low latency, and local NVMe storage. The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable data disks.
  • Secondly, these virtual machines are strong options for big data, data warehouse, reporting, and ETL workloads. The high throughput and IOPS of the local NVMe storage make them a good fit for processing files that will be loaded into your database and other scenarios where the source data can be recreated from the source system or other repositories such as Azure Blob storage or Azure Data Lake. Lsv2-series VMs can also burst their disk performance for up to 30 minutes at a time.
  • Lastly, these virtual machines range from 8 to 80 vCPUs with 8 GiB of memory per vCPU, and for every 8 vCPUs there is 1.92 TB of NVMe SSD. This means the largest VM of this series, the L80s_v2, has 80 vCPUs and 640 GiB of memory with 10 × 1.92 TB of NVMe storage.

Disks guidance

There are three main disk types on Azure virtual machines:

Operating system disk

An operating system disk, labeled as the C drive, is a VHD that you can boot and mount as a running version of an operating system. The default caching policy on the operating system disk is Read/Write. For performance-sensitive applications, we recommend using data disks instead of the operating system disk.

Temporary disk

The D drive, which is used for temporary storage, is not persisted to Azure Blob storage. Do not use the D: drive to store user database files or transaction log files. For mission-critical SQL Server workloads, place TempDB on the local SSD D: drive (after choosing an appropriately sized VM). If you use the Azure portal or Azure quickstart templates to build the VM and place TempDB on the local disk, you don't need to do anything further; in all other cases, follow the steps in the blog article Using SSDs to store TempDB to avoid failures after restarts.
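
As a hedged illustration of the TempDB placement described above, the following sketch repoints the TempDB files at the local D: drive using the SqlServer PowerShell module; the folder path and file names are assumptions, and the D:\SQLTEMP folder must be recreated at startup as described in the blog article.

# Sketch: move TempDB to the local SSD (D:). Takes effect after the SQL Server
# service restarts; ensure D:\SQLTEMP is recreated automatically at startup.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLTEMP\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLTEMP\templog.ldf');
"@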

Data disks

  • Firstly, use premium SSD disks for data and log files: If you are not using disk striping, use two premium SSD disks where one disk contains the log file and the other contains the data.
  • Secondly, Disk striping: For more throughput, you can add additional data disks and use disk striping. However, to determine the number of data disks, you need to analyze the number of IOPS and bandwidth required for your log file(s). Use the following guidelines:
  • For Windows 8/Windows Server 2012 or later, use Storage Spaces with the following guidelines:
    • Set the interleave (stripe size) to 64 KB (65,536 bytes) for OLTP workloads and 256 KB (262,144 bytes) for data warehousing workloads to avoid performance impact due to partition misalignment. This must be set with PowerShell.
    • Set column count = number of physical disks. Use PowerShell when configuring more than 8 disks (not Server Manager UI).
    • For example, the following PowerShell creates a new storage pool with the interleave size set to 64 KB and the number of columns equal to the number of physical disks in the storage pool (a sample sketch follows this list):
  • Thirdly, Caching policy: For optimal performance, use read-only host caching on the disks that host data files and TempDB, and no caching (None) on the disks that host log files.
  • After that, NTFS allocation unit size: When formatting the data disk, it is recommended that you use a 64-KB allocation unit size for data and log files as well as TempDB.
  • Lastly, Disk management best practices: When removing a data disk or changing its cache type, stop the SQL Server service during the change.
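
The portal screenshot from the original article is not reproduced here, but a sketch of an equivalent Storage Spaces script follows; the pool, virtual disk, and volume labels are placeholders, and the 64 KB interleave and allocation unit size match the OLTP guidance above.

# Sketch: pool all poolable data disks, stripe them with a 64 KB interleave and one
# column per physical disk, then format the volume with a 64 KB NTFS allocation unit.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName 'SQLDataPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks |
  New-VirtualDisk -FriendlyName 'SQLDataDisk' -ResiliencySettingName Simple `
    -Interleave 65536 -NumberOfColumns $disks.Count -UseMaximumSize |
  Initialize-Disk -PartitionStyle GPT -PassThru |
  New-Partition -AssignDriveLetter -UseMaximumSize |
  Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SQLData' `
    -AllocationUnitSize 65536 -Confirm:$false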

I/O guidance

  • The best results with premium SSDs are achieved when you parallelize your application and requests. Premium SSDs are designed for scenarios where the IO queue depth is greater than 1, so you will see little or no performance gain for single-threaded serial requests (even if they are storage intensive).
  • Secondly, consider using database page compression as it can help improve performance of I/O intensive workloads. However, the data compression might increase the CPU consumption on the database server.
  • Thirdly, be aware that autogrow is considered to be merely a contingency for unexpected growth. Do not manage your data and log growth on a day-to-day basis with autogrow. If autogrow is used, pre-grow the file using the Size switch.
  • Next, make sure autoshrink is disabled to avoid unnecessary overhead that can negatively affect performance. Also, move all databases to data disks, including system databases (a sketch of a few of these settings follows this list).
  • Move SQL Server error log and trace file directories to data disks. This can be done in SQL Server Configuration Manager by right-clicking your SQL Server instance and selecting properties.
  • Then, set up default backup and database file locations. Use the recommendations in this article, and make the changes in the Server properties window.
  • Next, enable locked pages to reduce IO and any paging activities. For more information, see Enable the Lock Pages in Memory Option (Windows).
  • Lastly, consider compressing any data files when transferring in/out of Azure.
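
As a brief, hedged sketch of a couple of the items above, the following uses the SqlServer module to disable autoshrink and apply page compression to a single table; the database and table names are placeholders, and CPU headroom should be verified before compressing more broadly.

# Sketch: disable autoshrink and page-compress one illustrative table.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'SalesDb' -Query @"
ALTER DATABASE SalesDb SET AUTO_SHRINK OFF;
ALTER TABLE dbo.SalesOrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);
"@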

Reference: Microsoft Documentation

