The fundamental unit of data storage in SQL Server is the page: the database data files are logically divided into pages, numbered contiguously from 0 to n. All I/O operations are performed and managed at the page level, so reads and writes happen page by page as data is manipulated.
Going back to basics on pages, BOL highlights: "In SQL Server, the page size is 8 KB. This means SQL Server databases have 128 pages per megabyte. Each page begins with a 96-byte header that is used to store system information about the page. This information includes the page number, page type, the amount of free space on the page, and the allocation unit ID of the object that owns the page." I recommend reading through the topics listed under file and filegroup architecture, along with the table and index structure architecture.
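If you want to see that 96-byte header for yourself, the undocumented (but widely known) DBCC PAGE command will dump it. A minimal sketch follows; the database name and page number are placeholders you would swap for your own, and since DBCC PAGE is unsupported, treat it as a learning tool rather than a production habit:

```sql
-- Redirect DBCC output to the client session so DBCC PAGE prints results.
DBCC TRACEON (3604);
GO
-- Arguments: database name, file id, page number, print option.
-- Print option 0 dumps just the 96-byte page header described above.
-- 'YourDatabase' and page 100 are arbitrary examples.
DBCC PAGE ('YourDatabase', 1, 100, 0);
GO
```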
Coming to the subject line, a page split occurs when a portion of the rows (roughly half) in a full data or index page must be moved to a newly allocated page to make room for a new row in a clustered table or index. The general guideline for this counter is fewer than 20 page splits per 100 batch requests; a consistently higher value can impair performance and fragment the table's data storage. However, I have noticed that the Page Splits/sec counter also counts new page allocations when a page is added to the end of the table, so not every increment represents a "bad" mid-page split.
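To put a number on that guideline, here is a rough sketch of deriving the split-to-batch ratio from sys.dm_os_performance_counters. Note that both counters are cumulative since the last restart, so this single snapshot only gives a lifetime average; for a live rate you would sample twice and take the difference:

```sql
-- Lifetime page splits per 100 batch requests (cumulative since restart).
SELECT
    (SELECT cntr_value FROM sys.dm_os_performance_counters
     WHERE counter_name = 'Page Splits/sec') * 100.0 /
    NULLIF((SELECT cntr_value FROM sys.dm_os_performance_counters
            WHERE counter_name = 'Batch Requests/sec'), 0)
        AS page_splits_per_100_batches;
```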
Splitting a page is an intensive operation (particularly when the disks are not optimized); when such DML statements are initiated, the page split generates excessive disk I/O until the split operation is completed.
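To see a split in action, here is a small hypothetical repro, assuming a scratch database where a throwaway table is acceptable: fill one leaf page of a clustered index, then insert a key into the middle so roughly half the rows must move to a newly allocated page:

```sql
-- Four ~2 KB rows fill a single 8 KB leaf page of the clustered index.
CREATE TABLE dbo.SplitDemo
(
    id  INT NOT NULL PRIMARY KEY CLUSTERED,   -- clustered key
    pad CHAR(2000) NOT NULL DEFAULT ('x')     -- padding to fill the page
);

INSERT dbo.SplitDemo (id) VALUES (10), (20), (30), (40);  -- page is now full

-- Leaf-level page count before the split (expect 1):
SELECT page_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.SplitDemo'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;

INSERT dbo.SplitDemo (id) VALUES (25);  -- lands mid-page: forces a split

-- Leaf-level page count after the split (expect 2):
SELECT page_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.SplitDemo'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;

DROP TABLE dbo.SplitDemo;  -- clean up
```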
Having said that, if slow performance is observed in your data platform, you should watch the Page Splits/sec counter in particular: a value greater than 20 per 100 batch requests indicates that index pages are overflowing. Table and index design also affects this, so the value needs to be monitored regularly. Heavily fragmented indexes will result in high Page Splits/sec too, which means you should have a regular schedule to REORGANIZE or REBUILD the frequently used tables and indexes. You also need to consider the FILLFACTOR set on the indexes; the best recommendation (from my experience) is to test a reduced fill factor value, which cuts down page splits by increasing the amount of free space left on each page.
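As an illustration of such a maintenance routine, the sketch below checks leaf-level fragmentation and then reorganizes or rebuilds. The 10%/30% thresholds and FILLFACTOR = 90 are common starting points rather than fixed rules, and the index name is illustrative, so test against your own workload:

```sql
-- List indexes with noticeable leaf-level fragmentation in the current database.
SELECT  OBJECT_NAME(ips.object_id)       AS table_name,
        i.name                           AS index_name,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 10
        AND ips.index_id > 0              -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- For an index above ~30% fragmentation, rebuild with a lower fill factor
-- to leave free space on each page (index name is illustrative):
ALTER INDEX IX_SalesOrder_OrderDate ON dbo.SalesOrder
    REBUILD WITH (FILLFACTOR = 90);

-- For ~10-30% fragmentation, a lighter-weight reorganize usually suffices:
ALTER INDEX IX_SalesOrder_OrderDate ON dbo.SalesOrder REORGANIZE;
```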
Then coming to Page Reads/sec: this is the number of physical database page reads issued per second, reported as a total across all databases on the instance. A value of 80 to 90 per second is normal; anything consistently above that indicates an indexing or memory constraint. Ideally the value should stay below 90, because physical I/O is expensive during query execution and can cause application timeouts as well. I also recommend looking at the Page Lookups/sec counter, which measures the number of requests to find a page in the buffer pool. A high ratio of page lookups to batch requests indicates that query plans are touching too many pages in the buffer pool, which in turn points to inefficient plans on that SQL instance. The best option is to identify the queries with the highest amount of logical I/O, using the blogs referred to here: Robert Pearl's TSQL Tuesday #18 post on using a CTE to analyze disk I/O, and SQLServerIO's TSQL Tuesday #15 post on gathering virtual file statistics using T-SQL.
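Before jumping to those posts, a quick way to surface the heaviest logical-I/O statements is sys.dm_exec_query_stats. A simple sketch follows, keeping in mind that these statistics reset on restart or when a plan is evicted from the cache:

```sql
-- Top 10 cached statements by cumulative logical reads.
SELECT TOP (10)
        qs.total_logical_reads,
        qs.execution_count,
        qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```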
I also recommend looking at SQL Server Management Studio's dashboard reports: see "Get information about SQL Server 2012 Management and Performance Dashboard Reports" and the related SSMS dashboard posts, not to mention the SQL Server 2008 R2 Administration Cookbook on related topics.
Republished from SQLServer-QA.net.