SQL Server DBA Best Practices: Disk defragmentation methods

Feb 17 2011 9:27AM by Satya Jayanty (@sqlmaster)   

Fragmentation: have you ever come across it when dealing with databases?

Speaking purely from a DBA perspective, I would say fragmentation is the database's constant companion whenever you have to deal with performance. Using DBCC statements and other methods you can deal with fragmentation at the database level, but what about fragmentation at the physical level?

So what is the best way forward to fix data file fragmentation in SQL Server, say when many SHRINK operations have been performed at the database level? The generic answer would be to use the defragmentation tools built into Windows or other operating-system based tools, but the issue is that they treat the SQL Server data file as a whole and will not defragment it granularly. You may be aware that file fragmentation of a SQL Server data file occurs even when you perform a backup and restore of that database, particularly when it has multiple files for data and indexes.
To address this issue you usually cannot take downtime on the SQL Server services to let the operating system defragmentation tool defragment the physical file, and theoretically speaking there isn't much external fragmentation anyway. It is more helpful to regularly reindex your data to reduce internal fragmentation as much as possible; in my tests, that also reduces the outer layer of fragmentation of the physical file and maximizes the effectiveness of read-aheads and the amount of data that can be buffered.
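As a minimal sketch (the table name dbo.YourTable and the thresholds are illustrative rules of thumb, not values from this post), a regular reindexing job can first measure internal fragmentation and then reorganize or rebuild accordingly:

-- Measure internal (logical) fragmentation for one table's indexes
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Reorganize at moderate fragmentation, rebuild when it is heavy
ALTER INDEX ALL ON dbo.YourTable REORGANIZE;   -- e.g. roughly 5-30 percent
-- ALTER INDEX ALL ON dbo.YourTable REBUILD;   -- e.g. above 30 percent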
 
Efficient disk I/O at the resource level comes from making sure the disk alignment and RAID configuration are correct, scaling your disk arrays to properly handle the I/O load, and maintaining a proper layout of the log, data, TempDB, and backup files. As usual, the best practice is not to leave AUTO-SHRINK enabled, and not to rely on the default AUTO-GROW settings for databases with volatile insert/update/delete workloads. There is a reason for not keeping the default auto-grow setting (and for keeping auto-shrink off): it reduces the number of volume-level file fragments created by repeated small growth events.
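As a hedged sketch (the database and logical file names are illustrative), turning auto-shrink off and switching to a fixed, reasonably large growth increment looks like this:

-- Keep AUTO_SHRINK off so space is not repeatedly released and re-grown
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;

-- Use a fixed growth increment instead of the default percentage,
-- so each growth event creates fewer, larger volume-level fragments
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, FILEGROWTH = 512MB);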
 
For a newly created database this may not matter much, as the data and log files are initialized with empty space. Data and log files are first initialized by filling them with zeros when you: create a database; add data or log files to an existing database; increase the size of an existing file (including autogrow operations); or restore a database or filegroup.
File initialization causes these operations to take longer; in return, the operating system does not have to fill the files with zeros when data is written to them for the first time. From SQL Server 2005 onwards, data files can be initialized instantly, which allows fast execution of these file operations. Instant file initialization reclaims used disk space without filling that space with zeros; instead, the disk content is overwritten as new data is written to the files. This does not apply to log files, which are always zero-initialized. Bear in mind this advantage is only available on Windows Server 2003 and later.
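Instant file initialization is enabled by granting the SQL Server service account the Windows "Perform volume maintenance tasks" privilege (SE_MANAGE_VOLUME_NAME). As a hedged way to verify it (a common community diagnostic rather than an official check; the test database name is illustrative), trace flags 3004 and 3605 write zero-initialization messages to the error log, so seeing them only for the log file means the data file was initialized instantly:

-- Log zero-initialization activity (3004) to the SQL Server error log (3605)
DBCC TRACEON (3004, 3605, -1);

-- Create a throwaway database; with instant file initialization working,
-- the error log shows zeroing messages for the log file only
CREATE DATABASE IFI_Test;

-- Inspect the current error log for the zeroing messages
EXEC xp_readerrorlog;

-- Clean up
DROP DATABASE IFI_Test;
DBCC TRACEOFF (3004, 3605, -1);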
 
So when you use DBCC statements such as DBCC SHOWCONTIG, SQL Server will not report this kind of fragmentation: it can show 0 percent fragmentation even when the on-disk files are horribly fragmented. The operating-system level layout is not visible to SQL Server; it is the operating system's job to manage how the bits are physically laid out on disk. In this case, performing an OS-level defragmentation could help performance by making the files more contiguous on disk.
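As a quick sketch (the table name is illustrative), DBCC SHOWCONTIG reports only the logical and extent fragmentation inside the data file, which is why its output can look clean while the file itself is scattered across the disk:

-- Reports scan and extent fragmentation within the data file,
-- not the physical layout of the file on disk
DBCC SHOWCONTIG ('dbo.YourTable') WITH TABLERESULTS, ALL_INDEXES;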
However, defragmenting at the OS level doesn't always have the effect you might expect, because SQL Server pre-allocates space when you create a file. It is best to schedule regular OS-level defragmentation using tools such as Diskeeper (see http://www.diskeeper.com/defrag/dk-boost-sql-server.asp), and to run those jobs during quiet periods on the SQL Server. Because file defragmentation is integrated into the operating system, defragmenting open files is perfectly safe; however, the I/O activity it generates must be considered if continuous high performance is mandatory. To close the last piece of this blog, I would suggest creating the data file on contiguous space that has already been defragmented, sizing it up front for the next 3 or 5 years of growth of your database; this is specific to OLTP databases.
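As a final sketch (the database name, paths, and sizes are illustrative, assuming growth has been estimated in advance), pre-sizing the files at creation time keeps them contiguous and makes auto-grow a safety net rather than the primary sizing mechanism:

-- Pre-allocate data and log files large enough for several years of growth
CREATE DATABASE SalesDB
ON PRIMARY
(
    NAME = SalesDB_Data,
    FILENAME = 'D:\SQLData\SalesDB_Data.mdf',
    SIZE = 50GB,
    FILEGROWTH = 1GB
)
LOG ON
(
    NAME = SalesDB_Log,
    FILENAME = 'E:\SQLLogs\SalesDB_Log.ldf',
    SIZE = 10GB,
    FILEGROWTH = 512MB
);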
Over the coming days I would like to continue this DBA best practices series; your comments will help the community and aspiring DBAs too.

Tags: best practices, dba, sql server, #SQL Server, tsql, #TSQL, BRH, beginners, data, disk, fragmentation

