First of all, before creating a database, you should estimate how big it will be. To arrive at a reasonable figure, estimate the size of each table individually and then add the values together. See this link for more information: Estimating the Size of a Table
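For example, for a simple table the BOL topic boils the calculation down to a bit of arithmetic over the expected number of rows and the average row size. A minimal T-SQL sketch of that arithmetic, using purely hypothetical figures (500,000 rows of about 100 bytes each):

-- Simplified estimate following the BOL approach (hypothetical figures).
DECLARE @Num_Rows int, @Row_Size int, @Rows_Per_Page int, @Num_Pages int
SET @Num_Rows = 500000                          -- expected number of rows
SET @Row_Size = 100                             -- average row size in bytes, including row overhead
SET @Rows_Per_Page = 8096 / (@Row_Size + 2)     -- rows that fit on one 8Kb data page
SET @Num_Pages = CEILING(1.0 * @Num_Rows / @Rows_Per_Page)
SELECT @Num_Pages * 8192 / 1048576.0 AS EstimatedTableSizeMB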
The general rule of thumb for setting the transaction log size is to set it to 20-25 percent of the database size. The smaller your database, the greater the relative size of the transaction log should be, and vice versa. For example, if the estimated database size is 10Mb, you can set the transaction log to 4-5Mb, but if the estimated database size is over 500Mb, 50Mb may be enough for the transaction log.
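For example, a CREATE DATABASE statement for the 500Mb case might look like the sketch below; the database name, logical file names and paths are only placeholders:

CREATE DATABASE Sales
ON PRIMARY
  (NAME = Sales_Data,
   FILENAME = 'C:\MSSQL\Data\Sales_Data.mdf',
   SIZE = 500MB)
LOG ON
  (NAME = Sales_Log,
   FILENAME = 'C:\MSSQL\Data\Sales_Log.ldf',
   SIZE = 50MB)          -- transaction log sized per the rule of thumb above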
Leave this feature on to let SQL Server automatically increase allocated resources when necessary, without DBA intervention. The Autogrow feature is especially useful when there is no DBA in your firm, or when your DBA doesn't have much experience.
Setting a database to grow automatically results in some performance degradation; therefore you should set a reasonable size for the Autogrow increment to avoid automatic growth occurring too often. Try to set the initial size of the database, and the size of the Autogrow increment, so that automatic growth occurs about once per week or less.
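For example, if the hypothetical Sales database from the earlier sketch grows by roughly 50Mb per week, the increment could be set like this:

ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 50MB)   -- one 50Mb step instead of many small ones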
Autoshrinking also results in some performance degradation; therefore you should shrink the database manually, or create a scheduled task to shrink the database periodically during off-peak times, rather than turning the Autoshrink feature on.
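A sketch of the manual alternative for the hypothetical Sales database (the ALTER DATABASE ... SET syntax requires SQL Server 2000 or later; on 7.0 the same option is changed with sp_dboption):

ALTER DATABASE Sales SET AUTO_SHRINK OFF   -- make sure automatic shrinking is off
DBCC SHRINKDATABASE (Sales, 10)            -- run manually or from a scheduled job during off-peak time,
                                           -- leaving about 10 percent free space in the database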
Specify the maximum size to which the files can grow, to prevent your disk drives from running out of space.
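Continuing the same hypothetical example, a cap can be put on the data file alongside the growth increment:

ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, MAXSIZE = 2000MB)    -- never grow beyond 2Gb regardless of the increment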
In most cases, it's a good decision to store and manage system and user objects separately from one another, so that user objects will not compete with system objects for space in the primary filegroup. Usually a user-defined filegroup is not created for small databases, for example if the database is less than 100Mb.
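For a larger database, creating a user-defined filegroup and placing a user table on it might look like the following sketch; the filegroup, file and table names are placeholders:

ALTER DATABASE Sales ADD FILEGROUP UserData
GO
ALTER DATABASE Sales ADD FILE
  (NAME = Sales_Data2,
   FILENAME = 'D:\MSSQL\Data\Sales_Data2.ndf',
   SIZE = 200MB)
TO FILEGROUP UserData
GO
CREATE TABLE dbo.Orders            -- user objects go to the user-defined filegroup,
  (OrderID   int      NOT NULL,    -- not to PRIMARY
   OrderDate datetime NOT NULL)
ON UserData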
LOAD TABLE and DUMP TABLE are no longer supported in SQL Server 7.0 (and higher), but you can place a table in its own filegroup and back up and restore only that table. This lets you group user objects with similar maintenance requirements into the same filegroup.
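A sketch of such a filegroup-level backup for the hypothetical UserData filegroup (the backup path is a placeholder; bear in mind that restoring a single filegroup also requires the subsequent transaction log backups):

BACKUP DATABASE Sales
  FILEGROUP = 'UserData'
  TO DISK = 'H:\Backup\Sales_UserData.bak'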
Spreading the files of a filegroup across separate physical disk arrays will improve performance, because when a table is accessed sequentially, a separate thread is created for each file on each disk array in order to read the table's data in parallel.
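For example, assuming E: and F: are separate physical disk arrays, the hypothetical UserData filegroup can be spread across both:

ALTER DATABASE Sales ADD FILE
  (NAME = Sales_Data3, FILENAME = 'E:\MSSQL\Data\Sales_Data3.ndf', SIZE = 200MB),
  (NAME = Sales_Data4, FILENAME = 'F:\MSSQL\Data\Sales_Data4.ndf', SIZE = 200MB)
TO FILEGROUP UserData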
Leaving the Autogrow feature on for the data and for the log files can cause fragmentation of those files if there are many files on the same physical disk array. In most cases, it's enough to have 1-2 database files on the same physical disk.
Placing tables and their indexes in separate filegroups on different physical disks will improve performance, because separate threads will be created to access the tables and the indexes.
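A sketch of this idea, putting a nonclustered index for the hypothetical Orders table into its own filegroup on yet another disk (all names and paths are placeholders):

ALTER DATABASE Sales ADD FILEGROUP IndexData
GO
ALTER DATABASE Sales ADD FILE
  (NAME = Sales_Index1,
   FILENAME = 'G:\MSSQL\Data\Sales_Index1.ndf',
   SIZE = 100MB)
TO FILEGROUP IndexData
GO
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
  ON dbo.Orders (OrderDate)
  ON IndexData                     -- the index lives apart from the table's data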
You can use the CREATE TABLE statement with the TEXTIMAGE_ON keyword to place text/image columns in a different filegroup. See the SQL Server BOL for details.
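For example, assuming a TextData filegroup has already been added in the same way as UserData above:

CREATE TABLE dbo.Documents
  (DocID   int  NOT NULL,
   DocBody text NULL)
ON UserData                        -- regular table data stays on UserData
TEXTIMAGE_ON TextData              -- text/image data goes to its own filegroup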
Because logging is more write-intensive than regular data access, it's important that the disk arrays containing the SQL Server log files have sufficient disk I/O performance.
Marking a filegroup read-only not only increases read performance; it also prevents any data changes and allows you to control permissions to this data.
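Assuming the static data lives in the hypothetical UserData filegroup, marking it read-only is a one-line change (SQL Server 7.0/2000 spell the option READONLY, newer versions READ_ONLY, and exclusive access to the database may be required while it is changed):

ALTER DATABASE Sales MODIFY FILEGROUP UserData READONLY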
Consider reducing the number of files and filegroups in your databases if the Disk Queue Length on your server averages above 3, and continue monitoring once you have made your changes to ensure that your disk I/O remains optimal over the long term.