Monday, August 22, 2011

New articles on CodeProject.com: Database performance optimization (part 1 and part 2)

A couple of weeks ago I posted the first two parts of a series of articles about database performance optimization. The first part is about indexing strategies and how to choose appropriate indexes; the second part discusses index maintenance. You can read more about indexes here:

Friday, August 19, 2011

SQL Server - Fragmentation

While indexes can speed up query execution several fold, there is overhead associated with them. They consume additional disk space and require additional time to update themselves whenever data is updated, deleted, or appended in a table. Any data modification operation (an INSERT, UPDATE, or DELETE statement) can also cause index fragmentation, leaving the information in the index scattered across the database. Fragmented index data can cause SQL Server to perform unnecessary data reads and to switch across different pages, so query performance against a heavily fragmented table can be very poor. In this article I am going to write about fragmentation and the queries that determine the level of fragmentation. There are three types of fragmentation to consider: logical (external) fragmentation, internal fragmentation, and extent fragmentation.

Logical fragmentation
Logical fragmentation occurs when an index leaf page is out of logical order; in other words, the logical ordering of the index does not match its physical ordering. This causes SQL Server to perform extra work to return ordered results. For the most part, external fragmentation isn't too big of a deal for specific searches that return very few records or for queries whose result sets do not need to be ordered.

SQL Server fragmentation (internal)
This type of fragmentation occurs when there is too much free space in the index pages. Typically, some free space is desirable, especially when the index is created or rebuilt: the Fill Factor setting can be specified at creation or rebuild time to indicate, as a percentage, how full the index pages should be. If the index pages are too fragmented, queries will take longer (because of the extra reads required to find the data) and your indexes will grow larger than necessary. Conversely, if no free space is available in the index data pages, data changes (primarily inserts) will cause page splits (discussed in the post below), which also require additional system resources to perform. Internal fragmentation usually occurs because of:
  • random deletes, which leave empty space on data pages
  • page splits caused by inserts or updates
  • shrinking rows, such as when updating a large value to a smaller value
  • using a fill factor of less than 100
  • using large row sizes
Extent fragmentation
Extent fragmentation occurs when the extents of a table or index are not contiguous within the database, leaving extents from one or more indexes intermingled in the file. This can occur due to:
  • Random deletes, which could leave some of the pages in an extent unused while the extent itself is still reserved as part of the table’s space allocation. Think of it like Internal fragmentation, but in extents instead of pages
  • Deletes on ranges of contiguous rows within the table, causing one or more entire extents to become de-allocated, thus leaving a gap between the surrounding extents of the table or index
  • Interleaving of a table’s data extents with the extents of other objects
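You can inspect the level of fragmentation from SMO (the library used in the posts below) with the EnumFragmentation() method of the Index object, which is based on the sys.dm_db_index_physical_stats dynamic management view and returns a DataTable (add a using System.Data; directive for the DataTable types). The following is a minimal sketch, using the same connection placeholders as the other examples and iterating the returned columns generically rather than assuming their names:
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];

    foreach (Table table in db.Tables)
    {
        foreach (Index index in table.Indexes)
        {
            // Sampled mode balances accuracy and speed; Fast and Detailed also exist.
            DataTable fragmentation = index.EnumFragmentation(FragmentationOption.Sampled);
            Console.WriteLine(table.Name + "." + index.Name);
            foreach (DataRow row in fragmentation.Rows)
            {
                foreach (DataColumn col in fragmentation.Columns)
                {
                    Console.WriteLine("  " + col.ColumnName + ": " + row[col]);
                }
            }
        }
    }
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}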

SQL Server - Page Splits

To store data, SQL Server uses pages, which are 8 KB data blocks. How full each page is kept is governed by the fill factor: the higher the fill factor, the fuller the 8 KB page. A higher fill factor means fewer pages are required, resulting in less I/O, CPU, and RAM usage. At this point, you might want to set all your indexes to a 100% fill factor; however, here is the gotcha: once the pages fill up and a value comes in that belongs within a filled-up index range, SQL Server has to make room in the index by performing a "page split."

In essence, SQL Server takes the full page and splits it into two separate pages, each of which has substantially more room at that point. You can account for this issue by setting a fill factor of 70% or so, which leaves 30% free space for incoming values. The drawback of this approach is that you continually have to rebuild the index so that it keeps its 30% free-space percentage.
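In SMO (covered in the posts below), the fill factor of an individual index is exposed through the FillFactor property of the Index object, and Rebuild() rebuilds the index with it. A minimal sketch, assuming the table and index from the later posts and the usual connection placeholders:
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["myNewDatabase"];
    // Hypothetical table and index; substitute your own names.
    Index index = db.Tables["MyFirstSMOTable"].Indexes["PK_ID"];
    index.FillFactor = 70;   // leave 30% free space on each leaf page
    index.Rebuild();
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}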

Tuesday, August 16, 2011

SMO - .NET Framework Assemblies

Microsoft SQL Server now features the integration of the common language runtime (CLR) component of the .NET Framework for Microsoft Windows. The CLR supplies managed code with services such as cross-language integration, code access security, object lifetime management, and debugging and profiling support. For SQL Server users and application developers, CLR integration means that you can now write stored procedures, triggers, user-defined types, user-defined functions (scalar and table-valued), and user-defined aggregate functions using any .NET Framework language, including Microsoft Visual Basic .NET and Microsoft Visual C#. Note that Visual Studio .NET 2003 cannot be used for CLR integration programming. SQL Server includes the .NET Framework version 2.0 SP1 pre-installed, and Visual Studio .NET 2003 cannot use the .NET Framework 2.0 assemblies.
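To illustrate what CLR integration makes possible, here is a minimal sketch of a scalar user-defined function written in C#. The class and method names are hypothetical; once compiled into a class library, the assembly can be registered in a database with CREATE ASSEMBLY and the function exposed with CREATE FUNCTION:
using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class ClrFunctions
{
    // A scalar UDF callable from T-SQL once the assembly is registered.
    [SqlFunction]
    public static SqlString Greet(SqlString name)
    {
        return new SqlString("Hello, " + name.Value);
    }
}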

The following example demonstrates how you can enumerate the .NET Framework assemblies defined in the "AdventureWorks" database. The Assemblies property of the Database object is a collection of SqlAssembly objects. Every assembly can contain more than one file; to get the list of an assembly's files, use the SqlAssemblyFiles property of the SqlAssembly object.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];

    foreach (SqlAssembly assembly in db.Assemblies)
    {
        Console.WriteLine("Assembly name: " + " " + assembly.Name);
        foreach (SqlAssemblyFile assemblyFile in assembly.SqlAssemblyFiles)
            Console.WriteLine("  " + assemblyFile.Name);
    }

}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}

SMO - Server Permissions

SMO allows you to grant, deny, and revoke server permissions to and from a SQL Server login account. The ServerPermissionInfo object captures the set of server permissions returned by the EnumServerPermissions() method. You can pass the name of a SQL Server login as a parameter to this method; you will then get the permissions for that login.

The ServerPermissionSet object represents a set of SQL Server permissions you want to grant, deny, or revoke.

Server permissions are required when granting, denying, or revoking server-level permissions on an instance of SQL Server. The ServerPermission object is used to specify the set of permissions that apply to the Grant, Deny, and Revoke methods of the Server object. Also, server permissions can be added to the ServerPermissionSet object, which can also be used with the Deny, Revoke, and Grant methods.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);

    foreach (ServerPermissionInfo serverPermInfo in srv.EnumServerPermissions("slimak"))
    {
        Console.WriteLine(serverPermInfo.ToString());
    }
    Console.WriteLine("----------------");

    ServerPermissionSet sps;
    sps = new ServerPermissionSet(ServerPermission.CreateAnyDatabase);
    srv.Grant(sps, "slimak");

    foreach (ServerPermissionInfo serverPermInfo in srv.EnumServerPermissions("slimak"))
    {
        Console.WriteLine(serverPermInfo.ToString());
    }
    Console.WriteLine("----------------");

    sps = new ServerPermissionSet(ServerPermission.ViewAnyDatabase);
    srv.Deny(sps, "slimak");

    foreach (ServerPermissionInfo serverPermInfo in srv.EnumServerPermissions("slimak"))
    {
        Console.WriteLine(serverPermInfo.ToString());
    }
    Console.WriteLine("----------------");

    sps = new ServerPermissionSet(ServerPermission.ViewAnyDatabase);
    srv.Revoke(sps, "slimak");

    foreach (ServerPermissionInfo serverPermInfo in srv.EnumServerPermissions("slimak"))
    {
        Console.WriteLine(serverPermInfo.ToString());
    }
    Console.WriteLine("----------------");
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}

SMO - Roles

Database roles are represented by the Roles property of the Database object. The Roles property is a collection of DatabaseRole objects; using this collection, it is possible to enumerate all database roles. For each DatabaseRole object, you can get all members of the role by calling its EnumMembers() method.

Enumerating Database Roles
The following example demonstrates how to enumerate database roles and members for each role.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];
    foreach (DatabaseRole dr in db.Roles)
    {
        Console.WriteLine("Name: " + dr.Name);
        Console.WriteLine("CreateDate: " + dr.CreateDate);
        Console.WriteLine("Owner: " + dr.Owner);
        Console.WriteLine("Role members:");
        foreach (string s in dr.EnumMembers())
            Console.WriteLine("  " + s);
        Console.WriteLine();
    }

}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
Creating a New Database Role
To create a new database role, you instantiate a new DatabaseRole object. The constructor of this object has two parameters, which specify the Database object and the name of the new database role. Once the properties are set, the Create() method creates the new database role. The following example demonstrates how to create a new database role and assign the user "NewUser" to it.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];
    DatabaseRole dbRole = new DatabaseRole(db,"db_newRole");
    dbRole.Create();
    User u = db.Users["NewUser"];
    u.AddToRole("db_newRole");

}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}

SMO - Users

Database users are represented by the Users property of the Database object. The User object represents a SQL Server security principal used for controlling access permissions within a database. When a SQL Server login is created and mapped to a database, a User object is created and added to the Users collection. The following example enumerates the users of the "AdventureWorks" database.

Enumerating Database Users
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];
    foreach(User user in db.Users)
    {
        Console.WriteLine("User: " + user.Name);
        Console.WriteLine("Login: " + user.Login);
        Console.WriteLine("Type: " + user.UserType);
    }
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
Creating Database Users
The following example demonstrates how to create a database user. The User object represents the new user. The User class has a constructor with two parameters: the first specifies the Database object and the second the name of the new user. The Login property is then set to a valid SQL Server login for the user.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];
    User u = new User(db, "NewUser");
    u.Login = "kanasz";
    u.Create();
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
Assigning Role to Database User
ServerConnection conn = 
	new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Database db = srv.Databases["AdventureWorks"];
    User u = db.Users["NewUser"];
    u.AddToRole("db_owner");
    u.Alter();
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}

SMO - Logins

The Login class represents a SQL Server login. Using an instance of the Login class, a login can be added to or removed from server roles. The Server object includes a Logins property that represents all SQL Server logins. A login can be associated with a user in more than one database.

Enumerating SQL Server Logins
The following code demonstrates how to enumerate SQL Server logins and the users associated with each login.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);

    foreach (Login login in srv.Logins)
    {
        Console.WriteLine(login.Name);
        if (login.EnumDatabaseMappings() != null)
        {
            foreach (DatabaseMapping map in login.EnumDatabaseMappings())
            {
                Console.WriteLine(" Database: " + map.DBName);
                Console.WriteLine(" User: " + map.UserName);
            }
        }

    }
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
EnumDatabaseMappings() returns a list of DatabaseMapping objects that represent the mappings between databases and database users for the login.

Creating SQL Server Login
The Login class can be used to create a new SQL Server login. The Login class has a constructor with two parameters: the first specifies the database server, represented by the Server object, and the second is the login name. The AddToRole() method adds the login to a SQL Server role. The following example shows how to create a new SQL Server login called "NewLogin" with the password "pwd" and add it to the sysadmin server role.
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Login login = new Login(srv, "NewLogin");
    login.LoginType = LoginType.SqlLogin;
    login.Create("pwd");
    login.AddToRole("sysadmin");
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}

SMO - Schemas

In SQL Server 2005, Microsoft introduced the concept of database schemas as opposed to object owners. A schema is a container of objects distinct from the users who created those objects.

Schema in SMO is represented by the Schema object. The Schema object represents an ownership context for database objects.

Enumerating schemas
You can enumerate all schemas of a database using the Schemas property of the Database object. The Schemas property is an ordinary collection of Schema objects.
Database myNewDatabase = srv.Databases["myNewDatabase"];
foreach (Schema schema in myNewDatabase.Schemas)
{
    Console.WriteLine(schema.Name);
}
Creating schemas
The following example demonstrates how to create a new database table named MySecondSMOTable in the newly created NewSchema schema:
Database myNewDatabase = srv.Databases["myNewDatabase"];

Schema newSchema = new Schema(myNewDatabase, "NewSchema");
newSchema.Owner = "dbo";
newSchema.Create();

Table mySecondSMOTable = new Table(myNewDatabase, 
      "MySecondSMOTable","NewSchema");

DataType dt = new DataType(SqlDataType.Int);
Column idColumn = new Column(mySecondSMOTable, "ID", dt);
idColumn.Nullable = false;
idColumn.Identity = true;
idColumn.IdentityIncrement = 1;
idColumn.IdentitySeed = 1;
mySecondSMOTable.Columns.Add(idColumn);
dt = new DataType(SqlDataType.VarChar, 50);
Column nameColumn = new Column(mySecondSMOTable, "Name", dt);
mySecondSMOTable.Columns.Add(nameColumn);
mySecondSMOTable.Create();

SMO - Triggers

Before getting to the code, it is important to say something about triggers. Triggers are a special kind of Stored Procedure that respond to specific events. SQL Server offers two types of Triggers: Data Definition Language (DDL) Triggers and Data Manipulation Language (DML) Triggers.

DDL Triggers fire in response to a change in the structure of a database (when CREATE, ALTER, or DROP statements are executed).

Creating a DML Trigger
DML Triggers fire in response to data changes (when INSERT, UPDATE, or DELETE statements are executed). DML Triggers are represented by the Trigger object. One of the main properties of the Trigger object is TextBody; this property holds the code that runs when the Trigger fires. The type of Trigger is set by the Insert, Update, and Delete properties.
Database myNewDatabase = srv.Databases["myNewDatabase"];
Table myFirstSMOTable = myNewDatabase.Tables["myFirstSMOTable"];
Trigger trigger = new Trigger(myFirstSMOTable, "SMOTrigger");
trigger.TextMode = false;
trigger.Insert = true;
trigger.Update = true;
trigger.Delete = false;
trigger.InsertOrder = 
  Microsoft.SqlServer.Management.Smo.Agent.ActivationOrder.First;
trigger.TextBody = " RAISERROR('MESSAGE',16,10) ";
trigger.ImplementationType = ImplementationType.TransactSql;
trigger.Create();
In this example, a Trigger named SMOTrigger is created. This trigger fires when an INSERT or UPDATE statement is executed. DML Triggers are bound to a table; in this case, the trigger is bound to MyFirstSMOTable, and when one of the mentioned statements (INSERT or UPDATE) occurs, the Trigger fires.

Creating a DDL Trigger
The other type of Trigger is the DDL Trigger. In SMO, this type is represented by two objects: ServerDdlTrigger and DatabaseDdlTrigger. Here is an example of a DatabaseDdlTrigger:
Database myNewDatabase = srv.Databases["myNewDatabase"];
DatabaseDdlTrigger databaseTrigger = 
  new DatabaseDdlTrigger(myNewDatabase,"SMODatabaseTrigger");
databaseTrigger.TextHeader = 
  "CREATE TRIGGER SMODatabaseTrigger ON DATABASE FOR DROP_TABLE AS";
databaseTrigger.TextBody = 
  "PRINT 'You can not delete table!' ; ROLLBACK "; ;
databaseTrigger.ImplementationType = ImplementationType.TransactSql;
databaseTrigger.ExecutionContext = DatabaseDdlTriggerExecutionContext.Caller;
databaseTrigger.Create();
In this example, a DDL Trigger named SMODatabaseTrigger is created in the database MyNewDatabase. This Trigger fires when someone tries to execute a DROP TABLE statement.

SMO - Stored procedures

The StoredProcedure object represents SQL Server Stored Procedures.

Enumerating Stored Procedures
Stored Procedures are represented by the StoredProcedures property of the Database object. The following example demonstrates how to enumerate Stored Procedures of a selected database (in this case, AdventureWorks).
Server srv = new Server(conn);
Database db = srv.Databases["AdventureWorks"];
foreach (StoredProcedure sp in db.StoredProcedures)
{
    /*
     *  SOME CODE
     */ 
}
Creating a Stored Procedure
When you want to create a Stored Procedure, the TextBody property must be set to the T-SQL script that defines it. Stored Procedures often take parameters; these are represented by the Parameters property of the StoredProcedure object, which is a collection of StoredProcedureParameter objects. Every parameter name requires the "@" prefix.
Database myNewDatabase = srv.Databases["myNewDatabase"];
StoredProcedure sp = new StoredProcedure(myNewDatabase, "spRowByID_Get");
sp.TextMode = false;
sp.AnsiNullsStatus = false;
sp.QuotedIdentifierStatus = false;
StoredProcedureParameter param;
param = new StoredProcedureParameter(sp, "@ID", DataType.Int);
sp.Parameters.Add(param);
string spBody = "SELECT * FROM MyFirstSMOTable WHERE ID=@ID";
sp.TextBody = spBody;
sp.Create();
In this example, a Stored Procedure named spRowByID_Get was created with an input parameter @ID. When you want to alter an existing Stored Procedure, just create a StoredProcedure object, set all the properties you need to change, and call the Alter() method.
Database myNewDatabase = srv.Databases["myNewDatabase"];
StoredProcedure sp = myNewDatabase.StoredProcedures["spRowByID_Get"];
/*
 *  stored procedure modifications
 */ 
sp.Alter();
When you want to drop an existing Stored Procedure, just call the Drop() method of the StoredProcedure object.
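A minimal sketch, assuming the Stored Procedure created above exists:
Database myNewDatabase = srv.Databases["myNewDatabase"];
StoredProcedure sp = myNewDatabase.StoredProcedures["spRowByID_Get"];
if (sp != null)
{
    sp.Drop();
}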

Monday, August 15, 2011

SMO - Views

In SQL Server Management objects, Views are represented by a View object.

Enumerating Views
Server srv = new Server(conn);
Database db = srv.Databases["AdventureWorks"];
foreach (View view in db.Views)
{
    /*
     *  SOME CODE
     */ 
}
Creating and removing Views
In the following example, a View is created in text mode, so the TextHeader property must be set. This property gets or sets the header part of the string that defines the View; the body of the View is represented by the TextBody property. Two other properties worth mentioning are IsSchemaBound, which gets or sets the value specifying whether the View is bound to a schema, and IsEncrypted, which specifies whether the View is encrypted. When a View is created with IsEncrypted set to true, the definition of the View is stored in an obfuscated format. When IsSchemaBound is true, the Views and tables participating in the View cannot be dropped unless that View is dropped or changed so that it no longer has schema binding.
View myview = new View(myNewDatabase, "My_SMO_View");
myview.TextHeader = "CREATE VIEW [My_SMO_View] AS";
myview.TextBody = "SELECT ID, NAME FROM MyFirstSMOTable"; 
myview.Create();
This example shows how to create a View in MyNewDatabase. The newly created View is named My_SMO_View and selects the ID and NAME columns from the table MyFirstSMOTable.

SMO - Working with tables

In SMO, tables are represented by the Table object. In SMO objects hierarchy, the Table object is below the Database object.

Enumerating tables and columns
This example demonstrates how to enumerate all the tables and columns of the AdventureWorks database.
Server srv = new Server(conn);
Database db = srv.Databases["AdventureWorks"];

foreach (Table table in db.Tables)
{
    Console.WriteLine(" " + table.Name);
    foreach (Column col in table.Columns)
    {
        Console.WriteLine("  " + col.Name + " " + col.DataType.Name);
    }
}
Creating and removing tables
The following example shows how to create a database table. It creates MyFirstSMOTable in the MyNewDatabase database. The table contains two columns (ID and Name) and a Primary Key index on the first column.
Database myNewDatabase = srv.Databases["myNewDatabase"];

Table myFirstSMOTable = new Table(myNewDatabase, "MyFirstSMOTable");

DataType dt = new DataType(SqlDataType.Int);
Column idColumn = new Column(myFirstSMOTable, "ID", dt);
idColumn.Nullable = false;
idColumn.Identity = true;
idColumn.IdentityIncrement = 1;
idColumn.IdentitySeed = 1;
myFirstSMOTable.Columns.Add(idColumn);

dt = new DataType(SqlDataType.VarChar, 50);
Column nameColumn = new Column(myFirstSMOTable, "Name", dt);
myFirstSMOTable.Columns.Add(nameColumn);

myFirstSMOTable.Create();

Index primaryKey = new Index(myFirstSMOTable, "PK_ID");
IndexedColumn indexedColumn = new IndexedColumn(primaryKey, "ID");
primaryKey.IndexedColumns.Add(indexedColumn);
primaryKey.IndexKeyType = IndexKeyType.DriPrimaryKey;
primaryKey.Create();
In this example, the Table object is instantiated and two parameters are passed into the constructor of this class. The first parameter is a database represented by the myNewDatabase object of the Database class. The second parameter is the name of the table to be created. The columns of the table are created by instantiating the Column object. Before creating a column object, it is important to define the data type of the newly created column. The data type is represented by the DataType object. Next, this DataType object is passed into the constructor of the Column object. The constructor of the Column object has two other parameters: table and name of column.

In this example, two columns and a primary key index are created. The primary key index is created using the Index object, whose constructor takes two parameters: the table and the name of the index. The IndexedColumn object is likewise created with two parameters in its constructor: the index and the name of the column.

Checking table integrity
The following example demonstrates how to check the integrity of all the tables and their indexes in the database. In this example, we call the CheckTables() method with a single parameter, which specifies the RepairType.
Server srv = new Server(conn);
Database database = srv.Databases["AdventureWorks"];
StringCollection sc = database.CheckTables(RepairType.None);
foreach (object o in sc)
{
    Console.WriteLine(o.ToString());
}

SMO - Working with databases

Enumerating databases, filegroups, and files
The Databases property of the Server object represents a collection of Database objects. Using this collection, you can enumerate the databases on the SQL Server instance.
Server srv = new Server(conn);
foreach (Database db in srv.Databases)
{
    Console.WriteLine(db.Name);
    foreach (FileGroup fg in db.FileGroups)
    {
        Console.WriteLine("   " + fg.Name);
        foreach (DataFile df in fg.Files)
        {
            Console.WriteLine("      " + df.Name + " " + df.FileName);
        }
    }
}
Enumerating database properties
Database properties are represented by the Properties property of a Database object. Properties is a collection of Property objects. The following sample demonstrates how to get database properties:
Server srv = new Server(conn);
Database database = srv.Databases["AdventureWorks"];
foreach (Property prop in database.Properties)
{
    Console.WriteLine(prop.Name + " " + prop.Value);
}
Creating databases
With SMO, you can create databases. When you want to create a database, you must create a Database object. This example demonstrates how to create a database named MyNewDatabase with a data file called MyNewDatabase.mdf (in the primary filegroup) and a log file named MyNewDatabase.ldf.
Database database = new Database(srv, "MyNewDatabase");
database.FileGroups.Add(new FileGroup(database, "PRIMARY"));
DataFile dtPrimary = new DataFile(database.FileGroups["PRIMARY"], 
         "PriValue", @"E:\Data\MyNewDatabase\MyNewDatabase.mdf");
dtPrimary.Size = 77.0 * 1024.0;
dtPrimary.GrowthType = FileGrowthType.KB;
dtPrimary.Growth = 1.0 * 1024.0;
database.FileGroups["PRIMARY"].Files.Add(dtPrimary);

LogFile logFile = new LogFile(database, "Log", 
        @"E:\Data\MyNewDatabase\MyNewDatabase.ldf");
logFile.Size = 7.0 * 1024.0;
logFile.GrowthType = FileGrowthType.Percent;
logFile.Growth = 10.0;
 
database.LogFiles.Add(logFile);
database.Create();
database.Refresh();
SMO allows you to set the growth options of a database and many other properties; more about the properties can be found on MSDN. When you want to drop a database, just call the Drop() method of the Database object.

Database backup
SMO allows you to back up databases very easily. For a backup operation, you must create an instance of the Backup class and then set its Action property to BackupActionType.Database. Next you add a device to back up to; in many cases this is a file, but you can back up not only to a file, but also to tape, a logical drive, a pipe, or a virtual device.
Server srv = new Server(conn);
Database database = srv.Databases["AdventureWorks"];
Backup backup = new Backup();
backup.Action = BackupActionType.Database;
backup.Database = database.Name;
backup.Devices.AddDevice(@"E:\Data\Backup\AW.bak", DeviceType.File);
backup.PercentCompleteNotification = 10;
backup.PercentComplete += new PercentCompleteEventHandler(ProgressEventHandler);
backup.SqlBackup(srv);
SMO also allows you to monitor the progress of a backup operation as it is performed. To implement this feature, create an event handler that takes a PercentCompleteEventArgs parameter. This parameter includes the Percent property, which contains the percent-complete value, an integer between 0 and 100.
static void ProgressEventHandler(object sender, PercentCompleteEventArgs e)
{
    Console.WriteLine(e.Percent);
}
Performing a log backup operation is similar to a database backup. Just set the Action property to Log instead of Database.
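For example (a sketch reusing the objects from the example above; the log backup file path is hypothetical, and the database must not be in the simple recovery model):
Backup logBackup = new Backup();
logBackup.Action = BackupActionType.Log;
logBackup.Database = database.Name;
logBackup.Devices.AddDevice(@"E:\Data\Backup\AW_Log.bak", DeviceType.File);
logBackup.SqlBackup(srv);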

Database restore
SMO allows you to perform a database restore easily. A restore operation is performed by the Restore class, which lives in the Microsoft.SqlServer.Management.Smo namespace. Before running any restore, you must provide a database name and a valid backup file. Then you set the Action property: to restore a database, set it to RestoreActionType.Database; to restore a log, set it to RestoreActionType.Log. During a restore, you can monitor the progress of the operation the same way as for a database backup.
Server srv = new Server(conn);
Database database = srv.Databases["AdventureWorks"];
Restore restore = new Restore();
restore.Action = RestoreActionType.Database;
restore.Devices.AddDevice(@"E:\Data\Backup\AW.bak", DeviceType.File);
restore.Database = database.Name;
restore.PercentCompleteNotification = 10;
restore.PercentComplete += new PercentCompleteEventHandler(ProgressEventHandler);
restore.SqlRestore(srv);

Sunday, August 14, 2011

SMO - Connecting to the SQL Server

When you want to connect to SQL Server 2005 (or a later version), you can use SQL Server SMO. To get started with SMO, you must first add references in Visual Studio. In the Add Reference window, select:
  • Microsoft.SqlServer.Smo
  • Microsoft.SqlServer.SmoExtended
  • Microsoft.SqlServer.Management.Sdk.Sfc
  • Microsoft.SqlServer.ConnectionInfo
When the references are added, you must add two using statements for these namespaces:
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;
Now you can start working with SMO. Before doing restore and backup tasks, you must connect to the server. This connection is provided by the ServerConnection object. It lets you connect to the server and assign that connection to the Server object. The Server object represents an instance of SQL Server. In SMO programming, the Server object determines the connection to a physical SQL Server installation.
ServerConnection conn = new ServerConnection("INSTANCE", 
                            "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    Console.WriteLine("Server: " + srv.Name);
    Console.WriteLine("Edition: " + srv.Information.Edition);
    conn.Disconnect();
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
This example shows you how to connect to SQL Server using a user name and password. Alternatively, you can use Windows Integrated Authentication:
ServerConnection conn = new ServerConnection();
conn.ServerInstance = "INSTANCE";
try
{
    Server srv = new Server(conn);
    Console.WriteLine("Server: " + srv.Name);
    Console.WriteLine("Edition: " + srv.Information.Edition);
    conn.Disconnect();
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}
To disconnect from SQL Server, we use the Disconnect() method of the ServerConnection class. The ServerConnection object is also exposed through the Server.ConnectionContext property, so alternatively you can disconnect from SQL Server by calling:
Server.ConnectionContext.Disconnect();

Tuesday, August 9, 2011

Fill Factor

The fill factor option is provided for tuning index data storage and performance. The fill-factor value determines the percentage of space on each leaf-level page to be filled with data, reserving the remainder on each page as free space for future growth. For example, specifying a fill-factor value of 80 means that 20 percent of each leaf-level page will be left empty, providing space for index expansion as data is added to the underlying table. The empty space is reserved between the index rows rather than at the end of the index. The fill-factor value is a percentage from 1 to 100, and the server-wide default is 0 which means that the leaf-level pages are filled to capacity. Fill-factor values 0 and 100 are the same in all respects.

The fill-factor setting applies only when the index is created or rebuilt. The SQL Server Database Engine does not dynamically keep the specified percentage of empty space in the pages; trying to maintain the extra space on the data pages would defeat the purpose of fill factor, because the Database Engine would have to perform page splits to maintain the percentage of free space specified by the fill factor on each page as data is entered. Here is some information you should take into consideration when modifying the fill factor:
  • How the data is inserted, updated, and deleted in the table dictates how full the leaf-level pages should be. Fine-tuning this setting typically takes some testing and analysis, and it can be critical for large, active tables in your database.
  • If data is always inserted at the end of the table, the fill factor could be between 90 and 100 percent, since data will never be inserted into the middle of a page. UPDATE and DELETE statements may still expand (UPDATE) or decrease (DELETE) the space needed on each leaf-level page, so this should be fine-tuned based on testing.
  • If data can be inserted anywhere in the table, then a fill factor of 60 to 80 percent could be appropriate based on the INSERT, UPDATE, and DELETE activity. However, it is necessary to conduct some testing and analysis to determine the appropriate setting for your environment.
  • All other things being equal (table size, SQL Server version, options, etc.), the lower the fill factor percentage, the more storage may be needed compared to a higher fill factor, where the pages are more compact.
  • Another aspect to take into consideration is your index rebuild schedule. If you cannot rebuild your indexes on a regular schedule and you have a high level of INSERT, UPDATE, and DELETE activity throughout the table, one consideration is a lower fill factor to limit fragmentation. The trade-off may be that more storage is needed.
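The server-wide default mentioned above can also be read through SMO: the Configuration object of the Server class wraps sp_configure. A minimal sketch, assuming the FillFactor configuration property and the usual connection placeholders (RunValue is the currently effective setting):
ServerConnection conn = new ServerConnection(@"SQL_SERVER_INSTANCE", "LOGIN", "PASSWORD");
try
{
    Server srv = new Server(conn);
    // 0 means leaf pages are filled to capacity (equivalent to 100).
    Console.WriteLine("Default fill factor: " + srv.Configuration.FillFactor.RunValue);
}
catch (Exception err)
{
    Console.WriteLine(err.Message);
}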

Friday, August 5, 2011

Best Practices for Creating Indexes

Creating indexes is vital for the performance and future maintenance of your SQL Server database (we will discuss maintenance later in this article). Choosing appropriate indexes can improve your application performance significantly, often by an order of magnitude. This is not as simple a task as it might sound: there are several points you must consider when choosing indexes. Each index you define can improve the performance of SELECT queries, but on the other hand it can decrease the performance of INSERT and UPDATE queries, because SQL Server automatically maintains index keys. Each time you issue a data modification statement, SQL Server performs not only the data modification itself but also updates every index defined on the affected table. The performance degradation is noticeable with large tables that have many indexes, or a few indexes with long keys. In some cases it is suitable to drop the indexes before updating or inserting new records and then recreate them afterwards; this applies to large tables where you need to update or insert a large amount of data, and you will find that inserting into a table without indexes is much faster than inserting into a table with indexes.
There are several tips you should keep in mind when implementing indexing.
  • Keep indexes lean. Try to build indexes on one or few columns at most. Wide indexes take longer to scan than narrow indexes.
  • Create the clustered index on every table. However, choose the column(s) for the clustered index judiciously. Try to create the clustered index on the column which is used most frequently for retrieving data.
  • Try to create the clustered index on the column with high selectivity; that is the column that does not have many duplicate values.
  • Try to create the clustered index on the column that will never be updated or will be updated infrequently. Every time the clustered index key is updated, SQL Server must maintain not just the clustered index but also the non-clustered indexes since non-clustered indexes contain a pointer to the clustered index. This is yet another reason why you shouldn't create the clustered index on multiple columns.
  • By default, SQL Server creates the clustered index on the PRIMARY KEY column(s) unless the clustered index is created prior to creating the primary key. It is often beneficial to have the clustered index on the primary key, but sometimes you're better off saving the clustered index for other column(s). Feel free to override the default behavior if your testing shows that clustered index on a non-key column will help your queries perform better.
  • SQL Server has to perform some maintenance when index keys are updated, so each additional index can impose a slight penalty on the performance of INSERT, UPDATE and DELETE statements. Nevertheless, the cost of index maintenance could be far less than the performance improvement you'll reap for your SELECT statements. Therefore you can be fairly liberal with the number of non-clustered indexes.
  • Be sure to eliminate duplicate indexes, that is, multiple indexes created on the same set of columns. Such indexes provide no additional benefit but use up extra disk space and might hinder performance of INSERT, UPDATE and DELETE statements.
  • Check the default fill factor level configured at the server level. If you don't specify the FILLFACTOR option with CREATE INDEX statement, every new index will be created using the default fill factor. This may or may not be what you intend.
  • Non-clustered indexes can be created in different filegroups, which can reside on separate disk drives to improve data access (I/O operations).
In a couple of articles I have found the claim that an identity column is the best choice for the clustered index, but keep in mind that this isn't necessarily the best approach, because it forces users to enter new data onto the last data page of the table. This condition is sometimes referred to as a "hotspot", since there may be multiple users competing for the last available spot on a page, making INSERT statements slow.

In some cases you will find tables that are never queried by a single column. Here, some developers prefer to create the clustered index on the set of columns that is most frequently used for retrieving data and that uniquely identifies each record. Such indexes are called composite clustered indexes. This may seem like a good idea, because an identity column has no business meaning in most cases; however, from a performance point of view you should avoid composite clustered indexes. Generally speaking, the leaner the index, the faster SQL Server can scan or seek through it. For small tables (or data sets), composite indexes perform relatively well, but as the number of records grows, performance decreases.

Creating indexes has some limitations: not every column can be part of an index. In particular, columns of the ntext, text, image, varchar(max), nvarchar(max), and varbinary(max) data types cannot be specified as index key columns. However, the varchar(max), nvarchar(max), varbinary(max), and xml data types can participate in a nonclustered index as nonkey (included) index columns, as the sketch below shows.
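Here is a sketch of that last point in SMO: a nonclustered index whose key stays lean while a nonkey column is included through the IsIncluded property of IndexedColumn. It assumes the MyFirstSMOTable example from the earlier posts plus a hypothetical Description column of type nvarchar(max):
Database myNewDatabase = srv.Databases["myNewDatabase"];
Table table = myNewDatabase.Tables["MyFirstSMOTable"];

Index ix = new Index(table, "IX_Name");
ix.IndexKeyType = IndexKeyType.None;   // an ordinary nonclustered index
ix.IsClustered = false;

// The key column stays lean.
ix.IndexedColumns.Add(new IndexedColumn(ix, "Name"));

// Hypothetical nvarchar(max) column added as a nonkey (included) column.
IndexedColumn included = new IndexedColumn(ix, "Description");
included.IsIncluded = true;
ix.IndexedColumns.Add(included);

ix.Create();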