DIFFERENCES BETWEEN SQL SERVER CLUSTERED INDEX SCAN AND INDEX SEEK

An index scan reads all the rows in the table, while an index seek retrieves only the rows that qualify.


INDEX SCAN:

An index scan touches every row in the table, whether or not it qualifies, so the cost is proportional to the total number of rows in the table.

Thus, a scan is an efficient strategy if the table is small or most of the rows qualify for the predicate.

INDEX SEEK:

An index seek touches only the rows that qualify and the pages that contain those qualifying rows.

The cost is proportional to the number of qualifying rows and pages rather than the total number of rows in the table.
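As a sketch, consider a hypothetical dbo.Orders table with a clustered index on OrderID. The first query below can seek directly to the qualifying row, while the second forces a scan because there is no index on OrderDate:

-- Seek: the predicate matches the index key, so only qualifying rows are touched
SELECT * FROM dbo.Orders WHERE OrderID = 42;

-- Scan: no index covers OrderDate, so every row must be read
SELECT * FROM dbo.Orders WHERE OrderDate >= '2020-01-01';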

There are two types of indexes:

1. Clustered Index.

2. Non Clustered Index.

Clustered and Non-Clustered Indexes:

A non-clustered index can consist of one or more columns, but the physical data storage does not depend on the CREATE INDEX statement, as it does with a clustered index.

For a table without a clustered index, which is called a heap, the non-clustered index points directly to the row (data).

When the table has a clustered index, the non-clustered index instead points to the clustered index key for the row (data).

Although many implementations only have a single column for the clustered index, in reality a clustered index can have multiple columns.

Just be careful to select the correct columns based on how the data is used. The number of columns in the clustered (or non-clustered) index can have significant performance implications with heavy INSERT, UPDATE and DELETE activity in your database.
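As a quick sketch, using the same hypothetical dbo.Orders table, the two index types are created like this:

-- Clustered index: defines the physical storage order of the table (one per table)
CREATE CLUSTERED INDEX CIX_Orders_OrderID ON dbo.Orders (OrderID);

-- Non-clustered index: a separate structure that points back to the clustered key (or to the heap row)
CREATE NONCLUSTERED INDEX IX_Orders_Customer ON dbo.Orders (CustomerID, OrderDate);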

SQL Server – Get Size of All Databases in MB and GB

The following script retrieves the size of all your databases in MB and GB.


SELECT d.name
    ,SUM(CAST(mf.size AS bigint)) * 8 / 1024 AS Size_MBs -- mf.size is in 8 KB pages
    ,CAST(SUM(CAST(mf.size AS bigint)) * 8.0 / 1024 / 1024 AS decimal(18, 2)) AS Size_GBs
FROM sys.master_files mf
INNER JOIN sys.databases d ON d.database_id = mf.database_id
WHERE d.database_id > 4 -- Skip system databases
GROUP BY d.name
ORDER BY d.name;

-- Variant: also exclude offline databases
SELECT d.name
    ,SUM(CAST(mf.size AS bigint)) * 8 / 1024 AS Size_MBs
    ,CAST(SUM(CAST(mf.size AS bigint)) * 8.0 / 1024 / 1024 AS decimal(18, 2)) AS Size_GBs
FROM sys.master_files mf
INNER JOIN sys.databases d ON d.database_id = mf.database_id
WHERE d.database_id > 4 -- Skip system databases
    AND d.state_desc = 'ONLINE' -- Skip offline databases
GROUP BY d.name
ORDER BY d.name;

Database level roles in Azure SQL Database

Database security is critical for an organization to protect against unauthorized access to the sensitive and critical data stored in database objects. There are many layers of security: infrastructure security, database authentication, authorization, and encryption.

Once you create a new Azure SQL DB using the Azure portal or Azure CLI, the provisioning process deploys a logical Azure SQL server in the Azure region. You get a virtual master database for managing configuration and security at the server level. Provisioning also configures a server-level principal as the database owner of the Azure database. This account has the highest permissions in Azure SQL DB (PaaS) and has sufficient rights to manage server-level and database-level security.




Fixed Database Roles in Azure DB

To get a list of available fixed database roles, expand the Azure SQL DB and navigate to Security -> Roles -> Database Roles. You get the standard fixed database roles: db_owner, db_securityadmin, db_accessadmin, db_backupoperator, db_ddladmin, db_datawriter, db_datareader, db_denydatawriter, and db_denydatareader.


Additional roles in the virtual master database

If you look at the same database roles in the virtual master database, you get additional database roles.


Azure Database contains additional security roles: loginmanager for creating logins and dbmanager for creating databases.

Note: Only users in the master database can be added to these database roles.

Loginmanager role:

Users in the loginmanager database role can create and delete logins in the master database.

dbmanager role:

The dbmanager role allows a user to create and delete databases; the user who creates a database becomes its owner. That user can connect to the database as the dbo user, which holds all database-level permissions. A user in this role does not have permission to access databases that they do not own.
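As a minimal sketch, the following T-SQL adds a master-database user to both roles; run it while connected to the virtual master database (the login name and password here are hypothetical placeholders):

-- Run in the virtual master database of the logical server
CREATE LOGIN report_admin WITH PASSWORD = '<strong-password-here>';
CREATE USER report_admin FOR LOGIN report_admin;

-- Grant the ability to manage logins and to create databases
ALTER ROLE loginmanager ADD MEMBER report_admin;
ALTER ROLE dbmanager ADD MEMBER report_admin;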


How to Manage a Very Large Database in SQL Server

A few years ago, when I started my career as a SQL Server DBA, I remember a 200-500 GB database was considered a very large database. At that time, maintaining and managing those databases was not a tedious task. But over the years the definition of a very large database has changed. Today we're looking at database sizes in terabytes and petabytes.

With cloud computing, hardware requirements needed to sustain the growth of databases are now just one click away. Auto-scaling is now a blessing to companies, as they previously had to run through their budgets whenever there was a need to add resources to the database servers.

Because database sizes have grown over time, managing and maintaining them has become a pain. And when I say managing and maintaining a database, I mean taking regular backups, performing index maintenance, running integrity checks, and so on.

Most of the time we try to archive the old/cold data so that we can keep the database size in check. But sometimes the scope to archive data is very limited. This is especially true in the medical and financial sectors, where old data is still used for various purposes.

Given the rate at which databases grow daily, I'm going to take you through some database management tasks to give you a better understanding of how you can keep up.

Consider an OLTP database that is active 24x6 and is around 10 TB in size.

Backup Strategies

Taking a daily full backup of a 10 TB database can be a very demanding task, especially for an OLTP database. Even with better hardware, the backup time would be around five to seven hours. Therefore, having a proper backup strategy in place is as important as maintaining availability.

Considering the time and cost of resources, it would be wise to have a weekly full backup with daily differential and hourly transaction log backups.

Using third-party tools to back up the database is a better option. These tools not only reduce the time taken to back up the database, but also reduce the size of the compressed backup.
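A minimal T-SQL sketch of that schedule, assuming a hypothetical database name (BigDB) and backup paths:

-- Weekly full backup
BACKUP DATABASE BigDB
    TO DISK = N'X:\Backups\BigDB_Full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 5;

-- Daily differential backup
BACKUP DATABASE BigDB
    TO DISK = N'X:\Backups\BigDB_Diff.bak'
    WITH DIFFERENTIAL, COMPRESSION, CHECKSUM;

-- Hourly transaction log backup
BACKUP LOG BigDB
    TO DISK = N'X:\Backups\BigDB_Log.trn'
    WITH COMPRESSION, CHECKSUM;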

INDEX Maintenance

Performing normal index maintenance tasks on a very large database is not the same as performing them on a regular-sized database. Rebuilding large indexes on big tables is very time consuming, and it can also cause blocking on the server, which hampers the performance of other applications.

The only practical way to maintain the indexes on such a huge database is to REORGANIZE them. The REBUILD option should only be chosen when an index is corrupted or when there is an absolute need to rebuild a particular large index.

One important option to consider when building indexes on such a large database is to specify WITH SORT_IN_TEMPDB in the statement. SORT_IN_TEMPDB forces the sort work of the index build to occur in tempdb; when it is complete, the index is then written to its destination filegroup. This often-overlooked option can provide huge reductions in index build times.

Another advantage of reorganizing an index is that we can stop the operation midway if the execution time overlaps business hours, and this will not cause the index to go into a ROLLBACK state.
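As a sketch, with a hypothetical index and table name:

-- REORGANIZE: always online, and can be stopped at any time without rollback
ALTER INDEX IX_Orders_Customer ON dbo.Orders REORGANIZE;

-- REBUILD (only when truly needed): push the sort work into tempdb
-- (ONLINE = ON requires an edition that supports online index rebuilds)
ALTER INDEX IX_Orders_Customer ON dbo.Orders
    REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);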

Database Consistency Check (DBCC)

The DBCC command is used to check the consistency and integrity of the database. It helps us make sure our databases are in a healthy state, and if any issue or corruption occurs, it helps identify and fix the issue.

Executing the DBCC command is very resource-intensive; it strains both memory and disk. Running it on such a large database can be very risky: if the command does not finish in the allotted time frame and we try to KILL the process, it will go into a ROLLBACK state. This is not only time consuming but also jeopardizes the consistency of the database. Hence, running this command against a very large OLTP database is not a feasible option.

The speed of completion of this command depends entirely on the memory available and the type of RAID used for hosting the tempdb database.

Other options can be used, such as DBCC CHECKTABLE against individual tables or groups of tables on a rotating basis, or DBCC CHECKDB WITH PHYSICAL_ONLY. The latter limits the processing to checking the integrity of the physical structure of the page and record headers, in addition to the consistency between the pages of the allocation structures (data and indexes).
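As a sketch, with hypothetical database and table names:

-- Lightweight physical-only check of the whole database
DBCC CHECKDB (N'BigDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Full logical check of one table at a time, on a rotating schedule
DBCC CHECKTABLE (N'dbo.Orders') WITH NO_INFOMSGS;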

The most recommended and best option is to have a STANDBY server, restore the backup of the production database on that server, and then run the DBCC command. If the consistency checks run ok on the standby database, the production database should be ok as it is the source of the standby. If the standby database reports corruption, then DBCCs or other tests for corruption can be run against the production database.

I hope this gives you a better understanding of the complexity and management options for very large databases in SQL Server. 

Backup Report via Email Using a PowerShell Script

# Input/output settings
$ServerList = "C:\DBA\Servers.csv"   # CSV with ServerName and ApplicationName columns
$OutputFile = "C:\DBA\Output.htm"
$emlist     = "sqldbanow@gmail.com"
$MailServer = "smtp.sqldbanow.com"

# CSS for the HTML report table
$HTML = '<style type="text/css">
#Header{font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;width:100%;border-collapse:collapse;}
#Header td, #Header th {font-size:14px;border:1px solid #98bf21;padding:3px 7px 2px 7px;}
#Header th {font-size:14px;text-align:left;padding-top:5px;padding-bottom:4px;background-color:#A7C942;color:#fff;}
#Header tr.alt td {color:#000;background-color:#EAF2D3;}
</style>'

# Table header row
$HTML += "<HTML><BODY><Table border=1 cellpadding=0 cellspacing=0 width=100% id=Header>
<TR>
 <TH><B>Database Name</B></TH>
 <TH><B>RecoveryModel</B></TH>
 <TH><B>Last Full Backup Date</B></TH>
 <TH><B>Last Differential Backup Date</B></TH>
 <TH><B>Last Log Backup Date</B></TH>
</TR>"

# Load the SQL Server Management Objects (SMO) assembly
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null

# Loop over every server in the CSV list
Import-Csv $ServerList | ForEach-Object {
    $ServerName = $_.ServerName
    $AppName    = $_.ApplicationName
    $HTML += "<TR bgColor='#ccff66'><TD colspan=5 align=center>$ServerName - $AppName</TD></TR>"
    $SQLServer  = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $ServerName

    foreach ($Database in $SQLServer.Databases) {
        $DaysSince = ((Get-Date) - $Database.LastBackupDate).Days

        if ($Database.Name -ne 'tempdb' -and $Database.Name -ne 'model') {
            # Simple recovery model: log backups do not apply, so show NA
            if ($Database.RecoveryModel -like "simple" -and $DaysSince -gt 1) {
                $HTML += "<TR>
 <TD>$($Database.Name)</TD>
 <TD>$($Database.RecoveryModel)</TD>
 <TD bgcolor='RED'>$($Database.LastBackupDate)</TD>
 <TD>$($Database.LastDifferentialBackupDate)</TD>
 <TD>NA</TD>
</TR>"
            }

            # Full recovery model: include the last log backup date
            if ($Database.RecoveryModel -like "full" -and $DaysSince -gt 1) {
                $HTML += "<TR>
 <TD>$($Database.Name)</TD>
 <TD>$($Database.RecoveryModel)</TD>
 <TD bgcolor='RED'>$($Database.LastBackupDate)</TD>
 <TD>$($Database.LastDifferentialBackupDate)</TD>
 <TD>$($Database.LastLogBackupDate)</TD>
</TR>"
            }

            # Backed up within the last day: highlight the full backup date in green
            if ($DaysSince -lt 1) {
                $HTML += "<TR>
 <TD>$($Database.Name)</TD>
 <TD>$($Database.RecoveryModel)</TD>
 <TD bgcolor='GREEN'>$($Database.LastBackupDate)</TD>
 <TD>$($Database.LastDifferentialBackupDate)</TD>
 <TD>$($Database.LastLogBackupDate)</TD>
</TR>"
            }
        }
    }
}

$HTML += "</Table></BODY></HTML>"
$HTML | Out-File $OutputFile

Function sendEmail {
    param($from, $to, $subject, $smtphost, $htmlFileName)

    # Read the report file as one string so it renders as a single HTML body
    $bodyText = Get-Content $htmlFileName | Out-String
    $message  = New-Object System.Net.Mail.MailMessage $from, $to, $subject, $bodyText
    $message.IsBodyHtml = $true
    $smtp = New-Object Net.Mail.SmtpClient($smtphost)
    $smtp.Send($message)
}

$date = (Get-Date).ToString('MM/dd/yyyy')
sendEmail sqldbanow@gmail.com $emlist "SQLDBANOW Test Server Backup Report for - $date" $MailServer $OutputFile



---------------------------------------------

-- You need to create a CSV file with the server list at C:\DBA\Servers.csv, containing ServerName and ApplicationName columns.
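For reference, a sample Servers.csv might look like this (the server and application names are hypothetical):

ServerName,ApplicationName
SQLPROD01,Payroll
SQLPROD02,CRM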