In my last article “Converting
Comma Separated Value to Rows and Vice Versa in SQL Server”, I talked about
how you can convert comma separated (or separated with some other character)
values in a single column into rows and vice versa. In this article, I
demonstrate how you can convert row values into columns (PIVOT) and
column values into rows (UNPIVOT) in SQL Server. For more information, click here.
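As a quick sketch of the rows-to-columns direction, here is a minimal PIVOT query (the table and column names here are hypothetical, not from the original article):

```sql
-- Rows to columns with PIVOT: total sales per product, one column per year.
-- dbo.Sales(ProductName, SaleYear, Amount) is an illustrative table.
SELECT ProductName, [2014], [2015]
FROM
(
    SELECT ProductName, SaleYear, Amount
    FROM dbo.Sales
) AS src
PIVOT
(
    SUM(Amount) FOR SaleYear IN ([2014], [2015])
) AS pvt;
```

UNPIVOT works in the opposite direction, turning the year columns back into (ProductName, SaleYear, Amount) rows.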
Friday, October 16, 2015
Converting Comma Separated Value to Rows and Vice Versa in SQL Server
Often while reporting you will encounter a situation where you will have comma
separated (or separated with some other character) values in a single column but
you want to report them in rows whereas in some other cases you might have
values in multiple rows and want them to be a single value separated by comma or
some other character. In this article, I am going to demonstrate how you can
write queries in SQL Server to handle these scenarios quickly. For more information, click here.
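A minimal sketch of both directions, using techniques available in the SQL Server versions current at the time (table and column names are hypothetical):

```sql
-- Rows to a single comma separated value (STUFF + FOR XML PATH):
SELECT STUFF(
    (SELECT ',' + EmployeeName
     FROM dbo.Employees
     FOR XML PATH('')), 1, 1, '') AS NameList;

-- A comma separated value to rows (XML-based split; works before
-- SQL Server 2016, which added the built-in STRING_SPLIT function):
DECLARE @csv VARCHAR(100) = 'Apple,Banana,Cherry';
DECLARE @xml XML = CAST('<i>' + REPLACE(@csv, ',', '</i><i>') + '</i>' AS XML);
SELECT x.i.value('.', 'VARCHAR(50)') AS Item
FROM @xml.nodes('/i') AS x(i);
```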
CONCAT and STUFF Functions in SQL Server 2012
We often need to combine two or more string values to use this combined string
value in reporting. Although there was a way to do that in earlier versions of
SQL Server, starting with SQL Server 2012 we have CONCAT function for this
specific situation. This T-SQL function also takes care of data type conversion
and handling NULLs appropriately. Apart from that, there has been a STUFF T-SQL
function in SQL Server, which you can use to insert/replace a string value into
another string value. The difference lies in the fact that CONCAT allows you to
append a string value at the end of another string value, whereas STUFF allows
you to insert or replace a string value within another string value. I
am going to demonstrate these functions and their real life usages in this
article. For more information, click here.
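A short sketch of the difference between the two functions (the literal values are just illustrative):

```sql
-- CONCAT handles implicit conversion (the 42 below) and treats NULL
-- arguments as empty strings:
SELECT CONCAT('John', ' ', 'Smith', ' ', 42) AS CombinedValue;  -- 'John Smith 42'

-- The + operator, by contrast, propagates NULL:
SELECT 'John' + ' ' + NULL AS NullResult;  -- NULL

-- STUFF inserts or replaces inside a string: starting at position 7,
-- delete 0 characters and insert 'SQL ':
SELECT STUFF('Hello World', 7, 0, 'SQL ') AS StuffedValue;  -- 'Hello SQL World'
```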
Getting Started with Database Engine Tuning Advisor in SQL Server
There are different techniques to optimize the performance of SQL Server
queries, like keeping required and updated statistics, creating required
indexes, partitioning tables, etc., but wouldn’t it be great if we had some
recommendations before we started planning or optimizing queries so that we
didn’t have to start from scratch every time and in every scenario? This is
where you can use the Database Engine Tuning Advisor utility to get
recommendations based on your workload. I will be talking about Database Engine
Tuning Advisor, how it works and its different interfaces, in this article
series. For more information, click here for Part 1 and here for Part 2.
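Besides the graphical interface, the advisor also ships as the dta command-line utility. A hedged sketch of an invocation (the server, database, and file names are hypothetical):

```
dta -S MyServer -E -D MyDatabase -if workload.sql -s TuningSession1 -of recommendations.sql
```

Here -E uses Windows authentication, -if points at a workload script to analyze, -s names the tuning session, and -of writes the resulting recommendations as a T-SQL script.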
Wednesday, October 14, 2015
Importance of Statistics and How It Works in SQL Server
Statistics refers to the statistical information about the distribution of
values in one or more columns of a table or an index. The SQL Server Query
Optimizer uses this statistical information to estimate the cardinality, or
number of rows, in the query result to be returned, which enables the SQL Server
Query Optimizer to create a high-quality query execution plan. For example,
based on this statistical information the SQL Server Query Optimizer might decide
whether to use the index seek operator or a more resource-intensive index scan
operator in order to provide optimal query performance. In this article series,
I am going to talk about statistics in detail. For more information, click here for Part 1 and here for Part 2.
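A few commands you can use to inspect and refresh statistics (the table and index names below are hypothetical):

```sql
-- View the histogram and density information the optimizer uses for an index:
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');

-- Rebuild statistics for a table by reading every row:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- List all statistics objects that exist on the table:
SELECT name, auto_created, user_created
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');
```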
Lead and Lag functions in SQL Server 2012
Have you ever been in a situation where you needed to write a query that had
to compare or access data from the previous or subsequent rows along with the
data from the current row? This article discusses different ways to write these
types of queries and, more specifically, examines the LEAD and LAG analytic
functions, which were introduced with SQL Server 2012, and helps you understand
how leveraging these functions can aid you in such situations. For more information, click here.
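A minimal sketch of the two functions side by side (the table is hypothetical; the third argument is the default returned when no neighboring row exists):

```sql
-- Compare each month's amount with its neighbors in one pass,
-- without a self-join. dbo.MonthlySales(SaleMonth, Amount) is illustrative.
SELECT SaleMonth,
       Amount,
       LAG(Amount, 1, 0)  OVER (ORDER BY SaleMonth) AS PreviousMonth,
       LEAD(Amount, 1, 0) OVER (ORDER BY SaleMonth) AS NextMonth,
       Amount - LAG(Amount, 1, 0) OVER (ORDER BY SaleMonth) AS ChangeFromPrevious
FROM dbo.MonthlySales;
```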
Labels:
SQL Server 2012,
SQL Server 2014,
SQL Server 2016
Backup and Restore strategies in SQL Server
There are several high availability solutions that can be used with SQL Server,
like AlwaysOn, failover clustering, or database mirroring. While these high
availability solutions ensure maximum uptime for your databases, you need to
set up backup and restore strategies to recover the data or minimize the risk of
data loss in case a failure happens. In this article series, I am going to
discuss backup and restore strategies in SQL Server in detail. For more information, click here for Part 1 and here for Part 2.
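A minimal full-plus-log backup and restore cycle looks like this (the database name and file paths are hypothetical):

```sql
-- Take a full backup, then a transaction log backup:
BACKUP DATABASE SalesDB TO DISK = 'D:\Backups\SalesDB_Full.bak';
BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_Log.trn';

-- Restore: the full backup WITH NORECOVERY keeps the database in a
-- restoring state so further backups can be applied; the final restore
-- uses WITH RECOVERY to bring the database online.
RESTORE DATABASE SalesDB FROM DISK = 'D:\Backups\SalesDB_Full.bak' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_Log.trn' WITH RECOVERY;
```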
Importance of Recovery Model in SQL Server and Why It's Important
Have you ever wondered, especially in the case of a data warehousing scenario,
why the transaction log file grows bigger and bigger and sometimes even much
bigger than your actual database's data files? What caused it to happen? How do
you control it? How does the recovery model of a database control the growing
size of the transaction log? These are some of the questions I am going to
answer in this article. For more information, click here.
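As a brief sketch, checking and changing the recovery model is a one-liner each way (the database name is hypothetical):

```sql
-- See the current recovery model of a database:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'DWStaging';

-- Under SIMPLE, the log is truncated at checkpoint (no log backups and no
-- point-in-time restore); under FULL, log records are kept until a log
-- backup runs, so the log keeps growing until you back it up.
ALTER DATABASE DWStaging SET RECOVERY SIMPLE;
```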
Getting Started with Hashing in SQL Server
In my most recent articles, I’ve talked about encryption in detail and
demonstrated its usage at the entire database level with Transparent
Data Encryption and at the column
level with granular/cell-level encryption. In this article, I am going to
discuss hashing in SQL Server and how it differs from encryption. For more information, click here.
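The key difference in one sketch: hashing is one-way, so there is nothing to decrypt; you store the hash and compare hashes later (the literal value below is just illustrative):

```sql
-- Compute a one-way SHA-256 hash of a value; HASHBYTES returns VARBINARY.
SELECT HASHBYTES('SHA2_256', 'MySecretValue') AS HashedValue;

-- Typical use: store HashedValue, then verify a later input by hashing it
-- with the same algorithm and comparing the two hashes, never the plaintext.
```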