SQL Server

Views or Functions?

Someone asked this question on SQLServerCentral, so I thought I’d post the test here for interest.

The question was on which would execute faster: creating a view and then a procedure that accepted a parameter and filtered the view, or creating a table-valued function that took a parameter.

From testing, the answer is the procedure, but not by much. An inline table-valued function comes a very close second and the multi-statement table-valued function waddles in last. Not really surprising. (more…)
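For reference, the three approaches that were compared look something like this. The Orders table and all object names here are my own placeholders, not the actual test schema:

```sql
-- Option 1: a view, plus a procedure that filters it
CREATE VIEW vOrders AS
    SELECT OrderID, CustomerID, OrderDate FROM Orders;
GO
CREATE PROCEDURE GetOrdersByCustomer @CustomerID INT
AS
    SELECT OrderID, CustomerID, OrderDate
    FROM vOrders
    WHERE CustomerID = @CustomerID;
GO

-- Option 2: an inline table-valued function (a single RETURN query)
CREATE FUNCTION fnOrdersByCustomer (@CustomerID INT)
RETURNS TABLE
AS
RETURN (
    SELECT OrderID, CustomerID, OrderDate
    FROM Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Option 3: a multi-statement table-valued function
-- (results are first written into a table variable)
CREATE FUNCTION fnOrdersByCustomerMulti (@CustomerID INT)
RETURNS @Result TABLE (OrderID INT, CustomerID INT, OrderDate DATETIME)
AS
BEGIN
    INSERT INTO @Result
    SELECT OrderID, CustomerID, OrderDate
    FROM Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END
GO
```

The inline function can be expanded into the calling query much like a view, which is why it stays close to the procedure; the multi-statement function has to materialise its results first.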

SQL Server 2008 is here!

SQL Server 2008 RTMs today, 6 August 2008. The announcement was made at the closing keynote of Tech Ed South Africa, about two hours ago.

Congrats to all in the SQL Server team, all the developers, testers, managers and everyone else involved. You guys did an awesome job.

Recovery model and transaction logs

Or “Remind me again why we’re doing these backups in the first place?”

If there’s one topic that comes up again and again on the forums, it has to be the problem of full transaction logs, usually followed by the discovery that the DB is in the full recovery model and no log backups are running.

Quite often, someone will suggest to just truncate the log and shrink it. It’s a dangerous suggestion, not so much for what is said, but for what is not said. To understand why, requires a little background information. First, a look at recovery models and how they affect the transaction log.

Recovery models

I’m going to ignore bulk-logged recovery for now, mainly because it’s less used than the other two and I’m not completely comfortable with how it works. I’m also going to ignore replication and database mirroring, as they complicate the issue.

Regardless of the recovery model that the database is in, transactions are logged into the transaction log before they are considered complete. The entries in the transaction log are considered active until all the data pages that were modified by the transaction have been written to disk. The two processes that write pages to disk are the lazy writer and the checkpoint process. Once all the pages that the transaction modified have been written to disk, the log records for that transaction are marked as inactive.
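As a concrete illustration (the database name and backup path below are placeholders), this is how to check which recovery model a database is in, and the two legitimate ways to deal with a growing log:

```sql
-- Check the recovery model (SQL 2005 and later)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- In the full recovery model, only a log backup lets the inactive
-- portion of the log be reused
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';

-- Alternatively, if point-in-time recovery is not needed, switch to
-- the simple recovery model, where the log truncates on checkpoint
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```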

(more…)

What is a deadlock

I’ve run into a fair bit of confusion in the forums recently on this, so I thought I’d quickly explain what a deadlock is, and how it differs from blocking and locks.

Locks

Locking is a normal part of SQL’s operations. When a row is read by a query, a lock is placed on that row to prevent the row from changing during the read. When a row is modified, a lock is placed on the row to prevent any queries reading the value as it is changing.

Locks can occur at the row level, at the page level or at the table level. As long as they are short lived, they are not a problem in and of themselves.
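On SQL 2005, the locks currently held in a database can be seen with a query along these lines:

```sql
-- Inspect current locks and their granularity in the current database
SELECT resource_type,      -- KEY/RID (row), PAGE, OBJECT (table), etc.
       request_mode,       -- S (shared), U (update), X (exclusive), ...
       request_status,     -- GRANT, WAIT or CONVERT
       request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```

A row with request_status of WAIT is a session being blocked; that’s blocking, not yet a deadlock.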

(more…)

SQL 2008 – More impressions

I spent a little more time playing with SQL 2008 this week, though not as much time as I would have liked. I took a look at filtered indexes and some enhancements and changes around the display of the execution plans in Management Studio.

Filtered indexes

These are an interesting feature. There are many uses for these in larger systems and on larger tables. Normally an index is built over an entire table. With a filtered index, one or more predicates are specified when the index is created and the index only contains data that satisfies the predicates. The columns involved in the predicate do not have to be part of the index key, or the index include columns.
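A quick sketch of the syntax, using an invented orders table where only a small fraction of rows are unshipped:

```sql
-- A filtered index covering only the unshipped orders; the predicate
-- column (ShippedDate) is neither a key nor an include column
CREATE NONCLUSTERED INDEX idx_Orders_Unshipped
ON Orders (OrderDate)
INCLUDE (CustomerID)
WHERE ShippedDate IS NULL;
```

The result is a much smaller index that only has to be maintained when rows matching the predicate change.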

(more…)

SQL 2008 – impressions

I started playing with the RC0 of SQL 2008 yesterday. So far, I like it. There are some things that don’t quite work the way I would like (intellisense) but overall it looks good. I’m going to talk briefly about some of the features that I quite liked.

First, enhancements and changes to the client tools. I’ll talk about some of the new engine features sometime next week.

Multi-server queries

I’ve spoken about this before, and it’s still a feature I really like. The icing on the cake here… the servers involved in the query don’t have to be SQL 2008. They don’t even have to be the same version. It works for SQL 2008, 2005, 2000 and even SQL 7 boxes.

This offers a really quick way for someone responsible for multiple servers of different versions to check settings, change passwords (like sa), create standard databases or tables, etc.
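For example, a trivial query like this, run against a registered server group rather than a single connection, is executed once per server, with Management Studio prepending the server name to each row of the combined results:

```sql
-- Executed against a server group, this returns one row per server
SELECT @@VERSION;
```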

(more…)

It's coming…

There have been some jokes going round here recently about the upcoming release of SQL Server 2009, however it’s starting to look like it’ll be SQL Server 2008 after all.

SQL Server 2008 Release candidate 0 is now available at Microsoft’s download centre.

I’m looking forward to playing with it. Based on the current download speed, I should have it sometime next week.

Parameter sniffing, pt 3

Last, but not least, here’s one final look at parameter sniffing.

In part 1 of this mini-series I wrote about data skew and the problems it can cause with parameters. In part 2, I looked at what happens when the optimiser can’t sniff values. In this post, I’m going to look at a third thing that can cause parameter sniffing problems. It’s fairly obvious once you know about it, but the first time it was pointed out to me I was a little surprised.

So, to recap. When the query optimiser gets a stored procedure to compile, it knows the values of any parameter passed to that procedure. It will then compile the procedure and optimise the queries based upon the value of those parameters. The optimiser cannot sniff the values of variables, because the values of the variables have not been set at the time that the procedure is optimised.
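To illustrate that recap, using the largetable sample from part 1 (the procedure names and the char(3) type here are my own, for illustration):

```sql
-- The parameter's value is known at compile time, so it can be sniffed
CREATE PROCEDURE SearchSniffed @str CHAR(3)
AS
    SELECT * FROM largetable WHERE somestring = @str;
GO

-- Copy the parameter into a variable first and the optimiser has no
-- value to sniff when the SELECT is compiled
CREATE PROCEDURE SearchNotSniffed @str CHAR(3)
AS
    DECLARE @local CHAR(3);
    SET @local = @str;
    SELECT * FROM largetable WHERE somestring = @local;
GO
```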

I’m going to use the same sample code and data as in the first article, to keep things consistent.

From the tests that were done before, I know that the query

select * from largetable where somestring = 'zzz'

executes optimally with an index seek and returns 9 rows. Likewise, I know that the query

select * from largetable where somestring = 'abc'

executes optimally with a clustered index scan and returns 1667 rows.

Now, let’s see if I can get the optimiser to make the wrong choice.

(more…)

Common T-SQL mistakes

I have the pleasure at the moment of doing a code review on some vendor code. No names will be mentioned. I’ve seen better. I’ve seen a lot better. I’m seeing very common mistakes in the code, so, in the interests of my sanity, I’m going to go over a couple of common T-SQL mistakes in the hopes that the next batch of code I get to review doesn’t have these mistakes in…

1. Error Handling

Proper error handling is hard. SQL 2005 has made it a lot easier with the TRY…CATCH blocks, but it still means that everything that can throw an error must be wrapped inside a TRY block, with an appropriate CATCH block to handle any errors.

It was a lot harder on SQL 2000 when all we had to work with was @@Error. What I think was not well understood was which statements set and reset @@Error, and how long a non-zero value persists, leading to code constructs like this:

Insert into SomeTable ...
Update SomeTable SET ...
Delete From SomeOtherTable ...

IF @@Error != 0
    Print 'An error occurred'
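The problem is that @@Error is reset by every statement, so by the time the IF runs it only reflects the DELETE. For contrast, here is what correct handling of the same kind of batch looks like (column names invented for the example):

```sql
-- SQL 2005: any error inside the TRY block transfers control to CATCH
BEGIN TRY
    INSERT INTO SomeTable (ID) VALUES (1);
    UPDATE SomeTable SET ID = 2 WHERE ID = 1;
    DELETE FROM SomeOtherTable WHERE ID = 2;
END TRY
BEGIN CATCH
    PRINT 'An error occurred: ' + ERROR_MESSAGE();
END CATCH

-- SQL 2000: capture @@Error immediately after the statement it
-- refers to, because the next successful statement resets it to 0
DECLARE @err INT;
INSERT INTO SomeTable (ID) VALUES (1);
SET @err = @@ERROR;
IF @err <> 0
    PRINT 'Insert failed';
```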

(more…)

SQL Injection

This is a bit of a rant, so please ignore it if you’re looking for technical info.

There’s been a fair bit of news on SQL injection in the last week or so, mainly because some people figured out a way to automate the exploit.

What scares me is the widespread lack of knowledge of SQL injection. I’m fairly active on a couple of the SQL forums and on Monday this week there were two posts by people who have had their databases hacked via a SQL injection exploit.

If this was a new exploit, I wouldn’t be so disappointed, but it’s not. SQL injection’s been around for years. I first read about it in 2001 when I started in web development.

So why, 7 years later, are people still being hit with it? Why does a quick Google search turn up a number of web sites with entire queries in the URL? Sites including some government organisations, a couple of police departments, online stores and the coast guard. (No, I’m not posting links. Search for yourself if you’re curious.)

Completely preventing SQL injection is not hard. If web pages call the database using properly parameterised calls to stored procedures then SQL injection cannot be done. Set minimum permissions in the database for the web user and it’s even more secure.
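The difference between the two is easy to show. Even where dynamic SQL is unavoidable, sp_executesql keeps the user’s input as data rather than code (the Users table here is invented for the example):

```sql
-- Vulnerable: user input concatenated straight into the statement
-- SET @sql = 'SELECT * FROM Users WHERE Name = ''' + @input + '''';

-- Safe: the input is passed as a parameter, never executed as code
DECLARE @sql NVARCHAR(200), @input NVARCHAR(50);
SET @input = N'O''Brien';  -- whatever the user typed stays data
SET @sql = N'SELECT * FROM Users WHERE Name = @name';
EXEC sp_executesql @sql, N'@name NVARCHAR(50)', @name = @input;
```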

So, why is it that so many sites, new and old, are still vulnerable?

Edit: For some in-depth info on preventing SQL injection, see this blog post on Technet