Author Archive: Gail

Mid-year round up

It’s halfway through 2008 and I’m not sure where the year went. It feels like it was Christmas just the other week. Since we’re at the midpoint, I thought I’d take a quick look back over what has happened so far.

(more…)

SQL 2008 – More impressions

I spent a little more time playing with SQL 2008 this week, though not as much time as I would have liked. I took a look at filtered indexes and some enhancements and changes around the display of the execution plans in Management Studio.

Filtered indexes

These are an interesting feature. There are many uses for these in larger systems and on larger tables. Normally an index is built over an entire table. With a filtered index, one or more predicates is specified when the index is created and the index only contains data that satisfies the predicates. The columns involved in the predicate do not have to be part of the index key, or the index include columns.
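
As a rough illustration (the table, columns and index name below are invented for the example), a filtered index is just an ordinary nonclustered index with a WHERE clause added to the CREATE INDEX statement:

-- Hypothetical table where only a small fraction of rows are still 'Open'
CREATE NONCLUSTERED INDEX idx_Orders_Open
ON dbo.Orders (OrderDate, CustomerID)
INCLUDE (TotalDue)
WHERE OrderStatus = 'Open';  -- the filter predicate; OrderStatus is neither a key nor an include column

Only the rows that satisfy the predicate go into the index, so it stays small, and queries that contain a matching predicate can make use of it.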

(more…)

PASS 2008

I’ve been pondering for some weeks whether or not I will be going to PASS in Seattle this year. Unlike previous years, I’m going to have to pay my own way, and it’s not going to be a cheap trip.

I finally decided that it is worth going. It will probably be the last time I can attend the US conference, so I intend to make the most of it.

SQL 2008 – impressions

I started playing with the RC0 of SQL 2008 yesterday. So far, I like it. There are some things that don’t quite work the way I would like (IntelliSense, for one), but overall it looks good. I’m going to talk briefly about some of the features that I quite liked.

First, enhancements and changes to the client tools. I’ll talk about some of the new engine features sometime next week.

Multi-server queries

I’ve spoken about this before, and it’s still a feature I really like. The icing on the cake here… the servers involved in the query don’t have to be SQL 2008. They don’t even have to be the same version. It works for SQL 2008, 2005, 2000 and even SQL 7 boxes.

This offers a really quick way for someone responsible for multiple servers of different versions to check settings, change passwords (like sa), create standard databases or tables, etc.
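
As a minimal sketch of how it works: open a query window against a registered server group rather than a single server, and Management Studio runs the statement on every server in the group and merges the results, tagging each row with the server it came from. (The DBAUtility name below is made up for the example.)

-- Runs on every server in the group, regardless of version
SELECT @@SERVERNAME AS ServerName, @@VERSION AS VersionInfo;

-- Or check that a standard admin database exists everywhere
SELECT name FROM master.dbo.sysdatabases WHERE name = 'DBAUtility';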

(more…)

It's coming…

There have been some jokes going round here recently about the upcoming release of “SQL Server 2009”; however, it’s starting to look like it’ll be SQL Server 2008 after all.

SQL Server 2008 Release candidate 0 is now available at Microsoft’s download centre.

I’m looking forward to playing with it. Based on the current download speed, I should have it sometime next week.

Stepping down

Sunday saw the last game (for now) of the d20 Modern campaign I’m running. I’m going to miss DMing, but it was for the best. I’m struggling with a bit of burnout and a lack of free time.

The campaign went quite well. The players seemed to enjoy it and that’s the only real measure of any importance. We didn’t get quite as far as I had hoped, but a lot of hints and clues were dropped during the various adventures and I think that all the players have a better idea of the ‘big picture’. They’ve also managed to disrupt the big bad guy’s plans often enough to be irritating.

For the next few months at least, we’ll be playing high-fantasy D&D as we return to the world of Per-rune. Currently our characters are about to disembark ship after a rather exciting trip to the Elven kingdom. All we need is to do some shopping for winter gear and then head inland back to the small town of Kurat, where the whole thing started. Easy, right?

Food for thought

I ran across this interesting article this morning. It’s long, but a very worthwhile read.

While there is a lot more to it, I came away from it with 2 realisations:

  1. To get better at something, the practice has to be difficult.
  2. The longer you work on something, the easier it gets.

Just something to think about over the weekend.

Parameter sniffing, pt 3

Last, but not least, here’s one final look at parameter sniffing.

In part 1 of this mini-series I wrote about data skew and the problems it can cause with parameters. In part 2, I looked at what happens when the optimiser can’t sniff values. In this post, I’m going to look at a third thing that can cause parameter sniffing problems. It’s fairly obvious once you know about it, but the first time it was pointed out to me I was a little surprised.

So, to recap. When the query optimiser gets a stored procedure to compile, it knows the values of any parameter passed to that procedure. It will then compile the procedure and optimise the queries based upon the value of those parameters. The optimiser cannot sniff the values of variables, because the values of the variables have not been set at the time that the procedure is optimised.
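
To make the distinction concrete, here’s a rough sketch using the largetable and somestring names from the sample code in part 1 (the procedure names and the char(3) data type are my own invention):

CREATE PROCEDURE GetBySomeString (@SomeString CHAR(3))
AS
-- The parameter's value is known at compile time, so the plan for this
-- query is optimised for whatever value is passed on the first execution.
SELECT * FROM largetable WHERE somestring = @SomeString;
GO

CREATE PROCEDURE GetBySomeStringVariable (@SomeString CHAR(3))
AS
-- Copying the parameter into a variable hides the value from the
-- optimiser; it has to estimate the row count from the column's
-- density information rather than the histogram.
DECLARE @LocalString CHAR(3);
SET @LocalString = @SomeString;
SELECT * FROM largetable WHERE somestring = @LocalString;
GO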

I’m going to use the same sample code and data as in the first article, to keep things consistent.

From the tests that were done before, I know that the query

select * from largetable where somestring = 'zzz'

executes optimally with an index seek and returns 9 rows. Likewise, I know that the query

select * from largetable where somestring = 'abc'

executes optimally with a clustered index scan and returns 1667 rows.

Now, let’s see if I can get the optimiser to make the wrong choice.

(more…)

Common T-SQL mistakes

I have the pleasure at the moment of doing a code review on some vendor code. No names will be mentioned. I’ve seen better. I’ve seen a lot better. I’m seeing very common mistakes in the code, so, in the interests of my sanity, I’m going to go over a couple of common T-SQL mistakes in the hopes that the next batch of code I get to review doesn’t have these mistakes in…

1. Error Handling

Proper error handling is hard. SQL 2005 has made it a lot easier with TRY…CATCH blocks, but it still means that everything that can throw an error must be wrapped inside a TRY block, with an appropriate CATCH block to handle any errors.
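
In outline, that looks something like this (the column and the error handling in the CATCH block are just placeholders):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO SomeTable (SomeColumn) VALUES ('abc');
    UPDATE SomeTable SET SomeColumn = 'def' WHERE SomeColumn = 'abc';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error raised inside the TRY block transfers control here
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    DECLARE @ErrorMessage NVARCHAR(4000);
    SET @ErrorMessage = ERROR_MESSAGE();
    RAISERROR(@ErrorMessage, 16, 1);
END CATCH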

It was a lot harder on SQL 2000, when all we had to work with was @@Error. What I think was not well understood was which statements set and reset @@Error, and how long a non-zero value persists, leading to code constructs like this:

INSERT INTO SomeTable ...
UPDATE SomeTable SET ...
DELETE FROM SomeOtherTable ...

-- @@Error is reset by each successive statement, so this check only
-- reflects the DELETE; any error from the INSERT or UPDATE is lost
IF @@Error != 0
    PRINT 'An error occurred'
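
@@Error is set after every statement and reset to 0 by the next statement that succeeds, so on SQL 2000 the value has to be captured or checked immediately after each statement that could fail. A rough sketch only:

DECLARE @Err INT;

INSERT INTO SomeTable ...
SET @Err = @@ERROR;
IF @Err != 0 GOTO ErrorHandler;

UPDATE SomeTable SET ...
SET @Err = @@ERROR;
IF @Err != 0 GOTO ErrorHandler;

DELETE FROM SomeOtherTable ...
SET @Err = @@ERROR;
IF @Err != 0 GOTO ErrorHandler;

RETURN;

ErrorHandler:
PRINT 'An error occurred';
-- roll back, log, raise an error, etc. as appropriate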

(more…)

SQL Injection

This is a bit of a rant, so please ignore it if you’re looking for technical info.

There’s been a fair bit of news on SQL injection in the last week or so, mainly because some people figured out a way to automate the exploit.

What scares me is the widespread lack of knowledge of SQL injection. I’m fairly active on a couple of the SQL forums, and on Monday this week there were 2 posts by people who have had their databases hacked via a SQL injection exploit.

If this was a new exploit, I wouldn’t be so disappointed, but it’s not. SQL injection’s been around for years. I first read about it in 2001 when I started in web development.

So why, 7 years later, are people still being hit with it? Why does a quick Google search turn up a number of web sites with entire queries in the URL? Sites including some government organisations, a couple of police departments, online stores and the coast guard. (No, I’m not posting links. Search for yourself if you’re curious.)

Completely preventing SQL injection is not hard. If web pages call the database using properly parameterised calls to stored procedures, then SQL injection cannot be done. Set minimum permissions in the database for the web user and it’s even more secure.
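
The same principle can be shown purely in T-SQL (the Users table and the variables here are made up for the example). Concatenating user input into the SQL string is what opens the hole; passing it as a parameter keeps it as data:

-- Vulnerable: the user-supplied value becomes part of the SQL text
DECLARE @UserInput VARCHAR(100), @sql NVARCHAR(500);
SET @UserInput = ''' OR 1=1 --';   -- a classic injection attempt
SET @sql = 'SELECT * FROM Users WHERE UserName = ''' + @UserInput + '''';
EXEC (@sql);   -- returns every row in the table

-- Safer: the value is passed as a parameter and can only ever be data
EXEC sp_executesql
    N'SELECT * FROM Users WHERE UserName = @UserName',
    N'@UserName VARCHAR(100)',
    @UserName = @UserInput;

With the parameterised form, the injection attempt just searches for a user literally named ' OR 1=1 -- and finds nothing.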

So, why is it that so many sites, new and old, are still vulnerable?

Edit: For some in-depth info on preventing SQL injection, see this blog post on Technet