Author Archive: Gail

Speaking Engagements

It’s going to be a busy six months in terms of conferences and speaking (well, busy for me; I’m just getting used to the whole speaker thing).

TechEd Africa

TechEd Africa is running from the 2nd to the 5th of August in Durban. I’m presenting two sessions there, one on query hints and plan guides and one on evaluating your indexing strategies. I’ll also very likely be helping out in the community lounge and the Ask the Experts area.

PASS Community Summit

I’m presenting two sessions at the PASS Community Summit this year: a spotlight session titled ‘Lies, damned lies and Statistics’ and a general session titled ‘Insight into Indexes’.

The spotlight session will cover column statistics: why SQL creates them, how and when they’re updated, the importance of accurate statistics, some of the problems that can result when they’re not accurate, and some maintenance strategies.

The general session will be looking at what SQL can tell you about indexes: how they’ve been used, what they’ve been used for, what indexes SQL thinks it wants, and how reliable all that information is.

SQL Usergroup

I’ll be presenting at the October meeting of the SQL Server usergroup and will be doing a final dry-run of one of the presentations that I’ll be giving at PASS. I haven’t decided which one yet.

Copyright policy

I really hate to have to do this, but there have been a couple of cases recently that have indicated the necessity of making things explicit.

All of the entries on this blog are copyright by Gail Shaw. The full content of any post may not be published elsewhere without prior permission from the copyright holder (that would be me). Excerpts from blog posts may be posted elsewhere providing they are short (max 25% of the article) and are attributed to me by name and with a link to this site.

The only site that currently has permission to repost full entries is SQLServerPedia.com.

There is now a copyright entry, detailing the full copyright policy, in the sidebar of this blog. I’m also considering adding copyright info to the RSS feeds (though I’ll wait and see if that’s necessary).

If this entry appears in full anywhere other than sqlinthewild.co.za or SQLServerPedia.com, it is republished without permission and should be considered stolen content.

The Lazy Writer and the Checkpoint

Or “What writes what when?”

It’s important to understand how and when SQL makes changes in memory and how and when those changes are written to the data file. It has a big impact on memory use and IO patterns.

Data modification queries (insert, update, delete) always make changes to data pages in memory. In fact, queries in general only operate on in-memory data pages. When a query makes a change to an in-memory page, that page is marked as ‘dirty’, indicating that there are changes on that page that have to be written to disk before the page can be discarded from memory. That writing is the job of two system processes, the lazy writer and the checkpoint.
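
As a quick way of seeing this in action (just a sketch, and it needs SQL 2005 or higher since it uses a DMV), the buffer pool can be queried directly to see how many pages are currently clean and dirty for each database:

-- Sketch: count clean vs dirty pages per database in the buffer pool.
-- Can be slow on servers with a lot of memory.
SELECT
    DB_NAME(database_id) AS DatabaseName,
    SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS DirtyPages,
    SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END) AS CleanPages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY DirtyPages DESC;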

Lazy Writer

The job of the lazy writer is to find dirty pages in the buffer pool, write them out to disk and drop those pages from cache. It does this to keep a certain number of free pages available within the buffer pool for data that may be requested by other queries. The pages that it writes out are ‘older’ pages, ones that haven’t been used for a while.
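
To get a rough idea of how busy the lazy writer is, the Buffer Manager counters are exposed through a DMV as well (again just a sketch; the exact object_name prefix differs between default and named instances, hence the LIKE):

-- Sketch: lazy writer activity and buffer pool health (SQL 2005+).
-- 'Lazy writes/sec' is a cumulative per-second counter, so sample it
-- twice and take the difference to get an actual rate.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Lazy writes/sec', 'Page life expectancy');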

(more…)

Lost: The Database Tales

So, picture the scenario. White sand and palm trees. Tropical sun, warm breeze and crystal clear waters. And a laptop with wifi access to the office.

Hmmm, something doesn’t quite fit there. Though I must admit I really like the idea of telecommuting from a tropical beach. Must speak to the boss about that….

This is probably one of the more creative of the blog memes that go around, this time originally from Tim Ford:

“So You’re On A Deserted Island With WiFi and you’re still on the clock at work.  Okay, so not a very good situational exercise here, but let’s roll with it; we’ll call it a virtual deserted island.  Perhaps what I should simply ask is if you had a month without any walk-up work, no projects due, no performance issues that require you to devote time from anything other than a wishlist of items you’ve been wanting to get accomplished at work but keep getting pulled away from I ask this question: what would be the top items that would get your attention?”

Well, actually that’s almost the situation I’ve been in for the last couple of months. Except the ‘on-the-clock’ part. And the deserted island part. Also, I don’t admin servers, so there’s no pile of outstanding backups or standardisation to do.

Ok, so let’s say that I had a month with no projects and that I couldn’t work on any personal or community stuff in that time. What would be sitting on my list of things to do?

Organise documentation

Pretty much anyone who knows me would say that I’m pretty disorganised, and it’s true. I have IT-related documentation, presentations, saved articles, webcasts and podcasts scattered across two computers and several stacks of DVDs. One of the first things on my to-do list would be to start organising it. Not organise everything (that would take far more than a month), but start organising.

I have SharePoint set up on a server here and the very beginnings of an IT library on it. The first goal would be to get the IT-related articles and saved blog posts into a SharePoint library and categorised.

Learn SharePoint

To go with the previous step, I would study up on SharePoint and how it works. I’m not talking about the interface, how to navigate through it, set up libraries or configure sites. I’m talking about learning how it works, how it scales, how the pieces work together, how the security works. The long-term goal here is to be able to make recommendations to clients on planned SharePoint implementations.

Dig deep into clustering

I’ve dabbled in clustering before, but I’ve barely scratched the surface with it. I know just about enough to get into serious trouble with it. I would spend some of my month getting familiar with clustering, both at the OS level (installing and configuring the cluster) and at the app level (SQL’s interaction with clustering).

WPF

Lastly, if there’s any of the month left, I’d play with Windows Presentation Foundation. Just because.

I’m not going to tag anyone, because I think just about everyone’s done this one. I’m slow to get to it. As usual.

SA SQL Usergroup June meeting

The June meeting of the South African SQL Server Usergroup will be held on Tue 23 June, 18h30 at the Microsoft Offices in Bryanston, 3012 William Nicol Drive. The meeting is a week later than usual due to the public holiday on the 16th. This month Mark Stacey will be talking about Service Broker.

Please let me know if you’re coming, I need numbers for catering purposes.

Look forward to seeing you there.

Deleting the transaction log

Or “Why is my database now inaccessible?”

This is another thing that I’ve seen a few cases of recently on various forums. A DB’s transaction log fills the disk, probably due to a lack of log backups, and then either SQL is stopped and the log file deleted, or the database is detached and the log file deleted. Either way, it’s not a particularly good thing to do.

No, I’ll be blunt, it’s an appallingly bad thing to do. The transaction log is not an optional piece of the database. It’s not like an installation log or activity log where the entries are there just in case anyone’s interested. The transaction log is what SQL uses to ensure, at all times, that the data in the database is transactionally consistent. It’s what gives SQL databases the consistency and durability properties that are required from a relational database engine.
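
If the log has filled the disk because it was never backed up, the far safer fix is to back the log up so that the space inside it can be reused, and only then, if really necessary, shrink the file back to a sensible size. A rough sketch, with ‘SomeDB’, the backup path and the logical log file name all placeholders:

-- Why can't the log space be reused? LOG_BACKUP means it's waiting for a log backup.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SomeDB';

-- Back the log up so the space inside the file can be reused.
BACKUP LOG SomeDB TO DISK = 'D:\Backups\SomeDB_log.trn';

-- Only if the file itself really must be made smaller (size in MB):
-- DBCC SHRINKFILE (SomeDB_log, 1024);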

The transaction log has a number of uses within SQL.

  1. In SQL 2000 and before, the inserted and deleted tables were materialised from the transaction log. This was changed in SQL 2005 and they’re now materialised from the row version store in TempDB.
  2. A transaction rollback uses the transaction log to determine what needs to be undone.
  3. In transactional replication, the log reader uses the transaction log to determine what changes need to be replicated.
  4. Change Data Capture in SQL 2008 uses the transaction log to extract changes made to registered tables.
  5. During restart-recovery, the transaction log is used to replay transactions that had committed before the service stopped but whose changes had not yet been written to the data file, and to roll back transactions that hadn’t completed.

For now, I’m just going to consider the last one.

(more…)

Does Repair_Allow_Data_Loss cause data loss?

Short answer: Yes. That’s why the option has that name.

Long answer:

When CheckDB finds corruption it will, as part of the output, include the minimum level of repair that will fix the errors that it found. In SQL 2005 and higher, it will look something like the following.

CHECKDB found 42 allocation errors and 137 consistency errors in database ‘SomeDB’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (SomeDB).
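
For reference, a plain integrity check (no repair) that produces output like the above can be run as follows; NO_INFOMSGS just suppresses the informational messages so that only the errors and the summary remain:

-- Integrity check only, no repair. 'SomeDB' as in the example above.
DBCC CHECKDB ('SomeDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;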

There are two options for the minimum repair level: REPAIR_REBUILD and REPAIR_ALLOW_DATA_LOSS.

If the minimum level is REPAIR_REBUILD then it means that the corruption was confined to just the nonclustered indexes. This is good. Nonclustered indexes are redundant data and hence can be dropped and recreated (which is pretty much what CheckDB with the REPAIR_REBUILD option does).

If the minimum level is REPAIR_ALLOW_DATA_LOSS then running CheckDB with the REPAIR_ALLOW_DATA_LOSS option will cause data loss. That’s why the option is named like that. How much data will be lost depends on the extent of the corruption.
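
Purely as a sketch of the mechanics (not a recommendation; restoring from a clean backup is almost always the better option), a repair has to be run with the database in single-user mode:

-- This WILL discard data if REPAIR_ALLOW_DATA_LOSS is the level
-- needed to fix the errors found.
ALTER DATABASE SomeDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('SomeDB', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE SomeDB SET MULTI_USER;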

There’s one other case. What about when CheckDB doesn’t specify a minimum level?

CHECKDB found 3 allocation errors and 7 consistency errors in database ‘SomeDB’.

That means that there’s damage to one or more of the allocation pages or there’s corruption in the system tables. That kind of corruption isn’t repairable, so no minimum repair level can be specified.

So, in conclusion, will running CheckDB with the REPAIR_ALLOW_DATA_LOSS option cause data loss? If that’s the minimum level that CheckDB specified, then yes it will.

For more info, see Paul Randal’s blog and an article that I wrote at SQL Server Central.

Edit: There is one case where the repair level repair_allow_data_loss is required, but no data will be lost. This is the case of the incorrect PFS page (see my article at SQLServerCentral for more info). If that is the only error, repair_allow_data_loss will be the minimum level, but no data will actually be lost.

Corruption month

There’s been a rash of database-corruption posts recently. On the two forums that I read, there have been around 12 corruption-related problems so far this month. That’s frightening. Many of the problems were only solved either with some data loss or with a lot of work, or both. So what are the two things that most of these cases had in common?

  1. No backups
  2. No corruption checks

Backups

Having no backups is pretty inexcusable, whether it’s because no backups were scheduled or because the backups were failing. Paul’s written about the importance of the right backups, and I’m not going to repeat him here.

Just having backups is not sufficient. The backups have to be restorable. A backup that’s not restorable is, in my opinion, worse than no backup at all. That means that the jobs have to be checked to ensure that they did run without error and the backups have to be restored somewhere to test them. Ultimately that’s the only way to be 100%, absolutely sure that a backup is restorable.
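
As a rough sketch (paths and logical file names are placeholders), RESTORE VERIFYONLY gives a quick sanity check, but the real test is an actual restore to a throwaway copy:

-- Quick check that the backup is complete and readable. It does not
-- prove the data inside it is consistent.
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\SomeDB.bak';

-- The real test: restore to a copy with a different name and file locations.
RESTORE DATABASE SomeDB_RestoreTest
FROM DISK = 'D:\Backups\SomeDB.bak'
WITH MOVE 'SomeDB' TO 'D:\RestoreTest\SomeDB_Test.mdf',
     MOVE 'SomeDB_log' TO 'D:\RestoreTest\SomeDB_Test.ldf',
     STATS = 10;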

(more…)