In the last couple of weeks, my SQL Server has been occasionally
bogging down during the day. I noticed it when a stored procedure
that usually takes 2 seconds to run timed out.
I ran sp_updatestats in Query Analyzer and everything went back to
normal.
AUTO_UPDATE_STATISTICS is set to ON in the database.
As part of my database maintenance, I do the following (roughly
sketched in T-SQL below):
- Update statistics at 05:30am on weekdays
- Perform a complete backup nightly at 03:00am
- Perform a transaction log backup every half hour between 9am and
7pm on weekdays
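
For reference, here is a rough sketch of what those three jobs amount
to; the database name and backup paths below are placeholders, not my
actual setup:

-- 05:30 on weekdays: refresh statistics in the database
USE MyAppDb
EXEC sp_updatestats

-- 03:00 nightly: complete database backup
BACKUP DATABASE MyAppDb
   TO DISK = 'D:\Backup\MyAppDb_full.bak'
   WITH INIT

-- Every half hour, 9am to 7pm on weekdays: transaction log backup
BACKUP LOG MyAppDb
   TO DISK = 'D:\Backup\MyAppDb_log.trn'
   WITH NOINIT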
My first hunch is that I happened to be running the sp when the logs
were being backed up. However, the logs take less than a couple of
seconds to back up. Shouldn't things go back to normal after the
backup is done?
The load on the server is pretty low, at most 10 users hitting the web
app front end at one time.
Any suggestions or tips? Thank you. (george.durzi@gmail.com)

Erland Sommarskog replies:
It's difficult to say with so little information, but I would guess
that parameter sniffing is part of the picture. When SQL Server
builds the query plan for a stored procedure, it looks at the parameter
values and uses these as guidance when building the plan. This means
that if the procedure is initially called with some odd value, you
may be stuck with a plan that is not good for regular values. Here
is a brief example:
CREATE PROCEDURE get_data @last_key int = 0 AS
IF @last_key = 0
   SELECT ... FROM tbl
ELSE
   SELECT ... FROM tbl WHERE keycol > @last_key
Assume that this procedure is called in the morning to do an initial
load of a screen of some sort, and is then called repeatedly during
the day to update that screen. Assume further that the index on keycol
is non-clustered. If there is no plan cached in the morning, the optimizer
will use a table scan for both cases, as the NC index is not good
for reading all values.
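
To make the scenario concrete, here is a hypothetical call pattern
(the parameter values are made up):

-- Morning: the first call compiles the whole procedure with the
-- sniffed value @last_key = 0, so both branches get plans based on 0,
-- which here means scans, since everything has to be read anyway.
EXEC get_data @last_key = 0

-- During the day: later calls reuse the cached plan, even though a
-- seek on the non-clustered index would suit a value like this far
-- better.
EXEC get_data @last_key = 73542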
When you run sp_updatestats, the optimizer might notice that statistics
have changed and recompile the procedure with the current value,
for which the index is useful.
This is a bit of speculation on my part. You might be able to get some
more clues if you start to inspect query plans. You can also use
DBCC SHOW_STATISTICS before and after to see whether there are any
significant changes in statistics.
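
For instance, something along these lines in Query Analyzer; the
procedure, table and index names are placeholders for whatever your
slow procedure actually touches:

-- Capture the plan the procedure would get, without actually running it.
SET SHOWPLAN_TEXT ON
GO
EXEC get_data @last_key = 73542
GO
SET SHOWPLAN_TEXT OFF
GO

-- Look at the distribution statistics for the index the query should
-- be using, before and after sp_updatestats, and compare.
DBCC SHOW_STATISTICS ('tbl', 'ix_tbl_keycol')
GO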
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland,
Thanks for your tip; I'll look into this further.