Manual T-SQL Scripting VS SSDT (SQL Server Data Tools)

Started by Mr. Analog, October 28, 2014, 12:12:10 PM


Tom

Just tell them there's nothing you can do to speed it up any more. More resources don't dramatically help, and it's up to them to fix their own problems. Let them stew in their own filth :P
<Zapata Prime> I smell Stanley... And he smells good!!!

Lazybones

Quote from: Thorin on October 30, 2014, 11:00:22 PM
Wow, sounds like you have crappy devs.  I would take it personally if someone told me my queries were garbage inefficient queries.

The developers for one of the projects are from an employer we have both worked for in the past.  :-[

Lazybones

Quote from: Tom on October 30, 2014, 11:07:52 PM
Just tell them there's nothing you can do to speed it up any more. More resources don't dramatically help, and it's up to them to fix their own problems. Let them stew in their own filth :P

Problem: I have a boss. My boss contracted the developers... Now, he is actually really good at trusting my judgment on these things, but there is SHOCKINGLY little info on how to correctly size CPU and RAM for SQL servers based on DB size, or even reliable metrics. Lots of maybes and "toss more RAM at it", really. It is like DBAs are taught @%&# ALL about capacity planning.

Tom

Right, you tell your boss that there's not much you can do, except spend way too much time and money throwing more hardware at it. Or he can get his contractors to fix their @%&#. One of the two.
<Zapata Prime> I smell Stanley... And he smells good!!!

Mr. Analog

Well, both options are preferable.

Databases rarely shrink.

Sometimes the first line of defence is to increase the database pool, but that just buys you some time. It might even be worth hiring a consultant DBA to come in and find the bottlenecks.

I think Redgate has some demo stuff as well; that's what led me to them in the first place. I remember getting some good reports out of it that helped me debug things. Even with a supposedly skilled DBA on hand, I still found a lot of stuff we could tweak.

I'll see what I can dig up.

If you really want some advice, I'd like to know a few things:

1. Do you have load-testing automation you can run?
2. Do you have a staging environment where you can simulate load?
3. Barring 1 and 2, do you have scheduled maintenance windows or low-volume times?

If the answer to any of these questions is YES, then run SQL Server Profiler with the TSQL_Duration template against the DB you are testing load against. Stop the run, then look for long-running queries by sorting on duration. Narrow down the worst offenders and run those queries with the execution plan enabled in SQL Server Management Studio.
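
If a full trace is more than you need, a rough DMV query along these lines (standard sys.dm_exec_query_stats stuff, tweak to taste) pulls a similar "longest total duration" list straight out of the plan cache:

-- Top 10 cached statements by total elapsed time since they were cached (times in ms).
SELECT TOP (10)
       qs.total_elapsed_time / 1000                       AS total_elapsed_ms,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count / 1000  AS avg_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END
             - qs.statement_start_offset) / 2) + 1)       AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;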

If you can't eyeball the problem by reading the execution plan, then run the query by selecting Query -> Analyze Query in Database Engine Tuning Advisor.
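
If you'd rather script the plan/stats gathering than click around SSMS, the standard SET options will at least dump the I/O and timing numbers for a suspect query (the actual plan itself you can toggle with Ctrl+M, or capture with SET STATISTICS XML ON):

-- Show per-table logical reads plus CPU/elapsed time for whatever runs between these.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- run the suspect query here, e.g.:
-- SELECT ... FROM ...;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;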

Sometimes the blocks can be really obvious, sometimes not.

The next step is to run the Profiler with the TSQL_Locks template; any locking conditions you spot you MUST address.
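
For spot checks outside of a trace, something along these lines (plain sys.dm_exec_requests, nothing exotic) lists the sessions sitting blocked right now and what they're running:

-- Any row here is a session waiting on a lock held by blocking_session_id.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       t.text      AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;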

Finally, when you've got a list of "top 10" (or more) queries or locking conditions, output a report, send it to your devs, and make those stories high priority.

It's costing YOUR business money to support slow or badly running queries, and while hardware can help, it's only a temporary stopgap.
By Grabthar's Hammer

Tom

Quote from: Mr. Analog on October 31, 2014, 07:41:28 AM
Well, both options are preferable.
Indeed. But it sounds like his boss has already thrown a lot of money at hardware to "solve" the problem, and it hasn't worked.
<Zapata Prime> I smell Stanley... And he smells good!!!

Mr. Analog

Either way it's classified as "time and materials", and in general hardware is the lowest-hanging fruit.
By Grabthar's Hammer

Lazybones

We increased the hardware performance, but the question is TO WHAT END.

We tasked the developers to fix their code and they improved it, but it is still an issue.

As I stated, I DID bring in a DBA to look at things; he pointed out the low-hanging fruit, but it ultimately didn't fix the problem (also, he may not have been the best DBA).

My biggest problem here is that there is no clear answer even after all of the above... I could be hunting for optimizations forever, or spending hundreds of thousands on hardware.

However, if I had a clear metric for what the limiting factor is, it would make this easier... To be honest, running a query analyzer is lower-level than my team would normally go. Hiring a full-time DBA seems out of the question, and, well, the consultants/developers are supposed to be filling that role.

Thorin

Quote from: Lazybones on October 31, 2014, 12:02:32 AM
Quote from: Thorin on October 30, 2014, 11:00:22 PM
Wow, sounds like you have crappy devs.  I would take it personally if someone told me my queries were garbage inefficient queries.

The developers for one of the projects are from an employer we have both worked for in the past.  :-[

Having worked for said past employer, I have first-hand knowledge of how good and how bad some of their devs are.  More importantly, I know that some of their devs just don't care.  Which is a problem; a dev has to care about their work or it's gonna be shoddy.

Quote from: Lazybones on October 31, 2014, 12:04:47 AM
there is SHOCKINGLY little info on how to correctly size CPU and RAM for SQL servers based on DB size, or even reliable metrics. Lots of maybes and "toss more RAM at it", really. It is like DBAs are taught @%&# ALL about capacity planning.

This is really the meat of your complaint, I think. There are no clear-cut rules for how much RAM for how big a DB. Really, there aren't. SQL Server DBAs usually aren't specifically trained; they just fall into the role. Oracle DBAs tend to have better training, only because Oracle is a quagmire and you need to know everything about everything to do anything in it. And even in Oracle, a question of "how much RAM do we need?" would be answered with "let me run about a hundred queries first to see your database usage".
Prayin' for a 20!

gcc thorin.c -pedantic -o Thorin
compile successful

Lazybones

Ya, the key point about training here is that I can't even find anything online better than "max out your license" (Standard supports 32GB; Enterprise goes into the TBs), so that doesn't help.

There are some queries to check memory utilization and try to determine if more RAM would help, but it is all fuzzy.
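
The kind of thing I mean is as simple as reading the Page Life Expectancy counter out of sys.dm_os_performance_counters; a value that stays low under load is supposed to suggest more RAM would help, but even that is only a hint:

-- Page Life Expectancy: roughly how long (in seconds) a data page survives in the buffer pool.
-- A persistently low value while the workload is running is one (fuzzy) sign of memory pressure.
SELECT RTRIM(pc.object_name)  AS counter_object,
       RTRIM(pc.counter_name) AS counter_name,
       pc.cntr_value          AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters AS pc
WHERE pc.counter_name LIKE 'Page life expectancy%'
  AND pc.object_name LIKE '%Buffer Manager%';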

Some applications behave inconsistently, so even with repeated tests it is hard to nail down whether things are improving.

Mr. Analog

The sad reality is that there's no one-size-fits-all solution for stuff like this; it really does sound like you need a dedicated and knowledgeable DBA to pick out each knot in the bundle.

:(
By Grabthar's Hammer

Darren Dirt

Quote from: Mr. Analog on October 31, 2014, 07:41:28 AM

1. Do you have load-testing automation you can run?
2. Do you have a staging environment where you can simulate load?
3. Barring 1 and 2, do you have scheduled maintenance windows or low-volume times?

If the answer to any of these questions is YES, then run SQL Server Profiler with the TSQL_Duration template against the DB you are testing load against. Stop the run, then look for long-running queries by sorting on duration. Narrow down the worst offenders and run those queries with the execution plan enabled in SQL Server Management Studio.

If you can't eyeball the problem by reading the execution plan, then run the query by selecting Query -> Analyze Query in Database Engine Tuning Advisor.

Sometimes the blocks can be really obvious, sometimes not.

The next step is to run the Profiler with the TSQL_Locks template; any locking conditions you spot you MUST address.

Finally, when you've got a list of "top 10" (or more) queries or locking conditions, output a report, send it to your devs, and make those stories high priority.

It's costing YOUR business money to support slow or badly running queries

I really like your approach -- it's the kind of perspective that comes from years of experience (and therefore lots of PAIN). But it's also a logical analysis that applies to more than just SQL efficiency bottlenecks ... way more rational and therefore time-saving than just trial-and-error or adding more hardware resources.

Seeing this conversation at this point sure does make me happy I never chose the full-time-DBA path I coulda taken back in like 2001 or 2002 or whatever it was. Developer ftw. Can't imagine how many more gray hairs I would have if I had chosen less... wisely.
_____________________

Strive for progress. Not perfection.
_____________________

Mr. Analog

I've seen, and had to contend with, a lot of weird stuff over the years. Everything from "not quite working as advertised" connection pooling (that didn't work) to massive, complex queries intended for high-volume use (hint: create a flat, structured table for high-volume queries or be doomed to the whims of whatever optimizer you are stuck with).
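
Just to be concrete about the flat-table idea (the table and column names below are made up for illustration): pre-flatten the joins the hot query needs into one narrow table with a covering index, refreshed on whatever schedule the data can stand, so the high-volume read never touches the big join at request time:

-- Hypothetical flat reporting table: the high-volume query reads this instead of
-- joining Orders / Customers / Products on every request.
CREATE TABLE dbo.OrderSummaryFlat
(
    OrderId      int            NOT NULL PRIMARY KEY,
    OrderDate    datetime2(0)   NOT NULL,
    CustomerName nvarchar(200)  NOT NULL,
    ProductName  nvarchar(200)  NOT NULL,
    TotalAmount  decimal(18, 2) NOT NULL
);

-- Covering index for the common "orders in a date range" lookup.
CREATE INDEX IX_OrderSummaryFlat_OrderDate
    ON dbo.OrderSummaryFlat (OrderDate)
    INCLUDE (CustomerName, ProductName, TotalAmount);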

And even then I only know a little bit; I've worked with people who could tell where the potential bottlenecks were just by looking at stuff.

It's both frustrating and satisfying to try to work out how to make something really efficient AND useful, but a lot of the time it feels like a trade-off between what you want, CPU, storage, and RAM.
By Grabthar's Hammer