SSDT Pros:
-Lets you easily compare database schemas
-Generate a targeted merge script
SSDT Cons:
-BATCHES every goddamn THING
-The generated script cannot be produced as a single batch
-The generated script does not add IF EXISTS checks
-The generated script only updates the specific DB you compared against, not an arbitrary target DB
VS
Manual Script Pros:
-You control the batch statements
-Add control flow where needed
-No weird flip flop toggling of DB features
Manual Script Cons:
-You need to know every database change and include it
-Takes a long time
I've used SSDT a lot in the past but there are two key assumptions that Microsoft needs to change to put other schema compare tools out of business:
1. The ability to automatically batch the whole script
See, SSDT puts a GO (end-of-batch) statement after literally everything. Do I need an individual batch for a PRINT statement? No! Obviously some things need their own batch, like SPROC creation, but then why couldn't SSDT give you the option to wrap proc creation in EXEC sp_executesql instead? Le sigh. Something like the sketch below is all I'm asking for.
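What I'm picturing is something like this rough sketch (dbo.MyProc and its body are made-up placeholders). Since sp_executesql runs its string as its own batch, the proc gets created mid-script without needing a GO of its own:

-- Drop and recreate a proc without a dedicated GO batch (sketch; names are made up)
IF OBJECT_ID(N'dbo.MyProc', N'P') IS NOT NULL
    DROP PROCEDURE dbo.MyProc;

EXEC sp_executesql N'
CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    SELECT 1 AS Placeholder;
END';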
2. The ability to automatically add IF EXISTS statements
This should be obvious, but it's not, and it's very frustrating. Obviously there could be name collisions between things like indexes, but I'd actually like to know about those conflicts: a PRINT statement that spits out the name of something the script thinks exists (or doesn't) would at least make debugging easier! Roughly the kind of guard I wish it would emit is sketched below.
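A sketch of what I mean (the index and table names are made up):

-- Only create the index if it isn't already there; otherwise report the name conflict
IF NOT EXISTS (SELECT 1 FROM sys.indexes
               WHERE name = N'IX_Orders_CustomerId'
                 AND object_id = OBJECT_ID(N'dbo.Orders'))
BEGIN
    CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
END
ELSE
BEGIN
    PRINT 'Skipping IX_Orders_CustomerId: an index with that name already exists on dbo.Orders';
END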
THE VERDICT?
Both ways suck, but in different ways
UGH
If it's Sql Server only, you'd probably enjoy using DB Ghost.
I will continue to use manual scripts.
You haven't covered the problem of when something goes wrong in the middle of a script with GO statements in it - if you're in a transaction you get booted out, so even though the transaction got rolled back and your first thirty changes in your script didn't occur, the rest of the changes will occur and will do so outside of a transaction. This could completely screw up your database! In short, making db update scripts safe is not as simple as starting them with a Begin Transaction statement.
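The only semi-safe pattern I know of looks roughly like this sketch: turn on XACT_ABORT, then have every batch after the first check whether the transaction is still alive and stop executing if it isn't. Still not bulletproof, but at least the later batches don't run outside the transaction.

SET XACT_ABORT ON;  -- any runtime error rolls back the transaction and kills the batch
GO
BEGIN TRANSACTION;
-- ...first batch of schema changes...
GO
IF @@TRANCOUNT = 0
BEGIN
    PRINT 'A previous batch failed and was rolled back; skipping the rest of the script.';
    SET NOEXEC ON;  -- remaining batches are compiled but not executed
END
GO
-- ...next batch of schema changes...
GO
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;
GO
SET NOEXEC OFF;
GO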
Believe me I've seen this in action more than a few times and it's a load of horse pucky
Like I say, the SSDT schema compare tool is 90% of the way to being useful for exactly one thing, and with just a few tiny tweaks it could be an outstanding tool for building scripts.
The reason why it's not? Microsoft wants you to make updates via packages and drink their own brand of Kool-Aid, so no good schema merge tools for YOU.
Sometimes after a long, mentally exhausting day I read the forum for a change in focus, read stuff like this, and am thankful that I am not in a hardcore DBA role.
And sometimes threads are so tech-heavy or full of TLAs and ETLAs that I feel like this guy:
(http://bestuff.com/images/images_of_stuff/210x600/i-love-lamp-31170.jpg?1173111236)
Also I love lamp.
I hear ya
I just get so frustrated when I see stuff that's mere inches from being everything I wanted.
Maybe as a side project I'll write my own comparison / script generation tool, except with hookers and blackjack... in fact forget about the script tool!
Darren, we're not in hardcore DBA roles, either.
In fact, we barely need to remember how to manipulate the SGA settings in Oracle...
I'm guessing we're a lot closer to those roles than most
Finding a DBA who can explain "why" when "max out the RAM" is the only solution they will give to anything, or a developer who bothers to take two seconds to optimize their query or index their damn table, appears to be rare.
In the meantime it's my fault the DB that is over 4TB is slow, even though it is on high-speed disk and write latencies are in the single digits.
Edited some typos and clarifications.
Ugggg
Y'know what's funny? Or rather not funny? For a fraction of hardware upgrade cost you can just buy DB Analysis software that can point out optimisations
Then again I see people trying to use things like ORMs to do seriously heavy operations that should be in PROCs or rethought completely.
ORMs are great for flat structures, for everything else build Views or SPROCs
Quote from: Mr. Analog on October 30, 2014, 09:57:01 AM
Ugggg
Y'know what's funny? Or rather not funny? For a fraction of hardware upgrade cost you can just buy DB Analysis software that can point out optimisations
Then again I see people trying to use things like ORMs to do seriously heavy operations that should be in PROCs or rethought completely.
ORMs are great for flat structures, for everything else build Views or SPROCs
Please send me the link to this tool NOW.
SQL Server Management Studio has a built in tool for tuning that works well:
SQL Server Database Engine Tuning Advisor
You can combine this with SQL Server Profiler: even using some of the default profiler templates you can learn where the bottlenecks are, then feed the results into the Tuning Advisor for suggestions on what to add or change. Sometimes all it takes is the right index here and there.
Redgate also has tools for optimising / analysing SQL Server that I have used in the past and that work pretty well.
There are more out there for sure!
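If firing up Profiler isn't an option, the plan cache DMVs give you a quick-and-dirty version of the same "worst queries" report. A sketch (the numbers are cumulative since each plan was cached, so treat them as a starting point, not gospel):

-- Top 10 statements by total elapsed time, pulled from the plan cache
SELECT TOP (10)
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
          ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;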
Lazy, are you running a single 4TB database on Sql Server? Or is that multiple databases on a single instance?
If you are using Sql Server, then Database Engine Tuning Advisor is a useful tool. But you're right, there aren't enough developers that really, truly, understand how their queries are more or less effective. And DBAs do tend to dumb down the message when talking to people they perceive as not technically competent. But if you have a decent DBA and a decent developer and tell them to work together to figure out ways to make the queries more efficient and tell the developer to make any changes the two of them decide need to be made, suddenly you have empowered, knowledgeable employees who just got told to do their job and do it well.
Developers need to learn what a Query Execution Plan is and why it's important to look at for frequently run queries.
That .001 millisecond collation step on a join might not mean much for 100 records, but when your clients are pulling 100k+ in parallel the cost multiplies significantly.
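If you want to see it for yourself, something like this (a sketch; dbo.Orders and dbo.Customers are made-up tables) shows where the time and reads actually go. In SSMS, turn on "Include Actual Execution Plan" (Ctrl+M) before running it and the plan shows up alongside the results:

SET STATISTICS TIME ON;  -- CPU and elapsed time per statement
SET STATISTICS IO ON;    -- logical and physical reads per table

SELECT o.OrderId, c.Name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerId = o.CustomerId;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;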
Ya, I thought you were referring to something more than query analyzer; been there, done that. More problems than I can solve for the external developers.
I currently have one application that has a dedicated SQL instance / VM with several databases totaling over 4 TB; some of the individual ones are between 1 and 2 TB. I have another application with a dedicated SQL DB instance / VM with a DB around 128 GB.
In both cases we have found HIGHLY inefficient queries (via query analyzer), but that isn't the whole performance story. We nailed some big ones, but digging down further would take too much of my team's time when this is supposed to be the job of the developers; they seem lazy, though, and their solution is to toss more resources on the fire.
We do not have a dedicated DBA. I contracted one in who highlighted a few of the bad queries, but I can't afford a full-time resource. I also can't afford to be providing hundreds of gigs of RAM to DBs just to try to cache the ENTIRE DB in memory once they get to those larger sizes. That is why we have them on SSD-accelerated / high-speed disk.
Wow, sounds like you have crappy devs. I would take it personally if someone told me my queries were inefficient garbage.
Just tell them there's nothing you can do to speed it up any more, that more resources don't dramatically help, and that it's up to them to fix their own problems. Let them stew in their own filth :P
Quote from: Thorin on October 30, 2014, 11:00:22 PM
Wow, sounds like you have crappy devs. I would take it personally if someone told me my queries were inefficient garbage.
The developers for one of the projects are from an employer we have both worked for in the past. :-[
Quote from: Tom on October 30, 2014, 11:07:52 PM
Just tell them there's nothing you can do to speed it up any more, that more resources don't dramatically help, and that it's up to them to fix their own problems. Let them stew in their own filth :P
Problem: I have a boss. My boss contracted the developers... Now, he is actually really good at trusting my judgment on these things, but there is SHOCKINGLY little info on how to correctly size CPU and RAM for SQL servers based on DB size, or even reliable metrics. Lots of maybes and "toss more RAM at it", really. It is like DBAs are taught @% ALL about capacity planning.
Right, you tell your boss that there's not much you can do, except spend way too much time and money throwing more hardware at it. Or he can get his contractors to fix their @%. One of the two.
Well both options are preferable.
Databases rarely shrink.
Sometimes the first line of defence is to increase the database pool, but that just buys you some time. It might even be worth hiring a consultant DBA to come in and find the bottlenecks.
I think Redgate has some demo stuff as well; that's what led me to them in the first place. I remember getting some good reports out of it that helped me debug things. Even with a supposedly skilled DBA on hand, I still found a lot of stuff we could tweak.
I'll see what I can dig up.
If you really want some advice, I'd like to know a few things:
1. Do you have load testing automation you can run
2. Do you have a staging environment where you can simulate load
3. Barring 1 and 2 do you have scheduled maintenance times or low volume times
If the answer to any of these questions is YES, then run SQL Server Profiler with the TSQL_Duration template against the DB you are testing load against. Stop the run, then look for long-running queries by sorting on duration. Narrow down the worst offenders and run those queries with the execution plan enabled in SQL Server Management Studio.
If you can't eyeball the problem by reading the execution plan, run the query through the Database Engine Tuning Advisor (Query -> Analyze Query in Database Engine Tuning Advisor in SSMS).
Sometimes the blocks can be really obvious, sometimes not
The next step is to run Profiler with the TSQL_Locks template; any locking conditions you spot you MUST address.
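If you need a quick snapshot of blocking outside of Profiler, a query along these lines (a sketch; it needs VIEW SERVER STATE permission) shows who is blocking whom right now:

-- Currently blocked requests and the sessions blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       st.text AS blocked_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.blocking_session_id <> 0;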
Finally, when you've got a list of "top 10" (or more) queries or locking conditions, output a report, send it to your devs, and make those stories high priority.
It's costing YOUR business money to support slow or badly running queries, and while hardware can help it's only a temporary stopgap.
Quote from: Mr. Analog on October 31, 2014, 07:41:28 AM
Well both options are preferable.
Indeed. But it sounds like his boss has already thrown a lot of money at hardware to "solve" the problem, and it hasn't worked.
Either way it's classified as "time and materials" and in general hardware is the lowest hanging fruit
We increased the hardware performance, but the question is TO WHAT END.
We tasked the developers with fixing their code and they improved it, but it is still an issue.
As I stated, I DID bring in a DBA to look at things; he pointed out the low-hanging fruit, but it ultimately didn't fix the problem (also, he may not have been the best DBA).
My biggest problem here is that there is no clear answer even after all of the above... I could be hunting for optimizations forever or spending hundreds of thousands on hardware.
However, if I had a clear metric for what the limiting factor is, it would make this easier... To be honest, running query analyzer is lower level than my team would normally go. Hiring a full-time DBA seems out of the question, and, well, the consultants / developers are supposed to be filling that role.
Quote from: Lazybones on October 31, 2014, 12:02:32 AM
Quote from: Thorin on October 30, 2014, 11:00:22 PM
Wow, sounds like you have crappy devs. I would take it personally if someone told me my queries were inefficient garbage.
The developers for one of the projects are from an employer we have both worked for in the past. :-[
Having worked for said past employer, I have first-hand knowledge of how good and how bad some of their devs are. More importantly, I know that some of their devs just don't care. Which is a problem; a dev has to care about their work or it's gonna be shoddy.
Quote from: Lazybones on October 31, 2014, 12:04:47 AM
there is SHOCKINGLY little info on how to correctly size CPU and RAM for SQL servers based on DB size, or even reliable metrics. Lots of maybes and "toss more RAM at it", really. It is like DBAs are taught @% ALL about capacity planning.
This is really the meat of your complaint, I think. There are no clear-cut rules for how much RAM a DB of a given size needs. Really, there aren't. Sql Server DBAs usually aren't specifically trained; they just fall into the role. Oracle DBAs tend to have better training, only because Oracle is a quagmire and you need to know everything about everything to do anything in it. And even in Oracle, a question of "how much RAM do we need?" would be met with "let me run about a hundred queries first to see your database usage".
Ya, the key point about training here is that I can't even find anything online better than "max out what your license supports" (Standard supports 32GB, Ent supports TBs, so that doesn't help).
There are some queries to check memory utilization and try to determine whether more RAM would help, but it is all fuzzy.
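For the record, these are the sort of checks I mean (a sketch): page life expectancy and how much of the buffer pool each database is actually holding.

-- Page life expectancy in seconds (low and falling usually means memory pressure)
SELECT counter_name, cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- How much of the buffer pool each database is holding, in MB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;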
Some applications behave inconsistently so even with repeated tests it is hard to nail down if things are improving
The sad reality is that there's no one size fits all solution for stuff like this, it really does sound like you need a dedicated and knowledgeable DBA to pick out each knot in the bundle.
:(
Quote from: Mr. Analog on October 31, 2014, 07:41:28 AM
1. Do you have load testing automation you can run
2. Do you have a staging environment where you can simulate load
3. Barring 1 and 2 do you have scheduled maintenance times or low volume times
If the answer to any of these questions is YES, then run SQL Server Profiler with the TSQL_Duration template against the DB you are testing load against. Stop the run, then look for long-running queries by sorting on duration. Narrow down the worst offenders and run those queries with the execution plan enabled in SQL Server Management Studio.
If you can't eyeball the problem by reading the execution plan, run the query through the Database Engine Tuning Advisor (Query -> Analyze Query in Database Engine Tuning Advisor in SSMS).
Sometimes the blocks can be really obvious, sometimes not
The next step is to run Profiler with the TSQL_Locks template; any locking conditions you spot you MUST address.
Finally, when you've got a list of "top 10" (or more) queries or locking conditions, output a report, send it to your devs, and make those stories high priority.
It's costing YOUR business money to support slow or badly running queries
I really like your approach -- it's the kind of perspective that comes from years of experience (and therefore lots of PAIN). But it's also a logical analysis that applies to more than just SQL efficiency bottlenecks ... way more rational and therefore time-saving than just trial-and-error or adding more hardware resources.
Seeing this conversation at this point sure does make me happy I never chose the full-time-DBA path I coulda taken back in like 2001 or 2002 or whatever it was. Developer ftw. Can't imagine how many more gray hairs I would have if I had chosen less... wisely.
I've seen, and had to contend with, a lot of weird stuff over the years. Everything from "not quite working as advertised" connection pooling (that didn't work) to massive complex queries intended to be used in high volume (hint: create a flat structured table for high volume queries or be doomed to the whims of whatever optimizer you are stuck with).
I only know a little bit, even; I've worked with people who, just by looking at stuff, could tell where potential bottlenecks were.
It's both frustrating and satisfying to try to work out how to make something really efficient AND useful, but a lot of the time it feels like a trade-off between what you want, CPU, storage, and RAM.