I’m excited to be hosting this month’s T-SQL Tuesday. This month we’re talking about logging. Logging comes in many forms and fashions. If you think about it, when you go to the grocery store with a big list, do you put the items in the cart and then check them off the list? If so, isn’t that a kind of write-ahead log?
I point that out because I don’t want anyone to constrain themselves to talking about logging within just SQL Server. Please bring your ideas for file transfers, report generation, performance gathering, uptime monitoring, and the like. But don’t stop there! This is an open invite to anyone who does anything in the SQL Server community.
If Karla Landrum ( blog | twitter ) wants to explain to us how on Earth she keeps track of all these SQL Saturdays around the world, that’s logging! If Tim Radney ( blog | twitter ) wants to tell us how he makes sure he keeps in touch with all of the chapters he’s responsible for as a PASS Regional Mentor, that’s logging!
How you keep track of blog ideas, white papers you read, or however it is you life-hack *your* SQL world: please share it with us!
Rules
•Your post must be published between 00:00 GMT Tuesday June 12th, 2012, and 00:00 GMT Wednesday June 13th, 2012.
•Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
•Trackbacks should work, but if you don’t see one, please link to your post in the comments section below so everyone can see your work.
Optional, but highly encouraged…
•Include a reference to T-SQL Tuesday in the title of your post
•Tweet about your post using the hash tag #TSQL2sDay
•Consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list.
P.S. Since I’m the host, any PowerShell topic on logging is cool too!
Well, I’d say this T-SQL Tuesday was a success. 33 people contributed blog posts, and two people ended up with two different posts they wanted to submit. I would have had this round-up out sooner, but trying to read 33 blog posts after work is a little much and terribly fun at the same time. The fact that I kept stopping to try out what I had just read also hindered my progress.
If you read through all of these posts and aren’t inspired to go create some report that you’ve been meaning to get around to, I don’t know what will ever inspire you. I’m going to summarize them in the order they were received.
Rob Farley shows us some new features that involve easy mapping in SQL Server 2008 R2 with Report Builder 3.0. VERY COOL!
Michael Coles was part of the reason this was taking so long. STOP what you are doing and DO THIS!!
Brad Schulz keeps the ‘Holy crap, I’ve got to try that!’ theme going with a way to build org charts or any other type of hierarchy you need. I’m not sure who Ella Vader is, but I sure agree with the sentiment about the airlines.
Pinal Dave steps us through with screenshots how to configure a Management Data Warehouse.
Stef Bauer shows us an easy way to set up an HTML table for presenting a cleaner list of events in an email.
Barnaby Self doesn’t want to go off on a rant or anything, but he would like to show us how to build a report using an MDX query (which, as I learned, isn’t something you can do straight out of the box).
Gethyn Ellis goes into a more advanced version of something that I touched on in my post. It’s important to offload your reporting to a different server. If you don’t, the next thing you’ll hear is that your organization wants to store everything in XML.
Jes Borland gives us a method to use when you need to find out what your business needs when building a new report. If you don’t already have one yourself, print this out and pin it to your cube wall. (Oh and use it!)
Jason Strate steps us through how to create a Wait Stat report and even gives us the code!!
Dave Levy shows us how to take the dashboard inside of SSMS and give it to anyone who doesn’t have SSMS installed but wants to know why the server is slow. I will be getting up from my desk now to go show a couple of sysadmins how to do this!
Nicolas Cain gives us an explanation of row versioning. I will be pointing a few of the developers I work with at this post. If that doesn’t get them to stop putting WITH (NOLOCK) in their queries, I’m just going to find myself a lead pipe. (Just kidding, I do not condone workplace violence unless it involves the physical safety of the data; and even then I don’t have to worry about it. That’s why our security guards carry guns.)
Jack Corbett keeps us from looking like fools by showing us how to prevent emailing a report that is blank. Why has no one ever told me how to do this before?! Thanks, Jack!!
Pat Wright shows us how to work with vertical tables. You need to take a look at this. I don’t know if this is allowed, but I asked for a follow-up post because this one hits really close to home on something that I’m working on right now.
Seth Phelabaum saw into the future and came through with something that we’re all going to need after we get inspired and set up a bunch of the reports that these people have shown us are so easy to do: a way to report on your reports.
Chad Miller shows us how to use PowerShell to report on the connection strings inside of your SSIS packages.
Robert Davis points out a feature bug of running SSRS 2008 with smart card login on Windows 2008*. This is not the first time I have seen a bug reported about SSRS causing problems with things that seem completely unrelated, so I guess I will just take comfort that SSRS really likes the concept of securing every angle.
Sankar Reddy answers a freaking excellent question: who put my database offline? He tells us about an undocumented feature, fn_dblog, that lets us see the changes that have been going on in our db. I’m definitely going to spend some time with that one.
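If you’re curious before you click through, the basic call looks something like this (it’s undocumented, so these columns are just the ones I’ve seen; poke at it on a test box first):

-- Read the active portion of the current database's transaction log;
-- the two NULLs mean no starting/ending LSN filter
SELECT [Current LSN]
, Operation
, [Transaction ID]
, [Transaction Name]
, [Begin Time]
FROM fn_dblog(NULL, NULL)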
Michael Swart takes us back to that subject brought up by Jes Borland about gathering requirements and encourages us to be the user.
Paul Randal shows us that, from one database that doesn’t have the functionality we need, we can query internal objects in another database that does.
Mike Walsh, who hosted last month (and I wish he would have warned me just how much reading I was in for!), shows us a feature that I had no idea was in SQL Sentry: it actually reports database growth for you now. I use SQL Sentry about 3 times a week, and man, I’m starting to feel like I need a full-day class on all of the new features they’ve packed into this new version.
Kalen Delaney tells us some history about the COMPUTE BY clause, which I didn’t know was being removed. I have some ways I wish they would improve it instead of removing it, but Adam Machanic tells me ‘COMPUTE BY has been replaced by a feature called Grouping Sets – Grouping Sets produces a single result, with NULLs in place of the values for columns that are being grouped’… but I digress back to Kalen’s terrific post. If you’ve ever tried to explain how a cube works to someone who understands relational data but doesn’t understand dimensional data, you’re in luck, because she has included a diagram that can really help.
Mark Blakey takes us back to the topic of MDX in SSRS reports and shares a rule with us “Rule 1 – Never trust the automatically generated hidden datasets”.
Adam Machanic, the founder of this little shindig, answers the question “Why is Adam wasting my time with this garbage?” He also walks us through some common problems that arise with purely technical reports. In many of the places that I have worked, no one has taken the time to solve these problems. Well, at least now I have somewhere to point them to.
Jorge Segarra shows us how to get some nice reports out of a SQL Server 2008 feature called Policy-Based Management.
Jason Brimhall, who technically did two posts on this topic, shows us how he answered the question “why is my primary filegroup still so large after moving all of the user objects out of that filegroup and into new filegroups?” I actually posed a very similar question to Paul Randal during the BOF lunch at the 2009 PASS Summit. Maybe I should finally get around to blogging everything I wrote down from his answer.
Andre Kamman steps us through how to create an index like you find in the back of a book. Then he takes it further and teaches us how to make it look good.
Adam Haines shows us how to create and email HTML reports.
Stacia Misner shares with us some observations about new properties and globals that you can find in the upcoming release of Reporting Services, including PageName, ResetPageNumber, Globals!PageName & Globals!PageNumber, Globals!OverallPageNumber, and Globals!OverallTotalPages.
Steve Jones walks us through his solution to one of the most common problems I run into when automating reports: two people running the same report at different times and getting different answers.
And finally, I (Aaron Nelson) give some insight into a better way to allow desktop gadgets (and other reports) to report off of your production sales data. I then walk you through how to create a desktop gadget using a third-party tool.
Since this topic was Reporting, I thought I’d point out that a free e-book has been released for SQL Server 2008 R2, and Chapter 9 is on Reporting Services. You’ll probably recognize that one of the names on that book is one of the bloggers who joined us in this round of T-SQL Tuesday. You can read more about the book here.
That wraps up the wrap-up. If there’s something that I missed or a better summary that I could use, please let me know.
Oh, and some advice to future hosts. Consider taking a half-day off.
For this month’s T-SQL Tuesday I thought I’d go with something useful that I’ve been meaning to blog about for weeks now. While listening to the PowerScripting Podcast a few months back, I heard about a product that sounded really cool for simple data visualization and for quickly showing people just how powerful PowerShell is. To get this to work using all the code on the screen you’ll at least have to download the trial version of PowerGadgets; however, even without it, the method I describe is important if you’re new to this type of reporting (desktop gadgets).
Turns out this little piece of software had even more packed into it than I was hoping for. I needed a piece of software that could allow me to pipe data to it to display it in a desktop gadget as a list.
Before I show you how to set up this quick-and-easy tool to create desktop gadgets, I want to issue a warning against setting up this type of thing directly against your production database; read all the way through, or don’t blame me when you bring your production environment down.
What you want to do (at the very least) is have all of the gadgets that you deploy read from a central location, where they only have to scoop up the results of the query that you’re trying to report on, instead of 400 gadgets each running the query itself. An easy way to accomplish this is to run the query in an Agent Job against your production environment (assuming that you don’t have a live reporting copy) and deposit the results in a table. Let’s see if we can step through an example with the AdventureWorks database.
If the query below were how you show your sales for today…
SELECT SUM(SOD.OrderQty) AS 'OrderQty'
, S.StateProvinceCode
FROM Sales.SalesOrderHeader SOH
INNER JOIN Sales.SalesOrderDetail SOD ON SOH.SalesOrderID = SOD.SalesOrderID
JOIN Production.Product P ON SOD.ProductID = P.ProductID
JOIN Person.Address A ON SOH.BillToAddressID = A.AddressID
JOIN Person.StateProvince S ON A.StateProvinceID = S.StateProvinceID
WHERE SOH.OrderDate = '2001-07-01 00:00:00.000' -- <-- This would get GETDATE() or something
GROUP BY S.StateProvinceCode
…you would really want to create an Agent Job to load that into a table and select off of the table. For simplicity we’ll leave out the job and just use the queries.
/* <This is the part that would eventually go into an agent job> */
INSERT INTO [AdventureWorks].[Report].[TodaysSalesOrderByState] (OrderQty, StateProvinceCode)
SELECT SUM(SOD.OrderQty), S.StateProvinceCode
FROM Sales.SalesOrderHeader SOH
INNER JOIN Sales.SalesOrderDetail SOD ON SOH.SalesOrderID = SOD.SalesOrderID
JOIN Production.Product P ON SOD.ProductID = P.ProductID
JOIN Person.Address A ON SOH.BillToAddressID = A.AddressID
JOIN Person.StateProvince S ON A.StateProvinceID = S.StateProvinceID
WHERE SOH.OrderDate = '2001-07-01 00:00:00.000' -- <-- This would get GETDATE() or something
GROUP BY S.StateProvinceCode
/* </This is the part that would eventually go into an agent job> */
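If you need the landing table too, here’s a minimal sketch (the Report schema and the column types are just my guesses; size them to taste):

USE AdventureWorks
GO
CREATE SCHEMA Report AUTHORIZATION dbo
GO
CREATE TABLE Report.TodaysSalesOrderByState
(
  OrderQty INT NULL -- holds SUM(SOD.OrderQty) from the job query
, StateProvinceCode NCHAR(3) NULL -- matches Person.StateProvince.StateProvinceCode
)
GO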
Now you’ll want to create a stored proc that will be used to call the report data. It will also make creating the report nice and clean.
CREATE PROCEDURE [dbo].[rp_MapThat]
AS
SELECT [OrderQty]
, [StateProvinceCode] AS 'State'
FROM [AdventureWorks].[Report].[TodaysSalesOrderByState]
GO
After that all you have to do is to go ahead and create the gadget using this little script:
# Go ahead and add the SQL Snapins
add-pssnapin SqlServerCmdletSnapin100
add-pssnapin SqlServerProviderSnapin100
# add the Gadgets Snapin
add-pssnapin PowerGadgets
invoke-sqlcmd -query "EXEC dbo.rp_MapThat" -database AdventureWorks -serverinstance WIN7\KILIMANJARO | out-map -values OrderQty -label State -title "CountByState" -refresh 0:1:0
And now you’ve got a gadget prototype. I’m sure you can figure out how to turn it into an operational gadget for your company.
Spring Break or not, time marches on and so does T-SQL Tuesday. I’ll be your host this month and we are going with “Reporting”.
The Post must go live between 00:00:00 GMT on Tuesday the 13th of April and 00:00:00 GMT on Wednesday the 14th (also known as 8pm EDT, 5pm PDT).
You Don’t Know What T-SQL Tuesday Is Yet?
Adam Machanic (also on twitter as @AdamMachanic) had a great idea 5 months back: invite new and existing SQL Server bloggers to post about the same topic on the same day. The results have been excellent – diverse skill sets and data-related job roles all posting from differing perspectives on the same issue.
Since this is still early in the game, a quick list of the post roundups from the first 4 topics:
#001 Adam Machanic kicked things off with date/time tricks
#002 Adam Machanic collected our puzzling situations
#003 Rob Farley had us explore relationships
#004 Mike Walsh wanted to glean the ins and outs of IO
Reporting is a potentially huge topic so here are just a handful of ideas that might get you started:
Server Performance Reporting
Technologies to help offload reporting
Reporting services tricks
T-SQL tricks for presentation and formatting
Database design considerations for reporting
Reports for executives
Reporting from Oracle in SSRS
Don’t forget that the DMVs are pretty much qualified as ‘Reports’ so those count too!
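To show you just how low the bar is, here’s a quick sketch of the kind of DMV ‘report’ I mean (my example, not a requirement; runs anywhere on SQL Server 2005 and up):

-- Top 10 wait types by total wait time: an instant, zero-setup performance report
SELECT TOP 10
  wait_type
, waiting_tasks_count
, wait_time_ms
, wait_time_ms - signal_wait_time_ms AS resource_wait_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC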
Pesky little rules…
Once again, please note the time is in GMT (occasionally called UTC). I also echo earlier encouragements to feel free to write your post ahead of time and schedule it. The rules are about when the post publishes, not when it is written. These rules are the same as the previous couple of T-SQL Tuesdays, but I’ll recap:
The post must go live between 00:00:00 GMT on Tuesday the 13th of April and 00:00:00 GMT on Wednesday the 14th (also known as 8pm EDT, 5pm PDT). If it isn’t, it can’t be included in the round-up post.
Your post must link back to this post (Trackback or Comment).
“T-SQL Tuesday #005” MUST be in the title of the post.
It is your responsibility to verify the trackback or comment appears here on this post. If you don’t see your trackback add your own comment with a link back to your T-SQL Tuesday post and it will be included in the roundup.
Not a rule but a great idea. A lot of the folks who read and participate are on twitter. Follow the hashtag #TSQL2sDay and when your post goes live, tweet a link to it with that tag.
Want To Host?
All you have to do is participate in at least two events and let Adam Machanic know. You can tweet him or leave a comment on his blog, all described in his first T-SQL Tuesday Invitation.
If you need any more clarification, leave a comment. I will respond in the comments or update this post if necessary. Have fun thinking of a topic and I look forward to reading the posts!
Please be aware that I haven’t had a chance to double-check the technical details of this post before the deadline so if you see an error please feel free to comment about it. The events happened years ago but it’s funny how history has a way of repeating itself.
Because I have too few hours in each day and I still suck at this blogging thing, I am combining my MacGyver Moment and TSQL2sDay posts into this one. So here we go.
The Beginning: Once upon a time in a conference room on a very high floor sat over a dozen technical people trying to quickly fix a server that had begun crashing the day before. This company was pretty big, and their data was massive. Because of this we only worked with summary rows, and yet the numbers were still enough to make you stop and shake your head every once in a while. For instance, one time the daily import process failed just a few percent in, and it already had over 400,000 rows cleansed and loaded.
To set the scene a little: We had developers, data folks, server experts, network experts, a SAN expert, the SAN expert’s boss, application SMEs, business liaisons, directors, and the company’s TAM from Microsoft. Oh and another SAN expert who also doubled as a Microsoft SQL Server MVP who just happened to specialize in High Availability (specifically clustering). This conference room had been turned into a War Room and was going to stay that way for as long as it took until the problem was fixed.
The Problem: A problem child of an application that already had a reputation for crashing every once in a while had now started crashing every couple of hours. I don’t know the exact specs of the hardware anymore, but it was pretty close to the top of the line: something like 16 LPs and 16 gigs of RAM, backed by nearly 2 dozen disks on a SAN that held over 200 disks. This server had a twin, and based on the error logs, that was part of the problem. It was in a single node failover cluster, and all of a sudden it had started logging 15,000-millisecond timeout errors while trying to check on its disk resources. This timeout in turn was causing whichever node was active to freak out and think that it had lost contact with the SAN altogether.
The Situation: While the overall SAN supporting this server was large by a lot of companies’ standards, it was actually just a run-of-the-mill small SAN at this place. In fact, we later ordered up another one that looked just like it (only 2 years faster in every aspect), but at the time we kept on being told how solid this particular SAN was. There were a lot of moving parts, and we were a fresh set of eyes being brought in. One of the moving parts that we had taken note of was that another server, on different disks on the same SAN, had recently been rebuilt as a 64-bit server. We took a look and said, “I’m sorry, but your SAN just can’t support the needs of this SQL Server.” You would have thought I had just told the SAN guys that I had caught them cross-dressing or something like that. They assured me that “No [little] Windows box” could ever take down their mighty SAN. Regardless, we were stuck with that particular SAN for at least the next couple of weeks and needed to come up with something to help.
The Other Problem: We asked for all the IO stats they could give us, but unfortunately there was something about a new version that needed to be installed, and needing to wait for an outage window, and all that. Bottom line: we were stuck with some total IO numbers over a time frame wide enough that they weren’t all that useful. However, being a glorified mathematician, I took all the numbers I had and tried to figure out the size of the ‘unknown’ portion by subtracting out the three servers I could get stats for myself. As it turned out, in one particular time interval that was just enough information for me to notice a trend. It wasn’t much, but it was more solid than anything else being offered up (or at least I felt so), and the situation was, to say the least, a little tense.
The Lame-Brained Idea: One of the server experts mentioned that he could “borrow” some RAM from a server that hadn’t gone into production yet, and we could stick that in the servers (plural, because they were a failover cluster) that were crashing to see if that would fix it. Given the error message mentioned above, I just didn’t think this would work. I had come up with something that didn’t sound right even as I said it, but par for the course, I said it anyway. I asked them to humor me and stick the “borrowed” RAM into a box that wasn’t crashing: that other box in the situation that had recently been rebuilt as 64-bit. They laughed at my idea, and I don’t blame them. Unless you stepped through the situation in the exact same order as I did, this was like saying “let’s just reboot and see if that fixes it” (oh wait, it does reboot every time it crashes). It’s easy to decide to stick the only spare RAM you have into a server that’s crashing; it’s a little bit tougher to make the call to bring down a completely different system and have them stick the RAM in that one. If you’re wrong, you’ve just caused a second outage for no reason.
Stupid Windows Guys: That’s what they must have thought when they agreed to give my idea a try. We had all noticed the crashes had been happening almost like clockwork. Almost, but no one could tie them back to anything else happening on that machine. As we all know, almost only counts in horseshoes and hand grenades. And, occasionally, wild-ass guesses. A funny thing happened, though. We had gotten to the point where we could guesstimate the next crash to within 5 or 10 minutes, only it didn’t happen this time.
The Hidden Pattern: Now, I’ve always enjoyed learning about the storage engine internals, but I sure ain’t no Paul Randal. I am a quick study and love a good disaster, though. What I had found was that there was a cube being built every couple of hours on the newly minted 64-bit server, and while the system did a good job of still letting you query the old cube while the new cube was being built, there would inevitably come a point where something caused the SSAS engine to flush ooh-gobs of data back down to the disk. More data than the total size of the cubes, even. The SAN ended up getting saturated with gigs and gigs of non-stop writes (and I hear with RAID 5 that’s like a bad thing, but maybe it’s just a rumor).
The Takeaway: Make sure you know what tools your SAN admin has available, and more specifically, find out what they can collect on a continuous basis for weeks on end; you might be surprised by the answer. The disks your server relies on may be fine, but the controller cards accessing them might be getting saturated by something else. You may need to purchase some additional software or something. Look for solutions in places where they might not make sense unless you hold your head just right. The error log is your friend. And above all, just because someone says it’s not possible, don’t completely believe them until they prove it to you.
Tag-You’re-it: To keep the fun going I’m tagging Peter Schott and Matt Schultz (Matt blame Jorge) to find out what their MacGyver moments have been. Thank you Geoff Hiten for tagging me.
Today’s post about changing your Max Server Memory setting is to answer the call from Adam Machanic about Puzzling Situations.
A couple weeks before I headed out to the 2009 PASS Summit, I encountered a puzzler of my own. One of our servers issued an alert that it had an extremely low Page Life Expectancy (PLE); like 16. All of the databases on this server had recently migrated from an older 32-bit server with 4 GB of RAM to their current 64-bit server with 8 GB of RAM. As luck would have it, this was the source of my puzzling situation. When we migrated the dbs, we inadvertently transferred the memory configuration used on their old 32-bit server home. Who made this classic rookie error?? Yours truly. I was doing two server migrations at once and bobbled the checklists.
I rectified the situation by increasing the max server memory setting from 3 GB to 6 GB:
sp_configure 'max server memory (MB)', 6144
RECONFIGURE
Want to guess what happened when I ran the RECONFIGURE command?
I watched as the perfmon counters immediately went down! They indicated that memory usage had dropped rather than increased.
As it turns out, in SQL Server 2005 (and in SQL Server 2008 as far as I know), when you run RECONFIGURE for anything you dump the procedure cache. So, for example, if you were to change the setting for “Web Assistant Procedures”, you would dump the procedure cache. If you were to change the setting for “fill factor (%)”, you would dump the procedure cache.
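If you want to see this for yourself on a test box, here’s a minimal sketch (it assumes ‘show advanced options’ is already on, and 6144 is just the value from my server above):

-- How many plans are cached right now?
SELECT COUNT(*) AS plans_before FROM sys.dm_exec_cached_plans;

-- Re-apply the existing value; it's the RECONFIGURE that dumps the cache
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;

-- This should come back dramatically lower
SELECT COUNT(*) AS plans_after FROM sys.dm_exec_cached_plans;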
So beware before you change a configuration setting in your Production environment and run RECONFIGURE.
So there’s my Puzzling Situation for T-SQL Tuesday #002.
Adam Machanic told us all about a new craze sweeping the SQL Blogosphere Nation last week, and that is T-SQL Tuesdays. Real quick, the way it works is that on Patch Tuesday everyone who’s got something to say about the topic of the month publishes a blog post about it, and the “host” of the topic recaps all of the various blog posts at their site. OK, now onto the topic.
Because I think there might be a few people blogging about this today, I will keep mine short-and-sweet and go over 5 main points of the old-school DateTime data type.
Before I start, I want to remind you that DBAs (er, Data Professionals) are a little different: a lot of us think in terms of largest-to-smallest unit when we think of DateTime, which matches the way you think of a standard number. Right now it is “200912051505” or “2009-12-05 15:05”.
Selecting and aggregating all of the rows in a table using Styles: this can be a little problematic if you are selecting a date that looks something like this one, “2009-12-05 15:05”. One of my favorite ways to get around this is with the CONVERT function, changing it to a VARCHAR and then using a Style. Here’s what I usually do.
Run this query in AdventureWorks and you’ll notice that it also pulls in time which might be something that we want to avoid:
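Something along these lines, grouping on the raw DateTime column:

SELECT COUNT(*) AS '# of Orders'
, OrderDate AS 'Order Date'
FROM Sales.SalesOrderHeader
GROUP BY OrderDate
ORDER BY OrderDate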
So what I do is CONVERT it to VARCHAR and then add Style 112 to it
SELECT COUNT(*) AS '# of Orders'
, CONVERT(VARCHAR, OrderDate, 112) AS 'Order Date'
FROM Sales.SalesOrderHeader
GROUP BY CONVERT(VARCHAR, OrderDate, 112)
ORDER BY CONVERT(VARCHAR, OrderDate, 112)
Now this isn’t overly readable, but it sorts great. Another problem with it is that it doesn’t paste into Excel real well. For pasting into Excel and having it quickly recognized as a DateTime field, I use Style 110, but there’s a catch: Style 110 pastes into Excel fine but it doesn’t sort properly, so I end up having to keep my Style 112 column so that I can sort on it.
SELECT COUNT(*) AS '# of Orders'
, CONVERT(VARCHAR, OrderDate, 110) AS 'Order Date'
, CONVERT(VARCHAR, OrderDate, 112) AS 'Order Sort'
FROM Sales.SalesOrderHeader
GROUP BY CONVERT(VARCHAR, OrderDate, 110)
, CONVERT(VARCHAR, OrderDate, 112)
ORDER BY CONVERT(VARCHAR, OrderDate, 112)
Selecting just the time with Style 108 is another trick I have used in the past. A quick and easy way to extract just the time out of a DateTime field is to CONVERT it to VARCHAR, making sure to specify a length of 5. (If you also want the seconds, you need to make the length 8.)
SELECT GETDATE() AS 'GetDate'
, CONVERT(VARCHAR(5), GETDATE(), 108) AS 'GetTime'
And now for my final trick: selecting the date of the Sunday of the week (or any recurring day of the week).
Sometimes – especially in Business Intelligence – you want to group data by week but you don’t want to use DATEPART (WK, OrderDate) because that produces a number and you might want something more like an actual date. In this case what I typically do is just use this: DATEADD(DAY, 1-DATEPART(WEEKDAY, OrderDate), OrderDate)
(Now you can pick the Saturday of that week just as easily by swapping out that 1 for a 7.)
SELECT COUNT(*) AS '# of Orders'
, DATEPART(WK, OrderDate)
, DATEADD(DAY, 1-DATEPART(WEEKDAY, OrderDate), OrderDate) AS 'Sunday of the Week'
, DATEPART(WEEKDAY, OrderDate) AS 'Day# of Week'
, CONVERT(VARCHAR, OrderDate, 112) AS 'Order Date'
FROM Sales.SalesOrderHeader
GROUP BY DATEPART(WK, OrderDate)
, CONVERT(VARCHAR, OrderDate, 112)
, DATEADD(DAY, 1-DATEPART(WEEKDAY, OrderDate), OrderDate)
, DATEPART(WEEKDAY, OrderDate)
ORDER BY CONVERT(VARCHAR, OrderDate, 112)