Backups are essential to a successful business model. That statement may or may not spark some debate, but at the end of the day, if a data professional does not have some form of backup in place for the business's needs, you may, no, you will, feel the pain. It may not happen today, tomorrow, or next week, but you can guarantee with 100% certainty that at some point in your career you will need a backup of your database.
Let me start off by asking a very simple question: "Do I have to take a backup?" The answer to that is yes, yes you do. If you are a data professional, then you should care about your data enough to back it up in some form or fashion.
Types of Backups
Full Backup – this type of backup contains all the data for a specific database.
Differential Backup – think of this backup as what its name states; it contains only the data that has changed since the last full backup (its differential base). You will find these backups to be smaller in nature than full backups.
Transaction Log Backup (T-Log Backup) – this type of backup is a record of all transactions that have been performed against the database since the transaction log was last backed up. These backups are typically taken on a much more frequent basis.
**Note** – differential and transaction log backups both depend on a full backup having been taken first.
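As a sketch of the three types above, assuming a hypothetical database named `SalesDB` and a local backup path (both placeholders for your own names):

```sql
-- Full backup: all the data in the database (the base for the other types)
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak';

-- Differential backup: only what has changed since the last full backup
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL;

-- Transaction log backup: all log records since the last log backup
-- (requires the FULL or BULK_LOGGED recovery model)
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log.trn';
```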
Depending on how extensive your business model is some companies will rely on backups for their disaster recovery planning. Whether you log ship, utilize always on, restore databases periodically etc. backups can and will always be an essential part of disaster recovery.
Most people don’t realize that they can tune their backups. One way to do this is by turning on a couple of trace flags and increasing throughput. Below are two statements you can utilize.
DBCC TRACEON (3605, -1) and DBCC TRACEON (3213, -1)
Those two statements tell you (in your error log) what the backup settings are currently set to.
The buffer count and max transfer size are the two settings you want to check. Make note of what the settings are initially; then, when backing up your database, whether by a stored procedure or your method of choice, you can include the following code.
, BUFFERCOUNT = 800
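Putting the pieces together, a test-environment sketch might look like the following. The database name, path, and values are placeholders, not recommendations; what works depends on your hardware, so compare against the defaults the trace flags report in your error log:

```sql
-- Write backup/restore configuration details to the error log
DBCC TRACEON (3605, -1);
DBCC TRACEON (3213, -1);

-- Hypothetical tuned backup; values are illustrative only
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH
    BUFFERCOUNT = 800,          -- number of I/O buffers used by the backup
    MAXTRANSFERSIZE = 4194304;  -- 4 MB per transfer (the maximum allowed)
```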
**Note** – never take code from the web and execute it in production. Utilize this in a testing environment to see how it performs.
Wait, what? You mean I need to test my backups? Let me pose the question another way: if you take a solid backup and store it for a certain period of time, how do you know whether you can restore it? Taking backups is only half the process. Early in my career I used to think that simply having a backup made me golden compared to the people who take no backups at all. Sure, that is somewhat true, but the flip side is that I was missing the bigger picture: periodically test your backups. In a perfect world, an automated process would restore backups to an isolated environment and fire off an alert whenever one could not be restored. Most shops don't or can't go to that extent, so at a minimum, periodically test your backups for validity. Not only will it prove that your backups are working, it will also keep your skills honed in the restoration process.
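A minimal sketch of that testing idea, again with hypothetical names (the logical file names in the `MOVE` clauses are assumptions; check yours with `RESTORE FILELISTONLY`):

```sql
-- Quick sanity check: confirms the backup set is complete and readable
-- (useful, but not a substitute for an actual test restore)
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\SalesDB_Full.bak';

-- The real test: restore to an isolated server under a different name
RESTORE DATABASE SalesDB_Verify
FROM DISK = N'D:\Backups\SalesDB_Full.bak'
WITH
    MOVE N'SalesDB'     TO N'D:\Test\SalesDB_Verify.mdf',
    MOVE N'SalesDB_log' TO N'D:\Test\SalesDB_Verify.ldf';
```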
Backups – they are important. As with anything in your data professional career, take this concept seriously. If you aren't backing up your data, then I suggest you start. If you are backing up your data, are you sure you can restore it? Are your backups taking forever? Perhaps you can tune them. I tell you what…keep reading below and you can check out what some of my colleagues have to say about backups. Enjoy!
- Julie Koesmarno: will return shortly; check back!
- Jeffrey Verheul: Speeding up your backups
- Mickey Stuewe: Transaction Log Backups for the Accidental DBA
On a SQL Collaboration Quest
Four SQL professionals gathered from the four corners of the world to share their SQL knowledge with each other and with their readers: Mickey Stuewe from California, USA, Chris Yates from Kentucky, USA, Julie Koesmarno from Canberra, Australia, and Jeffrey Verheul from Rotterdam, The Netherlands. They invite you to join them on their quest as they ask each other questions and seek out the answers in this collaborative blog series. Along the way, they will also include other SQL professionals to join in the collaboration.
This month the nod goes to none other than Jason Strate (Blog|Twitter). A few years back I sat in on one of Jason’s sessions at PASS Summit. From attending that session I found my way to his blog series called Index Black OPS which helped me tremendously, and I’ve carried some of the methodology since then.
Jason works for Pragmatic Works which is in and of itself a good company; what I’ve seen over the years that resonates with me is an extreme work ethic sprinkled in with some SQL Karaoke madness. A real down to earth guy who has a genuine love for helping people.
On this note I strongly suggest you check out his blog; he has some stellar information over there around several topics:
Don’t limit yourself to reviewing the topics; make sure you check out the resources and publications too.
Like Jason, many SQL family members contribute on a daily basis in sharing their knowledge and helping the community grow. It’s time we (myself included) start paying homage and respect to those that give selflessly day in and day out sacrificing a lot to make our community one of the best there is.
Thanks Jason for being an impact player in our community.
Stay tuned to next month to catch Part 3 in the series of Impact Players.
This month's T-SQL Tuesday is hosted by none other than Kenneth Fisher (B|T). His topic revolves around security and how you manage it. There probably couldn't be a more fitting topic, especially with the many breaches we have had lately, both known and unknown.
With that said I want to take this time to expound on a wider variety of topics instead of diving into specific targeted areas within SQL.
When I first heard this topic I immediately drifted to thoughts such as:
- Vendor Apps
- Breaches within
- Password Strength
Countless times over the years I have seen, reviewed, fixed, and contemplated security within SQL that was simply an afterthought. Security, whether role based, AD groups, etc., should be worked into any project plan. If you have ever inherited a system only to discover that 600 users have sysadmin access, you know how detrimental that can be to the data within.
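If you have just inherited a system, one quick place to start is seeing who holds sysadmin. A simple sketch against the server-level catalog views:

```sql
-- List every login that is a member of the sysadmin fixed server role
SELECT sp.name,
       sp.type_desc,
       sp.create_date
FROM sys.server_role_members AS srm
JOIN sys.server_principals AS sr
  ON srm.role_principal_id = sr.principal_id
JOIN sys.server_principals AS sp
  ON srm.member_principal_id = sp.principal_id
WHERE sr.name = N'sysadmin'
ORDER BY sp.name;
```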
Being a DBA means you have great responsibility. Every single database is under your care; own it. Each day someone will be trying to access that database; at least, that should be your mentality, especially with any production environment.
A lot of us are creatures of habit. It is very easy for a data professional to fall into the trap of becoming accustomed to daily routines. Security should not fall into this category; I repeat, security should not fall into this category. Do you know who has access to your databases and why? Do you know which user accounts are tied to which groups? If you can't answer these questions, then you may find yourself in this category.
I like this one: how many of us validate our security measures? Do we take any proactive approaches to see just how safe our data is? Maybe you rely on an outside third party to see if they can hack in; whatever the case may be, it would behoove us as a group of data professionals to actively test our systems looking for points of entry. I will be completely honest; if you aren't, you can guarantee someone else is.
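One small validation check you can run today is looking for orphaned database users, accounts left behind when a login was dropped or the database was moved. A sketch, assuming SQL Server 2012 or later for the `authentication_type_desc` column:

```sql
-- Database users with no matching server-level login (orphaned users)
SELECT dp.name AS orphaned_user,
       dp.type_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
       ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type IN ('S', 'U')  -- SQL users and Windows users
  AND dp.authentication_type_desc = N'INSTANCE';  -- expected to map to a login
```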
Yes, I am a vendor installing an app your company purchased, and we will need sysadmin rights on the box or cluster. Um, yeah, you go right ahead – NOT. I hope by now, as a DBA, you have strategies in place where you work with the vendor directly or have some form of process that allows for tracking of such activity. Remember, these databases are yours if you maintain them; you be the gatekeeper, not the other way around. Don't let anyone on your system without your knowledge, and you had better know what kind of data is on your system and who is accessing it.
If you aren’t careful, all your eggs will be in the “protecting from the outside” basket. Yes, potential threats are rampant from people both stateside and abroad; with that said, however, have you ever thought about what may be at risk within your own walls? Do you have safeguards in place for co-workers and fellow employees? Security cannot be thought of only in terms of outside threats; you need to prepare for both outside and inside threats. To take it a step further, if you are on a DBA team, is your team being audited to keep everyone honest? The data should be your top priority.
These little rinky-dink passwords aren’t cutting it, guys. Ensure you are following best practices and standards when setting up password strength. The easier you make it, the easier it is for threats and breaches to occur. Are the passwords on your systems set to be changed periodically? But that would require a lot of work – yes, and when you sign up to be a DBA or data professional, you take on great responsibility.
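For SQL logins, you can see at a glance which ones are opting out of the Windows password policy or expiration rules. A quick sketch:

```sql
-- SQL logins not enforcing the Windows password policy, or that
-- never expire -- candidates for a closer look
SELECT name,
       is_policy_checked,
       is_expiration_checked
FROM sys.sql_logins
WHERE is_policy_checked = 0
   OR is_expiration_checked = 0;
```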
Security is one area where you cannot be lackadaisical. It is a crucial part of SQL, or any platform for that matter, that usually becomes an afterthought. If you are in a shop, you should review your security guidelines, and if you don't have any, I suggest you take the initiative and create some. Without proper security, one's business could be jeopardized, and once issues arise, what would become of the company's reputation, or your reputation? Be proactive, make it yours, own it, and get it done.
What can you do to make your SQL server healthy?
The theme is broad, and there are plenty of tips and tricks that can be said. I’ll only touch on a few that may be of some use in this upcoming year and hope they can resonate with someone in the community.
Policy Based Management and Central Management Server are two useful resources at the data professional's disposal that can aid a shop running multiple SQL Server instances.
PBM allows you to execute a set of standard and custom policies against one server or a set of servers, allowing you to receive custom daily automated reports. Why not have this at your disposal to see what is going on with your servers before you even get into the office?
CMS allows for a one-stop shop for all your servers. One thing I like about CMS is the ability to execute scripts against multiple servers at one time; with that said, with access such as this comes great responsibility, and it is not for the faint of heart. It's imperative you truly understand what you are working with before getting involved, but it is a great resource to have.
If you aren’t monitoring your servers, then why not start today? Some ideas you can take into consideration (but are not limited to) are:
- Job notifications on event of failure
- Space limitations
- Wait Stats
- Index Fragmentation
- User\Login information
- General baselines
- New servers brought online
Don’t end with these; the intent is to get you to think about what might work for you at your shop.
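As one concrete example from the list above, job failure notifications: even before wiring up alerts, you can pull recent failures straight from the SQL Server Agent history in msdb. A sketch:

```sql
-- Recent failed job steps from the SQL Server Agent history (msdb)
SELECT j.name AS job_name,
       h.step_name,
       h.run_date,
       h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
  ON h.job_id = j.job_id
WHERE h.run_status = 0            -- 0 = failed
ORDER BY h.run_date DESC, h.run_time DESC;
```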
I put this topic in here because I wonder how many people are testing their restores. Do you receive notifications in the event of backup failures? Trust me on this: don't be the one caught without a backup, or not knowing whether your backup works.
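A quick way to spot databases that are falling behind, or have never been backed up at all, is to check the backup history msdb keeps. A sketch:

```sql
-- Most recent full backup per database, from msdb history
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'            -- D = full database backup
GROUP BY d.name
ORDER BY last_full_backup;        -- never-backed-up databases (NULL) sort first
```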
Automate, Automate, and Automate
Look at your day-to-day activities and then ask yourself: can any of these tasks be automated? The idea is to become more efficient and be proactive instead of reactive.
DBA Standard Database
Do you have a standard DBA database on all your servers to house your maintenance stored procedures, tasks, server info (yes, you need to know what is in your environment), and any other pertinent documentation?
Is your code source controlled? If not, it's time to get in the game. One good place to start is Red Gate's Source Control utility.
Listen, these are just ideas and not even the tip of the iceberg. The intent is to jump start your mind and think of some possibilities that you may not already be utilizing.
I sure hated to miss this month’s block party, but that is okay. Time doesn’t always work out in our favor, but we pick ourselves up and move on. Nothing is handed to you; work hard for it. Look at your environment and be that impact player or game changer. You be the one to make the difference.
**Always, always, always test new things you find in a test environment. Do not put anything straight into production.