User Clone Script

One of my favorite subjects in SQL Server is security, and one of my favorite security topics is permissions.  This post covers a script I wrote to recreate a user’s permissions within a database.  I call it my User Clone Script, and I use it quite heavily.

Use Cases

  • Troubleshooting Permissions
  • Cloning accounts
  • Switching service accounts
  • Copying permissions between development/test/production environments
  • Refactoring permissions to use a SQL Server Role instead of explicit user permissions
  • Refactoring permissions to use an AD group instead of explicitly named users

Highlights of the Script

This script has support for the following:

  • Most object/column permissions (includes stored procedures, views, tables, and functions)
  • Object ownerships
  • Type permissions
  • Schema permissions
  • Schema ownerships
  • Database permissions
  • Role memberships

Implementation

In a few separate queries, I dump metadata into a temp table from the following catalog views:

  1. Database permissions (sys.database_permissions)
  2. Schema ownerships (sys.schemas)
  3. Object ownerships (sys.objects)
  4. Role memberships (sys.database_role_members)

Then, and this is the key to unraveling role memberships, I use a recursive CTE to generate the final result set.

A recursive CTE takes the following form:
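(What follows is a generic sketch rather than the script’s exact code; [User1] and the column aliases are illustrative.)

```sql
WITH RoleTree (member_id, role_id, lvl) AS
(
    -- Anchor member: direct role memberships for the user in question
    SELECT drm.member_principal_id, drm.role_principal_id, 0
    FROM sys.database_role_members AS drm
    WHERE drm.member_principal_id = USER_ID('User1')

    UNION ALL

    -- Recursive member: roles that the roles found so far are themselves members of
    SELECT drm.member_principal_id, drm.role_principal_id, rt.lvl + 1
    FROM sys.database_role_members AS drm
    INNER JOIN RoleTree AS rt
        ON drm.member_principal_id = rt.role_id
)
SELECT USER_NAME(role_id) AS role_name, lvl
FROM RoleTree;
```

The anchor member seeds the result set, and the recursive member keeps joining back to it until no new rows are produced, which is exactly what unravels nested role memberships.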

More information about Recursive CTEs can be found here on TechNet.

Output Sample

The following screenshot shows the output of the script, including commands to copy the permissions of User1 to User2.

[Screenshot: PermissionsClone]

A couple of key points:

  • If the @NewUser variable is NULL, the script will generate the commands used to grant [User1] the permissions they currently have (to copy permissions to another database, for example).
    • Otherwise, it will replace [User1] with [User2].
  • In this example:
    • [User1] is a direct member of three roles: [CoolUsers], [Level2] and [Level2a].
    • [CoolUsers] is a member of the [db_owner] fixed role.
    • The [Level2] role is a member of the [Level1] role.
  • The commands output by the script will create the roles if they do not exist and assign them the same permissions they had before (useful when you’re copying the permissions to a different database, for example).
  • If a schema or object ownership is detected, the script will generate the command to transfer ownership to the new user, but it will be commented out as a precautionary measure.  Because only one user can own any given schema or object, running the generated commands could break existing rights for [User1].  If that is your intention, simply uncomment the ALTER AUTHORIZATION commands before running the script.

The Script

The only things you’ll need to change before running the script are the two variables (@OldUser and, possibly, @NewUser) in the <setable> section.

Remember, like many of my scripts, this script doesn’t change anything; rather it generates TSQL commands you can choose to run.

Other Considerations

While this script does not generate a user’s effective permissions, it can help you reverse-engineer them.  It works at the level of the database user, which is fine if you have a named database user.  If you inherit your rights through a single AD group membership, you can still ascertain this information by supplying the database user for that AD group as @OldUser.  But what happens if you are a named user in the database and are also granted certain privileges through a number of different AD groups?  Well, that’s where it gets more difficult.  One trick I like to employ is to use two catalog views, sys.login_token and sys.user_token.  The views are similar, but the first works at the server login level, whereas the second works at the database user level.

These two views can help you unravel your AD group memberships.  For the current user, they display the role memberships and the Windows group memberships of the user.  In a domain environment, they connect to AD and actually enumerate all your domain security groups.  If a particular group matches a server login or database user, the views will also show you the principal_id you can use to join back to sys.server_principals (in the case of sys.login_token) or sys.database_principals (in the case of sys.user_token).
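For example, to see which of the current user’s token entries map back to actual database principals (a sketch; run it in the database you’re investigating):

```sql
-- Token entries for the current database user, joined back to the
-- matching database principals where one exists
SELECT ut.name,
       ut.type,          -- e.g. WINDOWS GROUP, SQL USER, ROLE
       ut.usage,         -- GRANT OR DENY / DENY ONLY / AUTHENTICATOR
       dp.name AS database_principal
FROM sys.user_token AS ut
LEFT JOIN sys.database_principals AS dp
    ON dp.principal_id = ut.principal_id;
```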

I hope you’re beginning to see how powerful this is 🙂 .

This works great if you want to troubleshoot your own permissions.  How, though, as a DBA, can you use this to troubleshoot a different account’s permissions?  You can have the user run these queries as themselves and return the output to you… or, as a DBA, you can use the power of IMPERSONATE.  If you are the DBA (or otherwise have impersonation rights on another account), you can run any command as that user.  Be aware that this may violate security policies at your company and could be ill-advised in certain security-sensitive environments.  Also, if you have any type of auditing enabled, chances are good you will leave footprints all over your audit logs showing that you ran commands on behalf of another user.  This may or may not be a problem for you.  That said, if you determine it’s acceptable, you can run a command such as:
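(A reconstruction based on the description above; DOMAIN\User is a placeholder for the account you’re troubleshooting.)

```sql
-- Impersonate the login, inspect both token views, then revert
EXECUTE AS LOGIN = 'DOMAIN\User';

SELECT * FROM sys.login_token;   -- server-level token

USE AdventureWorks;
SELECT * FROM sys.user_token;    -- database-level token

REVERT;
```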

This will show you [DOMAIN\User]’s login token for the server and user token for the Adventureworks database.  Using this technique in conjunction with the User Clone script can go a long way to managing and troubleshooting database permissions.


The full script can be found here: UserClone.txt.

Please let me know if you found this script useful or if you have any suggestions to improve it.

LLAP,
Ryan


As always, please note the waiver of liability published on the About page.

The text of this entire blog site (including all scripts) is under the copyright of Ryan Snee. You are free to reproduce it, as outlined by the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Files and Freespace

UPDATE 2014/02/01: Script is now version 1.2 to accommodate CS collations

Happy new year, everyone! With this post, I will be sharing a script I wrote to examine how many files you have for an instance, where they are, how big they are, and how full they are. It will also help you to reclaim some space, if that’s your goal. The script outputs 1 row per file.

DISCLAIMER: I am not advocating that you shrink the free space out of your files.  Generally, having free space is a good thing.  Also, there are negative consequences to shrinking a file.

My advice

  • Disable auto-shrink!
  • Don’t use this, or any other mechanism, to routinely shrink files.
  • Use it as the far exception, not the rule.
  • Try to plan ahead, size your files, your drives, etc.
  • Understand the side-effects of shrinking a file!
    • This script will help give you information which will help you make a better decision.
    • There may be non-technical factors that influence your decision on what to shrink or not shrink.
  • Sometimes life happens… you may find that, in full awareness of the consequences, shrinking a file is the best option.

Perhaps I will blog about the effects of a file shrink in a future post… otherwise, there are many, many other posts on the matter.  A quick web search will get you started; I recommend Brent Ozar’s or Paul Randal’s in particular.  Bottom line: it’s really bad for performance; don’t do it unless you understand the costs and they are outweighed by the benefits.  You’re the professional, and ultimately you’ll have to make that call.

And that’s all I will say about that.

This script can also be used to snapshot space usage and trend it over time.  There are a lot of things you probably want to remove if you’ll be doing that… but there’s a lot of useful data there.

Highlights of the Script

  • Supports Filestream/FileTable, Database Snapshots
  • Understands Filegroups
  • Visual representation (careful, it is normalized)
  • Generates convenient commands you can choose to run
  • Displays VLF info
  • Written for SQL 2012… works in 2008 with minor tweaks
  • Converts sizes to human-readable format (but preserves raw bytes for sorting, filtering, etc)

Implementation

Three primary DMOs:

  • sys.master_files
    • I prefer this DMV… contains info for all DBs in one place
    • Only way to get info for offline databases
    • tempdb shows its initial size here, not its current size
    • For user databases, I’ve seen times when this was inaccurate
  • sys.database_files
    • One view for each database
    • Only way to get info for Filestream data
  • sys.dm_io_virtual_file_stats
    • DMF… can’t use CROSS APPLY prior to SQL 2012
    • Only way to get true size, on disk, for database snapshots (NTFS sparse file)
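A stripped-down example of the kind of query the script builds on (sizes in sys.master_files are reported in 8 KB pages, hence the conversion):

```sql
-- File name, type, and size for every database on the instance.
-- size is in 8 KB pages, so convert to MB.
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_name,
       mf.type_desc,           -- ROWS, LOG, FILESTREAM, ...
       mf.physical_name,
       mf.size * 8 / 1024     AS size_mb
FROM sys.master_files AS mf
ORDER BY database_name, mf.file_id;
```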

Credit where Credit is Due

A big thanks to Ken Simmons and Tim Ford (cited at the bottom of this post) for their blog posts on the same topic. I derived many ideas and some coding techniques from their examples.

Output Sample

[Screenshot: FilesOutput]

Additional columns:

[Screenshot: FilesOutput2]

And finally:

[Screenshot: FilesOutput3]

The entire result set can be found here:

[Screenshot: FilesOutputFull]

The Script


“Filegroup Aware”

One of the things I’m most proud of with this script is actually easy to miss… the sort order.  I’m not an expert on filegroups, but based on my understanding of SQL’s proportional file-fill algorithm, you want to try to keep the files in a filegroup as close as possible to the same size.  As such, the script works as follows:

  1. It finds the file in each filegroup that’s using the most space
  2. The script attempts to shrink all files to that size
  3. No files are grown (that would defeat the purpose of freeing-up space)

Based on how much space the script estimates can be reclaimed across the whole filegroup, the results are sorted (by filegroup), with the biggest space-savers on top.  If you aren’t using explicit filegroups, don’t worry; the files with the most free space will float to the top.
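The shrink-target calculation can be sketched with a window function (illustrative, not the script’s actual code; per-file used space comes from FILEPROPERTY):

```sql
-- For each data file: the shrink target is the largest used-space value
-- in its filegroup, so no file is ever grown, only shrunk toward parity.
SELECT df.name,
       df.size * 8 / 1024 AS size_mb,
       used.used_mb,
       MAX(used.used_mb) OVER (PARTITION BY df.data_space_id) AS target_mb
FROM sys.database_files AS df
CROSS APPLY (SELECT FILEPROPERTY(df.name, 'SpaceUsed') * 8 / 1024 AS used_mb) AS used
WHERE df.type_desc = 'ROWS';   -- data files only; data_space_id = filegroup
```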

Below is a picture from my slide-deck to help you visualize the calculation:

[Screenshot: FilegroupAware]

The green space is summed for the filegroup, and the filegroup with the most “free space” (calculated as described above) will be on top.  All the files in a filegroup will sort together.

Ratio Bar

The last thing I wanted to point out is the visual ratio bar:

[Screenshot: VisualRatioBar]

It’s a normalization of a ratio, like a percentage, but with a resolution of 10 instead of 100.  Having 100 characters resulted in too much scrolling, so I opted for just 10.  I tried to find a word for this, but had no success.  There’s a word for a ratio normalized to 1,000 (permillage) and one for 10,000 (permyriad), but I found nothing for 10.  I wanted to call it a perdecage, but Google/Bing don’t seem to agree with me that it’s a real word.

As with any normalization, I want to point out that, while useful, it can be misleading.  A file with 1 MB free out of 10 MB will look the same as a file with 100 GB free out of 1 TB.  It’s not really a problem, just an observation.

For anybody wondering how I did it, here’s the magic line:
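(A reconstruction from the character list below; @Ratio10 stands in for the 0–10 value the script computes.)

```sql
-- Left cap, then ten characters (full block for the "used" portion,
-- low block for the "free" portion), then right cap.
DECLARE @Ratio10 int = 7;   -- ratio scaled to 0-10

SELECT nchar(9615)                               -- left cap
     + replicate(nchar(9608), @Ratio10)          -- full bars
     + replicate(nchar(9601), 10 - @Ratio10)     -- bottom-1/8 bars
     + nchar(9621) AS RatioBar;                  -- right cap
```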

For reference, the Unicode characters are as follows:

  • nchar(9621): “▕”
  • nchar(9608): “█”
  • nchar(9601): “▁”
  • nchar(9615): “▏”


So we have vertical bars on either end and 10 characters between, each either a full bar or a “bottom 1/8 of a bar,” depending on the ratio.

Compatibility with SQL 2008/2008 R2

There are two changes needed for the script to run in versions prior to 2012:

  1. Wrapper function: you can’t use correlated parameters with DMFs like sys.dm_io_virtual_file_stats prior to 2012.  The workaround is to create a user-defined wrapper function for it.
  2. DBCC LOGINFO: a new column, [RecoveryUnitId], was added to its output in 2012.  The definition of the temp table in the script, #LogInfo, must be modified to remove that column when running pre-2012.
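The wrapper function for item 1 would look something like this (a sketch; the function name is illustrative and the actual script’s version may differ):

```sql
-- Pre-2012 workaround: wrap the DMF in a user-defined inline TVF,
-- which CAN be called with correlated columns via CROSS APPLY.
CREATE FUNCTION dbo.fn_VirtualFileStats (@database_id int, @file_id int)
RETURNS TABLE
AS
RETURN
(
    SELECT *
    FROM sys.dm_io_virtual_file_stats(@database_id, @file_id)
);
```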

These changes can be found in the second script below.


The full script can be found here: SpaceUsage.txt.
And a SQL 2008 version: SpaceUsage2008.txt.

Please let me know if you found this script useful or if you have any suggestions to improve it.

LLAP,
Ryan

A few sources I’d like to cite for contributing ideas:


As always, please note the waiver of liability published on the About page.

The text of this entire blog site (including all scripts) is under the copyright of Ryan Snee.  You are free to reproduce it, as outlined by the Creative Commons Attribution-ShareAlike 3.0 Unported License.

Moving a Set of Tables to a Read-Only Filegroup

This is my first post.  My goal is to share how I moved a subset of the tables in my database to another filegroup.  Why would you want to do this, you ask?  A few possible use-cases:

  • You want to move a set of tables to a different drive
  • You like playing with SQL Server Files and Filegroups
  • You want to move some tables out of a database, but are not ready to drop them from their source DB.

The third option was my reason for researching this, but you may well have other reasons.

My overall approach to the problem above was to:

  1. Create a new filegroup
  2. Create a file in that filegroup
  3. Move the tables, online where possible (SQL Server Enterprise is required), to the new filegroup
  4. Flip that filegroup to READ_ONLY mode.  Unfortunately, I found out while doing this that this step requires that nobody else be in the database; I ultimately flipped the database to SINGLE_USER mode after hours.  This may not be an easy option for you.
  5. Using SSMS and the Import/Export Data Wizard (SSIS under the covers), I copied the data to its new destination, where I had earlier prepared empty tables

I was only moving a subset of the tables… luckily for me, they all shared the same prefix in their table names, so it was easy to grab them programmatically.  It’s important to note that running my script doesn’t do anything to your database; rather, it generates TSQL commands for you to review and, optionally, run.

The Issue

I wanted to move the tables, every part of them.  I wanted my code to account for heaps, clustered indexes, and non-clustered indexes.  I found some very useful links (cited below) which did part of what I wanted, but not everything.  I also wanted to accommodate as many index options as possible (and not lose settings like fill factor, filters, and lock options such as ALLOW_PAGE_LOCKS).  This is my first attempt, and I’ve tried to note exceptions to what I’ve tested.

The Approach

I’m sure I’ll catch a lot of flak for the iterative nature of my script; I’d normally be the first one to offer criticism!  As it turns out, it’s very difficult to build a set of things into a single string (e.g., building a set of index columns into a command).  I found a few alternatives (including a very clever use of the STUFF() function and some XML generation), but I felt that was unintuitive, and it seemed “hacky” to me.  In the end, I make heavy use of WHILE loops.  In TSQL.  I know.  Please feel free to disagree with me on this, but it’s the choice I made in the end.
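For the curious, the set-to-string alternative I passed on looks roughly like this (illustrative; dbo.MyTable is a placeholder):

```sql
-- Concatenate the key columns of each index into one comma-separated
-- string using FOR XML PATH('') , with STUFF() trimming the leading ", ".
SELECT i.name AS index_name,
       STUFF((SELECT ', ' + c.name
              FROM sys.index_columns AS ic
              INNER JOIN sys.columns AS c
                  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
              WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id
              ORDER BY ic.key_ordinal
              FOR XML PATH('')), 1, 2, '') AS key_columns
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.MyTable');
```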

The Demo Setup

Check Current Filegroup Layout

Of course, as we didn’t specify any filegroup options, we have SQL’s default: a single filegroup called [PRIMARY] with everything on it.  We can verify this by running the following query:
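(A sketch of such a query; partitioned tables, whose indexes point at a partition scheme rather than a filegroup, would need extra handling.)

```sql
-- Which filegroup does each index (and heap) of each user table live on?
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       i.type_desc,             -- HEAP, CLUSTERED, NONCLUSTERED, ...
       fg.name                  AS filegroup_name
FROM sys.indexes AS i
INNER JOIN sys.filegroups AS fg
    ON fg.data_space_id = i.data_space_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY table_name, i.index_id;
```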

Let’s start by setting up our new filegroup and creating a file in it:
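(A reconstruction of the setup step; the database name, filegroup name, file name, path, and sizes are all illustrative.)

```sql
-- Add a new filegroup, then add one data file to it
ALTER DATABASE MyDatabase
    ADD FILEGROUP [SECONDARY];

ALTER DATABASE MyDatabase
    ADD FILE
    (
        NAME = 'MyDatabase_Secondary',
        FILENAME = 'D:\Data\MyDatabase_Secondary.ndf',
        SIZE = 100MB,
        FILEGROWTH = 100MB
    )
    TO FILEGROUP [SECONDARY];
```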

The Solution

Next, we can use the script below (after changing to the appropriate database context and modifying any of the “setable” parameters).

Remember, the script doesn’t change anything; rather it generates TSQL commands you can choose to run.

Running the Generated Commands and Verification

The final step would be to review and possibly run the commands in the last column of the table.

After doing so, we can run the query mentioned in the “Check Current Filegroup Layout” section above to verify that every index of every (user) table has moved to the new filegroup/file.

As a note, system tables live forever in the first, primary data file.  That file can be moved (an offline operation, unfortunately), but it cannot be removed.  Attempting to remove it will yield the following error:

Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.

Changing the Filegroup to Read_Only

Now that my objective of moving a subset of the user tables to a new filegroup has been accomplished, I can continue on to step 4 of my approach:
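(A sketch of that step, matching the SINGLE_USER caveat from step 4; the database and filegroup names are illustrative.)

```sql
-- Kick everyone out, flip the filegroup to READ_ONLY, then reopen
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDatabase MODIFY FILEGROUP [SECONDARY] READ_ONLY;
ALTER DATABASE MyDatabase SET MULTI_USER;
```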

The full script can be found here: MovingTablesToNewFG.txt.

That’s it! Please let me know if you found this script useful!

LLAP,
Ryan

A few sources I’d like to cite for contributing ideas:


The text of this entire blog site (including all scripts) is under the copyright of Ryan Snee.  You are free to reproduce it, as outlined by the Creative Commons Attribution-ShareAlike 3.0 Unported License.