What’s In a NAME?

NetApp customers who heavily utilized MultiStore (vFilers) know that naming conventions are important, but with clustered Data ONTAP, they are critical. Many administrators name objects, such as volumes, after the application or purpose of the data being stored. Great! Keep doing that, but we need to do a little more.

One of the things I always do is prepend the name of the SVM that owns a volume to the volume name. When cluster admins run volume show from the ONTAP CLI, the results table already includes the SVM name for each volume. Similarly, SVM admins connect only to the particular SVM they manage, so the SVM name isn't strictly necessary there either. Everything is great as long as we stay in the clustershell CLI or System Manager. We don't always stay there, though. OnCommand Unified Manager, OnCommand Performance Manager, and the nodeshell CLI all make matching a volume to an SVM a little more difficult.
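The convention itself is trivial to express. Here is a minimal sketch of the naming helper I have in mind; the SVM and purpose strings are purely hypothetical examples, not anything ONTAP generates:

```python
def volume_name(svm: str, purpose: str) -> str:
    """Build a volume name that prepends the owning SVM, so the
    owner is obvious even in tools that don't show an SVM column."""
    return f"{svm}_{purpose}"

# Hypothetical volumes on three SVMs; the owner is now part of the name.
for svm in ("SVM1", "SVM2", "SVM3"):
    print(volume_name(svm, "vol1_NFS"))
```

The point is simply that the SVM becomes part of the identifier, so no lookup table is needed to map a volume back to its owner.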

Let’s take a look at what I’m talking about. I fired up my trusty simulator and created three SVMs named SVM1, SVM2, and SVM3. Each of these SVMs has a volume named vol1_NFS_Volume. From clustershell, everything is pretty easy to keep straight: while the volume names are the same, the name of the owning SVM is in the field right beside each one.


What happens when we drop into nodeshell and try a vol status command?


Since nodeshell doesn’t understand the SVM concept, and volumes in its flat namespace can’t have identical names, numbers are appended to the volume names. Granted, there aren’t a lot of reasons to need nodeshell, but it is still nice to instantly know what you are looking at without needing a secret decoder ring or a spreadsheet.
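A rough illustration of what happens when identical names collapse into one flat namespace (the exact suffix format nodeshell uses may differ; a simple numeric suffix is assumed here just to show the ambiguity):

```python
def disambiguate(names):
    """Append a numeric suffix to duplicates, roughly the way any
    flat namespace must when identical names collide."""
    seen = {}
    result = []
    for name in names:
        count = seen.get(name, 0)
        result.append(name if count == 0 else f"{name}({count})")
        seen[name] = count + 1
    return result

# Three SVMs each owning a volume with the same name:
print(disambiguate(["vol1_NFS_Volume"] * 3))
# Without the SVM in the name, nothing tells you which suffix maps to which SVM.
```

With the SVM prepended, the names are unique to begin with and no suffixes are needed.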

The only real downside is that the default mount points and exports end up a little long and complicated. Of course, those can be changed if you don’t mind the mount points and junction paths not matching the volume names.

Similarly, LIF and aggregate names should follow some sort of naming convention that allows identification on sight. For example, in my lab I’ve named each aggregate after the node that owns it plus the size and type of disk in it. (FP means FlashPool; I could’ve gone further and added the disk type and size and the SSD size, but that is overkill for my environment.) If I permanently rehome an aggregate using ARL, I simply rename it to match the new node.
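That aggregate convention could be sketched like this; the node name, disk type, and size used here are hypothetical placeholders, and the suffix scheme is just my lab's habit, not anything ONTAP enforces:

```python
def aggr_name(node: str, disk_type: str, size_tb: int, flash_pool: bool = False) -> str:
    """Name an aggregate after its owning node, its disk type and size,
    with an optional FP suffix for FlashPool aggregates."""
    name = f"{node}_{disk_type}{size_tb}TB"
    return name + "_FP" if flash_pool else name

# Hypothetical examples:
print(aggr_name("node01", "SAS", 10))                   # node01_SAS10TB
print(aggr_name("node02", "SAS", 10, flash_pool=True))  # node02_SAS10TB_FP
```

After an ARL rehome, only the node portion of the name needs to change.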


Also remember the things that shouldn’t be in the naming convention. For example, it may seem like a good idea to include the name of the aggregate in the volume name to help identify that too. It isn’t, because of non-disruptive operations (NDO): volumes are meant to be portable, so that name can easily become misleading after the volume is moved to another node or aggregate.
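One way to keep that rule honest is a quick check in whatever provisioning script you use. This is a hypothetical validator, with made-up names, that flags volume names embedding an aggregate name:

```python
def embedded_aggr_names(vol_name: str, aggr_names):
    """Return any aggregate names embedded in a volume name.
    Such names become misleading once the volume is moved (NDO)."""
    return [a for a in aggr_names if a in vol_name]

# Hypothetical names: the first volume bakes in 'aggr1' and gets flagged.
print(embedded_aggr_names("SVM1_vol1_aggr1", ["aggr1", "aggr2"]))
print(embedded_aggr_names("SVM1_vol1_NFS", ["aggr1", "aggr2"]))
```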

Please use some naming convention; I really don’t care which one. As I recently told a colleague, as long as you are at least thinking about it and working through some of the possibilities, you are most of the way there and will be in good shape, provided you stick to it. Follow these guidelines and come up with a naming convention that works for you. That second part is the most important. The rule I follow is modeled on a quote attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”