In a prior blog, I covered the basics of file system objects, namespaces, and paths. In the associated examples, each file was accessed from a single name entry in the namespace. While this is a common scenario, sometimes there is also a need for several name entries to point to the same file. Most file systems offer two mechanisms to achieve that: symbolic links (also known as “soft” links in UNIX) and hard links.
Soft links are special files containing a path to another file. When the client file system logic encounters a soft link, it needs to “follow” that path. In the example below, the file “/tmpJunk.txt” is a symbolic link to “/tmp/junk.txt”. Note that the name of the symbolic link is completely independent of the target name: it can be the same name (but in a different directory) or a completely different name. Names like “shortcut to …” are only by convention and/or OS-specific. Soft links are a client-side mechanism. From the file system’s (or file server’s) point of view, soft links are simple text files. This means that the file system cannot check whether the path to the target is legal and/or whether the target exists. Soft links can refer to any namespace entry, including directories, files, or even other soft links. Soft links can also create cycles in the namespace. As a result, some applications and utilities either don’t follow soft links or handle them in some application-specific way (e.g. via a “cycle detection” algorithm).
Symbolic link example
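The behavior above can also be reproduced from code with Python’s os module. This is a minimal sketch in a temporary directory; the file names echo the article’s example but are otherwise placeholders:

```python
import os
import tempfile

# Scratch directory so the sketch is self-contained.
base = tempfile.mkdtemp()
target = os.path.join(base, "junk.txt")
link = os.path.join(base, "tmpJunk.txt")

with open(target, "w") as f:
    f.write("hello")

os.symlink(target, link)      # the link is just a file holding a path
print(os.path.islink(link))   # True
print(os.readlink(link))      # the stored path, verbatim
with open(link) as f:         # reading via the link follows the path
    print(f.read())           # hello

# The file system does not validate the target: a dangling link is legal.
os.remove(target)
print(os.path.exists(link))   # False - the path no longer resolves
print(os.path.islink(link))   # True  - but the link entry still exists
```

Note the last two lines: the link entry survives the removal of its target, which is exactly why soft links can dangle.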
A note regarding Windows
Under Windows there are 3 types of links: hard links (similar to the UNIX hard links described below), soft links (implemented using a reparse point, a special file containing data that is interpreted by a file system filter), and symbolic links, which are similar to the UNIX symbolic/soft links described above.
Unlike soft links, which are a client-side mechanism, hard links are an internal part of the file system. Hard links are not special files. Instead, hard linking denotes the ability of the file system to support several directory name entries referring to the same inode. This ability is a fundamental part of the UNIX file system architecture and is supported in modern NTFS also (though likely infrequently used).
In the example below, you can see that the upper right inode can be accessed by two name entries: “/tmpJunk.txt” and “/tmp/junk.txt”. Note that, unlike the soft link case where one file points to another, here no file is pointing to another file; rather, this is one file with two “heads”, i.e. two name entries in the namespace (like a Cerberus with N heads…). Removing any of these names will NOT remove the file itself until all name entries pointing to the same inode are removed. In other words, each file maintains a reference count, and the file is removed when the count is decreased to zero. Standard UNIX file systems do not allow users to hard link directories together, even though many file systems internally use this ability to link parents to children and vice-versa (as discussed in the “Directory references count” section below). Since the file attributes are stored in the inode, changing the file via any name entry will affect the metadata shown for all of its name entries. For example, changing the data of /tmp/junk.txt will modify the /tmp/junk.txt timestamp (mtime), but also that of the /tmpJunk.txt file, since they share the same metadata!
Hard link example
Note again that this is not a special file mode: every file in a UNIX file system has a reference count, which can be shown by a simple list command, “ls -l”. For example, the image below depicts the output for a sample directory in my Ubuntu VM (note the reference count of 4, marked by the red circle).
Hard link - Reference count in UNIX file system
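The same reference-count behavior can be observed programmatically via os.link and os.stat. A minimal sketch, again with placeholder names in a temporary directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
a = os.path.join(base, "junk.txt")
b = os.path.join(base, "tmpJunk.txt")   # second name for the same file

with open(a, "w") as f:
    f.write("data")

os.link(a, b)                       # add a second name entry for the inode

st_a, st_b = os.stat(a), os.stat(b)
print(st_a.st_ino == st_b.st_ino)   # True - both names resolve to one inode
print(st_a.st_nlink)                # 2   - the reference count ls -l shows

os.remove(a)                        # drops the count to 1...
with open(b) as f:
    print(f.read())                 # data - still reachable via the other name
print(os.stat(b).st_nlink)          # 1
```

Removing the first name did not remove the file; only dropping the count to zero would.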
There is no simple way to identify the different hard link name entries, short of scanning the file system and grouping the name entries by inode number. The inode number can be retrieved by using the “-i” flag to the “ls” command, as shown below.
UNIX file system - Using ls command to display the inode number
Luckily for us, the very handy “find” utility can achieve our goal, either by looking for occurrences of a specific file using the “-samefile <name>” option or by looking for files with a specified inode number using the “-inum <inode-number>” option. Both methods are depicted below.
UNIX file system - Find occurrences of the same file
Directory references count
By convention, UNIX file systems treat the self entry “.” and the parent entry “..” as hard links to the same directory and to the parent directory, respectively. This means that an empty directory has a reference count of two, and any immediate (i.e. not nested) subdirectory adds one more reference via its “..” entry. So in the example below, the directory /elastifile has one subdirectory, as its reference count is 3. Some utilities, such as “find”, use that fact to optimize their namespace scans.
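This convention can be verified programmatically. The sketch below assumes a file system that follows the classic convention (e.g. ext4, XFS, tmpfs); some file systems, such as btrfs, always report a link count of 1 for directories:

```python
import os
import tempfile

base = tempfile.mkdtemp()
d = os.path.join(base, "elastifile")   # named after the article's example
os.mkdir(d)

# An empty directory counts two links: its own name entry and its "." entry.
print(os.stat(d).st_nlink)             # 2 on classic file systems

# Each immediate subdirectory adds one link via its ".." entry.
os.mkdir(os.path.join(d, "sub1"))
print(os.stat(d).st_nlink)             # 3 on classic file systems

# Nested subdirectories do not affect the grandparent's count.
os.mkdir(os.path.join(d, "sub1", "nested"))
print(os.stat(d).st_nlink)             # still 3
```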
Comparing hard and symbolic links
Issues with hard links
The most common issues with hard links are 1) operating on a sub-namespace with hard links outside the namespace and 2) breaking the hard links during copy/sync/backup.
Operating on a namespace with links to external files
Sometimes operations are performed on a sub-namespace basis. For example, it is common to copy sub-namespaces, back them up and restore them, or change the ownership of all files in a sub-namespace. If the target namespace contains files that are linked to files outside it, unexpected results can occur. For example, changing the permissions of all files in a sub-namespace to read-only will also affect the hard link occurrences outside it and may break applications.
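The following sketch demonstrates the problem with hypothetical directory names: a recursive “make read-only” over one sub-namespace silently changes a file reachable from another, because the mode lives in the shared inode:

```python
import os
import stat
import tempfile

base = tempfile.mkdtemp()
inside = os.path.join(base, "project")   # the sub-namespace we operate on
outside = os.path.join(base, "other")    # a namespace we do NOT intend to touch
os.mkdir(inside)
os.mkdir(outside)

f = os.path.join(inside, "data.txt")
with open(f, "w") as fh:
    fh.write("x")
os.link(f, os.path.join(outside, "data.txt"))   # external hard link

# "Make everything under project/ read-only":
for dirpath, _dirs, files in os.walk(inside):
    for name in files:
        os.chmod(os.path.join(dirpath, name), 0o444)

# The "untouched" namespace was affected too.
mode = os.stat(os.path.join(outside, "data.txt")).st_mode
print(oct(stat.S_IMODE(mode)))   # 0o444
```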
Hard links and snapshots
One particularly nasty issue occurs if you are allowed to take a snapshot of a sub-namespace and then restore it. If the sub-namespace has files linked to files outside it, what should happen to those linked files during the restore? There are 3 primary options:
- Restore the linked files, thereby also changing the outer namespace instances. This is very dangerous, as it is neither obvious nor transparent which files will be changed, nor whether that is acceptable to the applications using them.
- Skip those files and do not change them during the restore. This breaks the snapshot’s internal consistency and may result in unpredictable outcomes, probably breaking applications.
- Break the hard links and create new instances. This is also dangerous, as some applications/processes may assume that certain files are hard-linked together (if not, what was the purpose of the linking in the first place?), so breaking the links may again lead to broken applications.
In some cases, a careful, case-by-case analysis of the hard links could lead to a working state, but this is very time-consuming and, in most cases, practically impossible, as it is very hard to analyze which applications rely on which hard link behavior.
To avoid these issues, the common practice is to control the scope of the snapshot. For example, the Elastifile Cloud File System avoids this issue by only allowing snapshots at the data container level.
Breaking hard links during copy/sync/backup
Another common challenge with hard links is the preservation of those links when datasets are copied to a different system and/or a different medium (e.g. for copy/sync/backup/archival purposes). Here, the main problems are:
1. Hard links need to be recreated on the target side (see below). This requires building a hard link map to identify the different names that each hard-linked file has. Failing to do so means breaking the hard links (i.e. creating a separate file for each file name instance), which can break applications. Note that the inode numbers of the source and target will not be the same, as inode numbers are internal and file-system-specific.
Hard link transfer from source to target
Sync with hard link awareness vs. sync without hard link awareness
3. Hard link updates are particularly tricky, as the inode number of a hard-linked file on the target is not the same as on the source. This is because the inode number is an internal ID of the file system, and the user has no control over its assignment. So if the target is to be updated (i.e. it already holds some partial or full dataset that has to be synchronized to match the source), the local group of hard links has to be identified and then matched to the source. For example, let’s say that, on the source side, the three files A, B, and C are hard-linked to inode 3.
Source and target are in sync, but inodes are different
After the initial sync, we have similar files A, B, C linked to inode 8 (see below). Now we add another link D to the source inode 3 and, after the sync, we want D to be created as another hard link to inode 8 (i.e. not as a standalone new file). A good sync mechanism will recognize that D should be transferred and will simply create a new link (as shown below). Similarly, links can be removed, changed, or even switched from inode to inode (i.e. A, which used to be linked to inode 3, can be switched to link to inode 7). All of these cases require special attention, and many tools and mechanisms fail to handle them correctly.
Adding a new link at source should translate to different link on target
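The initial-sync half of this logic can be sketched as follows. This is an illustrative toy, not any particular tool’s implementation: a real sync would stream data and also handle updates, deletions, and link switches, but the core idea is the same map from source inodes to target paths:

```python
import os
import tempfile

def sync_tree(src, dst):
    """Copy src to dst, recreating hard-link groups instead of duplicating data.

    Maps each source (device, inode) to the first target path created for it,
    and hard-links every subsequent name entry to that path.
    """
    inode_to_dst = {}                          # source (dev, ino) -> target path
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        out_dir = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(out_dir, exist_ok=True)
        for name in filenames:
            s = os.path.join(dirpath, name)
            d = os.path.join(out_dir, name)
            st = os.stat(s)
            key = (st.st_dev, st.st_ino)
            if key in inode_to_dst:
                os.link(inode_to_dst[key], d)  # re-link, don't copy again
            else:
                with open(s, "rb") as fi, open(d, "wb") as fo:
                    fo.write(fi.read())
                inode_to_dst[key] = d
    return inode_to_dst

# Demo: A and B hard-linked at the source stay hard-linked at the target.
src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
a = os.path.join(src, "A")
with open(a, "w") as f:
    f.write("payload")
os.link(a, os.path.join(src, "B"))
sync_tree(src, dst)
ta = os.stat(os.path.join(dst, "A"))
tb = os.stat(os.path.join(dst, "B"))
print(ta.st_ino == tb.st_ino)   # True - one inode on the target as well
```

Note that the target inode numbers differ from the source’s, exactly as described above; only the grouping survives the transfer.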
Hard Link handling with Elastifile CloudConnect
Elastifile CloudConnect includes several mechanisms to handle hard links correctly and to optimize both the target capacity and the WAN traffic required to sync the source and target datasets. This is done via on-the-fly mapping during “check out” operations (i.e. when files are restored from object storage into a file system), building a map that contains the following information:
- Which source files are to be linked together and to which inodes
- Which is the existing target inode assignment for each hard-linked file group
- The mapping between source and target inode group numbers
Using that map, CloudConnect is able to handle critical hard link use cases, including the complex ones described above.