5. Performance

    
5.1 Wrapfs and Cryptfs

For most of our tests, we included figures for a native disk-based file system because disk hardware performance can be a significant factor. Since Cryptfs is a stackable file system, we also included figures for Wrapfs and Lofs as a baseline for evaluating the cost of stacking. When using Lofs, Wrapfs, or Cryptfs, we mounted them over a local disk-based file system. CFS[3] and TCFS[4] are two encryption file systems based on NFS, so we also included the performance of native NFS. All NFS mounts used the local host as both server and client (i.e., mounting localhost:/path on /mnt), and used protocol version 2 over a UDP transport, with a user-space NFS server. CFS was configured to use Blowfish (the same cipher as Cryptfs), but we had to configure TCFS to use DES, because it does not support Blowfish.

For the first set of tests, we measured the time it took to perform 10 successive builds of a large package (Am-utils[20]) and averaged the elapsed times. These results are listed in Table 1. For these tests, the standard deviation did not exceed 0.8% of the mean.
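
A minimal sketch of such a timing harness appears below; the unpack and build commands, the paths, and the use of a standard configure-and-make sequence are illustrative assumptions, not the exact scripts used for the measurements.

    /*
     * Sketch of the build benchmark: run the same build RUNS times on a
     * freshly unpacked tree and report the mean elapsed (wall-clock) time.
     * The commands and paths below are illustrative placeholders.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define RUNS 10

    static double elapsed(struct timeval *a, struct timeval *b)
    {
        return (b->tv_sec - a->tv_sec) + (b->tv_usec - a->tv_usec) / 1e6;
    }

    int main(void)
    {
        double total = 0.0;

        for (int i = 0; i < RUNS; i++) {
            struct timeval start, end;

            /* fresh source tree under the mounted file system (assumed paths) */
            if (system("rm -rf /mnt/am-utils && tar -xf /tmp/am-utils.tar -C /mnt") != 0)
                return 1;

            gettimeofday(&start, NULL);
            if (system("cd /mnt/am-utils && ./configure && make") != 0)
                return 1;                      /* abort on a failed build */
            gettimeofday(&end, NULL);

            double t = elapsed(&start, &end);
            printf("run %d: %.1f sec\n", i + 1, t);
            total += t;
        }
        printf("mean: %.1f sec over %d runs\n", total / RUNS, RUNS);
        return 0;
    }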

 
Table 1: Time to Build a Large Package (Sec)

File System     SPARC 5     Intel P5/90
-----------     -------     -----------
ext2             1097.0         524.2
lofs             1110.1         530.6
wrapfs           1148.4         559.8
cryptfs          1258.0         628.1
nfs              1440.1         772.3
cfs              1486.1         839.8
tcfs             2092.3        1307.4

 

Lofs is only 1.1-1.2% slower than the native disk-based file system. Wrapfs adds an overhead of 4.7-6.8%, which is comparable to the 3-10% degradation previously reported for null-layer stackable file systems[8,18] and reflects the cost of copying data pages and file names.

Wrapfs is the baseline for evaluating the performance impact of the encryption algorithm, because the only difference between Wrapfs and Cryptfs is that the latter encrypts and decrypts data and file names. Cryptfs adds an overhead of 9.5-12.2% over Wrapfs. This overhead is significant but unavoidable: it is the cost of the Blowfish encryption code, which, while designed as a fast software cipher, is still CPU intensive.
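
To give a rough sense of the raw cost of the cipher, the sketch below estimates Blowfish throughput in user space using OpenSSL's legacy Blowfish interface; this is not the encryption code inside Cryptfs, and the key, buffer size, and use of ECB mode are arbitrary choices made only to time the block cipher itself.

    /*
     * Rough user-level estimate of Blowfish throughput, using OpenSSL's
     * legacy Blowfish API (not the Cryptfs kernel code).  Key, buffer
     * size, and ECB mode are arbitrary; we only time the raw cipher.
     */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <openssl/blowfish.h>

    #define BUFSIZE (8 * 1024 * 1024)    /* 8 MB of data */

    int main(void)
    {
        static unsigned char in[BUFSIZE], out[BUFSIZE];
        const unsigned char secret[] = "an arbitrary demo key";
        BF_KEY key;

        BF_set_key(&key, (int)(sizeof(secret) - 1), secret);
        memset(in, 0xab, sizeof(in));

        clock_t start = clock();
        for (size_t off = 0; off < BUFSIZE; off += 8)    /* 8-byte Blowfish blocks */
            BF_ecb_encrypt(in + off, out + off, &key, BF_ENCRYPT);
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("encrypted %d MB in %.2f sec (%.1f MB/sec)\n",
               BUFSIZE / (1024 * 1024), secs,
               BUFSIZE / (1024.0 * 1024.0) / secs);
        return 0;
    }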

Next, we compare the three encryption file systems. Cryptfs is 40-52% faster than TCFS. Since TCFS uses DES while Cryptfs uses Blowfish, however, it is more appropriate to compare Cryptfs to CFS. Cryptfs is 12-30% faster than CFS; because both use the same encryption algorithm, most of the difference between them stems from the extra context switches that CFS incurs.

For the second set of tests we performed microbenchmarks on the file systems listed in Table 1, specifically reading and writing of small and large files. These tests were designed to isolate and show the performance difference between Cryptfs, CFS, and TCFS for individual file system operations. Table 2 summarizes some of these results.
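
As an illustration, the sketch below shows the shape of the write microbenchmark for the 1024×8KB case; the mount point is an assumed path, and the read and 8×1MB cases are analogous.

    /*
     * Sketch of the write microbenchmark: write a file as 1024 sequential
     * 8 KB chunks, timing only the write calls.  The chunk sizes mirror
     * Table 2; the default path is an assumed mount point.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define CHUNK  (8 * 1024)
    #define CHUNKS 1024

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/mnt/crypt/testfile";
        char buf[CHUNK];
        struct timeval start, end;

        memset(buf, 'x', sizeof(buf));
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        gettimeofday(&start, NULL);
        for (int i = 0; i < CHUNKS; i++) {
            if (write(fd, buf, CHUNK) != CHUNK) {
                perror("write");
                return 1;
            }
        }
        gettimeofday(&end, NULL);
        close(fd);

        printf("%d x %d KB writes: %.2f sec\n", CHUNKS, CHUNK / 1024,
               (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6);
        return 0;
    }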


 
Table 2: x86 Times for Read and Write Calls (Sec)

File               Writes                    Reads
System      1024×8KB     8×1MB       1024×8KB     8×1MB
-------     --------     ------      --------     ------
cryptfs         9.27       8.33          0.26       0.34
cfs           101.90      50.84          0.89       8.77
tcfs          110.86      84.64          6.45       7.94

 

A complete and detailed analysis of the results listed in Table 2 is beyond the scope of this paper, and would have to take into account the size and effectiveness of the operating system's page and buffer caches. Nevertheless, these results clearly show that Cryptfs outperforms the NFS-based encryption file systems by anywhere from 43% to more than an order of magnitude. Additional performance analysis of Cryptfs is available elsewhere[23].

   
5.2 Usenetfs

To test the performance of Usenetfs, we set up a test Usenet news server and configured it with test directories containing increasingly large numbers of files. We then compared the performance of typical news server operations when these large directories were managed by Usenetfs and when they were not (i.e., stored directly on ext2fs).

We performed 1000 random lookups of articles in large directories. When the directory had fewer than 2000 articles, Usenetfs added a small overhead of 70-80 milliseconds. The performance of ext2fs degraded linearly with directory size, and once the directory held over 250,000 articles, Usenetfs was over 100 times faster. When we performed sequential lookups, which benefit from kernel caches, Usenetfs was only twice as fast as ext2fs for directories with 500 or more articles.
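
The random-lookup test has roughly the following shape; the spool directory, the assumption that article names are plain article numbers, and the default article count are illustrative.

    /*
     * Sketch of the random-lookup test: stat() 1000 randomly chosen
     * article names in a (possibly Usenetfs-managed) news directory.
     * Directory path and article count are illustrative defaults;
     * article names are assumed to be plain article numbers.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/time.h>
    #include <time.h>

    #define LOOKUPS 1000

    int main(int argc, char **argv)
    {
        const char *dir = (argc > 1) ? argv[1] : "/var/spool/news/control/cancel";
        long nfiles = (argc > 2) ? atol(argv[2]) : 250000;
        struct timeval start, end;
        struct stat st;
        char path[4096];

        srandom((unsigned)time(NULL));
        gettimeofday(&start, NULL);
        for (int i = 0; i < LOOKUPS; i++) {
            long article = random() % nfiles + 1;      /* article names are numbers */
            snprintf(path, sizeof(path), "%s/%ld", dir, article);
            stat(path, &st);                           /* one lookup; ENOENT is fine */
        }
        gettimeofday(&end, NULL);

        printf("%d random lookups: %.3f sec\n", LOOKUPS,
               (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6);
        return 0;
    }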

The results for deleting and adding new articles showed that Usenetfs's performance remained almost flat for all directory sizes we tested, while ext2fs's performance degraded linearly. With just 10,000 articles in the directory, adding or deleting articles was more than 10 times faster with Usenetfs.

Since Usenetfs spreads each managed directory over 1000 additional subdirectories, we expected the performance of reading a directory to be worse. Usenetfs takes an almost constant 500 milliseconds to read a managed directory, while ext2fs's readdir performance once again degraded linearly. Not until a directory holds over 100,000 articles does Usenetfs's readdir become faster than ext2fs's. Although Usenetfs's performance also starts degrading linearly beyond a certain directory size, this is not a problem because the algorithm can be easily tuned and extended.
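
Purely as an illustration of why a managed directory spreads its work across many small directories, the sketch below maps an article number to one of 1000 subdirectories using the article's last three digits; this mapping is a hypothetical stand-in, not necessarily the hashing scheme Usenetfs actually uses (see [22] for the real algorithm).

    /*
     * Hypothetical illustration only: map an article number to one of
     * 1000 small subdirectories ("000" .. "999").  Usenetfs's actual
     * hashing scheme is described in [22]; this sketch merely shows why
     * lookups and additions touch one small directory while readdir
     * must visit all 1000.
     */
    #include <stdio.h>

    /* build the managed path for an article, e.g. 123456 -> ".../456/123456" */
    static void article_path(char *buf, size_t len,
                             const char *newsgroup_dir, long article)
    {
        int bucket = (int)(article % 1000);        /* assumed: last three digits */
        snprintf(buf, len, "%s/%03d/%ld", newsgroup_dir, bucket, article);
    }

    int main(void)
    {
        char path[4096];

        article_path(path, sizeof(path), "/var/spool/news/control/cancel", 123456);
        printf("%s\n", path);    /* -> /var/spool/news/control/cancel/456/123456 */
        return 0;
    }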

The last test we performed took all of the above factors into account. Once again, we built a large package, this time on a busy news server that was configured to manage its top 6 newsgroups using Usenetfs. This test was designed to measure the reserve capacity of the news server, that is, how much CPU capacity was freed by using Usenetfs. With Usenetfs, compile times improved by an average of 22%. During periods of heavy activity on the news server, such as article expirations, compile times improved by a factor of 2-3. Additional performance analysis of Usenetfs is available elsewhere[22].

   
5.3 Portability

Table 3 shows the estimated overall time it took us to develop and port the file systems mentioned in this paper. The first ports were for Linux 2.0 and took longer because we were also learning our way around Linux and stackable file systems in general. The bulk of the initial time was spent porting the Wrapfs template; using this template, the other file systems were implemented much faster.

 
Table 3: Time to Develop and Port File Systems

File System     Linux 2.0     Linux 2.1/2.2
-----------     ---------     -------------
wrapfs          2 weeks       1 week
lofs            1 hour        30 minutes
rot13fs         2 hours       1 hour
cryptfs         1 week        1 day
usenetfs        2 days        1 day

 

Another interesting measure of the complexity of Wrapfs is the size of its code. The total number of source code lines for Wrapfs in Linux 2.0 is 2157, but that number grew by more than 50%, to 3279 lines, when we ported Wrapfs to the 2.1 kernel. This is a testament to the unfortunate complexity that Linux 2.1 added, mostly due to the integration with the dentry concept.

