The idea of using main memory as a fast block storage device is not new. Supercomputers and databases frequently use solid state disks as performance accelerators. These ``disks'' are peripheral devices filled with DRAM chips that behave like regular block storage devices. Since solid state disks have high throughput and zero seek time, they are well suited to I/O-intensive applications. In our work, we organize the main memories of the workstations in a cluster to behave much like a solid state disk. The main advantage of our approach, compared to solid state disks, is its low cost, since we exploit the (otherwise) unused main memory that already exists in a workstation cluster. A Network RamDisk may be used to store several kinds of temporary data, including a web cache, intermediate compilation files, ``/tmp'' files, etc. Reliable versions of the Network RamDisk can provide low-latency, high-bandwidth storage to applications that need it, such as databases and storage managers.
Several operating systems provide software RamDisks. A software RamDisk is a portion of a computer's main memory that is used as a block storage device [21]. Such RamDisks are typically used to store temporary files in order to improve system performance. Our work extends the notion of a software RamDisk to a Network of Workstations (NOW). Instead of using the main memory of a single computer, as a traditional RamDisk does, we use the collective main memory of several computers in a NOW as a Network RamDisk. Furthermore, we allow users to configure the Network RamDisk with various reliability policies, so that in the event of a computer crash the contents of the Network RamDisk are not lost. In contrast, the contents of traditional software RamDisks are lost in the event of a crash.
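To make the block-device abstraction concrete, the following user-space C sketch captures the essence of a RamDisk: fixed-size blocks served directly out of main memory, with no seek time and no disk I/O. The names (ramdisk_create, ramdisk_read, ramdisk_write) and the 4 KB block size are illustrative assumptions, not the actual driver interface; a real RamDisk is a block device driver inside the operating system, and a Network RamDisk keeps the same block interface but backs the blocks with the memory of remote workstations.

\begin{verbatim}
/* Illustrative user-space sketch of a RamDisk; not the actual driver. */
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct ramdisk {
    size_t  nblocks;   /* capacity in blocks               */
    char   *store;     /* main memory backing the "disk"   */
};

/* Create a RamDisk backed by nblocks * BLOCK_SIZE bytes of main memory. */
struct ramdisk *ramdisk_create(size_t nblocks)
{
    struct ramdisk *rd = malloc(sizeof(*rd));
    if (!rd)
        return NULL;
    rd->nblocks = nblocks;
    rd->store = calloc(nblocks, BLOCK_SIZE);
    if (!rd->store) {
        free(rd);
        return NULL;
    }
    return rd;
}

/* Block read: copy one block out of main memory. */
int ramdisk_read(struct ramdisk *rd, size_t blockno, void *buf)
{
    if (blockno >= rd->nblocks)
        return -1;
    memcpy(buf, rd->store + blockno * BLOCK_SIZE, BLOCK_SIZE);
    return 0;
}

/* Block write: copy one block into main memory. */
int ramdisk_write(struct ramdisk *rd, size_t blockno, const void *buf)
{
    if (blockno >= rd->nblocks)
        return -1;
    memcpy(rd->store + blockno * BLOCK_SIZE, buf, BLOCK_SIZE);
    return 0;
}
\end{verbatim}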
Several research groups have proposed various methods of exploiting network main memory to avoid disk accesses. In remote memory paging systems, for example, applications use remote memory as swap space [2, 3, 8, 12, 16, 24]. As a generalization of remote memory paging, several researchers propose using remote main memory as a file system cache [1, 9, 10, 11, 15]. For example, Feeley et al. have implemented a global memory management system (GMS) in a workstation cluster, using the idle memory in the cluster to store clean pages evicted from memory-loaded workstations [11]. Anderson et al. have implemented xFS, a serverless network file system [1, 10]. Both GMS and xFS have been incorporated into the kernels of existing operating systems, and their performance has been demonstrated. Our approach differs from the above in that the Network RamDisk can be easily incorporated into an existing commercial operating system without any kernel changes. Because GMS and xFS are closely integrated with their underlying operating system kernels (although a device-driver implementation for xFS has been reported [13]), users who want to exploit the performance benefits of remote memory have to adopt the GMS and/or xFS systems together with their associated operating system kernels. In contrast, with our approach, ordinary file systems (e.g. NFS, UFS) are able to exploit the benefits of remote memory: an existing file system (like UFS) can easily be turned into a network memory file system without any modifications to it.
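To illustrate why no kernel or file-system changes are required, the following user-space sketch shows how a Network RamDisk driver could forward block requests to a remote memory server over a socket. The wire format, the opcodes, and the function names (nrd_remote_read, nrd_remote_write) are our own illustrative assumptions, not the actual protocol; the point is that the file system above the device still issues ordinary block reads and writes, and only the driver knows that the blocks live in remote memory.

\begin{verbatim}
/* Illustrative sketch of block forwarding to a remote memory server.
 * A real driver would also handle byte order, timeouts, and failures. */
#include <stdint.h>
#include <unistd.h>

#define BLOCK_SIZE 4096

enum { NRD_READ = 1, NRD_WRITE = 2 };

struct nrd_request {            /* hypothetical wire format       */
    uint32_t op;                /* NRD_READ or NRD_WRITE          */
    uint64_t blockno;           /* block number on the RamDisk    */
};

/* Send exactly len bytes on a connected socket. */
int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive exactly len bytes from a connected socket. */
int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one block from the remote memory server holding it. */
int nrd_remote_read(int server_fd, uint64_t blockno, void *buf)
{
    struct nrd_request req = { NRD_READ, blockno };
    if (send_all(server_fd, &req, sizeof(req)) < 0)
        return -1;
    return recv_all(server_fd, buf, BLOCK_SIZE);
}

/* Write one block to the remote memory server holding it. */
int nrd_remote_write(int server_fd, uint64_t blockno, const void *buf)
{
    struct nrd_request req = { NRD_WRITE, blockno };
    if (send_all(server_fd, &req, sizeof(req)) < 0)
        return -1;
    return send_all(server_fd, buf, BLOCK_SIZE);
}
\end{verbatim}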
Moreover, we believe that the Network RamDisk, due to its simple implementation, has the potential to perform better than Network Memory Filesystems (e.g. xFS). For example, using NFS over an xFS distributed file system, a block read operation requires 2 network accesses if the data block exists in the contacted client's file system cache, or 5 network accesses if it resides in another client's cache (the contacted client has to query the block's manager first and then the storage server holding the block). A data block write operation in xFS requires from 2 to 4 network accesses (depending on whether the contacted client has write ownership of the data block). Accessing data blocks using NFS over our device requires 4 network accesses. Even in this case, because of its simplicity and portability, the memory and run-time overhead of our driver would be much lower (we do not use distributed maps, etc.), resulting in faster real-time responses to I/O requests. Network Memory Filesystems (like xFS) may offer scalability, but our approach offers simplicity and efficiency.
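For concreteness, the count of 4 network accesses for a block read through NFS over our device can be decomposed as follows, assuming the NFS server hosts the Network RamDisk device and the requested block resides in the memory of a remote workstation:
\[
\underbrace{1 + 1}_{\text{NFS request and reply}}
\;+\;
\underbrace{1 + 1}_{\text{block request and reply to the remote memory server}}
\;=\; 4 \ \text{network accesses.}
\]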