The NRD client is a disk device driver that handles all read and write requests. To service these requests, it may forward them to user-level NRD servers running on remote machines. This design minimizes the modifications needed to port the system to another operating system and avoids changes to the operating system kernel. The Digital Unix client has been linked with the Digital Unix 4.0 kernel of a DEC Alpha 3000 Model 300 with 32 MB of main memory, while the Linux client was linked with the Linux 2.0.33 kernel of a 233 MHz Pentium II PC with 64 MB of RAM.
When the NRD client starts, it passes through a configuration phase in which it determines which servers it will use, how many blocks each server will store, and which reliability or caching policy will be used; it then initializes its data structures. The NRD client keeps two maps in memory: a block table recording the location of each block, and a bitmap recording which blocks on each server are full or empty. The size of these maps is quite small. For example, with 1024-byte blocks and a maximum of 256 servers, we need 5 bytes for the location of each block (1 byte for the server and 4 bytes for the block number on that server), so for an NRD of 256 MB the block table is only 1.25 MB.
After the configuration phase, the NRD client connects to the NRD servers and functions as a normal disk, accepting block I/O requests from any filesystem created on it. Depending on the reliability policy used, the client issues block read/write requests to the servers over TCP/IP sockets on the network interface(s) installed on the NRD client machine. Security is ensured by allowing access to the device only to the superuser and by using privileged ports for communication between the NRD client and the servers.
The operating system is not aware that we use remote main memory instead of a magnetic disk as a block I/O device. It simply performs ordinary block I/O through the Virtual File System (VFS), treating the device as a disk, mainly because the device driver responds to all the normal ioctl() commands of an ordinary magnetic disk. The disk's geometry (capacity, cylinders, tracks, sectors, etc.) is customizable through the driver's ioctl interface. We also added a disk type entry to the /etc/disktab (or /etc/fstab) file of our experimental NRD client machines, so that the disklabel (or fdisk) command can easily identify the NRD as a disk. The number of blocks transferred with each request to the NRD server is also determined by the disk geometry and can be adapted to the current interconnection network so that performance is optimal.
The filesystem we used on our Network RamDisk is the classic UFS (Unix File System) on Digital Unix (it exists on most Unix-based operating systems), and the Ext2 filesystem on Linux. We are aware that these filesystems do not take full advantage of the disk being a Network RamDisk; however, we wanted to measure the performance resulting purely from the device and not from the filesystem on it. Moreover, using a common filesystem allowed us to use the ordinary system administration tools for disks (disklabel, newfs, fdisk, mkfs, mount/umount, fsck, df, etc.) to install, create, and test the filesystem on our device. It is quite likely that performance would be higher if a more efficient filesystem (e.g. AdvFS) were used; this is an area we could explore in the future.
The current implementation of the Digital Unix Network RamDisk client contains only an unreliable version and runs on top of a low-bandwidth 10 Mbps Ethernet.
The current Linux Network RamDisk client implementation contains three different reliability/caching policies: