
Related Work


Li and Petersen [10] have implemented a related system in which they add a main-memory module on the I/O bus (VME bus) of a computer system. This memory module can be used both as backing store and as (slow) main memory accessed via simple load and store operations. Although this approach increases the amount of memory available to a single workstation, the memory module cannot be accessed by other workstations in the same cluster; thus, only a single workstation benefits from the extra memory. Our approach, instead, uses the existing main memory of workstations in the same cluster to store an application's data. Thus, (i) we do not increase the cost of any workstation by adding main memory to its I/O bus, (ii) we exploit otherwise unused memory in the workstation cluster, and (iii) the amount of memory available to an application scales with a factor proportional to the number of workstations on the same LAN.

Felten and Zahorjan [6] have implemented a remote paging system on top of a traditional Ethernet network and presented an analytical model to predict its performance. Unfortunately, they do not report any results regarding the benefits of remote memory paging for real applications.

Schilit and Duchamp [14] have implemented a remote memory paging system on top of Mach 2.5 for portable computers. Its performance is similar to that of local disk paging: they quote the cost of a single remote memory page-in over Ethernet as about 45 ms for a 4-Kbyte page, which we believe is rather high. Their implementation's performance is dominated by various overheads induced by Mach and by the slow local buses of portable computers, so their figures are somewhat discouraging with respect to the usefulness of remote memory paging. Our implementation, instead, eliminates all unnecessary overheads, reducing the remote memory page-in time over Ethernet to as low as 8.7 ms for an 8-Kbyte page.

Comer and Griffioen [3] have implemented and compared remote memory paging and remote disk paging over NFS in an environment with diskless workstations. Their results suggest that remote memory paging can be 20% to 100% faster than remote disk paging, depending on the disk access pattern. Our work differs from [3] in the following respects: (i) we show that remote memory paging is faster even than local disk paging (which we believe is not at all obvious), while [3] shows only that it is faster than remote disk paging; and (ii) instead of using dedicated servers for remote memory paging, any workstation in our system can act as a remote memory server.

Anderson et al. have proposed the use of network memory as backing store [1] and as a file cache [4]. Their simulation results suggest that using remote memory over a 155-Mbit/s ATM network "is 5 to 10 times faster than thrashing to disk" [1]. In subsequent work [12], they outline the implementation of a remote memory pager on top of an ATM-based network. Our work differs from [1] in that (i) we base our results on executing real applications on top of our implemented pager, instead of simulating them, (ii) we show that remote memory paging yields significant performance improvements over the disk even when the interconnection network's bandwidth is as low as the disk's, and (iii) we present and evaluate a novel mechanism that tolerates memory-server crashes while requiring only an insignificant amount of additional memory.

Our work bears some similarity to distributed-shared-memory systems [11,5], in that both approaches use remote memory to store an application's data. The main difference is that we focus on sequential applications, whose pages are not (or are only rarely) shared, while distributed-shared-memory projects target parallel applications, where the main focus is reducing the cost of page sharing.


