
9 Applications

In this chapter we present two applications of the distributed objects system described in Chapter 7. The first example presented in Section 9.1 extends the Dining Philosophers example from Section 6.2 to the network. It separates the clients (philosophers) from the server (table). The second example shown in Section 9.2 describes the implementation of a distributed file system on top of the distributed objects system.

9.1 Synchronisation

In this section we extend the Dining Philosophers example presented in Section 6.2 by separating the clients from the server. The first implementation, which relied only on simple semaphores, would have to be rewritten; e.g. a Java application using the wait and notify methods ceases to work as soon as one of the forks is a remote object, as both wait and notify access only the stub object. However, the second implementation, which uses locking filters, can be adapted without any changes to the logic of the philosophers.
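The failure mode can be sketched in Python (an illustrative analogy, not the thesis's code): the client synchronises on its local stub while the real fork object lives on the server, so the two monitors are distinct and a notification on the stub never reaches a waiter on the remote object.

```python
import threading

# Illustration: the stub and the remote object carry *different* monitors.
class Fork:
    def __init__(self):
        self.cond = threading.Condition()

server_fork = Fork()   # the real object on the server
client_stub = Fork()   # the client's stub: a distinct monitor

def waiter(results):
    # a philosopher waits on the server-side fork's monitor
    with server_fork.cond:
        woken = server_fork.cond.wait(timeout=0.2)
        results.append(woken)

results = []
t = threading.Thread(target=waiter, args=(results,))
t.start()

# ...but the client notifies the stub's monitor, which is unrelated
with client_stub.cond:
    client_stub.cond.notify()
t.join()
print(results[0])  # False: the notification on the stub was lost
```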
Our server represents the table and the synchronisation part of our implementation. The actual activity of the philosophers occurs on the clients, i.e. invocations of Eat by the clients result in a remote method invocation (see Figure 9.1). In order to achieve this we extend the invocation semantics of our non-distributed implementation. The cascade of locking filters forms the server-side semantics. As the client-side semantics we choose DObjects.SyncInvocation. The server defines the desired message semantics and assigns them to the philosophers using DObjects.Export. The implementation is similar to the non-distributed version presented in Section 6.2.
MODULE Philosophers;

IMPORT Threads, Network, Lock, Invocations, DObjects, DecObjects;

TYPE
  Eater* = POINTER TO EaterDesc;
  EaterDesc* = RECORD (Threads.ThreadDesc) END;

VAR
  phils: ARRAY 5 OF Eater;

PROCEDURE (me: Eater) Think*;
BEGIN (* ... *)
END Think;

PROCEDURE (me: Eater) Eat*;
BEGIN (* ... *)
END Eat;

PROCEDURE Init;
VAR
  i, res: INTEGER; l, r, first: Lock.Lock; c: Invocations.Class; m: Invocations.Method;
  forks: ARRAY 5 OF Lock.Semaphore;
  room: Lock.Semaphore;
  si: Invocations.Invocation;
  name: ARRAY 6 OF CHAR;
BEGIN
  FOR i := 0 TO 4 DO
    NEW(forks[i]); forks[i].Init(1)
  END;
  NEW(room); room.Init(4);

  si := DObjects.SyncInvocation();
  FOR i := 0 TO 4 DO
    NEW(phils[i]);
    c := Invocations.GetClass(phils[i]);
    m := c.GetMethod("Eat");
    (* cascade of locking filters: the room first, then the left and the right fork *)
    first := Lock.New(room); l := Lock.New(forks[i]); r := Lock.New(forks[(i+1) MOD 5]);
    first.next := l; l.next := r; r.next := DecObjects.Invocation();
    m.SetCallerInvocation(si);
    m.SetCalleeInvocation(first);
    name := "PhilX"; name[4] := CHR(i+ORD('0'));
    DObjects.Export(phils[i], Network.DefaultHost(), name, c, res)
  END
END Init;

BEGIN
  Init
END Philosophers.

The main differences to the non-distributed version are:
  • We do not start the threads of the philosophers as they will run on other hosts.
  • We use DObjects.Export instead of DecObjects.SetSemantics to assign our semantics to the philosophers.
  • We distinguish the caller-side semantic from the callee-side semantic by calling SetCallerInvocation and SetCalleeInvocation.
In all other aspects the distributed implementation is identical to the non-distributed implementation.
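The effect of the lock cascade can be sketched in Python (an illustrative analogy of the Section 6.2 scheme, not the thesis's API): a room semaphore admits at most four philosophers and one binary semaphore guards each fork; all three are acquired, in a fixed order, before Eat runs and released afterwards.

```python
import threading

N = 5
room = threading.Semaphore(N - 1)             # at most 4 eaters at once
forks = [threading.Semaphore(1) for _ in range(N)]

def eat_with_locks(i, eat):
    # cascade: room, left fork, right fork - acquired in this fixed order
    with room:
        with forks[i]:
            with forks[(i + 1) % N]:
                eat(i)

meals = []
threads = [threading.Thread(target=eat_with_locks,
                            args=(i, lambda k: meals.append(k)))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(meals))  # [0, 1, 2, 3, 4] - every philosopher ate, no deadlock
```

Because the room admits only four of the five philosophers, at least one of them can always obtain both forks, which rules out the circular wait.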
The corresponding client is quite simple. It imports the desired philosopher object by calling DObjects.Import. Later invocations will use the assigned semantics. Whenever the method Eat is invoked, the message semantics framework intercepts the invocation and executes the assigned semantics, i.e. it issues a synchronous remote invocation and the server automatically obtains the assigned semaphores before the method is actually executed.
The client accesses only one philosopher. We let the user decide which of the 5 philosophers should be impersonated by this client. We chose this approach to emphasise the fact that the 5 philosophers (clients) need not run on the same host, but may also be distributed over the network.
MODULE Client;

IMPORT In, Threads, Network, DObjects, Philosophers;

PROCEDURE Start;
VAR t: Threads.Thread; self: Philosophers.Eater;
BEGIN
  t := Threads.ActiveThread();
  self := t(Philosophers.Eater);
  LOOP self.Think; self.Eat END  (* life cycle of a philosopher *)
END Start;

PROCEDURE Dinner*;
VAR
  i, res: INTEGER;
  name: ARRAY 6 OF CHAR;
  p: Philosophers.Eater;
BEGIN
  In.Open; In.Int(i);
  name := "PhilX"; name[4] := CHR(i+ORD('0'));
  DObjects.Import(Network.ThisHost("..."), name, p, res);
  Threads.Start(p, Start, 10000)
END Dinner;

END Client.

9.2 Distributed File System

In this section we describe a bigger sample application of our distributed objects system. It was developed as a diploma thesis at the Johannes Kepler University. We restrict ourselves to describing the aspects directly related to the distributed objects system and the composable message semantics framework. For a complete overview of the distributed file system see [Lich99].
The distributed objects described in this thesis are used as the basis of the implementation of a distributed file system. They enable the programmer to profit from some advantages of object-oriented programming: extensibility, readability and dynamic reconfigurability. We see a file or directory server not as a process but as an object exported by a host. Access to the file server is achieved by using remote method invocations. Therefore, every access to the distributed file system uses the remote access model (except where local decorations, e.g. a cache, are used). To enhance performance we offer caching of specific distributed objects (files, directories). Simultaneous access is controlled with the use of locks. Locks and caches are examples of the extensibility of our basic framework.
We did not use all semantic degrees of freedom offered by the composable message semantics framework. We used a class-centric view for assigning the invocation semantics to the classes of the distributed file system, i.e. all instances of a given class have the same invocation semantics. Additionally, we did not use all possible semantic options (automatic update, shallow parameters). However, we did use the possibility to return shallow-copied objects. This feature considerably simplifies the implementation of transparent network access. By using distributed objects we prevent almost all distribution aspects from cluttering up the actual file system code. The necessary distribution-specific code is almost completely concentrated within the module bodies. We define our semantics and marshallers within them. The code that actually deals with distribution is less than 5% of the complete source code.
We implemented two test applications that use the distributed file system: a file dialog and a text editor. They are both based on existing implementations that made use of the local Oberon file system. Their adaptation was mostly straightforward. The main task was the change from the statically bound procedures of the local file system Files to the dynamically bound methods of DFiles.
In this sub-section we show some small examples that point out typical usage patterns of the distributed file system. They demonstrate the view of the file system as seen by an application programmer.
A DirServer is the network interface that grants remote hosts access to the local file system. It acts as an intermediary that offers clients access to directories and files. As soon as access has been granted, all communication between the client and the accessed server object is handled directly without involving the directory server. The DirServer exports only directories explicitly configured to be public.
DObjects.Export(server, Network.DefaultHost(), serverName, NIL, {}, err);
The exported server uses the invocation semantics that were previously defined for all server objects by calling DObjects.SetDefaultClassInfo. The above invocation of DObjects.Export is actually the only explicit export action within the whole distributed file system. All other export actions occur implicitly by returning shallow-copied objects, i.e. the automatic instantiation of the client-side stub objects as return values (anonymous import).
Our name space builds on the Localizer concept, which is an extensible mechanism that builds on the URL syntax. A localizer specifies the name of an entity (file or directory), its location and the desired access method. Basically, we use an URL-like syntax to specify file and directory names.
URL = "file://" serverName "/" fileName
This mechanism can be extended by adding additional protocols, e.g. we added the cache protocol in order to add caching to the file returned for the above URL.
URL = "cache://file://" serverName "/" fileName
This mechanism is extensible and one can introduce arbitrary new file access decorators. One could also implement other naming schemes that, e.g. increase location transparency. Our locking and access control mechanisms are implemented using this mechanism.
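The prefix-based extension mechanism can be sketched as follows (illustrative Python; parse_localizer and its result shape are assumptions, not the thesis's API). Each protocol prefix selects one access decorator, and the remainder names the server and the entity.

```python
def parse_localizer(url):
    """Split a localizer such as 'cache://file://Server/Dir/File' into
    (protocol chain, server, path). Each protocol in the chain would
    select one access decorator, outermost first."""
    protocols = []
    while "://" in url:
        proto, url = url.split("://", 1)
        protocols.append(proto)
    server, _, path = url.partition("/")
    return protocols, server, path

print(parse_localizer("cache://file://MyServer/Dir1/Dir2/File"))
# (['cache', 'file'], 'MyServer', 'Dir1/Dir2/File')
```

Adding a new protocol, e.g. for locking or access control, only requires registering one more decorator for its prefix; the parsing stays unchanged.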
Opening and reading/writing of a remote file are similar to handling a local file. The distribution is completely hidden from the client.
VAR
  f: DFiles.File;
  r: DFiles.Rider;
  i: INTEGER; ch: CHAR;

f := DFiles.Old("file://MyServer/Dir1/Dir2/File");
r := f.CreateRider(0);
FOR i := 1 TO f.Length() DO
  r.Read(ch);  (* read the next byte *)
  ... use ch
END

Similar to the local Oberon system, we use a rider to actually read from and write to files. We set the rider to position zero and read, byte by byte, until we reach the end of the file. In this example, we use no caching. In order to have cached access to the file, we would have to prepend "cache://" to the name of the desired file.
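The rider access pattern can be sketched in Python (illustrative; the Rider class below is an assumption, not the DFiles API): a rider is created at a position and then advances through the file one byte at a time.

```python
class Rider:
    """A rider-like sequential reader over an in-memory 'file'."""
    def __init__(self, data, pos=0):
        self.data, self.pos = data, pos
    def read(self):
        # return the byte at the current position and advance
        ch = self.data[self.pos]
        self.pos += 1
        return ch

data = b"hello"
r = Rider(data, 0)                               # rider at position zero
out = bytes(r.read() for _ in range(len(data)))  # read byte by byte
print(out == data)  # True
```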
Similarly, access to the directory structure does not need explicit access to DObjects. However, access to a directory is only granted by the directory server.
VAR
  dir: DDir.Directory;
  it: DDir.Iterator;
  name: ARRAY 64 OF CHAR; isDir: BOOLEAN;

dir := server.OpenDir("file://MyServer/Dir1");
it := dir.CreateIterator();
it.Cur(name, isDir);
WHILE name # "" DO
  ... use name and isDir
  it.Cur(name, isDir)  (* advance to the next entry *)
END

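The iterator protocol used above can be sketched in Python (illustrative; it assumes Cur returns the current entry and advances, with an empty name marking the end of the directory):

```python
class DirIterator:
    """Sketch of the directory iterator: cur() yields the current
    (name, is_dir) pair and advances; ('', False) marks the end."""
    def __init__(self, entries):
        self.entries = list(entries)
    def cur(self):
        if not self.entries:
            return "", False
        return self.entries.pop(0)

it = DirIterator([("File1", False), ("Dir2", True)])
names = []
name, is_dir = it.cur()
while name != "":
    names.append((name, is_dir))
    name, is_dir = it.cur()
print(names)  # [('File1', False), ('Dir2', True)]
```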
Usage of the Distributed Objects System
As already mentioned, the distributed file system explicitly uses the distributed objects system only in a few places:
  • When the file system starts the directory server. The server object has to be exported explicitly in order for other hosts to see it. If a host does not wish to act as a server and wishes to act only as a client, it may skip this export action.
  • Whenever the file system first accesses a specific host, it has to gain initial access to it. It gains this access by importing the host's directory server object.
  • The marshallers are the next aspect where the file system has to deal with distribution. Every type transferred over the network (DDir.Directory, DFiles.File, ...) has to be marshalled. However, the generic marshalling mechanism of the object system cannot marshal these objects by itself. Therefore, the file system must supply marshallers for all these types.
  • The final aspect where the distributed file system directly accesses the distributed objects system is the definition of the desired invocation semantics. As they are used in a class-centric way, these definitions are all concentrated in the module bodies. The desired invocation semantics are assigned to the invocation information and set as the default choice for all instances of their corresponding class.
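The class-centric assignment can be sketched in Python (illustrative; set_default_class_info stands in for DObjects.SetDefaultClassInfo, and the semantics values are made up): one registry maps each class to its default invocation semantics, so every instance shares its class's semantics.

```python
# registry: class -> default invocation semantics for all its instances
_default_semantics = {}

def set_default_class_info(cls, semantics):
    _default_semantics[cls] = semantics

def semantics_for(obj):
    # every instance of a class shares that class's semantics
    return _default_semantics[type(obj)]

class File: pass
class Directory: pass

set_default_class_info(File, "sync+cacheable")
set_default_class_info(Directory, "sync")

print(semantics_for(File()), semantics_for(Directory()))
# sync+cacheable sync
```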
Advantages and Disadvantages of Using Distributed Objects
In this sub-section we summarise some advantages and disadvantages of using the distributed objects system. One advantage was the inherent extensibility of the distributed objects system. It allowed us to easily add new features, such as caching, locking, or access control, to our file system, while still keeping these aspects (locking, caching, ...) separate from the distributed file system. The usage of the decorator pattern was almost natural due to the structure of the object system used.
Another advantage was the reduced implementation effort. Some of the main implementation areas, e.g. network transport and the handling of server-side threads, were completely delegated to the distributed objects system. This delegation considerably reduced the necessary implementation effort and resulted in readable source code that concentrates on the key areas of file and directory access. Additionally, it makes the file system independent of the chosen network protocol.
However, using distributed objects as the means of communication also brought some disadvantages. One disadvantage is the loss in efficiency that stems from using objects as the distribution mechanism. This loss comes from the slight overhead necessary for distributed method invocations and results in slower performance than, e.g., NFS [Sun89, Sun95] on the same test platform. The worse performance also results from the fact that our file system runs at user level and is not incorporated into the kernel of the operating system. The easiest way to increase the performance of our file system would be to optimise our marshalling mechanism.
A final disadvantage of using distributed objects is the difficulty of merging the local file system of Oberon with our distributed file system. The local file system builds on statically bound procedure calls, which makes the interfaces of the two file systems completely different. We found no smooth solution to replace the strictly local file system of Oberon with the distributed file system without invalidating all, or at least some, of the existing clients.
