Jim Ries

CECS 486

Fall Semester 1999

Homework #6

November 15, 1999



The goal of this assignment was to implement the basic skeleton of the multi-agent infrastructure for the Educational Information Architecture (EIA). A complete implementation of this system would take far more time than was available for this assignment, and would include such features as clever load-balancing, database mirroring, security and encryption, and perhaps replaceable transport mechanisms. For the purposes of this assignment, however, I merely implemented a basic Personal Agent, Teaching Agent, and Course Agent. Each of these is parameterized, and can thus be used for a wide variety of courses, student enrollments and preferences, etc.

For this initial implementation, Course Agents are simply the keepers of the course content database. Each Course Agent is given parameters indicating the course for which it is responsible, and these parameters are used to find a subdirectory containing that course's content. Course Agents support image content (JPG and GIF) and treat all other content as the default (text) content. Provision has also been made to support video content, but it is not implemented in this version of the system. Additional content handlers can be "plugged in" to the system simply by extending the CourseContent class and overriding the appropriate methods (e.g., Display() and getContentType()).
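The plug-in mechanism can be sketched roughly as follows. Only the CourseContent class name and the Display() and getContentType() methods come from the system itself; the base-class fields and the VideoContent body are illustrative assumptions, not the actual code from the appendix.

```java
// Hypothetical sketch of the CourseContent plug-in mechanism. The class and
// method names Display() and getContentType() are from the report; the field
// layout and the VideoContent handler body are illustrative assumptions.
abstract class CourseContent {
    protected final String name;
    CourseContent(String name) { this.name = name; }
    abstract String getContentType();
    abstract void Display();
}

// A new handler plugs in by extending CourseContent and overriding both methods.
class VideoContent extends CourseContent {
    VideoContent(String name) { super(name); }
    String getContentType() { return "video"; }
    void Display() { System.out.println("[video player stub for " + name + "]"); }
}

public class ContentDemo {
    public static void main(String[] args) {
        CourseContent c = new VideoContent("lecture1.mpg");
        System.out.println(c.getContentType());
        c.Display();
    }
}
```

Because the rest of the system works only against the abstract CourseContent interface, adding a handler like this requires no changes to the agents themselves.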

Similarly, Teaching Agents are very simple shells in this implementation. Teaching Agents are essentially an interface through which Personal Agents may retrieve content. Teaching Agents could easily be extended to filter content based on a student's personal characteristics (e.g., a bright student might be allowed to skip ahead through the content), and to track dependencies among the content pieces (e.g., some background may need to be taught before a more advanced topic is covered). However, neither of these features is implemented here. The current design includes hooks to support such things, but time constraints precluded their implementation.

The purpose of Personal Agents is to maintain user preferences, such as enrollment in a given course, whether to display content of given types, preferred fonts, preferred colors, etc. In this implementation, only the content preference control is implemented: the user may choose to have his or her Personal Agent retrieve text, images, or both. The design allows additional content types to be easily supported as they are added to the system.
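A minimal sketch of the content preference control follows. The JPG/GIF-versus-text classification mirrors the Course Agent description above, but the method names and the Set-based preference store are assumptions for illustration, not the actual Personal Agent code.

```java
import java.util.*;

// Hypothetical sketch of the Personal Agent's content-type preference check.
// typeOf() mirrors the Course Agent rule: JPG and GIF are images, and all
// other content falls back to the default text type.
public class PrefDemo {
    static String typeOf(String fileName) {
        String lower = fileName.toLowerCase();
        if (lower.endsWith(".jpg") || lower.endsWith(".gif")) return "image";
        return "text";
    }

    public static void main(String[] args) {
        // The user has chosen to retrieve text only (could also be "image" or both).
        Set<String> wanted = new HashSet<>(Arrays.asList("text"));
        List<String> names = Arrays.asList("notes.txt", "diagram.gif", "Agent.java");
        for (String n : names) {
            if (wanted.contains(typeOf(n))) {
                System.out.println("retrieving " + n);
            }
        }
    }
}
```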



The following screen shot shows two Course Agents, two Teaching Agents, and one Personal Agent all running on my home Windows NT 4.0 machine. This is the second trial run, so the agent consoles show multiple status messages. Notice that the "CECS 383" course contains image content which has been displayed. The "CECS 486" course contains only Java source code, and it is displayed as text in the Personal Agent console. Notice also that timing information is being gathered by the Personal Agent.

For the purposes of this demonstration, the Personal Agent simply enumerates the names of the available content and then requests all of the content available from each teaching agent and displays it as it comes in. In a more realistic situation, a user would select content from a dialog box.

Below is a second screen shot taken from my campus Windows NT 4.0 machine, running Exceed as an X Windows host, with the Agents running on various machines in the Distributed Computing Laboratory. For this trial, the Course Agents ran on Amberjack, the Teaching Agents ran on Pollack, and the Personal Agent ran on Grouper.

Notice that the left console is Grouper, the middle is Amberjack, and the right is Pollack. This should give a feel for the fact that the agents can run on different machines without any problem.


Performance Metrics and Analysis

I ran several tests to get a feel for the performance of the system, timing each of the RMI calls from the PersonalAgent (caller) side. In general, performance seemed quite good, or at least acceptable for the small-scale tests I was running. Even for tiny files, however, an RMI call typically took at least 5 milliseconds, which is probably the cost of network latency and RMI marshalling.
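The caller-side timing can be sketched as a simple wrapper around System.currentTimeMillis(). The helper name timeCall is an assumption for illustration, and the Runnable here stands in for an actual RMI stub invocation such as getContentNameList().

```java
// Sketch of caller-side timing: take the wall-clock time before and after
// each remote call. The timeCall helper is a hypothetical name; the Runnable
// stands in for the real RMI stub invocation.
public class CallTimer {
    static long timeCall(String label, Runnable call) {
        long start = System.currentTimeMillis();
        call.run();                                      // the remote call itself
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(label + " took: " + elapsed + " millisecs.");
        return elapsed;
    }

    public static void main(String[] args) {
        // Simulate a remote call with a short sleep in place of an RMI stub.
        timeCall("getContentNameList()", () -> {
            try { Thread.sleep(10); } catch (InterruptedException e) { }
        });
    }
}
```

Note that currentTimeMillis() has coarse resolution on Windows NT, so individual measurements at the low end (5-10 ms) should be read as rough figures.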

It was interesting to see that some calls took longer even when transferring less total data. For example, the getContentNameList() call transfers a linked list of strings containing the names of the content files. Though many of the content files were quite large (certainly larger than a list of the file names), the content almost always transferred faster than the linked list. I suspect the code that marshals the linked list is somewhat slow, a supposition supported by a paper I recently read (Nester, Philippsen, and Haumacher, "A More Efficient RMI for Java", Proceedings of the ACM 1999 Conference on Java Grande, pp. 152-159).
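One piece of the cost is visible even without a network: Java's default serialization of a LinkedList of strings carries per-object and per-node overhead that a flat byte array of the same characters does not. The sketch below measures serialized size locally; it is an illustration of the overhead, not a reproduction of the RMI timings above.

```java
import java.io.*;
import java.util.*;

// Local illustration of default-serialization overhead: compare the
// serialized size of a LinkedList of file names against a flat byte array
// holding the same characters.
public class MarshalSize {
    static int serializedSize(Object o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(o);
            oos.flush();
            return bos.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        LinkedList<String> names = new LinkedList<>();
        StringBuilder flat = new StringBuilder();
        for (int i = 0; i < 100; i++) {
            String name = "content" + i + ".txt";
            names.add(name);
            flat.append(name);
        }
        int listBytes = serializedSize(names);
        int flatBytes = serializedSize(flat.toString().getBytes());
        System.out.println("LinkedList: " + listBytes + " bytes, flat byte[]: " + flatBytes + " bytes");
    }
}
```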

A brief sample of the timings for one run follows:

getContentNameList() took: 82 millisecs.
getContentFindID() took: 8 millisecs.
getNextContent() took: 92 millisecs.
getNextContent() took: 19 millisecs.
getNextContent() took: 16 millisecs.
getNextContent() took: 41 millisecs.
getNextContent() took: 19 millisecs.
getNextContent() took: 14 millisecs.
getNextContent() took: 11 millisecs.
getNextContent() took: 66 millisecs.
getNextContent() took: 7 millisecs.
getNextContent() took: 7 millisecs.
getNextContent() took: 7 millisecs.
getNextContent() took: 6 millisecs.
getNextContent() took: 7 millisecs.
getNextContent() took: 7 millisecs.
getNextContent() took: 44 millisecs.
getNextContent() took: 62 millisecs.
getNextContent() took: 5 millisecs.
getContentNameList() took: 35 millisecs.
getContentFindID() took: 4 millisecs.
getNextContent() took: 17 millisecs.
getNextContent() took: 42 millisecs.
getNextContent() took: 19 millisecs.
getNextContent() took: 11 millisecs.
getNextContent() took: 30 millisecs.
getNextContent() took: 10 millisecs.

In addition to these performance observations, several improvements would need to be made to the design of this system to adequately address the scaling needs of the EIA. First, as implemented, the system requires that all Teaching Agents reside on the same machine. This is an artifact of the design of the Personal Agent's command-line parser and could easily be changed; that is, the Personal Agent could be modified to specify a separate machine for each Teaching Agent. However, this shortcoming points out the more serious issue that the location of the Teaching Agents should, in fact, be transparent to the Personal Agents. It would be much better if there were a central namespace server (e.g., JavaSpaces) that could identify the actual location of the individual Teaching Agents. Similarly, it would be better if the location of the Course Agents were transparent to the Teaching Agents.
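The per-agent host fix could work as sketched below: let each command-line argument name both a host and an agent, and build a standard RMI lookup URL from it. The "host/agentName" argument format is an assumption; only the URL construction is exercised here, since an actual java.rmi.Naming.lookup() would require a running rmiregistry on each host.

```java
// Hypothetical sketch of naming a separate host per Teaching Agent. Each
// argument is assumed to take the form "host/agentName", from which a
// standard rmi:// lookup URL is built.
public class AgentLocator {
    static String rmiUrl(String hostAndName) {
        String[] parts = hostAndName.split("/", 2);   // "host/agentName"
        return "rmi://" + parts[0] + "/" + parts[1];
    }

    public static void main(String[] args) {
        String[] agents = { "pollack/TeachingAgent383", "amberjack/CourseAgent486" };
        for (String a : agents) {
            // In the real system this URL would be passed to java.rmi.Naming.lookup().
            System.out.println(rmiUrl(a));
        }
    }
}
```

A namespace server such as JavaSpaces would subsume even this, since agents could then be located without the user naming any hosts at all.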

Perhaps a more serious issue is the lack of load balancing capability among the Course Agents (or the Teaching Agents for that matter). It was specified in the assignment that multiple Course Agents should be deployable and that they should synchronize content among themselves. This was not implemented here, but would certainly be needed for a highly scalable system. In fact, I was able to test the entire system running on my single home Windows NT 4.0 machine with 10 Personal Agents running simultaneously. This test brought the system to its knees and the last 2 Personal Agents to be scheduled were unable to connect to the rmiregistry (presumably due to some timeout).



This was an extremely interesting assignment, and I frankly lament the fact that I could not spend several more weeks improving my implementation and addressing some of the issues I've raised here. My hope is that my proposal for a semester project (which is closely related to portions of this assignment) will be accepted, thus providing me an additional chance to work in this area. As mentioned in the introduction, this assignment is rich with issues such as load balancing, fault tolerance, scalability, transport performance, etc.


Appendix: Code