
Projects

Autonomous intelligent vehicles: robotic helicopter

Prof. Oliver von Klopp Lemon of CSLI is leading a Stanford team to build a multi-modal interface to an autonomous aerial robot (see www.ida.liu.se/ext/witas) in collaboration with the WITAS group at Linköping University in Sweden. The interface accepts combinations of spoken, typed, and gestural/graphical commands to the robot, and the robot's outputs and state are presented to the user multi-modally, including audio output, visualizations of sensor outputs, and tagged video footage that the user can refer to across modalities (e.g. "Follow this [pointing at map in GUI] road until you hit El Camino, then show me video footage of the intersection."). The interface is also designed to be "conversational", in the sense that the multi-modal dialog is planned and managed, and that different display and communication strategies are activated in different dialog and resource/time-bounded contexts. Currently, the team uses the Open Agent Architecture to manage communicating processes, Nuance for speech recognition, and Gemini for natural-language parsing.

Faculty Contact: Oliver von Klopp Lemon, CSLI (lemon@csli.stanford.edu)

Brain activity visualization using 3D graphics

Software tools designed in the Stanford Psychology Department enable users to render three-dimensional images and analyze measurements of activity in the human brain. This project involves the modification of this software to allow multiple users at remote sites to visualize and modify a single data set collaboratively, allowing users to open many views of the same data set and to add non-destructive hypertext overlays and annotations on the dataset. Eventually, the project participants intend to create a large repository of data accessible to the entire academic community. Some access to the data and analyses will be provided via an HTML/XML interface.

Faculty Contact: Brian A. Wandell, Psychology Department

Building design services in a distributed object environment

Project organizers in Civil Engineering have developed a prototype system for the real-time transfer and analysis of actual building designs. This represents an important application of Internet2's high-bandwidth, low-latency capabilities. The project organizers will distribute their services among other Internet2 sites, both at Stanford and beyond, to test the practicality of their infrastructure, using real-life facility examples. Potential hosts for the computationally intensive services include the San Diego Supercomputing Center (SDSC) and the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, both of which are Internet2 member universities.

Faculty Contact: Kincho H. Law, Civil and Environmental Engineering (law@ce.stanford.edu)

Course: democracy and governance in post-colonial Africa

In Winter Quarter 2000-2001, the Berkeley-Stanford Joint Center for African Studies will offer a jointly taught colloquium on “Democracy and Governance in Post-colonial Africa” employing Internet2 connectivity for synchronous videoconferencing and real-time, remote collaboration. Using this course as an experiment, the Center expects to develop joint Stanford-Berkeley course offerings on different themes every year. If the experiment succeeds, the Center will offer these — and parallel courses on research methodologies for international field research — via Internet2. The project’s organizers expect to be able to archive class sessions and refine them for future distance learning applications between Stanford/Berkeley and several South African universities.

Faculty Contact: Richard Roberts, African Studies (rroberts@leland.stanford.edu)

Course: bodyworks

Bodyworks is a multidisciplinary course organized by Professor Tim Lenoir of the History Department, with participation from faculty in History of Science, Comparative Literature, and the Department of Surgery at the Stanford Medical Center. Using Internet2 connectivity and the Zydacron OnWAN 350 videoconferencing system, the course is being taught simultaneously in Winter Quarter 2000 on the Stanford campus, at Stanford’s Paris Overseas Studies Program, and at SUNY Buffalo. At each site, incoming and outgoing video are projected on one screen, allowing each class to see itself and the remote class at the same time. Another screen projects PowerPoint slides. Sessions are output to videotape so that teachers and students can then capture clips for future presentations, demonstrations, or review of material via MPEG and RealVideo.

Faculty Contact: Tim Lenoir, History (tlenoir@leland.stanford.edu)

Course: computer integrated architecture/engineering/construction

The Project Based Learning Laboratory (PBL) brings together students, faculty, and industry practitioners from architecture, engineering, and construction in a course called “Computer Integrated Architecture/Engineering/Construction (A/E/C)”. In this learning environment, a wide spectrum of tools, including Web-mediated integrated 3D CAD applications, videoconferencing, and digital video streaming on the Web, is employed to help geographically distributed teams learn and work on projects. All of these applications, as well as an innovative Web-based drawing application and archive/database called RECALL, need Internet2's (I2) high-speed, low-latency network connections. Participants and users include students and faculty from Stanford University, UC Berkeley, and Georgia Tech, all of which are Internet2 universities.

Faculty Contact: Renate Fruchter, Civil and Environmental Engineering (fruchter@ce.stanford.edu)

Distributed surgical planning/collaboration

Researchers at the National Biocomputation Center, headed by Dr. Kevin Montgomery of the School of Medicine, are leading this project to allow groups of geographically dispersed surgeons to collaborate on a surgical plan. The project’s team members have developed a system that constructs a virtual workspace in which surgeons can visualize, interact with, and understand their patient’s data, and collaborate and consult with surgeons in other locations over the Internet2 backbone. Together with collaborators at the NASA Ames Center for BioInformatics, the system’s developers demonstrated the first wide-area multicast stream over Internet2 (including CalREN and Abilene) last March. This work will now be fully deployed for collaboration on real patient cases and extended to include new client sites.

Faculty Contact: Kevin Montgomery, School of Medicine (kevin@biocomp.stanford.edu)

Distributed surgical simulation

Researchers at the National Biocomputation Center, headed by Dr. Kevin Montgomery of the School of Medicine, are leading this project to build force-feedback (haptic) surgical simulators that are distributed over the Internet (allowing one to “pull the liver here and feel it in Boston”). Using surgical simulators, surgeons-in-training can learn surgical skills at an accelerated pace and practice surgery without risk to actual patients. A client-server system of distributed force-feedback (haptic) surgical simulators has already been developed at the Center. This project’s goal is to extend this system to work across a wide area — specifically between Stanford and collaborators at the University of Wisconsin and Texas Tech University — allowing collaborators in Stanford’s SUMMIT project to assess how distance learning can be performed in surgical training.

Faculty Contact: Kevin Montgomery, School of Medicine (kevin@biocomp.stanford.edu)

Distributed web indexing and feature extraction

The Stanford Digital Library project is developing a database for very large numbers of Web pages. This facility, code-named WebBase, is on its way to containing 100 million pages. Participating scientists within and outside of Stanford are indexing and listing features of each page — such as its “genre” (“advertisement”, “product sheet”, “scholarly report”, etc.). To enable scientists at different locations to compute and share this information, the project personnel will ‘stream’ the collection to other institutions, taking advantage of Internet2’s high bandwidth and low latency.
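Genre tagging of the kind described can be sketched as simple feature matching over page text. The categories come from the paragraph above, but the keyword cues and function names below are invented for illustration; they are not the Digital Library project's actual classifier.

```python
# Toy keyword-based genre tagger, illustrative only: the cue lists are
# invented for this example, not taken from the WebBase project.
GENRE_CUES = {
    "advertisement": ["buy now", "limited offer", "free shipping"],
    "product sheet": ["specifications", "dimensions", "warranty"],
    "scholarly report": ["abstract", "references", "methodology"],
}

def guess_genre(page_text: str) -> str:
    """Score each genre by how many of its cue phrases appear."""
    text = page_text.lower()
    scores = {
        genre: sum(cue in text for cue in cues)
        for genre, cues in GENRE_CUES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

A real feature extractor would of course use far richer signals (markup structure, link patterns, term statistics) than literal phrase matches.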

Faculty Contacts: Hector Garcia-Molina (hector@cs.stanford.edu) and Andreas Paepcke (paepcke@cs.stanford.edu), Computer Science

Distributed automated grading from Internet2 nodes

The Center for the Study of Language and Information at Stanford (CSLI) and the Visual Inference Laboratory at Indiana University (VIL) are currently providing an Internet-based grading/tutorial service for logic instruction. As of November 1999, the service had received 14,800 student submissions, containing a total of 64,200 exercise files. These are currently being processed by a single server at Stanford with an identical server at Indiana acting as backup. To handle the anticipated load that the service will experience in Fall 00-01, the system will convert to a distributed architecture which requires submissions to be synchronized among the multiple servers approximately every ten minutes. The process of synchronization involves a significant transfer of time-dependent data between the Stanford and Indiana sites. Ultimately, the project’s organizers intend to bring additional servers on line, some at other Internet2 sites.
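The ten-minute synchronization cycle described above can be sketched as a watermark-based delta transfer: each server records when it last synchronized and ships only submissions received since then. The field names and functions here are hypothetical, for illustration only.

```python
import time

# Hypothetical sketch of periodic submission synchronization between
# grading servers. Field names ("received_at") are invented.

SYNC_INTERVAL_SECONDS = 600  # roughly every ten minutes

def submissions_to_sync(submissions, last_sync_time):
    """Select submissions that arrived after the previous sync cycle."""
    return [s for s in submissions if s["received_at"] > last_sync_time]

def run_sync_cycle(submissions, last_sync_time, send):
    """Push the delta to the peer server; return the new watermark."""
    for sub in submissions_to_sync(submissions, last_sync_time):
        send(sub)
    return time.time()
```

Because only the delta crosses the network, the cost of each cycle scales with the submission rate rather than with the total archive size.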

Faculty Contacts: John Etchemendy and Dave Barker-Plummer (etch@csli.stanford.edu, dbp@csli.stanford.edu)

FOLDING@HOME: simulating protein folding

Understanding how proteins self-assemble ("protein folding") is a holy grail of modern molecular biophysics. What makes it such a great challenge is its complexity, which renders simulations of folding extremely computationally demanding and difficult to understand. Our group has developed a new way to simulate protein folding using distributed computing ("distributed dynamics"), which should remove the previous barriers to simulating protein folding. However, this method is extremely computationally demanding, and we need your help. We have already demonstrated that our distributed dynamics technique can fold small protein fragments and protein-like synthetic polymers. The next step is to apply these methods to larger, considerably more important and complicated proteins. Unfortunately, larger proteins fold more slowly, and thus we need more computers to simulate their folding. While the alpha helix folds in 100 nanoseconds, proteins just a little larger fold 100x more slowly (10 microseconds). Thus, while 10-100 processors were enough to simulate the helix, we will need many more to simulate these larger, more interesting proteins. Moreover, we are extending our methods to take advantage of the low-latency/high-bandwidth communication of Internet2 in order to greatly improve their efficiency. This will involve implementing a peer-to-peer version of Folding@home for Internet2.
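The core distributed-computing pattern, handing many short, independent simulation work units to volunteer machines and collecting the results, can be sketched schematically. This is an illustration of the general pattern only, not the actual Folding@home protocol; all names and the work-unit format are invented.

```python
from queue import Queue

# Schematic work-unit distribution: the server queues independent
# trajectory segments; each client fetches one, simulates it, and
# reports the result back. (Not the real Folding@home protocol.)

def make_work_units(n_segments):
    """Queue up n independent simulation segments as work units."""
    q = Queue()
    for seg in range(n_segments):
        q.put({"segment": seg, "steps": 1000})
    return q

def client_loop(work_queue, results, simulate):
    """A client repeatedly takes a unit, runs it, and returns a result."""
    while not work_queue.empty():
        unit = work_queue.get()
        results.append(simulate(unit))
```

In the real system the work units are dynamics trajectories and the "queue" spans the public Internet, but the scaling argument is the same: more clients drain the queue proportionally faster.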

Faculty Contact: Vijay S. Pande (pande@stanford.edu)
folding@home

Internet based inverse treatment planning

Intensity modulated radiation therapy (IMRT) is developing into an important radiation therapy modality. However, the clinical IMRT treatment planning process is computationally time-consuming. The planning system takes patient CT/MRI images, physician-defined tumor volumes, and sensitive-structure information as input, and derives a set of optimal beam parameters for the patient's treatment using computerized optimization. Currently, the computation is still done on a designated treatment planning computer with pre-installed software. The goal of this project is to establish a new computational paradigm that utilizes state-of-the-art Internet technology and provides a client-server planning environment for IMRT as well as conventional radiation therapy treatment. This proposal represents a pioneering attempt to use Internet technology to facilitate the tedious radiation therapy treatment planning process. It will make the World Wide Web more useful and holds the promise of improving current treatment planning in a fundamental way. The network-computing model proposed here is intrinsically insulated from obsolescence and offers economies of scale through shared hardware, software, and administration. Successful completion of the project will revolutionize the clinical treatment planning procedure to realize the maximum technical and economic benefit of state-of-the-art Internet2 technology.
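The "computerized optimization" step of inverse planning is commonly cast as choosing nonnegative beam weights so that the delivered dose best matches the prescription. The toy formulation below (least-squares dose matching solved by projected gradient descent) is an assumed illustration of that idea, not the project's clinical algorithm; `D` is a dose-deposition matrix mapping beam weights to voxel doses, and `p` is the prescribed dose per voxel.

```python
# Toy inverse planning sketch (assumed formulation, not a clinical
# algorithm): find nonnegative beam weights w minimizing 0.5*||D w - p||^2.

def optimize_beam_weights(D, p, iters=2000, lr=0.05):
    """Projected gradient descent on the squared dose-matching error."""
    n_beams = len(D[0])
    w = [0.0] * n_beams
    for _ in range(iters):
        # Residual r = D w - p (delivered dose minus prescription).
        r = [sum(D[i][j] * w[j] for j in range(n_beams)) - p[i]
             for i in range(len(p))]
        # Gradient of 0.5*||r||^2 with respect to w is D^T r.
        for j in range(n_beams):
            g = sum(D[i][j] * r[i] for i in range(len(p)))
            w[j] = max(0.0, w[j] - lr * g)  # weights must stay physical
    return w
```

Real planning systems add organ-at-risk penalty terms and run over millions of voxels, which is exactly why the computation is slow enough to motivate a remote client-server model.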

Faculty Contacts: Lei Xing, Ph.D., Principal Investigator, and Co-Investigators Arthur L. Boyer, Ph.D. and Russell Hamilton, Ph.D.
 

LOCKSS (lots of copies keeps stuff safe)

LOCKSS is an attempt to ensure long-term access to academic journals published on the Web. It allows each library to run a pre-loaded web-cache (which never gets flushed) of the journals to which it subscribes. A very slow IP multicast protocol runs between the caches to detect and repair damage. The project is exploring techniques for fault-tolerant distributed systems which have far more replicas than needed to survive expected failures. It is funded by Stanford Library, the NSF and Sun Microsystems.
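The detect-and-repair idea can be sketched as majority polling over content hashes: each cache votes with a digest of its copy, and a cache whose digest loses the vote repairs from a peer. This is only an illustration of the principle; the real LOCKSS protocol is deliberately slow, randomized, and far more elaborate.

```python
import hashlib
from collections import Counter

# Illustrative majority-poll damage detection in the spirit of LOCKSS
# (the actual protocol differs substantially).

def copy_hash(data: bytes) -> str:
    """Digest a cached copy of a journal unit for voting."""
    return hashlib.sha256(data).hexdigest()

def needs_repair(local_copy: bytes, peer_hashes: list) -> bool:
    """True if the local copy disagrees with the majority of peers,
    signaling damage that should be repaired from an agreeing peer."""
    winner, _ = Counter(peer_hashes).most_common(1)[0]
    return copy_hash(local_copy) != winner
```

With far more replicas than failures, an isolated damaged or tampered copy is reliably outvoted and can be restored without trusting any single peer.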

library.stanford.edu/projects/lockss

Simulation-based medical planning for cardiovascular disease

This project of the Departments of Surgery and Mechanical Engineering will help establish a new paradigm of predictive medicine. In this paradigm, the physician uses computational tools to construct and evaluate a combined anatomic/physiologic model which can then be used to predict the outcome of alternative treatment plans for individual patients. This process has been implemented in a software simulation called ASPIRE (Advanced Surgical Planning Interactive Research Environment) that combines a Web-based user interface, image segmentation, geometric solid modeling, automatic finite element mesh generation, computational fluid dynamics and scientific visualization techniques. Internet2 connectivity will support the further development of ASPIRE to include Java-based 3-D tools for simultaneously visualizing volume image data (using volume-rendering techniques) and the surgical plans (using surface-rendering techniques) in a client-server environment.

Faculty Contacts: Charles A. Taylor, Surgery and Mechanical Engineering (taylor@leland.stanford.edu)

SoundWIRE

Researchers at the Center for Computer Research in Music and Acoustics (CCRMA) are creating a tool for evaluating quality of service (QoS) using “Sound Waves on the Internet from Real-time Echoes.” SoundWIRE will be developed into a network software layer that provides an intuitive way of qualitatively evaluating transaction delay and delay constancy. The final form of SoundWIRE is envisioned as a plug-in to common Web browsers. A person who needs to test a path to a server would click on the server’s test address and listen for a musical tone, perhaps a guitar pluck. The sound would be created by repeatedly reflecting a digital acoustic signal between the server and client. Using the inherent network delay between these reflections in place of a guitar string allows the pitch of the sound to represent transmission latency and the tone’s stability to represent the regularity of network service. SoundWIRE will be useful as an intuitive diagnostic for full-duplex channels supporting media-rich applications, such as high-quality teleconferencing, remote sensing and teleoperation.
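The delay-to-pitch mapping described above works like a plucked string whose "length" is the network round trip: a signal recirculated once per round trip rings at a fundamental of one over the round-trip time, and jitter audibly detunes it. The function below is a minimal sketch of that relationship, assuming the signal completes one full reflection cycle per round trip; it is not CCRMA's implementation.

```python
# Minimal sketch of the SoundWIRE delay-to-pitch idea: the network
# round trip plays the role of a string's delay line, so the perceived
# fundamental frequency is the reciprocal of the round-trip time.

def soundwire_pitch(rtt_seconds: float) -> float:
    """Fundamental frequency (Hz) of a signal reflected between client
    and server once per network round trip."""
    if rtt_seconds <= 0:
        raise ValueError("round-trip time must be positive")
    return 1.0 / rtt_seconds

# A 10 ms round trip rings at roughly 100 Hz; 2 ms at roughly 500 Hz,
# so shorter paths literally sound higher-pitched.
```

Listening for pitch wobble is then a direct perceptual readout of delay variance, which is the quantity that matters most for interactive audio.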

www-ccrma.stanford.edu/~cc/soundwire/NSFDescription.html

Teleconcerting application for joint concert, distance learning

Researchers at the Center for Computer Research in Music and Acoustics (CCRMA) are developing a new “teleconcert” application. The higher throughput of Internet2 promises to carry uncompressed multi-channel sound and high-quality video with the low latency needed for interactivity, acceptable synchronization, ease of use, and reliability in live concert settings. A joint concert between the University of Washington and Stanford, scheduled for May 2000, will feature a shared program of music using point-to-point communication between a concert hall on the UofW campus and a CCRMA performance space. Pieces will be performed live at each end and broadcast to the other. Once the system is tested successfully, the project participants will develop these tools for use in distance learning and remote ensemble situations.

Faculty Contact: Chris Chafe, Music (cc@ccrma.stanford.edu)

Last modified September 28, 2021