MAGIC (Mobile, Augmented reality, Group Interaction, in Context) is a hardware and software platform dedicated to archaeological prospecting activities. It enables archaeologists to perform ground analysis of a site and to communicate both with other mobile archaeologists working on the site and with distant archaeologists. We first describe the hardware and then the software responsible for fusing the two worlds, the physical and the digital. The complete software and its architecture are detailed in [Renevier 01].
The hardware platform is an assembly of commercial components. We use a Fujitsu Stylistic 3400 pen computer, a PC (Pentium III 450 MHz processor, 196 MB of RAM) with a tactile screen the size of an A4 sheet of paper; it weighs 1.5 kg. It also has a video output allowing dual display, to which we connect a semi-transparent Head-Mounted Display (HMD), a SONY LDI D100 BE. A camera is fixed between the two screens of the HMD (between the two eyes). The platform also includes a magnetometer (Honeywell HMR 3000), which determines the orientation of the camera, and a GPS receiver, which locates the mobile user. For sharing data and communicating between users, a Lucent WaveLAN network card (11 Mb/s, PCMCIA) was added. Figure 19.2 shows a fully equipped MAGIC user.
Based on the hardware platform described above, we designed and developed interaction techniques that enable users to perform the functions associated with the scenarios. Figure 19.3 presents the graphical user interface displayed on the tactile screen of the pen computer. The software offers several functions for communication (electronic forum and messages), for coordination (archaeologists' locations on the map of the archaeological site) and for production (editing tools, a database of found objects). To smoothly combine the digital and the real, we created a gateway between the two worlds. This gateway is represented both in the digital world, on the screen of the pen computer (bottom-right window in Figure 19.3), and in the real environment, displayed on the HMD.
Information from the real environment is transferred to the digital world by the camera carried by the user. The camera is positioned so that its view corresponds to what the user sees through the HMD. The real environment captured by the camera can be displayed as a background in the gateway window on the pen computer screen. The gateway window thus lets the user select, or click on, the real environment; we call this interaction technique "Clickable Reality". Before taking a picture, the camera must be calibrated according to the user's visual field. Using the stylus on screen, the user then specifies a rectangular zone by means of a magic lens [Bier 93]. The specified zone corresponds to a part of the real environment. By selecting the "take" button (to take a picture), the user captures the part of the real world contained within the frame of the lens. The real world becomes clickable, like digital objects. All the images thus captured are recorded in a database along with their location, obtained from the GPS.
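The capture step pairs each image with the GPS fix available at the moment the picture is taken, so the object can later be restored in place. A minimal sketch of such a store, assuming a simple SQLite schema of our own devising (the chapter does not specify the actual database layout or field names):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema: each captured zone of the real environment is
# stored with the user's GPS position and the camera heading.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE captures (
        id INTEGER PRIMARY KEY,
        image_path TEXT,
        lat REAL, lon REAL,       -- GPS position of the user
        heading_deg REAL,         -- camera orientation (magnetometer)
        captured_at TEXT
    )
""")

def record_capture(image_path, lat, lon, heading_deg):
    """Insert one captured zone of the real environment."""
    conn.execute(
        "INSERT INTO captures (image_path, lat, lon, heading_deg, captured_at) "
        "VALUES (?, ?, ?, ?, ?)",
        (image_path, lat, lon, heading_deg,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

# Illustrative values only.
record_capture("amphora_fragment.png", 43.2965, 5.3698, 112.0)
row = conn.execute("SELECT image_path, lat, lon FROM captures").fetchone()
print(row)  # ('amphora_fragment.png', 43.2965, 5.3698)
```

Storing the location alongside the image is what later allows the same picture to be replayed at its original position during the Augmented Stroll.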
Information from the digital world is transferred to the real environment, via the gateway window, by the HMD. For example, the archaeologist can drag a drawing or a picture stored in the database to the gateway window. The picture is then automatically displayed on the HMD, on top of the real environment. Moving the picture with the stylus on the screen moves it on top of the real environment. Archaeologists use this, for example, to compare two objects: one from the database and one just discovered in the real environment.
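The one-to-one coupling between stylus movement in the gateway window and picture movement on the HMD amounts to a coordinate mapping between the two displays. A minimal sketch, with resolutions chosen purely for illustration (the real system calibrates the camera and HMD against the user's visual field):

```python
def gateway_to_hmd(x, y, gateway_size=(320, 240), hmd_size=(800, 600)):
    """Map a stylus position in the gateway window to overlay
    coordinates on the HMD, assuming both cover the same field of
    view (a simplification of the calibrated setup)."""
    gw, gh = gateway_size
    hw, hh = hmd_size
    return (x * hw / gw, y * hh / gh)

# Dragging the picture to the centre of the gateway window places it
# at the centre of the HMD overlay.
print(gateway_to_hmd(160, 120))  # (400.0, 300.0)
```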
In addition, when an archaeologist walks through the site, s/he can see discovered objects that have been removed from the site and recorded in the database by colleagues: we call this interaction technique the "Augmented Stroll". The technique consists of superimposing an image of an object on its original real context (in the real world), by means of the semi-transparent HMD. Because a picture is stored along with the location of the object, we can restore the picture in its original real context (2D location). The "Augmented Stroll" is an example of a mobile and collaborative augmented reality (AR) technique. On the one hand, it is a mobile AR technique because the augmentation of the real world is based on the current position (GPS) and orientation (magnetometer) of the user. On the other hand, it is an asynchronous collaborative technique: one user captures an object in its physical context before removing it from the site; another user can later perceive the object in its original real context. While walking through the archaeological site, the user sees green spots superimposed on the real world, indicating that objects are available. By selecting a green spot, the user can see the digital object in its original real context. Reusing digital objects that originated in the real world but are no longer physically present completes the interaction cycle between the user, her/his environment and the computer. This mode makes it possible to follow the evolution of the archaeological site through space (the movements of the user within the site) and time (the various pieces of information recorded by several users).
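The mobile part of the Augmented Stroll reduces to deciding, from the GPS position and the magnetometer heading, whether a stored object falls within the user's field of view, and if so where to draw its green spot. A sketch of that computation (the 40° field of view and the screen-fraction convention are assumptions for illustration, not values from the chapter):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Bearing from the user to a stored object, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def spot_offset(user_heading, object_bearing, fov_deg=40.0):
    """Horizontal position of the green spot as a fraction of the HMD
    width (-0.5 .. 0.5), or None when the object lies outside the
    (assumed) field of view."""
    diff = (object_bearing - user_heading + 180) % 360 - 180
    if abs(diff) > fov_deg / 2:
        return None
    return diff / fov_deg

# A user facing east (heading 90°) sees an object at bearing 100°
# slightly right of centre; an object behind the user is not drawn.
print(spot_offset(90.0, 100.0))  # 0.25
print(spot_offset(90.0, 270.0))  # None
```

The asynchronous, collaborative aspect comes for free: the bearing is computed against locations that another user recorded earlier, so the second user's HMD replays the first user's captures in place.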
Our approach consists of augmenting the user: a MAGIC user wears and holds the MAGIC platform. Nevertheless, from the point of view of the user, physical objects or places are augmented. Indeed, the approach adopted is to assist the user by providing extra information about the physical field via a device carried by the user. Another approach [Mackay 98] would be to augment the physical environment itself. Each time the user removed a physical object, s/he would have to place an input/output device at the position of the object. The links between the physical and digital worlds would then be dynamic but explicit: the user would have to perform a specific action to define each link. Our design solution enables links between the physical and digital worlds that are dynamic but implicit, because the location of an object is automatically computed by the system.