Fulfilling the above requirements calls for an architecture that jointly articulates these different models of location.
We propose a layered template for this architecture that draws inspiration from, and is conceptually similar to, those used in generic network services and protocols: the bottom layers are closest to the physical properties of space, and higher layers become progressively more abstract and closer to concepts understandable by human users, much as network-based identification protocols move up from the physical connection, to MAC addressing, to IP, then to DNS and possibly UDDI addressing.
This model juxtaposes two vertical categories of information, orthogonal to layers, corresponding respectively to loci and locants.
By analogy with the physical layer of network protocols, this is the lowest level of our architecture, directly related to the location-sensing and identification technologies used. From the variety of technologies available, we can distinguish two (overlapping) categories: technologies that identify locants, and technologies that locate them in space. Only by combining both kinds of sensor can a specific locant be tracked through space. The identification sensor itself may provide some minimal location information if it has a limited range, asserting the presence of the identified locant within that range.
The relation between entities and sensors can be seen in both directions: with some technologies the sensor actively searches for objects, while with others the entity itself announces its presence, whether for identification or for location.
The final aim of this layer is to provide a relative location for an identified entity (which may result from the combination of several sensors rather than a single one). This location may be vague (near some identifying sensor) or more accurate, but at this stage it is only relative to the sensor(s); it has no meaning yet in terms of physical space.
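The fusion performed at this layer can be sketched as follows. This is only an illustrative model under assumed data shapes: the reading classes, field names, and sensor identifiers (`door-7`, `tag-42`) are hypothetical, not part of any concrete sensing technology.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IdReading:
    sensor_id: str    # which identification sensor saw the locant
    locant_id: str    # e.g. an RFID tag value

@dataclass
class RangeReading:
    sensor_id: str
    distance_m: float  # relative distance measured by a locating sensor

def relative_fix(id_r: IdReading, rng: Optional[RangeReading]) -> dict:
    """Fuse an identification reading with an optional ranging reading.

    With no ranging data, the fix degrades to 'somewhere within the
    identification sensor's range' -- the vague case described above.
    Either way, the result is only relative to the sensor.
    """
    fix = {"locant": id_r.locant_id, "relative_to": id_r.sensor_id}
    if rng is not None and rng.sensor_id == id_r.sensor_id:
        fix["distance_m"] = rng.distance_m  # more accurate relative fix
    return fix
```

For instance, `relative_fix(IdReading("door-7", "tag-42"), None)` yields only presence within range of `door-7`, while adding a `RangeReading` refines it with a distance.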
At this level, relevant loci are geometrically defined as sets or neighbourhoods that aggregate information coming from the physical layer, depending on the actual distribution of sensors through space. Position information for a sensor allows mapping of the relative position information it provides to a more global coordinate reference system.
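The mapping from a sensor-relative fix to a global coordinate reference system can be sketched as below, assuming a simple 2-D frame in which each sensor's position and heading are known from deployment data; the function name and parameters are illustrative only.

```python
import math

def to_global(sensor_pos, sensor_heading_rad, rel_range_m, rel_bearing_rad):
    """Map a (range, bearing) fix, expressed relative to a sensor, into
    the global coordinate reference system in which the sensor's own
    position is known.

    sensor_pos: (x, y) of the sensor in the global frame.
    sensor_heading_rad: orientation of the sensor in that frame.
    """
    theta = sensor_heading_rad + rel_bearing_rad
    return (sensor_pos[0] + rel_range_m * math.cos(theta),
            sensor_pos[1] + rel_range_m * math.sin(theta))
```

A fix of 5 m dead ahead of a sensor at (10, 0) facing along the x-axis thus maps to the global point (15, 0).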
In this layer, multiple locants and loci will be associated in complex ways to model the structural relationships between them. This corresponds to a graph-based model for which vertices (nodes) may be either loci or locants.
In the first case, these models may be seen as enrichments of a set-based/topological model, modelling not only loci as subsets or neighbourhoods of space but also their structural relationships. This is implicitly the kind of model underlying the cell pavings used in cellular networks, where adjacency relationships between cells are used for the handover of a locatable entity from one cell to another. These models are also used in navigation systems, where it is necessary not only to locate the user but also to find a route for him to some destination. Adjacency is but one particular case of relationship. A complementary hierarchical model loosely underlies most of the semantic models used in directories, but is also an implicit model for the space within a building, as decomposed into floors, rooms, cabinets, etc.
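Both relationship types mentioned above can be sketched as graphs over loci. The room names (`F105`, `F107`, `F109`) and the breadth-first routing are purely illustrative of the kind of traversal a handover or navigation query would need, not a prescribed implementation.

```python
# Vertices are loci; each structure carries one relationship type.
adjacency = {                       # cell-paving style: which loci touch
    "F105": {"F107"}, "F107": {"F105", "F109"}, "F109": {"F107"},
}
containment = {                     # hierarchical decomposition
    "building": ["floor-1"],
    "floor-1": ["F105", "F107", "F109"],
}

def route(adj, start, goal):
    """Breadth-first route between two loci over adjacency edges,
    as a navigation system would compute for a located user."""
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Here `route(adjacency, "F105", "F109")` walks through the intermediate locus `F107`, exactly the chain a cellular handover would follow.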
A much more abstract view of location corresponds to the case where the vertices of the graph are locants, and loci may correspond to various subsets of this graph (e.g. paths, walks, cycles, or arbitrary subgraphs). This may be used in conjunction with a metric model (yielding a valuated graph) with a location technology that is purely relative and bilateral between objects themselves, rather than related to more or less fixed loci.
General graphs may be used to model all kinds of bilateral or multilateral relationships between their nodes, besides relative location as put forward here. Of course, all classical network models are based on graphs modelling their connectivity relationships, and this is not what we are attempting to reinvent. Other relationships may be described for which location may still be used as a metaphor, by extracting topological properties from the graph itself. Some semantic relationships may, for example, be described in a structural way, enabling inverse location queries similar to those that may be made in a physical location model.
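A valuated graph of this kind can be sketched as a table of pairwise measurements between locants. The badge identifiers and distances are hypothetical, standing in for a purely relative, bilateral ranging technology with no fixed loci at all.

```python
# Edge weights are pairwise measured distances between locants.
distances = {
    ("badge-A", "badge-B"): 2.5,
    ("badge-B", "badge-C"): 1.0,
    ("badge-A", "badge-C"): 4.0,
}

def nearest(locant, dist):
    """Answer a purely relative query -- 'which locant is closest
    to this one?' -- from the valuated graph alone."""
    best, best_d = None, float("inf")
    for (a, b), d in dist.items():
        if locant in (a, b) and d < best_d:
            best, best_d = (b if a == locant else a), d
    return best
```

Note that nothing here refers to physical space: proximity is extracted from the graph's edge values, in the spirit of the topological properties mentioned above.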
In these models, spatial location may be defined implicitly rather than explicitly, by reference to more or less abstract concepts relevant to a given universe of discourse, i.e. a semantic frame of reference. Loci may correspond to such divisions as streets, precincts, municipalities, regions, states, as used in regular directories. At a lower level and a smaller scale, buildings, floors, rooms, or even shelves in a cupboard, cells on a given shelf, etc. could be used as loci providing a spatial reference for all kinds of locants, which will themselves be defined by some supposedly well-known characterisation rather than by their physical properties.
These symbolic locus descriptions will themselves be mapped to one of the lower-level models described before, i.e. either a hierarchical graph model, a topological model, or a metric model, and this correspondence has to be accounted for by the architecture.
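This correspondence between symbolic names and a lower-level model can be sketched as two small tables, here grounding semantic loci in a hierarchical (containment) model. All names (`cafeteria`, `F107`, `floor-1`) are hypothetical examples, not part of a defined vocabulary.

```python
# Semantic layer: symbolic names for loci.
semantic_loci = {
    "cafeteria": "F107",        # concept mapped to a structural locus
    "conference room": "F109",
}

# Lower-level hierarchical model: containment between loci.
parent = {"F105": "floor-1", "F107": "floor-1", "F109": "floor-1",
          "floor-1": "building"}

def ground(name):
    """Resolve a semantic locus down into the hierarchical model,
    returning its structural containment chain -- the correspondence
    the architecture must account for."""
    locus = semantic_loci[name]
    path = [locus]
    while locus in parent:
        locus = parent[locus]
        path.append(locus)
    return path
```

For example, `ground("cafeteria")` returns the chain from the room up to the building, which a query engine can then answer at whichever level the asker requires.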
These characterisations may be compounded with other non-univocal high-level properties associated with a particular locus. These may correspond to a typing or profiling of a particular locus (e.g. authorisations, security constraints, etc).
One of the main goals of the infrastructure presented above is to respond to a direct location query. The query is of the type "Where can I find something?", something being a communicating entity and the response to where being a locus. The general path for answering the query thus runs from the entity to the locus.
In human-understandable terms, the something is likely to be expressed as a semantic definition (e.g. "where is the coffee-machine?").
An entity may also be searched for through the relations it has with other entities forming a higher-level device (for example, a streaming server linked to a video client such as a giant flat-screen TV; the query would be "where is the TV located?"). Finally, an entity can be searched for by its low-level identification (such as an Ethernet address or a unique URI), but that sort of request should be transparent to users and accessible only for performance purposes, because it partly bypasses the infrastructure model.
On the entity column, the request follows a downward path, being translated layer by layer. It may descend to the physical identification before crossing over to the locus column, where it then follows an upward path, stopping at the level required by the asker.
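The down-then-up resolution path can be sketched with a few lookup tables standing in for the layers of the two columns. The table contents (URI, sensor and room names) are invented placeholders, and real implementations would of course involve live sensing rather than static dictionaries.

```python
# Entity column, descending: semantic name -> identifier -> sensor.
semantic_to_id = {"coffee-machine": "urn:dev:cm-01"}
id_to_sensor = {"urn:dev:cm-01": "door-7"}

# Crossing to the locus column, then ascending it.
sensor_to_locus = {"door-7": "F107"}
locus_to_semantic = {"F107": "cafeteria of section F"}

def where_is(name, level="semantic"):
    """Direct location query: descend the entity column to the physical
    identification, cross to the locus column, then ascend to the level
    required by the asker."""
    locus = sensor_to_locus[id_to_sensor[semantic_to_id[name]]]
    return locus_to_semantic[locus] if level == "semantic" else locus
```

Stopping the upward path early (`level="structural"`) yields the room identifier; letting it reach the top yields the human-oriented semantic answer.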
The response can take many forms too, depending on the needs of the request. It could be an absolute position in a specific reference frame (for example GPS coordinates, if outdoors). It could be the neighbourhood in which the object is located (for example, the coffee-machine is inside room F107), or the structural path to reach it (the coffee-machine is on the first floor of the building, section F, between rooms F105 and F109, across from the corridor, etc.).
Finally, it may be a semantic response, especially if the asker is human (the coffee-machine is in the cafeteria of section F).
Of course, these answers may also be combined to fulfil the requirements of an application (especially when navigation software is guiding the user to the coffee-machine).
As explained in the requirements, another useful type of query concerns inverse location search, responding to questions like "What entities can I find there?", there being a locus. It can be seen as the opposite path, from locus to entities.
The query is likely to be addressed nearly anywhere on the locus column.
From the semantic viewpoint, such queries are readily understandable by human beings: "Who is in the conference room?", for example (people being considered as communicating entities through their PDAs or mobile phones). In this case, the place is semantically known to be a conference room, and semantic properties are attached to the entities searched for (humans, in fact, not coffee-machines).
But the query can also target the structural relations between loci (answering queries of the type "What device can I use in the room next door?" or "What can I find inside this entire building?").
For proximate selection, many queries are of the type "What services can I use near me?". "Near me" can be understood as within the same set of space, within my neighbourhood, or near my absolute position. Neighbourhoods are also targeted by non-proximate selections (asking about entities in neighbourhoods or sets in which you are not located).
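The simplest reading of "near me", sharing the same set of space, can be sketched as an inverse query over current entity placements. The entity names and their locus assignments are hypothetical, and in practice the placements would come from the lower layers rather than a static table.

```python
# Inverse / proximate selection: current placements of entities in loci,
# as maintained by the lower layers of the architecture.
entity_locus = {
    "printer-1": "F107",
    "screen-1": "F109",
    "badge-A": "F107",
}

def near_me(my_entity):
    """'What services can I use near me?', with 'near' read as
    'located in the same locus as me'."""
    here = entity_locus[my_entity]
    return sorted(e for e, l in entity_locus.items()
                  if l == here and e != my_entity)
```

The broader readings of "near me" (neighbourhood, absolute position) would substitute an adjacency lookup or a metric test for the equality check on loci.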
Absolute positioning can also be used if you have no idea of the semantic identity or of the set of space concerned by the position you are asking about (particularly useful outdoors).
Finally, though this should normally be avoided, the sensors can be targeted directly (asking about the relative positioning or identification of located entities).
The path of the request is nearly symmetric to that of a direct location query: downward on the locus column, then upward on the entity column.
As in direct location queries, the response can be of different types, fulfilling the specific needs of the asking application. It may be some basic identification of the entities matching the criteria formulated in the request, or directly some high-level semantic definition of them, depending on the capabilities of the underlying layers.