In this video William describes how technology can support the exploration of the city in an interactive and informative, yet unobtrusive, way. The project combines dialogue-based systems with location-aware technology.
The SpaceBook project is an EU-funded project bringing together experts in location-aware devices, positioning technology, GIS and natural language processing. The ambition is to deliver the range of services and information that a tourist might typically need in order to explore and learn about their environment: information about transport services, 'my nearest' queries, historical information about any number of places and features, entertainment venues, hotels and restaurants, together with directions to reach them. Precise 3D modelling of geographic space is fundamental to delivering such augmented information, and determining the tourist's precise location at any given time is equally critical to understanding what is in their field of view. Unique to SpaceBook is the delivery of these services solely through dialogue-based interaction, leaving the tourist hands-free and eyes-free to explore the city as they wish, without the need for touch-screen interaction, or to read text or interpret maps.
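The idea of combining the tourist's position and heading to decide what lies in their field of view can be sketched in a few lines. The following is an illustrative example only, not SpaceBook's implementation: it computes the great-circle bearing from the user to a point of interest and checks it against a horizontal viewing cone (the 120° default is an assumption).

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def in_field_of_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=120):
    """True if the point of interest lies within the user's horizontal field of view."""
    b = bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
    # Smallest angular difference between the bearing and the user's heading
    diff = abs((b - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

In practice the project's 3D city model would also be needed, since buildings can occlude a point that falls inside this purely angular cone.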
At its core, SpaceBook uses speech recognition techniques to interpret the dialogue, and from this to infer the goals of the tourist. These goals are the basis on which relevant information is retrieved and then converted into a form that can be synthesised as speech back to the tourist. SpaceBook must strike a balance between providing sufficient information for any given service and being able to supply more detail where requested. Whilst the focus of SpaceBook is the urban tourist, this type of technology could be adapted to rural environments, and to users for whom being hands- and eyes-free is critical. Where sufficient locational accuracy can be determined, and where the description of the urban geography is sufficiently precise and rich, SpaceBook also has the potential to be used by the visually impaired.
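The pipeline described above — recognised speech, inferred goal, information retrieval, spoken response — can be sketched as a minimal dialogue loop. Everything here is hypothetical: the intent categories, the place data and the keyword matching merely stand in for SpaceBook's actual language understanding and GIS back end.

```python
# Illustrative stand-in data; a real system would query a GIS service.
POI_DB = {
    "cafe": ["Caffe Nero on Princes Street", "The Elephant House"],
    "museum": ["National Museum of Scotland"],
}

def infer_goal(utterance):
    """Crude keyword-based intent inference, standing in for real NLU
    applied to the speech recogniser's output."""
    for category in POI_DB:
        if category in utterance.lower():
            return category
    return None

def respond(utterance, verbose=False):
    """Retrieve matching places and phrase them for speech synthesis.
    `verbose` mimics supplying more detail when the tourist asks for it."""
    goal = infer_goal(utterance)
    if goal is None:
        return "Sorry, could you rephrase that?"
    places = POI_DB[goal] if verbose else POI_DB[goal][:1]
    return f"Your nearest {goal} options: " + "; ".join(places) + "."
```

The `verbose` flag reflects the balance mentioned above: a terse answer by default, with more detail available on request, since everything must fit into spoken output rather than a screen.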
The project is coordinated by Michael Minock at Umeå University in Sweden, with partners from the School of GeoSciences and School of Informatics at the University of Edinburgh; the Interaction Lab at Heriot-Watt University; the Computer Laboratory NLIP Group at the University of Cambridge, UK; the School of Computer Science and Communications at KTH and Liquid Media, Sweden; and the Artificial Intelligence Group at Pompeu Fabra University in Spain.
Find out more: