A recent article by Alex Iskold brilliantly captures the gap between where we imagine semantic search should be and where it actually is. Even if a semantic engine were trying to knock Google off the top spot, what he highlights is that it would be an unnecessary exercise.
Google does its thing very well. Few would argue with that. Alex suggests that semantic search should do something completely different…
To really showcase semantic search, these companies need to come up with innovative UIs that will help users to understand the power that is being put at their fingertips.
I also think it should display results differently: the interface should encourage an exploratory experience and allow lateral thinking to occur during research. Iskold states we should move away from the search box, as this is the wrong type of input for the user.
Peter Morville is currently producing a book on search patterns, and this will also highlight how faceted navigation enables the user to experience a different search journey. He has made the slides available here on Flickr.
If you look at the core of the problem, what should be shown is the relationships between items that give relevance to the user’s query. Somehow a system needs to be designed that will reveal elements with relationships and connections. Ontologies could be built, linking different data sets as if they were relational databases.
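To make that concrete, here is a toy sketch in Python using the rdflib library. The entities and predicates are my own inventions rather than an established vocabulary, but the pattern is the point: once two data sets share an identifier for the same entity, you can walk across them much like joining tables.

```python
# A toy sketch of an ontology linking two "data sets" via a shared entity.
# The namespace, entities and predicates are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Facts that might come from an encyclopedia-like data set
g.add((EX.GeneralRelativity, RDF.type, EX.Theory))
g.add((EX.GeneralRelativity, EX.proposedBy, EX.AlbertEinstein))
g.add((EX.AlbertEinstein, RDFS.label, Literal("Albert Einstein")))

# Facts that might come from a photo-sharing data set
g.add((EX.Photo42, RDF.type, EX.Image))
g.add((EX.Photo42, EX.depicts, EX.AlbertEinstein))

# Because both sets refer to the same entity, a query can walk
# from a photograph to a theory -- the "relational database" effect.
for photo, _, person in g.triples((None, EX.depicts, None)):
    for theory, _, _ in g.triples((None, EX.proposedBy, person)):
        print(f"{photo} is related to {theory} via {person}")
```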
If a user found truly relevant and accurate information around an entity, a business objective could be fulfilled too: genuinely targeted advertising. Users are often more forgiving of adverts if what they see is relevant.
Images as a search device
But the challenge is in the interface: how do you convey a fluid ‘noosphere’ visually? It has to be visual because the content types are so varied, and to scan and associate quickly, images allow instant recognition.
Blaise Aguera y Arcas is an architect at Microsoft Live Labs, the architect of Seadragon, and the co-creator of Photosynth.
Recently Mike Laurie wrote a post about the use of video, virals and the like, and he ended the post by highlighting the Photosynth software developed by Microsoft. The video shows the potential of this software, and it is impressive. One quality is the system’s ability to collate images from Flickr that have been tagged with recognised terms and build those images around a 3D model.
Another element is that some of these images have a resolution of 300 megapixels, allowing a user to zoom right into them and read or see the contents.
Photosynth showing Flickr images mapped to 3D models of the subject
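It is worth pausing on how that zooming can work at all. As a back-of-envelope sketch (my own arithmetic, not Microsoft’s published design), a Seadragon-style viewer builds an image pyramid that halves the picture at each level and only streams the tiles covering the current view:

```python
import math

def pyramid_levels(width: int, height: int) -> int:
    """Number of levels in a halve-each-time image pyramid,
    from the full image down to a single 1x1 pixel at the top."""
    return math.ceil(math.log2(max(width, height))) + 1

# A 300-megapixel image, e.g. 20000 x 15000 pixels
w, h = 20000, 15000
print(pyramid_levels(w, h))  # 16 levels

# At any moment the viewer fetches only the small tiles covering
# the viewport at the current level, so zooming "right in" costs
# roughly the same bandwidth as viewing a thumbnail.
```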
If you think about this collation of files that have been tagged by a massive variety of users (from Flickr in this instance), it seems to be a big step towards Tim Berners-Lee’s vision of the semantic web.
Sure, these are only pictures, not documents; however, when you think about the metadata in each file and how it can be organised to conform to a widely accepted mental model, this is really exciting.
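As a hedged sketch of what “conforming to a mental model” might mean in practice (the vocabulary and tags below are invented), the messy free-form tags users enter could be folded into a small controlled set of concepts:

```python
# Hypothetical mapping from messy, user-entered tags to a controlled
# vocabulary -- one way file metadata could be made to conform to a
# shared mental model.
SYNONYMS = {
    "notredame": "Notre-Dame de Paris",
    "notre dame": "Notre-Dame de Paris",
    "cathedral": "Cathedral",
    "paris": "Paris",
}

def normalise(tags: list[str]) -> set[str]:
    """Map raw tags onto canonical concepts, dropping unknowns."""
    return {SYNONYMS[t.lower()] for t in tags if t.lower() in SYNONYMS}

photo_tags = ["NotreDame", "Paris", "holiday2008", "Cathedral"]
print(normalise(photo_tags))
# {'Notre-Dame de Paris', 'Paris', 'Cathedral'}  (set order may vary)
```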
An interactive mental model
Imagine an interactive concept model built around a physical object: you could extract the related items and draw relationships between interlinked entities. I recently designed a taxonomy for a science magazine, and it had to encompass every type of science from physics to psychology to civil engineering. How would you draw relationships between such different fields?
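One approach, sketched below with Python’s networkx library (the linking concepts are illustrative examples, not the magazine’s actual taxonomy), is to connect fields through the concepts they share rather than forcing everything into a single hierarchy:

```python
import networkx as nx

G = nx.Graph()
# Fields connected through shared concepts, not a strict hierarchy.
G.add_edge("Physics", "Civil engineering", concept="structural mechanics")
G.add_edge("Physics", "Psychology", concept="acoustics and perception")
G.add_edge("Psychology", "Civil engineering", concept="ergonomics")

for a, b, data in G.edges(data=True):
    print(f"{a} <-> {b}: linked by {data['concept']}")
```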
Well, what I like about this model is that it would be a visual representation of a knowledge landscape (in this case using images) that could easily be extended with video, audio and standard web pages. It would also be a 3D representation that would encourage digital discovery.
At the moment it is perhaps too flexible for the average user, but give it time. As more digital natives reach maturity and form the majority of the browsing public, this kind of interface will not faze them at all.
Google’s repository of human-entered queries
We already have a vast amount of data about how documents relate to one another: search engines keep log files recording which keywords were entered and which results proved most relevant.
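A hedged sketch of how those logs could seed the relationship data (the log format and entries are invented): results that satisfy the same queries are implicitly related, which is exactly the kind of edge a visual model could draw.

```python
from collections import Counter, defaultdict

# Invented log entries: (query, result the user actually clicked)
log = [
    ("general relativity", "wikipedia.org/Einstein"),
    ("general relativity", "wikipedia.org/Spacetime"),
    ("einstein biography", "wikipedia.org/Einstein"),
    ("curved spacetime", "wikipedia.org/Spacetime"),
]

# Group the clicked results by query...
clicks_by_query = defaultdict(set)
for query, result in log:
    clicks_by_query[query].add(result)

# ...then count how often two results satisfy the same query.
related = Counter()
for results in clicks_by_query.values():
    for a in results:
        for b in results:
            if a < b:  # count each unordered pair once
                related[(a, b)] += 1

print(related.most_common(3))
# [(('wikipedia.org/Einstein', 'wikipedia.org/Spacetime'), 1)]
```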
Around any search, even one for a mathematical equation, there is a physical object that can be related to it. Be it the theory’s creator, a university, or even the theory itself, the physical entity (or known concept) could form the basis of the visual model.
If we truly want to move towards a semantic web, then this type of interface would offer a rich, interactive and flexible approach to showing layers of detail that would encourage digital discovery and serendipitous finding.
Producing a list of the most relevant links is still a compromise compared with what we could display to our users. It could be far better to show a knowledge landscape for each query, with paths to other areas of knowledge and layers of related data that can be sorted through a series of user interactions.
In a way, Microsoft has produced a microfiche for the 21st century; the difference is that the librarians who have tagged it are now the users of the system, and they create the content. The machine has enabled the creation of something that is entirely user-generated, but it will also help organise it, harnessing the huge potential of the world’s knowledge.
Consider how Google is also integrating Wikipedia entries into its maps and geo-locating photographs. By using the map as the mental model in this case, it is merely superimposing extra data types directly onto the two-dimensional base.
Perhaps this is the greatest challenge for interaction designers and visual thinkers: to visually represent the knowledge available from an interlinked network of sources that are authorities on a subject area. As Alex Iskold points out, we are far from the solution yet.