Maps are becoming increasingly popular on the web, either embedded in existing pages or as dedicated map pages, and new tools featuring advanced analytical functionality are emerging alongside them. This growth in web maps has created a great need for geographical data, ranging from basic information, such as administrative boundaries, to more detailed information, such as hiking trails. In addition there is the wider information space of geo-referenced content, such as images, people and similar.
In response to this need, several large map sites, such as OpenStreetMap (OSM) and Google Map Maker, offer the ability to add, change and even delete the geographical information shown on the map. Google Map Maker bases itself on commercially produced information, either freely available or purchased, in addition to contributions from users; OpenStreetMap, on the other hand, relies solely on user contributions. Both approaches have to deal with the accuracy of the data. For commercial data the accuracy is believed (or guaranteed) to be at some level, often acceptable for the regular user; for user-contributed data no accuracy is stated, yet it is generally believed to be correct. This accuracy problem is emerging quite rapidly, as the screenshots below indicate. Which map holds the correct information? Probably the one with the most detail and the least jagged lines – and yes, the N50 map is the most semantically correct, which is not surprising since the Norwegian Mapping Authority is responsible for it. But how should one know this by just looking at, for instance, the OpenStreetMap? Or, even worse, the Yahoo Map?
Here is where the legacy of maps comes into play. Maps enjoy enormous trust among their users – people generally trust their lives to maps and strongly believe that what the map depicts is the truth. That may not always be the case. The example shown here is quite harmless, but what if it were a reef in the ocean that looked slightly jagged on the map, but in real life stretched further? Such accuracy is not communicated in today's maps, at least not as explicitly as it should be. I believe the accuracy is for the most part known, probably not in centimetres or metres, but at least in the coarse sense of correct/medium/false or similar. The advent of participatory map-making communities and tools also works in favour of this, since the community as such could rank the semantic validity of the information (this may already exist in, for instance, OSM?).
So, I suggest that more effort be put into explicitly communicating the accuracy of the information presented in maps.
Some solutions try to avoid inaccuracy in the information altogether. This is already done at large scale by aggregating data (arithmetic or weighted averages; a rough sketch of this follows below), by filtering on source (user contributions vs. commercial data), or through ratings from users. Inaccuracy is nevertheless inevitable, so allowing it (or embracing it) is better than trying to avoid it. If one is to allow for maps with possibly large inaccuracies, one must have a way of handling them and presenting them to the user in such a way that the user benefits from it, and not the opposite. I believe one such method lies in the presentation of the data: if it is made absolutely clear that the data may be inaccurate, the user can freely decide what to do with it (trust it, or just keep it in mind). Two sketches of possible techniques follow below.
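As a rough illustration of the aggregation approach, the sketch below combines several reported positions for the same feature into a single weighted estimate. The coordinates, weights and sources are all made up for the example; a real system would derive the weights from provenance or user ratings.

```python
def weighted_position(reports):
    """Combine several reported positions for the same feature into one
    estimate, weighting each report by the trust placed in its source.
    Naive planar averaging: fine for nearby points, not across the antimeridian."""
    total_weight = sum(weight for _, _, weight in reports)
    lat = sum(lat * weight for lat, _, weight in reports) / total_weight
    lon = sum(lon * weight for _, lon, weight in reports) / total_weight
    return lat, lon

# Hypothetical reports of the same summit: two community edits and one
# commercial point, with the commercial source weighted higher.
reports = [
    (63.4305, 10.3951, 1.0),  # user contribution
    (63.4309, 10.3948, 1.0),  # user contribution
    (63.4307, 10.3950, 3.0),  # commercial source
]
print(weighted_position(reports))  # -> roughly (63.4307, 10.3950)
```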
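For the presentation side, one conceivable technique (my own hypothetical sketch, not something the map sites above actually do) is to carry an explicit accuracy attribute with each feature, so a renderer could, say, draw a dashed outline or an error buffer around uncertain geometry:

```python
# A GeoJSON-style feature with hypothetical accuracy metadata attached.
# "accuracy_m" and "confidence" are invented property names for this sketch.
trail = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[10.3951, 63.4305], [10.4010, 63.4332]],  # lon, lat
    },
    "properties": {
        "name": "hiking trail",
        "source": "user-contributed",  # provenance: community vs. commercial
        "accuracy_m": 25.0,            # estimated positional error in metres
        "confidence": "medium",        # coarse correct/medium/false rating
    },
}

# A renderer could then vary the line styling with the confidence level, e.g.:
style = {"correct": "solid", "medium": "dashed", "false": "dotted"}
print(style[trail["properties"]["confidence"]])  # -> "dashed"
```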