Cartography 2.0: a response to participatory GIS

Maps are becoming increasingly popular on the web, either embedded in existing pages or as dedicated map pages, and new tools featuring advanced analytical functionality are emerging as well. This growth in web maps has created a great need for geographical data, ranging from basic information, such as administrative boundaries, to more detailed information, such as hiking trails. On top of this comes the information space of geo-referenced content, such as images, people and similar.

In response to this need, several large map sites, such as OpenStreetMap (OSM) and Google Map Maker, offer the ability to add or change (and even delete) the geographical information seen in the map. Google Map Maker bases itself on commercially produced information, either freely available or purchased, in addition to contributions from its users; OpenStreetMap, on the other hand, bases itself solely on user contributions. Both approaches have to deal with the accuracy of the data. For the commercial data, the accuracy is believed (or guaranteed) to be at some level, often acceptable for the regular user; for the user-contributed data, no accuracy is stated, yet it is believed to be correct. This problem of accuracy is emerging quite rapidly, as the screenshots below indicate. Which map holds the correct information? Probably the one with the most detail and the least jagged lines. And yes, the N50 map is the most semantically correct, which is not surprising, since the Norwegian Mapping Authority is responsible for it. But how should one know this just by looking at, for instance, OpenStreetMap? Or, even worse, Yahoo Maps?

Comparison between different web maps. From the left: Yahoo Maps, OpenStreetMap, N50 (Norwegian Mapping Authority)

Here is where the legacy of maps comes into play. Maps enjoy enormous trust among their users: people generally trust their lives to maps and strongly believe that what the map depicts is the truth. However, that may not always be true. The example shown here is quite harmless, but what if it were a reef in the ocean that was slightly jagged in the map, yet in real life stretched further? Such accuracy is not communicated in today's maps, at least not as explicitly as it should be. For the larger part I believe the accuracy is known, probably not in centimetres or metres, but at least in the sense of correct/medium/false or similar. The advent of participatory map-making communities and tools also works in favour of this, as the community could rank the semantic validity of the information (this may already exist in, for instance, OSM?).

So, I suggest that more effort be put into explicitly communicating the accuracy of the information presented in maps.

One way to support this is to avoid inaccuracy in the information altogether. This is already done at a large scale by aggregating data (arithmetic or weighted averages, etc.), filtering by source (user contributions vs. commercial data), or rating by users. However, some inaccuracy is inevitable, so allowing it (or embracing it) is better than trying to avoid it. If one is to allow maps with possibly large inaccuracies, one must have a way of handling them and presenting them to the user in such a way that the user benefits from them, not the opposite. I believe one such method lies in the presentation of the data: if it is utterly clear that the data may be inaccurate, then the user can freely decide what to do with it (trust it, or just keep it in mind, etc.).
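To make the aggregation idea concrete, here is a minimal sketch, with made-up coordinates and trust weights, of how several user-contributed positions for the same feature could be combined into a weighted average, with the spread of the contributions kept as an uncertainty radius that the map could render explicitly (say, as a shaded circle):

```python
import math

# Hypothetical user-contributed positions (lat, lon) for the same map feature.
contributions = [
    (63.4305, 10.3951),
    (63.4309, 10.3946),
    (63.4301, 10.3958),
]
# Trust weights, e.g. derived from community ratings of each contributor.
weights = [0.5, 0.3, 0.2]

# The weighted average is the position drawn on the map.
lat = sum(w * p[0] for p, w in zip(contributions, weights))
lon = sum(w * p[1] for p, w in zip(contributions, weights))

# The spread of the contributions gives a rough uncertainty radius
# (in degrees here) that could be visualized around the feature.
spread = max(math.hypot(p[0] - lat, p[1] - lon) for p in contributions)

print((round(lat, 5), round(lon, 5)), spread)
```

The point is not the arithmetic but that the spread survives the aggregation, so the inaccuracy can be communicated instead of silently averaged away.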

My master project: the what and why

Atle came up with, and acted on, a great idea: to write a few words on what he does in his master project, and some background, aimed at the regular human being (i.e. not very nerdy). The pros far outweigh the cons, so I will try to do the same. Hopefully I will post updates on the project at (ir)regular intervals; however, I will not promise this.

So, what is the title?

“Quality aspects of combined cartographic maps and conceptual models”

Fancy title, eh? The motivation for the title is to be as general as possible while still covering the topics I am interested in. Some background is probably needed here. Conceptual modelling is a discipline within computer science which, roughly said, focuses on software and enterprise modelling. Well-known examples of such modelling techniques are UML (Unified Modelling Language), DFD (Data Flow Diagrams), ER (Entity-Relationship) and BPMN (Business Process Modelling Notation), to mention a few. These modelling techniques capture and describe an abstract representation of some reality, and are very useful for many purposes, especially for making complex problems less complex (through abstraction).

Cartography is, roughly said, the science of map design, primarily of geographic maps. It is a very old science with several hundred years of history, and this legacy has brought a strong emphasis on paper representation and highly traditional geographic maps. In recent times, however, computer-supported map tools have become the de facto standard for making and (often) viewing maps, so cartography is undergoing a change. In response to this change, new or re-engineered techniques are needed. One such change that I believe is necessary concerns the understanding of quality in cartography. The phrase "understanding of quality" is somewhat complex, but the general idea is that cartography needs a set of general, comprehensive and accepted guidelines that guide the making and evaluation of maps. One attempt to create such a framework of guidelines was undertaken last fall, and the result was MAPQUAL. MAPQUAL attempts to adapt the quality framework SEQUAL from conceptual modelling to a cartographic context.

So, that was a bit of theoretical background. Over to my master project.

Conceptual modelling has long recognized the inherent importance of geographical location (Zachman). However, little has been done to exploit this so far. The underlying goal of the project is to experiment with models (maps and conceptual models) that exhibit both geographical and conceptual information and, from that, design a set of guidelines that support such models.

Pfuh. So, on to the practical stuff.

The case that I have chosen is in collaboration with COSTT, a research project that focuses on transparency in the health sector. The case focuses on visualizing relevant information, supporting self-coordination for the user.

A typical example of such an information need: a doctor (the user) has a set of pre-defined tasks during a day (a schedule) and a set of patients that are of particular interest (responsibility for them, scientific interest, etc.). Both the patients and the tasks have a location in the hospital, and so does the doctor. Patients also have a state they are in (say, healthy, crashing, etc.), and tasks have much non-geographical information attached to them (whether equipment is ready, the operating room is ready, all staff members are ready, etc.). As you can see, this information need exhibits both geographical and conceptual information. And although the geographical information is a position in a geography, the actual need is, for instance, "how fast can I move from here to the location of Patient 1?", so a temporal aspect is also included.
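The travel-time question above can be sketched as a shortest-path computation. This is a minimal illustration, not part of the actual project: the corridor graph, node names and walking times are all made up, and a real system would derive them from the hospital's floor plans.

```python
import heapq

# Hypothetical hospital corridor graph; edge weights are walking times in seconds.
corridors = {
    "doctor_office": [("ward_a", 40), ("elevator", 20)],
    "elevator":      [("ward_b", 60), ("doctor_office", 20)],
    "ward_a":        [("patient_1", 30), ("doctor_office", 40)],
    "ward_b":        [("patient_1", 90), ("elevator", 60)],
    "patient_1":     [("ward_a", 30), ("ward_b", 90)],
}

def travel_time(graph, start, goal):
    """Dijkstra's algorithm: seconds needed to walk from start to goal."""
    queue = [(0, start)]          # priority queue of (elapsed time, node)
    best = {start: 0}             # best known time to each visited node
    while queue:
        time, node = heapq.heappop(queue)
        if node == goal:
            return time
        for neighbour, cost in graph[node]:
            new_time = time + cost
            if new_time < best.get(neighbour, float("inf")):
                best[neighbour] = new_time
                heapq.heappush(queue, (new_time, neighbour))
    return None  # goal not reachable

print(travel_time(corridors, "doctor_office", "patient_1"))  # 70, via ward_a
```

Answering "how fast can I reach Patient 1?" then becomes one query against this graph, which is exactly the kind of temporal information that could be overlaid on the conceptual and geographical views.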

The idea is to visualize as much of this information space as is relevant to the individual user, and preferably to come up with some good ideas for visualizing such information.

Some more practical aspects of the project: the experiments are probably going to be loose proof-of-concept implementations, but mostly paper-prototype-like methods. However, some consideration will be given to the geographical information space and the temporal aspects needed; this is most likely to be solved using PostGIS with some extensions.

I can easily see that this project covers aspects that are not very popular at the moment; still, I believe the ideas can be worth something to someone :)

Not many fancy pictures and diagrams yet, but some graphics will probably come :) I realize that this post was a bit ad hoc, and maybe it lacks some examples or topics that could be of interest; however, those may spawn new posts, which is a good thing :)

For bleeding-edge notes on my project you can browse my wiki, which I use as a virtual notepad, so do not trust what you read there :)

So, now for your thoughts. Was this text very complicated (in language and/or content)? Is the idea any good? Do you have any other examples of information that exhibits both geographical and conceptual information?