Cartographic adaptation through eye tracking and deep learning: Gaze-Aware Interactive Map System (GAIMS)

Adaptive map interfaces that learn from their users' behaviour can be achieved by integrating eye movements both as a tool to interact with maps and as an input for map design. Identifying the map features within attentional hotspots derived from eye tracking data provides valuable insights into map users' cognitive processes and into how map design influences spatial cognition. One step further would be not only identifying the map features that are looked at but also extracting other map features with similar vector characteristics, for example with deep learning, so that features not yet looked at can be highlighted. Achieving this user-driven interactivity during map reading tasks (e.g., geovisual exploration, information extraction, visual search, spatial memory) would result in more personalised map interfaces. Such interfaces can support individuals' cognitive processes: understanding the geospatial data itself, discovering the relationships and patterns within it, guiding exploratory map reading, and providing efficient visual search strategies. This would be particularly useful when spatial data is presented as map-like but unstructured representations, in which essential elements such as scale, symbology, annotations, and attributes may be missing, as is often the case in the defence and security sectors. However, establishing such gaze-aware cartographic interactivity is not straightforward, as it requires combining eye tracking, advanced vector processing (for example with deep learning), and careful map design.
This ongoing research explores the possibilities of providing in-situ or semi-real-time revisualizations of 2D static vector maps based on collected eye movement data. In this context, the study is carried out in two phases: (i) learning about map users' attentional behaviour towards geovisualizations and the influence of highlighting on making sense of map content, and (ii) linking gaze data with vector data in geovisualizations and utilizing users' eye movements to interact with the geovisualizations. The research is currently in the first phase, in which eye tracking user experiments are being conducted to study the effect of highlighting on users' attentional behaviour before the map refinement system is designed.
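As a concrete illustration of how attentional hotspots can be derived from raw gaze data, the sketch below uses a simple dispersion-based (I-DT) fixation detection. The sample format, thresholds, and synthetic gaze trace are illustrative assumptions, not values or data from this study.

```python
# Minimal sketch of dispersion-based (I-DT) fixation detection, one common way
# to derive attentional fixations (and from them, hotspots) from raw gaze
# samples. Thresholds are hypothetical, not taken from the study.

def detect_fixations(samples, max_dispersion=25.0, min_duration=100.0):
    """samples: list of (t_ms, x_px, y_px); returns (cx, cy, duration_ms)."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while spatial dispersion stays within the threshold.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration and j > i:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), duration))
            i = j + 1
        else:
            i += 1
    return fixations

# Synthetic gaze trace: a stable cluster around (100, 100), then a saccade.
gaze = [(t * 10, 100 + (t % 3), 100 + (t % 2)) for t in range(30)]
gaze += [(300 + t * 10, 400 + t * 5, 300) for t in range(5)]
print(len(detect_fixations(gaze)))  # → 1 (the stable cluster)
```

In a full pipeline, the fixation centroids would be aggregated (e.g., with kernel density estimation) into the attentional hotspots used for revisualization.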
We describe the Gaze-Aware Interactive Map System (GAIMS), a map refinement system that learns from and adapts to its users' eye movement behaviour and revisualizes the map content based on this individual attentional data; the initial ideas are presented in Keskin and Kettunen (2021). The design of GAIMS takes the following steps into consideration (Figure 1):
• Automated eye tracking analysis on the go: collecting eye movements and calculating hotspots while the user interacts with map stimuli.
• Linking eye movements to the vector geometry of the map stimuli.
• Selecting linear and polygon features throughout the whole map based on the similarity of their vector characteristics to the features within hotspots: vector data processing with deep learning (e.g., RNN, CNN, GCNN). In this context, the training data should be sufficient and its metrics specified.
• Revisualizing the map content: highlighting the selected features based on hotspots.

The motivation behind this research is to improve human-computer interaction during the exploration of geospatial data and similar map reading processes. The innovative aspect is the planned connection between the future GAIMS system and GeoAI, more specifically vector data processing supported by deep learning. Leveraging the latest technologies in eye tracking and deep learning is promising for helping map readers find new viewpoints for understanding map content.
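The linking and selection steps above could be sketched as follows, with a crude geometric-descriptor distance standing in for the planned deep-learning similarity model (RNN/CNN/GCNN). All geometries, descriptors, and tolerances here are hypothetical and purely illustrative.

```python
# Hypothetical sketch: link a gaze hotspot to the polygon it falls in, then
# select other polygons with similar shape descriptors. A simple descriptor
# distance stands in for the planned deep-learning similarity model.
import math

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def descriptor(poly):
    """Crude shape descriptor: (area, perimeter, vertex count)."""
    area, perim, n = 0.0, 0.0, len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        area += x1 * y2 - x2 * y1   # shoelace formula
        perim += math.hypot(x2 - x1, y2 - y1)
    return (abs(area) / 2.0, perim, float(n))

def similar_features(hotspot, polygons, tol=0.3):
    """Indices of polygons whose descriptors lie within a relative tolerance
    of the polygon under the hotspot (the 'looked-at' feature)."""
    seed = next(i for i, p in enumerate(polygons) if point_in_polygon(hotspot, p))
    d0 = descriptor(polygons[seed])
    hits = [i for i, p in enumerate(polygons)
            if max(abs(a - b) / max(abs(b), 1e-9)
                   for a, b in zip(descriptor(p), d0)) <= tol]
    return seed, hits

# Three unit squares and one large rectangle; hotspot over the first square.
polys = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],
    [(5, 5), (6, 5), (6, 6), (5, 6)],
    [(10, 0), (11, 0), (11, 1), (10, 1)],
    [(20, 20), (30, 20), (30, 35), (20, 35)],
]
seed, hits = similar_features((0.5, 0.5), polys)
print(seed, hits)  # → 0 [0, 1, 2]: the looked-at square plus the similar ones
```

In GAIMS, the `hits` set would drive the revisualization step, i.e., highlighting features the user has not yet looked at but that resemble those within attentional hotspots.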