As the Nordic Observatory for Digital Media and Information Disorder (NORDIS) consortium reaches its two-year milestone, the hub’s new leader, Morten Langfeldt Dahlback, head of innovation and technology at the Norwegian fact-checker Faktisk.no, offers insight into NORDIS’ endeavours to date and plans for the upcoming years.
NORDIS is one of the regional hubs of the European Digital Media Observatory (EDMO), a network funded by the European Commission, and the only Nordic hub within it. The hub’s mission is to develop effective strategies, models, and practices to combat digital misinformation and bolster media literacy among citizens in Nordic welfare states. Leadership of NORDIS moved to Norway when Morten L. Dahlback took over in August. Under the consortium, researchers and fact-checkers from Denmark, Norway, Sweden, and Finland collaborate to strengthen their joint work.
We asked Morten L. Dahlback, who has led the fact-checking and communication work package in NORDIS since 2021, how he sees the future of NORDIS’ work.
A Nordic joint project
Anna Pacholczyk (AP): In European comparisons, Nordic countries are often regarded as having exceptionally robust media systems and as being highly resilient against disinformation. However, the landscape is evolving rapidly, with the digital realm and social media playing a pivotal role in facilitating the rapid spread of misinformation, even within the Nordic region. Two years into its operation, what insights has the NORDIS hub gained regarding the prevalence of information disorder in the region?
Morten Langfeldt Dahlback (MLD): As you mentioned, the Nordic countries are generally perceived as particularly resilient to misinformation, within both the European and global contexts. At NORDIS, over the past two years, we have gathered some intriguing insights into the information disorder landscape in the Nordics.
Aarhus University, NORDIS’s research partner, delved into the emotional aspects of misinformation spread during the Covid-19 pandemic, focusing on Twitter data. Their research revealed that tweets regarding the pandemic and responses to the crisis were often saturated with emotions like anger, fear, and sadness. They hypothesised that this prevailing negative emotional tonality within the misinformation landscape played a role in amplifying the dissemination of further misinformation. This finding aligns with the experiences of fact-checkers, who noted a surge in misinformation during times of crisis or in periods marked by contentious issues.
The funding period of NORDIS coincided partly with the pandemic, which allowed us also to observe instances where people took to the streets to protest against Covid-19 policies based on misleading narratives, which were more pronounced during the pandemic in Norway and Denmark.
“The funding period coincided partly with the pandemic, which allowed us also to observe instances where people took to the streets to protest against Covid policies based on misleading narratives.”
Furthermore, our fact-checkers identified specific topics that were generating more misinformation, with these subjects evolving over time. Initially, there was a significant surge in misinformation surrounding the Covid-19 pandemic and vaccines. Subsequently, as the conflict in Ukraine unfolded, we anticipated an upsurge in misinformation concerning the war itself. However, this increase did not turn out to be as substantial as initially expected. What we did see, however, was misleading information surrounding energy prices and inflation, particularly claims attributing rising energy prices to the war and supply chain disruptions. Similar claims were found across all the Nordic countries.
We also conducted a joint investigation into misinformation about Child Protective Services in Norway, Sweden, Denmark, and Finland. Notably, a common narrative emerged, alleging that these services kidnapped or abducted children from their parents. In Norway, the prevalent misinformation has sometimes implied that Barnevernet targets conservative Christian families, and at other times that the CPS is generally unjust. These narratives have, naturally, been bolstered by the fact that Norway has lost several cases before the European Court of Human Rights (ECHR) concerning children being removed from their parents by the CPS. In Sweden, Denmark, and Finland, by contrast, the narrative was that these services focused especially on Muslim families. In the case of Sweden, such a misinformation campaign was run by Egyptian blogger Mustafa Al-Sharqawi.
I think it is important to acknowledge that we have limited insight into the true extent of misinformation in the Nordic countries, and this is due to several contributing factors. In Sweden and Finland, funding for fact-checking and independent efforts against misinformation is limited and on a small scale, making systematic assessment challenging. Although Sweden has the governmental Psychological Defence Agency working on misinformation, it is relatively recent and yet to provide comprehensive insights. Finland faces a similar issue with limited fact-checking support. Furthermore, universally, there is a lack of access to data from major online platforms where misinformation spreads, hindering our ability to gauge the true extent of the problem. That being said, it’s very important that observers and decision-makers do not infer from the fact that we’re not detecting large amounts of misinformation that there is little misinformation. The troubling thing, from our perspective, is that we do not know how much misinformation there really is.
Towards a code of practice on disinformation
AP: NORDIS partners have collaborated to offer guidance on the most effective approaches for monitoring, understanding, and mitigating information disorders within the Nordic countries. Drawing from your research and findings, what strategies or measures should be considered by different stakeholders to prevent the spread of misinformation?
MLD: In the realm of misinformation and its mitigation, various stakeholders play pivotal roles. Policymakers, in particular, have the ability to influence this landscape positively. One crucial area of focus is the bolstering of digital information literacy. This field remains relatively underdeveloped, not only in the Nordic countries but in many other regions. While some notable efforts are undertaken by actors like those within NORDIS, a more substantial commitment to strengthening digital information literacy across the region is needed.
Another significant step policymakers can take is the implementation of the Code of Practice on Disinformation as part of the Digital Services Act (DSA). I think it’s worth emphasising that countering and preventing the spread of misinformation through regulatory means is a potent tool.
Turning attention to the tech companies and the media sector, we emphasise the importance of granting researchers and journalists greater access to data from large online platforms. While the Code of Practice stipulates that online platforms provide real-time data access to researchers through APIs, we advocate extending this privilege to at least some journalists as well. Journalistic methods are designed for quick and reliable results, enabling faster responses to misinformation campaigns and timely debunking.
As for the media sector, we call for greater caution in publishing unconfirmed findings. Recent events, such as the coverage of the Gaza hospital explosion, serve as a reminder of the consequences of hasty reporting based on uncertain claims. In the case of the explosion at the Al-Ahli Arab Baptist Hospital in Gaza, many media outlets, including in Norway, reported claims from the Palestinian side that the explosion was caused by an Israeli attack. This claim was unconfirmed at the time and, at least at the present moment, seems more likely than not to have been false. More cautious reporting is therefore essential to maintain public trust and minimise the prevalence of misinformation.
Lastly, we recommend the creation of dedicated verification teams, such as Faktisk Verifiserbar or BBC Verify, either as collaborative projects between media companies or as specialised units within larger media organisations. This measure would not only protect against the dissemination of unverified content but could also foster the production of more compelling journalism.
Increased monitoring of platforms
AP: Recent years have witnessed a growing weaponisation of information, particularly evident in the context of the Russian–Ukrainian war. The ease of content manipulation through advanced technology, and the amplification of false narratives on social networks, have created formidable challenges for fact-checkers. At NORDIS, you have conducted an evaluation of the daily practices and the exact challenges and needs of Nordic fact-checkers. What findings have emerged as a result of this in-depth analysis? Have you identified any shared concerns and issues among other fact-checking organisations worldwide?
MLD: It’s important to note that the evaluation we’re discussing took place nearly two years ago, and much has changed since then. However, there are enduring insights that hold substantial relevance for fact-checkers and continue to guide our endeavours at NORDIS.
Among the foremost concerns for fact-checkers is the ability to enhance their online monitoring capabilities. Back in 2021, we were mostly relying on access to Facebook’s CrowdTangle tool, which proved valuable for monitoring content on the Facebook platform. However, the landscape has shifted, with information consumption patterns increasingly favouring audio and visual content, which presents a more challenging monitoring scenario. Already in 2021, when the evaluation was done, most of the fact-checkers expressed a need to monitor audiovisual platforms such as TikTok, YouTube, and Telegram. The need to monitor the first two platforms has, if anything, intensified, although I think the monitoring of Telegram has seen significant improvements.
Among the foremost concerns for fact-checkers is the ability to enhance their online monitoring capabilities.
One of the key challenges that fact-checkers routinely encounter is the difficulty in discovering the content that they should fact-check, which essentially means locating those pieces of misinformation that have both significant impact and are consumed and/or believed by large audiences. So, refining monitoring and detection methodologies to combat misinformation in different modalities remains a crucial need among fact-checkers, as recognised in our evaluation. Through our collaboration with the University of Bergen and our new technology partner, Factiverse, we will work towards addressing the specific needs of fact-checkers.
Testing and developing fact-checking tools
AP: The consortium has also yielded valuable outcomes in the assessment and development of fact-checking technology. NORDIS team members stand behind the creation of a fact-checking tools base, along with the creation of novel prototypes that have been adopted by journalists and fact-checkers in the Nordics. Do you have any indications on how the fact-checking technology is expected to evolve in the near future? What sort of tools will be needed?
MLD: We worked closely with Duc Tien Nguyen’s team at the University of Bergen. This collaboration resulted in Foto Verifier, an image verification tool, and AI-based classifier tools. NORDIS member Laurence Dierickx in turn compiled the fact-checking tools database. This database was developed further in collaboration with the SCAM project at OsloMet.
As we were delving into our work, we discerned the potential for automating parts of the fact-checking process, a task that has proven quite challenging. We have made some progress in this realm and remain committed to further advancements.
The technology landscape has seen substantial progress, resulting in increased accessibility to AI systems. This development serves both fact-checking practitioners and those seeking to disseminate misinformation.
We believe that an important challenge for fact-checkers today lies in developing their capacity to detect and respond to synthetic information, including deep fakes and synthetic, or artificially generated text. The systems that generate such content have been improving significantly over the past few years, and we need tools to address that. At NORDIS, our agenda involves collaborating with Factiverse to develop detection methods specifically designed for synthetic text.
AP: If the necessary funding is secured, the consortium will move into what could be called a second phase. What would be the plans and objectives for the upcoming years?
MLD: The second phase will be more industry-oriented, with Faktisk.no taking the lead. In this phase, the hub will comprise a higher number of fact-checkers than research representatives. Our aim is to foster closer collaboration between scientists and fact-checkers, and this is part of the reason why we want to build on the image verification tools that researchers at the University of Bergen have created. We seek to facilitate the tool’s integration into a structured development cycle, which includes prototyping, newsroom application, and user feedback gathering, with the ambition of creating verification tools that can be put to use in any newsroom, but particularly in the European fact-checking community. This process is designed to create value for all parties involved.
Furthermore, we have established partnerships with several journalist education institutions, such as SUJO at the University of Bergen and FOJO Media Institute at Linnaeus University in Sweden. Collaborative agreements are also in place with similar institutions in Denmark and Finland. We believe that the knowledge and methodologies we have developed – and will continue to develop – should be shared with the entire media ecosystem to further educate professionals.
Morten L. Dahlback is Head of Innovation and Technology at the Norwegian fact-checking organization Faktisk.no, where he has been the work package lead for fact-checking and communication in NORDIS since 2021. Dahlback has a PhD in philosophy from the Norwegian University of Science and Technology (2017). He has been a visiting researcher at Rutgers University and the Munich Center for Mathematical Philosophy.