Young Digital Detectives — EU Youth vs. Disinformation

May 3, 2026

Every week, 76% of young Europeans encounter disinformation or fake news. Yet only 36% of them ever stop to check whether what they have seen is real. That gap is not simply a digital skills problem. It is a quiet threat to the foundations of democratic life — and it is widening every year.

The consequences are compounding. False narratives spread before corrections even begin to circulate. Coordinated campaigns exploit moments of crisis, amplifying division and manufacturing outrage. And the young people most active online — the very generation expected to carry democracy forward — are often the least equipped to defend themselves against manipulation. Furthermore, the tools used to deceive them are growing more sophisticated by the day.

AI-generated deepfakes, synthetic audio, and algorithmically amplified propaganda are no longer the stuff of distant geopolitics. They are present in the comment sections, group chats, and social feeds of young Europeans every single day. As a result, the stakes of digital illiteracy have never been higher. When young people cannot distinguish between a coordinated campaign and a grassroots movement, the information space becomes a lever for those seeking to destabilise democratic institutions.

Russia’s operations during the war in Ukraine involved more than 10,500 social media channels and websites — a figure documented in the European External Action Service’s own threat reports. This is not organic misinformation. It is industrial-scale manipulation designed specifically to overwhelm critical thinking before it has a chance to engage.

Growing Response

There is, however, a growing response. Across Europe, a new generation of young people is learning to investigate, not just scroll. The Young Digital Detectives initiative represents one of the most ambitious attempts to build genuine investigative capacity among youth — moving far beyond awareness campaigns toward a systematic, professional-grade methodology for identifying and dismantling disinformation.

This work is part of what L4Y Learning For Youth is actively championing through transnational Erasmus+ partnerships. If you want to explore more of the research behind this area, our Research Category brings together the evidence base and project insights. Stay connected to the latest developments by following L4Y Learning For Youth on LinkedIn, where we share updates, tools, and findings from the field.

Key takeaways

Young digital detectives are not simply learning to spot fake news. They are learning to analyse the intent behind a narrative — to move from asking “is this true?” to asking “who benefits from this, and why?” That shift is fundamental. Content can be accurate and still be designed to manipulate. A skilled digital investigator understands the difference.

The three categories of information disorder — misinformation, disinformation, and malinformation — each require a different investigative response. Misinformation spreads without intent to harm; disinformation is weaponised; malinformation uses truth against people. Knowing which category a piece of content falls into determines which investigative tools to apply and how to communicate the findings responsibly.

Professional-grade investigation follows a four-stage methodology. It begins by extracting specific claims from the noise, moves on to constructing precise questions, gathers evidence from reliable sources, and concludes with a human-led verdict. Automated tools support this process, but the final judgment always belongs to a human investigator capable of reading context and nuance.

Foreign information manipulation and interference, or FIMI, is a core feature of modern hybrid warfare. It is not random. It is opportunistic, coordinated, and designed to exploit moments of intense public attention. In 21.3% of investigated FIMI incidents, actors deliberately seized on pre-existing media interest to amplify their reach — a finding that demands heightened scrutiny during elections, public health events, and political summits.

Building cognitive resilience means more than raising awareness. Signal Detection Theory research shows that what matters most is increasing a young person’s sensitivity — their actual ability to distinguish true from false — while reducing their response bias, the tendency to accept information that confirms existing beliefs. Training that focuses on deep investigation, not surface awareness, is what moves the needle on both.

What you will learn in this Young Digital Detectives post

By engaging with this research, you will understand why digital nativity is not the same as digital literacy — and why high technical proficiency in young people does not automatically produce the cognitive frameworks needed for evidence-based analysis.

You will learn how professional fact-checkers approach verification using a four-stage investigative method — and how that methodology has been adapted for youth work contexts through the Young Digital Detectives project.

You will gain a clear understanding of the three-part taxonomy for information disorder — misinformation, disinformation, and malinformation — and why each category demands a distinct investigative and communicative response.

You will see how foreign information manipulation operates as a strategic tool of hybrid warfare, with documented examples of coordinated campaigns designed to destabilise European democratic institutions.

Finally, you will understand how non-formal education and youth work create a uniquely effective environment for building these investigative competences — one that formal education systems are often too slow to replicate.

The digital literacy crisis facing European youth

The term “digital native” has done a great deal of harm. It has convinced a generation of policymakers and educators that growing up with smartphones is sufficient preparation for navigating a weaponised information environment. The data tells a different story. More than 40% of 13–14-year-olds across the EU lack basic digital skills. Fewer than 40% of educators felt prepared to use digital technologies effectively in their teaching — a figure captured in research conducted during the COVID-19 pandemic, when digital competence suddenly became non-negotiable.

The disconnect is profound. Young people are highly competent at using digital tools. They are significantly less competent at evaluating the information those tools deliver. Technical proficiency and critical reasoning are not the same skill. Moreover, algorithmic systems are specifically designed to reward emotional engagement over accuracy — meaning the platforms young people use daily are structurally incentivised to surface content that provokes, not content that informs.

Societal awareness of the problem is, however, already strong: 80% of European citizens agree that digital literacy protects against disinformation, and 90% believe that teachers and educators need specific skills to address it. That level of consensus is remarkable. It signals not just awareness, but genuine urgency — and it creates the political and institutional space for interventions like the Young Digital Detectives project to operate with broad public support.

Young digital detectives and the 76% problem

The 76% figure — the share of young Europeans encountering disinformation on a weekly basis — is not a statistical abstraction. It represents a structural condition in which manipulation is the norm, not the exception. Every time a young person opens social media, they are operating in an environment where false content, algorithmically amplified and emotionally calibrated, is a default feature of the landscape.

The Young Digital Detectives project begins from this reality. Rather than treating disinformation as an occasional threat to be avoided, it treats it as a constant condition to be navigated. That reframing is consequential. It moves the conversation from “how do we protect young people from fake news?” to “how do we equip young people to function effectively in a world where disinformation is always present?” The second question is harder — and more honest.

Consequently, the project focuses not on building defensive walls but on building investigative capacity. A young person who knows how to extract a claim, trace its origin, assess the source’s credibility, and reach a reasoned verdict is not just protected from a single piece of false content. They have the tools to function as an informed citizen in any information environment — including ones that do not yet exist.

The information disorder taxonomy

One of the foundational contributions of the Young Digital Detectives methodology is the adoption of a precise, three-part taxonomy for information harms. This framework, aligned with Council of Europe standards, distinguishes between misinformation, disinformation, and malinformation — and that distinction matters enormously for how an investigation is conducted.

Misinformation involves the sharing of false information without intent to harm. It is the error of the well-meaning but insufficiently critical sharer — the person who forwards a claim because they believe it to be true and have not paused to verify it. Misinformation spreads through social trust, not malice. That makes it pervasive and difficult to counter, because correcting it requires challenging the credibility of people who had good intentions.

Disinformation is a different category entirely. It involves the intentional creation and dissemination of false information designed to deceive or manipulate audiences. State-sponsored propaganda, coordinated inauthentic behaviour on social media platforms, and bot networks amplifying fabricated narratives are all forms of disinformation. The intent to deceive is what defines it — and it is that intent that makes disinformation the most strategically dangerous category in the taxonomy.

Malinformation is perhaps the most counterintuitive of the three. It refers to the sharing of factually true information with the specific intent to cause harm. The leak of private correspondence to damage a political opponent, the decontextualised use of genuine statistics to support a false narrative, or the targeted release of personal data to intimidate or silence — all of these are acts of malinformation. Truth, in this context, becomes a weapon.

Young Digital Detectives – Understanding intent, not just content

The critical insight embedded in this taxonomy is that content analysis alone is insufficient for responsible digital investigation. The question “Is this true?” is necessary but not sufficient. The equally important question is “what is the intent behind its dissemination?” A piece of content can be entirely accurate and still be malinformation, deployed to harm. It can be entirely false and still be an honest mistake.

This shift — from content analysis to actor analysis — is what distinguishes a digital detective from a casual fact-checker. By examining not just what is being shared but who is sharing it, through which channels, at what moment, and with what apparent purpose, an investigator can begin to map the intent behind a narrative. That analytical layer is what allows young people to engage with the full complexity of the modern information environment, rather than reducing it to a simple binary of true and false.

How young digital detectives investigate digital claims

The investigative methodology at the heart of the Young Digital Detectives project is drawn from the professional standards of international fact-checking networks, including the International Fact-Checking Network and the European Digital Media Observatory. It is a four-stage process designed to be objective, transparent, and reproducible — the same qualities demanded of professional journalism and scientific inquiry.

Stage 1 — extracting claims from the noise

Every investigation begins with identifying a specific, verifiable assertion. In the context of youth work, this typically means monitoring social media channels, comment sections, and influencer content for claims that can be tested against evidence. The skill here is distinguishing between subjective opinion — which is not verifiable — and empirical assertion, which is.

This stage is more demanding than it sounds. The information environment deliberately blurs the line between opinion and fact, between satire and news, between legitimate commentary and coordinated manipulation. Training young people to extract precise claims from this noise requires practice. The use of platforms like Truly Media — an EU-supported tool using machine learning and large language models to detect widespread disinformation narratives — can support investigators in identifying content worth examining more closely.

Stage 2 — building the right questions

Once a claim is extracted, the investigator formulates precise questions to guide the verification process. This stage is explicitly designed to combat cognitive bias. By defining what the investigation is actually trying to answer before gathering evidence, the investigator reduces the risk of confirmation bias — the tendency to find the evidence that supports what you already believe.

For example, if a narrative suggests that voter fraud occurred in a specific region, the investigator does not simply ask “did fraud occur?” — a question that frames the assumption as given. Instead, they ask what the official results show, whether independent observers documented any irregularities, and what the verifiable source of the original claim is. Each question is specific, answerable, and bounded. That precision is what makes the investigation defensible.

Stage 3 — sourcing evidence and tracing digital footprints

The credibility of any investigation rests on the quality of the sources used to build it. Professional standards require quality media, original source documents, and official government data — not secondary aggregators or unattributed summaries. The Young Digital Detectives project trains participants to evaluate sources based on transparency, editorial independence, and accountability.

This stage also includes technical forensic work. Investigators learn to use tools that reveal the processing history of images and videos — detecting edits, synthetic alterations, or contextual misuse of genuine footage. A photograph from a conflict three years ago presented as breaking news today is a form of disinformation. A synthetic audio clip with a verification score approaching 1.0 on an AI detection tool is a signal for deeper scrutiny. These technical skills are not optional extras — they are central to responsible investigation in the current environment.

Young digital detectives and the human-in-the-loop principle

The final stage of the investigative process involves synthesising the gathered evidence to reach a verdict — typically categorised as true, false, misleading, or unverified. A critical principle embedded in the Young Digital Detectives methodology at this stage is the human-in-the-loop approach.

Automated tools can produce verification scores. They can cluster related content, flag synthetic media, and track narrative spread across platforms. However, the final judgment must always rest with a human investigator. Algorithms cannot reliably interpret cultural context, detect irony, or assess the democratic significance of a contested claim. Furthermore, the ethical weight of publishing a verdict about a person or institution demands human accountability — not algorithmic output.

This principle is not a limitation of the technology. It is a deliberate values choice. The Young Digital Detectives project treats automated tools as powerful supports for human reasoning, not replacements for it. That distinction matters enormously in a world where AI is increasingly used to both create and counter disinformation.
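The four-stage process described above, ending in a human-signed verdict, can be sketched in code. This is a minimal illustration of our own; the class, method, and verdict names are assumptions, not the project's actual tooling:

```python
from dataclasses import dataclass, field

# Verdict categories, mirroring those described in the methodology.
VERDICTS = {"true", "false", "misleading", "unverified"}


@dataclass
class Investigation:
    claim: str                                     # Stage 1: a specific, verifiable assertion
    questions: list = field(default_factory=list)  # Stage 2: questions defined before evidence
    evidence: list = field(default_factory=list)   # Stage 3: (source, finding) pairs
    verdict: str = "unverified"                    # Stage 4: set only after human review

    def add_question(self, question: str) -> None:
        """Stage 2: fix what the investigation must answer before gathering evidence."""
        self.questions.append(question)

    def add_evidence(self, source: str, finding: str) -> None:
        """Stage 3: record only attributable findings, tied to a named source."""
        self.evidence.append((source, finding))

    def conclude(self, verdict: str, reviewed_by_human: bool) -> None:
        """Stage 4: automated scores may inform the verdict, but a person signs off."""
        if not reviewed_by_human:
            raise ValueError("human-in-the-loop: a person must review the verdict")
        if verdict not in VERDICTS:
            raise ValueError(f"verdict must be one of {sorted(VERDICTS)}")
        self.verdict = verdict
```

The `conclude` step refuses to record a verdict unless a person has reviewed it, mirroring the human-in-the-loop principle.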

Foreign information manipulation and why it matters to youth

Foreign information manipulation and interference — FIMI — is defined by the European External Action Service as a pattern of behaviour that threatens or has the potential to negatively impact democratic values, procedures, and political processes. It is a key instrument of modern hybrid warfare, and its primary targets include the perceptions and behaviours of ordinary citizens — including young people.

The scale of documented FIMI activity is significant. Russia’s information operations accompanying the war in Ukraine have involved over 10,500 social media channels and websites. These are not grassroots expressions of public sentiment. They are coordinated infrastructure — designed, funded, and operated with the specific objective of shifting opinion, undermining trust, and destabilising democratic institutions across Europe.

What makes FIMI particularly difficult to counter is its opportunism. In 21.3% of investigated incidents, FIMI actors specifically seized on pre-existing media interest to amplify their reach. Elections, public health emergencies, political summits, and economic crises all create conditions of heightened attention and emotional vulnerability that foreign actors are trained to exploit. As a result, the periods when people most need reliable information are precisely the periods when the information environment is most heavily manipulated.

The FIMI playbook — tactics, techniques, and procedures

Professional analysis of FIMI uses the framework of Tactics, Techniques, and Procedures — TTPs — to describe the operational methods of threat actors. This framework shifts the focus from the content being shared to the behaviour of the actor sharing it. A tactic describes the operational goal. A technique describes how that goal is pursued. A procedure describes the specific pattern of behaviour observed in a given campaign.

By analysing TTPs, investigators can identify coordinated campaigns that would be invisible to anyone examining individual pieces of content in isolation. A single post may look innocuous. Ten thousand posts, published across a coordinated network of accounts, timed to coincide with a breaking news event, using identical phrasing and amplified by bot traffic — that is a campaign. The Young Digital Detectives project teaches participants to see not just the piece, but the pattern.
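The piece-versus-pattern distinction can be made concrete. The sketch below is a toy heuristic of our own (function name and thresholds invented, not the project's actual tooling): it flags any phrasing shared by many distinct accounts within a short time window, exactly the kind of signal that examining posts one at a time would miss.

```python
from collections import defaultdict


def find_coordinated_clusters(posts, min_accounts=5, window_seconds=3600):
    """posts: list of (account_id, timestamp_seconds, text) tuples.

    Groups posts by normalised text and returns clusters where many
    distinct accounts published near-identical phrasing within a short
    window, a rough proxy for coordinated inauthentic behaviour.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        key = " ".join(text.lower().split())  # crude normalisation
        by_text[key].append((account, ts))

    clusters = []
    for key, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        time_span = items[-1][1] - items[0][1]
        if len(accounts) >= min_accounts and time_span <= window_seconds:
            clusters.append((key, sorted(accounts)))
    return clusters
```

A real investigation would add amplification metrics, account-age checks, and cross-platform tracing, but the core move is the same: analyse behaviour across posts, not the truth of any single post.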

Additionally, analysts countering FIMI use the STIX data format to standardise threat intelligence — allowing findings to be shared across the EU network of investigators and policymakers. The ability to contribute to that shared intelligence infrastructure is one of the most significant capabilities the Young Digital Detectives project aims to build among its participants.
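For illustration only, a minimal STIX 2.1 indicator describing a disinformation URL might look like the sketch below. The ID, URL, name, and timestamps are invented placeholders; real deployments generate and validate such objects with dedicated STIX tooling rather than hand-written dictionaries.

```python
import json

# Illustrative STIX 2.1 indicator object; all field values are hypothetical.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2025-01-15T09:00:00.000Z",
    "modified": "2025-01-15T09:00:00.000Z",
    "name": "URL spreading a known FIMI narrative",
    "indicator_types": ["malicious-activity"],
    "pattern": "[url:value = 'http://example.com/fabricated-story']",
    "pattern_type": "stix",
    "valid_from": "2025-01-15T09:00:00Z",
}

# Serialised form, ready to exchange with other investigators.
shared = json.dumps(indicator, indent=2)
```

Because the format is standardised, a finding recorded once can be consumed by any other team in the network without translation.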

Young digital detectives’ tips that actually work

Start with the source, not the story

When a piece of content catches your attention — positive or negative — resist the instinct to engage with the content first. Instead, trace the source immediately. Who published this? When was the account created? Does it have a consistent posting history, or does it appear to have been created recently and then suddenly become very active? Source credibility is the first filter, not an afterthought.

This matters because the most sophisticated disinformation campaigns are designed to look credible. They use professional-looking graphics, authoritative-sounding names, and real statistics taken out of context. The story is engineered to be persuasive. The source, however, often reveals the architecture of the operation behind it.

Ask what the content wants you to feel

Disinformation is engineered to provoke emotional responses — outrage, fear, contempt, or tribal loyalty. Before you assess whether a claim is true, notice what it is trying to make you feel. If a piece of content generates a strong emotional reaction, that is a reason to slow down, not a reason to share it faster. Emotional intensity is a warning signal, not a measure of importance.

Professional investigators are trained to recognise this pattern and work deliberately against it. They apply what researchers call slow thinking — pausing to ask structured questions before forming or acting on a judgment. That cognitive discipline is one of the most valuable habits the Young Digital Detectives project develops in its participants.

Cross-reference before concluding

No single source is a sufficient basis for a verdict. Professional fact-checkers require multiple independent sources to corroborate any finding. Moreover, they specifically seek out sources that might challenge their emerging conclusion — not to undermine it, but to stress-test it. A conclusion that survives scrutiny from hostile sources is a conclusion that is likely to hold.

In practice, this means checking official government data, comparing accounts from different national outlets, and consulting source documents rather than secondary summaries. It takes more time. It also produces more reliable results — and results that are defensible when challenged.

Know when a true fact is still disinformation

Malinformation is one of the most important and least understood concepts in digital investigation. The fact that a piece of content is factually accurate does not mean it is being used honestly. Statistics can be selectively presented to imply a false conclusion. A genuine video from one event can be presented as footage of a different event. A real quote can be stripped of its qualifying context to reverse its meaning entirely.

Young digital detectives are trained to ask not just “is this accurate?” but “is this being used accurately?” That second question requires understanding the context in which information was originally produced — and comparing it with the context in which it is now being deployed. The difference between those two contexts is often where the manipulation lives.

Recognise catalyst events and raise your scrutiny

FIMI actors are opportunists. Elections, summits, disease outbreaks, and economic shocks all create windows of heightened public attention that are systematically exploited. Research shows that 21.3% of investigated FIMI incidents deliberately seized on pre-existing media interest. When a major public event is underway, the likelihood of encountering coordinated manipulation increases significantly.

Experienced digital detectives apply heightened scrutiny during these periods as a matter of professional discipline. They slow down, verify more carefully, and pay particular attention to narratives that align suspiciously well with the interests of foreign state actors. The timing of content — not just its content — is a meaningful signal in its own right.

Use technical tools as signals, not verdicts

AI detection tools, synthetic media scoring systems, and narrative-tracking platforms are powerful investigative assets. However, they are inputs to human judgment — not replacements for it. A verification score of 0.9 for AI-generated audio is a reason to investigate further, not a definitive verdict. Algorithms are trained on existing patterns. Novel manipulation techniques may not yet register accurately.

The human-in-the-loop principle is therefore not just an ethical position. It is a practical necessity. No automated system yet possesses the cultural awareness, contextual sensitivity, and ethical accountability required to reach final investigative verdicts. That task belongs to trained human investigators, which is precisely what the Young Digital Detectives project is producing.

Build psychological resilience alongside analytical skills

Digital investigation is cognitively demanding. Sustained exposure to disinformation, propaganda, and manipulative content carries psychological costs — including emotional fatigue, cynicism, and compulsive engagement patterns reinforced by platform design. Youth workers facilitating investigative training need to address these risks directly, not treat them as a side effect to be managed privately.

The Young Digital Detectives project integrates psychological awareness into its methodology. Participants learn to recognise the emotional toll of investigative work and develop strategies for managing it sustainably. That combination — analytical rigour and psychological resilience — is what enables investigators to do this work long-term without burning out or becoming desensitised.

Understanding disinformation through analogy

The climate scientist versus the weather forecaster

Imagine the difference between a weather forecaster and a climate scientist. The forecaster looks at today’s conditions and tells you what to expect tomorrow. The climate scientist looks at patterns across decades and tells you what is changing about the system itself. Most people respond to disinformation like weather forecasters — they assess each piece of content individually, deciding whether it seems true or false in the moment.

Young digital detectives are trained to think like climate scientists. They look at patterns — coordinated networks, recurring narratives, timed campaigns — rather than isolated incidents. A single misleading post is a weather event. A coordinated campaign amplifying that post across 10,000 accounts during an election is the climate. Understanding the difference changes everything about how you investigate and respond.

The chess player and the checkers player

A checkers player responds to the move in front of them. A chess player reads the board three moves ahead — anticipating what the opponent is trying to achieve, not just what they are doing right now. Most consumers of online information play checkers with disinformation. They respond to the content directly in front of them, without asking what objective the content is designed to serve.

The Young Digital Detectives methodology teaches chess thinking. By analysing the Tactics, Techniques, and Procedures of FIMI actors, investigators learn to ask not just “what is this claim?” but “what is this claim designed to make me believe, feel, or do — and who benefits from that outcome?” That shift in analytical frame makes it dramatically harder for manipulation to succeed. A chess player cannot be easily surprised by an opponent whose strategy they have already mapped.

The immune system

A healthy immune system does not just react to infections — it remembers them. The first time an immune system encounters a pathogen, the response is slower and more costly. On later encounters, it recognises the threat and neutralises it far more efficiently. Building digital resilience works in the same way. When people first encounter a manipulation technique in a controlled educational environment, they learn to recognise its signature.

This is the principle behind pre-bunking — one of the innovative pedagogical approaches used in the Young Digital Detectives project. By exposing participants to manipulation techniques in a safe, facilitated setting before they encounter them in the real information environment, the project builds a kind of cognitive immunity. Subsequent exposure to the same techniques triggers faster, more accurate recognition — and a more measured, less emotionally driven response.

Frequently asked questions about young digital detectives

What is the Young Digital Detectives project?

The Young Digital Detectives project — formally a KA220-YOU Erasmus+ Strategic Partnership — is a transnational initiative led by L4Y Learning For Youth, with partners in Germany, Turkey, Romania, and North Macedonia. Its goal is to equip young people and youth workers with professional-grade investigative skills for identifying, analysing, and responding to disinformation and foreign information manipulation.

The project operates through non-formal education, which allows it to adapt more rapidly to the evolving information landscape than formal school systems typically can. Its timeline spans 2027 to 2028, positioning it to carry the EU’s broader digital education and democratic resilience agenda forward as the Digital Education Action Plan 2021–2027 concludes.

Why does the project focus on youth specifically?

Young people are both the most active users of digital platforms and, statistically, among the least likely to verify what they encounter. Only 36% of young Europeans actively check content before sharing or acting on it — despite 76% encountering disinformation on a weekly basis. This combination of high exposure and low verification behaviour makes youth a priority group for intervention.

Additionally, young people are in the process of forming the civic habits they will carry into adulthood. Developing critical investigative skills at this stage has a compounding effect — it shapes not just how they consume information now, but how they participate in democratic life for decades to come. Furthermore, trained young people become multipliers, bringing investigative methods back to their peer networks and communities.

What is FIMI, and why should young people know about it?

FIMI stands for Foreign Information Manipulation and Interference. It refers to coordinated efforts by foreign state actors to manipulate the information environment of other countries to undermine democratic institutions, shift public opinion, and advance strategic objectives. EEAS threat reports identify Russia and China as the most prominent actors engaging in these activities.

Young people need to understand FIMI because they are a primary target. Platforms popular among youth are key distribution channels for FIMI operations. Moreover, young people’s political attitudes are still forming — making them a strategically valuable audience for actors seeking to shift the long-term direction of democratic societies. Understanding that manipulation of this kind exists, is systematic, and follows identifiable patterns is the first step toward resisting it.

How is the project’s success measured?

The project uses Signal Detection Theory to evaluate its impact. This research framework distinguishes between two key variables. Sensitivity — denoted as d-prime — refers to the actual cognitive skill of distinguishing true from false information, an objective measure of investigative accuracy. Response bias — denoted as beta — refers to the tendency to accept information uncritically, often driven by prior beliefs or emotional priming.

Effective training increases sensitivity while reducing response bias. The project also aligns its evaluation with the Structural Indicators proposed for the EU Code of Practice on Disinformation, including measures of user capability, narrative reach, and audience resilience. Together, these metrics provide a robust picture of whether the training is producing genuine investigative capacity — not just self-reported confidence.
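Under the standard equal-variance Gaussian model, both measures can be computed from a simple true/false discrimination test: d′ = z(H) − z(F), c = −(z(H) + z(F))/2, and ln β = c · d′. The sketch below assumes the hit rate is the share of false items a participant correctly flags and the false-alarm rate is the share of true items wrongly flagged; the example numbers are illustrative, not project data.

```python
from math import exp
from statistics import NormalDist


def sdt_measures(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian signal detection measures.

    hit_rate: share of false items correctly flagged as false.
    false_alarm_rate: share of true items wrongly flagged as false.
    Returns (d_prime, criterion, beta).
    """
    z = NormalDist().inv_cdf
    z_hit, z_fa = z(hit_rate), z(false_alarm_rate)
    d_prime = z_hit - z_fa            # sensitivity: skill at telling true from false
    criterion = -(z_hit + z_fa) / 2   # response bias location (c)
    beta = exp(criterion * d_prime)   # likelihood-ratio bias: ln(beta) = c * d'
    return d_prime, criterion, beta


# Example: a participant flags 80% of false items, but also 30% of true ones.
d, c, b = sdt_measures(0.80, 0.30)
```

On this model, effective training should raise d′ across sessions while moving the bias measures toward neutral, which is exactly the pattern the evaluation framework looks for.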

Young Digital Detectives – What role does AI play in the project?

AI plays a dual role. Threat actors use AI to generate synthetic media, amplify coordinated campaigns, and manipulate search results by flooding platforms with algorithmically produced false content. AI detection services give human investigators verification scores that estimate whether AI generated a given audio or video file, helping them flag content for deeper review.

The project integrates AI literacy into its core methodology. Participants learn how AI systems work, how threat actors exploit them in FIMI operations, and how investigators can use AI detection tools responsibly as investigative supports rather than definitive arbiters. The human-in-the-loop principle ensures that AI remains a tool in service of human judgment — not a replacement for it.

What is pre-bunking, and how does it differ from fact-checking?

Fact-checking is reactive. Fact-checkers identify false content after people have published and begun spreading it, then issue a correction. Pre-bunking is proactive. It exposes people to manipulation techniques — in a controlled, facilitated environment — before they encounter those techniques in the real information landscape. The goal is to build cognitive immunity so that when people encounter the same techniques outside the training context, they recognise them faster and respond more accurately.

Research supports pre-bunking as an effective complement to fact-checking. While corrections can reduce belief in specific false claims, they do not generalise well to new claims using the same technique. Pre-bunking, by focusing on the technique rather than the content, builds transferable recognition skills. Young digital detectives benefit from both approaches — the analytical rigour of fact-checking and the psychological preparation of pre-bunking exercises.

How does non-formal education support this kind of learning?

Non-formal education — delivered through youth work, youth centres, training courses, and peer-led programmes — offers several advantages over formal classroom settings for this type of learning. In particular, it is participatory and learner-centred. As a result, it creates the right conditions for developing the collaborative, creative, and critical thinking skills that investigative work requires. It can also respond rapidly to changes in the information landscape, without the constraint of fixed curricula.

The Young Digital Detectives project uses gamified and blended learning approaches, including role-play activities, fact-checking challenges, and peer-led workshop design. Youth workers trained through the project become multipliers — taking investigative methods back to their communities and adapting them for local contexts. That model of distributed, peer-driven learning is precisely the scalable approach that a challenge of this magnitude demands.

Young Digital Detectives – Conclusion

The challenge of disinformation in the European information space is not a temporary disruption. It is a structural condition of the digital age — one that will intensify as AI-generated content becomes cheaper, more convincing, and more pervasive. Awareness campaigns and one-off digital literacy workshops are therefore not sufficient responses to a threat of this scale. We must instead invest consistently in investigative capacity — giving people the skills, habits, and tools they need to act as informed, critical citizens in an environment of chronic manipulation.

The Young Digital Detectives project represents a serious attempt to build that capacity. By combining professional fact-checking methodology, a rigorous taxonomy of information harms, an understanding of FIMI and its patterns, and a pedagogical framework designed for non-formal education, the project equips young people not just to survive the current information landscape but to actively defend the democratic values it threatens. That is not a small ambition. It is, however, an achievable one — and the evidence base supports both its necessity and its approach.

Join L4Y Learning For Youth on LinkedIn to stay connected to the projects, research, and conversations shaping the future of youth digital resilience across Europe.

Young Digital Detectives – References

European Commission. (2021). Digital Education Action Plan 2021–2027. European Education Area. https://education.ec.europa.eu/focus-topics/digital-education/actions

European Commission. (2024). Guidelines for teachers to foster digital literacy and tackle disinformation. European Education Area. https://education.ec.europa.eu/focus-topics/digital-education/actions/plan/guidelines-for-teachers-to-foster-digital-literacy-and-tackle-disinformation

European External Action Service. (2026). 4th EEAS Report on Foreign Information Manipulation and Interference Threats. European Union. https://www.eeas.europa.eu/sites/default/files/2026/documents/EEAS%204th%20Threat%20Report_web%20version_1.pdf

European External Action Service. (2024). 2nd EEAS Report on Foreign Information Manipulation and Interference Threats. European Union. https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report%20on%20FIMI%20Threats-January-2024_0.pdf

European External Action Service. (2025). 3rd EEAS Report on Foreign Information Manipulation and Interference Threats. European Union. https://www.eeas.europa.eu/sites/default/files/documents/2025/EEAS-3nd-ThreatReport-March-2025-05-Digital-HD.pdf

European External Action Service. (n.d.). Information integrity and countering foreign information manipulation and interference. EEAS. https://www.eeas.europa.eu/eeas/information-integrity-and-countering-foreign-information-manipulation-interference-fimi_en

European External Action Service. (n.d.). European Democracy Shield and EU Strategy for Civil Society. EEAS. https://www.eeas.europa.eu/eeas/european-democracy-shield-and-eu-strategy-civil-society-pave-way-stronger-and-more-resilient_en

European Digital Media Observatory. (2022). Report on the user needs of fact-checkers. EDMO. https://edmo.eu/wp-content/uploads/2022/10/Report-task-4.2-Fact-checkers-user-needs.pdf

European Digital Media Observatory. (2023). Disinformation narratives during the 2023 elections in Europe. EDMO. https://edmo.eu/wp-content/uploads/2023/10/EDMO-TF-Elections-disinformation-narratives-2023.pdf

European Digital Media Observatory. (2024). D.17 Public Report M24. EDMO. https://edmo.eu/wp-content/uploads/2024/12/D.17_Public-report_M24.pdf

European Digital Media Observatory. (2024). Structural Indicators of the Code of Practice on Disinformation. EDMO. https://edmo.eu/wp-content/uploads/2024/03/SIs_-2nd-EDMO-report.pdf

Council of Europe. (n.d.). Dealing with propaganda, misinformation and fake news. Free to Speak – Safe to Learn. https://www.coe.int/en/web/campaign-free-to-speak-safe-to-learn/dealing-with-propaganda-misinformation-and-fake-news

Council of Europe. (2021). Implementation of Recommendation CM/Rec(2019)10 on Developing and Promoting Digital Citizenship Education. Council of Europe. https://rm.coe.int/prems-015426-gbr-2553-implementation-of-recommendation-cmrec-2019-10-b/48802a99d2

SALTO-YOUTH. (n.d.). MINDtheGaps Handbook for Youth Digital Citizenship Education. SALTO-YOUTH. https://www.salto-youth.net/downloads/toolbox_tool_download-file-2515/HandBook%20Youth%20Digital%20Citizenship%20Educationrev2.pdf

SALTO-YOUTH. (n.d.). Young Digital Detectives — KA220-YOU. Otlas Partner Finding. https://www.salto-youth.net/tools/otlas-partner-finding/project/ka220-you-young-digital-detectives.19957/

SALTO-YOUTH. (n.d.). ICT&SM 3.0: Advancing youth work for civic engagement. European Training Calendar. https://www.salto-youth.net/tools/european-training-calendar/training/ict-sm-3-0-advancing-youth-work-for-civic-engagement.13898

SALTO-YOUTH. (2020). Developing Scientific and Digital Citizenship Skills for Disadvantaged Youth. SALTO-YOUTH. https://www.salto-youth.net/downloads/toolbox_tool_download-file-3748/Install%20Youth%20BEST%20PRACTICES%20MANUAL.pdf

YouthRex. (2025). Artificial Intelligence Competence Needs for Youth Workers (AI4YW). YouthRex. https://youthrex.com/wp-content/uploads/2025/12/AI4YW-AI-Competence-Needs-for-Youth-Workers.pdf

European Commission. (2024). Revising guidelines on disinformation and AI ethics — workshops insights. European Education Area. https://education.ec.europa.eu/news/revising-guidelines-on-disinformation-and-ai-ethics-workshops-insights

EACEA National Policies Platform. (n.d.). 6.8 Media literacy and safe use of new media — Sweden. European Union. https://national-policies.eacea.ec.europa.eu/youthwiki/chapters/sweden/68-media-literacy-and-safe-use-of-new-media

EACEA National Policies Platform. (n.d.). 6.8 Media literacy and safe use of new media — Serbia. European Union. https://national-policies.eacea.ec.europa.eu/youthwiki/chapters/serbia/68-media-literacy-and-safe-use-of-new-media

European Commission. (n.d.). Media literacy. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/media-literacy
