
2.3 Engaging in Citizenship through Digital Technologies
Topic Activity
Fact or Fake? Digital Citizens in Action
Aim of the activity
To empower adult learners to participate ethically and critically in digital spaces by detecting misinformation, practicing respectful communication, and reflecting on their digital footprint, with support from AI tools.
Target Group
Duration
45–60 minutes
(adaptable depending on participants’ digital fluency)
Materials necessary to execute activity
Online / Hybrid:
Zoom, Google Meet, or MS Teams with breakout rooms
Google Docs or Padlet
Access to AI tools: ChatGPT (for text analysis) or NewsGuard (for source credibility ratings)
In-person:
Laptops or tablets
Printed versions of sample articles/posts (both real and fake)
Markers and flipcharts for group work
Steps for implementation
1. Warm-up & Framing (10 min)
Ask: “Have you ever shared something online and later found it wasn’t true?”
Show two short online posts (real vs. misleading). Quick poll: “Which one would you trust?”
Introduce key concepts: digital citizenship, misinformation, respectful participation, and digital rights.
2. Group Task – Fake or Fact? (30 min)
Participants work in small groups (3–4). Their mission:
Analyze a short social media post or article (provided by the facilitator – include at least one misleading or AI-generated post).
Use an AI tool (e.g. ChatGPT) to:
Summarize the content
Detect biased or emotionally charged language
Suggest neutral or corrected phrasing
Fact-check the post using:
Google Fact Check Explorer
EUvsDisinfo
Snopes
Rewrite the post to reflect verified and respectful communication.
Facilitator tip: Provide AI-generated examples with an emotional or misleading tone to make the task more challenging.
Adaptation Tips
Online:
Use breakout rooms for group analysis and rewriting tasks. Assign each group a shared Google Doc or Padlet board to work collaboratively.
Upload all sample posts/articles in advance to a shared folder or LMS so all participants can access the same materials easily.
Provide a list of pre-tested fact-checking links and a guide on how to use ChatGPT to detect bias or suggest neutral phrasing.
Encourage learners to share their rewritten posts in a common Padlet wall or collaborative whiteboard for class-wide discussion.
During reflection, allow participants to vote or react via emoji/reactions to identify which rewritten post was most respectful or effective.
In person:
Prepare printed versions of social media posts/articles, ensuring a mix of accurate, misleading, and AI-generated examples.
Equip each group with a laptop or tablet (or access to one), so they can use AI tools and fact-checking websites.
If AI tools cannot be accessed directly, the facilitator can act as the AI proxy, inputting text into ChatGPT/NewsGuard and displaying results via projector.
Print and distribute checklists or fact-checking guides, including links to EUvsDisinfo, Google Fact Check, and Snopes.
Use flipcharts or boards for each group to record findings and display their “corrected” posts during the sharing phase.
Provide support cards or glossary sheets explaining key concepts like bias, digital footprint, and netiquette.
Hybrid:
Assign groups that include both online and in-person participants, using shared collaboration tools (e.g., Google Docs, Jamboard) to ensure everyone contributes.
Set up a central presentation point in the classroom with a projector and webcam so both audiences can view the examples and group outputs.
Ensure AI tools are accessible to both audiences. If needed, designate one “AI support role” per group to run prompts and share results across platforms.
Post all sample posts/articles on a shared platform and provide both printed and digital versions.
During the sharing phase, allow each group to present either in person or online, screen-sharing their analysis and rewritten version.
Skills developed with the activity
By the end of the activity, learners will:
· Recognize and evaluate misinformation or bias online
· Use AI to support critical reading and ethical rewriting
· Understand digital rights (e.g. privacy, consent, footprint)
· Practice respectful and constructive digital participation
Methodology
- On-site
- Online
- Hybrid
Evaluation
Reflection & Discussion (15 min)
Each group briefly shares:
Was the post reliable? Why or why not?
What did AI help you discover or clarify?
What rights and responsibilities apply when sharing such content?
Discuss as a whole group:
“How would you respond if a friend shared something false or harmful online?”
“What’s your digital footprint saying about you?”
Links & References
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.

