U.S. 🇺🇸 Last updated: 2026.01.22
Very few, but proposed (as good as it gets). In the United States, certain categories of speech are not protected by the First Amendment and can be considered illegal. These include obscenity, which must meet the Miller test: appealing to prurient interests, depicting sexual conduct in a patently offensive way, and lacking serious literary, artistic, political, or scientific value. Speech that constitutes fraud or child pornography, or that is integral to illegal conduct, is also unprotected. Additionally, speech that incites imminent lawless action, as established by the Brandenburg test, is not protected: advocacy of violence or illegal acts is punishable only if it is directed at producing imminent lawless action and is likely to do so.
Other unprotected categories include true threats, which involve serious expressions of intent to commit unlawful violence against a person or group, and "fighting words" that are likely to provoke an immediate violent response from an average person.
False statements of fact made with actual malice or negligence can lead to liability in defamation cases, especially when they harm a person’s reputation. Lying under oath (perjury) or making false statements to federal investigators is also criminal and unprotected. Commercial speech, such as advertising, receives diminished protection, and false or misleading advertising can be punished.
Hate speech, while often offensive, is generally protected under the First Amendment unless it falls into one of the unprotected categories like true threats or incitement to violence. The government may regulate speech in specific contexts, such as broadcasting indecent language on public airwaves, which the FCC can fine under certain conditions. Intellectual property violations, such as copyright infringement or counterfeiting currency, also constitute illegal speech under relevant laws.
The STOP HATE Act (proposed 2025) would ban ‘hate speech’, antisemitism, and ‘disinformation’.
Indirect & proposed. The Algorithm Accountability Act (proposed), like California’s scrapped Senate Bill 771, aims to hold social media platforms liable for the content their algorithms distribute. This would lead to indirect censorship, as companies may over-moderate or remove content to avoid hefty penalties for alleged civil rights violations, effectively prioritizing algorithmic compliance over free expression. Similarly, the Sunset Section 230 Act (proposed) would make online platforms responsible for what their users post and comment, forcing platforms to become more restrictive in order to protect themselves from lawsuits. The Block BEARD Act (proposed 2025) would force ISPs to block piracy websites. Indirect censorship is possible already:
- The government has pressured social media platforms to remove content under the pretext of fighting misinformation and hate speech.
- High-profile cases such as WikiLeaks, SamouraiWallet and The Pirate Bay involve domain seizures framed as law enforcement actions against crime, which are considered legal despite First Amendment concerns.
- TAKE IT DOWN Act: Aimed at combating non-consensual sharing of intimate images, this act could enable censorship by allowing platforms to remove content based solely on complaints, without proof of harm or an appeals process.
- PAFACA: Commonly known as the "TikTok ban", targeting apps or websites owned by foreign entities. Proponents argue it is not censorship because a new (American) owner of TikTok would still be allowed to circulate the same content.
- Stop Hiding Hate Act (New York): Forces social media platforms to report ‘hate speech’ incidents; while no fines for retaining legal content are imposed, it may coerce platforms into more aggressive moderation practices.
No bans. Though such laws are regularly proposed, they have so far all failed, e.g. the EARN IT Act, the Lawful Access to Encrypted Data Act, and Florida’s “Social Media Use by Minors” bill (HB 744/SB 868).
No, but proposed VPN bans in some states. Some US states have proposed VPN bans or restrictions, but no laws have passed yet.
Age verification in some states. Age verification laws for the web are in place in several US states, but not on a federal level. The Kids Online Safety Act (proposed 2025) and the SCREEN Act (proposed 2025) aim to implement restrictions on a federal level. The proposed Kids Off Social Media Act would bar under-13s from having social media accounts, which would require adults to verify their age. California’s Digital Age Assurance Act forces OSes, device makers, and app stores (with no FOSS exemptions) to send age-related signals to developers; it does not, however, require ID checks. App Store Accountability Acts in Texas, Utah, Louisiana, and other states require app stores and developers to implement age verification; Apple and Google say compliance will require collecting personally identifiable data.
Passwords no, biometrics yes. Passwords are protected by the Fifth Amendment and do not need to be disclosed. The situation for biometric unlocking is more disputed, but courts have generally allowed police to compel biometric unlocks (e.g. forcing a suspect’s finger onto a phone or holding a device to their face), starting with cases like United States v. Dionisio (1973).
No bans, but devs punished. There is no ban on the use of anonymous payment methods such as Monero, but in the past, developers of cryptocurrency software allowing for financial privacy and anonymity have been prosecuted in the name of anti-money laundering, e.g. ‘US v. Storm’ and ‘US v. Rodriguez’, targeting the developers of Tornado Cash, a privacy protocol that mixes cryptocurrency transactions to obscure their origin.
None. There is no comprehensive federal requirement for ISPs to retain connection logs or metadata for all users; any retention is voluntary, although proposals for mandatory logging have existed (e.g. the SAFETY Act 2009). The CLOUD Act (which requires US-based service providers to give law enforcement data stored on their servers, even if those servers are located outside the United States; it does not require companies to retain or log data they would not otherwise maintain as part of their operations, and only governs access to data a provider already stores) and PRISM (a US intelligence program, disclosed in 2013, that enables the NSA to collect internet communications from US-based tech companies and allows compelled disclosure of content or metadata held by providers when targeted at non-US persons outside the US) are not data retention laws.
No. Platform-agnostic, can use browser + OTP. Government services such as Login.gov or ID.me support browser-based login with password + OTP (via SMS, email, or authenticator app), and no Android/iOS smartphone is mandatory for access or authentication.
Fair Use, but DMCA misuse. Broad, flexible exceptions allowing various uses (such as commentary, criticism, news reporting, teaching, scholarship, and research) based on four fairness factors (purpose, nature, amount, market impact).
However, the DMCA (Digital Millennium Copyright Act) has in the past been misused for censorship and “taking down” legal content, as content needs to be removed quickly and without proof of copyright infringement.
Canada 🇨🇦 Last updated: 2025.11.21
Restricted. Mostly relating to vaguely defined ‘hate speech’ and Holocaust denial in Criminal Code §318 & §319. Proposed Bill C-9 (2025) would also ban the display of Nazi and Hamas symbols and widen the definition of ‘hate speech’, particularly relating to anti-religious offences (plus failed laws like Bill C-36 (failed in 2021) and Bill C-63 (failed in 2025), the latter of which would have included a maximum penalty of life imprisonment for hate crime offences, including non-violent speech offences such as ‘hate propaganda’).
Selective censorship. In the past, ISPs have been ordered to block websites associated with copyright infringement, though major sites like Anna’s Archive or The Pirate Bay remain available. Critics also worry that the Online Streaming Act enables state control over what Canadians see online. This act extends the Canadian Radio-television and Telecommunications Commission’s regulation to online streaming platforms like YouTube, Netflix, TikTok, and Spotify, requiring them to promote and recommend Canadian content. A central controversy is the perceived risk of government censorship and overreach, with critics worried about giving the CRTC authority to influence algorithms and content recommendations.
No, but proposed. Bill C-26, centered on cybersecurity and expanding surveillance powers, passed Parliament and reached Senate review in June 2024. The Senate found technical flaws and amended it, sending it back to the House of Commons. As of July 2025, it has not yet become law and remains subject to legislative review and correction.
No bans.
No, but proposed. Bill S-209, aimed at mandatory age verification for access to online adult content, returned to the Senate for first reading in May 2025. Debate continues in Parliament with a focus on privacy and implementation challenges. The bill has not yet been enacted.
None.
No bans, but restrictions. However, Monero has been delisted from most CEXs for Canadian users due to KYC and other regulations, even though it is not banned per se. Also, Justin Trudeau’s Emergencies Act granted the government the power to restrict cryptocurrency transactions, including Monero, as part of efforts to curb funding for the Freedom Convoy protests. It did not, however, constitute an outright ban of Monero or other cryptocurrencies.
None.
No. Platform-agnostic, can use browser + OTP. Government services such as GCKey or Sign-In Partner support browser-based login with password + OTP (via SMS, email, or authenticator app), and no Android/iOS smartphone is mandatory for access or authentication.
Fair Dealing. Use permitted only if it falls into prescribed categories (e.g. research, private study, criticism, review, news reporting, education, parody, satire). More restrictive than the US.
Australia 🇦🇺 Last updated: 2026.01.22
Severe limitations of speech. Mostly relating to vaguely defined ‘hate speech’ and showing National Socialist symbols, under laws such as the Racial Discrimination Act 1975 and the Criminal Code Amendment (Hate Crimes) Bill 2025: "The laws at both federal and NSW levels aim to curb hate-fueled violence, particularly against Jewish Australians. They criminalize advocating force or violence against protected groups, toughen penalties for Nazi-related symbolism, and even impose mandatory minimum sentences for some offenses.
The new laws stretched the rules in ways that might make civil liberties advocates nervous. Previously, to be charged with urging violence against a group, prosecutors had to prove intent. Now? Recklessness will do. This means you don’t have to actually intend for violence to happen — just failing to consider the possibility could land you in serious trouble.
The law also takes a broad approach to Nazi symbolism. Displaying a swastika was already illegal in some contexts, but now similar prohibitions apply to a range of extremist symbols, with penalties jumping from one year in prison to five. And if you’re caught making a "Nazi salute?" Enjoy your 12-month mandatory minimum sentence." - Reclaim The Net
The Combatting Antisemitism, Hate and Extremism Bill 2026, passed in 01/26, significantly restricts speech in ways that are dangerous and unusual. It criminalizes public conduct or expression (including online) if it would cause a ‘reasonable person’ to feel intimidated or harassed, without requiring proof of actual harm, real victims, or incitement to violence. The law shifts the burden of proof onto the accused for certain offenses (like displaying prohibited hate symbols), forcing them to justify exemptions. Furthermore, it empowers the government to blacklist so-called hate groups based on executive discretion, and (even retroactively) punishes mere association, membership, or support with up to 15 years in prison. This goes far beyond typical hate speech laws in other countries, which usually demand intent to incite hatred or violence and include stronger safeguards for political, academic, or journalistic expression, making this bill exceptionally broad, subjective, and restrictive of free speech.
Widespread censorship. The Australian Communications and Media Authority (ACMA) has the power to enforce content restrictions on Internet content hosted within Australia and maintains a blocklist of overseas websites. The eSafety Commission can order the removal of ‘harmful’ content and block access to certain websites, which in the past included archive.org and specific videos deemed inappropriate, such as violent incidents shared on platforms like X [1], [2]. In the past, ISPs have also been ordered to block websites associated with copyright infringement (e.g. Anna’s Archive, The Pirate Bay). Indirect censorship occurs through the Online Safety Act, which requires age verification for accessing potentially ‘harmful’ content.
**Yes (backdoor on demand)**. The Assistance and Access Act 2018 allows intelligence and police agencies to issue notices compelling technology companies to cooperate in building in backdoor access. For example, the government demanded that Signal create a backdoor for them, which Signal has so far refused.
Not banned, but restrictions. Social media firms are expected per eSafety guidance to block VPNs, as they can be used to bypass Australia’s under-16 ban. In practice, platforms may have to blacklist VPN-associated IPs because they cannot prove a VPN user is not an Australian under 16 (a naive illustration of such IP-range blocking is sketched at the end of this entry). Alternatively, they would need to cross-check an account’s historical IPs and collected location data in order to detect and block VPN use for Australians only.
Age verification. The Online Safety Bill 2024 mandates age verification to restrict the use of social media by minors under the age of 16. Furthermore, age verification requirements have been extended to YouTube and search engines like Google and Bing.
Yes. The Cybercrime Act 2001 grants police, with a magistrate’s order, the wide-ranging power to require "a specified person to provide any information or assistance that is reasonable and necessary to allow the officer to access computer data that is ‘evidential material’"; this is understood to include mandatory decryption. Failing to comply carries a penalty of 6 months’ imprisonment.
No bans, but restrictions. However, Monero has been delisted from most CEXs for Australian users due to KYC and other regulations, even though it is not banned per se.
Yes (24 months). The Data Retention Act 2015 requires retention of ISP metadata (such as IPs, connection logs, or browsing history) and email and telephony metadata (including mobile phone locations) for 2 years.
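To make the IP-blacklisting idea above concrete: a platform would most plausibly check connecting addresses against published lists of VPN and datacenter ranges. The following is a minimal, hypothetical Python sketch of that idea; the ranges and addresses are placeholder examples, not a real blocklist.

```python
# Naive illustration of IP-range blocklisting: check whether a connecting
# address falls inside ranges a platform associates with VPN or datacenter
# providers. The ranges below are documentation/test networks used as
# hypothetical stand-ins, not actual VPN provider ranges.
from ipaddress import ip_address, ip_network

VPN_RANGES = [
    ip_network("198.51.100.0/24"),  # TEST-NET-2, stand-in for "VPN provider A"
    ip_network("203.0.113.0/24"),   # TEST-NET-3, stand-in for "datacenter B"
]

def looks_like_vpn(addr: str) -> bool:
    """Return True if the address falls inside any listed VPN/datacenter range."""
    ip = ip_address(addr)
    return any(ip in net for net in VPN_RANGES)

if __name__ == "__main__":
    for candidate in ("198.51.100.42", "192.0.2.7"):
        verdict = "blocked as likely VPN" if looks_like_vpn(candidate) else "allowed"
        print(candidate, "->", verdict)
```

The obvious limitation, echoed in the text above, is that such range lists cannot distinguish an overseas adult from an Australian minor, which is why platforms may end up blocking VPN users wholesale.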
Yes, must register with official ID. Limited support, iOS/Android/AOSP required. For certain government tasks requiring strong authentication (e.g. ATO linkage, DIN), you either need the myID app on an Android/iOS smartphone or must handle the process in person. For now, the myID app (not to be confused with the Australian myGov app, which enforces strong Play Integrity checks and therefore will not run on non-stock Android; that app, however, is not required for authentication, and its features are available in the web portal) seems to work on non-stock Android systems such as LineageOS or GrapheneOS. It is unclear whether it requires Play Services / microG, but it is only available on the Play Store, which requires a Google account (a possible workaround is using Aurora Store to download the app from the Play Store without a Google account, but this is not officially supported).
Fair Dealing. Only allowed for specified purposes such as research, criticism, review, news reporting, parody/satire, professional advice, or education. Additional exceptions are very situation-specific and narrowly crafted.
U.K. 🇬🇧 Last updated: 2026.01.16
Severe limitations of speech. Illegal speech includes vaguely defined ‘hate speech’; anti-immigration speech (in 2025, the UK government deployed a "social media surveillance unit" to monitor social media for anti-immigrant posts); speech likely to cause ‘distress’; ‘indecent’ or ‘offensive’ speech; ‘false’ or ‘misleading’ information; obscenity; insults; advocating against the monarchy (treason laws prohibit advocating the abolition of the monarchy or imagining the death of the monarch); and blaspheming Islam ("England now has a blasphemy law" - The Spectator). There is no official blasphemy law criminalizing criticism of Islam or Muslims; however, concerns have grown over recent prosecutions for actions deemed offensive to Islam (e.g. Quran burning) under existing public order and hate crime laws. Multiple high-profile cases and political discussions suggest a de facto return to blasphemy law principles via prosecution tactics, but no explicit blasphemy legislation has been passed as of July 2025.
Furthermore, anti-Islam activists such as Ryan Williams and Tommy Robinson have been asked by police to unlock their phones and charged under Schedule 7 of the Terrorism Act 2000.
Illegal speech extends further still; for example, UK defamation laws are among the strictest in the Western world, imposing a high burden of proof on the defendant. The most important laws are the Malicious Communications Act 1988 (prohibits sending letters, electronic communications, or articles with the purpose of causing distress or anxiety by conveying messages that are indecent, grossly offensive, or false (known or believed to be false by the sender); covers hate speech that is racially or religiously motivated; jurisprudence may interpret any pro-White or nationalist sentiments as incitement, even benign expressions like "Love your Nation" or "It’s OK to be White" (e.g. in the case of Samuel Melia); criminalizes malicious communications in general, including insults, with prison sentences of up to 2 years possible), the Hate Crime and Public Order (Scotland) Act (addresses stirring up hatred on grounds such as race, religion, or sexual orientation; covers threatening communications that stir up religious hatred; includes offences related to behaviour causing breach of the peace aggravated by racial or religious hatred), and the Online Safety Act 2023, particularly §179, which establishes an offence of false communications:
"Section 179 criminalizes knowingly false communications intended to cause "non-trivial psychological or physical harm." The wording here is as vague as it is dangerous. What qualifies as ‘non-trivial psychological harm’? If the government decides that criticisms of its handling of the grooming gang scandal cause emotional distress to MPs—or, conveniently, to the public—it could label them as harmful misinformation. Knowing the penalties—up to 51 weeks in prison and unlimited fines—citizens may think twice before questioning the government on sensitive issues. And that’s the goal: silence through fear." - [need to find source for quote]
This is not a comprehensive list. Furthermore, police record non-crime hate incidents (NCHIs), which are classified as legal speech but still remain on police records and may appear in background checks.
Widespread censorship. In the past, ISPs have been ordered to block websites associated with copyright infringement (e.g. Anna’s Archive, The Pirate Bay) or Russian government propaganda (e.g. RT). Indirect censorship occurs through the Online Safety Act, which requires removal of speech that could be illegal in the UK as well as age verification for accessing potentially ‘harmful’ content, including: sexually explicit content; content which encourages, promotes, or provides instructions for suicide, deliberate self-injury, or disordered eating or behaviours associated with an eating disorder; content which is abusive or incites hatred against people by targeting race, religion, sex, sexual orientation, disability, or gender reassignment; bullying content; violent content which encourages, promotes, or provides instructions for an act of serious violence against a person, or depicts real or realistic serious violence against a person, an animal, or a fictional creature, including the graphic depiction of a serious injury; content which encourages, promotes, or provides instructions for a challenge or stunt highly likely to result in serious injury to the person who does it or to someone else; content which encourages a person to ingest, inject, inhale, or self-administer a physically harmful substance, or a substance in a physically harmful quantity; content that shames or otherwise stigmatises body types or physical features; content that promotes or romanticizes depression, hopelessness, and despair; and filesharing websites. Many UK-based websites were forced to close due to the OSA or have blocked UK IPs.
**Yes (backdoor on demand)**. The Investigatory Powers Amendment Bill, passed in 2024, expands the powers of the UK government to demand access to encrypted communications. The Online Safety Act, particularly Clause 122, allows Ofcom to compel companies to break end-to-end encryption, enabling mass surveillance of private communications. This law has been used against Apple, forcing them to stop offering iCloud end-to-end encryption in the UK and demanding a backdoor to access encrypted iCloud data for UK users. Furthermore, since 2026, the UK’s Online Safety Act authorises Ofcom to require online platforms to deploy automated scanning systems (such as AI-driven content detection algorithms) that scan and analyse user messages, images, and videos on the client side before end-to-end encryption applies.
Not banned, but restrictions. Advertising the use of VPNs can be illegal under the Online Safety Act. The House of Lords (which, to be fair, cannot make laws on its own) proposed in 12/25 (HL Bill 135) mandatory age verification for VPNs.
Age verification & imprint obligation. The Online Safety Act 2023 requires age verification for a wide variety of ‘potentially harmful’ content
(the same categories listed above, not just sexually explicit content). As of 12/25, the UK government wants to ‘encourage’ Google and Apple to implement mandatory client-side scanning of photos and videos using AI on all smartphones and to block all nudity unless the user has verified themselves as an adult. The House of Lords (which, to be fair, cannot make laws on its own) furthermore proposed (HL Bill 135) to ban all users from social media unless they have verified their age to be 16+, similar to the situation in Australia. The Electronic Commerce (EC Directive) Regulations 2002 impose imprint obligations not just on commercial websites, but even on private websites with a small commercial element such as advertising banners.
Yes. The Regulation of Investigatory Powers Act 2000 gives UK authorities the power to compel the disclosure of encryption keys or the decryption of encrypted data. Refusal to comply can result in a maximum sentence of two years’ imprisonment, or five years in cases involving national security or child indecency.
No bans, but restrictions. However, Monero has been delisted from most CEXs for British users due to KYC and other regulations, even though it is not banned per se.
Yes (12 months). The Investigatory Powers Act 2016 requires retention of ISP metadata (such as IPs, connection logs, or browsing history) and email and telephony metadata (including mobile phone locations) for 1 year.
No. May need Google or Apple account & device. Government services such as GOV.UK One, HMRC, and NHS support browser-based login with password + OTP (via SMS or authenticator app), so an Android/iOS smartphone is not required for normal sign-in (a minimal sketch of how authenticator-app codes are generated follows at the end of this entry). However, to verify your identity or register a new company, you need to use the GOV.UK One Login Android/iOS app; alternatively, you can verify your identity in person at a post office or answer security questions online (this depends on credit-reference data held by Experian, which might not work if your credit history is sparse or you do not have a UK bank account). Hence, while not strictly mandatory, the GOV.UK One Login app is the main route for identity verification. The Android app uses Play Integrity and is only available from Google Play, and therefore requires a Google account and a phone with unmodified Android (incompatible with open source Android distributions like GrapheneOS or LineageOS, which fail Play Integrity). Furthermore, the government is planning to roll out a digital ID scheme ("Brit Card") for all citizens, which will most likely require an Android/iOS app, with yet-to-be-determined alternatives for people without a smartphone.
Fair Dealing. Permitted uses are limited to research, private study, criticism, review, news reporting, parody, caricature, pastiche, and quotation. Other uses require permission.
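The ‘password + OTP via authenticator app’ logins mentioned for Login.gov, GCKey, and the GOV.UK services above are typically time-based one-time passwords (TOTP, RFC 6238). As a rough illustration, the sketch below generates such a code with only the Python standard library; the Base32 secret is a made-up example, not tied to any real service.

```python
# Minimal TOTP (RFC 6238) sketch using only the Python standard library,
# illustrating the "authenticator app" OTP mentioned above. The Base32
# secret is a made-up example value.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # moving time factor
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HOTP inner HMAC-SHA1
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    # Prints the same 6-digit code a phone authenticator would show for this secret.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived only from a shared secret and the current time, it can be generated in a browser extension, a desktop program, or a hardware token just as well as in a smartphone app, which is why these logins are listed as platform-agnostic.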
Germany 🇪🇺 🇩🇪 Last updated: 2026.01.16
Severe limitations of speech. Illegal speech includes vaguely defined ‘hate speech’ (Penal Code §130), including "liking" a post (see LG Meiningen, Beschl. v. 05.08.2022, 6 Qs 146/22); insulting religions (Penal Code §166); Holocaust denial (Penal Code §130, §189); insults (Penal Code §185); insulting politicians (Penal Code §188), including cases where people were prosecuted (though unclear if convicted) for bagatelles like calling Robert Habeck an "imbecile", Ricarda Lang "fat", or Andy Grote a "penis"; National Socialist symbols and phrases (Penal Code §86; technically this covers the dissemination of means of propaganda of unconstitutional organizations and the use of symbols of unconstitutional organizations, but in practice it does not just cover things like a swastika flag and can even result in convictions for seemingly harmless phrases like ‘Alles für Deutschland’ (‘Everything for Germany’)); disparagement of the President or the State and its symbols (Penal Code §90); revealing someone’s biological sex or birth name (Self-Determination Act) or misgendering them (decision of Landgericht Frankfurt a.M., 18.07.2024, Az. 2-03 O 275/24); and more (e.g. German defamation laws are very strict, imposing a high burden of proof on the defendant).
Widespread censorship. In the past, ISPs have been ordered to block websites associated with copyright infringement (e.g. Anna’s Archive, The Pirate Bay), Russian government propaganda (e.g. RT), and far-right politics. The NetzDG requires social media platforms to remove illegal speech within strict timeframes and imposes fines for non-compliance; this law effectively forces social media companies to over-censor and remove even legal speech. The EU’s Digital Services Act creates an obligation for platforms to take action in the form of ‘content moderation’ against not just illegal content, but also legal but ‘harmful’ content such as ‘disinformation’ (including truthful information, as a Berlin court ruled) or ‘negative effects on civic discourse or elections’. In the future it will also require age verification from many websites, leading to further de facto censorship. In 12/2025, the EU Commission fined X €120m for spurious ‘transparency failures’ under the DSA, which has been interpreted as a punishment for not censoring enough.
**Potential backdoors, and proposed**. eIDAS Art. 45, an EU regulation, can act as a potential backdoor by obliging browsers to trust government-designated certificate authorities, which could technically allow lawful man-in-the-middle interception of HTTPS traffic. So far, no major browser has implemented Art. 45 QWAC support as envisioned, and open-source and non-EU browsers can largely ignore it. Various EU proposals aim to ban end-to-end encryption, mandate backdoors, or circumvent encryption using client-side scanning. These include the ProtectEU strategy (the strategy and its related Roadmap are at the initial policy stage, aiming to provide law enforcement with "lawful and effective" access to encrypted data; as of July 2025, no legislative bill has been passed, but the Commission’s plan has raised alarm among privacy advocates) and the **HLG Recommendations on ‘Access to Data for Effective Law Enforcement’** (the EU High Level Group’s recommendations - including weakening end-to-end encryption and regulating VPNs - are not legally binding but inform legislative proposals; no formal law has passed as of July 2025, but these recommendations continue to shape digital policy debates).
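For readers unfamiliar with the term, ‘client-side scanning’ means checking message content on the sender’s device before it is encrypted. The following is a purely conceptual Python sketch of that idea; the function names and the hash-matching ‘scanner’ are hypothetical stand-ins, not the design of Chat Control or any real product.

```python
# Conceptual sketch only: what "client-side scanning before end-to-end
# encryption applies" means in principle. All names are hypothetical.
import hashlib

FLAGGED_HASHES = {
    # Hypothetical database of hashes of known prohibited media.
    hashlib.sha256(b"example prohibited payload").hexdigest(),
}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message matches the flagged-content database."""
    return hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES

def send_message(plaintext: bytes, encrypt, report) -> bytes:
    # The scan runs on the sender's device *before* encryption, which is why
    # critics argue it circumvents end-to-end encryption even though the
    # transport itself stays encrypted.
    if client_side_scan(plaintext):
        report(plaintext)            # e.g. forwarded to the provider/authority
    return encrypt(plaintext)        # encryption happens only after the scan

if __name__ == "__main__":
    # Toy stand-ins for the encryption and reporting hooks.
    send_message(b"hello", encrypt=lambda m: m[::-1], report=print)
```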
"Chat Control 2.0", was approved by the EU Council on 2025.11.26. While earlier versions included mandatory client-side scanning to circumvent end-to-end encryption in chats, the final version made it ‘voluntary’, however companies are encouraged to do ‘voluntary’ scanning of private messages, as it would give them legal certainty. Furthermore, a planned Commission review in 3 years could lead to mandatory scanning for some providers. National authorities may force ’high-risk; services to adopt risk-mitigation tech such as client-side scanning. The EU Parliament still needs to approve Chat Control with an expected vote in H1 2026. No bans Age verification & imprint obligation §5 TMG prescribes imprint obligations not just commercial websites, but also for private websites with a small commercial element such as advertising banners. The EU’s Digital Services Act will require mandatory age verification to access ‘potentially harmful’ content online. It also requires social media platforms to supply the government with the identity of people publishing ‘harmful’ (but mostly legal) opinions; 90% of the requests received by X in 2024 came from Germany. Since 12/2025, an amendment to the Youth Protection Act (JMStV) mandates that content harmful to minors must be restricted to adults, requiring age verification. "Chat Control 2.0", approved by the EU Council on 2025.11.27 but not yet voted on by the EU Parliament, would also require age or ID verification for creating an email or messenger account. Furthermore, the EU Parliament on 2025.11.27 approved report A10-0213/2025, proposing mandatory recurring age verification (every 3 months) for social media, video platforms and AI chatbots. While this was only a non-binding resolution (i.e. it does not have direct legal force or become national law) it is still expected to significantly influence national policies and EU regulatory development. Passwords no, biometrics yes German law distinguishes between biometric data and passwords. Forcing biometric unlocks is more likely to be considered permissible because it involves physical evidence, whereas compelling a password may infringe on the right against self-incrimination. However, case law on this is limited and evolving. A 2019 case in Bavaria allowed police to use a suspect’s fingerprint to unlock a phone, though the decision was controversial and not universally binding. Unlocking a cell phone by forcibly placing a defendant’s finger on the phone’s fingerprint sensor was ruled legal in 2025 by a court (OLG Bremen ruling Ref. 1 ORs 26/24 8.1.25) and police is also allowed to take fingerprints and attempt to use them for unlocking a device later (LG Ravensburg AZ 2 Qs 9/23 jug.). Partially banned Art. 79 of the EU’s Anti-Money Laundering Regulation states that, starting in 2027, financial service providers such as banks and crypto exchanges are not allowed to handle privacy-preserving cryptocurrencies such as Monero. However, it will remain legal to hold, send and receive Monero in self-custodial wallets, and to accept Monero payments (e.g. VPN providers). No, but proposed Despite several attempts to introduce a data retention law (Vorratsdatenspeicherung) and passing parliament, it has been declared unconstitutional. There is currently no mandatory data retention in Germany. 
An EU Council paper from 12/2025 (WK 16133/2025 INIT) proposed a mandatory 1-year metadata retention (covering, for example, IP addresses and phone locations) that would apply not only to telecom operators but to nearly every major digital service, including cloud platforms, domain hosts, payment processors, and even end-to-end encrypted messengers such as WhatsApp and Signal.
Yes, must register with official ID. Cross-platform, with open source app. Some tasks requiring strong authentication require the AusweisApp, either on an Android/iOS smartphone with NFC support or on a desktop computer with a compatible USB smartcard reader. Linux is explicitly supported as a desktop OS. The AusweisApp is open source, has also been ported to FreeBSD, and is available on F-Droid. While the smartcard reader is less convenient than the mobile app and has an upfront purchase cost, it is still possible to do everything without a smartphone or a proprietary OS. The upcoming EU Digital Wallet will only be available as an app for iOS and stock Android (requiring strong Play Integrity and the Play Store), therefore making an Apple or Google account mandatory.
Narrow statutory exceptions. No general fair use; only narrow, enumerated exceptions for uses such as quotation, research, criticism, and certain educational and private uses. The list is exhaustive and exceptions are strictly interpreted.
France 🇪🇺 🇫🇷 Last updated: 2026.01.16
Restricted. Mostly relating to vaguely defined ‘hate speech’ (Gayssot Act 1990 & Law of 30 Dec 2004), Holocaust denial, as well as positive representation of drugs or incitement to their consumption (Penal Code §222-234 to §222-239).
Widespread censorship. In the past, ISPs as well as third-party DNS and VPN providers have been ordered to block websites associated with copyright infringement (e.g. The Pirate Bay), Russian government propaganda (e.g. RT), and far-right politics. The EU’s Digital Services Act creates an obligation for platforms to take action in the form of ‘content moderation’ against not just illegal content, but also legal but ‘harmful’ content such as ‘disinformation’ or ‘negative effects on civic discourse or elections’; this effectively forces social media companies to over-censor and remove even legal speech. The DSA also requires age verification from many websites, leading to further de facto censorship. There is strong government pressure on social media companies to censor: for example, Rumble was forced to block French IPs due to censorship demands (until an opposing court ruling in Oct 2025), the CEO of Telegram (Pavel Durov) was arrested in 2024 with prosecutors alleging that censorship on Telegram was insufficient, and a French prosecutor classified X as an ‘organised crime group’ in 2025 for not censoring enough. In 12/2025, the EU Commission fined X €120m for spurious ‘transparency failures’ under the DSA, which has been interpreted as a punishment for not censoring enough.
**Potential backdoors, and proposed**. eIDAS Art. 45, an EU regulation, can act as a potential backdoor by obliging browsers to trust government-designated certificate authorities, which could technically allow lawful man-in-the-middle interception of HTTPS traffic. So far, no major browser has implemented Art. 45 QWAC support as envisioned, and open-source and non-EU browsers can largely ignore it.
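To make the eIDAS Art. 45 concern concrete: TLS clients accept any certificate chain that ends in a root present in their trust store, so a mandated trust anchor would make certificates issued under it validate like any other. The sketch below (Python, with a hypothetical extra_root.pem file name and example.org as a placeholder host) only illustrates that trust-store mechanic, not any actual eIDAS implementation.

```python
# Illustration of trust-store mechanics: Python's ssl module, like a browser,
# accepts any certificate chain that terminates in a root present in the
# configured trust store. Adding a government-designated root would therefore
# make certificates issued under that root validate as well.
import os
import socket
import ssl

ctx = ssl.create_default_context()                     # default (system) roots
if os.path.exists("extra_root.pem"):                   # hypothetical extra trust anchor
    ctx.load_verify_locations(cafile="extra_root.pem")

with socket.create_connection(("example.org", 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        # Verification succeeds if the presented chain ends in *any* trusted
        # root, including one loaded above.
        print(tls.getpeercert()["subject"])
```

This is also why, as noted above, the provision only matters in practice if browsers actually ship or load such a root; clients that ignore it keep their existing trust stores.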
Various EU proposals aim to ban end-to-end encryption, mandate backdoors, or circumvent encryption using client-side scanning. These include the ProtectEU strategy (the strategy and its related Roadmap are at the initial policy stage, aiming to provide law enforcement with "lawful and effective" access to encrypted data; as of July 2025, no legislative bill has been passed, but the Commission’s plan has raised alarm among privacy advocates) and the **HLG Recommendations on ‘Access to Data for Effective Law Enforcement’** (the EU High Level Group’s recommendations - including weakening end-to-end encryption and regulating VPNs - are not legally binding but inform legislative proposals; no formal law has passed as of July 2025, but these recommendations continue to shape digital policy debates). "Chat Control 2.0" was approved by the EU Council on 2025.11.26. While earlier versions included mandatory client-side scanning to circumvent end-to-end encryption in chats, the final version made it ‘voluntary’; however, companies are encouraged to do ‘voluntary’ scanning of private messages, as it would give them legal certainty. Furthermore, a planned Commission review in 3 years could lead to mandatory scanning for some providers. National authorities may force ‘high-risk’ services to adopt risk-mitigation technology such as client-side scanning. The EU Parliament still needs to approve Chat Control, with a vote expected in H1 2026.
Not banned, but restrictions. In May 2025, a Paris court ordered several VPN providers to block access to hundreds of domains. The court classified the VPN providers as ‘technical intermediaries’, with the expectation that they have to monitor and restrict user access to banned content.
Age verification & imprint obligation. The Loi pour la confiance dans l’économie numérique prescribes imprint obligations not just for commercial websites, but also for private websites with a small commercial element such as advertising banners. The EU’s Digital Services Act will require mandatory age verification to access ‘potentially harmful’ content online, and France is trialling the implementation. Since 2025 (SREN Law), France requires age verification for accessing pornographic websites, and it is likely that this will expand to other websites with content deemed inappropriate for children. A proposed law would ban under-15s from using social media from 09/26 onwards, requiring identity checks for all users. "Chat Control 2.0", approved by the EU Council on 2025.11.27 but not yet voted on by the EU Parliament, would also require age or ID verification for creating an email or messenger account. Furthermore, the EU Parliament on 2025.11.27 approved report A10-0213/2025, proposing mandatory recurring age verification (every 3 months) for social media, video platforms, and AI chatbots. While this was only a non-binding resolution (i.e. it does not have direct legal force or become national law), it is still expected to significantly influence national policies and EU regulatory development.
Yes. Article 30 of Law No. 2001-1062 of 15 Nov 2001 allows a judge or prosecutor to compel any qualified person to decrypt or surrender keys to make available any information encountered in the course of an investigation. Failure to comply with such a request can result in penalties, including three years of jail time and a fine of €45,000; if compliance would have prevented or mitigated a crime, the penalty increases to five years of jail time and €75,000.
Partially banned. Art. 79 of the EU’s Anti-Money Laundering Regulation states that, starting in 2027, financial service providers such as banks and crypto exchanges are not allowed to handle privacy-preserving cryptocurrencies such as Monero. However, it will remain legal to hold, send, and receive Monero in self-custodial wallets, and to accept Monero payments (e.g. VPN providers).
Yes (12 months). Mandatory retention of ISP metadata (such as IPs, connection logs, or browsing history) and email and telephony metadata (including mobile phone locations) for 1 year. An EU Council paper from 12/2025 (WK 16133/2025 INIT) proposed a mandatory 1-year metadata retention (covering, for example, IP addresses and phone locations) that would apply not only to telecom operators but to nearly every major digital service, including cloud platforms, domain hosts, payment processors, and even end-to-end encrypted messengers such as WhatsApp and Signal.
Yes, must register with official ID. Limited support, iOS/Android/AOSP required. For certain government tasks requiring strong authentication (e.g. tax filings, e-signatures), you need a certified FranceConnect+ app for Android/iOS such as France Identité or L’Identité Numérique La Poste. For now, these apps seem to work on non-stock Android systems such as LineageOS or GrapheneOS, but they require Play Services / microG and are only available on the Play Store, which requires a Google account (a possible workaround is using Aurora Store to download the apps from the Play Store without a Google account, but this is not officially supported). The upcoming EU Digital Wallet will only be available as an app for iOS and stock Android (requiring strong Play Integrity and the Play Store), therefore making an Apple or Google account mandatory.
Narrow statutory exceptions. Uses are only allowed if they fit an exhaustive list of exceptions (quotation, press review, private copy, educational use). There is no general fair use doctrine; exceptions are narrowly interpreted.
Italy 🇪🇺 🇮🇹 Last updated: 2026.01.21
Restricted. Illegal speech includes vaguely defined ‘hate speech’ (Penal Code §604), Holocaust denial (Law 16 June 2016 n. 115), insulting religions (Penal Code §403), speech that is offensive to public morality (Penal Code §21; laws prohibiting publications or performances offensive to public morality ("buon costume") do exist, but enforcement and prosecution for such offenses appear to be rare and not a high priority in practice), and insulting the President (Penal Code §278).
Widespread censorship. In the past, ISPs as well as third-party DNS and VPN providers have been ordered to block websites associated with copyright infringement (e.g. Anna’s Archive, The Pirate Bay), Russian government propaganda (e.g. RT), and adult content. The ‘Piracy Shield’ censorship framework targets piracy and sports streaming websites, but has also affected many innocent websites such as Google Drive. Italy fined Cloudflare for not blocking worldwide access to piracy websites via their DNS resolver 1.1.1.1. archive.today/archive.is is DNS-blocked in Italy for copyright reasons (i.e. showing paywalled content); a quick way to observe such DNS-level blocking is sketched at the end of this entry. The EU’s Digital Services Act creates an obligation for platforms to take action in the form of ‘content moderation’ against not just illegal content, but also legal but ‘harmful’ content such as ‘disinformation’ or ‘negative effects on civic discourse or elections’. The DSA also requires age verification from many websites, leading to further de facto censorship.
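As a rough way to see resolver-level blocking of the kind described above, one can ask different public resolvers for the same name and compare the answers. The sketch below assumes the third-party dnspython package; the domain and resolver addresses are just examples, and blocked names may also return NXDOMAIN or a block-page IP depending on how the resolver implements the order.

```python
# Compare A-record answers for one name across two public resolvers to spot
# resolver-level blocking. Requires dnspython (pip install dnspython).
import dns.resolver

def lookup(name: str, nameserver: str) -> list[str]:
    """Query a single nameserver directly and return sorted A-record addresses."""
    r = dns.resolver.Resolver(configure=False)   # ignore the system resolver config
    r.nameservers = [nameserver]
    try:
        return sorted(rr.address for rr in r.resolve(name, "A"))
    except Exception as exc:                      # NXDOMAIN, timeout, refusal, ...
        return [f"<no answer: {type(exc).__name__}>"]

if __name__ == "__main__":
    name = "archive.today"                        # reported above as DNS-blocked in Italy
    for ns in ("1.1.1.1", "9.9.9.9"):             # Cloudflare and Quad9, as examples
        print(ns, lookup(name, ns))
```

Diverging answers between resolvers (or an answer only from one of them) are the observable symptom of the DNS-level orders described in this entry.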
In 12/2025, the EU Commission fined X €120m for spurious ‘transparency failures’ under the DSA, which has been interpreted as a punishment for not censoring enough.
**Potential backdoors, and proposed**. eIDAS Art. 45, an EU regulation, can act as a potential backdoor by obliging browsers to trust government-designated certificate authorities, which could technically allow lawful man-in-the-middle interception of HTTPS traffic. So far, no major browser has implemented Art. 45 QWAC support as envisioned, and open-source and non-EU browsers can largely ignore it. Various EU proposals aim to ban end-to-end encryption, mandate backdoors, or circumvent encryption using client-side scanning. These include the ProtectEU strategy (the strategy and its related Roadmap are at the initial policy stage, aiming to provide law enforcement with "lawful and effective" access to encrypted data; as of July 2025, no legislative bill has been passed, but the Commission’s plan has raised alarm among privacy advocates) and the **HLG Recommendations on ‘Access to Data for Effective Law Enforcement’** (the EU High Level Group’s recommendations - including weakening end-to-end encryption and regulating VPNs - are not legally binding but inform legislative proposals; no formal law has passed as of July 2025, but these recommendations continue to shape digital policy debates). "Chat Control 2.0" was approved by the EU Council on 2025.11.26. While earlier versions included mandatory client-side scanning to circumvent end-to-end encryption in chats, the final version made it ‘voluntary’; however, companies are encouraged to do ‘voluntary’ scanning of private messages, as it would give them legal certainty.