INVESTIGATIVE UX DATABASE

They designed it
to trick you.

Dark Wiki is the open investigative encyclopedia cataloguing every manipulative design pattern in digital products — with evidence, severity ratings, and the legal frameworks catching up.

23 Patterns
Catalogued
7 Deception
Categories
$1.2B+ FTC Fines
Since 2022

Most Documented Patterns

Real examples. Real harm. Real consequences.

● CRITICAL #DW-001

Confirmshaming

Guilt-tripping language on opt-out buttons to manipulate users into accepting. "No thanks, I don't want to save money."

Emotional Manipulation Very Common
Read Full Analysis →
● HIGH #DW-002

Roach Motel

Easy to sign up, impossible to cancel. Hidden cancellation flows, phone-only cancellation, excessive retention steps.

Obstruction Extremely Common
Read Full Analysis →
● CRITICAL #DW-003

Hidden Costs

Concealing fees, taxes, or surcharges until the final checkout step when users are psychologically committed to purchasing.

Sneaking Very Common
Read Full Analysis →
● HIGH #DW-004

Forced Continuity

Silently charging users after a free trial ends without clear notification. Auto-renewal with buried cancellation options.

Sneaking Common
Read Full Analysis →
● MEDIUM #DW-005

Misdirection

Drawing attention to one element to distract from another. Bright "Accept All" vs. tiny "Manage Preferences" on cookie banners.

Visual Interference Ubiquitous
Read Full Analysis →
● CRITICAL #DW-006

Privacy Zuckering

Confusing privacy settings that trick users into sharing more data than intended. Named after Facebook's repeated privacy scandals.

Data Exploitation Very Common
Read Full Analysis →
● HIGH #DW-007

Bait and Switch

Advertising one thing, then swapping it for another after the user has committed. From free-to-paid upgrades to product substitutions.

Sneaking Common
Read Full Analysis →
● HIGH #DW-008

Sneak Into Basket

Silently adding items to the user's cart during checkout — insurance, warranties, accessories they never selected.

Sneaking Common
Read Full Analysis →
● MEDIUM #DW-009

Disguised Ads

Advertisements styled to look like content, navigation, or download buttons, tricking users into clicking ads.

Visual Interference Very Common
Read Full Analysis →
● MEDIUM #DW-010

Trick Questions

Using double negatives and confusing language in forms so users accidentally opt into things they meant to decline.

Visual Interference Common
Read Full Analysis →
● HIGH #DW-011

Friend Spam

Accessing a user's contacts and sending messages on their behalf without clear consent. LinkedIn paid $13M for this.

Data Exploitation Declining
Read Full Analysis →
● HIGH #DW-012

Fake Urgency

Countdown timers that reset, fake stock warnings, and artificial time pressure to rush purchasing decisions.

Urgency / Scarcity Very Common
Read Full Analysis →
● CRITICAL #DW-013

Fake Social Proof

Fabricated reviews, inflated ratings, manufactured activity notifications. A $152B fake review economy.

Data Exploitation Extremely Common
Read Full Analysis →
● MEDIUM #DW-014

Nagging

Persistent, repeated prompts that interrupt user workflow to push upgrades, ratings, or notification permissions.

Visual Interference Ubiquitous
Read Full Analysis →
● CRITICAL #DW-015

Obstruction

Making cancellation, data deletion, or account downgrades deliberately and unreasonably difficult.

Obstruction Extremely Common
Read Full Analysis →
● MEDIUM #DW-016

Preselection

Pre-checking checkboxes, radio buttons, and toggles that favor the company. 90% of users accept defaults — companies know this.

Sneaking Very Common
Read Full Analysis →
● MEDIUM #DW-017

Price Comparison Prevention

Structuring pricing to make it impossible to compare plans on equal terms. Feature fragmentation and unit pricing tricks.

Visual Interference Common
Read Full Analysis →
● HIGH #DW-018

Dark Defaults

Settings configured from day one to maximize data collection and sharing. Fewer than 5% of users ever change defaults.

Data Exploitation Ubiquitous
Read Full Analysis →
● CRITICAL #DW-019

Hard to Cancel

The most complained-about dark pattern. Phone calls, retention gauntlets, and $1.8B in unwanted annual charges.

Obstruction Extremely Common
Read Full Analysis →
● CRITICAL #DW-020

Drip Pricing

Revealing the true cost incrementally through checkout. Ticketmaster, hotel resort fees, and the FTC's Junk Fees Rule.

Sneaking Very Common
Read Full Analysis →
● HIGH #DW-021

Gamification Pressure

Streaks, loss aversion, and loot boxes that use variable rewards to drive compulsive engagement at the expense of user wellbeing.

Forced Action Common
Read Full Analysis →
● HIGH #DW-022

Interface Interference

Asymmetric button styling, visual hierarchy manipulation, and deceptive layouts that guide users toward unwanted choices.

Visual Interference Ubiquitous
Read Full Analysis →
● CRITICAL #DW-023

Attention Depletion

Infinite scrolls, autoplay, algorithmic feeds designed to hijack attention. 200,000 human lifetimes wasted per day.

Forced Action Ubiquitous
Read Full Analysis →

Pattern Categories

Dark patterns grouped by the type of manipulation they employ.

🎭

Emotional Manipulation

Guilt, shame, urgency, and fear used to override rational decision-making.

5 patterns
🚧

Obstruction

Making the actions a company disfavors difficult — hidden cancellation flows, multi-step opt-outs.

4 patterns
🕵️

Sneaking

Hiding information, adding items to cart, or auto-enrolling without consent.

5 patterns
🪞

Visual Interference

Misdirection, pre-selection, and design tricks that guide users toward a preferred choice.

4 patterns
⏳

Urgency & Scarcity

Fake countdown timers, "only 2 left!" warnings, and artificial pressure tactics.

3 patterns
🔐

Data Exploitation

Confusing privacy controls that expose more user data than intended.

3 patterns
🔄

Forced Action

Requiring unrelated actions to complete a task — forced registration, social sharing gates.

3 patterns

Severity Index

How we rate the impact and harm of each dark pattern.

Critical 9-10

Financial harm, data exploitation, or practices illegal under EU and FTC regulations. Causes measurable user damage; companies have been fined for these patterns.

High 7-8

Significant manipulation that most users cannot detect. Regulatory scrutiny increasing. Often involves financial or privacy consequences.

Medium 5-6

Manipulative but detectable by aware users. Degrades trust and user experience. Common in e-commerce and SaaS onboarding.

Low 3-4

Mild nudging that borders on legitimate persuasion. May cross ethical lines depending on context and user vulnerability.

Advisory 1-2

Grey-area practices. Legitimate marketing techniques that could become dark patterns if applied aggressively.
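For readers who want to apply the index programmatically, the five bands above can be encoded as a simple lookup. This is an illustrative Python sketch only; the `SEVERITY_BANDS` structure and `severity_label` helper are names invented here, not part of any official Dark Wiki schema.

```python
# Illustrative encoding of the Severity Index, assuming the 1-10 scale above.
SEVERITY_BANDS = {
    "Critical": range(9, 11),  # 9-10: fined/illegal, measurable harm
    "High": range(7, 9),       # 7-8: hard to detect, rising scrutiny
    "Medium": range(5, 7),     # 5-6: detectable but trust-degrading
    "Low": range(3, 5),        # 3-4: nudging that borders on persuasion
    "Advisory": range(1, 3),   # 1-2: grey-area marketing techniques
}

def severity_label(score: int) -> str:
    """Map a 1-10 severity score to its Dark Wiki band name."""
    for label, band in SEVERITY_BANDS.items():
        if score in band:
            return label
    raise ValueError(f"score must be 1-10, got {score}")

print(severity_label(9))  # Critical
```

A pattern's numeric score, not its band name, would be the stable field to store; band boundaries can then be adjusted without rescoring the catalog.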

Need a Dark Pattern Audit?

Our UX forensics team at Garnet Grid Consulting analyzes your product (or your competitor's) for deceptive design patterns. We deliver a full severity-scored report with remediation recommendations and compliance guidance for GDPR, DSA, and FTC regulations.

  • Full-product UX forensic analysis
  • Severity-scored pattern report
  • GDPR/DSA/FTC compliance check
  • Remediation roadmap & competitor benchmarking
Book a UX Audit →

Frequently Asked Questions

What is a dark pattern?

A dark pattern is a user interface design technique that manipulates users into taking actions they didn't intend. The term was coined by UX researcher Harry Brignull in 2010. Dark patterns exploit cognitive biases and psychological vulnerabilities to benefit the company at the user's expense.

Are dark patterns illegal?

Increasingly, yes. The EU's Digital Services Act (DSA) explicitly bans dark patterns. The FTC has taken enforcement action against companies like Epic Games ($520M fine) and Amazon for dark pattern practices. California's CPRA and Colorado's CPA include dark pattern provisions. The legal landscape is rapidly evolving.

How do I identify dark patterns in my own product?

Start with a UX audit focused on consent flows, cancellation processes, pricing transparency, and opt-in/opt-out defaults. Ask: "Would a reasonable user understand what's happening?" If the answer is uncertain, it may be a dark pattern. Our professional audit service provides comprehensive analysis.
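As a rough illustration of what an automated first pass at such an audit could look like, the Python sketch below scans raw HTML for two of the patterns catalogued above: pre-checked consent checkboxes (Preselection, #DW-016) and guilt-laden opt-out labels (Confirmshaming, #DW-001). The `DarkPatternScan` class and `SHAME_WORDS` keyword list are hypothetical examples, not a Dark Wiki tool; real audits require human review of intent and context.

```python
# Minimal heuristic scan for two dark patterns in page HTML.
# Keyword list and detection rules are illustrative only.
from html.parser import HTMLParser

SHAME_WORDS = ("no thanks", "i don't want", "miss out")

class DarkPatternScan(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []
        self._in_button = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Preselection: a checkbox that arrives already checked.
        if tag == "input" and a.get("type") == "checkbox" and "checked" in a:
            self.findings.append(("preselection", a.get("name", "?")))
        if tag == "button":
            self._in_button = True

    def handle_endtag(self, tag):
        if tag == "button":
            self._in_button = False

    def handle_data(self, data):
        # Confirmshaming: guilt-laden text inside a button label.
        text = data.strip().lower()
        if self._in_button and any(w in text for w in SHAME_WORDS):
            self.findings.append(("confirmshaming", data.strip()))

def scan(html: str):
    parser = DarkPatternScan()
    parser.feed(html)
    return parser.findings

page = """
<form>
  <input type="checkbox" name="marketing_emails" checked>
  <button>No thanks, I don't want to save money</button>
</form>
"""
print(scan(page))
```

A scan like this only surfaces candidates for review; whether a pre-checked box or a button label actually manipulates users depends on context no keyword list can capture.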

Who created Dark Wiki?

Dark Wiki is an open educational resource maintained by Garnet Grid Consulting LLC, a technology consulting firm specializing in AI, UX, and digital infrastructure. Our mission is to make deceptive design practices visible and accountable.