The General Data Protection Regulation (GDPR) introduced a new level of protection for individuals in the digital age, particularly in response to the growing use of algorithms, artificial intelligence, and automated systems. One of the most debated and misunderstood provisions is GDPR Article 22, which governs automated decision-making, including profiling.
Article 22 addresses a crucial concern: what happens when decisions that significantly affect people are made by machines rather than humans? Credit approvals, insurance pricing, recruitment filtering, targeted advertising, fraud detection, and even healthcare triage are increasingly driven by automated systems. While these technologies offer efficiency and scale, they also introduce risks of bias, opacity, and loss of human oversight.
What Is GDPR Article 22?
GDPR Article 22 gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects or similarly significant effects on them.
In simple terms, Article 22 aims to prevent situations where:
- A computer makes an important decision about a person
- No human is meaningfully involved
- The decision has serious consequences for the individual
The article reflects a fundamental GDPR principle: people should not be reduced to data points processed without accountability.
The Exact Scope of Article 22
Article 22 does not ban all automated decision-making. Instead, it applies only when all three conditions are met:
1. The decision is fully automated
There is no meaningful human involvement in the decision-making process. A superficial or rubber-stamp review does not count as human involvement.
2. The decision has legal or similarly significant effects
Examples include:
- Approval or rejection of a loan
- Hiring or firing decisions
- Insurance eligibility or pricing
- Access to education, housing, or healthcare
- Credit limits or account suspensions
3. The decision is about an identifiable individual
Article 22 protects identifiable natural persons; decisions drawn only from anonymous or aggregated data fall outside its scope.
If any one of these elements is missing, Article 22 may not apply, though other GDPR provisions still might.
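To make the three-part test concrete, here is a minimal Python sketch of how a compliance team might encode it when mapping their decision systems. The `DecisionRecord` fields and the function name are hypothetical illustrations, not GDPR terminology; the point is simply that all three flags must be true before Article 22 is engaged.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Describes one automated decision for compliance review (hypothetical fields)."""
    solely_automated: bool            # no meaningful human involvement before the decision takes effect
    significant_effect: bool          # legal or similarly significant effect (loan, job, insurance, ...)
    about_identifiable_person: bool   # concerns an identifiable natural person, not aggregate data

def article_22_applies(decision: DecisionRecord) -> bool:
    """Article 22 is engaged only when all three conditions hold."""
    return (
        decision.solely_automated
        and decision.significant_effect
        and decision.about_identifiable_person
    )

# Example: a fully automated loan rejection concerning a named applicant
loan_rejection = DecisionRecord(True, True, True)
print(article_22_applies(loan_rejection))  # True -> the Article 22(3) safeguards are needed
```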
What Is “Profiling” Under GDPR?
Profiling is defined in Article 4(4) of the GDPR as:
“Any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person.”
This includes analyzing or predicting:
- Work performance
- Economic situation
- Health
- Personal preferences
- Interests
- Reliability
- Behavior
- Location or movements
Profiling becomes relevant under Article 22 only when it leads to a solely automated decision with significant effects.
The Core Right Granted by Article 22
Article 22(1) establishes the main rule:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them.
This is often called the “right to human decision-making”, although the GDPR does not use that exact phrase.
Exceptions: When Automated Decisions Are Allowed
Article 22(2) provides three exceptions where automated decision-making may be lawful:
1. Necessary for entering into or performing a contract
For example:
- Automated credit scoring required to issue a loan
- Real-time fraud detection to process payments
2. Authorized by Union or Member State law
This covers laws that themselves lay down suitable safeguards, for example in areas such as tax administration, social security, or fraud and tax-evasion monitoring.
3. Based on the data subject’s explicit consent
Consent must be:
- Freely given
- Specific
- Informed
- Unambiguous
- Explicit (Article 22(2)(c) requires explicit consent in every case, not only when special category data is involved)
Even when an exception applies, additional safeguards are mandatory.
Mandatory Safeguards Under Article 22(3)
When automated decision-making is allowed under an exception, organizations must implement safeguards to protect individuals’ rights and freedoms. These include:
- The right to obtain human intervention
- The right to express one’s point of view
- The right to contest the decision
This means individuals must have a real opportunity to challenge and understand decisions made by algorithms.
Transparency Obligations Linked to Article 22
Although Article 22 itself does not explicitly require explanations, Articles 13–15 of the GDPR fill this gap.
Organizations must inform individuals about:
- The existence of automated decision-making
- The logic involved (at a meaningful level)
- The significance and consequences of the processing
This is often referred to as the “right to explanation,” though legally it is more accurately described as a right to meaningful information.
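In practice, "meaningful information about the logic involved" is usually delivered as plain-language reason statements rather than model internals. The sketch below is one illustrative way to turn a scoring model's main factors into such statements; the factor names, values, and function name are invented for illustration only.

```python
def explain_decision(factor_contributions: dict[str, float], top_n: int = 3) -> list[str]:
    """Return plain-language statements for the factors that most influenced a score.

    factor_contributions maps a factor name to its signed contribution to the
    final score; the names and numbers here are hypothetical.
    """
    # Rank by absolute influence so the most decisive factors come first.
    ranked = sorted(factor_contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    statements = []
    for name, contribution in ranked[:top_n]:
        direction = "lowered" if contribution < 0 else "raised"
        statements.append(f"Your {name} {direction} the outcome of the assessment.")
    return statements

# Hypothetical contributions from a credit-scoring model
print(explain_decision({
    "payment history": -0.42,
    "length of employment": 0.10,
    "existing debt level": -0.31,
}))
```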
Article 22 and Special Category Data
Article 22(4) places stricter limits on decisions involving special category data, such as:
- Health data
- Biometric data
- Genetic data
- Racial or ethnic origin
- Religious beliefs
Automated decisions based on such data are generally prohibited, unless:
- Explicit consent is obtained, or
- Substantial public interest applies under law
Even then, strong safeguards are required.
Relationship Between Article 22 and AI Systems
Article 22 is particularly relevant in the age of:
- Artificial intelligence
- Machine learning
- Predictive analytics
- Algorithmic scoring systems
Importantly:
- The GDPR is technology-neutral
- It applies regardless of whether the system is “AI” or a simple rules engine
- What matters is effect, not sophistication
As AI systems become more complex, compliance with Article 22 becomes both more important and more challenging.
5 Practical Examples of GDPR Article 22 in Action
Example 1: Automated Loan Rejection by a Bank
A bank uses an automated credit scoring system to assess loan applications. The system automatically rejects applicants below a certain score without any human review.
Why Article 22 applies:
- The decision is fully automated
- It has legal and financial consequences
- It affects an individual
GDPR requirements:
- The applicant must be informed about the automated decision
- The applicant must be able to request human review
- The applicant must be able to challenge the decision
- The bank must explain the main factors behind the rejection
If no human review is available, the bank is likely violating Article 22.
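One way a bank could build the required safeguard into its workflow is to route automated rejections into a human review queue rather than letting them become final on their own. The sketch below, with an invented threshold and field names, shows the idea: the system may recommend an outcome, but a rejection only becomes final after a credit officer confirms it.

```python
from dataclasses import dataclass

REJECT_THRESHOLD = 620  # hypothetical cut-off score

@dataclass
class LoanDecision:
    applicant_id: str
    score: int
    recommendation: str            # what the automated system proposes
    final: bool = False            # only True after meaningful human confirmation
    reviewer: str | None = None

human_review_queue: list[LoanDecision] = []

def assess_application(applicant_id: str, score: int) -> LoanDecision:
    """Automated scoring proposes an outcome; rejections are escalated, not finalised."""
    if score >= REJECT_THRESHOLD:
        # Favourable outcome auto-finalised here for brevity; the sketch focuses on adverse decisions.
        return LoanDecision(applicant_id, score, "approve", final=True)
    decision = LoanDecision(applicant_id, score, "reject")
    human_review_queue.append(decision)   # a credit officer must confirm or overturn
    return decision

def confirm_rejection(decision: LoanDecision, reviewer: str, upheld: bool) -> None:
    """Record a meaningful human review of a proposed rejection."""
    decision.reviewer = reviewer
    decision.recommendation = "reject" if upheld else "approve"
    decision.final = True
```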
Example 2: Automated CV Screening in Recruitment
A company uses AI software to automatically screen CVs and reject candidates based on predefined criteria, with no recruiter reviewing rejected applications.
Why Article 22 applies:
- Hiring decisions significantly affect individuals
- The rejection is based solely on automated processing
Compliance obligations:
- Candidates must be informed that automated screening is used
- Human intervention must be available upon request
- Candidates must be able to contest the decision
A system that automatically filters candidates but includes a meaningful recruiter review before final rejection may fall outside Article 22.
Example 3: Insurance Premiums Adjusted by Algorithm
An insurance company uses behavioral data (driving habits, app usage, purchase history) to automatically adjust insurance premiums in real time.
Why Article 22 may apply:
- Pricing decisions can have significant economic effects
- If premium changes are applied automatically, with no human involvement, and materially affect what customers pay, Article 22 is likely triggered
Key considerations:
- Is the decision solely automated?
- Are individuals informed?
- Can customers challenge premium changes?
- Is human intervention available?
Failure to provide these safeguards can lead to GDPR violations.
Example 4: Automated Account Suspension for Fraud Detection
An online platform uses automated systems to detect suspicious activity and immediately suspend user accounts without human verification.
Why Article 22 may apply:
- Account suspension can significantly affect users
- Decisions are fully automated
Possible justification:
- The decision may be necessary for performing the contract, or authorized by Union or Member State law for fraud-prevention purposes
Still required:
- Post-decision human review
- A clear appeal mechanism
- Transparency about why the account was flagged
Even justified automation must respect Article 22 safeguards.
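For fraud detection, the safeguard usually sits after the automated action: the suspension may take effect instantly, but it must be followed by prompt human review and a visible appeal route. A minimal sketch of that pattern, using invented names and fields, might look like this.

```python
import datetime
from dataclasses import dataclass

@dataclass
class Suspension:
    user_id: str
    reason: str                      # why the account was flagged (shown to the user)
    suspended_at: datetime.datetime
    reviewed_by: str | None = None   # filled in by a human analyst after the fact
    upheld: bool | None = None

pending_review: list[Suspension] = []

def suspend_account(user_id: str, reason: str) -> Suspension:
    """Automated, immediate suspension; queued for mandatory human follow-up."""
    suspension = Suspension(user_id, reason, datetime.datetime.now(datetime.timezone.utc))
    pending_review.append(suspension)
    return suspension

def review_suspension(suspension: Suspension, analyst: str, upheld: bool) -> None:
    """A human analyst confirms or lifts the suspension; the outcome is recorded for appeals."""
    suspension.reviewed_by = analyst
    suspension.upheld = upheld
```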
Example 5: Automated Eligibility Decisions in Healthcare Services
A healthcare provider uses an algorithm to automatically determine eligibility for certain treatments or services.
Why Article 22 strongly applies:
- Health decisions have serious consequences
- Special category data is involved
Strict requirements:
- Explicit consent or legal authorization
- Enhanced safeguards
- Human oversight is essential
Purely automated medical decision-making without human involvement is highly likely to violate the GDPR.
Common Misconceptions About Article 22
“All automated decisions are illegal”
False. Only solely automated decisions with significant effects are restricted.
“Adding a human name to the process is enough”
False. Human involvement must be meaningful, not symbolic.
“AI explanations must reveal source code”
False. Organizations must provide meaningful, understandable explanations, not technical blueprints.
Penalties for Non-Compliance
Violations of Article 22 can lead to:
- Administrative fines of up to €20 million or 4% of global annual turnover, whichever is higher
- Orders to stop processing
- Mandatory system redesigns
- Reputational damage
Supervisory authorities increasingly scrutinize algorithmic decision-making systems.
Best Practices for Compliance with GDPR Article 22
Organizations should:
- Map all automated decision-making processes
- Assess whether decisions are solely automated
- Evaluate the impact on individuals
- Implement meaningful human oversight
- Provide clear notices and explanations
- Offer accessible appeal mechanisms
- Document decision logic and safeguards
Proactive compliance is far cheaper than regulatory enforcement.
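The first two best practices, mapping automated processes and assessing whether they are solely automated, can be supported by a simple internal register. The structure below is only an illustration of what such a record might capture, with hypothetical fields; it is not a prescribed GDPR format.

```python
from dataclasses import dataclass

@dataclass
class AutomatedProcessRecord:
    """One entry in an internal register of automated decision-making (illustrative fields)."""
    name: str
    purpose: str
    solely_automated: bool
    significant_effect: bool
    legal_basis: str          # e.g. "contract necessity", "explicit consent", "Member State law"
    human_intervention: str   # how a person can request review
    notice_location: str      # where individuals are told about the processing

register = [
    AutomatedProcessRecord(
        name="credit scoring",
        purpose="loan eligibility",
        solely_automated=True,
        significant_effect=True,
        legal_basis="contract necessity",
        human_intervention="credit officer review on request",
        notice_location="privacy notice, section on automated decisions",
    ),
]

# Flag the entries that need Article 22 safeguards
needs_safeguards = [r.name for r in register if r.solely_automated and r.significant_effect]
print(needs_safeguards)  # ['credit scoring']
```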
Final Thoughts: Why Article 22 Matters
GDPR Article 22 is not anti-technology. Instead, it is pro-human. It recognizes that while automation can improve efficiency, it must not come at the expense of dignity, fairness, and accountability.
As algorithms increasingly shape people’s lives, Article 22 ensures that humans remain in the loop, decisions remain challengeable, and individuals retain control over how technology affects them.