GDPR Art. 22: Automated Individual Decision-Making, Including Profiling
What This Control Requires
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

Paragraph 1 shall not apply if the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or
(c) is based on the data subject's explicit consent.
In Plain Language
Automated systems increasingly make decisions that shape people's lives - whether they get a loan, a job interview, or a fair insurance quote. Article 22 says individuals have the right not to be subject to decisions made entirely by machines when those decisions have legal effects or significantly affect them. No human in the loop, no go.
The critical phrases are "based solely on automated processing" and "legal or similarly significant effects." If a genuine human reviews the automated output, considers the relevant factors, and exercises real judgment before making the call, Article 22 doesn't apply. But a rubber-stamp process where someone clicks "approve" on every algorithmic recommendation doesn't count as meaningful human involvement. Regulators see through that.
There are three exceptions: the automated decision is necessary for a contract, it's authorised by law with proper safeguards, or the individual gave explicit consent. Even when an exception applies, you still need to offer safeguards - at minimum, the right to obtain human intervention, express a point of view, and contest the decision. If special category data is involved, only the consent or substantial public interest exceptions are available, and you need additional safeguards on top.
How to Implement
Audit every decision-making process in your organisation to find the ones that are fully automated and produce legal or significant effects. Check credit and lending decisions, insurance underwriting, recruitment screening, fraud detection, content moderation, and personalised pricing. For each one, document the degree of automation, the nature of the effects on individuals, and whether there's meaningful human involvement.
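The scoping test described above can be sketched as a simple inventory record. This is an illustrative schema, not a standard; the field names and example processes are assumptions for demonstration:

```python
from dataclasses import dataclass

@dataclass
class DecisionProcess:
    name: str
    fully_automated: bool          # no meaningful human involvement
    legal_effect: bool             # e.g. contract refusal, benefit denial
    significant_effect: bool       # e.g. credit, employment, pricing
    human_review_documented: bool

def in_article_22_scope(p: DecisionProcess) -> bool:
    """A process is caught when it is solely automated AND produces
    legal or similarly significant effects."""
    return p.fully_automated and (p.legal_effect or p.significant_effect)

# Hypothetical inventory entries for illustration only.
inventory = [
    DecisionProcess("loan pre-approval", True, True, True, False),
    DecisionProcess("newsletter segmentation", True, False, False, False),
    DecisionProcess("CV screening with recruiter review", False, False, True, True),
]

caught = [p.name for p in inventory if in_article_22_scope(p)]
print(caught)  # only the loan pre-approval is caught
```

Note how the CV-screening process escapes scope only because its human review is genuine; if that review were a rubber stamp, `fully_automated` should be recorded as true.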
For any process caught by Article 22, work out which exception applies and document it properly. If you're relying on contract necessity, explain why automation is actually necessary for the contract. If you're relying on explicit consent, make sure it meets every Article 7 requirement. Regardless of the exception, implement the minimum safeguards: human intervention on request, the ability for individuals to express their point of view, and a route to contest the decision.
Define what meaningful human oversight actually looks like for each automated process. The reviewer needs genuine authority to override the system, access to all the relevant information, and enough expertise to exercise independent judgment. Document the review process and monitor it to make sure reviewers aren't just waving things through.
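One way to monitor for the "waving things through" problem is to track how often each reviewer actually overrides the system: a reviewer who never disagrees with the algorithm may be rubber-stamping. A minimal sketch, in which the thresholds are assumptions rather than regulatory guidance:

```python
from collections import Counter

def override_rate(decisions):
    """decisions: list of (reviewer, overrode: bool) pairs."""
    totals, overrides = Counter(), Counter()
    for reviewer, overrode in decisions:
        totals[reviewer] += 1
        overrides[reviewer] += overrode
    return {r: overrides[r] / totals[r] for r in totals}

def flag_possible_rubber_stamps(decisions, min_cases=50, min_rate=0.01):
    """Flag reviewers with enough cases to judge but a near-zero override
    rate. Thresholds are illustrative starting points, not rules."""
    rates = override_rate(decisions)
    totals = Counter(r for r, _ in decisions)
    return [r for r, rate in rates.items()
            if totals[r] >= min_cases and rate < min_rate]
```

A flag is a prompt for investigation, not proof of a problem: a low override rate can also mean the automated system is simply accurate for that reviewer's caseload.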
Be transparent about how your automated decisions work. Articles 13, 14, and 15 require you to provide meaningful information about the logic involved, the significance, and the likely consequences. You don't need to hand over your algorithms or source code, but you do need to explain which factors matter, how they influence outcomes, and what someone can do to get a different result. Write it for a normal person, not a data scientist.
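As a sketch of what "written for a normal person" can look like in practice, the snippet below turns a set of decision factors into a plain-language notice. The factors, wording, and wrapper function are invented for illustration, not a template from the regulation:

```python
# Hypothetical factor descriptions for a credit decision.
FACTORS = {
    "payment history": "on-time repayments raise your score",
    "current debt level": "high outstanding debt lowers your score",
    "length of credit history": "a longer history raises your score",
}

def explain_decision(outcome: str) -> str:
    """Build a plain-language explanation: which factors matter, how they
    influence the outcome, and what the individual can do next."""
    lines = [f"Your application was {outcome}. The main factors we consider:"]
    lines += [f"- {name}: {effect}" for name, effect in FACTORS.items()]
    lines.append("You can request human review or contest this decision.")
    return "\n".join(lines)

print(explain_decision("declined"))
```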
Set up a clear process for when people challenge automated decisions. When someone requests human intervention, wants to express their view, or contests an outcome, there needs to be a genuine review - not a cursory glance at the same automated output. Train your review staff to conduct independent assessments and communicate outcomes clearly.
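The challenge process above can be enforced as a small workflow that refuses to communicate an outcome until an independent human review has happened. The stage names and class are hypothetical, a sketch of the control rather than a prescribed design:

```python
from enum import Enum, auto

class Stage(Enum):
    RECEIVED = auto()
    VIEW_RECORDED = auto()       # data subject's point of view logged
    UNDER_HUMAN_REVIEW = auto()  # reviewer with override authority assigned
    OUTCOME_COMMUNICATED = auto()

# Each stage may only move to the next: no skipping straight to an outcome.
ALLOWED = {
    Stage.RECEIVED: {Stage.VIEW_RECORDED},
    Stage.VIEW_RECORDED: {Stage.UNDER_HUMAN_REVIEW},
    Stage.UNDER_HUMAN_REVIEW: {Stage.OUTCOME_COMMUNICATED},
    Stage.OUTCOME_COMMUNICATED: set(),
}

class Challenge:
    def __init__(self, decision_id: str):
        self.decision_id = decision_id
        self.stage = Stage.RECEIVED
        self.history = [Stage.RECEIVED]  # audit trail for the review record

    def advance(self, to: Stage) -> None:
        if to not in ALLOWED[self.stage]:
            raise ValueError(f"cannot move from {self.stage} to {to}")
        self.stage = to
        self.history.append(to)
```

Keeping the full stage history gives you the human-review records an auditor will ask for, and the transition table makes "a cursory glance at the same automated output" structurally impossible to log as a completed review.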
Evidence Your Auditor Will Request
- Inventory of solely automated decision-making processes with impact assessments
- Documentation of applicable exceptions and safeguards for each automated process
- Evidence of meaningful human oversight mechanisms with authority to override
- Privacy notices and information providing meaningful explanation of automated decision logic
- Procedure for handling data subject challenges to automated decisions, including human review records
Common Mistakes
- Automated decision-making processes operating without awareness that Article 22 applies
- Human review that is merely a rubber stamp rather than genuine, meaningful oversight
- No mechanism for data subjects to contest automated decisions or obtain human intervention
- Privacy notices failing to provide meaningful information about the logic of automated decision-making
- Using special category data in automated decisions without meeting the heightened requirements
Frequently Asked Questions
What counts as a 'legal effect' or 'similarly significantly affects'?
A legal effect changes someone's legal rights or status - for example, cancellation of a contract or denial of a statutory benefit. 'Similarly significant' effects fall short of legal consequences but still shape people's lives in a comparable way: automatic refusal of an online credit application, e-recruitment screening with no human involvement, or pricing that effectively excludes someone from a product or service.
Does using AI to assist human decision-makers trigger Article 22?
Not if the human involvement is genuine. Article 22 only covers decisions based solely on automated processing, so a decision-maker who reviews the AI output, weighs the relevant factors, and has real authority to reach a different conclusion takes the process outside Article 22. A reviewer who routinely approves whatever the system recommends does not.
What constitutes 'meaningful information about the logic involved'?
An explanation a non-specialist can follow: which factors the system considers, roughly how they influence the outcome, and what the likely consequences are for the individual. You don't have to disclose the algorithm or source code, but a generic statement like "we use advanced analytics" won't satisfy the requirement.