
SOC 2 PI1.3: Processing Integrity - System Inputs are Complete, Accurate, and Timely

What This Control Requires

The entity implements policies and procedures over system inputs, including controls over completeness and accuracy, to result in products, services, and reporting to meet the entity's objectives.

In Plain Language

Garbage in, garbage out. No amount of processing logic can fix bad input data, which is why this control zeroes in on validating data before it enters your pipeline. If incomplete or inaccurate data gets past the front door, every downstream system inherits the problem.

Every data entry point needs validation: user interfaces, API endpoints, file uploads, and system integrations. Each one should check format, completeness, accuracy, and authorisation. Invalid inputs get rejected with clear error messages, and validation failures get logged so you can spot patterns.

Auditors examine input validation across all your data channels, check that validation rules match your processing specifications from PI1.1, test how the system handles deliberately invalid inputs, and review your validation failure logs to see whether you are acting on what they reveal.

How to Implement

Implement input validation at every data entry point. For web forms and user interfaces, use client-side validation for user experience and server-side validation for actual enforcement. Check data types, formats, ranges, required fields, and business rules. Never rely on client-side validation alone - it can be bypassed trivially.
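A server-side check along these lines is what auditors want to see behind any client-side validation. The sketch below is a minimal illustration, not a prescribed implementation: the field names, email pattern, age range, and country allow-list are all hypothetical examples of type, format, range, required-field, and business-rule checks.

```python
import re

def validate_signup(form: dict) -> list[str]:
    """Server-side validation: collect every failure, never trust the client."""
    errors = []
    # Required fields must be present and non-empty.
    for field in ("email", "age", "country"):
        if not str(form.get(field, "")).strip():
            errors.append(f"{field}: required")
    # Format check: a deliberately simple email pattern for illustration.
    email = form.get("email", "")
    if email and not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email: invalid format")
    # Type and range check.
    try:
        age = int(form.get("age", ""))
        if not 13 <= age <= 120:
            errors.append("age: out of range 13-120")
    except ValueError:
        errors.append("age: must be an integer")
    # Business rule: hypothetical allow-list of serviced countries.
    if form.get("country") and form.get("country") not in {"GB", "US", "DE"}:
        errors.append("country: not serviced")
    return errors
```

Returning every failure at once, rather than stopping at the first, gives users a complete correction list and gives your logs richer data quality signals.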

For APIs, enforce strict schema validation. Define expected input formats using OpenAPI/Swagger or equivalent. Validate all incoming data against the schema before any processing begins. Add rate limiting and input size restrictions to prevent abuse. Log all validation failures with enough detail to troubleshoot.
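In production you would typically let an OpenAPI/JSON Schema validator do this work; the stdlib-only sketch below just shows the principle of rejecting a payload before processing. The schema format and the `ORDER_SCHEMA` example are invented for illustration.

```python
def validate_against_schema(payload: dict, schema: dict) -> list[str]:
    """Check a JSON payload against a minimal schema: required keys, types, sizes."""
    errors = []
    for key, spec in schema.items():
        if key not in payload:
            if spec.get("required", False):
                errors.append(f"{key}: missing")
            continue
        value = payload[key]
        if not isinstance(value, spec["type"]):
            errors.append(f"{key}: expected {spec['type'].__name__}")
            continue
        # Input size restriction, mirroring the abuse-prevention advice above.
        if "max_len" in spec and len(value) > spec["max_len"]:
            errors.append(f"{key}: exceeds {spec['max_len']} chars")
    # Reject unknown keys so callers cannot smuggle extra data past the contract.
    for key in payload:
        if key not in schema:
            errors.append(f"{key}: not in schema")
    return errors

# Hypothetical schema for an order-intake endpoint.
ORDER_SCHEMA = {
    "order_id": {"type": str, "required": True, "max_len": 36},
    "quantity": {"type": int, "required": True},
}
```

Validation runs before any business logic touches the payload; a non-empty error list means the request is rejected and the failures are logged.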

For batch file inputs, validate headers and trailers (record counts, control totals, file identifiers), validate individual records (field formats, required data, business rules), detect duplicates, and verify completeness (did the expected file arrive with the expected number of records?). Reject entire batches when critical validation fails and produce clear error reports.
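The header/trailer reconciliation described above can be sketched as follows. The record layout (`H|file_id`, `D|record_id|amount`, `T|count|total`) is an assumed example format; real batch files will have their own specifications.

```python
def validate_batch(lines: list[str]) -> list[str]:
    """Reconcile a batch file's trailer against its detail records.

    Assumed layout: header 'H|file_id', details 'D|record_id|amount',
    trailer 'T|record_count|control_total'.
    """
    errors = []
    if not lines or not lines[0].startswith("H|"):
        return ["missing or malformed header"]
    if not lines[-1].startswith("T|"):
        return ["missing or malformed trailer"]
    details = [line for line in lines[1:-1] if line.startswith("D|")]
    _, declared_count, declared_total = lines[-1].split("|")
    actual_total = sum(float(line.split("|")[2]) for line in details)
    # Completeness: did we receive the number of records the sender declared?
    if int(declared_count) != len(details):
        errors.append(
            f"record count mismatch: trailer says {declared_count}, found {len(details)}"
        )
    # Accuracy: does the control total match the sum of detail amounts?
    if abs(float(declared_total) - actual_total) > 0.005:
        errors.append("control total mismatch")
    return errors
```

Any non-empty result would reject the whole batch and feed the error report, per the policy above.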

For system-to-system integrations, define data contracts specifying the expected format and content. Validate incoming data against the contract before accepting it. Implement acknowledgment mechanisms confirming successful receipt and validation. Monitor integration health and alert on failures.
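A receive-validate-acknowledge flow might look like the sketch below. The JSON contract and digest-based acknowledgment are illustrative assumptions, not a prescribed protocol.

```python
import hashlib
import json

def receive_message(raw: bytes, expected_fields: set[str]) -> dict:
    """Validate an inbound integration message against a simple data contract,
    then build an acknowledgment the sender can reconcile against."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return {"status": "rejected", "reason": "malformed JSON"}
    missing = expected_fields - payload.keys()
    if missing:
        return {"status": "rejected", "reason": f"missing fields: {sorted(missing)}"}
    # The ack carries a digest of the accepted bytes so the sending system
    # can confirm exactly what was received and validated.
    return {"status": "accepted", "digest": hashlib.sha256(raw).hexdigest()}
```

Rejections here are the events your integration-health monitoring should alert on.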

Build an input error management process. Generate clear error messages that explain what went wrong without leaking sensitive system details. Give users or sending systems a way to correct and resubmit. Track error rates by source and type to catch systemic quality problems early.
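Tracking error rates by source and type can be as simple as the counter sketch below; the class name and channel labels are hypothetical.

```python
from collections import Counter

class ValidationErrorTracker:
    """Count validation failures by (source, error type) to surface systemic
    data quality problems in a particular channel."""

    def __init__(self):
        self.failures = Counter()   # (source, error_type) -> failure count
        self.submissions = Counter()  # source -> total inputs seen

    def record(self, source: str, error_type=None):
        """Call once per input; pass error_type only when validation failed."""
        self.submissions[source] += 1
        if error_type:
            self.failures[(source, error_type)] += 1

    def error_rate(self, source: str) -> float:
        failed = sum(n for (s, _), n in self.failures.items() if s == source)
        total = self.submissions[source]
        return failed / total if total else 0.0
```

A rising `error_rate` for one source is the early signal of a systemic quality problem the paragraph above describes.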

Document all your validation rules for each input channel, including why each rule exists. Review and update them when processing specifications change, new data sources come online, or error analysis shows gaps in your coverage.

Evidence Your Auditor Will Request

  • Input validation rule documentation for all data entry channels (UI, API, batch, integrations)
  • Server-side validation configurations demonstrating enforcement of data quality rules
  • API schema definitions and validation configurations
  • Batch file validation procedures including header/trailer checks and completeness verification
  • Input error logs and reports showing detection rates and resolution of validation failures

Common Mistakes

  • Input validation relies solely on client-side checks that can be bypassed by users or attackers
  • API endpoints accept and process data without schema validation or business rule enforcement
  • Batch file processing does not verify completeness, allowing partial files to be processed
  • Validation rules are not aligned with current processing specifications, allowing invalid data through
  • Input validation failures are not logged, preventing analysis of data quality trends

Related Controls Across Frameworks

Framework   Control ID   Relationship
ISO 27001   A.8.11       Related
ISO 27001   A.8.28       Related
NIST CSF    PR.DS-08     Partial overlap

Frequently Asked Questions

What is the difference between input validation and input sanitisation?
Validation checks whether data meets expected formats and business rules - if it does not, it gets rejected. Sanitisation modifies input to strip potentially harmful content (SQL injection, XSS payloads) while keeping the useful data. You need both. Validation protects processing integrity, sanitisation protects security. They solve different problems and neither substitutes for the other.
How do we handle partial batch file submissions?
Set a clear policy and stick to it. Your options are: reject the whole batch and require resubmission, process what you have and flag the batch as incomplete, or hold the partial batch for a set period waiting for the rest. The right choice depends on your business requirements and risk tolerance. Whatever you choose, log every partial submission and alert the operations team. Silent partial processing is a recipe for data integrity problems.
Should we validate data from trusted internal systems?
Yes - always. Internal systems have bugs, configuration drift, and schema changes that can produce unexpected data. Trusting internal inputs without validation is how integration issues go unnoticed until they have corrupted data across multiple systems. The validation can be lighter-touch than what you apply to external inputs, but it should still exist. Think of it as a safety net, not a sign of distrust.

Track SOC 2 compliance in one place

AuditFront helps you manage every SOC 2 control, collect evidence, and stay audit-ready.

Start Free Assessment