
Headless

Automated verification using pre‑captured document and face images

Verifies identity using static evidence collected by your own front end and submitted to IDnow via API.

In Headless mode, your application is responsible for collecting and providing identity evidence, such as images of the ID document and a selfie image of the user. IDnow then performs verification on these static inputs using AI-based analysis and data extraction modules. Unlike Capture mode, the Headless flow does not involve any user interaction within the IDnow front end; all evidence is provided directly by the customer’s system.

Because verification relies solely on static images, certain checks that require dynamic interaction or live capture cannot be performed: NFC chip reading, dynamic document validation (for example, visual inspection of holograms under movement), and liveness actions. As a result, Headless verification offers a lower level of accuracy than Capture mode and is not suitable for regulated use cases such as KYC or AML processes.

Instead, Headless mode is best suited for unregulated scenarios, such as document data extraction, image quality assessment, or pre-validation steps where full dynamic verification is not required.
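
In practice, the integration reduces to capturing the images on your side, hosting them where IDnow can fetch them (the data blocks reference S3 URLs), and submitting those references via API. The TypeScript sketch below illustrates that shape only; the endpoint URL, payload wrapper, and field names are assumptions made for illustration, not part of the documented contract.

```typescript
// Sketch of a Headless submission from the integrator's side.
// Assumptions (not documented here): the endpoint URL, the payload wrapper,
// and the field names inside each data block. The documented facts are only
// that evidence is pre-captured, referenced by S3 URLs, and submitted via API.
async function submitHeadlessEvidence(
  documentImageUrls: string[], // S3 URLs of the pre-captured document images
  selfieUrl?: string           // S3 URL of the selfie, when biometrics are used
): Promise<unknown> {
  const payload = {
    dataBlocks: [
      { type: "DocumentImages", images: documentImageUrls },
      // BiometricSamples is only included when a selfie is supplied.
      ...(selfieUrl ? [{ type: "BiometricSamples", samples: [selfieUrl] }] : []),
    ],
  };

  const response = await fetch("https://verification.example/headless", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return response.json();
}
```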


Key features

  • Customer-managed evidence collection – The customer application captures and submits document and selfie images via API.
  • Static evidence processing – Verification is performed using only static images, without live interaction or dynamic capture.
  • AI-based analysis – Automated checks assess document authenticity, extract data, and compare facial features between the document and the selfie.
  • Lightweight integration – Ideal for API-based use cases that require automated data extraction or streamlined verification flows.

Configuration

  • biometricProcessing (object) – A container for biometric verification parameters. If omitted, no biometric analysis is performed. The IDCheck.io realm configuration must match the options set here.
  • biometricProcessing.sampleType (string, enum) – Specifies the biometric sample type. Currently, only SELFIE is supported.
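
The configuration itself is small. Below is a minimal TypeScript sketch covering only the two documented parameters; the HeadlessStepConfig wrapper name is an assumption used for illustration.

```typescript
// Only the parameters listed above are documented; the wrapper type name is
// an assumption made for this sketch.
interface BiometricProcessing {
  // Currently only SELFIE is supported as a biometric sample type.
  sampleType: "SELFIE";
}

interface HeadlessStepConfig {
  // Omit biometricProcessing entirely to skip biometric analysis.
  // When present, the IDCheck.io realm configuration must match these options.
  biometricProcessing?: BiometricProcessing;
}

// Document verification plus document/selfie face comparison.
const withBiometrics: HeadlessStepConfig = {
  biometricProcessing: { sampleType: "SELFIE" },
};

// Document-only processing: no biometric analysis is performed.
const documentOnly: HeadlessStepConfig = {};
```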

Input data blocks

Input requirements for this step.

  • DocumentImages (required) – Contains the S3 URLs of the identity document images. This data block is always required.
  • BiometricSamples (conditional) – Contains the S3 URLs of the end-user's face image. Required only if the biometricProcessing option is present.
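
As a rough illustration, the two input blocks can be modelled as follows; the block names and the fact that they carry S3 URLs come from the list above, while the field names (images, samples) are assumptions.

```typescript
// Illustrative shapes for the two input data blocks. The block names and the
// use of S3 URLs are documented; the field names below are assumptions.
interface DocumentImagesBlock {
  type: "DocumentImages";
  images: string[]; // S3 URLs of the identity document images (always required)
}

interface BiometricSamplesBlock {
  type: "BiometricSamples";
  samples: string[]; // S3 URLs of the end-user's face image
}

// BiometricSamples is only needed when biometricProcessing is configured.
const inputBlocks: Array<DocumentImagesBlock | BiometricSamplesBlock> = [
  { type: "DocumentImages", images: ["s3://bucket/doc-front.jpg", "s3://bucket/doc-back.jpg"] },
  { type: "BiometricSamples", samples: ["s3://bucket/selfie.jpg"] },
];
```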

Verdicts

This step does not produce any verdicts.


Output data blocks

Data blocks produced for each processing scenario.

All scenarios produce the same set of data blocks: BasicIdentity, ExtendedIdentity, DocumentData, DocumentImages, DocumentVerification, BiometricSamples.

  • verified – Successful document processing with identity extraction
  • fraud_detected – Document identified as fraudulent
  • capture_failed – Document capture failed
  • default – Unexpected or technical error during processing
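
Since every scenario yields the same data blocks, the scenario name is what usually drives the integrator's follow-up logic. A minimal sketch, assuming the scenario is delivered as a plain string on the step result:

```typescript
// The four scenarios documented above; the result shape and handler are
// assumptions for illustration.
type HeadlessScenario = "verified" | "fraud_detected" | "capture_failed" | "default";

function handleHeadlessResult(scenario: HeadlessScenario): void {
  switch (scenario) {
    case "verified":
      // Document processed successfully and identity data extracted.
      break;
    case "fraud_detected":
      // Document identified as fraudulent: reject or route to manual review.
      break;
    case "capture_failed":
      // Document capture failed: ask the user to resubmit better images.
      break;
    case "default":
      // Unexpected or technical error during processing: retry or escalate.
      break;
  }
}
```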