
Introduction 

The Capture SDK is targeted at developers who want to use IDEMIA biometric technologies within their mobile apps.

The main features are:

  • Biometric captures
  • Biometric coding
  • Biometric authentication and identification
  • Identity documents reading

Please refer to Release Notes to see the list of improvements and fixed issues.

Prerequisites 

Skills Required 

The integration tasks should be performed by developers with knowledge of:

  • Xcode
  • Objective-C/Swift
  • iOS (min version is 15.0)
  • (optional) cocoapods

Resources Required 

Integration should be performed on a Mac.

The tools required are:

  • A version of Xcode that supports iOS 15
  • iOS device (simulator is not supported)

Licenses Required 

Depending on which variant of the library is used, the required licenses are:

  • Biometry + Document:

    • MORPHOFACS
    • VERIF
    • IDENT
    • MIMA
    • MSC_CORE
    • MSC_LIVENESS
  • Biometry:

    • MORPHOFACS
    • VERIF
    • IDENT
    • MIMA
    • MSC_CORE
    • MSC_LIVENESS
  • Document:

    • MIMA
    • MSC_CORE

Note: To enable the video dump feature you will also need:

  • MSC_DUMP

Sample Project 

The sample project is provided along with this documentation.

Getting Started 

Components 

The SDK is composed of seven components:

  • LicenseManager: Object responsible for handling the license.
  • FaceCaptureHandler: Handles the capture of the face biometrics through the camera of the device.
  • FingerCaptureHandler: Handles the capture of the finger biometrics through the camera of the device.
  • BioMatcherHandler: Handles the biometric coding and matching.
  • BioStoreDB: Repository for storing biometric templates (optional, in case you don't want to implement your own database).
  • DocumentCaptureHandler: Handles the document reading features (such as reading MRZ documents).
  • ImageUtils: Handles image format conversion, in case the integrator needs to change the image format or import an image.

SDK Variants 

The SDK comes in six different variants.

  1. Full SDK: It contains all the features of the SDK. Includes components: LicenseManager, FaceCaptureHandler, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK'.

  2. Biometry variant: It contains all biometric features including face and finger capture and also biometric coding and matching used for liveness checking. Includes components: LicenseManager, FaceCaptureHandler, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK-biometry'.

  3. Document variant: It contains only document capture features. Includes components: LicenseManager, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK-document'.

  4. Finger variant: It contains only finger capture features including biometric coding and matching used for finger liveness checking. Includes components: LicenseManager, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK-finger'.

  5. Face variant: It contains only face capture features without biometric coding and matching, so for liveness checking you need to use external server. Offline liveness checking is not available. Includes components: LicenseManager, FaceCaptureHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK-face'.

  6. Face+document variant: It contains only face capture features without biometric coding and matching, so for liveness checking you need to use external server. Offline liveness checking is not available. Additionally it contains also document capture features. Includes components: LicenseManager, FaceCaptureHandler, BioStoreDB, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile: pod 'BiometricSDK-face_document'.

Analytics 

Capture SDK offers a logging mechanism that collects analytics data about SDK usage and sends it to IDEMIA's server in the EU. This data helps IDEMIA improve Capture SDK and increase the likelihood of integrator success within the app. Using the analytics mechanism is strongly recommended.

  • Sending analytics data is enabled by default.
  • You can enable or disable sending analytics data.
  • You can choose to send analytics data only when connected to a Wi-Fi network, so as not to use your cellular connection.
  • Analytics data that IDEMIA collects contains only technical data.
  • No sensitive personal data is collected.
  • IDEMIA does not collect any images.

The analytics data that we collect includes the following information:

  • Application name, bundle id, version
  • Capture SDK and RemoteLogger libraries versions
  • Capture SDK plugins versions
  • Device model and operating system version
  • Technical information about performed face, finger, and document capture (such as: used capture mode; timestamp; reason of error; time needed to perform a capture; quality of captured image; and light condition)
  • Technical information about performed authentication and identification events (such as: used threshold, duration, and obtained score)
  • Other technical information (such as: image compression, occurred errors, and SDK performance) that does not contain personal data

You can disable analytics reporting using the appropriate SDK method.

Analytics are enabled by default, and data is sent over both Wi-Fi and cellular connections to IDEMIA's server in Europe. You can switch to the server in the US by calling:

Objective-C
[[BIORemoteLogger sharedInstance] setConfiguration:[BIORemoteLoggerConfiguration defaultUS]];

You can switch to Wi-Fi-only mode with:

Objective-C
[BIORemoteLogger sharedInstance].wifiOnly = YES;

You can disable analytics with:

Objective-C
[BIORemoteLogger sharedInstance].loggerDisabled = YES;

You should perform the above calls before using the SDK, for example, in your app delegate.
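Assuming the Objective-C API above bridges to Swift under the default naming rules (an assumption; verify against the actual headers), the same configuration might look like this sketch:

```swift
// Configure analytics before any other SDK usage, e.g. in the app delegate.
// BIORemoteLogger names below assume default Objective-C-to-Swift bridging.
let logger = BIORemoteLogger.sharedInstance()
logger.setConfiguration(BIORemoteLoggerConfiguration.defaultUS()) // optional: US server
logger.wifiOnly = true        // send analytics over Wi-Fi only
logger.loggerDisabled = false // set to true to disable analytics entirely
```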

Project Configuration 

Before using the SDK you need to add Privacy - Camera Usage Description (the NSCameraUsageDescription key) to the Info.plist of your application, as the SDK needs to use the camera.
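For reference, the corresponding Info.plist entry looks like this (the description string is only an example; write one that explains your app's camera use):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture your face, fingerprints, and identity documents.</string>
```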

Adding the Biometric Capture SDK Framework 

We serve our artifacts from Artifactory. As an integrator you can choose one of the following methods of adding the framework to your project:

  • using CocoaPods
  • manually

Use CocoaPods (along with the cocoapods-art plugin)

  1. If you don't already have the cocoapods-art plugin, install it by running the following command:

Bash
gem install cocoapods-art

  2. The plugin uses authentication as specified in a standard .netrc file:

Text
machine mi-artifactory.otlabs.fr
login ##USERNAME##
password ##PASSWORD##

  3. Once set, add our repo to your CocoaPods dependency management system:

Bash
pod repo-art add smartsdk "https://mi-artifactory.otlabs.fr/artifactory/api/pods/smartsdk-ios-local"

  4. At the top of your project's Podfile add:

Ruby
plugin 'cocoapods-art', :sources => [
  'master', # so it can resolve dependencies from the master repo (the main one); for newer CocoaPods versions (1.10+) it may no longer be needed
  'smartsdk' # so it can resolve the BiometricSDK dependency
]

  5. Add the Capture SDK to your Podfile in one of its pod variants:

Ruby
pod 'BiometricSDK' # Full version of the SDK, contains biometrics & documents features
pod 'BiometricSDK-biometry' # Contains only biometrics (finger+face) features

pod 'BiometricSDK-finger' # Contains only finger features

pod 'BiometricSDK-face' # Contains only face features without the internal liveness checking mechanism
pod 'BiometricSDK-document' # Contains only document features
pod 'BiometricSDK-face_document' # Contains only face features without the internal liveness checking mechanism & documents features

The above configuration installs iOS frameworks. For XCFrameworks, the 'XCFramework' subspec can be used for the face, face_document, and document variants. E.g.:

Ruby
pod 'BiometricSDK-face_document/XCFramework'

  6. Then run the install:

Bash
pod install

Note: If you are already using our repo and you cannot resolve some dependency, try updating the specs:

Bash
pod repo-art update smartsdk

Manually

  1. Download the artifact manually from Artifactory.

  2. In the project editor, select the target to which you want to add a library or framework.

  3. Click Build Phases at the top of the project editor.

  4. Open the Embedded Binaries section.

  5. Click the Add button (+).

  6. Click the Add Other button below the list.

  7. Add the following items:

    • BiometricSDK.framework
    • (optionally, for face capture) BiometricSDKFaceCapturePluginNormal.framework (or another face capturing plugin)
    • (optionally, for face capture) BiometricSDKAlgorithmPlugin_F6_5_LOW70.framework (or another matching algorithm plugin)
    • (optionally, for the new finger capture API) FingerCaptureSDK.framework
    • (optionally, for the new finger capture API) BiometricSDKUIFinger.framework

Note: XCFrameworks are supported with face, face_document and document variants. XCFrameworks are not yet supported with biometry, biometry_document, fingerprint variants.

Plugins 

Introduction 

Capture SDK comes in a few different variants, which provide different components and functionality, as described on the Getting Started page of this guide. In addition to the variants, plugins have been introduced to give even more flexibility. Every integrator may have different needs and size requirements, which is why we introduced the plugin mechanism. Plugins are split into two groups: feature and algorithm.

Feature Plugins

Feature plugins provide various SDK functionalities, such as face capture and document capture.

Algorithm Plugins

Algorithm plugins provide the ability to extract biometric data from images, match that data, and store it as templates.

How it Works 

To use a particular plugin, it only needs to be embedded within the application. If a configuration is not valid, an error will be raised at runtime. The most common use cases are described in the Recommended Configurations section.

When CocoaPods is used for configuration, every pod has its own default configuration, so not all elements need to be configured explicitly.

For example, pod 'BiometricSDK-face' is equivalent to setting:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70' # algorithm plugin

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework' # algorithm plugin

But when different plugins are needed, all the elements must be configured explicitly in the Podfile.

Face Capturing Plugins 

Face capturing plugins provide various SDK face capturing methods.

FaceCapturePluginLite

This plugin is meant to work along with an external server for the liveness check. The SDK won't be able to determine whether the person in front of the camera is alive; it only runs the scanning process. As a result of the scanning, scanning metadata is provided. This metadata should be sent to the server to determine liveness. Because the scanning process doesn't do any internal face matching, the plugin doesn't require an algorithm plugin. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureLite' # face capturing plugin
# No need to add an algorithm plugin, but BIOMatcherHandler won't work in that case

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureLite-XCFramework' # face capturing plugin
# No need to add an algorithm plugin, but BIOMatcherHandler won't work in that case

FaceCapturePluginNormal 

This is the default plugin, used internally by the SDK since the beginning. It's meant to be used for face scanning with the offline liveness check. To determine liveness it uses an internal face matching algorithm, which is why it requires one of the face algorithm plugins to work. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70' # it needs an algorithm to work

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework' # it needs an algorithm to work

FaceCapturePluginCr2dMatching 

This plugin extends FaceCapturePluginNormal by introducing additional face checks during the offline liveness check for the Active liveness scanning mode. The additional checks make sure that the same person performs the liveness check throughout the whole process. It requires more CPU power to work. Moreover, to determine liveness it uses an internal face matching algorithm, which is why it requires one of the face algorithm plugins to work. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureCr2dMatching' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70' # it needs an algorithm to work

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureCr2dMatching-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework' # it needs an algorithm to work

Face Matching Algorithm Plugins 

These algorithms are used for face matching. Matching can take place internally during face scanning, or during the authentication or identification processes.

Warning: The algorithms are NOT compatible with each other. Templates generated by one algorithm cannot be processed with another, i.e., it's not possible to match a template generated with F5_0_VID81 against a template generated with F5_4_LOW75. F5_0_VID81 was used as a built-in algorithm before SDK 4.22.0, so if an integrator upgrades from a version earlier than SDK 4.22.0, they should continue using F5_0_VID81. If an integrator wants to change the algorithm in their solution, all stored templates will need to be recreated with the new algorithm.

F6_5_LOW70

This is the recommended algorithm plugin for selfie-versus-selfie matching.

It's a very accurate algorithm for face matching. Templates are not compressed with this algorithm. It weighs ~7.8 MB (uncompressed 8.1 MB). It is not compatible with matching a selfie against a portrait scanned from an ID document.

Ruby
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'

XCFramework version:

Ruby
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'

F5_0_VID81

A small face matching algorithm. It can be used when size, rather than matching accuracy, is the priority, or when it's used only for face capturing. It weighs ~4.1 MB (uncompressed 4.4 MB).

Ruby
pod 'BiometricSDK-face/Algorithms/F5_0_VID81'

XCFramework version:

Ruby
pod 'BiometricSDK-face/Algorithms/F5_0_VID81-XCFramework'

F6_0_IDD80

A small face matching algorithm. It can be used when size, rather than matching accuracy, is the priority, or when it's used only for face capturing. It weighs ~3.0 MB (uncompressed 3.7 MB).

Ruby
pod 'BiometricSDK-face/Algorithms/F6_0_IDD80'

XCFramework version:

Ruby
pod 'BiometricSDK-face/Algorithms/F6_0_IDD80-XCFramework'

F5_4_LOW75

A more accurate face matching algorithm than F6_0_IDD80 and F5_0_VID81. It compresses templates to a 116 B block, so a template can be encoded even into a QR code and shared between phones. It's better for matching a face photo from a document against one from a face scanning. It weighs ~12.8 MB (uncompressed 15.5 MB).

Ruby
pod 'BiometricSDK-face/Algorithms/F5_4_LOW75'

XCFramework version:

Ruby
pod 'BiometricSDK-face/Algorithms/F5_4_LOW75-XCFramework'

Recommended Configurations 

  1. The liveness check and the face matching on an external server:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureLite'

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureLite-XCFramework'

  2. The liveness check on a device and the face matching on an external server:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70' # any algorithm plugin

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework' # any algorithm plugin

  3. The liveness check and the face matching on a device:

Ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'

XCFramework version:

Ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'

SDK Size 

SDK variant                                              SDK size
Face+Document+Fingerprint (offline liveness + matching)  48.58 MB
Face+Document+Fingerprint (offline liveness)             40.35 MB
Face+Document+Fingerprint (backend)                      36.28 MB
Face+Fingerprint (offline liveness + matching)           35.53 MB
Face+Fingerprint (offline liveness)                      27.30 MB
Face+Fingerprint (backend)                               23.23 MB
Face (offline liveness + matching)                       32.65 MB
Face (offline liveness)                                  24.42 MB
Face (backend)                                           20.35 MB
Document+Face (offline liveness + matching)              45.72 MB
Document+Face (offline liveness)                         37.49 MB
Document+Face (backend)                                  33.42 MB
Document                                                 11.11 MB
Fingerprint                                              7.70 MB

All sizes are estimated App Store download sizes on arm64 devices. Note that the size of a universal IPA file containing the SDK may differ noticeably if it is also built for other architectures and/or includes bitcode.

Sizes are totals for whole packages, which include:

  • an appropriate SDK variant
  • capture plugins
  • algorithms plugins
  • UIExtension library
  • UIExtension's additional resources like tutorials and animations

The different package variants contain:

  • offline liveness + matching: an appropriate SDK, Face Normal plugin, F5_4_LOW75 algorithm, UIExtensions, face capturing tutorials
  • offline liveness: SDK, Face Normal plugin, F6_0_IDD80 algorithm, UI
  • backend: SDK, Face Lite plugin, no algorithm, UI

License Manager 

Remember: A valid license is required before using any feature of the SDK.

To have a valid license:

  1. Obtain an instance of LicenseManager via the provideLicenseManager() method.
  2. Call the activate() method on it.

Before Starting 

Note: If you use the debug LKMS server without an SSL connection, you should add permission for arbitrary loads in the transport security section of your Info.plist file. It is highly recommended NOT to set this permission without a good reason.

XML
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

New license manager 

The license manager is the main entry point for using the SDK. You can manage licenses through LicenseManager.

Note: A valid license is required before using any feature of the SDK.

provideLicenseManager

This static method provides an instance of LicenseManager with a predefined LKMS profile. Any interaction with LicenseManager must be executed before starting a capture.

Swift
let manager = LicenseManager.provideLicenseManager(profileId: LkmsProfileId, apiKey: LkmsApiKey, serverUrl: lkmsUrl)

Activating license

This method fetches the license if it's not locally stored and activates it. Additionally, in cases where the license has expired, the function retrieves a new license. This process is crucial and must occur each time the application starts.

Callback solution:

Swift
manager.activate { (error: LicenseActivationError?) in
    if let error {
        // Failed to fetch or activate the license.
    } else {
        // License fetched and activated with success.
    }
}

Async solution:

Swift
let result: Result<Void, LicenseActivationError> = await manager.activate()
switch result {
case .success:
    // License fetched and activated with success.
    break
case .failure(let error):
    // Failed to fetch or activate the license.
    break
}

LicenseActivationError

This is the information about why the license could not be activated.

  • type (ActivationErrorType): The type of error that occurred during the license activation.
  • message (String): The reason for the activation failure.

ActivationErrorType

  • profileExpired: The profile has expired; licenses will no longer work. (Contact support.)
  • activationCountExceeded: No more licenses can be consumed. (Contact support.)
  • authenticationIssue: The credentials and/or profile are wrong.
  • connectionIssue: Connection issue. (Check the internet connection and server URL.)
  • licenseSignatureVerificationFailed: Verification of the license signature failed. This error is mostly thrown when the integrator switches between development and App Store builds.
  • unknown: Unknown issue.
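As a sketch (assuming the error types bridge to Swift with the case names listed above), activation failures might be handled like this:

```swift
func handle(_ error: LicenseActivationError) {
    // Names follow the LicenseActivationError / ActivationErrorType tables above.
    switch error.type {
    case .connectionIssue:
        // Transient: check connectivity and the server URL, then retry activation.
        print("Retry later: \(error.message)")
    case .licenseSignatureVerificationFailed:
        // Typically caused by switching between debug and App Store builds.
        print("Reinstall the app to reset the stored license: \(error.message)")
    case .profileExpired, .activationCountExceeded:
        // Not recoverable on the device; contact support.
        print("Contact support: \(error.message)")
    case .authenticationIssue:
        print("Check LKMS credentials and profile: \(error.message)")
    case .unknown:
        print("Unknown activation error: \(error.message)")
    }
}
```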

Development issues 

The LKMS license on your device may become invalidated when switching between debug and App Store (including TestFlight) builds of your app. To get a valid license again, uninstall the app first and then install the debug / App Store build. There is no need to uninstall the app when updating from the same source (debug build replaced with debug build, or App Store build replaced with App Store build). The SDK has a dedicated error for this case: licenseSignatureVerificationFailed.

NOTE: As of version 4.45.0, this issue no longer occurs.

New face capture API 

Introduction 

To make integration of the SDK easier and more intuitive, a new API for Face Capture has been delivered. It is based on self-explanatory use cases that provide specific information depending on the given use case. This allows the integrator to focus on working with the data provided by the SDK rather than on SDK configuration.

NOTE: For now, the new API supports only the remote use case.

Integration steps 

For a successful integration, four steps are needed.

  1. License activation and camera permission. In order to use the SDK, a proper license has to be activated. The License Manager section shows how to handle the license and permissions.

  2. Integration with services. The new face capture is currently considered a remote-only use case. In order to initiate the capture process, integration with proofing services is required. These proofing services are responsible for performing liveness detection, ensuring that the captured face image corresponds to a genuine person. Please see the Creating Session section below.

  3. Use case creation. In order to start a capture, it is essential to create a specific use case instance. Each use case has a sessionInfo property that contains the session configuration and a set of delegates that will be used to handle callback methods. Use cases and delegates are described in more detail in the next paragraphs - Face Remote Use Case, Delegates.

To easily customize the look and feel of face capture, an optional UISettings parameter can additionally be created. More on that parameter here.

Once these three steps are satisfied, the FaceCaptureView can be set up, passing the use case instance and optionally the UI settings as parameters. After setting up the capture view, the capture can be started by calling the start() method. The delegate attached to the use case will receive data from the capture.

Swift
let environmentInfo = // select type of authorization
let delegates = // create delegates
let useCase = // create use case
let uiSettings = // optionally create UI settings, can be nil
self.captureView.setup(with: useCase, uiSettings: uiSettings)
self.captureView.start()

  4. Add FaceCaptureView to the layout. It inherits from UIView and has to be used as a preview on the capture UI. FaceCaptureView is also used to start face capture and cancel it if needed, so it is not only a capture preview but also the entry point to the SDK.

Swift
FaceCaptureView
    func setup(useCase: FaceCaptureUseCase, uiSettings: UISettings?)
    func start()
    func cancel()

Adding FaceCaptureView to the view controller.

Storyboard

In a project that uses storyboards, FaceCaptureView should be added as an IBOutlet to the layout of the view used for face capture.

Swift
@IBOutlet private weak var captureView: FaceCaptureView!
Programmatically

If you want to use FaceCaptureView programmatically, first add a property to your view controller class:

Swift
private weak var captureView: FaceCaptureView?

Next, display the FaceCaptureView:

Swift
let captureView = FaceCaptureView()
captureView.translatesAutoresizingMaskIntoConstraints = false
captureView.contentMode = .scaleToFill
view.addSubview(captureView)
NSLayoutConstraint.activate([
    captureView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
    captureView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
    captureView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
    captureView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor)
])
self.captureView = captureView

Type of authorization 

The new Face Capture supports two types of authorization: API key and OAuth.

OAuth

In token-type authorization, an access token is generated by the authorization server using the provided secrets. The token is used by creating an AccessToken instance and passing it to the appropriate initializer of the EnvironmentInfo class: init(accessToken: AccessToken, baseUrl: URL).

Getting started with OAuth

If you want to read more about OAuth you can check this website.

AccessToken

The AccessToken class holds information about the secret and token type from the OAuth authorization server.

Where can secrets be found for token generation?

Secrets can be found on this webpage: https://experience.idemia.com/dashboard/my-identity-proofing/access/environments/. You can either generate them manually on the page or make a request to the authorization server.

An example successful response from the authorization server:

HTTP
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"2YotnFZFEjr1zCsicMWpAA",
  "token_type":"example",
  "expires_in":3600
}
AccessToken class description

  • secret (String): The access_token parameter mapped from the authorization server response.
  • tokenType (String): The token type, which indicates how the token should be used in the authorization request. This is the token_type parameter mapped from the authorization server response.

API key

The API key is generated once and is passed to the apiKey parameter in the EnvironmentInfo initializer: init(apiKey: String, baseUrl: URL).
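Combining the two options, here is a sketch of building EnvironmentInfo (the AccessToken initializer is assumed to take the two documented parameters as a memberwise initializer; the URL and secrets are placeholders):

```swift
import Foundation

let baseUrl = URL(string: "https://your-environment.example.com")! // placeholder

// Option 1: OAuth — values map from the authorization server response
// (access_token -> secret, token_type -> tokenType); initializer shape is an assumption.
let token = AccessToken(secret: "2YotnFZFEjr1zCsicMWpAA", tokenType: "example")
let oauthEnvironment = EnvironmentInfo(accessToken: token, baseUrl: baseUrl)

// Option 2: API key
let apiKeyEnvironment = EnvironmentInfo(apiKey: "your-api-key", baseUrl: baseUrl)
```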

Creating Session 

Session creation should be implemented on the backend side. As a result of session creation, a sessionId is returned. To create a session, the following API calls have to be performed.

To proceed with ID&V platform/GIPS, follow these steps:

  1. Create an identity by calling the POST request /gips/v1/identities. An identity is returned as a result.

  2. Submit confirmation that the user has consented to perform specific evidence verifications with the POST request /gips/v1/identities/{id}/consents.

  3. Start the liveness session by calling the POST request /gips/v1/identities/{id}/attributes/portrait/live-capture-session?mode=nativeSDK. A session id is returned as a result.

For more detailed instructions about creating a session, please refer to the ID&V API documentation, steps 1 to 3: Liveness video capture using a native SDK.
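As a sketch only, the three GIPS calls above could be issued with URLSession as follows (the host, credential header, and response parsing are assumptions; in production, session creation belongs on your backend):

```swift
import Foundation

// Placeholders — these values and the header name are assumptions.
let gipsBase = "https://gips.example.com"
let apiKey = "your-api-key"

func post(_ path: String, body: Data? = nil) async throws -> Data {
    var request = URLRequest(url: URL(string: gipsBase + path)!)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "apikey") // header name is an assumption
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = body
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}

func createLiveCaptureSession() async throws -> Data {
    // 1. Create an identity; its id is in the JSON response (parsing omitted here).
    _ = try await post("/gips/v1/identities")
    let identityId = "parsed-from-step-1" // placeholder
    // 2. Submit the user's consent.
    _ = try await post("/gips/v1/identities/\(identityId)/consents")
    // 3. Start the liveness session; the response carries the session id used by the SDK.
    return try await post("/gips/v1/identities/\(identityId)/attributes/portrait/live-capture-session?mode=nativeSDK")
}
```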

To proceed with WebBio, follow these steps:

  1. To create a session, make a POST request to the endpoint /bioserver-app/v2/bio-sessions and include the session data in the request body.
  2. Optionally, retrieve the session path from the response.
  3. Retrieve the bioSession ID by making a GET request to /bioserver-app{bioSessionPath}. The response to this call will contain the session ID used by the SDK.
  4. Initialize the session by sending a POST request to /bioserver-app/v2/bio-sessions/{bioSessionId}/init-liveness-parameters, providing the session ID from the previous step and the liveness parameters in the request body.

For more detailed instructions on creating a session with WebBio, refer to the "Replay challenge" section in the IDEMIA Biometric Services Documentation. You can find the described API requests on the biometric-services API Explorer page.
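The WebBio steps can be sketched the same way (host and response parsing are assumptions; session creation belongs on your backend):

```swift
import Foundation

let webBioBase = "https://webbio.example.com" // placeholder host

func request(_ method: String, _ path: String, body: Data? = nil) async throws -> Data {
    var req = URLRequest(url: URL(string: webBioBase + path)!)
    req.httpMethod = method
    req.setValue("application/json", forHTTPHeaderField: "Content-Type")
    req.httpBody = body
    let (data, _) = try await URLSession.shared.data(for: req)
    return data
}

func createWebBioSession(sessionData: Data, livenessParameters: Data) async throws -> Data {
    // 1. Create the session; its path is in the response (parsing omitted here).
    _ = try await request("POST", "/bioserver-app/v2/bio-sessions", body: sessionData)
    let bioSessionPath = "parsed-from-step-1" // placeholder
    // 2.-3. Retrieve the bioSession id used by the SDK.
    _ = try await request("GET", "/bioserver-app" + bioSessionPath)
    let bioSessionId = "parsed-from-step-3" // placeholder
    // 4. Initialize the liveness parameters for the session.
    return try await request("POST", "/bioserver-app/v2/bio-sessions/\(bioSessionId)/init-liveness-parameters",
                             body: livenessParameters)
}
```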

Use Cases 

The capture settings of the new Face Capture API are defined by using a proper, predefined configuration designed for a specific use case. This way, capture configuration is more intuitive and less confusing. Each use case accepts a set of delegates:

  • feedbackDelegate - receives feedback messages that can be mapped to instructions presented to the user,
  • trackingDelegate - receives face tracking data that can be used to draw a box around tracked elements,
  • livenessActiveDelegate - if the active liveness challenge is chosen, this delegate receives pointer and target status updates.

More information about delegates is covered here.

FaceRemoteCaptureUseCase

This use case performs face capture with backend communication. In order to provide the correct flow, sessionId, environmentInfo, and RemoteCaptureDelegates must be provided during the use case initialization. The sessionId of the face capture session on the IDEMIA Identity and Verification service is required to start communication with the backend. More about creating a session can be found here. The environmentInfo parameter contains the type of authorization, such as an API key or OAuth token, as well as the address of the environment. After communication is established and the challenge is started, RemoteCaptureDelegates provide status and feedback information for the face capture.

  • sessionId (String): Live capture session id.
  • environmentInfo (EnvironmentInfo): Object that contains both the URL of the face liveness server and the secrets needed for authorization (an API key or an OAuth access token).
  • delegates (RemoteCaptureDelegates): Delegates used to handle face capture callbacks.

Swift
let useCase = FaceRemoteCaptureUseCase(sessionId: sessionId, delegates: delegates, environmentInfo: environmentInfo)
self.captureView.setup(with: useCase, uiSettings: uiSettings)
self.captureView.start()

UISettings

Optional parameter that can contain configuration of UI elements colors, sizes etc. used in face capture challenges displayed on FaceCaptureView. It allows individual customization for all of 3 types of liveness challanges:

| UISettings field | Description |
|---|---|
| passiveVideoCaptureSettings PassiveVideoCaptureSettings | Custom UI settings for the passive video capture challenge. |
| passiveCaptureSettings PassiveCaptureSettings | Custom UI settings for the passive liveness challenge. |
| livenessActiveCaptureSettings LivenessActiveCaptureSettings | Custom UI settings for the active liveness challenge. |
Swift
class UISettings {
    var passiveVideoCaptureSettings: PassiveVideoCaptureSettings?
    var passiveCaptureSettings: PassiveCaptureSettings?
    var livenessActiveCaptureSettings: LivenessActiveCaptureSettings?
}

Delegates 

This section describes the delegates available for specific use cases within the new API for face capture. Please remember that all UI operations should be performed on the main thread.

Capture Delegates

Delegates are used as a mechanism to return information about the capture process and status. To receive this information, simply set an instance of your class as a delegate and implement the methods from the protocol. Delegates are dedicated to specific use cases. However, all use cases share 3 general delegates, which are as follows:

Face feedback delegate

This delegate is mandatory: it instructs the user how to properly position the device during capture and also provides instructions for passing the liveness challenge.

Swift
func captureInfoReceived(feedback: FaceCaptureFeedback)

Method called when feedback info is available. It returns FaceCaptureFeedback, an enum with 11 possible messages. These feedback messages can be mapped and displayed to the user in the UI. For example, faceInfoCenterMoveForwards can be mapped to "Please move your face forward". More examples can be found in FaceSampleApp.
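For illustration, the mapping can live in one helper so all feedback strings stay in one place. The sketch below uses a hypothetical stand-in enum with a few example cases; the real FaceCaptureFeedback cases come from the SDK.

```swift
// Hypothetical stand-in for a few FaceCaptureFeedback cases (illustration only).
enum Feedback {
    case faceInfoCenterMoveForwards
    case faceInfoCenterMoveBackwards
    case faceInfoNotCentered
}

// Central mapping from feedback cases to user-facing instructions.
func instruction(for feedback: Feedback) -> String {
    switch feedback {
    case .faceInfoCenterMoveForwards: return "Please move your face forward"
    case .faceInfoCenterMoveBackwards: return "Please move your face back"
    case .faceInfoNotCentered: return "Please center your face in the frame"
    }
}
```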

Face tracking delegate

Returns the coordinates of the user's face position on screen.

Swift
func captureInfoReceived(trackingData: FaceTrackingInfo)

The delegate returns FaceTrackingInfo, which has the coordinates and dimensions of a frame that can be drawn around the user's face.
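As a sketch of how such data is typically used: the tracking coordinates refer to the capture frame, so they usually need to be scaled into the preview view's coordinate space before drawing. The struct and field names below are assumptions for illustration, not the SDK's actual FaceTrackingInfo API.

```swift
// Hypothetical tracking data: a face box in capture-frame pixels,
// plus the size of the frame those coordinates refer to.
struct TrackingBox {
    var x: Double, y: Double, width: Double, height: Double
    var frameWidth: Double, frameHeight: Double
}

// Scale the box into the coordinate space of a preview view.
func previewRect(for box: TrackingBox,
                 viewWidth: Double, viewHeight: Double) -> (x: Double, y: Double, width: Double, height: Double) {
    let sx = viewWidth / box.frameWidth
    let sy = viewHeight / box.frameHeight
    return (box.x * sx, box.y * sy, box.width * sx, box.height * sy)
}
```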

Liveness Active Delegate

Delegate used for Join-the-points challenge.

Swift
func pointerUpdated(pointerInfo: PointerInfo)
func targetUpdated(targetInfo: TargetInfo)
func targetsNumberReceived(numberOfTargets: Int)

Provides information about the states and positions of the UI elements used during the Active Liveness (Join the Points) challenge: the current pointer position on screen, the coordinates and state of each target, and the number of targets.
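A common piece of logic when rendering this challenge is deciding whether the pointer currently sits inside a circular target. A minimal sketch, assuming pointer and target positions share one coordinate space (the real PointerInfo and TargetInfo types come from the SDK):

```swift
// Decide whether the pointer is inside a circular target by comparing
// the squared distance between centers with the squared radius.
func pointerIsOnTarget(pointerX: Double, pointerY: Double,
                       targetX: Double, targetY: Double,
                       targetRadius: Double) -> Bool {
    let dx = pointerX - targetX
    let dy = pointerY - targetY
    return dx * dx + dy * dy <= targetRadius * targetRadius
}
```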

Remote Capture Delegates

Step Info Delegate

Delegate method that returns information about the current capture step, which reflects the status of the face capture process.

Swift
func captureInfoReceived(stepInfo: StepInfo)

Step info can have one of two values:

  • captureStarted - returned after the capture has started. At this point any UI progress indicator should be hidden and FaceCaptureView has to be displayed to the user.
  • preparingLiveness - returned in the context of liveness; it indicates that liveness metadata is being sent to the backend. This callback can be used to display a UI component that visually represents the ongoing task, such as a system UIActivityIndicatorView.
Passive Video Delegate

In case of passive video challenge, multiple events take place. This delegate provides information about what is happening and returns data that can be used as a parameter for custom UI components.

Swift
func preparationStarted()
func preparationFinished()
func overlayDidUpdate(overlay: OvalOverlay)
func progressDidUpdate(progress: CGFloat)

The first two delegate methods report when the passive video challenge preparation has started and when it has finished. This process takes a few seconds, so the information can be used to show a loading indicator or another custom waiting screen. The overlay updates delivered via overlayDidUpdate are needed to draw a custom overlay on screen. It has the shape of an oval in the center of the screen, and the challenge for the user is to align their face inside it. While the user's face stays inside the overlay, progress is reported via progressDidUpdate. The progress value should be displayed to the user as text or a visual hint.
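Since progressDidUpdate delivers a value between 0 and 1, showing it as text usually means formatting it as a percentage. A minimal sketch; the clamping is a defensive assumption on our part, not documented SDK behavior:

```swift
// Format a 0.0–1.0 progress value as a percentage label, clamping
// out-of-range values defensively.
func progressLabel(_ progress: Double) -> String {
    let clamped = min(max(progress, 0), 1)
    return "\(Int((clamped * 100).rounded()))%"
}
```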

Capture Liveness Delegate

Delegate method that returns information about the liveness mode used for the current capture.

Swift
func captureLiveness(liveness: Liveness)

The Liveness parameter contains information about the mode used in the current capture. It can have one of 3 values: active, passive, passiveVideo.

Capture Result Delegate

Delegate that has one method, which is called after the remote face capture is finished.

Swift
func captureDidFinish(result: FaceCaptureResult)

It provides a FaceCaptureResult parameter that tells whether the capture was successful. In case of a failed capture, it returns an error. No other callback will occur after this method.

Liveness Processing Delegate

Delegate has one method which is called during the preparingLiveness phase of the capture process.

Swift
func livenessMetadataUploadDidUpdate(progress: Double)

It provides information about the upload progress of the liveness metadata.

Example integration 

Example integration: View controller with remote capture use case set to passive liveness.

Important - There are two types of remote face liveness integration available:

  • Integration with the ID&V platform/GIPS is demonstrated in the sample app called FaceSampleAppLite.
  • Integration with the WebBio service is showcased in the sample app called FaceSampleAppLiteWBS.

If you want to see examples and observe how the integration works, you can refer to the sample applications.

Swift
import UIKit
import FaceCaptureSDK

// For convenience, the used protocols can be grouped
protocol CaptureDelegate: FaceFeedbackDelegate, FaceTrackingDelegate, CaptureResultDelegate, StepInfoDelegate {
}

class FaceCaptureViewController: UIViewController {
    // session id has to be provided by the customer backend
    private var sessionId: String

    override func viewDidLoad() {
        super.viewDidLoad()

        let useCase = ... // create Remote Use Case providing session info, api key and host url
        let uiSettings = ... // create UISettings
        self.captureView.configure(with: useCase, uiSettings: uiSettings)
        self.captureView.start()
    }
}

// Handling delegate callbacks
extension FaceCaptureViewController: CaptureDelegate {
    func captureInfoReceived(feedback: FaceCaptureFeedback) {
        //TODO: Map the feedback entry to the proper information and display it to the user
        //e.g. case faceInfoComeBackField: -> "Move closer"
    }

    func captureInfoReceived(trackingInfo: FaceTrackingInfo) {
        //TODO: If needed, a rectangle box can be drawn on screen using coordinates
        // from the tracking info object.
    }

    func captureDidFinish(result: FaceCaptureResult) {
        // Handle capture finished callback: show success or error screen.
    }

    func captureInfoReceived(stepInfo: StepInfo) {
        // Handle UI depending on capture state, for example:
        switch stepInfo {
        case .preparingLiveness:
            break // Show loading indicator
        case .captureStarted:
            break // Hide loading indicator, show preview
        }
    }
}

Errors 

The FaceCaptureError object contains useful information that helps in handling a failed flow.

| Parameter | Description |
|---|---|
| type FailureType | Type of error: high-level information about what went wrong. Find the type descriptions below. |
| code Int | Special code dedicated to the particular case. Very helpful in L2/L3 troubleshooting. |
| message String | Message with the error description. |
| unlockDateTime NSNumber | Unix timestamp in the UTC time zone at which capture will be unblocked. This field has a value when the ErrorType is deviceBlocked. Please remember, the value is represented in milliseconds. |
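Because unlockDateTime is expressed in milliseconds, it has to be divided by 1000 before building a Foundation Date. A small sketch of that conversion and of computing the remaining lock time (the helper names are ours, not part of the SDK):

```swift
import Foundation

// unlockDateTime is a Unix timestamp in milliseconds (UTC).
// Convert it to a Date, then compute how many seconds remain until unlock.
func unlockDate(fromMilliseconds ms: Double) -> Date {
    Date(timeIntervalSince1970: ms / 1000.0)
}

func secondsUntilUnlock(fromMilliseconds ms: Double, now: Date = Date()) -> TimeInterval {
    max(0, unlockDate(fromMilliseconds: ms).timeIntervalSince(now))
}
```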

ErrorType

| Type | Description |
|---|---|
| timeout | Timeout occurred during the flow. |
| connectionIssue | Remote connection issue. |
| authentication | Remote capture authentication issue. |
| invalidSession | Remote session is invalid. |
| badCapture | Face capture failed. |
| cancelled | Capture has been cancelled by the end user. |
| invalidLicense | The LKMS license is invalid. |
| unknown | Unknown type of error. Also used as the default type in a few cases. |
| deviceBlocked | Capture on this device is blocked for a period of time because of many failures. |

Network Security Configuration 

Face Capture SDK allows you to set up public key pinning for network communication with backend services. To enable this feature, add an object similar to the one below to your app's Info.plist file:

XML
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CaptureSDKNetworkSecurityConfig</key>
    <dict>
        <key>CaptureSDKDomainConfig</key>
        <array>
            <dict>
                <key>CaptureSDKDomain</key>
                <string>proofing.app.eu.identity-prod.idemia.io</string>
                <key>CaptureSDKPinSet</key>
                <array>
                    <string>VnaVguILcTfqALKLEhMtqKzXG6KK7w5T1Px7LO+dbVw=</string>
                </array>
                <key>CaptureSDKPinSetExpirationDate</key>
                <date>2024-03-25T10:57:42Z</date>
            </dict>
        </array>
    </dict>
</dict>
</plist>

The CaptureSDKDomainConfig array should contain a dictionary for each domain pin set. An expiration date can be set (via CaptureSDKPinSetExpirationDate) so that expired pin sets are no longer checked. CaptureSDKDomain must be an exact domain (without wildcards). The CaptureSDKPinSet can contain multiple SHA-256 key hashes encoded in Base64. Supported public key types: RSA (2048, 3072, and 4096) and ECDSA (secp256r1 and secp384r1).

Key pinning is considered enabled for a domain when at least one non-expired, non-empty pin set is available for that domain. For each request made by Face Capture SDK, if none of the pins (public key hashes) matches the certificate public keys found during the SSL challenge, the request is canceled.

Biometric Capture SDK 

Get Info about Biometric Capture SDK 

The purpose of this method is to allow the integrator to retrieve information about the SDK.

Objective-C
BIOSDKInfo *info = [BIOSDK getInfo];

Returns

An object of the BIOSDKInfo type with the information about the SDK.

Create a FaceCaptureHandler 

This retrieves a capture handler to perform the face biometric capture operations. You must first configure the capture options.

Objective-C
@interface ViewController () <FaceCaptureHandlerDelegate>
@property (strong, nonatomic) id<FaceCaptureHandler> captureHandler;
@end
...
[BIOSDK createFaceCaptureHandlerWithOptions:[FaceCaptureOptions new] withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    if (!error) {
        self.captureHandler = captureHandler;
        self.captureHandler.delegate = self;
        ...
    }
}];

| Parameter | Description |
|---|---|
| options FaceCaptureOptions* | The capture options to configure the face capture handler. |
| completionHandler void (^)(id, NSError* error) | Block of code that will be called after FaceCaptureHandler finishes initialization. It will pass an error if any occurred. |

Note: The error code list is here

Create a RemoteFaceCaptureHandler 

Note: ⚠️ RemoteFaceCaptureHandler is deprecated.

This retrieves a capture handler to perform the face biometric capture operations with the server support. You must first configure the capture options. More info regarding the server integration can be found here.

Objective-C
@interface ViewController () <FaceCaptureHandlerDelegate>
@property (strong, nonatomic) id<FaceCaptureHandler> captureHandler;
@end
...
[BIOSDK createFaceCaptureHandlerWithOptions:[FaceCaptureOptions new] withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    if (!error) {
        self.captureHandler = captureHandler;
        self.captureHandler.delegate = self;
        ...
    }
}];

| Parameter | Description |
|---|---|
| options FaceCaptureOptions* | The capture options to configure the face capture handler. |
| completionHandler void (^)(id, NSError* error) | Block of code that will be called after FaceCaptureHandler finishes initialization. It will pass an error if any occurred. |

Note: The error code list is here

Create a BIOMatcherHandler 

This retrieves a handler to perform all the matching, identifying, and template coding operations.

Objective-C
@interface ViewController ()
@property (strong, nonatomic) id<BIOMatcherHandler> matcherHandler;
@end
...
[BIOSDK createMatcherHandlerWithOptions:[BIOMatcherHandlerOptions new]
                  withCompletionHandler:^(id<BIOMatcherHandler> matcherHandler, NSError* error) {
    self.matcherHandler = matcherHandler;
    ...
}];

| Parameter | Description |
|---|---|
| options BIOMatcherHandlerOptions* | Object that can configure a session of the matcher handler. |
| completionHandler void (^)(id, NSError* error) | Block of code that will be called after BIOMatcherHandler finishes initialization. It will pass an error if any occurred. |

Create a DocumentCapture Handler 

This retrieves a capture handler to perform all the document capture operations. You must first configure the capture options.

  • Please check the use case named Read MRZ.

  • Also, you can check all the features provided by this handler here.

Objective-C
// Populate a DocumentCaptureOptions object
DocumentCaptureOptions *captureOptions = [DocumentCaptureOptions new];
captureOptions.mode = DocumentCaptureModeReadMRZ;
captureOptions.camera = BIOCameraRear;
captureOptions.overlay = BIOOverlayON;
captureOptions.captureTimeout = 120;
...
[BIOSDK createDocumentCaptureHandlerWithOptions:captureOptions
                          withCompletionHandler:^(id<DocumentCaptureHandler> documentCaptureHandler, NSError* error) {
    self.documentCapture = documentCaptureHandler;
    self.documentCapture.delegate = self;
}];

| Parameter | Description |
|---|---|
| options DocumentCaptureOptions* | The capture options to configure the document capture handler. |
| completionHandler void (^)(id, NSError* error) | Block of code that will be called after the DocumentCaptureHandler finishes initialization. It will pass an error if any occurred. |

BIOReplayProtocol 

This protocol defines the methods that are available in all capture handlers. However, they should be implemented through its sub-protocols (e.g. FaceCaptureHandlerDelegate) rather than through BIOReplayProtocol directly.

captureFinishedWithError 

This method is called whenever a capture finishes with an error and cannot be resumed by Biometric Capture SDK.

Objective-C
-(void)captureFinishedWithError:(NSError*)error

| Parameter | Description |
|---|---|
| error NSError* | The error that caused the capture to finish. |

replayDidFinishRecording 

This method is called whenever a replay finishes playing. Replays are used to play back the recorded videos of a capture for debugging purposes.

Objective-C
-(void)replayDidFinishRecording

Generic SDK Objects 

This section covers the generic objects that are necessary to use the Biometric Capture SDK.

BIOSDKInfo

This exposes information about the SDK.

Parameters

| Parameter | Description |
|---|---|
| version NSString* | The version of the SDK. |

BIOBiometrics

This is the base object that holds the biometric location and modality; the other biometric objects subclass it.

Parameters

| Parameter | Description |
|---|---|
| biometricLocation BIOLocation | The BiometricLocation enum option. |
| biometricModality BIOModality | The BiometricModality enum option. |

BIOImage

This is the image object returned by the SDK. It is a subclass of BIOBiometrics.

Parameters

| Parameter | Description |
|---|---|
| buffer NSData* | The image. |
| stride int | The stride of the image (bytes per row). |
| width uint32_t | The width of the image. |
| height uint32_t | The height of the image. |
| colorSpace BIOColorSpace | The ColorSpace of the image. |
| resolution float | The resolution of the image. |
| alive BOOL | OBSOLETE. True if alive, otherwise false. |
| imageQuality int | Image quality if available, otherwise 0. Currently only available for fingerprint images. |
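To sanity-check a received image buffer, note that a packed buffer holds stride × height bytes, and the stride itself is at least the row width times the bytes per pixel of the color space. A hedged sketch of that arithmetic (the helper names are ours, not SDK API):

```swift
// Minimum bytes per row for a given width and bits-per-pixel
// (e.g. 8 for Y8, 16 for Y16LE, 24 for BGR24/RGB24).
func minimumStride(width: Int, bitsPerPixel: Int) -> Int {
    width * bitsPerPixel / 8
}

// A packed image buffer holds one full row per line of the image.
func expectedBufferLength(stride: Int, height: Int) -> Int {
    stride * height
}
```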

BIOTemplate

This is a biometric template object returned by the SDK. It is a subclass of BIOBiometrics.

Parameters

| Parameter | Description |
|---|---|
| buffer NSData* | The template. |
| uuid NSString* | The template uuid in the database (could be null). |
| uuidUser NSString* | The user uuid (could be null). |

BIOFaceTemplate

This is the biometric template for a face returned by the SDK. It is a subclass of BIOTemplate.

Parameters

| Parameter | Description |
|---|---|
| qualityRawValue NSInteger | The face quality raw value. |
| quality FaceTemplateQuality | The face quality value interpretation. |

FaceTemplateQuality enum

This enum describes the quality of the face saved in the template.

| Attribute | Description |
|---|---|
| FaceTemplateQualityLow | The quality of the face is low (it is not recommended to perform matching). |
| FaceTemplateQualityMedium | The quality of the face is medium. |
| FaceTemplateQualityHigh | The quality of the face is high. |

BIOUser

This is the user object used to represent an individual in the SDK.

Parameters

| Parameter | Description |
|---|---|
| name NSString* | The name of the user. |
| uuid NSString* | The UUID of the user. |

BIOVideoRecordingOptions

This is the video recording object used to configure video recording.

| Parameter | Description |
|---|---|
| recordingEnabled | Is video recording enabled. |

BIODebugDataSettings

This is an object used for debugging purposes. Currently it contains only one parameter, a file path used for RTV video playback.

| Parameter | Description |
|---|---|
| rtvFilePath | Path to the RTV file used for video playback. |

Enums 

BIOLogLevel

These are the constants used to configure logs.

| Attribute | Description |
|---|---|
| BIOLogLevelDebug | Display all logs from the SDK |
| BIOLogLevelInfo | Informative message |
| BIOLogLevelWarning | Warning message |
| BIOLogLevelError | Error message |
| BIOLogLevelNone | Turns logs off |

BIOColorSpace

These are the ColorSpace constants.

| Attribute | Description |
|---|---|
| BIOColorSpaceY8 | Grayscale 8bpp image |
| BIOColorSpaceY16LE | Grayscale 16bpp image (Little Endian) |
| BIOColorSpaceBGR24 | Color 24bpp BGR image (BMP-like memory layout) |
| BIOColorSpaceRGB24 | Color 24bpp RGB image (reversed memory layout compared to RT_COLORSPACE_BGR24) |
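The note about reversed memory layouts can be made concrete: converting a packed 24bpp BGR buffer to RGB is just swapping the first and third byte of every pixel. A sketch, assuming the stride equals width × 3 (no row padding):

```swift
// Swap the blue and red channel of every 3-byte pixel in a packed buffer.
func bgrToRGB(_ buffer: [UInt8]) -> [UInt8] {
    var out = buffer
    var i = 0
    while i + 2 < out.count {
        out.swapAt(i, i + 2)
        i += 3
    }
    return out
}
```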

BIOLocation

These are the biometric location constants.

| Attribute | Description |
|---|---|
| BIOLocationFaceFrontal | face |
| BIOLocationFingerRightIndex | right index |
| BIOLocationFingerRightMiddle | right middle |
| BIOLocationFingerRightRing | right ring |
| BIOLocationFingerRightLittle | right little |
| BIOLocationFingerRightThumb | right thumb |
| BIOLocationFingerLeftIndex | left index |
| BIOLocationFingerLeftMiddle | left middle |
| BIOLocationFingerLeftRing | left ring |
| BIOLocationFingerLeftLittle | left little |
| BIOLocationFingerLeftThumb | left thumb |
| BIOLocationFingerUnknown | finger unknown |
| BIOLocationHandLeft | hand left |
| BIOLocationHandRight | hand right |
| BIOLocationHandUnknown | hand unknown |

BIOModality

These are the biometric modality constants.

| Attribute | Description |
|---|---|
| BIOModalityUnknown | unknown |
| BIOModalityFace | Face |
| BIOModalityFrictionRidge | Friction ridge |

BIOCaptureHandlerError

These are errors that can be thrown when there is an error with the capture handler.

| Attribute | Description |
|---|---|
| BIOCaptureHandlerErrorSuccess | No error occurred |
| BIOCaptureHandlerErrorParameters | Wrong parameters |
| BIOCaptureHandlerErrorParameterUnknown | Unknown parameter |
| BIOCaptureHandlerErrorMemalloc | Memory allocation error |
| BIOCaptureHandlerErrorInit | Initialization error |
| BIOCaptureHandlerErrorGraphInitialisationFailed | Graph initialization failed |
| BIOCaptureHandlerErrorParameterNotFound | Parameter not found |
| BIOCaptureHandlerErrorParameterSize | Parameter size error |
| BIOCaptureHandlerErrorTypeMismatch | Type mismatch error |
| BIOCaptureHandlerErrorInvalidHandle | Invalid handler |
| BIOCaptureHandlerErrorLicense | License is invalid |
| BIOCaptureHandlerErrorApplinotavailable | The application is not available |
| BIOCaptureHandlerErrorProfileNotAvailable | The profile is not available |
| BIOCaptureHandlerErrorSubprofileNotAvailable | The subprofile is not available |
| BIOCaptureHandlerErrorUnknown | An unknown error occurred |
| BIOCaptureHandlerErrorInvalidOperation | The operation is invalid |
| BIOCaptureHandlerErrorIncompatibleApiVersion | The API version is incompatible. Your application must be recompiled. |
| BIOCaptureHandlerErrorCameraError | Camera issue has been encountered |
| BIOCaptureHandlerErrorParameterWrongType | Parameter is not the right type |
| BIOCaptureHandlerErrorParameterNotSet | Parameter is not set in the current scope |
| BIOCaptureHandlerErrorCaptureIsLocked | Capture is locked |

BIOCapturingError

These are errors that can be thrown when there is an error during or after the capture.

| Attribute | Description |
|---|---|
| BIOCapturingErrorUnknown | Unknown error occurred |
| BIOCapturingErrorCaptureTimeout | Capture timeout |
| BIOCapturingErrorNotAlive | The capture returned an image with status not alive. |
| BIOCapturingErrorWrongBiometricLocation | The wrong biometric location was scanned (e.g. looking for a face, captured fingers). |
| BIOCapturingErrorImageBuffer | Could not read the image buffer |
| BIOCapturingErrorBadCaptureFingers | Fingers were not captured properly |
| BIOCapturingErrorBadCaptureHand | Hand was not captured properly |

BIOCamera

These are the constants used to configure the behavior of BioCapture.

| Attribute | Description |
|---|---|
| BIOCameraFront | Front Camera |
| BIOCameraRear | Rear Camera |

BIOOverlay

This is the enum used to configure the behavior of BioCapture.

| Attribute | Description |
|---|---|
| BIOOverlayOFF | Overlay off |
| BIOOverlayON | Overlay on |

BIOPreviewColorspace

These are the flags used to set the color space for the preview of the camera.

| Attribute | Description |
|---|---|
| BIOPreviewColorspaceColor | Sets colorspace of preview to RGB24 |
| BIOPreviewColorspaceColorBlur | Sets colorspace of preview to RGB24 with Blur |
| BIOPreviewColorspaceGray | Sets colorspace of preview to Grayscale |
| BIOPreviewColorspaceGrayBlur | Sets colorspace of preview to Grayscale with Blur |
| BIOPreviewColorspaceGrayBlurBordered | Sets colorspace of preview to Grayscale with Blur and a white border |

FaceCaptureHandler 

This document discusses FaceCaptureHandler.

Start Using FaceCaptureHandler 

Note: In order to use FaceCaptureHandler, it's required to load one of the capturing plugins (lite, normal or cr2dMatching).

  1. Import the framework header to your view controller.
Objective-C
#import <BiometricSDK/BiometricSDK.h>
  2. Add at least one UIImageView (or a subclass) to your layout. It will be used to preview the stream from the camera. A preview is not necessary for the capture.
Objective-C
@property (weak, nonatomic) IBOutlet UIImageView *preview;
  3. Check your license status here.

  4. You also need a property for FaceCaptureHandler. This object handles all operations related to capturing.

Objective-C
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    self.captureHandler = captureHandler;
    ...
}];
  5. Set the delegate for FaceCaptureHandler to self. self will have to implement the FaceCaptureHandlerDelegate protocol.
Objective-C
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    ...
}];
  6. After FaceCaptureHandler has finished its initialization, the preview view can be set.
Objective-C
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    self.captureHandler.preview = self.preview;
    ...
}];
  7. Now it can start capturing.
Objective-C
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError* error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    self.captureHandler.preview = self.preview;
    [self.captureHandler startCaptureWithCompletionHandlerError:nil];
}];
  8. Whenever the view controller disappears, the resources (e.g. camera) need to be released.
Objective-C
- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    [self.captureHandler destroy];
}

FaceCaptureHandler Info

Delegate

This sets the listener to receive the biometrics information.

Objective-C
handler.delegate = ... //Object that implements `FaceCaptureHandlerDelegate` protocol

Preview

This sets the camera preview.

Objective-C
handler.preview = ... //An `UIImageView`

Debug Settings

This sets the debug settings.

Objective-C
BIODebugDataSettings *settings = [[BIODebugDataSettings alloc] init];
settings.rtvFilePath = ... //A path to the RTV video file used for video playback.
handler.debugSettings = settings;

Note: To stop the camera preview, set preview to nil.

Start Capture

This starts the biometric capture.

Objective-C
[handler startCaptureWithCompletionHandler:^(NSError *error) {
    ...
}];

| Parameter | Description |
|---|---|
| completionHandler void (^)(NSError*) | An object with an error code if an error occurred, otherwise nil. In addition to the error code, other data can be returned in the error's userInfo structure as described in the table below. |

Additional optional userInfo data:

| Parameter | Description |
|---|---|
| lockedUntil | When capture is locked, this parameter contains the timestamp at which capture will be unlocked. This data is returned only in case of BIOCaptureHandlerErrorCaptureIsLocked. |

Stop Capture

This stops a capture.

Objective-C
[handler stopCaptureWithCompletionHandler:^(NSError *error) {
    ...
}];

| Parameter | Description |
|---|---|
| completionHandler void (^)(NSError*) | Object with an error code if an error occurred, otherwise nil. |

Switch Camera

This switches between different cameras.

Objective-C
[handler switchCamera:BIOCameraFront withError:&error];
[handler switchCamera:BIOCameraRear withError:&error];

| Parameter | Description |
|---|---|
| camera BIOCamera | Flag that selects the camera. |
| error NSError** | Object with an error code if an error occurred, otherwise nil. |

Overlay

This sets the overlay option.

Objective-C
[self setOverlay:BIOOverlayOFF withError:&error];
[self setOverlay:BIOOverlayON withError:&error];

| Parameter | Description |
|---|---|
| overlay BIOOverlay | Flag that determines if the overlay should be on or off. |
| error NSError** | Object with an error code if an error occurred, otherwise nil. |

Orientation

This sets the orientation option.

Objective-C
[self setOrientation:BIOOrientationPortrait withError:&error];
[self setOrientation:BIOOrientationUpSideDown withError:&error];
[self setOrientation:BIOOrientationLandscapeLeft withError:&error];
[self setOrientation:BIOOrientationLandscapeRight withError:&error];

| Parameter | Description |
|---|---|
| orientation BIOOrientation | Specifies the orientation of the preview. |
| error NSError** | Object with an error code if an error occurred, otherwise nil. |

Options

This retrieves the capture options used in this handler.

Note: Readonly

Objective-C
FaceCaptureOptions* options = handler.options;

Partial Dump Video

This dumps the last played sequences.

Objective-C
[handler requestPartialDumpVideoWithError:&error];

Note: To dump a partial video, the capture must previously have been run with partial dump video recording enabled.

| Parameter | Description |
|---|---|
| error NSError** | Object with an error code if an error occurred, otherwise nil. |

Capture Attempts Left

Returns the number of capture attempts left based on the maxCapturesBeforeDelay provided on the options.

Note: Readonly

Objective-C
NSInteger attemptsLeft = handler.captureAttemptsLeft;

Time to unlock

Returns the number of seconds until the capture unlocks, or 0 if the capture is not locked.

Note: Readonly

Objective-C
NSInteger timeToUnlock = handler.timeToUnlock;

Destroy

This releases all of the handler resources.

Objective-C
[handler destroy];

FaceCaptureHandlerDelegate 

FaceCaptureHandlerDelegate is a sub protocol of BIOReplayProtocol and implements all of its methods as well.

captureFinishedWithImages:withBiometrics:withError:

This is the main method called when images are captured and returned.

Objective-C
- (void)captureFinishedWithImages:(NSArray<BIOFaceImage*>*)images
                   withBiometrics:(BIOBiometrics*)biometrics
                        withError:(NSError*)error

| Parameter | Description |
|---|---|
| images NSArray<BIOFaceImage*> | Array of BIOFaceImages with biometric data. |
| biometrics BIOBiometrics* | Object that describes what biometric data was captured. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveBioCaptureInfo:withError:

This method is called whenever there is information that should be transmitted to the user (eg. BIOCapturingInfoFaceInfoTurnLeft to "Turn Left"), or information about the current challenge.

Objective-C
- (void)receiveBioCaptureInfo:(BIOCapturingInfo)info
                    withError:(NSError*)error

| Parameter | Description |
|---|---|
| info BIOCapturingInfo | Describes the action the user needs to take to finish capturing. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveCr2DTargetInfo:atIndex:outOf:withError:

This method is called whenever there is an update about a target for the CR2D challenge. Its position is relative to the captured picture resolution.

Objective-C
- (void)receiveCr2DTargetInfo:(BIOCr2DTargetInfo *)target
                      atIndex:(NSUInteger)index
                        outOf:(NSUInteger)numberOfTargets
                    withError:(NSError *)error

| Parameter | Description |
|---|---|
| target BIOCr2DTargetInfo* | Contains information about the target. |
| index NSUInteger | Number of the target. Starts with 0. |
| numberOfTargets NSUInteger | Total number of targets for a challenge. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveCr2DChallengeInfo:withError:

This method is called whenever there is information for the CR2D challenge. The challenge contains information about the current point. The position is relative to the captured picture resolution.

Objective-C
- (void)receiveCr2DChallengeInfo:(BIOCr2DChallengeInfo * _Nullable)challengeInfo
                       withError:(NSError * _Nullable)error

| Parameter | Description |
|---|---|
| challengeInfo BIOCr2DChallengeInfo* | Contains information about the heading point. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveFaceTrackingInfo:

This method is called when tracking information about the face and eye positions is available.

Objective-C
- (void)receiveFaceTrackingInfo:(BIOFaceTrackingInformation *)faceTrackingInfo

| Parameter | Description |
|---|---|
| faceTrackingInfo BIOFaceTrackingInformation | Object containing the face box coordinates, the eye positions, and the size those points are relative to. |

RemoteFaceCaptureHandlerDelegate 

Note: ⚠️ RemoteFaceCaptureHandlerDelegate is deprecated.

RemoteFaceCaptureHandlerDelegate is a sub protocol of BIOReplayProtocol and BIOPassiveVideoProtocol and implements all of their methods as well.

captureFinishedWithEncryptedMetadata:withFaceImage: (deprecated)

Method that is triggered when capture is finished and face metadata is available.

Objective-C
- (void)captureFinishedWithEncryptedMetadata:(BIOEncryptedData *)encryptedMetadata withFaceImage:(BIOFaceImage *)faceImage;

| Parameter | Description |
|---|---|
| encryptedMetadata BIOEncryptedData | Object with encrypted metadata that can be used for liveness verification on the server. |
| faceImage BIOFaceImage | Object with the captured face image that can be used to display it within the app. |

receiveBioCaptureInfo:withError: (deprecated)

This method is called whenever there is information that should be transmitted to the user (eg. BIOCapturingInfoFaceInfoTurnLeft to "Turn Left"), or information about the current challenge.

Objective-C
- (void)receiveBioCaptureInfo:(BIOCapturingInfo)info
                    withError:(NSError*)error

| Parameter | Description |
|---|---|
| info BIOCapturingInfo | Describes the action the user needs to take to finish capturing. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveCr2DTargetInfo:atIndex:outOf:withError: (deprecated)

This method is called whenever there is an update about a target for the CR2D challenge. Its position is relative to the captured picture resolution.

Objective-C
- (void)receiveCr2DTargetInfo:(BIOCr2DTargetInfo *)target
                      atIndex:(NSUInteger)index
                        outOf:(NSUInteger)numberOfTargets
                    withError:(NSError *)error

| Parameter | Description |
|---|---|
| target BIOCr2DTargetInfo* | Contains information about the target. |
| index NSUInteger | Number of the target. Starts with 0. |
| numberOfTargets NSUInteger | Total number of targets for a challenge. |
| error NSError* | Error if any occurred, otherwise nil. |

receiveCr2DChallengeInfo:withError: (deprecated)

This method is called whenever there is information for the CR2D challenge. The challenge contains information about the current point. The position is relative to the captured picture resolution.

Objective-C
- (void)receiveCr2DChallengeInfo:(BIOCr2DChallengeInfo * _Nullable)challengeInfo
                       withError:(NSError * _Nullable)error

| Parameter | Description |
| --- | --- |
| challengeInfo BIOCr2DChallengeInfo* | Contains information about the heading point. |
| error NSError* | Error if any occurred, otherwise nil. |

BIOPassiveVideoProtocol 

passiveVideoPreparationDidStart

Method that is triggered when BIORemoteFaceCaptureHandler receives the info update that Passive Video Liveness preparation has started.

Objective-C
- (void)passiveVideoPreparationDidStart;

passiveVideoPreparationDidEnd

Method that is triggered when BIORemoteFaceCaptureHandler receives the info update that Passive Video Liveness preparation has ended.

Objective-C
- (void)passiveVideoPreparationDidEnd;

passiveVideoOverlayDidUpdate:andPosition:orError:

Method that is triggered when BIORemoteFaceCaptureHandler receives an update for the passive video overlay.

Objective-C
- (void)passiveVideoOverlayDidUpdate:(CGSize)overlaySize andPosition:(CGPoint)position orError:(NSError *)error;

| Parameter | Description |
| --- | --- |
| overlaySize CGSize | Size of the overlay; CGSizeZero on error. |
| position CGPoint | Position of the overlay; CGPointZero on error. |
| error NSError* | Error if something went wrong, otherwise nil. |

passiveVideoProgressDidUpdate:orError:

Method that is triggered when BIORemoteFaceCaptureHandler receives a progress update for the passive video liveness check.

Objective-C
- (void)passiveVideoProgressDidUpdate:(CGFloat)progress orError:(NSError *)error;

| Parameter | Description |
| --- | --- |
| progress CGFloat | Progress of the passive video check; 0 on error. |
| error NSError* | Error if something went wrong, otherwise nil. |

Helper Objects 

FaceCaptureOptions / RemoteFaceCaptureOptions

Note: RemoteFaceCaptureOptions are deprecated.

This is the object used to configure the behavior of BioCapture.

| Attribute | Description | FaceCaptureOptions | RemoteFaceCaptureOptions |
| --- | --- | --- | --- |
| livenessMode FaceCaptureLivenessMode | The app enum option to configure the FaceCaptureLivenessMode. | YES | YES |
| securityLevel FaceLivenessSecurityLevel | The app enum option to configure FaceLivenessSecurityLevel (high by default). | YES | YES |
| cr2dMode BIOCr2dMode* | Sets the CR2D mode. Only valid when livenessMode is set to FaceCaptureLivenessModeActive. Can be set to BIORandomCr2dMode, BIOFixedTargetCr2dMode, or BIOPathCr2dMode. The default mode is BIORandomCr2dMode. | YES | YES |
| challengeIntervalDelay NSTimeInterval | Sets the time interval between challenges. | YES | YES |
| maxCapturesBeforeDelay NSInteger | Maximum number of captures before locking the capture for a certain delay (default: 5). Set to -1 to disable locking based on repeated failed captures. | YES | NO |
| timeCaptureDelayArray NSArray | Capture delays (in seconds) for each lock that occurs after the number of captures configured with maxCapturesBeforeDelay. The first lock lasts as many seconds as the first element of timeCaptureDelayArray, the second lock as many as the second element, and so on. For all capture attempts beyond the array length, the last element is used. Default values: [1*60, 5*60, 15*60, 60*60]. | YES | NO |
| camera BIOCamera | The app Camera option to configure BioCapture. | YES | YES |
| torch BIOTorch | OBSOLETE. Sets the torch value. | YES | YES |
| overlay BIOOverlay | Sets the overlay value. | YES | YES |
| captureTimeout NSTimeInterval | Capture timeout in seconds (default value: 120). | YES | YES |
| logLevel BIOLogLevel | Level of logs that are displayed during debug. | YES | YES |
| orientation BIOOrientation | Sets the orientation that the capture will be done in. | YES | YES |
| previewColorspace BIOPreviewColorspace | Sets the colorspace of the camera preview. | YES | YES |
| dumpFileEnable BOOL | If dump file is enabled, the capture creates logs. | YES | YES |
| dumpFileFolder NSString* | Folder where the logs will be saved. If nil, they are saved in the Documents folder. | YES | YES |
| dumpMetadataEnable BOOL | If dump metadata is enabled, the capture saves metadata. | YES | YES |
| dumpMetadataFolder NSString* | Folder where the metadata will be saved. If nil, it is saved in the Documents folder. | YES | YES |
| videoRecordEnable BOOL | If video recording is enabled, the capture will be recorded. | YES | YES |
| videoRecordFolder NSString* | Folder where recorded videos will be saved. | YES | YES |
| partialDumpVideoRecordEnable BOOL | If partial dump video recording is enabled, the capture can be recorded afterwards. | YES | YES |
| partialDumpVideoRecordFolder NSString* | Folder where partial dump recorded videos will be saved. | YES | YES |
| videoRecordingOptions BIOVideoRecordingOptions | The video recording related options. | YES | NO |
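As an illustration, several of the options above can be combined when preparing a capture. This is a minimal sketch; the enum values BIOCameraFront and BIOOverlayOn are assumed names for illustration, so check the SDK headers for the actual constants:

```objectivec
// Sketch: configuring a passive-liveness capture with dump files enabled.
// Values are illustrative; BIOCameraFront / BIOOverlayOn are assumed constants.
FaceCaptureOptions *options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModePassive];
options.securityLevel = FaceLivenessSecurityLevelHigh;  // default, shown for clarity
options.camera = BIOCameraFront;                        // assumed enum value
options.overlay = BIOOverlayOn;                         // assumed enum value
options.captureTimeout = 120;                           // seconds
options.maxCapturesBeforeDelay = 5;                     // lock after 5 failed captures
options.dumpFileEnable = YES;
options.dumpFileFolder = nil;                           // nil: saved in the Documents folder
```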

BIOCr2dMode

This is the CR2D mode base class. All CR2D modes inherit from it.

Note: It should NOT be used independently.

BIORandomCr2dMode

This CR2D mode is where the target is fully random.

Objective-C
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
options.cr2dMode = [BIOCr2dMode random];

BIOFixedTargetCr2dMode

This CR2D mode is where the target has a defined position. The position values x and y are normalized to be independent of the screen dimensions. Both are values in the range -1.0 to 1.0.

  • [-1.0, -1.0] - top, left corner of the screen
  • [ 0.0, 0.0] - center of the screen
  • [ 1.0, 1.0] - bottom, right corner of the screen
Objective-C
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
CGSize targetPosition = CGSizeMake(0, 0);
options.cr2dMode = [BIOCr2dMode fixedTargetWithPosition:targetPosition];

BIOPathCr2dMode

This is a CR2D mode where there is more than one target. The number of the targets is defined by the targetsNumber parameter.

Objective-C
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
NSInteger numberOfTargets = 4;
options.cr2dMode = [BIOCr2dMode pathWithNumberOfTargets:numberOfTargets];

BIOTrainingCr2dMode

This is a CR2D mode where it's possible to set a starting point and an end point of a challenge.

Objective-C
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
CGSize startingPosition = CGSizeMake(0, 0);
CGSize targetPosition = CGSizeMake(1, 0);
// The training-mode constructor selector is presumed; check BIOCr2dMode.h for the exact name.
options.cr2dMode = [BIOCr2dMode trainingWithStartingPosition:startingPosition targetPosition:targetPosition];

BIOChallengeInfo

The object contains information about the ongoing challenge.

Parameters

| Parameter | Description |
| --- | --- |
| currentChallengeNumber NSInteger | Number of the current challenge. |
| totalChallenges NSInteger | Total number of challenges. |

BIOCr2DChallengeInfo

The object contains the CR2D challenge info. All coordinates and metrics are relative to the camera preview resolution.

Parameters

| Parameter | Description |
| --- | --- |
| headingPointVisible BOOL | Whether the heading point should be displayed. |
| headingPoint CGPoint | X,Y coordinates of the heading point (the point the user is looking at). |

BIOCr2DTargetInfo

The object that is a representation of a target for the CR2D challenge.

Parameters

| Parameter | Description |
| --- | --- |
| number NSUInteger | Target number, starts at 0. |
| visible BOOL | Tells if the target should be displayed or not. |
| current BOOL | Tells if the target is the current one for the try. |
| position CGPoint | X,Y coordinates of the target center. |
| radius CGFloat | Target area radius in pixels. |
| completeness CGFloat | Tells how long the heading point must stay in the target area to pass the try. The value is a float between 0 and 1: 0 means the heading point is NOT in the target area; between 0 and 1, a growing value means the heading point is in the target area and should be kept still, while a shrinking value means it has left the target area; 1 means the try is passed. |
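A delegate can translate these fields directly into UI state. The sketch below is illustrative only; targetView is a hypothetical integrator-owned view with hidden, center, and progress properties:

```objectivec
// Sketch: reacting to CR2D target updates in
// receiveCr2DTargetInfo:atIndex:outOf:withError: (targetView is hypothetical).
- (void)receiveCr2DTargetInfo:(BIOCr2DTargetInfo *)target
                      atIndex:(NSUInteger)index
                        outOf:(NSUInteger)numberOfTargets
                    withError:(NSError *)error {
    if (error != nil || !target.visible) {
        self.targetView.hidden = YES;
        return;
    }
    self.targetView.hidden = NO;
    self.targetView.center = target.position;        // relative to the captured picture resolution
    self.targetView.progress = target.completeness;  // 0..1, where 1 means the try is passed
}
```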

BIOFaceImage

This is the image object of a face. It is a subclass of BIOImage.

| Parameter | Description | Notes |
| --- | --- | --- |
| buffer NSData* | The image. | |
| stride int | The stride of the biometric. | |
| width uint32_t | The width of the image. | |
| height uint32_t | The height of the image. | |
| colorSpace BIOColorSpace | The ColorSpace of the image. | |
| resolution float | The resolution of the image. | |
| alive BOOL | true if alive, otherwise false. | OBSOLETE. Use livenessStatus instead. |
| livenessStatus BIOFaceLivenessStatus | The liveness status of the liveness checking mechanism. | |
| imageQuality int | Image quality. Only available for fingerprint images, so it will always be 0. | |
| faceMetadata BIOFaceMetadata | Captured face metadata. | |
| faceTrackingInfo BIOFaceTrackingInformation | Face tracking information. | |

BIOFaceMetadata

This is the face metadata that can be used for liveness verification with the server.

| Parameter | Description |
| --- | --- |
| data NSData* | Face metadata. |

BIOEncryptedData

This is encrypted face metadata that can be used for liveness verification with the server.

| Parameter | Description |
| --- | --- |
| data NSData* | Encrypted face metadata. |
| masterSecret NSData* | Encrypted master secret. |

BIOFaceTrackingInformation

This is face tracking information, such as the face's position or the eyes' positions.

| Parameter | Description |
| --- | --- |
| faceBox CGRect | Face's position and size. |
| leftEye BIOEye* | Left eye. |
| rightEye BIOEye* | Right eye. |
| relativeSize CGSize | The size that the points are relative to. |
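Because faceBox is expressed relative to relativeSize, it usually has to be rescaled before being drawn over a preview view. A possible conversion, assuming the preview fills the given bounds with the same aspect ratio, could look like this:

```objectivec
// Sketch: mapping faceBox from tracking coordinates to preview-view coordinates.
// Assumes the preview fills previewBounds with the same aspect ratio as relativeSize.
static CGRect MappedFaceBox(BIOFaceTrackingInformation *info, CGRect previewBounds) {
    CGFloat sx = previewBounds.size.width  / info.relativeSize.width;
    CGFloat sy = previewBounds.size.height / info.relativeSize.height;
    CGRect box = info.faceBox;
    return CGRectMake(box.origin.x * sx, box.origin.y * sy,
                      box.size.width * sx, box.size.height * sy);
}
```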

Enums 

BIOFaceLivenessStatus

The enum describes the liveness status of a BIOFaceImage.

| Attribute | Description |
| --- | --- |
| BIOFaceLivenessStatusFake | The SDK determined the liveness challenge to have failed. The face image is probably a photocopy or a mask. |
| BIOFaceLivenessStatusLive | The SDK determined the liveness challenge to have passed. The face image is probably of a live person. |
| BIOFaceLivenessStatusNoDecision | The SDK cannot determine whether the face image is of a live person or of a mask/photocopy. The image should be processed by the server to be sure. |
| BIOFaceLivenessStatusUnknown | Capture was performed using NoLiveness mode. |
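In a capture-finished callback, an app typically branches on this status. A minimal sketch, where faceImage is the BIOFaceImage delivered by the handler:

```objectivec
// Sketch: acting on the liveness status of a captured BIOFaceImage.
switch (faceImage.livenessStatus) {
    case BIOFaceLivenessStatusLive:
        // Probably a live person: proceed with enrollment or matching.
        break;
    case BIOFaceLivenessStatusNoDecision:
        // Undecided: send the encrypted metadata to the server for verification.
        break;
    case BIOFaceLivenessStatusFake:
        // Probably a photocopy or mask: reject and ask the user to retry.
        break;
    case BIOFaceLivenessStatusUnknown:
        // Capture ran in NoLiveness mode: no liveness decision is available.
        break;
}
```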

BIOCapturingInfo

This is the Bio capture info constants.

| Attribute | Description | Notes |
| --- | --- | --- |
| BIOCapturingInfoUndefined | Undefined info. | OBSOLETE |
| BIOCapturingInfoFaceInfoGetOutField | Get out of the camera field. | OBSOLETE |
| BIOCapturingInfoFaceInfoComeBackField | Come back into the camera field. | |
| BIOCapturingInfoFaceInfoTurnLeft | Turn head left. | |
| BIOCapturingInfoFaceInfoTurnRight | Turn head right. | OBSOLETE |
| BIOCapturingInfoFaceInfoCenterTurnLeft | Face center but turn head left. | |
| BIOCapturingInfoFaceInfoCenterTurnRight | Face center but turn head right. | |
| BIOCapturingInfoFaceInfoCenterRotateDown | Face center but rotate head down. | |
| BIOCapturingInfoFaceInfoCenterRotateUp | Face center but rotate head up. | |
| BIOCapturingInfoFaceInfoCenterTiltLeft | Face center but tilt head left. | |
| BIOCapturingInfoFaceInfoCenterTiltRight | Face center but tilt head right. | |
| BIOCapturingInfoFaceInfoCenterMoveForwards | Move forwards. | |
| BIOCapturingInfoFaceInfoCenterMoveBackwards | Move backwards. | |
| BIOCapturingInfoFaceInfoCenterLookFrontOfCamera | Look in front of the camera. | OBSOLETE |
| BIOCapturingInfoFaceInfoCenterLookCameraWithLessMovement | Look at the camera with less movement. | OBSOLETE |
| BIOCapturingInfoFaceInfoTurnLeftRight | Turn left then right, or right then left. | OBSOLETE |
| BIOCapturingInfoFaceInfoTurnDown | Turn head down. | OBSOLETE |
| BIOCapturingInfoFaceInfoTimeOut | Timeout occurred. | OBSOLETE |
| BIOCapturingInfoFaceInfoUnsuccessfulAttempt | Unsuccessful attempt. | OBSOLETE |
| BIOCapturingInfoFaceInfoTooFast | Moved the head too fast. | |
| BIOCapturingInfoFaceInfoCenterGood | Good position of head. | |
| BIOCapturingInfoFaceInfoDontMove | Don't move. | |
| BIOCapturingInfoFaceInfoChallenge2D | Move the challenge point to the target using head movements. | |
| BIOCapturingInfoFaceInfoMoveBrighterArea | Move to a brighter area, because the exposure is too dark. | OBSOLETE |
| BIOCapturingInfoFaceInfoMoveDarkerArea | Move to a darker area, because the exposure is too bright. | OBSOLETE |
| BIOCapturingInfoFaceInfoStandStill | Stand still during the illumination check. | |
| BIOCapturingInfoDeviceMovementDetected | Device is being moved but should stay still. | |
| BIOCapturingInfoDeviceMovementEnded | Device is no longer moving and stays still. | |
| BIOCapturingInfoNoFaceMovementDetected | Head movement is expected but not detected. | |
| BIOCapturingInfoFaceInfoOpenEyes | Open eyes. | OBSOLETE |

FaceCaptureLivenessMode

This is the enum used to configure the behavior of BioCapture.

| Attribute | Description |
| --- | --- |
| FaceCaptureLivenessModePassiveVideo | Face tracking with passive video liveness, for server integration only. |
| FaceCaptureLivenessModePassive | Face tracking with passive liveness. |
| FaceCaptureLivenessModeActive | Face tracking using the CR2D liveness challenge. |
| FaceCaptureLivenessModeNoLiveness | Face tracking with the default subprofile. |

FaceLivenessSecurityLevel

This enum is used to configure the security level of the selected capture mode. The levels have different APCER (Attack Presentation Classification Error Rate) values. APCER is the proportion of presentations using the same PAI (Presentation Attack Instrument) species incorrectly classified as bona fide presentations in a specific scenario. It is the equivalent of False Acceptance: the lower, the better.

By default:

  • FaceLivenessSecurityLevelHigh is used for all modes (FaceCaptureLivenessModeActive, FaceCaptureLivenessModePassive, FaceCaptureLivenessModePassiveVideo)

| Attribute | Description | Notes |
| --- | --- | --- |
| FaceLivenessSecurityLevelLow | Low security level | APCER ~6% |
| FaceLivenessSecurityLevelMedium | Medium security level | APCER ~3% |
| FaceLivenessSecurityLevelHigh | High security level | APCER ~1% |

The goal of the face liveness feature is to provide mechanisms that prevent and fight fraud attempts, for example attempts that use still images, photos, or videos of a given person.

This IDEMIA Biometric Capture SDK offers three anti-spoofing measures:

  1. Artefact detection: the SDK detects artefacts that occur when a fraudster tries to perform face acquisition by placing the smartphone’s camera in front of a video feed.
  2. 3D model analysis: the SDK detects that the face is in three dimensions. To do this, the SDK asks the user to move his head or his smartphone.
  3. CR2D challenge: the SDK asks the user to perform a task. The task is to move the target point using a movement of the user's head.
| Biometric Mode | 3D model analysis | Artefacts (video attack detection) |
| --- | --- | --- |
| FaceCaptureModeLivenessLow | LOW | ON |
| FaceCaptureModeLivenessActive | HIGH | ON |
| FaceCaptureModeLivenessPassive | HIGH | ON |
| FaceCaptureModeLivenessPassiveVideo | SERVER ONLY | SERVER ONLY |
| FaceCaptureModeNoLiveness | Not applicable | OFF |

Use cases 

Capture Face Biometrics

Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and get information about the biometry (Move your head to the left …).

capture_images.png

Capture Timeout

Below is displayed the generic execution flow to be followed when a capture timeout happens.

capture_timeout.png

Capture Enroll

Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and after that, extract the biometric's template from the image returned by the capture component. Once we have the template, we store it in a database and link it to one user by adding the user ID to the template. As a result of the insertion we receive the UUID of this template in the database.

capture_enrol.png

Compression Recommendations 

Selfie images:

  • Recommended width is 400 px.
  • Recommended compression is JPEG90.
  • Size of image will be about 100 KB.
Objective-C
- (NSData *)compressFaceImage:(UIImage *)image {
    NSInteger imageWidth = 400;
    CGFloat compressionQuality = 0.90;

    CGFloat scaleFactor = imageWidth / image.size.width;
    NSInteger imageHeight = image.size.height * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(imageWidth, imageHeight));
    [image drawInRect:CGRectMake(0, 0, imageWidth, imageHeight)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return UIImageJPEGRepresentation(scaledImage, compressionQuality);
}

BIOMatcherHandler 

When the BIOMatcherHandler is meant to be used for one of the SDK's face variants (face, face_document, biometry, biometry_document), it requires one of the face algorithms to be added to the project.

Authentication 

This verifies a list of candidate templates against a list of reference templates.

Please check the use case named Verify.

Objective-C
BIOAuthenticationOptions* options = [[BIOAuthenticationOptions alloc] initWithModality:BIOModalityFace];
[self.matcherHandler authenticateWithOptions:options
                      withBiometricCandidate:biometricCandidate
                      withBiometricReference:biometricReference
                        andCompletionHandler:^(BIOAuthenticationResult* result, NSError* error) {
    // ...
}];

| Parameter | Description |
| --- | --- |
| options BIOAuthenticationOptions* | The options used to perform the authentication. |
| biometricCandidate BIOBiometricCandidate* | The biometric candidate with the templates that you want to match. |
| biometricReference BIOBiometricReference* | The biometric reference with a list of templates used as reference. |
| completionHandler void(^)(BIOAuthenticationResult* result, NSError* error) | Callback to be executed when authentication is finished. |

Identify 

This identifies, against a list of reference templates, the user to whom the list of candidate templates belongs. This method can also be used to verify/authenticate users.

Please check the use case named Identify.

Objective-C
BIOIdentificationOptions* options = [[BIOIdentificationOptions alloc] initWithModality:BIOModalityFace];
[matcher identifyWithOptions:options
      withBiometricCandidate:biometricCandidate
     withBiometricReferences:biometricReferences
        andCompletionHandler:^(BIOIdentificationResult* result, NSError* error) {
    // ...
}];

| Parameter | Description |
| --- | --- |
| options BIOIdentificationOptions* | The options used to perform the identification. |
| biometricCandidate BIOBiometricCandidate* | The biometric candidate with the templates that you want to match. |
| biometricReferences NSArray<BIOBiometricReference*>* | The list of biometric references with a list of templates used as references. |
| completionHandler void(^)(BIOIdentificationResult* result, NSError* error) | Callback to be executed when the identification is finished. |

Detect Biometric 

This detects the biometrics in a BIOImage. This function is intended to be used to extract all the biometric templates contained in an image (for example, all the faces in an image).

Please check the use case named Detect Biometric.

Objective-C
BIODetectBiometricOptions* options = [BIODetectBiometricOptions biometricsWithLocation:BIOLocationFaceFrontal
                                                                          withModality:BIOModalityFace];
[matcher detectBiometricWithOptions:options
                       withBIOImage:image
              withCompletionHandler:^(NSArray<BIOTemplate*>* templates, NSError* error) {
    // ...
}];

| Parameter | Description |
| --- | --- |
| options BIODetectBiometricOptions* | The options used to perform the detection. |
| image BIOImage | The image. |
| completionHandler void(^)(NSArray<BIOTemplate*>* templates, NSError* error) | Callback to be executed when the detection is finished. |

Destroy 

This releases all the handler resources.

Objective-C
[matcher destroy];

Helper Objects 

BIOMatcherHandlerOptions

This object is used to configure the behavior of MatcherHandler.

| Attribute | Description |
| --- | --- |
| logLevel BIOLogLevel | Level of logs that are displayed during debug. |

BIOMatchingOptions

This is the object that represents the basic matching options.

| Parameter | Description |
| --- | --- |
| modality BIOModality | The BIOModality option. |

BIOAuthenticationOptions

This is the object that represents the authentication options. This object extends BIOMatchingOptions.

| Parameter | Description |
| --- | --- |
| modality BIOModality | The BIOModality option. |
| threshold int | The authentication threshold for a match to be considered valid (default value: 3500). |

Note: The threshold is the score value that is used to differentiate a HIT from a NOHIT.

| FAR | 1% | 0.1% | 0.01% | 0.001% | 0.0001% | 0.00001% |
| --- | --- | --- | --- | --- | --- | --- |
| Score | 2500 | 3000 | 3500 | 4000 | 4500 | 5000 |

FAR: proportion of requests that generate a non-expected HIT with two biometric acquisitions of two different persons.

For the use case of a selfie against a selfie within the context of a smartphone, the recommended threshold is 3500.
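The threshold check is what BIOAuthenticationResult.matchesSuccessfully already encodes; if you want to inspect the raw score yourself, the decision reduces to a comparison against the recommended value. A minimal sketch using the authenticate call documented above:

```objectivec
// Sketch: interpreting an authentication score against the recommended threshold.
// A score of 3500 corresponds to FAR ~0.01% (see the table above).
const long kRecommendedThreshold = 3500;
[self.matcherHandler authenticateWithOptions:options
                      withBiometricCandidate:biometricCandidate
                      withBiometricReference:biometricReference
                        andCompletionHandler:^(BIOAuthenticationResult *result, NSError *error) {
    if (error != nil) { return; }
    BOOL hit = result.score > kRecommendedThreshold;  // equivalent to result.matchesSuccessfully
    NSLog(@"score=%ld -> %@", result.score, hit ? @"HIT" : @"NOHIT");
}];
```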

BIOIdentificationOptions

This is the object that represents the identification options. This object extends BIOMatchingOptions.

| Parameter | Description |
| --- | --- |
| modality BIOModality | The BIOModality option. |

BIODetectBiometricOptions

This is the object that represents the detection options. This object extends BIOBiometrics.

| Parameter | Description |
| --- | --- |
| biometricLocation BIOLocation | The BiometricLocation enum option. |
| biometricModality BIOModality | The BiometricModality enum option. |

BIOBiometricReference

This is the object that represents a biometric reference to be compared against for authentication and identification purposes.

| Parameter | Description |
| --- | --- |
| templates NSArray<BIOTemplate*>* | The templates to be used. |
| userUUID NSString* | The user UUID to be matched against. |

BIOBiometricCandidate

This is the object that represents a biometric candidate that is used to be authenticated or identified.

| Parameter | Description |
| --- | --- |
| templates NSArray<BIOTemplate*>* | The templates to be used. |

BIOMatchingCandidate

This is the object that represents a candidate result.

| Parameter | Description |
| --- | --- |
| UUID NSString* | The candidate UUID. |
| score long | The identification score result. |

BIOAuthenticationResult

This is the object that represents an authentication result.

| Parameter | Description |
| --- | --- |
| score long | The authentication score result. |
| matchesSuccessfully BOOL | True if the score is greater than the threshold. |

BIOIdentificationResult

This is the object that represents an identification result.

| Parameter | Description |
| --- | --- |
| candidates NSArray<BIOMatchingCandidate*>* | The list of matching candidates. |

Use Cases 

Create BIOMatcherHandler

Below is displayed the generic execution flow to be followed to retrieve and release a BIOMatcherHandler.

create.png

Capture Verify

Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and after that extract the biometric's template from the image returned by the capture component (this is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and verify that the candidate template belongs to the user. There are two ways to obtain the list of reference templates: one is retrieving them from the database used during the enrollment process; the other is extracting the templates from another image with detectBiometricWithOptions:withBIOImage:withCompletionHandler:.

capture enrol
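In code, the second variant (extracting references from another image) chains the two calls documented above. The sketch below assumes plain initializers and writable templates properties on BIOBiometricReference and BIOBiometricCandidate, which should be checked against the SDK headers:

```objectivec
// Sketch: verify a captured candidate template against templates extracted
// from a reference image. Initializers are assumed; check the SDK headers.
[matcher detectBiometricWithOptions:detectOptions
                       withBIOImage:referenceImage
              withCompletionHandler:^(NSArray<BIOTemplate *> *refTemplates, NSError *detectError) {
    if (detectError != nil) { return; }

    BIOBiometricReference *reference = [[BIOBiometricReference alloc] init];
    reference.templates = refTemplates;            // property from the table above

    BIOBiometricCandidate *candidate = [[BIOBiometricCandidate alloc] init];
    candidate.templates = candidateTemplates;      // templates from the capture step

    BIOAuthenticationOptions *authOptions = [[BIOAuthenticationOptions alloc] initWithModality:BIOModalityFace];
    [matcher authenticateWithOptions:authOptions
              withBiometricCandidate:candidate
              withBiometricReference:reference
                andCompletionHandler:^(BIOAuthenticationResult *result, NSError *authError) {
        // result.matchesSuccessfully tells whether the capture belongs to the user.
    }];
}];
```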

Capture Identify

Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and after that extract the biometric’s template from the image returned by the capture component (This is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and identify to which user belongs the candidate template.

capture_identify

Authenticate

Below is displayed the generic execution flow to be followed to perform a generic authentication process extracting the biometric’s template from an image (This is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and verify that the candidate template belongs to the user. There are two ways to extract a list of template references. One is retrieving them from the database used during the enrollment process. The second one is extracting the templates from another image with detectBiometricWithOptions:withBIOImage:withCompletionHandler:.

verify

Identify

Below is displayed the generic execution flow to be followed to perform a generic identification process extracting the biometric's template from an image. Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and identify to which user the candidate template belongs.

identify

Detect Biometric

This detects the biometrics in a BIOImage. This function is intended to be used to extract all the biometric templates contained in an image (for example, all the faces in an image).

detect biometric

ImageUtils 

The SDK provides methods to perform various operations on a BIOImage, such as converting it to various image formats. These operations are described below.

Compress BIOImage to JPEG 

This is the method of the BIOImage class that converts the BIOImage object to an NSData object containing a JPEG file with default compression quality (90% for finger images, 80% for face images, 70% for document images). The created JPEG will contain capture maker note data inside the EXIF metadata, with information such as the SDK version used for capturing the image.

Objective-C
- (NSData *)toJPEG;

| Return | Description |
| --- | --- |
| NSData* | JPEG file binary data. |

Compress BIOImage to JPEG with custom quality 

This is the method of the BIOImage class that converts the BIOImage object to an NSData object containing a JPEG file with a given compression quality level. The created JPEG will contain capture maker note data inside the EXIF metadata, with information such as the SDK version used for capturing the image.

Objective-C
- (NSData *)toJPEGWithQuality:(CGFloat)quality;

| Parameter | Description |
| --- | --- |
| quality CGFloat | Compression quality in range [0, 1]. |

| Return | Description |
| --- | --- |
| NSData* | JPEG file binary data. |
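Tying this to the compression recommendations above, a captured face BIOImage can be exported for upload like so (outputPath is a placeholder chosen by the integrator):

```objectivec
// Sketch: exporting a captured face image as JPEG.
NSData *jpegDefault = [bioImage toJPEG];                  // default quality per image type
NSData *jpegCustom  = [bioImage toJPEGWithQuality:0.90];  // 90%, per the selfie recommendation
[jpegCustom writeToFile:outputPath atomically:YES];       // outputPath: integrator-chosen path
```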

Get UIImage from BIOImage 

This is the method of UIImage (BIOImage) extension that converts a BIOImage to a UIImage.

Objective-C
+ (UIImage*)imageFromBIOImage:(BIOImage*)bioImage;

| Parameter | Description |
| --- | --- |
| bioImage BIOImage* | BIOImage to convert to UIImage. |

| Return | Description |
| --- | --- |
| UIImage* | UIImage created from the BIOImage. |

Get BIOImage from UIImage 

This is the method of the BIOImage (ImageGetters) extension that converts UIImage to BIOImage.

Objective-C
+ (BIOImage*)BIOImageFromUIImage:(UIImage*)image;

| Parameter | Description |
| --- | --- |
| image UIImage* | UIImage to convert to BIOImage. |

| Return | Description |
| --- | --- |
| BIOImage* | BIOImage created from the UIImage. |

Create a BIOImage with Different Color Space 

This is the method of the BIOImage (ImageGetters) extension that converts a BIOImage to another BIOImage with a different color space.

Objective-C
- (BIOImage*)BIOImageWithColorSpace:(BIOColorSpace)colorSpace;

| Parameter | Description |
| --- | --- |
| colorSpace BIOColorSpace | Color space wanted for the new BIOImage. |

| Return | Description |
| --- | --- |
| BIOImage* | BIOImage with the color space provided. |

Compress Image with Quality to NSData 

This is a method of the BIOImage (BIOResize) extension that compresses an image to an NSData* with the given quality.

Objective-C
- (NSData*)dataByCompressingImageWithQuality:(uint32_t)quality;

| Parameter | Description |
| --- | --- |
| quality uint32_t | Quality of the compression (value between 1 and 100). |

| Return | Description |
| --- | --- |
| NSData* | Final data with compression applied. |

Compress Image with Quality to BIOImage 

This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with the given quality.

Objective-C
- (BIOImage*)imageByCompressingImageWithQuality:(uint32_t)quality;

| Parameter | Description |
| --- | --- |
| quality uint32_t | Quality of the compression (value between 1 and 100). |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with compression applied. |

Compress Image from Size in Kilobytes to NSData 

This is the method of the BIOImage (BIOResize) extension that compresses an image to an NSData* with a determined final size in kilobytes.

Objective-C
- (NSData*)dataByCompressingImageToSizeInKilobytes:(CGFloat)sizeInKilobytes;

| Parameter | Description |
| --- | --- |
| sizeInKilobytes CGFloat | Final size in kilobytes. |

| Return | Description |
| --- | --- |
| NSData* | Final data with compression applied. |

Compress Image from Size in Kilobytes to BIOImage 

This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with a determined final size in kilobytes.

Objective-C
- (BIOImage*)imageByCompressingImageToSizeInKilobytes:(CGFloat)sizeInKilobytes;

| Parameter | Description |
| --- | --- |
| sizeInKilobytes CGFloat | Final size in kilobytes. |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with compression applied. |

Compress Image from WSQ Ratio to NSData 

This is the method of the BIOImage (BIOResize) extension that compresses an image to WSQ-format NSData* with a determined WSQ ratio. Only fingerprint images should be used with this method.

Objective-C
- (NSData*)dataByCompressingImageToWSQRatio:(CGFloat)ratio withScannerBlack:(Byte)scannerBlack andScannerWhite:(Byte)scannerWhite;

| Parameter | Description |
| --- | --- |
| ratio CGFloat | WSQ ratio for the compression (value between 1.6 and 8000; recommended value between 12 and 15). |
| scannerBlack Byte | Black calibration value (if unknown, use 0). |
| scannerWhite Byte | White calibration value (if unknown, use 255). |

| Return | Description |
| --- | --- |
| NSData* | Final data with compression applied. |

Compress Image from WSQ Ratio to BIOImage 

This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with a determined WSQ ratio. Only fingerprint images should be used with this method.

Objective-C
- (BIOImage*)imageByCompressingImageToWSQRatio:(CGFloat)ratio withScannerBlack:(Byte)scannerBlack andScannerWhite:(Byte)scannerWhite;

| Parameter | Description |
| --- | --- |
| ratio CGFloat | WSQ ratio for the compression (value between 1.6 and 8000; recommended value between 12 and 15). |
| scannerBlack Byte | Black calibration value (if unknown, use 0). |
| scannerWhite Byte | White calibration value (if unknown, use 255). |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with compression applied. |

Compress Image from JPEG2000 to NSData 

This is the method of the BIOImage (BIOResize) extension that compresses an image to an NSData* in JPEG2000 format with a determined maximum size in kilobytes. Only fingerprint images should be used with this method.

Objective-C
- (NSData*)dataByCompressingImageToJPEG2000InKilobytes:(CGFloat)maximumSizeInKilobytes;

| Parameter | Description |
| --- | --- |
| maximumSizeInKilobytes CGFloat | Maximum size in kilobytes. |

| Return | Description |
| --- | --- |
| NSData* | Final data with compression applied. |

Compress Image from JPEG2000 to BIOImage 

This is the method of the BIOImage (BIOResize) extension that compresses an image to JPEG2000 in a BIOImage* with a determined maximum size in kilobytes. Only fingerprint images should be used with this method.

Objective-C
- (BIOImage*)imageByCompressingImageToJPEG2000InKilobytes:(CGFloat)maximumSizeInKilobytes;

| Parameter | Description |
| --- | --- |
| maximumSizeInKilobytes CGFloat | Maximum size in kilobytes. |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with compression applied. |

Crop Image Region (document) 

This is the method of the BIOImage (RegionCropping) extension to crop a BIOImage to a determined region.

Objective-C
+ (BIOImage*)cropImage:(BIOImage*)bioImage toRegion:(BIODocumentTrackingInformation*)documentTrackingInfo;

| Parameter | Description |
| --- | --- |
| bioImage BIOImage* | BIOImage to be cropped. |
| documentTrackingInfo BIODocumentTrackingInformation* | Region coordinates that the BIOImage will be cropped to. |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |

Crop Image Rect 

This is the method of the BIOImage (Cropping) extension to crop a BIOImage to a determined rectangle.

Objective-C
- (instancetype _Nullable)cropToRect:(CGRect)rect withMargin:(CGFloat)margin;
- (instancetype _Nullable)cropToRect:(CGRect)rect; // margin = 0

| Parameter | Description |
| --- | --- |
| rect CGRect | Region coordinates that the BIOImage will be cropped to. |
| margin CGFloat | Optional cropping margin. |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |

Crop Image Points 

This is the method of the BIOImage (Cropping) extension to crop a BIOImage to a region determined by four points.

Objective-C
- (instancetype _Nullable)cropToRegionWithPoint1:(CGPoint)point1 point2:(CGPoint)point2 point3:(CGPoint)point3 point4:(CGPoint)point4 withMargin:(CGFloat)margin;
- (instancetype _Nullable)cropToRegionWithPoint1:(CGPoint)point1 point2:(CGPoint)point2 point3:(CGPoint)point3 point4:(CGPoint)point4; // margin = 0

| Parameter | Description |
| --- | --- |
| point1 CGPoint | First point of the region the BIOImage will be cropped to. |
| point2 CGPoint | Second point of the region the BIOImage will be cropped to. |
| point3 CGPoint | Third point of the region the BIOImage will be cropped to. |
| point4 CGPoint | Fourth point of the region the BIOImage will be cropped to. |
| margin CGFloat | Optional cropping margin. |

| Return | Description |
| --- | --- |
| BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |