Introduction
The Capture SDK is intended for developers who want to use IDEMIA biometric technologies within their mobile apps.
The main features are:
- Biometric captures
- Biometric coding
- Biometric authentication and identification
- Identity documents reading
Please refer to the Release Notes for the list of improvements and fixed issues.
Prerequisites
Skills Required
The integration tasks should be performed by developers with knowledge of:
- Xcode
- Objective-C/Swift
- iOS (minimum version is 15.0)
- (optional) CocoaPods
Resources Required
Integration should be performed on a Mac.
The tools required are:
- Xcode with support for iOS 15
- iOS device (simulator is not supported)
Licenses Required
Depending on which variant of the library is used, the required licenses are:
- Biometry + Document:
  - MORPHOFACS
  - VERIF
  - IDENT
  - MIMA
  - MSC_CORE
  - MSC_LIVENESS
- Biometry:
  - MORPHOFACS
  - VERIF
  - IDENT
  - MIMA
  - MSC_CORE
  - MSC_LIVENESS
- Document:
  - MIMA
  - MSC_CORE
Note: To enable the video dump feature, you will also need:
- MSC_DUMP
Sample Project
The sample project is provided along with this documentation.
Getting Started
Components
The SDK is composed of seven components:
- LicenseManager: Object responsible for handling the license.
- FaceCaptureHandler: Handles the capture of the face biometrics through the camera of the device.
- FingerCaptureHandler: Handles the capture of the finger biometrics through the camera of the device.
- BioMatcherHandler: Handles the biometric coding and matching.
- BioStoreDB: Repository for storing biometric templates (optional; use it if you don't want to implement your own database).
- DocumentCaptureHandler: Handles the document reading features (such as reading MRZ documents).
- ImageUtils: Handles image format conversion, in case the integrator needs to change the image format or import an image.
SDK Variants
The SDK comes in six different variants:

- Full SDK: Contains all the features of the SDK. Includes components: LicenseManager, FaceCaptureHandler, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK'
```

- Biometry variant: Contains all biometric features, including face and finger capture as well as the biometric coding and matching used for liveness checking. Includes components: LicenseManager, FaceCaptureHandler, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK-biometry'
```

- Document variant: Contains only document capture features. Includes components: LicenseManager, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK-document'
```

- Finger variant: Contains only finger capture features, including the biometric coding and matching used for finger liveness checking. Includes components: LicenseManager, FingerCaptureHandler, BioMatcherHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK-finger'
```

- Face variant: Contains only face capture features, without biometric coding and matching, so for liveness checking you need to use an external server. Offline liveness checking is not available. Includes components: LicenseManager, FaceCaptureHandler, BioStoreDB, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK-face'
```

- Face+document variant: Contains only face capture features, without biometric coding and matching, so for liveness checking you need to use an external server. Offline liveness checking is not available. Additionally, it contains document capture features. Includes components: LicenseManager, FaceCaptureHandler, BioStoreDB, DocumentCaptureHandler, ImageUtils. You can integrate it into your project by specifying it in your Podfile:

```ruby
pod 'BiometricSDK-face_document'
```
Analytics
Capture SDK offers a logging mechanism that collects analytics data about SDK usage and sends this data to IDEMIA's server in the EU. This data helps IDEMIA to improve Capture SDK and the likelihood of integrator success within the app. It is strongly recommended to use the analytics mechanism.
- Sending analytics data is enabled by default.
- You can enable or disable sending analytics data.
- You can choose to send analytics data only when you are connected to a Wi-Fi network, so as not to use your cellular connection.
- Analytics data that IDEMIA collects contains only technical data.
- No sensitive personal data is collected.
- IDEMIA does not collect any images.
Analytics data that we collect includes the following information:
- Application name, bundle id, version
- Capture SDK and RemoteLogger libraries versions
- Capture SDK plugins versions
- Device model and operating system version
- Technical information about performed face, finger, and document capture (such as: used capture mode; timestamp; reason of error; time needed to perform a capture; quality of captured image; and light condition)
- Technical information about performed authentication and identification events (such as: used threshold, duration, and obtained score)
- Other technical information (such as: image compression, occurred errors, and SDK performance) that does not contain personal data
You can disable analytics reporting using the appropriate SDK method.
Analytics are enabled by default, and data is sent through Wi-Fi and cellular connections to IDEMIA's server in Europe. You can switch to the US server by calling:

```objectivec
[[BIORemoteLogger sharedInstance] setConfiguration:[BIORemoteLoggerConfiguration defaultUS]];
```
You can switch to Wi-Fi only mode with:

```objectivec
[BIORemoteLogger sharedInstance].wifiOnly = YES;
```
You can disable analytics with:

```objectivec
[BIORemoteLogger sharedInstance].loggerDisabled = YES;
```
You should perform the above calls before using the SDK, for example, in your app delegate.
Project Configuration
Before using our SDK you need to add Privacy - Camera Usage Description (the NSCameraUsageDescription key) to the Info.plist of your application, as the SDK will need to use the camera.
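For reference, a minimal Info.plist entry could look like the following; the description string is only an example and should be tailored to your app:

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to capture your face and documents for identity verification.</string>
```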
Adding the Biometric Capture SDK Framework
We serve our artifacts via Artifactory. As an integrator, you can choose one of the following methods of adding our framework to your project:
- using CocoaPods
- manually
Use CocoaPods (along with the cocoapods-art plugin)
If you don't already have the cocoapods-art plugin, install it by running the following command:

```bash
gem install cocoapods-art
```
- The plugin uses authentication as specified in a standard .netrc file:

```
machine mi-artifactory.otlabs.fr
login ##USERNAME##
password ##PASSWORD##
```
- Once set, add our repo to your CocoaPods dependency management system:

```bash
pod repo-art add smartsdk "https://mi-artifactory.otlabs.fr/artifactory/api/pods/smartsdk-ios-local"
```
- At the top of your project's Podfile, add:

```ruby
plugin 'cocoapods-art', :sources => [
  'master',  # so it can resolve dependencies from the master repo (the main one); for newer CocoaPods versions (1.10+) it may no longer be needed
  'smartsdk' # so it can resolve the BiometricSDK dependency
]
```
- Add the Capture SDK to your Podfile in one of its variants:

```ruby
pod 'BiometricSDK'               # Full version of the SDK; contains biometric & document features
pod 'BiometricSDK-biometry'      # Contains only biometric (finger+face) features
pod 'BiometricSDK-finger'        # Contains only finger features
pod 'BiometricSDK-face'          # Contains only face features, without the internal liveness checking mechanism
pod 'BiometricSDK-document'      # Contains only document features
pod 'BiometricSDK-face_document' # Contains only face features (without the internal liveness checking mechanism) & document features
```
The above configuration installs iOS frameworks. For XCFrameworks, the 'XCFramework' subspec can be used for the face, face_document, and document variants, e.g.:

```ruby
pod 'BiometricSDK-face_document/XCFramework'
```
- Then install the pods:

```bash
pod install
```
Note: If you are already using our repo and you cannot resolve some dependency, try updating the specs:

```bash
pod repo-art update smartsdk
```
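Putting the CocoaPods steps above together, a complete minimal Podfile could look like the sketch below; the target name MyApp, the platform line, and the chosen variant are only examples:

```ruby
# Podfile (illustrative sketch)
plugin 'cocoapods-art', :sources => [
  'master',
  'smartsdk'
]

platform :ios, '15.0'

target 'MyApp' do
  use_frameworks!
  pod 'BiometricSDK'  # or any other variant, e.g. 'BiometricSDK-face'
end
```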
Manually
- Download the artifact manually from Artifactory:
SDK variant | SDK link |
---|---|
Face+Document+Fingerprint | BiometricSDK-biometry_document.tar.gz |
Face+Fingerprint | BiometricSDK-biometry.tar.gz |
Document | BiometricSDK-document.tar.gz |
Face | BiometricSDK-face.tar.gz |
Document+Face | BiometricSDK-face_document.tar.gz |
Fingerprint | BiometricSDK-finger.tar.gz |
Algorithms | BiometricSDK-algorithms.tar.gz |
Plugins | BiometricSDK-plugins.tar.gz |
- In the project editor, select the target to which you want to add a library or framework.
- Click Build Phases at the top of the project editor.
- Open the Embedded Binaries section.
- Click the Add button (+).
- Click the Add Other button below the list.
- Add the following items:
  - BiometricSDK.framework
  - (optional, for face capture) BiometricSDKFaceCapturePluginNormal.framework (or another face capturing plugin)
  - (optional, for face capture) BiometricSDKAlgorithmPlugin_F6_5_LOW70.framework (or another matching algorithm plugin)
  - (optional, for the new finger capture API) FingerCaptureSDK.framework
  - (optional, for the new finger capture API) BiometricSDKUIFinger.framework
Note: XCFrameworks are supported for the face, face_document, and document variants. XCFrameworks are not yet supported for the biometry, biometry_document, and fingerprint variants.
Plugins
Introduction
Capture SDK comes in a few different variants, which provide different types of components and functionality, as described on the Getting Started page of this guide. In addition to the variants, plugins have been introduced to give even more flexibility. Every integrator may have different needs and size requirements, which is why we introduced the plugin mechanism. Plugins are split into two groups: feature and algorithm.
Feature Plugins
Feature plugins provide various SDK functionalities, such as face capture and document capture.
Algorithm Plugins
Algorithm plugins extract biometric data from images, match that data, and store it as templates.
How it Works
To use a particular plugin, it only needs to be embedded within the application. If a configuration is not valid, an error will be raised at runtime. The most common use cases are described in the Recommended Configurations section.
When CocoaPods is used for configuration, every pod has its own default configuration, so not all elements need to be configured explicitly.
For example, pod 'BiometricSDK-face' is equivalent to setting:

```ruby
pod 'BiometricSDK-face/Framework'                 # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'     # algorithm plugin
```
XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework'                           # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'     # algorithm plugin
```
But when there is a need to use different plugins, all the elements need to be configured in the Podfile.
Face Capturing Plugins
Face capturing plugins provide various SDK face capturing methods.
FaceCapturePluginLite
This plugin is meant to work along with an external server for the liveness check. The SDK won't be able to determine whether the person in front of the camera is alive; it only runs the scanning process. As a result of the scanning, scanning metadata is provided. This metadata should be sent to the server to determine liveness. Because the scanning process doesn't do any internal face matching, the plugin doesn't require any algorithm plugin. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

```ruby
pod 'BiometricSDK-face/Framework'               # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureLite' # face capturing plugin
# No need to add an algorithm plugin, but BIOMatcherHandler won't work in that case
```
XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework'                         # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureLite-XCFramework' # face capturing plugin
# No need to add an algorithm plugin, but BIOMatcherHandler won't work in that case
```
FaceCapturePluginNormal
This is the default plugin, which has been used internally in the SDK since the beginning. It's meant to be used for face scanning with the offline liveness check. To determine liveness it uses internal face matching algorithms, which is why it requires one of the face algorithm plugins to work. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

```ruby
pod 'BiometricSDK-face/Framework'                 # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'     # it needs some algorithm to work
```
XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework'                           # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'     # it needs some algorithm to work
```
FaceCapturePluginCr2dMatching
This plugin extends FaceCapturePluginNormal by introducing additional face checks during the offline liveness check for the Active liveness scanning mode. The additional checks make sure that the same person performs the liveness check throughout the whole process. It requires more CPU power to work. Moreover, to determine liveness it uses internal face matching algorithms, which is why it requires one of the face algorithm plugins to work. The plugin can be added manually or with CocoaPods by specifying it in the Podfile:

```ruby
pod 'BiometricSDK-face/Framework'                       # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureCr2dMatching' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'           # it needs some algorithm to work
```
XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework'                                 # the SDK framework
pod 'BiometricSDK-face/Plugins/FaceCaptureCr2dMatching-XCFramework' # face capturing plugin
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'           # it needs some algorithm to work
```
Face Matching Algorithm Plugins
These algorithms are used for face matching. Matching can take place internally during face scanning, or during the authentication or identification processes.
Warning: The algorithms are NOT compatible with each other. Templates generated by one algorithm cannot be processed with another, i.e., it's not possible to match a template generated with F5_0_VID81 against a template generated with F5_4_LOW75. F5_0_VID81 was used as the built-in algorithm before SDK 4.22.0, so if an integrator upgrades from a version earlier than SDK 4.22.0, they should continue using F5_0_VID81. If an integrator wants to change the algorithm in their solution, all the stored templates will need to be recreated with the new algorithm.
F6_5_LOW70
This is the recommended algorithm plugin for selfie-versus-selfie matching.
It's a very accurate face matching algorithm. Templates are not compressed with this algorithm. It weighs ~7.8 MB (uncompressed 8.1 MB). It is not compatible with matching a selfie against a portrait scanned from an ID document.
```ruby
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'
```
XCFramework version:
```ruby
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'
```
F5_0_VID81
It's a small face matching algorithm. It can be used when matching accuracy is not a priority but size is, or when the SDK is used only for face capturing. It weighs ~4.1 MB (uncompressed 4.4 MB).
```ruby
pod 'BiometricSDK-face/Algorithms/F5_0_VID81'
```
XCFramework version:
```ruby
pod 'BiometricSDK-face/Algorithms/F5_0_VID81-XCFramework'
```
F6_0_IDD80
It's a small face matching algorithm. It can be used when matching accuracy is not a priority but size is, or when the SDK is used only for face capturing. It weighs ~3.0 MB (uncompressed 3.7 MB).
```ruby
pod 'BiometricSDK-face/Algorithms/F6_0_IDD80'
```
XCFramework version:
```ruby
pod 'BiometricSDK-face/Algorithms/F6_0_IDD80-XCFramework'
```
F5_4_LOW75
It's a more accurate face matching algorithm than F6_0_IDD80 and F5_0_VID81. It compresses templates to a 116 B block, so a template can be encoded even into a QR code and shared between phones. It's better for matching a face photo from a document against one from a face scan. It weighs ~12.8 MB (uncompressed 15.5 MB).
```ruby
pod 'BiometricSDK-face/Algorithms/F5_4_LOW75'
```
XCFramework version:
```ruby
pod 'BiometricSDK-face/Algorithms/F5_4_LOW75-XCFramework'
```
Recommended Configurations
- The liveness check and the face matching on an external server:

```ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureLite'
```

XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureLite-XCFramework'
```

- The liveness check on a device and the face matching on an external server:

```ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70' # any algorithm plugin
```

XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework' # any algorithm plugin
```

- The liveness check and the face matching on a device:

```ruby
pod 'BiometricSDK-face/Framework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70'
```

XCFramework version:

```ruby
pod 'BiometricSDK-face/XCFramework' # the SDK Core
pod 'BiometricSDK-face/Plugins/FaceCaptureNormal-XCFramework'
pod 'BiometricSDK-face/Algorithms/F6_5_LOW70-XCFramework'
```
SDK Size
SDK variant | SDK size |
---|---|
Face+Document+Fingerprint (offline liveness + matching) | 48.58 MB |
Face+Document+Fingerprint (offline liveness) | 40.35 MB |
Face+Document+Fingerprint (backend) | 36.28 MB |
Face+Fingerprint (offline liveness + matching) | 35.53 MB |
Face+Fingerprint (offline liveness) | 27.30 MB |
Face+Fingerprint (backend) | 23.23 MB |
Face (offline liveness + matching) | 32.65 MB |
Face (offline liveness) | 24.42 MB |
Document | 11.11 MB |
Face (backend) | 20.35 MB |
Document+Face (offline liveness + matching) | 45.72 MB |
Document+Face (offline liveness) | 37.49 MB |
Document+Face (backend) | 33.42 MB |
Fingerprint | 7.70 MB |
All sizes are estimated download sizes from the App Store on arm64 devices. Note that the size of a universal IPA file containing the SDK may differ noticeably if it is also built for other architectures and/or includes bitcode.
Sizes are totals for the whole packages, which include:
- an appropriate SDK variant
- capture plugins
- algorithms plugins
- UIExtension library
- UIExtension's additional resources like tutorials and animations
The different package variants contain:
- offline liveness + matching: an appropriate SDK, Face Normal plugin, F5_4_LOW75 algorithm, UIExtensions, face capturing tutorials
- offline liveness: SDK, Face Normal plugin, F6_0_IDD80 algorithm, UI
- backend: SDK, Face Lite plugin, no algorithm, UI
License Manager
Remember: A valid license is required before using any feature of the SDK.
To have a valid license:
- Obtain an instance of LicenseManager via the provideLicenseManager() method.
- Call the activate() method on it.
Before Starting
Note: If you use the debug LKMS server without an SSL connection, you should add permission for arbitrary loads in the transport security section of your Info.plist file. However, it's highly recommended NOT to set this permission without a good reason.
```xml
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```
New license manager
The license manager is the main entry point to the SDK. You can manage licenses through LicenseManager.
Note: A valid license is required before using any feature of the SDK.
provideLicenseManager
This static method provides an instance of LicenseManager with a predefined LKMS profile. Any interaction with LicenseManager must be executed before starting capture.

```swift
let manager = LicenseManager.provideLicenseManager(profileId: LkmsProfileId, apiKey: LkmsApiKey, serverUrl: lkmsUrl)
```
Activating license
This method fetches the license if it's not locally stored and activates it. Additionally, in cases where the license has expired, the function retrieves a new license. This process is crucial and must occur each time the application starts.
Callback solution:

```swift
manager.activate { (error: LicenseActivationError?) in
    if let error {
        // Failed to fetch or activate the license.
    } else {
        // License fetched and activated with success.
    }
}
```
Async solution:

```swift
let result: Result<Void, LicenseActivationError> = await manager.activate()
switch result {
case .success:
    // License fetched and activated with success.
case .failure(let error):
    // Failed to fetch or activate the license.
}
```
LicenseActivationError
This is the information about why the license could not be activated.
Attribute | Description |
---|---|
type ActivationErrorType | The type of error that occurred during the license activation. |
message String | The reason for the activation failure. |
ActivationErrorType
Attribute | Description |
---|---|
profileExpired | The profile has expired; all licenses won't work anymore. (Contact support.) |
activationCountExceeded | No more licenses can be consumed. (Contact support.) |
authenticationIssue | The credentials and/or profile are wrong. |
connectionIssue | Connection issue. (Check the internet connection and server URL.) |
licenseSignatureVerificationFailed | Verification of the license signature failed. This error is mostly thrown when an integrator switches between development and App Store builds. |
unknown | Unknown issue. |
Development issues
The LKMS license on your device may become invalid when switching between debug and App Store (also TestFlight) builds of your app. To obtain a valid license, uninstall the app first and then switch to the debug / App Store build. There is no need to uninstall the app when updating it from the same source (debug build replaced with debug build, or App Store build replaced with App Store build). The SDK has a dedicated error for this: licenseSignatureVerificationFailed.
NOTE: Since version 4.45.0 this issue no longer occurs.
New face capture API
Introduction
To make integration of the SDK easier and more intuitive, a new API for face capture has been delivered. It is based on self-explanatory use cases, each providing information specific to the given use case. This allows the integrator to focus on working with the data provided by the SDK rather than on SDK configuration.
NOTE: For now, the new API supports only the remote use case.
Integration steps
For a successful integration, four steps are needed.
- License activation and camera permission. To use the SDK, a proper license has to be activated. The License Manager section shows how to handle the license and permissions.
- Integration with services. The new face capture is currently considered a remote-only use case. To initiate the capture process, integration with proofing services is required. These proofing services are responsible for performing liveness detection, ensuring that the captured face image corresponds to a genuine person. Please look at the Creating Session paragraph.
- Use case creation. To start a capture, it is essential to create a specific use case instance. Each use case has a sessionInfo property that contains the session configuration and a set of delegates that will be used to handle callback methods. Use cases and delegates are described in more detail in the next paragraphs: Face Remote Use Case, Delegates. To easily customize the look and feel of face capture, an optional UISettings parameter can additionally be created; more on that parameter in the UISettings section. Once these three steps are satisfied, the FaceCaptureView can be set up by passing the use case instance and, optionally, the UI settings as parameters. After setting up the captureView, the capture can be started by calling the start() method. The delegate attached to the use case will receive data from the capture.

```swift
let environmentInfo = // select type of authorization
let delegates = // create delegates
let useCase = // create use case
let uiSettings = // optionally create UI settings, can be nil
self.captureView.setup(with: useCase, uiSettings: uiSettings)
self.captureView.start()
```

- Add FaceCaptureView to the layout. It inherits from UIView and has to be used as a preview on the capture UI. FaceCaptureView is also used to start the face capture and to cancel it if needed, so it is not only a capture preview but also an entry point to the SDK.

```swift
// FaceCaptureView
func setup(useCase: FaceCaptureUseCase, uiSettings: UISettings?)
func start()
func cancel()
```
Adding FaceCaptureView to the view controller.
Storyboard
In a project that uses storyboards, FaceCaptureView should be added as an IBOutlet to the layout of the view used for face capture.

```swift
@IBOutlet private weak var captureView: FaceCaptureView!
```
Programmatically
If you want to use FaceCaptureView programmatically, first add a property to your view controller class:

```swift
private weak var captureView: FaceCaptureView?
```

Then create and display the FaceCaptureView:

```swift
let captureView = FaceCaptureView()
captureView.translatesAutoresizingMaskIntoConstraints = false
captureView.contentMode = .scaleToFill
view.addSubview(captureView)
NSLayoutConstraint.activate([
    captureView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
    captureView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
    captureView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
    captureView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor)
])
self.captureView = captureView
```
Type of authorization
The new face capture is compatible with two types of authorization: API key and OAuth.
OAuth
With token-type authorization, an access token is generated by the authorization server using the provided secrets. This token can be utilized by creating an AccessToken object and using the appropriate initializer of the EnvironmentInfo class: init(accessToken: AccessToken, baseUrl: URL).
Getting started with OAuth
If you want to read more about OAuth you can check this website.
AccessToken
The AccessToken class holds information about the secret and token type from the OAuth authorization server.
Where can secrets be found for token generation?
Secrets can be found on this webpage: https://experience.idemia.com/dashboard/my-identity-proofing/access/environments/. You can generate them manually on the page or make a request to the authorization server.
An example successful response from the authorization server:
```
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"2YotnFZFEjr1zCsicMWpAA",
  "token_type":"example",
  "expires_in":3600
}
```
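As an illustration, the JSON body of such a response could be decoded on the app side before constructing the SDK's AccessToken. The TokenResponse type below is our own helper for this sketch, not part of the SDK:

```swift
import Foundation

// Hypothetical helper type mirroring the authorization server response; not part of the SDK.
struct TokenResponse: Decodable {
    let accessToken: String
    let tokenType: String
    let expiresIn: Int

    enum CodingKeys: String, CodingKey {
        case accessToken = "access_token"
        case tokenType = "token_type"
        case expiresIn = "expires_in"
    }
}

let json = """
{
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "token_type": "example",
  "expires_in": 3600
}
""".data(using: .utf8)!

// Decode the response; accessToken maps to AccessToken's secret, tokenType to its tokenType.
let response = try! JSONDecoder().decode(TokenResponse.self, from: json)
```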
AccessToken class description
Parameter | Description |
---|---|
secret String | This is the access_token parameter mapped from the authorization server response. |
tokenType String | Token type indicates how it should be used in the authorization request. This is the token_type parameter mapped from the authorization server response. |
API key
The API key is generated once and is passed to the apikey parameter in the EnvironmentInfo initializer: init(apiKey: String, baseUrl: URL)
Creating Session
Session creation should be implemented on the backend side. As a result of session creation, a sessionId is returned. To create a session, the following API calls have to be performed.
To proceed with ID&V platform/GIPS, follow these steps:
- Create an identity by calling the POST request: /gips/v1/identities. As a result, an identity is returned.
- Submit confirmation that the user has consented to perform specific evidence verifications with the POST request: /gips/v1/identities/{id}/consents
- Start a liveness session by calling the POST request: /gips/v1/identities/{id}/attributes/portrait/live-capture-session?mode=nativeSDK. A session id is returned as a result.
For more detailed instructions about creating a session, please refer to the ID&V API documentation, steps 1 to 3: Liveness video capture using a native SDK
To proceed with WebBio, follow these steps:
- To create a session, make a POST request to the endpoint /bioserver-app/v2/bio-sessions and include the session data in the request body.
- Optionally, retrieve the session path from the response.
- Retrieve the bioSession ID by making a GET request to /bioserver-app{bioSessionPath}. The response to this call will contain the session ID used by the SDK.
- Initialize the session by sending a POST request to /bioserver-app/v2/bio-sessions/{bioSessionId}/init-liveness-parameters, providing the session ID from the previous step and the liveness parameters in the request body.
For more detailed instructions on creating a session with WebBio, refer to the "Replay challenge" section in the IDEMIA Biometric Services Documentation. You can find the described API requests on the biometric-services API Explorer page.
Use Cases
Capture settings in the new face capture API are specified using predefined configurations designed for specific use cases. This way, capture configuration is more intuitive and less confusing.
- feedbackDelegate - receives feedback messages that can be mapped to instructions presented to the user,
- trackingDelegate - receives face tracking data that can be used to draw a box around tracked elements,
- livenessActiveDelegate - if the active liveness challenge is chosen, this delegate receives pointer and target status updates.
More information about delegates is covered in the Delegates section.
FaceRemoteCaptureUseCase
This use case performs face capture with backend communication. To provide the correct flow, sessionId, environmentInfo, and RemoteCaptureDelegates must be provided during RemoteUseCase initialization. The sessionId of the face capture session on the IDEMIA Identity and Verification service is required to start communication with the backend. More about creating a session can be found in the Creating Session section. The environmentInfo parameter contains the type of authorization (API key or OAuth token) as well as the address of the environment. After communication is established and the challenge is started, RemoteCaptureDelegates provide status and feedback information on the face capture.
RemoteCaptureUseCase field | Description |
---|---|
sessionId String | Live capture session id |
environmentInfo EnvironmentInfo | Object that contains both: url to face liveness server and secrets needed for authorization. It can be apikey or OAuth access token |
delegates RemoteCaptureDelegates | Delegates used to handle face capture callbacks. |
```swift
let useCase = FaceRemoteCaptureUseCase(sessionId: sessionId, delegates: delegates, environmentInfo: environmentInfo)
self.captureView.configure(with: useCase, uiSettings: uiSettings)
self.captureView.start()
```
UISettings
Optional parameter that can contain the configuration of UI element colors, sizes, etc. used in the face capture challenges displayed on FaceCaptureView. It allows individual customization of all 3 types of liveness challenges:
UISettings field | Description |
---|---|
passiveVideoCaptureSettings PassiveVideoCaptureSettings | Custom UI settings for the passive video capture challenge. |
passiveCaptureSettings PassiveCaptureSettings | Custom UI settings for the passive liveness challenge. |
livenessActiveCaptureSettings LivenessActiveCaptureSettings | Custom UI settings for the active liveness challenge. |
```swift
class UISettings {
    var passiveVideoCaptureSettings: PassiveVideoCaptureSettings?
    var passiveCaptureSettings: PassiveCaptureSettings?
    var livenessActiveCaptureSettings: LivenessActiveCaptureSettings?
}
```
Delegates
This section describes the delegates available for specific use cases within the new face capture API. Please remember that all UI operations should be performed on the main thread.
Capture Delegates
Delegates are the mechanism for returning information about the capture process and its status. To receive this information, simply set an instance of your class as a delegate and implement the methods from the protocol. Delegates are dedicated to specific use cases; however, all use cases share the following three general delegates:
Face feedback delegate
This delegate is mandatory: it instructs the user how to position the device properly during capture and also provides instructions to pass the liveness challenge.
```swift
func captureInfoReceived(feedback: FaceCaptureFeedback)
```
Method called when feedback info is available. Returns FaceCaptureFeedback, an enum with 11 possible messages. These feedback messages can be mapped and displayed to the user in the UI. For example, faceInfoCenterMoveForwards can be mapped to "Please move your face forward". More examples can be found in FaceSampleApp.
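The mapping above can be centralized in one function. As a minimal, hedged sketch, the enum below is a local stand-in (the real SDK type has 11 cases; only faceInfoCenterMoveForwards appears in this documentation, the other case names are assumptions for illustration):

```swift
import Foundation

// Local stand-in for the SDK's FaceCaptureFeedback enum.
// Only faceInfoCenterMoveForwards is confirmed by this documentation;
// the other cases are assumptions for illustration.
enum FaceCaptureFeedback {
    case faceInfoCenterMoveForwards
    case faceInfoCenterMoveBackwards
    case faceInfoCenterGood
}

// Map each feedback case to a user-facing instruction string.
func message(for feedback: FaceCaptureFeedback) -> String {
    switch feedback {
    case .faceInfoCenterMoveForwards:  return "Please move your face forward"
    case .faceInfoCenterMoveBackwards: return "Please move your face backward"
    case .faceInfoCenterGood:          return "Hold still"
    }
}
```

Keeping the mapping in one function makes it easy to localize the strings and to update them when the SDK adds feedback cases.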
Face tracking delegate
Returns the coordinates of the user's face position on screen.
```swift
func captureInfoReceived(trackingData: FaceTrackingInfo)
```
The delegate returns FaceTrackingInfo, which has the coordinates and dimensions of a frame that can be drawn around the user's face.
Liveness Active Delegate
Delegate used for Join-the-points challenge.
```swift
func pointerUpdated(pointerInfo: PointerInfo)
func targetUpdated(targetInfo: TargetInfo)
func targetsNumberReceived(numberOfTargets: Int)
```
Provides information about the states and positions of the UI elements used during the Active Liveness (Join The Points) challenge: the current pointer position on screen, the target coordinates and state, and the number of targets.
Remote Capture Delegates
Step Info Delegate
Delegate method that returns information about the current capture step, which reflects the status of the face capture process.
```swift
func captureInfoReceived(stepInfo: StepInfo)
```
Step info can have one of two values:
- captureStarted - returned after the capture is started. At this point any UI progress indicator should be hidden and FaceCaptureView has to be displayed to the user.
- preparingLiveness - returned in the context of liveness; it indicates that liveness metadata is being sent to the backend. This callback can be used to display a UI component that visually represents the ongoing task, such as a system UIActivityIndicatorView.
Passive Video Delegate
In the case of the passive video challenge, multiple events take place. This delegate provides information about what is happening and returns data that can be used as parameters for custom UI components.
```swift
func preparationStarted()
func preparationFinished()
func overlayDidUpdate(overlay: OvalOverlay)
func progressDidUpdate(progress: CGFloat)
```
The first two delegate methods report when video passive challenge preparation has started and when it has finished. This process takes a few seconds, so this information can be used to show a loading indicator or another custom waiting screen. Overlay updates via overlayDidUpdate are needed to draw a custom overlay on screen. It has the shape of an oval at the center of the screen, and the challenge for the user is to align their face inside it. While the user's face is placed inside the overlay, the progress reported via progressDidUpdate is updated. The progress value has to be displayed as a text or visual hint to the user.
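A minimal helper for turning the value from progressDidUpdate into label text; the clamping is a defensive assumption on our part, not an SDK requirement:

```swift
import Foundation

// Convert the 0.0–1.0 progress value into a percentage label,
// clamping out-of-range values defensively.
func progressLabel(for progress: CGFloat) -> String {
    let clamped = min(max(progress, 0), 1)
    return "\(Int((clamped * 100).rounded()))%"
}
```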
Capture Liveness Delegate
Delegate method that returns information about the liveness mode used for the current capture.
```swift
func captureLiveness(liveness: Liveness)
```
The liveness parameter contains information about the current mode used in the capture. It can have one of three values: active, passive, passiveVideo.
Capture Result Delegate
Delegate with one method, which is called after the remote face capture is finished.
```swift
func captureDidFinish(result: FaceCaptureResult)
```
It provides a FaceCaptureResult parameter which indicates whether the capture was successful. In the case of a failed capture, it returns an error. No other callback will occur after this method.
Liveness Processing Delegate
The delegate has one method, which is called during the preparingLiveness phase of the capture process.
```swift
func livenessMetadataUploadDidUpdate(progress: Double)
```
It provides information about upload progress of metadata.
Example integration
Example integration: View controller with remote capture use case set to passive liveness.
Important - There are two types of remote face liveness integration available:
- Integration with the ID&V platform/GIPS is demonstrated in the sample app called FaceSampleAppLite.
- Integration with the WebBio service is showcased in the sample app called FaceSampleAppLiteWBS.
If you want to see examples and observe how the integration works, you can refer to the sample applications.
```swift
import UIKit
import FaceCaptureSDK

// For convenience, the protocols used can be grouped
protocol CaptureDelegate: FaceFeedbackDelegate, FaceTrackingDelegate, CaptureResultDelegate, StepInfoDelegate {
}

class FaceCaptureViewController: UIViewController {
    // session id has to be provided by the customer backend
    private var sessionId: String

    override func viewDidLoad() {
        super.viewDidLoad()

        if let useCase = // create Remote Use Case providing session info, api key and host url
            let uiSettings = // create UISettings
            self.captureView.configure(with: useCase, uiSettings: uiSettings)
            self.captureView.start()
        }
    }
}

// Handling delegate callbacks
extension FaceCaptureViewController: CaptureDelegate {
    func captureInfoReceived(feedback: FaceCaptureFeedback) {
        // TODO: Map feedback entry to proper information and display it to the user
        // i.e. case faceInfoComeBackField: -> "Move closer"
    }

    func captureInfoReceived(trackingInfo: FaceTrackingInfo) {
        // TODO: If needed, a rectangle box can be drawn on screen using coordinates
        // from the tracking info object.
    }

    func captureDidFinish(result: FaceCaptureResult) {
        // Handle capture finished callback: show success or error screen.
    }

    func captureInfoReceived(stepInfo: StepInfo) {
        // Handle UI depending on capture state, for example:
        switch stepInfo {
        case .preparingLiveness:
            // Show loading indicator
            break
        case .captureStarted:
            // Hide loading indicator, show preview
            break
        }
    }
}
```
Errors
The FaceCaptureError object contains useful information that helps in handling a failed flow.
Parameter | Description |
---|---|
type FailureType | Type of an error. High level information what goes wrong. Find types description below. |
code Int | Specific code dedicated to a particular case. Very helpful in L2/L3 troubleshooting. |
message String | Message with error description. |
unlockDateTime NSNumber | Unix timestamp (UTC time zone) at which capture will be unblocked. This field has a value when ErrorType is deviceBlocked . Please remember, the value is expressed in milliseconds. |
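Since unlockDateTime is expressed in milliseconds, remember to divide by 1000 before building a Date. A hedged sketch (helper names are ours, not SDK API):

```swift
import Foundation

// unlockDateTime arrives as Unix epoch milliseconds (UTC);
// Date expects seconds, so divide by 1000.
func unlockDate(fromMilliseconds ms: NSNumber) -> Date {
    Date(timeIntervalSince1970: ms.doubleValue / 1000.0)
}

// Remaining lock duration in seconds (0 if already unlocked).
func secondsUntilUnlock(fromMilliseconds ms: NSNumber, now: Date = Date()) -> TimeInterval {
    max(0, unlockDate(fromMilliseconds: ms).timeIntervalSince(now))
}
```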
ErrorType
Type | Description |
---|---|
timeout | Timeout occurred during the flow. |
connectionIssue | Remote connection issue. |
authentication | Remote capture authentication issue. |
invalidSession | Remote session is invalid. |
badCapture | Face capture failed. |
cancelled | Capture has been cancelled by the end user. |
invalidLicense | The LKMS license is invalid. |
unknown | Unknown type of error. Also used as the default type in a few cases. |
deviceBlocked | Capture on this device has been blocked for a period of time because of too many failures. |
Network Security Configuration
The Face Capture SDK allows you to set up public key pinning for network communication with backend services. To enable this feature, simply add an object similar to the one below to your app's Info.plist file:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CaptureSDKNetworkSecurityConfig</key>
    <dict>
        <key>CaptureSDKDomainConfig</key>
        <array>
            <dict>
                <key>CaptureSDKDomain</key>
                <string>proofing.app.eu.identity-prod.idemia.io</string>
                <key>CaptureSDKPinSet</key>
                <array>
                    <string>VnaVguILcTfqALKLEhMtqKzXG6KK7w5T1Px7LO+dbVw=</string>
                </array>
                <key>CaptureSDKPinSetExpirationDate</key>
                <date>2024-03-25T10:57:42Z</date>
            </dict>
        </array>
    </dict>
</dict>
</plist>
```
The CaptureSDKDomainConfig array should contain a dictionary for each domain pin set. An expiration date can be set (via CaptureSDKPinSetExpirationDate) so that an expired pin set is no longer enforced. CaptureSDKDomain must be an exact domain (without wildcards). The CaptureSDKPinSet can contain multiple SHA-256 key hashes encoded in Base64. Supported public key types: RSA (2048, 3072, and 4096) and ECDSA (secp256r1 and secp384r1).
Key pinning is considered enabled for a domain when at least one non-expired non-empty pin set is available for a domain. For each request made in Face Capture SDK, if there is no matching pin (public key hash) to the ones found in SSL challenge certificate public keys, the request will be canceled.
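The "enabled" rule above can be sketched as a predicate. The type and field names below are illustrative only, not SDK API:

```swift
import Foundation

// Illustrative model of one domain's pinning configuration;
// these type/field names are not SDK API.
struct PinSet {
    let pins: [String]          // Base64-encoded SHA-256 key hashes
    let expirationDate: Date?   // nil means the set never expires
}

// Pinning is enabled when at least one non-expired, non-empty pin set exists.
func isPinningEnabled(for pinSets: [PinSet], now: Date = Date()) -> Bool {
    pinSets.contains { set in
        !set.pins.isEmpty && (set.expirationDate.map { $0 > now } ?? true)
    }
}
```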
Biometric Capture SDK
Get Info about Biometric Capture SDK
The purpose of this method is to allow the integrator to retrieve information about the SDK.
```objc
BIOSDKInfo *info = [BIOSDK getInfo];
```
Returns
An object of the BIOSDKInfo type with the information about the SDK.
Create a FaceCaptureHandler
This retrieves a capture handler to perform the face biometric capture operations. You must first configure the capture options.
- Please check the use case named Capture face biometrics.
- Also, you can check all the features provided by this handler here.
```objc
@interface ViewController () <FaceCaptureHandlerDelegate>
@property (strong, nonatomic) id<FaceCaptureHandler> captureHandler;
@end
...
[BIOSDK createFaceCaptureHandlerWithOptions:[FaceCaptureOptions new] withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    if (!error) {
        self.captureHandler = captureHandler;
        self.captureHandler.delegate = self;
        ...
    }
}];
```
Parameter | Description |
---|---|
options FaceCaptureOptions* | The capture options to configure the face capture handler. |
completionHandler void (^)(id<FaceCaptureHandler>, NSError*) | Block of code that will be called after FaceCaptureHandler finishes initialization. It will pass an error if any occurred. |
Note: The error code list is here.
Create a RemoteFaceCaptureHandler
Note: ⚠️ RemoteFaceCaptureHandler is deprecated.
This retrieves a capture handler to perform the face biometric capture operations with the server support. You must first configure the capture options. More info regarding the server integration can be found here.
- Please check the use case named Capture face biometrics.
- Also, you can check all the features provided by this handler here.
```objc
@interface ViewController () <FaceCaptureHandlerDelegate>
@property (strong, nonatomic) id<FaceCaptureHandler> captureHandler;
@end
...
[BIOSDK createFaceCaptureHandlerWithOptions:[FaceCaptureOptions new] withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    if (!error) {
        self.captureHandler = captureHandler;
        self.captureHandler.delegate = self;
        ...
    }
}];
```
Parameter | Description |
---|---|
options FaceCaptureOptions* | The capture options to configure the face capture handler. |
completionHandler void (^)(id<FaceCaptureHandler>, NSError*) | Block of code that will be called after FaceCaptureHandler finishes initialization. It will pass an error if any occurred. |
Note: The error code list is here.
Create a BIOMatcherHandler
This retrieves a handler to perform all the matching, identifying, and template coding operations.
- Please check the use case named Create BIOMatcherHandler.
- Also, you can check all the features provided by this handler here.
```objc
@interface ViewController () <BIOCaptureHandlerDelegate>
@property (strong, nonatomic) id<BIOMatcherHandler> matcherHandler;
@end
...
[BIOSDK createMatcherHandlerWithOptions:[BIOMatcherHandlerOptions new]
                  withCompletionHandler:^(id<BIOMatcherHandler> matcherHandler, NSError *error) {
    self.matcherHandler = matcherHandler;
    ...
}];
```
Parameter | Description |
---|---|
options BIOMatcherHandlerOptions* | Object that can configure a session of matcher handler. |
completionHandler void (^)(id<BIOMatcherHandler>, NSError*) | Block of code that will be called after BIOMatcherHandler finishes initialization. It will pass an error if any occurred. |
Create a DocumentCaptureHandler
This retrieves a capture handler to perform all the document capture operations. You must first configure the capture options.
- Please check the use case named Read MRZ.
- Also, you can check all the features provided by this handler here.
```objc
// Populate a DocumentCaptureOptions object
DocumentCaptureOptions *captureOptions = [DocumentCaptureOptions new];
captureOptions.mode = DocumentCaptureModeReadMRZ;
captureOptions.camera = BIOCameraRear;
captureOptions.setOverlay = BIOOverlayON;
captureOptions.captureTimeout = 120;
...
[BIOSDK createDocumentCaptureHandlerWithOptions:captureOptions
                          withCompletionHandler:^(id<DocumentCaptureHandler> documentCaptureHandler, NSError *error) {
    self.documentCapture = documentCaptureHandler;
    self.documentCapture.delegate = self;
}];
```
Parameter | Description |
---|---|
options DocumentCaptureOptions* | The capture options to configure the document capture handler. |
completionHandler void (^)(id | Block of code that will be called after the DocumentCaptureHandler finishes initialization. It will pass an error if any occurred. |
BIOReplayProtocol
This protocol defines the methods that are available in all capture handlers. However, they should be implemented with reference to its sub-protocols (e.g. FaceCaptureHandlerDelegate) instead of BIOReplayProtocol.
captureFinishedWithError
This method is called whenever a capture finishes with an error and cannot be resumed by the Biometric Capture SDK.
```objc
- (void)captureFinishedWithError:(NSError *)error
```
Parameter | Description |
---|---|
error NSError* | The error that caused the capture to finish. |
replayDidFinishRecording
This method is called whenever a replay finishes playing. Replays are used to play the recorded videos of a capture for debugging purposes.
```objc
- (void)replayDidFinishRecording
```
Generic SDK Objects
This section covers the generic objects that are necessary to use the Biometric Capture SDK.
BIOSDKInfo
This exposes information about the SDK.
Parameters
Parameter | Description |
---|---|
version NSString* | The version of the SDK. |
BIOBiometrics
This is the base class that holds the biometric description shared by its subclasses.
Parameters
Parameter | Description |
---|---|
biometricLocation BIOLocation | The BiometricLocation enum option. |
biometricModality BIOModality | The BiometricModality enum option. |
BIOImage
This is the image object returned by the SDK. It is a subclass of BIOBiometrics.
Parameters
Parameter | Description |
---|---|
buffer NSData* | The image. |
stride int | The stride of the biometric. |
width uint32_t | The width of the image. |
height uint32_t | The height of the image. |
colorSpace BIOColorSpace | The ColorSpace of the image. |
resolution float | The resolution of the image. |
alive BOOL | OBSOLETE. `true` if alive, otherwise `false` . |
imageQuality int | Image quality if available, otherwise 0 . Currently only available for fingerprint images. |
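As a consistency check, the expected minimum buffer length follows from width, height, and color space (stride may additionally pad each row). The bytes-per-pixel figures follow from the BIOColorSpace descriptions later in this document; the enum below is a local stand-in, not SDK API:

```swift
import Foundation

// Local stand-in mirroring the BIOColorSpace constants documented below.
enum ColorSpace { case y8, y16LE, bgr24, rgb24 }

// Bytes per pixel implied by each color space (8bpp, 16bpp, 24bpp).
func bytesPerPixel(_ cs: ColorSpace) -> Int {
    switch cs {
    case .y8: return 1
    case .y16LE: return 2
    case .bgr24, .rgb24: return 3
    }
}

// Minimal expected buffer size when rows are not padded; with padding,
// each row occupies `stride` bytes instead of width * bytesPerPixel.
func minimumBufferSize(width: Int, height: Int, colorSpace: ColorSpace) -> Int {
    width * height * bytesPerPixel(colorSpace)
}
```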
BIOTemplate
This is a biometric template object returned by the SDK. It is a subclass of BIOBiometrics.
Parameters
Parameter | Description |
---|---|
buffer NSData* | The template. |
uuid NSString* | The template uuid in the database (Could be null ). |
uuidUser NSString* | The user uuid (Could be null ). |
BIOFaceTemplate
This is a biometric template for a face, returned by the SDK. It is a subclass of BIOTemplate.
Parameters
Parameter | Description |
---|---|
qualityRawValue NSInteger | The face quality raw value. |
quality FaceTemplateQuality | The face quality value interpretation. |
FaceTemplateQuality enum
This enum provides information about the quality of the face saved in the template.
Attribute | Description |
---|---|
FaceTemplateQualityLow | The quality of face is low (not recommended to perform matching). |
FaceTemplateQualityMedium | The quality of face is medium. |
FaceTemplateQualityHigh | The quality of face is high. |
BIOUser
This is the user object used to represent an individual in the SDK.
Parameters
Parameter | Description |
---|---|
name NSString* | The name of the user. |
uuid NSString* | The UUID of the user. |
BIOVideoRecordingOptions
This is the video recording object used to configure video recording.
Parameter | Description |
---|---|
recordingEnabled | Is video recording enabled |
BIODebugDataSettings
This is an object used for debug purposes. Currently it contains only one parameter, a file path used for RTV video playback.
Parameter | Description |
---|---|
rtvFilePath | Path to RTV file used for video playback. |
Enums
BIOLogLevel
These are the constants used to configure logs.
Attribute | Description |
---|---|
BIOLogLevelDebug | Display all logs from SDK |
BIOLogLevelInfo | Informative message |
BIOLogLevelWarning | Warning message |
BIOLogLevelError | Error message |
BIOLogLevelNone | Turns logs off |
BIOColorSpace
These are the ColorSpace
constants.
Attribute | Description |
---|---|
BIOColorSpaceY8 | Grayscale 8bpp image |
BIOColorSpaceY16LE | Grayscale 16bpp image (Little Endian) |
BIOColorSpaceBGR24 | Color 24bpp BGR image (BMP like memory layout) |
BIOColorSpaceRGB24 | Color 24bpp RGB image (reversed memory layout compared to RT_COLORSPACE_BGR24) |
BIOLocation
These are the biometric location constants.
Attribute | Description |
---|---|
BIOLocationFaceFrontal | face |
BIOLocationFingerRightIndex | right index |
BIOLocationFingerRightMiddle | right middle |
BIOLocationFingerRightRing | right ring |
BIOLocationFingerRightLittle | right little |
BIOLocationFingerRightThumb | right thumb |
BIOLocationFingerLeftIndex | left index |
BIOLocationFingerLeftMiddle | left middle |
BIOLocationFingerLeftRing | left ring |
BIOLocationFingerLeftLittle | left little |
BIOLocationFingerLeftThumb | left thumb |
BIOLocationFingerUnknown | finger unknown |
BIOLocationHandLeft | hand left |
BIOLocationHandRight | hand right |
BIOLocationHandUnknown | hand unknown |
BIOModality
These are the biometric modality constants.
Attribute | Description |
---|---|
BIOModalityUnknown | unknown |
BIOModalityFace | Face |
BIOModalityFrictionRidge | Friction ridge |
BIOCaptureHandlerError
These are errors that can be thrown when there is an error with the capture handler.
Attribute | Description |
---|---|
BIOCaptureHandlerErrorSuccess | No error occurred |
BIOCaptureHandlerErrorParameters | Wrong Parameters |
BIOCaptureHandlerErrorParameterUnknown | Unknown parameter |
BIOCaptureHandlerErrorMemalloc | Memory allocation error |
BIOCaptureHandlerErrorInit | Initialization error |
BIOCaptureHandlerErrorGraphInitialisationFailed | Graph initialization failed |
BIOCaptureHandlerErrorParameterNotFound | Parameter not found |
BIOCaptureHandlerErrorParameterSize | Parameter size error |
BIOCaptureHandlerErrorTypeMismatch | Type mismatch error |
BIOCaptureHandlerErrorInvalidHandle | Invalid handler |
BIOCaptureHandlerErrorLicense | License is invalid |
BIOCaptureHandlerErrorApplinotavailable | The application is not available |
BIOCaptureHandlerErrorProfileNotAvailable | The profile is not available |
BIOCaptureHandlerErrorSubprofileNotAvailable | The subprofile is not available |
BIOCaptureHandlerErrorUnknown | An unknown error occurred |
BIOCaptureHandlerErrorInvalidOperation | The operation is invalid |
BIOCaptureHandlerErrorIncompatibleApiVersion | The API version is incompatible. Your application must be recompiled. |
BIOCaptureHandlerErrorCameraError | Camera issue has been encountered |
BIOCaptureHandlerErrorParameterWrongType | Parameter is not the right type |
BIOCaptureHandlerErrorParameterNotSet | Parameter is not set in current scope |
BIOCaptureHandlerErrorCaptureIsLocked | Capture is locked |
BIOCapturingError
These are errors that can be thrown when there is an error during or after the capture.
Attribute | Description |
---|---|
BIOCapturingErrorUnknown | Unknown error occurred |
BIOCapturingErrorCaptureTimeout | Capture timeout |
BIOCapturingErrorNotAlive | The capture returned an image with status not alive . |
BIOCapturingErrorWrongBiometricLocation | The wrong biometric location was scanned (eg. looking for face, captured fingers). |
BIOCapturingErrorImageBuffer | Could not read image buffer |
BIOCapturingErrorBadCaptureFingers | Fingers were not captured properly |
BIOCapturingErrorBadCaptureHand | Hand was not captured properly |
BIOCamera
These are the constants used to configure the behavior of BioCapture
.
Attribute | Description |
---|---|
BIOCameraFront | Front Camera |
BIOCameraRear | Rear Camera |
BIOOverlay
This is the enum used to configure the behavior of BioCapture
.
Attribute | Description |
---|---|
BIOOverlayOFF | Overlay off |
BIOOverlayON | Overlay on |
BIOPreviewColorspace
These are the flags used to set the color space for the preview of the camera.
Attribute | Description |
---|---|
BIOPreviewColorspaceColor | Sets colorspace of preview to RGB24 |
BIOPreviewColorspaceColorBlur | Sets colorspace of preview to RGB24 with Blur |
BIOPreviewColorspaceGray | Sets colorspace of preview to Grayscale |
BIOPreviewColorspaceGrayBlur | Sets colorspace of preview to Grayscale with Blur |
BIOPreviewColorspaceGrayBlurBordered | Sets colorspace of preview to Grayscale with Blur and a white border |
FaceCaptureHandler
This document discusses FaceCaptureHandler
.
Start Using FaceCaptureHandler
Note: In order to use FaceCaptureHandler
, it's required to load one of the capturing plugins (lite, normal or cr2dMatching).
- Import the framework header to your view controller.
```objc
#import <BiometricSDK/BiometricSDK.h>
```
- Add at least one UIImageView (or subclass) to your layout. It will be used to preview the stream from the camera. A preview is not necessary for the capture.
```objc
@property (weak, nonatomic) IBOutlet UIImageView *preview;
```
- Check your license status here.
- You also need a property for FaceCaptureHandler. This object handles all operations related to capturing.
```objc
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    self.captureHandler = captureHandler;
    ...
}];
```
- Set the delegate for FaceCaptureHandler to self. self will have to implement the FaceCaptureHandlerDelegate protocol.
```objc
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    ...
}];
```
- After FaceCaptureHandler has finished its initialization, the preview view can be set.
```objc
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    self.captureHandler.preview = self.preview;
    ...
}];
```
- Now it can start capturing.
```objc
[BIOSDK createFaceCaptureHandlerWithOptions:options withCompletionHandler:^(id<FaceCaptureHandler> captureHandler, NSError *error) {
    self.captureHandler = captureHandler;
    self.captureHandler.delegate = self;
    self.captureHandler.preview = self.preview;
    [self.captureHandler startCaptureWithCompletionHandlerError:nil];
}];
```
- Whenever the view controller disappears, the resources (e.g. camera) need to be released.
```objc
- (void)viewDidDisappear:(BOOL)animated {
    [super viewDidDisappear:animated];
    [self.captureHandler destroy];
}
```
FaceCaptureHandler Info
Delegate
This sets the listener to receive the biometrics information.
```objc
handler.delegate = ... // Object that implements the `FaceCaptureHandlerDelegate` protocol
```
Preview
This sets the camera preview.
```objc
handler.preview = ... // A `UIImageView`
```
Debug Settings
This sets the debug settings.
```objc
BIODebugDataSettings *settings = [[BIODebugDataSettings alloc] init];
settings.rtvFilePath = ... // A path to the RTV video file used for video playback.
handler.debugSettings = settings;
```
Note: To stop the camera preview, set preview to nil .
Start Capture
This starts the biometric capture.
```objc
[handler startCaptureWithCompletionHandler:^(NSError *error) {
    ...
}];
```
Parameter | Description |
---|---|
completionHandler void (^)(NSError*) | An object with an error code if an error occurred, otherwise it will show nil . In addition to error code other data can be returned in error's userInfo structure as described in the table below. |
Additional optional userInfo data:
Parameter | Description |
---|---|
lockedUntil | When capture is locked this parameter contains timestamp on which capture will be unlocked. This data is returned only in case of BIOCaptureHandlerErrorCaptureIsLocked. |
Stop Capture
This stops a capture.
```objc
[handler stopCaptureWithCompletionHandler:^(NSError *error) {
    ...
}];
```
Parameter | Description |
---|---|
completionHandler void (^)(NSError*) | Object with an error code if an error occurred, otherwise it will show nil . |
Switch Camera
This switches between different cameras.
```objc
[handler switchCamera:BIOCameraFront withError:&error];
[handler switchCamera:BIOCameraRear withError:&error];
```
Parameter | Description |
---|---|
camera BIOCamera | Flag that selects the camera. |
error NSError** | Object with an error code if an error occurred, otherwise it will show nil . |
Overlay
This sets the overlay option.
```objc
[self setOverlay:BIOOverlayOFF withError:&error];
[self setOverlay:BIOOverlayON withError:&error];
```
Parameter | Description |
---|---|
overlay BIOOverlay | Flag that determines if overlay should be on or off. |
error NSError** | Object with an error code if an error occurred, otherwise it will show nil . |
Orientation
This sets the orientation option.
```objc
[self setOrientation:BIOOrientationPortrait withError:&error];
[self setOrientation:BIOOrientationUpSideDown withError:&error];
[self setOrientation:BIOOrientationLandscapeLeft withError:&error];
[self setOrientation:BIOOrientationLandscapeRight withError:&error];
```
Parameter | Description |
---|---|
orientation BIOOrientation | Specifies the orientation of the preview. |
error NSError** | Object with an error code if an error occurred, otherwise it will show nil . |
Options
This retrieves the capture options used in this handler.
Note: Readonly
```objc
FaceCaptureOptions *options = handler.options;
```
Partial Dump Video
This dumps the last played sequences.
```objc
[handler requestPartialDumpVideoWithError:&error];
```
Note: To dump a partial video, the capture must previously have been run with partial dump video recording enabled.
Parameter | Description |
---|---|
error NSError** | Object with an error code if an error occurred, otherwise it will show nil . |
Capture Attempts Left
Returns the number of capture attempts left, based on the maxCapturesBeforeDelay provided in the options.
Note: Readonly
```objc
NSInteger attemptsLeft = handler.captureAttemptsLeft;
```
Time to unlock
Returns the number of seconds until the capture unlocks, or 0 if the capture is not locked.
Note: Readonly
```objc
NSInteger timeToUnlock = handler.timeToUnlock;
```
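timeToUnlock can drive a simple countdown label. A hedged helper sketch (the "m:ss" formatting choice is ours, not the SDK's):

```swift
import Foundation

// Format a number of seconds as "m:ss" for a countdown label,
// treating negative input as already unlocked.
func countdownText(secondsToUnlock: Int) -> String {
    let clamped = max(0, secondsToUnlock)
    return String(format: "%d:%02d", clamped / 60, clamped % 60)
}
```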
Destroy
This releases all of the handler resources.
```objc
[handler destroy];
```
FaceCaptureHandlerDelegate
FaceCaptureHandlerDelegate is a sub protocol of BIOReplayProtocol and implements all of its methods as well.
captureFinishedWithImages:withBiometrics:withError:
This is the main method called when images are captured and returned.
```objc
- (void)captureFinishedWithImages:(NSArray<BIOFaceImage *> *)images
                   withBiometrics:(BIOBiometrics *)biometrics
                        withError:(NSError *)error
```
Parameter | Description |
---|---|
images NSArray<BIOFaceImage*> | Array of images. BIOFaceImages with biometric data. |
biometrics BIOBiometrics* | Object that describes what biometric data was captured. |
error NSError* | Error if any occurred, otherwise nil . |
receiveBioCaptureInfo:withError:
This method is called whenever there is information that should be transmitted to the user (e.g. BIOCapturingInfoFaceInfoTurnLeft to "Turn Left"), or information about the current challenge.
```objc
- (void)receiveBioCaptureInfo:(BIOCapturingInfo)info
                    withError:(NSError *)error
```
Parameter | Description |
---|---|
info BIOCapturingInfo | Describes action from the user that needs to be done to finish capturing. |
error NSError* | Error if any occurred, otherwise nil . |
receiveCr2DTargetInfo:atIndex:outOf:withError:
This method is called whenever there is an update about a target for the CR2D challenge. Its position is relative to the captured picture resolution.
```objc
- (void)receiveCr2DTargetInfo:(BIOCr2DTargetInfo *)target
                      atIndex:(NSUInteger)index
                        outOf:(NSUInteger)numberOfTargets
                    withError:(NSError *)error
```
Parameter | Description |
---|---|
target BIOCr2DTargetInfo* | Contains information about the target. |
index NSUInteger | Number of the target. Starts with 0 . |
numberOfTargets NSUInteger | Total number of targets for a challenge. |
error NSError* | Error if any occurred, otherwise nil . |
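Since index starts at 0 and numberOfTargets is the total for the challenge, overall challenge progress can be derived for a progress bar. A minimal sketch (the helper name is ours, not SDK API):

```swift
import Foundation

// Fraction of targets already completed when target `index` (0-based)
// becomes active, out of `numberOfTargets` total.
func challengeProgress(index: Int, numberOfTargets: Int) -> Double {
    guard numberOfTargets > 0 else { return 0 }
    return Double(index) / Double(numberOfTargets)
}
```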
receiveCr2DChallengeInfo:withError:
This method is called whenever there is information for the CR2D challenge. The challenge contains information about the current point. The position is relative to the captured picture resolution.
```objc
- (void)receiveCr2DChallengeInfo:(BIOCr2DChallengeInfo * _Nullable)challengeInfo
                       withError:(NSError * _Nullable)error
```
Parameter | Description |
---|---|
challengeInfo BIOCr2DChallengeInfo* | Contains information about the heading point. |
error NSError* | Error if any occurred, otherwise nil . |
receiveFaceTrackingInfo:
This method is called when tracking information about the face and eye positions is available.
```objc
- (void)receiveFaceTrackingInfo:(BIOFaceTrackingInformation *)faceTrackingInfo
```
Parameter | Description |
---|---|
faceTrackingInfo BIOFaceTrackingInformation | Object containing the face box coordinates, eye positions, and the size that those points relate to. |
RemoteFaceCaptureHandlerDelegate
Note: ⚠️ RemoteFaceCaptureHandlerDelegate is deprecated.
RemoteFaceCaptureHandlerDelegate is a sub protocol of BIOReplayProtocol and BIOPassiveVideoProtocol and implements all of their methods as well.
captureFinishedWithEncryptedMetadata:withFaceImage: (deprecated)
Method that is triggered when capture is finished and face metadata is available.
```objc
- (void)captureFinishedWithEncryptedMetadata:(BIOEncryptedData *)encryptedMetadata withFaceImage:(BIOFaceImage *)faceImage;
```
Parameter | Description |
---|---|
metadata BIOEncryptedData | Object with encrypted metadata that can be used for liveness verification on the server. |
faceImage BIOFaceImage | Object with captured face image that can be used to display it within the app. |
receiveBioCaptureInfo:withError: (deprecated)
This method is called whenever there is information that should be transmitted to the user (e.g. BIOCapturingInfoFaceInfoTurnLeft to "Turn Left"), or information about the current challenge.
```objc
- (void)receiveBioCaptureInfo:(BIOCapturingInfo)info
                    withError:(NSError *)error
```
Parameter | Description |
---|---|
info BIOCapturingInfo | Describes action from the user that needs to be done to finish capturing. |
error NSError* | Error if any occurred, otherwise nil . |
receiveCr2DTargetInfo:atIndex:outOf:withError: (deprecated)
This method is called whenever there is an update about a target for the CR2D challenge. Its position is relative to the captured picture resolution.
```objc
- (void)receiveCr2DTargetInfo:(BIOCr2DTargetInfo *)target
                      atIndex:(NSUInteger)index
                        outOf:(NSUInteger)numberOfTargets
                    withError:(NSError *)error
```
Parameter | Description |
---|---|
target BIOCr2DTargetInfo* | Contains information about the target. |
index NSUInteger | Number of the target. Starts with 0 . |
numberOfTargets NSUInteger | Total number of targets for a challenge. |
error NSError* | Error if any occurred, otherwise nil . |
receiveCr2DChallengeInfo:withError: (deprecated)
This method is called whenever there is information for the CR2D challenge. The challenge contains information about the current point. The position is relative to the captured picture resolution.
```objc
- (void)receiveCr2DChallengeInfo:(BIOCr2DChallengeInfo * _Nullable)challengeInfo
                       withError:(NSError * _Nullable)error
```
Parameter | Description |
---|---|
challengeInfo BIOCr2DChallengeInfo* | Contains information about the heading point. |
error NSError* | Error if any occurred, otherwise nil . |
BIOPassiveVideoProtocol
passiveVideoPreparationDidStart
Method that is triggered when BIORemoteFaceCaptureHandler reports that the Passive Video Liveness preparation has started.

```objc
- (void)passiveVideoPreparationDidStart;
```
passiveVideoPreparationDidEnd
Method that is triggered when BIORemoteFaceCaptureHandler reports that the Passive Video Liveness preparation has ended.

```objc
- (void)passiveVideoPreparationDidEnd;
```
passiveVideoOverlayDidUpdate:andPosition:orError:
Method that is triggered when BIORemoteFaceCaptureHandler reports an update of the passive video overlay.

```objc
- (void)passiveVideoOverlayDidUpdate:(CGSize)overlaySize andPosition:(CGPoint)position orError:(NSError *)error;
```
Parameter | Description |
---|---|
overlaySize CGSize | Size of the overlay; CGSize.zero on error. |
position CGPoint | Position of the overlay; CGPoint.zero on error. |
error NSError* | Error if any occurred, otherwise nil . |
passiveVideoProgressDidUpdate:orError:
Method that is triggered when BIORemoteFaceCaptureHandler reports a progress update for the passive video liveness check.

```objc
- (void)passiveVideoProgressDidUpdate:(CGFloat)progress orError:(NSError *)error;
```
Parameter | Description |
---|---|
progress CGFloat | Progress of the passive video check; 0 on error. |
error NSError* | Error if any occurred, otherwise nil . |
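Taken together, a delegate conforming to BIOPassiveVideoProtocol can use these callbacks to drive a simple preparation/progress UI. A minimal sketch, assuming hypothetical `spinner`, `overlayView`, and `progressBar` outlets:

```objc
- (void)passiveVideoPreparationDidStart {
    [self.spinner startAnimating]; // show activity while liveness preparation runs
}

- (void)passiveVideoPreparationDidEnd {
    [self.spinner stopAnimating];
}

- (void)passiveVideoOverlayDidUpdate:(CGSize)overlaySize andPosition:(CGPoint)position orError:(NSError *)error {
    if (error) { return; } // overlaySize/position are zero on error
    self.overlayView.frame = CGRectMake(position.x, position.y,
                                        overlaySize.width, overlaySize.height);
}

- (void)passiveVideoProgressDidUpdate:(CGFloat)progress orError:(NSError *)error {
    if (error == nil) {
        self.progressBar.progress = progress; // progress is 0 on error
    }
}
```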
Helper Objects
FaceCaptureOptions / RemoteFaceCaptureOptions
Note: RemoteFaceCaptureOptions are deprecated.
This is the object used to configure the behavior of BioCapture.
Attribute | Description | FaceCaptureOptions | RemoteFaceCaptureOptions |
---|---|---|---|
livenessMode FaceCaptureLivenessMode | The app enum option to configure the FaceCaptureLivenessMode . | YES | YES |
securityLevel FaceLivenessSecurityLevel | The app enum option to configure FaceLivenessSecurityLevel . (high by default.) | YES | YES |
cr2dMode BIOCr2dMode* | Sets the CR2D mode. It's valid when livenessMode is set to FaceCaptureLivenessModeActive . Can be set to BIORandomCr2dMode, BIOFixedTargetCr2dMode, BIOPathCr2dMode. The default mode is BIORandomCr2dMode. | YES | YES |
challengeIntervalDelay NSTimeInterval | Sets the time interval between challenges. | YES | YES |
maxCapturesBeforeDelay NSInteger | Property that defines the maximum number of captures before locking the capture for a certain delay. (Default: 5). It can be set to -1 to disable locking based on repeated failed captures. | YES | NO |
timeCaptureDelayArray NSArray | Property that defines capture delays (in seconds) for each lock which occur after number of captures configured with maxCapturesBeforeDelay property. First lock after maxCapturesBeforeDelay will take as long as the number of seconds defined in the first element from timeCaptureDelayArray. Second lock after maxCapturesBeforeDelay will take as long as the number of seconds defined in the second element from timeCaptureDelayArray and so on. For all capture attempts after the array length, last element of the array is used. All values are in seconds. Default values: [1*60, 5*60, 15*60, 60*60]). | YES | NO |
camera BIOCamera | The app Camera option to configure BioCapture . | YES | YES |
torch BIOTorch | OBSOLETE. Sets the torch value. | YES | YES |
overlay BIOOverlay | Sets the overlay value. | YES | YES |
captureTimeout NSTimeInterval | Capture timeout in seconds (default value 120 ). | YES | YES |
logLevel BIOLogLevel | Level of logs that are displayed during debug. | YES | YES |
orientation BIOOrientation | Sets the orientation that the capture will be done in. | YES | YES |
previewColorspace BIOPreviewColorspace | Option that sets colorspace of the camera preview. | YES | YES |
dumpFileEnable BOOL | If dump file is enabled, the capture creates logs. | YES | YES |
dumpFileFolder NSString* | Folder where the logs will be saved. If nil it will be saved on the Documents folder. | YES | YES |
dumpMetadataEnable BOOL | If dump metadata is enabled, the capture saves metadata. | YES | YES |
dumpMetadataFolder NSString* | Folder where the metadata will be saved. If nil it will be saved on the Documents folder. | YES | YES |
videoRecordEnable BOOL | If video recording is enabled, the capture will be recorded. | YES | YES |
videoRecordFolder NSString* | Folder where recorded videos will be saved. | YES | YES |
partialDumpVideoRecordEnable BOOL | If partial dump video recording is enabled, the capture can be recorded afterwards. | YES | YES |
partialDumpVideoRecordFolder NSString* | Folder where partial dump recorded videos will be saved. | YES | YES |
videoRecordingOptions BIOVideoRecordingOptions | The video recording related options. | YES | NO |
BIOCr2dMode
This is the CR2D mode base class. All CR2D modes inherit from it.
Note: It should NOT be used independently.
BIORandomCr2dMode
This CR2D mode is where the target is fully random.
```objc
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
options.cr2dMode = [BIOCr2dMode random];
```
BIOFixedTargetCr2dMode
This CR2D mode is where the target has a defined position. The position values x and y are normalized to be independent of the screen dimensions. Both are values between -1.0 and 1.0.
- [-1.0, -1.0] - top, left corner of the screen
- [ 0.0, 0.0] - center of the screen
- [ 1.0, 1.0] - bottom, right corner of the screen
```objc
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
CGSize targetPosition = CGSizeMake(0, 0);
options.cr2dMode = [BIOCr2dMode fixedTargetWithPosition:targetPosition];
```
BIOPathCr2dMode
This is a CR2D mode where there is more than one target. The number of targets is defined by the targetsNumber parameter.
```objc
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
NSInteger numberOfTargets = 4;
options.cr2dMode = [BIOCr2dMode pathWithNumberOfTargets:numberOfTargets];
```
BIOTrainingCr2dMode
This is a CR2D mode where it's possible to set a starting point and an end point of a challenge.
```objc
FaceCaptureOptions* options = [[FaceCaptureOptions alloc] initWithLivenessMode:FaceCaptureLivenessModeActive];
// [...]
CGSize startingPosition = CGSizeMake(0, 0);
CGSize targetPosition = CGSizeMake(1, 0);
// Assumed constructor name: the original snippet mistakenly reused the
// BIOPathCr2dMode constructor here, which does not take the positions
// defined above. Check the BIOCr2dMode header of your SDK version for the
// exact training-mode selector.
options.cr2dMode = [BIOCr2dMode trainingWithStartingPosition:startingPosition
                                              targetPosition:targetPosition];
```
BIOChallengeInfo
The object contains information about the ongoing challenge.
Parameters
Parameter | Description |
---|---|
currentChallengeNumber NSInteger | Number of a current challenge. |
totalChallenges NSInteger | Total number of challenges. |
BIOCr2DChallengeInfo
The object contains the CR2D challenge info. All coordinates and metrics are relative to the camera preview resolution.
Parameters
Parameter | Description |
---|---|
headingPointVisible BOOL | Whether the heading point should be displayed. |
headingPoint CGPoint | X,Y coordinates of the heading point (the point the user is looking at). |
BIOCr2DTargetInfo
The object that is a representation of a target for the CR2D challenge.
Parameters
Parameter | Description |
---|---|
number NSUInteger | Target number, starts with 0 . |
visible BOOL | Tells if the target should be displayed or not. |
current BOOL | Tells if target is the current one for the try. |
position CGPoint | X,Y coordinates of the target center. |
radius CGFloat | Target area radius in pixels. |
completeness CGFloat | Tells how long the heading point needs to stay in the target area to pass the try. The value is a float between 0 and 1, where: 0 - the heading point is NOT in the target area; 0..1 - when it is increasing, the heading point is in the target area and should be kept still, when it is decreasing, the heading point is outside of the target area; 1 - the try is passed. |
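For example, a capture UI can use completeness to fill a ring around the current target so the user knows how long to keep the heading point still. A sketch, assuming a hypothetical targetRingLayer (a CAShapeLayer whose strokeEnd animates the fill):

```objc
- (void)updateTargetUI:(BIOCr2DTargetInfo *)target {
    if (!target.visible || !target.current) { return; }
    // completeness goes from 0 (heading point outside the target area)
    // toward 1 (try passed); use it as the fraction of the ring to fill.
    self.targetRingLayer.position  = target.position;      // hypothetical CAShapeLayer
    self.targetRingLayer.strokeEnd = target.completeness;
}
```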
BIOFaceImage
This is the image object of a face. It subclasses BIOImage.
Parameter | Description | Notes |
---|---|---|
buffer NSData* | The image. | |
stride int | The stride of the biometric. | |
width uint32_t | The width of the image. | |
height uint32_t | The height of the image. | |
colorSpace BIOColorSpace | The ColorSpace of the image. | |
resolution float | The resolution of the image. | |
alive BOOL | True if alive, otherwise false. | OBSOLETE. Use livenessStatus instead. |
livenessStatus BIOFaceLivenessStatus | The liveness status of the liveness checking mechanism. | |
imageQuality int | Image quality. Only available for fingerprint images, so for face images it is always 0 . | |
faceMetadata BIOFaceMetadata | Captured face metadata | |
faceTrackingInfo BIOFaceTrackingInformation | Face tracking information |
BIOFaceMetadata
This is the face metadata that can be used for liveness verification with server.
Parameter | Description |
---|---|
data NSData* | Face metadata |
BIOEncryptedData
This is encrypted face metadata that can be used for liveness verification with server.
Parameter | Description |
---|---|
data NSData* | Encrypted face metadata |
masterSecret NSData* | Encrypted master secret |
BIOFaceTrackingInformation
This is the face tracking information, such as the face's position and the eyes' positions.
Parameter | Description |
---|---|
faceBox CGRect | Face's position and size. |
leftEye BIOEye* | Left eye |
rightEye BIOEye* | Right eye |
relativeSize CGSize | Size that the points are related to |
Enums
BIOFaceLivenessStatus
The enum describes the liveness status of BIOFaceImage.
Attribute | Description |
---|---|
BIOFaceLivenessStatusFake | The SDK determined the liveness challenge to have failed. The face image is probably a photocopy or a mask. |
BIOFaceLivenessStatusLive | The SDK determined the liveness challenge to have passed. The face image is probably of an alive person. |
BIOFaceLivenessStatusNoDecision | The SDK cannot determine if the face image is of a live person or of a mask/photocopy. The image should be processed by the server to be sure. |
BIOFaceLivenessStatusUnknown | Capture was performed using NoLiveness mode. |
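A typical way to consume the status after a capture is a plain switch over livenessStatus; only the NoDecision case requires a round trip to the server:

```objc
switch (faceImage.livenessStatus) {
    case BIOFaceLivenessStatusLive:
        // Liveness passed locally; proceed with the captured image.
        break;
    case BIOFaceLivenessStatusFake:
        // Probable presentation attack; reject the capture.
        break;
    case BIOFaceLivenessStatusNoDecision:
        // No local decision; send the metadata to the server for verification.
        break;
    case BIOFaceLivenessStatusUnknown:
        // Capture was performed in NoLiveness mode; no liveness information.
        break;
}
```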
BIOCapturingInfo
This is the Bio capture info constants.
Attribute | Description | Notes |
---|---|---|
BIOCapturingInfoUndefined | Undefined info. | OBSOLETE |
BIOCapturingInfoFaceInfoGetOutField | Get out of the camera field. | OBSOLETE |
BIOCapturingInfoFaceInfoComeBackField | Come back in the camera field. | |
BIOCapturingInfoFaceInfoTurnLeft | Turn head left. | |
BIOCapturingInfoFaceInfoTurnRight | Turn head right. | OBSOLETE |
BIOCapturingInfoFaceInfoCenterTurnLeft | Face center but turn head left. | |
BIOCapturingInfoFaceInfoCenterTurnRight | Face center but turn head right. | |
BIOCapturingInfoFaceInfoCenterRotateDown | Face center but rotate head down. | |
BIOCapturingInfoFaceInfoCenterRotateUp | Face center but rotate head up. | |
BIOCapturingInfoFaceInfoCenterTiltLeft | Face center but tilt head left. | |
BIOCapturingInfoFaceInfoCenterTiltRight | Face center but tilt head right. | |
BIOCapturingInfoFaceInfoCenterMoveForwards | Move forwards. | |
BIOCapturingInfoFaceInfoCenterMoveBackwards | Move backwards. | |
BIOCapturingInfoFaceInfoCenterLookFrontOfCamera | Look in front of the camera. | OBSOLETE |
BIOCapturingInfoFaceInfoCenterLookCameraWithLessMovement | Look at the camera with less movement. | OBSOLETE |
BIOCapturingInfoFaceInfoTurnLeftRight | Turn left then right or right then left. | OBSOLETE |
BIOCapturingInfoFaceInfoTurnDown | Turn head down. | OBSOLETE |
BIOCapturingInfoFaceInfoTimeOut | Timeout occurred. | OBSOLETE |
BIOCapturingInfoFaceInfoUnsuccessfulAttempt | Unsuccessful attempt. | OBSOLETE |
BIOCapturingInfoFaceInfoTooFast | Moved the head too fast. | |
BIOCapturingInfoFaceInfoCenterGood | Good position of head. | |
BIOCapturingInfoFaceInfoDontMove | Don't move. | |
BIOCapturingInfoFaceInfoChallenge2D | Move the challenge point to the target using head movements. | |
BIOCapturingInfoFaceInfoMoveBrighterArea | Move to a brighter area, because exposure is too dark. | OBSOLETE |
BIOCapturingInfoFaceInfoMoveDarkerArea | Move to darker area, because exposure is too bright. | OBSOLETE |
BIOCapturingInfoFaceInfoStandStill | Stand still during the illumination check. | |
BIOCapturingInfoDeviceMovementDetected | Device is being moved but should stay still. | |
BIOCapturingInfoDeviceMovementEnded | Device is no longer moved and stays still. | |
BIOCapturingInfoNoFaceMovementDetected | Head movement is expected but not detected. | |
BIOCapturingInfoFaceInfoOpenEyes | Open eyes. | OBSOLETE |
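In practice, the non-obsolete values are mapped to short user-facing instructions. A partial, illustrative sketch (the helper name and message wording are the integrator's choice):

```objc
- (NSString *)messageForCapturingInfo:(BIOCapturingInfo)info {
    switch (info) {
        case BIOCapturingInfoFaceInfoComeBackField:       return @"Come back into the camera field";
        case BIOCapturingInfoFaceInfoCenterMoveForwards:  return @"Move closer";
        case BIOCapturingInfoFaceInfoCenterMoveBackwards: return @"Move back";
        case BIOCapturingInfoFaceInfoDontMove:            return @"Don't move";
        case BIOCapturingInfoFaceInfoCenterGood:          return @"Good position";
        default:                                          return nil; // no message for the rest
    }
}
```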
FaceCaptureLivenessMode
This is the enum used to configure the behavior of BioCapture.
Attribute | Description |
---|---|
FaceCaptureLivenessModePassiveVideo | Face tracking with passive video liveness, for the server integration only. |
FaceCaptureLivenessModePassive | Face tracking with passive liveness. |
FaceCaptureLivenessModeActive | Face tracking using the CR2D liveness challenge. |
FaceCaptureLivenessModeNoLiveness | Face tracking with default subprofile. |
FaceLivenessSecurityLevel
This enum is used to configure the security level of the selected capture mode. The modes have different APCER (Attack Presentation Classification Error Rate) values. APCER is the proportion of presentations using the same PAI (Presentation Attack Instrument) species incorrectly classified as bona fide presentations in a specific scenario. It is the equivalent of False Acceptance: the lower, the better.
By default, FaceLivenessSecurityLevelHigh is used for all modes (FaceCaptureLivenessModeActive, FaceCaptureLivenessModePassive, FaceCaptureLivenessModePassiveVideo).
Attribute | Description | Notes |
---|---|---|
FaceLivenessSecurityLevelLow | Low security level | APCER ~6% |
FaceLivenessSecurityLevelMedium | Medium security level | APCER ~3% |
FaceLivenessSecurityLevelHigh | High security level | APCER ~1% |
The goal of the face liveness feature is to provide mechanisms that prevent or fight fraud attempts, for example through the use of still images, photos, or videos of a given person.
This IDEMIA Biometric Capture SDK offers three anti-spoofing measures:
- Artefact detection: the SDK detects artefacts that occur when a fraudster tries to perform face acquisition by placing the smartphone’s camera in front of a video feed.
- 3D model analysis: the SDK verifies that the face is three-dimensional. To do this, the SDK asks the user to move their head or their smartphone.
- CR2D challenge: the SDK asks the user to perform a task. The task is to move the target point using a movement of the user's head.
Biometric Mode | 3D model analysis | Artefacts (video attack detection) |
---|---|---|
FaceCaptureModeLivenessLow | LOW | ON |
FaceCaptureModeLivenessActive | HIGH | ON |
FaceCaptureModeLivenessPassive | HIGH | ON |
FaceCaptureModeLivenessPassiveVideo | SERVER ONLY | SERVER ONLY |
FaceCaptureModeNoLiveness | Not applicable | OFF |
Use cases
Capture Face Biometrics
Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and get information about the biometry (Move your head to the left …).
Capture Timeout
Below is displayed the generic execution flow to be followed when a capture timeout happens.
Capture Enroll
Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture) and, after that, extract the biometric's template from the image returned by the capture component. Once we have the template, we store it in a database and link it to one user by adding the user ID to the template. As a result of the insertion we receive the UUID of this template in the database.
Compression Recommendations
Selfie images:
- Recommended width is 400 px.
- Recommended compression is JPEG90.
- Size of image will be about 100 KB.
```objc
- (NSData *)compressFaceImage:(UIImage *)image {
    NSInteger imageWidth = 400;
    CGFloat compressionQuality = 0.90;

    CGFloat scaleFactor = imageWidth / image.size.width;
    NSInteger imageHeight = image.size.height * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(imageWidth, imageHeight));
    [image drawInRect:CGRectMake(0, 0, imageWidth, imageHeight)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return UIImageJPEGRepresentation(scaledImage, compressionQuality);
}
```
BIOMatcherHandler
When the BIOMatcherHandler
is meant to be used for one of the SDK's face variants (face, face_document, biometry, biometry_document), it requires one of the face algorithms to be added to the project.
Authentication
This verifies a list of candidate templates against a list of reference templates.
Please check the use case named Verify.
```objc
BIOAuthenticationOptions* options = [[BIOAuthenticationOptions alloc] initWithModality:BIOModalityFace];
[self.matcherHandler authenticateWithOptions:options
                      withBiometricCandidate:biometricCandidate
                      withBiometricReference:biometricReference
                        andCompletionHandler:^(BIOAuthenticationResult* result, NSError* error) {
    // ...
}];
```
Parameter | Description |
---|---|
options BIOAuthenticationOptions* | The options used to perform the authentication. |
biometricCandidate BIOBiometricCandidate* | The biometric candidate with the templates that you want to match. |
biometricReference BIOBiometricReference* | The biometric reference with a list of templates used as reference. |
completionHandler void(^)(BIOAuthenticationResult* result, NSError* error) | Callback to be executed when authentication is finished. |
Identify
This identifies the user to whom the list of candidate templates belongs, against a list of reference templates. This method can also be used to verify/authenticate users.
Please check the use case named Identify.
```objc
BIOIdentificationOptions* options = [[BIOIdentificationOptions alloc] initWithModality:BIOModalityFace];
[matcher identifyWithOptions:options
      withBiometricCandidate:biometricCandidate
     withBiometricReferences:biometricReferences
        andCompletionHandler:^(BIOIdentificationResult* result, NSError* error) {
    // ...
}];
```
Parameter | Description |
---|---|
options BIOIdentificationOptions* | The options used to perform the identification. |
biometricCandidate BIOBiometricCandidate* | The biometric candidate with the templates that you want to match. |
biometricReferences NSArray<BIOBiometricReference*>* | The list of biometric references with a list of templates used as references. |
completionHandler void(^)(BIOIdentificationResult* result, NSError* error) | Callback to be executed when the identification is finished. |
Detect Biometric
This detects the biometrics in a BIOImage. This function is intended to be used to extract all the biometric templates contained in an image (for example, all the faces that are in an image).
Please check the use case named Detect Biometric.
```objc
BIODetectBiometricOptions* options = [BIODetectBiometricOptions biometricsWithLocation:BIOLocationFaceFrontal
                                                                          withModality:BIOModalityFace];
[matcher detectBiometricWithOptions:options
                       withBIOImage:image
              withCompletionHandler:^(NSArray<BIOTemplate*>* templates, NSError* error) {
    // ...
}];
```
Parameter | Description |
---|---|
options BIODetectBiometricOptions* | The options used to perform the verification. |
image BIOImage | The image. |
completionHandler void(^)(NSArray<BIOTemplate*>* templates, NSError* error) | Callback to be executed when the detection is finished. |
Destroy
This releases all the handler resources.
```objc
[matcher destroy];
```
Helper Objects
BIOMatcherHandlerOptions
This object is used to configure the behavior of MatcherHandler.
Attribute | Description |
---|---|
logLevel BIOLogLevel | Level of logs that are displayed during debug. |
BIOMatchingOptions
This is the object that represents the basic matching options.
Parameter | Description |
---|---|
modality BIOModality | The BIOModality option. |
BIOAuthenticationOptions
This is the object that represents the verification options. It extends BIOMatchingOptions.
Parameter | Description |
---|---|
modality BIOModality | The BIOModality option. |
threshold int | The authentication threshold to be considered valid (default value: 3500 ). |
Note: The threshold is the score value that is used to differentiate a HIT from a NOHIT.
FAR | 1% | 0.1% | 0.01% | 0.001% | 0.0001% | 0.00001% |
---|---|---|---|---|---|---|
Score | 2500 | 3000 | 3500 | 4000 | 4500 | 5000 |
FAR: the proportion of requests that generate a non-expected HIT with two biometric acquisitions of two different persons.
For the use case of a selfie against a selfie within the context of a smartphone, the recommended threshold is 3500.
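Following the table, targeting a stricter FAR simply means raising threshold before authenticating. An illustrative sketch for a FAR of about 0.001% (biometricCandidate and biometricReference are assumed to be already built):

```objc
BIOAuthenticationOptions* options = [[BIOAuthenticationOptions alloc] initWithModality:BIOModalityFace];
options.threshold = 4000; // FAR ~0.001% per the table above (default is 3500)
[self.matcherHandler authenticateWithOptions:options
                      withBiometricCandidate:biometricCandidate
                      withBiometricReference:biometricReference
                        andCompletionHandler:^(BIOAuthenticationResult* result, NSError* error) {
    if (error == nil && result.matchesSuccessfully) {
        // The score is above the threshold: the candidate matches the reference.
    }
}];
```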
BIOIdentificationOptions
This is the object that represents the identification options. It extends BIOMatchingOptions.
Parameter | Description |
---|---|
modality BIOModality | The BIOModality option. |
BIODetectBiometricOptions
This is the object that represents the detection options. It extends BIOBiometrics.
Parameter | Description |
---|---|
biometricLocation BIOLocation | The BiometricLocation enum option. |
biometricModality BIOModality | The BiometricModality enum option. |
BIOBiometricReference
This is the object that represents a biometric reference to be compared against for authentication and identification purposes.
Parameter | Description |
---|---|
templates NSArray<BIOTemplate*>* | The templates to be used. |
userUUID NSString* | The user uuid to be matched against. |
BIOBiometricCandidate
This is the object that represents a biometric candidate that is used to be authenticated or identified.
Parameter | Description |
---|---|
templates NSArray<BIOTemplate*>* | The templates to be used. |
BIOMatchingCandidate
This is the object that represents a candidate result.
Parameter | Description |
---|---|
UUID NSString* | The candidate uuid . |
score long | The identification score result. |
BIOAuthenticationResult
This is the object that represents an authentication result.
Parameter | Description |
---|---|
score long | The authentication score result. |
matchesSuccessfully BOOL | True if the score is greater than the threshold. |
BIOIdentificationResult
This is the object that represents an identification result.
Parameter | Description |
---|---|
candidates NSArray<BIOMatchingCandidate*>* | The list of identification candidates with their scores. |
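A consumer of the result typically picks the candidate with the highest score; a minimal sketch over result.candidates:

```objc
BIOMatchingCandidate* best = nil;
for (BIOMatchingCandidate* candidate in result.candidates) {
    if (best == nil || candidate.score > best.score) {
        best = candidate;
    }
}
if (best != nil) {
    // The best-scoring candidate's UUID identifies the matched user.
    NSLog(@"Best match: %@ (score %ld)", best.UUID, best.score);
}
```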
Use Cases
Create BIOMatcherHandler
Below is displayed the generic execution flow to be followed to retrieve and release a BIOMatcherHandler.
Capture Verify
Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture) and, after that, extract the biometric's template from the image returned by the capture component (this is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and verify that the candidate template belongs to the user. There are two ways to extract a list of template references. One is retrieving them from the database used during the enrollment process. The second one is extracting the templates from another image with detectBiometricWithOptions:withBIOImage:withCompletionHandler:.
Capture Identify
Below is displayed the generic execution flow to be followed to perform a biometric capture (Get Picture), and after that extract the biometric’s template from the image returned by the capture component (This is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and identify to which user belongs the candidate template.
Authenticate
Below is displayed the generic execution flow to be followed to perform a generic authentication process, extracting the biometric's template from an image (this is the candidate template). Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and verify that the candidate template belongs to the user. There are two ways to extract a list of template references. One is retrieving them from the database used during the enrollment process. The second one is extracting the templates from another image with detectBiometricWithOptions:withBIOImage:withCompletionHandler:.
Identify
Below is displayed the generic execution flow to be followed to perform a generic identification process, extracting the biometric's template from an image. Once we have the candidate template, we need to retrieve a list of reference templates to match against the candidate and identify to which user the candidate template belongs.
Detect Biometric
This detects the biometrics in a BIOImage. This function is intended to be used to extract all the biometric templates contained in an image (for example, all the faces that are in an image).
ImageUtils
The SDK provides methods to perform various operations on a BIOImage, such as converting it to various image formats. Those operations are described below.
Compress BIOImage to JPEG
This is the method of the BIOImage class that converts the BIOImage object to an NSData object containing a JPEG file with the default compression quality (90% for finger images, 80% for face images, 70% for document images). The created JPEG will contain capture maker note data inside its EXIF metadata, with information such as the SDK version used for capturing the image.
```objc
- (NSData *)toJPEG;
```
Return | Description |
---|---|
NSData* | JPEG file binary data. |
Compress BIOImage to JPEG with custom quality
This is the method of the BIOImage class that converts the BIOImage object to an NSData object containing a JPEG file with the given compression quality level. The created JPEG will contain capture maker note data inside its EXIF metadata, with information such as the SDK version used for capturing the image.
```objc
- (NSData *)toJPEGWithQuality:(CGFloat)quality;
```
Parameter | Description |
---|---|
quality CGFloat | Compression quality in range [0, 1]. |
Return | Description |
---|---|
NSData* | JPEG file binary data. |
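Both JPEG variants are used the same way; for instance, writing a more strongly compressed copy of a captured bioImage to a temporary file (the file name is illustrative):

```objc
NSData* jpegData = [bioImage toJPEGWithQuality:0.7]; // 70% quality instead of the default
NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.jpg"];
[jpegData writeToFile:path atomically:YES];
```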
Get UIImage from BIOImage
This is the method of the UIImage (BIOImage) extension that converts a BIOImage to a UIImage.
```objc
+ (UIImage*)imageFromBIOImage:(BIOImage*)bioImage
```
Parameter | Description |
---|---|
image BIOImage* | BIOImage to convert to UIImage. |
Return | Description |
---|---|
UIImage* | UIImage from BIOImage. |
Get BIOImage from UIImage
This is the method of the BIOImage (ImageGetters) extension that converts a UIImage to a BIOImage.
```objc
+ (BIOImage*)BIOImageFromUIImage:(UIImage*)image;
```
Parameter | Description |
---|---|
image UIImage* | UIImage to convert to BIOImage. |
Return | Description |
---|---|
BIOImage* | BIOImage from UIImage. |
Create a BIOImage with Different Color Space
This is the method of the BIOImage (ImageGetters) extension that converts a BIOImage to another BIOImage with a different color space.
```objc
- (BIOImage*)BIOImageWithColorSpace:(BIOColorSpace)colorSpace;
```
Parameter | Description |
---|---|
colorSpace BIOColorSpace | Color space wanted for the new BIOImage . |
Return | Description |
---|---|
BIOImage* | BIOImage with the color space provided. |
Compress Image with Quality to NSData
This is a method of the BIOImage (BIOResize) extension that compresses an image to NSData* with a determined quality.
```objc
- (NSData*)dataByCompressingImageWithQuality:(uint32_t)quality
```
Parameter | Description |
---|---|
quality uint32_t | Quality of the compression (value between 1 and 100 ). |
Return | Description |
---|---|
NSData* | Final data with compression applied. |
Compress Image with Quality to BIOImage
This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with a determined quality.
```objc
- (BIOImage*)imageByCompressingImageWithQuality:(uint32_t)quality
```
Parameter | Description |
---|---|
quality uint32_t | Quality of the compression (value between 1 and 100 ). |
Return | Description |
---|---|
BIOImage* | Final BIOImage with compression applied. |
Compress Image from Size in Kilobytes to NSData
This is a method of the BIOImage (BIOResize) extension that compresses an image to NSData* with a determined final size in kilobytes.
```objc
- (NSData*)dataByCompressingImageToSizeInKilobytes:(CGFloat)sizeInKilobytes
```
Parameter | Description |
---|---|
sizeInKilobytes CGFloat | Final size in kilobytes. |
Return | Description |
---|---|
NSData* | Final data with compression applied. |
Compress Image from Size in Kilobytes to BIOImage
This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with a determined final size in kilobytes.
```objc
- (BIOImage*)imageByCompressingImageToSizeInKilobytes:(CGFloat)sizeInKilobytes
```
Parameter | Description |
---|---|
sizeInKilobytes CGFloat | Final size in kilobytes. |
Return | Description |
---|---|
BIOImage* | Final BIOImage with compression applied. |
Compress Image from WSQ Ratio to NSData
This is the method of the BIOImage (BIOResize) extension that compresses an image to NSData* in WSQ format with a determined WSQ ratio. Only fingerprint images should be used with this method.
```objc
- (NSData*)dataByCompressingImageToWSQRatio:(CGFloat)ratio withScannerBlack:(Byte)scannerBlack andScannerWhite:(Byte)scannerWhite;
```
Parameter | Description |
---|---|
ratio CGFloat | WSQ ratio for the compression (value between 1.6 and 8000 , recommended value between 12 and 15 ). |
scannerBlack Byte | Black calibration value (if unknown use 0 ). |
scannerWhite Byte | White calibration value (if unknown use 255 ). |
Return | Description |
---|---|
NSData* | Final data with compression applied. |
Compress Image from WSQ Ratio to BIOImage
This is the method of the BIOImage (BIOResize) extension that compresses an image to a BIOImage* with a determined WSQ ratio. Only fingerprint images should be used with this method.
```objc
- (BIOImage*)imageByCompressingImageToWSQRatio:(CGFloat)ratio withScannerBlack:(Byte)scannerBlack andScannerWhite:(Byte)scannerWhite;
```
Parameter | Description |
---|---|
ratio CGFloat | WSQ ratio for the compression (value between 1.6 and 8000 , recommended value between 12 and 15 ). |
scannerBlack Byte | Black calibration value (if unknown use 0 ). |
scannerWhite Byte | White calibration value (if unknown use 255 ). |
Return | Description |
---|---|
BIOImage* | Final BIOImage with compression applied. |
Compress Image from JPEG2000 to NSData
This is the method of the BIOImage (BIOResize) extension that compresses an image to NSData* in JPEG2000 format with a determined maximum size in kilobytes. Only fingerprint images should be used with this method.
```objc
- (NSData*)dataByCompressingImageToJPEG2000InKilobytes:(CGFloat)maximumSizeInKilobytes
```
Parameter | Description |
---|---|
maximumSizeInKilobytes CGFloat | Maximum size in kilobytes. |
Return | Description |
---|---|
NSData* | Final data with compression applied. |
Compress Image to JPEG2000 as BIOImage
This method of the BIOImage (BIOResize) extension compresses an image to a JPEG2000-encoded BIOImage* with a given maximum size in kilobytes. Only fingerprint images should be used with this method.

```objectivec
- (BIOImage*)imageByCompressingImageToJPEG2000InKilobytes:(CGFloat)maximumSizeInKilobytes;
```
Parameter | Description |
---|---|
maximumSizeInKilobytes CGFloat | Maximum size in kilobytes. |
Return | Description |
---|---|
BIOImage* | Final BIOImage with compression applied. |
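As an illustration, a minimal usage sketch is shown below. The `fingerprintImage` variable and the 50 KB size cap are hypothetical; choose a limit that matches your storage or transmission constraints.

```objectivec
// Sketch: re-encode a fingerprint BIOImage as JPEG2000, capped at 50 KB.
// `fingerprintImage` is a hypothetical BIOImage from a capture session.
BIOImage *jp2Image = [fingerprintImage imageByCompressingImageToJPEG2000InKilobytes:50.0];
```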
Crop Image Region (document)
This method of the BIOImage (RegionCropping) extension crops a BIOImage to a given document region.

```objectivec
+ (BIOImage*)cropImage:(BIOImage*)bioImage toRegion:(BIODocumentTrackingInformation*)documentTrackingInfo;
```
Parameter | Description |
---|---|
bioImage BIOImage* | BIOImage to be cropped. |
documentTrackingInfo BIODocumentTrackingInformation* | Region coordinates that the BIOImage will be cropped to. |
Return | Description |
---|---|
BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |
Crop Image Rect
This method of the BIOImage (Cropping) extension crops a BIOImage to a given rectangle.

```objectivec
- (instancetype _Nullable)cropToRect:(CGRect)rect withMargin:(CGFloat)margin;
- (instancetype _Nullable)cropToRect:(CGRect)rect; // margin = 0
```
Parameter | Description |
---|---|
rect CGRect | Region coordinates that the BIOImage will be cropped to. |
margin CGFloat | Optional cropping margin. |
Return | Description |
---|---|
BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |
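As an illustration, a minimal usage sketch is shown below. The `capturedImage` variable and the rectangle coordinates are hypothetical; the nil check reflects the documented return value on error.

```objectivec
// Sketch: crop a BIOImage to a 200x300 region starting at (10, 20),
// keeping a 5-point margin around it. `capturedImage` is hypothetical.
BIOImage *cropped = [capturedImage cropToRect:CGRectMake(10, 20, 200, 300)
                                   withMargin:5.0];
if (cropped == nil) {
    // Per the table above, nil is returned when the crop fails.
}
```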
Crop Image Points
This method of the BIOImage (Cropping) extension crops a BIOImage to a region determined by four points.

```objectivec
- (instancetype _Nullable)cropToRegionWithPoint1:(CGPoint)point1 point2:(CGPoint)point2 point3:(CGPoint)point3 point4:(CGPoint)point4 withMargin:(CGFloat)margin;
- (instancetype _Nullable)cropToRegionWithPoint1:(CGPoint)point1 point2:(CGPoint)point2 point3:(CGPoint)point3 point4:(CGPoint)point4; // margin = 0
```
Parameter | Description |
---|---|
point1 CGPoint | First point of the region coordinates that the BIOImage will be cropped to. |
point2 CGPoint | Second point of the region coordinates that the BIOImage will be cropped to. |
point3 CGPoint | Third point of the region coordinates that the BIOImage will be cropped to. |
point4 CGPoint | Fourth point of the region coordinates that the BIOImage will be cropped to. |
margin CGFloat | Optional cropping margin. |
Return | Description |
---|---|
BIOImage* | Final BIOImage with the result of the crop, or nil if an error occurred. |