Android Extensions 

This section covers the Android image utilities, UI Extensions, AAMVA decoding, and the NFC Reader library.

Image utils 

The image utility performs all the format conversions needed to implement an app.

Converting Morpho Image Y800 to ARGB8888 

This function converts a Morpho Image encoded in Y800 to a bitmap encoded in ARGB8888.

ImageUtils.morphoImageY800ToARGB8888(getApplicationContext(), morphoImage, object : ImageUtilsAsyncCallbacks<Bitmap?> {
    override fun onPreExecute() {
        // Optional hook on the built-in Android AsyncTask callback `onPreExecute`
    }

    override fun onSuccess(bitmap: Bitmap) {
        // The image in ARGB8888
    }

    override fun onError(e: Exception) {
        // An error has occurred
    }
})

Function

Java
public static void morphoImageY800ToARGB8888(final Context context, final MorphoImage image, ImageUtilsAsyncCallbacks<Bitmap> callbacks);

Parameters:
  • context (Context): The Android context.
  • image (MorphoImage): The image.
  • callbacks (ImageUtilsAsyncCallbacks): Callbacks to be executed depending on the result.

Errors

You will receive an exception reporting the error.

Converting Bitmap to Morpho Image Y800 

This function converts a bitmap into a Morpho Image encoded in Y800.

Note: It is the developer's responsibility to fill in the properties for BiometricLocation and BiometricModality.

ImageUtils.bitmapToMorphoImageY800(getApplicationContext(), bitmap, object : ImageUtilsAsyncCallbacks<MorphoImage?> {
    override fun onPreExecute() {
        // Optional hook on the built-in Android AsyncTask callback `onPreExecute`
    }

    override fun onSuccess(image: MorphoImage) {
        // Remember to configure the Morpho Image
        image.biometricModality = BiometricModality.FACE
        image.biometricLocation = BiometricLocation.FACE_FRONTAL
    }

    override fun onError(e: Exception) {
        // An error has occurred
    }
})

Function

Java
public static void bitmapToMorphoImageY800(final Context context, final Bitmap image, ImageUtilsAsyncCallbacks<MorphoImage> callbacks);

Parameters:
  • context (Context): The Android context.
  • image (Bitmap): The image.
  • callbacks (ImageUtilsAsyncCallbacks): Callbacks to be executed depending on the result.

Errors

You receive an exception reporting the error.

Compressing bitmap to a maximum desired size 

This function compresses a bitmap so that it does not exceed a desired maximum size in kilobytes (KB). The result may come out smaller than the maximum; the function only guarantees that the output does not exceed it.

Note: Execute this function off the UI thread, as it can consume substantial resources.

// Maximum size 500 KB
val compressed = ImageUtils.compressBitmap(bitmap, 500)

Function

Java
public static Bitmap compressBitmap(Bitmap srcBitmap, int maxSize) throws IllegalArgumentException;

Parameters:
  • srcBitmap (Bitmap): The image.
  • maxSize (int): The maximum desired size in KB.

Errors

You receive an exception reporting the error.

Resize a bitmap to a maximum side length 

This function resizes a bitmap so that its longest side does not exceed a desired length in pixels. The result may come out smaller than the maximum side length; the function only guarantees that the output does not exceed it.

Note: Execute this function off the UI thread, as it can consume substantial resources.

// Maximum size 500 px
val resized: Bitmap = ImageUtils.resizeBitmap(bitmap, 500)

Function

Java
public static Bitmap resizeBitmap(Bitmap srcBitmap, int maxSideLengthInPixels) throws IllegalArgumentException;

Parameters:
  • srcBitmap (Bitmap): The image.
  • maxSideLengthInPixels (int): Maximum side length in pixels.

Errors

You receive an exception reporting the error.

Compress and resize an IImage 

This function compresses and resizes an image to a desired maximum size in kilobytes (KB) and a maximum side length in pixels. The resizing keeps the aspect ratio, and the result may come out below the maximums you set. The returned data is a JPEG image as a byte[].

Note: Execute this function off the UI thread, as it can consume substantial resources.

// Maximum pixel length is 3000 pixels
// Maximum size 500 KB
val jpegImage: ByteArray = ImageUtils.resizeAndCompressToByteArray(image, 3000, 500) // returned image in JPEG format as byte[]

Function

Java
public static byte[] resizeAndCompressToByteArray(IImage image, int maxSideLengthInPixels, int maxSizeInKB) throws Exception;

Parameters:
  • image (IImage): The image.
  • maxSideLengthInPixels (int): Maximum desired side length in pixels.
  • maxSizeInKB (int): The maximum desired size in KB.

Errors

You receive an exception reporting the error.

Cropping an IImage 

This function crops an image. The returned data will be the same kind of IImage as the source image.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.doCrop(image, documentRegion.getPoint1().x, documentRegion.getPoint1().y, documentRegion.getPoint3().x, documentRegion.getPoint3().y)

Function

Java
public static IImage doCrop(IImage srcImage, double topLeftX, double topLeftY, double bottomRightX, double bottomRightY) throws Exception;

Parameters:
  • srcImage (IImage): The image.
  • topLeftX (double): The top left X coordinate.
  • topLeftY (double): The top left Y coordinate.
  • bottomRightX (double): The bottom right X coordinate.
  • bottomRightY (double): The bottom right Y coordinate.

Errors

You receive an exception reporting the error.

Rotating an IImage 

This function rotates an image. The returned data will be the same kind of IImage as the source image.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.doRotation(image, 90f)

Function

Java
public static IImage doRotation(IImage srcImage, float degrees) throws Exception;

Parameters:
  • srcImage (IImage): The image.
  • degrees (float): The degrees to rotate.

Errors

You receive an exception reporting the error.

Flipping an IImage 

This function flips an image. The returned data will be the same kind of IImage as the source image.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.doFlip(image, FlipType.LI_F_BOTH)

Function

Java
public static IImage doFlip(IImage srcImage, FlipType flipType) throws Exception;

Parameters:
  • srcImage (IImage): The image.
  • flipType (FlipType): The flip type.

Errors

You receive an exception reporting the error.

Converting raw images to JPEG 2000 

This function converts raw images to JPEG 2000. The returned data will be the same kind of IImage as the source image.

Only fingerprint images should be used in this method.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.toJPG2000(image, false)

Function

Java
public static IImage toJPG2000(IImage srcImage, boolean isLatent) throws Exception;

Parameters:
  • srcImage (IImage): The image in raw format.
  • isLatent (boolean): False for Rolled, Flat, Slap (Card scan, Live scan, Mobile ID credential, and palm). True for Latent.

Errors

You receive an exception reporting the error.

Converting Raw Images to JPEG 2000 with Maximum Size 

This function converts raw images to JPEG 2000. The returned data will be the same kind of IImage as the source image.

Only fingerprint images should be used in this method.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.toJPG2000(image, 102400)

Function

Java
public static IImage toJPG2000(IImage srcImage, int outputMaximumSizeInBytes) throws Exception;

Parameters:
  • srcImage (IImage): The image in raw format.
  • outputMaximumSizeInBytes (int): Maximum size (in bytes) of the output compressed buffer.

Errors

You receive an exception reporting the error.

Converting raw images to WSQ 

This function will convert raw images to WSQ. The returned data will be the same kind of IImage as the source image.

Required for this function:

  • Resolution of the image must be 500 dpi.
  • Number of rows in the image must be between 64 and 20000.
  • Number of columns in the image must be between 64 and 20000.

Only fingerprint images should be used in this method.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImage: IImage = ImageUtils.toWSQ(srcImage, 15f, 0.toByte(), 0xff.toByte())

Function

Java
public static IImage toWSQ(IImage srcImage, float compressionRatio, byte scannerBlack, byte scannerWhite) throws java.lang.Exception;

Parameters:
  • srcImage (IImage): The image in raw format.
  • compressionRatio (float): The desired compression ratio (for example, 15 for 15:1 compression).
  • scannerBlack (byte): BLACK calibration value (if unknown, use 0).
  • scannerWhite (byte): WHITE calibration value (if unknown, use 255).

Errors

You receive an exception reporting the error.

Extracting images 

This method extracts the images encoded by the location coordinates of a DocumentImage. The returned data will be a list of images cropped from the original image.

This function is intended to be used when rectification is disabled during the capture of a document.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val iImages: List<DocumentImage> = ImageUtils.extractImages(srcImage)

Function

Java
public static List<DocumentImage> extractImages(DocumentImage srcImage) throws java.lang.Exception;

Parameters:
  • srcImage (DocumentImage): The image in raw format that contains the location coordinates (areas to be cropped).

Errors

You receive an exception reporting the error.

Resizing bitmap to IDEMIA standards 

This method eases integration with IDEMIA servers. The returned data is an image scaled to the proper format, depending on the input parameters.

This function is intended to be used before an image is sent to IDEMIA servers.

Note: Execute this function off the UI thread, as it can consume substantial resources.

val result: Bitmap = ImageUtils.resizeBitmap(srcImage, UseCase.BIOMETRIC, DocumentType.SELFIE, false)

Function

Java
public static Bitmap resizeBitmap(Bitmap srcBitmap, UseCase useCase, DocumentType documentType, boolean isCropped) throws IllegalArgumentException;

Parameters:
  • srcBitmap (Bitmap): The source image to resize.
  • useCase (UseCase): The use case.
  • documentType (DocumentType): The type of document.
  • isCropped (boolean): True if the document has been cropped, false otherwise.

Errors

You receive an exception reporting the error.

UI Extensions 

The UI Extension library helps developers display a liveness challenge for the Liveness.ACTIVE setting (the challenge is called Join the Points). It is widely customizable in order to adapt to many different applications.

Prerequisites 

Skills Required

The developers need knowledge of:

  • Android Studio
  • Java/Kotlin
  • Android

Resources Required

The library is distributed as a Maven artifact (an aar package). It is recommended to use the Capture SDK repository to handle dependency management.

Getting Started 

Adding the Library to your Project

Below is the Capture SDK repository configuration for the Maven artifact (hosted on Artifactory).

Groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}

repositoryUrlMI: Mobile Identity artifactory repository url

artifactoryUserMI: Mobile Identity artifactory username

artifactoryPasswordMI: Mobile Identity artifactory password

These properties can be obtained through the portal and should be stored in your local gradle.properties file; that way, the credentials are not included in the source code. Property configuration:

Properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local

More about gradle properties can be found here.

Note: In the UI Extensions dependency declaration, X.Y.Z should be replaced with the proper version number (for example: 1.2.6).

Groovy
dependencies {
    implementation "com.idemia.smartsdk:ui-extensions:X.Y.Z@aar"
    ...
}

Integrating with the UI Extension 

Setting up the Layout

Before the challenge can be configured, add the view that will be responsible for displaying everything.

XML
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior">

    <com.idemia.biometricsdkuiextensions.ui.scene.view.SceneView
        android:id="@+id/sceneSurface"
        app:showBorders="true"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>
</android.support.constraint.ConstraintLayout>

The showBorders property controls whether the borders of the area where points can be displayed are visible.

Setting up the Scene Controller

Before creating the controller for the scene, configure the options to be passed to the controller's constructor. These are similar to the SDK's options. A Kotlin DSL is provided, which makes setting the options much more convenient.

You must use one of the settings below depending on the selected mode:

Kotlin
joinThePointsChallengeSettings {}
passiveCaptureSettings {}
fingerCaptureSettings {}
passiveVideoCaptureSettings {}
mlcCaptureSettings {}
JoinThePointsCaptureSettings (example configuration)
Kotlin
val settings = joinThePointsChallengeSettings {
    targetCount = 4
    useInterpolation = true
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        capturedLineOpacity = 0.5f
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        pointer {
            type = PointerType.PULSING
            collisionWithTargetAction = PointerCollisionAction.NONE
        }
        target {
            notSelectedImageResId = R.drawable.ic_target_free
            capturedImageResId = R.drawable.ic_target_joined
            capturedImageSolidColor = Color.CYAN
            failedImageResId = R.drawable.ic_challenge_failed
            selectedImageResId = R.drawable.ic_target_connecting_light_blue
            startingImageResId = R.drawable.ic_target_connecting
            capturedTargetOpacity = 1f
            displayTextSettings = TextSettings.ALL
            pulseAnimation {
                waves = 2
            }
            progressColor = Color.WHITE
            textColor = Color.RED
            showMarkOnCurrentTarget = true
        }
        result {
            failureImageResId = R.drawable.ic_challenge_failed
            successImageResId = R.drawable.ic_challenge_success
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#101010")
            colorText = Color.parseColor("#101010")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
    }
}
PassiveCaptureSettings (example configuration)
Kotlin
val settings = passiveCaptureSettings {
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            colorText = Color.parseColor(Colors.white)
        }
        overlay {
            showOverlay = true
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor(Colors.black)
            colorText = Color.parseColor(Colors.black)
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        countdown {
            countdownSeconds = 3
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
FingerCaptureSettings (example configuration)
Kotlin
val settings = fingerCaptureSettings {
    scene {
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        rectangle {
            color = Color.BLACK
            strokeWidth = 20f
            cornerRadius {
                rx = 20f
                ry = 20f
            }
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        feedback {
            feedbackStringMapping = mapping
            show = true
        }
        distance {
            range = convertDistanceRange(handler.getCaptureDistanceRange())
            showOptimalDistanceIndicator = true
        }
        progressBar {
            labelRes = R.string.scanning
            show = true
        }
    }
}
PassiveVideoCaptureSettings (example configuration)
Kotlin
val settings = passiveVideoCaptureSettings {
    scene {
        preparationScene {
            backgroundColor = Color.WHITE
        }
        faceOverlay {
            progressBar {
                progressFill = Color.GREEN
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            videoBackground { }
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
MlcCaptureSettings (example configuration)
Kotlin
val settings = mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        feedback {
            textColor = Color.parseColor(Colors.default_mlc_feedback_color)
            centerFaceResId = R.string.feedback_center_face
            noSmileResId = R.string.no_smile
            bigSmileResId = R.string.big_smile
            feedbackDisplayTimeInMillis = FEEDBACK_DISPLAY_TIME_MILLIS
            faceCaptureFeedbackStringMapping = faceCaptureFeedbackMapping
            flashesWarningResId = R.string.flashes_warning
            smileAcquiredResId = R.string.smile_acquired_successfully
            illuminationResId = R.string.hold_still_and_close_eyes
            moveCloserResId = R.string.move_your_face_closer
            noSmileEmojiResId = R.string.no_smile_emoji
            bigSmileEmojiResId = R.string.big_smile_emoji
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        mlcFaceOvalBorder {
            showingDurationMillis = 2000L
            enabled = true
            color = Color.parseColor(Colors.green)
        }
        captureProgressBar {
            enabled = true
            progressText = R.string.capture_progress_text
            progressTextColor = Color.parseColor(Colors.black)
            progressColor = Color.parseColor(Colors.capture_progress_color)
            progressBackgroundColor = Color.parseColor(Colors.capture_progress_bar_background_color)
            illuminationPhaseColor = Color.parseColor(Colors.capture_illumination_progress_color)
            illuminationPhaseBackgroundColor = Color.parseColor(Colors.capture_illumination_background_color)
        }
    }
}

There is no need to configure every option; each option has a default value. However, targetCount (in Join The Points mode) must be equal to the count in FaceCaptureOptions from the Capture SDK.

A description of each option can be found below in the "Options" section.

Now we are ready to create a scene controller that will manage the challenge drawing, based on the input we provide it.

Kotlin
...
val sceneController = JoinThePointsSceneController(sceneSurface, settings)
Kotlin
...
val sceneController = PassiveCaptureSceneController(sceneSurface, settings)
Kotlin
...
val sceneController = FingerCaptureSceneController(sceneSurface, settings)
Kotlin
...
val sceneController = PassiveVideoCaptureSceneController(sceneSurface, settings)
Kotlin
...
val sceneController = MlcCaptureSceneController(sceneSurface, settings)

Here, sceneSurface is the instance of SceneView added to the layout of the Activity/Fragment hosting the challenge.
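For example, in an Activity the view can be looked up from the layout shown earlier (a minimal sketch; looking it up by id is an assumption of this example):

Kotlin
val sceneSurface = findViewById<SceneView>(R.id.sceneSurface)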

Use Settings with Java

Using Java is also possible. To do that, use the settings classes directly instead of the Kotlin DSL:

  • For Join The Points mode: JoinThePointsChallengeSettingsBuilder or JoinThePointsChallengeSettings
  • For Passive mode: PassiveCaptureSettings or PassiveSettingsBuilder
  • For Finger Capture: FingerCaptureSettings or FingerCaptureSettingsBuilder
  • For VideoPassive mode: PassiveVideoCaptureSettings or PassiveVideoSceneSettingsBuilder
  • For MLC mode: MlcCaptureSettings or MlcSettingsBuilder

Using Scene Controller in Capture SDK’s Callbacks

Starting the Challenge

Once configured, the controller can be used. Starting the scene controller is an asynchronous call; when it finishes, start the capture:

  1. start scene controller asynchronously (capture preview will be started automatically)
  2. start capture

There are four ways to start the scene controller: two using coroutines and two using callbacks:

  1. Using coroutines:
Kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start(captureHandler)
    captureHandler.startCapture()
}
Kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start()
    captureHandler.startCapture()
}
  2. Using callbacks:
Java
captureHandler.startPreview(new PreviewStatusListener() {
    @Override
    public void onStarted() {
        try {
            sceneController.start(captureHandler, () -> captureHandler.startCapture());
        } catch (MSCException e) {
            // handle exception
        }
    }

    @Override
    public void onError(PreviewError error) {
        // Preview initialization failed and cannot be started
    }
});
Java
1captureHandler.startPreview(new PreviewStatusListener() {
2 @Override
3 public void onStarted() {
4 try {
5 sceneController.start() {
6 captureHandler.startCapture()
7 }
8 } catch (MSCException e) {
9 // handle exception
10 }
11 }
12
13 @Override
14 public void onError(PreviewError error) {
15 // Preview initialization failed and can not be started
16 }
17})
Pausing or Stopping the Challenge

Stop both the capture and the scene when the activity pauses or when it's desired to stop the challenge.

captureHandler.stopCapture()
sceneController.stop()

Closing the Challenge

Release the resources when closing the whole challenge or activity.

captureHandler.destroy()
sceneController.destroy()
Updating the Challenge Status

Updating the challenge status is required for a fully working experience. Update the challenge status so that the sceneController knows how to redraw all of the elements in the scene. To do this, collect the proper data from the SDK's callbacks and push it to the controller.

Kotlin
// Setting tracking listener
captureHandler.setBioTrackingListener { trackingList -> sceneController.onTracking(trackingList) }

// Setting CR2D listener
captureHandler.setBioCaptureCR2DListener(object : BioCaptureCR2DListener {
    override fun onCurrentUpdated(point: Cr2dCurrentPoint?) {
        if (point != null) {
            sceneController.update(point)
        }
    }

    override fun onTargetUpdated(target: Cr2dTargetPoint?) {
        if (target != null) {
            sceneController.update(target)
        }
    }

    override fun onTargetsConditionUpdated(targetsNumber: Int, stability: Int) {
        sceneController.update(targetsNumber, stability)
    }
})

// Setting result listener
captureHandler.setBioCaptureResultListener(BioCaptureResultListenerAdapter(
    { sceneController.captureSuccess { println("CAPTURE SUCCESS") } },
    { error ->
        if (error == CaptureError.CAPTURE_TIMEOUT) {
            sceneController.captureTimeout { println("CAPTURE TIMEOUT") }
        } else {
            sceneController.captureFailure { println("CAPTURE FAILURE") }
        }
    }))
Updating the PassiveVideo capture

To provide a good user experience beyond the standard cases like capture success or failure, all callbacks from the Capture SDK's FaceVideoPassiveListener should be covered. Handling them is straightforward: pass the values from the SDK's callbacks to the proper PassiveVideoCaptureSceneController methods, as in the example below.

Kotlin
class FaceVideoPassiveListenerAdapter(
    val preparationStarted: () -> Unit,
    val preparationFinished: () -> Unit,
    val overlayUpdatedCallback: (OvalOverlay) -> Unit,
    val progressUpdatedCallback: (Float) -> Unit
) : FaceVideoPassiveListener {
    override fun onPreparationFinished() {
        preparationFinished()
    }

    override fun onPreparationStarted() {
        preparationStarted()
    }

    override fun overlayUpdated(overlay: OvalOverlay) {
        overlayUpdatedCallback(overlay)
    }

    override fun progressUpdated(progress: Float) {
        progressUpdatedCallback(progress)
    }
}
Kotlin
captureHandler.setFaceVideoPassiveListener(
    FaceVideoPassiveListenerAdapter(
        { sceneController.showPreparingView() },
        { sceneController.hidePreparingView() },
        { sceneController.ovalOverlayUpdate(
            FaceOval(
                it.width,
                it.height,
                it.centerX,
                it.centerY
            )
        ) },
        { sceneController.updateProgress(it) }
    )
)
Managing MLC capture

MlcCaptureSceneController is designed to handle operations associated with the MultidimensionalLivenessCheck capture. It allows performing the following operations related to the capture (all of them should be triggered in the order presented below):

Updating smile progress

Smile progress, which is displayed using the SmileIndicatorBar, can be changed with the applySmileProgress method:

Kotlin
fun applySmileProgress(@FloatRange(from = 0.0, to = 1.0) progress: Float)

IMPORTANT Make sure that showSmileIndicator() has been called on SceneView before passing values to this method. Otherwise, SmileIndicatorBar will not be visible!
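A minimal usage sketch, assuming sceneView is the SceneView from your layout and the progress value comes from the SDK's smile feedback:

Kotlin
sceneView.showSmileIndicator()
// Progress must stay within the 0.0..1.0 range
sceneController.applySmileProgress(0.6f)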

Rescaling oval preview for illumination

To rescale the oval preview before starting the illumination phase, call the prepareIllumination() method:

Kotlin
fun prepareIllumination(scale: Float)

The scale value should come from the onIlluminationPrepared(scale: Float) method of MlcListener.

Showing instructions for illumination phase and triggering illumination

The illumination process can be started by calling the smileEnded method:

Kotlin
fun smileEnded(requestIllumination: () -> Unit)

Calling this method results in the following:

  1. Feedbacks associated with face capture will not be shown anymore.
  2. SmileIndicatorBar will be hidden.
  3. Instructions for illumination phase will be shown with given delay (it can be configured with feedback field of DSL).
  4. requestIllumination passed as method argument will be called.
  5. Feedbacks for face capture will be shown again.

IMPORTANT The requestIllumination callback passed to this method should invoke start on the IlluminationRequest interface coming from the SDK. Otherwise, the illumination process will not be started!

Changing preview background color during illumination phase

Once start() is invoked on IlluminationRequest, you will receive colors from the onColorToDisplay(red: Int, green: Int, blue: Int) method. These colors should be passed to the showIllumination(red: Int, green: Int, blue: Int) method of the MlcCaptureController in order to set the proper color of the preview background.
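Putting these steps together, a minimal sketch of the wiring might look as follows. The names onIlluminationPrepared and onColorToDisplay come from the description above; how the IlluminationRequest instance reaches your code depends on the SDK listener, so the illuminationRequest property here is an assumption:

Kotlin
// Called with the value from MlcListener.onIlluminationPrepared(scale: Float)
fun illuminationPrepared(scale: Float) {
    sceneController.prepareIllumination(scale)
    sceneController.smileEnded {
        // Must invoke start on the IlluminationRequest coming from the SDK
        illuminationRequest.start()
    }
}

// Called with the values from onColorToDisplay(red, green, blue)
fun colorToDisplay(red: Int, green: Int, blue: Int) {
    sceneController.showIllumination(red, green, blue)
}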

Options 

Interpolation

useInterpolation is the preferred option for "low-end" phones where pointer movement may look “sluggish”. This option is disabled by default.

It is used only in JoinThePointsCaptureSettings.
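Enabling it is a one-line change in the DSL, for example:

Kotlin
val settings = joinThePointsChallengeSettings {
    useInterpolation = true
}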

Pointer

Configuration of the pointer in Join The Points capture mode. It contains the following parameters:

  • collisionWithTargetAction describes the action taken when the pointer collides with the current target. There is no action by default. To hide the pointer after a collision, set this option to PointerCollisionAction.HIDE.

  • solidColor is used to set the pointer color.

  • imageResourceId, as the name suggests, is the resource id of the pointer image.

  • type specifies the type of pointer. Two types are available:

  1. a standard image with a pulsing animation under the image (PointerType.PULSING).
  2. an image rotating towards the current target (PointerType.TRACKING).

If type has been set to PointerType.PULSING, an additional pulseAnimation configuration can be provided. For this animation you can set:

  • waves - the number of waves in the animation.
  • color - the color of the animation.
  • minAlpha and maxAlpha - the range of the animation opacity.

The pointer is used only in JoinThePointsCaptureSettings.

The whole pointer configuration with default values:

Kotlin
pointer {
    solidColor = Color.parseColor(Colors.black)
    imageResourceId = R.drawable.dot_pointer
    type = PointerType.PULSING
    collisionWithTargetAction = PointerCollisionAction.NONE
    pulseAnimation {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}

Target

Configuration of targets in Join The Points capture mode. It contains the following parameters:

  • displayTextSettings - since targets need to be captured in a specific order, the points are numbered by default (there is also one extra point, the starting point). This option allows an integrator to enable or disable the default text in specific places.

  • notSelectedImageResId - resource id of the image for the point that is not active yet.

  • notSelectedSolidColor - color of the image for the point that is not active yet.

  • selectedImageResId - resource id of the image for the active point.

  • selectedImageSolidColor - color of the image for the active point.

  • capturedImageResId - resource id of the image that will be displayed inside the dot that has already been captured.

  • capturedImageSolidColor - color of the image inside fulfilled dot.

  • startingImageResId - resource id of the image that will be displayed inside starting point.

  • startingPointSolidColor - color of the image for the starting point.

  • startingPointSize is the size of the starting point. It takes a data class of type TargetSize, which consists of widthInPixels and heightInPixels.

  • progressColor - color of progress indicator inside the current point.

  • showMarkOnCurrentTarget - if set to true, an animation with arrows showing which point is the current one will be displayed.

  • textColor - color of the text inside the points.

  • capturedTargetOpacity - opacity of a point that has already been captured.

  • pulseAnimation - configuration of pulsing animation around the points. For this animation you can set:

    • waves - number of waves inside animation. 2 by default.
    • color - color of the animation.
    • minAlpha and maxAlpha - ranges of animation opacity.

Whole target configuration with default values:

Kotlin
target {
    displayTextSettings = TextSettings.ALL
    notSelectedImageResId = R.drawable.ic_target_free
    notSelectedSolidColor = Color.parseColor("#430099")
    selectedImageResId = R.drawable.ic_target_connecting_light_blue
    selectedImageSolidColor = Color.parseColor("#007dba")
    capturedImageResId = R.drawable.ic_target_joined
    capturedImageSolidColor = Color.parseColor("#430099")
    startingImageResId = R.drawable.ic_target_connecting
    startingPointSolidColor = Color.parseColor("#430099")
    startingPointSize = TargetSize(136, 136)
    progressColor = Color.parseColor("#330370")
    showMarkOnCurrentTarget = false
    textColor = Color.parseColor(Colors.white)
    capturedTargetOpacity = 1f
    pulseAnimation {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}

Target is used only in JoinThePointsCaptureSettings.

Tapping Feedback

Refers to the built-in feedback displayed on the SceneView. This feedback shows a message asking the user to stop tapping the screen.
You can set the look of the Tapping Feedback in the Kotlin DSL. Tapping Feedback is enabled by default. You can prevent it from being displayed by setting enabled to false.

It is used in all modes.

Kotlin
tapping {
    colorBackgroundResId = R.color.default_tapping_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.no_tapping_feedback
    enabled = true
}

Device vertical tilt Feedback

This feedback is displayed on the SceneView before face capture. It is displayed until the phone is held vertically.

You can set the look of Device vertical tilt Feedback in Kotlin DSL.

Device vertical tilt Feedback is enabled by default. You can disable it by setting the enabled option to false.

It is used in JoinThePointsCaptureSettings, PassiveCaptureSettings.

Kotlin
verticalTilt {
    colorBackgroundResId = R.color.default_device_vertical_tilt_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.device_vertical_tilt_feedback
    enabled = true
}

Capture delay

The UI extension can handle capture delay. It displays a message with a countdown until capture becomes available. It is turned on by default, but you can turn it off.

Kotlin
delay {
    isEnabled = true
    message = R.string.delay_message
}

In the message resource, add a text parameter that will be replaced by the counter. For example, this is the default text used in the UI Extension, with the text parameter:

XML
<string name="capture_delay_message">Authentication locked.\nPlease wait for:\n%1$s</string>

Feedbacks

There are two types of face capture feedbacks: one for the legacy API and one for the API based on use cases.

FaceCaptureInfo

FaceCaptureInfo is associated with the legacy API. Below are the default messages for FaceCaptureInfo:

FaceCaptureInfo | String | Comment
INFO_COME_BACK_FIELD | Come back in the camera field |
INFO_CENTER_TURN_LEFT, INFO_CENTER_TURN_RIGHT, INFO_CENTER_ROTATE_DOWN, INFO_CENTER_ROTATE_UP, INFO_CENTER_TILT_LEFT, INFO_CENTER_TILT_RIGHT | Center your face in camera view |
INFO_CENTER_MOVE_FORWARDS | Move your face forward |
INFO_CENTER_MOVE_BACKWARDS | Move your face backward |
INFO_TOO_FAST | Moving to fast |
CENTER_GOOD | Face is in good position | Used only in Passive mode
INFO_DONT_MOVE | (no message shown) | Called just before taking the capture in Passive mode
INFO_CHALLANGE_2D | Connect the dots | Used only in Join The Points mode
INFO_STAND_STILL | Stand still for a moment | Used for best face image selection before starting the challenge; used in all modes. It means the face position is good, do not move anymore.
INFO_NOT_MOVING | Move your head to connect the dots / Move your head | Sent when the user is not performing the task; used in Join The Points and other modes, with a different message for each
DEVICE_MOVEMENT_ROTATION | Don't move your phone |
CaptureFeedback

CaptureFeedback is related to the new API. Below are the default messages for CaptureFeedback:

CaptureFeedback | String | Comment
FACE_INFO_COME_BACK_FIELD | Come back in the camera field |
FACE_INFO_CENTER_MOVE_FORWARDS | Move your face forward |
FACE_INFO_CENTER_MOVE_BACKWARDS | Move your face backward |
FACE_INFO_CENTER_MOVE_UP | Move your phone slightly upwards |
FACE_INFO_CENTER_MOVE_DOWN | Move your phone slightly downwards |
FACE_INFO_CENTER_MOVE_LEFT | Move your phone slightly to the left |
FACE_INFO_CENTER_MOVE_RIGHT | Move your phone slightly to the right |
FACE_INFO_TOO_FAST | Moving to fast |
FACE_CENTER_GOOD | Face is in good position |
DEVICE_MOVEMENT_DETECTED | Don't move your phone |
FACE_INFO_CHALLANGE_2D | Connect the dots | Used only in Join The Points mode
FACE_INFO_STAND_STILL | Stand still for a moment |
FACE_INFO_MAKE_A_SMILE | Now, smile genuinely to fill the gauge below. Cheese! |
FACE_INFO_MAKE_A_NEUTRAL_EXPRESSION | Please keep a neutral facial expression for a while |
FACE_INFO_MOVE_DARKER_AREA | It’s too bright. Make sure you are in a well lit environment. |
FACE_INFO_MOVE_BRIGHTER_AREA | It’s too dark. Make sure you are in a well lit environment. |
FACE_INFO_NOT_SMILING | Looks like you’re not smiling. Please try to make a wide, genuine smile. |
FACE_INFO_SMILE_WIDER | Almost there! Make your smile a little wider. |

You can set the look of the feedback and countdown by configuring them in the Kotlin DSL.

  • By default, feedbacks are turned on; if you want to disable them, set showFeedback to false.
  • You can provide a feedback-type-to-message mapping by setting the appropriate field in the settings DSL. For the legacy API it is faceFeedbackStringMapping, which has the function type (FaceCaptureInfo) -> String. For the new API it is faceCaptureFeedbackStringMapping, with the function type (CaptureFeedback) -> String. Both mappings have default values in English only.

Those feedbacks are used in JoinThePointsCaptureSettings, PassiveCaptureSettings and PassiveVideoCaptureSettings.

Kotlin
val settings = passiveCaptureSettings {
    scene {
        ...
        feedback {
            background {
                colorBackgroundResId = R.color.idemia_blue
                alphaCanal = 0.5f
            }
            showFeedback = true
            colorTextResId = R.color.white
            faceFeedbackStringMapping = { faceCaptureInfo ->
                mapToString(faceCaptureInfo) // This should return the correct String for the FaceCaptureInfo feedback
            }
            faceCaptureFeedbackStringMapping = { captureFeedback ->
                mapToString(captureFeedback) // This should return the correct String for the CaptureFeedback
            }
        }
    }
}

Setting up the Countdown for Passive

For passive settings, you can set a countdown that will be displayed at the top of the scene.

  • To set the countdown time, set the countdownSeconds variable in the countdown settings of the Kotlin DSL for passive mode. countdownSeconds defaults to 0 (off).

  • Choose the text of the countdown. (The default is "Countdown..." and only in English.)

  • If countdown is turned on, it will start automatically before capture.

Kotlin
val settings = passiveCaptureSettings {
    countdown {
        countdownSeconds = 3
        countdownText = getString(R.string.countdown)
    }
}

Setting up the Overlay for Passive or JoinThePoints

Turn on the face overlay when using passive/active capabilities by adding its configuration in the Kotlin DSL for passive/active settings.

  • To turn on the overlay, set showOverlay to true (default).
  • Change the image of overlay by setting imageRes.
  • The overlay width and height are permanently set to match_parent, but setting the vertical and horizontal margins, marginVertical and marginHorizontal (default is 16dp for both), is optional.
  • Set up the text in the middle of the overlay in the text section by changing:
    • text (default "Center\nyour\nface"),
    • size (default 24sp),
    • color (default #010101).
Kotlin
val settings = passiveCaptureSettings {
    ...
    overlay {
        showOverlay = true
        imageRes = R.drawable.ic_face_overlay
        marginVertical = R.dimen.overlay_vertical_margin
        marginHorizontal = R.dimen.overlay_horizontal_margin
        text {
            text = R.string.overlay_text
            textSize = R.dimen.overlay_text_size
            textColor = R.color.overlay_text_color
        }
    }
}

Displaying face capture feedback (new API)

In order to display feedback coming from the FaceCaptureSDK, the onFeedback method from FaceCaptureSceneController should be used:

Kotlin
fun onFeedback(feedback: Feedback)
Kotlin
sealed class Feedback

data class ShowFeedback(val feedback: CaptureFeedback): Feedback()

object ClearFeedback: Feedback()

When the ClearFeedback object is passed, the message visible on the screen is hidden. Otherwise, the provided CaptureFeedback is translated according to the faceCaptureFeedbackStringMapping passed in the feedback DSL.
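A minimal sketch of forwarding feedback manually (in practice the Feedback values come from the SDK's feedback callback):

Kotlin
sceneController.onFeedback(ShowFeedback(CaptureFeedback.FACE_INFO_STAND_STILL))
// Hide the currently visible message
sceneController.onFeedback(ClearFeedback)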

Setting result success and failure indicators

Success and failure indicators can be applied to JoinThePointsCaptureSettings and PassiveCaptureSettings. However, they look different in Join The Points and the other modes:

  • In Join The Points mode, the indicator is animated from the last point that was connected.
  • In Passive modes, a static image is displayed for some time.

You can turn on success and failure indicators after face capture. To show the indicators, inform the scene controller about the capture result by invoking the following methods:

  • sceneController.captureSuccess { ... Your code on success ... }
  • sceneController.captureFailure { ... Your code on failure ... }
  • sceneController.captureTimeout { ... Your code on timeout ... }

Invoke them in the FaceCaptureResultListener:

Kotlin
captureHandler.setFaceCaptureResultListener(object : FaceCaptureResultListener {
    override fun onCaptureFailure(captureError: CaptureError, biometricInfo: IBiometricInfo, extraInfo: Bundle) {
        if (captureError == CaptureError.CAPTURE_TIMEOUT) {
            sceneController.captureTimeout {
                // Your code on timeout
            }
        } else {
            sceneController.captureFailure {
                // Your code on failure
            }
        }
    }

    override fun onCaptureSuccess(image: FaceImage) {
        sceneController.captureSuccess {
            // Your code on success
        }
    }
})

The appearance of indicators is configurable in the result section in the Kotlin DSL.

  • You can change the time of the indicator by setting resultDurationInMillis (only for Passive).
  • You can change the images for success and failure indicators by setting successDrawableRes and failureDrawableRes
Kotlin
val settings = passiveCaptureSettings {
    ...
    result {
        successDrawableRes = R.drawable.ic_challenge_success
        failureDrawableRes = R.drawable.ic_challenge_failed
        resultDurationInMillis = 1000
    }
}

Setting preparation screen for PassiveVideo mode

A screen can be displayed during the preparation phase of PassiveVideo mode. It has the following properties:

  • backgroundColor, as the name suggests, is the background color. Default value is "#FFFFFF".
  • colorProgressFill is the progress bar color. Default value is "#430099".
  • colorProgressBackground is the background color of the progress bar. Default value is "#33430099".
  • colorTextTitle is the color of the title text on the preparation screen. Default value is "#000000".
  • colorTextDescription is the color of the description text on the preparation screen. Default value is "#808080".
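A minimal sketch in the DSL, assuming these fields belong to the preparationScene block shown in the PassiveVideo example above:

Kotlin
val settings = passiveVideoCaptureSettings {
    scene {
        preparationScene {
            backgroundColor = Color.WHITE
            // Field placement below is an assumption based on the list above
            colorProgressFill = Color.parseColor("#430099")
            colorTextTitle = Color.parseColor("#000000")
        }
    }
}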

Setting face overlay for PassiveVideo mode

The oval face overlay makes face capture in PassiveVideo mode much easier, as it shows where the face should be placed.

  • backgroundColor, as the name suggests, is the background color. Default value is "#153370".
  • backgroundAlpha is a property ranging from 0.0 to 1.0 describing how transparent the background around the oval is. Default value is 0.8f.
  • progressBar is a field for setting the progress indicator around the oval. It has two properties:
    • progressFill with a default value of "#FFA000".
    • progressBackground with a default value of "#FFFFFF".
  • scanningFeedback is a message to the user. Default value is "Scanning... Stay within the oval".
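A minimal sketch, assuming these fields belong to the faceOverlay block shown in the PassiveVideo example above:

Kotlin
val settings = passiveVideoCaptureSettings {
    scene {
        faceOverlay {
            // Field placement is an assumption based on the list above
            backgroundAlpha = 0.8f
            progressBar {
                progressFill = Color.parseColor("#FFA000")
                progressBackground = Color.parseColor("#FFFFFF")
            }
        }
    }
}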

Setting cancel button

For the MLC capture it is possible to configure the "X" button visible in the upper right corner of the screen during the capture. It can be done using the cancelButton property, which has the following fields:

  • enabled indicates if the icon will be visible or not. By default it is set to true.
  • iconColor is the color of the button, with a default value of "#430099".
  • iconSize is a dimension resource for the icon size. By default it is 24dp.
  • cancelListener - the method that will be invoked after clicking the button.

DSL example:

Kotlin
mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        (...)
    }
}

Setting feedbacks for MLC mode

For the MLC capture, the following properties can be passed to the feedback field:

  • centerFaceResId is a resource id of the message shown at the beginning of the capture. Default value is R.string.center_face_feedback with text Fill the oval frame with your face.
  • showFeedback is a boolean which indicates if the feedback set as centerFaceResId will be visible. Default value is true.
  • noSmileResId is a resource id of the text shown below the icon indicating no smile in the capture view. Default value is R.string.no_smile with text No smile.
  • bigSmileResId is a resource id of the text shown below the icon indicating a big smile in the capture view. Default value is R.string.big_smile with text Big smile.
  • feedbackDisplayTimeInMillis is a field for changing the display time of feedback messages. Default value is 1s.
  • faceCaptureFeedbackStringMapping is a mapping for CaptureFeedback. The default mapping is presented here.
  • smileAcquiredResId is a resource id of the message shown after the end of the smile phase of the capture. By default, it is R.string.smile_acquired_successfully with text Smile acquired successfully! Now, please hold still.
  • smileAcquiredTextDelayInMillis is the display time of the message set in the smileAcquiredResId field. By default it is 3s.
  • flashesWarningResId is a resource id of a second message. Default value is R.string.flash_warning_feedback with text In the last step, your face will be verified with a sequence of color flashes. Please hold still during color’s countdown.
  • flashesWarningTextDelayInMillis is the display time of the flashesWarning message. By default it is 5s.
  • moveCloserResId is a resource id of a third instruction. Default value is R.string.move_face_closer_feedback with text Now, move your face closer to the phone.
  • moveCloserTextDelayInMillis is the display time of the message set in the moveCloserResId field. By default it is 3s.
  • illuminationResId is a resource id of the message visible during the illumination phase. Default value is R.string.illumination_feedback with text Hold still. You can close your eyes.
  • textColor - feedback text color. By default it is "#30006D".
  • noSmileEmojiResId is a resource of the icon shown above the no smile text. By default it is R.string.no_smile_emoji.
  • bigSmileEmojiResId is a resource of the icon shown above the big smile text. By default it is R.string.big_smile_emoji.

MlcFaceOvalBorderSettings

It is possible to show a border around the face oval during the MLC capture when a capture phase has been completed (it will be visible after centering the face before the smile challenge, after capturing the smile, and when the face is centered just before triggering the illumination). The oval has the following properties:

  • enabled - indicates if the component will be visible or not. By default it is set to true.
  • showingDurationInMillis - determines how long the oval will be visible after finishing the phase. By default it is 1000ms.
  • color - the color of the border. Default value is "#429400".

DSL example:

Kotlin
mlcCaptureSettings {
    scene {
        mlcFaceOvalBorder {
            showingDurationMillis = 2000L
            enabled = true
            color = Color.parseColor(Colors.green)
        }
        (...)
    }
}

CaptureProgressBarSettings

For the MLC capture, you can configure a progress bar that shows the current progress of the capture.

  • enabled - indicates if the component will be visible or not. By default it is set to true.
  • progressText - resource id of the text visible to the left of the progress bar. Default value is R.string.capture_progress_text. It contains the text %1$d%% Complete. IMPORTANT If you want to show the percentage progress of the capture, you should put a string with a placeholder here!
  • progressTextColor - color of the progress text. "#000000" by default.
  • progressColor - progress bar fill color. By default it is "#430099".
  • progressBackgroundColor - color of the unfilled section of the progress bar. Default value is "#F2D9FA".
  • illuminationPhaseColor - progress bar fill color during the illumination phase. "#FFFFFF" by default.
  • illuminationPhaseBackgroundColor - color of the unfilled section of the progress bar during the illumination phase. By default it is "#80FFFFFF".

DSL example:

Kotlin
mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        captureProgressBar {
            enabled = true
            progressText = R.string.capture_progress_text
            progressTextColor = Color.parseColor(Colors.black)
            progressColor = Color.parseColor(Colors.capture_progress_color)
            progressBackgroundColor = Color.parseColor(Colors.capture_progress_bar_background_color)
            illuminationPhaseColor = Color.parseColor(Colors.capture_illumination_progress_color)
            illuminationPhaseBackgroundColor = Color.parseColor(Colors.capture_illumination_background_color)
        }
        (...)
    }
}

Additional components 

CaptureResultImageView

This is a custom Android view that can be embedded in the app's layout. It contains two methods:

  • fun setImage(image: Bitmap) sets the image inside the oval. The image should be cropped and ideally square in order to display properly.
  • fun setStrokeColor(@ColorInt color: Int) sets the border color and the checkmark background around the image.
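A minimal usage sketch (the view id and the bitmap source are assumptions of this example):

Kotlin
val resultView = findViewById<CaptureResultImageView>(R.id.captureResult) // hypothetical id
resultView.setImage(croppedFaceBitmap) // a cropped, ideally square bitmap
resultView.setStrokeColor(Color.GREEN)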

TutorialView

This is a component that shows tutorials provided by the TutorialProvider in the NFC Reader library. It contains one method:

fun start(animation: ByteArray, listener: TutorialListener?) sets and starts an animation in Lottie format.
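A minimal usage sketch, assuming the animation bytes were obtained from the TutorialProvider:

Kotlin
tutorialView.start(animationBytes, object : TutorialListener {
    override fun onAnimationComplete() {
        // Animation finished; continue with the NFC reading flow
    }
})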

TutorialListener

This listener provides information about when the animation ends.

onAnimationComplete()

This method is called when the animation ends.

SmileIndicatorBar

This is a component used to visualize the status of the smile phase of the capture. It consists of three groups of indicators which change colors according to the passed progress value. To update this value, the following method should be called:

fun applySmileProgress(@FloatRange(from = 0.0, to = 1.0) progress: Float)

(Image: SmileIndicatorBar with maximum progress.)

Android AAMVA 

The AAMVADecoder framework is targeted at developers who need to decode AAMVA barcode data within their mobile apps.

Prerequisites 

Skills Required

The developers need knowledge of:

  • Android Studio
  • Java/Kotlin
  • Android OS 4.1 or above
  • Gradle

Resources Required

The tools required are:

  • Android Studio
  • Gradle Wrapper, preferred v4.4

Integration Guide 

Adding a Library to your Project

To add the dependency to a project, the Artifactory repository needs to be configured. Replace the user and password with the proper credentials.

Groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}

repositoryUrlMI: Mobile Identity artifactory repository url

artifactoryUserMI: Mobile Identity artifactory username

artifactoryPasswordMI: Mobile Identity artifactory password

These properties can be obtained through the portal and should be stored in your local gradle.properties file; that way, the credentials are not included in the source code. Property configuration:

Properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local

More about gradle properties can be found here.

Then add the following implementation line to the gradle dependencies.

Groovy
dependencies {
    ...
    implementation 'com.idemia.aamva:aamva-parser:1.0.13'
    ...
}

Using the Library

Create an instance of the AAMVADecoder object.

val decoder: AAMVADecoder = AAMVADecoder(context)

Now the decoder is ready to use. After scanning the PDF417 barcode, the value can be decoded.

decoder.initWithPDF417("Scanned PDF417 value")
val decodedDocument: Document = decoder.getDocument()

Now the Document object is ready to use with the values fetched from the PDF417 barcode.

NFC Reader 

The NFC Reader library is the mobile part of the NFC Document Reading Solution. The core of the solution is the NFC Server (minimum supported version is 2.2.2), which collects and processes the read data. Once the whole document has been read, the data is available for secure download from, or push by, the NFC Server.

This library makes it possible to read ICAO-compliant passports.

Quick integration guide 

  1. Add the dependency in your project's build.gradle file:
Groovy
implementation ("com.idemia.smartsdk:smart-nfc:$smartNfcVersion")
  2. Create an Identity on the GIPS Relying Service (gips-rs) component using the v1/identities endpoint. More information can be found here.

  3. Create an NFC session using the MRZ lines fetched from the document and the Identity id from the previous step, using the v1/identities/{identityId}/id-documents/nfc-session endpoint. More information can be found here.

  4. Create the NFCReader object. This is the entry point to the whole document reading procedure.

  • The configuration parameter is the NFCConfiguration object with the customer identifier in the ID&V cloud.

  • The activity parameter is a reference to the Android AppCompatActivity where the reader will be running.

val configuration: NFCConfiguration = NFCConfiguration()
val reader: NFCReader = NFCReader(configuration, activity)
  5. Check if the device is compatible with this feature:
reader.isDeviceCompatible()
  • If the device is not compatible, reading will fail regardless. Consider displaying feedback to the end user or handling it in a different way (such as hiding this feature in the app).

  • If the device is compatible, the reading process can be started. To do that, a session id is required. This ID should be obtained from the ID&V cloud.

The reading process can be started using classic listener interfaces or by subscribing to the observable object. It is up to the integrator which way is preferred.

Warning: the observable approach requires subscribing to the object returned by the start method in order to start the reading procedure (it is a cold observable).

reader.start(sessionId, object : ResultListener {
    override fun onSuccess() {
        // Reading finished with success
    }

    override fun onFailure(failure: Failure) {
        // Reading finished with failure
    }

}, object : ProgressListener {
    override fun onProgressUpdated(progress: Int) {
        // Progress update
    }
})

Components 

Configuration

NFCConfiguration

This is the configuration class that contains information about the server URL, the customer identifier (apiKey), and which logs are available for viewing.

Parameters:
  • serverUrl (String): The URL of the service where the reader can reach the NFC Server's device API.
  • serverApiKey (String): API key used for the authorization process.
  • sdkExperience (SDKExperience): Configuration for SDKExperience. It can be null if you don't want to use the TutorialProvider.
  • logLevel (LogLevel): Logging level.
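A construction sketch (it assumes the documented fields can be supplied when creating the configuration; check the actual class for the exact signature):

Kotlin
val configuration = NFCConfiguration(
    serverUrl = "https://your-nfc-server.example.com", // assumption: constructor parameters mirror the fields above
    serverApiKey = "your-api-key",
    sdkExperience = null, // no TutorialProvider
    logLevel = LogLevel.ERROR
)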
SDKExperience

This class is the configuration for SDKExperience. It is needed to get the animation provided by IDEMIA and the location of the NFC chip on the document/phone.

Parameters:
  • serviceUrl (String): The URL of the service where the TutorialProvider can reach the SDKExperience API. (It has a default value.)
  • apiKey (String): API key used for the authorization process.
  • assetsUrl (String): The URL of the service where the TutorialProvider can reach the animations. (It has a default value.)
LogLevel

This is the enum used to configure the behavior of logs.

Values:
  • INFO: Show info logs.
  • DEBUG: Show debug logs.
  • ERROR: Show error logs.
  • NONE: Do not show logs.

NFCReader

This is the main class and the entry point to every activity connected with the document reading process.

All listeners for methods below will be called on the main thread.

start(sessionId: String, resultListener: ResultListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)

This starts the document reading process. It requires the session id as a parameter to fetch communication scripts.

  • ResultListener is the interface used to receive the reading result.

  • mrz is the MRZ information read from the document.

  • phoneNFCLocation is the information about the phone's NFC antenna location.

start(sessionId: String, resultListener: ResultListener, progressListener: ProgressListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)

This starts the document reading process. It requires the session id as a parameter to fetch communication scripts.

  • ResultListener is the interface used to receive the reading result.

  • ProgressListener is the interface used to receive progress feedback.

  • mrz is the MRZ information read from the document.

  • phoneNFCLocation is the information about the phone's NFC antenna location.

start(sessionID: String, mrz: String, phoneNFCLocation: PhoneNFCLocation): Observable

This is a Kotlin-friendly method that returns an object to which you can subscribe in order to start the capture and receive capture feedback.
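A minimal usage sketch is shown below. The element type emitted by the Observable and the exact subscribe overload are assumptions (this document only states that it is a cold observable that starts reading on subscription), so adjust to the actual reactive types in your build:

Kotlin
// Subscribing starts the reading procedure (cold observable)
val disposable = reader.start(sessionId, mrz, phoneNFCLocation)
    .subscribe(
        { event -> /* capture feedback, e.g. progress updates */ },
        { error -> /* reading failed */ },
        { /* reading completed successfully */ }
    )
// Dispose when the screen is destroyed to stop listening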

cancel()

This stops the reading procedure. The method performs its work in the background.

isDeviceCompatible()

This checks if the device satisfies all hardware/software requirements connected with the document reading feature.

Kotlin
if (reader.isDeviceCompatible()) {
    // start reading
} else {
    // display a message that the device is not compatible
}
getTutorialProvider()

This returns a provider for getting information about the NFC location and document type. It also provides an animation based on the NFC location.

ResultListener

This listener provides the possibility to invoke code based upon the reading result.

onSuccess(documentDataAccessToken: String)

This method is called when the document has been read successfully. The argument contains the access token for the document data.

onFailure(failure: Failure)

This method is called when document reading fails. The argument represents the failure reason. More about failures here.

ProgressListener

This provides the progress of the document reading.

onProgressUpdated(progress: Int)

This method is called when the progress changes. The argument progress is in the range 0 - 100.

Failure

This contains information about the document reading failure. It is built from a message and a type. The type is a more general failure cause (more than one failure might have the same type). The message contains detailed information about what happened for the given type.

Failure types
  • NFC_CONNECTION_BROKEN - The NFC connection has been broken.
  • CONNECTION_ISSUE - Cannot connect to the external server; no internet connection.
  • INVALID_SESSION_STATE - The session is in an unexpected state. A new one needs to be created.
  • SERVER_CONNECTION_BROKEN - Cannot process data with the server side; might be a compatibility issue.
  • SERVER_ERROR - A server-side error occurred.
  • UNSUPPORTED_DEVICE - The device does not support NFC, or NFC is disabled.
  • READING_ISSUE - A document reading issue occurred. Can be related to NFC issues and data conversion.
  • REQUESTS_LIMIT_EXCEEDED - Document reading is impossible because too many requests to the server have been made or the API key request limit has been exceeded.
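As an illustration, a minimal sketch of mapping these types to user feedback inside onFailure follows; the enum type name FailureType and the showError helper are hypothetical (this document only lists the constants and the type/message fields):

Kotlin
override fun onFailure(failure: Failure) {
    val message = when (failure.type) {
        FailureType.UNSUPPORTED_DEVICE -> "This device cannot read NFC documents."
        FailureType.NFC_CONNECTION_BROKEN -> "Contact with the chip was lost. Hold the document steady and try again."
        FailureType.INVALID_SESSION_STATE -> "The session is no longer valid. Please start again."
        else -> failure.message // the detailed reason supplied for the given type
    }
    showError(message) // hypothetical UI helper
}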

TutorialProvider

This class allows getting information about the NFC antenna location on the phone and the document. It also provides the DocumentType and an animation based on the previous three variables.

getNFCLocation(mrz: String, listener: NFCLocationListener)

This provides the phone and document NFC antenna location via a callback. It also provides the document type.

getNFCLocation(mrz: String)

This is a Coroutines-friendly method that returns the NFCLocationResult.

getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null, listener: AnimationListener)

This provides the animation in Lottie format via a callback.

getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null)

This is a Coroutines-friendly method that returns the AnimationResult.
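A minimal coroutine sketch of these two methods follows. It assumes NFCLocationResult is a sealed type with the NFCLocation and LocationFetchFailure variants described below, and that lifecycleScope from androidx.lifecycle is available; both are assumptions, not confirmed signatures:

Kotlin
// Inside an AppCompatActivity or Fragment with a lifecycleScope
lifecycleScope.launch {
    val provider = reader.getTutorialProvider()
    when (val result = provider.getNFCLocation(mrz)) {
        is NFCLocation -> {
            // Request the matching Lottie animation for the detected locations
            val animation = provider.getAnimation(
                result.phoneNFCLocation.first(), // assumed: pick one candidate location
                result.documentNFCLocation,
                result.documentType
            )
        }
        is LocationFetchFailure -> {
            // Inspect result.type and result.message for the failure reason
        }
    }
}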

NFCLocationListener

This listener provides information about the phone and document NFC antenna location via a callback. It also provides the document type.

onNFCLocation(nfcLocation: NFCLocation)

This method is called when the NFC antenna location information is received. The argument provides information about the NFC location and the document type.

onFailure(failure: LocationFetchFailure)

This method is called when fetching the location information fails. The argument provides the failure reason.
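For illustration, a minimal listener-based sketch; the callback shapes follow the methods documented above, while the surrounding setup (reader, mrz) is assumed:

Kotlin
reader.getTutorialProvider().getNFCLocation(mrz, object : NFCLocationListener {
    override fun onNFCLocation(nfcLocation: NFCLocation) {
        // Antenna locations and document type are available here
    }

    override fun onFailure(failure: LocationFetchFailure) {
        // Inspect failure.type and failure.message
    }
})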

AnimationListener

This listener provides the animation prepared by IDEMIA.

onAnimationProvided(animation: ByteArray)

This method is called when the animation is received. The argument is an animation in Lottie format.

onFailure(failure: AnimationFetchFailure)

This method is called when fetching the animation fails. The argument provides the failure reason.

NFCLocationResult

NFCLocation

This is the class that contains information about the phone and document NFC antenna location. It also contains the document type information.

Parameter | Description
phoneNFCLocation (List<PhoneNFCLocation>) | NFC antenna location on the phone. If the antenna location is unknown, a list of possible locations is returned.
documentNFCLocation (DocumentNFCLocation) | NFC chip location on the document. If no location information is available, FRONT_COVER is returned as the default.
documentType (DocumentType) | The document type that has an MRZ.
documentFeature (String) | Additional information about the document.
LocationFetchFailure

This is the class that contains the reason why fetching information about the phone and document NFC antenna failed.

Parameter | Description
message (String) | Description of the failure.
type (TutorialFailure) | General failure cause (more than one failure might have the same type).
PhoneNFCLocation

This is the enum with information about the phone's NFC antenna location.

Attribute | Description
TOP | NFC antenna is at the top of the phone.
MIDDLE | NFC antenna is in the middle of the phone.
BOTTOM | NFC antenna is at the bottom of the phone.
SWIPE | The antenna location is unknown; move the phone across the document.
DocumentNFCLocation

This is the enum with information about the document's NFC location.

Attribute | Description
FRONT_COVER | NFC antenna is in the front cover of the passport.
INSIDE_PAGE | NFC antenna is in the first page of the passport.
NO_NFC | The document does not have an NFC antenna.
DocumentType

This is the enum with information about the document type that has an MRZ.

Attribute | Description
PASSPORT | Passport
ID | eID
UNKNOWN | Unknown

AnimationResult

AnimationFetchSuccess

This is the class that contains the animation in Lottie format.

Parameter | Description
animation (ByteArray) | Animation in Lottie format.
AnimationFetchFailure

This class contains the reason for the animation fetching failure.

Parameter | Description
message (String) | Description of the failure.
code (Integer) | Code of the failure.
type (TutorialFailure) | General failure cause (more than one failure might have the same type).
Failure types
  • CONNECTION_ISSUE - Cannot connect to the external server.
  • NO_INTERNET_CONNECTION - No internet connection.
  • SERVER_ERROR - A server-side error occurred.
  • UNSUPPORTED_DEVICE - The device does not support NFC, or NFC is disabled.
  • READING_ISSUE - An issue occurred while fetching the NFC information or animation. Can be related to data conversion.
  • REQUEST_ERROR - Fetching the NFC information or animation is impossible because too many requests to the server have been made or the API key request limit has been exceeded.
  • MRZ_ISSUE - Issue with parsing the MRZ.
  • DOCUMENT_TYPE_ISSUE - Occurs only when there is no animation for the chosen DocumentType.

Warning!

There is a possibility that after scanning an NFC chip, the user will not move their device away from the chip, and the chip will be scanned again. This can make the device display a message about the scanned chip. Some devices (e.g., Huawei and Honor devices) exit the application and open a new window with a message about the newly read NFC tag, which creates a bad user experience and can disturb the handling of NFC results in later steps.

To prevent this, you can keep handling NFC scanning in the application even after the scan is finished. You don't have to do anything with the result, but it will prevent the application flow from being interrupted by a scanned tag.

To do this, enable reader mode on the NfcAdapter from android.nfc by calling the enableReaderMode() method (and disable it with disableReaderMode()), as sketched below. Alternatively, you can simply create an NFCReader in your activity, which calls this method on Lifecycle.Event.ON_RESUME and stops it on Lifecycle.Event.ON_PAUSE.
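For illustration, a minimal sketch of the manual approach; the class name NfcReaderModeGuard is hypothetical, and the reader flags shown are an assumption that may need adjusting for your documents:

Kotlin
import android.app.Activity
import android.nfc.NfcAdapter
import androidx.lifecycle.DefaultLifecycleObserver
import androidx.lifecycle.LifecycleOwner

// Keeps reader mode enabled while the activity is resumed, so a re-scanned tag
// is consumed by this app instead of triggering a system dialog or another app.
class NfcReaderModeGuard(private val activity: Activity) : DefaultLifecycleObserver {

    override fun onResume(owner: LifecycleOwner) {
        NfcAdapter.getDefaultAdapter(activity)?.enableReaderMode(
            activity,
            { /* tag discovered after reading finished: deliberately ignored */ },
            NfcAdapter.FLAG_READER_NFC_A or NfcAdapter.FLAG_READER_NFC_B,
            null
        )
    }

    override fun onPause(owner: LifecycleOwner) {
        NfcAdapter.getDefaultAdapter(activity)?.disableReaderMode(activity)
    }
}

// Usage in an AppCompatActivity: lifecycle.addObserver(NfcReaderModeGuard(this))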

The sample application uses a single Activity, and the NFCReader is created during Activity initialization, so reader mode is enabled the whole time the app is running in the foreground.

Sample Application 

Below you will find instructions to add and run the sample NFC application.

Note: To run the sample NFC application, you must add LKMS and Artifactory credentials, as well as the NFC and IPV (gips-rs) API keys, to your global gradle.properties.

Step 1: Obtain the API keys and credentials from the IDEMIA Experience Portal dashboard:

  • Follow the steps below to access the NFC API key:

    1. Log in to the IDEMIA Experience Portal.

    2. Go to My Dashboard -> My Identity Proofing.

      The dashboard appears.

    3. Under Access, navigate to the Environments section to find the needed key.

  • Follow the steps below to access the IPV (gips-rs) API key:

    1. Log in to the IDEMIA Experience Portal.

    2. Go to My Dashboard -> My Identity Proofing.

      The dashboard appears.

    3. Under Access, navigate to the Environments section to find the needed key.

  • Follow the steps below to access the LKMS and Artifactory credentials:

    1. Log in to the IDEMIA Experience Portal.

    2. Go to My Dashboard -> My Identity Proofing.

      The dashboard appears.

    3. Under Access, navigate to the SDK artifactory and licenses section to find the needed credentials.

Note: Remember to use the default environment (EU PROD) and confirm that the serverUrl value in NFCConfiguration and the serviceUrl value in SDKExperience are the same as the selected environment address.

Step 2: Place the NFC and IPV (gips-rs) API keys and the LKMS and Artifactory credentials into your global gradle.properties, found in your Gradle directory (by default: USER_HOME/.gradle).

Properties
nfcApiKey="YOUR NFC API KEY"
ipvApiKey="YOUR IPV API KEY"

artifactoryUserMI=<artifactory user>
artifactoryPasswordMI=<artifactory credentials>
repositoryUrlMI=<repository url>

lkmsProfileId="YOUR LKMS PROFILE ID"
lkmsApiKey="YOUR LKMS API KEY"

Step 3: Fetch the sample app source code as a .zip package from Artifactory.

Step 4: In the app's source, change NFCConfiguration to match your tenant configuration: NFCConfiguration(serverUrl = "TENANT_URL/nfc/", serverApiKey = BuildConfig.nfcApiKey, sdkExperience = SDKExperience(serviceUrl = "TENANT_URL/sdk-experience/", apiKey = BuildConfig.nfcApiKey)). On the production environment, the default parameters will most probably be fine.

Step 5: In the app's source, change ServerConfigurationData to match your tenant configuration: ServerConfigurationData(serverUrl = "TENANT_URL/gips/", serverApiKey = BuildConfig.ipvApiKey). On the production environment, the default parameters will most probably be fine.

Step 6: Run the app. If all steps have been applied properly, there should not be any issues.