Android Extensions
This section covers the image utilities and UI Extensions for Android.
Image utils
The image utility performs all the format conversions needed to implement an app.
Converting Morpho Image Y800 to ARGB8888
This function converts a Morpho Image encoded in Y800 to a bitmap encoded in ARGB8888.
```kotlin
ImageUtils.morphoImageY800ToARGB8888(getApplicationContext(), morphoImage, object : ImageUtilsAsyncCallbacks<Bitmap?> {
    override fun onPreExecute() {
        // Optional hook on the built-in Android AsyncTask callback `onPreExecute`
    }

    override fun onSuccess(bitmap: Bitmap) {
        // The image in ARGB8888
    }

    override fun onError(e: Exception) {
        // An error has occurred
    }
})
```
Function
```java
public static void morphoImageY800ToARGB8888(final Context context, final MorphoImage image, ImageUtilsAsyncCallbacks<Bitmap> callbacks);
```
Parameter | Description |
---|---|
context Context | The Android context. |
image MorphoImage | The image. |
callbacks ImageUtilsAsyncCallbacks | Callbacks to be executed depending on the result. |
Errors
You receive an exception reporting the error.
Converting Bitmap to Morpho Image Y800
This function converts a bitmap into a Morpho Image encoded in Y800.
Note: It is the developer's responsibility to fill in the properties for BiometricLocation and BiometricModality.
```kotlin
ImageUtils.bitmapToMorphoImageY800(getApplicationContext(), bitmap, object : ImageUtilsAsyncCallbacks<MorphoImage?> {
    override fun onPreExecute() {
        // Optional hook on the built-in Android AsyncTask callback `onPreExecute`
    }

    override fun onSuccess(image: MorphoImage) {
        // Remember to configure the Morpho Image
        image.biometricModality = BiometricModality.FACE
        image.biometricLocation = BiometricLocation.FACE_FRONTAL
    }

    override fun onError(e: Exception) {
        // An error has occurred
    }
})
```
Function
```java
public static void bitmapToMorphoImageY800(final Context context, final Bitmap image, ImageUtilsAsyncCallbacks<MorphoImage> callbacks);
```
Parameter | Description |
---|---|
context Context | The Android context. |
image Bitmap | The image. |
callbacks ImageUtilsAsyncCallbacks | Callbacks to be executed depending on the result. |
Errors
You receive an exception reporting the error.
Compressing bitmap to a maximum desired size
This function compresses a bitmap so that it fits within a desired maximum size in kilobytes (KB). The resulting image may come out smaller than the maximum.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
// Maximum size 500 KB
val compressed = ImageUtils.compressBitmap(bitmap, 500)
```
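Conceptually, compressing to a size budget means searching for an encoding quality whose output fits under the budget. The following is only a hedged sketch of that idea, not the SDK's actual algorithm; `encodedSizeAtQuality` is a hypothetical stand-in for an encoder that reports the byte size produced at a given quality.

```java
import java.util.function.IntFunction;

// Illustrative only: NOT the SDK's implementation. Steps the encoding quality
// down until the encoded size fits within the maxSizeKb budget.
public class CompressSketch {
    public static int findQuality(IntFunction<Integer> encodedSizeAtQuality, int maxSizeKb) {
        for (int quality = 100; quality >= 10; quality -= 10) {
            if (encodedSizeAtQuality.apply(quality) <= maxSizeKb * 1024) {
                return quality;
            }
        }
        return 10; // lowest quality tried; the result may still exceed the budget
    }
}
```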
Function
```java
public static Bitmap compressBitmap(Bitmap srcBitmap, int maxSize) throws IllegalArgumentException;
```
Parameter | Description |
---|---|
srcBitmap Bitmap | The image. |
maxSize int | The maximum size desired in KB. |
Errors
You receive an exception reporting the error.
Resizing a bitmap to a maximum side length
This function resizes a bitmap so that its longest side does not exceed a desired length in pixels. The resulting side length may come out smaller than the maximum.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
// Maximum side length 500 px
val resized: Bitmap = ImageUtils.resizeBitmap(bitmap, 500)
```
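Assuming the aspect ratio is preserved, the dimension math behind such a resize can be sketched as follows. This is an illustration only, not the SDK's implementation:

```java
// Illustrative only: NOT the SDK's implementation. Computes the output
// dimensions of a resize that caps the longest side at maxSideLengthInPixels
// while preserving the aspect ratio.
public class ResizeMath {
    public static int[] targetDimensions(int width, int height, int maxSideLengthInPixels) {
        int longestSide = Math.max(width, height);
        if (longestSide <= maxSideLengthInPixels) {
            return new int[] { width, height }; // already within the limit
        }
        double scale = (double) maxSideLengthInPixels / longestSide;
        return new int[] { (int) Math.round(width * scale), (int) Math.round(height * scale) };
    }
}
```

For example, a 4000x3000 bitmap capped at 500 px would come out at 500x375.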
Function
```java
public static Bitmap resizeBitmap(Bitmap srcBitmap, int maxSideLengthInPixels) throws IllegalArgumentException;
```
Parameter | Description |
---|---|
srcBitmap Bitmap | The image. |
maxSideLengthInPixels int | Maximum side length in pixels. |
Errors
You receive an exception reporting the error.
Compress and resize an IImage
This function compresses and resizes an image to a desired maximum size in kilobytes (KB) and a maximum side length in pixels. The resizing keeps the aspect ratio, and the result may come out smaller than both maximums. The returned data will be a JPEG image as a byte[].
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
// Maximum side length 3000 pixels
// Maximum size 500 KB
val jpegImage: ByteArray = ImageUtils.resizeAndCompressToByteArray(image, 3000, 500) // returned image in JPEG format as byte[]
```
Function
```java
public static byte[] resizeAndCompressToByteArray(IImage image, int maxSideLengthInPixels, int maxSizeInKB) throws Exception;
```
Parameter | Description |
---|---|
image IImage | The image. |
maxSideLengthInPixels int | Maximum side length desired in pixels. |
maxSizeInKB int | The maximum size desired in KB. |
Errors
You receive an exception reporting the error.
Cropping an IImage
This function crops an image. The returned data will be the same kind of IImage as the source image.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.doCrop(image, documentRegion.getPoint1().x, documentRegion.getPoint1().y, documentRegion.getPoint3().x, documentRegion.getPoint3().y)
```
Function
```java
public static IImage doCrop(IImage srcImage, double topLeftX, double topLeftY, double bottomRightX, double bottomRightY) throws Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image. |
topLeftX double | The top left position X coordinate. |
topLeftY double | The top left position Y coordinate. |
bottomRightX double | The bottom right position X coordinate. |
bottomRightY double | The bottom right position Y coordinate. |
Errors
You receive an exception reporting the error.
Rotating an IImage
This function rotates an image. The returned data will be the same kind of IImage as the source image.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.doRotation(image, 90f)
```
Function
```java
public static IImage doRotation(IImage srcImage, float degrees) throws Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image. |
degrees float | The degrees to rotate. |
Errors
You receive an exception reporting the error.
Flipping an IImage
This function flips an image. The returned data will be the same kind of IImage as the source image.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.doFlip(image, FlipType.LI_F_BOTH)
```
Function
```java
public static IImage doFlip(IImage srcImage, FlipType flipType) throws Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image. |
flipType FlipType | The flip type. |
Errors
You receive an exception reporting the error.
Converting raw images to JPEG 2000
This function converts raw images to JPEG 2000. The returned data will be the same kind of IImage as the source image.
Only fingerprint images should be used in this method.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.toJPG2000(image, false)
```
Function
```java
public static IImage toJPG2000(IImage srcImage, boolean isLatent) throws Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image in raw format. |
isLatent boolean | False for Rolled, Flat, and Slap (card scan, live scan, Mobile ID credential, and palm). True for Latent. |
Errors
You receive an exception reporting the error.
Converting raw images to JPEG 2000 with a maximum size
This function converts raw images to JPEG 2000. The returned data will be the same kind of IImage as the source image.
Only fingerprint images should be used in this method.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.toJPG2000(image, 102400)
```
Function
```java
public static IImage toJPG2000(IImage srcImage, int outputMaximumSizeInBytes) throws Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image in raw format. |
outputMaximumSizeInBytes int | Maximum size (in bytes) of the output compressed buffer. |
Errors
You receive an exception reporting the error.
Converting raw images to WSQ
This function converts raw images to WSQ. The returned data will be the same kind of IImage as the source image.
Required for this function:
- Resolution of the image must be 500 dpi.
- Number of rows in the image must be between 64 and 20000.
- Number of columns in the image must be between 64 and 20000.
Only fingerprint images should be used in this method.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImage: IImage = ImageUtils.toWSQ(srcImage, 15f, 0.toByte(), 0xFF.toByte())
```
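The input requirements listed above can be checked up front. A minimal sketch of such a pre-check (an assumed helper, not part of the SDK):

```java
// Illustrative pre-check (an assumed helper, NOT part of the SDK) for the
// WSQ conversion requirements listed above.
public class WsqPreconditions {
    public static void check(int resolutionDpi, int rows, int columns) {
        if (resolutionDpi != 500) {
            throw new IllegalArgumentException("WSQ conversion requires a 500 dpi image");
        }
        if (rows < 64 || rows > 20000) {
            throw new IllegalArgumentException("rows must be between 64 and 20000");
        }
        if (columns < 64 || columns > 20000) {
            throw new IllegalArgumentException("columns must be between 64 and 20000");
        }
    }
}
```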
Function
```java
public static IImage toWSQ(IImage srcImage, float compressionRatio, byte scannerBlack, byte scannerWhite) throws java.lang.Exception;
```
Parameter | Description |
---|---|
srcImage IImage | The image in raw format. |
compressionRatio float | The desired compression ratio (for example, 15 for 15:1). |
scannerBlack byte | BLACK calibration value (if unknown, use 0). |
scannerWhite byte | WHITE calibration value (if unknown, use 255). |
Errors
You receive an exception reporting the error.
Extracting images
This method extracts the images located by the coordinates of a DocumentImage. The returned data will be a list of images cropped from the original image.
This function is intended to be used when Rectification is disabled during the capture of a document.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val iImages: List<DocumentImage> = ImageUtils.extractImages(srcImage)
```
Function
```java
public static List<DocumentImage> extractImages(DocumentImage srcImage) throws java.lang.Exception;
```
Parameter | Description |
---|---|
srcImage DocumentImage | The image in raw format that contains the location coordinates (areas to be cropped). |
Errors
You receive an exception reporting the error.
Resizing bitmap to IDEMIA standards
This method eases integration with IDEMIA servers. The returned data is an image scaled to the proper format, depending on the input parameters.
This function is intended to be used before an image is sent to IDEMIA servers.
Note: This function should be executed on a background thread, not the UI thread, as it can consume substantial resources.
```kotlin
val result: Bitmap = ImageUtils.resizeBitmap(srcImage, UseCase.BIOMETRIC, DocumentType.SELFIE, false)
```
Function
```java
public static Bitmap resizeBitmap(Bitmap srcBitmap, UseCase useCase, DocumentType documentType, boolean isCropped) throws IllegalArgumentException;
```
Parameter | Description |
---|---|
srcBitmap Bitmap | The source image to resize. |
useCase UseCase | The use case. |
documentType DocumentType | The type of document. |
isCropped boolean | True if the document has been cropped; false otherwise. |
Errors
You receive an exception reporting the error.
UI Extensions
The UI Extension library helps developers display a liveness challenge for the Liveness.ACTIVE setting (the challenge is called Join the Points). It is widely customizable in order to adapt to many different applications.
Prerequisites
Skills Required
Developers need knowledge of:
- Android Studio
- Java/Kotlin
- Android
Resources Required
The library is distributed as a Maven artifact (an aar package). It is recommended to use the Capture SDK repository to handle dependency management.
Getting Started
Adding the Library to your Project
Here is the Maven artifact (from Artifactory) with a Capture SDK repository configuration.
```groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}
```
- repositoryUrlMI: Mobile Identity Artifactory repository URL
- artifactoryUserMI: Mobile Identity Artifactory username
- artifactoryPasswordMI: Mobile Identity Artifactory password
These properties can be obtained through the portal and should be stored in the local gradle.properties file; this way, the credentials are not included in the source code. Example configuration:
```properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local
```
More about Gradle properties can be found in the Gradle documentation.
Note: In the UI Extensions dependency declaration, X.Y.Z should be replaced with the proper version number (for example: 1.2.6).
```groovy
dependencies {
    implementation "com.idemia.smartsdk:ui-extensions:X.Y.Z@aar"
    ...
}
```
Integrating with the UI Extension
Setting up the Layout
Before the challenge can be configured, add the view that will be responsible for displaying everything.
```xml
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_behavior="@string/appbar_scrolling_view_behavior">

    <com.idemia.biometricsdkuiextensions.ui.scene.view.SceneView
        android:id="@+id/sceneSurface"
        app:showBorders="true"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>
</android.support.constraint.ConstraintLayout>
```
The showBorders property controls whether the borders within which points can be displayed are visible.
Setting up the Scene Controller
Before creating the controller for the scene, options need to be configured and passed to the controller's constructor. These are similar to the SDK's options. A Kotlin DSL is provided, which makes setting the options much more convenient.
You must use one of the settings below depending on the selected mode:
```kotlin
joinThePointsChallengeSettings {}
passiveCaptureSettings {}
fingerCaptureSettings {}
passiveVideoCaptureSettings {}
mlcCaptureSettings {}
```
JoinThePointsCaptureSettings (example configuration)
```kotlin
val settings = joinThePointsChallengeSettings {
    targetCount = 4
    useInterpolation = true
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        capturedLineOpacity = 0.5f
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        pointer {
            type = PointerType.PULSING
            collisionWithTargetAction = PointerCollisionAction.NONE
        }
        target {
            notSelectedImageResId = R.drawable.ic_target_free
            capturedImageResId = R.drawable.ic_target_joined
            capturedImageSolidColor = Color.CYAN
            failedImageResId = R.drawable.ic_challenge_failed
            selectedImageResId = R.drawable.ic_target_connecting_light_blue
            startingImageResId = R.drawable.ic_target_connecting
            capturedTargetOpacity = 1f
            displayTextSettings = TextSettings.ALL
            pulseAnimation {
                waves = 2
            }
            progressColor = Color.WHITE
            textColor = Color.RED
            showMarkOnCurrentTarget = true
        }
        result {
            failureImageResId = R.drawable.ic_challenge_failed
            successImageResId = R.drawable.ic_challenge_success
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#101010")
            colorText = Color.parseColor("#101010")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
    }
}
```
PassiveCaptureSettings (example configuration)
```kotlin
val settings = passiveCaptureSettings {
    scene {
        overlay {
            showOverlay = true
            imageRes = R.drawable.ic_face_overlay
            marginVertical = R.dimen.default_face_overlay_vertical_padding
            marginHorizontal = R.dimen.default_face_overlay_vertical_padding
            text {
                text = R.string.default_overlay_text
                textSize = R.dimen.default_overlay_text_size
                textColor = Color.parseColor(Colors.text_black)
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            colorText = Color.parseColor(Colors.white)
        }
        overlay {
            showOverlay = true
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor(Colors.black)
            colorText = Color.parseColor(Colors.black)
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        countdown {
            countdownSeconds = 3
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
```
FingerCaptureSettings (example configuration)
```kotlin
val settings = fingerCaptureSettings {
    scene {
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        rectangle {
            color = Color.BLACK
            strokeWidth = 20f
            cornerRadius {
                rx = 20f
                ry = 20f
            }
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        feedback {
            feedbackStringMapping = mapping
            show = true
        }
        distance {
            range = convertDistanceRange(handler.getCaptureDistanceRange())
            showOptimalDistanceIndicator = true
        }
        progressBar {
            labelRes = R.string.scanning
            show = true
        }
    }
}
```
PassiveVideoCaptureSettings (example configuration)
```kotlin
val settings = passiveVideoCaptureSettings {
    scene {
        preparationScene {
            backgroundColor = Color.WHITE
        }
        faceOverlay {
            progressBar {
                progressFill = Color.GREEN
            }
        }
        background {
            colorEnd = Color.parseColor("#189482")
            colorStart = Color.parseColor("#38ddb8")
        }
        previewScale {
            scaleX = 1.0f
            scaleY = 1.0f
        }
        feedback {
            videoBackground { }
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        delay {
            isEnabled = true
            message = R.string.capture_delay_message
        }
    }
}
```
MlcCaptureSettings (example configuration)
```kotlin
val settings = mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        feedback {
            textColor = Color.parseColor(Colors.default_mlc_feedback_color)
            centerFaceResId = R.string.feedback_center_face
            noSmileResId = R.string.no_smile
            bigSmileResId = R.string.big_smile
            feedbackDisplayTimeInMillis = FEEDBACK_DISPLAY_TIME_MILLIS
            faceCaptureFeedbackStringMapping = faceCaptureFeedbackMapping
            flashesWarningResId = R.string.flashes_warning
            smileAcquiredResId = R.string.smile_acquired_successfully
            illuminationResId = R.string.hold_still_and_close_eyes
            moveCloserResId = R.string.move_your_face_closer
            noSmileEmojiResId = R.string.no_smile_emoji
            bigSmileEmojiResId = R.string.big_smile_emoji
        }
        tapping {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.use_your_head
            textH1ResId = R.string.no_tapping_feedback
            enabled = true
        }
        verticalTilt {
            colorBackground = Color.parseColor("#FAFAFA")
            colorImage = Color.parseColor("#000000")
            colorText = Color.parseColor("#000000")
            textResId = R.string.device_vertical_tilt_feedback
            enabled = true
        }
        mlcFaceOvalBorder {
            showingDurationMillis = 2000L
            enabled = true
            color = Color.parseColor(Colors.green)
        }
        captureProgressBar {
            enabled = true
            progressText = R.string.capture_progress_text
            progressTextColor = Color.parseColor(Colors.black)
            progressColor = Color.parseColor(Colors.capture_progress_color)
            progressBackgroundColor = Color.parseColor(Colors.capture_progress_bar_background_color)
            illuminationPhaseColor = Color.parseColor(Colors.capture_illumination_progress_color)
            illuminationPhaseBackgroundColor = Color.parseColor(Colors.capture_illumination_background_color)
        }
    }
}
```
There is no need to configure every option; each option has a default. However, targetCount (in Join The Points mode) must be equal to the count in FaceCaptureOptions from the Capture SDK.
A description of each option can be found below in the "Options" section.
Now we are ready to create a scene controller that will manage drawing the challenge, based on the input we provide it.
```kotlin
...
val sceneController = JoinThePointsSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = PassiveCaptureSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = FingerCaptureSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = PassiveVideoCaptureSceneController(sceneSurface, settings)
```
```kotlin
...
val sceneController = MlcCaptureSceneController(sceneSurface, settings)
```
Where sceneSurface is an instance of SceneView added to the layout of the Activity/Fragment hosting the challenge.
Use Settings with Java
Using Java is also possible. In order to do that, use the settings classes directly instead of the Kotlin DSL:
- For Join The Points mode: JoinThePointsChallengeSettingsBuilder or JoinThePointsChallengeSettings
- For Passive mode: PassiveCaptureSettings or PassiveSettingsBuilder
- For Finger Capture: FingerCaptureSettings or FingerCaptureSettingsBuilder
- For VideoPassive mode: PassiveVideoCaptureSettings or PassiveVideoSceneSettingsBuilder
- For MLC mode: MlcCaptureSettings or MlcSettingsBuilder
Using Scene Controller in Capture SDK’s Callbacks
Starting the Challenge
The controller can be used once it is configured. Start the preview first, then start the scene controller (an asynchronous call), and when it has finished, start the capture:
- Start the scene controller asynchronously (when the capture handler is passed to start(), the capture preview is started automatically).
- Start the capture.
There are four ways to start the scene controller: two using coroutines and two using callbacks.
- Using coroutines:
```kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start(captureHandler)
    captureHandler.startCapture()
}
```
```kotlin
CoroutineScope(Dispatchers.Main).launch {
    sceneController.start()
    captureHandler.startCapture()
}
```
- Using callbacks:
```java
captureHandler.startPreview(new PreviewStatusListener() {
    @Override
    public void onStarted() {
        try {
            sceneController.start(captureHandler, () -> captureHandler.startCapture());
        } catch (MSCException e) {
            // handle exception
        }
    }

    @Override
    public void onError(PreviewError error) {
        // Preview initialization failed and cannot be started
    }
});
```
```java
captureHandler.startPreview(new PreviewStatusListener() {
    @Override
    public void onStarted() {
        try {
            sceneController.start(() -> captureHandler.startCapture());
        } catch (MSCException e) {
            // handle exception
        }
    }

    @Override
    public void onError(PreviewError error) {
        // Preview initialization failed and cannot be started
    }
});
```
Pausing or Stopping the Challenge
Stop both the capture and the scene when the activity pauses or when it's desired to stop the challenge.
```kotlin
captureHandler.stopCapture()
sceneController.stop()
```
Closing the Challenge
Release the resources when closing the whole challenge or activity.
```kotlin
captureHandler.destroy()
sceneController.destroy()
```
Updating the Challenge Status
Updating the challenge status is required in order to have a fully working experience. Update the challenge status so that the sceneController knows how to redraw all of the elements in the scene. To do this, collect the proper data from the SDK's callbacks and push it to the controller.
```kotlin
// Setting tracking listener
captureHandler.setBioTrackingListener { trackingList -> sceneController.onTracking(trackingList) }

// Setting CR2D listener
captureHandler.setBioCaptureCR2DListener(object : BioCaptureCR2DListener {
    override fun onCurrentUpdated(point: Cr2dCurrentPoint?) {
        if (point != null) {
            sceneController.update(point)
        }
    }

    override fun onTargetUpdated(target: Cr2dTargetPoint?) {
        if (target != null) {
            sceneController.update(target)
        }
    }

    override fun onTargetsConditionUpdated(targetsNumber: Int, stability: Int) {
        sceneController.update(targetsNumber, stability)
    }
})

// Setting result listener
captureHandler.setBioCaptureResultListener(BioCaptureResultListenerAdapter(
    { sceneController.captureSuccess { println("CAPTURE SUCCESS") } },
    { error ->
        if (error == CaptureError.CAPTURE_TIMEOUT) {
            sceneController.captureTimeout { println("CAPTURE TIMEOUT") }
        } else {
            sceneController.captureFailure { println("CAPTURE FAILURE") }
        }
    }))
```
Updating the PassiveVideo capture
To provide a good user experience beyond the standard cases such as capture success or failure, all callbacks from FaceVideoPassiveListener in the Capture SDK should be covered. Handling them is straightforward: pass the values from the SDK's callbacks to the proper PassiveVideoCaptureSceneController methods, as in the example below.
```kotlin
class FaceVideoPassiveListenerAdapter(
    val preparationStarted: () -> Unit,
    val preparationFinished: () -> Unit,
    val overlayUpdatedCallback: (OvalOverlay) -> Unit,
    val progressUpdatedCallback: (Float) -> Unit
) : FaceVideoPassiveListener {
    override fun onPreparationFinished() {
        preparationFinished()
    }

    override fun onPreparationStarted() {
        preparationStarted()
    }

    override fun overlayUpdated(overlay: OvalOverlay) {
        overlayUpdatedCallback(overlay)
    }

    override fun progressUpdated(progress: Float) {
        progressUpdatedCallback(progress)
    }
}
```
```kotlin
captureHandler.setFaceVideoPassiveListener(
    FaceVideoPassiveListenerAdapter(
        { sceneController.showPreparingView() },
        { sceneController.hidePreparingView() },
        { sceneController.ovalOverlayUpdate(
            FaceOval(
                it.width,
                it.height,
                it.centerX,
                it.centerY
            )
        ) },
        { sceneController.updateProgress(it) }
    )
)
```
Managing MLC capture
MlcCaptureSceneController is designed to handle operations associated with a MultidimensionalLivenessCheck capture. It allows performing the following capture-related operations (all of them should be triggered in the order presented below):
Updating smile progress
Smile progress, which is displayed using SmileIndicatorBar, can be changed with the applySmileProgress method:
```kotlin
fun applySmileProgress(@FloatRange(from = 0.0, to = 1.0) progress: Float)
```
IMPORTANT: Make sure that showSmileIndicator() has been called on SceneView before passing values to this method. Otherwise, SmileIndicatorBar will not be visible.
Rescaling oval preview for illumination
To rescale the oval preview before starting the illumination phase, the prepareIllumination() method should be called:
```kotlin
fun prepareIllumination(scale: Float)
```
The scale value should come from the onIlluminationPrepared(scale: Float) method of MlcListener.
Showing instructions for illumination phase and triggering illumination
The illumination process can be started by calling the smileEnded method:
```kotlin
fun smileEnded(requestIllumination: () -> Unit)
```
Calling this method results in the following:
- Feedbacks associated with face capture will no longer be shown.
- SmileIndicatorBar will be hidden.
- Instructions for the illumination phase will be shown with the given delay (configurable with the feedback field of the DSL).
- The requestIllumination callback passed as the method argument will be called.
- Feedbacks for face capture will be shown again.
IMPORTANT: The requestIllumination callback passed to this method should invoke start on the IlluminationRequest interface coming from the SDK. Otherwise, the illumination process will not be started.
Changing preview background color during illumination phase
Once start() is invoked on IlluminationRequest, you will receive colors from the onColorToDisplay(red: Int, green: Int, blue: Int) method. Those colors should be passed to the showIllumination(red: Int, green: Int, blue: Int) method of MlcCaptureController in order to set the proper color of the preview background.
Options
Interpolation
useInterpolation is the preferred option for low-end phones where pointer movement may look sluggish. This option is disabled by default.
It is used only in JoinThePointsCaptureSettings.
Pointer
Configuration of the pointer in Join The Points capture mode. It contains the following parameters:
- collisionWithTargetAction describes the action taken when the pointer collides with the current target. There is no action by default. To hide the pointer after a collision, set this option to PointerCollisionAction.HIDE.
- solidColor is used to set the pointer color.
- imageResourceId, as the name suggests, is the resource id of the pointer image.
- type specifies the type of pointer available:
  - a standard image with a pulsing animation under the image (PointerType.PULSING).
  - an image rotating towards the current target (PointerType.TRACKING).
If type has been set to PointerType.PULSING, an additional pulseAnimation configuration can be provided. For this animation you can set:
- waves - number of waves in the animation.
- color - color of the animation.
- minAlpha and maxAlpha - the opacity range of the animation.
Pointer is used only in JoinThePointsCaptureSettings.
Whole pointer configuration with default values:
```kotlin
pointer {
    solidColor = Color.parseColor(Colors.black)
    imageResourceId = R.drawable.dot_pointer
    type = PointerType.PULSING
    collisionWithTargetAction = PointerCollisionAction.NONE
    pulseAnimation {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}
```
Target
Configuration of targets in Join The Points capture mode. It contains the following parameters:
- displayTextSettings - since targets need to be captured in a specific order, the points are numbered by default. There is also one extra point, the starting point. This option allows an integrator to enable or disable the default text in specific places.
- notSelectedImageResId - resource id of the image for a point that is not active yet.
- notSelectedSolidColor - color of the image for a point that is not active yet.
- selectedImageResId - resource id of the image for the active point.
- selectedImageSolidColor - color of the image for the active point.
- capturedImageResId - resource id of the image displayed inside a dot that has already been captured.
- capturedImageSolidColor - color of the image inside a fulfilled dot.
- startingImageResId - resource id of the image displayed inside the starting point.
- startingPointSolidColor - color of the image for the starting point.
- startingPointSize - size of the starting point. It takes a data class of type TargetSize, which consists of widthInPixels and heightInPixels.
- progressColor - color of the progress indicator inside the current point.
- showMarkOnCurrentTarget - if set to true, an animation with arrows showing which point is the current one will be displayed.
- textColor - color of the text inside the points.
- capturedTargetOpacity - opacity of a point that has already been captured.
- pulseAnimation - configuration of the pulsing animation around the points. For this animation you can set:
  - waves - number of waves in the animation (2 by default).
  - color - color of the animation.
  - minAlpha and maxAlpha - the opacity range of the animation.
Whole target configuration with default values:
```kotlin
target {
    displayTextSettings = TextSettings.ALL
    notSelectedImageResId = R.drawable.ic_target_free
    notSelectedSolidColor = Color.parseColor("#430099")
    selectedImageResId = R.drawable.ic_target_connecting_light_blue
    selectedImageSolidColor = Color.parseColor("#007dba")
    capturedImageResId = R.drawable.ic_target_joined
    capturedImageSolidColor = Color.parseColor("#430099")
    startingImageResId = R.drawable.ic_target_connecting
    startingPointSolidColor = Color.parseColor("#430099")
    startingPointSize = TargetSize(136, 136)
    progressColor = Color.parseColor("#330370")
    showMarkOnCurrentTarget = false
    textColor = Colors.white
    capturedTargetOpacity = 1f
    pulseAnimation = {
        waves = 2
        minAlpha = 0.4f
        maxAlpha = 0.8f
        color = Color.parseColor(Colors.pulse_wave_color)
    }
}
```
Target
is used only in JoinThePointsCaptureSettings.
Tapping Feedback
Refers to the built-in feedback displayed on SceneView
.
This feedback shows a message preventing the user from tapping further on the screen.
You can set the look of Tapping Feedback
in Kotlin DSL.
Tapping Feedback
is enabled by default. You can prevent Tapping Feedback
from being displayed by setting enabled
to false.
It is used in all modes.
```kotlin
tapping {
    colorBackgroundResId = R.color.default_tapping_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.no_tapping_feedback
    enabled = true
}
```
Device vertical tilt Feedback
This feedback is displayed on SceneView
before face capture. The feedback displays until the phone is held vertically.
You can set the look of Device vertical tilt Feedback
in Kotlin DSL.
Device vertical tilt Feedback
is enabled by default. You can disable it by setting enabled
option to false.
It is used in JoinThePointsCaptureSettings, PassiveCaptureSettings.
```kotlin
verticalTilt {
    colorBackgroundResId = R.color.default_device_vertical_tilt_feedback_background
    colorImageResId = R.color.black
    colorTextResId = R.color.black
    textResId = R.string.device_vertical_tilt_feedback
    enabled = true
}
```
Capture delay
The UI extension can handle capture delay. It displays a message with a countdown until capture becomes available. It is turned on by default, but you can turn it off.
```kotlin
delay {
    isEnabled = true
    message = R.string.delay_message
}
```
In the message resource, add a text parameter that will be replaced by the counter. For example, this is the default text used in UIExtension with the text parameter:
```xml
<string name="capture_delay_message">Authentication locked.\nPlease wait for:\n%1$s</string>
```
Feedbacks
There are two types of face capture feedback: one for the legacy API and one for the API based on use cases.
FaceCaptureInfo
FaceCaptureInfo is associated with the legacy API. Below are the default messages for FaceCaptureInfo:
FaceCaptureInfo | String | Comment |
---|---|---|
INFO_COME_BACK_FIELD | Come back in the camera field | |
INFO_CENTER_TURN_LEFT, INFO_CENTER_TURN_RIGHT, INFO_CENTER_ROTATE_DOWN, INFO_CENTER_ROTATE_UP, INFO_CENTER_TILT_LEFT, INFO_CENTER_TILT_RIGHT, | Center your face in camera view | |
INFO_CENTER_MOVE_FORWARDS | Move your face forward | |
INFO_CENTER_MOVE_BACKWARDS | Move your face backward | |
INFO_TOO_FAST | Moving to fast | |
CENTER_GOOD | Face is in good position | Used only in Passive mode |
INFO_DONT_MOVE | | No message is shown. It is sent just before taking the capture in passive mode |
INFO_CHALLANGE_2D | Connect the dots | Used only in Join The Points mode |
INFO_STAND_STILL | Stand still for a moment | Used for best face image selection before starting challenge. It is used for all modes. It means that face position is good, do not move anymore. |
INFO_NOT_MOVING | Move your head to connect the dots / Move your head | Sent when the user is not performing the task. Join The Points mode uses its own variant of the message |
DEVICE_MOVEMENT_ROTATION | Don't move your phone |
CaptureFeedback
CaptureFeedback is related to the new API. Below are the default messages for CaptureFeedback:
CaptureFeedback | String | Comment |
---|---|---|
FACE_INFO_COME_BACK_FIELD | Come back in the camera field | |
FACE_INFO_CENTER_MOVE_FORWARDS | Move your face forward | |
FACE_INFO_CENTER_MOVE_BACKWARDS | Move your face backward | |
FACE_INFO_CENTER_MOVE_UP | Move your phone slightly upwards | |
FACE_INFO_CENTER_MOVE_DOWN | Move your phone slightly downwards | |
FACE_INFO_CENTER_MOVE_LEFT | Move your phone slightly to the left | |
FACE_INFO_CENTER_MOVE_RIGHT | Move your phone slightly to the right | |
FACE_INFO_TOO_FAST | Moving to fast | |
FACE_CENTER_GOOD | Face is in good position | |
DEVICE_MOVEMENT_DETECTED | Don't move your phone | |
FACE_INFO_CHALLANGE_2D | Connect the dots | Used only in Join The Points mode. |
FACE_INFO_STAND_STILL | Stand still for a moment | |
FACE_INFO_MAKE_A_SMILE | Now, smile genuinely to fill the gauge below. Cheese! | |
FACE_INFO_MAKE_A_NEUTRAL_EXPRESSION | Please keep a neutral facial expression for a while | |
FACE_INFO_MOVE_DARKER_AREA | It’s too bright. Make sure you are in a well lit environment. | |
FACE_INFO_MOVE_BRIGHTER_AREA | It’s too dark. Make sure you are in a well lit environment. | |
FACE_INFO_NOT_SMILING | Looks like you’re not smiling. Please try to make a wide, genuine smile. | |
FACE_INFO_SMILE_WIDER | Almost there! Make your smile a little wider. |
You can set the look of feedback and countdown in the Kotlin DSL.
- By default feedbacks are turned on. If you want to disable them, set showFeedback to false.
- You can provide a feedback-type-to-message mapping by setting the appropriate field in the settings DSL. For the legacy API it is faceFeedbackStringMapping, which has function type (FaceCaptureInfo) -> String. For the new API it is faceCaptureFeedbackStringMapping, with function type (CaptureFeedback) -> String. Both mappings have default values in English only.
Those feedbacks are used in JoinThePointsCaptureSettings, PassiveCaptureSettings and PassiveVideoCaptureSettings.
```kotlin
val settings = passiveCaptureSettings {
    scene {
        ...
        feedback {
            background {
                colorBackgroundResId = R.color.idemia_blue
                alphaCanal = 0.5f
            }
            showFeedback = true
            colorTextResId = R.color.white
            faceFeedbackStringMapping = { faceCaptureInfo ->
                mapToString(faceCaptureInfo) // This should return the correct String for FaceCaptureInfo feedback
            }
            faceCaptureFeedbackStringMapping = { captureFeedback ->
                mapToString(captureFeedback) // This should return the correct String for CaptureFeedback
            }
        }
    }
}
```
Setting up the Countdown for Passive
For passive settings, you can set a countdown that will be displayed at the top of the scene.
- To set the countdown time, set the countdownSeconds variable in the countdown settings of the Kotlin DSL for passive mode. countdownSeconds is set to 0 (off) by default.
- Choose the text of the countdown. (The default is "Countdown..." and only in English.)
- If the countdown is turned on, it starts automatically before capture.
```kotlin
val settings = passiveCaptureSettings {
    countdown {
        countdownSeconds = 3
        countdownText = getString(R.string.countdown)
    }
}
```
Setting up the Overlay for Passive or JoinThePoints
Turn on the face overlay when using passive/active capabilities by adding its configuration in the Kotlin DSL for passive/active settings.
- To turn on the overlay, set showOverlay to true (default).
- Change the overlay image by setting imageRes.
- The overlay width and height are permanently set to match_parent, but setting vertical and horizontal margins, marginVertical and marginHorizontal (default 16dp for both), is optional.
- Set up the text in the middle of the overlay in the text section by changing: text (default "Center\nyour\nface"), size (default 24sp), color (default #010101).
```kotlin
val settings = passiveCaptureSettings {
    ...
    overlay {
        showOverlay = true
        imageRes = R.drawable.ic_face_overlay
        marginVertical = R.dimen.overlay_vertical_margin
        marginHorizontal = R.dimen.overlay_horizontal_margin
        text {
            text = R.string.overlay_text
            textSize = R.dimen.overlay_text_size
            textColor = R.color.overlay_text_color
        }
    }
}
```
Displaying face capture feedback (new API)
In order to display feedback coming from FaceCaptureSDK, onFeedback
method from FaceCaptureSceneController
should be used:
```kotlin
fun onFeedback(feedback: Feedback)
```
```kotlin
sealed class Feedback

data class ShowFeedback(val feedback: CaptureFeedback): Feedback()

object ClearFeedback: Feedback()
```
When the ClearFeedback object is passed, the message visible on the screen is hidden. Otherwise, the provided CaptureFeedback is translated according to the faceCaptureFeedbackStringMapping passed in the feedback DSL.
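Putting the pieces above together, a forwarding helper might look like the sketch below. Only onFeedback, ShowFeedback, and ClearFeedback come from the API shown above; where your code receives the nullable CaptureFeedback depends on your integration.

```kotlin
// Sketch: forward SDK feedback to the scene controller.
// The source of the nullable CaptureFeedback is integration-specific.
fun forwardFeedback(sceneController: FaceCaptureSceneController, feedback: CaptureFeedback?) {
    if (feedback != null) {
        // A message mapped via faceCaptureFeedbackStringMapping will be shown
        sceneController.onFeedback(ShowFeedback(feedback))
    } else {
        // Hides the message currently visible on the screen
        sceneController.onFeedback(ClearFeedback)
    }
}
```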
Setting result success and failure indicators
Success and Failure indicators can be applied to JoinThePointsCaptureSettings, PassiveCaptureSettings. However, it looks different in Join The Points and other modes:
- In Join The Points mode, it is animated from the last point that was connected.
- In Passive modes, a static image is displayed for some time.
To show the indicators, you need to inform the scene controller about the capture result by invoking the following methods:
sceneController.captureSuccess { ... Your code on success ... }
sceneController.captureFailure { ... Your code on failure ... }
sceneController.captureTimeout { ... Your code on timeout ... }
Invoke them in the FaceCaptureResultListener
:
```kotlin
captureHandler.setFaceCaptureResultListener(object : FaceCaptureResultListener {
    override fun onCaptureFailure(captureError: CaptureError, biometricInfo: IBiometricInfo, extraInfo: Bundle) {
        if (captureError == CaptureError.CAPTURE_TIMEOUT) {
            sceneController.captureTimeout {
                // Your code on timeout
            }
        } else {
            sceneController.captureFailure {
                // Your code on failure
            }
        }
    }

    override fun onCaptureSuccess(image: FaceImage) {
        sceneController.captureSuccess {
            // Your code on success
        }
    }
})
```
The appearance of indicators is configurable in the result
section in the Kotlin DSL.
- You can change the display time of the indicator by setting resultDurationInMillis (only for Passive).
- You can change the images for success and failure indicators by setting successDrawableRes and failureDrawableRes.
```kotlin
val settings = passiveCaptureSettings {
    ...
    result {
        successDrawableRes = R.drawable.ic_challenge_success
        failureDrawableRes = R.drawable.ic_challenge_failed
        resultDurationInMillis = 1000
    }
}
```
Setting preparation screen for PassiveVideo mode
A preparation screen can be displayed during the preparation phase of PassiveVideo mode. It has the following properties:
- backgroundColor, as the name suggests, is the color of the background. Default value is "#FFFFFF".
- colorProgressFill corresponds to the progress bar color. Default value is "#430099".
- colorProgressBackground is the background of the progress bar. Default value is "#33430099".
- colorTextTitle is the color of the title text on the preparation screen. Default value is "#000000".
- colorTextDescription is the color of the description text on the preparation screen. Default value is "#808080".
Setting face overlay for PassiveVideo mode
The oval face overlay makes face capture in PassiveVideo mode much easier, as it shows where the face should be placed. It has the following properties:
- backgroundColor, as the name suggests, is the color of the background. Default value is "#153370".
- backgroundAlpha is a property ranging from 0.0 to 1.0 describing how transparent the background around the oval is. Default value is 0.8f.
- progressBar is a field for setting the progress around the oval. It has two properties: progressFill with a default value of "#FFA000", and progressBackground with a default value of "#FFFFFF".
- scanningFeedback is a message to the user. Default value is "Scanning... Stay within the oval".
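A sketch of the corresponding DSL configuration is shown below. The block names (passiveVideoCaptureSettings, faceOverlay) are assumptions; the properties and values come from the list above.

```kotlin
// Sketch only: block names are assumed; properties/defaults are from the list above.
val settings = passiveVideoCaptureSettings {
    faceOverlay {
        backgroundColor = Color.parseColor("#153370")
        backgroundAlpha = 0.8f
        progressBar {
            progressFill = Color.parseColor("#FFA000")
            progressBackground = Color.parseColor("#FFFFFF")
        }
        scanningFeedback = "Scanning... Stay within the oval"
    }
}
```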
Setting cancel button
For MLC capture it is possible to configure the "X" button, visible in the upper right corner of the screen during capture. This is done using the cancelButton property, which has the following fields:
- enabled indicates whether the icon is visible. By default it is set to true.
- iconColor is the color of the button, with a default value of "#430099".
- iconSize is a dimension resource for the icon size. By default it is 24dp.
- cancelListener is the method invoked after clicking the button.
DSL example:
```kotlin
mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        (...)
    }
}
```
Setting feedbacks for MLC mode
For MLC capture, the following properties can be passed to the feedback field:
- centerFaceResId is a resource id of the message shown at the beginning of the capture. Default value is R.string.center_face_feedback with text "Fill the oval frame with your face".
- showFeedback is a boolean indicating whether the feedback set as centerFaceResId is visible. Default value is true.
- noSmileResId is a resource id of the text shown below the icon indicating no smile in the capture view. Default value is R.string.no_smile with text "No smile".
- bigSmileResId is a resource id of the text shown below the icon indicating a big smile in the capture view. Default value is R.string.big_smile.
- feedbackDisplayTimeInMillis changes the display time of feedback messages. Default value is 1s.
- faceCaptureFeedbackStringMapping is a mapping for CaptureFeedback. The default mapping is presented here.
- smileAcquiredResId is a resource id of the message shown after the end of the smile phase of the capture. By default it is R.string.smile_acquired_successfully with text "Smile acquired successfully! Now, please hold still."
- smileAcquiredTextDelayInMillis is the display time of the message set in the smileAcquiredResId field. By default it is 3s.
- flashesWarningResId is a resource id of the second message. Default value is R.string.flash_warning_feedback with text "In the last step, your face will be verified with a sequence of color flashes. Please hold still during color's countdown."
- flashesWarningTextDelayInMillis is the display time of the flashesWarning message. By default it is 5s.
- moveCloserResId is a resource id of the third instruction. Default value is R.string.move_face_closer_feedback with text "Now, move your face closer to the phone."
- moveCloserTextDelayInMillis is the display time of the message set in the moveCloserResId field. By default it is 3s.
- illuminationResId is a resource id of the message visible during the illumination phase. Default value is R.string.illumination_feedback with text "Hold still. You can close your eyes."
- textColor - feedback text color. By default it is "#30006D".
- noSmileEmojiResId is a resource of the icon shown above the no smile text. By default it is R.string.no_smile_emoji.
- bigSmileEmojiResId is a resource of the icon shown above the big smile text. By default it is R.string.big_smile_emoji.
MlcFaceOvalBorderSettings
It is possible to show a border around the face oval during MLC capture when a capture phase is completed (it is visible after centering the face before the smile challenge, after capturing the smile, and when the face is centered just before triggering illumination). This border has the following properties:
- enabled indicates whether the component is visible. By default it is set to true.
- showingDurationInMillis determines how long the oval stays visible after finishing the phase. By default it is 1000ms.
- color is the color of the border. Default value is "#429400".
DSL example:
```kotlin
mlcCaptureSettings {
    scene {
        mlcFaceOvalBorder {
            showingDurationMillis = 2000L
            enabled = true
            color = Color.parseColor(Colors.green)
        }
        (...)
    }
}
```
CaptureProgressBarSettings
For MLC capture, you can configure a progress bar that shows the current progress of the capture.
- enabled indicates whether the component is visible. By default it is set to true.
- progressText is a resource id of the text visible to the left of the progress bar. Default value is R.string.capture_progress_text, which contains the text "%1$d%% Complete". IMPORTANT: if you want to show the percentage progress of the capture, you should put a string with a placeholder here!
- progressTextColor is the color of the progress text. "#000000" by default.
- progressColor is the progress bar fill color. By default it is "#430099".
- progressBackgroundColor is the color of the unfilled section of the progress bar. Default value is "#F2D9FA".
- illuminationPhaseColor is the progress bar fill color during the illumination phase. "#FFFFFF" by default.
- illuminationPhaseBackgroundColor is the color of the unfilled section of the progress bar during the illumination phase. By default it is "#80FFFFFF".
DSL example:
```kotlin
mlcCaptureSettings {
    scene {
        cancelButton {
            enabled = true
            iconColor = Color.parseColor(Colors.cancel_icon_color)
            iconSize = R.dimen.cancel_icon_size
            listener = {}
        }
        captureProgressBar {
            enabled = true
            progressText = R.string.capture_progress_text
            progressTextColor = Color.parseColor(Colors.black)
            progressColor = Color.parseColor(Colors.capture_progress_color)
            progressBackgroundColor = Color.parseColor(Colors.capture_progress_bar_background_color)
            illuminationPhaseColor = Color.parseColor(Colors.capture_illumination_progress_color)
            illuminationPhaseBackgroundColor = Color.parseColor(Colors.capture_illumination_background_color)
        }
        (...)
    }
}
```
Additional components
CaptureResultImageView
This is a custom Android view that can be embedded in the app's layout. It contains two methods:
- fun setImage(image: Bitmap) sets the image inside the oval. The image should be cropped, ideally to a square, in order to display properly.
- fun setStrokeColor(@ColorInt color: Int) sets the border color and checkmark background around the image.
TutorialView
This is a component showing tutorials provided by TutorialProvider in the NFC Reader library. It contains one method:
fun start(animation: ByteArray, listener: TutorialListener?) sets and starts an animation in Lottie format.
TutorialListener
This listener provides the information when animation ends.
onAnimationComplete()
This method is called when the animation ends.
SmileIndicatorBar
This is a component used to visualize the status of the smile phase of the capture. It consists of three groups of indicators that change color according to the passed progress value. To update this value, call the following method:
fun applySmileProgress(@FloatRange(from = 0.0, to = 1.0) progress: Float)
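A usage sketch is shown below; the view id and the source of the progress value are hypothetical, and only SmileIndicatorBar and applySmileProgress come from the API above.

```kotlin
// Sketch: drive the indicator from a smile-progress value.
// R.id.smile_indicator_bar is a hypothetical id from your own layout.
val smileBar: SmileIndicatorBar = findViewById(R.id.smile_indicator_bar)

fun onSmileProgress(progress: Float) {
    // applySmileProgress expects a value in the range [0.0, 1.0]
    smileBar.applySmileProgress(progress.coerceIn(0f, 1f))
}
```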
Android AAMVA
The AAMVADecoder framework is targeted to developers who need to decode AAMVA within their mobile apps.
Prerequisites
Skills Required
The developers need knowledge of:
- Android Studio
- Java/Kotlin
- Android OS 4.1 or above
- Gradle
Resources Required
The tools required are:
- Android Studio
- Gradle Wrapper, preferred v4.4
Integration Guide
Adding a Library to your Project
To add a dependency to your project, the Artifactory repository needs to be configured. Replace user
and password
with the proper credentials.
```groovy
buildscript {
    repositories {
        maven {
            url "$repositoryUrlMI"
            credentials {
                username "$artifactoryUserMI"
                password "$artifactoryPasswordMI"
            }
        }
        ...
    }
    ...
}
```
repositoryUrlMI: Mobile Identity artifactory repository url
artifactoryUserMI: Mobile Identity artifactory username
artifactoryPasswordMI: Mobile Identity artifactory password
These properties can be obtained through the portal and should be stored in the local gradle.properties file. This way, credentials are not included in the source code. Property configuration:
```properties
artifactoryUserMI=artifactory_user
artifactoryPasswordMI=artifactory_credentials
repositoryUrlMI=https://mi-artifactory.otlabs.fr/artifactory/smartsdk-android-local
```
More about gradle properties can be found here.
Then add the following implementation line to the gradle dependencies.
```groovy
dependencies {
    ...
    implementation 'com.idemia.aamva:aamva-parser:1.0.13'
    ...
}
```
Using the Library
Create an instance of the AAMVADecoder
object.
```kotlin
val decoder = AAMVADecoder(context)
```
Now the decoder is ready to use. After scanning the PDF417 barcode, the value can be decoded.
```kotlin
decoder.initWithPDF417("Scanned PDF417 value")
val decodedDocument: Document = decoder.getDocument()
```
Now the Document
object is ready to use with the values fetched from the PDF417 barcode.
NFC Reader
The NFC Reader library is the mobile part of the NFC Document Reading Solution. The core of the solution is the NFC Server (minimum supported version is 2.2.2), which collects and processes the read data. Once the whole document's data has been read, it can be securely downloaded from, or pushed by, the NFC Server.
This library makes it possible to read ICAO-compliant passports.
Quick integration guide
- Add dependency in your project's build.gradle file:
```groovy
implementation("com.idemia.smartsdk:smart-nfc:$smartNfcVersion")
```
-
Create an Identity on the GIPS Relying Service (gips-rs) component using the v1/identities endpoint. More information can be found here.
-
Create an NFC session using the MRZ lines fetched from the document and the Identity id from the previous step, using the v1/identities/{identityId}/id-documents/nfc-session endpoint. More information can be found here.
-
Create the NFCReader object. This is the entry point to the whole document reading procedure.
-
The parameter
configuration
is the NFCConfiguration object with the customer identifier in the ID&V cloud. -
The parameter
activity
is a reference to the Android AppCompatActivity where the reader will be running.
```kotlin
val configuration: NFCConfiguration = NFCConfiguration()
val reader: NFCReader = NFCReader(configuration, activity)
```
- Check if the device is compatible with this feature:
```kotlin
reader.isDeviceCompatible()
```
-
If the device is not compatible, reading will always fail. Consider displaying feedback to the end user or handling it in a different way (such as hiding this feature in the app).
-
If the device is compatible, the reading process can be started. This requires a session id, which should be obtained from the ID&V cloud.
The reading process can be started using classic listener interfaces or by subscribing to the observable object. It is up to the integrator which way is preferred.
Warning: the observable approach requires subscribing to the object returned by the start method in order to start the reading procedure (it is a cold observable).
```kotlin
reader.start(sessionId, object : ResultListener {
    override fun onSuccess() {
        // Reading finished with success
    }

    override fun onFailure(failure: Failure) {
        // Reading finished with failure
    }
}, object : ProgressListener {
    override fun onProgressUpdated(progress: Int) {
        // Progress update
    }
})
```
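The observable variant can be sketched as follows, assuming an RxJava-style Observable; the emitted element type and the exact subscribe overload may differ in your SDK version.

```kotlin
// Sketch only: assumes an RxJava-style cold Observable.
// Nothing happens until subscribe() is called.
val disposable = reader.start(sessionId, mrz, phoneNFCLocation)
    .subscribe(
        { progress -> /* progress update */ },
        { error -> /* reading finished with failure */ },
        { /* reading finished with success */ }
    )

// Dispose when the screen goes away to stop receiving events:
// disposable.dispose()
```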
Components
Configuration
NFCConfiguration
This is the configuration class that contains information about the server url, customer identifier (apiKey
), and which logs are available for viewing.
Parameter | Description |
---|---|
serverUrl String | The URL of the service where the reader can reach the NFCServer's device API. |
serverApiKey String | API key used for the authorization process. |
sdkExperience SDKExperience | Configuration for SDKExperience. It can be null if you don't want to use TutorialProvider. |
logLevel LogLevel | Logging level. |
SDKExperience
This class is the configuration for SDKExperience. It is needed to get the animation provided by IDEMIA and the location of the NFC chip on the document/phone.
Parameter | Description |
---|---|
serviceUrl String | The URL of the service where the TutorialProvider can reach the SDKExperience API. (It has a default value.) |
apiKey String | API key used for the authorization process. |
assetsUrl String | The URL of the service where the TutorialProvider can reach animations. (It has a default value.) |
LogLevel
This is the enum used to configure the behavior of logs.
Attribute | Description |
---|---|
INFO | Show info logs |
DEBUG | Show debug logs |
ERROR | Show error logs |
NONE | Do not show logs |
NFCReader
This is the main class that is an entry point to every activity connected with the document reading process.
All listeners for methods below will be called on the main thread.
start(sessionId: String, resultListener: ResultListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)
This starts the document reading process. It requires session id
as a parameter to fetch communication scripts.
- ResultListener is the interface through which the reading result is received.
- mrz is the MRZ information saved on the document.
- phoneNFCLocation is information about the phone's NFC antenna location.
start(sessionId: String, resultListener: ResultListener, progressListener: ProgressListener, mrz: String, phoneNFCLocation: PhoneNFCLocation)
This starts the document reading process. It requires the session id
as a parameter to fetch communication scripts.
- ResultListener is the interface through which the reading result is received.
- ProgressListener is the interface through which progress feedback is received.
- mrz is the MRZ information saved on the document.
- phoneNFCLocation is information about the phone's NFC antenna location.
start(sessionID: String, mrz: String, phoneNFCLocation: PhoneNFCLocation): Observable
This is a Kotlin-friendly method that returns the object to which we can subscribe in order to start the capture and get capture feedback.
cancel()
This stops the reading procedure. This method performs its work in the background.
isDeviceCompatible()
This checks if the device satisfies all hardware/software requirements connected with the document reading feature.
```kotlin
if (reader.isDeviceCompatible()) {
    // start reading
} else {
    // display message that device is not compatible
}
```
getTutorialProvider()
This is a provider for getting information about NFC location and document type. It also provides animation based on NFC location.
ResultListener
This listener provides the possibility to invoke code based upon the reading result.
onSuccess(documentDataAccessToken: String)
This method is called when the document has been read successfully. The argument contains the access token for the document data.
onFailure(failure: Failure)
This method is called when the document reading fails. This method's argument represents the failure reason. More about failures here.
ProgressListener
This provides the progress of the document reading.
onProgressUpdated(progress: Int)
This is the method called when progress changes. The argument progress
can be in the range 0 - 100.
Failure
This contains information about the document reading failure. It's built from message
and type
. Type
is a more general failure cause (more than one failure might have the same type). The message contains detailed information about what happened for a given type.
Failure types
- NFC_CONNECTION_BROKEN - NFC connection has been broken
- CONNECTION_ISSUE - Cannot connect with external server, no internet connection
- INVALID_SESSION_STATE - Session is in an unexpected state. New one needs to be created.
- SERVER_CONNECTION_BROKEN - Cannot process data with the server side, might be a compatibility issue
- SERVER_ERROR - Server side error occurred
- UNSUPPORTED_DEVICE - Device does not support NFC or it's disabled
- READING_ISSUE - Document reading issue occurred. Can be related to NFC issues and data conversion
- REQUESTS_LIMIT_EXCEEDED - Document reading is impossible because too many requests to the server have been made or the API key request limit has been exceeded
TutorialProvider
This class allows getting information about the NFC antenna location on the phone and the document, as well as the DocumentType. It also provides an animation based on these variables.
getNFCLocation(mrz: String, listener: NFCLocationListener)
This provides the phone and document NFC antenna locations via callback. It also provides the document type.
getNFCLocation(mrz: String)
This is a Coroutines-friendly method that returns the NFCLocationResult.
getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null, listener: AnimationListener)
This provides the animation in Lottie format via callback.
getAnimation(phoneNFCLocation: PhoneNFCLocation, documentNFCLocation: DocumentNFCLocation, documentType: DocumentType, documentFeature: String? = null)
This is a Coroutines-friendly method that returns the AnimationResult.
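A coroutine-based flow through the provider might look like the sketch below; the result handling is only indicated in comments, since the exact shapes of NFCLocationResult and AnimationResult are not detailed here.

```kotlin
// Sketch only: result handling is indicated in comments because the exact
// success/failure variants of NFCLocationResult are not shown in this section.
lifecycleScope.launch {
    val provider = reader.getTutorialProvider()
    val locationResult = provider.getNFCLocation(mrz)
    // On success, locationResult carries an NFCLocation with phoneNFCLocation,
    // documentNFCLocation and documentType; pass those to getAnimation(...)
    // to obtain the Lottie animation bytes for TutorialView.start(...).
}
```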
NFCLocationListener
This listener provides the information about the phone and document NFC antenna locations via callback. It also provides the document type.
onNFCLocation(nfcLocation: NFCLocation)
This method is called when information about the NFC antenna location is received. Its argument provides information about the NFC antenna location and the document type.
onFailure(failure: LocationFetchFailure)
This method is called when fetching the location information fails. Its argument provides the failure reason.
AnimationListener
This listener provides the animation prepared by IDEMIA.
onAnimationProvided(animation: ByteArray)
This method is called when the animation is received. The method's argument is an animation in Lottie format.
onFailure(failure: AnimationFetchFailure)
This method is called when fetching the animation fails. Its argument provides the failure reason.
NFCLocationResult
NFCLocation
This class contains information about the phone and document NFC antenna locations. It also contains document type information.
Parameter | Description |
---|---|
phoneNFCLocation List< PhoneNFCLocation > | NFC antenna location on the phone. If the NFC antenna location is unknown, a list of possible locations is returned. |
documentNFCLocation DocumentNFCLocation | NFC chip location on the document. If there is no information about the location, FRONT_COVER is returned as a default. |
documentType DocumentType | Document type which has an MRZ. |
documentFeature String | Additional information about the document. |
LocationFetchFailure
This class contains the reason why fetching information about the phone and document NFC antennas failed.
Parameter | Description |
---|---|
message String | Description of the failure |
type TutorialFailure | General failure cause (more than one failure might have the same type). |
PhoneNFCLocation
This is the enum with information about the phone's NFC antenna location.
Attribute | Description |
---|---|
TOP | NFC antenna is at the top of the phone |
MIDDLE | NFC antenna is in the middle of the phone |
BOTTOM | NFC antenna is at the bottom of the phone |
SWIPE | Antenna location is unknown; move your phone across the document |
DocumentNFCLocation
This is the enum with information about document NFC location.
Attribute | Description |
---|---|
FRONT_COVER | NFC chip is on the cover of the passport |
INSIDE_PAGE | NFC chip is on the first page of the passport |
NO_NFC | Document does not have an NFC chip |
DocumentType
This is the enum with information about the document type which has an MRZ.
Attribute | Description |
---|---|
PASSPORT | Passport |
ID | eID |
UNKNOWN | Unknown |
AnimationResult
AnimationFetchSuccess
This is the class which contains the animation in lottie format.
Parameter | Description |
---|---|
animation ByteArray | Animation in lottie format. |
AnimationFetchFailure
Class containing the reason for the animation fetching failure.
Parameter | Description |
---|---|
message String | Description of the failure |
code Integer | Code of the failure |
type TutorialFailure | General failure cause (more than one failure might have the same type). |
Failure types
- CONNECTION_ISSUE - Cannot connect with external server
- NO_INTERNET_CONNECTION - No internet connection
- SERVER_ERROR - Server side error occurred
- UNSUPPORTED_DEVICE - Device does not support NFC or it's disabled
- READING_ISSUE - Issue occurred while fetching the NFC information or the animation. Can be related to data conversion
- REQUEST_ERROR - Fetching the NFC information or the animation is impossible because too many requests to the server have been made or the API key request limit has been exceeded
- MRZ_ISSUE - Issue with parsing MRZ.
- DOCUMENT_TYPE_ISSUE - It only occurs when there is no animation for the chosen DocumentType
Warning!
There is a possibility that after scanning an NFC chip, the user will not move their device away from the chip, and the chip will be scanned once again. This can make the device display a message about the scanned chip. Some devices (e.g. Huawei and Honor devices) exit the application and open a new window with a message about the newly-read NFC tag, which degrades the user experience and can disturb handling of NFC results in later steps.
To prevent this from happening, you can handle NFC scanning in the application even after the scan is finished. You don't have to do anything with the result, but it will prevent the application flow from being interrupted by a scanned tag.
To do this, enable reader mode in the `NFCAdapter` from `android.nfc` by calling the `enableReaderMode()` method. Alternatively, create an `NFCReader` in your activity that calls this method on `Lifecycle.Event.ON_RESUME` and stops it on `Lifecycle.Event.ON_PAUSE`.
The sample application uses a single Activity, and `NFCReader` is created during Activity initialization, so reader mode stays enabled the whole time the app is running in the foreground.
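The lifecycle-bound approach above can be sketched as follows. This is an illustrative stand-in, not the SDK's `NFCReader`: the class name `NfcReaderGuard` and the chosen reader flags are assumptions, and the callback deliberately discards the tag so a stray re-read only claims the NFC event instead of relaunching the app.

```kotlin
import android.app.Activity
import android.nfc.NfcAdapter
import android.os.Bundle
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.LifecycleEventObserver
import androidx.lifecycle.LifecycleOwner

// Keeps reader mode enabled while the activity is resumed, so a second
// accidental read of the chip is swallowed instead of interrupting the flow.
class NfcReaderGuard(private val activity: Activity) : LifecycleEventObserver {

    private val adapter: NfcAdapter? = NfcAdapter.getDefaultAdapter(activity)

    // We claim the tag but intentionally do nothing with it.
    private val callback = NfcAdapter.ReaderCallback { /* tag ignored */ }

    override fun onStateChanged(source: LifecycleOwner, event: Lifecycle.Event) {
        when (event) {
            Lifecycle.Event.ON_RESUME -> adapter?.enableReaderMode(
                activity,
                callback,
                NfcAdapter.FLAG_READER_NFC_A or
                    NfcAdapter.FLAG_READER_NFC_B or
                    NfcAdapter.FLAG_READER_SKIP_NDEF_CHECK,
                Bundle()
            )
            Lifecycle.Event.ON_PAUSE -> adapter?.disableReaderMode(activity)
            else -> Unit
        }
    }
}
```

Registering it once in the activity, e.g. `lifecycle.addObserver(NfcReaderGuard(this))` in `onCreate`, mirrors the sample application's single-Activity setup.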
Sample Application
Below you will find instructions for adding and running the sample NFC application.
Note: To run the sample NFC application, you must add the LKMS and Artifactory credentials, as well as the NFC and IPV (gips-rs) API keys, to your global gradle.properties.
Step 1: Obtain the API keys and credentials from the IDEMIA Experience Portal dashboard:
- To access the NFC API key:
  1. Log in to the IDEMIA Experience Portal.
  2. Go to My Dashboard -> My Identity Proofing. The dashboard appears.
  3. Under Access, navigate to the Environments section to find the needed key.
- To access the IPV (gips-rs) API key:
  1. Log in to the IDEMIA Experience Portal.
  2. Go to My Dashboard -> My Identity Proofing. The dashboard appears.
  3. Under Access, navigate to the Environments section to find the needed key.
- To access the LKMS and Artifactory credentials:
  1. Log in to the IDEMIA Experience Portal.
  2. Go to My Dashboard -> My Identity Proofing. The dashboard appears.
  3. Under Access, navigate to the SDK artifactory and licenses section to find the needed credentials.
Note: Remember to use the default environment (EU PROD) and confirm that the `serverUrl` value in `NFCConfiguration` and the `serviceUrl` value in `SDKExperience` match the selected environment address.
Step 2: Place the NFC and IPV (gips-rs) API keys and the LKMS and Artifactory credentials into your global gradle.properties, found in your Gradle user home directory (by default USER_HOME/.gradle):

```properties
nfcApiKey="YOUR NFC API KEY"
ipvApiKey="YOUR IPV API KEY"

artifactoryUserMI=<artifactory user>
artifactoryPasswordMI=<artifactory credentials>
repositoryUrlMI=<repository url>

lkmsProfileId="YOUR LKMS PROFILE ID"
lkmsApiKey="YOUR LKMS API KEY"
```
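Later steps reference these values as `BuildConfig.nfcApiKey` and `BuildConfig.ipvApiKey`. The sample app's actual wiring may differ, but a minimal Gradle Kotlin DSL sketch of exposing Gradle properties as BuildConfig fields looks like this (the quotes in the property values are carried through, which is what `buildConfigField("String", ...)` expects):

```kotlin
// build.gradle.kts (app module) — hypothetical sketch, not the sample app's
// actual build script. Exposes the global gradle.properties values to code.
android {
    defaultConfig {
        // The property values already contain quotes, e.g. "YOUR NFC API KEY",
        // so they can be passed straight through as String fields.
        buildConfigField("String", "nfcApiKey", project.property("nfcApiKey") as String)
        buildConfigField("String", "ipvApiKey", project.property("ipvApiKey") as String)
    }
}
```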
Step 3: Fetch sample app source code as a .zip package from Artifactory.
Step 4: In the app's source, change `NFCConfiguration` to match your tenant configuration: `NFCConfiguration(serverUrl = "TENANT_URL/nfc/", serverApiKey = BuildConfig.nfcApiKey, sdkExperience = SDKExperience(serviceUrl = "TENANT_URL/sdk-experience/", apiKey = BuildConfig.nfcApiKey))`. On the production environment, the default parameters will most probably be fine.
Step 5: In the app's source, change `ServerConfigurationData` to match your tenant configuration: `ServerConfigurationData(serverUrl = "TENANT_URL/gips/", serverApiKey = BuildConfig.ipvApiKey)`. On the production environment, the default parameters will most probably be fine.
Step 6: Run the app. If all steps have been applied properly, there should not be any issues.