How to use Machine Learning to detect faces in Jetpack Compose

Inuwa Ibrahim
3 min read · May 20, 2023


In this tutorial, I will explain how to detect human faces using Google's ML Kit face detection together with CameraX in a Jetpack Compose app.

So you can detect if:

  1. The person is smiling
  2. The person’s eyes are open
  3. The person is nodding (from the head's rotation angles), and so on

Let’s get started:

Dependencies (add these to your app-level build.gradle):

//MLKIT
implementation 'com.google.mlkit:face-detection:16.1.5'
implementation 'com.google.android.gms:play-services-mlkit-face-detection:17.1.0'

//CAMERAX
def camerax_version = "1.2.2"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
implementation "androidx.camera:camera-view:${camerax_version}"

First, we create an analyzer class. CameraX delivers each camera frame to this analyzer, which wraps the frame in an InputImage object that ML Kit can process. We also configure the face detector to our needs using a FaceDetectorOptions object:

private val realTimeOpts = FaceDetectorOptions.Builder()
    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
    .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_NONE)
    .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
    .setMinFaceSize(0.20f)
    .build()
  • setContourMode — Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image.
  • setPerformanceMode — Favor speed or accuracy when detecting faces.
  • setLandmarkMode — Whether to attempt to identify facial “landmarks”: eyes, ears, nose, cheeks, mouth, and so on.
  • setClassificationMode — Whether or not to classify faces into categories such as “smiling”, and “eyes open”.
  • setMinFaceSize — Sets the smallest desired face size, expressed as the ratio of the width of the head to the width of the image.
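
This tutorial sets LANDMARK_MODE_NONE because we only need the classification probabilities, but for reference: if you enable LANDMARK_MODE_ALL, each detected Face exposes landmark positions you can query, roughly like this sketch:

import com.google.mlkit.vision.face.Face
import com.google.mlkit.vision.face.FaceLandmark

fun logLeftEye(face: Face) {
    // getLandmark returns null when the landmark wasn't detected
    // (or when landmark mode is NONE)
    val leftEye = face.getLandmark(FaceLandmark.LEFT_EYE)
    leftEye?.position?.let { point ->
        println("Left eye at (${point.x}, ${point.y})")
    }
}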

Here is the complete FaceAnalyzer class:

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.Face
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

class FaceAnalyzer(private val callBack: FaceAnalyzerCallback) : ImageAnalysis.Analyzer {

    private val realTimeOpts = FaceDetectorOptions.Builder()
        .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_NONE)
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
        .setMinFaceSize(0.20f)
        .build()

    private val detector = FaceDetection.getClient(realTimeOpts)

    @ExperimentalGetImage
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            // No frame to analyze; release the proxy so CameraX can deliver the next one
            imageProxy.close()
            return
        }
        val inputImage =
            InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        detector.process(inputImage)
            .addOnSuccessListener { faces ->
                callBack.processFace(faces)
            }
            .addOnFailureListener {
                callBack.errorFace(it.message.orEmpty())
            }
            .addOnCompleteListener {
                // Close the frame exactly once, on success or failure;
                // otherwise the analyzer stops receiving new frames
                imageProxy.close()
            }
    }
}

interface FaceAnalyzerCallback {
    fun processFace(faces: List<Face>)
    fun errorFace(error: String)
}

Next, set this analyzer on your CameraX ImageAnalysis use case when you set up the camera:

val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(previewView.width, previewView.height))
    // Drop stale frames so the detector always sees the latest image.
    // (setImageQueueDepth only applies to STRATEGY_BLOCK_PRODUCER, so it is omitted here.)
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .apply {
        setAnalyzer(executor, FaceAnalyzer(
            object : FaceAnalyzerCallback {
                override fun processFace(faces: List<Face>) {
                    doProcessFace(faces)
                }

                override fun errorFace(error: String) {
                    onErrorFace(error)
                }
            }
        ))
    }

// Bind the use cases to the lifecycle
cameraProvider.bindToLifecycle(
    lifecycleOwner, cameraSelector, preview, imageCapture, imageAnalysis
)
  • The processFace callback receives the list of Face objects that ML Kit detected in the frame. Each Face carries the classification probabilities (smiling, eyes open) we enabled earlier, so you can loop over the list to determine if it's a smiling face and much more. The snippet above also assumes previewView, executor, cameraProvider and the other use cases already exist; a minimal sketch of that wiring follows.
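
For context, here is one way the surrounding CameraX setup could look inside Compose, hosting the PreviewView in an AndroidView. This is a sketch under my own assumptions (CameraContent is an illustrative name; the article doesn't show this part):

import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageCapture
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalLifecycleOwner
import androidx.compose.ui.viewinterop.AndroidView
import androidx.core.content.ContextCompat

@Composable
fun CameraContent(faceAnalyzer: FaceAnalyzer) {
    val lifecycleOwner = LocalLifecycleOwner.current

    AndroidView(
        modifier = Modifier.fillMaxSize(),
        factory = { ctx ->
            PreviewView(ctx).also { previewView ->
                val executor = ContextCompat.getMainExecutor(ctx)
                val cameraProviderFuture = ProcessCameraProvider.getInstance(ctx)
                cameraProviderFuture.addListener({
                    val cameraProvider = cameraProviderFuture.get()

                    // Preview use case, rendered into this PreviewView
                    val preview = Preview.Builder().build().also {
                        it.setSurfaceProvider(previewView.surfaceProvider)
                    }
                    val imageCapture = ImageCapture.Builder().build()
                    val cameraSelector = CameraSelector.DEFAULT_FRONT_CAMERA

                    val imageAnalysis = ImageAnalysis.Builder()
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build()
                        .apply { setAnalyzer(executor, faceAnalyzer) }

                    cameraProvider.unbindAll()
                    cameraProvider.bindToLifecycle(
                        lifecycleOwner, cameraSelector, preview, imageCapture, imageAnalysis
                    )
                }, executor)
            }
        }
    )
}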

In your ViewModel, you can then process the detected faces:

fun processFaces(faces: List<Face>) {
    viewModelScope.launch {
        for (face in faces) {
            val leftEyeOpenProbability = face.leftEyeOpenProbability
            val rightEyeOpenProbability = face.rightEyeOpenProbability
            val smilingProbability = face.smilingProbability

            // Smiling face
            if ((smilingProbability ?: 0f) > 0.3f) {
                faceViewState.update { it.copy(isSmiling = true) }
            }

            // Both eyes are open
            if ((leftEyeOpenProbability ?: 0f) > 0.9f && (rightEyeOpenProbability ?: 0f) > 0.9f) {
                faceViewState.update { it.copy(areEyesOpen = true) }
            }

            // Blinking: both eyes mostly closed, guarding against null readings
            if (leftEyeOpenProbability != null && leftEyeOpenProbability < 0.4f &&
                rightEyeOpenProbability != null && rightEyeOpenProbability < 0.4f
            ) {
                faceViewState.update { it.copy(isBlinking = true) }
            }
        }
    }
}
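
The faceViewState above is assumed to be a MutableStateFlow of a simple state class owned by the ViewModel. Here is a sketch of that state holder, plus a nodding check built on the head's pitch angle to cover the third item from the intro; FaceViewState, checkNodding and the 15-degree threshold are all illustrative, not from an ML Kit API:

import androidx.lifecycle.ViewModel
import com.google.mlkit.vision.face.Face
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.update
import kotlin.math.abs

data class FaceViewState(
    val isSmiling: Boolean = false,
    val areEyesOpen: Boolean = false,
    val isBlinking: Boolean = false,
    val isNodding: Boolean = false
)

class FaceViewModel : ViewModel() {

    val faceViewState = MutableStateFlow(FaceViewState())

    // Nodding: the head is pitched noticeably up or down.
    // headEulerAngleX is the face's rotation about the horizontal axis, in degrees;
    // the 15-degree threshold here is an arbitrary illustration
    fun checkNodding(face: Face) {
        if (abs(face.headEulerAngleX) > 15f) {
            faceViewState.update { it.copy(isNodding = true) }
        }
    }
}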

Collect the state in your composable and display an appropriate message:

val faceViewState by viewModel.faceViewState.collectAsState()

faceViewState.areEyesOpen //eyes are open
faceViewState.isBlinking //eyes are blinking
faceViewState.isSmiling //smiling face
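
For instance, a small composable could map the state to a message. A sketch, assuming the FaceViewModel above; the strings are placeholders:

import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue

@Composable
fun FaceStatus(viewModel: FaceViewModel) {
    val faceViewState by viewModel.faceViewState.collectAsState()

    // Pick one message based on the most interesting signal
    val message = when {
        faceViewState.isSmiling -> "Nice smile!"
        faceViewState.isBlinking -> "You blinked"
        faceViewState.areEyesOpen -> "Eyes open"
        else -> "Looking for a face…"
    }

    Text(text = message)
}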

That’s it for me.

As you can see, using machine learning in an Android project isn't hard, thanks to Google's ML Kit.
https://developers.google.com/ml-kit/vision/face-detection

Reach out to me here:
https://linktr.ee/ibrajix
