Face Recognition Using Firebase ML Kit In Android Studio 2020 (Complete Guide) | Step By Step Tutorial
Today, in this post, we are going to build an Android app that detects facial features in an image using the Firebase ML Kit in Android Studio.

After completing all the steps, the output will look like this:
Step 1: Add Firebase to your Android project:
I recommend you see the post on how to add Firebase to an Android project in 5 minutes. If you have already added it, you can move on to Step 2.
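For quick reference, adding Firebase boils down to registering the app in the Firebase console, placing the generated google-services.json file in the app/ directory, and applying the Google Services Gradle plugin. A minimal sketch (the plugin version here is an assumption, roughly current for 2020; use whichever version the Firebase console suggests):

// Project-level build.gradle
buildscript {
    dependencies {
        // Version is an assumption; take the one the Firebase console shows
        classpath 'com.google.gms:google-services:4.3.3'
    }
}

// App-level build.gradle (at the bottom of the file)
apply plugin: 'com.google.gms.google-services'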
Step 2: Add these dependencies for the ML Kit Android libraries to your app-level build.gradle file:
implementation 'com.google.firebase:firebase-ml-vision:24.0.1'

// If you want to detect face contours (landmark detection and classification
// don't require this additional model):
implementation 'com.google.firebase:firebase-ml-vision-face-model:19.0.0'
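Optionally, you can have ML Kit download the face model to the device as soon as the app is installed, instead of on first use, by adding this meta-data entry inside the <application> tag of your AndroidManifest.xml (this option comes from the Firebase ML Kit documentation):

<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="face" />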
Step 3: Design the layout of the activity:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/image"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_above="@+id/selectImage"
        android:layout_marginBottom="20dp" />

    <Button
        android:id="@+id/selectImage"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="Select Image !" />

    <TextView
        android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/selectImage"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="20dp"
        android:textColor="@android:color/black"
        android:textSize="16sp" />

</RelativeLayout>
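Before wiring up the button in the next step, bind the views from this layout in onCreate. Here is a minimal sketch of MainActivity (the field names imageView, textView, and selectImage are my own; the original post doesn't show this part):

public class MainActivity extends AppCompatActivity {

    // Field names are assumptions; they are referenced in later steps
    private ImageView imageView;
    private TextView textView;
    private Button selectImage;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Bind the views declared in the layout above
        imageView = findViewById(R.id.image);
        textView = findViewById(R.id.text);
        selectImage = findViewById(R.id.selectImage);
    }
}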
Step 4: Select an image from the device:
I recommend you first go through the post on how to select or capture an image from the device before going further.
Now, let's open the image-cropping activity to select the image on a button click:
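The post doesn't name the cropping library here; I'm assuming the Android-Image-Cropper library (com.theartofdev.edmodo:android-image-cropper), a common choice in tutorials from this period. Under that assumption, the click listener looks like this:

selectImage.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Launch the image-cropping activity; the user can pick an
        // image from the gallery or capture one with the camera
        CropImage.activity().start(MainActivity.this);
    }
});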
Then, get the image by overriding the onActivityResult method:
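Again assuming the Android-Image-Cropper library, here is a sketch of the result handler; it reads the cropped image's Uri, shows it in the ImageView, and hands it to the detectFaceFromImage method defined in Step 5:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE) {
        CropImage.ActivityResult result = CropImage.getActivityResult(data);
        if (resultCode == RESULT_OK) {
            Uri imageUri = result.getUri();
            imageView.setImageURI(imageUri);
            // Extract the face data from the selected image (Step 5)
            detectFaceFromImage(imageUri);
        } else if (resultCode == CropImage.CROP_IMAGE_ACTIVITY_RESULT_ERROR_CODE) {
            // Cropping failed; log the error
            result.getError().printStackTrace();
        }
    }
}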
Step 5: Extract face data from the image:
private void detectFaceFromImage(Uri uri) {
    try {
        // Preparing the input image
        FirebaseVisionImage image = FirebaseVisionImage.fromFilePath(MainActivity.this, uri);

        // For high accuracy option
        FirebaseVisionFaceDetectorOptions highAccuracyOpts =
                new FirebaseVisionFaceDetectorOptions.Builder()
                        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
                        .setLandmarkMode(FirebaseVisionFaceDetectorOptions.ALL_LANDMARKS)
                        .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
                        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
                        .build();

        // Create the detector object
        FirebaseVisionFaceDetector detector = FirebaseVision.getInstance()
                .getVisionFaceDetector(highAccuracyOpts);

        // Pass the vision image object to the detector
        detector.detectInImage(image)
                .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
                    @Override
                    public void onSuccess(List<FirebaseVisionFace> faces) {
                        for (FirebaseVisionFace face : faces) {
                            // Coordinates of the face's bounding box
                            Rect bounds = face.getBoundingBox();
                            textView.append("Bounding Polygon (" + bounds.centerX() + "," + bounds.centerY() + ")\n\n");

                            // Angles of rotation
                            float rotY = face.getHeadEulerAngleY(); // Head is rotated to the right rotY degrees
                            float rotZ = face.getHeadEulerAngleZ(); // Head is tilted sideways rotZ degrees
                            textView.append("Angles of rotation Y: " + rotY + ", Z: " + rotZ + "\n\n");

                            // If face tracking was enabled:
                            if (face.getTrackingId() != FirebaseVisionFace.INVALID_ID) {
                                int id = face.getTrackingId();
                                textView.append("id: " + id + "\n\n");
                            }

                            // If landmark detection was enabled (mouth, ears, eyes,
                            // cheeks, and nose available):

                            // Left ear position
                            FirebaseVisionFaceLandmark leftEar =
                                    face.getLandmark(FirebaseVisionFaceLandmark.LEFT_EAR);
                            if (leftEar != null) {
                                FirebaseVisionPoint leftEarPos = leftEar.getPosition();
                                textView.append("LeftEarPos: (" + leftEarPos.getX() + "," + leftEarPos.getY() + ")\n\n");
                            }

                            // Right ear position
                            FirebaseVisionFaceLandmark rightEar =
                                    face.getLandmark(FirebaseVisionFaceLandmark.RIGHT_EAR);
                            if (rightEar != null) {
                                FirebaseVisionPoint rightEarPos = rightEar.getPosition();
                                textView.append("RightEarPos: (" + rightEarPos.getX() + "," + rightEarPos.getY() + ")\n\n");
                            }

                            // If contour detection was enabled:
                            List<FirebaseVisionPoint> leftEyeContour =
                                    face.getContour(FirebaseVisionFaceContour.LEFT_EYE).getPoints();
                            List<FirebaseVisionPoint> upperLipBottomContour =
                                    face.getContour(FirebaseVisionFaceContour.UPPER_LIP_BOTTOM).getPoints();

                            // If classification was enabled:

                            // Smiling probability (formatted to two decimal places)
                            if (face.getSmilingProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
                                float smileProb = face.getSmilingProbability();
                                textView.append("SmileProbability: " + String.format("%.2f", smileProb * 100) + "%\n\n");
                            }

                            // Right eye open probability
                            if (face.getRightEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
                                float rightEyeOpenProb = face.getRightEyeOpenProbability();
                                textView.append("RightEyeOpenProbability: " + String.format("%.2f", rightEyeOpenProb * 100) + "%\n\n");
                            }

                            // Left eye open probability
                            if (face.getLeftEyeOpenProbability() != FirebaseVisionFace.UNCOMPUTED_PROBABILITY) {
                                float leftEyeOpenProb = face.getLeftEyeOpenProbability();
                                textView.append("LeftEyeOpenProbability: " + String.format("%.2f", leftEyeOpenProb * 100) + "%\n\n");
                            }
                        }
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        e.printStackTrace();
                    }
                });
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Now, run the app and you will see the output shown above.
You can find the full source code on GitHub.
If you face any problem or have any suggestions, please comment below; we'd love to answer.
Comment below on which topic you'd like a guide on next, or drop a message on our social media handles.
Happy coding and designing :)