
Detect and Track Object Using Firebase ML Kit In Android Studio 2020(Complete Guide) With Source Code | Step By Step Tutorial

In this post, we're going to detect and track objects in an image using Firebase ML Kit in Android Studio.


This is the output after doing all the steps:

So, now make it happen:

Step 1: Add Firebase to your android project:
I recommend you first see how to add Firebase to an Android project in 5 minutes. If you've already added it, you can move on to Step 2.

Step 2: Add these dependencies for the ML Kit Android libraries to your app-level build.gradle file:

implementation 'com.google.firebase:firebase-ml-vision:24.0.1'

implementation 'com.google.firebase:firebase-ml-vision-object-detection-model:19.0.3'

and then click on Sync Now.


Step 3: Design the layout of the activity:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/image"
        android:layout_width="500dp"
        android:layout_height="500dp"
        android:layout_above="@+id/selectImage"
        android:layout_margin="30dp" />

    <Button
        android:id="@+id/selectImage"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="Select Image !" />

    <TextView
        android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@id/selectImage"
        android:layout_margin="30dp"
        android:textColor="@android:color/black"
        android:textSize="15sp" />

</RelativeLayout>
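The three views above need to be bound in the activity before they can be used. Here's a minimal sketch of what the activity's onCreate could look like; the field names (imageView, selectImage, textView) are assumptions, not from the original post:

```java
// Sketch: binding the layout's views in MainActivity (field names are illustrative).
ImageView imageView;
Button selectImage;
TextView textView;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // Look up the views declared in the layout above by their ids.
    imageView = findViewById(R.id.image);
    selectImage = findViewById(R.id.selectImage);
    textView = findViewById(R.id.text);
}
```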



Step 4: Select Image from the device:

I recommend you first go through the post on how to select or capture an image from the device before going further.

So now, let's open the image cropping activity to select the image on button click:
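The screenshot for this step didn't survive, so here's a minimal sketch. It assumes the ArthurHub android-image-cropper library from the linked post and the button id (selectImage) from the layout in Step 3:

```java
// Sketch: launch the library's crop screen when the "Select Image !" button is tapped.
// Assumes the com.theartofdev.edmodo:android-image-cropper dependency is added.
Button selectImage = findViewById(R.id.selectImage);
selectImage.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Opens CropImageActivity, which lets the user pick and crop an image.
        CropImage.activity().start(MainActivity.this);
    }
});
```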


and now get the image by overriding onActivityResult method:
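Again, the code screenshot is missing here, so this is a sketch of receiving the cropped image, assuming the same android-image-cropper library; the imageView field and the call into Step 5's method are illustrative:

```java
// Sketch: receive the cropped image Uri and hand it to the detector.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE
            && resultCode == RESULT_OK) {
        CropImage.ActivityResult result = CropImage.getActivityResult(data);
        Uri imageUri = result.getUri();
        imageView.setImageURI(imageUri);          // show the cropped image
        detectAndTrackObjectFromImage(imageUri);  // run detection (Step 5)
    }
}
```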


Step 5: Configure and run the object detector:

There are 3 steps to detect and track an object:
1. Prepare the input image.
2. Configure and run the object detector.
3. Get information about objects.

There are 5 ways of getting a FirebaseVisionImage object (to prepare the input image):
(i) from a Bitmap,
(ii) from a media.Image,
(iii) from a ByteBuffer,
(iv) from a ByteArray,
(v) from a file on the device.

We're creating it from a file path (the last option). If you want to know how to create it from the other options, comment down below.

See the Firebase docs for the full reference.

Here are all 3 steps:

private void detectAndTrackObjectFromImage(Uri uri) {
    try {
        // 1. Prepare the input image.
        FirebaseVisionImage image = FirebaseVisionImage.fromFilePath(MainActivity.this, uri);

        // For live detection and tracking, use STREAM_MODE instead:
        // FirebaseVisionObjectDetectorOptions options =
        //         new FirebaseVisionObjectDetectorOptions.Builder()
        //                 .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        //                 .enableClassification()  // Optional
        //                 .build();

        // 2. Configure and run the object detector.
        // Multiple object detection in static images:
        FirebaseVisionObjectDetectorOptions options =
                new FirebaseVisionObjectDetectorOptions.Builder()
                        .setDetectorMode(FirebaseVisionObjectDetectorOptions.SINGLE_IMAGE_MODE)
                        .enableMultipleObjects()
                        .enableClassification()
                        .build();

        FirebaseVisionObjectDetector objectDetector =
                FirebaseVision.getInstance().getOnDeviceObjectDetector(options);

        // For the default settings:
        // FirebaseVisionObjectDetector objectDetector =
        //         FirebaseVision.getInstance().getOnDeviceObjectDetector();

        // Run the object detector.
        objectDetector.processImage(image)
                .addOnSuccessListener(
                        new OnSuccessListener<List<FirebaseVisionObject>>() {
                            @Override
                            public void onSuccess(List<FirebaseVisionObject> detectedObjects) {
                                // The list contains at most one item if multiple
                                // object detection wasn't enabled.
                                for (FirebaseVisionObject obj : detectedObjects) {
                                    // 3. Get information about objects.
                                    // Integer id = obj.getTrackingId(); // null in SINGLE_IMAGE_MODE
                                    Rect bounds = obj.getBoundingBox();
                                    textView.append("Bounds- " + bounds + "\n");
                                    // If classification was enabled:
                                    int category = obj.getClassificationCategory();
                                    Float confidence = obj.getClassificationConfidence();
                                    // confidence can be null when classification
                                    // didn't produce a result, so guard against it.
                                    if (confidence != null) {
                                        textView.append("Category- " + category + "\n"
                                                + "Confidence- "
                                                + String.format("%.1f", confidence * 100)
                                                + "%\n\n");
                                    }
                                }
                            }
                        })
                .addOnFailureListener(
                        new OnFailureListener() {
                            @Override
                            public void onFailure(@NonNull Exception e) {
                                // Task failed with an exception
                                // ...
                            }
                        });
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Now, run the app :)

If everything is done correctly, you'll see the expected output.

You can see the full source code at GitHub.

If you face any problem or have any suggestions, please comment them down below; we'd love to answer.

Comment down what topic you need a guide on next, or drop a message on our social media handles.

Happy coding and designing :)


