In the ever-evolving landscape of mobile app development, the ability to process and analyse images in real time has become increasingly important. If you are a Flutter developer, you have most likely relied on the google_ml_kit package for tasks like this.
Subject refers to the primary people, animals, or objects that appear in the foreground of an image. If two subjects are very close to or touching each other, they are treated as a single subject.
Subject segmentation processes an input image and produces an output mask or bitmap for the foreground.
If you are new to the google_ml_kit package, here is some quick context.
Before this addition, google_ml_kit for Flutter already offered a range of capabilities including text recognition, face detection, pose detection, and more. These features let developers build sophisticated apps without implementing complex ML algorithms themselves.
Using Subject Segmentation in your Flutter app
To use the new subject segmentation in your app, follow these simple steps.
Firstly, what are the requirements?
iOS: This feature is still in beta and is only available on Android. Stay tuned for updates on Google's website, and request the feature here.
Android
- minSdkVersion: 24
- targetSdkVersion: 33
- compileSdkVersion: 34
You can configure your app to automatically download the model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:
<application ...>
  ...
  <meta-data
      android:name="com.google.mlkit.vision.DEPENDENCIES"
      android:value="subject_segment" />
  <!-- To use multiple models: android:value="subject_segment,model2,model3" -->
</application>
Secondly, update your pubspec.yaml file by adding the google_mlkit_subject_segmentation package:
dependencies:
  google_mlkit_subject_segmentation: ^0.0.1
Or run this command in your terminal:
flutter pub add google_mlkit_subject_segmentation
Now, in your Dart code, import the package:
import 'package:google_mlkit_subject_segmentation/google_mlkit_subject_segmentation.dart';
Usage
Create an instance of InputImage in one of these three ways:
From path:
final inputImage = InputImage.fromFilePath(filePath);
From file:
final inputImage = InputImage.fromFile(file);
From bytes:
final inputImage = InputImage.fromBytes(bytes: bytes, metadata: metadata);
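The bytes route is the one you would typically use with a live camera stream. Here is a minimal sketch of building the InputImageMetadata that fromBytes expects; the size, rotation, and format values below are placeholders you should match to your actual camera configuration, and inputImageFromCameraFrame is just an illustrative helper name:

import 'dart:typed_data';
import 'dart:ui';

InputImage inputImageFromCameraFrame(Uint8List bytes) {
  // Placeholder values: match these to your camera stream.
  final metadata = InputImageMetadata(
    size: const Size(720, 1280), // frame dimensions in pixels
    rotation: InputImageRotation.rotation0deg, // sensor rotation
    format: InputImageFormat.nv21, // common Android camera format
    bytesPerRow: 720, // row stride of the image buffer
  );
  return InputImage.fromBytes(bytes: bytes, metadata: metadata);
}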
Create an instance of SubjectSegmenter:
final options = SubjectSegmenterOptions(
  enableForegroundConfidenceMask: true,
  enableForegroundBitmap: false,
  enableMultipleSubjects: SubjectResultOptions(
    enableConfidenceMask: false,
    enableSubjectBitmap: false,
  ),
);
final segmenter = SubjectSegmenter(options: options);
Let's discuss the options. There are four of them, and I will explain each one in turn.
Foreground confidence mask
The foreground confidence mask lets you distinguish the foreground subject from the background. To enable it, pass true to enableForegroundConfidenceMask:
enableForegroundConfidenceMask: true
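Once the image is processed (see the Process image step below), you can read the mask back. A quick sketch, assuming the result exposes it as foregroundConfidenceMask, a flat list of per-pixel confidence values between 0 and 1:

final result = await segmenter.processImage(inputImage);
final mask = result.foregroundConfidenceMask;
if (mask != null) {
  // Count the pixels the model is fairly sure belong to the subject.
  final subjectPixels = mask.where((c) => c > 0.5).length;
  print('Subject covers $subjectPixels of ${mask.length} pixels');
}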
Foreground bitmap
Similarly, you can also get a bitmap of the foreground subject. To enable it, pass true to enableForegroundBitmap:
enableForegroundBitmap: true,
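As a sketch of what you could do with it, assuming the plugin hands the bitmap back as encoded image bytes (a Uint8List), you can render the cut-out subject directly with Flutter's Image.memory:

final result = await segmenter.processImage(inputImage);
final bitmap = result.foregroundBitmap;
// Show the subject cut out from its background, or nothing if absent.
final preview =
    bitmap != null ? Image.memory(bitmap) : const SizedBox.shrink();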
Multi-subject confidence mask
As with the foreground options, you can use SubjectResultOptions to enable the confidence mask for each detected subject as follows:
SubjectResultOptions(
  enableConfidenceMask: true,
  enableSubjectBitmap: false,
)
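Here is a sketch of walking the per-subject results, assuming the result exposes a subjects list whose entries mirror ML Kit's native Subject type (position, size, and an optional confidence mask):

final result = await segmenter.processImage(inputImage);
for (final subject in result.subjects) {
  // Each subject reports the bounding box of its region in the image.
  print('Subject at (${subject.startX}, ${subject.startY}), '
      'size ${subject.width}x${subject.height}');
  final mask = subject.confidenceMask;
  if (mask != null) {
    // width * height confidence values, local to this subject's box.
    print('Mask holds ${mask.length} values');
  }
}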
Multi-subject bitmap
Similarly, you can enable the bitmap for each subject:
SubjectResultOptions(
  enableConfidenceMask: false,
  enableSubjectBitmap: true,
)
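And as a sketch of putting the per-subject bitmaps to use, assuming each subject's bitmap is encoded image bytes and that startX/startY are pixel offsets into the source image, you could composite them with a Stack (buildSubjectsOverlay is just an illustrative name):

Widget buildSubjectsOverlay(SubjectSegmentationResult result) {
  return Stack(
    children: [
      for (final subject in result.subjects)
        if (subject.bitmap != null)
          Positioned(
            left: subject.startX.toDouble(),
            top: subject.startY.toDouble(),
            child: Image.memory(subject.bitmap!),
          ),
    ],
  );
}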
Process image
final result = await segmenter.processImage(inputImage);
Release resources with close()
segmenter.close();
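In a typical Flutter app, a natural home for this is a State's dispose method, so the native resources are freed when the screen goes away. A minimal sketch:

@override
void dispose() {
  // Free the underlying native segmenter along with the widget.
  segmenter.close();
  super.dispose();
}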
In the example above, I used the foreground bitmap option; you can also check the source code below.
I can't wait to see what you all build with this. Cheers!