In the past few years, BlinkID has become our most popular product. It is used all over the world for automating personal data extraction from different types of documents – IDs, driver’s licenses, travel visas, passports, etc.
BlinkID’s fundamental feature is document and face detection, used for extracting the image of the entire document, plus the ID photo. After obtaining the image, it uses proprietary smart & systematic recognition technology to extract information available on that particular document, like the name, address, and date of birth.
Changing the Course
As time went by and our client base grew, requirements kept growing. The templating system we used to add support for every new document type reached its limit; defining a template for a particular document was painstaking. That situation made us realize our current approach would eventually force us to sacrifice the quality of our solutions. We had set our goals high, and giving up on supporting capture, reading, and extraction for all identity documents in the world was not an option.
Five months ago, we created a completely new, ML-backed development process to improve intelligent data extraction.
Building the new process from the ground up enabled more efficient code reuse and more time to focus on UX-enhancing features. A big part of it is improved image processing, for example, our proprietary blur detection algorithm. Another advantage, both for our clients and for us, is faster support for new document types. Clients can focus on their task of automating data extraction from documents, without worrying about the idiosyncrasies of every document type their UI supports.
All in One Recognizer
The Recognizer is the basic unit for processing images and extracting meaningful information from them, in this case from various identity documents.
Scanning multiple documents in an earlier version of BlinkID meant setting up document-type-specific recognizers, possibly implementing UI for document selection, and writing lots of boilerplate code to handle result extraction for all recognizers. With BlinkID v5, developers use a single BlinkIdCombinedRecognizer, resulting in much less code to write and maintain. This also means that supporting new documents in the client’s app becomes as easy as bumping the BlinkID version number in their dependency list.
While combined recognizers also existed in earlier BlinkID versions, they were not available for all documents. Some documents had to be scanned using two recognizers, one for the front side and one for the back. That required extra development effort on the client’s part because our built-in overlays support only combined recognizers. In BlinkID v5, this is no longer an issue: we now provide combined scanning with a built-in overlay for all documents supported by BlinkIdCombinedRecognizer. Besides writing less code, another advantage of having just one recognizer is performance. Combining multiple recognizers can degrade performance, but with the new BlinkIdCombinedRecognizer that is not a concern. We support over 400 documents while keeping scanning speed and accuracy at the highest level.
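As a rough illustration of how little client code the single-recognizer approach requires, here is a minimal Android (Kotlin) sketch. The class names follow our public Android documentation, while the request-code constant and helper function are illustrative placeholders, not part of the SDK:

```kotlin
import android.app.Activity
import com.microblink.entities.recognizers.RecognizerBundle
import com.microblink.entities.recognizers.blinkid.generic.BlinkIdCombinedRecognizer
import com.microblink.uisettings.ActivityRunner
import com.microblink.uisettings.BlinkIdUISettings

// Placeholder request code for identifying the scanning activity's result.
const val BLINK_ID_REQUEST_CODE = 101

// One recognizer handles every supported document type --
// no per-country recognizer setup or document-selection UI.
fun startScanning(activity: Activity): RecognizerBundle {
    val recognizer = BlinkIdCombinedRecognizer()
    val recognizerBundle = RecognizerBundle(recognizer)

    // Launch the built-in overlay that guides the user through
    // scanning the front and back sides of the document.
    val uiSettings = BlinkIdUISettings(recognizerBundle)
    ActivityRunner.startActivityForResult(activity, BLINK_ID_REQUEST_CODE, uiSettings)
    return recognizerBundle
}
```

After the scanning activity returns, the bundle can be restored from the result intent and the recognizer's result object exposes the extracted fields. Exact package and method names can differ between SDK versions, so treat this as a sketch and consult the sample apps for the authoritative API.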
Less is More
With one recognizer, we support more identity documents than ever, and at the same time we seamlessly keep adding support for new document types. One might think this would add extra weight to the mobile SDK, but it’s the other way around: BlinkID v5 is the most lightweight release ever.
We reduced the mobile SDK’s impact on the client’s application size by about 40% on iOS and 60% on Android. For Android, it’s also worth noting that BlinkID v5 has around 35% fewer methods, which might mean the difference between having and not having to use multidex. In the latest version, the API surface is much smaller: there’s no longer a need to know the exact recognizer class for each document or to worry about the specific methods of each recognizer.
As you have probably noticed, the changes we made mostly concern the mobile SDK. This doesn’t mean we forgot about the web API; its updates are still to come.
End Users Will Appreciate v5, Too
In BlinkID v5, we made three main changes that enhance the existing user experience. First, there’s no longer a need to preselect the document type before scanning starts. Now, thanks to automatic document classification, scanning starts immediately and seamlessly. Second, we have significantly improved ID card detection, so scanning works regardless of document orientation or angle. That’s why we can all forget about the viewfinder rectangle. And finally, we’ve added a new callback that notifies your users when the document they’re trying to scan isn’t supported. We are putting a lot of effort into creating smart real-time feedback for a better scanning experience. We’ll talk more about the new UX and built-in scan flow in the next blog post.
Getting Started
There may be a worry that testing or migrating to BlinkID v5 could be a lot of work. Well… it’s not.
Even though it’s a major version update with a lot of improvements, the way the mobile SDK is used hasn’t changed much. All it takes is updating the existing code to use the BlinkIdCombinedRecognizer instead of country-specific recognizers and, optionally, switching to our new BlinkID overlays.
For more information, check out our Android and iOS GitHub repositories with updated sample apps and documentation.