The Microblink team has always been passionate about the intersection of AI and the real world, with a bold vision of bringing the benefits of AI to every person on earth. For nearly a decade, we’ve been developing and delivering diverse products that today impact more than 300M users across 60 countries. On top of our flagship receipt-scanning solutions, we’re thrilled to announce the general availability of our BlinkShelf product recognition offering. Ahead of this exciting milestone, we sat down with Chad Wood, our VP of Product and fearless product recognition leader, who helped bring this offering to market in under 12 months.
Thanks for taking the time to chat ahead of this exciting release! Can you share more about yourself and your current role?
I oversee product initiatives and strategy across the Commerce part of Microblink’s business. I’ve always been very entrepreneurial and focused on building software that has a meaningful impact on the lives of actual people.
I joined the team back in 2015, when we were focused on physical receipt data extraction. Since then, we’ve developed robust solutions for digital receipt extraction, which enable us to help companies make sense of purchase data wherever and however it lives.
Early on, we uncovered that data extraction is just one piece of the puzzle when it comes to receipt scanning. This led us to build our own product catalog of 15 million products and growing, which powers the product intelligence behind the enriched purchase data we provide to our customers.
More recently, we’ve developed software that detects products on the shelf, without anyone having to pick up an item or scan a barcode. It builds on our computer vision expertise and product intelligence, evolving our core receipt-scanning technology into technology that can scan real-world shelves and product images. No one we’ve encountered over the last 7 years has been able to do this in real time with the same degree of accuracy. The response at Shoptalk and Groceryshop has been tremendous, and we’re just getting started!
Can you tell me more about how this works?
Through machine learning and computer vision, our tech detects and identifies products on the shelf down to the UPC. Our customers can use this data to surface personalized promotions, product reviews and recommendations, nutritional information, and more. Imagine, for example, your brand’s mascot guiding shoppers to discover your latest seasonal item, or the grocery list turned into a gamified adventure to find savings. Digitizing the entire shelf in real time with the tap of a button can also change the game for a number of use cases across retail and market research – the opportunities are endless.
I’ll use myself as an example. Imagine I need to stop at the grocery store on the way home from work to pick up an item for my family. I always have my cell phone on me, and when I’m shopping I generally have it out or in my pocket to check my list, look up a recipe, or clarify something with my wife. With this technology integrated into my favorite consumer promotional or loyalty shopping app, I could capture a visual of the entire shelf and know which products are on it, and potentially on promotion, with the tap of a button.
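For readers curious what that flow could look like inside an app, here is a minimal Kotlin sketch of the promotion-matching step Chad describes. The type and function names (ShelfScanResult, DetectedProduct, matchPromotions) are illustrative assumptions rather than the actual BlinkShelf SDK interface; the idea is simply to take the UPCs detected in one shelf capture and match them against the app’s active promotions.

// Minimal sketch of the promotion-discovery flow described above.
// ShelfScanResult, DetectedProduct, and Promotion are illustrative
// placeholders, not the actual BlinkShelf SDK API.

data class DetectedProduct(val upc: String, val name: String)
data class ShelfScanResult(val products: List<DetectedProduct>)
data class Promotion(val upc: String, val description: String)

// Given a single shelf capture and the app's active promotions,
// return the offers that match products physically on the shelf.
fun matchPromotions(scan: ShelfScanResult, activePromotions: List<Promotion>): List<Promotion> {
    val upcsOnShelf = scan.products.map { it.upc }.toSet()
    return activePromotions.filter { it.upc in upcsOnShelf }
}

fun main() {
    // Pretend the scanner returned these detections from one shelf photo.
    val scan = ShelfScanResult(
        listOf(
            DetectedProduct("012345678905", "Sparkling Water 12-pack"),
            DetectedProduct("036000291452", "Oat Cereal")
        )
    )
    val promotions = listOf(Promotion("036000291452", "Save 10% on Oat Cereal"))
    matchPromotions(scan, promotions).forEach { println(it.description) }
}

In a real integration, the detections would come from the live camera feed, and the surfaced offers would feed whatever experience the app builds on top, such as the seasonal-item discovery or gamified savings hunt mentioned above.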
How did you go about the early days of product discovery and development?
We’ve always been super close with our customers, and this is no different. Promotional apps struggle to drive usage and engagement while consumers are shopping in-store. Most of these apps rely on a long list of promotions to surface offers, requiring users to scroll through and identify the ones relevant to them.
While some users will peruse the offers before or during shopping, others will just capture and submit their receipts after purchase in hopes of qualifying for a promotion. In each of these cases, people struggle to connect relevant, available promotions with their in-store shopping experience and to bring those offers to life in new ways.
We started with this promotion discovery use case, prompted by the large number of our customers in the loyalty/rewards and promotional space, but as the product developed we also uncovered major retail challenges we can solve along the path to purchase.
By digitizing the shelf in real time, without expensive additional hardware investments, we’re unlocking exciting opportunities across store operations, retail execution, and the in-store product discovery experience (e.g. shopping based on dietary preferences or allergy requirements).
From an AI perspective, this sounds like a compelling problem. How did the team go about solving this?
One of our Lead Machine Learning Engineers, Ivan Relić, wrote a great blog post on this that is definitely worth a read!
It took the entire Microblink team to make this possible, and I’m super grateful for the brainpower, collaboration and dedication from our AI and engineering teams globally.
What items can actually be detected in-store and how can someone get their hands on this tech?
We handle all grocery items you would find in the store, including things like shampoo and beauty products, chips, beverages, and so on. We leverage our existing product catalog of more than 15 million products, and we’re always working to expand to new products and retail channels where the market size and opportunity are there.
As for testing the tech, we’re currently piloting it with our existing customers, but we’re planning to make the iOS and Android SDK more widely available at the start of 2023, so definitely reach out if you’re interested in discussing further!