Speaker 0 00:00:00 Human vision is nothing but two cameras arranged in a stereoscopic way, coupled with an intelligent processor, which is our brain, in the middle. And that is exactly what we decided to build.
Speaker 2 00:00:21 Hello everyone, and welcome to the Robot Industry Podcast. We're glad you're here, and thank you for subscribing. I'm Jim Beretta, and our guest for this edition is Sina Afshari from Apera AI in Vancouver. I just want to let everybody know this is part of our Automate 2023 in Detroit series of podcasts. I'd also like to welcome a new sponsor: Kinova. Kinova is a global leader in professional robotics, founded in 2006 in Montreal. The company's mission initially was to empower individuals with upper-body limitations through the use of assistive robotics. The company has evolved its product line to help researchers, medical professionals, governments, and educational institutions achieve their innovation goals through strategic partnerships and collaborative efforts. Today, with robotic technologies built up over more than a decade of inspired ingenuity, Kinova's dedication is to provide solutions to professionals in industries such as agri-food, healthcare, security, nuclear, hazmat, and advanced manufacturing. Sina, welcome to the podcast.
Speaker 0 00:01:25 Thank you, Jim, and I'm glad to be here.
Speaker 2 00:01:27 So Sina is a highly technical entrepreneur. As CEO, he is leading an elite team of visionaries helping manufacturers make their factories more flexible and productive. Robots enhanced with Apera's software have 4D Vision, the ability to see and handle objects with human-like capability, so challenging applications such as bin picking, sorting, packaging, and assembly are now open to fast, precise, and reliable automation. Apera is led by an experienced team from high-growth companies focusing on robotics, AI, and advanced manufacturing. Sina has 20 years of startup and large-corporate experience in machine learning, deep learning, software, and systems architecture. He has utility patents in his portfolio around AI, digital imaging, image processing, and media streaming. Sina, how did you get started in this business?
Speaker 0 00:02:18 You know, I studied robotics; that's where I got hooked on robotics, at Simon Fraser University. But right out of school, my interest in entrepreneurship got me to help start a company in video surveillance and video analytics. This was back in 2005, when surveillance cameras were all really low resolution; you couldn't really distinguish anything. Our claim to fame was building cameras that were 16 megapixel, 20 megapixel, and handling all the complexities that came with compressing that data, transmitting that data, and displaying that data at the time. We built that company to hundreds of millions in revenue, an IPO, and an over-a-billion-dollar exit through the acquisition by Motorola Solutions. After that great success, my focus shifted from video analytics and video surveillance to what deep learning was now doing to effectively every single sector of technology.
Speaker 0 00:03:28 That interest got me to join Amazon, in AWS AI, where I was very much immersed in AI and had a lot of fun with a lot of new technology. While I was at Amazon, we heard a reverse pitch by a really large tier-one automotive OEM, who said: there is a lot of additional automation potential in our factories that we cannot realize, because it requires a much more capable robot vision guidance solution. And after looking at the state of the industry, it was clear that there was a huge vacuum between where and how industrial robots were used in factories and what the latest and greatest in computer vision and deep learning was making possible. That was the spark that got Apera AI started. Right off the bat, we started to look at why vision-guided robotics is such a big challenge for factory automation people.
Speaker 0 00:04:39 And it was quite clear that it comes from two sources of shortcoming: one, sensing in 3D, and two, the algorithmic side of it. The sensing shortcoming comes from the fact that practically everyone today, aside from Apera, starts by trying to use a standard, fairly well-understood structured-light technology to sense the scene in 3D. There's nothing very special about that technology; it's very similar to your home projector coupled with a camera. Just imagine you want to watch a movie anywhere but in a dark room on a white screen: it just doesn't work. You try to project onto a glass bottle of shampoo. Will that work? No. You try to project onto a ceramic. Would that work? No. You try to project onto the painted body of a car. That won't work. These were all the shortcomings we heard about, where the sensing itself cannot sense.
Speaker 0 00:05:54 Now, couple that with a couple-of-decades-old set of algorithms used for geometric matching of the CAD or 3D model of the object against this now-inferior point cloud of the scene, and you're left with a solution that solves only a very small percentage of problems in 3D vision-guided robotics. It leaves a lot of applications unaddressed. That is what we set out to solve. Initially we asked: what is the vision system out there that we should model our technology after? We were inspired by human vision, because all of this automation potential in factories is today handled by humans. And human vision is nothing but two cameras arranged in a stereoscopic way, coupled with an intelligent processor, which is our brain, in the middle. And that is exactly what we decided to build.
Speaker 0 00:07:04 Two simple 2D cameras, not smart cameras, arranged in a stereoscopic way, and then AI-based software that removes all of the limitations that traditional 3D vision and 3D vision-guided robotics encounter. The million-dollar question for us was: how do you train the AI? Today's deep learning requires a lot of labeled data to be fed to the algorithm. It is one thing to ask humans to sit in front of a computer and draw boxes around traffic lights and pedestrians. It's another thing to expect them to select every pixel of the object in the image and assign it an x, y, z coordinate in space. Human-labeled 3D data is practically impossible. We knew that if we were to solve this problem with AI, that AI had to be trained in a simulation environment using synthetic data.
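For readers who want to see the stereo geometry Sina is referring to, here is a minimal classical sketch in Python with OpenCV: depth recovered from the disparity between two ordinary 2D cameras. This is only the textbook baseline, not Apera's method, which replaces the matching step with AI; the file names and calibration numbers are placeholders.

```python
import cv2
import numpy as np

# Two plain 2D cameras, rectified so matching points lie on the same
# image row. File names are placeholders.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching estimates per-pixel disparity, the
# horizontal shift of each scene point between the two views.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # search range; must be divisible by 16
    blockSize=5,
)
# OpenCV returns fixed-point disparity scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth by similar triangles: Z = f * B / d, with focal length f
# (pixels) and baseline B (metres) from calibration. Values here are
# made up for illustration.
f, B = 900.0, 0.12
depth = np.where(disparity > 0, f * B / disparity, 0.0)
```

Classical matchers like this struggle on exactly the surfaces mentioned above, such as glass, chrome, and glossy painted bodies, which is where a learned matcher earns its keep.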
Speaker 0 00:08:17 Training the AI entirely on synthetic data is the problem we set out to solve. We spent about a year doing deep-tech R&D. We put a pilot at a tier-one automotive OEM that has now been in production for over two years. We then worked with multiple Fortune 100 manufacturers in consumer packaged goods and automotive assembly to really broaden the addressable market. Our go-to-market push started last year, when we put our sales team together to truly expand our reach, and it's been incredible to see massive demand for our product and true expansion. One thing that has been very fascinating and satisfying is that we started by telling people we solve all of your vision problems, but people were mostly interested in what I call non-conformant applications. These are applications that no other technology in the world can address.
Speaker 0 00:09:33 So, you know, picking transparent and translucent objects in pharma and in automotive lighting, picking extremely shiny products in automotive and in hand tools. And after our product was actually used by customers, one of the incredible things they discovered was how robust the technology is to real-world variations. These are things like lighting changes: you have a window in your factory, it ends up being a sunny day, the sun is sneaking in, and your classical 3D sensing falls apart. You have micro stops: there is vibration in every factory, and slight vibrations cause 3D sensing components to be unreliable. There can be dust in factories, and dust collecting in front of the projector in a structured-light system can make it highly unreliable, whereas a little bit of dust on the lens of a camera doesn't really hurt the imaging. All of these factors had caused even the small fraction of vision-guided robotics deployed in factories to be unreliable and to suffer micro stops. One of our customers told us they estimated these micro stops to amount to over a hundred thousand dollars of annual productivity loss on every line that had the system. It only took this customer one month of using an Apera system to become a complete believer, because we had zero micro stops on that system. They are now not only accelerating their adoption of Apera on new applications, but also retrofitting existing cells that used alternative 3D vision-guided robotics with Apera, purely to fix these micro-stop problems.
Speaker 2 00:11:42 That's very impressive, thank you. So how long does it take to teach your system? You're using only synthetic data, and you don't use real data anymore, correct?
Speaker 0 00:11:55 That is correct. The amount of time it takes to train a system, from the customer going to our training portal to the time their trained assets are ready, is anywhere between 24 and 48 hours, depending on the complexity of the task. That's what's really amazing about AI training with synthetic data: everything about it is automated, from the data generation to the training to the testing. In fact, this model allows us to provide not only an almost perfect digital twin of what the customer will see on their factory floor, but also guarantees about the performance of the system before the customer even does a single physical test: pose accuracy, applicability, and the rate at which we can empty the bin using what the customer had in mind in terms of the end-of-arm tooling, the size of the bin, and the arrangement of the robot. All of those parameters can be known to the customer before they even decide to do a single physical test.
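To make the synthetic-data point concrete, here is a toy, self-contained Python sketch of the core idea: when the simulator places the part, the exact pose, which is the training label, is known by construction, so no human labelling is needed. The renderer is a stub and all names and values are hypothetical, not Apera's actual training portal.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pose():
    """Sample a random 6-DoF pose: xyz translation plus Euler angles."""
    translation = rng.uniform([-0.2, -0.2, 0.4], [0.2, 0.2, 0.8])  # metres
    rotation = rng.uniform(-np.pi, np.pi, size=3)                  # radians
    return np.concatenate([translation, rotation])

def render_stub(pose):
    """Stand-in for a photorealistic renderer: a fake stereo image pair."""
    return np.zeros((2, 480, 640), dtype=np.uint8)

# Each sample pairs input images with an exact ground-truth pose.
# Data generation, and therefore training and testing, is fully automated.
dataset = [(render_stub(p), p) for p in (random_pose() for _ in range(1000))]
```

A real pipeline would render the customer's CAD model under randomized lighting, materials, and bin fill levels; the point is only that the label comes free with every rendered frame.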
Speaker 2 00:13:22 So when you talk about go to market, this really is a go-to-market story, because there are almost no barriers. You just need a couple of days to run your models and generate the synthetic data, and customers should really be able to get to picking and placing right away.
Speaker 0 00:13:39 Absolutely. And the fact that our system, because it's AI-based, requires almost no vision expertise, unlike alternative systems, is a really big added value. It means any robot engineer can now equip their system with vision without having to be an expert in machine vision as well as robotics.
Speaker 2 00:14:11 And can you give me a little bit more detail on the parts you're dealing with? I'm imagining a big bin of parts. They may be automotive parts, they may be medical devices, they may be plastic. Obviously you're doing shiny stuff, so maybe they're on tables. Can you give us a little hint of what these parts might look like?
Speaker 0 00:14:30 The application use cases are very wide, as you just mentioned. If you think about how manufacturing has been optimized over the past several decades, people have built highly efficient sub-processes to build components of the final product. Think about a car. A car is not built by sheet metal going into a machine and a finished car coming out the other end. You have your stamping workshop, which is where individual stamped parts come out. You have your body workshop, which is where those stamped parts get welded together. Then at the end you have the assembly workshop, where the sub-components get assembled together. It's a very similar story across all the other manufacturing businesses, such as life sciences: you don't have, for example, your plastic injection molding system in the same facility that is doing the chemical filling. Those are separated.
Speaker 0 00:15:48 And the way these sub-components are transferred from one production facility to the other, sometimes even within the same factory, is by putting them inside a bin. Where we bring value is on both sides. Either in the introduction of the product to the next process, whether it's a welding cell where you're feeding in stamped parts, or a chemical filling machine where you're feeding pipettes or perhaps COVID test kit containers. Or on the other side, when the parts are coming down the line. When plastic injection molded parts come down the line, it's incredibly common for humans to be standing at the end of that line, picking them and placing them inside trays, bins, or totes. Same when welded parts come off of the welding cell: it's very common for humans to pick them up and place them either on a rack or inside the next bin for transport.
Speaker 0 00:17:08 We have even heard of stories where, on high-speed stamping lines, because humans cannot keep up with the rate of parts coming in, bins are placed at the end of the line and parts get dumped into them. But because that is not the most efficient way to stack them and ship them to the body shop, humans end up re-binning them: picking from one bin and putting parts into the next bin in a slightly more structured way, to densify the bin so that when it gets transported to the next phase it is efficiently packed. These are all examples of scenarios where we bring a lot of value to manufacturers.
Speaker 2 00:17:59 Thank you for that, Sina. Can I ask who your perfect customer is, and maybe what industries you're working in? Or are you working across all industries?
Speaker 0 00:18:08 The way we sell our system is by partnering with the people who build these automation cells: system integrators. We have a large network of certified system integrators and value-added distributors who provide that local attention and support to our end users and customers, and ensure the customer receives the local support they need from people who have been very well trained on how to use the system and put it into production. Our system is primarily software that simply runs on off-the-shelf hardware. So what we sell, effectively, is our software, but we do fulfill the off-the-shelf hardware, so that when the customer receives it, the industrial PC is already programmed and they don't have to install anything. The cameras are off the shelf too, but we ship them along with the system so the customer doesn't have to figure out what camera resolution or brand will be reliable. These are all provided by Apera.
Speaker 2 00:19:26 Well, thank you for that. Did we forget anything today in our conversation?
Speaker 0 00:19:31 I would like to mention some of the new applications customers are addressing with Apera. Automotive assembly is something we started with early on, but now we're increasingly seeing other types of assembly being done with our product. These more or less come down to two problems. One: you're assembling one component into another where the physical tolerances of the components are not tight enough. For example, the location of a hole or the placement of a pin cannot be known precisely enough for the robot to be programmed positionally. That is increasingly where we're seeing applications. The second use case is where the part has very accurate positional tolerances, but building a fixture to hold the part accurately for repeated assemblies is costly, either in building the fixture or in changing the process. In those applications, again, being able to simply identify exactly where the base part is and perform the assembly, regardless of how the part is presented, is something that brings value. And of course packaging is a form of assembly, and that is also increasingly where we're being used.
Speaker 2 00:21:15 So you can use AI to really save money on your tooling, which is kind of an exciting part of what's happening.
Speaker 0 00:21:21 Precisely. Very little tooling is required for packaging. One of the exciting applications we're involved in is, for example, the packaging of hand tools. Not only do the parts arrive in random bins, they are basically mirror-finished, because hand tools are typically chrome plated.
Speaker 2 00:21:48 Mm-hmm. <affirmative>.
Speaker 0 00:21:49 But also, when the packaging happens, the blow molds simply appear randomly on a conveyor. By using AI to not only detect and accurately pick the parts, but also detect exactly where the blow mold is and perform the assembly onto the blow mold for packaging, all using vision, we remove the need to build a very expensive conveyor with very precise indexing and very precise jigs. The system becomes incredibly flexible in that sense.
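The fixtureless picking and placing described here rests on a simple coordinate chain: a pose the vision system detects in the camera frame is mapped into the robot's base frame through a calibrated hand-eye transform, so no jig has to guarantee where the part sits. A minimal sketch with placeholder matrices, not real calibration values:

```python
import numpy as np

# 4x4 homogeneous transform from the robot base frame to the camera
# frame, normally obtained from hand-eye calibration. Placeholder.
T_robot_camera = np.eye(4)
T_robot_camera[:3, 3] = [0.5, 0.0, 0.8]

# Pose of the detected part (or blow mold) in the camera frame, as a
# vision system would report it. Placeholder values.
T_camera_part = np.eye(4)
T_camera_part[:3, 3] = [0.02, -0.10, 0.65]

# Chain the transforms: the part's pose in the robot's base frame,
# i.e. where the robot should move to pick or place.
T_robot_part = T_robot_camera @ T_camera_part
print(T_robot_part[:3, 3])
```

Because the chain is recomputed from each new detection, the part can arrive anywhere on the conveyor or in the bin and the pick still lands.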
Speaker 2 00:22:29 Well, it's very exciting, and I'm excited to see your system in real life at Automate. What number is your booth at Automate in Detroit?
Speaker 0 00:22:37 We will be exhibiting at booth number 2639 in Detroit, and we're very excited. We have a number of really shining demos of our applications at our booth, and also at a number of our partners' booths. So once you visit our booth, we'll be able to point you to some of the other partners who are demonstrating Apera's capabilities at Automate.
Speaker 2 00:23:06 And Sina, what do you like to do when you're not changing the robot world, the vision world, or the tooling world? Do you have any hobbies these days?
Speaker 0 00:23:13 My hobbies pretty much come down to teaching biking or skiing to my two-year-old and my four-year-old. Before that, though, I was pretty big on hobbies. Skiing has always been a big one, playing soccer continues to be one, and there was a period where I was flying small airplanes a lot. That period has gone since I decided to have children, but I'd like to pick those hobbies back up once the kids grow a little bit.
Speaker 2 00:23:59 Well, that's great. And how can people get hold of you if they want to learn more about Apera AI?
Speaker 0 00:24:04 The easiest way is via email, and my email is fairly easy to remember. It's just sina, S-I-N-A, at apera.ai.
Speaker 2 00:24:18 Well, thanks again for coming on and we'll see you in Detroit.
Speaker 0 00:24:22 Thank you, Jim. It was a pleasure to be part of your podcast.
Speaker 2 00:24:26 Our sponsor for this episode is Ehrhardt Automation Systems. Ehrhardt builds and commissions turnkey solutions for their worldwide clients. With over 80 years of precision manufacturing, they understand the complex world of robotics, automated manufacturing, and project management, delivering world-class custom automation on time and on budget. Contact one of their sales engineers to see what Ehrhardt can build for you; they're at info@ehrhardtautomation.com, and Ehrhardt is spelled E-H-R-H-A-R-D-T. I'd also like to acknowledge A3, the Association for Advancing Automation. They are the leading automation trade association for robotics, vision and imaging, motion control and motors, and industrial artificial intelligence technologies; visit automate.org to learn more. I'd like to thank Painted Robot. They build and integrate digital solutions; they're a web development firm that offers SEO, digital and social marketing, and can set up and connect CRM and other ERP tools to unify marketing, sales, and operations. And you can find
[email protected]. And if you'd like to get in touch with us at the Robot Industry Podcast, you can find me, Jim Beretta, on LinkedIn. We'll see you next time. Thanks for listening, and be safe out there. Today's podcast was produced by Customer Attraction Industrial Marketing, and I'd like to recognize my nephew Chris Gray for the music, Jeff Bremner for audio production, my business partner Janet, and our sponsors, Ehrhardt Automation Systems and Kinova Robotics.