Building smarter, better AI cameras with Yassir Rizwan from Labforge Inc.

Episode 102 June 19, 2023 00:31:06
The Robot Industry Podcast

Hosted By

Jim Beretta

Show Notes

Yassir Rizwan is an entrepreneur and technology enthusiast. His expertise is in robotics, computer vision, and AI, with over 10 years of experience. At Labforge, he focuses on product roadmap, business development, and strategic relationships, as well as hardware and software design. Yassir holds an engineering degree in Mechatronics from the University of Waterloo and has researched controls, non-linear control systems, and aerodynamics.

Before we get to the questions,

I would like to introduce one of our sponsors. Kinova is a global leader in professional robotics. Founded in 2006 in Montreal, the company's mission initially was to empower individuals with upper-body limitations through the use of assistive robotics. The company has evolved its product line to help researchers, medical professionals, governments, businesses and educational institutions achieve their innovation goals through strategic partnerships and collaborative efforts. Today, with robotic technologies built up over more than a decade of inspired ingenuity, Kinova provides solutions to professionals in industries such as agrifood, healthcare, security, nuclear, hazmat and advanced manufacturing.

How did you get started in machine vision and cameras?

Tell our audience a bit about the Labforge company and your partners.

What are some applications that you have been involved with, especially in the early days?

You designed and built a camera; that is a pretty bold step. Why?

What is "Bottlenose"? What kind of lenses does it use?

Who is your target customer?

Do you have any Use Cases that you can talk about?

I feel that your product, Bottlenose, is a disruptive product. Why is that?

How have you made it easy for machine builders and end customers to install vision?

Did we forget to talk about anything?

How can people get a hold of you?

To find out more about Labforge, or to reach out to Yassir, you can find more information here.

Enjoy the podcast. Thanks for subscribing, thanks for listening.

Regards,

Jim

Jim Beretta Customer Attraction Industrial Marketing & The Robot Industry Podcast

Thanks to our partners: A3 The Association for Advancing Automation and PaintedRobot.

If you would like to get involved with The Robot Industry Podcast, would like to become a guest or nominate someone, you can find me, Jim Beretta on LinkedIn or send me an email to therobotindustry at gmail dot com, no spaces.

Our sponsors for this episode are Ehrhardt Automation Systems. Ehrhardt builds and commissions robotic turnkey automated solutions for their worldwide clients. With over 80 years of precision manufacturing experience, they understand the complex world of automated manufacturing, project management, supply chain and delivering world-class custom automation on-time and on-budget. Contact one of their sales engineers to see what Ehrhardt can build for you at [email protected]

Kinova Robotics. Kinova is a global leader in professional robotics. Founded in 2006 in Montreal, the company's mission initially was to empower individuals with upper-body limitations through the use of assistive robotics. The company has evolved its product line to help researchers, medical professionals, governments, businesses and educational institutions achieve their innovation goals through strategic partnerships and collaborative efforts. Today, with robotic technologies built up over more than a decade of inspired ingenuity, Kinova's dedication is to provide solutions to professionals in industries such as agrifood, healthcare, security, nuclear, hazmat and advanced manufacturing.

Keywords and terms for this podcast: AI Camera Manufacturer, Yassir Rizwan, Labforge Inc., The Robot Industry Podcast, Ehrhardt Automation Systems, Kinova Robotics, #automate2023


Episode Transcript

Speaker 0 00:00:00 Bottlenose is our flagship line of cameras, and it is a derived technology from back when we were working in defense. We took everything that we learned working in defense, the organization, the ability to see things clearly and far away, high resolution, artificial intelligence, and we distilled it down into a dual-use product that we call Bottlenose.

Speaker 2 00:00:27 Hello everyone, and welcome to The Robot Industry Podcast. I'd like to introduce one of our sponsors. Kinova is a global leader in professional robotics, founded in 2006 in Montreal. The company's mission initially was to empower individuals with upper-body limitations through the use of assistive robotics. The company has evolved its product line to help researchers, medical professionals, governments, businesses, and educational institutions achieve their innovation goals through strategic partnerships and collaborative efforts. Today, with robotic technologies built up over more than a decade of inspired ingenuity, Kinova provides solutions to professionals in industries such as agrifood, healthcare, security, nuclear, hazmat, and advanced manufacturing. My guest for this edition is Yassir Rizwan from Labforge in the Waterloo region. Yassir is an entrepreneur and technology enthusiast. His expertise is in robotics, computer vision, and AI, with over 10 years of experience. At Labforge, he focuses on product roadmap, business development, and strategic relationships, as well as hardware and software design. Yassir has an engineering degree in mechatronics from the University of Waterloo and has researched controls, non-linear control systems, and aerodynamics. Welcome to the podcast, Yassir.

Speaker 0 00:01:44 Thank you for having me, Jim. Great to be here.

Speaker 2 00:01:47 Yassir, how did you get started in machine vision and cameras?

Speaker 0 00:01:50 Great question.
So we started, or I myself started, in computer vision a long time ago, about 15 years or so ago. Some of my early work in computer vision was funded by AMD, which is one of the large processor companies. We were doing research at the University of Waterloo at the Embedded Systems Lab, and also the Waterloo Autonomous Vehicles Lab, and there we were essentially trying to fly cameras on drones. The challenges that came with that are how we got started in all of this. First of all, the cameras had to be light enough to fly on a drone, and the drone had to be big enough to fly cameras. And because we were doing automated recognition and AI on board, the processors either had to be super tiny or the airplanes had to be really large. That's where everything began for me.

Speaker 2 00:02:46 Tell our audience a little bit about the Labforge company and your partners. Of course, I've been aware of you for many, many years.

Speaker 0 00:02:53 Sure. Labforge started in Waterloo about 10 years ago. As we were finishing our research, Thomas Reidemeister, our co-founder, myself, and Sebastian Fischmeister at the lab, and he's still the director of the lab, decided that we had built a lot of technology and needed to spin it off into a company. We had some great IP that could really provide benefit to the industries we were working with, and that's how we started Labforge. One of our early employees was Martin, and Martin, Thomas, myself, and Sebastian are all still here. Now we design and build cameras that have really high performance capabilities built into them.

Speaker 2 00:03:34 And you've done a lot of partnerships, right?

Speaker 0 00:03:36 Yes, that's right.
We've been working with some very large companies since almost the beginning. Like I mentioned, early on our research was funded by AMD and a few other companies, but later we partnered with Toshiba, and we've been partnered with them for about seven years, with Toshiba Japan and Toshiba US. We work with them at the chip level, where we try to understand where their challenges are and where our challenges are in terms of accelerating vision and AI at an embedded level. You can't really do this kind of work without having a chip partner. I know we were talking about this earlier, but it's sort of like how BlackBerry used to have a partner in Qualcomm or Texas Instruments, or how Apple used to have a partner in Qualcomm. We have a partner in Toshiba, where Toshiba provides us the core chipset. Beyond that, we are also partnered with TDK, coincidentally another large Japanese conglomerate, where we partner with them on very tiny power supplies that power the different components inside the Bottlenose cameras we produce, as well as with Cadence and Pleora Technologies, which is another Canadian company.

Speaker 2 00:04:42 That's great. And what are some of the applications that you've been involved with, especially in the early days?

Speaker 0 00:04:47 Sure. From the beginning, we've been looking at many different applications, and we have gotten our feet wet in all these different industries. At the beginning we delved really deep into defense. We worked really closely with the Royal Canadian Air Force, and they have been our partners and our champion in promoting our technologies as well. After that, we worked a lot with the Department of National Defence and DRDC.
And of course, through the partnership with Toshiba, we've been doing a lot of work in the automotive segment as well, and there we've seen some very interesting applications. For example, if you can have a camera detect an oncoming car, could you dim the headlamp automatically instead of having the high beams on all the time? Could we use the cameras to detect cyclists and pedestrians on the road? That's some work we've done with Transport Canada early on as well. Could we use cameras to detect overhead collisions on industrial equipment, like scissor lifts? We've done that kind of work too. But lately we've been focusing a lot more on industrial automation, for example pick-and-place and inspection, which is how we came to be talking today.

Speaker 2 00:06:03 So you guys built a camera, and that's a pretty bold step. Why was that?

Speaker 0 00:06:09 Another great question. The answer to this is not what you would find in your typical business books. The "why" didn't come from a need to do it from a business perspective. The reason we built our camera really came from our passion, and from doing something very fulfilling. We were excited about building things, and we love building things. We love working with chips, getting chips attached to other chips, and together building a solution that looks like it just came out of nowhere. On a fundamental level, that's why we do it: because we love it. But because we do, we found many things we can do that others cannot, and that's where we can provide value, by being primarily an AI and software company that also builds its own cameras.
We're vertically integrated into that stack, and we can offer customers some unique advantages.

Speaker 2 00:07:05 And let's talk about kind of the elephant in the room, which is AI and machine vision.

Speaker 0 00:07:10 Maybe let's take a step back. There's a lot of hype in AI today, and a lot of mystery around it. Of course, machine vision has been around for four decades or so, so it's nothing new; cameras have been used to fix manufacturing processes and to implement quality and other standards for a very, very long time. But it's still somewhat mysterious. We regularly walk into factories where there are no cameras, where not even the basic kind of camera-based automation is present. So I think, on a fundamental level, people should really think about what it is.

Speaker 0 00:07:54 So maybe I can try to clarify what it actually is. Back in the day, cameras were lenses that focused images onto film, so let's take it from the very beginning. Film would capture images because the light coming in was focused by a lens, and the different colors and pigments on that film reacted to the light in different ways, causing certain chemical changes that then appeared as an image. If that light is very well focused and creates a crisp image, then the pigment change on the film looks like an image, and now we can reproduce that and print it out, and it looks great. Digital imagery is a little bit different, but it's really an evolution of that concept: all we've done now is take the little parts of an image and turn them into numbers.
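The "image as a grid of numbers" idea can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not Labforge code; the 16 by 16 size and the 0-to-128 brightness scale echo the early systems described in the conversation.

```python
# A tiny illustration: an image is just a grid of numbers, one brightness
# value per pixel (toy example; real sensors have millions of pixels).

WIDTH, HEIGHT = 16, 16

# Start with a dark image (brightness 0 everywhere).
image = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]

# "Draw" a bright 8x8 square in the middle by setting those pixels to 128.
for y in range(4, 12):
    for x in range(4, 12):
        image[y][x] = 128

# Because pixels are numbers, we can do math on them, e.g. mean brightness:
total = sum(sum(row) for row in image)
mean_brightness = total / (WIDTH * HEIGHT)
print(mean_brightness)  # 64 bright pixels of 128 over 256 pixels -> 32.0
```

Everything that follows in the conversation, from handcrafted features to deep learning, is built on applying mathematics to grids like this one.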
Speaker 0 00:08:43 And these numbers are no different than the numbers we know. In the olden days, every part of an image used to be represented just from zero to 128, as an integer: 0, 1, 2, 3, and so on. Every part of an image would be represented by a number. This was true even for some of the space programs, which had some very, very small images. Sometimes an image would only have, let's say, a grid of 16 by 16 parts, with each part represented by a number. These days we call those parts pixels, and instead of the old cameras of the seventies and eighties, where you just had 16 by 16 digital or vacuum-tube-based pixels, these now number in the millions.

Speaker 0 00:09:26 And when we say millions of pixels, maybe megapixels, the advantage of representing a part of an image as a number is that now we can apply certain mathematical concepts to it and start to derive relationships in what these numbers mean. If certain numbers are next to other numbers and you apply a mathematical formula to them, you can predetermine what they represent. For 30 or 40 years, a lot of machine vision was basically handcrafted features, in which the computer vision scientists (in academia the field is called computer vision) would sit down and say: if I'm going to see a gradient, this is going to be a transition from white to black, and I'm going to look at my individual pixels and try to define where my gradient is.

Speaker 0 00:10:12 And this gradient will now tell me where the lines are, and maybe it'll tell me where the circle is. So if I can apply a formula that tells me where a circle is, I can use that to precisely measure the diameter of this circle.
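The "handcrafted feature" era described here can be sketched with the simplest possible gradient detector: difference neighbouring pixels and look for the biggest jump. This is a toy example in plain Python, standing in for real operators such as Sobel filters.

```python
# One scan line of pixels: white (255) on the left, black (0) on the right.
row = [255, 255, 255, 255, 0, 0, 0, 0]

# Horizontal gradient: the difference between each pixel and its right
# neighbour. Flat regions give 0; an edge gives a large (here negative) spike.
gradient = [row[i + 1] - row[i] for i in range(len(row) - 1)]

# The edge is wherever the gradient magnitude is largest.
edge_index = max(range(len(gradient)), key=lambda i: abs(gradient[i]))
print(edge_index)  # the white-to-black edge sits between pixels 3 and 4
```

Chaining rules like this one (gradients to lines, lines to circles, circles to diameters) is exactly the handcrafted pipeline that deep learning later replaced with learned parameters.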
Once we've gotten to that point, you can see some really nice applications. All of a sudden, in the eighties and nineties, you go: I'm going to measure the diameter of my cookies coming out of the oven. And now we have a machine vision process that can dynamically change the size of cookies, perhaps at a large high-volume manufacturer. The recent transition, and this happened around the 2014 timeframe, when there was this magical moment in computer vision, was AI.

Speaker 0 00:10:56 At that time, what happened was that the handcrafted features, where somebody had to sit down and define how to find the transition of a gradient from one pixel to the next, were no longer needed. And because they were no longer needed, nobody had to sit down and think about how to detect a circle or how to detect a square. All they had to do was build these gigantic structures called neural networks, fill them with empty or initial parameters, and then train them end to end by just showing the camera 10,000 circles. Out comes a model that teaches the camera how to detect these circles. That's what we call AI today; that is deep learning. These models got bigger and bigger and bigger.

Speaker 0 00:11:40 Now the latest models are extremely large. About two years ago, there was this new concept called transformers, and transformers are neural networks that have changed the industry yet again since 2014. Transformers essentially mean that you can represent certain things as other things. So if you write a sentence in English, you can create an embedding and then generate a sentence in German.
And because that transformation has taken place from English to German, you can now apply the concept to images as well. That bridging between images, text, and everything else is why we see ChatGPT and an explosion of intelligence happening.

Speaker 2 00:12:23 Thanks for that explanation. It's perfect, because when I was a kid, you had a darkroom, and I had an enlarger and a whole bunch of cameras, so that gives me some background on that. Now, and I'm sure you're in a lot of factories, we're seeing a lot more vision systems in automated factories. Would you agree?

Speaker 0 00:12:44 Yes, I would say the trend has picked up. Of course, North America is behind compared to Japan or other parts of Asia, but North America will not be behind for much longer. There's now a strategic push to bring manufacturing back to North America and also to Europe, and that's why we are seeing more and more cameras there. And then there's the other reason, which is that for some of these tasks that cameras can do, whether it's cameras by themselves or cameras coupled with actuators or robots, people don't want to do some of these things anymore. It's backbreaking work, and it's a bit dull. You have to bend down and pick up parts from a bin and put them on a conveyor belt, as an example; that is an unfulfilling task, and many young people just don't want to do it anymore. The same goes for some very high-skill tasks like welding, and for any area where it's very hard to find labor. So yes, we're seeing more and more cameras being used to do precision welds and other things.

Speaker 2 00:13:52 Yeah, I agree with you. Welding is a very exciting part.
Let's talk about the camera for a minute. What is Bottlenose, and what kind of lenses do you use on it?

Speaker 0 00:14:02 Bottlenose is our flagship line of cameras, and it is a derived technology from back when we were working in defense. We took everything that we learned working in defense, the organization, the ability to see things clearly and far away, high resolution, artificial intelligence, and we distilled it down into a dual-use product that we call Bottlenose. Bottlenose, primarily marketed for industrial and robotics applications and for automation, is a family of cameras available in single-lens and dual-lens versions. The single-lens version has the ability to do everything we talked about, like AI and HDR and other things, directly on board. The dual-lens version can also do these things, but it can also compute depth. Depth is something that can be computed if you have two viewpoints of the same area.

Speaker 0 00:14:54 It's the same as how we have depth perception from our eyes: when we close one eye, it becomes difficult to find the mouse again on the computer. Having two lenses on a camera gives us this unique possibility of finding out where things are. Bottlenose was designed for, and condensed down for, industrial applications. We follow the CS-mount format, which also allows customers to use C-mount and S-mount lenses by simply using a small adapter, and therefore there's a huge selection of lenses out there available for these cameras.

Speaker 2 00:15:27 And the camera's form factor is really small, because I see one in the background there.

Speaker 0 00:15:32 Yes, the camera is tiny. Again, this goes to our passion for chips.
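The two-viewpoint depth idea described above comes down to one similar-triangles relationship from stereo geometry: a point appears shifted between the left and right images (the "disparity"), and depth = focal length × baseline / disparity. The sketch below uses made-up focal length and baseline values for illustration; these are not Bottlenose calibration parameters.

```python
# Depth from stereo disparity via similar triangles.
# focal_px: lens focal length expressed in pixels; baseline_m: distance
# between the two lenses in metres. Both values here are illustrative.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.06):
    """Return depth in metres for a disparity measured in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point seen in both views)")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts a lot between the two views; a far one barely moves.
print(depth_from_disparity(96))  # 0.5 (metres)
print(depth_from_disparity(12))  # 4.0 (metres)
```

This inverse relationship is also why two closely spaced lenses resolve near distances well but lose precision at long range, and why the stereo-matching step (finding the same point in both images) is where the heavy computation lives.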
Everything, the circuit board and all, has been condensed down to about as small as it can get. We have about 20 TOPS of processing power in there. And yes, it's small. It's also rugged: it doesn't have any fans on it. Even though it does all this AI and everything else, it has no cooling fans, which actually makes it very well suited for industrial environments.

Speaker 2 00:16:01 Thank you for that. So I was going to ask you, what is an ISP, and how are neural networks accelerated inside the camera?

Speaker 0 00:16:10 The 20 TOPS of processing power we have inside the camera is split between the ISP, which is the image signal processor, and the neural network accelerator, which is what we call the DNN block, the deep neural network block. We also have acceleration for stereo matching and 3D depth. The ISP is a part of the camera that exists in all cameras, at least the high-end ones; it is the part of the camera that does image processing. Image processing is not typically computer vision; it is a different field of manipulating images that is not usually associated with computer vision or machine vision. What image processing does is what you would typically do with Photoshop or other image manipulation software.

Speaker 0 00:17:02 You can do color correction, you can sharpen the images, you can take out defects, you can do lens shading correction; if a lens has an artifact on it, you can remove that automatically. If there's a color aberration, you can remove it. If there are certain things in your environment, like difficult lighting situations, you can fix that with the ISP.
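One of the ISP operations listed above, sharpening, can be sketched in a few lines: an unsharp-mask-style filter pushes each pixel away from the average of its neighbours, which exaggerates edges. This is a toy one-dimensional version in plain Python, not how a hardware ISP pipeline is actually implemented.

```python
# Unsharp-mask-style sharpening on a single scan line of 8-bit pixels.
# Each interior pixel is pushed away from the mean of its two neighbours,
# which increases contrast at edges while leaving flat regions unchanged.

def sharpen_line(pixels, amount=1.0):
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        local_mean = (pixels[i - 1] + pixels[i + 1]) / 2
        out[i] = round(pixels[i] + amount * (pixels[i] - local_mean))
        out[i] = max(0, min(255, out[i]))  # clamp to the valid 8-bit range
    return out

soft_edge = [50, 50, 100, 150, 150]
print(sharpen_line(soft_edge))  # edge contrast grows: [50, 25, 100, 175, 150]
```

Real ISPs run dozens of such stages (demosaicing, shading correction, color correction, sharpening) per frame in fixed-function hardware, which is why they can keep up with the sensor at full frame rate.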
There are, of course, many machine vision cameras out there that don't have ISPs, and they simply pipe out the raw image. So not only do we have the ISP portion, we also have the computer vision portion and the AI portion, which is the DNN block. The purpose of the DNN block goes back to what I mentioned about the ability of a network, a set of parameters, to learn how to recognize a circle or a square; that goes into the structure we call a neural network.

Speaker 0 00:17:53 These neural nets can be really, really large, and they can have an amazing number of parameters in them. These days, there is no single good way to accelerate them for all types of use cases. The most general way to accelerate them is by using an NVIDIA GPU, and that's why you see NVIDIA GPUs being used a lot in the industry. But what we do is accelerate them inside an ASIC, an application-specific integrated circuit, that is designed specifically to accelerate neural networks while consuming less power and being more suited to industrial environments than general-purpose GPUs.

Speaker 2 00:18:32 Yassir, thanks for that. And I'm just kind of wondering, because you've got all this power on chip, is your camera faster?

Speaker 0 00:18:38 Yes. I would say that the camera has less latency; it's very responsive. It's also easier to use for the end customer. Of course, it's very difficult to program an ASIC compared to a GPU, but once we're done with it, it's a lot easier and a lot better suited for specific applications.

Speaker 2 00:18:58 And so who's your target customer then? Is it a machine builder, or is it an end user?
Speaker 0 00:19:03 I would say we have three types of target customers. First, the manufacturers, who are doing fabrication or building parts. The manufacturers can be in different segments: automotive tier ones and tier twos, the packaging industry, food and beverage, or ag tech. Machine builders are also our target customers; these would be the people building small machines for specific tasks. They could be building a machine to label something, and that would be a great customer for us. And of course, system integrators are a target too, and we hope to get some system integrators on board in the future.

Speaker 2 00:19:43 So do you have any use cases that you can talk about?

Speaker 0 00:19:46 Yes, of course. So far we've talked about the Bottlenose camera, and we are merging this with our two new products that are coming out, Bottlenose Inspect and Bottlenose Locate. These products are being built in partnership with the Next Generation Manufacturing Supercluster, a project that we've just started, and this will allow us to target Bottlenose at two specific areas in manufacturing. Bottlenose Inspect will be a fully integrated unit, with hardware and software that comes with Bottlenose, which will allow manufacturers to do high-speed, precision inspection of the parts they're manufacturing, and also to find anomalies that they may not know about. It's one thing to find a defect in a part if you know about it and can train for it, but it's a whole other game.
Speaker 0 00:20:33 If you're trying to find anomalies in your manufacturing, certain scratches and aberrations or other things in the process of making parts, you may just want to keep an eye out for anomalies. That will be part of Bottlenose Inspect. Bottlenose Locate, again a fully packaged solution, will have the ability to locate things and find them in 3D space. What that means is that if a customer is interested in finding things, where are my parts, and what are the orientation, size, and shape of different things, we can do that. The use case for that is automated assembly. For example, you can use Bottlenose Locate to find the front clamshell and the back clamshell of a product, then have a robot, or two robots, come pick up both clamshells and merge them together to create a full assembly, and another robot come in with a tool changer and use a Phillips-head screw to screw the two clamshells together and form a BlackBerry, for example.

Speaker 0 00:21:31 Other use cases for Locate are palletizing and depalletizing. Any place where the system could use the knowledge of where things are, and what to do with that knowledge, would be a great fit.

Speaker 2 00:21:47 I'm getting the feeling, Yassir, that Bottlenose is a disruptive product. Why is that?

Speaker 0 00:21:53 Yeah, I would say Bottlenose is a very disruptive product in the sense that we are probably one of the few companies that have a very deep knowledge of AI and computer vision and that also make their own cameras.
That mix usually doesn't exist: either you're a pure software company or a pure camera company, and of course, for good reason, there's a good business case to be made for focusing on one area. But there are also a lot of advantages, and a lot of disruption you can cause, if you understand both sides of the equation. If you really know how to build a camera, and how to manipulate certain things inside the camera, you can make the AI and the computer vision, the machine vision part, a lot more accessible to your users, and accessible to use cases that may never have had automation before at all.

Speaker 0 00:22:40 So essentially we are opening new areas instead of trying to compete in the same waters as before, opening new waters where the market grows for everybody together, but also automating processes that never had automation before, or where nobody thought it was possible. I can give you an example. In certain manufacturing, you can come across a situation where you have to build a part to very, very high tolerance; it has to be built to, let's say, 0.01 millimeter tolerance in a CNC machine, and it has to be perfect. But if you really trace that back, it turns out that the tolerance requirement is only there because certain robots in the industry work in a very rigid manner.

Speaker 0 00:23:25 If a robot works in a rigid manner, and it's pre-programmed for its paths, then of course the part it's going to work on needs to be extremely precise for it to work. Where we can come in, by having AI and machine vision all done on the camera, is that now you can have some variability in there, and that variability in turn reduces the tolerance requirements on your parts.
Maybe the part can be made to one-millimeter precision, or ten-millimeter precision; maybe that precision was never required to begin with, and it was really only there because robots could not see things. But if robots can see things, and you can make it commercially viable for every small robot, large robot, or low-cost robot to see things, then there are a lot of opportunities for disruption.

Speaker 2 00:24:09 That's great, and thank you for that. And so how have you made it easy, or easier, for machine builders and end customers to install vision?

Speaker 0 00:24:17 We thought about this for a while, and of course, we were a little bit resistant to what the answer ended up being. The answer was: by adopting standards. For example, you're going to be at the Automate show, and there's a standard that is governed by A3, which is GigE Vision; A3 governs GigE Vision, and they also govern USB3 Vision. GigE Vision is a specification that defines how images are to be transported over an Ethernet link for industrial use cases. Initially we designed the thing without using GigE Vision, of course, because we wanted to do our own thing. But then, in order to make adoption easier and to make it easier for people to use, we partnered with Pleora Technologies and we adopted the GigE Vision standard.

Speaker 0 00:25:11 Pleora is on the steering committees of GigE Vision, together with Delta and a few other companies.
So Bottlenose is fully GigE Vision compliant, and I'm also proud to say that we are using parts of GigE Vision that other companies are not necessarily using right now, but that are part of the standard. For example, we use a portion of GigE Vision called chunk data. Chunk data allows us to transport things that are not image related, for example, results of artificial intelligence. Our machine vision algorithms' output goes through the chunk data provisions of GigE Vision that were added in there for the future, and I guess now is the future, for it to be used. And to answer the rest of the question: for end customers, the manufacturers, the direct customers that we are going to, we are making it easy for them by offering an off-the-shelf solution they can use without having to understand any of the stuff that we talked about today. Speaker 0 00:26:10 So without knowing GigE Vision, AI, machine vision, or any of that, we offer them off-the-shelf items and solutions: they essentially buy a station that they can use, and it does inspection for them, or it does pick-and-place for them. Second, for machine builders, we are offering flexible service-based offerings. Because we are a small company, and we also happen to know the camera inside out, we can help with one of the main goals machine builders have, which is to reduce the BOM cost of the machine they're selling. They want to sell ten to a hundred of these machines a year, and they don't want the vision component to be the biggest part of that cost base. So again, by being vertically integrated, we're able to offer this to them and provide them with flexible services where they also don't need to know anything about AI or machine vision in order for this to work.
And third is system integrators. Of course, system integrators know this stuff really well, and some of the large system integrators are very, very good at machine vision; you yourself worked at ATS Automation before. So for them, we have a fully plug-and-play infrastructure with software that they already know and like, so Cognex, Matrox, and Halcon would be things that system integrators would be able to use right away. Speaker 2 00:27:25 Well, thank you very much for coming onto the podcast, Yassir. Did we forget to talk about anything? Speaker 0 00:27:31 Yeah, I would just like to plug a couple of things. I know that you mentioned that Kinova is one of the sponsors of this podcast. It just so happens that we've also just started a very large project with Kinova, and the goal there is to bring their Link 6 cobots together with Bottlenose to manufacturers. That is very exciting for us, and we feel that both of our companies have certain things to gain from each other; the combined solution would actually be quite amazing. Second, I would say that if you want to see Bottlenose, there will be a chance to see it at booth number 118 at the Automate show in Detroit. That would not be the Labforge booth; that would be the Pleora Technologies booth, one of the partners I mentioned earlier, and they've been kind enough to give us some booth space. So while I won't be there, the camera will be there, running a live demo. We'll hopefully have a set of fresh tomatoes in front of the camera, and the camera will detect the tomatoes the entire time it's running. Speaker 2 00:28:32 Well, that's great, and thanks again for coming onto the podcast.
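The chunk-data mechanism Yassir describes, carrying non-image results such as AI detections in the same GigE Vision buffer as the frame, can be sketched in code. The snippet below is a minimal illustration only: the byte layout (each chunk as data followed by a big-endian 4-byte chunk ID and 4-byte length, appended after the payload) is a simplified assumption of the GigE Vision chunk trailer, and the chunk IDs and payloads are hypothetical, not Labforge's actual protocol.

```python
import struct

def parse_chunks(payload: bytes) -> dict:
    """Walk appended chunks from the end of a GigE Vision-style buffer.

    Assumed layout per chunk (simplified): [data][4-byte ID][4-byte length],
    big-endian, with chunks appended one after another.
    """
    chunks = {}
    end = len(payload)
    while end >= 8:
        chunk_id, length = struct.unpack(">II", payload[end - 8:end])
        start = end - 8 - length
        if start < 0:
            break  # malformed trailer or no more chunks; stop parsing
        chunks[chunk_id] = payload[start:end - 8]
        end = start
    return chunks

# Build a buffer with an "image" chunk plus a hypothetical AI-results chunk.
IMAGE_ID, AI_RESULTS_ID = 0x1, 0x2   # IDs chosen for illustration only
image = bytes(16)                    # stand-in for pixel data
results = b'{"detections": 3}'       # stand-in for inference output
buf = (image + struct.pack(">II", IMAGE_ID, len(image))
       + results + struct.pack(">II", AI_RESULTS_ID, len(results)))
parsed = parse_chunks(buf)
```

The point of the design is visible in the sketch: because detection results ride in the same buffer as the frame they were computed from, image and metadata arrive already synchronized, with no separate side channel to correlate. On a real camera, the available chunk IDs are advertised through GenICam features rather than hard-coded.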
When you're not changing the automation world of vision, what do you like to do? Do you have any hobbies? Speaker 0 00:28:41 Sure, yes, I've got a couple. My wife likes to think that I jump <laugh> from hobby to hobby. But lately I've been really deep into barbecue, going the full precision route of smoking briskets for, you know, 21 hours, with thermometers coming out of every <laugh> place they can come out of. And sourdough bread is another thing that I love, because it's something you can really apply a lot of precision and creativity to. Yeah. Speaker 2 00:29:15 Well, that's great. That's fun. And thanks very much again for coming on. How can people get ahold of you? Speaker 0 00:29:20 Yeah, the best way: please feel free, your listeners can add me on LinkedIn, and I'd be happy to talk to them there. Yassir; if you search it, hopefully you'll find it. You can also send an email to [email protected], and we'll go from there. Speaker 2 00:29:36 And I'll put that in the show notes as well. Okay. Our sponsor for this episode is Ehrhardt Automation Systems. Ehrhardt builds and commissions turnkey solutions for their worldwide clients. With over 80 years of precision manufacturing, they understand the complex world of robotics, automated manufacturing, and project management, delivering world-class custom automation on time and on budget. Contact one of their sales engineers to see what Ehrhardt can build for you; they're at [email protected]. And Ehrhardt is spelled E-H-R-H-A-R-D-T. I'd also like to acknowledge A3, the Association for Advancing Automation.
They are the leading automation trade association for robotics, vision and imaging, motion control and motors, and industrial artificial intelligence technologies. Visit automate.org to learn more. I'd also like to recognize Painted Robot. Painted Robot builds and integrates digital solutions; they're a web development firm that offers SEO, digital and social marketing, and can set up and connect CRM and other ERP tools to unify marketing, sales, and operations. You can find them at [email protected]. If you'd like to get in touch with us at The Robot Industry Podcast, like Yassir, you can find me, Jim Beretta, on LinkedIn. We'll see you next time. Thanks for listening. Be safe out there. Today's podcast was produced by Customer Traction Industrial Marketing, and I'd like to recognize Chris Gray for the music, Jeffrey Bremner for audio production, my business partner Janet, and our sponsors, Ehrhardt Automation Systems and Kinova Robotics.
