[00:00:00] Speaker A: So why would you recommend a test company and not an automation company or machine builder to do the test part of an automated system?
[00:00:08] Speaker B: Well, I mean, would you go to a foot doctor for heart surgery? Test is a specialty field, just like custom machine building.
[00:00:23] Speaker A: Good day, everyone, and welcome to The Robot Industry Podcast. I'm excited today because I have a previous co-worker, a friend, and a person I've actually done work with recently on the podcast. His name is Drew Wilson. Drew has had a 40 year career in instrumentation engineering, full functional test, and automation systems. He spent 32 years with ATS Automation Tooling Systems in various positions, including founder of the ATS Test Systems division and chief engineer for the ATS Industrial Group. He's now retired from full time employment, but he is doing the odd consulting project. Drew, welcome to the podcast.
[00:01:00] Speaker B: Thank you for inviting me to your podcast, Jim. It's been a fun career.
[00:01:04] Speaker A: Hey, you've had some super interesting jobs and projects in your life, and we've worked on some of them together. You've worked on the people mover in Detroit and in Vancouver, and in automotive, medical device, and pharma machinery, and there's lots that we can't talk about as well.
[00:01:20] Speaker B: That's true.
[00:01:21] Speaker A: Drew, test is a challenge for manufacturers and machine builders, because I feel that they just want to buy something and move on. And the machine builder, maybe even the end customer, doesn't really know why and what they're testing. They just want a pass or a fail and a simple answer to usually a very complex problem. Would you agree?
[00:01:40] Speaker B: Absolutely. High speed automated assembly systems can build a lot of parts quickly, but they can also generate a lot of expensive scrap in a short period of time. These days, customers expect flawless product, so the cost of failure has never been higher. Building good products is a result of attention to detail: having a good product design, a good assembly process, 100% good parts, and then final validation of the assembly by testing. They say that 100% testing is 80% effective, so it just can't be left to the final test system to find all of the issues. Product with intermittent failures or infant mortality failures may or may not be found. Six Sigma demands rigor in all aspects of product quality. Hence, testing is the final check in the production process prior to shipping. So, yes, Jim, it is very important.
[00:02:28] Speaker A: So, you know, when you said like 100% testing is 80% effective, what do you mean by that?
[00:02:33] Speaker B: Well, that's just an old rule that, you know, the test can't find absolutely everything. There's always some aspect that may or may not be covered by the test criteria. So it's always best to have many, many links in the chain that guarantee a successful product.
[00:02:55] Speaker A: Well, we're going to get into that a little bit later, but can you tell us more about the testing side of the automated assembly process, which you and I both grew up on?
[00:03:04] Speaker B: Again, you're building parts every so many seconds, and you need to validate that the process is in control. So it's extremely important that you ensure that the last part being built has all of the attributes of a known good part, and hence it should meet the customer's end requirements. Thus, Six Sigma quality control.
[00:03:28] Speaker A: Drew, tell us more about the testing side of the automated assembly process.
[00:03:31] Speaker B: Well, production testing falls into two categories: inline testing and end-of-line final test. Inline testing, as the name suggests, is located along the production line, where partially completed subassemblies are validated for correct operation. Later on, you may not be able to have direct access to the inner workings of the part, and if it's bad, you want to fail the part prior to adding more value. It also gives you a rework opportunity. An example of an inline test might be checking for the free movement of a gearbox that was just assembled, prior to adding the drive motor. End-of-line test is your final quality check. Full functional test is where you're running the device as if it were in its final environment. Usually testing is done using only the inputs and outputs of the completed device, as per its operational design. Occasionally there may be a diagnostic connection where you can talk to a smart device, but not often. You don't necessarily need to test every feature, bell, and whistle, but you do need to prove that it has all the attributes of a known good part, and then store the results for quality metrics. In the motor example that I gave, you might look at the voltage, current, torque, and RPM, measure the full parametric power and efficiency curves, check for any unusual vibration, and confirm safety grounding. End-of-line test systems can also calibrate or train smart devices, like calibrating the commutation of a brushless motor to maximize its efficiency, for example. In the quest for Six Sigma quality control, a good gauge capable test system is critical.
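To make Drew's motor example concrete, here is a minimal sketch of the end-of-line calculation: electrical input power, mechanical output power, and efficiency computed from the four measured channels. The channel values below are hypothetical, not from the episode.

```python
import numpy as np

# Hypothetical measured channels from an end-of-line motor test
voltage_v = np.array([12.0, 12.0, 12.0, 12.0])
current_a = np.array([1.8, 2.6, 3.6, 5.0])
torque_nm = np.array([0.04, 0.07, 0.10, 0.13])
speed_rpm = np.array([3400, 3000, 2600, 2100])

p_elec = voltage_v * current_a                      # electrical input, W
p_mech = torque_nm * speed_rpm * 2 * np.pi / 60.0   # mechanical output, W
efficiency = p_mech / p_elec                        # compare against known-good limits
print(np.round(efficiency, 2))                      # e.g. [0.66 0.7  0.63 0.48]
```

In a real station each of these points would come from the parametric sweep Drew describes later, and the resulting curve, not just a single value, would be compared against the attributes of a known good part.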
[00:05:02] Speaker A: Thanks for that, Drew. So, if I'm installing test equipment in an automated system, am I best to distribute the test equipment throughout the machine or the system, or put it at the end of the line? Like what are your thoughts on this?
[00:05:15] Speaker B: Well, it's very case specific, with no hard and fast rules. Good test equipment is expensive, so you locate it where the failures can most easily be found and the cost to rework is the lowest. It's really driven by the failure modes of the product. In effect, the test system is used as a quality gauge. I mean, this is how the automotive and medical companies do it, and they are leaders in the high speed manufacturing industry.
[00:05:39] Speaker A: So the tester is considered a quality gauge, and you use the term gauge capable. Can you explain to the audience what a gauge R&R is?
[00:05:49] Speaker B: Sure, no problem. It's how you determine your gauge is capable for the task, and that is really the one and only metric that counts in test equipment. Gauge R&R stands for gauge repeatability and reproducibility. Repeatability is the ability of the gauge to reliably discriminate one part from another by always getting the same measurement result. Reproducibility is the ability of the gauge to provide consistent results independent of the various operators or fixtures involved. Gauge R&R is a statistical calculation that determines what percentage of your pass/fail tolerance is consumed by gauge error across a large population. I'll explain more in a moment. Every gauge has errors, and this is how we determine the amount of gauge error that a test system has. This is the one metric that will prove your gauge will always pass 100% of good parts and fail 100% of bad parts in production. Overall equipment effectiveness, or OEE, is the main metric used to determine a production line's success; in testing, gauge R&R is the one metric used to determine the test system's success. So it's very important.
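For listeners who want to see the arithmetic, here is a minimal variance-components sketch of a gauge R&R study: several operators measure several parts several times. This is a simplified illustration, not the full AIAG procedure; the data layout, the 6-sigma spread convention, and all values are assumptions.

```python
import numpy as np

def gauge_rr_percent(data, tolerance):
    """data[p, o, t]: measurement of part p by operator o, trial t."""
    n_parts, n_ops, n_trials = data.shape
    # Repeatability (equipment variation): pooled within-cell variance
    var_repeat = data.var(axis=2, ddof=1).mean()
    # Reproducibility (appraiser variation): spread of operator means,
    # less the share already explained by repeatability
    op_means = data.mean(axis=(0, 2))
    var_repro = max(op_means.var(ddof=1) - var_repeat / (n_parts * n_trials), 0.0)
    # %GRR: 6-sigma gauge spread as a fraction of the pass/fail tolerance
    return 100.0 * 6.0 * np.sqrt(var_repeat + var_repro) / tolerance

# 10 parts x 3 operators x 3 trials, tolerance of 1.0 (all hypothetical)
rng = np.random.default_rng(0)
true_parts = rng.normal(10.0, 0.10, (10, 1, 1))
data = true_parts + rng.normal(0.0, 0.01, (10, 3, 3))    # gauge noise
print(gauge_rr_percent(data, tolerance=1.0))             # roughly 6%
```

Here the simulated gauge noise spread (6 x 0.01) consumes about 6% of the 1.0 tolerance, which on Drew's scale would be excellent. Note how the same gauge against a 0.1 tolerance would score around 60%, which is his point about tolerance and gauge R&R being inseparable.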
[00:07:03] Speaker A: So how do you specify good gauge R&R results?
[00:07:07] Speaker B: Well, that's a good question, because typically the customer's requirements document is almost always wrong. Let me explain. The ideal gold standard for measurement is a gauge R&R of 10% or less, and really, every specification asks for this. But this in itself is a meaningless requirement unless it's defined in conjunction with the pass/fail tolerance requirements. Without the tolerance, the gauge R&R requirement is undefined and meaningless. If the tolerance is wide open, gauge R&R is easy to achieve. But if the tolerance is very tight, then a good gauge R&R can be very, very difficult to achieve. My rule of thumb is, all things being equal, you must measure 100 times more accurately than the manufacturing tolerance to meet a gauge R&R of 10% or less. So, say you want to know something within 0.1 degree. With a 10% gauge R&R, you must select instrumentation and tooling that can resolve down to a thousandth of a degree. So you can see it gets very difficult very quickly.
[00:08:09] Speaker A: So what's a good gauge R&R result?
[00:08:12] Speaker B: Well, we said that 10% gauge R&R is excellent, and it's the gold standard for testing, but really, 20% to 30% is considered good or acceptable, with room for improvement. And 30% to 50% can be acceptable for production, but only with some careful rationalization of the test and its value to the product quality. If the gauge R&R is 5% or less, then I'd be asking why the tolerances are so wide open, and should they be reduced, or can test cycle time be further reduced, because you don't need the accuracy; your system is basically too accurate for the application. I would also just like to say that there's quite a bit of skill required to perform a successful gauge R&R. Some products have considerable natural variation, and this needs to be taken into account during the study. There are statistical tools and techniques to do this, but really, that's outside of this general discussion today.
[00:09:05] Speaker A: Drew, what are some of the other common mistakes in specifying test equipment?
[00:09:09] Speaker B: Oh, there's lots. Traditional specs state range, accuracy, and resolution, but that's really only part of the required definition. Most product measurements are dynamic, not static. So you must also state the bandwidth of the measurement, which is basically the speed of change that you can actually measure, or need to measure. In other words, is it a slow moving signal or a very fast moving signal? By signal, I mean a physical property that you're measuring from a sensor. The measurement methodologies and selected instruments totally change with bandwidth. Simple example: DC motor current. Are you interested in the inrush current that lasts microseconds, commutation current that lasts milliseconds, or average running current that lasts for hundreds of milliseconds? I used to joke that you can make the Rocky Mountains look like a pool table with enough averaging, that is, low bandwidth. But this requires more cycle time and may not actually tell you a whole lot about the quality of the part. High bandwidth requires much better instrumentation, but will tell you a lot more about what's going on in your product. Remember, you can always take a high bandwidth signal and turn it into a low bandwidth signal, but you can't do it the other way around.
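Drew's Rocky Mountains joke is easy to demonstrate. The sketch below (synthetic data, all values hypothetical) shows a microsecond-scale inrush spike vanishing under heavy averaging, which is why the bandwidth of the measurement has to be specified up front.

```python
import numpy as np

# Synthetic DC motor current: 2 A running current with a 50 us, 30 A inrush
fs = 1_000_000                        # 1 MS/s acquisition (hypothetical)
t = np.arange(0, 0.2, 1 / fs)         # 200 ms capture
current = 2.0 * np.ones_like(t)
current[:50] += 30.0                  # inrush transient, first 50 samples

def moving_average(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

print(current.max())                          # 32.0 A: high bandwidth sees the inrush
print(moving_average(current, 10_000).max())  # ~2.2 A: averaging flattens it away
```

The 10 ms averaging window reduces a 30 A event to a barely visible bump. As Drew says, you can always derive the low-bandwidth view from the high-bandwidth capture, but never the reverse.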
[00:10:18] Speaker A: Are there any other common examples that you have?
[00:10:21] Speaker B: Yeah. Another typical one is "measure x while holding y at a known condition," that is, measure motor power at a fixed load. The problem is that it's very difficult to hold a real world fixed condition with any level of accuracy, especially if you're trying to do it with minimum cycle time. You end up with feedback loops trying to control the fixed condition and then measuring the result, which is usually out of phase with the input condition. It's a fight to keep things stable, and gauge R&R suffers. It's far better to have a high speed measurement system for x and sweep the y condition from just below to just above the area of interest. Curve-fit all the data to a multi-order polynomial, plug in y, and solve for x. This way you get excellent gauge R&R with minimum cycle time. An example of this idea is to test a small motor going from full speed and no load to stall in a few seconds. Collect all of the data at very high speed, curve-fit the result, and you get the full parametric torque and power curve of the motor in just a few seconds. It's a very fast and very accurate test technique. It is the identical test for a solar panel: going from open circuit voltage to short circuit current, you get all of the parametric data for the device. In this case, we did sampling at 200,000 samples per second per channel, using a flash lamp that could only simulate the intensity of the sun for five thousandths of a second. We normalized each of the 1,000 data points, which were 18-bit, to one sun, curve-fit the results, and for a while we had the world's most accurate solar test system. Now you can do it as a static test with an LED light, so times have changed for the better. But in both cases, you have to carefully understand the bandwidth of the measurement in order to pull this off successfully. Another example of a poor customer spec is where the customer provides product validation requirements for a production test. These are two totally different requirements. In product validation, you confirm all of the design criteria are met. That is, it works from -30 to 70 C, works upside down and right side up, lights turn on and off after x seconds, you name it. But it's not an appropriate specification for end-of-line test, though sadly, it's common in manufacturing RFQs. In a production test, you want to confirm the product was assembled correctly and is working as expected; that is, it has all the attributes of a known good part that did pass product validation. This is much faster and just looks for the assembly issues. Another example of a poor customer spec would be one that's very prescriptive on the test method to be used. I can think of many examples, you know, testing a water utility meter by pumping water from one 50 gallon tank to another 50 gallon tank, back and forth at various flow rates for minutes at a time. Plus, they specified a very crude turbine flow meter. It works, but the customer was looking at ten test systems and all of the associated conveyor to meet the production rate. In my opinion, it's much better to have only one high speed test system using an expensive Coriolis mass flow meter with four decades of resolution and one second gauge capable response time, resulting in substantially less conveyor, less floor space, and much less total cost.
Engine cam phasers require precise pressure control at multiple set points. The customer had a test solution they liked that took 85 seconds of cycle time, but the production rate was 14 seconds per cam phaser. Thus, multiple test systems were required.
In looking at it, the bottleneck was the crude method of setting pressure by varying the hydraulic pump speed. Their ask was to build-to-print eight systems. I proposed building one system with an independent precision pressure circuit for each of the multiple test points. We sold one system at one third the total cost, and it was an excellent success; it replaced all eight of the systems requested. We then proposed testing with air and eliminating the oil as a working fluid altogether, because in this case the customer spent half a million dollars to remove the oil after the hydraulic test, prior to shipping. We received a small contract to demonstrate a proof of principle using air as the test fluid prior to building the production solution. That, too, was a complete success, because air is ten times less viscous than oil, so the test was basically ten times more sensitive to finding manufacturing defects in the machining of the cam phaser.
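Here is a minimal sketch of the sweep-and-fit technique Drew described for the motor test: rather than fighting a control loop to hold the load fixed, sweep it through the region of interest, curve-fit, and solve for the value at the nominal condition. The load-power relationship and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
load_nm = np.linspace(0.0, 2.0, 500)                 # swept load, N-m
power_w = 40.0 + 18.0 * load_nm - 3.0 * load_nm**2   # hypothetical true curve
power_w += rng.normal(0.0, 0.5, load_nm.size)        # measurement noise

coeffs = np.polyfit(load_nm, power_w, deg=3)         # multi-order polynomial fit
nominal_load_nm = 1.25
print(np.polyval(coeffs, nominal_load_nm))           # power at the "fixed" condition
```

Because the fit uses all 500 points, the answer is far more repeatable than any single reading taken while a feedback loop hunts around the set point, which is exactly the gauge R&R argument Drew makes.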
[00:15:02] Speaker A: Are there any other examples that you care to give?
[00:15:06] Speaker B: Well, one other one comes to mind. Turbochargers were usually tested in a hot gas environment. Each test chamber was over $2 million in capital investment, and throughput was typically ten to 15 units per hour. Working with the customer, we developed an air test that could detect the same failure characteristics at 45 pieces an hour with improved sensitivity. Plus, it was 20% of the cost and a lot less floor space. We saved fuel. We used less compressed air and less labor. It was so successful, it was implemented by the customer at all of their plants around the world. So, in specifying test equipment, often test companies can concept new solutions substantially different from the requested solutions to provide better return on investment, simplify production, and improve quality.
[00:15:55] Speaker A: So why would you recommend a test company and not an automation company or machine builder to do the test part of an automated system?
[00:16:03] Speaker B: Well, I mean, would you go to a foot doctor for heart surgery? Test is a specialty field, just like custom machine building. There are two different skill sets with very little overlap. One is no smarter than the other. They just have different expertise and experience.
You know, for example, I could sit and list off many different technologies to measure pressure or flow or leak, and they all have different pros and cons. The real skill is in understanding which device or method is best suited to the application. It all boils down to experience. If you choose the wrong device or method, then the entire test system could be flawed. It's not a machine builder's area of expertise. The previous examples I spoke to discuss the benefits of that test experience. And I want to make a key point: the part should never wait for the test system. The test system should always be waiting for the part. On cost alone, a single test station is far preferable to multiple stations, and multiple stations often don't agree with each other, which is always a problem. How to integrate these devices into a system is also key to their success or failure. The same goes for low noise instrumentation, wiring, grounding techniques, sampling rate, low pass or band pass data filtering, curve fitting, and other techniques for system accuracy. Mechanical fixturing and touch tooling can also impact the gauge R&R of a test system. The physical integration of sensors can be very specific; that is, applying even the slightest side loading to a torque transducer or motor shaft will have a large impact on torque measurement repeatability, hence poor gauge R&R. There are so many small details that affect the end system's gauge capability. You need to work with experienced instrumentation engineers. And likewise, you know, don't buy assembly automation from a test company. That, too, is a bad idea. Only a few very large integrators have divisions dedicated to both skill sets.
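On the filtering point, here is a minimal band-pass sketch, assuming a hypothetical vibration channel contaminated with mains pickup and broadband noise; the band edges, sample rate, and signal content are all invented for illustration.

```python
import numpy as np
from scipy import signal

fs = 50_000                                   # 50 kS/s acquisition (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 1_200 * t)          # 1.2 kHz bearing tone of interest
raw = tone + 0.5 * np.sin(2 * np.pi * 60 * t)           # 60 Hz mains pickup
raw += np.random.default_rng(3).normal(0, 0.3, t.size)  # broadband noise

# 4th-order Butterworth band-pass around the tone; zero-phase filtering
sos = signal.butter(4, [800, 1_600], btype="bandpass", fs=fs, output="sos")
clean = signal.sosfiltfilt(sos, raw)

print(np.std(raw), np.std(clean))  # mains and most noise removed from the band
```

Zero-phase filtering (sosfiltfilt) matters here: a filter that shifts the signal in time would distort the very signature features a production test compares against limits.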
[00:18:05] Speaker A: Now, this has been a great discussion. What are some of the trends, Drew, that you've noticed in automated testing over the last few years?
[00:18:13] Speaker B: Well, it's big data. You know, the engineers graduating today just can't get enough big data. They just can't get enough data, period. They want to drill down into every minute detail and look for trends and outliers. They want to be able to spot the proverbial needle in a haystack. Overall pass/fail is no longer the only goal of production test equipment. Ideally, the customers' engineering labs and their production floors are using the exact same test solutions so that comparisons can be made. The same goes for multinational companies with plants building the same product around the world.
Companies like Ford's engine group, for example, developed systems where all the raw test data is stored for each and every part made. That's the data prior to making the pass/fail decision. If they find a problem in production, and afterwards understand how to sort for it, they can go back and post-process the past production data to predict the size of the quality issue, and minimize the cost of a general recall by only targeting the specific VINs that they've confirmed have the same issue. The payback is enormous, and this drives lower overall cost and increased customer satisfaction.
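A minimal sketch of that recall-scoping idea, with invented data: raw traces are archived per VIN at test time, and a new screening rule, written only after the field issue is understood, is run back over the archive to flag just the affected units.

```python
import numpy as np

# Hypothetical archive: raw torque traces stored per VIN before the
# original pass/fail decision was applied.
rng = np.random.default_rng(1)
archive = {f"VIN{i:04d}": 50 + rng.normal(0, 1, 720) for i in range(1000)}
archive["VIN0042"] += 10.0 * np.exp(-np.arange(720) / 40.0)  # seeded defect

# New screening rule, defined after the field failure was understood:
# flag any unit whose peak deviation from its own mean exceeds 6 units.
suspects = [vin for vin, trace in archive.items()
            if np.max(np.abs(trace - trace.mean())) > 6.0]
print(suspects)  # flags only the seeded VIN, not the whole build
```

The economics follow directly: post-processing archived data costs compute time, while a general recall costs a dollar figure per vehicle across the entire build.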
[00:19:30] Speaker A: So, Drew, what are some of the best practices for test and testing?
[00:19:34] Speaker B: Well, do your homework. Develop a well defined requirements document, and do a paper study on the combined sum of the various measurement errors before you build anything. Prototype a measurement if the proposed method is untried, and also to accurately estimate test cycle time, which is key to determining the number of test systems you really need. Understand the range and bandwidth of the signal you're interested in, and apply appropriate sampling and filtering to get the very best clean signal to analyze. Analyzing a signal full of noise is a losing proposition. Looking at a signal that's so slow or smooth that it eliminates all the useful information, and any discrimination between production parts, is also of little value. It's a fine line, a fine balance, but if done well, you get a great test system and a great gauge R&R.
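The paper study Drew recommends is often a root-sum-square error budget: combine the independent error sources before building anything and compare the result against the tolerance. A minimal sketch, with purely hypothetical sources and values:

```python
import numpy as np

# Independent 1-sigma error sources, as % of full scale (hypothetical)
error_sources_pct = {
    "sensor accuracy": 0.05,
    "DAQ card": 0.02,
    "fixture repeatability": 0.04,
    "thermal drift": 0.03,
}

# Root-sum-square combination, valid for independent error sources
combined = np.sqrt(sum(e**2 for e in error_sources_pct.values()))
print(round(combined, 3))  # ~0.073%: check this against the tolerance budget
```

If the combined figure already consumes too much of the pass/fail tolerance on paper, no amount of software will rescue the gauge R&R, which is the whole point of doing the study first.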
[00:20:24] Speaker A: So I wanted to ask you, too, about the cost of test equipment. Like, expensive test equipment, is it worth it?
[00:20:32] Speaker B: Well, you know, that was a common challenge from all of the customers. And my response was that there's a reason why a RadioShack voltmeter costs less than a Hewlett-Packard or Keysight instrument: precision, speed, drift, and absolute calibration. That's what you're paying for. Same goes for a PLC input card versus a National Instruments data acquisition card. You need to be able to believe the last few digits of the measurement are accurate. A result that doesn't repeat is of little value. This is where the gauge R&R proves what you've purchased was worth the investment. It is the acid test of measurement success, and there's no hiding behind a substandard solution.
[00:21:14] Speaker A: Drew, maybe you could take a minute and we could talk a little bit about signature analysis that's used in testing. What is it, and how does it work?
[00:21:22] Speaker B: Well, you know, I'm a big believer in signature analysis. It's a great solution for testing for the empirical characteristics of devices or circuits where an engineering spec might not exist. It's very common in the electronics industry, but it's not as common in the assembly industry. I think it should be. Almost every physical interaction can be plotted as two variables: torque versus speed, voltage versus current, force versus displacement, flow versus angle. Yeah, you get the idea. These graphs are called signatures, like a fingerprint for the part. When production testing parts with signature analysis, you're looking for parts that fall outside a window. That is, good parts have signatures that fall on top of each other over and over and over again. A bad part will have a different signature. You don't necessarily know why it's different, and you don't really care; it's the difference that is important. So you fail it because it's an outlier. It's a very powerful and inexpensive test technique that confirms product quality without necessarily being tied back to the product specifications. With empirical experience, you can fine tune the limits and add diagnostic information to provide detailed failure causes. The Ford example I mentioned earlier was a signature analysis of torque versus angle for an engine cold test. Just imagine a torque signature versus crankshaft angle for each and every piston stroke and valve operation. If there's something wrong with just one part in the engine assembly, it will show up as a different signature. It's a very powerful sorting technique.
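A minimal sketch of the windowing idea Drew describes, using invented data: build a min/max envelope from known-good signatures, then fail any part whose signature leaves the envelope, without needing to know why it differs.

```python
import numpy as np

# 50 known-good torque-vs-angle signatures (synthetic, two crank revolutions)
rng = np.random.default_rng(2)
angle = np.linspace(0, 720, 1440)                     # crank angle, degrees
good = 10 * np.sin(np.radians(angle)) + rng.normal(0, 0.2, (50, 1440))

margin = 1.0                                          # empirically tuned band
lo = good.min(axis=0) - margin
hi = good.max(axis=0) + margin

def check(signature):
    return "PASS" if np.all((signature >= lo) & (signature <= hi)) else "FAIL"

nominal = 10 * np.sin(np.radians(angle))
bad = nominal.copy()
bad[800:860] += 4.0                                   # one localized anomaly

print(check(nominal))  # PASS: inside the known-good envelope
print(check(bad))      # FAIL: outlier signature, cause unknown and unneeded
```

With production experience, the single margin would typically give way to per-region limits, and the location of the violation becomes the diagnostic information Drew mentions.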
[00:22:59] Speaker A: Drew, what are the big mistakes that automation integrators, even very big and capable ones, make when it comes to test?
[00:23:06] Speaker B: Well, you know, in the past we'd see a lot of PLCs being used as the basis of the measurement system, generating a green or red box for pass or fail. And I'm not talking about simple tests, but more complex ones. PLC hardware is not optimized for instrumentation applications, and thus the solutions were generally substandard. You need very advanced math tools to post-process the data to get gauge R&R, and PLCs really don't do advanced math very well, though that's less of an issue today with PLCs. It is very important to understand why the product is failing on the production floor so the corrective action can be completed immediately. A picture is worth a thousand words if you know what you're looking at. Having graphical diagnostic tools in the tester, like observing input waveforms and raw data, is invaluable for troubleshooting product problems or tester problems. A good test executive should report pass/fail with the actual values, provide the tolerance limits being used, provide graphical information about the test metrics, give production totals, first time pass and second time pass after rework, provide tools to look at the raw data, perform diagnostics and confirm calibration using reference standards, and collect log files and store all the results locally while also sending them to a server. I could go on, but you get the point that the test software is far more than just pass and fail.
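As a small illustration of the difference between a red/green box and a real test record, here is a sketch of the kind of per-step result a test executive might store. The field names are hypothetical; the point is that the actual value, the limits applied, and a pointer to the raw waveform all survive alongside the verdict.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    name: str
    value: float           # the actual measured value, not just the verdict
    low_limit: float
    high_limit: float
    raw_data_path: str     # raw waveform kept for later diagnostics

    @property
    def passed(self) -> bool:
        return self.low_limit <= self.value <= self.high_limit

r = StepResult("running_current_a", 2.13, 1.8, 2.4, "/data/sn1234/step07.npy")
print(r.passed)  # True, but the full context is preserved either way
```

Records like this are what make the Ford-style post-processing possible: a bare pass/fail can never be re-screened against a rule you haven't written yet.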
[00:24:32] Speaker A: Drew, I get the idea that you have some favorite test equipment companies, hardware and software.
[00:24:37] Speaker B: Well, yes I do, but I won't promote any specific test company; I'll promote the building blocks used in test. If the application is demanding or the cost of failure is high, go for best-of-breed instrumentation and sensors. In every category there are low cost alternatives and higher cost precision devices. Those precision devices will give you the best opportunity for success and minimum test cycle time. They give you options when you run into unexpected difficulties. There are too many to mention, as there are so many categories, but companies like Keysight, Keithley, National Instruments, Rosemount, Micro Motion, PCB, Omega, Dataforth, etcetera, they're all leaders in their specific field of measurement. And the test executive software is also key to the success of the end result. It should provide not only the product pass/fail results, but clear production failure information and significant diagnostic tools to troubleshoot the product and equipment. It's the key to successful Six Sigma quality, or not.
[00:25:40] Speaker A: Drew, what were some of your most memorable test projects in your career?
[00:25:43] Speaker B: Well, you know, originally I started out with driverless trains for seven years, but then switched my career to automotive testing. I've literally tested each and every subcomponent of a car, absolutely everything, you name it: mechanical, electrical, control modules. They're all built on high speed repetitive manufacturing lines, so they all need to be tested. In addition, turbo testing was an advanced specialty. Medical devices, to name a few: heart catheters, very smart diagnostic devices, and disposables like diabetes measurement devices, drug delivery pumps, and automated surgery tools like eye cornea cutting and colon cut-and-staple; they're all disposable products. I did jet engine fuel injectors, laser weld resistance testing, numerous switches and relays, ground fault and arc fault outlet and breaker testing, and all kinds of leak testing with different techniques, such as pressure decay, mass flow, inside-out and outside-in bell jar, constant mass flow, and helium and hydrogen trace gas.
Virtually all devices are leak tested for one reason or another. Noise, vibration, and harshness, or NVH, acoustic testing, solar cells and modules, hydrogen fuel cells, EV cells, submodules, modules, packs, plus the drive motors. The point I'm trying to make is that these all seem radically different, but really, they're all very similar. You select instruments to measure the physical properties of interest, you bring those signals into a very capable data acquisition system, and then you analyze the signals using advanced math libraries. They're all the same building blocks, no matter what the product is.
[00:27:25] Speaker A: So, what were some of your most challenging test projects in your career?
[00:27:29] Speaker B: Well, I started out testing automotive cockpits in satellite factories. We did about 35 different vehicles over the years.
Here we'd be testing the cluster, the center stack, the HVAC, the radio, power distribution, etcetera. We'd be making typically 250 to 350 electrical connections between the tester and the instrument panel, validating all the circuits, features, and option content, all in about 60 seconds of test cycle time. The case of the Dodge Viper was interesting: it included the steering column, the driver pedals, even the windshield, as one subassembly for the vehicle. Things are much simpler these days with the various vehicle data buses used.
We did a medical auto-injector that won an industry award. It used gas pressure to inject drugs through the skin, just like something in Star Trek.
The requirement was to test the integrity of the sapphire glass vial that held the drugs. The hydrostatic test system went from zero to 5,000 psi at a controlled slew rate of over 2 million psi per second, in a very specific pressure pattern. Plus, there was the need for quick change-out tooling for when the sapphire glass broke.
Turbo testing was very challenging and interesting, but I really can't discuss that topic because of NDAs with the customer. We tested lithium ion cells at the rate of 256 cells every 3.75 seconds; that worked out to 15 milliseconds of test time per cell. We were sorting cells based on AC resistance and open circuit voltage, with repeatable resolution to three microvolts. That's three millionths of a volt. So that got us into testing all aspects of EV battery and propulsion testing solutions, up to 400 kW charge and discharge.
[00:29:29] Speaker A: Wow, that's pretty impressive, Drew. So, testing is a huge challenge, for both the manufacturer, I think, and also the automator. What would you recommend to some of our listeners who might be thinking, how can I specialize in this industry, or maybe how can my kids specialize in the industry, in test and test instrumentation? How do they learn more about this?
[00:29:49] Speaker B: Well, it's a niche area, clearly. I mean, I started my career as a test engineer at a light rail transit test track. We'd instrument every aspect of the vehicle, looking to understand the physics and operation of each of the subsystems. It's very similar to aircraft development: you measure everything, change something, and measure again. The variety of the work was exceptional. You know, I did hydraulics, propulsion, HVAC, ride dynamics, driverless train control. I had to learn how to accurately measure pressure, distance, torque, force, velocity, acceleration, strain, voltage, current, frequency, temperature, flow, leak, time, noise, vibration, radio frequency, you name it. It was a lot of fun. You learn how to relate engineering to physics. Mechatronics, which is a newer engineering discipline, is an essential skill, because everything today is a combination of mechanical, electrical, and software design. Unless you have a solid grasp of all three, you really can't see the whole solution.
[00:30:50] Speaker A: So if I want to start a test instrumentation company, how do I do that?
[00:30:54] Speaker B: Well, start with test experience, you know, either very specific or very broad; it really doesn't matter. The old 10,000 hours rule gives you enough experience to begin to learn your craft. Now, make sure you love this field of work. It's going to be stressful, and it's going to be long hours, because you need to sweat all the small details. In most cases, you're going to develop test systems that have never been made before. The product you're going to test has likely never been made before either. So even the smallest of issues can ruin a well-concepted test system. By the end, you'll likely know the product better than your customer. Surround yourself with people who are smarter than you and are not afraid to tell you you're wrong. I can tell you that saved me many, many times. At the end of the day, you're selling confidence that you'll be successful. And, you know, don't forget to have fun. It's a long career if you're not having fun.
[00:31:53] Speaker A: Machine vision is often confused with end of line testing. And so what are your thoughts on machine vision?
[00:31:59] Speaker B: Well, machine vision's come a long way in the last decade, and it too is a specialty field, different from test. Lighting, optics, vibration, measurement algorithms: all require attention to detail. I've come across vision integrated into test, but it's often best suited to be a separate station, or in some cases it may not even be the best tool for the job.
[00:32:21] Speaker A: Drew, thanks for coming on to the podcast today. This has really been very educational and super interesting. What do you like doing when you're not thinking about testing?
[00:32:29] Speaker B: Well, I'm very interested in music and acoustics, and I've been involved in building audio equipment since I was about age 14. I also like to fly sailplanes, commonly called gliders. It's like sailing in the sky. The sport is to stay aloft for as long as you can by successfully reading the clouds and the tea leaves; it's you against the weather conditions. I have a power license, but gliders are far more fun than a small plane, and they also cost a lot less money. I've flown gliders in New Zealand, in the mountains and in wave. I've gone up to 24,500 feet with no engine, and come down only because it was so darn cold; there's no heat or fuel in a glider.
[00:33:15] Speaker A: We haven't solved that problem yet. Hey, so if somebody's interested and maybe want to get ahold of you, how can they do that?
[00:33:22] Speaker B: Well, I'm retired and bored, so I like looking at interesting problems, anywhere from three hours to three days to three weeks in duration. The more difficult, the better. And I like to travel, so that's no problem. You can reach me on LinkedIn or through Jim Beretta at The Robot Industry Podcast.
[00:33:42] Speaker A: Drew, thanks again. This has been wonderful, to catch up and to hear some of your insights from what has been a spectacular career.
[00:33:47] Speaker B: Thank you. It's been a lot of fun.
[00:33:49] Speaker A: Our sponsor for this episode is Ehrhardt Automation Systems. Ehrhardt builds and commissions turnkey automation solutions for their worldwide clients. With over 80 years of precision manufacturing, they understand the complex world of robotics, automated manufacturing, and project management, delivering world class custom automation on time and on budget. Contact one of their sales engineers to see what Ehrhardt can build for you; their email address is [email protected], and Ehrhardt is spelled E-h-r-h-a-r-d-t. And if you'd like to get in touch with us at The Robot Industry Podcast, our email address is
[email protected]. And you can find me, Jim Beretta, on LinkedIn, just like Drew Wilson. We'll see you next time. Thanks for listening. Be safe out there.