On-arm 3D camera webinar with Alex
In this webinar, Alex talks about how attaching the 3D camera directly onto the robot arm unlocks a more flexible and effective solution for automation systems.
Good morning, good afternoon, and good evening, depending on where you are in the world.
Welcome to today's presentation on machine vision and how you can unleash the potential of on-arm 3D vision in robotics. My name is Alex, and I'm a technical sales manager at Zivid, as you might have seen from my personal bio. I have 12 years of experience in embedded systems, on both the engineering design side and in sales and marketing, and I'm really excited to be here presenting this topic today.
We have some new and very exciting advancements in 3D machine vision that we'll be discussing. So let's jump in and get started.
So within the world of automation, let's talk about logistics for a second. How many of you have ordered something online in the last few days? At my house, I can say that we have items showing up at our doorstep every day, in some cases multiple items a day. E-commerce is a major trend that started 15 or 20 years ago but has really accelerated over the past year with the pandemic, as we spent most of our time at home doing things like online shopping. To give a sense of scale, Amazon now ships more than 300 million packages per month. And they're not alone: e-commerce all around the world is exploding, with no signs of stopping. But what is now commonplace for us in this online shopping craze is quite complicated to support operationally behind the scenes, and we as humans can't possibly sustain or manage that growth on our own.
For one, we're not great at doing fast, repetitive, ergonomically demanding work like that for hours every day. Second, in many places in the world, it's very difficult to find workers to fill these jobs. And lastly, it's very costly, which increases the price of goods for us, the consumers.
So this is where robot picking cells have come to the rescue. Robots are perfect for these tasks. For example, in e-commerce order fulfillment, they can move objects from a bin to a box, or from one place to another, and do this very quickly and at low cost. A single robot can move goods at over 600 picks per hour, or even faster with shorter cycle times in some cases. And recently, we've started making robots even more intelligent, with better 3D vision as shown in this picture, and smarter, more human-like intelligence, things like motion planning and problem solving as they're fulfilling these tasks. But unfortunately, there's one sensory limitation that we've kind of cornered ourselves into, and that's stationary, fixed sensors. We typically mount these sensors in a fixed location.
So let me ask you a few quick questions.
Have you been in any of these situations? One where your picking deployment was really close to passing all your field tests, but didn't quite make it? Or maybe it failed part of the acceptance test, or the project scope changed after deployment? Or what about where you addressed, or kind of patched, some of those challenges with multiple stationary cameras serving a single robot? In all of those cases, the mounting architecture plays a direct role. The issue with stationary mounting is that it can limit the piece-picking reliability, physically limit the automation level in the factory, drive up the number and cost of sensors with multiple cameras, and increase the maintenance cost. At Zivid, this is exactly what we want to change. We want to reinvent your toolbox with on-arm 3D cameras. Now we have a solution that improves the sensing capability and gives better flexibility with a single 3D camera. We can still deploy stationary sensors when that makes the most sense, but the on-arm, or robot-mounted, capability can really open up new opportunities with versatility we've never seen to this point. The benefits of on-arm vision are simple.
You can see better and see more, and it brings robotic systems more flexibility and lower costs. Rather than just talk about it, though, let me show you. First, on-arm mounting brings an always-optimal distance: on-arm 3D cameras enable the best image quality all the time. In this simple scene, for depalletization in a logistics center or warehouse, a stationary sensor would sacrifice image quality to maximize the field of view. This can result in missed picks, damaged boxes from crashes, things like that.
On the other hand, no pun intended, if you mount an on-arm 3D camera, it will provide consistent spatial resolution and accuracy for the entire working volume. Problem solved. What about e-commerce fulfillment of consumer goods? Let's take a look at that. Detecting products inside a bin is one of the main tasks, so that's what we set up here with a stationary mounted sensor. As you can see, we're seeing surface reflections; there's noise shown at the bottom of the bin here. And are we even confident that we're seeing everything? Probably not. In fact, we missed an entire item. With an on-arm 3D camera, on the other hand, as shown here, we're now getting closer. As the robot arm better positions the camera to get a better view, we now have better resolution, lower noise, better accuracy, fewer ambient light disturbances, and we can mitigate artifacts like reflections.
So now we're getting closer. We can empty the bin with confidence, even if things are up against the wall or in corners. This was not achievable before with a single stationary sensor, and it's where many of the field tests failed. If we use the on-arm mounted approach, however, look how the robot can change position to solve this problem. We can even combine multiple point clouds from different viewing positions, and there's a whole new capability to avoid occlusions or reflections. This is now the highest possible resolution, the highest possible accuracy, and the best possible performance for any bin-picking application in the world. By giving the camera a new degree of freedom, with flexible and human-like points of view, we reach the entitlement performance, and we solve the corner cases that come up in the real world.

That showed how we can see better, but we can also see more. In this simple scenario, you have two scenes, or maybe two workstations, to view. If you want to see them both accurately, the stationary mounted approach would actually need two separate cameras here, because otherwise you're going to lose pixel density and image quality. Mounting on the robot arm, though, solves this problem. You can simply move to see more, just as we humans do in the real world. An on-arm camera can cover multiple workstations, larger bins, larger racks, and more of the workspace in general.

And what if, instead of a simple scenario like two workspaces, we bump that up to, say, 100 as things scale to larger plants? Now this adds up to even more sensors to cover all those workspaces. So in this case, there's a huge benefit in having the camera and robot self-contained. For one, that makes the most use of your factory space, and installation and maintenance are simple: it's one camera per cell to install, calibrate, maintain, and repair, and you can even deploy mobile manipulators.
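To make the multi-view idea concrete, here is a minimal sketch in plain Python (the poses, point values, and helper names are hypothetical illustrations, not Zivid's API) of how point clouds captured from different arm positions can be transformed into a common robot-base frame and merged:

```python
# Hypothetical sketch: merge point clouds from two viewpoints by applying
# each capture's 4x4 homogeneous pose (rotation + translation) before merging.

def transform(pose, points):
    """Apply a 4x4 homogeneous transform to a list of (x, y, z) points."""
    out = []
    for x, y, z in points:
        v = (x, y, z, 1.0)
        out.append(tuple(sum(pose[r][c] * v[c] for c in range(4)) for r in range(3)))
    return out

# Identity pose for view 1; view 2 is translated 0.5 m along x (made-up values).
pose1 = [[1, 0, 0, 0.0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
pose2 = [[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

cloud1 = [(0.1, 0.2, 0.9)]    # a point seen from view 1 (camera frame)
cloud2 = [(-0.4, 0.2, 0.9)]   # the same spot seen from view 2

merged = transform(pose1, cloud1) + transform(pose2, cloud2)
print(merged)  # both points land at the same place in the shared base frame
```

In practice, each pose would come from the robot controller combined with a hand-eye calibration, and the merged cloud would feed the picking algorithm with fewer occlusions and reflections than any single view.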
So I think achieving the same kind of cost-effectiveness and flexibility is significantly more difficult with stationary sensors.
With all these benefits, on-arm, high accuracy sensors seem kind of too good to be true. And up until now, that's kind of been the case. We've all favored the stationary sensor approach simply because there wasn't a solution that was small enough, robust enough, accurate enough, or fast enough. So we kind of defaulted to the stationary approach.
But there is good news: that is no longer the case, and that's why we're here today. This is exactly what the new Zivid Two cameras were designed for. I think a picture is worth a thousand words, so let me show you. Here is a 3D point cloud captured with Zivid Two with a 100-millisecond capture time. With this kind of accuracy, speed, and quality, the six-second cycle time implied by the 600 picks per hour we mentioned earlier is achievable. So this is pretty cool, because we're now starting to break that barrier we previously had with the on-arm approach.
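As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, not a figure from the talk), a 100 ms capture is only a small slice of the cycle-time budget at 600 picks per hour:

```python
# Back-of-the-envelope: how much of a pick cycle does one 3D capture consume?
# 600 picks/hour and a 100 ms capture are the figures mentioned in the talk.
picks_per_hour = 600
cycle_time_s = 3600 / picks_per_hour   # 6.0 seconds available per pick
capture_time_s = 0.100                 # one 100 ms 3D capture
share = capture_time_s / cycle_time_s
print(f"{cycle_time_s:.1f} s cycle, capture is {share:.1%} of it")
```

That leaves almost the entire cycle for detection, path planning, and robot motion, which is what makes the on-arm approach viable at these pick rates.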
What about size, though? That's been another barrier we've talked about or seen in the past. Up until now, high-accuracy sensors have been big and bulky. In our class of industrial 3D vision, however, Zivid Two has the smallest form factor out there: at a mass of 950 grams, it can pretty much fit in your hands. It was designed this way to maintain robotic maneuverability and reduce the risk of crashing into or hitting objects.
Another barrier we've seen, which is common in industrial applications regardless of the mounting type, is robustness. These are not consumer cameras. Zivid Two is designed for extremely demanding conditions. Let me show you a little bit of what we do at Zivid with our cameras, and I'll admit that we're not very kind to them. Each unit goes through shake-and-bake verification. They're tested across a temperature range from 0 degrees C to 50 degrees C. Each is calibrated from various angles with our calibration boards, which are also available to customers. They undergo extensive endurance testing with on-arm mounting setups. We drop them, we test them, we drop them more, and we repeat that. We do this exhaustively, making sure that each unit meets the datasheet specifications for a long life in an industrial environment.
So I would say, in short, robustness is not something we take lightly. It is baked into our mission and our product strategy. Zivid cameras, especially Zivid Two, are really ruggedized and proven for these industrial environments. The last barrier to discuss is mechanically integrating these cameras onto the robot arm. How can this be done simply and reliably? Well, we provide the pieces you need to do that. As shown here, there are a few mounts, extenders, and sensor cages that you need in your system to reliably mount the Zivid Two camera. I hope this helps you to reimagine the possibilities you have now: your 3D vision is only limited by the reach of the robot. It's no longer fixed and stuck for the life of its deployment. You can cover larger workspaces with a single camera, you can give your picking algorithms a better chance of succeeding and passing all tests, and you can maximize flexibility for changes that might come up or physical layouts that you want to achieve.
At Zivid, this is our mission: to give human-like vision to robots. We want your system to see more and see better, so that it can do more and do better.
Hopefully, you found this as interesting and exciting as we do. If you want to learn more, we just released an e-book detailing the benefits of on-arm, robot-mounted vision, available in the link here.
So thank you again for joining us today. At this time, I'm happy to take any questions you might have.