
Robots: Why tech advances mean the tipping point is coming

Germany's Magazino is using a 'build, test, destroy' model to push the boundaries of what robots are capable of.
Written by Colin Barker, Contributor

Munich-based Magazino is developing perception-controlled mobile robots that can tackle jobs like picking goods in warehouses. The German company says its robots can identify individual objects with their 2D and 3D cameras and securely grasp and collect them -- tasks that robots have often found difficult.

ZDNet recently spoke to the company's CEO and co-founder, Frederik Brantner, to see how their technology is developing.

ZDNet: Where did the idea for Magazino come from?

Brantner: From a friend of mine. She was working in a pharmacy [in Germany] where they use these little magazines which, like the magazine of a gun, dispense items one or more at a time. Pharmacies are the only places where they have these machines dispensing these little boxes, each of which has the number of pills needed per dose.

Magazino's TORU picking robot at a high shelf.
Photo: Magazino

And I thought: I need something like this for my room at home! I have this sort of chaos and it could be sorted automatically. And then when I went in, I could go to a machine and say, "I need my charging device," and it would automatically find it.

Soon I talked to a few friends and asked them if they could do something similar for the home, and they could -- but it would be too expensive. But if that was true, then maybe I could find other applications for it, and it turned out that e-commerce might be a good fit.

Once I started looking at e-commerce, I found out that when they used to deliver things like shoes and so on, they would ship a whole pallet of shoes to a shop and then take things out manually and dispense them to the shelves.

Well, people don't buy shoes by the pallet-load. At that time there was no existing technology to automatically dispense a single pair of shoes.

So you think, somehow this could be done more efficiently -- and that is what we are building robots for.

What sort of team is involved in building these robots?

Three of us got together to work out what we needed to do to make this happen. One is a computer scientist, another a chemical engineer, and we founded a company. We recruited, and now we are about 50 people. Of those 50, around 40 are engineers.

That was one of the first principles of the company: it is engineering-driven and we do everything ourselves. We start from scratch with an industrial designer and then we have a mechanical team for the mechanical design. We build it ourselves and we have a compartment team who put it all together.


Magazino CEO Brantner: "We build it, test it, destroy it. Build it, test it, destroy it over and over and this way we get it a lot better quicker."

Photo: Colin Barker

And then we've got a mechanical department who do all the cabling and the [motors], and we have the software team, which is the biggest with about 25 people.

Robotics is a technology that is changing all the time and very, very quickly. Robots for the last 20 years [have been] simple. If you use the example of cars, a robot will do the same welding spot and then do that over and over again: very precise, very fast, and very efficient.

But as soon as something changes -- for example, the car has been turned around -- the robot will not be able to put that welding spot in the right place.

The big difference with our robots is that they can drive around and look with a camera and a lot of sensors. Robots now are more sensitive -- we call it perception-based -- so the robots use all their sensors and make their own decisions.

When people say that robots will change the world, they are not talking about the robots we know today, they are talking about robots that use perception.

To take an example, ours work in parallel with humans, so if a robot brings you something and puts it down, it will then go and get something else, look around, and place it wherever it needs to go.

And then if something else is added, the robot will need to see this and act accordingly. The robot is more complex and can act in a more complex manner. Now our robots can work in a streamlined way, making decisions and acting on them as they go.

Is the robot using AI?

Yes, AI and computer vision. Say you have a book and it should be in a place where the robot expects to find it. The robot looks and can't find it so it then says, "I can't find the book here but I can see it is over there". It then goes and gets the book.

But then the book is not straight because it has been turned. Our robot will see that and then turn the book.

In robotics, we are teaching the computers behaviour. Before this we were saying, "Do this movement and this movement and then put it down".

But now we can teach it how to pick an item. We can teach it how to drive somewhere and if something is in its way, we can teach it how to drive around. If there is an item somewhere that it cannot find, it can check if there is another item that is the same and then go and get that.
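In rough code terms, that decision flow is simple to state. The sketch below is purely illustrative -- the data structures and function names are invented here, not Magazino's software -- but it captures the look-where-expected, then search, then correct-the-orientation behaviour Brantner describes:

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    """One object the cameras have recognised on the shelf."""
    item_id: str
    slot: Tuple[int, int]      # (shelf, position) where the item was seen
    rotation_deg: float        # how far the item is turned from the expected pose

def plan_pick(item_id: str,
              expected_slot: Tuple[int, int],
              detections: List[Detection]) -> Optional[dict]:
    """Decide how to pick `item_id`, mirroring the behaviour described above."""
    # 1. Look where the warehouse system expects the item to be.
    match = next((d for d in detections
                  if d.item_id == item_id and d.slot == expected_slot), None)
    # 2. Not there? Fall back to any other sighting of the same item.
    if match is None:
        match = next((d for d in detections if d.item_id == item_id), None)
    # 3. Still nothing: hand the pick back to a human or report it missing.
    if match is None:
        return None
    # 4. If the item has been turned, rotate the grasp to compensate.
    return {"slot": match.slot, "grasp_rotation_deg": -match.rotation_deg}

# Example: the book is not in its expected slot but is visible one slot over.
seen = [Detection("book-42", slot=(3, 7), rotation_deg=15.0)]
print(plan_pick("book-42", expected_slot=(3, 6), detections=seen))
# -> {'slot': (3, 7), 'grasp_rotation_deg': -15.0}

The hard part, of course, is not this logic but the perception that fills in the detections in the first place.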

The complexity of the robot is increasing tremendously, and to some extent it is using machine learning. For example, the little robots that are cleaning houses today are a lot smarter -- and I mean a lot smarter -- than any industrial robot out there today. They already look at the ceiling, plan their own moves, and aim to clean your apartment carefully and in the minimum number of moves.

You have real intelligence in a consumer product, while the industrial robot is still pretty dumb.


It's all in the eyes: The real breakthrough is combining the well-understood robotics seen in car factories and the like with sensor information obtained from many sources, including the computer's eyes (right).

Photo: Siemens

But that vacuum cleaner robot can also go under a chair and get stuck...

Robots aren't there yet. We can't pick a single screw and a bridge at the same time. What they are good at, at the moment, are rectangular objects: books, shoe boxes, and so on. All of those things we can grasp and support. So what we should do is use the robot to do all the things that it is good at and leave the human to do all the things that they are good at.

A good way to see the advantage that a robot can offer is to have the robot pick an item that is on the lowest shelf and then an item on the highest shelf, so that the human does not have to crouch down too much or try to stretch up to the highest places.

And this is important: that is why, here at Magazino, we have hardware and software integrated within one team. The traditional approach was that you designed some kind of vehicle and just combined it with an arm on the top.

Now we have very specific hardware and are combining that with sophisticated software, so you can do it all much better.

Does that also work when you might want to pick up something hard -- like a box -- and then something softer and more delicate?

Within the warehouse system, we can set a flag that says, 'this item can be picked by a robot'. With a soft, delicate item, the system will flag it as something that can't be picked by a robot, and so the item goes to a human.
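As a rough sketch of that flag-based split (the field and function names here are invented for illustration, not Magazino's actual schema), the routing amounts to a simple filter over the order lines:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class OrderLine:
    sku: str
    robot_pickable: bool   # flag maintained in the warehouse management system

def split_picks(lines: List[OrderLine]) -> Tuple[List[OrderLine], List[OrderLine]]:
    """Route each order line to the robot queue or the human queue by its flag."""
    robot_queue = [line for line in lines if line.robot_pickable]
    human_queue = [line for line in lines if not line.robot_pickable]
    return robot_queue, human_queue

# Example: a rigid shoe box goes to the robot, a loose silk scarf to a human.
order = [OrderLine("shoebox-001", True), OrderLine("silk-scarf-009", False)]
robot_queue, human_queue = split_picks(order)
print([l.sku for l in robot_queue], [l.sku for l in human_queue])
# -> ['shoebox-001'] ['silk-scarf-009']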

We are working on packing t-shirts and normal shirts in plastic film so that they can be picked by a robot.

We also came up with an idea where the robots have these plastic rollers and can roll things along and then pick them up with a plastic gripper. But when you try to do that, you run into technical barriers, and so we know we are coming up against the boundaries of what is possible.

It is the same with computer vision and navigation. We are expanding what is possible. Five years ago we could not take such big steps forward because we did not have the computing power, and the cameras weren't cheap enough -- there were no 3D sensors, and so on. Now we can.

How much can the robot carry in one go?

If you are looking at books, it can carry about 30 books. We have one customer who is using our system for books now. They are using it to pick books individually and assemble orders for Amazon or a book publisher.

If you look at Moravec's paradox, it explains that what is difficult for the human, such as complicated calculations, was actually done by computers 50 years ago. What is easy for the human -- things we learn in our early years, like understanding and empathy and, especially, hand/eye co-ordination -- is extremely complicated for a robot.

The point is that we are only getting there now. The tipping point, in my opinion, is now. Computer vision is good enough, and computing power is big enough, to look at items and say things like, 'this is a pen, and a pen I grasp this way'.

Do you have partners who you work with?

We started in January 2014 with three founders, and then we got our first venture capital and business angels in May 2014. So in May last year -- 2015 -- we were still at six to seven people, and then Siemens bought the shares of the existing shareholders and venture capitalists. Now Siemens holds 49.9 percent, so they are [providing] financing but, of course, they are also bringing technology. Siemens can buy things like sensors a lot cheaper than we can.

So you have the backing of a massive company but you can still operate like a startup?

Absolutely. Robotics is developing very quickly. We took a robot and put it together with a 3D printer, and we could do that very quickly. You know, you have the plan for a 3D model and you give that to the software department so they can get started, and then you have the robot there, so the feedback keeps going backwards and forwards.

If you look at a large company, they sit down and write a book of requirements, then they start developing, and then they think they are finished and they start to ship.

That's not our way. We build it, test it, destroy it. Build it, test it, destroy it over and over and this way we get it a lot better quicker.

How fast?

Within a year we have built five different robots and six different grippers. We have to do it quickly because in robotics there is so much under development at this moment.

People worry about robots and think they are taking over our lives -- but they are not. For example, when you think of a nurse changing a bed for a patient, from a robotics point of view, the complexities are enormous.

If you go around a car manufacturing plant, you will see a lot of robot arms doing the work, but [who is] providing the logistics? [We can imagine] a robot building a car, yes, but driving the car? That's a lot more complex.
