Automation of Model Generation

This post is just a musing… I've been thinking a lot about the weakest link in robotic control via neural networks: the model. Sure, you can have the robot itself perform the episodes (that's what real people do). That's problematic, though, because reinforcement learning (especially tabula rasa) takes many, many episodes. That's slow, and probably damaging to the poor robot. So we speed things up in simulation. That means we need a model. Making a model accurate enough to cross the reality gap is hard.

What if it were easy?

Here is how I think we can start from absolutely no knowledge about the robot, and arrive at a fully functioning model.

Start with a point cloud 

Using a stereo camera pointed at the robot, get a point cloud. This part is pretty easy. Nothing new here.
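If you already have a disparity map from the stereo pair, back-projecting it into a point cloud is just pinhole geometry. A minimal sketch (the focal length, baseline, and principal point are placeholders you'd get from calibration):

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Back-project a disparity map into an (N, 3) point cloud using
    standard pinhole stereo geometry: Z = f * B / d."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # ignore pixels with no match
    z = f * baseline / disparity[valid]
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)         # points in the camera frame
```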

Turn the point cloud into a 3D model

This is less common, but there are plenty of existing implementations of algorithms that do this. Here is an example.
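As a rough, non-authoritative sketch, here is how it might look with the Open3D library. Poisson surface reconstruction is just one option (ball pivoting or alpha shapes would also work), and the depth parameter is a guess:

```python
import numpy as np
import open3d as o3d

def cloud_to_mesh(points):
    """points: (N, 3) numpy array from the stereo scan.
    Returns a triangle mesh via Poisson surface reconstruction."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.estimate_normals()                          # Poisson needs normals
    pcd.orient_normals_towards_camera_location()    # camera is at the origin
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    return mesh
```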

Notice at this point, you have the shape of your robot as a 3D model. That's great. However, we need to know a lot more.

Actuate a single motor and watch what happens

Here the system tells a motor to actuate some known amount, and we take another point cloud reading.

At this point the model should look different. Maybe an arm started to bend. This is pretty important, because now the system knows there is a joint at the location where the two models diverge. We also know that this actuator moves the robot at that joint, and by how much.
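One hedged way to turn that divergence into a joint estimate: segment the points that moved, register the moved link back to its pre-motion pose (ICP or similar) to get a rigid transform, then read the joint off that transform. The distance threshold and the assumption of a purely revolute joint are both placeholders:

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_moved_points(before, after, threshold=0.005):
    """Points in the 'after' scan with no close neighbour in the 'before'
    scan are the ones the actuation displaced. Threshold is in metres and
    is a made-up value."""
    dists, _ = cKDTree(before).query(after)
    return after[dists > threshold]

def joint_from_rigid_transform(R, t):
    """Given the rigid transform (R, t) that carries the displaced link from
    its 'before' pose to its 'after' pose, extract a revolute-joint estimate:
    rotation angle, axis direction, and a point on the axis."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # The rotation axis is the eigenvector of R with eigenvalue 1.
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    axis /= np.linalg.norm(axis)
    # For a pure rotation about a point q on the axis, t = (I - R) q.
    q, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return angle, axis, q
```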

Try some prediction

Reset the real robot. Tell the same actuator to do the same thing, both with the real robot and the model. Compare how similar the results are.
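A crude score for "how close was the prediction" is a symmetric nearest-neighbour (Chamfer-style) distance between the cloud predicted by the model and the freshly scanned cloud; something like:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(observed, predicted):
    """Mean nearest-neighbour distance, averaged in both directions, between
    two (N, 3) point clouds. Lower means the model's prediction matched the
    real robot better."""
    d_op, _ = cKDTree(predicted).query(observed)
    d_po, _ = cKDTree(observed).query(predicted)
    return 0.5 * (d_op.mean() + d_po.mean())
```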

Keep on keeping on

After running through all the actuators, taking as many measurements as you like, you have a model with joints in the right places. You also have a mapping between input (signals to the actuators) and expected output (the robot's pose).
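That mapping could be as simple as a table of per-actuator joint parameters plus a Rodrigues rotation of the affected link's vertices. This flat sketch ignores kinematic chains (bending a shoulder should also carry the forearm along), so treat it as illustrative only; the joint-table format is made up:

```python
import numpy as np

def rotate_about_axis(points, axis, point_on_axis, angle):
    """Rodrigues rotation of (N, 3) points about an arbitrary axis in space."""
    k = axis / np.linalg.norm(axis)
    p = points - point_on_axis
    p_rot = (p * np.cos(angle)
             + np.cross(k, p) * np.sin(angle)
             + k * (p @ k)[:, None] * (1 - np.cos(angle)))
    return p_rot + point_on_axis

def predict_pose(vertices, joints, commands):
    """joints: {actuator_id: (axis, point_on_axis, radians_per_command_unit,
    vertex_mask)} learned from the probing step. commands: {actuator_id:
    command_value}. Returns the predicted vertex positions."""
    out = vertices.copy()
    for act_id, cmd in commands.items():
        axis, point, rad_per_unit, mask = joints[act_id]
        out[mask] = rotate_about_axis(out[mask], axis, point, cmd * rad_per_unit)
    return out
```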

What you won't have

You won't have mass, but that's pretty easy to estimate. You won't have inertia. You won't have friction.
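For mass, one hedged shortcut is mesh volume times an assumed bulk density; the uniform-material assumption and the density value here are placeholders:

```python
import numpy as np

def estimate_mass(vertices, triangles, density=1200.0):
    """Signed-tetrahedron volume of a closed, consistently wound mesh,
    multiplied by an assumed bulk density (1200 kg/m^3 is a rough guess
    for ABS-like plastic)."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    volume = np.abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum() / 6.0)
    return density * volume
```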