The first version of the LEGO sorter worked quite well, so I started working on improvements.

Image capturing

At first I wanted to improve the image quality. The images were blurred by the movement and the long exposure, so every color had some gray mixed into it. They looked really bad (well, 5 frames per second):

[Images: blurred yellow, green, red and blue bricks]

I checked the camera library API for a way to set the shutter speed. (I know there is no real shutter in these cameras; in reality it is the frame rate that matters.) But there is no way to set the frame rate directly – these small cheap webcams don't allow it, the functionality is embedded in the camera. It can be influenced indirectly, though, by improving the lighting: when the camera decides it has enough light, it increases the frame rate. So I threw away the lamp from the first version and used a 2-watt bicycle lamp. And it worked – the camera started to take images at 30 frames per second, which is its maximum. The result was significantly better:

[Images: blue, green, red, white and yellow bricks]
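Out of curiosity, the delivered frame rate can be checked from the PC side roughly like this. It is just a minimal sketch using Python and OpenCV (my original code may differ); as mentioned above, a direct request for 30 fps is usually ignored by these cheap webcams, so the sketch also measures what the camera actually delivers:

    import cv2
    import time

    cap = cv2.VideoCapture(0)          # first attached webcam

    # Asking for 30 fps directly usually has no effect on cheap cameras;
    # the firmware decides based on the available light.
    cap.set(cv2.CAP_PROP_FPS, 30)
    print("reported fps:", cap.get(cv2.CAP_PROP_FPS))

    # Measure the frame rate actually delivered over a few seconds.
    frames = 0
    start = time.time()
    while time.time() - start < 5.0:
        ok, frame = cap.read()
        if ok:
            frames += 1
    print("measured fps:", frames / (time.time() - start))
    cap.release()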

And the results of the neural network were significantly better, too. But there was still a problem distinguishing yellow from white, and looking at the images, I wasn't surprised. At first I thought I needed a better camera with a wider dynamic range, one that wouldn't give me overexposed images. But then I realized the brightness control is embedded in the camera, too – again with no way to control it directly, but with a possibility to control it indirectly. The camera box had a black background to avoid disturbances. It wasn't on the cropped images, but the camera shot it, of course. So I replaced the black background tiles with white ones. The overall brightness went down considerably and the results were better again:

[Images: blue, green, red, white, gray, yellow and black bricks]
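The effect of the background on the automatic exposure can be seen by comparing the mean brightness of the whole frame with the brightness of the brick crop. Another small Python/OpenCV sketch; the crop coordinates here are made up:

    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Hypothetical crop of the brick area in the middle of the frame.
    crop = gray[200:280, 280:360]

    # The camera meters (roughly) on the whole frame, so a white background
    # pushes the overall mean up and the exposure down, which keeps the
    # brick itself from being overexposed.
    print("whole frame mean:", gray.mean())
    print("brick crop mean: ", crop.mean())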

Neural network enhancement

With such images I tried to enhance the neural network and add more colors – blue, black and light gray (especially the blurred light gray on a dark gray background had been unrecognizable even by eye). The results were better again. The only remaining problem was distinguishing white from light gray – the network made a lot of mistakes there. I'm afraid this needs a more complicated neural network and that two layers are not enough.
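For reference, by a "two layer" network I mean something like the sketch below. It is a reconstruction for illustration, not my original code: the cropped brick image is downscaled, flattened into a feature vector and fed into a network with a single hidden layer, here written with scikit-learn.

    import numpy as np
    import cv2
    from sklearn.neural_network import MLPClassifier

    COLORS = ["red", "green", "blue", "white", "yellow",
              "light_gray", "black"]

    def features(image_bgr):
        """Downscale the cropped brick image and flatten it to a vector."""
        small = cv2.resize(image_bgr, (8, 8))
        return small.astype(np.float32).ravel() / 255.0

    # X: feature vectors of labelled training crops, y: color indices.
    # (Collecting the training data is not shown here.)
    net = MLPClassifier(hidden_layer_sizes=(16,),   # one hidden layer
                        max_iter=2000)
    # net.fit(X, y)
    # print(COLORS[net.predict([features(crop)])[0]])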

Sorting mechanics improvements

With the better neural network results I started to improve the sorter itself. I extended the conveyor belt and added two more pushers (I only had one NXT set, so I was limited to three motors). So finally I was able to sort out three colors – red, green and blue in the current configuration.

The original program was a single-threaded application. When it found a red brick, it just slept for a short moment and then pushed the brick off the conveyor belt. That was impractical with three motors – during the sleep the program stopped doing anything. So I modified the program to use two threads. One thread captured the images and, whenever it found something that should be pushed off the belt, created a push request with an adequate delay and put it into a request list. The second thread checked the list and, whenever a request came due, issued a push action on the responsible motor.
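The two threads look roughly like the following simplified Python sketch. The grab_frame, classify_color and push functions are placeholders for the real camera capture, color recognition and NXT motor calls, and the delays are made-up values:

    import threading
    import time

    def grab_frame():
        """Placeholder for grabbing and cropping a camera image."""
        return None

    def classify_color(frame):
        """Placeholder for the neural network; pretends it always sees red."""
        return "red"

    def push(motor_id):
        """Placeholder for issuing a push on the given NXT motor."""
        print("push on motor", motor_id)

    MOTOR_FOR_COLOR = {"red": 0, "green": 1, "blue": 2}
    DELAY_FOR_MOTOR = {0: 1.2, 1: 2.0, 2: 2.8}   # seconds, made-up values

    requests = []                 # list of (due_time, motor_id)
    lock = threading.Lock()

    def capture_thread():
        while True:
            color = classify_color(grab_frame())
            motor = MOTOR_FOR_COLOR.get(color)
            if motor is not None:
                # Schedule a push for the moment the brick reaches the pusher.
                with lock:
                    requests.append((time.time() + DELAY_FOR_MOTOR[motor], motor))
            time.sleep(1 / 30)    # roughly the camera frame rate

    def pusher_thread():
        while True:
            now = time.time()
            with lock:
                due = [r for r in requests if r[0] <= now]
                for r in due:
                    requests.remove(r)
            for _, motor in due:
                push(motor)
            time.sleep(0.01)

    if __name__ == "__main__":
        threading.Thread(target=capture_thread, daemon=True).start()
        threading.Thread(target=pusher_thread, daemon=True).start()
        time.sleep(10)            # let the sketch run for a while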

[Image: sorter, version 2]

Less successful modifications…

Not all modifications were successful, of course. I tried to modify the pushers to push in both directions so that I could sort six colors, but I failed with the current construction. It would have required extending the guiding axles – axles at least 16 studs long would be needed. I didn't have any, and at that length they would probably bend too much anyway. I was thinking about a construction with 12-stud axles, but in the end I used a completely different solution, which will be described in a future article.

I tried to figure out a better conveyor belt, something smoother than the original LEGO belt. The texture of the current belt would probably cause complications for a more sophisticated neural network. I tried to use a bicycle inner tube, but with no clear result.

What I want to try next:

  • Double pushers (it looks like I'm on the right track now).
  • A smoother conveyor belt, so its texture doesn't confuse the neural network.
  • Use a better camera.
  • Try to sort various parts, not only 1×1 and 1×2 Technic bricks.
  • Use more images for the recognition. That should decrease the error rate, as there is currently a higher chance of error on the first image (where only a small part of the brick is visible) than on the subsequent ones; see the sketch after this list.
  • Try neural networks with more layers (and start with shape recognition).
  • And a real highlight would be to use the NXT color sensor as an input, upload the neural network into the NXT brick and create a pure LEGO sorter without a camera and a PC.
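The "more images" point above could be as simple as a majority vote over the per-frame predictions for one brick, something like this sketch:

    from collections import Counter

    def classify_brick(frame_predictions):
        """Combine per-frame color predictions for a single brick.

        The first frame, where only a small part of the brick is visible,
        is simply outvoted by the later, more reliable frames.
        """
        return Counter(frame_predictions).most_common(1)[0][0]

    print(classify_brick(["white", "yellow", "yellow", "yellow"]))   # yellow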

And finally the video…