Since the last post I have taught the sorter to recognize around 20 different colors. But sorting LEGO parts just by color is boring, so I went further.
A few years ago I used to buy LEGO kiloware on eBay. I still had a bucket of slopes that I never found the morale to sort. Ideal food for the automatic sorter.
So I grabbed the images, trained the neural net and started the sorter. It worked quite well, but had trouble distinguishing between regular and inverted slopes.
So I tried another approach and grouped the images into just two classes. The result was two boxes – inverted and everything else (regular, double slopes, round slopes etc.).
The inverted slopes were simple to sort – I had them done and put into the final boxes before I realized I should take a photo.
The problem with the bigger box was that it still contained a lot of different types. So I reworked the basic sorting into regular, double, round and inverted. (There were not many inverted ones left, as those had already been sorted out.)
The sorter still has trouble distinguishing regular from double slopes – I haven't taken enough images of double slopes.
You can see a lot of "1x2x3 inverted" parts in the box – during the previous round the neural net didn't know them, so it misclassified them. In the second round it did much better.
Finally I took the big box of regular slopes and ran it through an older neural net. That net didn't know many types, so there is a lot to improve:
The top row is not good. It should contain the 4×2, 4×3 and 2×4 slopes. For some reason the net cannot recognize the 4×2 ones – they ended up in all of the boxes.
The bottom row is much better – from left to right these are 2x2x3, 2×3, 2×2, 1×2 and 1×3. Let's look at the results more closely.
I manually checked the 1×2 box. The parts on the left are wrong because the neural net doesn't know them – mostly double slopes. On the right are pure misclassifications.
The next image shows the same for 2×2 slopes: parts unknown to the neural net on the left, misclassifications on the right.
Finally, the 1×3 slopes. Most of the unknowns are 3×2 slopes.
I have no image of the 2x2x3 box – there was just one misclassification. The box also contained the "unknown" 2x2x3 double slope.
Originally I wanted to stop here, say "it's good enough" and call the post finished. But I didn't publish it in the evening, and the following morning I tried to do a bit better. I took some images of 3×2 slopes and trained a new, better neural net – I'm still learning that, too. Then I put all the parts back into one box and ran it through the sorter again. This is the result:
A closer look. Let's start with the more error-prone categories.
3×2 slopes. Unknown parts on the left, misclassifications on the right.
2×4 and 4×3 slopes. Looks better.
The following images again have unknown parts on the left and misclassifications on the right. Well, misclassifications – each category has just one.
(This one was funny – I didn't put the 2x2x3 slopes back, so the box contained mostly unknown parts.)
That's all for today. The overall impression is that it sorts with fewer mistakes than I do. And that's it for slope sorting for some time – at last, I have no slopes left to sort.
I have just one NXT set, which means three motors. I was thinking about how to modify the pushers to push parts to both sides of the conveyor belt, which would allow me to sort into six categories. The previous version was not well suited for that – the pusher would have to be pre-set to the right position, it could hit other parts at the wrong moment, and it would require much longer axles. In the end I used a construction with LEGO treads (so they are usable for something after all 🙂 ). I added 1:3 gearing to the NXT motor to get a faster push. It shoots the parts off the belt quite nicely.
Camera
The possibilities of the simple webcam were quite limited, and the images were lousy, too. So I looked for a replacement. Because some time ago I built a system for controlling the water level in water tanks using a Raspberry Pi (and because the Raspberry Pi has a nice camera), the decision was very simple: I used the Raspberry Pi camera module. It allows most parameters to be set manually and can provide 10 images per second (and there is probably still room for improvement).
I used a Blinkt! module for illumination, but while fine-tuning the camera I found out it was insufficient. So I added one segment of an LED strip. (I tried using about 20 cm of it, but got a completely overexposed image.)
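For reference, this is roughly how the camera can be driven from Python with the picamera library. A minimal sketch – the parameter values and the hand-off at the end are placeholders, not my tuned settings:

```python
import io
import picamera

# Fixed manual settings - auto modes are switched off so every frame
# is exposed the same way (the values here are illustrative only)
camera = picamera.PiCamera(resolution=(640, 480), framerate=10)
camera.iso = 100
camera.shutter_speed = 4000   # microseconds
camera.exposure_mode = 'off'  # freeze exposure
camera.awb_mode = 'off'       # freeze white balance
camera.awb_gains = (1.5, 1.5)

stream = io.BytesIO()
# capture_continuous yields frames as fast as the settings allow
for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
    jpeg_bytes = stream.getvalue()
    # ... hand jpeg_bytes over to the classification here ...
    stream.seek(0)
    stream.truncate()
```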
Image classification
The biggest changes are on the software side. I threw away the Java library and started to study Python and deep neural networks (Jeremy Howard at fast.ai has an excellent set of lectures online). Right at the start they say: "Get an Nvidia graphics card." I didn't have one, so I tried to use the free capacity of an 8-processor server. The calculations ran overnight, with (mostly) poor results in the morning.
So I finally paid for a virtual machine with an Nvidia GPU. One neural network training run then took about 2 minutes, which was great. (Today I have more images and more complex neural networks, but it still fits under 90 minutes.)
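To give an idea of the training side: with the current fastai API the whole thing boils down to a few lines. This is a minimal sketch, not my actual setup – the folder layout, model and epoch count are placeholders:

```python
from fastai.vision.all import *

# One folder per class, e.g. data/slope_2x2/, data/slope_1x3_inverted/, ...
path = Path('data')

# 20% of the images go to the validation set; everything resized to 224x224
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

# Transfer learning from a pretrained ResNet - minutes on a GPU
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)

# Classify a single cropped camera frame
pred_class, pred_idx, probs = learn.predict(PILImage.create('frame.jpg'))
print(pred_class, probs[pred_idx])
```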
The classification ran on an ordinary computer without an Nvidia GPU, which allowed a processing speed of one image per second. That was simply too slow – many parts passed through unnoticed. I tried to increase the speed, but without success.
But a workmate of mine had an old Nvidia card – too slow for gaming, but still useful for classification. The classification time is now insignificant, so I can process those 10 images per second.
So now the Raspberry Pi continuously takes images and sends them to the server with the Nvidia card. The server performs the classification using the pretrained neural network (and calculates the part's position in the image with another neural net). It then calculates the delay and sends the results back to the Raspberry. The result is queued for the specified time; after the timeout, the Raspberry sends a command to the NXT brick to fire the pusher. And when all of this works, the LEGO part ends up in the correct box.
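The Pi-side loop is then quite small. A sketch under these assumptions – the endpoint URL, the response fields and the fire_pusher helper are invented for illustration:

```python
import threading
import requests

CLASSIFY_URL = 'http://gpu-server:5000/classify'  # hypothetical endpoint

def fire_pusher(pusher_id):
    # Placeholder: this is where the command goes out to the NXT brick
    print('firing pusher', pusher_id)

def handle_frame(jpeg_bytes):
    # Ask the GPU server what the part is and where it sits on the belt
    resp = requests.post(CLASSIFY_URL, data=jpeg_bytes,
                         headers={'Content-Type': 'image/jpeg'}).json()
    if resp['pusher'] is None:
        return  # nothing recognized on this frame
    # Queue the push: the timer fires once the part has traveled to the pusher
    threading.Timer(resp['delay'], fire_pusher, args=(resp['pusher'],)).start()
```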
And the result?
After a lot of experiments I got a working neural net for recognizing 10 colors – Black, Blue, DBG, DkGreen, Green, L(B)G, MdBlue, Red, White and Yellow. I didn't have enough parts in other colors to train on.
The sorting is not perfect; the machine makes a few types of errors:
Bricks are too close together, so the pusher hits two or more at once.
The neural net is not confident enough. In that case I just let the brick pass (see the sketch after this list).
The neural net doesn't know a color it was never trained on. So tan parts end up in gray and yellow, dark red and orange end up in red, etc.
The pusher simply misses.
And sometimes the neural net simply makes a wrong decision. But that is rare – color classification is a simple task.
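The "not confident enough" rule from the list above is simple to express in code. A minimal sketch, assuming the net returns one probability per color:

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, tuned by experiment

def decide(probabilities):
    """probabilities: dict mapping color name -> softmax probability."""
    color, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return None  # not sure enough - let the brick pass
    return color

print(decide({'Red': 0.55, 'DkRed': 0.40, 'Orange': 0.05}))  # None - too uncertain
print(decide({'Blue': 0.97, 'MdBlue': 0.02, 'Black': 0.01}))  # 'Blue'
```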
This is still a prototype and I know I need to improve some parts of the solution:
I need to calibrate the camera – for now I use the automatic settings, and especially the white parts come out overexposed.
I need to increase the camera's field of view and rebuild the pushers. That will allow me to sort bigger parts (right now I can sort only very small ones).
The camera takes the pictures directly from the top. That is sufficient for color sorting, but it would be a problem for shape sorting.
And a lot of small improvements in both software and construction.
And all of that means I can probably throw away all the collected images and start over. 🙂
It might look like this project ended where many projects do, but that is not the case. It just takes me a bit longer.
Until now I have spent my time training neural networks, taking pictures of bricks, building mechanics for pushing parts off the conveyor belt into boxes, and other "advanced" stuff. You can read about that in the previous part(s).
Today it will be much, much simpler.
I tried almost all the items on the todo list from the last post (excluding the crazy idea of running the classification directly on the LEGO NXT brick). But the most problematic one was…
The conveyor belt
The conveyor belt was a real pain. The original LEGO tread is really bad as a conveyor belt: it has a very strong, confusing texture (which would make training the neural net harder), and that texture also sucks small parts into the belt, so they get stuck quite a lot. I tried several replacements – a bicycle tube, a motorcycle tube… They didn't work. Finally I tried a surgical tourniquet. (I have to admit that when I came to the pharmacy for the fifth tourniquet in a week, they gave me quite an odd look.) Anyway, it works quite well.
But even the new belt didn't solve one problem – round parts roll. They roll and fall all around, even into the boxes with already-sorted parts, where they really don't belong. I hoped a longer belt would stabilize the parts, but that didn't happen. There were pins, bushes and short axles all around the sorter.
But then I realized that this is a kind of sorting, too, and that I can separate these parts on an inclined conveyor belt: whatever is round rolls away. So I built something I call a "presorter". Before I put the parts into the "real" sorter, I separate out the troublemakers. It looks like this:
I need to do multiple passes – some round parts land flat and don't roll (especially when I want them to). But the result is quite good. This is after the first pass:
And after the second pass:
What remains is just a mix of a few part types, which can easily be sorted manually.
It takes two or three passes to get a mix of non-round parts. The result is something that can be sorted by the "real" sorter. We can look into that next time.
At first I wanted to improve the image quality. The images were blurred by motion and long exposure, so some gray was mixed into every color. They looked really bad (well, 5 frames per second):
I checked the camera library API for a way to set the shutter speed. (I know there is no real shutter in these cameras; in reality it is the frame rate that matters.) But there is no way to set the frame rate directly – these small cheap webcams don't allow it, as that functionality is embedded in the camera. It can be done indirectly, though, by improving the lighting: when the camera decides it has enough light, it increases the frame rate. So I threw away the lamp from the first version and used a 2-watt bicycle lamp. And it worked – the camera started taking images at 30 frames per second, which is its maximum. The result was significantly better:
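(If you want to check what frame rate your own webcam really delivers, a few lines are enough. I was using a Java library at the time; this sketch does the same measurement with OpenCV in Python:)

```python
import time
import cv2

cap = cv2.VideoCapture(0)  # first webcam

frames = 100
start = time.time()
for _ in range(frames):
    ok, image = cap.read()  # blocks until the next frame arrives
    if not ok:
        break

elapsed = time.time() - start
print('effective frame rate: %.1f fps' % (frames / elapsed))
cap.release()
```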
And the neural network's results were significantly better, too. But it still had a problem distinguishing yellow from white. Seeing the images, I wasn't surprised. At first I thought I needed a better camera with a wider dynamic range that wouldn't give me overexposed images. But then I realized the brightness control is embedded in the camera, too – again with no way to control it directly, but with a way to influence it indirectly. In the camera box I had a black background to avoid disturbances. It wasn't in the cropped images, but the camera saw it, of course. So I replaced the black background tiles with white ones. The overall brightness went down considerably, and the results improved again:
Neural network enhancement
With such images I tried to enhance the neural network and add more colors – blue, black and light gray (the blurred light gray on a dark gray background in particular had been unrecognizable even by eye). The results were better again. The only remaining problem was distinguishing white from light gray – the network made a lot of mistakes there. I'm afraid this needs a more complicated neural network, and that two layers are not enough.
Sorting mechanics improvements
With better neural network results I started to improve the sorter itself. I extended the conveyor belt and added two more pushers (I only had one NXT set, so I was limited to three motors). So I was finally able to sort out three colors – red, green and blue in the current configuration.
The original program was a single-threaded application: when it found a red brick, it just slept for a short moment and then pushed the brick off the conveyor belt. That was impractical with three motors – during the sleep the program stopped doing anything. So I modified the program to use two threads. One thread captured the images and, whenever it found something that should be pushed off the belt, created a push request with the appropriate delay and put it in a request list. The second thread checked the list and, whenever a request was due, issued a push action on the responsible motor.
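A minimal sketch of that two-thread design (in Python rather than the Java I used then; classify_next_frame, push and the delay constant are stand-ins for the real camera, neural net and motor code):

```python
import queue
import threading
import time

TRAVEL_DELAY = 1.5  # seconds from the camera to the pusher (placeholder)

# Push requests, ordered by the time they become due
requests = queue.PriorityQueue()

def classify_next_frame():
    """Stand-in for the camera + neural net: returns a motor id or None."""
    time.sleep(0.2)
    return 'B'  # pretend every frame shows a brick for motor B

def push(motor):
    """Stand-in for the real motor command."""
    print('push on motor', motor)

def camera_thread():
    while True:
        motor = classify_next_frame()
        if motor is not None:
            # Schedule the push for when the brick reaches that pusher
            requests.put((time.time() + TRAVEL_DELAY, motor))

def pusher_thread():
    while True:
        due, motor = requests.get()
        wait = due - time.time()
        if wait > 0:
            time.sleep(wait)  # waiting here no longer blocks the camera
        push(motor)

threading.Thread(target=camera_thread, daemon=True).start()
threading.Thread(target=pusher_thread, daemon=True).start()
time.sleep(5)  # let the demo run for a few seconds
```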
Less successful modifications…
Not all modifications were successful, of course. I tried to modify the pushers to push in both directions so I could sort six colors, but I failed with the current construction. It would require extending the leading axles – axles at least 16 studs long. I didn't have those, and at that length they would probably bend too much anyway. I considered a construction with 12-stud axles, but in the end I used a completely different solution, which will be covered in a later article.
I also tried to figure out a better conveyor belt, something smoother than the original LEGO tread. The texture of the current belt would probably complicate things for a more sophisticated neural network. I tried a bicycle tube, but with no clear result.
What I want to try next:
Double pushers (it looks like I'm on the right track now).
A smoother conveyor belt to avoid confusing the neural network.
Use a better camera.
Try to sort various parts, not only Technic bricks 1×1 and 1×2.
Use more images per part for the recognition (see the sketch after this list). That should decrease the error rate, as the chance of an error is higher on the first image (where only a small part of the brick is visible) than on the subsequent ones.
Try multi-layer neural networks (and start with shape recognition).
And a real highlight would be to use the NXT color sensor as the input, upload the neural network to the NXT brick, and create a pure LEGO sorter with no camera and no PC.
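For the "more images" item above, the obvious approach is to vote over all the frames in which the same brick was seen. A minimal sketch, assuming the predictions arrive as (color, confidence) pairs for one brick:

```python
from collections import defaultdict

def vote(predictions):
    """predictions: list of (color, confidence) pairs for one brick.

    Sum the confidences per color and pick the overall winner, so one
    bad frame (brick only half visible) gets outvoted by the rest.
    """
    scores = defaultdict(float)
    for color, confidence in predictions:
        scores[color] += confidence
    return max(scores, key=scores.get)

# The first frame sees only an edge of the brick and guesses wrong:
print(vote([('White', 0.6), ('Yellow', 0.9), ('Yellow', 0.8)]))  # 'Yellow'
```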
And finally the video…
Seven or eight years ago I got a LEGO Mindstorms NXT set. A wonderful set. I built a few basic models and then decided to build my own construction – a parts sorter. With a color sensor it looked like a piece of cake.
I worked on the sorter for a few weeks. The most difficult task was calibrating the sensor for the individual colors. Finally it worked, somehow. But I kept finding out that the "dark measuring box" wasn't dark enough – the sunlight kept changing, and I had to calibrate again and again…
Our son liked it a LOT, so one day he came to "take a look". When I got back from work, I just swept the debris into a box and started building something else (some LEGO train, probably).
A few weeks ago, in 2017, during a house cleanup, I found that debris. Disassembled, sorted out, set aside. And that very afternoon a friend sent me a link to an article about a LEGO sorter based on a neural network. So I started to build again. And to program a little, too.
Getting images…
At first I wanted to check that I could take images and detect whether something was present (or the image was blank). So I built a LEGO conveyor belt, put an old USB camera over it and wrote a simple program that took images from the camera and saved them to disk. It worked, although the image quality was really, really bad. As I discovered later, the camera took just 5 frames per second. The camera doesn't even have autofocus… which is a good thing: when I tried a newer camera with autofocus, it was unable to focus on the running belt, and the result was even worse.
…processing images…
I cropped the images to keep just the centre (see above). The idea was that if a piece of a brick was visible in the crop, the original image should contain the whole brick. Using the Neuroph library, I started to train a "yes/no" neural network.
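The crop itself is trivial. I was working in Java with Neuroph at the time, but the idea looks like this in Python with Pillow (the crop size is a placeholder):

```python
from PIL import Image

CROP = 100  # side of the central square, in pixels (placeholder value)

def center_crop(path):
    image = Image.open(path)
    w, h = image.size
    left, top = (w - CROP) // 2, (h - CROP) // 2
    # Keep only the central square - if a piece of a brick shows up
    # here, the full frame should contain the whole brick
    return image.crop((left, top, left + CROP, top + CROP))

center_crop('frame.jpg').save('crop.jpg')
```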
And then I realized this was enough to sort by color, not just "brick/no brick". So I threw that neural network away and started training a new one, able to recognize red, green, yellow and white (based on the pile of bricks I used). I didn't try the light/dark gray bricks – blurred gray bricks on the gray belt were not visible at all.
And – it worked. It started to sort the images.
… and sorting the bricks
So let's build something to sort the bricks, not just images. I downloaded the LeJOS library and tried to flash the firmware on the LEGO NXT brick. Well, hell… The program erased everything on the NXT brick and then reported that no NXT brick was connected. The brick stopped doing anything; it only clicked quietly – the so-called "clicking of death"… I finally managed to repair it. If you ever try this, remember to install the LEGO drivers on Windows while DISCONNECTED from the Internet. And the final twist – there was no need to flash the brick at all.
Once I had verified that I could control the motors from the PC, I built a simple pusher to throw the bricks off the belt. You can see the result on YouTube.
And you can see Murphy's law in action, too – the very first brick filmed is sorted wrong :).
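(For completeness: I drove the motors through LeJOS from Java, but the same push can be done from Python with the nxt-python library – a sketch assuming the 2.x API, with the port and power values as examples only:)

```python
import nxt.locator
from nxt.motor import Motor, PORT_A

# Find the NXT brick over USB or Bluetooth
brick = nxt.locator.find_one_brick()

# A quick push: spin the pusher motor forward and back again
pusher = Motor(brick, PORT_A)
pusher.turn(power=80, tacho_units=180)   # fire - half a turn forward
pusher.turn(power=-80, tacho_units=180)  # retract
```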