One of the problems in the world of moving robots is figuring out where the robot is. One way is to mount a GPS module on the robot. Simpler solutions include measuring how many revolutions the wheels have spun and (with the help of the wheel diameter and our friend π) calculating the distance travelled. But what if a wheel spins or slides? Then the robot “thinks” it’s further along than it actually is.
Andri suggested an alternative: how about using a cheap little device that everyone owns and that is made especially to measure distances (even in 2 dimensions!)? The device in question is an (optical) mouse. All it does is measure distances, translate them into coordinates and deliver them to the computer it is attached to. The question we asked ourselves was whether we could somehow bypass all this logic and read the measurements directly.
All this device does all day long is measure distances, but a little heart surgery was needed to get at them. By doing that we ended up creating an optical-mouse scanner.
The first thing was to perform heart surgery on our mouse and figure out what components it is made of. Upon opening the mouse we found a microchip with some sort of a part number, which we googled, and stumbled upon its datasheet. The datasheet was a pleasant read and we discovered that this $5 device is quite remarkable: it is equipped with a camera and a pattern-recognition microchip. The way an optical mouse works is to take a picture (18x18 px) of a 1x1mm patch of the surface and compare its pattern with the pattern from the previously taken picture. By calculating how the pattern has shifted, it can figure out how far the mouse has moved between the two frames. Pretty clever!
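The frame-matching idea can be sketched in a few lines of C++. To be clear, this is plain block matching and purely illustrative: the names and the ±4-pixel search window are my assumptions, and the algorithm the sensor actually runs on-chip is proprietary.

```cpp
#include <array>
#include <cstdint>
#include <utility>

// One mouse frame: 18x18 pixels of 8-bit grayscale.
constexpr int N = 18;
using Frame = std::array<std::array<uint8_t, N>, N>;

// Estimate how far the surface moved between two frames by trying
// every shift within +-maxShift pixels and keeping the one with the
// lowest average squared pixel difference over the overlapping area.
std::pair<int, int> estimateShift(const Frame& prev, const Frame& curr,
                                  int maxShift = 4) {
    long long bestErr = -1;
    long long bestCount = 1;
    std::pair<int, int> bestShift{0, 0};
    for (int dy = -maxShift; dy <= maxShift; ++dy) {
        for (int dx = -maxShift; dx <= maxShift; ++dx) {
            long long err = 0, count = 0;
            for (int y = 0; y < N; ++y) {
                for (int x = 0; x < N; ++x) {
                    int px = x + dx, py = y + dy;
                    if (px < 0 || px >= N || py < 0 || py >= N) continue;
                    long long d = (long long)curr[py][px] - prev[y][x];
                    err += d * d;
                    ++count;
                }
            }
            if (count == 0) continue;
            // Compare mean errors without floating point:
            // err/count < bestErr/bestCount  <=>  err*bestCount < bestErr*count
            if (bestErr < 0 || err * bestCount < bestErr * count) {
                bestErr = err;
                bestCount = count;
                bestShift = {dx, dy};
            }
        }
    }
    return bestShift;
}
```

With two frames of the same textured surface, one shifted a pixel to the right, `estimateShift` returns (1, 0).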
By powering the mouse and reading the registers of the microchip over a serial communication line (using the protocol described in the datasheet) we were able to read the measurements. We had succeeded in our original endeavour.
Now armed with the power to communicate with the mouse, what was there to stop us from trying to read the image the optical mouse camera was taking? Answer: nothing. Reading the datasheet provided the necessary knowledge, and it is actually quite simple: you just need to know that the image is stored as 18x18 pixels, each pixel an 8-bit grayscale value (0–255). Just read the data serially and reassemble the image on the other end. This worked, but each frame covered such a small patch of the surface (1x1mm) that we could hardly “see” anything in the picture. The solution was to combine all the pictures taken and draw them on a canvas at the coordinates we were reading from the mouse. By doing that we ended up creating an optical-mouse scanner.
Reads: iðnaðarverkfræði (industrial engineering)
On our mission to learn robotics an opportunity presented itself: a line-following competition hosted by IEEE Iceland on November 22nd, 2009. What was known beforehand was this:
- The track is a 25mm black line on a white 1250x2500mm board
- Perpendicular lines will cross the track
- The tightest turn will have a radius of 100mm
- Five light gates will measure the distance covered, with 2 points awarded for each gate
- Each team gets two attempts
- The team with the most points wins, with ties broken by the best time
Some limitations on the robot were:
- Height 200mm
- Length 250mm
- Width 200mm
What we came up with was a “racer”, which can be seen in the following photos:
On the front is the Arduino board, with connections to the motor controller used to drive the motors. The controller pulses the motors to step down the 9V from the batteries and can reverse the direction of each motor. This way (and with the help of the steel ball on the front, which rotates in all directions) the racer could make any kind of turn, and in fact turn on the spot, making it very maneuverable. A plate with LEDs and light sensors was mounted under the racer. The light sensors measure the light reflected from the surface beneath the racer and thereby sense where the 25mm black track is.
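Driving two independently pulsed wheels means steering is just a matter of mixing a steering command into the two duty cycles. A sketch of that mixing, assuming the Arduino-style 0–255 PWM range (the names and the clamping scheme are mine, not taken from our actual code):

```cpp
#include <algorithm>

// PWM duty cycles (0..255, the Arduino analogWrite range) for the
// left and right drive motors.
struct MotorPwm { int left; int right; };

// Map a steering command (-1.0 = hard left, +1.0 = hard right) and a
// base speed to the two duty cycles. Slowing one wheel relative to
// the other turns the racer; at full steer one wheel stops entirely,
// letting the racer pivot around the steel ball.
MotorPwm differentialDrive(double steer, int baseSpeed) {
    double left  = baseSpeed * (1.0 + steer);
    double right = baseSpeed * (1.0 - steer);
    MotorPwm out;
    out.left  = std::max(0, std::min(255, int(left)));
    out.right = std::max(0, std::min(255, int(right)));
    return out;
}
```

For example, `differentialDrive(0.0, 200)` drives both wheels at duty cycle 200, while `differentialDrive(1.0, 200)` saturates the left wheel at 255 and stops the right one.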
We assigned the sensors values from -3 (the sensor second furthest to the left) to +3 (the sensor second furthest to the right). As can be seen in the picture there are 8 sensors, but because the Arduino board only has 6 analog inputs, and we were unable to connect a comparator to hook the remaining 2 up to the digital inputs of the board, we left the two sensors on the edges unused.
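Those position values act as weights: the weighted average of how strongly each sensor sees the line gives a single "where is the line?" number that is zero when the line is centred. A sketch of that calculation, assuming the readings have already been converted so that a higher value means more black under the sensor (the actual polarity depends on the wiring):

```cpp
// Estimate where the black line sits under the racer from the six
// used sensors. Each sensor has a position weight from -3 (left) to
// +3 (right); the line-strength-weighted average of the positions is
// 0.0 when the line is centred, negative when it drifts left and
// positive when it drifts right.
double linePosition(const int readings[6]) {
    static const int weights[6] = {-3, -2, -1, 1, 2, 3};
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 6; ++i) {
        num += double(weights[i]) * readings[i];
        den += readings[i];
    }
    return den > 0.0 ? num / den : 0.0;  // 0 if no sensor sees the line
}
```

So a reading of the line only under the leftmost used sensor yields -3, only under the rightmost yields +3, and a symmetric reading yields 0.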
Armed with a way to control the motors and measure the location of the track, the only thing left was to create a steering controller. After some pondering we decided to go with a PID controller, which I implemented in C++. The aim of the PID controller was to keep the average of the sensor values equal to zero, i.e. the line centred under the racer. The code for the whole project can be found here.
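The shape of such a PID controller is short enough to show here. This is the textbook form, not a copy of our code, and the gains in the comment are illustrative rather than the values we raced with:

```cpp
// A minimal PID controller for steering: the error is the
// line-position estimate (0 when the line is centred) and the output
// is the steering command fed to the motors.
struct Pid {
    double kp, ki, kd;          // proportional, integral, derivative gains
    double integral = 0.0;      // accumulated error over time
    double prevError = 0.0;     // error from the previous update

    Pid(double p, double i, double d) : kp(p), ki(i), kd(d) {}

    // Call once per control-loop tick; dt is the time step in seconds.
    double update(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

The proportional term steers toward the line, the integral term removes any steady drift, and the derivative term damps the oscillation a pure P controller would show in the tight turns; tuning the three gains was what the test rounds were for.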
When we arrived it was announced that the track was modeled after the Suzuka F1 track in Japan. After a few test rounds, where we tweaked the parameters of our racer, we were ready to go.
Four teams participated in the competition and we were first to race. We decided to be on the safe side and reduced the speed to be sure of finishing the track. And we did: our time was 29 seconds. None of the other teams managed a successful lap. As we had the lead we decided to try for a faster lap. This time, however, we went too fast into one of the turns and the racer lost control. Fortunately none of the other teams made it either, and we were declared the winners with 10 points and a time of 29 seconds :)
Mechatronics is an interest that my colleague Andri and I share. For some months we have been trying to initiate a Mechatronics project but for some reason we never seem to be able to start even though we’re both pretty motivated.
Earlier this summer we met up and did a warm-up exercise. The purpose of the exercise was to get familiar with our newly bought analog sonar. We hooked the sonar to a stepper motor which, with the help of a BASIC Stamp microcontroller, swept the sonar through 90° while measuring the distance to the nearest object. This gave us readings in the polar plane, which were then projected onto the XY-plane. Armed with the new coordinates it was relatively easy to have a computer draw up what our Wall-E «saw». On the right is our version of Wall-E.
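The polar-to-Cartesian projection is a one-liner; a sketch (angle convention is an assumption, with 0 radians pointing straight along the x-axis):

```cpp
#include <cmath>
#include <utility>

// Convert one sonar reading to Cartesian coordinates: the stepper
// turns the sonar to angleRad (radians, 0 = straight ahead along the
// x-axis) and the sonar reports the distance to the nearest object.
std::pair<double, double> polarToXY(double distance, double angleRad) {
    return { distance * std::cos(angleRad),
             distance * std::sin(angleRad) };
}
```

Sweeping the angle over the 90° arc and plotting the resulting (x, y) points is all it takes to draw what the sonar "saw".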
Last week we decided to switch to the ATmega microcontroller on the Arduino Duemilanove microcontroller board.
The Arduino Duemilanove (“2009”) is a microcontroller board based on the ATmega168 (datasheet) or ATmega328 (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started.
Arduino can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors, and other actuators. The microcontroller on the board is programmed using the Arduino programming language (based on Wiring) and the Arduino development environment (based on Processing). Arduino projects can be stand-alone or they can communicate with software running on a computer (e.g. Flash, Processing, MaxMSP). We intend to control the Arduino with a Beagle Board.
What our little Mechatron is supposed to do in the end is very unclear. We are mainly going to have loads of fun while learning the ropes of mechatronics. This week we’ll receive two (1, 2) books to start our journey. As we progress I’ll be posting the results on this blog.