Current Updates

This blog is an informal, and sometimes intermittent, record of my MEng project.

Thursday 3 May 2012

What I've Been Doing for Over a Month

Once again, I've left it far too long since I last posted.  However, I hope I can summarise what I've done in the past month or so without making it insanely long.

I had a week or so off after my last post, before getting back to doing something resembling work.

Server
Development of the server was helped along by 'socat', which can create serial-port-like pseudo-terminals in Linux.  This meant that I could point the server at one of these, and use moserial, and later a simple emulation of the AVR end of the connection, to feed it with commands (there's a rough sketch of the server end of this at the end of this section).  The server is now near-complete, though there are a few things still left to do:
  • It's still a single-user server.  That's OK for now, but it'd be nice if multiple folk could connect (especially just to see what's happening).
  • The video streamer I wrote (on libv4l2) feeds raw video.  This even challenges the loopback adaptor.  It works out at about 221 Mbit/s of data (640 x 480 pixels x 24 bits x 30 fps).  To bring this down, I could reduce the resolution, colour depth or frame rate, but a better bet is probably to find out whether the supplied camera (Logitech Quickcam Messenger, if memory serves) can feed JPEG frames directly, so no decompression / recompression is required at the server.
  • Ability to receive scan results from a range sensor, update a map, and relay it to the client
  • Authentication (this one's really not that hard, but I don't see a need for now)
  • Buffering of commands (like, go here, then follow a line until you get to there)
As it is, the server acts as a very useful interface between the client and the AVR.  It's still a little verbose, and it's not got a useful command-line help screen either...
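
Going back to the socat trick: the server end of it is nothing special.  Here's a minimal sketch in C (I'm assuming a C server purely for illustration; the device path /tmp/ttyAVR, the baud rate and the 'status' command are all made up, and the socat line in the comment is just one way of creating the link):

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Open a serial device (or a socat pseudo-terminal) in raw mode.
     * The device path and baud rate are illustrative only. */
    static int open_serial(const char *path, speed_t baud)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return -1;
        }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) {
            perror("tcgetattr");
            close(fd);
            return -1;
        }

        cfmakeraw(&tio);             /* 8N1, no echo, no line buffering */
        cfsetispeed(&tio, baud);
        cfsetospeed(&tio, baud);

        if (tcsetattr(fd, TCSANOW, &tio) < 0) {
            perror("tcsetattr");
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        /* socat can create the linked pseudo-terminal with something like:
         *   socat -d -d pty,raw,echo=0,link=/tmp/ttyAVR -
         * so the 'AVR' can be driven by hand from the other end. */
        int fd = open_serial("/tmp/ttyAVR", B38400);
        if (fd < 0)
            return 1;

        write(fd, "status\n", 7);    /* poke an example command at the fake AVR */
        close(fd);
        return 0;
    }

The nice thing is that exactly the same code later opens the real serial device with no changes.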

Client
I've made the client receive and display video now, though it's currently raw data (in short blocks, to make it work reasonably over UDP).  It uses OpenGL and a texture, and updates the texture directly using glTexSubImage2D.  The client also responds to key presses now, and sends out remote control data when appropriate.  It displays targets in the correct places, and navigates through them as the robot moves.
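
For reference, the texture update is roughly this shape (a sketch rather than the actual client code: the function and parameter names are made up, and I'm assuming a 640 x 480 RGB frame that has already been reassembled from the UDP blocks):

    #include <GL/gl.h>

    #define FRAME_W 640
    #define FRAME_H 480

    /* Upload a freshly received RGB frame into an existing texture.
     * 'tex' must already have been created at this size and format
     * with glTexImage2D; only the pixel data changes each frame. */
    void upload_frame(GLuint tex, const unsigned char *frame)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* rows are tightly packed */
        glTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, 0,                   /* x/y offset within the texture */
                        FRAME_W, FRAME_H,
                        GL_RGB, GL_UNSIGNED_BYTE,
                        frame);
    }

Updating with glTexSubImage2D avoids reallocating the texture every frame, which is the main reason to prefer it over calling glTexImage2D repeatedly.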

AVR Software
This took a lot less time than I anticipated.  I started by writing some code to drive the serial port.  It's interrupt-driven, and uses two fixed-length buffers.  Client code just checks whether a buffer is available (i.e. empty for transmit, full for receive), gets the buffer and works with it.  I wanted to bury the printf/scanf-style formatting inside the driver calls, but the variable-argument-list version of sscanf (vsscanf) isn't present in avr-libc - it seems to be the odd one out.  Instead, I made the buffer directly accessible, so the client code can sprintf and sscanf into it.
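
From a task's point of view, the usage pattern is something like this (a sketch of the idea rather than my exact API - all the function names and the commands are invented for illustration):

    #include <stdio.h>

    /* Provided by the interrupt-driven serial driver: each call either
     * hands back a buffer the caller may use, or NULL if none is ready. */
    extern char *serial_get_tx_buffer(void);   /* empty buffer, ready to fill */
    extern char *serial_get_rx_buffer(void);   /* full buffer, ready to parse */
    extern void  serial_release_tx(void);      /* queue the filled buffer     */
    extern void  serial_release_rx(void);      /* mark the buffer as consumed */

    void telemetry_task(void)
    {
        /* Report something over the link, if a transmit buffer is free. */
        char *tx = serial_get_tx_buffer();
        if (tx != NULL) {
            sprintf(tx, "pos %d %d\n", 123, 456);
            serial_release_tx();
        }

        /* Parse an incoming command, if a complete one has arrived. */
        char *rx = serial_get_rx_buffer();
        if (rx != NULL) {
            int x, y;
            if (sscanf(rx, "goto %d %d", &x, &y) == 2) {
                /* ...hand the target off to the navigation code... */
            }
            serial_release_rx();
        }
    }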

The AVR code contains a neat little task scheduler now, which polls around the task list (specified in the call to 'start'), checking whether any of them needs to run this tick.  Each task returns the number of idle ticks it wants before its next run - which is simply specified using something like 'ticks_per_s / 40 - 1' for a 40Hz task.  After all of the tasks have been polled, the scheduler waits until the 'tick' flag is set, then clears it immediately.  The tick flag is set by an interrupt running at 5ms intervals.  The watchdog timer is set to expire after 1/4 second, and calls a routine to set some LEDs on the board to indicate an overload, but not to reset the unit.
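
If that's hard to picture, the main loop boils down to something like this (a simplified sketch - the names and the two example tasks are illustrative, and the timer setup and watchdog handling are left out):

    #include <stdint.h>

    #define TICKS_PER_S 200u                 /* 5 ms tick from the timer interrupt */

    typedef uint16_t (*task_fn)(void);       /* returns idle ticks wanted before the next run */

    struct task {
        task_fn  run;
        uint16_t idle;                       /* ticks remaining until it runs again */
    };

    volatile uint8_t tick_flag;              /* set by the 5 ms timer ISR */

    static uint16_t clock_task(void)
    {
        /* ...print the system time on the serial port... */
        return TICKS_PER_S - 1;              /* run once per second */
    }

    static uint16_t motor_task(void)
    {
        /* ...update the motor PWM outputs... */
        return TICKS_PER_S / 40 - 1;         /* run at 40 Hz */
    }

    void start(struct task *tasks, uint8_t n)
    {
        for (;;) {
            for (uint8_t i = 0; i < n; i++) {
                if (tasks[i].idle == 0)
                    tasks[i].idle = tasks[i].run();   /* the task says when it next wants to go */
                else
                    tasks[i].idle--;
            }
            while (!tick_flag)
                ;                            /* idle until the next 5 ms tick */
            tick_flag = 0;
        }
    }

    int main(void)
    {
        struct task tasks[] = { { clock_task, 0 }, { motor_task, 0 } };
        /* ...set up the 5 ms timer interrupt and the watchdog here... */
        start(tasks, 2);
    }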

There are quite a few tasks now (I started by writing a clock, which shows the system time on the serial port - it seemed to keep pretty good time!).  I've also written driver code for just about everything on the control board.  The tasks don't concern themselves with the hardware, but use the driver code to deal with it.  Likewise, they don't have anything to do with interrupts.

Capabilities
As of this moment, the robot can be remote-controlled, can navigate to targets accurately (I calibrated the geometrical parameters, so it's quite good now), and is capable of following a sufficiently dark line on a sufficiently light floor.  I still need to make it automatically pick up a generic 'floor' and 'line' and follow the line (assuming the two are different enough).

The navigation showed up a problem with the heading vector - it slowly grew in size and overflowed.  This might have been why the prototype had trouble navigating after a while.  I've stuck in my successive approximation routine to recalculate the heading vector occasionally.  There's also some nice PWM calculation code in the motor control task - this allows the motors to be driven with a specific torque, rather than just a raw PWM value (which is meaningless on its own).
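
I won't reproduce the routine here, but the idea is roughly this: find the vector's current magnitude with a bit-by-bit (successive approximation) integer square root, then scale both components back to a fixed nominal length.  A sketch with invented names, assuming a 16-bit heading vector kept near a nominal magnitude:

    #include <stdint.h>

    #define HEADING_NOMINAL 16384            /* target magnitude of (hx, hy) */

    /* Successive-approximation integer square root: build the result
     * one bit at a time, from the top bit downwards. */
    static uint16_t isqrt32(uint32_t v)
    {
        uint16_t root = 0;
        for (int8_t bit = 15; bit >= 0; bit--) {
            uint16_t trial = root | (uint16_t)(1u << bit);
            if ((uint32_t)trial * trial <= v)
                root = trial;
        }
        return root;
    }

    /* Pull the heading vector back to its nominal length, so that
     * accumulated rounding can't make it grow until it overflows. */
    void heading_renormalise(int16_t *hx, int16_t *hy)
    {
        int32_t  x   = *hx, y = *hy;
        uint16_t mag = isqrt32((uint32_t)(x * x + y * y));
        if (mag == 0)
            return;
        *hx = (int16_t)((x * HEADING_NOMINAL) / mag);
        *hy = (int16_t)((y * HEADING_NOMINAL) / mag);
    }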

Wheel speed is measured using what is effectively a moving-average filter.  Each tick, when the wheel sensors are sampled, a bit is set at the low end of a 'forward' or 'reverse' register.  These are then shifted up one bit, and the 15th bit is checked in each.  Depending on the current direction of movement and the value of those 15th bits, a running total is adjusted (this removes the need to count up the number of 1s in each register every time a speed is required).  The total is multiplied up and returned when requested.
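
In code, the idea looks something like this for one wheel (a sketch with invented names; the real thing keeps a pair of registers per wheel, as described above):

    #include <stdint.h>

    /* 16-tick windows of 'the wheel sensor pulsed this tick' flags. */
    static uint16_t fwd_bits, rev_bits;
    static uint8_t  fwd_count, rev_count;    /* running number of 1s in each window */

    /* Called once per tick with the freshly sampled sensor state. */
    void wheel_speed_update(uint8_t pulsed, uint8_t reversing)
    {
        /* A bit is about to leave each window: knock it off the running total. */
        if (fwd_bits & 0x8000) fwd_count--;
        if (rev_bits & 0x8000) rev_count--;

        /* Shift the windows up and insert the new sample at the bottom. */
        fwd_bits <<= 1;
        rev_bits <<= 1;
        if (pulsed) {
            if (reversing) { rev_bits |= 1; rev_count++; }
            else           { fwd_bits |= 1; fwd_count++; }
        }
    }

    /* Net speed over the window, multiplied up to the caller's units. */
    int16_t wheel_speed_get(int16_t scale)
    {
        return (int16_t)(fwd_count - rev_count) * scale;
    }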

Networking
The robot is now running an OpenSSH server, so it's simple to log in and control it without having a monitor and keyboard plugged in.  I eventually got the wireless adaptor to work in ad-hoc mode (the current kernel driver won't work except in managed mode).  I used the old driver from BerliOS, which was in use before the mac80211 one that ships with the kernel was available.  I had to make a number of changes to the code to get it to compile against the current kernel (the latest Lucid kernel version, that is).  If anyone wants more information on that, I'd be happy to explain my modifications.

I then found out that my laptop doesn't like ad-hoc connections through network-manager either, so I set the connection up using wireless-tools, as I'd done on the robot.  I also needed to reload the module before it would work.  Again, let me know if you want the specifics on this.

Guidance Maths
I have to admit, this was bodged a little, but it works really well!  For the distance to the target, I used the crude approximation R = max(a, b) + 0.5 * min(a, b), where a and b are the absolute values of x and y.  The maximum error is 11.8%, and the error is always positive (it never underestimates).  This works pretty well - it's only used for controlling speed when the robot's near the target and not pointing in the correct direction.  It's also good because the AVR has no barrel shifter, and this approximation only needs one shift and one addition, so it's really quick.
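
For the record, it's cheap enough to write out in full (a sketch; the real code works in whatever fixed-point units the navigation uses):

    #include <stdint.h>
    #include <stdlib.h>

    /* Approximate sqrt(x*x + y*y) as max + min/2.
     * This overestimates by at most 11.8% (worst case when min = max/2)
     * and never underestimates - good enough for throttling the speed
     * near a target.  It costs one compare, one shift and one add. */
    uint16_t dist_approx(int16_t x, int16_t y)
    {
        uint16_t a  = (uint16_t)abs(x);
        uint16_t b  = (uint16_t)abs(y);
        uint16_t mx = (a > b) ? a : b;
        uint16_t mn = (a > b) ? b : a;
        return mx + (mn >> 1);
    }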

The angle is approximated using the tangent, measured from the closest quadrant boundary, treating the arctangent as a straight line from 0 to 1 over the first octant - this has a maximum error of about 4 degrees, but no discontinuities.  A better approximation uses the larger of |x| and |y| and is based on sin(x) = x, matched at the crossover points between the octants - that gets the error down to about 2 degrees, but it's far more complex: the tangent version inherently discounts the magnitude, whereas matching up the better approximation requires some irritating scale factors and would be very challenging on an AVR.  I can live with a single division (even if it is 32-bit).
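
The tangent-based version ends up looking something like this (sketch only - I've written it as a complete atan2 replacement, and the choice of 1024 angle units per full circle is just for illustration):

    #include <stdint.h>
    #include <stdlib.h>

    /* Approximate atan2(y, x) in units of 1/1024 of a full circle
     * (so 256 = 90 degrees).  Within each octant the arctangent is
     * treated as a straight line, which keeps the worst-case error
     * around 4 degrees but needs only one 32-bit division. */
    uint16_t angle_approx(int16_t x, int16_t y)
    {
        uint16_t ax = (uint16_t)abs(x);
        uint16_t ay = (uint16_t)abs(y);
        uint16_t a;

        if (ax == 0 && ay == 0)
            return 0;                       /* undefined; pick something harmless */

        /* Angle measured from the nearer axis, 0..256 across the first quadrant. */
        if (ay <= ax)
            a = (uint16_t)(((uint32_t)ay << 7) / ax);          /* 0..128   */
        else
            a = (uint16_t)(256 - (((uint32_t)ax << 7) / ay));  /* 128..256 */

        /* Reflect into the correct quadrant based on the signs. */
        if (x >= 0)
            return (y >= 0) ? a : (uint16_t)((1024 - a) & 1023);
        else
            return (y >= 0) ? (uint16_t)(512 - a) : (uint16_t)(512 + a);
    }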

Line Following
This caused me a fair headache.  After trying several times to write smart code, I decided to go back to basics and follow a line that reads darker than 128 on a floor that reads lighter than 127 - i.e. a fixed threshold in the middle of the sensor range.  I also changed the hardware, to put all three floor sensors close together and forward of the robot's centre.  This makes it much easier - before, the side sensors were somewhere around the wheels (in front and a little further in), and the middle sensor was near the centre of the robot.  The angle between the three was very large, so the robot couldn't actually 'see' the floor directly in front.

I've positioned the sensors so that the centre one can sit on a line, with the side ones a little off it.  This allows automatic adjustment (as both values can be seen), and means the robot only needs to make small adjustments to its heading.  The sensors are also spaced so that a side sensor will 'see' the line before the centre one leaves it - so the robot can see the line in 5 different positions.  The other three states correspond to "sideways to a line" (all 'line'), "not on a line" (all 'floor'), and an odd pattern where the side sensors are over lines but the centre isn't.  The first case isn't acknowledged in the software (one side simply overrides the other); the second is detected, and the code looks at which side sensor last saw the line, if any; the last case isn't handled either.  In any case, it works quite well, and can manage to turn around at the end of a line, and navigate cross-over sections.
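
Roughly, the decision boils down to a few tests on the three thresholded sensor bits (a sketch with made-up names - the real task also deals with speed, end-of-line turns and so on):

    #include <stdint.h>

    #define SENSOR_THRESHOLD 128             /* readings below this count as 'line' */

    enum steer { STEER_LEFT, STEER_STRAIGHT, STEER_RIGHT };

    static enum steer last_seen = STEER_STRAIGHT;   /* which way the line went when it was lost */

    /* left/centre/right are raw 8-bit floor sensor readings. */
    enum steer line_follow_decide(uint8_t left, uint8_t centre, uint8_t right)
    {
        uint8_t l = (left   < SENSOR_THRESHOLD);
        uint8_t c = (centre < SENSOR_THRESHOLD);
        uint8_t r = (right  < SENSOR_THRESHOLD);

        if (l) {                             /* line off to the left (if both sides see it, left wins) */
            last_seen = STEER_LEFT;
            return STEER_LEFT;
        }
        if (r) {                             /* line off to the right */
            last_seen = STEER_RIGHT;
            return STEER_RIGHT;
        }
        if (c)                               /* centred on the line */
            return STEER_STRAIGHT;

        return last_seen;                    /* all 'floor': turn towards where the line was last seen */
    }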


I've rattled on now for far too long, and it's lunchtime.  Then I'd better get started with my report.  I'll try to come back sooner this time, and write something short, sweet, and informative!
