
Sunday, March 23, 2014

Vertical control

Vertical control is still not at a state that I'm happy with. We recently took one very important step in the right direction by making standalone altitude control and navigation altitude control use the same code and settings. At least with this, there is only one thing that requires tuning. After a fair bit of work, I'm pretty happy with the result. If you just want to see the video, here it is:


Sixteen state INS

One issue I see is that the accelerometer bias is not reproducible enough flight to flight, and because this is integrated into velocity it creates a non-trivial amount of error. In the review I linked above, the complementary filter that is used for the vertical axis (based on ardupilot, to give credit where it is due) estimates the accelerometer bias. However, when running navigation (and thus the INS algorithm) the accelerometer bias is not tracked. The most obvious consequence of this is that when you enable position hold mode (or something else) you initially get a throttle surge or dip. After the error drops enough, the outer position control kicks in and you get reasonably sensible behavior.

One solution to this, which has already been merged, is the zero accel button. This will let you correct the error, but because the bias changes flight to flight it will only be a short term fix.

This is more of a problem, though, for landing, when only the velocity is controlled. If the descent rate is 1 m/s and the velocity axis has a 0.5 m/s error, you get quite a large error in landing rate and either a long wait or quite a bump.

We had previously played around with a 16 state INS that added the accelerometer biases, but it did not work terribly well. One issue was that when the board is sitting still, there is essentially a manifold of valid states - the board could be upside down if the accel bias were set to 16 m/s^2, for example. Unobservable systems are very bad for control. There was a fairly simple way of fixing this: I made the state equations for the X and Y accel biases drift towards zero. This allows them to reach non-zero values (when flying around and really accelerating there might be good evidence for it) but keeps them fairly well grounded near zero. This is appropriate, too, because the accel bias does not get that far from zero in a reasonably calibrated system.
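In equations, the idea is roughly the following (my notation, a sketch rather than the exact firmware model): instead of modeling the bias as a pure random walk, $\dot b_{xy} = w_b$, the state equation becomes

$\dot b_{xy} = -\tfrac{1}{\tau} b_{xy} + w_b$

where $\tau$ is a long time constant and $w_b$ is the process noise. A pure random walk leaves the bias free to wander wherever the filter likes; the decay term anchors it near zero unless the measurements provide sustained evidence otherwise.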

The second problem turned out to be some bugs in the automatically generated efficient version of the covariance prediction. Manually finding the five typos in this wall of code (and this is only about a third of it) was not the most fun I've had in the last few weeks. However, after fixing that and a few other tweaks it seems to be working quite well. Manually miscalibrating the system with an accelerometer bias of even 1 m/s^2 is accounted for perfectly, and after thirty seconds the error in the vertical velocity falls to less than 0.1 m/s.

Of course, going to a 16 state INS introduces a higher CPU and memory load. However, on Sparky the INS still accounts for less than a quarter of the CPU utilization, which seems reasonable if it results in a good state estimate.

INS Testing and tuning


To test this version of the INS, I got some logs with the Overo on Freedom and visualized the output of the INS and the quad movement. It seems to track quite well:


Now one issue I'm running into is that when I tune the baro variance to make sure it tracks the altitude well, that couples the baro noise into the velocity estimate. This seems to be limiting the tuning settings I can get away with.

Vibrations and Fourteen state INS


Since the main bias we have issues with is the vertical one, at least for now, I wanted to avoid burning cycles on the horizontal biases (although it is quite possible we will go back on this eventually). With some help from Dale (the original INS author) I jumped in and modified it into a fourteen state estimator, with only the vertical bias term being tracked. This wasn't too hard in the end, and it also does a great job of estimating the vertical velocity.

However, no matter what settings I used, the vertical hold was not as good as I wanted. Interestingly, it was much better on my micro-quad. I finally broke down and moved Freedom from my POS HT-FPV frame, which shakes like crazy, to my QAV500. I hadn't flown that much because its pancake motors were not playing nicely with my ESCs and I hadn't had the time to tune them up. However, I switched to some SimonK ESCs:



And now no motor failures in flight. With this frame, suddenly the altitude control was much better. This was true using both the complementary filter and the new 14 state INS.

Tuning

You can do all this tuning with the default StateEstimation setting (complementary, raw), which runs the simple inertial filter for the vertical. Once the updated INS is merged you can also tune in INSIndoor mode, but for now I would avoid that with the 13 state filter, since the velocity bias makes the next step a bad idea.

Next, the altitude control loop should be tuned. Ultimately I found the default values were pretty close to what I wanted but the exercise was interesting. Lux (in IRC) recommended a method from baseflight which is very analogous to how I recommended tuning the main control loop (see this video for a tutorial I did years ago).

Basically, first you set AltitudeHoldSettings.AltitudeKp to zero and fly altitude hold. In this condition it will drift up and down a bit randomly, but should otherwise behave well. Then you gradually increase AltitudeHoldSettings.VelocityKp until the motors begin to hunt and sound really twitchy. Once you hit that point, bring it back down a bit. I didn't feel the need to tune the Ki component, since it seemed to be working fairly well and not overshooting.

Once that is done, you can dial in AltitudeHoldSettings.AltitudeKp again. If the altitude is wandering too much, increase that term. If the whole system is jumping up and down and hunting again, reduce it. I found the default value of 1, down to 0.5, to be a pretty good range.
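For reference, here is a minimal sketch of the cascaded structure being tuned - an outer altitude P loop feeding an inner velocity PI loop. This is my own illustration in python of the scheme described above, not the firmware code, and the names simply mirror the settings:

class VerticalState:
    integral = 0.0

# outer loop: altitude error -> desired climb rate (AltitudeKp)
# inner loop: climb rate error -> throttle (VelocityKp, VelocityKi)
def vertical_control(altitude_error, climb_rate, state, dt,
                     altitude_kp=0.5, velocity_kp=0.3, velocity_ki=0.06):
    velocity_setpoint = altitude_kp * altitude_error
    velocity_error = velocity_setpoint - climb_rate
    state.integral += velocity_ki * velocity_error * dt
    return velocity_kp * velocity_error + state.integral

Setting altitude_kp to zero reduces this to the inner velocity loop alone, which is why the procedure above tunes VelocityKp first. (The gain values here are placeholders, not recommendations.)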

TL;DR: set Velocity Ki to about 1/5 of Velocity Kp and increase Velocity Kp until you see fast twitches, then back off. Then increase the outer gain until you start seeing oscillations and back off.

Vertical Control


KipK in IRC also asked for more aggressive control of altitude when flying altitude hold mode. This has been on my TODO list for a while, and I kept putting it off.

The current scheme waits for the stick to move out of and then back into the middle range before enabling vario mode, but most people have found this unintuitive. Instead I went ahead and added an exponential to the range outside the dead band. This allows more sensitive control in the middle and aggressive control for larger stick movements.

Previously, moving the sticks only shifted the setpoint. Instead I wanted something more like AxisLock, where in the middle range the setpoint stays the same and in the outer range you control rate instead of moving the setpoint. In control parlance this is full state control, where we pass both the desired position and the desired rate.
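As a rough sketch of how the stick can map to the setpoint and rate under this scheme (my own illustration with made-up constants, not the actual firmware code):

# map a [-1, 1] throttle stick to (new altitude setpoint, desired climb rate)
def vario_stick(stick, altitude_setpoint, dt,
                dead_band=0.1, expo=3.0, max_climb_rate=2.0):
    if abs(stick) < dead_band:
        # middle range: hold the current setpoint, zero desired rate
        return altitude_setpoint, 0.0
    # rescale the remaining stick travel to [0, 1]
    fraction = (abs(stick) - dead_band) / (1.0 - dead_band)
    # exponential shaping: gentle near the dead band, aggressive at the ends
    rate = max_climb_rate * fraction ** expo
    if stick < 0:
        rate = -rate
    # outer range: slew the setpoint and pass the desired rate through,
    # so the controller gets both position and rate (full state control)
    return altitude_setpoint + rate * dt, rate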

I thought I was pretty clever coming up with this scheme, but looking around it seems like everyone has converged on it for vario mode - which probably means it is a reasonable idea.

Here is a video of OsoGrande testing it out, and then myself. I'm pretty happy with the response, and he said it felt fairly natural to fly.


There was also a glitch with vertical control when doing position hold and RTH that caused a brief dip when engaged. The integral wasn't being properly preloaded with the hover throttle, but this should be fixed now.
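The fix amounts to seeding the integrator when the mode engages so that, with zero velocity error, the loop immediately outputs the current hover throttle rather than ramping up from zero. In terms of the controller sketch from the tuning section (again my illustration, not the actual firmware):

# on engaging position hold / RTH, preload the integrator so the loop
# starts out commanding the hover throttle instead of zero
def engage_vertical_control(state, hover_throttle):
    state.integral = hover_throttle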

Conclusion


So with all these changes, I think we are set to have some really clean navigation. Some care needs to be taken with vibration if you want a really clean hover; this is much easier on smaller quads, in my experience.

Saturday, March 22, 2014

Magnetometer handling in INS


The Tau Labs INS (written by Dale Schinstock) works really well and has been used for various navigation tasks over the last few years. However, one aspect that I have found problematic is the magnetometer handling. The way it currently works is that the Earth's 3D magnetic vector is rotated by the attitude to predict the 3D measurement we should get.

In principle that is a great idea, and on planes it is probably fine. However, on multirotors, which generate a lot of local field distortions from the motors, it is rather problematic. The reason is that as the Z (down) component is altered, the filter creates a bias in the roll and pitch components of attitude to deal with it. We already know (from years of experience flying the complementary filter) that the gyros and accels are sufficient to stabilize those components. As a workaround we have generally just used fairly high variances for the magnetometer. This isn't optimal, though, as I've seen the INS bias estimate for the Z axis sometimes destabilize, since it isn't sufficiently constrained by the mag while initially converging if things are fairly miscalibrated.

I have been wanting to alter the EKF for some time to only allow the magnetometer to influence the heading estimate, but the path wasn't obvious. Today I decided to sit down and implement this, and I'm fairly happy with the result.

Derivation


We want to measure just the heading aspect of the mag, in order to stop the mag from influencing attitude. Currently the predicted measurement of the magnetometer is

$\bar B_b = R_{be}(q) B_e$

So we want a version that only depends on the heading component of $q$. That turns out to be less trivial than it sounds. To achieve this, we essentially want to work only with magnetic information projected into a plane parallel to the earth (called the $h$ plane). The rotation $R_{be}$ can be decomposed into two matrices, $R_{be}=R_{bh} R_{he}$, where $R_{bh}$ applies the pitch and roll transformation and $R_{he}$ applies the yaw transformation. We can rotate the raw mag measurements to undo the influence of just the roll and pitch by using the inverse (in this case transpose) of $R_{bh}$, $B_h = R_{bh}^T B_b$, and then keep only the components in x and y.

To calculate both $R_{bh}$ and $R_{he}$, we can get the equations from the code. I then used this code in Matlab with the symbolic toolbox to get the matrix. It took a little bit of massaging afterwards to get an efficient implementation:

syms q1 q2 q3 q4 real
q = [q1 q2 q3 q4];

q0s = q(1) ^ 2;
q1s = q(2) ^ 2;
q2s = q(3) ^ 2;
q3s = q(4) ^ 2;

% elements of the body-to-earth rotation matrix Rbe
R13 = 2.0 * (q(:,2) .* q(:,4) - q(:,1) .* q(:,3));
R11 = q0s + q1s - q2s - q3s;
R12 = 2.0 * (q(:,2) .* q(:,3) + q(:,1) .* q(:,4));
R23 = 2.0 * (q(:,3) .* q(:,4) + q(:,1) .* q(:,2));
R33 = q0s - q1s - q2s + q3s;

roll = atan2(R23, R33)
pitch = asin(-R13)
yaw = atan2(R12, R11)

% compute the rotation matrix for the attitude component
% this is the matrix that will rotate from the earth frame to the
% current roll and pitch. The inverse will take the magnetometer
% reading back into a horizontal plane
sF = sin(roll); cF = cos(roll);
sT = sin(pitch); cT = cos(pitch);

% equivalent expressions that avoid the trig of inverse-trig round trip
% (and keep the symbolic toolbox from introducing imaginary numbers)
r = sqrt(R23^2 + R33^2)
sF = R23 / r;
cF = R33 / r;
sT = -R13;
cT = sqrt(1 - R13^2);

Rbe_bh = [cT,     0,   -sT;
          sF*sT,  cF,  cT*sF;
          cF*sT, -sF,  cT*cF]


Then we need to modify the EKF to predict a measurement in the horizontal plane. That aspect is pretty straightforward: it is the same equation, but keeping the yaw component instead of pitch and roll.

% direct equation
sP = sin(yaw); cP = cos(yaw);
Rbe_h = [cP, sP;
        -sP, cP];

% another expression that results in code that doesn't use imaginary
% numbers
x = R12;
y = R11;
r = sqrt(R12^2 + R11^2)
Rbe_h = [y/r, x/r; -x/r, y/r];

% the earth's field must be symbolic as well; only the two horizontal
% components are needed here
syms Be1 Be2 real
Be = [Be1; Be2];

% calculate the predicted B_h in the horizontal plane
B_h = Rbe_h * Be

% generate C code to compute this
ccode(B_h)

I ended up having to manually expand $\cos(\operatorname{atan2}(x,y))=\tfrac{y}{\sqrt{x^2+y^2}}$ for Matlab because it kept trying to use imaginary numbers to calculate the norm. The last line of this actually generates C code for the two components of $B_h$. It needed a fair bit of massaging after that to factor out common terms and get an efficient implementation.

The other thing needed for the INS is the derivative of the measurement with respect to the state, in this case $\tfrac{\partial B_h}{\partial \mathbf{q}}$.

% and compute the derivatives with respect to the attitude quaternion
ccode([diff(B_h,q1)'; diff(B_h,q2)'; diff(B_h,q3)'; diff(B_h,q4)'])

Again, the Matlab symbolic toolbox came to the rescue, saving me from a lot of tedious algebra, although once more lots of massaging was required.

Testing


So again, the Freedom Overo logs saved me tons of time, since I can replay previous flights through the new filter and see how it performs. It performs really nicely. The nice thing about this is that we can tighten up the variances on the mag and make sure the convergence behaves well, without worrying about the magnetic field biasing the attitude as we go.

Here is the raw magnetic heading plotted with the INS output, which you can see track together nicely.

I also loaded it up on my Freedom and power cycled it a bunch of times, as well as letting it sit for an hour. Rock solid - none of the oscillations I would sometimes see with the previous code - and now I can keep the gyro bias estimation on reliably. With a good accel calibration and board leveling, you get a perfect PFD:



This is with minimal tuning or tweaking, so that is great. I'll need to play around with it a bit in flight, but I'm pretty optimistic about this - and hopefully it will reduce the rare times we do see toilet bowling. I also need to play around again with the mag compensation code, although hopefully it won't be necessary now.

It was also a really fun day project. I've played with Kalman filters before but never had to go quite this deeply into the extended Kalman filter. I also learned some more about working with quaternions and rotation matrices, so that was productive.

Saturday, March 15, 2014

Logging and Python parsing

Current state


One really common task for anything beyond normal LOS and FPV flying is the need to analyze log files. We have a fairly nice facility for doing this - using Matlab to parse log files. Running build matlab generates LogConvert.m, which will parse log files into .mat files for analysis. That works quite well, but there are a few problems:

  1. not everyone has matlab (although octave is free)
  2. you need a LogConvert.m that matches the git hash you have
  3. impossible to make standalone applications using this
  4. matlab is terribly slow for processing video or generating videos
  5. slow for parsing the high resolution overo logs, as well as lacking the CRC checks 
Luckily, Stac came to the rescue.

Getting logs


So the easiest way to get logs is through Android (it logs all connected flights) or with GCS. This is what I'd recommend for most people.

But what do you do if you want better data? My go-to option is Freedom, which logs everything and I think is awesome. However, since I haven't finished that, or made more than two of them, it doesn't help most people.

Another option is something I came up with a while ago, using OpenLog. It's limited but useful. 

Python parsing and plotting


Stac wrote a python version of the log parsing code that detects the appropriate git hash from the log file, checks out the xml files defining the UAV objects at that revision, and generates the needed classes on the fly - the sort of really cool run-time class generation python can do. This was merged here.

To parse a log file that was generated from either Android or the GCS, simply run (from the Tau Labs repository directory):

$ python python/logview.py path/to/file.tll

This launches IPython - an interactive python environment - where you can proceed with data analysis and visualization. It should give you a prompt like this:


There will be a few variables in your workspace at this point. The most important is uavo_list, which contains all of the parsed log entries of all the object types. To get the data for a particular UAVO, use the as_numpy_array method. You have to pass it the UAVO_ObjectName class, which the appropriate elements will be cast to. This will create a numpy.ndarray that can be indexed like a set. For example, to get the accelerometer data and plot the z component:
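Something like the following (the generated class name UAVO_Accels and its field names here are my guesses - check the generated classes for the exact names):

import matplotlib.pyplot as plt

# cast the matching log entries to the generated Accels class, producing
# a numpy record array with one entry per logged sample
accels = uavo_list.as_numpy_array(UAVO_Accels)

# plot the z axis acceleration against time
plt.plot(accels['time'], accels['z'])
plt.xlabel('time (s)')
plt.ylabel('accel z (m/s^2)')
plt.show()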



I won't delve further into the details of plotting in python since I'm sure that is covered better elsewhere.

However, a side note for processing either Overo logs or logs recorded with OpenLog and my Logging branch. These files have two differences: they enable the embedded timestamps in the UAVTalk message (described on the wiki) and do not write the githash into the file. The latter is pretty annoying because you have to keep track of which version of the firmware your log files were created with, so hopefully I can fix that later. To parse these log files use this command:

$ python python/logview.py -t -g GITHASH_OR_BRANCH_NAME path/to/file.tll

The -t switch tells it to use the internal timestamps (which are more accurate, since they mark when the FC sent the data) and the -g switch tells it which git hash to get the set of xml files from.

Generating videos


I've been developing a set of methods that take these log files and generate a video of graphs. For general analysis, I find overlaying data on top of video of the flight really informative. I also think this could evolve into a cool set of overlays for FPV flights.

At the core of this is the matplotlib.animation library. You write a function that will plot (or manipulate a plot to) what you want at a given time stamp. For example, I often shift plots to center on that time stamp, showing a brief segment around it with markers at the current time:

import matplotlib.pyplot as plt

# Plot a segment of the path: center the x axis on the current time stamp
# and move the vertical time markers to it
def update_img(t, mark_0=mark_0, mark_1=mark_1, mark_2=mark_2, ax=ax):
    # convert to minutes
    t = t / 60

    # show a window of about half a minute around the current time
    plt.xlim(t - 0.25, t + 0.25)

    mark_0[0].set_xdata([t, t])
    mark_1[0].set_xdata([t, t])
    mark_2[0].set_xdata([t, t])

Then you want to run this over a set of times and generate the video. To do this, I use something like this:

import numpy as np
from matplotlib import animation

fps = 30.0
dpi = 150
t = np.arange(t0_s, t1_s, 1 / fps)

ani = animation.FuncAnimation(fig, update_img, t, init_func=init,
                              interval=0, blit=False)
writer = animation.writers['ffmpeg'](fps=30)
ani.save(output_filename, dpi=dpi, fps=30, writer=writer,
         savefig_kwargs={'facecolor': 'green'})

The blit=False setting and this set of codecs seem to work pretty well on OSX, but your mileage may vary. Now, because I typically want to use this to create an overlay, I use a green background everywhere. If I have multiple axes that I am plotting, this is what I use:

# white text and labels on a green (chroma key) background for every axis
for a in ax:
    a.set_axis_bgcolor('green')
    a.xaxis.label.set_color('white')
    a.yaxis.label.set_color('white')
    a.spines['bottom'].set_color('white')
    a.spines['top'].set_color('green')
    a.spines['right'].set_color('green')
    a.spines['left'].set_color('white')
    a.tick_params(axis='x', colors='white')
    a.tick_params(axis='y', colors='white')

This will use white text and labels and set the background of each axis to green. Notice also the savefig_kwargs={'facecolor':'green'} argument in the ani.save command, which is also important.

Here are some of the results:


Finally, I combine this all in Premiere or iMovie (although the latter can only handle one overlay and doesn't allow positioning), and get something like this:


Here are some of the scripts I used to generate this:

  • https://gist.github.com/peabody124/9571409 - generates the altitude overlays
  • https://gist.github.com/peabody124/9571455 - generates the position overlays

Next steps


As is, this is a really powerful tool. However, what I think would be really awesome is to organize some scripts to generate a bunch of nice overlays, and then write standalone python scripts that parse the log and spit out a video instead of going through IPython. The end goal would ideally be compiling to standalone applications, or alternatively integrating into the GCS and calling out to python (plausible?).

Monday, March 10, 2014

Microquad update

This is a follow-up on my previous post about my foldable microquad from Hovership. I added a Mobius camera and a 3DR radio so I could get telemetry. You can also see how I mounted the foam.



Probably not necessary, but I added some moon gel under the Mobius for a tad more vibration damping.



Finally, the arms were replaced with the latest version, which lets them go on the right way up. Note the little channel for the wire so it doesn't add any strain.



Camping

I took it camping, and my friend had a nice camera and got some great shots of it in the air. I really like these.






The video from the Mobius is pretty good.


All the video that is not stabilized was from the microquad. It was quite windy on top of that rock and I was super impressed with how it handled it. Then I was testing it out with our new code for altitude hold, and that's even more impressive. I need to get some FPV gear on it to try out this mode better.

Tuesday, March 4, 2014

EEG and decoding

Following up on the EEG I designed and built, I have been playing around with it to try some basic brain control work.

EKG Testing

First I tested it as a basic EKG. I did this partly because it is a really strong signal and an easy check, and partly because I wanted to see how resistant to movement the system is. I was fairly pleased with the results. Even when I was running, it produced a fairly robust signal and the heart rate was easily detectable. I was also pleased to see the 60 Hz level was fairly low in this situation, which is consistent with my theory that what I see is mostly RF pickup in the recording leads rather than a problem with the referencing.


EEG Electrodes positioning

So on to some basic brain activity classification. The first step was to reduce the noise, which means less hair.


I'm now using AgCl electrodes (an improvement on my washers), recommended by Chip. Putting them on is a fairly annoying and messy affair, but it only takes about 10 minutes.






My electrode placement was as follows:
  • Bias Black - right ear
  • Ch1 Light Gray - LV1
  • Ch2 Dark Gray - RV1
  • Ch3 White - left lower parietal
  • Ch4 Pink - right lower parietal
  • Ch5 Blue - left middle parietal
  • Ch6 Green - right middle parietal
  • Ch7 Yellow - left upper parietal
  • Ch8 Orange - right upper parietal
  • Ref Red - forehead
Using the impedance monitoring, I first saw that the occipital electrode impedances were high. This was easily fixed by adding more paste, and the monitoring confirmed the impedance was then normal. However, my upper parietal electrodes on both sides were also running fairly high. This was quite apparent when I walked to a part of my apartment with more electrical noise. Since these are obviously over the leg region, this was a problem. However, nothing another hairband from my girlfriend cannot fix.

In the long run, I'm definitely going to need something better to position the electrodes. I'm thinking it is time to drag out some old MRIs I got done during neuroscience experiments and use my CNC to build a solution.

Theta Waves

I have previously shown that with washers I can get a pretty nice modulation in the theta band from the occipital cortex when I close and open my eyes.

Brain control

To get the ball rolling with brain control, I wanted to see if I could classify real movement. This obviously has the potential confound of the motor activity directly modulating the EEG, but let's ignore that for now. I wrote a stimulus for the android app that rotates through telling me to move my right arm, right leg, left arm and left leg. This produces a log file that automatically has the stimulus information synchronized to the EEG traces. I ran a short preliminary experiment and analyzed it, which was promising, so I then ran a longer 15 minute training session. Also for my own reference, this is the mapping from stimulus number to command:
  • stim 2 - left arm
  • stim 3 - right arm
  • stim 4 - left leg
  • stim 5 - right leg


For the analysis I extracted all of the two second segments for each stimulus and each channel and performed a Welch spectral estimate. I then took the frequency bins between 1 and 30 Hz and concatenated those features from all the channels to create a 240 dimensional feature vector for each trial. This processing of the log files was done using the nice python parsing framework we just merged into Tau Labs. I then saved the parsed data in a matlab structure, since I'm more comfortable doing the machine learning there (although hopefully I'll get there with SciPy and Python).
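A sketch of that feature extraction step in python (the sample rate and windowing here are assumptions, not necessarily what I used):

import numpy as np
from scipy.signal import welch

FS = 250.0  # assumed sample rate; the real hardware rate may differ

def trial_features(segment):
    # segment: (n_channels, n_samples) array covering one two second trial
    # Welch power spectral density, one spectrum per channel; nperseg = FS
    # gives 1 Hz frequency bins
    freqs, psd = welch(segment, fs=FS, nperseg=int(FS))
    # keep the 1-30 Hz bins and concatenate across channels into a single
    # feature vector (8 channels x 30 bins = 240 dimensions)
    keep = (freqs >= 1) & (freqs <= 30)
    return psd[:, keep].ravel()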

A first-pass glance at the average response in each of the four conditions shows what appears to be a difference, although of course this lacks any statistics.



Since I didn't run tons of trials, I wanted to reduce the dimensionality a little for the classifier, so I picked the three frequency peaks from all the channels to feed into it. This reduced things to a 24 dimensional space.

I used variational Bayesian logistic regression, which I used in some of my thesis work and which seems to work quite well for building well-regularized classifiers. I would collect all the trials from two stimuli and, using leave-one-out cross validation, measure how well I could classify the EEG feature vectors by the stimulus shown.
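For anyone wanting to reproduce the general approach, here is a sketch of the pairwise evaluation, with scikit-learn's ordinary L2-regularized logistic regression standing in for the variational Bayesian version I actually used:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

def pairwise_accuracy(X, y):
    # X: (n_trials, n_features) features for the trials from two stimuli
    # y: the stimulus label of each trial
    clf = LogisticRegression()  # stand-in for VB logistic regression
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    return scores.mean()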

This is the performance of the classifier with 267 trials for all the pairwise comparisons:

        RA      LL      RL
LA  0.5469  0.5386  0.5625
RA       -  0.5381  0.4766
LL       -       -  0.6394

This shows the average performance is around 0.55 - better than chance, but not anywhere near as good as I would like. Interestingly, the performance in my preliminary analysis was better:

        RA      LL      RL
LA  0.3333  0.6111  0.7222
RA       -  0.5278  0.7500
LL       -       -  0.5278

Although one condition was doing poorly, most of them were doing better, despite having many fewer trials. My guess is that electrode placement played a role and I wasn't properly over the motor cortex in the more recent attempt. Also, I think my upper parietal electrodes were a bit low. Dropping the occipital electrodes improved the performance on the larger experiment, most likely by reducing overfitting:

        RA      LL      RL
LA  0.5469  0.6475  0.5781
RA       -  0.6304  0.5000
LL       -       -  0.7003

This brought the average up to 0.6. However, that is still not enough to be flying with.

Going forward

  • I think I'll try putting all the electrodes over the parietal or motor cortex and not bother with occipital electrodes.
  • Longer training sessions, as well as one second stimulus presentations, to try and get the trial numbers up.
  • Different classifiers. From what I have seen with this number of included features, there is not enough regularization.
  • Run the classifier continuously on the data rather than on individual trials.
  • Once that is working, write some online visualization of the classifier output.