You are currently browsing the monthly archive for February 2009.
I decided to make a rough performance comparison between my approach and the Boost Accumulators API. A reasonable test is to procedurally generate 10^7 random numbers in the range [0.0..1.0] as the input data. We compare 3 algorithms –
- No moments accumulated – baseline, just generates the input
- Accumulate all 12th order moments using Boost Accumulators
- Accumulate all 12th order moments using my vfuncs cpp code
We run on Linux using the time command to get rough timings; raw results are –
- Baseline – 1.16s
- Boost Acc – 23.16s
- Vfunc Acc – 2.44s
If we subtract the baseline we get a rough comparison of Boost ~ 22s and Vfuncs cpp ~1.3s when the cost of generating the input is factored out.
So the vfuncs impl is roughly an order of magnitude faster than the current v1.37.0 Boost accumulate stats implementation (both are single pass).
I don’t think the Boost algorithm is logically different; it’s probably more a case of template code complexity having an impact – the ~10s compile times might indicate this also. Executable size also reflects this, differing by an order of magnitude.
(Note: using gcc 4.2.4. I haven’t tried this on other compilers, build settings etc. – they could have a profound effect. Let me know if you see different results on e.g. the Intel or VC compilers.)
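For the curious, here is a sketch of the single-pass side of the benchmark – this is illustrative only, not the actual vfuncs code, and the struct/function names are mine. It generates uniform samples in [0, 1] and accumulates all the simple power sums in one pass; wrap a call to it in main and time it with the time command as above.

```cpp
#include <array>
#include <cassert>
#include <random>

// One-pass accumulation of the simple power sums sum(x_i^k) for k = 0..K.
template <int K>
struct PowerSums {
    std::array<double, K + 1> s{};  // s[k] holds the running sum of x_i^k
    void operator()(double x) {
        double p = 1.0;  // p walks through x^0, x^1, ..., x^K
        for (int k = 0; k <= K; ++k) { s[k] += p; p *= x; }
    }
};

// Mirror the benchmark's input: n uniform samples in [0.0, 1.0],
// fed one at a time to the accumulator.
template <int K>
PowerSums<K> accumulate_uniform(int n, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    PowerSums<K> acc;
    for (int i = 0; i < n; ++i) acc(dist(gen));
    return acc;
}
```

The inner loop is a handful of multiply-adds per element with no template machinery in the hot path, which is one plausible reason for the gap against the heavily generic Boost implementation.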
Some links while I have them at hand…
Compressed sensing basically gives a better practical way of recovering a signal from its samples than the Shannon sampling theorem suggests – given some structure (sparsity) in the data.
Recent methods using the L1 norm, as a compromise between L2 and L0, make this much more computationally feasible. New work by Tao, Candès and others gives some proofs of why this sparse sampling works so well.
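For reference, the standard form of the recovery problem in this literature is the combinatorial sparsest-solution problem and its convex L1 relaxation:

```latex
\min_{x} \|x\|_0 \ \text{ s.t. } Ax = b
\qquad \longrightarrow \qquad
\min_{x} \|x\|_1 \ \text{ s.t. } Ax = b
```

The L0 problem is combinatorial, while the L1 version is a linear program – hence "computationally feasible" – and under suitable conditions on A its minimizer coincides with the sparse solution.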
Useful compressed sensing links –
Continuing on the same topic as my previous post, it’s nice to be able to gather all the kth order moments in a single pass.
Last time I mentioned the boost/accumulators example, but you will have noticed two issues if you use it. Firstly, the moment<k> tag gives you the kth simple moment relative to zero, whereas we often want the kth central moment of a sequence, relative to the mean. Secondly, although Boost’s accumulator is well written, it does seem to take a while to compile [~12 seconds for code using moment<12>].
After some playing around I’ve got a faster, simpler approach, where the inner loop accumulates the kth powers of each element. After you’ve run the sequence through, you can then easily extract the variance and all the kth central moments. So in adding the more general case of kth moments, I’ve made the particular variance case simpler. That often seems to happen in programming and math!
First a bit of math and then the code. We want to express the kth central moment in terms of the k basic moments.
First, let’s define the kth basic (simple) moment as the power sum –

S_k = \sum_{i=1}^{n} x_i^k
We rearrange the binomial expansion –

\sum_{i=1}^{n} (x_i - \bar{x})^k = \sum_{i=1}^{n} \sum_{j=0}^{k} \binom{k}{j} (-\bar{x})^{k-j} x_i^j = \sum_{j=0}^{k} \binom{k}{j} (-\bar{x})^{k-j} S_j
So we have the kth central moment given as a weighted sum of the first k simple moments –

\mu_k = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^k = \frac{1}{n} \sum_{j=0}^{k} \binom{k}{j} (-\bar{x})^{k-j} S_j, \qquad \bar{x} = \frac{S_1}{S_0}
which shows that all we need to accumulate as we walk across the sequence are the simple power sums S_0, S_1, …, S_k.
Notice the variance is now handled as a special case where k=2. Likewise k=0 corresponds to n, the element count, and k=1 is the sum of the elements.
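As a quick sanity check of the k=2 case, expanding the weighted sum with S_0 = n and S_1 = n\bar{x} recovers the familiar one-pass variance formula:

```latex
\mu_2 = \frac{1}{n} \left( \binom{2}{0} \bar{x}^2 S_0 - \binom{2}{1} \bar{x} S_1 + \binom{2}{2} S_2 \right)
      = \bar{x}^2 - 2\bar{x}^2 + \frac{S_2}{n}
      = \frac{S_2}{n} - \bar{x}^2
```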
Here’s a basic impl of the above expression –
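(The original listing isn’t reproduced here; the following is my own minimal sketch of such an accumulator, with hypothetical names – the inner loop accumulates power sums, and the central moments are recovered afterwards via the binomial-weighted sum derived above.)

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Accumulates the power sums S_k = sum_i x_i^k for k = 0..max_order in one
// pass; any central moment mu_k = (1/n) sum_i (x_i - mean)^k is then a
// binomial-weighted combination of the S_j.
class MomentAccumulator {
public:
    explicit MomentAccumulator(int max_order) : s_(max_order + 1, 0.0) {}

    // Feed one element: update every power sum.
    void operator()(double x) {
        double p = 1.0;
        for (double& sk : s_) { sk += p; p *= x; }
    }

    double mean() const { return s_[1] / s_[0]; }

    // mu_k = (1/n) * sum_{j=0..k} C(k,j) * (-mean)^(k-j) * S_j
    double central_moment(int k) const {
        const double xbar = mean();
        double mu = 0.0;
        double binom = 1.0;                       // C(k, j), built incrementally
        for (int j = 0; j <= k; ++j) {
            mu += binom * std::pow(-xbar, k - j) * s_[j];
            binom = binom * (k - j) / (j + 1.0);  // C(k, j) -> C(k, j+1)
        }
        return mu / s_[0];
    }

    double variance() const { return central_moment(2); }

private:
    std::vector<double> s_;  // s_[k] = S_k; S_0 = n, S_1 = sum of elements
};
```

Usage is just `MomentAccumulator acc(12); for (double x : data) acc(x);` followed by `acc.central_moment(k)` for any k up to the chosen order.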
I was chatting with some other quant developers the other day, as you do, and the issue of calculating variances came up. It’s a pretty common operation, and the naive approach does two passes: one to calculate the mean, \bar{x}, and one to gather the squares of the differences from it. … but someone asked if it could be done in one pass, and of course it can, quite easily.
Recall, the population variance of a sequence x_1, …, x_n is defined as –

\sigma_n^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x}_n)^2
We can expand this and see how the variance after n terms differs from the variance after m = n+1 terms. Writing T_n = \sum_{i=1}^{n} x_i and Q_n = \sum_{i=1}^{n} x_i^2, the expansion gives –

\sigma_n^2 = \frac{Q_n}{n} - \left( \frac{T_n}{n} \right)^2

and we have \sigma_m^2 in terms of the same running sums, viz. –

\sigma_m^2 = \frac{Q_n + x_m^2}{m} - \left( \frac{T_n + x_m}{m} \right)^2

In other words T_m = T_n + x_m and Q_m = Q_n + x_m^2, so there’s very little we need to store – just the count and the two running sums – to accumulate the variance as we traverse the sequence.
Expressing this in C++ code, we’d have a functor that maintains state and gets handed each element of the sequence. In its simplest form –
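(Again the original listing isn’t reproduced here; a minimal sketch of such a functor, with names of my choosing, might look like this – it stores only the count and the two running sums from the derivation above.)

```cpp
#include <cassert>
#include <cmath>

// Single-pass population variance: feed each element to the functor, which
// maintains only n, T = sum(x_i) and Q = sum(x_i^2).
class VarianceAccumulator {
public:
    void operator()(double x) { n_ += 1.0; t_ += x; q_ += x * x; }

    double mean() const { return t_ / n_; }

    // sigma^2 = Q/n - (T/n)^2
    double variance() const {
        const double m = mean();
        return q_ / n_ - m * m;
    }

private:
    double n_ = 0.0;  // element count
    double t_ = 0.0;  // running sum of elements
    double q_ = 0.0;  // running sum of squares
};
```

One caveat worth noting: this textbook Q/n − (T/n)² form can suffer catastrophic cancellation when the mean is large relative to the standard deviation; Welford’s incremental algorithm is the numerically stable alternative if that matters for your data.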