You are currently browsing the tag archive for the ‘algorithm’ tag.

Just watched a really interesting documentary on the Flash Crash of 2010 :

Money & Speed: Inside the Black Box

Some stand-out points :
> CFTC / SEC attribute the root cause to Waddell & Reed ‘dumping’ $4.1bn of shares
> Eric Hunsader @ Nanex looks at the W&R trades [ see video at 34m18s ]
> W&R trades don’t look like dumping; they maximize sell price during local up-runs
> there are other trades that do look like aggressive dumping, i.e. rapid sequential bursts down

The Nanex explanation of the Flash Crash : FlashCrashAnalysis

Price manipulation ?

This raises an interesting Question :

Is it possible for a black-box algorithm to use a rational probabilistic strategy to drive down a stock’s price in bursts like this, in order to later buy the stock back at an artificially deflated price ?  You’d need a lot of stock to do this : is there a threshold of stock volume, say 5% of all stock, below which it’s impossible to create this effect ?

Price Delay Arbitrage ?

Another aspect of this is the possibility of doing ‘Diffusion Arbitrage’, for want of a better name :

If you can drive the market so quickly that derived instruments take seconds to reprice (due to the storm of new data), then you have that window to trade ahead of the market in options or indexes based on the underlying you have manipulated.

In this case the delay was a whopping 5 to 35 seconds : see for example Nanex’s FlashCrashSummary, showing the delayed drop and recovery of the Dow.

Been reading a bit about Gaussian processes and machine learning…

For a slightly related problem of matching buyers with sellers, or matching a large number of people on a dating site, the brute-force method would make NxN comparisons.  You have N points [seller/buyer or date-seeker etc] with d attributes such as age, sex, weight, height, income, location, interests, preferences… or price, quantity, product, location, expiry etc – these define the dimensions of the space in which the N points live.

If your data were one-dimensional, each item having only one attribute such as price or age, you’d simply sort on that attribute and do the match in a greedy fashion, complexity roughly O(NlogN).
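
For instance, a minimal sketch of the one-dimensional case – sort on the single attribute, then pair up adjacent items greedily (the names here are illustrative) –

    // Illustrative 1-D matcher: sort on the one attribute (e.g. price or age),
    // then pair adjacent items greedily.  The sort dominates at O(N log N).
    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    std::vector<std::pair<double, double>> match_1d(std::vector<double> attr) {
        std::sort(attr.begin(), attr.end());                        // O(N log N)
        std::vector<std::pair<double, double>> pairs;
        for (std::size_t i = 0; i + 1 < attr.size(); i += 2)
            pairs.push_back(std::make_pair(attr[i], attr[i + 1]));  // neighbours pair up
        return pairs;
    }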

In many problems d=5 or so [in biotech microarray assays, where there are thousands of gene probes, d could be as large as 60k], which means that the volume of the space grows incredibly large – if you partition the space into K parts along each dimension, that’s K^d subvolumes to match against each other (K=10 and d=5 already gives 100,000 cells), which gets huge very quickly.

There are normally continuity assumptions – if you take samples, you can get some predictive value from them.  A sample tells us something about nearby points, and this is really the basis of machine learning.  Another aspect of this is a theorem from compressed sensing and random matrices, which Terry Tao and others have proven, and which says, roughly, that in high dimensions random samples are surprisingly effective at exploring the space.  This could explain why evolution is so universally effective at finding ultra-optimised solutions and overcoming local maxima.

Back to the case of finding best matches of N points in a d-dimensional domain.  Let’s assume we have some, probably nonlinear, function f = fitness(P1, P2) defined for any two points.  One approach would be to take a smaller random sample from the N points, small enough that we can brute-force compare each sampled point against every other.  Then we sort on f and pick the best ones.  This gives us a lot of information about the space… because for any P1, P2 that match well, there will in all likelihood be surrounding points that also match well.

So using this geometric way of looking at the problem of finding best matches, I implemented a simple prototype in Python which does the following –

  • Take S sample pairs P1, P2 and evaluate f12 = fitness(P1,P2)
  • Sort on f12, and take the best matches
  • Make a small volume V1 around P1, V2 around P2
  • Take the best matches from all points Pa in V1 and Pb in V2
  • Brute-force any remaining points
  • Post-process by swapping P1 of one pair with P2 of another pair
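
The prototype itself is in Python; purely as an illustration of the same steps in C++ (the volume/neighbourhood refinement is left out, and Point, fitness() and the parameters are invented for the example), a sketch might look like this –

    // Illustrative sketch only: sample random pairs, keep the fittest, pair up
    // the leftovers, then improve by random partner swaps.  Point, fitness()
    // and the parameters are placeholders, not the actual prototype.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <utility>
    #include <vector>

    struct Point { double x, y; };                     // d = 2 here, for brevity

    double fitness(const Point& a, const Point& b) {   // toy fitness: closer is better
        double dx = a.x - b.x, dy = a.y - b.y;
        return 1.0 / (1.0 + std::sqrt(dx * dx + dy * dy));
    }

    std::vector<std::pair<int, int>> match(const std::vector<Point>& pts,
                                           int samples, int swaps) {
        std::vector<std::pair<int, int>> pairs;
        if (pts.size() < 2) return pairs;
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> pick(0, (int)pts.size() - 1);

        // 1. Take S sample pairs and evaluate their fitness.
        struct Cand { int a, b; double f; };
        std::vector<Cand> cands;
        for (int s = 0; s < samples; ++s) {
            int a = pick(rng), b = pick(rng);
            if (a != b) cands.push_back({a, b, fitness(pts[a], pts[b])});
        }

        // 2. Sort on fitness; greedily accept the best pairs whose points are free.
        std::sort(cands.begin(), cands.end(),
                  [](const Cand& l, const Cand& r) { return l.f > r.f; });
        std::vector<char> used(pts.size(), 0);
        for (const Cand& c : cands)
            if (!used[c.a] && !used[c.b]) {
                used[c.a] = used[c.b] = 1;
                pairs.push_back({c.a, c.b});
            }

        // 3. Deal with remaining points (here just naively paired off).
        std::vector<int> left;
        for (int i = 0; i < (int)pts.size(); ++i)
            if (!used[i]) left.push_back(i);
        for (std::size_t i = 0; i + 1 < left.size(); i += 2)
            pairs.push_back({left[i], left[i + 1]});

        // 4. Post-process: swap partners between two random pairs, keep improvements.
        std::uniform_int_distribution<int> pp(0, (int)pairs.size() - 1);
        for (int s = 0; s < swaps; ++s) {
            int i = pp(rng), j = pp(rng);
            if (i == j) continue;
            double before = fitness(pts[pairs[i].first], pts[pairs[i].second]) +
                            fitness(pts[pairs[j].first], pts[pairs[j].second]);
            double after  = fitness(pts[pairs[i].first], pts[pairs[j].second]) +
                            fitness(pts[pairs[j].first], pts[pairs[i].second]);
            if (after > before) std::swap(pairs[i].second, pairs[j].second);
        }
        return pairs;
    }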

The results were not as good as I hoped, but there are lots of improvements to make.  I used the total number of calls to the fitness() function as the complexity measure, and found that for N below roughly 200 brute force is more effective, while brute force on N=1000 already takes a long time to compute.  The sampling-pairs method seems to run in roughly NlogN, on samples 5 to 10k in size.  Compared against brute force, this initial implementation achieved around 80% of the maximum possible global fitness score.

So it’s promising and I’ll play with it and see if it can be improved in practical ways.  One of the nice things about it is that the fitness function can be very nonlinear, so the hotspots of high-fitness volumes are not something you can hard-code a routine for… yet random sampling finds them very well and requires no special handling.

I used a naive approach to post-processing – improving the match by swapping randomly chosen pair elements, and keeping the swap if it results in a better match.  This is basically a simple form of genetic optimisation or simulated annealing, so it’s pretty inefficient as a first implementation.

Interesting

My previous blog articles describe approaches to calculating kth moments in a single pass – see compute kth central moments in one pass and variance of a sequence in one pass.

I decided to make a rough performance comparison between my approach and the Boost Accumulators API.  A reasonable test is to procedurally generate 10^7 random numbers in the range [0.0..1.0] as the input data.  We compare 3 algorithms –

  • No moments accumulated – baseline, just generates the input
  • Accumulate all 12th order moments using Boost Accumulators
  • Accumulate all 12th order moments using my vfuncs cpp code
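
For a rough idea of the shape of the test, the Boost side might look like the sketch below – this is not the exact harness used for the timings, and the random-number source is just a stand-in; the vfuncs run is the same loop with the vfuncs accumulator swapped in –

    // Rough sketch of the Boost Accumulators case: push 10^7 uniform [0,1]
    // doubles through an accumulator_set collecting moments up to order 12.
    #include <boost/accumulators/accumulators.hpp>
    #include <boost/accumulators/statistics/stats.hpp>
    #include <boost/accumulators/statistics/moment.hpp>
    #include <cstdio>
    #include <cstdlib>

    using namespace boost::accumulators;

    int main() {
        accumulator_set<double,
            stats<tag::moment<2>, tag::moment<3>, tag::moment<4>, tag::moment<5>,
                  tag::moment<6>, tag::moment<7>, tag::moment<8>, tag::moment<9>,
                  tag::moment<10>, tag::moment<11>, tag::moment<12> > > acc;

        for (int i = 0; i < 10000000; ++i)
            acc(std::rand() / double(RAND_MAX));     // stand-in for the generated input

        std::printf("moment<12> = %g\n", moment<12>(acc));
        return 0;
    }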

We run on Linux, using the time command to get rough timings; the raw results are –

  • Baseline   –   1.16s
  • Boost Acc – 23.16s
  • Vfunc Acc –  2.44s

If we subtract the baseline we get a rough comparison of Boost ~ 22s and Vfuncs cpp ~1.3s when the cost of generating the input is factored out.

So the vfuncs impl is roughly an order of magnitude faster than the current v1.37.0 Boost Accumulators stats implementation (both are single pass).

I don’t think the Boost algorithm is logically different; it’s probably more a case of template code complexity having an impact – the ~10s compile times might indicate this too.  Executable size also reflects this, differing by an order of magnitude.

(Note : Using gcc 4.2.4.  I haven’t tried this on other compilers, build settings etc – they could have a profound effect.  Let me know if you see different results on e.g. Intel or VC compilers)

Download

Download the code and project – kth_moments.tgz – from the vfuncs Google Code downloads page.  [BSD license, feel free to use]

Continuing on the same topic as my previous post, it’s nice to be able to gather all the kth order moments in a single pass.

Last time I mentioned the boost/accumulators example, but you will have noticed two issues if you use that.  Firstly, the moment<k> tag gives you the kth simple moment relative to zero, whereas we often want the kth central moment of a sequence, relative to the mean.  Secondly, although Boost’s accumulator is well written, it does seem to take a while to compile [~ 12 seconds for code using moment<12>].

After some playing around I’ve got a faster, simpler approach, where the inner loop accumulates the kth powers of each element.  After you’ve run the sequence through, you can easily extract the variance and all the kth central moments.  So in adding the more general case of kth moments, I’ve made the particular variance case simpler.  That often seems to happen in programming and math!

algebra

First a bit of math and then the code.  We want to express the kth central moment in terms of the k basic moments.

First, let’s define the basic moment as –

\displaystyle M_{n}^{j}= \sum_{i=1}^n {x}_i^{j}

We rearrange the binomial expansion –

\displaystyle nv_{n}^{k}= \sum_{i=1}^n({x}_{i}-\mu_{n})^k

\displaystyle = \sum_{i=1}^n \sum_{j=0}^k \binom{k}{j} {x}_{i}^j(-\mu_{n})^{k-j}

\displaystyle = \sum_{j=0}^k \binom{k}{j} (-\mu_{n})^{k-j} \sum_{i=1}^n {x}_{i}^j

So we have the kth central moment given as a weighted sum of the kth simple moments –

\displaystyle v_{n}^{k} = \frac{1}{n} \sum_{j=0}^k \binom{k}{j} (-\mu_{n})^{k-j} M_{n}^{j}

which shows that all we need to accumulate as we walk across the sequence are the simple power sums M_{n}^{j} = \sum_{i=1}^n {x}_{i}^{j} for j = 0 \ldots k .

Notice the variance is now handled as the special case k=2.  Likewise the j=0 power sum is just n, the element count, and the j=1 sum is the sum of the elements.

c++ impl

Here’s a basic impl of the above expression –

Read the rest of this entry »
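
In outline, the accumulate-powers idea looks something like the sketch below (names are illustrative; this is not the vfuncs code itself) –

    // Illustrative sketch: one pass accumulates the power sums M^j = sum(x^j),
    // then any central moment is a binomially weighted combination of them.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    class PowerSums {
    public:
        explicit PowerSums(int k) : sums_(k + 1, 0.0) {}             // M^0 .. M^k

        void operator()(double x) {                                  // called once per element
            double p = 1.0;
            for (std::size_t j = 0; j < sums_.size(); ++j) { sums_[j] += p; p *= x; }
        }

        // v^k = (1/n) * sum_{j=0..k} C(k,j) * (-mu)^(k-j) * M^j
        double central_moment(int k) const {
            double n  = sums_[0];                                    // M^0 is the count
            double mu = sums_[1] / n;                                // M^1 / n is the mean
            double v  = 0.0;
            for (int j = 0; j <= k; ++j)
                v += binom(k, j) * std::pow(-mu, k - j) * sums_[j];
            return v / n;
        }

        double variance() const { return central_moment(2); }       // k = 2 special case

    private:
        static double binom(int k, int j) {                         // binomial coefficient C(k,j)
            double b = 1.0;
            for (int i = 1; i <= j; ++i) b = b * (k - j + i) / i;
            return b;
        }
        std::vector<double> sums_;                                   // sums_[j] = sum of x^j
    };

Running a sequence through an instance constructed with k = 12 then yields every central moment up to order 12 from a single pass.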

Rearranging

I was chatting with some other quant developers the other day, as you do, and the issue of calculating variances came up.  It’s a pretty common operation, and the naive approach does two passes, one to calculate the mean, \mu_{n} , and one to gather the squares of the differences from that.  But someone asked if it could be done in one pass, and of course it can, quite easily.

Recall, the population variance of a sequence \left \{x_{i} \right \} is defined as –
v_{n} = \frac{1}{n} \sum_{i=1}^{n} (x_{i}-\mu_{n})^2

We can expand this and see how the variance after n terms differs from the variance after m = n+1 terms, viz. –

nv_{n}= \sum_{i=1}^{n} (x_{i}-\mu_{m} + \mu_{m} - \mu_{n})^2
\displaystyle = \sum_{i=1}^{n} (x_{i}-\mu_{m})^2 + \sum_{i=1}^{n}(\mu_{m} - \mu_{n})^2 + 2 \sum_{i=1}^{n}(x_{i}-\mu_{m}) (\mu_{m} - \mu_{n})
\displaystyle = mv_{m}- (x_{m}-\mu_{m})^2 + n(\mu_{m} - \mu_{n})^2 + 2 (\mu_{m} - \mu_{n}) \sum_{i=1}^{n}(x_{i}-\mu_{m})
\displaystyle = mv_{m}- (x_{m}-\mu_{m})^2 + n(\mu_{m} - \mu_{n})^2 - 2n(\mu_{m} - \mu_{n})^2
\displaystyle = mv_{m}- (x_{m}-\mu_{m})^2 - n(\mu_{m} - \mu_{n})^2

and we have v_{m} in terms of v_{n}, viz. –

v_{m} = ( nv_{n} + (x_{m}-\mu_{m})^2 + n(\mu_{m} - \mu_{n})^2 ) / m

In other words v_{m}= f(v_{n}, sum_{n}, n, x_{m}) , so there’s very little we need to store to accumulate the variance as we traverse the sequence.

C++ Implementation

Expressing this in C++, we’d have a functor that maintains state and gets handed each element of the sequence; in simplest form –

Read the rest of this entry »
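
In outline, such a functor might look like the sketch below, driven directly by the recurrence above (names are illustrative; this is not the original code) –

    // Illustrative one-pass variance functor based on the recurrence
    //   m*v_m = n*v_n + (x_m - mu_m)^2 + n*(mu_m - mu_n)^2 ,  with m = n + 1
    class OnePassVariance {
    public:
        OnePassVariance() : n_(0), sum_(0.0), nv_(0.0) {}

        void operator()(double x) {                     // handed each element in turn
            double mu_n = n_ > 0 ? sum_ / n_ : 0.0;     // mean of the first n elements
            ++n_;                                       // now n_ = m = n + 1
            sum_ += x;
            double mu_m = sum_ / n_;                    // mean of the first m elements
            nv_ += (x - mu_m) * (x - mu_m)
                 + (n_ - 1) * (mu_m - mu_n) * (mu_m - mu_n);
        }

        double mean()     const { return n_ > 0 ? sum_ / n_ : 0.0; }
        double variance() const { return n_ > 0 ? nv_  / n_ : 0.0; }

    private:
        long   n_;     // element count, n
        double sum_;   // running sum, so mu_n = sum_ / n_
        double nv_;    // n * v_n, the accumulated squared deviations
    };

Feeding each element through operator() and calling variance() at any point gives the population variance of the elements seen so far.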