Category Archives: Experiment

The strange beauty of the Gumowski-Mira Attractor

The world of fractals and (strange) attractors has always interested me: mathematical formulas that expose interesting visual patterns and/or chaos when mapped to the screen. I thought I’d seen most of it, but I was pleasantly surprised when I recently encountered some of the organic results of the Gumowski-Mira attractor. It seemed I had some programming to do to create them myself.

The Gumowski-Mira equation was developed in 1980 at CERN by I. Gumowski and C. Mira to calculate the trajectories of sub-atomic particles. It is in fact a formula to plot a 2-dimensional dynamic system, and the main equation is as follows:

[math]f(x) = ax + \frac{2(1-a) x^2 }{1 + x^2}[/math]

Using that equation, we can iterate the following formulas to calculate sequential x,y locations:

[math]x_{n+1} = by_n + f(x_n)[/math]
[math]y_{n+1} = -x_n + f(x_{n+1})[/math]

So we have two parameters, a and b (b is usually kept at 1 for the most interesting results), and to get the system going we also need initial values for both x and y. You can set all those values in the editor below using the sliders, some of which have a fine-tune slider as well.
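If you want to try this outside the editor, the iteration above is easy to sketch. A minimal Python version follows; the parameter values and the point count are illustrative choices of mine, not settings from the tool:

```python
def f(x, a):
    # The main equation: f(x) = a*x + 2*(1 - a)*x^2 / (1 + x^2)
    return a * x + 2.0 * (1.0 - a) * x * x / (1.0 + x * x)

def gumowski_mira(a, b, x0, y0, n):
    """Iterate the system and collect n successive (x, y) points."""
    x, y = x0, y0
    points = []
    for _ in range(n):
        x_next = b * y + f(x, a)
        y_next = -x + f(x_next, a)
        x, y = x_next, y_next
        points.append((x, y))
    return points

# Illustrative values: a = -0.2, b = 1, starting near the origin,
# 10,000 iterations (the same count as the quick render).
pts = gumowski_mira(a=-0.2, b=1.0, x0=0.1, y0=0.1, n=10000)
```

Plotting each (x, y) pair as a point is all that's needed to see the patterns appear.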

When you change any value, the program gives you a quick render of the result (10,000 iterations). If you want to explore more iterations, click the render (and more) button repeatedly to increase the iteration count by 50,000 each time. The color slider sets the increase in blackness that each pixel gets when a point is drawn; set it to a high value to quickly check how your current settings will evolve, and to a low value for the best images (note: the quick render isn’t affected by this slider). Below the program are some renders I made.


Bifurcation diagram of the logistic map

In my ongoing search for interesting images to generate, I decided to explore the strange behavior of the bifurcation diagram of the logistic map. Drawing the diagram itself was done fairly quickly; the hard part was my desire to be able to zoom in and out in a user-friendly way.

Every image that’s drawn is stored, both the image data and the progress (in case you zoom while the current image is still being created). The current job is halted, and resumed if you happen to go back to it. It took me a lot of time, but I’m quite satisfied with the way it worked out. Play around with it, or just enjoy some of the screenshots below.
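The core of the diagram itself really is simple to compute: for every value of r, iterate x → rx(1−x), throw away the transient, and plot the remaining x values in that column. A minimal Python sketch (the iteration counts and r-range are my illustrative choices, not the tool’s settings):

```python
def logistic_orbit(r, x0=0.5, transient=200, keep=100):
    """Iterate x -> r*x*(1 - x), discard the transient, return the tail."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(x)
    return tail

# One column of the diagram per r value: at each r on the horizontal
# axis, plot every x in the orbit's tail on the vertical axis.
diagram = {r / 100.0: logistic_orbit(r / 100.0) for r in range(250, 400)}
```

Below r ≈ 3 the tail collapses to a single point; past it you can see the period-doubling cascade in the number of distinct values per column.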

[SWF], 640,600[/SWF]

That last one was zoomed in until the Flash Number class ran out of precision :)

AS3 2D read/write test: BitmapData vs Array vs Vector vs Object

I’m looking into speeding up the creation of animations with my Perlin noise vector field, and one aspect of that is improving the way the data of the particles’ trajectories is handled. In short, the images are created by increasing the brightness of a pixel whenever a particle is present at that location, resulting in brighter pixels at busy spots.

Right now there is too much overhead from displaying the result of each iteration, which is something I can skip completely; the end result of a complete set of iterations is what interests me – how it grows during that process is fun to watch, but also very slow. If I skip that part, I can look at options for storing the data other than the BitmapData object. All I need is something that accepts an x,y coordinate, looks up a value for that combination and stores something new in there.

There are several options to choose from, and several ways of using those options. I tried BitmapData, Vector, Array and Object, and since I obviously want the fastest one, I wrote a little test environment to see how much time they each took for a certain task. Using a 500 by 500 storage filled with uints, I loop through every location in it and do a read and a write command. The data isn’t modified in any way; for now I just read the value and put it back.

[SWF], 640, 480[/SWF]

The test-subjects

  • Bitmap: A regular BitmapData instance, accessed with the setPixel() and getPixel() functions.
  • Bitmap32: Again a BitmapData instance, this time accessed by setPixel32() and getPixel32().
  • Array2D: A two dimensional array, which is an array of size width, each entry containing an array of size height.
  • Array1D: A one dimensional array, in which all rows of the 2D-matrix are sequentially stored. A little calculation is needed every time I read or write to convert the supplied x,y values into the correct index in the 1D-Array: i = (y * width) + x. This way, we don’t need to access two arrays (which is costly) when accessing a value.
  • Vector2D: Same as the two dimensional array, but done with a (fixed-length) Vector instead of an Array.
  • Vector1D: Same as the one dimensional array, but again using a (fixed-length) Vector.
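The index trick from the Array1D/Vector1D entries is worth spelling out. A rough Python equivalent, just to show the mapping (obviously not the AS3 benchmark itself):

```python
# Flat 1D storage for a 2D grid: (x, y) maps to index i = y * width + x.
WIDTH, HEIGHT = 500, 500
grid = [0] * (WIDTH * HEIGHT)  # the "Vector1D"-style flat storage

def read(x, y):
    return grid[y * WIDTH + x]

def write(x, y, value):
    grid[y * WIDTH + x] = value

# One pass over every location, reading and writing the value back,
# mirroring the benchmark's read/write loop.
for y in range(HEIGHT):
    for x in range(WIDTH):
        write(x, y, read(x, y))
```

The point of the flat layout is that each access touches a single container instead of two nested ones.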

After implementing these, I decided to try an Object as well (which can be seen as an Array with String-based lookup keys), though I wasn’t expecting much from it. What’s needed is a unique key for every x,y location, which I set to: var key:String = x.toString() + "x" + y.toString(). The test for this one was extremely slow compared to the others, partly because of the creation of the String key on every read/write command. To avoid that, I stored the keys in a two dimensional array on creation, so I could just look them up using the x,y values instead of calling the expensive toString() functions every time. However, there was no way this was ever going to be quick, so I left it at that and kept both versions in my test:

  • ObjectNonIndexed: uses an Object to store values, generating the key (var key:String= x.toString() + "x" + y.toString()) on every call.
  • ObjectIndexed: uses an Object, with all the keys pregenerated and stored in a two dimensional Array.

The results

Because the results of each separate run can vary quite a bit, I included the average results as well. Looking at that number, Vector1D usually ends up in first place, with Vector2D, Array1D and Array2D not far behind. Using BitmapData is quite a lot slower, as expected, and the Object is (due to the problems with its String keys) not even worth mentioning.

Something I learned about Vectors along the way (this was my very first encounter with them): the datatype of a Vector isn’t just Vector but includes the type of data that’s stored in it. When I was working on the Vector2D test, I was creating a Vector of Vectors (with the inner ones containing uints). I started out declaring that as var vector2d:Vector.<Vector>, which resulted in errors I didn’t understand, until I realized that I wasn’t creating a Vector of Vectors but, more specifically, a Vector of uint-Vectors. The correct declaration is: var vector2d:Vector.<Vector.<uint>>.

Finally, the full source code can be found here (it uses a CS5 .fla file). Feel free to comment on it!

Conway’s Game Of Life in Pixel Bender

Despite Pixel Bender being around for quite a while now, I hadn’t used it at all. Time to catch up! Pixel Bender is Adobe’s software to create pixel shaders, small programs which can be compiled into ultrafast image processing units. They can be used in several Adobe products: Photoshop, After Effects and … Flash!

The nice thing about pixel shaders is that they usually run on your GPU, which excels at mathematical operations on images, but unfortunately Pixel Bender’s exported shader files don’t make use of that when run in the Flash Player. Boohoo. Still, they are way faster than doing the same stuff in ActionScript, and you do get two advantages when using them in Flash: they run as a separate thread (meaning you can, for example, keep a responsive UI while doing extensive calculations on a lot of data) and they will use multiple cores if your machine has them.

So, more than enough reason to dive into the subject, and what better way to start than by creating good old Conway’s Game of Life. For the uninformed: Game of Life is a set of 4 rules that define whether a position on a board should become either “alive” (in this case: white) or “dead” (black) by looking at its 8 neighbours (head over to Wikipedia for the exact rules). Apart from the interesting results that can evolve, that’s all there is to it: pixels either become black or white, and the way that’s decided makes it perfect to put in a pixel shader.

And so I did. The result can be seen below; play around with it and/or read the notes underneath if you want more details. Note: you can draw in the image while the shader jobs are running, but it’s something I implemented very quickly – and poorly. One piece of advice when drawing: set the FPS as high as possible, and don’t move your mouse too fast. (You will need Flash Player 10 to run the application.)

[SWF], 640, 640[/SWF]

  • I used the compiled shader in a ShaderJob, which lets you run it as a separate task while you wait for it to complete. I supply it with BitmapData objects for both the input and the output. When the job is done, I use BitmapData’s clone function to copy the result into the data that’s being used as the input, and move on to the next job.
  • Initially, I started a new job immediately after the previous one was done, but since I wanted to control the framerate via the SWF’s frameRate property, new jobs are now started on EnterFrame events. In case the current job isn’t done yet when a new frame starts, the program does nothing and checks again on the next frame. When this happens, you will see the words “frame skipped” blink (you should lower the FPS when you see that, because the shader can’t keep up).
  • The FPS you can set is not the actual frames per second but a request – the player will try to match that number. The left number is the actual FPS, the right one is the number you can adjust. So “60/90” means you are currently getting 60 frames per second, while you requested 90.
  • The image used is 640 by 600 pixels, so in every job 384,000 pixels are being processed, with each of them sampling 9 pixels from the input.

And last but not least: the Pixel Bender code:

[sourcecode language="csharp"]
<languageVersion : 1.0;>

kernel NewFilter
<
    namespace : "pvdn";
    vendor : "Peter van der Noord";
    version : 1;
    description : "Conway's Game of Life";
>
{
    input image4 src;
    output pixel4 dest;

    void evaluatePixel()
    {
        // The four rules:
        // 1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
        // 2. Any live cell with more than three live neighbours dies, as if by overcrowding.
        // 3. Any live cell with two or three live neighbours lives on to the next generation.
        // 4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

        pixel4 tl = sampleNearest(src, outCoord() + float2(-1.0, -1.0));
        pixel4 t  = sampleNearest(src, outCoord() + float2( 0.0, -1.0));
        pixel4 tr = sampleNearest(src, outCoord() + float2( 1.0, -1.0));
        pixel4 r  = sampleNearest(src, outCoord() + float2( 1.0,  0.0));
        pixel4 br = sampleNearest(src, outCoord() + float2( 1.0,  1.0));
        pixel4 b  = sampleNearest(src, outCoord() + float2( 0.0,  1.0));
        pixel4 bl = sampleNearest(src, outCoord() + float2(-1.0,  1.0));
        pixel4 l  = sampleNearest(src, outCoord() + float2(-1.0,  0.0));
        pixel4 me = sampleNearest(src, outCoord());

        // Cells are pure black or white, so the red channel of the sum
        // equals the number of live neighbours.
        pixel4 total = tl + t + tr + r + br + b + bl + l;

        if (me.r == 1.0)
        {
            // alive
            if (total.r == 2.0 || total.r == 3.0)
            {
                dest = pixel4(1.0, 1.0, 1.0, 1.0);
            }
            else
            {
                dest = pixel4(0.0, 0.0, 0.0, 1.0);
            }
        }
        else
        {
            // dead
            if (total.r == 3.0)
            {
                dest = pixel4(1.0, 1.0, 1.0, 1.0);
            }
            else
            {
                dest = pixel4(0.0, 0.0, 0.0, 1.0);
            }
        }
    }
}
[/sourcecode]
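For readers without Flash, the same per-pixel rule can be sketched in plain Python. This is an illustrative counterpart, not part of the tool; it uses wraparound edges for simplicity, whereas the shader samples transparent black beyond the borders:

```python
def step(board):
    """Compute one Game of Life generation. board is a 2D list of 0/1."""
    h, w = len(board), len(board[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Count the 8 neighbours, like the shader's 8 sampleNearest calls
            # (wraparound here instead of transparent-black borders).
            total = sum(
                board[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            if board[y][x] == 1:
                nxt[y][x] = 1 if total in (2, 3) else 0   # rules 1-3
            else:
                nxt[y][x] = 1 if total == 3 else 0        # rule 4
    return nxt
```

Running a small board through step() repeatedly shows the familiar patterns (blinkers, gliders) without any Flash involved.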

Perlin noise + vectorfield + particles = fun

Admittedly, the result of this equation might not be everyone’s definition of fun, so in order for you to decide whether this post is worth reading at all, I’ll start by putting up something I made using the aforementioned perlin noise, a vectorfield and some particles.

(Note: the videos in this post are actually PNG image sequences compiled into SWF files, loaded by a simple player I wrote for the purpose because I didn’t succeed in converting the sequences into video with decent quality – yet. Beware, the videos are quite a lot of MBs for quite a few seconds of imagery.)


If that was indeed fun to watch, you might want to know how that animation was made. Basically it comes down to this: create an image filled with noise, map each pixel’s color value to a vector (something with a direction and a magnitude), and end up with a field of vectors covering every location of the image. Finally, throw a lot of particles on it and let the vectors act as some kind of mysterious forces on them. Result: moving particles! To make their trajectories visible, every particle slightly increases the brightness of every pixel it visits. This way, areas with a lot of traffic become very bright while other areas stay more dimmed, or completely black.
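The pipeline above can be sketched in a few lines of Python. Note that the noise function here is a simple deterministic stand-in, not real Perlin noise, and the sizes and step counts are illustrative choices of mine:

```python
import math

W, H = 200, 200

def noise(x, y):
    """Stand-in noise: a deterministic pseudo-random value in [0, 1)."""
    return math.sin(x * 12.9898 + y * 78.233) * 43758.5453 % 1.0

def field_vector(x, y):
    """Map the noise value at (x, y) to a direction (a unit vector)."""
    angle = noise(x, y) * 2.0 * math.pi
    return math.cos(angle), math.sin(angle)

# Accumulation buffer: busy pixels end up brighter.
brightness = [[0] * W for _ in range(H)]

def run_particle(x, y, steps=500):
    """Move one particle along the field, brightening every pixel it visits."""
    for _ in range(steps):
        dx, dy = field_vector(x, y)
        x, y = (x + dx) % W, (y + dy) % H
        brightness[int(y)][int(x)] += 1

run_particle(W / 2, H / 2)  # one particle; the real tool uses thousands
```

With real Perlin noise the field varies smoothly, so neighbouring particles follow similar paths and the bright "streams" in the renders emerge.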

I made a little tool to play around with this concept, using 6348 particles. Just randomize the noise until you spot a nice one, then press start. You can safely toggle both the particles and the noise without anything changing in the drawing; those two layers are merely there to show what’s going on. (To actually see the drawing, you have to turn both particles and noise off.)


After endless hours of fun with that, I realized Perlin noise has another interesting property I could use: you render it using a set of input values, and as long as you don’t change those values, the noise will always render the same. Nothing to cheer about yet, but: when you change the values (or at least some of them) slightly, the noise also changes slightly. Hmmm…

On to the final step: after an image was rendered (somewhere in the range of 150-250 iterations), I changed the noise a little bit, set the particles back to their starting positions and let them move again. Since the noise was slightly different, the drawn end result was slightly different as well. Repeating this process a few hundred times, collecting all the images along the way, is how the animations on this page came to be.

All the rendering, storing of the images, changing the noise and resetting the particles was automated (with a small PHP script to post the images to, since Flash can’t write to disk). More particles and more iterations meant better results, but also longer render times. The videos in this post took about 1.5 hours to render, which resulted in about 10 seconds of video.

Two more examples:



I’m not sure what the next step will be. I have already started rewriting this tool (in Flash) with a better interface and more options, but I’m not sure that version will ever be finished; I might just skip ahead to rebuilding it in either Processing or XNA to make the creation of the animations go waaaaay faster.