Max For Live arpeggiator: Arpektron

I have owned two Elektron synthesizers, the Monomachine and the Analog Four, two very nice machines even though I have sold them both (mostly because too many menus kept me from using them much). One feature I loved in particular was the arpeggiator that’s implemented in both synthesizers (and in their Octatrack as well). It has a built-in step sequencer (of variable length) which allows you to skip certain steps in the ongoing arpeggio and to add note offsets to certain steps, resulting in very interesting melodies and rhythms that are generated on the fly.
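As a minimal sketch of that combination (hypothetical names and logic, not the actual Elektron or Max For Live implementation): the arpeggio cycles through the held notes, while a separate step sequencer of its own length can mute a step or transpose it.

```typescript
// Hypothetical sketch of an Elektron-style arpeggiator step:
// the arpeggio cycles through the held notes, while a step
// sequencer (of a possibly different length) can skip steps
// or add a per-step note offset in semitones.

interface Step {
  active: boolean;  // step is skipped when false
  offset: number;   // note offset in semitones
}

function arpNote(heldNotes: number[], steps: Step[], tick: number): number | null {
  const step = steps[tick % steps.length];          // step sequencer position
  if (!step.active) return null;                    // skipped step: no note
  const note = heldNotes[tick % heldNotes.length];  // arpeggio position
  return note + step.offset;                        // apply per-step offset
}

// Because the arpeggio length and the sequencer length can differ,
// the resulting pattern only repeats after their least common multiple.
const held = [60, 64, 67]; // C major triad, as MIDI note numbers
const steps: Step[] = [
  { active: true,  offset: 0 },
  { active: false, offset: 0 },
  { active: true,  offset: 12 },
  { active: true,  offset: 0 },
];
```

Running `arpNote` over successive ticks yields a melody that drifts against the rhythm, which is where the interesting patterns come from.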

I decided to recreate that approach using Max For Live, which allows you to create custom audio/midi devices for Ableton Live with Max/MSP. A very first alpha release of the so-called Arpektron can be found here, and you can listen to a demonstration of the device controlling my MS-20 Mini on SoundCloud.

TypeScript declarations for the Web Audio API

After doing some work last year on porting Patchwork to Javascript using the Web Audio API (a project I abandoned because of some unsolvable problems, mainly, but not limited to, the custom AudioNodes adding a delay in the audio chain), I recently picked up a small experiment using said API again. Working with TypeScript this time, I needed a declarations file for the Web Audio API, and after some searching without any result, I decided to make one myself.

There are some minor issues with it, although they don’t prevent regular use afaik:

  • I couldn’t get readonly attributes in there using getters without Visual Studio complaining about my syntax. For now, I’ve just made them into regular properties and marked them with a comment so I (or someone else) can change them later on. (Visual Studio will allow you to assign to them, but doing so will logically give an error at runtime.)
  • I don’t know how to implement constants (there are two sets of uppercase consts in there), so I also made them into regular attributes.
  • The MediaStreamAudioDestinationNode has an attribute of type MediaStream, which isn’t a part of the Web Audio API (it appears to be defined here). It doesn’t really belong in this file, but since I am not planning on using it, I just added an empty interface to prevent any errors. You should remove it if you have a declaration for the MediaStream API.
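To illustrate those workarounds (a hand-written sketch with made-up names, not a quote from the actual declarations file): readonly attributes become regular properties with a marker comment, and MediaStream gets an empty placeholder interface.

```typescript
// Sketch of the workarounds described above (illustrative names,
// not the actual contents of the declarations file).

interface MediaStream {
  // empty placeholder; remove this if you have a real
  // declaration for the MediaStream API
}

interface AudioParamDecl {
  value: number;
  minValue: number; // readonly
  maxValue: number; // readonly
}

// The compiler happily lets you assign to the "readonly" members;
// only a real AudioParam implementation would reject it at runtime.
const param: AudioParamDecl = { value: 0.5, minValue: 0, maxValue: 1 };
param.value = 0.8;   // fine
param.minValue = -1; // compiles, but would fail against a real AudioParam
```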

Apart from that, it works fine so if you want to do some Web Audio API programming using TypeScript yourself, you may find it useful and you can download the declarations file here.

A quick preview of the new Patchwork

Even though my last post was several months ago, I’ve been doing a lot of work on the new version of Patchwork (the modular synthesizer that I wrote in Flash) since then. I will spend more words on it when I release a more polished version; for now I’ll keep it very short, with a link to this beta version.

It’s work in progress, comments are still appreciated though.

Patchwork 0.2: some updates

I did some updates on Patchwork, the modular sound synthesizer in Flash that I released a few weeks ago. You can check the new version here or read on to learn about what is new.

First of all, a little insight into what has happened since the initial release, because Patchwork has gotten quite some attention since then. After I put that very first version online, I started off by posting it in the programming section of Reddit (best site ever btw) as a ‘hey look what i made!’ thing, which I usually do when I’ve made something new. If Reddit likes it (and they did), you can expect (depending on the section) thousands of visitors so that’s always a nice kick-off.

After that, I sent an email to Andre Michelle, notorious guru on the subject of sound in Flash, and within 15 minutes he twittered about it, an act that already made my day. With him having a lot of interested followers on this specific subject, things really started rolling from there. People retweeted it, shared the link on Facebook and blogs all over the world wrote about Patchwork. Sweet! The best thing was Richard Devine twittering about it, and for those who don’t know him: he’s a guy who really knows how to patch some modules together.

Since then, about 7,500 people have visited my site, a number I consider quite a success, and a lot of people were having fun with Patchwork. Which made me wonder where this could and should go (apart from expanding the application itself). One thing that bothered me was reading about people creating patches without being able to see and hear what they made. So, it was quickly decided that there should be a website: user profiles, lots of patches, categories, ratings, sharing, embedding, etc. I convinced some nice people from work to help me out with that, and you can expect something in a few months. Thoughts and ideas on that, and on the current version of Patchwork itself as well, are always welcome.

Then, finally… about those updates:

  • There’s a sample player in the generators section now. You can load either a local or an external wav-file and play it with a trigger signal. Remember though that samples loaded from your computer will not be saved with the patch (external ones will). For external files to be loaded, the server has to give permission for that through a crossdomain.xml file, which unfortunately isn’t always present. Although stereo samples are supported, the module has only one output at the moment. Will fix that soon.
  • The sequencer modules have a trigger output that will fire whenever they jump from the end of the sequence back to the beginning. You can use this in combination with the new reset input to synchronize multiple sequencers. Just connect the Sequence End Trigger output with the Reset Trigger input of another sequencer, and they should run synchronized.
  • The trigger sequencer now also has a continuous output which sends out its current state (either 1 or 0) continuously. This behavior is different from the trigger output, which generates only a single value of 1 (and 0s after that) when a selected sequencer item is encountered, and can be used as a trigger signal. (Note: both outputs can in theory be used as a trigger signal (which in Patchwork is a value of 1 preceded by a value of 0), but the difference is this: if you had 2 sequential sequencer items selected, the trigger output would generate 2 triggers and the continuous output only one, since the latter goes from 0 to 1 on the first item and stays 1 all the way through the second item.)
  • I fixed quite a huge bug in saving a patch. I messed up some ids which could corrupt a patch forever.
  • The textfield underneath a knob can be used to manually enter a value by clicking on it.
  • When turning a knob, you can hold shift to enter precision mode. While in this mode, values are rounded as well.
  • Saving a file now includes the viewport location so you will have the correct view after loading a patch.
  • I made a unified messaging system that allows me to give the user feedback on whatever is happening.
  • Patchwork now starts and loads an example automatically. This is more inviting than entering the page and staring at an empty screen.
  • The LFO (still just a simple sine, sorry) has a reset trigger input.
  • There are two types of comment modules, allowing you to include information in your patch. One is just text, the other one allows you to point something out.
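The difference between the trigger output and the continuous output described above can be sketched like this (hypothetical TypeScript, not Patchwork’s actual Actionscript):

```typescript
// Sketch (hypothetical, not Patchwork's actual code) of the trigger
// sequencer's two outputs, rendered at N samples per step.
// Trigger: a single 1 at the start of each selected step, 0 otherwise.
// Continuous: 1 for the full duration of each selected step.
function renderOutputs(steps: boolean[], samplesPerStep: number) {
  const trigger: number[] = [];
  const continuous: number[] = [];
  for (const on of steps) {
    for (let i = 0; i < samplesPerStep; i++) {
      trigger.push(on && i === 0 ? 1 : 0);
      continuous.push(on ? 1 : 0);
    }
  }
  return { trigger, continuous };
}

// Count 0 -> 1 transitions, i.e. how many triggers a connected
// module would see in the signal.
function countTriggers(signal: number[]): number {
  let count = 0;
  let prev = 0;
  for (const v of signal) {
    if (prev === 0 && v === 1) count++;
    prev = v;
  }
  return count;
}

// Two adjacent selected steps: the trigger output fires twice,
// the continuous output only once.
const { trigger, continuous } = renderOutputs([true, true, false], 4);
```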

I know there aren’t that many new audio modules (sawtooth oscillator, more LFOs, envelopes, effects, filters, etc) but those will come (if you’d like to help with that, let me know). Last but not least: if you make any interesting patches, I’d be happy to include them in the examples.

Modular sound synthesis in Flash: Patchwork

Realtime sound generation in Flash. Is there anything more to prove since AudioTool? Their team made an amazing application that allows you to hook up different synthesizers, effects and drum computers, and wrapped it all up in a very sparkling interface. So yes, realtime sound synthesis can be done, and it’s actually quite easy to get started. It takes just a few lines of code to be able to write data directly to your soundcard, and when you can do that, you have access to a whole lot of fun.

Having a background in audio, I’ve used quite a few music and sound applications, and the modular ones always had my special interest: Max/MSP by Cycling ’74, Native Instruments’ Reaktor, or the software for my Nord Modular. Yes, AudioTool is modular, but I usually have more fun when I have control over the most basic sound modules: oscillators, LFOs, filters, step sequencers, envelopes, etc.

So when I saw how easy it was to directly access the soundcard’s output with Actionscript, I decided that a modular system of those elementary blocks should be my next project. That was about a month ago, and here is a very first result. I proudly present: Patchwork (you will need Flash Player 10 to run this, and you might want to turn your volume down when you start). Please note: the application is far from finished. Although you can create some interesting things already, a lot of modules and options are still missing to do the really good stuff. Consider the current version a proof of concept.

Patchwork: Sound Synthesis in Flash

What is it?
Patchwork is a Flash application for realtime modular sound synthesis. You can connect basic sound generation or modification modules to create music, effects or utter noisy crap. Although at the moment it only has an interface for creating a so-called patch, in the future you will be able to run your creations standalone, which means: in your own site or game.

Why did you make it?
It seemed like a fun challenge.

Dude, WTF! There’s only one module in the effects-category, which “does absolutely nothing”.
Yeah, I know. Patchwork has a severe lack of modules at the moment. Working on that.

Future plans?
Yes, a lot. As I mentioned before, Patchwork is far from finished. Currently, I have a nice codebase to build and expand on. Patchwork’s core works well for the time being, so I won’t be doing much programming on that in the near future, apart from some interface-related stuff. For now I want to focus on adding new modules and expanding the capabilities of existing ones. Apart from that, I want to add the aforementioned standalone functionality, so you can add interactive music or effects to your own Flash applications.

Will you release the code?
Patchwork will probably be open sourced.

Can you tell something about its workings?
Sure can:

  • First of all, a disclaimer: I have never ever created a realtime modular sound system before. Suggestions, comments, criticism and thoughts on every aspect of Patchwork are much appreciated.
  • Patchwork runs by creating a Sound object, adding a listener for the SampleDataEvent.SAMPLE_DATA event and executing the play() method. Whenever the sound buffer runs empty, an event is dispatched, after which the Main SoundOut module is asked for a new buffer. This module asks whatever module is connected to its inputs for a new buffer, and this chained request goes all the way up the node tree until a buffer is found.
  • This means that modules that are not connected to the Sound Output will not run. At all.
  • Every module-output has a cache. If any module-output gets a request for a buffer, it will calculate it and store a copy of the results in its cache. If another request is made in the same iteration, the module will not recalculate a new buffer but instead create a copy from its cache.
  • To achieve this caching, each request that’s started at the end module (main sound out) passes along a requestId that increases with every call. This id is used by modules to decide whether an incoming request is new (and a new buffer should be calculated) or whether the buffer was already calculated and stored in the cache.
  • One type of trouble that I haven’t prevented yet is feedback. Typing this, I realize that I haven’t tried it myself, but I’m quite sure that Patchwork will surrender and collapse if you create a loop somewhere. I’m thinking about wrapping the buffers in a class that also holds a list of nodes that the buffer went through. This way, I can detect if a buffer is entering a node that it already passed, and act appropriately. Any comments on this subject are very welcome.
  • Every connection runs at the same speed: 44,100 Hz. For audio this is the way to go, but for other signals it’s a waste of CPU. An LFO that runs at 10 Hz can do with much less precision (in time) without notable effect. I plan to create different buffer sizes, so that for example the LFO writes only a fraction (1/2, 1/4, 1/8 etc.) of the normal buffer size (thus using less CPU). Other modules will go through these buffers at a different speed; a half buffer will be read by reading every value twice. Again, suggestions on this are very welcome.
  • Patchwork was built using the Temple Library for AS3.

The strange beauty of the Gumowski-Mira Attractor

The world of fractals and (strange) attractors has always had my interest: mathematical formulas which expose interesting visual patterns and/or chaos when mapped to the screen. I thought I’d seen most of it, but I was pleasantly surprised when I recently encountered some of the organic results of the Gumowski-Mira attractor. It seemed I had some programming to do, to create them myself.

The Gumowski-Mira equation was developed in 1980 at CERN by I. Gumowski and C. Mira to calculate the trajectories of sub-atomic particles. It is in fact a formula to plot a 2-dimensional dynamic system, and the main equation is as follows:

[math]f(x) = ax + \frac{2(1-a) x^2 }{1 + x^2}[/math]

Using that equation, we can iterate the following formulas to calculate sequential x,y locations:

[math]x_{n+1} = by_n + f(x_n)[/math]
[math]y_{n+1} = -x_n + f(x_{n+1})[/math]

So we have two parameters, a and b (b is usually kept at 1 for the most interesting results), and to get the system going we also need some initial values for both x and y. You can set all those values in the editor below using the sliders, some of which have a fine-tune slider as well.
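In code, iterating the map is straightforward (a sketch following the equations above; the parameter values in the example are an arbitrary choice, not a setting from the editor):

```typescript
// Iterate the Gumowski-Mira map, collecting the sequence of points.
// f(x) = a*x + 2*(1 - a)*x^2 / (1 + x^2)
function f(x: number, a: number): number {
  return a * x + (2 * (1 - a) * x * x) / (1 + x * x);
}

function gumowskiMira(
  a: number, b: number,
  x0: number, y0: number,
  iterations: number
): Array<[number, number]> {
  const points: Array<[number, number]> = [];
  let x = x0;
  let y = y0;
  for (let i = 0; i < iterations; i++) {
    const xNext = b * y + f(x, a);   // x_{n+1} = b*y_n + f(x_n)
    const yNext = -x + f(xNext, a);  // y_{n+1} = -x_n + f(x_{n+1})
    x = xNext;
    y = yNext;
    points.push([x, y]);
  }
  return points;
}

// Arbitrary example parameters, with b kept at 1:
const pts = gumowskiMira(-0.192, 1, 1, 1, 10000);
```

Plotting the resulting points (accumulating darkness per pixel, as the editor does) is what produces the organic renders below.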

When you change any value, the program gives you a quick render of the results (10,000 iterations). If you want to explore more iterations, click the render (and more) button repeatedly to increase the iterations by 50,000. The color slider sets the increase in blackness that each pixel gets when a point is drawn; set it to a high value if you want to quickly check how your current settings will evolve, and set it to a low value for the best images (note: the quick render isn’t affected by this slider). Below the program are some renders I made.

[SWF]http://www.petervandernoord.nl/swf/gumowski-mira/main.swf,640,780[/SWF]

Bifurcation diagram of the logistic map

In my ongoing search for interesting images to generate, I decided to explore the strange behavior of the bifurcation diagram of the logistic map. Drawing the diagram itself was done fairly quickly; the hard part was my desire to be able to zoom in and out in a user-friendly way.

Every image that’s drawn is stored: both the image data and the progress (in case you zoom while the current image is still being created). The current job will be halted, and resumed if you happen to go back to it. This took me a lot of time, but I’m quite satisfied with the way it worked out. Play around with it, or just enjoy some of the screenshots below.
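The underlying computation is simple (a sketch with my own parameter names, not the actual drawing code): for each value of r, iterate x_{n+1} = r·x_n·(1−x_n), discard a transient, and plot the values the orbit settles on.

```typescript
// Sketch of computing one column of the bifurcation diagram of the
// logistic map x_{n+1} = r * x_n * (1 - x_n).
function logisticOrbit(r: number, x0 = 0.5, transient = 1000, keep = 100): number[] {
  let x = x0;
  for (let i = 0; i < transient; i++) x = r * x * (1 - x); // let the orbit settle
  const points: number[] = [];
  for (let i = 0; i < keep; i++) {
    x = r * x * (1 - x);
    points.push(x); // these values get plotted in the column for this r
  }
  return points;
}

// Count the distinct values the orbit visits (rounded, to absorb
// floating point noise): 1 below r = 3, then 2, 4, 8, ... into chaos,
// which is exactly the branching (bifurcation) the diagram shows.
function distinctValues(points: number[]): number {
  return new Set(points.map(p => p.toFixed(6))).size;
}
```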

[SWF]http://www.petervandernoord.nl/swf/bifurcation/bifurcation.swf, 640,600[/SWF]

That last one was zoomed in until the Flash Number class ran out of precision :)

AS3 2D read/write test: BitmapData vs Array vs Vector vs Object

I’m looking into speeding up the creation of animations with my Perlin noise vector field, and one part of that is improving the way the data of the particles’ trajectories is handled. In short, the images are created by increasing the brightness of a pixel whenever a particle is present at that location, resulting in brighter pixels in busy spots.

Right now there is too much overhead from displaying the result of each iteration, which is something I can skip completely; the end result of a complete set of iterations is what interests me. How it grows during that process is fun to watch, but also very slow. If I skip that part, I can look at options other than the BitmapData object for storing the data. All I need is something that accepts an x,y coordinate, looks up a value for that combination and stores something new in there.

There are several options to choose from, and several ways of using them. I tried BitmapData, Vector, Array and Object, and since I obviously want the fastest one, I wrote a little test environment to see how much time they all took for a certain task. Using a 500 by 500 storage filled with uints, I loop through every location in it and do a read and a write command. The data isn’t modified in any way; for now I just read the value and put it back.

[SWF]http://www.petervandernoord.nl/swf/2d_storage_benchmark/main.swf, 640, 480[/SWF]

The test-subjects

  • Bitmap: A regular BitmapData instance, accessed with the setPixel() and getPixel() functions.
  • Bitmap32: Again a BitmapData instance, this time accessed by setPixel32() and getPixel32().
  • Array2D: A two dimensional array, which is an array of size width, each entry containing an array of size height.
  • Array1D: A one dimensional array, in which all rows of the 2D-matrix are sequentially stored. A little calculation is needed every time I read or write to convert the supplied x,y values into the correct index in the 1D-Array: i = (y * width) + x. This way, we don’t need to access two arrays (which is costly) when accessing a value.
  • Vector2D: Same as the two dimensional array, but done with a (fixed-length) Vector instead of an Array.
  • Vector1D: Same as the one dimensional array, but again using a (fixed-length Vector).
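The 1D variants all rely on that same index calculation; as a sketch of the idea (in TypeScript rather than the benchmark’s Actionscript, with a typed array standing in for the fixed-length Vector):

```typescript
// Sketch of the 1D storage idea: one flat typed array for a 2D grid,
// with the x,y coordinate converted to a single index. This avoids
// the cost of accessing two nested arrays per read/write.
class Grid1D {
  private data: Uint32Array;

  constructor(private width: number, height: number) {
    this.data = new Uint32Array(width * height);
  }

  private index(x: number, y: number): number {
    return y * this.width + x; // i = (y * width) + x
  }

  get(x: number, y: number): number {
    return this.data[this.index(x, y)];
  }

  set(x: number, y: number, value: number): void {
    this.data[this.index(x, y)] = value;
  }
}

const grid = new Grid1D(500, 500);
grid.set(10, 20, 42);
```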

After implementing these, I decided to try an Object as well (which can be seen as an Array with String-based lookup keys), though I wasn’t expecting much from it. What’s needed is a unique key for every x,y location, which I set to: var key:String = x.toString() + "x" + y.toString(). The test for this one was extremely slow compared to the others, partly because of the creation of the String key on every read/write command. To avoid that, I stored the keys in a two dimensional array on creation, so I could just look them up using the x,y values instead of having to call the expensive toString() functions every time. However, there was no way this was ever going to be quick, so I left it at that and kept both versions in my test:

  • ObjectNonIndexed: uses an Object to store values, generating the key (var key:String= x.toString() + "x" + y.toString()) on every call.
  • ObjectIndexed: uses an Object, with all the keys pregenerated and stored in a two dimensional Array.

The results

Because the results of each separate run can vary quite a bit, I included the average results as well. Looking at that number, Vector1D usually ends up in 1st place, with Vector2D, Array1D and Array2D not far behind. Using BitmapData is quite a lot slower, as expected, and the Object is (due to the problems with its String keys) not even worth mentioning.

Something I learned about Vectors along the way (this was my very first encounter with them): the datatype of a Vector isn’t just Vector but includes the type of data that’s stored in it. When I was working on the Vector2D test, I was creating a Vector of Vectors (with the latter containing uints). I started out declaring that as var vector2d:Vector.<Vector>, which resulted in errors I didn’t understand, until I realized that I wasn’t creating a Vector of Vectors but, more specifically, a Vector of uint-Vectors. The correct declaration is: var vector2d:Vector.<Vector.<uint>>.

Finally, the full sourcecode can be found here (it uses a CS5 .fla file). Feel free to comment on it!

Conway’s Game Of Life in Pixel Bender

Despite Pixel Bender being around for quite a while now, I hadn’t used it at all. Time to catch up! Pixel Bender is Adobe’s software to create pixel shaders, small programs which can be compiled into ultrafast image processing units. They can be used in several Adobe products: Photoshop, After Effects and … Flash!

The nice thing about pixel shaders is that they usually run on your GPU, which excels at mathematical operations on images, but unfortunately Pixel Bender’s exported shader files don’t make use of that when they’re run in the Flash Player. Boohoo. Still, they are way faster than doing the same stuff in Actionscript, and you do get two advantages when using them in Flash: they run as a separate thread (meaning you can, for example, have a responsive UI while doing extensive calculations on a lot of data at the same time) and they will use multiple cores if you happen to have those in your machine.

So, more than enough reason to dive into the subject, and what better way to start than by recreating good old Conway’s Game of Life. For the uninformed: Game of Life is a set of 4 rules that decide whether a position on a board should become either “alive” (in this case: white) or “dead” (black) by looking at its 8 neighbours (head over to Wikipedia for the exact rules). Apart from the interesting results that can evolve, that’s all there is to it: pixels either become black or white, and the way that’s decided makes it perfect to put in a pixel shader.

And so I did. The result can be seen below; play around with it and/or read the notes underneath if you want some more details. Note: you can draw in the image while the shader jobs are running, but it’s something I implemented very quickly, and poorly. One piece of advice when drawing: set the FPS as high as possible, and don’t move your mouse too fast. (You will need Flash Player 10 to run the application.)

[SWF]http://www.petervandernoord.nl/swf/pixelbender-life/main.swf, 640, 640[/SWF]

  • I used the compiled shader in a ShaderJob, which lets you run it as a separate task while you wait for it to complete. I supply it with BitmapData objects at both the input and the output. When the job is done, I use BitmapData’s clone() function to copy the result into the data that’s being used as the input, and move on to the next job.
  • Initially, I started a new job immediately after the previous one was done, but since I wanted to control the framerate by the SWF’s framerate-property, new jobs are now started on EnterFrame events. In case the current job isn’t done yet when entering a new frame, the program does nothing and will check again on the next frame. If this happens, you will see the words “frame skipped” blink (you should lower the FPS when you see that, because the shader can’t keep up).
  • The FPS you can set is not the actual frames per second but a request: the player will try to match that number. The left number is the actual FPS, the right one is the number you can adjust. So “60/90” means you are currently getting 60 frames per second, while you requested 90.
  • The image used is 640 by 600 pixels, so in every job 384,000 pixels are being processed, with each of them sampling 9 pixels from the input.
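The job cycle boils down to a double-buffer scheme, sketched here in TypeScript with a plain CPU Game of Life step standing in for the shader (hypothetical code, not the actual ShaderJob setup):

```typescript
// Sketch of the double-buffer job cycle: apply the rules from `input`
// into `output`, then copy the result back so it becomes the next
// input (the role BitmapData's clone step plays in the Flash version).
// Cells are 0 (dead) or 1 (alive); this sketch does not wrap at edges.
function lifeStep(input: Uint8Array, output: Uint8Array, w: number, h: number): void {
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let neighbours = 0;
      for (let dy = -1; dy <= 1; dy++) {
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const nx = x + dx, ny = y + dy;
          if (nx >= 0 && nx < w && ny >= 0 && ny < h) neighbours += input[ny * w + nx];
        }
      }
      const alive = input[y * w + x] === 1;
      // survive with 2 or 3 neighbours; birth with exactly 3
      output[y * w + x] =
        (alive && (neighbours === 2 || neighbours === 3)) || (!alive && neighbours === 3) ? 1 : 0;
    }
  }
}

// One "frame": run the job, then feed the output back as the input.
function frame(input: Uint8Array, output: Uint8Array, w: number, h: number): void {
  lifeStep(input, output, w, h);
  input.set(output); // the copy-back step
}

// A vertical blinker flips to horizontal after one generation:
const w = 5, h = 5;
const cells = new Uint8Array(w * h);
cells[1 * w + 2] = cells[2 * w + 2] = cells[3 * w + 2] = 1;
const scratch = new Uint8Array(w * h);
frame(cells, scratch, w, h);
```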

And last but not least: the Pixel Bender code:

[sourcecode language="csharp"]

kernel NewFilter
<
    namespace : "pvdn";
    vendor : "Peter van der Noord";
    version : 1;
    description : "Conway's Game of Life";
>
{
    input image4 src;
    output pixel4 dest;

    void evaluatePixel()
    {
        /*
        1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
        2. Any live cell with more than three live neighbours dies, as if by overcrowding.
        3. Any live cell with two or three live neighbours lives on to the next generation.
        4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
        */

        // sample the 8 neighbouring pixels and the current pixel itself
        pixel4 tl = sampleNearest(src, outCoord() + float2(-1.0, -1.0));
        pixel4 t  = sampleNearest(src, outCoord() + float2( 0.0, -1.0));
        pixel4 tr = sampleNearest(src, outCoord() + float2( 1.0, -1.0));
        pixel4 r  = sampleNearest(src, outCoord() + float2( 1.0,  0.0));
        pixel4 br = sampleNearest(src, outCoord() + float2( 1.0,  1.0));
        pixel4 b  = sampleNearest(src, outCoord() + float2( 0.0,  1.0));
        pixel4 bl = sampleNearest(src, outCoord() + float2(-1.0,  1.0));
        pixel4 l  = sampleNearest(src, outCoord() + float2(-1.0,  0.0));
        pixel4 me = sampleNearest(src, outCoord());

        // live pixels are white, so the red channel of this sum
        // equals the number of live neighbours
        pixel4 total = tl + t + tr + r + br + b + bl + l;

        if (me.r == 1.0)
        {
            // alive: survives with 2 or 3 live neighbours
            if (total.r == 2.0 || total.r == 3.0)
            {
                dest = pixel4(1.0, 1.0, 1.0, 1.0);
            }
            else
            {
                dest = pixel4(0.0, 0.0, 0.0, 1.0);
            }
        }
        else
        {
            // dead: becomes alive with exactly 3 live neighbours
            if (total.r == 3.0)
            {
                dest = pixel4(1.0, 1.0, 1.0, 1.0);
            }
            else
            {
                dest = pixel4(0.0, 0.0, 0.0, 1.0);
            }
        }
    }
}
[/sourcecode]