While the outstanding audio engine from a1k0n proved to be a perfect starting point, and was invaluable in kickstarting the project while I tested and built the core user interface elements, I have started to run into some limitations. These limitations are not an indication of anything wrong with the jsxm engine, but rather of a misuse of the code: it is perfectly suited to playing back .xm files, but not to an authoring tool, which needs different capabilities and controls.

I have been working on a refactoring of that engine rather than a complete replacement; much of the effect-processing code, the logic, and the inherent Fasttracker 2 format and playback knowledge is just as valid in the new system. To explain the changes, and the reasoning behind them, here is a short, very simplified explanation of how jsxm works.

In a nutshell, jsxm does everything in a single ScriptProcessorNode: processing effects, combining samples, applying pitch changes, applying volume changes, extracting visualisation information, and so on. This is very much in line with what you’d expect from a Fasttracker player, where mimicking the capabilities of the original engine is paramount, and it gives ultimate control. The cost is that it’s difficult to control tracks individually, and it’s quite expensive in terms of processing in Javascript. While I never actually noticed any performance issue with jsxm, a testament to the quality of the code, the lack of track-level control did become an issue in WeTracker. So, with a1k0n’s help, I’ve reworked the internals and moved some of the heavy lifting out into WebAudio nodes.
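To make the old approach concrete, here is a minimal sketch of what a single-node player looks like. This is not jsxm’s actual code; the `song` object and the `mixTrackInto` helper are purely illustrative, standing in for the pattern, effect and resampling logic that jsxm runs inside its audio callback.

```javascript
// Hypothetical sketch of the single-ScriptProcessorNode approach.
const ctx = new AudioContext();
const node = ctx.createScriptProcessor(4096, 0, 2); // no inputs, stereo output

node.onaudioprocess = (e) => {
  const left = e.outputBuffer.getChannelData(0);
  const right = e.outputBuffer.getChannelData(1);
  left.fill(0);
  right.fill(0);

  // For every track: advance the pattern/effect state, resample the
  // instrument at the current pitch, apply volume and panning, and
  // accumulate the result into the output buffers — all in JavaScript.
  for (const track of song.tracks) {          // 'song' is illustrative
    mixTrackInto(track, left, right);         // hypothetical mixing helper
  }
};

node.connect(ctx.destination);
```

Everything happens inside that one callback, which is exactly why individual tracks are hard to reach from the outside: there is no per-track node to mute, fade or analyse.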

The new engine looks like this…

The engine manages an Instrument object for each instrument in the song, which contains an array of AudioBuffer objects converted from the samples in the song on loading. A Javascript scheduler creates lightweight PlayerInstrument objects during playback to represent the notes being played. These objects hold the AudioBufferSourceNode, GainNode and StereoPannerNode needed to process the audio and send it further down the graph. Each PlayerInstrument lasts only as long as necessary to complete its note; as soon as it finishes, or is replaced by another instrument on the same track, it is cleaned up.

The output from each PlayerInstrument goes to a GainNode per track, which affords direct control over the volume and mute state of each track. The output of these is directed into an AnalyserNode per track, which is read by the monitors to display FFT curves for the notes playing on that track. The scheduling is based on the excellent article by Chris Wilson, A Tale of Two Clocks - Scheduling Web Audio with Precision.
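A rough sketch of that graph and the scheduling loop is below. The function and variable names (createTrackChain, playNote, scheduleRow, secondsPerRow) are illustrative assumptions, not WeTracker’s actual API, but the node wiring and the lookahead pattern follow the description above and Chris Wilson’s article.

```javascript
const ctx = new AudioContext();

// Per-track chain: GainNode (volume/mute) -> AnalyserNode (FFT monitor) -> output.
function createTrackChain() {
  const gain = ctx.createGain();
  const analyser = ctx.createAnalyser();
  gain.connect(analyser);
  analyser.connect(ctx.destination);
  return { gain, analyser };
}

// A lightweight per-note object: source -> gain -> panner -> track gain.
// The AudioBuffer comes from the Instrument created when the song was loaded.
function playNote(buffer, when, track, volume, pan) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  const gain = ctx.createGain();
  gain.gain.value = volume;
  const panner = ctx.createStereoPanner();
  panner.pan.value = pan;

  source.connect(gain);
  gain.connect(panner);
  panner.connect(track.gain);

  source.onended = () => {        // clean up once the note has finished
    source.disconnect();
    gain.disconnect();
    panner.disconnect();
  };

  source.start(when);
  return { source, gain, panner };
}

// Lookahead scheduler in the style of "A Tale of Two Clocks": a timer loop
// that queues everything due within the next ~100 ms of audio-clock time.
const lookahead = 0.1;   // seconds of audio to schedule ahead
const interval = 25;     // ms between scheduler wakeups
let nextRowTime = ctx.currentTime;

function scheduler() {
  while (nextRowTime < ctx.currentTime + lookahead) {
    scheduleRow(nextRowTime);       // hypothetical: calls playNote for each row entry
    nextRowTime += secondsPerRow;   // advance by the song's current tempo
  }
  setTimeout(scheduler, interval);
}
```

Because each track now ends in its own GainNode and AnalyserNode, muting a track or drawing its FFT curve is a matter of touching one node, rather than reaching into a monolithic mixing loop.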

Onwards and upwards.