How "new" Frame Based Synthesis is, I couldn't say for sure, as elements are clearly present in FFT and possibly some forms of speech synthesis, not that I've done enough research to say so with any authority. Anyway, here's what's going on with the sound clips below.
With many synthesis systems, most of the parameters will determine the basic timbre, and then you might have a few elements to play with in order to change that over time, such as an envelope mapped to a Low Pass Filter in a classic Analogue design. My method takes an entire synth patch - which could involve an FM matrix, additive harmonic synthesis, granular sample playback or anything else - and saves it to a slot, or a "frame." Further timbres are created and saved, and then these patch changes are played back over time, creating movement in the sound. So essentially it's quite simple: instead of having to contend with 30 envelopes or something like that, you just design the timbre as it will be at a particular moment in time. For each step, varying degrees of lag can be applied to the parameters, creating smoother transitions if required.
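To make that concrete, here's a rough SuperCollider sketch of the general idea (simplified, and not the actual code behind these clips): a basic FM voice stands in for whatever engine the frames control, each "frame" is just a complete set of parameter values, and a routine steps through them, with per-parameter lag providing the glide. The names (\frameSynth, ~frames, ~synth) are made up for the example.

```supercollider
(
// a simple FM voice stands in for whatever engine the frames control;
// every parameter glides over lagTime when a new frame arrives
SynthDef(\frameSynth, { |freq = 220, ratio = 2, index = 1, amp = 0.1, lagTime = 0.1|
    var mod, car;
    mod = SinOsc.ar(freq.lag(lagTime) * ratio.lag(lagTime)) * freq.lag(lagTime) * index.lag(lagTime);
    car = SinOsc.ar(freq.lag(lagTime) + mod) * amp.lag(lagTime);
    Out.ar(0, car ! 2);
}).add;
)

(
// each frame is one complete timbre, designed as it should sound at that moment
~frames = [
    (freq: 110, ratio: 2,   index: 1,   amp: 0.2),
    (freq: 165, ratio: 3.5, index: 4,   amp: 0.15),
    (freq: 220, ratio: 1.5, index: 0.3, amp: 0.25)
];

~synth = Synth(\frameSynth);

// step through the frames, sending each one as a block of parameter updates
Routine({
    ~frames.do { |frame|
        ~synth.set(*frame.getPairs);
        0.5.wait;
    };
    ~synth.free;
}).play;
)
```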
The fun comes once you've created a sequence with many steps. Pitch, rhythm and timbre have been rendered together in a kind of playlist, which I call a page. Pages can be recalled at will: they may be loaded instantly, or a smoother interpolation between the current page and the new page may be enacted. This "morphing" between pages can be specified as any number of steps, and given a curve argument, so that changes at the start and end of the transition period are more (or less) subtle than those in the middle.
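A sketch of the page morphing, under the same assumptions as above: a page is just an array of frames, and a morph builds the intermediate frames by interpolating every parameter over a chosen number of steps, with an Env curve shaping how quickly the blend moves at the ends versus the middle. ~morphPages, ~pageA and ~pageB are hypothetical names, not anything from the real patch.

```supercollider
(
// hypothetical helper: interpolate between two pages (arrays of frames,
// shaped like ~frames above) over a number of steps, with a curve argument
~morphPages = { |pageA, pageB, steps = 8, curve = 4|
    var shape = Env([0, 1], [1], curve);
    Array.fill(steps, { |i|
        var frac = shape.at((i + 1) / steps);   // blend position along the curve
        pageA.collect { |frameA, j|
            var frameB = pageB[j], mixed = ();
            // interpolate every parameter of every frame between the two pages
            frameA.keysValuesDo { |key, valA|
                mixed[key] = valA + ((frameB[key] - valA) * frac);
            };
            mixed;
        };
    });
};

// usage, assuming ~pageA and ~pageB already hold pages; the result is itself
// a page of intermediate frames, playable with the same routine as before:
// ~transition = ~morphPages.(~pageA, ~pageB, 16, 3);
)
```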
There is clearly a lot more scope to this technique, not least because, now that an entire musical sequence along with its timbre is specified in full as simply a series of numbers, many different transformations can be applied for interesting results. Creating melodies with this method would be easier with a more intuitive interface than the current grid of numbers. I'd also like to make the code itself more efficient, as at the moment faster transitions suffer from timing problems. As usual, I've made this stuff with SuperCollider.
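Since a page is just arrays of numbers, ordinary list operations become musical transformations. A couple of toy examples, under the same assumptions and made-up names as the sketches above:

```supercollider
(
// hypothetical transformations on a page (an array of frames, as above)

// retrograde: play the same sequence of timbres backwards
~retrograde = { |page| page.reverse };

// transpose: scale every frame's freq, leaving the rest of the timbre alone
~transpose = { |page, ratio = 1.5|
    page.collect { |frame|
        var shifted = frame.copy;
        shifted[\freq] = frame[\freq] * ratio;
        shifted;
    };
};

// e.g. a reversed, transposed copy of an existing page:
// ~variation = ~transpose.(~retrograde.(~frames), 2);
)
```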
Monday, 30 March 2009
6 comments:
Nice work! It seems to be doing something similar to preset morphing, which is available in a few softsynths. Reaktor, for example, does it by providing a slider for morphing between two presets of an instrument.
One potential source of confusion could be that (if I remember correctly) a frame in DSP usually refers to a chunk of samples that will get processed together somehow.
Correct me if I'm wrong, but your concept is essentially in parallel with something like Native Instruments KORE or the Sculpture AU built into Apple Logic Pro, in that it derives the overall timbre of the patch from the crossfading of multiple existing timbre settings?
In both Sculpture and KORE, this is achieved by dragging on an XY field. KORE differs from Sculpture in that, rather than setting the four corners of the field to synthetic representations of materials being struck (glass, metal or wood, for example), it's literally just sending the same MIDI controller messages to four different presets from diminutive engines of their existing synthesis software (Massive, Absynth, Reaktor, Guitar Rig etc.), along with 4-8 knobbed soft controls for each one.
What you have done in parallel to these models is that you've allowed more options with a "frame library" rather than four corners visualized on an XY field and that the system itself synthesizes the automation of pitch and rhythm together with crossfading between frames, treating them as the same thing.
Have I generally summarized what you've done or am I not right?
Either way, it sounds fantastic. The creeping howly sound about 3:00-4:00 into the first file gave me chills; I listened to it several times.
Hi, thanks for the responses! BH, I'd say the use of the word "frame" here has the same meaning as the one you're referring to, in that a chunk of values gets processed at the same time. S, there isn't really any patch morphing going on per se; it's just that the sound parameters are being updated in stages. Each stage (or frame) may be subject to a lag, which means the values will glide, but there isn't a sense of having two or more patches and making a composite sound of some description - it's more like a playlist of timbral changes. It is these playlists (or pages) themselves that are subject to morphing: for instance, at the start of Demo 1 you can hear a "beat" gradually dribble away and transform into a more abstract droning sequence. In all of these demos, a single oscillator model is playing continuously while the parameters are being updated.
Wow, I'm having a hard time totally grasping what you're offering me. I'd like to see it myself!
very cool.
Wow... You did your own program in SC... Damn... Years ago I was trying to make some noise in SC, but I dropped it and used some guitar pedals that I borrowed from my friends... Thanks for the inspiration... I'll write some junk in SC again... :)
Looks cool, but I'm not understanding how this is different from a tracker (like Renoise)?