Nonlinear Music Design


What follows is a necessarily cursory, high-level look at the framework behind some of the approaches that I've used in recent years for designing stand-alone nonlinear music files using DirectMusic. The ideas presented here are subjective and by no means exhaustive. I hope they serve to spark other ideas or at least give interested composers and designers a starting point and/or general direction from which to proceed. Unfortunately, there is no way to create a template or formulate a simple list of instructions for making nonlinear music files. If there were, it would hardly be a creative endeavor, and designing compelling nonlinear compositions requires creativity or, at the very least, ingenuity to be done well.

One factor that makes a cookie-cutter approach impossible is that the nature of the musical material used often dictates the most desirable, or even feasible, design approach; even similar-sounding pieces from the same genre can sometimes require different approaches. Additionally, the nature of the DirectMusic architecture is such that there are often several very different ways to accomplish the same-sounding outcome, which can be somewhat overwhelming at first. Each approach may be more or less appropriate, depending on the final application you design for and the strengths and weaknesses of your current skill set. Those with advanced MIDI chops who are comfortable building DLS sample sets may like the flexibility that comes with that approach if the material allows. (The ProjeKct X file was constructed with that approach, for instance.) More traditional audio engineers may gravitate toward DirectMusic Wave Tracks as their primary tool due to their similarity to nonlinear audio editing tools. Ultimately, each piece needs an approach unique to it and to its designer. However, I've found it useful to keep three high-level concepts in mind when approaching a new DirectMusic project. They may seem obvious and simple to grasp at first, but I've found that they become more useful the more experienced I become.

First, in brief:

Your role in the project is the first key factor to consider. How much freedom do you have to alter the material? If you are the sole, original composer, the answer is perhaps easy. If you are working as a remixer, obviously a different approach is required. One role that is unique to nonlinear music production is something like that of a translator: bringing linear material written by another composer (or yourself) into the nonlinear world while conveying the spirit of the original material, which of course brings its own set of requirements and restrictions.

Next, what functionality do you need to design for; that is, what functionality does your player app expose to the end user? Will your material play in a "start and stop" player like the ProjeKct X player? Or are you required to expose more parameters, such as the ability to change the mix by loading different AudioPaths or different Band files to mute certain tracks, or to allow control of purely musical parameters, such as the tempo or harmonic character of the music? Material based heavily on large linear audio tracks simply won't allow purely musical parameter changes, since these parameters are, in effect, "baked in." This leads naturally to the next concept.

The most critical concept is what I call the granularity of the material that you are working with. It is also the hardest concept to convey in the absence of nonlinear composition experience, since it is unique to the field. Is the piece a Bach fugue for a keyboard instrument, a jazz standard with vocals, an ultra-polished mega-hit pop song, or a musically ambient sound bed for an art installation? Each of these will have a different granularity, which will dictate your approach and how much DirectMusic functionality you can use or expose.

Now we can look at these concepts a little more in depth.

Role

Your role in the process is hopefully self-evident but still worth articulating to identify the boundaries within which you must work. Are you the sole composer of the piece, working entirely from scratch with the freedom to follow the piece wherever it leads? Are you working with music from the classical canon in the public domain, with a jazz standard, or with a traditional score composer and taking his or her linear ideas and converting them to DirectMusic files (as often happens in game music)? Or are you working with someone else's finished audio multitracks, using as much or as little as you like in combination with your own material the way a remix artist might? Or are you limiting yourself to recreating a variable playback version of the original piece ("translating" from linear to nonlinear)? The answers to these questions will help define your role.

Since the roles are not discrete stations, separate and isolated from each other, we can arrange the various options on an axis or continuum. For instance, remixers nearly always add their own material to a remix, often using only a vocal track and nothing else from the original. If working as a translator, one might adhere to the exact form of the material while allowing variations within the different parts, or one might make editorial decisions and attempt to capture the spirit of the original while taking much liberty with its form. On a number line from 1 to 12, the continuum might look something like this:

[Figure: the role continuum, from 1 to 12]

My role in the creation of the ProjeKct X file would be somewhere around 10, closer to translator than remixer. I didn't have any new material but still made some critical choices that affected the listening experience. My aim was to create a composition that did not just sound like that particular piece of music playing but that sounded more like that particular piece of music being played live, and to accomplish this I made certain editorial/translation choices. For instance, in the drum tracks I intentionally have them introduce the piece the same way with every play, without variation. I also place the tacet drum breaks on a similar timeline as the original live performance, as well as have them progress in intensity the same way they do in the original. In choosing the drum track to function as a sort of formal template for the Segment, which provides cues and signposts to the attentive listener, I gained more freedom to add variability to the other instruments. I wasn't forced to do it this way by technical or musical constraints; these were merely editorial decisions based on my reading of what would work best for the material in this format. I also changed the mix quite a bit from the original and brought in musical material from markedly different sections of the same improvisational suite so that this small Segment of material could stand a little better on its own as a demo. It is just one arrangement of several that I did with the material, the most formally literal with regard to the original. Had my role been further to the left of the axis above, I might have added quite a bit of my own material to act as setting for the ProjeKct X material or even just used samples here and there in my own composition.

Functionality

Of course, game audio is currently the most common application to compose nonlinear music for, but as the title of this article indicates, the application for consideration here is a stand-alone player application. Currently, the few stand-alone player apps available function primarily as simple start and stop devices, but the APIs are there to add specific DirectMusic functionality, allowing a custom player to expose different parameters to user input. The following list of parameters is in order of increasing granularity, as well as increasing restriction on the type of source material that is compatible with such functionality:

  • Standard playlist functionality to play multiple Segment files in a desired order

  • Choice of different AudioPaths and different Band files within a Segment to allow for different mixes

  • Allow user to mute or unmute certain sub-tracks

  • Allow user to choose approximate length of run time for each Segment

  • Allow user to choose level of "musical activity" or density

  • Allow user to alter tempo

  • Allow user to alter harmonic and melodic characteristics and/or mode

[Figure: the player functionality continuum]

The ProjeKct X player is simple in functionality, so the Segment I provided here has a corresponding architecture; it sits squarely at 1 on the axis above. Another player I've worked with that was specifically designed to showcase this material allowed the user to choose between various mixes and signal processing via alternate AudioPaths and Band files, as well as choose differing levels of musical density and intensity via controlling Segments. Such a player would have required me to ship corresponding assets and would also have led to different design choices in the Segment's construction. It's important to note that, due to the nature of the ProjeKct X source material, it wouldn't be possible to allow the user to alter the tempo or harmonic/melodic content. This is due to the granularity of the material, as explained below. In the middle of the axis is the game music engine: It requires much more complex design and assets than a simple Segment player, but it's still a more "controlled environment" than a player with musical parameter variables exposed to the user. There is likely no direct user access to musical and sonic parameters in a game; the playback parameters are more predictable by the composer and therefore easier to plan for and work around as required.
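The relationship between exposed player controls and "baked-in" parameters can be illustrated with a small sketch. This is not the DirectMusic API; it is a hypothetical Python model in which each piece of material declares which parameters were fixed at render time, and the player exposes only what remains variable:

```python
# Toy model (not the DirectMusic API): which controls a custom player
# could expose for a given piece of material. All names are hypothetical.
from dataclasses import dataclass

# Controls listed roughly in order of increasing granularity demanded.
ALL_CONTROLS = ["playlist", "mix", "mute", "length", "density", "tempo", "harmony"]

@dataclass
class Material:
    name: str
    baked_in: frozenset  # parameters fixed when the audio was rendered

def exposable_controls(material):
    """Return the controls a player could meaningfully expose."""
    return [c for c in ALL_CONTROLS if c not in material.baked_in]

# Material built on large linear audio tracks: tempo and harmony are
# "baked in", so those controls cannot be offered to the user.
wave_based = Material("wave-track piece", frozenset({"tempo", "harmony"}))
# MIDI + DLS material leaves the musical parameters open.
midi_based = Material("MIDI/DLS piece", frozenset())

print(exposable_controls(wave_based))
# → ['playlist', 'mix', 'mute', 'length', 'density']
```

The point of the sketch is only that player functionality is bounded by the material's granularity: a start-and-stop player accepts anything, while a player exposing tempo or harmony control accepts only material whose musical parameters were never rendered into audio.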

Granularity

Anyone who has worked much with DirectMusic probably already has an intuitive understanding of what I mean by "granularity." It is useful to think of it in the literal sense; sand is quite granular and conforms naturally to the shape of most any vessel. Gravel is less granular but still conforms to a mold, within rough limits. Bricks or blocks are much less malleable, but in return maintain their own shape and stability, which means we can use them to build a stable vessel or container into which we can pour gravel and sand if we wish. The analogy is simple: MIDI tracks in combination with DLS instruments are like sand, phrase samples are like gravel, and Wave Tracks are like bricks.

[Figure: the granularity continuum]

The ProjeKct X file belongs at around 4 or 5 on this continuum; phrase samples are built into DLS instruments. The left side allows much more malleability with regard to musical parameters, such as tempo, key, harmonic sequence, and phrasing. The right side is less concerned with musical parameters and more concerned with audio parameters. It is less musically malleable (in terms of key, tempo, etc.) but as such retains much more of the critical performance characteristics so crucial to popular music (i.e., the singer's voice or an instrumentalist's tone). As such, its stability (or invariability) will likely also determine the overall form of the Segment.

It is critical that a composer become familiar with how the basic architectural units in DirectMusic relate to one another in order to understand the concept of granularity. Roughly, from smallest to largest, these units are notes, variations, patterns, Styles, and Segments. These units tend to fit one inside the other, sort of like a series of nested Russian boxes. Once you grasp the internal relationships of the DirectMusic elements, the most critical step in designing a nonlinear Segment becomes somewhat easier — defining the smallest possible unit you can reduce the source material to without losing the essential quality you wish to convey, almost like discovering a "musical molecule." This analogy, of course, cannot be taken too literally since your definition of the granularity will likely be determined somewhat by your role and not by some objective musical quality.
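The nesting described above can be sketched as a toy data model. Again, this is not the DirectMusic object model itself, just a hypothetical Python illustration: a pattern holds several interchangeable variations, a Style holds patterns, a Segment references a Style, and each playback picks one variation per pattern, so no two renderings need sound identical:

```python
# Toy model (not the DirectMusic API) of the nested units:
# notes < variations < patterns < Styles < Segments.
# All structure and names here are illustrative assumptions.
import random

def make_pattern(variations):
    # Each variation is a list of note dicts,
    # e.g. {"pitch": 60, "start": 0, "dur": 480}.
    return {"variations": variations}

def make_style(patterns):
    return {"patterns": patterns}

def make_segment(style, length_in_patterns):
    return {"style": style, "length": length_in_patterns}

def render(segment, rng=random):
    """Produce one linear realization of a nonlinear Segment by
    choosing a variation for each pattern slot."""
    patterns = segment["style"]["patterns"]
    out = []
    for i in range(segment["length"]):
        pattern = patterns[i % len(patterns)]
        out.append(rng.choice(pattern["variations"]))
    return out
```

Calling `render` twice on the same Segment will generally return different note lists, which is the essence of the "musical molecule" idea: the variation is the smallest interchangeable unit, and everything above it is fixed scaffolding.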

For example, if you "translate" a hit song by a well-known artist and you want your Segment to maintain a traditional song form, the critical element is almost certainly the singer's vocal performance and the lyrics. The granularity in such a case is possibly as large as an entire verse and certainly no smaller than a full vocal/lyrical phrase. This would tend to dictate a design based around DirectMusic Wave Tracks, and the variations would likely be more in the post-production elements, such as altering the mix via track levels and differing DSP treatments, using alternate instrumental tracks, or altering the song's form. If you're working closely with the artist, you might be able to include alternate vocal takes. If, however, you design for a dance remix of the same song and are not concerned with maintaining pop song form, you can make the vocal line as granular as you like. In such a case, cutting the vocal line into phrase samples and making a DLS instrument with them might be the best option, depending on the rest of your material.

In the case of instrumental music, the level of granularity may also be as large as a verse or a chorus, especially if the critical element you wish to keep is the player's individual tone. If the instrument is especially MIDI friendly though, high-quality DLS instruments may be an option, in which case the level of granularity may drop all the way down to the individual note level, again depending on your role.

For example, one could theoretically ship a well-captured keyboard performance in the form of MIDI-based DirectMusic Segments along with a Gigasampler-quality DLS instrument. This would be the digital equivalent of shipping a player piano, the piano rolls, and the player. Rather than play the linear MIDI sequence the same way every time, the Segments allow for subtle musical variation in tempo, velocity, and duration, even within specifications derived from analyzing the way a particular player varies his or her performances. Imagine something like Keith Jarrett's Sun concerts in a format that sounded like he was playing your favorite Gigapiano in your listening room and had the option for subtle or not-so-subtle "Jarrett-like" variations that would differ with every playback. ("Hmm, shall I have Mr. Jarrett perform on my Bosendorfer or the Steinway today?") In such a case, the DirectMusic engineer (i.e., the person who converted the MIDI sequence to DirectMusic Segments and edited and added the musically appropriate variation parameters) would be working around 12 on the role axis above, but the material would be near 1 on granularity. The most compatible player would be near 12 on the player functionality axis above.
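The kind of bounded, performance-like deviation described above can be sketched in a few lines. This is a hypothetical Python illustration, not DirectMusic's variation engine: each note of a captured MIDI performance is re-rendered with small random offsets to timing, velocity, and duration, with the jitter ranges standing in for "specifications derived from analyzing the player":

```python
# Sketch (hypothetical, not the DirectMusic API): "humanizing" a
# captured MIDI performance so each playback differs subtly. The
# jitter parameters are assumed bounds, e.g. derived from analysis
# of how a particular performer varies a phrase.
import random

def vary_performance(notes, time_jitter=10, vel_jitter=8,
                     dur_scale=0.05, rng=random):
    """Return a new note list with bounded random deviations.
    notes: dicts with MIDI-style fields pitch/start/vel/dur
    (start and dur in ticks, vel 1-127)."""
    varied = []
    for n in notes:
        varied.append({
            "pitch": n["pitch"],  # pitches themselves stay fixed
            "start": max(0, n["start"] + rng.randint(-time_jitter, time_jitter)),
            "vel": min(127, max(1, n["vel"] + rng.randint(-vel_jitter, vel_jitter))),
            "dur": max(1, round(n["dur"] * (1 + rng.uniform(-dur_scale, dur_scale)))),
        })
    return varied
```

Each call produces a slightly different "performance" of the same piano roll, which is the player-piano idea in miniature: the sequence is fixed, while the expressive surface varies within bounds on every playback.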




DirectX 9 Audio Exposed: Interactive Audio Development
ISBN: 1556222882
Year: 2006
Pages: 170
