Don't Say I Never Did Anything For You

Hello, yes… I'm back from the dead. Greetings from sunny Delray Beach. Just a quick check-in to let you guys know that I've been hard at work on my album (tentatively titled DSINDAFY, as referenced in the subject of this post). I'm in the process of figuring out who will mix it - nothing is finalized yet, but I believe we're all in for a treat.

The process on this record was to use the Aleator as a source of ideas and inspiration and then layer/build compositions in the piano roll. So, the backdrop was generated algorithmically but then I went through and carefully crafted the compositions on top of that. Hopefully you enjoy the results.

Feel free to listen to the demos here: DSINDAFY

t00dles -k

Loops Loops Loops

Weeeeee!

Well, that was interesting.

I told you guys I’d check back in when I was ready to build loops, so here I am. As you know, I’ve been building out progression sets for an upcoming mixtape I’m working on, and this is the second post I’m doing to document that process. This will really be 33% mixtape, 33% playlist, and 34% science project.

I've chosen to mine 80's pop hits for this effort. The first step, of course, is to gather a collection of clean acapellas. The fact that I'm only using free files and the general lack of selection available for isolated vocals online limit the possibilities. I won't reveal my list now - I'd like that to be somewhat of a surprise as I work through it. Suffice it to say that I have one. Anyway, from there, the process for each song will be relatively the same:

  • Figure out the tempo and key signature of the original recording

  • Analyze the chord progressions of the song in its entirety, including duration

  • Reduce those chords to intervals using roman numeral notation

  • Note any modulation that occurs

  • Transpose all of this data into a valid XML progression set for the Aleator plugin to consume (see the sketch after this list)

  • Build melody approximation XML file

  • Debug the plugin against the data to make sure all of the files are compatible with respect to beats/duration

  • Load the isolated vocal into the DAW

  • Target the appropriate tempo and key signature with the plugin and experiment to generate different versions of the song

  • Profit
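
To make the XML step a little more concrete, here's a minimal sketch of what transposing an analyzed progression into a set file could look like. The schema (ProgressionSet/Progression/Chord with degree and beats attributes) is invented purely for illustration - the real format is whatever the Aleator actually consumes.

      using System;
      using System.Xml.Linq;

      // Hypothetical progression-set writer. The element and attribute names are guesses
      // for illustration only, not the Aleator's actual schema.
      class ProgressionSetSketch
      {
          static void Main()
          {
              // One analyzed progression as (roman numeral, duration in beats) pairs,
              // e.g. IV - iii - IV - V held for a measure each in 4/4.
              var chords = new (string Degree, int Beats)[]
              {
                  ("IV", 4), ("iii", 4), ("IV", 4), ("V", 4)
              };

              var progression = new XElement("Progression", new XAttribute("name", "verse"));
              foreach (var (degree, beats) in chords)
              {
                  progression.Add(new XElement("Chord",
                      new XAttribute("degree", degree),  // key-agnostic roman numeral
                      new XAttribute("beats", beats)));  // duration in beats
              }

              var set = new XElement("ProgressionSet",
                  new XAttribute("tempo", 120),
                  new XAttribute("timeSignature", "4/4"),
                  progression);

              Console.WriteLine(set);  // or set.Save("SomeSong_ProgressionSet.xml");
          }
      }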

I have done this once before with Abstract Factory, which I invite you to listen to if you haven't already. That was much easier than what I'm doing now because...well...it's all rap. With most rap you aren't worrying about specific chords when you do this; the exceptions would be the sing-songy stuff that Nelly or Drake (or a lot of other rappers these days) do frequently, where the entire song is basically a hook, OR a situation where the hook is really the focal point of the song. Nothing I selected for Abstract Factory fell into either category, so I was free to reuse progression sets that I had already created. I just had to make sure that I was targeting the correct tempo and key signature so whatever hook existed was actually in tune. Even though Abstract was rendered with the previous version of the Aleator and the output was much cruder than what I'm able to produce currently, it all made for some really interesting and fun results.

A few months ago, I attempted to randomize the Simple Minds song Alive and Kicking, one of my favorite (non-iconic) 80's pop songs. Ultimately I came to realize that the acapella I had was too dirty to use. Lesson #1 - listen to your vocal iso thoroughly before proceeding. Almost as important though was the fact that the main melodic motif is so central to the song overall. It's so powerful that eventually the vocal begins to mirror it. When I was running loops, I just kept wanting to hear that melody and I think anyone else would as well. It made rendering the song in any other way seem pretty pointless. So, that really adds another wrinkle to the selection process - I want something nostalgic with really memorable vocals that can afford to have the lead INSTRUMENTAL melody (and everything else) replaced generatively.

As I mentioned in the previous post, the next song I’ve attempted to randomize is New Edition’s Count Me Out. Why Count Me Out? Because it seemed I had no shot of finding a Cool It Now, Candy Girl or Popcorn Love acapella for free...it was really that simple. Anyway, this one is gonna work, but I did run up against some issues with tempo. A lot of these 80’s songs are built on loops that were recorded without click tracks. That means that while a song might be listed online at a certain BPM, it’s possible you get some drift when trying to sync. There’s also the fact that a lot of (all?) online resources list BPM as a whole number when the true BPM is fractional. Count Me Out is listed as 120 BPM, but if I let the Aleator run at 120 for the entire song, the vocal falls waaaaaay behind by the end - it’s not even close. A tempo of 119 BPM is much too slow, so the truth is somewhere in between, which my plugin can’t target. This resulted in me chopping up the vocal to re-sync every few measures; not ideal and a complete pain in the ass.
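
For what it's worth, you can back into a fractional tempo estimate from the drift itself. This is just a back-of-the-napkin sketch, nothing to do with the Aleator, and the drift number below is made up - it simply solves for the BPM that would have kept the vocal in sync:

      using System;

      // Rough fractional-BPM estimate: if the vocal lands N seconds late after M measures
      // at an assumed tempo, solve for the tempo that actually matches the vocal.
      // All numbers here are hypothetical.
      class BpmDriftSketch
      {
          static void Main()
          {
              double assumedBpm = 120.0;  // what the internet says
              int measures = 96;          // how far into the song we measured
              int beatsPerMeasure = 4;
              double driftSeconds = 1.0;  // how late the vocal is by that point (made up)

              double beats = measures * beatsPerMeasure;
              double gridSeconds = beats * 60.0 / assumedBpm;    // where the 120 BPM grid thinks we are
              double vocalSeconds = gridSeconds + driftSeconds;  // where the vocal actually is
              double trueBpm = beats * 60.0 / vocalSeconds;      // tempo that matches the vocal

              Console.WriteLine($"Estimated true tempo: {trueBpm:F2} BPM");  // lands between 119 and 120
          }
      }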

I am finally at a point where I can generate different versions of the song at will. Here is the initial run:

Not that great actually, but good enough that I’m comfortable I’ll eventually get something that is. Unfortunately, I’ve listened to this song in various states so many times over the past few weeks that I’ll probably punch my monitor if I hear it again (no offense New Edition), so I think now is a good time to move on to another tune. This will probably be my method for the mixtape overall - go through and do an initial setup of all of the songs, then come back around and do another pass where I actually generate versions I want to use.

As for the next tune…

I’ve wanted to cover this song since forever...never imagined I’d just have my computer do it. I’ll be back soon with a breakdown of my approach to melody generation. It’s a work in progress.

K

 

Pop Shit

Bobby Brown was a wild dude...

Long time no talk. I guess you could say I've taken a short break from Staggered activities for a minute, although I have been thinking about strategy a lot. I had the pleasure of presenting the project at the NY Music Tech Meetup on Thursday, which was a really good learning experience. I think I did a decent job of boiling things down for the audience and at least a few people seemed responsive. Thanks Seth.

I have a few things I need to get to in the next dev cycle and I'm just not ready to tackle them yet. So instead, I'm gonna have some fun and do another mixtape. We're gonna head back to the 80's for this one...I've already started pulling together acapellas for it. If you haven't heard Abstract Factory yet, well - I think you should. It is rated R though, so if you have sensitive ears you might want to skip it.

Anyway, I wanted to walk you guys through the process for doing these mixes this time around, as it is pretty interesting (I think). The first song I am trying to scramble is New Edition's Count Me Out. I guess in terms of their overall catalog this wouldn't be considered a major hit, but I remember it being played a lot on the radio when I was a kid and always liked it. This one is on All For Love, but it's just as sugar coated as any of the stuff on Candy Girl or the eponymous record so it's a good remix candidate.

The first step in doing this is always chord diagramming the song. Let me just say - the online tools for chord diagramming (e.g. Riffstation or Chordify) are pretty terrible in practice. More power to you if you have success with them but I have no idea how they are coming up with these progressions. The only thing I can think of is that they are inserting extra chords to go along with bass fills or something like that, but there are so many ghost chords that I find them pretty useless for what I'm doing and it's much easier to just draw up the chords myself.

Also...holy shit this song has way more parts than it should. When you listen, it's the simplest of pop songs. The song is in E, and the basic motif is just an ascending progression from the tonic to the dominant:

E (I) - F#m (ii) - G#m (iii) - A (IV) - B (V)

The rest of the song is similarly constructed. There are no tricks here...it's bubblegum pop. There's no modulation and the submediant and leading-tone chords are untouched for the entire song. There are no chord extensions to speak of; there aren't even any sevenths.

What DOES happen is the seemingly endless shuffling around of these five chords. Ooof. I won't go into the specifics, but I count 8 progressions in this song. That's not rock opera status, but it's kind of shocking for a song that FEELS like there are only three. Usually with these pop tunes, once you get to the bridge you are done...but they go into a rap break after the bridge that has different (new) chords under it! Each progression seems to have a few tiny variations within it. Ugh fuck it, I'll go into specifics. With that first progression out of the way, we have (only roman numeral notation from here):

IV - iii - IV - V

IV - iii - ii - V

ii - IV - V 

I - ii - iii - ii - V

ii - V - ii - V - IV

ii - V - ii - IV - V

ii - IV - V

You could break these up into more reusable components, but this is pretty indicative of how they are used. It's a long way to go for a pop tune. Deconstructing these songs really makes you appreciate how much intuition is required to write the hooks and how effective these writers really were. Peace to Vince and Rick.
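
Purely as a sanity check (and nothing to do with the Aleator itself), it's handy to dump the diagrammed progressions into plain degree arrays - it makes it obvious how much work those five chords are doing:

      using System;
      using System.Linq;

      // The eight Count Me Out progressions above as plain degree arrays. Illustrative only.
      class ProgressionInventory
      {
          static void Main()
          {
              string[][] progressions =
              {
                  new[] { "I", "ii", "iii", "IV", "V" },
                  new[] { "IV", "iii", "IV", "V" },
                  new[] { "IV", "iii", "ii", "V" },
                  new[] { "ii", "IV", "V" },
                  new[] { "I", "ii", "iii", "ii", "V" },
                  new[] { "ii", "V", "ii", "V", "IV" },
                  new[] { "ii", "V", "ii", "IV", "V" },
                  new[] { "ii", "IV", "V" },
              };

              int distinct = progressions.Select(p => string.Join("-", p)).Distinct().Count();
              Console.WriteLine($"{progressions.Length} progressions, {distinct} distinct");

              var usage = progressions.SelectMany(p => p).GroupBy(d => d).OrderByDescending(g => g.Count());
              foreach (var g in usage)
                  Console.WriteLine($"{g.Key}: used {g.Count()} times");  // only I, ii, iii, IV and V ever show up
          }
      }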

Ok I gotta run. I'll check back in when I start building loops...enjoy the weekend!

 

The End Of The Beginning

People still bring this up to me. Yes, I remember it.

After working on the Aleator for maybe a year and a half, much of it in abstraction, I deployed my project to Unicron and started streaming Facets in the spring of 2014. At that point, I thought I was pretty far along in my endeavor to endlessly generate live music that people would actually want to listen to. Three years and several false starts later, it’s fair to say that I was way off on that assessment. Regardless, I finally have a stable solution that writes, performs and immediately broadcasts what has been described as “pretty good” music by some and met with slackjawed incredulity by others. I also have the infrastructure in place to create new instances of my environment and easily spawn new streams. It’s been a long road and it’s great to finally take a break from development.

A lot of the work I do on Staggered is late at night and/or at varying levels of sobriety. Often, I go with the easy implementation as opposed to the correct one. That led to me using while loops...what an awful idea. I can’t guess how many man hours I lost to addressing the issues it caused and ultimately changing the approach. The plugin would get trapped in one of these loops when filling the MIDI queue and basically stall out. To the listener, this meant that the song being written was never completed and the stream remained silent until it was restarted. Obviously, knowing that was a possibility didn’t make me want to draw attention to the project.
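
The shape of the fix, if you're curious, was basically "never let a fill loop run unbounded". This is not the actual plugin code - the names (FillMidiQueue, TryGenerateEvent, the MidiEvent record) are all invented - but it shows the idea: cap the iterations, log, and move on instead of stalling the stream.

      using System;
      using System.Collections.Generic;

      // Illustrative only - not the Aleator's real code. The point is the hard iteration cap:
      // a queue-filling loop should give up and report instead of spinning forever.
      class QueueFillSketch
      {
          record MidiEvent(int Note, int Velocity, double Beat);  // stand-in for a real MIDI event type

          static readonly Random Rng = new Random();

          // Stand-in for whatever generates the next event; it "fails" sometimes to
          // simulate the condition that used to trap the old while loop.
          static bool TryGenerateEvent(out MidiEvent evt)
          {
              if (Rng.NextDouble() < 0.2) { evt = null; return false; }
              evt = new MidiEvent(Rng.Next(36, 84), Rng.Next(60, 127), Rng.NextDouble() * 4.0);
              return true;
          }

          static bool FillMidiQueue(Queue<MidiEvent> queue, int targetCount)
          {
              const int maxAttempts = 10000;  // bail out instead of stalling the stream
              int attempts = 0;

              while (queue.Count < targetCount && attempts < maxAttempts)
              {
                  attempts++;
                  if (TryGenerateEvent(out var evt))
                      queue.Enqueue(evt);
              }

              bool filled = queue.Count >= targetCount;
              if (!filled)
                  Console.WriteLine($"Queue fill gave up after {attempts} attempts");  // log it and recover
              return filled;
          }

          static void Main()
          {
              var queue = new Queue<MidiEvent>();
              Console.WriteLine(FillMidiQueue(queue, 64) ? $"Filled {queue.Count} events" : "Fill failed");
          }
      }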

That’s all done now though. Both streams are up and running without issue, fallbacks are working as designed and I’m in a good position to promote and talk about what I’m doing. I will most likely be on a dev hiatus until the new year. In the upcoming months, I’ll be curating a series of Static Void test files on SoundCloud (while it’s still a thing) and working on an um...80s influenced mixtape that should be pretty fun. Recordings are not a focus for me, but they are the primary way that music is consumed and a tool I can use to engage with those who might not understand or care about Staggered otherwise. They are also a very fun distraction when you’ve been coding for too long.

Oh - and maybe Static Void gets another progression set or two. As always, I’ll see where it goes.

Testing, Testing 1 2

Real quick because I'm on vacation...if you're reading this, I need your help. Both Facets and Static Void streams are being actively tested right now and I can use all of the assistance I can get in determining which devices and browsers the web player isn't working for. If you go to the stream pages and clicking the speaker at the top of the page doesn't activate the stream, please leave a comment letting me know so I can attempt to address it. I've received some complaints from some Android users (fuck you Eric), so I'm just trying to get a feel for how extensive an issue this is.

The streams are sounding pretty good and are relatively stable. There is still (at least) one unhandled exception to be dealt with but it's being thrown infrequently enough that I can address it later.

Regarding Facets - after being in a fallback state for some time, it has been redeployed with the most recent version of the Aleator, which means it has some added capabilities - notably, multiple drum kits and chord changes on 8th notes. Hopefully this makes it a more...robust listen without changing the character of the original stream too much.

But really - whatever, I'm doing this shit for free. F it I'm going back to the beach. See you guys in a couple weeks.

Static State

Hi Everybody-

Just some quick hits tonight. First, thanks to the people who have shown interest and offered words of encouragement recently - I appreciate it. Another thing...as you can tell from all of the sketches I've been tweeting out, I've been doing a lot of experimentation with Spectrasonics' Moog Tribute Library and enjoying it a lot. You should probably expect to hear a lot of Moogish instrumentation on the upcoming stream. I have three interval structures from Static Void pretty well fleshed out, so I'm taking a break to do some software work while I have the energy.

I've been listening to as much generative music as I can over the past few weeks. It's a little tough with me being such a rap & rock guy, and interesting because a lot of generative music is either ambient or almost purely rhythmic. I seem to be the only game in town in terms of cohesively generative melody and/or harmony. That said, there are a lot of generative audio artists doing some interesting things - Renick Bell specifically is doing some really great work. Also, you might wanna listen to Rob Clouth and some of the other Leisure System stuff. A lot of this music isn't purely generative, but will have some generative or algorithmic elements. I'm still looking for an area where I might fit in a little more and be part of the conversation...something closer to generative pop. So far I'm not seeing it though...maybe that's not a bad thing.

I'm not gonna get into Autechre or Brian Eno. If you're reading this, you already know about them.

Anyway...Static Void. Unfortunately, I have to do the thing I hate the most for a little while: run endurance tests with the Aleator, with a little regression thrown in. I'm introducing a ton of changes and there were known issues with the plugin running indefinitely as it was. I'm making some progress...if I get it to the point where it is consistently running for 3 days or more, I can live with that.

Once I get past that I will add a feature or two. Dynamic instrumentation (i.e. not having all the instruments always playing) is definite, and I might look into either fades or tuning the drum kits. I'm definitely going to add one more kit as well - I'll go with four total for this stream. Considering the fact that Facets only has one, that's plenty. Then finally, I'll get three more structures in there (6 total). I'm really excited about how the sketches are turning out and think the stream is going to shock a lot of people into believing.

In the meantime, I'll keep pumping out the sketches when I get time to tide you guys over. Thanks again for caring...

Space Case

One of the main reasons I started working on Staggered was because I wanted some insight into my own musical tendencies. My knowledge of music theory was (and remains, relatively) limited. I just played shit that I thought sounded cool and never really thought about exactly why I felt that way about it.

Part of the fun of actually working on streams as opposed to working on the Aleator itself or the site is that I get to analyze my stylistic decisions as a songwriter, specifically as they relate to chord coloring and harmony. If we imagine melody and harmony occurring on the x and y axes, respectively, contemporary pop music is horizontally focused. Meaning - the most important thing is the melodic hook...that's where the money gets made. There is a ton of sound being crammed down our throats as listeners - whether it be words, weird ass noises, effects or just pure saturation. Harmony requires a certain amount of aural space to really have an impact and as a result it seems that harmonic concerns aren't really at the forefront right now. That makes the topic very interesting to me.

I want to open up my process a little for you guys and provide some visibility into how these streams come into being. First off, everything starts with acoustic guitar. That may seem counterintuitive, but it's the truth. So, let's have a listen to a quick sketch of a piece I'm working on called Space Case. As a song, it's not overly complicated:

So, a quick rundown of what's going on here in terms of progressions. I am in C major, but I work key-agnostic, so I'm providing roman numeral analysis. For part "A", each chord is held for a measure. I alternate between Cmaj9sus2 (I) and Am7 (vi) three times and then for the final cadence move to Em7 (iii) <--> Dm7 (ii). Then, for part "B", I modulate to the relative minor (A minor in this case) and hang out on the newly established tonic. This time I'm cramming two chords into each measure, but it's a similar pattern. I alternate between Am7(b13) (i) and Am7 (i) for the first three measures, then in the final measure go from Em7 (v) to F (VI).

Right away you'll notice a lot of 7th chords, and some are even extended. I've always known that I tended to use a lot of 7ths and 9ths, but my use of suspended and extended chords was news to me. This was all pretty problematic with Facets, as the Aleator could only play basic triads at that point. As a result, it lacks a certain amount of nuance; I'm hoping that the work I did on the Aleator in the fall and winter makes Static Void a lot richer and that some of the subtleties that come through on the acoustic can find their way through everyone's speakers.
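
For reference, here's what those colors look like in key-agnostic terms - each quality expressed as semitone offsets from the root. This is just a cheat sheet for the chords named above (and the maj9sus2 spelling is one reasonable reading of that name), not the Aleator's internal representation.

      using System;
      using System.Collections.Generic;

      // Chord qualities from the Space Case sketch as semitone offsets from the root.
      // Reference only - not the plugin's actual data structures.
      class ChordColorSketch
      {
          static readonly Dictionary<string, int[]> Qualities = new Dictionary<string, int[]>
          {
              ["major triad"] = new[] { 0, 4, 7 },
              ["minor triad"] = new[] { 0, 3, 7 },
              ["m7"]          = new[] { 0, 3, 7, 10 },      // Am7, Em7, Dm7 above
              ["maj9sus2"]    = new[] { 0, 2, 7, 11, 14 },  // sus2 replacing the 3rd, plus maj7 and 9
              ["m7(b13)"]     = new[] { 0, 3, 7, 10, 20 },  // the Am7(b13) color in part "B"
          };

          static void Main()
          {
              foreach (var pair in Qualities)
                  Console.WriteLine($"{pair.Key,-12} -> {string.Join(" ", pair.Value)} semitones above the root");
          }
      }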

Anyway, when transposed into XML for the Aleator to consume (with another tiny part added), we get:

(Screenshot: SpaceCase_XML_2.JPG - the Space Case progression set as XML)

With this, the first step is complete; the harmonic framework for this particular passage is represented in XML with no reference to key. Next time I'll talk a little about some of the challenges that have arisen as a result of the additional intervals, altered 5ths, etcetera. Space Case is the first Staggered piece to incorporate any of that so we'll see how it goes. In the meantime I'll keep tweeting sketches.

Nth World Problems

nth world problems

To anyone who has paid even the slightest bit of attention to what I'm doing, I want to say Happy Holidays & Happy New Year.

As far as Staggered is concerned, this has easily been the most frustrating and humbling year of work since year one. In a lot of ways, it was worse. In 2013, I was working in total abstraction and didn't know for a fact that live, generative MIDI was even possible. As far as I knew, nothing like it had been attempted before. It was an amazing moment when my code generated those first rhythmic pulses of white noise. It wasn't music (or was it!?), but I knew that I was onto something.

In a lot of ways, not much has changed. Musicians are dipping their toes into generative techniques using software, the same as they were before I started working. However, in the realm of music, two facts placed Staggered on an island in 2013 and still do:

  • Code is my primary instrument
  • My output is ephemeral by design - I do not aspire to be a recording artist

The second point cannot be overstated. Since I am streaming live 24/7, I have infrastructure concerns that even another musician working primarily in generative MIDI would not. Availability, fallbacks, recovery, etc. In this space, there is no one for me to defer to...no known blueprint. I'm still working from scratch.

For this reason, I deployed the minimum viable product in early 2014 as Facets, which is still what you hear on this site. Before I can move forward, that prototype must become an application with all of the features I need to create the desired output. This is what I've been working on all year, and it has not been easy. The work itself has been painstaking and tedious, with no real gratification until recently.

Starting in the New Year, I will be posting test runs from Static Void. This isn't some sort of planned release...it's just coincidence that I've been able to complete the implementation of some of these features over the last few weeks. These include:

  • Changing drum kits from one composition to the next (currently only one kit)
  • Changing chords on 8th notes (currently only done on quarters)
  • Lead melody playing 16ths (currently only 8ths)
  • 4+ note chords (currently only triads)

While these might not seem like a big deal and might not even be detected by a casual listener, the mathematical component of the changes made them exceedingly difficult to debug, especially in my spare time. That's really what happened to 2016. I had nothing to disclose on a given day other than what specific algebraic hell I had sunk into.

I will be transposing my notes to XML, experimenting with new presets and doing some test recordings (gasp!) for you guys in the coming days just to pull you in on the process. You know, the fun stuff. The reason I started doing this in the first place. There's still some low-hanging fruit in terms of features (for example: varying the instrumentation instead of having all instruments always playing), but I need a break from dev for a while.

As I move forward, please post ideas in the comments or tweet. Thanks again for your interest and again, Happy New Year!

K

 

Reverse Proxy: Two Birds, One Stone

Hey it's been a minute so I thought I should speak upon some nerd shit. You know, for posterity. 

Today's topic will be the reverse proxy. As you may or may not have realized, I was incapable of streaming through most corporate firewalls previously. I use SHOUTcast as my streaming server and the audio comes through on port 2199. I couldn't figure out how to change that on the SHOUTcast side; I don't believe it's possible. I've been operating at such a low level though that it really wasn't pressing. The result was simply that most people (myself included) couldn't listen to Facets - or anything else that I stream - from their work computer. That made me feel like this:

Seriously, I moved on pretty quickly. However, when I started trying to implement the visualization in the dashboard using the Web Audio API (another thing still in progress), I realized I had another, related problem. I couldn't use an audio buffer source for my stream because it never ends. It's impossible for me to fill the buffer since the onload event will never fire; the request never technically "loads". In other words, this shit will not work b:

      /* Assumes a Web Audio AudioContext named "context" exists elsewhere,
         along with the soundSource/soundBuffer variables */
      var url = "http://usa4.somestreamurl.com/;";
      var request = new XMLHttpRequest();
      request.open("GET", url, true);
      request.responseType = "arraybuffer";

      /* Good luck ever hitting this, dumbass */
      request.onload = function()
      {
         /* Create the sound source */
         soundSource = context.createBufferSource();
         soundBuffer = context.createBuffer(request.response, true);
         soundSource.buffer = soundBuffer;
      };
      request.send();

That meant that I needed to use a media source. No problemo. Oh wait - still one problemo: CORS. For you non-developers who are weird enough to be reading this, CORS stands for Cross-Origin Resource Sharing. It basically means that you can't use JavaScript to load resources from a domain other than the one your application is running on without consent on the other end. In this case, I was fucked. I can't just make SHOUTcast allow me to make JavaScript requests to resources on their domain. What to do...

With a little elbow grease (aka Google), I had my solution: set up a reverse proxy. The basic idea is that if you are administering a web server, you can set it up as a sort of relay and allow cross-origin access to the resource there. You configure your web server to forward requests meeting the desired criteria to the destination of your choice (in this case, my SHOUTcast URL). Then, on the front end (e.g. Squarespace), you send your request to your web server. In IIS, it looks like this:

Click URL Rewrite

Click Add Rule(s), then Reverse Proxy in the subsequent window and follow instructions

All set

Not sure about Apache and other servers, you're on your own there. Of course, as a side effect of the default IIS configuration the resulting audio is exposed on port 80, which alleviates any port/firewall problems. Pay me.

Don't Mind Me...

You may have noticed some changes around here recently. We're redoing the site on the fly so just act like everything's normal. As you can see, we're installing a custom player. You can see the new functionality in the upper left hand corner of the site, but we're (obviously) still working on styling. As a side note, the nav items have been moved from the center to the upper right.

In the coming weeks we'll be updating banners and...(drum roll)...adding our new dashboard to the navigation. It's still under construction, but you can preview it here.

Y'all finished or y'all done?

Don't say I never did anything for you.

K

ZZZzzzZZZzzz

Happy Saturday true believers. Another weekend, another immense dev task to tackle in my spare time. Today (and for the foreseeable future) I am working on changes to the Aleator that will allow chord changes on 8th notes; currently it only happens on quarters. This is one of the four major changes I was planning on implementing for Static Void:

  • Chord changes on 8ths
  • Drum kit changes between passages
  • Increased chord coloring (7ths, 9ths, etc)
  • Arrangement variation (inclusion/omission of instruments in a given passage)

Given where I am though, I will probably just move forward with the first three and implement arrangement variation in a phase 2. There have been a ton of changes since I last pushed to production, which isn't good, so I will look to get all of that stuff tested and deployed and then get the new XML sets in place before any further Aleator changes.

Last week, my DAW (Reaper) stopped producing audio for a very long time, but didn't crash. This is the worst case scenario since it leads to silence in the production environment (a crash results in fallback files being played). Before I launch the next stream, I have to figure out a way to stop this from happening, or at least bring down Reaper when the Aleator stops producing notes. That's what we call a "P1" in the biz.
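
I haven't built it yet, but the watchdog I have in mind looks roughly like this: track the last time a note was produced and, if the silence goes on too long, kill Reaper so the existing crash fallback takes over. Everything here - the names, the thresholds, the process name - is hypothetical.

      using System;
      using System.Diagnostics;
      using System.Threading;

      // Hypothetical watchdog sketch, not production code. If no notes have been produced
      // for too long, bring down the DAW so the crash fallback files start playing.
      class SilenceWatchdog
      {
          static DateTime _lastNoteUtc = DateTime.UtcNow;

          // Whatever monitors the plugin would call this every time a note is emitted.
          public static void NotePlayed() => _lastNoteUtc = DateTime.UtcNow;

          static void Main()
          {
              TimeSpan maxSilence = TimeSpan.FromMinutes(2);  // threshold is a guess

              while (true)
              {
                  Thread.Sleep(TimeSpan.FromSeconds(15));

                  if (DateTime.UtcNow - _lastNoteUtc > maxSilence)
                  {
                      Console.WriteLine("No notes for too long - killing Reaper so fallbacks kick in.");
                      foreach (var p in Process.GetProcessesByName("reaper"))  // process name assumed
                          p.Kill();
                      break;
                  }
              }
          }
      }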

Wanna see how boring this shit is to implement? Here's a peep at the method I'm currently altering - this one determines drum totals if the Aleator has decided that it's going to play a Reggae style beat. Ugh.

Static Void

Long time no see. I just wanted to take a moment to let anyone reading this know that after a long time just coding, we are getting close to having what you might call an alpha. That means that activity around these parts will see a welcome increase. There are still a lot of improvements to be made on the software, but the framework is in a place where we can bring the project in front of the public.

In the coming months we will be releasing Aleator improvements to allow for more chord coloring and dynamic capabilities, as well as launching a Kickstarter campaign and continuing development on our forthcoming stream, Static Void. I feel like that sentence should have ended with at least one exclamation point but it seemed stupid so I just went with the period. Anyway, the goal of the Kickstarter campaign will be to fund both streams (SV as well as 2014's Facets) for two years, so it's important that it go well. A lot of effort and energy are being put there. We also will be optimizing the site for mobile and making some aesthetic changes, so pardon us in advance.

I'd like to thank The Melissas for their continued support - it's very much appreciated. And now, allow me to leave you with a very nice integer notation diagram that's helped me immensely:

Never forget.

Similac

ABBOT LABS COULD SUE ME FOR USING THIS

When Facets went live in February, it seemed like a pretty big accomplishment, and it was. The Aleator plugin had been in development for an extremely long time and we were finally at a point where we could attempt to run it in a production environment. Of course, attempting and accomplishing are entirely different things.

If I've learned one thing over the last 4 months, it's that this project is still in its infancy from a technology standpoint. The problem of delivering live audio 24/7 using the methods described here isn't a simple one - there are many points of failure and it's taken a long time to work through them. Some of them have been detailed in this blog and some of them haven't. Most recently, I discovered that when responding to the data received from the DAW's transport, there are instances when it instructs the Aleator to stop playing. That has been fixed in the most recent assembly, but even now there are instances when the stream inexplicably disconnects from the SHOUTcast server. When that happens, the connection has to be manually reestablished in Reaper. These sorts of things can be near impossible to debug, as it can take days to observe the behavior. For this reason, it was necessary to implement pretty extensive logging - otherwise it gets pretty hard to tell what precipitated certain events.

Did any of that sound exciting? It's hard to post blog entries and tweet about this stuff because it's incredibly boring, but it's nonetheless necessary. Before there can be any real promotional effort behind Facets, the uptime has to be considerably higher. That means we need to figure out what is causing these disconnections, or at least figure out a way to be notified when they occur. The more of these problems get addressed, the more confident we can be that the stream will be active when listeners try to access it. Only when that confidence is high does it make sense to aggressively promote it.

In light of all this, I've come to the conclusion that putting a lot of effort toward the release of an assembly doesn't make sense right now. We will maintain the CodePlex project for posterity (and because we need source control), but we will not be focusing on building out that project in any formal sense.

Finally, I am starting to put some of the primary building blocks in place for the next release. Just sketches really, but I am getting a sense of the palette. Should be fun.

The Next Episode

Long time no talk. Lots going on with Staggered in recent days, especially on the tech side. If you're reading this you probably know that tempo changes have been implemented in production. Whew, that was something. We also switched streaming data centers - we were initially set up in the EU since that's the default location with our host. The result of the switch is much less latency, meaning that if we are doing anything on the production server we can hear the live impact a lot more quickly.

One great thing that happened is iTunes Radio has picked up Facets...crazy right? Proof:

We exist

Announced with all of our deserved fanfare.

So the next thing that will happen is that our production server will be migrated to the new data center. This is going to result in a significant outage tomorrow night (3/30/14) into Monday afternoon. We will provide additional fallback files before the outage, so if you happen to visit during that time you may not notice a change, but that audio will be prerecorded.

Most important of all, we have begun writing for our next stream. There's a lot to consider there...a lot of aspects of musical composition that we didn't have the time or resources to cover for Facets. A big one is the notion of commonality between different passages in a movement. When a (good) songwriter crafts a song, it isn't just a bunch of parts arbitrarily crammed together. The various sections of the song are related to each other in some sort of musical sense - same keys, complementary melodic patterns, rhythmic phrases supporting or countering one another...maybe a combination of different things. As a result, Facets can feel somewhat jerky at times. It's fine if that is the desired effect, but one thing we will definitely address before we go to production with another stream is making our compositions more cohesive, unless the goal for a particular one is a more fragmented feel. We will get into more of this stuff in the coming weeks, but just know that we definitely want to take what we are doing to another level in terms of quality (a higher one to be specific).

Ok everybody enjoy the weekend -  catch me on Titanfall. 1!

-K

According To My Calculations Part II

Taking a little break from the tempo change implementation to do a follow up on the first post as promised...

I won't spend a lot of time here going over the same topics I did in part 1. Continuing with the scenario discussed there, let's assume we are approximating a One Drop rhythm.

Yeah.

Pete approves. So we have guessed that there are going to be two kicks to the measure. That's great, but where do they go? This is where it really becomes more of an art form than a development exercise. You are really writing algorithms to apply a certain sense of style to the phrases you generate, and the kick is a really good example in this situation.

For these sorts of rhythms, the routine is pretty simplistic. We determine the basic sort of beat based on the number of kicks predicted per measure. If we guessed 1 or 2, we know this is going to be a One Drop and as such will look to place all kicks on a 2nd or 4th beat - remember we are counting half time, so that may be considered the 3 by some. If we only guessed 1 (as opposed to 2) or ended up with an odd total, the placement of a given note may be randomly on either the 2nd or 4th beat of the measure. This logic gets extrapolated out to Rockers, meaning that if we guessed four kicks per measure or less, we will look to place all kicks on quarters. This expands to Steppers in the same manner (except with eighths), as sketched below.
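
In rough pseudo-plugin terms, the placement routine boils down to something like this. The names are invented and the details are simplified (the real Vibe class does more), but the idea is the same: pick the target grid for the style, then drop each predicted kick onto it, randomly where there's a choice.

      using System;
      using System.Collections.Generic;

      // Simplified placement sketch - invented names, not the real Techniques code.
      // Counting half time: One Drop targets beats 2 and 4, Rockers targets the quarters,
      // Steppers targets the eighths.
      class KickPlacementSketch
      {
          static readonly Random Rng = new Random();

          static List<double> PlaceKicks(int kickTotal)
          {
              double[] targets = kickTotal <= 2 ? new[] { 2.0, 4.0 }                        // One Drop
                               : kickTotal <= 4 ? new[] { 1.0, 2.0, 3.0, 4.0 }              // Rockers
                               : new[] { 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5 };          // Steppers

              var beats = new List<double>();
              if (kickTotal >= targets.Length)
              {
                  beats.AddRange(targets);           // enough kicks to cover the whole grid
                  kickTotal -= targets.Length;
              }
              for (int i = 0; i < kickTotal; i++)    // leftovers (or odd totals) land randomly on the grid
                  beats.Add(targets[Rng.Next(targets.Length)]);

              beats.Sort();
              return beats;
          }

          static void Main()
          {
              Console.WriteLine("Kicks on beats: " + string.Join(", ", PlaceKicks(2)));  // 2, 4 for a One Drop
          }
      }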

Now, these are by far the most straightforward calculations we make with respect to generating the note lists. Most of the time we make these predictions using distribution models like the one mentioned in part 1. We fine-tune the parameters to the point where we are comfortable with the possibilities, and let chance sort out the rest. That's part of the art of it, so I won't get into details - if you are reading this you are obviously interested in implementing your own solution, so just do it the way you think it should be done.

Tempo variation is coming...not easy...

K!

 

Meanwhile...

Since we don't have a "Links" section, I'm gonna take a little space here to quickly mention a couple of interesting sites I came across this weekend in my dealins. Caliper is a blog that focuses specifically on instrumental and experimental music...it's extremely well curated and you can hear some really great stuff on there. I actually thought the The Use song that's up there sounded not unlike something we might generate. Ugh, I hate being forced to use two "thes" in a row. Anyway - the guys were even nice enough to give us a mention here.

The second thing I wanted to throw at you guys was this piece by solo.op:

Kota is a label that likes to bend minds and this drone bomb is definitely in line with that aesthetic. RanDrone is generative much like one of our streams, but it...well...you'll just have to visit and hear for yourself. Far out.

For the party people, you should really listen to the new Major Lazer EP, Apocalypse Soon. If you know Major Lazer, there are no surprises - just the dusted dancehall we've grown to love.

Damn I gotta get to that other post. Sorrryyyyy...

 

Stabilized!

Abe looking pensive.

Happy Prez Day everybody. Made a lot of progress this weekend on the overall stability of the stream, and a lot of it was due to a conceptual shift in what a "stream" represents. I'd like to wax philosophical about it for a moment.

It takes a while for the Aleator to render the MIDI for an entire movement, let alone a set of them. As a listener, my favorite format of recorded music is the album. Songs are great, but putting together a well-crafted album that is engaging for its entire duration is extremely difficult to do.

When applied to the stream though, the problem is that 99.9% of all listeners will be joining it in progress. There is no real way to control what is perceived as the beginning, middle or end of the stream. In essence, they don't exist.

Previously, the MIDI for the entire set of compositions would be loaded into memory at the beginning of each cycle. As I said - that took a while, so I would start a separate background thread to spin up the new set as the current one was ending. This multithreading worked fine in the short term, but seemed to cause memory problems when the plugin ran for an extended amount of time (4+ hours). My gut instinct tells me that something in one or some of the libraries I am using isn't thread safe, but I digress.

Looking back on it, it's awful design...I just fell into the trap of thinking of the whole thing as a single unit that needed to be dealt with as such. The fact that I can't force a listener to the beginning really forced me out of that mentality. Now, one composition is loaded at a time, and added to a list of compositions that have already been played. When looking to load the next composition, the plugin randomly selects one that hasn't already been played. This continues until all compositions have been played, at which time that list is cleared out and the process starts all over again. It will still play through all included compositions without repeating, but the sequence is random.
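
The selection logic really boils down to a few lines. This is a stripped-down sketch with made-up names, not the plugin's actual types, but it's the same "play everything once, in random order, then start over" idea:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      // Simplified sketch of the selection described above. Illustrative names only.
      class CompositionPickerSketch
      {
          static readonly Random Rng = new Random();
          static readonly List<string> AllCompositions =
              new List<string> { "Comp1", "Comp2", "Comp3", "Comp4", "Comp5" };
          static readonly List<string> AlreadyPlayed = new List<string>();

          static string NextComposition()
          {
              // Everything has been played: clear the history and start a new cycle.
              if (AlreadyPlayed.Count == AllCompositions.Count)
                  AlreadyPlayed.Clear();

              var remaining = AllCompositions.Except(AlreadyPlayed).ToList();
              string pick = remaining[Rng.Next(remaining.Count)];
              AlreadyPlayed.Add(pick);
              return pick;  // only this composition's MIDI gets loaded - one at a time, one thread
          }

          static void Main()
          {
              for (int i = 0; i < 10; i++)
                  Console.WriteLine(NextComposition());
          }
      }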

This approach is really in line with the general direction of the project, a part of which is to walk the line between control and chance in our algorithms. And the best part, of course, is that loading one composition at a time allows everything to run on a single thread, leading to increased stability overall.

It doesn't make sense to think of these resulting streams in a linear sense with respect to any sort of track sequence. There literally isn't one. Ideally, we want the listener to observe a stream aurally in much the same way they would take in a piece of visual art. It can have edges or borders, but it doesn't really have a beginning or an end. It's just there, existing.

I'm gonna step away from the dev stuff for a second and do another music post next time, promise.

The Memory Pit

So I know I was supposed to do a post on how notes get distributed on the virtual staff, and I will get to that. I just wanted to post a quick update and talk about what's been going on with the project lately.

One of the ideas that I put forth when describing the project is that a stream is equivalent to an "endless album". It should loop through the same progressions indefinitely but should spin up new sets of notes with each iteration. Full disclosure - I discovered a bug in the plugin in late January and realized that wasn't happening. The same notes were being spit out with each cycle, the only difference in the resulting audio being the different presets that happened to be loaded in my synths at the time.

The fix for that was easy enough. It basically amounted to rearranging a few blocks of code. However, making this change exposed an enormous memory leak in the plugin. This one was actually more like a memory pit - with each iteration through the composition lists, memory allocated for certain objects (most notably compositions, progressions and notes) basically doubled, along with the object counts. Why?

To get to the bottom of this, I fired up the dotTrace memory profiler. I'm not too used to dealing with memory leaks in .NET - generally garbage collection will release resources for all eligible objects and I am pretty careful not to create objects willy-nilly. Point being, it took me a little while to figure out what exactly I was looking at and how to really go about diagnosing the leak. The view that made the most sense to me was the root path, as that was the easiest way for me to visualize where the references to old objects were originating from. The following two images are screenshots from iterations 2 and 3 of an Aleator run:

Iteration 2

Iteration 3

I know the images are small - I am looking specifically at the memory allocated on the heap for Composition objects. As you may or may not be able to see, there were 5 compositions in the second node during iteration 2, but 10 for iteration 3. There are only supposed to be five compositions in memory for Facets, so basically we have a smoking gun here.

As it happens, I lucked out with respect to a solution. Digging down further into the node, we can see that old notes have references in a list called m_isPlayingList. This was actually a holdover from a time when I wanted to keep track of currently playing notes so I could turn them off if they were hanging after a progression or composition switch. That is no longer needed, since at this point "all notes off" messages are sent on all channels except for drums if the progression is changing. I was able to simply remove all references to this list and that in and of itself fixed the leak. The fact that notes hold references to their parent progressions and compositions meant that none of these objects were ever eligible for garbage collection. Ugh. Since the leak was causing intermittent OutOfMemory exceptions, I am hopeful that this fix will result in a little more stability for the stream.
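
If you want the leak in miniature: as long as any live list still holds a Note, that Note's parent objects stay reachable and the garbage collector can't touch any of it. These are stripped-down stand-ins, not the plugin's real classes, but the shape of the problem (and the fix) is the same.

      using System;
      using System.Collections.Generic;

      // Miniature version of the leak: a long-lived "is playing" list keeps every note
      // reachable, and each note keeps its parent composition reachable, so nothing is
      // ever eligible for collection. Stand-in classes only.
      class Composition { public List<Note> Notes = new List<Note>(); }
      class Note { public Composition Parent; }

      class LeakSketch
      {
          // The long-lived culprit (the m_isPlayingList holdover in the real plugin).
          static readonly List<Note> IsPlayingList = new List<Note>();

          static void RunIteration()
          {
              var comp = new Composition();
              for (int i = 0; i < 1000; i++)
              {
                  var note = new Note { Parent = comp };
                  comp.Notes.Add(note);
                  IsPlayingList.Add(note);  // root path back to comp - comp can never be collected
              }
              // The fix: stop tracking playing notes (remove the list entirely, or clear it),
              // so comp becomes unreachable once this method returns.
          }

          static void Main()
          {
              for (int i = 0; i < 3; i++) RunIteration();
              Console.WriteLine($"Notes still rooted: {IsPlayingList.Count}");  // grows with every iteration
          }
      }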

At the end of the day, I guess this is really an advertisement for JetBrains dotTrace. If you run into trouble with memory or performance during your dev adventures, it's a pretty nice tool...

According To My Calculations Part I

Back To The Beat (AKA Totals)

Ok so...we're going to use this space to discuss some of our algorithms and how we've arrived at the specific implementations used in our code. Although we write in C#, we'll keep the conversation at a relatively high level and refrain from referencing particular code blocks. Since we've made no mention of how we build drum patterns in the other pages, that seems like a good first entry.

The word 'Techniques' is an important one to us - it is actually a namespace within the Aleator application that contains all of the phrase-building code. You may notice when listening to Facets that there really are no driving rhythms at all. Most of the beats you will hear are either (for lack of better terms) bouncy or slower, almost reggae-influenced. This is because we simply haven't written an algorithm to generate those more driving (read: rock) rhythms.

As of this post, we have two rhythm techniques - 'Bounce' and 'Vibe'. These are both classes that derive from a Technique base class that houses some of the common properties and methods. Let's concentrate on the latter of these two.

As mentioned above, the Vibe technique was really written to generate Reggae-influenced drum patterns. If you know anything about Reggae drumming, you know that those patterns tend to fall into one of three groups: One Drops, Rockers, and Steppers. You can find all of the history associated with these riddims on the interwebs if you so desire, but what we will do here is briefly describe what differentiates these grooves from each other and how we seek to represent them in our code. For all of these, our approach to the hi-hat is to start with 8th notes and vary it from there by adding or removing a few, or perhaps a combination of the two. You really have a lot of wiggle room with the hi-hat.

The name One Drop comes from the fact that only one beat tends to be emphasized when playing rhythms of this type - the 3 (3rd beat in the measure if you are musically challenged and still reading this). This is usually executed with a kick, a rim shot or maybe both. You can play around with other rim shots occasionally, but the important thing is that the heavy emphasis is on the 3 and the 3 only. The Rockers beat adds emphasis on the 1 (making the 1 AND 3 important), and the Steppers beat emphasizes all four beats in the measure. Again, this is usually happening with the kick, but you can mix in rim shots at different points in the measure to put some sauce on there. Just for reference...

One Drop - Legalize It (Peter Tosh)

Rockers - Sponji Reggae (Black Uhuru)

Steppers - Exodus (Bob)

It's the interpretation of these guidelines that gets tricky. If we are employing the Vibe technique, we want the resulting beat to be somewhere in the neighborhood of one of the types described above, but we don't want it to be exactly the same all the time. It also needs to be reiterated that these are only guidelines - a drummer can of course do whatever he or she wants. So we try to guess where kicks, snares and hi-hats will fall using probability.

The kind of rhythm that will accompany a particular phrase is determined randomly when the phrase is built. Obviously if you were playing in a live setting with other musicians you would never work this way, but we are in the business of chance. If the type of beat to be generated is a One Drop, Steppers or Rockers beat, the Aleator knows to use the Vibe class to build it. The first thing we need to do when building any rhythm phrase is determine the number of kick, snare (or rim shot), and hi-hat notes that will be played. Within the Vibe technique, we use a normal distribution random number generator to get the kick total. A continuous generator is used for snare and hi-hat totals, but that's another story.

Normal distribution is just your standard bell curve:

(Image: Normal_Distribution_PDF.svg - normal distribution probability density curves)

In the case of the One Drop, we know that most of the time we are expecting a single kick per measure (on the 3). Now, within the Aleator, we actually count Reggae beats half time. This means that instead of counting 1 2 3 4 1 2 3 4, we count 1 & 2 & 3 & 4 &. Same thing, just a different way of counting it. That means we are expecting 2 kicks per measure, counting half time. Therefore, when calculating the kick total for a given phrase using a One Drop, we set Mu equal to 2. To allow for a fair amount of variation, we use .5 as our Sigma. Referencing the graph above, that's basically taking the green curve and moving the apex over to +2. We use our normal distribution object to retrieve a random integer along that curve. The result is that most of the time, there will be 2 kicks per measure but every once in a while there may be 1 or 3. In those instances the rhythm isn't a true One Drop, but really nobody cares. We use similar techniques to generate totals for all instruments and all phrase types across the application. Isn't music mathy!?
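
If you're wondering what "normal distribution random number generator" means in practice, a Box-Muller transform over the standard Random class gets you there. This is a sketch in that spirit, not necessarily how the Aleator does it:

      using System;

      // Normally distributed kick totals via the Box-Muller transform. Not necessarily the
      // Aleator's implementation - just one straightforward way to get "mostly 2 kicks,
      // occasionally 1 or 3" from mu = 2, sigma = 0.5.
      class NormalKickTotalSketch
      {
          static readonly Random Rng = new Random();

          static double NextGaussian(double mu, double sigma)
          {
              // Box-Muller: turn two uniform samples into one standard normal sample.
              double u1 = 1.0 - Rng.NextDouble();  // avoid log(0)
              double u2 = Rng.NextDouble();
              double standardNormal = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
              return mu + sigma * standardNormal;
          }

          static int KickTotal() =>
              Math.Max(1, (int)Math.Round(NextGaussian(2.0, 0.5)));  // clamp so we never get 0 kicks

          static void Main()
          {
              var counts = new int[8];
              for (int i = 0; i < 10000; i++)
                  counts[Math.Min(KickTotal(), 7)]++;
              for (int k = 1; k <= 4; k++)
                  Console.WriteLine($"{k} kick(s): {counts[k]} of 10000");  // 2 dominates, 1 and 3 show up occasionally
          }
      }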

That's just to figure out how many kicks are going to be in a One Drop drum phrase, which is really the simplest of all the calculations we perform. We'll take a look at placing these kicks on the virtual staff in Part II - Revenge of the Beat.

-k!l0