Tuesday, March 17, 2015

Synthesis Modules: Oscillator, Filter, Amplifier, Envelope, and LFO


Below are the 5 most important synthesis modules and their common uses:

Oscillator (VCO)

Oscillators are the sound source of the synthesizer. The Voltage Controlled Oscillator generates sound using geometric waveforms. The most common waveforms are: sine, triangle, sawtooth, square, pulse, and noise.

The wave shapes are related to how sound is created in the real world. For example, on a clarinet the wooden reed vibrates rapidly, opening and closing the passageway for air to travel down the hollow body; this behaviour corresponds to a square wave. Sawtooth waves are more closely related to bowed string instruments like the violin.


Complex synthesizers may allow you to use multiple waveforms at the same time.
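These geometric shapes are simple enough to compute directly. Here is a minimal sketch in Python (the function name and the 0-to-1 phase convention are mine, purely for illustration):

```python
import math

def osc_sample(shape, phase):
    """One sample of a unit-amplitude waveform at the given phase,
    where phase runs from 0.0 to 1.0 over a single cycle."""
    if shape == "sine":
        return math.sin(2 * math.pi * phase)
    if shape == "triangle":   # rises to +1 mid-cycle, falls back to -1
        return 1.0 - 4.0 * abs(phase - 0.5)
    if shape == "saw":        # ramps linearly from -1 to +1, then jumps back
        return 2.0 * phase - 1.0
    if shape == "square":     # high for the first half-cycle, low for the second
        return 1.0 if phase < 0.5 else -1.0
    raise ValueError(shape)
```

A pulse wave would be the square case with the 0.5 threshold replaced by an adjustable pulse width, and noise would just be random samples.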

Filter (VCF)


Once the sound leaves the oscillator, it typically enters the Voltage Controlled Filter. This is the module most responsible for shaping the tone/spectrum of the sound and for giving a synthesizer its unique character. While simple synthesizers make only one filter (usually low pass) available to you, complex synths will have many.

Because geometric waveforms are typically very bright, low pass filters tend to be a common choice. This filter reduces the high end (including harmonics) by allowing frequencies below the cutoff frequency to pass through while blocking any frequencies above it. Other types of filters may be preferable, however, when trying to emulate specific sounds.


Unlike the EQ section of a mixing board, the filters in synths are resonant (boosting around the cutoff frequency), and these resonant boosts can be a very creative tool in sound production.
However, if you only want to reduce or remove unwanted frequencies, using resonance would be counterproductive.
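To make the idea concrete, here is a toy digital sketch based on the well-known Chamberlin state-variable filter; this is a simplified model for illustration, not how any particular synth implements its VCF:

```python
import math

def resonant_lowpass(samples, sample_rate, cutoff_hz, resonance=1.0):
    """Chamberlin state-variable filter, low-pass output.
    Higher `resonance` boosts frequencies around the cutoff."""
    f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)
    damping = 1.0 / resonance
    low = band = 0.0
    out = []
    for x in samples:
        high = x - low - damping * band   # high-pass node
        band += f * high                  # band-pass node
        low += f * band                   # low-pass node (our output)
        out.append(low)
    return out
```

Frequencies well below the cutoff pass through almost unchanged, while frequencies well above it are strongly attenuated.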

Resonant Low Pass Filter Frequency Response



Amplifier (VCA)  


Usually the last module in the chain, the Voltage Controlled Amplifier controls how much of a signal is allowed to pass through over time. This is accomplished with instructions from an Envelope Generator (EG). 
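Conceptually a VCA is just multiplication: a control signal scales every sample of the audio. A toy sketch (the names are illustrative):

```python
def vca(audio, control):
    """Scale each audio sample by the matching control value
    (e.g. the output of an envelope generator)."""
    return [a * c for a, c in zip(audio, control)]
```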



Voltage Controlled Amplifier (VCA) Diagram

Envelope

Envelope shapes tell the synth how the amplitude controls should respond over time once a signal is sent (by hitting a key on your MIDI keyboard, for example). 

This is different from an envelope in a compressor, which creates a path as it follows the signal. Synth envelopes create the path before the signal is sent by manipulating ADSR (Attack time, Decay time, Sustain Level, and Release time).



  • Attack time is the time taken for initial run-up of level from nil to peak, beginning when the key is first pressed.
  • Decay time is the time taken for the subsequent run down from the attack level to the designated sustain level.
  • Sustain level is the level during the main sequence of the sound's duration, until the key is released.
  • Release time is the time taken for the level to decay from the sustain level to zero after the key is released.
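The four stages can be sketched as a piecewise-linear function of time. This is a simplification: real envelopes are often exponential rather than linear, and this sketch assumes the key is held at least until the decay stage finishes:

```python
def adsr_level(t, attack, decay, sustain, release, note_off):
    """Envelope level (0..1) at time t seconds. `sustain` is a level;
    attack/decay/release are times; `note_off` is when the key is released."""
    if t < note_off:                      # key held down
        if t < attack:                    # attack: ramp 0 -> 1
            return t / attack
        if t < attack + decay:            # decay: ramp 1 -> sustain
            return 1.0 - (t - attack) / decay * (1.0 - sustain)
        return sustain                    # sustain: hold the level
    held = t - note_off
    if held < release:                    # release: ramp sustain -> 0
        return sustain * (1.0 - held / release)
    return 0.0
```

Multiplying the oscillator output by this level, sample by sample, is exactly what the VCA does.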


Low Frequency Oscillator (LFO)


The Low Frequency Oscillator produces a rhythmic pulse below the lower limit of human hearing (20 Hz) that is used to control other modules/parameters within the synthesizer. Its cyclic, wavering nature makes it ideal for emulating vibrato. You can do this by setting the LFO (the source of modulation) to 3-6 Hz with the VCO as your destination of modulation.
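As an illustration, vibrato can be simulated from scratch by letting a slow sine LFO wobble the frequency of an audio-rate oscillator (the function and its parameters are mine, not any particular synth's routing):

```python
import math

def vibrato_tone(freq_hz, lfo_hz, depth_hz, duration_s, sample_rate):
    """Sine tone whose instantaneous frequency wobbles by +/- depth_hz,
    lfo_hz times per second; phase is accumulated so pitch stays smooth."""
    out, phase = [], 0.0
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        inst = freq_hz + depth_hz * math.sin(2 * math.pi * lfo_hz * t)
        phase += 2 * math.pi * inst / sample_rate
        out.append(math.sin(phase))
    return out
```

For example, `vibrato_tone(440.0, 5.0, 6.0, 1.0, 44100)` gives one second of an A4 whose pitch wavers by a few hertz, five times per second.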

--------------------------------------

Sources:
  • http://www.highonscore.com/sonik_dimensions
  • http://synthesizeracademy.com/
  • http://www.meghanmorrison.com/blog/2014/uses-of-the-5-most-important-synthesis-modules/
  • http://en.wikipedia.org/wiki/Synthesizer


Wednesday, March 11, 2015










Modulated Short Delay Effects: Chorus & Flanger Explained 



Chorus

In music, a chorus effect occurs when individual sounds with roughly the same timbre and nearly (but never exactly) the same pitch converge and are perceived as one. While similar sounds coming from multiple sources can occur naturally (as in the case of a choir or string orchestra), it can also be simulated using an electronic effects unit or signal processing device.

The chorus effect is created by adding a slightly delayed, pitch-modulated version of a sound to the original sound, in roughly equal proportions. The intention is to create the illusion that two or more instruments are playing the same part at the same time.

Chorus is widely used on clean electric guitar and keyboard pads, where it can yield very dreamy or ambient sounds.

Chorus, however, has the effect of 'de-localising' a sound: It sounds rich and wide, but you don't really know where it's coming from, and the psycho-acoustic outcome is that it sits further back in the mix.


Chorus Device Parameters




Rate: The rate dictates how fast the modulation happens. This parameter is expressed as a frequency (usually 0.1 to 10 Hz). The frequency doesn't refer to a pitch; rather, it describes how many times per second (Hz) the oscillation happens. The size of that oscillation is set by the depth parameter.

Depth: The depth parameter controls the amount of pitch modulation that’s produced by the chorus. The settings are often arbitrary (you can get a range of 1 to 100). This range relates to a percentage of the maximum depth to which the particular chorus can go, rather than an actual level.

Delay: The pre-delay setting affects how far out of time the chorused sound is in relation to the original. This setting is listed in milliseconds, and the lower the number, the closer the chorused sound is to the original in time.

Feedback: The feedback control sends the affected sound from the chorus back in again. This allows you to extend the amount of chorusing that the effect creates. This setting can also be called stages in some systems.

Effect Level: This could also be called mix in some systems. The effect level controls how much of the affected signal is blended with the original (or sent to the aux return bus when the chorus is used as a send effect). This allows you to adjust how affected the sound becomes.
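Putting those parameters together, a bare-bones chorus is just a modulated fractional delay mixed with the dry signal. A sketch with illustrative defaults, not a model of any specific unit (the feedback path is omitted for brevity):

```python
import math

def chorus(samples, sample_rate, rate_hz=1.0, depth_ms=2.0,
           delay_ms=20.0, mix=0.5):
    """Mix the dry signal with a copy whose delay time wobbles
    around delay_ms by +/- depth_ms, rate_hz times per second."""
    out = []
    for n, dry in enumerate(samples):
        t = n / sample_rate
        d = delay_ms + depth_ms * math.sin(2 * math.pi * rate_hz * t)
        pos = n - d * sample_rate / 1000.0      # fractional read position
        if pos < 0:
            wet = 0.0                           # delay line not yet filled
        else:
            i = int(pos)
            frac = pos - i
            nxt = samples[i + 1] if i + 1 < len(samples) else samples[i]
            wet = samples[i] * (1.0 - frac) + nxt * frac  # linear interpolation
        out.append((1.0 - mix) * dry + mix * wet)
    return out
```

Because the delay time changes continuously, the delayed copy is constantly being resampled slightly fast or slow, which is exactly the pitch modulation described above.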




Flanger

Flanging is an audio effect produced by mixing two identical signals together, with one signal delayed by a small and gradually changing period, usually smaller than 20 milliseconds. 

This produces a swept comb filter effect: peaks and notches are produced in the resultant frequency spectrum, related to each other in a linear harmonic series. Varying the time delay causes these to sweep up and down the frequency spectrum. A flanger is an effects unit dedicated to creating this sound effect.




Part of the output signal is usually fed back to the input (a "recirculating delay line"), producing a resonance effect which further enhances the intensity of the peaks and troughs. The phase of the fed-back signal is sometimes inverted, producing another variation on the flanging sound.

The underlying technology is almost identical to that of chorus, except that chorus tends to use slightly longer delay times and doesn't feed any of the output signal back to the input. A flanger creates its deep, almost resonant whooshing effect by feeding some of the delay output signal back to the input, and although this has no counterpart in manual tape flanging, the effect is pretty dramatic. If you were to look at the spectral characteristics of the output signal, you'd see a whole series of strong peaks and notches in the response; these move across the audio spectrum under the control of the modulating LFO.

Flanging works best on harmonically rich sounds, but it is also strong enough to show up clearly on clean guitar, drums, or even vocals. Very often it's applied differently in the left and right speakers to give it a back-and-forth kind of swirly, wide stereo presence.

Flanger effect Parameters




Delay : This parameter changes the time it takes for the second signal to play after the original. In most cases the highest setting will not be more than 20 milliseconds, as mentioned above. In some cases, the delay can also be set to a negative value to create interesting ambient effects.

Depth : This parameter controls the "warble" of the flange effect, that is, the severity of the changes in pitch.

Width : This parameter is somewhat similar to Depth, but has a noticeable difference to the ear. It controls the speed at which the peaks and valleys of the flange are reached.

Rate : This parameter controls the rate at which the warbles repeat themselves. The faster the speed, the faster your audio signal will go through the complete flange process.

LFO : This parameter lets you choose the waveform that modulates the flange. The four possible settings are sine, square, saw, and triangle.

Feedback : Feedback loops the output signal back into the input, which can build up into a potentially endless wash of sound. This produces many strange effects, especially at higher settings. If you're using headphones, be sure to keep the volume low before you experiment with this parameter.
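The recirculating delay line described above can be sketched in a few lines; the defaults are illustrative, and real flangers refine this considerably (better interpolation, stereo spread, inverted feedback, and so on):

```python
import math

def flanger(samples, sample_rate, rate_hz=0.25, max_delay_ms=5.0,
            feedback=0.5, mix=0.5):
    """Short delay swept between ~0.1 ms and max_delay_ms, with part of
    the delayed signal fed back into the delay line (recirculation)."""
    buf = [0.0] * len(samples)   # delay-line history: input + feedback
    out = []
    for n, dry in enumerate(samples):
        t = n / sample_rate
        # LFO sweeps the delay time up and down
        d_ms = 0.1 + (max_delay_ms - 0.1) * 0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * t))
        pos = n - d_ms * sample_rate / 1000.0
        if pos < 0:
            wet = 0.0
        else:
            i = int(pos)
            frac = pos - i
            nxt = buf[i + 1] if i + 1 < n else buf[i]
            wet = buf[i] * (1.0 - frac) + nxt * frac   # linear interpolation
        buf[n] = dry + feedback * wet    # feed the wet signal back in
        out.append((1.0 - mix) * dry + mix * wet)
    return out
```

The only structural difference from the chorus sketch is the much shorter, deeper delay sweep and the feedback write into the delay buffer.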



--------------------

Sources:

http://www.soundonsound.com/sos/jan98/articles/learnprocessors.htm
http://www.eumus.edu.uy/eme/ensenanza/electivas/csound/materiales/book_chapters/30multieffects/30multieffects.html

Wednesday, March 4, 2015

Distortion Explained


When we use the word distortion in the context of musical production we are talking about something that happens when audio passes through a non‑linear device, like a saturating tube amp or a clipping amplifier. The distortion process introduces new harmonics that are somehow musically related to the original signal.

Obvious distortion is usually something we would like to avoid, because most of the time we want an accurate reproduction of a musical performance. But it has many creative uses in popular music, to the point of being the distinctive feature that identifies some musical genres (yes, I'm thinking about heavy metal, Black Sabbath, and all the crazy stuff that was created afterwards).




Audio Clipping

Commonly, distortion originates from using audio levels higher than the maximum expected by some piece of our gear. This can happen because we did not set appropriate levels during recording, or as a result of some gain stage in our pedal chain or our set of digital plugins in a DAW.

The effect of this is that the signal gets clipped at a certain point. Those hard clips come with higher-frequency components that may completely change the original timbre. Clipping is something that should be avoided during recording.
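In code, hard clipping is nothing more than a clamp (a toy illustration):

```python
def hard_clip(sample, ceiling=1.0):
    """Flatten anything beyond +/- ceiling; the squared-off wave tops
    are what introduce the extra high-frequency harmonics."""
    return max(-ceiling, min(ceiling, sample))
```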





On the other hand, this hard clipping is deliberately used to design pedal effects like rock and metal guitar distortions. This is a popular effect/sound in rock music, and it has become widely used in modern electronic music as well. Distorted signals tend to cut through a mix and bring warmth or grit to a tone.




Harmonic Distortion


Harmonic distortion is the introduction of extra harmonics that are musically related to the already present harmonics. This results in a change in timbre. 


These extra harmonics are of two types:



  • Even‑order harmonic distortion: These extra harmonics tend to sound musically sympathetic, smooth, and bright in a constructive way, as in the distortion produced by tube amplifier circuits.



  • Odd‑order harmonic distortion: This type tends to sound rough and gritty, and is often associated with added richness and depth, e.g. the distortion produced by analogue tape.


Distortion, Overdrive, Fuzz


The terms "distortion", "overdrive" and "fuzz" are often used interchangeably, but they have subtle differences. Overdrive effects are the mildest of the three, producing warm overtones at quieter volumes and harsher distortion as gain is increased. A distortion effect produces approximately the same amount of distortion at any volume, and its sound alterations are much more pronounced and intense. A fuzz box, finally, alters the audio signal until it is nearly a square wave and adds complex overtones by way of a frequency multiplier.
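One common way to model the difference is with a waveshaping curve: a gentle tanh curve bends the wave tops slightly for an overdrive-like sound, and pushing the drive much harder squares the wave off toward fuzz territory. This is a generic sketch, not how any specific pedal works:

```python
import math

def waveshape(sample, drive):
    """tanh soft clipper: low drive bends the wave tops gently
    (overdrive); extreme drive squares them off almost completely (fuzz)."""
    return math.tanh(drive * sample)
```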


Digital Distortion


We can also have digital distortion, and by this term we don't just mean a DAW plugin or a digital piece of gear used to model analogue distortion. In the digital world you can also experiment in crazier ways, like playing with the word length or sample rate of an audio file. Reducing the word length creates quantization distortion, while reducing the sample rate can bring aliasing distortion.
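Both tricks fit in a few lines of code. A toy "bitcrusher" sketch (the function name and defaults are mine): round each sample to fewer levels for the word-length reduction, and hold each value for several samples as a crude sample-rate reduction:

```python
def bitcrush(samples, bits=8, downsample=4):
    """Quantize each kept sample to 2**(bits-1) levels per polarity,
    and hold it for `downsample` samples (no anti-alias filtering,
    which is exactly why aliasing appears)."""
    levels = 2 ** (bits - 1)
    out, held = [], 0.0
    for n, x in enumerate(samples):
        if n % downsample == 0:
            held = round(x * levels) / levels   # word-length reduction
        out.append(held)                        # sample-and-hold
    return out
```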

Aliasing distortion has the interesting property that it can generate frequencies below the fundamentals of the source sounds. That doesn't happen with most analogue distortion processes, which produce harmonic distortion above, and based upon, the fundamental frequency of the source.

Although digital distortion might not generally sound as musically pleasing as its analogue counterpart, it does have useful creative potential.


Conclusion

Distortion is an important concept, whether you are producing music that takes advantage of its features or you are fighting to reduce it, aiming to get completely clean sounds from a live performance. 

From my experience, the only advice I can give is that if you want a classic distortion sound for your guitars or bass, prefer the sound of analog amps/pedals over digital/modeling gear. At the post-production stage you can always give your digital effects a try and compare the resulting sounds. The final choice will be up to you, but it is always a good thing to have some options, and not to be limited to the (probably not so good) available distortion plugins.


-------------
Sources:

  • http://www.soundonsound.com/sos/apr10/articles/distortion.htm
  • http://en.wikipedia.org/wiki/Distortion_(music)
  • http://www.electronics-tutorials.ws/amplifier/amp_4.html

Wednesday, February 25, 2015

The Channel Strip





The purpose of this post is to explain the signal flow through a channel strip in Reaper, describing in detail every component of the channel strip, including its usage and position in the signal flow.

Before starting: Expanding the mixer


It is important to note that when you first open the application, Reaper displays only a compact view of the mixer:





Every channel strip looks like this:



You can resize the mixer control panel to display a full view of the channel strips:

The following is the complete view of a Channel Strip:

The view modes can also be customized using the options from the "View" menu.

The Signal flow


When you look at a channel strip, the general idea is that sound moves from the top to the bottom. But keep in mind that it is not exactly that way: there are some places where the signal flow isn't quite top to bottom.


The signal flow is the following:





At the very top of the channel strip we have the input section. This could be a hardware input or a virtual instrument.


After that we find the inserts. The inserts are a collection of places that we can add effects, like gates, compressors, EQs, etc. 

The next thing we'll find in our signal flow are the sends. Reaper actually provides 3 kinds of sends:

  • Pre-FX (these tap the signal before the FX inserts)
  • Post-FX
  • Post-Fader  
The sends will route the signal to a destination which will usually be a bus or a hardware output (for monitoring purposes, for example). The channel strip includes knobs to set how much signal is going to that output.

After the pan knob, we really have the most important part of the channel strip: The volume fader.
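Fader markings are in decibels, but internally the signal is simply multiplied by a linear gain. The standard dB-to-linear conversion looks like this:

```python
def fader_gain(db):
    """Linear gain for a fader position in dB:
    0 dB = unity gain, -6 dB is roughly half amplitude."""
    return 10.0 ** (db / 20.0)
```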

Next, we have the mute button, which just turns off the sound of that track. And then, our solo button, which isolates that track by itself, muting all the other tracks.

Also in the middle of this signal flow are the volume and pan envelopes. The pan control balances the signal between the left and right channels, and the volume controls the main output level.

Finally, the audio signal will go into the Master Channel where all the audio streams will be mixed down.

Conclusion

This is basically how the audio signal flows in the mixer channel strip. For most controls you can think of a top-to-bottom signal flow, but it is important to keep in mind the special cases and how they are going to affect your mix.

In real life, the actual signal flow can be quite complex, e.g. once you take into account all the possible options, mutes, envelopes, etc. The following diagram will give an idea of the complexity involved:








----------------------
Sources used:

  • http://forum.cockos.com/showthread.php?t=59784



Wednesday, February 11, 2015

Sound Fundamentals: Timbre



Hello fellow classmates! My name is Ram, from Colombia, and as my first assignment I chose to explore one of the audio fundamental properties: Timbre.

----------------------------------------------------


What is Timbre?


Timbre is the quality of a sound independent of its pitch and volume. It is what differentiates one sound from another and one instrument from another. This quality is how we can tell a guitar from a violin when they are playing the same notes (i.e. same pitch) at the same volume.


Timbre results from the combination of all sound frequencies, attack and release envelopes, and other qualities that comprise a tone. It is determined by the instrument and by how the instrument is played (e.g. bowed versus plucked), and also by the environment where the sound is being played.

Of all the above factors, the key component of timbre is the combination of sound frequencies. 

The sine waves described in a textbook are sound signals that are not normally present in the real world. You can create those waves using electronic equipment, but every time you strike a note on a musical instrument there is never just one frequency. Instead, the sound is comprised of a fundamental frequency plus a set of other frequency components.

The following pictures show a real audio signal from a music instrument, and a frequency spectrum of the signal:





The other frequencies that come about when a sound is created are called harmonics or overtones. Specifically, harmonics are defined as integer multiples of the fundamental frequency.

The relative balance of these overtones is what gives an instrument its distinctive sound. Timbre varies widely between different instruments and voices, and to a lesser degree between instruments of the same type, due to variations in their construction and in the performer's technique.
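This can be made concrete with additive synthesis: the same fundamental with different harmonic weights produces different timbres. A small Python illustration (the function is mine, for demonstration only):

```python
import math

def harmonic_tone(t, fundamental_hz, amplitudes):
    """One sample at time t of a tone built from the fundamental plus its
    integer-multiple harmonics; the weights in `amplitudes` shape the timbre."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * fundamental_hz * t)
               for k, a in enumerate(amplitudes))
```

For instance, weights like `[1.0, 0.5, 0.33, 0.25]` (every harmonic, falling off) approximate a sawtooth-like buzz, while keeping only the odd harmonics gives a more hollow, clarinet-like quality.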


Why is this important?

Understanding this concept is a big part of understanding music. Most of what we are going to do in the studio is related to either capturing or playing with all the frequency components of musical sounds.


-----------------------------------------------------


Sources Used: 
  • http://en.wiktionary.org/wiki/timbre
  • http://education-portal.com/academy/topic/ap-music-theory-fundamentals-of-music.html
  • http://en.wikipedia.org/wiki/Music_theory#Timbre
  • http://annieintrotomusicproduction.blogspot.com/2014/05/timbre.htm

Reflection:

Although I wanted to explore multimedia presentations, I decided to make the assignments text-only documents due to time constraints. Anyway, having this on a blog will hopefully make the material useful for someone else.


First Post



Some Words about Me.... 

My name is Ram, and I am from Colombia. I work as a software engineer, but I'd like to think of myself as an amateur musician. I've always been interested in music: composing, recording, and exploring the possibilities of computers and DAW (digital audio workstation) applications.

The purpose of this blog

In this blog I will be publishing my assignments for the Introduction to Music Production course, taught by Berklee College of Music.

This is going to be my first attempt at online education. I am already amazed by the quality of the course and the wide variety of education opportunities provided by Coursera.