The Penseive

A look into the past, or a peek at the future.

I Work in Television—Why Should I Care About Dolby Atmos?




If you’re slugging it out in the trenches of television production audio, you know what Dolby Digital is, because all television sound has to become Dolby Digital before it finds its way to the home screen.  But chances are, your only interaction with Dolby Atmos has been seeing a first-run, big-budget movie.  And you probably thought, “having a zillion speakers is great, but it has nothing to do with my world.”  That may change sooner than you think.

I can already see your eyes glazing over, so let’s cut all the associated mumbo-jumbo and consider three main points: what Dolby Atmos actually is, what it looks like in the home, and how it gets delivered. 

The Two Main Things About Dolby Atmos

You already know about stereo and 5.1 in television, right?  Then you know that nearly all episodic drama is delivered as a discrete 5.1 mix, and nearly everything else is delivered stereo and then upmixed to 5.1 by the distributor.  Atmos simply takes those “downstairs” channels and adds pairs of overhead speakers to them, taking advantage of the somewhat limited human capacity to localize sounds coming from above our heads.  That’s Thing One.  When you talk about a 5.1 system, you have six speakers: left, center, right, left surround, right surround, and subwoofer.  The simplest discrete Atmos installation adds two overhead speakers (Dolby refers to them as top left and top right) so you get a 5.1.2 configuration (five main and surround speakers, one subwoofer, two ceiling speakers).  For cinema, Atmos can support a lot of outputs, so a really large theater may have something like a 21.2.8 array (four pairs of speakers overhead), or larger.
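
If the x.y.z naming gets confusing, here is a trivial sketch of the convention (the parsing function is my own illustration, not anything from Dolby):

    def speaker_counts(layout: str) -> dict:
        """Split an Atmos layout name like '5.1.2' into its three counts."""
        parts = [int(n) for n in layout.split(".")]
        main, subs, tops = (parts + [0])[:3]  # a plain '5.1' has no height layer
        return {"main/surround": main, "subwoofers": subs, "overhead": tops}

    print(speaker_counts("5.1.2"))   # {'main/surround': 5, 'subwoofers': 1, 'overhead': 2}
    print(speaker_counts("21.2.8"))  # a very large cinema array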


A whole bunch of speakers.


Thing Two about Atmos is that it is object-based.  Note that there are now two flavors of Atmos—a scaled-back version for home theater and gaming, and the cinema version you are probably familiar with.  The cinema version can support up to 128 objects going to 64 separate speakers.  So, what the heck is an object?  Well, let’s step back a little.  The default Atmos bed is 7.1.2, and it is generated in Pro Tools.  A mixer on a dub stage equipped with a default-sized array would use the 7.1.2 channel setup to create beds—ambiences, music, or anything else that doesn’t move between speakers.  He would then use object tracks for everything else.  A helicopter flyover that starts rear left, goes across the ceiling, and lands front right would use a mono or stereo object track steered by an inserted Dolby Atmos panner plug-in to move through the appropriate speaker feeds.  So the object doesn’t show up on the bed tracks—it has its own track, accompanied by metadata generated by the panner.  Same thing with dialogue, swirling effects, and anything else that needs to move dynamically.  Having dynamic sounds as objects means that whatever the theater size and array setup, the Dolby RMU (Rendering and Mastering Unit) in the theater can move things around accurately in real time to match the dictates of its own speaker configuration file, regardless of what the setup was on the dub stage.  Not that you need to know this, but the 7.1.2 bed counts against the total object number, so in the example above the mixer would actually have 118 object tracks to work with.
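
Conceptually, an object is nothing more than audio plus time-stamped position metadata that travels with it.  Here is a minimal sketch of the idea (the field names and unit-cube coordinates are my own invention, not Dolby's actual metadata format):

    from dataclasses import dataclass, field

    @dataclass
    class AtmosObject:
        """One object track: audio plus panner automation, rendered at
        playback to whatever speaker array the room actually has."""
        name: str
        # (time_seconds, x, y, z) with the room as a unit cube:
        # x: left 0 -> right 1, y: front 0 -> back 1, z: floor 0 -> ceiling 1
        pan_keyframes: list = field(default_factory=list)

    heli = AtmosObject("helicopter")
    heli.pan_keyframes = [
        (0.0, 0.1, 1.0, 0.2),  # starts rear left, low
        (2.0, 0.5, 0.5, 1.0),  # crosses the ceiling
        (4.0, 0.9, 0.0, 0.1),  # lands front right
    ]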


Dolby Atmos Monitor—part of the software suite used to create objects for Atmos cinema releases


And for broadcast you might think of objects in this way: 

  • Second language
  • Commentary
  • Ambience beds
  • Music beds

If you mix these as objects and provide metadata on where they go, you open up a myriad of possibilities to the viewer, who may choose to do scandalous things like watch the game without listening to the commentators.

But I Mix For People in Their Living Rooms…

You could, right now, today, go out and buy a Dolby Atmos system for your home.  After installing it, you could load a movie encoded with a Dolby TrueHD stream into your Blu-ray player and enjoy a pretty startling immersive audio experience.  If you upgrade your subscription to support Ultra HD you could watch Atmos titles right through your Netflix account.  Today.  Did I mention today?  This isn’t the future we’re talking about.  It’s here.

Now would be a GREAT time for you to get familiar with a little product called the Dolby Atmos Production Suite.  It operates as a Pro Tools plug-in and provides the tools needed to originate Dolby Atmos mixes for the home.  It’s $299.  


Dolby Atmos Production Suite.  Similar toolset to Dolby Atmos Monitor, but aimed at home theater and gaming

And don’t grouse to me about people not setting up their home theater systems correctly.  Your job is to make the most engaging, immersive mix possible, not to second-guess what gear the consumer has purchased and where he or she places it.  As of 2016, 41% of American homes had some kind of home theater system, and that number is growing.

Dolby envisioned a scaled approach to home theater speaker systems, and licensed that technology to various manufacturers.  The audio system can range from a soundbar in front of the screen with up-firing drivers that bounce the top channels off the ceiling, to separate speakers at listener level (again with up-firing drivers), to fully discrete systems with speakers mounted on the ceiling.


Philips Fidelio Atmos soundbar



KEF R50 Dolby Atmos-enabled home speaker with up-firing element


Atmos for the home reduces bandwidth and file sizes by grouping objects together in what Dolby refers to as a spatially coded substream, which can be thought of as a lossy representation of the original object-based mix.  For the truly crazy home cinema/gaming fan, Atmos can handle up to a 24.1.10 speaker array.  Yes, that’s right—24 front and surround channels, a subwoofer, and 5 pairs of ceiling-mount speakers.  Of course in the real world we’re much more apt to see 5.1.2 and 7.1.4 systems, but the point is that much of the object-based mix has been retained and can be steered to the appropriate speakers on the fly.

Sounds Nice, But How Does It Get to the Viewer?

No less a player than Solid State Logic thinks immersive audio will gain acceptance in the home.  SSL’s flagship broadcast audio platform, the System T, added support for 5.1.2, 5.1.4, 7.1.2, and 7.1.4 formats in their V2 software release.  (They also added some other tasty toys you can read about here.)


SSL S300 Compact Audio Console, part of the System T lineup


But yeah, what about delivery?  The current Dolby Digital AC-3 pipeline does not support Atmos.  Things are changing, though, and I want you to consider the two following points carefully.  First, we’re about to go through a revolution in picture quality.  Sales of 4K television sets will probably pass 100 million this year, and consumers are already demanding sharper pictures and better color rendition than the current High Definition system allows.  Netflix and Amazon are doing limited delivery of HDR10+ content with its far superior color space.  ATSC 3.0, with its superior picture capability, is already rolling out in some markets, and that standard supports two audio delivery systems, MPEG-H and Dolby AC-4.  The key point about AC-4 is that it is the most efficient compression algorithm Dolby has yet attained, and it can scale itself from serving a big Atmos system down to making a two-channel Atmos-like headphone experience.  We will all be delivering for AC-4 at some point in the near future.

In the meantime, over-the-top providers like Netflix and Amazon don’t have to wait at all.  They are already providing HDR content and can opt to deliver Atmos now on Dolby Digital Plus (the stopgap successor to AC-3) pathways. 

Simply said, if you work in television audio you should pay attention to Dolby Atmos.  It’s going to change the way we do things.

Some Notes on 5.1 and Music on Television


Where We Are

Let’s get some basics out of the way.  Until ATSC 3.0 is implemented, all television audio in the United States is delivered encoded as Dolby Digital.  Inside Dolby Digital the carrier has a choice of different services to deliver, among them two-channel stereo and full 5.1.  All of the major broadcast networks, many large local TV channels, and quite a few cable channels deliver 5.1 only.

That bears repeating.  If you’re listening to CBS, NBC, ABC, or Fox, the product they distribute is a 5.1 mix.  Even if you’re listening in stereo the signal leaving the network is 5.1.  If you’re hearing it in mono the signal is still 5.1.  

Wait a minute, I hear you say: how much content actually originates in 5.1?  Well, nearly all episodic drama originates 5.1, and as a side note the mixes of these programs have never been better than they are today.  Everything else, with few exceptions, originates as two-channel stereo.  There are multiple reasons for this.  The first is the post-production bottleneck—picture editors simply don’t want to deal with a six-channel source no matter how easy designers of non-linear editing software make it.  They are mostly used to stereo and mono sources (although they will still pass split-track recordings through as stereo with depressing regularity) and the time constraints in post can be crushing.  For any show that visits an editing suite before airing, if there isn’t a wholly separate audio process, it can’t originate as 5.1.  The only other way to deliver discrete 5.1 is to do it live, and there are so many traps in doing so that it is rare.  Sports mixer Fred Aldous is a master at it and makes it look easy.

So why is everything coming to your house 5.1?  It’s easier for the network.  At network Master Control all incoming stereo material hits an upmixer to become 5.1 before being encoded as Dolby Digital.  Anything originating discrete 5.1 simply bypasses the upmixer and goes straight to the encoder.  This way the network doesn’t have to change anything in the encode process, and change is bad in failure-averse environments. Now there are upmixers and there are upmixers—the better ones, the ones sitting in national network equipment racks, do one particular thing superbly well: a stereo source going in will sound nearly identical to the metadata-derived stereo downmix coming out, despite the fact that it was turned into 5.1 and back to stereo along the way.

My Data is Meta

Let’s talk metadata for a moment.  When the ATSC standard was being formulated, the idea was that mixers and producers would evaluate every show they made and specify the appropriate metadata parameters.  As an example, to get a Dialnorm number the program would be run through a Dolby LM-100, and the resulting number reported to the carrier, who would then set the Dialnorm parameter appropriately.  Or, if you were mixing a show where you purposely leaked what you had in the front L/R channels into the surrounds, you specified Surround Phase Shift as disabled to avoid the 90° shift Dolby baked into the standard.

No one ever did this.

Instead, the networks, needing to actually broadcast shows, set the metadata to fixed values (mostly the Dolby defaults) and required program producers to conform to those parameters.  Hence the reason we all aim at a Dialnorm value of -24 now, and you leak signal from the front L/R to the surrounds at your peril (since Surround Phase Shift is permanently set to enabled).
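
To see what that number actually does downstream, here is a minimal sketch (my reading of AC-3 behavior: the decoder attenuates the program so that dialogue lands at a constant -31 dBFS):

    def decoder_gain_db(dialnorm: int) -> float:
        """Gain an AC-3 decoder applies for a given Dialnorm value."""
        if not -31 <= dialnorm <= -1:
            raise ValueError("Dialnorm is coded in the range -1 to -31")
        return -(31 + dialnorm)

    print(decoder_gain_db(-24))  # -7.0: a -24 program is pulled down 7 dB
    print(decoder_gain_db(-31))  # 0.0: already at reference, left alone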

Warning: Opinion of Author Will Appear Soon

And now I step into the area of opinion, and in this I am as the voice of one crying in the wilderness compared to all of the other variety and music mixers I know.  I am diametrically opposed to the Recording Academy position on music in 5.1, but as there is little chance I will ever again be considered for employment on a Grammy broadcast I’m not terribly concerned about saying so.  The Academy actually published a white paper some years ago covering music mixes, especially live television music mixes.  Their position was that use of the center speaker should be severely limited if not done away with entirely.  At least part of that was driven by the fear that if a soloist was out there naked, so to speak, in the center channel, that audio could be stripped off by a consumer with a grudge against the artist and released to the world revealing that the artist had slightly imperfect pitch, or strange breathing patterns, or some other flaw.  To be fair this has happened, but I’m pretty certain no lasting damage has been done to anyone.  It is, after all, just television, and performers do much stranger things than sing off-key all the time.

The Academy position also revolves around a distrust that consumers can place and calibrate speakers in a surround system correctly.  That center speaker will never be in the center, it will never be appropriate to the rest of the system, it will never be at the right level, etc. etc.  But of course consumers always put stereo speakers in exactly the right placement related to the preferred listening position, don’t they?

So why disregard the center speaker?  Well, the record industry was successful for many decades putting the soloist in the “phantom” center, and by now everyone is used to it.  Never mind the fact that the left/right speaker placement has to be exact to generate a phantom center or that you can literally lean your head out of the sweet spot and feel the soloist smear over toward whatever side you’re favoring.  It’s what we’ve done in the past, so we’re going to continue doing it.  Here’s a direct quote from the document:

most playback systems — even the most rudimentary consumer systems — allow each channel to be heard in isolation. Placing a lead vocal "naked" in the center channel, without other instrumentation to help mask poorly intonated notes, "auto-tuning" glitches, or bad drop-ins, can therefore potentially expose weaknesses in a performance and consequently incur the wrath of the recording artist and record label.

For these reasons, most surround sound music mixers treat the center channel with caution, rarely if ever using it to carry any mix components exclusively. Instead, those instruments routed to the center channel (most often lead vocal, bass, snare drum, kick drum and/or instrument solos) are also generally routed to other speakers as well. Placing selected instruments in the center channel and one or both front speakers helps emphasize their sound within the front wall and also aids in localization if the listener moves around the room. 

Recommendations For Surround Sound Production ©NARAS 2004

Personally, as a consumer, I think it’s nobody’s business where I put my speakers, and the mix you make shouldn’t try to second-guess me.  Just make the best mix you can and leave it at that.

The men and women who mix episodic drama for television don’t seem to worry about any of that.  By and large dialogue is locked in the center speaker without leakage into the L/R, and as long as that loudspeaker is somewhere near the screen, the viewer has no problem localizing the actors no matter where he or she is sitting in the room.  That doesn’t work with a phantom center.

The real reason I get all torches-and-pitchforks about this is that I’ve both heard and created discrete 5.1 music mixes that take my breath away precisely because the soloist is placed alone in the center speaker and not smeared into the L/R or the surrounds.  Your brain loves this—it doesn’t have to do any complex work to reassemble and place the voice in the center of the sound field from information received from each side—the voice is right there where it’s supposed to be.  A properly balanced 5.1 mix that uses the center speaker can make music come that much more alive.

How Did We Get Here?

So what’s the reason 5.1 came about in the first place?  Simple answer: restricted bandwidth vs. convincing soundfield.  Research by scientists in Europe revealed many years ago that the fewest channels of discrete sound you can get away with when trying to create a realistic soundfield is five: three across the front, and two on the sides slightly toward the rear.  Tomlinson Holman came up with the .1 by pointing out that a bandwidth-limited low-frequency effects channel could be included in a data stream with a very small data carriage penalty.  And as usual audio got a very miserly slice of the bandwidth available: AC-3 (the actual codec inside Dolby Digital) is encoded at 384 kbps, while your standard HDTV channel runs about 18 Mbps, so audio gets roughly 2% of the pipe.

The place where things get really screwed up is when “music” people decide that the 5.1 discrete mix they are creating for a live broadcast must leave the center speaker silent.  This starts a cascade of consequences.  First, if the dialogue levels are to match the music levels the dialogue has to be steered out of the center as well.  Think about it—you’re listening to the presenter or sports commentator in the center speaker.  You’re used to it, it images from the center, it’s all good.  Then the music comes on.  Suddenly that speaker goes dark, and the singer or sax player or guitar solo you see that big closeup of isn’t coming from there.  To compensate, the mixer turns up the soloist but it still doesn’t sound as good.  Then, the artist’s manager demands he or she be turned down to make it sound more like the record.  Then when the song is over the dialogue returns to the center, and all the 5.1 listeners say “what just happened?”  Your only choice is to run the dialogue for the whole show as a phantom center to match the music, and the producers are going to have a cow if you do.

None of this even addresses what happens to people listening in stereo.  The default downmix parameter has the center channel feeding the L/R evenly at -3dB.  Assuming the mixer has been studiously monitoring the downmix during the broadcast to make certain his center-channel material is sitting correctly, when he switches to music mode and the center channel goes dark anything that occupies the center of the mix now has to be pushed up 3dB to maintain the same loudness.
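
A quick sanity check on that 3dB figure, as a sketch (assuming simple power summation of the two downmix feeds):

    def db_to_gain(db: float) -> float:
        return 10 ** (db / 20)

    # Discrete center: one speaker at unity gain -> relative power 1.0
    # Downmix: the same signal sent at -3 dB into both L and R
    phantom_power = 2 * db_to_gain(-3.0) ** 2
    print(round(phantom_power, 3))  # ~1.002: the -3 dB downmix preserves power
    # Mute the center channel and that material must come up ~3 dB to sit the same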

Sad but true, at this point the mixer is actually better off delivering stereo to the network and letting the upmixer there handle things.  The upmixer will derive a center from whatever algorithm it uses and the transition from music to speaking will be natural—the center speaker will stay on in all the 5.1 living rooms (and most of today’s upmixers treat music very well) and the downmix will be almost exactly what the mixer hears in the truck. 

And strange things happen when everyone isn’t on the same page.  When Paul McCartney did the Super Bowl halftime in 2005 it sounded glorious in 5.1—Paul in the center, band in the L/R, crowd and fireworks in the surrounds.  Right up until about midway through Live and Let Die.  In the middle of an instrumental break Macca yells out “oh yeah” and does a little falsetto scream.  It was planned, and the director cut back to him at exactly the right instant.  Unfortunately, the mixer had Sir Paul’s live mic routed to the L/R instead of the center.  After hearing the verse from the center speaker, suddenly we hear the yell from the sides.  Which answered the question—was he singing live?  Which was no.  Which no one would have ever known if the mixer had just assigned the live mic to the center with the recorded vocal.  I can only assume he was monitoring the downmix and never knew the mic was mis-assigned.

This Actually Works

Since you asked (even if you didn’t) here’s how I’ve structured the 5.1 music mixes that actually worked on-air.  First, you have to somehow get the carrier to agree to disable the Surround Phase Shift parameter on the Dolby Digital encoder (I actually got DirecTV to do this on several occasions).  Second, create an “Instruments/Reverb” mix, pan it hard L/R, and pull it back 25% from the front.  Next, create an “Audience/Effects” mix, pan it hard L/R, and pull it back 75% from the front.  Then create a mono “Soloist” mix and assign it to the center.  Now switch to monitoring the downmix and create your balances.  As long as everything is assigned correctly to the mixes created above, mixing in stereo will feel absolutely normal to you, and when you occasionally dip into the 5.1 to check it you’ll be astounded by the depth, clarity, and cohesion.
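
Laid out as data, the structure looks like this (a sketch only; the names and the 0-to-1 front/back convention are mine, not any console's):

    # 0.0 = all the way front, 1.0 = all the way back in the surrounds
    STEMS = {
        "Instruments/Reverb": {"pan": "hard L/R", "front_back": 0.25},
        "Audience/Effects":   {"pan": "hard L/R", "front_back": 0.75},
        "Soloist":            {"pan": "center",   "front_back": 0.00},
    }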

Live Production: Delay Audio to Match Multiview or Not?

I admit it, I’m a broken record on this topic.  But I’m intensely curious about where the rest of you come down on this, especially you directors.

If this sounds like a techie-only issue let me assure you it is not.  In a nutshell, the advent of huge LCD panels and multiview software has changed the game in live and live-to-tape production.  

     • The panels themselves can have 2-3 frames of built-in delay

     • Multiviewer software lag varies from .5 to 1.5 frames

     • Normal switcher delay is 1 frame

All told, there can be more than 5 frames of delay between when a picture is generated and when it actually plays out on the Program window of that giant LCD screen in front of you.  A single frame of 29.97 video is 33.367ms, and 5 × 33.367 ≈ 167ms.  In the music world, a delay that size corresponds to a sixteenth-note pulse at a slow tempo (think In Your Eyes by Peter Gabriel).  This is enough of a delay that if the audio isn’t delayed back to match the picture, there will be a noticeable sync issue.
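
The arithmetic, as a throwaway sketch (29.97 fps assumed):

    def monitoring_delay_ms(frames: float, fps: float = 30000 / 1001) -> float:
        """Milliseconds of delay for a given number of video frames."""
        return frames * 1000.0 / fps

    # 2.5 frames (panel) + 1.5 (multiviewer) + 1 (switcher) = 5 frames
    print(round(monitoring_delay_ms(5)))  # ~167 ms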

The usual way this is dealt with is to put a ~160ms delay on the mix that feeds the control room so that the director and producers will see things in sync.  I think this is a terrible idea, and I have five main reasons.

1. Directors compose shots by looking.  Directors cut shows by listening.  If the director is interacting with on-camera talent (putting up stills, rolling clips, cueing sound effects, etc.) and using a delayed audio feed, she is cutting the show 1/6 of a second later than when it is actually happening.  This means everything goes up late.  1/6 of a second doesn’t sound like much, but when you’re cutting live comedy it can be an eternity.

2. If there is a live audience there is most likely a PA system.  This has to be run delay free or the delay from the PA will come back through the mics and be heard on-air.  As a consequence, when a stage manager keys his mic to the PL you will hear un-delayed audio bleed from the PA through his mic.  This will drive anyone listening to both the stage manager and the control room speakers crazy.

3. If the control room shares a wall with the studio, the leakage can be a large issue.  I have actually seen (and heard) this happen on a show.  The show ended up abandoning the delay in the control room and learned to live happily with picture slightly out-of-sync, but audio in sync with the leakage.

4. Anyone who needs to monitor the director PL will hear the delayed program leaking through his or her headset.  This effectively forces them to use the delayed audio as well.  This sounds like a minor point, but if the mixer is forced to monitor the delayed feed, he or she will be more likely to upcut talent due to late fader moves.

5. The sync mismatch is NOT an audio problem, it is a video problem.  If it can’t be fixed in the video domain it should be left alone.

Well, where do all of you come down on this?  I have to admit I don’t see much consensus.  I work on shows that use a delay in the control room, and shows that don’t.  I much prefer the latter.


Peter Baird

Mevo: 3 Great Things, 3 Big Limitations

Mevo (Around $400 on Amazon)


Why deny it?  It's cute.

Thinking about Mevo?  So was I.  It looks cool.  The marketing campaign makes you think it will change your life in some vague way, so I got one.  If you do any sort of video or streaming support for your clients, I can assure you there are three very cool things about Mevo.  I can also tell you there are three very big limitations as well.

Great Things About Mevo

  • Company support
  • Ease of setup
  • It works!

Mevo’s Big Limitations

  • No going back on shot decisions
  • Limited zooming
  • Shots cannot be adjusted offline

Which brings us right to Great Thing Number One: Livestream (parent company of Mevo) poured a lot of resources into this little gadget.  The device, the packaging, the company support, all invoke Apple in its heyday.  Mevo at its most basic is very small, little larger than a shot glass.  It’s a cylinder, either black or anodized silver, relatively featureless, but extremely well thought out—as a for instance, the bottom of the device is threaded for a standard mic stand (along with an adapter for a tripod stud).  Livestream is obviously committed to Mevo, and they push out updates regularly to both firmware and software, many of them driven by user feedback.

But the reason you’re interested in Mevo has nothing to do with that, right?  You want multiple camera angles from a single device, the ability to cut between them, and simple streaming live to the web.   Mevo definitely delivers all of that.  Here it is in a nutshell.

Imagine a 4K imaging sensor optimized for a 150° lens and a 16:9 aspect ratio—in Mevo’s case, 3840 x 2160 pixels.  While not quite cinema 4K (4096 x 2160) it’s a very healthy resolution (called, in fact, Ultra HD).  Suppose you divide that box into nine equal boxes, like a tic-tac-toe grid: you get 1280 x 720 pixels in each box.  Does that number look familiar?  It should—that’s the 720p specification of HD television.  So—why not use that sensor to derive multiple HD quality shots?  All it takes is software, and Livestream Mevo software can be used on later iPhones and many iPads (they promise more platforms will be supported in coming releases).


Mevo’s imager resolution is 3840 x 2160 pixels, which is exactly nine 720p boxes.
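
To make the geometry concrete (pure arithmetic, nothing Mevo-specific):

    SENSOR_W, SENSOR_H = 3840, 2160                # Mevo's Ultra HD imager
    CROP_W, CROP_H = SENSOR_W // 3, SENSOR_H // 3  # one tic-tac-toe cell

    # Top-left corner of each of the nine cells
    crops = [(col * CROP_W, row * CROP_H) for row in range(3) for col in range(3)]
    print(CROP_W, CROP_H, len(crops))  # 1280 720 9: nine full-quality 720p framings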

Mevo offers you three options when you’re ready to press the red button: record HD video to an internal Micro SD card, connect to Livestream, or connect to Facebook Live.  (The last two options also engage recording to the SD card, but the card will record at whatever bitrate Mevo can manage for the stream, which will nearly always be lower than the 10 Mbps the card can do on its own.)

As luck would have it, a job came up that seemed tailor-made for Mevo.  I record and remix concerts for the jazz choirs at College of the Canyons, our local community college, and other than parents with smartphones no one really documents them.  Right after I bought Mevo, their holiday concert came up.  Which brings up Great Thing Number Two: Mevo is really easy to set up.

I put Mevo on a mic stand in front of the stage, then started the Mevo app on my iPhone 5GS.  The app asks you to power up Mevo by pressing the button on top of the unit.  A light spins around on the top of Mevo as it powers up, and when it’s done the phone changes to a configure screen.  Mevo can be paired with an iPhone or iPad several ways, but by far the easiest way is to use the built-in Mevo Hotspot.  Choose this, and the app takes you to the iPhone’s Wi-Fi configure page, where you choose Mevo from the list and then return to the app.  The main page of the app has the overall picture from the imager full-screen, and in the upper right a smaller screen of whatever is on line.  Bottom left is the big red button, bottom right are a few other icons.  The concert I was shooting featured two groups of 13 singers each alternating on stage, so I moved Mevo to where I would be able to see all 13 singers in a single master shot (Mevo ended up around four feet from the edge of the stage).

Now forget for a second about the bleeding edge 4K sensor and accept the fact that the best you’re going to get out of Mevo is 720p.  You have nine possible things that can record to the SD card or go to the stream, but only one at a time.  You can see the whole frame, or you can see one of eight shots you’ve set up within that frame, but all of them will output at the same 720p resolution. 

So here is Big Limitation Number One: you are stuck with whatever shot decisions you make while the event is going down—there is no going back.  This has some implications for a one-man shop such as mine: while the event is happening I can’t concentrate on anything other than my iPhone, choosing and cutting video.  I can spare a glance over to Pro Tools to make sure it’s still in record, but if I need to get up and adjust an audience mic the wide shot is going to play for a few minutes without switching.  Or I suppose I could wander around crashing into tables and walls while cutting the show, just like a normal dumb human using a smartphone.

College of the Canyons’ Just Jazz performing a really great arrangement of Bird’s Confirmation.  Audio is from Pro Tools, the shots of the band are from my Canon camcorder (gaff-taped to the pedestrian bridge over the Atrium), but everything else is Mevo.

Accepting that limitation, we are now ready to cut a show, what Livestream calls “live editing”.  Tap on someone’s face on the iPhone screen, and Mevo will instantly cut to a closeup.  Move the box and the image will pan to where you move your finger, the speed of the pan following the speed of your move.  If you want the frame even tighter on the face, you “pinch” on the box.  The image will zoom in obediently, but once you get to 1/9 the size of the master frame, a single tic-tac-toe box, you’re done.  Big Limitation Number Two: Mevo cannot zoom any tighter than the size of one 720p box.  I have a feeling this may get dealt with in future releases of the app, such that if you’re willing to put up with the loss of resolution you can engage some sort of digital zoom, but for now it will only output signals that began life as no less than 720p.


Left: the master shot, all 13 singers.  Right: as close as I can get on the right side.

In practice, this can be quite limiting.  Since the tightest box I can get horizontally is 1/3 the main image (think the middle row of the tic-tac-toe grid), the best I could do was to separate the group into subgroups of four and five singers.  I have to say it was still way better than a single static shot, but it would have been nice to get a little closer for solos.

But really, think about it: I’m sitting in the corner of the stage occasionally glancing at Pro Tools on the laptop willing it to stay in record, and I’m cutting a four-camera show on the iPhone in my hand.  Live.  Which is Great Thing Number Three: it all works.  In my case Mevo was across the room, and I still had no dropouts or loss of signal.  You have to make some decisions about coverage—do you want static shots, or to have the app follow particular faces?  For static shots, press what you want to shoot and hold for a few seconds.  The box will now stay put unless you move or manipulate it.  You can store up to eight angles that way, but in practice I never had more than four.  I haven’t made much use of the face-following feature yet, but it seems to work.  If you tap the tic-tac-toe icon on the lower right of the main screen you get a 3 x 3 multiviewer that you can use to take whatever angle you want by tapping on it.  It’s really miraculous.

Livestream does need to come up with a way to re-frame shots without taking them.  For instance if I center a soloist in a shot on the left side of the group and then cut away to the right side for a response, when I cut to the left side again the shot will still be panned for the solo.  I got caught out by this several times, and I really don’t want to engage the face-sensing algorithm because the face I’m after might change in a few seconds.  So here’s Big Limitation Number Three: shots cannot be adjusted offline.

Mevo Boost (around $250 on Amazon)

My only other complaints about Mevo are minor.  Without the Mevo Boost accessory, battery life is only an hour, but an AC adapter is included.  (Boost provides 10 hours of battery life plus Ethernet and USB connectivity.)  The app is a bit of a battery hog on the iPhone, but since you’ll be stationary anyways just keep your phone plugged into its AC cube.  While Livestream thoughtfully includes a 16 GB Micro SD card, doing the math, recording at 10 Mbps means you’ll burn around 5 gigs per hour of recording, so I bought a 64 GB card when I bought Mevo just in case I went past three hours total.
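
For the curious, the card math (a throwaway sketch):

    MBPS = 10  # Mevo's maximum recording bitrate to the Micro SD card
    GB_PER_HOUR = MBPS / 8 * 3600 / 1000  # megabits/s -> gigabytes per hour

    for card_gb in (16, 64):
        print(f"{card_gb} GB card: about {card_gb / GB_PER_HOUR:.1f} hours")
    # 16 GB card: about 3.6 hours; 64 GB card: about 14.2 hours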

A lot of people have given unfavorable reviews to Mevo on the basis of its supposedly marginal ability to handle low-light situations.  I find this unfair, as there is a tradeoff between the amount of light that hits the sensor and how much of the sensor is actually used.  I think Mevo does a great job in available light, and a fantastic job if you’re able to enhance the lighting situation a bit.  Mevo isn’t meant for broadcast (although it’s almost certain someone will use it for something), so adjust your expectations accordingly.

Post production went just as expected.  Final Cut Pro X was able to import the files from the card quickly, and since I didn’t stream anything I had full 720p HD resolution.  At the concert I set up another locked-off camcorder to shoot the band, and afterwards combined that and my Pro Tools remixes in FCPX.  I let FCPX chew on the camera audio files for sync against the remixes, then set up the resulting Multicam clip with Mevo as Camera 1, the band as Camera 2, and audio from the remix.

For my first time out with Mevo, I can say that despite its limitations I was impressed, and really like using it.  The performers were very happy to have a video recording of the concert, and while I would much rather hire actual camera operators and do it correctly, the budget simply did not allow for it.  And the results were pretty amazing.

More videos from the concert can be seen here.


The Arsenio Hall Show



All good things come to an end.

We made some really great television, but yes, the new Arsenio Hall Show is now just a memory.  We went on the air in September of 2013 and had the privilege of working with some of the greatest musicians on the planet.



Posse 2.0: Robin, Alex, the Boss, Sean, Victoria, and Rob

We also had one of the greatest house bands ever, the Posse 2.0 with MD Robin DiMaggio.  

Mix Magazine did a very nice piece on us for their January 2014 issue, check it out here.

There's so much I want to remember about this whole experience.  Thanks to the DVR at home I have videos of many of these performances, but since I don't have rights to any of them you'll just have to search them out yourselves (or come over to the house).


Gloria Estefan

Why oh why did I not take any pictures???!!!  My apologies.  The lady did two tunes on the show (we don't do that often) from her new album The Standards.  I gotta list the players:

Piano--Shelly Berg

Bass--Chuck Berghofer

Guitar--Dean Parks

Drums--Gregg Field

Percussion--Edward Bonilla

She did The Way You Look Tonight and You Made Me Love You.  Both arrangements were terrific.  I used a KSM9 on her vocal and a pair of DPAs in the piano; Greg Keslake (monitor mixer) brought in his Royer to use on Dean's guitar amp.  I really love the sound of Ish Garcia's (our department head and Production Mixer) Earthworks mics on the overheads and the hat.

I always look forward to seeing Emilio whenever Gloria is on a show I'm doing.  He is absolutely the most gracious, nicest man in the music business, even to the point of pretending he remembers who I am each time.  Just love that guy.  He only came to the truck once this time to hear a playback, though--during the show he sat in on congas with the Posse!


Hiatus Kaiyote

Hiatus Kaiyote is an Australian band that makes arresting music.  Vocalist/Lyricist Nai (pronounced "nay") Palm has such a unique voice and style.  She and the band don't play at deafening volumes, so the individual quirks and nuances of each player come out.  Really good stuff.  The band played Nakamarra, a dreamy kind of love letter that references the red earth of "Oz".  It's a deceptively simple track--very little reverb or effects of any kind, just a sort of dance between the Rhodes doing splashy chords and Nai delivering a very directed stream-of-consciousness lyric.

I loved the style of the track, and used it as the basis for the balance I did on the show.  The only slight change I made was to let the drums get a little more forward, especially toward the end of the tune in the vampy section.  Meter freaks take note--the bridge of the tune is in a very groovy and unexpected three/four after several verses of common time.

I really liked the sound of Nai's vocal on the Shure KSM9.


In the truck with Paul and Perrin.  Nice guys.  I look like I have no teeth.



Esperanza Spalding with the Wayne Shorter Quartet



Musical legend Wayne Shorter and the Posse's Alex Al

Okay, as a bass player I'm just star-struck about this one.  On October 29, 2013, Stage 6 had Esperanza Spalding, John Patitucci, and Alex Al in the building at the same time (John is playing bass for Esperanza, Esperanza is singing, and Alex is of course the amazing bassist for the Posse 2.0).  These are three of the best bass players on the planet.  Oh yeah, and this guy Wayne Shorter will be playing tenor.  Yeah, THAT Wayne Shorter--Jazz Messengers, Weather Report, Miles Davis, the sax solo in Steely Dan's Aja--if you love music, in some way Wayne has touched you.


And here they all are on Stage 6.  That's John Patitucci, Alex Al, and Esperanza Spalding.  I've gone to Bass Heaven.

We had the piano lid full open for the sound check, and pianist Danilo Perez graciously asked if we would prefer short stick for the performance.  It's a very energetic piece, and absolutely no one wants drummer Terri Lyne Carrington to dial back even a tiny bit, so the short stick should help the piano definition.  I'm using a pair of old C12 cap 414's on the harp with gaff tape plus a Schoeps with a knuckle aimed into the second-lowest hole.  There's a Beyer M 160 on Wayne's soprano--a really nice match, and how great is it to have pattern control on a ribbon?  John has a very nice DI on his bass, and I'm using a DPA 4023 and a U47 (yes, a U47!) to fill in the real bass sound.  Drums are the usual mashup of Shures plus Earthworks on the overheads and hat.  Terri also has a cymbal of her own that looks like a shield or the wing of a giant beetle.  Look for it camera left and listen for the strangely pretty ring-out if there's space in the mix.  KSM9 for Esperanza, and we spent a very pleasant ten minutes in the truck working on her vocal sound.  Really nice lady.


Alex Al and Danilo Perez

Well, it's done, and it's…amazing.  Form, arc, and meaning out of chaos.  Wayne and Esperanza dancing around each other like a couple of deliriously happy songbirds. 


Raheem DeVaughn

Raheem had the Posse behind him to sing his song Ridiculous.  The first instant I heard Raheem's voice it reminded me of someone--I couldn't remember who--until Arsenio mentioned Donny Hathaway.  (Raheem's definitely his own guy, but isn't that a great voice to be compared to?)  His camp really had their act together--they sent us background tracks that slotted exactly into a Pro Tools session; all we had to do was add a count-off for the Posse.  Once again, my vocal mic of choice, the KSM9 (set to hypercardioid in this case).


Bernhoft

This is really worth a look.  Bernhoft is a terrific singer, but the thing that will amaze you is how he uses loops of his voice and guitar to build up a really amazing sonic mashup.  We're using his Sennheiser 935s on vocals (one goes to the loop generator, one comes to me) and they sound very pleasant on his voice.  The song is a new one for him called Wind You Up and he manages to generate a lot more energy than one would think possible with just voice and tenor guitar.


Janelle Monae


Nate "Rocket" Wonder in the truck after Janelle's performance.  Nate's bio has the following info:   "According to Metropolis authorities, Nate Wonder invented the Internet and barbecue sauce. Speculation that he invented the flying car remains inconclusive ."  What is certain is that he produces great music.

She is a force of nature.  And a very gracious person.

She performed Electric Lady on Arsenio 11/4/2013 and just killed it.  I love the fact that she has real brass players on the tour--I made sure I could hear them in the mix!


Atlas Genius

If So is an extremely well-constructed song, a little pop gem.  We're using all their gear except for Ish's Earthworks mics on the overheads and hat.  Pay special attention to Keith's guitar sound--they tour with a pair of the Shure KSM313s on his two Fender guitar amps, and the ribbon mics really sound great.  They have a lot of fun mixing live mics and samples on the kit--there were two mics plus a trigger on the snare, in addition to an E snare for effects.


With my new friends Keith and Darren from Atlas Genius.  The "I Voted" sticker means it's November 5.

I think this may be in my top 20 all time favorite live mixes.  Not that they need my endorsement, but these guys write really good pop music that hearkens back to some classic 80s bands--Men At Work, Duran Duran, and you can definitely hear some U2 in there--but the sensibility is all their own.  Expect great things from them.

Keith says they've been away from home for over a year and are heading to Great Britain next.  They're looking forward to getting home to Australia maybe around Christmas.  Aussie assie aussie!  Oi oi oi!



Childish Gambino


Ray, Chris, Donald (Childish Gambino himself), me, and Thundercat with a fan

He does a new song tonight, Shadows.  It has quite an arc to it--it goes from controlled/introspective, with Thundercat playing amazing fingerstyle on his six-string bass, to a breakdown where-am-I vibe, to full-on echo/crashing drums/thundering bass.  I asked CG what the emotions were.  "Think of it this way.  We're in the park, and it's nice, and then it starts raining.  Hard."  I've really never seen anything like this performance on television before.  He and Thundercat start over on the couch with Arsenio, and then you hear the band, and then they move to the stage…I guess you have to see it to fully dig it.  Really nice people, no surprise there.

Prince

It was a true bucket list kind of day.  On March 4, 2014, Prince brought 3rdeyegirl, New Power Generation, and other assorted talents to the Arsenio Hall Show and took over.  Which was awesome. 


Big Booty

From the very first conversations we had about the sound of the show, Arsenio Hall Musical Director Robin DiMaggio emphasized the desire of everyone involved for the show to have lots of good low end in the music.  “Big booty,” he said.  


I’ve been in broadcast audio for quite a while.  Back before transmission was all digital, we tended to treat bass the way mastering engineers did vinyl--never let the low end get too big, but try to maintain the illusion that something big was down there.  Partially this was due to how easy it was to overload any part of the broadcast chain--a satellite transponder, a microwave link, the preamp stage of the transmitter--and since every part of the chain had limitations and quirks, it was best just to keep the bass under control.  Bass overloads usually happened in an average-level rather than peak-level fashion, and the net effect was to crowd out everything else in the mix but the bass, especially since there were limiters protecting the signal at nearly every point in the chain.  The other reason to keep the low end under a tight rein was the prevailing sense from producers that nobody had a full-range system on their home television anyways, which was largely true.  I do remember hooking up my parents’ big stereo in 1978 for the first broadcast of Battlestar Galactica, however, thinking it would be a Star Wars-type experience.  Alas not.


Digital broadcast has changed a lot of things, and the old attitude toward bass isn’t valid any more.  Now, as long as you stay within whatever framework a particular network decrees about loudness (not the same as volume), you can do whatever you like with the mix.  And with the penetration of home theater in the general market, there are a surprising number of homes with subwoofers.


So, how to make the Arsenio Hall Show stand out when the Posse or any of the myriad guest artists are pumping?  Of course we go to the internet first and see how everyone else does it.  Dave Pensado, blessings be upon him, shows a method of thickening up synth bass tracks by duplicating the track with a 330Hz high pass on one track and a 300Hz low pass on the other.  The important part of this is that it allows the bass portion of the source to be treated on its own.  Dave uses the Waves Air plug and then a compressor set 6:1 with loose attack and release times to really focus the bass.


This approach works extremely well for synths, and with some tweaks can be used with other low-end sources.  For the purposes of television, however, I need the focus to be slightly different.  First, I want sub-bass information coming from instruments that may not actually have any.  Second, I want to optimize the signal to what the home viewer may be listening on.  The first item can be accomplished with any bass enhancement plug that uses some kind of synthesis or octave division.  The second, however, is a little more subjective.


Since I can’t control how a viewer sets up his subwoofer, I don’t worry too much about it.  I set mine using the Blue Sky test files and a Radio Shack SPL meter, and I set the whole system to -20 dBFS = 78 dB SPL at mix position and leave it there.  That’s loud enough to make me happy without making me deaf in the process.  So far so good, but what about the sub-less rest of the universe?  My mini speakers are an ancient pair of NS-10M Studios that have had the drivers replaced many times.  Truth be told, I’m not an NS-10 fan.  Don’t really love them, never have.  But they do have the virtue of telling one what something sounds like on a decent home speaker, albeit one with next to no bass response.


Big Bottom set for High Booty


Now if I want to be able to make the subwoofer crank, I need to push subwoofer frequencies, right?  But what about the NS-10s?  Practically zero of that extra energy will make it to them.  So I have two aux sends available to every channel, called Low Booty and High Booty.  I twist Low Booty to make the subs thump, and I twist High Booty to wake up the NS-10s.



High Booty bandpass filter

Both sends end in aux inputs that feed the mix bus.  Both inputs have the following chain: Aphex Big Bottom, Avid Pro Lim, and Avid 7 Band EQ.  If this sounds Avid-heavy, remember I set it all up in September of 2013, before the majority of third-party 64-bit AAX plugs began to appear.  The difference between the two inputs is in the Big Bottom and the EQs.  Low Booty has Big Bottom tuned to 69Hz and the EQ HP at 48Hz and LP at 120Hz.  High Booty has Big Bottom tuned to 151Hz and the EQ HP at 67Hz and LP at 160Hz.  I set those frequencies by listening to the two different monitor systems with pink noise and then fine-tuning them with music.
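
If you want to play with the idea outside the console, here is a minimal sketch in Python (scipy assumed; the tanh stage is a crude stand-in for Big Bottom's harmonic enhancement, and the Pro Limiter is omitted entirely):

    import numpy as np
    from scipy.signal import butter, sosfilt

    FS = 48000  # broadcast sample rate

    def booty_send(x, enhance_hz, hp_hz, lp_hz, drive=4.0):
        """One 'Booty' aux: crude low-band harmonic generation, then the
        band-pass EQ corners from the chain described above."""
        low = sosfilt(butter(2, enhance_hz, btype="low", fs=FS, output="sos"), x)
        enhanced = np.tanh(drive * low) / drive  # saturation adds harmonics
        hp = sosfilt(butter(2, hp_hz, btype="high", fs=FS, output="sos"), enhanced)
        return sosfilt(butter(2, lp_hz, btype="low", fs=FS, output="sos"), hp)

    t = np.arange(FS) / FS
    kick = np.sin(2 * np.pi * 60 * t) * np.exp(-4 * t)  # toy kick drum
    low_booty = booty_send(kick, 69.0, 48.0, 120.0)     # thumps the subs
    high_booty = booty_send(kick, 151.0, 67.0, 160.0)   # wakes up the NS-10s
    mix = kick + 0.5 * low_booty + 0.5 * high_booty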


Avid Pro Limiter High Booty Squish


The ProLim is set for Auto Release with the threshold set right down to -15.  I then selectively dial in various amounts of each send from the drum buss (both the clean and the parallel return), bass, percussion pads, and anything else that might add to the fun.  With the drums at first I experimented with just kick and floor tom going to the Booty busses, but putting the entire kit in there makes really wonderful things happen to the snare, which I did not expect.


How it looks on the ICON.  The small "p" means the send is for the Posse, Arsenio's house band.  There is a separate set of Booty sends for the Guest Artists.


The result is two inputs that add bass “feel” to the mix without overpowering it, while at the same time adding insane amounts of sub bass for anyone who likes that stuff.

Using Pro Tools and ICON for Live Mixing with Snapshots

For many years now, live event mixers--FOH, monitors, and broadcast/record--have relied on snapshot and scene-recall automation to make shows go smoother and keep performances as consistent as possible.  I've used lots of snapshot/scene systems, but the better ones all rely on some combination of storing the present state of the console and then recalling that state on the fly.

Pro Tools, on the other hand, was not designed as a live mixing platform.  It is a powerful recording and mixing system meant for the recording studio and not the stage, and Digidesign recognized this when they rolled out the Venue series.  If you do happen to own an ICON, however, you already have a powerful live mixing system that you can set up to emulate a Yamaha or Venue.  Here's how:

1.  Gather all of the tracks that will be involved in a recall into a group--let's name it Guest for now.  Open Modify Groups (Control-Command-G on the Mac) and uncheck Follow Globals.  In Attributes uncheck all the Main boxes (Volume, Mute, Pan, LFE).  In the Mix Attributes sub-window on the bottom check Record Enable, Input Monitoring, and Automation Mode.


What you have done here is make a group in which, when it is enabled, the faders and mute buttons are all independent, but changing automation mode, input switching, or record enable on one track will change all the tracks.

2.  If you haven't already, open Preferences/Mixing and enable Allow Latch Prime in Stop and Plug-In Controls Default to Auto-Enabled.  Make sure automation is enabled.  My personal preference here is to enable all the boxes in the Automation window except Mute since I frequently use mutes to audition different balances, and I don't want those moves stored.

3.  Make sure Guest is enabled (highlighted) in the GROUPS window, then toggle one member track into Latch mode.  All members of Guest should change to match.

4.  Set up your initial balances, inserts, send levels, etc. as usual.  Any touched fader will drop into active Write mode and the Latch light will flash.  It's not a bad idea here to occasionally hit Write To All, which will switch all the Latch lights from flashing back to steady.  As you adjust things, a quick glance at the flashing lights will tell you which tracks have been modified and which haven't.

5.  In the GROUPS window click to the left of Guest.  All members of Guest should now show that they are selected.


Click on the black dot to the left of the group to Select all members of the group.

6.  Press Snap on the ICON soft key pad to open the Snapshot buttons.

7.  When you have a starting balance, hold Shift/Option on the keyboard and press the CAPTRE soft key.  What you have just done is to write all selected channels into the snapshot buffer--note that only the channels that are selected go into the buffer--any tracks that aren't members of Guest will not go into the buffer and hence will not be subject to a future recall.  Pick one of the soft keys below, and press and hold the key until the window says Stored.  (If you're in Select mode and not Focus you can now double-press the soft key you just stored to open a naming dialogue for that snapshot.)

8.  As you modify your balances and settings, keep overwriting that soft key by loading the buffer with the Shift/Option/CAPTRE combination and a press-hold on the target soft key.

9.  When you need to store a different snap, choose a new soft key.  There are four available on the first page and another 44 available on succeeding pages.

10.  When you're ready to recall a snap, press the soft key of the scene you want (which pops it into the buffer) and PUNCH CAPTRE to execute the recall.  You will note that all tracks that are members of Guest go into active Write mode instantly, and tracks that aren't members of Guest aren't affected at all.  This behavior is not affected by whatever channels you happen to have selected when you recall the snapshot.  The concept to get here is that the channels that are selected when you store are the channels that will be affected when you recall--the little model below spells it out.
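
If the select-at-store semantics seem slippery, this toy model (plain Python, nothing to do with the actual ICON firmware) captures the rule:

    class SnapshotBuffer:
        """Toy model of the capture/punch behavior described above."""
        def __init__(self):
            self.snaps = {}  # soft key -> {channel: fader level}

        def capture(self, console, selected, key):
            # Only channels selected at STORE time go into the snapshot
            self.snaps[key] = {ch: console[ch] for ch in selected}

        def punch(self, console, key):
            # Recall touches exactly the stored channels; the current
            # selection is irrelevant, and everything else is untouched
            console.update(self.snaps[key])

    console = {"Vox": -10.0, "Gtr": -5.0, "Kick": 0.0}
    buf = SnapshotBuffer()
    buf.capture(console, selected={"Vox", "Gtr"}, key="F1")
    console["Vox"] = -40.0
    console["Kick"] = -12.0
    buf.punch(console, "F1")
    print(console)  # Vox back to -10.0, Gtr unchanged, Kick stays at -12.0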

You now have a system that can reliably store and recall 48 snapshots on the fly.  I know it seems like a lot of keystrokes, but really, once you have things set up the important moves are the left-click to select all members of the group and the key combinations to store the selected tracks into the buffer and then to a soft key.  Note that you can have more than one snapshot group--I used this to keep separate recalls for the house band and the guest artists on Arsenio.

I would avoid using VCA Masters when doing this.  The Coalesce options can make the behavior of slave channels somewhat unpredictable on recalls.  The exception is VCA groups where you never change the internal balances (FX returns, for example).  You can emulate VCA spill by making Custom Fader layouts for any logical groupings (drums, strings, horns, etc.) with only the members of the group in the layout.  Then assign the members of the group to a bus and re-enter the bus on an Aux Input on the main layout (also a great way to pop a limiter across a bunch of like instruments).

Keep in mind that this is an automation-only recall, and as such it has no effect on bus assigns and plug-in selection/ordering.  If you need different snaps to use different bus paths, all needed paths must exist throughout the whole session.  If you need different plug-ins at different times they all need to be enabled throughout the entire session and brought in and out with mutes or fader moves.

One last tip--get into the habit of hitting Write to All until you actually roll the transport for the first time.  From that point on, use Write to End to save all your previous moves.  Once you're finished, you now have all your original live moves intact and you can use them as the basis for quick touch-ups to the mix.

Good luck,

Peter


A Qualified Panegyric

As the process of innovation gets ever faster, I’ve noticed that combinations of workflow and gear now resemble snapshots more than portraits.  When I was at Post Logic years ago we had rooms that didn’t substantially change for some time--a big analog desk with fader automation, and an Adams-Smith AV system controlling various 24-tracks, DATs, video machines, and the odd 2-track.  Certainly we tried to innovate as far as the technology would allow--it was a big deal when we put all the sound effects CDs in a central jukebox with a Mac in each room to search and audition, for example--but a picture of the room taken in the late 80s would have been substantially the same five years later.

What a difference a few decades makes.

These days, a workflow/gear combination may go roaring past me with only a fleeting glimpse before being swallowed forever by progress.  So I want to take a moment and talk about the present workflow “snapshot”.  Some things that have been around for a while have been joined by some newer things, and some older things have seen some dramatic improvements.  Since this is largely a love letter to the D-Control, I’m putting it here, but it involves a variety of other gear not necessarily limited to the Avid marque.

Here are some highlights from the present snapshot:

  • Blueface 32-fader D-Control
  • Pro Tools 11HD
  • Pro Tools HDX|3 talking to Focusrite RedNet 5s and HD I/Os
  • Aphex Aural Exciter and Big Bottom
  • Avid Pro-Lim, Reverb One, ReVibe
  • Sources from the stage on Yamaha Rio3224-Ds via the Dante network

And here’s my workflow:

In the morning, set up and sound check a guest artist of some type.  Could be hip-hop, could be rock, could be blues, could be pop, could be latin, could be anything.  Anywhere between 10 and 60 inputs.  After that, rehearse with the house band (48-58 inputs depending), setting the show order and possibly recording some new cues.  Then camera blocking, and if time is available I may get to play back the last camera pass of the guest artist to get things as dialed in as I can.  Then at some point we do the show.  Depending on the time available before the network feed, maybe some quick touch-ups to the musical performances, but mostly it airs as it went down.

Here are the things that make coming to work fun.

First, as I said before, the D-Control.  Being able to have a dozen different Custom Fader setups a single button away is really what makes this whole thing possible.  I set all 32 faders for Custom, with drums on one setup and the rest of the band on another (the drum submaster lands on the band setup).  It keeps things calm on the active layer while making a quick look at the kit for a touchup simple.  Multiply that by two bands, and you get an idea of why it’s important to me.  And when last minute things get thrown my way, after a quick layer edit I’m ready to go.  I keep another layer with just masters on it for switching stems in and out of record and selecting which busses feed the external meters.  The surface is also very good at letting you know what the automation is doing: the status lights on the channels really yell out what state they are in, and since my workflow involves hitting Write To All during rehearsals and Write To End when recording, having those buttons dedicated on the surface is really useful.

I also find D-Control amazingly responsive.  I know it’s just a big mouse, but it reacts instantly and doesn’t fight me.  Some other digital desks I know seem to love to fight.

Up top I mentioned this observation was “qualified”.  Of course, the biggest irritation for me is Avid abandoning the platform just as I'm starting to fall in love with it.  I understand corporations have to chart their own courses--I just wish they had done a redesign/relaunch of ICON instead of abandoning it for the S6 project.

And of course I share the usual gripes about the surface that have been with us since it came out.  The meters are strange, no way around it.  It’s always a shock to go from the meters on the desk to the ones on the Mix screen, especially now that we have such a great selection of meter ballistics in 11.  My personal grump is the control spacing--the console could probably have been a third smaller without sacrificing utility (which would also have made the top row of encoders a little more reachable by standard-sized arms).  I’m not as unhappy with the scribble strips as some others seem to be--they’re a reminder of what the state of the art was ten years ago, but they're still an enormous source of critical information while working.

Now that I’ve mentioned Pro Tools 11, I can list a few things about it that make my present workflow possible.  First of all, in the configuration I have, it’s much more stable than any other Pro Tools version I’ve ever used.  There was a release a few years back that would drop record during concerts, which was not good for my blood pressure.  11 feels completely different in this regard from its predecessors: the spikes are gone from the system status bars, and while this may just be the effect of re-writing the graphics software, it certainly doesn’t feel that way.  Taken together, this removes much of the pucker factor when mixing in the box.

Second is the Offline Bounce capability.  Since most of my work involves song-sized chunks I guess I could use the regular Bounce to Disk function and not give up too much, but I always seem to be in situations where minutes count.  The fastest I’ve seen Offline render a selection that involves quite a few plugs, re-entry paths, and a bit of automation is around 4-5 times real time, and mostly it’s slower.  But it works, except for one strange bug (11.0.2).  If your session start time is anything other than 00:00:00;00 (I work in television and therefore in drop frame), the original time stamp on any bounce you make will be offset by exactly the difference between your session start and 00:00:00;00.  Annoying, but the workaround is fairly simple: keep your session start time all zeros.  Although for me this means the Session Start is now 15 hours away from the first recorded material, it also means that any bounces I send over to editorial will fall into their timeline frame-accurate to the camera master time (which saves them a lot of grief spotting).  I hope Avid fixes this soon.
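For the curious, here's a quick sketch in Python of what the bug does--my illustration, not Avid's code, and the timecode values are made up for the example:

```python
# Models the 11.0.2 bounce time-stamp bug at 29.97 drop-frame (US television).
# Hypothetical example values; not Avid code.

def df_to_frames(tc: str) -> int:
    """Convert 'HH:MM:SS;FF' drop-frame timecode to a frame count."""
    hh, mm, ssff = tc.split(":")
    ss, ff = ssff.split(";")
    hh, mm, ss, ff = int(hh), int(mm), int(ss), int(ff)
    minutes = 60 * hh + mm
    # Two frame numbers are skipped each minute, except every tenth minute.
    dropped = 2 * (minutes - minutes // 10)
    return (3600 * hh + 60 * mm + ss) * 30 + ff - dropped

def frames_to_df(fc: int) -> str:
    """Convert a frame count back to 'HH:MM:SS;FF' drop-frame timecode."""
    blocks, rest = divmod(fc, 17982)     # 17982 frames per ten-minute block
    fc += 18 * blocks                    # re-insert drops for whole blocks...
    if rest >= 2:
        fc += 2 * ((rest - 2) // 1798)   # ...and for drop minutes inside one
    hh, mm, ss, ff = fc // 108000, (fc // 1800) % 60, (fc // 30) % 60, fc % 30
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

session_start = "01:00:00;00"   # anything other than all zeros...
intended      = "01:02:30;00"   # ...and a bounce meant to land here
offset = df_to_frames(session_start) - df_to_frames("00:00:00;00")
print(frames_to_df(df_to_frames(intended) + offset))  # 02:02:30;00 -- an hour late
# Workaround: with a session start of 00:00:00;00 the offset term is zero.
```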

Third is the ability to automate while recording.  This may not sound terribly important, but it’s a huge timesaver.  Now, we all know that the automation in Pro Tools is some of the simplest and at the same time deepest ever written for mixing.  Everyone who was big into SSL or Flying Faders automation, raise your hands: automation used to be really scary, and only those who have seen 60 faders suddenly fly to the wrong position after an hour of careful balancing--because you missed one button push--know what I mean.  Not anymore--I never even bother to turn automation off these days.  The PT automation suite is really the only one an operator can start using right away and still be confident the automation isn’t out to ambush him or her.

Those advantages really come together when doing music for television.  When a show has a daily delivery, the feed to the network very often starts an hour or less after production finishes.  In our case it’s always less than an hour, sometimes 30 minutes or less.  Let’s consider the case where something needs to be tweaked in that time.  If you weren’t able to automate while recording, you would either have to rebuild the entire mix very quickly or just punch in the spots that really need fixing.  If you wanted to make a minor change to one instrument or vocal across the whole song, you would very quickly run out of time, losing whatever good moves you made during the live performance.  With the ability to automate while recording, your moves are all safe and you can pick and choose which elements need touching up.  When you have the remix finished, you can quickly select that area, do an offline bounce, and 11 will deliver it to editorial in a third of the time it takes to play it out (all the while miraculously preserving the reverb tails, compression ballistics, etc.).  My favorite application of this is when we do hip-hop and the artist uses language that Standards and Practices is unhappy with.  When that happens I just select the entire performance (with sufficient handles for the editors), mute the channels that may contain questionable language, label the bounce “Song no vocal”, and send it on.  Editorial can then quickly drop it on the timeline in sync, and wherever the lawyers object, swap from the line mix to the remix for a few frames.  It means the show isn’t forced into silence, tone, or some other substitute sound, and the song doesn’t get interrupted.

Lastly, let’s talk about sonics.  I don’t consider myself a “golden ears” type, but even I can hear the difference between HDX and TDM.  The amount of detail in the audio now is startling.  Overhead mics now actually reproduce cymbal sounds and not the ripping paper sound we’ve all put up with for so long.  For better or worse, things actually sound more like their sources now than they used to.  I don’t know how they did it, but the sonics are dramatically better. 

Once again I’ve taken way too many column inches to say something simple.  I’m grateful for the combination of tools and toys available now that make mixing easier and more fun.  Which is what it’s always supposed to be, isn’t it?

Peter

A Little Love for the BeyerDynamic M 160

I guess I've always harbored a deep and unreasonable suspicion of Beyer ribbons because a place I worked at long ago had an M 500 (maybe two?) that always seemed to be broken.  After multiple repairs it never seemed to last more than a month before something fell off or the ribbon gave up.  Looking back, I'm sure this had more to do with us not treating the thing with enough respect, but it was a long time ago.  Maybe we stuck it in front of the kick too much.

Flash forward 25 years or so: while mixing music for the new Arsenio Hall Show, I've gotten requests from three separate acts in the last few weeks for an M 160 on something--guitar, kick, whatever.  Then I get the advance info for the Wayne Shorter Quartet with Esperanza Spalding, and Wayne lists two mics he likes to see these days.  The first is a Soundelux U99; the second, you guessed it, is an M 160.  (The U99 is a little huge for TV, and besides, I don't know anyone who owns one, much less rents one.)  This is now a trend, I think, so I place a call to Ron Cheney over at RSPE to get an M 160.  It arrives the morning of the day Wayne and Esperanza are scheduled, and we literally take it brand new out of the box and put it on Wayne's stand.

Kind of revelatory.  Soprano sax is a tricky instrument to reproduce--it can get honky or screechy pretty easily--and when Wayne, who is a master, living legend, sax god of the highest order, started playing, it was creamy and warm and still had enough character to stand up in a dense mix with almost no EQ.  The M 160 isn't cheap by any stretch--street price around $600+--but it's probably the only ribbon I know of that sounds great and has enough pattern control to make it usable on a noisy stage.

So I hereby admit my prejudice was unfounded and give an unqualified endorsement of the M 160.  At some point I'm going to bring the sucker home and try playing some trombone into it.  Not that I'm any good, but I am curious how well that character--slightly forward for a ribbon--will translate for brass.

The performance was amazing--very outside and something of a challenge for mainstream television.  I loved it.  And I got to meet Esperanza!  Let me know what you think of Wayne's sound, if you feel like it.  Thanks again to Ron and RSPE for the hustle.

Old News

VGA10

Which is what they called the 2012 edition of the Video Game Awards on Spike.  The nice folks at MTV Technical asked LRT to handle the music acts for the show, and I had the privilege of doing the music mixes.  The live performances were from Linkin Park, Tenacious D, and Gustavo Santaolalla.  The show went live on December 7, 2012, and as you might expect it was a blast.

I was cleaning out my computer bag, found these sheets, and was just about to trash them when I thought, "some scholar writing a dissertation about live broadcast may find these interesting in 80 years or so."  Probably not, but here they are anyway.  These are the cue sheets I prepare for myself when doing a live mix--Castle Of Glass is Linkin Park, and Rize Of The Fenix is Tenacious D.  The show script will have all of the lyrics printed out, but a long song can reach 20 pages or more.  I need something I can glance at to get my fingers in the right place and make the move just before the word or lick happens.  So I end up making my own cue sheets and then making notes on those after listening to the rehearsal a few times.  These are the ones I used at the VGAs.  Note that Tenacious D takes two pages--they really like the long-form rock anthem.


Click on the image to see it full screen.  The links below will take you to the Spike TV video portal where you get to see a quick ad and then the video.  Have patience.




ROVE LA SEASON 2

Well, it's all over for Season 2.  We had an incredible time, and I will miss Rove and the staff greatly until the next cycle.  Rove McManus had the most popular live "chat" show in Australia for something like 10 years, and after he moved to LA, Foxtel in Australia asked him to do a weekly show to beam back to Oz, taking advantage of the availability of big-name guests in LA.  It was a big hit, so they asked him to do a Season 2.  And what a guest list--John Travolta, Olivia Newton-John, Michael Bublé, Zach Braff, Eddie Izzard, Sarah Silverman, Rob Schneider, Wayne Brady, Aisha Tyler, Rainn Wilson, Russell Brand, four baby pigs, and a full-grown rhinoceros (no joke--a real rhinoceros, running through the Warners lot).  This season the TV Guide Channel picked up the show for the USA, so you can actually see it here.  It's freaking hysterical.  For my money Rove is the best talk show host since Johnny.

Rove (r) with Joel McHale, Wendi McClendon-Covey and Wayne Brady

Photo by Craig T. Mathew/Mathew Imaging

Doing Rove is also very special for me because it gives me a chance to follow a show all the way through from production to post (see The Gypsy System).  I get to fix any goofs I might have made during taping and really get the thing polished.  Typically we would shoot Wednesday nights and edit/sweeten on Thursday (with a really talented editor named Gerrad Holtz) for a Thursday-night delivery back to Oz.  Sure hope we come back for Season 3!

LOVING THE SILENT TEARS

My old friend Trace Goodman (Goodman Audio, a really terrific audio production company) has a client called SMTV.  They do an annual celebration in LA, and for the last two years they have capped it with a Broadway-level original musical.  Trace, bless his heart, recommended LRT and me to do the broadcast mix for the musical portion back in 2011.  For 2012, the celebration was at the Shrine, and the musical was entitled "Loving The Silent Tears."  It is essentially a spiritual/musical tour of the countries of the world set to poetry, with a strong environmental and humanitarian influence.  It features music from (among others) Al Kasha, Don Pippin, and David Shire, and performances from Jon Secada, the wonderful Debbie Gravitte, and an absolute knockout punch from Persian superstar Siavash Shams.  As you might assume, the music and performances were top-notch, with a large pit orchestra directed by the talented (and feisty!) Doug Katsaros.  LRT was brought in to mix the musical for the live broadcast, and I am now putting the finishing touches on the DVD mix in LRT.


LRT next to Denali's new flagship, California, behind the Shrine


SHINEDOWN LIVE

Got a call from a nice guy, Garrett Davis, who works with the band Shinedown.  He needed to record the band's live performance in Phoenix at the end of September, but just as important to Garrett was having a room he could trust to monitor the tracks in as the show went down.  So the truck and I headed to Phoenix to help out.  Great band, great tracks, hoping to hear something from Garrett about a release…



Garrett at the D-Control.  He's quite a power user, and fluent in all things Pro Tools.

Garrett's crew got the black t-shirt memo

LRT behind the pavilion 



RESOLVED 2012

Darius hard at work at the D-Control

Back in 2008 I got a call from a young LA mixer named Darius Fong.  Darius has one continuing project very near to his heart--a praise band called Enfield.  One of Enfield’s major projects is an annual conference called Resolved.  Darius and the band had decided they wanted to produce a live album of the band at the conference, so they hired LRT to come out and do the original 2008 recording and provide Darius with a space to accurately monitor the tracks he would be using to produce the album.  Everything went very well, and the album they produced was very successful.


Enfield stops long enough for a group portrait

Well, now here I sit (June 2012), once again in the loading dock of the Palm Springs Convention Center, helping Darius record another Enfield album.

So what’s changed in four years?  Well, Enfield, already a great band, has progressed to the point where they can read each other’s minds on stage.  LRT has a great big Digidesign console in place of the Yamahas, and Resolved has announced this is their final conference.  Also, in the meantime, I lost my best friend and coworker, Gary Van Pelt.  For all of us, it’s a little bittersweet.

One of my favorite things is watching Darius discover functions on the ICON and integrate them into his workflow.  Already a Pro Tools power user, Darius finds everything familiar, but it’s great to listen to him instantly remix a performance and then put it up on the Enfield website as a promo.  Now THAT’S a power user.

Darius is a true “golden ears” mixer and a really nice guy.  He trained with Bill Schnee, and if you’re an audio geek you know that’s about as good as it gets.  Rock of Ages: Enfield Live at Resolved 2012 is scheduled for release 8/14/12; go to http://enfieldband.com.  The arrangements and playing are really top-notch, and hearing several thousand conference-goers sing along is quite emotional.


CAESARS PALACE TOTAL REWARDS RELAUNCH

Cee-Lo and friends announce Vegas show at the TR Launch

This is a difficult one even to describe.  If you heard about this, or were at one of the venues, you know it was amazing.

In a nutshell, Caesars Palace has re-branded its preferred-guest program, the one they call Total Rewards.  They decided that what they really needed to introduce the revamped rewards program to the world was a big free concert.  Great idea.  But in true Vegas style they took it several steps beyond--not just a free concert in one city, but a coordinated free concert in FOUR cities simultaneously--Los Angeles, New Orleans, Chicago, and New York.  The idea was that each city would have several artists perform live over three hours, and that highlights from each city would be beamed to the other three cities to put up on their big screens during the act changeovers.  Los Angeles was chosen as the hub.

The big day was March 1, and the LA venue was the Hollywood and Highland Center, home of the former Kodak Theater, next door to Grauman’s Chinese and across the street from the Egyptian and the Jimmy Kimmel Live theater.  They built a giant stage in the plaza and hired Lil’ Wayne and Cee-Lo Green to perform, each with a full backup band.

LRT was brought in to handle the music mix for the two acts and push it over to the AMV EPIC 3D truck for distribution to the web and the other cities.

One of the coolest things about this was that each act got to do a short set, not just a song or two.  Cee-Lo took the opportunity to announce to the world that he had just made a deal to do a long-term major show at Planet Hollywood in Las Vegas. 

The show was nerve-wracking, but very exciting.


LRT at Hollywood and Highland before the show


TOAST OF THE NATION

LRT next to the Blue Whale

As I sit here we’re just a few hours away from the West Coast feed of NPR’s annual Toast of the Nation New Year’s Eve broadcast.  LRT is the mix facility for the broadcast from the relatively new LA jazz club the Blue Whale.  Bay Area vet Phil Edwards is mixing (the hands you see on the home page are Phil’s), and the band is the Billy Childs Quartet.  Very technical, demanding stuff, but Billy’s natural gift for constructing lines makes it very accessible as well.

LITTLE RED TRUCK MODIFICATIONS

I can’t get over the change the D-Control has made in the capabilities of the truck.  And it must be admitted, driving this desk is FUN.

ROVE LA

Rove McManus is a very funny Aussie who had one of the most successful late-night shows down under for 10 years.  He’s now doing a weekly show that airs in Australia and Great Britain, but he’s doing it from CBS in Hollywood.  The guest list has been phenomenal, and as I mentioned, Rove is hysterical.  We shoot the show on Thursday and I do all the audio post on Friday--I moved my Pro Tools HD|Native system into the office next to the editor, added two Avid Artist Mix panels and my lava lamp, and away we go.

