Timecode versus Sync: How They Differ and Why it Matters


Though many of us are familiar with cameras having two ports―a timecode in/out port and a genlock port―we may not consider why both might be needed. Perhaps we dismiss genlock as just a legacy holdover from the days before switchers had built-in frame synchronizers. The terms “sync” and “timecode” are often used interchangeably, and the fact that timecode can be used to sync devices only compounds the confusion. Sync (genlock) is like a beat that calls out when each field (and, with tri-level, each line) occurs, while timecode indexes each frame (or the equivalent period of time for an audio recorder) so that it can be uniquely identified in post.

"... timecode alone, while sufficient to provide syncing points for post production, may not be a reliable way to keep devices in sync with each other."

If we need to synchronize multiple cameras, or a camera and audio recorder, we tend to just use timecode. The problem is, timecode alone, while sufficient to provide syncing points for post production, may not be a reliable way to keep devices in sync with each other.

With timecode, unless the cameras or recorders are re-jam-synced frequently, their timecode will start to drift apart. The problem is that the quartz-crystal (or other piezoelectric) clocks used in many cameras and quite a few audio recorders are too imprecise for critical applications. This means that two different clocks may have two different opinions on how long a second is, causing recordings to drift apart over time. Depending on the camera or recorder, the timecode may drift as much as a field in a surprisingly short space of time (30 minutes in some cases). Even a one-field offset is enough that sound and picture will be noticeably out of sync when the two elements are combined later, in post.
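To put rough numbers on that drift, here is a minimal back-of-the-envelope sketch in Python that converts a clock error expressed in parts per million into seconds of drift. The 10 ppm figure is purely illustrative, not the spec of any particular camera or recorder.

```python
# Back-of-the-envelope timecode drift between two free-running clocks.
# The 10 ppm error below is an illustrative assumption, not any product's spec.

FIELD_SEC = 1 / 59.94          # duration of one NTSC-rate field, in seconds

def drift_seconds(ppm_error: float, elapsed_seconds: float) -> float:
    """Worst-case drift accumulated when two clocks differ by ppm_error."""
    return ppm_error * 1e-6 * elapsed_seconds

drift_30min = drift_seconds(10, 30 * 60)
print(f"After 30 min at 10 ppm: {drift_30min * 1000:.1f} ms "
      f"(~{drift_30min / FIELD_SEC:.1f} fields)")   # ~18 ms, about one field
```

In other words, a clock that is off by only 10 parts per million will have slipped roughly a field after half an hour, which is the behavior described above.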

In light of this limitation, several responses come to mind:

  • Genlock the devices, forcing them to "fire" at the same instant
  • Rather than jam-syncing, constantly feed in timecode from a central source
  • Fix it in post, perhaps using sync software

What is genlock?

To start, it is worth revisiting genlock. Genlock (generator locking) appeared in the broadcast world as a way to synchronize multiple video sources, including cameras, VTRs, and external feeds, so that their field rates are all in phase with each other. This was required to prevent so-called "jumping," an artifact that occurs when switching in live productions if all of the video sources are not in step with each other. For a long time it was necessary to genlock each device―camera or deck―by sending a black burst or composite signal into it, originating from a common house clock. These days, many switchers handle frame sync on their own by using a frame buffer that holds each field (or frame, if the signal is progressive) momentarily until it falls into alignment with the program feed.

In the HD world genlock is still with us, but the composite reference signal has largely been supplanted by tri-level sync, which emits pulses that clock both the frame rate and the line rate.

Because it synchronizes frames, genlock can be used to keep multiple devices from drifting apart. But it is tricky because, unlike timecode, the camera or recorder must always be hardwired to the source. On top of that, where cables of different lengths are used, each camera must be calibrated to account for the length of its cable run. In a fixed studio setting this isn't a big problem, but it can be in the field. And it is especially problematic in cinema production, where the setup might change with every shot.
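To get a feel for why cable length matters at all, here is a small sketch that estimates the propagation delay a coax run adds. The 0.66 velocity factor and the run lengths are assumptions chosen for illustration, not measurements of any specific cable.

```python
# Rough one-way propagation delay of a coax run, to show why different
# cable lengths have to be timed out at each camera.
# The 0.66 velocity factor is an assumption (typical for RG-59-type coax).

C = 299_792_458            # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.66     # fraction of c at which the signal travels in coax

def cable_delay_ns(length_m: float) -> float:
    """Propagation delay of a cable run, in nanoseconds."""
    return length_m / (VELOCITY_FACTOR * C) * 1e9

for length in (10, 50, 150):                       # run lengths in meters
    print(f"{length:>3} m: ~{cable_delay_ns(length):.0f} ns")
```

Whether a few hundred nanoseconds matters depends on the format and how tightly the sources need to be phased, which is why cameras with genlock inputs let you trim their sync timing to compensate.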

Incidentally, another application for which genlock is used is to ensure that both sensors on two-camera 3D rigs fire at the same time. A common misconception is that you can simply link the two cameras together, feeding the composite output of one camera into the genlock input on the second. While this works in theory, the risk of miscalibrating the receiving camera means it is a much safer option to use a separate sync box that is wired to both cameras with cables of identical length.

What about wireless timecode?

A popular, if dubious, way to sync multiple devices in the field is to continuously feed timecode into them using a wireless audio transmitter and receiver. Much as black burst is a type of composite signal, timecode such as SMPTE 12M LTC can be passed as an analog audio signal, and produces a well-known "telemetry" noise if played through speakers. Since we live in the Age of the App, there are also many app-driven solutions that promise timecode sync over Wi-Fi. This can work, but the inherent unreliability of wireless technologies comes into play; inexpensive wireless systems may even digitally process the signals, introducing delay. Also, when audio or video is being sent wirelessly, a loss of signal will be noticed right away―people on set are watching that. But if the timecode signal drops, the camera just reverts to its internal clock, and the problem may not get noticed until it is too late. Plus, using audio hardware, not to mention a Wi-Fi network, rather than dedicated timecode hardware adds variables that may compound troubleshooting when things do―as they inevitably will―go wrong.
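As a side note on how timecode rides on an audio channel: LTC is simply a stream of bits, biphase-mark encoded into a tone at audio-friendly rates, which is why it can be recorded on a spare audio track or sent through a wireless audio link. The toy sketch below shows only the biphase-mark encoding idea; it is not a real SMPTE 12M frame, the 80-bit layout is omitted, and the bit pattern is an arbitrary placeholder.

```python
# A toy illustration of why LTC can live on an audio track: bits are
# biphase-mark encoded into a square-ish waveform at audio-friendly rates.
# NOT a full SMPTE 12M frame; the bit pattern below is an arbitrary placeholder.

import numpy as np

SAMPLE_RATE = 48_000          # audio sample rate, Hz
FPS = 30                      # assumed frame rate
BIT_RATE = 80 * FPS           # LTC carries 80 bits per frame -> 2,400 bits/s

def biphase_mark(bits, samples_per_bit):
    """Level flips at every bit boundary, and again mid-bit for a 1."""
    level, out = 1.0, []
    for b in bits:
        level = -level                         # transition at the bit boundary
        half = samples_per_bit // 2
        out.extend([level] * half)
        if b:
            level = -level                     # extra mid-bit transition for a 1
        out.extend([level] * (samples_per_bit - half))
    return np.array(out, dtype=np.float32)

samples_per_bit = SAMPLE_RATE // BIT_RATE      # 20 samples per bit at 48 kHz
signal = biphase_mark([1, 0, 1, 1, 0, 0, 1, 0], samples_per_bit)
print(signal.shape)                            # (160,) -> 8 bits of "audio"
```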

What about syncing in software?

If the footage is being recorded, it's going to need to be edited anyway, so why not just use an application like Red Giant PluralEyes or Final Cut Pro X's built-in facility to sync audio and video? If the material is sufficiently broken up, or if you are able to cut the clips into segments of less than 30 minutes, this will probably work, at least within a half-frame margin of error.* But for an all-day live event, during which each camera is being recorded separately, this solution would create a lot of extra work. It would be nice, if timecode is available anyway, to be able to use that exclusively when syncing cameras and audio in post.

The ideal solution, when all cameras can't be hard-wired to a reference source as in a TV studio, is to connect each individually to a reliable sync device such as an Ambient Recording Lockit Box or a Sound Devices recorder with sync output. These devices use high-precision, temperature-compensated, voltage-controlled crystal oscillator (TCVCXO) technology that boasts less than one frame of drift per day. For situations in which timecode has to be relayed wirelessly, there is also wireless hardware based on the same TCVCXO technology which, while not as reliable as a hard-wired solution, is the next best thing, since it is optimized to send timecode data rather than natural sound in the frequency range of the human voice.
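It is worth seeing what "less than one frame of drift per day" implies about clock accuracy. The quick calculation below assumes a 29.97 fps frame rate; the answer comes out well under 1 ppm, far tighter than the consumer-grade crystals discussed earlier.

```python
# What clock accuracy does "less than one frame of drift per day" imply?
# Assumes a 29.97 fps frame rate for the calculation.

FRAME_SEC = 1 / 29.97
SECONDS_PER_DAY = 24 * 60 * 60

max_ppm = FRAME_SEC / SECONDS_PER_DAY * 1e6
print(f"Required clock accuracy: about {max_ppm:.2f} ppm")   # ~0.39 ppm
```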

To sum it up, true synchronization requires two things: a reliable timecode source, so that the devices' clocks don't drift apart, and genlock, to make sure the fields or frames are hitting the same beat. In some cases, sync problems can be fixed in post. But this can be time-consuming, and the comparatively small cost of getting it right on set may win out at the end of the day. The bad news is that, with the reduced need for genlock in broadcast, fewer and fewer cameras have sync ports, especially at the prosumer level. Eventually, we may be stuck with Wi-Fi, but, hopefully, by then cameras will feature better internal clocks, so that an hour in camera A will match an hour in camera B―even if their timecodes aren't perfectly aligned.

*A video field in NTSC areas lasts just shy of 1/60 of a second. Meanwhile, audio is typically sampled at 48 kHz or 96 kHz: 48 or 96 thousand samples per second, versus roughly 60 fields per second for video. Unfortunately, most NLEs only let you adjust audio with a precision of one frame or, at best, one field. This means you will never get audio that wasn't recorded in sync to line up precisely without resorting to dedicated tools.
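For a concrete sense of the footnote's point, the few lines below compare the granularity of audio samples with the granularity of a frame-level (or field-level) nudge; the rates are the ones mentioned above.

```python
# Audio-sample resolution vs. frame-level alignment, per the footnote.

FIELD_SEC = 1 / 59.94     # NTSC-rate field
FRAME_SEC = 1 / 29.97     # NTSC-rate frame

for rate in (48_000, 96_000):                  # common audio sample rates, Hz
    print(f"{rate} Hz: {rate * FIELD_SEC:.0f} samples per field, "
          f"{rate * FRAME_SEC:.0f} samples per frame")
# Nudging audio only in whole frames (or fields) can therefore leave an offset
# of hundreds of samples, which is why sample-accurate tools are needed.
```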

26 Comments

Does anybody have any update on this thread?

I would be interested in knowing how to synchronize wireless cameras (the more precise, the better). There is a lot of information on cabled systems, but I was wondering whether wireless solutions maintain that level of synchronization and how the signals are sent.

Hi Salvador - 



Consider the Tentacle Sync Sync E Timecode Generator with Bluetooth (Single Unit), B&H # TETE1.

Key Features:

  • Single Timecode Generator Unit
  • 1 x Sync Software Included (macOS)
  • Bluetooth Control via iOS or Android App
  • Locking 3.5mm Cable Clamp
  • 23.98/24/25/29.97/30 fps TC Rates
  • Master Clock & Jam-Sync Modes
  • 3.5mm Mic Jack & USB Type-C Port
  • Built-In Battery Runs up to 35 Hours
  • <1 Frame/Day Drift Rate
  • Built-In Mic for Reference Sound

     
The compact, lightweight Tentacle Sync E Single Set includes a timecode generator and a downloadable Tentacle Sync Studio software license (macOS). This successor to the Tentacle Sync features Bluetooth setup and monitoring, enabling you to easily view frame-accurate readings on your iOS or Android smartphone. Use this updatable generator to extend Bluetooth-broadcast timecode to virtually any mobile device. The Sync E also features a locking clamp for its 3.5mm cable, ensuring compatibility with a wide array of devices without the need for an adapter.

The Tentacle E functions as a master clock or can jam-sync to an external timecode source. Compatible with pro equipment, the Tentacle outputs SMPTE 12M-standard LTC timecode and supports all SMPTE TC rates. The Tentacle E's integrated, high-quality mic enables you to record both timecode and ambient sound simultaneously.

Each Tentacle E TC generator is powered by a built-in LiPo battery with a runtime lasting up to 35 hours. The battery charges via the unit's USB Type-C interface. A handy industrial-strength hook-and-loop surface is built into the back side of the box for secure mounting to your rig.

What I gathered was to find an SMPTE tone from a website, using the correct frame rate, which in the case of the camera is 24 fps (really 23.976); the point is to make sure you are using the same frame rate. I added this timecode to the track using Logic Pro X. I started the code some bars ahead and kept it running several bars after the song. The song was 5:40, but I made code for future songs that can be over 8 minutes long for worship songs. Then I created a playback file, panning the SMPTE to the LEFT side and the PLAYBACK to the right side. This would then be played on a playback device like a CD player, a cell phone, or a tablet. The left side is input into the camera via the mic input, and the right side goes to the playback amp for the performer to lip-sync to. So the digital tape or hard-disk recording of the video will have the SMPTE starting from 00:00:00:00 and ending at 00:08:00:00, for example. Once I dump the video into Final Cut Pro X, each video should have the same identical timecode to know when the music started, which was 00:00:59:00, or 00:00:00:59, or whatever I set the SMPTE to start at... I'm guessing the software (FCP) will easily align all 3 video clips together because all the timecode would be identical. That is the easiest and cheapest way to sync the video without purchasing a professional recorder. (I have a professional recorder I purchased from you, but it doesn't do SMPTE or any timecode; I was not into video syncing when I purchased it. It just records audio, and that's it. It's not a mixer; it's the Tascam DR-100, I think. Either way, it doesn't sync with SMPTE.)

 

I just need a list of devices that I need to achieve what I'm trying to do. Yes, in the future I do need to record DIALOGUE on a recording device and sync that external audio with the camera(s) audio using SMPTE. I'm assuming the recording device will be a master, which would feed each camera (which has NO sync features) an audio SMPTE tone, either jammed over BNC or via a wired or wireless device. I understand that. So what am I actually looking to buy in order to NOT do it the cheap way described above? I want the right tools to do it the professional way. Do I need a recorder with SMPTE out, plus devices to transmit that timecode into the camera(s), so that when the video is recorded, the audio recorded in the professional mixing device and the audio recorded in the camera have the same identical timecode? Is that how it works? No one has explained it; they want you to buy their products, but no one has ever taken a step-by-step approach to explaining how it works, so I'm confused about what gear I need to buy. If my "cheap" way works, there is no need to buy any additional gear, but I actually WANT to buy some professional gear to reduce editing time when syncing the VIDEO from the camera and the AUDIO for the music video. No one explains how that works; everyone explains how to record AUDIO and how to record VIDEO and sync those together. All I'm saying is that the AUDIO has already been recorded (in my case it's a music video), so there is NO need to record music or dialogue audio! So do I need to put the audio in the professional mixer for playback with the SMPTE code and then send it to the cameras? No one has explained that part; it's all assumed that you're recording dialogue, which I'm NOT trying to do. And if I do record the dialogue or playback music clearly in the camera, FCP X does have a multi-cam feature that will automatically LOCK the video using the waveform, but it only works if the waveforms are identical or if the peaks align together.

 

No one ever explains how to sync the audio that's already in Logic X. The audio by default doesn't have a timecode; it does, but it's a start/stop code from 00:00:00:00, not a linear code, because it's not tape, it's digital. That's why I created a version of the song using the SMPTE .wav file (which really is an MP3 file), with the SMPTE on the left side and the song/playback on the right side. But what if the wave/MP3 SMPTE was damaged or not accurate? I want to use a timecode or sync generator instead of making a file with an SMPTE wave file embedded in it. And I'm not too sure how a MIDI-to-SMPTE converter would help, because Logic has SMPTE but won't generate a tone unless you have some other device.

 

I NEED a workflow for the equipment that I do have: step-by-step instructions, including any additional gear I need to purchase. Yes, budget is an issue, but I need the best quality for the least amount of money, just NOT a cheap product.

I shoot and mix audio to music where the performers are either lip-syncing or being recorded live, but singing to a playback audio recording of the song.  That way all takes are in time with each other.  The problem is, the camera guys always want to run with time of day SMPTE (so that it is continuous) instead of taking time code from the audio playback.  Is there a way to synchronize the audio playback with SMPTE frame accuracy WITHOUT following the time code POSITION?  In other words, I want the audio playback to start at the same moment in the frame, but not care about WHICH frame.  So, jam syncing without regard to time position, just so that the RELATIVE frames are lined up.  Has anyone done this?  Does anyone know how?

If I'm on a shoot with 4 cameras for example, and I'm using an audio recorder that sends and receives timecode (like the Zoom F8), would it be best to send timecode from the F8 to Camera 1, and then Genlock Cameras 2-4 to Camera 1? 

We are using a Sony EX3 camera, a Sony PMW-150 camera, and a Sound Devices 702T audio recorder.

We use the 702T to jam the timecode into the cameras. We edit with Adobe Premiere Pro CS6, and after syncing the video and audio there is always a 1-frame difference between video and audio.

Recently we purchased 2 NanoLockit devices to sync everything. That should solve the problem, but the problem is still the same.

Is there anyone who knows how to solve this problem!? 

Is it possible to genlock using an SDI splitter like this one?

https://www.bhphotovideo.com/c/product/766064-REG/Blackmagic_Design_CONVMSDIDA_Mini_Converter_SDI_Distribution.html

If I'm doing a 3-cam interview with 1 x C300 and 2 x FS7s (with the XDCA-FS7 Extension Unit), the C300 has a SYNC OUT. Could this go to the SDI IN on the splitter, and then could I connect the outs on the splitter to the genlock inputs on the FS7s?



And can you also use another SDI splitter to split a timecode signal from one camera to the others?

 

Hi Lars - 

Sounds like a plan on both proposals, Lars, but we wouldn't mind reviewing your workflow for you:

Please contact us via e-mail if you have additional questions:  [email protected]

 

 

What was the outcome of Lars's question? I'm wondering about the same thing. Thanks!

No, you are incorrect. See my previous comment.

No, you cannot use a digital/SDI DA for blackburst or tri-level sync because those are analog signals. You would need an analog video DA.



Timecode (in this case linear timecode, a.k.a. LTC or SMPTE) is an analog audio signal. The cheapest way would be to just daisy-chain the timecode from the audio recorder to cam A, B, C, etc., using a BNC T at each junction. A Lockit sync box would be better, of course.

I need to run genlock cables in a large conference room. The video will be projected, so sync between the projection and the speaker is critical. To eliminate latency, I've been asked to genlock all equipment. I have 4 cameras to lock, and there is no room in the budget for purchasing SDI/BNC cables. Our local rental house shut down and liquidated all assets. Can genlock be run over SURVEILLANCE-camera BNC? 150' of surveillance-camera BNC is $15.00, compared to SDI/BNC at $100.00+.

Thanks in Advance

 

Hi Neo - 

With high-quality RG-59 cable, quality BNC connectors, and careful, accurate crimping, you will max out at about 300'. RG-6 cable is only a bit heavier and thicker but offers much better signal-loss characteristics.

Yes, you can genlock with surveillance coax, as long as it has good connections. Black burst and tri-level sync are analog video signals, so they do not have requirements as strict, or cable-run limits as short, as the HD-SDI signals Mark S. is incorrectly referencing.

Quick question: Could you suggest a solution for shutter sync for witness cameras, used in VFX to help with matchmoving? There's no need for sound in this application, but it is very useful to have the shutters fire at the same time on multiple cameras (even if the primary recording unit isn't on the same lock). Are there any cheaper cameras available that you think this would be possible with? I've seen some info about Atomos systems with the Sony DSLRs, but it would be great to hear your thoughts on the options available. Wireless would be handy, despite its limitations: the takes are short and reset frequently.

 

I need to genlock signals running between several buildings.  For the most distant one (a couple blocks away), I'm considering using a sync generator locked to GPS (the master sync generator is also locked to GPS).  Can I expect that to deliver a correct lock between them?

For this type of question, your best bet would really be to consult with a studio engineer. Though, if you send us an email, a couple of our agents in the Video Department would be happy to see what they can come up with.  [email protected]

Sorry, newbie question.

So, if I use genlock with multiple video cameras, will they fire their shutters at EXACTLY the same moment?

I want to create a multi-camera rig for "bullet time" effects to capture high-speed motion with no jitter when switching cameras. Any advice on low-cost sync generator and cameras?

 

Hi Jimmy -

Genlocking would keep multiple cameras firing at the same time; however, timecode is used to sync audio and video. Genlock is also used to ensure that both sensors on two-camera 3D rigs fire at the same time. During long cable runs combining an audio mixer and cameras, there can be a frame or two of delay. Timecode allows you to then sync the audio with the video so that the cable delays are no longer an issue. The combination of both provides more reliable and flexible synchronization.

The Lynx Technik AG yellobrik SPG 1707 HD/SD Sync Pulse Generator with Genlock is a resilient and versatile sync generator that delivers simultaneous sources of HDTV tri-level and SDTV bi-level sync. The module provides three SD sync outputs, three HD sync outputs, and a separate audio sync output that can be switched between 48 kHz word clock and Digital Audio Reference Signal (DARS).

The genlock capability allows cross-locking to any sync signal, regardless of the output sync format selected. The unit is suitable when you need a source of HDTV tri-level sync locked to an SDTV black reference, or if you need to lock across standards. For example, you can feed in PAL sync and get frequency-locked NTSC sync and HDTV tri-level sync outputs.

The HDTV tri-level sync outputs can be set to any of the available HDTV standards, and the SDTV bi-level outputs can be set for NTSC, PAL, or PAL M/N. The SDTV, HDTV, and audio sync signals are all frequency-locked together, or locked to the reference input, if connected. The SDTV sync outputs can be color bars, black burst, or sync only, with a selectable 7.5 IRE pedestal for NTSC standards and adjustable burst phase in 8 increments.

Please contact us via e-mail if you have additional questions:  [email protected]

I have a question from a post point of view.

When I edit a multicamera shoot, my goal is to make sure that there is a single common sync point for all my tracks. This can be done manually or through audio waveform recognition. Once I have all the tracks synced, my sequence plays perfectly and all tracks remain in sync. 

After reading your article I would expect that all my cameras would drift out of sync without using hardware. Why does my workflow not fall out of sync?



The reason I am asking is that I am putting together a tutorial on using Premiere Pro's new multicam sync that uses labels to place "start and stop" cameras on a single track. This new feature (CC 2015.2) does not work with audio sync and must use timecode. So I arrived at B&H to find a jam-sync-type device to recommend to my viewers, to give them the ability to use this new feature with the kind of cameras they use, which is mostly DSLRs and Blackmagic cameras.



Thoughts?

Hi Colin Smith-

To be clear, the amount of drift is very slight. I think around one frame per 30 minutes is typical for the timing devices prosumer cameras use these days. So if each segment you place in the timeline is less than 30 minutes, drift won't be a problem. Additionally, if you are using waveform sync software (and not relying on timecode), you can always break longer clips up into segments of less than 30 minutes so everything aligns properly. The takeaway for the average user is that drift is probably never going to be something to worry about. Having said that, it is important to understand that simply jam-syncing timecode at the start will not cause cameras to remain in sync hours later, which may be an issue for certain types of productions. Hope this helps clarify.

Hello Mr. McDougal,

Thank you for such an informative article! I am a founder of a high-tech startup in the Boston area. Some of the technologies we've developed have to do with wireless, high-precision time and frequency synchronization. The accuracy is on the order of nanoseconds for time and a few ppb (parts per billion) for frequency. Our technology also enables much improved throughput for wireless links, which is very relevant to the video, camera, and drone industries. I am very much interested in speaking with you privately, as I find you to be a highly knowledgeable expert in these fields. If you don't mind, please email me back via your personal email so I can tell you more about our company and technology. Looking forward to the follow-up. Thank you.

Good read. I just want to understand: if I genlock the cameras so they are all firing at the exact same time, every time, what do I need timecode for? They are already running together...

It seems like genlock is the better way to go, isn't it? Apart from the fact that the cameras have to stay connected all the time (but that is also preferable with timecode), what's to gain here?

Eli

Genlocking would keep multiple cameras firing at the same time; however, timecode is used to sync audio and video. During long cable runs combining an audio mixer and cameras, there can be a frame or two of delay. Timecode allows you to then sync the audio with the video so that the cable delays are no longer an issue. The combination of both provides more reliable and flexible synchronization.

Andrea O. wrote:

Genlocking would keep multiple cameras firing at the same time; however, timecode is used to sync audio and video. During long cable runs combining an audio mixer and cameras, there can be a frame or two of delay. Timecode allows you to then sync the audio with the video so that the cable delays are no longer an issue. The combination of both provides more reliable and flexible synchronization.

Long cable runs do NOT affect genlock because you adjust the sync timing of the cameras to compensate. Long cable runs do NOT delay audio either. I'm not sure where you get the idea of long cable runs causing delay.