Once upon a time, compression was a dirty word, but the reality is that compression occurs throughout the image-taking process. Your lens compresses the image, unless you are shooting 1:1 macro. You could consider the Bayer filter on your sensor a form of compression (related to the color sampling ratio) and, of course, there is the codec (compressor/decompressor), format, and bit rate you choose. All this happens before your images even get to the recording media. So, compression is hidden throughout your imaging chain and, by itself, compression isn’t a bad thing. However, it’s important to manage your compression as much as possible.
Perspective
Let’s travel back in time to around the year 2000. There was film and there was standard-def video. Most video origination formats were analog, with the standard being Beta SP. Even if you shot film, to use an NLE you would transfer to standard-def video, digitize, and edit. Compression wasn’t something you thought about with analog until you got to the NLE or distribution stage. The early NLEs were very limited, and uncompressed standard-def video was generally considered to require a data rate of about 20 MB/s when digitized to retain the quality of the image. Most NLEs couldn’t capture at that rate, and storage space was ridiculously expensive, so the general rule of thumb was to digitize at lower quality, do your edit, then go to an “on-line room” to finish at the highest quality.
Let us now jump forward in time. We are capturing 4K video with four times the frame size of HD, at 100 Mb/s (about 12.5 MB/s). Depending on the camera and its processing capabilities, you can capture 4K at a lower or higher quality/data rate. Still, there is quite a lot of compression going on. Which leads us to...
Recording Format
If you want to shoot 4K on an SD card, you are going to need compression; that’s all there is to it. Take the Panasonic GH5. With the 2.0 firmware update, you can record internally onto a V60- or V90-rated SD card in 4:2:2 10-bit 4K, either 4096 x 2160 (DCI) or 3840 x 2160 (UHD). The data rate there is 400 Mb/s, which sounds like a lot, but remember: 400 Mb/s is only 50 MB/s (megabytes per second), or about 20 seconds per gigabyte, roughly 3GB per minute. If you want to record uncompressed, then you are going to need a high-end camera and/or an external recorder such as those from Convergent Design, Atomos, and Video Devices. You could also get device-specific recorders, such as those from Codex, Sony, or ARRI, to go along with your high-end camera. The general rule of thumb is to capture your images at the highest quality possible, to give yourself as much room to tweak as possible. This is, of course, limited by your camera’s, your media’s, and your edit system’s capabilities, and it ultimately becomes a trade-off between transfer time and storage.
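If you want to sanity-check numbers like these for your own format, the arithmetic is simple enough to script. Here is a minimal Python sketch using the bit rates mentioned above; the 64GB card size is just an example:

```python
def mbps_to_MBps(mbps):
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8

def minutes_on_card(card_gb, mbps):
    """Approximate recording time, in minutes, on a card of card_gb gigabytes."""
    seconds = (card_gb * 1000) / mbps_to_MBps(mbps)  # treating 1 GB as 1,000 MB
    return seconds / 60

for rate in (100, 400):
    print(f"{rate} Mb/s = {mbps_to_MBps(rate):.1f} MB/s, "
          f"about {minutes_on_card(64, rate):.0f} min on a 64GB card")
# 100 Mb/s = 12.5 MB/s, about 85 min on a 64GB card
# 400 Mb/s = 50.0 MB/s, about 21 min on a 64GB card
```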
Encoding Schemes – GOP (Group of Pictures)
Compression is an extremely complex subject that would require far more time than we want to spend here so, for the purposes of this article, let us just agree that there are two kinds of compression: intra-frame and inter-frame.
Intra-frame means that all the compression is done within that single frame and generates what is sometimes referred to as an I-frame. Inter-frame refers to compression that takes place across two or more frames, where the encoding scheme only keeps the information that changes between frames. This is sometimes referred to as a P-frame (predictive) or B-frame (bi-directional). With inter-frame encoding, you end up with a GOP that starts and ends with an intra-frame compressed frame, with a variety of inter-frame compressed frames between the I-frames. This saves a lot of space but, depending on your settings, can lead to quality issues.
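To make the structure concrete, here is a toy Python sketch that lays out the frame types in a simple closed GOP. The pattern length and number of B-frames are arbitrary example values; real encoders choose (and may vary) these based on the content and your settings:

```python
def gop_pattern(num_anchors=4, b_frames=2):
    """Toy closed-GOP layout: an I-frame, then P-frame anchors with B-frames
    between them. Real encoders also reorder frames for transmission, since
    a B-frame can't be decoded until its reference frames have arrived."""
    frames = ["I"]
    for _ in range(num_anchors):
        frames += ["B"] * b_frames + ["P"]
    return frames

print("".join(gop_pattern()))  # IBBPBBPBBPBBP ... then the next GOP's I-frame
```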
Encoding versus Decoding Power
While it may seem obvious that decoding must happen in real time, lest you suffer dropped frames, recording requires encoding in real time as well, which leads to a trade-off between compression quality and file size. Your file is being encoded in real time, at anywhere from 24 frames per second up to 60 fps in 4K. That’s either a lot of data or a lot of compression. Your camera may give you no choice, but chances are it will. So, which setting looks better? Who can say? Well, you can: just go shoot some tests with your camera.
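To see why that is "a lot of data," consider what an uncompressed UHD stream would look like. A rough back-of-the-envelope sketch, assuming 10-bit 4:2:2 sampling (about 20 bits per pixel on average); your format's exact numbers will differ:

```python
# Rough uncompressed data rate for 3840 x 2160 video.
# Assumes 10-bit 4:2:2 (10 bits luma + ~10 bits of shared chroma per pixel).
width, height = 3840, 2160
bits_per_pixel = 20

for fps in (24, 60):
    MBps = width * height * bits_per_pixel * fps / 8 / 1e6
    ratio = MBps / 12.5  # versus a 100 Mb/s (12.5 MB/s) camera file
    print(f"{fps} fps uncompressed: ~{MBps:,.0f} MB/s "
          f"(~{ratio:.0f}:1 compression to hit 100 Mb/s)")
# 24 fps uncompressed: ~498 MB/s (~40:1 compression to hit 100 Mb/s)
# 60 fps uncompressed: ~1,244 MB/s (~100:1 compression to hit 100 Mb/s)
```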
Intra-frame codecs are popular because, although they have a higher data rate than inter-frame codecs, they do one thing very well: on playback, they require far less computing power to decode. This becomes important in the edit, where you may be pushing your computer to its limit. If you are shooting an inter-frame codec (H.264 in an MP4 wrapper, or AVCHD, for example), you really don’t want to edit that codec. Think about how hard your computer must work to decode each frame. Plus, when you make an edit, chances are the frame you are cutting on will depend on information found in the surrounding frames, frames that, once you have made the cut, are no longer there. This will require your edit system to do some heavy lifting, which may cause dropped frames or slowdowns. This highlights the advantage of shooting an all-intra-frame codec, which doesn’t rely on information from adjacent frames. Don’t worry, though; if you do shoot an inter-frame codec, you can transcode your footage to an intra-frame codec, either by using stand-alone software before importing or by having your NLE transcode for you. Popular intermediate codecs are Apple’s ProRes and Avid’s DNxHD/DNxHR.
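As one example of the stand-alone route, here is a sketch that shells out to ffmpeg (a free tool, assumed to be installed) to turn an inter-frame H.264 clip into intra-frame ProRes 422 HQ. The file names are placeholders, and your NLE's own transcode-on-import may be the simpler path:

```python
import subprocess

# Transcode an inter-frame H.264/MP4 clip to intra-frame ProRes 422 HQ.
# "input.mp4" and "output.mov" are placeholder names.
subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",
    "-c:v", "prores_ks",   # ffmpeg's ProRes encoder
    "-profile:v", "3",     # profile 3 = ProRes 422 HQ
    "-c:a", "pcm_s16le",   # uncompressed PCM audio in the QuickTime wrapper
    "output.mov",
], check=True)
```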
Deliverance
Compression is going to happen. The key is to control it so you get the best image quality versus the smallest storage space, while leaving yourself the most options in post. If you compress your footage and then edit it, you will undoubtedly introduce artifacts that remain in the footage and degrade it further as you manipulate it. Your edit system may offer you the opportunity to work with proxy footage, which can speed up your edit system’s response time dramatically, but proxy footage is going to be highly compressed and likely lacking in fine detail.
So, when you are finished editing and have locked picture, you should go back and conform your high-quality original material or transcoded masters to your edit. Once you have your high-quality finished project, you will probably notice that it is extremely large, unwieldy, and difficult for most devices to play back smoothly. This is where you finally get to take control of compression. You can compress right from the timeline, but my preference is to render out the timeline at the highest quality possible and then QC that file (yes, that means watching the whole video, looking for artifacts and errors). Then, I take that file and apply compression. This way, I have a clean master file, and I can adjust my compression settings over and over until I get the file size and quality I am looking for. While it is possible to import your master file back into your NLE, I prefer a separate compression program, such as Sorenson Squeeze, to do my compression.
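Sorenson Squeeze is a GUI application, but the "adjust over and over" loop is easy to illustrate in code. Here is a sketch, again assuming ffmpeg is installed, that renders several H.264 delivery candidates from one clean master ("master.mov" and the bit rates are placeholders), so you can compare file size against quality and keep the winner:

```python
import subprocess

# Render H.264 delivery candidates at several bit rates from one clean master.
for mbps in (8, 12, 20):
    subprocess.run([
        "ffmpeg",
        "-i", "master.mov",
        "-c:v", "libx264",
        "-b:v", f"{mbps}M",      # target video bit rate
        "-preset", "slow",       # slower preset = better quality per bit
        "-c:a", "aac", "-b:a", "192k",
        f"delivery_{mbps}mbps.mp4",
    ], check=True)
```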
Test, Test, Test
You can read and share all the articles you want. “Book learning” is a valuable tool and asset, one that I hope you take advantage of during your entire career. However, just going by what someone else says or writes, without confirming it yourself, can get you into trouble, especially with something technical. Articles that help you understand, show pictures, and outline methodologies are all important things to read, but the deeper understanding is going to come from testing what you have read about, and applying it to your gear and your style of filmmaking. Do not just accept as truth what someone with a bunch of letters after their name says.
If you have your own recipe for compression settings for your output (depending on the venue), please feel free to share it below.