“The skin tones are too magenta and too saturated compared to the reference stills.” The dreaded email from a distant client. Complete with a screen grab as proof. “And we are viewing on a calibrated screen”.
We slave away in the colour grade, matching a video to a photoshopped reference still, and yet that is the feedback we sometimes get. The problem is, we don’t know what screen the person complaining is looking at, how it is set up and profiled, or what software they are using to view the still and the video.
The chances are that the critic has a good eye, and that they have worked hard to get a good calibrated screen for making those judgements. But in fact, they may be making the comparison harder for themselves than if they were just looking at a regular screen.
If you want to know why this happens and what to do about it, read on.
At the Post Factory in London, we’ve developed ways over the years to really be able to manipulate video images (as long as they come from good sources, like RAW from a Red Epic) to match a highly photoshopped reference still. A few years ago you wouldn’t even have tried; it was just not worth it. Stills were stills and video was inferior video.
But now with good tools and good eyes, we can do a great job. But we are let down by a lot of vagaries in the delivery of those videos to the consumer. These are some ways to fight those.
The bad old days of video
Traditionally, video cameras captured their images fully encoded into the colour space that matched the destination viewing platform: Rec601 for old standard definition, then Rec709 for HD. So no-one in post really had to think about differing colour spaces.
You could plug a camera into a monitor and it was WYSIWYG. That’s how live TV has to work. A lot of why people preferred one camera to another was down to how the engineers had set up the onboard matrixing (combining the RGB chip source colours into the encoded signal). People liked how Sony did skin, or how the Varicam looked more “filmic”.
And sure, we could take those images and push them more in post, but we were still starting from a Rec709 source and delivering to a Rec709 deliverable. We used external monitors calibrated for that. There was no other video standard for them.
Earlier CRT computer monitors were so similar to TVs that the computer web delivery standard, sRGB, was not a long way off from Rec709. So much so that even today, if you make a video for television you probably will not think about making a different deliverable for web computer viewing. You will just encode it into a more compressed format like H264.
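To see just how close the two standards are, here is a quick sketch (in Python, purely for illustration) of the published sRGB and Rec709 encoding curves. They track each other closely through the mids and highlights but diverge noticeably in the shadows:

```python
def srgb_encode(lin):
    """sRGB (IEC 61966-2-1) linear-to-code encoding."""
    if lin <= 0.0031308:
        return 12.92 * lin
    return 1.055 * lin ** (1 / 2.4) - 0.055

def rec709_encode(lin):
    """Rec709 camera encoding curve (ITU-R BT.709 OETF)."""
    if lin < 0.018:
        return 4.5 * lin
    return 1.099 * lin ** 0.45 - 0.099

# Close through the mids and highlights, clearly different in the shadows.
for lin in (0.005, 0.05, 0.18, 0.5, 0.9):
    print(f"linear {lin:5.3f} -> sRGB {srgb_encode(lin):.3f}, Rec709 {rec709_encode(lin):.3f}")
```

The shadow divergence is why the two are “not a long way off” rather than identical, and why most people get away with a single deliverable.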
The problem came when you took a non-video source, film, which has a much higher dynamic range, and tried to squeeze it into this smaller video container. At the telecine/scanning stage, you’d have to make decisions about how to compress the image, reduce contrast, or lose information on the way from film to video.
Rather than make those permanent “lossy” decisions at that stage (a very costly, by-the-hour stage), a way was developed to keep a lot of that higher dynamic range within the smaller container of the video. If you mapped the information differently, pushing up the shadows, pulling down the highlights and packing the information more closely into the middle of the range that video could hold, you had more to play with later in the colour grade. Then you could reverse this curve in a more controlled way, choosing more precisely how to stretch out the shadows, roll off the highlights and so on.
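A minimal sketch of the idea, using a made-up log curve rather than any real camera or scanner standard: the encode lifts the shadows into the middle of the container, and the exact inverse lets the grade unpack them later:

```python
import math

def lin_to_log(lin, a=100.0):
    """Illustrative log encode: push shadows up, pull highlights down,
    packing the range into the 0-1 container (a is a made-up contrast constant)."""
    return math.log10(lin * a + 1) / math.log10(a + 1)

def log_to_lin(log, a=100.0):
    """The exact inverse, applied later in the grade in a controlled way."""
    return ((a + 1) ** log - 1) / a

deep_shadow = 0.01
print(lin_to_log(deep_shadow))               # lifted well up the scale
print(log_to_lin(lin_to_log(deep_shadow)))   # and recoverable losslessly
```

The point is that the curve itself loses nothing; the lossy decisions are deferred to the grade, where they can be made with taste.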
So there used to be two distinct ways of colour grading in the TV/Video world. The colour correction that was simply fixing and matching Video Colour Space sources (Rec601 in these mostly Standard Definition days), and then the more expensive “film grade” suites that had heavy iron systems and great colourists who mostly handled telecine film work.
It was a Big Deal to give your video project a Film Grade. This sort of budget only went on high end shows.
Incoming Colour Space Today
The cameras we use now involve a lot of different standards and colour spaces. Some still go straight to a video curve (e.g. if you use a DSLR in video mode and don’t load a custom profile). Some, like the Red Epic, only shoot RAW.
We like RAW, but there are differing flavours between Epics, Alexas and Sonys. People often shoot encoded (i.e. non-RAW) video but with a log curve to pack the extended range into the smaller container of the video, and these all vary as well: S-Log 2 and 3 for Sony cameras, RedLogFilm or RedSpace3/Redgamma3 etc. for Red cameras, Log-C for Alexas, three different log curves for the Blackmagic cameras (so far), and so on.
This log shooting emulates the way the film scanners brought the extended dynamic range into the smaller container of the video file.
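For the curious, this is roughly what the classic Cineon 10-bit film-scan mapping looks like (curves such as RedLogFilm are modelled on it). The constants are the published Cineon ones; the code itself is an illustrative sketch:

```python
import math

BLACK, WHITE, GAIN = 95, 685, 300          # published Cineon constants (10 bit)
OFFSET = 10 ** ((BLACK - WHITE) / GAIN)    # ~0.0108, the linear value at code 95

def cineon_to_linear(cv):
    """10-bit Cineon-style code value -> scene linear (1.0 at code 685)."""
    return (10 ** ((cv - WHITE) / GAIN) - OFFSET) / (1 - OFFSET)

def linear_to_cineon(lin):
    return WHITE + GAIN * math.log10(lin * (1 - OFFSET) + OFFSET)

print(cineon_to_linear(95))     # 0.0: scanner black
print(cineon_to_linear(685))    # 1.0: 90% white
print(cineon_to_linear(1023))   # well above 1.0: highlight headroom kept in the file
```

Note how the top of the 10-bit range decodes to many times “white”: that is the film highlight headroom surviving inside the video container.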
Deliverable Colour Space
There are slightly fewer options here:
One deliverable for us, when making DCPs for cinema, is XYZ colour space, the standard set for DCI Digital Cinema, which has a greater colour range than Rec709. But many projects stay within Rec709 to save having to make different deliverables. A very good Rec709 to XYZ transform is achievable. If you have graded in XYZ and used the full colour range available, going the other way to Rec709 may involve some limiting.
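As a rough illustration of what that transform involves (a simplified sketch, not a mastering-grade pipeline: we assume a 2.4 display gamma, the published Rec709 primaries matrix and the DCI 48 cd/m2 white, with no gamut mapping):

```python
M_709_TO_XYZ = [                 # published Rec709/sRGB primaries, D65 white
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rec709_to_dci_xyz(r, g, b):
    lin = [c ** 2.4 for c in (r, g, b)]                     # assumed display gamma
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M_709_TO_XYZ]
    return [(v * 48.0 / 52.37) ** (1 / 2.6) for v in xyz]   # DCI 2.6 gamma encode

# Rec709 white lands just below full code in the DCP container.
print(rec709_to_dci_xyz(1.0, 1.0, 1.0))
```

Because XYZ is bigger than Rec709, every legal Rec709 value fits; it is only the reverse journey that can force limiting.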
Video/Consumer Computer Colour Space
All HD video is currently made in the context of either the Rec709 (TV broadcast) spec or sRGB (computer monitor delivery). These share the same primaries and white point, and are similar enough (although there is a slight gamma difference in the shadows) that it is common for there to be no differentiation in deliverables for the two platforms. Mostly people finish in Rec709 for HD video and do not make any alterations for the sRGB web delivery.
The majority of computer monitors in the world, i.e. most of the common usage of video by consumers, are designed and profiled for sRGB (but of course they will vary from the standard by various degrees).
At the Post Factory, we work very hard to maintain a colour critical workflow previously thought unachievable in the moving image world. We often have to match a product video to a reference still image.
And this is where a further issue arises when we work with photographers moving into video: colour space.
Photographic Colour Space
In the photographic world, where critical decisions are made with an eye on CMYK print delivery, it is fairly standard to profile monitors for Adobe RGB. This allows for the fact that CMYK printers can reproduce more green and cyan than could historically be displayed on standard monitors (although this is not really the case any more with newer screen technologies). RGB stills used in a trusted environment that respects and understands the profiling (such as Adobe Photoshop) will display correctly, and Photoshop can then transform the colour space into sRGB when JPEGs are prepared for web delivery rather than print.
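For illustration, the maths behind that Photoshop transform looks roughly like this, using the published Adobe RGB and sRGB matrices (both D65). Note how a pure Adobe RGB green lands outside sRGB, which is exactly the green/magenta territory where mismatches show up:

```python
M_ADOBE_TO_XYZ = [               # published Adobe RGB (1998) -> XYZ, D65
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
]
M_XYZ_TO_SRGB = [                # published XYZ -> sRGB, D65
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def adobe_rgb_to_srgb_linear(r, g, b):
    """Adobe RGB code values -> linear sRGB, unclipped so out-of-gamut shows."""
    lin = [c ** 2.19921875 for c in (r, g, b)]   # Adobe RGB gamma (563/256)
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M_ADOBE_TO_XYZ]
    return [sum(m * c for m, c in zip(row, xyz)) for row in M_XYZ_TO_SRGB]

def srgb_encode(v):
    v = min(1.0, max(0.0, v))                    # clip into gamut
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def adobe_rgb_to_srgb(r, g, b):
    return [srgb_encode(v) for v in adobe_rgb_to_srgb_linear(r, g, b)]

# Pure Adobe RGB green is outside sRGB: its red channel goes negative.
print(adobe_rgb_to_srgb_linear(0.0, 1.0, 0.0))
print(adobe_rgb_to_srgb(1.0, 1.0, 1.0))   # white round-trips cleanly (same D65)
```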
The problem can arise when comparing a video calibrated for sRGB on a screen calibrated for Adobe RGB. Whether the video will be displayed correctly (especially in the Apple Mac environment) is very dependent on the particular version of the operating system, the particular version of QuickTime, whether Quick Look is being used rather than QuickTime Player, and so on.
So we will often match a video’s overall tonality to within a 1% margin within our sRGB environment, only to have an external art director note differences on the green and magenta axis.
When this happens, we know there is an issue somewhere in their viewing environment: the video player, the still viewer, the ICC profile, the monitor, or any combination of these.
In fact there are so many variables that we can only recommend NOT judging videos for critical colour matching on an Adobe RGB calibrated platform.
If you try to judge a video against a reference still in an Adobe RGB environment, the chances are very high that one or the other will be incorrectly profiled and thus differences along the green/magenta axis will be apparent.
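A sketch of why the shift lands on that axis: if sRGB-encoded values are displayed unmanaged on an Adobe RGB panel, the panel applies its wider green primary to the raw code values. Assuming a simple 2.2 panel gamma for illustration, and the published D65 matrices:

```python
M_SRGB_TO_XYZ = [                # published sRGB -> XYZ, D65
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
]
M_ADOBE_TO_XYZ = [               # published Adobe RGB (1998) -> XYZ, D65
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
]

def xy_chromaticity(code_rgb, matrix, gamma=2.2):
    """Apply a panel's primaries to raw code values; return CIE xy."""
    lin = [c ** gamma for c in code_rgb]
    x, y, z = [sum(m * c for m, c in zip(row, lin)) for row in matrix]
    s = x + y + z
    return x / s, y / s

green = (0.2, 0.8, 0.2)                              # a saturated green pixel
intended = xy_chromaticity(green, M_SRGB_TO_XYZ)     # what the grader signed off
displayed = xy_chromaticity(green, M_ADOBE_TO_XYZ)   # same codes, unmanaged panel
print(intended, displayed)   # displayed y is markedly higher: visibly greener
```

Neutrals survive (both spaces share the D65 white point), which is what makes the fault so confusing: greys match while saturated colours drift green.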
The correct way to make a colour critical judgement of a video against a reference still is:
1. Have a correctly calibrated monitor profiled for sRGB (but at a higher cd/m2 brightness target, as discussed below).
2. Transform the reference still image in Adobe Photoshop or similar from Adobe RGB to sRGB.
3. Use a trusted video player for the particular codec you are using (e.g. QuickTime Player 7 for ProRes).
The current standards for both Rec709 and sRGB were designed in the age of CRT screen technology and presumed a maximum brightness of 80 cd/m2. However, widespread adoption of LCD, LED and OLED technology has meant that in practice the vast majority of consumers run their screens at much higher brightness levels. We prefer to reference at a higher cd/m2 to allow for this. We’ve surveyed the TVs and computer screens used by our clients and set our levels to an average of those. It is not a standard, but it at least more accurately represents what is going to happen to the image “out there”. This is vital in video judgement as (discussed below) the brightness setting affects colour perception more in video than in an RGB still.
Many monitors designed for colour critical photographic applications are not designed to be calibrated accurately at this level of brightness. We have found through testing that the most reliable way of predicting a good consumer sRGB experience is to take a well made consumer monitor (such as an Apple Cinema Display with a Samsung panel), which is fairly accurately targeted for sRGB in manufacture with no corrective profile, and calibrate that for the minor fluctuations from the sRGB standard.
Our workflow is to properly transform incoming Adobe RGB profiled images to sRGB, colour match within a calibrated sRGB environment (but at a higher cd/m2 than standard), and create sequential RGB TIFF sequences. However, these TIFFs are rarely practical for use outside a facility, so video codecs are commonly used for intermediate professional delivery (e.g. ProRes or DNxHD) and then highly compressed codecs (H264 or H265) for consumer delivery. This can lead to other issues:
YUV vs RGB
Video does not currently get delivered to the consumer in an RGB format. All video delivery codecs are based on the historical separation of luminance and colour for compression’s sake (originally analogue YUV, now digital Y′CbCr). The upshot is that a screen’s brightness and gamma settings can change the perception of colour saturation far more egregiously than for an RGB still. So correct calibration is essential, bearing in mind that very few screens in the real world will be calibrated in any way whatsoever, and that brightness and contrast adjustments made by the user will alter the colour appearance of video content significantly more than stills. (You can take some comfort in the fact that every film or video in the world suffers the same disadvantage when viewed on a consumer’s screen.)
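To make the mechanism concrete, here is the BT.709 luma/chroma split in miniature. Because chroma is stored relative to luma, a viewer’s screen crushing the luma (illustrated here as a flat 20% cut) pushes the reconstructed colours more saturated:

```python
KR, KB = 0.2126, 0.0722    # BT.709 luma coefficients
KG = 1 - KR - KB

def rgb_to_ycbcr(r, g, b):
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

def saturation(r, g, b):   # crude HSV-style saturation measure
    return (max(r, g, b) - min(r, g, b)) / max(r, g, b)

y, cb, cr = rgb_to_ycbcr(0.8, 0.3, 0.3)
dimmed = ycbcr_to_rgb(y * 0.8, cb, cr)   # a screen crushing luma by 20%
print(saturation(0.8, 0.3, 0.3), saturation(*dimmed))  # saturation rises
```

(Real delivery also subsamples and quantises the chroma; this sketch ignores that, but the luma/saturation coupling alone shows why a user’s brightness knob hurts video more than an RGB still.)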
The H264 codecs used can also add a slight colour shift that is not entirely predictable, as it will be further interpreted differently by different playback engines. An H264 file played back in Firefox on a Mac can be markedly different to the same file in Google Chrome (which is closer to spec). Again, operating systems and player versions add many variables to the delivery chain.
8 Bit Compression
All current consumer web delivery video codecs are 8 bit, for compression/bandwidth efficiency. It is therefore not technically possible to reproduce the fine gradation of tones possible in a still. So a gentle gradation (e.g. a spotlight falloff on a plain background) will always exhibit visible steps between quantisation levels (banding). The fact that most consumer displays only have 8 bit display engines means that in practice a higher bit depth video would not currently benefit the consumer. This will hopefully change.
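A quick illustration of the arithmetic: a gentle gradient spanning a narrow tonal range simply does not have many 8-bit codes available to it:

```python
# A gentle spotlight-falloff gradient quantised to 8 vs 10 bits: the same
# 1000-pixel ramp collapses to a handful of distinct 8-bit codes (banding).

def quantise(values, bits):
    scale = (1 << bits) - 1
    return [int(v * scale + 0.5) for v in values]

n = 1000
gradient = [0.10 + 0.04 * i / (n - 1) for i in range(n)]   # linear 0.10 -> 0.14

levels_8 = set(quantise(gradient, 8))
levels_10 = set(quantise(gradient, 10))
print(len(levels_8), len(levels_10))   # 11 distinct codes vs 42 for the same ramp
```

Eleven steps stretched across a thousand pixels is exactly the banding you see in a defocused background; four times as many codes would push the steps below visibility.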
We hope that future adoption of HEVC codecs may mitigate these issues, but agreements on standards and various intellectual property issues seem to ensure we will continue to have such issues for years to come.
For further reading, Wiley’s Digital Color Management: Encoding Solutions is very extensive.
Video Colour Space for photographers