The XYZ -> xy and xy -> XYZ conversions contain divisions, so we need to
handle the edge case of the divisor being zero. This can happen if an ICC
profile is invalid, or if the XYZ color space is used.
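A minimal sketch of the guarded conversion, with hypothetical type and function
names rather than the actual KWin API:

    struct XYZ {
        double X, Y, Z;
    };

    struct xyY {
        double x, y, Y;
    };

    xyY toxyY(const XYZ &c)
    {
        const double sum = c.X + c.Y + c.Z;
        if (sum == 0.0) {
            // pure black (or a broken profile): fall back to an arbitrary
            // but harmless chromaticity with zero luminance
            return xyY{0.3127, 0.3290, 0.0};
        }
        return xyY{c.X / sum, c.Y / sum, c.Y};
    }

    XYZ toXYZ(const xyY &c)
    {
        // the inverse divides by y, which can also be zero
        if (c.y == 0.0) {
            return XYZ{0.0, 0.0, 0.0};
        }
        return XYZ{c.x * c.Y / c.y, c.Y, (1.0 - c.x - c.y) * c.Y / c.y};
    }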
This is done by
- fixing isFuzzyScalingOnly to not check the [3, 3] component, which is always 1
- making the comparison between transfer functions fuzzy, so that small floating point
errors don't prevent two practically identical functions from being optimized out
(sketched after this list)
- switching manual optimizations to use addMatrix instead, which removes the
matrix or replaces it with a multiplier
and the autotest is expanded to test transformations between color descriptions whose
transfer functions and reference luminances are just scaled versions of each other.
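As a rough illustration of the fuzzy comparison, with made-up field names rather
than the real transfer function types:

    #include <cmath>

    struct TransferFunction {
        double gamma;
        double minLuminance;
        double maxLuminance;
    };

    // compare with a tolerance instead of operator==, so tiny floating point
    // differences no longer defeat the identity optimization
    bool fuzzyEqual(const TransferFunction &a, const TransferFunction &b, double epsilon = 1e-6)
    {
        return std::abs(a.gamma - b.gamma) < epsilon
            && std::abs(a.minLuminance - b.minLuminance) < epsilon
            && std::abs(a.maxLuminance - b.maxLuminance) < epsilon;
    }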
Rendering intents describe how to handle mapping between different color spaces:
what to do with out-of-gamut values, and what to do if the white points don't match.
This way, clients can choose which behavior their content should get.
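These broadly correspond to the standard ICC rendering intents; a rough sketch of
what such an enum could look like (illustrative, not necessarily the actual type):

    enum class RenderingIntent {
        // adjust everything so the overall appearance is preserved,
        // compressing the image into the destination gamut
        Perceptual,
        // map the source white point to the destination white point,
        // clip out-of-gamut values
        RelativeColorimetric,
        // keep absolute colorimetry including the white point,
        // clip out-of-gamut values
        AbsoluteColorimetric,
        // prefer vivid colors over exact hue and lightness
        Saturation,
    };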
Using the custom values for minimum and maximum luminance in transfer functions, we can
limit the range of values in the shadow buffer to [0, 1], and with that we can switch
from a floating point buffer back to a normalized format. As gamma 2.2 encoding is much
more efficient at storing color values, this also drops the buffer from 16 bpc down to
10 bpc. Furthermore, this offloads the gamma 2.2 -> PQ conversion to KMS when possible,
and then uses the scanout buffer with gamma 2.2 encoding directly. This way the shadow
buffer gets skipped completely, which improves performance and efficiency a lot.
BUG: 491452
CCBUG: 477223
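A small standalone sketch of why gamma 2.2 encoding gets away with 10 normalized
bits where a linear encoding would not (illustrative numbers, not KWin code):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double maxLuminance = 1000.0; // nits at encoded value 1
        // luminance of the smallest nonzero 10 bit code in each encoding:
        const double linearStep = (1.0 / 1023.0) * maxLuminance;
        const double gammaStep = std::pow(1.0 / 1023.0, 2.2) * maxLuminance;
        std::printf("first step above black: linear %.4f nits, gamma 2.2 %.6f nits\n",
                    linearStep, gammaStep);
        // prints roughly 0.9775 nits vs 0.000239 nits: gamma 2.2 spends far
        // more of the code space on dark values, where precision matters most
        return 0;
    }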
It just tests rec.709 <-> rec.2020 at 0%, 50% and 100% rec.709 luminance, to have
a very simple indicator for when something has gone really wrong while working on
color pipeline changes.
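A standalone sketch of the same sanity check idea (not the actual autotest): the
BT.709 -> BT.2020 conversion matrix from ITU-R BT.2087 has rows that sum to one,
so neutral grays at any luminance level must come out unchanged:

    #include <array>
    #include <cassert>
    #include <cmath>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<Vec3, 3>;

    // BT.709 -> BT.2020, from ITU-R BT.2087
    constexpr Mat3 bt709ToBt2020 = {{
        {0.6274, 0.3293, 0.0433},
        {0.0691, 0.9195, 0.0114},
        {0.0164, 0.0880, 0.8956},
    }};

    Vec3 apply(const Mat3 &m, const Vec3 &v)
    {
        Vec3 out{};
        for (int row = 0; row < 3; row++) {
            out[row] = m[row][0] * v[0] + m[row][1] * v[1] + m[row][2] * v[2];
        }
        return out;
    }

    int main()
    {
        const double levels[] = {0.0, 0.5, 1.0}; // 0%, 50%, 100% luminance
        for (double gray : levels) {
            const Vec3 result = apply(bt709ToBt2020, {gray, gray, gray});
            for (double channel : result) {
                assert(std::abs(channel - gray) < 1e-3);
            }
        }
        return 0;
    }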
This redefines the transfer functions to have a custom luminance at encoded value
zero and a custom luminance at encoded value one, neither of which is tied to the
reference luminance, even for relative transfer functions.
The goal is to be able to use a gamma 2.2 transfer function for the shadow buffer,
with the reference luminance being much lower than the maximum luminance.
For example, on an HDR screen you might have a reference luminance of 600 nits,
while the maximum luminance is 1000 nits. By representing this in gamma 2.2, we
can use far fewer bits per color to store the values than if we used a linear
transfer function. An additional benefit is that this way the values in the buffer
can be scaled by arbitrary amounts, for example to limit the range of values to
[0, 1], which can be represented in a normalized buffer.
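A minimal sketch of what such a transfer function could look like, assuming a pure
power curve and illustrative field names:

    #include <algorithm>
    #include <cmath>

    struct TransferFunction {
        double exponent = 2.2;
        double minLuminance = 0.0;    // luminance in nits at encoded value 0
        double maxLuminance = 1000.0; // luminance in nits at encoded value 1

        // encoded [0, 1] -> nits
        double decode(double encoded) const
        {
            return minLuminance + std::pow(encoded, exponent) * (maxLuminance - minLuminance);
        }

        // nits -> encoded [0, 1]
        double encode(double nits) const
        {
            const double relative = std::clamp(
                (nits - minLuminance) / (maxLuminance - minLuminance), 0.0, 1.0);
            return std::pow(relative, 1.0 / exponent);
        }
    };

With these values, a 600 nits reference luminance encodes to about 0.79 rather than
being pinned to encoded value 1, so the full [0, 1] range of a normalized buffer
still covers everything up to the 1000 nits maximum.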
If identity transformations aren't properly optimized out, we get additional
rounding errors and reduced performance. This test ensures that doesn't happen.
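A sketch of the optimization being tested, with a hypothetical pipeline
representation rather than the real one:

    #include <cmath>
    #include <vector>

    struct Mat3 {
        double m[3][3];
    };

    bool isFuzzyIdentity(const Mat3 &mat, double epsilon = 1e-6)
    {
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                const double expected = (row == col) ? 1.0 : 0.0;
                if (std::abs(mat.m[row][col] - expected) > epsilon) {
                    return false;
                }
            }
        }
        return true;
    }

    // fuzzily-identity matrices are dropped instead of being multiplied in,
    // so a description-to-itself transformation ends up as an empty pipeline
    void addMatrix(std::vector<Mat3> &pipeline, const Mat3 &mat)
    {
        if (isFuzzyIdentity(mat)) {
            return;
        }
        pipeline.push_back(mat);
    }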
This was only done because of the wrong assumption that displays needed it
to show their full native gamut. That turned out to be an amdgpu bug though; with
that fixed, most of the 0-100% range is wildly oversaturated.
To make the slider more intuitive, this changes the SDR gamut wideness setting to
instead interpolate towards the native display primaries as indicated by the EDID.
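A minimal sketch of that interpolation, with hypothetical types; keeping the white
point fixed is an assumption of this sketch:

    struct xy {
        double x, y;
    };

    xy lerp(const xy &a, const xy &b, double t)
    {
        return xy{a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t};
    }

    struct Primaries {
        xy red, green, blue, white;
    };

    // sdrGamutWideness = 0 gives sRGB, 1 gives the EDID's native primaries
    Primaries interpolateGamut(const Primaries &sRGB, const Primaries &native, double sdrGamutWideness)
    {
        return Primaries{
            lerp(sRGB.red, native.red, sdrGamutWideness),
            lerp(sRGB.green, native.green, sdrGamutWideness),
            lerp(sRGB.blue, native.blue, sdrGamutWideness),
            sRGB.white,
        };
    }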