COLOR

  • VES Cinematic Color – Motion-Picture Color Management

    https://cinematiccolor.org

    This paper presents an introduction to the color pipelines behind modern feature-film visual-effects and animation.

    Authored by Jeremy Selan, and reviewed by the members of the VES Technology Committee including Rob Bredow, Dan Candela, Nick Cannon, Paul Debevec, Ray Feeney, Andy Hendrickson, Gautham Krishnamurti, Sam Richards, Jordan Soles, and Sebastian Sylwan.

    Read more: VES Cinematic Color – Motion-Picture Color Management
  • Photography basics: Why Use a (MacBeth) Color Chart?

    Start here: https://www.pixelsham.com/2013/05/09/gretagmacbeth-color-checker-numeric-values/

     

    https://www.studiobinder.com/blog/what-is-a-color-checker-tool/

     

     

     

     

    In Lightroom

     

    In Final Cut

     

    In Nuke

    Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
    You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down will give you the respective next value on the aperture scale (f/8, f/11, f/16, etc.).

    One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
    So, starting with 0.18 in your Constant, raising it by a stop will give you 0.36 as a floating-point value (in linear space), while your f/stop read-out will move to f/11, and so on.

     

    If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.

     

    In other words, setting the center f-stop to 0 means that in a neutrally exposed plate the middle gray patch of the Macbeth chart will read as exposure value 0. EV 0 corresponds to an exposure time of 1 second at an aperture of f/1.0.

     

    This will typically place the sun around EV 12–17 and the sky around EV 1–4, depending on cloud coverage.

     

    To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and press S to open the viewer’s properties, then set the center f-stop to 0. The SpotMeter read-out in the viewport will change from apertures/f-stops to EV.
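
    The same relationship is easy to verify outside of Nuke. Below is a minimal Python sketch; the only assumptions are the 0.18 scene-linear middle-gray anchor and the log-base-2 stop relationship described above:

    ```python
    import math

    MIDDLE_GRAY = 0.18  # 18% reflectance, scene-linear

    def stops_from_gray(linear_value):
        """Relative exposure in stops (EV with the center f-stop set to 0)."""
        return math.log2(linear_value / MIDDLE_GRAY)

    def linear_from_stops(stops):
        """Scene-linear value sitting 'stops' above or below middle gray."""
        return MIDDLE_GRAY * 2.0 ** stops

    print(stops_from_gray(0.18))  # 0.0  -> EV 0, the 18% gray Constant
    print(linear_from_stops(1))   # 0.36 -> one stop up (f/11 on the default read-out)
    print(linear_from_stops(-1))  # 0.09 -> one stop down (f/5.6)
    ```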

    Read more: Photography basics: Why Use a (MacBeth) Color Chart?
  • Image rendering bit depth

    The terms 8-bit, 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, expressed as bits per channel or bits per pixel.

     

    https://en.wikipedia.org/wiki/Color_depth

     

    In color technology, color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

     

    When referring to a pixel, the concept can be defined as bits per pixel (bpp).

     

    When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.

     

    Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

     

     

    Here’s a simple explanation of each.

     

    8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
    They can store around 5 stops of light, and each channel carries a value from 0 (black) to 255 (white).
    As a comparison, DSLR cameras can capture ~12–15 stops of light, and they use RAW files to store the information.

     

    16-bit (integer): This format uses 16 bits of data per channel to represent color values. With 16 bits you get 65,536 discrete levels per channel, allowing for relatively high precision and smooth gradients. However, as an integer format it still has a limited dynamic range, meaning it cannot accurately represent extremely bright or dark values. It is commonly used for regular images and textures.

     

    16-bit float (half-precision): This format also uses 16 bits per channel, but stores floating-point numbers instead of fixed integers. The bits are split between a sign bit, an exponent, and a mantissa (1 + 5 + 10 in the common IEEE/OpenEXR half-float layout), and the exponent controls the range of values that can be represented. The 16-bit float format provides a much wider dynamic range than regular 16-bit integer (at the cost of some precision within the 0–1 range), making it useful for high-dynamic-range imaging (HDRI) and for intermediate computations.

     

    32-bit float images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits per channel and offers the highest precision and dynamic range of the three options. With 32 bits you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a very wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.
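
    To make the precision difference between the integer formats concrete, here is a small sketch (assuming straightforward linear quantization and ignoring any gamma encoding):

    ```python
    def quantize(value, bits):
        """Round a normalized 0-1 value to the nearest code value of an integer format."""
        levels = 2 ** bits - 1
        return round(value * levels) / levels

    for bits in (8, 16):
        print(f"{bits}-bit: {2 ** bits:,} levels per channel, "
              f"smallest step = {1 / (2 ** bits - 1):.8f}")

    print(quantize(0.18, 8))   # 0.1803921... (code value 46 of 255)
    print(quantize(0.18, 16))  # 0.1799954... (code value 11796 of 65535)
    ```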

     

    Bits and HDR coverage

    High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

     

    A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

     

    On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

     

    The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be as prominent or perceptible. Such scenes usually span around 8–10 stops of exposure.

     

    However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

     

    To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.

     

    To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

     

    In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.
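
    One way to see where half float starts running out of precision is to look at the spacing between adjacent representable values as the signal climbs above middle gray. This is a rough sketch using NumPy; the exact point where the quantization becomes visible as banding depends on the content and on any encoding applied on top:

    ```python
    import numpy as np

    MIDDLE_GRAY = 0.18

    print("stops above gray | scene-linear value | gap to next representable half-float")
    for stops in (0, 4, 8, 12, 15):
        value = MIDDLE_GRAY * 2.0 ** stops
        gap = float(np.spacing(np.float16(value)))  # quantization step at this magnitude
        print(f"{stops:>16} | {value:18.2f} | {gap:.6f}")

    # For comparison, 32-bit float has a 23-bit mantissa instead of 10 bits,
    # so its step at any given magnitude is roughly 8000x smaller.
    ```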

     

    HDRs used for lighting purposes are usually slightly convolved (pre-blurred) to improve sampling speed and remove specular artefacts. To that extent, 16-bit float HDRIs tend to be the most used in CG pipelines.

     

    Read more: Image rendering bit depth
  • Eye retina’s Bipolar Cells, Horizontal Cells, and Photoreceptors

    In the retina, photoreceptors, bipolar cells, and horizontal cells work together to process visual information before it reaches the brain. Here’s how each cell type contributes to vision:

     

    1. Photoreceptors

    • Types: There are two main types of photoreceptors: rods and cones.
      • Rods: Specialized for low-light and peripheral vision; they help us see in dim lighting and detect motion.
      • Cones: Specialized for color and detail; they function best in bright light and are concentrated in the central retina (the fovea), allowing for high-resolution vision.
    • Function: Photoreceptors convert light into electrical signals. When light hits the retina, photoreceptors undergo a chemical change, triggering an electrical response that initiates the visual process. Rods and cones detect different intensities and colors, providing the foundation for brightness and color perception.

     

    2. Bipolar Cells

    • Function: Bipolar cells act as intermediaries, connecting photoreceptors to ganglion cells, which send signals to the brain. They receive input from photoreceptors and relay it to the retinal ganglion cells.
    • On and Off Bipolar Cells: Some bipolar cells are ON cells, responding when light is detected (depolarizing in light), and others are OFF cells, responding in darkness (depolarizing in the absence of light). This division allows for more precise contrast detection and the ability to distinguish light from dark areas in the visual field.

     

    3. Horizontal Cells

    • Function: Horizontal cells connect photoreceptors to each other and create lateral interactions between them. They integrate signals from multiple photoreceptors, allowing them to adjust the sensitivity of neighboring photoreceptors in response to varying light conditions.
    • Lateral Inhibition: This process improves visual contrast and sharpness by making the borders between light and dark areas more distinct, enhancing our ability to perceive edges and fine detail.

     

    These three types of cells work together to preprocess visual information in the retina, emphasizing contrast and adjusting for different lighting conditions before the signals are sent to the brain for further processing and interpretation.
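
    As a loose numerical analogy (not a physiological model), lateral inhibition can be pictured as each cell subtracting a fraction of its neighbours’ input from its own, which exaggerates the response right at a dark-to-bright border:

    ```python
    # Hypothetical 1D "retina": a simple dark-to-bright edge and an assumed surround weight.
    signal = [1, 1, 1, 1, 5, 5, 5, 5]
    inhibition = 0.25

    response = []
    for i, centre in enumerate(signal):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, len(signal) - 1)]
        response.append(centre - inhibition * (left + right))

    print(response)
    # [0.5, 0.5, 0.5, -0.5, 3.5, 2.5, 2.5, 2.5]
    # The dip (-0.5) and overshoot (3.5) either side of the edge make the border more distinct.
    ```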

     

     

    Read more: Eye retina’s Bipolar Cells, Horizontal Cells, and Photoreceptors
  • Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking

    https://bottosson.github.io/misc/colorpicker

     

    https://bottosson.github.io/posts/colorpicker/

     

    https://www.smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/

     

    One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also the non-linear coding called gamma. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.

     

     

    There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. The dress is a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

     

    I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation prediction from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them.

     

    IPT was designed to have better hue uniformity. In experiments, they asked people to match light and dark colors, saturated and unsaturated colors, which resulted in a dataset for which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.
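
    For reference, here is a minimal Python sketch of the sRGB → Oklab conversion. The matrix coefficients are the ones published in Ottosson’s reference implementation; treat this as a study aid and verify the numbers against the post linked above rather than relying on this copy:

    ```python
    def srgb_to_oklab(r, g, b):
        """sRGB (0-1 components) -> Oklab.

        Coefficients follow Björn Ottosson's published reference implementation;
        verify against https://bottosson.github.io/posts/oklab/ before relying on them.
        """
        def to_linear(c):  # undo the sRGB transfer curve
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

        r, g, b = to_linear(r), to_linear(g), to_linear(b)

        # linear sRGB -> approximate cone (LMS) responses
        l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
        m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
        s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b

        # cube-root non-linearity, then LMS' -> Oklab (L = lightness, a/b = opponent axes)
        l, m, s = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
        L  = 0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s
        a  = 1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s
        bb = 0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s
        return L, a, bb

    print(srgb_to_oklab(1.0, 1.0, 1.0))  # approximately (1.0, 0.0, 0.0) for white
    ```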

     

    In the Munsell color system, colors are described with three parameters designed to match the perceived appearance of colors: Hue, Chroma and Value. The parameters are designed to be independent, and each has a uniform scale, which results in a color solid with an irregular shape. Modern color spaces and models, such as CIELAB, CAM16 and Björn Ottosson’s own Oklab, are very similar in their construction.

     

     

    By far the most used color spaces today for color picking are HSL and HSV, two representations introduced in the classic 1978 paper “Color Spaces for Computer Graphics”. HSL and HSV were designed to roughly correlate with perceptual color properties while being very simple and cheap to compute.

     

    Today HSL and HSV are most commonly used together with the sRGB color space.

     

     

    One of the main advantages of HSL and HSV over the different Lab color spaces is that they map the sRGB gamut to a cylinder. This makes them easy to use since all parameters can be changed independently, without the risk of creating colors outside of the target gamut.
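
    This stay-in-gamut-by-construction property is easy to see with Python’s standard colorsys module (a small sketch; any hue/saturation/value triple in the unit cube maps back to valid sRGB components):

    ```python
    import colorsys
    import random

    random.seed(1)
    for _ in range(5):
        h, s, v = random.random(), random.random(), random.random()
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        # every component is guaranteed to stay inside the 0-1 sRGB range
        assert 0.0 <= r <= 1.0 and 0.0 <= g <= 1.0 and 0.0 <= b <= 1.0
        print(f"HSV({h:.2f}, {s:.2f}, {v:.2f}) -> sRGB({r:.2f}, {g:.2f}, {b:.2f})")
    ```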

     

     

    The main drawback, on the other hand, is that their properties don’t match human perception particularly well.
    Reconciling these conflicting goals perfectly isn’t possible, but given that HSV and HSL don’t use anything derived from experiments relating to human perception, creating something that makes a better tradeoff does not seem unreasonable.

     

     

    With this new lightness estimate, we are ready to look into the construction of Okhsv and Okhsl.

     

     

    Read more: Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking
  • OLED vs QLED – What TV is better?

     

    LG, Philips, Panasonic and Sony all sell OLED TVs.
    OLED stands for “organic light emitting diode.”
    It is a fundamentally different technology from LCD, the major type of TV today.
    OLED is “emissive,” meaning the pixels emit their own light.

     

    Samsung is branding its best TVs with a new acronym: “QLED”
    QLED (according to Samsung) stands for “quantum dot LED TV.”
    It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.”
    QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.

     

    OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

    QLED, as an improvement over OLED, significantly improves the picture quality. QLED can produce an even wider range of colors than OLED, which says something about this new tech. QLED is also known to produce up to 40% higher luminance efficiency than OLED technology. Further, many tests conclude that QLED is far more efficient in terms of power consumption than its predecessor, OLED.

     

    When analyzing a TV’s color, it may be beneficial to consider at least 3 elements:
    “Color Depth”, “Color Gamut”, and “Dynamic Range”.

     

    Color Depth (or “Bit-Depth”, e.g. 8-bit, 10-bit, 12-bit) determines how many distinct color variations (tones/shades) can be viewed on a given display.
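
    The jump in available tones per extra bit is simple arithmetic (a quick sketch, counting shades per channel and combinations per pixel):

    ```python
    for bits in (8, 10, 12):
        per_channel = 2 ** bits
        total = per_channel ** 3
        print(f"{bits}-bit: {per_channel:,} shades per channel, "
              f"{total:,} possible colors per pixel")
    # 8-bit  ->   256 shades per channel, ~16.7 million colors
    # 10-bit -> 1,024 shades per channel, ~1.07 billion colors
    # 12-bit -> 4,096 shades per channel, ~68.7 billion colors
    ```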

     

    Color Gamut (e.g. WCG) determines which specific colors can be displayed from a given “Color Space” (Rec.709, Rec.2020, DCI-P3) (i.e. the color range).

     

    Dynamic Range (SDR, HDR) determines the luminosity range of a specific color – from its darkest shade (or tone) to its brightest.

     

    The overall brightness range of a color will be determined by a display’s “contrast ratio”, that is, the ratio of luminance between the darkest black that can be produced and the brightest white.
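
    A contrast ratio can also be read as stops of dynamic range by taking its log base 2 (a quick sketch; the ratios below are illustrative, not measurements of specific panels):

    ```python
    import math

    for ratio in (1_000, 5_000, 1_000_000):  # illustrative contrast ratios
        stops = math.log2(ratio)
        print(f"{ratio:>9,}:1 contrast ≈ {stops:4.1f} stops between darkest black and brightest white")
    ```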

     

    Color Volume is the “Color Gamut” + the “Dynamic/Luminosity Range”.
    A TV’s Color Volume will not only determine which specific colors can be displayed (the color range) but also each color’s luminosity range, which will have an effect on its “brightness” and “colorfulness” (intensity and saturation).

     

    The better the colour volume in a TV, the closer to life the colours appear.

     

    QLED TV can express nearly all of the colours in the DCI-P3 colour space, and of those colours, express 100% of the colour volume, thereby producing an incredible range of colours.

     

    With OLED TV, when the image is too bright, the percentage of the colours in the colour volume produced by the TV drops significantly. The colours get washed out and can only express around 70% colour volume, making the picture quality drop too.

     

    Note. OLED TV uses organic material, so it may lose colour expression as it ages.

     

    Resources for more reading and comparison below

    www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    www.newtechnologytv.com/qled-vs-oled/

     

    news.samsung.com/za/qled-tv-vs-oled-tv

     

    www.cnet.com/news/qled-vs-oled-samsungs-tv-tech-and-lgs-tv-tech-are-not-the-same/

     

    Read more: OLED vs QLED – What TV is better?

LIGHTING

