COLOR

  • What causes color

    www.webexhibits.org/causesofcolor/5.html

    Water itself has an intrinsic blue color that is a result of its molecular structure and its behavior.

    Read more: What causes color
  • OLED vs QLED – What TV is better?

     

LG, Philips, Panasonic, and Sony all sell OLED TVs.
    OLED stands for “organic light emitting diode.”
    It is a fundamentally different technology from LCD, the major type of TV today.
    OLED is “emissive,” meaning the pixels emit their own light.

     

Samsung is branding its best TVs with a new acronym: “QLED.”
    QLED (according to Samsung) stands for “quantum dot LED TV.”
    It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.”
    QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.

     

    OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

    QLED, as an improvement over OLED, significantly improves the picture quality. QLED can produce an even wider range of colors than OLED, which says something about this new tech. QLED is also known to produce up to 40% higher luminance efficiency than OLED technology. Further, many tests conclude that QLED is far more efficient in terms of power consumption than its predecessor, OLED.

     

When analyzing a TV’s color, it may be beneficial to consider at least three elements:
“Color Depth”, “Color Gamut”, and “Dynamic Range”.

     

    Color Depth (or “Bit-Depth”, e.g. 8-bit, 10-bit, 12-bit) determines how many distinct color variations (tones/shades) can be viewed on a given display.
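
    As a quick arithmetic check (a minimal Python sketch; the bit depths are the ones listed above), the number of tones per channel is simply 2 raised to the bit depth:

    ```python
    # Tones per channel and total RGB colors as a function of bit depth.
    for bits in (8, 10, 12):
        tones = 2 ** bits    # distinct values one channel can take
        total = tones ** 3   # three channels: R, G, B
        print(f"{bits}-bit: {tones:,} tones/channel, {total:,} total colors")
    ```

    An 8-bit panel can show about 16.7 million colors; a 10-bit panel, over a billion.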

     

Color Gamut (e.g. WCG, wide color gamut) determines which specific colors from a given “Color Space” (Rec.709, Rec.2020, DCI-P3) can be displayed, i.e. the color range.

     

    Dynamic Range (SDR, HDR) determines the luminosity range of a specific color – from its darkest shade (or tone) to its brightest.

     

    The overall brightness range of a color will be determined by a display’s “contrast ratio”, that is, the ratio of luminance between the darkest black that can be produced and the brightest white.
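
    A minimal sketch of that ratio in Python, using hypothetical panel measurements (the nit values below are assumptions for illustration, not measured data):

    ```python
    import math

    white_nits = 1000.0  # brightest white the panel can produce (assumed)
    black_nits = 0.05    # darkest black (assumed; approaches 0 on OLED)

    ratio = white_nits / black_nits
    stops = math.log2(ratio)  # each stop doubles the luminance
    print(f"contrast ratio {ratio:,.0f}:1, about {stops:.1f} stops")
    ```

    On an emissive display a pixel can switch fully off, so black_nits approaches zero and the ratio is effectively infinite.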

     

    Color Volume is the “Color Gamut” + the “Dynamic/Luminosity Range”.
A TV’s Color Volume will not only determine which specific colors can be displayed (the color range) but also each color’s luminosity range, which has an effect on its “brightness” and “colorfulness” (intensity and saturation).

     

    The better the colour volume in a TV, the closer to life the colours appear.

     

    QLED TV can express nearly all of the colours in the DCI-P3 colour space, and of those colours, express 100% of the colour volume, thereby producing an incredible range of colours.

     

    With OLED TV, when the image is too bright, the percentage of the colours in the colour volume produced by the TV drops significantly. The colours get washed out and can only express around 70% colour volume, making the picture quality drop too.

     

    Note. OLED TV uses organic material, so it may lose colour expression as it ages.

     

Resources for more reading and comparison below:

    www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    www.newtechnologytv.com/qled-vs-oled/

     

    news.samsung.com/za/qled-tv-vs-oled-tv

     

    www.cnet.com/news/qled-vs-oled-samsungs-tv-tech-and-lgs-tv-tech-are-not-the-same/

     

    Read more: OLED vs QLED – What TV is better?
  • Tim Kang – calibrated white light values in sRGB color space

    https://www.linkedin.com/posts/timkang_colorimetry-cinematography-nerdalert-activity-7058330978007584769-9xln

     

    8bit sRGB encoded
    2000K 255 139 22
    2700K 255 172 89
    3000K 255 184 109
    3200K 255 190 122
    4000K 255 211 165
    4300K 255 219 178
    D50 255 235 205
    D55 255 243 224
    D5600 255 244 227
    D6000 255 249 240
    D65 255 255 255
    D10000 202 221 255
    D20000 166 196 255

    8bit Rec709 Gamma 2.4
    2000K 255 145 34
    2700K 255 177 97
    3000K 255 187 117
    3200K 255 193 129
    4000K 255 214 170
    4300K 255 221 182
    D50 255 236 208
    D55 255 243 226
    D5600 255 245 229
    D6000 255 250 241
    D65 255 255 255
    D10000 204 222 255
    D20000 170 199 255

    8bit Display P3 encoded
    2000K 255 154 63
    2700K 255 185 109
    3000K 255 195 127
    3200K 255 201 138
    4000K 255 219 176
    4300K 255 225 187
    D50 255 239 212
    D55 255 245 228
    D5600 255 246 231
    D6000 255 251 242
    D65 255 255 255
    D10000 208 223 255
    D20000 175 199 255

    10bit Rec2020 PQ (100 nits)
    2000K 520 435 273
    2700K 520 466 358
    3000K 520 475 384
    3200K 520 480 399
    4000K 520 495 446
    4300K 520 500 458
    D50 520 510 482
    D55 520 514 497
    D5600 520 514 500
    D6000 520 517 509
    D65 520 520 520
    D10000 479 489 520
    D20000 448 464 520
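
    To reproduce the idea behind these tables, here is a sketch using the open-source colour-science Python package. The method choices and normalisation below are assumptions, so the output will be close to, but not identical to, Tim Kang’s calibrated values:

    ```python
    import numpy as np
    import colour  # pip install colour-science

    def cct_to_srgb_8bit(cct, daylight=False):
        # Chromaticity from color temperature: Planckian locus ("Kang 2002"
        # approximation) for tungsten-like sources, CIE D series for daylight.
        method = "CIE Illuminant D Series" if daylight else "Kang 2002"
        xy = colour.CCT_to_xy(cct, method=method)
        XYZ = colour.xy_to_XYZ(xy)  # Y normalised to 1
        # Linear sRGB, scaled so the brightest channel hits 1.0, then encoded.
        lin = colour.XYZ_to_sRGB(XYZ, apply_cctf_encoding=False)
        lin = np.clip(lin / lin.max(), 0.0, 1.0)
        srgb = colour.cctf_encoding(lin, function="sRGB")
        return np.round(srgb * 255).astype(int)

    print("2700K:", cct_to_srgb_8bit(2700))                 # warm tungsten white
    print("D65:  ", cct_to_srgb_8bit(6504, daylight=True))  # ~[255 255 255]
    ```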

     

    Read more: Tim Kang – calibrated white light values in sRGB color space
  • Image rendering bit depth

    The terms 8-bit, 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, as bits per pixel.

     

    https://en.wikipedia.org/wiki/Color_depth

     

In color technology, color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

     

    When referring to a pixel, the concept can be defined as bits per pixel (bpp).

     

    When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.

     

    Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

     

     

    Here’s a simple explanation of each.

     

8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
They can store around 5 stops of light, and each channel carries a value from 0 (black) to 255 (white).
As a comparison, DSLR cameras can capture ~12-15 stops of light and use RAW files to store that information.

     

16-bit: This integer format uses 16 bits of data to represent the color value of each channel. With 16 bits, you can have 65,536 discrete levels per channel, allowing for relatively high precision and smooth gradients. However, it has a limited dynamic range, meaning it cannot represent values brighter than its fixed white point or darker than black. It is commonly used for regular images and textures.

     

16-bit float: This format, commonly referred to as “half-precision,” uses floating-point numbers instead of fixed integers. Floating-point numbers allow for a much larger dynamic range: the 16 bits are split between a sign bit, an exponent (which controls the range of magnitudes that can be represented), and a mantissa (which controls precision). The 16-bit float format provides a far wider dynamic range than regular 16-bit integer, at some cost in uniform precision, making it useful for high-dynamic-range imaging (HDRI) and computations that need headroom beyond display white.

     

32-bit: Images in this format (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits per channel and offers the highest precision and dynamic range of the three options. With 32 bits (a sign bit, an 8-bit exponent, and a 23-bit mantissa), you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.
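
    The range and precision differences are easy to inspect with NumPy (a small illustrative sketch):

    ```python
    import numpy as np

    # Largest representable value: dynamic range headroom.
    print(np.finfo(np.float16).max)        # 65504.0
    print(np.finfo(np.float32).max)        # ~3.4e38

    # Precision: the gap between adjacent representable values near 1000.
    print(np.spacing(np.float16(1000.0)))  # 0.5 -> visible steps at high values
    print(np.spacing(np.float32(1000.0)))  # ~6.1e-05

    # 16-bit integer, for comparison: 65,536 evenly spaced levels, but no
    # headroom beyond its fixed 0..65535 range.
    print(np.iinfo(np.uint16).max)         # 65535
    ```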

     

    Bits and HDR coverage

    High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

     

    A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

     

    On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

     

The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be as prominent or perceptible. Such scenes usually span around 8-10 stops of exposure.

     

    However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

     

    To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.
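
    One way to see this in a short sketch: track the gap between adjacent half-float values at increasing stops above middle gray (0.18 in scene-linear, an assumption consistent with the text). The relative precision stays roughly constant at under 0.1%, but the absolute step between representable values grows with each stop, which is where banding in bright regions comes from, and anything beyond 65504 (roughly +18 stops above 0.18) clips entirely:

    ```python
    import numpy as np

    for stops in (0, 4, 8, 12, 16):
        value = 0.18 * 2.0 ** stops          # scene-linear value at +N stops
        ulp = np.spacing(np.float16(value))  # gap to the next half-float
        print(f"+{stops:2d} stops: value={value:9.2f} "
              f"step={ulp:.5f} relative={ulp / value:.1e}")
    ```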

     

    To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

     

    In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.

     

HDRIs used for lighting purposes are usually slightly convolved (pre-blurred) to improve sampling speed and remove specular artefacts. To that extent, 16-bit float HDRIs tend to be the most used in CG pipelines.

     

    Read more: Image rendering bit depth
  • About green screens

    hackaday.com/2015/02/07/how-green-screen-worked-before-computers/

     

    www.newtek.com/blog/tips/best-green-screen-materials/

     

    www.chromawall.com/blog//chroma-key-green

     

     

Chroma Key Green, the color of green screens, is also known as Chroma Green and corresponds approximately to 354 C in the Pantone color matching system (PMS).

     

Chroma Green can be broken down in many different ways. Here is green screen green expressed in other value systems useful for both physical and digital production:

     

    Green Screen as RGB Color Value: 0, 177, 64
    Green Screen as CMYK Color Value: 81, 0, 92, 0
    Green Screen as Hex Color Value: #00b140
    Green Screen as Websafe Color Value: #009933
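
    These values are consistent with one another; e.g. converting the RGB triplet to hex:

    ```python
    # RGB 0, 177, 64 expressed as a hex color value.
    r, g, b = 0, 177, 64
    print(f"#{r:02x}{g:02x}{b:02x}")  # -> #00b140
    ```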

     

    Chroma Key Green is reasonably close to an 18% gray reflectance.

     

Illuminate your green screen with a uniform source, keeping variation across the screen under 2/3 EV.
The level of brightness at any given f-stop should be equivalent to a 90% white card under the same lighting.
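
    A quick way to check that tolerance from spot-meter readings (the luminance numbers below are hypothetical): the EV spread is the log2 ratio of the brightest to the darkest patch.

    ```python
    import math

    # Hypothetical spot readings (cd/m^2) from several points on the screen.
    readings = [820.0, 790.0, 760.0, 845.0]

    variation_ev = math.log2(max(readings) / min(readings))
    verdict = "OK" if variation_ev <= 2 / 3 else "relight"
    print(f"variation: {variation_ev:.2f} EV -> {verdict}")
    ```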

    Read more: About green screens
  • About color: What is a LUT

    http://www.lightillusion.com/luts.html

    https://www.shutterstock.com/blog/how-use-luts-color-grading

     

A LUT (Lookup Table) is essentially the modifier between two images (the original image and the displayed image), based on a mathematical formula: in practice, a conversion table or matrix of varying complexity. There are different types of LUTs: viewing, transform, calibration, 1D, and 3D.
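
    A minimal sketch of the 1D case, using NumPy (the 5-entry curve below is an arbitrary example, not a real grading LUT): each input value is remapped by interpolating between the table’s entries. 3D LUTs work the same way but interpolate inside a cube of RGB entries, so they can alter hue and saturation, not just per-channel tone.

    ```python
    import numpy as np

    lut = np.array([0.0, 0.18, 0.45, 0.75, 1.0])  # toy 5-entry 1D LUT
    positions = np.linspace(0.0, 1.0, len(lut))   # inputs the entries map from

    def apply_1d_lut(image, lut, positions):
        # Piecewise-linear interpolation between neighbouring LUT entries.
        return np.interp(image, positions, lut)

    print(apply_1d_lut(np.array([0.1, 0.5, 0.9]), lut, positions))
    ```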

     

    Read more: About color: What is a LUT
  • Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

    https://jo.dreggn.org/home/2018_manuka.pdf

     

    http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/

     

The Manuka rendering architecture has been designed in the spirit of the classic Reyes rendering architecture. At its core, Reyes is based on stochastic rasterisation of micropolygons, facilitating depth of field, motion blur, high geometric complexity, and programmable shading.

     

Over the years, however, expectations have risen substantially when it comes to image quality. Computing pictures which are indistinguishable from real footage requires accurate simulation of light transport, which is most often performed using some variant of Monte Carlo path tracing. Unfortunately this paradigm requires random memory accesses to the whole scene and does not lend itself well to a rasterisation approach at all.

     

Path tracing is commonly implemented using a paradigm called shade-on-hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure, which is then used by the path sampling logic to evaluate contributions and to inform what further rays to cast through the scene.

     

    Manuka is both a uni-directional and bidirectional path tracer and encompasses multiple importance sampling (MIS). Interestingly, and importantly for production character skin work, it is the first major production renderer to incorporate spectral MIS in the form of a new ‘Hero Spectral Sampling’ technique, which was recently published at Eurographics Symposium on Rendering 2014.
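
    The core idea of Hero Wavelength Spectral Sampling (Wilkie et al., EGSR 2014) can be sketched in a few lines: sample one “hero” wavelength per path, then derive the remaining wavelengths by rotating it around the sampled range so the spectrum is covered evenly (the 380-700 nm bounds and the count of four are common choices, assumed here):

    ```python
    import random

    LAMBDA_MIN, LAMBDA_MAX = 380.0, 700.0  # visible range in nm (assumed)
    SPAN = LAMBDA_MAX - LAMBDA_MIN

    def hero_wavelengths(c=4):
        # One uniformly sampled hero wavelength plus c-1 rotated companions.
        hero = LAMBDA_MIN + random.random() * SPAN
        return [LAMBDA_MIN + (hero - LAMBDA_MIN + j * SPAN / c) % SPAN
                for j in range(c)]

    print(hero_wavelengths())  # e.g. [512.3, 592.3, 672.3, 432.3]
    ```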

     

Manuka proposes a shade-before-hit paradigm instead, minimising I/O strain (and some memory costs) on the system and leveraging locality of reference by running pattern-generation shaders before executing light transport simulation by path sampling, “compressing” any BVH structure as needed and thereby also limiting duplication of source data.
The difference is that instead of baking colors into the geometry as Reyes does, Manuka bakes surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up front and baked into the geometry, as the toy sketch below illustrates.
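
    A toy, runnable contrast of the two paradigms; every name and structure here is a hypothetical stand-in, not Manuka’s actual API:

    ```python
    shader_runs = 0

    def run_shader(vertex):
        # Stand-in for pattern generation: real shaders do texture lookups
        # and produce BSDF inputs ("closures"), not final colors.
        global shader_runs
        shader_runs += 1
        return {"vertex": vertex, "albedo": 0.5, "roughness": 0.4}

    vertices = ["v0", "v1", "v2"]          # pre-tessellated grid vertices
    hits = ["v1", "v1", "v2", "v1", "v0"]  # vertices hit by traced rays

    # Shade-on-hit: shaders run during light transport, once per ray hit.
    shader_runs = 0
    closures = [run_shader(v) for v in hits]
    print("shade-on-hit runs:    ", shader_runs)  # 5: scales with ray count

    # Shade-before-hit: shade every vertex once up front; transport then
    # only reads the pre-baked closures.
    shader_runs = 0
    baked = {v: run_shader(v) for v in vertices}
    closures = [baked[v] for v in hits]
    print("shade-before-hit runs:", shader_runs)  # 3: scales with geometry
    ```

    The same toy also hints at the drawback discussed next: every vertex is shaded up front, whether or not a ray ever reaches it.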

     

The main drawback of this method is that geometry has to be tessellated to its highest, stable topology before shading can be evaluated properly; hence the high cost to first pixel. Even a basic four-vertex quad becomes a much more complex model with this approach.

     

     

Manuka uses the RenderMan Shading Language (RSL) for programmable shading [Pixar Animation Studios 2015], but does not invoke RSL shaders when intersecting a ray with a surface (shade-on-hit). Instead, it pre-tessellates and pre-shades all the input geometry in the front end of the renderer.
This way, shading computations can be ordered to support near-optimal texture locality, vectorisation, and parallelism. This system avoids repeated evaluation of shaders at the same surface point and presents a minimal amount of memory to be accessed during light transport. An added benefit is that the acceleration structure for ray tracing (a bounding volume hierarchy, BVH) is built once on the final tessellated geometry, which allows more efficient ray tracing than multi-level BVHs and avoids costly caching of on-demand tessellated micropolygons and the associated scheduling issues.

     

For the shading reasons above, in terms of AOVs the studio approach is to combine complex shading with ray paths in the render rather than pass a multi-pass render to compositing.

     

As for the spectral rendering component: the light transport stage is fully spectral, using a continuously sampled wavelength which is traced with each path and used to apply the spectral sensitivity of the camera sensor. This allows Manuka to faithfully support any degree of observer metamerism in the camera footage its renders are intended to match, as well as complex materials which require wavelength-dependent phenomena such as diffraction, dispersion, interference, iridescence, chromatic extinction, and Rayleigh scattering in participating media.

     

As opposed to the original Reyes paper, Manuka uses bilinear interpolation of these BSDF inputs later, when evaluating BSDFs per path vertex during light transport. This improves the temporal stability of geometry which moves very slowly with respect to the pixel raster.

     

In terms of the pipeline, everything rendered at Weta was already completely interwoven with their deep data pipeline, and Manuka was very much written with deep data in mind. Here, Manuka does not so much extend the deep capabilities as fully match the already extremely complex and powerful setup Weta Digital enjoys with RenderMan. For example, an ape in a scene can be selected, its ID is available, and a NUKE artist can then paint in 3D, say, a hand and part of the way up the neutrally posed ape.

     

We called our system Manuka, as a respectful nod to Reyes: we had heard a story from a former ILM employee about how Reyes got its name from how fond the early Pixar people were of their lunches at Point Reyes, and decided to name our system after our surrounding natural environment, too. Manuka is a kind of tea tree very common in New Zealand which has very many very small leaves, in analogy to micropolygons in a tree structure for ray tracing. It also happens to be the case that Weta Digital’s main site is on Manuka Street.

     

     

    Read more: Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

LIGHTING