COLOR

  • OLED vs QLED – Which TV is better?

     

    LG, Philips, Panasonic, and Sony all sell OLED TVs.
    OLED stands for “organic light emitting diode.”
    It is a fundamentally different technology from LCD, the major type of TV today.
    OLED is “emissive,” meaning the pixels emit their own light.

     

    Samsung is branding its best TVs with a new acronym: “QLED.”
    QLED (according to Samsung) stands for “quantum dot LED TV.”
    It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.”
    QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.

     

    OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

    QLED is positioned as an improvement over OLED in picture quality. QLED can produce an even wider range of colors than OLED and is reported to deliver up to 40% higher luminance efficiency. Further, many tests conclude that QLED is far more power-efficient than OLED.

     

    When analyzing a TV’s color, it may be beneficial to consider at least three elements:
    “Color Depth”, “Color Gamut”, and “Dynamic Range”.

     

    Color Depth (or “Bit-Depth”, e.g. 8-bit, 10-bit, 12-bit) determines how many distinct color variations (tones/shades) can be viewed on a given display.
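    To make bit depth concrete, here is a minimal Python sketch (assuming an RGB display with the same bit depth on each channel) of how tone counts grow:

    for bits in (8, 10, 12):
        tones_per_channel = 2 ** bits          # e.g. 8-bit gives 256 shades per primary
        total_colors = tones_per_channel ** 3  # three channels combined
        print(f"{bits}-bit: {tones_per_channel} tones/channel, {total_colors:,} colors")
    # 8-bit -> 16.7 million colors; 10-bit -> 1.07 billion; 12-bit -> 68.7 billion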

     

    Color Gamut (e.g. WCG) determines which specific colors can be displayed from a given “Color Space” (Rec.709, Rec.2020, DCI-P3) (i.e. the color range).
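    As a hedged sketch of what “which specific colors” means in practice, the snippet below tests whether a CIE 1931 xy chromaticity falls inside the Rec.709 gamut triangle. The primaries are the published Rec.709 chromaticities; the sign-based point-in-triangle test is ordinary geometry:

    R, G, B = (0.640, 0.330), (0.300, 0.600), (0.150, 0.060)  # Rec.709 primaries (xy)

    def edge(p, a, b):
        # Signed area test: which side of edge a->b the point p lies on.
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

    def in_rec709(p):
        signs = [edge(p, R, G), edge(p, G, B), edge(p, B, R)]
        return all(s <= 0 for s in signs) or all(s >= 0 for s in signs)

    print(in_rec709((0.3127, 0.3290)))  # D65 white point -> True
    print(in_rec709((0.265, 0.690)))    # DCI-P3 green primary -> False (outside 709)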

     

    Dynamic Range (SDR, HDR) determines the luminosity range of a specific color – from its darkest shade (or tone) to its brightest.

     

    The overall brightness range of a color will be determined by a display’s “contrast ratio”, that is, the ratio of luminance between the darkest black that can be produced and the brightest white.
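    A minimal sketch of the ratio itself (the nit figures are illustrative assumptions, not measured specs):

    def contrast_ratio(white_nits, black_nits):
        # Ratio of peak white to deepest black luminance, both in cd/m2 (nits).
        if black_nits == 0:
            return float("inf")  # an emissive pixel switched fully off
        return white_nits / black_nits

    print(f"{contrast_ratio(1000, 0.05):,.0f}:1")  # hypothetical LED LCD -> 20,000:1
    print(contrast_ratio(700, 0.0))                # hypothetical OLED -> inf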

     

    Color Volume is the “Color Gamut” + the “Dynamic/Luminosity Range”.
    A TV’s Color Volume will not only determine which specific colors can be displayed (the color range) but also each color’s luminosity range, which will have an effect on its “brightness” and “colorfulness” (intensity and saturation).

     

    The better the colour volume in a TV, the closer to life the colours appear.

     

    QLED TV can express nearly all of the colours in the DCI-P3 colour space, and of those colours, express 100% of the colour volume, thereby producing an incredible range of colours.

     

    With OLED TV, when the image is too bright, the percentage of the colours in the colour volume produced by the TV drops significantly. The colours get washed out and can only express around 70% colour volume, making the picture quality drop too.

     

    Note: OLED TVs use organic material, so they may lose colour expression as they age.

     

    Resources for more reading and comparison below:

    www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

     

    www.newtechnologytv.com/qled-vs-oled/

     

    news.samsung.com/za/qled-tv-vs-oled-tv

     

    www.cnet.com/news/qled-vs-oled-samsungs-tv-tech-and-lgs-tv-tech-are-not-the-same/

     

    Read more: OLED vs QLED – Which TV is better?
  • HDR and Color

    https://www.soundandvision.com/content/nits-and-bits-hdr-and-color

    In HD we often refer to the range of available colors as a color gamut. Such a color gamut is typically plotted on a two-dimensional diagram, called a CIE chart, as shown at the top of this blog. Each color is characterized by its x/y coordinates.

    Good enough for government work, perhaps. But for HDR, with its higher luminance levels and wider color range, the gamut becomes three-dimensional.

    For HDR the color gamut therefore becomes a characteristic we now call the color volume. It isn’t easy to show color volume on a two-dimensional medium like the printed page or a computer screen, but one method is shown below. As the luminance becomes higher, the picture eventually turns to white. As it becomes darker, it fades to black. The traditional color gamut shown on the CIE chart is simply a slice through this color volume at a selected luminance level, such as 50%.

    Three different color volumes—we still refer to them as color gamuts though their third dimension is important—are currently the most significant. The first is BT.709 (sometimes referred to as Rec.709), the color gamut used for pre-UHD/HDR formats, including standard HD.

    The largest is known as BT.2020; it encompasses (roughly) the range of colors visible to the human eye (though ET might find it insufficient!).

    Between these two is the color gamut used in digital cinema, known as DCI-P3.


     

    Read more: HDR and Color
  • Types of Film Lights and their efficiency – CRI, Color Temperature and Luminous Efficacy

    nofilmschool.com/types-of-film-lights

     

    “Not every light performs the same way. Lights and lighting are tricky to handle. You have to plan for every circumstance. But the good news is, lighting can be adjusted. Let’s look at different factors that affect lighting in every scene you shoot. ”

    Use CRI, luminous efficacy, and color temperature controls to match your needs.

     

    Color Temperature
    Color temperature describes the “color” of white light by comparing a light source to the light radiated by a perfect black body at a given temperature, measured in kelvins.
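    To ground the black-body definition, a small sketch using Planck’s law compares the blue-to-red energy balance of a tungsten-balanced source (3200 K) against daylight (5600 K); the two probe wavelengths are illustrative choices:

    import math

    H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann

    def planck(lam, T):
        # Spectral radiance of a black body at wavelength lam (m) and temperature T (K).
        return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * K * T)) - 1)

    for T in (3200, 5600):
        blue_vs_red = planck(450e-9, T) / planck(650e-9, T)
        print(f"{T} K: blue/red radiance ratio = {blue_vs_red:.2f}")
    # 3200 K emits far less blue relative to red, which is why tungsten looks orange.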

     

    https://www.pixelsham.com/2019/10/18/color-temperature/

     

    CRI
    “The Color Rendering Index is a measurement of how faithfully a light source reveals the colors of whatever it illuminates; it describes the ability of a light source to reveal the color of an object, as compared to the color a natural light source would provide. The highest possible CRI is 100. A CRI of 100 generally refers to a perfect black body, like a tungsten light source or the sun.”

     

    https://www.studiobinder.com/blog/what-is-color-rendering-index/

     

     

     

    https://en.wikipedia.org/wiki/Color_rendering_index

     

    Light source CCT (K) CRI
    Low-pressure sodium (LPS/SOX) 1800 −44
    Clear mercury-vapor 6410 17
    High-pressure sodium (HPS/SON) 2100 24
    Coated mercury-vapor 3600 49
    Halophosphate warm-white fluorescent 2940 51
    Halophosphate cool-white fluorescent 4230 64
    Tri-phosphor warm-white fluorescent 2940 73
    Halophosphate cool-daylight fluorescent 6430 76
    “White” SON 2700 82
    Standard LED Lamp 2700–5000 83
    Quartz metal halide 4200 85
    Tri-phosphor cool-white fluorescent 4080 89
    High-CRI LED lamp (blue LED) 2700–5000 95
    Ceramic discharge metal-halide lamp 5400 96
    Ultra-high-CRI LED lamp (violet LED) 2700–5000 99
    Incandescent/halogen bulb 3200 100

     

    Luminous Efficacy
    Luminous efficacy is a measure of how well a light source produces visible light: the ratio of luminous flux out to electrical power in, measured in lumens per watt (lm/W). In other words, it indicates how much visible light a source emits for a given amount of power.
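    In code the definition is a one-liner; the flux and wattage figures below are ballpark assumptions for illustration only:

    def efficacy(lumens, watts):
        # Luminous efficacy: visible light out (lm) per electrical power in (W).
        return lumens / watts

    sources = {
        "tungsten 100 W":   (1600, 100),  # roughly 16 lm/W
        "fluorescent 32 W": (2800, 32),   # roughly 88 lm/W
        "LED panel 40 W":   (4000, 40),   # roughly 100 lm/W
    }
    for name, (lm, w) in sources.items():
        print(f"{name}: {efficacy(lm, w):.0f} lm/W")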

     

    FILM LIGHT TYPES

    https://www.studiobinder.com/blog/video-lighting-kits/?utm_campaign=Weekly_Newsletter&utm_medium=email&utm_source=sendgrid&utm_term=production-lighting&utm_content=production-lighting

     

     

     

    Consumer light types

     

    https://www.researchgate.net/figure/Emission-spectra-of-different-light-sources-a-incandescent-tungsten-light-bulb-b_fig1_312320039

     

    http://dev.informationdisplay.org/IDArchive/2015/NovemberDecember/FrontlineTechnologyCandleLikeEmission.aspx

     

     

    Tungsten Lights
    Tungsten lights are used to light interiors and to match domestic or office locations.

    Advantages of Tungsten Lights
    Almost perfect color rendition
    Low cost
    Does not use mercury like CFLs (fluorescent) or mercury vapor lights
    Better color temperature than standard tungsten
    Longer life than a conventional incandescent
    Instant on to full brightness, no warm-up time, and it is dimmable

    Disadvantages of Tungsten Lights
    Extremely hot
    High power requirement
    The lamp is sensitive to oils and cannot be touched
    The bulb is capable of blowing and sending hot glass shards outward. A screen or layer of glass on the outside of the lamp can protect users.

     

     

    Hydrargyrum medium-arc iodide lights
    HMIs are used when high output is required. They are also used to recreate sun shining through windows or to fake additional sun while shooting exteriors. HMIs can light huge areas at once.

    Advantages of HMI lights
    High light output
    Higher efficiency
    High color temperature

    Disadvantages of HMI lights
    High cost
    High power requirement
    Dims only to about 50%
    The color temperature increases with dimming
    HMI bulbs may explode if dropped and can release toxic chemicals

     

     

    Fluorescent
    Fluorescent film lighting is achieved by laying multiple tubes next to each other, combining as many as you want for the desired brightness. The good news is you can choose your bulbs to either be warm or cool depending on the scenario you’re shooting. You want to get these bulbs close to the subject because they’re not great at opening up spaces. Fluorescent lighting is used to light interiors and is more compact and cooler than tungsten or HMI lighting.

    Advantages of Fluorescent lights
    High efficiency
    Low power requirement
    Low cost
    Long lamp life
    Cool
    Capable of soft even lighting over a large area
    Lightweight

    Disadvantages of Fluorescent lights
    Flicker
    Domestic tubes have low CRI and poor color rendition

     

     

    LED
    LEDs are more and more common on film sets. You can power them with batteries, which makes them portable and sleek – no messy cables needed. You can rig your own panels of LED lights to fit any space necessary as well. LEDs can also power Fresnel-style lamp heads such as the ARRI L-Series.

    Advantages of LED light
    Soft, even lighting
    Pure light without UV-artifacts
    High efficiency
    Low power consumption, can be battery powered
    Excellent dimming by means of pulse width modulation control
    Long lifespan
    Environmentally friendly
    Insensitive to shock
    No risk of explosion

    Disadvantages of LED light
    High cost: LEDs are currently still expensive for their total light output


    Read more: Types of Film Lights and their efficiency – CRI, Color Temperature and Luminous Efficacy
  • Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

    https://jo.dreggn.org/home/2018_manuka.pdf

     

    http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/

     

    The Manuka rendering architecture has been designed in the spirit of the classic Reyes rendering architecture. At its core, Reyes is based on stochastic rasterisation of micropolygons, facilitating depth of field, motion blur, high geometric complexity, and programmable shading.

     

    Light transport is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade-on-hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders take the role of generating the inputs of the local material structure, which is then used by path sampling logic to evaluate contributions and to inform what further rays to cast through the scene.

     

    Over the years, however, the expectations have risen substantially when it comes to image quality. Computing pictures which are indistinguishable from real footage requires accurate simulation of light transport, which is most often performed using some variant of Monte Carlo path tracing. Unfortunately this paradigm requires random memory accesses to the whole scene and does not lend itself well to a rasterisation approach at all.

     

    Manuka is both a uni-directional and bidirectional path tracer and encompasses multiple importance sampling (MIS). Interestingly, and importantly for production character skin work, it is the first major production renderer to incorporate spectral MIS in the form of a new ‘Hero Spectral Sampling’ technique, which was recently published at Eurographics Symposium on Rendering 2014.
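    The core trick of hero wavelength spectral sampling is to trace one “hero” wavelength per path and derive a few companion wavelengths by rotating evenly across the visible range, so all of them can reuse the same path under MIS. A hedged sketch of just that wavelength-rotation step (bounds and sample count are illustrative, not Manuka’s actual values):

    import random

    LAM_MIN, LAM_MAX = 400.0, 700.0  # visible range in nm (illustrative bounds)
    N = 4                            # wavelengths carried along each path

    def hero_wavelengths(rng):
        span = LAM_MAX - LAM_MIN
        hero = LAM_MIN + rng.random() * span  # uniformly sampled hero wavelength
        # Companions are equal rotations of the hero across the range, with wrap-around.
        return [LAM_MIN + (hero - LAM_MIN + j * span / N) % span for j in range(N)]

    print([round(l, 1) for l in hero_wavelengths(random.Random(7))])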

     

    Manuka proposes a shade-before-hit paradigm instead, minimising I/O strain (and some memory costs) on the system by leveraging locality of reference: pattern-generation shaders run before light transport simulation by path sampling, “compressing” any BVH structure as needed and limiting duplication of source data.
    The difference from Reyes is that instead of baking colors into the geometry, Manuka bakes surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up front and baked into the geometry.
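    A toy sketch of the contrast (every name here is illustrative, not Manuka’s API): shader outputs are baked onto tessellated vertices up front, so hit-time “shading” reduces to interpolating the baked values. Manuka interpolates BSDF inputs bilinearly; the sketch simplifies this to linear interpolation between two vertices:

    def pattern_shader(u, v):
        # Stand-in for an expensive pattern-generation shader (texture lookups etc.).
        return (u, v, 0.5)

    def bake(vertex_uvs):
        # Front end: run shaders once per tessellated vertex, before light transport.
        return [pattern_shader(u, v) for (u, v) in vertex_uvs]

    def shade_at_hit(baked, i0, i1, t):
        # Light transport: no shader invocation, just interpolate baked inputs.
        a, b = baked[i0], baked[i1]
        return tuple((1 - t) * x + t * y for x, y in zip(a, b))

    baked = bake([(0, 0), (1, 0), (1, 1), (0, 1)])  # a pre-tessellated quad
    print(shade_at_hit(baked, 0, 2, 0.25))          # a hit 25% between two vertices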

     

    The main drawback of this method is that geometry has to be tessellated to its highest, stable topology before shading can be evaluated properly, hence a high cost to first pixel. Even a basic four-vertex quad becomes a much more complex model with this approach.

     

     

    Manuka uses the RenderMan Shading Language (RSL) for programmable shading [Pixar Animation Studios 2015], but does not invoke RSL shaders when intersecting a ray with a surface (often called shade-on-hit). Instead, it pre-tessellates and pre-shades all the input geometry in the front end of the renderer.
    This way, shading computations can be ordered to support near-optimal texture locality, vectorisation, and parallelism. The system avoids repeated evaluation of shaders at the same surface point and presents a minimal amount of memory to be accessed during light transport. An added benefit is that the acceleration structure for ray tracing (a bounding volume hierarchy, BVH) is built once, on the final tessellated geometry, which allows ray tracing more efficiently than multi-level BVHs and avoids costly caching of on-demand tessellated micropolygons and the associated scheduling issues.

     

    For the shading reasons above, the studio’s approach to AOVs is to combine complex shading with ray paths in the render rather than hand a multi-pass render to compositing.

     

    As for the spectral rendering component: the light transport stage is fully spectral, using a continuously sampled wavelength which is traced with each path and used to apply the spectral sensitivity of the camera sensor. This allows Manuka to faithfully support any degree of observer metamerism in the camera footage the renders are intended to match, as well as complex materials that exhibit wavelength-dependent phenomena such as diffraction, dispersion, interference, iridescence, chromatic extinction, and Rayleigh scattering in participating media.

     

    As opposed to the original Reyes paper, Manuka uses bilinear interpolation of these BSDF inputs later, when evaluating BSDFs per path vertex during light transport. This improves the temporal stability of geometry that moves very slowly with respect to the pixel raster.

     

    In terms of the pipeline, everything rendered at Weta was already completely interwoven with their deep data pipeline, and Manuka was very much written with deep data in mind. Manuka does not so much extend the deep capabilities as fully match the already extremely complex and powerful setup Weta Digital enjoys with RenderMan. For example, an ape in a scene can be selected, its ID is available, and a NUKE artist can then paint in 3D, say, a hand and part of the way up the arm of the neutral-posed ape.

     

    We called our system Manuka, as a respectful nod to Reyes: we had heard a story from a former ILM employee about how Reyes got its name from how fond the early Pixar people were of their lunches at Point Reyes, and decided to name our system after our surrounding natural environment, too. Manuka is a kind of tea tree very common in New Zealand which has very many very small leaves, in analogy to micropolygons in a tree structure for ray tracing. It also happens to be the case that Weta Digital’s main site is on Manuka Street.

     

     

    Read more: Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline
  • Is it possible to get a dark yellow?

    https://www.patreon.com/posts/102660674

     

    https://www.linkedin.com/posts/stephenwestland_here-is-a-post-about-the-dark-yellow-problem-activity-7187131643764092929-7uCL

     

    Read more: Is it possible to get a dark yellow?
  • Björn Ottosson – How software gets color wrong

    https://bottosson.github.io/posts/colorwrong/

     

    Most software around us today is decent at accurately displaying colors. Processing of colors is another story, unfortunately, and is often done badly.

     

    To understand what the problem is, let’s start with an example of three ways of blending green and magenta:

    • Perceptual blend – A smooth transition using a model designed to mimic human perception of color. The blending is done so that the perceived brightness and color varies smoothly and evenly.
    • Linear blend – A model for blending color based on how light behaves physically. This type of blending can occur in many ways naturally, for example when colors are blended together by focus blur in a camera or when viewing a pattern of two colors at a distance.
    • sRGB blend – This is how colors would normally be blended in computer software, using sRGB to represent the colors (a code sketch of the difference follows below).
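    The difference between the last two comes down to where the averaging happens. A minimal sketch of a 50/50 green-magenta mix, using the standard sRGB transfer function (the blend itself is the only assumption):

    def srgb_to_linear(c):
        # Standard sRGB decoding to linear light.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        # Standard sRGB encoding from linear light.
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    green, magenta = (0.0, 1.0, 0.0), (1.0, 0.0, 1.0)

    # "sRGB blend": average the encoded values directly (what most software does).
    naive = [(g + m) / 2 for g, m in zip(green, magenta)]

    # Linear blend: decode to linear light, average, then re-encode.
    linear = [linear_to_srgb((srgb_to_linear(g) + srgb_to_linear(m)) / 2)
              for g, m in zip(green, magenta)]

    print([round(c, 3) for c in naive])   # [0.5, 0.5, 0.5]   -> too dark a gray
    print([round(c, 3) for c in linear])  # [0.735, 0.735, 0.735] -> physically brighter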

     

    Let’s look at some more examples of blending of colors, to see how these problems surface more practically. The examples use strong colors since then the differences are more pronounced. This is using the same three ways of blending colors as the first example.

     

    Instead of making it as easy as possible to work with color, most software make it unnecessarily hard, by doing image processing with representations not designed for it. Approximating the physical behavior of light with linear RGB models is one easy thing to do, but more work is needed to create image representations tailored for image processing and human perception.

     

    Also see:

    https://www.pixelsham.com/2022/04/05/bjorn-ottosson-okhsv-and-okhsl-two-new-color-spaces-for-color-picking/

    Read more: Björn Ottosson – How software gets color wrong
  • Photography Basics : Spectral Sensitivity Estimation Without a Camera

    https://color-lab-eilat.github.io/Spectral-sensitivity-estimation-web/

     

    A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known. As consumer cameras are not designed for high-precision visual tasks, manufacturers do not disclose spectral sensitivities. Their estimation requires a costly optical setup, which triggered researchers to come up with numerous indirect methods that aim to lower cost and complexity by using color targets. However, the use of color targets gives rise to new complications that make the estimation more difficult, and consequently, there currently exists no simple, low-cost, robust go-to method for spectral sensitivity estimation that non-specialized research labs can adopt. Furthermore, even if not limited by hardware or cost, researchers frequently work with imagery from multiple cameras that they do not have in their possession.

     

    To provide a practical solution to this problem, we propose a framework for spectral sensitivity estimation that not only does not require any hardware (including a color target), but also does not require physical access to the camera itself. Similar to other work, we formulate an optimization problem that minimizes a two-term objective function: a camera-specific term from a system of equations, and a universal term that bounds the solution space.

     

    Different than other work, we utilize publicly available high-quality calibration data to construct both terms. We use the colorimetric mapping matrices provided by the Adobe DNG Converter to formulate the camera-specific system of equations, and constrain the solutions using an autoencoder trained on a database of ground-truth curves. On average, we achieve reconstruction errors as low as those that can arise due to manufacturing imperfections between two copies of the same camera. We provide predicted sensitivities for more than 1,000 cameras that the Adobe DNG Converter currently supports, and discuss which tasks can become trivial when camera responses are available.
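    In spirit, the optimization is a regularized least-squares problem. The hedged sketch below uses synthetic stand-ins: A plays the role of the camera-specific system (the paper builds its version from Adobe DNG colorimetric matrices), and a fixed smooth prior stands in for the paper’s learned autoencoder term:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 33                                 # sensitivity samples across the spectrum
    A = rng.normal(size=(9, n))            # synthetic constraints, NOT real DNG data
    prior = np.exp(-0.5 * ((np.arange(n) - 16) / 5.0) ** 2)  # bell-shaped guess
    b = A @ prior + rng.normal(scale=0.01, size=9)           # synthetic targets

    lam = 0.1  # weight of the universal (solution-space) term
    # Closed-form minimizer of ||A s - b||^2 + lam * ||s - prior||^2.
    s = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * prior)
    print(np.round(s[:5], 3))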

     

     

     

    Read more: Photography Basics : Spectral Sensitivity Estimation Without a Camera
  • A Brief History of Color in Art

    www.artsy.net/article/the-art-genome-project-a-brief-history-of-color-in-art

    Of all the pigments that have been banned over the centuries, the color most missed by painters is likely Lead White.

    This hue could capture and reflect a gleam of light like no other, though its production was anything but glamorous. The 17th-century Dutch method for manufacturing the pigment involved layering cow and horse manure over lead and vinegar. After three months in a sealed room, these materials would combine to create flakes of pure white. While scientists in the late 19th century identified lead as poisonous, it wasn’t until 1978 that the United States banned the production of lead white paint.

    More reading:
    www.canva.com/learn/color-meanings/

    https://www.infogrades.com/history-events-infographics/bizarre-history-of-colors/

    Read more: A Brief History of Color in Art

LIGHTING





