COLOR

  • Christopher Butler – Understanding the Eye-Mind Connection – Vision is a mental process

    https://www.chrbutler.com/understanding-the-eye-mind-connection

     

    The intricate relationship between the eyes and the brain, often termed the eye-mind connection, reveals that vision is predominantly a cognitive process. This understanding has profound implications for fields such as design, where capturing and maintaining attention is paramount. This essay delves into the nuances of visual perception, the brain’s role in interpreting visual data, and how this knowledge can be applied to effective design strategies.

     

    This cognitive aspect of vision is evident in phenomena such as optical illusions, where the brain interprets visual information in a way that contradicts physical reality. These illusions underscore that what we “see” is not merely a direct recording of the external world but a constructed experience shaped by cognitive processes.

     

    Understanding the cognitive nature of vision is crucial for effective design. Designers must consider how the brain processes visual information to create compelling and engaging visuals. This involves several key principles:

    1. Attention and Engagement
    2. Visual Hierarchy
    3. Cognitive Load Management
    4. Context and Meaning

     

     

    Read more: Christopher Butler – Understanding the Eye-Mind Connection – Vision is a mental process
  • Sensitivity of human eye

    http://www.wikilectures.eu/index.php/Spectral_sensitivity_of_the_human_eye

    http://www.normankoren.com/Human_spectral_sensitivity_small.jpg

    Spectral sensitivity of the eye is influenced by light intensity, and light intensity determines the level of activity of the cone and rod cells. This is the main characteristic of human vision. Sensitivity to individual colors, in other words to wavelengths of the light spectrum, is explained by the RGB (red-green-blue) theory. This theory assumes that there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, and that their mutual interaction allows us to perceive all colors of the spectrum.

    http://weeklysciencequiz.blogspot.com/2013/01/violet-skies-are-for-birds.html

     

     

    Sensitivity of the human eye to light increases as light intensity decreases. In daylight conditions the cones respond, and the eye is most sensitive at 555 nm. In darkness the rods respond, and the eye is most sensitive at 507 nm.

    As light intensity decreases, cone function becomes less effective and rhodopsin accumulates in the rods. The activated rods can then respond to light stimuli of much lower intensity.

     

    https://www.nde-ed.org/EducationResources/CommunityCollege/PenetrantTest/Introduction/lightresponse.htm

    The three curves in the figure above show the normalized response of an average human eye to various amounts of ambient light. The shift in sensitivity occurs because two types of photoreceptors, cones and rods, are responsible for the eye’s response to light. The curve on the right shows the eye’s response under normal lighting conditions, called the photopic response; the cones respond to light under these conditions.

     

    As mentioned previously, cones are composed of three different photo pigments that enable color perception. This curve peaks at 555 nanometers, which means that under normal lighting conditions, the eye is most sensitive to a yellowish-green color. When the light levels drop to near total darkness, the response of the eye changes significantly as shown by the scotopic response curve on the left. At this level of light, the rods are most active and the human eye is more sensitive to the light present, and less sensitive to the range of color. Rods are highly sensitive to light but are comprised of a single photo pigment, which accounts for the loss in ability to discriminate color. At this very low light level, sensitivity to blue, violet, and ultraviolet is increased, but sensitivity to yellow and red is reduced. The heavier curve in the middle represents the eye’s response at the ambient light level found in a typical inspection booth. This curve peaks at 550 nanometers, which means the eye is most sensitive to yellowish-green color at this light level. Fluorescent penetrant inspection materials are designed to fluoresce at around 550 nanometers to produce optimal sensitivity under dim lighting conditions.
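    As a rough illustration of those two curves, the sketch below (my own approximation, not data from the article) models the photopic and scotopic responses as Gaussians centred at 555 nm and 507 nm; the widths are assumed values chosen only to make the shapes plausible.

    import math

    def gaussian_response(wavelength_nm, peak_nm, width_nm):
        """Very rough bell-curve stand-in for a luminous efficiency curve."""
        return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * width_nm ** 2))

    def photopic(wavelength_nm):   # daylight / cone-driven vision, peak ~555 nm
        return gaussian_response(wavelength_nm, 555.0, 45.0)

    def scotopic(wavelength_nm):   # darkness / rod-driven vision, peak ~507 nm
        return gaussian_response(wavelength_nm, 507.0, 45.0)

    for wl in (450, 507, 555, 600, 650):
        print(f"{wl} nm  photopic={photopic(wl):.3f}  scotopic={scotopic(wl):.3f}")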

     

    Read more: Sensitivity of human eye
  • Björn Ottosson – How software gets color wrong

    https://bottosson.github.io/posts/colorwrong/

     

    Most software around us today is decent at accurately displaying colors. Processing of colors is another story, unfortunately, and is often done badly.

     

    To understand what the problem is, let’s start with an example of three ways of blending green and magenta:

    • Perceptual blend – A smooth transition using a model designed to mimic human perception of color. The blending is done so that the perceived brightness and color varies smoothly and evenly.
    • Linear blend – A model for blending color based on how light behaves physically. This type of blending can occur in many ways naturally, for example when colors are blended together by focus blur in a camera or when viewing a pattern of two colors at a distance.
    • sRGB blend – This is how colors would normally be blended in computer software, using sRGB to represent the colors. 
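
    As a hands-on illustration of the difference between the linear and sRGB approaches above, here is a minimal sketch (my own, not Björn Ottosson’s code) that blends two colors either directly on the encoded sRGB values or in linear light using the standard sRGB transfer functions; a perceptual blend would additionally require a perceptual color model such as Oklab.

    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    def blend_srgb(a, b, t):
        # naive blend: interpolate the encoded sRGB values directly
        return [(1 - t) * x + t * y for x, y in zip(a, b)]

    def blend_linear(a, b, t):
        # physically based blend: interpolate in linear light, then re-encode
        return [linear_to_srgb((1 - t) * srgb_to_linear(x) + t * srgb_to_linear(y))
                for x, y in zip(a, b)]

    green, magenta = [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]
    print(blend_srgb(green, magenta, 0.5))    # darker, muddier midpoint
    print(blend_linear(green, magenta, 0.5))  # brighter, closer to how light mixes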

     

    Let’s look at some more examples of blending colors to see how these problems surface in practice. The examples use strong colors, since that makes the differences more pronounced. They use the same three ways of blending colors as the first example.

     

    Instead of making it as easy as possible to work with color, most software makes it unnecessarily hard by doing image processing with representations not designed for it. Approximating the physical behavior of light with linear RGB models is one easy step, but more work is needed to create image representations tailored for image processing and human perception.

     

    Also see:

    https://www.pixelsham.com/2022/04/05/bjorn-ottosson-okhsv-and-okhsl-two-new-color-spaces-for-color-picking/

    Read more: Björn Ottosson – How software gets color wrong
  • Tim Kang – calibrated white light values in sRGB color space

    https://www.linkedin.com/posts/timkang_colorimetry-cinematography-nerdalert-activity-7058330978007584769-9xln

     

    8bit sRGB encoded
    2000K 255 139 22
    2700K 255 172 89
    3000K 255 184 109
    3200K 255 190 122
    4000K 255 211 165
    4300K 255 219 178
    D50 255 235 205
    D55 255 243 224
    D5600 255 244 227
    D6000 255 249 240
    D65 255 255 255
    D10000 202 221 255
    D20000 166 196 255

    8bit Rec709 Gamma 2.4
    2000K 255 145 34
    2700K 255 177 97
    3000K 255 187 117
    3200K 255 193 129
    4000K 255 214 170
    4300K 255 221 182
    D50 255 236 208
    D55 255 243 226
    D5600 255 245 229
    D6000 255 250 241
    D65 255 255 255
    D10000 204 222 255
    D20000 170 199 255

    8bit Display P3 encoded
    2000K 255 154 63
    2700K 255 185 109
    3000K 255 195 127
    3200K 255 201 138
    4000K 255 219 176
    4300K 255 225 187
    D50 255 239 212
    D55 255 245 228
    D5600 255 246 231
    D6000 255 251 242
    D65 255 255 255
    D10000 208 223 255
    D20000 175 199 255

    10bit Rec2020 PQ (100 nits)
    2000K 520 435 273
    2700K 520 466 358
    3000K 520 475 384
    3200K 520 480 399
    4000K 520 495 446
    4300K 520 500 458
    D50 520 510 482
    D55 520 514 497
    D5600 520 514 500
    D6000 520 517 509
    D65 520 520 520
    D10000 479 489 520
    D20000 448 464 520
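
    As a hypothetical usage sketch (mine, not from Tim Kang’s post), the values above can be applied as a white-balance tint by scaling pixels in linear light; here the 8-bit sRGB row for 3200K (255, 190, 122) is used.

    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    # 3200K white point from the sRGB table above, normalised to 0..1 linear light
    tint_3200k = [srgb_to_linear(v / 255.0) for v in (255, 190, 122)]

    def tint_pixel(rgb_8bit):
        """Scale one 8-bit sRGB pixel by the 3200K white point in linear light."""
        lin = [srgb_to_linear(v / 255.0) * w for v, w in zip(rgb_8bit, tint_3200k)]
        return [round(max(0.0, min(1.0, linear_to_srgb(v))) * 255) for v in lin]

    print(tint_pixel((255, 255, 255)))  # a D65 white lands on roughly (255, 190, 122)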

     

    Read more: Tim Kang – calibrated white light values in sRGB color space
  • Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline

    https://jo.dreggn.org/home/2018_manuka.pdf

     

    http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/

     

    The Manuka rendering architecture has been designed in the spirit of the classic Reyes rendering architecture. At its core, Reyes is based on stochastic rasterisation of micropolygons, facilitating depth of field, motion blur, high geometric complexity, and programmable shading.

     

    This is commonly achieved with Monte Carlo path tracing, using a paradigm often called shade-on-hit, in which the renderer alternates tracing rays with running shaders on the various ray hits. The shaders generate the inputs of the local material structure, which is then used by the path sampling logic to evaluate contributions and to inform what further rays to cast through the scene.

     

    Over the years, however, the expectations have risen substantially when it comes to image quality. Computing pictures which are indistinguishable from real footage requires accurate simulation of light transport, which is most often performed using some variant of Monte Carlo path tracing. Unfortunately this paradigm requires random memory accesses to the whole scene and does not lend itself well to a rasterisation approach at all.

     

    Manuka is both a uni-directional and bidirectional path tracer and encompasses multiple importance sampling (MIS). Interestingly, and importantly for production character skin work, it is the first major production renderer to incorporate spectral MIS in the form of a new ‘Hero Spectral Sampling’ technique, which was recently published at Eurographics Symposium on Rendering 2014.

     

    Manuka proposes a shade-before-hit paradigm instead, minimising I/O strain (and some memory costs) on the system and leveraging locality of reference by running pattern-generation shaders before executing light transport simulation by path sampling, “compressing” any BVH structure as needed and thus also limiting duplication of source data.
    The difference with Reyes is that instead of baking colors into the geometry, Manuka bakes surface closures. This means that light transport is still calculated with path tracing, but all texture lookups etc. are done up-front and baked into the geometry.
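
    The ordering difference can be illustrated with a small, self-contained sketch (my own toy example, not Manuka code): an “expensive” shading function is either evaluated at every hit, or baked once at a fixed tessellation of the surface parameter and only interpolated during light transport.

    import math, random

    def expensive_shader(u):
        # stand-in for texture lookups / pattern generation on a surface parameter u
        return 0.5 + 0.5 * math.sin(40.0 * u) * math.exp(-u)

    # shade-before-hit: bake the shader once at a fixed tessellation
    N = 64
    baked = [expensive_shader(i / (N - 1)) for i in range(N)]

    def shade_on_hit(u):
        return expensive_shader(u)            # one shader evaluation per ray hit

    def shade_before_hit(u):
        x = u * (N - 1)                       # interpolate the pre-baked values instead
        i = min(int(x), N - 2)
        t = x - i
        return (1 - t) * baked[i] + t * baked[i + 1]

    for u in [random.random() for _ in range(5)]:
        print(f"u={u:.3f}  on-hit={shade_on_hit(u):.4f}  baked={shade_before_hit(u):.4f}")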

     

    The main drawback with this method is that geometry has to be tessellated to its highest, stable topology before shading can be evaluated properly, hence the high cost to first pixel. Even a basic four-vertex quad becomes a much more complex model with this approach.

     

     

    Manuka uses the RenderMan Shading Language (RSL) for programmable shading [Pixar Animation Studios 2015], but we do not invoke RSL shaders when intersecting a ray with a surface (often called shade-on-hit). Instead, we pre-tessellate and pre-shade all the input geometry in the front end of the renderer.
    This way, we can efficiently order shading computations to support near-optimal texture locality, vectorisation, and parallelism. This system avoids repeated evaluation of shaders at the same surface point and presents a minimal amount of memory to be accessed during light transport. An added benefit is that the acceleration structure for ray tracing (a bounding volume hierarchy, BVH) is built once on the final tessellated geometry, which allows us to ray trace more efficiently than multi-level BVHs and avoids costly caching of on-demand tessellated micropolygons and the associated scheduling issues.

     

    For the shading reasons above, in terms of AOVs the studio approach is to combine complex shading with ray paths in the render rather than passing a multi-pass render to compositing.

     

    For the spectral rendering component, the light transport stage is fully spectral, using a continuously sampled wavelength which is traced with each path and used to apply the spectral camera sensitivity of the sensor. This allows faithful support of any degree of observer metamerism with respect to the camera footage the renders are intended to match, as well as complex materials which require wavelength-dependent phenomena such as diffraction, dispersion, interference, iridescence, or chromatic extinction and Rayleigh scattering in participating media.

     

    As opposed to the original Reyes paper, bilinear interpolation of these BSDF inputs is used later when evaluating BSDFs per path vertex during light transport. This improves the temporal stability of geometry which moves very slowly with respect to the pixel raster.

     

    In terms of the pipeline, everything rendered at Weta was already completely interwoven with their deep data pipeline, and Manuka very much was written with deep data in mind. Manuka does not so much extend the deep capabilities as fully match the already extremely complex and powerful setup Weta Digital enjoys with RenderMan. For example, an ape in a scene can be selected, its ID is available, and a NUKE artist can then paint in 3D, say, a hand and part of the way up the neutral-posed ape.

     

    We called our system Manuka as a respectful nod to Reyes: we had heard a story from a former ILM employee about how Reyes got its name from how fond the early Pixar people were of their lunches at Point Reyes, and decided to name our system after our surrounding natural environment, too. Manuka is a kind of tea tree very common in New Zealand which has very many very small leaves, in analogy to micropolygons in a tree structure for ray tracing. It also happens to be the case that Weta Digital’s main site is on Manuka Street.

     

     

    Read more: Weta Digital – Manuka Raytracer and Gazebo GPU renderers – pipeline
  • The Forbidden colors – Red-Green & Blue-Yellow: The Stunning Colors You Can’t See

    www.livescience.com/17948-red-green-blue-yellow-stunning-colors.html

     

     

    While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.

     

    https://en.rockcontent.com/blog/the-use-of-yellow-in-data-design

    The back of the retina is covered in light-sensitive neurons known as cone cells and rod cells. There are three types of cone cells, each sensitive to different ranges of light. These ranges overlap, but for convenience the cones are referred to as blue (short-wavelength), green (medium-wavelength), and red (long-wavelength). The rod cells are primarily used in low-light situations, so we’ll ignore those for now.

     

    When light enters the eye and hits the cone cells, the cones get excited and send signals to the brain through the visual cortex. Different wavelengths of light excite different combinations of cones to varying levels, which generates our perception of color. You can see that the red cones are most sensitive to light, and the blue cones are least sensitive. The sensitivity of green and red cones overlaps for most of the visible spectrum.

     

    Here’s how your brain takes the signals of light intensity from the cones and turns it into color information. To see red or green, your brain finds the difference between the levels of excitement in your red and green cones. This is the red-green channel.

     

    To get “brightness,” your brain combines the excitement of your red and green cones. This creates the luminance, or black-white, channel. To see yellow or blue, your brain then finds the difference between this luminance signal and the excitement of your blue cones. This is the yellow-blue channel.
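
    A toy sketch of that opponent-channel arithmetic (my own simplification, treating cone excitations as plain numbers between 0 and 1):

    def opponent_channels(red_cone, green_cone, blue_cone):
        red_green = red_cone - green_cone       # > 0 reads reddish, < 0 greenish
        luminance = red_cone + green_cone       # brightness from red + green cones
        yellow_blue = luminance - blue_cone     # > 0 reads yellowish, < 0 bluish
        return red_green, luminance, yellow_blue

    print(opponent_channels(0.9, 0.85, 0.1))   # red and green both high -> strong yellow
    print(opponent_channels(0.1, 0.15, 0.9))   # blue cone dominates -> blue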

     

    From the calculations made in the brain along those three channels, we get four basic colors: blue, green, yellow, and red. Seeing blue is what you experience when low-wavelength light excites the blue cones more than the green and red.

     

    Seeing green happens when light excites the green cones more than the red cones. Seeing red happens when only the red cones are excited by high-wavelength light.

     

    Here’s where it gets interesting. Seeing yellow is what happens when BOTH the green AND red cones are highly excited near their peak sensitivity. This is the biggest collective excitement that your cones ever have, aside from seeing pure white.

     

    Notice that yellow occurs at peak intensity in the graph to the right. Further, the lens and cornea of the eye happen to block shorter wavelengths, reducing sensitivity to blue and violet light.

    Read more: The Forbidden colors – Red-Green & Blue-Yellow: The Stunning Colors You Can’t See
  • Image rendering bit depth

    The terms 8-bit, 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, as bits per pixel.

     

    https://en.wikipedia.org/wiki/Color_depth

     

    In color technology, color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

     

    When referring to a pixel, the concept can be defined as bits per pixel (bpp).

     

    When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.

     

    Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

     

     

    Here’s a simple explanation of each.

     

    8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
    They can store around 5 stops of light, and each channel carries a value from 0 (black) to 255 (white).
    As a comparison, DSLR cameras can capture ~12-15 stops of light, and they use RAW files to store that information.

     

    16-bit: This integer format uses 16 bits of data to represent color values for each channel. With 16 bits you get 65,536 discrete levels per channel, allowing for relatively high precision and smooth gradients. However, it has a limited dynamic range, meaning it cannot accurately represent extremely bright or dark values. It is commonly used for regular images and textures.

     

    16-bit float: This format also uses 16 bits per channel but stores floating-point numbers instead of fixed integers. Floating-point numbers allow for more precise calculations and a larger dynamic range. In this case, the 16 bits store a sign, an exponent, and a mantissa; the exponent controls the range of values that can be represented. The 16-bit float format covers a far wider dynamic range than regular 16-bit integer, making it useful for high-dynamic-range imaging (HDRI) and computations that require more range.

     

    32-bit float: (i.e. 96 bits per pixel for a color image) is considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits per channel and offers the highest precision and dynamic range among the three options. With 32 bits you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.
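
    A quick numerical check of the precision difference (my own sketch, using NumPy) is to round-trip some values through 16-bit float and look at the error, which grows with magnitude and is what eventually shows up as banding in very bright HDR regions:

    import numpy as np

    values = np.array([0.18, 1.3333, 1333.3, 13333.3, 50000.0], dtype=np.float32)
    as_half = values.astype(np.float16).astype(np.float32)  # round-trip through half float

    for v, h in zip(values, as_half):
        print(f"{v:>10.2f} stored as float16 -> {h:>10.2f}  (error {abs(v - h):.5f})")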

     

    Bits and HDR coverage

    High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

     

    A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

     

    On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

     

    The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be as prominent or perceptible. Such scenes usually span around 8-10 stops of exposure.

     

    However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

     

    To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.

     

    To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

     

    In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.

     

    HDRs used for lighting purposes are usually slightly convolved to improve sampling speed and remove specular artefacts. To that extent, 16-bit float HDRIs tend to be the most used in CG pipelines.

     

    Read more: Image rendering bit depth

LIGHTING

  • Convert between light exposure and intensity

    import math
    
    def Exposure2Intensity(exposure):
        """Convert exposure (in stops) to linear intensity: intensity = 2 ** exposure."""
        return math.pow(2, float(exposure))
    
    print(Exposure2Intensity(0))      # 1.0
    
    def Intensity2Exposure(intensity):
        """Convert linear intensity to exposure (in stops): exposure = log2(intensity)."""
        inarg = float(intensity)
        if inarg <= 0:
            # log2 is undefined for zero or negative intensities;
            # clamp to a very small positive value instead of failing.
            print("Exposure of zero or negative intensity is undefined. "
                  "Clamping to a very small value instead (1e-323).")
            inarg = 1e-323
        return math.log(inarg, 2)
    
    print(Intensity2Exposure(0.1))    # ~ -3.32
    
    
    
    
    Read more: Convert between light exposure and intensity
  • Composition – cinematography Cheat Sheet

    https://moodle.gllm.ac.uk/pluginfile.php/190622/mod_resource/content/1/Cinematography%20Cheat%20Sheet.pdf

    Where is our eye attracted first? Why?

    Size. Focus. Lighting. Color.

    Size. Mr. White (Harvey Keitel) on the right.
    Focus. He’s one of the two objects in focus.
    Lighting. Mr. White is large and in focus and Mr. Pink (Steve Buscemi) is highlighted by
    a shaft of light.
    Color. Both are black and white but the red on Mr. White’s shirt now really stands out.


    What type of lighting?

    -> High key lighting.
    Features bright, even illumination and few conspicuous shadows. This lighting key is often used in musicals and comedies.

    Low key lighting
    Features diffused shadows and atmospheric pools of light. This lighting key is often used in mysteries and thrillers.

    High contrast lighting
    Features harsh shafts of lights and dramatic streaks of blackness. This type of lighting is often used in tragedies and melodramas.

     

    What type of shot?

    Extreme long shot
    Taken from a great distance, showing much of the locale. If people are included in these shots, they usually appear as mere specks.

    -> Long shot
    Corresponds to the space between the audience and the stage in a live theater. The long shots show the characters and some of the locale.

    Full shot
    Range with just enough space to contain the human body in full. The full shot shows the character and a minimal amount of the locale.

    Medium shot
    Shows the human figure from the knees or waist up.

    Close-Up
    Concentrates on a relatively small object and shows very little, if any, locale.

    Extreme close-up
    Focuses on an unnaturally small portion of an object, giving that part great detail and symbolic significance.

     

    What angle?

    Bird’s-eye view.
    The shot is photographed directly from above. This type of shot can be disorienting, and the people photographed seem insignificant.

    High angle.
    This angle reduces the size of the objects photographed. A person photographed from this angle seems harmless and insignificant, but to a lesser extent than with the bird’s-eye view.

    -> Eye-level shot.
    The clearest view of an object, but seldom intrinsically dramatic, because it tends to be the norm.

    Low angle.
    This angle increases height and a sense of verticality, heightening the importance of the object photographed. A person shot from this angle is given a sense of power and respect.

    Oblique angle.
    For this angle, the camera is tilted laterally, giving the image a slanted appearance. Oblique angles suggest tension, transition, and impending movement. They are also called canted or Dutch angles.

     

    What is the dominant color?

    The use of color in this shot is symbolic. The scene is set in a warehouse. Both the set and characters are blues, blacks and whites.

    This was intentional, allowing the scenes and shots with blood to have a greater level of contrast.

     

    What is the Lens/Filter/Stock?

    Telephoto lens.
    A lens that draws objects closer but also diminishes the illusion of depth.

    Wide-angle lens.
    A lens that takes in a broad area and increases the illusion of depth but sometimes distorts the edges of the image.

    Fast film stock.
    Highly sensitive to light, it can register an image with little illumination. However, the final product tends to be grainy.

    Slow film stock.
    Relatively insensitive to light, it requires a great deal of illumination. The final product tends to look polished.

    The lens is not wide-angle because there isn’t a great sense of depth, nor are several planes in focus. The lens is probably long but not necessarily a telephoto lens because the depth isn’t inordinately compressed.

    The stock is fast because of the grainy quality of the image.

     

    Subsidiary Contrast; where does the eye go next?

    The two guns.

     

    How much visual information is packed into the image? Is the texture stark, moderate, or highly detailed?

    Minimal clutter in the warehouse allows a focus on a character-driven thriller.

     

    What is the Composition?

    Horizontal.
    Compositions based on horizontal lines seem visually at rest and suggest placidity or peacefulness.

    Vertical.
    Compositions based on vertical lines seem visually at rest and suggest strength.

    -> Diagonal.
    Compositions based on diagonal, or oblique, lines seem dynamic and suggest tension or anxiety.

    -> Binary. Binary structures emphasize parallelism.

    Triangle.
    Triadic compositions stress the dynamic interplay among three main elements.

    Circle.
    Circular compositions suggest security and enclosure.

     

    Is the form open or closed? Does the image suggest a window that arbitrarily isolates a fragment of the scene? Or a proscenium arch, in which the visual elements are carefully arranged and held in balance?

    The most nebulous of all the categories of mise en scene, the type of form is determined by how consciously structured the mise en scene is. Open forms stress apparently simple techniques, because with these unself-conscious methods the filmmaker is able to emphasize the immediate, the familiar, the intimate aspects of reality. In open-form images, the frame tends to be deemphasized. In closed form images, all the necessary information is carefully structured within the confines of the frame. Space seems enclosed and self-contained rather than continuous.

    Could argue this is a proscenium arch because this is such a classic shot with parallels and juxtapositions.

     

    Is the framing tight or loose? Do the character have no room to move around, or can they move freely without impediments?

    Shots where the characters are placed at the edges of the frame and have little room to move around within the frame are considered tight.

    Longer shots, in which characters have room to move around within the frame, are considered loose and tend to suggest freedom.

    Center-framed, giving us the entire scene and showing isolation, place and struggle.

     

    Depth of Field. On how many planes is the image composed (how many are in focus)? Does the background or foreground comment in any way on the mid-ground?

    Standard DOF, one background and clearly defined foreground.

     

    Which way do the characters look vis-a-vis the camera?

    An actor can be photographed in any of five basic positions, each conveying different psychological overtones.

    Full-front (facing the camera):
    the position with the most intimacy. The character is looking in our direction, inviting our complicity.

    Quarter Turn:
    the favored position of most filmmakers. This position offers a high degree of intimacy but with less emotional involvement than the full-front.

    -> Profile (looking off frame left or right):
    More remote than the quarter turn, the character in profile seems unaware of being observed, lost in his or her own thoughts.

    Three-quarter Turn:
    More anonymous than the profile, this position is useful for conveying a character’s unfriendly or antisocial feelings, for in effect, the character is partially turning his or her back on us, rejecting our interest.

    Back to Camera:
    The most anonymous of all positions, this position is often used to suggest a character’s alienation from the world. When a character has his or her back to the camera, we can only guess what’s taking place internally, conveying a sense of concealment, or mystery.

    How much space is there between the characters?

    Extremely close, for a gunfight.

     

    The way people use space can be divided into four proxemic patterns.

    Intimate distances.
    The intimate distance ranges from skin contact to about eighteen inches away. This is the distance of physical involvement–of love, comfort, and tenderness between individuals.

    -> Personal distances.
    The personal distance ranges roughly from eighteen inches away to about four feet away. These distances tend to be reserved for friends and acquaintances. Personal distances preserve the privacy between individuals, yet these ranges don’t necessarily suggest exclusion, as intimate distances often do.

    Social distances.
    The social distance ranges from four feet to about twelve feet. These distances are usually reserved for impersonal business and casual social gatherings. It’s a friendly range in most cases, yet somewhat more formal than the personal distance.

    Public distances.
    The public distance extends from twelve feet to twenty-five feet or more. This range tends to be formal and rather detached.

    Read more: Composition – cinematography Cheat Sheet
  • GretagMacbeth Color Checker Numeric Values and Middle Gray

    The human eye perceives half scene brightness not as the linear 50% of the present energy (linear values) but as 18% of the overall brightness. We are biased to perceive more information in the dark and high-contrast areas. A Macbeth chart helps with calibrating a photographic capture back into this “human perspective” of the world.

     

    https://en.wikipedia.org/wiki/Middle_gray

     

    In photography, painting, and other visual arts, middle gray or middle grey is a tone that is perceptually about halfway between black and white on a lightness scale. In photography and printing, it is typically defined as 18% reflectance in visible light.

     

    Light meters, cameras, and pictures are often calibrated using an 18% gray card or a color reference card such as a ColorChecker. On the assumption that 18% is similar to the average reflectance of a scene, a grey card can be used to estimate the required exposure of the film.
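
    A tiny sketch (mine) of the arithmetic this implies: expressing a measured reflectance as stops above or below the 18% middle-gray reference.

    import math

    def stops_from_middle_gray(reflectance, middle_gray=0.18):
        return math.log2(reflectance / middle_gray)

    print(round(stops_from_middle_gray(0.90), 2))   # ~ +2.32 stops (a bright white patch)
    print(round(stops_from_middle_gray(0.18), 2))   #    0.0 stops (the grey card itself)
    print(round(stops_from_middle_gray(0.03), 2))   # ~ -2.58 stops (a dark patch)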

     

    https://en.wikipedia.org/wiki/ColorChecker

     

     

    https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

     

    The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in and makes a guess based on that. The camera will aim for 18% gray regardless, meaning that if you take a photo of an entirely white surface and then of an entirely black surface, you should get two identical images which are both gray (at least in theory). Thus enters the Macbeth chart.

     


     

    Note that Chroma Key Green is reasonably close to an 18% gray reflectance.

    http://www.rags-int-inc.com/PhotoTechStuff/MacbethTarget/

     


     

    https://upload.wikimedia.org/wikipedia/commons/b/b4/CIE1931xy_ColorChecker_SMIL.svg

     

    RGB coordinates of the Macbeth ColorChecker

     

    https://pdfs.semanticscholar.org/0e03/251ad1e6d3c3fb9cb0b1f9754351a959e065.pdf

    Read more: GretagMacbeth Color Checker Numeric Values and Middle Gray
  • Key/Fill ratios and scene composition using false colors

    www.videomaker.com/article/c03/18984-how-to-calculate-contrast-ratios-for-more-professional-lighting-setups

     

     

    To measure the contrast ratio you will need a light meter. The process starts with you measuring the main source of light, or the key light.

     

    Get a reading from the brightest area on the face of your subject. Then, measure the area lit by the secondary light, or fill light. To make sense of what you have just measured you have to understand that the information you have just gathered is in F-stops, a measure of light. With each additional F-stop, for example going one stop from f/1.4 to f/2.0, you create a doubling of light. The reverse is also true; moving one stop from f/8.0 to f/5.6 results in a halving of the light.

     

    Let’s say you grabbed a measurement from your key light of f/8.0. Then, when you measured your fill light area, you get a reading of f/4.0. This will lead you to a contrast ratio of 4:1 because there are two stops between f/4.0 and f/8.0 and each stop doubles the amount of light. In other words, two stops x twice the light per stop = four times as much light at f/8.0 than at f/4.0.
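
    The same arithmetic in code (a small helper of my own, not from the article): f-numbers are spaced by a factor of √2 per stop, so the stop difference between two readings is 2·log2(key/fill), and the contrast ratio is 2 raised to that difference.

    import math

    def contrast_ratio(key_fstop, fill_fstop):
        stops = 2 * math.log2(key_fstop / fill_fstop)  # f-numbers change by sqrt(2) per stop
        return 2 ** stops

    print(contrast_ratio(8.0, 4.0))   # 4.0  -> the 4:1 ratio from the f/8 vs f/4 example
    print(contrast_ratio(5.6, 4.0))   # ~2.0 -> roughly a 2:1 ratio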

     

    theslantedlens.com/2017/lighting-ratios-photo-video/

     

    Examples in the post


    Read more: Key/Fill ratios and scene composition using false colors
  • Ethan Roffler interviews CG Supervisor Daniele Tosti

    Ethan Roffler
    I recently had the honor of interviewing this VFX genius and gained great insight into what it takes to work in the entertainment industry. Keep in mind, these questions are coming from an artist’s perspective but can be applied to any creative individual looking for some wisdom from a professional. So grab a drink, sit back, and enjoy this fun and insightful conversation.



    Ethan

    To start, I just wanted to say thank you so much for taking the time for this interview!

    Daniele
    My pleasure.
    When I started my career I struggled to find help. Even people in the industry at the time were not that helpful. Because of that, I decided very early on that I was going to do exactly the opposite. I spend most of my weekends talking or helping students. ;)

    Ethan
    That’s awesome! I have also come across the same struggle! Just a heads up, this will probably be the most informal interview you’ll ever have haha! Okay, so let’s start with a small introduction!

    Daniele
    Short introduction: I worked very hard and got lucky enough to work on great shows with great people. ;) Slightly longer version: I started working for a TV channel, very early, while I was learning about CG. I slowly made my way across the world, working alongside very great people on amazing shows. I learned that to be successful in this business, you have to really love what you do as much as respect the people around you. What you do will improve the final product; the way you work with people will make a difference in your life.

    Ethan
    How long have you been an artist?

    Daniele
    Loaded question. I believe I am still trying and craving to be one. After each production I finish I realize how much I still do not know. And how many things I would like to try. I guess in my CG Sup and generalist world, being an artist is about learning as much about the latest technologies and production cycles as I can, then putting that in practice. Having said that, I do consider myself a cinematographer first, as I have been doing that for about 25 years now.

    Ethan
    Words of true wisdom, the more I know the less I know:) How did you get your start in the industry?
    How did you break into such a competitive field?

    Daniele
    There were not many schools when I started. It was all about a few magazines, some books, and pushing software around trying to learn how to make pretty images. Opportunities opened because of that knowledge! The true break was learning to work hard to achieve a Suspension of Disbelief in my work that people would recognize as such. It’s not something everyone can do, but I was fortunate to not be scared of working hard, being a quick learner and having very good supervisors and colleagues to learn from.

    Ethan
    Which do you think is better, having a solid art degree or a strong portfolio?

    Daniele
    Very good question. A strong portfolio will get you a job now. A solid strong degree will likely get you a job for a longer period. Let me digress here; Working as an artist is not about being an artist, it’s about making money as an artist. Most people fail to make that difference and have either a poor career or lack the understanding to make a stable one. One should never mix art with working as an artist. You can do both only if you understand business and are fair to yourself.



    Ethan

    That’s probably the most helpful answer to that question I have ever heard.
    What’s some advice you can offer to someone just starting out who wants to break into the industry?

    Daniele
    Breaking into the industry is not just about knowing your art. It’s about knowing good business practices. Prepare a good demo reel based on the skill you are applying for; research all the places where you want to apply and why; send as many reels around as you can; follow up each reel with a phone call. Business is all about right time, right place.

    Ethan
    A follow-up question to that is: Would you consider it a bad practice to send your demo reels out in mass quantity rather than focusing on a handful of companies to research and apply for?

    Daniele
    Depends how desperate you are… I would say research is a must. To improve your options, you need to know which company is working on what and what skills they are after. If you were selling vacuum cleaners you probably would not want to waste energy contacting shoemakers or cattle farmers.

    Ethan
    What do you think the biggest killer of creativity and productivity is for you?

    Daniele
    Money…If you were thinking as an artist. ;) If you were thinking about making money as an artist… then I would say “thinking that you work alone”.

    Ethan
    Best. Answer. Ever.
    What are ways you fight complacency and maintain fresh ideas, outlooks, and perspectives

    Daniele
    Two things: Challenge yourself to go outside your comfort zone. And think outside of the box.

    Ethan
    What are the ways/habits you have that challenge yourself to get out of your comfort zone and think outside the box?

    Daniele
    If you think you are a good character painter, pick up a camera and go take pictures of amazing landscapes. If you think you are good only at painting or sketching, learn how to code in python. If you cannot solve a problem, that being a project or a person, learn to ask for help or learn about looking at the problem from various perspectives. If you are introvert, learn to be extrovert. And vice versa. And so on…

    Ethan
    How do you avoid burnout?

    Daniele
    Oh… I wish I learned about this earlier. I think anyone that has a passion in something is at risk of burning out. Artists, more than many, because we see the world differently and our passion goes deep. You avoid burnouts by thinking that you are in a long term plan and that you have an obligation to pay or repay your talent by supporting and cherishing yourself and your family, not your paycheck. You do this by treating your art as a business and using business skills when dealing with your career and using artistic skills only when you are dealing with a project itself.

    Ethan
    Looking back, what was a big defining moment for you?

    Daniele
    Recognizing that people around you, those being colleagues, friends or family, come first.
    It changed my career overnight.

    Ethan
    Who are some of your personal heroes?

    Daniele
    Too many to list. Most recently… James Cameron; Joe Letteri; Lawrence Krauss; Richard Dawkins. Because they all mix science, art, and poetry in their own way.

    Ethan
    Last question:
    What’s your dream job? ;)

    Daniele
    Teaching artists to be better at being business people… as it will help us all improve our lives and the careers we took…

    Being a VFX artist is fundamentally based on mistrust.
    This is because schedules, pipelines, technology, creative calls… all have a native and naive instability to them that causes everyone to grow a genuine but beneficial lack of trust in the status quo. This is a fine balancing act to build into your character. The VFX motto “Love everyone but trust no one” is born of that.

     

    Read more: Ethan Roffler interviews CG Supervisor Daniele Tosti
  • HDRI shooting and editing by Xuan Prada and Greg Zaal

    www.xuanprada.com/blog/2014/11/3/hdri-shooting

     

    http://blog.gregzaal.com/2016/03/16/make-your-own-hdri/

     

    http://blog.hdrihaven.com/how-to-create-high-quality-hdri/

     

    Shooting checklist

    • Full coverage of the scene (fish-eye shots)
    • Backplates for look-development (including ground or floor)
    • Macbeth chart for white balance
    • Grey ball for lighting calibration
    • Chrome ball for lighting orientation
    • Basic scene measurements
    • Material samples
    • Individual HDR artificial lighting sources if required

    Methodology

    • Plant the tripod where the action happens, stabilise it and level it
    • Set manual focus
    • Set white balance
    • Set ISO
    • Set raw+jpg
    • Set aperture
    • Meter the exposure
    • Set neutral exposure
    • Read the histogram and adjust the neutral exposure if necessary
    • Shoot a slate (operator name, location, date, time, project code name, etc)
    • Set auto bracketing
    • Shoot 5 to 7 exposures with 3 stops difference, covering the whole environment
    • Place the chrome/grey ball kit where the tripod was placed and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
    • Place the Macbeth chart 1m away from the tripod on the floor and take 3 exposures
    • Take backplates and ground/floor texture references
    • Shoot reference materials
    • Write down measurements of the scene, especially if you are shooting interiors.
    • If shooting artificial lights, take HDR samples of each individual lighting source.

    Exposures starting point

    • Day light sun visible ISO 100 F22
    • Day light sun hidden ISO 100 F16
    • Cloudy ISO 320 F16
    • Sunrise/Sunset ISO 100 F11
    • Interior well lit ISO 320 F16
    • Interior ambient bright ISO 320 F10
    • Interior bad light ISO 640 F10
    • Interior ambient dark ISO 640 F8
    • Low light situation ISO 640 F5
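
    As a small planning aid (my own sketch, not from the linked guides), the bracketing step above can be turned into a list of shutter speeds: starting from the neutral exposure and keeping ISO and aperture fixed, step the shutter by the chosen number of stops in both directions.

    def bracket_shutter_speeds(neutral_seconds, brackets=7, stops_apart=3):
        half = brackets // 2
        return [neutral_seconds * 2 ** (stops_apart * i) for i in range(-half, half + 1)]

    for t in bracket_shutter_speeds(1 / 60):
        print(f"{t:.4f} s" if t >= 1 else f"1/{round(1 / t)} s")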

     

    NOTE: The goal is to clean the initial individual brackets before or at merging time as much as possible.
    This means:

    • keeping original shooting metadata
    • de-fringing
    • removing aberration (through camera lens data or automatically)
    • at 32 bit
    • in ACEScg (or ACES) wherever possible

     

    Tips for using the chromatic ball in VFX projects:
    https://www.linkedin.com/posts/bellrodrigo_here-are-the-tips-for-using-the-chromatic-activity-7200950595438940160-AGBp

     

    Tips for Using the Chromatic Ball in VFX Projects

    The chromatic ball is an invaluable tool in VFX work, helping to capture lighting and reflection data crucial for integrating CGI elements seamlessly. Here are some tips to maximize its effectiveness:

     

    1. **Positioning**:
    – Place the chromatic ball in the same lighting conditions as the main subject. Ensure it is visible in the camera frame but not obstructing the main action.
    – Ideally, place the ball where the CGI elements will be integrated to match the lighting and reflections accurately.

     

    2. **Recording Reference Footage**:
    – Capture reference footage of the chromatic ball at the beginning and end of each scene or lighting setup. This ensures you have consistent lighting data for the entire shoot.

     

    3. **Consistent Angles**:
    – Use consistent camera angles and heights when recording the chromatic ball. This helps in comparing and matching lighting setups across different shots.

     

    4. **Combine with a Gray Ball**:
    – Use a gray ball alongside the chromatic ball. The gray ball provides a neutral reference for exposure and color balance, complementing the chromatic ball’s reflection data.

     

    5. **Marking Positions**:
    – Mark the position of the chromatic ball on the set to ensure consistency when shooting multiple takes or different camera angles.

     

    6. **Lighting Analysis**:
    – Analyze the chromatic ball footage to understand the light sources, intensity, direction, and color temperature. This information is crucial for creating realistic CGI lighting and shadows.

     

    7. **Reflection Analysis**:
    – Use the chromatic ball to capture the environment’s reflections. This helps in accurately reflecting the CGI elements within the same scene, making them blend seamlessly.

     

    8. **Use HDRI**:
    – Capture High Dynamic Range Imagery (HDRI) of the chromatic ball. HDRI provides detailed lighting information and can be used to light CGI scenes with greater realism.

     

    9. **Communication with VFX Team**:
    – Ensure that the VFX team is aware of the chromatic ball’s data and how it was captured. Clear communication ensures that the data is used effectively in post-production.

     

    10. **Post-Production Adjustments**:
    – In post-production, use the chromatic ball data to adjust the CGI elements’ lighting and reflections. This ensures that the final output is visually cohesive and realistic.

    Read more: HDRI shooting and editing by Xuan Prada and Greg Zaal