Photography basics: Exposure Value vs Photographic Exposure vs Il/Luminance vs Pixel luminance measurements
/ Featured, lighting, photography

Also see: http://www.pixelsham.com/2015/05/16/how-aperture-shutter-speed-and-iso-affect-your-photos/

 

In photography, exposure value (EV) is a number that represents a combination of a camera’s shutter speed and f-number, such that all combinations that yield the same exposure have the same EV (for any fixed scene luminance).

 

 

The EV concept was developed in an attempt to simplify choosing among combinations of equivalent camera settings. Although all camera settings with the same EV nominally give the same exposure, they do not necessarily give the same picture. EV is also used to indicate an interval on the photographic exposure scale, with 1 EV corresponding to a standard power-of-2 exposure step, commonly referred to as a stop.

 

EV 0 corresponds to an exposure time of 1 sec and a relative aperture of f/1.0. If the EV is known, it can be used to select combinations of exposure time and f-number.

 

https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

Note that EV is not equal to photographic exposure. Photographic exposure is defined as how much light hits the camera’s sensor. It depends on the camera settings, mainly aperture and shutter speed. Exposure value (EV) is a number that represents the exposure setting of the camera.

 

Thus, strictly, EV is not a measure of luminance (indirect or reflected exposure) or illuminance (incidental exposure); rather, an EV corresponds to a luminance (or illuminance) for which a camera with a given ISO speed would use the indicated EV to obtain the nominally correct exposure. Nonetheless, it is common practice among photographic equipment manufacturers to express luminance in EV for ISO 100 speed, as when specifying metering range or autofocus sensitivity.

 

The exposure depends on two things: how much light gets through the lenses to the camera’s sensor and for how long the sensor is exposed. The former is a function of the aperture value while the latter is a function of the shutter speed. Exposure value is a number that represents this potential amount of light that could hit the sensor. It is important to understand that exposure value is a measure of how exposed the sensor is to light and not a measure of how much light actually hits the sensor. The exposure value is independent of how lit the scene is. For example a pair of aperture value and shutter speed represents the same exposure value both if the camera is used during a very bright day or during a dark night.

 

Each exposure value number represents all the possible shutter and aperture settings that result in the same exposure. Although the exposure value is the same for different combinations of aperture values and shutter speeds the resulting photo can be very different (the aperture controls the depth of field while shutter speed controls how much motion is captured).

EV 0.0 is defined as the exposure when setting the aperture to f-number 1.0 and the shutter speed to 1 second. All other exposure values are relative to that number. Exposure values are on a base two logarithmic scale. This means that every single step of EV – plus or minus 1 – represents the exposure (actual light that hits the sensor) being halved or doubled.

https://www.streetdirectory.com/travel_guide/141307/photography/exposure_value_ev_and_exposure_compensation.html

 

Formula

https://en.wikipedia.org/wiki/Exposure_value

 

https://www.scantips.com/lights/math.html

 

which means   2^EV = N² / t

where

  • N is the relative aperture (f-number). Important: f-stop values must first be squared in most calculations
  • t is the exposure time (shutter speed) in seconds

EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

Example: If f/16 and 1/4 second, then this is:

(N² / t) = (16 × 16 ÷ 1/4) = (16 × 16 × 4) = 1024.

Log₂(1024) is EV 10, meaning 2^10 = 1024.
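The worked example above can be sketched in a couple of lines of Python (a minimal illustration; the function name is ours):

```python
import math

def exposure_value(f_number, shutter_seconds):
    # EV = log2(N^2 / t), with N the f-number and t the shutter time in seconds
    return math.log2(f_number ** 2 / shutter_seconds)

print(exposure_value(16, 1/4))   # f/16 at 1/4 s -> EV 10
print(exposure_value(1.0, 1.0))  # f/1.0 at 1 s  -> EV 0
```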

 

Collecting photographic exposure using Light Meters

https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in, and makes a guess based on that. The camera will aim for 18% gray, meaning if you take a photo of an entirely white surface, and an entirely black surface you should get two identical images which both are gray (at least in theory)

https://en.wikipedia.org/wiki/Light_meter

For reflected-light meters, camera settings are related to ISO speed and subject luminance by the reflected-light exposure equation:

N² / t = L × S / K

where

  • N is the relative aperture (f-number)
  • t is the exposure time (“shutter speed”) in seconds
  • L is the average scene luminance
  • S is the ISO arithmetic speed
  • K is the reflected-light meter calibration constant

 

For incident-light meters, camera settings are related to ISO speed and subject illuminance by the incident-light exposure equation:

N² / t = E × S / C

where

  • E is the illuminance (in lux)
  • C is the incident-light meter calibration constant

 

Two values for K are in common use: 12.5 (Canon, Nikon, and Sekonic) and 14 (Minolta, Kenko, and Pentax); the difference between the two values is approximately 1/6 EV.
For C a value of 250 is commonly used.
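The stated gap between the two K values can be checked directly; a quick sketch in Python (the variable name is ours):

```python
import math

# Difference in EV between the two common reflected-light calibration constants
delta_ev = math.log2(14 / 12.5)
print(round(delta_ev, 3))  # ~0.163 EV, i.e. roughly 1/6 EV
```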

 

Nonetheless, it is common practice among photographic equipment manufacturers to also express luminance in EV for ISO 100 speed. Using K = 12.5, the relationship between EV at ISO 100 and luminance L is then:

L = 2^(EV−3)

 

The situation with incident-light meters is more complicated than that for reflected-light meters, because the calibration constant C depends on the sensor type. Illuminance is measured with a flat sensor; a typical value for C is 250 with illuminance in lux. Using C = 250, the relationship between EV at ISO 100 and illuminance E is then:

E = 2.5 × 2^EV
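Both ISO 100 relationships can be expressed as small helpers; a sketch under the stated K = 12.5 and C = 250 assumptions (function names are ours):

```python
def luminance_from_ev(ev):
    # L = 2^(EV - 3) cd/m^2, from the reflected-light equation with K = 12.5, ISO 100
    return 2 ** (ev - 3)

def illuminance_from_ev(ev):
    # E = 2.5 * 2^EV lux, from the incident-light equation with C = 250, ISO 100
    return 2.5 * 2 ** ev

print(luminance_from_ev(15))    # 4096 cd/m^2 (bright sunlit scene)
print(illuminance_from_ev(15))  # 81920.0 lux
```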

 

https://nofilmschool.com/2018/03/want-easier-and-faster-way-calculate-exposure-formula

Three basic factors go into the exposure formula: aperture, shutter, and ISO, plus a light meter calibration constant.

f-stop²/shutter (in seconds) = lux * ISO/C

 

If you know any four of those variables, you’ll be able to calculate the missing one.

So, say you want to figure out how much light you’re going to need in order to shoot at a certain f-stop. Plug your known values (f-stop, shutter speed, ISO, and your light meter calibration constant) into the formula below:

lux = C × (f-stop²/shutter (in seconds)) / ISO
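As a sketch, the rearranged formula in Python (hypothetical example values; C = 250 for a typical flat-sensor incident meter):

```python
def required_lux(f_stop, shutter_seconds, iso, C=250):
    # lux = C * (f-stop^2 / shutter) / ISO
    return C * (f_stop ** 2 / shutter_seconds) / iso

# e.g. shooting at f/4, 1/50 s, ISO 100:
print(required_lux(4, 1/50, 100))  # 2000.0 lux
```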

 

Exposure Value Calculator:

https://www.vroegop.nu/exposure-value-calculator/

 

From that perspective, an exposure stop is a measurement of exposure: it provides a universal linear scale for the increase and decrease in light reaching the image sensor due to changes in shutter speed, ISO and f-stop.
+-1 stop is a doubling or halving of the amount of light let in when taking a photo.
1 EV is just another way to say one stop of exposure change.

 

One major use of EV (Exposure Value) is just to measure any change of exposure, where one EV implies a change of one stop of exposure. Like when we compensate our picture in the camera.

 

If the picture comes out too dark, our manual exposure could correct the next one by directly adjusting one of the three exposure controls (f/stop, shutter speed, or ISO). Or if using camera automation, the camera meter is controlling it, but we might apply +1 EV exposure compensation (or +1 EV flash compensation) to make the result goal brighter, as desired. This use of 1 EV is just another way to say one stop of exposure change.

 

On a perfect day, the difference between sampling the sky and the sun exposure with diffusing spot meters is about 3.2 EV.

 ~15.4 EV for the sun
 ~12.2 EV for the sky

That is a ballpark; all still influenced by surroundings, accuracy parameters, the FOV of the sensor…

 

 

EV calculator

https://www.scantips.com/lights/evchart.html#calc

http://www.fredparker.com/ultexp1.htm

 

Exposure value is basically used to indicate an interval on the photographic exposure scale, with a difference of 1 EV corresponding to a standard power-of-2 exposure step, also commonly referred to as a “stop”.

 

https://contrastly.com/a-guide-to-understanding-exposure-value-ev/

 

Retrieving photographic exposure from an image

All you can hope to measure with your camera and some images is the relative reflected luminance, even if you have the camera settings. https://en.wikipedia.org/wiki/Relative_luminance

 

If you REALLY want to know the amount of light in absolute radiometric units, you’re going to need to use some kind of absolute light meter or measured light source to calibrate your camera. For references on how to do this, see: Section 2.5 Obtaining Absolute Radiance from http://www.pauldebevec.com/Research/HDR/debevec-siggraph97.pdf

 

If you are still trying to gauge relative brightness, the level of the sun in Nuke can vary, but it should be in the thousands, i.e. between 30,000 and 65,000 RGB value depending on time of the day, season and atmospherics.

 

The values for a 12 o’clock sun, with the sun sampled at EV 15.5 (shutter 1/30, ISO 100, f/22), are around 32,000 RGB max values (or 32,000 pixel luminance).
The thing to keep an eye on is the level of contrast between the sunny side and the fill side. The terminator should be quite obvious; there can be up to 3 stops difference between fill and key on sunlit objects.

 

Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down will give you the respective next value on the aperture scale (f8, f11, f16 etc.).
One stop doubles or halves the amount of light that hits the filmback/ccd, so everything works in powers of 2.
So starting with 0.18 in your constant, you will see that raising it by a stop will give you .36 as a floating point number (in linear space), while your f/stop will be f/11 and so on.

If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.
Note: make sure to set your Nuke read node to ‘raw data’

 

In other words: setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

 

To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport, and then press s, this opens the viewer’s properties. Now set the center f-stop to 0 in there. And the SpotMeter in the viewport will change from aperture and fstops to EV.

 

If you are trying to gauge the EV from the pixel luminance in the image:
– Setting the center f-stop to 0 means that in a neutral plate, the middle 18% gray will equal exposure value 0.
– So if EV 0 = 0.18 middle gray in Nuke, which equals a pixel luminance of 0.18, doubling that luminance adds 1 EV.

.18 pixel luminance = 0 EV
.36 pixel luminance = 1 EV
.72 pixel luminance = 2 EV
1.44 pixel luminance = 3 EV
...
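A minimal sketch of that mapping in Python (assuming the 0.18 = EV 0 anchor described above; the function names are ours):

```python
import math

MIDDLE_GRAY = 0.18  # pixel luminance anchored to EV 0

def ev_from_pixel(luminance):
    # Each doubling of pixel luminance adds one EV
    return math.log2(luminance / MIDDLE_GRAY)

def pixel_from_ev(ev):
    return MIDDLE_GRAY * 2 ** ev

print(ev_from_pixel(0.36))  # 1.0
print(pixel_from_ev(3))     # 1.44
```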

 

This is a Geometric Progression function: xₙ = a·r^(n−1)

The most basic example of this function is 1,2,4,8,16,32,… The sequence starts at 1 and doubles each time, so

  • a=1 (the first term)
  • r=2 (the “common ratio” between terms is a doubling)

And we get:

{a, ar, ar², ar³, … }

= {1, 1×2, 1×2², 1×2³, … }

= {1, 2, 4, 8, … }

In this example the function translates to: xₙ = 2^(n−1)
You can graph this curve through the expression x = 2^(y−1):

You can go back and forth between the two values through a geometric progression function and a log function:

(Note: in a spreadsheet this is: =POWER(2, cell#−1)  and  =LOG(cell#, 2)+1)

x = 2^(y−1)	y = log₂(x)+1
1	1
2	2
4	3
8	4
16	5
32	6
64	7
128	8
256	9
512	10
1024	11
2048	12
4096	13
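The same round trip as the spreadsheet columns, sketched in Python (helper names are ours):

```python
import math

def power_col(y):
    # spreadsheet: =POWER(2, y-1)
    return 2 ** (y - 1)

def log_col(x):
    # spreadsheet: =LOG(x, 2)+1
    return math.log2(x) + 1

for y in range(1, 14):
    x = power_col(y)
    print(x, log_col(x))  # the two functions invert each other
```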

 

Translating this into a geometric progression between an image pixel luminance and EV:

(more…)

Convert between light exposure and intensity
/ lighting, software
import math

def Exposure2Intensity(exposure):
    # Intensity doubles with every stop of exposure: intensity = 2^exposure
    exp = float(exposure)
    result = math.pow(2, exp)
    print(result)

Exposure2Intensity(0)

def Intensity2Exposure(intensity):
    # Inverse mapping: exposure = log2(intensity)
    inarg = float(intensity)

    if inarg == 0:
        print("Exposure of zero intensity is undefined.")
        return

    if inarg < 1e-323:
        # log2 is undefined for zero or negative intensities,
        # so clamp anything below a tiny positive floor
        inarg = max(inarg, 1e-323)
        print("Intensity is too small or negative. Clamping to a very small value instead (1e-323)")

    result = math.log(inarg, 2)
    print(result)

Intensity2Exposure(0.1)



Photography basics: How Exposure Stops (Aperture, Shutter Speed, and ISO) Affect Your Photos – cheat cards
/ Featured, lighting, photography, production

 

Also see:

http://www.pixelsham.com/2018/11/22/exposure-value-measurements/

 

http://www.pixelsham.com/2016/03/03/f-stop-vs-t-stop/

 

 

An exposure stop is a unit measurement of exposure; as such, it provides a universal linear scale to measure the increase and decrease in light reaching the image sensor due to changes in shutter speed, ISO and f-stop.

 

+-1 stop is a doubling or halving of the amount of light let in when taking a photo

 

1 EV (exposure value) is just another way to say one stop of exposure change.

 

https://www.photographymad.com/pages/view/what-is-a-stop-of-exposure-in-photography

 

The same applies to shutter speed, ISO and aperture.
Doubling or halving your shutter speed produces an increase or decrease of 1 stop of exposure.
Doubling or halving your ISO speed produces an increase or decrease of 1 stop of exposure.

 

 

Because of the way f-stop numbers are calculated (the ratio of focal length to lens diameter, where focal length is the distance between the lens and the sensor), one stop of aperture doesn’t correspond to a doubling or halving of the f-number, but to a doubling/halving of the light-gathering area of the lens in relation to its focal length, and as such to multiplying or dividing the f-number by 1.41 (the square root of 2). For example, going from f/2.8 to f/4 is a decrease of 1 stop because 4 ≈ 2.8 × 1.41. Changing from f/16 to f/11 is an increase of 1 stop because 11 ≈ 16 / 1.41.
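The √2 relationship means the number of stops between any two f-numbers can be computed directly; a small sketch (the helper name is ours, and note that marked f-numbers are rounded, so results land near whole stops rather than exactly on them):

```python
import math

def stops_between(n_from, n_to):
    # Light gathering scales with aperture area, i.e. with the f-number squared,
    # so stops = log2((n_to / n_from)^2) = 2 * log2(n_to / n_from)
    return 2 * math.log2(n_to / n_from)

print(stops_between(2.8, 4))  # ~ +1 stop (one stop less light)
print(stops_between(16, 11))  # ~ -1 stop (one stop more light)
```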

 

 

https://www.quora.com/Photography-How-a-higher-f-Stop-larger-aperture-leads-to-shallow-Depth-Of-Field

A wider aperture means that light proceeding from the foreground, subject, and background is entering at more oblique angles than the light entering less obliquely.

Consider that absolutely everything is bathed in light, therefore light bouncing off of anything is effectively omnidirectional. Your camera happens to be picking up a tiny portion of the light that’s bouncing off into infinity.

Now consider that the wider your iris/aperture, the more of that omnidirectional light you’re picking up:

When you have a very narrow iris you are eliminating a lot of oblique light. Whatever light enters, from whatever distance, enters moderately parallel as a whole. When you have a wide aperture, much more light is entering at a multitude of angles. Your lens can only focus the light from one depth – the foreground/background appear blurred because it cannot be focused on.

 

https://frankwhitephotography.com/index.php?id=28:what-is-a-stop-in-photography

 

 

 

 

The great thing about stops is that they give us a way to directly compare shutter speed, aperture diameter, and ISO speed. This means that we can easily swap these three components about while keeping the overall exposure the same.

 

http://lifehacker.com/how-aperture-shutter-speed-and-iso-affect-pictures-sh-1699204484

 

 

https://www.techradar.com/how-to/the-exposure-triangle

 

 

https://www.videoschoolonline.com/what-is-an-exposure-stop/

 

Note: all three of these measurements (aperture, shutter, ISO) have full stops, half stops and third stops, but if you look at the numbers they aren’t always consistent. For example, a true one-third stop above ISO 100 would be ISO 100 × 2^(1/3) ≈ 126, yet most cameras are marked at ISO 125.

Third-stops are especially important as they’re the increment that most cameras use for their settings. These are just imaginary divisions in each stop.
From a practical standpoint, manufacturers only standardize the full stops; while they try to stay somewhat consistent, there is some rounding going on between the smaller numbers.
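That rounding is easy to see by computing the exact third-stop values; a sketch (the helper name is ours):

```python
def third_stops(base_iso=100, steps=6):
    # Exact third-stop ISO values are base * 2^(k/3); cameras round these
    # to the marked values (125, 160, 200, 250, 320, ...)
    return [round(base_iso * 2 ** (k / 3)) for k in range(steps + 1)]

print(third_stops())  # [100, 126, 159, 200, 252, 317, 400]
```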

 

http://www.digitalcameraworld.com/2015/04/15/the-exposure-triangle-aperture-shutter-speed-and-iso-explained/

 

 

 

 

 

Note that ND Filters directly modify the exposure triangle.

 

 

 

apps exposure
/ IOS, production

DiffusionLight: HDRI Light Probes for Free by Painting a Chrome Ball
/ lighting, photography, production

https://diffusionlight.github.io/

 

 

https://github.com/DiffusionLight/DiffusionLight

 

https://github.com/DiffusionLight/DiffusionLight?tab=MIT-1-ov-file#readme

 

https://colab.research.google.com/drive/15pC4qb9mEtRYsW3utXkk-jnaeVxUy-0S

 

“a simple yet effective technique to estimate lighting in a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks to regress an input with limited field-of-view to a full environment map. However, these approaches often struggle with real-world, uncontrolled settings due to the limited diversity and size of their datasets. To address this problem, we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity, this task remains challenging: the diffusion models often insert incorrect or inconsistent objects and cannot readily generate images in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map, which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios.”

 

Magic Lantern – a free software add-on to boost Canon’s features
/ hardware, photography

https://www.magiclantern.fm/

 

https://builds.magiclantern.fm/

 

Supported cameras:
5D Mark II, 5D Mark III, 6D, 7D, 50D, 60D, 500D/T1i, 550D/T2i, 600D/T3i, 650D/T4i, 700D/T5i, 1100D/T3, EOS M.

 

Example features:

  • Zebras for under/over exposed areas.
  • Focus peaking for quickly checking focus.
  • Magic Zoom window to fine-tune focus.
  • Cropmark overlays for 16×9 bars or any custom shape.
  • Overlay a ghost image in live view.
  • Spotmeter in live view.
  • False color for precise exposure control.
  • RGB histogram with RAW support.
  • Waveform for perfect green screen lighting.
  • Vectorscope for color tones.
  • Kelvin and push-button white balance.
  • Auto ETTR (Expose To The Right).
  • Exposure Lock for manual mode.
  • Manually override exposure settings in LiveView.
  • Quickly switch between two exposure presets.
  • Toggle exposure simulation on/off (Liveview).
  • Dynamic range improvements (Dual ISO, other features being researched).
  • Bracketing for exposure, flash, or depth-of-field.
  • In-camera intervalometer.
  • Custom bulb timer for extra-long exposures.
  • Motion detection.
  • Silent pictures without shutter actuation.
  • Record voice tags for photos / videos.
  • Analog / digital gain adjustments.
  • Selectable input source.
  • Toggle wind filter.
  • Live audio monitoring through headphones.
  • Beep / test tones.

 

 

GretagMacbeth Color Checker Numeric Values and Middle Gray

The human eye perceives half scene brightness not as the linear 50% of the present energy, but as roughly 18% of the overall brightness: we are biased to perceive more information in the dark and contrasted areas. A Macbeth chart helps with calibrating a photographic capture back into this “human perspective” of the world.

 

https://en.wikipedia.org/wiki/Middle_gray

 

In photography, painting, and other visual arts, middle gray or middle grey is a tone that is perceptually about halfway between black and white on a lightness scale. In photography and printing, it is typically defined as 18% reflectance in visible light.

 

Light meters, cameras, and pictures are often calibrated using an 18% gray card or a color reference card such as a ColorChecker. On the assumption that 18% is similar to the average reflectance of a scene, a grey card can be used to estimate the required exposure of the film.

 

https://en.wikipedia.org/wiki/ColorChecker

 

 

https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

 

The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in, and makes a guess based on that. The camera will aim for 18% gray independently, meaning if you take a photo of an entirely white surface, and an entirely black surface you should get two identical images which both are gray (at least in theory). Thus enters the Macbeth chart.

 

Note that Chroma Key Green is reasonably close to an 18% gray reflectance.

http://www.rags-int-inc.com/PhotoTechStuff/MacbethTarget/

 

No Camera Data

 

https://upload.wikimedia.org/wikipedia/commons/b/b4/CIE1931xy_ColorChecker_SMIL.svg

 

RGB coordinates of the Macbeth ColorChecker

 

https://pdfs.semanticscholar.org/0e03/251ad1e6d3c3fb9cb0b1f9754351a959e065.pdf

(more…)

Image rendering bit depth
/ colour

The terms 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, measured in bits per pixel or per channel.

 

https://en.wikipedia.org/wiki/Color_depth

 

In color technology, color depth also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

 

When referring to a pixel, the concept can be defined as bits per pixel (bpp).

 

When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.

 

Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

 

 

Here’s a simple explanation of each.

 

8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.

 

16-bit: This integer format uses 16 bits of data to represent color values for each channel. With 16 bits, you can have 65,536 discrete levels of color, allowing for relatively high precision and smooth gradients. However, it has a limited dynamic range, meaning it cannot accurately represent extremely bright or dark values. It is commonly used for regular images and textures.

 

16-bit float: This format is an extension of the 16-bit format but uses floating-point numbers instead of fixed integers. Floating-point numbers allow for more precise calculations and a larger dynamic range. In this case, the 16 bits are used to store both the color value and the exponent, which controls the range of values that can be represented. The 16-bit float format provides better accuracy and a wider dynamic range than regular 16-bit, making it useful for high-dynamic-range imaging (HDRI) and computations that require more precision.

 

32-bit float images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits per channel to represent color values and offers the highest precision and dynamic range among the three options. With 32 bits, you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.

 

Bits and HDR coverage

High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

 

A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

 

On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

 

The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be as prominent or perceptible. Such images usually span around 8-10 exposure levels.

 

However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

 

To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.
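The growing quantization step of half floats can be demonstrated with Python’s `struct` module (format `'e'` is IEEE 754 binary16); a minimal sketch, with the helper name ours:

```python
import struct

def roundtrip_half(x):
    # Pack a value to 16-bit float and back to see what precision survives
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Near middle gray the representable values are densely spaced:
print(roundtrip_half(0.18))    # ~0.18 (tiny rounding error)
# At high luminance the step between representable values grows to 2:
print(roundtrip_half(3000.0))  # 3000.0
print(roundtrip_half(3001.0))  # 3000.0
```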

 

To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

 

In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.

 

HDRIs used for lighting purposes are usually slightly convolved to improve sampling speed and remove specular artefacts. To that extent, 16-bit float HDRIs tend to be the most used in CG pipelines.

 

RICOH THETA Z1 51GB camera – 360° images in RAW format
/ hardware, production

https://theta360.com/en/about/theta/z1.html

 

  • 23MP(6720 x 3360, 7K)
  • superior noise reduction performance
  • F2.1, F3.5 and F5.6
  • 4K videos (3840 x 1920, 29.97fps)
  • RAW (DNG) image format
  • 360° live streaming in 4K
  • record sound from 4 different directions when shooting video
  • editing of 360° images in Adobe Photoshop Lightroom Classic CC
  • Android™ base system for the OS. Use plug-ins to customize your own THETA.
  • Wireless 2.4 GHz: 1 to 11ch or 1 to 13ch
  • Wireless 5 GHz: W52 (36 to 48ch, channel bandwidth 20/40/80 MHz supported)

 

Theta Z1 is Ricoh’s flagship 360 camera that features 1-inch sensors, which are the largest available for dual lens 360 cameras.  It has been a highly regarded camera among 360 photographers because of its excellent image quality, color accuracy, and its ability to shoot Raw DNG photos with exceptional exposure latitude.

 

Bracketing mode 2022

Requirement: Basic app iOS ver.2.20.0, Android ver.2.5.0, Camera firmware ver.2.10.3

https://community.theta360.guide/t/new-feature-ae-bracket-added-in-the-shooting-mode-z1-only/8247

 

HDRi for VFX

https://community.theta360.guide/t/create-high-quality-hdri-for-vfx-using-ricoh-theta-z1/4789/4

 

 

 

ND filtering

 

https://community.theta360.guide/t/neutral-density-solution-for-most-theta-cameras/7331

 

https://community.theta360.guide/t/long-exposure-nd-filter-for-ricoh-theta/1100

Fast, optimized ‘for’ pixel loops with OpenCV and Python to create tone mapped HDR images
/ lighting, photography, python, software

https://pyimagesearch.com/2017/08/28/fast-optimized-for-pixel-loops-with-opencv-and-python/

 

https://learnopencv.com/exposure-fusion-using-opencv-cpp-python/

 

Exposure Fusion is a method for combining images taken with different exposure settings into one image that looks like a tone mapped High Dynamic Range (HDR) image.

 

StudioBinder.com – Photography basics: What is Dynamic Range in Photography
/ lighting, photography, production

https://www.studiobinder.com/blog/what-is-dynamic-range-photography/

 

https://www.hdrsoft.com/resources/dri.html#bit-depth

 

 

The dynamic range is a ratio between the maximum and minimum values of a physical measurement. Its definition depends on what the dynamic range refers to.

For a scene: Dynamic range is the ratio between the brightest and darkest parts of the scene.

 

For a camera: Dynamic range is the ratio of saturation to noise. More specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above camera noise.

 

For a display: Dynamic range is the ratio between the maximum and minimum intensities emitted from the screen.

 

 

 

 

 

Erik Winquist – The Definitive Weta Digital Guide to IBL hdri capture
/ photography, production

www.fxguide.com/fxfeatured/the-definitive-weta-digital-guide-to-ibl

 

Notes:

  • Camera type: full frame with exposure bracketing and an 8mm circular fish eye lens.
  • Bracketing: 7 exposures at 2-stop increments.
  • Tripod: supporting 120-degree locked offsets
  • Camera angle: should point up 7.5 degrees for better sky or upper dome coverage.
  • Camera focus: set to manual and tape-locked
  • Start shooting looking towards the sun with and without the ND3 filter; the other angles will not require the ND3 filter.
  • Documenting shooting with a slate (measure distance to slate, day, location, camera info, camera temperature, camera position)

Is a MacBeth Colour Rendition Chart the Safest Way to Calibrate a Camera?
/ colour, lighting

www.colour-science.org/posts/the-colorchecker-considered-mostly-harmless/

 

 

“Unless you have all the relevant spectral measurements, a colour rendition chart should not be used to perform colour-correction of camera imagery but only for white balancing and relative exposure adjustments.”

 

“Using a colour rendition chart for colour-correction might dramatically increase error if the scene light source spectrum is different from the illuminant used to compute the colour rendition chart’s reference values.”

 

“other factors make using a colour rendition chart unsuitable for camera calibration:

– Uncontrolled geometry of the colour rendition chart with the incident illumination and the camera.
– Unknown sample reflectances and ageing as the colour of the samples vary with time.
– Low samples count.
– Camera noise and flare.
– Etc…”

 

“Those issues are well understood in the VFX industry, and when receiving plates, we almost exclusively use colour rendition charts to white balance and perform relative exposure adjustments, i.e. plate neutralisation.”

HDRI shooting and editing by Xuan Prada and Greg Zaal
/ lighting, photography, production

www.xuanprada.com/blog/2014/11/3/hdri-shooting

 

http://blog.gregzaal.com/2016/03/16/make-your-own-hdri/

 

http://blog.hdrihaven.com/how-to-create-high-quality-hdri/

 

Shooting checklist

  • Full coverage of the scene (fish-eye shots)
  • Backplates for look-development (including ground or floor)
  • Macbeth chart for white balance
  • Grey ball for lighting calibration
  • Chrome ball for lighting orientation
  • Basic scene measurements
  • Material samples
  • Individual HDR artificial lighting sources if required

Methodology

  • Plant the tripod where the action happens, stabilise it and level it
  • Set manual focus
  • Set white balance
  • Set ISO
  • Set raw+jpg
  • Set aperture
  • Metering exposure
  • Set neutral exposure
  • Read histogram and adjust neutral exposure if necessary
  • Shoot slate (operator name, location, date, time, project code name, etc)
  • Set auto bracketing
  • Shoot 5 to 7 exposures with 3-stop differences covering the whole environment
  • Place the grey/chrome ball kit where the tripod was placed, and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
  • Place the Macbeth chart 1m away from the tripod on the floor and take 3 exposures
  • Take backplates and ground/floor texture references
  • Shoot reference materials
  • Write down measurements of the scene, especially if you are shooting interiors.
  • If shooting artificial lights take HDR samples of each individual lighting source.

Exposures starting point

  • Day light sun visible ISO 100 F22
  • Day light sun hidden ISO 100 F16
  • Cloudy ISO 320 F16
  • Sunrise/Sunset ISO 100 F11
  • Interior well lit ISO 320 F16
  • Interior ambient bright ISO 320 F10
  • Interior bad light ISO 640 F10
  • Interior ambient dark ISO 640 F8
  • Low light situation ISO 640 F5
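These starting points pair ISO and aperture; once a shutter time is metered, the exposure value follows from EV = log2(N²/t), normalised to ISO 100. The shutter times below are illustrative assumptions, not part of the table:

```python
import math

def exposure_value(aperture, shutter, iso=100):
    """EV at ISO 100 for f-number `aperture` and shutter time `shutter` (s)."""
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

# "Day light sun visible": ISO 100, f/22, assuming 1/125 s
print(round(exposure_value(22, 1 / 125), 1))             # ~15.9 EV

# "Interior well lit": ISO 320, f/16, assuming 1/60 s
print(round(exposure_value(16, 1 / 60, iso=320), 1))     # ~12.2 EV
```

Note that EV 0 falls out of the formula directly: f/1.0 at 1 second gives log2(1/1) = 0.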
HDRI Median Cut plugin
/ Featured, lighting, software

www.hdrlabs.com/picturenaut/plugins.html

 

 

Note. The Median Cut algorithm is typically used for color quantization, that is, reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. If you’re interested in identifying the brightest areas, you might want to look into other methods like thresholding, histogram analysis, or edge detection (using OpenCV, for example).

 

Here is an openCV example:

 

# bottom left coordinates = 0,0
import numpy as np
import cv2

# Load the HDR or EXR image as-is, without modification.
# (Recent OpenCV builds may require the OPENCV_IO_ENABLE_OPENEXR
# environment variable to be set for EXR support.)
image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

# Calculate the luminance from the HDR channels.
# Note: OpenCV loads channels in BGR order, so reverse them before
# applying the RGB luminance weights.
luminance = np.dot(image[..., :3][..., ::-1], [0.299, 0.587, 0.114])

# Set a threshold value based on estimated EV
threshold_value = 2.4  # estimated threshold based on 4.8 EV

# Apply the threshold to identify bright areas.
# The luminance array contains the calculated luminance of each pixel;
# threshold_value is a user-defined cutoff separating "bright" and
# "dark" areas in terms of perceived luminance.
thresholded = (luminance > threshold_value) * 255

# Convert the thresholded image to uint8 for contour detection
thresholded = thresholded.astype(np.uint8)

# Find contours of the bright areas
contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Create a list to store the bounding boxes of bright areas
bright_areas = []

# Iterate through contours and extract bounding boxes
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    # Adjust the y-coordinate for a bottom-left origin
    y_bottom_left_origin = image.shape[0] - (y + h)
    # Store as (x1, y1, x2, y2)
    bright_areas.append((x, y_bottom_left_origin,
                         x + w, y_bottom_left_origin + h))

# Print the identified bright areas
print("Bright Areas (x1, y1, x2, y2):")
for area in bright_areas:
    print(area)

 

More details

 

Luminance and Exposure in an EXR Image:

  • The EXR (OpenEXR) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
  • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
  • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

 

The luminance line is calculating the luminance of each pixel in the image using a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used to perform this calculation.

 

These weights are based on the concept of luminosity, which aims to approximate the perceived brightness of a color by taking into account the human eye’s sensitivity to different colors. The values are often derived from the NTSC (National Television System Committee) standard, which is used in various color image processing operations.

 

Here’s the breakdown of the float values:

  • 0.299: Weight for the red channel.
  • 0.587: Weight for the green channel.
  • 0.114: Weight for the blue channel.

 

The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.

 

For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

 

To establish a relationship between exposure and the threshold value, you can consider the relationship between linear and logarithmic scales:

  1. Linear and Logarithmic Scales:
    • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
    • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
  2. Conversion Between Scales:
    • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.
    • One common method is to use a power function. For instance, you can use a power function to convert EV to a linear intensity value.
    threshold_value = base_value * (2 ** EV)

    Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value.

  3. Choosing the Base Value:
    • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
    • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.

 

Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

Here’s an example of how you might determine a base_value to achieve this:

 

# Define the dynamic range of the image in EV (context only; not used below)
dynamic_range = 12

# Choose the desired number of EV above middle gray for thresholding
desired_ev_above_middle_gray = 2

# Middle gray is ~0.18 in linear values; each EV above it doubles the level
threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

print("Threshold Value:", threshold_value)  # 0.72
Photography basics: Lumens vs Candelas (candle) vs Lux vs FootCandle vs Watts vs Irradiance vs Illuminance
/ colour, Featured, lighting, photography

https://www.translatorscafe.com/unit-converter/en-US/illumination/1-11/

 

 

The power output of a light source is measured in watts (W). This is a direct measure of how much power the light will drain from your socket, and it does not directly relate to the brightness of the light itself.

It describes the amount of energy emitted per second. That energy comes out in the form of photons, which we can crudely represent as rays of light leaving the source: the higher the power, the more rays emitted from the source in a unit of time.

Not all of the energy emitted is visible to the human eye, so we often rely on photometric measurements, which take into account the sensitivity of the human eye to different wavelengths.

 

 

 

https://pllight.com/understanding-lighting-metrics/

Candela is the basic unit of measure of luminous intensity from a light source in a single direction. It measures the amount of light emitted within a certain beam angle and direction.
While the luminance of starlight is around 0.001 cd/m2, that of a sunlit scene is around 100,000 cd/m2, about a hundred million times higher. The luminance of the sun itself is approximately 1,000,000,000 cd/m2.

 

https://www.hdrsoft.com/resources/dri.html#bit-depth

To make it easier to represent values that vary so widely, it is common to use a logarithmic scale to plot the luminance. The scanline below represents the log base 10 of the luminance, so going from 0.1 to 1 is the same distance as going from 100 to 1000, for instance. A scene showing the interior of a room with a sunlit view outside the window, for instance, will have a dynamic range of approximately 100,000:1.

 

Lumen (lm) is the basic unit of measure for light that is visible to the human eye. It indicates the total potential amount of light from a light source. If a uniform point source of 1 candela is at the center of a sphere with a 1 ft radius and an opening of 1 ft2 at its surface, the quantity of light that passes through that opening is equal to 1 lumen. Since lumens are a photometric measurement for humans, we do not use this unit of measure for describing horticultural lighting.

 

Technically speaking, a Lumen is the SI unit of luminous flux, which is equal to the amount of light which is emitted per second in a unit solid angle of one steradian from a uniform source of one-candela intensity radiating in all directions.

 

Illuminance refers to the density of light falling on a given surface area, and is expressed mostly in lux (lumens/m2).
Luminance refers to the light emitted or reflected from a surface in a given direction, and is expressed mostly in candela/m2 (nits).

 

 

 

Lux (lx), often called illuminance, is a photometric unit over a given area which takes into account the sensitivity of the human eye to different wavelengths. It is the measure of light at a specific distance within a specific area at that distance, often used to measure the incident sun’s intensity. Its unit describes the number of lumens per square meter (lumen/m2): 100 lumens spread out over an area of 1 m2 will have an illuminance of 100 lx, while the same 100 lumens spread out over 10 m2 produce a dimmer illuminance of only 10 lx.

 

The core difference between lux and lumens can be summarized as follows:

  • Lux is a measure of illuminance, the total amount of light that falls on a surface at a given distance
  • Lumens is a measure of luminous flux, the total amount of light emitted in all directions.

 

A footcandle describes the number of lumen per square foot. Therefore, one footcandle is equal to approximately 10.764 lx. This measure is only relevant for how we perceive light and is irrelevant for plant growth. Like lumens, lux and footcandles are not useful for describing horticultural lighting.
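The lux and footcandle relationships above can be sketched as follows (helper names are illustrative):

```python
# lux = lumens / m^2, and 1 footcandle = 1 lumen/ft^2 ≈ 10.764 lux
LUX_PER_FOOTCANDLE = 10.764  # 1 ft^2 ≈ 0.0929 m^2, so 1 lm/ft^2 ≈ 10.764 lm/m^2

def lux_from_lumens(lumens, area_m2):
    return lumens / area_m2

def footcandles_to_lux(fc):
    return fc * LUX_PER_FOOTCANDLE

print(lux_from_lumens(100, 1))           # 100 lumens over 1 m^2  -> 100 lx
print(lux_from_lumens(100, 10))          # 100 lumens over 10 m^2 -> 10 lx
print(round(footcandles_to_lux(1), 3))   # 1 fc -> 10.764 lx
```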

 

Color Rendering Index, or CRI, describes the ability of a light source to show an object’s color accurately in comparison to standardized colour samples under a reference light source. The highest value a light can achieve is a CRI of 100. Lower CRI values result in objects appearing unnatural or discolored. Under a light with a CRI of 100, an orange appears bright orange; under a light with a CRI of 70, the orange appears darker and bluer. This measure is dependent on how the human eye sees light, and so it is not a useful parameter for choosing horticultural lighting.

 

Correlated Color Temperature, or CCT, describes the color appearance of a light source relative to a blackbody reference heated to a particular temperature, and is measured in kelvins (K). The higher the CCT of a light source, the cooler the light’s color. For example, a very red light has a CCT of about 1000 K while a very blue light can reach a CCT of about 10,000 K. Warm white lights have a CCT around 2700 K (since they emit more energy at the red end of the spectrum), neutral white around 4000 K, and cool white around 5000 K (emitting more energy at the blue end of the spectrum). Similar to CRI, this measure depends on light perception by the human eye and, once again, is not useful for describing or choosing horticultural lighting.

 

http://www.lumis.co.nz/reference-information/lighting-terminology

Light is a visible portion of electromagnetic radiation. Watts isn’t a measure of light output. Watts is actually a measure of total power output. Not all of the energy emitted by a light source is visible light – heat and invisible light waves (ex. infrared light) are also emitted. Lumens, on the other hand, will tell you the total visible light output of a source. For this reason, lumens (not watts) is the relevant unit of measure when you’re concerned about visibility. The higher the power the more rays emitted from the source in a unit of time.

 

Irradiance (radiant flux received by a surface per unit area, i.e. watts per square meter) is a radiometric unit. As such, it is also a measure of physical power rather than of perceived brightness.
Radiometric units are based on physical power, that means all wavelengths are weighted equally, while photometric units take into account the sensitivity of human eye to different wavelengths.
The weighting is determined by the luminosity function (which was measured for human eye and is an agreed-upon standard).

 

Converting Irradiance and Illuminance
http://www.dfisica.ubi.pt/~hgil/Fotometria/HandBook/ch07.html
There is a different conversion factor for every wavelength, so the spectral composition of light must be known to make the conversion.

At the wavelength to which the human eye is most sensitive, the conversion factor is

1.0 W/m2 = 683.002 lumen/m2 # at wavelength = 555nm (green)

That means the irradiance (power) needed to produce 1 lumen is at its minimum at this wavelength (just 1.464 mW/m2).
Luminous efficiency is then the ratio between the actual number of lumens per watt and this theoretical maximum.
An incandescent light bulb has a luminous efficiency of about 2%, which is very poor: much of its radiated power is heat, which is not visible. The luminosity function is zero for wavelengths outside the visible spectrum.
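A small sketch of the efficiency figures above (the 14 lm/W incandescent figure is an assumed typical value, not from the source):

```python
# Luminous efficiency = actual lm/W divided by the theoretical
# maximum of 683 lm/W (monochromatic green light at 555 nm).
MAX_EFFICACY = 683.002  # lm/W at 555 nm

def luminous_efficiency(lumens_per_watt):
    return lumens_per_watt / MAX_EFFICACY

# A typical incandescent bulb emits roughly 14 lm/W (assumed figure):
print(f"{luminous_efficiency(14):.1%}")           # ~2.0%

# Minimum power needed to produce 1 lumen, at 555 nm:
print(round(1 / MAX_EFFICACY * 1000, 3), "mW")    # ~1.464 mW
```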

 

 

 

https://www.rapidtables.com/calc/light/lux-to-lumen-calculator.html

 

https://www.projectorpoint.co.uk/news/how-bright-should-my-projector-be/

 

https://dracobroadcast.eu/blogs/news/continuous-light-converting-guide

 

  • Watts (halogen) – A measure of energy consumed
  • Watts (HMI) – A measure of energy consumed
  • Watts (LED) – A measure of energy consumed
  • Lux – Lumens per square meter; illuminance at the target
  • Footcandles – Lumens per square foot
  • Stops – Size of the lens aperture
  • EV – Exposure Value
  • Lumens – Measure of the amount of visible light; luminous flux from the source
  • Candela – Measure of luminous intensity in a given direction
  • Nit – Candela per square meter, but not part of the International System of Units
Methods for creating motion blur in Stop motion
/ animation, production

en.wikipedia.org/wiki/Go_motion

 

Petroleum jelly
This crude but reasonably effective technique involves smearing petroleum jelly (“Vaseline”) on a plate of glass in front of the camera lens, also known as vaselensing, then cleaning and reapplying it after each shot — a time-consuming process, but one which creates a blur around the model. This technique was used for the endoskeleton in The Terminator. This process was also employed by Jim Danforth to blur the pterodactyl’s wings in Hammer Films’ When Dinosaurs Ruled the Earth, and by Randall William Cook on the terror dogs sequence in Ghostbusters.

 

Bumping the puppet
Gently bumping or flicking the puppet before taking the frame will produce a slight blur; however, care must be taken when doing this that the puppet does not move too much or that one does not bump or move props or set pieces.

 

Moving the table
Moving the table on which the model is standing while the film is being exposed creates a slight, realistic blur. This technique was developed by Ladislas Starevich: when the characters ran, he moved the set in the opposite direction. This is seen in The Little Parade when the ballerina is chased by the devil. Starevich also used this technique on his films The Eyes of the Dragon, The Magical Clock and The Mascot. Aardman Animations used this for the train chase in The Wrong Trousers and again during the lorry chase in A Close Shave. In both cases the cameras were moved physically during a 1-2 second exposure. The technique was revived for the full-length Wallace & Gromit: The Curse of the Were-Rabbit.

 

Go motion
The most sophisticated technique was originally developed for the film The Empire Strikes Back and used for some shots of the tauntauns and was later used on films like Dragonslayer and is quite different from traditional stop motion. The model is essentially a rod puppet. The rods are attached to motors which are linked to a computer that can record the movements as the model is traditionally animated. When enough movements have been made, the model is reset to its original position, the camera rolls and the model is moved across the table. Because the model is moving during shots, motion blur is created.

 

A variation of go motion was used in E.T. the Extra-Terrestrial to partially animate the children on their bicycles.

Polarised vs unpolarized filtering
/ colour, lighting, production

A light wave that is vibrating in more than one plane is referred to as unpolarized light. … Polarized light waves are light waves in which the vibrations occur in a single plane. The process of transforming unpolarized light into polarized light is known as polarization.

en.wikipedia.org/wiki/Polarizing_filter_(photography)

 

Light reflected from a non-metallic surface becomes polarized; this effect is maximum at Brewster’s angle, about 56° from the vertical for common glass.

 

A polarizer rotated to pass only light polarized in the direction perpendicular to the reflected light will absorb much of it. This absorption allows glare reflected from, for example, a body of water or a road to be reduced. Reflections from shiny surfaces (e.g. vegetation, sweaty skin, water surfaces, glass) are also reduced. This allows the natural color and detail of what is beneath to come through. Reflections from a window into a dark interior can be much reduced, allowing it to be seen through. (The same effects are available for vision by using polarizing sunglasses.)

 

www.physicsclassroom.com/class/light/u12l1e.cfm

 

Some of the light coming from the sky is polarized (bees use this phenomenon for navigation). The electrons in the air molecules cause a scattering of sunlight in all directions. This explains why the sky is not dark during the day. But when looked at from the sides, the light emitted from a specific electron is totally polarized.[3] Hence, a picture taken in a direction at 90 degrees from the sun can take advantage of this polarization. Use of a polarizing filter, in the correct direction, will filter out the polarized component of skylight, darkening the sky; the landscape below it, and clouds, will be less affected, giving a photograph with a darker and more dramatic sky, and emphasizing the clouds.

 

There are two types of polarizing filters readily available, linear and “circular”, which have exactly the same effect photographically. But the metering and auto-focus sensors in certain cameras, including virtually all auto-focus SLRs, will not work properly with linear polarizers because the beam splitters used to split off the light for focusing and metering are polarization-dependent.

 

Polarizing filters reduce the light passed through to the film or sensor by about one to three stops (2–8×) depending on how much of the light is polarized at the filter angle selected. Auto-exposure cameras will adjust for this by widening the aperture, lengthening the time the shutter is open, and/or increasing the ASA/ISO speed of the camera.
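The quoted one-to-three-stop reduction maps directly to a transmission factor of 2 to the power of the stop count:

```python
# Each stop halves the transmitted light, so the reduction factor is 2**stops.
def light_reduction_factor(stops):
    return 2 ** stops

for stops in (1, 2, 3):
    print(f"{stops} stop(s) -> 1/{light_reduction_factor(stops)} of the light")
```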

 

www.adorama.com/alc/nd-filter-vs-polarizer-what%25e2%2580%2599s-the-difference

 

Neutral Density (ND) filters help control image exposure by reducing the light that enters the camera so that you can have more control of your depth of field and shutter speed. Polarizers or polarizing filters work in a similar way, but the difference is that they selectively let light waves of a certain polarization pass through. This effect helps create more vivid colors in an image, as well as manage glare and reflections from water surfaces. Both are regarded as some of the best filters for landscape and travel photography as they reduce the dynamic range in high-contrast images, thus enabling photographers to capture more realistic and dramatic sceneries.

 

shopfelixgray.com/blog/polarized-vs-non-polarized-sunglasses/

 

www.eyebuydirect.com/blog/difference-polarized-nonpolarized-sunglasses/

 

The difference between eyes and cameras
/ production, reference

 

 

 

https://www.quora.com/What-is-the-comparison-between-the-human-eye-and-a-digital-camera

 

https://medium.com/hipster-color-science/a-beginners-guide-to-colorimetry-401f1830b65a

 

There are three types of cone photoreceptors in the eye, called Long, Medium and Short. These contribute to color discrimination. They are all sensitive to different, yet overlapping, wavelengths of light. They are commonly associated with the color they are most sensitive to: L = red, M = green, S = blue.

 

Different spectral distributions can stimulate the cones in the exact same way
A leaf and a green car can look the same to you, yet physically they have different reflectance properties. It turns out every color (or, unique cone output) can be created from many different spectral distributions. Color science starts to make a lot more sense when you understand this.

 

When you view the charts overlaid, you can see that the spinach mostly reflects light outside of the eye’s visual range, and inside our range it mostly reflects light centered around our M cone.

 

This phenomenon is called metamerism and it has huge ramifications for color reproduction. It means we don’t need the original light to reproduce an observed color.

 

http://www.absoluteastronomy.com/topics/Adaptation_%28eye%29

 

The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as “black” can be shifted across six orders of magnitude—a factor of one million.

 

https://clarkvision.com/articles/eye-resolution.html

 

The human eye is able to function in bright sunlight and view faint starlight, a range of more than 100 million to one. The Blackwell (1946) data covered a brightness range of 10 million and did not include intensities brighter than about the full Moon. The full range of adaptability is on the order of a billion to one. But this is like saying a camera can function over a similar range by adjusting the ISO gain, aperture and exposure time.

In any one view, the eye can see over a 10,000:1 range in contrast detection, though this depends on scene brightness, with the range decreasing for lower-contrast targets. The eye is a contrast detector, not an absolute detector like the sensor in a digital camera; hence the distinction. The range of the human eye is greater than that of any film or consumer digital camera.

DSLR cameras, by comparison, have a contrast ratio of around 2048:1 (about 11 stops).

 

(Daniel Frank) Several key differences stand out for me (among many):

  • The area devoted to seeing detail in the eye — the fovea — is extremely small compared to a digital camera sensor. It covers a roughly circular area of only about three degrees of arc. By contrast, a “normal” 50mm lens (so called because it supposedly mimics the perspective of the human eye) covers roughly 40 degrees of arc. Because of this extremely narrow field of view, the eye is constantly making small movements (“saccades”) to scan more of the field, and the brain is building up the illusion of a wider, detailed picture.
  • The eye has two different main types of light detecting elements: rods and cones. Rods are more sensitive, and detect only variations in brightness, but not color. Cones sense color, but only work in brighter light. That’s why very dim scenes look desaturated, in shades of gray, to the human eye. If you take a picture in moonlight with a very high-ISO digital camera, you’ll be struck by how saturated the colors are in that picture — it looks like daylight. We think of this difference in color intensity as being inherent in dark scenes, but that’s not true — it’s actually the limitation of the cones in our eyes.
  • There are specific cones in the eye with stronger responses to the different wavelengths corresponding to red, green, and blue light. By contrast, the CCD or CMOS sensor in a color digital camera can only sense luminance differences: it just counts photons in tens of millions of tiny photodetectors (“wells”) spread across its surface. In front of this detector is an array of microscopic red, blue, and green filters, one per well. The processing engine in the camera interpolates the luminance of adjacent red-, green-, or blue-filtered detectors based on a so-called “demosaicing” algorithm. This bears no resemblance to how the eye detects color. (The so-called “Foveon” sensor sold by Sigma in some of its cameras avoids demosaicing by layering different color-sensing layers, but this still isn’t how the eye works.)
  • The files output by color digital cameras contain three channels of luminance data: red, green, and blue. While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.
  • The retina is much larger than the fovea, but the light-sensitive areas outside the fovea, and the nuclei to which they wire in the brain, are highly sensitive to motion, particularly in the periphery of our vision. The human visual system — including the eye — is highly adapted to detecting and analyzing potential threats coming at us from outside our central vision, and priming the brain and body to respond. These functions and systems have no analogue in any digital camera system.
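As an illustration only (not the retina’s actual wiring), the luminance plus red-green plus blue-yellow encoding described above can be sketched as a simple opponent transform:

```python
# Toy opponent-channel transform from RGB to luminance, red-green
# and blue-yellow axes. The weights here are simplistic placeholders,
# not the eye's real cone weightings.
def rgb_to_opponent(r, g, b):
    luminance = (r + g + b) / 3
    red_green = r - g
    blue_yellow = b - (r + g) / 2
    return luminance, red_green, blue_yellow

print(rgb_to_opponent(1.0, 1.0, 1.0))  # pure white: no opponent signal
```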

domeble – Hi-Resolution CGI Backplates and 360° HDRI
/ lighting, photography, reference

www.domeble.com/

When collecting hdri make sure the data supports basic metadata, such as:

  • Iso
  • Aperture
  • Exposure time or shutter time
  • Color temperature
  • Color space
  • Exposure value (what the sensor receives of the sun intensity, in lux)
  • 7+ brackets (with 5 or 6 being the perceived balanced exposure)

 

In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminance (a photometric measure of the luminous intensity per unit area of light travelling in a given direction; it describes the amount of light that passes through or is emitted from a particular area within a given solid angle) between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods. This wider dynamic range allows HDR images to represent more accurately the wide range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight and the deepest shadows.

 

The two main sources of HDR imagery are computer renderings and the merging of multiple photographs, which in turn are known as low dynamic range (LDR) or standard dynamic range (SDR) images. Tone mapping (look-up) techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

 

In photography, dynamic range is measured in Exposure Values (in photography, exposure value denotes all combinations of camera shutter speed and relative aperture that give the same exposure. The concept was developed in Germany in the 1950s) differences or stops, between the brightest and darkest parts of the image that show detail. An increase of one EV or one stop is a doubling of the amount of light.

 

The human response to brightness is well approximated by Stevens’s power law, which over a reasonable range is close to logarithmic, as described by the Weber-Fechner law. This is one reason that logarithmic measures of light intensity are often used.

 

HDR is short for High Dynamic Range. It’s a term used to describe an image which contains a greater exposure range than the “black” to “white” that 8 or 16-bit integer formats (JPEG, TIFF, PNG) can describe. Whereas these Low Dynamic Range (LDR) images can hold perhaps 8 to 10 f-stops of image information, HDR images can describe beyond 30 stops and are stored as 32-bit floating-point images.
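As a rough sanity check on those figures, the stop range of a floating-point format is log2 of the ratio between its largest and smallest normalised values. For the 16-bit half floats commonly used in EXR files:

```python
import math

half_max = 65504.0   # largest finite half-float value
half_min = 2 ** -14  # smallest normalised half-float value

stops = math.log2(half_max / half_min)
print(round(stops, 1))  # ~30 stops for half float alone; 32-bit float spans far more
```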

 

Rec-2020 – TVs new color gamut standard used by Dolby Vision?
/ colour, Featured, lighting, production, reference

https://www.hdrsoft.com/resources/dri.html#bit-depth

 

The dynamic range is a ratio between the maximum and minimum values of a physical measurement. Its definition depends on what the dynamic range refers to.

For a scene: Dynamic range is the ratio between the brightest and darkest parts of the scene.

For a camera: Dynamic range is the ratio of saturation to noise. More specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above camera noise.

For a display: Dynamic range is the ratio between the maximum and minimum intensities emitted from the screen.

 

The Dynamic Range of real-world scenes can be quite high — ratios of 100,000:1 are common in the natural world. An HDR (High Dynamic Range) image stores pixel values that span the whole tonal range of real-world scenes. Therefore, an HDR image is encoded in a format that allows the largest range of values, e.g. floating-point values stored with 32 bits per color channel. Another characteristic of an HDR image is that it stores linear values. This means that the value of a pixel from an HDR image is proportional to the amount of light measured by the camera.
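As a quick illustration of the numbers above (a sketch, not part of the original article): since each stop doubles the light, converting between a contrast ratio and the equivalent number of stops is just a base-2 logarithm.

```python
import math

def ratio_to_stops(ratio: float) -> float:
    """Convert a dynamic range ratio (max/min) to photographic stops."""
    return math.log2(ratio)

def stops_to_ratio(stops: float) -> float:
    """Convert a number of stops back to a max/min contrast ratio."""
    return 2.0 ** stops

print(round(ratio_to_stops(100_000), 1))  # 16.6 -> a 100,000:1 scene spans ~16.6 stops
print(round(ratio_to_stops(256), 1))      # 8.0  -> 8-bit theoretical maximum
```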

 

For TVs HDR is great, but it’s not the only new TV feature worth discussing.

 

Wide color gamut, or WCG, is often lumped in with HDR. While they’re often found together, they’re not intrinsically linked. Where HDR is an increase in the dynamic range of the picture (with contrast and brighter highlights in particular), a TV’s wide color gamut coverage refers to how much of the new, larger color gamuts a TV can display.

 

Wide color gamuts only really matter for HDR video sources like UHD Blu-rays and some streaming video, as only HDR sources are meant to take advantage of the ability to display more colors.

 

 

www.cnet.com/how-to/what-is-wide-color-gamut-wcg/

 

Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed through a pixel; the other aspect is how broad a range of colors can be expressed (the gamut)

 

Image rendering bit depth

 

Wide color gamuts include a greater number of colors than what most current TVs can display, so the greater a TV’s coverage of a wide color gamut, the more colors a TV will be able to reproduce.

 

When we talk about a color space or color gamut we refer to the range of color values stored in an image. Perceiving these colors correctly also requires a display that has been tuned to resolve these color profiles at their best. This is often referred to as a ‘viewer LUT’.

 

So this usually comes paired with an increase in bit depth, going from the old 8-bit system (256 shades per channel, for over 16.7 million colors: 256 red x 256 green x 256 blue) to 10-bit (1024 shades per channel, for over a billion colors) or higher, like 12-bit (4096 shades per channel, for about 68 billion colors).

The advantage of a higher bit depth is the ability to bias color with minimal loss.
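The shade and color counts quoted above follow directly from powers of two; a minimal sketch of the arithmetic:

```python
def shades_per_channel(bits: int) -> int:
    """Number of discrete levels a single color channel can store."""
    return 2 ** bits

def total_colors(bits_per_channel: int) -> int:
    """Total RGB colors: one level per channel, so levels cubed."""
    return shades_per_channel(bits_per_channel) ** 3

print(shades_per_channel(8), total_colors(8))    # 256 levels -> 16,777,216 colors (~16.7 million)
print(shades_per_channel(10), total_colors(10))  # 1024 levels -> ~1.07 billion colors
print(shades_per_channel(12), total_colors(12))  # 4096 levels -> ~68.7 billion colors
```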

https://photo.stackexchange.com/questions/72116/whats-the-point-of-capturing-14-bit-images-and-editing-on-8-bit-monitors

 

For an extreme example, raising the brightness of a completely dark image allows for better reproduction, independently of the reproduction medium, thanks to the amount of data available at editing time:

 

https://www.cambridgeincolour.com/tutorials/dynamic-range.htm

 


 

Note that the number of bits itself may be a misleading indication of the real dynamic range that the image reproduces — converting a Low Dynamic Range image to a higher bit depth does not change its dynamic range, of course.

  • 8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
  • 16-bit images (i.e. 48 bits per pixel for a color image) resulting from RAW conversion are still considered Low Dynamic Range, even though the range of values they can encode is significantly higher than for 8-bit images (65536 versus 256). Note that converting a RAW file involves applying a tonal curve that compresses the dynamic range of the RAW data so that the converted image shows correctly on low dynamic range monitors. The need to adapt the output image file to the dynamic range of the display is the factor that dictates how much the dynamic range is compressed, not the output bit-depth. By using 16 instead of 8 bits, you will gain precision but you will not gain dynamic range.
  • 32-bit images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. Unlike 8- and 16-bit images, which can take a finite number of values, 32-bit images are coded using floating point numbers, so the range of values they can take is effectively unlimited. It is important to note, though, that storing an image in a 32-bit HDR format is a necessary condition for an HDR image but not a sufficient one: when an image comes from a single capture with a standard camera, it remains a Low Dynamic Range image.

 

 

Also note that bit depth and dynamic range are often confused as one, but they are separate concepts and there is no direct one-to-one relationship between them. Bit depth is about encoding capacity; dynamic range is about the actual ratio between the largest and smallest values stored.
The bit depth of a capturing or displaying device gives an indication of its dynamic range capacity: that is, the highest dynamic range the device would be capable of reproducing if all other constraints were eliminated.

 

https://rawpedia.rawtherapee.com/Bit_Depth

 

Finally, note that there are two ways to “count” bits for an image — either the number of bits per color channel (BPC) or the number of bits per pixel (BPP). A bit (0,1) is the smallest unit of data stored in a computer.

For a grayscale image, 8-bit means that each pixel can be one of 256 levels of gray (256 is 2 to the power 8).

For an RGB color image, 8-bit means that each one of the three color channels can be one of 256 levels of color.
Since each pixel is represented by 3 colors in this case, 8-bit per color channel actually means 24-bit per pixel.

Similarly, 16-bit for an RGB image means 65,536 levels per color channel and 48-bit per pixel.

To complicate matters, when an image is classified as 16-bit, it just means that it can store a maximum of 65,536 distinct values. It does not necessarily mean that it actually spans that range. If the camera sensor cannot capture more than 12 bits of tonal values, the actual bit depth of the image will be at best 12-bit, and probably less because of noise.
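The theoretical ceiling per bit depth can be sketched in a few lines (the point being that the container's ceiling says nothing about the sensor's actual range):

```python
def theoretical_max_dynamic_range(bits: int) -> str:
    """Theoretical maximum ratio for an integer encoding of `bits` per channel.
    The actual dynamic range is capped by the sensor and its noise floor,
    not by the file container."""
    return f"{2 ** bits:,}:1 ({bits} stops)"

for bits in (8, 12, 14, 16):
    print(f"{bits}-bit -> {theoretical_max_dynamic_range(bits)}")
# A 12-bit sensor stored in a 16-bit file still carries at most 12 stops of data.
```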

The following table attempts to summarize the above for the case of an RGB color image.

 

 

Type of digital support | Bit depth per channel | Bit depth per pixel | f-stops | Theoretical max dynamic range | Reality
8-bit | 8 | 24 | 8 | 256:1 | most consumer images
12-bit CCD | 12 | 36 | 12 | 4,096:1 | real maximum limited by noise
14-bit CCD | 14 | 42 | 14 | 16,384:1 | real maximum limited by noise
16-bit TIFF (integer) | 16 | 48 | 16 | 65,536:1 | bit depth in this case is not directly related to the dynamic range captured
16-bit float EXR (half) | 16 | 48 | 30 | 65,536:1 | see note below
HDR image (e.g. Radiance format) | 32 | 96 | “infinite” | 4.3 billion:1 | real maximum limited by the captured dynamic range

Note on half-float EXR: values are distributed more densely in the (lower) darker tones than in the (higher) lighter ones, thus allowing for a more accurate description of the tones most significant to humans. The range of normalized 16-bit floats can represent thirty stops of information with 1024 steps per stop: eighteen and a half stops over middle gray, and eleven and a half below. The denormalized numbers provide an additional ten stops with decreasing precision per stop.
http://download.nvidia.com/developer/GPU_Gems/CD_Image/Image_Processing/OpenEXR/OpenEXR-1.0.6/doc/#recs

32-bit floats are often called “single-precision” floats, and 64-bit floats are often called “double-precision” floats. 16-bit floats therefore are called “half-precision” floats, or just “half floats”.

 

https://petapixel.com/2018/09/19/8-12-14-vs-16-bit-depth-what-do-you-really-need/

On a separate note, even Photoshop does not handle true 16 bits per channel. Photoshop does use 16 bits per channel internally, but it treats the 16th bit differently: it is simply added to the value created from the first 15 bits. This is sometimes called 15+1 bits. It means that instead of 2^16 possible values (65,536) there are only 2^15 + 1 possible values (32,768 + 1 = 32,769).

 

Rec-601 (for the older SDTV format, very similar to Rec-709) and Rec-709 (the HDTV recommended set of color standards, at times also referred to as sRGB, although not exactly the same) are currently the most widespread color formats and hardware configurations in the world.

 

Beyond those you can find the larger P3 gamut, more commonly used in theaters and in digital production houses (with small variations and improvements to color coverage), as well as in most of the best 4K/WCG TVs.

 

And a new standard is now being promoted beyond P3, referred to as Rec-2020, for UHDTV.

 

It is still debatable whether this will be adopted at the consumer level beyond P3, mainly due to the lack of hardware supporting it. But initial tests do suggest it would be a future-proof investment.

www.colour-science.org/anders-langlands/

 

Rec. 2020 is ultimately designed for television, and not cinema. Therefore, it is to be expected that its properties must behave according to current signal processing standards. In this respect, its foundation is based on current HD and SD video signal characteristics.

 

As far as color bit depth is concerned, it allows for a maximum of 12 bits, which is more than enough for human vision.

Comparing standards, REC-709 covers 35.9% of the human visible spectrum. P3 45.5%. And REC-2020 75.8%.
https://www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

 

Comparing coverage to hardware devices

 

Note that all the new standards generally score very high on the Pointer’s Gamut chart, with Rec-2020 scoring 99.9% vs P3 at 88.2%.
www.tftcentral.co.uk/articles/pointers_gamut.htm

https://www.slideshare.net/hpduiker/acescg-a-common-color-encoding-for-visual-effects-applications

 

The Pointer’s gamut is (an approximation of) the gamut of real surface colors as seen by the human eye, based on the research of Michael R. Pointer (1980). This means that every color that can be reflected by the surface of an object of any material falls inside the Pointer’s gamut, establishing a widely respected target for color reproduction. Visually, the Pointer’s gamut represents the colors we see around us in the natural world. Colors outside the Pointer’s gamut include those that do not occur naturally, such as neon lights and computer-generated colors possible in animation, which would partially be accounted for by the new gamuts.

cinepedia.com/picture/color-gamut/

 

Not all current TVs can support the full spread of the new gamuts. Here is a list of modern TVs’ color coverage in percentage:
www.rtings.com/tv/tests/picture-quality/wide-color-gamut-rec-709-dci-p3-rec-2020

 

There are no TVs that can come close to displaying all the colors within Rec.2020, and there likely won’t be for at least a few years. However, to help future-proof the technology, Rec.2020 support is already baked into the HDR spec. That means that the same genuine HDR media that fills the DCI P3 space on a compatible TV now, will in a few years also fill Rec.2020 on a TV supporting that larger space.

 

Rec.2020’s main gains are in the number of new tones of green that it will display, though it also offers improvements to the number of blue and red colors as well. Altogether, Rec.2020 will cover about 75% of the visual spectrum, which is a sizeable increase in coverage even over DCI P3.

 

 

Dolby Vision

https://www.highdefdigest.com/news/show/what-is-dolby-vision/39049

https://www.techhive.com/article/3237232/dolby-vision-vs-hdr10-which-is-best.html

 

Dolby Vision is a proprietary end-to-end High Dynamic Range (HDR) format that covers content creation and playback through select cinemas, Ultra HD displays, and 4K titles. Like other HDR standards, the process uses expanded brightness to improve contrast between dark and light aspects of an image, bringing out deeper black levels and more realistic details in specular highlights — like the sun reflecting off of an ocean — in specially graded Dolby Vision material.

 

The iPhone 12 Pro gets the ability to record 4K 10-bit HDR video. According to Apple, it is the very first smartphone that is capable of capturing Dolby Vision HDR.

The iPhone 12 Pro takes two separate exposures and runs them through Apple’s custom image signal processor to create a histogram, which is a graph of the tonal values in each frame. The Dolby Vision metadata is then generated based on that histogram. In layman’s terms, it is essentially doing real-time grading while you are shooting. This is only possible due to the A14 Bionic chip.

 

Dolby Vision also allows for 12-bit color, as opposed to HDR10’s and HDR10+’s 10-bit color. While no retail TV we’re aware of supports 12-bit color, Dolby claims it can be down-sampled in such a way as to render 10-bit color more accurately.

 

 

 

 

 

Resources for more reading:

https://www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

 

wolfcrow.com/say-hello-to-rec-2020-the-color-space-of-the-future/

 

www.cnet.com/news/ultra-hd-tv-color-part-ii-the-future/

 

Photography basics: Why Use a (MacBeth) Color Chart?
/ colour, lighting, photography

Start here: http://www.pixelsham.com/2013/05/09/gretagmacbeth-color-checker-numeric-values/

 

https://www.studiobinder.com/blog/what-is-a-color-checker-tool/

 

 

 

In LightRoom

 

in Final Cut

 

in Nuke

Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down will give you the respective next value on the aperture scale (f/8, f/11, f/16, etc.).

One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
So starting with 0.18 in your Constant, you will see that raising it by a stop will give you 0.36 as a floating-point value (in linear space), while your f-stop read-out will be f/11, and so on.
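The doubling behavior described above can be sketched numerically (a sketch of the arithmetic only, not Nuke's actual code; the f/8 center stop matches the viewer default mentioned earlier):

```python
import math

MIDDLE_GRAY = 0.18  # 18% gray constant, linear space

def value_at_stops(stops: float) -> float:
    """Linear pixel value of middle gray pushed up or down by `stops`."""
    return MIDDLE_GRAY * (2.0 ** stops)

def f_number_at_stops(center_fstop: float, stops: float) -> float:
    """f-number read-out: each full stop scales the f-number by sqrt(2)."""
    return center_fstop * (math.sqrt(2) ** stops)

print(round(value_at_stops(1), 2))        # 0.36 -> one stop up from 0.18
print(round(value_at_stops(-1), 2))       # 0.09 -> one stop down
print(round(f_number_at_stops(8, 1), 1))  # 11.3 -> i.e. the f/11 mark
```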

 

If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.

 

In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray of the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

 

This will usually put the sun around EV 12-17 and the sky at EV 1-4, depending on cloud coverage.

 

To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and press S to open the viewer’s properties, then set the center f-stop to 0. The SpotMeter in the viewport will change from apertures and f-stops to EV.

Photography basics: f-stop vs t-stop
/ photography

http://petapixel.com/2014/09/30/your-lens-aperture-might-be-lying-to-you-or-the-difference-between-f-stops-and-t-stops/

 

https://www.premiumbeat.com/blog/understanding-lenses-aperture-f-stop-t-stop/

 

F-stops are the theoretical amount of light transmitted by the lens; t-stops, the actual amount. The difference is about 1/3 stop, often more with zooms.

 

The f-stop is the measurement of the opening (aperture) of the lens in relation to its focal length (the distance between the lens and the sensor). The math is focal length / aperture diameter.
It mainly controls depth of field, given a known amount of light.
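The focal length / diameter relation above can be sketched as follows (the 50 mm lens and 17.9 mm diameter are hypothetical example numbers):

```python
import math

def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-stop N = focal length / effective aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def aperture_diameter(focal_length_mm: float, n: float) -> float:
    """Inverse: the diameter needed for a given f-number."""
    return focal_length_mm / n

print(round(f_number(50, 17.9), 1))          # 2.8  -> a 50mm lens at f/2.8
print(round(aperture_diameter(50, 2.8), 1))  # 17.9 -> mm of opening
# Each full stop multiplies/divides the f-number by sqrt(2):
print(round(2.8 * math.sqrt(2), 1))          # 4.0  -> the next stop is f/4
```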

https://www.scantips.com/lights/fstop2.html

 

The smaller the f-stop number (the larger the aperture), the more light reaches the sensor and the shallower the depth of field.

 

Note that the numbers in an aperture—f/2.8, f/8—signify a certain amount of light, but that doesn’t necessarily mean that’s directly how much light is getting to your sensor.

 

The t-stop, on the other hand, is the measurement of how much light passes through that opening and actually makes it to the sensor. No lens exists that does not steal some light on the way to the sensor.
In short, it is the corrected f-stop number, based on the amount of light reaching the sensor after passing through all the lens elements, so you know exactly how much is making it to the film. The smaller the number, the more light.

 

http://www.dxomark.com/Lenses/Ratings/Optical-Metric-Scores

Note that an exposure stop is a measurement of sensitivity to light, not of lens capabilities.

Photography basics: Shutter angle and shutter speed and motion blur
/ Featured, photography

http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/

 

https://www.cinema5d.com/global-vs-rolling-shutter/

 

https://www.wikihow.com/Choose-a-Camera-Shutter-Speed

 

https://www.provideocoalition.com/shutter-speed-vs-shutter-angle/

 

 

The shutter is the device that controls the amount of light that reaches the film or sensor through the lens: in essence, it controls how long the film or sensor is exposed.

 

Shutter speed is how long this device stays open, which also defines motion blur: the longer it stays open, the blurrier the captured image.

 

The shutter speed number refers to the fraction of a second during which light is actually allowed through.

 

As a reference, shooting at 24 fps with a 180-degree shutter angle, i.e. a 1/48th shutter speed (0.0208 s exposure time), will produce motion blur similar to what we perceive with the naked eye.

 

Shutter exposure is talked of in (shutter) angles for historical reasons: the original exposure mechanism was controlled through a rotating pie-shaped mirror disc in front of the film.

 

 

A shutter of 180 degrees blocks/allows light for half of each rotation (half blocked, half open). A 270-degree shutter is open for three quarters of the rotation and closed for one quarter, allowing a longer exposure time. A 90-degree shutter is open for only one quarter and closed for three quarters, giving a shorter exposure.

 

The shutter angle can be converted back and forth to shutter speed with the following formulas:
https://www.provideocoalition.com/shutter-speed-vs-shutter-angle/

 

shutter angle = shutter speed × fps × 360
(with shutter speed expressed in seconds, e.g. 1/48 × 24 × 360 = 180 degrees)

 

shutter speed = shutter angle / (fps × 360)
(e.g. 180 / (24 × 360) = 1/48 sec)

 

For example here is a chart from shutter angle to shutter speed at 24 fps:
270 = 1/32
180 = 1/48
172.8 = 1/50
144 = 1/60
90 = 1/96
72 = 1/120
45 = 1/192
22.5 = 1/384
11 = 1/785
8.6 = 1/1000
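The chart above can be reproduced with the conversion formula (a sketch assuming 24 fps):

```python
def shutter_speed_denominator(angle_degrees: float, fps: float = 24.0) -> float:
    """Exposure time is 1/x sec, where x = 360 * fps / shutter angle."""
    return 360.0 * fps / angle_degrees

def shutter_angle(speed_denominator: float, fps: float = 24.0) -> float:
    """Inverse: angle = 360 * fps / x, for a shutter speed of 1/x sec."""
    return 360.0 * fps / speed_denominator

print(round(shutter_speed_denominator(180)))    # 48  -> 1/48 sec
print(round(shutter_speed_denominator(172.8)))  # 50  -> 1/50 sec
print(round(shutter_angle(48)))                 # 180 -> degrees
```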

 

The above is basically the relation between the way a video camera calculates shutter (fractions of a second) and the way a film camera calculates shutter (in degrees).

Smaller shutter angles show strobing artifacts. Even with a typical 180-degree shutter, the camera only sees the scene for half of each frame interval; the rest of the time the sensor is obscured by the shutter, so the scene is not captured continuously.

 

This means that fast moving objects, and especially objects moving across the frame, will exhibit jerky movement. This is called strobing. The defect is also very noticeable during pans.  Smaller shutter angles (shorter exposure) exhibit more pronounced strobing effects.

 

Larger shutter angles show more motion blur, as the longer exposure captures more motion.

Note that in 3D you want to first sum the total of the shutter-open and shutter-close values, then compare that to the shutter angle, i.e.:

 

shutter open -0.0625
shutter close 0.0625
Total shutter = 0.0625+0.0625 = 0.125
Shutter angle = 360*0.125 = 45

 

shutter open -0.125
shutter close 0.125
Total shutter = 0.125+0.125 = 0.25
Shutter angle = 360*0.25 = 90

 

shutter open -0.25
shutter close 0.25
Total shutter = 0.25+0.25 = 0.5
Shutter angle = 360*0.5 = 180

 

shutter open -0.375
shutter close 0.375
Total shutter = 0.375+0.375 = 0.75
Shutter angle = 360*0.75 = 270
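The open/close examples above reduce to one small function (a sketch; the negative open value is frame-relative, as in the examples):

```python
def shutter_angle_from_open_close(shutter_open: float, shutter_close: float) -> float:
    """Shutter angle from frame-relative open/close times.
    The total open fraction of a frame is close - open (open is negative,
    so this matches summing the two magnitudes); angle = 360 * fraction."""
    total = shutter_close - shutter_open
    return 360.0 * total

print(shutter_angle_from_open_close(-0.0625, 0.0625))  # 45.0
print(shutter_angle_from_open_close(-0.25, 0.25))      # 180.0
```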

 

 

Faster frame rates can resolve both these issues.

Security is mostly a superstition
/ quotes

Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure, or nothing. — Helen Keller
 
http://completewellbeing.com/article/security-is-an-illusion/
 
The biggest risk in life is not taking risks. Because, then, we do not live – we merely exist.