Capturing texture albedo

Building a Portable PBR Texture Scanner by Stephane Lb
http://rtgfx.com/pbr-texture-scanner/

 

 

How To Split Specular And Diffuse In Real Images, by John Hable
http://filmicworlds.com/blog/how-to-split-specular-and-diffuse-in-real-images/

 

Capturing albedo using a Spectralon
https://www.activision.com/cdn/research/Real_World_Measurements_for_Call_of_Duty_Advanced_Warfare.pdf


Spectralon is a teflon-based pressed powder that comes closest to being a pure Lambertian diffuse material that reflects 100% of all light. If we take an HDR photograph of the Spectralon alongside the material to be measured, we can derive the diffuse albedo of that material.

 

The process to capture diffuse reflectance is very similar to the one outlined by Hable.

 

1. We put a linear polarizing filter in front of the camera lens and a second linear polarizing filter in front of a modeling light or a flash such that the two filters are oriented perpendicular to each other, i.e. cross polarized.

 

2. We place Spectralon close to and parallel with the material we are capturing and take bracketed shots of the setup. Typically, we'll take nine photographs, from -4EV to +4EV in 1EV increments.

 

3. We convert the bracketed shots to a linear HDR image. We found that many HDR packages do not produce an HDR image in which the pixel values are linear. PTGui is an example of a package which does generate a linear HDR image. At this point, because of the cross polarization, the image is one of surface diffuse response.

 

4. We open the file in Photoshop and normalize the image by color picking the Spectralon, filling a new layer with that color and setting that layer to "Divide". This sets the Spectralon to 1 in the image. All other color values are relative to this so we can consider them as diffuse albedo.
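For reference, the same "Divide" normalization from step 4 can be done numerically on the linear HDR image instead of in Photoshop. This is a minimal sketch, assuming the file has already been converted to a linear EXR; the file names, patch coordinates, and the use of imageio are illustrative assumptions, not part of the original workflow.

    import numpy as np
    import imageio.v3 as iio  # assumes an EXR-capable backend is installed

    # Load the linear HDR capture (cross-polarized, merged from the bracketed shots).
    hdr = iio.imread("capture_linear.exr").astype(np.float32)  # hypothetical file name

    # Pixel window covering the Spectralon target (placeholder coordinates).
    y0, y1, x0, x1 = 100, 200, 100, 200
    spectralon = hdr[y0:y1, x0:x1].reshape(-1, hdr.shape[-1]).mean(axis=0)

    # "Divide" layer: scale so the Spectralon reads as 1.0 in every channel.
    # Remaining pixel values can then be read as diffuse albedo relative to a
    # near-perfect Lambertian reflector.
    albedo = hdr / np.maximum(spectralon, 1e-6)
    np.clip(albedo, 0.0, None, out=albedo)

    iio.imwrite("albedo_linear.exr", albedo)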

What Is the Resolution and View Coverage of the Human Eye? And at What Distance Is a TV Best Viewed?
/ colour, Featured, photography

https://www.discovery.com/science/mexapixels-in-human-eye

About 576 megapixels for the entire field of view.

 

Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene, and assume a visual acuity of about 0.3 arc-minutes per "pixel". The number of pixels would be:
90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).

 

At any one moment you do not actually perceive that many pixels, but your eye moves around the scene to take in all the detail you want. The human eye actually covers an even larger field of view, close to 180 degrees. Let's be conservative and use 120 degrees for the field of view. Then we would see:

120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.
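The same back-of-the-envelope arithmetic, with the assumed 0.3 arc-minute per "pixel" acuity made explicit (a sketch of the estimate above, nothing more):

    def eye_megapixels(fov_deg: float, acuity_arcmin: float = 0.3) -> float:
        """Square field of view (degrees) divided into acuity-sized 'pixels'."""
        pixels_per_side = fov_deg * 60.0 / acuity_arcmin  # degrees -> arc-minutes -> pixels
        return pixels_per_side ** 2 / 1e6

    print(eye_megapixels(90))   # ~324 MP for a 90 x 90 degree window
    print(eye_megapixels(120))  # ~576 MP for a conservative 120 degree field of view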

Or, put another way:

Roughly 7 megapixels for the sharp 2-degree foveal area… plus about 1 megapixel for the rest of the field of view.

https://clarkvision.com/articles/eye-resolution.html

 

 

How many megapixels do you really need?

https://www.tomsguide.com/us/how-many-megapixels-you-need,review-1974.html

 

 

Photography basics: How Exposure Stops (Aperture, Shutter Speed, and ISO) Affect Your Photos – cheat cards
/ Featured, lighting, photography, production

 

Also see:

http://www.pixelsham.com/2018/11/22/exposure-value-measurements/

 

http://www.pixelsham.com/2016/03/03/f-stop-vs-t-stop/

 

 

An exposure stop is a unit measurement of exposure. As such, it provides a universal linear scale for measuring the increase or decrease in the light reaching the image sensor due to changes in shutter speed, ISO and f-stop.

 

±1 stop corresponds to a doubling or halving of the amount of light let in when taking a photo.

 

1 EV (exposure value) is just another way to say one stop of exposure change.

 

https://www.photographymad.com/pages/view/what-is-a-stop-of-exposure-in-photography

 

The same applies to shutter speed, ISO and aperture.
Doubling or halving your shutter speed produces an increase or decrease of 1 stop of exposure.
Doubling or halving your ISO speed produces an increase or decrease of 1 stop of exposure.

 

 

Because of the way f-stop numbers are calculated (the ratio of focal length to aperture diameter, where focal length is roughly the distance between the lens and the sensor when focused at infinity), one stop does not correspond to a doubling or halving of the f-number itself, but to a doubling or halving of the light-gathering area of the aperture. As a result, a full stop is a multiplication or division of the f-number by 1.41 (the square root of 2). For example, going from f/2.8 to f/4 is a decrease of 1 stop because 4 ≈ 2.8 * 1.41. Changing from f/16 to f/11 is an increase of 1 stop because 11 ≈ 16 / 1.41.
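As a quick numeric check of the above, a small sketch: shutter and ISO stop differences are the base-2 logarithm of the ratio of the settings, while aperture stop differences use the squared f-number ratio, because the light-gathering area scales with the square of the diameter.

    import math

    def shutter_stops(t_from: float, t_to: float) -> float:
        # Doubling the shutter time lets in one stop more light.
        return math.log2(t_to / t_from)

    def iso_stops(iso_from: float, iso_to: float) -> float:
        # Doubling ISO brightens the exposure by one stop.
        return math.log2(iso_to / iso_from)

    def aperture_stops(n_from: float, n_to: float) -> float:
        # f-numbers scale by sqrt(2) per stop, so the area (and light) scales by 2.
        return 2 * math.log2(n_from / n_to)

    print(shutter_stops(1/125, 1/60))   # ~ +1 stop (more light)
    print(iso_stops(100, 400))          # +2 stops
    print(aperture_stops(2.8, 4.0))     # ~ -1 stop (less light)
    print(aperture_stops(16, 11))       # ~ +1 stop (more light)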

 

 

https://www.quora.com/Photography-How-a-higher-f-Stop-larger-aperture-leads-to-shallow-Depth-Of-Field

A wider aperture means that light proceeding from the foreground, subject, and background enters the lens over a wider range of angles, including much more oblique ones.

Consider that absolutely everything is bathed in light, therefore light bouncing off of anything is effectively omnidirectional. Your camera happens to be picking up a tiny portion of the light that’s bouncing off into infinity.

Now consider that the wider your iris/aperture, the more of that omnidirectional light you’re picking up:

When you have a very narrow iris you are eliminating a lot of oblique light. Whatever light enters, from whatever distance, enters roughly parallel as a whole. When you have a wide aperture, much more light is entering at a multitude of angles. Your lens can only focus the light from one depth, so the foreground and background appear blurred because they cannot be brought into focus at the same time.
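To put numbers on the effect, here is a standard thin-lens depth-of-field estimate (a generic sketch, not taken from the quoted answer above; the focal length, f-number, focus distance and circle of confusion are illustrative values):

    def dof_limits(f_mm: float, n: float, s_mm: float, c_mm: float = 0.03):
        """Near/far limits of acceptable sharpness (thin-lens approximation).

        f_mm: focal length, n: f-number, s_mm: subject distance,
        c_mm: circle of confusion (0.03 mm is a common full-frame assumption).
        """
        h = f_mm ** 2 / (n * c_mm) + f_mm                      # hyperfocal distance
        near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
        far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
        return near, far

    # 50 mm lens focused at 2 m: wide open vs stopped down
    print(dof_limits(50, 2.8, 2000))   # ~ (1877, 2140) mm -> about 0.26 m of depth of field
    print(dof_limits(50, 8.0, 2000))   # ~ (1685, 2461) mm -> about 0.78 m of depth of field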

 

https://frankwhitephotography.com/index.php?id=28:what-is-a-stop-in-photography

The great thing about stops is that they give us a way to directly compare shutter speed, aperture diameter, and ISO speed. This means that we can easily swap these three components about while keeping the overall exposure the same.

 

http://lifehacker.com/how-aperture-shutter-speed-and-iso-affect-pictures-sh-1699204484

 

 

https://www.techradar.com/how-to/the-exposure-triangle

 

 

https://www.videoschoolonline.com/what-is-an-exposure-stop/

 

Note: all three of these measurements (aperture, shutter, ISO) have full stops, half stops and third stops, but if you look at the numbers they aren't always consistent. For example, a one-third stop above ISO 100 is technically about ISO 126 (100 × 2^(1/3)), yet most cameras mark it as ISO 125.

Third-stops are especially important as they're the increment that most cameras use for their settings. They are just subdivisions within each stop.
From a practical standpoint manufacturers only standardize the full stops, meaning that while they try to stay reasonably consistent, there is some rounding going on with the intermediate numbers.
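A quick sketch of that arithmetic: exact third-stop ISO values follow 100 × 2^(k/3), and the marked values below are the conventional camera markings rounded from those (listed here as an assumption about typical cameras).

    # Exact third-stop ISO values are 100 * 2^(k/3); cameras round them to
    # conventional marked numbers.
    marked = [100, 125, 160, 200, 250, 320, 400]  # typical camera markings
    for k, m in enumerate(marked):
        exact = 100 * 2 ** (k / 3)
        print(f"exact {exact:7.1f}  -> marked ISO {m}")
    # exact 100.0 -> 100, 126.0 -> 125, 158.7 -> 160, 200.0 -> 200, ...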

 

http://www.digitalcameraworld.com/2015/04/15/the-exposure-triangle-aperture-shutter-speed-and-iso-explained/

Note that ND Filters directly modify the exposure triangle.

Photography basics: Color Temperature and White Balance
/ colour, Featured, lighting, photography

 

Color Temperature of a light source describes the spectrum of light which is radiated from a theoretical “blackbody” (an ideal physical body that absorbs all radiation and incident light – neither reflecting it nor allowing it to pass through) with a given surface temperature.

https://en.wikipedia.org/wiki/Color_temperature

 

Or, most simply: it is a method of describing the color characteristics of light through a numerical value that corresponds to the color emitted by a light source, measured in kelvin (K) on a scale from roughly 1,000 to 10,000.

 

More accurately: the color temperature of a light source is the temperature of an ideal blackbody that radiates light of a hue comparable to that of the light source.

As such, the color temperature of a light source is a numerical measurement of its color appearance. It is based on the principle that any object will emit light if it is heated to a high enough temperature, and that the color of that light will shift in a predictable manner as the temperature is increased. The system is based on the color changes of a theoretical “blackbody radiator” as it is heated from a cold black to a white hot state.

 

So, why do we measure the hue of the light as a "temperature"? This was started in the late 1800s, when the British physicist William Thomson (Lord Kelvin) heated a block of carbon. It glowed in the heat, producing a range of different colors at different temperatures. The black cube first produced a dim red light, increasing to a brighter yellow as the temperature went up, and eventually produced a bright blue-white glow at the highest temperatures. In his honor, color temperatures are measured in degrees Kelvin, which use the same increments as Centigrade degrees. Instead of starting at the temperature at which water freezes, the Kelvin scale starts at "absolute zero," which is -273 Centigrade.

 

More about black bodies here: http://www.pixelsham.com/2013/03/14/black-body-color

 

 

The Sun closely approximates a black-body radiator. Its effective temperature, defined by the total radiative power per unit area, is about 5780 K. The color temperature of sunlight above the atmosphere is about 5900 K. The time of day and atmospheric conditions bias the purity of the light that reaches us from the sun.

Some think that the Sun’s output in visible light peaks in the yellow. However, the Sun’s visible output peaks in the green:


http://solar-center.stanford.edu/SID/activities/GreenSun.html

Regardless, we refer to the sun as a pure white light source, and we use its spectrum as a reference for other light sources.

Because the sun’s spectrum can change depending on so many factors (including pollution), a standard called D65 was defined (by the International Commission on Illumination) to represent what is considered as the average spectrum of the sun in average conditions.

In reality this tends to bias towards an overcast day at around 6500K, and while it is implemented at slightly different temperatures by different manufacturers, it is still the most commonly used standard.

 

https://en.wikipedia.org/wiki/Illuminant_D65

 

https://www.scratchapixel.com/lessons/digital-imaging/colors

 

 

In this context, the White Point of a light defines the neutral color of its given color space.

https://chrisbrejon.com/cg-cinematography/chapter-1-color-management/#Colorspace

 

D65 corresponds to what the spectrum of the sun would typically look like at midday somewhere in Western/Northern Europe (figure 9). D65, which is also called the daylight illuminant, is not a spectrum which we can exactly reproduce with a light source, but rather a reference against which we can compare the spectrum of existing lights.

 

Another rough analogue of blackbody radiation in our day to day experience might be in heating a metal or stone: these are said to become “red hot” when they attain one temperature, and then “white hot” for even higher temperatures.

 

Similarly, black bodies at different temperatures also have varying color temperatures of “white light.” Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum.

 

The Kelvin color temperature scale imagines a blackbody object (such as a lamp filament) being heated. At some point the object will get hot enough to begin to glow. As it gets hotter its glowing color will shift, moving from deep reds, such as a low burning fire would give, to oranges and yellows, all the way up to white hot.

 

Color temperatures over 5,000K are called cool colors (bluish white), while lower color temperatures (2,700–3,000K) are called warm colors (yellowish white through red).

https://www.ni.com/en-ca/innovations/white-papers/12/a-practical-guide-to-machine-vision-lighting.html

 

Our eyes are very good at judging what is white under different light sources, but digital cameras often have great difficulty with auto white balance (AWB) — and can create unsightly blue, orange, or even green color casts. Understanding digital white balance can help you avoid these color casts, thereby improving your photos under a wider range of lighting conditions.

 

 

White balance (WB) is the process of removing these color casts from captured media, so that objects which are perceived (or expected) to be white are rendered white in your medium.

This color cast is due to the way light itself is formed and spread.

 

What a white-balancing procedure does is define what counts as white in your footage; the camera doesn't know what white is until you tell it.

 

You can often do this with AWB (Automatic White Balance), but the results are not always desirable. That is why you may choose to manually change your white balance.

When you white balance you are telling your camera to treat any object with similar chrominance and luminance as white.
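As a rough numerical sketch of that idea (not how any specific camera implements it): white balancing a linear image can be approximated by sampling the patch you declare to be white and scaling each channel so that patch becomes neutral. The function name and patch coordinates below are illustrative assumptions.

    import numpy as np

    def white_balance_from_patch(img_linear: np.ndarray, patch: tuple) -> np.ndarray:
        """Scale each channel so the selected 'known white/grey' patch becomes neutral.

        img_linear: float RGB array in linear light, shape (H, W, 3).
        patch: (y0, y1, x0, x1) window covering the reference white object.
        """
        y0, y1, x0, x1 = patch
        ref = img_linear[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
        gains = ref.mean() / np.maximum(ref, 1e-6)   # per-channel gains that neutralize the cast
        return np.clip(img_linear * gains, 0.0, None)

    # usage: balanced = white_balance_from_patch(frame, (100, 150, 200, 250))  # placeholder window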

 

Different types of light sources generate different color casts.

 

As such, camera white balance has to take into account this “color temperature” of a light source, which mostly refers to the relative warmth or coolness of white light.

 

Matching the white balance setting to the color temperature of the indoor or outdoor cast is what produces a neutral result.
The two color temperatures you'll hear most often discussed are outdoor (daylight) lighting, which is often ballparked at 5600K, and indoor (tungsten) lighting, which is generally ballparked at 3200K. These are the two numbers you'll hear over and over again. Higher color temperatures (over 5000K) are considered "cool" (i.e. bluish). Lower color temperatures (under 5000K) are considered "warm" (i.e. orangish).

 

Therefore, if you are shooting indoors under tungsten lighting at 3200K, you will set your white balance for indoor shooting at this color temperature. In this case, the camera will adjust its settings to ensure that white appears white. Your camera will either have an indoor 3200K preset (even the most basic cameras have this option) or you can set the value manually.

 

Things get complicated if you're filming indoors during the day under tungsten lighting while outdoor light is coming through a window. Now what you have is a mix of color temperatures. What you need to understand in this situation is that there is no perfect white balance setting for a mixed color temperature scene. You will need to compromise towards one end of the spectrum or the other. If you set your white balance to tungsten 3200K, the daylight areas will appear very blue. If you set your white balance to optimize for daylight at 5600K, then your tungsten lighting will appear very orange.

 

Where to use which light:
For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.
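As a practical illustration of the scale, here is a minimal Python sketch that converts a Kelvin temperature into an approximate RGB tint, based on the curve fit published by Tanner Helland (linked in the references below). The constants are reproduced from that fit and should be treated as an approximation for previews, not a colorimetric reference.

    import math

    def kelvin_to_rgb(kelvin: float) -> tuple:
        """Approximate RGB tint of a blackbody at the given temperature (1000-40000 K).

        Curve fit after Tanner Helland's published approximation (see references);
        constants reproduced here as an approximation only.
        """
        t = max(1000.0, min(40000.0, kelvin)) / 100.0

        # Red channel
        if t <= 66:
            r = 255.0
        else:
            r = 329.698727446 * ((t - 60) ** -0.1332047592)

        # Green channel
        if t <= 66:
            g = 99.4708025861 * math.log(t) - 161.1195681661
        else:
            g = 288.1221695283 * ((t - 60) ** -0.0755148492)

        # Blue channel
        if t >= 66:
            b = 255.0
        elif t <= 19:
            b = 0.0
        else:
            b = 138.5177312231 * math.log(t - 10) - 305.0447927307

        def clamp(v):
            return int(max(0.0, min(255.0, v)))

        return clamp(r), clamp(g), clamp(b)

    print(kelvin_to_rgb(3200))   # warm tungsten -> orange-ish white
    print(kelvin_to_rgb(6500))   # D65-ish daylight -> near neutral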

 

 

REFERENCES

 


How to Convert Temperature (K) to RGB: Algorithm and Sample Code

https://tannerhelland.com/2012/09/18/convert-temperature-rgb-algorithm-code.html

 

http://www.vendian.org/mncharity/dir3/blackbody/UnstableURLs/bbr_color.html

 

http://riverfarenh.com/light-bulb-color-chart/

 

https://www.lightsfilmschool.com/blog/filmmaking-white-balance-and-color-temperature

 

https://astro-canada.ca/le_spectre_electromagnetique-the_electromagnetic_spectrum-eng

 

http://www.3drender.com/glossary/colortemp.htm

 

http://pernime.info/light-kelvin-scale/

 

http://lowel.tiffen.com/edu/color_temperature_and_rendering_demystified.html

 

https://en.wikipedia.org/wiki/Color_temperature

 

https://www.sylvania.com/en-us/innovation/education/light-and-color/Pages/color-characteristics-of-light.aspx

 

How to Convert Temperature (K) to RGB:
http://www.tannerhelland.com/4435/convert-temperature-rgb-algorithm-code/


https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_cinema_4d_ci_Lights_html

Macro photography stacking
/ photography, software

The main question being: Is it better to use a macro rail or is it better to vary the focus of the lens?

Photography Stacking PDF Presentations:

With complex scenes, it is a good idea not to change the position of the camera: moving the camera on a focusing rail changes the perspective in a way that no stitching program can solve.

 

But for small subjects like this, it doesn't matter much how you step focus, as long as:
1) you step along the direction that the lens is pointing, and
2) everything you care about ends up in the final image.
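On the software side, the core idea of focus stacking can be sketched with a naive per-pixel sharpness selection (assuming the frames are already aligned and the same size; dedicated stackers such as Zerene Stacker or Helicon Focus do far more careful alignment, scale compensation and blending):

    import cv2
    import numpy as np

    def naive_focus_stack(paths: list) -> np.ndarray:
        """Per pixel, keep the frame with the strongest local detail (Laplacian response)."""
        frames = [cv2.imread(p) for p in paths]                 # assumes same size, pre-aligned
        sharpness = []
        for f in frames:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(cv2.GaussianBlur(gray, (5, 5), 0), cv2.CV_64F)
            sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))  # smooth the decision map

        best = np.argmax(np.stack(sharpness), axis=0)           # index of sharpest frame per pixel
        stack = np.stack(frames)                                 # (N, H, W, 3)
        h, w = best.shape
        yy, xx = np.mgrid[0:h, 0:w]
        return stack[best, yy, xx]

    # result = naive_focus_stack(["step_01.jpg", "step_02.jpg", "step_03.jpg"])  # placeholder names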

https://www.photigy.com/school/the-best-way-to-do-a-focus-stacking-macro-focusing-rails-vs-focus-variation/

 

http://zerenesystems.com/cms/stacker/docs/troubleshooting/ringversusrail

 

Ring controlled stacking: https://www.heliconsoft.com/heliconsoft-products/helicon-fb-tube/

 

 

www.dpreview.com/articles/5717972844/focus-stacking-in-macro-photography/2

 

petapixel.com/2014/07/19/focus-stacking-walkthroughs-will-help-take-macro-photography-next-level/

 

Using the Canon Utility for software controlled stacking:
http://zerenesystems.com/cms/stacker/docs/tutorials/usingcanoneosutility

No one could see the colour blue until modern times
/ colour, photography, production

http://www.businessinsider.com.au/what-is-blue-and-how-do-we-see-color-2015-2

The way that humans see the world… until we have a way to describe something, even something as fundamental as a colour, we may not even notice that it's there.

 

Ancient languages didn’t have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew, not Icelandic cultures. And without a word for the colour, there’s evidence that they may not have seen it at all.

https://www.wnycstudios.org/story/211119-colors

 

Every language first had a word for black and for white, or dark and light. The next word for a colour to come into existence — in every language studied around the world — was red, the colour of blood and wine.

After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colours to appear in every language is blue.

 

The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.

https://mymodernmet.com/shades-of-blue-color-history/

Considered to be the first ever synthetically produced color pigment, Egyptian blue (also known as cuprorivaite) was created around 2,200 B.C. It was made from ground limestone mixed with sand and a copper-containing mineral, such as azurite or malachite, which was then heated between 1470 and 1650°F. The result was an opaque blue glass which then had to be crushed and combined with thickening agents such as egg whites to create a long-lasting paint or glaze.

 

If you think about it, blue doesn't appear much in nature — there aren't animals with blue pigments (with the exception of one butterfly, the Obrina Olivewing; other animals generate blue through light scattering), blue eyes are rare (also blue through light scattering), and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue?

 

So before we had a word for it, did people not naturally see blue? Do you really see something if you don’t have a word for it?

 

To investigate this, a researcher named Jules Davidoff traveled to Namibia, where he conducted an experiment with the Himba tribe, who speak a language that has no word for blue and no distinction between blue and green. When shown a circle with 11 green squares and one blue, they couldn't pick out which one was different from the others.

 

When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?

 

Davidoff says that without a word for a colour, without a way of identifying it as different, it's much harder for us to notice what's unique about it — even though our eyes are physically seeing the blocks in the same way.

 

Further research led to wider discussions about color perception in humans. Everything that we make is based on the fact that humans are trichromats: a television only has 3 primary colors, and our color printers are likewise built around a small set of inks. But some people, in particular some women, seem to be more sensitive to color differences, mainly because they are more aware of them or because of the job that they do.

Eventually this led to the discovery that a small percentage of the population, referred to as tetrachromats, developed an extra cone sensitive around yellow, likely due to gene variations.

The interesting detail is that even among tetrachromats, only those who had a reason to develop, label and work with the extra color sensitivity actually learned to use their native ability.

 

So before blue became a common concept, maybe humans saw it. But it seems they didn't know they were seeing it.

If you can see something yet don't notice it, does it exist? Did colours come into existence over time? Not technically, but our ability to notice them may have.

 

domeble – Hi-Resolution CGI Backplates and 360° HDRI
/ lighting, photography, reference

www.domeble.com/

When collecting HDRIs, make sure the data includes basic metadata, such as:

  • ISO
  • Aperture
  • Exposure time or shutter time
  • Color temperature
  • Color space
  • Exposure value (what the sensor receives of the sun's intensity, in lux)
  • 7+ brackets (with 5 or 6 being the perceived balanced exposure)

 

In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminances (a photometric measure of the luminous intensity per unit area of light travelling in a given direction; it describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle) between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods. This wider dynamic range allows HDR images to represent more accurately the wide range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight and the deepest shadows.

 

The two main sources of HDR imagery are computer renderings and the merging of multiple photographs, which individually are known as low dynamic range (LDR) or standard dynamic range (SDR) images. Tone mapping (look-up) techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.
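A minimal sketch of the "merging of multiple photographs" path, assuming the brackets have already been linearized (e.g. developed from raw with a linear response) and are perfectly aligned; this is a generic weighted merge, not the method used by any particular HDR package:

    import numpy as np

    def merge_brackets(images: list, exposure_times: list) -> np.ndarray:
        """Weighted average of per-frame radiance estimates (image / exposure time).

        images: list of linear float RGB arrays in [0, 1], all the same shape.
        exposure_times: matching shutter times in seconds.
        """
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            # Trust mid-tones most; down-weight clipped highlights and noisy shadows.
            w = 1.0 - np.abs(img - 0.5) * 2.0
            num += w * (img / t)
            den += w
        return num / np.maximum(den, 1e-6)   # linear HDR radiance map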

 

In photography, dynamic range is measured in Exposure Value (EV) differences, or stops, between the brightest and darkest parts of the image that show detail (in photography, exposure value denotes all combinations of camera shutter speed and relative aperture that give the same exposure; the concept was developed in Germany in the 1950s). An increase of one EV, or one stop, is a doubling of the amount of light.

 

The human response to brightness is well approximated by Stevens' power law, which over a reasonable range is close to logarithmic, as described by the Weber–Fechner law. This is one reason that logarithmic measures of light intensity are often used.

 

HDR is short for High Dynamic Range. It's a term used to describe an image which contains a greater exposure range than the "black" to "white" that 8- or 16-bit integer formats (JPEG, TIFF, PNG) can describe. Whereas these Low Dynamic Range (LDR) images can hold perhaps 8 to 10 f-stops of image information, HDR images can describe beyond 30 stops and are typically stored in 32-bit formats.