Romain Chauliac – LightIt, a lighting script for Maya and Arnold
/ lighting, production

LightIt is a script for Maya and Arnold that helps improve your lighting workflow, thanks to preset studio lighting components (lights, backdrop…), high-quality studio scenes, and an HDRI library manager.

 

 

https://www.artstation.com/artwork/393emJ

 

https://wzx.gumroad.com/l/lightit

Vahan Sosoyan’s MakeHDR – an open-source OpenFX plug-in for merging multiple LDR images into a single HDRI
/ lighting, software

https://github.com/Sosoyan/make-hdr

 

Feature notes

  • Merge up to 16 inputs with 8-, 10-, or 12-bit depth processing
  • User-friendly logarithmic Tone Mapping controls within the tool
  • Advanced controls such as Sampling rate and Smoothness

 

Available cross-platform on Linux, macOS, and Windows; works consistently in compositing applications like Nuke, Fusion, and Natron.

Custom bokeh in a raytraced DOF render
/ lighting, photography

 

https://www.linkedin.com/posts/davidgruwierlarsen_you-can-render-super-realistic-custom-bokeh-activity-7148259483440381952-I9hi

 

To achieve a custom pinhole-camera effect with custom bokeh in the Arnold renderer, follow these steps:

  1. Set the render camera’s focal length to around 50mm (or as needed).
  2. Set the F-Stop to a high value (e.g., 22).
  3. Set the focus distance as required.
  4. Turn on DOF.
  5. Place a plane a few centimeters in front of the camera.
  6. Texture the plane with a transparent shape at its center (transmission with no specular roughness).
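Step 6 is the key: the transparent cutout acts as the aperture, so its silhouette is stamped into every out-of-focus highlight. A hypothetical helper for generating such a cutout mask is sketched below; the resolution, regular-polygon shape, and 0/255 mask convention are all assumptions, not part of the original post.

```python
# Sketch: generate a square alpha mask with a transparent regular-polygon
# "bokeh" cutout at the center (255 = transparent, 0 = opaque).
import math

def bokeh_mask(size=256, sides=6, radius_frac=0.4):
    r = size * radius_frac          # circumradius of the polygon
    cx = cy = size / 2.0
    rows = []
    for y in range(size):
        row = []
        for x in range(size):
            dx, dy = x - cx, y - cy
            d = math.hypot(dx, dy)
            if d == 0:
                row.append(255)
                continue
            # distance from center to the polygon edge along this angle
            a = math.atan2(dy, dx) % (2 * math.pi / sides)
            edge = r * math.cos(math.pi / sides) / math.cos(a - math.pi / sides)
            row.append(255 if d <= edge else 0)
        rows.append(row)
    return rows
```

Write the mask out as a texture (e.g. via PIL or OpenCV) and plug it into the plane’s shader transparency.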

DiffusionLight: HDRI Light Probes for Free by Painting a Chrome Ball
/ lighting, photography, production

https://diffusionlight.github.io/

 

 

https://github.com/DiffusionLight/DiffusionLight

 

https://github.com/DiffusionLight/DiffusionLight?tab=MIT-1-ov-file#readme

 

https://colab.research.google.com/drive/15pC4qb9mEtRYsW3utXkk-jnaeVxUy-0S

 

“a simple yet effective technique to estimate lighting in a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks to regress an input with limited field-of-view to a full environment map. However, these approaches often struggle with real-world, uncontrolled settings due to the limited diversity and size of their datasets. To address this problem, we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity, this task remains challenging: the diffusion models often insert incorrect or inconsistent objects and cannot readily generate images in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map, which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios.”

 

GretagMacbeth Color Checker Numeric Values and Middle Gray

The human eye perceives half of a scene’s brightness not as the linear 50% of the incoming energy but as roughly 18% of it: we are biased to extract more information from dark and high-contrast areas. A Macbeth chart helps calibrate a photographic capture back to this “human perspective” of the world.

 

https://en.wikipedia.org/wiki/Middle_gray

 

In photography, painting, and other visual arts, middle gray or middle grey is a tone that is perceptually about halfway between black and white on a lightness scale; in photography and printing, it is typically defined as 18% reflectance in visible light.
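The “perceptually about halfway” claim can be checked numerically: on the CIE L* lightness scale, a standard perceptual scale running from 0 (black) to 100 (diffuse white), 18% linear reflectance lands almost exactly at the midpoint.

```python
# CIE L* lightness from relative luminance Y (standard CIELAB formula)
def cie_lightness(Y):
    if Y > 0.008856:                  # (6/29)**3, cube-root region
        return 116 * Y ** (1 / 3) - 16
    return 903.3 * Y                  # linear segment near black

middle_gray = cie_lightness(0.18)     # ~49.5: about halfway to 100
white = cie_lightness(1.0)            # 100.0: diffuse white
```

So an 18% reflectance patch sits at roughly L* = 50, which is why it reads as “middle” gray despite being far below 50% linear energy.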

 

Light meters, cameras, and pictures are often calibrated using an 18% gray card or a color reference card such as a ColorChecker. On the assumption that 18% is similar to the average reflectance of a scene, a gray card can be used to estimate the required exposure of the film.

 

https://en.wikipedia.org/wiki/ColorChecker

 

 

https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

 

The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light coming in and makes a guess based on that. The camera aims for 18% gray regardless, meaning that if you photograph an entirely white surface and an entirely black surface, you should get two nearly identical gray images (at least in theory). Thus enters the Macbeth chart.
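That behavior is easy to simulate. The toy model below (an assumption for illustration, not any specific camera’s metering algorithm) scales exposure so the frame’s average lands on 18% gray, which is exactly why both walls come out mid-gray.

```python
# Toy reflected-light meter: pick the exposure multiplier that brings
# the average linear pixel value of the frame to 18% gray.
def meter_exposure(pixels, target=0.18):
    avg = sum(pixels) / len(pixels)
    return target / avg

white_wall = [0.90] * 100   # linear reflectance of a white surface
black_wall = [0.04] * 100   # ...and of a black one

mult_white = meter_exposure(white_wall)  # ~0.2: meter stops down
mult_black = meter_exposure(black_wall)  # ~4.5: meter opens up
# After applying the multipliers, both frames average 0.18:
# two nearly identical mid-gray images.
```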

 

Note that Chroma Key Green is reasonably close to an 18% gray reflectance.

http://www.rags-int-inc.com/PhotoTechStuff/MacbethTarget/

 


 

https://upload.wikimedia.org/wikipedia/commons/b/b4/CIE1931xy_ColorChecker_SMIL.svg

 

RGB coordinates of the Macbeth ColorChecker

 

https://pdfs.semanticscholar.org/0e03/251ad1e6d3c3fb9cb0b1f9754351a959e065.pdf


Neural Microfacet Fields for Inverse Rendering
/ A.I., lighting, software

https://half-potato.gitlab.io/posts/nmf/

 

 

Fast, optimized ‘for’ pixel loops with OpenCV and Python to create tone-mapped HDR images
/ lighting, photography, python, software

https://pyimagesearch.com/2017/08/28/fast-optimized-for-pixel-loops-with-opencv-and-python/

 

https://learnopencv.com/exposure-fusion-using-opencv-cpp-python/

 

Exposure Fusion is a method for combining images taken with different exposure settings into a single image that looks like a tone-mapped high-dynamic-range (HDR) image.