GretagMacbeth Color Checker Numeric Values and Middle Gray

The human eye does not perceive half scene brightness as the linear 50% of the available light energy (linear values), but as roughly 18% of the overall brightness: we are biased toward perceiving more detail in dark and high-contrast areas. A Macbeth chart helps with calibrating a photographic capture back into this “human perspective” of the world.

 

https://en.wikipedia.org/wiki/Middle_gray

 

In photography, painting, and other visual arts, middle gray or middle grey is a tone that is perceptually about halfway between black and white on a lightness scale. In photography and printing, it is typically defined as 18% reflectance in visible light.

 

Light meters, cameras, and pictures are often calibrated using an 18% gray card or a color reference card such as a ColorChecker. On the assumption that 18% is similar to the average reflectance of a scene, a gray card can be used to estimate the required exposure of the film.
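The “perceptually about halfway” claim can be checked with the CIE 1976 lightness formula: an 18% reflectance patch lands near L* = 50, the midpoint of the 0 to 100 lightness scale. A minimal sketch:

```python
# CIE 1976 L* lightness of an 18% reflectance (middle gray) patch.
# L* runs from 0 (black) to 100 (white); ~50 is the perceptual midpoint.

def cie_lightness(Y, Yn=1.0):
    """CIE L* from relative luminance Y (0..1), white point Yn."""
    t = Y / Yn
    # piecewise CIE function: cube root above (6/29)^3, linear below
    f = t ** (1 / 3) if t > (6 / 29) ** 3 else (841 / 108) * t + 4 / 29
    return 116 * f - 16

print(round(cie_lightness(0.18), 1))  # -> 49.5, i.e. nearly mid-lightness
```

This is why a scene that reflects only 18% of the light still reads as “middle” gray to the eye.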

 

https://en.wikipedia.org/wiki/ColorChecker

 

 

https://photo.stackexchange.com/questions/968/how-can-i-correctly-measure-light-using-a-built-in-camera-meter

 

The exposure meter in the camera does not know whether the subject itself is bright or not. It simply measures the amount of light that comes in and makes a guess based on that. The camera will aim for 18% gray regardless of the subject: if you take a photo of an entirely white surface and one of an entirely black surface, you should (at least in theory) get two identical gray images. Thus enters the Macbeth chart.

 

Note that Chroma Key Green is reasonably close to an 18% gray reflectance.

http://www.rags-int-inc.com/PhotoTechStuff/MacbethTarget/

 


 

https://upload.wikimedia.org/wikipedia/commons/b/b4/CIE1931xy_ColorChecker_SMIL.svg

 

RGB coordinates of the Macbeth ColorChecker

 

https://pdfs.semanticscholar.org/0e03/251ad1e6d3c3fb9cb0b1f9754351a959e065.pdf


Introduction to Autodesk ShotGrid
/ production, software

https://customersuccess.autodesk.com/learning/course/introduction-to-shotgrid

 

 

Learn about ShotGrid’s basic capabilities and functionality in this introductory course. Set up your account, gain an understanding of the structure of data within ShotGrid, learn to navigate ShotGrid, determine your role, including what you can and cannot do, and customize the view of on-screen data.

Unpremult and Premult in compositing cycles
/ production, software

Steve Wright

https://www.linkedin.com/pulse/why-oh-premultiply-steve-wright/

 

James Pratt

https://jamesprattvfx.wordpress.com/2018/11/08/premult-unpremult/

 

The simple definition of premult is to multiply the alpha and the RGB of the input together.

Unpremult, as the name suggests, does the opposite operation to the premult node: instead of multiplying the RGB values by the alpha, it divides them.

 

Alan Martinez

https://www.linkedin.com/posts/alan-martinez-1a7a60234_unpremult-and-premult-are-terms-used-activity-7089270470889394176-OVXE

 

“Unpremult” and “premult” are terms used in digital compositing that are relevant both for those working with computer-generated graphics (CG) and for those working with live-action plates.

 

“Unpremult” is short for “unpremultiply” and refers to undoing the multiplication of a pixel by its alpha value. It is commonly used to avoid halos or unwanted edges when combining images, by making sure that edits to a layer are applied independently of the opacity at its edges.

 


“Premult” is short for “premultiply” and is the opposite process of “unpremult.” In this case, each pixel in an image is multiplied by its alpha value.
In simple terms, premult crops the RGB by its alpha, while unpremult does the opposite.

 

It’s important to perform color corrections on CG renders in a sandwich approach: first, divide (unpremultiply) the image to restore the full-range RGB values at the edges; then apply the necessary color corrections; finally, premultiply the image again to avoid artifacts on the edges.
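The sandwich above can be sketched in a few lines of NumPy. This is a minimal illustration of the math, not any particular compositing package's implementation; the grade gain is an arbitrary example value:

```python
import numpy as np

# Minimal "unpremult -> grade -> premult" sandwich on a premultiplied
# RGBA pixel. Purely illustrative; real compositors handle edge pixels,
# clamping, and zero-alpha policy with more care.

def unpremult(rgb, alpha, eps=1e-8):
    # divide RGB by alpha where alpha is non-zero (back to "straight" RGB)
    return np.where(alpha > eps, rgb / np.maximum(alpha, eps), rgb)

def premult(rgb, alpha):
    # multiply RGB by alpha (premultiplied storage)
    return rgb * alpha

alpha = np.array([0.5])                     # 50% opaque edge pixel
rgb_premult = np.array([0.2, 0.3, 0.1]) * alpha  # stored premultiplied

straight = unpremult(rgb_premult, alpha)    # -> [0.2, 0.3, 0.1]
graded = straight * 1.5                     # color correction on full-range RGB
result = premult(graded, alpha)             # re-premultiply to avoid halos
```

Grading the premultiplied RGB directly would have scaled the alpha-darkened edge values, producing the familiar dark or bright fringe around the matte.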

 

Typically, most 3D rendered images are premultiplied. As a rule of thumb, if the background is black or even just very dark, the image may be premultiplied. Additionally, most of the time, the 3D render has antialiasing in the edges.

 

Aaron Strasbourg

https://www.aaronstrasbourgvfx.com/post/2017/06/23/002-unpremult-and-premult

KeenTools 2023.2: Introducing GeoTracker for Blender!
/ blender, production, software

https://keentools.io/products/geotracker-for-blender

 

 

Changes in add-ons Blender

  • GeoTracker open beta
  • Up to 3 times faster texture generation
  • Minor bug fixes and improvements in FaceBuilder

Changes in GeoTracker for After Effects

  • Video analysis up to 2 times faster
  • Improved tracking performance up to 20%
  • Accelerated surface masking
  • Fixed primitive scaling issue
  • Minor bug fixes and improvements

Changes in Nuke package

  • Video analysis up to 2 times faster
  • Improved tracking performance up to 20%
  • TextureBuilder up to 3 times faster
  • Accelerated surface masking
  • New build for Nuke 14 running on newer Linux systems (RHEL 9)
  • Minor bug fixes and improvements
Photography Basics : Spectral Sensitivity Estimation Without a Camera
/ colour, production, software

https://color-lab-eilat.github.io/Spectral-sensitivity-estimation-web/

 

A number of problems in computer vision and related fields would be mitigated if camera spectral sensitivities were known. As consumer cameras are not designed for high-precision visual tasks, manufacturers do not disclose spectral sensitivities. Their estimation requires a costly optical setup, which triggered researchers to come up with numerous indirect methods that aim to lower cost and complexity by using color targets. However, the use of color targets gives rise to new complications that make the estimation more difficult, and consequently, there currently exists no simple, low-cost, robust go-to method for spectral sensitivity estimation that non-specialized research labs can adopt. Furthermore, even if not limited by hardware or cost, researchers frequently work with imagery from multiple cameras that they do not have in their possession.

 

To provide a practical solution to this problem, we propose a framework for spectral sensitivity estimation that not only does not require any hardware (including a color target), but also does not require physical access to the camera itself. Similar to other work, we formulate an optimization problem that minimizes a two-term objective function: a camera-specific term from a system of equations, and a universal term that bounds the solution space.

 

Unlike other work, we utilize publicly available high-quality calibration data to construct both terms. We use the colorimetric mapping matrices provided by the Adobe DNG Converter to formulate the camera-specific system of equations, and constrain the solutions using an autoencoder trained on a database of ground-truth curves. On average, we achieve reconstruction errors as low as those that can arise due to manufacturing imperfections between two copies of the same camera. We provide predicted sensitivities for more than 1,000 cameras that the Adobe DNG Converter currently supports, and discuss which tasks can become trivial when camera responses are available.

 

 

 

Denoisers available in Arnold
/ production, software

https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_denoising_html

 

 

AOV denoising: While all denoisers work on arbitrary AOVs, not all denoisers guarantee that the denoised AOVs composite together to match the denoised beauty. The AOV column indicates whether a denoiser has robust AOV denoising and can produce a result where denoised_AOV_1 + denoised_AOV_2 + … + denoised_AOV_N = denoised_Beauty.
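The additive property in the AOV column is easy to verify per pixel: the denoised AOVs must recombine into the denoised beauty within a small tolerance. A sketch of that check, with illustrative AOV names and values:

```python
import numpy as np

# Sanity check for robust AOV denoising: denoised AOVs should still
# sum to the denoised beauty. AOV names/values are illustrative.

def aovs_match_beauty(aovs, beauty, tol=1e-3):
    """True when the per-AOV sum reproduces the beauty within tolerance."""
    return np.allclose(sum(aovs), beauty, atol=tol)

diffuse  = np.array([0.30, 0.10])
specular = np.array([0.05, 0.02])
sss      = np.array([0.01, 0.00])
beauty   = diffuse + specular + sss   # in practice: the denoised beauty pass

print(aovs_match_beauty([diffuse, specular, sss], beauty))  # True
```

If a denoiser fails this check, compositing the denoised AOVs will not reproduce the denoised beauty, which matters for per-AOV grading downstream.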

 

OptiX™ Denoiser imager

This imager is available as a post-processing effect. The imager also exposes additional controls for clamping and blending the result. It is based on Nvidia AI technology and is integrated into Arnold for use with IPR and look dev. The OptiX™ denoiser is meant to be used during IPR (so that you get a very quickly denoised image as you’re moving the camera and making other adjustments).

 

OIDN Denoiser imager

The OIDN denoiser (based on Intel’s Open Image Denoise technology) is available as a post-processing effect. It is integrated into Arnold for use with IPR as an imager (so that you get a very quickly denoised image as you’re moving the camera and making other adjustments).

 

Arnold Denoiser (Noice)

The Arnold Denoiser (Noice) can be run from a dedicated UI, exposed in the Denoiser, or as an imager. You will need to render images out first via the Arnold EXR driver with variance AOVs enabled. It is also available as a stand-alone program (noice.exe).

This imager is available as a post-processing effect. You can automatically denoise images every time you render a scene, edit the denoising settings and see the resulting image directly in the render view. It favors quality over speed and is, therefore, more suitable for high-quality final frame denoising and animation sequences.
Note:

imager_denoiser_noice does not support temporal denoising (required for denoising an animation).

AOUSD – Pixar, Adobe, Apple, Autodesk, and NVIDIA Form Alliance for OpenUSD to Drive Open Standards for 3D Content
/ production, software, ves

https://www.linuxfoundation.org/press/announcing-alliance-for-open-usd-aousd

 

https://aousd.org/

 

The alliance seeks to standardize the 3D ecosystem by advancing the capabilities of Open Universal Scene Description (OpenUSD). By promoting greater interoperability of 3D tools and data, the alliance will enable developers and content creators to describe, compose, and simulate large-scale 3D projects and build an ever-widening range of 3D-enabled products and services.

Virtual Production volumes study
/ colour, photography, production

Color Fidelity in LED Volumes
https://theasc.com/articles/color-fidelity-in-led-volumes

 

Virtual Production Glossary
https://vpglossary.com/

 

What is Virtual Production – In depth analysis
https://www.leadingledtech.com/what-is-a-led-virtual-production-studio-in-depth-technical-analysis/

 

A comparison of LED panels for use in Virtual Production:
Findings and recommendations

https://eprints.bournemouth.ac.uk/36826/1/LED_Comparison_White_Paper%281%29.pdf

VFX pipeline – Render Wall management topics
/ Featured, production

1: Introduction Title: Managing a VFX Facility’s Render Wall

  • Briefly introduce the importance of managing a VFX facility’s render wall.
  • Highlight how efficient management contributes to project timelines and overall productivity.

 

2: Daily Overview Title: Daily Management Routine

  • Monitor Queues: Begin each day by reviewing render queues to assess workload and priorities.
  • Resource Allocation: Allocate resources based on project demands and available hardware.
  • Job Prioritization: Set rendering priorities according to project deadlines and importance.
  • Queue Optimization: Adjust queue settings to maximize rendering efficiency.

 

3: Resource Allocation Title: Efficient Resource Management

  • Hardware Utilization: Distribute rendering tasks across available machines for optimal resource usage.
  • Balance Workloads: Avoid overloading specific machines while others remain underutilized.
  • Consider Off-Peak Times: Schedule resource-intensive tasks during off-peak hours to enhance overall performance.

 

4: Job Prioritization Title: Prioritizing Rendering Tasks

  • Deadline Sensitivity: Give higher priority to tasks with imminent deadlines to ensure timely delivery.
  • Critical Shots: Identify shots crucial to the project’s narrative or visual impact for prioritization.
  • Dependent Shots: Shots that depend on others should be sequenced and prioritized together.

 

5: Queue Optimization and Reporting Title: Streamlining Render Queues

  • Dependency Management: Set up dependencies to ensure shots are rendered in the correct order.
  • Error Handling: Implement automated error detection and requeueing mechanisms.
  • Progress Tracking: Regularly monitor rendering progress and update stakeholders.
  • Data Management: Archive completed renders and remove redundant data to free up storage.
  • Reporting: Provide daily reports on rendering status, resource usage, and potential bottlenecks.
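The prioritization and dependency rules from sections 4 and 5 can be sketched as a small scheduler: order jobs by dependency first, then by earliest deadline. This is a toy illustration; field names are invented and no real render-farm manager's API is implied:

```python
from dataclasses import dataclass, field
from datetime import date

# Toy deadline-driven scheduler with dependency handling, mirroring
# the "Job Prioritization" and "Dependency Management" points above.

@dataclass
class RenderJob:
    name: str
    deadline: date
    depends_on: list = field(default_factory=list)

def schedule(jobs):
    """Dependency-respecting order, earliest deadline first among ready jobs."""
    ordered, done = [], set()
    pending = sorted(jobs, key=lambda j: j.deadline)
    while pending:
        for j in pending:
            if all(d in done for d in j.depends_on):  # all upstream jobs finished
                ordered.append(j)
                done.add(j.name)
                pending.remove(j)
                break
        else:
            raise ValueError("circular dependency")
    return ordered

jobs = [
    RenderJob("comp_v3", date(2024, 1, 5), depends_on=["fx_sim"]),
    RenderJob("fx_sim", date(2024, 1, 6)),
    RenderJob("lighting", date(2024, 1, 4)),
]
print([j.name for j in schedule(jobs)])  # ['lighting', 'fx_sim', 'comp_v3']
```

Note that comp_v3 has the earlier deadline but still runs after fx_sim, because its dependency must complete first.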

 

6: Conclusion Title: Enhancing VFX Workflow

  • Effective management of a VFX facility’s render wall is essential for project success.
  • Daily monitoring, resource allocation, job prioritization, queue optimization, and reporting are key components.
  • A well-managed render wall ensures efficient production, timely delivery, and overall project success.
Infinigen – a free procedural generator of 3D scenes
/ A.I., blender, production, software

https://infinigen.org/

 

https://github.com/princeton-vl/infinigen

 

Infinigen is based on Blender and is free and open-source (BSD 3-Clause License). Infinigen is being actively developed to expand its capabilities and coverage.