FX Artists Are Tired of Fixing It in Post
/ ves

https://www.vulture.com/2023/06/vfx-artists-in-hollywood-push-for-union-amidst-wga-strike.html

 

The CGI in The Little Mermaid was criticized for having an uncanny and soulless look. Ant-Man and the Wasp: Quantumania was called out for lackluster visual effects. So much of what we see onscreen relies on computer-generated imagery, and it costs a lot of money to make. So why does it often look so bad?

 

Vulture’s Chris Lee explains there is a long list of reasons: a lack of qualified workers, directors with limited visual-effects experience, and studios such as Marvel overworking and underpaying artists. “It’s an unsustainable business model,” he tells Into It, “and I’m told over and over again that by not knowing what they want, and by overworking these employees, it’s basically a race to the bottom. The films decline in quality, and the fans revolt.”

Frederick Travis, PhD, explains the concept “We create our reality”
/ quotes

Frederick Travis, PhD, director of the Center for Brain, Consciousness and Cognition, explains that the concept “We create our reality” is more than a philosophical statement. It is a physical reality driven by neural plasticity: every experience changes the brain. Therefore, as we choose transcendental experiences, higher states of consciousness naturally unfold.

 

The “massive lie” that returning to work is magically better for productivity and collaboration
/ ves

https://www.news.com.au/finance/work/at-work/disastrous-experiment-real-reason-behind-hated-return-to-work-push/news-story/6f377ea396388a531de6cedf89936fe5

 

“I think the push in some quarters to get everyone back into the office for the majority of the time is being driven by two factors.

 

The first one is concern about commercial property values.

 

The second is a peculiar harking back by some managers to a 1950s Theory X approach. Theory X assumes that all workers are lazy, must be watched at all times and need to be directed and controlled in order to work.”

Apple Vision Pro VR/AR headset will cost $3,499 and debut in early 2024
/ hardware, VR

https://edition.cnn.com/2023/06/06/tech/apple-vision-pro-hands-on/index.html

 

https://edition.cnn.com/tech/live-news/apple-event-june-wwdc-2023/index.html

 

10 FREE AI Tools
/ A.I., software
Image rendering bit depth
/ colour

The terms 16-bit, 16-bit float, and 32-bit refer to different data formats used to store and represent image information, expressed as bits per pixel or bits per channel.

 

https://en.wikipedia.org/wiki/Color_depth

 

In color technology, color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, OR the number of bits used for each color component of a single pixel.

 

When referring to a pixel, the concept can be defined as bits per pixel (bpp).

 

When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often.

 

Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.

 

 

Here’s a simple explanation of each.

 

8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.

 

16-bit: This integer format uses 16 bits of data to represent color values for each channel. With 16 bits, you can have 65,536 discrete levels of color, allowing for relatively high precision and smooth gradients. However, it has a limited dynamic range, meaning it cannot represent values brighter than white or darker than black. It is commonly used for regular images and textures.
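As a quick illustration (a minimal NumPy sketch, not from any of the linked articles), here are the level counts at each integer depth, and why quantizing a smooth gradient to fewer levels shows up as banding:

```python
import numpy as np

# Number of discrete levels per channel at each integer bit depth.
for bits in (8, 16):
    print(f"{bits}-bit: {2**bits:,} levels per channel")

# Quantizing a smooth 0..1 gradient keeps far fewer distinct steps
# at 8-bit than at 16-bit; those coarse steps are visible as banding.
gradient = np.linspace(0.0, 1.0, 100_000)
g8 = np.round(gradient * 255) / 255        # 8-bit quantization
g16 = np.round(gradient * 65535) / 65535   # 16-bit quantization
print(len(np.unique(g8)), len(np.unique(g16)))  # 256 vs 65536
```

With 100,000 gradient samples, the 8-bit version collapses to only 256 distinct values, while the 16-bit version preserves all 65,536 levels.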

 

16-bit float: This format, commonly referred to as “half-precision,” also uses 16 bits per channel but stores floating-point numbers instead of fixed integers. In IEEE half precision the 16 bits are split into a sign bit, a 5-bit exponent, and a 10-bit mantissa; the exponent controls the range of values that can be represented, so brightness can span many orders of magnitude. The 16-bit float format trades some precision for a far wider dynamic range than regular 16-bit integers, making it useful for high-dynamic-range imaging (HDRI) and computations that need values beyond the 0–1 range.

 

32-bit float: Images in this format (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. This format, also known as “full-precision” or simply “float,” uses 32 bits per channel and offers the highest precision and dynamic range among the three options. With 32 bits, you have a significantly larger number of discrete levels, allowing for extremely accurate color representation, smooth gradients, and a wide range of brightness values. It is commonly used for professional rendering, visual effects, and scientific applications where maximum precision is required.
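The range/precision trade-off between half and full precision is easy to inspect from NumPy's IEEE 754 type metadata (a quick sketch, assuming standard `float16`/`float32` types):

```python
import numpy as np

# Compare half-precision (float16) and full-precision (float32).
half = np.finfo(np.float16)
full = np.finfo(np.float32)

print(half.max)   # 65504.0 -> largest representable half value
print(full.max)   # ~3.4e38 -> vastly larger range
print(half.eps)   # ~0.000977 (2**-10) -> ~11 bits of precision
print(full.eps)   # ~1.19e-07 (2**-23) -> ~24 bits of precision
```

The `eps` values show why 16-bit float gradients can band where 32-bit float ones stay smooth: every half-float value carries only about 11 significant bits.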

 

Bits and HDR coverage

High Dynamic Range (HDR) images are designed to capture a wide range of luminance values, from the darkest shadows to the brightest highlights, in order to reproduce a scene with more accuracy and detail. The bit depth of an image refers to the number of bits used to represent each pixel’s color information. When comparing 32-bit float and 16-bit float HDR images, the drop in accuracy primarily relates to the precision of the color information.

 

A 32-bit float HDR image offers a higher level of precision compared to a 16-bit float HDR image. In a 32-bit float format, each color channel (red, green, and blue) is represented by 32 bits, allowing for a larger range of values to be stored. This increased precision enables the image to retain more details and subtleties in color and luminance.

 

On the other hand, a 16-bit float HDR image utilizes 16 bits per color channel, resulting in a reduced range of values that can be represented. This lower precision leads to a loss of fine details and color nuances, especially in highly contrasted areas of the image where there are significant differences in luminance.

 

The drop in accuracy between 32-bit and 16-bit float HDR images becomes more noticeable as the exposure range of the scene increases. Exposure range refers to the span between the darkest and brightest areas of an image. In scenes with a limited exposure range, where the luminance differences are relatively small, the loss of accuracy may not be as prominent or perceptible. Such scenes usually span around 8–10 stops of exposure.

 

However, in scenes with a wide exposure range, such as a landscape with deep shadows and bright highlights, the reduced precision of a 16-bit float HDR image can result in visible artifacts like color banding, posterization, and loss of detail in both shadows and highlights. The image may exhibit abrupt transitions between tones or colors, which can appear unnatural and less realistic.

 

To provide a rough estimate, it is often observed that exposure values beyond approximately ±6 to ±8 stops from the middle gray (18% reflectance) may be more prone to accuracy issues in a 16-bit float format. This range may vary depending on the specific implementation and encoding scheme used.
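One way to see why accuracy degrades at extreme exposures is to look at the absolute quantization step (ULP) of float16 at values several stops above middle gray: the relative step stays roughly constant, but the absolute step doubles with every stop, until the format overflows entirely. A NumPy sketch (the ±6–8 stop figure above is a practical estimate, not derived here):

```python
import numpy as np

MIDDLE_GRAY = 0.18  # 18% reflectance

# np.spacing gives the gap to the next representable float16 value.
# The gap roughly doubles per stop, so fine tonal detail in very
# bright (or very dark) regions falls between representable values.
for stops in (0, 4, 8, 12):
    v = np.float16(MIDDLE_GRAY * 2**stops)
    print(f"+{stops:2d} stops: value {float(v):10.3f}  "
          f"step {float(np.spacing(v)):.6f}")

# Far enough above middle gray, float16 simply overflows to infinity.
print(np.float16(MIDDLE_GRAY * 2**20))  # inf
```

Float32, with its 8-bit exponent and 23-bit mantissa, keeps both a tiny relative step and an enormous ceiling, which is why it tolerates wide exposure ranges so much better.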

 

To summarize, the drop in accuracy between 32-bit and 16-bit float HDR images is mainly related to the reduced precision of color information. This decrease in precision becomes more apparent in scenes with a wide exposure range, affecting the representation of fine details and leading to visible artifacts in the image.

 

In practice, this means that exposure values beyond a certain range will experience a loss of accuracy and detail when stored in a 16-bit float format. The exact range at which this loss occurs depends on the encoding scheme and the specific implementation. However, in general, extremely bright or extremely dark values that fall outside the representable range may be subject to quantization errors, resulting in loss of detail, banding, or other artifacts.

 

HDRs used for lighting purposes are usually slightly convolved (blurred) to improve sampling speed and remove specular artefacts. To that extent, 16-bit float HDRIs tend to be the most used in CG pipelines.