Angelo Perotta – Comp and Eddy Demo Reel
/ software

20 New Series Coming to Disney+ in ‘Next Few Years’
/ ves

www.awn.com/news/20-new-series-coming-disney-next-few-years

“In yesterday’s big Disney investor meeting, Lucasfilm’s Kathleen Kennedy revealed plans for 10 Star Wars and 10 Marvel series to stream on Disney+ in the next few years. PHEW! That’s 20 series in less than five years, which is ambitious to say the least.”

Right now, hit songwriters are driving Ubers
/ ves

www.bbc.com/news/entertainment-arts-55232418

Fiona Bevan, who has written songs for One Direction, Steps and Lewis Capaldi, said many writers were struggling because of the way streaming services pay royalties.

Bevan revealed she had earned just £100 for co-writing a track on Kylie Minogue’s number one album, Disco.

“The most successful songwriters in the world can’t pay their rent,” she added.

A Wave Optics Based Fiber Scattering Model
/ software

mandyxmq.github.io/research/wavefiber.html

This figure shows a spiderweb iridescence example. The left image is a photograph of this effect by Marianna Armata. The middle image is rendered using the new wave-based BCSDF and the image on the right is rendered using a previous ray-based BCSDF.

Teaching AI + ethics from elementary to high school
/ A.I.

codeorg.medium.com/microsoft-code-org-partner-to-teach-ai-ethics-from-elementary-to-high-school-4b983fd809e3

At a time when AI and machine learning are changing the very fabric of society and transforming entire industries, it is more important than ever to give every student the opportunity to not only learn how these technologies work, but also to think critically about the ethical and societal impacts of AI.

HDRI Median Cut plugin
/ Featured, lighting, software

www.hdrlabs.com/picturenaut/plugins.html

Note: the Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn't directly provide a way to identify the brightest areas in an image. If that is what you need, look into other methods such as thresholding, histogram analysis, or edge detection, using OpenCV for example.

Here is an OpenCV example:

# bottom left coordinates = 0,0
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # recent OpenCV builds need this to read EXR

import numpy as np
import cv2

# Load the HDR or EXR image as-is, without modification
image = cv2.imread('your_image_path.exr', cv2.IMREAD_UNCHANGED)

# Calculate the luminance from the HDR channels.
# OpenCV loads channels in BGR order, so reverse them to RGB before weighting.
luminance = np.dot(image[..., 2::-1], [0.299, 0.587, 0.114])

# Set a threshold value based on estimated EV
threshold_value = 2.4  # Estimated threshold value based on 4.8 EV

# Apply the threshold to identify bright areas.
# The luminance array holds the calculated luminance of each pixel;
# threshold_value is a user-defined cutoff separating "bright" from "dark" pixels.
thresholded = (luminance > threshold_value) * 255

# Convert the thresholded image to uint8 for contour detection
thresholded = thresholded.astype(np.uint8)

# Find contours of the bright areas
contours, _ = cv2.findContours(thresholded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Create a list to store the bounding boxes of bright areas
bright_areas = []

# Iterate through contours and extract bounding boxes
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)

    # Adjust the y-coordinate for the bottom-left origin
    y_bottom_left_origin = image.shape[0] - (y + h)

    # Store as (x1, y1, x2, y2)
    bright_areas.append((x, y_bottom_left_origin, x + w, y_bottom_left_origin + h))

# Print the identified bright areas
print("Bright Areas (x1, y1, x2, y2):")
for area in bright_areas:
    print(area)

More details

Luminance and Exposure in an EXR Image:

  • An EXR (Extended Dynamic Range) image format is often used to store high dynamic range (HDR) images that contain a wide range of luminance values, capturing both dark and bright areas.
  • Luminance refers to the perceived brightness of a pixel in an image. In an RGB image, luminance is often calculated using a weighted sum of the red, green, and blue channels, where different weights are assigned to each channel to account for human perception.
  • In an EXR image, the pixel values can represent radiometrically accurate scene values, including actual radiance or irradiance levels. These values are directly related to the amount of light emitted or reflected by objects in the scene.

The luminance line calculates the luminance of each pixel in the image as a weighted sum of the red, green, and blue channels. The three float values [0.299, 0.587, 0.114] are the weights used in this calculation.

These weights approximate the perceived brightness of a color by taking into account the human eye's different sensitivity to red, green, and blue. They are derived from the NTSC (National Television System Committee) standard and are used in various color image processing operations.

Here’s the breakdown of the float values:

  • 0.299: Weight for the red channel.
  • 0.587: Weight for the green channel.
  • 0.114: Weight for the blue channel.

The weighted sum of these channels helps create a grayscale image where the pixel values represent the perceived brightness. This technique is often used when converting a color image to grayscale or when calculating luminance for certain operations, as it takes into account the human eye’s sensitivity to different colors.
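
As a quick illustration (a minimal sketch, not part of the plugin), here is how this weighted sum rates pure red, green, and blue pixels of equal intensity:

import numpy as np

# NTSC / Rec. 601 luma weights for R, G, B
weights = np.array([0.299, 0.587, 0.114])

# Pure red, green, and blue pixels of equal intensity
pixels = np.array([
    [1.0, 0.0, 0.0],  # red
    [0.0, 1.0, 0.0],  # green
    [0.0, 0.0, 1.0],  # blue
])

# Weighted sum: perceived brightness of each color
print(np.dot(pixels, weights))  # [0.299 0.587 0.114] -> green reads brightest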

For the threshold, remember that the exact relationship between EV values and pixel values can depend on the tone-mapping or normalization applied to the HDR image, as well as the dynamic range of the image itself.

To establish a mapping between exposure and the threshold value, consider how linear and logarithmic scales relate:

  1. Linear and Logarithmic Scales:
    • Exposure values in an EXR image are often represented in logarithmic scales, such as EV (exposure value). Each increment in EV represents a doubling or halving of the amount of light captured.
    • Threshold values for luminance thresholding are usually linear, representing an actual luminance level.
  2. Conversion Between Scales:
    • To establish a mathematical relationship, you need to convert between the logarithmic exposure scale and the linear threshold scale.
    • One common method is a power function that converts EV to a linear intensity value (see the sketch after this list):
    threshold_value = base_value * (2 ** EV)

    Here, EV is the exposure value, base_value is a scaling factor that determines the relationship between EV and threshold_value, and 2 ** EV is used to convert the logarithmic EV to a linear intensity value.

  3. Choosing the Base Value:
    • The base_value factor should be determined based on the dynamic range of your EXR image and the specific luminance values you are dealing with.
    • You may need to experiment with different values of base_value to achieve the desired separation of bright areas from the rest of the image.
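
A minimal sketch of this conversion in both directions (the helper names are illustrative, not from the plugin):

import math

def ev_to_linear(ev, base_value=0.18):
    # Convert a logarithmic EV offset into a linear luminance threshold
    return base_value * (2 ** ev)

def linear_to_ev(value, base_value=0.18):
    # Inverse: how many stops above (or below) the base value a luminance sits
    return math.log2(value / base_value)

print(ev_to_linear(2))      # 0.72 -> two stops above middle gray
print(linear_to_ev(0.72))   # 2.0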

Let’s say you have an EXR image with a dynamic range of 12 EV, which is a common range for many high dynamic range images. In this case, you want to set a threshold value that corresponds to a certain number of EV above the middle gray level (which is often considered to be around 0.18).

Here’s an example of how you might compute the threshold value this way:

# Define the dynamic range of the image in EV
dynamic_range = 12

# Choose the desired number of EV above middle gray for thresholding
# (it should fit within the dynamic range above)
desired_ev_above_middle_gray = 2

# Apply the formula above, with middle gray (0.18) as the base value:
# threshold_value = base_value * (2 ** EV)
threshold_value = 0.18 * (2 ** desired_ev_above_middle_gray)

print("Threshold Value:", threshold_value)  # 0.72

How are Energy and Matter the Same?
/ lighting, quotes

www.turnerpublishing.com/blog/detail/everything-is-energy-everything-is-one-everything-is-possible/

www.universetoday.com/116615/how-are-energy-and-matter-the-same/

As Einstein showed us, light and matter are just aspects of the same thing. Matter is just frozen light, and light is matter on the move. Albert Einstein’s most famous equation says that energy and matter are two sides of the same coin. How does one become the other?

Relativity requires that the faster an object moves, the more mass it appears to have. This means that somehow part of the energy of the object’s motion appears to transform into mass. Hence the origin of Einstein’s equation. How does that happen? We don’t really know. We only know that it does.
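
As a back-of-the-envelope illustration (not from the articles quoted here), E = mc² puts a staggering amount of energy in even a gram of matter:

# E = m * c^2 for one gram of matter
m = 0.001          # mass in kilograms
c = 299_792_458    # speed of light in m/s
E = m * c ** 2
print(f"{E:.3e} J")  # ~8.988e13 joules, roughly 21 kilotons of TNT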

Matter is 99.999999999999 percent empty space. Not only do atoms and solid matter consist mainly of empty space; it is the same in outer space.

Quantum theory researchers discovered the answer: not only do particles consist of energy, but so does the space between them. This is the so-called zero-point energy. Therefore it is true: everything consists of energy.

Energy is the basis of material reality. Every type of particle is conceived of as a quantum vibration in a field: Electrons are vibrations in electron fields, protons vibrate in a proton field, and so on. Everything is energy, and everything is connected to everything else through fields.

Polycam integration with Sketchfab
/ IOS, photogrammetry, software

https://sketchfab.com/blogs/community/polycam-adds-sketchfab-integration/

3D scanning is becoming more accessible with the LiDAR scanners in the new iPhone 12 Pro and iPad Pro.
Polycam’s integration lets users log in to their Sketchfab account and publish directly to Sketchfab.

Facebook joins the Blender Development Fund
/ blender, software, ves

www.blender.org/press/facebook-joins-the-blender-development-fund/

Facebook will join the Blender Foundation’s Development Fund as a Corporate Patron as of Q4, 2020.

Autodesk Shotgun running generative scheduling based on machine learning
/ A.I., software

www.awn.com/news/autodesk-shotgun-taps-new-tech-future-production-management

With Autodesk’s acquisition of technology known as Consilium, machine learning-driven generative scheduling is coming to Shotgun Software, which will enable more accurate bidding, scheduling, and resource planning decisions.

Machine learning is being brought to production management with generative scheduling in Shotgun, currently in early testing. For producers and production managers, this will make the manual and complex challenge of optimized scheduling and resource planning more dynamic, controllable, and predictive. This feature set will allow producers to plan faster, with greater accuracy and agility to help their teams produce the best work possible.

Chandigarh Design School – GO48 International Challenge
/ cool

GO48 Challenge is an international competition that celebrates the creative skills of the global community comprising students, artists, designers, faculty, professionals and industry experts.

The contest has 5 exciting competitions, each comprising 2 challenges, all bound by the common thread of 48.
That means you submit your art/design solution in either 48 minutes or 48 hours.

To this effect, you can work on design solutions in the following 10 Challenges:

Go48 Graphix : Visual Communication Design : LoGO48 and MotionX.

Go48 Anim8 : 2d & 3d Animation : 3D As8 and Anim8.

Go48 Live : Filmmaking : Photography and Live!

Go48 GameIT : Game Design : CharACTer and Game IT.

Go48 UI/UX : User Interaction / User Experience Design : UI Eye and XD48.

chandigarhdesignschool.com/go48-competition/