MDL – NVidia Material Definition Language
/ production, software

www.nvidia.com/en-us/design-visualization/technologies/material-definition-language/

developer.nvidia.com/mdl-sdk

THE NVIDIA MATERIAL DEFINITION LANGUAGE (MDL) gives you the freedom to share physically based materials and lights between supporting applications.

For example, create an MDL material in an application like Allegorithmic Substance Designer, save it to your library, then use it in NVIDIA® Iray® or Chaos Group’s V-Ray, or any other supporting application.

Unlike a shading language that produces programs for a particular renderer, MDL materials define the behavior of light at a high level. Different renderers and tools interpret the light behavior and create the best possible image.

Photography basics: How Exposure Stops (Aperture, Shutter Speed, and ISO) Affect Your Photos – cheat cards
/ Featured, lighting, photography, production

 

Also see:

http://www.pixelsham.com/2018/11/22/exposure-value-measurements/

 

http://www.pixelsham.com/2016/03/03/f-stop-vs-t-stop/

 

 

An exposure stop is a unit measurement of exposure. As such, it provides a universal linear scale for measuring the increase or decrease in the light reaching the image sensor due to changes in shutter speed, ISO and f-stop.

 

±1 stop is a doubling or halving of the amount of light let in when taking a photo.

 

1 EV (exposure value) is just another way to say one stop of exposure change.

 

https://www.photographymad.com/pages/view/what-is-a-stop-of-exposure-in-photography

 

The same applies to shutter speed, ISO and aperture.
Doubling or halving your shutter speed produces an increase or decrease of 1 stop of exposure.
Doubling or halving your ISO speed produces an increase or decrease of 1 stop of exposure.
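Since each stop is a doubling or halving, the change in stops between two shutter or ISO settings is just a base-2 logarithm. A minimal Python sketch (the helper name is mine, not from any camera API):

```python
import math

def stops_between(old, new):
    """Stops of exposure change when a linear quantity
    (shutter time or ISO) goes from old to new. Positive = more light."""
    return math.log2(new / old)

# Doubling shutter time (1/100 s -> 1/50 s) is +1 stop:
print(stops_between(1/100, 1/50))   # 1.0
# ISO 100 -> ISO 800 is +3 stops:
print(stops_between(100, 800))      # 3.0
```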

 

 

Because of the way f-stop numbers are calculated (the ratio of focal length to lens diameter, where focal length is the distance between the lens and the sensor), one stop does not correspond to a doubling or halving of the f-number itself, but to a doubling or halving of the light-gathering area of the lens in relation to its focal length. That translates to multiplying or dividing the f-number by 1.41 (the square root of 2). For example, going from f/2.8 to f/4 is a decrease of 1 stop because 4 ≈ 2.8 × 1.41. Changing from f/16 to f/11 is an increase of 1 stop because 11 ≈ 16 / 1.41.
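The same logarithm works for apertures once the area relationship is accounted for. A small Python sketch (function name is illustrative; marked f-numbers on lenses are rounded conventions):

```python
import math

def aperture_stops(n_old, n_new):
    """Stops of exposure change when the f-number goes from n_old to n_new.
    Light gathered scales with aperture area, i.e. with 1/N^2,
    so each stop multiplies/divides N by sqrt(2) ~= 1.414."""
    return math.log2((n_old / n_new) ** 2)

print(round(aperture_stops(2.8, 4), 2))   # -1.03 (~1 stop less light)
print(round(aperture_stops(16, 11), 2))   # 1.08  (~1 stop more light)

# The full-stop series is sqrt(2)^n; lenses mark the rounded values
# 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16:
series = [round(math.sqrt(2) ** i, 1) for i in range(9)]
print(series)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]
```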

 

 

https://www.quora.com/Photography-How-a-higher-f-Stop-larger-aperture-leads-to-shallow-Depth-Of-Field

A wider aperture means that light proceeding from the foreground, subject, and background enters the lens at more oblique angles than it does with a narrower aperture.

Consider that absolutely everything is bathed in light, therefore light bouncing off of anything is effectively omnidirectional. Your camera happens to be picking up a tiny portion of the light that’s bouncing off into infinity.

Now consider that the wider your iris/aperture, the more of that omnidirectional light you’re picking up:

When you have a very narrow iris you are eliminating a lot of oblique light: whatever light enters, from whatever distance, enters roughly parallel as a whole. When you have a wide aperture, much more light enters at a multitude of angles. Your lens can only focus the light from one depth, so the foreground and background appear blurred because they cannot be brought into focus at the same time.

 

https://frankwhitephotography.com/index.php?id=28:what-is-a-stop-in-photography

The great thing about stops is that they give us a way to directly compare shutter speed, aperture diameter, and ISO speed. This means that we can easily swap these three components about while keeping the overall exposure the same.

 

http://lifehacker.com/how-aperture-shutter-speed-and-iso-affect-pictures-sh-1699204484

 

 

https://www.techradar.com/how-to/the-exposure-triangle

 

 

https://www.videoschoolonline.com/what-is-an-exposure-stop/

 

Note. All three of these measurements (aperture, shutter, ISO) have full stops, half stops and third stops, but if you look at the numbers they aren't always consistent. For example, one third of a stop up from ISO 100 works out to ISO 126 (100 × 2^(1/3)), yet most cameras are marked at ISO 125.

Third-stops are especially important, as they're the increment most cameras use for their settings. These are just subdivisions within each stop.
From a practical standpoint, manufacturers only standardize the full stops; while they try to stay somewhat consistent, there is some rounding going on between the intermediate numbers.
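That rounding is easy to check: third-stop ISO values follow 100 × 2^(n/3), which the marked camera values approximate. A quick Python check:

```python
# Third-stop ISO values are 100 * 2**(n/3); cameras round them
# to the familiar marketing numbers.
isos = [round(100 * 2 ** (n / 3)) for n in range(4)]
print(isos)  # [100, 126, 159, 200] -- marked on cameras as 100, 125, 160, 200
```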

 

http://www.digitalcameraworld.com/2015/04/15/the-exposure-triangle-aperture-shutter-speed-and-iso-explained/

Note that ND Filters directly modify the exposure triangle.

Joe Letteri on Production, VFX and storytelling
/ production, quotes

nerdist.com/article/joe-letteri-avatar-alita-battle-angel-james-cameron-martin-scorsese/

 

[Any] story [has to be] complete in itself. If there are gaps that you’re hoping will be filled in with visual effects, you’re likely to be disappointed. We can add ideas, we can help in whatever way that we can, but you want to make sure that when you read it, it reads well.

 

[Our responsibility as VFX artists] I think first and foremost [is] to engage the audience. Everything that we do has to be part of the audience wanting to sit there and watch that movie and see what happens next. And it's a combination of things. It's the drama of the characters. It's maybe what you can do to a scene to make it compelling to look at, the realism that you might need to get people drawn into that moment. It could be any number of things, but it's really about always keeping in mind how the audience is experiencing what they're seeing.

What’s the Difference Between Ray Casting, Ray Tracing, Path Tracing and Rasterization? Physical light tracing…
/ Featured, lighting, production

RASTERIZATION
Rasterisation (or rasterization)
is the task of taking the information described in a vector graphics format, or the vertices of triangles making up 3D shapes, and converting it into a raster image: a series of pixels, dots or lines which, when displayed together, recreate the image that was represented via shapes. In other words, it "rasterizes" vectors or 3D models onto a 2D plane for display on a computer screen.

For each triangle of a 3D shape, you project the corners of the triangle onto the virtual screen with some math (projective geometry). That gives you the positions of the triangle's 3 corners on the pixel screen. Those 3 points carry texture coordinates, so you know where the 3 corners sit in the texture. The cost is proportional to the number of triangles and is only slightly affected by the screen resolution.
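The projection step described above can be sketched in a few lines. This is a toy pinhole model with an assumed focal length and screen size, not any particular renderer's math:

```python
# Minimal perspective projection of a single vertex onto a virtual screen,
# assuming a pinhole camera at the origin looking down -z.
def project(x, y, z, f=1.0, width=640, height=480):
    """Project a camera-space point to pixel coordinates (toy sketch)."""
    sx = f * x / -z          # perspective divide: farther points shrink
    sy = f * y / -z
    # map the [-1, 1] normalized range to pixel coordinates
    px = int((sx + 1) * 0.5 * width)
    py = int((1 - (sy + 1) * 0.5) * height)
    return px, py

# A point on the optical axis lands at the screen center:
print(project(0.0, 0.0, -5.0))  # (320, 240)
```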

In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium.

With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

Further pixel processing or “shading,” including changing pixel color based on how lights in the scene hit the pixel, and applying one or more textures to the pixel, combine to generate the final color applied to a pixel.

 

The main advantage of rasterization is its speed. However, rasterization is simply the process of computing the mapping from scene geometry to pixels; it does not prescribe a particular way to compute the color of those pixels. On its own it does not account for shading, and in particular for physical light transport, so it cannot promise a photorealistic output. That's a big limitation of rasterization.

There are also multiple problems:

  • If you have two triangles, one behind the other, you will draw all of their pixels twice. You only keep the pixel from the triangle that is closer to you (the Z-buffer), but you still do the work twice.

  • The borders of your triangles are jagged, since it is hard to decide whether a pixel is inside or outside a triangle. You can do some smoothing on those; that is anti-aliasing.

  • You have to process every triangle (including the ones behind you) only to find that it does not touch the screen at all. (There are techniques to mitigate this, where we only look at triangles that are in the field of view.)

  • Transparency is hard to handle: you can't just average the colors of overlapping transparent triangles, you have to composite them in the right order.
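The Z-buffer mentioned in the first bullet can be sketched as a per-pixel depth test. A toy Python version (dictionaries stand in for real framebuffers; smaller z means closer here):

```python
# Minimal z-buffer sketch: keep only the color of the closest fragment per pixel.
depth = {}   # (x, y) -> nearest z seen so far
color = {}   # (x, y) -> color of that nearest fragment

def write_fragment(x, y, z, c):
    """Overwrite the pixel only if this fragment is closer than what's stored."""
    if (x, y) not in depth or z < depth[(x, y)]:
        depth[(x, y)] = z
        color[(x, y)] = c

write_fragment(10, 10, 5.0, "red")    # far triangle drawn first
write_fragment(10, 10, 2.0, "blue")   # nearer triangle wins the depth test
print(color[(10, 10)])  # blue
```

Note that both fragments were still shaded: the depth test discards work after the fact, which is exactly the "done twice" cost the bullet describes.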

RAY CASTING
It is almost the exact reverse of rasterization: you start from the virtual screen instead of the vector or 3D shapes, and you project a ray from each pixel of the screen until it intersects a triangle.

The cost is directly correlated to the number of pixels on the screen, and you need a really cheap way of finding the first triangle that intersects a ray. In the end it is more expensive than rasterization, but by design it ignores the triangles that are outside the field of view.

You can let the ray continue after the first triangle it hits, to pick up a little of the color of the next one, and so on. This is useful for handling triangle borders cleanly (less jagged) and for handling transparency correctly.
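Finding the first triangle a per-pixel ray hits is commonly done with a standard ray/triangle intersection test such as Möller-Trumbore. A self-contained Python sketch (a general textbook algorithm, not tied to any renderer named here):

```python
# Moller-Trumbore ray/triangle intersection (toy, pure-Python version).
def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
    """Return the distance t along ray orig + t*d to the triangle, or None."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    cross = lambda a, b: [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
    dot = lambda a, b: sum(a[i]*b[i] for i in range(3))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv     # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(d, q) * inv         # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

# Ray from the origin straight down -z, triangle sitting at z = -3:
hit = ray_triangle([0, 0, 0], [0, 0, -1], [-1, -1, -3], [1, -1, -3], [0, 1, -3])
print(hit)  # 3.0
```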

 

RAYTRACING


Same idea as ray casting, except that once you hit a triangle you reflect off it and continue in a different direction. The number of reflections you allow is the "depth" of your ray tracing. The color of the pixel can then be calculated based on the light source and all the polygons the ray had to reflect off to reach that screen pixel.

The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

Ray tracing is an eye-oriented process that walks through each pixel looking for what object should be shown there. It can also be described as a technique that follows a beam of light (per pixel) from a set point and simulates how it reacts when it encounters objects.

Compared with rasterization, ray tracing is hard to implement in real time: one ray can be traced and processed without much trouble, but after it bounces off an object it can turn into 10 rays, those 10 can turn into 100, then 1,000… The increase is exponential, and the calculation for all these rays is time-consuming.
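The blow-up is easy to quantify: if each bounce spawns a fixed number of secondary rays, the total ray count per pixel grows as a geometric series in the trace depth. A quick Python illustration (the branching factor of 10 is an assumption for illustration, matching the figure above):

```python
# Total rays traced per pixel when every hit spawns `branching` new rays,
# summed over all bounce levels up to `depth`.
def rays_at_depth(branching, depth):
    return sum(branching ** d for d in range(depth + 1))

for depth in range(5):
    print(depth, rays_at_depth(10, depth))
# 0 1
# 1 11
# 2 111
# 3 1111
# 4 11111
```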

Historically, computer hardware hasn't been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second per frame. As a result, most real-time graphics rely on another technique: rasterization.

 

 

PATH TRACING
Path tracing can be used to solve more complex lighting situations.

Path tracing is a type of ray tracing. When path tracing is used for rendering, each ray spawns only a single new ray per bounce. The rays do not follow a defined line per bounce (to a light, for example), but rather shoot off in a random direction. The path-tracing algorithm then averages a random sampling of all these rays to create the final image. This results in sampling a variety of different types of lighting.

When a ray hits a surface it doesn't trace a path to every light source; instead it bounces off the surface and keeps bouncing until it hits a light source or exhausts some bounce limit.
It then calculates the amount of light transferred all the way to the pixel, including any color information gathered from surfaces along the way.
It then averages the values calculated from all the paths that were traced into the scene to get the final pixel color value.
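The averaging step is a Monte Carlo estimate. In the toy Python sketch below, `trace_one_path` is a made-up stand-in for a real light simulation; only the sample-and-average structure is the point:

```python
import random

def trace_one_path():
    """Stand-in for tracing one random path: pretend a path finds a light
    worth 1.0 about 30% of the time and nothing otherwise."""
    return 1.0 if random.random() < 0.3 else 0.0

def render_pixel(samples):
    """Average many random path samples into one pixel value."""
    return sum(trace_one_path() for _ in range(samples)) / samples

random.seed(42)
print(render_pixel(16))      # few samples: noisy ("spotty") result
print(render_pixel(100000))  # many samples: converges toward the true value 0.3
```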

It requires a ton of computing power, and if you don't send out enough rays per pixel, or don't trace the paths far enough into the scene, you end up with a very spotty image, as many pixels fail to find any light source with their rays. As you increase the samples per pixel, the image quality gets better and better.

Ray tracing tends to be more efficient than path tracing. Basically, the render time of a ray tracer depends on the number of polygons in the scene: the more polygons you have, the longer it will take.
Meanwhile, the render time of a path tracer can be indifferent to the number of polygons, but it is tied to the lighting situation: if you add a light, transparency, translucence, or other shader effects, the path tracer will slow down considerably.

 

Sources:
https://medium.com/@junyingw/future-of-gaming-rasterization-vs-ray-tracing-vs-path-tracing-32b334510f1f

 

https://www.reddit.com/r/explainlikeimfive/comments/8tim5q/eli5_whats_the_difference_among_rasterization_ray/

 

blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/

 

https://en.wikipedia.org/wiki/Rasterisation

 

https://www.dusterwald.com/2016/07/path-tracing-vs-ray-tracing/

 

https://www.quora.com/Whats-the-difference-between-ray-tracing-and-path-tracing

MovieLabs and Hollywood Studios Publish White Paper Envisioning the Future of Media Creation in 2030
/ production, ves

www.broadcastingcable.com/post-type-the-wire/movielabs-and-hollywood-studios-publish-white-paper-envisioning-the-future-of-media-creation-in-2030

The main limitation this technology forecast identifies is speed: delivering valid, up-to-date data to the user base quickly enough.

Generally speaking, data can change after being stored locally in various databases around the world, challenging its overall validity.

With around 75 billion connected users and devices projected by 2030, our current infrastructure will not be able to cope with demand. From 1.2 zettabytes worldwide in 2016 (roughly enough to fill the drives of 9 billion high-capacity iPhones), demand is expected to rise fivefold by 2021, up to 31 GB per person,
while broadband capacity is only expected to double.

This will further fragment both markets and content, possibly to the point where not all information can be retrieved at reasonable or reliable levels.

The 2030 Vision paper lays out key principles that will form the foundation of this technological future, with examples and a discussion of the broader implications of each. The key principles envision a future in which:

1. All assets are created or ingested straight into the cloud and do not need to be moved.

2. Applications come to the media.

3. Propagation and distribution of assets is a “publish” function.

4. Archives are deep libraries with access policies matching speed, availability and security to the economics of the cloud.

5. Preservation of digital assets includes the future means to access and edit them.

6. Every individual on a project is identified and verified, and their access permissions are efficiently and consistently managed.

7. All media creation happens in a highly secure environment that adapts rapidly to changing threats.

8. Individual media elements are referenced, accessed, tracked and interrelated using a universal linking system.

9. Media workflows are non-destructive and dynamically created using common interfaces, underlying data formats and metadata.

10. Workflows are designed around real-time iteration and feedback.

The difference between eyes and cameras
/ production, reference

https://www.quora.com/What-is-the-comparison-between-the-human-eye-and-a-digital-camera

 

https://medium.com/hipster-color-science/a-beginners-guide-to-colorimetry-401f1830b65a

 

There are three types of cone photoreceptors in the eye, called Long, Medium and Short. These contribute to color discrimination. They are all sensitive to different, yet overlapping, wavelengths of light, and they are commonly associated with the color each is most sensitive to: L = red, M = green, S = blue.

 

Different spectral distributions can stimulate the cones in exactly the same way.
A leaf and a green car can look the same to you while physically having different reflectance properties. It turns out every color (or, unique cone output) can be created from many different spectral distributions. Color science starts to make a lot more sense when you understand this.

 

When you view the charts overlaid, you can see that the spinach mostly reflects light outside of the eye’s visual range, and inside our range it mostly reflects light centered around our M cone.

 

This phenomenon is called metamerism and it has huge ramifications for color reproduction. It means we don’t need the original light to reproduce an observed color.
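Metamerism falls out of the math directly: a cone's output is a dot product of the incoming spectrum with that cone's sensitivity curve, so different spectra can share the same (L, M, S) triplet. A toy Python sketch, using made-up sensitivity curves and spectra (not measured data):

```python
# Toy cone sensitivities sampled at 4 wavelengths (illustrative numbers only).
SENS = {
    "L": [0, 1, 2, 1],
    "M": [1, 2, 1, 0],
    "S": [2, 1, 0, 0],
}

def cone_response(spectrum):
    """Each cone's response is the dot product of spectrum and sensitivity."""
    return {cone: sum(s * p for s, p in zip(curve, spectrum))
            for cone, curve in SENS.items()}

leaf = [3.0, 4.0, 5.0, 6.0]   # two physically different reflected spectra...
car  = [3.5, 3.0, 6.5, 4.0]

print(cone_response(leaf))  # {'L': 20.0, 'M': 16.0, 'S': 10.0}
print(cone_response(car))   # same triplet: the two objects look identical
```

The second spectrum was chosen to differ from the first by a vector in the null space of the sensitivity matrix, which is exactly what a metamer is.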

 

http://www.absoluteastronomy.com/topics/Adaptation_%28eye%29

 

The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as “black” can be shifted across six orders of magnitude—a factor of one million.

 

https://clarkvision.com/articles/eye-resolution.html

 

The Human eye is able to function in bright sunlight and view faint starlight, a range of more than 100 million to one. The Blackwell (1946) data covered a brightness range of 10 million and did not include intensities brighter than about the full Moon. The full range of adaptability is on the order of a billion to 1. But this is like saying a camera can function over a similar range by adjusting the ISO gain, aperture and exposure time.

In any one view, the eye can detect contrast over roughly a 10,000:1 range, though this depends on the scene brightness, with the range decreasing for lower-contrast targets. The eye is a contrast detector, not an absolute detector like the sensor in a digital camera; hence the distinction. The range of the human eye is greater than that of any film or consumer digital camera.

DSLR cameras, by comparison, have a contrast ratio of around 2048:1.
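These ratios are easier to compare when converted to photographic stops (factors of 2) and orders of magnitude (factors of 10). A quick Python conversion of the figures quoted above:

```python
import math

# Convert the quoted contrast ratios to stops and orders of magnitude.
for name, ratio in [("eye, single view", 10_000),
                    ("eye, full adaptation", 1_000_000_000),
                    ("DSLR sensor (quoted)", 2048)]:
    print(f"{name}: {math.log2(ratio):.1f} stops, "
          f"{math.log10(ratio):.1f} orders of magnitude")
# eye, single view: 13.3 stops, 4.0 orders of magnitude
# eye, full adaptation: 29.9 stops, 9.0 orders of magnitude
# DSLR sensor (quoted): 11.0 stops, 3.3 orders of magnitude
```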

 

(Daniel Frank) Several key differences stand out for me (among many):

  • The area devoted to seeing detail in the eye — the fovea — is extremely small compared to a digital camera sensor. It covers a roughly circular area of only about three degrees of arc. By contrast, a "normal" 50mm lens (so called because it supposedly mimics the perspective of the human eye) covers roughly 40 degrees of arc. Because of this extremely narrow field of view, the eye is constantly making small movements ("saccades") to scan more of the field, and the brain builds up the illusion of a wider, detailed picture.
  • The eye has two different main types of light detecting elements: rods and cones. Rods are more sensitive, and detect only variations in brightness, but not color. Cones sense color, but only work in brighter light. That’s why very dim scenes look desaturated, in shades of gray, to the human eye. If you take a picture in moonlight with a very high-ISO digital camera, you’ll be struck by how saturated the colors are in that picture — it looks like daylight. We think of this difference in color intensity as being inherent in dark scenes, but that’s not true — it’s actually the limitation of the cones in our eyes.
  • There are specific cones in the eye with stronger responses to the different wavelengths corresponding to red, green, and blue light. By contrast, the CCD or CMOS sensor in a color digital camera can only sense luminance differences: it just counts photons in tens of millions of tiny photodetectors ("wells") spread across its surface. In front of this detector is an array of microscopic red, blue, and green filters, one per well. The processing engine in the camera interpolates the luminance of adjacent red-, green-, or blue-filtered detectors based on a so-called "demosaicing" algorithm. This bears no resemblance to how the eye detects color. (The so-called "foveon" sensor sold by Sigma in some of its cameras avoids demosaicing by layering different color-sensing layers, but this still isn't how the eye works.)
  • The files output by color digital cameras contain three channels of luminance data: red, green, and blue. While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.
  • The retina is much larger than the fovea, but the light-sensitive areas outside the fovea, and the nuclei to which they wire in the brain, are highly sensitive to motion, particularly in the periphery of our vision. The human visual system — including the eye — is highly adapted to detecting and analyzing potential threats coming at us from outside our central vision, and priming the brain and body to respond. These functions and systems have no analogue in any digital camera system.

Scorsese’s New Mob Epic, ‘The Irishman,’ Has Netflix and Theaters Distributors at Odds – The online feature streaming battle
/ production, ves

www.nytimes.com/2019/08/21/business/media/netflix-scorsese-the-irishman.html

When Martin Scorsese signed with Netflix to make “The Irishman,” the star-studded epic scheduled to have its premiere on the opening night of the New York Film Festival next month, he put himself in the crossfire of the so-called streaming wars.

A crucial sticking point has been the major chains’ insistence that the films they book must play in their theaters for close to three months while not being made available for streaming at the same time, which does not sit well with Netflix.

More than 95 percent of movies stop earning their keep in theaters at the 42-day mark, well short of the three-month window demanded by major chains, according to Mr. Aronson. That suggests the need for change, he said.

Having built itself into an entertainment powerhouse by keeping its subscribers interested and coming back for more, Netflix does not want to be distracted by the demands of the old-style movie business, even as it makes deals with legendary filmmakers like Mr. Scorsese.

Oscar eligibility is not much of a factor in how Netflix handles the rollout. To qualify for the Academy Awards, a film must have a seven-day run in a commercial theater in Los Angeles County, according to rules recently confirmed by the Academy of Motion Picture Arts and Sciences’ board of governors; it can even be shown on another platform at the same time. Still, there is an Academy contingent that may look askance at Netflix if it does not play by the old rules for a cinematic feature like “The Irishman.”

 

7 Commandments of Film Editing and composition
/ composition, production

 

1. Watch every frame of raw footage twice. On the second pass, take notes. If you skip this and try to start developing a scene prematurely, it's a big disservice to yourself and to the director, actors and production crew.

 

2. Nurture the relationship with the director. You are the secondary person in the relationship. Stay calm and continually offer solutions. Get the director's main intention for the film as early as possible.

 

3. Organize your media so that you can find any shot instantly.

 

4. Factor in extra time for renders, exports, errors and crashes.

 

5. Attempt edits and ideas that shouldn't work. They just might. Until you do it and watch it, you won't know. Don't rule out ideas just because they don't make sense in your mind.

 

6. Spend more time on your audio. It’s the glue of your edit. AUDIO SAVES EVERYTHING. Create fluid and seamless audio under your video.

 

7. Make cuts for the scene, but always in context for the whole film. Have a macro and a micro view at all times.

No one could see the colour blue until modern times
/ colour, photography, production

http://www.businessinsider.com.au/what-is-blue-and-how-do-we-see-color-2015-2

The way that humans see the world… until we have a way to describe something, even something as fundamental as a colour, we may not even notice that it's there.

 

Ancient languages didn’t have a word for blue — not Greek, not Chinese, not Japanese, not Hebrew, not Icelandic cultures. And without a word for the colour, there’s evidence that they may not have seen it at all.

https://www.wnycstudios.org/story/211119-colors

 

Every language first had a word for black and for white, or dark and light. The next word for a colour to come into existence — in every language studied around the world — was red, the colour of blood and wine.

After red, historically, yellow appears, and later, green (though in a couple of languages, yellow and green switch places). The last of these colours to appear in every language is blue.

 

The only ancient culture to develop a word for blue was the Egyptians — and as it happens, they were also the only culture that had a way to produce a blue dye.

https://mymodernmet.com/shades-of-blue-color-history/

Considered to be the first ever synthetically produced color pigment, Egyptian blue (also known as cuprorivaite) was created around 2,200 B.C. It was made from ground limestone mixed with sand and a copper-containing mineral, such as azurite or malachite, which was then heated between 1470 and 1650°F. The result was an opaque blue glass which then had to be crushed and combined with thickening agents such as egg whites to create a long-lasting paint or glaze.

 

If you think about it, blue doesn't appear much in nature: there are no animals with blue pigments (except for one butterfly, the Obrina Olivewing; all other animals generate blue through light scattering), blue eyes are rare (also blue through light scattering), and blue flowers are mostly human creations. There is, of course, the sky, but is that really blue?

 

So before we had a word for it, did people not naturally see blue? Do you really see something if you don’t have a word for it?

 

A researcher named Jules Davidoff traveled to Namibia to investigate this, where he conducted an experiment with the Himba tribe, who speak a language that has no word for blue and no distinction between blue and green. When shown a circle of 11 green squares and one blue square, they couldn't pick out which one was different from the others.

 

When looking at a circle of green squares with only one slightly different shade, they could immediately spot the different one. Can you?

 

Davidoff says that without a word for a colour, without a way of identifying it as different, it's much harder for us to notice what's unique about it, even though our eyes are physically seeing the blocks in the same way.

 

Further research led to wider discussions about color perception in humans. Everything that we make is based on the fact that humans are trichromatic: a television only has 3 colors, and our color printers use 3 different colors. But some people, in particular some women, seem to be more sensitive to color differences, mainly because they're more aware of them, often through the job that they do.

Eventually this led to the discovery that a small percentage of the population, referred to as tetrachromats, has developed an extra cone sensitivity, to yellow, likely due to gene modifications.

The interesting detail is that even among tetrachromats, only those who had a reason to develop, label and work with the extra color sensitivity actually learned to use their native ability.

 

So before blue became a common concept, maybe humans saw it. But it seems they didn’t know they were seeing it.

If you can see something but can't name it, does it exist? Did colours come into existence over time? Not technically, but our ability to notice them… may have.