Methods for creating motion blur in stop motion
/ animation, production

en.wikipedia.org/wiki/Go_motion

 

Petroleum jelly
This crude but reasonably effective technique, also known as "vaselensing", involves smearing petroleum jelly ("Vaseline") on a plate of glass in front of the camera lens, then cleaning and reapplying it after each shot. It is a time-consuming process, but one which creates a blur around the model. This technique was used for the endoskeleton in The Terminator. It was also employed by Jim Danforth to blur the pterodactyl's wings in Hammer Films' When Dinosaurs Ruled the Earth, and by Randall William Cook on the terror dogs sequence in Ghostbusters.[citation needed]

 

Bumping the puppet
Gently bumping or flicking the puppet before taking the frame will produce a slight blur; however, care must be taken that the puppet does not move too much and that nearby props or set pieces are not disturbed.

 

Moving the table
Moving the table on which the model is standing while the film is being exposed creates a slight, realistic blur. This technique was developed by Ladislas Starevich: when the characters ran, he moved the set in the opposite direction. This is seen in The Little Parade when the ballerina is chased by the devil. Starevich also used this technique on his films The Eyes of the Dragon, The Magical Clock and The Mascot. Aardman Animations used this for the train chase in The Wrong Trousers and again during the lorry chase in A Close Shave. In both cases the cameras were moved physically during a 1-2 second exposure. The technique was revived for the full-length Wallace & Gromit: The Curse of the Were-Rabbit.

 

Go motion
The most sophisticated technique, quite different from traditional stop motion, was originally developed for The Empire Strikes Back, where it was used for some shots of the tauntauns; it was later used on films like Dragonslayer. The model is essentially a rod puppet. The rods are attached to motors linked to a computer that records the movements as the model is animated traditionally. When enough movements have been made, the model is reset to its original position, the camera rolls and the motors replay the recorded movements. Because the model is moving during the exposure, motion blur is created.

 

A variation of go motion was used in E.T. the Extra-Terrestrial to partially animate the children on their bicycles.

DNEG possibly charged with fraud
/ ves

An Oscar-winning visual effects studio aiming for a £600 million stock market flotation has become entangled in an alleged scheme to defraud the taxman.

DNEG, which has worked on films such as No Time to Die and Captain Marvel, could have to pay HM Revenue & Customs more than £10 million in back taxes and penalties.

https://www.thetimes.co.uk/article/visual-effects-studio-reveals-tax-raid-vjb3pj8s3

Polarized vs unpolarized filtering
/ colour, lighting, production

A light wave that is vibrating in more than one plane is referred to as unpolarized light. … Polarized light waves are light waves in which the vibrations occur in a single plane. The process of transforming unpolarized light into polarized light is known as polarization.

en.wikipedia.org/wiki/Polarizing_filter_(photography)

 

Light reflected from a non-metallic surface becomes polarized; this effect is maximum at Brewster’s angle, about 56° from the vertical for common glass.
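As a quick sketch, using the textbook relation θ_B = arctan(n₂/n₁) and assuming light travelling from air into common glass, Brewster's angle works out to about 56° from the surface normal:

```python
import math

def brewster_angle(n1: float, n2: float) -> float:
    """Angle of incidence (from the surface normal, in degrees) at which
    reflected light is fully polarized: theta_B = arctan(n2 / n1)."""
    return math.degrees(math.atan2(n2, n1))

# Air (n ~= 1.0) to common glass (n ~= 1.5):
print(round(brewster_angle(1.0, 1.5), 1))  # 56.3
```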

 

A polarizer rotated to pass only light polarized in the direction perpendicular to the reflected light will absorb much of it. This absorption allows glare reflected from, for example, a body of water or a road to be reduced. Reflections from shiny surfaces (e.g. vegetation, sweaty skin, water surfaces, glass) are also reduced. This allows the natural color and detail of what is beneath to come through. Reflections from a window into a dark interior can be much reduced, allowing it to be seen through. (The same effects are available for vision by using polarizing sunglasses.)

 

www.physicsclassroom.com/class/light/u12l1e.cfm

 

Some of the light coming from the sky is polarized (bees use this phenomenon for navigation). The electrons in the air molecules cause a scattering of sunlight in all directions. This explains why the sky is not dark during the day. But when looked at from the sides, the light emitted from a specific electron is totally polarized.[3] Hence, a picture taken in a direction at 90 degrees from the sun can take advantage of this polarization. Use of a polarizing filter, in the correct direction, will filter out the polarized component of skylight, darkening the sky; the landscape below it, and clouds, will be less affected, giving a photograph with a darker and more dramatic sky, and emphasizing the clouds.
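The angular dependence described above can be sketched with the idealized single-scattering (Rayleigh) model, in which the degree of polarization peaks at 90° from the sun. The real sky is less polarized than this because of multiple scattering and haze:

```python
import math

def rayleigh_polarization(theta_deg: float) -> float:
    """Degree of polarization of singly Rayleigh-scattered light as a
    function of scattering angle (0 = looking toward the sun)."""
    t = math.radians(theta_deg)
    return math.sin(t) ** 2 / (1 + math.cos(t) ** 2)

for angle in (0, 45, 90, 135, 180):
    print(angle, round(rayleigh_polarization(angle), 2))
```

The model predicts zero polarization looking directly toward or away from the sun, and full polarization at 90°, which is why the filter's darkening effect is strongest in that band of sky.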

 

There are two types of polarizing filters readily available, linear and “circular”, which have exactly the same effect photographically. But the metering and auto-focus sensors in certain cameras, including virtually all auto-focus SLRs, will not work properly with linear polarizers because the beam splitters used to split off the light for focusing and metering are polarization-dependent.

 

Polarizing filters reduce the light passed through to the film or sensor by about one to three stops (2–8×) depending on how much of the light is polarized at the filter angle selected. Auto-exposure cameras will adjust for this by widening the aperture, lengthening the time the shutter is open, and/or increasing the ASA/ISO speed of the camera.
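The stops-to-factor relationship quoted above is just a base-2 logarithm; a minimal sketch:

```python
import math

def filter_factor_to_stops(factor: float) -> float:
    """Convert a filter's light-reduction factor (e.g. 2x, 8x)
    to exposure stops, where each stop is a halving of light."""
    return math.log2(factor)

print(filter_factor_to_stops(2))  # 1.0 stop
print(filter_factor_to_stops(8))  # 3.0 stops
```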

 

www.adorama.com/alc/nd-filter-vs-polarizer-what%25e2%2580%2599s-the-difference

 

Neutral Density (ND) filters help control image exposure by reducing the light that enters the camera so that you can have more control of your depth of field and shutter speed. Polarizers or polarizing filters work in a similar way, but the difference is that they selectively let light waves of a certain polarization pass through. This effect helps create more vivid colors in an image, as well as manage glare and reflections from water surfaces. Both are regarded as some of the best filters for landscape and travel photography as they reduce the dynamic range in high-contrast images, thus enabling photographers to capture more realistic and dramatic sceneries.

 

shopfelixgray.com/blog/polarized-vs-non-polarized-sunglasses/

 

www.eyebuydirect.com/blog/difference-polarized-nonpolarized-sunglasses/

 

Capturing texture albedo

Building a Portable PBR Texture Scanner by Stephane Lb
http://rtgfx.com/pbr-texture-scanner/

 

 

How To Split Specular And Diffuse In Real Images, by John Hable
http://filmicworlds.com/blog/how-to-split-specular-and-diffuse-in-real-images/

 

Capturing albedo using a Spectralon
https://www.activision.com/cdn/research/Real_World_Measurements_for_Call_of_Duty_Advanced_Warfare.pdf


Spectralon is a teflon-based pressed powder that comes closest to being a pure Lambertian diffuse material that reflects 100% of all light. If we take an HDR photograph of the Spectralon alongside the material to be measured, we can derive the diffuse albedo of that material.

 

The process to capture diffuse reflectance is very similar to the one outlined by Hable.

 

1. We put a linear polarizing filter in front of the camera lens and a second linear polarizing filter in front of a modeling light or a flash such that the two filters are oriented perpendicular to each other, i.e. cross polarized.

 

2. We place Spectralon close to and parallel with the material we are capturing and take bracketed shots of the setup. Typically, we’ll take nine photographs, from -4EV to +4EV in 1EV increments.

 

3. We convert the bracketed shots to a linear HDR image. We found that many HDR packages do not produce an HDR image in which the pixel values are linear. PTGui is an example of a package which does generate a linear HDR image. At this point, because of the cross polarization, the image is one of surface diffuse response.

 

4. We open the file in Photoshop and normalize the image by color picking the Spectralon, filling a new layer with that color and setting that layer to “Divide”. This sets the Spectralon to 1 in the image. All other color values are relative to this, so we can consider them as diffuse albedo.
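The same normalization can be done outside Photoshop. Below is a hypothetical NumPy helper (`normalize_albedo` is my name, not from the paper) that divides a linear HDR image by the mean color of the Spectralon patch; note it ignores the fact that real Spectralon reflects slightly under 100%:

```python
import numpy as np

def normalize_albedo(hdr: np.ndarray, spectralon_patch: np.ndarray) -> np.ndarray:
    """Divide a linear HDR image by the mean color of the cropped
    Spectralon patch, so the reference reads as 1.0 and all other
    pixels become (approximate) diffuse albedo."""
    reference = spectralon_patch.reshape(-1, hdr.shape[-1]).mean(axis=0)
    return hdr / reference

# Toy example: a 2x2 linear "image" and a patch that averages to 0.8
hdr = np.array([[[0.4, 0.4, 0.4], [0.8, 0.8, 0.8]],
                [[0.2, 0.2, 0.2], [0.6, 0.6, 0.6]]])
patch = np.full((4, 4, 3), 0.8)
albedo = normalize_albedo(hdr, patch)
print(albedo[0, 1])  # the Spectralon-bright pixel maps to [1. 1. 1.]
```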

Material X – an open standard for transfer of rich material and look-development content
/ production

www.materialx.org/

MaterialX is an open standard for transfer of rich material and look-development content between applications and renderers.

Originated at Lucasfilm in 2012, MaterialX has been used by Industrial Light & Magic in feature films such as Star Wars: The Force Awakens and Rogue One: A Star Wars Story, and by ILMxLAB in real-time experiences such as Trials On Tatooine.

MaterialX addresses the need for a common, open standard to represent the data values and relationships required to transfer the complete look of a computer graphics model from one application or rendering platform to another, including shading networks, patterns and texturing, complex nested materials and geometric assignments.

To further encourage interchangeable CG look setups, MaterialX also defines a complete set of data creation and processing nodes with a precise mechanism for functional extensibility.

Photography basics: Color Temperature and White Balance
/ colour, Featured, lighting, photography

 

Color Temperature of a light source describes the spectrum of light which is radiated from a theoretical “blackbody” (an ideal physical body that absorbs all radiation and incident light – neither reflecting it nor allowing it to pass through) with a given surface temperature.

https://en.wikipedia.org/wiki/Color_temperature

 

Or, most simply: it is a method of describing the color characteristics of light through a numerical value that corresponds to the color emitted by a light source, measured in kelvins (K) on a scale that typically runs from 1,000 to 10,000.

 

More accurately: the color temperature of a light source is the temperature of an ideal blackbody that radiates light of a hue comparable to that of the light source.

As such, the color temperature of a light source is a numerical measurement of its color appearance. It is based on the principle that any object will emit light if it is heated to a high enough temperature, and that the color of that light will shift in a predictable manner as the temperature is increased. The system is based on the color changes of a theoretical “blackbody radiator” as it is heated from a cold black to a white hot state.

 

So, why do we measure the hue of light as a “temperature”? This practice began in the late 1800s, when the British physicist William Thomson (later Lord Kelvin) heated a block of carbon. It glowed in the heat, producing a range of different colors at different temperatures. The black cube first produced a dim red light, increasing to a brighter yellow as the temperature went up, and eventually produced a bright blue-white glow at the highest temperatures. In his honor, color temperatures are measured in kelvins, which use the same increments as Celsius degrees but a different starting point: instead of starting at the temperature at which water freezes, the Kelvin scale starts at “absolute zero,” which is -273.15 Celsius.

 

More about black bodies here: http://www.pixelsham.com/2013/03/14/black-body-color

 

 

The Sun closely approximates a black-body radiator. Its effective temperature, defined by the total radiative power per unit area, is about 5,780 K. The color temperature of sunlight above the atmosphere is about 5,900 K. Time of day and atmospheric conditions bias the purity of the light that reaches us from the sun.

Some think that the Sun’s output in visible light peaks in the yellow. However, the Sun’s visible output peaks in the green:

  

 

 

http://solar-center.stanford.edu/SID/activities/GreenSun.html

We generally refer to the sun as a pure white light source, and we use its spectrum as a reference for other light sources.

Because the sun’s spectrum can change depending on so many factors (including pollution), a standard called D65 was defined (by the International Commission on Illumination) to represent what is considered as the average spectrum of the sun in average conditions.

In reality this biases towards the light of an overcast day, around 6,500 K. And while it is implemented at slightly different temperatures by different manufacturers, it is still considered the most common standard.

 

https://en.wikipedia.org/wiki/Illuminant_D65

 

https://www.scratchapixel.com/lessons/digital-imaging/colors

 

 

In this context, the White Point of a light defines the neutral color of its given color space.

https://chrisbrejon.com/cg-cinematography/chapter-1-color-management/#Colorspace

 

D65 corresponds to what the spectrum of the sun would typically look like on a midday sun somewhere in Western/Northern Europe (figure 9). This D65 which is also called the daylight illuminant is not a spectrum which we can exactly reproduce with a light source but rather a reference against which we can compare the spectrum of existing lights.

 

Another rough analogue of blackbody radiation in our day to day experience might be in heating a metal or stone: these are said to become “red hot” when they attain one temperature, and then “white hot” for even higher temperatures.

 

Similarly, black bodies at different temperatures also have varying color temperatures of “white light.” Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum.

 

The Kelvin color temperature scale imagines a black-body object (such as a lamp filament) being heated. At some point the object will get hot enough to begin to glow. As it gets hotter, its glowing color will shift, moving from deep reds, such as a low-burning fire would give, to oranges and yellows, all the way up to white hot.

 

Color temperatures over 5,000 K are called cool colors (bluish white), while lower color temperatures (2,700–3,000 K) are called warm colors (yellowish white through red).

  

 

https://www.ni.com/en-ca/innovations/white-papers/12/a-practical-guide-to-machine-vision-lighting.html

 

Our eyes are very good at judging what is white under different light sources, but digital cameras often have great difficulty with auto white balance (AWB) — and can create unsightly blue, orange, or even green color casts. Understanding digital white balance can help you avoid these color casts, thereby improving your photos under a wider range of lighting conditions.

 

 

White balance (WB) is the process of removing these color casts from captured media, so that objects which are perceived (or expected) to be white are rendered white in your medium.

This color cast is due to the way light itself is formed and spread.

 

A white balancing procedure identifies what counts as white in your footage. The camera doesn’t know what white is until you tell it.

 

You can often do this with AWB (Automatic White Balance), but the results are not always desirable. That is why you may choose to manually change your white balance.

When you white balance you are telling your camera to treat any object with similar chrominance and luminance as white.

 

Different types of light sources generate different color casts.

 

As such, camera white balance has to take into account this “color temperature” of a light source, which mostly refers to the relative warmth or coolness of white light.

 

Matching the camera’s white balance to the temperature of the indoor or outdoor light source is what makes for a correct white balance.
The two color temperatures you’ll hear most often discussed are outdoor lighting, often ballparked at 5,600 K, and indoor (tungsten) lighting, generally ballparked at 3,200 K. These are the two numbers you’ll hear over and over again. Higher color temperatures (over 5,000 K) are considered “cool” (i.e. bluish). Lower color temperatures (under 5,000 K) are considered “warm” (i.e. orangish).

 

Therefore, if you are shooting indoors under tungsten lighting at 3,200 K, you will set your white balance for indoor shooting at this color temperature. In this case, your camera will adjust its settings to ensure that white appears white. Your camera will either have an indoor 3,200 K option (even the most basic cameras have this) or you can set it manually.

 

Things get complicated if you’re filming indoors during the day under tungsten lighting while daylight is coming through a window. Now you have a mix of color temperatures, and there is no perfect white balance setting for a mixed-temperature scene: you will need to compromise toward one end of the spectrum or the other. If you set your white balance to tungsten (3,200 K), the daylight will appear very blue. If you optimize for daylight (5,600 K), your tungsten lighting will appear very orange.

 

Where to use which light:
For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.

 

 

REFERENCES

 


How to Convert Temperature (K) to RGB: Algorithm and Sample Code

https://tannerhelland.com/2012/09/18/convert-temperature-rgb-algorithm-code.html
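A sketch of that commonly cited curve-fit approximation (constants as published by Tanner Helland; treat them as approximate, valid roughly from 1,000 K to 40,000 K):

```python
import math

def kelvin_to_rgb(kelvin: float) -> tuple:
    """Approximate sRGB color of a blackbody at the given temperature,
    using Tanner Helland's published curve fit."""
    t = kelvin / 100.0

    if t <= 66:
        r = 255.0
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * (t - 60) ** -0.1332047592
        g = 288.1221695283 * (t - 60) ** -0.0755148492

    if t >= 66:
        b = 255.0
    elif t <= 19:
        b = 0.0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307

    def clamp(v: float) -> int:
        return int(round(max(0.0, min(255.0, v))))

    return clamp(r), clamp(g), clamp(b)

print(kelvin_to_rgb(1900))  # candle flame: warm orange
print(kelvin_to_rgb(6600))  # near-white
```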

 

http://www.vendian.org/mncharity/dir3/blackbody/UnstableURLs/bbr_color.html

 

http://riverfarenh.com/light-bulb-color-chart/

 

https://www.lightsfilmschool.com/blog/filmmaking-white-balance-and-color-temperature

 

https://astro-canada.ca/le_spectre_electromagnetique-the_electromagnetic_spectrum-eng

 

http://www.3drender.com/glossary/colortemp.htm

 

http://pernime.info/light-kelvin-scale/

 

http://lowel.tiffen.com/edu/color_temperature_and_rendering_demystified.html

 

https://en.wikipedia.org/wiki/Color_temperature

 

https://www.sylvania.com/en-us/innovation/education/light-and-color/Pages/color-characteristics-of-light.aspx

 


 

  

 

 

https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_cinema_4d_ci_Lights_html

 

 

 

The difference between eyes and cameras
/ production, reference

 

 

 

https://www.quora.com/What-is-the-comparison-between-the-human-eye-and-a-digital-camera

 

https://medium.com/hipster-color-science/a-beginners-guide-to-colorimetry-401f1830b65a

 

There are three types of cone photoreceptors in the eye, called Long, Medium and Short. These contribute to color discrimination. They are all sensitive to different, yet overlapping, wavelengths of light. They are commonly associated with the color they are most sensitive to: L = red, M = green, S = blue.

 

Different spectral distributions can stimulate the cones in exactly the same way.
A leaf and a green car can look the same to you, yet physically have different reflectance properties. It turns out every color (or, unique cone output) can be created from many different spectral distributions. Color science starts to make a lot more sense when you understand this.

 

When you view the charts overlaid, you can see that the spinach mostly reflects light outside of the eye’s visual range, and inside our range it mostly reflects light centered around our M cone.

 

This phenomenon is called metamerism and it has huge ramifications for color reproduction. It means we don’t need the original light to reproduce an observed color.
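Metamerism can be demonstrated numerically: with 3 cone types sampled at more than 3 wavelengths, the sensitivity matrix has a null space, and any spectrum shifted along a null-space direction yields identical cone responses. A toy sketch with made-up sensitivities (real cone curves are smooth and overlapping; random values suffice to show the principle):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cone sensitivities: 3 cone types (L, M, S) at 6 wavelength samples.
cones = rng.random((3, 6))

# Rows of Vt beyond the matrix rank span the null space: any spectrum
# shifted along them maps to a (0, 0, 0) change in cone response.
_, _, vt = np.linalg.svd(cones)
null_direction = vt[-1]

spectrum_a = rng.random(6) + 1.0               # keep well above zero
spectrum_b = spectrum_a + 0.5 * null_direction  # a physically different spectrum

print(np.allclose(cones @ spectrum_a, cones @ spectrum_b))  # True: same "color"
print(np.allclose(spectrum_a, spectrum_b))                  # False: different light
```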

 

http://www.absoluteastronomy.com/topics/Adaptation_%28eye%29

 

The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as “black” can be shifted across six orders of magnitude—a factor of one million.

 

https://clarkvision.com/articles/eye-resolution.html

 

The Human eye is able to function in bright sunlight and view faint starlight, a range of more than 100 million to one. The Blackwell (1946) data covered a brightness range of 10 million and did not include intensities brighter than about the full Moon. The full range of adaptability is on the order of a billion to 1. But this is like saying a camera can function over a similar range by adjusting the ISO gain, aperture and exposure time.

In any one view, the eye can see over a 10,000:1 range in contrast detection, though it depends on the scene brightness, with the range decreasing for lower-contrast targets. The eye is a contrast detector, not an absolute detector like the sensor in a digital camera; hence the distinction. The range of the human eye is greater than that of any film or consumer digital camera.

DSLR cameras, by comparison, have contrast ratios of around 2048:1.
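Expressed in photographic stops (each stop a doubling), these contrast ratios compare as follows; a small sketch:

```python
import math

def contrast_to_stops(ratio: float) -> float:
    """Express a contrast ratio as photographic stops (doublings)."""
    return math.log2(ratio)

print(round(contrast_to_stops(1_000_000_000)))  # eye's full adaptive range: ~30 stops
print(round(contrast_to_stops(10_000)))         # eye, single view: ~13 stops
print(round(contrast_to_stops(2048)))           # typical DSLR: 11 stops
```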

 

(Daniel Frank) Several key differences stand out for me (among many):

  • The area devoted to seeing detail in the eye, the fovea, is extremely small compared to a digital camera sensor. It covers a roughly circular area of only about three degrees of arc. By contrast, a “normal” 50mm lens (so called because it supposedly mimics the perspective of the human eye) covers roughly 40 degrees of arc. Because of this extremely narrow field of view, the eye is constantly making small movements (“saccades”) to scan more of the field, and the brain builds up the illusion of a wider, detailed picture.
  • The eye has two different main types of light detecting elements: rods and cones. Rods are more sensitive, and detect only variations in brightness, but not color. Cones sense color, but only work in brighter light. That’s why very dim scenes look desaturated, in shades of gray, to the human eye. If you take a picture in moonlight with a very high-ISO digital camera, you’ll be struck by how saturated the colors are in that picture — it looks like daylight. We think of this difference in color intensity as being inherent in dark scenes, but that’s not true — it’s actually the limitation of the cones in our eyes.
  • There are specific cones in the eye with stronger responses to the different wavelengths corresponding to red, green, and blue light. By contrast, the CCD or CMOS sensor in a color digital camera can only sense luminance differences: it just counts photons in tens of millions of tiny photodetectors (“wells”) spread across its surface. In front of this detector is an array of microscopic red, blue, and green filters, one per well. The processing engine in the camera interpolates the luminance of adjacent red-, green-, or blue-filtered detectors based on a so-called “demosaicing” algorithm. This bears no resemblance to how the eye detects color. (The so-called “foveon” sensor sold by Sigma in some of its cameras avoids demosaicing by layering different color-sensing layers, but this still isn’t how the eye works.)
  • The files output by color digital cameras contain three channels of luminance data: red, green, and blue. While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.
  • The retina is much larger than the fovea, but the light-sensitive areas outside the fovea, and the nuclei to which they wire in the brain, are highly sensitive to motion, particularly in the periphery of our vision. The human visual system — including the eye — is highly adapted to detecting and analyzing potential threats coming at us from outside our central vision, and priming the brain and body to respond. These functions and systems have no analogue in any digital camera system.

Scorsese’s New Mob Epic, ‘The Irishman,’ Has Netflix and Theater Distributors at Odds – the online feature streaming battle
/ production, ves

www.nytimes.com/2019/08/21/business/media/netflix-scorsese-the-irishman.html

When Martin Scorsese signed with Netflix to make “The Irishman,” the star-studded epic scheduled to have its premiere on the opening night of the New York Film Festival next month, he put himself in the crossfire of the so-called streaming wars.

A crucial sticking point has been the major chains’ insistence that the films they book must play in their theaters for close to three months while not being made available for streaming at the same time, which does not sit well with Netflix.

More than 95 percent of movies stop earning their keep in theaters at the 42-day mark, well short of the three-month window demanded by major chains, according to Mr. Aronson. That suggests the need for change, he said.

Having built itself into an entertainment powerhouse by keeping its subscribers interested and coming back for more, Netflix does not want to be distracted by the demands of the old-style movie business, even as it makes deals with legendary filmmakers like Mr. Scorsese.

Oscar eligibility is not much of a factor in how Netflix handles the rollout. To qualify for the Academy Awards, a film must have a seven-day run in a commercial theater in Los Angeles County, according to rules recently confirmed by the Academy of Motion Picture Arts and Sciences’ board of governors; it can even be shown on another platform at the same time. Still, there is an Academy contingent that may look askance at Netflix if it does not play by the old rules for a cinematic feature like “The Irishman.”

 

ILM to Open New Studio in Australia
/ ves

www.ilm.com/hatsrabbits/ilm-to-open-new-studio-in-australia/?fbclid=IwAR1Tr1hUhMQSAxdsLyEmgrMgIPoP5E7rECoLySvonWR7aYQs9aU6mEjI8_E

ILM is opening a new studio in Sydney to better serve its clients and complement its current operations in San Francisco, where the company is headquartered, as well as Singapore, Vancouver, and London.

“Sydney is an ideal location for our fifth studio,” noted Rob Bredow, Executive Creative Director and Head of ILM, adding, “there is abundant artistic and technical talent in the region which are both keys to ILM’s culture of innovation. It’s particularly exciting that the first film our new studio will contribute to will be Star Wars: The Rise of Skywalker.”

Annapurna Pictures Headed for Bankruptcy?
/ ves

www.awn.com/news/annapurna-pictures-headed-bankruptcy

Annapurna Pictures, the film production, distribution and financing company founded in 2011 by Megan Ellison, daughter of software giant Oracle’s billionaire founder, Larry Ellison, is reportedly attempting to restructure a $350 million credit line secured in 2017 that the company either has or is about to default on.

Megan’s brother, David Ellison, is also in the entertainment business, though by focusing on tentpole properties, has had a much more profitable go at it; he is the founder of Skydance Media, the producer of the recent Mission Impossible hits, the Terminator sequels and the upcoming Top Gun: Maverick starring Tom Cruise.

Disney to release 4 Fox animated features and one cg/live-action hybrid, but the future of Blue Sky Studio remains in question
/ ves

www.cartoonbrew.com/feature-film/disney-to-release-4-fox-animated-features-but-the-future-of-blue-sky-in-question-173867.html

Disney has not announced any Blue Sky titles beyond Nimona in 2021, which creates uncertainty about how (or if) they will integrate the Greenwich, Connecticut-based Blue Sky into the Disney brand. The waiting game about the studio’s future will continue for the time being.

Disney Bumps ‘Avatar’ Sequel to 2021, Sets Dates on Three Unnamed ‘Star Wars’ Movies
/ ves

www.awn.com/news/disney-bumps-avatar-sequel-2021-sets-dates-three-unnamed-star-wars-movies

Three new as-yet-untitled Star Wars films will release on the pre-Christmas weekend every other year beginning in 2022 — December 16, 2022, December 20, 2024 and December 18, 2026.

Four forthcoming Avatar films will release on the pre-Christmas weekend every other year beginning in 2021 – the first, originally scheduled to debut December 18, 2020, has been pushed to December 17, 2021.

AFL-CIO, America’s Biggest Labor Federation, Asks Game Developers to Unionize
/ ves

www.polygon.com/2019/1/16/18178332/game-developer-union-crunch

variety.com/2019/gaming/news/liz-schuler-game-developers-unionize-1203141471/

www.gameworkersunite.org/

AFL-CIO secretary-treasurer Liz Shuler took to Kotaku with a post that asks workers in the games industry to fight for adequate pay, sensible work hours, and against toxic work conditions.

“We’ve heard the painful stories of those willing to come forward, including one developer who visited the emergency room three times before taking off from work,” writes Shuler. “Developers at Rockstar Games recently shared stories of crunch time that lasted for months and even years in order to satisfy outrageous demands from management, delivering a game that banked their bosses $725 million in its first three days.”

“Growing by double digits, U.S. video game sales reached $43 billion in 2018, about 3.6 times greater than the film industry’s record-breaking box office,” she writes.

“While you’re fighting through exhaustion and putting your soul into a game, Bobby Kotick and Andrew Wilson are toasting to ‘their’ success,” she says.

“They get rich. They get notoriety. They get to be crowned visionaries and regarded as pioneers. What do you get? Outrageous hours and inadequate paychecks. Stressful, toxic work conditions that push you to your physical and mental limits. The fear that asking for better means risking your dream job.”

Some of the biggest players in game development and publishing have fostered hostile and unforgiving environments where our favorite games are made.

Co-founder and vice president of Rockstar Games Dan Houser seemingly bragged that staff were pulling 100-hour weeks to get the much-anticipated “Red Dead Redemption 2” ready for its launch last year.

Similarly, companies like Telltale have laid off the majority of their staff with little to no notice or severance. Many developers took to Twitter afterwards to air their grievances and express their hopes for change under the hashtag #AsAGamesWorker.

Intel Studios Debut Volumetric XR Video Demo – virtual cameras production
/ production

Debut footage from Intel Studios’ new volumetric studio, set up outside LAX in Los Angeles. It’s volumetric VR filmmaking – so you could be the camera, anywhere, anytime, not just the POV shown here. Be sure this technology will only get better and better.

OLED vs QLED – What TV is better?
/ colour, hardware

 

LG, Philips, Panasonic and Sony all sell TVs using the OLED system.
OLED stands for “organic light emitting diode.”
It is a fundamentally different technology from LCD, the major type of TV today.
OLED is “emissive,” meaning the pixels emit their own light.

 

Samsung is branding its best TVs with a new acronym: “QLED”
QLED (according to Samsung) stands for “quantum dot LED TV.”
It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.”
QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.

 

OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

According to Samsung, QLED improves on OLED’s picture quality: it can produce an even wider range of colors, up to 40% higher luminance efficiency, and, in many tests, lower power consumption. Note that these are vendor claims, and QLED is not a successor technology to OLED but a refinement of LED LCD.

 

When analyzing a TV’s color, it may be beneficial to consider at least three elements:
“Color Depth”, “Color Gamut”, and “Dynamic Range”.

 

Color Depth (or “Bit-Depth”, e.g. 8-bit, 10-bit, 12-bit) determines how many distinct color variations (tones/shades) can be viewed on a given display.
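The arithmetic behind bit depth is simple: each extra bit doubles the tones per channel. A quick illustrative sketch (here “bit depth” means bits per color channel):

```python
# Tones per channel and total RGB colors for common bit depths
# (illustrative; "bit depth" here is bits per color channel).
for bits in (8, 10, 12):
    tones = 2 ** bits        # distinct shades per channel
    colors = tones ** 3      # R x G x B combinations
    print(f"{bits}-bit: {tones} tones/channel, {colors:,} total colors")
```

A 10-bit panel therefore offers four times the tonal steps per channel of an 8-bit one.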

 

Color Gamut (e.g. WCG) determines which specific colors can be displayed from a given “Color Space” (Rec.709, Rec.2020, DCI-P3) (i.e. the color range).

 

Dynamic Range (SDR, HDR) determines the luminosity range of a specific color – from its darkest shade (or tone) to its brightest.

 

The overall brightness range of a color will be determined by a display’s “contrast ratio”, that is, the ratio of luminance between the darkest black that can be produced and the brightest white.

 

Color Volume is the “Color Gamut” + the “Dynamic/Luminosity Range”.
A TV’s Color Volume will not only determine which specific colors can be displayed (the color range) but also each color’s luminosity range, which has an effect on its “brightness” and “colorfulness” (intensity and saturation).

 

The better the colour volume in a TV, the closer to life the colours appear.

 

Samsung claims that QLED TV can express nearly all of the colours in the DCI-P3 colour space, and of those colours, express 100% of the colour volume, thereby producing an incredible range of colours.

 

Samsung also claims that with OLED TV, when the image is very bright, the percentage of the colour volume the TV can produce drops significantly: the colours get washed out, expressing only around 70% of the colour volume, so picture quality drops too.

 

Note: OLED TVs use organic material, so they may lose colour expression as they age.

 

Resources for more reading and comparison below

www.avsforum.com/forum/166-lcd-flat-panel-displays/2812161-what-color-volume.html

 

www.newtechnologytv.com/qled-vs-oled/

 

news.samsung.com/za/qled-tv-vs-oled-tv

 

www.cnet.com/news/qled-vs-oled-samsungs-tv-tech-and-lgs-tv-tech-are-not-the-same/

 

The Public Domain Is Working Again — No Thanks To Disney
/ ves

www.cartoonbrew.com/law/the-public-domain-is-working-again-no-thanks-to-disney-169658.html

The law protects new works from unauthorized copying while allowing artists free rein on older works.

The Copyright Act of 1909 used to govern copyrights. Under that law, a creator had a copyright on his creation for 28 years from “publication,” which could then be renewed for another 28 years. Thus, after 56 years, a work would enter the public domain.

However, Congress passed the Copyright Act of 1976, extending copyright protection for works made for hire to 75 years from publication.

Then again, in 1998, Congress passed the Sonny Bono Copyright Term Extension Act (derided as the “Mickey Mouse Protection Act” by some observers due to the Walt Disney Company’s intensive lobbying efforts), which added another twenty years to the term of copyright.

It is because Snow White was in the public domain that it was chosen as Disney’s first animated feature.
Ironically, much of Disney’s legislative lobbying over the last several decades has been focused on denying that same opportunity to other artists and filmmakers.

The battle in the coming years will be to prevent further extensions to copyright law that benefit corporations at the expense of creators and society as a whole.

Acting Upward
/ production

actingupward.com/

A growing community of collaborators dedicated to helping actors, artists & filmmakers gain experience & improve their craft.

Photography basics: Why Use a (MacBeth) Color Chart?
/ colour, lighting, photography

Start here: http://www.pixelsham.com/2013/05/09/gretagmacbeth-color-checker-numeric-values/

 

https://www.studiobinder.com/blog/what-is-a-color-checker-tool/

 

 

 

In LightRoom

 

in Final Cut

 

in Nuke

Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down will give you the next value on the aperture scale (f/8, f/11, f/16, etc.).

One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
So starting with 0.18 in your Constant, you will see that raising it by a stop will give you 0.36 as a floating point number (in linear space), while your f/stop reads f/11, and so on.
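A minimal sketch of this doubling-per-stop behavior (illustrative values; Nuke itself isn’t required):

```python
# One stop doubles (or halves) the linear light value.
# Starting from an 18% middle-gray constant:
middle_gray = 0.18
for stops in range(-2, 3):
    value = middle_gray * (2 ** stops)
    print(f"{stops:+d} stop(s) -> linear {value:.3f}")
```

At +1 stop the constant reads 0.36, at -1 stop 0.09, exactly as described above.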

 

If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.

 

In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec and an aperture of f/1.0.

 

This usually puts the sun around EV 12-17 and the sky around EV 1-4, depending on cloud coverage.
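The standard relation behind these figures is EV = log2(N²/t) at ISO 100, which gives EV 0 for f/1.0 at 1 second. A small sketch (the f/16 at 1/125 s sample exposure is an assumption for illustration):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """EV at ISO 100: EV = log2(N^2 / t). EV 0 is f/1.0 at 1 second."""
    return math.log2(f_number ** 2 / shutter_seconds)

print(exposure_value(1.0, 1.0))            # 0.0, the EV 0 reference point
print(round(exposure_value(16, 1 / 125)))  # 15, a typical bright-sun exposure
```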

 

To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and then press S; this opens the viewer’s properties. Set the center f-stop to 0 there, and the SpotMeter in the viewport will change from apertures and f-stops to EV.

Composition – cinematography Cheat Sheet

https://moodle.gllm.ac.uk/pluginfile.php/190622/mod_resource/content/1/Cinematography%20Cheat%20Sheet.pdf

Where is our eye attracted first? Why?

Size. Focus. Lighting. Color.

Size. Mr. White (Harvey Keitel) on the right.
Focus. He’s one of the two objects in focus.
Lighting. Mr. White is large and in focus and Mr. Pink (Steve Buscemi) is highlighted by
a shaft of light.
Color. Both are black and white, but the red on Mr. White’s shirt now really stands out.


What type of lighting?

-> High key lighting.
Features bright, even illumination and few conspicuous shadows. This lighting key is often used in musicals and comedies.

Low key lighting
Features diffused shadows and atmospheric pools of light. This lighting key is often used in mysteries and thrillers.

High contrast lighting
Features harsh shafts of lights and dramatic streaks of blackness. This type of lighting is often used in tragedies and melodramas.

 

What type of shot?

Extreme long shot
Taken from a great distance, showing much of the locale. If people are included in these shots, they usually appear as mere specks.

-> Long shot
Corresponds to the space between the audience and the stage in a live theater. The long shots show the characters and some of the locale.

Full shot
Range with just enough space to contain the human body in full. The full shot shows the character and a minimal amount of the locale.

Medium shot
Shows the human figure from the knees or waist up.

Close-Up
Concentrates on a relatively small object and shows very little, if any, locale.

Extreme close-up
Focuses on an unnaturally small portion of an object, giving that part great detail and symbolic significance.

 

What angle?

Bird’s-eye view.
The shot is photographed directly from above. This type of shot can be disorienting, and the people photographed seem insignificant.

High angle.
This angle reduces the size of the objects photographed. A person photographed from this angle seems harmless and insignificant, but to a lesser extent than with the bird’s-eye view.

-> Eye-level shot.
The clearest view of an object, but seldom intrinsically dramatic, because it tends to be the norm.

Low angle.
This angle increases height and a sense of verticality, heightening the importance of the object photographed. A person shot from this angle is given a sense of power and respect.

Oblique angle.
For this angle, the camera is tilted laterally, giving the image a slanted appearance. Oblique angles suggest tension, transition, or impending movement. They are also called canted or dutch angles.

 

What is the dominant color?

The use of color in this shot is symbolic. The scene is set in a warehouse. Both the set and characters are blues, blacks and whites.

This was intentional, allowing the scenes and shots with blood to have a high level of contrast.

 

What is the Lens/Filter/Stock?

Telephoto lens.
A lens that draws objects closer but also diminishes the illusion of depth.

Wide-angle lens.
A lens that takes in a broad area and increases the illusion of depth but sometimes distorts the edges of the image.

Fast film stock.
Highly sensitive to light, it can register an image with little illumination. However, the final product tends to be grainy.

Slow film stock.
Relatively insensitive to light, it requires a great deal of illumination. The final product tends to look polished.

The lens is not wide-angle because there isn’t a great sense of depth, nor are several planes in focus. The lens is probably long but not necessarily a telephoto lens because the depth isn’t inordinately compressed.

The stock is fast because of the grainy quality of the image.

 

Subsidiary Contrast; where does the eye go next?

The two guns.

 

How much visual information is packed into the image? Is the texture stark, moderate, or highly detailed?

Minimal clutter in the warehouse allows a focus on a character-driven thriller.

 

What is the Composition?

Horizontal.
Compositions based on horizontal lines seem visually at rest and suggest placidity or peacefulness.

Vertical.
Compositions based on vertical lines seem visually at rest and suggest strength.

-> Diagonal.
Compositions based on diagonal, or oblique, lines seem dynamic and suggest tension or anxiety.

-> Binary. Binary structures emphasize parallelism.

Triangle.
Triadic compositions stress the dynamic interplay among three main elements.

Circle.
Circular compositions suggest security and enclosure.

 

Is the form open or closed? Does the image suggest a window that arbitrarily isolates a fragment of the scene? Or a proscenium arch, in which the visual elements are carefully arranged and held in balance?

The most nebulous of all the categories of mise en scene, the type of form is determined by how consciously structured the mise en scene is. Open forms stress apparently simple techniques, because with these unself-conscious methods the filmmaker is able to emphasize the immediate, the familiar, the intimate aspects of reality. In open-form images, the frame tends to be deemphasized. In closed form images, all the necessary information is carefully structured within the confines of the frame. Space seems enclosed and self-contained rather than continuous.

Could argue this is a proscenium arch because this is such a classic shot with parallels and juxtapositions.

 

Is the framing tight or loose? Do the character have no room to move around, or can they move freely without impediments?

Shots where the characters are placed at the edges of the frame and have little room to move around within the frame are considered tight.

Longer shots, in which characters have room to move around within the frame, are considered loose and tend to suggest freedom.

Center-framed giving us the entire scene showing isolation, place and struggle.

 

Depth of Field. On how many planes is the image composed (how many are in focus)? Does the background or foreground comment in any way on the mid-ground?

Standard DOF, one background and clearly defined foreground.

 

Which way do the characters look vis-a-vis the camera?

An actor can be photographed in any of five basic positions, each conveying different psychological overtones.

Full-front (facing the camera):
the position with the most intimacy. The character is looking in our direction, inviting our complicity.

Quarter Turn:
the favored position of most filmmakers. This position offers a high degree of intimacy but with less emotional involvement than the full-front.

-> Profile (looking off frame left or right):
More remote than the quarter turn, the character in profile seems unaware of being observed, lost in his or her own thoughts.

Three-quarter Turn:
More anonymous than the profile, this position is useful for conveying a character’s unfriendly or antisocial feelings, for in effect, the character is partially turning his or her back on us, rejecting our interest.

Back to Camera:
The most anonymous of all positions, this position is often used to suggest a character’s alienation from the world. When a character has his or her back to the camera, we can only guess what’s taking place internally, conveying a sense of concealment, or mystery.

How much space is there between the characters?

Extremely close, for a gunfight.

 

The way people use space can be divided into four proxemic patterns.

Intimate distances.
The intimate distance ranges from skin contact to about eighteen inches away. This is the distance of physical involvement–of love, comfort, and tenderness between individuals.

-> Personal distances.
The personal distance ranges roughly from eighteen inches away to about four feet away. These distances tend to be reserved for friends and acquaintances. Personal distances preserve the privacy between individuals, yet these ranges don’t necessarily suggest exclusion, as intimate distances often do.

Social distances.
The social distance ranges from four feet to about twelve feet. These distances are usually reserved for impersonal business and casual social gatherings. It’s a friendly range in most cases, yet somewhat more formal than the personal distance.

Public distances.
The public distance extends from twelve feet to twenty-five feet or more. This range tends to be formal and rather detached.

Gamma correction

http://www.normankoren.com/makingfineprints1A.html#Gammabox

 

https://en.wikipedia.org/wiki/Gamma_correction

 

http://www.photoscientia.co.uk/Gamma.htm

 

https://www.w3.org/Graphics/Color/sRGB.html

 

http://www.eizoglobal.com/library/basics/lcd_display_gamma/index.html

 

https://forum.reallusion.com/PrintTopic308094.aspx

 

Basically, gamma is the relationship between the numerical value of a pixel and the brightness of that pixel as it appears on screen. More generally, gamma defines the relationship between numeric values and light.

Three main types:
– Image gamma, encoded in images
– Display gamma, applied by the hardware at viewing time
– System (or viewing) gamma, the net effect of all gammas when you look at the final image. In theory this should flatten back to a gamma of 1.0.

 

Our eyes, cameras and video recorders do not capture luminance linearly, and display devices (monitors, phone screens, TVs) do not display it linearly either. Both need to be corrected, hence the gamma correction function.

The human perception of brightness, under common illumination conditions (not pitch black nor blindingly bright), follows an approximate power function (note: no relation to the gamma function), with greater sensitivity to relative differences between darker tones than between lighter ones, consistent with the Stevens’ power law for brightness perception. If images are not gamma-encoded, they allocate too many bits or too much bandwidth to highlights that humans cannot differentiate, and too few bits or too little bandwidth to shadow values that humans are sensitive to and would require more bits/bandwidth to maintain the same visual quality.
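A rough sketch of that bit-allocation argument, assuming a simple 2.2 power-law encoding and 18% gray as the shadow boundary (both assumptions for illustration):

```python
# Of the 256 8-bit codes, how many are spent on shadows (linear < 18% gray)?
GAMMA = 2.2  # assumed simple power-law encoding, for illustration

linear_codes = sum(1 for c in range(256) if c / 255 < 0.18)
gamma_codes = sum(1 for c in range(256) if (c / 255) ** GAMMA < 0.18)

print(linear_codes)  # 46: linear storage starves the shadows of codes
print(gamma_codes)   # 117: gamma encoding spends far more codes on them
```

With gamma encoding, nearly half the code values cover the shadow range the eye is most sensitive to, instead of under a fifth.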

https://blog.amerlux.com/4-things-architects-should-know-about-lumens-vs-perceived-brightness/

Cones manage color receptivity; rods determine how large our pupils should be. The larger (more dilated) our pupils are, the more light enters our eyes. In dark situations, our rods dilate our pupils so we can see better. This impacts how we perceive brightness.

 

https://www.cambridgeincolour.com/tutorials/gamma-correction.htm

A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

 

Display gamma
The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros/cons of different values.

 

Gamma encoding of images is used to optimize the usage of bits when encoding an image, or the bandwidth used to transport it, by taking advantage of the non-linear manner in which humans perceive light and color. Human response to luminance is biased as well, being especially sensitive to dark areas.
Thus, the human visual system has a non-linear response to the power of the incoming light, so a fixed increase in power will not produce a fixed increase in perceived brightness.
We perceive a value as half bright when it is actually about 18% of the original intensity, not 50%. As such, our perception is not linear.

 

You probably already know that a pixel can have any ‘value’ of Red, Green, and Blue between 0 and 255, and you would therefore think that a pixel value of 127 would appear as half of the maximum possible brightness, and that a value of 64 would represent one-quarter brightness, and so on. Well, that’s just not the case.
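A minimal sketch of why, assuming a simple 2.2 display gamma:

```python
# With a 2.2 display gamma, 8-bit code 127 is nowhere near half brightness.
GAMMA = 2.2
for code in (64, 127, 255):
    luminance = (code / 255) ** GAMMA  # relative light output, 0..1
    print(code, round(luminance, 3))
```

Code 127 comes out around 22% of maximum luminance, not 50%, and code 64 around 5%, not 25%.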

 

Pixar Color Management
https://renderman.pixar.com/color-management


– Why do we need linear gamma?
Because light transport is linear, so lighting computations only work properly on linear values.

 

– Why do we need to view in sRGB?
Because the resulting linear image is not suitable for viewing, but contains all the proper data. Pixar’s IT viewer can compensate by showing the rendered image through an sRGB look-up table (LUT), which is identical to what the final image will be after the sRGB gamma curve is applied in post.

This would be simple enough if all software played by the same rules, but they don’t. In fact, the default gamma workflow in many 3D packages is incorrect. This is where knowledge of a proper imaging workflow comes in to save the day.

 

Cathode-ray tubes have a peculiar relationship between the voltage applied to them, and the amount of light emitted. It isn’t linear, and in fact it follows what’s called by mathematicians and other geeks, a ‘power law’ (a number raised to a power). The numerical value of that power is what we call the gamma of the monitor or system.

 

Thus, gamma describes the nonlinear relationship between the pixel levels in your computer and the luminance of your monitor (the light energy it emits) or the reflectance of your prints. The equation is:

Luminance = C * value^gamma + black level

– C is set by the monitor Contrast control.

– Value is the pixel level normalized to a maximum of 1. For an 8 bit monitor with pixel levels 0 – 255, value = (pixel level)/255.

 

– Black level is set by the (misnamed) monitor Brightness control. The relationship is linear if gamma = 1. The chart illustrates the relationship for gamma = 1, 1.5, 1.8 and 2.2 with C = 1 and black level = 0.
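The equation above can be sketched directly (values are illustrative):

```python
def monitor_luminance(value: float, gamma: float = 2.2,
                      contrast: float = 1.0, black_level: float = 0.0) -> float:
    """Luminance = C * value^gamma + black level, with value in 0..1."""
    return contrast * value ** gamma + black_level

print(monitor_luminance(127 / 255))       # the middle 8-bit pixel level
print(monitor_luminance(0.5, gamma=1.0))  # 0.5: the response is linear at gamma = 1
```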

 

Gamma affects middle tones; it has no effect on black or white. If gamma is set too high, middle tones appear too dark. Conversely, if it’s set too low, middle tones appear too light.

 

The native gamma of monitors – the relationship between grid voltage and luminance – is typically around 2.5, though it can vary considerably. This is well above any of the display standards, so you must be aware of gamma and correct it.

 

A display gamma of 2.2 is the de facto standard for the Windows operating system and the Internet-standard sRGB color space.

 

The old standard for Macintosh and prepress file interchange was 1.8. It is now 2.2 as well.

 

Video cameras have gammas of approximately 0.45 – the inverse of 2.2. The viewing or system gamma is the product of the gammas of all the devices in the system – the image acquisition device (film+scanner or digital camera), color lookup table (LUT), and monitor. System gamma is typically between 1.1 and 1.5. Viewing flare and other factors make images look flat at system gamma = 1.0.
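The product rule can be checked in one line (using the illustrative 0.45 camera gamma and 2.2 display gamma from the text):

```python
# System (viewing) gamma is the product of every gamma in the chain.
camera_gamma = 0.45   # encoding gamma, roughly the inverse of 2.2
display_gamma = 2.2
system_gamma = camera_gamma * display_gamma
print(round(system_gamma, 2))  # 0.99, i.e. a net gamma close to 1.0
```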

 

Most laptop LCD screens are poorly suited for critical image editing because gamma is extremely sensitive to viewing angle.

 

More about screens

https://www.cambridgeincolour.com/tutorials/gamma-correction.htm

CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRT’s. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.

About black level (brightness). Your monitor’s brightness control (which should actually be called black level) can be adjusted using the mostly black pattern on the right side of the chart. This pattern contains two dark gray vertical bars, A and B, which increase in luminance with increasing gamma. (If you can’t see them, your black level is way low.) The left bar (A) should be just above the threshold of visibility opposite your chosen gamma (2.2 or 1.8) – it should be invisible where gamma is lower by about 0.3. The right bar (B) should be distinctly visible: brighter than (A), but still very dark. This chart is only for monitors; it doesn’t work on printed media.

 

The 1.8 and 2.2 gray patterns at the bottom of the image represent a test of monitor quality and calibration. If your monitor is functioning properly and calibrated to gamma = 2.2 or 1.8, the corresponding pattern will appear smooth neutral gray when viewed from a distance. Any waviness, irregularity, or color banding indicates incorrect monitor calibration or poor performance.

 

As another test of whether one’s computer monitor is properly hardware-adjusted and can display shadow detail in sRGB images, one should see the left half of the circle in the large black square very faintly, while the right half should be clearly visible. If not, one can adjust the monitor’s contrast and/or brightness settings. This alters the monitor’s perceived gamma. The image is best viewed against a black background.

 

This procedure is not suitable for calibrating or print-proofing a monitor. It can be useful for making a monitor display sRGB images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace.

 

On some operating systems running the X Window System, one can set the gamma correction factor (applied to the existing gamma value) by issuing the command xgamma -gamma 0.9 to set the gamma correction factor to 0.9, and xgamma to query the current value of that factor (the default is 1.0). In OS X systems, the gamma and other related screen calibrations are made through the System Preferences.

 

https://www.kinematicsoup.com/news/2016/6/15/gamma-and-linear-space-what-they-are-how-they-differ

Linear color space means that numerical intensity values correspond proportionally to their perceived intensity. This means that the colors can be added and multiplied correctly. A color space without that property is called “non-linear”. Below is an example where an intensity value is doubled in a linear and a non-linear color space. While the corresponding numerical values in linear space are correct, in the non-linear space (gamma = 0.45, more on this later) we can’t simply double the value to get the correct intensity.

 

The need for gamma arises for two main reasons: The first is that screens have been built with a non-linear response to intensity. The other is that the human eye can tell the difference between darker shades better than lighter shades. This means that when images are compressed to save space, we want to have greater accuracy for dark intensities at the expense of lighter intensities. Both of these problems are resolved using gamma correction, which is to say the intensity of every pixel in an image is put through a power function. Specifically, gamma is the name given to the power applied to the image.

 

CRT screens, simply by how they work, apply a gamma of around 2.2, and modern LCD screens are designed to mimic that behavior. Applying a gamma of 2.2, the reciprocal of 0.45, to the brightened (encoded) images darkens them back, restoring the original image.
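A minimal sketch of that doubling example, assuming a simple 1/2.2 encode and 2.2 decode:

```python
# Doubling intensity is only correct on linear values.
ENCODE = 1 / 2.2  # ~0.45 encoding gamma; the display decodes with 2.2

linear = 0.2
encoded = linear ** ENCODE

print(2 * linear)            # 0.4: doubling the linear value really doubles the light
print((2 * encoded) ** 2.2)  # ~0.92 after display decoding: far too bright
```

Doubling the gamma-encoded number and then letting the display decode it yields roughly 0.92 instead of the intended 0.4, which is why compositing and lighting math must happen in linear space.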

Kevin Geiger on Chinese animation growth
/ ves

https://www.awn.com/blog/chinas-changing-game

China is a work in progress. China is changing, and Chinese capability and pride are rising. And with that rise, China’s cultural, media/tech, and sociopolitical landscapes are rapidly morphing. It’s incumbent upon anyone in China – native or foreign – to roll with those changes.

To wit, on the media front, China announced that its State Administration of Press, Publication, Radio, Film & Television (the unfortunate acronym “SAPPRFT” for short) will be abolished in favor of even tighter control under a new body at the cabinet level.

To interact successfully and satisfyingly here, you have to gain some real local perspective and develop an alternate set of instincts that are relevant for this reality as it is, not what you imagine it to be.

Disney, Fox and Paramount could lose the rights to their CGI characters
/ ves

http://www.digitalspy.com/movies/news/a841451/disney-cgi-characters-court-case-mova/

Marvel Studios and Lucasfilm owners Disney – as well as 20th Century Fox and Paramount – are caught up in a lawsuit over MOVA, software that captures actors’ facial expressions to create realistic CGI models.

Rearden LLC, which claims to own the rights to MOVA, has been suing a Chinese company for stealing the technology, which was then used by the studios in their films, says The Hollywood Reporter. The plaintiff is now suing for the rights to characters created with the tech.

Shooting and editing macro stereo
/ photography

The average interocular of humans is considered to be about 65mm (2.5 inches.) When this same distance is used as the interaxial distance between two shooting cameras then the resulting stereoscopic effect is typically known as “Ortho-stereo.” Many stereographers choose 2.5” as a stereo-base for this reason.

 

If the interaxial distance used to shoot is smaller than 2.5 inches then you are shooting “Hypo-stereo.” This technique is common for theatrically released films to accommodate the effects of the big screen. It is also used for macro stereoscopic photography.

 

Hyper-stereo refers to interaxial distances greater than 2.5 inches. As mentioned earlier, the greater the interaxial separation, the greater the depth effect. An elephant can perceive much more depth than a human, and a human can perceive more depth than a mouse.

 

However, using this same analogy, the mouse can get close and peer inside the petals of a flower with very good depth perception, and the human will just go “cross-eyed.” Therefore decreasing the interaxial separation between two cameras to 1” or less will allow you to shoot amazing macro stereo-photos and separating the cameras to several feet apart will allow great depth on mountain ranges, city skylines and other vistas.

 

The trouble with using hyper-stereo is that scenes with gigantic objects in real life may appear as small models. This phenomenon is known as dwarfism, and we perceive it this way because the exaggerated separation between the taking lenses allows us to see around big objects much more than we do in the real world. Our brain interprets this as meaning the object must be small.

 

The opposite happens with hypo-stereo, where normal-sized objects appear gigantic (gigantism).
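The three regimes described above can be sketched as a small helper, with the ~65 mm human interocular from the text as the ortho-stereo reference (the helper and its labels are illustrative, not a standard API):

```python
# Classify a rig's interaxial distance against the ~65 mm (2.5 in)
# human interocular used as the ortho-stereo reference.
def stereo_mode(interaxial_mm: float, ortho_mm: float = 65.0) -> str:
    if interaxial_mm < ortho_mm:
        return "hypo-stereo"   # macro work; normal objects can look gigantic
    if interaxial_mm > ortho_mm:
        return "hyper-stereo"  # vistas; gigantic objects can look like models
    return "ortho-stereo"

print(stereo_mode(25))   # ~1 inch rig for macro shots
print(stereo_mode(300))  # wide base for mountain ranges and skylines
```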

 

http://dashwood3d.com/blog/beginners-guide-to-shooting-stereoscopic-3d/index.html

http://3d-con.com/2014/files/NSA2014-MACRO1.pdf

http://nzphoto.tripod.com/stereo/macrostereo/macro3dwindows.htm

http://nzphoto.tripod.com/sterea/stereotake.htm

(2017) Digital Domain Holdings Reports $64 Million in Losses
/ ves

http://variety.com/2017/biz/asia/losses-at-digital-domain-holdings-double-1202020056/

Digital Domain Holdings, the Hong Kong-based visual effects and virtual reality group, saw losses in 2016 more than double to $64.3 million.

The core Digital Domain 3.0 business provided VFX for films including “Beauty and the Beast,” “Deadpool” and “X-Men: Apocalypse” during the year.

Revenues across the group increased 45% from $68 million (HK$527 million) in 2015 to $98.5 million (HK$763 million) last year. Net losses, which totaled $23.1 million (HK$179 million) in 2015, reached $64.3 million (HK$498 million) in 2016.

The company pointed to content development and research and development costs for virtual reality content and games, 360° and virtual humans, and a more than fourfold increase in amortization of intangible assets, as causes of the financial pain.

Chaos Group V-Ray Wins Academy Award
/ ves

http://www.awn.com/news/chaos-group-s-v-ray-wins-academy-award

Photorealistic production renderer used on more than 150 feature films since 2002, including recent hits like ‘Doctor Strange,’ ‘Deadpool,’ and ‘Captain America: Civil War,’ to be honored for advancing the use of fully ray-traced rendering in motion pictures.

Sony Joins Blue Sky in Settlement of Hollywood Studio Antitrust Lawsuit
/ ves

http://www.awn.com/news/sony-joins-blue-sky-settlement-hollywood-studio-antitrust-lawsuit

Sony has become the second Hollywood studio to reach a settlement in a class-action lawsuit alleging that it and other studios violated antitrust laws by conspiring to suppress the wages of animation and VFX artists via non-poaching agreements…

…in 2011, a class-action lawsuit was brought against Pixar, Lucasfilm, Apple, Google, Adobe and Intuit. The first two companies settled claims for $9 million while the other companies have gone to an appeals court after Koh rejected a $325 million settlement as insufficient.

Blue Sky Reaches Settlement in Hollywood Studio Antitrust Lawsuit
/ ves

http://www.awn.com/news/blue-sky-reaches-settlement-hollywood-studio-antitrust-lawsuit

Blue Sky Studios has reached a settlement in a class-action lawsuit, Variety reports, alleging that the animation studio behind last year’s The Peanuts Movie and the Ice Age franchise, along with other companies, violated antitrust laws by conspiring to suppress the wages of animation and VFX artists via non-poaching agreements.

The suit contends that the roots of the anti-poaching agreements go back to the mid-1980s, when George Lucas and Ed Catmull, the president of Steve Jobs’ then-newly formed company Pixar, agreed to not raid each other’s employees. Other companies later joined the conspiracy, the suit alleges, including Sony ImageMovers, Lucasfilm and Walt Disney.

The plaintiffs have been seeking class certification. Their proposed settlement class includes certain animation and visual effects employees who worked at Pixar from 2001 to 2010; Lucasfilm from 2001 to 2010; DreamWorks Animation from 2003 to 2010; the Walt Disney Co. from 2004 to 2010; Sony Pictures Animation and Sony Pictures Imageworks from 2004 to 2010; Blue Sky from 2005 to 2010; and ImageMovers from 2007 to 2010.

Photography basics: f-stop vs t-stop
/ photography

http://petapixel.com/2014/09/30/your-lens-aperture-might-be-lying-to-you-or-the-difference-between-f-stops-and-t-stops/

 

https://www.premiumbeat.com/blog/understanding-lenses-aperture-f-stop-t-stop/

 

F-stops are the theoretical amount of light transmitted by the lens; t-stops, the actual amount. The difference is about 1/3 stop, often more with zooms.

 

f-stop is the measurement of the opening (aperture) of the lens in relation to its focal length (roughly, the distance between the lens and the sensor when focused at infinity). The math is: f-number = focal length / aperture diameter.
It mainly controls depth of field, given a known amount of light.
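The ratio above can be sketched in a couple of lines (a minimal illustration; the function and variable names are made up for this example, not part of any camera API):

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-stop = focal length / diameter of the effective aperture."""
    return focal_length_mm / aperture_diameter_mm

# A 50mm lens with a ~17.9mm aperture opening is roughly f/2.8
print(round(f_number(50, 17.9), 1))  # → 2.8
```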

https://www.scantips.com/lights/fstop2.html

 

The smaller the f-stop number (the larger the aperture), the more light reaches the sensor, but the shallower the depth of field.

 

Note that the numbers in an aperture—f/2.8, f/8—signify a certain amount of light, but that doesn’t necessarily mean that’s directly how much light is getting to your sensor.

 

The t-stop, on the other hand, measures how much light actually passes through the aforementioned opening and makes it to the sensor. No lens transmits light perfectly; every element absorbs or reflects some of it on the way to the sensor.
In short, the t-stop is the f-stop corrected for the lens’s actual transmission, telling you exactly how much light is making it to the film or sensor. The smaller the number, the more light.
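A quick sketch of the standard relation between the two (t-stop = f-stop divided by the square root of the lens’s transmittance; the function name and the 85% figure are illustrative assumptions, not measured data):

```python
import math

def t_stop(f_number: float, transmittance: float) -> float:
    """T-stop: f-stop corrected for actual light transmission (0 < transmittance <= 1)."""
    return f_number / math.sqrt(transmittance)

# An f/2.8 lens that only transmits ~85% of the light behaves like roughly T3.0
print(round(t_stop(2.8, 0.85), 1))  # → 3.0
```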

 

http://www.dxomark.com/Lenses/Ratings/Optical-Metric-Scores

Note that an exposure stop is a measure of sensitivity to light, not of lens capability.

Photography basics: Shutter angle and shutter speed and motion blur
/ Featured, photography

http://www.shutterangle.com/2012/cinematic-look-frame-rate-shutter-speed/

 

https://www.cinema5d.com/global-vs-rolling-shutter/

 

https://www.wikihow.com/Choose-a-Camera-Shutter-Speed

 

https://www.provideocoalition.com/shutter-speed-vs-shutter-angle/

 

 

The shutter is the device that controls the amount of light passing through a lens; in essence, it controls how long the film (or sensor) is exposed.

 

Shutter speed is how long this device stays open, which also defines motion blur: the longer it stays open, the blurrier the captured image.

 

The shutter speed number refers to the fraction of a second the shutter stays open (e.g. 1/48), and therefore to how much light is actually allowed through.

 

As a reference, shooting at 24fps with a 180-degree shutter angle, i.e. a shutter speed of 1/48th of a second (0.0208s exposure time), will produce motion blur similar to what we perceive with the naked eye.

 

Exposure is talked of in (shutter) angles for historical reasons: in film cameras the exposure was controlled by a rotating pie-shaped mirrored disc in front of the film gate, and the angle of its open sector determined the exposure time.

 

 

A shutter of 180 degrees blocks/allows light for half of each rotation (half blocked, half open). A 270-degree shutter leaves only a quarter of the disc blocking light (three quarters open, one quarter closed), allowing a longer exposure time. A 90-degree shutter blocks three quarters of the rotation (one quarter open, three quarters closed), giving a shorter exposure.

 

The shutter angle can be converted back and forth to shutter speed with the following formulas:
https://www.provideocoalition.com/shutter-speed-vs-shutter-angle/

 

shutter angle = (360 * fps) / shutter speed

 

shutter speed = (360 * fps) / shutter angle

 

(where “shutter speed” is expressed as the denominator of the fraction of a second, e.g. 48 for a 1/48s exposure)

 

For example here is a chart from shutter angle to shutter speed at 24 fps:
270 = 1/32
180 = 1/48
172.8 = 1/50
144 = 1/60
90 = 1/96
72 = 1/120
45 = 1/192
22.5 = 1/384
11 = 1/785
8.6 = 1/1000
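The conversions and the chart above can be regenerated with a short sketch (function names are illustrative; values are for 24 fps):

```python
def shutter_speed_denominator(shutter_angle_deg: float, fps: float = 24.0) -> float:
    """Exposure as 1/N of a second for a given shutter angle and frame rate."""
    return (360.0 * fps) / shutter_angle_deg

def shutter_angle(speed_denominator: float, fps: float = 24.0) -> float:
    """Shutter angle in degrees for an exposure of 1/speed_denominator seconds."""
    return (360.0 * fps) / speed_denominator

# Rebuild the chart: 270 = 1/32, 180 = 1/48, 172.8 = 1/50, ...
for angle in (270, 180, 172.8, 144, 90, 72, 45, 22.5):
    print(f"{angle} = 1/{shutter_speed_denominator(angle):g}")
```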

 

The above is basically the relation between the way a video camera calculates shutter (fractions of a second) and the way a film camera calculates shutter (in degrees).

Smaller shutter angles show strobing artifacts. The camera only sees the scene for part of each frame interval (half the time, for a typical 180-degree shutter); while the shutter is closed the scene is obscured, so it is never captured continuously.

 

This means that fast moving objects, and especially objects moving across the frame, will exhibit jerky movement. This is called strobing. The defect is also very noticeable during pans.  Smaller shutter angles (shorter exposure) exhibit more pronounced strobing effects.

 

Larger shutter angles show more motion blur, as the longer exposure captures more movement.

Note that in 3D rendering you want to first sum the shutter open and shutter close values, then compare that total to the shutter angle aperture, i.e.:

 

shutter open -0.0625
shutter close 0.0625
Total shutter = 0.0625+0.0625 = 0.125
Shutter angle = 360*0.125 = 45

 

shutter open -0.125
shutter close 0.125
Total shutter = 0.125+0.125 = 0.25
Shutter angle = 360*0.25 = 90

 

shutter open -0.25
shutter close 0.25
Total shutter = 0.25+0.25 = 0.5
Shutter angle = 360*0.5 = 180

 

shutter open -0.375
shutter close 0.375
Total shutter = 0.375+0.375 = 0.75
Shutter angle = 360*0.75 = 270
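The sums above can be expressed as a tiny helper (a sketch with illustrative parameter names, not tied to any particular renderer’s API; it assumes shutter open is given as a negative fraction of the frame, as in the examples):

```python
def shutter_angle_from_open_close(shutter_open: float, shutter_close: float) -> float:
    """Convert renderer shutter open/close values (fractions of a frame,
    open usually negative) into the equivalent shutter angle in degrees."""
    total = abs(shutter_open) + shutter_close
    return 360.0 * total

print(shutter_angle_from_open_close(-0.25, 0.25))  # 0.5 of a frame → 180.0 degrees
```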

 

 

Faster frame rates can mitigate both strobing and excessive motion blur.

Rob Bredow on VR – How Long Will Viewers Stay Immersed in Virtual Reality?
/ production, VR

http://blogs.wsj.com/digits/2016/01/04/how-long-will-viewers-stay-immersed-in-virtual-reality/

“Is that going to be the kind of thing that’s compelling enough as its own medium to hold your attention for two hours?” said Rob Bredow, Lucasfilm’s head of new media, at an Oculus conference in September. “If the answer is yes, we haven’t yet figured out all of the language of that sort of film-making.”