Strategies for conflict resolution
/ quotes

Photography basics: Production Rendering Resolution Charts
https://www.urtech.ca/2019/04/solved-complete-list-of-screen-resolution-names-sizes-and-aspect-ratios/

 

Resolution – Aspect Ratio (4:3, 16:9, 16:10, 3:2, 5:3, 5:4)
CGA 320 x 200
QVGA 320 x 240
VGA (SD, Standard Definition) 640 x 480
NTSC 720 x 480
WVGA 854 x 450
WVGA 800 x 480
PAL 768 x 576
SVGA 800 x 600
XGA 1024 x 768
not named 1152 x 768
HD 720 (720P, High Definition) 1280 x 720
WXGA 1280 x 800
WXGA 1280 x 768
SXGA 1280 x 1024
not named (768P, HD, High Definition) 1366 x 768
not named 1440 x 960
SXGA+ 1400 x 1050
WSXGA 1680 x 1050
UXGA (2MP) 1600 x 1200
HD1080 (1080P, Full HD) 1920 x 1080
WUXGA 1920 x 1200
2K 2048 x (any)
QWXGA 2048 x 1152
QXGA (3MP) 2048 x 1536
WQXGA 2560 x 1600
QHD (Quad HD) 2560 x 1440
QSXGA (5MP) 2560 x 2048
4K UHD (4K, Ultra HD, Ultra-High Definition) 3840 x 2160
QUXGA+ 3840 x 2400
IMAX 3D 4096 x 3072
8K UHD (8K, 8K Ultra HD, UHDTV) 7680 x 4320
10K  (10240×4320, 10K HD) 10240 x (any)
16K (Quad UHD, 16K UHD, 8640P) 15360 x 8640

 


What is the resolution and view coverage of the human eye? And at what distance is a TV best viewed?
/ colour, Featured, photography

https://www.discovery.com/science/mexapixels-in-human-eye

About 576 megapixels for the entire field of view.

 

Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. The number of pixels would be:
90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).

 

At any one moment, you actually do not perceive that many pixels, but your eye moves around the scene to see all the detail you want. But the human eye really sees a larger field of view, close to 180 degrees. Let’s be conservative and use 120 degrees for the field of view. Then we would see:

120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.
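
A quick way to sanity-check these numbers is a minimal Python sketch that reproduces the arithmetic above (0.3 arc-minutes is the resolving limit assumed by the source):

```python
# Pixels needed to cover a field of view at the eye's ~0.3 arc-minute resolving limit.
def eye_megapixels(fov_h_deg, fov_v_deg, acuity_arcmin=0.3):
    px_h = fov_h_deg * 60 / acuity_arcmin   # horizontal "pixels"
    px_v = fov_v_deg * 60 / acuity_arcmin   # vertical "pixels"
    return px_h * px_v / 1e6

print(eye_megapixels(90, 90))    # ~324 MP for the 90 x 90 degree "window" view
print(eye_megapixels(120, 120))  # ~576 MP for the conservative 120-degree field
```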

Or, put another way: about 7 megapixels for the 2-degree focus arc, plus roughly 1 megapixel for the rest.

https://clarkvision.com/articles/eye-resolution.html

 

 

How many megapixels do you really need?

https://www.tomsguide.com/us/how-many-megapixels-you-need,review-1974.html

 

 

Sensor size reference – resolutions
/ photography

domeble – Hi-Resolution CGI Backplates and 360° HDRI
/ lighting, photography, reference

www.domeble.com/

When collecting HDRIs, make sure the data includes basic metadata, such as:

  • ISO
  • Aperture
  • Exposure time (shutter speed)
  • Color temperature
  • Color space
  • Exposure value (what the sensor receives of the sun's intensity, in lux)
  • 7+ brackets (with the 5th or 6th being the perceived balanced exposure)
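
As a minimal sketch of that checklist (assuming Pillow is installed, the brackets are JPEGs with EXIF intact, and a hypothetical brackets/ folder), you can verify each frame carries the key tags and report its exposure value, EV = log2(N²/t):

```python
import glob
import math
from PIL import Image, ExifTags

REQUIRED = {"ISOSpeedRatings", "FNumber", "ExposureTime"}

for path in sorted(glob.glob("brackets/*.jpg")):        # hypothetical folder name
    raw = Image.open(path)._getexif() or {}             # Pillow's merged EXIF dict
    exif = {ExifTags.TAGS.get(tag, tag): value for tag, value in raw.items()}
    missing = REQUIRED - exif.keys()
    if missing:
        print(f"{path}: missing {sorted(missing)}")
        continue
    n, t = float(exif["FNumber"]), float(exif["ExposureTime"])
    ev = math.log2(n * n / t)                            # exposure value at the tagged ISO
    print(f"{path}: ISO {exif['ISOSpeedRatings']}, f/{n}, {t}s -> EV {ev:.1f}")
```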

 

In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allow a greater dynamic range of luminance (a photometric measure of luminous intensity per unit area of light travelling in a given direction; it describes the amount of light that passes through or is emitted from a particular area and falls within a given solid angle) between the lightest and darkest areas of an image than standard digital imaging techniques or photographic methods. This wider dynamic range allows HDR images to represent more accurately the wide range of intensity levels found in real scenes, ranging from direct sunlight to faint starlight and the deepest shadows.

 

The two main sources of HDR imagery are computer renderings and the merging of multiple photographs, which in turn are known as low dynamic range (LDR) or standard dynamic range (SDR) images. Tone mapping (look-up) techniques, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.

 

In photography, dynamic range is measured in Exposure Value (EV) differences, or stops, between the brightest and darkest parts of the image that show detail. (Exposure value denotes all combinations of camera shutter speed and relative aperture that give the same exposure; the concept was developed in Germany in the 1950s.) An increase of one EV, or one stop, is a doubling of the amount of light.

 

The human response to brightness is well approximated by Stevens' power law, which over a reasonable range is close to logarithmic, as described by the Weber–Fechner law. This is one reason that logarithmic measures of light intensity are often used as well.

 

HDR is short for High Dynamic Range. It's a term used to describe an image which contains a greater exposure range than the “black” to “white” that 8- or 16-bit integer formats (JPEG, TIFF, PNG) can describe. Whereas these Low Dynamic Range (LDR) images can hold perhaps 8 to 10 f-stops of image information, HDR images can describe beyond 30 stops and are stored in 32-bit images.
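
A small sketch of measuring that range (assuming imageio with an EXR-capable plugin and a hypothetical 32-bit float file named probe.exr): the dynamic range in stops is simply log2 of the brightest over the darkest non-zero luminance.

```python
import numpy as np
import imageio.v3 as iio

img = iio.imread("probe.exr").astype(np.float64)     # 32-bit float HDR image
luminance = img[..., :3].mean(axis=-1).ravel()       # rough luminance per pixel
luminance = luminance[luminance > 0]                 # ignore pure-black pixels

stops = np.log2(luminance.max() / luminance.min())
print(f"dynamic range: {stops:.1f} stops")           # LDR formats sit near 8-10, HDR can exceed 30
```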

 

iOS resolutions
/ IOS, production

http://iosres.com/

iPhone 15 Pro Anamorphic Experiment – “What Makes a Cinema Camera” by Michael Cioni/Strada
/ hardware, photography

https://www.cined.com/iphone-15-pro-anamorphic-experiment-what-makes-a-cinema-camera-by-michael-cioni-strada/

 

For Michael Cioni, a cinema camera has to fulfill five requisites:

 

  • Cinematic resolution
  • Intraframe encoding
  • High dynamic range
  • Wide color gamut (10-bit or more)
  • Removable lenses

For now, the iPhone 15 Pro meets four out of these five requirements, all except the last one.

 

 

 

Generative AI Glossary
/ A.I.

https://education.civitai.com/generative-ai-glossary/

 

How to View Apple’s Spatial Videos
/ IOS, photography

https://blog.frame.io/2024/02/01/how-to-capture-and-view-vision-pro-spatial-video/

 

Apple’s Immersive Videos format is a special container for 3D or “spatial” video. You can capture spatial video to this format either by using the Vision Pro as a head-mounted camera, or with an iPhone 15 Pro or 15 Pro Max. The headset offers better capture because its cameras are more optimized for 3D, resulting in higher resolution and improved depth effects.

 

While the iPhone wasn’t designed specifically as a 3D camera, it can use its primary and ultrawide cameras in landscape orientation simultaneously, allowing it to capture spatial video—as long as you hold it horizontally. Computational photography is used to compensate for the lens differences, and the output is two separate 1080p, 30fps videos that capture a 180-degree field of view.

 

These spatial videos are stored using the MV-HEVC (Multi-View High-Efficiency Video Coding) format, which uses H.265 compression to crunch this down to approximately 130MB per minute, including spatial audio. Unlike conventional stereoscopic formats—which combine the two views into a flattened video file that’s either side-by-side or top/bottom—these spatial videos are stored as discrete tracks within the file container.

 

Spatialify is an iOS app designed to view and convert various 3D formats. It also works well on macOS, as long as your Mac has an Apple Silicon CPU, and it supports MV-HEVC, so you'll be all set. At $4.99 it's a genuine bargain considering what it does.

 

 

Alan Friedman Takes Stunning Hi-Res Photographs of the Sun in His Backyard
/ photography

https://www.boredpanda.com/high-resolution-sun-pictures-alan-friedman/

 

https://avertedimagination.squarespace.com/

 

He uses a small (3 ½” aperture) telescope with a Hydrogen Alpha filter and an industrial webcam to capture the surface of the Sun, which looks surprisingly calm and fluffy in the incredible photos.

 

Canon RF 5.2mm f2.8L Dual Fisheye EOS VR System for VR photography and editing
/ hardware, photography, VR

 

https://thecamerastore.com/products/canon-rf-5-2mm-f2-8l-dual-fisheye

 

As part of the EOS VR System – this lens paired with the EOS R5 (updated to firmware 1.5.0 or higher) and one of Canon’s VR software solutions – lets you create immersive 3D that can be experienced on compatible head-mounted displays, including the Oculus Quest 2 and more. Viewers can take in the scene with a vivid, wide field of view by simply moving their head. This is the world’s first digital interchangeable lens that can capture stereoscopic 3D 180° VR imagery on a single image sensor.

 

The pairing of this lens and the EOS R5 camera brings high resolution video recording at up to 8K DCI 30p and 4K DCI 60p.

 

https://www.the-digital-picture.com/Reviews/Canon-RF-5.2mm-F2.8-L-Dual-Fisheye-Lens.aspx

 

 

 

 

Las Vegas’ Sphere and the Big Sky Camera
/ hardware, photography, ves

https://theasc.com/articles/sphere-and-the-big-sky-camera

 

Sphere is a 516′-wide, 366′-tall geodesic dome that houses the world’s highest-resolution screen: a 160,000-square-foot LED wraparound that fills the peripheral vision for 17,600 spectators (20,000 if standing-room areas are included). The curved screen is a 9mm-pixel-pitch, sonically transparent surface of LED panels with 500-nit brightness that produce a high-dynamic-range experience. The audience sits 160′ to 400′ from the screen in theatrical seating, and the screen provides a 155-degree diagonal field of view and a more-than-140-degree vertical field of view.

 

The image on the screen is 16K (16,384x16,384) driven by 25 synchronized 4K video servers.

 

https://nofilmschool.com/darren-aronofsky-sphere-camera

 

 

Cross section:

 

Meta Quest 3 is here
/ hardware, VR

 

 

https://www.roadtovr.com/meta-quest-3-oculus-preview-connect-2023/

 

  • Better lenses
  • Better resolution
  • Better processor
  • Better audio
  • Better passthrough
  • Better controllers
  • Better form-factor

 

 

 

Stability.AI – Stable Diffusion 2.0 open source release
/ A.I., software

https://stability.ai/blog/stable-diffusion-v2-release

 

 

  • New Text-to-Image Diffusion Models
  • Super-resolution Upscaler Diffusion Models
  • Depth-to-Image Diffusion Model
  • Updated Inpainting Diffusion Model

 

https://www.reddit.com/r/StableDiffusion/comments/z64aup/realistic_prompts_using_tomlikesrobots_workflow/

 

 

AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability
/ A.I., ves

https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

 

“Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”

 

“In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”

 

“It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”

 

“Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”

Open Source OpenVDB Version 9.0.0 Available Now and Introduces GPU Support
/ blender, software

First introduced in 2012, OpenVDB is nowadays commonly used in simulation tools such as Houdini, EmberGen, and Blender, and in feature film production for creating realistic volumetric images. The format, however, has historically lacked GPU support and cannot easily be used in games due to the considerable file sizes (on average at least a few gigabytes) and the computational effort required to render 3D volumes.

Volumetric data has numerous important applications in computer graphics and VFX production. It’s used for volume rendering, fluid simulation, fracture simulation, modeling with implicit surfaces, etc. However, this data is not so easy to work with. In most cases volumetric data is represented on spatially uniform, regular 3D grids. Although dense regular grids are convenient for several reasons, they have one major drawback – their memory footprint grows cubically with respect to grid resolution.

The OpenVDB format, developed by DreamWorks Animation, partially solves this issue by storing voxel data in a tree-like data structure that allows the creation of sparse volumes. The beauty of this system is that it completely ignores empty cells, which drastically decreases memory and disk usage while simultaneously making the rendering of volumes much faster.
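
A minimal sketch of that idea (assuming the optional pyopenvdb Python bindings are available): a small dense blob is written far from the origin of a conceptually enormous grid, yet only the occupied voxels cost memory.

```python
import numpy as np
import pyopenvdb as vdb

blob = np.random.rand(64, 64, 64).astype(np.float32)   # a dense 64^3 patch of density

grid = vdb.FloatGrid(0.0)                               # background value for empty space
grid.name = "density"
grid.copyFromArray(blob, ijk=(10000, 10000, 10000))     # place it far from the origin

print(grid.activeVoxelCount())                          # ~64^3, not 10000^3: empty cells are skipped
vdb.write("blob.vdb", grids=[grid])
```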

 

 

www.aswf.io/blog/project-update-openvdb-version-9-0-0-available-now-introduces-gpu-support/

 

github.com/AcademySoftwareFoundation/openvdb/releases/tag/v9.0.0

Plex – an open source Visual Effects, Animation and Games pipeline
/ production, software

www.alexanderrichtertd.com/post/plex-open-source-pipeline

Environments
– OS: Windows | Linux | Mac
– Software: Maya 2020+ | Houdini 15+ | 3ds Max 2020+ | Nuke 12+ | …
– Renderer: Arnold | RenderMan | Mantra | V-Ray | …

Project Features
– Visual Effects, Animation & Game production management system
– file & folder management (settings | create | save | load | publish)
– flexible, portable, multi functional project environment
– additional libraries (api | img | user | shot)
– workflow tracking & reporting
– user-pipeline integration
– SSTP (simple | smart | transparent | performant)

Pipeline Features

Layered Pipeline
– create a company pipeline
– add a project pipeline
– test and develop in a personal environment

Scripts
– desktop app
– save (+ publish) | load | create | render
– get, set and handle data | img | scripts
– template UI (user, report, help, accept, comment, color code)
– setup menu, shelf, toolbar, …

Workflows and Charts
– naming conventions
– software pipeline
– folder structure (project & pipeline)

Data and Helper
– project (resolution, fps …)
– user (name, task …)
– context (shot, task, comment …)
– environment variables (PROJECT_PATH …)
– additional libraries

Feedback & Debug (+ advanced logging)
– inform user about processes
– debug like a king *bow*
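
As a hedged sketch of the "Data and Helper" idea above (variable names and defaults here are hypothetical, not Plex's actual API), project settings such as resolution and fps can be resolved from environment variables with fallbacks:

```python
import os

def project_setting(key, default):
    """Read PROJECT_<KEY> from the environment, falling back to a default."""
    return os.environ.get(f"PROJECT_{key.upper()}", default)

PROJECT_PATH = project_setting("path", "/jobs/my_show")   # hypothetical folder layout
RESOLUTION = tuple(int(v) for v in project_setting("resolution", "1920x1080").split("x"))
FPS = float(project_setting("fps", "24"))

print(PROJECT_PATH, RESOLUTION, FPS)
```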

Monitors For Video Editing & Vfx work – Eizo ColorEdge CG319X 4K 31"
/ hardware

www.cgdirector.com/best-monitor-graphic-design-video-editing-3d/

There are three main panel types found in today’s monitors:

The TN Panel (Twisted nematic)
The VA Panel (Vertical Alignment)
The IPS Panel (In-plane Switching)

 

The IPS Panel is the best panel type for visually demanding work.

 

imagescience.com.au/products/monitors/monitors-for-video-editing-and-vfx

 

The Eizo CG319X is the current benchmark monitor for video work from Eizo, the most respected name in the high end colour accurate monitor business.

 

Used by some of the world’s best VFX studios – like WETA Digital and Studio Ghibli – this full true 4K monster offers superb accuracy, DCI true blacks, fully automatic calibration with an in-built high-quality calibration sensor, a very generous working area, and true full 4K resolution. It can show nearly all of the DCI-P3 colour space with extreme precision.

 

If you’ve got the budget, this is without doubt the monitor to own for high end video and FX work.

 

imagescience.com.au/products/monitors/eizo-coloredge-cg319x-4k


Samsung – The Wall MicroLED frame-less TV
/ hardware, production

Samsung The Wall

 

 

The Wall TV can be configured to sizes ranging from 146 inches to 292 inches diagonally and uses MicroLED technology instead of OLED or traditional LED.

MicroLED delivers many of the benefits you’ll find in OLED, including perfect blacks and eye-popping colors, but the set also boasts 1,600 nits of brightness. That’s brighter than today’s OLED sets.

Currently, Samsung is offering two models of The Wall, or rather the individual panels that make up The Wall: the IW008J and the IW008R. While Samsung doesn’t list prices for these panels online, other resellers list the modules for $16,000 to $23,000 each.

These individual modules measure 31.75 x 17.86 inches but have an individual resolution of just 960 x 540 pixels. To get the same 3840 x 2160 resolution you’d see on a standard 4K TV, you need 16 of these panels, set up in a 4 x 4 configuration that measures 146 inches diagonally.
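
A quick check of that panel math (module sizes as quoted above):

```python
import math

MODULE_PX = (960, 540)          # pixels per module
MODULE_IN = (31.75, 17.86)      # module size in inches (width x height)

def wall_for(target_w, target_h):
    cols = math.ceil(target_w / MODULE_PX[0])
    rows = math.ceil(target_h / MODULE_PX[1])
    diag = math.hypot(cols * MODULE_IN[0], rows * MODULE_IN[1])
    return cols * rows, (cols, rows), round(diag)

print(wall_for(3840, 2160))     # (16, (4, 4), 146): sixteen modules, ~146" diagonal
```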

If you’re in the market for a microLED TV, and are comfortable spending upwards of $300,000 to get the same 4K resolution that the best cheap 4K TVs provide, you’ll need to contact Samsung directly to order products and arrange custom installation.

 

Because The Wall is made up of borderless tiles, the modular design allows additional tiles to be added, making this even-bigger version of The Wall possible.

 

https://www.tomsguide.com/us/samsung-the-wall-tv-release-date,news-27356.html

What’s the Difference Between Ray Casting, Ray Tracing, Path Tracing and Rasterization? Physical light tracing…
/ Featured, lighting, production

RASTERIZATION
Rasterisation (or rasterization)
is the task of taking information described in a vector graphics format, or the vertices of the triangles making up 3D shapes, and converting it into a raster image (a series of pixels, dots or lines which, when displayed together, recreate the image that was represented via shapes). In other words, it is “rasterizing” vectors or 3D models onto a 2D plane for display on a computer screen.

For each triangle of a 3D shape, you project the corners of the triangle onto the virtual screen with some math (projective geometry). Then you have the positions of the 3 corners of the triangle on the pixel screen. Those 3 points have texture coordinates, so you know where in the texture the 3 corners are. The cost is proportional to the number of triangles, and is only slightly affected by the screen resolution.
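
A minimal sketch of that projection step (a simple pinhole model with assumed focal length and image size, not any particular API):

```python
import numpy as np

def project_vertex(v, focal=1.0, width=640, height=480):
    """Project a camera-space vertex (x, y, z with z < 0 in front) to pixel coordinates."""
    x, y, z = v
    sx = focal * x / -z                      # perspective divide: farther points land closer to centre
    sy = focal * y / -z
    px = (sx + 1.0) * 0.5 * width            # map the [-1, 1] screen plane to pixels
    py = (1.0 - (sy + 1.0) * 0.5) * height   # flip y so row 0 is the top of the image
    return px, py

triangle = [np.array([-1.0, 0.0, -3.0]),
            np.array([ 1.0, 0.0, -3.0]),
            np.array([ 0.0, 1.0, -5.0])]
print([project_vertex(v) for v in triangle])  # three 2D corners, ready to be filled in ("rasterized")
```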

In computer graphics, a raster graphics or bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via a monitor, paper, or other display medium.

With rasterization, objects on the screen are created from a mesh of virtual triangles, or polygons, that create 3D models of objects. A lot of information is associated with each vertex, including its position in space, as well as information about color, texture and its “normal,” which is used to determine the way the surface of an object is facing.

Computers then convert the triangles of the 3D models into pixels, or dots, on a 2D screen. Each pixel can be assigned an initial color value from the data stored in the triangle vertices.

Further pixel processing, or “shading” (including changing the pixel color based on how lights in the scene hit the pixel, and applying one or more textures to the pixel), combines to generate the final color applied to a pixel.

 

The main advantage of rasterization is its speed. However, rasterization is simply the process of computing the mapping from scene geometry to pixels; it does not prescribe a particular way to compute the color of those pixels. So it cannot take shading, especially physically accurate lighting, into account, and it cannot promise a photorealistic output. That’s a big limitation of rasterization.

There are also multiple problems:

  • If you have two triangles, one behind the other, you will draw all of their pixels twice. You only keep the pixel from the triangle that is closer to you (Z-buffer), but you still do the work twice.

  • The borders of your triangles are jagged, as it is hard to know whether a pixel is inside the triangle or outside. You can do some smoothing on those edges; that is anti-aliasing.

  • You have to handle every triangle (including the ones behind you) only to find that many do not touch the screen at all. (There are techniques to mitigate this, where we only look at triangles that are in the field of view.)

  • Transparency is hard to handle (you can’t just average the colors of overlapping transparent triangles; you have to blend them in the right order).

 

 

 

RAY CASTING
It is almost the exact reverse of rasterization: you start from the virtual screen instead of the vector or 3D shapes, and you project a ray, starting from each pixel of the screen, until it intersects with a triangle.

The cost is directly correlated to the number of pixels on the screen, and you need a really cheap way of finding the first triangle that intersects a ray. In the end it is more expensive than rasterization, but it will, by design, ignore the triangles that are out of the field of view.

You can also continue past the first triangle the ray hits, taking a little bit of the color of the next one, and so on. This is useful for handling the borders of triangles cleanly (less jagged) and for handling transparency correctly.
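
The core of that per-pixel search is a ray-triangle intersection test. Here is a small sketch using the standard Möller–Trumbore algorithm (NumPy used for the vector math):

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore test: distance t along the ray to the triangle, or None if it misses."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                      # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None           # only accept hits in front of the ray origin

# One ray per pixel, fired from the screen into the scene; keep the closest hit.
tri = (np.array([-1.0, -1.0, -5.0]), np.array([1.0, -1.0, -5.0]), np.array([0.0, 1.0, -5.0]))
print(ray_triangle_intersect(np.zeros(3), np.array([0.0, 0.0, -1.0]), *tri))   # -> 5.0
```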

 

RAYTRACING


Same idea as ray casting, except once you hit a triangle you reflect off it and continue in a different direction. The number of reflections you allow is the “depth” of your ray tracing. The color of the pixel can then be calculated based on the light source and all the polygons the ray had to reflect off to get to that screen pixel.

The easiest way to think of ray tracing is to look around you, right now. The objects you’re seeing are illuminated by beams of light. Now turn that around and follow the path of those beams backwards from your eye to the objects that light interacts with. That’s ray tracing.

Ray tracing is an eye-oriented process that walks through each pixel looking for what object should be shown there. It can also be described as a technique that follows a beam of light (per pixel) from a set point and simulates how it reacts when it encounters objects.

Compared with rasterization, ray tracing is hard to implement in real time: even though one ray can be traced and processed without much trouble, after one ray bounces off an object it can turn into 10 rays, those 10 can turn into 100, then 1,000… The increase is exponential, and the calculation for all these rays becomes time consuming.

Historically, computer hardware hasn’t been fast enough to use these techniques in real time, such as in video games. Moviemakers can take as long as they like to render a single frame, so they do it offline in render farms. Video games have only a fraction of a second. As a result, most real-time graphics rely on another technique: rasterization.

 

 

PATH TRACING
Path tracing can be used to solve more complex lighting situations.

Path tracing is a type of ray tracing. When using path tracing for rendering, the rays only produce a single ray per bounce. The rays do not follow a defined line per bounce (to a light, for example), but rather shoot off in a random direction. The path tracing algorithm then takes a random sampling of all of the rays to create the final image. This results in sampling a variety of different types of lighting.

When a ray hits a surface, it doesn’t trace a path to every light source; instead, it bounces the ray off the surface and keeps bouncing it until it hits a light source or exhausts some bounce limit.
It then calculates the amount of light transferred all the way to the pixel, including any color information gathered from surfaces along the way.
It then averages out the values calculated from all the paths that were traced into the scene to get the final pixel color value.

It requires a ton of computing power, and if you don’t send out enough rays per pixel, or don’t trace the paths far enough into the scene, you end up with a very spotty image as many pixels fail to find any light sources from their rays. As you increase the samples per pixel, you can see the image quality get better and better.
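
A toy sketch of that loop (an entirely hypothetical sphere scene, diffuse-only, no importance sampling): each camera ray bounces in a random direction until it reaches the light or the bounce limit, and many samples are averaged per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: (center, radius, albedo, emission). Values are made up for illustration.
SPHERES = [
    (np.array([0.0, -100.5, -1.0]), 100.0, np.array([0.8, 0.8, 0.8]), np.zeros(3)),   # floor
    (np.array([0.0, 0.0, -1.0]), 0.5, np.array([0.7, 0.3, 0.3]), np.zeros(3)),        # red diffuse ball
    (np.array([0.0, 3.0, -1.0]), 1.0, np.zeros(3), np.array([4.0, 4.0, 4.0])),        # area light
]

def hit_sphere(center, radius, origin, direction):
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-3 else None

def trace(origin, direction, depth=0, max_depth=4):
    if depth >= max_depth:                       # exhausted the bounce limit
        return np.zeros(3)
    nearest = None
    for center, radius, albedo, emission in SPHERES:
        t = hit_sphere(center, radius, origin, direction)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, albedo, emission)
    if nearest is None:
        return np.zeros(3)                       # ray escaped: no light gathered
    t, center, albedo, emission = nearest
    point = origin + t * direction
    normal = (point - center) / np.linalg.norm(point - center)
    bounce = rng.normal(size=3)                  # one random bounce direction per hit
    bounce /= np.linalg.norm(bounce)
    if np.dot(bounce, normal) < 0.0:
        bounce = -bounce                         # keep it in the hemisphere above the surface
    return emission + albedo * trace(point, bounce, depth + 1, max_depth)

# Average many samples for a single camera ray: more samples per pixel, less noise.
samples = [trace(np.zeros(3), np.array([0.0, 0.0, -1.0])) for _ in range(256)]
print(np.mean(samples, axis=0))
```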

Ray tracing tends to be more efficient than path tracing. Basically, the render time of a ray tracer depends on the number of polygons in the scene: the more polygons you have, the longer it will take.
Meanwhile, the render time of a path tracer can be largely indifferent to the number of polygons, but it is tied to the lighting situation: if you add a light, transparency, translucence, or other shader effects, the path tracer will slow down considerably.

 

Sources:
https://medium.com/@junyingw/future-of-gaming-rasterization-vs-ray-tracing-vs-path-tracing-32b334510f1f

 

https://www.reddit.com/r/explainlikeimfive/comments/8tim5q/eli5_whats_the_difference_among_rasterization_ray/

 

blogs.nvidia.com/blog/2018/03/19/whats-difference-between-ray-tracing-rasterization/

 

https://en.wikipedia.org/wiki/Rasterisation

 

https://www.dusterwald.com/2016/07/path-tracing-vs-ray-tracing/

 

https://www.quora.com/Whats-the-difference-between-ray-tracing-and-path-tracing

The difference between eyes and cameras
/ production, reference

 

 

 

https://www.quora.com/What-is-the-comparison-between-the-human-eye-and-a-digital-camera

 

https://medium.com/hipster-color-science/a-beginners-guide-to-colorimetry-401f1830b65a

 

There are three types of cone photoreceptors in the eye, called long, medium and short (L, M, S). These contribute to color discrimination. They are all sensitive to different, yet overlapping, wavelengths of light, and they are commonly associated with the color they are most sensitive to: L = red, M = green, S = blue.

 

Different spectral distributions can stimulate the cones in exactly the same way.
A leaf and a green car can look the same to you yet physically have different reflectance properties. It turns out every color (that is, every unique cone output) can be created from many different spectral distributions. Color science starts to make a lot more sense when you understand this.

 

When you view the charts overlaid, you can see that the spinach mostly reflects light outside of the eye’s visual range, and inside our range it mostly reflects light centered around our M cone.

 

This phenomenon is called metamerism and it has huge ramifications for color reproduction. It means we don’t need the original light to reproduce an observed color.
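
A small numerical sketch of metamerism (the cone sensitivity curves here are illustrative Gaussians, not real CIE data): we solve for a mix of three narrow "primaries" whose cone response matches a smooth "leaf" spectrum exactly, even though the two spectra are physically different.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 301)                 # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Illustrative L, M, S cone sensitivities (placeholders, not measured data).
CONES = [gaussian(565, 50), gaussian(540, 45), gaussian(445, 30)]

def cone_response(spectrum):
    """Integrate a spectral distribution against each cone sensitivity -> (L, M, S)."""
    return np.array([np.trapz(spectrum * c, wavelengths) for c in CONES])

leaf = gaussian(550, 40)                                                # smooth "leaf" reflectance
primaries = [gaussian(450, 15), gaussian(530, 15), gaussian(610, 15)]   # three narrow peaks

# Solve for primary weights so the mixture's LMS triplet equals the leaf's.
A = np.column_stack([cone_response(p) for p in primaries])
w = np.linalg.solve(A, cone_response(leaf))
mixture = sum(wi * p for wi, p in zip(w, primaries))

print(cone_response(leaf))        # identical LMS triplets...
print(cone_response(mixture))     # ...from two very different spectra: a metameric match
```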

 

http://www.absoluteastronomy.com/topics/Adaptation_%28eye%29

 

The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly 1,000,000,000 apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as “black” can be shifted across six orders of magnitude—a factor of one million.

 

https://clarkvision.com/articles/eye-resolution.html

 

The human eye is able to function in bright sunlight and view faint starlight, a range of more than 100 million to one. The Blackwell (1946) data covered a brightness range of 10 million and did not include intensities brighter than about the full Moon. The full range of adaptability is on the order of a billion to 1. But this is like saying a camera can function over a similar range by adjusting the ISO gain, aperture and exposure time.

In any one view, the eye can see over a 10,000:1 range in contrast detection, but it depends on the scene brightness, with the range decreasing for lower-contrast targets. The eye is a contrast detector, not an absolute detector like the sensor in a digital camera; hence the distinction. The range of the human eye is greater than that of any film or consumer digital camera.

By comparison, a typical DSLR camera's contrast ratio is on the order of 2048:1.
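
Converting those contrast figures into stops and orders of magnitude makes the comparison concrete:

```python
import math

for label, ratio in [("eye, single view", 10_000),
                     ("eye, fully adapted", 1_000_000_000),
                     ("typical DSLR", 2_048)]:
    print(f"{label}: {math.log2(ratio):.1f} stops, {math.log10(ratio):.1f} orders of magnitude")
```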

 

(Daniel Frank) Several key differences stand out for me (among many):

  • The area devoted to seeing detail in the eye — the fovea — is extremely small compared to a digital camera sensor. It covers a roughly circular area of only about three degrees of arc. By contrast, a “normal” 50mm lens (so called because it supposedly mimics the perspective of the human eye) covers roughly 40 degrees of arc. Because of this extremely narrow field of view, the eye is constantly making small movements (“saccades”) to scan more of the field, and the brain builds up the illusion of a wider, detailed picture.
  • The eye has two different main types of light detecting elements: rods and cones. Rods are more sensitive, and detect only variations in brightness, but not color. Cones sense color, but only work in brighter light. That’s why very dim scenes look desaturated, in shades of gray, to the human eye. If you take a picture in moonlight with a very high-ISO digital camera, you’ll be struck by how saturated the colors are in that picture — it looks like daylight. We think of this difference in color intensity as being inherent in dark scenes, but that’s not true — it’s actually the limitation of the cones in our eyes.
  • There are specific cones in the eye with stronger responses to the different wavelengths corresponding to red, green, and blue light. By contrast, the CCD or CMOS sensor in a color digital camera can only sense luminance differences: it just counts photons in tens of millions of tiny photodetectors (“wells”) spread across its surface. In front of this detector is an array of microscopic red, blue, and green filters, one per well. The processing engine in the camera interpolates the luminance of adjacent red-, green-, or blue-filtered detectors based on a so-called “demosaicing” algorithm. This bears no resemblance to how the eye detects color. (The so-called “Foveon” sensor sold by Sigma in some of its cameras avoids demosaicing by layering different color-sensing layers, but this still isn’t how the eye works.)
  • The files output by color digital cameras contain three channels of luminance data: red, green, and blue. While the human eye has red, green, and blue-sensing cones, those cones are cross-wired in the retina to produce a luminance channel plus a red-green and a blue-yellow channel, and it’s data in that color space (known technically as “LAB”) that goes to the brain. That’s why we can’t perceive a reddish-green or a yellowish-blue, whereas such colors can be represented in the RGB color space used by digital cameras.
  • The retina is much larger than the fovea, but the light-sensitive areas outside the fovea, and the nuclei to which they wire in the brain, are highly sensitive to motion, particularly in the periphery of our vision. The human visual system — including the eye — is highly adapted to detecting and analyzing potential threats coming at us from outside our central vision, and priming the brain and body to respond. These functions and systems have no analogue in any digital camera system.

Equirectangular 360 videos/photos to Unity3D to VR
/ IOS, production, software, VR

SUMMARY

  1. A lot of 360 technology is natively supported in Unity3D. Examples here: https://assetstore.unity.com/packages/essentials/tutorial-projects/vr-samples-51519
  2. Use the Google Cardboard VR API to export for Android or iOS. https://developers.google.com/vr/?hl=en https://developers.google.com/vr/develop/unity/get-started-ios
  3. Images and videos are for the most part equirectangular 2:1 360 captures, mapped onto a skybox (stills) or an inverted sphere (videos); see the mapping sketch after this list. Panoramas are also supported.
  4. Stereo is achieved in different formats, but mostly with a 2:1 over-under layout.
  5. Videos can be streamed from a server.
  6. You can export 360 mono/stereo stills/videos from Unity3D with VR Panorama.
  7. 4K is probably the best average resolution size for mobiles.
  8. Interaction can be driven through the Google API gaze scripts/plugins or through Google Cloud Speech Recognition (paid service, https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625 )
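
A sketch of the equirectangular 2:1 mapping referenced in point 3 above (plain math, independent of Unity's API): each (u, v) in the panorama corresponds to a direction on the sphere that the skybox or inverted-sphere material looks up at render time.

```python
import math

def equirect_uv_to_direction(u, v):
    """u, v in [0, 1]; returns a unit direction (x, y, z), +y up."""
    lon = (u - 0.5) * 2.0 * math.pi      # -pi .. pi around the vertical axis
    lat = (0.5 - v) * math.pi            # +pi/2 at the top row, -pi/2 at the bottom
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

print(equirect_uv_to_direction(0.5, 0.5))   # straight ahead: (0, 0, 1)
print(equirect_uv_to_direction(0.5, 0.0))   # top of the image: (0, 1, 0)
```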

DETAILS

  • Google VR game to iOS in 15 minutes
  • Step by Step Google VR and responding to events with Unity3D 2017.x

https://boostlog.io/@mohammedalsayedomar/create-cardboard-apps-in-unity-5ac8f81e47018500491f38c8
https://www.sitepoint.com/building-a-google-cardboard-vr-app-in-unity/

  • Basics details about equirectangular 2:1 360 images and videos.
  • Skybox cubemap texturing, shading and camera component for stills.
  • Video player component on a sphere with a flipped-normals shader.
  • Note that you can also use a pre-modeled sphere with inverted normals.
  • Note that for audio you will need an audio component on the sphere model.
  • Setup a Full 360 stereoscopic video playback using an over-under layout split onto two cameras.
  • Note you cannot generate a stereoscopic image from two separate 360 captures; it has to be done through a dedicated consumer rig.
    http://bernieroehl.com/360stereoinunity/

VR Actions for Playmaker
https://assetstore.unity.com/packages/tools/vr-actions-for-playmaker-52109

100 Best Unity3d VR Assets
http://meta-guide.com/embodiment/100-best-unity3d-vr-assets

…find more tutorials/reference under this blog page

Photography basics: Depth of Field and composition
/ composition, photography

Depth of field is the range of distances within which the subject is resolved in focus in a photo.
Aperture has a huge effect on the depth of field.

 

 

Changing the f-stop (f/#) of a lens changes the aperture, and as such the DOF.

An f-stop is just a number that tells you the size of the aperture; that’s how the f-stop is related to aperture (and DOF).

If you increase the f-stop, it will increase the DOF, the area in focus (and decrease the aperture). On the other hand, decreasing the f-stop will decrease the DOF (and increase the aperture).
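
A worked example of that relationship, as a sketch using the standard thin-lens depth-of-field approximation (0.03 mm is a common full-frame circle-of-confusion value):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm          # hyperfocal distance
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = subject_mm * (H - focal_mm) / (H - subject_mm) if subject_mm < H else float("inf")
    return near, far

# 50 mm lens focused at 3 m: stopping down from f/2 to f/8 widens the zone of focus.
for f_number in (2.0, 8.0):
    near, far = depth_of_field(50, f_number, 3000)
    print(f"f/{f_number}: in focus from {near/1000:.2f} m to {far/1000:.2f} m")
```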

The red cone in the figure is an angular representation of the resolution of the system, versus the dotted lines, which indicate the aperture coverage. Where the lines of the two cones intersect defines the total range of the depth of field.

This image explains why the longer the depth of field, the greater the range of clarity.

What is OLED and what can it do for your TV
/ colour, hardware

https://www.cnet.com/news/what-is-oled-and-what-can-it-do-for-your-tv/

OLED stands for Organic Light Emitting Diode. Each pixel in an OLED display is made of a material that glows when you jab it with electricity. Kind of like the heating elements in a toaster, but with less heat and better resolution. This effect is called electroluminescence, which is one of those delightful words that is big, but actually makes sense: “electro” for electricity, “lumin” for light and “escence” for, well, basically “essence.”

OLED TV marketing often claims “infinite” contrast ratios, and while that might sound like typical hyperbole, it’s one of the extremely rare instances where such claims are actually true. Since OLED can produce a perfect black, emitting no light whatsoever, its contrast ratio (expressed as the brightest white divided by the darkest black) is technically infinite.

OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.

Hitchcock’s Rear Window Timelapse from Jeff Desom
/ production

-Full Resolution: 2400x550px

-Projection surface approx. 10×2 meters, by aligning 3 projectors

-Matrox TripleHead2Go

-Computer to play quicktime in loop mode

http://www.jeffdesom.com/hitch/