COMPOSITION
DESIGN
COLOR
- 
Scientists claim to have discovered ‘new colour’ no one has seen before: Olo
Read more: https://www.bbc.com/news/articles/clyq0n3em41o
By stimulating specific cells in the retina, participants claim to have witnessed a blue-green colour that scientists have called “olo”, though some experts say the existence of a new colour is “open to argument”. The findings, published in the journal Science Advances on Friday, have been described by the study’s co-author, Prof Ren Ng from the University of California, as “remarkable”.
From the paper’s system figure: (A) System inputs. (i) Retina map of 10³ cone cells preclassified by spectral type (7). (ii) Target visual percept (here, a video of a child, see movie S1 at 1:04). (iii) Infrared cellular-scale imaging of the retina with 60-frames-per-second rolling shutter; fixational eye movement is visible over the three frames shown. (B) System outputs. (iv) Real-time per-cone target activation levels to reproduce the target percept, computed by extracting eye motion from the input video relative to the retina map, identifying the spectral type of every cone in the field of view, and computing the per-cone activation the target percept would have produced. (v) Intensities of visible-wavelength 488-nm laser microdoses at each cone required to achieve its target activation level. (C) Infrared imaging and visible-wavelength stimulation are physically accomplished in a raster scan across the retinal region using AOSLO. By modulating the visible-wavelength beam’s intensity, the laser microdoses shown in (v) are delivered. Drawing adapted with permission [Harmening and Sincich (54)]. (D) Examples of target percepts with corresponding cone activations and laser microdoses, ranging from colored squares to complex imagery. Teal-striped regions represent the color “olo” of stimulating only M cones.
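The per-cone computation in (iv) and (v) is simple to picture. Below is a minimal, hypothetical numpy sketch of the idea: sample the target percept at each cone, weight by the cone type's sensitivity, and solve for the 488-nm microdose intensity. All names and sensitivity numbers here are illustrative assumptions, not the paper's actual code or data.

```python
import numpy as np

# Hypothetical relative sensitivities of L/M/S cones to the 488 nm
# stimulation laser. Illustrative placeholders, not measured values.
SENSITIVITY_488 = {"L": 0.35, "M": 0.60, "S": 0.10}

def cone_activations(target_frame, cone_xy, cone_types, type_weights):
    """Per-cone activation the target percept would have produced.

    target_frame : (H, W, 3) linear-light RGB frame of the percept
    cone_xy      : (N, 2) pixel coordinates of each cone, already
                   corrected for fixational eye motion
    cone_types   : length-N list of "L", "M" or "S" labels
    type_weights : dict mapping cone type -> (3,) RGB weight vector
    """
    samples = target_frame[cone_xy[:, 1], cone_xy[:, 0]]  # (N, 3)
    weights = np.stack([type_weights[t] for t in cone_types])
    return np.sum(samples * weights, axis=1)  # (N,) activations

def laser_microdoses(activations, cone_types):
    """488 nm laser intensity needed to reach each target activation."""
    sens = np.array([SENSITIVITY_488[t] for t in cone_types])
    return activations / sens
```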
- 
Rec-2020 – TV’s new color gamut standard used by Dolby Vision?
Read more: https://www.hdrsoft.com/resources/dri.html#bit-depth
The dynamic range is the ratio between the maximum and minimum values of a physical measurement. Its exact definition depends on what the dynamic range refers to:
- For a scene: the ratio between the brightest and darkest parts of the scene.
- For a camera: the ratio of saturation to noise; more specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above camera noise.
- For a display: the ratio between the maximum and minimum intensities emitted from the screen.
The dynamic range of real-world scenes can be quite high: ratios of 100,000:1 are common in the natural world. An HDR (High Dynamic Range) image stores pixel values that span the whole tonal range of real-world scenes, so it is encoded in a format that allows the largest range of values, e.g. floating-point values stored with 32 bits per color channel. Another characteristic of an HDR image is that it stores linear values: the value of a pixel in an HDR image is proportional to the amount of light measured by the camera. For TVs, HDR is great, but it’s not the only new TV feature worth discussing.
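As a quick worked example of those ratios, here is a small hedged Python sketch that measures the dynamic range of a linear, floating-point HDR image and reports it both as a ratio and in photographic stops. The percentile clipping is an assumption of mine to keep isolated noisy pixels from dominating the minimum.

```python
import numpy as np

def dynamic_range(image, low_pct=0.1, high_pct=99.9):
    """Dynamic range of a linear-light, floating-point HDR image.

    Returns (ratio, stops), where stops = log2(ratio).
    The 0.1/99.9 percentile cut-offs guard against isolated noisy
    or dead pixels; they are an illustrative choice, not a standard.
    """
    luminance = image.mean(axis=-1) if image.ndim == 3 else image
    positive = luminance[luminance > 0]          # ignore true black
    lo = np.percentile(positive, low_pct)
    hi = np.percentile(positive, high_pct)
    ratio = hi / lo
    return ratio, np.log2(ratio)

# Example: a synthetic scene spanning roughly 100,000:1.
scene = np.random.uniform(1e-3, 1e2, size=(512, 512, 3)).astype(np.float32)
ratio, stops = dynamic_range(scene)
print(f"{ratio:,.0f}:1  (~{stops:.1f} stops)")
```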
- 
Is a MacBeth Colour Rendition Chart the Safest Way to Calibrate a Camera?
Read more: www.colour-science.org/posts/the-colorchecker-considered-mostly-harmless/
“Unless you have all the relevant spectral measurements, a colour rendition chart should not be used to perform colour-correction of camera imagery but only for white balancing and relative exposure adjustments.”
“Using a colour rendition chart for colour-correction might dramatically increase error if the scene light source spectrum is different from the illuminant used to compute the colour rendition chart’s reference values.”
“Other factors make using a colour rendition chart unsuitable for camera calibration:
- Uncontrolled geometry of the colour rendition chart with the incident illumination and the camera.
- Unknown sample reflectances and ageing, as the colour of the samples varies with time.
- Low sample count.
- Camera noise and flare.
- Etc.”
“Those issues are well understood in the VFX industry, and when receiving plates, we almost exclusively use colour rendition charts to white balance and perform relative exposure adjustments, i.e. plate neutralisation.”
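To make the recommended use concrete, here is a hedged Python sketch of that plate-neutralisation step: white balancing from the chart's neutral patch, then a relative exposure adjustment. The patch values and the 0.18 mid-grey target are illustrative assumptions, not prescribed by the article.

```python
import numpy as np

def neutralise_plate(plate, grey_patch_mean, target_grey=0.18):
    """White balance + relative exposure from a chart's neutral patch.

    plate           : (H, W, 3) linear-light RGB plate
    grey_patch_mean : (3,) mean RGB sampled from a neutral chart patch
    target_grey     : desired linear value for that patch (0.18 is the
                      conventional mid-grey; an illustrative choice)
    """
    # Per-channel gains equalise the patch to its green channel,
    # removing the colour cast without changing overall exposure.
    wb_gains = grey_patch_mean[1] / grey_patch_mean
    balanced = plate * wb_gains

    # A single scalar gain then maps the balanced patch to mid-grey.
    exposure = target_grey / grey_patch_mean[1]
    return balanced * exposure

# Example: the sampled patch reads slightly warm.
patch = np.array([0.21, 0.18, 0.15])
plate = np.random.uniform(0, 1, (4, 4, 3)).astype(np.float32)
neutral = neutralise_plate(plate, patch)
```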
- 
Thomas Mansencal – Colour Science for Python
Read more: https://thomasmansencal.substack.com/p/colour-science-for-python
https://www.colour-science.org/
Colour is an open-source Python package providing a comprehensive number of algorithms and datasets for colour science. It is freely available under the BSD-3-Clause terms.
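As a quick taste of the package, here is a minimal sketch using a couple of colour's basic conversions. Exact names can shift between releases, so treat this as indicative of a recent version rather than a guaranteed API.

```python
import numpy as np
import colour

# Round-trip a pixel between sRGB and CIE XYZ.
rgb = np.array([0.45620519, 0.03081071, 0.04091952])
XYZ = colour.sRGB_to_XYZ(rgb)
back = colour.XYZ_to_sRGB(XYZ)

# Chromaticity coordinates of that XYZ value.
xy = colour.XYZ_to_xy(XYZ)
print(XYZ, xy)
```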
- 
SecretWeapons MixBox – a practical library for paint-like digital color mixing
Internally, Mixbox treats colors as real-life pigments, using the Kubelka-Munk theory to predict realistic color behavior.
https://scrtwpns.com/mixbox/painter/
https://scrtwpns.com/mixbox.pdf
https://github.com/scrtwpns/mixbox
https://scrtwpns.com/mixbox/docs/
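The published Python bindings expose the mixing as a single interpolation call; the snippet below follows the usage shown in the project's README (pip package pymixbox), hedged here in case the API has since moved.

```python
import mixbox  # pip install pymixbox

# Pigment-style mix of blue and yellow yields green,
# where a plain RGB lerp would give a desaturated grey.
blue = (0, 33, 133)
yellow = (252, 211, 0)

mixed = mixbox.lerp(blue, yellow, 0.5)
print(mixed)  # paint-like green, courtesy of Kubelka-Munk mixing
```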
LIGHTING
- 
What is the Light Field?
Read more: http://lightfield-forum.com/what-is-the-lightfield/
The light field consists of the total of all light rays in 3D space, flowing through every point and in every direction.
How to record a light field (see the refocusing sketch after this list):
- a single, robotically controlled camera
- a rotating arc of cameras
- an array of cameras or camera modules
- a single camera or camera lens fitted with a microlens array
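All of those capture schemes produce the same thing: a 4D set of rays, commonly parameterised as L(u, v, s, t), with (u, v) the viewpoint on the camera grid and (s, t) the image coordinates. A classic use of that data is synthetic refocusing by shift-and-add. Below is a hedged numpy sketch of the idea; the array layout and the integer shift model are illustrative assumptions.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field : (U, V, S, T) array, one greyscale image per
                  (u, v) viewpoint on a regular camera grid
    alpha       : focal-plane parameter; 0 keeps the original focus,
                  other values shear the rays before integrating
    """
    U, V, S, T = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # central viewpoint, then average over the aperture.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Example: refocus a synthetic 5x5-view light field.
lf = np.random.rand(5, 5, 64, 64)
image = refocus(lf, alpha=1.5)
```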
- 
Open Source Nvidia Omniverse
Read more:
blogs.nvidia.com/blog/2019/03/18/omniverse-collaboration-platform/
developer.nvidia.com/nvidia-omniverse
An open, interactive 3D design collaboration platform for multi-tool workflows, built to simplify studio workflows for real-time graphics. It supports Pixar’s Universal Scene Description (USD) technology for exchanging information about modeling, shading, animation, lighting, visual effects and rendering across multiple applications. It also supports NVIDIA’s Material Definition Language (MDL), which allows artists to exchange information about surface materials across multiple tools. With Omniverse, artists can see live updates made by other artists working in different applications, with changes reflected in multiple tools at the same time. For example, an artist using Maya with a portal to Omniverse can collaborate with another artist using UE4, and both will see live updates of each other’s changes in their own application.
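Since USD is the interchange backbone here, a tiny sketch with Pixar's usd-core Python bindings shows the kind of scene description being passed between tools. The stage path and prim names are illustrative choices, not anything Omniverse requires.

```python
from pxr import Usd, UsdGeom  # pip install usd-core

# Author a minimal USD stage of the kind Omniverse exchanges.
stage = Usd.Stage.CreateNew("hello_omniverse.usda")
root = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

# Any USD-aware tool (Maya, UE4, Omniverse apps) can open this file.
stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()
```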