COLOR
- 
Capturing the world in HDR for real-time projects – Call of Duty: Advanced Warfare
Real-World Measurements for Call of Duty: Advanced Warfare
www.activision.com/cdn/research/Real_World_Measurements_for_Call_of_Duty_Advanced_Warfare.pdf
Local version: Real_World_Measurements_for_Call_of_Duty_Advanced_Warfare.pdf
- 
About color: What is a LUT
http://www.lightillusion.com/luts.html
https://www.shutterstock.com/blog/how-use-luts-color-grading
A LUT (Lookup Table) is essentially the modifier between two images, the original image and the displayed image, based on a mathematical formula; in practice, a conversion matrix of varying complexity. There are different types of LUTs: viewing, transform, calibration, 1D and 3D.
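As a rough illustration of the 1D case, the sketch below applies a per-channel lookup with linear interpolation between samples. The normalized [0, 1] float image and the 256-entry example LUT are assumptions for the demo, not any specific LUT file format.

```python
import numpy as np

def apply_lut_1d(image, lut):
    """Apply a per-channel 1D LUT to an RGB image.

    image: float array (..., 3), values in [0, 1]
    lut:   float array (N, 3); row i is the output RGB for input i / (N - 1)
    """
    n = lut.shape[0]
    positions = np.linspace(0.0, 1.0, n)   # input values the LUT samples
    out = np.empty_like(image)
    for c in range(3):                     # interpolate each channel independently
        out[..., c] = np.interp(image[..., c], positions, lut[:, c])
    return out

# Example: a simple per-channel gamma adjustment baked into a 256-entry LUT.
ramp = np.linspace(0.0, 1.0, 256)[:, None]
lut = ramp ** np.array([1.0, 1.1, 1.2])
graded = apply_lut_1d(np.random.rand(4, 4, 3), lut)
```

A 3D LUT works the same way in principle, but samples a lattice over the full RGB cube and interpolates trilinearly, which lets it express cross-channel effects that a 1D LUT cannot.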
- 
Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking
https://bottosson.github.io/misc/colorpicker
https://bottosson.github.io/posts/colorpicker/
https://www.smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/

One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That’s because sRGB really isn’t created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. That means it works with certain frequencies of red, green, and blue, and also the non-linear coding called gamma. It’s a miracle it works as well as it does, but it’s not connected to color perception. When using those tools, you sometimes get surprising results, like purple in the gradient.

There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it’s not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether it is dark or light in the room and the background color it is against. When you look at a photograph, it also depends on what you think the color of the light source is. The dress is a typical example of color vision being very context-dependent. It is almost impossible to model this perfectly.

I based Oklab on two other color spaces, CIECAM16 and IPT. I used the lightness and saturation prediction from CIECAM16, which is a color appearance model, as a target. I actually wanted to use the datasets used to create CIECAM16, but I couldn’t find them. IPT was designed to have better hue uniformity. In experiments, they asked people to match light and dark colors, saturated and unsaturated colors, which resulted in a dataset for which colors, subjectively, have the same hue. IPT has a few other issues but is the basis for hue in Oklab.

In the Munsell color system, colors are described with three parameters designed to match the perceived appearance of colors: Hue, Chroma and Value. The parameters are designed to be independent and each have a uniform scale, which results in a color solid with an irregular shape. Modern color spaces and models, such as CIELAB, CAM16 and Björn Ottosson’s own Oklab, are very similar in their construction.

By far the most used color spaces today for color picking are HSL and HSV, two representations introduced in the classic 1978 paper “Color Spaces for Computer Graphics”. HSL and HSV were designed to roughly correlate with perceptual color properties while being very simple and cheap to compute. Today HSL and HSV are most commonly used together with the sRGB color space. One of the main advantages of HSL and HSV over the various Lab color spaces is that they map the sRGB gamut to a cylinder. This makes them easy to use, since all parameters can be changed independently without the risk of creating colors outside the target gamut. The main drawback, on the other hand, is that their properties don’t match human perception particularly well.

Reconciling these conflicting goals perfectly isn’t possible, but given that HSV and HSL don’t use anything derived from experiments relating to human perception, creating something that makes a better tradeoff does not seem unreasonable. With this new lightness estimate, we are ready to look into the construction of Okhsv and Okhsl.
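To make the pipeline concrete, here is a small Python sketch of the sRGB-to-Oklab conversion using the matrices Ottosson publishes in his Oklab post; treat it as a readable reference under those published constants, not a substitute for a color-management library.

```python
import numpy as np

def srgb_to_oklab(rgb):
    """Convert sRGB values in [0, 1] to Oklab (L, a, b)."""
    rgb = np.asarray(rgb, dtype=float)
    # 1. Undo the sRGB transfer function to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # 2. Linear sRGB -> approximate cone responses (LMS).
    m1 = np.array([[0.4122214708, 0.5363325363, 0.0514459929],
                   [0.2119034982, 0.6806995451, 0.1073969566],
                   [0.0883024619, 0.2817188376, 0.6299787005]])
    lms = lin @ m1.T
    # 3. Cube-root nonlinearity models perceived lightness.
    lms = np.cbrt(lms)
    # 4. Nonlinear LMS -> Oklab: L is lightness, a/b are opponent axes.
    m2 = np.array([[0.2104542553,  0.7936177850, -0.0040720468],
                   [1.9779984951, -2.4285922050,  0.4505937099],
                   [0.0259040371,  0.7827717662, -0.8086757660]])
    return lms @ m2.T

# The blue-to-white gradient that drifts purple in sRGB stays on-hue
# when you interpolate in Oklab instead.
print(srgb_to_oklab([0.0, 0.0, 1.0]))  # pure sRGB blue
```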
- 
Tim Kang – calibrated white light values in sRGB color space

8-bit sRGB encoded:
White point   R   G   B
2000K        255 139  22
2700K        255 172  89
3000K        255 184 109
3200K        255 190 122
4000K        255 211 165
4300K        255 219 178
D50          255 235 205
D55          255 243 224
D5600        255 244 227
D6000        255 249 240
D65          255 255 255
D10000       202 221 255
D20000       166 196 255

8-bit Rec709, gamma 2.4:
White point   R   G   B
2000K        255 145  34
2700K        255 177  97
3000K        255 187 117
3200K        255 193 129
4000K        255 214 170
4300K        255 221 182
D50          255 236 208
D55          255 243 226
D5600        255 245 229
D6000        255 250 241
D65          255 255 255
D10000       204 222 255
D20000       170 199 255

8-bit Display P3 encoded:
White point   R   G   B
2000K        255 154  63
2700K        255 185 109
3000K        255 195 127
3200K        255 201 138
4000K        255 219 176
4300K        255 225 187
D50          255 239 212
D55          255 245 228
D5600        255 246 231
D6000        255 251 242
D65          255 255 255
D10000       208 223 255
D20000       175 199 255

10-bit Rec2020 PQ (100 nits):
White point   R   G   B
2000K        520 435 273
2700K        520 466 358
3000K        520 475 384
3200K        520 480 399
4000K        520 495 446
4300K        520 500 458
D50          520 510 482
D55          520 514 497
D5600        520 514 500
D6000        520 517 509
D65          520 520 520
D10000       479 489 520
D20000       448 464 520
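To use one of these presets as a light color in a linear-light renderer, the 8-bit sRGB triplet first has to be decoded through the sRGB transfer function. A minimal sketch, assuming the standard IEC 61966-2-1 decode and picking the 3200K row from the first table:

```python
import numpy as np

def srgb8_to_linear(rgb8):
    """Decode an 8-bit sRGB triplet to linear-light RGB in [0, 1]."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# A warm 3200K tungsten white from the sRGB table, decoded so it can be
# assigned as a light color in a linear-workflow renderer.
tungsten_3200k = srgb8_to_linear([255, 190, 122])
```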
LIGHTING
-
3D Lighting Tutorial by Amaan Kram
http://www.amaanakram.com/lightingT/part1.htm
The goals of lighting in 3D computer graphics are more or less the same as those of real-world lighting. Lighting serves the basic function of bringing out, or pushing back, the shapes of objects visible from the camera’s view. It gives a two-dimensional image on the monitor an illusion of the third dimension: depth. But it does not just stop there. It gives an image its personality, its character. A scene lit in different ways can give a feeling of happiness, of sorrow, of fear, etc., and it can do so in dramatic or subtle ways. Along with personality and character, lighting fills a scene with emotion that is directly transmitted to the viewer. Trying to simulate a real environment in an artificial one can be a daunting task. But even if you make your 3D rendering look absolutely photo-realistic, it doesn’t guarantee that the image carries enough emotion to elicit a “wow” from the people viewing it. Making 3D renderings photo-realistic can be hard. Putting deep emotions in them can be even harder. However, if you plan out your lighting strategy for the mood and emotion that you want your rendering to express, you make the process easier for yourself. Each light source can be broken down into four distinct components and analyzed accordingly:
- Intensity
- Direction
- Color
- Size
The overall thrust of this writing is to produce photo-realistic images by applying good lighting techniques.
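One way to keep those four components explicit while blocking out a scene is to model each light as a small record; this is an illustrative sketch only, with names of my own choosing, not something from Kram’s tutorial or any particular DCC API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LightSource:
    """The four components to analyze for every light in a scene."""
    intensity: float                    # brightness of the source
    direction: Tuple[float, float, float]  # where it points; drives form and shadow
    color: Tuple[float, float, float]   # RGB tint, e.g. warm key vs. cool fill
    size: float                         # apparent size; larger sources cast softer shadows

# A warm key light raking in from camera left.
key = LightSource(intensity=2.0,
                  direction=(-0.5, -1.0, -0.3),
                  color=(1.0, 0.9, 0.8),
                  size=0.5)
```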
- 
Romain Chauliac – LightIt, a lighting script for Maya and Arnold
LightIt is a script for Maya and Arnold that helps improve your lighting workflow with preset studio lighting components (lights, backdrop…), high-quality studio scenes and an HDRI library manager.
https://www.artstation.com/artwork/393emJ
- 
NVidia DiffusionRenderer – Neural Inverse and Forward Rendering with Video Diffusion Models. How NVIDIA reimagined relighting
https://www.fxguide.com/quicktakes/diffusing-reality-how-nvidia-reimagined-relighting/
https://research.nvidia.com/labs/toronto-ai/DiffusionRenderer/
- 
Simulon – a Hollywood production studio app in the hands of an independent creator with access to consumer hardware, LDRi to HDRi through ML
Divesh Naidoo: The video below was made with a live in-camera preview and auto-exposure matching, no camera solve, no HDRI capture and no manual compositing setup, using the new Simulon phone app. LDR to HDR through ML.
https://simulon.typeform.com/betatest
- 
9 Best Hacks to Make a Cinematic Video with Any Camera
https://www.flexclip.com/learn/cinematic-video.html
- Frame Your Shots to Create Depth
- Create Shallow Depth of Field
- Avoid Shaky Footage and Use Flexible Camera Movements
- Properly Use Slow Motion
- Use Cinematic Lighting Techniques
- Apply Color Grading
- Use Cinematic Music and SFX
- Add Cinematic Fonts and Text Effects
- Create the Cinematic Bar at the Top and the Bottom (see the sketch after this list)
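The last item is easy to automate: letterbox bars are just the gap between the frame’s aspect ratio and a wider target ratio. A minimal sketch, assuming a 16:9 frame and the common 2.39:1 “cinemascope” target (both are assumptions; pick your own ratio):

```python
import numpy as np

def letterbox(frame, target_ratio=2.39):
    """Black out top/bottom rows so the visible area has the target aspect ratio."""
    h, w = frame.shape[:2]
    visible_h = int(round(w / target_ratio))  # rows that remain visible
    bar = (h - visible_h) // 2                # height of each black bar
    out = frame.copy()
    if bar > 0:
        out[:bar] = 0       # top bar
        out[h - bar:] = 0   # bottom bar
    return out

# On a 1920x1080 frame this leaves ~803 visible rows with ~138-pixel bars.
framed = letterbox(np.full((1080, 1920, 3), 255, dtype=np.uint8))
```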
  