COMPOSITION
- 
SlowMoVideo – How to make a slow motion shot with the open source program
Read more: http://slowmovideo.granjow.net/
slowmoVideo is an open-source program that creates slow-motion videos from your footage.

Slow-motion cinematography is the result of playing back frames for a longer duration than they were exposed. For example, if you expose 240 frames of film in one second, then play them back at 24 fps, the resulting movie is 10 times longer (slower) than the original filmed event.

Film cameras are relatively simple mechanical devices that allow you to crank up the speed to whatever rate the shutter and pull-down mechanism allow. Some film cameras can operate at 2,500 fps or higher (although film shot in these cameras often needs some readjustment in postproduction). Video, on the other hand, is captured, recorded, and played back at a fixed rate, which on consumer gear typically tops out around 60 fps. This makes extreme slow-motion effects harder to achieve (and less elegant) on video: slowing the footage down holds each frame on screen for a long time, whereas high-frame-rate film provides plenty of frames to fill the longer duration. On video, the slow-motion effect looks more like a slide show than smooth, continuous motion.

One obvious solution is to shoot film at high speed, then transfer it to video (a case where film still has a clear advantage, sorry George). Another possibility is to cross dissolve or blur from one frame to the next, which adds a smooth transition between stills. The blur reduces the sharpness of the image, and compared to slowing down footage shot at a high frame rate it is somewhat of a cheat. However, there isn't much else you can do until video can be recorded at much higher rates. Of course, many film cameras can't shoot at high frame rates either, so the whole super-slow-motion endeavor is specialized no matter which medium you use. (Some high-speed digital cameras now capture frames directly to a computer, so technology is starting to catch up with film, but this feature isn't going to appear in consumer camcorders any time soon.) A minimal retiming sketch appears below.
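A minimal sketch of the frame-blend ("cross dissolve") cheat described above, assuming frames are NumPy arrays of equal shape. The function name and the linear-blend strategy are illustrative only, not slowmoVideo's actual algorithm (which uses optical-flow interpolation):

```python
import numpy as np

def retime(frames, slowdown):
    """Stretch a clip by `slowdown` (e.g. 4.0) by blending neighbouring frames."""
    n_out = int(len(frames) * slowdown)
    out = []
    for i in range(n_out):
        t = i / slowdown                 # position in the source clip
        a = int(t)                       # nearest frame before t
        b = min(a + 1, len(frames) - 1)  # nearest frame after t
        w = t - a                        # blend weight between the two
        out.append((1.0 - w) * frames[a] + w * frames[b])
    return out

# Usage: a 24-frame clip retimed 4x slower yields 96 output frames, each a
# weighted mix of its two nearest source frames.
clip = [np.random.rand(270, 480, 3) for _ in range(24)]
slow = retime(clip, 4.0)
```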
- 
HuggingFace ai-comic-factory – a FREE AI Comic Book Creator
Read more: https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory
This is the epic story of a group of talented digital artists trying to overcome daily technical challenges to achieve incredibly photorealistic projects of monsters and aliens.
DESIGN
COLOR
- 
Scene Referred vs Display Referred color workflows
A display-referred workflow is tied to the target hardware, so it bakes display requirements into every type of media output. A scene-referred workflow instead keeps color data in a common, unified wide gamut and targets each audience through CDL and DI libraries, so the color information stays untouched and is only "transformed" as and when needed (a minimal sketch of the idea follows the sources below). Sources:
 – Victor Perez – Color Management Fundamentals & ACES Workflows in Nuke
 – https://z-fx.nl/ColorspACES.pdf
 – Wicus
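A minimal sketch of the principle, assuming plain NumPy and the standard sRGB encoding function. This only illustrates the concept; a production pipeline would drive the transforms through a color management system such as OpenColorIO:

```python
import numpy as np

def srgb_oetf(linear):
    """Standard sRGB encoding, applied only at display time."""
    a = 0.055
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    (1 + a) * np.clip(linear, 0, None) ** (1 / 2.4) - a)

# Scene-referred: working data stays linear and unclamped, like light itself.
scene_linear = np.array([0.18, 1.0, 4.5])  # grey card, white, bright highlight

# Grading ops (exposure, CDL, ...) operate on the scene-referred data...
graded = scene_linear * 2.0 ** (-1.0)      # e.g. one stop down

# ...and only the final deliverable bakes in a display transform.
display_srgb = srgb_oetf(np.clip(graded, 0.0, 1.0))
```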
 
- 
Mysterious animation wins best illusion of 2011 – Motion silencing illusion
Read more: https://en.wikipedia.org/wiki/Motion_silencing_illusion
The 2011 Best Illusion of the Year uses motion to render color changes invisible, and so reveals a quirk in our visual systems that is new to scientists. "It is a really beautiful effect, revealing something about how our visual system works that we didn't know before," said Daniel Simons, a professor at the University of Illinois, Urbana-Champaign. Simons studies visual cognition and did not work on this illusion. Before its creation, scientists didn't know that motion had this effect on perception, Simons said. A viewer stares at a speck at the center of a ring of colored dots, which continuously change color. When the ring begins to rotate around the speck, the color changes appear to stop. But this is an illusion. For some reason, the motion causes our visual system to ignore the color changes. (You can, however, see the color changes if you follow the rotating circles with your eyes.)
- 
Willem Zwarthoed – ACES gamut in VFX production (pdf)
Read more: https://www.provideocoalition.com/color-management-part-12-introducing-aces/
Local copy: https://www.slideshare.net/hpduiker/acescg-a-common-color-encoding-for-visual-effects-applications
- 
Björn Ottosson – OKHSV and OKHSL – Two new color spaces for color picking
Read more: https://bottosson.github.io/posts/colorpicker/
https://bottosson.github.io/misc/colorpicker
https://www.smashingmagazine.com/2024/10/interview-bjorn-ottosson-creator-oklab-color-space/

One problem with sRGB is that in a gradient between blue and white, it becomes a bit purple in the middle of the transition. That's because sRGB really isn't created to mimic how the eye sees colors; rather, it is based on how CRT monitors work. It works with certain frequencies of red, green, and blue, plus the non-linear coding called gamma. It's a miracle it works as well as it does, but it's not connected to color perception, so you sometimes get surprising results, like the purple in that gradient.

There were also attempts to create simple models matching human perception based on XYZ, but as it turned out, it's not possible to model all color vision that way. Perception of color is incredibly complex and depends, among other things, on whether the room is dark or light and on the background a color sits against. When you look at a photograph, it also depends on what you think the color of the light source is. "The dress" is a typical example of color vision being very context-dependent; it is almost impossible to model this perfectly.

Ottosson based Oklab on two other color spaces, CIECAM16 and IPT. He used the lightness and saturation prediction from CIECAM16, a color appearance model, as a target (he actually wanted to use the datasets used to create CIECAM16, but couldn't find them). IPT was designed to have better hue uniformity: in experiments, people matched light and dark colors, saturated and unsaturated colors, which resulted in a dataset of colors that subjectively share the same hue. IPT has a few other issues but is the basis for hue in Oklab.

In the Munsell color system, colors are described with three parameters designed to match the perceived appearance of colors: Hue, Chroma, and Value. The parameters are designed to be independent, each with a uniform scale, which results in a color solid with an irregular shape. Modern color spaces and models, such as CIELAB, CAM16, and Ottosson's own Oklab, are very similar in their construction.

By far the most used color spaces today for color picking are HSL and HSV, two representations introduced in the classic 1978 paper "Color Spaces for Computer Graphics". HSL and HSV were designed to roughly correlate with perceptual color properties while being very simple and cheap to compute; today they are most commonly used together with the sRGB color space. One of their main advantages over the various Lab color spaces is that they map the sRGB gamut to a cylinder, which makes them easy to use: all parameters can be changed independently, without the risk of creating colors outside the target gamut. The main drawback, on the other hand, is that their properties don't match human perception particularly well.

Reconciling these conflicting goals perfectly isn't possible, but given that HSV and HSL don't use anything derived from experiments relating to human perception, creating something that makes a better tradeoff does not seem unreasonable. With this new lightness estimate, Okhsv and Okhsl can be constructed; a sketch of the underlying Oklab conversion follows.
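For reference, a direct Python transcription of the linear-sRGB to Oklab conversion from Ottosson's public reference implementation on bottosson.github.io (constants copied from that post; any transcription error is mine):

```python
import math

def linear_srgb_to_oklab(r, g, b):
    """Convert linear sRGB (0..1, not gamma-encoded) to Oklab (L, a, b)."""
    # Linear sRGB -> approximate cone responses (LMS)
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Non-linearity: a cube root, standing in for the eye's compressive response
    l_, m_, s_ = (math.copysign(abs(v) ** (1 / 3), v) for v in (l, m, s))
    # LMS' -> Oklab
    return (
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    )

# Pure white maps to (1, 0, 0): full lightness, zero chroma.
print(linear_srgb_to_oklab(1.0, 1.0, 1.0))
```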
- 
Tim Kang – calibrated white light values in sRGB color space
Values per encoding (a derivation sketch follows the tables):

8-bit sRGB encoded (R G B):
2000K   255 139 22
2700K   255 172 89
3000K   255 184 109
3200K   255 190 122
4000K   255 211 165
4300K   255 219 178
D50     255 235 205
D55     255 243 224
D5600   255 244 227
D6000   255 249 240
D65     255 255 255
D10000  202 221 255
D20000  166 196 255

8-bit Rec709 Gamma 2.4 (R G B):
2000K   255 145 34
2700K   255 177 97
3000K   255 187 117
3200K   255 193 129
4000K   255 214 170
4300K   255 221 182
D50     255 236 208
D55     255 243 226
D5600   255 245 229
D6000   255 250 241
D65     255 255 255
D10000  204 222 255
D20000  170 199 255

8-bit Display P3 encoded (R G B):
2000K   255 154 63
2700K   255 185 109
3000K   255 195 127
3200K   255 201 138
4000K   255 219 176
4300K   255 225 187
D50     255 239 212
D55     255 245 228
D5600   255 246 231
D6000   255 251 242
D65     255 255 255
D10000  208 223 255
D20000  175 199 255

10-bit Rec2020 PQ, 100 nits (R G B):
2000K   520 435 273
2700K   520 466 358
3000K   520 475 384
3200K   520 480 399
4000K   520 495 446
4300K   520 500 458
D50     520 510 482
D55     520 514 497
D5600   520 514 500
D6000   520 517 509
D65     520 520 520
D10000  479 489 520
D20000  448 464 520
- 
Capturing textures albedo
Building a Portable PBR Texture Scanner, by Stephane Lb
http://rtgfx.com/pbr-texture-scanner/
How To Split Specular And Diffuse In Real Images, by John Hable
http://filmicworlds.com/blog/how-to-split-specular-and-diffuse-in-real-images/
Capturing albedo using a Spectralon
https://www.activision.com/cdn/research/Real_World_Measurements_for_Call_of_Duty_Advanced_Warfare.pdf

Spectralon is a teflon-based pressed powder that comes closest to being a pure Lambertian diffuse material that reflects 100% of all light. If we take an HDR photograph of the Spectralon alongside the material to be measured, we can derive the diffuse albedo of that material. The process to capture diffuse reflectance is very similar to the one outlined by Hable:
1. Put a linear polarizing filter in front of the camera lens and a second linear polarizing filter in front of a modeling light or a flash, with the two filters oriented perpendicular to each other, i.e. cross polarized.
2. Place the Spectralon close to and parallel with the material being captured and take bracketed shots of the setup. Typically, nine photographs, from -4EV to +4EV in 1EV increments.
3. Convert the bracketed shots to a linear HDR image. Many HDR packages do not produce an HDR image in which the pixel values are linear; PTGui is an example of a package which does. At this point, because of the cross polarization, the image is one of surface diffuse response.
4. Open the file in Photoshop and normalize the image by color picking the Spectralon, filling a new layer with that color and setting that layer to "Divide". This sets the Spectralon to 1 in the image; all other color values are relative to it, so we can read them as diffuse albedo. (The sketch below does the same normalization numerically.)
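Step 4's divide can also be reproduced outside Photoshop. A minimal sketch, assuming a linear HDR image loaded as a float array; the file name, the patch coordinates, and the choice of imageio as reader are placeholders (any reader that returns linear float EXR data works):

```python
import numpy as np
import imageio.v3 as iio

# Load the linear HDR capture (placeholder path).
img = iio.imread("capture.exr").astype(np.float64)

# Mean color of a patch known to contain only the Spectralon (placeholder coords).
spectralon = img[100:200, 100:200].mean(axis=(0, 1))

# Dividing by the Spectralon sets it to 1.0; every other pixel then reads as
# diffuse albedo relative to a ~100% Lambertian reflector.
albedo = img / spectralon
```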
LIGHTING
- 
Romain Chauliac – LightIt, a lighting script for Maya and Arnold
Read more: https://www.artstation.com/artwork/393emJ
LightIt is a script for Maya and Arnold that helps improve your lighting workflow through preset studio lighting components (lights, backdrop...), high-quality studio scenes, and an HDRI library manager.