BREAKING NEWS
LATEST POSTS
-
Carlos Vilchi – Virtual Production Stage Tech scheme v1.0
Carlos Vilchi has spent some time collecting all the technology related to Stage Tech, including:
- All the tracking technology existing today (inside-out, outside-in)
- All lens-encoding vendors and their compatibility
- Tools, plugins, and hubs
- The different small ecosystems between Vicon, ZEISS Cinematography, ILM Technoprops, OptiTrack, stYpe, Antilatency, Ncam Technologies Ltd, Mo-Sys Engineering Ltd, EZtrack®, ARRI, DCS – Digital Camera Systems, Zero Density, Disguise, Aximmetry Technologies, HTC VIVE, Lightcraft Technology and more!
Local copy in the post
-
Ben McEwan – Deconstructing Despill Algorithms
Despilling is arguably the most important step to get right when pulling a key. A great despill can often hide imperfections in your alpha channel and prevent the tedious paint work of manually fixing edges.
benmcewan.com/blog/2018/05/20/understanding-despill-algorithms/
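As a minimal illustrative sketch (mine, not code from the article): the classic "average" despill limits the green channel to the mean of red and blue, one of the baseline algorithms posts like this deconstruct. Assumes a float RGB image as a NumPy array.

```python
# Classic "average" green despill: clamp green to the mean of red and blue.
# Illustrative sketch only, not code from the article.
import numpy as np

def despill_green_average(img: np.ndarray) -> np.ndarray:
    """img: float RGB array of shape (H, W, 3). Returns a despilled copy."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    limit = (r + b) / 2.0                 # per-pixel despill threshold
    out = img.copy()
    out[..., 1] = np.minimum(g, limit)    # clamp green only where it spills over
    return out
```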
-
Genex – Generative World Explorer
https://generative-world-explorer.github.io
Planning with partial observation is a central challenge in embodied AI. A majority of prior works have tackled this challenge by developing agents that physically explore their environment to update their beliefs about the world state. However, humans can imagine unseen parts of the world through mental exploration and revise their beliefs with imagined observations. Such updated beliefs can allow them to make more informed decisions at the current step, without having to physically explore the world first. To achieve this human-like ability, we introduce the Generative World Explorer (Genex), a video generation model that allows an agent to mentally explore a large-scale 3D world (e.g., urban scenes) and acquire imagined observations to update its belief about the world.
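A hypothetical toy sketch of the imagined-exploration loop the abstract describes; none of the names below are Genex's actual API, only the control flow the paper implies.

```python
# Toy stand-ins for the loop described above; nothing here is Genex's real API.
def update_belief(belief: dict, observation: str) -> dict:
    """Record each (real or imagined) observation in the belief state."""
    belief = dict(belief)
    belief[observation] = True
    return belief

def imagine_view(belief: dict, direction: str) -> str:
    """Stand-in for the video model that 'imagines' an unseen view."""
    return f"imagined view toward {direction}"

belief = update_belief({}, "partial view from current position")
for direction in ["left", "right", "behind"]:
    # Mental exploration: revise the belief with imagined observations,
    # without physically moving through the environment.
    belief = update_belief(belief, imagine_view(belief, direction))

# The agent can now plan at the current step using the updated belief.
print(len(belief), "observations informing the plan")
```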
-
KeenTools 2024.3 – FaceTracker for Blender Stable
FaceTracker for Blender is:
– Markerless facial mocap: capture facial performance and head motion with a matching geometry
– Custom face mesh generation: create digital doubles using snapshots of video frames (available with FaceBundle)
– 3D texture mapping: beauty work, (de)ageing, relighting
– 3D compositing: add digital make-up, dynamic VFX, hair and more
– (NEW) Animation retargeting: convert facial animation to ARKit blendshapes or a Rigify rig in one click
https://keentools.io/products/facetracker-for-blender
FEATURED POSTS
-
What Is the Resolution and View Coverage of the Human Eye? And at What Distance Is a TV Best Viewed?
https://www.discovery.com/science/mexapixels-in-human-eye
About 576 megapixels for the entire field of view.
Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene. Assuming the eye resolves detail down to roughly 0.3 arc-minutes per pixel, the number of pixels would be:
(90 degrees * 60 arc-minutes/degree * 1/0.3) * (90 * 60 * 1/0.3) = 324,000,000 pixels (324 megapixels).
At any one moment you do not actually perceive that many pixels, but your eye moves around the scene to take in all the detail you want. The human eye really sees a larger field of view, close to 180 degrees. Being conservative and using 120 degrees for the field of view:
120 * 120 * 60 * 60 / (0.3 * 0.3) = 576,000,000 pixels (576 megapixels).
Or: roughly 7 megapixels for the 2-degree focus arc, plus about 1 megapixel for the rest.
https://clarkvision.com/articles/eye-resolution.html
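A quick arithmetic check of the two figures above, under the same 0.3 arc-minutes-per-pixel assumption:

```python
# Reproduce the two megapixel figures, assuming 0.3 arc-minutes per pixel.
px_per_degree = 60 / 0.3                      # 200 pixels per degree

window_90 = (90 * px_per_degree) ** 2         # 90° x 90° window
fov_120 = (120 * px_per_degree) ** 2          # conservative 120° field of view

print(f"{window_90 / 1e6:.0f} megapixels")    # -> 324 megapixels
print(f"{fov_120 / 1e6:.0f} megapixels")      # -> 576 megapixels
```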
Details in the post
-
Scene Referred vs Display Referred color workflows
Display Referred is tied to the target hardware; as such, it bakes color requirements into every type of media output request.
Scene Referred instead uses a common, unified wide gamut, targeting each audience through CDL and DI libraries.
That way the color information stays untouched and is only “transformed” as/when needed (see the sketch after the sources below).
Sources:
– Victor Perez – Color Management Fundamentals & ACES Workflows in Nuke
– https://z-fx.nl/ColorspACES.pdf
– Wicus
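As a toy illustration of the difference (my sketch, not from the sources above): in a scene-referred workflow the pixel data stays linear and untouched, and a display transform, here the standard sRGB encoding, is applied only at output time rather than baked into the media.

```python
# Scene-referred data stays linear and untouched; the display transform
# (standard sRGB encoding here) is applied only when rendering to a display.
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Standard sRGB transfer function for display output."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1 / 2.4) - 0.055)

scene_linear = np.array([0.0, 0.18, 0.5, 1.0])   # scene-referred, never baked
for_display = linear_to_srgb(scene_linear)       # transformed only as/when needed
print(for_display.round(3))
```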