LATEST POSTS
-
Deep Compositing in Nuke – a walkthrough
Depth Map: A depth map is a representation of the distance or depth information for each pixel in a scene. It is typically a two-dimensional array where each pixel contains a value that represents the distance from the camera to the corresponding point in the scene. The depth values are usually represented in metric units, such as meters. A depth map provides a continuous representation of the scene’s depth information.
For example, in Arnold this is achieved through a Z AOV, which collects the depth of the shading points as seen from the camera.
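As a rough illustration of what a Z AOV contains, the sketch below reads a depth channel out of a rendered EXR with the OpenEXR Python bindings and normalizes it for a quick preview. The channel name "Z", the file name, and the flat (non-deep) EXR layout are assumptions that depend on how the render and AOVs were set up.

```python
# Minimal sketch: read a Z (depth) AOV from a flat EXR and normalize it for viewing.
# Assumes the depth channel is named "Z" and stored as 32-bit float (render-dependent).
import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("render_with_z.exr")   # hypothetical file name
dw = exr.header()["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
z_bytes = exr.channel("Z", pixel_type)         # raw camera-to-point distances
depth = np.frombuffer(z_bytes, dtype=np.float32).reshape(height, width)

# Normalize to 0..1 for a quick preview; the real depth stays in scene units (e.g. meters).
finite = depth[np.isfinite(depth)]
preview = np.clip((depth - finite.min()) / (finite.max() - finite.min() + 1e-8), 0.0, 1.0)
print(depth.shape, float(finite.min()), float(finite.max()))
```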
https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_user_guide_ac_output_aovs_ac_aovs_html
https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_3ds_max_ax_aov_tutorials_ax_zdepth_aov_html
-
VFX Giant MPC and Parent Company Technicolor Shut Down Amid “Severe Financial Challenges”
https://variety.com/2025/film/global/technicolor-vfx-mpc-shutter-severe-challenges-1236316354
Shaun Severi, Head of Creative Production at the Mill, claimed in a LinkedIn post that 4,500 people had lost their jobs in 24 hours: “The problem wasn’t talent or execution; it was mismanagement at the highest levels… the incompetence at the top was nothing short of disastrous.”
According to Severi, successive company presidents “buried the company under massive debt by acquiring VFX studios… the second president, after a disastrous merger of the post houses, took us public, artificially inflating the company’s value, only for it to come crashing down when the real numbers were revealed… and the third and final president, who came from a car rental company, had no vision of what she was building, selling or managing.”
-
Moondream Gaze Detection – Open source code
This is useful for captioning videos, understanding social dynamics, and for specific cases such as sports analytics or detecting when drivers and operators are distracted.
https://huggingface.co/spaces/moondream/gaze-demo
https://moondream.ai/blog/announcing-gaze-detection
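The open Moondream weights can be run through transformers with remote code. The sketch below is only a rough guess at what the gaze call looks like: the model id and the detect_gaze() method name and signature are assumptions based on the announcement, so verify them against the current model card or the demo space above before relying on this.

```python
# Hedged sketch, not the official example: the model id and the detect_gaze()
# method/signature below are assumptions; check the model card for the real API.
from transformers import AutoModelForCausalLM
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2",   # assumed model id for the open Moondream weights
    trust_remote_code=True,  # Moondream ships its vision-language heads as remote code
)

image = Image.open("operator_frame.jpg")   # hypothetical input frame

# Ask where a face centred at normalized coordinates (0.5, 0.4) is looking.
# Both the method name and the eye/face-center argument are assumptions.
gaze = model.detect_gaze(image, eye=(0.5, 0.4))
print(gaze)   # expected: a gaze target point or direction in image coordinates
```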
-
X-Dyna – Expressive Dynamic Human Image Animation
https://x-dyna.github.io/xdyna.github.io
A zero-shot, diffusion-based pipeline that animates a single human image using facial expressions and body movements derived from a driving video, generating realistic, context-aware dynamics for both the subject and the surrounding environment.
-
Flex.1 Alpha – a pre-trained 8-billion-parameter rectified flow transformer base model
https://huggingface.co/ostris/Flex.1-alpha
Flex.1 started as the FLUX.1-schnell-training-adapter to make training LoRAs on FLUX.1-schnell possible.
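A minimal sketch of loading it for inference, assuming the Hugging Face repo is laid out as a standard diffusers FluxPipeline checkpoint; the guidance scale and step count below are placeholders, so check the model card for the recommended settings.

```python
# Minimal sketch, assuming ostris/Flex.1-alpha loads as a standard diffusers FluxPipeline
# checkpoint; the guidance and step values are assumptions, not taken from the model card.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("ostris/Flex.1-alpha", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()   # keeps VRAM usage manageable on consumer GPUs

image = pipe(
    prompt="a macro photo of a dew-covered spider web at sunrise",
    guidance_scale=3.5,            # assumed setting
    num_inference_steps=28,        # assumed setting
    height=1024,
    width=1024,
).images[0]
image.save("flex1_alpha_test.png")
```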
-
Generative Detail Enhancement for Physically Based Materials
https://arxiv.org/html/2502.13994v1
https://arxiv.org/pdf/2502.13994
A tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering.
-
Camera Metadata Toolkit (camdkit) for Virtual Production
https://github.com/SMPTE/ris-osvp-metadata-camdkit
Today camdkit supports mapping (or importing, if you will) of metadata from five popular digital cinema cameras into a canonical form; it also supports a mapping of the metadata defined in the F4 protocol used by tracking system components from Mo-Sys.
-
OpenTrackIO – a free and open-source protocol designed to improve interoperability in Virtual Production
OpenTrackIO defines the schema of JSON samples that contain a wide range of metadata about the device, its transform(s), and the associated camera and lens. The full schema is published on the project site.
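To make the idea concrete, here is a minimal Python sketch that parses a JSON sample of the general shape described above. The field names are illustrative placeholders only, not the actual OpenTrackIO schema, which should be taken from the published spec.

```python
# Illustrative only: parse a tracking sample of the general shape OpenTrackIO describes
# (device info, transform, camera and lens metadata). The field names are placeholders,
# NOT the official OpenTrackIO schema - consult the published schema for the real keys.
import json

sample_text = """
{
  "device":    {"make": "ExampleTracker", "firmware": "1.2.3"},
  "transform": {"translation": [0.0, 1.7, -3.2], "rotation": [0.0, 12.5, 0.0]},
  "camera":    {"label": "A-cam", "fps": 24.0},
  "lens":      {"focal_length_mm": 35.0, "focus_distance_m": 2.4}
}
"""

sample = json.loads(sample_text)
tx, ty, tz = sample["transform"]["translation"]
print(f'{sample["camera"]["label"]} at ({tx}, {ty}, {tz}) m, '
      f'{sample["lens"]["focal_length_mm"]} mm lens')
```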
-
Martin Gent – Comparing current video AI models
https://www.linkedin.com/posts/martingent_imagineapp-veo2-kling-activity-7298979787962806272-n0Sn
🔹 Veo 2 – After the legendary prompt adherence of Veo 2 T2V, I have to say I2V is a little disappointing, especially when it comes to camera moves. You often get those Sora-like jump-cuts too, which can be annoying.
🔹 Kling 1.6 Pro – Still the one to beat for I2V, both for image quality and prompt adherence. It’s also a lot cheaper than Veo 2. Generations can be slow, but are usually worth the wait.
🔹 Runway Gen 3 – Useful for certain shots, but overdue an update. The worst performer here by some margin. Bring on Gen 4!
🔹 Luma Ray 2 – I love the energy and inventiveness Ray 2 brings, but those came with some image quality issues. I want to test more with this model though for sure.
FEATURED POSTS
-
HuggingFace ai-comic-factory – a FREE AI Comic Book Creator
https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory
This is the epic story of a group of talented digital artists trying to overcome daily technical challenges to achieve incredibly photorealistic projects of monsters and aliens.
-
Fal Video Studio – The first open-source AI toolkit for video editing
https://github.com/fal-ai-community/video-starter-kit
https://fal-video-studio.vercel.app
- 🎬 Browser-Native Video Processing: Seamless video handling and composition in the browser
- 🤖 AI Model Integration: Direct access to state-of-the-art video models through fal.ai (see the sketch after this list)
  - Minimax for video generation
  - Hunyuan for visual synthesis
  - LTX for video manipulation
- 🎵 Advanced Media Capabilities:
  - Multi-clip video composition
  - Audio track integration
  - Voiceover support
  - Extended video duration handling
- 🛠️ Developer Utilities:
  - Metadata encoding
  - Video processing pipeline
  - Ready-to-use UI components
  - TypeScript support
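The starter kit itself is a TypeScript/Next.js project, but the model access it wraps goes through fal.ai's hosted endpoints. As a rough illustration of that pattern (not the kit's own code), the Python sketch below submits a prompt-plus-image job to a fal.ai video endpoint; the endpoint id and argument names are placeholders to be checked against the fal.ai model pages.

```python
# Rough illustration of calling a fal.ai-hosted video model from Python (not part of
# the TypeScript starter kit). The endpoint id and argument names are placeholders;
# check the fal.ai model page for the real ones. Requires the FAL_KEY env variable.
import fal_client

result = fal_client.subscribe(
    "fal-ai/minimax-video",   # placeholder endpoint id
    arguments={
        "prompt": "a slow dolly-in on a rain-soaked neon street",   # hypothetical args
        "image_url": "https://example.com/first_frame.jpg",
    },
    with_logs=True,
)
print(result)   # typically contains a URL to the generated video
```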
-
Zibra.AI – Real-Time Volumetric Effects in Virtual Production. Now free for Indies!
A New Era for Volumetrics
For a long time, volumetric visual effects were viable only in high-end offline VFX workflows. Large data footprints and poor real-time rendering performance limited their use: most teams simply avoided volumetrics altogether. It’s similar to the early days of online video: limited computational power and low network bandwidth made video content hard to share or stream. Today, of course, we can’t imagine the internet without it, and we believe volumetrics are on a similar path.
With advanced data compression and real-time, GPU-driven decompression, anyone can now bring CGI-class visual effects into Unreal Engine.
From now on, it’s completely free for individual creators!
What does it mean for you?
-
Photography basics: Production Rendering Resolution Charts
https://www.urtech.ca/2019/04/solved-complete-list-of-screen-resolution-names-sizes-and-aspect-ratios/
Aspect ratios covered: 4:3, 16:9, 16:10, 3:2, 5:3, 5:4
Resolution names and pixel sizes:
- CGA – 320 x 200
- QVGA – 320 x 240
- VGA (SD, Standard Definition) – 640 x 480
- NTSC – 720 x 480
- WVGA – 854 x 480
- WVGA – 800 x 480
- PAL – 768 x 576
- SVGA – 800 x 600
- XGA – 1024 x 768
- not named – 1152 x 768
- HD 720 (720P, High Definition) – 1280 x 720
- WXGA – 1280 x 800
- WXGA – 1280 x 768
- SXGA – 1280 x 1024
- not named (768P, HD, High Definition) – 1366 x 768
- not named – 1440 x 960
- SXGA+ – 1400 x 1050
- WSXGA – 1680 x 1050
- UXGA (2MP) – 1600 x 1200
- HD1080 (1080P, Full HD) – 1920 x 1080
- WUXGA – 1920 x 1200
- 2K – 2048 x (any)
- QWXGA – 2048 x 1152
- QXGA (3MP) – 2048 x 1536
- WQXGA – 2560 x 1600
- QHD (Quad HD) – 2560 x 1440
- QSXGA (5MP) – 2560 x 2048
- 4K UHD (4K, Ultra HD, Ultra-High Definition) – 3840 x 2160
- QUXGA+ – 3840 x 2400
- IMAX 3D – 4096 x 3072
- 8K UHD (8K, 8K Ultra HD, UHDTV) – 7680 x 4320
- 10K (10240×4320, 10K HD) – 10240 x (any)
- 16K (Quad UHD, 16K UHD, 8640P) – 15360 x 8640
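As a quick sanity check when working from the chart above, the snippet below reduces a pixel resolution to its simplest aspect ratio with a greatest common divisor; the example resolutions are taken from the list.

```python
# Reduce a pixel resolution to its simplest aspect ratio (e.g. 1920 x 1080 -> 16:9).
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

for w, h in [(640, 480), (1920, 1080), (1920, 1200), (3840, 2160)]:
    print(f"{w} x {h} -> {aspect_ratio(w, h)}")
# 640 x 480 -> 4:3, 1920 x 1080 -> 16:9,
# 1920 x 1200 -> 8:5 (conventionally written 16:10), 3840 x 2160 -> 16:9
```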