Blender is switching from OpenGL to Vulkan as its default graphics backend, a transition that begins in earnest with Blender 4.5, to achieve better performance and to prepare for future features like real-time ray tracing and global illumination. To enable the new backend, go to Edit > Preferences > System, set the “Backend” option to “Vulkan,” and restart Blender. The change offers substantial benefits, including faster startup times, improved viewport responsiveness, and more efficient handling of complex scenes through better use of your CPU and GPU.
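If you prefer to switch from the command line, recent Blender builds also accept a launch flag; a minimal sketch (the flag name is taken from Blender’s CLI help, so confirm it on your build with blender --help):

# launch Blender with the Vulkan backend for this session
blender --gpu-backend vulkan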
Why the Switch to Vulkan?
Modern Graphics API: Vulkan is a newer, lower-level, and more efficient API that provides developers with greater control over hardware, unlike the older, higher-level OpenGL.
Performance Boost: This change significantly improves performance in various areas, such as viewport rendering, material loading, and overall UI responsiveness, especially in complex scenes with many textures.
Better Resource Utilization: Vulkan distributes work more effectively across CPU cores and reduces driver overhead, allowing Blender to make better use of your computer’s power.
Future-Proofing: The Vulkan backend paves the way for advanced features like real-time ray tracing and global illumination in future versions of Blender.
Given sparse-view videos, Diffuman4D (1) generates 4D-consistent multi-view videos conditioned on these inputs, and (2) reconstructs a high-fidelity 4DGS (4D Gaussian Splatting) model of the human performance using both the input and the generated videos.
Truly Infinite Videos: This isn’t a gimmick. You can generate incredibly long videos without frying your VRAM. Perfect for podcasts, presentations, or full-on virtual influencers.
More Than Just Lips: This is the best part. It doesn’t just sync the mouth; it generates realistic head movements, body posture, and facial expressions that match the audio’s emotion. It makes characters feel alive.
Keeps Everything Consistent: It preserves the character’s identity, the background, and even camera movements from your original video, so everything looks seamless.
Completely Open Source & Ready for Business: The code, the weights, and the paper are all out there for you to use. Best of all, it’s released under an Apache 2.0 license, which means you are free to use what you create for commercial projects!
# extract one frame at the end of a video
ffmpeg -sseof -0.1 -i intro_1.mp4 -frames:v 1 -q:v 1 intro_end.jpg
-sseof -0.1: Seeks to 0.1 seconds before the end of the file. This is often more reliable for extracting the last frame, especially if the video’s duration isn’t an exact multiple of the frame interval.
-frames:v 1: Extracts a single frame.
-q:v 1: Sets the quality of the output image; 1 is the highest quality.
# extract one frame at the beginning of a video
ffmpeg -i speaking_4.mp4 -frames:v 1 speaking_beginning.jpg
# check video length (the duration is printed in the log output)
ffmpeg -i C:\myvideo.mp4 -f null -
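ffprobe (bundled with FFmpeg) can print the duration directly instead of burying it in ffmpeg’s log output; a small sketch:

# print duration in seconds
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 C:\myvideo.mp4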
# convert mov/mp4 to animated gif
ffmpeg -i input.mp4 -pix_fmt rgb24 output.gif
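The direct conversion above can show color banding; a common two-pass variant generates a palette first for cleaner results (the fps and scale values here are just one reasonable choice):

# pass 1: build a color palette from the source
ffmpeg -i input.mp4 -vf "fps=12,scale=480:-1:flags=lanczos,palettegen" palette.png
# pass 2: map the video through that palette
ffmpeg -i input.mp4 -i palette.png -filter_complex "fps=12,scale=480:-1:flags=lanczos[x];[x][1:v]paletteuse" output.gif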
There are three models. Two are available now, and a third, open-weight version is coming soon:
FLUX.1 Kontext [pro]: State-of-the-art performance for image editing. High-quality outputs, great prompt following, and consistent results.
FLUX.1 Kontext [max]: A premium model that brings maximum performance, improved prompt adherence, and high-quality typography generation without compromise on speed.
Coming soon: FLUX.1 Kontext [dev]: An open-weight, guidance-distilled version of Kontext.
We’re so excited about what Kontext can do that we’ve created a collection of models on Replicate to give you ideas:
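As a rough sketch of what calling one of these models can look like through Replicate’s HTTP API (the model slug and input field names are assumptions here; check the model page on Replicate for the exact schema):

# POST a prediction to Replicate; requires REPLICATE_API_TOKEN in the environment
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d '{"input": {"prompt": "make the car red", "input_image": "https://example.com/car.png"}}' \
  https://api.replicate.com/v1/models/black-forest-labs/flux-kontext-pro/predictions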
Tencent just made Hunyuan3D 2.1 open-source. This is the first fully open-source, production-ready PBR 3D generative model with cinema-grade quality. https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1
What makes it special?
• Advanced PBR material synthesis brings realistic materials like leather, bronze, and more to life with stunning light interactions.
• Complete access to model weights, training/inference code, and data pipelines.
• Optimized to run on accessible hardware.
• Built for real-world applications with professional-grade output quality.
They’re making it accessible to everyone:
• Complete open-source ecosystem with full documentation.
• Ready-to-use model weights and training infrastructure.
• Live demo available for instant testing.
• Comprehensive GitHub repository with implementation details.