BREAKING NEWS
LATEST POSTS
-
Nvidia unveils $3,000 desktop AI computer for home LLM researchers
https://arstechnica.com/ai/2025/01/nvidias-first-desktop-pc-can-run-local-ai-models-for-3000
https://www.nvidia.com/en-us/project-digits
Some smaller open-weights AI language models (such as Llama 3.1 70B, with 70 billion parameters) and various AI image-synthesis models like Flux.1 dev (12 billion parameters) could probably run comfortably on Project DIGITS, but larger open models like Llama 3.1 405B, with 405 billion parameters, may not fit. Given the recent explosion of smaller AI models, a creative developer could likely run quite a few interesting models on the unit.
DIGITS’ 128GB of unified memory is notable because even a high-end consumer GPU like the RTX 4090 has only 24GB of VRAM. Memory is the hard limit on model parameter count: the weights must fit in memory, so more of it makes room for running larger local AI models.
-
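A quick back-of-the-envelope check of that limit: model weights dominate memory use, at roughly one byte per parameter for 8-bit quantized weights (two bytes for FP16, half a byte for 4-bit). A minimal sketch; the 20% overhead factor for activations and KV cache is an illustrative assumption, not a measured figure:

```python
def fits_in_memory(params_billions, bytes_per_param, mem_gb, overhead=1.2):
    """Rough check: do the model weights, plus an assumed ~20% overhead
    for activations and KV cache, fit in the given memory budget (GB)?"""
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB
    return weights_gb * overhead <= mem_gb

# Llama 3.1 70B at 8-bit: ~70 GB of weights -> fits in DIGITS' 128 GB
print(fits_in_memory(70, 1, 128))    # True
# Llama 3.1 405B at 8-bit: ~405 GB -> does not fit
print(fits_in_memory(405, 1, 128))   # False
# RTX 4090 (24 GB): 70B even at 4-bit (~35 GB) still does not fit
print(fits_in_memory(70, 0.5, 24))   # False
```

This is why the 128GB figure, not raw compute, is the headline spec for running large local models.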
Gaussian Splatting OFX plugin for Nuke
https://radiancefields.com/gaussian-splatting-in-nuke
https://aescripts.com/gaussian-splatting-for-nuke
Features
- Import .ply files in Nuke.
- Support compressed .ply files from SuperSplat.
- Crop with a Spherical or Box shape.
- Crop with a Y plane.
- Combine up to 10 models in the scene.
- Colorize with a Ramp using a Spherical or Box shape.
- Reveal a model with an Opacity Ramp.
- Animate splat scale with a Spherical or Box shape.
- Distort each model with Noise.
- Render a depth pass for 3D compositing.
- Color-correct each model.
- Real-time rendering on the GPU.
- Export the scene.
-
ComfyUI + InstantID SDXL – Face and body swap tutorials
https://github.com/cubiq/ComfyUI_InstantID
https://github.com/cubiq/ComfyUI_InstantID/tree/main/examples
https://github.com/deepinsight/insightface
Unofficial version: https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID
Installation details under the post
-
ComfyUI Tutorial Series Ep 25 – LTX Video – Fast AI Video Generator Model
https://comfyanonymous.github.io/ComfyUI_examples/ltxv
LTX-Video 2B v0.9.1 Checkpoint model
https://huggingface.co/Lightricks/LTX-Video/tree/main
More details under the post
FEATURED POSTS
-
How does Stable Diffusion work?
https://stable-diffusion-art.com/how-stable-diffusion-work/
Stable Diffusion is a latent diffusion model that generates AI images from text. Instead of operating in the high-dimensional image space, it first compresses the image into the latent space.
Stable Diffusion belongs to a class of deep learning models called diffusion models. They are generative models, meaning they are designed to generate new data similar to what they have seen in training. In the case of Stable Diffusion, the data are images.
Why is it called a diffusion model? Because its math closely resembles diffusion in physics. Let’s go through the idea.
-
Colour – Macbeth Chart Checker Detection
https://github.com/colour-science/colour-checker-detection
A Python package implementing various colour checker detection algorithms and related utilities.
-
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering and Denoising for HDR View Synthesis
https://srameo.github.io/projects/le3d/
LE3D is a method for real-time HDR view synthesis from RAW images. It is particularly effective for nighttime scenes.
https://github.com/Srameo/LE3D