BREAKING NEWS
LATEST POSTS
-
GenUE – Direct Prompt-to-Mesh Generation in Unreal Engine, Integrated with ComfyUI
GenUE brings prompt-driven 3D asset creation directly into Unreal Engine, using ComfyUI as a flexible backend.
• Generate high-quality images from text prompts.
• Choose from a catalog of batch-generated images – no style limitations.
• Convert the selected image to a fully textured 3D mesh.
• Automatically import and place the model into your Unreal Engine scene.
This modular pipeline gives you full control over the image and 3D generation stages, with support for any ComfyUI workflow or model. Full generation (image + mesh + import) completes in under 2 minutes on a high-end consumer GPU.
-
Edward Ureña – Rig creator
https://edwardurena.gumroad.com/l/ramoo
What it offers:
• Base rigs for multiple character types
• Automatic weight application
• Built-in facial rigging system
• Bone generators with FK and IK options
• Streamlined constraint panel
-
GPT-Image-1 API now available through ComfyUI with DALL·E integration
https://blog.comfy.org/p/comfyui-now-supports-gpt-image-1
https://docs.comfy.org/tutorials/api-nodes/openai/gpt-image-1
https://openai.com/index/image-generation-api
• Prompt GPT-Image-1 directly in ComfyUI using text or image inputs
• Set resolution and quality
• Supports image editing + transparent backgrounds
• Seamlessly mix with local workflows like WAN 2.1, FLUX Tools, and more
-
Tencent Hunyuan3D 2.5 – Transform images and text into 3D models with ultra-high-definition precision
What makes it special?
• Massive 10B parameter geometric model with 10x more mesh faces.
• High-quality textures with industry-first multi-view PBR generation.
• Optimized skeletal rigging for streamlined animation workflows.
• Flexible pipeline for text-to-3D and image-to-3D generation.
They’re making it accessible to everyone:
• Open-source code and pre-trained models.
• Easy-to-use API and intuitive web interface.
• Free daily quota doubled to 20 generations!
-
Alibaba 3DV-TON – A novel diffusion model for high-quality, temporally consistent video try-on
https://arxiv.org/pdf/2504.17414
Video try-on replaces clothing in videos with target garments. Existing methods struggle to generate high-quality and temporally consistent results when handling complex clothing patterns and diverse body poses. We present 3DV-TON, a novel diffusion-based framework for generating high-fidelity and temporally consistent video try-on results. Our approach employs generated animatable textured 3D meshes as explicit frame-level guidance, alleviating the issue of models over-focusing on appearance fidelity at the expense of motion coherence. This is achieved by enabling direct reference to consistent garment texture movements throughout video sequences. The proposed method features an adaptive pipeline for generating dynamic 3D guidance: (1) selecting a keyframe for initial 2D image try-on, followed by (2) reconstructing and animating a textured 3D mesh synchronized with original video poses. We further introduce a robust rectangular masking strategy that successfully mitigates artifact propagation caused by leaking clothing information during dynamic human and garment movements. To advance video try-on research, we introduce HR-VVT, a high-resolution benchmark dataset containing 130 videos with diverse clothing types and scenarios. Quantitative and qualitative results demonstrate our superior performance over existing methods.
-
FramePack – Packing Input Frame Context in Next-Frame Prediction Models for Offline Video Generation With Low Resource Requirements
https://lllyasviel.github.io/frame_pack_gitpage/
- Diffuse thousands of frames at full fps-30 with 13B models using 6GB laptop GPU memory.
- Finetune 13B video model at batch size 64 on a single 8xA100/H100 node for personal/lab experiments.
- Personal RTX 4090 generates at speed 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (teacache).
- No timestep distillation.
- Video diffusion, but feels like image diffusion.
Image-to-5-Seconds (30fps, 150 frames)
-
Anthony Sauzet – ProceduralMaya
A Maya script that introduces a node-based graph system for procedural modeling, similar to Houdini's approach
https://github.com/AnthonySTZ/ProceduralMaya
-
11 Public Speaking Strategies
What do people report as their #1 greatest fear?
It's not death…
It's public speaking.
Glossophobia, the fear of public speaking, has been a daunting obstacle for me for years.
11 confidence-boosting tips
1/ The 5-5-5 Rule
→ Scan 5 faces; hold each gaze for 5 seconds.
→ Repeat every 5 minutes.
→ Creates an authentic connection.
2/ Power Pause
→ Dead silence for 3 seconds after key points.
→ Let your message land.
3/ The 3-Part Open
→ Hook with a question.
→ Share a story.
→ State your promise.
4/ Palm-Up Principle
→ Open palms when speaking = trustworthy.
→ Pointing fingers = confrontational.
5/ The 90-Second Reset
→ Feel nervous? Excuse yourself.
→ 90 seconds of deep breathing resets your nervous system.
6/ Rule of Three
→ Structure key points in threes.
→ Our brains love patterns.
7/ 2-Minute Story Rule
→ Keep stories under 2 minutes.
→ Any longer and you lose them.
8/ The Lighthouse Method
→ Plant “anchor points” around the room.
→ Rotate eye contact between them.
→ Looks natural, feels structured.
9/ The Power Position
→ Feet shoulder-width apart.
→ Hands relaxed at sides.
→ Projects confidence even when nervous.
10/ The Callback Technique
→ Reference earlier points later in your talk.
→ Creates a narrative thread.
→ Audiences love connections.
11/ The Rehearsal Truth
→ Practice the opening 3x more than the rest.
→ Nail the first 30 seconds; you'll nail the talk.
-
FreeCodeCamp – Train Your Own LLM
https://www.freecodecamp.org/news/train-your-own-llm
Ever wondered how large language models like ChatGPT are actually built? Behind these impressive AI tools lies a complex but fascinating process of data preparation, model training, and fine-tuning. While it might seem like something only experts with massive resources can do, it's actually possible to learn how to build your own language model from scratch. And with the right guidance, you can go from loading raw text data to chatting with your very own AI assistant.
-
Alibaba FloraFauna.ai – AI Collaboration canvas
-
Runway introduces Gen-4 – Generate consistent elements by controlling input elements
https://runwayml.com/research/introducing-runway-gen-4
With Gen-4, you are now able to precisely generate consistent characters, locations and objects across scenes. Simply set your look and feel and the model will maintain coherent world environments while preserving the distinctive style, mood and cinematographic elements of each frame. Then, regenerate those elements from multiple perspectives and positions within your scenes.
Here's why Gen-4 changes everything:
✨ Unwavering Character Consistency
• Characters and environments now stay flawlessly consistent across shots – even as lighting shifts or angles pivot – all from one reference image. No more jarring transitions or mismatched details.
✨ Dynamic Multi-Angle Mastery
• Generate cohesive scenes from any perspective without manual tweaks. Gen-4 intuitively crafts multi-angle coverage, a leap past earlier models that struggled with spatial continuity.
✨ Physics That Feel Alive
• Capes ripple, objects collide, and fabrics drape with startling realism. Gen-4 simulates real-world physics, breathing life into scenes that once demanded painstaking manual animation.
✨ Seamless Studio Integration
• Outputs now blend effortlessly with live-action footage or VFX pipelines. Major studios are already adopting Gen-4 to prototype scenes faster and slash production timelines.
• Why this matters: Gen-4 erases the line between AI experiments and professional filmmaking. Directors can iterate on cinematic sequences in days, not months – democratizing access to tools that once required million-dollar budgets.
FEATURED POSTS
-
Embedding frame ranges into Quicktime movies with FFmpeg
QuickTime (.mov) files are fundamentally time-based, not frame-based, and so don't have a built-in, uniform "first frame/last frame" field you can set as numeric frame IDs. Instead, tools like Shotgun Create rely on the timecode track and the movie's duration to infer frame numbers. If you want Shotgun to pick up a non-default frame range (e.g. start at 1001, end at 1064), you must bake in an SMPTE timecode that corresponds to your desired start frame, and ensure the movie's duration matches your clip length.
How Shotgun Reads Frame Ranges
- Default start frame is 1. If no timecode metadata is present, Shotgun assumes the movie begins at frame 1.
- Timecode → frame number. Shotgun Create "honors the timecodes of media sources," mapping the embedded TC to frame IDs. For example, a 24 fps QuickTime tagged with a start timecode of 00:00:41:17 will be interpreted as beginning on frame 1001 (1001 ÷ 24 fps ≈ 41.71 s).
Embedding a Start Timecode
QuickTime uses a tmcd (timecode) track. You can bake in an SMPTE track via FFmpeg's -timecode flag or via Compressor/encoder settings:
- Compute your start TC.
- Desired start frame = 1001
- Frame 1001 at 24 fps → 1001 ÷ 24 ≈ 41.708 s → TC 00:00:41:17
- FFmpeg example:
ffmpeg -i input.mov \
  -c copy \
  -timecode 00:00:41:17 \
  output.mov
This adds a timecode track beginning at 00:00:41:17, which Shotgun maps to frame 1001.
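The frame-to-timecode arithmetic above can be sketched in Python. This is a hypothetical helper (not part of any tool mentioned here), assuming a constant-fps, non-drop-frame clip:

```python
def frame_to_timecode(frame: int, fps: int = 24) -> str:
    """Convert an absolute frame number to a non-drop-frame SMPTE timecode."""
    ff = frame % fps                    # leftover frames within the last second
    total_seconds = frame // fps        # whole seconds elapsed
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Frame 1001 at 24 fps -> 41 whole seconds plus 17 frames
print(frame_to_timecode(1001))  # 00:00:41:17
```

The value it prints is exactly the string you would pass to FFmpeg's -timecode flag.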
Ensuring the Correct End Frame
Shotgun infers the last frame from the movie's duration. To end on frame 1064:
- Frame count = 1064 − 1001 + 1 = 64 frames
- Duration = 64 ÷ 24 fps ≈ 2.667 s
FFmpeg trim example:
ffmpeg -i input.mov \
  -c copy \
  -timecode 00:00:41:17 \
  -t 00:00:02.667 \
  output_trimmed.mov
This results in a 64-frame clip (1001–1064) at 24 fps.
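The duration arithmetic can be sketched the same way (again a hypothetical helper, assuming an inclusive frame range at constant fps):

```python
def clip_duration_seconds(start_frame: int, end_frame: int, fps: int = 24) -> float:
    """Duration in seconds of an inclusive frame range at a constant frame rate."""
    n_frames = end_frame - start_frame + 1   # inclusive count: 1001..1064 -> 64 frames
    return n_frames / fps

# 64 frames at 24 fps -> roughly 2.667 s
print(round(clip_duration_seconds(1001, 1064), 3))  # 2.667
```

The rounded result is what gets baked into the -t argument of the trim command.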
-
Free fonts
https://fontlibrary.org
https://fontsource.org
Open-source fonts packaged into individual NPM packages for self-hosting in web applications. Self-hosted fonts can significantly improve website performance, remain version-locked, work offline, and offer more privacy.
https://www.awwwards.com/awwwards/collections/free-fonts
http://www.fontspace.com/popular/fonts
https://www.urbanfonts.com/free-fonts.htm
http://www.1001fonts.com/poster-fonts.html
How to use @font-face in CSS
The @font-face rule allows custom fonts to be loaded on a webpage:
https://css-tricks.com/snippets/css/using-font-face-in-css/
-
What light is best to illuminate gems for resale
www.palagems.com/gem-lighting2
Artificial light sources, not unlike the diverse phases of natural light, vary considerably in their properties. As a result, some lamps render an object's color better than others do.
The most important criterion for assessing the color-rendering ability of any lamp is its spectral power distribution curve.
Natural daylight varies too much in strength and spectral composition to be taken seriously as a lighting standard for grading and dealing colored stones. For anything to be a standard, it must be constant in its properties, which natural light is not.
For dealers in particular to make the transition from natural light to an artificial light source, that source must offer:
1- A degree of illuminance at least as strong as the common phases of natural daylight.
2- Spectral properties identical or comparable to a phase of natural daylight.
A source combining these two things makes gems appear much the same as when viewed under a given phase of natural light. From the viewpoint of many dealers, this corresponds to a natural appearance.
The 6000 K xenon short-arc lamp appears closest to meeting the criteria for a standard light source. Besides the strong illuminance this lamp affords, its spectrum is very similar to CIE standard illuminants of similar color temperature.