RANDOM POSTS
-
7 Commandments of Film Editing and Composition
1. Watch every frame of raw footage twice. On the second pass, take notes. If you skip this and try to start developing a scene prematurely, you do a big disservice to yourself and to the director, actors and production crew.
2. Nurture the relationship with the director. You are the secondary person in that relationship. Stay calm and continually offer solutions. Get the director’s main intention for the film as early as possible.
3. Organize your media so that you can find any shot instantly.
4. Factor in extra time for renders, exports, errors and crashes.
5. Attempt edits and ideas that shouldn’t work. They just might. Until you try them and watch the result, you won’t know. Don’t rule out ideas just because they don’t make sense in your head.
6. Spend more time on your audio. It’s the glue of your edit. AUDIO SAVES EVERYTHING. Create fluid and seamless audio under your video.
7. Make cuts for the scene, but always in context for the whole film. Have a macro and a micro view at all times.
-
CamP: Camera Preconditioning for NeRF (Neural Radiance Fields)
https://arxiv.org/pdf/2308.10902.pdf
“…any errors in the camera position solve can dramatically reduce the quality of a reconstruction.”
“Introducing CamP — a method to precondition camera optimization for NeRFs to significantly improve quality. With CamP we’re able to create high-quality reconstructions even when input poses are bad.
CamP is a general-purpose trick that you can apply in many places. Apply CamP to prior SOTA NeRF models like Zip-NeRF, and you improve quality and establish a new SOTA. Apply CamP to existing camera optimization models, like SCNeRF, and you improve their performance.” – Keunhong Park
Figure: (a) ARKit Poses (w/o COLMAP) vs. (b) ARKit Poses + CamP (Ours)
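The core idea is to reparameterize each camera so that a unit step in any optimized parameter perturbs the projected points by a comparable amount, using a whitening-style preconditioner built from the projection Jacobian. Below is a rough NumPy sketch of that idea with a toy pinhole camera and a finite-difference Jacobian; it illustrates the preconditioning concept only and is not the paper’s implementation — the camera model and parameter layout are made up for the example.

```python
import numpy as np

def project(points, cam):
    """Toy pinhole projection. cam = [tx, ty, tz, f] (translation + focal length).
    A stand-in for the real camera model used by the NeRF pipeline."""
    tx, ty, tz, f = cam
    p = points + np.array([tx, ty, tz])
    return f * p[:, :2] / p[:, 2:3]

def camera_preconditioner(points, cam, eps=1e-3, h=1e-4):
    """Approximate P = (J^T J + eps*I)^(-1/2), where J is the Jacobian of the
    projected point coordinates w.r.t. the camera parameters (finite differences)."""
    base = project(points, cam).ravel()
    J = np.zeros((base.size, cam.size))
    for i in range(cam.size):
        d = np.zeros_like(cam)
        d[i] = h
        J[:, i] = (project(points, cam + d).ravel() - base) / h
    JtJ = J.T @ J + eps * np.eye(cam.size)
    w, V = np.linalg.eigh(JtJ)               # inverse matrix square root
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Optimize a preconditioned vector z instead of the raw camera parameters,
# with cam = cam0 + P @ z, so the optimizer sees a better-scaled problem
# (focal length vs. translation no longer differ by orders of magnitude).
rng = np.random.default_rng(0)
points = rng.normal(size=(64, 3)) + np.array([0.0, 0.0, 4.0])  # points in front of the camera
cam0 = np.array([0.0, 0.0, 0.0, 500.0])
P = camera_preconditioner(points, cam0)
z = np.zeros_like(cam0)                      # this is what gradient descent would update
cam = cam0 + P @ z
print(P.round(4))
```
-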
A Looming Threat to Bitcoin (and the financial world) - The Risk of a Quantum Hack
Advancements in quantum computing pose a potential threat to Bitcoin’s security. Google’s recent progress with its Willow quantum-computing chip has highlighted the possibility that future quantum computers could break the encryption protecting Bitcoin, enabling hackers to access secure digital wallets and potentially causing significant devaluation.
Researchers estimate that a quantum computer capable of such decryption is likely more than a decade away. Nonetheless, the Bitcoin developer community faces the complex task of upgrading the system to incorporate quantum-resistant encryption methods. Achieving consensus within the decentralized community may be a slow process, and users would eventually need to transfer their holdings to quantum-resistant addresses to safeguard their assets.
A quantum-powered attack on Bitcoin could also negatively impact traditional financial markets, possibly leading to substantial losses and a deep recession. To mitigate such threats, President-elect Donald Trump has proposed creating a strategic reserve for the government’s Bitcoin holdings.
-
Alibaba Group Tongyi Lab WanxAI Wan2.1 – open-source model
👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
🚀 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
🎉 Multiple tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
🔮 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
💪 Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/example%20workflows_Wan2.1
https://huggingface.co/Wan-AI/Wan2.1-T2V-14B
https://huggingface.co/Kijai/WanVideo_comfy/tree/main
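As a quick illustration of running the T2V-1.3B checkpoint outside ComfyUI, here is a minimal text-to-video sketch. It assumes the Hugging Face Diffusers integration (WanPipeline / AutoencoderKLWan) and the Wan-AI/Wan2.1-T2V-1.3B-Diffusers checkpoint are available; the resolution, frame count and guidance values below are illustrative, not official settings.

```python
# Minimal Wan2.1 text-to-video sketch. Assumes a recent diffusers release that
# ships WanPipeline/AutoencoderKLWan and the Wan-AI/Wan2.1-T2V-1.3B-Diffusers
# checkpoint on the Hugging Face Hub; adjust names and values to your install.
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# The VAE is commonly kept in float32 for stability; the transformer runs in bf16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # helps fit consumer-grade GPUs

video = pipe(
    prompt="A corgi running on a beach at sunset, cinematic lighting",
    negative_prompt="blurry, low quality, watermark",
    height=480,
    width=832,
    num_frames=81,          # roughly 5 seconds at 16 fps
    guidance_scale=5.0,
).frames[0]

export_to_video(video, "wan21_t2v_480p.mp4", fps=16)
```
For ComfyUI, the repackaged split files and example workflows linked above are the intended route; the Diffusers call here is just the most compact way to sketch the generation step.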