
LATEST POSTS

  • ComfyUI-CogVideoXWrapper – Control motion paths in ComfyUI

    pIXELsHAM.com
    Jan 27, 2025
    A.I.

    https://github.com/kijai/ComfyUI-CogVideoXWrapper

    Views : 32
  • One-Prompt-One-Story – Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt

    pIXELsHAM.com
    Jan 27, 2025
    A.I.

    https://byliutao.github.io/1Prompt1Story.github.io

Text-to-image generation models can create high-quality images from input prompts. However, they struggle to support consistent, identity-preserving generation for storytelling.

Our approach, 1Prompt1Story, concatenates all prompts into a single input for T2I diffusion models, initially preserving character identities.
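For intuition, a minimal sketch of the single-prompt idea using the standard Hugging Face diffusers API. The identity and frame texts are illustrative, and the prompt-string re-emphasis stands in for the paper's attention-level frame selection, which this sketch does not implement:

```python
# Minimal sketch of the single-prompt idea, not the paper's method:
# every frame shares one concatenated prompt, so the identity tokens
# are conditioned in the same context for all images. Appending the
# active frame text is a hypothetical stand-in for the paper's
# attention-level frame selection.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

identity = "a watercolor painting of a small red fox"
frames = [
    "walking through a snowy forest",
    "drinking from a mountain stream",
    "sleeping under the northern lights",
]

# All story beats concatenated into a single input prompt.
story_prompt = identity + ", " + ", ".join(frames)

images = []
for frame in frames:
    # Re-seed per frame so variation comes from the text, not the noise.
    generator = torch.Generator("cuda").manual_seed(42)
    images.append(
        pipe(story_prompt + ", currently " + frame, generator=generator).images[0]
    )
```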

    Views : 70
  • What did DeepSeek figure out about reasoning with DeepSeek-R1?

    pIXELsHAM.com
    Jan 27, 2025
    A.I.

    https://www.seangoedecke.com/deepseek-r1

The Chinese AI lab DeepSeek recently released their new reasoning model R1, which is supposedly (a) better than the current best reasoning models (OpenAI’s o1 series), and (b) was trained on a GPU cluster a fraction of the size of those used by the big Western AI labs.

    DeepSeek uses a reinforcement learning approach, not a fine-tuning approach. There’s no need to generate a huge body of chain-of-thought data ahead of time, and there’s no need to run an expensive answer-checking model. Instead, the model generates its own chains-of-thought as it goes.
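A hedged sketch of that loop: the policy samples its own chains of thought, a cheap rule-based check scores only the final answer, and samples are reinforced relative to their group, in the spirit of GRPO. `policy.sample` and `policy.reinforce` are hypothetical stand-ins, not DeepSeek's API:

```python
# Illustrative RL loop in the spirit of R1's training: no pre-built
# chain-of-thought dataset and no learned reward model -- the policy
# samples its own reasoning and a rule-based checker scores only the
# final answer. `policy`, `sample`, and `reinforce` are hypothetical
# stand-ins; DeepSeek's actual GRPO recipe differs in the details.
import re

def rule_based_reward(completion: str, gold_answer: str) -> float:
    """Score 1.0 if the boxed final answer matches, else 0.0."""
    match = re.search(r"\\boxed\{(.+?)\}", completion)
    return 1.0 if match and match.group(1).strip() == gold_answer else 0.0

def training_step(policy, question: str, gold_answer: str, group_size: int = 8):
    # Sample a group of chains-of-thought from the current policy.
    completions = [policy.sample(question) for _ in range(group_size)]
    rewards = [rule_based_reward(c, gold_answer) for c in completions]

    # Group-relative advantage (the core idea behind GRPO): each sample
    # is reinforced by how much better it scored than its siblings.
    mean_r = sum(rewards) / len(rewards)
    advantages = [r - mean_r for r in rewards]
    policy.reinforce(question, completions, advantages)
```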

    https://medium.com/@ShankarsPayana/how-deepseek-r1-using-fp8-instead-of-fp32-beat-openai-meta-gemini-and-claude-c105d94d0c39

    The secret behind their success? A bold move to train their models using FP8 (8-bit floating-point precision) instead of the standard FP32 (32-bit floating-point precision).
    …
    By using a clever system that applies high precision only when absolutely necessary, they achieved incredible efficiency without losing accuracy.
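To make that concrete, a toy NumPy emulation of FP8-style mixed precision: inputs are rounded to an e4m3-like grid with per-tensor scales while the accumulation stays in FP32. The e4m3 maximum of 448 is a real FP8 property; the rest is an illustration of the mixed-precision idea, not DeepSeek's kernel code:

```python
# Toy simulation of FP8 (e4m3) mixed-precision matmul: weights and
# activations are quantized to an 8-bit grid with a per-tensor scale,
# while the accumulation runs in FP32. Illustrative only -- real FP8
# training uses hardware tensor-core kernels, not this emulation.
import numpy as np

E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 e4m3

def quantize_e4m3(x: np.ndarray):
    """Scale into FP8 range, then round to a 3-mantissa-bit grid."""
    scale = np.abs(x).max() / E4M3_MAX
    v = x / scale
    m, e = np.frexp(v)                       # v = m * 2**e, |m| in [0.5, 1)
    q = np.ldexp(np.round(m * 16) / 16, e)   # keep 3 stored mantissa bits
    return q, scale

def fp8_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    qa, sa = quantize_e4m3(a)
    qb, sb = quantize_e4m3(b)
    # "High precision only when necessary": the product accumulates in
    # FP32, then the per-tensor scales are reapplied once at the end.
    return (qa.astype(np.float32) @ qb.astype(np.float32)) * (sa * sb)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
err = np.abs(fp8_matmul(a, b) - a @ b).mean()
print(f"mean abs error vs FP32: {err:.4f}")
```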

    …
    The impressive part? These multi-token predictions are about 85–90% accurate, meaning DeepSeek R1 can deliver high-quality answers at double the speed of its competitors.
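Those two numbers are mutually consistent. Treating multi-token prediction like speculative decoding, where each extra predicted token is accepted with probability p, a back-of-envelope calculation recovers the claimed 2x:

```python
# Back-of-envelope check on "85-90% accurate => double the speed":
# if each extra predicted token is accepted with probability p and
# acceptance stops at the first miss, the expected tokens per model
# step for k predicted tokens is 1 + p + p^2 + ... + p^(k-1).
def expected_tokens_per_step(p: float, k: int) -> float:
    return sum(p ** i for i in range(k))  # i = 0 is the guaranteed token

for p in (0.85, 0.90):
    # With one extra token per step (k = 2), 85-90% acceptance yields
    # ~1.85-1.90 tokens per step -- roughly a 2x decoding speedup.
    print(f"p={p}: {expected_tokens_per_step(p, 2):.2f} tokens/step")
```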

    https://www.tweaktown.com/news/102798/chinese-ai-firm-deepseek-has-50-000-nvidia-h100-gpus-says-ceo-even-with-us-restrictions/index.html

    Chinese AI firm DeepSeek has 50,000 NVIDIA H100 AI GPUs

    Views : 55
  • Raphael AI – World’s First Unlimited Free AI Image Generator powered by FLUX.1-Dev model

    pIXELsHAM.com
    Jan 26, 2025
    A.I., software

    https://raphael.app

    Views : 362
  • Texture Copilot – AI Copilot for 3D Texturing

    pIXELsHAM.com
    Jan 26, 2025
    A.I.

    https://ncsoft.github.io/ncresearch/3f0ba4889e331ddbed68c9dd48d845fa18d874de

    Views : 55
  • CaPa – Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation

    pIXELsHAM.com
    Jan 26, 2025
    A.I., modeling

    https://ncsoft.github.io/CaPa

    https://github.com/ncsoft/CaPa

A novel method for generating hyper-quality 4K textured meshes in under 30 seconds, providing 3D assets ready for commercial applications such as games, movies, and VR/AR.

    Views : 33
  • NVidia DynOMo – Online Point Tracking by Dynamic Online Monocular Gaussian Reconstruction

    pIXELsHAM.com
    Jan 26, 2025
    photogrammetry

    https://jennyseidenschwarz.github.io/DynOMo.github.io

    https://github.com/dvl-tum/DynOMo

    Views : 25
  • LumaLabs Ray2 – A large-scale video generative model

    pIXELsHAM.com
    Jan 26, 2025
    A.I.

    https://lumalabs.ai/ray

    Views : 31
  • SurFhead – Affine Rig Blending for Geometrically Accurate 2D Gaussian Surfel-based Head Avatars

    pIXELsHAM.com
    Jan 26, 2025
    photogrammetry

    https://summertight.github.io/SurFhead

    https://github.com/surfhead2025/surfhead

    Views : 29
  • Spell.Spline – 2D-to-3D: generate entire 3D scenes or “Worlds” from an image

    pIXELsHAM.com
    Jan 26, 2025
    A.I.

    https://blog.spline.design/introducing-spell

    https://spell.spline.design/explore/featured

    Views : 49
  • The Best AI Animation Tool in 2025? (Prompt Battle)

    pIXELsHAM.com
    Jan 26, 2025
    A.I.

    Views : 23
  • Kim Jung Gi – 2020.04.16 Live Drawing

    pIXELsHAM.com
    Jan 26, 2025
    design

    Views : 16
  • Node-it Shading – Teaser for Blender

    pIXELsHAM.com
    Jan 26, 2025
    blender

    Views : 27
  • Fal Video Studio – The first open-source AI toolkit for video editing

    pIXELsHAM.com
    Jan 25, 2025
    A.I., software

    https://github.com/fal-ai-community/video-starter-kit

    https://fal-video-studio.vercel.app

    • 🎬 Browser-Native Video Processing: Seamless video handling and composition in the browser
    • 🤖 AI Model Integration: Direct access to state-of-the-art video models through fal.ai
      • Minimax for video generation
      • Hunyuan for visual synthesis
      • LTX for video manipulation
    • 🎵 Advanced Media Capabilities:
      • Multi-clip video composition
      • Audio track integration
      • Voiceover support
      • Extended video duration handling
    • 🛠️ Developer Utilities:
      • Metadata encoding
      • Video processing pipeline
      • Ready-to-use UI components
      • TypeScript support
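Since the kit fronts fal.ai-hosted models, the server-side call amounts to invoking a hosted endpoint. A minimal sketch using fal's Python client for consistency with the other examples here (the kit itself is TypeScript); the endpoint ID and argument names are placeholders, not confirmed by the repo:

```python
# Minimal sketch of calling a fal.ai-hosted video model, the kind the
# starter kit wires into its browser UI. The endpoint ID and argument
# names below are placeholders -- check fal.ai for current names.
import fal_client  # pip install fal-client; needs FAL_KEY in the environment

result = fal_client.subscribe(
    "fal-ai/some-video-model",   # placeholder endpoint ID
    arguments={
        "prompt": "a paper boat drifting down a rain-soaked street",
        "duration": 5,           # seconds; argument names vary per model
    },
)
print(result["video"]["url"])    # typical response shape, not guaranteed
```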

    Views : 297
  • Tencent Hunyuan3D – an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets

    pIXELsHAM.com
    Jan 25, 2025
    A.I.

    https://github.com/tencent/Hunyuan3D-2

Hunyuan3D 2.0 is an advanced large-scale 3D synthesis system for generating high-resolution textured 3D assets. The system includes two foundation components: a large-scale shape generation model, Hunyuan3D-DiT, and a large-scale texture synthesis model, Hunyuan3D-Paint.

    The shape generative model, built on a scalable flow-based diffusion transformer, aims to create geometry that properly aligns with a given condition image, laying a solid foundation for downstream applications. The texture synthesis model, benefiting from strong geometric and diffusion priors, produces high-resolution and vibrant texture maps for either generated or hand-crafted meshes. Furthermore, we build Hunyuan3D-Studio – a versatile, user-friendly production platform that simplifies the re-creation process of 3D assets.

It allows both professional and amateur users to manipulate or even animate their meshes efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0 outperforms previous state-of-the-art models, including open-source and closed-source models, in geometry detail, condition alignment, and texture quality.
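A minimal sketch of that two-stage flow, adapted from the repository's documented usage; class and module names may change between releases, so treat this as illustrative rather than definitive:

```python
# Two-stage generation as described above: Hunyuan3D-DiT produces the
# shape, then Hunyuan3D-Paint textures it. Adapted from the repository's
# documented usage; names may change between releases -- see the README.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

image = "assets/demo.png"  # the condition image the geometry must align with

# Stage 1: the flow-based diffusion transformer generates an untextured mesh.
shape_pipe = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")
mesh = shape_pipe(image=image)[0]

# Stage 2: texture synthesis paints the mesh using the same condition image.
paint_pipe = Hunyuan3DPaintPipeline.from_pretrained("tencent/Hunyuan3D-2")
textured_mesh = paint_pipe(mesh, image=image)
textured_mesh.export("demo_textured.glb")
```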

    Views : 61

FEATURED POSTS

  • Colour composition In Storytelling by Lewis Criswell

    pIXELsHAM.com
    Jul 3, 2016
    composition, photography

    Views : 1,041
  • How To Build A Team of AI Agents in n8n

    pIXELsHAM.com
    Oct 10, 2025
    A.I., production

    Views : 9
  • Little planet effect

    pIXELsHAM.com
    May 29, 2011
    photography

    http://dirksphotoblog.wordpress.com/2006/09/06/tutorial-create-your-own-planets/

    Views : 967
  • Game Development tips

    pIXELsHAM.com
    Dec 15, 2024
    Featured, production

    Anton Slashcev


    More tips under the post

    Views : 75
  • Types of Film Lights and their efficiency – CRI, Color Temperature and Luminous Efficacy

    pIXELsHAM.com
    Feb 23, 2022
    colour, composition, Featured, lighting

nofilmschool.com/types-of-film-lights

“Not every light performs the same way. Lights and lighting are tricky to handle. You have to plan for every circumstance. But the good news is, lighting can be adjusted. Let’s look at the different factors that affect lighting in every scene you shoot.”

Use CRI, luminous efficacy, and color temperature controls to match your needs.

Color Temperature
Color temperature describes the “color” of white light from a source, defined as the temperature, measured in kelvin, at which a perfect black body would radiate light of the same hue.

https://www.pixelsham.com/2019/10/18/color-temperature/

CRI
“The Color Rendering Index is a measurement of how faithfully a light source reveals the colors of whatever it illuminates. It describes the ability of a light source to reveal the color of an object, as compared to the color a natural light source would provide. The highest possible CRI is 100. A CRI of 100 generally refers to a perfect black body, like a tungsten light source or the sun.”

https://www.studiobinder.com/blog/what-is-color-rendering-index
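A practical calculation that connects the color-temperature control to on-set filtration: gels and conversion filters are rated in mireds, where mired = 1,000,000 / kelvin, because equal mired shifts look perceptually similar at any temperature. A quick sketch:

```python
# Color-temperature conversions in mireds (micro reciprocal degrees):
# mired = 1,000,000 / kelvin. Filters and gels are rated in mireds
# because a given mired shift looks perceptually similar anywhere on
# the kelvin scale, unlike a raw kelvin difference.
def kelvin_to_mired(k: float) -> float:
    return 1_000_000 / k

def mired_shift(source_k: float, target_k: float) -> float:
    """Positive = warming (orange) filtration, negative = cooling (blue)."""
    return kelvin_to_mired(target_k) - kelvin_to_mired(source_k)

# Converting 5600 K daylight to 3200 K tungsten needs about +134 mireds
# of warming filtration.
print(mired_shift(5600, 3200))  # ~ +133.9
```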

    Views : 2,584
  • 15 Years of Art Experience in One DRAGON

    pIXELsHAM.com
    Sep 27, 2024
colour, composition, design, lighting

Bonus clip in the post: Character Design Concept Art Process – Professional Workflow

    Views : 27
  • It would be better if the soldiers avoided the lake / Diorama / Anycubic

    pIXELsHAM.com
    Mar 1, 2022
    3Dprinting, design

    Views : 758
  • Photography basics: Why Use a (MacBeth) Color Chart?

    pIXELsHAM.com
    Aug 24, 2018
    colour, lighting, photography

Start here: https://www.pixelsham.com/2013/05/09/gretagmacbeth-color-checker-numeric-values/

https://www.studiobinder.com/blog/what-is-a-color-checker-tool/

The post includes example screenshots of color-chart workflows in Lightroom and Final Cut.

In Nuke

Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f/stop is set to in the viewer settings (f/8 by default; change that to EV by following the instructions below).
You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down gives you the respective next value on the aperture scale (f/8, f/11, f/16, etc.).

One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
So, starting with 0.18 in your Constant, raising it by a stop will give you 0.36 as a floating-point number (in linear space), while your f/stop reads f/11, and so on.
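The same powers-of-two arithmetic can be verified outside Nuke; this minimal sketch mirrors the Exposure-node experiment described above:

```python
# Each photographic stop doubles or halves linear light, so values and
# f-numbers both live on a power-of-two ladder (f-numbers step by √2).
# Mirrors the Exposure-node experiment above: middle gray 0.18 at the
# center stop f/8 reads f/11 one stop up, f/16 two stops up, and so on.
import math

MIDDLE_GRAY = 0.18
CENTER_FSTOP = 8.0  # Nuke's default center f/stop in the viewer

for stops in range(-2, 3):
    linear = MIDDLE_GRAY * 2 ** stops
    f_number = CENTER_FSTOP * math.sqrt(2) ** stops
    print(f"{stops:+d} stop: linear {linear:.3f} -> f/{f_number:.1f}")
# +1 stop: linear 0.360 -> f/11.3 (displayed as f/11 on the standard scale)
```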

     

If you set your center stop to 0 (see below), you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.

In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 second at an aperture of f/1.0.

This usually puts the sun around EV 12–17 and the sky around EV 1–4, depending on cloud coverage.

To switch Foundry’s Nuke’s Spotmeter to return the EV of an image, click on the main viewport and press S to open the viewer’s properties, then set the center f-stop to 0 there. The Spotmeter readout in the viewport will change from aperture f-stops to EV.
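For reference, EV is just aperture and shutter time on a log2 scale, which is why EV 0 corresponds to f/1.0 at 1 second:

```python
# EV = log2(N^2 / t) for f-number N and shutter time t in seconds,
# so f/1.0 at 1 s is EV 0 -- the reference point described above.
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(1.0, 1.0))       # 0.0  -> the EV 0 reference
print(exposure_value(16.0, 1 / 125))  # ~15  -> bright sun, in the EV 12-17 range
```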

    Views : 2,508