LATEST POSTS

  • Topaz Labs Project Starlight – The first and only diffusion-based AI model for enhancing video

    pIXELsHAM.com
    Feb 12, 2025
    A.I.

    https://www.topazlabs.com

    Views : 1,430
  • Stocksnap.io – Free Stock Photos

    pIXELsHAM.com
    Feb 12, 2025
    photography, reference

    https://stocksnap.io

    Views : 24
  • Mistral.ai Le Chat – combines the pre-trained knowledge of Mistral models with recent information and evidence-based responses

    pIXELsHAM.com
    Feb 12, 2025
    A.I.

    https://mistral.ai/en/news/all-new-le-chat

    Views : 34
  • Simulate Realistic Camera Movement in Blender with a Real Camera

    pIXELsHAM.com
    Feb 11, 2025
    blender

    https://80.lv/articles/this-python-script-lets-you-simulate-realistic-camera-movement-in-blender

    Views : 33
  • ByteDance Goku – Flow-Based Video Generative Foundation Models

    pIXELsHAM.com
    Feb 11, 2025
    A.I.

    https://saiyan-world.github.io/goku

    Views : 54
  • HumanDiT – Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation

    pIXELsHAM.com
    Feb 11, 2025
    A.I.

    https://agnjason.github.io/HumanDiT-page

    By inputting a single character image and template pose video, our method can generate vocal avatar videos featuring not only pose-accurate rendering but also realistic body shapes.

    Views : 73
  • DynVFX – Augmenting Real Videos with Dynamic Content

    pIXELsHAM.com
    Feb 11, 2025
    A.I.

    https://dynvfx.github.io

    Given an input video and a simple user-provided text instruction describing the desired content, our method synthesizes dynamic objects or complex scene effects that naturally interact with the existing scene over time. The position, appearance, and motion of the new content are seamlessly integrated into the original footage while accounting for camera motion, occlusions, and interactions with other dynamic objects in the scene, resulting in a cohesive and realistic output video. 

    https://dynvfx.github.io/sm/index.html

    Views : 445
  • CLO 3D – Fashion Design Software

    pIXELsHAM.com
    Feb 8, 2025
    modeling, software

    https://www.clo3d.com/en

    https://linkin.bio/itsclo3d

    Views : 38
  • VideoJAM – Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models

    pIXELsHAM.com
    Feb 7, 2025
    A.I.

    https://hila-chefer.github.io/videojam-paper.github.io

    Views : 39
  • 100+ Open Source SVG Spinners

    pIXELsHAM.com
    Feb 7, 2025
    design, reference

    https://magecdn.com/tools/svg-loaders

    Views : 31
  • ByteDance OmniHuman-1

    pIXELsHAM.com
    Feb 7, 2025
    A.I.

    https://omnihuman-lab.github.io

    They propose an end-to-end multimodality-conditioned human video generation framework named OmniHuman, which can generate human videos based on a single human image and motion signals (e.g., audio only, video only, or a combination of audio and video). In OmniHuman, we introduce a multimodality motion conditioning mixed training strategy, allowing the model to benefit from data scaling up of mixed conditioning. This overcomes the issue that previous end-to-end approaches faced due to the scarcity of high-quality data. OmniHuman significantly outperforms existing methods, generating extremely realistic human videos based on weak signal inputs, especially audio. It supports image inputs of any aspect ratio, whether they are portraits, half-body, or full-body images, delivering more lifelike and high-quality results across various scenarios.

    Views : 58
  • Hunyuan3D-2 – Add-on for Blender and ComfyUI

    pIXELsHAM.com
    Feb 7, 2025
    A.I., modeling

    https://github.com/kijai/ComfyUI-Hunyuan3DWrapper

    https://github.com/Tencent/Hunyuan3D-2/blob/main/blender_addon.py

    https://github.com/tencent/Hunyuan3D-2

    https://huggingface.co/tencent/Hunyuan3D-2

    Views : 1,261
  • Conda – an open-source package management system for installing multiple versions of software packages and their dependencies into virtual environments

    pIXELsHAM.com
    Feb 6, 2025
    production, software

    https://anaconda.org/anaconda/conda

    https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html

    NOTE: The company recently changed its Terms of Service, and this service now incurs costs for teams above a certain size.
    Use MicroMamba instead.

    Views : 21
  • Vashi Nedomansky – Shooting ratios of feature films

    pIXELsHAM.com
    Feb 6, 2025
    ves

    In the Golden Age of Hollywood (1930-1959), a 10:1 shooting ratio was the norm—a 90-minute film meant about 15 hours of footage. Directors like Alfred Hitchcock famously kept it tight with a 3:1 ratio, giving studios little wiggle room in the edit.

    Fast forward to today: the digital era has sent shooting ratios skyrocketing. Affordable cameras roll endlessly, capturing multiple takes, resets, and everything in between. Gone are the disciplined “Action to Cut” days of film.

    https://en.wikipedia.org/wiki/Shooting_ratio

    SHOOTING RATIOS OF FEATURE FILMS

    Views : 35
  • Me.Meshcapade.com – multi-person markerless mocap with detailed hands and gestures

    pIXELsHAM.com
    Feb 6, 2025
    animation

    https://me.meshcapade.com/editor

    https://meshcapade.com

    Views : 128
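The shooting-ratio arithmetic from the Vashi Nedomansky post above can be checked in a couple of lines of Python (the function name is my own, purely illustrative):

```python
def footage_hours(runtime_minutes: float, shooting_ratio: float) -> float:
    """Hours of footage shot for a film of the given runtime at a given shooting ratio."""
    return runtime_minutes * shooting_ratio / 60.0

print(footage_hours(90, 10))  # 15.0 -> a 90-minute film at 10:1 means about 15 hours of footage
print(footage_hours(90, 3))   # 4.5  -> Hitchcock's tighter 3:1 ratio
```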

FEATURED POSTS

  • BBC – Wildlife Photographer of the Year 2025, the best pictures so far

    pIXELsHAM.com
    Aug 27, 2025
    colour, composition, lighting, photography

    https://www.bbc.com/news/articles/c70r7plrdndo

    Views : 11
  • SVFR – A Unified Framework for Generalized Video Face Restoration

    pIXELsHAM.com
    Jan 9, 2025
    A.I., software

    https://wangzhiyaoo.github.io/SVFR

    Views : 142
  • Insta360 – iPhone-compatible, VR-friendly 360 capture

    pIXELsHAM.com
    Oct 24, 2016
    hardware, photography, VR

    http://www.trustedreviews.com/insta360-nano-review

    http://www.insta360.com/product/insta360-nano

    Views : 1,262
  • Sensitivity of human eye

    pIXELsHAM.com
    Mar 10, 2016
    colour, Featured, photography, reference

    http://www.wikilectures.eu/index.php/Spectral_sensitivity_of_the_human_eye

    http://www.normankoren.com/Human_spectral_sensitivity_small.jpg

    The spectral sensitivity of the eye is influenced by light intensity, which in turn determines the level of activity of the cone and rod cells; this is a main characteristic of human vision. Sensitivity to individual colors, that is, to wavelengths of the light spectrum, is explained by the RGB (red-green-blue) theory. This theory assumes there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, whose mutual interaction allows us to perceive all colors of the spectrum.

    http://weeklysciencequiz.blogspot.com/2013/01/violet-skies-are-for-birds.html

    Views : 8,139
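As a toy illustration of the banded sensitivity described above (band edges taken straight from the post; real cone responses are broad and overlapping, so this is only a sketch):

```python
# Approximate sensitivity bands of the three cone types in nm,
# using the ranges quoted in the post.
CONE_BANDS = {
    "red":   (630, 700),
    "green": (500, 560),
    "blue":  (450, 490),
}

def cones_stimulated(wavelength_nm: float) -> list[str]:
    """Return the cone types whose quoted band contains the given wavelength."""
    return [name for name, (lo, hi) in CONE_BANDS.items() if lo <= wavelength_nm <= hi]

print(cones_stimulated(650))  # ['red']
print(cones_stimulated(530))  # ['green']
```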
  • UV maps

    pIXELsHAM.com
    Sep 29, 2018
    colour, Featured, lighting, production, reference

    https://byvalle.com/UVchecker/

    All maps in the post


    Views : 12,596
  • Is your perception that these circles are moving?

    pIXELsHAM.com
    May 4, 2021
    colour, cool, lighting

    https://www.pixelsham.com/wp-content/uploads/2021/05/AreTheyMoving.mp4

    The answer is…


    Views : 892
  • Magnificent Cardboard Airships by Jeroen van Kesteren

    pIXELsHAM.com
    Mar 25, 2017
    design

    http://www.thisiscolossal.com/2017/03/magnificent-cardboard-airships-by-jeroen-van-kesteren/

    Views : 1,113
  • Photography basics: Why Use a (MacBeth) Color Chart?

    pIXELsHAM.com
    Aug 24, 2018
    colour, lighting, photography

    Start here: https://www.pixelsham.com/2013/05/09/gretagmacbeth-color-checker-numeric-values/

    https://www.studiobinder.com/blog/what-is-a-color-checker-tool/
    In Lightroom

    In Final Cut

    In Nuke

    Note: In Foundry’s Nuke, the software will map 18% gray to whatever your center f-stop is set to in the viewer settings (f/8 by default… change that to EV by following the instructions below).
    You can experiment with this by attaching an Exposure node to a Constant set to 0.18, setting your viewer read-out to Spotmeter, and adjusting the stops in the node up and down. You will see that a full stop up or down gives you the next value on the aperture scale (f/8, f/11, f/16, etc.).

    One stop doubles or halves the amount of light that hits the filmback/CCD, so everything works in powers of 2.
    So starting with 0.18 in your Constant, you will see that raising it by a stop gives you 0.36 as a floating-point number (in linear space), while your f-stop will read f/11, and so on.

    If you set your center stop to 0 (see below) you will get a relative readout in EVs, where EV 0 again equals 18% constant gray.

    In other words, setting the center f-stop to 0 means that in a neutral plate, the middle gray in the Macbeth chart will equal exposure value 0. EV 0 corresponds to an exposure time of 1 sec at an aperture of f/1.0.

    This will usually put the sun around EV 12-17 and the sky around EV 1-4, depending on cloud coverage.

    To switch Foundry’s Nuke’s SpotMeter to return the EV of an image, click on the main viewport and press S: this opens the viewer’s properties. Set the center f-stop to 0 there, and the SpotMeter in the viewport will change from apertures and f-stops to EV.

    Views : 2,496
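The powers-of-two behavior described in the Nuke note above can be sketched in a few lines of Python (the function names are my own; this mimics the SpotMeter readout numerically and does not use Nuke's API):

```python
import math

MIDDLE_GRAY = 0.18  # 18% gray as a linear floating-point value

def linear_from_stops(stops: float) -> float:
    """Linear value of middle gray raised or lowered by a number of stops."""
    return MIDDLE_GRAY * 2.0 ** stops

def ev_readout(linear_value: float) -> float:
    """Relative EV, as a SpotMeter would report it with the center f-stop set to 0."""
    return math.log2(linear_value / MIDDLE_GRAY)

print(linear_from_stops(1))  # 0.36 -> one stop up from 0.18
print(ev_readout(0.18))      # 0.0  -> middle gray reads EV 0
print(ev_readout(0.72))      # 2.0  -> two stops over
```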

Disclaimer


Links and images on this website may be protected by the respective owners’ copyright. All data submitted by users through this site shall be treated as freely available to share.