



LATEST POSTS

  • Persistence of vision illusion

    pIXELsHAM.com
    Jul 1, 2025
    cool
    Views : 5
  • eufyMake E1 UV Printer – This Machine is Unbelievable

    pIXELsHAM.com
    Jul 1, 2025
    hardware
    Views : 8
  • Blender 3D Sketching – Voxel Modelling Dragon Concept Art With Quicktools

    pIXELsHAM.com
    Jul 1, 2025
    blender, design
    Views : 8
  • Researchers from Beihang University have developed a 2-cm-long wireless bug-microbot capable of ultra-fast running speeds

    pIXELsHAM.com
    Jul 1, 2025
    hardware
    Views : 9
  • Kacha – Re-designing furniture

    pIXELsHAM.com
    Jul 1, 2025
    design

    https://kachafurniture.com/

    Views : 9
  • How The Agatha Christie Course AI Was Made | BBC Maestro

    pIXELsHAM.com
    Jul 1, 2025
    A.I., production, trailers

    Views : 11
  • Hunyuan-GameCraft – High-dynamic Interactive Game Video Generation with Hybrid History Condition

    pIXELsHAM.com
    Jun 28, 2025
    A.I., production

    https://www.arxiv.org/pdf/2506.17201

    https://huggingface.co/papers/2506.17201

    https://hunyuan-gamecraft.github.io/

    Views : 35
  • Pixel3DMM – Versatile Screen-Space Priors for Single-Image 3D Face Model Reconstruction

    pIXELsHAM.com
    Jun 27, 2025
    A.I., modeling

    https://simongiebenhain.github.io/pixel3dmm/

    Views : 13
  • Microsoft Planetary Computer Data Catalog – Petabytes of environmental monitoring data, in consistent, analysis-ready formats

    pIXELsHAM.com
    Jun 26, 2025
    reference

    https://planetarycomputer.microsoft.com/catalog

    Views : 10
  • LumaLabs – Modify Video

    pIXELsHAM.com
    Jun 26, 2025
    A.I., animation, production

    https://docs.lumalabs.ai/docs/modify-video

    https://docs.lumalabs.ai/reference/modifyvideo

    https://pypi.org/project/lumaai/

    https://www.npmjs.com/package/lumaai/v/1.15.0

    https://lumalabs.ai/api/pricing

    Views : 9
  • FXGuide – ACES 2.0 with ILM’s Alex Fry

    pIXELsHAM.com
    Jun 25, 2025
    colour, production
    fxpodcast: ACES 2.0 with ILM’s Alex Fry

    https://draftdocs.acescentral.com/background/whats-new/

    ACES 2.0 is the second major release of the components that make up the ACES system. The most significant change is a new suite of rendering transforms whose design was informed by feedback and requests collected from users of ACES 1. The changes aim to reduce perceived artifacts and to complete previously unfinished components of the system, resulting in a more complete, robust, and consistent product.

    Highlights of the key changes in ACES 2.0 are as follows:

    • New output transforms, including:
      • A less aggressive tone scale
      • More intuitive controls to create custom outputs to non-standard displays
      • Robust gamut mapping to improve perceptual uniformity
      • Improved performance of the inverse transforms
    • Enhanced AMF specification
    • An updated specification for ACES Transform IDs
    • OpenEXR compression recommendations
    • Enhanced tools for generating Input Transforms and recommended procedures for characterizing prosumer cameras
    • Look Transform Library
    • Expanded documentation

    Rendering Transform

    The most substantial change in ACES 2.0 is a complete redesign of the rendering transform.

    ACES 2.0 was built as a unified system, rather than through piecemeal additions. Different deliverable outputs “match” each other better, and creating outputs for display setups other than the provided presets is intended to be user-driven. The rendering transforms are less likely to produce undesirable artifacts “out of the box”, which means less time spent fixing problematic images and more time spent making pictures look the way you want.

    Key design goals

    • Improve the consistency of the tone scale and provide an easy-to-use parameter to allow for outputs between preset dynamic ranges
    • Minimize hue skews across the exposure range within a region of the same hue
    • Unify for structural consistency across transform types
    • Provide easy-to-use parameters to create outputs other than the presets
    • Apply robust gamut mapping to reduce harsh clipping artifacts
    • Fill the extents of the output code-value cube (where appropriate and expected)
    • Be invertible – not necessarily reversible, but an Output > ACES > Output round-trip should be possible
    • Accomplish all of the above while maintaining an acceptable “out-of-the-box” rendering
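    The “invertible” goal can be illustrated with a toy example. The sketch below uses a generic Michaelis-Menten-style curve chosen purely for illustration — it is not the actual ACES 2.0 tone scale, and the function names are made up — but it shows what an exact Output > ACES > Output round-trip means in miniature.

```python
# Illustrative only: a generic Michaelis-Menten-style tone scale,
# NOT the actual ACES 2.0 transform. It demonstrates the
# "invertible - Output > ACES > Output round-trip" design goal.

def tone_scale(x, peak=1.0, mid=0.18):
    """Map a scene-linear value x >= 0 into the display range [0, peak)."""
    return peak * x / (x + mid)

def tone_scale_inverse(y, peak=1.0, mid=0.18):
    """Recover the scene-linear value from a display value y < peak."""
    return mid * y / (peak - y)

# Round trip: display -> scene-linear -> display reproduces the input.
for y in (0.05, 0.25, 0.5, 0.75):
    x = tone_scale_inverse(y)
    assert abs(tone_scale(x) - y) < 1e-12
```

    Because the curve is strictly monotonic below its peak, the inverse is exact in closed form; a real display transform adds gamut mapping and display encoding around a core like this.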

    Views : 44
  • Atelier Loop @tatami_loop – Skeleton and muscle pose references

    pIXELsHAM.com
    Jun 25, 2025
    animation, design, reference

    https://twitter.com/tatami_loop

    Views : 15
  • ComfyUI – SeedVR2_VideoUpscaler

    pIXELsHAM.com
    Jun 24, 2025
    A.I.

    https://github.com/numz/ComfyUI-SeedVR2_VideoUpscaler

    https://github.com/ByteDance-Seed/SeedVR

    Views : 22
  • Three New AI Platforms For Cinematic AI Productions – Electric Sheep, Arcana Labs, and MovieFlo.AI

    pIXELsHAM.com
    Jun 24, 2025
    A.I., ves

    https://www.forbes.com/sites/charliefink/2025/06/23/three-new-ai-platforms-for-cinematic-ai-productions/

    Views : 14
  • Jerome Bacquet – ComfyUI Xenovision, an EXR output node designed to bring AI workflows closer to the standards of VFX

    pIXELsHAM.com
    Jun 24, 2025
    A.I.
    Views : 38

FEATURED POSTS

  • Color coordination and composition

    pIXELsHAM.com
    Feb 8, 2017
    colour, composition, design, reference

    Views : 1,436
  • Getting Started With 3D Gaussian Splatting for Windows (Beginner Tutorial)

    pIXELsHAM.com
    Oct 2, 2023
    A.I., photogrammetry, software

    https://www.reshot.ai/3d-gaussian-splatting

    What are 3D Gaussians? They are a generalization of 1D Gaussians (the bell curve) to 3D. Essentially they are ellipsoids in 3D space, with a center, a scale, a rotation, and “softened edges”.

    Each 3D Gaussian is optimized along with a (view-dependent) color and opacity. When blended together, the full model can be rendered from any angle. 3D Gaussian Splatting captures the fuzzy, soft nature of the plush toy extremely well, something that photogrammetry-based methods struggle to do.
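    The description above (center, scale, rotation, softened edges) can be sketched as a single anisotropic 3D Gaussian. This is an illustrative toy, not the actual 3DGS renderer — the real method rasterizes millions of these, projected to screen space and alpha-blended — and the names are made up for the example.

```python
import numpy as np

# Minimal sketch (not the real 3DGS pipeline): one anisotropic 3D
# Gaussian defined by a center, per-axis scales, and a rotation,
# i.e. the "ellipsoid with softened edges" described above.

def gaussian_density(p, center, scales, rotation):
    """Unnormalized density of an anisotropic 3D Gaussian at point p.

    rotation: 3x3 rotation matrix; scales: per-axis standard deviations.
    Covariance = R @ diag(scales**2) @ R.T
    """
    cov = rotation @ np.diag(np.square(scales)) @ rotation.T
    d = p - center
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

center = np.zeros(3)
scales = np.array([2.0, 1.0, 0.5])   # elongated ellipsoid
R = np.eye(3)                        # identity rotation for simplicity

# Density is 1 at the center and falls off smoothly ("soft edges"):
assert np.isclose(gaussian_density(center, center, scales, R), 1.0)
assert gaussian_density(np.array([2.0, 0.0, 0.0]), center, scales, R) < 1.0
```

    In the real renderer each Gaussian also carries an opacity and a view-dependent color (spherical harmonics), and the smooth falloff above is what gets alpha-composited front to back.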

    Views : 856
  • Extracting motion from a video

    pIXELsHAM.com
    Mar 11, 2024
    photography, production

    Views : 72
  • Sensitivity of human eye

    pIXELsHAM.com
    Mar 10, 2016
    colour, Featured, photography, reference

    http://www.wikilectures.eu/index.php/Spectral_sensitivity_of_the_human_eye

    http://www.normankoren.com/Human_spectral_sensitivity_small.jpg

    The spectral sensitivity of the eye is influenced by light intensity, which determines the level of activity of the cone and rod cells. This is the main characteristic of human vision. Sensitivity to individual colors, in other words to wavelengths of the light spectrum, is explained by the RGB (red-green-blue) theory. This theory assumes that there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, and that their mutual interaction allows us to perceive all colors of the spectrum.
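    The three-cone idea can be sketched with toy response curves. The peaks and widths below are rough assumptions placed inside the wavelength ranges quoted above, not measured physiological data; the point is only that the ratio of the three responses is what encodes a color.

```python
import math

# Illustrative sketch: model each cone type's spectral response as a
# Gaussian centered inside the wavelength ranges quoted above.
# Peak wavelengths and widths are rough assumptions for the example.
CONES = {
    "blue":  (470.0, 35.0),   # within the ~490-450 nm range
    "green": (530.0, 40.0),   # within the ~560-500 nm range
    "red":   (660.0, 45.0),   # within the ~700-630 nm range
}

def cone_response(wavelength_nm, peak, width):
    return math.exp(-0.5 * ((wavelength_nm - peak) / width) ** 2)

def relative_stimulation(wavelength_nm):
    """Relative response of each cone type to one monochromatic wavelength."""
    return {name: cone_response(wavelength_nm, peak, width)
            for name, (peak, width) in CONES.items()}

# Monochromatic 550 nm light stimulates the "green" cones most strongly;
# the triplet of responses, not any single one, is the color signal.
resp = relative_stimulation(550.0)
assert resp["green"] > resp["red"] and resp["green"] > resp["blue"]
```

    The same mechanism explains mixing: a light that produces the same three response ratios as another is perceived as the same color, regardless of its actual spectrum.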

    http://weeklysciencequiz.blogspot.com/2013/01/violet-skies-are-for-birds.html
    Views : 8,185
  • AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability

    pIXELsHAM.com
    Oct 4, 2022
    A.I., Featured, ves

    https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

    “Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”

    “In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”

    “It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”

    “Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”

    Views : 695
  • Colors For Commercial printing

    pIXELsHAM.com
    Jul 14, 2013
    colour, production

    http://lm-burns1114-dc.blogspot.co.nz/2012_10_01_archive.html

    Views : 1,067
  • 3D layered resin painting by Lilian Lee based on Riusuke Fukahori work

    pIXELsHAM.com
    Jul 13, 2019
    design

    www.3dresinpainting.com/

    www.facebook.com/resinpaintings3D/

    Views : 1,031
  • Victor Perez | Mind-Mapping Conceptualisation of Light |

    pIXELsHAM.com
    Nov 7, 2022
    colour, composition, lighting, photography

    Views : 760


Disclaimer


Links and images on this website may be protected by the respective owners’ copyright. All data submitted by users through this site shall be treated as freely available to share.