

LATEST POSTS

  • Online 360° Panorama Viewer VR

    pIXELsHAM.com
    Apr 28, 2025
    photogrammetry, photography, production, software, VR

    https://renderstuff.com/tools/360-panorama-web-viewer/

    Views : 42
  • Arto T. – A workflow for creating photorealistic, equirectangular 360° panoramas in ComfyUI using Flux

    pIXELsHAM.com
    Apr 27, 2025
    A.I., lighting, photography

    https://civitai.com/models/735980/flux-equirectangular-360-panorama

    https://civitai.com/models/745010?modelVersionId=833115

    The trigger phrase is “equirectangular 360 degree panorama”. I would avoid saying “spherical projection” since that tends to result in non-equirectangular spherical images.

    Image resolution should always be a 2:1 aspect ratio. 1024 x 512 or 1408 x 704 work quite well and were used in the training data. 2048 x 1024 also works.

    I suggest using a weight of 0.5 – 1.5. If you are having issues with the image generating too flat instead of having the necessary spherical distortion, try increasing the weight above 1, though this could negatively impact small details of the image. For Flux guidance, I recommend a value of about 2.5 for realistic scenes.

    Output is 8-bit at the moment.
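
    The 2:1 aspect-ratio rule above can be sketched as a small helper — a minimal illustration with hypothetical names, snapping a requested width to one of the resolutions the post says were used in training:

```python
# Sizes the post lists as used in training, plus 2048x1024 which "also works".
TRAINED_SIZES = [(1024, 512), (1408, 704), (2048, 1024)]

def is_equirectangular(width: int, height: int) -> bool:
    """An equirectangular panorama must be exactly twice as wide as it is tall."""
    return height > 0 and width == 2 * height

def nearest_trained_size(width: int) -> tuple[int, int]:
    """Snap a requested width to the closest known-good 2:1 resolution."""
    return min(TRAINED_SIZES, key=lambda wh: abs(wh[0] - width))
```

    Generating at one of these sizes avoids the flat, non-spherical results the post warns about when the model is pushed outside its training distribution.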

    Views : 131
  • Scientists claim to have discovered ‘new colour’ no one has seen before: Olo

    pIXELsHAM.com
    Apr 27, 2025
    colour

    https://www.bbc.com/news/articles/clyq0n3em41o

    By stimulating specific cells in the retina, the participants claim to have witnessed a blue-green colour that scientists have called “olo”, but some experts have said the existence of a new colour is “open to argument”.

    The findings, published in the journal Science Advances on Friday, have been described by the study’s co-author, Prof Ren Ng from the University of California, as “remarkable”.

    (A) System inputs. (i) Retina map of 103 cone cells preclassified by spectral type (7). (ii) Target visual percept (here, a video of a child, see movie S1 at 1:04). (iii) Infrared cellular-scale imaging of the retina with 60-frames-per-second rolling shutter. Fixational eye movement is visible over the three frames shown.

    (B) System outputs. (iv) Real-time per-cone target activation levels to reproduce the target percept, computed by: extracting eye motion from the input video relative to the retina map; identifying the spectral type of every cone in the field of view; computing the per-cone activation the target percept would have produced. (v) Intensities of visible-wavelength 488-nm laser microdoses at each cone required to achieve its target activation level.

    (C) Infrared imaging and visible-wavelength stimulation are physically accomplished in a raster scan across the retinal region using AOSLO. By modulating the visible-wavelength beam’s intensity, the laser microdoses shown in (v) are delivered. Drawing adapted with permission [Harmening and Sincich (54)].

    (D) Examples of target percepts with corresponding cone activations and laser microdoses, ranging from colored squares to complex imagery. Teal-striped regions represent the color “olo” of stimulating only M cones.

    Views : 27
  • KIRI Engine 3.12 – Mesh From 3D Gaussian Splatting

    pIXELsHAM.com
    Apr 27, 2025
    photogrammetry

    Will This Replace Photogrammetry And 3D Scanner?

    Views : 20
  • Finn Jäger – From HEIC (High Efficiency Image Container) iPhone to a Multichannel EXR

    pIXELsHAM.com
    Apr 27, 2025
    photography, software

    Finn Jäger has spent some time making a sleeker tool for all you VFX nerds out there: it takes an iPhone HEIC still and exports a multichannel EXR. The cool thing is that it also converts to ACEScg, and it merges the SDR base image with the gain map according to Apple’s math: hdr_rgb = sdr_rgb * (1.0 + (headroom – 1.0) * gainmap)

    https://github.com/finnschi/heic-shenanigans
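
    The gain-map math quoted above can be sketched per channel — a minimal pure-Python illustration (function name hypothetical; the real tool operates on full image buffers):

```python
def apply_gain_map(sdr_rgb, headroom, gainmap):
    """Reconstruct HDR linear values from an SDR base image and its gain map.

    Implements the formula quoted above:
        hdr_rgb = sdr_rgb * (1.0 + (headroom - 1.0) * gainmap)
    headroom (> 1.0) is how much brighter the display peak is than SDR white;
    gainmap holds per-channel values in [0, 1] (0 = keep SDR, 1 = full headroom).
    """
    return [c * (1.0 + (headroom - 1.0) * g) for c, g in zip(sdr_rgb, gainmap)]

# A mid-grey SDR pixel with full gain and 4x headroom maps to 4x the SDR value:
# apply_gain_map([0.5, 0.5, 0.5], 4.0, [1.0, 1.0, 1.0]) -> [2.0, 2.0, 2.0]
```

    Where the gain map is zero the SDR value passes through unchanged, which is what lets one file serve both SDR and HDR displays.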

    Views : 20
  • EDGS – Eliminating Densification for Efficient Convergence of 3DGS

    pIXELsHAM.com
    Apr 27, 2025
    photogrammetry

    https://compvis.github.io/EDGS

    https://github.com/CompVis/EDGS

    https://compvis-edgs.hf.space

    Views : 48
  • Wētā FX – Compositing and Lighting Godzilla X Kong: The New Empire

    pIXELsHAM.com
    Apr 27, 2025
    lighting, production

    Views : 21
  • How Arvid Schneider Created This Star Wars Environment | Houdini Biomes & Nuke

    pIXELsHAM.com
    Apr 27, 2025
    lighting, modeling, production

    Views : 19
  • Mars Lewis on Brandolini’s Law

    pIXELsHAM.com
    Apr 27, 2025
    quotes

    Brandolini’s law (or the bullshit asymmetry principle) is an internet adage coined in 2013 by Italian programmer Alberto Brandolini. It compares the considerable effort of debunking misinformation to the relative ease of creating it in the first place.

    The law states: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.”

    https://en.wikipedia.org/wiki/Brandolini%27s_law

    https://www.linkedin.com/posts/marslewis_brandolinislaw-propagandawars-medialies-activity-7320036000473255936-LQVG

    This is why every time you kill a lie, it feels like nothing changed. It’s why no matter how many facts you post, how many sources you cite, how many receipts you show—the swarm just keeps coming. Because while you’re out in the open doing surgery, the machine is behind the curtain spraying aerosol deceit into every vent.

    The lie takes ten seconds. The truth takes ten paragraphs. And by the time you’ve written the tenth, the people you’re trying to reach have already scrolled past.

    Every viral deception—the fake quote, the rigged video, the synthetic outrage—takes almost nothing to create. And once it’s out there, you’re not just correcting a fact—you’re prying it out of someone’s identity. Because people don’t adopt lies just for information. They adopt them for belonging. The lie becomes part of who they are, and your correction becomes an attack.

    And still—you must correct it. Still, you must fight.

    Because even if truth doesn’t spread as fast, it roots deeper. Even if it doesn’t go viral, it endures. And eventually, it makes people bulletproof to the next wave of narrative sewage.

    You’re not here to win a one-day war. You’re here to outlast a never-ending invasion.

    The lies are roaches. You kill one, and a hundred more scramble behind the drywall. The lies are Hydra heads. You cut one off, and two grow back. But you keep swinging anyway.

    Because this isn’t about instant wins. It’s about making the cost of lying higher. It’s about being the resistance that doesn’t fold. You don’t fight because it’s easy. You fight because it’s right.

    Views : 25
  • GenUE – Direct Prompt-to-Mesh Generation in Unreal Engine Integrated with ComfyUI

    pIXELsHAM.com
    Apr 27, 2025
    A.I., modeling

    GenUE brings prompt-driven 3D asset creation directly into Unreal Engine using ComfyUI as a flexible backend.
    • Generate high-quality images from text prompts.
    • Choose from a catalog of batch-generated images – no style limitations.
    • Convert the selected image to a fully textured 3D mesh.
    • Automatically import and place the model into your Unreal Engine scene.

    This modular pipeline gives you full control over the image and 3D generation stages, with support for any ComfyUI workflow or model. Full generation (image + mesh + import) completes in under 2 minutes on a high-end consumer GPU.

    Views : 118
  • Michael Gerard – Unreal Engine Nanite Foliage full workflow

    pIXELsHAM.com
    Apr 27, 2025
    modeling

    https://www.artstation.com/blogs/michael_g_art/AgdAb/nanite-foliage-complete-workflow

    Views : 58
  • Edward Ureña – Rig creator

    pIXELsHAM.com
    Apr 27, 2025
    animation

    https://edwardurena.gumroad.com/l/ramoo

    What it offers:
    • Base rigs for multiple character types
    • Automatic weight application
    • Built-in facial rigging system
    • Bone generators with FK and IK options
    • Streamlined constraint panel

    Views : 15
  • Hadi Karimi – Full online modeling workshop

    pIXELsHAM.com
    Apr 27, 2025
    modeling

    https://www.youtube.com/HadiKarimi/streams

    Views : 15
  • GPT-Image-1 API now available through ComfyUI with Dall-E integration

    pIXELsHAM.com
    Apr 27, 2025
    A.I.

    https://blog.comfy.org/p/comfyui-now-supports-gpt-image-1

    https://docs.comfy.org/tutorials/api-nodes/openai/gpt-image-1

    https://openai.com/index/image-generation-api

    • Prompt GPT-Image-1 directly in ComfyUI using text or image inputs
    • Set resolution and quality
    • Supports image editing + transparent backgrounds
    • Seamlessly mix with local workflows like WAN 2.1, FLUX Tools, and more

    Views : 60
  • Tencent Hunyuan3D 2.5 – Transform images and text into 3D models with ultra-high-definition precision

    pIXELsHAM.com
    Apr 27, 2025
    A.I., modeling

    https://www.hunyuan-3d.com/

    What makes it special?
    • Massive 10B parameter geometric model with 10x more mesh faces.
    • High-quality textures with industry-first multi-view PBR generation.
    • Optimized skeletal rigging for streamlined animation workflows.
    • Flexible pipeline for text-to-3D and image-to-3D generation.

    They’re making it accessible to everyone:
    • Open-source code and pre-trained models.
    • Easy-to-use API and intuitive web interface.
    • Free daily quota doubled to 20 generations!

    Views : 315

FEATURED POSTS

  • Ultimate Guide to Camera Aperture

    pIXELsHAM.com
    Sep 13, 2020
    composition, photography

    Views : 811
  • FreeCodeCamp.org – Perception for Self-Driving Cars

    pIXELsHAM.com
    Jan 29, 2022
    A.I., production, software

    www.freecodecamp.org/news/perception-for-self-driving-cars-deep-learning-course/

    Views : 728
  • Mastering Camera Shots and Angles: A Guide for Filmmakers

    pIXELsHAM.com
    Mar 30, 2025
    composition, photography

    https://website.ltx.studio/blog/mastering-camera-shots-and-angles

    1. Extreme Wide Shot
    2. Wide Shot
    3. Medium Shot
    4. Close Up
    5. Extreme Close Up

    Views : 18
  • RawTherapee – a free, open source, cross-platform raw image and HDRi processing program

    pIXELsHAM.com
    Mar 5, 2021
    colour, Featured, lighting, photography, software

    rawtherapee.com/

    Version 5.10 of this tool includes excellent features to clean up CR2 and CR3 files shot on set to support HDRI processing, converting raw files to ACEScg 32-bit TIFFs with metadata.

    Views : 748
  • AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability

    pIXELsHAM.com
    Oct 4, 2022
    A.I., Featured, ves

    https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

    “Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”

    “In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”

    “It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”

    “Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”

    Views : 695
  • Victor Perez – CA Color Management Fundamentals & ACES Workflows in Foundry Nuke

    pIXELsHAM.com
    Apr 25, 2021
    colour, lighting, software

    Views : 1,059
  • Origami Diffusion Clip. Created with Stable Diffusion + Control Net

    pIXELsHAM.com
    Apr 23, 2023
    A.I., design

    Views : 521
  • Light properties

    pIXELsHAM.com
    Nov 23, 2018
    lighting, photography, reference

    How It Works – Issue 114
    https://www.howitworksdaily.com/

    Views : 2,124


Disclaimer


Links and images on this website may be protected by the respective owners’ copyright. All data submitted by users through this site shall be treated as freely available to share.