Subscribe to PixelSham.com RSS for free


LATEST POSTS

  • JangaFX IlluGen – Node-based visual effects software designed specifically for video games

    pIXELsHAM.com
    Apr 28, 2025
    production, software

    https://80.lv/articles/the-first-look-at-illugen-jangafx-s-new-tool-for-vfx-in-games-tech-art/

    Views : 69
  • Online 360° Panorama Viewer VR

    pIXELsHAM.com
    Apr 28, 2025
    photogrammetry, photography, production, software, VR

    https://renderstuff.com/tools/360-panorama-web-viewer/

    Views : 42
  • Arto T. – A workflow for creating photorealistic, equirectangular 360° panoramas in ComfyUI using Flux

    pIXELsHAM.com
    Apr 27, 2025
    A.I., lighting, photography

    https://civitai.com/models/735980/flux-equirectangular-360-panorama

    https://civitai.com/models/745010?modelVersionId=833115

    The trigger phrase is “equirectangular 360 degree panorama”. I would avoid saying “spherical projection” since that tends to result in non-equirectangular spherical images.

    Image resolution should always be a 2:1 aspect ratio. 1024 x 512 or 1408 x 704 work quite well and were used in the training data. 2048 x 1024 also works.

    I suggest using a weight of 0.5 – 1.5. If you are having issues with the image generating too flat instead of having the necessary spherical distortion, try increasing the weight above 1, though this could negatively impact small details of the image. For Flux guidance, I recommend a value of about 2.5 for realistic scenes.

    Output is 8-bit at the moment.
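
    The settings above can be encoded as a small helper. This is a hypothetical sketch (the function and constant names are mine, not part of the model release) that prepends the trigger phrase and enforces the 2:1 aspect rule:

```python
# Hypothetical helper encoding the post's advice; not part of the model release.
TRIGGER = "equirectangular 360 degree panorama"

def build_panorama_prompt(subject, width=1408, height=704, lora_weight=1.0):
    """Prepend the LoRA trigger phrase and enforce the 2:1 aspect rule."""
    if width != 2 * height:
        raise ValueError(
            f"{width}x{height} is not 2:1; try 1024x512, 1408x704 or 2048x1024"
        )
    if not 0.5 <= lora_weight <= 1.5:
        raise ValueError("suggested LoRA weight range is 0.5 - 1.5")
    return f"{TRIGGER}, {subject}", width, height

prompt, w, h = build_panorama_prompt("a mossy forest clearing at dawn")
```

    The returned prompt string and dimensions would then be fed to whatever Flux front end you use.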

    Views : 131
  • Scientists claim to have discovered ‘new colour’ no one has seen before: Olo

    pIXELsHAM.com
    Apr 27, 2025
    colour

    https://www.bbc.com/news/articles/clyq0n3em41o

    By stimulating specific cells in the retina, the participants claim to have witnessed a blue-green colour that scientists have called “olo”, but some experts have said the existence of a new colour is “open to argument”.

    The findings, published in the journal Science Advances on Friday, have been described by the study’s co-author, Prof Ren Ng from the University of California, as “remarkable”.

    (A) System inputs. (i) Retina map of 103 cone cells preclassified by spectral type (7). (ii) Target visual percept (here, a video of a child, see movie S1 at 1:04). (iii) Infrared cellular-scale imaging of the retina with 60-frames-per-second rolling shutter. Fixational eye movement is visible over the three frames shown.

    (B) System outputs. (iv) Real-time per-cone target activation levels to reproduce the target percept, computed by: extracting eye motion from the input video relative to the retina map; identifying the spectral type of every cone in the field of view; computing the per-cone activation the target percept would have produced. (v) Intensities of visible-wavelength 488-nm laser microdoses at each cone required to achieve its target activation level.

    (C) Infrared imaging and visible-wavelength stimulation are physically accomplished in a raster scan across the retinal region using AOSLO. By modulating the visible-wavelength beam’s intensity, the laser microdoses shown in (v) are delivered. Drawing adapted with permission [Harmening and Sincich (54)].

    (D) Examples of target percepts with corresponding cone activations and laser microdoses, ranging from colored squares to complex imagery. Teal-striped regions represent the color “olo” of stimulating only M cones.

    Views : 27
  • KIRI Engine 3.12 – Mesh From 3D Gaussian Splatting

    pIXELsHAM.com
    Apr 27, 2025
    photogrammetry

    Will This Replace Photogrammetry And 3D Scanner?

    Views : 20
  • Finn Jäger – From an iPhone HEIC (High Efficiency Image Container) to a Multichannel EXR

    pIXELsHAM.com
    Apr 27, 2025
    photography, software

    Finn Jäger has spent some time making a sleeker tool for all you VFX nerds out there: it takes an iPhone HEIC still and exports a multichannel EXR. The cool thing is that it also converts to ACEScg and merges the SDR base image with the gain map according to Apple's math:

    hdr_rgb = sdr_rgb * (1.0 + (headroom - 1.0) * gainmap)

    https://github.com/finnschi/heic-shenanigans
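
    A minimal NumPy sketch of that gain-map merge (a toy reimplementation of the formula above, not the repo's actual code):

```python
import numpy as np

def apply_gain_map(sdr_rgb, gainmap, headroom):
    """Reconstruct HDR from an SDR base image plus its gain map, per Apple's
    formula: hdr_rgb = sdr_rgb * (1.0 + (headroom - 1.0) * gainmap)."""
    return sdr_rgb * (1.0 + (headroom - 1.0) * gainmap)

# A fully "gained" pixel (gainmap = 1) is boosted by the full headroom:
hdr = apply_gain_map(np.array([0.5, 0.5, 0.5]), np.ones(3), headroom=4.0)
# -> [2.0, 2.0, 2.0]
```

    A gain-map value of 0 leaves the SDR pixel untouched, so the map smoothly blends between the SDR base and the full-headroom HDR reconstruction.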

    Views : 20
  • EDGS – Eliminating Densification for Efficient Convergence of 3DGS

    pIXELsHAM.com
    Apr 27, 2025
    photogrammetry

    https://compvis.github.io/EDGS

    https://github.com/CompVis/EDGS

    https://compvis-edgs.hf.space

    Views : 48
  • Wētā FX – Compositing and Lighting Godzilla X Kong: The New Empire

    pIXELsHAM.com
    Apr 27, 2025
    lighting, production

    Views : 21
  • How Arvid Schneider Created This Star Wars Environment | Houdini Biomes & Nuke

    pIXELsHAM.com
    Apr 27, 2025
    lighting, modeling, production

    Views : 19
  • Mars Lewis on Brandolini’s Law

    pIXELsHAM.com
    Apr 27, 2025
    quotes

    Brandolini’s law (or the bullshit asymmetry principle) is an internet adage coined in 2013 by Italian programmer Alberto Brandolini. It compares the considerable effort of debunking misinformation to the relative ease of creating it in the first place.

    The law states: “The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.”

    https://en.wikipedia.org/wiki/Brandolini%27s_law

    https://www.linkedin.com/posts/marslewis_brandolinislaw-propagandawars-medialies-activity-7320036000473255936-LQVG

    This is why every time you kill a lie, it feels like nothing changed. It’s why no matter how many facts you post, how many sources you cite, how many receipts you show—the swarm just keeps coming. Because while you’re out in the open doing surgery, the machine is behind the curtain spraying aerosol deceit into every vent.

    The lie takes ten seconds. The truth takes ten paragraphs. And by the time you’ve written the tenth, the people you’re trying to reach have already scrolled past.

    Every viral deception—the fake quote, the rigged video, the synthetic outrage—takes almost nothing to create. And once it’s out there, you’re not just correcting a fact—you’re prying it out of someone’s identity. Because people don’t adopt lies just for information. They adopt them for belonging. The lie becomes part of who they are, and your correction becomes an attack.

    And still—you must correct it. Still, you must fight.

    Because even if truth doesn’t spread as fast, it roots deeper. Even if it doesn’t go viral, it endures. And eventually, it makes people bulletproof to the next wave of narrative sewage.

    You’re not here to win a one-day war. You’re here to outlast a never-ending invasion.

    The lies are roaches. You kill one, and a hundred more scramble behind the drywall. The lies are Hydra heads. You cut one off, and two grow back. But you keep swinging anyway.

    Because this isn’t about instant wins. It’s about making the cost of lying higher. It’s about being the resistance that doesn’t fold. You don’t fight because it’s easy. You fight because it’s right.

    Views : 25
  • GenUE – Direct Prompt-to-Mesh Generation in Unreal Engine Integrated with ComfyUI

    pIXELsHAM.com
    Apr 27, 2025
    A.I., modeling

    GenUE brings prompt-driven 3D asset creation directly into Unreal Engine, using ComfyUI as a flexible backend.
    • Generate high-quality images from text prompts.
    • Choose from a catalog of batch-generated images – no style limitations.
    • Convert the selected image to a fully textured 3D mesh.
    • Automatically import and place the model into your Unreal Engine scene.
    This modular pipeline gives you full control over the image and 3D generation stages, with support for any ComfyUI workflow or model. Full generation (image + mesh + import) completes in under 2 minutes on a high-end consumer GPU.

    Views : 118
  • Michael Gerard – Unreal Engine Nanite Foliage full workflow

    pIXELsHAM.com
    Apr 27, 2025
    modeling

    https://www.artstation.com/blogs/michael_g_art/AgdAb/nanite-foliage-complete-workflow

    Views : 58
  • Edward Ureña – Rig creator

    pIXELsHAM.com
    Apr 27, 2025
    animation

    https://edwardurena.gumroad.com/l/ramoo

    What it offers:
    • Base rigs for multiple character types
    • Automatic weight application
    • Built-in facial rigging system
    • Bone generators with FK and IK options
    • Streamlined constraint panel

    Views : 15
  • Hadi Karimi – Full online modeling workshop

    pIXELsHAM.com
    Apr 27, 2025
    modeling

    https://www.youtube.com/HadiKarimi/streams

    Views : 15
  • GPT-Image-1 API now available through ComfyUI with Dall-E integration

    pIXELsHAM.com
    Apr 27, 2025
    A.I.

    https://blog.comfy.org/p/comfyui-now-supports-gpt-image-1

    https://docs.comfy.org/tutorials/api-nodes/openai/gpt-image-1

    https://openai.com/index/image-generation-api

    • Prompt GPT-Image-1 directly in ComfyUI using text or image inputs
    • Set resolution and quality
    • Supports image editing + transparent backgrounds
    • Seamlessly mix with local workflows like WAN 2.1, FLUX Tools, and more

    Views : 61

FEATURED POSTS

  • Creative director and motion designer Danil Krivoruchko

    pIXELsHAM.com
    Feb 4, 2021
    animation, composition, design

    https://myshli.com/

    Views : 714
  • Invoke.ai Canvas

    pIXELsHAM.com
    Jan 12, 2024
    A.I., software

    https://invoke-ai.github.io/InvokeAI/

    Views : 462
  • How To Create Beauty Portrait Photography on a Budget

    pIXELsHAM.com
    Oct 3, 2020
    lighting, photography

    Views : 724
  • QR code logos

    pIXELsHAM.com
    Sep 30, 2024
    design, Featured

    Reading QR codes without a computer!
    Views : 1,316
  • Photography basics: Solid Angle measures

    pIXELsHAM.com
    Aug 1, 2020
    Featured, lighting, photography

    http://www.calculator.org/property.aspx?name=solid+angle

    A measure of how large an object appears to an observer looking from a given point – in effect, a measure for objects in the sky. It is useful for describing the apparent size of the sun and moon and, in turn, how much they contribute to lighting. Solid angle can also be expressed as an ‘angular diameter’.

    http://en.wikipedia.org/wiki/Solid_angle

    http://www.mathsisfun.com/geometry/steradian.html

    A solid angle is expressed in a dimensionless unit called a steradian (symbol: sr). Relative to the total celestial sphere, and before atmospheric scattering, the Sun and the Moon subtend fractional areas of 0.000546% (Sun) and 0.000531% (Moon).

     

    http://en.wikipedia.org/wiki/Solid_angle#Sun_and_Moon

     

    On earth the sun is likely closer to 0.00011 solid angle after athmospheric scattering. The sun as perceived from earth has a diameter of 0.53 degrees. This is about 0.000064 solid angle.

    http://www.numericana.com/answer/angles.htm

    The mean angular diameter of the full moon is 2θ = 0.52° (it varies with time around that average, by about 0.009°). This translates into a solid angle of 0.0000647 sr, which means that the whole night sky covers a solid angle roughly one hundred thousand times greater than the full moon.
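
    The 0.0000647 sr figure can be reproduced from the spherical-cap formula Ω = 2π(1 − cos(θ/2)), where θ is the angular diameter. A quick sketch:

```python
import math

def solid_angle_sr(angular_diameter_deg):
    # Solid angle of a circular cap with half-angle theta/2:
    # omega = 2 * pi * (1 - cos(theta / 2))
    half_angle = math.radians(angular_diameter_deg) / 2.0
    return 2.0 * math.pi * (1.0 - math.cos(half_angle))

moon = solid_angle_sr(0.52)  # ~6.47e-5 sr, matching the figure quoted above
```

    A full hemisphere (θ = 180°) gives 2π sr, and the whole sphere 4π sr, which is why the night sky dwarfs the moon by roughly five orders of magnitude.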

     

    More info

    http://lcogt.net/spacebook/using-angles-describe-positions-and-apparent-sizes-objects

    http://amazing-space.stsci.edu/glossary/def.php.s=topic_astronomy

    Angular Size

    The apparent size of an object as seen by an observer, expressed in units of degrees (of arc), arc minutes, or arc seconds. The moon, as viewed from the Earth, has an angular diameter of one-half a degree.

    The angle covered by the diameter of the full moon is about 31 arcmin, or 1/2°, so astronomers would say the Moon’s angular diameter is 31 arcmin, or that the Moon subtends an angle of 31 arcmin.

    Views : 3,470
  • 10 Best Uses of Color in movies of All Time

    pIXELsHAM.com
    Nov 4, 2019
    colour

    Views : 939
  • Hyperrealistic sculptures

    pIXELsHAM.com
    Feb 5, 2017
    design

    http://www.webdesignerdepot.com/2009/11/mind-blowing-hyperrealistic-sculptures/

    Views : 1,364
  • mattepaint.com – library of animated sky and landscape HDRIs

    pIXELsHAM.com
    Aug 18, 2023
    lighting, photography, production, reference

    https://mattepaint.com/gallery/hdri/hdris050/

    https://www.pixelsham.com/wp-content/uploads/2023/08/mattepaint.mp4

    Views : 297

Disclaimer


Links and images on this website may be protected by the respective owners’ copyright. All data submitted by users through this site shall be treated as freely available to share.