LATEST POSTS

  • Studio Tim Fu – Living Sketches architecture

    pIXELsHAM.com
    Jun 2, 2025
    design

    https://timfu.com/

    Views : 25
  • How to Build & Sell AI Agents – Ultimate Beginner’s Guide

    pIXELsHAM.com
    Jun 2, 2025
    A.I., production

    Views : 5
  • N8N.io – From Zero to Your First AI Agent in 25 Minutes

    pIXELsHAM.com
    Jun 2, 2025
    A.I., Featured, production

    https://n8n.io

    https://github.com/n8n-io/self-hosted-ai-starter-kit

    Views : 18
  • Transformer Explainer – Interactive Learning of Text-Generative Models

    pIXELsHAM.com
    Jun 2, 2025
    A.I.

    https://github.com/poloclub/transformer-explainer

    Transformer Explainer is an interactive visualization tool designed to help anyone learn how Transformer-based models like GPT work. It runs a live GPT-2 model right in your browser, allowing you to experiment with your own text and observe in real time how internal components and operations of the Transformer work together to predict the next tokens. Try Transformer Explainer at http://poloclub.github.io/transformer-explainer
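
    The core operation the Explainer visualizes, predicting the next token from per-vocabulary logits, can be sketched in plain Python. This is a toy illustration with made-up logits and vocabulary, not the tool's actual code:

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_token(logits, vocab):
    # A Transformer's final layer emits one logit per vocabulary entry;
    # softmax turns them into probabilities, and greedy decoding
    # simply picks the most probable token.
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return vocab[best], probs[best]

vocab = ["cat", "dog", "sat"]  # hypothetical three-word vocabulary
token, prob = predict_next_token([1.0, 0.5, 3.0], vocab)
print(token)  # "sat", since it has the largest logit
```

    Real models like GPT-2 sample from this distribution (temperature, top-k) rather than always taking the argmax.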

    Views : 14
  • How to Design for 3D Printing in Blender – Beginner Tutorial

    pIXELsHAM.com
    Jun 2, 2025
    3Dprinting, blender, modeling

    Views : 14
  • Henry Daubrez – How to generate VR/360 videos directly with Google VEO

    pIXELsHAM.com
    May 30, 2025
    A.I., VR

    https://www.linkedin.com/posts/upskydown_vr-googleveo-veo3-activity-7334269406396461059-d8Da

    If you prompt for a 360° video in VEO (like literally write “360°” in the prompt) it can generate a monoscopic 360 video. The next step is to inject the right metadata into the file so you can play it as an actual 360 video.
    Once it’s saved with the right metadata, it will be recognized as an actual 360/VR video, meaning you can just play it in VLC and drag your mouse to look around.

    Spatial Media Metadata Injector – for 360 videos
    Views : 14
  • Revopoint Trackit – Optical Tracking 3D Scanner

    pIXELsHAM.com
    May 30, 2025
    photogrammetry

    https://www.kickstarter.com/projects/revopoint3d/revopoint-trackit-optical-tracking-3d-scanner

    Views : 8
  • Teoman Şirvancı – Making a CG F1 Toy Car turntable with Renderman

    pIXELsHAM.com
    May 29, 2025
    lighting, modeling

    https://renderman.pixar.com/f1-toy-car

    Views : 18
  • Black Forest Labs released FLUX.1 Kontext

    pIXELsHAM.com
    May 29, 2025
    A.I., Featured, production

    https://replicate.com/blog/flux-kontext

    https://replicate.com/black-forest-labs/flux-kontext-pro

    There are three models, two are available now, and a third open-weight version is coming soon:

    • FLUX.1 Kontext [pro]: State-of-the-art performance for image editing. High-quality outputs, great prompt following, and consistent results.
    • FLUX.1 Kontext [max]: A premium model that brings maximum performance, improved prompt adherence, and high-quality typography generation without compromise on speed.
    • Coming soon: FLUX.1 Kontext [dev]: An open-weight, guidance-distilled version of Kontext.

    We’re so excited about what Kontext can do that we’ve created a collection of models on Replicate to give you ideas:

    • Multi-image kontext: Combine two images into one.
    • Portrait series: Generate a series of portraits from a single image.
    • Change haircut: Change a person’s hair style and color.
    • Iconic locations: Put yourself in front of famous landmarks.
    • Professional headshot: Generate a professional headshot from any image.

    Views : 78
  • AI Models – A walkthrough by Andreas Horn

    pIXELsHAM.com
    May 28, 2025
    A.I.

    The 8 most important model types and what they’re actually built to do:

    1. LLM – Large Language Model
    → Your ChatGPT-style model.
    Handles text, predicts the next token, and powers 90% of GenAI hype.
    Use case: content, code, convos.

    2. LCM – Latent Consistency Model
    → Lightweight, diffusion-style models.
    Fast, quantized, and efficient: perfect for real-time or edge deployment.
    Use case: image generation, optimized inference.

    3. LAM – Language Action Model
    → Where LLM meets planning.
    Adds memory, task breakdown, and intent recognition.
    Use case: AI agents, tool use, step-by-step execution.

    4. MoE – Mixture of Experts
    → One model, many minds.
    Routes input to the right “expert” model slice: dynamic, scalable, efficient.
    Use case: high-performance model serving at low compute cost.

    5. VLM – Vision Language Model
    → Multimodal beast.
    Combines image + text understanding via shared embeddings.
    Use case: Gemini, GPT-4o, search, robotics, assistive tech.

    6. SLM – Small Language Model
    → Tiny but mighty.
    Designed for edge use, fast inference, low latency, efficient memory.
    Use case: on-device AI, chatbots, privacy-first GenAI.

    7. MLM – Masked Language Model
    → The OG foundation model.
    Predicts masked tokens using bidirectional context.
    Use case: search, classification, embeddings, pretraining.

    8. SAM – Segment Anything Model
    → Vision model for pixel-level understanding.
    Highlights, segments, and understands *everything* in an image.
    Use case: medical imaging, AR, robotics, visual agents.
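
    The MoE routing in point 4 boils down to a gating function that scores experts and keeps only the top few. A toy sketch (hypothetical gate logits; real MoE layers learn these per token inside the network):

```python
import math

def route_to_experts(gate_logits, top_k=2):
    # Softmax over the gate scores gives a weight per expert
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Keep only the top-k experts (sparse activation: the rest stay idle)
    ranked = sorted(range(len(weights)), key=weights.__getitem__, reverse=True)
    chosen = ranked[:top_k]
    # Renormalize so the selected experts' weights sum to 1
    norm = sum(weights[i] for i in chosen)
    return [(i, weights[i] / norm) for i in chosen]

# Four experts; the gate sends this input mostly to experts 1 and 3
print(route_to_experts([0.1, 2.0, -1.0, 1.5]))
```

    This is why MoE serving is cheap: only the top_k expert slices run per input, not the whole model.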

    Views : 19
  • Spaitial.ai – Spatial Foundation Models

    pIXELsHAM.com
    May 28, 2025
    A.I., photogrammetry

    https://www.spaitial.ai/

    Views : 14
  • Introducing ComfyUI Native API Nodes

    pIXELsHAM.com
    May 22, 2025
    A.I.

    https://blog.comfy.org/p/comfyui-native-api-nodes

    Models Supported

    • Black Forest Labs Flux 1.1 [pro] Ultra, Flux.1 [pro]
    • Kling 2.0, 1.6, 1.5 & Various Effects
    • Luma Photon, Ray2, Ray1.6
    • MiniMax Text-to-Video, Image-to-Video
    • PixVerse V4 & Effects
    • Recraft V3, V2 & Various Tools
    • Stability AI Stable Image Ultra, Stable Diffusion 3.5 Large
    • Google Veo2
    • Ideogram V3, V2, V1
    • OpenAI GPT-4o image
    • Pika 2.2

    Views : 15
  • ComfyUI-CoCoTools_IO – A set of nodes focused on advanced image I/O operations, particularly for EXR file handling

    pIXELsHAM.com
    May 21, 2025
    A.I., production

    https://github.com/Conor-Collins/ComfyUI-CoCoTools_IO

    Features

    • Advanced EXR image input with multilayer support
    • EXR layer extraction and manipulation
    • High-quality image saving with format-specific options
    • Standard image format loading with bit depth awareness

    Current Nodes

    Image I/O

    • Image Loader: Load standard image formats (PNG, JPG, WebP, etc.) with proper bit depth handling
    • Load EXR: Comprehensive EXR file loading with support for multiple layers, channels, and cryptomatte data
    • Load EXR Layer by Name: Extract specific layers from EXR files (similar to Nuke’s Shuffle node)
    • Cryptomatte Layer: Specialized handling for cryptomatte layers in EXR files
    • Image Saver: Save images in various formats with format-specific options (bit depth, compression, etc.)

    Image Processing

    • Colorspace: Convert between sRGB and Linear colorspaces
    • Z Normalize: Normalize depth maps and other single-channel data
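
    The sRGB/Linear conversion a node like Colorspace performs follows the standard sRGB transfer function; a minimal per-channel sketch (my own illustration of the math, not code from the repo):

```python
def srgb_to_linear(c):
    # sRGB EOTF: linear segment near black, 2.4-exponent curve elsewhere
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Inverse transform, back to display-referred sRGB
    return v * 12.92 if v <= 0.0031308 else 1.055 * (v ** (1 / 2.4)) - 0.055

# Mid-grey in sRGB is much darker in linear light (~0.214),
# which is why compositing math should happen in linear
print(round(srgb_to_linear(0.5), 3))
```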
    Views : 71
  • Google AI – Meet Flow, AI-Powered Filmmaking with Veo 3

    pIXELsHAM.com
    May 21, 2025
    A.I.

    https://blog.google/technology/ai/google-flow-veo-ai-filmmaking-tool/

    Views : 14
  • NVidia – 3D Guided Generative AI restyling in Blender

    pIXELsHAM.com
    May 21, 2025
    A.I., blender

    https://build.nvidia.com/nvidia/genai-3d-guided

    https://github.com/NVIDIA-AI-Blueprints/3d-guided-genai-rtx

    Views : 57

FEATURED POSTS

  • 9 Best Hacks to Make a Cinematic Video with Any Camera

    pIXELsHAM.com
    Jul 23, 2022
    composition, lighting, photography, production

    https://www.flexclip.com/learn/cinematic-video.html

    • Frame Your Shots to Create Depth
    • Create Shallow Depth of Field
    • Avoid Shaky Footage and Use Flexible Camera Movements
    • Properly Use Slow Motion
    • Use Cinematic Lighting Techniques
    • Apply Color Grading
    • Use Cinematic Music and SFX
    • Add Cinematic Fonts and Text Effects
    • Create the Cinematic Bar at the Top and the Bottom

    Views : 680
  • Feature Composition using shapes

    pIXELsHAM.com
    Dec 26, 2018
    composition, photography

    Views : 1,217
  • Key/Fill ratios and scene composition using false colors and Nuke node

    pIXELsHAM.com
    Apr 22, 2021
    composition, Featured, lighting, photography

    www.videomaker.com/article/c03/18984-how-to-calculate-contrast-ratios-for-more-professional-lighting-setups

    To measure the contrast ratio you will need a light meter. The process starts with measuring the main source of light, the key light.

    Get a reading from the brightest area on the face of your subject. Then measure the area lit by the secondary light, the fill light. The readings you have gathered are in f-stops, a measure of light. Each additional f-stop, for example going one stop from f/1.4 to f/2.0, doubles the light; the reverse is also true: moving one stop from f/8.0 to f/5.6 halves it.
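
    The stop arithmetic above reduces to powers of two; a small sketch (illustrative function names, not from the article):

```python
import math

def stops_between(f_key, f_fill):
    # Light varies with the square of the f-number, so the stop
    # difference between two meter readings is 2 * log2(N_key / N_fill)
    return 2 * math.log2(f_key / f_fill)

def key_to_fill_ratio(stop_difference):
    # Each full stop doubles the light: n stops apart is a 2**n : 1 ratio
    return 2 ** stop_difference

# A key reading of f/5.6 against a fill reading of f/2.8 is two stops,
# i.e. a 4:1 key-to-fill contrast ratio
print(key_to_fill_ratio(stops_between(5.6, 2.8)))
```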

    Views : 3,083
  • ComfyDock – The Easiest (Free) Way to Safely Run ComfyUI Sessions in a Boxed Container

    pIXELsHAM.com
    Mar 7, 2025
    A.I., Featured

    https://www.reddit.com/r/comfyui/comments/1j2x4qv/comfydock_the_easiest_free_way_to_run_comfyui_in/

    https://github.com/ComfyDock

    ComfyDock is a tool that allows you to easily manage your ComfyUI environments via Docker.

    Common Challenges with ComfyUI

    • Custom Node Installation Issues: Installing new custom nodes can inadvertently change settings across the whole installation, potentially breaking the environment.
    • Workflow Compatibility: Workflows are often tested with specific custom nodes and ComfyUI versions. Running these workflows on different setups can lead to errors and frustration.
    • Security Risks: Installing custom nodes directly on your host machine increases the risk of malicious code execution.

    How ComfyDock Helps

    • Environment Duplication: Easily duplicate your current environment before installing custom nodes. If something breaks, revert to the original environment effortlessly.
    • Deployment and Sharing: Workflow developers can commit their environments to a Docker image, which can be shared with others and run on cloud GPUs to ensure compatibility.
    • Enhanced Security: Containers help to isolate the environment, reducing the risk of malicious code impacting your host machine.

    Views : 104
  • DUNE | DP Greig Fraser ACS, ASC | ShotDeck: Shot Talk

    pIXELsHAM.com
    Mar 17, 2022
    colour, composition, lighting

    Views : 710
  • Stefan Pabst – Mind-Bending 3D Drawings

    pIXELsHAM.com
    Jan 7, 2016
    design

    http://www.booooooom.com/2015/10/22/video-of-the-day-youtube-sensation-stefan-pabsts-mind-bending-3d-drawings/

    Views : 1,134
  • Black Body color aka the Planckian Locus curve for white point eye perception

    pIXELsHAM.com
    Mar 14, 2013
    colour, Featured, lighting, photography, reference

    http://en.wikipedia.org/wiki/Black-body_radiation

    Black-body radiation is the type of electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an opaque and non-reflective body) held at constant, uniform temperature. The radiation has a specific spectrum and intensity that depends only on the temperature of the body.

    A black-body at room temperature appears black, as most of the energy it radiates is infra-red and cannot be perceived by the human eye. At higher temperatures, black bodies glow with increasing intensity and colors that range from dull red to blindingly brilliant blue-white as the temperature increases.
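
    The dull-red-to-blue-white shift follows Wien's displacement law, which puts the emission peak at a wavelength inversely proportional to temperature. A minimal sketch:

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_nm(temperature_k):
    # Wien's displacement law: peak wavelength = b / T,
    # so hotter bodies peak at shorter (bluer) wavelengths
    return WIEN_B / temperature_k * 1e9

# A ~300 K body peaks deep in the infrared (~9660 nm, invisible to the eye);
# a 6500 K "daylight" source peaks near 446 nm, well inside the visible range
print(round(peak_wavelength_nm(300)), round(peak_wavelength_nm(6500)))
```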

    Views : 3,830