▶︎ You send your idea (from WhatsApp, Telegram, Slack, or a manual click)
▶︎ The AI agent (powered by Gemini or any LLM) turns it into a structured video prompt
▶︎ It calls Replicate or Fal.ai to generate the video
▶︎ The final video is saved to your Google Sheet
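For anyone who wants to wire this flow up by hand, here is a minimal Python sketch of the same steps. It assumes Gemini as the LLM and uses the replicate and gspread client libraries; the model slug, sheet name, and key handling are placeholders for illustration, not details from the original workflow.

```python
# Minimal sketch of the idea-to-video flow described above.
# Assumptions: Gemini as the LLM, Replicate for generation, and a
# Google Sheet as storage. Model names and the sheet name are placeholders.
import replicate                      # pip install replicate
import gspread                        # pip install gspread
import google.generativeai as genai   # pip install google-generativeai

genai.configure(api_key="YOUR_GEMINI_KEY")

def idea_to_video(idea: str) -> str:
    # 1. Turn the raw idea into a structured video prompt with the LLM.
    llm = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
    prompt = llm.generate_content(
        f"Rewrite this idea as a detailed text-to-video prompt: {idea}"
    ).text

    # 2. Call Replicate to generate the video (model slug is illustrative).
    video_url = replicate.run(
        "kwaivgi/kling-v2.1",          # hypothetical slug; use any video model
        input={"prompt": prompt},
    )

    # 3. Append the result to a Google Sheet for later retrieval.
    sheet = gspread.service_account().open("video-ideas").sheet1
    sheet.append_row([idea, prompt, str(video_url)])
    return str(video_url)
```

Fal.ai could be swapped in for Replicate at step 2; the surrounding structure stays the same.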
Image Generation on Midjourney
Video Generation on Kling 2.1
I used a joystick PNG to add the buttons, then some ASMR video sounds to make it feel more lively, and I used text for the buttons.
Prompts:
All prompts are in the same order as in the video.
First-person POV video game screenshot, playing as a young anime protagonist in a slightly oversized white t-shirt and knee-length blue shorts. Visible hands pushing open a sun-faded wooden door, forearms resting on the frame. In a dusty hallway mirror reflection: character’s soft Ghibli-style face with windblown hair. Inside a cozy coastal cottage: slanted sunlight through lace curtains, pastel walls with watercolor seascapes, overstuffed bookshelf spilling seashells. Foreground: ‘E: Rest’ prompt over a quilted sofa. Background: steaming teacup on a driftwood table, open window revealing distant lighthouse and Miyazaki fluffy clouds. Soft painterly textures, slight fisheye lens, identical HUD (minimap corner, health bar)
First-person POV video game screenshot, playing as a young anime protagonist in a slightly oversized white t-shirt and knee-length blue shorts. View includes visible hands gripping a steering wheel, sunlit arms resting on car door, and rearview mirror showing character’s soft Ghibli-style face with windblown hair. Driving through a vibrant coastal town: cobblestone streets, pastel houses with flower boxes, distant lighthouse. Soft painterly textures, Miyazaki skies with fluffy clouds, slight fisheye lens effect, HUD elements (minimap corner, health bar).
First-person POV video game screenshot, playing as a young protagonist in a loose white t-shirt and faded denim shorts. Visible arms holding a woven basket, sneakers stepping on rain-damp cobblestones. Walking through a chaotic Ghibli street market: cramped stalls selling glowing mushrooms, floating lanterns, and spiral-cut fruits. Fishmonger shouts while soot sprites dart between crates. Foreground: vendor handing you a peach (interactive ‘E’ prompt). Background: yakuza thugs lurking near a steaming noodle cart. Soft painterly lighting, depth of field, subtle HUD (minimap corner, health bar). Studio Ghibli meets Grand Theft Auto
First-person POV video game screenshot, playing as a young anime protagonist in a slightly oversized white t-shirt (salt-stained sleeves) and knee-length blue shorts, visible hands gripping a bamboo fishing rod. Kneeling on a mossy dock pier at sunset, arms resting on knees. Foreground: ‘E: Reel In’ prompt as line pulls taut. Background: pastel fishing boats, distant lighthouse under Miyazaki’s fluffy clouds. Glowing koi fish breaching turquoise water, soot sprites stealing bait from a tin. Identical soft painterly textures, fisheye lens effect, HUD (minimap corner, health bar).
Video Prompts:
All prompts are in the same order as in the video.
The black-haired boy strides from the rustic house toward the ocean, the camera tracking his movement in a GTA-style third-person perspective as coastal winds flutter white curtains and sunlight glimmers on distant sailboats, blending warm interior details with expanding seaside horizons under a tranquil sky.
The brown-haired boy drives a vintage blue convertible along the coastal cobblestone street, colorful flower-adorned buildings passing by as the camera follows the car’s journey toward the sunlit ocean horizon, sea breeze gently tousling his hair under a serene sky.
The young boy navigates the bustling cobblestone market, basket of oranges in arm, as vibrant stalls and fluttering awnings frame his journey, the camera tracking his focused stride through chattering crowds under swaying traditional lanterns.
A school of fish swims gracefully through crystal-clear water, sunlight filtering through the surface, coral reefs swaying gently, creating a serene underwater scene with the camera stationary.
Several free or open-source VFX asset management systems are available for use in production environments. These tools vary in scope, from lightweight trackers to full-fledged pipeline frameworks. Below is a breakdown of the most notable ones and what makes them stand out.
1. Free & Open-Source VFX Asset Management Systems
1.1 OpenPype (formerly Pype)
License: Open source (Apache 2.0)
– Asset management and project structure setup
– Integrates with Maya, Houdini, Nuke, Blender, and others
– Includes publishing, versioning, and task tracking
– Web interface (OpenPype Studio) for overview and management
Strengths: Actively developed, modular and extendable, production-proven in real studios
– Production tracking, shot management
– Web-based interface with intuitive UX
– Built-in review and feedback system
– API for integration into pipelines
Strengths: Great for team collaboration, focuses on communication between departments
License: Proprietary (older versions may be available for small studios/educational users)
– Project management, review, and pipeline integration
Strengths: Industry-proven
Note: Current versions are commercial; older community editions may still be used.
1.4 Tactic
License: Open source (EPL 1.0)
– General-purpose asset and workflow management
– Web-based, highly configurable
Strengths: Adaptable to VFX pipelines, powerful templating/scripting
Drawbacks: Steep learning curve, not VFX-specific out of the box
Recommendation: OpenPype
Why:
– Specifically built for VFX and animation pipelines
– Extensively integrates with key DCCs
– Actively maintained with a large community
– Includes both asset and task management
– Works out-of-the-box but is customizable
Tencent just made Hunyuan3D 2.1 open-source. This is the first fully open-source, production-ready PBR 3D generative model with cinema-grade quality. https://github.com/Tencent-Hunyuan/Hunyuan3D-2.1
What makes it special?
• Advanced PBR material synthesis brings realistic materials like leather, bronze, and more to life with stunning light interactions.
• Complete access to model weights, training/inference code, and data pipelines.
• Optimized to run on accessible hardware.
• Built for real-world applications with professional-grade output quality.
They’re making it accessible to everyone:
• Complete open-source ecosystem with full documentation.
• Ready-to-use model weights and training infrastructure.
• Live demo available for instant testing.
• Comprehensive GitHub repository with implementation details.
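For a sense of what usage looks like, here is a rough sketch modeled on the pipeline style of the earlier Hunyuan3D-2 README. The module, class, and repo names below are assumptions carried over from that release, so check the 2.1 repository linked above for the actual entry points.

```python
# Sketch only: names follow the Hunyuan3D-2 examples and may differ in 2.1.
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline  # assumed module path

# Load pretrained weights (assumed Hugging Face repo id for the 2.1 release).
pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2.1")

# Generate a mesh from a single input image, then export it for DCC tools.
mesh = pipeline(image="demo.png")[0]
mesh.export("output.glb")  # trimesh-style export, as in the upstream examples
```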
Load holograms, animate cameras, capture frames, and feed them to your favorite AI models. Developed by Lovis Odin for Kartel.ai. You can obtain the MPD URL directly from the official 8i Web Player.
The human eye perceives half of a scene’s brightness not as a linear 50% of the light energy present, but as roughly 18% of the overall brightness. We are biased to perceive more information in dark and high-contrast areas. A Macbeth chart helps calibrate a photographic capture back to this “human perspective” of the world.
In photography, painting, and other visual arts, middle gray (or middle grey) is a tone that is perceptually about halfway between black and white on a lightness scale. In photography and printing, it is typically defined as 18% reflectance in visible light.
Light meters and cameras are often calibrated using an 18% gray card or a color reference card such as a ColorChecker. On the assumption that 18% is close to the average reflectance of a scene, a gray card can be used to estimate the required exposure of the film.
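Why 18% rather than 50%? The CIE 1976 lightness formula makes the relationship concrete: perceptual middle gray (L* = 50) corresponds to about 18.4% relative luminance. A quick check in Python, using only the standard L* definition:

```python
# Invert the CIE 1976 lightness formula, L* = 116 * (Y/Yn)**(1/3) - 16,
# to find the relative luminance Y that sits at perceptual middle (L* = 50).
def luminance_from_lstar(lstar: float) -> float:
    return ((lstar + 16) / 116) ** 3  # valid above the formula's linear toe

print(luminance_from_lstar(50))  # ~0.184, i.e. about 18% reflectance
```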
To measure the contrast ratio you will need a light meter. The process starts with measuring the main source of light, the key light.
Get a reading from the brightest area on the face of your subject. Then measure the area lit by the secondary light, or fill light. To make sense of these readings, remember that they are expressed in f-stops, a measure of light. A reading one stop higher, for example f/2.0 instead of f/1.4, indicates a doubling of the light. The reverse is also true; a reading one stop lower, f/5.6 instead of f/8.0, indicates a halving of the light.
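Putting that arithmetic into code: because exposure scales with the square of the f-number, the stop difference between two readings is 2 * log2 of their ratio, and each stop doubles the light. A small sketch follows; the example readings are illustrative, not from the text above.

```python
import math

def contrast_ratio(key_fstop: float, fill_fstop: float) -> float:
    # Exposure scales with the square of the f-number, so the stop
    # difference between two meter readings is 2 * log2(key_N / fill_N).
    stops = 2 * math.log2(key_fstop / fill_fstop)
    # Each full stop is a doubling of light, so the ratio is 2**stops.
    return 2 ** stops

print(contrast_ratio(8.0, 4.0))   # 4.0 -> a two-stop, 4:1 key-to-fill ratio
print(contrast_ratio(2.0, 1.4))   # ~2  -> about a one-stop, 2:1 ratio
```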
LG, Philips, Panasonic, and Sony all sell OLED TVs. OLED stands for “organic light-emitting diode.” It is a fundamentally different technology from LCD, the major type of TV today. OLED is “emissive,” meaning the pixels emit their own light.
Samsung is branding its best TVs with a new acronym: “QLED.” QLED (according to Samsung) stands for “quantum dot LED TV.” It is a variation of the common LED LCD, adding a quantum dot film to the LCD “sandwich.” QLED, like LCD, is, in its current form, “transmissive” and relies on an LED backlight.
OLED is the only technology capable of absolute blacks and extremely bright whites on a per-pixel basis. LCD definitely can’t do that, and even the vaunted, beloved, dearly departed plasma couldn’t do absolute blacks.
QLED is pitched as an improvement over OLED in several respects: it can produce an even wider range of colors, is reported to deliver up to 40% higher luminance efficiency, and in many tests consumes less power than OLED.