Reimagine any video. Shoot it in post with director-grade control over style, character, and setting. Restyle expressive actions and performances, swap entire worlds, or redesign the frame to your vision. Shoot once. Shape infinitely.
Transformer Explainer is an interactive visualization tool designed to help anyone learn how Transformer-based models like GPT work. It runs a live GPT-2 model right in your browser, allowing you to experiment with your own text and observe in real time how internal components and operations of the Transformer work together to predict the next tokens. Try Transformer Explainer at http://poloclub.github.io/transformer-explainer
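For readers who want to poke at the same mechanism outside the browser, here is a minimal sketch (not part of Transformer Explainer itself) of the next-token prediction step using Hugging Face's GPT-2; it assumes the `transformers` and `torch` packages are installed.

```python
# Minimal sketch of GPT-2 next-token prediction, assuming `transformers` and
# `torch` are installed. Not part of Transformer Explainer itself.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Data visualization empowers users to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The logits at the last position score every vocabulary entry as the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  {p:.3f}")
```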
If you prompt for a 360° video in VEO (literally write "360°" in the prompt), it can generate a monoscopic 360 video. The next step is to inject the right metadata into the file so it plays as an actual 360 video. Once it's saved with the correct metadata, it will be recognized as a 360/VR video, meaning you can open it in VLC and drag your mouse to look around.
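Here is a minimal sketch of that injection step, assuming you have cloned Google's open-source spatial-media metadata injector (github.com/google/spatial-media); the file names are placeholders for your own VEO output.

```python
# Sketch: inject spherical (monoscopic 360) metadata with Google's
# spatial-media injector. Assumes the repo is cloned locally and you run this
# from its root; file names are placeholders.
import subprocess

subprocess.run(
    [
        "python", "spatialmedia",    # injector module from the spatial-media repo
        "-i",                        # -i = inject spherical metadata
        "veo_output.mp4",            # the flat monoscopic 360 file VEO generated
        "veo_output_360.mp4",        # output that VLC and other players treat as 360/VR
    ],
    check=True,
)
```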
There are three models. Two are available now, and a third, open-weight version is coming soon:
FLUX.1 Kontext [pro]: State-of-the-art performance for image editing. High-quality outputs, great prompt following, and consistent results.
FLUX.1 Kontext [max]: A premium model that brings maximum performance, improved prompt adherence, and high-quality typography generation without compromise on speed.
Coming soon: FLUX.1 Kontext [dev]: An open-weight, guidance-distilled version of Kontext.
We're so excited about what Kontext can do that we've created a collection of models on Replicate to give you ideas:
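As a rough idea of what calling one of these models looks like, here is a hedged sketch using the `replicate` Python client; the model slug and input field names are assumptions and may differ from the exact versions in the collection, so check the model page on Replicate.

```python
# Hedged sketch: edit an image with FLUX.1 Kontext [pro] via Replicate.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN is set; the model
# slug and input field names below are assumptions, not confirmed values.
import replicate

output = replicate.run(
    "black-forest-labs/flux-kontext-pro",
    input={
        "prompt": "Make it nighttime, keep the character and framing unchanged",
        "input_image": open("scene.png", "rb"),
    },
)
print(output)
```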
The 8 most important model types and what they're actually built to do: ⬇️
1. LLM – Large Language Model – Your ChatGPT-style model. Handles text, predicts the next token, and powers 90% of GenAI hype. Use case: content, code, convos.
2. LCM – Latent Consistency Model – Lightweight, diffusion-style models. Fast, quantized, and efficient – perfect for real-time or edge deployment. Use case: image generation, optimized inference.
3. LAM – Language Action Model – Where LLM meets planning. Adds memory, task breakdown, and intent recognition. Use case: AI agents, tool use, step-by-step execution.
4. MoE – Mixture of Experts – One model, many minds. Routes input to the right "expert" model slice – dynamic, scalable, efficient. Use case: high-performance model serving at low compute cost.
5. VLM – Vision Language Model – Multimodal beast. Combines image + text understanding via shared embeddings. Use case: Gemini, GPT-4o, search, robotics, assistive tech.
6. SLM – Small Language Model – Tiny but mighty. Designed for edge use, fast inference, low latency, efficient memory. Use case: on-device AI, chatbots, privacy-first GenAI.
7. MLM – Masked Language Model – The OG foundation model. Predicts masked tokens using bidirectional context (see the fill-mask sketch after this list). Use case: search, classification, embeddings, pretraining.
8. SAM – Segment Anything Model – Vision model for pixel-level understanding. Highlights, segments, and understands *everything* in an image. Use case: medical imaging, AR, robotics, visual agents.
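To make item 7 concrete, here is a minimal sketch of masked-token prediction with the Hugging Face `transformers` pipeline; the model choice (`bert-base-uncased`) and the example sentence are just illustrative.

```python
# Minimal sketch of a Masked Language Model (item 7): BERT fills in the
# [MASK] token from bidirectional context. Assumes `transformers` is installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The capital of France is [MASK]."):
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```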
Petroleum jelly
This crude but reasonably effective technique involves smearing petroleum jelly ("Vaseline") on a plate of glass in front of the camera lens, also known as vaselensing, then cleaning and reapplying it after each shot – a time-consuming process, but one which creates a blur around the model. This technique was used for the endoskeleton in The Terminator. This process was also employed by Jim Danforth to blur the pterodactyl's wings in Hammer Films' When Dinosaurs Ruled the Earth, and by Randal William Cook on the terror dogs sequence in Ghostbusters.
Bumping the puppet
Gently bumping or flicking the puppet just before taking the frame produces a slight blur; however, care must be taken that the puppet does not move too much and that props or set pieces are not bumped or shifted in the process.
Moving the table
Moving the table on which the model is standing while the film is being exposed creates a slight, realistic blur. This technique was developed by Ladislas Starevich: when the characters ran, he moved the set in the opposite direction. This is seen in The Little Parade when the ballerina is chased by the devil. Starevich also used this technique on his films The Eyes of the Dragon, The Magical Clock and The Mascot. Aardman Animations used this for the train chase in The Wrong Trousers and again during the lorry chase in A Close Shave. In both cases the cameras were moved physically during a 1-2 second exposure. The technique was revived for the full-length Wallace & Gromit: The Curse of the Were-Rabbit.
Go motion
The most sophisticated technique, go motion, was originally developed for The Empire Strikes Back and used for some shots of the tauntauns; it was later used on films such as Dragonslayer and is quite different from traditional stop motion. The model is essentially a rod puppet. The rods are attached to motors which are linked to a computer that can record the movements as the model is animated traditionally. When enough movements have been made, the model is reset to its original position, the camera rolls, and the model is moved across the table. Because the model is moving while each frame is exposed, motion blur is created.
A variation of go motion was used in E.T. the Extra-Terrestrial to partially animate the children on their bicycles.