Many users assume shared conversations are seen only by friends or colleagues, but when you use OpenAI’s share feature, those chats can now be indexed by search engines like Google.
Meaning: your “private” AI prompts could end up very public. Finding them takes nothing more than a targeted search query, a technique known as Google dorking, and it’s shockingly effective.
Over 70,000 chats are now publicly viewable. Some are harmless. Others? They might expose sensitive strategies, internal docs, product plans, even company secrets.
OpenAI currently does not block indexing. So if you’ve ever shared something thinking it’s “just a link” — it might now be searchable by anyone. You can even build a bot to crawl and analyze these.
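If you wanted to verify the indexing claim yourself, one simple check a crawler bot could run on a fetched share page is whether it carries a robots noindex tag; a page without one is eligible for indexing. A minimal sketch (the helper name and regex are mine, and real HTML may order attributes differently):

```python
import re

# Matches <meta name="robots" content="...noindex..."> (case-insensitive).
# Note: this assumes name= appears before content=, which is common but
# not guaranteed in real-world HTML.
_ROBOTS_NOINDEX = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

def has_noindex(html: str) -> bool:
    """True if the page asks search engines not to index it."""
    return bool(_ROBOTS_NOINDEX.search(html))

# A page that opts out of indexing vs. one that does not.
print(has_noindex('<meta name="robots" content="noindex, nofollow">'))  # True
print(has_noindex('<meta name="robots" content="index, follow">'))      # False
```

A real crawler would fetch each shared URL first and then apply a check like this; fetching is omitted here to keep the sketch self-contained.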
Welcome to the new visibility layer of AI. I can’t say I am surprised…
I was working on an album cover last night and got these really cool images in Midjourney, so I made a video out of it. Animated using Pika. Song made using Suno. Full version on my Bandcamp. It’s called Static.
Marigold repurposes Stable Diffusion for dense prediction tasks such as monocular depth estimation and surface normal prediction, delivering a level of detail often missing even in top discriminative models.
Key aspects that make it great:
– Reuses the original VAE and only lightly fine-tunes the denoising UNet
– Trained on just tens of thousands of synthetic image–modality pairs
– Runs on a single consumer GPU (e.g., RTX 4090)
– Zero-shot generalization to real-world, in-the-wild images
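Models in this family predict affine-invariant depth, so comparing a prediction against metric ground truth usually starts by fitting a per-image scale and shift in the least-squares sense. A minimal numpy sketch (the function name is mine):

```python
import numpy as np

def align_depth(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Fit scale s and shift t minimizing ||s * pred + t - gt||^2,
    then return the aligned prediction s * pred + t."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

# A prediction that is correct up to scale and shift aligns exactly.
pred = np.array([[0.1, 0.2], [0.3, 0.4]])
gt = 2.0 * pred + 3.0
print(np.allclose(align_depth(pred, gt), gt))  # True
```

This is the standard alignment step used when scoring affine-invariant depth against metric benchmarks; it does not change the model output itself.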
Generate New Camera Angles
Generate the Next Shot
Use Any Style to Transfer to a Video
Change Environments, Locations, Seasons and Time of Day
Add Things to a Scene
Remove Things from a Scene
Change Objects in a Scene
Apply the Motion of a Video to an Image
Alter a Character’s Appearance
Recolor Elements of a Scene
Relight Shots
Green Screen Any Object, Person or Situation
I ran Steamboat Willie (now public domain) through Flux Kontext to reimagine it as a 3D-style animated piece. Instead of going the polished route with something like Wan 2.1 for full image-to-video generation, I leaned into the raw, handmade vibe that comes from converting each frame individually. It gave the piece a kind of stop-motion texture: imperfect, a bit wobbly, but full of character.
Our human-centric dense prediction model delivers high-quality, detailed results (e.g., depth) while achieving remarkable efficiency, running orders of magnitude faster than competing methods, with inference speeds as low as 21 milliseconds per frame for the large multi-task model on an NVIDIA A100. It reliably captures a wide range of human characteristics under diverse lighting conditions, preserving fine-grained details such as hair strands and subtle facial features. This demonstrates the model’s robustness and accuracy in complex, real-world scenarios.
The state of the art in human-centric computer vision achieves high accuracy and robustness across a diverse range of tasks. The most effective models in this domain have billions of parameters, thus requiring extremely large datasets, expensive training regimes, and compute-intensive inference. In this paper, we demonstrate that it is possible to train models on much smaller but high-fidelity synthetic datasets, with no loss in accuracy and with higher efficiency. Using synthetic training data provides us with excellent levels of detail and perfect labels, along with strong guarantees for data provenance, usage rights, and user consent. Procedural data synthesis also gives us explicit control over data diversity, which we can use to address unfairness in the models we train. Extensive quantitative assessment on real input images demonstrates the accuracy of our models on three dense prediction tasks: depth estimation, surface normal estimation, and soft foreground segmentation. Our models require only a fraction of the cost of training and inference compared with foundation models of similar accuracy.
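One of the three tasks, soft foreground segmentation, predicts a per-pixel alpha matte; downstream, a matte is typically consumed via the standard compositing equation C = αF + (1 − α)B. A minimal numpy sketch (function and variable names are mine, not from the paper):

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Standard alpha compositing: C = alpha * F + (1 - alpha) * B.
    fg/bg are HxWx3 images; alpha is an HxW soft matte in [0, 1]."""
    a = alpha[..., None]          # broadcast the matte over color channels
    return a * fg + (1.0 - a) * bg

# A half-transparent matte blends foreground and background equally.
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
out = composite(fg, bg, np.full((2, 2), 0.5))  # every value is 0.5
```

The "soft" in soft segmentation is exactly what makes fine structures like hair strands composite cleanly: fractional alpha values blend edge pixels instead of forcing a hard binary mask.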
Aider enables developers to interactively generate, modify, and test code by leveraging both cloud-hosted and local LLMs directly from the terminal or within an IDE. Key capabilities include comprehensive codebase mapping, support for over 100 programming languages, automated git commit messages, voice-to-code interactions, and built-in linting and testing workflows. Installation is straightforward via pip or uv, and while the tool itself has no licensing cost, actual usage costs stem from the underlying LLM APIs, which are billed separately by providers like OpenAI or Anthropic.
Key Features
Cloud & Local LLM Support – Connect to most major LLM providers out of the box, or run models locally for privacy and cost control.
Codebase Mapping – Automatically indexes all project files so that even large repositories can be edited contextually.
100+ Language Support – Works with Python, JavaScript, Rust, Ruby, Go, C++, PHP, HTML, CSS, and dozens more.
Git Integration – Generates sensible commit messages and automates diffs/undo operations through familiar git tooling.
Voice-to-Code – Speak commands to Aider to request features, tests, or fixes without typing.
Images & Web Pages – Attach screenshots, diagrams, or documentation URLs to provide visual context for edits.
Linting & Testing – Runs lint and test suites automatically after each change, and can fix issues it detects.
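As a quick start, installation and a first session look roughly like this (the model name and file path are illustrative; check Aider's documentation for your provider's exact flags):

```shell
# Install Aider (the tool itself is free; API usage is billed by your provider).
python -m pip install aider-chat        # or: uv tool install aider-chat

# Provide credentials for your chosen provider (OpenAI shown as an example).
export OPENAI_API_KEY=...

# Start an interactive session inside a git repo, scoped to one file.
aider --model gpt-4o src/app.py
```

Running it from the repo root lets the codebase-mapping feature index the full project even though only one file is named on the command line.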
Hand drawn sketch | Models made in CC4 with ZBrush | Textures in Substance Painter | Paint over in Photoshop | Renders, Animation, VFX with AI. Each took 5–8 hours spread over a couple of days.
As I continue to explore AI tools to enhance my 3D character creation process, I find they can be incredibly useful during the previsualization phase for seeing what a character might ultimately look like in production. I selectively use AI to enhance and accelerate my creative process, not to replace it or serve as an end-to-end solution.
My new AI-assisted short film is here. Kira explores human cloning and the search for identity in today’s world. It took nearly 600 prompts, 12 days (during my free time), and a $500 budget to bring this project to life. The entire film was created by one person using a range of AI tools, all listed at the end. Enjoy. ~ Hashem