• Newton’s Cradle – An AI Film By Jeff Synthesized

    Narrative voice via Artlistai; News Reporter via PlayAI; all other voices are V2V in ElevenLabs.
    Powered by (in order of amount) ‘HailuoAI’, ‘KlingAI’ and of course some of our special sauce. Performance capture by ‘Runway’s Act-One’.
    Edited and color graded in ‘DaVinci Resolve’. Composited with ‘After Effects’.

    In this film, the ‘Newton’s Cradle’ isn’t just a symbolic object; it represents the fragile balance between control and freedom in a world where time itself is being manipulated. The oscillation of the cradle reflects the constant push and pull of power in this dystopian society. By the end of the film, we discover that this seemingly innocuous object holds the potential to disrupt the system, offering a glimmer of hope that time can be reset and balance restored.

  • xinsir – controlnet-union-sdxl-1.0 examples

    https://huggingface.co/xinsir/controlnet-union-sdxl-1.0

    deblur

    inpainting

    outpainting

    upscale

    openpose

    depthmap

    canny

    lineart

    anime lineart

    mlsd

    scribble

    hed

    softedge

    ted

    segmentation

    normals

    openpose + canny

  • What is deepfake GAN (Generative Adversarial Network) technology?

    https://www.techtarget.com/whatis/definition/deepfake

    Deepfake technology is a type of artificial intelligence used to create convincing fake images, videos and audio recordings. The term describes both the technology and the resulting bogus content and is a portmanteau of deep learning and fake.

    Deepfakes often transform existing source content where one person is swapped for another. They also create entirely original content where someone is represented doing or saying something they didn’t do or say.

    Deepfakes aren’t edited or photoshopped videos or images. In fact, they’re created using specialized algorithms that blend existing and new footage. For example, subtle facial features of people in images are analyzed through machine learning (ML) to manipulate them within the context of other videos.

    Deepfakes use two algorithms, a generator and a discriminator, to create and refine fake content. The generator builds a training data set based on the desired output, creating the initial fake digital content, while the discriminator analyzes how realistic or fake the initial version of the content is. This process is repeated, enabling the generator to improve at creating realistic content and the discriminator to become more skilled at spotting flaws for the generator to correct.

    The combination of the generator and discriminator algorithms creates a generative adversarial network.

    A GAN uses deep learning to recognize patterns in real images and then uses those patterns to create the fakes.

    When creating a deepfake photograph, a GAN system views photographs of the target from an array of angles to capture all the details and perspectives.
    When creating a deepfake video, the GAN views the video from various angles and analyzes behavior, movement and speech patterns.
    This information is then run through the discriminator multiple times to fine-tune the realism of the final image or video.
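    The generator/discriminator loop described above can be sketched in a few dozen lines. This is a minimal, illustrative 1-D GAN, not any production deepfake system: the "real" data is just a Gaussian with mean 4, the generator is a linear map G(z) = wg·z + bg, and the discriminator is a logistic model D(x) = sigmoid(wd·x + bd).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    # Clip to avoid overflow in exp for large |s|
    return 1.0 / (1.0 + np.exp(-np.clip(s, -30.0, 30.0)))

def real_batch(n):
    # "Real" data the generator must learn to mimic: N(4, 1)
    return rng.normal(4.0, 1.0, n)

wd, bd = 0.1, 0.0   # discriminator parameters
wg, bg = 1.0, 0.0   # generator parameters
lr, n = 0.05, 64
history = []

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(n)
    x_fake = wg * rng.normal(0.0, 1.0, n) + bg
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. wd and bd
    wd -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    bd -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0.0, 1.0, n)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    dx = -(1 - d_fake) * wd          # dLoss/dx_fake via the chain rule
    wg -= lr * np.mean(dx * z)
    bg -= lr * np.mean(dx)
    history.append(bg)

# GAN training oscillates, so average the generator's offset over the
# last 1000 steps; it should drift toward the real data's mean of 4.
bg_avg = float(np.mean(history[-1000:]))
```

    The alternating updates are the "repeated process" the article describes: each side's improvement becomes the other side's harder training signal.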

  • The History, Evolution and Rise of AI

    https://medium.com/@lmpo/a-brief-history-of-ai-with-deep-learning-26f7948bc87b

    🔹 1943: McCulloch & Pitts create the first artificial neuron.
    🔹 1950: Alan Turing introduces the Turing Test, forever changing the way we view intelligence.
    🔹 1956: John McCarthy coins the term “Artificial Intelligence,” marking the official birth of the field.
    🔹 1957: Frank Rosenblatt invents the Perceptron, one of the first neural networks.
    🔹 1959: Bernard Widrow and Ted Hoff create ADALINE, a model that would shape neural networks.
    🔹 1969: Minsky & Papert show that single-layer perceptrons cannot solve the XOR problem, marking the beginning of the “first AI winter.”
    🔹 1980: Kunihiko Fukushima introduces the Neocognitron, laying the groundwork for deep learning.
    🔹 1986: Geoffrey Hinton and David Rumelhart introduce backpropagation, making neural networks viable again.
    🔹 1989: George Cybenko proves the Universal Approximation Theorem (UAT), establishing the expressive power of neural networks.
    🔹 1995: Vladimir Vapnik and Corinna Cortes develop Support Vector Machines (SVMs), a breakthrough in machine learning.
    🔹 1998: Yann LeCun popularizes Convolutional Neural Networks (CNNs), revolutionizing image recognition.
    🔹 2006: Geoffrey Hinton and Ruslan Salakhutdinov introduce deep belief networks, reigniting interest in deep learning.
    🔹 2012: Alex Krizhevsky and Geoffrey Hinton launch AlexNet, sparking the modern AI revolution in deep learning.
    🔹 2014: Ian Goodfellow introduces Generative Adversarial Networks (GANs), opening new doors for AI creativity.
    🔹 2017: Ashish Vaswani and team introduce Transformers, redefining natural language processing (NLP).
    🔹 2020: OpenAI unveils GPT-3, setting a new standard for language models and AI’s capabilities.
    🔹 2022: OpenAI releases ChatGPT, democratizing conversational AI and bringing it to the masses.


  • Eddie Yoon – There’s a big misconception about AI creative

    You’re being tricked into believing that AI can produce Hollywood-level videos…

    We’re far from it.

    (more…)
  • Andreas Horn – Want cutting edge AI?

    ๐—ง๐—ต๐—ฒ ๐—ฏ๐˜‚๐—ถ๐—น๐—ฑ๐—ถ๐—ป๐—ด ๐—ฏ๐—น๐—ผ๐—ฐ๐—ธ๐˜€ ๐—ผ๐—ณ ๐—”๐—œ ๐—ฎ๐—ป๐—ฑ ๐—ฒ๐˜€๐˜€๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น ๐—ฝ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐˜€๐˜€๐—ฒ๐˜€:

    – Collect: Data from sensors, logs, and user input.
    – Move/Store: Build infrastructure, pipelines, and reliable data flow.
    – Explore/Transform: Clean, prep, and detect anomalies to make the data usable.
    – Aggregate/Label: Add analytics, metrics, and labels to create training data.
    – Learn/Optimize: Experiment, test, and train AI models.
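
    As a toy sketch, the five stages above might chain together like this (every function and field name here is illustrative, not from any real framework):

```python
def collect():
    # Collect: raw readings as they might arrive from sensors or logs
    return [{"temp": 21.5}, {"temp": None}, {"temp": 98.0}, {"temp": 22.1}]

def store(records):
    # Move/Store: in real systems this is a pipeline into a warehouse;
    # here we simply materialize the list
    return list(records)

def transform(records):
    # Explore/Transform: drop missing values and obvious anomalies
    clean = [r for r in records if r["temp"] is not None]
    return [r for r in clean if 0 <= r["temp"] <= 60]

def label(records):
    # Aggregate/Label: attach a training label to each example
    return [(r["temp"], r["temp"] > 22.0) for r in records]

def train(dataset):
    # Learn/Optimize: fit the simplest possible "model" -- a threshold
    # chosen from the labeled examples
    positives = [x for x, y in dataset if y]
    return min(positives) if positives else None

threshold = train(label(transform(store(collect()))))
```

    The point is the dependency order: each stage only works because the one below it did its job, which is exactly the "stronger foundation, smarter AI" argument.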

    ๐—ง๐—ต๐—ฒ ๐—น๐—ฎ๐˜†๐—ฒ๐—ฟ๐˜€ ๐—ผ๐—ณ ๐—ฑ๐—ฎ๐˜๐—ฎ ๐—ฎ๐—ป๐—ฑ ๐—ต๐—ผ๐˜„ ๐˜๐—ต๐—ฒ๐˜† ๐—ฏ๐—ฒ๐—ฐ๐—ผ๐—บ๐—ฒ ๐—ถ๐—ป๐˜๐—ฒ๐—น๐—น๐—ถ๐—ด๐—ฒ๐—ป๐˜:

    – Instrumentation and logging: Sensors, logs, and external data capture the raw inputs.
    – Data flow and storage: Pipelines and infrastructure ensure smooth movement and reliable storage.
    – Exploration and transformation: Data is cleaned, prepped, and anomalies are detected.
    – Aggregation and labeling: Analytics, metrics, and labels create structured, usable datasets.
    – Experimenting/AI/ML: Models are trained and optimized using the prepared data.
    – AI insights and actions: Advanced AI generates predictions, insights, and decisions at the top.

    ๐—ช๐—ต๐—ผ ๐—บ๐—ฎ๐—ธ๐—ฒ๐˜€ ๐—ถ๐˜ ๐—ต๐—ฎ๐—ฝ๐—ฝ๐—ฒ๐—ป ๐—ฎ๐—ป๐—ฑ ๐—ธ๐—ฒ๐˜† ๐—ฟ๐—ผ๐—น๐—ฒ๐˜€:

    – Data Infrastructure Engineers: Build the foundation – collect, move, and store data.
    – Data Engineers: Prep and transform the data into usable formats.
    – Data Analysts & Scientists: Aggregate, label, and generate insights.
    – Machine Learning Engineers: Optimize and deploy AI models.

    ๐—ง๐—ต๐—ฒ ๐—บ๐—ฎ๐—ด๐—ถ๐—ฐ ๐—ผ๐—ณ ๐—”๐—œ ๐—ถ๐˜€ ๐—ถ๐—ป ๐—ต๐—ผ๐˜„ ๐˜๐—ต๐—ฒ๐˜€๐—ฒ ๐—น๐—ฎ๐˜†๐—ฒ๐—ฟ๐˜€ ๐—ฎ๐—ป๐—ฑ ๐—ฟ๐—ผ๐—น๐—ฒ๐˜€ ๐˜„๐—ผ๐—ฟ๐—ธ ๐˜๐—ผ๐—ด๐—ฒ๐˜๐—ต๐—ฒ๐—ฟ. ๐—ง๐—ต๐—ฒ ๐˜€๐˜๐—ฟ๐—ผ๐—ป๐—ด๐—ฒ๐—ฟ ๐˜†๐—ผ๐˜‚๐—ฟ ๐—ณ๐—ผ๐˜‚๐—ป๐—ฑ๐—ฎ๐˜๐—ถ๐—ผ๐—ป, ๐˜๐—ต๐—ฒ ๐˜€๐—บ๐—ฎ๐—ฟ๐˜๐—ฒ๐—ฟ ๐˜†๐—ผ๐˜‚๐—ฟ ๐—”๐—œ.

    https://www.linkedin.com/posts/andreashorn1_%F0%9D%97%AA%F0%9D%97%AE%F0%9D%97%BB%F0%9D%98%81-%F0%9D%97%B0%F0%9D%98%82%F0%9D%98%81%F0%9D%98%81%F0%9D%97%B6%F0%9D%97%BB%F0%9D%97%B4-%F0%9D%97%B2%F0%9D%97%B1%F0%9D%97%B4%F0%9D%97%B2-%F0%9D%97%94%F0%9D%97%9C-%F0%9D%97%A7-activity-7276861752477184000-KvUy

  • The Public Domain Is Working Again — No Thanks To Disney

    www.cartoonbrew.com/law/the-public-domain-is-working-again-no-thanks-to-disney-169658.html

    The law protects new works from unauthorized copying while allowing artists free rein on older works.

    The Copyright Act of 1909 used to govern copyrights. Under that law, a creator had a copyright on his creation for 28 years from “publication,” which could then be renewed for another 28 years. Thus, after 56 years, a work would enter the public domain.

    However, Congress passed the Copyright Act of 1976, extending copyright protection for works made for hire to 75 years from publication.

    Then again, in 1998, Congress passed the Sonny Bono Copyright Term Extension Act (derided as the “Mickey Mouse Protection Act” by some observers due to the Walt Disney Company’s intensive lobbying efforts), which added another twenty years to the term of copyright.

    It is because Snow White was in the public domain that it was chosen as the basis for Disney’s first animated feature.
    Ironically, much of Disney’s legislative lobbying over the last several decades has focused on denying that same opportunity to other artists and filmmakers.

    The battle in the coming years will be to prevent further extensions to copyright law that benefit corporations at the expense of creators and society as a whole.

  • HDRI Median Cut plugin

    www.hdrlabs.com/picturenaut/plugins.html
    Note: The Median Cut algorithm is typically used for color quantization, which involves reducing the number of colors in an image while preserving its visual quality. It doesn’t directly provide a way to identify the brightest areas in an image. If that is the goal, look instead at methods such as thresholding, histogram analysis, or edge detection, using OpenCV for example.
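
    As a minimal sketch of the thresholding idea (numpy-only here; in OpenCV, `cv2.threshold` or `cv2.minMaxLoc` would serve the same purpose), one way to locate the brightest region of a grayscale image:

```python
import numpy as np

def brightest_region(gray, frac=0.95):
    """Centroid (row, col) of pixels within frac of the maximum value."""
    # Threshold near the maximum, then take the centroid of survivors
    cutoff = gray.max() * frac
    rows, cols = np.nonzero(gray >= cutoff)
    return rows.mean(), cols.mean()

# Synthetic test image: dark everywhere except a bright patch near (10, 20)
img = np.zeros((64, 64))
img[8:13, 18:23] = 1.0
centroid = brightest_region(img)
```

    For HDR light-probe work, the centroid of the thresholded mask gives a usable estimate of a dominant light source position.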
    Here is an OpenCV example:

    (more…)
  • Christopher Butler – Understanding the Eye-Mind Connection – Vision is a mental process

    https://www.chrbutler.com/understanding-the-eye-mind-connection
    The intricate relationship between the eyes and the brain, often termed the eye-mind connection, reveals that vision is predominantly a cognitive process. This understanding has profound implications for fields such as design, where capturing and maintaining attention is paramount. This essay delves into the nuances of visual perception, the brain’s role in interpreting visual data, and how this knowledge can be applied to effective design strategies.
    This cognitive aspect of vision is evident in phenomena such as optical illusions, where the brain interprets visual information in a way that contradicts physical reality. These illusions underscore that what we “see” is not merely a direct recording of the external world but a constructed experience shaped by cognitive processes.
    Understanding the cognitive nature of vision is crucial for effective design. Designers must consider how the brain processes visual information to create compelling and engaging visuals. This involves several key principles:

    1. Attention and Engagement
    2. Visual Hierarchy
    3. Cognitive Load Management
    4. Context and Meaning