• The History, Evolution and Rise of AI

    https://medium.com/@lmpo/a-brief-history-of-ai-with-deep-learning-26f7948bc87b

    🔹 1943: McCulloch & Pitts propose the first mathematical model of an artificial neuron.
    🔹 1950: Alan Turing introduces the Turing Test, forever changing the way we view intelligence.
    🔹 1956: John McCarthy coins the term "Artificial Intelligence," marking the official birth of the field.
    🔹 1957: Frank Rosenblatt invents the Perceptron, one of the first neural networks.
    🔹 1959: Bernard Widrow and Ted Hoff create ADALINE, a model that would shape neural networks.
    🔹 1969: Minsky & Papert show that single-layer perceptrons cannot solve the XOR problem, a result that helped trigger the "first AI winter."
    🔹 1980: Kunihiko Fukushima introduces the Neocognitron, laying the groundwork for deep learning.
    🔹 1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams popularize backpropagation, making multi-layer neural networks viable again.
    🔹 1989: George Cybenko proves the Universal Approximation Theorem (UAT), showing that feed-forward neural networks can approximate any continuous function.
    🔹 1995: Vladimir Vapnik and Corinna Cortes develop Support Vector Machines (SVMs), a breakthrough in machine learning.
    🔹 1998: Yann LeCun popularizes Convolutional Neural Networks (CNNs), revolutionizing image recognition.
    🔹 2006: Geoffrey Hinton and Ruslan Salakhutdinov introduce deep belief networks, reigniting interest in deep learning.
    🔹 2012: Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton launch AlexNet, sparking the modern deep learning revolution.
    🔹 2014: Ian Goodfellow introduces Generative Adversarial Networks (GANs), opening new doors for AI creativity.
    🔹 2017: Ashish Vaswani and team introduce the Transformer, redefining natural language processing (NLP).
    🔹 2020: OpenAI unveils GPT-3, setting a new standard for language models and AI capabilities.
    🔹 2022: OpenAI releases ChatGPT, bringing conversational AI to the masses.


  • Zibra.AI – Real-Time Volumetric Effects in Virtual Production. Now free for Indies!

    https://www.zibra.ai/

    A New Era for Volumetrics

    For a long time, volumetric visual effects were viable only in high-end offline VFX workflows. Large data footprints and poor real-time rendering performance limited their use: most teams simply avoided volumetrics altogether. It's similar to the early days of online video: limited computational power and low network bandwidth made video content hard to share or stream. Today, of course, we can't imagine the internet without it, and we believe volumetrics are on a similar path.

    With advanced data compression and real-time, GPU-driven decompression, anyone can now bring CGI-class visual effects into Unreal Engine. 

    From now on, it's completely free for individual creators!

    What does this mean for you?

    (more…)
  • What is physically correct lighting all about?

    http://gamedev.stackexchange.com/questions/60638/what-is-physically-correct-lighting-all-about

    2012-08 Nathan Reed wrote:

    Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to “look good” subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.
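
    Concretely, "energy conservation" is the standard requirement that a surface scatter no more light than it receives. In the usual notation, with f_r the BRDF, it reads:

        \forall \omega_i : \quad \int_{\Omega} f_r(\omega_i, \omega_o) \cos\theta_o \, d\omega_o \le 1

    where Ω is the hemisphere of outgoing directions ω_o above the surface.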

    For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.

    In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover you would want to set the specular brightness to a realistic value for the material you’re trying to simulate, based on measurements.
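
    The contrast is easy to see in code. Below is a minimal, grayscale sketch in C; the function names and the exact energy split are illustrative assumptions rather than anything from the answer above, and the Fresnel term uses Schlick's standard approximation. The ad-hoc model exposes independent specular and reflection knobs whose sum can exceed the incoming light, while the physically-based version derives both the highlight and the environment reflection from one reflectance parameter (f0) and scales the diffuse term by the energy the specular lobe has already claimed.

        #include <math.h>
        #include <stdio.h>

        /* Ad-hoc model: the highlight and the cubemap reflection have
           independent artist knobs, and nothing stops the surface from
           reflecting more energy than it receives. */
        float shade_adhoc(float n_dot_l, float spec_strength,
                          float refl_strength, float light, float env)
        {
            return n_dot_l * light          /* diffuse            */
                 + spec_strength * light    /* point-light spec   */
                 + refl_strength * env;     /* cubemap reflection */
        }

        /* Schlick's approximation: fraction of light reflected
           specularly at a given viewing angle, given the reflectance
           at normal incidence f0. */
        float fresnel_schlick(float f0, float cos_theta)
        {
            return f0 + (1.0f - f0) * powf(1.0f - cos_theta, 5.0f);
        }

        /* Physically based: ONE parameter, f0, drives both the
           point-light highlight and the environment reflection, and
           the diffuse term is scaled by (1 - F) so that diffuse plus
           specular never exceeds the energy that arrived. */
        float shade_pbr(float n_dot_l, float n_dot_v, float albedo,
                        float f0, float light, float env)
        {
            float F        = fresnel_schlick(f0, n_dot_v);
            float diffuse  = (1.0f - F) * albedo * n_dot_l * light;
            float specular = F * (n_dot_l * light + env);
            return diffuse + specular;
        }

        int main(void)
        {
            /* Incoming energy here is at most 0.8 + 1.0 = 1.8 units. */
            printf("ad-hoc: %.2f\n",
                   shade_adhoc(0.8f, 1.5f, 1.0f, 1.0f, 1.0f));      /* 3.30 */
            printf("pbr:    %.2f\n",
                   shade_pbr(0.8f, 0.7f, 0.9f, 0.04f, 1.0f, 1.0f)); /* 0.77 */
            return 0;
        }

    Compile with any C compiler and link the math library (-lm): the ad-hoc call reflects 3.30 units when at most 1.80 arrive, whereas the physically-based split stays inside the energy budget for any inputs in [0, 1].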

    Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).
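
    For reference, these are the two equations that paragraph points to, in their standard forms. The rendering equation states that outgoing radiance is emitted radiance plus incoming radiance integrated against the BRDF and a cosine factor:

        L_o(x, \omega_o) = L_e(x, \omega_o)
                         + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

    and a microfacet specular BRDF (the Cook-Torrance form) combines a normal-distribution term D, a Fresnel term F, and a shadowing-masking term G, evaluated around the half-vector ω_h between ω_i and ω_o:

        f_r(\omega_i, \omega_o) = \frac{D(\omega_h) \, F(\omega_o, \omega_h) \, G(\omega_i, \omega_o, \omega_h)}
                                       {4 \, (n \cdot \omega_i)(n \cdot \omega_o)}

    Real-time engines typically approximate the integral with a handful of analytic lights plus prefiltered environment maps rather than evaluating it exactly.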

    It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.

    On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Materials created in one lighting environment also generally look fine in other lighting environments. This is unlike more ad-hoc models, where a set of material parameters might look good during daytime but come out ridiculously glowy at night.

    Here are some resources to look at for physically-based lighting in games:

    SIGGRAPH 2013 Physically Based Shading Course, particularly the background talk by Naty Hoffman at the beginning. You can also check out the previous incarnations of this course for more resources.

    Sébastien Lagarde, Adopting a physically-based shading model and Feeding a physically-based shading model

    And of course, I would be remiss if I didn’t mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.