BREAKING NEWS
LATEST POSTS
-
Mamba and MicroMamba – Free, open-source, general-purpose package managers for any kind of software and all operating systems
https://mamba.readthedocs.io/en/latest/user_guide/micromamba.html
https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html
https://micro.mamba.pm/api/micromamba/win-64/latest
https://prefix.dev/docs/mamba/overview
With mamba, it's easy to set up software environments. A software environment is simply a set of libraries, applications, and their dependencies. The power of environments is that they can co-exist: you can easily have an environment called py27 for Python 2.7 and one called py310 for Python 3.10, so that each of your projects with different requirements gets its own dedicated environment. This is similar to "containers" and images; however, mamba makes it easy to add, update, or remove software in an environment.
Download the latest executable from https://micro.mamba.pm/api/micromamba/win-64/latest
You can install it, or just run the executable as-is, to create a Python environment under Windows:
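As a minimal sketch of a first session (the environment name py310 and the conda-forge channel are illustrative choices, not requirements):

```
micromamba.exe create -n py310 python=3.10 -c conda-forge
micromamba.exe run -n py310 python --version
micromamba.exe install -n py310 numpy -c conda-forge
```

`micromamba run` executes a command inside an environment without any shell integration; the more familiar `micromamba activate py310` also works once the shell hook has been set up with `micromamba shell init`.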
(more…) -
PlayBook3D – Creative controls for all media formats
Playbook3d.com is a diffusion-based render engine that uses AI to reduce the time to final image. It is accessible via a web editor and API, with support for scene segmentation and re-lighting, integration with production pipelines, and frame-to-frame consistency for image, video, and real-time 3D formats.
-
AI and the Law – AI Creativity – Genius or Gimmick?
7:59-9:50 Justine Bateman:
“I mean, first I want to give people, help people have a little bit of a definition of what generative AI is.
Think of it like a blender: if you have a blender at home and you turn it on, what does it do? It depends on what you put into it, so it cannot function unless it’s fed things.
Then you turn on the blender and you give it a prompt, which is your little spoon, and you get a little spoonful (a little Frankenstein spoonful) out of what you asked for.
So what is going into the blender? Everything: a hundred years of film and television, or many, many years of, you know, doctors’ reports or students’ essays or whatever it is.
In the film business in particular, that’s what we call theft; it’s the biggest violation. And the term that continues to be used is “publicly available.” I think the CTO of OpenAI (I believe that’s her position; I forget her name), when she was asked in an interview recently what she had to say about the fact that they didn’t ask permission to take it in, said, “Well, it was all publicly available.”
And I will say this: if you own a car (I know we’re in New York City, so it’s not going to be as applicable), but if I see a car in the street, it’s publicly available, but somehow it’s illegal for me to take it. That’s what we have the copyright office for, and I don’t know how well staffed they are to handle something like this, but this is the biggest copyright violation in the history of that office and the US government” -
Aze Alter – What If Humans and AI Unite? | AGE OF BEYOND
https://www.patreon.com/AzeAlter
Voices & Sound Effects: https://elevenlabs.io/
Video Created mainly with Luma: https://lumalabs.ai/
LUMA LABS
KLING
RUNWAY
ELEVEN LABS
MINIMAX
MIDJOURNEY
Music By Scott Buckley -
ComfyUI-Manager Joins Comfy-Org
https://blog.comfy.org/p/comfyui-manager-joins-comfy-org
On March 28, ComfyUI-Manager will be moving to the Comfy-Org GitHub organization as Comfy-Org/ComfyUI-Manager. This represents a natural evolution as they continue working to improve the custom node experience for all ComfyUI users.
What This Means For You
This change is primarily about improving support and development velocity. There are a few practical considerations:
- Automatic GitHub redirects will ensure that all existing links, git commands, and references to the repository continue to work seamlessly, without any action needed (see the example after this list)
- For developers: any existing PRs and issues will be transferred to the new repository location
- For users: ComfyUI-Manager will continue to function exactly as before; no action needed
- For workflow authors: resources that reference ComfyUI-Manager will continue to work without interruption
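That said, developers who would rather not depend on the redirect can point an existing clone straight at the new home with a standard git command (the HTTPS URL below is the conventional GitHub form implied by the new organization name):

```
git remote set-url origin https://github.com/Comfy-Org/ComfyUI-Manager.git
git remote -v
```

The second command simply prints the configured remotes so you can confirm the change took effect.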
-
AccVideo – Accelerating Video Diffusion Model with Synthetic Dataset
https://aejion.github.io/accvideo
https://github.com/aejion/AccVideo
https://huggingface.co/aejion/AccVideo
AccVideo is a novel, efficient distillation method that accelerates video diffusion models using a synthetic dataset. The method is 8.5x faster than HunyuanVideo.
FEATURED POSTS
-
The History, Evolution and Rise of AI
https://medium.com/@lmpo/a-brief-history-of-ai-with-deep-learning-26f7948bc87b
🔹 1943: McCulloch & Pitts create the first artificial neuron.
🔹 1950: Alan Turing introduces the Turing Test, forever changing the way we view intelligence.
🔹 1956: John McCarthy coins the term “Artificial Intelligence,” marking the official birth of the field.
🔹 1957: Frank Rosenblatt invents the Perceptron, one of the first neural networks.
🔹 1959: Bernard Widrow and Ted Hoff create ADALINE, a model that would shape neural networks.
🔹 1969: Minsky & Papert show that single-layer perceptrons cannot solve the XOR problem, marking the beginning of the “first AI winter.”
🔹 1980: Kunihiko Fukushima introduces the Neocognitron, laying the groundwork for deep learning.
🔹 1986: Geoffrey Hinton and David Rumelhart introduce backpropagation, making neural networks viable again.
🔹 1989: Judea Pearl advances probabilistic reasoning with Bayesian networks, building a foundation for AI’s logical abilities.
🔹 1995: Vladimir Vapnik and Corinna Cortes develop Support Vector Machines (SVMs), a breakthrough in machine learning.
🔹 1998: Yann LeCun popularizes Convolutional Neural Networks (CNNs), revolutionizing image recognition.
🔹 2006: Geoffrey Hinton and Ruslan Salakhutdinov introduce deep belief networks, reigniting interest in deep learning.
🔹 2012: Alex Krizhevsky and Geoffrey Hinton launch AlexNet, sparking the modern AI revolution in deep learning.
🔹 2014: Ian Goodfellow introduces Generative Adversarial Networks (GANs), opening new doors for AI creativity.
🔹 2017: Ashish Vaswani and team introduce Transformers, redefining natural language processing (NLP).
🔹 2020: OpenAI unveils GPT-3, setting a new standard for language models and AI’s capabilities.
🔹 2022: OpenAI releases ChatGPT, democratizing conversational AI and bringing it to the masses.
-
Zibra.AI – Real-Time Volumetric Effects in Virtual Production. Now free for Indies!
A New Era for Volumetrics
For a long time, volumetric visual effects were viable only in high-end offline VFX workflows. Large data footprints and poor real-time rendering performance limited their use: most teams simply avoided volumetrics altogether. It’s similar to the early days of online video: limited computational power and low network bandwidth made video content hard to share or stream. Today, of course, we can’t imagine the internet without it, and we believe volumetrics are on a similar path.
With advanced data compression and real-time, GPU-driven decompression, anyone can now bring CGI-class visual effects into Unreal Engine.
From now on, it’s completely free for individual creators!
What does this mean for you?
(more…)
-
Matt Gray – How to generate a profitable business
In the last 10 years, over 1,000 people have asked me how to start a business. The truth? They’re all paralyzed by limiting beliefs. What those beliefs are and how to break them today:
(more…)
Before we get into the How, let’s first unpack why people think they can’t start a business.
Here are the biggest reasons Iโve found:
-
What is physically correct lighting all about?
http://gamedev.stackexchange.com/questions/60638/what-is-physically-correct-lighting-all-about
2012-08 Nathan Reed wrote:
Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to “look good” subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.
For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.
In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover you would want to set the specular brightness to a realistic value for the material you’re trying to simulate, based on measurements.
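As a toy sketch of that difference (all function and parameter names here are hypothetical, not from any particular engine): an ad-hoc model exposes independent, unconstrained knobs, while a physically-based one derives both specular paths from a single reflectance parameter and scales the diffuse term by what is left over:

```python
def adhoc_shade(diffuse, point_specular, env_reflection):
    # Three independent knobs: nothing prevents the sum from exceeding
    # the energy the surface actually received.
    return diffuse + point_specular + env_reflection

def pbs_shade(albedo, specular_reflectance):
    # One parameter feeds both the point-light highlight and the
    # environment reflection, and the diffuse term is scaled down so
    # that total reflected energy never exceeds what arrived (<= 1 here).
    specular = specular_reflectance            # same value, both paths
    diffuse = albedo * (1.0 - specular_reflectance)
    return diffuse + specular
```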
Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).
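To make the microfacet side concrete, here is a minimal, non-optimized sketch of a Cook-Torrance style BRDF using the GGX distribution, height-correlated Smith visibility, and Schlick’s Fresnel approximation. The answer above doesn’t prescribe these exact terms; they are simply common choices in modern engines, and the remapping alpha = roughness² is one popular convention:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def d_ggx(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz normal distribution function.
    a2 = roughness ** 4                      # alpha = roughness^2, a2 = alpha^2
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def v_smith_ggx(n_dot_v, n_dot_l, roughness):
    # Height-correlated Smith masking-shadowing term, pre-divided by the
    # 4 * (n.v) * (n.l) denominator of the Cook-Torrance BRDF.
    a2 = roughness ** 4
    gv = n_dot_l * np.sqrt(n_dot_v * n_dot_v * (1.0 - a2) + a2)
    gl = n_dot_v * np.sqrt(n_dot_l * n_dot_l * (1.0 - a2) + a2)
    return 0.5 / (gv + gl)

def f_schlick(v_dot_h, f0):
    # Schlick's approximation to Fresnel reflectance.
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def brdf(n, v, l, albedo, roughness, f0=0.04):
    # Combined diffuse + specular BRDF. The diffuse lobe is scaled by
    # (1 - F) so the surface never reflects more energy than it receives.
    h = normalize(v + l)
    n_dot_v = max(float(np.dot(n, v)), 1e-5)
    n_dot_l = max(float(np.dot(n, l)), 1e-5)
    n_dot_h = max(float(np.dot(n, h)), 0.0)
    v_dot_h = max(float(np.dot(v, h)), 0.0)

    F = f_schlick(v_dot_h, f0)
    specular = d_ggx(n_dot_h, roughness) * v_smith_ggx(n_dot_v, n_dot_l, roughness) * F
    diffuse = (1.0 - F) * albedo / np.pi     # energy-conserving Lambert
    return diffuse + specular

if __name__ == "__main__":
    n = np.array([0.0, 0.0, 1.0])
    v = normalize(np.array([0.0, 0.3, 1.0]))
    l = normalize(np.array([0.0, -0.3, 1.0]))
    print(brdf(n, v, l, albedo=0.18, roughness=0.4))
```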
It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.
On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Also, materials created in one lighting environment generally look fine in other lighting environments too. This is unlike more ad-hoc models, where a set of material parameters might look good during daytime, but it comes out ridiculously glowy at night, or something like that.
Here are some resources to look at for physically-based lighting in games:
SIGGRAPH 2013 Physically Based Shading Course, particularly the background talk by Naty Hoffman at the beginning. You can also check out the previous incarnations of this course for more resources.
Sébastien Lagarde, Adopting a physically-based shading model and Feeding a physically-based shading model
And of course, I would be remiss if I didn’t mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.