LATEST POSTS
-
GIL To Become Optional in Python 3.13
The GIL, or Global Interpreter Lock, can be disabled in Python 3.13. This is currently experimental.
What is the GIL? It is a mechanism used by the CPython interpreter to ensure that only one thread executes Python bytecode at a time.
https://medium.com/@r_bilan/python-3-13-without-the-gil-a-game-changer-for-concurrency-5e035500f0da
Advantages of the GIL
- Simplicity of Implementation: The GIL simplifies memory management in CPython by preventing concurrent access to Python objects, which can help avoid race conditions and other threading issues.
- Ease of Use for Single-Threaded Programs: For applications that are single-threaded, the GIL eliminates the overhead associated with managing thread safety, allowing for straightforward and efficient code execution.
- Compatibility with C Extensions: The GIL allows C extensions to operate without needing to implement complex threading models, which simplifies the development of Python extensions that interface with C libraries.
- Performance for I/O-Bound Tasks: In I/O-bound applications, the GIL does not significantly hinder performance, since it is released while a thread waits on I/O, allowing other threads to run.
Disadvantages of the GIL
- Limited Multithreading Performance: The GIL can severely restrict the performance of CPU-bound multithreaded applications, as it only allows one thread to execute Python bytecode at a time, leading to underutilization of multicore processors.
- Thread Management Complexity: Although the GIL simplifies memory management, it can complicate the design of concurrent applications, forcing developers to carefully manage threading issues or use multiprocessing instead.
- Hindrance to Parallel Processing: With the GIL enabled, achieving true parallelism in Python applications is challenging, making it difficult for developers to leverage multicore architectures effectively.
- Inefficiency in Context Switching: Frequent context switching due to the GIL can introduce overhead, especially in applications with many threads, leading to performance degradation.
https://geekpython.in/gil-become-optional-in-python
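To see the effect yourself, here is a minimal sketch, assuming Python 3.13's free-threaded build (installed as python3.13t, with the GIL toggled via the PYTHON_GIL environment variable); on a regular build the script still runs, but the GIL stays on and the multi-threaded run gains nothing:

```python
import sys
import threading
import time

def count_down(n: int) -> None:
    # CPU-bound busy loop; with the GIL on, only one thread makes progress at a time.
    while n > 0:
        n -= 1

def timed_run(num_threads: int, total: int = 40_000_000) -> float:
    # Split the same total work across num_threads threads and time it.
    threads = [threading.Thread(target=count_down, args=(total // num_threads,))
               for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # sys._is_gil_enabled() only exists on 3.13+; fall back to True elsewhere.
    gil_on = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"GIL enabled: {gil_on}")
    print(f"1 thread : {timed_run(1):.2f}s")
    print(f"4 threads: {timed_run(4):.2f}s")
```

With the GIL enabled the two timings come out roughly equal; with it disabled, the 4-thread run should approach a 4x speedup on a 4-core machine.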
-
Ben Gunsberger – AI-generated podcast about AI using Google NotebookLM
Listen to the podcast in the post
“I just created an AI-generated podcast by feeding an article I wrote into Google’s NotebookLM. If I hadn’t made it myself, I would have been 100% fooled into thinking it was real people talking.”
-
Apple releases Depth Pro – An open source AI model that rewrites the rules of 3D vision
The model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU.
https://github.com/apple/ml-depth-pro
https://arxiv.org/pdf/2410.02073
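Going by the repository's README, inference takes only a few lines; treat the exact names (create_model_and_transforms, load_rgb, infer) as a sketch of the published API rather than a guarantee, since the repo may evolve:

```python
import depth_pro  # installed from the GitHub repository above

# Load the pretrained model and its matching preprocessing transform.
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load an RGB image; f_px is the focal length in pixels, read from EXIF when present.
image, _, f_px = depth_pro.load_rgb("example.jpg")
prediction = model.infer(transform(image), f_px=f_px)

depth_m = prediction["depth"]            # metric depth map, in meters
focal_px = prediction["focallength_px"]  # estimated focal length, in pixels
```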
-
Anders Langlands – Render Color Spaces
https://www.colour-science.org/anders-langlands/
This page compares images rendered in Arnold using spectral rendering and different sets of colourspace primaries: Rec.709, Rec.2020, ACES and DCI-P3. The SPD data for the GretagMacbeth ColorChecker are the measurements of Noboru Ohta, taken from Mansencal, Mauderer and Parsons (2014), colour-science.org.
-
Björn Ottosson – How software gets color wrong
https://bottosson.github.io/posts/colorwrong/
Most software around us today is decent at accurately displaying colors. Processing colors is another story, unfortunately, and is often done badly.
To understand what the problem is, let’s start with an example of three ways of blending green and magenta:
- Perceptual blend – A smooth transition using a model designed to mimic human perception of color. The blending is done so that the perceived brightness and color varies smoothly and evenly.
- Linear blend – A model for blending color based on how light behaves physically. This type of blending can occur in many ways naturally, for example when colors are blended together by focus blur in a camera or when viewing a pattern of two colors at a distance.
- sRGB blend – This is how colors would normally be blended in computer software, using sRGB to represent the colors.
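As a minimal sketch of the difference between the last two, here is per-channel blending done directly on encoded sRGB values versus decoding to linear light first (the function names are illustrative):

```python
def srgb_to_linear(c: float) -> float:
    # Inverse sRGB transfer function, per channel, c in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    # Forward sRGB transfer function, per channel, c in [0, 1].
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_srgb(a, b, t):
    # Naive blend: interpolate the encoded sRGB values directly.
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def blend_linear(a, b, t):
    # Physical blend: decode to linear light, interpolate, re-encode.
    return tuple(
        linear_to_srgb((1 - t) * srgb_to_linear(x) + t * srgb_to_linear(y))
        for x, y in zip(a, b)
    )

green, magenta = (0.0, 1.0, 0.0), (1.0, 0.0, 1.0)
print(blend_srgb(green, magenta, 0.5))    # (0.5, 0.5, 0.5): a dark, muddy grey
print(blend_linear(green, magenta, 0.5))  # ~(0.74, 0.74, 0.74): noticeably brighter
```

The naive sRGB midpoint comes out visibly too dark because the encoded values are nonlinear; interpolating them is not interpolating light.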
Let’s look at some more examples of color blending to see how these problems surface in practice. The examples use strong colors, since the differences are then more pronounced, and the same three ways of blending colors as in the first example.
Instead of making it as easy as possible to work with color, most software makes it unnecessarily hard by doing image processing with representations not designed for it. Approximating the physical behavior of light with linear RGB models is one easy step, but more work is needed to create image representations tailored for image processing and human perception.
-
EVER (Exact Volumetric Ellipsoid Rendering) – Gaussian splatting alternative
https://radiancefields.com/how-ever-(exact-volumetric-ellipsoid-rendering)-does-this-work
https://half-potato.gitlab.io/posts/ever/
Unlike previous methods like Gaussian Splatting, EVER leverages ellipsoids instead of Gaussians and uses Ray Tracing instead of Rasterization. This shift eliminates artifacts like popping and blending inconsistencies, offering sharper and more accurate renderings.
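This is not the paper's renderer, but a minimal sketch of the primitive it builds on: intersecting a ray with an ellipsoid by rescaling space so the ellipsoid becomes a unit sphere (axis-aligned here for simplicity; the actual method handles oriented ellipsoids and integrates densities along the ray):

```python
import numpy as np

def ray_ellipsoid_hits(origin, direction, center, radii):
    """Entry/exit distances of a ray against an axis-aligned ellipsoid.

    Dividing coordinates by the per-axis radii maps the ellipsoid to a
    unit sphere, where the intersection is a standard quadratic.
    """
    o = (np.asarray(origin, float) - center) / radii
    d = np.asarray(direction, float) / radii
    a = d @ d
    b = 2.0 * (o @ d)
    c = o @ o - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the ellipsoid
    s = np.sqrt(disc)
    return (-b - s) / (2.0 * a), (-b + s) / (2.0 * a)

# A ray along +x against an ellipsoid stretched along y:
hits = ray_ellipsoid_hits([-5.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                          np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 1.0]))
print(hits)  # (4.0, 6.0): enter at t=4, exit at t=6
```

Exact entry and exit distances like these are what let EVER-style methods composite constant-density ellipsoids analytically instead of splatting them.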
-
The Rise and Fall of Adobe – A list of better software alternatives to a criminal company
Best alternatives to Adobe:
https://github.com/KenneyNL/Adobe-Alternatives
- Affinity (photo and illustration editing): https://affinity.serif.com/
- DaVinci Resolve (video editing): https://www.blackmagicdesign.com/au/products/davinciresolve/
- Clip Studio Paint (illustration): https://www.clipstudio.net/en/
- Toon Boom (animation): https://www.toonboom.com/
-
Microsoft is discontinuing its HoloLens headsets
https://www.theverge.com/2024/10/1/24259369/microsoft-hololens-2-discontinuation-support
Software support for the original HoloLens headset will end on December 10th.
Microsoft’s struggles with HoloLens have been apparent over the past two years.
FEATURED POSTS
-
AnimationXpress.com interviews Daniele Tosti for TheCgCareer.com channel
You’ve been in the VFX Industry for over a decade. Tell us about your journey.
It all started with my older brother giving me a Commodore 64 personal computer as a gift back in the late ’80s. I realised then that I could create something directly from my imagination using this new digital medium. And, eventually, make a living in the process.
That led me to start my professional career in 1990, from live TV to games to animation, all the way to live-action VFX in recent years. I really never stopped craving to create art since those early days. And I have been incredibly fortunate to work with really great talent along the way, which made my journey so much more effective.
What inspired you to pursue VFX as a career?
An incredible combination of opportunities, really. The opportunity to express myself as an artist and earn money in the process. The opportunity to learn about how the world around us works and how best to solve problems. The opportunity to share my time with other talented people with similar passions. The opportunity to grow and adapt to new challenges. The opportunity to develop something that had never been done before. A perfect storm of creativity that fed my continuous curiosity about life and genuinely drove my inspiration.
Tell us about the projects you’ve particularly enjoyed working on in your career
(more…)
-
How does Stable Diffusion work?
https://stable-diffusion-art.com/how-stable-diffusion-work/
Stable Diffusion is a latent diffusion model that generates AI images from text. Instead of operating in the high-dimensional image space, it first compresses the image into the latent space.
Stable Diffusion belongs to a class of deep learning models called diffusion models. They are generative models, meaning they are designed to generate new data similar to what they have seen in training. In the case of Stable Diffusion, the data are images.
Why is it called a diffusion model? Because its math looks very much like diffusion in physics. Let’s go through the idea.
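As a minimal sketch of that idea, here is the forward (noising) half of a DDPM-style diffusion process in closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps; the schedule and shapes are illustrative, not Stable Diffusion's actual training code:

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    # Sample x_t ~ q(x_t | x_0) in one step using the closed form:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

betas = np.linspace(1e-4, 0.02, 1000)  # a standard DDPM noise schedule
x0 = np.ones((4, 4))                   # toy stand-in for a latent, not real pixels
print(forward_diffusion(x0, t=0, betas=betas).round(2))    # barely noised
print(forward_diffusion(x0, t=999, betas=betas).round(2))  # essentially pure noise
```

The model is trained to predict the added noise eps at each step; generation then runs this process in reverse, starting from pure noise in the latent space.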