BREAKING NEWS
LATEST POSTS
-
Disney’s Price Hikes Usher in Era of the Not-So-Cheap Ad Tier
“When Disney+’s ad tier launches, in December, it will cost U.S. customers $7.99 a month, the current price of the service’s ad-free tier. The price of the no-ads version will be hiked to $10.99.”
This is balanced out by the company’s plan to keep the rate of content spending for all its platforms at around $30 billion for the next few years and its measured revision of subscriber goals. “It now looks like Disney+ is tracking towards tightened and trimmed sub guidance, while the ad-supported tier + price increases + content rationalization = a much improved long-term profit outlook,” Wells Fargo analyst Steven Cahall wrote in an Aug. 11 note.
-
Open source Cycles renderer implemented in Gaffer
https://github.com/GafferHQ/gaffer/releases/tag/1.0.3.0
https://github.com/GafferHQ/gaffer/pull/4812
This release introduces support for the open source Cycles renderer. It ships as an opt-in feature preview intended for early testing and feedback, as breaking changes can be expected while Cycles integration improves in future releases. As such, Cycles is disabled by default but can be enabled via an environment variable. Additionally, we’ve added support for viewing parameter history in the Light Editor and automatic render-time translation of UsdPreviewSurface shaders and UsdLuxLights for Arnold, along with the usual small fixes and improvements.
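The release notes linked above don’t spell out the variable name here, so as a hypothetical illustration only (the variable name below is a placeholder; check the 1.0.3.0 release notes for the real one), enabling an opt-in feature via an environment variable typically looks like this:

```shell
# GAFFERCYCLES_FEATURE_PREVIEW is a placeholder name, not the actual variable --
# consult the Gaffer 1.0.3.0 release notes for the real one.
export GAFFERCYCLES_FEATURE_PREVIEW=1
gaffer   # launch Gaffer with the Cycles preview enabled
```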
-
NVIDIA GauGAN360 – AI-driven latlong HDRI creation tool
https://blogs.nvidia.com/blog/2022/08/09/neural-graphics-sdk-metaverse-content/
Unfortunately, PNG output only at the moment:
http://imaginaire.cc/gaugan360/
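Because the tool currently emits only 8-bit PNGs, the output isn’t directly usable as a true HDRI. As a minimal first step (a sketch, assuming you’ve already loaded the panorama into a float array with the image library of your choice), you’d want to undo the sRGB encoding before writing to a linear format such as EXR — the result is still low dynamic range, just correctly linearized:

```python
import numpy as np

def srgb_to_linear(srgb):
    """Convert sRGB-encoded values in [0, 1] to linear light
    using the standard IEC 61966-2-1 transfer function."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

# e.g. linear = srgb_to_linear(png_pixels / 255.0), then write
# the float array out as EXR with your preferred image library.
```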
-
Peter Timberlake – free high quality practice material for compositors
https://www.petertimberlake.com/practicematerial
“…a bunch of high quality practice material for compositors looking to build their reels. Contains all plates, roto, CG elements, matte paintings, and everything required to start compositing.”
-
Amazon’s ‘The Lord of the Rings’ to Cost $465M for Just One Season
“The Hollywood Reporter has confirmed that Amazon will spend roughly NZ$650 million — $465 million in U.S. dollars — for just the first season of the show.”
“Amazon’s spending will trigger a tax rebate of NZ$160 million ($114 million U.S.). This is somewhat controversial in New Zealand, as the government could end up on the hook for hundreds of millions of dollars to help subsidize Amazon’s elves-and-hobbits drama series. Stuff reported that the country’s treasury has labeled the show a ‘significant fiscal risk’ given there is no capped upside to how much Amazon — and therefore the government — might spend.”
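As a back-of-the-envelope check (illustrative arithmetic, not from the article), the quoted figures are mutually consistent: the season cost implies an exchange rate that also reproduces the reported rebate.

```python
# Figures quoted above: NZ$650M season cost = US$465M; NZ$160M rebate = ~US$114M.
season_nzd = 650_000_000
season_usd = 465_000_000

implied_rate = season_usd / season_nzd  # ~0.715 USD per NZD

rebate_nzd = 160_000_000
rebate_usd = rebate_nzd * implied_rate  # ~US$114.5M, matching the reported figure

print(f"implied rate: {implied_rate:.3f}, rebate: ${rebate_usd / 1e6:.1f}M")
```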
-
Academy Software Foundation SIGGRAPH 2022 – New Developments in MaterialX and OSL
https://www.materialx.org/assets/ASWF_OSD2022_MaterialX_OSL_Final.pdf
-
StableDiffusion text-to-image applied to videos
#stablediffusion text-to-image checkpoints are now available for research purposes upon request at https://t.co/7SFUVKoUdl
Working on a more permissive release & inpainting checkpoints.
Soon™ coming to @runwayml for text-to-video-editing
— Patrick Esser (@pess_r) August 11, 2022
https://github.com/CompVis/stable-diffusion
FEATURED POSTS
-
If a blind person gained sight, could they recognize objects previously touched?
Blind people who regain their sight may find themselves in a world they don’t immediately comprehend. “It would be more like a sighted person trying to rely on tactile information,” says Prof. Cathleen Moore.
Learning to see is a developmental process, just like learning language, Moore continues. “As far as vision goes, a three-and-a-half-year-old child is already a well-calibrated system.”
-
Kristina Kashtanova – “This is how GPT-4 sees and hears itself”
“I used GPT-4 to describe itself. Then I used its description to generate an image, a video based on this image and a soundtrack.
Tools I used: GPT-4, Midjourney, Kaiber AI, Mubert, RunwayML
This is the description I used that GPT-4 had of itself as a prompt for text-to-image, image-to-video, and text-to-music. I put the video and sound together in RunwayML.”
GPT-4 described itself as: “Imagine a sleek, metallic sphere with a smooth surface, representing the vast knowledge contained within the model. The sphere emits a soft, pulsating glow that shifts between various colors, symbolizing the dynamic nature of the AI as it processes information and generates responses. The sphere appears to float in a digital environment, surrounded by streams of data and code, reflecting the complex algorithms and computing power behind the AI.”