https://theta360.com/en/about/theta/z1.html
The Theta Z1 is Ricoh’s flagship 360 camera, featuring 1-inch sensors, the largest available in a dual-lens 360 camera. It is highly regarded among 360 photographers for its excellent image quality, color accuracy, and ability to shoot raw DNG photos with exceptional exposure latitude.
Bracketing mode 2022
Requirement: Basic app iOS ver.2.20.0, Android ver.2.5.0, Camera firmware ver.2.10.3
https://community.theta360.guide/t/new-feature-ae-bracket-added-in-the-shooting-mode-z1-only/8247
HDRi for VFX
https://community.theta360.guide/t/create-high-quality-hdri-for-vfx-using-ricoh-theta-z1/4789/4
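The bracketed raw DNGs above are merged into a single high-dynamic-range panorama before use as an HDRI. As a minimal sketch of that merge step (not the workflow from the linked guide), here is a Debevec-style weighted average in NumPy, assuming already-linearized pixel values; the function name and hat-shaped weighting are illustrative choices:

```python
import numpy as np

def merge_brackets(exposures, times):
    """Merge bracketed exposures into a linear radiance estimate.

    Assumes a linear sensor response (e.g. linearized raw DNG values in [0, 1]).
    Each exposure is divided by its shutter time, then a hat-shaped weight
    downweights clipped highlights and noisy shadows before averaging.
    """
    exposures = [np.asarray(e, dtype=float) for e in exposures]
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # weight peaks at mid-grey
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-8)

# A patch shot at 1/4 s reads 0.125; the same patch at 1 s reads 0.5.
# Both brackets imply the same radiance (0.5), so the merge agrees.
short = np.full((2, 2), 0.125)
long_ = np.full((2, 2), 0.5)
radiance = merge_brackets([short, long_], [0.25, 1.0])
print(radiance[0, 0])  # 0.5
```

Real merges also handle camera response curves and ghosting; this sketch only shows why multiple shutter times recover a wider exposure range than any single frame.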
ND filtering
https://community.theta360.guide/t/neutral-density-solution-for-most-theta-cameras/7331
https://community.theta360.guide/t/long-exposure-nd-filter-for-ricoh-theta/1100
Text2Light
Royalty free links
Nvidia GauGAN360
An ST map is an image where every pixel has a unique Red and Green colour value that corresponds to an X and Y coordinate in screen-space. You can use one to efficiently warp an image in Nuke.
In 3D, you project the ST map onto all the elements in the scene and render glass/transmissive elements on top of that.
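The warp itself is a simple lookup: each output pixel reads its ST values and samples the source image at the coordinate they encode. A minimal NumPy sketch, assuming Nuke's bottom-left origin and nearest-neighbour sampling for clarity (the function name is mine, not a Nuke API):

```python
import numpy as np

def apply_stmap(source, stmap):
    """Warp `source` using an ST map.

    source: (H, W, C) float image to be warped.
    stmap:  (H, W, 2) float image; channel 0 (R) holds the normalized
            X coordinate, channel 1 (G) the normalized Y coordinate,
            both in [0, 1] with (0, 0) at the bottom-left.
    """
    sh, sw = source.shape[:2]
    # Convert normalized ST values to integer pixel coordinates.
    xs = np.clip((stmap[..., 0] * (sw - 1)).round().astype(int), 0, sw - 1)
    # Flip Y: array row 0 is the top, but ST maps count from the bottom.
    ys = np.clip(((1.0 - stmap[..., 1]) * (sh - 1)).round().astype(int), 0, sh - 1)
    return source[ys, xs]

# An identity ST map (R ramps left-to-right, G bottom-to-top) leaves
# the image unchanged.
h = w = 4
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
identity = np.dstack([xx / (w - 1), 1.0 - yy / (h - 1)]).astype(float)
img = np.random.rand(h, w, 3)
warped = apply_stmap(img, identity)
```

Production warps use bilinear or higher-order filtering instead of nearest-neighbour, but the coordinate logic is the same.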
rayshader is an open source package for producing 2D and 3D data visualizations in R. rayshader uses elevation data in a base R matrix and a combination of raytracing, hillshading algorithms, and overlays to generate stunning 2D and 3D maps. In addition to maps, rayshader also allows the user to translate ggplot2 objects into beautiful 3D data visualizations.
The models can be rotated and examined interactively or the camera movement can be scripted to create animations. Scenes can also be rendered using a high-quality pathtracer, rayrender. The user can also create a cinematic depth of field post-processing effect to direct the user’s focus to important regions in the figure. The 3D models can also be exported to a 3D-printable format with a built-in STL export function, and can be exported to an OBJ file.
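The hillshading at the core of packages like rayshader is classic Lambertian shading of a heightfield: compute surface normals from the elevation gradients and dot them with a light direction. A minimal NumPy sketch of that idea (not rayshader's actual R implementation; azimuth/altitude defaults are illustrative):

```python
import numpy as np

def hillshade(elevation, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian hillshade of an elevation matrix.

    Returns values in [0, 1]: the cosine of the angle between the
    surface normal and the light direction, clamped at zero.
    """
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Light direction vector (x east, y north, z up).
    light = np.array([np.sin(az) * np.cos(alt),
                      np.cos(az) * np.cos(alt),
                      np.sin(alt)])
    dzdy, dzdx = np.gradient(elevation)
    # Unnormalized surface normal is (-dz/dx, -dz/dy, 1); normalize it.
    norm = np.sqrt(dzdx**2 + dzdy**2 + 1.0)
    shade = (-dzdx * light[0] - dzdy * light[1] + light[2]) / norm
    return np.clip(shade, 0.0, 1.0)

# A flat surface lit from 45 degrees altitude shades uniformly to sin(45°).
flat = np.zeros((8, 8))
print(hillshade(flat)[0, 0])  # ≈ 0.707
```

rayshader layers several such passes (plus ambient occlusion and texture overlays) to get its final look; this shows only the core shading term.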
https://github.com/AcademySoftwareFoundation/OpenRV
https://adsknews.autodesk.com/news/rv-open-source
“Autodesk is committed to helping creators envision a better world, and having access to great tools allows them to do just that. So we are making RV, our Sci-Tech award-winning media review and playback software, open source. Code contributions from RV along with DNEG’s xStudio and Sony Pictures Imageworks’ itview will shape the Open Review Initiative, the Academy Software Foundation’s (ASWF) newest sandbox project to build a unified, open source toolset for playback, review, and approval.”
Texel density (also referred to as pixel density or texture density) is a measurement used to keep asset textures consistent with each other across your entire world.
It is measured in pixels per centimeter (e.g. 2.56 px/cm) or pixels per meter (e.g. 256 px/m).
https://www.beyondextent.com/deep-dives/deepdive-texeldensity
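The measurement itself is just texture resolution divided by the real-world size the UVs cover. A tiny sketch (the function name is mine):

```python
def texel_density(texture_px, surface_cm):
    """Pixels per centimetre along one axis of an asset.

    texture_px: texture resolution along that axis (e.g. 1024)
    surface_cm: real-world size the UV shell covers along that axis, in cm
    """
    return texture_px / surface_cm

# A 1024 px texture mapped across a 4 m (400 cm) wall:
print(texel_density(1024, 400))  # 2.56 px/cm, the value cited above
```

Matching this number across props, characters, and environments is what keeps texture detail looking uniform when assets sit next to each other.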
https://github.com/proceduralit/StableDiffusion_Houdini
https://github.com/proceduralit/StableDiffusion_Houdini/wiki/
This is a Houdini HDA that submits the render output as the init_image and, with help from PDG, enables artists to easily define variations on Stable Diffusion parameters such as Sampling Method, Steps, Prompt Strength, and Noise Strength.
Right now DreamStudio is the only public server the HDA supports, so you need an account there and must connect the HDA to it.
DreamStudio: https://beta.dreamstudio.ai/membership
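The PDG-style variation workflow described above amounts to a parameter wedge: take the Cartesian product of the chosen parameter values and treat each combination as one work item. A minimal Python sketch of that idea (the parameter names mirror the HDA's controls, but the values and structure here are illustrative, not the HDA's internals):

```python
import itertools

# Illustrative parameter ranges an artist might wedge over.
variations = {
    "sampler":         ["k_euler", "k_lms"],
    "steps":           [25, 50],
    "prompt_strength": [7.0, 12.0],
    "noise_strength":  [0.4, 0.7],
}

# Cartesian product: every combination becomes one work item,
# the way PDG fans a wedge out into separate tasks.
work_items = [dict(zip(variations, combo))
              for combo in itertools.product(*variations.values())]

print(len(work_items))  # 2 * 2 * 2 * 2 = 16 variations to render
```

Each work item would then be submitted as a separate generation request against the same init_image, letting the artist compare all 16 results side by side.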
https://www.zumolabs.ai/post/what-is-neural-rendering
“The key concept behind neural rendering approaches is that they are differentiable. A differentiable function is one whose derivative exists at each point in the domain. This is important because machine learning is basically the chain rule with extra steps: a differentiable rendering function can be learned with data, one gradient descent step at a time. Learning a rendering function statistically through data is fundamentally different from the classic rendering methods we described above, which calculate and extrapolate from the known laws of physics.”
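The "chain rule with extra steps" point can be shown in miniature: if the renderer is a differentiable function of its parameters, any parameter can be recovered from observed images by gradient descent. A toy example, assuming a one-parameter "renderer" that just scales scene values by a brightness factor (entirely illustrative, not any real neural renderer):

```python
# Toy differentiable "renderer": output = scene value * brightness.
# We recover the brightness from observed pixels by gradient descent.
target_brightness = 0.8
pixels = [0.2, 0.5, 0.9]                        # fixed scene values
observed = [p * target_brightness for p in pixels]

def render(brightness):
    return [p * brightness for p in pixels]

def loss_grad(brightness):
    """Derivative of the mean squared error w.r.t. brightness.

    It exists at every point, which is exactly what makes the
    rendering function learnable one gradient step at a time.
    """
    rendered = render(brightness)
    return sum(2 * (r - o) * p
               for r, o, p in zip(rendered, observed, pixels)) / len(pixels)

b = 0.0                                         # initial guess
for _ in range(200):
    b -= 0.5 * loss_grad(b)                     # gradient descent step

print(round(b, 3))  # converges to 0.8, the true brightness
```

Real neural rendering does the same thing with millions of parameters and automatic differentiation instead of a hand-derived gradient, but the mechanism, loss, derivative, step, is identical.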
The Cattery is a library of free third-party machine learning models converted to .cat files to run natively in Nuke, designed to bridge the gap between academia and production, providing all communities access to different ML models that all run in Nuke. Users will have access to state-of-the-art models addressing segmentation, depth estimation, optical flow, upscaling, denoising, and style transfer, with plans to expand the models hosted in the future.
https://www.foundry.com/insights/machine-learning/the-artists-guide-to-cattery
https://community.foundry.com/cattery