https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch
https://www.anthropic.com/claude
https://time.com/6990386/anthropic-dario-amodei-interview/
https://github.com/anthropics/anthropic-quickstarts
Dario Amodei, CEO of Anthropic, envisions a future where AI systems are not only powerful but also aligned with human values. After leaving OpenAI, Amodei co-founded Anthropic to tackle the safety challenges of AI, aiming to create systems that are both intelligent and ethical. One of the key methods Anthropic employs is “Constitutional AI,” a training approach that instills AI models with a set of core principles derived from universally accepted documents like the United Nations Declaration of Human Rights.
https://apps.apple.com/us/app/claude-by-anthropic/id6473753684
https://github.com/GaiaNet-AI/gaianet-node
GaiaNet is a decentralized computing infrastructure that lets individuals and businesses create, deploy, scale, and monetize their own AI agents, agents that reflect their styles, values, knowledge, and expertise.
https://github.com/abgulati/LARS
Grounding an LLM's answers in reference documents helps increase accuracy and reduces AI-generated inaccuracies, or "hallucinations." This technique is commonly known as Retrieval Augmented Generation, or RAG.
LARS aims to be the ultimate open-source RAG-centric LLM application. To that end, LARS takes the concept of RAG further by adding detailed citations to every response: specific document names, page numbers, text highlighting, and images relevant to your question, plus a document reader presented right within the response window. Not every citation type appears in every response, but the goal is to surface at least some combination of citations for each RAG response, and that is generally the case.
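The RAG pattern described above can be sketched in a few lines. This is a toy illustration only: a keyword-overlap retriever stands in for the vector store a real application such as LARS would use, and the document list, field names, and prompt format are all invented for the example.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
# A toy keyword-overlap retriever stands in for a real vector store;
# the resulting prompt would be sent to any LLM of your choice.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query, passages):
    """Ground the prompt in retrieved passages, keeping citation metadata."""
    context = "\n".join(
        f'[{p["source"]}, p.{p["page"]}] {p["text"]}' for p in passages
    )
    return f"Answer using only the sources below.\n{context}\n\nQ: {query}"

# Hypothetical documents, just for the demo.
docs = [
    {"source": "manual.pdf", "page": 3, "text": "The denoiser runs per pass."},
    {"source": "notes.pdf", "page": 1, "text": "Render passes can be split."},
]
query = "How does the denoiser run?"
prompt = build_prompt(query, retrieve(query, docs))
```

Because the citation metadata travels with each retrieved passage, the model's answer can point back to a specific document and page, which is the core of what LARS layers on top of plain RAG.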
An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2 while maintaining comparable performance on general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements across code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its supported programming languages from 86 to 338 and extends the context length from 16K to 128K.
https://github.com/deepseek-ai/DeepSeek-Coder-V2
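For readers unfamiliar with the Mixture-of-Experts idea behind models like this: a gating network routes each input to only a few "expert" sub-networks, so the model can be very large while only a fraction of it runs per token. The sketch below is a deliberately tiny illustration of top-k gating in plain Python, not DeepSeek-Coder-V2's actual architecture.

```python
import math

# Toy illustration of Mixture-of-Experts top-k routing.
# Real MoE layers use learned gate and expert networks; here the
# "experts" are simple functions and the gate scores are given.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_scores, top_k=2):
    """Route input x to the top_k experts, weighted by gate probability."""
    probs = softmax(gate_scores)
    # Select the top_k experts by gate probability (sparse activation).
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    total = sum(probs[i] for i in chosen)
    # Weighted combination of only the selected experts' outputs.
    return sum(probs[i] / total * experts[i](x) for i in chosen)

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
y = moe_forward(3.0, experts, gate_scores=[0.1, 2.0, 0.5], top_k=2)
```

With top_k=2 only two of the three experts are evaluated, which is what lets MoE models keep inference cost well below their total parameter count.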
https://blender-addons.gumroad.com/l/denoiser_comp_addon
Blender 3 updated Intel® Open Image Denoise to version 1.4.2, which fixed many render artifacts, even when denoising separate passes, but it still loses a lot of definition in standard mode. DENOISER COMP separates the render into passes, applies the denoiser only to the selected passes, and recombines them into the final beauty pass, keeping much more definition, as can be seen in the videos.
https://gapszju.github.io/RTG-SLAM/
https://github.com/MisEty/RTG-SLAM
https://gapszju.github.io/RTG-SLAM/static/pdfs/RTG-SLAM_arxiv.pdf
RTG-SLAM (Real-time Gaussian SLAM) is a real-time 3D reconstruction system for large-scale environments that uses an RGBD camera and Gaussian splatting.
https://runwayml.com/blog/introducing-gen-3-alpha/
Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models.
Immersity AI (formerly LeiaPix), turns 2D illustrations into 3D animation, ideal for bringing a sketch, painting or scene to life.
It converts the video into an animated depth map and uses that to drive the depth effect in the final output.
Teaching computer graphics programming to regular folks. Original content written by professionals with years of field experience. We dive straight into code, dissect equations, avoid fancy jargon and external libraries. Explained in plain English. Free.
https://www.scratchapixel.com/
https://9to5mac.com/2024/06/06/change-to-adobe-terms-amp-conditions
The terms say:
Solely for the purposes of operating or improving the Services and Software, you grant us a non-exclusive, worldwide, royalty-free sublicensable license to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content. For example, we may sublicense our right to the Content to our service providers or to other users to allow the Services and Software to operate with others, such as enabling you to share photos
Designer Wetterschneider, who counts DC Comics and Nike among his clients, was one of the graphics pros to object to the terms.
Here it is. If you are a professional, if you are under NDA with your clients, if you are a creative, a lawyer, a doctor or anyone who works with proprietary files – it is time to cancel Adobe, delete all the apps and programs. Adobe can not be trusted.
Movie director Duncan Jones was equally blunt in his response.
Hey @Photoshop what the hell was that new agreement you forced us to sign this morning that locked our app until we agree to it? We are working on a bloody movie here, and NO, you don’t suddenly have the right to any of the work we are doing on it because we pay you to use photoshop. What the f**k?!
Should you ditch Photoshop with immediate effect?
https://www.creativeboom.com/resources/should-you-ditch-photoshop-with-immediate-effect/
Adobe’s response
Adobe is not claiming ownership over the content you create in Photoshop. Likewise, it will not use customer content to train its Firefly generative AI model.
https://cdn.borisfx.com/borisfx/store/silhouette/2024-0/Silhouette-2024-WhatsNew.pdf
Matte Assist ML
Automatically generates a matte over time from single or multiple keyframed roto shapes or input mattes, using machine-learning object segmentation and propagation.
Optical Flow ML
Generates machine-learning-powered optical flow data for use in the Flow Tracker of the roto-based nodes: Roto, Roto Blend, Tracker, Power Mask, Morph and Depth. Optical flow estimates per-pixel motion between frames and can be used to track shapes and objects.
Retime ML
A machine-learning motion-estimation and retiming model that produces smooth motion, expanding or contracting the timing of a selected range of frames.
https://maheshba.bitbucket.io/blog/2024/05/08/2024-ThreeLaws.html