BREAKING NEWS
LATEST POSTS
-
Anthropic releases a new free Claude 3.5 Sonnet AI model
https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch
https://www.anthropic.com/claude
https://time.com/6990386/anthropic-dario-amodei-interview/
https://github.com/anthropics/anthropic-quickstarts
Dario Amodei, CEO of Anthropic, envisions a future where AI systems are not only powerful but also aligned with human values. After leaving OpenAI, Amodei co-founded Anthropic to tackle the safety challenges of AI, aiming to create systems that are both intelligent and ethical. One of the key methods Anthropic employs is “Constitutional AI,” a training approach that instills AI models with a set of core principles derived from widely accepted documents such as the United Nations’ Universal Declaration of Human Rights.
https://apps.apple.com/us/app/claude-by-anthropic/id6473753684
-
GaiaNet – Install and run your own local and decentralized free AI agent service
https://github.com/GaiaNet-AI/gaianet-node
GaiaNet is a decentralized computing infrastructure that enables individuals and businesses to create, deploy, scale, and monetize their own AI agents, agents that reflect their styles, values, knowledge, and expertise. Each GaiaNet node provides:
- a web-based chatbot UI.
- an OpenAI-compatible API. See how to use a GaiaNet node as a drop-in OpenAI replacement in your favorite AI agent app.
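Because each node speaks the OpenAI wire format, existing OpenAI clients can usually target it by swapping the base URL. A minimal sketch of what such a request looks like (the node address and model name below are placeholders, not real endpoints; the request is built but not sent):

```python
# Sketch: an OpenAI-style chat-completion request aimed at a GaiaNet node.
# NODE_URL and the model name are hypothetical placeholders -- substitute
# your own node's address. Only the request shape follows the OpenAI format.
import json
import urllib.request

NODE_URL = "http://localhost:8080/v1/chat/completions"  # placeholder node

def build_chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat request for a GaiaNet node (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        NODE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload

req, payload = build_chat_request("Hello, node!")
# To actually send: urllib.request.urlopen(req) -- or point an OpenAI client
# library at the node by overriding its base URL.
```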
-
LARS – An application that enables you to run LLMs locally on your device
https://github.com/abgulati/LARS
LARS lets you load your own documents and have the LLM ground its responses in them. This grounding helps increase accuracy and reduces the common issue of AI-generated inaccuracies, or “hallucinations.” The technique is commonly known as “Retrieval-Augmented Generation,” or RAG.
LARS aims to be the ultimate open-source RAG-centric LLM application. To that end, LARS takes the concept of RAG much further by adding detailed citations to every response: specific document names, page numbers, text highlighting, and images relevant to your question, and even a document reader right within the response window. Not every type of citation is present for every response, but the goal is to surface at least some combination of citations for each RAG response, and that is generally the case.
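The core RAG loop can be sketched in a few lines. This is a toy illustration of the idea, not LARS’s actual pipeline: retrieve the most relevant document chunk for a query, then prepend it (with its source label, which becomes the citation) to the prompt.

```python
# Toy RAG sketch (not LARS's actual code): bag-of-words retrieval picks the
# best-matching document chunk, and its source label doubles as the citation.
from collections import Counter
import math

docs = {
    "manual.pdf p.3": "The denoiser runs on the diffuse pass only",
    "guide.pdf p.7": "Install the addon from the preferences menu",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query):
    """Return the name of the document chunk most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda name: cosine(q, Counter(docs[name].lower().split())))

def build_prompt(query):
    """Ground the prompt in the retrieved chunk, keeping its citation."""
    source = retrieve(query)
    prompt = f"Context [{source}]: {docs[source]}\n\nQuestion: {query}"
    return prompt, source

prompt, source = build_prompt("How do I install the addon")
```

A real system would embed the chunks with a neural model instead of word counts, but the retrieve-then-ground structure is the same.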
-
China’s DeepSeek-Coder-V2 – Breaking the Barrier of Closed-Source Models in Open-Source Code Intelligence
An open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT-4 Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K tokens.
https://github.com/deepseek-ai/DeepSeek-Coder-V2
-
TDK claims insane energy density in solid-state battery breakthrough
The new material provides an energy density—the amount that can be squeezed into a given space—of 1,000 watt-hours per liter, which is about 100 times greater than TDK’s current battery in mass production.
TDK has 50 to 60 percent global market share in the small-capacity batteries that power smartphones and is targeting leadership in the medium-capacity market, which includes energy storage devices and larger electronics such as drones.
-
Wanderson M. Pimenta – Denoiser Comp Addon – FREE DOWNLOAD – BLENDER TO NUKE/DAVINCI SUPPORT
https://blender-addons.gumroad.com/l/denoiser_comp_addon
Blender 3 updated Intel® Open Image Denoise to version 1.4.2, which fixed many render artifacts and can even denoise separate passes, but it still loses a lot of definition when used in standard mode. DENOISER COMP separates the passes, applies the denoiser only to the selected passes, and regenerates the final (beauty) pass, keeping much more definition, as can be seen in the videos.
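The addon’s core idea, denoising only the noisy passes and recombining everything into the beauty pass, can be sketched like this (a toy one-dimensional illustration, not the addon’s actual code; a box blur stands in for the real denoiser):

```python
# Toy sketch of per-pass denoising: smooth only the noisy passes, leave the
# detail passes untouched, then additively recombine into the beauty image.
# A 1-D box blur stands in for Open Image Denoise here.

def box_blur(pixels, radius=1):
    """Stand-in denoiser: simple 1-D box blur."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

def composite(passes, denoise_only=("diffuse",)):
    """Recombine render passes, denoising only the selected ones."""
    width = len(next(iter(passes.values())))
    beauty = [0.0] * width
    for name, pix in passes.items():
        pix = box_blur(pix) if name in denoise_only else pix
        beauty = [b + p for b, p in zip(beauty, pix)]
    return beauty

passes = {
    "diffuse":  [0.2, 0.9, 0.1, 0.8],  # noisy pass: gets denoised
    "specular": [0.0, 0.0, 1.0, 0.0],  # sharp highlight: kept as-is
}
beauty = composite(passes)
```

The sharp specular highlight survives untouched while the diffuse noise is smoothed, which is the definition-preserving behavior the addon is after.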
-
RTG-SLAM: Real-time 3D Reconstruction at Scale Using Gaussian Splatting
https://gapszju.github.io/RTG-SLAM/
https://github.com/MisEty/RTG-SLAM
https://gapszju.github.io/RTG-SLAM/static/pdfs/RTG-SLAM_arxiv.pdf
RTG-SLAM (Real-time Gaussian SLAM) is a real-time 3D reconstruction system for large-scale environments that uses an RGBD camera and Gaussian splatting.
FEATURED POSTS
-
HuggingFace ai-comic-factory – a FREE AI Comic Book Creator
https://huggingface.co/spaces/jbilcke-hf/ai-comic-factory
This is the epic story of a group of talented digital artists trying to overcome daily technical challenges to achieve incredibly photorealistic projects of monsters and aliens.
-
How does Stable Diffusion work?
https://stable-diffusion-art.com/how-stable-diffusion-work/
Stable Diffusion is a latent diffusion model that generates AI images from text. Instead of operating in the high-dimensional image space, it first compresses the image into the latent space.
Stable Diffusion belongs to a class of deep learning models called diffusion models. They are generative models, meaning they are designed to generate new data similar to what they have seen in training. In the case of Stable Diffusion, the data are images.
Why is it called a diffusion model? Because its math looks very much like diffusion in physics. Let’s go through the idea.
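As a rough illustrative sketch (simplified notation, not Stable Diffusion’s actual code), the forward diffusion process repeatedly blends a little Gaussian noise into the data until only noise remains; the model is then trained to reverse this process:

```python
# Toy sketch of the forward diffusion process: each step mixes a little
# Gaussian noise into the signal (variance-preserving form), so after many
# steps the data is indistinguishable from noise. Reversing this, step by
# step, is what a diffusion model learns to do.
import math
import random

def diffuse(x, steps=1000, beta=0.01, seed=0):
    rng = random.Random(seed)
    keep = math.sqrt(1.0 - beta)  # fraction of signal surviving each step
    mix = math.sqrt(beta)         # fraction of fresh noise mixed in
    for _ in range(steps):
        x = [keep * v + mix * rng.gauss(0.0, 1.0) for v in x]
    return x

clean = [1.0, -1.0, 0.5, 0.0]
noisy = diffuse(clean)
# After 1000 steps the surviving signal coefficient is keep**1000, under 1%,
# so `noisy` is essentially unit-variance Gaussian noise.
```

Stable Diffusion runs this process not on pixels but on the compressed latent representation mentioned above, which is what makes it fast enough for consumer GPUs.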
-
HDRI Median Cut plugin
www.hdrlabs.com/picturenaut/plugins.html
Note: the Median Cut algorithm is typically used for color quantization, which reduces the number of colors in an image while preserving its visual quality. It does not directly identify the brightest areas of an image. If that is what you are after, look into methods such as thresholding, histogram analysis, or edge detection, for example through OpenCV.
Here is an OpenCV example:
(more…)
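The full OpenCV snippet sits behind the link above; the thresholding idea it illustrates can be sketched in plain Python like this (the tiny single-channel image and the 0.9 threshold are made up for the example):

```python
# Toy sketch of brightest-area detection by thresholding. Pixels above a
# luminance threshold are flagged, and their centroid marks the brightest
# region -- e.g. locating the sun in an HDRI for light placement.

def bright_region(image, threshold=0.9):
    """Return (mask, centroid) of pixels at or above `threshold`."""
    coords, mask = [], []
    for y, row in enumerate(image):
        mask_row = []
        for x, v in enumerate(row):
            hit = v >= threshold
            mask_row.append(hit)
            if hit:
                coords.append((x, y))
        mask.append(mask_row)
    if not coords:
        return mask, None
    cx = sum(x for x, _ in coords) / len(coords)
    cy = sum(y for _, y in coords) / len(coords)
    return mask, (cx, cy)

hdri = [
    [0.1, 0.2, 0.1],
    [0.2, 1.0, 0.95],  # bright "sun" cluster
    [0.1, 0.2, 0.1],
]
mask, centroid = bright_region(hdri)
```

With OpenCV the same thing is a threshold plus a moments/centroid call on the resulting mask; the logic is identical.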
-
What is the resolution and view coverage of the human eye? And at what distance is a TV best viewed?
https://www.discovery.com/science/mexapixels-in-human-eye
About 576 megapixels for the entire field of view.
Consider a view in front of you that is 90 degrees by 90 degrees, like looking through an open window at a scene, and assume the eye resolves about 0.3 arc-minutes per pixel. The number of pixels would be:
90 degrees * 60 arc-minutes/degree * 1/0.3 * 90 * 60 * 1/0.3 = 324,000,000 pixels (324 megapixels).
At any one moment you do not actually perceive that many pixels, but your eye moves around the scene to take in all the detail you want. The human eye really sees an even larger field of view, close to 180 degrees. Let’s be conservative and use 120 degrees for the field of view. Then we would see:
120 * 120 * 60 * 60 / (0.3 * 0.3) = 576 megapixels.
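The two back-of-the-envelope figures above check out; the whole estimate reduces to one formula with a single acuity assumption:

```python
# Reproduce the eye-resolution estimates above. The one assumption, taken
# from the source, is that the eye resolves ~0.3 arc-minutes per "pixel".
ARCMIN_PER_DEGREE = 60
ACUITY_ARCMIN = 0.3  # arc-minutes per resolvable pixel

def megapixels(fov_degrees):
    """Square field of view (degrees) -> megapixel estimate."""
    pixels_per_side = fov_degrees * ARCMIN_PER_DEGREE / ACUITY_ARCMIN
    return pixels_per_side ** 2 / 1e6

print(round(megapixels(90)))   # the 90-degree "window" estimate
print(round(megapixels(120)))  # the conservative full-field estimate
```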
Or:
7 megapixels for the 2-degree focus arc… + 1 megapixel for the rest.
https://clarkvision.com/articles/eye-resolution.html
Details in the post