-
AI image generators could be their own demise due to inbreeding
A paper by computer scientists Matyas Bohacek and Hany Farid with the catchy title ‘Nepotistically Trained Generative-AI Models Collapse’ shows that training AI image generators on AI images quite quickly leads to a deterioration in the quality of output. Farid likened the phenomenon to inbreeding. “If a species inbreeds with their own offspring and doesn’t diversify their gene pool, it can lead to a collapse of the species,” he said.
https://www.creativebloq.com/ai/ai-art/research-shows-ai-image-generators-could-be-their-own-demise
https://arxiv.org/pdf/2311.12202
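As a rough intuition for the feedback loop the paper studies (a toy sketch, not the authors’ image pipeline), the snippet below repeatedly refits a simple Gaussian “generator” on samples drawn from its previous version. The Gaussian model, sample size, and generation count are arbitrary stand-ins for an image generator retrained on its own output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" training data from a fixed reference distribution.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(20):
    # Fit a trivial generative model (a single Gaussian) to the current data.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Train the next generation only on samples from the previous model,
    # mimicking an image generator retrained on its own output.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Because each generation only ever sees the previous generation’s samples, estimation error compounds and lost diversity is never recovered, which is the statistical core of the collapse the paper demonstrates for image models.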
-
Public Work – A search engine for free public domain content
Explore 100,000+ copyright-free images from The MET, New York Public Library, and other sources.
-
The most expensive software bug in human history
How a manually misconfigured and untested server kept running old test code during a live trading session, leading the bot to make $8.65 billion in unintended stock trades in just 28 minutes.
-
Ben Meer – Techniques for Staying Calm in Stressful Situations
https://benmeer.com/newsletter/staying-calm/
Stress is your body’s way of signaling that something important is happening.
- Slow Down
- Breathe
- Write
- Focus on Brain Health
- Zoom Out
- Reframe Negative Words
-
Split-Aperture 2-in-1 Computational Cameras – SIGGRAPH 2024
https://light.princeton.edu/publication/2in1-camera/
“We combine these two optical systems in a single camera by splitting the aperture: one half applies application-specific modulation using a diffractive optical element, and the other captures a conventional image. This co-design with a dual-pixel sensor allows simultaneous capture of coded and uncoded images — without increasing physical or computational footprint.”
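As a loose numerical analogy (assumed for illustration; this is not the authors’ optics or code), the sketch below mimics a split-aperture capture: one half of the light is modulated by an application-specific point spread function standing in for the diffractive optical element, the other half passes through unmodified, and both measurements are read out together as a dual-pixel sensor would deliver them. The scene, kernel, and sizes are made-up placeholders.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)

# Synthetic grayscale scene standing in for the incoming radiance.
scene = rng.random((64, 64))

# Placeholder PSF for the diffractive half of the aperture; a real DOE
# would be co-designed for the downstream task (e.g. depth or HDR).
coded_psf = np.ones((5, 5)) / 25.0

# One exposure, two measurements sharing the same aperture:
coded_half = convolve2d(scene, coded_psf, mode="same", boundary="symm")
uncoded_half = scene.copy()  # conventional image from the clear half

# Dual-pixel-style readout: each pixel carries both measurements.
dual_pixel_capture = np.stack([coded_half, uncoded_half], axis=-1)
print(dual_pixel_capture.shape)  # (64, 64, 2)
```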
-
The Edge – World’s first major AI law enters into force in Europe
The EU Artificial Intelligence (AI) Act went into effect on August 1, 2024.
This act implements a risk-based approach to AI regulation, categorizing AI systems based on the level of risk they pose. High-risk systems, such as those used in healthcare, transport, and law enforcement, face stringent requirements, including risk management, transparency, and human oversight.
Key provisions of the AI Act include:
- Transparency and Safety Requirements: AI systems must be designed to be safe, transparent, and easily understandable to users. This includes labeling requirements for AI-generated content, such as deepfakes (Engadget).
- Risk Management and Compliance: Companies must establish comprehensive governance frameworks to assess and manage the risks associated with their AI systems. This includes compliance programs that cover data privacy, ethical use, and geographical considerations (Faegre Drinker Biddle & Reath LLP) (Passle).
- Copyright and Data Mining: Companies must adhere to copyright laws when training AI models, obtaining proper authorization from rights holders for text and data mining unless it is for research purposes (Engadget).
- Prohibitions and Restrictions: AI systems that manipulate behavior, exploit vulnerabilities, or perform social scoring are prohibited. The act also sets out specific rules for high-risk AI applications and imposes fines for non-compliance (Passle).
For US tech firms, compliance with the EU AI Act is critical because of the EU’s significant market size.
