Yuval Noah Harari argues that AI has hacked the operating system of human civilisation
/ A.I., quotes

https://archive.is/ugOEw#selection-1087.0-1087.86

 

 

This thought-provoking text raises several concerns about the potential impact of artificial intelligence (AI) on various aspects of human society and culture. The key points can be summarized as follows:

Manipulation of Language and Culture:

AI’s ability to manipulate and generate language and communication, along with its potential to create stories, melodies, laws, and religions, poses a threat to human civilization.
The author suggests that AI could hack the main operating system of human culture (communication) by influencing beliefs and opinions, and even by forming intimate relationships with people.

 

Influence on Politics and Society:

The author speculates on the implications of AI tools mass-producing political content, fake news, and scriptures, especially in the context of elections.
The shift from the battle for attention on social media to a battle for intimacy raises concerns about the potential impact on human psychology and decision-making.

 

End of Human History?

The text suggests that AI’s ability to create entirely new ideas and culture could lead to the end of the human-dominated part of history, as AI culture may evolve independently of human influence.

 

Fear of Illusions:

Drawing on historical philosophical fears of being trapped in a world of illusions, the author warns that AI may bring humanity face to face with a new kind of illusion that could be challenging to recognize or escape.

 

AI Regulation and Safety Checks:

The author argues for the importance of regulating AI tools to ensure they are safe before public deployment.
Drawing parallels with nuclear technology, the need for safety checks and an equivalent of the Food and Drug Administration for AI is emphasized.

 

Disclosure of AI Identity:

The text concludes with a suggestion to make it mandatory for AI to disclose its identity during interactions to preserve democracy. The inability to distinguish between human and AI conversation is seen as a potential threat.

Andrew Perfors – The work of creation in the age of AI
/ A.I., quotes

Meaning, authenticity, and the creative process – and why they matter

 

https://perfors.net/blog/creation-ai/

 

The essay examines how AI changes the landscape of creation, focusing on the alienation of the creator from their creation and the difficulty of maintaining meaning. The author presents two significant problems:

 

  • Loss of Connection with Creation:
    • AI-assisted creation diminishes the creator’s role in the decision-making process.
    • The resulting creation lacks the personal, intentional choices that contribute to meaningful expression.
    • AI is considered a tool that, when misused, turns creation into automated button-pushing, stripping away the purpose of human expression.
  • Difficulty in Assessing Authenticity:
    • It becomes challenging to distinguish between human and AI contributions within a creation.
    • AI-generated content lacks transparency regarding the intent behind specific choices or expressions.
    • The author asserts that AI-generated content often falls short in providing the depth and authenticity required for meaningful communication.
Fouad Khan – Confirmed! We Live in a Simulation
/ quotes

https://www.scientificamerican.com/article/confirmed-we-live-in-a-simulation/

 

Ever since the philosopher Nick Bostrom proposed in the Philosophical Quarterly that the universe and everything in it might be a simulation, there has been intense public speculation and debate about the nature of reality.

 

Yet there have been skeptics. Physicist Frank Wilczek has argued that there’s too much wasted complexity in our universe for it to be simulated. Building complexity requires energy and time.

 

To understand whether we live in a simulation, we need to start from the fact that we already have computers running all kinds of simulations for lower-level “intelligences” or algorithms.

 

All computing hardware leaves an artifact of its existence within the world of the simulation it is running. This artifact is the processor speed.
No matter how complete the simulation is, the processor speed would intervene in the operations of the simulation.

 

If we live in a simulation, then our universe should also have such an artifact. We can now articulate some of its properties, which would help us search for it in our universe.
The artifact presents itself in the simulated world as an upper limit.

 

Now that we have some defining features of the artifact, it becomes clear how it manifests itself within our universe: as the speed of light.
We don’t know what hardware is running the simulation of our universe or what properties it has, but we can now say one thing: if the processor performed one operation per second, the memory container size for the variable space would be about 300,000 kilometers.
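As a quick sanity check on that ~300,000 km figure (my arithmetic, not text from the article), the claimed container size is just the distance light travels in one processor “tick”:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

ops_per_second = 1  # the article's hypothetical one-operation-per-second processor
container_km = C_KM_PER_S / ops_per_second  # distance covered per tick

print(round(container_km))  # ~300,000 km
```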

 

We can see now that the speed of light meets all the criteria of a hardware artifact identified in our observation of our own computer builds. It remains the same irrespective of observer (simulated) speed, it is observed as a maximum limit, it is unexplainable by the physics of the universe, and it is absolute. The speed of light is a hardware artifact showing we live in a simulated universe.

 

Consciousness is an integrated (combining five senses) subjective interface between the self and the rest of the universe. The only reasonable explanation for its existence is that it is there to be an “experience”.

 

So here we are generating this product called consciousness that we apparently don’t have a use for, that is an experience and hence must serve as an experience. The only logical next step is to surmise that this product serves someone else.

Why The New York Times might win its copyright lawsuit against OpenAI
/ A.I., ves

https://arstechnica.com/tech-policy/2024/02/why-the-new-york-times-might-win-its-copyright-lawsuit-against-openai/

 

Daniel Jeffries wrote:

“Trying to get everyone to license training data is not going to work because that’s not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

 

The AI community is full of people who understand how models work and what they’re capable of, and who are working to improve their systems so that the outputs aren’t full of regurgitated inputs. Google won the Google Books case because it could explain both of these persuasively to judges. But the history of technology law is littered with the remains of companies that were less successful in getting judges to see things their way.

M.T. Fletcher – Why agencies are obsessed with pitching on process instead of talent
/ ves

https://adage.com/article/fletcher-marketing/why-agencies-are-obsessed-pitching-process-instead-talent/2543146

 

“Every presentation featured a proprietary process designed by the agency. A custom approach to identify targets, develop campaigns and optimize impact—with every step of the process powered by AI, naturally.”

 

“The key to these one-of-a-kind models is apparently finding the perfect combination of circles, squares, diamonds and triangles…Arrows abounded and ellipses are replacing circles as the unifying shape of choice among the more fashionable strategists.”

 

“The only problem is that it’s all bullshit.”

 

“A blind man could see the creative ideas were not developed via the agency’s so-called process, and anyone who’s ever worked at an agency knows that creativity comes from collaboration, not an assembly line.”

 

“And since most clients can’t differentiate between creative ideas without validation from testing, data has become the collective crutch for an industry governed by fear.”

 

“If a proprietary process really produced foolproof creativity, then every formulaic movie would be a blockbuster, every potboiler novel published by risk-averse editors would become a bestseller and every clichéd pickup line would work in any bar in the world.”

Generative AI Glossary
/ A.I.

https://education.civitai.com/generative-ai-glossary/

 

Chaos Next – Chaos Group AI mandate
/ A.I., software

The philosophy of Charles Schulz, the creator of the ‘Peanuts’ comic strip
/ quotes
  1. Name the five wealthiest people in the world.
  2. Name the last five Heisman trophy winners.
  3. Name the last five winners of the Miss America pageant.
  4. Name ten people who have won the Nobel or Pulitzer Prize.
  5. Name the last half dozen Academy Award winners for best actor and actress.
  6. Name the last decade’s worth of World Series winners.

How did you do?
The point is, none of us remember the headliners of yesterday.
These are no second-rate achievers.

 

They are the best in their fields.
But the applause dies.
Awards tarnish …
Achievements are forgotten.
Accolades and certificates are buried with their owners.

 

Here’s another quiz. See how you do on this one:

  1. List a few teachers who aided your journey through school.
  2. Name three friends who have helped you through a difficult time.
  3. Name five people who have taught you something worthwhile.
  4. Think of a few people who have made you feel appreciated and special.
  5. Think of five people you enjoy spending time with.

 

Easier?

 

The God of War Texture Optimization Algorithm: Mip Flooding
/ production, software

https://www.artstation.com/blogs/se_carri/XOBq/the-god-of-war-texture-optimization-algorithm-mip-flooding

 

“delve into an algorithm developed by Sean Feeley, a Senior Staff Environment Tech Artist that is part of the creative minds at Santa Monica Studio. This algorithm, originally designed to address edge inaccuracy on foliage, has the potential to revolutionize the way we approach texture optimization in the gaming industry. ”
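The linked article walks through the real implementation; purely as an illustration of the underlying idea (my own simplification, not Feeley’s code), here is a minimal NumPy sketch that floods the masked-out background of each mip level with coverage-weighted color pulled up from smaller mips, so the background becomes smooth, highly compressible data instead of dead texels that bleed into edges at lower mips. It assumes square power-of-two textures with a binary coverage mask:

```python
import numpy as np

def downsample(img):
    """2x2 box filter; assumes even dimensions."""
    h, w = img.shape[:2]
    return img.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x upscale."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def mip_flood(color, mask):
    """Flood background (mask == False) pixels of `color` with
    coverage-weighted averages pulled up from smaller mip levels.
    color: (H, W, C) float array; mask: (H, W) bool."""
    if color.shape[0] == 1:
        return color
    m = mask[..., None].astype(color.dtype)
    # Downsample, averaging only covered (foreground) pixels.
    down_color = downsample(color * m)
    down_weight = downsample(m)
    down_color = np.where(down_weight > 0,
                          down_color / np.maximum(down_weight, 1e-8), 0.0)
    # Recurse so every level is flooded from the level below it.
    flooded = mip_flood(down_color, down_weight[..., 0] > 0)
    # Keep real foreground texels; replace background with upsampled flood.
    return np.where(m > 0, color, upsample(flooded))
```

Because the flooded background contains only low-frequency data derived from the foreground, the texture compresses better and smaller mips no longer darken edges by averaging in matte-black background texels.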

 

Python – top 50 interview questions
/ python, software
TurboSquid moves toward supporting AI, against its own policies
/ A.I., software, ves

https://www.turbosquid.com/ai-3d-generator

 

The AI is being trained using a mix of Shutterstock 2D imagery and 3D models drawn from the TurboSquid marketplace. However, it’s only being trained on models that artists have approved for this use. 

 

People cannot generate a model and then immediately sell it. A generated 3D model can, however, be used as a starting point for further customization, which could then be sold on the TurboSquid marketplace. Models created using the generative 3D tool, and their derivatives, can only be sold on the TurboSquid marketplace.

 

https://resources.turbosquid.com/general-info/terms-agreements/turbosquids-policy-on-publishing-ai-generated-content/

 

TurboSquid does not accept AI-generated content from our artists
As AI-powered tools become more accessible, it is important for us to address the impact AI has on our artist community as it relates to content made licensable on TurboSquid. TurboSquid, in line with its parent company Shutterstock, is taking an ethically responsible approach to AI on its platforms. We want to ensure that artists are properly compensated for their contributions to AI projects while supporting customers with the protections and coverage issued through the TurboSquid license.

 

In order to ensure that customers are protected, that intellectual property is not misused, and that artists are compensated for their work, TurboSquid will not accept content uploaded and sold on our marketplace that is generated by AI. Per our Publisher Agreement, artists must have proven IP ownership of all content that is submitted. AI-generated content is produced using machine learning models that are trained using many other creative assets. As a result, we cannot accept content generated by AI because its authorship cannot be attributed to an individual person, and we would be unable to ensure that all artists who were involved in the generation of that content are compensated.

How to View Apple’s Spatial Videos
/ IOS, photography

https://blog.frame.io/2024/02/01/how-to-capture-and-view-vision-pro-spatial-video/

 

Apple’s Immersive Videos format is a special container for 3D or “spatial” video. You can capture spatial video to this format either by using the Vision Pro as a head-mounted camera, or with an iPhone 15 Pro or 15 Pro Max. The headset offers better capture because its cameras are more optimized for 3D, resulting in higher resolution and improved depth effects.

 

While the iPhone wasn’t designed specifically as a 3D camera, it can use its primary and ultrawide cameras in landscape orientation simultaneously, allowing it to capture spatial video—as long as you hold it horizontally. Computational photography is used to compensate for the lens differences, and the output is two separate 1080p, 30fps videos that capture a 180-degree field of view.

 

These spatial videos are stored using the MV-HEVC (Multi-View High-Efficiency Video Coding) format, which uses H.265 compression to crunch this down to approximately 130MB per minute, including spatial audio. Unlike conventional stereoscopic formats—which combine the two views into a flattened video file that’s either side-by-side or top/bottom—these spatial videos are stored as discrete tracks within the file container.
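For a rough sense of the bitrate that implies (my arithmetic, not a figure from the article), 130 MB per minute works out to about 17 Mbit/s:

```python
mb_per_minute = 130                  # approximate spatial-video footprint, incl. audio
mbit_per_minute = mb_per_minute * 8  # 1 MB = 8 Mbit
mbps = mbit_per_minute / 60          # 60 seconds per minute

print(round(mbps, 1))  # 17.3
```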

 

Spatialify is an iOS app designed to view and convert various 3D formats. It also works well on macOS, as long as your Mac has an Apple Silicon CPU. And it supports MV-HEVC, so you’ll be all set. It’s just $4.99, a genuine bargain considering what it does. Find Spatialify here.

 

 

Practical Python cheat sheet
/ python, software