Artificial intelligence (AI) is likely to impact job salaries rather than eliminate jobs entirely. The primary argument is that AI will erode the skill premium traditionally commanded by high-skilled workers. This erosion happens through three key mechanisms:
Skill Premium on Specialized Tasks: AI enables low-skilled workers to perform tasks at a level comparable to high-skilled workers, making skilled workers more substitutable and reducing their wage premium.
Skill Premium on Learning Advantages: AI’s ability to continuously learn and improve from vast amounts of data threatens professions that rely on continuous learning and skill development. For example, in healthcare, AI can absorb and replicate the learning and expertise of doctors, diminishing their unique value.
Skill Premium on Managerial Advantages: AI agents can take over managerial tasks like planning and resource allocation, which have traditionally required human intervention. As AI becomes more sophisticated, even complex managerial roles might lose their premium as AI performs these functions more efficiently.
These factors collectively lead to a commoditization of skills, reducing the relative advantage and salary premium of traditionally high-skilled and managerial roles. The article emphasizes that while AI may not replace jobs outright, it will significantly affect how jobs are valued and compensated.
There’s a point beyond which no individual, no team, and no company can solve the dependency and constraint puzzle using brute-force methods.
Imagine a company where 10% of the work involves multiple teams, touches different codebases, requires careful coordination, and demands frequent meetings that span organizational boundaries and challenge local incentives. This situation might still be feasible.
Now imagine that this percentage is more like 25%. Very quickly, the constraint satisfaction problem becomes an order of magnitude more complex.
What might a heuristic approach look like in product development?
Reducing work-in-progress (WIP) limits
Force ranking priorities
Weighted-shortest-job-first (WSJF), sketched below
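To make the last of these heuristics concrete, here is a minimal sketch of weighted-shortest-job-first scoring in Python; the item names, cost-of-delay figures, and sizes are illustrative assumptions, not data from any real backlog.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    cost_of_delay: float  # e.g., business value + time criticality + risk reduction
    job_size: float       # relative estimate of effort/duration

def wsjf_rank(items):
    """Order items by weighted-shortest-job-first: highest cost of delay per unit of size first."""
    return sorted(items, key=lambda i: i.cost_of_delay / i.job_size, reverse=True)

# Hypothetical backlog used only to show the ranking.
backlog = [
    WorkItem("Checkout redesign", cost_of_delay=8, job_size=5),
    WorkItem("Billing bug fix", cost_of_delay=5, job_size=1),
    WorkItem("New analytics dashboard", cost_of_delay=13, job_size=8),
]

for item in wsjf_rank(backlog):
    print(item.name, round(item.cost_of_delay / item.job_size, 2))
```

The point is not the exact numbers but the heuristic: divide cost of delay by job size and work the list top-down, rather than trying to compute a globally optimal schedule.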
Is there a chance that teams will miss an opportunity to find an optimal solution? Yes. But the probability of that happening is far outweighed by the likelihood that 1) bad things will NOT happen, and 2) good things may emerge.
The trouble, I believe, is that it can be incredibly hard for managers to make the case for what looks, on the surface, like doing less. Discussions about WIP limits and prioritization often devolve into debates over the exact WIP limit and precise estimates! Instead of seeing the forest for the trees, we obsess over finding the optimal answer.
Depth of field (DOF) is the range of distances within which a photo appears acceptably sharp.
Aperture has a huge effect on the depth of field.
Changing the f-stop (f/#) of a lens changes the aperture and, with it, the DOF.
An f-stop is just a number that tells you the size of the aperture: it is the ratio of the lens's focal length to the diameter of the aperture opening. That's how the f-stop is related to aperture (and DOF).
Increasing the f-stop increases the DOF, the area in focus (and decreases the aperture). Conversely, decreasing the f-stop decreases the DOF (and increases the aperture).
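To put rough numbers on that relationship, here is a small Python sketch using the standard thin-lens depth-of-field approximations; the 50 mm focal length, 3 m subject distance, and 0.03 mm circle of confusion are illustrative assumptions.

```python
def depth_of_field(focal_length_mm, f_stop, subject_distance_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable sharpness (thin-lens formulas).

    coc_mm is the circle of confusion; 0.03 mm is a common full-frame assumption.
    """
    # Hyperfocal distance: focusing here keeps everything from H/2 to infinity acceptably sharp.
    h = focal_length_mm ** 2 / (f_stop * coc_mm) + focal_length_mm
    s = subject_distance_mm
    near = s * (h - focal_length_mm) / (h + s - 2 * focal_length_mm)
    far = float("inf") if s >= h else s * (h - focal_length_mm) / (h - s)
    return near, far

# Same lens and subject distance, two different f-stops:
for n in (2.8, 11):
    near, far = depth_of_field(50, n, 3000)  # 50 mm lens, subject at 3 m
    far_text = "infinity" if far == float("inf") else f"{far / 1000:.2f} m"
    print(f"f/{n}: sharp from {near / 1000:.2f} m to {far_text}")
```

Running it shows the point of the paragraph above: at f/2.8 only a few tens of centimeters around the subject are sharp, while at f/11 the in-focus range is several times larger.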
In the figure, the red cone is an angular representation of the resolution of the system, while the dotted lines indicate the aperture coverage. Where the lines of the two cones intersect defines the total range of the depth of field.
The image also shows why the longer the depth of field, the greater the range of distances that remain sharp.
The spectral sensitivity of the eye is influenced by light intensity, which determines the level of activity of the cone and rod cells; this is a key characteristic of human vision. Sensitivity to individual colors, i.e., wavelengths of the light spectrum, is explained by the RGB (red-green-blue) theory. This theory assumes that there are three kinds of cones, selectively sensitive to red (700-630 nm), green (560-500 nm), and blue (490-450 nm) light, and that their mutual interaction allows us to perceive all colors of the spectrum.
Building a successful business requires a focus on three key elements: product excellence, go-to-market strategy, and operational excellence. Neglecting any of these areas can lead to failure, as evidenced by the high percentage of startups that don’t make it past the five-year mark. Founders and CEOs must ensure a solid product foundation while also integrating effective sales, marketing, and management strategies to achieve sustainable growth and scale.
Foundation: Product Excellence, Core Values and Mission
Core Values: These are the guiding principles that dictate behavior and action within the company. They form the ethical foundation and are crucial for maintaining consistency in decision-making.
Mission: This defines the company’s purpose and goals. A clear and compelling mission helps align the team and provides a sense of direction.
Structure: Operational Excellence and Innovation
Efficiency and Scalability: This layer focuses on creating efficient processes that can scale as the company grows. Streamlined operations reduce costs and increase productivity.
Operational Excellence: Efficient processes, quality control, and continuous improvement fall into this layer. Ensuring that the company operates smoothly and effectively is crucial for sustainability.
Innovation: Staying competitive requires innovation. This involves developing new products, services, or processes that add value and keep the company relevant in the market.
Quality Control and Continuous Improvement: Ensuring that operational processes are of high quality and constantly improving helps maintain product excellence and customer satisfaction.
Technology and Infrastructure: Investing in the right technology and infrastructure to support business operations is vital. This includes everything from manufacturing equipment to software systems that enhance operational efficiency.
Strategy: Go-to-Market Strategy, Vision and Long-Term Planning
Vision: A forward-looking vision inspires and motivates the team. It outlines where the company aims to be in the future and helps in setting long-term goals.
Strategic Planning: This involves setting long-term goals and determining the actions and resources needed to achieve them. It includes market analysis, competitive strategy, and growth planning.
Market Understanding: A deep understanding of the target market, including customer segments, competitors, and market trends, is essential. This knowledge helps in positioning the product effectively.
Marketing and Sales Execution: This involves creating a robust marketing plan that includes branding, messaging, and advertising strategies to attract and retain customers. Additionally, building a strong sales strategy ensures that the product reaches the right customers through the right channels.
Customer Acquisition and Retention: Effective strategies for acquiring new customers and retaining existing ones are critical. This includes loyalty programs, customer service excellence, and engagement initiatives.
A LUT (lookup table) is essentially the modifier between two images, the original image and the displayed image, based on a mathematical formula; in practice, a conversion matrix of varying complexity. There are different types of LUTs: viewing, transform, calibration, 1D, and 3D.
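As a rough sketch of the idea (not tied to any particular grading tool), here is how a simple 1D LUT remaps pixel values in Python; the gamma curve used to fill the table is just an illustrative choice.

```python
import numpy as np

def build_1d_lut(size=256, gamma=2.2):
    """Build a 1D LUT applying a gamma curve: each input level maps to one output level."""
    levels = np.linspace(0.0, 1.0, size)
    return (levels ** (1.0 / gamma) * 255).astype(np.uint8)

def apply_1d_lut(image_u8, lut):
    """Apply the LUT per channel: every original pixel value is replaced by its table entry."""
    return lut[image_u8]

lut = build_1d_lut()
original = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)  # stand-in for an image
displayed = apply_1d_lut(original, lut)
print(original[0, 0], "->", displayed[0, 0])
```

A 3D LUT generalizes this by mapping full (R, G, B) triples through a sampled 3D grid, usually with interpolation between grid points, which is why it can express more complex color transforms than a per-channel 1D table.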
Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to “look good” subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.
For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.
In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover, you would want to set the specular brightness to a realistic value for the material you're trying to simulate, based on measurements.
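As a toy illustration of that energy-conservation idea (a sketch, not any engine's actual shading model), here a single specular control drives the highlight while the diffuse term gives up a matching share of the reflectance budget:

```python
import math

def shade(n_dot_l, n_dot_h, specular_amount, shininess, light_intensity):
    """Toy, roughly energy-conserving Lambert + normalized Blinn-Phong for one white light.

    specular_amount is the single control shared by all specular responses; the diffuse
    term is scaled by (1 - specular_amount) so the surface never reflects more energy
    than it receives.
    """
    n_dot_l = max(n_dot_l, 0.0)
    n_dot_h = max(n_dot_h, 0.0)

    diffuse = (1.0 - specular_amount) / math.pi
    # Normalization factor keeps the lobe's total energy roughly constant as shininess
    # changes: a sharper highlight gets a brighter peak over a smaller footprint.
    specular = specular_amount * (shininess + 8.0) / (8.0 * math.pi) * n_dot_h ** shininess

    return (diffuse + specular) * n_dot_l * light_intensity

print(shade(n_dot_l=0.7, n_dot_h=0.95, specular_amount=0.04, shininess=64, light_intensity=3.0))
```

Raising specular_amount automatically dims the diffuse response, which is exactly the kind of coupled control described above.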
Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).
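For reference, the rendering equation mentioned above, in its usual reflected-radiance form:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\mathbf{n} \cdot \omega_i)\, d\omega_i
```

Real-time engines approximate this integral (for example with a handful of analytic lights plus prefiltered environment maps) rather than evaluating it exactly.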
It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.
On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Also, materials created in one lighting environment generally look fine in other lighting environments too. This is unlike more ad-hoc models, where a set of material parameters might look good during the daytime but come out ridiculously glowy at night, or something like that.
Here are some resources to look at for physically-based lighting in games:
SIGGRAPH 2013 Physically Based Shading Course, particularly the background talk by Naty Hoffman at the beginning. You can also check out the previous incarnations of this course for more resources.
And of course, I would be remiss if I didn’t mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.