AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability
/ A.I., ves

https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/

 

“Simon Willison created a Datasette browser to explore WebVid-10M, one of the two datasets used to train the video generation model, and quickly learned that all 10.7 million video clips were scraped from Shutterstock, watermarks and all.”

 

“In addition to the Shutterstock clips, Meta also used 10 million video clips from this 100M video dataset from Microsoft Research Asia. It’s not mentioned on their GitHub, but if you dig into the paper, you learn that every clip came from over 3 million YouTube videos.”

 

“It’s become standard practice for technology companies working with AI to commercially use datasets and models collected and trained by non-commercial research entities like universities or non-profits.”

 

“Like with the artists, photographers, and other creators found in the 2.3 billion images that trained Stable Diffusion, I can’t help but wonder how the creators of those 3 million YouTube videos feel about Meta using their work to train their new model.”

9 Best Hacks to Make a Cinematic Video with Any Camera

https://www.flexclip.com/learn/cinematic-video.html

  • Frame Your Shots to Create Depth
  • Create Shallow Depth of Field
  • Avoid Shaky Footage and Use Flexible Camera Movements
  • Properly Use Slow Motion
  • Use Cinematic Lighting Techniques
  • Apply Color Grading
  • Use Cinematic Music and SFX
  • Add Cinematic Fonts and Text Effects
  • Create the Cinematic Bar at the Top and the Bottom
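The last tip, the cinematic bars, is simple arithmetic: pick a target aspect ratio (2.39:1 is a common anamorphic standard) and black out enough rows at the top and bottom of the frame to match it. A minimal sketch in Python with NumPy (the function name is my own, not from the article):

```python
import numpy as np

def letterbox(frame: np.ndarray, target_aspect: float = 2.39) -> np.ndarray:
    """Black out top/bottom rows so the visible area matches target_aspect (width/height)."""
    h, w = frame.shape[:2]
    visible_h = int(round(w / target_aspect))   # rows the target aspect allows
    bar = max(0, (h - visible_h) // 2)          # rows to mask at top and at bottom
    out = frame.copy()
    out[:bar] = 0
    out[h - bar:] = 0
    return out

# A 1920x1080 (16:9) frame masked to 2.39:1 keeps roughly 803 visible rows
frame = np.full((1080, 1920, 3), 255, dtype=np.uint8)
boxed = letterbox(frame)
```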
VQGAN + CLIP AI made Music Video for the song Canvas by Resonate
/ A.I., design, music, production

“In this video, I utilized artificial intelligence to generate an animated music video for the song Canvas by Resonate. This tool allows anyone to generate beautiful images using only text as the input. My question was, what if I used song lyrics as input to the AI, can I make perfect music synchronized videos automatically with the push of a button? Let me know how you think the AI did in this visual interpretation of the song.

After getting caught up in the excitement around DALL·E2 (latest and greatest AI system, it’s INSANE), I searched for any way I could use similar image generation for music synchronization. Since DALL·E2 is not available to the public yet, my search led me to VQGAN + CLIP (Vector Quantized Generative Adversarial Network and Contrastive Language–Image Pre-training), before settling more specifically on Disco Diffusion V5.2 Turbo. If you don’t know what any of these words or acronyms mean, don’t worry, I was just as confused when I first started learning about this technology. I believe we’re reaching a turning point where entire industries are about to shift in reaction to this new process (which is essentially magic!).

DoodleChaos”

Gamification techniques for everyday production
/ production, quotes

https://www.zippia.com/advice/gamification-statistics/

  • 90% of employees say gamification makes them more productive at work.
  • On average, employees experience a 60% engagement increase with a gamified work experience.
  • Companies that use gamification are seven times more profitable than those that do not use gamified elements at work—whether with employees or consumers.
  • The North American gamification industry, led primarily by the U.S., is valued at $2.72 billion.
  • 72% of people say gamification motivates them to do tasks and work harder on the job.
  • 67% of students agree that gamified learning is both more engaging and motivating than traditional classes.

 

hatrabbits.com/en/gamification/

Gamification is the process of using game elements in a non-game context. It has many advantages over traditional learning approaches, including increasing learner motivation and improving knowledge retention.

10 gamification techniques you can use instantly

  • Create ‘flow’: if a task is too easy, you will get bored.
  • Let users ‘complete’ a task.
  • Set up appropriate challenges.
  • Allow players to customise things.
  • Allow users to ‘unlock’ stuff.
  • Make people curious.
  • Use the element of surprise.
  • Recognize achievements.
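To make techniques like ‘complete a task’ and ‘unlock stuff’ concrete, here is a toy Python sketch (all names are illustrative, not from the source) of a tracker that unlocks badges at completion thresholds:

```python
class TaskTracker:
    """Toy gamification: completing tasks counts up; thresholds unlock badges."""

    BADGES = {3: "Bronze", 5: "Silver", 10: "Gold"}  # completions -> badge

    def __init__(self):
        self.completed = 0
        self.unlocked = []

    def complete_task(self):
        """Record one completed task; return the badge unlocked, if any."""
        self.completed += 1
        badge = self.BADGES.get(self.completed)
        if badge:
            self.unlocked.append(badge)  # the 'surprise' reward at each threshold
        return badge

tracker = TaskTracker()
for _ in range(5):
    tracker.complete_task()
# tracker.unlocked is now ["Bronze", "Silver"]
```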

 

Working from home – tips to help you stay productive
/ production, quotes

https://www.freecodecamp.org/news/working-from-home-tips-to-stay-productive/

 

Tip #1 – Build a strong work-life balance

  1. Take breaks and go outside
  2. Take up a sport or activity that takes your mind off things
  3. Split your computer
  4. Don’t setup work notifications on your phone
  5. Work from a co-working space

 

Tip #2 – Create your own working environment

  1. Choose or create a workspace that stimulates you
  2. Separate your workspace from your sleeping one
  3. Declutter your desk

 

Tip #3 – Socialize at work

  1. Schedule virtual time with your colleagues
  2. Try to schedule a social meeting each week in your company

 

Tip #4 – Become a time-management expert

  1. Try to have a set work schedule
  2. Schedule your tasks

 

Tip #5 – Learn to deep focus

  1. Listen to focus songs
  2. Use the “Do not disturb” option on your phone
  3. Limit distractions
Outpost VFX lighting tips
/ lighting

www.outpost-vfx.com/en/news/18-pro-tips-and-tricks-for-lighting

Get as much information regarding your plate lighting as possible

Always use a reference

Replicate what is happening in real life

Invest in a solid HDRI

Start Simple

Observe real world lighting, photography and cinematography

Don’t neglect the theory

Learn the difference between realism and photo-realism.

Keep your scenes organised

Advanced Computer Vision with Python OpenCV and Mediapipe
/ Featured, production, python, software

https://www.freecodecamp.org/news/advanced-computer-vision-with-python/

 

https://www.freecodecamp.org/news/how-to-use-opencv-and-python-for-computer-vision-and-ai/

 

 

Working for a VFX (Visual Effects) studio provides numerous opportunities to leverage the power of Python and OpenCV for various tasks. OpenCV is a versatile computer vision library that can be applied to many aspects of the VFX pipeline. Here’s a detailed list of opportunities to take advantage of Python and OpenCV in a VFX studio:

 

  1. Image and Video Processing:
    • Preprocessing: Python and OpenCV can be used for tasks like resizing, color correction, noise reduction, and frame interpolation to prepare images and videos for further processing.
    • Format Conversion: Convert between different image and video formats using OpenCV’s capabilities.
  2. Tracking and Matchmoving:
    • Feature Detection and Tracking: Utilize OpenCV to detect and track features in image sequences, which is essential for matchmoving tasks to integrate computer-generated elements into live-action footage.
  3. Rotoscoping and Masking:
    • Segmentation and Masking: Use OpenCV for creating and manipulating masks and alpha channels for various VFX tasks, like isolating objects or characters from their backgrounds.
  4. Camera Calibration:
    • Intrinsic and Extrinsic Calibration: Python and OpenCV can help calibrate cameras for accurate 3D scene reconstruction and camera tracking.
  5. 3D Scene Reconstruction:
    • Stereoscopy: Use OpenCV to process stereoscopic image pairs for creating 3D depth maps and generating realistic 3D scenes.
    • Structure from Motion (SfM): Implement SfM techniques to create 3D models from 2D image sequences.
  6. Green Screen and Blue Screen Keying:
    • Chroma Keying: Implement advanced keying algorithms using OpenCV to seamlessly integrate actors and objects into virtual environments.
  7. Particle and Fluid Simulations:
    • Particle Tracking: Utilize OpenCV to track and manipulate particles in fluid simulations for more realistic visual effects.
  8. Motion Analysis:
    • Optical Flow: Implement optical flow algorithms to analyze motion patterns in footage, useful for creating dynamic VFX elements that follow the motion of objects.
  9. Virtual Set Extension:
    • Camera Projection: Use camera calibration techniques to project virtual environments onto physical sets, extending the visual scope of a scene.
  10. Color Grading:
    • Color Correction: Implement custom color grading algorithms to match the color tones and moods of different shots.
  11. Automated QC (Quality Control):
    • Artifact Detection: Develop Python scripts to automatically detect and flag visual artifacts like noise, flicker, or compression artifacts in rendered frames.
  12. Data Analysis and Visualization:
    • Performance Metrics: Use Python to analyze rendering times and optimize the rendering process.
    • Data Visualization: Generate graphs and charts to visualize render farm usage, project progress, and resource allocation.
  13. Automating Repetitive Tasks:
    • Batch Processing: Automate repetitive tasks like resizing images, applying filters, or converting file formats across multiple shots.
  14. Machine Learning Integration:
    • Object Detection: Integrate machine learning models (using frameworks like TensorFlow or PyTorch) to detect and track specific objects or elements within scenes.
  15. Pipeline Integration:
    • Custom Tools: Develop Python scripts and tools to integrate OpenCV-based processes seamlessly into the studio’s pipeline.
  16. Real-time Visualization:
    • Live Previsualization: Implement real-time OpenCV-based visualizations to aid decision-making during the preproduction stage.
  17. VR and AR Integration:
    • Augmented Reality: Use Python and OpenCV to integrate virtual elements into real-world footage, creating compelling AR experiences.
  18. Camera Effects:
    • Lens Distortion: Correct lens distortions and apply various camera effects using OpenCV, contributing to the desired visual style.

 

Interpolating frames from an EXR sequence using OpenCV can be useful when you have only every second frame of a final render and you want to create smoother motion by generating intermediate frames. However, keep in mind that interpolating frames might not always yield perfect results, especially if there are complex changes between frames. Here’s a basic example of how you might use OpenCV to achieve this:

 

import os

# OpenCV's EXR support is often disabled by default and must be enabled
# before cv2 is imported
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"

import cv2
import numpy as np

# Replace with the path to your EXR frames
exr_folder = "path_to_exr_frames"

# Replace with the appropriate frame extension and naming convention
frame_template = "frame_{:04d}.exr"

# Define the range of frame numbers you have (every second frame)
start_frame = 1
end_frame = 100
step = 2

# Define the output folder for interpolated frames
output_folder = "output_interpolated_frames"
os.makedirs(output_folder, exist_ok=True)

# Loop through the frame range and interpolate the missing in-between frames
for frame_num in range(start_frame, end_frame + 1, step):
    frame_path = os.path.join(exr_folder, frame_template.format(frame_num))
    next_frame_path = os.path.join(exr_folder, frame_template.format(frame_num + step))

    if os.path.exists(frame_path) and os.path.exists(next_frame_path):
        frame = cv2.imread(frame_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
        next_frame = cv2.imread(next_frame_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)

        # Interpolate frames using simple averaging
        interpolated_frame = ((frame + next_frame) / 2).astype(np.float32)

        # Save the interpolated frame under the missing frame number
        output_path = os.path.join(output_folder, frame_template.format(frame_num + 1))
        cv2.imwrite(output_path, interpolated_frame)

        print(f"Interpolated frame {frame_num + 1}")


 

Please note the following points:

 

  • The above example uses simple averaging to interpolate frames. More advanced methods, such as motion-based (optical flow) interpolation, might provide better results.
  • EXR files can store high dynamic range (HDR) data, so make sure to use the cv2.IMREAD_ANYDEPTH flag when reading these files.
  • Many OpenCV builds ship with EXR support disabled; setting the OPENCV_IO_ENABLE_OPENEXR environment variable before importing cv2 usually enables it. Alternatively, use a library like OpenEXR to read and manipulate EXR files, then convert them to OpenCV-compatible arrays.
  • Consider the characteristics of your specific render when using interpolation. If there are large changes between frames, the interpolation might lead to artifacts.
  • Experiment with different interpolation methods and parameters to achieve the desired result.
  • For more advanced and accurate interpolation, you might need to implement or use existing algorithms that take motion estimation and compensation into account.

 

Your Career in Animation: How to Survive and Thrive – book by Veteran animation director and author David B. Levy
/ animation, quotes, ves

www.awn.com/animationworld/your-career-animation-how-survive-and-thrive-now-available

“The new edition of his book is filled with insight and advice from over 150 animation industry professionals, a wide-ranging tome of suggestions, reality checks, and inspiration on how to set your sights and blaze your own career trail. He shares practical tips on building a reel, portfolio, and resume; pitching and selling shows; and taking to heart and learning from on-the-job criticism.”

“Everyone seems to agree, and I certainly do, and it’s my POV in the book, that self-development is everything. You shouldn’t wait for anyone to make you anything, to give you opportunities you haven’t earned yet.”

Official Pytorch implementation of Detailed Expression Capture and Animation
/ software

github.com/YadiraF/DECA

DECA reconstructs a 3D head model with detailed facial geometry from a single input image. The resulting 3D head model can be easily animated.

The code is based on Learning an Animatable Detailed 3D Face Model from In-The-Wild Images

arxiv.org/abs/2012.04012

Remote working pros and cons
/ production

www.leforttalentgroup.com/business-blog/is-the-genie-out-forever

Cons of remote working:

  • 1-Prefer 2 distinct locations in life — 1 for work, 1 for everything else
  • 2-Being able to manage the group of employees in one location is preferable — Meetings, training, management of teams and personalities has been easier.
  • 3-Confidentiality and Security — depending on the nature of the business, being able to lessen liabilities by containing the work location
  • 4-Social community — Many fully enjoy the traditional work community and build life long connections
  • 5-Love — A quick Google search shows various sources that cite anywhere from 20-33 percent of people met their spouse through work. What will those stats look like in a year or two from now?
  • 6-Road Warriors with great sound systems in their cars — Some enjoy the commute to unwind after work, cranking tunes or catching up with friends and family while waiting for the gridlock to ease; others use it to continue working from the car.

Pros of remote working:

  • 1-Lower overhead costs — No need to keep large commercial real estate holdings and cover related maintenance costs
  • 2-Killer commutes — 5-20 hours/week per employee in lost time now potentially used for other purposes
  • 3-No more daily daycare scramble — Racing to drop kids off or pick them up each day
  • 4-Environmentally, a lower carbon footprint — Less traffic, less pollution
  • 5-Quality Family time — Many parents are spending more time with their growing children

Some useful tips about working online:

  • Clarify and focus on priorities.
  • Define and manage expectations more explicitly than normal (give context to everything)
  • Log all your working hours.
  • Learn about and respect people’s boundaries.
  • Pay attention to people’s verbal and physical cues.
  • Pay attention to people’s emotional, hidden, and factual cues.
  • Be wary of anticipating, judging, rationalizing, competing, defending, rebutting…
The future of BCI and VR according to Gabe Newell from Valve Corporation
/ VR

www.tvnz.co.nz/one-news/new-zealand/gabe-newell-says-brain-computer-interface-tech-allow-video-games-far-beyond-human-meat-peripherals-can-comprehend

 

 

– Valve is currently working on an open-source BCI software project, to interpret the signals being read from people’s brains using VR headsets.

– “If you’re a software developer in 2022 who doesn’t have one of these in your test lab, you’re making a silly mistake,”

– “The real world will seem flat, colourless, blurry compared to the experiences you’ll be able to create in people’s brains.”

– BCIs have advanced to a point where that (VR) vertigo could be suppressed artificially, and “it’s more of a certification issue than a scientific one”.

– Neuroplasticity is the ability of our brains to re-learn how to operate the body when something changes.

– “You can iterate software faster than a prosthetic”

Python and TCL: Tips and Tricks for Foundry Nuke
/ Featured, production, python, software

www.andreageremia.it/tutorial_python_tcl.html

https://www.gatimedia.co.uk/list-of-knobs-2

https://learn.foundry.com/nuke/developers/63/ndkdevguide/knobs-and-handles/knobtypes.html

 


 

http://thoughtvfx.blogspot.com/2012/12/nuke-tcl-tips.html


Check final image quality

https://www.compositingpro.com/tech-check-compositing-shot-in-nuke/

Local copy:
http://pixelsham.com/wp-content/uploads/2023/03/compositing_pro_tech_check_nuke_script.nk

 

Nuke tcl procedures
https://www.gatimedia.co.uk/nuke-tcl-procedures

 


 

# return to the top
nuke.Root().begin()
nuke.allNodes()
nuke.Root().end()



# check if Nuke is running in UI or batch mode
nuke.env['gui'] # True or False



# preformatted font to use in a text node:
# liberation mono




# import node from a path
# Replace '/path/to/your/script.nk' with the actual path to your Nuke script
script_path = '/path/to/your/script.nk'
# Create the node in the script
mynode = nuke.nodePaste(script_path)
# or
mynode = nuke.scriptReadFile(script_path) # asynchronous, so the code won't wait for completion; mynode is empty
# same as
mynode = nuke.tcl('source "{}"'.format(script_path))
# mynode will be empty here too, and it won't select the node either
# or synchronous
mynode = NukeUI.Scriptlets.loadScriptlet(script_path)




# connect a knob on an internal node to a parent's knob
# add a python expression to the internal node's knob like:
nuke.thisNode().parent()['falseColors'].getValue()
# or the opposite
not nuke.thisNode().parent()['falseColors'].getValue()
# or as tcl expression
1- parent.thatNode.disable




# check session performance
Sebastian Schütt – Monitoring Nuke’s sessions performance
nuke.startPerformanceTimers()
nuke.resetPerformanceTimers()
nuke.stopPerformanceTimers()

# set a project start and end frames
new_frame_start = 1
new_frame_end = 100
project_settings = nuke.Root()
project_settings['first_frame'].setValue(new_frame_start)
project_settings['last_frame'].setValue(new_frame_end)

# disable/enable a node
newReadNode['disable'].setValue(True)

# force refresh a node
myNode['update'].execute()
# or
myNode.forceValidate()

# pop up a UI alert warning
nuke.alert('prompt')

# return a given node
hdriGenNode = nuke.toNode('HDRI_Light_Export')
clampTo1 = nuke.toNode('HDRI_Light_Export.Clamp To 1')

# access nodes within a group
nuke.toNode('GroupNodeName.nestedNodeName')

# access a knob on a node
hdriGenNode.knob('checkbox').getValue()

# return the node type
topNode.Class()
nuke.selectedNode().Class()
nuke.selectedNode().name()

# return nodes within a group
hdriGenNode = nuke.toNode('HDRI_Light_Export')
hdriGenNode.begin()
sel = nuke.selectedNodes()
hdriGenNode.end()

# nodes position
node.setXpos( 111 )
node.setYpos( 222 )
xPos = node.xpos()
yPos = node.ypos()
print('new x position is', xPos)
print('new y position is', yPos)

# execute a node's button through python
node['button'].execute()

# add knobs
div = nuke.Text_Knob('someTextKnob','')
myNode.addKnob(div)
lgt_name = nuke.EvalString_Knob('lgt1_name','LGT1 name', 'some text') # id, label, txt
myNode.addKnob(lgt_name)
lgt_size = nuke.XY_Knob('lgt1_size', 'LGT1 size')
myNode.addKnob(lgt_size)
lgt_3Dpos = nuke.XYZ_Knob('lgt1_3Dpos', 'LGT1 3D pos')
myNode.addKnob(lgt_3Dpos)
lgt_distance = nuke.Double_Knob('lgt1_distance', ' distance')
myNode.addKnob(lgt_distance)
lgt_isSun = nuke.Boolean_Knob('lgt1_isSun', ' sun/HMI')
myNode.addKnob(lgt_isSun)
lgt_mask_clr = nuke.AColor_Knob('lgt1_maskClr', 'LGT1 mask clr')
lgt_mask_clr.setValue([0.12, 0.62, 0.115, 0.65])
lgt_mask_clr.setVisible(False)
myNode.addKnob(lgt_mask_clr)

# add tab group knob
lightTab = nuke.Tab_Knob('lgt1_tabBegin', 'LGT1', nuke.TABBEGINGROUP)
myNode.addKnob(lightTab)
lightTab = nuke.Tab_Knob('lgt1_tabEnd', 'LGT1', nuke.TABENDGROUP)
myNode.addKnob(lightTab)
# note: if you have only one tab and you are programmatically adding to the bottom of it,
# remove the last endGroup knob to make sure the new knobs go into the tab
myNode.removeKnob(myNode['endGroup'])

# python script knob
remove_script = """
node = nuke.thisNode()
for knob in node.knobs():
    print(knob)
    if "lgt%s" in knob:
        node.removeKnob(node.knobs()[knob])
node.begin()
lightGizmo = nuke.toNode('lgt%s')
nuke.delete(lightGizmo)
node.end()
""" % (str(length), str(length))
lgt_remove = nuke.PyScript_Knob('lgt1_remove', 'LGT1 Remove', remove_script)
myNode.addKnob(lgt_remove)

# link checkbox to function through knobChanged
hdriGenNode.knob('knobChanged').setValue('''
nk = nuke.thisNode()
k = nuke.thisKnob()
if ("Jabuka_checkbox" in k.name()):
    print('ciao')
''')

# knobChanged production example
my_code = """
n = nuke.thisNode()
k = nuke.thisKnob()
if k.name()=="sheetOrSequence" or k.name()=="showPanel":
    #print(nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue())
    if nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue() == 0.0:
        n['frameEnd'].setValue(nuke.toNode(n.name() + '.MasterSwitch')['masterAppendClip_lastFrame'].getValue())
    elif nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue() == 1.0:
        n['frameEnd'].setValue(nuke.toNode(n.name() + '.MasterSwitch')['masterContactSheet_lastFrame'].getValue())
"""
nuke.toNode("JonasCSheet1").knob("knobChanged").setValue(my_code)

# retrieve the knobChanged callback
node['knobChanged'].toScript()

# nuke knobChanged callback
# https://corson.be/nuke_python_snippet/
# "knobChanged" is a hidden knob which holds code executed each time we touch any of the node's knobs.
# Thanks to that we can filter user actions on the node and do cool stuff like dynamically adding things inside a group.
# this follows the node
code = """
knob = nuke.thisKnob()
if knob.name() == 'size':
    print("size : %s" % knob.value())
"""
nuke.selectedNode()["knobChanged"].setValue(code)

# find downstream nodes of a given class
def find_dependent_nodes(selected_node, targetClass):
    dependent_nodes = set()
    visited_nodes = set()
    def recursive_search(node):
        if node in visited_nodes:
            return
        visited_nodes.add(node)
        dependents = node.dependent()
        for dependent_node in dependents:
            print(dependent_node.Class())
            if dependent_node.Class() == targetClass:
                dependent_nodes.add(dependent_node)
            recursive_search(dependent_node)
    recursive_search(selected_node)
    return dependent_nodes

find_dependent_nodes(node, 'Write')

# react to knob changes through a nuke callback
def myCallback():
    # Code to execute when any checkbox knob changes
    print("Some checkbox value has changed!")
    n = nuke.thisNode()
    k = nuke.thisKnob()
    if k.name()=="myknob" or k.name()=="showPanel":
        print('do this')

nuke.addKnobChanged(myCallback)
nuke.removeKnobChanged(myCallback) # remove it first every time you wish to change the callback

# nuke callback production example
# (note this will need to be saved in a place that nuke can retrieve:
# https://support.foundry.com/hc/en-us/articles/115000007364-Q100248-Adding-Callbacks-in-Nuke)
def sheetOrSequenceCallback():
    # Code to execute when the checkbox knob changes
    n = nuke.thisNode()
    k = nuke.thisKnob()
    if k.name()=="sheetOrSequence" or k.name()=="showPanel":
        #print(nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue())
        if nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue() == 0.0:
            n['frameEnd'].setValue(nuke.toNode(n.name() + '.MasterSwitch')['masterAppendClip_lastFrame'].getValue())
        elif nuke.toNode(n.name() + '.MasterSwitch')['which'].getValue() == 1.0:
            n['frameEnd'].setValue(nuke.toNode(n.name() + '.MasterSwitch')['masterContactSheet_lastFrame'].getValue())

# Add the callback function to the knob
nuke.addKnobChanged(sheetOrSequenceCallback)
nuke.removeKnobChanged(sheetOrSequenceCallback)
# more about callbacks
# return all knobs
for label, knob in sorted(jonasNode.knobs().items()):
    print(label, knob.value())

# remove a knob
for label, knob in sorted(mynode.knobs().items()):
    if 'keyshot' in label.lower():
        mynode.removeKnob(knob)

# work inside a node group
posNode.begin()
posNode.end()

# move back to root level
nuke.Root().begin()

# Add a button link to docs
import webbrowser
browser = webbrowser.get('chrome')
site = 'https://yoursite'
browser.open(site)

# return all nodes
nuke.allNodes()

# python code inside a text node message
[python -exec {
import re
import json
output = 'hello'
...
...
}]
[python output]

# connect nodes
blur.setInput(0, read)

# label a node
blur['label'].setValue("Size: [value size]\nChannels: [value channels]\nMix: [value mix]")

# disconnect nodes
node.setInput(0, None)

# arrange nodes
for n in nuke.allNodes():
    n.autoplace()

# snap them to closest grid
for n in nuke.allNodes():
    nuke.autoplaceSnap( n )

# help on commands
help(nuke.Double_Knob)

# rename nodes
node['name'].setValue('new')

# query the format of an image at a given node level
myNode.input(0).format().width()

# select given node
all_nodes = nuke.allNodes()
for i in all_nodes:
    i.knob("selected").setValue(False)
myNode.setSelected(True)

# return the connected nodes
metaNode.dependent()

# return the input node
metaNode.input(0)

# copy and paste node
nuke.toNode('original node').setSelected(True)
nuke.nodeCopy(nukescripts.cut_paste_file())
nukescripts.clear_selection_recursive()
newNode = nuke.nodePaste(nukescripts.cut_paste_file())

# copy and paste node alternative
# https://corson.be/nuke_python_snippet/
node = nuke.selectedNode()
newNode = nuke.createNode(node.Class(), node.writeKnobs(nuke.WRITE_NON_DEFAULT_ONLY | nuke.TO_SCRIPT), inpanel=False)
node.writeKnobs(nuke.WRITE_USER_KNOB_DEFS | nuke.WRITE_NON_DEFAULT_ONLY | nuke.TO_SCRIPT)

# set knob value
metaNode.knob('operation').setValue('Avg Intensities')

# get knob value
writeNode.knob('file').value()

# get a pulldown choice knob label
pulldown_knob = node[knob_name]
pulldown_index = pulldown_knob.value() # Get the current index of the pulldown knob
pulldown_label = pulldown_knob.enumName(pulldown_index)

# link two knobs' attributes
# add knob link
k = nuke.Link_Knob('attr1_id','attr1')
k.makeLink(node.name(), 'attr2_id.attr2')

# link two knobs between different nodes
sel = nuke.selectedNode()
lgt_colorspace = nuke.Link_Knob('colorspace', 'Colorspace')
sel.addKnob(lgt_colorspace)
Read1 = nuke.selectedNode()
sel.knob('colorspace').makeLink(Read1.name(), 'colorspace')

# link pulldown menus
# Ben, how do I expression-link Pulldown Knobs?
# This syntax can be read as {child node}.{knob} — link to {parent node}.{knob}
nuke.toNode('lgtRenderStatistics.Text2').knob('yjustify').setExpression('lgtRenderStatistics.yjustify')
nuke.toNode('lgtRenderStatistics.Text2').knob('xjustify').setExpression('lgtRenderStatistics.xjustify')

# create a grade node set to only red and change its gain
mg = nuke.nodes.Grade(name='test2', channels='red')
mg['white'].setValue(2)

# remove a node
nuke.delete(newNode)

# get one value out of an array parameter
mynode.knob(pos_name).value()[0]
mynode.knob(pos_name).value()[1]

# find all nodes of type Write
writeNodesList = []
for node in nuke.allNodes('Write'):
    writeNodesList.append(node)

# create an expression in python to connect parameters
myNode.knob("ROI").setExpression("parent." + pos_name)

# link knobs between nodes through an expression
# (https://learn.foundry.com/nuke/content/comp_environment/expressions/linking_expressions.html)
Transform1.scale

# connect two checkbox knobs so that one works the opposite of the other (False:True)
node = nuke.toNode('myNode.Text2_all_sphericalcameratest_beauty')
node.knob('disable').setExpression('parent.viewStats ? 0 : 1')

# connect parameters between nodes at different levels through a (non python) expression
maskGradeNode.knob('white').setExpression('parent.parent.lgt1_maskClr')

# connect parameters between nodes at different levels through a python expression
nuke.thisNode().parent()['sheetOrSequence'].getValue()
# or using python in TCL
[python {nuke.thisNode().parent()['sheetOrSequence'].getValue()}]

# multiline python expression in TCL
[python {nuke.thisNode().parent()['sheetOrSequence'].getValue()}]
[python {print(nuke.thisNode())}]

# multiline python expression from code with a return statement
nuke.selectedNode().knob('which').setExpression('''[python -execlocal
x = 2
for i in range(10):
    x += i
ret = x]''', 0)

# connect 2d knobs on the same node through a python expression
nuke.thisNode()['TL'].getValue()[0] + ((nuke.thisNode()['TR'].getValue()[0] - nuke.thisNode()['TL'].getValue()[0])/2)

# To add this as a python expression on each x and y of a 2d knob
newLabel_expression_x = "[python nuke.thisNode()\['TL'\].getValue()\[0\] + ((nuke.thisNode()\['TR'\].getValue()\[0\] - nuke.thisNode()\['TL'\].getValue()\[0\])/2)]"
newLabel_expression_y = "[python nuke.thisNode()\['TL'\].getValue()\[1\] + 10]"
node['lightLabel'].setExpression(newLabel_expression_x, 0)
node['lightLabel'].setExpression(newLabel_expression_y, 1)
# note this may launch some errors when generating the node
# a tcl expression seems to work best
newLabel_expression_x = "lgt" + str(length) + "_tl.x() + 20"
newLabel_expression_y = "lgt" + str(length) + "_tl.y() + 20"
lgt_label.setExpression(newLabel_expression_x, 0)
lgt_label.setExpression(newLabel_expression_y, 1)

# load gizmo
nuke.load('Offset')

# set knobs colors
hdriGenNode.knob('add').setLabel("<span style='color: yellow;'>Add Light")
# or at creation, add knob with color
lgt_LUX = nuke.Text_Knob('lgt%s_LUX' % str(length), "<font color='yellow'> LUX", '0') # id, label, txt
# or when creating the knob manually: <font color='#FF0000'>Keyshot 1 or <font color='red'>Keyshot 1

# set color knob values
hdriGenNode.knob('lgt_maskClr_1').setValue([0.0, 0.5, 0.0, 0.8])
hdriGenNode.knob('lgt_maskClr_1').setValue(0.4, 3) # set only the alpha

# return nuke file path
nuke.root().knob('name').value()

# write metadata
metadata_content = '{set %sName %s}\n{set %sMaxLuma %s}\n{set %sEV %s}\n{set %sLUX %s}\n{set %sPos2D %s}\n{set %sPos3D %s}\n{set %sDistance %s}\n{set %sScale %s}\n{set %sOutputPath %s}\n' % (lgtName, lgtCustomName, lgtName, str(maxL[0]), lgtName, str(lgt_EV), lgtName, str(lgt_LUX), lgtName, string.replace(str(pos2D),' ',''), lgtName, string.replace(str(pos3D),' ',''), lgtName, str(distance), lgtName, string.replace(str(scale),' ',''), lgtName, outputPath)
metadataNode["metadata"].fromScript(metadata_content)

# read metadata
# metadata should be stored under the read node itself under one of the tabs
readNode.metadata().get('exr/arnold/host/name')

# return scene name
nuke.root().knob('name').value()

# animate text by getting a knob's value of a specific node:
[value Read1.first]

# animate text by getting a knob's value of current node:
[value this.size]

# add to the menus
def myMenus():
    mainMenu = nuke.menu( "Nodes" )
    mainMenuItem = mainMenu.findItem( "NewMenuName" )
    if not mainMenuItem:
        mainMenuItem = mainMenu.addMenu( "NewMenuName" )
    subMenuItem = mainMenuItem.findItem( "subMenu" )
    if not subMenuItem:
        subMenuItem = mainMenuItem.addMenu( "subMenu" )
    return [ mainMenuItem ]

menus = myMenus()
for menu in menus:
    menu.addCommand('my tool', 'mytool.file.function()', None)

# add aov layer
nuke.Layer(mynode, [mynode + '.red', mynode + '.green', mynode + '.blue', mynode + '.alpha'])

# onCreate options (like an onload option)
# https://community.foundry.com/discuss/topic/106936/how-to-use-the-oncreate-callback
# https://benmcewan.com/blog/2018/09/10/add-new-functionality-to-default-nodes-with-addoncreate/
# For example, you could do this:
def setIt():
    n = nuke.thisNode()
    k = n.knob( 'artist' )
    user = envTools.getUser()
    k.setValue(user)

nuke.addOnCreate(setIt, nodeClass = "")

# retrieve the onCreate function
sel = nuke.selectedNodes()
code = sel[0]['onCreate'].getValue()
print(code)

# Or if you want to bake your code directly to a node:
code = """
n = nuke.thisNode()
k = n.knob( 'artist' )
user = envTools.getUser()
k.setValue(user)
"""
nuke.selectedNode()["onCreate"].setValue(code)
# Problem with onCreate is that it's run every time the node is created,
# which means even opening a script will trigger the code.

# replace known nodes
nodeToPaste = '''set cut_paste_input [stack 0]
version 12.2 v10
push $cut_paste_input
Group {
 name DeepToImage
 tile_color 0x60ff
 selected true
 xpos 862
 ypos -3199
 addUserKnob {20 DeepToImage}
 addUserKnob {6 volumetric_composition l "volumetric composition" +STARTLINE}
 volumetric_composition true
}
Input {
 inputs 0
 name Input1
 xpos -891
 ypos -705
}
DeepToImage {
 volumetric_composition {{parent.volumetric_composition}}
 name DeepToImage
 xpos -891
 ypos -637
}
ModifyMetaData {
 metadata {
  {remove exr/chunkCount ""}
 }
 name ModifyMetaData1
 xpos -891
 ypos -611
}
Output {
 name Output1
 xpos -891
 ypos -530
}
end_group
'''
fileName = '/tmp/deleteme.cache'
out_file = open(fileName, "w")
out_file.write(str(nodeToPaste))
out_file.close()
allNodes = nuke.allNodes()
for i in allNodes:
    i.knob("selected").setValue(False)
for node in allNodes:
    if 'DeepToImage' in node.name():
        node.setSelected(True)
        newNode = nuke.nodePaste(fileName)
        nuke.delete(node)

# force a knob on the same line
hdriGenNode.addKnob(lgt_name)
# stay on the same line
lgt_lightGroup.clearFlag(nuke.STARTLINE)
hdriGenNode.addKnob(lgt_lightGroup)
# start a new line
lgt_extractMode.setFlag(nuke.STARTLINE)
hdriGenNode.addKnob(lgt_extractMode)

# text message per frame
set cut_paste_input [stack 0]
version 12.2 v10
push 0
push 0
push 0
push 0
Text2 {
 inputs 0
 font_size_toolbar 100
 font_width_toolbar 100
 font_height_toolbar 100
 message "MultiplyFloat.a 0.003\nMultiplyFloat2.a 0.35"
 old_message {{77 117 108 116 105 112 108 121 70 108 111 97 116 46 97 32 32 32 48 46 48 48 51 10 77 117 108 116 105 112 108 121 70 108 111 97 116 50 46 97 32 48 46 51 53} }
 box {175.2000122 896 1206.200012 1014}
 transforms {{0 2} }
 global_font_scale 0.5
 center {1024 540}
 cursor_initialised true
 autofit_bbox false
 initial_cursor_position {{175.2000122 974.4000854} }
 group_animations {{0} imported: 0 selected: items: "root transform/"}
 animation_layers {{1 11 1024 540 0 0 1 1 0 0 0 0} }
 name Text20
 selected true
 xpos 1197
 ypos -132
}
FrameRange {
 first_frame 1017
 last_frame 1017
 time ""
 name FrameRange19
 selected true
 xpos 1197
 ypos -77
}
AppendClip {
 inputs 5
 firstFrame 1017
 meta_from_first false
 time ""
 name AppendClip3
 selected true
 xpos 1433
}
push 0
Reformat {
 format "2048 1080 0 0 2048 1080 1 2K_DCP"
 name Reformat4
 selected true
 xpos 1843
 ypos -251
}
Merge2 {
 inputs 2
 name Merge5
 selected true
 xpos 1843
}
push $cut_paste_input
Reformat {
 format "2048 1080 0 0 2048 1080 1 2K_DCP"
 name Reformat5
 selected true
 xpos 1715
 ypos 80
}
Merge2 {
 inputs 2
 name Merge6
 selected true
 xpos 1843
 ypos 86
}

# check negative pixels
set cut_paste_input [stack 0]
version 12.2 v10
push $cut_paste_input
Expression {
 expr0 "r < 0 ? 1 : 0"
 expr1 "g < 0 ? 1 : 0"
 expr2 "b < 0 ?
1 : 0" name Expression4 selected true xpos 1032 ypos -106 } FilterErode { channels rgba size -1.3 name FilterErode4 selected true xpos 1032 ypos -58 } # check where the user is clicking on the viewer area import nuke from PySide2.QtWidgets import QApplication from PySide2.QtCore import QObject, QEvent, Qt from PySide2.QtGui import QMouseEvent class ViewerClickCallback(QObject): def eventFilter(self, obj, event): if event.type() == QEvent.MouseButtonPress and event.button() == Qt.LeftButton: # Mouse click detected mouse_pos = event.pos() print("Mouse clicked at position:", mouse_pos.x(), mouse_pos.y()) return super(ViewerClickCallback, self).eventFilter(obj, event) #Create an instance of the callback callback = ViewerClickCallback() #Install the event filter on the application qapp = QApplication.instance() qapp.installEventFilter(callback) qapp.removeEventFilter(callback) ## you can put this under a node's button and close the callback after a given mouse click ## OR closing the callback through a different button import nuke from PySide2.QtCore import QObject, QEvent, Qt from PySide2.QtWidgets import QApplication class ViewerClickCallback(QObject): def __init__(self, arg1): super().__init__() self.arg1 = arg1 def eventFilter(self, obj, event): if event.type() == QEvent.MouseButtonPress and event.button() == Qt.LeftButton: # Mouse click detected mouse_pos = event.pos() print("Mouse clicked at position:", mouse_pos.x(), mouse_pos.y(),'\n') if self.arg1 == 'bl': jonasNode['bl'].setValue([mouse_pos.x(), mouse_pos.y()]) qapp.removeEventFilter(callback) return super(ViewerClickCallback, self).eventFilter(obj, event) # Create an instance of the callback callback = ViewerClickCallback('bl') # Store the callback as a global variable nuke.root().knob('custom_callback').setValue(callback) # Install the event filter on the application qapp = QApplication.instance() qapp.installEventFilter(callback) ## on the second button: import nuke # Retrieve the callback object from the 
global variable callback = nuke.root().knob('custom_callback').value() # Retrieve the QApplication instance qapp = QApplication.instance() # Remove the event filter qapp.removeEventFilter(callback) # collect deepsamples for a given node nodelist = ['DeepSampleB_hdri','DeepSampleA_hdri','DeepSampleB_spheres','DeepSampleA_spheres','DeepSampleB_volume','DeepSampleA_volume','DeepSampleB_furmg','DeepSampleA_furmg','DeepSampleB_furfg','DeepSampleA_furfg','DeepSampleB_furbg','DeepSampleA_furbg','DeepSampleB_checkers','DeepSampleA_checkers'] nodelist = ['DeepSampleB_hdri'] finalSamplesList = [] for nodeName in nodelist: print(nodeName) finalSamplesList.append(nodeName) finalSampes = 0 for posX in range(0,1921): for posY in range(0,1081): nukeNode = nuke.toNode(nodeName) nukeNode['pos'].setValue([posX,posY]) current_posSamples = nukeNode['samples'].getValue() finalSamples = finalSamples + current_posSamples print(finalSamples) finalSamplesList.append(finalSamples) # false color expressions set cut_paste_input [stack 0] version 13.2 v8 push $cut_paste_input Expression { expr0 "(r > 2) || (g > 2) || (b > 2) ? 3:0" expr1 "((r > 1) && (r < 2)) || ((g > 1) && (g < 2)) || ((b > 1) && (b < 2))\n ? 2:0" expr2 "((r > 0) && (r < 1)) || ((g > 0) && (g < 1)) || ((b > 0) && (b < 1))\n ? 1:0" name Expression4 selected true xpos 90 ypos 1795 }


teaching AI + ethics from elementary to high school
/ A.I.

codeorg.medium.com/microsoft-code-org-partner-to-teach-ai-ethics-from-elementary-to-high-school-4b983fd809e3

At a time when AI and machine learning are changing the very fabric of society and transforming entire industries, it is more important than ever to give every student the opportunity to not only learn how these technologies work, but also to think critically about the ethical and societal impacts of AI.

Cloud computing – AWS vs AZURE vs GOOGLE
/ hardware, production

www.otava.com/reference/aws-vs-azure-key-differences/

 

www.edureka.co/blog/aws-vs-azure/

 

www.quora.com/What-are-the-major-differences-between-AWS-Azure-and-Google-Cloud

 

Google usually gives the biggest bang for your buck, but is not strong in all fields.
Azure will probably fit best if you are already using a lot of Microsoft products and Windows.
AWS is the most mature, has the most flexibility and the best console, but its VMs are less powerful or simply more expensive

 

Amazon AWS

– Offers the most infrastructure-as-a-service products, such as low-level computing (EC2), storage (S3), networking (VPC) and databases (RDS), with support for various operating systems (Windows, many Linux flavors) and a vast third-party marketplace, AWS Marketplace, where vendors provide their add-ons.
– Pioneer for serverless computing with Lambda, and now Fargate/Elastic Kubernetes Service.
– While Amazon has more product and feature offerings, it requires professional setup to operate and maintain. AWS gives you building blocks; it is up to you to put them together.
– According to Intricately (Marketing Intelligence for Cloud Service Providers), there are over 1MM customers, making it one of the most popular computing platforms to date.
– Price is similar to Microsoft
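To make the serverless model mentioned above concrete, a Lambda function is just a handler that receives the triggering event as a dict plus a context object and returns a response. A minimal sketch in Python (the event shape and greeting logic here are hypothetical, not a specific AWS sample):

import json

def lambda_handler(event, context):
    # Lambda invokes this with the event payload as a dict;
    # for an API Gateway trigger, the return value is a dict with
    # a statusCode and a JSON-encoded body.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }

There is no server to provision: you upload the handler and AWS runs it per invocation, which is why Lambda (and Fargate/EKS for containers) is billed per request rather than per VM.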

 

Microsoft AZURE

– Supports both Windows & Linux workloads, with a very deep integration into Microsoft’s developer ecosystem with Visual Studio, .NET etc. If you are an MSFT developer, Azure is an easy way to get your application deployed.
– Price is similar to AWS

 

Google

– The newest entrant to the market, offering mostly Platform-as-a-Service products such as Machine Learning as a Service and Kubernetes as a Service. However, Google does not offer as many products overall.
– Google pioneered Kubernetes and is the leader in terms of delivering a truly managed Kubernetes experience
– Price can be cheaper than AWS and Microsoft due to the PaaS nature of the products, which may require less building.

 

AnimationXpress.com interviews Daniele Tosti for TheCgCareer.com channel
/ Featured, ves

https://www.animationxpress.com/vfx/meet-daniele-tosti-a-senior-cg-artist-who-is-on-a-mission-to-inspire-the-next-generation-of-artists/

 

You’ve been in the VFX Industry for over a decade. Tell us about your journey.

It all started with my older brother giving me a Commodore 64 personal computer as a gift back in the late '80s. I realised then that I could create something directly from my imagination using this new digital medium. And, eventually, make a living in the process.
That led me to start my professional career in 1990. From live TV to games to animation, all the way to live-action VFX in recent years.

I never stopped craving to create art since those early days. And I have been incredibly fortunate to work with really great talent along the way, which made my journey so much more rewarding.

 

What inspired you to pursue VFX as a career?

An incredible combination of opportunities, really. The opportunity to express myself as an artist and earn money in the process. The opportunity to learn about how the world around us works and how best to solve problems. The opportunity to share my time with other talented people with similar passions. The opportunity to grow and adapt to new challenges. The opportunity to develop something that was never done before. A perfect storm of creativity that fed my continuous curiosity about life and genuinely drove my inspiration.

 

Tell us about the projects you’ve particularly enjoyed working on in your career

I quite enjoyed working on live TV projects, as the combination of tight deadlines and high quality was an incredible learning platform for a professional artist. But working on large, high-end live-action feature projects was really where I learned most of my trade. And where I found the most satisfaction.

Every film I worked on had some memorable experiences, from Avatar to Iron Man 3 to The Jungle Book to the Planet of the Apes films and The Hobbit, to name a few.

But above all the technical challenges and the high quality we reached in each and every project I worked on, the best memories come from working with amazing and skilled artists from a variety of disciplines. They were my true mentors and became my best friends.


 

What are some technologies and trends that you think are emerging in the VFX Industry?

In the last few years there has definitely been a bias from some major studios to make VFX a commodity, in the more negative sense of the word. When any product reaches a level of quality that attracts a mass of consumers and reaches a plateau of opportunities, large corporations tend to respond by maximising its sale value, leveraging marketing schemes and deliverables more than the core values of the product itself. This commoditisation approach tends to empower agents who are not necessarily knowledgeable about a product's cycles, and in the process it lowers the quality of the product itself for the sake of profits. It is a pretty common event in modern society and it applies to any brand name, not just VFX.

One challenge with VFX's technology and artistry is that it relies mostly on the effectiveness of artists and visionaries. Limiting the authority, ownership and perspective of that crowd has directly impacted the overall quality of the last decade of productions, both technically and artistically. The creative forces able to deliver projects one could identify as truly creative breakthroughs have been few and far between, while the majority of productions seem to have suffered from some of these commoditisation patterns.

The other, bigger challenge with this current trend is that VFX, due to various historical business arrangements, often relies on unbalanced resources as well as very small and fragile economic cycles and margins. This makes the entire industry extremely susceptible to marketing failures and to unstable leadership, as a few recent bankruptcies have demonstrated.

It is taking the VFX crowd a reasonable amount of time to acknowledge these trends and learn to be profitable, as the majority has never been educated in fair business practices.

But, thankfully, the VFX circle is also a crowd of extremely adaptable and talented individuals, who are quite capable of resolving issues, finding alternatives and leveraging their passion. Which I believe is one of the drives behind the current evolution in the use of artificial intelligence, virtual reality, virtual production, real-time rendering, and so on.

There is still a long path ahead of us, but I hope we are all learning ways to make our passion speak in profitable ways for everyone.

It is also highly likely that, in the near future, larger software and hardware corporations, thanks to their more profitable business practices, large development teams and better understanding of marketing, will eventually take over a lot of the cycles that current production houses run. And in that process allow creative studios to focus back on VFX artistry.

 

What effect has the pandemic-induced lockdown had on the industry?

It is still early to say. I fear that if live-action production does not restart soon, we may see some of the economic challenges I mentioned above, at both the studio and artist scale. There is definitely a push from production houses to make large distribution clients understand the fragility of the moment, especially in relation to payment cycles and economic support. Thus, there is still a fair risk that the few studios which adopted a more commoditised view of production will make their artists pay some price for their choices.

But any challenge brings opportunities. For example, there is finally some recognition of, and momentum behind, work-from-home as a feasible solution to a lot of the current office production limitations and general artistry restrictions. While there is no win-win in this pandemic, that could be a silver lining.

 

What would you say to the budding artists who wish to become CG artists or VFX professionals?

Follow your passion but treat this career as any other business.
Learn to be adaptable. Find a true balance between professional and family life. Carefully plan your future. And watch our channel to learn more about all these.

Being a VFX artist is fundamentally based on mistrust.
This is because schedules, pipelines, technology, creative calls… all have a native and naive instability to them that causes everyone to grow a genuine but beneficial lack of trust in the status quo. The VFX motto, “Love everyone but trust no one”, is born of that.

 

What inspired you to create a channel for aspiring artists?

Like many fellow and respected artists, I love this industry, but I had to learn a lot of business practices at my own expense.
You can learn tools, cycles and software from books and schools. But production life tends to drive its own rhythms, and there are fewer opportunities to absorb those.

Along my career I had some challenges finding professionals willing to share their time and invest in me. But I was still extremely fortunate to find mentors who helped me to be economically and professionally successful in this business. I owe a lot to these people. I promised myself I would repay that favour by helping other artists myself.

 

What can students expect to learn from your channel?

I am excited to have the opportunity to fill some of the voids that the current education systems and industry may have, by helping new artists with true life stories from some of the most accomplished and successful talents I met during my career. We will talk about technology trends as much as our life experiences as artists. Discussing career advice. Trying to look into the future of the industry. And suggesting professional tips. The aim of this mentorship is to inspire new generations to focus on what is most important for the VFX industry: taking responsibility for their art and passions as much as for their families.

And, in the process, to feel empowered to materialise from their imagination more and more of those creative, awe-inspiring moments that this art form has gifted us with so far.

 

http://TheCGCareer.com

 

Raspberry Pi – introduction and basic projects
/ hardware, production, software

Connect through SSH on windows
https://www.putty.org/

Connect through Desktop
Remote Desktop

Common commands
> sudo raspi-config
> sudo apt-get update
> sudo apt-get upgrade
> ifconfig
> nano test.py
> wget https://path.to.image.png
> sudo apt-get install git
> git clone https://REPOSITORY
> sudo reboot
> sudo shutdown -r now (-r reboots after the shutdown)
> cat /etc/os-release
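The last command prints the OS identification file. As a sketch, the same information can be read from Python, assuming the standard KEY=value format of /etc/os-release (the sample string below is illustrative):

def parse_os_release(text):
    # /etc/os-release holds KEY=value lines, with values optionally quoted.
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

sample = 'PRETTY_NAME="Raspbian GNU/Linux 11 (bullseye)"\nID=raspbian\nVERSION_ID="11"'
print(parse_os_release(sample)["ID"])  # raspbian

On a real Pi you would pass open('/etc/os-release').read() instead of the sample string.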

 

 

 

Starting kits:

(more…)

Blue Griffon – the new open source WYSIWYG, NVU like html and CSS editor
/ software

www.bluegriffon.org

BlueGriffon is an open source WYSIWYG editor powered by Gecko, the rendering engine developed for Mozilla Firefox. One of a few derivatives of NVU, a now-discontinued HTML editor, BlueGriffon is the only actively developed NVU derivative that supports HTML5 as well as modern components of CSS.

If your goal is to write as little actual HTML as possible, then BlueGriffon is the tool you want. It’s a true drag-and-drop WYSIWYG website designer, and even includes a dual view option so you can see the code behind your design, in case you want to edit it or just learn from it.

It also supports the EPUB ebook format, so you don’t have to just publish to the web: you can provide your readers with a download of your content that they can take with them. Licensed under the MPL, GPL, and LGPL, a version of BlueGriffon is available for Linux, Windows, and Mac.

Source: opensource.com/alternatives/dreamweaver

Keyword Extraction theory and practice
/ production, software

monkeylearn.com/keyword-extraction/

Keyword extraction is the automated process of extracting the most relevant words and expressions from text.
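At its simplest, the idea can be sketched as a stop-word-filtered frequency count. This is a toy illustration, not how the commercial services linked below actually work (those use statistical and machine-learning models):

import re
from collections import Counter

# A tiny illustrative stop-word list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "is", "are",
             "in", "on", "for", "from", "with", "this", "that", "it"}

def extract_keywords(text, top_n=5):
    # Tokenize, drop stop words, and rank the remaining words by frequency.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(extract_keywords("Keyword extraction extracts the most relevant words from text. Relevant words summarize the text."))

More robust extractors weight terms by rarity across a corpus (TF-IDF) or use contextual language models, which is what the Azure and Google services below provide.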

 

https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/

 

https://cloud.google.com/natural-language/docs/basics

What the Boeing 737 MAX’s crashes can teach us about production business – the effects of commoditisation
/ quotes, ves

newrepublic.com/article/154944/boeing-737-max-investigation-indonesia-lion-air-ethiopian-airlines-managerial-revolution

 

 

Airplane manufacturing is no different from mortgage lending or insulin distribution or make-believe blood analyzing software (or VFX?) —another cash cow for the one percent, bound inexorably for the slaughterhouse.

 

The beginning of the end was “Boeing’s 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO (Harry Stonecipher) who liked to use what he called the “Hollywood model” for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers.” And all that came with it. “Stonecipher’s team had driven the last nail in the coffin of McDonnell’s flailing commercial jet business by trying to outsource everything but design, final assembly, and flight testing and sales.”

 

It is understood, now more than ever, that capitalism does half-assed things like that, especially in concert with computer software and oblivious regulators.

 

There was something unsettlingly familiar when the world first learned of MCAS in November, about two weeks after the system’s unthinkable stupidity drove the two-month-old plane and all 189 people on it to a horrific death. It smacked of the sort of screwup a 23-year-old intern might have made—and indeed, much of the software on the MAX had been engineered by recent grads of Indian software-coding academies making as little as $9 an hour, part of Boeing management’s endless war on the unions that once represented more than half its employees.

 

Down in South Carolina, a nonunion Boeing assembly line that opened in 2011 had for years churned out scores of whistle-blower complaints and wrongful termination lawsuits packed with scenes wherein quality-control documents were regularly forged, employees who enforced standards were sabotaged, and planes were routinely delivered to airlines with loose screws, scratched windows, and random debris everywhere.

 

Shockingly, another piece of the quality failure is Boeing securing investments from airlines, Southwest above all, to guarantee support for Boeing's production lines in exchange for fair market prices and favourable treatment; basically giving Boeing financial stability independently of the quality of its product. “Those partnerships were but one numbers-smoothing mechanism in a diversified tool kit Boeing had assembled over the previous generation for making its complex and volatile business more palatable to Wall Street.”


If a blind person gained sight, could they recognize objects previously touched?
/ colour, quotes

news.psu.edu/story/141360/2006/04/17/research/probing-question-if-blind-person-gained-sight-could-they-recognize

 

Blind people who regain their sight may find themselves in a world they don’t immediately comprehend. “It would be more like a sighted person trying to rely on tactile information,” Moore says.

 

Learning to see is a developmental process, just like learning language, Prof Cathleen Moore continues. “As far as vision goes, a three-and-a-half year old child is already a well-calibrated system.”

11 The Nine Situations | The Art of War by Sun Tzu
/ quotes

 

https://medium.com/@shahmm/building-a-great-business-and-the-art-of-war-strategy-part-01-b8e4db611d4f

https://tweakyourbiz.com/global/the-art-of-war

 

https://www.fastcompany.com/3021122/fighting-your-business-battles-6-lasting-lessons-from-sun-tzus-art-of-war

 

– Being prepared in what you do can be the difference between success and failure when things go wrong

 

– Your customers are your king. If you care for them, they will care for your project. Anticipate their needs, desires, wants and fulfill them with an unbiased mind.

 

– Understand and respect the scope, ownerships and accountabilities of the project you work on.

 

– Be subtle and diplomatic. You can only learn when you listen. But always be prepared to answer and follow up.

 

– Share efforts with other people on the project by offering free help, as that will come back as an investment.

 

– Focus on key elements of a production which are the least organized or efficient.

 

– Validate and qualify your resources before taking on a plan.

 

– Invest into a plan only if you are sure it can be completed successfully.

 

– Value a project’s requirements and its users’ experience before the technology development itself.

 

– Motivate your teams by the gains in specific production investments.

 

– Organize tasks and teams based on their strengths and self-sufficiency.

 

– Analyze the project’s requirements and resources. Then prioritize them accordingly.

 

– Observe and resolve bottlenecks, opportunities and users’ needs

 

– Detail a plan B as soon as you strictly commit to a detailed plan A.

 

– Dedicate some time and small teams to research efficient alternatives.

 

– Build only and always on top of stable and known cycles.

 

– Focus on the big items if they can resolve a lot of small ones.

 

– Even if something worked before, it is still worth thinking outside the box.

 

– Combine all your team strengths into a unified collaborative effort.

 

Ethan Roffler interviews CG Supervisor Daniele Tosti
/ Featured, lighting, ves

Ethan Roffler
I recently had the honor of interviewing this VFX genius and gained great insight into what it takes to work in the entertainment industry. Keep in mind, these questions are coming from an artist’s perspective but can be applied to any creative individual looking for some wisdom from a professional. So grab a drink, sit back, and enjoy this fun and insightful conversation.



Ethan

To start, I just wanted to say thank you so much for taking the time for this interview!

Daniele
My pleasure.
When I started my career I struggled to find help. Even people in the industry at the time were not that helpful. Because of that, I decided very early on that I was going to do exactly the opposite. I spend most of my weekends talking or helping students. ;)

Ethan
That’s awesome! I have also come across the same struggle! Just a heads up, this will probably be the most informal interview you’ll ever have haha! Okay, so let’s start with a small introduction!

Daniele
Short introduction: I worked very hard and got lucky enough to work on great shows with great people. ;) Slightly longer version: I started working for a TV channel very early, while I was learning about CG. I slowly made my way across the world, working alongside great people on amazing shows. I learned that to be successful in this business, you have to really love what you do as much as respect the people around you. What you do will improve the final product; the way you work with people will make a difference in your life.

Ethan
How long have you been an artist?

Daniele
Loaded question. I believe I am still trying, and craving, to be one. After each production I finish, I realize how much I still do not know, and how many things I would like to try. I guess in my CG Sup and generalist world, being an artist is about learning as much about the latest technologies and production cycles as I can, then putting that into practice. Having said that, I do consider myself a cinematographer first, as I have been doing that for about 25 years now.

Ethan
Words of true wisdom, the more I know the less I know:) How did you get your start in the industry?
How did you break into such a competitive field?

Daniele
There were not many schools when I started. It was all about a few magazines, some books, and pushing software around trying to learn how to make pretty images. Opportunities opened because of that knowledge! The true break was learning to work hard to achieve a Suspension of Disbelief in my work that people would recognize as such. It’s not something everyone can do, but I was fortunate to not be scared of working hard, being a quick learner and having very good supervisors and colleagues to learn from.

Ethan
Which do you think is better, having a solid art degree or a strong portfolio?

Daniele
Very good question. A strong portfolio will get you a job now. A solid degree will likely get you a job for a longer period. Let me digress here: working as an artist is not about being an artist, it's about making money as an artist. Most people fail to make that distinction and have either a poor career or lack the understanding to build a stable one. One should never mix art with working as an artist. You can do both only if you understand business and are fair to yourself.



Ethan

That’s probably the most helpful answer to that question I have ever heard.
What’s some advice you can offer to someone just starting out who wants to break into the industry?

Daniele
Breaking into the industry is not just about knowing your art; it's about knowing good business practices. Prepare a good demo reel based on the skill you are applying for; research all the places where you want to apply and why; send as many reels around as you can; follow up each reel with a phone call. Business is all about the right time and the right place.

Ethan
A follow-up question to that is: Would you consider it a bad practice to send your demo reels out in mass quantity rather than focusing on a handful of companies to research and apply for?

Daniele
Depends how desperate you are… I would say research is a must. To improve your options, you need to know which company is working on what and what skills they are after. If you were selling vacuum cleaners you probably would not want to waste energy contacting shoemakers or cattle farmers.

Ethan
What do you think the biggest killer of creativity and productivity is for you?

Daniele
Money…If you were thinking as an artist. ;) If you were thinking about making money as an artist… then I would say “thinking that you work alone”.

Ethan
Best. Answer. Ever.
What are ways you fight complacency and maintain fresh ideas, outlooks, and perspectives?

Daniele
Two things: Challenge yourself to go outside your comfort zone. And think outside of the box.

Ethan
What are the ways/habits you have that challenge yourself to get out of your comfort zone and think outside the box?

Daniele
If you think you are a good character painter, pick up a camera and go take pictures of amazing landscapes. If you think you are good only at painting or sketching, learn how to code in Python. If you cannot solve a problem, that being a project or a person, learn to ask for help or to look at the problem from various perspectives. If you are an introvert, learn to be an extrovert. And vice versa. And so on…

Ethan
How do you avoid burnout?

Daniele
Oh… I wish I had learned about this earlier. I think anyone who has a passion for something is at risk of burning out. Artists more than many, because we see the world differently and our passion runs deep. You avoid burnout by thinking that you are in a long-term plan and that you have an obligation to repay your talent by supporting and cherishing yourself and your family, not your paycheck. You do this by treating your art as a business, using business skills when dealing with your career and artistic skills only when dealing with a project itself.

Ethan
Looking back, what was a big defining moment for you?

Daniele
Recognizing that people around you, those being colleagues, friends or family, come first.
It changed my career overnight.

Ethan
Who are some of your personal heroes?

Daniele
Too many to list. Most recently… James Cameron; Joe Letteri; Lawrence Krauss; Richard Dawkins. Because they all mix science, art, and poetry in their own way.

Ethan
Last question:
What’s your dream job? ;)

Daniele
Teaching artists to be better at being business people… as it will help us all improve our lives and the careers we took…

Being a VFX artist is fundamentally based on mistrust.
This is because schedules, pipelines, technology, creative calls… all have a native and naive instability to them that causes everyone to grow a genuine but beneficial lack of trust in the status quo. This is a fine balancing act to build into your character. The VFX motto, “Love everyone but trust no one”, is born of that.

 

A Brief History of Color in Art
/ colour, reference

www.artsy.net/article/the-art-genome-project-a-brief-history-of-color-in-art

Of all the pigments that have been banned over the centuries, the color most missed by painters is likely Lead White.

This hue could capture and reflect a gleam of light like no other, though its production was anything but glamorous. The 17th-century Dutch method for manufacturing the pigment involved layering cow and horse manure over lead and vinegar. After three months in a sealed room, these materials would combine to create flakes of pure white. While scientists in the late 19th century identified lead as poisonous, it wasn’t until 1978 that the United States banned the production of lead white paint.

More reading:
www.canva.com/learn/color-meanings/

https://www.infogrades.com/history-events-infographics/bizarre-history-of-colors/

Equirectangular 360 videos/photos to Unity3D to VR
/ IOS, production, software, VR

SUMMARY

  1. A lot of 360 technology is natively supported in Unity3D. Examples here: https://assetstore.unity.com/packages/essentials/tutorial-projects/vr-samples-51519
  2. Use the Google Cardboard VR API to export for Android or iOS. https://developers.google.com/vr/?hl=en https://developers.google.com/vr/develop/unity/get-started-ios
  3. Images and videos are for the most part equirectangular 2:1 360 captures, mapped onto a skybox (stills) or an inverted sphere (videos). Panoramas are also supported.
  4. Stereo is achieved in different formats, but mostly with a 2:1 over-under layout.
  5. Videos can be streamed from a server.
  6. You can export 360 mono/stereo stills/videos from Unity3D with VR Panorama.
  7. 4K is probably the best average resolution for mobile devices.
  8. Interaction can be driven through the Google API gaze scripts/plugins or through Google Cloud Speech Recognition (paid service, https://assetstore.unity.com/packages/add-ons/machinelearning/google-cloud-speech-recognition-vr-ar-desktop-desktop-72625 )

DETAILS

  • Google VR game to iOS in 15 minutes
  • Step by Step Google VR and responding to events with Unity3D 2017.x

https://boostlog.io/@mohammedalsayedomar/create-cardboard-apps-in-unity-5ac8f81e47018500491f38c8
https://www.sitepoint.com/building-a-google-cardboard-vr-app-in-unity/

  • Basics details about equirectangular 2:1 360 images and videos.
  • Skybox cubemap texturing, shading and camera component for stills.
  • Video player component on a sphere with a flipped-normals shader.
  • Note that you can also use a pre-modeled sphere with inverted normals.
  • Note that for audio you will need an audio component on the sphere model.
  • Setup a Full 360 stereoscopic video playback using an over-under layout split onto two cameras.
  • Note that you cannot generate a stereoscopic image from two mono 360 captures; it has to be done through a dedicated stereo capture rig.
    http://bernieroehl.com/360stereoinunity/
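The equirectangular 2:1 mapping and the over-under stereo layout mentioned above can be sketched in Python. These helpers are illustrative only (Unity would normally do this lookup inside a skybox or sphere shader), assuming a Y-up direction vector and the common top-half-left-eye convention:

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit view direction to (u, v) in an equirectangular 2:1 image.

    u spans the full 360-degree longitude, v the 180-degree latitude.
    """
    longitude = math.atan2(x, z)                   # -pi..pi around the vertical axis
    latitude = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2, clamped for safety
    u = 0.5 + longitude / (2.0 * math.pi)
    v = 0.5 + latitude / math.pi
    return u, v

def over_under_uv(u, v, left_eye=True):
    """Remap a mono (u, v) into one half of a 2:1 over-under stereo frame.

    Assumes the common convention: top half = left eye, bottom half = right eye.
    """
    return u, v * 0.5 + (0.5 if left_eye else 0.0)
```

Splitting the over-under frame per eye is what the two-camera setup in the Bernie Roehl write-up achieves: each camera samples the same sphere but reads from a different half of the video texture.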

VR Actions for Playmaker
https://assetstore.unity.com/packages/tools/vr-actions-for-playmaker-52109

100 Best Unity3d VR Assets
http://meta-guide.com/embodiment/100-best-unity3d-vr-assets

…find more tutorials/reference under this blog page
(more…)

Daniele Tosti Interview for the magazine InCG, Taiwan, Issue 28, 201609
/ Featured, ves

Interview for the magazine InCG, Taiwan, Issue 28, 201609

————————————————————-
– First of all, can you introduce yourself to our audience: who you are, and how you joined this part of the industry? Can you talk about your past experience as a VFX artist?

My career started on a late Christmas night in the mid 1980s. I remember waking up to the soundtrack of Ghostbusters playing from a new Commodore 64. My older brother, Claudio, had left the console in my room as a gift. And I was hooked.

From that moment I spent any free time available playing with computer technology, and computer graphics in particular. Eventually this evolved into a passion that pushed me to learn the basic techniques and the art of everything related to computer graphics, at a time when computer graphics at the consumer level was still in its infancy.

My place would be filled with any computer graphics magazine I could get my hands on, as well as the first few books. A collection that at some point grew to around 300 titles. From making-of movie books to reference books to animation books, and so on. My first girlfriends were not too thrilled about sharing the space in that room.

This passion, as well as a few initial side jobs creating small animated videos and logos for local companies, eventually gave me enough confidence in my abilities and led me to my first professional job: computer graphics technician, creating lead and credit titles for one of the first few private national TV stations in Italy. Not necessarily a glamorous job, but a well-paid one.

The fact that I could make money through what I loved the most was an eye opener in my young life. It gave me fuel to invest even more of my time in the art, and it set the fundamentals for a very long career that has spanned over 20 years, across TV productions, commercials, video games and, more recently, feature movies.

————————————————————-

– Can you introduce us to your current company?

After leaving Italy I started working for some of the most recognized studios around the world, eventually for facilities such as Disney Features, Sony Imageworks and Moving Picture Company. During that period I had the fortune to work alongside world-class talents and supervisors, who helped me refine both my technical and artistic skills, while I also invested my time in learning about management and training cycles.

I started sharing some of this personal knowledge and production experience throughout the world with ReelMatters Ltd.

But eventually those extra skills allowed me to reach my dream in 2008, when I joined the team at Weta Digital in Wellington, New Zealand, to help on James Cameron’s Avatar.

Weta has since been my family and the source of my pride. The level of expertise, passion and vision among the crew at Weta is inspirational, and clearly visible in every project we work on. We all tend to thrive on perfection here, continuously pushing quality well beyond standards. It is one of the reasons why Weta is still at the forefront of the VFX industry.

————————————————————-

– What sort of movies have you participated in before? Of all those movies, which was the most challenging you encountered?

Due to my early, self-taught home training, it was easier for me to get involved with CG animation productions first. On that front, my best memories are working on Sony Imageworks’ “Surf’s Up” as well as on Steven Spielberg and Peter Jackson’s “The Adventures of Tintin”. Movies which both raised the bar for CG environments and character animation.

More recently I have been more involved with live action features, such as “Avatar”, “Rise of the Planet of the Apes” and “Dawn of the Planet of the Apes”, “The Hobbit: An Unexpected Journey” and “The Hobbit: The Battle of the Five Armies”, “Iron Man 3”, all the way to Jon Favreau’s Walt Disney production “The Jungle Book”.

Each production has its own level of complexity and it is hard to make comparisons. Having some basic training has been fundamental for me to be able to see these features to delivery, while being flexible enough in sorting out those unique daily trials.

Feature production overall is a unique challenge in itself. You need a solid understanding of both technology and human nature to be able to find solutions that apply to a constantly moving target across the life of a project. Often under commercially driven delivery pressure. And while working alongside a multitude of different, unique talents.

It is quite a life-changing experience, worth the pages of a best-selling book where each chapter has its own plot.

————————————————————-

– How do you co-operate with other visual effects artists in order to create realistic effects?

While there is an incredible amount of high-class talent in the feature production business, no production is ever done by just one individual. It is always the product of a constant collaboration that flows from the brains of visionary directors to the hands of skillful visual artists, and back.

Providing the perfect backdrop for this collaboration is what usually makes some productions more successful than others.

In that context, creativity is the true fusion of the best ideas shared by this pool of minds, independently of which level of production you are at.

Management’s job is to feed and support this fusion, not to drive it.

And the working environment is one that allows trust and respect between all parties, while avoiding mechanical routines.

In other words, no piece of hardware or software will make a visually pleasant picture by itself unless someone infuses it with a soul. As George Sand once said, “The artist’s vocation is to send light into the human heart.”

And to paraphrase Arthur C. Clarke, I believe that a true collaboration between visionaries and artists is what makes “any sufficiently advanced (CG) technology indistinguishable from magic”.

————————————————————-

– What does it mean to you to create a good quality effect?

Any good CG effect worth the name is an effect that lives up to its purpose, which most of the time is to support the action or the plot at hand.

In a live action feature, I tend to be in awe when the effect helps you experience that perfect Suspension of Disbelief. That is, the willingness to suspend logic and criticism for the sake of enjoying the unbelievable.

As soon as any effect strays from its purpose, or is not up to the task at hand, your brain will tend to over-analyze the visuals and, as such, take you away from the overall experience.

It is interesting to see that movies such as Jurassic Park still hold their ground nowadays, where more modern VFX productions tend to look dated very quickly. From that point of view, it appears to me that quite a common mistake today is to overcompensate with camera work, digital grading and computer generated visuals for the sake of the effect, more than to serve the story and the truth of the moment.

————————————————————-

– If it is possible, could you share some tips about creating good quality effects?

1- The generalist at heart.

One question that I get quite often during my seminars is what new VFX artists should focus on. Is it specializing in a tool? Learning a discipline? Or mastering a specific skill?

It is a fact that higher-level studios tend to hire people with well defined talents that fit specific operational labels. In this way it is easier for them to fulfill recruitment numbers and satisfy production’s immediate needs.

What happens afterwards, when you start working as a VFX artist, is not always as well defined. The flexible nature of feature production cycles and delivery deadlines is often a catalyst for a multitude of variations in an artist’s work life, especially on the post-production side of a digital pipeline. For that reason, I notice that people with more general skills, an ability to adapt to new processes and a genuinely open nature tend to fit in better and last longer throughout various projects.

The exception here being artists with dedicated PhDs and/or mastery of a very specific domain, which makes them highly specialized within the VFX crowd and able to have a niche of their own.

Looking at the software or hardware side of things, technology is still progressing on a daily basis, and will continue doing so. To this extent, many facilities rely on proprietary technology, so specializing in a single tool without learning the basics of CG art is a dangerous game to play. You may end up becoming obsolete along with the program you have learned or, in the best case, having a very limited number of facilities you can apply to.

What I suggest as a general rule to young VFX artists is to focus their energies on learning all that constitutes the basis of a successful career in computer graphics, along with improving their natural talent. From understanding modeling, to lighting and color. From rigging to animation. From procedural cycles to FX mechanisms.

Doing so builds the knowledge necessary not only to satisfy a possible recruitment position, but also to interact with people with different talents in a large facility, and as such, to have enough confidence to quickly help out and fit into the bigger picture that these complex production pipelines often form.

On that note, competition for the very few spots in a large studio is also a challenge, combined with trying to win the attention of a busy HR office or a busy VFX Supervisor.

When applying for a VFX position, it is quite beneficial to have a very clear introduction letter which states in one line the discipline you are applying for; for example: modeling, animation, texturing, shading… but never more than one discipline at a time. Then, in the body of the introduction letter, describe how, if the need arises, you could also help cover other positions that fit your skills.

Finally, support your application with a very short demo reel (one minute tops, possibly less) that shows and clearly labels your very best work in the main discipline you are applying for, and clarifies your side skills wherever applicable. To this extent, if you are interested in multiple disciplines, it is highly recommended to prepare a separate introduction letter and demo reel for each application.

2- What constitutes the best production pipeline.

There is always a lot of pride in winning accolades in the VFX industry. And deservedly so. The amount of energy, investment, time and talent required to achieve such a task is, to say the least, overwhelming. Very few studios and individuals have the sensibility, experience and organization to pull off that feat.

In support of these cycles, there is also a lot of new technology and specialized tooling that continuously pushes the boundaries of what is achievable in computer graphics on a daily basis. To the point that I am confident the majority of senior VFX people in the industry would agree that, in many ways, we are still at the beginning of this exploration.

Where a painter looks for an intimate inspiration to fill a lonely blank canvas, with a brush and a small collection of colors at his disposal, CG is often the product of a perfect balance between a crowd of ambitions, thousands of frames, a multitude of digital gadgets and a variety of complex mediums.

The combination of new visions and new science is also what makes organizing these complex VFX tasks an expensive challenge in itself, worth the efforts of the most influential CTOs and producers around the world.

A challenge well described in a white paper about The Status of Visual Effects, written by Renee Dunlop, Paul Malcolm and Eric Roth for the Visual Effects Society in July 2008.
In its pages, the writers detail a few of the biggest obstacles currently affecting production:
– The difficulty to determine who is in charge of certain creative decisions.
– Directors and Producers’ mixed approach to pre and post visualization.
– The lack of consistency and resources between pre, mid and post production.
– A lack of consistency throughout pipelines, mainly due to the impact of new technologies.

Most of the time, this translates into a very costly, “brute force” workflow which, on its own, destabilizes any reasonable software production scheme that studios are willing to invest in.
While a collection of good, stable software is a fair base for any visual effects venture, I firmly believe that to defy these challenges the core of any VFX pipeline should be a software-agnostic one.

All CG elements should translate effortlessly across tools, independently of the unique requirements of their original disciplines.
And, more than the compartmentalized organization used in other markets, the key structure of this pipeline should focus on the flow of data and the quality of the inventory.
The rest is important, but not essential.

By achieving such a system, the work environment would prove to:
. Be flexible enough to maintain integrity across platforms and departments.
. Allow modifications to the software infrastructure without affecting deliverables.
. Accept various in house and external content.
. And deliver quality without jeopardizing speed.

Overall, and independently of the approach, supporting the flow of data and the quality of the inventory is for me a critical element that would help any production survive under the majority of modern, commercial delivery stress requirements.
This framework would help keep productivity stable even through continuous changes in a feature’s vision and objectives.

Finally, it would help train the modern VFX artist not to rely on those unique tools or solutions which are software-centric and bound to expire when new technology arises. Thus keeping skills and talent always applicable to the task at hand, to the long-lasting benefit of the production studio.

To support such a mechanism, facilities should consider researching and investing in:
. A stable, software independent, browser based, asset and shot manager.
. A solid look development structure.
. A software independent, script based, rendering management solution.

And an asset living in this environment should sport basic qualities such as:
. being version-able
. being hash-able
. being track-able
. being verbose
. being software and hierarchic relation agnostic
. being self-contained
. supporting expandable qualities
. supporting temporally and shading stable procedural decimation
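As a minimal sketch of some of the qualities above, here is a hypothetical, software-agnostic asset record in Python. All names and fields are illustrative, not drawn from any real pipeline: versions are immutable, identity is trackable, and a content hash lets tools compare assets without opening the payload:

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass(frozen=True)  # version-able: records are immutable once published
class AssetRecord:
    name: str              # track-able: a stable identity across the pipeline
    version: int           # each publish creates a new record, never edits one
    payload_files: tuple   # self-contained: everything the asset needs
    metadata: dict = field(default_factory=dict)  # verbose: rich description

    def content_hash(self) -> str:
        """Hash-able: a digest of the record, so two assets can be
        compared across software packages without opening the payload."""
        blob = json.dumps(
            {"name": self.name, "version": self.version,
             "files": list(self.payload_files), "meta": self.metadata},
            sort_keys=True)
        return hashlib.sha256(blob.encode("utf-8")).hexdigest()

# Publishing a new version never mutates an old one.
v1 = AssetRecord("hero_tree", 1, ("hero_tree.obj", "bark.tex"))
v2 = AssetRecord("hero_tree", 2, ("hero_tree.obj", "bark_v2.tex"))
```

Because the record carries no references to the tool that created it, any package in the pipeline can resolve it by name, version and hash alone; that is the software-agnostic core argued for above.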

————————————————————-

– Can you give a word of inspiration to those who wish to work as VFX artists?

Whether anyone is willing to notice it or not, the vast majority of top grossing movies coming out every year are now filled with special effects created by a new wave of craftsmen who share their talent all around the world.

We are living in a period where the new Da Vincis, Botticellis and Galileos live their lives comfortably sitting in front of a computer, creating a new art form which converts ones and zeros into a visually pleasing virtual reality. All this while offering their artistry free of language, race and belief barriers.

The knowledge required to achieve such a task is still a mix of an incredible amount of disciplines.

From biology and zoology, to physics and mathematics. From sculpting to painting. From astronomy to molecular chemistry.

It is an incredible opportunity to have a working career, learning about all aspects of life, while creating a new Suspension of Disbelief.

Processing – a flexible software sketchbook
/ production, software

https://processing.org/

 

Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. There are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning and prototyping.

 

» Free to download and open source
» Interactive programs with 2D, 3D or PDF output
» OpenGL integration for accelerated 2D and 3D
» For GNU/Linux, Mac OS X, Windows, Android, and ARM
» Over 100 libraries extend the core software