vfx: cooler projects from the last 2 years here in Chile

I've been quite fortunate to work on some pretty cool projects since I started freelancing as a Nuke compositor in Santiago about 18 months ago. I realised I haven't shared much on here for too long, so here is my start at posting some updates and other coolness.

feels.tv has been completely rebranded, and hats off to the great work that they are doing. Whilst checking out the site I came across one of the projects I was involved with earlier this year for AFP Habitat.

Entel Campaña Institucional

This was a cool project working with Feels and the guys at Alaska Films. Feels got me onboard to help out with the compositing side of things. We comped up a stadium, integrated CG balloons and extended a crowd, amongst other things.

La Ruta del Vino

Not a comp job, but a 1-hour mega project in which I was involved as videographer, editor and post producer, amongst other things. We did this project at Orangutan and it's really something I'm very proud of.

Ladybug Reveal Teaser

This was an idea that Juan Paulo at Leyenda had to help promote the business - there are some very talented dudes working over there in the world of animation. 

Orangutan showreel 2013

Some of the cooler work we did at Orangutan this year.

Recursive rsync with only specific file types

I had never worked out how to use rsync to create intermediate directories when syncing over certain filetypes (in my case Nuke's .nk files). So the other day I googled around and found exactly what I was looking for.

A great way to sync directories and certain filetypes recursively:

rsync -pavr --progress --include="*/" --include="*.nk" --exclude="*" /my/source/directory /my/destination/

And all the thanks needs to go to Mike.

fcp 7 to nuke

The setup: an FCP 7 timeline with 20 shots in it (all on one video layer).
I need to comp them all separately in Nuke.
I want to have each shot in a separate numbered directory, with subdirectories for “scripts” and “renders”.

Here is a little Python script I wrote this morning to export a basic FCP 7 timeline as a series of Nuke scripts into numbered shot directories.
The script parses an XML file that I exported from FCP 7 (the sequence/edit).

I have only used this for the purposes I needed, but it works.
Perhaps it could be expanded to incorporate multiple video layers.

Each nuke script that is generated is just a read node, with the nuke.root frame range set to the duration of the clip in FCP 7.

Even if the script doesn't cover everyone's FCP 7-to-Nuke needs, I found that I learnt a lot about Python's ElementTree XML library. Nice, clean and simple.
And also this page was a big help to get things going:

The script needs to be run from a shell/terminal:
python /path/to/the/script/fcpNuke.py /path/to/your/xmlfile.xml
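As a rough idea of the parsing side (a simplified sketch, not the actual download – element names follow FCP 7's XMEML export format, and the sample XML here is made up):

```python
import xml.etree.ElementTree as ET

def parse_clips(xml_text):
    """Collect (name, in, out) for every clipitem in the sequence."""
    root = ET.fromstring(xml_text)
    clips = []
    # FCP 7's XMEML nests clips under sequence/media/video/track/clipitem
    for clipitem in root.iter('clipitem'):
        clips.append((clipitem.findtext('name'),
                      int(clipitem.findtext('in')),
                      int(clipitem.findtext('out'))))
    return clips

sample = """<xmeml version="4">
  <sequence><media><video><track>
    <clipitem><name>shot_a</name><in>0</in><out>24</out></clipitem>
    <clipitem><name>shot_b</name><in>24</in><out>60</out></clipitem>
  </track></video></media></sequence>
</xmeml>"""

print(parse_clips(sample))
```

From there it's just a loop that makes the numbered shot directory, the “scripts” and “renders” subdirectories, and writes a .nk file per clip.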

*file re-uploaded 11 May 2012

The script:
fcp7toNuke (349 downloads)

some Nuke Python snippets (and other general coolness)

If anyone has some cool snippets to share please feel free to leave them in a comment below.

*last updated: 23 December 2016

user knob value in an expression / particle expression  

[value yourUserKnob]

random particle colour with particle expression node

*** a blendMat node must be applied before the emitter:

 operation: plus
 surface blend: modulate

the “color” parameter of a particleExpression node can be set using a 3dVector:
v( r, g, b )

and each channel can be referenced like this:
the red value for example :

give the colour parameter a random value – (needs to be a 3dVector)
v(random*2, random, random/1.4 )

iterate over the animation curve of a Transform node, copying the Y position values onto the translate of another Transform node.

src = nuke.selectedNode()["translate"]
srcAnim = src.animation(1)  # index 1 = the Y curve
dest = nuke.nodes.Transform()
dest["translate"].setAnimated(1)
destAnim = dest["translate"].animation(1)
for key in srcAnim.keys():
    destAnim.setKey(key.x, key.y)


display the current duration since the first frame (useful when sequence doesn’t start at zero/one)
put this in a text node or in a node’s “label” tab

[python {int( nuke.frame() - nuke.root()['first_frame'].value() ) }]


take a RotoPaint and change each layer's RGBA to a grey level based on its “depth” – useful for prepping for a stereo conversion, I'd imagine

a = nuke.selectedNode()
if a.Class() == "RotoPaint":
    knob = a['curves']
    i = 0.0
    for c in knob.rootLayer:
        pos = float ((len(knob.rootLayer) - i) / len(knob.rootLayer))
        i = i+1
        attrs = c.getAttributes()
        attrs.set(1, attrs.kRedOverlayAttribute, pos)
        attrs.set(1, attrs.kRedAttribute, pos)
        attrs.set(1, attrs.kGreenOverlayAttribute, pos)
        attrs.set(1, attrs.kGreenAttribute, pos)
        attrs.set(1, attrs.kBlueOverlayAttribute, pos)
        attrs.set(1, attrs.kBlueAttribute, pos)
        attrs.set(1, attrs.kAlphaOverlayAttribute, pos)
        attrs.set(1, attrs.kAlphaAttribute, pos)

for all selected Tracker nodes, multiply their track values by a factor of 4

def scaleMe(track):
    scaleFactor = 4
    if track.isAnimated():
        for j in range(0, 2):  # x and y curves
            anim = track.animation(j)
            keys = anim.keys()
            while keys:
                keys.pop().y *= scaleFactor

for a in nuke.selectedNodes():
    if a.Class() == "Tracker3":
        # i am sure there is a much nicer way of iterating over all the knobs
        # but this worked for what i quickly needed
        for t in ['track1', 'track2', 'track3', 'track4']:
            scaleMe(a[t])

set bbox to “B” for nodes inside all Group nodes

import nuke

def bboxGroup():
  classTypes = ['Merge', 'Keymix', 'Copy']
  for b in nuke.allNodes():
    if b.Class() == "Group":
      for a in b.nodes():
        print a.name()
        for n in classTypes:
          if n in a.Class():
            # pick the enum entry that sets the bbox to "B"
            for p in a['bbox'].values():
              if 'B' in p:
                a['bbox'].setValue(p)

bboxGroup()

“Autowrite” node
Copy the following line into the python “beforeRender” field in a write node.
The write node’s “file” field will be filled based on the script’s name/path.
Obviously this all depends on your pipeline etc.

For my current situation each vfx shot has its own directory, which is then populated with “renders” & “scripts” subdirectories.
So for me I can do this:
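A sketch of that path logic (the helper name is mine, and the layout matches the shot/scripts/renders structure described above – inside the write node's beforeRender you'd feed it nuke.root().name() and push the result into the “file” knob):

```python
import os

def autowrite_path(script_path, ext="exr"):
    # e.g. /jobs/shot_010/scripts/shot_010_v01.nk
    #   -> /jobs/shot_010/renders/shot_010_v01/shot_010_v01.%04d.exr
    shot_dir = os.path.dirname(os.path.dirname(script_path))
    base = os.path.splitext(os.path.basename(script_path))[0]
    return os.path.join(shot_dir, "renders", base, "%s.%%04d.%s" % (base, ext))

# in Nuke (untested sketch):
# nuke.thisNode()['file'].setValue(autowrite_path(nuke.root().name()))
print(autowrite_path("/jobs/shot_010/scripts/shot_010_v01.nk"))
```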


delete all nodes that are not selected

s = nuke.selectedNodes()
for n in nuke.allNodes():
    if n not in s:
        nuke.delete(n)

selects all dependencies (input nodes & their parents) from a selected node

a = nuke.selectedNode()
nodesToSelect = []
def climb(node):
    for n in node.dependencies():
        nodesToSelect.append(n)
        climb(n)
climb(a)
for x in nodesToSelect:
    print x.name()
    # x.setSelected(True)

set all Read nodes to cache locally

for a in nuke.allNodes():
    if a.Class() == 'Read':
        a['cacheLocal'].setValue('always')  # knob name/values can vary between Nuke versions

print last frame of script

print nuke.root()['last_frame'].value()

create a backdrop based on selected Nodes

margin = 100
xpMax = xpMin = nuke.selectedNode().xpos()
ypMax = ypMin = nuke.selectedNode().ypos()
for a in nuke.selectedNodes():
    if a.xpos() > xpMax:
        xpMax = a.xpos()
    if a.xpos() < xpMin:
        xpMin = a.xpos()
    if a.ypos() > ypMax:
        ypMax = a.ypos()
    if a.ypos() < ypMin:
        ypMin = a.ypos()
bd = nuke.nodes.BackdropNode(xpos=xpMin - margin / 2, ypos=ypMin - margin / 2, bdwidth=(xpMax - xpMin) + margin, bdheight=(ypMax - ypMin) + margin)

disable “postage stamps” on only “Read” nodes

for a in nuke.allNodes():
    if a.Class() == 'Read':
        a['postage_stamp'].setValue(False)

disable “postage stamps” on all nodes

for a in nuke.allNodes():
    if 'postage_stamp' in a.knobs():
        a['postage_stamp'].setValue(False)

“unhide” all nodes’ inputs – useful when receiving a sneaky comp/lighting script

for a in nuke.allNodes():
    if 'hide_input' in a.knobs():
        a['hide_input'].setValue(False)

change the “first” frame of all selected nodes that are “Read” nodes:
(example changes the first frame to 1018)

for a in nuke.selectedNodes():
    if a.Class() == 'Read':
        a['first'].setValue(1018)

print a selected node's animation methods

node = nuke.selectedNode()
for a in node['lookup'].animations():  # 'lookup' exists on nodes like ColorLookup
    print dir(a)

print inputs (dependencies) of a selected node:

for a in nuke.selectedNode().dependencies():
    print a.name()

print outputs (dependents) of a selected node:

for a in nuke.selectedNode().dependent():
    print a.name()

find all the TimeOffset nodes in a Group called “Group2”, and change the value of each offset based on its position in the array of found time offsets

tos = []
for a in nuke.toNode('Group2').nodes():
    if a.Class() == 'TimeOffset':
        tos.append(a)
for b in tos:
    b['time_offset'].setValue(tos.index(b) * 10)  # 10-frame steps, adjust to taste

set the ‘bbox’ for any selected Merge, Keymix & Copy nodes to “B”

classTypes = ['Merge', 'Keymix', 'Copy']
for a in nuke.selectedNodes():
    for n in classTypes:
        if n in a.Class():
            for p in a['bbox'].values():
                if 'B' in p:
                    a['bbox'].setValue(p)

remove all animation from a selected node

n = nuke.selectedNode()
for a in n.knobs():
    n[a].clearAnimated()

add keyframes – animate a mix

for a in nuke.selectedNodes():
    a['mix'].setValueAt(0, nuke.frame() - 1)
    a['mix'].setValueAt(1, nuke.frame())

half the colour value of all the Constant nodes in a script

for a in nuke.allNodes():
	if a.Class() == "Constant":
		a['color'].setValue(a['color'].value()[0] / 2 , 0)
		a['color'].setValue(a['color'].value()[1] / 2 , 1)
		a['color'].setValue(a['color'].value()[2] / 2 , 2)

find all the Transform nodes in a script, and if their input is a Crop, set the ‘scale’ value to be twice its current value (also checks if the scale is a list/array or a float)

for a in nuke.allNodes():
    if a.Class() == "Transform":
        if a.input(0) and a.input(0).Class() == "Crop":
            x = a['scale'].value()
            if type(x).__name__ == 'list':
                a['scale'].setValue(x[0] * 2, 0)
                a['scale'].setValue(x[1] * 2, 1)
            if type(x).__name__ == 'float':
                a['scale'].setValue(x * 2)

set all the gain values of all ColorCorrect nodes to be twice their current value

for a in nuke.allNodes():
	if a.Class() == "ColorCorrect":
		a['gain'].setValue(a['gain'].value() * 2)

print the filenames of Read nodes with ‘mov’ in the filename

for a in nuke.allNodes():
    if a.Class() == 'Read':
        if 'mov' in a['file'].value():
            print a['file'].value()

change the font size of all selected Write nodes

for a in nuke.selectedNodes():
    if a.Class() == 'Write':
        a['note_font_size'].setValue(40)

create 20 constants with incrementing colour values

def makeConstants(amount):
    for i in range(amount):
        a = nuke.nodes.Constant()
        color = float(i) / float(amount)
        a['color'].setValue([color, color, color, 1.0])

makeConstants(20)

change the name of all Text nodes to the contents of their “message” knob

for a in nuke.allNodes():
    if a.Class() == 'Text':
        a.setName(a['message'].value())

(in an Expression node) – alpha channel – if the y value of the pixel is even, make it 0, else make it 1 (for viewing fields?):

fmod(y, 2) == 0 ? 0 : 1

Music Video: Tamarama – “Middle of a magazine”

I recently did some grading on a music video for Tamarama for their track “Middle of a Magazine”. It was shot on a few flavours of HD(V) – so I ran the clip through Shake to deinterlace it and did the grading in Color. Let me know what you think. Best of luck with the clip guys!


distributed rendering using Apple Qmaster 3 – success!!!


Thanks to a fair amount of googling and a few days of on-and-off testing, I seem to have a working setup for doing distributed processing using Apple Qmaster – for Shake, Maya and Compressor.

Here is my effort at explaining what I did to get it working – big apologies, it’s poorly written. I’ll revisit this post soon.

some notes on our workflow

  • Our current workflow for dealing with files/assets revolves around a directory structure that breaks each project up into shot numbers, the various departments we have, etc.
  • Our xserve is the centralised area for our assets and the QMaster render manager.
  • Each artist’s local machine has a directory somewhere that mimics the xServe’s project directory structure for working locally (to keep network traffic down over gigabit ethernet).
  • I set up $JOBS_LOCAL and $JOBS_SERVER env variables on each machine and the xserve – these variables point to the relevant local project directory on the artist’s mac and the project directory on the server.
  • I created a python script that does a find and replace of the 2 variables and writes out a new shake script renamed “*_SERVER.shk” or “*_LOCAL.shk”.
  • (See further down for setting up the ENV variables.)
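The find-and-replace script boils down to something like this (a sketch: the $JOBS_LOCAL/$JOBS_SERVER idea comes from the workflow above, but the function name and example paths are made up):

```python
import os

def relink(shk_path, local_root, server_root, to_server=True):
    """Rewrite paths in a Shake script, saving a *_SERVER.shk / *_LOCAL.shk copy."""
    if to_server:
        src, dst, tag = local_root, server_root, "_SERVER"
    else:
        src, dst, tag = server_root, local_root, "_LOCAL"
    with open(shk_path) as f:
        text = f.read().replace(src, dst)
    out_path = os.path.splitext(shk_path)[0] + tag + ".shk"
    with open(out_path, "w") as f:
        f.write(text)
    return out_path
```

In practice the two roots would come from os.environ["JOBS_LOCAL"] and os.environ["JOBS_SERVER"].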

    Centralised $NR_INCLUDE_PATH
    I setup an env variable for $NR_INCLUDE_PATH for all Shake machines and the xserve – to look at a sharepoint (the nreal folder) on the xServe and automatically mount it – so all the Shake machines would be using the same macros/plugins and settings. I setup a new user on the xserve “shake” that can only mount the nreal directory.

    After some googling around I found a way to automount volumes:

    OS X 10.5 (fstab)
    /etc/fstab – to automount sharepoint on xServe
    LINK: http://blogs.sun.com/lowbit/entry/easy_afp_autmount_on_os

    # ————————————————————————————-
    # Mount AFP share from xServe via AFP
    # ————————————————————————————-
    ourserver.local:/nreal /Network/nreal url auto,url==afp://nreal:password@ourserver.local/nreal 0 0
    # ————————————————————————————-

    how to refresh automount(login as root/su):

    sudo automount -cv

    OS X 10.4 (netinfo manager)
    LINK: http://ask.metafilter.com/54223/Help-me-automount-my-smb-share-in-Apple-OS-X-reward-inside
    in terminal.app:

    nidump fstab . > /tmp/fstab
    echo "ourserver:/nreal /Network/nreal url url==cifs://nreal:password@ourserver/nreal 0 0" >> /tmp/fstab
    sudo niload fstab . < /tmp/fstab
    sudo killall -HUP automount

    how to refresh automount(login as root/su):

    sudo killall -HUP automount

    I had tried for some time to get shake scripts to render over our network using Qmaster but it just wouldn’t work. The QMaster logs were where I found all my errors. ‘frame 0021 could not be found’, ‘UNIX error 3 file could not be found’

    things to check

  • shake / maya / qmaster 3 node is installed on all render machines
  • the location of all your media can be accessed by all machines
  • What seemed strange was that if I logged into the xServe and executed a script via terminal with shake (to render just on the xserve) the render would complete successfully. Then it clicked that maybe the environment variables I was using in my scripts ($xxx) might not be getting recognised by Qmaster, or by the way Qmaster launches Shake?

    The big tip-off
    I googled for the errors I kept seeing and luckily enough this forum post popped up:

    “I have pinned this down to at least one reason – that the shake qmaster isn’t picking up the NR_INCLUDE_PATH environment variable. Does anyone know where you need to set this up on a cluster node (I can get the qmasterd to pick it up but that doesn’t solve the problem!) “

    If you are trying to use Qmaster, and need to set environment variables, then you need to create a wrapper script that sets the variables and then calls the appropriate version of shake.

    For example, (this was from Apple)

    NR_INCLUDE_PATH="/Network_Applications/Shake Plugins/nreal/include"; export NR_INCLUDE_PATH
    NR_ICON_PATH="/Network_Applications/Shake Plugins/nreal/icons"; export NR_ICON_PATH

    umask 000

    /Applications/Shake3.50/shake.app/Contents/MacOS/shake "$@"
    status=$?
    if [ $status -ne 0 ]; then
        exit $status
    fi

    Then when using Qmaster, you run the application using this
    script (saved for example as /usr/bin/shakeWrapper) which must be
    installed on all nodes in the cluster.


    Cheers Nell!

    I took Nell’s script, added a few lines to it and stuck it in /usr/bin/

    /usr/bin/shakeWrapper – create a file to launch Shake respecting ENV variables – later alias ‘shake’ to this file

    echo "Shake 4 running through a wrapper script – /usr/bin/shakeWrapper"

    umask 000

    /Applications/Shake/shake.app/Contents/MacOS/shake "$@"
    status=$?
    if [ $status -ne 0 ]; then
        exit $status
    fi

    I added the first few lines there so when I later made an alias to this script the Shake user would have some idea what is going on when launching Shake via the terminal.

    Setting up the ENV (environment) variables…

    /etc/profile – to declare system-wide Environment variables / aliases
    (alias shake to use a wrapper to make it launch respecting/recognising the env variables)

    # System-wide .profile for sh(1)

    if [ -x /usr/libexec/path_helper ]; then
        eval `/usr/libexec/path_helper -s`
    fi

    if [ "${BASH-no}" != "no" ]; then
        [ -r /etc/bashrc ] && . /etc/bashrc
    fi

    export JOBS_LOCAL

    export JOBS_SERVER

    export NR_INCLUDE_PATH

    alias shake="/usr/bin/shakeWrapper"

    *remember to enter into terminal:
    source /etc/profile

    FCS2, PSD, Motion, AJA kona 10-bit YUV and Colorista

    Recently I have noticed a few bugs with our FCS2 system.

    We are using Final Cut Studio 2 with an AJA 10bit Kona LHe card equipped MacPro 2×3GHz Quad running OSX 10.4.9.

    1. Importing PSD into FCP

    Until about a month ago I was able to import PSD files into FCP as a layered sequence. Now I can't seem to get that to happen – the file only comes in flattened. Not sure why.

    My workaround for now is to open the PSD file in Motion as a layered comp. I think this is actually a better way to lay up gfx anyway – Motion is much faster for controlling the layers. Props to Dr Gormley for the tip!

    2. Problems playing out Motion projects in FCP

    For me one of the coolest things of working with Motion in FCP is that you can jump back and forth between the two and the embedded Motion project will update.

    But now when I render the Motion clip and put it out to a VTR, it plays out bizarrely – sometimes with a heavy PINK cast to it.

    To workaround this I changed the FCP sequence setting from ‘Render 10-bit Material in high-precision YUV’ to ‘Render in 8-bit YUV’. This seems to allow me to get SOMETHING to tape that is not screwed up – but means we are not going to tape in 10bit.

    The other way around this is to simply render a QuickTime out of Motion – but I really like the flexibility of the embedded comps in FCP – good for working with clients.

    3. Magic Bullet Colorista – no likey 10-bit sequences either

    The Colorista plugin is awesome and I use it quite a lot. But it seems to be another hater of the 10-bit YUV render setting in FCP. Sometimes when I render clips with Colorista on them it produces random artifacts – sometimes random bright noise in some areas.

    Workaround: changing the FCP sequence's render settings to 8-bit fixes this.

    So I am going to suss this out and find some solutions/explanations – I will blog back about this as I go.

    Python script to get all maximum values in an image sequence

    Yesterday I fudged together a python script that uses Shake to create an image that is the maximum values of an image sequence.

    The output of the python script is a Shake script that contains as many FileIns as there are images in the sequence – each one slipped one frame more than the previous. Does that make any sense?

    Each image is plugged into a Max layer node and then that is plugged into the next Max layer node etc….

    So depending on the length of your sequence the Shake script can be enormous!

    Python is awesome.
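In sketch form it's just string generation – one FileIn per frame, each slipped back one more frame, chained through Max layer nodes (the Shake node arguments here are approximate, from memory, so treat them as a guide rather than gospel):

```python
def max_script(base, first, last):
    """Build Shake script text that Max-merges every frame of a sequence."""
    lines = []
    n = last - first + 1
    for i in range(n):
        # each FileIn gets slipped by -i frames so every frame lines up on `first`
        lines.append('img%d = FileIn("%s.%%04d.dpx", "Auto", "Loop", %d);' % (i, base, -i))
    # chain everything through Max layer nodes: Max(max_prev, img_i)
    chain = "img0"
    for i in range(1, n):
        lines.append("max%d = Max(%s, img%d, 100);" % (i, chain, i))
        chain = "max%d" % i
    return "\n".join(lines)

print(max_script("/jobs/plates/sh010", 1, 3))
```

For a long sequence the output is exactly as enormous as the post says.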


    Here is the script:
    shake max python script (1219 downloads)