SOLUTION: WebSockets, Django Channels, MacOS 10.12, Server 5.2, Apache 2.4.23

Finally…

I have WebSocket connections working via Apache in MacOS Server 5.2.

As this caused a lot of head/heartache over the last few weeks I thought I’d better share what I experienced in case someone finds themselves in the same boat!

And the solution is simple.

The short version:

I use MacOS Server 5.2 (ships with Apache 2.4.23) to run a python Django application via the mod_wsgi module.

I had been trying to set up ProxyPass and wstunnel on MacOS 10.12 & Server 5.2 to handle WebSocket connections via an ASGI interface server called Daphne, running on localhost on port 8001.

I wanted to reverse proxy any WebSocket connection to wss://myapp.local/chat/stream/ to ws://localhost:8001/chat/stream/

From what I had read on all the forums and mailing lists I had scoured, the solution was simply to add some ProxyPass definitions to the appropriate virtual host, make sure the mod_proxy and mod_proxy_wstunnel modules were loaded, and it would work.

Long story short – from what I understand all of this trouble came down to MacOS Server 5 and one major change:
“A single instance of httpd runs as a reverse proxy, called the Service Proxy, and several additional instances of httpd run behind that proxy to support specific HTTP-based services, including an instance for the Websites service.”

All I needed to do to proxy the websocket connection was the following:

in:
/Library/Server/Web/Config/Proxy/apache_serviceproxy.conf

Add the following around line 297 (in the section about user websites and webdav):
ProxyPass / http://localhost:8001/
ProxyPassReverse / http://localhost:8001/

RewriteEngine on
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
RewriteRule .* ws://localhost:8001%{REQUEST_URI} [P]
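
The two RewriteCond lines amount to a case-insensitive header check. The routing decision can be sketched in Python (header names and patterns taken from the config above; `should_tunnel` is just an illustrative name):

```python
import re

def should_tunnel(headers):
    """Mirror the two RewriteCond checks: proxy to the ws:// backend only
    when both headers signal a WebSocket upgrade ([NC] = case-insensitive)."""
    upgrade_ok = re.match(r"^WebSocket$", headers.get("Upgrade", ""), re.IGNORECASE)
    connection_ok = re.match(r"^Upgrade$", headers.get("Connection", ""), re.IGNORECASE)
    return bool(upgrade_ok and connection_ok)

# A WebSocket handshake carries both headers, so it gets tunnelled:
print(should_tunnel({"Upgrade": "websocket", "Connection": "Upgrade"}))  # True
# A plain HTTP request has neither, so it falls through to ProxyPass:
print(should_tunnel({"Upgrade": "", "Connection": "keep-alive"}))  # False
```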

I then kicked over the service proxy:

sudo launchctl unload -w /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.serviceproxy.plist
sudo launchctl load -w /Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.serviceproxy.plist

And the WebSocket connections were instantly working!

The long version:

For many weeks I have been trying to get WebSocket connections functioning with Apache and Andrew Godwin’s Django Channels project in an app I am developing.

Django Channels is “a project to make Django able to handle more than just plain HTTP requests, including WebSockets and HTTP2, as well as the ability to run code after a response has been sent for things like thumbnailing or background calculation.”

My interest in Django Channels came from my requirement for a chat system in my webapp. After watching several of Andrew’s demos on YouTube, reading through the docs and eventually installing Andrew’s demo Django Channels project, I figured I would be able to get this working in production on our MacOS Server.

The current release of MacOS 10.12 and Server 5.2 ships with Apache 2.4.23. This comes with the necessary mod_proxy_wstunnel module to be able to proxy WebSocket connections (ws:// and secure wss://) in Apache and is already loaded in the server config file:

/Library/Server/Web/Config/apache2/httpd_server_app.conf

Daphne is Andrew’s ASGI interface server; it supports WebSockets & long-poll HTTP requests, which WSGI does not.

With Daphne running on localhost on a port that MacOS isn’t already occupying (I went with 8001), the idea was to get Apache to reverse proxy certain requests to Daphne.

Daphne can be run on a specified port (8001 in this example) like so (-v2 for more feedback):
daphne -p 8001 yourapp -v2

I wanted Daphne to handle only the WebSocket connections (as I currently depend on some Apache modules for serving media, such as mod_xsendfile). In my case the WebSocket connection was via /chat/stream/, based on Andrew’s demo project.

From what I had read, in MacOS Server’s implementation of Apache the idea is to declare these ProxyPass directives inside the virtual host files of your “sites” in:

/Library/Server/Web/Config/apache2/sites/

In config files such as:
0000_127.0.0.1_34543_.conf

I did also read that any customisation for web apps running on MacOS Server should be made to the plist file for the required web app in:
/Library/Server/Web/Config/apache2/webapps/
In a plist file such as:
com.apple.webapp.wsgi.plist
Anyway…

I edited the 0000_127.0.0.1_34543_.conf file adding:

ProxyPass /chat/stream/ ws://localhost:8001/
ProxyPassReverse /chat/stream/ ws://localhost:8001/

Eager to test out my first WebSocket chat connection, I refreshed the page only to see an error printed in the Apache log:

No protocol handler was valid for the URL /chat/stream/. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.

I had read of many people finding a solution, at least with Apache on Ubuntu or a custom install on MacOS.
I even tried installing Apache using Brew, and when that didn’t work I almost proceeded to install nginx.

After countless hours/days of googling I reached out to the Apache mailing list for some help with this error. Yann Ylavic was very generous with his time and offered me various ideas on how to get it going. After trying the following:

SetEnvIf Request_URI ^/chat/stream/ is_websocket
RequestHeader set Upgrade WebSocket env=is_websocket
ProxyPass /chat/stream/ ws://myserver.local:8001/chat/stream/

I noticed that Daphne, the interface server on port 8001, was starting to receive ws connections!
However in the client browser it was logging:

“Error during WebSocket handshake: ‘Upgrade’ header is missing”

From what I could see, mod_dumpio was logging that the “Connection: Upgrade” and “Upgrade: WebSocket” headers were being sent as part of the WebSocket handshake:

mod_dumpio: dumpio_in (data-HEAP): HTTP/1.1 101 Switching Protocols\r\nServer: AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n
mod_dumpio.c(164): [client 127.0.0.1:63944] mod_dumpio: dumpio_out
mod_dumpio.c(58): [client 127.0.0.1:63944] mod_dumpio: dumpio_out (data-TRANSIENT): 160 bytes
mod_dumpio.c(100): [client 127.0.0.1:63944] mod_dumpio: dumpio_out (data-TRANSIENT): HTTP/1.1 101 Switching Protocols\r\nServer: AutobahnPython/0.17.1\r\nUpgrade: WebSocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: 17WYrMeMS8a4ImHpU0gS3/k0+Cg=\r\n\r\n

However the client browser showed nothing in the response headers.

I was more stumped than ever.

I explored the client-side jQuery framework as well as the Django Channels & Autobahn modules to see if perhaps something was amiss, and then revised my own app and various combinations of suggestions about Apache and its modules. But nothing stood out to me.

Then I reread the ReadMe.txt inside the apache2 dir:

/Library/Server/Web/Config/apache2/ReadMe.txt

“Special notes about the web proxy architecture in Server application 5.0:

This version of Server application contains a revised architecture for all HTTP-based services. In previous versions there was a single instance of httpd acting as a reverse proxy for Wiki, Profile, and Calendar/Address services, and also acting as the Websites service. With this version, there is a major change: A single instance of httpd runs as a reverse proxy, called the Service Proxy, and several additional instances of httpd run behind that proxy to support specific HTTP-based services, including an instance for the Websites service.

Since the httpd instance for the Websites service is now behind a reverse proxy, or Service Proxy, note the following:

It is only the external Service Proxy httpd instance that listens on TCP ports 80 and 443; it proxies HTTP requests and responses to Websites and other HTTP-based services.

I wondered if this Service Proxy had something to do with it. I had a look over:

/Library/Server/Web/Config/Proxy/apache_serviceproxy.conf

and noticed a comment – “# The user websites, and webdav”.
I figured it wouldn’t hurt to try adding the proxypass definitions & rewrite rules that people had suggested on the forums as their solution.

ProxyPass / http://localhost:8001/
ProxyPassReverse / http://localhost:8001/

RewriteEngine on
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]
RewriteRule .* ws://localhost:8001%{REQUEST_URI} [P]

Sure enough after restarting the ServiceProxy it all started to work!

http://stackoverflow.com/questions/41287959/mod-proxy-wstunnel-mac-os-x-10-11-6-apache-2-4-18
https://github.com/adamteale/macos-server5-websocket/

Learning to be a Djangonaut: Localisation/translation – locale for Chile

Djangonaut
A person who is expert in Django web framework.
He is a Djangonaut dude.

In my pursuit to become a gun djangonaut and get my knowledge of the framework sound and stable, I have been following along with Marina Mele’s fantastic series “TaskBuster Django Tutorial”. It covers a lot of topics that I am keen to understand and be able to explain one day.

My current goal is to have a django web app that has a Restful API that will allow various types of authentication. On top of this I will create an iOS app that can talk to it.

I had started to follow along with a post that Félix Descôteaux had written titled “A Rest API using Django and authentication with OAuth2 AND third parties!”. As I was getting into it I realised that I had better spend some time getting a better basic understanding of Django and some other key concepts of Python and web app development I had yet to explore. Geez, the internet is just so cool when you want to learn something!

Along the way, following Marina’s series, I hit a bit of trouble getting “localisation” to work – making a site work across multiple languages. Django seems to have an easy-to-use mechanism, but I was having issues with the locale for Chilean Spanish (I always am, really…). 🙂

So in an effort to try to give back a little to the wonderful internet for anyone else who may stumble across this post with the same issue…

Basically:

It was a combination of using “es-CL”, “es_CL” & “es-cl” in various places.

 

Property        Value
Base locale ID  es-CL
Language code   es
Language        Spanish
Country code    CL
Country         Chile

In the base.py (settings file):

LANGUAGES = (
('en', _('English')),
('es-CL', _('Spanish (Chile)')),
)

Then in test_all_users.py I used ‘es-cl’ wherever necessary.

Then in the tb_test virtual env shell:

$ python manage.py makemessages -l es_CL
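
makemessages wants the underscore locale form (es_CL), while the settings and templates use the hyphenated language code (es-CL / es-cl). The conversion Django applies between the two can be sketched roughly like this (a simplified version; real Django handles more edge cases):

```python
def to_locale(language):
    """Language code ('es-cl') -> locale name ('es_CL'). Simplified sketch."""
    lang, _, country = language.lower().partition("-")
    return lang + "_" + country.upper() if country else lang

def to_language(locale):
    """Locale name ('es_CL') -> language code ('es-cl'). Simplified sketch."""
    lang, _, country = locale.partition("_")
    return lang.lower() + "-" + country.lower() if country else lang.lower()

print(to_locale("es-cl"))    # es_CL
print(to_language("es_CL"))  # es-cl
```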

I edited the django.po with my translated strings.
(I had to laugh when I read the “po” in django.po for this Chilean translation :] )

Then in the tb_test virtual env shell:

$ python manage.py compilemessages -l es_CL

Now I am passing the tests and seeing the translated page in the browser.

I hope that helps someone out one day!

fcp 7 to nuke

Scenario
An FCP 7 timeline with 20 shots in it (all in one video layer).
I need to comp them all separately in Nuke.
I want to have each shot in a separate numbered directory, with subdirectories for “scripts” and “renders”.

Here is a little Python script I wrote this morning to export a basic FCP 7 timeline as a series of Nuke scripts into numbered shot directories.
This script parses an XML file that I exported from FCP 7 (the sequence/edit).

I have only used this for the purposes I needed, but it works.
Perhaps it could be expanded to incorporate multiple video layers.

Each nuke script that is generated is just a read node, with the nuke.root frame range set to the duration of the clip in FCP 7.

Even if the script doesn’t help everyone’s fcp7-nuke needs, I found that I learnt a lot about Python’s ElementTree XML library. Nice, clean, simple.
And also this page was a big help to get things going:
http://drumcoder.co.uk/blog/2010/jun/17/using-elementtree-python/

The script needs to be run from a shell/terminal:
python /path/to/the/script/fcpNuke.py /path/to/your/xmlfile.xml
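
The XML parsing itself is only a few lines with ElementTree. A stripped-down sketch of the idea (the clipitem/name/in/out element names are from the xmeml export format; the sample string here is made up):

```python
import xml.etree.ElementTree as ET

def parse_clips(xml_text):
    """Pull each clip's name and in/out points from an FCP 7 XML export."""
    root = ET.fromstring(xml_text)
    clips = []
    for item in root.iter("clipitem"):
        clips.append({
            "name": item.findtext("name"),
            "in": int(item.findtext("in")),
            "out": int(item.findtext("out")),
        })
    return clips

# A tiny made-up stand-in for a real sequence export:
sample = """<xmeml version="4"><sequence><media><video><track>
  <clipitem><name>shot_010</name><in>0</in><out>120</out></clipitem>
  <clipitem><name>shot_020</name><in>120</in><out>200</out></clipitem>
</track></video></media></sequence></xmeml>"""

for clip in parse_clips(sample):
    print(clip["name"], clip["out"] - clip["in"])  # name and duration in frames
```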

*file re-uploaded 11 May 2012

The script:
fcp7toNuke

some Nuke Python snippets (and other general coolness)

If anyone has some cool snippets to share please feel free to leave them in a comment below.

*last updated: 23 December 2016

user knob value in an expression / particle expression  

[value yourUserKnob]

random particle colour with particle expression node

*** a blendMat node must be applied before the emitter:

 operation: plus
 surface blend: modulate

the “color” parameter of a particleExpression node can be set using a 3dVector:
v( r, g, b )

and each channel can be referenced like this:
the red value for example :
x(color)

give the colour parameter a random value – (needs to be a 3dVector)
v(random*2, random, random/1.4 )

iterate over the animation curve of a Transform node, copying the value of the Y position to a parameter of another Transform node.

dest = nuke.nodes.Transform()
dest["translate"].setAnimated()
 
src = nuke.selectedNode()["translate"]
ypos = src.animation(1)  # index 1 = the Y curve
 
for frame in range(0, 42):
    dest["translate"].animation(1).setKey(frame, ypos.evaluate(frame))

 

display the current duration since the first frame (useful when sequence doesn’t start at zero/one)
put this in a text node or in a node’s “label” tab

[python {int( nuke.frame() - nuke.root()['first_frame'].value() ) }]

 

take a RotoPaint and change each layer’s RGBA to a grey level based on its “depth” – useful for prep’ing for a stereo conversion I’d imagine

import random
a = nuke.selectedNode()
if a.Class() == "RotoPaint":
    knob = a['curves']
    i = 0.0
    for c in knob.rootLayer:
 
        pos = float ((len(knob.rootLayer) - i) / len(knob.rootLayer))
        i = i+1
 
        attrs = c.getAttributes()
        attrs.set(1, attrs.kRedOverlayAttribute, pos)
        attrs.set(1, attrs.kRedAttribute, pos)
        attrs.set(1, attrs.kGreenOverlayAttribute, pos)
        attrs.set(1, attrs.kGreenAttribute, pos)
        attrs.set(1, attrs.kBlueOverlayAttribute, pos)
        attrs.set(1, attrs.kBlueAttribute, pos)
        attrs.set(1, attrs.kAlphaOverlayAttribute, pos)
        attrs.set(1, attrs.kAlphaAttribute, pos)

for all selected Tracker nodes multiply their track values by a factor of 4

def scaleMe(track):
    if track.isAnimated():
        scaleFactor = 4
        for j in range(0, 2):
            anim = track.animation(j)
            keys = anim.keys()
            while keys:
                keys.pop().y *= scaleFactor
 
for a in nuke.selectedNodes():
    if a.Class() == "Tracker3":
        # i am sure there is a much nicer way of iterating over
        # all the knobs but this worked for what i quickly needed
        for t in (a['track1'], a['track2'], a['track3'], a['track4']):
            scaleMe(t)

set bbox to “B” for nodes inside all Group nodes

import nuke
 
def bboxGroup():
  for b in nuke.allNodes():
    if b.Class() == "Group":
      for a in nuke.toNode(b.name()).nodes():
        classTypes = ['Merge' , 'Keymix', 'Copy', ]
        print a.name()
        for n in classTypes:
          if n in a.Class():
            try:
              for p in a['bbox'].values():
                if 'B' in p:
                  a['bbox'].setValue(a['bbox'].values().index(p))
            except:
              pass

“Autowrite” node
Copy the following line into the python “beforeRender” field in a write node.
The write node’s “file” field will be filled based on the script’s name/path.
Obviously this all depends on your pipeline etc.

For my current situation each vfx shot has its own directory, which is then populated with “renders” & “scripts” subdirectories.
So for me I can do this:

nuke.thisNode()["file"].setValue(nuke.root().name().replace("scripts","renders").replace(".nk",".mov"))

delete all nodes that are not selected

s = nuke.selectedNodes()
b = nuke.allNodes()
 
for n in b:
    if n not in s:
        nuke.delete(n)

selects all dependencies (input nodes & their parents) from a selected node

a = nuke.selectedNode()
nodesToSelect = []
 
nodesToSelect.append(a)
def climb(node):
    # print node.name()
    for n in node.dependencies():
        nodesToSelect.append(n)
        climb(n)
 
climb(a)
 
for x in nodesToSelect:
    print x.name()
    # x.setSelected(1)

set all Read nodes to cache locally

for a in nuke.allNodes():
    if a.Class()=='Read':
        a['cached'].setValue(1)
        a['cacheLocal'].setValue(0)

print last frame of script

print nuke.root()['last_frame'].value()

create a backdrop based on selected Nodes

margin = 100
xpMax = nuke.selectedNode().xpos()
xpMin = nuke.selectedNode().xpos()
ypMax = nuke.selectedNode().ypos()
ypMin = nuke.selectedNode().ypos()
 
for a in nuke.selectedNodes():
    if a.xpos() > xpMax:
        xpMax = a.xpos()
    if a.xpos() < xpMin:
        xpMin = a.xpos()
    if a.ypos() > ypMax:
        ypMax = a.ypos()
    if a.ypos() < ypMin:
        ypMin = a.ypos()
 
bd = nuke.nodes.BackdropNode(bdwidth=(xpMax-xpMin)+margin, bdheight=(ypMax-ypMin)+margin) 
bd.setXpos(xpMin-margin/2)
bd.setYpos(ypMin-margin/2)

disable “postage stamps” on only “Read” nodes

for a in nuke.allNodes():
   if a.Class()=='Read':
       a['postage_stamp'].setValue(0)

disable “postage stamps” on all nodes

for a in nuke.allNodes():
    try:
        a['postage_stamp'].setValue(0)
    except:
        pass

“unhide” all nodes’ inputs – useful when receiving a sneaky comp/lighting script

for a in nuke.allNodes():
    try:
        a['hide_input'].setValue(0)
    except:
        pass

change the “first” frame of all selected nodes that are “Read” nodes:
(example changes the first frame to 1018)

for a in nuke.selectedNodes():
    if a.Class() == 'Read':
        a['first'].setValue(1018)

print the methods available on a selected node’s animation curves

node = nuke.selectedNode()
for a in node['lookup'].animations():
    print dir(a)

print inputs (dependencies) of a selected node:

for a in nuke.selectedNode().dependencies():
    print a.name()

print outputs (dependents) of a selected node:

for a in nuke.selectedNode().dependent():
    print a.name()

find all the TimeOffset nodes in a Group called “Group2”, and change the value of each offset based on its position in the array of found time offsets

tos = []
for a in nuke.toNode('Group2').nodes():
	if a.Class()=='TimeOffset':
		tos.append(a)
for b in tos:
	b['time_offset'].setValue(tos.index(b))

set the ‘bbox’ for any selected Merge, Keymix & Copy nodes to “B”

for a in nuke.selectedNodes():
	classTypes = ['Merge' , 'Keymix', 'Copy', ]
	for n in classTypes:
		if n in a.Class():
			for p in a['bbox'].values():
				if 'B' in p:
					a['bbox'].setValue(a['bbox'].values().index(p))

remove all animation from a selected node

for a in nuke.selectedNode().knobs():
	nuke.selectedNode()[a].clearAnimated()

add keyframes – animate a mix

for a in nuke.selectedNodes():
	a['mix'].setAnimated()
	a['mix'].setValueAt(1,nuke.frame())
	a['mix'].setValueAt(0,(nuke.frame() - 1))

half the colour value of all the Constant nodes in a script

for a in nuke.allNodes():
	if a.Class() == "Constant":
		a['color'].setValue(a['color'].value()[0] / 2 , 0)
		a['color'].setValue(a['color'].value()[1] / 2 , 1)
		a['color'].setValue(a['color'].value()[2] / 2 , 2)

find all the Transform nodes in a script, and if their input is a Crop, set the ‘scale’ value to be twice its current value (also checks if the scale is a list/array or a float)

for a in nuke.allNodes():
	if a.Class() == "Transform":
		if a.input(0).Class() == "Crop":
			x = a['scale'].value()
			if type(x).__name__ == 'list':
				a['scale'].setValue(x[0] * 2 , 0)
				a['scale'].setValue(x[1] * 2 , 1)
			if type(x).__name__ == 'float':
				a['scale'].setValue(x*2)

set all the gain values of all ColorCorrect nodes to be twice their current value

for a in nuke.allNodes():
	if a.Class() == "ColorCorrect":
		a['gain'].setValue(a['gain'].value() * 2)

print files with ‘mov’ in filename

for a in nuke.allNodes():
	if a.Class() == 'Read':
		if 'mov' in a['file'].value():
			print a['file'].value()

change font size of all selected “Write” nodes

for a in nuke.selectedNodes():
	if a.Class() == 'Write':
		a['note_font_size'].setValue(60)

create 20 constants with incrementing colour values

def makeConstants(amount):
	for i in range(amount):
		a= nuke.nodes.Constant()
		color= float( float(i) / float(amount) )
		a['color'].setValue(color)
makeConstants(20)

change the name of all Text nodes to the contents of their “message” knob

for a in nuke.allNodes():
    if a.Class()=='Text':
        a.setName(a['message'].value())

Expressions
(in an expression node) – alpha channel – if the y value of the pixel is even, make it 0, else make it 1 (for viewing fields?)
y%2==1?0:1

Learning Obj-C

Yesterday I decided I have to give up on PyObjC for the moment and step into the world of Objective-C.

I have got to a point in developing SubIt where I can’t seem to get enough out of the PyObjC bridge to do what SubIt needs.

So now I am in the process of rewriting SubIt in Obj-C.

Obj-C seems weird and complicated, and I can’t see why it has to be this way when Python/Ruby seem just so much easier to understand. Oh well, I have taken the plunge.

I spent most of yesterday reading a great and easy to follow PDF called “Become An Xcoder“, available for free download:
http://www.cocoalab.com/?q=becomeanxcoder

I have been looking at a bunch of podcasts/videos/books and to me this PDF was the quickest/easiest way to get a little up to speed.

distributed rendering using Apple Qmaster 3 – success!!!

Thanks to a fair amount of googling and a few days on and off of testing I seem to have a working setup for doing distributed processing using Apple Qmaster – for Shake, Maya and Compressor.

Here is my effort at explaining what I did to get it working – big apologies, it’s poorly written. I’ll revisit this post soon.

some notes on our workflow

  • Our current workflow for dealing with files/assets revolves around a directory structure that breaks each project up into shot numbers, the various departments we have, etc.
  • Our xserve is the centralised area for our assets and the QMaster render manager.
  • Each artist’s local machine has a directory somewhere that mimics the xServe’s project directory structure for working locally (to keep network traffic down over gigabit ethernet).
  • I set up $JOBS_LOCAL and $JOBS_SERVER env variables on each machine and the xserve – these variables point to the relevant local project directory on the artist’s mac and the project directory on the server.
  • I created a python script that does a find and replace of the 2 variables and writes out a new Shake script renamed “*_SERVER.shk” or “*_LOCAL.shk”
  • (See further down for setting up the ENV variables.)
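
The find-and-replace step in the bullet above is trivial but handy; a sketch of the idea (the variable names are the ones from our setup, the FileIn line and filenames are made up):

```python
def relink(script_text, to_server=True):
    """Swap the $JOBS_LOCAL / $JOBS_SERVER path prefixes inside a Shake script."""
    src, dst = ("$JOBS_LOCAL", "$JOBS_SERVER") if to_server else ("$JOBS_SERVER", "$JOBS_LOCAL")
    return script_text.replace(src, dst)

def renamed(filename, to_server=True):
    """comp_v01.shk -> comp_v01_SERVER.shk (or _LOCAL.shk going the other way)."""
    base = filename[:-4] if filename.endswith(".shk") else filename
    return base + ("_SERVER.shk" if to_server else "_LOCAL.shk")

print(relink('image = FileIn("$JOBS_LOCAL/job01/plates/sh010.%04d.dpx");'))
# image = FileIn("$JOBS_SERVER/job01/plates/sh010.%04d.dpx");
print(renamed("comp_v01.shk"))  # comp_v01_SERVER.shk
```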

    Centralised $NR_INCLUDE_PATH
    I set up an env variable $NR_INCLUDE_PATH on all Shake machines and the xserve – pointing at a sharepoint (the nreal folder) on the xServe that mounts automatically – so all the Shake machines would be using the same macros/plugins and settings. I set up a new user on the xserve, “shake”, that can only mount the nreal directory.

    After some googling around I found a way to automount volumes:

    OS X 10.5 (fstab)
    /etc/fstab – to automount sharepoint on xServe
    LINK: http://blogs.sun.com/lowbit/entry/easy_afp_autmount_on_os

    # -------------------------------------------------------------------------
    # Mount AFP share from xServe via AFP
    # -------------------------------------------------------------------------
    ourserver.local:/nreal /Network/nreal url auto,url==afp://nreal:password@ourserver.local/nreal 0 0
    # -------------------------------------------------------------------------

    how to refresh automount(login as root/su):

    sudo automount -cv

    OS X 10.4 (netinfo manager)
    LINK: http://ask.metafilter.com/54223/Help-me-automount-my-smb-share-in-Apple-OS-X-reward-inside
    in terminal.app:

    nidump fstab . > /tmp/fstab
    echo "ourservernreal /Network/nreal url url==cifs://nreal:password@ourserver/nreal 0 0" >> /tmp/fstab
    sudo niload fstab . < /tmp/fstab
    sudo killall -HUP automount

    how to refresh automount(login as root/su):

    sudo killall -HUP automount

    QMASTER
    I had tried for some time to get Shake scripts to render over our network using Qmaster, but it just wouldn’t work. The Qmaster logs were where I found all my errors: ‘frame 0021 could not be found’, ‘UNIX error 3 file could not be found’.

    things to check

  • shake / maya / qmaster 3 node is installed on all render machines
  • the location of all your media can be accessed by all machines
  • What seemed strange was that if I logged into the xServe and executed a script via terminal with shake (to render just on the xserve), the render would complete successfully. Then it clicked that maybe the environment variables I was using in my scripts ($xxx) might not be getting recognised by Qmaster, or by the way Qmaster launches Shake.

    The big tip-off
    I googled for the errors I kept seeing and luckily enough this forum post popped up:
    http://www.highend3d.com/boards/index.php?showtopic=204342

    “I have pinned this down to at least one reason – that the shake qmaster isn’t picking up the NR_INCLUDE_PATH environment variable. Does anyone know where you need to set this up on a cluster node (I can get the qmasterd to pick it up but that doesn’t solve the problem!) “

    If you are trying to use Qmaster, and need to set environment variables, then you need to create a wrapper script that sets the variables and then calls the appropriate version of shake.

    For example, (this was from Apple)

    NR_INCLUDE_PATH=/Network_Applications/Shake Plugins/nreal/include;export NR_INCLUDE_PATH
    NR_ICON_PATH=/Network_Applications/Shake Plugins/nreal/icons;export NR_ICON_PATH

    umask 000

    /Applications/Shake3.50/shake.app/Contents/MacOS/shake $*
    status=$?
    if [ $status -ne 0 ]
    then
    exit $status
    fi

    Then when using Qmaster, you run the application using this
    script (saved for example as /usr/bin/shakeWrapper) which must be
    installed on all nodes in the cluster.

    Regards
    Nell

    Cheers Nell!

    I took Nell’s script, added a few lines to it and stuck it in /usr/bin/

    /usr/bin/shakeWrapper – create a file to launch Shake respecting ENV variables – later alias ‘shake’ to this file

    echo
    echo "Shake 4 running through a wrapper script - /usr/bin/shakeWrapper"
    echo

    umask 000

    /Applications/Shake/shake.app/Contents/MacOS/shake $*
    status=$?
    if [ $status -ne 0 ]
    then
    exit $status
    fi

    I added the first few lines there so when I later made an alias to this script the Shake user would have some idea what is going on when launching Shake via the terminal.

    Setting up the ENV (environment) variables…

    /etc/profile – to declare system-wide Environment variables / aliases
    (alias shake to use a wrapper to make it launch respecting/recognising the env variables)

    # System-wide .profile for sh(1)

    if [ -x /usr/libexec/path_helper ]; then
    eval `/usr/libexec/path_helper -s`
    fi

    if [ "${BASH-no}" != "no" ]; then
    [ -r /etc/bashrc ] && . /etc/bashrc
    fi

    JOBS_LOCAL="/Volumes/otherdrive/jobs";
    export JOBS_LOCAL

    JOBS_SERVER="/Volumes/ourserversharepoint/jobs";
    export JOBS_SERVER

    NR_INCLUDE_PATH="$HOME/nreal/include":"/Network/nreal/include/";
    export NR_INCLUDE_PATH

    alias shake="/usr/bin/shakeWrapper"

    *remember to enter into terminal:
    source /etc/profile

    some Python & SSH remote machine coolness

    And now for a bit of self taught nerdy goodness…

    I just managed to get something kind of cool working.

    We have a cool tool in the pipeline, but in the meantime my hacked together python efforts will have to do! 😉

    A few months ago I created a python script for building ‘job’ directories – for new jobs that came into the building – a script that runs in a shell.

    I then learnt about wxpython and turned the script into a GUI app.

    This was cool to run on my machine, but I needed a way for it to be used by other artists / producers.

    We got an xServe and some disk storage so I put the script onto the xServe. I was able to log in via VNC and run the script to create the job directories that everyone could access. This gave us all a place to put files in a structured manner and not let things get too messed up (thanks to file permissions).

    But the problem was that for me to execute the script, I’d have to VNC or SSH into the server. I didn’t mind doing it, but it was not so useful when I wasn’t around.

    So back to this ‘new job’ script….

    I just worked out how to setup a producer’s mac to be able to “remotely execute the python ‘new job’ script on our xServe” using SSH and python.

    All a producer has to do is type into Terminal:

    newjob

    and it all starts up. The producer is asked the ‘name of the job’ and ‘the number of shots’ and the script goes ahead and builds all the directories.
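
The directory building itself is straightforward; a sketch of the kind of structure the script creates (the names, numbering and `shot_dirs`/`build_job` helpers are illustrative, not the actual script):

```python
import os

def shot_dirs(job_name, num_shots):
    """Compute the relative paths for a job: one numbered directory per
    shot, each with the scripts/ and renders/ subdirectories."""
    paths = []
    for shot in range(1, num_shots + 1):
        for sub in ("scripts", "renders"):
            paths.append(os.path.join(job_name, "%03d" % (shot * 10), sub))
    return paths

def build_job(root, job_name, num_shots):
    """Create the directories under root (e.g. the jobs area on the xServe)."""
    for rel in shot_dirs(job_name, num_shots):
        os.makedirs(os.path.join(root, rel))

print(shot_dirs("myjob", 2))
# e.g. ['myjob/010/scripts', 'myjob/010/renders', ...] on a POSIX system
```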

    How I did it

    Getting into the xServe.
    I realised that I would not be able to do too much to the file structure of the xServe’s file system if I was to simply mount it on a Desktop – due to permissions etc… – and I also didn’t want ‘the user’ to have to mount anything manually. It needed to be brainless. SSH seemed the best way.

    I found out pretty quickly that there wasn’t an easy way to get out of entering the ‘admin password’ for the xServe to remotely execute the script (you have to enter a password when you ssh into another machine).

    I didn’t want to give away our admin password for the xserve. So I read up on SSH and discovered SSH-Key’s.
    A cool way to authenticate your machine to the remote machine without having to enter the ‘login password’ for the remote user. Instead you give the SSH connection a ‘passphrase’. Initially you have to log in to the remote machine with the remote password, but after that it’s all about ‘enter your passphrase for key’.

    To set up the RSA key for your local machine I followed this easy to read tutorial from IBM.

    Basically you need to generate a public key and add that to the list of ‘known computers’ on the remote machine.

    To generate the public key (according to the IBM article) run this command in a shell:

    $ ssh-keygen -t rsa

    and follow the prompts.

    You can choose to not have a passphrase, but that seemed a little too much of a security risk for our setup.
    Next I got a bit confused: the command spat out 2 files into a ‘.ssh’ directory in your home folder:

    id_rsa
    id_rsa.pub

    These are the default names and location if you DON’T enter a name for the key.

    To get the public key (the .pub file) into the list of known machines on the xServe I mounted the xServe’s Admin’s home folder on the local machine.

    I got into the .ssh directory in the home folder of the xserve, and went looking for the file:

    .ssh/authorized_keys

    but couldn’t see it.

    So i created a file called ‘authorized_keys’ using the following command:

    touch authorized_keys

    Then to add the public key I ran a ‘cat’ command to append the public key file to the ‘authorized_keys’ file on the server.

    It worked! I was able to log in to the xServe using the passphrase. Shweet!

    I found this post on an apple forum also very helpful.

    Executing the script

    On the remote machine / xServe is where the python script lives – in a folder called ‘scripts’ in the home folder.
    To run this script in a shell on the xServe I would type:

    python /Path/to/the/script/newjob.py

    So I figured from the local machine the command to execute using ssh would be:

    ssh admin@ourserver.local python /Path/to/the/script/newjob.py

    And that did work.

    But that was wayyyyyy too long for our producer to have to type up.

    So I made a bash alias in the bash_profile:

    alias newjob="ssh admin@ourserver.local python /Path/to/the/script/newjob.py"

    I tried the alias, and it connected, but it didn’t seem the script was doing anything. But it was. I couldn’t see the interactive text prompting me to enter a ‘job name’ or ‘amount of shots’, but it WAS asking for it.

    Why couldn’t I see the text? I was stumped. Why wasn’t the shell showing me the feedback from the xServe?

    I read the man pages of SSH and found out a bunch of options you can pass the ssh command. Somehow I came across this:

    -t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

    As soon as I put that into the command it all worked!

    ssh -t admin@ourserver.local python /Path/to/the/script/newjob.py

    The last thing I did to make this even easier was to create a ‘.command’ file in the OSX Finder, that can just be double clicked and it will open Terminal.app and run the above command.
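
The same remote execution can also be driven from Python with subprocess; a sketch using the command above (the host and script path are this setup’s, shown purely for illustration):

```python
import subprocess

def build_ssh_command(host, remote_command, interactive=True):
    """Assemble the ssh invocation; -t forces pseudo-tty allocation so the
    remote script's interactive prompts reach the local terminal."""
    cmd = ["ssh"]
    if interactive:
        cmd.append("-t")
    cmd += [host, remote_command]
    return cmd

def run_remote(host, remote_command):
    """Run the command on the server and return ssh's exit status."""
    return subprocess.call(build_ssh_command(host, remote_command))

print(build_ssh_command("admin@ourserver.local",
                        "python /Path/to/the/script/newjob.py"))
# ['ssh', '-t', 'admin@ourserver.local', 'python /Path/to/the/script/newjob.py']
```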

    rsync

    life is good with rsync.

    I have been playing around with a python script using rsync to create an easy way for syncing new material from our server to local machines (it can be a pain in the ass to navigate folders looking for the new stuff) – keeping the folder structure etc… – and most importantly not destroying anything.

    I’d never really understood how to use it – thinking it was only for local to remote transfers – but it works on anything. Brilliant!

    RSYNC rocks!
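
A sketch of how a script like mine can assemble the rsync call (the paths are illustrative and the flag choice is mine): -a preserves the folder structure and permissions, --ignore-existing makes sure nothing local gets overwritten, and --dry-run previews a sync before committing to it.

```python
def sync_command(src, dst, dry_run=True):
    """Build an rsync call that preserves the folder structure (-a) and
    never overwrites existing local files (--ignore-existing)."""
    cmd = ["rsync", "-av", "--ignore-existing"]
    if dry_run:
        cmd.append("--dry-run")  # preview what would copy before committing
    cmd += [src, dst]
    return cmd

# Preview new material coming down from the server
# (pass the list to subprocess.call(...) to actually run it):
print(sync_command("ourserver.local:/Volumes/jobs/", "/Volumes/otherdrive/jobs/"))
```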