Thursday, July 7, 2016

Scripting A Simple Pose Reader

I recently learned this setup and just wanted to share a scripted version I had been working on.  For anyone who has never used a pose reader setup, it's a fantastic way to make actions that are driven by specific poses more stable.  When I first learned rigging in school, I was taught to use set driven keyframes to drive an object based on a certain pose, so I included that setup in this blog as a comparison to the simple pose reader setup.  As a side note, the Pirate rig in my latest demo reel actually utilized this "simple pose reader" quite a bit - as I was learning the technique I started scripting it.  I used it to drive things like special deformation joints, corrective blendshapes, auto hips and shoulders, and even accessories on the pirate's belt that move as the user interacts with the leg controls.  It was quite fun to work with such a simple and effective setup.

Anyway, here is a quick gif of an example comparing the two setups: the green sphere's action is driven by a pose through an SDK, and the red sphere's action is driven by a pose through the PSR.


The SDK setup is simple: the joint's rotateZ attribute from 0-90 drives the translateY attribute of the sphere from 0-1.  Very limited without adding more animation.

Here is the simple node graph for the PSR to drive the movement of the sphere.  To summarize, you take the PSR's controlled rotation values and remap them to new values that are sent over to the sphere (in this instance, a single rotation of the joint drives the translateY of the sphere).
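Under the hood, both setups boil down to the same clamped linear remap - the real difference is what supplies the driver value (a raw rotate channel for the SDK, the PSR's stabilized reader rotations here).  A rough pure-Python sketch of that remap (the function name is mine, just for illustration):

```python
def remap(value, in_min=0.0, in_max=90.0, out_min=0.0, out_max=1.0):
    """Clamp the driver to the input range, then linearly remap it to
    the output range - the same math a setRange/remapValue node (or a
    single linear SDK curve) performs."""
    clamped = max(in_min, min(in_max, value))
    t = (clamped - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# a 45 degree bend from the reader puts the sphere halfway up
print(remap(45.0))   # 0.5
# drivers outside the input range are clamped
print(remap(120.0))  # 1.0
```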



And here is the script I was working with...


# Maya imports
import pymel.core as pm
import maya.OpenMaya as om


def simple_pose_reader( root_joint ):
    """
    Function to create a simple pose reader with "bend", "twist", "side" attributes 
    on the selected joint to use to drive other systems
    
    Args:
        root_joint (pynode) : a pynode object representing the main joint used for the pose reading
        
    Returns:
        None
        
    """
    
    def first_or_default( sequence, default=None ):
        """
        Function to return the first item in a list or a default value
        
        Args:
            sequence (list) : a list of items to parse
            default (object) : a default value to return back if nothing was parsed from sequence
            
        Returns:
            The first object within the sequence or a default value
            
        """
        
        for item in sequence:
            return item
            
        return default

        
    def get_bone_draw_axis( joint, default=om.MVector(0,1,0) ):
        """
        Function to determine a joint's draw axis from its first child joint's local offset
        
        Args:
            joint (pynode) : the pynode that represents the joint to determine the draw_axis of
            default (object) : a default value to return back if the draw_axis was not determined
            
        Returns:
            A MVector type that represents the normalized direction of the joint draw axis
            
        """
        
        child = first_or_default( joint.getChildren( type='joint' ) )
        
        if child is None:
            raise ValueError( joint.name() + " does not have any children" )
        
        # Get the local position of the child joint ( localspace is offset from the parent space )
        pos = [ value for value in child.getTranslation( localSpace=True ) ]
        
        # Check which axis is greater than the others, this will determine the draw_axis vector
        # X Axis
        if abs( pos[0] ) > abs( pos[1] ) and abs( pos[0] ) > abs( pos[2] ):
        
            if pos[0] > 0.0:
                return om.MVector( 1, 0, 0 )
                
            return om.MVector( -1, 0, 0 )
            
        # Y Axis
        elif abs( pos[1] ) > abs( pos[0] ) and abs( pos[1] ) > abs( pos[2] ):
        
            if pos[1] > 0.0:
                return om.MVector( 0, 1, 0 )
                
            return om.MVector( 0, -1, 0 )
            
        # Z Axis
        elif abs( pos[2] ) > abs( pos[0] ) and abs( pos[2] ) > abs( pos[1] ):
        
            if pos[2] > 0.0:
                return om.MVector( 0, 0, 1 )
                
            return om.MVector( 0, 0, -1 )
            
        return default

        
    def snap_to_transform( snap_transform, snap_to_transform ):
        """
        Function to snap a transform to another transform based on position and orientation
        
        Args:
            snap_transform (pynode) : the object to snap
            snap_to_transform (pynode) : the object to snap to
            
        Returns:
            None
            
        """
        
        snap_transform.setTranslation( snap_to_transform.getTranslation( worldSpace=True ), 
                                       worldSpace=True )
                                                      
        snap_transform.setRotation( snap_to_transform.getRotation( worldSpace=True ), 
                                    worldSpace=True )

    # ensure pynode
    root_joint = pm.PyNode( root_joint )
    
    setup_dict = { 'root_joint' : root_joint,
                   'child_joint' : first_or_default( root_joint.getChildren( type='joint' ) ),
                   'parent' : root_joint.getParent() }

    # create pose reader attributes on root_joint
    for attr in [ 'Bend', 'Twist', 'Side' ]:
        setup_dict['root_joint'].addAttr( 'psr_' + attr.lower(), 
                                          attributeType='float', 
                                          niceName='PSR ' + attr, 
                                          keyable=True )
        attr_name_list = [ setup_dict['root_joint'].name(), '.psr_', attr.lower() ]
        setup_dict[ 'attr_psr_' + attr.lower() ] = pm.PyNode( ''.join( attr_name_list ) )
    
    # create organizing groups
    setup_dict[ 'psr_main_grp' ] = pm.group( name=setup_dict['root_joint'].name() + '_psrMain_GRP', 
                                             empty=True )
    setup_dict[ 'psr_target_grp' ] = pm.group( name=setup_dict['root_joint'].name() + '_psrTarget_GRP',
                                               empty=True )
    setup_dict[ 'psr_twist_grp' ] = pm.group( name=setup_dict['root_joint'].name() + '_psrTwist_GRP',
                                              empty=True )

    # create locators
    for loc in [ 'psrMain', 'psrMainTarget', 'psrMainUp', 'psrTwist', 'psrTwistTarget', 'psrTwistUp' ]:
        loc_name_list = [ setup_dict['root_joint'].name(), '_', loc, '_LOC' ]
        setup_dict[ loc ] = pm.spaceLocator( name= ''.join( loc_name_list ) )
        setup_dict[ loc ].setParent( setup_dict['psr_main_grp'] )

    # target locators parent under the target group, which is driven by the root_joint
    # the main grp is parented under the root_joint's parent to maintain aiming without
    # taking in extra transforms from the root_joint or its children
    for item in [ 'psrMainTarget', 'psrTwistTarget' ]:
        setup_dict[ item ].setParent( setup_dict[ 'psr_target_grp' ] )
    for item in [ 'psr_target_grp', 'psr_twist_grp' ]:
        setup_dict[ item ].setParent( setup_dict[ 'psr_main_grp' ] )
    for item in [ 'psrTwist', 'psrTwistUp' ]:
        setup_dict[ item ].setParent( setup_dict[ 'psr_twist_grp' ] )
    
    if setup_dict[ 'parent' ]:
        setup_dict[ 'psr_main_grp' ].setParent( setup_dict[ 'parent' ] )
    
    # align main group to the selected root joint
    snap_to_transform( setup_dict[ 'psr_main_grp' ], setup_dict[ 'root_joint' ] )

    draw_axis = get_bone_draw_axis( setup_dict['root_joint'] )
    child_trans = setup_dict[ 'child_joint' ].getAttr( 't' )
    child_offset = om.MVector( child_trans[0], child_trans[1], child_trans[2] )

    # X draw axis
    if draw_axis == om.MVector( 1, 0, 0 ) or draw_axis == om.MVector( -1, 0, 0 ): 
        main_up_offset = om.MVector( child_offset.x, child_offset.x, 0 )
        main_up_vector = om.MVector( 0, -1, 0 )
        twist_target_offset = om.MVector( 0, child_offset.x, 0 )
        setup_dict[ 'twist_driver_rot' ] = [ '.rotateY', '.rotateX', '.rotateZ' ]
    
    # Y draw axis
    elif draw_axis == om.MVector( 0, 1, 0 ) or draw_axis == om.MVector( 0, -1, 0 ): 
        main_up_offset = om.MVector( 0, child_offset.y, child_offset.y )
        main_up_vector = om.MVector( 0, 0, -1 )
        twist_target_offset = om.MVector( 0, child_offset.y, 0 )
        setup_dict[ 'twist_driver_rot' ] = [ '.rotateZ', '.rotateY', '.rotateX' ]
        
    # Z draw axis
    else: 
        main_up_offset = om.MVector( child_offset.z, 0, child_offset.z )
        main_up_vector = om.MVector( -1, 0, 0 )
        twist_target_offset = om.MVector( 0, 0, child_offset.z )
        setup_dict[ 'twist_driver_rot' ] = [ '.rotateX', '.rotateZ', '.rotateY' ]
        
    setup_dict[ 'psrMainTarget' ].setTranslation( child_offset * 0.5, 
                                                  localSpace=True, 
                                                  relative=True )
    for loc in ['psrMain', 'psrTwistUp']:                                                  
        setup_dict[ loc ].setTranslation( child_offset * -1.0, 
                                          localSpace=True, 
                                          relative=True )

    setup_dict[ 'psrTwistTarget' ].setTranslation( twist_target_offset * -1.0, 
                                                   localSpace=True, 
                                                   relative=True )
                                              
    setup_dict[ 'psrMainUp' ].setTranslation( main_up_offset * -1.0, 
                                              localSpace=True, 
                                              relative=True )
                                              
    setup_dict[ 'psrMainAC' ] = pm.aimConstraint( setup_dict[ 'psrMainTarget' ], 
                                                  setup_dict[ 'psrMain' ], 
                                                  maintainOffset=True, 
                                                  aimVector=[ draw_axis.x, draw_axis.y, draw_axis.z ],
                                                  upVector=[ main_up_vector.x, main_up_vector.y, main_up_vector.z ],
                                                  worldUpType='objectrotation',
                                                  worldUpObject=setup_dict[ 'psrMainUp' ].name(),
                                                  weight=1.0 )
                                                  
    setup_dict[ 'psrTwistAC' ] = pm.aimConstraint( setup_dict[ 'psrTwistTarget' ], 
                                                   setup_dict[ 'psrTwist' ], 
                                                   maintainOffset=True, 
                                                   aimVector=[ main_up_vector.x, main_up_vector.y, main_up_vector.z ],
                                                   upVector=[ draw_axis.x * -1, draw_axis.y * -1, draw_axis.z * -1 ],
                                                   worldUpType='objectrotation',
                                                   worldUpObject=setup_dict[ 'psrTwistUp' ].name(),
                                                   weight=1.0 )
                                                          
    setup_dict[ 'psr_target_grp' ].setParent( setup_dict[ 'root_joint' ] )

    # the bend and side rotations of main psr locator drives the twist psr grp
    # this allows the child twist locator (under the twist psr grp) to maintain 
    # an accurate twist rotation only
    pm.connectAttr( setup_dict['psrMain'] + setup_dict['twist_driver_rot'][0], 
                    setup_dict[ 'psr_twist_grp'] + setup_dict['twist_driver_rot'][0], 
                    force=True )
    pm.connectAttr( setup_dict['psrMain'] + setup_dict['twist_driver_rot'][2], 
                    setup_dict[ 'psr_twist_grp'] + setup_dict['twist_driver_rot'][2],
                    force=True )
                            
    # connect final calculations to the custom attributes on the root_joint
    # these can be used for various setups that are driven from the pose reader
    pm.connectAttr( setup_dict['psrMain'] + setup_dict['twist_driver_rot'][0], 
                    setup_dict[ 'attr_psr_bend' ],
                    force=True )
    pm.connectAttr( setup_dict['psrMain'] + setup_dict['twist_driver_rot'][2], 
                    setup_dict[ 'attr_psr_side' ],
                    force=True )
    pm.connectAttr( setup_dict['psrTwist'] + setup_dict['twist_driver_rot'][1], 
                    setup_dict[ 'attr_psr_twist' ],
                    force=True )


selection = pm.ls( selection=True )

if selection:
    simple_pose_reader( selection[0] )

Sunday, July 3, 2016

Maya Picture in Picture Tool (PiP) Development Part 2

I haven't had too much time to devote to getting PiP fully complete in a while - I am at a point now where I would like to put up a beta download and see what kind of feedback I get for improvements.

pip_tool.zip


Just unzip the pip_tool folder to your Maya python path (for example, your documents/maya/scripts/ folder).  The code to launch the tool, from a Python tab in the Maya Script Editor...


import pip_tool.pip as PiP
PiP.jbPiP_UI()


This can be dragged to a shelf for later use.


I recently posted about my exploration with unit tests, and I wanted to also post some of my early work with unit testing for PiP.  I feel like most folks who are early on in their learning path with unit tests would benefit from seeing examples of what kinds of things to test for.  This isn't my most current test library for PiP, but it should get the point across...



"""
Jason Breneman

unittest_pip.py

This file contains the unit test library for the Picture in Picture tool.  Some unit tests
do not pass specifically due to launching Maya through a standalone session.

"""

# maya libraries
import maya.standalone
import maya.cmds as cmds

try:
    maya.standalone.initialize()
    
except:
    pass

# python libraries
import unittest
import os
import sys
import uuid
import shutil
import logging

logging.basicConfig( level=logging.INFO )
logger = logging.getLogger( __name__ )
logger.info( "PiP Unit Test starting...\n" )

MAYA_VERSION = cmds.about( version=True )
MAYA_APP_DIR = os.environ["MAYA_APP_DIR"]
ROOT_DEV_DIR = sys.path[0].replace( "\\", "/" ).replace( "/pip_tool/tests", "" )
ROOT_TOOL_PATH = MAYA_APP_DIR + "/scripts"

logger.info( "'Maya Version' : " + MAYA_VERSION  + "\n" )
logger.info( "'MAYA_APP_DIR' : " + MAYA_APP_DIR + "\n" )


class TestLibrary( unittest.TestCase ):
    """
    Test Library
    
    unit test class for Picture in Picture tool
    
    """

    
    def setUp( self ):
        """
        Method to provide any standard setup instructions for a test case function
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        
        pass
        
        
    def tearDown( self ):
        """
        Method to provide any standard tear down instructions for a test case function
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        
        pass
    
    
    def test_files_installed_check( self ):
        """
        Test method for copying files from the development environment to the 
        MAYA_APP_DIR path
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        
        logger.info( "*** TestCase *** Remove existing install, and apply a fresh install\n" )
        
        if os.path.exists( ROOT_TOOL_PATH + "/pip_tool" ):
            shutil.rmtree( ROOT_TOOL_PATH + "/pip_tool" )

        shutil.copytree( ROOT_DEV_DIR + "/pip_tool", ROOT_TOOL_PATH + "/pip_tool" )
        
        
    def test_import_check( self ):
        """
        Test method to check for a successful module import
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        logger.info( "*** TestCase *** Import pip_tool module\n" )
        
        import_success = False
        error_message = "TestCase Failure, PiP module did not import correctly.\n"
        
        try:
            import pip_tool.pip as PiP
            import_success = True
            
        except:
            pass
            
        self.assertTrue( import_success, error_message )

        
        
    def test_pip_instance( self ):
        """
        Test method to check a successful instantiation of PiP
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        
        logger.info( "*** TestCase *** Load a PiP instance\n" )
        
        import pip_tool.pip as PiP
        reload( PiP )
        loadout = PiP.jbPiP_UI()
        
        loadout_exists = cmds.modelEditor( loadout.name_instance + "__ME", 
                                           query=True, 
                                           exists=True )
        error_message = "TestCase Failure, PiP loadout does not exist.  Maya UI required for successful TestCase\n"
        self.assertTrue( loadout_exists, error_message )
        
        
    def test_pip_callback_newscene( self ):
        """
        Test method to check if the new scene callback gets properly deleted when
        a new scene event occurs
        
        Args:
            self (object) : reference to the TestLibrary class instance
            
        Returns:
            None
            
        """
        
        logger.info( "*** TestCase *** Delete PiP on New Scene Callback\n" )
        
        import pip_tool.pip as PiP
        reload( PiP )
        loadout = PiP.jbPiP_UI()
                
        cmds.file( new=True, force=True )
        
        self.assertEqual( loadout.newscene_callbackid, None )


# if starting from a command prompt, load unit test class instance
if __name__ == "__main__":
    unittest.main()

From these examples, you can see I started by testing the results of early development stages, such as moving files from my development path to the Maya script path, or just a simple import check, UI loading checks, etc.  Kind of a neat thing to note here is that my unit tests are run from the Windows command line, which launches a standalone Maya session (no UI).  The problem with this is that PiP is a UI-based tool, so as I kept developing more tests I realized I could run some tests from the command line, but to run certain tests I would need a normal Maya session.
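One way to handle that split (a sketch, not code from the actual PiP test library) is to gate the UI-dependent cases with unittest's skip decorators, so the same suite can run under a standalone session and a full Maya session - the skipped tests are reported rather than failed:

```python
import unittest

# Hypothetical flag - in a real suite this might come from something
# like cmds.about(batch=True) to detect a standalone session
MAYA_UI_AVAILABLE = False

class TestExample(unittest.TestCase):

    def test_runs_everywhere(self):
        # logic-only tests run in standalone and UI sessions alike
        self.assertEqual(1 + 1, 2)

    @unittest.skipUnless(MAYA_UI_AVAILABLE, "requires a Maya UI session")
    def test_requires_ui(self):
        # UI checks are skipped (not failed) in a standalone session
        self.fail("never reached without a UI session")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # 1
```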

The current beta release seems to have a few graphical glitches on certain computer setups - that and a few other "nice to haves" are the known issues that I am wanting to polish before I consider it complete. Anyway, that's all for now!

Wednesday, June 29, 2016

Discovering the Benefits of Unit Testing and TDD


In the past few years I kept hearing the term unit testing pop up here and there.  I always seemed to brush it aside, figuring I would eventually find time to learn what it is and why I kept hearing so much buzz about it.  Well, I finally found the time when I was developing my picture in picture tool.  And I must say, I am now convinced of how awesome it is.

I won't go super in depth; there are plenty of blogs, documentation, and the like that go over the details of unit testing (python docs link).  I will say that when utilizing unit testing, I feel much more confident in the stability of my code and my ability to track down future bugs.

Unit testing in general can be described as implementing small testing methods that validate small bits of actual code.  It may seem silly to write "test code" for your code.  With what I typically refer to as "linear coding", you end up testing your code yourself while you write it.  So why spend all that extra time on code that isn't used within the tool?  Sanity is my answer.  I've built small tools and large tools, and there are always easy bugs and hard bugs to find within the code.  Sometimes finding a bug is simple, but it's always at least a matter of minutes to fix something, and it would be nice if I could apply the fix, have all my tests pass with said fix, and be confident that the code is now fixed as well as stable.  After spending the time reading up on unit testing, I found that I regretted not researching it sooner.



Along my path of learning about unit testing, I have also created an initiative at work to try and train the rest of the TAs to understand and implement unit testing within their normal tool development process.  Researching unit testing also eventually led me to learning about Test Driven Development (TDD).

The concept of TDD is that you start by writing unit tests that are set up to fail; that failure then prompts you to create the code that makes the test succeed, and as you add new code you refactor the existing code.  I actually really enjoy this methodology, as it is a very helpful guide to creating very stable code.  I have noted a few helpful things from practicing this on my own...

  • Write a lot of useful and relevant unit tests.  I found that breaking my code down into testable chunks helped me to consider all the areas that could lead to possible future bugs.  This actually helped me understand why some pieces of code could be more bug prone.  The more tests, the more detailed the report of the stability of your code.  Along with this, I make sure those test functions are very narrow and focused on a specific testable variable.
  • Find a new bug? Write a new test if you can, and add it to your test suite.  This keeps your suite of unit tests growing and will help keep your code stable as bugs are fixed.
  • Readability and good commenting count, especially in your test suite methods!  The point of these unit test suites is to improve the quality of your code and to assist in future bug fixes.  It's always good to consider that the future bug fixer may not be you, so commenting and properly naming the unit test function is essential.  The entire purpose of running the test suite is to quickly find bugs; if it's difficult to read or understand what the test suite is doing, it may slow down the process of fixing said bug(s).  I tend to like the function naming that most people suggest: test_import_module_validation() instead of something like test_import().  In this instance, try to avoid shorthanding the name of a function.
  • Take advantage of the error message argument within the assert functions.  The purpose of these unit tests is to provide success/failure checks along with information, and the error message is a great opportunity to provide more information to the test runner on how the test failed.  It's great to use unit tests within TDD to guide you in creating solid code, but always remember that the person running the tests in the future may not be the original author, so the more information the better.
  • Not necessarily related to learning/writing unit tests, but if you are learning with the goal of implementing this among a team it's a good idea to document everything.  What it is, how to use it, best practices, pros, cons - the works.  Provide a path for the team to learn the information (direction), encourage them to refer back to the information (documentation) when needed, and also give them a time to learn it (training).  
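As a tiny illustration of the red-green loop described above (the function and test names here are made up purely for the example, and follow the descriptive-naming suggestion from the list):

```python
import unittest

# Red: the test below is written first and fails until clamp() exists
# and behaves correctly.  Green: this minimal implementation makes it pass.
def clamp(value, minimum, maximum):
    return max(minimum, min(maximum, value))

class TestClampValidation(unittest.TestCase):

    def test_clamp_limits_out_of_range_values(self):
        # descriptive error messages tell a future test runner what broke
        error_message = "clamp() failed to limit the value to the given range"
        self.assertEqual(clamp(150, 0, 100), 100, error_message)
        self.assertEqual(clamp(-5, 0, 100), 0, error_message)
        self.assertEqual(clamp(42, 0, 100), 42, error_message)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestClampValidation))
print(result.wasSuccessful())  # True
```

Once this passes, the refactor step can restructure clamp() freely, with the test guarding against regressions.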


Wednesday, March 30, 2016

Maya Picture in Picture Tool (PiP) Development Part 1

It's been a while since I last posted but I wanted to write about a tool I had been planning for a while and finally found some time to start developing.  For now, I would consider this tool in an alpha state and I do plan on having it available for download in the near future once it's fully vetted.


Alpha recording of PiP (Maya 2016)
This is PiP, a tool within Maya to create a viewport within a viewport (picture in picture).  The original idea came from watching a lot of animators at work creating torn-off model panels with a camera set to the game camera's angle to see how their animation was reading visually.  Most of our animators have at least 2 monitors; their main monitor is typically where they animate in their primary viewport, but they also tear off a copy of the model panel with the game camera (along with the graph editor and other tools) onto the secondary monitor.  This may not matter to some people - but I have taken it as a personal goal to try to keep folks centered and focused in the same monitor space as much as possible - which led to the idea "can you nest a model panel into the existing model panel?"  That leads to other crazy ideas such as "can graph splines be represented in the viewport?" and "can we view silhouette and textures and animate at the same time?"

PiP allows the artist to view multiple cameras at once within the context of their main work and thought process - rather than diverting their gaze to a second monitor to see how things look, and later returning back to work.  You can make as many PiP window instances as you desire, resize them to your liking, and even play back animations in all view ports at once to really get a good idea of how things are working together.

Throughout the initial brainstorming I started focusing on other practices for multiple model panels and cameras, such as a face camera or an Osipa-style slider control camera for a face rig - seeing the multi-disciplinary uses for PiP really made me think twice about how useful it could be if I ever found time to develop it!

Well, I found some time.  It really isn't a large tool - I have parented UI objects to the viewport in the past; it was more a matter of figuring out the intricacies of doing it with model editor widgets.  I wanted the interface to be super simple for anyone to use, and I also wanted to make sure older versions of Maya could use the tool.  I am using maya.cmds instead of pymel.core for a lot of the work just to help with speed; although it's not really a "procedurally heavy" tool, the speed differences are something I am a lot more aware of after the past few years of working with pymel.  Any version of Maya older than 2014 will utilize PyQt for its UI setup, and newer versions will utilize PySide.  This is some of the code I used to configure which UI library to use based on the Maya version.



# Maya libraries
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI
import maya.cmds as cmds

# Find module file path used for icon relative path finding
module_file_path = __file__.replace( "\\", "/" ).rpartition( "/" )[0]
mayaMainWindowPtr = omUI.MQtUtil.mainWindow()
MAYA_VERSION = cmds.about( version=True ).replace( " x64", "" )
MAYA_UI_LIBRARY = "PyQt"

# PyQt
if int( MAYA_VERSION ) < 2014:
    from PyQt4 import QtGui
    from PyQt4 import QtCore
    import sip
    wrapInstance = sip.wrapinstance
    
# PySide
else:
    from PySide import QtCore
    from PySide import QtGui
    from shiboken import wrapInstance
    MAYA_UI_LIBRARY = "PySide"
    QString = str
    
mayaMainWindow = wrapInstance( long( mayaMainWindowPtr ), QtGui.QWidget )


And here is a snippet of code used to make a new model editor nested in the main model editor.  Nothing super crazy; the main idea here is using Maya's UI API to get the main viewport and wrap it to a QWidget.  Then, using Maya's cmds engine, create a new modelEditor, wrap it also to a QWidget - then parent them together.

This process was a little different for the older versions of Maya that use PyQt - the Maya viewport UI is constructed a little differently, which forced me to make a window with a layout containing the new modelEditor and then parent the window into the main viewport QWidget.

        # cache the main viewport widget
        self.main_m3dView = omUI.M3dView()
        omUI.M3dView.getM3dViewFromModelPanel( self.defaultModelPanel, self.main_m3dView )
        viewWidget = wrapInstance( long( self.main_m3dView.widget() ), QtGui.QWidget )
        
        # Create modelEditor
        editor = cmds.modelEditor( self.nameInstance + "__ME" )
        cmds.modelEditor( editor, 
                          edit=True, 
                          camera=self.defaultCameraStart,
                          interactive=False,
                          displayAppearance='smoothShaded',
                          displayTextures=True,
                          headsUpDisplay=False,
                          shadows=True )

        # parent the modelEditor to the viewport
        ptr_me = omUI.MQtUtil.findControl( editor )
        wrap_me = wrapInstance( long( ptr_me ), QtGui.QWidget )
        wrap_me.setParent( viewWidget )
        self.window = wrap_me
        self.window.move( self.startingPos[0], self.startingPos[1] )
        self.window.setFixedSize( QtCore.QSize( self.windowSize[0], self.windowSize[1] ) )


Well, that's mostly all I wanted to cover for now.  It's been fun trying to think of cool ways to keep the users focused in the view port rather than spreading their gaze over more desktop space.  If someone else was looking for something similar hopefully this post has helped.  My part 2 will hopefully include a download link!

Saturday, November 7, 2015

Maya Python: Get the Hierarchy Root Joint

  I am taking a break from the Rigging System Case Study series; Part 3 may take some time to write out everything.  I've recently begun exploring a personal project that required me to take a look at rewriting some really simple rigging utility functions.  I decided to post a few, and here is the first...


import maya.cmds as cmds


def _getHierarchyRootJoint( joint="" ):
    """
    Function to find the top parent joint node from the given 
    'joint' maya node

    Args:
        joint (string) : name of the maya joint to traverse from

    Returns:
        A string name of the top parent joint traversed from 'joint'

    Example:
        topParentJoint = _getHierarchyRootJoint( joint="LShoulder" )

    """
    
    # Search through the rootJoint's top most joint parent node
    rootJoint = joint

    while True:
        parent = cmds.listRelatives( rootJoint,
                                     parent=True,
                                     type='joint' )
        if not parent:
            break

        rootJoint = parent[0]

    return rootJoint 
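The traversal itself is Maya-agnostic: swap cmds.listRelatives for any parent lookup and the same walk applies.  Here is a Maya-free sketch of the loop with a hypothetical hierarchy (the joint names are illustrative only):

```python
def get_root(node, parent_map):
    """Walk parent links until a node has no parent, mirroring
    the while loop in _getHierarchyRootJoint."""
    while parent_map.get(node) is not None:
        node = parent_map[node]
    return node

# Hypothetical joint hierarchy, stored as child -> parent
hierarchy = {
    "LHand": "LElbow",
    "LElbow": "LShoulder",
    "LShoulder": "Spine",
    "Spine": "Root",
    "Root": None,
}

print(get_root("LHand", hierarchy))  # Root
```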
  

  I've used this particular function as part of the traversal from mesh -> skinCluster -> influence -> top parent joint.  I've used it mostly for exporters, animation tools, and rigging purposes - building an export skeleton layer on a game rig.

  For the purposes of my personal project, these utility functions need to be as fast as possible.  I like to stay away from plugin dependencies for Maya tools where possible, so I am working with the Maya commands engine for my utility code - it's not as Pythonic as PyMel, but it is faster, and that trade-off is worth weighing if you are worried about the speed of a tool.  For instance, the rigging system that I've been blogging about was written with PyMel, whereas most of the animation tools I've worked on use the Maya commands engine.  With my timing decorator on, I am averaging about 0.0075-0.008s for this function traversing roughly 250 joints up the chain.

  Speaking of the timing decorator, here is the one I created to track and debug my utility code.  I would suggest using logging instead of print; print is slower and would skew the timing data you are trying to analyze.


from functools import wraps
import time
import logging
import maya.utils 
 
# Create a debug logger within Maya - a few Maya versions block the basic logger
logger = logging.getLogger( "MyDebugLogger" )
logger.propagate = False
handler = maya.utils.MayaGuiLogHandler()
handler.setLevel( logging.INFO )
formatter = logging.Formatter( "%(message)s" )
handler.setFormatter( formatter )
logger.addHandler( handler )
 
def timeDecorator( f ):
    """
    Decorator function to apply a timing process to a function given

    Args:
        f (object) : Python function passed through the decorator tag

    Returns:
        return the value from the function wrapped with the decorator
        function process

    Examples:
        @timeDecorator
        def myFunc( arg1, arg2 ):

    """

    @wraps(f)
    def wrapped( *args, **kwargs ):
        """ 
        Wrapping the timing calculation around the function call 
        
        Returns:
            Result of the called wrapped function
            
        """
        

        
        # log the process time 
        t0 = time.clock()
        r = f( *args, **kwargs )
        logger.warning( "{funcName} processing took : {processTime}".format( funcName=f.__name__, processTime= + time.clock() - t0 ) )
        
        return r

    return wrapped
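Outside Maya, the same decorator pattern can be exercised with a stock logging setup.  One caveat worth noting: time.clock was deprecated in Python 3.3 and removed in 3.8, so this Maya-free sketch uses time.perf_counter instead (the names here are illustrative, not the production code):

```python
from functools import wraps
import time
import logging

logging.basicConfig()
log = logging.getLogger("TimingDemo")

def time_decorator(f):
    """Log how long the wrapped function took to run."""
    @wraps(f)
    def wrapped(*args, **kwargs):
        t0 = time.perf_counter()
        result = f(*args, **kwargs)
        log.warning("%s processing took : %fs", f.__name__, time.perf_counter() - t0)
        return result
    return wrapped

@time_decorator
def count_joints(n):
    # Stand-in workload for a joint-chain traversal
    return len(["joint%d" % i for i in range(n)])

print(count_joints(250))  # 250
```

Because of functools.wraps, the wrapped function keeps its original __name__, which is what makes the log line readable.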

Wednesday, November 4, 2015

Case Study: Building a Rigging system Part 2

The rigging tool kit v2.0 ui and using the Armature system to place and orient joints from a Biped Template

  I mentioned at the end of Part 1 that accuracy of joint placement with "Volume Guides" and empowerment of the rigging team were the areas that needed improvement.  In this entry, I will walk through version 2.0 of the rigging system, the improvements made, and how those improvements impacted the artists.  And again, I will try to explain the holes I could see and my thought process in fixing them for the next release.


Big Takeaways


  User experience is EXTREMELY important - even though all the tools and functionality exist, if they are in a confusing layout then the user isn't able to work at full capacity because they are fighting with a bad experience.  Thinking about UX (user experience) in everything from UI layout down to how a user edits a custom meta data node eventually led to the most current release, which I will cover in a future post.


Empowering the Artist

  • Improving Templates:
  The new release for the rigging system would do away with the "Volume Guide" step and would start using only "Templates" - which are Maya files that have a skeleton and meta data attached to them that Artists have saved out for future use through the "Template Library" feature. 
 This decision freed the artists from relying on new anatomies from the "Volume Guide" created by the TA team and allowed them to draw their own Skeletons and save them as needed.  "Templates" have ranged from full skeletons like a Biped or Quadruped down to single components like wings, cape, arms, etc.  Seeing how the artists have branched out the "Template Library" in this way has reassured me that giving them the ability to do this was definitely the correct decision.

  • Exposing and Expanding Rig Meta Data:
The v2.0 Meta Data Editor.  It is very painful to look at nowadays :(
  The "Volume Guides" in v1.0 already had some representation of meta data.  They were custom attributes added to transforms that were stored in the scene, the attributes instructed the rig builder process on how to construct the rig.  Mixing different anatomies would result in different rig setups based on the hidden meta data. 
  In v2.0 the decision was made to expose the editing of these nodes to artists and expand the use of meta data to rigging "Modules".   Thinking of the rigging system as rig modules rather than specific anatomy types was a HUGE step in the foundation of the rig builder.  The meta data was still an empty transform with extra attributes. For example the original FK Module meta data node had these attributes....
ModuleType (string) - This would store the name of the module type (i.e. "FK") 
Joints (string) - This stored a list of string names for the joints 
MetaScale (float) - This value was used to set an initial build scale for controls
MetaParent (string) - This would store the name of the DAG parent for this module to parent to.
Side (string) - This string value would determine the side of the component, left/right/center 
  To further customize a rigging "Module", the artist could create a sub-module rigging component named a rigging "Attribute".  These rigging "Attributes" would apply a modification to a rig "Module".  Examples are things like SingleParent (Module follows some transform with translations and rotations), DynamicParent (Module can dynamically switch which transform it follows), etc.
  A Meta Data Editor was also added to the rigging system, which allowed the artist to create or edit meta data nodes more easily than working in Maya's Attribute Editor.  The build process could figure out what to do and how to do it based on the meta data information.  The build process was: (1) build the Module and run the Module's Python code on post-build, then (2) loop through the Module's Attributes, running each one's Python code on post-build.
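To make the layout concrete, here is a hypothetical, Maya-free sketch of an FK Module's meta data as a plain dictionary, plus the two-pass build loop described above.  In the real system these values were custom attributes on an empty transform; every name here is an illustrative stand-in:

```python
# Hypothetical stand-in for a rig Module's meta data node
fk_module = {
    "ModuleType": "FK",
    "Joints": ["LShoulder", "LElbow", "LWrist"],
    "MetaScale": 1.0,
    "MetaParent": "Spine_Module",
    "Side": "left",
    "Attributes": ["SingleParent"],  # sub-module rigging "Attributes"
}

def build_module(meta):
    """Sketch of the two-pass build: the Module first, then a loop
    over its rigging Attributes for their post-build steps."""
    steps = ["build:%s" % meta["ModuleType"]]
    for attribute in meta["Attributes"]:
        steps.append("postBuild:%s" % attribute)
    return steps

print(build_module(fk_module))  # ['build:FK', 'postBuild:SingleParent']
```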

  • Custom Python Code Customization:
    Each Meta Data node also had a custom string attribute that would hold Python code.  That code would execute after the build process for that specific module - which allowed a lot of flexibility for the artist.  The Meta Data Editor also had a custom Python code editor - which at this time was just a simple PyQt QLineEdit.
  This was a big deal for the extensiveness of the system but it also motivated our artists to learn more scripting - which has been a tremendous win for the overall rigging and tech art departments.  A motivating reason to learn! :)
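Under the hood, a "code on a string attribute" feature boils down to storing source text and exec-ing it after the module builds.  A deliberately tiny, hypothetical sketch of that pattern (in the real system the string lived on a custom attribute):

```python
# Hypothetical post-build code, as the artist might type it into the editor
post_build_code = """
module['controls_built'] = True
"""

def run_post_build(meta):
    """Execute the stored code with the module's meta data exposed
    to it under the name 'module'."""
    namespace = {"module": meta}
    exec(post_build_code, namespace)
    return meta

meta = {"ModuleType": "FK"}
run_post_build(meta)
print(meta["controls_built"])  # True
```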

  • Seeking a Better Control:
  The original release of the rigging tools used a very traditional design for rig controls - NURBS curves.  NURBS curves were easily customizable, but it was not as easy for the builder to restore those edits on rig delete/build.
  This led to an exploration of a custom C++ Maya node (MPxLocator class) that used custom attributes to dictate what shape is drawn.
  The custom control node allowed the artist to edit the "look" of the rig, and it created an easy way to save the control settings when the rig is deleted - so that when the rig is recreated the last control settings are restored.  The build process would temporarily save the settings to custom attributes on the joints, then restore those settings when the rig builds - and later delete those temporary attributes.


The available attributes for the custom Maya Locator node, also the
Beauty Pass tool which allowed copy/mirror control settings for faster work flow
  Since the custom control was drawn with the OpenGL library, we were able to manipulate things like shape type, thickness of lines, fill shading of a shape, opacity of lines and fills, and clearing the depth buffer (drawing on top), among many other cool features. 
* Thinking back on using a custom plugin for controls, I think I would look more into wrapping a custom pymel node from NURBS and trying to use custom attributes to save out the data for each CV similar to how I saved the custom attributes for the plugin control.  I would lose the coolness of controlling the OpenGL drawing onto the viewport, but would gain a lot of flexibility on the shape library and the overall maintenance of the plugin with Maya updates.  
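The save/restore cycle for control settings can be sketched Maya-free.  In the real build process the stash lived as temporary custom attributes on the joints, but a dictionary shows the same round trip (all names here are hypothetical):

```python
# Temporary stash standing in for custom attributes on the joints
stash = {}

def delete_rig(controls):
    """Stash each control's display settings, then delete the rig."""
    for name, settings in controls.items():
        stash[name] = dict(settings)
    controls.clear()

def build_rig():
    """Rebuild controls, restoring stashed settings, then clean up
    the temporary stash (as the builder deleted its temp attributes)."""
    controls = {name: dict(settings) for name, settings in stash.items()}
    stash.clear()
    return controls

rig = {"LShoulder_ctrl": {"shape": "circle", "lineWidth": 2, "opacity": 0.8}}
delete_rig(rig)
rebuilt = build_rig()
print(rebuilt["LShoulder_ctrl"]["shape"])  # circle
```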


Speed with Accuracy
The Armature system, the spheres are color coded based on the primary joint axis.
The bones are colored with the secondary and tertiary axis.

  • Interactive and Non-Destructive Rigging with Armature:
  This update focused a lot on empowering the artist to control the system; with the removal of the "Volume Guide" system, we needed a similar work process that would assist the artist in positioning and orienting joints.  We introduced the Armature system, a temporary rig that allowed the artist to position and orient joints with precision and speed.  
  I won't go into details of the rig system for Armature, but the high-level description is that it would build a temporary rig based on the connected meta data, the artist would manipulate the rig into position with a familiar "puppet control system", then remove the Armature and have the updated skeleton.  This skeleton update would have NO detrimental effects on existing skinClusters - which was a HUGE win for the artists, as they would make small joint placement iterations while they were skinning the character.  
 Using a rig to manipulate joints made a lot of sense to our artists, and as a tool the rig could toggle on certain features like symmetry movement, which would mirror adjustments across the body.  The artist also had a toggle for hierarchy movement, which controlled whether children follow the parent. 
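At its core, the symmetry toggle mirrors a position delta across the character's mirror plane before applying it to the opposite side.  A minimal, hypothetical sketch (assuming world X is the mirror axis; the joint names and numbers are made up for illustration):

```python
def mirror_adjustment(delta):
    """Mirror an (x, y, z) translation delta across the YZ plane."""
    x, y, z = delta
    return (-x, y, z)

def apply_symmetric(positions, joint, delta, mirror_of):
    """Move 'joint' by delta, and its mirror joint by the mirrored delta."""
    for target, d in ((joint, delta), (mirror_of[joint], mirror_adjustment(delta))):
        px, py, pz = positions[target]
        dx, dy, dz = d
        positions[target] = (px + dx, py + dy, pz + dz)
    return positions

pose = {"LShoulder": (10.0, 140.0, 0.0), "RShoulder": (-10.0, 140.0, 0.0)}
mirror_of = {"LShoulder": "RShoulder"}
apply_symmetric(pose, "LShoulder", (1.0, 0.5, 0.0), mirror_of)
print(pose["RShoulder"])  # (-11.0, 140.5, 0.0)
```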


Thoughts

  Throughout the development of the v2.0 update I was already formulating plans for the v3.0 update.  Version 2.0 was huge for laying the ground work for how I personally thought of rig construction - and even how I approach teaching it to students or more novice co-workers.

   Thinking of rigging on a component or module level instead of a specific anatomy type gave me a perspective of feature needs rather than general anatomy needs.  Don't get me wrong, anatomy is still a high priority when figuring out the way something moves, but thinking of the UX for the animator or the rigger can have a huge impact on how you build a rigging system.


A post-mortem study of v2.0's ui layout readability, and then a really early mock up of the ui that was eventually used for v3.0's ui.

  At the time of v2.0 the buzz word for our Tech Art department was UX - that is probably the big takeaway from v2.0.  I took that as my main driving force for the most current update (v3.0).  At the time of this release I was still learning best practices for UX - a lot of time was spent through the iterations of v1.0 and v2.0 shadowing artists, doing walk-through tutorials, and just chatting about what is a good workflow and what the theoretical "perfect workflow" would be.  Here are some of the things that popped up, which I will cover in v3.0:

  • The Meta Data editor required too many steps (This still relied on the user using the Attribute Editor, Connection Editor, etc)
  • A string based meta data attribute is easy to mess up (I discovered message attributes as a key solution to this issue)
  • It's hard to acclimate folks who are used to rigging their own way (This can be helped a bit by providing structure with flexibility)
  • There were too many set instructions in the rig builder - not enough flexibility.  Even with a full Python post-build script available, artists wanted more nodes to work with rather than scripting it.
  • Layout of the UI needed optimization, more templates visible, add template search filters, reworking specific tools.
  • Debugging a module was difficult for the artist - this required a lot of shadow time for me to find out how the artist was working and thinking, but it also provided very valuable information and solutions that we would implement in v3.0.
  • The deeper we went into our own territory of "what is the method of rigging" with our own tool set, the more important high-level terms, tutorials, and documentation became.  This became a big hurdle - we had to make sure we trained people on rigging theory instead of just the default process of rigging in Maya.  We managed to lessen this hurdle by really pushing the UX of the tool in v3.0.