Wednesday, June 29, 2016

Discovering the Benefits of Unit Testing and TDD


In the past few years I kept hearing the term unit testing pop up here and there.  I always seemed to brush it aside, figuring I would eventually find time to learn what it is and why there was so much buzz around it.  Well, I finally found the time while developing my picture in picture tool.  And I must say, I am now convinced of how awesome it is.

I won't go super in depth; there are plenty of blogs, documentation and the like that cover the details of unit testing (python docs link).  I will say that when I utilize unit testing, I feel much more confident in the stability of my code and in my ability to track down future bugs.

Unit testing, in general, can be described as writing small test methods that each validate a small piece of your actual code.  It may seem silly to write "test code" for your code.  With what I typically refer to as "linear coding", you end up testing your code yourself while you write it, so why spend all that extra time on code that isn't used within the tool?  Sanity, is my answer.  I've built small tools and large tools, and there are always bugs that are easy to find and bugs that are hard to find.  Sometimes finding a bug is simple, but it's always at least a matter of minutes to fix something, and it would be nice if I could apply the fix, have all my tests pass with said fix, and be confident that the code is now fixed as well as stable.  After spending the time reading up on unit testing, I found that I regretted not researching it sooner.
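
As a rough illustration of the idea, here is a minimal sketch using Python's built-in unittest module.  The clamp() function and the test names are made up for this example; they aren't from any of my actual tools.

import unittest


def clamp( value, minimum, maximum ):
    """ Small example function to test: keep 'value' within a range """
    return max( minimum, min( value, maximum ) )


class TestClamp( unittest.TestCase ):
    """ Each test method validates one small, specific behavior """

    def test_clamp_value_below_minimum_returns_minimum( self ):
        self.assertEqual( clamp( -5, 0, 10 ), 0 )

    def test_clamp_value_above_maximum_returns_maximum( self ):
        self.assertEqual( clamp( 50, 0, 10 ), 10 )

    def test_clamp_value_inside_range_is_unchanged( self ):
        self.assertEqual( clamp( 5, 0, 10 ), 5 )


if __name__ == "__main__":
    unittest.main()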



Along my path of learning about unit testing, I have also created an initiative at work to train the rest of the TAs to understand and implement unit testing within their normal tool development process.  Researching unit testing eventually led me to learning about Test Driven Development (TDD).

The concept of TDD is that you start by writing unit tests that are set up to fail, that failure then prompts you to write the code that makes the test succeed, and as you add new code you refactor the existing code.  I really enjoy this methodology, as it is a very helpful guide to creating stable code.  Here are a few helpful things I have noted while practicing it on my own...

  • Write a lot of useful and relevant unit tests.  I found that breaking my code down into testable chunks helped me consider all the areas that could lead to future bugs, and it helped me understand why some pieces of code are more bug prone than others.  The more tests you have, the more detailed the report of your code's stability.  Along with this, be sure those test functions are narrow and focused on a single, specific testable behavior.
  • Find a new bug?  Write a new test for it if you can, and add it to your test suite.  This keeps your suite of unit tests growing and up to date, and it helps keep your code stable as bugs are fixed.
  • Readability and good commenting count, especially in your test suite methods!  The point of these test suites is to improve the quality of your code and to assist in future bug fixes, and it's always good to consider that the future bug fixer may not be you, so commenting and properly naming each unit test function is essential.  The entire purpose of running the test suite is to quickly find bugs; if it's difficult to read or understand what a test is doing, it will slow down the process of fixing said bug(s).  I tend to like the function naming that most people suggest: test_import_module_validation() instead of something like test_import().  In other words, try to avoid shorthanding the name of a test function (see the sketch after this list).
  • Take advantage of the error message argument within the assert functions.  The purpose of these unit tests is to provide success/failure checks along with information, and the error message is a great opportunity to tell whoever runs the tests exactly how a test failed.  It's great to use unit tests within TDD to guide you in creating solid code, but always remember that the person running the tests in the future may not be the original author, so the more information the better (again, see the sketch after this list).
  • Not necessarily related to learning/writing unit tests, but if you are learning with the goal of implementing this among a team it's a good idea to document everything.  What it is, how to use it, best practices, pros, cons - the works.  Provide a path for the team to learn the information (direction), encourage them to refer back to the information (documentation) when needed, and also give them a time to learn it (training).  
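
To illustrate the naming and error message points above, here is a small sketch.  The module name (io_utils) and the test itself are hypothetical stand-ins, not code from my actual tools.

import unittest
import importlib


class TestImportModuleValidation( unittest.TestCase ):
    """ Descriptive test names plus assert messages make failures easy to read """

    def test_import_module_validation( self ):
        # 'io_utils' is a hypothetical module name used for this example
        module_name = "io_utils"
        try:
            module = importlib.import_module( module_name )
        except ImportError:
            module = None

        # The 'msg' argument tells whoever runs the test exactly what went wrong
        self.assertIsNotNone( module,
                              msg="Could not import '{0}'; check that it is on the PYTHONPATH".format( module_name ) )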


Wednesday, March 30, 2016

Maya Picture in Picture Tool (PiP) Development Part 1

It's been a while since I last posted but I wanted to write about a tool I had been planning for a while and finally found some time to start developing.  For now, I would consider this tool in an alpha state and I do plan on having it available for download in the near future once it's fully vetted.


Alpha recording of PiP (Maya 2016)
This is PiP, a tool within Maya that creates a viewport within a viewport (picture in picture) setup.  The original idea came from watching a lot of animators at work creating torn-off model panels with a camera set to the game camera's angle to see how their animation was reading visually.  Most of our animators have at least 2 monitors; their main monitor is typically where they animate in their primary viewport, but they also tear off a copy of the model panel with the game camera (along with the graph editor and other tools) and throw it onto the secondary monitor.  This may not bother some people, but I have taken it as a personal goal to keep folks centered and focused in the same monitor space as much as possible, which led to the idea "can you nest a model panel into the existing model panel?"  That leads to other crazy ideas such as "can graph splines be represented in the viewport?" and "can we view silhouette and textures and animate at the same time?"

PiP allows the artist to view multiple cameras at once within the context of their main work and thought process - rather than diverting their gaze to a second monitor to see how things look and then returning to work.  You can make as many PiP window instances as you desire, resize them to your liking, and even play back animations in all viewports at once to really get a good idea of how things are working together.

Throughout the initial brainstorming I also started thinking about other common uses for multiple model panels and cameras, such as a face camera or an Osipa-style slider control camera for a face rig.  Seeing the multi-disciplinary uses for PiP really made me realize how useful it could be if I ever found the time to develop it!

Well, I found some time.  It really isn't a large tool - I have parented UI objects to the viewport in the past, so it was more a matter of figuring out the intricacies of doing it with model editor widgets.  I wanted the interface to be super simple for anyone to use, and I also wanted to make sure older versions of Maya could use the tool.  I am using maya.cmds instead of pymel.core for a lot of the work just to help with speed; although it's not really a "procedurally heavy" tool, the speed differences are something I have become a lot more aware of over the past few years of working with PyMEL.  Any version of Maya older than 2014 will utilize PyQt for its UI setup, and newer versions will utilize PySide.  This is some of the code I used to configure which UI library to use based on the Maya version.



# Maya libraries
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI
import maya.cmds as cmds

# Find module file path used for icon relative path finding
module_file_path = __file__.replace( "\\", "/" ).rpartition( "/" )[0]
mayaMainWindowPtr = omUI.MQtUtil.mainWindow()
MAYA_VERSION = cmds.about( version=True ).replace( " x64", "" )
MAYA_UI_LIBRARY = "PyQt"

# PyQt
if int( MAYA_VERSION ) < 2014:
    from PyQt4 import QtGui
    from PyQt4 import QtCore
    import sip
    wrapInstance = sip.wrapinstance
    
# PySide
else:
    from PySide import QtCore
    from PySide import QtGui
    from shiboken import wrapInstance
    MAYA_UI_LIBRARY = "PySide"
    QString = str
    
mayaMainWindow = wrapInstance( long( mayaMainWindowPtr ), QtGui.QWidget )


And here is a snippet of the code used to nest a new model editor inside the main model editor.  Nothing super crazy; the main idea here is using Maya's UI API to get the main viewport and wrap it to a QWidget, then using Maya's cmds engine to create a new modelEditor, wrap it to a QWidget as well, and parent the two together.

This process was a little different for the older versions of Maya that used PyQt - the Maya viewport UI is constructed a little differently, which forced me to make a window with a layout containing the new modelEditor and then parent that window into the main viewport QWidget (a rough sketch of that variation follows the snippet below).

        # cache the main viewport widget
        self.main_m3dView = omUI.M3dView()
        omUI.M3dView.getM3dViewFromModelPanel( self.defaultModelPanel, self.main_m3dView )
        viewWidget = wrapInstance( long( self.main_m3dView.widget() ), QtGui.QWidget )
        
        # Create modelEditor
        editor = cmds.modelEditor( self.nameInstance + "__ME" )
        cmds.modelEditor( editor, 
                          edit=True, 
                          camera=self.defaultCameraStart,
                          interactive=False,
                          displayAppearance='smoothShaded',
                          displayTextures=True,
                          headsUpDisplay=False,
                          shadows=True )

        # parent the modelEditor to the viewport
        ptr_me = omUI.MQtUtil.findControl( editor )
        wrap_me = wrapInstance( long( ptr_me ), QtGui.QWidget )
        wrap_me.setParent( viewWidget )
        self.window = wrap_me
        self.window.move( self.startingPos[0], self.startingPos[1] )
        self.window.setFixedSize( QtCore.QSize( self.windowSize[0], self.windowSize[1] ) )
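
For reference, here is a rough sketch of the PyQt-era variation described above - a window with a layout holding the new modelEditor, which is then parented into the main viewport widget.  This is a reconstruction assuming the same self attributes as the snippet above, not the exact code from the tool.

        # cache the main viewport widget (same as above)
        self.main_m3dView = omUI.M3dView()
        omUI.M3dView.getM3dViewFromModelPanel( self.defaultModelPanel, self.main_m3dView )
        viewWidget = wrapInstance( long( self.main_m3dView.widget() ), QtGui.QWidget )

        # PyQt versions: build a window with a layout that holds the new modelEditor
        window = cmds.window( self.nameInstance + "__WIN", titleBar=False )
        layout = cmds.paneLayout( parent=window )
        editor = cmds.modelEditor( self.nameInstance + "__ME", parent=layout )
        cmds.modelEditor( editor,
                          edit=True,
                          camera=self.defaultCameraStart,
                          displayAppearance='smoothShaded',
                          headsUpDisplay=False )
        cmds.showWindow( window )

        # wrap the window and re-parent it under the main viewport widget
        ptr_win = omUI.MQtUtil.findWindow( window )
        wrap_win = wrapInstance( long( ptr_win ), QtGui.QWidget )
        wrap_win.setParent( viewWidget )
        self.window = wrap_win
        self.window.move( self.startingPos[0], self.startingPos[1] )
        self.window.setFixedSize( QtCore.QSize( self.windowSize[0], self.windowSize[1] ) )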


Well, that's mostly all I wanted to cover for now.  It's been fun trying to think of cool ways to keep users focused in the viewport rather than spreading their gaze over more desktop space.  If someone else is looking for something similar, hopefully this post has helped.  Part 2 will hopefully include a download link!

Saturday, November 7, 2015

Maya Python: Get the Hierarchy Root Joint

  I am taking a break from the Rigging System Case Study series; Part 3 may take some time to write out everything.  I've recently begun exploring a personal project that required me to take a look at rewriting some really simple rigging utility functions.  I decided to post a few, and here is the first...


import maya.cmds as cmds


def _getHierarchyRootJoint( joint="" ):
    """
    Function to find the top parent joint node from the given 
    'joint' maya node

    Args:
        joint (string) : name of the maya joint to traverse from

    Returns:
        A string name of the top parent joint traversed from 'joint'

    Example:
        topParentJoint = _getHierarchyRootJoint( joint="LShoulder" )

    """
    
    # Search through the rootJoint's top most joint parent node
    rootJoint = joint

    while True:
        parent = cmds.listRelatives( rootJoint,
                                     parent=True,
                                     type='joint' )
        if not parent:
            break

        rootJoint = parent[0]

    return rootJoint 
  

  I've used this particular function as part of the traversal from mesh->skinCluster->influence->top parent joint.  I've used it mostly for exporters, animation tools and rigging purposes - such as building an export skeleton layer on a game rig.
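
As a rough sketch of that mesh->skinCluster->influence->root traversal (my actual exporter code differs, and the mesh name in the usage comment is just an example):

import maya.cmds as cmds


def _getRootJointFromMesh( mesh="" ):
    """ Find the root joint of the skeleton a skinned mesh is bound to """

    # Find the skinCluster deforming the mesh through its construction history
    skinClusters = cmds.ls( cmds.listHistory( mesh ), type='skinCluster' )
    if not skinClusters:
        return None

    # Grab the influences (joints) bound to the skinCluster
    influences = cmds.skinCluster( skinClusters[0], query=True, influence=True )
    if not influences:
        return None

    # Walk up from any influence to the top parent joint
    return _getHierarchyRootJoint( joint=influences[0] )

# Example usage, where "body_geo" is a hypothetical skinned mesh:
# rootJoint = _getRootJointFromMesh( mesh="body_geo" )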

  For the purposes of my personal project, the utility functions need to be as fast as possible.  I like to stay away from plugin dependencies for Maya tools where possible, so I am working with the Maya commands engine for my Maya utility code - it's not as Pythonic as PyMEL, but it is faster and worth considering if you are worried about the speed of the tool.  It's a trade-off to weigh against the needs of the tool.  For instance, the Rigging System that I've been blogging about was written with PyMEL, whereas most of the animation tools I've worked on have used the Maya commands engine.  With my timing decorator on, I am averaging about 0.0075-0.008s for this function traversing about 250 joints up the chain.

  Speaking of the timing decorator, here is the one I created to track/debug my utility code.  I would suggest using logging instead of print; print is slower and would skew the timing data you are trying to analyze.


from functools import wraps
import time
import logging
import maya.utils 
 
# Create a dedicated debug logger - some Maya versions block the basic/root logger
logger = logging.getLogger( "MyDebugLogger" )
logger.propagate = False
handler = maya.utils.MayaGuiLogHandler()
handler.setLevel( logging.INFO )
formatter = logging.Formatter( "%(message)s" )
handler.setFormatter( formatter )
logger.addHandler( handler )
 
def timeDecorator( f ):
    """
    Decorator function to apply a timing process to a function given

    Args:
        f (object) : Python function passed through the decorator tag

    Returns:
        return the value from the function wrapped with the decorator
        function process

    Examples:
        @timeDecorator
        def myFunc( arg1, arg2 ):

    """

    @wraps(f)
    def wrapped( *args, **kwargs ):
        """ 
        Wrapping the timing calculation around the function call 
        
        Returns:
            Result of the called wrapped function
            
        """
        

        
        # log the process time 
        t0 = time.clock()
        r = f( *args, **kwargs )
        logger.warning( "{funcName} processing took : {processTime}".format( funcName=f.__name__, processTime= + time.clock() - t0 ) )
        
        return r

    return wrapped
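
For completeness, here is how the decorator could be applied to the root joint function from earlier - just a usage sketch, with a hypothetical joint name in the example call:

# Wrap the utility function so each call logs how long it took
timedRootJoint = timeDecorator( _getHierarchyRootJoint )

# Example usage; the logger output looks something like:
#     _getHierarchyRootJoint processing took : 0.0078
rootJoint = timedRootJoint( joint="LShoulder" )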

Wednesday, November 4, 2015

Case Study: Building a Rigging system Part 2

The rigging tool kit v2.0 ui and using the Armature system to place and orient joints from a Biped Template

  As I mentioned at the end of Part 1, accuracy of joint placement with the "Volume Guides" and empowerment of the Rigging team were the areas that needed improvement.  In this entry, I will walk through version 2.0 of the rigging system, the improvements made, and how those improvements impacted the artists.  And again, I will try to explain the holes I could see and my thought process in fixing them for the next release.


Big Takeaways


  User experience is EXTREMELY important - even if all the tools and functionality exist, a confusing layout keeps the user from working at full capacity because they are fighting with a bad experience.  Thinking about UX (user experience) on everything from UI layout down to how a user edits a custom meta data node eventually led to the most current release, which I will cover in a future post.


Empowering the Artist

  • Improving Templates:
  The new release of the rigging system would do away with the "Volume Guide" step and would start using only "Templates" - Maya files with a skeleton and meta data attached that Artists have saved out for future use through the "Template Library" feature.
  This decision freed the artists from relying on new anatomies from a "Volume Guide" created by the TA team and allowed them to draw their own Skeletons and save them as needed.  "Templates" have ranged from full skeletons like a Biped or Quadruped down to single components like wings, capes, arms, etc.  Seeing how the artists have branched out the "Template Library" in this way has reassured me that giving them this ability was definitely the correct decision.

  • Exposing and Expanding Rig Meta Data:
The v2.0 Meta Data Editor.  It is very painful to look at now-a-days :(
  The "Volume Guides" in v1.0 already had some representation of meta data.  They were custom attributes added to transforms that were stored in the scene, the attributes instructed the rig builder process on how to construct the rig.  Mixing different anatomies would result in different rig setups based on the hidden meta data. 
  In v2.0 the decision was made to expose the editing of these nodes to artists and expand the use of meta data to rigging "Modules".   Thinking of the rigging system as rig modules rather than specific anatomy types was a HUGE step in the foundation of the rig builder.  The meta data was still an empty transform with extra attributes. For example the original FK Module meta data node had these attributes....
ModuleType (string) - This would store the name of the module type (i.e. "FK") 
Joints (string) - This stored a list of string names for the joints 
MetaScale (float) - This value was used to set an intial build scale for controls
MetaParent (string) - This would store the name of the DAG parent for this module to parent to.
Side (string) - This string value would determine the side of the component, left/right/center 
  To further customize a rigging "Module", the artist could create a sub-module rigging component called a rigging "Attribute".  These rigging "Attributes" would apply a modification to a rig "Module".  Examples include SingleParent (the Module follows some transform in translation and rotation), DynamicParent (the Module can dynamically switch which transform it follows), etc.
  A Meta Data Editor was also added to the rigging system, which allowed the Artist to create or edit meta data nodes more easily than working in Maya's Attribute Editor.  The build process could figure out what to do and how to do it based on the meta data: (1) build the Module and run the Module's Python code on post build, then (2) loop through the Module's Attributes, running each one's Python code on its post build.

  • Custom Python Code Customization:
    Each Meta Data node also had a custom string attribute that would hold Python code.  That code would execute after the build process for that specific module, which gave the artist a lot of flexibility.  The Meta Data Editor also had a custom Python code editor - which at this time was just a simple PyQt QLineEdit.
  This was a big deal for the extensiveness of the system but it also motivated our artists to learn more scripting - which has been a tremendous win for the overall rigging and tech art departments.  A motivating reason to learn! :)

  • Seeking a Better Control:
  The original release of the rigging tools used a very traditional design for rig controls - NURBS curves.  NURBS were easily customizable, but it was not as easy for the builder to restore those edits on rig delete/build.
  This led to an exploration of a custom C++ Maya node (MPxLocator class) that used custom attributes to dictate what shape is drawn.
  The custom control node allowed the artist to edit the "look" of the rig and it created an easy way to save the control settings when the rig is deleted - so that when it's recreated it will restore the last control settings. The build process would temporarily save the settings to custom attributes on the joints, then restore those settings when the rig builds - and later delete those temporary attributes.


The available attributes for the custom Maya Locator node, also the
Beauty Pass tool which allowed copy/mirror control settings for faster work flow
  Since the custom control was using the OpenGL library we were able to manipulate things like shape type, thickness of lines, fill shading of a shape, opacity of lines and shades, clear depth buffer (draw on top) among many other cool features. 
* Thinking back on using a custom plugin for controls, I think I would look more into wrapping a custom pymel node from NURBS and trying to use custom attributes to save out the data for each CV similar to how I saved the custom attributes for the plugin control.  I would lose the coolness of controlling the OpenGL drawing onto the viewport, but would gain a lot of flexibility on the shape library and the overall maintenance of the plugin with Maya updates.  
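
To make the meta data description above more concrete, here is a rough sketch of what building an FK Module meta data node could look like with maya.cmds.  It is a reconstruction based on the attribute list earlier in this section (the node and joint names are hypothetical), not the toolkit's actual code.

import maya.cmds as cmds


def createFKModuleMetaNode( name="LArm_FK_Meta" ):
    """ Sketch: build an empty transform tagged with FK Module meta data attributes """

    metaNode = cmds.createNode( "transform", name=name )

    # String attributes describing the module
    for attr in ( "ModuleType", "Joints", "MetaParent", "Side" ):
        cmds.addAttr( metaNode, longName=attr, dataType="string" )

    # Float attribute for the initial control build scale
    cmds.addAttr( metaNode, longName="MetaScale", attributeType="float", defaultValue=1.0 )

    # Example values - the joint and parent names here are hypothetical
    cmds.setAttr( metaNode + ".ModuleType", "FK", type="string" )
    cmds.setAttr( metaNode + ".Joints", "LShoulder,LElbow,LWrist", type="string" )
    cmds.setAttr( metaNode + ".MetaParent", "Spine3", type="string" )
    cmds.setAttr( metaNode + ".Side", "left", type="string" )
    cmds.setAttr( metaNode + ".MetaScale", 2.5 )

    return metaNode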


Speed with Accuracy
The Armature system, the spheres are color coded based on the primary joint axis.
The bones are colored with the secondary and tertiary axis.

  • Interactive and Non-Destructive Rigging with Armature:
  Much of this update was about empowering the artist to control the system; with the removal of the "Volume Guide" system we needed a comparable workflow to assist the artist in positioning and orienting joints.  We introduced the Armature system, a temporary rig that allowed the artist to position and orient joints with precision and speed.
  I won't go into the details of the Armature rig system, but the high-level description is that it would build a temporary rig based on the connected meta data; the artist would manipulate the rig into position with a familiar "puppet control system", then remove the Armature and be left with the updated Skeleton.  This skeleton update had NO detrimental effects on existing skinClusters - a HUGE win for the artists, since they iterate on small joint placement changes while skinning the character.
 Using a rig to manipulate joints made a lot of sense to our artists, and the rig could toggle features like symmetry movement, which would mirror adjustments across the body.  The Artist also had a toggle for hierarchy movement, which controlled whether children followed the parent or not.


Thoughts

  Throughout the development of the v2.0 update I was already formulating plans for the v3.0 update.  Version 2.0 was huge for laying the groundwork for how I personally think of rig construction - and even how I approach teaching it to students or more novice co-workers.

   Thinking of rigging at a component or module level instead of by specific anatomy type gave me a perspective of feature needs rather than general anatomy needs.  Don't get me wrong, the anatomy is still a high priority when figuring out the way something moves; what I am saying is that thinking about the UX for the animator or the rigger can have a huge impact on how you build a rigging system.


A post-mortem study of v2.0's ui layout readability, and then a really early mock up of the ui that was eventually used for v3.0's ui.

  At the time of v2.0 the buzz word for our Tech Art department was UX - that is probably the big takeaway from v2.0, and I took it as my main driving force for the most current update (v3.0).  At the time of this release I was still learning best practices for UX - a lot of time was spent throughout the iterations between v1.0 and v3.0 shadowing artists, doing walkthrough tutorials, and just chatting about what makes a good workflow and what the theoretical "perfect workflow" would be.  Here are some of the things that popped up, which I will cover in the v3.0 post:

  • The Meta Data editor required too many steps (This still relied on the user using the Attribute Editor, Connection Editor, etc)
  • A string based meta data attribute is easy to mess up (I discovered message attributes as a key solution to this issue)
  • It's hard to acclimate folks who are used to rigging their own way (This can be helped a bit by providing structure with flexibility)
  • There were too many fixed instructions for the rig builder - not enough flexibility.  Even with a full Python post-build script available, artists wanted more nodes to work with rather than scripting it.
  • Layout of the UI needed optimization, more templates visible, add template search filters, reworking specific tools.
  • Debugging a module was difficult for the artist - this required a lot of shadowing time for me to find out how the artist was working and thinking, but it also provided very valuable information and solutions that we would implement in v3.0.
  • The more we went into our own territory of "what is the method of rigging" with our own tool set, the more important high-level terms, tutorials and documentation became.  This became a big hurdle - we had to make sure we trained people in rigging theory instead of just the default process of rigging in Maya.  We managed to lessen this hurdle by really pushing the UX of the tool in v3.0.


Tuesday, June 9, 2015

Case Study: Building a Rigging system Part 1

  I have decided to do a few blog series posts based on a few case studies that have popped up in my experiences as a tech artist.  Not necessarily in any particular order but I will start with the task of building a rigging system...


Big Takeaways:  


  Empower the Rigging team to OWN the tool kit as their own; it's not the 100 controls they personally named that make them a Rigger, but the overall artistic design and functionality of the creature they just designed for kinematic movement.  Accuracy of joint placement is so important that it must be a higher priority than speed in any system where you are laying out a skeleton.



Starting Out


  Early on, my first task at Turbine was to work on our rigging tools.  This was great for me - my background in tech art began with learning rigging, and I really wanted to bring the pipeline up to a modern standard.  The final product went through three major "versions", but it all started with trying to wrap a bunch of helper functions into a build procedure.  This was so early that we were still exploring exactly what the animators' preferences would be for basic feature styles, orientations, etc.  We knew we (the tech art team) were making a shift from MEL to Python, and we were starting to explore PyMEL - so I started coding that build procedure in PyMEL.

  It was fortunate that most of the characters at the time were bipeds, since the build process only handled our basic biped.  I realize most auto-riggers start out with a biped design and some branch out to quadrupeds... I knew my vision for our tool kit was a system that would let the artist dictate the rig rather than the tool kit itself - but it took some time to imagine how best to enable that control.


Inspiration, Incubation and Implementation


  The skeleton layout tools for the first release of the rigging tool kit were heavily influenced by a SIGGRAPH paper from years ago by ILM on their Block Party system.  A lot of auto-riggers use locators to set up a layout for a skeleton.  Although the locator technique gives a lot of accuracy, I didn't feel it was fast enough.  I adopted the Block Party term "Volume Guide" and loosely applied it to my proxy-resolution biped mesh guide.  The rigger would move the body parts to fit within the mesh, the skeleton would be built to replace the guide, and then they could proceed with the rig build procedure.

  The biped guide was styled like an art maquette, the rigger would use translate/rotate/scale to manipulate the proxy mesh into place.  The shapes of the proxy geo were actually shape parented under their respective joints, this gave the user the speed of rotation mirroring across the body during the layout process because they were actually manipulating joints and not polygonal objects. Later on this iteration of the tool kit grew to support more "Volume Guides", horse, lion, bird and spider were added.  The ability to "kit-bash" body parts was also added.  For example combining the upper body of a human with the base of the horse allowed you to create a centaur,  or adding wings onto a human would give you an angel,  another example is a werewolf - top half human, bottom half lion/cat guide.


The basic Human Biped Guide
"Kit Bashing" together a Human and Horse Guide
  















  The actual building tools, and how the Guides "knew how they were to be rigged", were inspired by a PowerPoint presentation by Bungie at GDC from several years ago; you can view the PowerPoint here.  The main thing I learned from the Bungie talk was to add mark-up data (meta data) onto the Maya objects, which the building procedure could later find and use to determine what to do (a small sketch of that idea is below).
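
Here is a rough sketch of that mark-up idea - tag a node with an extra attribute, then let the build procedure find every tagged node in the scene.  The attribute name is a hypothetical example, not the toolkit's actual mark-up.

import maya.cmds as cmds


def tagGuideNode( node, moduleType ):
    """ Tag a guide node with mark up data the builder can find later """
    if not cmds.attributeQuery( "rigModuleType", node=node, exists=True ):
        cmds.addAttr( node, longName="rigModuleType", dataType="string" )
    cmds.setAttr( node + ".rigModuleType", moduleType, type="string" )


def findTaggedGuideNodes():
    """ Gather every tagged guide node in the scene for the build procedure """
    tagged = []
    for node in cmds.ls( type="transform" ):
        if cmds.attributeQuery( "rigModuleType", node=node, exists=True ):
            tagged.append( ( node, cmds.getAttr( node + ".rigModuleType" ) ) )
    return tagged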

  This was the start of the "modularity" of what the system would become; it allowed artists to load in parts of other guides to get multiple fingers, arms, spines, legs, tails, wings, etc. and plug them in with a simple press of the P key (Maya parent).  This allowed a lot of quick development for creature rigs.  The speed of using the Guide rig to create a skeleton layout that could be rigged with a click of a button was great - but I started to notice it was somewhat difficult to get our Rigging team to adapt to a "volumetric" way of building a skeleton, so there needed to be some changes to improve and adapt to how we as a team wanted to work.


Volume Guide - The term used for a master anatomical creature Maya file that I built to serve as a building block for a character.  The Guide could be "Kit-Bashed" together with other Guides to create something unique.


Template - A Guide that the Rigger has edited and saved for re-use.  This could be as simple as an A-pose Human Biped or as complex as a Centaur with 4 Human arms, 2 Human Heads, Bird Wings and a Lion's Tail.


  Below is the Version 1.0 UI layout.  It was awkward and very "tabby", but it served its purpose of listing the available Guides and Templates.  Along the way other tools were added on more tabs: a Skinning tab, Control tools, etc.  The actual meta data editing tools were rather limited at this point to just string and float based UI widgets.  At the time, meta data was only thought of as simple hooks to help the Python rig build code find the right objects - later on it would become the core foundation of thought for the Rigger in designing their vision for the character's movement.

Rigging Tool Kit Version 1.0


Looking Back


  Overall, this first release served best as a way to get our rigging team thinking less about "how do I make this IK setup?" and more about "how does the anatomy of this creature function?".  This was a really key growth point, I think, for most of our Riggers, as we were hand-building most rigs for each project.  We already saw speed gains, going from originally 5 days for a rigged character down to less than a day.

  There was a ton of personal growth in becoming a better Tech Artist; I learned a lot about supporting a team and good coding practices, and it was a good way to build my understanding of games - Turbine was my first game job.  The expectations for game rigs are different from what I had run into in film.

  Version 1.0 did have some downfalls that ultimately led to developing version 2.0.  The pseudo geometry/joint based manipulation of the layout was fast but not accurate enough - it needed some way to be more precise without extra effort from the rigging artist.  The other main issue was that the guides were created by me, which meant that if a new anatomy was needed I had to have the forethought to make the volume guide.  These two issues - accuracy sacrificed for speed, and rigging team empowerment - were enough reason to spend more time on the rigging system and build something that would really become the core of the tool kit in version 2.0 and eventually version 3.0.



Wednesday, June 3, 2015

No more Crisis :(

  Yesterday, WB/Turbine announced the closing of Infinite Crisis, the MOBA game that I had the opportunity to work on for 2.5 years.  It has been sad to see the game you work on for so long come to an end, but it has been encouraging to reminisce on how fun it was to work on a super hero game with such a creative team.  IC was the project that allowed the Tech Art team to revolutionize Turbine's Art pipeline.  It was a blast to make great art work with such great people.





Monday, June 1, 2015

The P4 Handshake

  I recently ran into an issue at work where we had to make sure Perforce and Maya were communicating when a new file was exported to the tree.  This was only an issue on newer projects due to a new pipeline with a new engine choice.  We were already using the P4 Python library in house, but we just needed to make sure the right functions were called, users were logged in, etc.  I just wanted to post a few findings because I felt the documentation wasn't particularly clear for what I needed.

Example Scenario:
  On export of a new asset to the P4 tree, I wanted to add the new file to the P4 Depot.  If a user has already logged into P4, then a ticket exists (it lasts for 12 hours by default in P4).  If the user has not logged into Perforce, then prompt them for the info and create a ticket using the Perforce Python API run_login() call.

Now the important piece of info that the documentation was not clear about: you can call "add" or "edit" from P4Python as long as a ticket exists and hasn't expired for the matching user/port/client info.  The documentation seems to imply you can pass the ticket hash id as an argument into run_login(), but according to Perforce support it is worded incorrectly.  So we set out to check if the user had an existing non-expired ticket by using this call...


import os
from P4 import P4 as p4api

mP4 = p4api()

# Connection settings (server, user, workspace) are gathered elsewhere in the pipeline code
mP4.port = sP4Server
mP4.user = sP4User
mP4.client = sP4Workspace
mP4.host = os.environ["COMPUTERNAME"].upper()

bConnection = mP4.connect()

try:
    mP4.fetch_change()

    mP4.disconnect()
except:
    # prompt user for password, because fetch_change failed
    pass



The try/except around fetch_change() gave me a quick way to check whether the user's current info matched an active, existing ticket without actually doing any work in P4 (such as edit/add).  If fetch_change() failed, I proceeded with some user prompting to supply the correct information.
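
That user prompting step could look roughly like this - a sketch of the fallback path, where promptForPassword() is a made-up stand-in for whatever dialog you use to collect the password:

def loginWithPrompt( mP4 ):
    """ Sketch: ask the user for their Perforce password and create a new ticket """

    # promptForPassword() is a hypothetical stand-in for your own UI prompt
    mP4.password = promptForPassword( mP4.user )

    mP4.connect()
    try:
        # run_login() creates the ticket for the current user/port/client
        mP4.run_login()
    finally:
        mP4.disconnect()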

If it was successful, I would later just call the edit - or in this case the "add" - Perforce Python API call...


mP4.run( "add", sPathToFile )




  These are just a few snippets of the fix we had to implement to make sure we were correctly adding to changelists and interacting with P4.  Maybe not knowing you can just call edit/add if you have a ticket isn't an issue for other TAs, but it was one of those things that really bothered me when I dug into it, and I felt the need to post about it.