Sunday, December 3, 2017

Compiling Maya 2018 plugins with CMake on Mac OSX

In addition to this post, I have just spent a couple of hours trying to get my plugins to compile in Maya 2018 and finally succeeded, so I thought I'd share the process here in case someone else has the same issues getting it to work.


I tested this on macOS High Sierra, so I can't guarantee it works with other versions too (yes, I did set a root password ☝)

First of all, the Maya developer kit is now a separate download and doesn't ship with Maya anymore.

Head over to the Autodesk website and download it. From the archive, extract the three folders /devkit, /include and /mkspecs into the Maya folder.


Your folder structure should then look like this:

  • /Applications/Autodesk/maya2018/devkit
  • /Applications/Autodesk/maya2018/mkspecs
  • /Applications/Autodesk/maya2018/include
  • /Applications/Autodesk/maya2018/Maya.app

Now you need to get the latest FindMaya.cmake from Chad Vernon's GitHub and make sure your project points to it.

In your project's CMakeLists.txt, now make sure to set the project to Maya 2018 by defining this:
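
With Chad Vernon's FindMaya.cmake this is done through a cache variable, so it should look something like this:

  set(MAYA_VERSION 2018 CACHE STRING "Maya version")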



If you compile now you might get the following warning from CMake:

-- Configuring done
CMake Warning (dev):
  Policy CMP0042 is not set: MACOSX_RPATH is enabled by default.  Run "cmake
  --help-policy CMP0042" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  MACOSX_RPATH is not specified for the following targets:


It's just a warning, but in order to disable it you might want to also set this in your CMakeLists.txt:
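
One way to do that (a sketch; cmake_policy(SET CMP0042 NEW) should work as well) is:

  set(CMAKE_MACOSX_RPATH ON)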


Now when you compile you will get spammed with a huge number of error messages. The main issue here is that CMake doesn't tell the Maya headers which platform you are compiling for.

The way you would do it in C++ is usually with a preprocessor directive like so:

#define OSMac_

In CMake you can do this with add_definitions() and the -D argument. It looks slightly confusing, but the full command you have to add to your CMakeLists.txt is this:
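
  add_definitions(-DOSMac_)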


In my case, the full project's CMakeLists.txt now looks like the following:
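
A minimal sketch of such a file, assuming FindMaya.cmake sits in a modules folder next to it (the plugin name and source file are placeholders, and the MAYA_PLUGIN helper comes from Chad Vernon's FindMaya.cmake):

  cmake_minimum_required(VERSION 2.8)
  project(myPlugin)

  # point CMake at the folder containing FindMaya.cmake
  set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/modules")
  set(MAYA_VERSION 2018 CACHE STRING "Maya version")
  set(CMAKE_MACOSX_RPATH ON)

  find_package(Maya REQUIRED)

  # tell the Maya headers we are building for Mac
  add_definitions(-DOSMac_)
  include_directories(${MAYA_INCLUDE_DIR})
  link_directories(${MAYA_LIBRARY_DIR})

  add_library(myPlugin SHARED pluginMain.cpp)
  target_link_libraries(myPlugin ${MAYA_LIBRARIES})
  MAYA_PLUGIN(myPlugin)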


That's it, you should now be able to compile. Hope this was useful for you.


Friday, December 1, 2017

Skin Sliding Deformer

I always wanted to do a skin sliding deformer and it turned out to be quite simple. I am sure there are many ways of making it better and more advanced, but here I basically just do a closestPoint on the original shape with an offset coming from a controller object!
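
Here is a minimal sketch of that idea in Python (the deformer itself is written in C++, and these function and variable names are just made up for illustration):

  import maya.api.OpenMaya as om

  # points: list of om.MPoint, rest_mesh_fn: om.MFnMesh of the original shape,
  # offset: om.MVector driven by the controller object
  def slide_points(points, rest_mesh_fn, offset):
      result = []
      for p in points:
          moved = p + offset  # push the point by the controller offset
          # snap it back onto the original shape
          closest, _ = rest_mesh_fn.getClosestPoint(moved, om.MSpace.kWorld)
          result.append(closest)
      return result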

As you might be aware, instead of posting a lot of text I'd rather just show a quick video, so here you go 😀

Written in C++

Friday, November 17, 2017

Collision Deformer small update


I went back to work a bit on my collision deformer again to see if I can work out how to preserve the volume of the mesh a little bit. It's quite fun playing around with this and I learned some new things!

Wednesday, November 8, 2017

Simple sub-surface scattering in Maya


I think PyMel is definitely the way the Maya Python API should have been designed from the start, but it also takes the user away from the architecture Maya is built on. I love using PyMel for various reasons, but in order to become a better tech artist I am trying to force myself to use OpenMaya more often.


I used to work with a tech artist who really contributed a huge amount to me wanting to learn scripting, and he made a script in 3ds Max to bake mesh thickness down to vertex colors. I thought this could be a fun little challenge, and here is what I came up with.


The idea is to do raycasts from each vertex in random directions (the amount is specified by the sample count) and to get the distance at which each ray hits the mesh again. Then normalise that based on the average distance and remap it to the vertex colors.
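
A rough sketch of that idea (not the original script; the function name and the remapping are assumptions, and maya.api.OpenMaya is used for the raycasts):

  import random
  import maya.api.OpenMaya as om

  def bake_thickness(mesh_name, samples=256):
      sel = om.MSelectionList()
      sel.add(mesh_name)
      fn = om.MFnMesh(sel.getDagPath(0))
      points = fn.getPoints(om.MSpace.kWorld)

      distances = []
      for i, p in enumerate(points):
          normal = fn.getVertexNormal(i, True, om.MSpace.kWorld)
          total = 0.0
          for _ in range(samples):
              # pick a random direction and flip it so it points into the mesh
              d = om.MVector(random.uniform(-1, 1),
                             random.uniform(-1, 1),
                             random.uniform(-1, 1)).normal()
              if d * normal > 0:
                  d = -d
              # nudge the ray origin inside the surface to avoid hitting ourselves
              start = p + normal * -0.001
              hit = fn.closestIntersection(om.MFloatPoint(start.x, start.y, start.z),
                                           om.MFloatVector(d.x, d.y, d.z),
                                           om.MSpace.kWorld, 99999.0, False)
              if hit is not None:
                  total += hit[1]  # hitRayParam equals the distance for a unit direction
          distances.append(total / samples)

      # normalise against the average distance and remap to vertex colors
      average = sum(distances) / len(distances)
      colors = [om.MColor([min(dist / (2.0 * average), 1.0)] * 3) for dist in distances]
      fn.setVertexColors(colors, list(range(len(points))))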







Here you can see the raw output with different sample counts. Depending on the number of samples and vertices this can take some time. As you can see, good results usually start showing at 256+ samples, unless you are going for an N64 lava level type of look 😃





After calculating the vertex colors I am also doing a Gaussian blur pass, which makes it look quite nice.
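
One simple way to blur vertex colors (a sketch, assuming blurring here means averaging each vertex with its topological neighbours every iteration) looks like this:

  import maya.api.OpenMaya as om

  def smooth_vertex_colors(mesh_name, iterations=2):
      sel = om.MSelectionList()
      sel.add(mesh_name)
      dag = sel.getDagPath(0)
      fn = om.MFnMesh(dag)
      colors = list(fn.getVertexColors())

      # collect the neighbour vertex ids once
      neighbours = []
      it = om.MItMeshVertex(dag)
      while not it.isDone():
          neighbours.append(list(it.getConnectedVertices()))
          it.next()

      for _ in range(iterations):
          new_colors = []
          for i, nbrs in enumerate(neighbours):
              r, g, b = colors[i].r, colors[i].g, colors[i].b
              for j in nbrs:
                  r += colors[j].r
                  g += colors[j].g
                  b += colors[j].b
              n = len(nbrs) + 1
              new_colors.append(om.MColor([r / n, g / n, b / n]))
          colors = new_colors

      fn.setVertexColors(colors, list(range(len(colors))))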

Here is another screenshot with the raw data on the left, followed by a 1x2x2, 2x2x2, 3x2x2 and a 4x2x2 Gaussian blur. This was calculated with 128 samples.



Here are the results on a character, the original on the left and the calculated result with a 2x2x2 blur in different colors.


When projected onto a texture this can also be used as a pretty decent start for an SSS map. Here it's cycling through no SSS, light SSS and exaggerated SSS for demonstration purposes.

See below for the code

Tuesday, November 7, 2017

Maya 2018: Viewport 2.0 Normalmap display issues

I have been trying to get a normal map to display properly in Maya 2018's viewport, but I kept getting visible seams in Viewport 2.0 despite it looking perfectly fine in 3ds Max, Marmoset etc. In previous Maya versions it displayed fine in the legacy High Quality viewport, but that is unfortunately no longer an option since all the legacy viewports have been removed in 2018.


It turns out it is a very simple fix. Maya tries to manage the color space of all textures and defaults them to sRGB. There is an option in the texture's file node that lets you select the Raw color space. In order for this option to stay you also have to check the checkbox "Ignore Color Space File Rules" just below it. Hope it helps you too!
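
If you'd rather do this in script, something like this should work (a sketch; it assumes your texture node is called file1):

  import maya.cmds as cmds

  # stop Maya from re-applying the color space rules, then force Raw
  cmds.setAttr("file1.ignoreColorSpaceFileRules", True)
  cmds.setAttr("file1.colorSpace", "Raw", type="string")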
 

Monday, October 30, 2017

Blood & Truth Announcement trailer


In February 2017 I joined Sony's London Studio to work on Blood & Truth. I am happy to finally be able to show something with this first announcement trailer. It's been a lot of fun and a huge learning curve for me personally, and I am working with some of the best guys in the industry on this.

Hope you enjoy it!

Friday, October 20, 2017

Embedding videos in Qt Maya tools

I have been meaning to post this for a while, because figuring this out took me quite some time and maybe someone out there will find it useful.

Qt has a media library called Phonon, but it seems like Maya doesn't ship with the necessary backend DLL for some reason (maybe something to do with licensing?). If you ever tried this you will basically get a black-screen video player.


This missing DLL file contains all the necessary functionality to decode media files, so if you would like to embed audio or video in your Qt tools you have to place the precompiled phonon_ds9d4.dll into <mayadir>\qt-plugins\phonon_backend.
(source:  https://ilmvfx.wordpress.com/2014/08/30/how-to-make-phonon-work-in-maya-qt-and-pyqt4-compiling-for-maya2014-for-windows/)


Just make sure you have changed the path to your media file at the bottom of this script and that you have the necessary codec installed on your machine. You can read more about what is possible in the official PySide documentation.

See below for sample code of a simple videoplayer...
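
The core of it, roughly (a sketch using PyQt4 to match the linked post; the media path is a placeholder, and inside Maya you would parent the widget instead of creating a QApplication):

  from PyQt4 import QtGui
  from PyQt4.phonon import Phonon

  app = QtGui.QApplication([])

  # VideoPlayer bundles the media object, audio output and video widget in one
  player = Phonon.VideoPlayer(Phonon.VideoCategory)
  player.play(Phonon.MediaSource("/path/to/your/video.avi"))
  player.resize(640, 360)
  player.show()

  app.exec_()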

Tuesday, October 17, 2017

Facial tracker: Head stabilisation

I went back to do some more work on the face tracker and added head stabilisation. It takes the average velocity of all markers and subtracts this value from each marker on every frame. This way it can counteract any camera movement in the footage.
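
A minimal sketch of that idea (with a made-up data layout: one list of (x, y) positions per marker, one entry per frame):

  def stabilise(tracks):
      num_frames = len(tracks[0])
      for f in range(1, num_frames):
          # average velocity of all markers from the previous frame to this one
          vx = sum(t[f][0] - t[f - 1][0] for t in tracks) / len(tracks)
          vy = sum(t[f][1] - t[f - 1][1] for t in tracks) / len(tracks)
          # subtracting it from every marker cancels the shared (camera) motion
          for t in tracks:
              t[f] = (t[f][0] - vx, t[f][1] - vy)
      return tracks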

Thinking about it now, I think it might be better to allow the user to specify the head stabilisation markers manually... to be investigated! Anyway, here is a quick video showing it in action.


Wednesday, July 26, 2017

Neural nets continued

I took a step back from trying to do overly complex things too quickly and focused on getting my neural net to do basic OR and AND gates, and managed to get good results. This has taught me a lot about how it all works.

You might already know this, but it took me quite a while to understand what a simple neural net is actually doing. A great analogy that helped me understand it is water valves.

Imagine each neuron is a little glass sphere that can store 1 litre of water. Now imagine hoses (connections) that connect all the glass spheres together. On each hose that gets connected into a glass sphere you have a tap at the end (the connection weight) which allows you to regulate how much water gets through to the next glass sphere.

Before we begin we set all taps on all hoses to some random value (uniformly distributed; in Python you can use random.uniform() for this).

Now we start pouring some water into our inputs. If our inputs are 1 and 0, this means we will push 1 litre of water through the first glass sphere and nothing into the second. The water will keep getting pushed through the hidden layers all the way to the output. This is called forward feeding and will initially produce a random output (remember? All our valves have been set totally randomly).

In order to start learning, we now check the amount of water we got out at the end and see how far off we have been. We can now use this information to push the water from the output backwards, all the way to the first valves, and adjust them very slightly to the left if we had too much water or to the right if we had too little. The reason we adjust them very slightly is because we don't know yet how much this might affect other inputs. This process is called back propagation.

Now we simply keep repeating this process a few thousand times and hopefully we will then have all the valves set to the right position so whenever we input water it should output the right amount.
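
To make the analogy concrete, here is a tiny single-neuron version of it learning an OR gate (a minimal sketch, not the code from the video):

  import math
  import random

  weights = [random.uniform(-1, 1) for _ in range(3)]  # two input taps plus a bias tap
  data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # the OR gate

  def forward(inputs):
      total = weights[0] * inputs[0] + weights[1] * inputs[1] + weights[2]
      return 1.0 / (1.0 + math.exp(-total))  # squash the "water level" into 0..1

  for _ in range(10000):
      inputs, target = random.choice(data)
      out = forward(inputs)
      # how far off we were, scaled by the slope of the squashing function
      delta = (out - target) * out * (1.0 - out)
      # turn each tap very slightly against the error
      weights[0] -= 0.5 * delta * inputs[0]
      weights[1] -= 0.5 * delta * inputs[1]
      weights[2] -= 0.5 * delta

  for inputs, target in data:
      print("%s -> %.3f (target %d)" % (inputs, forward(inputs), target))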

Here is a video demonstrating the current state. I added a little curve at the bottom that shows the total error, which is really useful when something doesn't work. Towards the end you can also see that it is infinitely scalable (not that you would need that amount of neurons for such a simple case, but it's great to see it still producing the same results). Hopefully I'll do something more useful with this soon (I am sure there are tons of applications for 3D graphics), but I'm really happy with my progress on this.


Monday, July 10, 2017

How to train your trainer


Machine Learning is all the hype these days! Ever since I first heard about it I have been trying to understand it, because the crazy stuff people are doing with it just seems so mind-boggling.

While I am still very far away from that, I have spent a few weeks getting a little closer to that goal.

After doing some research, watching countless tutorials and brushing up on calculus, I think what explained it best to me was an extremely good tutorial made by David Miller, which Matt LeFevre (be sure to check out his great work on his blog) shared with me; it helped me a huge deal in getting a better understanding of it.

There are still a lot of unknowns to me, but I thought I'd share my current progress in a quick video.
The network is attempting to learn how to multiply float numbers between 0.0 and 1.0, and tries to minimise the error rate as much as possible before moving on to the next multiplication. You can see the results it comes up with in the lower center.

The next step will be to see how it interpolates; I want the network to give me the correct result for a multiplication it hasn't seen before. Let's see how that goes, I'll be sure to update once I get there (if I do) 😊


Saturday, July 1, 2017

2D Voronoi in PyQt

This is just a fun little side project I ended up doing.
I don't know what it's useful for but it looks cool 😃

Click read more to get the code!
 

Thursday, June 29, 2017

Showreel 2017

It's been a while since I last did a reel, but here is some of the work I have been doing in the last couple of years. This includes some of the work me and a bunch of awesome dudes did on Tom Clancy's The Division.

Hope you enjoy!

Thursday, June 22, 2017

Collision Deformer


Recently I learned a bit about writing nodes in Maya and decided to start working on a collision deformer.

It allows for ground and mesh-to-mesh collisions. There are also parameters for thickness (which will push the mesh vertices along the target's normals), and I experimented a little bit with stickiness.
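
For the simplest case, ground collision with thickness, the idea boils down to something like this (a sketch; the real deformer is a Maya node, and these names are made up):

  def collide_with_ground(points, ground_height=0.0, thickness=0.1):
      # points: list of (x, y, z) tuples in world space
      result = []
      for x, y, z in points:
          # the ground normal is (0, 1, 0), so the thickness offset is just along y
          result.append((x, max(y, ground_height + thickness), z))
      return result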

Before this video sits too long on my hard drive collecting dust until I touch this project again, I thought I'd share the current state of it in a quick video.

Updated article: here



Wednesday, June 21, 2017

Video tracker improvements

Just a quick post, as this is a significant quality improvement on the tracking!

It seems like with some footage it's actually better to slightly blur the video. I guess that makes sense, as it sort of averages the colour values: less noise and more coherent error rates.

After this I also apply a Gaussian filter to the keyframes, which eradicates the high-frequency oscillations but still largely keeps the original positions. I'm getting really happy with these results now!
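
Smoothing the keyframes can be as simple as running a small 1D Gaussian kernel over each tracked channel (a sketch; the 1-4-6-4-1 weights are a common binomial approximation of a Gaussian):

  def smooth_keys(values):
      kernel = [1.0, 4.0, 6.0, 4.0, 1.0]
      half = len(kernel) // 2
      smoothed = []
      for i in range(len(values)):
          total, weight = 0.0, 0.0
          for k, w in enumerate(kernel):
              j = i + k - half
              if 0 <= j < len(values):  # clamp at the ends of the curve
                  total += values[j] * w
                  weight += w
          smoothed.append(total / weight)
      return smoothed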

 

Monday, June 19, 2017

How to train your face tracking


Last week I was talking to a colleague at work about different facial tracking software and thought that it can't be that massively complicated to write basic software that tracks pixels.
"All it needs to do is compare a bunch of colour values, right?" Sure, it can probably be much more complicated than that, but I thought I'd give it a shot, and after a week of endless nights I have a first result that I am pretty happy with.

The solution I went for was essentially just looking for sub-images (a region defined by the tracker points) in larger images (a larger area around the tracker points) over and over again. Each possibility gets an error rating, and after trying every possible combination the software picks the solution with the smallest error rate. Once a solution is picked, it does this again, but now with the pixel values of the new region.
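
In code, the core of that search looks something like this (a sketch with made-up names, using the sum of squared differences as the error rating):

  def best_match(template, window):
      # template and window are 2D lists of grey values; window is the larger search area
      th, tw = len(template), len(template[0])
      wh, ww = len(window), len(window[0])
      best, best_error = (0, 0), float("inf")
      for dy in range(wh - th + 1):
          for dx in range(ww - tw + 1):
              # error rating for placing the template at this offset
              error = sum((window[dy + y][dx + x] - template[y][x]) ** 2
                          for y in range(th) for x in range(tw))
              if error < best_error:
                  best_error, best = error, (dx, dy)
      return best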