Wednesday, September 4, 2019

Distance Constraint

Been a while since I last posted anything on here but I have not stopped doing cool stuff!

Recently I saw a post by a friend that showed a distance constraint, and it inspired me to try it myself. It's pretty simple: the target just stays on a sphere around a source, along the vector between them, and it's super cool when it's chained together.
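A minimal sketch of the idea (the function names are mine, not from the shared code): each point is pulled back onto a sphere of fixed radius around its parent, along the vector between them, and chaining just means feeding each constrained point in as the source for the next.

```python
def constrain(source, target, radius):
    """Return target clamped onto the sphere of the given radius around source."""
    direction = [t - s for s, t in zip(source, target)]
    length = sum(d * d for d in direction) ** 0.5 or 1.0  # avoid divide-by-zero
    return [s + d / length * radius for s, d in zip(source, direction)]

def solve_chain(points, radius):
    """Constrain each point to stay at a fixed distance from the previous one."""
    result = [points[0]]
    for target in points[1:]:
        result.append(constrain(result[-1], target, radius))
    return result
```

For example, a chain at x = 0, 3 and 6 with a radius of 1.0 collapses to x = 0, 1 and 2, each link sitting exactly one unit from its parent.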

In the example below I added a little springiness to each segment. Fun experiment 🍺🤓

Here is some more information, including code samples, if you want to try it out yourself (huge thanks to Johnathon Selstad for sharing!)

Sunday, June 24, 2018

Never stop learning

In December 2017 I left Sony Entertainment and moved to sunny California to work for the company I always wanted to work for: Blizzard Entertainment! I work with some incredibly talented and passionate people and have honestly never felt so inspired.

Since then I have been facing a lot of awesome new challenges and had a lot of stuff to learn. In order to keep the momentum going I finally set out to learn something that has been on my to-do list for a long time:

C++! That's right, the reaaaaal stuff! 👽

I had kinda poked around with it in the Maya environment, but I had very little understanding of the basics of the language. Since I am also mostly very animation focused, I decided to go even further outside my comfort zone and get into the basics of rendering by writing a pure raytracer in C++, using no additional libraries like OpenGL. By doing so I learned a lot of things I had heard others talk about before but never fully realised what they were, such as various image filtering methods.
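As a taste of what such a raytracer boils down to, here is a sketch of the core ray-sphere intersection test in python (the actual project was C++; this is just the math, solving |origin + t·direction - center|² = radius² for t):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t where the ray hits the
    sphere, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest of the two roots
    return t if t > 0.0 else None
```

Shooting one such ray per pixel, shading at the hit point, and bouncing for reflections is essentially the whole render loop.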

I decided to use Qt only for a simple UI and for generating the final render image, since I was already familiar with it coming from PyQt. For the most part it is almost identical, so in many cases I didn't even need to look things up in the documentation because I already knew which classes and functions I needed.

Starting out I did a lot of reading, and especially for the topic of raytracing I can strongly recommend the following links, which helped me a lot to get a good overview of the basics:

I developed this project on Mac OSX using VS Code, which I can strongly recommend; it is basically a lightweight Visual Studio and it did pretty much everything I needed it to do. It's very simple to use and has some great debugging tools (unlike in Python, we can no longer just print things and call it debugging! 😜)

Below I just wanted to share a couple of videos that show the progress I made over a period of a month or two.

What you see here is basically just a generated QImage that is re-rendered with new input values whenever you drag the mouse over it, for instance to move the camera or reposition the light.

And here is the final output I managed to produce. This has diffuse/spec/normal maps at a resolution of 1024x1024 on the spheres, and the plane is textured procedurally. I got the reflections kind of working (I think something in the math is wrong, but hey!). I also cheated a little bit on the DOF and used Qt's blur effect driven by my depth channel, which is not entirely correct since it bleeds edges, but it gave a pretty decent effect for almost no extra work 😊

My takeaway from this project is that there is never a point in time where you should stop learning new things, especially uncomfortable things that might seem like a behemoth. Go talk to your programmers, artists and tech artists and steal all their knowledge; most of them are happy to help you and generally get excited about people wanting to learn about "their world". They might even learn something in the process just by explaining it to you.

This stimulation of your mind is absolutely necessary, especially in a field where things change so much all the time. 

Friday, December 22, 2017

Collision Deformer stickiness

Just a small update to share before the holidays, I added stickiness to the collision deformer.
Wishing you all a merry Christmas and a happy new year! 🎄🎅

Thursday, December 14, 2017

RBF Node: Interpolate all the things

Ever since I first saw RBF (radial basis function) interpolation I wanted to at least be able to understand it. I invested quite a lot of time into this and had it on the backlog for a while to write about. This node basically interpolates between a range of values, based on a range of other values.

A python dictionary is actually quite a good way to explain the core concept. There are keys and values; for instance, let's say our keys are the numbers 1 to 5 and our values are 11 to 15.

keys   1  2  3  4  5
values 11 12 13 14 15

Notice how they match in length. Now how can we interpolate the keys based on the values? For example, what is the value 13 interpolated on the keys? Quite easy, as it's literally just sliding upwards to the keys and reading off the number 3. This could work for any range; all we have to do is remap the values, right? So if the keys are 0-1 and the values are 100-200, it should look like this:

keys   0.0 0.25 0.5 0.75 1.0
values 100 125  150 175  200

To find the interpolated value of the key 0.375 on our values we just remap it, and we should get 137.5. What we have done here mathematically is check the distance between each of the keys and remap that to the values. That is a super simplified example based on 1D input data. The distance data is what really drives everything here, and this can be applied to any dataset as long as we have a valid distance function.
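That remapping is just linear interpolation between two ranges; as a quick python sketch:

```python
def remap(value, old_min, old_max, new_min, new_max):
    """Linearly remap value from the range [old_min, old_max]
    to the range [new_min, new_max]."""
    t = (value - old_min) / (old_max - old_min)  # normalise to 0-1
    return new_min + t * (new_max - new_min)     # scale into the new range
```

With keys 0-1 and values 100-200, `remap(0.375, 0.0, 1.0, 100.0, 200.0)` gives 137.5, and going the other way, value 13 on keys 1-5 gives back 3.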

For 1D data the "distance" is basically one value subtracted from the other, but what if we want to interpolate 3D rotations based on 3D positions? As an example, we have 3 rotations and 3 positions.

rotations
90.0, 0.0, 0.0
0.0, 90.0, 0.0
0.0, 0.0, 90.0

positions
-1.0, 0.0, 1.0
1.0, 0.0, -1.0
1.0, 0.0, 1.0

How can we find the interpolated rotation at the position [ -0.5, 0.0, 0.5 ]? If we try to calculate it with just standard arithmetic, you will see that as the input data gets increasingly complex there are in fact multiple possible solutions, not just a single one. We do want a single solution though: the best possible interpolation between any number of elements. What if we have 500 rotations and 500 positions?

To find a single solution we build a matrix of the distances between all of our keys and set up a linear equation to solve against our values. Since we have 3 positions, we create a 3x3 matrix holding the euclidean distance from each position to every other position.

The distance between two points in 3D is found by taking the square root of the sum of the squared component differences. In python this may look like so:
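(The original snippet isn't included here, so this is a minimal reconstruction of that distance function:)

```python
import math

def distance(p1, p2):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
```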

For our given positions this is what the distance matrix looks like. Note how the values diagonally from top left to bottom right are all 0. This is because these represent the distances to themselves.

0.0 2.8 2.0
2.8 0.0 2.0
2.0 2.0 0.0

We then build another 3x3 matrix of the rotations and solve the linear equation Ax = B, where A is the matrix of position distances and B is the matrix of rotations. We are looking for x, which is essentially the weight with which the elements of A contribute to form B, and our ultimate solution.

There are many python packages out there for solving linear equations. To be completely honest, I haven't fully understood the math behind it; it is actually quite complicated, but if you want to read up on it you can find plenty of information online about solving linear equations using LU decomposition.

In my case I used the mpmath library, which comes with its own matrix class, an LU decomposition solver and a couple of other useful functions.
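As a sketch of the same solve using numpy in place of mpmath (my own example, following the standard RBF formulation; the exact numbers depend on rounding, so they may differ slightly from the figures quoted below):

```python
import numpy as np

# The three example positions and their paired rotations (one per row).
positions = np.array([[-1.0, 0.0,  1.0],
                      [ 1.0, 0.0, -1.0],
                      [ 1.0, 0.0,  1.0]])
rotations = np.array([[90.0,  0.0,  0.0],
                      [ 0.0, 90.0,  0.0],
                      [ 0.0,  0.0, 90.0]])

# A: euclidean distance from every position to every other position.
A = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

# Solve A @ x = B for the weight matrix x.
x = np.linalg.solve(A, rotations)

# Evaluate: distances from a probe point to all inputs, weighted by x.
probe = np.array([-0.5, 0.0, 0.5])
distances = np.linalg.norm(positions - probe, axis=1)
result = distances @ x  # interpolated rotation for the probe position
```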

Now we end up with the following calculation, where we have to find x:

0.0 2.8 2.0
2.8 0.0 2.0
2.0 2.0 0.0
⋅ X =
90.0 0.0 0.0
0.0 90.0 0.0
0.0 0.0 90.0

Solving this equation will give us back a "weight matrix"

x =
-32.1 0.0 45.0
32.1 0.0 0.0
45.0 45.0 -63.0

All we have to do now is get the distances from our interpolation probe point to all input positions and multiply them with their corresponding weights.

So for our interpolation point p at [ -0.5, 0.0, 0.5 ], the distances to all input positions give us

[ 54.578, 32.604, 90.15 ]

Now we simply iterate through this distance list and multiply each entry by its total weight; for instance, the first item is multiplied with the sum of the first column of the weight matrix, and so on. We are then left with our ultimate solution:

40.22, 30.58, -30.65

You can also solve these linear equations online here. The distance matrix A can also be passed through a basis function before we solve the equation, and there are plenty of functions to choose from, e.g. "linear", "gaussian" or "multiquadric", to get interesting interpolation effects for different datatypes.
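For reference, those three basis functions might look like this in python (these are the conventional textbook definitions; epsilon is a user-chosen falloff parameter, not something from the post):

```python
import math

def linear(d, epsilon=1.0):
    """Identity basis: the raw distance."""
    return d

def gaussian(d, epsilon=1.0):
    """Smooth falloff: 1.0 at distance 0, approaching 0 far away."""
    return math.exp(-(epsilon * d) ** 2)

def multiquadric(d, epsilon=1.0):
    """Grows with distance, but smoothly; 1.0 at distance 0."""
    return math.sqrt(1.0 + (epsilon * d) ** 2)
```

Each entry of the distance matrix is run through the chosen function before the solve, which changes how influence falls off between the keys.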

I made a quick video to demonstrate this node, written in Python. Towards the end of the video you can see the clavicle joint rotation driving some blend shapes. Hope you enjoy!

Tuesday, December 12, 2017

Collision Deformer with Springs

More fun stuff on the Collision Deformer! This time I added a spring-like behaviour that makes the collisions look like jelly. There is also a damping attribute that softens the springiness.

Sunday, December 3, 2017

Compiling Maya 2018 plugins with CMake on Mac OSX

In addition to that post, I have just spent a couple of hours trying to get my plugins to compile in Maya 2018 and finally succeeded, so I thought I'd share this here in case someone else has the same issues getting it to work.

I tested this on macOS High Sierra, so I can't guarantee it works with other versions too (yes, I did set a root password ☝)

First of all, the Maya developer kit is now a separate download and doesn't ship with Maya anymore.

Head over to the Autodesk website and download it. From the archive, extract the three folders /devkit, /include and /mkspecs into the Maya folder.

Your folder structure should then look like this:

  • /Applications/Autodesk/maya2018/devkit
  • /Applications/Autodesk/maya2018/mkspecs
  • /Applications/Autodesk/maya2018/include

Now you need to get the latest FindMaya.cmake from Chad Vernon's GitHub and make sure your project points to it.

In your project's CMakeLists.txt, make sure to target Maya 2018 by defining this:
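Assuming the FindMaya.cmake convention of a MAYA_VERSION cache variable (check your copy of the module for the exact name it expects), that could look like:

```cmake
set(MAYA_VERSION 2018 CACHE STRING "The Maya version to build against")
```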

If you compile now you might get the following warning from CMake:

-- Configuring done
CMake Warning (dev):
  Policy CMP0042 is not set: MACOSX_RPATH is enabled by default.  Run "cmake
  --help-policy CMP0042" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

  MACOSX_RPATH is not specified for the following targets:

It's just a warning, but in order to silence it you might want to also set this in your CMakeLists.txt:
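Either of the two standard CMake fixes should do (this is the generic fix for CMP0042, not necessarily word-for-word what my file ended up with):

```cmake
# Opt in to the new MACOSX_RPATH behaviour...
cmake_policy(SET CMP0042 NEW)
# ...or set the default target property explicitly:
set(CMAKE_MACOSX_RPATH ON)
```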

Now when you compile you will get spammed with a huge number of error messages. The core issue is that CMake isn't telling the Maya headers which platform you are compiling for.

The way you would do it in C++ is usually with a preprocessor directive like so:

#define OSMac_

In CMake you can do this with add_definitions() and the -D argument. It looks slightly confusing, but the full command you have to add to your CMakeLists.txt is this:
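In full:

```cmake
add_definitions(-DOSMac_)
```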

In my case, the full project's CMakeLists.txt now looks like the following:

That's it, you should now be able to compile. Hope this was useful for you.

Friday, December 1, 2017

Skin Sliding Deformer

I always wanted to do a skin sliding deformer, and it turned out to be quite simple. I am sure there are many ways of making it better and more advanced, but here I basically just do a closestPoint on the original shape, with an offset coming from a controller object!

As you might know, instead of posting a lot of text I'd rather just show a quick video, so here you go 😀

Written in C++

Friday, November 17, 2017

Collision Deformer Passive collision

I went back to work on my collision deformer again to see if I could work out how to preserve the volume of the mesh a little bit. It's quite fun playing around with this, and I learned some new things along the way!

Wednesday, November 8, 2017

Simple subsurface scattering in Maya

I think PyMel is definitely how the Maya Python API should have been designed from the start, but it also distances the user from the architecture Maya is built on. I love using PyMel for various reasons, but in order to become a better tech artist I am trying to force myself to use OpenMaya more often.

I used to work with a tech artist who contributed hugely to my wanting to learn scripting; he made a script in 3ds Max to bake mesh thickness down to vertex colors. I thought this could be a fun little challenge, and here is what I came up with.

The idea is to cast rays from each vertex in random directions (the amount is specified by the sample count) and get the distance at which each ray hits the mesh again, then normalise that based on the average distance and remap it to the vertex colors.

Here you can see the raw output with different sample counts. Depending on the number of samples and verts this can take some time. As you can see, good results usually start showing at 256+ samples, unless you are going for an N64 lava level type of look 😃

After calculating the vertex colors I am also doing a gaussian blur pass which makes it look quite nice.

Here is another screenshot with the raw data on the left, followed by a 1x2x2, 2x2x2, 3x2x2 and 4x2x2 gaussian blur. This was calculated with 128 samples.

Here are the results on a character, original on the left and the calculated version with a 2x2x2 blur in different colors.

When projected onto a texture this can also be used as a pretty decent start for an SSS map. Here it's cycling through no SSS, light SSS and exaggerated SSS for demonstration purposes.

See below for the code

Tuesday, November 7, 2017

Maya 2018: Viewport 2.0 Normalmap display issues

I have been trying to get a normalmap to display properly in Maya 2018's viewport, but I kept getting visible seams in Viewport 2.0 despite it looking perfectly fine in 3ds Max, Marmoset etc. In previous Maya versions it displayed fine in the legacy high quality viewport, but that is unfortunately no longer an option since all the legacy viewports were removed in 2018.

It turns out it is a very simple fix. Maya tries to manage the color space of all textures and defaults them to sRGB. There is an option in the texture's file node that lets you select the color space Raw. In order for this option to stick you also have to check the checkbox "Ignore Color Space File Rules" just below it. Hope it helps you too!

Monday, October 30, 2017

Blood & Truth Announcement trailer

In February 2017 I joined Sony London Studio to work on Blood & Truth. I am happy to finally be able to show something with this first announcement trailer. It's been a lot of fun and a huge learning curve for me personally; I am working with some of the best people in the industry on this.

Hope you enjoy it!

Friday, October 20, 2017

Embedding videos in Qt maya tools

I have been meaning to post this for some time, because figuring it out took me quite a while and maybe someone out there will find it useful.

Qt has a media library called Phonon, but it seems Maya doesn't ship with the necessary backend DLL for some reason (maybe something to do with licensing?). If you have ever tried this, you will basically get a black-screen video player.

This missing DLL file contains all the necessary functionality to decode media files, so if you would like to embed audio or video in your Qt tools you have to place the precompiled phonon_ds9d4.dll into <mayadir>\qt-plugins\phonon_backend

Just make sure you have changed the path to your media file at the bottom of this script and that you have the necessary codec installed on your machine. You can read more about what is possible in the official PySide documentation.

See below for sample code of a simple videoplayer...

Tuesday, October 17, 2017

Facial tracker: Head stabilisation

I went back to do some more work on the face tracker and added head stabilisation. It takes the average velocity of all markers and subtracts it from each marker on every frame. This way it can counteract any movement in the camera footage.
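The averaging-and-subtracting step could be sketched like this with numpy (my own reconstruction, assuming 2D tracked marker positions per frame):

```python
import numpy as np

def stabilise(markers):
    """markers: array of shape (frames, num_markers, 2) of tracked 2D points.
    Subtracts the accumulated average marker velocity from every marker,
    cancelling out global (camera/head) movement."""
    velocities = np.diff(markers, axis=0)        # per-frame displacement
    mean_velocity = velocities.mean(axis=1)      # average over all markers
    # Accumulate the mean velocity into a per-frame global offset
    # (frame 0 has no offset).
    offsets = np.concatenate([[np.zeros(2)], np.cumsum(mean_velocity, axis=0)])
    return markers - offsets[:, None, :]
```

If every marker drifts together by the same amount each frame, the stabilised output is identical to the first frame repeated, while relative motion between markers (the actual facial motion) survives untouched.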

Thinking about it now, I think it might be better to allow the user to specify the head stabilisation markers manually... to be investigated! Anyway here is a quick video showing it in action.

Wednesday, July 26, 2017

Neural nets continued

I took a step back from trying to do overly complex things too quickly and focused on getting my neural net to do basic OR and AND gates, and managed to get good results. This has taught me a lot about how it all works.

You might already know this, but it took me quite a while to understand what a simple neural net is actually doing. A great analogy that helped me understand it is water valves.

Imagine each neuron is a little glass sphere that can store 1 litre of water. Now imagine hoses (connections) that connect all the glass spheres together. At the end of each hose that feeds into a glass sphere there is a tap (the connection weight) which lets you regulate how much water gets through to the next glass sphere.

Before we begin, we set all the taps on all hoses to some random value (evenly distributed; in python you can use random.uniform() for this).

Now we start pouring water into our inputs. If our inputs are 1 and 0, we push 1 litre of water through the first glass sphere and nothing into the second. The water keeps getting pushed through the hidden layers all the way to the output. This is called forward feeding and will initially produce a random output (remember, all our taps have been set totally randomly).

In order to start learning, we now check the amount of water we got out at the end and see how far off we were. We can use this information to push the water from the output backwards, all the way to the first taps, and adjust each one very slightly to the left if we had too much water, or to the right if we had too little. The reason we adjust it only very slightly is that we don't yet know how much this might affect other inputs. This process is called back propagation.

Now we simply keep repeating this process a few thousand times, and hopefully we end up with all the taps set to the right position, so whenever we pour water in it outputs the right amount.
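The whole valve analogy boils down to surprisingly little code. Here is a single-neuron python sketch that learns an OR gate (my own minimal example, not the code from the video):

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# OR gate training data: inputs and expected outputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# The "taps": two input weights and a bias, all starting at random positions.
weights = [random.uniform(-1, 1) for _ in range(3)]

for _ in range(5000):
    for inputs, target in data:
        # Forward feed: push the inputs through the weighted connections.
        total = weights[0] * inputs[0] + weights[1] * inputs[1] + weights[2]
        output = sigmoid(total)
        # Back propagation: nudge each tap slightly toward less error.
        error = target - output
        gradient = error * output * (1.0 - output)
        weights[0] += 0.5 * gradient * inputs[0]
        weights[1] += 0.5 * gradient * inputs[1]
        weights[2] += 0.5 * gradient
```

After training, rounding the neuron's output reproduces the OR truth table; swapping the training data for AND works just the same.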

Here is a video demonstrating the current state. I added a little curve at the bottom that shows the total error, which is really useful when something doesn't work. Towards the end you can also see that it is infinitely scalable (not that you would need that amount of neurons for such a simple case, but it's great to see it still producing the same results). Hopefully I'll do something more useful with this soon (I am sure there are tons of applications for 3d graphics), but I'm really happy with my progress on this.

Monday, July 10, 2017

How to train your trainer

Machine learning is all the hype these days! Ever since I first heard about it I have been trying to understand it, because the crazy stuff people are doing with it just seems so mind-boggling.

While I am still very far away from that I have been spending a few weeks on getting a little closer to that goal.

After doing some research, watching countless tutorials and brushing up on calculus, what explained the most to me was an extremely good tutorial by David Miller that Matt LeFevre shared with me (be sure to check out his great work on his blog); it helped me a huge deal in understanding it.

There are still a lot of unknowns to me, but I thought I'd share my current progress in a quick video.
The network is attempting to learn how to multiply float numbers between 0.0 and 1.0, and tries to minimise the error rate as much as possible before moving on to the next multiplication. You can see the results it comes up with in the lower center.

The next step will be to see how it interpolates; I want the network to give me the correct result for a multiplication it hasn't seen before. Let's see how that goes. I'll be sure to post an update once I get there (if I do) 😊