Friday, November 17, 2017

Collision Deformer small update

I went back to work on my collision deformer again to see if I could work out how to preserve the volume of the mesh a little better. It's quite fun playing around with this and I learned some new things!

Wednesday, November 8, 2017

Simple subsurface scattering in Maya


I think PyMel is definitely the way Maya's Python API should have been designed from the start, but it also takes the user away from the architecture Maya is built on. I love using PyMel for various reasons, but in order to become a better tech artist I am trying to force myself to use OpenMaya more often.


I used to work with a tech artist who contributed a huge amount to me wanting to learn scripting, and he made a script in 3ds Max to bake mesh thickness down to vertex colors. I thought this could be a fun little challenge, and here is what I came up with.


The idea is to cast rays from each vertex in random directions (the number of rays is specified by the sample count) and record the distance at which each ray hits the mesh again. Those distances are then normalised against the average distance and remapped to the vertex colors.
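The sampling math can be sketched in plain Python like this. This is only an illustration of the idea: in the actual tool the hit distances would come from Maya API raycasts, so `hit_distances_per_vertex` is assumed here to be precomputed.

```python
import math
import random

def random_direction(rng):
    """Uniform random direction on the unit sphere (one ray per sample)."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    z = rng.uniform(-1.0, 1.0)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def thickness_values(hit_distances_per_vertex):
    """Average each vertex's hit distances, normalise against the
    mesh-wide average distance and clamp to the 0-1 vertex color range."""
    averages = [sum(dists) / len(dists) for dists in hit_distances_per_vertex]
    mesh_average = sum(averages) / len(averages)
    return [min(avg / mesh_average, 1.0) for avg in averages]
```

Thin areas end up with small values and thick areas saturate at 1.0, which is then written into the vertex colors.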







Here you can see the raw output with different sample counts. Depending on the number of samples and vertices this can take some time. As you can see, good results usually start showing at 256+ samples, unless you are going for an N64 lava level type of look 😃





After calculating the vertex colors I also do a gaussian blur pass, which makes the result look quite nice.

Here is another screenshot with the raw data on the left, followed by 1x2x2, 2x2x2, 3x2x2 and 4x2x2 gaussian blurs. This was calculated with 128 samples.
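The blur pass can be sketched as repeated neighbour averaging over the mesh's vertex adjacency. This is a hedged plain-Python sketch, not the actual tool code, and the kernel weighting is assumed:

```python
def blur_vertex_values(values, neighbors, passes=2, self_weight=1.0):
    """Smooth per-vertex scalars by averaging each vertex with its
    adjacent vertices, repeated `passes` times."""
    for _ in range(passes):
        blurred = []
        for i, value in enumerate(values):
            total = value * self_weight
            weight = self_weight
            for n in neighbors[i]:
                total += values[n]
                weight += 1.0
            blurred.append(total / weight)
        values = blurred
    return values
```

Each pass diffuses the values a little further across the surface, which is why more passes give a progressively softer result.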



Here are the results on a character: the original on the left, followed by the calculated result with a 2x2x2 blur, shown in different colors.


When projected onto a texture this can also be used as a pretty decent starting point for an SSS map. Here it is cycling through no SSS, light SSS and exaggerated SSS for demonstration purposes.

See below for the code

Tuesday, November 7, 2017

Maya 2018: Viewport 2.0 Normalmap display issues

I have been trying to get a normal map to display properly in Maya 2018's viewport, but I kept getting visible seams in Viewport 2.0 despite it looking perfectly fine in 3ds Max, Marmoset etc. In previous Maya versions it displayed fine in the legacy High Quality viewport, but that is unfortunately no longer an option since all the legacy viewports have been removed in 2018.


It turns out it is a very simple fix. Maya tries to colour-manage all textures and defaults them to sRGB. There is an option in the texture's file node that lets you select the Raw colorspace. For this option to stick you also have to tick the "Ignore Color Space File Rules" checkbox just below it. Hope it helps you too!
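If you want to apply the fix in script form, it comes down to two attributes on the file node. This is an untested sketch using `maya.cmds`, and `file1` is a placeholder node name:

```python
import maya.cmds as cmds

# Stop Maya from colour-managing the normal map: read it as raw data
cmds.setAttr("file1.colorSpace", "Raw", type="string")

# Keep the choice from being overridden by the colour space file rules
cmds.setAttr("file1.ignoreColorSpaceFileRules", True)
```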
 

Monday, October 30, 2017

Blood & Truth Announcement trailer


In February 2017 I joined Sony's London Studio to work on Blood & Truth. I am happy to finally be able to show something with this first announcement trailer. It's been a lot of fun and a huge learning curve for me personally; I am working with some of the best people in the industry on this.

Hope you enjoy it!

Friday, October 20, 2017

Embedding videos in Qt maya tools

I have been meaning to post this for some time because figuring this out has taken me quite some time and maybe someone out there finds this useful.

Qt has a media library called Phonon, but it seems Maya doesn't ship with the necessary backend dll for some reason (maybe something to do with licensing?). If you ever tried this, you basically get a black-screen video player.


This missing dll file contains all the necessary functionality to decode media files, so if you would like to embed audio or video in your Qt tools you have to place the precompiled phonon_ds9d4.dll into <mayadir>\qt-plugins\phonon_backend
(source:  https://ilmvfx.wordpress.com/2014/08/30/how-to-make-phonon-work-in-maya-qt-and-pyqt4-compiling-for-maya2014-for-windows/)


Just make sure you change the path to your media file at the bottom of this script and have the necessary codec installed on your machine. You can read more about what is possible in the official PySide documentation.

See below for sample code of a simple videoplayer...
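A minimal Phonon-based player looks something like the following. This is my own hedged sketch of the standard PyQt4 Phonon usage (not necessarily the exact script from this post), and the media path is a placeholder you have to change:

```python
import sys
from PyQt4 import QtGui
from PyQt4.phonon import Phonon

app = QtGui.QApplication(sys.argv)

# VideoPlayer bundles the media object, audio output and video widget
player = Phonon.VideoPlayer(Phonon.VideoCategory)
player.play(Phonon.MediaSource(r"C:/path/to/your/video.avi"))  # placeholder path
player.resize(640, 360)
player.show()

sys.exit(app.exec_())
```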

Tuesday, October 17, 2017

Facial tracker: Head stabilisation

I went back to do some more work on the face tracker and added head stabilisation. It takes the average velocity of all markers and subtracts that value from each marker on all frames. This way it counteracts any movement in the camera footage.
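The idea can be sketched in a few lines of plain Python. This is an illustration of the averaging step only, assuming 2D marker positions per frame, not the tracker's actual code:

```python
def stabilise(frames):
    """frames: list of frames, each a list of (x, y) marker positions.
    Subtracts the accumulated average per-frame motion of all markers,
    cancelling out global (camera/head) movement."""
    stabilised = [list(frames[0])]
    offset_x = offset_y = 0.0
    for prev, cur in zip(frames, frames[1:]):
        n = len(cur)
        # average velocity of all markers between the two frames
        avg_dx = sum(c[0] - p[0] for p, c in zip(prev, cur)) / n
        avg_dy = sum(c[1] - p[1] for p, c in zip(prev, cur)) / n
        offset_x += avg_dx
        offset_y += avg_dy
        stabilised.append([(x - offset_x, y - offset_y) for x, y in cur])
    return stabilised
```

Markers that move with the head cancel out, while local motion (lips, brows) relative to the average survives.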

Thinking about it now, it might be better to let the user specify the head stabilisation markers manually... to be investigated! Anyway, here is a quick video showing it in action.


Wednesday, July 26, 2017

Neural nets continued

I took a step back from trying to do overly complex things too quickly and focused on getting my neural net to do basic OR and AND gates, and managed to get good results. This has taught me a lot about how it all works.

You might already know this, but it took me quite a while to understand what a simple neural net is actually doing. A great analogy that helped me understand it is water valves.

Imagine each neuron is a little glass sphere that can store 1 litre of water. Now imagine hoses (connections) that connect all the glass spheres together. Each hose that feeds into a glass sphere has a tap at the end (the connection weight), which lets you regulate how much water gets through to the next glass sphere.

Before we begin, we set all taps on all hoses to some random value (evenly distributed; in Python you can use random.uniform() for this).

Now we start pouring some water into our inputs. If our inputs are 1 and 0, this means we push 1 litre of water through the first glass sphere and nothing into the second. The water keeps getting pushed through the hidden layers all the way to the output. This is called forward feeding, and it will initially produce a random output (remember, all our taps have been set randomly).

In order to start learning, we now check the amount of water we got out at the end and see how far off we were. We can use this information to push the water from the output backwards, all the way to the first taps, and adjust each one very slightly to the left if we had too much water, or to the right if we had too little. The reason we adjust them only slightly is that we don't yet know how much each change might affect other inputs. This process is called back propagation.

Now we simply keep repeating this process a few thousand times, and hopefully all the taps end up in the right position, so whenever we pour water in it outputs the right amount.
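The whole loop above (random taps, forward feeding, back propagation) fits in a few lines for a single neuron. This is my own minimal sketch of the idea in plain Python, not the code from the video, and it only handles gates a single neuron can learn, like OR and AND:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_gate(samples, epochs=5000, rate=1.0, seed=1):
    """Train one sigmoid neuron on a 2-input logic gate.
    samples: list of ((a, b), target) pairs."""
    rng = random.Random(seed)
    # the 'taps' start at evenly distributed random values
    w = [rng.uniform(-1.0, 1.0) for _ in range(3)]  # weight a, weight b, bias
    for _ in range(epochs):
        for (a, b), target in samples:
            out = sigmoid(a * w[0] + b * w[1] + w[2])   # forward feeding
            delta = (out - target) * out * (1.0 - out)  # how far off were we?
            w[0] -= rate * delta * a                    # back propagation:
            w[1] -= rate * delta * b                    # nudge each tap a
            w[2] -= rate * delta                        # little at a time
    return w

def predict(w, a, b):
    return sigmoid(a * w[0] + b * w[1] + w[2])

OR_GATE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

After a few thousand repetitions the taps settle: `predict(w, 0, 0)` ends up near 0 and the other three inputs near 1.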

Here is a video demonstrating the current state. I added a little curve at the bottom showing the total error, which is really useful when something doesn't work. Towards the end you can also see that it scales to far larger networks (not that you would need that many neurons for such a simple case, but it's great to see it still producing the same results). Hopefully I'll do something more useful with this soon (I am sure there are tons of applications in 3D graphics), but I'm really happy with my progress on this.


Monday, July 10, 2017

How to train your trainer


Machine learning is all the hype these days! Ever since I first heard about it I have been trying to understand it, because the things people are doing with it seem so mind-boggling.

While I am still very far away from that, I have spent the past few weeks getting a little closer to that goal.

After doing some research, watching countless tutorials and brushing up on calculus, the thing that explained the most to me was an extremely good tutorial by David Miller that Matt LeFevre (be sure to check out the great work on his blog) shared with me; it helped me a huge deal in understanding it all.

There are still a lot of unknowns to me but I thought I'd share my current progress in a quick video. 
The network is attempting to learn how to multiply float numbers between 0.0 and 1.0, and tries to minimise the error as much as possible before moving on to the next multiplication. You can see the results it comes up with in the lower centre.

The next step will be to see how it interpolates: I want the network to give me the correct result for a multiplication it hasn't seen before. Let's see how that goes; I'll be sure to post an update once I get there (if I do) 😊


Saturday, July 1, 2017

2D Voronoi in PyQt

This is just a fun little side project I ended up doing.
I don't know what it's useful for but it looks cool 😃

Click read more to get the code!
 

Thursday, June 29, 2017

Showreel 2017

It's been a while since I last did a reel, but here is some of the work I have been doing over the last couple of years. This includes some of the work me and a bunch of awesome dudes did on Tom Clancy's The Division.

Hope you enjoy!