Friday, October 20, 2017

Embedding videos in Qt maya tools

I have been meaning to post this for a while because figuring it out took me quite some time, and maybe someone out there will find it useful.

Qt has a media library called Phonon, but it seems like Maya doesn't ship with the necessary backend dll for some reason (maybe something to do with licensing?). If you have ever tried this, you will basically get a black-screen video player.


This missing dll file contains all the necessary functionality to decode media files, so if you would like to embed audio or video in your Qt tools, you have to place the precompiled phonon_ds9d4.dll into <mayadir>\qt-plugins\phonon_backend
(source: https://ilmvfx.wordpress.com/2014/08/30/how-to-make-phonon-work-in-maya-qt-and-pyqt4-compiling-for-maya2014-for-windows/)


Just make sure you have changed the path to your media file at the bottom of this script and that you have the necessary codec installed on your machine. You can read more about what is possible in the official PySide documentation.

See below for sample code for a simple video player...
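Here is a minimal sketch of such a player, assuming PySide's Phonon bindings are available and the backend dll described above is in place (the media path at the bottom is just a placeholder):

import sys
from PySide import QtGui
from PySide.phonon import Phonon

class SimpleVideoPlayer(QtGui.QWidget):
    def __init__(self, media_path, parent=None):
        super(SimpleVideoPlayer, self).__init__(parent)
        layout = QtGui.QVBoxLayout(self)
        # Phonon.VideoPlayer bundles a video widget and an audio output for us
        self.player = Phonon.VideoPlayer(Phonon.VideoCategory, self)
        layout.addWidget(self.player)
        self.player.play(Phonon.MediaSource(media_path))

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    app.setApplicationName("SimpleVideoPlayer")  # Phonon likes this to be set
    # placeholder path: point this to a media file you have a codec for
    player = SimpleVideoPlayer("/path/to/your/video.avi")
    player.resize(640, 480)
    player.show()
    sys.exit(app.exec_())

Inside Maya you would skip creating the QApplication and instead parent the widget to Maya's main window.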

Tuesday, October 17, 2017

Facial tracker: Head stabilisation

I went back to do some more work on the face tracker and added head stabilisation. It takes the average velocity of all markers and subtracts this value from each marker on all frames. This way it can counteract any movement in the camera footage.

Thinking about it now, it might be better to let the user specify the head stabilisation markers manually... to be investigated! Anyway, here is a quick video showing it in action.
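To illustrate the idea (this is not the actual tool code), a rough sketch of that stabilisation step could look like this, assuming the tracked data is a dictionary of per-frame 2D positions per marker:

def stabilise(markers):
    # 'markers' is assumed to be {marker_name: [(x, y), ...]}, one tuple per frame
    names = list(markers)
    frame_count = len(markers[names[0]])
    # keep the first frame as-is, then cancel the accumulated average motion
    stabilised = {name: [markers[name][0]] for name in names}
    offset_x = offset_y = 0.0
    for frame in range(1, frame_count):
        # average velocity of all markers between this frame and the previous one
        avg_vx = sum(markers[n][frame][0] - markers[n][frame - 1][0] for n in names) / len(names)
        avg_vy = sum(markers[n][frame][1] - markers[n][frame - 1][1] for n in names) / len(names)
        offset_x += avg_vx
        offset_y += avg_vy
        for n in names:
            x, y = markers[n][frame]
            stabilised[n].append((x - offset_x, y - offset_y))
    return stabilised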


Wednesday, July 26, 2017

Neural nets continued

I took a step back from trying to do overly complex things too quickly and focused on getting my neural net to do basic OR and AND gates, and I managed to get good results. This has taught me a lot about how it all works.

I realise you might already know this, but it took me quite a while to understand what a simple neural net is actually doing. A great analogy that helped me understand it is water valves.

Imagine each neuron is a little glass sphere that can store 1 litre of water. Now imagine hoses (connections) that connect all the glass spheres together. On each hose that feeds into a glass sphere there is a tap at the end (the connection weight), which lets you regulate how much water gets through to the next glass sphere.

Before we begin, we set all taps on all hoses to some random value (uniformly distributed; in Python you can use random.uniform() for this).

Now we start pouring some water into our inputs. If our inputs are 1 and 0, this means we will push 1 litre of water through the first glass sphere and nothing into the second. The water keeps getting pushed through the hidden layers all the way to the output. This is called forward feeding and will initially produce a random output (remember, all our valves have been set to random values).

In order to start learning, we now check the amount of water we got out at the end and see how far off we were. We can now use this information to push the water from the output backwards, all the way to the first valves, and adjust them very slightly to the left if we had too much water or to the right if we had too little. The reason we adjust them only very slightly is that we don't yet know how much this might affect other inputs. This process is called back propagation.

Now we simply keep repeating this process a few thousand times, and hopefully we will then have all the valves set to the right positions, so whenever we pour water in, the right amount comes out.
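If the analogy feels abstract, here is a tiny sketch of the same loop in code: a single neuron learning an OR gate with a sigmoid activation. My own setup differs, but the forward feed / back propagate / repeat cycle is exactly this:

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# the four input/output pairs of an OR gate
training_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# taps start at random positions, just like in the analogy
weights = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
bias = random.uniform(-1.0, 1.0)
learning_rate = 0.5

for _ in range(5000):
    for (a, b), target in training_data:
        # forward feed: pour the inputs through and see what comes out
        output = sigmoid(a * weights[0] + b * weights[1] + bias)
        # back propagation: nudge each tap slightly towards less error
        delta = (target - output) * output * (1.0 - output)
        weights[0] += learning_rate * delta * a
        weights[1] += learning_rate * delta * b
        bias += learning_rate * delta

for (a, b), target in training_data:
    print(a, b, "->", round(sigmoid(a * weights[0] + b * weights[1] + bias), 3))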

Here is a video demonstrating the current state. I added a little curve at the bottom that shows the total error, which is really useful when something doesn't work. Towards the end you can also see that it scales to far more neurons (not that you would need that many for such a simple case, but it's great to see it still producing the same results). Hopefully I'll do something more useful with this soon (I am sure there are tons of applications for 3D graphics), but I'm really happy with my progress on this.


Monday, July 10, 2017

How to train your trainer


Machine Learning is all the hype these days! Ever since I first heard about it I have been trying to understand it, because the crazy stuff people are doing with it just seems so mind-boggling.

While I am still very far away from that, I have been spending a few weeks getting a little closer to that goal.

After doing some research, watching countless tutorials, and brushing up on calculus, I think what explained the most to me was an extremely good tutorial by David Miller that Matt LeFevre (be sure to check out his great work on his blog) shared with me; it helped me a huge deal in getting a better understanding of it.

There are still a lot of unknowns for me, but I thought I'd share my current progress in a quick video.
The network is attempting to learn how to multiply float numbers between 0.0 and 1.0 and tries to minimise the error rate as much as possible before moving on to the next multiplication. You can see the results it comes up with in the lower center.

The next step will be to see how it interpolates; I want the network to give me the correct result for a multiplication it hasn't seen before. Let's see how that goes. I'll be sure to post an update once I get there (if I do) 😊
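Purely as an illustration of the data side (this is not the actual network code), the training pairs are easy to generate, and testing interpolation just means holding a few unseen pairs back:

import random

def make_samples(count):
    # random multiplication pairs: inputs (a, b) and the expected output a * b
    samples = []
    for _ in range(count):
        a = random.uniform(0.0, 1.0)
        b = random.uniform(0.0, 1.0)
        samples.append(((a, b), a * b))
    return samples

training_set = make_samples(1000)
test_set = make_samples(20)  # multiplications the network never trains on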


Saturday, July 1, 2017

2D Voronoi in PyQt

This is just a fun little side project I ended up doing.
I don't know what it's useful for but it looks cool 😃

Click read more to get the code!
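If you just want the gist without clicking through, here is a minimal brute-force sketch of the idea (not the original code from the post): colour every pixel by its nearest random site. It is slow, but it gets the picture across.

import random
import sys
from PyQt4 import QtGui

def voronoi_image(width=300, height=300, site_count=20):
    # random sites, each with a random colour
    sites = [(random.randrange(width), random.randrange(height),
              QtGui.qRgb(random.randrange(256), random.randrange(256), random.randrange(256)))
             for _ in range(site_count)]
    image = QtGui.QImage(width, height, QtGui.QImage.Format_RGB32)
    for y in range(height):
        for x in range(width):
            # colour each pixel with the colour of its nearest site
            nearest = min(sites, key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
            image.setPixel(x, y, nearest[2])
    return image

if __name__ == "__main__":
    app = QtGui.QApplication(sys.argv)
    label = QtGui.QLabel()
    label.setPixmap(QtGui.QPixmap.fromImage(voronoi_image()))
    label.show()
    sys.exit(app.exec_())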
 

Thursday, June 29, 2017

Showreel 2017

It's been a while since I last did a reel, but here is some of the work I have been doing in the last couple of years. This includes some of the work a bunch of awesome dudes and I did on Tom Clancy's The Division.

Hope you enjoy!

Thursday, June 22, 2017

Collision Deformer


Recently I learned a bit about writing nodes in Maya and decided to start working on a collision deformer.

It allows for ground and mesh-to-mesh collisions. There are also parameters for thickness (which will push the mesh vertices along the target's normals), and I experimented a little bit with stickiness.
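As a rough, hypothetical illustration of the ground-collision and thickness idea (plain Python here for clarity; the real thing is a Maya deformer node):

def collide_with_ground(points, ground_height=0.0, thickness=0.1):
    # push points that dip below the ground plane back up; thickness offsets
    # them further along the collision normal (straight up for a ground plane)
    result = []
    for x, y, z in points:
        if y < ground_height + thickness:
            y = ground_height + thickness
        result.append((x, y, z))
    return result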

Before this video sits too long on my hard drive collecting dust until I touch this project again, I thought I'd share its current state in a quick video.


Wednesday, June 21, 2017

Video tracker improvements

Just a quick post, as this is a significant quality improvement in tracking!

It seems like in some footage it's actually better to slightly blur the video. I guess that makes sense, as it sort of averages the colour values: less noise and more coherent error rates.

After this I also apply a Gaussian filter to the keyframes, which eradicates the high-frequency oscillations but still largely keeps the original positions. I'm getting really happy with these results now!
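As a small sketch of that second step (not the tool's exact code), Gaussian-smoothing a channel of keyframe values boils down to a weighted average over neighbouring keys:

import math

def gaussian_smooth(values, sigma=1.5, radius=4):
    # build a normalised Gaussian kernel
    kernel = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    smoothed = []
    for i in range(len(values)):
        accum = 0.0
        for j, weight in enumerate(kernel):
            # clamp at the ends so the first/last keys keep their influence
            index = min(max(i + j - radius, 0), len(values) - 1)
            accum += values[index] * weight
        smoothed.append(accum)
    return smoothed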

 

Monday, June 19, 2017

How to train your face tracking


Last week I was talking to a colleague at work about different facial tracking software and thought that it can't be that massively complicated to write a basic piece of software that tracks pixels.
"All it needs to do is compare a bunch of colour values, right?" Sure, it can probably be much more complicated than that, but I thought I'd give it a shot, and after a week of endless nights I have a first result that I am pretty happy with.

The solution I went for was essentially just looking for sub-images (a region defined by tracker points) in larger images (a larger area around the tracker points) over and over again. Each possibility gets an error rating, and after trying every possible combination the software picks the solution with the smallest error. Once a solution is picked, it does this again, but now with the pixel values of the new region.
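In code, the core of that idea boils down to a sum-of-squared-differences search; here is a hedged sketch (not the actual tracker), using numpy arrays for the image data:

import numpy as np

def find_best_match(search_region, patch):
    # slide the small patch over the search region and keep the offset
    # with the smallest error; arrays are 2D (grayscale) or 3D (colour)
    ph, pw = patch.shape[:2]
    sh, sw = search_region.shape[:2]
    best_error = None
    best_offset = (0, 0)
    for row in range(sh - ph + 1):
        for col in range(sw - pw + 1):
            candidate = search_region[row:row + ph, col:col + pw]
            # error rating: sum of squared colour differences
            error = np.sum((candidate.astype(np.float64) - patch.astype(np.float64)) ** 2)
            if best_error is None or error < best_error:
                best_error = error
                best_offset = (row, col)
    return best_offset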

Tuesday, March 14, 2017

Maya C++ Plugin Environment on Mac OSX

This is just a quick guide for all the Mac users out there that are trying to build Maya plugins in C++.

I learned a lot about CMake and compiling for Maya from Chad Vernon's series; if you have a bit of time on your hands, I definitely recommend going through it, as it is very detailed.

For this example I also used Chad Vernon's BlendNode, which he was kind enough to share.