Wednesday, July 26, 2017

Neural nets continued

I took a step back from trying to do overly complex things too quickly and focused on getting my neural net to learn basic OR and AND gates, and I managed to get good results. This taught me a lot about how it all works.

You might already know this, but it took me quite a while to understand what a simple neural net is actually doing. The analogy that made it click for me is water valves.

Imagine each neuron is a little glass sphere that can store 1 litre of water. Now imagine hoses (connections) that connect all the glass spheres together. At the end of each hose, where it enters a glass sphere, there is a tap (the connection weight) that lets you regulate how much water gets through to the next glass sphere.

Before we begin, we set all the taps on all the hoses to some random value (uniformly distributed; in Python you can use random.uniform() for this).
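In Python, that initial setup can look something like this (the layer sizes and the -1 to 1 range here are purely illustrative):

    import random

    # Illustrative sizes: 2 inputs, 2 hidden spheres, 1 output.
    n_in, n_hidden, n_out = 2, 2, 1

    # One tap per hose: every connection weight starts at an
    # evenly distributed random value.
    w_hidden = [[random.uniform(-1.0, 1.0) for _ in range(n_in)]
                for _ in range(n_hidden)]
    w_out = [[random.uniform(-1.0, 1.0) for _ in range(n_hidden)]
             for _ in range(n_out)]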

Now we start pouring water into our inputs. If our inputs are 1 and 0, we push 1 litre of water into the first glass sphere and nothing into the second. The water keeps getting pushed through the hidden layers all the way to the output. This is called forward feeding, and it will initially produce a random output (remember, all our taps have been set completely at random).
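As a rough sketch of one layer feeding forward (the sigmoid squashing function is a common choice, and the weights here are made up):

    import math

    def sigmoid(x):
        # Squashes any total back into the 0..1 range, like a sphere
        # that can only ever hold one litre.
        return 1.0 / (1.0 + math.exp(-x))

    def forward_layer(inputs, weights):
        # Each neuron sums the water arriving through its hoses,
        # scaled by each tap, then squashes the total.
        return [sigmoid(sum(x * w for x, w in zip(inputs, taps)))
                for taps in weights]

    # Pushing 1 litre into the first input and nothing into the second:
    hidden = forward_layer([1.0, 0.0], [[0.5, -0.2], [0.3, 0.8]])
    print(hidden)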

In order to start learning, we now check how much water came out at the end and see how far off we were. We can then push the water from the output backwards, all the way to the first valves, and adjust each tap very slightly: to the left if we got too much water, to the right if we got too little. The reason we adjust only very slightly is that we don't yet know how much a change might affect the other inputs. This process is called back propagation.
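For a single sigmoid neuron, that tap-turning step boils down to something like this (a gradient descent sketch; the learning rate is the "very slightly" part):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def adjust_taps(inputs, weights, target, learning_rate=0.1):
        output = sigmoid(sum(x * w for x, w in zip(inputs, weights)))
        error = target - output                  # how far off we were
        delta = error * output * (1.0 - output)  # scaled by the sigmoid's slope
        # Turn each tap a tiny bit in the direction that shrinks the error.
        return [w + learning_rate * delta * x
                for x, w in zip(inputs, weights)]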

Now we simply keep repeating this process a few thousand times, and hopefully we end up with all the valves in the right positions, so that whatever water we pour in, the right amount comes out.
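Putting it all together for an OR gate: a single neuron with a bias is already enough, since OR is such a simple case (my actual net has a hidden layer, so take this as the smallest possible sketch):

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # The OR truth table is the training data.
    samples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
               ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

    weights = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    bias = random.uniform(-1.0, 1.0)

    # Repeat the pour-measure-adjust cycle a few thousand times.
    for _ in range(10000):
        for inputs, target in samples:
            output = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
            delta = (target - output) * output * (1.0 - output)
            weights = [w + 0.5 * delta * x for x, w in zip(inputs, weights)]
            bias += 0.5 * delta

    for inputs, target in samples:
        output = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        print(inputs, '->', round(output, 3))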

Here is a video demonstrating the current state. I added a little curve at the bottom that shows the total error, which is really useful when something doesn't work. Towards the end you can also see that it scales to far more neurons (not that you would need that many for such a simple case, but it's great to see it still producing the same results). Hopefully I'll do something more useful with this soon (I am sure there are tons of applications in 3D graphics), but I'm really happy with my progress on this.


Monday, July 10, 2017

How to train your trainer


Machine learning is all the hype these days! Ever since I first heard about it I have been trying to understand it, because the stuff people are doing with it seems so mind-boggling.

While I am still very far away from that, I have been spending a few weeks getting a little closer to that goal.

After doing some research, watching countless tutorials, and brushing up on calculus, what explained it best to me was an extremely good tutorial made by David Miller, which Matt LeFevre (be sure to check out his great work on his blog) shared with me. It helped me a huge deal in understanding the topic.

There are still a lot of unknowns for me, but I thought I'd share my current progress in a quick video.
The network is attempting to learn how to multiply floats between 0.0 and 1.0, and it tries to minimise the error as much as possible before moving on to the next multiplication. You can see the results it comes up with in the lower centre.
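Generating the training pairs is the easy part; something along these lines (illustrative, not my exact code):

    import random

    def make_sample():
        # Two random floats in 0.0..1.0 as inputs, their product as the target.
        a = random.uniform(0.0, 1.0)
        b = random.uniform(0.0, 1.0)
        return [a, b], a * b

    inputs, target = make_sample()

Conveniently, the product of two such numbers also stays between 0.0 and 1.0, so an output that is squashed into that range can represent it directly.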

Next step will be to see how it interpolates: I want the network to give me the correct result for a multiplication it hasn't seen before. Let's see how that goes; I'll be sure to post an update once I get there (if I do) 😊


Saturday, July 1, 2017

2D Voronoi in PyQt

This is just a fun little side project I ended up doing.
I don't know what it's useful for, but it looks cool 😃

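The idea is easy to sketch, though: scatter some random sites and colour every pixel after its nearest one. Here is a minimal brute-force version of that idea (PyQt5 assumed; not the exact code from the video):

    import random
    import sys

    from PyQt5.QtCore import Qt
    from PyQt5.QtGui import QColor, QImage, QPainter
    from PyQt5.QtWidgets import QApplication, QWidget

    SIZE, NUM_SITES = 400, 24

    class VoronoiWidget(QWidget):
        def __init__(self):
            super(VoronoiWidget, self).__init__()
            self.setFixedSize(SIZE, SIZE)
            # Random sites, each with its own colour.
            self.sites = [(random.randrange(SIZE), random.randrange(SIZE),
                           QColor.fromHsv(random.randrange(360), 200, 230))
                          for _ in range(NUM_SITES)]
            self.image = QImage(SIZE, SIZE, QImage.Format_RGB32)
            # Brute force: colour every pixel after its nearest site,
            # so keep SIZE small or this gets slow.
            for y in range(SIZE):
                for x in range(SIZE):
                    sx, sy, colour = min(
                        self.sites,
                        key=lambda s: (s[0] - x) ** 2 + (s[1] - y) ** 2)
                    self.image.setPixelColor(x, y, colour)

        def paintEvent(self, event):
            painter = QPainter(self)
            painter.drawImage(0, 0, self.image)
            painter.setBrush(Qt.black)  # mark the sites themselves
            for sx, sy, _ in self.sites:
                painter.drawEllipse(sx - 2, sy - 2, 4, 4)

    if __name__ == '__main__':
        app = QApplication(sys.argv)
        widget = VoronoiWidget()
        widget.show()
        sys.exit(app.exec_())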

Thursday, June 29, 2017

Showreel 2017

It's been a while since I last did a reel, but here is some of the work I have been doing in the last couple of years. This includes some of the work a bunch of awesome dudes and I did on Tom Clancy's The Division.

Hope you enjoy!

Thursday, June 22, 2017

Collision Deformer


Recently I learned a bit about writing nodes in Maya and decided to start working on a collision deformer.

It allows for ground and mesh-to-mesh collisions. There are also parameters for thickness (which pushes the mesh vertices along the target's normals), and I experimented a little bit with stickiness.
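The thickness part on its own is easy to demonstrate outside of a deformer node; a minimal sketch with the Maya Python API 2.0 (just the push idea, not the actual node):

    import maya.api.OpenMaya as om

    def push_along_normals(mesh_name, thickness=0.1):
        # Push every vertex of mesh_name along its vertex normal by thickness.
        sel = om.MSelectionList()
        sel.add(mesh_name)
        mesh_fn = om.MFnMesh(sel.getDagPath(0))

        points = mesh_fn.getPoints(om.MSpace.kObject)
        # Unweighted averaged per-vertex normals.
        normals = mesh_fn.getVertexNormals(False, om.MSpace.kObject)

        for i in range(len(points)):
            points[i] = points[i] + om.MVector(normals[i]) * thickness
        mesh_fn.setPoints(points, om.MSpace.kObject)

    # e.g. push_along_normals('pSphereShape1', 0.2)  # shape name is hypothetical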

Before this video sits too long on my hard drive collecting dust until I touch this project again, I thought I'd share its current state in a quick video.