Week One – Introduction to Machine Learning and Neural Networks

I am excited to be taking part in my very first research project and to learn more about a topic I find very interesting. With Dr. Sprague’s help, I managed to get down some basic concepts of machine learning and neural networks. I had no prior knowledge of the topic, so I was curious about the concepts I would be learning.

I started by reading a couple of chapters from “Machine Learning: An Algorithmic Perspective” by Stephen Marsland. The book was great in helping me understand basic terms and concepts. Some topics came easily and others took more work, but I came away with a solid grasp of this first introduction to machine learning, neurons, and neural networks. I also watched some of Dr. Sprague’s videos on these subjects, which were a great help!

Machine Learning

The focus of my studies will be on machine learning, that is, making computers able to learn by themselves from data. There are different types of machine learning: supervised, unsupervised, reinforcement, and evolutionary. What I will be considering for my introduction to the subject is supervised learning, which uses examples with correct answers to train the machine so that it can generalize to other inputs. Later on, I will be considering reinforcement learning, which only tells the algorithm when its answer is wrong, not how to arrive at the correct one, letting it explore possibilities by itself.
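A concrete way I think about supervised learning: a handful of (input, correct answer) pairs, and a model fit to them that can then give answers for inputs it never saw. Here is a minimal sketch in Python; the data and the straight-line model are just assumptions I made up for illustration, not anything from the book:

```python
import numpy as np

# Training examples: inputs paired with their correct answers
# (made-up points scattered around the line y = 2x + 1)
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# "Learning" here is simply fitting a straight line by least squares
slope, intercept = np.polyfit(x_train, y_train, 1)

# Generalizing: predicting an answer for an input not in the training set
x_new = 2.5
print("prediction for", x_new, "->", slope * x_new + intercept)
```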

Neurons and Neural Networks

[Figure: diagram of a perceptron node]

I was very excited to learn how to represent a neuron mathematically and how neural networks work with different algorithms. I spent time on linear regression and classification. So far I have only worked with single-layer neural networks, and I realized that the linear regression method breaks down for non-linear situations (which can be fixed by adding more layers, but that’s for later). Another algorithm I learned was gradient descent, which adjusts a neuron’s weights using the partial derivatives of the calculated error function with respect to each weight. I have also started looking into classification using a sigmoid activation function, and learned that it gives a smoother, more informative output than a hard ‘binary’ threshold.
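To keep these ideas straight, I put together a small sketch of a single neuron with a sigmoid activation, trained by gradient descent. The dataset, learning rate, and number of epochs are all made-up values for illustration, not the book’s code:

```python
import numpy as np

# Sigmoid activation: squashes any input into the range (0, 1)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny made-up dataset: two inputs per example, plus the correct answer
X = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])  # labels for logical AND

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # one weight per input
b = 0.0                  # bias
eta = 0.5                # learning rate (assumed value)

for epoch in range(2000):
    # Forward pass: weighted sum of inputs, then sigmoid activation
    out = sigmoid(X @ w + b)

    # Error: difference between the network's output and the correct answers
    err = out - y

    # Gradient descent: partial derivatives of the squared error
    # with respect to each weight, via the chain rule
    # (the derivative of the sigmoid is out * (1 - out))
    grad_z = err * out * (1.0 - out)
    grad_w = X.T @ grad_z / len(y)
    grad_b = grad_z.mean()

    # Nudge the weights in the direction that reduces the error
    w -= eta * grad_w
    b -= eta * grad_b

print("predictions:", sigmoid(X @ w + b).round(2))
```

Running this, the outputs drift toward 0 for the first three examples and toward 1 for the last one, which is exactly the weight-adjusting behavior of gradient descent described above.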

While I don’t have a research project yet, I am making real progress in understanding neural networks, and I hope to soon start considering possible projects to dedicate my time to over the next few weeks.
