Neural Networks - Geoffrey Hinton

Dec 21, 2017

Why Machine Learning

  • Because hand-written software cannot detect all sides of a 3D figure from every viewpoint; the rules are too complex to program explicitly
  • Because computational power is cheaper than paying software engineers to write complex code

Reinforcement Learning

  • The output is an action or a sequence of actions and the only supervisory signal is an occasional scalar reward
  • We usually use a discount factor for delayed rewards so that we don’t have to look too far into the future (minimal sketch after this list)
  • The rewards are typically delayed, so it is hard to know where we went wrong or right
  • A scalar reward does not supply much information
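To make the discount idea concrete, here is a minimal sketch; the reward sequence and the value gamma = 0.9 are made-up examples, not from the lecture.

```python
# Minimal sketch of a discounted return, assuming a reward sequence and a
# discount factor gamma = 0.9 (both made-up values, not from the lecture).

def discounted_return(rewards, gamma=0.9):
    """Sum of rewards weighted by gamma**k: later rewards count for less."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# The same reward arriving later is worth less:
print(discounted_return([1]))           # 1.0 (immediate reward)
print(discounted_return([0, 0, 0, 1]))  # 0.9**3 = 0.729 (delayed reward)
```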

Neural Network Architectures

  • Feed-forward networks
  • Recurrent networks (RNNs) = have directed cycles (loops) in their connection graph
  • Symmetrically connected networks
    • Easier to analyze than RNNs
    • Obey an energy function
    • Can’t be used to model cycles
  • More than one hidden layer = deep neural network
  • The activities of the neurons in each layer must be a non-linear function of the activities of the neurons in the previous layer (minimal sketch after this list)
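Here is a minimal sketch of that last bullet for one feed-forward layer; the layer sizes, random weights, and the choice of sigmoid non-linearity are illustrative assumptions, not from the lecture.

```python
import numpy as np

def layer(prev_activities, W, b):
    """Activities of one layer: a non-linear function (sigmoid here)
    of a linear combination of the previous layer's activities."""
    z = W @ prev_activities + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # previous layer: 3 neurons (made-up size)
W = rng.normal(size=(4, 3))   # this layer: 4 neurons (made-up weights)
b = np.zeros(4)
print(layer(x, W, b))         # 4 activities, each in (0, 1)
```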

Training Algorithm: Perceptron Convergence Procedure

[Screenshot: "Perceptrons: The first generation of neural networks" (University of Toronto, Coursera)]
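In code, the procedure from the lecture looks roughly like the sketch below: if the output is correct, leave the weights alone; if it incorrectly outputs 0, add the input vector to the weights; if it incorrectly outputs 1, subtract it. The AND dataset and the number of passes are my own stand-ins.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])            # AND: linearly separable targets
X = np.hstack([X, np.ones((4, 1))])   # extra always-on input plays the role of a bias
w = np.zeros(3)

for _ in range(10):                   # a few passes over the training cases
    for x, target in zip(X, t):
        out = 1 if w @ x >= 0 else 0
        if out == 0 and target == 1:
            w += x                    # output too low: add the input vector
        elif out == 1 and target == 0:
            w -= x                    # output too high: subtract the input vector

print(w)  # a separating weight vector for AND
```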

Why perceptrons fell out of favor in the 1970s

  • XNOR = a set of training cases that is not linearly separable, so a perceptron cannot classify it
  • Patterns with the same number of on-pixels that differ only in how they are shifted or spaced cannot be discriminated = cannot classify

[Screenshot: "What perceptrons can't do" (University of Toronto, Coursera)]

Trick: When 2D is not enough to create a boundary, go 3D to make the dataset linearly separable! (See the sketch below.)
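For example, XNOR is not separable in 2D, but adding a hand-picked third feature x1·x2 (an illustrative choice, not from the lecture) makes it separable by a plane:

```python
import numpy as np

X2d = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([1, 0, 0, 1])                   # XNOR targets

x3 = (X2d[:, 0] * X2d[:, 1]).reshape(-1, 1)  # new feature: x1 AND x2
X3d = np.hstack([X2d, x3])

w, b = np.array([-1.0, -1.0, 2.0]), 0.5      # a separating plane, found by hand
pred = (X3d @ w + b >= 0).astype(int)
print(pred)                                  # [1 0 0 1] matches the XNOR targets
```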
