Last semester's Neuroinformatik (neuroinformatics) course had a chapter on Hopfield networks.
A Hopfield net is a neural network with feedback: the output of the net at time t becomes the input of the net at time t + 1. For example, the output of neuron j at time t + 1 is given by

$$s_j(t+1) = \operatorname{sgn}\!\left(\sum_{i=1}^{N} w_{ij}\, s_i(t) - \theta_j\right),$$

where $\theta_j$ is the threshold of neuron j and N is the number of neurons in the Hopfield net.
If the weights are initialized suitably, the Hopfield net can be used as an autoassociative memory that recognizes a certain number of patterns. When presented with an initial input, the net will converge to the learned pattern that most closely resembles that input. To achieve this, the weights need to be initialized as follows:

$$w_{ij} = \frac{1}{P}\sum_{p=1}^{P} x_i^{(p)} x_j^{(p)}, \qquad w_{ii} = 0,$$

where the vectors $x^{(p)}$ are the P patterns to be learned.
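The Hebbian weight rule and the update rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full implementation: it assumes bipolar patterns (entries in {-1, +1}), uses a synchronous update for brevity (the classic convergence proof is for asynchronous, one-neuron-at-a-time updates), and the function names `train` and `recall` are my own.

```python
import numpy as np

def train(patterns):
    """Hebbian initialization: w_ij = (1/P) * sum_p x_i^(p) x_j^(p), zero diagonal."""
    patterns = np.asarray(patterns, dtype=float)  # shape (P, N), entries in {-1, +1}
    P, _ = patterns.shape
    W = patterns.T @ patterns / P
    np.fill_diagonal(W, 0.0)  # no self-connections (w_ii = 0)
    return W

def recall(W, state, theta=0.0, max_steps=20):
    """Iterate s_j(t+1) = sgn(sum_i w_ij s_i(t) - theta_j) until the state is stable."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(max_steps):
        new = np.where(W @ s - theta >= 0, 1.0, -1.0)
        if np.array_equal(new, s):  # fixed point reached
            break
        s = new
    return s

# Store one pattern, then present a corrupted version of it.
pattern = [1, -1, 1, -1]
W = train([pattern])
noisy = [-1, -1, 1, -1]          # first bit flipped
print(recall(W, noisy))          # converges back to the stored pattern
```

With a single stored pattern and one flipped bit, one update already restores the original; with several stored patterns, the net settles into whichever learned pattern is closest to the input.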
Hopfield nets have a scalar value associated with each state of the network, referred to as the "energy" E of the network:

$$E = -\frac{1}{2}\sum_{i,j} w_{ij}\, s_i s_j + \sum_j \theta_j s_j.$$

Under asynchronous updates this energy never increases, so the dynamics settle into local minima of E, which is why the stored patterns act as attractors.
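As a quick sanity check of the energy formula, here is a hypothetical 2-neuron example (the weight matrix and states below are made up for illustration): the stored pattern sits at lower energy than a perturbed state.

```python
import numpy as np

def energy(W, s, theta=0.0):
    """E = -1/2 * sum_ij w_ij s_i s_j + sum_j theta_j s_j (W has zero diagonal)."""
    s = np.asarray(s, dtype=float)
    return -0.5 * s @ W @ s + float(np.sum(theta * s))

# Weights that store the pattern x = [1, -1] (outer product, zero diagonal).
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
print(energy(W, [1, -1]))   # the stored pattern: a low-energy state
print(energy(W, [1, 1]))    # a perturbed state has higher energy
```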

Simply put, the learned "knowledge" is not stored inside the neurons themselves but in the connections between them, and a given neuron's output depends on the inputs from the neurons connected to it. This is quite similar to how human memory works.
P.S. An analogy suddenly came to mind: science is not the global optimum; God is. Science is only the current local optimum, the best solution findable within limited time and limited space. As long as we are not God, the answers science offers are like that local optimum: a reality we have no choice but to accept. And to break out of that limitation, we need to randomly reset the search's starting point from time to time.