1/17/18

Today I continued with the neural network project from my deep learning course. Today's lectures dealt with creating the neural network itself: first initializing the input layer, then the two hidden layers, and finally the output layer. Constructing these layers consisted of choosing an "activation function" for the neurons in a particular layer (think of how a biological neuron fires only when the electrical signal passed to it from the previous neuron exceeds a certain threshold), choosing the number of nodes in that layer, and initializing the "weights" on the links between nodes with values close to zero.

As for the activation functions, two different ones were used in this network: a rectified linear ("ReLU") function, which equals zero for values of x less than 0 and equals x for values of x greater than 0 (used in the hidden layers), and a "sigmoid" function, which "squishes" the output to a value in the range [0,1] that can be read as the probability that the observation belongs to the positive class. A sketch of what this might look like in code is below.

The lecturer discussed the possibility of using something called a softmax function instead of a sigmoid function in applications where the dependent variable has more than two categorical outcomes. This may be useful in my thesis project, since there are three possible decisions for AI players to make (call/check, raise/bet, fold). Alternatively, I could use a threshold function that makes AI players check/bet/fold based on some kind of threshold/error rule. For example, if an AI player bets $200, the next AI player uses a neural network to decide how much he's willing to bet on his cards (say, $150). Since $150 is less than $200, that AI player would fold, or perhaps decide to bet using a different method if he thinks the previous player is bluffing. I realize this is kind of tangential; I'm just bouncing ideas off what I've learned.
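Here's a minimal sketch of what a network like this might look like in Keras. The layer sizes, initializer, and optimizer are placeholders I'm filling in from memory, not necessarily the exact ones from the course:

    # Minimal sketch of the network described above (binary output).
    # Layer sizes (11 inputs, 6 hidden units) are hypothetical placeholders.
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    # Two hidden layers using ReLU; the 'uniform' initializer draws the
    # starting weights from a small range close to zero.
    model.add(Dense(6, kernel_initializer='uniform', activation='relu', input_dim=11))
    model.add(Dense(6, kernel_initializer='uniform', activation='relu'))
    # Sigmoid output layer squishes the result into [0, 1].
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    # binary_crossentropy is the usual loss for a single sigmoid output.
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])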
note to self: the categorical_crossentropy loss function is used in the compile step for data sets whose dependent variable has more than two outcomes
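Bouncing off that note, a hypothetical softmax version for the three poker decisions might look something like this; the input features and layer sizes here are made up purely for illustration:

    # Hypothetical sketch: softmax output over three poker actions.
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    # Made-up input features, e.g. hand strength, pot size, etc.
    model.add(Dense(8, activation='relu', input_dim=5))
    # Softmax produces a probability distribution over the three actions
    # (call/check, raise/bet, fold); targets would be one-hot encoded.
    model.add(Dense(3, activation='softmax'))
    # categorical_crossentropy pairs with softmax for multi-class targets.
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])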
