Today we considered a variety of strategies for stimulus
representation, such as a series of exponential traces of varying
durations triggered by the onset and offset of stimuli. We considered
running these traces through tile-coding mechanisms, and also just
using series of multiple-timescale bumps.
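To make that concrete, here is a minimal Python sketch of the
onset/offset exponential-trace representation (the function name, the
discrete-time update, and any particular choice of time constants are
illustrative assumptions, not decisions):

    import numpy as np

    def exponential_traces(stimulus, taus):
        """Onset- and offset-triggered exponentially decaying traces,
        one pair per time constant in `taus` (in time steps).
        `stimulus` is a binary 1-D array over discrete time; returns
        a (T, 2 * len(taus)) feature array."""
        stimulus = np.asarray(stimulus, dtype=float)
        T = len(stimulus)
        diff = np.diff(stimulus, prepend=0.0)
        onset = (diff > 0).astype(float)   # 1 when the stimulus turns on
        offset = (diff < 0).astype(float)  # 1 when it turns off
        feats = np.zeros((T, 2 * len(taus)))
        for j, tau in enumerate(taus):
            decay = np.exp(-1.0 / tau)     # larger tau -> slower decay
            on_trace = off_trace = 0.0
            for t in range(T):
                on_trace = decay * on_trace + onset[t]
                off_trace = decay * off_trace + offset[t]
                feats[t, j] = on_trace
                feats[t, j + len(taus)] = off_trace
        return feats

The resulting feature rows could then be fed through a tile coder, or
compared directly against a fixed bank of multiple-timescale bumps.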
Then we tried to make a network model to explain sensory
preconditioning. The fundamental mystery was why a prediction of a
stimulus should be treated similarly to the stimulus itself. This
seems to be required to get the natural, easy network model to
work. To achieve this, we had to resort to the theory that a
stimulus is one of its own best predictors. Then the presence of
the stimulus will give rise to the prediction of the stimulus.
This gives us the idea that if A predicts B and B predicts C, then A
should predict C. If A is a short stimulus, then it should not
predict itself, which should result in the extinction of its
self-prediction. If the CR actually prevents the US, will the
prediction of the US be extinguished and therefore the CR as well, or
will a new stimulus serve to reinforce the CR instead?
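To make the feedback idea concrete, here is a crude one-step
delta-rule sketch (deliberately simpler than the full TD model; the
trial structure and all names are illustrative assumptions). The one
line taken from today's discussion is the feedback step, where
predicted stimuli are added to the input as if they were actually
present; with it, pairing A with B and then B with the US leaves A
predicting the US, i.e., sensory preconditioning:

    import numpy as np

    # Stimulus indices: 0 = A, 1 = B, 2 = US.
    N, alpha = 3, 0.5
    W = np.zeros((N, N))  # W[i, j]: strength with which j predicts i

    def run_trial(sequence, learn=True):
        """Present `sequence`, a list of sets of stimulus indices,
        one set per time step. Returns the prediction at each step."""
        x = np.zeros(N)
        preds = []
        for present in sequence:
            x_now = np.zeros(N)
            x_now[list(present)] = 1.0
            pred = W @ x                # what last step's input predicts now
            if learn:
                W[:] += alpha * np.outer(x_now - pred, x)  # delta rule
            x = np.clip(x_now + pred, 0.0, 1.0)  # prediction treated like
            preds.append(pred.copy())            # the stimulus itself
        return preds

    for _ in range(20):
        run_trial([{0}, {1}, set()])    # phase 1: A then B, no US
    for _ in range(20):
        run_trial([{1}, {2}, set()])    # phase 2: B then US
    test = run_trial([{0}, set(), set()], learn=False)  # test: A alone
    print("predicted US after A:", test[2][2])  # ~1: A predicts US via B

One side effect worth noting: in this sketch a one-step stimulus has
zero input at its own onset, so its self-prediction weight is trained
toward zero, whereas a long stimulus would begin to predict itself.
That connects to the long-stimulus experiment in the next steps below.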
We decided a good goal for James's project (or maybe for all of us)
would be to block out the space of models (of response generation and
stimulus representation) and propose some specific experiments that
could be done to distinguish them.
NEXT STEPS:
- James is going to make a program to implement the TD model with
various stimulus representations (a minimal sketch follows this list).
- Early on we're going to try just training with a long stimulus
and explore this idea of "predicting yourself".
- Consider implementing a network model. (Wouldn't that be
cool...!)
- We're expecting more data from Jim Kehoe.
- Consider proposing specific experiments for answering unresolved
questions.
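For the first item, a minimal linear TD(lambda) skeleton of the sort
such a program might start from (a sketch only: the parameter values
and the treatment of the US as the reward signal are placeholder
assumptions):

    import numpy as np

    def td_conditioning(features, us, alpha=0.1, gamma=0.95, lam=0.9):
        """Linear TD(lambda) prediction of the US from an arbitrary
        stimulus representation (`features`, a T x n array)."""
        T, n = features.shape
        w = np.zeros(n)                # prediction weights
        z = np.zeros(n)                # eligibility trace
        V = np.zeros(T)
        for t in range(T - 1):
            V[t] = w @ features[t]
            delta = us[t + 1] + gamma * (w @ features[t + 1]) - V[t]
            z = gamma * lam * z + features[t]
            w += alpha * delta * z
        V[T - 1] = w @ features[T - 1]
        return w, V

    # Hypothetical single trial: CS on for steps 10-29, US at step 40,
    # using the exponential_traces sketch above as the representation.
    cs = np.zeros(100); cs[10:30] = 1.0
    us = np.zeros(100); us[40] = 1.0
    w, V = td_conditioning(exponential_traces(cs, taus=[2.0, 8.0, 32.0]), us)

Swapping in tile-coded traces or multiple-timescale bumps changes only
the features argument, which is what should make side-by-side
comparison of representations straightforward. (Repeated training
would carry w across trials rather than resetting it on each call, as
this single-trial sketch does.)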