Fundamentals of Computational Neuroscience (2nd Edition)
The second edition of Fundamentals of Computational Neuroscience by Dr. Thomas P. Trappenberg is now available at Oxford University Press and Amazon.ca. (If you are looking for resources for the first edition, you can find them here). Comments, suggestions, etc. can be sent to tt@cs.dal.ca
Programs
A full list of programs used in the book is available here. Programs are available for the MATLAB, Octave, and SciLab environments.
Figures
A full list of figures from the book is available here.
Slides
Course slides are available for Fundamentals of Computational Neuroscience (2nd Edition). At this time there are short versions with the most important points for each chapter.
Slides can be downloaded here. If you would like to contribute additional material, please send email to tt@cs.dal.ca
Animations
Errata
I would appreciate your comments and corrections. Please send email to tt@cs.dal.ca
- Page 29, Section 2.2.3
- Equation (2.4) should read gL*V(t) + g_syn ... (a plus sign instead of the minus sign); a generic form of this equation is sketched below.
- Page 61, Section 3.1.5
- Equation (3.20) on the right should read u(v>30) = u+d; a short code sketch of this reset is given below.
- Page 131, Section 5.2.3
- The numerical example is not correct. The sum of the synaptic events is binomially distributed with mean 10000*0.005 = 50 and variance 10000*0.005*(1-0.005) = 49.75, which is reasonably well approximated by a Gaussian with this mean and variance. (A small numerical check is sketched below.)
- Important for the argument here is that the 'noise' in the summed input is much less than N times the 'noise' of the single events, since some of the fluctuations go in different directions and cancel each other out. More formally, the sum of N independent random numbers with mean mu and variance sigma^2 is a random number with mean N*mu and variance N*sigma^2. Thus, the variation in the background becomes less important when many synapses are involved, since the standard deviation of the sum only scales with the square root of the number of variables while its mean scales linearly with it.
- Page 146, Table 6.1
- The first entry in the y column of the Boolean AND function should be a zero (0), not the one (1) that is printed. As printed, the table displays the Boolean XOR function (or non-XOR, depending on the translation of 0/1 to true/false). A corrected truth table is given below.
- Page 330, Appendix B.4
- The second derivative in the example is (x-t). This term should be substituted into equation (B.15) in place of the term (1-x).
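
For the erratum to Equation (2.4) on page 29: as a point of orientation only, a generic current-balance equation of this kind can be written as below. The symbols c_m (membrane capacitance) and E_syn (synaptic reversal potential), and any remaining terms, are assumptions here and may differ from the book's Eq. (2.4); the point of the correction is simply that the leak term g_L*V(t) and the synaptic term carry the same sign.

    c_m \frac{dV(t)}{dt} = -\left[ g_L V(t) + g_{syn}(t) \left( V(t) - E_{syn} \right) \right]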
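
For the erratum to Equation (3.20) on page 61, the following is a minimal MATLAB/Octave sketch (not the book's own program) of the Izhikevich model update, illustrating what the corrected reset means: whenever v crosses 30, the recovery variable u is incremented by d (u -> u + d) rather than being set to d. The parameter values, input current, and time step below are standard illustrative choices, not values from the book.

    a = 0.02; b = 0.2; c = -65; d = 8;   % regular-spiking parameters (standard values)
    v = -65; u = b*v;                    % initial conditions
    I = 10; dt = 0.2;                    % assumed constant input and time step (ms)
    for step = 1:5000                    % simulate about 1000 ms with forward Euler
        v = v + dt*(0.04*v^2 + 5*v + 140 - u + I);   % membrane variable
        u = u + dt*a*(b*v - u);                      % recovery variable
        if v >= 30                       % spike detected: apply the reset
            v = c;                       % membrane variable is reset to c
            u = u + d;                   % corrected Eq. (3.20): u(v>30) = u + d
        end
    end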
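
For the corrected numerical example on page 131, a quick MATLAB/Octave check (a sketch, not taken from the book) reproduces the numbers: with N = 10000 synapses each active with probability p = 0.005 in a given time window, the number of synaptic events has mean N*p = 50 and variance N*p*(1-p) = 49.75, and the relative fluctuation of the sum is small.

    N = 10000; p = 0.005; trials = 10000;  % number of repetitions is an arbitrary choice
    s = zeros(1, trials);
    for k = 1:trials
        s(k) = sum(rand(N,1) < p);         % number of active synapses in one window
    end
    mean(s)                                % close to N*p = 50
    var(s)                                 % close to N*p*(1-p) = 49.75
    std(s)/mean(s)                         % relative fluctuation, roughly 1/sqrt(N*p) ~ 0.14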
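
For the erratum to Table 6.1 on page 146, a corrected truth table for the Boolean AND function is shown here (the input column labels x1 and x2 are assumed; the book's table may label the inputs differently):

    x1  x2 | y
     0   0 | 0   (corrected entry; the printed table shows a 1 here)
     0   1 | 0
     1   0 | 0
     1   1 | 1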