Fundamentals of Computational Neuroscience (2nd Edition)
The second edition of Fundamentals of Computational Neuroscience by Dr. Thomas P. Trappenberg is now available at Oxford University Press and Amazon.ca. (If you are looking for resources for the first edition, you can find them here.) Comments, suggestions, etc. can be sent to tt@cs.dal.ca.
Programs
A full list of programs used in the book is available here. Programs are available for the MATLAB, Octave, and SciLab environments.
Figures
A full list of figures from the book is available here.
Slides
Course slides are available for Fundamentals of Computational Neuroscience (2nd Edition). At this time there are short versions covering the most important points of each chapter.
Slides can be downloaded here. If you would like to contribute additional material, please email tt@cs.dal.ca.
Animations
Animations used in the book can be downloaded here: Animations.zip
Errata
I would appreciate your comments and corrections. Please send email to tt@cs.dal.ca
- Page 29, Section 2.2.3
- Equation (2.4) should read gL*V(t) + g_syn ... (a plus sign instead of a minus sign).
- Page 61, Section 3.1.5
- Equation (3.20) on the right should read u(v>30) = u+d (a short simulation sketch illustrating this reset appears after this list).
- Page 131, Section 5.2.3
- The numerical example is not correct. The sum of the synaptic events is binomially distributed with mean 10000*0.005 = 50 and variance 10000*0.005*(1-0.005), which is reasonably well approximated by a Gaussian with this mean and variance (a numerical check is sketched after this list).
- Important for the argument here is that the 'noise' in the summed input is much less than N times the 'noise' of the single events, since some of the fluctuations go in different directions and cancel each other out. More formally, the sum of N independent random numbers with mean mu and variance sigma^2 is a random number with mean N*mu and variance N*sigma^2. Thus, the variation in the background becomes less important when many synapses are involved, since the standard deviation of the sum of random variables only scales with the square root of the number of variables.
- Page 146, Table 6.1
- The first entry in the y column of the Boolean AND function should be a zero (0) instead of the printed one (1). As printed, the table shows the Boolean XOR function (or non-XOR, depending on the translation of 0/1 to true/false). A quick check is sketched after this list.
- Page 152, Equation 6.13
- There is an extra parenthesis.
- Page 204, Equation 7.23
- Although the effective weight kernel depends on the activity of the rotation cells, the rate of these cells should be removed from the product inside the integral.
- Page 330, Appendix B.4
- The second derivative in the example is (x-t). This term should be substituted into equation B.15 in place of the term (1-x).
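For illustration, here is a minimal MATLAB/Octave sketch (not taken from the book) of an Izhikevich-type model neuron showing the corrected reset rule from the Page 61 erratum; the parameter values a, b, c, d, the input current I, and the time step are illustrative assumptions.

  % Euler integration of the Izhikevich model neuron (illustrative values only)
  a = 0.02; b = 0.2; c = -65; d = 8;     % regular-spiking parameters
  v = -65; u = b*v;                      % membrane potential and recovery variable
  I = 10; dt = 0.5;                      % constant input current and time step (ms)
  for t = 1:2000
      v = v + dt*(0.04*v^2 + 5*v + 140 - u + I);
      u = u + dt*a*(b*v - u);
      if v >= 30                         % spike: apply the reset of Eq. (3.20)
          v = c;
          u = u + d;                     % corrected reset: u(v>30) = u + d
      end
  end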
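Similarly, the corrected numbers in the Page 131 erratum can be checked numerically; the sketch below (MATLAB/Octave, with an arbitrarily chosen number of trials) draws the sum of 10000 events with probability 0.005 each and compares the sample mean and variance with N*p = 50 and N*p*(1-p), which is approximately 49.75.

  % Simulate the sum of N independent synaptic events, each with probability p
  N = 10000; p = 0.005;                  % number of synapses and event probability
  trials = 10000;                        % number of simulated trials (arbitrary)
  counts = zeros(1, trials);
  for k = 1:trials
      counts(k) = sum(rand(N,1) < p);    % one binomial(N,p) sample per trial
  end
  fprintf('sample mean:     %.2f  (theory %.2f)\n', mean(counts), N*p);
  fprintf('sample variance: %.2f  (theory %.2f)\n', var(counts), N*p*(1-p));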
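Finally, a quick check of the corrected Table 6.1 entries: for the inputs (x1, x2) = (0,0), (0,1), (1,0), (1,1) the Boolean AND column reads 0, 0, 0, 1, with XOR (0, 1, 1, 0) listed for comparison (again a small MATLAB/Octave sketch, not code from the book).

  x = [0 0; 0 1; 1 0; 1 1];              % the four input patterns (x1, x2)
  y_and = x(:,1) & x(:,2);               % Boolean AND: 0 0 0 1
  y_xor = xor(x(:,1), x(:,2));           % Boolean XOR: 0 1 1 0
  disp([x, y_and, y_xor])                % columns: x1, x2, AND, XOR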