This page is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
I just added a section on neural network models for theoretical neuroscience. Suggestions, links to other articles, welcome, but I think I will not add anything more to this article. It is already quite big. --Olethros 15:59, 16 December 2005 (UTC)
June 15, 2007: I would like to remove the phrase stating that the brain is a computer...This is not really useful or germane, and leads to silly, wasteful, pseudo-philosophical arguments, that are best left to other venues. Or at the very least, remove the word "computer" and replace it with some other more descriptive term....
From the article:
I think such comparisons are almost never valid. There is no common concept of "logical operation"; computer instructions (which can vary considerably depending on the specific hardware) are very different from anything that goes on inside the brain. Perhaps the most fundamental difference is that today's computers operate sequentially (or occasionally with a small amount of parallelism), while human brains are massively parallel. Wmahan. 03:10, 2005 Apr 13 (UTC)
Comparisons like this aren't very useful, I think.
(That is an unsigned post by User:194.95.59.130.) Ben please vote!
(BTW, please always sign with ~~~~) Ben please vote! 05:01, May 23, 2005 (UTC)
However problematic, brain-computer comparisons are fun and help readers get a sense for the many interesting aspects of computation. I added a sentence to point out that Turing only applies to static functions (a.k.a. offline), while new theories of neural computation have developed non-Turing computing models (see Maass and Markram ref). Another point I'd like to add to the same paragraph: computers now have lots of embedded auxiliary processors, further blurring what we mean by "computer". JohnJBarton 05:17, 28 September 2005 (UTC)
---
My feeling is that the consensus within the community is that the term ANNs is extremely misleading. The relation of ANNs to a real brain is slim, and limited to:
1) There exist variable-strength connections between nearly identical elements (neurons)
2) The elements' response with respect to a stimulus is bounded (it usually being a form of sigmoid response)
Furthermore, one should distinguish between
1) Neuromimetic models, which are artificial neural models specifically created to model real neurons. Sometimes these models are of single neurons, sometimes of networks. The goal there is to try and model some aspects of a neuron's or a small neural cluster's behaviour; and perhaps see if those are sufficient to explain some particular type of neural processing.
2) Abstract models, which have a connection to biological systems in the loosest possible sense. These are the types of models that are used in AI currently, and they are just a straightforward application of statistics. More specifically, such models embody a unification of statistical estimation, optimisation and control.
Furthermore, I would like to add that supervised model training, where you have a stimulus and a desired response, makes very little biological sense because it requires a mechanism for providing the desired response.
So, if I were to re-write this article, I think first of all I'd:
1) Say why ANNs are called ANNs
2) Distinguish between abstract models and neuromimetic ones
3) Talk about the first simple model, the perceptron, and its relation with statistics. This one is interesting.
4) Talk about the second simple model, the backpropagation network, and its relation with statistics.
5) Talk about reinforcement learning models and their relation with the dopamine system of reward in the brain
6) Talk about neuromimetic models
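To illustrate the perceptron mentioned above: a sketch of Rosenblatt's mistake-driven learning rule (names are mine; a toy, not the article's notation). The statistical connection is that the learned weights define a linear decision boundary, as in linear discriminant methods:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Rosenblatt's perceptron rule: adjust weights only on mistakes.

    samples: list of feature tuples; labels: +1 or -1.
    Returns (weights, bias) defining a linear decision boundary."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify x with the learned linear boundary."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable data (e.g. logical AND) the rule is guaranteed to converge; on non-separable data (e.g. XOR) it never does, which is exactly the limitation that motivated multilayer networks.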
The comparison of the brain, ANNs, and computers is worthwhile to have, but perhaps it should be made clearer, especially with respect to parallelism. For example, humans cannot really do more than a couple of tasks at the same time. Neural processing _is_ parallel, while CPU execution is serial... but at the transistor level, all those transistors work in parallel. The only reason that CPU execution is serial is that the program is defined as a sequence of instructions. So, I think someone should edit this bit also.
---
Firstly, why is this section named 'Comparison of biological and artificial neural networks'? There is only one paragraph which does that, the rest compares brains and computers.
Secondly, there are a number of problems with this section which I will explain:
Furthermore, while a computer is centralized with a processor at its core, the question of whether the brain is centralized or decentralized (distributed) is unresolved.
This is not a question that has much meaning. When a biological organism makes a decision, various parts of the brain contribute. For example, while the motor cortex is ultimately responsible for limb movement, other parts of the brain can modulate it and inhibit movement. The brain does appear to have multiple processing centers that are performing different tasks... but so do computers: they have hard disks, graphics cards, memory buses, timers... not to mention billions of transistors that are actually working in parallel :)
So to me this question is more philosophical: Can we describe the brain's operation as a sequential decision making process?
The answer to that question is yes: but only theoretically. So what we should be asking instead is:
Is it more sensible to describe the brain's operations as those of multiple processing centers operating in parallel and interacting with each other in simple ways, or as a single, extremely complicated sequential process? I think most would agree that the answer is the former. So, there isn't really a question to be answered. The brain 'centers' do operate in parallel.
However, there is another question: what is the extent of interaction between centers? When we solve some tasks, it seems that more than one brain region is 'active' apart from the dominantly active one. Does that mean that more than one is necessary to perform the task?
In any case, this paragraph also conflates models of the brain with ANNs, which are an entirely different beast. ANNs are not trying to model a complete brain - which is not to say that you could not try and create a model of a brain that uses ANNs in there somewhere.
Later on:
Some other basic differences between neural networks in the brain and artificial neural networks follow. The brain is made up of a great number of components (about 10^11), each of which is connected to many other components (about 10^4), each of which performs some relatively simple computation, whose nature is unclear, in slow fashion (less than a kHz), and based mainly on the information it receives from its local connections.
This is extremely problematic and should be re-written. --Olethros 14:20, 15 December 2005 (UTC) --- This section is still not up to par. It inserts quotations in a haphazard way, and makes overgeneralising statements without clearly defining at any point what it is talking about. Most importantly, it is not directly relevant to the neural network modelling. Computers are definitely not models of the brain, so why compare them?
I'll specify all remaining problems: Perhaps the most fundamental difference between brain and computer is that today's computers operate primarily sequentially, or with a small amount of parallelism (for details, see hyper-threading, SIMD, MMX and SSE2), while human brains are massively parallel. This refers to the execution of sequential programs by the machine. If you give a human a sequential program to follow, he will execute it sequentially. It is not an architectural problem. The brain has neurons that work in parallel. Computers have transistors that work in parallel. The brain has neural centers (to overextend a term) that work in parallel, a computer has modules that work in parallel (there can be many modules in the same chip). The central difference is that of sequential program execution - and the fact that a 'program' exists at all. However, the process of logical thought in the brain subjectively appears to be completely sequential. And a computer program is an alternative expression of a logical process. So, one could argue very easily that in fact, both systems are parallel. Or that both systems are serial. It's a philosophical question, and this article is not the place to answer it. It is thus my opinion that this paragraph should be moved to another article.
The second paragraph seems to try to compare the brain with a very specific subset of artificial neural networks. There are networks that try to imitate real neurons much more closely; these are mostly studied in theoretical neuroscience. I have already added this section in this article, where I explain more or less which aspects of the real neurons neuroscientists try to model with their theoretical neural networks. Of course, the neural networks used in artificial intelligence are even further away from real systems. There is a connection, to be sure, which I try and explain in the theoretical neuroscience section. Furthermore, the various allusions to the particular neural model that the author has in mind are not elucidated and will, at best, confuse the uninitiated reader. Thus, this paragraph should be removed. If you think that there is something not answered in the theoretical neuroscience and the artificial intelligence sections, nor in the philosophy of perception and cognitive science articles, that should be discussed here, then we can add it.
The third paragraph mentions various facts without any references - and without any context. And again, it is out of place in this article: it is a comparison between brains and computers rather than between brains and models of the brain.
However, a comparison of computing power between different types of neural models is interesting. People have been trying, for example, to discover whether the spiking behaviour of real neurons is essential, or whether the static real-valued behaviour of simple ANNs is sufficient to perform certain types of computation. To this extent, the allusion to Maass's work is relevant. It is a part of the now largely settled debate of dynamic versus static neural systems. That would be an interesting article in itself, but it is actually a subject of theoretical neuroscience. I could add a short paragraph about this comparison there, if the subject is not discussed in one of the related articles, since this article is already too long.
So, my recommendation is that the whole of this section should be removed.
--Olethros 17:35, 16 December 2005 (UTC)
[Actually, the most profound reason why this entire section fails is much more straightforward: language. You NN guys use the math/electrical engineering vernacular while the vast majority of neurobiologists do not. I'm not placing a value judgment on this. But it is a fact that neither camp deals with the other; in fact, the NN crowd probably wouldn't understand 90% of the articles in the Journal of Neuroscience, and flip that for the neuro geeks. This article does nothing whatsoever to bridge this gap. In my world, that is PRECISELY what a good encyclopedia presentation does. Santiago Ramon y Cajal is convulsing in his grave.]
So, my question is, what would you suggest to 'bridge the gap'? Isn't the 'Neural networks and Neuroscience' section sufficient? Perhaps it should be moved near the top?
--Olethros 13:13, 17 December 2005 (UTC)
Neuroscience is a very broad field and neural networks is simply one of its many subdivisions. The fact that one branch is very poorly informed of another is not 'Admittedly the case sometimes.' but rather the prevailing case ALL of the time. My sense is that the neural network folks are especially far removed from the biological core of the field. I mean no one with an advanced degree in the discipline would ever refer to a 'pure neuroscientist' since 1. as opposed to....? 2. neuroscience covers genetics, biochemistry, anatomy, physiology, electrophysiology, medicine, immunology, psychology, and on and on and on. That you could use the term and then contrast with other 'pure' fields suggests a profound lack of understanding and appreciation of scope of neuroscience and its implied goal to integrate the many disparate components that define the study of the central nervous system. What are your qualifications to be writing this article relating to neuroscience? (My doctorate (Neuroscience )was awarded in 1991.) --
--Olethros 16:08, 17 December 2005 (UTC)
---
I decided to remove all the old stuff and add a very simple, two-paragraph section that mentions the main points, without any numbers or assertions. The brain-computer comparison merits a separate article, and so does the discussion of various neural models. I guess that good starting points are the books that I reference in the neuroscience section.
Hope that's alright with everyone.
--Olethros 14:31, 17 December 2005 (UTC)
While I think comparing the brain to a computer CPU is entirely irrelevant, computers are made up of layers of virtual machines, one of which may be an ANN, and I think comparing the brain to that virtual machine is completely relevant. Any timing comparisons, if made, should be to the timing of an implementation of such a virtual machine, not the underlying hardware CPU, although such timing comparisons would remain of limited value if comparing discrete ANN operations with the continuous "analog" functions of the brain network. However, I think it would be very informative to present more in the way of comparison between these networks. For example, there is a stroke treatment program by Albert Einstein Medical Center which uses computer-generated visual stimulation to retrain the brain to recover lost sight in almost the identical way that ANNs are trained. There are many other striking parallels between methods of stroke therapy aimed at retraining brain function and methods of ANN training. --12.144.20.254 20:56, 20 December 2005 (UTC)
Newbie here, but I think John said it perfectly...
"However problematic, brain-computer comparisons are fun and help readers get a sense .... what we mean by "computer" JohnJBarton 05:17, 28 September 2005 (UTC)"
Also, I have read edits and comments requesting simpler, less technical/medical/scientific explanations {http://en.wikipedia.org/w/index.php?title=Talk:Neuron&action=edit&section=8}, and received one in person from my son last night {http://en.wikipedia.org/wiki/Second_Amendment_to_the_United_States_Constitution} [or gun rights]. While not suggesting a children's version of Wikipedia, something akin to a "Bill Nye the Science Guy" {http://en.wikipedia.org/wiki/Bill_Nye_the_Science_Guy} / everyman's link on each disambiguation (Neural network) page would be helpful / welcome. JSo9-10 (talk) 19:59, 21 October 2009 (UTC)
Sorry to sound negative, but I don't see how links like hyper-threading, SIMD, MMX, and SSE2 pertain to neural networks. Even ANNs are related to them only in a loose sense.
So far, the article seems to be focusing on a comparison of computers and humans (implementation details), rather than the general concept of neural network. There are whole books written about topics like computers vs. humans, whether the brain is Turing-equivalent, and low-level computer details. But it's arguably not very relevant to the article. Just my thoughts. Wmahan. 23:33, 2005 Apr 13 (UTC)
CBurnett, parallel processing is in no way necessary for neural networks; we just use it because it speeds up the calculations. If you had enough patience, you could most certainly run a NN on a single processor. the1physicist 23:15, 16 December 2005 (UTC)
Parallel processing is LESS powerful than serial. Serial processing can perfectly simulate parallel processing without altering the algorithm. However, parallel processing cannot usually simulate a serial processing algorithm. The advantage of parallel processing is one of speed, not one of function. Parallel is far less functional. This means a sufficiently powerful serial processor could fully emulate the human brain, assuming the algorithm for the human brain (which runs on a massively parallel processor) were fully decoded and understood; that decoding should be the real question, rather than arguing semantics about hardware. This article seems to miss the point, and the arguments about parallel vs serial significantly damage the author's credibility. All of this has very little to do with neural networks. Brian Davis 8-24-06
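Brian's point that a serial machine can perfectly simulate a synchronous "parallel" update is easy to sketch: compute the new state one unit at a time, but read only the old state, and the result is identical to all units firing at once (names here are illustrative):

```python
import math

def sigmoid(x):
    """Bounded activation function for each unit."""
    return 1.0 / (1.0 + math.exp(-x))

def step_synchronous(state, weights):
    """One synchronous update of a fully connected network.

    Although the units conceptually fire in parallel, this loop runs on
    a single serial processor; because each new value is computed from
    the old `state` only, the outcome is exactly what a truly parallel
    machine would produce."""
    return [sigmoid(sum(w * s for w, s in zip(row, state)))
            for row in weights]
```

The simulation costs time, not correctness, which is exactly the speed-versus-function distinction made above.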
The parallel distributed processing of the mid-1980s became popular under the name connectionism. In early 1950s Friedrich Hayek was one of the first to posit the idea of spontaneous order in the brain arising out of decentralized networks of simple units (neurons). A design issue in cognitive modeling, also relating to neural networks, is additionally a decision between holistic and atomism, or (more concrete) modular in structure.
This paragraph doesn't make too much sense to me - seems to be talking about a couple of different topics, neither of which is clearly stated. Unfortunately I don't know enough about the subject to be comfortable rewriting it - anyone want to take a shot? Reedbeta 04:53, 20 Apr 2005 (UTC)
><><><><><><><><><><><><><><><><>
In response to your message, the statement in the article is jamming in a lot of declarations about the history of NNs. But it's all consistent. I am a Cog Sci student at UCSD (where backpropagation was conceived) and am taking NN courses. Decentralized systems is a great thing to look up, and it opens up the mind to a new way of looking at order in the universe (the grand scheme of things). The way we interact with the world doesn't necessarily have to go through the brain and be "computed" in order to cause a reaction (i.e. perceiving the direction of sound is a process done by the auditory system more so than the brain). A good book that I read for a "Distributed Cog." class was "Mindware" by Andy Clark. He gives his two cents on the subject by weighing the disputes against it and the praise for it. Hope this helps. Peace.
- Jonathan Holborn
I think the paragraph, even if jammed, can be understood if one follows the links. The history section needs to be expanded, of course, no doubt. Ben please vote! 05:04, May 23, 2005 (UTC)
I guess this section should talk predominantly about ANNs used in AI.
The big problem is the conflation of three different things in the term ANN:
1) The task
2) The model
3) The learning algorithm
But there are so many different ways to categorise.
1) Model Architecture: feedforward/recursive, modular/monolithic, parameterless
2) Model Dynamics: static/dynamic, deterministic/stochastic
3) Task
3.1) supervised: classification, regression
3.2) unsupervised: clustering, compression, visualisation, pre-processing
3.3) control: optimal stochastic control, reinforcement learning
4) Learning algorithm: gradient-based, EM, stochastic, exact
5) Formalism: ad hoc, biologically motivated, statistical [can apply to architecture, dynamics or learning algorithm]
The thing is that all these influence one another to some extent. Maybe it's a good idea to talk about these things in separate paragraphs, while giving example of particular models/algorithms that correspond to each one.
Then, instead of having a nested list of neural networks, we can just make a table: Name | Architecture | Dynamics | Task | Algorithm ..
How about that?
--Olethros 16:28, 15 December 2005 (UTC)
Alright, there is also an Artificial Neural Network article. I somehow missed that. On the one hand, that article seems to be quite well written; on the other hand it seems to be missing some details and connections with other fields. It will not be easy to edit it. Hm. I am not sure what I should do. For the moment, I have just separated the stuff I added into a new section, called Background. --Olethros 00:04, 16 December 2005 (UTC)
Shouldn't 5,8* be 5.8*?
English is not my native language, so if you agree with this, please edit it into better English, or edit it so that it can be put in the article. Thanks.--GengisKanhg (my talk) 16:45, 6 October 2005 (UTC)
I don't understand what you are trying to say... IIUM(MAS) —Preceding unsigned comment added by 211.25.51.1 (talk) 04:24, 24 September 2007 (UTC)
As models of certain parts of animal neural systems, ANNs fall within the scope of Artificial Intelligence (AI); it is frequently said that ANNs fall within the scope of computer science too. Neither sentence is false, but the second is not entirely true.
Researchers have modelled ANNs with electromechanical equipment, with electronics, and nowadays with computers; in the future, other fields of knowledge may be used. So today ANNs are usually modelled using computers, but computers are only one means, the best right now, of modelling the neural system.
In the figure we see the relation between ANNs, computer science and AI. We see that ANNs are an area of AI, and AI itself is related to computer science because it uses it in most of its fields (e.g. genetic algorithms, fuzzy logic or ANNs), but its goals and scope are not the same. So, ANNs use computer science as a great tool in order to achieve their goal. The relation between ANNs and computer science is almost the relation between AI and computer science. In the past this relation did not exist, or was very small (e.g. in Leonardo da Vinci's time); in the present it is large, and in the future maybe it will change. It depends on the path computer science and AI will follow. If computer science widens to cover any computational process, including biological thinking processes, then AI and computer science will join. --
I want a bipolar and three-input programme
I have to make a point here, which has been missed. There is a lengthy comparison made between computers and brains, speaking mainly of parallelism, number of processing units and type of processing done. I think this discussion is only tangentially related to Artificial Neural Networks.
ANN as a term encompasses a very large class of models. So the article should discuss how the models relate to actual biological neuronal networks. A computer is not a neural network model, so the discussion is not relevant here. Most ANN models are defined mathematically and the computer is merely a simulation platform, though in the case of statistical models, the computer program is almost exactly the same as the math.
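The point that, for statistical models, "the computer program is almost exactly the same as the math" can be seen in a line-by-line transcription of the ordinary least squares normal equations (a generic sketch of my own, not from the article):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b.

    Each line below mirrors a term of the textbook solution:
        a = cov(x, y) / var(x),   b = mean(y) - a * mean(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b
```

A linear unit trained with a squared-error criterion converges to the same solution, which is why such ANNs are best understood as statistical estimators rather than as brain models.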
So, I'd like to see this section renamed or moved, and another section added called 'comparing biological with artificial neural networks' which actually does compare the ANN models with real biological data. There has been a lot of work done on that; see for example the book [http://neurotheory.columbia.edu/~larry/book/ Theoretical Neuroscience], by Peter Dayan and L. F. Abbott
Perhaps a useful categorisation is the purpose of the models:
1 Artificial Intelligence
This includes statistical and ad-hoc models whose purpose is to solve a particular AI task such as prediction, control, pattern recognition. The models' purpose is to solve the task in a practical way; relation to biological neurons is tangential and often formulated as an afterthought.
2 Theoretical Neuroscience
This includes neuromimetic systems (which are physical implementations of neuron-like elements, usually employing analogue electronics) and computational models of single neurons, neural clusters, or complete neural systems.
Research in this area tries to either
a) Relate the function of some biological neural system to a simple mathematical model. The relations can be made from the individual neuron level up to the organism behaviour level. Statistical modelling is frequently used for this - the final purpose is to discover how biological systems solve particular tasks. A common example is the dopamine reward system in the basal ganglia, and its relation to reinforcement learning (which, in turn, is approximate stochastic dynamic programming)
b) Create a simple model that exhibits a particular property observed in biological networks. These mostly deal with the observed behaviour of biological networks, and are concentrated not on discovering how tasks are solved but on discovering what are the essential characteristics that neurons or neural networks have that cause them to behave in a particular way. Common examples include models of the spiking behaviour of neurons, and models of semi-random and oscillatory behaviour exhibited in large neural clusters.
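The relation in (a) between the dopamine reward system and reinforcement learning can be illustrated with a toy TD(0) value update, where the prediction error `delta` plays the role usually compared to phasic dopamine activity. The chain of states and all names here are made up for illustration:

```python
def td0_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One temporal-difference step. `delta` is the reward-prediction
    error: reward received plus discounted prediction for the next
    state, minus the current prediction."""
    delta = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * delta
    return delta

def run_episodes(n_episodes=500):
    """Toy chain s0 -> s1 -> end, with reward 1 on the final step.
    Repeated episodes propagate value backwards: V[1] -> 1, V[0] -> 0.9."""
    V = {0: 0.0, 1: 0.0, "end": 0.0}
    for _ in range(n_episodes):
        td0_update(V, 0, 1, 0.0)
        td0_update(V, 1, "end", 1.0)
    return V
```

Early in learning the prediction error is large at the reward itself; as learning proceeds, value shifts to earlier, reward-predicting states, which is the behaviour reported for dopamine neurons.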
- Does anyone in this group know what a dendrodendritic synapse is? How about axoaxonic? Somatodendritic?
- Is anyone familiar with the anatomy and wiring of the olfactory system?
- Is anyone familiar with the book 'The Synaptic Organization of the Brain' by Gordon Shepherd?
- How about: what percentage of human brain function is feedforward?
- Does anyone in this entire forum know what I'm driving at?
----
I understand what you are trying to say...
- Yes. Dendrodendritic synapse is a connection between two dendrites. Axoaxonic is a connection between two axons, somatodendritic is a connection from a dendrite to the neuronal soma.
- No
- No
- This question is not precise. In any case, all brain function relies on some kind of feedback, obviously; otherwise nothing would be learnt. The question of whether the feedback is only necessary for learning, or whether it is also necessary for non-learning functions, has not been answered, and I am not sure how you can answer it. In an abstract sense, it is possible to perform many functions without any feedback; however, motor movement relies on feedback extensively, and to some lesser extent, vision. I am not an expert on this really, however. And don't ask 'where the feedback comes from' - it is not possible to separate stimuli.
- You want to say that it is impossible to build a human-like intelligence with a feed-forward neural network? --Olethros 00:12, 16 December 2005 (UTC)
Shepherd has conducted extensive research on the olfactory system. He has shown that the structure is complex - at least 'complex' versus a simplistic and reductionist view of neurons having distinguishable receiving, processing and output regions. His book reveals how every processing unit (neuron) acts, to a greater or lesser degree, as a signal MODULATING processor. In other words, the real-time and immediate feedback through, for example, dendrodendritic synapses constitutes systems that are neither loops nor networks. They can modify (or negate) input at the site of initiation of signal transduction to the soma. Shepherd gives examples of these single-unit, non-network integration elements throughout the brain. The issue here is not large systems such as motor, visual, etc., but the neuronal basis of 'brain function', and there's nothing abstract about it. Designs of neural networks have always suffered from focusing on the network and not the neuron. Until the NN field appreciates the full complexity of processing at the single-unit level, the networks produced cannot claim a neuronal pedigree.
Neurobiology is a field of highly specialized, and highly insular, interests. It is my experience that this is a product of the organ we have chosen to study. And few of us are sufficiently familiar with the research of our peers to speak intelligently about their work. The NN field is no different; I sense a broad and deep familiarity with electrical engineering and a cursory exploration of neurobiology in its practitioners. My impression is that the goal of the NN endeavor is emulate 'the way the brain works'. Good luck (I don’t even know what that phrase means). I would be astounded to gain complete functional insight into 1 square millimeter.
Erm, theoretical and computational neuroscience people are working on exactly those things, i.e. what is the relation between neural element complexity and system complexity? I have just added a new section about it; feel free to comment.
ANN people are mostly concerned with using mathematical models for statistical inference and decision making.
It is true that in the past, the ANN people were under the impression that they could reach an understanding of how the brain worked through their simple models. Now that it has been made clear that a simple answer is not possible, the field has split into machine learning and neuroscience - people frequently jump from one to the other, but that's all.
To complicate things, there are some electrical engineers that try and re-create computational neuroscience models with analogue electronic components, but that's another story. --Olethros 15:57, 16 December 2005 (UTC)
I found the description of the mathematics very lacking here. The E function, or whatever it is, isn't even defined. Someone should rework this for clarity. Kaimiddleton 05:43, 19 December 2005 (UTC)
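For what it's worth, in most backpropagation treatments E denotes a sum-of-squares error over the training set, something like:

```latex
E(\mathbf{w}) = \frac{1}{2} \sum_{n} \left\| \mathbf{y}(\mathbf{x}_n; \mathbf{w}) - \mathbf{t}_n \right\|^2
```

where y(x_n; w) is the network output for input x_n with weights w, and t_n is the desired target. Whether that is what the article's author intended, I cannot say; the article should define it explicitly.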
Moved this unreferenced text from main page: >>One good example of a Neural Net today is Derek Smart's Battlecruiser series, which features advanced Neural Net AI.
The results detailing exactly what Nature suggested should be corrected are out... italicize each bullet point once you make the correction. -- user:zanimum
OK, all these are now actually addressed. The only thing that is missing is the Cognitron, about which I cannot really remember a lot of things. I never thought it was a particularly important network. Also, the History of Neural Network Analogy, though now corrected in its essential details, seems to be 'History of Artificial Neural Networks', and a partial one at that. I suggest it be removed completely.--Olethros 20:03, 22 December 2005 (UTC)
Nature wrote, "It is claimed that the Cognitron was the first multilayered neural network...it is very likely that one could find a proposal for a multilayered neural network considerably earlier" that seems to be in direct contradiction to this site which has a heading which reads, "The Cognitron - First Multilayered Network (1975)". I'm looking for a more authoritative source. Anyone other input would be appreciated. Broken S 23:04, 22 December 2005 (UTC)
While I am reasonably happy with the current section, it is perhaps too detailed for Neural Network and should perhaps be moved to ANN as a "Theoretical Background" section, and replaced by a section similar in size to Neural Networks and Neuroscience. Thoughts?--Olethros 16:03, 26 December 2005 (UTC)
As a student trying to decide which classes to take, a breakdown of the applications of NN's would be very useful. Since the world doesn't center around me, I would also argue that most people who look up a scientific topic like this are as interested if not more in the applications or potential applications than they are in the theory, method, or history. Krymson 16:23, 7 January 2006 (UTC)
Abstract applications are outlined in the neural networks and AI section - real-life applications are listed in the main artificial neural networks article. --Olethros 18:47, 7 January 2006 (UTC)
Just to point out - this article is in fact wholly about artificial/computational model neural networks. There is nothing here on real networks. Don't have time to contribute to this myself just now, so am just flagging this as a concernGleng 21:42, 3 April 2006 (UTC)
I have cleaned up the external links a bit by removing the software links. There are two reasons for this. First of all the selection was pretty arbitrary, and there is lots of software out there. Second, we do have a neural network software article. I think however that instead of piling up links indiscriminately it would be good to stick to the simple principle of adding a link if there is an article describing the software. --Denoir 05:55, 1 July 2006 (UTC)
Where I come from, computers might "model" brains or "mimic" brains, but they don't "describe" brains unless they are playing 20 questions with you. And who the hell is Marr? Is this article supposed to be by academics in the field for academics in the field, or is it supposed to be accessible to laymen?
I did some work on readability today. I also removed this paragraph to the talk page. It is an orphan paragraph which does not fit into the flow of the article and says nothing which is directly related to neural network theory.
Removed paragraph: In more recent times, neuroscientists have successfully made some associations between reinforcement learning and the dopamine system of reward. However, the role of this and other neuromodulators is still under active investigation.
Regards to all. Trilobitealive 03:19, 12 January 2007 (UTC)
"In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having."
Although this remark was made in defense of neural networks, it also typifies what sometimes makes them an inferior option: the opacity and generality of their structure, and the often 'black box' approach to their use, mean that they can give workable but suboptimal solutions. In my postgraduate pattern recognition module, the professor claimed that the general nature of the neural net means that it can be used for a variety of tasks, but often a model better suited to the task at hand can give superior results. As an example he noted that in comparisons they did between neural nets and hidden Markov models in speech and pattern recognition, the HMMs won convincingly. 146.232.75.208 14:55, 16 January 2007 (UTC)
There is no mention in the history section of McCulloch & Pitts, the founders of neural computation.
It might be helpful to the reader to explain the current status of neural-net research, if a good source can be found. My understanding, as an AI researcher, is that:
As it currently reads, the article has a somewhat dated feel to it, as if it were circa 1995. --Delirium 07:26, 17 January 2007 (UTC)
Yes, this article has now been bloated again. The introduction should be cut. I'll cut it. --Olethros 09:46, 22 May 2007 (UTC)
I understand that biological neural networks can't be covered in the Artificial neural network article, but why so much duplication with information on computer neural networks - there seems to have been little effort to rationalize what goes in one article as opposed to the other. -- John Broughton (♫♫) 20:07, 12 May 2007 (UTC)
I did some cleanup in the introduction, mainly by removing some marginally relevant and inaccurate statements. I think the intro would be much better if it remains simple. Most of the facts in the introduction that I removed would fit better in the Brain section (i.e. the stuff about the brain not being a von Neumann machine etc), the Theoretical Neuroscience section (i.e. the cognitive modelling stuff) or the History of the neural network analogy section. These sections are already quite long, though, so they need some cleanup. I will leave them as is for the moment. As for the ANN overlap, I will severely reduce the section to give the briefest possible introduction, since a lot of stuff related to supervised learning and cost functions, etc, is duplicated (and extended) in the ANN article. Speaking of which, the List of ANN types section should be in a separate article.
--Olethros 10:00, 22 May 2007 (UTC)
I removed the term "modern" in the first paragraph and replaced it with "more common", since the term modern has connotative meanings that are ambiguous and potentially misleading, not to mention argumentative with reference to those who may use the term as described in sentence number one, i.e. implying that they are somehow not modern.
While some may argue that neural network, by itself, should not be used to represent an artificial neural network, it is a common usage for the term. Consequently, I chose the phrase "more common" to replace "modern". Perhaps neither works here, in which case I would strongly recommend not using any phrase that implies one usage is more correct or more modern.... — Preceding unsigned comment added by 168.68.129.127 (talk)
Title self-explanatory —Preceding unsigned comment added by 65.183.135.166 (talk) 21:54, 25 September 2007 (UTC)
Since the general study deserves, and will increasingly warrant, its own article. Again one of those funny societal/linguistic situations, with the primary common usage of the term being in connection with the artificial systems. I am loath, however, to consign my network or those of others to the artificial article. Lycurgus (talk) 16:21, 26 November 2007 (UTC)
Oppose Agree with above, as the scope for "Neural Networks" includes biological networks as well. --Sylvestersteele (talk) 15:05, 28 January 2008 (UTC)
Oppose Agreed. Merge makes no sense whatsoever to anyone familiar with the material. Both topics are fairly large already, merge would make the page even more unwieldy. Fippy Darkpaw (talk) 20:27, 12 March 2008 (UTC)
Artificial Neural Network deals with computer programming related material, and Neural Network in general relates to human biology. So separate articles are required. —Preceding unsigned comment added by S arkumar (talk • contribs) 05:22, 21 January 2008 (UTC)
Oppose Agreed, merge makes no sense. Can we get rid of the merge suggestion at the top of the page, since clearly nobody supports it? Fippy Darkpaw (talk) 22:19, 21 February 2008 (UTC)
Oppose - I oppose a merge. It's understandable that redundant information can be both frustrating and inefficient. But the methodologies for approaching the problems of creating an artificial brain or artificial neural networks versus a biological neural network can be completely different. I believe that the page Neural networks should act as a generalization of neural networks, a place to compare and contrast the research being done, or a place where information on their commonalities can be put. The page also works as a bit of a sorter: if a large body of research is done, contributors should be able to look at the Neural networks page and assess "Does this research belong on the BNN page, the ANN page? Or is my contribution a general thing, true for both, and thus belongs on the NN page?"--Sparkygravity (talk) 18:54, 22 February 2008 (UTC)
Since all the votes in a few months are against, I'm gonna go ahead and remove the merge tag. Fippy Darkpaw (talk) 20:28, 12 March 2008 (UTC)
My opinion is that they should be separate, yet related to one another via links. When teaching artificial neural networks at UCLA in the early 1990's, I always kept them separate. In every paper concerning them I have written, I delineate the artificial neural networks from the biological ones. In teaching, I introduce a new type of neuron or network by first presenting and outlining its biological counterpart(s) (when they were known to exist and not just as a computational/mathematical construct). Thus, one has biological neurons and networks and artificial neurons and networks. Let's keep them separate yet related. Johnsonalme (talk) 19:10, 30 January 2008 (UTC)
Artificial Neural Nets are part of N-FAI. N-FAI uses various metaphors from biology and physics to discover new problem solving methods. This is distinct from computer models designed to improve our understanding of the brain. I therefore agree that we need two separate, yet related, articles here. A common article would be inconvenient for users and researchers, and potentially misleading for students. —Preceding unsigned comment added by Eelstork (talk • contribs) 17:19, 5 February 2008 (UTC)
Is it just me, or does it seem like there is just a bit too much apologetic material in the "Criticism" section? Psalm 119:105 (talk) 09:38, 30 March 2009 (UTC)
I recognized the following resource in the article Artificial Neural Network. It is a free 200-page illustrated manuscript, available in German and English. I found it easy to read and think it fits this article as well. If you don't mind, I'd try and add it to the external links section within the next few days. 91.55.57.70 (talk) 10:21, 21 September 2009 (UTC)
Done. 80.136.180.134 (talk) 14:17, 25 September 2009 (UTC)
I was redirected to this Neural Network page when I typed in the phrase Neural Computation. My intention was to find information about a field of research, not the specific idea of Neural Networks. I believe that Neural Computation should instead redirect to the Computational Neuroscience page. 128.2.245.12 (talk) 14:49, 11 October 2009 (UTC)