What is the difference between a brain and a computer?

This is because a neuron is constantly getting information from other cells through synaptic contacts. Information traveling across a synapse does NOT always result in an action potential. Rather, this information alters the chance that an action potential will be produced by raising or lowering the threshold of the neuron.
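To make the threshold idea concrete, here is a minimal sketch (in Python, with purely illustrative parameter values, not physiological ones) of a leaky integrate-and-fire neuron: synaptic inputs nudge the membrane potential up or down, and an action potential occurs only when the accumulated potential crosses the threshold.

```python
# Minimal leaky integrate-and-fire sketch: synaptic inputs raise or lower
# the membrane potential, and a spike fires only at a threshold crossing.
# All parameter values here are illustrative, not physiological.

def simulate(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """inputs: per-timestep net synaptic input (positive = excitatory,
    negative = inhibitory). Returns the timesteps at which spikes occur."""
    potential = 0.0
    spikes = []
    for t, synaptic_input in enumerate(inputs):
        potential = potential * leak + synaptic_input  # integrate with leak
        if potential >= threshold:                     # threshold crossed
            spikes.append(t)                           # action potential
            potential = reset                          # reset after spiking
    return spikes

# A single input rarely suffices; summation over time is what matters,
# and the inhibitory input at t=3 delays any further spiking.
print(simulate([0.4, 0.4, 0.4, -0.5, 0.4, 0.4, 0.4]))  # -> [2]
```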

Both have a memory that can grow. Computer memory grows by adding computer chips; memories in the brain grow through stronger synaptic connections. Both can adapt and learn, though it is much easier and faster for the brain to learn new things. Yet the computer can do many complex tasks at the same time ("multitasking") that are difficult for the brain - for example, try counting backwards and multiplying two numbers at the same time.

However, the brain also does some multitasking using the autonomic nervous system. For example, the brain controls breathing, heart rate, and blood pressure at the same time it performs a mental task. Both have evolved over time; the human brain has weighed in at about 3 pounds for about the last 100,000 years.

In the same way that implementation wasn't an issue before, it isn't here either: parallel processing can be implemented equivalently on a serial machine.
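A minimal sketch of that claim, assuming a toy three-unit network: by double-buffering the state so that every unit reads old values and writes new ones, a plain serial loop reproduces exactly what a simultaneous ("parallel") update would compute.

```python
# A "parallel" synchronous update of many units can be reproduced exactly on
# a serial machine by double-buffering: every unit reads the *old* state and
# writes into a fresh buffer. The network below is a made-up toy example.

def parallel_step(state, weights):
    """Compute every unit's next value from the current state, as if all
    units updated simultaneously."""
    new_state = [0.0] * len(state)
    for i in range(len(state)):                # a serial loop...
        total = sum(w * s for w, s in zip(weights[i], state))
        new_state[i] = 1.0 if total > 0.5 else 0.0
    return new_state                           # ...but semantically parallel

state = [1.0, 0.0, 1.0]
weights = [[0.0, 0.6, 0.6],   # unit 0 listens to units 1 and 2
           [0.6, 0.0, 0.0],   # unit 1 listens to unit 0
           [0.6, 0.6, 0.0]]   # unit 2 listens to units 0 and 1
print(parallel_step(state, weights))  # same result any serial order gives
```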

Also, this isn't even true anymore, as most if not all supercomputers used for research have dozens, hundreds, or even thousands of processors. And even consumer-level machines are becoming parallel, with dual-core CPUs coming out in the past few years. The modularity issue I'm intrigued by: clearly the areas of the brain are not as discrete as those in our computers, but don't we still refer to experience occurring in the neocortex?

Although I really don't know enough about this, and I want to know more! My gut instinct is telling me here that a brain based completely on spaghetti wiring just wouldn't work very well. You might call me out for nitpicking here, but CPUs don't require system clocks. Asynchronous processors are being actively researched and produced; a key advantage is that clockless CPUs don't consume energy when they aren't active. Machines based on synchronous processors, on the other hand, constantly have the "pulse" of the clock traveling through the system, and the frequency at which it "beats" determines the speed of the CPU.

The pulse of the clock continuously coursing through all of the circuits also results in "wasted cycles," meaning power is being used when the CPU isn't doing anything, and heat is being dissipated for no reason. Again, this seems to be a superficial architectural difference.
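For illustration only (this is a software cartoon, not real hardware): a clocked design ticks on every cycle whether or not there is work, while an event-driven design wakes only when something actually happens.

```python
# Illustrative contrast between clock-driven and event-driven operation.
# The event times and payloads below are made up.
import heapq

events = [(3, "input A"), (9, "input B")]  # (arrival time, payload)

# Clocked: one tick per time unit, most of them idle ("wasted cycles").
ticks = 0
for t in range(10):
    ticks += 1
    for when, payload in events:
        if when == t:
            print(f"clocked: handled {payload} at t={t}")
print(f"clocked: {ticks} ticks elapsed")

# Event-driven: jump straight from event to event; idle time costs nothing.
queue = list(events)
heapq.heapify(queue)
wakeups = 0
while queue:
    t, payload = heapq.heappop(queue)
    wakeups += 1
    print(f"async: handled {payload} at t={t}")
print(f"async: {wakeups} wake-ups total")
```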

I think that if your intent is to simulate the brain using artificial neural networks, then how the RAM or hard drive works is inconsequential. This one I completely agree with. I always get the feeling when I read philosophy-of-AI papers that some of the philosophers take the sentiment "the mind is the program being executed on the machine that is the brain" too far.

Consequently, and I feel this is actually a central problem with philosophy of AI, they pay too little attention to how the brain actually operates and try to think about how to implement consciousness on a computer without considering how the activities of the brain relate to the mind. Anyway, I think it would be fair to describe the brain as an asynchronous, analog, and massively parallel computer where the hardware itself is inherently mutable and self-organizing.

A consistent thread in your comment is that some differences are merely "implementational" or "architectural" details, and thus are actually unimportant or otherwise superficial. IMO, that attitude is scientifically dangerous: how can you know for sure? I don't think the track record of either is very good; 20th-century advances in statistical theory may be responsible for the few successes in both disciplines (just don't tell Chris at Mixing Memory I said that).

Just adding something to Jonathan's answer to 6: a computer can run entirely in hardware. Actually it's a very strange affirmation, but you know what I mean.

And more: if that hardware is an FPGA, it can mutate and self-organize. I think Chris's arguments would target "today's personal computers" and not computers in general, since "computer" is a very wide term.

Just wanted to chime in on what a great article this is. Not too complex or technical, and it gives a great overview of the significant differences.

Rafael, when you say "I think Chris's arguments would target 'today's personal computers' and not computers in general, since 'computer' is a very wide term," you have a good point, but that is exactly the computer model on which the analogies were and are based. Chris's arguments are about the analogies that were drawn, not about computers, and not about analogies that could be drawn on the basis of future or experimental computer architectures.

Thanks incze - that is exactly where I was coming from. That said, I am now considering a second post entitled "10 important similarities between brains and Turing machines," haha.

You compare high and low: fundamental things about the brain against implementation details of a specific microprocessor.

But Jonathan already said all those things much better than I can. Rafael, your point about targeting "today's computers" is questionable. There are much more obvious differences between a brain and today's computers: the microprocessor is square, the brain is roughly round; the computer runs on electricity, the brain runs on icky chemical stuff; the computer has a hard disk, the brain does not; the computer stores data sequentially, the brain does not. Actually, all of these points are wrong: sometimes trivially, sometimes in a way that completely invalidates the conclusions that accompany them.

But the biggest problem isn't a mistake but an unspoken assumption: the post argues that brains aren't like computers, when the arguments being made are actually about the idea that brains aren't like one particular type of computational device.

I forget where I was reading that before computers, people compared the workings of the brain to the steam engine, and before that, to watches. It seems whatever the most complex and intricate man-made device of the time was, that would be compared to the brain.

It sort of makes sense, because in some ways the brain is like any complex machine, and it's also interesting to compare the best man-made devices with what we may perceive to be the most important natural "device." I do think the brain has tons of things more in common with a computer than a watch (even a complex one), but maybe that is partly due to the current cultural importance of computers and my relative lack of amazement at, and lack of understanding of, the specifics of watches.

Difference 4 is perhaps the most superficial. Synchronous electronics is built to simplify architecture, but as noted it is wasteful and also difficult in large "nets" where the clock no longer looks ideal. I haven't looked at all at today's common dual and quad processors, but it would surprise me if one of the advantages weren't precisely the decoupling of each processor from the others' clock synchrony.

It is anyway rather difficult (read: impossible) to run the same software with the same timing on a more complex machine. Differences 6 and 9: as noted, there are some special systems which allow reconfiguring hardware vs. software to suit the task, and they may become more popular because they also save energy.

And in a small way, that is also what happens when power saving slows down or turns off parts of some modern systems. Ideally, this plasticity could also work around failing components in future aging systems, to increase reliability and save repairs. Who knows, maybe it will become reality. Difference 7: another superficial difference, since signal handling in VLSI components is complex and highly non-linear. Threshold-based and locally connected devices are made to simplify architecture, but are again wasteful and difficult in larger applications.

The difference from point 4 is that the alternatives are not much developed, and may never be used. But it is rather impressive that the Blue Brain project apparently needs to use one processor just to emulate one neuron.

I got to an extreme to explain what I thought, but it was wrong.

I just think there are some restrictions on the arguments above, but Jonathan already said it in detail.

This is a great overview and very educational. Thanks for putting this together. Just the devil's advocate in me.

Chris, I agree with all your points, but I can't help thinking 'straw man' as I read this. I mean, that may not be quite the right word for it, but isn't most of this already pretty well assimilated into common thinking about the brain?

With the exception of 7 (I do think a lot of writers equate neurons with big wet transistors), I don't think I've read anything that compares brain functions with computer circuitry in any kind of a literal sense. Of course, I'm excluding certain philosophers when I say that - there's just no accounting for what some of them will argue. Now, I will admit that I'm not really that well-read on the subject, so maybe I've just been lucky so far.

As someone whose specialty happens to be computer science, I would have to say that I agree overall with your overview, except for a few points. Well, no: you're generalizing a specific architecture (the serial von Neumann machine) as a computer. Parallelism, concurrency, and distribution are huge areas of research nowadays, primarily due to the fact that the hardware industry has reached a plateau with the serial architecture.

You could grant that computers are probably not as massively parallel as human brain architecture, but that's really a question of scale and not essence. And as well, there is a good reason that parallelism hasn't been a hugely popular field up until now: parallel machines are notoriously difficult to program.

Even with the comparatively minor levels of multi-threading and task distribution being used with new multi-core processors on the market, software dev schedules are being doubled and sometimes tripled to accommodate the requirements. Other than that, I don't have many complaints. But when it comes to A.I. as technology, in the end I see it as somewhat akin to human flight: our air-traversing machines are certainly technically different from those produced by nature, but mostly because nature's ad hoc, Rube Goldberg designs didn't prove very useful.

Computing is the same way, IMO; the technical value of A.I. doesn't depend on mimicking nature's designs.

This post is a healthy and much-needed rebuttal to the weird idea that in 20 or so years machine intelligence may exceed human intelligence (the so-called Singularity). The proponents of this idea seem to be basing their belief largely on extrapolating Moore's Law. But if Moore's law holds up for the next 20 years, computers will be only about 4,000 times more capable.

And yes, I'm using the bastardized version of Moore's law that presumes increasing component density [what Moore was really talking about] correlates in a way with increased processing power.
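For what it's worth, the arithmetic behind such figures, under the usual (and debatable) assumption that capability doubles every fixed period:

```python
# Illustrative arithmetic only: if capability doubles every `period` years,
# the multiplier after `years` years is 2 ** (years / period).
import math

years = 20
for period in (1.5, 2.0):            # commonly assumed doubling periods
    factor = 2 ** (years / period)
    print(f"doubling every {period} yr -> about {factor:,.0f}x in {years} yr")

# A ~4,000x figure over 20 years implies a doubling period of roughly:
print(f"{years / math.log2(4000):.2f} years (about 20 months)")
```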

But if artificial intelligence required only, or chiefly, increased hardware power, then it would already exist. It would just take 4,000 times longer than we would deem practical. Before reading this article I would have argued that it is primarily a software problem. Now I have to agree that it is also a hardware problem. But who's to say we can't simulate, if not develop, hardware that will work sufficiently similarly to our organic hardware?

Then it will still come down to a software problem. And we just don't have a good model of how human "brain software" works, and I don't think we will for a very long time.

Chris, thanks for the post: a very concise presentation of the arguments circling in my own field.

I have to say, though, that the discussion is just as good :-) If I might add a few points. Concerning point 10: I think this is something which has been remarkably overlooked, and one which, if you were listing your 10 points in order of importance, I would place near the top.

The fact that brains (and therefore, from point 6, the mind) are embodied (have a body which is localised in the real world) is, I believe, the most important constraint on this system.

I would agree with your point that 'computers' lack this; however, I should point out that the rapidly growing field of cognitive robotics (of which I see myself as a part) is growing in importance precisely because of this. The view that brains (be they biological or otherwise) need bodies (again, biological or otherwise) is thus not one which has been forgotten. Concerning point 3: I agree with your point, and I have to say that I slightly disagree with Jonathan's response that "parallel processing can be implemented equivalently on a serial machine."

I believe that, by definition, the best you can hope for in terms of parallel processing is a simulation of parallel processing. There are clever computational algorithms capable of getting very close in certain circumstances (threading, etc.), but at the end of the day, and if using a single processor, only one computational instruction can be executed at any given time step, thus imposing a degree of serial processing on what would ideally be parallel.

Also, on the point of multiprocessor systems: this is in all likelihood adequate for relatively simple systems. However, I get the feeling (I hasten to add that this is based on limited experience of multiprocessor systems) that for larger systems, if one were to simulate many thousands of neurons on a processor-per-neuron basis, the problem would not be one of raw processing power, but one of communication limitations between the processors (bandwidth, speed, etc.) - although I have come across work which is attempting precisely this (in Switzerland, perhaps?).

Having said all this, I do believe that pseudo-parallel computation is more than adequate for most modelling purposes, and that the shortfall may be compensated for to a certain extent. Concerning point 9: if you are referring to hardware, then I completely agree - but not if you were also including software in that. More generally, the points you have raised are very important ones - particularly your assertion that the brain is nothing like a standard desktop. Maybe your idea of writing another post on the similarities is a good one: on the functional nature of the two rather than the structural nature!

Thanks again though!

The things you say are all roughly correct; to make them more so would bog the article down in details that aren't important to the majority of readers.

If all you mean to do is tell people with a rough understanding of how their computer works that their brain doesn't work the same way, then all of your points are valid. If you want to get into cutting-edge, high-end, or low-market-share technology then the argument requires more support, but is far from invalidated.

Also, the argument that the brain is not like a computer does not mean that the computer can't simulate some aspects of the brain - it means that they don't inherently work the same way. Before I annoy anyone, here's something for Jonathan: as far as modularity goes, there is some.

You can predict what kind of deficits a person will have based on where an injury occurs. The problem comes from assuming that this is where the processing of that particular thing is done. To use computers as an analogy (I can't help it): if we cut the power cord on a computer, it stops adding numbers together; thus, addition takes place in the power cord. To save reading all of this lengthy post, my other responses to Jonathan are summarized. 2, 3, and 5: we can't accurately describe these phenomena, let alone model them.

The distinction between the actual brain and the computer model only disappears when the model is completely accurate. Also, Chris has massively understated the differences between RAM and working memory. It would be cruel and severely diminish the capability of your fly. For instance, to make capacity vary you could make some of your RAM unavailable sometimes. Why would you do that? To assume that the current method of simulating the spreading activation of a neuron (a lookup table) is accurate is to assume that we know all about how this addressing works.

I assure you that we don't. Are you trying to tell me that the amount of RAM available will affect how we traverse a neural-network lookup table? Because then the difference between working memory (which we don't really understand either) and RAM becomes extremely important.
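For readers unfamiliar with the lookup-table approach being criticized, here is a minimal sketch with entirely made-up nodes and weights: the "network" is just a table mapping each node to weighted neighbors, and activation spreads by repeated lookups. Nothing in it reflects how biological addressing actually works, which is the commenter's point.

```python
# A sketch of lookup-table spreading activation: the network is a table
# mapping each node to its weighted neighbours, and activation "spreads"
# by repeated lookups. Nodes and weights below are invented for illustration.
network = {
    "dog":  [("bark", 0.8), ("cat", 0.5)],
    "cat":  [("meow", 0.8), ("dog", 0.5)],
    "bark": [],
    "meow": [],
}

def spread(seed, steps=2, decay=0.5):
    """Spread activation outward from `seed` for `steps` rounds."""
    activation = {seed: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for node, level in activation.items():
            for neighbour, weight in network[node]:   # pure table lookup
                new[neighbour] = max(new.get(neighbour, 0.0),
                                     level * weight * (1 - decay))
        activation = new
    return activation

print(spread("dog"))  # activation reaches "bark", "cat", then "meow"
```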

Thus, when Jonathan says "implement a neural network," does he mean a current neural network, in which case it isn't really very much like the brain, and thus not in conflict with this article at all? Or does he mean implement an accurate model of all functional aspects of the brain? Because computers aren't like that now, and we have no evidence they ever will be. The simple fact is that arguing that the brain is analogous to a Turing machine is a dangerous thing to do. Philosophers have created theoretical machines capable of solving the halting problem (for the uninitiated, that's a problem computers can't solve).

The brain may be a realisation of some super-Turing machine. It is true that any parallel arrangement of Turing machines can be modelled by a single machine, but it is not certain that the brain can be modelled by a collection of parallel Turing machines.

Yeah, we've built "artificial neural networks," but most of those are research simulations! Simulating analog processes on a digital system (or vice versa) tends to pull in huge overheads, worsening the basic order of the computational costs -- and it still isn't "exact."

Simulating massively parallel systems on CPU-based systems is worse, and less reliable. The CPU version fundamentally has time cost at least linear to the number of nodes and connections, whereas a true parallel system does not.

It might well be possible to make something like "content-addressable" memory in the RAM model, but it would be a "bloody hack" with no connection to our usual programming schemes, or to a biological-style memory. Then too, our ability to "program" neural nets is frankly humbled by the ordinary development of almost any vertebrate's nervous system.
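To illustrate what such a "bloody hack" might look like, here is a toy sketch (made-up records, brute-force linear scan): content-addressable retrieval layered on top of ordinary location-addressed storage, fetching a whole record from a partial cue.

```python
# Toy content-addressable retrieval on top of address-based storage: instead
# of fetching by address, scan every record and return the best partial match.
# All records here are invented for illustration.
memory = [
    {"name": "apple",  "colour": "red",    "taste": "sweet"},
    {"name": "lemon",  "colour": "yellow", "taste": "sour"},
    {"name": "cherry", "colour": "red",    "taste": "sweet"},
]

def recall(cue):
    """Return the stored record sharing the most fields with the cue."""
    def overlap(record):
        return sum(1 for k, v in cue.items() if record.get(k) == v)
    return max(memory, key=overlap)

# A partial pattern retrieves a whole record, no address required:
print(recall({"colour": "yellow"}))                 # -> the lemon record
print(recall({"taste": "sweet", "colour": "red"}))  # -> apple (first of ties)
```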

And here is one dimension along which the brain is like a computer. Here, we review models of the higher-level aspects of human intelligence, which depend critically on the prefrontal cortex and associated subcortical areas.

The picture emerging from a convergence of detailed mechanistic models and more abstract functional models represents a synthesis between analog and digital forms of computation. Specifically, the need for robust active maintenance and rapid updating of information in the prefrontal cortex appears to be satisfied by bistable activation states and dynamic gating mechanisms.

These mechanisms are fundamental to digital computers and may be critical for the distinctive aspects of human intelligence.
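A toy sketch of the "robust active maintenance and rapid updating" idea from the passage above, with made-up values: a unit holds its current state against ongoing input (the bistable hold) and overwrites it only when a gate opens.

```python
# Toy version of "robust maintenance plus rapid updating": the unit keeps its
# current value (bistable hold) and takes new input only when the gate is
# open. Parameters and inputs are illustrative, not biological.

def gated_unit(stream):
    """stream: iterable of (input_value, gate_open) pairs."""
    held = 0.0
    trace = []
    for value, gate_open in stream:
        if gate_open:
            held = value   # rapid updating: overwrite the held state
        # else: the held state is actively maintained despite new input
        trace.append(held)
    return trace

# The input keeps changing, but the unit only updates when gated:
stream = [(0.9, True), (0.1, False), (0.4, False), (0.7, True), (0.2, False)]
print(gated_unit(stream))   # [0.9, 0.9, 0.9, 0.7, 0.7]
```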

I think this list is all correct, except for the last item. Much more likely is that the vast majority of the complexity is accidental. If Moore's law holds, I don't see why we can't have supercomputers within ten years that could simulate a human brain, given a model that we probably won't have for another 15 years.

Re: 'computer' defined. It appears inevitable to speak in analogies when discussing AI, so the analogy I'd offer is this: the brain is like what we would call a computer network.

It has areas of specialization of function, but it operates overall by combining data, memory, and processing to arrive at decisions. Of course, one gets into the philosophy of human intelligence here.

I think it's important to note that Chris is not criticizing or "targeting" computers.

His target is the erroneous thinking that people working in cognitive science, AI, etc. engage in. Or more accurately: "their conceptualization of a brain" and "their conceptualization of a computer."

From what I read, it seems that part of what created said thinking was a mistaken idea of how a computer is built, and so in this article Chris must explain a few points about how typical computers work in order to help dispel the incorrect portions of the analogy. Don't take that to mean that he is making authoritative statements about computers and what they can be or that his article is meant to target shortcomings of computers.

Any criticism is targeted at a tendency for some researchers and lay people to think of a brain as being like the computer they sit at every day. So yes, there are alternate computer architectures that more closely resemble a brain. And certainly computers don't have to work the way Chris describes.

But the model of "computer" that he uses very strongly matches that used by people who subscribe to the erroneous analogy he is attempting to debunk.

Your points are well-taken and very relevant when people start talking about resemblances between brains and computers. But it seems a little misguided to compare brains to computers, whose current form was a result of a lot of historical and technical factors: technology available, standardization decisions taken, and so on.

I think computers could have taken other forms -- for instance, if the von Neumann architecture had not proved to be so influential, isn't it possible that computers today would be massively parallel?

The key difference, it seems to me, is the difference between brain processes and computational processes. What is the role of brain processes in our interaction with the world? At what level of description are brain processes computational, if at all? And so on.

Sorry for the delay in posting all these very thoughtful comments - I have been camping in Utah for a couple of days, and just now got access to the intarweb :)

I understand that they made the mistake of only considering single-layer networks when pouncing on perceptrons; if they had considered multi-layered networks, they would have seen that things like XOR are possible. Linearity and analog systems notwithstanding, I can say with the hindsight of a huge generational gap that it just seems silly to me that they didn't consider multi-layered networks.
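The XOR point can be shown in a few lines. This sketch uses hand-picked weights (nothing here is learned): a single threshold unit cannot compute XOR, but two layers of threshold units can.

```python
# XOR with two layers of threshold units. The weights are hand-picked for
# illustration, not learned: no single-layer perceptron can compute XOR.

def step(x):
    """Threshold (Heaviside) activation."""
    return 1 if x > 0 else 0

def xor(a, b):
    h_or  = step(a + b - 0.5)          # hidden unit computing OR
    h_and = step(a + b - 1.5)          # hidden unit computing AND
    return step(h_or - h_and - 0.5)    # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor(a, b)}")
```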

We can know for sure because all modern-day digital computers are Turing-equivalent, meaning any program implemented on one can be implemented on another and be computationally equivalent despite differences in system design. Just as the brain only has hardware, as you said (there is no software that is the mind running on top), the only thing that counts when programming a mind is the software. The high-level software need not be concerned with memory registers when representing knowledge, and "pointers" can be implemented on a system that uses only non-volatile memory.

I think the only real problem here is whether or not digital computers can simulate the continuous nature of the brain. If it is the case that a discrete state machine is not hindered by this, then the brain's architecture, with all of the intricacies of neuronal activity, can be implemented to the fullest extent with no other problem (although we'd of course want to abstract away as much complexity as possible).
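As a small illustration of the discrete-versus-continuous question, here is a forward-Euler sketch of a continuous decay equation, dV/dt = -V/tau: shrinking the step brings the digital approximation arbitrarily close to the analytic solution without ever being exact.

```python
# A digital machine can only step a continuous equation like dV/dt = -V/tau
# in finite increments. Smaller steps approach the analytic answer but never
# reach it exactly. Constants here are arbitrary.
import math

tau, t_end, v0 = 1.0, 1.0, 1.0
exact = v0 * math.exp(-t_end / tau)       # analytic (continuous) solution

for dt in (0.1, 0.01, 0.001):
    v, t = v0, 0.0
    while t < t_end - 1e-12:
        v += dt * (-v / tau)              # forward Euler step
        t += dt
    print(f"dt={dt}: V={v:.6f}  (error {abs(v - exact):.2e})")

print(f"exact: {exact:.6f}")
```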

However, if digital computers cannot simulate continuous structures with sufficient robustness, then I think AI would have to start putting more research into analog circuits. But I don't think we currently have enough evidence to make the case for either. So yes, brains and PCs have different architectures, but that doesn't mean you necessarily cannot implement a mind on a computer.

I never argued that the brain is not an information processing device, but for some reason many think that's what this post is about.

But perhaps we could push shreeharsh in that direction? On the other hand, while there is substantial reason to believe that embodiment is important, a lot of the arguments used to support this claim are far too philosophical for my taste (and indeed I am currently collecting experimental evidence against one of the strongest claims for the importance of embodiment in developmental psychology).

You'll notice the evidence I present in 10 actually pertains to immersion rather than embodiment (a logical fallacy I permitted myself ;). I believe embodiment is important, but I don't think it's actually been proven. Brian - I saw Randy's Science paper and, probably like yourself, was very surprised. So I have some difficulty taking that paper's perspective very far. I sure don't remember telling you that!

If you want creativity, energy efficiency, and prioritization, a human is your best bet.

We can work together and enjoy the best of both worlds. That is, until Skynet becomes self-aware.


The current debate has to do with how memories disappear: whether it is simply because they fade, or because other memories interfere. In computers, however, we access information through the exact address at which it is located. You can also make an index of memories and access them by neighboring terms.
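A tiny sketch of the contrast just described, with made-up addresses and cue words: exact-address access on one hand, and an index of "neighboring terms" on the other.

```python
# Two access styles: computers fetch by exact address, but you can also build
# an index that reaches an item via related terms. All data here is made up.
memories = {0x10: "trip to the beach", 0x20: "first day of school"}

# Exact-address access: you must already know where the memory lives.
print(memories[0x10])

# Index by neighboring terms: map cue words to the addresses they evoke.
index = {"sand": 0x10, "waves": 0x10, "teacher": 0x20, "chalk": 0x20}
print(memories[index["waves"]])   # a cue word retrieves the stored memory
```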

Difference Between Human Beings and Computers

I will try to explore the basic differences between human beings and computers. A human is a living being, with needs such as food and water, and has emotions; a computer is a nonliving electronic device made by humans that consumes electricity and is emotionless. A human has a brain and can think creatively from different angles; a computer works on just what has been fed to it through commands. A human makes decisions according to his or her thoughts, while a computer can perform multiple tasks at the same time. Human short-term memory works with nearby cues and ideas that lead to long-term memory, and a human brain works as an analog device with varying processing speeds.


