Brains Are Not Computers

I cracked up when I saw the headline: Why your brain is not a computer. I kept on grinning while reading it because it makes some of the same points I’ve tried to make here. It’s nice to know other people see these things, too; it’s not just me.

Because, to quote an old gag line, “If you can keep your head when all about you are losing theirs,… perhaps you’ve misunderstood the situation.” The prevailing attitude seems to be that brains are just machines that we’ll figure out, no big deal. So it’s certainly (as ever) possible my skepticism represents my misunderstanding of the situation.

But if so I’m apparently not the only one…

I’m not sure how much I can add to what the article says, so I’ll mostly share some bits that really caught my eye. I recommend reading the article itself for the full text.

It starts with the major progress we’ve made in neuroscience in understanding how the brain functions. (This leads to a view of the brain as a mechanism that can be reduced to its parts — a view that leads many to reject David Chalmers’ notion of a “hard” problem.)


And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.

Which perhaps raises the question: Have we simply not gathered enough evidence? Are we just too early in our science to understand how the pieces we have so far work together? Or is there something forever opaque when it comes to the brain?

This article argues in favor of the latter possibility. (Obviously I agree.)

It is possible that repeated calls for more theory may be a pious hope. It can be argued that there is no possible single theory of brain function, not even in a worm, because a brain is not a single thing. (Scientists even find it difficult to come up with a precise definition of what a brain is.)

The author (Matthew Cobb) goes on to write about how the brain is an evolved system, and different parts of it evolved at different times to meet different needs.

That makes the brain an amalgam of sub-systems with more sophisticated systems evolving on top of more primitive ones. We still make reference to our early origins when we talk about thinking with our “lizard” or “hind” brains. A lot of psychology relates to motivation from our more primitive selves.

Churchland and Abbott spelled out the implication: “Global understanding, when it comes, will likely take the form of highly diverse panels loosely stitched together into a patchwork quilt.”


So far nothing very controversial, and I think most would agree about the brain being an amalgam.

But then we get to computationalism:

For more than half a century, all those highly diverse panels of patchwork we have been working on have been framed by thinking that brain processes involve something like those carried out in a computer. But that does not mean this metaphor will continue to be useful in the future. At the very beginning of the digital age, in 1951, the pioneer neuroscientist Karl Lashley argued against the use of any machine-based metaphor.

(1951. As usual, I’m actually late to the party.)

Lashley wrote that our brain metaphors go back to Descartes, who had a hydraulic model. Then came telephone models, electrical field models, and ultimately computer models.

Essentially our models match the current technology. That technology creates a language that informs — and limits — how we model the world.

True to form, now there are quantum models of brain function.

This dismissal of metaphor has recently been taken even further by the French neuroscientist Romain Brette, who has challenged the most fundamental metaphor of brain function: coding. […] Brette’s fundamental criticism was that, in thinking about “code”, researchers inadvertently drift from a technical sense, in which there is a link between a stimulus and the activity of the neuron, to a representational sense, according to which neuronal codes represent that stimulus.

In other words, viewing these systems through a “computer” lens colors the view in ways that may not match what’s really happening.

One issue is that this excludes seeing the interconnectedness of brain systems. A computational view is linear, like a line of dominoes falling, a series of causes and effects. But the brain is a vast mesh of interconnected and interacting systems.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function.

(Or, as I’ve said so many times: “The brain is not a computer!” 😀 😉 )


Cobb mentions Hungarian neuroscientist György Buzsáki, who in his recent book The Brain from Inside Out, concludes that the brain doesn’t represent information, it constructs it.

I rather liked that turn of phrase. We do construct our personal model of reality.

I’ve been reading about how the eye works and what kinds of signals it sends to the brain. It’s rather astonishing how we create a three-dimensional model of our surroundings from those signals.

That applies to all our senses. Our model of reality is just that, a model we create. We also create imaginary models, often of what will happen in the future. It’s what allows us to make choices.


Cobb writes about how science metaphors are both useful and limiting. They also have a life-span, and we may be coming to the end of the computer metaphor for the brain. (So maybe I’m actually early to the party?)

One problem is that it’s not clear what will replace it (and maybe no metaphor should). And obviously not everyone agrees the view has reached retirement age. But a possible sign of it is that people are questioning it.

Cobb does an interesting bit about emergence. He first differentiates between weak and strong emergence. But…

…weak emergence cannot explain the activity of even the simplest nervous systems, never mind the working of your brain, so we fall back on strong emergence, where the phenomenon that emerges cannot be explained by the activity of the individual components.

However strong emergence can be criticized for lack of an obvious causal mechanism. But…

…faced with the mysteries of neuroscience, emergence is often our only resort. And it is not so daft – the amazing properties of deep-learning programmes, which at root cannot be explained by the people who design them, are essentially emergent properties.

I am sympathetic to this view. (I recently argued in favor of strong emergence.)


Cobb is just getting started about computers:

A related view of the nature of consciousness turns the brain-as-computer metaphor into a strict analogy. Some researchers view the mind as a kind of operating system that is implemented on neural hardware, with the implication that our minds, seen as a particular computational state, could be uploaded on to some device or into another brain. In the way this is generally presented, this is wrong, or at best hopelessly naive.

Which I have to admit echoes my own sentiments. I have argued long and hard against the idea that a “mind algorithm” exists.

As Cobb goes on to write, “It implies that our minds are somehow floating about in our brains, and could be transferred into a different head or replaced by another mind.”

Yes. Exactly. This form of computationalism is old-fashioned dualism.

Later he writes:

Even something as apparently straightforward as working out the storage capacity of a brain falls apart when it is attempted. Such calculations are fraught with conceptual and practical difficulties. Brains are natural, evolved phenomena, not digital devices.

Indeed. The brain is not RAM or a hard drive.

Also, logic gates (which neurons do bear a resemblance to) do not a computer make. They are just a small part of what it takes to make a computer. And for that matter:

A neuron is not like a binary switch that can be turned on or off, forming a wiring diagram. Instead, neurons respond in an analogue way, changing their activity in response to changes in stimulation.
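That last point can be shown with a toy contrast (the function names and numbers here are mine, invented for illustration, not anything from the article): a logic gate is exhaustively described by a tiny truth table, while even a crudely simplified rate-model neuron responds along a continuum.

```python
# A toy contrast, not a biophysical model: a logic gate maps binary
# inputs to one binary output, while a (very crudely simplified)
# neuron produces a graded firing rate that varies continuously
# with its input.

def and_gate(a: int, b: int) -> int:
    """Binary: the output is 0 or 1, nothing in between."""
    return 1 if (a and b) else 0

def neuron_rate(stimulus: float, threshold: float = 1.0, gain: float = 50.0) -> float:
    """Analogue-ish: firing rate (Hz) rises smoothly above threshold."""
    drive = stimulus - threshold
    return max(0.0, gain * drive)

# The gate has exactly four possible behaviors...
print([and_gate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

# ...while the neuron's response is a continuum.
for s in (0.9, 1.1, 1.5, 2.0):
    print(s, neuron_rate(s))
```

Even this sketch understates the difference, of course — a real neuron’s “threshold” and “gain” are themselves dynamic.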

What’s funny to me is how many times I’ve made these same arguments. I’ve written plenty of posts (and comments) that have said the same things.


This part made me chuckle:

In 2015, the roboticist Rodney Brooks chose the computational metaphor of the brain as his pet hate in his contribution to a collection of essays entitled This Idea Must Die.

And I’ve been accused of having strong feelings on the matter.

[In fact I don’t. But I do enjoy a debate and argue any point with some degree of passion. It’s just a sign I believe in what I’m saying. I’m no sophist.]


Cobb writes about a 2017 experiment in which two neuroscientists attempted to understand a simple computer system using neuroscience techniques.

And failed.

They were unable to detect the hierarchies of information. They could see all the activity, but they couldn’t really see the operating system or the application it was running.

Which is something I’ve argued before: If we sit inside a computer and watch the information flow, there is nothing that really distinguishes one program from another. An algorithm running on an execution engine is entirely arbitrary and user-defined.
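A miniature version of that argument can be run on Python’s own bytecode (the two functions here are invented purely for illustration): a temperature conversion and a game-scoring rule that happen to use the same arithmetic compile to identical instruction streams, so nothing at the instruction level tells you which “program” you’re watching.

```python
import dis

# Two "different" programs: one converts Celsius to Fahrenheit,
# one scales a game score. Watched from inside the machine, their
# instruction streams are identical; the meaning lives entirely in
# how we interpret the inputs and outputs.

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

def game_score(base):
    return base * 9 / 5 + 32

ops_a = [i.opname for i in dis.get_instructions(celsius_to_fahrenheit)]
ops_b = [i.opname for i in dis.get_instructions(game_score)]
print(ops_a == ops_b)  # True: the low-level traces are indistinguishable
```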

I was struck by how, while they could see various synchronization signals, which showed up prominently in data-reduction, they were unable to determine whether those signals were byproducts or crucial inputs to the process.

Brain researchers apparently struggle with the same conundrum regarding synchronous signals in the brain. Byproducts or crucial inputs?


Cobb concludes, in part:

This is the great problem we have to solve. On the one hand, brains are made of neurons and other cells, which interact together in networks, the activity of which is influenced not only by synaptic activity, but also by various factors such as neuromodulators. On the other hand, it is clear that brain function involves complex dynamic patterns of neuronal activity at a population level.

One might even be tempted to call it a “hard” problem.


Obviously this is controversial territory. The study of human consciousness is one of the most fragmented and questing branches of science I know.

Even theoretical physics and cosmology seem more grounded in at least having definitions scientists agree on. Consciousness lacks common definitions for its most fundamental aspects.

(I’m also inclined to think it suffers from too much philosophy.)

In any event, I got a huge kick out of this article. It’s an excerpt from his book, The Idea of the Brain, which will be published this month.

The bottom line, perhaps, is to not get carried away with metaphors and, in particular, to maybe move on from the “brain is a computer” one.

Stay metaphorical, my friends!

About Wyrd Smythe

The canonical fool on the hill watching the sunset and the rotation of the planet and thinking what he imagines are large thoughts.

17 responses to “Brains Are Not Computers”

  • Wyrd Smythe

    Here’s a video that would have been useful to include in some previous posts where I talked about computers being just overgrown calculators:

    More to the point, the reason why it’s so hard to determine what the software is doing is that this process Tom describes looks pretty much the same no matter what software is running. It’s only when interpreted in its overall effect that the virtual reality of the software emerges.

  • Wyrd Smythe

    It may be helpful to distinguish between (what I call) “strong” computationalism — which sees the brain (if not everything) as, not metaphorically but literally a computation and, thus, a computer — and “weak” computationalism — which just claims brain function can be simulated by computation.

    The thing is, if Cobb’s point (and mine) about the metaphor being wrong is correct, or at least has argumentative weight, then it has even more weight against a literal view. I find it persuasive, but I already share Cobb’s view so of course I do.

  • SelfAwarePatterns

    I disagree. (No doubt you are shocked. 🙂 )

    There was a lot of discussion among neuroscientists about this article on Twitter. Much of it making the points I’ve made before. (Which, of course, I got from them, so no originality on my part.)

    In summary, Cobb is arguing against the brain being a Von Neumann machine. No serious neuroscientist argues that it is, so he’s arguing against a strawman. Blake Richards makes the typical case in this thread.

    Cobb’s implication that neuroscientists are lost and unable to make progress is a notion I keep seeing from mysterian philosophers, psychologists, and others outside of the field (Cobb is a zoologist). It’s discordant with the vast majority of what I see in neuroscience literature.

    • Wyrd Smythe

      “I disagree. (No doubt you are shocked. 🙂 )”

      So shocked! 😉

      “In summary, Cobb is arguing against the brain being a Von Neumann machine.”

      This is kind of the crux of it. Neuroscientists want to call the brain a “computer” while at the same time saying “not that kind of computer.” But if the brain is “not that kind of computer” why would there be any equivalence with computers that are “that kind of computer”?

      You’ve said in the past that the brain is not a “conventional computer” and also that the brain is not “a Turing Machine.” Obviously I agree.

      Here we’re saying “von Neumann machine,” which I think is a little bit of its own strawdog because there are computers that don’t use the von Neumann architecture. Or if it’s not a strawdog, it’s ignorance of how “computing” is defined. Not by von Neumann architecture, but by what a TM can accomplish (or lambda calculus, which is purely mathematical).

      So you were far more on track than Richards when you said the brain wasn’t a TM.

      Indeed not.

      But conventional computers are TMs, so why would conventional computers be analogous to brains?

      As I have said before, we can call what the brain does “computation” only if we also say that an analog transistor radio “computes” audio from radio waves. What the brain does is far more like an analog circuit than a digital computation.

      But doesn’t expanding the definition that way destroy the putative equivalence? Doesn’t it admit it will require a special analogue “computer” to do what the brain does?
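For what it’s worth, the transistor-radio comparison can be sketched in a few lines (a digital simulation, somewhat ironically, of an analogue process; all the numbers are arbitrary): an AM envelope detector “computes” the audio out of the radio wave with no algorithm, no steps, and no code — just a rectifier and a low-pass filter continuously transforming one signal into another.

```python
import math

# A sketch of what an analogue radio "computes": an AM envelope
# detector, simulated digitally here. The diode rectifies; the RC
# filter smooths. In the physical circuit no algorithm "runs" --
# the physics just continuously transforms signal into signal.

RATE = 100_000      # samples/sec (simulation only)
CARRIER = 10_000.0  # Hz
AUDIO = 100.0       # Hz

def am_signal(t):
    """A carrier amplitude-modulated by a 100 Hz tone."""
    envelope = 1.0 + 0.5 * math.sin(2 * math.pi * AUDIO * t)
    return envelope * math.sin(2 * math.pi * CARRIER * t)

def envelope_detect(samples, alpha=0.05):
    """Half-wave rectify (the 'diode'), then a one-pole low-pass (the 'RC')."""
    out, y = [], 0.0
    for x in samples:
        rectified = max(x, 0.0)
        y += alpha * (rectified - y)
        out.append(y)
    return out

samples = [am_signal(n / RATE) for n in range(2000)]
recovered = envelope_detect(samples)
print(max(recovered))  # the 100 Hz audio envelope, recovered
```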

      “Cobb’s implication that neuroscientists are lost and unable to make progress”

      I think that overstates his position. He starts the article with how advanced the science is. He is only arguing against what he sees as an outdated metaphor.

      “(Cobb is a zoologist)”

      Firstly, attacking his credentials is ad hominem. (We’re not even zoologists. What does that say about our opinions?) Maybe this is his other field of study and he’s been actively following it a long time.

      Secondly, zoologists study animals, which have brains. This may well be within his field of study.

      “It’s discordant with the vast majority of what I see in neuroscience literature.”

      His point is that it’s a global view that needs rethinking. He’s a bit like Sabine Hossenfelder arguing against the prevailing views in the theoretical physics field.

      • SelfAwarePatterns

        “Or if it’s not a strawdog, it’s ignorance of how “computing” is defined.”

        In addition to being a neuroscientist, Blake Richards is also an AI researcher. But for the views of a straight computer scientist:

        “I think that overstates his position.”

        Cobb’s words:

        And yet there is a growing conviction among some neuroscientists that our future path is not clear. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach.

        And then your own takeaway in the post:

        Or is there something forever opaque when it comes to the brain?

        This article argues in favor of the latter possibility. (Obviously I agree.)

        “Firstly, attacking his credentials is ad hominem.”

        Pointing out that he’s not a neuroscientist, and grouping him with philosophers and psychologists, is not attacking his credentials. If his specialty was how the brain works, then I’m sure he’d list neuroscientist or neurobiologist on his bylines.

        I will take (most) neuroscientist opinions on neuroscience over a zoologist’s (or a philosopher’s or psychologist’s). If the subject were ethology, anatomy, or something along those lines, then (most) zoologist opinions would outweigh neuroscientist ones.

        We are indeed amateurs, crucially dependent on the experts. The truth is neither of us has first-hand knowledge of the relevant scientific research. It’s why I don’t pretend to have my own scientific theory of consciousness. I might speculate, but always as an amateur, a student. Whenever we think we’re seeing things the experts are missing, we’re most likely making first-year graduate student mistakes (at best).

      • Wyrd Smythe

        Any reference to the person making the argument, rather than the content of the argument itself, is, by definition, ad hominem. (I think a good goal in a debate is to never use the word “you” or to characterize the other person in any way. Debates should be strictly about ideas. Not always possible, but a worthy goal.)

        Simply put, people with the very best credentials are sometimes very wrong, and people with no credentials at all are sometimes spot on. I agree we depend on experts to learn, but at some point we are no longer novices and are capable of being thoughtful and discerning about the material. We reach a point where we don’t depend so much on a trusted so-and-so saying something but on what is actually said.

        As for Cobb overstating or not, that’s also a distraction from the content of his argument. FWIW, I think there is a difference between “Cobb’s implication that neuroscientists are lost and unable to make progress” and what I or Cobb said.

        (Besides, if it really is overstating the point, then it should be easy to refute. 😉 )

        Getting to the content, the only point in play seems to involve von Neumann architecture. The Melanie Mitchell Twitter thread also invokes a “von Neumann architecture.” (And kudos for using “architecture” rather than “machine” — a von Neumann machine is something different.)

        But this is still a strawdog because no one on either side is making any claim about von Neumann architecture. The phrase “von Neumann” doesn’t even appear in Cobb’s article. So, yeah, I think it is a strawdog.

        A von Neumann architecture is a specific way to build a computer, and most conventional computers do use it. But it’s not what a computer scientist invokes when talking about computation. Computation is not defined with any reference to von Neumann architecture. Computation is defined in reference to a Turing Machine.

        I think Mitchell takes the conversation to a better place with: “But the question remains whether a broader, more abstract notion of *computation* can be useful when thinking about how the brain operates.” (I suspect Cobb might agree it has been useful but might be due for retirement.)

        Which goes back to my point about expanding the definition.

        I think there’s a succinct way to put it:

        CS Axiom: Computer1 ≡ TM
        All agree: Brain ≢ TM
        Therefore: Brain ≢ Computer1
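That little syllogism is just modus tollens over an equivalence; as a sanity check it can be stated formally (a sketch, treating Brain, Computer1, and TM as opaque propositions and the CS axiom as a hypothesis):

```lean
-- Hypotheses: the CS axiom (Computer1 ↔ TM) and Brain ≢ TM.
-- Conclusion: Brain ≢ Computer1.
example (Brain Computer1 TM : Prop)
    (ax : Computer1 ↔ TM) (h : ¬(Brain ↔ TM)) : ¬(Brain ↔ Computer1) :=
  fun hb => h (hb.trans ax)
```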

        A Computer1 is a conventional computer equivalent to a Turing Machine. They are characterized by:

        1. Operation: an algorithm with steps
        2. Processing mode: digital, numeric (“codes”)
        3. IPO Data flow: linear:digital:linear

        Then we also have:

        Asserted: Brain ≡ Computer2
        However: Computer2 ≢ Computer1

        Because of the conclusion above. So the bottom line is we’re talking about some other kind of computer. An analogue system characterized by:

        1. Operation: no algorithm; no steps
        2. Processing mode: magnitudes, forces, pressures, etc.
        3. IPO Data flow: linear:linear:linear

        Which is a very different ball of wax. (To the extent we’re talking Positronic brain, I’ve always been on board.)

        The objection is that the phrase “the brain is a computer” is misleading because it doesn’t mean the sort of computer pretty much everyone binds with the term. We’re talking about something that has far more in common with a transistor radio than a computer.

        To bring this all home, Mitchell also suggests: “This seems to me to be the correct approach to the question ‘is the brain a computer?’: To rephrase it as a better question: ‘Does the notion of computation, construed broadly, offer a useful theoretical framework for understanding brain processes?'”

        And what Cobb is suggesting is that maybe it was, but the very “vocabulary” Mitchell mentions might be limiting and maybe it’s time to move beyond it to a more systems-oriented approach.

  • Wyrd Smythe

    As an aside, I think there is also a kind of scale problem here. The proposed equivalence between the brain and a computer often places a lot of weight on how neurons are like logic gates.

    Which, indeed, overall, they are. They do sum inputs into a single output, but that very general property is about all they have in common.

    But the behavior of a neuron is far more sophisticated than any logic gate. It isn’t just the vastly larger number of inputs, or that some inhibit. It’s that there are extrasynaptic neurotransmitter effects, and dendritic effects, and likely external effects from glial cells and whatnot.

    Neurons are analytical systems in their own right. I think one can argue that even synapses are sophisticated enough to qualify as distinct analytical systems.

    So seeing neurons as logic gates is a metaphoric view that abstracts away most of what a neuron actually does.

    More importantly, consider the scale difference between a conventional computer made from billions of simple logic gates (themselves made from a dozen or so simple transistors) and a brain made from billions of complex analytical subsystems (themselves made from thousands of complex subsystems).

    Calling the brain a “computer” seems like a scale mistake.

  • Wyrd Smythe

    I had hoped a post like this would draw out a concrete statement along the lines of, “No, brains are computers because X, Y, and Z,” because I really do wonder what the grounding is that results in such an abiding belief — one that seems to react pretty strongly to being challenged.

    But the best response so far — really the only response so far — is to complain that this isn’t about von Neumann architecture (which, of course, obviously, it’s not), which is a negative response that seems to miss the target entirely.

    What is the affirmative defense that the brain is like a computer? Even Mitchell agrees storage is a problematic comparison. I’ve shown that equating neurons with logic gates is a weak grounding.

    What else is there? That’s a question I’d really like answered.

  • Wyrd Smythe

    The other question I have is what exactly does the comparison of the brain and computer really bring to the table? What is its value?

    Cobb suggests it’s a metaphor; Mitchell suggests it brings a language; those amount to the same thing. Has that provided value?

    Does it inform theories such as IIT or GWT or HOT? Does it help neuroscientists analyze the workings of the brain?

    What makes the metaphor so useful that it must be defended?

  • Wyrd Smythe

    And finally there is the inherent contradiction that in the phrase “the brain is (like) a computer” people aren’t talking about conventional computers (von Neumann architecture or otherwise — Turing Machines at root) or the standard Computer Science definition of computing.

    They mean a different kind of computer and a different kind of computing. An analogue form: asynchronous, massively parallel, hugely interconnected. A totally different kind of computing than Computer Science talks about.

    Which is fine. I agreed long ago to understand the definition.

    But then how does conventional computing enter into things?

    Other than its ability to simulate a natural physical system, how does it apply to the brain? What is its value? And why does it really exist anyway?

    Is this literally a case of Descartes and his hydraulics metaphor?

  • rung2diotimasladder

    Finally, a brain blog post I think I understand. I couldn’t say whether the brain is like a computer, but I have wondered whether people are forgetting that the computer metaphor is a metaphor. If the metaphor is taken as such, it’s not too difficult to discard it when it no longer makes sense, but if not, I can see how the metaphor might be limiting.

    Your point about Descartes, by the way, is well taken.

    • Wyrd Smythe

      Well, thank you. (You know the joke about Descartes drinking in a bar? The bartender asks him if he’d like another drink, and Descartes replies, “I think not.” — and, poof, he vanishes into thin air.)

      One question I ask is what value the metaphor ever had. Is there some understanding we wouldn’t have achieved without it? There are some general similarities, such as the idea of “memory,” but they vanish upon close examination.

      I thought Cobb’s article, which was seen by many, might draw out an affirmative defense, but it appears the best response mustered from the industry is the negative complaint about von Neumann architecture. Which doesn’t say much for the argument.

      So now it almost seems like a sociological question to me: Why the abiding belief in a metaphor that can’t be strongly justified and which has questionable value?

      One theory I have is that it’s wishful thinking by people who are desperate to have brain uploading work — people who really, really, really want to live forever in a computer. (What many don’t seem to realize is that’s the last technology we’ll accomplish. If it’s even possible, effectively or in principle.)

      • rung2diotimasladder

        I think we’d be hard-pressed to find another metaphor, given where we are with technology, but yeah, I’ve never loved it. I think you might be right about memory. The funny thing is, I think a lot of computer terminology came from brain terminology, and then that connection crossed back. I can’t help but think of dreaming as deleting the useless crap I picked up from the previous day. I can almost hear that crinkling sound as I click on “empty trash.” 🙂

        I can’t say much about the connection from the computer/neurology sides of things, but philosophically, it seems obvious that you should be careful not to take the metaphor too far. For me the problem becomes very clear when you think about embodiment and environment. Computers don’t learn the way we do. They don’t inhabit the world, they don’t—as you’ve pointed out—have history.

        Anyway, with computer metaphors, we should be careful not to put Descartes before the horse. 😉

      • Wyrd Smythe

        I wonder if the use of brain terminology in computing traces back to when it went mainstream. In the early days, the names were very technical. IBM called disk drives “Direct Access Storage Devices” (DASDs — “daz-dees”); memory was first called “core” (because they used tiny iron donuts (“cores”) for bits).

        But then, as it became more a thing, people found more familiar terminology. We do tend to see new things in terms of things we know.

        “For me the problem becomes very clear when you think about embodiment and environment.”

        That is one of the more interesting aspects of the problem. The difference between intelligence and consciousness becomes hugely significant here. Over on Mike’s blog they were talking about robot minesweepers — specifically what they should “think” (if anything) about death.

        One proposition is that, given it’s an artificial mind we create, we could give it any worldview, including one based on self-sacrifice and dying on the job. The counter proposition is that anything smart enough to have significant intelligence would be smart enough to figure out self-preservation.

        And one would think anything with appreciable consciousness would seem to want to keep on being conscious. Can self-preservation be engineered out of consciousness?

        One might hope machine consciousness isn’t possible just due to the moral quicksand it creates.

      • rung2diotimasladder

        Can self-preservation be engineered out of consciousness?

        That’s an interesting question. If I had to give a quick answer, I’d say no. That would seem to be a necessary aspect of consciousness, but in what sense is it necessary? I don’t know if I could answer that. It just seems everything about our consciousness is tied to that will to survive. Feeling, motivation, even morality, seem tied to it.

        In any case, I hope they don’t give minesweepers a desire for self-preservation!

      • Wyrd Smythe

        No, that would certainly not be something you’d want to build in! There is the idea that, even without such programming, a smart enough general purpose intelligence might figure out that self-preservation is a good goal because it supports whatever goals are programmed in.

        I think that’s essentially what our self-preservation amounts to: The desire to preserve what we’ve amassed (knowledge, experience, wealth, goods, land) as well as the desire to go on doing things. The idea that “this unit must continue” for those overall goals emerges pretty easily.

        I don’t think we’d give a minesweeper that kind of intelligence, but we might with other machines with more complex goals. One problem with AGI is that it’s likely to operate very fast with perfect memory. Such an intelligence could be difficult to thwart if it got the wrong ideas (especially if it was smart enough to bide its time and consider its next moves).

        Here are a couple of good cartoons.


  • headbirths

    Hello again,
    Back at the start of lockdown (i.e. around the time of this post), I decided to write a book – I had gained all that commuting time, on top of evenings, after all. (After a while I stopped blogging as a result.) At the start of this year, I switched to a new tentative title: “The Brain is not a Computer.” Hmm, I thought today, now I have a first draft, I wonder why I haven’t already looked to see what’s out there with that title. And Mr Google led me to you!
    (If you’re interested in a sneak peek, leave me a message on my site somewhere.)

    To add to the conversation here, here are some assertions I make…

    Brains are not computers (obviously!) but they do do computation – in a distributed, self-organizing way. (Slime mold can also do computation; high-dimensional, non-linear dynamical systems such as a bucket of water can do computation.)

    Computers are designed, and particularly designed to avoid noise, whereas brains are evolved (obviously!) within a noisy environment and are noisy themselves.

    We no longer need to compare brains to computers because they are no longer the most advanced technology we have; neural nets like Deep Reinforcement Learning are. We are finally trying to understand the brain in terms of brain-like technology!

    Computers may be able to create consciousness, but then I don’t really even know if you’re conscious; the ‘hard question’ is how do we develop a methodology to tell?
