A bit of what I’ve been up to at MIT

I recently moved to Cambridge, MA to take the best job of all time helping Sebastian Seung’s Computational Neuroscience Lab at MIT build a game to map the human brain. Yea. It’s called EyeWire and you should check it out.

Best job in the world for several reasons. For someone obsessed with thinking about thinking, this life is positively dreamy.  I think I live one solid series of awesome moments. I love the people in my lab. I got to move to Cambridge and live one mile from Harvard and one mile from MIT. I love walking to work. I love work! It doesn’t feel like a job. I love going to hackathons. I love hanging out with geniuses. I love MIT Media Lab. I love working with neuroscientists. I love learning new things. I love being around curious people ready to share their passion for creating game-changing technologies. I love going to intellectual events at MIT and Harvard. I love connecting with so many TEDxers on the east coast! I love snow (though we haven’t had much yet). I love great food and even greater company. I love talking about molecules and python and infographics and chilling with scientists every day. Basically, I love life. I love life very much.

I’m also helping a group at the Media Lab (which I’m not really supposed to talk about), developing a new app for TEDx music (also not supposed to talk about.. but no one reads this blog 😉), and building an anonymized open-source database of health and lifestyle data with WikiLife. Other things too.. but it’s late and I want to read Nietzsche.

Below is a post I just wrote for the EyeWire blog. I blog at MIT now. Rad. Life is amazing. I hope you, dear reader, are following your passion and diligently pursuing the ideas that strike you as most curious.  Reality will exceed your wildest expectations if you let it.

Cheers, much love.


It may come as a surprise that although we know much about how the eye works, neuroscience researchers do not fully understand how visual signals translate into perception.

We’ve landed on Mars, can grow organs, and even skydive from space, yet when it comes to a thorough understanding of the territory so close to home that it is home, much is missing. Neuroscientists don’t even know precisely how many different types of cells are in the brain. Here at Sebastian Seung’s Computational Neuroscience Lab at MIT, we’re taking a different approach: crowd-sourcing. In order to solve the mind’s great mysteries, we need you.

Why don’t we know how the mind works? One reason is that your mind is massive. Researchers estimate that there are 100 billion neurons in your brain with about one million miles of connectivity. A million miles is equivalent to driving around Earth 40 times. You can infer that in order for such a great length of neurons to fit into your three-pound brain, these structures must be very tiny. A large neuron is about 100 microns in diameter, while the contact area of a synapse is about 400 nm in length.
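For the skeptics, the "40 times around Earth" comparison checks out on the back of an envelope. A quick sketch (these are the rough estimates above, nothing official):

```python
# Back-of-envelope check of the wiring-length comparison.
# All values are rough estimates from the paragraph above.
WIRING_MILES = 1e6               # ~1 million miles of neural connectivity
EARTH_CIRCUMFERENCE_MI = 24_901  # Earth's equatorial circumference, miles

trips_around_earth = WIRING_MILES / EARTH_CIRCUMFERENCE_MI
print(round(trips_around_earth))  # ~40 trips around the planet
```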

In order to see neurons and the tiny structures called dendrites through which they function, researchers utilize a new imaging technique. “Fix whole brain tissue, slice off layers just a few microns thick, image each slice with an electron microscope, and trace the path of each neuron,” explains David Zhou, Masters Student at Carnegie Mellon, on Quora. These game-changing techniques generate terabytes of data for even a cubic millimeter of brain tissue. Now that we can see the brain at the synaptic scale, we have to analyze the images. How?

[Image: neuron cell reconstruction, Seung Lab]

The image above shows the process of layering image slices to render 3D reconstructions. Like most neuroscience labs, the Seung Lab uses a combination of AI algorithms and tracing (3D reconstruction) performed by humans. Why not just use algorithms? The structures in these images can be challenging to identify, particularly for a computer. Pure algorithms make many mistakes, such as slicing a single cell into thousands of pieces or merging multiple cells into one monstrously massive neuron. See the image below for an example of the AI missing a chunk of a neuron.

[Image: correcting a computer's mistake, Seung Lab]

We hope to one day train computers to map neurons on their own; however, that day is far in the future, and we need to accelerate neuroscience discovery now. To achieve this, we need something more intelligent than even the most powerful supercomputer: you.

It takes an MIT-trained neuroscientist anywhere from 15 to 80 hours to reconstruct a single neuron. At that rate, it would take about 570,000,000 years to map the connectivity of an entire human brain, known as a connectome. This is why we need your help.
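That 570-million-year figure falls right out of the estimates above. A quick sketch of the arithmetic, assuming one tracer working around the clock at roughly 50 hours per neuron (near the middle of the 15-80 hour range):

```python
# Rough check of the "570,000,000 years" estimate.
# Inputs are the ballpark figures from the paragraphs above.
NEURONS = 100e9           # ~100 billion neurons in a human brain
HOURS_PER_NEURON = 50     # roughly the middle of the 15-80 hour range
HOURS_PER_YEAR = 24 * 365 # nonstop tracing, no sleep

years = NEURONS * HOURS_PER_NEURON / HOURS_PER_YEAR
print(f"{years:,.0f} years")  # ~570 million years for one person
```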

Rather than mapping an entire brain, we’re starting with a retina. Our goal is to map the connections of a specific type of cell: J-Cells. These neurons are responsible for perception of upward motion. We plan to publish the outcome in a scientific journal and list EyeWire users as co-authors.

By playing the 3D game EyeWire, you become part of the Seung Lab at MIT, helping to map the connections of a neural network.

Scientific American writes that “no specialized knowledge of neuroscience is required [to play EyeWire]; citizen scientists need only be curious, intelligent and observant. Your input will help scientists understand how the retina functions. It will also be used by engineers to improve the underlying computational technology, eventually making it powerful enough to detect “miswirings” of the brain that are hypothesized to underlie disorders like autism and schizophrenia.”

We hope that you will help us trace the wires of perception through EyeWire. Play EyeWire and let us know what you think on Facebook.
