Hi, my name is Rob Turetsky and this is my homepage. I am a Ph.D. candidate in the Electrical Engineering department at Columbia University, and a graduate researcher in the Bionet Group, led by Professor Aurel Lazar. My main research interest is computational neuroscience. Specifically, I am interested in the properties of bursting neural networks, which are feedback circuits responsible for generating motor rhythms. I am also interested in spike signal processing, drawing an analogue between traditional DSP and neural computing. I believe that the language and tools of systems and signal processing provide functional insight into both the hows and the whys of cortical computation. You can read more about my research here.
I was first introduced to computational neuroscience by Arunava Banerjee at Rutgers University, where I did my undergrad in Electrical Engineering, Computer Science, and film (and spent many hours in the VLSI lab with Prof. Mike Bushnell). I came to Columbia University in 2000, where I worked on content-based analysis of music and multipitch extraction with Dan Ellis at LabROSA. I worked as a consultant for Philips Research with Nevenka Dimitrova on subverting computer vision in content-based analysis of movies with screenplay mining and sequence alignment. Prior to that, I worked as a consultant for Morgan Stanley, designing and building web-based financial applications. I was also a fellow at Columbia's STV (Science and Technology Ventures), which aims to bring technology invented by the Columbia community out of the research lab and into the world.

Building my teaching skills has been an important part of my career. I have been a teaching assistant twice for DSP with Prof. Ellis, once for Music Signal Processing with Prof. Alexandros Eleftheriadis (where I got to set up a music studio I couldn't possibly afford), and twice for Computational Neuroscience with Prof. Lazar. I won an "Outstanding TA" award two years in a row for DSP, which had 60 to 80 students. I spent a year as an NSF GK-12 fellow, helping NYC teachers at Booker T. Washington Middle School teach science to the kids. I also directly supervised the semester research project of an M.S. student, as well as the work of two graders.
After content-based analysis of music and movies, my interest in computational neuroscience was piqued again by Professor Paul Sajda's Computational Neural Modeling course, and again by Prof. Lazar's Time Encoding class. It was there that the idea of sensory sampling was brought to my attention: auditory neurons are selective to a specific frequency band because a bandpass signal can be sampled at twice the width of its band instead of twice its maximum frequency. Neurons can only fire at a few hundred Hz, far less than the 3.4 kHz bandwidth of speech or the 20 kHz upper limit of hearing, but with a filterbank framework our ears can sample everything. Not only that, but this sampling is invertible: we can recover an audio signal from spikes (at least in theory). This idea was the catalyst for me to realize the potential of the analogy between signals and synapses.
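To make the "recover a signal from spikes" claim concrete, here is a minimal numerical sketch of time encoding with an integrate-and-fire neuron, in the spirit of the time encoding machines studied in Prof. Lazar's group. Everything in it, including the parameter values and the midpoint sinc-kernel recovery, is an illustrative assumption rather than code from my research:

```python
import numpy as np

# Illustrative sketch: encode a bandlimited signal into spike times with an
# integrate-and-fire neuron, then recover it. All parameters are assumptions.
Omega = 2 * np.pi * 30.0        # band limit (rad/s): signal lives below 30 Hz
dt = 1e-4                       # simulation step (s)
t = np.arange(0.0, 1.0, dt)

rng = np.random.default_rng(0)  # bandlimited test signal: sinusoids < 25 Hz
freqs, phases = rng.uniform(1, 25, 5), rng.uniform(0, 2 * np.pi, 5)
u = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases)) / 5

b, kappa, delta = 2.0, 1.0, 0.01   # bias, integrator capacitance, threshold
y, spikes = 0.0, []
for i in range(len(t)):            # integrate (u + b)/kappa; spike and reset
    y += dt * (u[i] + b) / kappa
    if y >= delta:
        spikes.append(t[i])
        y -= delta
tk = np.array(spikes)

# t-transform: the integral of u over each inter-spike interval is known.
q = kappa * delta - b * np.diff(tk)

s = (tk[:-1] + tk[1:]) / 2         # recovery: u ~ sum_k c_k g(t - s_k)
def g(x):                          # sinc kernel bandlimited to Omega
    return np.sinc(Omega * x / np.pi) * Omega / np.pi

# G[l, k] = integral of g(. - s_k) over the l-th interval (grid approximation)
G = np.empty((len(q), len(s)))
for l in range(len(q)):
    tm = t[(t >= tk[l]) & (t < tk[l + 1])]
    G[l] = g(tm[:, None] - s[None, :]).sum(axis=0) * dt
c = np.linalg.lstsq(G, q, rcond=None)[0]

u_hat = g(t[:, None] - s[None, :]) @ c
mid = (t > 0.1) & (t < 0.9)        # ignore edge effects of the finite window
print(f"{len(tk)} spikes; max recovery error away from the edges: "
      f"{np.max(np.abs(u - u_hat)[mid]):.3f}")
```

The spike rate here is roughly b/(kappa*delta) = 200 Hz, comfortably above the 60 Hz Nyquist rate of the 30 Hz test signal; that density condition is what makes the recovery well posed, and it is the same bandwidth-versus-firing-rate trade-off described above.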