
Narayana Prasad Santhanam's home page

Research

Updates: papers and preprints for the first four topics (prediction, lossy compression, MRFs, entropy estimation) were added 10/10. The others are being updated and should be done by 10/10. The page now uses javascript for rendering math and navigation: please enable it if you have disabled it. If you have emailed me about any paper, I will email it to you as well.

My work spans several applications, but is connected by a common set of probabilistic and combinatorial approaches. Much of it---though not all---involves high-dimensional problems: genetic data in biology, text data, risk management in large financial instruments such as reinsurance, and rare-event detection in very complex systems such as smart grids. Handling problems involving this sort of data in the traditional sense is essentially futile. Yet insights gained in one of these problems carry over to the others as well, which makes a systematic rather than ad-hoc approach desirable.

A starting point for my personal evolution of ideas has been the following paper on probability estimation, which won the prestigious Information Theory Society Paper Award. Since then, I have enjoyed expanding the horizons of undersampled problems using approaches and techniques that have been, at least to me, surprising and elegant.

This is in line with similar observations made by others: in the high-dimensional domain, one typically has to rework fundamental statistical problems in estimation, classification, and regression, viewing them in an entirely new light. Practically, the commercial importance of a systematic theory of large-alphabet applications cannot be overstated in the fields mentioned above. Theoretically, these problems often force a deeper insight into even the simplest questions, and often involve unexpectedly beautiful mathematical results.

Workshops

I have co-organized two workshops in the broad areas of large alphabet information theory and statistics---the first sponsored by the American Institute of Math and the NSF (2009), the second held at the Banff International Research Station (2011). In total, over 60 researchers attended one or the other of these workshops, from diverse areas including biology, computer science, economics, information theory, mathematics, networking, and statistics.

If you are looking for any of my publications other than those listed here, please send me an email.

Prediction and risk management


Lossy compression


Markov Random Fields


Entropy estimation


Bayesian non-parametrics


Large alphabet Markov sources


Estimating channels with memory


Other papers

