About Me

I am a graduate student working towards a Master's degree in Computer Science at The Johns Hopkins University.

In a previous life I was a Deep Learning Research Software Engineer at Liv.ai, which has since been acquired by Flipkart.com, one of India's largest e-commerce companies. My expertise lies in applying deep learning methods to a wide variety of NLP problems.

Projects

  • Text to Speech

    Currently working on models and architectures to solve the problem of text-to-speech synthesis (project currently underway).

  • Paraphrase Detection

    I have implemented an unorthodox CNN-based model in TensorFlow to solve the paraphrase detection problem as described in "Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks" (MPCNN) by Hua He, Kevin Gimpel, and Jimmy Lin.
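
    The core idea of the multi-perspective approach can be sketched without any deep learning machinery: pool each sentence's feature vectors in several ways and compare the two sentences per pooling "perspective". This is a toy illustration, not the MPCNN model itself; the lists of vectors below stand in for learned CNN feature maps.

```python
def pool(vectors, op):
    """Element-wise pooling over a list of equal-length vectors."""
    dims = range(len(vectors[0]))
    if op == "max":
        return [max(v[d] for v in vectors) for d in dims]
    if op == "min":
        return [min(v[d] for v in vectors) for d in dims]
    return [sum(v[d] for v in vectors) / len(vectors) for d in dims]  # mean

def cosine(a, b):
    """Cosine similarity; 0.0 for a zero-norm vector."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def multi_perspective_similarity(s1, s2):
    """One similarity score per pooling perspective; a downstream
    classifier would consume these features to decide paraphrase / not."""
    return [cosine(pool(s1, op), pool(s2, op)) for op in ("max", "min", "mean")]
```

    Identical sentences score near 1.0 from every perspective, while differing sentences can diverge under one pooling (say, max) but not another, which is exactly the extra signal the multi-perspective comparison feeds the classifier.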

  • Machine Translation

    I built neural machine translation models in TensorFlow based on seq2seq learning and attention, along with many refinements, to achieve state-of-the-art results for Indian languages.
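
    The attention step at the heart of such models is simple to state: score each encoder state against the current decoder state, softmax the scores into weights, and take the weighted sum as the context vector. A minimal dot-product sketch (not the production model) in plain Python:

```python
import math

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: returns (weights, context vector)."""
    scores = [sum(d * e for d, e in zip(decoder_state, s)) for s in encoder_states]
    m = max(scores)                              # stabilise the softmax
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(encoder_states[0])
    context = [sum(w * s[d] for w, s in zip(weights, encoder_states))
               for d in range(dim)]
    return weights, context
```

    At each decoding step the context vector lets the decoder "look back" at the most relevant source words, which is what makes attention so effective for long sentences.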

  • Gappi Transcription Chat App

    Gappi is a chat app for Indian languages. Its core functionality is chatting by sending transcriptions of audio in one of the many Indian languages of your choosing; you can give it a try in English if you are unacquainted with Indian languages. My four-member team was responsible for developing the deep neural networks, comprising CNNs, LSTMs, and RNNs, that enable the required natural language processing.

  • Named Entity Recognition

    We built an NER model for intent classification of user interactions (e.g. cab booking, flight booking, food ordering). The model was initially built using Conditional Random Fields (CRFs), but we later moved to stacked LSTMs for better generalization and less dependence on the training data. The CRF-based model was hamstrung by variation in the training data it received; the stacked LSTM models performed significantly better.
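
    Whichever tagger produces them, the per-token BIO tags still have to be collapsed into entity spans before they can fill an intent's slots. A small sketch of that post-processing step (the tag names SRC/DST are hypothetical examples for a cab-booking intent):

```python
def bio_to_spans(tokens, tags):
    """Collapse BIO tags into (entity_type, text) spans, e.g. the
    source and destination slots of a cab-booking intent."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(tok)
        else:
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans
```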

  • Character-based Neural Language Models

    Using stacked CNNs and LSTMs with a myriad of tweaks, we programmed DNNs (mostly in Python with Theano) to build language models. We then incorporated these language models into transliteration and our other APIs:
    - Speech to text
    - Audio search
    - Tagging of audio files according to the words present in them
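
    The character-level idea itself is easy to illustrate with counts instead of a neural network: estimate P(next character | current character) from bigram statistics. This is a count-based stand-in for the neural models, not what we shipped:

```python
from collections import defaultdict

def train_char_bigram(corpus):
    """Estimate character-bigram counts, with ^ and $ marking
    word start and end."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def char_prob(counts, a, b):
    """P(b | a) under the bigram model; 0.0 for an unseen context."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0
```

    A neural character LM replaces the count table with stacked CNN/LSTM layers, but it answers the same question, which is why one model can back transliteration, speech-to-text rescoring, and audio tagging alike.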

  • Generative Adversarial Networks

    Building upon Dr. Ian Goodfellow's work, we utilised adversarial perturbations, in the spirit of GANs, to regularize our DNNs, leading to a more robust system.
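
    The perturbation idea can be shown on the smallest possible model. Below is a fast-gradient-sign sketch for plain logistic regression (a stand-in for our DNNs): nudge the input a step eps in the sign of the loss gradient, then train on the perturbed input. The weights here are illustrative, not from any real model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast-gradient-sign perturbation for logistic regression.
    For cross-entropy loss, dLoss/dx = (p - y) * w, so we step eps
    in the sign of that gradient to make the example 'harder'."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]
```

    Training on such perturbed inputs penalises sharp changes of the prediction around each example, which is the regularization effect we were after.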

  • Sentiment Analysis of Twitter data

    As the major project towards completing my Bachelor's degree, I created an ensemble of CNNs over word embeddings generated via word2vec. I trained the network to try to predict the 2016 US elections by gauging the mood of the nation as reflected in Twitter reactions.
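
    The ensembling step itself is simple: average the positive-sentiment probability each CNN assigns to a tweet and threshold the mean. A minimal sketch (the 0.5 threshold and probability inputs are illustrative):

```python
def ensemble_sentiment(prob_lists):
    """prob_lists[i][j] = model i's positive probability for tweet j.
    Average across models and threshold at 0.5."""
    avg = [sum(ps) / len(ps) for ps in zip(*prob_lists)]
    return ["positive" if p >= 0.5 else "negative" for p in avg]
```

    Averaging smooths out the individual CNNs' disagreements, which matters on noisy Twitter text where any single model is easily fooled.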

  • Tree LSTM

    Much of the work on morphologically rich languages benefits greatly from the semantic tree structure of sentences in the training data. To further leverage this meta-information, we structured the LSTM not as a standard linear chain but over the nodes of the semantic tree to construct a language model.
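
    The recursion pattern is the key difference from a chain LSTM: each node's state combines its own word with the sum of its children's states, so composition follows the parse tree. A heavily simplified sketch (gates and vector states omitted; the scalar weights are illustrative):

```python
import math

def tree_compose(node, embed, w_in=0.5, w_child=0.5):
    """Tree-structured recursion in the spirit of a child-sum Tree
    LSTM, with the gating machinery stripped out for clarity.
    node = (word, [child_nodes]); embed maps words to scalars."""
    word, children = node
    child_sum = sum(tree_compose(c, embed, w_in, w_child) for c in children)
    return math.tanh(w_in * embed.get(word, 0.0) + w_child * child_sum)
```

    A leaf depends only on its word; an internal node sees all its subtrees at once rather than a fixed left-to-right history, which is what lets the model exploit the tree meta-information.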

  • Transliteration Keyboard

    Building upon our previous transliteration work, I have made a transliteration tool whose predictions are augmented by an LSTM-based DNN language model: it suggests not only the nearest phonetically similar character but also the words most probably being typed.
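
    The candidate-ranking blend can be sketched with simple stand-ins: edit distance in place of phonetic similarity, and a unigram frequency table in place of the LSTM language model. The vocabulary below is a made-up example:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance (single-row DP), a stand-in
    for phonetic similarity between the typed string and a word."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def rank_candidates(typed, vocab_freq, top_k=3):
    """Rank words by closeness to the typed string, breaking ties by
    frequency, mimicking the keyboard's blend of similarity and
    language-model probability."""
    scored = sorted(vocab_freq,
                    key=lambda w: (edit_distance(typed, w), -vocab_freq[w]))
    return scored[:top_k]
```

    In the real tool the frequency table is replaced by the LSTM language model's probability of each word in context, so suggestions adapt to the sentence being typed.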

Resume