Tarun Aditya Kusupati
Computer Science at University of Dayton
United States
Hi, I'm Tarun Aditya Kusupati!
Research Intern at Ajeenkya DY Patil University
Deep learning engineer specializing in multilingual neural machine translation and transliteration. Experienced in building custom transformer architectures with Fairseq, leveraging GPU acceleration, and benchmarking translation quality with BLEU on the FLORES test sets. Proficient in Python, neural network design, and GPU optimization, with additional grounding in scientific computing, statistical analysis, and machine learning from an honors-level Applied Data Science program. Strong problem-solver, collaborative communicator, and committed to staying current with advances in the field.
Experience
Ajeenkya DY Patil University
Research Intern
June 2022 - August 2022
- Built resource-efficient multilingual neural machine translation and transliteration models, achieving strong benchmark scores on a single consumer-grade GPU.
- Improved transliteration accuracy by 20% and transliteration speed by 15% over baseline models.
- Improved translation accuracy by 50% over baseline models, together with a 30% transliteration speed improvement over conventional models.
Education
Projects
- Researched and designed neural network architectures suited to resource-constrained environments as the foundation for a multilingual neural machine translation model.
- Implemented and fine-tuned a ~112-million-parameter model using Fairseq and a customized transformer architecture in Python, running efficiently on a consumer-grade GPU.
- Benchmarked translation performance across a range of language pairs on the FLORES test set, with an emphasis on resource efficiency, achieving an average BLEU score of 10.98 for English-to-Indic and 9.69 for Indic-to-English translations.
- Researched and selected neural network architectures suited to low-resource environments as the basis for a multilingual neural machine transliteration model.
- Implemented and fine-tuned a compact 16-million-parameter model using Fairseq and a custom transformer architecture in Python, running efficiently on a consumer-grade GPU.
- Benchmarked transliteration quality across varied language pairs on the Aksharantar test set, with an emphasis on resource efficiency, achieving an average Top-1 accuracy of ~80% for English-to-Indic transliteration.
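The Top-1 accuracy metric reported above can be sketched in plain Python; the function name and the toy transliteration data below are illustrative, not taken from the project:

```python
def top1_accuracy(predictions, references):
    """Fraction of inputs whose top-ranked candidate matches the reference.

    predictions: list of ranked candidate lists, one per input.
    references: list of gold-standard strings, one per input.
    """
    correct = sum(
        1
        for candidates, ref in zip(predictions, references)
        if candidates and candidates[0] == ref
    )
    return correct / len(references)

# Toy English-to-Indic transliteration outputs (illustrative data):
# the first and third inputs are ranked correctly, the second is not.
preds = [["namaste", "namastey"], ["dilli", "delhi"], ["mumbai"]]
refs = ["namaste", "delhi", "mumbai"]
print(top1_accuracy(preds, refs))  # 2 of 3 correct
```

Only the first-ranked candidate counts, so a correct answer in second place (as with "delhi" above) still scores as a miss.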
Languages
English
Professional
Hindi
Intermediate
Skills
Artificial Intelligence
Machine Learning
Deep Learning
NLP (Natural Language Processing)
Python
PyTorch
Docker