This is a 10+ year journey
I started off as a web designer while studying electronics and signal processing.
Web design got me interested in graphic design, which exposed me to Photoshop.
Never before, and never since, has anyone been so fascinated by the Gaussian Blur feature.
Over the next 5 years, I took an interest in image processing, completed a master's in Computer Vision, and got hooked on Semantic Scene Understanding.
Over the following 5 years, I gave up on a PhD in Computer Vision and became fascinated by real-world computer vision problems: productionizing research and deploying machine learning applications.
Today, I can build scalable machine learning applications from the ground up and implement them end to end, with authentication and monitoring.
I occasionally write poetry, take photos and hike aimlessly.
→ Created the ML Platforms team, hired two team members, and line-manage the team while leading its projects.
→ Deployed collaborative experiment tracking with ClearML, \textbf{saving \$40k and counting}.
→ Implemented a queue-based process using ClearML to train models on the appropriate GPU generation, \textbf{reducing training time by 50\%}.
→ Implemented a tool using GPT-4 and LangChain to automate data generation, reducing data acquisition time from \textbf{~4 months} to \textbf{~2 weeks}.
→ Automated and productionized a \textbf{data preprocessing} workflow using Deepgram, GPT-4, LangChain, Temporal, FastAPI and Streamlit, enabling the asynchronous \textbf{system to serve 1000\% more workflows per day}.
→ Led the team in implementing a production pipeline covering the end-to-end ML lifecycle using Dagster, Airflow, ClearML and CI/CD pipelines, reducing \textbf{lead time by 50\%}.
→ Developed a Focus of Attention system for a vision-based retail analytics platform using 3D geometry, depth estimation models and head pose detectors.
→ Developed a proof of concept for defect classification in 48 hours; it was featured in the internal newsletter and piloted at a major electrical company.
→ Introduced a benchmark-driven development research philosophy. Created a Grafana-based analytics service to establish and monitor object-tracking metrics.
→ Supported the creation of a Python-based middleware for large-format printers.
→ Implemented sensor-based calibration routines for large-format and 3D printers.
→ Implemented internal tools to identify and fix calibration bugs, halving the debugging time spent by the R&D team.
→ Conducted courses in Machine Learning and Blockchains (available on GitHub).
PhD Student and Teaching Assistant
Computer Vision Center
Skills: Python, C++, OpenCV, Neural Networks, Conditional Random Fields, SVMs, Random Forests
→ Implemented a computer vision pipeline for semantic segmentation of urban scenes using Random Forests, Bag of Words, Conditional Random Fields and Convolutional Neural Networks.
→ Implemented Slither, a random forest framework built in C++ and Python, accompanying the following paper.