Harnessing the power of modern computing architectures and cloud platforms, I specialize in designing and implementing scalable data solutions. With experience spanning data engineering, cloud technologies, and big data ecosystems, I'm dedicated to optimizing data workflows and driving efficiency in complex systems.
🎓 Education:
- MS in Computer Science & Engineering from University at Buffalo, The State University of New York. Specialized in Data Intensive Computing, Algorithms for Modern Computing Systems, and Machine Learning.
- BE in Computer Technology from Yeshwantrao Chavan College of Engineering.
💼 Professional Experience:
- Solutions Architect Intern at Brainchip Inc.: Developed FastAPI web-based GUI tools, streamlined integrated circuit scaling, and automated unit testing with Pytest and Drone.
- Data Engineer at Walmart Global Tech: Led ETL mappings on GCP, optimized PySpark pipelines, and spearheaded data migration projects.
- Big Data Developer at Capgemini: Optimized Spark applications, designed data lakes, and pioneered automation using Azure Logic Functions and Azure Data Factory.
🧰 Technical Stack:
- Languages: Python, SQL, Java, Scala, C++, R, Bash
- Big Data: Apache Kafka, Apache Hadoop, MapReduce, Databricks, Elasticsearch
- Cloud Platforms: GCP, AWS, Azure
- DevOps: Docker, Kubernetes, Jenkins
- Databases: SQL Server, Oracle, MySQL, MongoDB, Cassandra, HBase
My resume has the full history of the good stuff: qualifications, experience, and a killer sense of humor. Okay, maybe not that last one, but it's worth a shot, right? You can find it at - Resume
🌱 Currently Exploring: Sustainable and Efficient AI, Serverless Deployments, and Deep Learning.
Connect with me:
👀 Check out my repositories to see my hands-on projects and contributions!