- 🎓 Pursuing a Master's in Computer Science at Arizona State University, graduating in May 2024.
- Open to 2023 internship/co-op and 2024 full-time opportunities in Data Science, Data Engineering, and Machine Learning Engineering roles.
✔ Nischay is a Data Science Engineer with 2+ years of industry experience delivering end-to-end data science projects across B2B/B2C sales, finance, and product domains.
✔ Capable of driving projects of varying scope and delivering high impact.
✔ Experienced in B2B and B2C business analytics, customer service analytics, demand forecasting, and predictive maintenance.
- Programming languages: Python, SQL
- Database: PostgreSQL
- Frameworks: TensorFlow
- Backend Technologies: Python Flask
- Cloud technologies: AWS (EC2, S3, SageMaker, Glue, CodeCommit, Rekognition)
- Other technologies: Apache Airflow, Power BI, NLTK, Microsoft Excel
Data Scientist - Assistant Manager at Piramal Finance [April 2021 - July 2022]
- Increased Sales department attrition-rate forecasting accuracy by 85% using a Random Forest regression model trained on historical data and demographics of 60,000+ employees (a Random Forest sketch follows this list). Leveraged tools like AWS EC2, S3, SageMaker, Glue, CodeCommit, and Apache Airflow.
- Improved Customer Service team efficiency by 15% by developing a tracking-analytics data pipeline that automated the CSAT dashboard and incorporated 10+ operational KPIs. Leveraged tools like AWS S3, Glue, CodeCommit, Power BI, and Apache Airflow.
- Reduced dashboard data-loading latency by 50% by integrating AWS Glue with Apache Airflow to create DAGs for automated DataMart refreshes (an Airflow DAG sketch follows this list). Leveraged tools like AWS S3, Glue, and Apache Airflow.
- Developed tailored reports on consumer behavior that informed business decisions, building analytical solutions to analyze customer-centric data from the Banking, Business, and Customer Service departments. Leveraged tools like AWS S3, Glue, CodeCommit, and Apache Airflow.
- Improved daily business operations and sales-reporting accuracy by 60% by building an API with an automated scheduling module, reducing manual labor by 20% and freeing employees to focus on more strategic tasks. Leveraged tools like Python, Outlook, Excel, and Apache Airflow.
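
A minimal sketch of the attrition-forecasting approach described above. The file and column names (`employee_history.csv`, `tenure_months`, `age`, `department`, `attrition_rate`) are hypothetical stand-ins for the real employee demographics; the production pipeline ran on AWS SageMaker with data in S3 rather than locally.

```python
# Sketch: attrition-rate forecasting with a Random Forest regressor.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("employee_history.csv")  # hypothetical extract of 60k+ records
X = pd.get_dummies(df[["tenure_months", "age", "department"]])  # encode categoricals
y = df["attrition_rate"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```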
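And a minimal sketch of the Glue-plus-Airflow pattern behind the automated DataMart refresh, assuming the `amazon` Airflow provider is installed; the DAG and Glue job names are hypothetical, and the real setup would also carry AWS connection and IAM details.

```python
# Sketch: an Airflow DAG that triggers an existing AWS Glue job daily.
from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="datamart_daily_refresh",       # hypothetical DAG name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",            # refresh before dashboards load
    catchup=False,
) as dag:
    refresh_datamart = GlueJobOperator(
        task_id="run_glue_refresh",
        job_name="datamart_refresh_job",   # assumes this Glue job already exists
    )
```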
Junior Data Scientist at Radome Technology [June 2019 – March 2021]
- Developed and deployed inventory and sales forecasting modules using statistical models (ARIMA, ARMA) and regression-based predictive models (Random Forest, Support Vector Machine), achieving 85% forecast accuracy (a forecasting sketch follows this list). Leveraged tools like AWS EC2, S3, SageMaker, Glue, CodeCommit, PostgreSQL, and Apache Airflow.
- Applied preprocessing techniques such as feature engineering and dimensionality reduction, improving model performance and prediction accuracy by 10% (a preprocessing sketch follows this list). Leveraged techniques like PCA, feature importance, and normalization.
- Proactively contributed to R&D by researching machine learning papers on forecasting and computer-vision-based object detection, developing proof-of-concepts, and presenting demos to senior team members and clients. Leveraged tools like TensorFlow, AWS S3, SageMaker, Glue, CodeCommit, and PostgreSQL.
- Developed an end-to-end object detection application using a pre-trained Region-based CNN model to detect various aircraft with 83% accuracy in real time on 30 frames-per-second video output (a detection sketch follows this list). Leveraged tools like TensorFlow, Amazon Rekognition, CodeCommit, and Python Flask.
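
A minimal sketch of the statistical side of the forecasting module, using statsmodels' ARIMA on a hypothetical monthly-sales series; the `(1, 1, 1)` order is illustrative and stands in for whatever order was selected in practice.

```python
# Sketch: ARIMA-based sales forecasting on a monthly series.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical file with a "month" date index and a "units_sold" column.
sales = pd.read_csv("monthly_sales.csv", index_col="month", parse_dates=True)["units_sold"]

model = ARIMA(sales, order=(1, 1, 1))  # (p, d, q) would be tuned via AIC in practice
fitted = model.fit()
print(fitted.forecast(steps=3))        # forecast the next three months
```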
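A minimal sketch of the preprocessing step, shown on synthetic data: normalization, PCA for dimensionality reduction, and Random Forest feature importances for ranking predictors.

```python
# Sketch: normalize, reduce dimensionality with PCA, inspect feature importances.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))              # synthetic feature matrix
y = X[:, 0] * 3 + rng.normal(size=500)      # synthetic target

X_scaled = StandardScaler().fit_transform(X)              # normalization
X_reduced = PCA(n_components=10).fit_transform(X_scaled)  # dimensionality reduction

rf = RandomForestRegressor(random_state=0).fit(X_reduced, y)
print(rf.feature_importances_)  # rank components by predictive contribution
```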
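A minimal sketch of real-time detection in the same spirit, using a pre-trained Faster R-CNN from TensorFlow Hub as an illustrative stand-in for the project's actual aircraft-tuned model; the video source is a placeholder.

```python
# Sketch: frame-by-frame object detection over a video stream.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1")
cap = cv2.VideoCapture(0)  # stand-in for the aircraft video feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector(tf.expand_dims(tf.convert_to_tensor(rgb, dtype=tf.uint8), 0))
    h, w, _ = frame.shape
    # Draw boxes for detections above a confidence threshold.
    for box, score in zip(result["detection_boxes"][0].numpy(),
                          result["detection_scores"][0].numpy()):
        if score < 0.5:
            continue
        y1, x1, y2, x2 = box  # boxes are normalized [ymin, xmin, ymax, xmax]
        cv2.rectangle(frame, (int(x1 * w), int(y1 * h)),
                      (int(x2 * w), int(y2 * h)), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```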
Credit Card Fraud Analytics & Machine Learning Modeling - Link
- A case study building a credit card fraud detection model from a highly variable and imbalanced real-world dataset, using classification models (Logistic Regression, Decision Tree, K-Nearest Neighbors, SVC); a sketch of the workflow follows this list.
- Plotted a correlation matrix to check each variable's influence on the target label, and box plots to identify data distributions and outlier patterns.
- Performed PCA for dimensionality reduction, robust scaling to reduce the influence of outliers, and resampling to obtain an equal number of Fraud/Not-Fraud cases.
- The key metric to assess model performance is the false negative rate. Specificity scores of the models: Logistic Regression – 0.98, Support Vector Classifier – 0.99.
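
A minimal sketch of that workflow, assuming the common credit-card fraud CSV layout with a binary `Class` label; undersampling the majority class stands in for whichever sampling strategy the case study used.

```python
# Sketch: robust scaling + class balancing + logistic regression for fraud detection.
import pandas as pd
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

df = pd.read_csv("creditcard.csv")                 # assumed dataset layout
fraud = df[df["Class"] == 1]
legit = df[df["Class"] == 0].sample(len(fraud), random_state=42)  # undersample majority
balanced = pd.concat([fraud, legit])

X = RobustScaler().fit_transform(balanced.drop(columns="Class"))  # outlier-robust scaling
y = balanced["Class"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("false negatives:", fn, "specificity:", tn / (tn + fp))
```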
Listing Price Analytics & Exploratory Data Analysis
- Plotted a correlation matrix to check each variable's influence on the target price variable, and box plots to identify data distributions and outlier patterns.
- Used the NLTK (Natural Language Toolkit) package to extract the key amenities from the given data column and form a word cloud (a sketch follows this list).
- After completing the EDA, we found that room type, property type, and number of bedrooms strongly influenced pricing; essential amenities like a workspace, parking, a laptop-friendly setup, a hair dryer, and Wi-Fi are most common in expensive listings.
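
A minimal sketch of the amenity word cloud, assuming hypothetical file and column names (`listings.csv`, `amenities`) and the `wordcloud` package for rendering.

```python
# Sketch: tokenize amenity text with NLTK and render a word cloud.
import pandas as pd
import nltk
from nltk.corpus import stopwords
from wordcloud import WordCloud
import matplotlib.pyplot as plt

nltk.download("punkt")      # tokenizer models
nltk.download("stopwords")  # English stopword list

df = pd.read_csv("listings.csv")                    # hypothetical listings extract
text = " ".join(df["amenities"].astype(str))
tokens = [t.lower() for t in nltk.word_tokenize(text)
          if t.isalpha() and t.lower() not in stopwords.words("english")]

cloud = WordCloud(width=800, height=400).generate(" ".join(tokens))
plt.imshow(cloud)
plt.axis("off")
plt.show()
```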
Like my work and want to connect?
You are currently here! 👉 GitHub: https://github.com/imnischaygowda
👔 LinkedIn: https://www.linkedin.com/in/nischayggowda/
📖 Blog: https://imnischaygowda.hashnode.dev/