My honest friends and superiors agreed that my biggest weakness is software development, so that's the path I picked for my career 😄
- 🔭 I'm currently a modeling and simulation specialist and a machine learning staff scientist at Idaho National Laboratory, and a member of the RAVEN development team. I work on several projects including (but not limited to) surrogate construction, reduced-order modeling, sparse sensing, metamodeling of porous materials, scaling interpolation and representativity of mockup experiments to target real-world plants, data-driven discovery of governing physics, system identification, digital twins, time-series analysis, Koopman theory, and agile software development.
- 🌱 I'd love to learn in the near future: MLOps, R, Caffe, MongoDB, MySQL, NoSQL, Scala, Julia, SAS, SPSS, Apache Spark, Kafka, Hadoop, Hive, MapReduce, Cassandra, Weka.
- 🧑‍🤝‍🧑 I'm looking to collaborate on physics-based neural networks.
- 💬 Ask me about reduced-order modeling (ROM), uncertainty quantification, sensitivity analysis, active subspaces, probabilistic error bounds, and dynamic mode decomposition (DMD).
- ⚡ Fun fact: I like basketball, volleyball, and soccer.
🏡 website | 🔗 linkedin | researchgate | 🐦 [twitter][twitter] | 📺 [youtube][youtube] | 📷 [instagram][instagram]
🤖 **Machine Learning:** regression, regularization, classification, clustering, collaborative filtering, support vector machines, naive Bayes, decision trees, random forests, anomaly detection, recommender systems, artificial data synthesis, ceiling analysis, artificial neural networks (ANNs), deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, natural language processing (NLP), transformer models, attention mechanisms.
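Since the list above is fairly abstract, here is a minimal sketch of one item, regression with L2 (ridge) regularization in scikit-learn; the synthetic data and the `alpha` value are illustrative assumptions, not taken from any of my projects:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # 200 samples, 5 features
true_w = np.array([1.5, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=200)      # noisy linear response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)         # alpha sets the L2 penalty strength
print(f"test R^2: {model.score(X_te, y_te):.3f}")
```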
**Reduced-Order Modeling:** PCA, PPCA, KPCA, Isomap, Laplacian eigenmaps, LLE, HLLE, LTSA, surrogate modeling, Koopman theory, time-delay embeddings, dynamic mode decomposition (DMD), dynamical systems and control, data-driven (equation-free) modeling, sparse identification of nonlinear dynamics (SINDy), compressive sensing for full-map recovery from sparse measurements, time-series analysis, ARMA, ARIMA.
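As a taste of the ROM toolbox above, a minimal sketch of exact DMD in plain NumPy; the rank-4 truncation and the toy two-frequency snapshot matrix are illustrative assumptions:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (states x time), truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]                 # paired snapshots, one step apart
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T   # rank-r truncation
    Atilde = U.conj().T @ X2 @ V / s             # low-rank one-step advance operator
    eigvals, W = np.linalg.eig(Atilde)           # DMD eigenvalues
    modes = (X2 @ V / s) @ W                     # exact DMD modes
    return eigvals, modes

# Toy snapshots: two spatial structures oscillating at different frequencies.
x = np.linspace(-5, 5, 128)[:, None]
t = np.linspace(0, 4 * np.pi, 200)[None, :]
X = np.exp(-x**2) * np.cos(2.0 * t) + np.tanh(x) * np.sin(1.3 * t)

eigvals, _ = dmd(X, r=4)
print(np.abs(eigvals))   # magnitudes near 1 indicate sustained oscillations
```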
**Sensitivity Analysis (SA):** Sobol indices, Morris screening, PAWN, moment-independent SA.
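For the curious, a minimal sketch of estimating first-order Sobol indices with a Saltelli-style Monte Carlo estimator; the Ishigami test function and sample size are illustrative assumptions (in practice I would reach for a dedicated SA framework):

```python
import numpy as np

def ishigami(X, a=7.0, b=0.1):
    """Ishigami test function, a standard SA benchmark."""
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

rng = np.random.default_rng(0)
N, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))           # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))           # total output variance

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # A with column i taken from B
    S1 = np.mean(fB * (ishigami(ABi) - fA)) / var  # Saltelli (2010) estimator
    print(f"S_{i + 1} = {S1:.3f}")               # analytic: 0.314, 0.442, 0.000
```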
**Uncertainty Quantification (UQ):** forward UQ, adjoint UQ, inverse UQ.
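A minimal sketch of forward UQ, pushing sampled input uncertainties through a model and summarizing the output distribution; the cantilever-deflection formula and the input distributions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
E = rng.normal(200e9, 10e9, n)     # Young's modulus [Pa], uncertain
P = rng.normal(1e4, 1e3, n)        # tip load [N], uncertain
L, I = 2.0, 8e-6                   # beam length [m] and second moment of area [m^4], fixed

delta = P * L ** 3 / (3 * E * I)   # cantilever tip deflection [m]
print(f"mean = {delta.mean():.4e} m, std = {delta.std():.4e} m")
print("95% interval [m]:", np.percentile(delta, [2.5, 97.5]))
```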
**Optimization:** gradient-based optimizers, conjugate gradient; metaheuristics: simulated annealing, genetic algorithms.
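And a minimal sketch of simulated annealing, one of the metaheuristics listed above, minimizing the 2-D Rastrigin function; the cooling schedule and step size are illustrative assumptions:

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function: many local minima, global minimum 0 at the origin."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5.12, 5.12, size=2)   # random start in the usual Rastrigin box
fx, T = rastrigin(x), 10.0             # initial energy and temperature

for _ in range(20_000):
    cand = x + rng.normal(scale=0.5, size=x.size)    # random neighbor
    fc = rastrigin(cand)
    # Always accept improvements; accept uphill moves with Boltzmann probability.
    if fc < fx or rng.random() < np.exp((fx - fc) / T):
        x, fx = cand, fc
    T *= 0.9995                                      # geometric cooling

print(f"x = {x}, f(x) = {fx:.4f}")
```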
🖥️ **Programming Languages and Packages:** Bash scripting, MATLAB, Python (NumPy, SciPy, Matplotlib, Plotly, Bokeh, Seaborn, pandas, Jupyter Notebook, scikit-learn, Keras, TensorFlow).
**High Performance Computing (HPC)**
- 🎯 Machine Learning - Stanford Online | Intro to ML: (i) supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks); (ii) unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning); (iii) best practices in machine learning (bias/variance dilemma).
- 🎯 Neural Networks and Deep Learning - DeepLearning.AI | Build, train, and apply fully connected deep neural networks; implement efficient (vectorized) neural networks; identify key parameters in a neural network's architecture.
- 🎯 Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization - DeepLearning.AI | L2 and dropout regularization, hyperparameter tuning, batch normalization, and gradient checking; optimization algorithms such as mini-batch gradient descent, Momentum, RMSprop, and Adam; implement a neural network in TensorFlow.
- 🎯 Structuring Machine Learning Projects - DeepLearning.AI | Diagnose errors in a machine learning system; prioritize strategies for reducing errors; understand complex ML settings, such as mismatched training/test sets and comparing to and/or surpassing human-level performance; apply end-to-end learning, transfer learning, and multi-task learning.
- 🎯 Convolutional Neural Networks - DeepLearning.AI | Build a convolutional neural network, including recent variations such as residual networks; apply convolutional networks to visual detection and recognition tasks; use neural style transfer to generate art and apply these algorithms to a variety of image, video, and other 2D or 3D data.
- 🎯 Sequence Models - DeepLearning.AI | Natural language processing, Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), recurrent neural networks, attention models.
- 🎯 Deep Learning Specialization - DeepLearning.AI