Graph Neural Network in Natural Language Processing with SpaceX dataset Example
Text Classification using Graph Neural Network Example
Light GCN Supervised Task using GNN and PyTorch Example
Graph Neural Network in NLP with Movie Dataset Example
Image Caption Generator - Animal Dataset Example
Transformers with Natural Language Processing Example
GPT-2 Generated Tweet Example
- PyTorch: Universal Studio review analysis with Hugging Face using PyTorch Example
- PyTorch: Sarcastic analysis with Hugging Face using PyTorch Example
- PyTorch: Luxury Product Apparel analysis with Hugging Face using PyTorch Example
- PyTorch: SAP Press analysis with Hugging Face using PyTorch Example
- PyTorch: Tweet emotions analysis with Hugging Face using PyTorch Example
- PyTorch: Climate-change analysis with Hugging Face using PyTorch Example
- PyTorch: Sentiment analysis with Hugging Face using PyTorch Example
- PyTorch: Natural Language Processing with PyTorch Example
- PyTorch: Sentiment analysis with Hugging Face and BERT using PyTorch Example
- PyTorch: Emotion analysis using Hugging Face and BERT using PyTorch Example
- PyTorch: Training Word Embedding, Gensim and PyTorch Example
- PyTorch: Sentiment analysis physics vs chemistry vs biology with Gensim using sklearn Example
- PyTorch: Twitter sentiment analysis with Gensim using sklearn Example
- TensorFlow: Netflix_titles using Deep Learning in Keras Example
- TensorFlow: Deep learning for NLP Example
- TensorFlow: Generating text using LSTM Example
- TensorFlow: Text Generation using LSTM Example
- TensorFlow: Text generation using Recurrent Neural Network with 034.txt Example
- TensorFlow: Logistic regression and Neural Network in Text classification Example
- TensorFlow: Text Classification with Deep Neural Network, Logistic regression Example
- TensorFlow: with IMDB Dataset Example
- TensorFlow: using LSTM with Vaccination tweets Example
- Natural Language Processing: Spark NLP & ML for Text Classification Example
- Natural Language Processing: using DataAnalyst.csv Example
- Natural Language Processing: using Full-Economic-News-DFE-839861.csv Example
- Natural Language Processing: Natural Language Processing with Python Example
- Natural Language Processing: Natural Language Processing Example
- Natural Language Processing: with SpaCy Example
- Natural Language Processing: using SpaCy Example
- Natural Language Processing: using KMeans method Example
- Natural Language Processing: using KMeans method with internet news data Example
- Natural Language Processing: Text Classification with Football-Scenarios-DFE-832307.csv Example
- Natural Language Processing: Text Classification using agreement-sentence-agreement-DFE dataset Example
- Natural Language Processing: Text Classification using vaccination_tweets Example
- Natural Language Processing: Text Classifications with covid19_tweets Example
- Natural Language Processing: using tripadvisor_hotel_review dataset Example
- Natural Language Processing: using indian_food dataset Example
- Natural Language Processing: using Apple-Twitter-Sentiment_DFE Example
- Natural Language Processing: using user_reviews_g1 Example
- Scikit Learn: Regression Models with Decision Tree, Random Forest and XGBoost Example
- Scikit Learn: USA real estate dataset Example
- Scikit Learn: House price of Bengaluru_House_Data using Regression model Example
- Scikit Learn: kc house price prediction using regression Example
Within the field of machine learning, there are two main types of tasks: supervised and unsupervised. Scikit-learn provides a wide selection of supervised and unsupervised learning algorithms.
Supervised Learning
Supervised learning algorithms can be used to solve both classification and regression problems.
Linear Regression: This supervised learning algorithm is used to predict continuous, numerical values from the given input data. Linear regression tries to find the parameters of a linear function so that the distance between all the points and the fitted line is as small as possible. Example
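A minimal sketch of this idea with scikit-learn, using synthetic data from `make_regression` (an assumption for illustration, not the dataset from the linked example):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic regression data: 200 samples, 3 numeric features
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression()
model.fit(X_train, y_train)            # learn coefficients that minimize squared error
print(model.coef_, model.intercept_)   # parameters of the fitted linear function
print(model.score(X_test, y_test))     # R^2 on held-out data
```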
Logistic regression: a tool for building models that assign observations to a discrete set of classes. Logistic regression is commonly used when the response variable is categorical rather than continuous; it maps its output through the 'Sigmoid function' and uses a correspondingly more complex cost function. Example 1 and see also Example 2
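A minimal sketch with scikit-learn on synthetic two-class data (assumed for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression()
clf.fit(X_train, y_train)
print(clf.predict_proba(X_test[:3]))   # sigmoid-based class probabilities
print(clf.score(X_test, y_test))       # accuracy on held-out data
```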
k-nearest neighbors: a point is assigned to one class or another depending on which class its 'k' nearest instances belong to. The k-nearest neighbors algorithm (KNN) is a non-parametric method and can be used for both classification and regression problems. Example
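A minimal sketch with scikit-learn, using the bundled Iris dataset (chosen here only for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each test point is classified by a vote among its 5 nearest training points
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```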
Support vector machines (SVM): SVMs are a set of related supervised learning methods used for classification and regression. Support vector machines can be defined as systems which use a hypothesis space of linear functions in a high-dimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. Example
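A minimal sketch with scikit-learn, using the bundled breast-cancer dataset (an assumption for illustration); the RBF kernel plays the role of the high-dimensional feature space mentioned above:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling matters for SVMs; the RBF kernel implicitly maps data
# into a high-dimensional feature space where a linear separator is sought.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_train, y_train)
print(svm.score(X_test, y_test))
```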
Random Forest: Random forest is a supervised learning algorithm that can be used for both classification and regression. A random forest is made of many decision trees. It has a variety of applications, such as recommendation engines, image classification and feature selection. Example
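A minimal sketch with scikit-learn on the Iris dataset (illustrative only), including the feature importances that make random forests useful for feature selection:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 decision trees, each trained on a bootstrap sample
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))
print(rf.feature_importances_)   # per-feature importance, useful for feature selection
```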
Naive Bayes: a supervised classification algorithm based on Bayes' theorem. Example
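A minimal sketch with scikit-learn's Gaussian variant on the Iris dataset (both choices are assumptions for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Applies Bayes' theorem with a Gaussian likelihood per feature,
# assuming conditional independence between features given the class
nb = GaussianNB()
nb.fit(X_train, y_train)
print(nb.score(X_test, y_test))
```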
Artificial Neural Networks (ANNs): ANNs are among the most commonly used tools in machine learning. A neural network is a statistical tool that interprets a set of features in the input data and tries either to classify the input (classification) or to predict a continuous output (regression). The process of creating a neural network in Python begins with the most basic form, a single perceptron, and extends to multilayer perceptrons, more commonly known as artificial neural networks. Example 1 and Example 2 and see also GitHub
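A minimal sketch of a multilayer perceptron using scikit-learn's `MLPClassifier` on synthetic data (the linked examples use PyTorch and other tools; this is only an illustrative stand-in):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A multilayer perceptron with two hidden layers of 32 units each
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```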
Unsupervised Learning
Unsupervised learning algorithms use a metric such as distance to identify how close the points in a set are to each other and how far apart two such groups are.
k-Means clustering: K-Means clustering is one of the most commonly used clustering algorithms, and it belongs to the family of unsupervised machine learning models. It tries to find cluster centers that are representative of certain regions of the data; the specific algorithm you want to use may depend on the problem you are trying to solve and on which algorithms are available in the package you are using. Some of the first clustering algorithms consisted of simply finding the centroid positions that minimize the distances to all the points in each cluster, so that the points in each cluster are closer to their own centroid than to any other cluster's centroid. As might be obvious at this point, the hardest part is figuring out how many clusters there are. Example
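A minimal sketch with scikit-learn on synthetic blob data (the number of clusters, k=4, is assumed here, echoing the point above that choosing k is the hard part):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data drawn from 4 well-separated blobs
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(X)       # cluster index assigned to each point
print(km.cluster_centers_)       # centroids representative of each region
print(km.inertia_)               # sum of squared distances to the nearest centroid
```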
Principal Component Analysis (PCA): PCA is an unsupervised learning method. PCA is an important technique to understand in the fields of statistics and data science/machine learning. PCA simplifies the complexity in high-dimensional data by transforming the data into fewer dimensions, which act as summaries of the features. PCA is fundamentally a dimensionality reduction algorithm, but it can also be useful as a tool for visualization, for noise filtering, for feature extraction and engineering, and much more. Example
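A minimal sketch with scikit-learn, projecting the 4-dimensional Iris features down to 2 dimensions (the dataset and the choice of 2 components are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2)              # keep the two directions of largest variance
X_2d = pca.fit_transform(X)            # 4-D data summarized in 2-D
print(X_2d.shape)
print(pca.explained_variance_ratio_)   # share of variance captured by each component
```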
Gaussian Mixture Models (GMM): A GMM is a category of probabilistic model which states that all generated data points are derived from a mixture of a finite number of Gaussian distributions with unknown parameters. Gaussian mixture models are very useful when it comes to modeling data, especially data which comes from several groups. Example
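A minimal sketch with scikit-learn on synthetic data from three groups (the number of components is assumed to match):

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data coming from 3 groups
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Fit a mixture of 3 Gaussians with unknown means/covariances via EM
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(X)
print(gmm.means_)                # estimated mean of each Gaussian component
print(gmm.predict_proba(X[:5]))  # soft (probabilistic) group assignments
```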