
bw-projects

About me

My name is Brian Wilcox. I am currently a professional in the robotics industry. I graduated from the MS program at UC San Diego in Electrical & Computer Engineering with a focus in Intelligent Systems, Robotics, and Control. Previously, I graduated from MIT in 2016 with a BS in Mechanical Engineering, where I focused on control theory and biomechanics. I hope one day to combine my passions for robotics and medical devices to help grow the future of medical robotics.

About this repository

This repository is a collection of robotics, control, and machine learning projects from the past few years. It is meant to provide an overview of many (though not all) of the technical methods that I have implemented in code. Projects range in scale from class assignments to course projects to significant research efforts. Please note that not all code can be run end-to-end because training data is omitted from this repo (limited space for large datasets). Most projects contain a report, paper, or presentation describing the methods and results in further detail.

List of Projects:

  • Study of Human Motor Control and Task Performance with Circular Constraints
  • Adaptive Virtual Object Controller for an Interactive Robotic Manipulator
  • Impedance Control for Use in Autonomous Decommissioning
  • Port-Hamiltonian Modeling and Control for Multi-Body Simulation
  • Model-Less Control using local Jacobian updates
  • Online Learning and Control using Sparse Local Gaussian Processes for Teleoperation
  • Orientation Tracking and Panoramic Image Stitching with IMU
  • Color Segmentation and Barrel Detection
  • SLAM and Texture Mapping of mobile robot
  • Kinematic and Dynamic Simulation of simple 3 DOF Robot Arm
  • 2D Planar RRT motion planner
  • Support Vector Machines
  • Pixel Classification via EM for Gaussian Mixtures
  • Principal Component Analysis vs Linear Discriminant Analysis for Face Recognition
  • Multi-Layer Perceptron for MNIST digit classification
  • Convolutional Neural Network Transfer Learning
  • Character Level Recurrent Neural Network for music generation
  • Bidirectional LSTM RNN for Metrical Analysis of Poetry

Project Overviews:

Study of Human Motor Control and Task Performance with Circular Constraints

back to top

see project files

Description:

This project is from my MIT MechE bachelor's thesis. The aim was to investigate human motor control strategies. Curved constraints offer a unique opportunity to exploit contact forces. A circular crank experiment using the MIT MANUS robot was designed to test how well subjects could follow a set of simple instructions to rotate the crank at various constant speeds. In addition to velocity measurements read at the end effector of the MIT MANUS, force and EMG measurements were also taken and qualitatively analyzed.

Methods:

  • Velocity, force, and EMG data were collected during four tasks:
    • turning the crank at the subject’s preferred or comfortable speed
    • turning the crank at a constant preferred speed
    • turning the crank at a constant preferred speed with a visual feedback display
    • rotating the crank at three instructed speeds (slow, medium, and fast) with visual feedback
  • The coefficient of variation (CV) of the velocity for each trial was computed as a measure of performance.
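
As a concrete illustration of the performance measure listed above, here is a minimal sketch of the coefficient-of-variation computation (variable names and the synthetic trial are hypothetical; the actual analysis was run on MIT MANUS trial data):

```python
import numpy as np

def coefficient_of_variation(velocity):
    """CV of a velocity trace: standard deviation divided by the mean."""
    velocity = np.asarray(velocity, dtype=float)
    return np.std(velocity) / np.mean(velocity)

# Synthetic, roughly constant-speed trial (rad/s); real trials came from the robot logs.
trial = 1.2 + 0.05 * np.random.randn(500)
print(f"CV = {coefficient_of_variation(trial):.3f}")
```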

Results


Statistical analysis showed that speed significantly affected CV but the direction of turning the crank, clockwise or counterclockwise, did not. The observation that CV increased as speed decreased, despite visual feedback, confirms previous studies showing that human motor control is more imprecise at slower speeds.

Adaptive Virtual Object Controller for an Interactive Robotic Manipulator

back to top

see project files

Description:

This project followed my thesis work in the MIT graduate course 2.152: Nonlinear Control, taught by Jean-Jacques Slotine. An InMotion2 planar robot arm was being used in research as a virtual crank in order to test human performance and motor control strategies with constrained motion. A limitation of these experiments is the non-uniform inertia of the robot manipulator, which creates an undesired or less convincing experience for the subject and less reliable data for the researcher. This project aimed to investigate a controller design that would remove the inertial and nonlinear effects of the device arm and compensate for errors in the model while maintaining the virtual constraint. To do this, an adaptive impedance/admittance controller is designed and shown to be globally asymptotically stable for this application.

Methods:

  • Computed torque control: cancel inertial and nonlinear effects
  • Admittance control: simulate the virtual crank constraint and inertia
  • Adaptive control: compensate for inaccuracy of the model
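
The controller combines all three layers; as a rough sketch of only the first ingredient, here is a computed-torque law under an assumed manipulator model (M, C, g are placeholders for the InMotion2 arm's inertia matrix, Coriolis matrix, and gravity vector; the adaptive and admittance terms are not shown):

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    """Cancel the manipulator dynamics and impose linear error dynamics on the
    tracking error. M, C, g are callables supplied by the arm model
    (g is essentially zero for a horizontal planar arm)."""
    e = q_des - q
    edot = qd_des - qd
    v = qdd_des + Kd @ edot + Kp @ e          # reference acceleration
    return M(q) @ v + C(q, qd) @ qd + g(q)    # feedback-linearizing joint torques
```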

Results


Simulations showed that the end effector converged to the desired radius and maintained the desired velocity, though the adapted parameters did not converge to their theoretical values.

Impedance Control for Use in Autonomous Decommissioning

back to top

see project files

Description:

This project was a group project in the MIT graduate course 2.151: Advanced System Dynamics & Control. The goal of this project was to determine the feasibility of achieving desirable endpoint impedances to promote stable and robust interactions from a free-floating vehicle equipped with a backdrivable manipulator, such as in the task of autonomous decommissioning of underwater structures. Because such a vehicle does not yet exist, analyses are drawn from two similar systems that together encapsulate the desired system: a fixed-base anthropomorphic robot with a redundant backdrivable manipulator (Baxter) and a free-floating raft with a non-backdrivable manipulator (Dexter), both constrained to planar motion. An LQR controller design is also compared to address the mechanical and control-effort constraints for the given robots and tasks.

Methods:

  • Impedance control design
  • Observer design
  • Kalman filter
  • LQR control
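
As an illustration of the LQR piece listed above, here is a minimal continuous-time LQR gain computation (the A, B matrices below are a hypothetical double integrator, not the identified Baxter/Dexter models):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and return K = R^-1 B' P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical double-integrator joint model with state [position, velocity].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))
print(K)   # state-feedback gain for u = -K x
```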

Results

Selected figures (available in the project files):

  • The experimental setup of the Dexter underwater robot
  • Response of Dexter with impedance control
  • LQR design: initial-condition responses with varying expense levels

Through experimentation, Dexter provides a concrete example of achievable manipulator impedance characteristics for a desirable interaction response, creating the performance parameters that are then matched by Baxter’s more complex manipulator. Physical experimentation is complemented by simulations of the two impedance systems, using the physical parameters determined for both Dexter and Baxter. LQR design is suitable for addressing limitations on the actuators and geometry of the manipulators, with possible future work to incorporate impedance into the cost function.

Port-Hamiltonian Modeling and Control for Multi-Body Simulation

back to top

see project files

Description:

This project was undertaken during my internship abroad in the UPC Biomechanical Engineering Group (BIOMEC) in Barcelona, Spain. In this work, we focused on the port-Hamiltonian (pH) modeling approach as a method for control design of human multi-body computer simulations (MATLAB suffers errors in forward simulation as a result of numerical integration). The pH approach is viable for biomechanical systems because it is simple to connect multiple body segments, external assistive devices, actuators, and more under the same methodology. Under this port-Hamiltonian formulation, we conduct inverse dynamics, forward dynamics, and control design in a way that remains consistent with the fundamental framework and that is easily implemented in computer simulations. We describe a pH model of a simple biomechanical system and show how the pH model is suitable for the simulation of human-generated motion capture data.

Methods:

  • Motion Capture
  • Coordinate Correction for rigid-body links
  • PD Control
  • Port-Hamiltonian dynamics modeling and control design
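
To make the port-Hamiltonian formulation concrete, here is a minimal sketch of a single rigid link simulated as x_dot = (J - R) ∇H(x) + G u; the parameters, damping, and integrator are illustrative, not the BIOMEC model:

```python
import numpy as np

m, l, g0, b = 1.0, 0.4, 9.81, 0.05            # mass, length, gravity, joint damping (assumed)

def grad_H(x):
    """Gradient of H(q, p) = p^2 / (2 m l^2) + m g0 l (1 - cos q)."""
    q, p = x
    return np.array([m * g0 * l * np.sin(q), p / (m * l**2)])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])       # interconnection matrix (skew-symmetric)
R = np.diag([0.0, b])                         # dissipation matrix (positive semidefinite)
G = np.array([0.0, 1.0])                      # input torque acts on the momentum state

def step(x, u, dt=1e-3):
    return x + dt * ((J - R) @ grad_H(x) + G * u)

x = np.array([0.5, 0.0])                      # initial angle (rad) and momentum
for _ in range(5000):
    x = step(x, u=0.0)                        # unforced forward dynamics
print(x)
```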

Results

The forward dynamics simulation with no control, and the corresponding simulation using the pH control design, are shown in the project files.

While PD control is a common and successful method for reducing the tracking error, we find that the control design described by Dirksz and Scherpen offers the advantage of transforming the port-Hamiltonian formulation of the system into a new form that is easily implemented without having to perform inverse dynamics directly.

Model-Less Control using local Jacobian updates

back to top

see project files

Description:

This was the beginning of my current MS research project, in which I began by trying to extend the previous work of my advisor, Professor Yip, on model-less control of continuum manipulators. This work is a simple example that applies parts of his basic methodology to a simpler planar robot model.

Methods:

  • Local Jacobian update optimization
  • Kalman filter (commented out)
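
The project uses the optimization-based local Jacobian estimate from that prior work; as a simplified stand-in, a Broyden-style secant update conveys the core idea of correcting the Jacobian from observed motion:

```python
import numpy as np

def jacobian_update(J, dq, dx, alpha=1.0):
    """Correct J so that it better explains the latest observed motion, J dq ~ dx,
    where dq is the commanded joint step and dx the measured end-effector step."""
    dq = dq.reshape(-1, 1)
    dx = dx.reshape(-1, 1)
    return J + alpha * (dx - J @ dq) @ dq.T / float(dq.T @ dq)

# Control step under the current estimate, toward a task-space error e:
# dq_cmd = np.linalg.pinv(J) @ (gain * e)
```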

Results

(Result animation available in the project files.)

Online Learning and Control using Sparse Local Gaussian Processes for Teleoperation

back to top

see project files

Description:

This project is a work-in-progress for my MS thesis in the ARClab at UC San Diego. The work aims to achieve online model-learning and control for a teleoperation task by using Sparse Online Locally Adaptive (Gaussian Process) Regression (SOLAR) to infer a local function mapping from robot sensor states to joint states and to predict the teleoperation command for joint control. This work represents a novel combination of recent frameworks for Gaussian process regression, where local Gaussian process models are learned online and sparsified via online variational inference, with partitioning and prediction biased by a "drifting" online sparse Gaussian process.

Methods:

  • Gaussian Process Regression
  • Local GP Partitioning
  • Drifting GP
  • Online Variational Inference
  • ROS platform
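
SOLAR builds sparsification, local partitioning, and drifting models on top of standard GP regression; the underlying GP prediction step alone looks roughly like this sketch (RBF kernel, hypothetical hyperparameters):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of row-vector inputs."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, noise=1e-2):
    """GP posterior mean and marginal variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_s = rbf_kernel(X_star, X)
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(rbf_kernel(X_star, X_star)) - np.sum(v**2, axis=0)
    return mean, var
```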

Results

(Result figures available in the project files.)

Orientation Tracking and Panoramic Image Stitching with IMU

back to top

see project files

Description:

In this project, we use an unscented Kalman filter to track the orientation of a rotating camera. A panorama is generated using these filtered 3-D orientation states by stitching together rotated images.

Methods:

  • Quaternion transformations and averaging
  • Unscented Kalman Filter (UKF)
  • Panoramic Image Stitching
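
One detail worth illustrating is how orientation estimates are combined: unit quaternions cannot simply be averaged component-wise. The eigenvector method below is an assumed stand-in for the averaging step used with the UKF sigma points:

```python
import numpy as np

def quat_average(quats, weights=None):
    """Weighted average of unit quaternions (w, x, y, z): the principal eigenvector
    of the weighted sum of outer products, which handles the q / -q sign ambiguity."""
    Q = np.asarray(quats, dtype=float)
    w = np.ones(len(Q)) if weights is None else np.asarray(weights, dtype=float)
    M = (w[:, None, None] * Q[:, :, None] * Q[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    q = vecs[:, -1]                       # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```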

Results

(Result figures available in the project files.)

Color Segmentation and Barrel Detection

back to top

see project files

Description:

In this project, a multivariate Gaussian model is trained to classify the pixels of images containing a red barrel into several color classes. The red barrels are detected by grouping the regions of the barrel’s red-labeled pixels into bounding boxes and determining whether these boxes satisfy a "barrelness" threshold. The distance from the camera to the barrel was learned by training a linear regressor on training data with known distances and the widths and heights of the detected barrels. This method was evaluated on a validation and test set.

Methods:

  • Multivariate Gaussian Classification
  • OpenCV bounding-box detection
  • Joining occluded or split contours that lie within a distance threshold
  • Linear Regression
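
As an illustration of the classification step listed above (the bounding-box scoring and distance regression are not shown), here is a minimal per-class Gaussian pixel classifier with hypothetical variable names:

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_color_class(pixels):
    """Fit one multivariate Gaussian to the labeled pixels of a color class (N x 3)."""
    return np.mean(pixels, axis=0), np.cov(pixels, rowvar=False)

def classify_pixels(pixels, models, priors):
    """Assign each pixel to the color class with the highest log posterior."""
    scores = np.stack(
        [multivariate_normal.logpdf(pixels, mean=mu, cov=cov) + np.log(p)
         for (mu, cov), p in zip(models, priors)],
        axis=1)
    return np.argmax(scores, axis=1)
```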

Results

(Result figures available in the project files.)

SLAM and Texture Mapping of mobile robot

back to top

see project files

Description:

In this project, we use grid-based SLAM with particle filters to predict and update the state of the robot and an occupancy grid of the environment map. Observations from a lidar sensor, odometry information, and the configuration of the lidar relative to the robot’s center of mass are given. Given the localized robot poses and the updated grid map from SLAM, RGB-D camera images are used to generate a color texture map of the ground plane in the grid map.

Methods:

  • Particle Filter
  • 2D Occupancy Grid
  • SLAM
  • Stratified Resampling
  • RGB-D Texture Mapping
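
Of the steps above, stratified resampling is the most self-contained; a minimal sketch of how the particle set is resampled from its importance weights:

```python
import numpy as np

def stratified_resample(weights):
    """Draw one sample from each of N equal-width strata of the cumulative weight
    distribution; returns the indices of the particles that survive."""
    N = len(weights)
    positions = (np.arange(N) + np.random.uniform(size=N)) / N
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

# Usage after a measurement update:
# idx = stratified_resample(weights)
# particles = particles[idx]
# weights = np.full(len(weights), 1.0 / len(weights))
```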

Results

(Resulting occupancy-grid and texture maps available in the project files.)

Kinematic and Dynamic Simulation of simple 3 DOF Robot Arm

back to top

see project files

Description:

In this project, kinematic and dynamic models are built and used to simulate a trajectory profile, compensate for gravity, and perform tracking control to a reference position.

Methods:

  • Forward/inverse kinematics
  • Forward/Inverse dynamics
  • P-control
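
As an illustration of the kinematics listed above, here is a sketch assuming a planar 3-link arm with hypothetical link lengths (the project's actual arm model may differ):

```python
import numpy as np

def forward_kinematics(q, lengths):
    """End-effector position (x, y) and orientation of a planar serial arm,
    given joint angles q and link lengths."""
    angles = np.cumsum(q)
    x = np.sum(lengths * np.cos(angles))
    y = np.sum(lengths * np.sin(angles))
    return np.array([x, y, angles[-1]])

print(forward_kinematics(np.array([0.3, -0.2, 0.1]), np.array([0.30, 0.25, 0.10])))
```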

Results

(Result plots available in the project files.)

2D Planar RRT motion planner

back to top

see project files

Description:

Given a 2-DOF planar robot arm model and physical obstacles in the task space, this project entailed creating an RRT motion planner with collision detection in the robot's configuration space. The final trajectory was smoothed with a spline.

Methods:

  • Configuration-space collision detection
  • Rapidly exploring random trees (RRT)
  • Spline trajectory smoothing
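
The project's planner was written against the 2-DOF arm's configuration space; the sketch below shows only the core RRT loop, with a user-supplied collision predicate and without the spline smoothing:

```python
import numpy as np

def rrt(start, goal, collision_free, bounds, step=0.1, iters=5000, goal_bias=0.05):
    """Grow a tree from start toward random samples (biased toward goal) and
    return a list of configurations once a node lands within one step of the goal."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    nodes, parents = [start], [-1]
    for _ in range(iters):
        sample = goal if np.random.rand() < goal_bias else np.random.uniform(bounds[:, 0], bounds[:, 1])
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not collision_free(new):
            continue
        nodes.append(new)
        parents.append(nearest)
        if np.linalg.norm(new - goal) < step:     # close enough: trace back the path
            path, i = [goal], len(nodes) - 1
            while i != -1:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```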

Results

(Result animation available in the project files.)

Support Vector Machines

back to top

see project files

Description:

The objective of this project was to review the support vector machine algorithm as a convex optimization problem and show how its dual formulation solves for the optimal decision boundary. The soft-margin formulation and the kernel trick are shown to preserve strong duality, and their strengths are verified through a MATLAB implementation that classifies example datasets.

Methods:

  • SVM
  • Hard-Margin
  • Soft-Margin
  • Kernel Trick
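
For a quick sense of how the soft margin and kernel interact, here is a toy example (scikit-learn is used only for illustration; the project solved the dual quadratic program directly in MATLAB):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Smaller C -> softer margin (more slack); the RBF kernel applies the kernel trick.
clf = SVC(C=1.0, kernel="rbf", gamma="scale").fit(X, y)
print(clf.n_support_, clf.score(X, y))    # support vectors per class, training accuracy
```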

Results

(Result figures available in the project files.)

Pixel Classification via EM for Gaussian Mixtures

back to top

see project files

Description:

In this work, a cheetah image was classified into foreground and background via Gaussian mixtures trained with expectation maximization (EM). Different randomly initialized mixtures are compared, along with the feature dimension (1 to 64) and the number of mixture components (1 to 32).

Methods:

  • Multivariate Gaussian Mixtures
  • Expectation Maximization
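
A compact EM loop for a full-covariance Gaussian mixture, to make the two methods above concrete (the random initialization and the small covariance regularizer are assumptions of this sketch):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, iters=100, seed=0):
    """Fit a K-component Gaussian mixture to data X (N x D) by EM."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(N, K, replace=False)]                     # random initial means
    cov = np.array([np.cov(X, rowvar=False) + 1e-6 * np.eye(D) for _ in range(K)])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], cov[k]) for k in range(K)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and covariances.
        Nk = r.sum(axis=0)
        pi = Nk / N
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            d = X - mu[k]
            cov[k] = (r[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(D)
    return pi, mu, cov, r
```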

Results

(Result figures available in the project files.)

Principal Component Analysis vs Linear Discriminant Analysis for Face Recognition

back to top

see project files

Description:

In this work, PCA was performed using the SVD for dimensionality reduction of face images, and the transformed features were used for Gaussian classification. Likewise, LDA was performed with regularization (RDA) and tested for face recognition. A combination of PCA followed by LDA was also tested.

Methods:

  • PCA by SVD
  • Linear Discriminant Analysis (LDA)
  • Regularized Discriminant Analysis (RDA)
  • PCA + LDA
  • Multivariate Gaussian Classification
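
As a sketch of the PCA step listed above (the LDA/RDA and classification stages are not shown):

```python
import numpy as np

def pca_svd(X, n_components):
    """PCA via the SVD of the centered data matrix: returns the projected features,
    the principal directions, the data mean, and the explained variances."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]            # principal axes as rows
    scores = Xc @ components.T                # reduced-dimension face features
    explained = (S[:n_components] ** 2) / (len(X) - 1)
    return scores, components, mean, explained
```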

Results


PCA Results: 33.33% error; LDA Results: 18.33% error; PCA + LDA Results: 30% error

Multi-Layer Perceptron for MNIST digit classification

back to top

see project files

Description:

In this pair project, we built (from scratch) a small neural network to classify handwritten digits from the MNIST database. We experimented with 1 and 2 hidden layers between the inputs (785 image pixels) and the outputs (10 classification probabilities); the hidden layers used hyperbolic tangent and sigmoid activation functions, while the output layer used softmax activation to discern classes. To increase the speed of convergence, we employed standard techniques such as momentum, stochastic gradient descent, and preprocessing of the input data.

Methods:

  • N-layer multi-layer perceptron
  • Stochastic gradient descent
  • Momentum
  • Annealing
  • Hyperbolic tangent and sigmoid activations
  • L1/L2 regularization
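
A stripped-down version of the network (one tanh hidden layer, softmax output, SGD with momentum); sizes and learning rates here are illustrative rather than the report's settings, and annealing/regularization are omitted:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class TinyMLP:
    """784-pixel inputs -> tanh hidden layer -> 10-way softmax, trained by
    mini-batch stochastic gradient descent with momentum."""
    def __init__(self, n_in=784, n_hidden=64, n_out=10, lr=0.1, momentum=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr, self.mu = lr, momentum
        self.v = [np.zeros_like(p) for p in (self.W1, self.b1, self.W2, self.b2)]

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return softmax(self.h @ self.W2 + self.b2)

    def train_step(self, X, y_onehot):
        p = self.forward(X)
        d2 = (p - y_onehot) / len(X)                 # gradient of softmax cross-entropy
        d1 = (d2 @ self.W2.T) * (1.0 - self.h ** 2)  # back-propagate through tanh
        grads = [X.T @ d1, d1.sum(0), self.h.T @ d2, d2.sum(0)]
        for i, (param, g) in enumerate(zip([self.W1, self.b1, self.W2, self.b2], grads)):
            self.v[i] = self.mu * self.v[i] - self.lr * g     # momentum update
            param += self.v[i]                                # in-place parameter update
```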

Results


With momentum, stochastic gradient descent, and two relatively small hidden layers, we achieved a classification accuracy of 96.14% on the test dataset.

Convolutional Neural Network Transfer Learning

back to top

see project files

Description:

This group project in a neural networks course focused on transfer learning with VGG16. We utilized Keras, a deep learning library that includes the VGG16 ConvNet. VGG16 was trained on ImageNet; we worked with the Caltech256 and UrbanTribes datasets. To perform the transfer learning, we replaced the softmax layer of the pre-trained VGG16 model with our own softmax layer that predicts the classes of our datasets, then trained it and observed its performance. Lastly, we explored using a temperature-based softmax regression on the last output layer of the VGG16 model as input to a softmax layer for Caltech256.

Methods:

  • Deep Learning CNN on Keras Library
  • Transfer Learning
  • Temperature-based Softmax Regression
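
A sketch of the transfer-learning setup, written against tf.keras for convenience (the course project used the standalone Keras library and replaced only the final softmax layer; the sketch below drops the fully connected top entirely and adds a new classifier head, which is a common variant, and the temperature-based version is not shown):

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

num_classes = 257                      # Caltech256: 256 object categories plus a clutter class

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                 # keep the ImageNet convolutional features frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(num_classes, activation="softmax"),   # replacement classifier head
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```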

Results

Caltech256 Dataset

(Test-accuracy plot available in the project files.)

UrbanTribes Dataset

(Validation-loss and accuracy-per-epoch plots available in the project files.)

Character Level Recurrent Neural Network for music generation

back to top

see project files

Description:

For this project, we explored recurrent neural networks by training a basic character-level RNN on a music dataset provided in ABC format. Varying the number of hidden units, temperature-based softmax regression, and dropout regularization were used to improve validation loss. After training, we ran the network in generative mode in order to "compose" music.

Methods:

  • Deep Learning RNN on Keras Library
  • Dropout regularization
  • Temperature-based Softmax Regression
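
The temperature parameter controls how adventurous the generated music is; here is a minimal sampling helper (the Keras model that produces the logits is not shown):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample the next character index from a temperature-scaled softmax.
    Low temperature sharpens the distribution (conservative, repetitive output);
    high temperature flattens it (more varied but riskier output)."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                                   # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)
```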

Results

(Loss curve for 100 hidden units with dropout 0.2 available in the project files.)

ABC format of a song generated by the RNN (see project files)

Music score for the above format (see project files)

Bidirectional LSTM RNN for Metrical Analysis of Poetry

back to top

see project files

Description:

In this project, we used a biLSTM recurrent neural network to perform character-level supervised learning on English metrical poetry. Scansion is the process of parsing out the stressed and unstressed syllables in each line. Using For Better For Verse, a project by the University of Virginia’s Department of English, we obtained a training dataset in the form of English-language poems together with their metrical scansion. After formatting the syllabication into a character-level form, we trained on 64 poems and validated the network on 16 poems.


Methods:

  • BiDirectional LSTM RNN
  • Dropout regularization
  • L2 regularization
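
A sketch of the sequence-labelling architecture, written against tf.keras (the vocabulary size, tag count, and layer widths are illustrative, not the project's values):

```python
from tensorflow.keras import layers, models

vocab_size, n_tags = 60, 3              # assumed character-vocabulary and tag-set sizes

model = models.Sequential([
    layers.Embedding(vocab_size, 32),                                     # character embeddings
    layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.2)),
    layers.TimeDistributed(layers.Dense(n_tags, activation="softmax")),   # per-character stress tag
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X, Y, ...) with X of shape (lines, line_length) and one-hot Y builds and trains it.
```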

Results


For the validation set, we achieved around 95% accuracy per character. With an average of 38.99 characters per line in the validation set, this corresponds to roughly 13% per-line accuracy (0.95^39 ≈ 0.13). The weights with the lowest validation loss performed poorly when predicting the output of the given poem; the prediction did not even have the correct number of lines. We instead used the weights with the lowest training loss, which came from the last epoch we ran. These weights performed very well when predicting the output of the given poem, with only a few errors (example outputs are in the project files).


back to top
