



Final standings: congratulations to LA_Team!

The top teams, based on the median F1-micro score from 100 realizations of their models, were:

Position Team F1 Algorithm Language Solution
1 LA_Team (Mosser, de la Fuente) 0.6388 Boosted trees Python Notebook
2 PA Team (PetroAnalytix) 0.6250 Boosted trees Python Notebook
3 ispl (Bestagini, Tuparo, Lipari) 0.6231 Boosted trees Python Notebook
4 esaTeam (Earth Analytics) 0.6225 Boosted trees Python Notebook

I have stochastic scores for other teams, and will continue to work through them, but it seems unlikely that these top teams will change at this point.
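The stochastic scoring described above can be sketched in Python. This is an illustrative outline only, not the organizers' actual scoring script; the model choice and function name are placeholders, and the real contest compared predictions against the blind STUART and CRAWFORD facies:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

def stochastic_score(X_train, y_train, X_blind, y_blind, n_realizations=100):
    """Median F1-micro over many realizations of a stochastic model."""
    scores = []
    for seed in range(n_realizations):
        # Re-seed each realization so the model's randomness varies.
        model = GradientBoostingClassifier(subsample=0.8, random_state=seed)
        model.fit(X_train, y_train)
        y_pred = model.predict(X_blind)
        scores.append(f1_score(y_blind, y_pred, average='micro'))
    return float(np.median(scores))
```

Taking the median rather than the mean makes the score robust to the occasional unlucky realization.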

Welcome to the Geophysical Tutorial Machine Learning Contest 2016! Read all about the contest in the October 2016 issue of the magazine. Look for Brendon Hall's tutorial on lithology prediction with machine learning.

You can run the notebooks in this repo in the cloud; just click the badge below:


You can also clone or download this repo with the green button above, or just read the documents:


F1 scores of models against secret blind data in the STUART and CRAWFORD wells. The logs for those wells are available in the repo, but contestants do not have access to the facies.

** These are deterministic scores; the final standings depend on the stochastic scores — see above. **

Team F1 Algorithm Language Solution
LA_Team (Mosser, de la Fuente) 0.641 Boosted trees Python Notebook
ispl (Bestagini, Tuparo, Lipari) 0.640 Boosted trees Python Notebook
SHandPR 0.631 Boosted trees Python Notebook
HouMath 0.630 Boosted trees Python Notebook
esaTeam 0.629 Boosted trees Python Notebook
Pet_Stromatolite 0.625 Boosted trees Python Notebook
PA Team 0.623 Boosted trees Python Notebook
CC_ml 0.619 Boosted trees Python Notebook
geoLEARN 0.613 Random forest Python Notebook
ar4 0.606 Random forest Python Notebook
Houston_J 0.600 Boosted trees Python Notebook
Bird Team 0.598 Random forest Python Notebook
gccrowther 0.589 Random forest Python Notebook
thanish 0.580 Random forest R Code
MandMs 0.579 Majority voting Python Notebook
evgenizer 0.578 Boosted trees Python Notebook
jpoirier 0.574 Random forest Python Notebook
kr1m 0.570 AdaBoosted trees Python Notebook
ShiangYong 0.570 ConvNet Python Notebook
CarlosFuerte 0.570 Multilayer perceptron Python Notebook
fvf1361 0.568 Majority voting Python Notebook
CarthyCraft 0.566 Boosted trees Python Notebook
gganssle 0.561 Deep neural net Lua Notebook
StoDIG 0.561 ConvNet Python Notebook
wouterk1MSS 0.559 Random forest Python Notebook
Anjum48 0.559 Majority voting Python Notebook
itwm 0.557 ConvNet Python Notebook
JJlowe 0.556 Deep neural network Python Notebook
adatum 0.552 Majority voting R Notebook
CEsprey 0.550 Majority voting Python Notebook
osorensen 0.549 Boosted trees R Notebook
rkappius 0.534 Neural network Python Notebook
JesperDramsch 0.530 Random forest Python Notebook
cako 0.522 Multilayer perceptron Python Notebook
BGC_Team 0.519 Deep neural network Python Notebook
CannedGeo 0.512 Support vector machine Python Notebook
ARANZGeo 0.511 Deep neural network Python Code
daghra 0.506 k-nearest neighbours Python Notebook
BrendonHall 0.427 Support vector machine Python Initial score in article

Getting started with Python

Please refer to the User guide to the geophysical tutorials for tips on getting started in Python and find out more about Jupyter notebooks.
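As a starting point, a baseline classifier can be trained on the labelled data in this repo with pandas and scikit-learn. This is a minimal sketch, not a contest entry: the `train_baseline` helper is hypothetical, and the `'Facies'` label column name is assumed from Brendon Hall's article:

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def train_baseline(df, label='Facies'):
    """Fit a scaled SVM facies classifier; return it with its training F1-micro."""
    df = df.dropna()  # some log measurements may be missing
    y = df[label]
    X = df.select_dtypes('number').drop(columns=[label])
    model = make_pipeline(StandardScaler(), SVC())
    model.fit(X, y)
    return model, f1_score(y, model.predict(X), average='micro')

# Usage with the repo's training data:
# model, score = train_baseline(pd.read_csv('facies_vectors.csv'))
```

Note that the training F1 reported here is optimistic; for an honest estimate, hold out wells (not random rows) for validation, since adjacent samples in a well are highly correlated.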

Find out more about the contest

If you intend to enter this contest, I suggest you check the open issues and read through the closed issues too. There's some good info in there.

To find out more please read the article in the October issue or read the manuscript in the tutorials-2016 repo.


We've never done anything like this before, so there's a good chance these rules will become clearer as we go. We aim to be fair at all times, and reserve the right to make judgment calls for dealing with unforeseen circumstances.

IMPORTANT: When this contest was first published, we asked you to hold the SHANKLE well blind. This is no longer necessary. You can use all the published wells in your training. Related: I am removing the file of predicted facies for the STUART and CRAWFORD wells, to reduce confusion — they are not actual facies, only those predicted by Brendon's first model.

  • You must submit your result as code and we must be able to run your code.
  • Entries will be scored by a comparison against known facies in the STUART and CRAWFORD wells, which do not have labels in the contest dataset. We will use the F1 cross-validation score. See issue #2 regarding this point. The scores in the 'leaderboard' reflect this.
  • Where there is stochastic variance in the predictions, the median of 100 realizations will be used as the cross-validation score. See issue #114 regarding this point. The scores in the leaderboard do not currently reflect this. Probably only the top entries will be scored in this way. [updated 23 Jan]
  • The result we get with your code is the one that counts as your result.
  • To make it more likely that we can run it, your code must be written in Python or R or Julia or Lua [updated 26 Oct].
  • The contest is over at 23:59:59 UT (i.e. midnight in London, UK) on 31 January 2017. Pull requests made after that time won't be eligible for the contest.
  • If you can do even better with code you don't wish to share fully, that's really cool, nice work! But you can't enter it for the contest. We invite you to share your result through your blog or other channels... maybe a paper in The Leading Edge.
  • This document and documents it links to will be the channel for communication of the leading solution and everything else about the contest.
  • This document contains the rules. Our decision is final. No purchase necessary. Please exploit artificial intelligence responsibly.


Please note that the dataset is not openly licensed. We are working on this, but for now please treat it as proprietary. It is shared here exclusively for use on this problem, in this contest. We hope to have news about this in early 2017, if not before.

All code is the property of its author and subject to the terms of their choosing. If in doubt — ask them.

The information about the contest, the original article, and everything in this repo published under the auspices of SEG are licensed CC-BY and OK to use with attribution.