🚨 test(integration): reproducibility with PyTorch (#10)
jean-francoisreboud committed Oct 19, 2022
1 parent 48984b7 commit 221043b
Showing 27 changed files with 1,281 additions and 94 deletions.
6 changes: 2 additions & 4 deletions .github/workflows/examples.yml
@@ -36,10 +36,8 @@ jobs:
- name: Test
run: |
conda activate maexamples-ci
swift test --filter MAExamples
swift test -c release --filter MAExamples
- name: Remove Conda Environment
if: always()
run: |
conda deactivate
conda env remove --name maexamples-ci
run: conda env remove --name maexamples-ci
39 changes: 39 additions & 0 deletions .github/workflows/integration-tests.yml
@@ -0,0 +1,39 @@
name: integration-tests

on:
workflow_dispatch:
push:
branches:
- main
- release**

jobs:
MATorchTests:
runs-on: self-hosted
defaults:
run:
shell: bash -l {0}

steps:
- uses: actions/checkout@v3

- name: Setup Conda Environment
run: |
conda create --name matorch-ci python=3.7
conda env list
- name: Install Python Library
working-directory: Tests/MATorchTests/Base
run: |
conda activate matorch-ci
pip="$(dirname `which python`)"/pip
$pip install -e .
- name: Test
run: |
conda activate matorch-ci
swift test --filter MATorchTests
- name: Remove Conda Environment
if: always()
run: conda env remove --name matorch-ci
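The workflow's `pip install -e .` step assumes a `setup.py` in `Tests/MATorchTests/Base` (the commit adds one, but its contents are not shown in this view). A minimal hypothetical sketch of such a file, with an illustrative package name and dependency list:

```python
# Hypothetical minimal setup.py for the Python test-support package.
# The package name and dependencies are assumptions for illustration,
# not taken from this commit.
from setuptools import find_packages, setup

setup(
    name="matorch_lib",
    version="0.1.0",
    packages=find_packages(),
    python_requires=">=3.7",  # matches the conda env created above
    install_requires=[
        "torch",  # PyTorch serves as the reference implementation
        "numpy",
    ],
)
```

With a file like this in place, `pip install -e .` installs the package in editable mode, so changes to the Python sources are picked up by the tests without reinstalling.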
2 changes: 1 addition & 1 deletion .github/workflows/unit-tests.yml
@@ -3,7 +3,7 @@ name: unit-tests
on: [push]

jobs:
MAKit:
MAKitTests:
runs-on: self-hosted

steps:
5 changes: 5 additions & 0 deletions .gitignore
@@ -7,3 +7,8 @@ DerivedData/
.swiftpm/config/registries.json
.swiftpm/xcode/package.xcworkspace/*
.netrc

.idea/
*.egg-info/
__pycache__/

24 changes: 24 additions & 0 deletions .swiftpm/xcode/xcshareddata/xcschemes/MAKit.xcscheme
@@ -104,6 +104,20 @@
ReferencedContainer = "container:">
</BuildableReference>
</BuildActionEntry>
<BuildActionEntry
buildForTesting = "YES"
buildForRunning = "YES"
buildForProfiling = "YES"
buildForArchiving = "YES"
buildForAnalyzing = "YES">
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "MAKit_MATorchTests"
BuildableName = "MAKit_MATorchTests"
BlueprintName = "MAKit_MATorchTests"
ReferencedContainer = "container:">
</BuildableReference>
</BuildActionEntry>
</BuildActionEntries>
</BuildAction>
<TestAction
@@ -132,6 +146,16 @@
ReferencedContainer = "container:">
</BuildableReference>
</TestableReference>
<TestableReference
skipped = "NO">
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "MATorchTests"
BuildableName = "MATorchTests"
BlueprintName = "MATorchTests"
ReferencedContainer = "container:">
</BuildableReference>
</TestableReference>
</Testables>
</TestAction>
<LaunchAction
6 changes: 6 additions & 0 deletions AUTHORS
@@ -0,0 +1,6 @@
# Below is a list of people and organizations that have contributed
# to the MAKit project. Names should be added to the list like so:
#
# Name/Organization <email address>

Jean-François Reboud <jean-francois.reboud@owkin.com>
4 changes: 2 additions & 2 deletions CODEOWNERS
@@ -1,6 +1,6 @@
# Below is a list of MAKit code owners who should review changes
# before delivering into the release.

#
# Code owning should propagate to contributors upon request.

* @jean-francoisreboud
* @jean-francoisreboud
84 changes: 84 additions & 0 deletions CODE_OF_CONDUCT.md
@@ -0,0 +1,84 @@
# Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our
project and our community a harassment-free experience for everyone,
regardless of age, body size, disability, ethnicity, sex characteristics,
gender identity and expression, level of experience, education,
socio-economic status, nationality, personal appearance, race, religion,
or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention
or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others’ private information,
such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate
in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards
of acceptable behavior and are expected to take appropriate and
fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit,
or reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all project spaces,
and it also applies when an individual is representing the project
or its community in public spaces.
Examples of representing a project or community include using
an official project e-mail address, posting via an official
social media account, or acting as an appointed representative
at an online or offline event.
Representation of a project may be further defined and clarified by
project maintainers.

This Code of Conduct also applies outside the project spaces when there is
a reasonable belief that an individual's behavior may have a negative impact
on the project or its community.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by contacting the project team at #TODO.
All complaints will be reviewed and investigated and will result in a response
that is deemed necessary and appropriate to the circumstances.
The project team is obligated to maintain confidentiality with regard
to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct
in good faith may face temporary or permanent repercussions as determined
by other members of the project’s leadership.

## Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 1.4,
available at
https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct,
see https://www.contributor-covenant.org/faq
31 changes: 31 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,31 @@
# Contributing

Thank you for your interest in contributing to MAKit! From commenting to
reviewing and sending MRs, all contributions are welcome.

## Developer environment

Install Xcode with the command line tools.

## Coding style

Go to Xcode Preferences -> Text Editing tab -> Page guide at column: 80

## Testing

### CI

The unit tests run after each push to the repository.

The integration tests are not run systematically, nor are the examples.

Once the MR is "ready to review", please trigger these workflows on GitHub
to ensure the additional tests pass.

### Local

Running the unit tests from Xcode is straightforward.
Running the integration tests and the examples requires additional setup;
see the README for more information.

## Release
21 changes: 16 additions & 5 deletions Package.swift
@@ -1,5 +1,6 @@
// swift-tools-version: 5.7
// The swift-tools-version declares the minimum version of Swift required to build this package.
// The swift-tools-version declares the minimum version of Swift required
// to build this package.

import PackageDescription

@@ -9,7 +10,6 @@ let package = Package(
.macOS(.v10_15)
],
products: [
// Products define the executables and libraries a package produces, and make them visible to other packages.
.library(
name: "MAKit",
targets: ["MAKit", "MATestsUtils"]
@@ -22,8 +22,6 @@
),
],
targets: [
// Targets are the basic building blocks of a package. A target can define a module or a test suite.
// Targets can depend on other targets in this package, and on products in packages this package depends on.
.target(
name: "MAKit",
dependencies: [],
@@ -39,9 +37,22 @@
name: "MAKitTests",
dependencies: ["MAKit", "MATestsUtils"]
),
.testTarget(
name: "MATorchTests",
dependencies: ["MAKit", "PythonKit"],
resources: [
.process("Base/python_lib"),
.process("Base/setup.py")
]
),
.testTarget(
name: "MAExamples",
dependencies: ["MAKit", "PythonKit"]
dependencies: ["MAKit", "PythonKit"],
resources: [
.process("Base/data"),
.process("Base/python_lib"),
.process("Base/setup.py")
]
),
]
)
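The `python_lib` resources bundled above are presumably imported at test time through `PythonKit`. A hypothetical sketch of a helper such a library might expose so that Swift can read PyTorch weights as a flat list of floats (the function name and shape of the data are assumptions, not taken from the commit):

```python
# Hypothetical sketch of a python_lib helper: PythonKit on the Swift side
# can consume a flat Python list of floats far more easily than nested
# framework-specific tensor objects.

def flatten_weights(tensors):
    """Flatten a list of (possibly nested) weight arrays into one flat
    list of Python floats, in depth-first order."""
    flat = []
    for t in tensors:
        stack = [t]
        while stack:
            x = stack.pop()
            if isinstance(x, (list, tuple)):
                # Push children in reverse so they pop in original order.
                stack.extend(reversed(x))
            else:
                flat.append(float(x))
    return flat
```

On the Swift side, the result of such a call could be converted with `[Float](pythonList)` and loaded into the corresponding `MAKit` layer.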
57 changes: 52 additions & 5 deletions README.md
@@ -3,7 +3,7 @@
Maximal Activation Kit.
A deep-learning framework for computer vision.

It aims at promoting a total control and understanding of the main
This framework aims at promoting total control and understanding of the main
operations needed to train deep learning models.

The `Layer` is the essential component needed to build an explicit graph of
@@ -22,6 +47,47 @@ The API explicitly exposes the following functions:

## Layer

# MATorchTests

`MATorchTests` contains integration tests that compare `MAKit` models
with their `PyTorch` equivalents.

The goal is to demonstrate a good level of reproducibility and
interoperability with `PyTorch`.

## Setup

These tests require a special `Python` environment.

```bash
conda create --name matorch python=3.7
conda activate matorch
cd Tests/MATorchTests/Base
pip install -e .
```

You should be able to run the tests right from Xcode or
with a `bash` command:

```bash
swift test --filter MATorchTests
```

You can clean up the environment afterwards with:

```bash
conda deactivate
conda env remove --name matorch
```

## Steps

1. Create a model in `MAKit` and `PyTorch`.
1. Get the weights from the `PyTorch` model and load them into the `MAKit` model.
1. Load data from `PyTorch` and set it on both models.
1. Compute forward, apply dummy loss then the backward pass.
1. Compare the gradient norm on the very first layer in both models.
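The comparison in the steps above can be sketched framework-agnostically in plain Python (toy 2x2 weights and a dummy quadratic loss, purely for illustration; the real tests run an `MAKit` model against its `PyTorch` twin):

```python
import math

def forward(W, x):
    # Single linear layer: y = W @ x
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def grad_first_layer(W, x):
    # Dummy loss L = 0.5 * sum(y_i^2)  =>  dL/dW_ij = y_i * x_j
    y = forward(W, x)
    return [[y_i * x_j for x_j in x] for y_i in y]

def grad_norm(G):
    # Euclidean norm over all gradient entries
    return math.sqrt(sum(g * g for row in G for g in row))

# Same (made-up) weights and same input fed to both "models".
W = [[0.1, -0.2], [0.3, 0.4]]
x = [1.0, 2.0]

norm_a = grad_norm(grad_first_layer(W, x))
norm_b = grad_norm(grad_first_layer(W, x))  # stand-in for the PyTorch side

# Reproducibility check: gradient norms agree within tolerance.
assert abs(norm_a - norm_b) < 1e-6
```

Comparing a single scalar (the first layer's gradient norm) rather than every weight keeps the check cheap while still exercising the full forward and backward passes end to end.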

# MAExamples

`MAExamples` contains examples that show how to interact with `MAKit`.
@@ -42,7 +83,7 @@ We want to train the model to discriminate between 2 labels

### Setup

This example has some Python dependencies. In order to run
This example has some `Python` dependencies. In order to run
the example, we first have to set up the environment:

```bash
@@ -59,6 +100,12 @@ with a `bash` command:
swift test --filter MAExamples
```

Or run the tests in release mode:

```bash
swift test -c release --filter MAExamples
```

You can clean up the environment afterwards with:

```bash
@@ -69,7 +116,7 @@ conda env remove --name maexamples
### Steps

1. Dump the training and testing datasets.
2. Evaluate a random model on the testing dataset: watch a bad performance.
3. Train a model on the training dataset.
4. Evaluate the trained model on the testing dataset:
1. Evaluate a random model on the testing dataset: watch a bad performance.
1. Train a model on the training dataset.
1. Evaluate the trained model on the testing dataset:
watch a better performance.
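The evaluate/train/evaluate cycle above can be illustrated with a toy accuracy check (the labels and predictions below are invented stand-ins; the real example trains a model on the dumped dataset):

```python
def accuracy(preds, labels):
    # Fraction of predictions that match the ground-truth labels.
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Invented test labels for two classes (mostly label 1).
labels = [0, 1, 1, 0, 1, 1, 1, 0, 1, 1]

# Stand-in for an untrained model: alternating guesses.
random_preds = [i % 2 for i in range(len(labels))]

# Stand-in for a trained model that has learned the majority class.
trained_preds = [1] * len(labels)

print(accuracy(random_preds, labels))   # 0.4 -- bad performance
print(accuracy(trained_preds, labels))  # 0.7 -- better performance
```

The point of evaluating the random model first is to establish a baseline: the improvement after training is only meaningful relative to that baseline.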