evacougnon edited this page Sep 24, 2023 · 15 revisions

This section provides information for toolbox developers: how to set up a development environment, how the development process works, how to test, and how to perform a toolbox release.


Development environment

To start developing for the toolbox, the following is a recommended list of things to have at hand and that you need to configure/set up:

  1. git & python
  2. Matlab R2018b
  3. Matlab runtime v95
  4. Build Requirements
  5. Tests and imos-toolbox bucket management

1. Git & python

For Linux, the majority of distributions distribute a recent version of git. I personally use a more recent package (git-2.30+), but your mileage may vary given your distribution (usually several months or years behind):

apt-get install git

For Windows, I recommend installing from https://gitforwindows.org/. It also comes with a nice bash shell that will be familiar to Linux users.

Python3.6+ is required to build the stand-alone binary app. The app should be rebuilt/packaged for every release using the build.py python script provided at the root of the repository. Instructions/requirements to generate the Matlab binaries can be found here.

2. Matlab R2018b

For development, it is recommended to install Matlab R2018b with at least the following toolboxes:

  • Statistics and Machine Learning Toolbox
  • Signal Processing Toolbox
  • Parallel Computing Toolbox

Both statistics and signal processing are required if you wish to run some SpikeQC tests and functions. The parallel computing toolbox is used to run some unit tests in parallel (Linux). None of the above toolboxes is required for the stand-alone binary to run, but they are required for the build process.

3. Matlab runtime v95

The Matlab runtime is a sandboxed/minimal Matlab library collection that allows users to run Matlab apps without the need to have the entire Matlab software installed. For more information, and for how to install it, check our runtime installation guide.

4. Build Requirements

A toolbox release includes a stand-alone binary application, a self-contained IMOS-toolbox application. This stand-alone binary requires only that the user have the Matlab runtime installed. See here for more details about the stand-alone binary release.

The build task has evolved since the earlier 2.6 versions and is now done completely with a python driver script that calls the matlab and java compilers (mcc/java). Please check our build-script instructions on how to create a stand-alone binary file with our python script.

5. Tests and imos-toolbox bucket management

To execute all tests in the codebase, one needs several binary and text files. These out-of-the-tree files are kept in an AWS s3 bucket named imos-toolbox. The idea of using this bucket is to keep data and code separated. This has several advantages, such as keeping the codebase small and not redistributing binary data that users do not care about.

If you need to send/archive test files in the bucket, you will need to configure some permissions and be familiar with AWS s3 buckets.

For added convenience, the codebase contains a script, send_testfile.sh, that can quickly archive a file to the imos-toolbox s3 bucket. The script requires only a basic AWS credentials configuration and uses boto3 to send files, with md5sum metadata, to the imos-toolbox bucket. You will also need write permissions to the bucket, which must be granted to you by an AODN AWS administrator.

Documentation for the boto3 AWS credentials setup can be found here.
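For reference, the credentials live in the ~/.aws/ directory. A minimal sketch is shown below; the profile name, region, and placeholder keys are assumptions — use the values given to you by the AODN AWS administrator:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = ap-southeast-2
```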

The send_testfile.sh script is very basic and accepts only a single argument: the path of the file. It will calculate the md5sum of the file and store both the file and the md5sum metadata under a bucket prefix. For further convenience, the script assumes the argument's folder is the prefix and the argument's basename is the bucket filename.

For example, the default way to send a testfile into the bucket is:

cd <imos_toolbox_root>
./send_testfile.sh data/testfiles/Sea_Bird_Scientific/SBE/19plus/v000/YON20200306CFALDB_with_PAR_and_battery.cnv

This will store the file in the imos-toolbox bucket with the prefix data/testfiles/Sea_Bird_Scientific/SBE/19plus/v000/. The file YON20200306CFALDB_with_PAR_and_battery.cnv will have an md5sum s3 metadata attribute, which is used by the get_testfiles.py script for robust checking after download.

If you call:

./send_testfile.sh /home/user/my_secret_file

then you are storing my_secret_file in the public imos-toolbox bucket, with prefix=/home/user/. For this reason, the script only accepts a single argument and will not recursively send files.
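In essence, what the script does can be sketched in python. This is a hypothetical sketch, not the actual shell script; the helper names and the chunked md5 computation are assumptions based on the description above:

```python
import hashlib

BUCKET = "imos-toolbox"

def file_md5(path):
    """Return the hex md5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def send_testfile(path):
    """Upload `path` to the bucket, using the relative path itself as the
    object key (folder = prefix, basename = filename) and attaching the
    md5 as s3 object metadata."""
    import boto3  # imported lazily; requires configured AWS credentials
    s3 = boto3.client("s3")
    s3.upload_file(path, BUCKET, path,
                   ExtraArgs={"Metadata": {"md5sum": file_md5(path)}})
```

Because the relative path is reused as the object key, running the script from the repository root keeps the bucket layout mirroring the data/testfiles/ tree.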

Another way to manage the bucket, including the removal of files, is via the AWS web interface. You will need to log in here, change your role accordingly, select the s3 service and the imos-toolbox bucket, and then manage the files manually.

The script uses the AWS boto3 python library to send files to the bucket without the need to use the AWS web interface.

Hence, to use the send_testfile.sh script, you will need your AODN AWS credentials properly configured and authorized to manage the bucket (write-access).

The development process

Feature-wise, the IMOS toolbox is not a Matlab toolbox per se, but a GUI application. Some functionality cannot be easily decoupled from it, and as such, development has to happen on several fronts, including functional scope testing, I/O between components, and GUI behaviour.

An important chunk of the development and debugging is focused on making sure the components are coupled correctly - metadata and data are stored and available with the right names/locations - and that the GUI can operate without hiccups. Most of the development today tries to decouple, as much as possible, the functionality from the user interface requirements, so that more code is reused and several actions are composable instead of monolithic.

In terms of function testing, the toolbox is currently in a transitional state. The codebase had limited testing capabilities, but test coverage has increased and we now have extra tools that allow test-driven development.

As mentioned above, be aware that some test functionality also requires instrument files, and these files are kept apart from the codebase. You may need to fully set up your development environment before running some tests.

Suggested coding rules and conventions

This section contains some suggested coding rules and conventions. The current codebase still contains some unconventional code and practices. These are mostly concentrated in old files (e.g. parsers and the GUI code).

The main unconventional parts of the codebase are:

  1. License blocks in source code.
  2. matlabcase, camelCase, snake_case mix.
  3. Different indentation rules.
  4. Lax documentation.
  5. Long & nested scopes, function names (specific names) define flow behaviour (context switching), lack of interfaces and unit tests.

Problem 1 is historical and a strict interpretation of the GNU license (gnuism) that all source code should contain the license block. This is largely unnecessary given that we provide a license.txt file at the codebase root folder. After some time, I personally started removing the license text block every time I modify a file (doing it as a single commit). The license is just noise that makes the code harder to navigate and inspect.

Problem 2 is quite minor since the different naming is mostly associated with the functionality origin. For example, most Matlab core functions are provided without underscores or camelCase naming (isstruct,fieldnames,plotyy,getpos), while most of the toolbox codebase upper-level functions are written in camelCase (e.g. mainWindow,depthPP,imosInOutWaterQC). snake_case is mostly used for toolbox methods such as IMOS.gen_variables,IMOS.find,IMOS.adcp.bin_in_water. A few exceptions exist (isequal_tol,nc_flat,nc_get_var).

Problem 3 is also historical and quite common. Although matlab ignores indentation at runtime, most of us would expect a fixed indentation rule across the codebase. This was not the case early on, and several attempts to define a common indentation also faced problems with files edited in Windows and Linux (CRLF vs LF end-of-line standards). Although the end-of-line issue was solved, multiple indentation styles still remain.

Problem 4 is endemic. Most documentation does not provide any clue about the types of the input and output arguments, nor examples of function usage. The solution here was to adopt a strict documentation block template, which also works as a test docstring.

Problem 5 is common. For example, some quality control and pre-processing functions have a very wide area of action or are deeply nested (e.g. depthPP). Other details are hard-coded behaviour: the quality control and pre-processing function names contain the QC and PP monikers used for filtering and context switching, and entire structures are passed around along with all the verification requirements (lack of interfaces). There is also a lot of repetition and long boilerplate code, particularly with assignments to variables and all the metadata.

Hence, the suggested rules for new development are the following:

Problem 1 - avoid a license block in the source code.

This avoids copy/paste and handling license year updates, does not artificially increase the lines of code, improves search tools, and reduces the number of pages the developer has to flip through to get to the actual source code.

Problem 2 - Suggested naming conventions

  • The name of top-level functions in the toolbox should be maintained as camelCase.
  • The name of methods, lower-level functions, nested and/or within an already defined function scope, should be snake_case.
  • Renaming of functions to match conventions should be deferred until test coverage for that function and its usage is complete.
  • Try to use semantic names, like actions (compute_xyz), verifications (is_valid_field), filtering (variable_map), etc. The action_name_type rule is usually a good rule.
  • Leave camelCase to name top-level functions or GUI calls.
  • Use snake_case for variable names, local functions, toolbox functions or anything that is not called by the UI.
  • If a function is aligned with what should be a matlab core function, use matlabcase. For example, Util/Schema/isint8,Util/CellUtils/iscellsize.

Problem 3 - Suggested indentation rules

  1. Source code files should be stored in github with LF new-lines strings (Linux end-of-line standard).

    • If you plan to develop in Windows, make sure you configure git for that. This can be accomplished by adding * text=auto eol=lf to your .gitattributes file.
  2. Indentation is four spaces, no tabs, without extra leading or trailing whitespace. Every main control block declaration should be one empty line apart from the previous code (e.g. for,if,while). There should be spaces after commas and around operators such as +,-,=,&&,||, and all functions should have an end closure.

For example:

function [d] = raise_missing_argument(a, b, c)

if numel(b) > 1 || numel(c) > 1
    errormsg('b or c not singleton')
end

for k = 1:length(a)
    d = a(k)*b + c;
end

end
  3. If the file is a class object, the indentation should follow rule 2.
classdef myclass < handle

    properties (Access = public)
        myname = 'a';
    end

    methods
       
        function x = myclass(varargin)
            x.b = 1;
            x.c = 2;
        end

    end

end
  4. If the file is a function with n functions within its own closure, the indentation after each function declaration should be 0. Nested functions, on the other hand, should follow rule 2 (four spaces).

For example:

function [out] = f1(in)

out2 = f2(in);

function [out] = f3(in)
    out = in;
end

out = f3(out2);

end

function [out] = f2(in)

out = in;

end

Problem 4. Suggested documentation template

The historical way Matlab functions are documented is quite particular. Usually, the function signature is never provided, and optional arguments are buried in large chunks of documentation text. Types are also very rarely mentioned. Some historical toolbox functions contain a bit more information related to types, inputs and outputs, but given the highly variable structure inputs, the help docstring was never enough to understand typical usage and requirements.

Hence, from version 2.6 a more standardized docstring format was introduced. The format follows some previous toolbox docstrings but with a more strict formatting (see below). The rules are:

  1. A docstring should contain the function signature below the function declaration
  2. A Description of what the function does should follow.
  3. Inputs section should contain the argument names, types, and a short description.
  4. As above, but for the Output section
  5. The example block should contain a minimal example, a direction to the reader for further tests, or be an actual test.
  6. The author(s) should follow at the end before the function body.
  7. A nested function, or one within the same file scope, may omit the author line if the author is the same, and may skip the example block if the usage scope is narrow.

For example:

function [out1] = function_name(arg1,arg2)
% function [out1] = function_name(arg1,arg2)
%
% Description of Function. 
%
% Inputs:
%
% arg1[type] - Arg1 description
% arg2[type] - Arg2 description
%
% Outputs:
%
% out1[type] - out1 description
%
% Example:
%
% %the Example block is executed by `testDocstring`, so this double comment line acts as a comment for it.
%
% assert(function_name(1,1)==2)
%
%
% author: hugo.oliveira@utas.edu.au
%

narginchk(2,2);
out1=arg1+arg2+prod(arg1,arg2);

end

function [pout] = prod(arg1,arg2)
% function [pout] = prod(arg1,arg2)
%
% Do a product 
%
% Inputs:
% 
% arg1[double] - a number
% arg2[double] - another number
%
% Outputs:
%
% pout[double] - arg1*arg2
%

narginchk(2,2)
pout=arg1*arg2;

end

The fixed docstring formatting allowed us to create tools that explore this standard. The most important one is testDocstring. This function executes the code blocks within the Example section of function docstrings, which should contain valid assert statements. Hence, the docstring acts as a test, and using the template above is highly recommended. See the unit tests documentation for more details.

Another case is fsig, which prints only the function signature. There were plans to expand this to other helpers, like a describe helper that would exclude the example block from being displayed. Feel free to try if you are keen!

Finally, there are tools out there that allow you to avoid copy/pasting the above template (and others) and to easily write the docstring. They are called snippets. If you use vim, a good and highly configurable plugin is ultisnips.

Problem 5. Code recommendations

The following recommendations will help you to write better code and avoid repeating the problems raised above.

  • Write function/toolboxes tests with our docstring template.

    • By using our docstring template, you will be able to use testDocstring to quickly test your function.
      • testDocstring will return true if your example block is OK!
    • Use xunit tests for more complex testing (UI, parsers, multiple arguments).
  • Use classes for stateful tasks (UI mostly).

    • Nested callback functions are very hard to debug, since they are treated like anonymous functions and one cannot create temporary variables within them when debugging.
    • Object-oriented classes are available, and they make state management much more manageable than using only nested callback functions.
  • Write complex classes/objects/UI tests as xunit tests.

    • Given the high number of arguments and states, these are best tested with classes instead of simple test functions.
  • Use functions/toolboxes for stateless jobs.

    • Using the DRY and DOTW principles - Do not Repeat Yourself / Do One Thing Well - one avoids repetition and enables composable software.
  • Write function/toolboxes tests as docstrings.

    • This avoids writing a new test file for every single function.
    • Enables documentation to be kept updated since a bad (good) example is a failed (success) test.
  • Use Test-Driven Development.

    • Writing a test first, and/or as soon as a behaviour is attained, permits faster and more complete test coverage.
    • Helps one keep applying DRY and DOTW principles since everyone dislikes writing long and complex tests.
  • Use conventional commits

    • This will make git navigation, search, and logs clearer. Also helps when creating changelogs.
  • Flat is better than nested. Use functions/wrappers to reduce long statements/blocks.

    • This will make debugging easier, particularly for nested loops.
    • Anonymous functions are your best friend
  • Avoid vectorisation just for the sake of it.

    • The Matlab JIT compiler is quite good these days, so vectorisation is way less important now than it was 10 years ago.
    • A simple loop is easily (auto-)vectorised by the JIT. It is a win-win situation - simple/flat loops are more readable and faster.
  • Beware of the jit compiler, copy on write, and overall mutability rules.

    • Most of the time, your data is not being copied (or released from memory) unless you modify it (or it goes out of a function's scope).
    • Passing a dataset to a function is effectively a pass by reference most of the time.
    • Avoid mutation of big arrays, particularly if they are not defined in the current scope - they may require big copies.
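A minimal illustration of the copy-on-write behaviour mentioned above (a sketch; the array size is arbitrary):

```matlab
a = zeros(1e7, 1);   % allocate a big array (~80 MB of doubles)
b = a;               % no copy yet: b shares a's data
b(1) = 1;            % copy-on-write: the full array is duplicated here
```

The same logic applies across function boundaries: a function that only reads its input never pays for a copy, while one that mutates it does.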

Tips and Tricks.

  • Be familiar with the scopes of the different objects - functions, toolboxes, and classes

    • The toolbox syntax (folders with + in the name) is quite handy to avoid writing all your logic in a single class file.
  • Be familiar with several utility functions in the Util directory, particularly Path,NetCDF,StructUtils,CellUtils,Schema and the +IMOS toolbox.

    • There was some effort to improve the libraries of the toolbox so we can write better, shorter and cleaner code (DRY/DOTW again). Be familiar with these utility functions and how to use them. Although there is some repetition with other functions currently used in the toolbox (e.g. IMOS.get and derivatives with getVar and friends), you will abstract away several silly things, like looping via dummy indexes to access a variable or accessing variables via index numbers.
  • Beware that the Matlab Runtime may not contain all the functions required for a given functionality, so you may need to code some functions by hand for users without certain Matlab toolboxes.

  • If deep introspection or comparison of a toolbox dataset is required, check Schema/treeDiff.m. This will compare items in structs/cells one by one, type by type, recursively. The error messages still need some love, but you will soon notice how useful it is, particularly when detecting metadata mismatches.

  • The majority of the GUI elements are modified in the selectionChange callback, within the mainWindow.m file. Most of the main window plots are triggered by different Callbacks in displayManager.

  • Be familiar with dispmsg,errormsg,warnmsg which enrich the debugging features.

  • Be familiar with the keyboard,dbstack commands or use the debugger in the matlab editor.

Test-driven development

Since version 2.6, development is mostly test driven. For every new functionality, tests are developed/provided, while old functionality is handled on a case by case basis. For more information about how to use test frameworks and some further details, see our UnitTests page.

Test-driven development - files

One important part of testing is to ensure instrument files are read and stored in memory in the way the toolbox expects (a dataset). For that, individual instrument files are required. At the moment, these files are NOT kept in the repository, given their size and mostly binary nature.

In particular, you will need to download some files to fully test the toolbox. Instructions can be found in how to obtain the test files.

Adding/Removing test files

To add new files, you will need AODN Project Officer permissions. This is so because adding files deals directly with the AWS imos-toolbox bucket via s3-object PUT requests. Hence, there are two ways to perform this task:

  1. Manually via AWS web interface.
  2. Using the send_testfile.sh script.

Both methods require AODN Project Officer permissions. To use send_testfile.sh, the user needs to install the AWS python API (boto3) and have properly configured roles and credentials (the ~/.aws/ directory).

Also, be aware that some manual or automatic testing may leave temporary files within the test folders (e.g. the mqc, ppp, pqc session files). Although some tests were updated to ignore these session files, other tests may pick up these session files and, as such, fail.

It is for this reason that send_testfile.sh only accepts one argument - the path of the file. For your convenience, the script assumes that the path of the file will be the s3 prefix at the bucket. Hence:

cd <imos_toolbox_root>
./send_testfile.sh data/testfiles/ECOTriplet/v000/FLSB-3201IENGR.raw

will store a file named FLSB-3201IENGR.raw at the s3 prefix data/testfiles/ECOTriplet/v000/, with an md5sum attribute stored in the s3 metadata of that particular file.

Removal of files from the bucket is a manual process.

The release process

A toolbox release is a multi-step procedure and includes full testing, stand-alone binary creation, a changelog, and a tag/github release.

Assuming a Linux machine is used as the main development machine, the workflow is:

  1. Run all tests in Linux.
  2. Bump version in imosToolbox.m
  3. Build binaries for Linux.
  4. Verify GUI functionality and version of the stand-alone binary (in Linux), if applicable.
  5. Run all tests in Windows.
  6. Build binaries for Windows.
  7. Verify GUI functionality and version of the stand-alone binary (in Windows), if applicable.
  8. Send the Windows binary back to the Linux machine, and aggregate the imosToolbox.m version bump and both binaries in commits.
  9. Git push, wait for merge.
  10. After the branch is merged, update the respective version branch to master (e.g. master->2.6).
  11. Create a changelog.
  12. Create a tag/GitHub release with the changelog.

Common errors/mistakes here include:

  • Not running all tests after some follow-up fixes.
  • Not doing the bump version before building binaries.
  • Not synching the repository between the platforms
  • Missing some manual testing in windows.
  • Not synching the versioned branch to the master branch after the feature is merged.

The above mistakes are quite common given the number and manual nature of some procedures (like testing in windows).

It is not unusual for steps 1-4 to be part of the development process, so these steps are usually repeated as a double check before a release. It is also a good idea to double-check the binaries before sending the commits to the origin, since wrong versions/binaries are quite a common error.

Step 1. Run all tests in Linux.

To run all xunit tests, you can execute the test/runAllTests.m function. runAllTests will run all files under the test directory.

To run all docstring tests, you can execute the test/checkDocstrings.m function. checkDocstrings will accept a folder to scan and execute all docstring tests.

Alternatively, it is possible to use the bash script ./runalltests.sh which will, by default, run the tests in parallel (High CPU consumption) in non-interactive mode.
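Putting the above together, a typical full test run from the Matlab command window looks something like the following sketch. The checkout path is hypothetical, and the exact signatures should be checked in the test directory (the doc above only states that checkDocstrings accepts a folder to scan):

```matlab
cd('/path/to/imos-toolbox')   % hypothetical checkout location
addpath(genpath(pwd))         % make test/ and Util/ visible
runAllTests                   % all xunit tests under test/
checkDocstrings(pwd)          % docstring (Example block) tests
```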

Errors explanation

A.Error #1

Error occurred in testSignature/testReadSignature1000(s1000_file=S100287A003_HAIFA_buoy_00_ad2cp_ppp,mode=timeSeries)

>testVoltageParameters/testBatVoltageSignature(signature_file=S100287A003_HAIFA_buoy_00_ad2cp_ppp,mode=timeSeries)

These errors occur because some other test created a “.ppp” file in the test folder. The file name looks a bit mangled because matlab does not accept dots in structure names, so the dots were substituted with “_”.

Several solutions here:

  1. Update the tests and filter/avoid picking up the “.ppp” files (see FilesInFolder function in other tests),
  2. Set the tests to create these files in a different place (the “tmp” directory) by setting the “toolbox_input_file” to some other path, or
  3. Clean-up the test folder of any .ppp, .mqc or .pqc files before the run.

Solution 1 is recommended, and is what Hugo was doing before the latest release (early 2021).

B.Error #2

>     test_imosTiltVelocitySetQC/test_below_secondThreshold_ui(interactive=value1)              X       Filtered by assumption.

>   ---------------------------------------------------------------------------------------------------------------------------

>     test_imosTiltVelocitySetQC/test_above_secondThreshold_ui(interactive=value1)              X       Filtered by assumption.

>     testDepthPP/testDepthPPOverwriteDepth(mode=profile,interactive=value1)                                                         X       Filtered by assumption.

>    
...

The above are skipped tests. To run them, you need to set a global variable named interactive and then run the tests in the matlab command window.

The global variable can have any value; the script only checks whether the global variable exists before running these tests.
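For example, to enable the interactive tests (a sketch; as stated above, only the variable's existence matters, not its value):

```matlab
global interactive
interactive = true;   % any value works - only the existence is checked
runAllTests
```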

Step 2. Bump version

Modify the version number in <toolbox_root_dir>/imosToolbox.m. Note that, although we use the same format as the semVer protocol, the numbering rules are not the same.

The first digit of the version is/was always 2. The second digit is usually only updated when Matlab versions are changed - the number 6 is associated with Matlab R2018b and the v95 runtime. Hence, the second digit marks possible (but not certain) incompatibility changes. The last digit is just a monotonically increasing number to mark updates.

Hot fixes are not distinguished from standard releases in the version digits.

Step 3. Build binaries

See https://github.com/aodn/imos-toolbox/wiki/MatlabBinaries

Step 4. Verify GUI

To verify GUI behaviour, one has two options:

  1. Craft a specific function to trigger a UI element.
  2. Manual GUI testing

Indeed, 1 is the preferred method, since it is easy to debug, test and evaluate. However, this doesn't exclude 2, since the communication between other UI elements is not fully tested.

The manual GUI testing can now be done in two ways:

  1. Test with individual files
  2. Test with a database file/folder.

Given that a database file or folder is harder to modify than using a simple individual file, 1. is the preferred method. If a feature requires a particular file to be tested, it is a good idea to keep this file within the imos-toolbox test file repository/bucket, so tests are easily reproduced. Sometimes, however, there is no way to test without a database and the associated metadata. This is particularly true for PreProcessing Routines.

Step 5. Run all tests in Windows

As said, most of this guide assumes development mainly happens in Linux. Unfortunately, the Matlab compiler (mcc) does not support cross-compilation, i.e., compiling a Windows stand-alone app from a Linux OS. Hence, to build the imos-toolbox binaries for Windows, a Windows operating system is required. Finally, since we need to support users running the toolbox both within Matlab and as a stand-alone app, both dependencies need to be installed.

The testing and building of the toolbox in Windows is usually achieved with a Windows virtual machine. After properly installing Matlab R2018b, the Matlab Runtime, git, python (and library dependencies), and cloning the repository from the Linux machine, you are ready to test the toolbox.

The procedure is the same as above: from matlab, call the runAllTests function and checkDocstrings("<your_imos_toolbox_path>").

To handle git in Windows, I highly recommend installing git for Windows (see the Git & python section above).

You may obtain the toolbox repository through 1. cloning/pulling from Github (remote), 2. cloning from a git bundle, 3. accessing your Linux repository via some sort of mount point (samba, sharing, etc), or 4. explicit rsync between your Linux machine and the Windows machine.

The simplest way to obtain the repository in the windows machine is to use git itself (option 1 or 2).

Since rebases or amends to commits can happen often when testing in Windows, I recommend using a git bundle. This is a fast and quick way to store/obtain the repository in the Windows virtual machine, particularly if you can "mount" your Linux folder from Windows (or can easily send files to it). This avoids some problems, particularly the roundtrip of committing changes on your local machine, sending them to github, obtaining them from github in Windows, fixups, pull/push cycles, etc.

To create an imos toolbox bundle, use:

cd <imos_toolbox_root>
git bundle create imos-toolbox.bundle --all

Then, you just need to send the bundle to the machine (e.g. via scp, or just by making it available via samba) and clone it from Windows:

git clone imos-toolbox.bundle imos-toolbox

Steps 6-9. Windows handling and building

Once you have your imos-toolbox repository in Windows (cloned from github or from a bundle), you will need to drive the build script. There are some particularities to building the toolbox in Windows with the same build.py script used in Linux.

  1. You may use bash in windows to manipulate the repo with git.
  2. You need to use the cmd.exe DOS prompt to compile the binaries with the build.py python script.

TODO: continue here.

Step 10 & 11 - Git push, wait for merge, and sync branch versions.

This is an important step because of the way some people obtain the toolbox via git. The current recommendation is that people follow the versioned branch instead of the master branch. Hence, people will be pulling from a versioned branch - say 2.6 - instead of master.

Thus, after sending the merge request of a branch to master, and this merge is concluded, one needs to remember to update the tip of 2.6 to master:

git checkout master
git pull
git checkout 2.6
git merge master

This is very easy to forget since the review can take some time.

Step 12. Changelog

The description of changes in the source code is an important step for users since they can know in advance how the new release may affect their usage.

If you are using conventional commits, the easiest way to create a changelog is:

git log --oneline <last_tag>..<next_tag> | grep -v Merge

where <last_tag> (<next_tag>) are tags (or commit hashes). This will output the list of all commits and their one-line descriptions. You may wish to do an additional call without the --oneline (and the grep pipe) to wrap up some commit messages in big features/breaking changes.

Step 13. Create a tag/GitHub release with the changelog.

This is pretty much dumping your changelog in the release page of GitHub and associating a new version tag with it.

The tag version should match the imosToolbox.m version.

When creating the tag, GitHub will automatically create the two source code assets (.zip and .tar.gz) necessary for the release; there is no need to attach any files.

Be aware that a tag created in the GitHub web interface is synced directly to master and, as such, is quite hard to undo. You may use the draft release option to avoid creating a tag straight away.

Improvements to the release cycle
