New GUI idea based on H5Py #33

Closed
ErichZimmer opened this issue Jun 29, 2021 · 74 comments
Labels
enhancement New feature or request

Comments

@ErichZimmer
Collaborator

ErichZimmer commented Jun 29, 2021

Background/Issue

The current GUI stores data in separate files, which can make more thorough data processing difficult. A previously suggested solution was to store all results in a single dictionary and export them in whatever manner the user deems sufficient. However, on large processing sessions (>60,000 images), the GUI can become quite slow, especially on lower-performing laptops, and its performance degrades further as the session grows. This disrupts efficient workflows and increases glitches (mostly on lower-performing computers).

Proposed solution

After exploring different ways of storing huge amounts of data, h5py was found to perform quite well even on underperforming computers (e.g. my laptop 😢). When properly configured, most data resides on the hard drive, leaving RAM mostly unused, unlike dictionary-style designs. Additionally, the structure of an HDF5 file makes it very simple to load specific sections of data/results, which has its advantages. Taking advantage of these features, the HDF5 file is structured as follows (a minimal creation sketch follows the list):

  • session: the main file in which everything is stored
  • session/images: group containing all image filenames and frames for the GUI
  • session/images/files_a: dataset of full file paths of the A frames
  • session/images/files_b: dataset of full file paths of the B frames
  • session/images/frames: dataset of frame names used for the GUI frame list and for result file names
  • session/results: group containing all results
  • session/results/frame_...: group containing the components of one result
  • session/results/frame_.../x: dataset containing the x component (there would be a dataset for each component in the group)
  • session/results/frame_....attrs: attributes containing the processing time, ROI, and mask coordinates for further processing and display
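
For illustration, a minimal sketch of creating this layout with h5py (the file name, dataset contents, shapes, and attribute values below are hypothetical assumptions, not the GUI's actual code):

```python
import h5py
import numpy as np

str_dt = h5py.string_dtype(encoding='utf-8')  # variable-length strings

with h5py.File('session.hdf5', 'w') as session:
    images = session.create_group('images')
    images.create_dataset('files_a', data=['run/A_0001.tif'], dtype=str_dt)
    images.create_dataset('files_b', data=['run/B_0001.tif'], dtype=str_dt)
    images.create_dataset('frames', data=['frame_0000'], dtype=str_dt)

    # intermediate groups are created automatically
    frame = session.create_group('results/frame_0000')
    frame.create_dataset('x', data=np.zeros((64, 64)))  # one dataset per component
    frame.attrs['process_time'] = 0.0                   # seconds
    frame.attrs['roi_coords'] = [0, 1024, 0, 1024]      # x min, x max, y min, y max
```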

Possible downfalls

  • It takes somewhat careful planning to get h5py to work as needed. If structured wrong, the advantages of an HDF5-based storage format are forfeited and performance is similar to that of a dictionary-style storage format. However, that would require some major mistakes and shouldn't really happen.
  • Extra dependency (h5py)
  • To get multiprocessing to work, there will most likely be an extra dependency

PS, I'm back 😁 (got medically discharged from an injury) and ready to relearn everything, and hopefully not be so ill-informed on testing methods as I was back then -_-. Additionally, your input on using HDF5 or other formats for storage would be helpful for further research and design.

@ErichZimmer ErichZimmer added the enhancement New feature or request label Jun 29, 2021
@alexlib
Member

alexlib commented Jul 4, 2021

@ErichZimmer is back :) @eguvep - what do you think?

@alexlib
Member

alexlib commented Jul 4, 2021

@ErichZimmer can you please take a look at NetCDF files? The xarray project and our sister project pivpy use it (as a competitor to HDF5), and xarray provides some great extensions over pandas that make things easy, e.g. data.piv.average or data.piv.mean(dim='t')
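
For context, a minimal sketch of the named-dimension convenience being referred to (the dataset and dimension names are hypothetical; the .piv accessor itself comes from pivpy and is not shown):

```python
import numpy as np
import xarray as xr

# Hypothetical PIV result: u velocity on a 2x2 grid over three time steps.
data = xr.Dataset(
    {'u': (('t', 'y', 'x'), np.random.rand(3, 2, 2))},
    coords={'t': [0.0, 0.1, 0.2]},
)

# Average along a *named* dimension -- no axis-number bookkeeping.
u_mean = data['u'].mean(dim='t')
```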

@ErichZimmer
Collaborator Author

@alexlib I made a mostly functional GUI built around HDF5 that is parallel-capable through some workarounds for the current limitations of the dependencies. So far, the only extra dependency for this GUI is h5py. I'll try other designs to compare their performance, but HDF5 is performing pretty well so far...

Some GUI screenshots: [screenshots: prototype HDF5 GUI, extensive preprocessing, advanced settings]

PS, please ignore spelling errors, as I am low on time and my mobile hotspot won't let me edit previous posts for some reason.

@eguvep
Member

eguvep commented Jul 5, 2021

Dear Erich!

I am very happy to read that you are back and it is great to see your immediate productive postings!
Besides the performance advantage, it would be great to put the whole parameter object into the data set. When loading a data set, the associated parameters would then also be loaded directly. This would be a big advantage for people who switch between data sets or reevaluate older data. What do you think, @ErichZimmer?

Another thing – mentioned in our previous discussion – is compatibility. Our simple and stupid CSV files are the lowest common denominator with almost every other code (like awk or other command line tools; they are even human readable), and we follow the UNIX philosophy by using text files. I would strongly vote for a CSV import and export option, to not destroy this compatibility. Or are there any command line tools for extracting HDF5 data (I am a novice in HDF5)?

Can we be sure that changes in the HDF5 code do not break the GUI? As far as I can see, HDF5 seems to be fairly mature, right?

I had a quick look at the other data formats, @alexlib, and I have worked with NetCDF before (there is a JPIV extension for generating synthetic PIV images based on that format and the SIG project). It is hard to tell – if not impossible – which format is best. HDF5 seems to be slightly more flexible, so it seems possible to put really everything into the files. Everything that is hard to decide can be decided randomly, in my opinion ;-) So let's give HDF5 a try!

Regards!

Peter

@alexlib
Member

alexlib commented Jul 5, 2021

I think we need to split this into two topics:
A) whether we want a single database (a single binary or ASCII file, or a group of files that are bundled together) for everything - probably we do. PIVlab has a MAT file that contains all the session details, and then if you do an export, it exports multiple data files. BTW, MAT files are HDF5 files AFAIK.
B) choice of the file format. For performance, binary formats are obviously the solution, although pandas now has a very fast CSV reader that I believe can be our solution. If we work with pandas, we also get built-in HDF5 support and we can also convert to the xarray format.

My suggestion is to try pandas with CSV first (a quick sketch below), and if the performance is not sufficient, keep working with pandas and HDF5.
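
A minimal sketch of both routes (file and key names are hypothetical; note that pandas' built-in HDF5 I/O actually goes through the PyTables package rather than h5py):

```python
import pandas as pd

# Hypothetical per-frame result table.
df = pd.DataFrame({'x': [0.0], 'y': [0.0], 'u': [1.2], 'v': [-0.3]})

# Text route: pandas' fast C-engine CSV reader/writer.
df.to_csv('frame_0000.txt', sep='\t', index=False)
df = pd.read_csv('frame_0000.txt', sep='\t')

# Binary route: pandas' HDF5 store (requires PyTables to be installed).
df.to_hdf('session.h5', key='results/frame_0000', mode='a')
df = pd.read_hdf('session.h5', key='results/frame_0000')
```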

Regarding HDF5 - it's fast and flexible, and there are tools like HDFView or h5dump that help you inspect the contents.

NetCDF - the only benefit is the straightforward continuation and connection with pivpy - after all, we probably also want a GUI for the post-processing: colorful images, vorticity, strain, etc.

@eguvep
Member

eguvep commented Jul 5, 2021

For clarity, the PIV database could even be a separate project. A PIV-database object could provide methods

  • for creating such an object from a set of images, CSV files, and settings,
  • reading it and providing the data in different formats, and
  • exporting the data.

We could then use this object in the GUI, but also in Jupyter notebooks or other places.

@alexlib
Member

alexlib commented Jul 5, 2021

Great idea.

What would the structure of this project be? For pivpy we use an xarray Dataset - it's a pandas-DataFrame-on-steroids with metadata attached to it. I haven't found another solution that lets me average along a "named" dimension and provides the underlying mechanics for all kinds of numerical operators.

@ErichZimmer
Collaborator Author

ErichZimmer commented Jul 5, 2021

Do you know any way of chunking PIV data so the user doesn't have to load too much into memory? The reason I chose an HDF5 format was that at most two complete PIV results (2 frames) are loaded into memory, and the rest is stored on the hard drive. In my case, I analyzed ~3,000 images to get ~3,000 results, of which I only have to load and work on one result at a time in the GUI. Since I am not using chunking, the results can have different sizes for whatever reason (different window sizes/overlaps). Perhaps NetCDF partnered with xarray would be the way to go, but for now I'll stick with an HDF5 format until I learn more about NetCDF (I like xarray a lot, though, so I'll try...)

By the way, there is an export page in the GUI to export our results in multiple different ways and file types.

@alexlib
Member

alexlib commented Jul 5, 2021

The parallel or chunked reading does not come from xarray but from dask:

http://xarray.pydata.org/en/stable/user-guide/dask.html
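
A minimal sketch of the lazy, chunked access pattern this enables (file, variable, and dimension names are hypothetical; requires dask to be installed):

```python
import xarray as xr

# Open lazily, one time step per chunk; nothing is read from disk yet.
ds = xr.open_dataset('session.nc', chunks={'t': 1})

u_mean = ds['u'].mean(dim='t')  # builds a dask task graph, still lazy
result = u_mean.compute()       # triggers the actual chunked/parallel read
```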

@ErichZimmer
Collaborator Author

ErichZimmer commented Jul 5, 2021

Well, this got a little more confusing... But I'll see what I can do, as the GUI is currently built to switch internal formats relatively easily.

For HDF5, the GUI is set up like this (a read-back sketch follows the list):
Session: the file that contains everything
Session/images: group that contains all image-related data
Session/images/img_list: dataset that contains all images loaded into the GUI
Session/images/files_a: dataset of the A frames list
Session/images/files_b: dataset of the B frames list
Session/images/frames: dataset of the frames list for display
Session/images/settings: group for image settings
Session/images/settings/frame_{i}: dataset of settings for frame i
Session/results: group for all results
Session/results/frame_{i}: results group for frame i
Datasets in the group:
Session/results/frame_{i}/x_raw: raw x component dataset
Session/results/frame_{i}/y_raw: raw y component dataset
Session/results/frame_{i}/u_raw: raw u component dataset
Session/results/frame_{i}/v_raw: raw v component dataset
Session/results/frame_{i}/tp_raw: raw vector type dataset
(More will be added once postprocessing is working up to 'standard'.)
Attributes in the group:
Session/results/frame_{i}.attrs['processed']: boolean, whether the frame was processed
Session/results/frame_{i}.attrs['process_time']: time it took to process the frame
Session/results/frame_{i}.attrs['units']: list of units for GUI purposes, e.g. [px, px, px/dt, px/dt]
Session/results/frame_{i}.attrs['roi_present']: boolean, whether an ROI is present
Session/results/frame_{i}.attrs['roi_coords']: ROI coords as x min, x max, y min, y max
Session/results/frame_{i}.attrs['mask_coords']: mask coords, set to [] if no mask is present
Session/results/frame_{i}.attrs['window_size']: used for GUI purposes
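
A minimal sketch of reading a single frame back out of this layout (the frame number and file name are hypothetical; the [:] pulls just that dataset into memory while the rest of the session stays on disk):

```python
import h5py

with h5py.File('session.hdf5', 'r') as session:
    frame = session['results/frame_42']   # only this group is touched
    if frame.attrs['processed']:
        u = frame['u_raw'][:]             # load one component into memory
        v = frame['v_raw'][:]
        units = frame.attrs['units']      # e.g. [px, px, px/dt, px/dt]
        roi = frame.attrs['roi_coords']   # x min, x max, y min, y max
```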

PS, I nearly flipped out hitting the close-with-comment button, since that thing is HUGE on my phone :(

@alexlib
Member

alexlib commented Jul 5, 2021

Some simple facts: NetCDF4 = HDF5 with some extra limitations and its own API; same performance.
There is h5netcdf to read/write NetCDF through h5py - so no extra dependencies.
There is a newer format called Zarr - very good for Python, might be an issue for other languages. It does everything that HDF5 does and is a bit better for cloud storage (natively).

Which branch are you on, @ErichZimmer? I'll try to see whether there is a point in using xarray + a NetCDF file for it.

@ErichZimmer
Collaborator Author

@alexlib,
I haven't uploaded it yet, mainly because of a lack of internet and having to relearn everything. I plan on making my version of the GUI a separate repository. This keeps the original GUI and its format the same (it has some really good perks) while providing a separate GUI for more in-depth and complicated analysis.
@eguvep, what do you think of this idea?

@eguvep
Member

eguvep commented Jul 30, 2021

Hi @ErichZimmer,
now (v0.4.11) the new add-in infrastructure is working (File → Select Add-Ins). Also see the examples in:
https://github.com/OpenPIV/openpiv_tk_gui/tree/master/openpivgui/AddIns
It should be fairly easy now to implement h5py as an add-in, instead of as a completely separate project. This might lead to a bit of duplicated code (but less than with two projects). Altogether, the code should be more separate and self-consistent within the add-ins than ever before.
In this way, it should be possible to have two (or more) GUIs in one: a simple one (e.g. for teaching or beginners), and one or more with more complex or special features, enabled when the corresponding add-ins are selected.
What do you think?

@ErichZimmer
Collaborator Author

This is a good idea. I'll see what I can do and clean up the code (when I have time) so I can push it to a separate branch for more testing. However, this might take a while, along with figuring out how to use the GitHub command line without messing around with the wrong branches. Hopefully I can do this soon, so we can test NetCDF with the advanced GUI and merge to create a nice GUI system/ecosystem. Additionally, I'll work on the present simple GUI, as the spatial and temporal pre-processing need to be updated along with a few other minor things. The add-ins system looks nice :)

Some pictures of the GUI: [screenshots: default GUI size, masking, preprocessing, preview grid size, advanced algorithms, validation, modify, plotting, test]

You can load external results, settings, or another session [screenshot: external results], or export the current figure (not from the basic figure generator), settings, or results [screenshot: export figure].

All of this comes at the cost of 23 new functions and 3,000+ lines of code. However, it is simple, since I have very little programming skill by programming standards :P

@eguvep
Member

eguvep commented Jul 30, 2021

That looks very impressive!
Regarding the add-in structure, there is also documentation for a quick start:
https://openpiv-tk-gui.readthedocs.io/en/latest/usage.html#add-in-handler

@alexlib
Member

alexlib commented Jul 30, 2021

@ErichZimmer looks very nice. We need to figure out how to merge this into the existing one, through an add-in or otherwise, by some coding.

@ErichZimmer
Collaborator Author

@eguvep Currently, the advanced GUI is not compatible with the add-ins system. However, an option can be selected in the add-ins panel to enable the advanced GUI and all its features. I just have to figure out how the list boxes are going to be coded, as they are completely incompatible with the simple GUI.
On another note, support for PyFFTW would be nice, but optional, for faster computation on large batches (on a 4-core laptop, I can process about 250 frames every 10 minutes with HD images and windowing of 128>64>32>16>12 at 50% overlap, on an Intel Pentium N3710 running at ~2.25 GHz with 1 GB of unused RAM). This might be an interesting feature for OpenPIV in the future (but let's stay away from arbitrary windowing :) ). It reintroduces Cython, though...
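
For reference, a minimal sketch of the kind of FFT speedup PyFFTW can provide for correlation-heavy batches, using its numpy-compatible interface and plan cache (window contents and sizes are hypothetical):

```python
import numpy as np
import pyfftw.interfaces.numpy_fft as fft
from pyfftw.interfaces import cache

cache.enable()  # reuse FFTW plans across the batch instead of re-planning

def circular_cross_correlate(win_a, win_b):
    """FFT-based circular cross-correlation of two interrogation windows."""
    return fft.irfft2(np.conj(fft.rfft2(win_a)) * fft.rfft2(win_b), s=win_a.shape)

corr = circular_cross_correlate(np.random.rand(32, 32), np.random.rand(32, 32))
```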

@alexlib
Member

alexlib commented Aug 3, 2021

Computational speed is important, but only if we find a way to keep the package as simple to install as it is now. We should probably first try numba. We can always create a professional version with a different name and installation instructions, e.g. openpiv-python-pro for advanced users.

@eguvep
Member

eguvep commented Aug 3, 2021

I think we could make it compatible by making the code more modular with the help of the add-in system. In my dreams ;-) every user can compose her or his individual GUI by selecting or deselecting the features they need or do not need.

@ErichZimmer
Collaborator Author

@alexlib numba works great on the correlation_to_displacement method, which is somewhat slow. However, numba limits the officially supported operating systems to something like Windows, macOS, and Linux. I'm trying a vectorized implementation, but it gives wonky results.

@eguvep That would be very nice and is a great idea. Some GUIs have good control over which features are enabled and which aren't. The advanced GUI is starting to incorporate this in an attempt to make the main code similar enough to the simpler version, but it is hard to combine the two, as there isn't much duplicated code (the only similar function is the widget initialization). I probably wasn't thinking about the add-ins system until I was done with the main functions.

@ErichZimmer
Collaborator Author

This might take a little more work than I thought. While spending my spare time playing around and trying to merge the two projects, I found that the advanced GUI loses some of its functionality due to it being built around h5py. For instance, an entire extractions rider would not be feasible to implement in the simple GUI's format. However, I'm trying to incorporate the simple GUI into the advanced one, which seems to go a little more smoothly.

If I were to take away the h5py core, only the scatter plot and histogram plot would remain functional. [screenshot: GUI statistics]

@alexlib
Member

alexlib commented Sep 1, 2021

@eguvep @ErichZimmer please also take a look at the way the GUI for this tracker is arranged. It seems quite simple in terms of an uncluttered environment with multiple options. I think this is the same concept as napari:
https://www.youtube.com/watch?v=ajEp18opM-Y&list=PL56zLBbX0yZZw18yyMM9tD0fLrobmdbJG&index=1

@ErichZimmer
Collaborator Author

@eguvep
I toyed around with different merging ideas and found that it may be best to remove h5py and only temporarily store data while analyzing the current frame. This would allow the user to find the optimal settings before batch processing. Furthermore, it would make ensemble correlation MUCH easier to implement and extend with more advanced features. In conclusion, you get most of the benefits of the h5py GUI with no additional dependencies while keeping most of the GUI features. However, this would require a massive change to the current GUI, toward something similar to the h5py GUI, and the JSON file would be much larger if manual object masking is used. Calibration might be interesting too.

@alexlib
That video is very interesting. Insight 4G (I think that's right) uses an identical system for preprocessing, analysis algorithms, and postprocessing. It is very flexible and can easily be fine-tuned by advanced users. With all the new preprocessing algorithms and a surplus of advanced algorithms in the h5py GUI, this will most definitely be helpful. I just have to wait until I have time and finish merging my "mega" GUI, which takes advantage of nearly all OpenPIV functions except 3D PIV. The GUI totals 6,000 lines when all files are summed up. That's a lot of work ;)

Regards,
Erich

PS, maybe we can create an executable with an embedded Python interpreter for users who don't want to bother with installing Python. If we go this route, an executable would have to be made for each operating system. Just stay away from the tools that attempt to transcribe Python to C or C++; they'll make you lose your hair at the end of the day ;)

@ErichZimmer
Collaborator Author

Should we keep the h5py- or netCDF-based GUIs? Using them allows for a huge number of opportunities before exporting files, but at the cost of complexity and some additional computation.

@alexlib
Member

alexlib commented Sep 11, 2021

Should we keep the h5py- or netCDF-based GUIs? Using them allows for a huge number of opportunities before exporting files, but at the cost of complexity and some additional computation.

I agree that one of those would be great. I think the main part here is fast I/O and, if possible, access from outside of the GUI, e.g. from a Jupyter notebook - allowing interaction with the data from a post-processing package. I do not mind h5py or netCDF - as long as we interface them in the future, i.e. we will add h5data.to_netcdf() and netcdfdata.to_h5() later on.
@eguvep ?

@ErichZimmer
Collaborator Author

access from outside of the GUI
FYI, all data stored in the HDF5 file can be accessed from a notebook. I did it many times while I was still working on the GUI, for debugging and more efficient structuring. Postprocessed and other data can also be stored in the HDF5 file, and depending on the names of the groups and datasets, the GUI can read and process them.

@eguvep
Member

eguvep commented Sep 13, 2021

Should we keep the h5py- or netCDF-based GUIs? Using them allows for a huge number of opportunities before exporting files, but at the cost of complexity and some additional computation.

From my point of view, one of the main design goals of the GUI is simplicity, so that non-programmers can easily understand and contribute. The add-in system structures the code even more, to make it even more accessible. On the other hand, I see the advantages of an efficient binary file format. Do you really see no way of using h5py within the scope of a plug-in? That would be the most desirable solution, in my opinion.

@ErichZimmer
Collaborator Author

Additionally, a second-order image dewarping function is being developed, but it is going slowly due to my lack of expertise in mathematics (I took too long of a break :P )

@alexlib
Member

alexlib commented Oct 1, 2021

Additionally, a second-order image dewarping function is being developed, but it is going slowly due to my lack of expertise in mathematics (I took too long of a break :P )

Great. Where is it? We had another repo by Theo with a similar development - better if we learn from both.

@ErichZimmer
Collaborator Author

My internet is a little too slow to push the GUIs to my fork, so I'll try again later. The theory is based on the article "Distortion correction of two-component, two-dimensional PIV using a large imaging sensor with application to measurements of a turbulent boundary layer flow at Reτ = 2386", where the normalized autocorrelation of the calibration image is used to find the peaks. Invalid peak locations can be manually removed so that only valid peaks are left. After that, the object-plane peaks need to be found, but I'm currently having trouble with that. Finally, to solve and warp the image, scikit-image's ProjectiveTransform() is used to get the warp matrix, which is then used to warp the image with the warp() function (a sketch below). For the sake of performance, two application methods should be featured: one for images and one for vectors. Both have similar RMS and bias errors, but correcting vectors is much faster and can be applied to any vector field.
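
A minimal sketch of the scikit-image step described above (the matched point arrays are hypothetical calibration-grid peaks; warp() takes the inverse mapping from output to input coordinates):

```python
import numpy as np
from skimage.transform import ProjectiveTransform, warp

# Hypothetical matched points: detected peaks in the distorted image (src)
# and their ideal object-plane locations (dst).
src = np.array([[10.0, 12.0], [200.0, 10.0], [198.0, 205.0], [8.0, 202.0]])
dst = np.array([[0.0, 0.0], [200.0, 0.0], [200.0, 200.0], [0.0, 200.0]])

tform = ProjectiveTransform()
tform.estimate(src, dst)               # least-squares fit of the warp matrix

image = np.random.rand(256, 256)       # stand-in for a real image
dewarped = warp(image, tform.inverse)  # image route: resample the pixels

corrected = tform(src)                 # vector route: map coordinates only
```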

@alexlib
Member

alexlib commented Oct 1, 2021

please see https://github.com/TKaeufer/Open_PIV_mapping
We also tried scipy and scikit-image, but eventually Theo's code was the most robust.

@ErichZimmer
Collaborator Author

Is the repository public?

@alexlib
Member

alexlib commented Oct 2, 2021

Is the repository public?

see my fork - I invited you https://github.com/alexlib/Open_PIV_mapping

@ErichZimmer
Collaborator Author

That repository is much different from my attempt, which uses a meshed region of interest and for-loops to find the points. As soon as I get a decent internet connection, I'll hopefully push everything to a fork or repository for everyone to see (including my spaghetti-coding skills :P).

@ErichZimmer
Collaborator Author

Just did some tests, and your fork/repository is quite a bit better and more robust than my implementation. I'll see if there are any enhancements/refactorings I can do. Does this repository allow for the calibration of vectors as a post-processing method?

@alexlib
Member

alexlib commented Oct 2, 2021

It is Theo's work in progress; he has chosen to work in image space. But it should work on vectors as well.

@ErichZimmer
Collaborator Author

I played around with different ideas and keep reverting back to Theo's repository. The calibration seems pretty simple and would be a nice addition to OpenPIV.
On another note, I tested rectangular windows with the GUI, and they work like a charm except that they're 50% slower than square windows. Here is a screenshot of raw vectors using circular correlation. [screenshot: rectangular windows]

@ErichZimmer
Collaborator Author

The image pair is from PIV Challenge 2014 case A (testing micro-PIV).

@ErichZimmer
Collaborator Author

To avoid major overhead with shared dictionaries, the files are stored in a temporary folder before being loaded into the GUI and deleted. This makes multiprocessing as fast as the simple GUI and removes the need for a batch size. Is this method alright?
New processing steps (a sketch follows the list):

  • Initiate the multiprocessing class
  • Run multiprocessing
  • Process image pairs
  • Save results as dictionaries stored in .npz files
  • Load said files into h5py
  • Delete the temporary files
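
A minimal sketch of this temporary-file pipeline (names and sizes are hypothetical; the PIV analysis itself is stubbed out with random data, and h5py is only touched by the single parent process):

```python
import os
import tempfile
from multiprocessing import Pool

import h5py
import numpy as np

def process_pair(args):
    """Worker: process one image pair, dump the result to a temporary .npz."""
    frame, tmp_dir = args
    result = {'u_raw': np.random.rand(64, 64),   # stand-in for real PIV output
              'v_raw': np.random.rand(64, 64)}
    path = os.path.join(tmp_dir, f'frame_{frame:04d}.npz')
    np.savez(path, **result)
    return path

if __name__ == '__main__':
    tmp_dir = tempfile.mkdtemp()
    with Pool() as pool:                          # process image pairs in parallel
        paths = pool.map(process_pair, [(i, tmp_dir) for i in range(4)])

    with h5py.File('session.hdf5', 'a') as session:   # single writer
        for frame, path in enumerate(paths):
            grp = session.require_group(f'results/frame_{frame:04d}')
            with np.load(path) as data:
                for name in data.files:
                    grp.create_dataset(name, data=data[name])
            os.remove(path)                       # delete the temporary file
```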

@alexlib
Member

alexlib commented Oct 6, 2021

I am not quite sure about the step of saving to npz and then loading into HDF5 - could it maybe be stored in HDF5 directly, to save one conversion or loading/saving step?

@ErichZimmer
Collaborator Author

h5py doesn't directly support parallel writing, so it's either this weird workaround or the other one, based on a shared-memory dictionary that is then loaded into h5py. I am still looking for better options through mpi4py, but so far it hasn't been successful, and it complicates the installation process of the GUI. In my opinion, this issue is one of the few problems with h5py where other approaches (e.g. not using h5py, as in the simple GUI) would be better.

@alexlib
Member

alexlib commented Oct 6, 2021

I understand. So there are two options: a) use multiprocessing and RAM, keeping all the parallel results in memory; b) have every worker store its result in a separate temporary file and then combine them.
I guess if there is a significant speed-up in option b) compared to a single-thread/single-process path, let's do it this way.

@alexlib
Member

alexlib commented Oct 6, 2021

take a look at zarr
pydata/xarray#3096
pydata/xarray#4035
https://zarr.readthedocs.io/en/stable/tutorial.html

Can it help? It seems to have a solution, and it's pip-installable.
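
A minimal sketch of what the Zarr route could look like (the store layout mirrors the HDF5 one above and all names are hypothetical; the attraction is that separate processes can write to different arrays of the same store without a single-writer lock):

```python
import numpy as np
import zarr

# One store per session; on disk it is a directory tree, not a single file.
root = zarr.open_group('session.zarr', mode='a')

frame = root.require_group('results/frame_0000')
frame.create_dataset('u_raw', data=np.random.rand(64, 64))
frame.attrs['process_time'] = 0.42

u = root['results/frame_0000/u_raw'][:]   # read back into memory
```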

@ErichZimmer
Collaborator Author

take a look at zarr

I looked at it, and it seems promising and easy to implement with minimal changes to the code.

On calibration: I got somewhat familiar with the image calibration interface, and I like it so far. However, an improvement in precision can be attained by using a centroid algorithm with find_first_peak/find_second_peak in pyprocess (a sketch below).
My version of the calibration software follows the instructions of the article mentioned previously and ignores scaling to minimize user input. It is based on Theo's script and Fluere, and can only be applied to the vector field via a for-loop. I still like Theo's script more, though, as it is more flexible.
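
For illustration, a minimal sketch of the centroid idea in plain NumPy (a stand-in for the pyprocess helpers: a three-point centroid around the integer peak of a correlation map):

```python
import numpy as np

def centroid_subpixel_peak(corr):
    """Refine the integer peak of a correlation map with a 3-point centroid."""
    i, j = np.unravel_index(np.argmax(corr), corr.shape)   # integer peak
    if 0 < i < corr.shape[0] - 1 and 0 < j < corr.shape[1] - 1:
        c = corr[i, j]
        di = (corr[i + 1, j] - corr[i - 1, j]) / (corr[i - 1, j] + c + corr[i + 1, j])
        dj = (corr[i, j + 1] - corr[i, j - 1]) / (corr[i, j - 1] + c + corr[i, j + 1])
        return i + di, j + dj
    return float(i), float(j)   # fall back to the integer peak at the borders
```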

@ErichZimmer
Collaborator Author

It would be great to incorporate the script into something like OpenPIV.tools or its own calibration module, as some cameras (e.g. my Raspberry Pi-controlled 1 MP global-shutter sensor) have quite a bit of fisheye distortion, which messes up the measurements.

@ErichZimmer
Collaborator Author

The subpixel function works with the original script, so I'll simply use Theo's original script.

@alexlib
Copy link
Member

alexlib commented Oct 7, 2021

It would be great to incorporate the script into something like OpenPIV.tools or its own calibration module, as some cameras (e.g. my Raspberry Pi-controlled 1 MP global-shutter sensor) have quite a bit of fisheye distortion, which messes up the measurements.

Good idea. Please move the discussion to the openpiv-python repo issues.

@ErichZimmer
Collaborator Author

Zarr is creating a file for each frame, so I'll have to figure out what I'm doing wrong here. It does allow multiprocessing, though ;)

@ErichZimmer
Collaborator Author

ErichZimmer commented Oct 10, 2021

Using npy files wasn't a smart decision. They save and load fast, but the individual files can reach about 3 MB for 50,000 vectors (plausible, since 50,000 vectors × five float64 components is already ~2 MB before overhead). For large sessions, this uses up quite a bit of space before the files are deleted. Zarr is still creating a bunch of files and, in a way, acts like the temporary npy files. I'll try mpi4py again for built-in parallel writing with h5py.
Additionally, h5py files can get quite large, with some exceeding 20 GB for large processing sessions. However, text files take up a similar amount of space.

@ErichZimmer
Collaborator Author

Using a batch system similar to the shared-memory dictionary system, the results can be processed in parallel and loaded in serial. If we use this system, then Zarr might be a good file format, as it operates in a very similar fashion with multiple linked files.

@ErichZimmer
Collaborator Author

It also allows for exporting the session to HDF5 and netCDF.

@ErichZimmer
Collaborator Author

I found that the temporary-file system works best, so I'll keep it for now. It doesn't take up any extra space on the hard drive.

@ErichZimmer
Collaborator Author

Here is the somewhat buggy h5py GUI:
https://github.com/ErichZimmer/openpiv_tk_gui/tree/GUI_enhancement2

@ErichZimmer
Collaborator Author

It requires h5py as an extra dependency.

@ErichZimmer
Collaborator Author

To avoid polluting your GUI with features that cannot be merged (at least, I wasn't able to merge them, given my basic programming knowledge), I'm going to close this issue so I can focus more on your GUI.

@ErichZimmer
Collaborator Author

I also moved the h5py GUI to a new repository, to avoid accidentally pushing the wrong GUI to my fork of your GUI:
https://github.com/ErichZimmer/openpiv-python-gui

I honestly like your GUI a little more because of its simplicity.
