
Added proper error handling in InputParser.jl #60

Merged: 6 commits merged into main from update-input-parser on Jun 2, 2022
Conversation

EmilSoleymani (Collaborator)
Added Exception Handling in InputParser.jl

If anything is wrong with the input file format, such as missing information or data of the wrong type (e.g. a floating-point number entered where a whole number is expected), a corresponding error is thrown with a useful error message as feedback. Catching and handling these errors as needed is also incorporated where the parser is called in runtests.jl.
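The wrong-type check described above can be sketched as follows. This is illustrative only: the function name, message format, and use of `ErrorException` are assumptions, not the actual InputParser.jl code.

```julia
# Hypothetical sketch of rejecting a floating-point value where a whole
# number is required; the function name and message are assumptions.
function parsewholenumber(token::AbstractString, linenum::Int)
    value = tryparse(Int, token)
    value === nothing &&
        error("Line $(linenum): expected a whole number, got \"$(token)\"")
    return value
end

parsewholenumber("3", 1)      # returns 3
# parsewholenumber("3.5", 2) throws, since tryparse(Int, "3.5") is nothing
```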

test/runtests.jl Outdated
println("Error! Skipping Tests!")
elseif id == Int(InputParser.SoilNumberErrorId)
println("Error: Invalid input file!")
println("\t>Line $(e.line): Soil layer number must be larger than previous one!")
smiths (Owner)
I see essentially the same code over and over again in this change. Can you refactor the code so that you only have this once?
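One possible refactor along the lines the reviewer asks for, as a sketch only: the error ids, messages, and helper name here are illustrative, not the project's actual ones. The repeated `println` branches collapse into a lookup table plus a single reporting helper.

```julia
# Map each error id to its message once, instead of repeating the
# println block per branch (ids and messages here are assumptions).
const ERROR_MESSAGES = Dict(
    :SoilNumberError => "Soil layer number must be larger than previous one!",
    :FoundationError => "foundationOption must be read as 1 or 2!",
)

function reportinputerror(kind::Symbol, linenum::Int)
    msg = "Error: Invalid input file!\n\t>Line $(linenum): $(ERROR_MESSAGES[kind])"
    println(msg)
    return msg
end
```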

@smiths (Owner) left a comment

@EmilSoleymani, I would like you to think about refactoring the test cases to remove the frequently repeated code. Also, I don't see any test cases for the new exception. I would think that Julia would let you assert that the proper exception was raised. A test case around that would be good.

@smiths (Owner) left a comment

@EmilSoleymani, I don't see what I'm looking for in the test cases. I might be missing it, or you might need to add a few more test cases. I'd like test cases that test that the exceptions are raised correctly. In PyUnit the test case would have a syntax like @AssertRaises(ExceptionName). I did a quick Google search and found, on a webpage about testing with Julia, test syntax like this: @test_throws exception expr. Do you use that (or something similar) somewhere and I missed it?

@EmilSoleymani (Collaborator, Author)

> @EmilSoleymani, I don't see what I'm looking for in the test cases. I might be missing it, or you might need to add a few more test cases. I'd like test cases that test that the exceptions are raised correctly. In PyUnit the test case would have a syntax like @AssertRaises(ExceptionName). I did a quick Google search and found, on a webpage about testing with Julia, test syntax like this: @test_throws exception expr. Do you use that (or something similar) somewhere and I missed it?

I never got to adding that. I made an issue about it (#64) and will be adding it shortly.
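For reference, the `@test_throws` macro from Julia's Test standard library works as sketched below. `ParsingError` here mirrors the exception type defined in InputParser.jl, while `failingparse` is a hypothetical stand-in for parsing a bad input file.

```julia
using Test

# Stand-in exception and failing call, for illustration only.
struct ParsingError <: Exception end

failingparse() = throw(ParsingError())

@testset "parser exceptions" begin
    # Passes only if the call actually throws a ParsingError.
    @test_throws ParsingError failingparse()
end
```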

@smiths (Owner) commented May 30, 2022

Thank you for the clarification @EmilSoleymani. I read the issue incorrectly. You are saying what you are going to do, and for some reason I interpreted it as something you had already done. 😄 It doesn't hurt that we had a conversation to clarify anyway. 😄

@EmilSoleymani (Collaborator, Author)
I have added the test cases that confirm the correct errors are thrown given various bad input files. This is achieved using the @test_throws macro, as you suggested. I have also cleaned up the runtests.jl file. There were big chunks of code where I compared the InputData object parsed from a file against its expected values. Since each file can contain many different types of values, there is no elegant solution other than having one function of test cases per input file (these functions live in a Tests module in tests.jl) and keeping an array of these testing functions. The runtests.jl file now has a single for loop that goes over all test files, parses an InputData object, and executes the corresponding test function.
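The loop described above might look roughly like this. It is a self-contained sketch with stubbed names: `parseinput` and the per-file test functions are stand-ins for the real InputParser call and the functions in the Tests module, not the project's actual code.

```julia
# Stubs standing in for InputParser parsing and the per-file test functions.
parseinput(file) = (name = file,)
testcase1(data) = @assert data.name == "case1.inp"
testcase2(data) = @assert data.name == "case2.inp"

testfiles = ["case1.inp", "case2.inp"]
testfuncs = [testcase1, testcase2]

# One loop replaces the repeated per-file blocks: parse each input file,
# then run its matching test function.
for (file, fn) in zip(testfiles, testfuncs)
    fn(parseinput(file))
end
```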

test/runtests.jl Outdated
end
Base.showerror(io::IO, e::ParsingError) = print(io, "Could not parse file.")

# FoundationError is for when foundationOption is not read as 1 or 2
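A FoundationError carrying the offending line can follow the same `Base.showerror` pattern as the ParsingError snippet above. The field name and message text here are assumptions for illustration.

```julia
# Hypothetical exception type holding the line that failed validation.
struct FoundationError <: Exception
    line::Int
end

Base.showerror(io::IO, e::FoundationError) =
    print(io, "Line $(e.line): foundationOption must be 1 or 2")

# sprint renders the error message to a String for inspection.
sprint(showerror, FoundationError(7))   # "Line 7: foundationOption must be 1 or 2"
```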
smiths (Owner)
I know you are using the same file format as the original file, but once we get this working, we should update the file format to use a more meaningful string for this input. Remembering what 1 or 2 means is asking too much of the user. Ideally, we'll eventually have a GUI interface that lets someone pick their foundation option from a drop-down.

@smiths (Owner) left a comment

Looks good @EmilSoleymani. Once you fix the minor typos I indicated, I'll approve the pull request.

@smiths (Owner) left a comment

Looks good.

@smiths merged commit 954fbd0 into main on Jun 2, 2022
@smiths deleted the update-input-parser branch on Jun 2, 2022 at 15:38