Add explanations about how to generate tests in DAP #513
Conversation
Feel free to tell me if this explanation is not enough. |
Co-authored-by: Olle Jonsson <olle.jonsson@gmail.com>
@ono-max thanks for the instructions! I have a question though: how do I know what steps were taken to produce the original tests? The written tests are low-level requests/responses, and it's hard to reverse-engineer the original actions. For example, how do I know whether the scopes request here was sent by the client automatically (for its UI) or triggered by clicking something? |
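(For context: in the Debug Adapter Protocol, a client such as VS Code usually issues the `scopes` request on its own after a stop, to populate the Variables view, which is why the recorded traffic alone can't distinguish an automatic request from a user-triggered one. A recorded exchange has roughly this shape; the field names follow the DAP specification, but the seq numbers and values below are illustrative, not taken from the repository's fixtures:)

```rb
# Illustrative `scopes` round-trip as it would appear in a recorded test.
request = {
  seq: 8, type: "request", command: "scopes",
  arguments: { frameId: 1 }
}

expected_response = {
  seq: 9, type: "response", request_seq: 8, command: "scopes", success: true,
  body: {
    scopes: [
      { name: "Local variables", variablesReference: 2, expensive: false }
    ]
  }
}
```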
Well, if I want to know them, I run the command like |
I mean the test case itself doesn't tell me what actions were performed originally. I can try recording it several times and pick the most similar result, but that's not efficient because any mis-click or forgotten step means a restart. Documenting all the steps inside the test case would help, but I don't think that's a sustainable approach.

I think we should have unit tests for individual commands, and they can be written with test helpers:

```rb
perform_dap_request "threads"

assert_dap_response(
  threads: [
    {
      id: 1,
      name: /#1 .*/
    }
  ]
)

perform_dap_request "stackTrace", {
  threadId: 1,
  startFrame: 0,
  levels: 20
}

assert_dap_response(
  stackFrames: [
    {
      name: "<main>",
      line: 1,
      column: 1,
      source: {
        name: /#{File.basename temp_file_path}/,
        path: /#{temp_file_path}/,
        sourceReference: nil
      },
      id: 1
    }
  ]
)
```

It'll be easy to add/update them, just like all the existing tests. We'll still keep the recorded tests as integration tests, probably something like:

```rb
PROGRAM = <<~RUBY
  # set breakpoints in line 20
  # continue
  # show backtrace
  # show info
  # quit
  class Foo
    def first_call
      second_call(20)
    end

    def second_call(num)
      third_call_with_block do |ten|
        forth_call(num, ten)
      end
    end

    def third_call_with_block(&block)
      @ivar1 = 10; @ivar2 = 20
      yield(10)
    end

    def forth_call(num1, num2)
      num1 + num2
    end
  end
RUBY
```

Then we can easily re-record them when needed according to the steps. And because there should be relatively few of them, maintaining them will be easier. |
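(For illustration only, here is a minimal sketch of how the `perform_dap_request`/`assert_dap_response` helpers proposed above might be implemented. Everything except those two names is an assumption: `send_dap_request` stands in for whatever primitive the test harness uses to perform the protocol round-trip, and the matching rules — regexps match strings, hashes/arrays compared element-wise — are one plausible design, not the project's actual API:)

```rb
# Hypothetical helper module, intended to be included into a Minitest
# test case so the assert_* methods are available. Sketch only.
module DAPTestHelpers
  # Send a DAP request and remember the response body for the next assertion.
  # `send_dap_request` is an assumed harness primitive that performs the
  # round-trip and returns the parsed response body as a Hash.
  def perform_dap_request(command, arguments = {})
    @last_dap_response = send_dap_request(command, arguments)
  end

  # Recursively match an expected pattern against the last response body:
  # Regexp values match string fields, Hashes and Arrays are compared
  # element by element, and anything else is compared with ==.
  def assert_dap_response(expected)
    assert_pattern_match expected, @last_dap_response, path: "body"
  end

  private

  def assert_pattern_match(expected, actual, path:)
    case expected
    when Hash
      expected.each do |key, value|
        assert_pattern_match value, actual && actual[key], path: "#{path}.#{key}"
      end
    when Array
      assert_kind_of Array, actual, "expected an array at #{path}"
      assert_equal expected.size, actual.size, "array size mismatch at #{path}"
      expected.zip(actual).each_with_index do |(exp, act), i|
        assert_pattern_match exp, act, path: "#{path}[#{i}]"
      end
    when Regexp
      assert_match expected, actual.to_s, "value mismatch at #{path}"
    when nil
      assert_nil actual, "expected nil at #{path}"
    else
      assert_equal expected, actual, "value mismatch at #{path}"
    end
  end
end
```

With helpers along these lines, each DAP endpoint gets a small, self-contained unit test, and the recorded scenarios can stay focused on end-to-end behavior.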
Sorry, never mind the above topics. I'll rewrite them later. |
First, I'd like to know your opinion in detail. Your opinion is:
Is that correct? |
And because the recording approach almost always generates certain commands, like
|
Thank you for explaining it to me.
Because we have already written some tests? From 1 and 2, I thought you wanted to change from protocol-level testing to using some methods such as |
I'd be happy to rewrite them into simpler ones, like for stepping tests we only test related endpoints and without
That'll be an improvement for sure, and perhaps we can start from there. But I think a clear separation between unit and integration testing is more important. For example, |
Please continue the discussion about the test data format on another ticket. |
Or you can continue here. |
Ahh, I got your points. I'll rethink it. |
@ono-max if you think it's OK to convert most of the command-specific tests into unit tests, we can do that as follows:
|
I'll create a prototype to think about it. Could you give me some more time? Thanks. |