
Add a Mocked Server for Testing #30

Closed
3 of 10 tasks
RobotSail opened this issue Aug 2, 2022 · 12 comments
Labels
enhancement (New feature or request)

Comments

@RobotSail (Contributor) commented Aug 2, 2022

Based on a request from #27, we should introduce a mocked server to test that values are correctly sent by go-gpt3 and received by the server, and that the client can correctly receive data from the server.

This can help us to quickly iterate on tests without spending any OpenAI credits.

We need the following to complete this task (a rough sketch of one mocked handler follows the list):

  • Create a mocked OpenAI API server
  • Write endpoint handlers and tests for the mocked server:
    • Completions
    • Edits
    • Moderation
    • Search
    • Embeddings
    • Answers
    • Files
    • Classifications
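
For illustration (not part of the original issue text), here is a rough sketch of what one mocked endpoint handler and its test could look like using net/http/httptest; the handler name, route, and response shape are simplified assumptions, not the project's actual implementation:

package gogpt_test

import (
    "encoding/json"
    "net/http"
    "net/http/httptest"
    "testing"
)

// handleCompletionEndpoint returns a canned completion payload instead of
// contacting the real OpenAI API.
func handleCompletionEndpoint(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]any{
        "id":     "cmpl-mock",
        "object": "text_completion",
        "choices": []map[string]any{
            {"text": "mocked completion", "index": 0},
        },
    })
}

func TestCompletionEndpointMock(t *testing.T) {
    // Start the mocked server; a real test would point the go-gpt3 client at ts.URL.
    ts := httptest.NewServer(http.HandlerFunc(handleCompletionEndpoint))
    defer ts.Close()

    resp, err := http.Post(ts.URL+"/v1/completions", "application/json", nil)
    if err != nil {
        t.Fatalf("request failed: %v", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        t.Fatalf("expected status 200, got %d", resp.StatusCode)
    }
}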
@vimitdhawan

@RobotSail Do you need any help on this?

@RobotSail (Contributor, Author)

Hi @vimitdhawan, we merged #31 last year, and I haven't touched the others since. If you'd like to contribute tests for the other items, that would be amazing.

@Rascal0814 (Contributor)

The testing of the file is not done yet; can I contribute a bit to this?

Also, I think it is unreasonable to put all the tests in the api_test file; tests for functions of the same nature should be grouped in one file.

@drkennetz

@RobotSail Hi. Client should be an interface, with a concrete client type that implements its methods. That would solve all your testing problems: then you could just set up a mock on the interface. Here's a simple example:

package gogpt

import (
    "context"

    "github.com/stretchr/testify/mock"
)

// Client is the interface consumers program against.
type Client interface {
    ListEngines(ctx context.Context) (engines EnginesList, err error)
}

// client is the concrete implementation that calls the real OpenAI API.
type client struct {
    config ClientConfig
}

func (c *client) ListEngines(ctx context.Context) (engines EnginesList, err error) {
    // ... real HTTP request to the API, elided in the original snippet ...
}

// MockClient satisfies Client in tests via testify's mock package.
type MockClient struct {
    mock.Mock
}

func (c *MockClient) ListEngines(ctx context.Context) (engines EnginesList, err error) {
    args := c.Called(ctx)
    return args.Get(0).(EnginesList), args.Error(1)
}

This is untested, but it should be the general format. I'd be happy to try to take it on; it would be a pretty large refactor, but it would make it easier for users (like me) to test the Client.
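
(Illustrative usage, not part of the comment above: with the MockClient sketch and testify's mock/assert packages, a consumer test could stub the interface roughly like this; the test name and the canned return value are hypothetical.)

package gogpt

import (
    "context"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
)

// This would live in a _test.go file alongside the MockClient definition.
func TestWithMockedClient(t *testing.T) {
    m := &MockClient{}
    want := EnginesList{}
    // Program the mock: any context is accepted, a canned EnginesList comes back.
    m.On("ListEngines", mock.Anything).Return(want, nil)

    got, err := m.ListEngines(context.Background())

    assert.NoError(t, err)
    assert.Equal(t, want, got)
    m.AssertExpectations(t)
}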

@sashabaranov (Owner)

@drkennetz, the approach you are talking about would generally be correct if there were some business logic on top of Client. However, Client is the whole business logic we have here; you don't get it tested if you mock it.

@RobotSail (Contributor, Author) commented Mar 1, 2023

After some further thought, do we really even need the rest of these tests?
The whole point of this package is to provide a client for calling OpenAI's API, so would it even make sense to aim for some percentage of code coverage? All that happens here is that the API gets called and the results are returned.

@drkennetz

The goal of mocks is never to test an API call; it is to ensure that changes in your internal software don't break your code. Mocks allow fast iteration if you ever decide to change your code, and they provide consistent results.

I don't understand what you mean by "you don't get it tested if you mock it". True, you don't test that you actually make an API call to OpenAI, but you do test that the method takes in and puts out data correctly.

There is a lot of software that is "just an API call" and contains nothing except a Get request to a database, and that software should absolutely be wrapped in tests. It is also core to the business.

Arguably, it doesn't make sense to make "actual" API requests to a third party: you have no control over them changing the API, there is no way to guarantee consistent results, and it will be abundantly apparent to you and everybody else if the API stops working like it used to. All you're really checking by getting an actual response is that you can connect to the API.

@sashabaranov (Owner)

@drkennetz got it, thank you! From your example above, it appears that you proposed to mock the client itself, like this: https://github.com/sashabaranov/go-gpt3/blob/39ca4e94882215d59857cd8791c12082edec2c97/engines.go#L24, which would defeat the purpose of testing the business logic in the client.

From your last message, I understand that you propose to mock the HTTPClient here: https://github.com/sashabaranov/go-gpt3/blob/master/config.go#L16 and have it mock server responses, basically substituting the call with a mock server that just returns the result.

Would love to merge your PR implementing that! 🙌🏻
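
(Illustrative sketch, not part of the thread: one way to do the HTTPClient-level mocking described above, assuming ClientConfig exposes the HTTPClient *http.Client field linked in config.go. The stubTransport type, the canned JSON, and the DefaultConfig/NewClientWithConfig calls shown in the trailing comment are assumptions for illustration, not the project's actual test setup.)

package gogpt_test

import (
    "bytes"
    "io"
    "net/http"
)

// stubTransport is an http.RoundTripper that returns a canned response,
// so no request ever reaches the real OpenAI API.
type stubTransport struct {
    status int
    body   string
}

func (s *stubTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    return &http.Response{
        StatusCode: s.status,
        Header:     http.Header{"Content-Type": []string{"application/json"}},
        Body:       io.NopCloser(bytes.NewBufferString(s.body)),
        Request:    req,
    }, nil
}

// A test would wire the stub into the client's HTTP client, e.g.:
//
//    config := gogpt.DefaultConfig("test-token")
//    config.HTTPClient = &http.Client{Transport: &stubTransport{
//        status: http.StatusOK,
//        body:   `{"data": [{"id": "davinci", "object": "engine"}]}`,
//    }}
//    c := gogpt.NewClientWithConfig(config)
//    engines, err := c.ListEngines(context.Background())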

@drkennetz

Sorry for the lack of clarity in my original post!

I'd be happy to do that. Maybe I'll start with a small one as an example, and if we're happy with it I can implement it across the board 👍

vvatanabe added a commit to vvatanabe/go-openai that referenced this issue Jun 12, 2023
coggsfl added a commit to coggsfl/go-openai that referenced this issue Jun 12, 2023
@vvatanabe (Collaborator)

@sashabaranov Mock testing of all the OpenAI APIs was implemented in pull request #356. Does that meet the requirements of this issue? Can this issue be closed?

vvatanabe added the enhancement (New feature or request) label on Jul 1, 2023
@vvatanabe (Collaborator)

Fixed by #356
