add testing ability #71

Merged · 16 commits merged into main from add-testing-ability on Apr 3, 2023
Conversation

gehrisandro (Collaborator) commented Mar 11, 2023

This PR makes it easier to write tests in your application using this library.

This is related to openai-php/laravel#23

@nunomaduro @mpociot I have decided to create most of the testing logic within the client library itself and not in the Laravel wrapper. The main reason for this decision is to provide the same testing options to users who use the client directly.
Additionally, it should be fairly easy to provide the same testing options in other wrappers too (@GromNaN).

The PR for the Laravel wrapper is here: openai-php/laravel#27

There are multiple things to discuss:

  • First of all: Do you agree to put most of the testing logic into the client library?
  • I have added fixtures for all responses to make faking easier: it's not necessary to provide all response parameters; it's enough to provide only the parameters the user is interested in (see the example below). Do you think that's too much?
  • The response fixtures are currently classes holding a constant with the default response attributes (see the sketch after this list). Maybe this could be changed to something lighter, as dedicated classes feel like a bit much.
  • I have excluded the OpenAI\Testing namespace from phpstan because it's not really possible to add proper doc types there; the return type could be any of the available responses.
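For context on the fixture point above: a fixture in this approach is roughly a class holding a constant with a full default response body, which fake() merges with the attributes the test passes in. A minimal sketch of that shape (the class name, constant name, and values here are illustrative, not necessarily the ones used in this PR):

final class CreateResponseFixture
{
    // Full default response body, used for every attribute the test does not override.
    public const ATTRIBUTES = [
        'id' => 'cmpl-fake-id',
        'object' => 'text_completion',
        'created' => 1589478378,
        'model' => 'text-davinci-003',
        'choices' => [
            [
                'text' => 'fake completion text',
                'index' => 0,
                'logprobs' => null,
                'finish_reason' => 'length',
            ],
        ],
        'usage' => [
            'prompt_tokens' => 1,
            'completion_tokens' => 16,
            'total_tokens' => 17,
        ],
    ];
}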

Todos:

  • Add documentation

A test using a fake response without providing all parameters:

it('returns the faked response', function () {
    $fake = new ClientFake([
        CreateResponse::fake([
            'choices' => [
                [
                    'text' => 'awesome!',
                ],
            ],
        ]),
    ]);

    $completions = $fake->completions()->create([
        'model' => 'text-davinci-003',
        'prompt' => 'PHP is ',
    ]);

    expect($completions['choices'][0])
        ->text->toBe('awesome!');
});

gehrisandro marked this pull request as ready for review on March 20, 2023 18:47
mpociot (Contributor) commented Mar 24, 2023

@gehrisandro thank you very much for putting in the work for this PR.
I just tried the branch on What The Diff and it works great.

Internally, I personally don't think it's an issue that the testing implementation is inside the library.

There are two things I noticed while implementing this:

The fixture values used for default responses should be simpler.

While testing, I didn't specify all of the choices in the response and saw default fixture values like "this is indeed a test". Those defaults made me wonder for a second whether I was making actual API requests, so I think it would be better to either leave them out entirely or use more verbose defaults that clearly indicate a fake response.

Add the ability to test exceptions/errors

I have to add some tests to ensure that error handling works correctly. Errors can have a couple of different causes:
either the API returns an internal error (GPT-4 seems to do this), a rate limit error, or something else.

I thought that I could just create a fake response with an error, but that didn't work as the ErrorException does not get thrown in the fake client:

CreateResponse::fake([
    'error' => [
        'message' => 'Something went wrong',
    ],
])

Other than that, this is perfect! :)

nunomaduro (Contributor) commented
I would like to add, on top of @mpociot's feedback, that we need to adjust this pull request for the new streaming feature coming out.

gehrisandro (Collaborator, Author) commented
Hi @mpociot

Thank you for your valuable feedback.

  1. I will review the default values and make them clearer.
    Not providing any defaults would force users to always provide a full API response to fake a response object, which in my opinion is less convenient, because in most tests they are only interested in a specific value.

  2. To be honest, I totally forgot about returning exceptions. I have to take a look at how to implement this, because the user creates a fake response instance, but in the client the exception handling takes place before the response object is instantiated.
    For now I think we should allow passing exceptions as fake responses.

@nunomaduro I will start adjusting for the stream responses. I hope to finish by Monday evening.

gehrisandro (Collaborator, Author) commented
@mpociot Exception testing should work now:

$client = new ClientFake([
    new \OpenAI\Exceptions\ErrorException([
        'message' => 'The model `gpt-1` does not exist',
        'type' => 'invalid_request_error',
        'code' => null,
    ])
]);

// the `ErrorException` will be thrown
$completion = $client->completions()->create([
    'model' => 'text-davinci-003',
    'prompt' => 'PHP is ',
]);
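
And a minimal Pest sketch of asserting that behaviour in a test (assuming the fake client throws the queued exception as soon as the request is performed):

it('throws the faked error', function () {
    $client = new ClientFake([
        new \OpenAI\Exceptions\ErrorException([
            'message' => 'The model `gpt-1` does not exist',
            'type' => 'invalid_request_error',
            'code' => null,
        ]),
    ]);

    // The queued exception is thrown when the request is made.
    expect(fn () => $client->completions()->create([
        'model' => 'text-davinci-003',
        'prompt' => 'PHP is ',
    ]))->toThrow(\OpenAI\Exceptions\ErrorException::class);
});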

nunomaduro merged commit ca2f24a into main on Apr 3, 2023
nunomaduro deleted the add-testing-ability branch on April 3, 2023 07:45