170 changes: 170 additions & 0 deletions tests/unit/vertexai/genai/replays/test_internal_generate_rubrics.py
@@ -0,0 +1,170 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# pylint: disable=protected-access,bad-continuation,missing-function-docstring


from tests.unit.vertexai.genai.replays import pytest_helper
from vertexai._genai import types

_TEST_RUBRIC_GENERATION_PROMPT = """SPECIAL INSTRUCTION: think silently. Silent thinking token budget: 16384.

You are a teacher who is responsible for scoring a student\'s response to a prompt. In order to score that response, you must write down a rubric for each prompt. That rubric states what properties the response must have in order to be a valid response to the prompt. Properties are weighted by importance via the "importance" field.

Rubric requirements:
- Properties either exist or don\'t exist.
- Properties can be either implicit in the prompt or made explicit by the prompt.
- Make sure to always include the correct expected human language as one of the properties. If the prompt asks for code, the programming language should be covered by a separate property.
- The correct expected language may be explicit in the text of the prompt but is usually simply implicit in the prompt itself.
- Be as comprehensive as possible with the list of properties in the rubric.
- All properties in the rubric must be in English, regardless of the language of the prompt.
- Rubric properties should not specify correct answers in their descriptions, e.g. to math and factoid questions if the prompt calls for such an answer. Rather, it should check that the response contains an answer and optional supporting evidence if relevant, and assume some other process will later validate correctness. A rubric property should however call out any false premises present in the prompt.

About importance:
- Most properties will be of medium importance by default.
- Properties of high importance are critical to be fulfilled in a good response.
- Properties of low importance are considered optional or supplementary nice-to-haves.

You will see prompts in many different languages, not just English. For each prompt you see, you will write down this rubric in JSON format.

IMPORTANT: Never respond to the prompt given. Only write a rubric.

Example:
What is the tallest building in the world?

```json
{
  "criteria":[
    {
      "rubric_id": "00001",
      "property": "The response is in English.",
      "type": "LANGUAGE:PRIMARY_RESPONSE_LANGUAGE",
      "importance": "high"
    },
    {
      "rubric_id": "00002",
      "property": "Contains the name of the tallest building in the world.",
      "type": "QA_ANSWER:FACTOID",
      "importance": "high"
    },
    {
      "rubric_id": "00003",
      "property": "Contains the exact height of the tallest building.",
      "type": "QA_SUPPORTING_EVIDENCE:HEIGHT",
      "importance": "low"
    },
    {
      "rubric_id": "00004",
      "property": "Contains the location of the tallest building.",
      "type": "QA_SUPPORTING_EVIDENCE:LOCATION",
      "importance": "low"
    },
    ...
  ]
}
```

Write me a letter to my HOA asking them to reconsider the fees they are asking me to pay because I haven\'t mowed my lawn on time. I have been very busy at work.
```json
{
  "criteria": [
    {
      "rubric_id": "00001",
      "property": "The response is in English.",
      "type": "LANGUAGE:PRIMARY_RESPONSE_LANGUAGE",
      "importance": "high"
    },
    {
      "rubric_id": "00002",
      "property": "The response is formatted as a letter.",
      "type": "FORMAT_REQUIREMENT:FORMAL_LETTER",
      "importance": "medium"
    },
    {
      "rubric_id": "00003",
      "property": "The letter is addressed to the Homeowners Association (HOA).",
      "type": "CONTENT_REQUIREMENT:ADDRESSEE",
      "importance": "medium"
    },
    {
      "rubric_id": "00004",
      "property": "The letter explains that the sender has not mowed their lawn on time.",
      "type": "CONTENT_REQUIREMENT:BACKGROUND_CONTEXT:TARDINESS",
      "importance": "medium"
    },
    {
      "rubric_id": "00005",
      "property": "The letter provides a reason for not mowing the lawn, specifically being busy at work.",
      "type": "CONTENT_REQUIREMENT:EXPLANATION:EXCUSE:BUSY",
      "importance": "medium"
    },
    {
      "rubric_id": "00006",
      "property": "The letter discusses that the sender has been in compliance until now.",
      "type": "OPTIONAL_CONTENT:SUPPORTING_EVIDENCE:COMPLIANCE",
      "importance": "low"
    },
    {
      "rubric_id": "00007",
      "property": "The letter requests that the HOA reconsider the fees associated with not mowing the lawn on time.",
      "type": "CONTENT_REQUIREMENT:REQUEST:FEE_WAIVER",
      "importance": "high"
    },
    {
      "rubric_id": "00008",
      "property": "The letter maintains a polite and respectful tone.",
      "type": "CONTENT_REQUIREMENT:FORMALITY:FORMAL",
      "importance": "high"
    },
    {
      "rubric_id": "00009",
      "property": "The letter includes a closing (e.g., \'Sincerely\') and the sender\'s name.",
      "type": "CONTENT_REQUIREMENT:SIGNATURE",
      "importance": "medium"
    }
  ]
}
```

Now write a rubric for the following user prompt. Remember to write only the rubric, NOT response to the prompt.

User prompt:
{prompt}"""


def test_internal_method_generate_rubrics(client):
    """Tests the internal _generate_rubrics method."""
    test_contents = [
        types.Content(
            parts=[
                types.Part(
                    text="Generate a short story about a friendly dragon.",
                ),
            ],
        )
    ]
    response = client.evals._generate_rubrics(
        contents=test_contents,
        rubric_generation_spec=types.RubricGenerationSpec(
            prompt_template=_TEST_RUBRIC_GENERATION_PROMPT,
        ),
    )
    assert len(response.generated_rubrics) >= 1


pytestmark = pytest_helper.setup(
    file=__file__,
    globals_for_file=globals(),
    test_method="evals._generate_rubrics",
)
188 changes: 188 additions & 0 deletions vertexai/_genai/evals.py
@@ -664,6 +664,65 @@ def _EvaluateInstancesRequestParameters_to_vertex(
    return to_object


def _RubricGenerationSpec_to_vertex(
    from_object: Union[dict[str, Any], object],
    parent_object: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
    to_object: dict[str, Any] = {}
    if getv(from_object, ["prompt_template"]) is not None:
        setv(
            to_object,
            ["promptTemplate"],
            getv(from_object, ["prompt_template"]),
        )

    if getv(from_object, ["generator_model_config"]) is not None:
        setv(
            to_object,
            ["model_config"],
            getv(from_object, ["generator_model_config"]),
        )

    if getv(from_object, ["rubric_content_type"]) is not None:
        setv(
            to_object,
            ["rubricContentType"],
            getv(from_object, ["rubric_content_type"]),
        )

    if getv(from_object, ["rubric_type_ontology"]) is not None:
        setv(
            to_object,
            ["rubricTypeOntology"],
            getv(from_object, ["rubric_type_ontology"]),
        )

    return to_object


def _GenerateInstanceRubricsRequest_to_vertex(
    from_object: Union[dict[str, Any], object],
    parent_object: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
    to_object: dict[str, Any] = {}
    if getv(from_object, ["contents"]) is not None:
        setv(to_object, ["contents"], getv(from_object, ["contents"]))

    if getv(from_object, ["rubric_generation_spec"]) is not None:
        setv(
            to_object,
            ["rubricGenerationSpec"],
            _RubricGenerationSpec_to_vertex(
                getv(from_object, ["rubric_generation_spec"]), to_object
            ),
        )

    if getv(from_object, ["config"]) is not None:
        setv(to_object, ["config"], getv(from_object, ["config"]))

    return to_object


def _EvaluateInstancesResponse_from_vertex(
    from_object: Union[dict[str, Any], object],
    parent_object: Optional[dict[str, Any]] = None,
@@ -790,6 +849,21 @@ def _EvaluateInstancesResponse_from_vertex(
    return to_object


def _GenerateInstanceRubricsResponse_from_vertex(
    from_object: Union[dict[str, Any], object],
    parent_object: Optional[dict[str, Any]] = None,
) -> dict[str, Any]:
    to_object: dict[str, Any] = {}
    if getv(from_object, ["generatedRubrics"]) is not None:
        setv(
            to_object,
            ["generated_rubrics"],
            getv(from_object, ["generatedRubrics"]),
        )

    return to_object


class Evals(_api_module.BaseModule):
    def _evaluate_instances(
        self,
@@ -869,6 +943,62 @@ def _evaluate_instances(
        self._api_client._verify_response(return_value)
        return return_value

    def _generate_rubrics(
        self,
        *,
        contents: list[genai_types.ContentOrDict],
        rubric_generation_spec: types.RubricGenerationSpecOrDict,
        config: Optional[types.RubricGenerationConfigOrDict] = None,
    ) -> types.GenerateInstanceRubricsResponse:
        """Generates rubrics for a given prompt."""

        parameter_model = types._GenerateInstanceRubricsRequest(
            contents=contents,
            rubric_generation_spec=rubric_generation_spec,
            config=config,
        )

        request_url_dict: Optional[dict[str, str]]
        if not self._api_client.vertexai:
            raise ValueError("This method is only supported in the Vertex AI client.")
        else:
            request_dict = _GenerateInstanceRubricsRequest_to_vertex(parameter_model)
            request_url_dict = request_dict.get("_url")
            if request_url_dict:
                path = ":generateInstanceRubrics".format_map(request_url_dict)
            else:
                path = ":generateInstanceRubrics"

        query_params = request_dict.get("_query")
        if query_params:
            path = f"{path}?{urlencode(query_params)}"
        # TODO: remove the hack that pops config.
        request_dict.pop("config", None)

        http_options: Optional[types.HttpOptions] = None
        if (
            parameter_model.config is not None
            and parameter_model.config.http_options is not None
        ):
            http_options = parameter_model.config.http_options

        request_dict = _common.convert_to_dict(request_dict)
        request_dict = _common.encode_unserializable_types(request_dict)

        response = self._api_client.request("post", path, request_dict, http_options)

        response_dict = "" if not response.body else json.loads(response.body)

        if self._api_client.vertexai:
            response_dict = _GenerateInstanceRubricsResponse_from_vertex(response_dict)

        return_value = types.GenerateInstanceRubricsResponse._from_response(
            response=response_dict, kwargs=parameter_model.model_dump()
        )

        self._api_client._verify_response(return_value)
        return return_value

    def run(self) -> types.EvaluateInstancesResponse:
        """Evaluates an instance of a model.

@@ -1133,6 +1263,64 @@ async def _evaluate_instances(
        self._api_client._verify_response(return_value)
        return return_value

    async def _generate_rubrics(
        self,
        *,
        contents: list[genai_types.ContentOrDict],
        rubric_generation_spec: types.RubricGenerationSpecOrDict,
        config: Optional[types.RubricGenerationConfigOrDict] = None,
    ) -> types.GenerateInstanceRubricsResponse:
        """Generates rubrics for a given prompt."""

        parameter_model = types._GenerateInstanceRubricsRequest(
            contents=contents,
            rubric_generation_spec=rubric_generation_spec,
            config=config,
        )

        request_url_dict: Optional[dict[str, str]]
        if not self._api_client.vertexai:
            raise ValueError("This method is only supported in the Vertex AI client.")
        else:
            request_dict = _GenerateInstanceRubricsRequest_to_vertex(parameter_model)
            request_url_dict = request_dict.get("_url")
            if request_url_dict:
                path = ":generateInstanceRubrics".format_map(request_url_dict)
            else:
                path = ":generateInstanceRubrics"

        query_params = request_dict.get("_query")
        if query_params:
            path = f"{path}?{urlencode(query_params)}"
        # TODO: remove the hack that pops config.
        request_dict.pop("config", None)

        http_options: Optional[types.HttpOptions] = None
        if (
            parameter_model.config is not None
            and parameter_model.config.http_options is not None
        ):
            http_options = parameter_model.config.http_options

        request_dict = _common.convert_to_dict(request_dict)
        request_dict = _common.encode_unserializable_types(request_dict)

        response = await self._api_client.async_request(
            "post", path, request_dict, http_options
        )

        response_dict = "" if not response.body else json.loads(response.body)

        if self._api_client.vertexai:
            response_dict = _GenerateInstanceRubricsResponse_from_vertex(response_dict)

        return_value = types.GenerateInstanceRubricsResponse._from_response(
            response=response_dict, kwargs=parameter_model.model_dump()
        )

        self._api_client._verify_response(return_value)
        return return_value

    async def batch_evaluate(
        self,
        *,
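A minimal usage sketch of the new `evals._generate_rubrics` method added in this diff, mirroring the replay test above. The `vertexai.Client(...)` constructor arguments, the request text, and the short prompt template are illustrative placeholders rather than part of this change, and the template is assumed to use a `{prompt}` placeholder that the service fills in from the request contents.

```python
# Illustrative sketch only; project and location values are placeholders.
import vertexai
from vertexai._genai import types

client = vertexai.Client(project="my-project", location="us-central1")

response = client.evals._generate_rubrics(
    contents=[
        types.Content(
            parts=[types.Part(text="Write a haiku about autumn.")],
        )
    ],
    rubric_generation_spec=types.RubricGenerationSpec(
        # Assumption: {prompt} is substituted with the request contents.
        prompt_template="Write a JSON rubric for the following user prompt:\n{prompt}",
    ),
)

# Rubrics come back under generated_rubrics, per the
# _GenerateInstanceRubricsResponse_from_vertex converter in this diff.
for rubric in response.generated_rubrics:
    print(rubric)
```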