
Conversation

@AlexandrePicosson
Contributor

This PR fixes the type hint for EvaluationContextAttributes

The current type hint was impossible to satisfy with non-empty nested dict attributes: it allowed only mappings of mappings, not mappings whose values are str, bool, and so on.
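
For illustration, here is a minimal sketch of the kind of alias shape that triggers the problem versus a directly recursive one; the names and exact unions are illustrative, not the SDK's actual definitions.

from typing import Mapping, Union

# Illustrative only; not the SDK's actual aliases.
# Broken shape: the nested mapping points back at the whole attributes alias,
# so every value inside a nested dict must itself be another mapping.
BrokenValue = Union[bool, int, float, str, Mapping[str, "BrokenAttributes"]]
BrokenAttributes = Mapping[str, BrokenValue]

# Fixed shape: the nested mapping refers to the value union directly,
# so primitive values are allowed at any depth.
FixedValue = Union[bool, int, float, str, Mapping[str, "FixedValue"]]
FixedAttributes = Mapping[str, FixedValue]

ok: FixedAttributes = {"user_attributes": {"city": "Paris"}}  # accepted
# bad: BrokenAttributes = {"user_attributes": {"city": "Paris"}}  # rejected: "Paris" is not a mapping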

Related Issues

Fixes #532

How to test

The simplest way to test this is to create a small file containing the following code and run mypy on it; it should not report any errors.

from openfeature.evaluation_context import EvaluationContext

evaluation_context = EvaluationContext(
    targeting_key="key", attributes={"user_attributes": {"city": "Paris"}}
)

AlexandrePicosson requested review from a team as code owners on September 13, 2025, 10:10

gemini-code-assist bot left a comment

Summary of Changes

Hello @AlexandrePicosson, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical type-hinting issue within the EvaluationContextAttributes definition. The previous type alias was overly restrictive, preventing the correct type-checking of nested dictionary attributes that contained non-mapping values. This fix ensures that the type system accurately reflects the intended structure, improving code correctness and developer experience when working with evaluation contexts.

Highlights

  • Type Hint Correction: The type alias for EvaluationContextAttributes was updated to correctly handle nested dictionary attributes that contain primitive types (like str, bool) instead of exclusively requiring nested mappings. This resolves a type-checking issue where the previous definition was too restrictive.

The current type hint was impossible to satisfy with nested dict
attributes that are not empty.

Fixes open-feature#532

Signed-off-by: AlexandrePicosson <picosson.alexandre@gmail.com>

gemini-code-assist bot left a comment

Code Review

This pull request correctly fixes a bug in the type alias for EvaluationContextAttributes. The previous type hint was overly restrictive for nested dictionaries, and the change to a direct recursive reference solves the issue. I've also added one comment about a potential discrepancy with the OpenFeature specification for sequence types, which could be addressed in a follow-up.
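
For context on the sequence-type comment: whether list-valued attributes such as {"tags": ["beta", "eu"]} type-check depends on whether a Sequence member is part of the recursive value union. A hypothetical sketch follows; these are not the SDK's actual aliases.

from typing import Mapping, Sequence, Union

# Hypothetical only: the same recursive value union with and without sequences.
NoSeqValue = Union[bool, int, float, str, Mapping[str, "NoSeqValue"]]
WithSeqValue = Union[bool, int, float, str, Sequence["WithSeqValue"], Mapping[str, "WithSeqValue"]]

tags_ok: Mapping[str, WithSeqValue] = {"tags": ["beta", "eu"]}  # accepted with Sequence in the union
# tags_bad: Mapping[str, NoSeqValue] = {"tags": ["beta", "eu"]}  # rejected: list is not in the union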

AlexandrePicosson force-pushed the fix/evaluation-context-type-hint branch from 2bfaf61 to 64f521c on September 13, 2025, 10:13

codecov bot commented Sep 13, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.85%. Comparing base (92f5da4) to head (64f521c).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #534   +/-   ##
=======================================
  Coverage   97.85%   97.85%           
=======================================
  Files          39       39           
  Lines        1822     1822           
=======================================
  Hits         1783     1783           
  Misses         39       39           
Flag Coverage Δ
unittests 97.85% <ø> (ø)

Flags with carried forward coverage won't be shown.

@gruebel gruebel merged commit 0e0f018 into open-feature:main Sep 13, 2025
16 checks passed