
Conversation

@Saga4 (Contributor) commented on Sep 23, 2025

User description

  • LSP: reduce the number of candidates
  • Config revert
  • Pass reference values to aiservice
  • Fix inline condition

PR Type

Enhancement, Bug fix


Description

  • Replace dynamic getters with constants (see the sketch after this list)

  • Cap candidates via new MAX limits

  • Propagate effective config across modules

  • Adjust worktree creation to new counts
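
As a minimal before/after sketch of this migration (the getter name get_n_candidates and the constant N_CANDIDATES_EFFECTIVE come from the file walkthrough below; the consumer code itself is illustrative and not taken from the repo):

# Before this PR: consumers called a getter that checked the LSP state on every call.
# (This import no longer works; the getters were removed in this PR.)
from codeflash.code_utils.config_consts import get_n_candidates
n_candidates = get_n_candidates()

# After this PR: consumers read a constant computed once at import time
# and capped by MAX_N_CANDIDATES.
from codeflash.code_utils.config_consts import N_CANDIDATES_EFFECTIVE
n_candidates = N_CANDIDATES_EFFECTIVE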


Diagram Walkthrough

flowchart LR
  CFG["config_consts: compute EFFECTIVE values"] --> API["aiservice: send n_candidates/_lp"]
  CFG --> GIT["git_utils: worktrees use EFFECTIVE"]
  CFG --> OPT["function_optimizer: tests/time/candidates use EFFECTIVE"]

File Walkthrough

Relevant files
Enhancement
aiservice.py
Use effective candidate constants in API payloads               

codeflash/api/aiservice.py

  • Replace get_n_candidates with N_CANDIDATES_EFFECTIVE.
  • Replace get_n_candidates_lp with N_CANDIDATES_LP_EFFECTIVE.
  • Ensure payload uses new effective constants.
+3/-3     
config_consts.py
Introduce capped EFFECTIVE constants and remove getters   

codeflash/code_utils/config_consts.py

  • Add caps: MAX_N_CANDIDATES, MAX_N_CANDIDATES_LP.
  • Precompute _IS_LSP_ENABLED with fallback.
  • Introduce N_*_EFFECTIVE constants replacing getters.
  • Remove getter functions for dynamic values.
+10/-19 
git_utils.py
Worktree creation uses effective candidate count                 

codeflash/code_utils/git_utils.py

  • Switch worktree count to N_CANDIDATES_EFFECTIVE.
  • Remove dependency on removed getter.
+2/-2     
function_optimizer.py
Optimizer switched to EFFECTIVE constants throughout         

codeflash/optimization/function_optimizer.py

  • Replace getters with EFFECTIVE constants for tests/time (see the sketch below).
  • Use N_CANDIDATES_LP_EFFECTIVE in line profiler path.
  • Use N_CANDIDATES_EFFECTIVE for optimization requests.
  • Standardize total looping time via EFFECTIVE value.
+15/-15 
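
A rough sketch of what the optimizer's new call sites look like; the module path and constant names are from this PR, while the helper function below and its return shape are hypothetical, not the actual function_optimizer.py code:

from codeflash.code_utils.config_consts import (
    N_CANDIDATES_EFFECTIVE,
    N_CANDIDATES_LP_EFFECTIVE,
    N_TESTS_TO_GENERATE_EFFECTIVE,
    TOTAL_LOOPING_TIME_EFFECTIVE,
)

def optimization_run_settings() -> dict:
    # Hypothetical helper: gathers the knobs the optimizer now reads as constants
    # instead of calling the removed getters. In LSP mode these resolve to the LSP
    # presets; the candidate counts are additionally capped by MAX_N_CANDIDATES and
    # MAX_N_CANDIDATES_LP.
    return {
        "n_tests": N_TESTS_TO_GENERATE_EFFECTIVE,
        "n_candidates": N_CANDIDATES_EFFECTIVE,
        "n_candidates_lp": N_CANDIDATES_LP_EFFECTIVE,
        "total_looping_time": TOTAL_LOOPING_TIME_EFFECTIVE,
    }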

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


saga4 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@github-actions

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Import-time Side Effects

Computing EFFECTIVE constants at import time based on LSP availability may freeze configuration for the process lifetime; if the LSP state or config is expected to change at runtime, these values will not update. Confirm this is intended.

try:
    from codeflash.lsp.helpers import is_LSP_enabled

    _IS_LSP_ENABLED = is_LSP_enabled()
except ImportError:
    _IS_LSP_ENABLED = False

N_CANDIDATES_EFFECTIVE = min(N_CANDIDATES_LSP if _IS_LSP_ENABLED else N_CANDIDATES, MAX_N_CANDIDATES)
N_CANDIDATES_LP_EFFECTIVE = min(N_CANDIDATES_LP_LSP if _IS_LSP_ENABLED else N_CANDIDATES_LP, MAX_N_CANDIDATES_LP)
N_TESTS_TO_GENERATE_EFFECTIVE = N_TESTS_TO_GENERATE_LSP if _IS_LSP_ENABLED else N_TESTS_TO_GENERATE
TOTAL_LOOPING_TIME_EFFECTIVE = TOTAL_LOOPING_TIME_LSP if _IS_LSP_ENABLED else TOTAL_LOOPING_TIME
Worktree Count Consistency

Worktrees are allocated using N_CANDIDATES_EFFECTIVE + 1; ensure this aligns with all call sites that use num_candidates and with MAX caps so you don’t under/over-provision worktrees relative to generated candidates.

def create_git_worktrees(
    git_root: Path | None, worktree_root_dir: Path | None, module_root: Path
) -> tuple[Path | None, list[Path]]:
    if git_root and worktree_root_dir:
        worktree_root = Path(tempfile.mkdtemp(dir=worktree_root_dir))
        worktrees = [Path(tempfile.mkdtemp(dir=worktree_root)) for _ in range(N_CANDIDATES_EFFECTIVE + 1)]
        for worktree in worktrees:
            subprocess.run(["git", "worktree", "add", "-d", worktree], cwd=module_root, check=True)
    else:
API Contract Change

Replaced dynamic getters with constants in payload keys 'n_candidates' and 'n_candidates_lp'. Verify backend expects capped EFFECTIVE values and that LSP/non-LSP semantics are preserved.

        "num_variants": num_candidates,
        "trace_id": trace_id,
        "python_version": platform.python_version(),
        "experiment_metadata": experiment_metadata,
        "codeflash_version": codeflash_version,
        "current_username": get_last_commit_author_if_pr_exists(None),
        "repo_owner": git_repo_owner,
        "repo_name": git_repo_name,
        "n_candidates": N_CANDIDATES_EFFECTIVE,
    }

    logger.info("!lsp|Generating optimized candidates…")
    console.rule()
    try:
        response = self.make_ai_service_request("/optimize", payload=payload, timeout=600)
    except requests.exceptions.RequestException as e:
        logger.exception(f"Error generating optimized candidates: {e}")
        ph("cli-optimize-error-caught", {"error": str(e)})
        return []

    if response.status_code == 200:
        optimizations_json = response.json()["optimizations"]
        logger.info(f"!lsp|Generated {len(optimizations_json)} candidate optimizations.")
        console.rule()
        end_time = time.perf_counter()
        logger.debug(f"!lsp|Generating possible optimizations took {end_time - start_time:.2f} seconds.")
        return self._get_valid_candidates(optimizations_json)
    try:
        error = response.json()["error"]
    except Exception:
        error = response.text
    logger.error(f"Error generating optimized candidates: {response.status_code} - {error}")
    ph("cli-optimize-error-response", {"response_status_code": response.status_code, "error": error})
    console.rule()
    return []

def optimize_python_code_line_profiler(  # noqa: D417
    self,
    source_code: str,
    dependency_code: str,
    trace_id: str,
    line_profiler_results: str,
    num_candidates: int = 10,
    experiment_metadata: ExperimentMetadata | None = None,
) -> list[OptimizedCandidate]:
    """Optimize the given python code for performance by making a request to the Django endpoint.

    Parameters
    ----------
    - source_code (str): The python code to optimize.
    - dependency_code (str): The dependency code used as read-only context for the optimization
    - trace_id (str): Trace id of optimization run
    - num_candidates (int): Number of optimization variants to generate. Default is 10.
    - experiment_metadata (Optional[ExperimentalMetadata, None]): Any available experiment metadata for this optimization

    Returns
    -------
    - List[OptimizationCandidate]: A list of Optimization Candidates.

    """
    payload = {
        "source_code": source_code,
        "dependency_code": dependency_code,
        "num_variants": num_candidates,
        "line_profiler_results": line_profiler_results,
        "trace_id": trace_id,
        "python_version": platform.python_version(),
        "experiment_metadata": experiment_metadata,
        "codeflash_version": codeflash_version,
        "lsp_mode": is_LSP_enabled(),
        "n_candidates_lp": N_CANDIDATES_LP_EFFECTIVE,
    }

Saga4 changed the title from "saga/fix candidate inline" to "[fix] candidate inline and remove functions for config" on Sep 23, 2025
@github-actions

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Possible issue
Avoid stale LSP state caching

Caching the LSP-enabled flag at import time can leave it stale if the LSP state changes
during runtime. Convert this to a lazy getter so the effective constants reflect the
current environment at use time.

codeflash/code_utils/config_consts.py [25-33]

-try:
-    from codeflash.lsp.helpers import is_LSP_enabled
+def _is_lsp_enabled_cached() -> bool:
+    try:
+        from codeflash.lsp.helpers import is_LSP_enabled
+        return is_LSP_enabled()
+    except ImportError:
+        return False
 
-    _IS_LSP_ENABLED = is_LSP_enabled()
-except ImportError:
-    _IS_LSP_ENABLED = False
+def _n_candidates_effective() -> int:
+    return min(N_CANDIDATES_LSP if _is_lsp_enabled_cached() else N_CANDIDATES, MAX_N_CANDIDATES)
 
+def _n_candidates_lp_effective() -> int:
+    return min(N_CANDIDATES_LP_LSP if _is_lsp_enabled_cached() else N_CANDIDATES_LP, MAX_N_CANDIDATES_LP)
+
+def _n_tests_to_generate_effective() -> int:
+    return N_TESTS_TO_GENERATE_LSP if _is_lsp_enabled_cached() else N_TESTS_TO_GENERATE
+
+def _total_looping_time_effective() -> float:
+    return TOTAL_LOOPING_TIME_LSP if _is_lsp_enabled_cached() else TOTAL_LOOPING_TIME
+
Suggestion importance[1-10]: 6


Why: The suggestion identifies a valid risk of stale _IS_LSP_ENABLED-derived constants, proposing lazy getters. It is context-aware but deviates from the PR’s direction of using computed constants and would require a broader refactor; still, it is a reasonable maintainability improvement.

Impact: Low
Use dynamic effective value

If N_TESTS_TO_GENERATE_EFFECTIVE becomes a dynamic getter (to avoid stale LSP
state), ensure call sites use a callable, not a constant. Otherwise, they’ll capture
an outdated value at import time.

codeflash/optimization/function_optimizer.py [240-243]

-n_tests = N_TESTS_TO_GENERATE_EFFECTIVE
+n_tests = _n_tests_to_generate_effective()
Suggestion importance[1-10]: 4


Why: This depends on adopting the dynamic getters from suggestion 1; since the PR uses constants (N_TESTS_TO_GENERATE_EFFECTIVE), changing to a callable is inconsistent with the current code. Minor potential benefit, but misaligned with the PR approach.

Impact: Low
Category: General
Compute effective value at call

If effective values are made dynamic to reflect current LSP status, ensure requests
compute them at call time. Passing a cached module-level constant risks incorrect
candidate counts.

codeflash/api/aiservice.py [135-136]

-"n_candidates": N_CANDIDATES_EFFECTIVE,
+"n_candidates": _n_candidates_effective(),
Suggestion importance[1-10]: 4


Why: Similar to suggestion 2, this presumes dynamic getters that the PR does not implement. While it could avoid staleness, it conflicts with the PR’s shift to effective constants and yields only marginal robustness gains here.

Impact: Low


Inline review comment from a contributor on this hunk of codeflash/code_utils/config_consts.py:

    return N_CANDIDATES_LSP if is_LSP_enabled() else N_CANDIDATES
    _IS_LSP_ENABLED = is_LSP_enabled()
except ImportError:

when would we get import errors?

Saga4 merged commit cdd16bb into main on Sep 23, 2025, with 5 of 20 checks passed.
KRRT7 deleted the saga/fix_candidate_inline branch on September 30, 2025 at 01:40.