Conversation


@onmete onmete commented Sep 25, 2025

Description

Fix the example run.yaml. The current version fails with:

    raise ValueError(f"Provider `{provider.provider_type}` is not available for API `{api}`")
ValueError: Provider `inline::huggingface` is not available for API `Api.post_training`

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features
    • Added a configurable DPO output directory in post-training settings (dpo_output_dir), enabling users to choose where results are saved.
  • Chores
    • Updated the example configuration to use the GPU-backed Hugging Face provider for post-training by default.


coderabbitai bot commented Sep 25, 2025

Walkthrough

Updated examples/run.yaml: added post_training.config.dpo_output_dir and changed the huggingface post_training provider_type from inline::huggingface to inline::huggingface-gpu.

Changes

Cohort / File(s): Post-training config updates — examples/run.yaml

  • Added post_training.config.dpo_output_dir: .llama/distributions/ollama
  • Updated the huggingface post_training provider's provider_type from inline::huggingface to inline::huggingface-gpu
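Based on this summary, the corrected provider entry in examples/run.yaml presumably looks something like the sketch below. Only provider_type and dpo_output_dir are confirmed by the PR; the provider_id and any other config fields shown are illustrative and may differ from the actual file:

```yaml
providers:
  post_training:
  - provider_id: huggingface            # illustrative id; actual entry may differ
    provider_type: inline::huggingface-gpu   # was inline::huggingface, which fails provider resolution
    config:
      dpo_output_dir: .llama/distributions/ollama   # newly added: where DPO results are saved
```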

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

Thump-thump, I tweak with gentle cheer,
A path for DPO now appears—so clear!
GPUs hum where configs grew,
HuggingFace shifts to a speedier hue.
In burrows of YAML, I hop and rhyme,
Ready to train, one nibble at a time. 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly states that the PR fixes the example run.yaml, which matches the change (updating the provider type and adding a configuration field to resolve the ValueError), so it accurately reflects the main purpose of the changeset.
  • Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.
📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6529c37 and a3960ed.

📒 Files selected for processing (1)
  • examples/run.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: build-pr
  • GitHub Check: e2e_tests
🔇 Additional comments (1)
examples/run.yaml (1)

69-75: No changes required: inline::huggingface-gpu is available: it’s listed in docs/providers.md as a supported inline provider and its dependencies (trl, transformers, peft, datasets, torch) are declared in pyproject.toml.


@asamal4 asamal4 left a comment


LGTM

@tisnik tisnik left a comment


LGTM

@tisnik tisnik merged commit 16ce492 into lightspeed-core:main Sep 25, 2025
18 of 19 checks passed
