docs: add AutoGen integration cookbook #88
Conversation
@manojkulkarni123 is attempting to deploy a commit to the Moss Team on Vercel. A member of the team first needs to authorize it.
Thanks for the PR @manojkulkarni123, I have reviewed and added comments.
…feat/autogen-cookbook
Force-pushed from 87766f7 to 3a1eeae
Force-pushed from d71cd6b to 5739b32
Hey @yatharthk2, thank you for the feedback! I have updated the PR to address all of your review comments.
Let me know if this looks good to you!
```python
start_time = time.perf_counter()
opts = QueryOptions(top_k=top_k)
results = await self.moss.query(index_name, query, opts)
end_time = time.perf_counter()
```
The Moss search result type provides the calculated query time; please feel free to refer to the reference below. It might reduce the code complexity of the notebook.
https://docs.moss.dev/docs/reference/js/interfaces/SearchResult
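The suggested simplification might look like the sketch below. The `query` signature and the `time_taken_ms` field are assumptions taken from this thread, and a stub client stands in for the real Moss client so the example is self-contained.

```python
import asyncio
from dataclasses import dataclass, field

# Stub mirroring the SearchResult shape discussed in this review thread;
# the real client is assumed to return a result carrying time_taken_ms.
@dataclass
class SearchResult:
    docs: list = field(default_factory=list)
    time_taken_ms: float = 0.0

async def timed_query(moss, index_name: str, query: str):
    # No perf_counter() bookkeeping: read the latency the engine
    # already measured for this query.
    result = await moss.query(index_name, query)
    return result.docs, result.time_taken_ms

# Minimal fake client so the sketch runs standalone.
class FakeMoss:
    async def query(self, index_name, query):
        return SearchResult(docs=[f"hit for {query!r}"], time_taken_ms=1.8)

docs, latency_ms = asyncio.run(
    timed_query(FakeMoss(), "returns_index", "refund window")
)
print(docs, latency_ms)
```

With the real client, the two `perf_counter()` calls and the subtraction disappear entirely.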
```python
await load_index("product_catalog.json", "product_catalog_index")
await load_index("shipping_policies.json", "shipping_policies_index")
await load_index("return_policies.json", "returns_index")
```
load_index is just create_index in disguise, and all your queries are going to the Moss cloud; this is also the reason you are seeing higher latency. Please feel free to refer to the Python docs at https://docs.moss.dev/docs/reference/js/classes/MossClient to check the loadIndex() definition. Can you please use local searching?
That'll definitely reduce code complexity; I'll use SearchResult.time_taken_ms.
And yes, load_index is create_index in disguise, my bad. I'll use loadIndex() directly to enable local search, which should significantly reduce latency.
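The switch to local search described here might look like the following sketch. The `load_index`/`query` names follow this thread, not a verified client API, and the stub only illustrates the call pattern: load the index into local memory once, then serve every query in the agent loop without a network round-trip.

```python
import asyncio

# Illustrative stub: load_index() is assumed (per the review) to cache
# the cloud index in local RAM so later queries skip the HTTP hop.
class StubMossClient:
    def __init__(self):
        self.local_indexes = set()

    async def load_index(self, index_name):
        self.local_indexes.add(index_name)

    async def query(self, index_name, query):
        source = "local" if index_name in self.local_indexes else "cloud"
        return {"query": query, "served_from": source}

async def main():
    moss = StubMossClient()
    # Load once up front; every subsequent query in the loop stays local.
    await moss.load_index("product_catalog_index")
    return await moss.query("product_catalog_index", "blue widgets")

result = asyncio.run(main())
print(result["served_from"])
```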
Is this still being worked on? Should I close it?
Hey @yatharthk2, I'll submit a PR soon; sorry, I was caught up with some work.
…ub-millisecond (< 10ms) retrieval inside multi-agent orchestration loops.
- Replaced manual perf_counter wrappers with Moss's native SearchResult.time_taken_ms property for elegant latency tracking.
- Expanded the JSON datasets (Product, Shipping, Returns) with more data points.
Force-pushed from c0ec90c to 67890ee
🚩 No tests included, unlike the langchain cookbook example
The langchain cookbook at examples/cookbook/langchain/ includes a test_integration.py with unit tests and a pyproject.toml. This autogen example has no tests. CONTRIBUTING.md states "If you've added code that should be tested, add tests." However, the dspy example (examples/cookbook/dspy/) also has no tests, so this is consistent with at least one existing pattern. For a notebook-only cookbook, the lack of tests is arguably acceptable, but adding at least mock-based tests for the tool functions (like the langchain example does) would improve reliability.
Hey @yatharthk2, sorry for the delay, was caught up with work! I've updated the PR based on your review feedback. Here's a summary of the changes:
Please let me know if there's anything else you'd like me to change.
…ime_taken_ms, unused imports, and the latencies list by adding a try/finally which will run latencies.clear(), preventing stale latency values
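The fix described in that commit message amounts to the standard try/finally pattern for resetting shared state. A minimal, self-contained version (the names here are illustrative, not taken from the notebook):

```python
latencies: list[float] = []

def summarize_run(samples):
    """Record latencies for one benchmark run and report the average."""
    try:
        latencies.extend(samples)
        return sum(latencies) / len(latencies)
    finally:
        # Runs even if the body raises, so a later benchmark run never
        # averages in stale values left over from this one.
        latencies.clear()

avg = summarize_run([4.0, 6.0])
print(avg, latencies)
```

Because `finally` executes on both the success and error paths, the list is guaranteed empty before the next run starts.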
```python
async def setup_indices():
    async def create_index_from_file(filename, index_name):
        with open(f"data/{filename}", "r") as f:
            data = json.load(f)
        docs = []
        for i, item in enumerate(data):
            text_repr = ", ".join([f"{k}: {v}" for k, v in item.items()])
            doc_id = item.get("id", f"doc_{i}")
            string_metadata = {k: str(v) for k, v in item.items()}
```
Can you please add a check if the index already exists?
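One way to make the setup idempotent is sketched below. `list_indexes()` is a hypothetical method, not confirmed anywhere in this thread; if the real Moss client exposes a different existence check (or only raises on duplicates), a try/except around `create_index` would serve the same purpose.

```python
import asyncio

async def create_index_if_missing(moss, index_name, docs):
    # list_indexes() is a guess at the client surface; swap in whatever
    # existence check the real Moss client actually exposes.
    existing = await moss.list_indexes()
    if index_name in existing:
        return False  # already there, skip the upload
    await moss.create_index(index_name, docs)
    return True

# Minimal stub so the sketch runs standalone.
class StubMoss:
    def __init__(self):
        self.indexes = {}

    async def list_indexes(self):
        return list(self.indexes)

    async def create_index(self, name, docs):
        self.indexes[name] = docs

moss = StubMoss()
first = asyncio.run(create_index_if_missing(moss, "returns_index", ["doc1"]))
second = asyncio.run(create_index_if_missing(moss, "returns_index", ["doc1"]))
print(first, second)
```

The second call becomes a no-op, so re-running the notebook's setup cell never re-uploads data.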
From the notebook's markdown cell:

> The `moss_client.load_index()` command is the secret sauce for agent loops. It forces the cloud index into your machine's local RAM. Because agent loops require sequential data retrieval, using a standard cloud database adds 200-500ms of HTTP network overhead sequentially. Local cached execution allows Rust to perform vector dot-products in microseconds, bridging the gap between agent logic and data retrieval.
Can you please remove "Local cached execution allows Rust to perform vector dot-products in microseconds, bridging the gap between agent logic and data retrieval"?
Overall looks great. Added 2 small comments to be addressed; will approve after this.
…oved cookbook explanation
Hey @yatharthk2, thanks for the review! I've addressed your points:
If you are satisfied with these changes, then please do approve it.
yatharthk2 left a comment:
Thanks for the PR, looking forward to more collabs :)
Hey @yatharthk2, thank you for being so patient with me!
## Pull Request Checklist

- [x] I have read the [CONTRIBUTING](CONTRIBUTING.md) guide.
- [x] I have updated the documentation (if applicable).
- [x] My code follows the style guidelines of this project.
- [x] I have performed a self-review of my own code.
- [ ] I have added tests that prove my fix is effective or that my feature works.
- [ ] New and existing unit tests pass locally with my changes.

## Description

This PR adds a comprehensive cookbook example demonstrating how to use Moss as a sub-10ms retrieval tool for AutoGen multi-agent conversations, fulfilling the requirements of the linked issue.

### Deliverables Addressed:

- **`docs/examples/autogen.md`**: Created a full working example in Markdown format as requested.
- **Tested end-to-end with current AutoGen version**: The example was built and verified end-to-end using the latest **AutoGen v0.4.x** (`autogen-agentchat`) API, utilizing `AssistantAgent` instead of the older `ConversableAgent` mentioned in the original issue description. This aligns the cookbook with AutoGen's current recommended patterns and ensures the documentation remains relevant.

### Additional Value:

- **Runnable Notebook**: In addition to the requested markdown guide, I have included a runnable Jupyter notebook (`moss_autogen.ipynb`) to match the repository's established style for cookbooks.

**Files Added:**

- `docs/examples/autogen.md`
- `examples/cookbook/autogen/moss_autogen.ipynb`

Fixes usemoss#80

## Type of Change

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] This change requires a documentation update