
Proposal: DALM connectors for popular frameworks #15

Closed
EricLiclair opened this issue Sep 28, 2023 · 17 comments
Comments

@EricLiclair
Contributor

EricLiclair commented Sep 28, 2023

While exploring and building with prominent frameworks, I've been wondering how DALMs would integrate into them.
The Arcee client helps execute DALM routines in its own setting, but should we consider implementing and maintaining connectors for prominent frameworks like LangChain, semantic-kernel, DSPy, etc.?

People experiment with a couple of models and frameworks and try out what suits them best. Featuring connectors may lead to more active adoption.

What's proposed?

We could implement our connectors either in arcee-python or by opening PRs against these frameworks. IMO the former is easier to maintain while we await merges of the connectors into the main repositories.

What's expected?

How these connectors should be developed and structured is subject to discussion, as is whether to implement a connector for a particular framework at all.
With an OOTB connector in place, one could use Arcee's client to build applications using DALMs.

A basic example usage for Microsoft's semantic-kernel:

# install arcee-python and semantic-kernel, e.g.
# !pip install arcee-python semantic-kernel

import arcee_python as arcee
import semantic_kernel as sk

# import dalms
from arcee_python.connectors.semantic_kernel import ArceeTextCompletion # dalm
# or
from semantic_kernel.connectors.ai.arcee_ai import ArceeTextCompletion # dalm

kernel = sk.Kernel()

# Prepare Arcee service using credentials stored in the `.env` file
api_key, org_id = arcee.settings_from_dot_env() # config
# or
api_key, org_id = sk.arceeai_settings_from_dot_env() # config

kernel.add_text_completion_service(
    "arcee", ArceeTextCompletion("DPT-PubMed-7b", api_key, org_id)
)

# Wrap your prompt in a function
prompt = kernel.create_semantic_function(
    """
    Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?
    """.strip()
)

# Run your prompt
print(prompt())
# => Based on the provided context, AI-driven music therapy has the potential to contribute to the rehabilitation of patients with disorders of consciousness. The use of AI agents in robotic therapy has already shown promising results in stroke rehabilitation, indicating that AI can assist in enhancing motor functions. Additionally, evidence-based neurorehabilitation interventions that incorporate principles of activity-dependent plasticity and motor learning have been developed, which can be further enhanced by AI-driven music therapy. However, it is important to note that the specific effectiveness and implementation of AI-driven music therapy in the rehabilitation of patients with disorders of consciousness would require further research and clinical trials.

A basic example usage for langchain:

# install arcee-python and langchain, e.g.
# !pip install arcee-python langchain

import arcee_python as arcee

# import dalms
from arcee_python.connectors.langchain import ArceeAI # dalm
# or
from langchain.llms import ArceeAI

# ===== use as single model =====
llm = ArceeAI("DPT-PubMed-7b", api_key=api_key, org_id=org_id)
prompt = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?"

print(llm(prompt))
# => Based on the provided context, AI-driven music therapy has the potential to contribute to the rehabilitation of patients with disorders of consciousness. The use of AI agents in robotic therapy has already shown promising results in stroke rehabilitation, indicating that AI can assist in enhancing motor functions. Additionally, evidence-based neurorehabilitation interventions that incorporate principles of activity-dependent plasticity and motor learning have been developed, which can be further enhanced by AI-driven music therapy. However, it is important to note that the specific effectiveness and implementation of AI-driven music therapy in the rehabilitation of patients with disorders of consciousness would require further research and clinical trials.

# ===== run in chain =====
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["disease"],
    template="Can AI-driven music therapy contribute to the rehabilitation of patients with {disease}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("disorders of consciousness"))
# => Based on the provided context, AI-driven music therapy has the potential to contribute to the rehabilitation of patients with disorders of consciousness. The use of AI agents in robotic therapy has already shown promising results in stroke rehabilitation, indicating that AI can assist in enhancing motor functions. Additionally, evidence-based neurorehabilitation interventions that incorporate principles of activity-dependent plasticity and motor learning have been developed, which can be further enhanced by AI-driven music therapy. However, it is important to note that the specific effectiveness and implementation of AI-driven music therapy in the rehabilitation of patients with disorders of consciousness would require further research and clinical trials.

@Jacobsolawetz @Ben-Epstein your thoughts on this? 🤔

Resources:

semantic-kernel

  1. Base connector client

langchain

  1. Base model
@Jacobsolawetz
Contributor

Jacobsolawetz commented Sep 28, 2023

@EricLiclair absolutely!

The LangChain integration is the top of the list for me. I have talked with @hwchase17 about this, so once we get something properly workshopped he is likely to accept!

I do think `from langchain.llms import Arcee` makes the most sense for the DALM models.

We should also have something built on BaseRetriever for people who only want to use the retrieval element. I think that will be quite common as well.
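To make that concrete, here is a minimal sketch of a retriever-only connector, assuming LangChain's `BaseRetriever` interface of that era (`_get_relevant_documents` plus a run manager); the `client.retrieve` call and the constructor fields are placeholders, not a final API:

```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document


class ArceeRetriever(BaseRetriever):
    """Hypothetical retriever-only connector for Arcee DALMs."""

    client: Any = None  #: :meta private:
    model: str
    arcee_api_key: Optional[str] = None

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # `client.retrieve` is assumed here; the real request shape is TBD
        contexts = self.client.retrieve(query=query)
        return [Document(page_content=c) for c in contexts]
```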

Let me know if you'd like to talk more with me and @Ben-Epstein, or maybe it makes more sense to get ink on paper first and work from there!

@EricLiclair
Contributor Author

EricLiclair commented Oct 2, 2023

Hey @Jacobsolawetz.
It'd be good if we first document how we plan to use DALMs in langchain, both as an LLM and as a retriever. This will help us outline the necessary changes and track our progress effectively. And sure, we can discuss this further.
I spent some time going through the langchain repository. There are at least a few points I want to mention:

  1. pydantic: v2 has some breaking changes/migrations, and most packages pin their own pydantic version range. In fact, importing arcee in Google Colab throws an error; maybe we should either use v1.x instead, or at least add an issue to track this. We use a pydantic validator here.
(screenshot of the Colab import error omitted)

  2. python version: langchain ships for python >=3.8.1,<4.0; we'll need to test and publish arcee for python >=3.8.1,<3.10 as well (range defined here).

  3. config: we'll need to modify this logic in arcee's __init__.py and add a method to set arcee's config from a key provided in arguments (triggered from langchain's implementation).
    Maybe like,

# config.py
from typing import Optional


class Config:
    ARCEE_API_KEY: Optional[str] = None  # one class attribute per env var

    def __init__(self):
        # resolve each variable on startup using the existing lookup logic
        self.ARCEE_API_KEY = self.get_conditional_configuration_variable("ARCEE_API_KEY")

    @classmethod
    def get_conditional_configuration_variable(cls, key, default=None):
        ...  # previous logic

    @classmethod
    def set_config_var(cls, key, val):
        # only allow overriding known configuration attributes
        if hasattr(cls, key):
            setattr(cls, key, val)


config = Config()

# __init__.py
...
from arcee.config import config
...
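With a set-able config like this, the langchain-side validator could (hypothetically) push a user-supplied key into arcee's config before the first API call:

```python
# hypothetical call site inside langchain's validate_environment
from arcee.config import config

config.set_config_var("ARCEE_API_KEY", values.get("arcee_api_key"))
```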

Here's a sample implementation for Arcee DALMs:

# langchain.llms.arcee.py

from langchain.llms.base import LLM
from langchain.pydantic_v1 import Extra, root_validator
from langchain.schema.output import GenerationChunk

from langchain.utils import (
    check_package_version,
    get_from_dict_or_env,
)
from typing import Dict, Any, Optional, List, Mapping
from langchain.callbacks.manager import (
    CallbackManagerForLLMRun,
)


class Arcee(LLM):
    """Arcee's domain adapted language models (DALMs)."""

    client: Any = None  #: :meta private:

    model: str
    """Name of the domain adapted LM to use."""

    arcee_api_key: Optional[str] = None
    """API key for Arcee."""

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "arcee"

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"model": self.model}

    @root_validator
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate import and environment variables."""

        values["arcee_api_key"] = get_from_dict_or_env(
            values, "arcee_api_key", "ARCEE_API_KEY"
        ) # raises an exception only if key is nowhere found;
        try:
            import arcee
            check_package_version("arcee-py", gte_version="0.0.17")
            # here, if api_key is passed as a parameter we'll need a modifiable (or set-able) config
            # from arcee import config
            # config.set_config_var("ARCEE_API_KEY", values.get("arcee_api_key"))


            values["client"] = arcee.get_dalm(
                name=values.get("model"),
            )

        except ImportError:
            raise ImportError(
                "Please install arcee-py to use the Arcee language model."
            )
        return values

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        r"""Call out to Arcee's generation endpoint.

        Args:
            prompt: The prompt to pass into the model.
        """
        size = kwargs.get("size", None)
        filters = kwargs.get("filters", None)
        response = self.client.generate(query=prompt, size=size, filters=filters)
        return response


# used like,
!pip install langchain arcee-py

from langchain.llms import Arcee

arcee = Arcee(model="DPT-PubMed-7b", arcee_api_key="DUMMY_KEY") # call with api key as parameter

Alternate approach

❗️If we want to avoid all of the above, we can simply write a request handler to format the API requests and avoid importing arcee in langchain altogether. Reference: langchain.llms.amazon_api_gateway.py


Will try some retrievers and think about how ArceeRetriever should be used.

@Jacobsolawetz
Contributor

@EricLiclair ok very nice!

  1. I am on board with pydantic v1.x - cc @Ben-Epstein to weigh in as well
  2. Makes sense on expanding our python versions - I think that should test out alright?
  3. Sounds good to me to expand our config setting to accommodate that behavior

The sample implementation looks pretty good to me!

For the retriever only, something like

from langchain.retrievers import ArceeRetriever
arcee_retriever = ArceeRetriever(model="DALM-patent", arcee_api_key="DUMMY")

arcee_retriever.get_relevant_documents(query)

Interestingly enough - I expect to be at an in-person event that Harrison will also be at next Monday, so if we have something by then, perhaps I could get him to merge the PR there

Thanks for looking into this @EricLiclair!

@Ben-Epstein
Contributor

Ben-Epstein commented Oct 2, 2023

We don't need to go to an older version of pydantic, as pydantic v2 ships with v1 for backwards compatibility, so I can handle that fix.

For the config, I don't think we need to make any changes, but I'll have to test - see the Gradient example below. You simply don't import arcee until the env vars have been set. We may not need to import arcee at all and can just leverage the requests library directly, since the API is so simple.
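To sketch the requests-only idea (the base URL is the one from Arcee's API docs, but the route name, payload shape, and response field here are assumptions, not the documented API):

```python
import requests

ARCEE_API_URL = "https://api.arcee.ai"  # per Arcee's API docs


def generate(model_id: str, prompt: str, api_key: str) -> str:
    """Hypothetical direct call to Arcee's generation endpoint."""
    response = requests.post(
        f"{ARCEE_API_URL}/v2/generate",  # illustrative route, not confirmed
        headers={"Authorization": f"Bearer {api_key}"},  # assumed auth scheme
        json={"model_id": model_id, "query": prompt},  # assumed payload
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response field
```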

For the generator, we should use the same approach as GradientAI. Very straightforward, they don't even install their python sdk:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/gradient_ai.py

And something similar to the elasticsearch one for the retriever https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/elastic_search_bm25.py

@Ben-Epstein
Contributor

@EricLiclair would you like to take this up, or shall I?

@EricLiclair
Contributor Author

EricLiclair commented Oct 3, 2023

@Jacobsolawetz

  2. Makes sense on expanding our python versions - I think that should test out alright?

Yes, it should test out alright. I'll add tox (or use invoke itself) as a test step and see if it works; otherwise I'll just validate on 3.8.x in a venv and raise a PR.


@Ben-Epstein

We don't need to go to an older version of pydantic, as pydantic v2 ships with v1 for backwards compatibility, so I can handle that fix.

I agree - and sure, requesting you to take this one up 😊. I did check the backward compatibility for v1; the only concern I could think of is handling the import statement (to maintain the namespace). Maybe we'll need a

try:
    from pydantic.v1.<> import <> # this would work if 2.x is added
except ImportError:
    from pydantic import <> # this will work if 1.x is added

Why a try-except? For some reason (I experienced this only in Google Colab; it worked fine in other envs), pydantic defaults to v1.x in the Colab environment even though the package installs v2.x when specified in the toml, and the import from pydantic.v1.<> then fails. I tried this with pip install git+https://<>.arcee/<branch with pydantic>=2>
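For a concrete instance of that pattern (using `root_validator` purely as an example symbol):

```python
try:
    # pydantic 2.x installed: the v1 API lives under the compatibility namespace
    from pydantic.v1 import root_validator
except ImportError:
    # pydantic 1.x installed: the same API sits at the top level
    from pydantic import root_validator
```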


For the config, I don't think we need to make any changes, but I'll have to test. See Gradient below. You simply don't import Arcee until the env vars have been set. We may not need to import Arcee at all, and just leverage the requests library directly since it's so simple.

Agreed, but the env variables are never set if the values are passed as named parameters. I'll need to check the behaviour of the values dict passed to an LLM. But as far as I could trace, get_from_dict_or_env doesn't actually set an env var: it checks whether the value for the key is in the passed dict, or in the env. I'll trace values back and check whether we set those values in the env.
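For reference, `get_from_dict_or_env` behaves roughly like this (a simplified re-implementation for illustration, not LangChain's exact source):

```python
import os
from typing import Any, Dict, Optional


def get_from_dict_or_env(
    data: Dict[str, Any], key: str, env_key: str, default: Optional[str] = None
) -> str:
    """Read from the dict first, then the environment; never writes to os.environ."""
    if data.get(key):
        return data[key]
    if os.environ.get(env_key):
        return os.environ[env_key]
    if default is not None:
        return default
    raise ValueError(
        f"Did not find {key}; pass it as a named parameter or set {env_key}."
    )
```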
And sure, not importing arcee at all seems like a good way to keep the integration simple and straightforward. We'll need to add the validations on the filters and other params in the environment validation itself. (👍🏻 on this)


For the generator, we should use the same approach as GradientAI. Very straightforward, they don't even install their python sdk: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/gradient_ai.py

And something similar to the elasticsearch one for the retriever https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/elastic_search_bm25.py

Thanks for pointing these out. Will refer to these.


I'll take these up:

  • an implementation of Arcee as an LLM, referring to gradient_ai's implementation (for now without importing arcee)
  • a similar implementation for ArceeRetriever, referring to the elasticsearch implementation (for now without importing arcee)
  • setting up tox (or using invoke itself); if not, testing arcee on 3.8.x [today]

Meanwhile, I'll wait for your comments and continue with the above.

@Ben-Epstein
Contributor

Sounds good.

If we take the route without including the Arcee repo, then we don't need to make any changes (python 3.8, tox, pydantic, etc)

I think we should start with this approach, because it's the bare essentials and won't require any compatibility checks. Simply make the API requests with the user provided token.

So let's skip the python 3.8 testing and move straight to the other 2 tasks.

Happy to help with either if you need a reference, let me know. But if you feel comfortable, go for it!

@EricLiclair
Contributor Author

True, I hadn't thought of this - thanks for the heads up. And sure, I figured I can use invoke to download pyenv and the required Python versions, run the same test commands against those versions, and then clean up by removing pyenv. No need for tox. Anyway,

for now I'll take up the implementations. If time permits, or later on, we can test expanding the python versions. 👍🏻

@EricLiclair
Contributor Author

@Ben-Epstein
Where should I raise the PR - directly against langchain's repository, or against a fork in the Arcee-ai org?

Meanwhile, here's a pr on my forked langchain repo:
https://github.com/EricLiclair/langchain/pull/2

@Ben-Epstein
Contributor

Ben-Epstein commented Oct 6, 2023

@EricLiclair I think you should raise it directly into langchain, just like langchain-ai/langchain#10800

I think the way we've done it is ideal because it doesn't introduce any new dependencies, so it's easier to integrate with them

@Jacobsolawetz
Contributor

@EricLiclair curious whether there are any updates on your end here?

@EricLiclair
Contributor Author

EricLiclair commented Oct 9, 2023

Hi @Jacobsolawetz
cc: @Ben-Epstein

I worked a little on the docs and attended to the lint and format errors.
I do not have contexts to upload and test the changes with, so I'm requesting suggestions wherever necessary. I've added rough docs for both the llm and the retriever.

For tests, I didn't see anything solid to test (most of it is network calls and private methods).
I do think at least one branch covering most of these methods should be tested. But then again, all the primary methods require network calls, and mocking all the flows might not be a good choice?
Suggestions are appreciated.
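If we do want one covered branch, one option is to stub the network boundary and assert only on the parsing logic - a rough pytest sketch, assuming the retriever delegates HTTP calls to a client with a `_make_request` helper (as in the retrieve snippet further down) and that construction itself doesn't hit the network:

```python
from langchain.retrievers import ArceeRetriever


def test_retriever_parses_documents(monkeypatch):
    # if the constructor validates against the API, that call would need stubbing too
    retriever = ArceeRetriever(model="DPT-PubMed-7b", arcee_api_key="DUMMY")

    # stub the network boundary so only response parsing is exercised
    fake_response = {"documents": [{"page_content": "some retrieved context"}]}
    monkeypatch.setattr(
        retriever.client, "_make_request", lambda *args, **kwargs: fake_response
    )

    docs = retriever.get_relevant_documents("a query")
    assert docs[0].page_content == "some retrieved context"
```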

Here's the pr - langchain-ai/langchain#11579

@Jacobsolawetz
Contributor

Hey @EricLiclair - that is so awesome!

I got a chance to run through the demos. For generation, I fixed a bug with the check_status route where we weren't returning model_id - I think we might have changed our API for that. Langchain generation is functioning now: https://colab.research.google.com/drive/1VfMGG-MB2i1emv6qfTi1PXhDIGSQ9Dfy#scrollTo=1WePNxhkRWr3

For retrieval, I'm wondering, did you ever see that working?

Here's a notebook where I receive a retrieval error - https://colab.research.google.com/drive/1x9Df9sVAVVWZMBrz8CSJJJ7bBwUITUHn#scrollTo=k6DUibw-cjYq

I think we need to parse the retrieval response differently here in langchain:

    def retrieve(
        self,
        query: str,
        **kwargs: Any,
    ) -> List[Document]:
        """Retrieve {size} contexts with your retriever for a given query

        Args:
            query: Query to submit to the model
            size: The max number of context results to retrieve. Defaults to 3.
            (Can be less if filters are provided).
            filters: Filters to apply to the context dataset.
        """

        response = self._make_request(
            method="post",
            route=ArceeRoute.retrieve,
            body=self._make_request_body_for_models(
                prompt=query,
                **kwargs,
            ),
        )
        ### different parsing needed here ###
        print(response)
        return [Document(**doc) for doc in response["documents"]]

@Jacobsolawetz
Contributor

BTW @EricLiclair sent you a linkedin DM :D

@EricLiclair
Contributor Author

Hey @Jacobsolawetz, I had little idea of what response we would receive from the retrieval endpoint; most of what I did assumed arcee's API handler as the source of reference. But since you had mentioned earlier that the API was still changing, I did expect to receive a different response, and as you pointed out, it needs a slightly different parser.
Thanks for the notebook - I'll drop a fix for this.

@EricLiclair
Contributor Author

Hey @Jacobsolawetz, does this conversion look good for parsing the document retrieval response?
cc: @Ben-Epstein

class ArceeDocumentSource(BaseModel):
    """Source of an Arcee document."""
    document: str
    name: str
    id: str


class ArceeDocument(BaseModel):
    """Arcee document."""
    index: str
    id: str
    score: float
    source: ArceeDocumentSource


class ArceeDocumentAdapter:
    """Adapter for Arcee documents. Handles conversions between `ArceeDocument` and `Document` object."""

    
    @classmethod
    def adapt(cls, arcee_document: ArceeDocument) -> Document:
        """Adapt an `ArceeDocument` to a langchain's `Document` object."""
        return Document(
            page_content=arcee_document.source.document,
            metadata={
                # arcee document; source metadata
                "name": arcee_document.source.name,
                "source_id": arcee_document.source.id,

                # arcee document metadata
                "index": arcee_document.index,
                "id": arcee_document.id,
                "score": arcee_document.score
            }
        )

...

class ArceeWrapper:
    def retrieve(self, query: str, **kwargs: Any) -> List[Document]:
        ...
        return [ArceeDocumentAdapter.adapt(ArceeDocument(**doc)) for doc in response["results"]]
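For clarity, this is what the adaptation does to a single (made-up) item from `response["results"]`:

```python
raw = {
    "index": "pubmed",
    "id": "doc-123",
    "score": 0.87,
    "source": {
        "document": "Music therapy has shown promising results in ...",
        "name": "pubmed-article-456",
        "id": "src-456",
    },
}

doc = ArceeDocumentAdapter.adapt(ArceeDocument(**raw))
assert doc.page_content == raw["source"]["document"]
assert doc.metadata["score"] == 0.87
```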

baskaryan pushed a commit to langchain-ai/langchain that referenced this issue Oct 25, 2023
- **Description:** Response parser for arcee retriever, 
- **Issue:** follow-up pr on #11578 and
[discussion](arcee-ai/arcee-python#15 (comment)),
  - **Dependencies:** NA

This PR implements a parser for the response from ArceeRetriever to
convert to langchain `Document`. This closes the loop of generation and
retrieval for Arcee DALMs in langchain.

The reference for the response parser is
[api-docs:retrieve](https://api.arcee.ai/docs#/v2/retrieve_model)

Attaching screenshot of working implementation:
(screenshot omitted; API key redacted)

---
Successful tests, lints, etc.
```shell
Re-run pytest with --snapshot-update to delete unused snapshots.
==================================================================================================================== slowest 5 durations =====================================================================================================================
1.56s call     tests/unit_tests/schema/runnable/test_runnable.py::test_retrying
0.63s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream
0.33s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_stream_iterator_input
0.30s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream_iterator_input
0.20s call     tests/unit_tests/indexes/test_indexing.py::test_cleanup_with_different_batchsize
======================================================================================================= 1265 passed, 270 skipped, 32 warnings in 6.55s =======================================================================================================
[ "." = "" ] || poetry run black .
All done! ✨ 🍰 ✨
1871 files left unchanged.
[ "." = "" ] || poetry run ruff --select I --fix .
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run black . --check
All done! ✨ 🍰 ✨
1871 files would be left unchanged.
[ "." = "" ] || poetry run mypy .
Success: no issues found in 1868 source files
poetry run codespell --toml pyproject.toml
poetry run codespell --toml pyproject.toml -w
```

Co-authored-by: Shubham Kushwaha <shwu@Shubhams-MacBook-Pro.local>
schadem pushed a commit to schadem/langchain that referenced this issue Oct 27, 2023
@Jacobsolawetz
Contributor

@EricLiclair apologies for the delay - this looks great to me and very nice work!!

hoanq1811 pushed a commit to hoanq1811/langchain that referenced this issue Feb 2, 2024