Loom traces made with the MiniHF loom. The loom is an inference front end for MiniHF, TogetherAI, OpenAI Completions, and any other API compatible with them. It manages the user's session as a branching tree of diffs on the context, allowing the user to back up, try different versions of a prompt, and generate multiple completions at the same time to find exactly the right thing. In essence, the loom makes weak models stronger by letting the user rejection sample and branch, giving you a preview of what better models would be capable of. MiniLoom is inspired by Janus's loom, but uses a better data structure to store the session. MiniLoom sessions can have their branches extracted as training data for DPO/SPO/IPO/KTO without explicit thumbs up or thumbs down feedback.
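To make the two ideas above concrete, here is a minimal sketch (not MiniLoom's actual schema; the class and function names are hypothetical) of a session stored as a branching tree of context diffs, plus one possible heuristic for mining preference pairs from it: when a node has several completions, treat the branch the user kept extending as implicitly preferred over its abandoned siblings.

```javascript
// Hypothetical sketch of a loom session tree. Each node stores only the
// text it appends to its parent's context (a simplified "diff").
class LoomNode {
  constructor(text, parent = null) {
    this.text = text;       // text this branch adds to the context
    this.parent = parent;
    this.children = [];
  }
  branch(text) {
    const child = new LoomNode(text, this);
    this.children.push(child);
    return child;
  }
  // Reconstruct the full context by walking back to the root.
  render() {
    return (this.parent ? this.parent.render() : "") + this.text;
  }
}

// One possible extraction rule (an assumption, not MiniHF's exact method):
// a sibling with descendants was continued by the user, a leaf sibling was
// abandoned, so pair them as chosen/rejected without explicit feedback.
function preferencePairs(node, pairs = []) {
  const extended = node.children.filter(c => c.children.length > 0);
  const abandoned = node.children.filter(c => c.children.length === 0);
  for (const chosen of extended) {
    for (const rejected of abandoned) {
      pairs.push({
        prompt: node.render(),
        chosen: chosen.text,
        rejected: rejected.text,
      });
    }
  }
  for (const child of node.children) preferencePairs(child, pairs);
  return pairs;
}
```

For example, branching twice from a prompt and then extending only one of the completions yields a single DPO-style triple with the prompt, the extended branch as "chosen", and the abandoned branch as "rejected".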
You may contribute your own sessions by submitting a pull request to this repository. Keep in mind that by submitting, you affirm that you hold all relevant copyrights to your submitted text and release it into the public domain.
To install the loom, do something like the following:
- git clone the MiniHF repository
- cd loom
- npm install
- npm start
You can use Together AI's inference services to loom even if you do not own GPUs. Because these sessions are intended to be used as training data, submitting text generated with Facebook's LLaMa series of models is a violation of their terms of service, which prohibit using the LLaMa series to enhance other language models. For this reason, I suggest using models like Mistral 7B (HuggingFace) (Together), Mixtral (HuggingFace) (Together), and SOLAR-10 (HuggingFace).
If you use the OpenAI Completions API with non-OpenAI models, be sure to change your model name in the sidebar to reflect the model you are actually using; otherwise the trace will record it as code-davinci-002.
Happy weaving!