
Conversation

@srogmann (Contributor) commented Sep 26, 2025

Close #13552

This PR adds a download action in Svelte, mirroring the implementation from the previous React release (see #13552).
The filename now includes a prefix derived from the beginning of the conversation text.
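
For illustration, deriving such a prefix could look roughly like the following TypeScript sketch (hypothetical names, not the exact code in this PR):

```ts
// Hypothetical sketch: build a filesystem-safe filename whose prefix
// comes from the beginning of the conversation text.
function conversationFilename(conversationText: string): string {
  const prefix = conversationText
    .slice(0, 40)                   // take the start of the conversation
    .replace(/[^a-zA-Z0-9]+/g, '_') // map unsafe characters to underscores
    .replace(/^_+|_+$/g, '');       // trim leading/trailing underscores
  return `${prefix || 'conversation'}.json`;
}
```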

@mofosyne (Collaborator) left a review comment

Looks reasonable. Matches PR description. Note that it outputs the conversation as JSON.

@srogmann (Contributor, Author)

There was a JSON download in the previous React implementation:

const downloadConversation = () => {

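In the browser, such a JSON download is typically implemented with a Blob and a temporary object URL. A minimal sketch, assuming a conversation object and a target filename are already in scope (illustrative names, not the webui's actual code):

```ts
// Minimal sketch of a browser-side JSON download. The `conversation`
// object and `filename` string are assumed inputs; the names here are
// illustrative, not the actual webui identifiers.
function downloadAsJson(conversation: unknown, filename: string): void {
  // Serialize the conversation and wrap it in a Blob with a JSON MIME type.
  const blob = new Blob([JSON.stringify(conversation, null, 2)], {
    type: 'application/json',
  });

  // Create a temporary object URL and click an anchor element to
  // trigger the browser's file-save flow.
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();

  // Release the object URL once the download has been handed off.
  URL.revokeObjectURL(url);
}
```
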
@allozaur (Collaborator)

Hey, @srogmann, thanks for this! I was thinking that, in addition to this, we could add importing/uploading conversations as well? Let me know if you would like to add this or would rather have me take it over.

@srogmann (Contributor, Author)

Hi @allozaur, I was thinking about porting master...srogmann:llama.cpp:feature/import_export_all to Svelte.

@srogmann (Contributor, Author)

I've added import and export functionality for all conversations. However, import currently only supports importing all conversations at once. Importing individual conversations is still missing (the import could detect if the JSON file contains just one conversation and then import only that one).
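
Such detection could look roughly like this hypothetical sketch (the `Conversation` shape is an assumption, not taken from the PR):

```ts
// Hypothetical sketch: normalize an imported JSON file to a list of
// conversations, whether it contains one conversation or many.
interface Conversation {
  id: string;
  messages: unknown[];
}

function parseImportedJson(json: string): Conversation[] {
  const data = JSON.parse(json);
  // A full export is an array of conversations; a single conversation
  // is a plain object, so wrap it to share one import code path.
  return Array.isArray(data) ? data : [data as Conversation];
}
```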

@allozaur (Collaborator)

Hey! I will take a closer look at these changes this week. I'll keep you posted 🙂

@allozaur self-assigned this Sep 29, 2025
@ServeurpersoCom (Collaborator)

Looks great! This functionality is really a must-have for the web UI.

@allozaur (Collaborator) commented Oct 3, 2025

@srogmann could you please resolve the conflict? After that I'd love to take a look and test it out :)

@srogmann force-pushed the feature/svelte_conv_download branch from 5e5c06a to 9a28dc1 on October 4, 2025 12:16
@srogmann (Contributor, Author) commented Oct 4, 2025

@allozaur Conflicts are now resolved with a rebase, ready for your review and feedback!

ServeurpersoCom added a commit to ServeurpersoCom/llama.cpp that referenced this pull request Oct 4, 2025
@allozaur (Collaborator) commented Oct 6, 2025

@srogmann I've pushed a PR with small UX improvements (srogmann/pull/1). Please review it, and if all is good, let's first merge it into your branch; then we'll have a green light to merge this PR :)

UX Improvements for Export/Import feature (srogmann/pull/1) was merged into this branch.
@allozaur (Collaborator) commented Oct 6, 2025

@srogmann please just add the static build output to this PR and we're good to go :)

@srogmann (Contributor, Author) commented Oct 7, 2025

@allozaur The static build output has been updated.

@allozaur merged commit 4e0388a into ggml-org:master on Oct 7, 2025 (14 checks passed)
@bughunter2 commented Oct 7, 2025

I really like the new import feature. This allows me to keep my chat sessions cleaned up and still resume a chat later on by importing a JSON that I exported perhaps weeks or months earlier. I just built llama.cpp b6709 from source and the first impression is good.

Edit: Reasoning output isn't shown in the web UI when using popular models like GPT-OSS and Qwen3, but I suppose the web UI developers are aware of this(?)

anyshu pushed a commit to anyshu/llama.cpp that referenced this pull request Oct 10, 2025
* master: (113 commits)
  webui: updated the chat service to only include max_tokens in the req… (ggml-org#16489)
  cpu : optimize the ggml NORM operation (ggml-org#15953)
  server : host-memory prompt caching (ggml-org#16391)
  No markdown in cot (ggml-org#16483)
  model-conversion : add support for SentenceTransformers (ggml-org#16387)
  ci: add ARM64 Kleidiai build and test support (ggml-org#16462)
  CANN: Improve ACL graph matching (ggml-org#16166)
  kleidiai: kernel interface refactoring (ggml-org#16460)
  [SYCL] refactor soft_max, add soft_max_back (ggml-org#16472)
  model: EmbeddingGemma Adding Support for SentenceTransformers Dense Modules (ggml-org#16367)
  refactor: centralize CoT parsing in backend for streaming mode (ggml-org#16394)
  Disable CUDA host buffers on integrated GPUs (ggml-org#16308)
  server : fix cancel pending task (ggml-org#16467)
  metal : mark FA blocks (ggml-org#16372)
  server : improve context checkpoint logic (ggml-org#16440)
  ggml webgpu: profiling, CI updates, reworking of command submission (ggml-org#16452)
  llama : support LiquidAI LFM2-MoE hybrid model (ggml-org#16464)
  server : add `/v1/health` endpoint (ggml-org#16461)
  webui : added download action (ggml-org#13552) (ggml-org#16282)
  presets : fix pooling param for embedding models (ggml-org#16455)
  ...
Successfully merging this pull request may close: Misc. bug: missing messages in JSON export via llama-server web UI