
Conversation

@hanishkvc
Contributor

Updated server/public_simplechat additionally with an initial go at a simple-minded, minimal markdown-to-html logic, so that if the AI model outputs markdown text instead of plain text, the user gets a basic formatted view of the same. If things don't seem ok, the user can disable markdown processing from the settings in the UI.

Look into the previous PR #17451 in this series for details wrt the other features added to tools/server/public_simplechat,
like peeking into reasoning, working with vision models, as well as built-in support for a bunch of useful tool calls on the client side with minimal to no setup.

All features (except for pdf, which needs the pypdf dep) are implemented internally without depending on any external libraries, and in turn should fit within 50KB compressed. Created using pure html+css+js in general, with python additionally used for simpleproxy, to bypass the cors++ restrictions in the browser environment for direct web access.

Moved it into Me->tools, so that the end user can modify the same as
required from the settings UI.

TODO: Currently, if the tool call response is got after the tool call
timed out and the user submitted the default timed-out error response,
the delayed actual response, when it is got, may overwrite any new
content in the user query box; this needs to be tackled.

Now both follow a similar mechanism and do the following

* exit on finding any issue, so that things are in a known
  state from a usage perspective, without any confusion/oversight

* check whether the cmdlineArgCmd/configCmd being processed is a
  known one or not

* check that the value of the cmd is of the expected type

* have a generic flow which can accommodate more cmds in future
  in a simple way

Ensure load_config gets called on encountering --config on the cmdline,
so that the user has control over whether the cmdline or the config file
decides the final value of any given parameter.

Ensure that str type values on the cmdline are picked up directly,
without running them through ast.literal_eval, because otherwise one
would have to ensure throughout the cmdline arg mechanism that the
string quotes are retained for literal_eval.

Have the """ function note/description below def line immidiately
so that it is interpreted as a function description.
Add a config entry called bearer.insecure which will contain a
token used for bearer auth of http requests

Make bearer.insecure and allowed.domains required configs, and
exit the program if they aren't got through the cmdline or config file.
As noted in the comments in the code, this is a very insecure flow
for now.

Next will be adding a proxyAuth field also to tools. The user can
configure the bearer token to send.

Instead of using the shared bearer token as is, hash it with the
current year and use the hash.

Keep the /aum path out of the auth check.

In future the bearer token could be transformed more often, as well as
combined with an additional nonce/dynamic token got from the server
during the initial /aum handshake, a running counter and so on ...

NOTE: All this circus is still not good enough, given that currently
the simpleproxy.py handshakes work over http. However these skeletons
are put in place for the future, if needed.

TODO: There is a once-in-a-blue-moon race when the year transitions
between the client generating the request and the server handling it.
But otherwise year transitions don't matter, because the client always
creates a fresh token, and the server checks for a year change to
generate a fresh token if required.
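
A minimal sketch of the year-hashed token idea on the client side. The helper name and the exact hash/encoding are assumptions for illustration; the actual simplechat/simpleproxy flow may differ.

```js
// Hypothetical sketch: derive a per-year token from the shared bearer secret,
// so the raw secret itself is not sent with every request.
// NOTE: crypto.subtle needs a secure context (https/localhost).
async function yearHashedToken(sharedSecret) {
    let year = new Date().getFullYear();
    let data = new TextEncoder().encode(`${sharedSecret}:${year}`);
    let digest = await crypto.subtle.digest('SHA-256', data);
    return Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, '0')).join('');
}

// Usage: send as `Authorization: Bearer <hash>`; the server recomputes the
// same hash (regenerating it on a year change) and compares.
```
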
Add a new role ToolTemp, which is used to maintain any tool call
response on the client UI side, without submitting it to the server,
i.e. till the user or auto submit triggers the submitting of that tool
call response.

Whenever a tool call response is got, create a ToolTemp role based
message in the corresponding chat session. And don't directly update
the user query input area; rather leave it to the updated SimpleChat
show and the new MultiChatUI chat_show helper, and in turn to whether
the chat session currently active in the UI is the same as the one for
which the tool call response has been received.

TODO: Currently the response message is added to the currently
active chat session, but this needs to be changed by tracking the
chatId/session through the full tool call cycle and then adding
the tool call response to the related chat session, and in turn
updating the UI or not based on whether that chat session is
still the active chat session in the UI, given that the tool call
gets handled in an asynchronous way.

Now when that tool call response is submitted, promote the equivalent
ToolTemp role based message, which should be the last message in the
session's chat history, into a normal tool response message.
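
A minimal sketch of that promotion step, assuming a hypothetical per-session xchat messages array and lower-case role strings; the actual simplechat structures may differ.

```js
// Hypothetical sketch: promote a trailing ToolTemp message into a normal
// Tool message just before it is submitted to the server.
function promote_tooltemp(xchat) {
    let last = xchat[xchat.length - 1];
    if (last && last.role === 'tooltemp') {
        last.role = 'tool';  // now it goes out as a regular tool response
    }
    return last;
}
```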

SimpleChat.show has been updated to take care of showing any
ToolTemp role message in the user query input area.

A newer chat_show helper has been added to MultiChatUI, which takes
care of calling SimpleChat.show, provided chat_show is being requested
for the chat session currently active in the UI, as well as of passing
both the ChatDiv and elInUser. Convert users of SimpleChat.show to use
MultiChatUI.chat_show.

Update the immediate tool call triggering failure and tool call
response timeout paths to use the new ToolTemp and MultiChatUI
based chat show logic.

The actual tool call itself generating errors is already handled
in the previous commit's changes.

Pass the chatId to the tool call, and use the chatId in the got tool
call response to decide which chat session the async tool call response
belongs to, and in turn whether the auto submit timer should be started
if auto is enabled.

This should ensure that tool call responses can be mapped back to
the chat session for which they were triggered.
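
A minimal sketch of that chatId round trip. All the names here (chats, activeChatId, run_tool, add_tooltemp, chat_show, start_autosubmit_timer) are hypothetical stand-ins for the actual simplechat plumbing.

```js
// Hypothetical sketch: tag each async tool call with the chat session that
// triggered it, and route the response back based on that tag.
function trigger_toolcall(chatId, toolcall) {
    run_tool(toolcall).then((result) => {
        let chat = chats[chatId];                // may no longer be the active session
        chat.add_tooltemp(toolcall.id, result);  // store as ToolTemp in the right session
        if (chatId === activeChatId) {
            chat_show(chatId);                   // refresh the UI only if still active
            if (autoSubmitEnabled) start_autosubmit_timer(chatId);
        }
    });
}
```
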
Rather, simplify and make content_equiv provide a relatively
simple and neat representation of the reasoning along with the
content and toolcall, as the case may be.

Also remove the partial new para that I had introduced in the
initial go for reasoning.

Update the existing flow so that the next Tool role message is handled
directly from within it.
Also take care of updating the toolcall UI if needed from within
this.
Fix up the initial skeleton / logic as needed.

Remember that we are working with potentially a subset of the chat
messages from the session, given the sliding window logic of context
management on the client UI side, so fix up the logic to use the right
subset messages array and not the global xchat when deciding whether a
message is the last or the last-but-one, which need special handling
wrt Assistant (with toolcall) and Tool (i.e. response) messages.
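
A minimal sketch of that position check over the windowed subset; msgs here is the hypothetical subset being shown, not the full session history.

```js
// Hypothetical sketch: the last / last-but-one decision has to be made
// against the sliding-window subset (msgs), not the full xchat history.
function is_last(msgs, i) { return i === msgs.length - 1; }
function is_last_but_one(msgs, i) { return i === msgs.length - 2; }

// e.g. only a trailing Assistant-with-toolcall or Tool message should get
// the tool call triggering / tool response submission UI treatment.
```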

Moving the tool call UI setup as well as the tool-call-response-got UI
setup into ChatShow of MultiChatUI ensures that switching between
chat sessions handles the UI wrt the tool call triggering UI and the
tool call response submission UI properly, as needed.

Rather, even when loading a previously auto-saved chat session, if it
had a tool call or tool call response still to be handled, the chat UI
will be set up as needed to continue that session properly.

Also clean up the minimal based showing of chat messages a bit.

And add github.com to the allowed list.

Add a newline between the name and content in the xml representation
of the tool response, so that it is easier to distinguish things.

Add the github, linkedin and apnews domains to allowed.domains for
simpleproxy.py.

Separate out the message UI block into a container containing
a role block and a contents container block.

This will allow theming of these separately, if required.
As part of the same, currently the role has been put to the side
of the message with a vertical text flow.

Also make reasoning easily identifiable in the chat.

Define rules to ensure that chat message contents wrap, so as to
avoid overflowing beyond the size of the screen being viewed.

The style used to place the chat message role with vertically
oriented text adjacent to the actual message content on the side
seems to be creating issues with blank pages in some browsers,
so avoid that styling when one is printing.

Create the DB store.

Try Get and Set operations.

The post back to the main thread is done from the asynchronous paths.

NOTE: given that it has been ages since IndexedDB was last used,
this is a logical implementation done by referring to MDN as needed.
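
A minimal sketch of such an IndexedDB backed get/set, assuming it runs in a worker so that postMessage replies to the main thread; the db/store names and message shapes are hypothetical.

```js
// Hypothetical sketch: open a store and do get/set, replying to the main
// thread from the asynchronous success/error paths (worker context assumed).
let db;
let req = indexedDB.open('simplechat', 1);
req.onupgradeneeded = () => { req.result.createObjectStore('chats'); };
req.onsuccess = () => { db = req.result; };

function db_set(key, value) {
    let tx = db.transaction('chats', 'readwrite');
    tx.objectStore('chats').put(value, key);
    tx.oncomplete = () => postMessage({ op: 'set', key: key, ok: true });
    tx.onerror = () => postMessage({ op: 'set', key: key, ok: false });
}

function db_get(key) {
    let getreq = db.transaction('chats', 'readonly').objectStore('chats').get(key);
    getreq.onsuccess = () => postMessage({ op: 'get', key: key, value: getreq.result });
    getreq.onerror = () => postMessage({ op: 'get', key: key, ok: false });
}
```
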
Rather this won't work; need to refresh on regex, it has been too long.

Rather, using split should be simpler.

However the extraction of the head and body parts, with the separator
line in between marking the transition, should work.

Rather, the separator is blindly assumed and the corresponding line
discarded for now.

Switch to the simpler split based flow.

Include the tr wrt the table head block also.

Add a css entry to try and have the header cell contents' text align
to the left for now, given that there is no border, colour shading or
other distinguishing characteristic wrt the table cells for now.
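
A minimal sketch of that split based table conversion, assuming the usual markdown table shape of a header row, a separator row (blindly discarded) and body rows, all with leading/trailing pipes; the real simplechat helper likely differs in details.

```js
// Hypothetical sketch: convert a markdown table (array of lines) to html.
// lines[0] = header row, lines[1] = |---|---| separator (discarded), rest = body.
function table_to_html(lines) {
    let row_to_cells = (line, tag) => line.split('|').slice(1, -1)
        .map(c => `<${tag}>${c.trim()}</${tag}>`).join('');
    let head = `<thead><tr>${row_to_cells(lines[0], 'th')}</tr></thead>`;
    let body = lines.slice(2)
        .map(l => `<tr>${row_to_cells(l, 'td')}</tr>`).join('');
    return `<table>${head}<tbody>${body}</tbody></table>`;
}
```
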
The user can enable or disable the simple minded brute force markdown
parsing from the per session settings.

Add grey shading and left-align the text wrt the table headings of
markdown to html converted tables.
Save a copy of the data being processed.

Try and sanitize the data passed for markdown to html conversion,
so that if there are any html special characters in the passed
markdown content, they get translated into harmless text.

This also ensures that such text doesn't disappear because of the
browser trying to interpret it as html tagged content.

Trap any errors during sanitizing and/or processing of the lines
in general and push them into an errors array. Callers of this
markdown class can decide whether to use the converted html or
not, based on whether errors is empty or not, or ...
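
A minimal sketch of that sanitize-and-collect-errors idea; the class and method names are illustrative only, and the actual markdown class in simplechat handles much more per line.

```js
// Hypothetical sketch: escape html special chars and collect errors instead
// of throwing, so the caller can decide whether to trust the converted html.
class MarkdownSketch {
    constructor() { this.errors = []; }
    sanitize(line) {
        // escape & first, so the & inside the &lt;/&gt; added next is not re-escaped
        return line.replaceAll('&', '&amp;').replaceAll('<', '&lt;').replaceAll('>', '&gt;');
    }
    process_line(line) {
        try {
            return this.sanitize(line);  // ... further per-line md handling here
        } catch (err) {
            this.errors.push(`line processing failed: ${err}`);
            return '';
        }
    }
}
```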

Move the processing of unordered lists into a function of its own.
Rather, the ordered list can also use the same flow in general, except
for some tiny changes, including wrt the regex, potentially.

Update the regex to match both ordered and unordered list items.

Avoid separate new-list-level logic for the fresh list and the
list-within-list paths. Rather, adjust lastOffset specifically for a
fresh list.

All paths lead to needing to insert a list item, and the difference
wrt starting or ending a list level is handled by the respective
condition check blocks directly, without delaying it for later, so
there is no need for that sList state; remove it.

Avoid the check for the same-level list item path, as nothing special
needs to be done in that path currently.

Identify the last offset live, when unwinding.

NOTE: The logic currently will handle ordered lists on their own,
unordered lists on their own, or intermixed lists containing both
types of lists within them; however remember that all will be shown
as unordered lists.

ALERT: if there is a really long line, the logic currently doesn't
support it being broken into smaller lines with the same or greater
offset than the line identifying the current list item.

Start an ordered or unordered list as the case may be, and push the
same onto endType for matching unwinding.
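
A minimal sketch of that list handling, with a hypothetical item regex and an endType stack for unwinding nested levels; the offsets/state in the actual simplechat markdown logic may differ.

```js
// Hypothetical sketch: match unordered (-, *, +) and ordered (1.) items,
// capturing the indentation offset and the item text.
const reListItem = /^(\s*)([-*+]|\d+\.)\s+(.*)$/;

// Track open list levels so they can be unwound with the matching close tag.
let endType = [];   // e.g. ['</ul>', '</ol>']
let offsets = [];   // indentation offset at which each open level started

function open_list(offset, ordered, html) {
    html.push(ordered ? '<ol>' : '<ul>');
    endType.push(ordered ? '</ol>' : '</ul>');
    offsets.push(offset);
}

function unwind_to(offset, html) {
    while (offsets.length && offsets[offsets.length - 1] > offset) {
        html.push(endType.pop());
        offsets.pop();
    }
}
```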

Ignore empty lines and don't force a list unwind.

If a split line is found which remains within the constraints of
the preceding list item, then don't unwind the list; rather, for
now, add the split line as a new item at the same level.

Rename from unordered to just list, given that the logic handles
both types of lists at a basic level now.

If the split lines don't have any empty lines in between and also
remain within the block area of the list item which they belong
to, then the split line will be appended to the corresponding
list item, ELSE a new list item will be created.

To help with the same, a generic keyed empty-lines tracker logic
has been added.

TODO: Account for a similar semantic wrt paragraph related split lines.
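
A minimal sketch of such a keyed empty-lines tracker and the append-vs-new-item decision it supports; the names and the exact block-area check are assumptions.

```js
// Hypothetical sketch: track consecutive empty lines per context key
// (e.g. 'list', 'para'), to decide append vs new item/para.
let emptyLines = {};

function note_empty(key)  { emptyLines[key] = (emptyLines[key] || 0) + 1; }
function clear_empty(key) { emptyLines[key] = 0; }

function should_append_to_listitem(key, offset, itemOffset) {
    // append only if no empty line was seen since the item started and the
    // continuation line stays within the item's block area (indentation).
    return (emptyLines[key] || 0) === 0 && offset >= itemOffset;
}
```
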
Had forgotten to include this in the examples before.
Similar to the list item handling before, now also allow a para to
have its long lines split into adjacent lines. In turn the logic will
take care of merging them into a single para.

The common logic wrt both flows has been moved into its own helper
function.

Maintain raw and sanitized versions of the line.

Make blockquote work with the raw line and not the sanitized line,
so that irrespective of whether sanitize is enabled or not, the logic
will still work. In turn, re-enable HtmlSanitize.

Also update the readme a bit, better satisfying the md file format.
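
A minimal sketch of keeping both versions of a line around: block-level matching (here blockquote) looks at the raw text, while the emitted html uses the sanitized one. The sanitize helper is the hypothetical one sketched earlier; the real code differs.

```js
// Hypothetical sketch: match the marker on the raw line, emit the sanitized one.
function process_blockquote(rawLine, sanitize, html) {
    let line = { raw: rawLine, clean: sanitize(rawLine) };
    if (line.raw.startsWith('>')) {
        // '>' becomes '&gt;' after sanitizing, so detect on raw, then strip the
        // (possibly escaped) marker from the emitted text.
        html.push(`<blockquote>${line.clean.replace(/^(&gt;|>)\s*/, '')}</blockquote>`);
        return true;
    }
    return false;
}
```
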
Given that fetch_web_url_raw can now also fetch local files, if the
local file access scheme is enabled in simpleproxy.py, rename this
tool call by dropping web from its name, given that some AI models
were getting confused because of it.
@hanishkvc
Contributor Author

hanishkvc commented Nov 26, 2025

Markdown to html logic should work ok enough in general now, especially for basic markdown content. More complex markdown should also potentially display ok enough at a basic level.
