Logging #1146

Merged: 92 commits, Feb 15, 2024

Changes shown are from the first 27 commits.

Commits (92):
9a551a1
WIP:logging
cheng-tan Jan 3, 2024
5e5c20b
serialize request, response and client
cheng-tan Jan 4, 2024
8d0d489
Resolved conflict, merge main.
afourney Jan 22, 2024
98e65d5
Fixed code formatting.
afourney Jan 22, 2024
a64aec9
Updated to use a global package, and added some test cases. Still ver…
afourney Jan 23, 2024
b67995e
Merge branch 'main' into logging
afourney Jan 23, 2024
7306b1c
Update work in progress.
afourney Jan 23, 2024
07dc5b8
adding cost
cheng-tan Jan 23, 2024
73220c8
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 23, 2024
5b5ec6f
log new agent
cheng-tan Jan 23, 2024
9796142
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 23, 2024
ff5863f
update log_completion test in test_agent_telemetry
cheng-tan Jan 24, 2024
c876d62
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 24, 2024
c27def1
tests
cheng-tan Jan 24, 2024
dab2530
fix formatting
cheng-tan Jan 24, 2024
ede5418
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 24, 2024
5862d0e
Added additional telemetry for wrappers and clients.
afourney Jan 25, 2024
526fd10
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 25, 2024
8ed603d
Merge branch 'logging' of github.com:cheng-tan/autogen into logging
cheng-tan Jan 25, 2024
33b188c
WIP: add test for oai client and oai wrapper table
cheng-tan Jan 25, 2024
b51bc29
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 26, 2024
d39b50f
update test_telemetry
cheng-tan Jan 28, 2024
6cbf22a
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 28, 2024
eb2dfa3
fix format
cheng-tan Jan 28, 2024
1ea7cf5
More tests, update doc and clean up
cheng-tan Jan 29, 2024
765f920
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 30, 2024
9b39f49
Merge branch 'main' into logging
afourney Jan 31, 2024
453b59e
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 31, 2024
580295c
small fix for session id - moved to start_logging and return from sta…
cheng-tan Jan 31, 2024
f1911ac
update start_logging type to return str, add notebook to demonstrate …
victordibia Jan 31, 2024
e2918fc
add ability to get log dataframe
victordibia Jan 31, 2024
53d520c
precommit formatting fixes
victordibia Jan 31, 2024
99d01f9
formatting fix
victordibia Jan 31, 2024
8118256
Remove pandas dependency from telemetry and only use in notebook
victordibia Jan 31, 2024
570f011
formatting fixes
victordibia Jan 31, 2024
512586a
log query exceptions
cheng-tan Jan 31, 2024
d9b12af
Merge branch 'logging' of github.com:cheng-tan/autogen into logging
cheng-tan Jan 31, 2024
595ffce
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Jan 31, 2024
79202a2
fix formatting
cheng-tan Jan 31, 2024
f7ef2d1
fix ci
cheng-tan Jan 31, 2024
4c4bac3
Merge branch 'main' into logging
cheng-tan Feb 1, 2024
3556600
fix comment - add notebook link in doc and fix groupchat serialization
cheng-tan Feb 2, 2024
e4569a1
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 2, 2024
a14c7c9
Merge branch 'logging' of github.com:cheng-tan/autogen into logging
cheng-tan Feb 2, 2024
ee3ddeb
small fix
cheng-tan Feb 2, 2024
e34a5f5
do not serialize Agent
cheng-tan Feb 2, 2024
f27a38c
formatting
cheng-tan Feb 2, 2024
27f92fb
wip
cheng-tan Feb 2, 2024
f031cf4
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 5, 2024
789a833
fix test
cheng-tan Feb 5, 2024
d6883cd
serialization bug fix for soc moderator
cheng-tan Feb 5, 2024
d78a790
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 5, 2024
d6f74b4
fix test and clean up
cheng-tan Feb 6, 2024
a4092b5
wip: add version table
cheng-tan Feb 6, 2024
011023a
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 6, 2024
9472ba4
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 6, 2024
3f6816b
fix test
cheng-tan Feb 6, 2024
249c435
fix test
cheng-tan Feb 6, 2024
90ed77c
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 6, 2024
5106b9b
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 9, 2024
4f7942b
fix test
cheng-tan Feb 9, 2024
87847c5
make the logging interface more general and fix client model logging
cheng-tan Feb 9, 2024
b3db6f5
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 12, 2024
5ba1593
fix format
cheng-tan Feb 12, 2024
bc3da16
fix formatting and tests
cheng-tan Feb 12, 2024
4e6024e
fix
cheng-tan Feb 12, 2024
3d4c0d6
fix comment
cheng-tan Feb 12, 2024
57e257a
Renaming telemetry to logging
cheng-tan Feb 12, 2024
fee17a6
update notebook
cheng-tan Feb 12, 2024
d457e19
update doc
cheng-tan Feb 12, 2024
ca76675
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 12, 2024
fdd210b
formatting
cheng-tan Feb 12, 2024
616e633
formatting and clean up
cheng-tan Feb 12, 2024
6a1d8dd
fix doc
cheng-tan Feb 12, 2024
cc86287
fix link and title
cheng-tan Feb 12, 2024
7e1e7c5
fix notebook format and fix comment
cheng-tan Feb 13, 2024
60d5a9e
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 13, 2024
7d8cc6c
format
cheng-tan Feb 13, 2024
a7c0510
Merge branch 'main' into logging
cheng-tan Feb 13, 2024
8e219bf
try fixing agent test and update migration guide
cheng-tan Feb 13, 2024
f56d1c4
Merge branch 'logging' of github.com:cheng-tan/autogen into logging
cheng-tan Feb 13, 2024
68ae33e
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 13, 2024
d59b4c2
fix link
cheng-tan Feb 13, 2024
b4e7829
debug print
cheng-tan Feb 14, 2024
861b742
debug
cheng-tan Feb 14, 2024
e02a12d
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 14, 2024
848e97b
format
cheng-tan Feb 14, 2024
a1d33f4
add back tests
cheng-tan Feb 14, 2024
7a223a0
Merge branch 'main' into logging
sonichi Feb 14, 2024
6912f83
fix tests
cheng-tan Feb 14, 2024
0e6a8c9
Merge branch 'main' of github.com:microsoft/autogen into logging
cheng-tan Feb 14, 2024
9adf462
Merge branch 'logging' of github.com:cheng-tan/autogen into logging
cheng-tan Feb 14, 2024
2 changes: 2 additions & 0 deletions autogen/agentchat/assistant_agent.py
@@ -1,6 +1,7 @@
from typing import Callable, Dict, Literal, Optional, Union

from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent


class AssistantAgent(ConversableAgent):
@@ -69,6 +70,7 @@ def __init__(
description=description,
**kwargs,
)
log_new_agent(self, locals())

# Update the provided description if None, and we are using the default system_message,
# then use the default description.
4 changes: 4 additions & 0 deletions autogen/agentchat/conversable_agent.py
@@ -9,6 +9,7 @@
from typing import Any, Awaitable, Callable, Dict, List, Literal, Optional, Tuple, Type, TypeVar, Union

from .. import OpenAIWrapper
from ..telemetry import log_new_agent
from ..cache.cache import Cache
from ..code_utils import (
DEFAULT_MODEL,
@@ -117,6 +118,7 @@ def __init__(
(e.g. the GroupChatManager) to decide when to call upon this agent. (Default: system_message)
"""
super().__init__(name)

# a dictionary of conversations, default value is list
self._oai_messages = defaultdict(list)
self._oai_system_message = [{"content": system_message, "role": "system"}]
@@ -136,6 +138,8 @@ def __init__(
self.llm_config.update(llm_config)
self.client = OpenAIWrapper(**self.llm_config)

log_new_agent(self, locals())

# Initialize standalone client cache object.
self.client_cache = None

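Note where the `log_new_agent(self, locals())` call sits in `ConversableAgent.__init__`: after `llm_config` is merged and the `OpenAIWrapper` client is built. `locals()` captures the function's namespace at the moment of the call, so the logged arguments reflect any defaults resolved before that line. A small standalone illustration of that timing, unrelated to the real agent classes:

```python
def make_agent(name, llm_config=None):
    snapshot_early = dict(locals())      # taken before defaults are resolved
    if llm_config is None:
        llm_config = {"model": "gpt-4"}  # default filled in later, as __init__ merges llm_config
    snapshot_late = dict(locals())       # what a log_new_agent call here would see
    return snapshot_early, snapshot_late

early, late = make_agent("assistant")
# early still carries llm_config=None; late sees the resolved value
```

Logging late therefore records the configuration the agent will actually run with, not just what the caller passed in.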
2 changes: 2 additions & 0 deletions autogen/agentchat/groupchat.py
@@ -8,6 +8,7 @@
from ..code_utils import content_str
from .agent import Agent
from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent

logger = logging.getLogger(__name__)

@@ -323,6 +324,7 @@ def __init__(
system_message=system_message,
**kwargs,
)
log_new_agent(self, locals())
# Store groupchat
self._groupchat = groupchat

2 changes: 2 additions & 0 deletions autogen/agentchat/user_proxy_agent.py
@@ -1,6 +1,7 @@
from typing import Callable, Dict, List, Literal, Optional, Union

from .conversable_agent import ConversableAgent
from ..telemetry import log_new_agent


class UserProxyAgent(ConversableAgent):
@@ -93,3 +94,4 @@ def __init__(
if description is not None
else self.DEFAULT_USER_PROXY_AGENT_DESCRIPTIONS[human_input_mode],
)
log_new_agent(self, locals())
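With all four agent classes now logging their construction, the commit messages sketch the read side: `start_logging` returns a session-id string, logs land in SQLite, and a later commit adds "ability to get log dataframe" while deliberately keeping pandas out of the telemetry module itself. A pandas-free retrieval helper in that spirit might look like the following — the database and table names are assumptions, not taken from this diff excerpt:

```python
import sqlite3

def get_log(dbname="logs.db", table="chat_completions"):
    """Return every logged row as a list of dicts (column name -> value).

    Both dbname and table are illustrative defaults; the real module may differ.
    """
    con = sqlite3.connect(dbname)
    try:
        con.row_factory = sqlite3.Row  # rows become name-addressable
        return [dict(row) for row in con.execute(f"SELECT * FROM {table}")]
    finally:
        con.close()
```

Returning plain dicts keeps the core dependency-free; a notebook can still do `pandas.DataFrame(get_log(...))` in one line, which matches the commit that moved the pandas dependency out of telemetry and into the notebook.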
50 changes: 46 additions & 4 deletions autogen/oai/client.py
@@ -5,6 +5,7 @@
from typing import Any, List, Optional, Dict, Callable, Tuple, Union
import logging
import inspect
import uuid
from flaml.automl.logger import logger_formatter

from pydantic import BaseModel
@@ -16,6 +17,8 @@
from autogen.token_count_utils import count_token
from autogen._pydantic import model_dump

import autogen.telemetry

TOOL_ENABLED = False
try:
import openai
@@ -104,6 +107,7 @@ def __init__(self, *, config_list: Optional[List[Dict[str, Any]]] = None, **base
base_config: base config. It can contain both keyword arguments for openai client
and additional kwargs.
"""
autogen.telemetry.log_new_wrapper(self, locals())
openai_config, extra_kwargs = self._separate_openai_config(base_config)
if type(config_list) is list and len(config_list) == 0:
logger.warning("openai client was provided with an empty config_list, which may not be intended.")
@@ -119,6 +123,7 @@ def __init__(self, *, config_list: Optional[List[Dict[str, Any]]] = None, **base
else:
self._clients = [self._client(extra_kwargs, openai_config)]
self._config_list = [extra_kwargs]
self.wrapper_id = id(self)

def _separate_openai_config(self, config: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""Separate the config into openai_config and extra_kwargs."""
@@ -149,8 +154,10 @@ def _client(self, config: Dict[str, Any], openai_config: Dict[str, Any]) -> Open
openai_config["azure_deployment"] = openai_config["azure_deployment"].replace(".", "")
openai_config["azure_endpoint"] = openai_config.get("azure_endpoint", openai_config.pop("base_url", None))
client = AzureOpenAI(**openai_config)
autogen.telemetry.log_new_client(client, self, openai_config)
else:
client = OpenAI(**openai_config)
autogen.telemetry.log_new_client(client, self, openai_config)
return client

@classmethod
@@ -232,6 +239,7 @@ def yes_or_no_filter(context, response):
"""
if ERROR:
raise ERROR
invocation_id = str(uuid.uuid4())
last = len(self._clients) - 1
for i, client in enumerate(self._clients):
# merge the input config with the i-th config in the config list
@@ -261,6 +269,7 @@ def yes_or_no_filter(context, response):
with cache_client as cache:
# Try to get the response from cache
key = get_key(params)
request_ts = autogen.telemetry.get_current_ts()
response: ChatCompletion = cache.get(key, None)

if response is not None:
@@ -271,6 +280,19 @@
response.cost = self.cost(response)
cache.set(key, response)
self._update_usage_summary(response, use_cache=True)

# Log the cache hit
autogen.telemetry.log_chat_completion(
invocation_id=invocation_id,
client_id=id(client),
wrapper_id=id(self),
request=params,
response=response,
is_cached=1,
cost=response.cost,
start_time=request_ts,
)

# check the filter
pass_filter = filter_func is None or filter_func(context=context, response=response)
if pass_filter or i == last:
@@ -280,9 +302,21 @@ def yes_or_no_filter(context, response):
return response
continue # filter is not passed; try the next config
try:
request_ts = autogen.telemetry.get_current_ts()
response = self._completions_create(client, params)
except APIError as err:
error_code = getattr(err, "code", None)
autogen.telemetry.log_chat_completion(
invocation_id=invocation_id,
client_id=id(client),
wrapper_id=id(self),
request=params,
response=f"error_code:{error_code}, config {i} failed",
is_cached=0,
cost=0,
start_time=request_ts,
)

if error_code == "content_filter":
# raise the error for content_filter
raise
@@ -298,6 +332,18 @@ def yes_or_no_filter(context, response):
with cache_client as cache:
cache.set(key, response)

# Log the telemetry
autogen.telemetry.log_chat_completion(
invocation_id=invocation_id,
client_id=id(client),
wrapper_id=id(self),
request=params,
response=response,
is_cached=0,
cost=response.cost,
start_time=request_ts,
)

# check the filter
pass_filter = filter_func is None or filter_func(context=context, response=response)
if pass_filter or i == last:
@@ -492,7 +538,6 @@ def _completions_create(self, client: OpenAI, params: Dict[str, Any]) -> ChatCom
response_contents[choice.index] += content
completion_tokens += 1
else:
# print()
pass

# Reset the terminal text color
@@ -684,6 +729,3 @@ def extract_text_or_completion_object(
choice.message if choice.message.function_call is not None else choice.message.content # type: ignore [union-attr]
for choice in choices
]


# TODO: logging