
Nothing logged to Application Insights #177

Closed
bwoods89 opened this issue Aug 19, 2019 · 1 comment

Comments

@bwoods89

I've tried all of the examples in the library, but none of my events are showing up in Application Insights. Here's what my code looks like right now.

from fastapi import FastAPI
import logging
from applicationinsights.logging import enable, LoggingHandler
from features import FeaturesRequest
from XGB_New_Cust_80_Feature import predict_output_new_customer
from XGB_Rep_Cust_80_Feature import predict_output_repeat_customer

app = FastAPI(openapi_prefix="/risk-model")

instrumentation_key = '<my instrumentation key>'
try:
    with open("/keyvault/ApplicationInsights__InstrumentationKey", "r") as f:
        instrumentation_key = f.read()
except IOError:
    pass

enable(instrumentation_key)

handler = LoggingHandler(instrumentation_key)

logging.basicConfig(handlers=[handler], format='%(levelname)s: %(message)s', level=logging.DEBUG)
logger = logging.getLogger("main")

@app.get("/healthz", status_code=200)
async def liveliness_or_readiness_probe():
    logger.debug("health check")

@app.post("/risk-predictions/new-customers")
async def new_customer_risk_prediction(features: FeaturesRequest):
    logger.info(features)
    prediction = predict_output_new_customer(features)

    return {
        "prediction": prediction[0][0],
        "model": prediction[1]
    }

@app.post("/risk-predictions/existing-customers")
async def existing_customer_risk_prediction(features: FeaturesRequest):
    logger.info(features)
    prediction = predict_output_repeat_customer(features)

    return {
        "prediction": prediction[0][0],
        "model": prediction[1]
    }

I've also tried with an explicit telemetry client, and that approach doesn't work either.

from fastapi import FastAPI
from fastapi.exception_handlers import (
    http_exception_handler,
    request_validation_exception_handler,
)
from fastapi.exceptions import RequestValidationError
import logging
from applicationinsights.logging import enable, LoggingHandler
from applicationinsights import TelemetryClient
from applicationinsights.channel import TelemetryChannel
from features import FeaturesRequest
from XGB_New_Cust_80_Feature import predict_output_new_customer
from XGB_Rep_Cust_80_Feature import predict_output_repeat_customer
from starlette.exceptions import HTTPException as StarletteHTTPException
import uvicorn

app = FastAPI(openapi_prefix="/risk-model")

instrumentation_key = '<my instrumentation key>'
try:
    with open("/keyvault/ApplicationInsights__InstrumentationKey", "r") as f:
        instrumentation_key = f.read()
except IOError:
    pass

enable(instrumentation_key, async_=True)

tc = TelemetryClient(instrumentation_key)
tc.channel.sender.send_interval_in_milliseconds = 10 * 1000
tc.channel.sender.send_buffer_size = 10

handler = LoggingHandler(instrumentation_key)

logging.basicConfig(handlers=[handler], format='%(levelname)s: %(message)s', level=logging.DEBUG)
logger = logging.getLogger("main")

@app.get("/boom", status_code=500)
async def boom():
    raise StarletteHTTPException(500)

@app.get("/healthz", status_code=200)
async def liveliness_or_readiness_probe():
    logger.debug("health check")

@app.post("/risk-predictions/new-customers")
async def new_customer_risk_prediction(features: FeaturesRequest):
    logger.info(features)
    tc.track_trace("New customer prediction")
    tc.flush()
    prediction = predict_output_new_customer(features)

    return {
        "prediction": prediction[0][0],
        "model": prediction[1]
    }

@app.post("/risk-predictions/existing-customers")
async def existing_customer_risk_prediction(features: FeaturesRequest):
    logger.info(features)
    tc.track_trace("Existing customer prediction")
    tc.flush()
    prediction = predict_output_repeat_customer(features)

    return {
        "prediction": prediction[0][0],
        "model": prediction[1]
    }

# Exception Handlers
@app.exception_handler(StarletteHTTPException)
async def customer_http_exception_handler(request, exc):
    tc.track_exception(value=exc)
    tc.flush()
    return await http_exception_handler(request, exc)

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request, exc):
    tc.track_exception(value=exc)
    tc.flush()
    return await request_validation_exception_handler(request, exc)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
@c-w
Contributor

c-w commented Aug 28, 2019

For web applications you'll usually want to send the telemetry on a background thread in case the main thread is too busy (this is how both the Flask and Django integrations are implemented). Did you try switching AppInsights to async mode? E.g. by passing async_=True to applicationinsights.logging.enable (see code), or by setting up an explicit AsynchronousSender and AsynchronousQueue for the TelemetryClient (see code).

@lzchen closed this as completed Jul 6, 2022