May 24, 2024 • 2 min read

Highlight Pod #12: Traceloop Co-Founder Nir Gazit

Chris Esplin
Software Engineer

Watch on YouTube

Traceloop: Observability for LLM Applications

As more companies build production applications powered by large language models (LLMs), they face new challenges in monitoring and evaluating the quality of model outputs.


Traceloop has created an open-source project called OpenLLMetry that instruments LLM calls at runtime, capturing prompts, completions, and other metadata. That data flows into Traceloop's observability platform, which provides a suite of metrics for evaluating output quality, including relevance, repetitiveness, and safety violations, and makes it easy to drill into instances where a model is hallucinating or generating low-quality responses.

Traceloop is also working with vendors like Microsoft and Apple to define OpenTelemetry conventions specifically for LLM observability use cases. With AI systems becoming increasingly complex and multi-modal, purpose-built monitoring tools will be critical for responsible enterprise adoption.
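To make the instrumentation idea concrete, here is a minimal, self-contained sketch of the kind of span an LLM instrumentation layer records for each model call. This is an illustration only: the function name, attribute keys, and schema are assumptions for the example, not Traceloop's actual API or OpenTelemetry's finalized conventions.

```python
import time
import uuid

def record_llm_span(model, prompt, completion, usage):
    """Build a span-like record for one LLM call (hypothetical schema).

    An instrumentation library would attach a record like this to a trace
    and export it to an observability backend; attribute names here are
    illustrative, not Traceloop's real convention.
    """
    return {
        "span_id": uuid.uuid4().hex,
        "timestamp": time.time(),
        "attributes": {
            "llm.request.model": model,          # which model was called
            "llm.prompt": prompt,                # input sent to the model
            "llm.completion": completion,        # output returned
            "llm.usage.prompt_tokens": usage.get("prompt_tokens", 0),
            "llm.usage.completion_tokens": usage.get("completion_tokens", 0),
        },
    }

span = record_llm_span(
    model="gpt-4",
    prompt="Summarize observability for LLMs in one line.",
    completion="Trace every prompt and completion so output quality can be evaluated.",
    usage={"prompt_tokens": 12, "completion_tokens": 14},
)
print(span["attributes"]["llm.request.model"])
```

Capturing the prompt and completion alongside token usage is what lets a platform compute quality metrics (relevance, repetitiveness, safety) over real production traffic rather than offline test sets.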

