
Time: 8-minute read

Created: September 17, 2024

Author: Cole Gottdank

What We've Shipped in the Past 6 Months

Over the past 6 months, we’ve been hard at work making Helicone even better for you. Your support and feedback have been invaluable, and we’re excited to share our journey and future plans with you.

What We’ve Built in the Past Six Months

🚀 Sessions

We’ve made it easier to group and visualize multi-step LLM interactions. By adding just two simple headers, you can track request flows across multiple traces and gain valuable insights into complex AI workflows.
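As a minimal sketch of what those two headers look like in practice (check our Sessions docs for the exact, current header names; this Python snippet just builds the headers you'd attach to each request):

```python
# Sketch: grouping multi-step LLM calls into one Helicone session.
# The two headers below (Helicone-Session-Id, Helicone-Session-Path) link
# every request that carries the same session id into a single trace tree.
import uuid


def helicone_session_headers(session_id: str, path: str) -> dict:
    """Build the two extra headers that tie a request to a session trace."""
    return {
        "Helicone-Session-Id": session_id,    # same id across the whole workflow
        "Helicone-Session-Path": path,        # step-specific path within the session
    }


session_id = str(uuid.uuid4())

# Each step of a workflow reuses the same id with a different path:
step1 = helicone_session_headers(session_id, "/plan")
step2 = helicone_session_headers(session_id, "/plan/execute")
```

You'd pass these alongside your normal request headers when routing through the Helicone proxy (for example, via `extra_headers=` in the OpenAI Python SDK).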

✨ Prompt Management

You can manage and version your prompts directly in our UI, making it easy to collaborate with your team. If you prefer to handle prompts in your codebase, that’s fine too—we’ll automatically version them via our proxy without changing your workflow.

🧪 Prompt Experiments

Run experiments using historical datasets to test, evaluate, and improve your prompts over time. This helps prevent regressions in your production systems and ensures your prompts keep getting better.

🔔 Slack Alerts

Stay updated with real-time notifications right in Slack! You can now get alerted on errors or costs, helping you keep a close eye on your application’s performance without missing a beat.

⚡ Enhanced Speed and Performance

Filtering, searching, and aggregations have been massively improved—you can now load and analyze millions of requests in milliseconds! Your workflows just got a whole lot faster.

📚 Datasets

You can now build datasets from your requests. Add, duplicate, and edit requests in your dataset, and export them when you’re ready. Create a golden dataset to fine-tune your models and easily export it to your favorite fine-tuning platform like OpenPipe.

🛡️ LLM Moderations & Security

We’ve added features to help detect phishing, prompt injections, adversarial instructions, anomalies, and data exfiltration. These tools help you maintain the security and integrity of your AI applications.

📈 Scores

Evaluate and boost your model’s performance with our new Scores feature. Get insights and analytics to improve your outcomes.

🛠️ Entire Backend Rewrite: Never Lose a Log Again

We’ve completely overhauled our backend and integrated Kafka, achieving incredible scalability and bulletproof log persistence. What does this mean for you? Simply put, we promise we’ll never lose a log again.

Never Lose a Log Again

To give you an idea of the scale we’re operating at, we’ve handled over 2 billion LLM logs and processed more than 1.5 trillion tokens. We’re committed to supporting your needs, no matter how big they grow.

In our dedication to transparency, we’ve released anonymized stats at us.helicone.ai/open-stats, so you can see for yourself how robust Helicone’s services truly are.

Curious about how we made this happen? Dive into the details in our blog post: Handling Billions of LLM Logs with Upstash Kafka and Cloudflare Workers. We share the behind-the-scenes story of our transformation and how it benefits you.


What’s on the Roadmap?

We’re not stopping here! Here’s what you can look forward to in the near future:

  • The Ultimate Prompt Experiments Workflow: We’ve been listening to your feedback and are revamping our prompt experiments feature to make it more intuitive and powerful. This is the feature our team is most excited about, and we can’t wait to share the improvements with you soon!

  • Evaluations: We’re introducing a new dashboard to help you track evaluations over time. You’ll be able to sample requests for evaluations and perform session-level assessments, making it easier to measure and enhance your LLM applications.

  • Connections: We’re enhancing our integrations to make your experience smoother. Soon, you’ll be able to connect to fine-tuning platforms, model load balancing services, evaluation tools, data analytics platforms, and more. Our goal is to bring all your AI tools together seamlessly.

  • Enhanced Platform Cohesion: We’re working on improving the overall cohesion across the entire platform. Our aim is to create a more seamless flow between features, reducing friction points and enhancing the user experience.


🎉 Meet Our New Team Members!

We’re thrilled to announce that our team is growing, thanks to your incredible support! Your trust in Helicone has allowed us to welcome two fantastic new members who are eager to help us make Helicone even better for you.

👋 Lina - Founding Designer


Say hello to Lina, our first dedicated designer! Before joining Helicone, Lina designed in gaming, automotive, and fintech. She loves storytelling through design and is looking forward to exploring nature in San Francisco!

👋 Stefan - Founding Engineer


We’re excited to welcome Stefan to our engineering team. With over 6 years of experience in engineering and fintech, Stefan has built LLM apps at startups ranging from pre-seed to Series A+. He’s here to help us push the boundaries of what’s possible with Helicone.

👋 Kavin - Software Engineer Intern


We’re excited to welcome Kavin to our engineering team. Kavin is a Computer Engineering student at the University of Waterloo with experience in full-stack development and a passion for AI. He’s contributed to tech societies like the Exun Clan and the New Delhi Space Society, and brings focus and discipline from his lifelong practice of competitive yoga and kung fu.

Thank You from the Founders

We can’t thank you enough for being part of our journey. Your support from the early days of Helicone has allowed us to dedicate all our effort to building the ultimate LLM observability solution—something we truly love doing.

We’re committed to working even harder to make Helicone better for you. Our ultimate goal is to build a platform that enables you to deliver the best LLM applications to your customers.

If you have any questions or feedback, please don’t hesitate to reach out. We’re here and would love to hear from you!



Stay tuned for more updates, and thank you for continuing this journey with us!

The Helicone Team ❤️