TurboStream v0.1.1: Scaling Real-Time AI Analysis

Published December 24, 2025


We are excited to announce the release of TurboStream v0.1.1. This update represents a significant step forward in our mission to build the most robust platform for real-time data streaming and AI-powered analysis.

While our previous version introduced the core concept of "signals from your data streams," v0.1.1 focuses on scalability, usability, and security. We have listened to developer feedback and re-architected key components of our Go backend and Terminal UI (TUI) to support more complex, concurrent workflows.

Concurrent Multi-Feed Analysis

The headline feature of this release is the ability to run independent AI analysis loops for multiple data feeds simultaneously.

In previous versions, the AI analysis engine ran serially: switching between feeds paused analysis of the backgrounded feed. This was a limitation for users monitoring diverse data sources—such as a BGP update stream and a Solana mainnet feed—at the same time.

With v0.1.1, we have introduced a concurrent state management system within the TUI. You can now:

  • Enable "Auto Mode" for Feed A to run every 10 seconds.
  • Enable "Auto Mode" for Feed B to run every 30 seconds.
  • Monitor both concurrently without interference.

The backend's SocketManager has been optimized to handle these parallel requests efficiently, leveraging Go's lightweight goroutines to maintain high throughput even under load.

Explicit Control and Token Economy

Running Large Language Models (LLMs) on high-frequency data streams can be expensive. To give users better control over their token consumption, we have removed implicit analysis triggers.

The system no longer defaults to a generic "Analyze this" prompt for empty inputs. Instead, users must now explicitly define a prompt (e.g., "Summarize anomalies in this price feed") before the AI engine engages. This ensures that your API credits are only spent on the specific insights you care about.
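The gating logic amounts to a small validation step in front of the AI engine. The function and error names below are illustrative of the v0.1.1 policy, not the exact implementation:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ErrEmptyPrompt signals that analysis was requested without an explicit
// prompt. Under the new policy this is an error rather than a silent
// fallback to a generic "Analyze this" prompt.
var ErrEmptyPrompt = errors.New("analysis requires an explicit prompt")

// validatePrompt trims the input and refuses blank prompts, ensuring no
// tokens are spent unless the user has stated what they want.
func validatePrompt(prompt string) (string, error) {
	p := strings.TrimSpace(prompt)
	if p == "" {
		return "", ErrEmptyPrompt
	}
	return p, nil
}

func main() {
	if _, err := validatePrompt("   "); err != nil {
		fmt.Println("rejected:", err)
	}
	p, _ := validatePrompt("Summarize anomalies in this price feed")
	fmt.Println("accepted:", p)
}
```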

Enhanced Terminal Experience

For our power users who live in the terminal, we have significantly upgraded the TUI layout.

The "AI Analysis" window has been expanded vertically (from 12 to 25 lines), providing much-needed breathing room for complex reasoning outputs from models like GPT-4. We have also implemented a smart auto-scrolling mechanism that keeps the latest streaming tokens in view while ensuring your prompt input field remains visible and accessible at the bottom of the screen.

Security Hardening

As we move closer to a production-ready 1.0 release, security is paramount. This release includes several hardening measures:

  1. WebSocket Security: We now enforce strict Origin header validation to prevent Cross-Site WebSocket Hijacking (CSWH).
  2. Configuration Safety: The backend now actively warns administrators if the application starts with insecure default secrets (such as an unchanged JWT_SECRET), preventing accidental misconfiguration in production environments.
  3. Fuzz Testing: We have introduced fuzz testing for our critical authentication and message parsing paths, verifying system stability against malformed or malicious inputs.

What's Next?

This pre-production release sets the stage for our upcoming v0.2.0 milestone, where we plan to introduce persistent storage for analysis history and advanced filtering rules.

You can pull the latest changes from our GitHub repository today. As always, we welcome your feedback and contributions.