Comprehensive observability, testing, and optimization for AI-powered applications. A drop-in proxy that adds less than 5 ms of latency.
"I don't care what they're talking about. All I want is a nice, fat recording." — Harry Caul, The Conversation (1974)
No code changes required. Simply replace your LLM API endpoints with Union Square URLs.
Asynchronous recording with less than 5ms overhead. Your users won't notice a difference.
Capture every interaction for analysis, debugging, and testing. Never lose track of what happened.
Convert problematic conversations into automated tests to prevent regressions.
Track token usage, latency, costs, and performance metrics across all your AI interactions.
Configurable recording rules, PII detection, and self-hosted deployment for complete control.
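The low-overhead recording claim above rests on keeping persistence off the request path. A minimal sketch of that pattern using only the Rust standard library (the event type and function names here are illustrative, not Union Square's actual internals, which are not shown in this document):

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical event type; the real recording schema is not part of this sketch.
#[allow(dead_code)]
struct RecordedEvent {
    session_id: String,
    body: String,
}

/// Enqueue `n` events on the hot path and count how many the background
/// writer persists. Sending on the channel is cheap and non-blocking,
/// so the request path never waits on storage.
fn run_recorder(n: usize) -> usize {
    let (tx, rx) = mpsc::channel::<RecordedEvent>();

    // Background writer thread: stands in for the real asynchronous
    // persistence layer (e.g. a database writer).
    let writer = thread::spawn(move || rx.iter().count());

    for i in 0..n {
        tx.send(RecordedEvent {
            session_id: "session-1".into(),
            body: format!("request {i}"),
        })
        .expect("writer alive");
    }
    drop(tx); // close the channel so the writer thread exits

    writer.join().expect("writer finished")
}

fn main() {
    println!("persisted {} events", run_recorder(1000));
}
```

The request handler only pays for a channel send; durability happens on the writer's schedule, which is how recording overhead can stay in the low-millisecond range.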
Union Square follows strict type-driven development principles with a functional core, imperative shell architecture.
// Example: Type-safe session tracking
use nutype::nutype;

#[nutype(
validate(not_empty, regex = "^[a-zA-Z0-9-]+$"),
derive(Debug, Clone, PartialEq, Eq, Hash)
)]
pub struct SessionId(String);
pub async fn proxy_request(
session_id: SessionId,
request: ProxyRequest,
) -> Result<ProxyResponse, ProxyError> {
// Type safety throughout the pipeline
record_request(&session_id, &request).await?;
let response = forward_to_provider(request).await?;
record_response(&session_id, &response).await?;
Ok(response)
}
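For readers unfamiliar with the nutype crate: the macro above generates a fallible constructor, so an invalid SessionId can never exist. A hand-rolled sketch of the same "parse, don't validate" idea, with rules taken from the regex above (the names and error variants here are illustrative, not Union Square's actual API):

```rust
/// Newtype wrapper: the inner String is private, so the only way to
/// obtain a SessionId is through the validating constructor.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct SessionId(String);

#[derive(Debug, PartialEq)]
pub enum SessionIdError {
    Empty,
    InvalidCharacter,
}

impl SessionId {
    /// Accepts only non-empty strings matching ^[a-zA-Z0-9-]+$.
    pub fn try_new(raw: &str) -> Result<Self, SessionIdError> {
        if raw.is_empty() {
            return Err(SessionIdError::Empty);
        }
        if !raw.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') {
            return Err(SessionIdError::InvalidCharacter);
        }
        Ok(SessionId(raw.to_string()))
    }
}

fn main() {
    assert!(SessionId::try_new("abc-123").is_ok());
    assert!(SessionId::try_new("").is_err());
    assert!(SessionId::try_new("bad id!").is_err());
    println!("ok");
}
```

Because every SessionId that reaches `proxy_request` has already passed validation, the pipeline needs no defensive re-checking downstream.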
git clone https://github.com/jwilger/union_square.git
cd union_square
nix develop # or cargo build --release
# config.toml
[server]
port = 8080
[database]
url = "postgresql://localhost/union_square"
// Before
const API = "https://api.openai.com/v1/chat/completions"
// After
const API = "https://your-proxy.com/openai/v1/chat/completions"
Union Square is currently in early development. Core functionality is being implemented. Watch this space for the first release announcement!