
AI that listens, understands, and visualizes—instantly, on-air and on request
Live broadcasts — from sports to elections to breaking news — are unpredictable. Traditional graphics workflows rely on rigid pre-planning and manual operator intervention, which can’t keep up with fast-changing conversations. The result is missed moments, slower reactions, and less engaging coverage.
Pulse brings agility to live production by using AI to listen and respond in real time. It captures both on-air audio and direct vocal commands from producers, translating them into structured prompts for graphics generation. Whether the trigger is a commentator mentioning a player, a politician making a statement, or a producer saying “Show me the latest results”, Pulse instantly creates accurate, context-aware graphics. Producers always retain oversight: every visual is available for preview and approval before it goes live.
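As a concrete illustration of what such a structured prompt might contain, here is a minimal sketch in Python. The `GraphicsPrompt` shape and all of its field names are assumptions made for illustration, not Pulse's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the structured prompt derived from a spoken trigger.
# Every field name here is an illustrative assumption, not Pulse's real schema.
@dataclass
class GraphicsPrompt:
    source: str                # "on_air" speech or a "producer_command"
    transcript: str            # raw ASR text the prompt was derived from
    intent: str                # e.g. "show_results", "show_player_stats"
    entities: dict = field(default_factory=dict)  # values extracted by NLU
    confidence: float = 0.0    # recognition confidence, used for filtering

# A producer saying "Show me the latest results" might yield:
prompt = GraphicsPrompt(
    source="producer_command",
    transcript="Show me the latest results",
    intent="show_results",
    entities={"dataset": "latest_results"},
    confidence=0.94,
)
```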
How does it work?
Pulse uses ASR (automatic speech recognition) to capture speech and NLU (natural language understanding) to interpret its meaning. The resulting triggers are passed to Cortex, which orchestrates template selection, asset retrieval, and template population. Graphics, from lower thirds to video walls to AR, are generated instantly and displayed in the Pulse dashboard for preview or manual cueing. Producers can let graphics air automatically or intervene at any time. This hybrid approach combines real-time automation with editorial control.
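To make the flow concrete, here is a minimal end-to-end sketch in Python. Everything in it is assumed for illustration: `transcribe_chunk` and `parse_intent` stand in for the real ASR and NLU stages, and the `cortex` and `dashboard` objects stand in for the actual orchestration and preview interfaces, whose APIs are not public.

```python
from dataclasses import dataclass

@dataclass
class NLUResult:
    intent: str        # what the speaker asked for
    entities: dict     # named values pulled from the utterance
    confidence: float  # how sure the NLU stage is

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a real deployment would tune this

def transcribe_chunk(chunk: bytes) -> str:
    """Stand-in ASR stage: a real system would run a streaming recognizer."""
    return "show me the latest results"

def parse_intent(transcript: str) -> NLUResult:
    """Stand-in NLU stage: a real system would run an intent/entity model."""
    return NLUResult("show_results", {"dataset": "latest_results"}, 0.94)

def handle_audio_chunk(chunk: bytes, cortex, dashboard) -> None:
    transcript = transcribe_chunk(chunk)   # ASR: speech -> text
    result = parse_intent(transcript)      # NLU: text -> intent + entities
    if result.confidence < CONFIDENCE_THRESHOLD:
        return  # filtered out as a likely false trigger
    # Cortex orchestrates template selection, asset retrieval, and population.
    graphic = cortex.request_graphic(result.intent, result.entities)
    # The rendered graphic lands in the dashboard for preview or manual cueing.
    dashboard.queue_for_preview(graphic)
```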


Key features
- Real-time ASR/NLU tuned for broadcast environments
- Direct producer vocal command support for instant graphics requests
- Contextual mapping of spoken cues to templates
- Confidence scoring and filtering to prevent false triggers (see the sketch after this list)
- Hybrid dashboard for preview, override, and manual control
- Multilingual and dialect support for global teams
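
The confidence scoring mentioned above can be pictured as a simple gate between automation and editorial control. The sketch below is an assumption about how such a gate might route triggers; the thresholds and the three-way routing are illustrative, not Pulse's published behavior.

```python
# Illustrative confidence gate. REJECT_BELOW and AUTO_AIR_ABOVE are assumed
# values; a real deployment would tune them per show and per language.
REJECT_BELOW = 0.60    # below this, drop the trigger as a likely false positive
AUTO_AIR_ABOVE = 0.95  # above this, the graphic may air without manual cueing

def route_trigger(confidence: float) -> str:
    """Route a recognized trigger based on its NLU confidence score."""
    if confidence < REJECT_BELOW:
        return "drop"             # filtered: prevents false triggers
    if confidence < AUTO_AIR_ABOVE:
        return "hold_for_review"  # surfaced in the dashboard for approval
    return "auto_air"             # high confidence: eligible to air automatically

assert route_trigger(0.40) == "drop"
assert route_trigger(0.80) == "hold_for_review"
assert route_trigger(0.97) == "auto_air"
```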