Imply warns rising observability costs cut visibility
Imply has released new research reporting that enterprise observability costs are rising faster than their perceived value, as surging telemetry data forces IT teams into trade-offs between budget and visibility.
The study, titled "The Breaking Point for Observability Leaders", surveyed senior observability and platform leaders at large enterprises. It examines how data growth, retention limits and cost pressures are reshaping how organisations manage logs and other observability data.
The findings indicate that organisations are concentrating spend on a small number of platforms while expressing low satisfaction with returns. More than half of leaders allocate over 25% of their observability budget to a single platform. Only 13% say they are very satisfied with the cost-to-value ratio.
Respondents report that platform-centric licensing models and rapid data growth are driving up costs each year. Many leaders describe an environment where retaining essential logs or adding new workloads requires difficult budget decisions.
Visibility trade-offs
The research highlights widespread cutbacks in data retention and collection as teams attempt to control spend. Nearly 80% of teams are filtering, archiving or offloading logs. This reduces the volume of data stored in primary observability platforms.
According to the report, these actions limit visibility, especially during incidents that require rapid access to historical and high-fidelity data. Leaders say common practices such as sending data to cheaper storage tiers or discarding logs before ingestion are now routine.
Respondents identify operational side effects. They include slower investigations when teams need to retrieve data from cold storage, weaker real-time insights during outages, and a higher risk that critical evidence is unavailable during security events.
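The pre-ingestion filtering and tiering the report describes can be illustrated with a minimal sketch. The severity threshold, field names, and routing labels below are illustrative assumptions, not drawn from any platform or pipeline named in the research; the point is simply that anything below the threshold never reaches the primary platform, which is where the visibility gaps arise.

```python
# Illustrative sketch of severity-based pre-ingestion filtering, the kind of
# cost-control practice the report describes. The DEBUG/INFO/WARN/ERROR scale
# and the default threshold are assumptions for illustration only.

SEVERITY_RANK = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3}

def route_log(record: dict, min_ingest: str = "WARN") -> str:
    """Send high-severity logs to the primary platform; archive the rest."""
    if SEVERITY_RANK[record["severity"]] >= SEVERITY_RANK[min_ingest]:
        return "ingest"        # full-fidelity, fast to query
    return "cold_archive"      # cheaper, but slow to retrieve during incidents

logs = [
    {"severity": "DEBUG", "msg": "cache miss"},
    {"severity": "ERROR", "msg": "auth failure"},
]
decisions = [route_log(r) for r in logs]
```

In this sketch, the DEBUG record is diverted to cold storage before ingestion, so it is absent from the primary platform if an incident later requires it.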
One IT leader quoted in the research highlighted the strain on existing tools. "We're constantly having to choose between controlling costs and keeping the data we need. Splunk just doesn't scale with the volume we're dealing with."
Imply executives describe a widespread sense that budgets are forcing organisations to reduce observability coverage.
"Too many organizations are being priced into flying blind," said Eric Tschetter, Chief Architect, Imply. "They're cutting retention because budgets force their hand, and it shouldn't be that way. Teams tell us they're pushing data into cold storage to keep costs in check and that can slow investigations, can create dangerous blind spots, and can weaken resilience. In a crisis, those trade-offs are unacceptable."
Slow queries and data access
The survey also links data accessibility to slower decision-making. 87% of leaders report that queries on observability data run slowly because the data has been moved out of easy reach. They say this delays workflows such as threat detection and incident response.
Shifting data into cheaper storage tiers can lengthen query times. Filtering out logs before ingestion reduces the completeness of search results. Leaders report that these technical consequences flow directly from current cost-control measures.
The report suggests that many teams now view performance bottlenecks as a structural outcome of their observability data strategies, rather than as isolated platform problems.
Appetite for alternatives
Despite the pressure on existing systems, most leaders do not plan large-scale replatforming projects. The study finds that 87% of respondents are exploring or open to platform alternatives that reduce cost and scale pressure without disrupting current workflows. 98% say they would adopt a fully compatible option.
Respondents place strong emphasis on continuity. Workflow patterns, existing dashboards, and established tools remain central. Leaders express frustration with cost and scale constraints, rather than with the ways their teams interact with observability data.
Tschetter said interest in new approaches centres on compatibility with existing environments. "Teams aren't looking for a rip and replace," said Tschetter. "They want to keep their workflows and scale them. If you can separate cost from data volume and work with the tools they already trust, that's a breakthrough."
Imply Lumi launch
Imply has positioned its newly introduced product, Lumi, in response to these findings. The company describes Lumi as a modern observability data layer that uses an "observability warehouse" model for high-volume log and event data.
The product acts as a backend system behind existing observability platforms, storing and querying large datasets; Imply states that this reduces an organisation's primary data footprint.
The company says Lumi supports full-fidelity retention at petabyte scale. It also states that the product improves search speed on large observability datasets.
Lumi connects with commonly used tools. Imply says it extends Splunk deployments, integrates with data pipelines such as Cribl and OpenTelemetry, and links to visualisation products including Grafana and Tableau. The product also connects with AI assistants such as Anthropic's Claude and frameworks such as LangChain.
The research and launch underline a shift in focus among observability leaders. Many now prioritise approaches that break the link between data growth and platform costs while preserving existing practices for engineers and security teams.
Imply was founded by the original creators of Apache Druid and is backed by investors including a16z, Bessemer Venture Partners and Thoma Bravo. The company expects demand for alternative observability data architectures to rise as enterprises store more telemetry and log data across observability, security and AI workloads.