AI inference in practice: time is money
Inferencing is AI's real-time decision-making in practice. In the telecoms industry, it could apply to the network, services, customer care or other corporate workloads. As AI adoption grows, inferencing will accelerate, raising the questions of where these workloads will be processed and how they will translate into business benefits.

As outlined in “Distributed inference: how AI can turbocharge the edge”, published for GTC 2025, several factors support the case for AI at the edge: the growing use of agentic AI, and the need to reduce compute latency, improve network resilience and energy efficiency, and ensure data sovereignty. The underlying goals include saving money, making money, reducing risk and helping customers.

To illustrate the impact of AI inference in practice, three Spotlights will focus on use cases, each featuring an example provider from the telecoms AI ecosystem. The reports will be complemented by a plug-and-play calculator, developed for network operators, their partners and enterprise buyers, to simulate the potential impact of running inferencing workloads at various network locations. This analysis examines AI on the near edge in distributed telco data centres, with Kinetica highlighted as an example.
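
To give a flavour of the kind of comparison such a calculator might support, the sketch below is a simplified, hypothetical model rather than the actual tool: it compares end-to-end latency and monthly compute cost for an inference workload placed at a near-edge telco data centre versus a centralised cloud region. All class names, parameters and figures are illustrative assumptions, not measured values.

```python
import math
from dataclasses import dataclass

# Hypothetical placement options for an inference workload.
# All figures below are illustrative assumptions, not measured values.

@dataclass
class Placement:
    name: str
    network_rtt_ms: float     # round-trip time from end user to the site
    compute_ms: float         # model execution time per request at the site
    cost_per_gpu_hour: float  # hourly price of one inference-capable instance
    req_per_gpu_sec: float    # throughput one instance sustains

def evaluate(placement: Placement, demand_req_per_sec: float) -> dict:
    """Return end-to-end latency and monthly compute cost for a demand level."""
    gpus_needed = math.ceil(demand_req_per_sec / placement.req_per_gpu_sec)
    monthly_cost = gpus_needed * placement.cost_per_gpu_hour * 24 * 30
    return {
        "site": placement.name,
        "latency_ms": placement.network_rtt_ms + placement.compute_ms,
        "gpus": gpus_needed,
        "monthly_cost_usd": round(monthly_cost, 2),
    }

if __name__ == "__main__":
    near_edge = Placement("Near-edge telco DC", network_rtt_ms=8, compute_ms=40,
                          cost_per_gpu_hour=3.5, req_per_gpu_sec=25)
    central_cloud = Placement("Central cloud region", network_rtt_ms=45, compute_ms=40,
                              cost_per_gpu_hour=2.8, req_per_gpu_sec=25)

    for site in (near_edge, central_cloud):
        print(evaluate(site, demand_req_per_sec=500))
```

A full calculator would of course layer in further factors highlighted in the research, such as energy consumption, data-sovereignty constraints and the revenue upside of lower latency; the sketch only shows the basic latency-versus-cost trade-off by network location.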
