AI inference in practice: choosing the right edge
Inferencing is the real-time decision-making performed by a trained AI model. As AI adoption grows, inferencing will accelerate, raising questions about where workloads should be processed and what business benefits follow. As outlined in "Distributed inference: how AI can turbocharge the edge", enterprises need to understand that where inferencing happens – whether at the user edge (device and enterprise) or the network edge (far edge, near edge and telco private cloud) – has major implications for application performance, data sovereignty, resilience and energy efficiency.

This research forms part of a series illustrating the impact of AI inference, with each report focusing on a distinct edge location and featuring an example company. This analysis examines how running AI workloads at the edge can deliver improved outcomes, with Aible as the featured company.
