Our GSMA Intelligence data and survey insights were leveraged in a preview article by the Financial Times on Apple’s quarterly earnings: “Five things to look out for in Apple’s earnings”. We discuss the impact of Covid-19 on Apple’s US handset sales and how the company is future-proofing its services business in regions with high smartphone and internet user growth.
The fight against the Covid-19 (coronavirus) pandemic could continue for years before a vaccine is widely available. This will clearly have a negative impact on global health and the global economy. But the good news is that mobile technology, in the form of tools such as contact tracing apps, can dampen the impact of the virus by reducing the number of people who need to remain in quarantine, leading to a more effective allocation of public resources.
Epidemiologists have used contact tracing for decades to help identify people who may have been infected by diseases. Digital contact tracing apps can expedite this process, and can empower users to make decisions which positively impact their health, and the health of others.
High smartphone penetration and mobile broadband coverage provide the platform to develop contact tracing solutions.
The mobile industry has developed rapidly in recent years. Today mobile broadband networks cover more than 90 per cent of global population, with smartphones making up almost three quarters of total handsets, GSMA Intelligence figures show (see chart, below, click to enlarge).
The most severely affected countries in terms of Covid-19 cases (the USA, China, Italy, France and the UK) all have relatively high smartphone penetration. This means smartphone solutions can be a vital tool in fighting the virus in these countries.
Current solutions for using mobile phones to track infections fall into two categories:
Centralised solutions based on GPS, mobile signals or other methods such as credit card records that collect the geographic movements of infected (or potentially infected) users.
Decentralised solutions using Bluetooth-based apps, where records of proximity to other users are stored in the users’ own mobile phones.
Centralised solutions have mainly been adopted in countries in East Asia. This option is, however, generally less accepted in countries with greater data privacy concerns. We are now seeing the development of decentralised solutions, notably via Apple and Google, which are jointly working on an API. This will allow public health agencies to hire developers to write their own apps, which can securely collect anonymised information about user proximity. As well as helping to address data privacy concerns, a Bluetooth solution drains less battery than GPS or mobile-signal tracking.
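The decentralised idea can be sketched in a few lines of Python. This is an illustration only, not the actual Apple/Google API: the `Phone` class, its token format and the method names are all hypothetical. The key property it demonstrates is that phones exchange short-lived random identifiers, contact logs never leave the device, and only a confirmed case's own broadcast tokens are ever published.

```python
import os

class Phone:
    """Toy model of decentralised (Bluetooth-style) contact tracing.
    Identifiers are random and rotate; contact logs stay on the device."""
    def __init__(self, name):
        self.name = name
        self.my_tokens = []        # rolling identifiers this phone has broadcast
        self.heard_tokens = set()  # identifiers overheard from nearby phones

    def current_token(self):
        token = os.urandom(16).hex()  # fresh anonymous identifier
        self.my_tokens.append(token)
        return token

    def record_proximity(self, other):
        # Both devices log each other's current identifier locally.
        self.heard_tokens.add(other.current_token())
        other.heard_tokens.add(self.current_token())

    def exposure_check(self, published_tokens):
        # The health authority publishes tokens of confirmed cases;
        # matching happens on-device, so the contact graph stays private.
        return bool(self.heard_tokens & set(published_tokens))

alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
alice.record_proximity(bob)            # Alice and Bob meet
# Alice later tests positive and uploads only her own broadcast tokens.
print(bob.exposure_check(alice.my_tokens))    # True: Bob was near Alice
print(carol.exposure_check(alice.my_tokens))  # False: Carol never met Alice
```

Note that the server (or authority) only ever sees the tokens of people who choose to report a positive test; who met whom is computed locally on each handset.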
How effective is Bluetooth-based contact tracing?
A team of researchers at Oxford University in the UK has demonstrated how a contact tracing app could work. They built an epidemiological model based on the UK’s demographic structure and smartphone usage. Their simulation suggests that for the app to be effective in controlling the spread of the virus when the country emerges from lockdown, approximately 60 per cent of the country’s population would need to use it and follow up on its recommendations to self-isolate if necessary. The report states that one infection could be averted for every one to two users.
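One reason adoption matters so much, and a back-of-envelope illustration rather than the Oxford model itself: a contact event is only traceable if both people in the encounter run the app, so (assuming adoption is roughly uniform across the population) the share of traceable contact pairs scales with the square of the adoption rate.

```python
# Pair coverage under uniform adoption: both parties need the app,
# so traceable pairs ~ adoption squared.
for adoption in (0.2, 0.4, 0.6, 0.8):
    pair_coverage = adoption ** 2
    print(f"adoption {adoption:.0%} -> traceable contact pairs {pair_coverage:.0%}")
# 20% adoption covers only 4% of pairs; 60% adoption covers 36%.
```

This quadratic effect is why modest uptake, as seen with TraceTogether, delivers disproportionately little tracing benefit.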
Clearly, the more people who use the app, the greater the impact will be in terms of suppressing the virus, but it will be a challenge to drive mass adoption. For example, Singapore’s TraceTogether app was launched a month ago, but is only being used by around 1 million of the country’s 5.7 million population. Singaporeans were seemingly reluctant to use the initial version of the app (since rectified), which had to run in the foreground to work, preventing the use of other apps and draining the battery.
It’s also likely some Singaporeans did not feel the urgency to install the app due to the country having relatively light lockdown regulations compared to European markets for example. In these countries the desire to end lockdown may be greater, which could motivate more smartphone users to install such an app. However, at the same time, individual concerns about data privacy are more prominent and could limit the adoption of digital contact tracing. Some people may fear hackers while others worry about surveillance and misuse of data by the government.
There is evidence, however, that Europeans may be prepared to compromise more on privacy given the potential benefits of a contact tracing app in fighting Covid-19.
Oxford University worked with a team of behavioural economists, who surveyed 6,000 potential app users in the UK, France, Germany, Italy and the USA. The proportion of respondents that said they would install the app fell in a range of 67.5 per cent to 85.5 per cent. Another survey conducted by The Guardian and Essential Research showed around half of people in Australia believed the government’s Covidsafe contact tracing app would limit the spread of the virus and also speed the removal of physical distancing restrictions.
Contact tracing apps are based on a natural model of trace and self-isolation, but effectiveness can be improved if the data is shared further
The simple model is based on a scenario which identifies those that have been in contact with infected people and assumes they will self-isolate, but there is still space to accelerate the process and further reduce the cost to society. This, of course, depends on the availability of public resources and how much data individuals are willing to share with authorities.
If all contacted people are tested then only infected people need to stay at home, which will lessen the impact on economic activity. This, however, means the healthcare system must be able to offer enough tests and they must be able to deliver those tests in time. Both of these objectives will be much easier to achieve if the health system has access to the data generated by the app.
Beyond testing, there are plenty of other ways a healthcare system could leverage the data for good. Resources like hospital beds and ventilators, for example, could be dynamically configured based on the data collected, without waiting for self-reporting. We need these tools at the right place and time to save lives, but not so many that we have wasted resources on excess capacity for a once-in-a-decade epidemic.
What happens in countries with lower smartphone penetration? Tracking based on mobile signals and sending warnings via broadcast text messages can still help, although those approaches also raise privacy concerns (to the extent that any Covid-19 solution involves data from mobile network operators, the GSMA has developed guidelines to help mobile operators support governments and health authorities while maintaining subscriber privacy and trust). These solutions could also present issues with battery life, and the data collected from basic and feature phones is likely to be less granular than that of smartphones and possibly less effective.
There is no universal solution which can apply to all markets. Each country will likely apply a basket of solutions in the fight against coronavirus. As the creator of the TraceTogether app said in a recent blog post, “automated contact tracing is not a coronavirus panacea”. We need technology, but it cannot win on its own.
– Gu Zhang – senior analyst, Mobile Operators and Networks, GSMA Intelligence
The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.
The collapse of OneWeb has sent shockwaves through satellite communications. Once a bright star in the sector with multi-billion dollar backing, the refusal of SoftBank to pour more money into the start-up led to OneWeb filing for Chapter 11 bankruptcy relief at the end of March. The timing of the announcement was particularly brutal, just hours before OneWeb launched more than 30 micro satellites into orbit from Kazakhstan, part of a planned constellation of 720 satellites which is now in serious doubt.
The demise of OneWeb has been attributed to both Covid-19 (coronavirus) and SoftBank tightening its investment belt. The Japanese giant has projected $16.7 billion in losses for 2019 for its Vision Fund, with OneWeb identified as one of its most notable losing investments not included in that fund. But industry sources suggest the satellite start-up has been struggling for years to adapt its business model as the consumer LEO proposition became increasingly unrealistic. And SoftBank has had enough. Covid-19 might have just been the accelerator of a foretold death.
The advantage of LEO satellites over their high-orbit cousins
Low earth orbit (LEO) satellites have been hailed as the next big thing in non-terrestrial communications, with the potential to boost data backhaul for operators, bring broadband connectivity to underserved regions, and further enable cloud and edge-based services. The key advantage of LEO satellites over legacy geostationary earth orbit (GEO) satellite services is the much lower altitude. This not only slashes the cost of launching and maintaining the constellation, but also improves downlink speeds and latency due to a much-reduced data round trip.
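The latency advantage follows directly from the physics of the round trip. A rough lower bound, ignoring ground-network hops and processing delay, is four times the orbital altitude (user up to the satellite, down to the gateway, and the reply back) divided by the speed of light; the LEO altitudes below are typical figures, not any specific operator's orbit.

```python
# Best-case physics floor on satellite round-trip latency.
C = 299_792.458  # speed of light, km/s

def min_round_trip_ms(altitude_km):
    # user -> satellite -> gateway, plus the reply back: 4 x altitude in total
    return 4 * altitude_km / C * 1000

for name, altitude in [("GEO", 35_786), ("LEO (1,200 km)", 1_200), ("LEO (550 km)", 550)]:
    print(f"{name}: >= {min_round_trip_ms(altitude):.0f} ms round trip")
```

A geostationary satellite cannot beat roughly half a second of round-trip delay no matter how good the electronics are, while a LEO bird at a few hundred kilometres has a floor in the single-digit milliseconds, which is why LEO is the only satellite architecture plausibly competitive with terrestrial broadband on latency.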
However, one of the key drawbacks of the LEO proposition is the expense on the ground. In order to maintain a strong and constant data link, LEO satellite antennas need to be steerable: simply put, the dish has to move to track the satellite. This means the antennas are considerably more complex and expensive than comparable terrestrial wireless receivers or the basic passive aerial on a GEO satellite phone handset. Clearly there is an option to simply throw more birds into the air to reduce the need for steering, but aside from the huge additional cost this raises other issues in terms of precisely planning the satellite trajectories and avoiding space debris in our already-crowded skies. So far, the focus has been on the space segment of LEO constellation solutions, with most of the investment going into the design, production and launch of the satellites, without much concentration on the end-user device segment (the stuff on the ground).
The cost of these antennas varies, but is likely to be higher than the comparable cost of a fibre (FTTP) connection for more than 99 per cent of homes and small businesses. SpaceX reportedly said the cost of an individual satellite terminal was around $1,000 and that mass-market production could bring the cost down to $300. It’s likely to be many years before we’re even close to developing LEO antennas cheap enough to support a consumer proposition and, by that time, other solutions such as 3G and 4G are likely to have reached all but the most remote areas of the planet.
Despite the demise of OneWeb, the LEO proposition is still flying
This is not to say LEO is dead in the sky. Other operators such as Canadian communication satellite veterans Telesat have made a firm decision to target the wholesale and enterprise sector, even designing satellites with built-in flexibility via steerable antenna to target receivers with the highest data demand. There is a major market opportunity here in offering additional backhaul to existing operators as well as data-hungry industry verticals. And there will always be the sectors that need reliable connectivity in unserved regions such as aviation; maritime; oil and gas; and defence.
The renaissance of satellite has been driven by LEO constellations, allowing operators to throw hundreds of satellites into the sky at a lower cost. But whether satellite can ever offer a realistic consumer service proposition depends on cutting costs in three areas: networks, service and devices. The cost of putting satellites in the air isn’t likely to fall much (unless there’s a run on rocket fuel), while the amount the operators can charge the end-user is constrained by the usual RoI and competition from other technologies. So, until devices, in this case the customer premises equipment (CPE), come down in price, LEO is unlikely to be anything more than a very niche consumer proposition.
What’s next for OneWeb?
It doesn’t look like buyers are queuing up to acquire the company’s constellation assets, especially given the hardware has apparently been designed with a consumer rather than an enterprise focus. Each LEO satellite provider has designed their satellites with specific requirements which match the shared spectrum they’ve got a licence for and the planned business model. So the brutal truth is the dozens of satellites already launched look set to become more space junk, until their orbits decay sufficiently that they succumb to gravity and burn up on re-entry.
The asset OneWeb is flaunting as its most sellable, its licensed spectrum, also has limitations. First, OneWeb’s spectrum is on the shared Ku-band and cannot be repurposed easily; not all OneWeb competitors hold Ku-band spectrum. Second, being shared spectrum, any changes to the satellites using the band would require extensive testing to make sure there is no interference with the other constellations using the same band.
With around 70 satellites in operation, OneWeb passed the first of the new ITU milestones approved in 2019. As per the new ITU requirements for non-geostationary constellations, spectrum licensees need to deploy 10 per cent of their constellation within two years of the licence being granted to secure priority status against interference from other constellation hopefuls. But whoever acquires the spectrum would have to fulfil the other ITU requirements: launch 50 per cent of satellites within five years and 100 per cent within seven. Otherwise the licence is revoked or limited to the satellites launched. An alternative would be a pre-emptive strategy from another LEO constellation operator to block other entrants. Considering the expense involved, on top of the limitations imposed by the ITU, this unlikely route would require deep pockets.
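The milestone arithmetic above can be made concrete. Taking OneWeb's planned 720-satellite constellation and its roughly 70 satellites in orbit (both figures from the article; the exact launch count at filing is approximate), the ITU schedule implies the following shortfalls for any acquirer:

```python
import math

# ITU deployment milestones for non-geostationary constellations:
# 10% of the constellation within 2 years of the licence, 50% within 5,
# 100% within 7; otherwise the licence shrinks to what has been launched.
PLANNED = 720   # OneWeb's planned constellation size
IN_ORBIT = 70   # approximate number of OneWeb satellites already flying

for years, share in [(2, 0.10), (5, 0.50), (7, 1.00)]:
    required = math.ceil(PLANNED * share)
    print(f"year {years}: need {required} satellites "
          f"({max(0, required - IN_ORBIT)} still to launch)")
```

The two-year milestone is already cleared, but the five- and seven-year deadlines would require launching hundreds more satellites, which is the deep-pockets problem for any spectrum buyer.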
Will other players in the LEO constellation race be affected?
The demise of OneWeb could also cast shadows on the viability of the other three main LEO satellite constellation projects, namely SpaceX, Amazon’s Project Kuiper, and Telesat. However, there are some big differences between OneWeb and the other three. They all have recurring revenues from related or, in the case of Amazon, unrelated businesses. Of the three, Telesat’s main business is providing services to the communications and broadcasting market segments with GEO satellites, which positions it well with an existing customer base for its enterprise and backhaul transport plans for its LEO constellation. Meanwhile SpaceX and Amazon, being vertically integrated, are better positioned than OneWeb and have clear business models outlined. Amazon, with its vast resources, might even be willing to take a stab at an end-user terminal, if anything as a reference design, as it has done in other consumer ventures. Ultimately the downfall of OneWeb clears a market which was potentially becoming overcrowded from the start.
– Peter Boyland – lead analyst, Ecosystem Research, GSMA Intelligence
The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.
Sri Lanka, the land of Ceylon tea, is among the top South Asian nations in various demographic and economic indicators. It maintains a high literacy rate of more than 90 per cent, and an impressive GDP per capita: $4,102 in 2018. The country’s telecom industry is also on a good footing with the presence of various established groups like Axiata, Bharti Airtel and CK Hutchison contributing to mobile subscriber penetration of almost 150 per cent, and 4G coverage of 95 per cent of the population.
This makes the country a prime candidate for 5G, right?
Maybe. But two key realities on the ground suggest a more complicated story.
Limited 4G Network Options
While Dialog and Mobitel launched 4G in 2012, in line with players in other countries, Hutchison was a late entrant in the space (launched as late as Q1 2020). Airtel, meanwhile, has not even forayed into the 4G market.
Limited 4G Penetration
Eight years post-launch, 4G services have barely scratched the surface. Subscriber penetration remains at 14.66 per cent as of Q1 2020 and is expected to reach 33.1 per cent by 2025. That’s strong growth, sure. But it compares poorly to other South Asian nations including Bangladesh, where penetration is expected to outperform Sri Lanka to reach 46 per cent by 2025 (see chart, below, click to enlarge).
But, if Sri Lanka benefits from solid fundamentals, we must ask why 4G uptake is so low. Let’s look at a few key factors.
High taxes: Taxes (such as service and telecom levies) make mobile internet usage an expensive affair. For 1GB of data, the poorest 20 per cent of the country’s population spends approximately 5.3 per cent of their monthly income. This figure increases to more than 7 per cent for a 5GB data bundle.
Low digital literacy: At the end of June 2019, digital literacy (i.e. whether people can use a computer, laptop, tablet or smartphone on their own) in Sri Lanka stood at only 44.3 per cent.
Affordability (or lack thereof): While launching Hutchison’s 4G services, its CEO acknowledged affordability and limited business usage as uptake inhibitors: “Although 4G network services have already been launched in Sri Lanka by other operators several years ago, its mass adoption till recently has been limited due to several factors including the availability and affordability of 4G phones, and the requirement of applications that really needed 4G (instead of 3G) bandwidth.”
Stagnant smartphone penetration: 4G without smartphones doesn’t really make sense. Smartphone penetration being flat in the range of 40 per cent to 50 per cent since Q3 2017 hasn’t helped 4G’s prospects.
So, what is the industry doing to make mobile internet and 4G space more attractive?
To create a healthy, competitive 4G environment for operators and make 4G more accessible to the masses, the government has been taking various initiatives.
In late 2018, it removed floor rates for voice calls to improve the health of the telecoms sector. The move was aimed at promoting cost optimisation and allowing more competition. In October 2019, the National Digital Policy was released, setting targets of at least 70 per cent internet penetration, more than 95 per cent indoor and outdoor high-speed broadband, 4G and 5G coverage, and various other development parameters.
Towards the end of 2019, we saw a reduction in the Telecommunication Levy, a direct tax, by 25 per cent, bringing it down to 11.25 per cent. And this year? Already in 2020 we’ve seen the Telecoms Regulatory Commission re-initiate trials with all major operators to encourage the launch of 5G services, which further connects with the country’s Smart Nation vision.
What does the future hold?
Well, that’s pretty much the most important question. Right?
Clearly, Sri Lanka could be a successful 5G market. But, first, fundamentals need to be addressed and 4G’s potential needs to be fully tapped. Better awareness and marketing for smartphones will help. Improved connectivity would too, especially in suburban and rural areas. Fixed wireless offers could help to move the market along.
How will we know if this is working? What are we watching? CAPEX and new product offers. Operators have announced network investment plans and we’ve seen those support 4G expansion. As we move into a post-pandemic world, we need to see how these continue.
Yes, network investments are important, but they are meaningless without solid product offerings. Here, too, we’ve seen progress, particularly as new 4G launches spur competitive offerings. Think Hutchison introducing new product offerings across data, voice and other services targeting specific customer segments soon after moving into 4G, along with launching post-paid packages for consumers for the first time in 20 years.
5G in Sri Lanka may not seem sensible today, but continued execution on these fundamentals will set the stage for its success.
– Ankit Sawhney – research manager; Charu Paliwal – team lead, GSMA Intelligence
The editorial views expressed in this article are solely those of the author and will not necessarily reflect the views of the GSMA, its Members or Associate Members.
From Brussels and Davos respectively, the CEOs of Google and Microsoft recently voiced support for regulating AI. Over recent months, we have seen an acceleration of AI regulation efforts coming out of the European Union (EU), with the European Commission (EC) vowing to deliver a legislative framework for AI within its five-year term to 2024.
Details about whether this will represent tweaks of existing frameworks, principally GDPR, the EU data regulation which set the standard for data privacy globally, or a more comprehensive set of additional measures, remain unknown. However, leaked documents provide indications about where the EC is heading with AI regulation, highlighting three trends and associated implications.
AI applications versus risk levels
A would-be AI regulatory framework might require companies to self-assess the potential risk attributable to their own AI-based services. Such a categorisation would exist along a continuum based on the sector and the application. For example, healthcare will be a high-risk sector when combined with AI, while autonomous cars will be a high-risk application because if AI fails, it could cause fatalities.
In this scenario, companies would have to put new procedures in place to understand and justify the levels of risk their AI-based services involve. Assessments would likely include the potential impact on consumers and seek to quantify the risk from algorithmic bias and data sourcing by third-party developers which contribute to a given company’s services. Regulators would have to carefully craft legislation that does not unduly impose extra operational burdens or re-engineering costs, particularly when GDPR is already in force. Against this backdrop, and given AI is a fast-moving field, it may be more feasible and instructive to adopt a phased approach, prioritising high-risk areas in the first tranche.
Tech innovations, privacy risks and empowering individuals
Another option the Commission is reportedly mulling involves giving people an enhanced personal data portability right. In the recent Consumer Insights GSMA Intelligence survey, we discovered a relatively large proportion of respondents (22 per cent, second in order of preference among tech companies, operators, regulators and state authorities) believed they themselves should be responsible for their own data safety. In this spirit, regulation should encourage AI services vendors to create tools which help users manage their privacy risks. This could include some or all of:
Explainable AI, an emerging branch of AI systems aiming to explain and reason every decision made in a manner easily understandable by humans affected by or related to it.
Digital sovereign identity, based on blockchain, enabling individual users to fully own and control their digital history, along with any data they have shared.
Zero knowledge proofs, cryptographic methods by which a user can prove to another user, say an AI service vendor, that they know something to be true without conveying any additional information.
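Zero-knowledge proofs are easier to grasp with a toy example. Below is a Schnorr-style identification round in Python, with deliberately tiny parameters for readability (real systems use large primes or elliptic curves, so this is illustration only, not production cryptography). A prover holding a secret x with public key y = g^x mod p convinces a verifier it knows x without ever revealing it.

```python
import secrets

# Toy Schnorr identification over a tiny group (illustration only).
p, g, q = 23, 5, 22   # 5 is a primitive root mod 23; its order is 22

x = secrets.randbelow(q)  # prover's secret
y = pow(g, x, p)          # public key: y = g^x mod p

# One round of the protocol:
r = secrets.randbelow(q)  # prover's random nonce
t = pow(g, r, p)          # commitment, sent to the verifier
c = secrets.randbelow(q)  # verifier's random challenge
s = (r + c * x) % q       # response; on its own it reveals nothing about x

# Verifier checks g^s == t * y^c (mod p): this holds because
# g^(r + c*x) = g^r * (g^x)^c, so a correct response proves knowledge of x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

In an AI-services context, the same principle would let a user prove a property about themselves (say, being over a certain age) to a service vendor without handing over the underlying personal data.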
Data and data hubs – the role of public sector and industrial data
The EU announced it will spend €1 billion on creating “common European data spaces”, expanding its existing public sector data hub to also include industrial data sets. The EC’s strategy is to help European companies capitalise on the data they generate and eventually catch up in the AI race against the US and China. These data spaces can be a prime instrument for innovation, benefiting smaller companies and individuals through more open, cheaper, better quality and more diverse data access. All of this is particularly relevant for accelerating AI innovation. However, interoperability across existing data hubs is far from a done deal. Scaling these at an EU-wide level would also raise intense debate about their governance.
To sum up, a mix of light and smart regulation won’t be an easy task, as it would have to balance a number of trade-offs, such as stricter rules versus space for companies to innovate. Emphasis on technologies which empower individuals and companies to manage AI-related risks is probably the most prudent course for policy makers and regulators, but it’s also up to the tech companies to deliver through tech tools. Finally, without a doubt, the EU perceives AI as a core element of its digital sovereignty strategy. However, the EC’s intentions towards common European data spaces remain unclear for now, risking delays to progress. What exactly does ‘European’ data mean, and is it possible that only European companies could profit from these spaces? Politico reported that Internal Market Commissioner Thierry Breton raised this point at a tech conference in December 2019, when he outlined a goal for European companies to have access to domestic data to “create value” in the bloc.
What difference would that make to current requirements for data centres to reside in the countries they operate in? And, how is that conducive to AI innovation?
The EU is setting the bar very high. It needs to give fast and bold answers before criticism starts to gather.
– Christina Patsioura – senior analyst, Emerging Technologies, GSMA Intelligence
In November 2018 we looked at the implications of a growing shift among operators to share or divest tower assets. Two clear takeaways emerged:
5G necessitates a different network strategy. Unlike previous generations, 5G deployment is not only about adding more sites and increasing backhaul capacity. In fact, it is more about rethinking the whole network architecture to make it agile. The high capacity requirements of 5G will necessitate the use of small cells in cities and areas of high footfall (such as airports) to complement national macro networks. Private networks (for example to sell into enterprise customers) and the concept of a neutral host (such as for sports stadiums) are further examples of diversification.
Decoupling. The traditional vertically integrated model of network ownership, in which operators own and control all passive (sites and towers) and active (radio access) infrastructure, is being joined by a decoupled approach where parts are owned and operated by a third party (such as a tower company).
With 5G becoming commonplace, operators face deepening financial and market pressures as required infrastructure funding increases dramatically in the investment-heavy evolution toward 5G. How much will this all cost? A report published in 2019 by GSMA Intelligence predicted an outlay of about $1 trillion on 5G alone between 2019 and 2025.
This represents a massive cost pressure, particularly given low revenue growth. As a result, the industry has been driven to look at multiple ways of keeping costs in check. The most discussed and pursued so far are voluntary network sharing, virtualisation and the shift to open RAN. But, as 5G opens up new network infrastructure models, new operations models need to be considered as well: small cell sharing (or partnerships with municipal authorities); private network launches; and creating neutral hosts. While, to date, infrastructure sale and leaseback deals are more common (where an operator sells towers to a TowerCo or other third party on the proviso it will be able to lease access), the structural separation of assets is another option worth considering.
Yet, as much as we might call these new models for a 5G era there are examples to look to in terms of guidance going forward.
In 2014, PPF Group, a local private equity fund, bought O2 Czech Republic from Telefonica and separated the network (privately held and known as Cetin) from the retail business (publicly listed). The deal not only benefitted the new owners: the country received a significant infrastructure upgrade as well. The creation of a pure network infrastructure player lowered borrowing costs and improved capital access such that Cetin increased its network capital expenditures by 40 per cent a year after separation. From there on, capital expenditures increased by 13 per cent annually. This led to a jump in fibre coverage and broadband speeds at a level rarely seen in Europe.
Reliance Industries demerged its telecom infrastructure, including tower and fibre assets, into a separate company, which has been monetised through a deal with private equity firm Brookfield Asset Management. Following the separation, Reliance Jio has become asset-light, with a balance sheet of $33 billion. Now, by putting all the digital entities under a new company, Reliance Industries is readying the services arm for a better customer experience and deeper penetration, eventually converting into a better valuation. The company has already made it clear that it will list Reliance Jio’s shares on the market. The new arrangement leaves the operator almost debt-free, thus improving its valuation.
Telstra spun out its infrastructure assets into a new business called Telstra InfraCo, which managed approximately $11 billion in network infrastructure assets. Telstra started recognising internal access charges in its fiscal 2019 (the year to end-June 2019): the fees Telstra essentially pays itself for accessing its own infrastructure. With these charges included, InfraCo’s revenue rose 51.6 per cent to $4.95 billion. Telstra is now planning to transfer all of its mobile infrastructure into the business, which operates as a semi-autonomous unit.
TDC, the Danish operator, was acquired by a Macquarie-led consortium at a 34 per cent premium to the market price, with a structural separation initiative as one of the core pillars of value creation justifying the takeover.
These examples might make network separation look lucrative, but there is no simple copy-paste framework. It is a complex and unique call for each operator, and can never be generalised. However, we can certainly look at some basic factors critical to assess when thinking about spinning off network operations:
Regulation on access and control.
Quality and scale of infrastructure already in place.
Level of site fibre and network sharing.
Lack of modularity and complexity in operations.
Regulation on wholesale pricing.
Obviously, lower pressure on any of these fronts drives better odds of success. Regardless, as we move into fuller 5G deployments from operators large and small, we can imagine these considerations may not always be top of mind, particularly where high-profile examples lead to a copy-paste mentality.
In the process, we will see new failures, new successes, and might even see new models develop.
– Aryan Jain – research manager, strategy, GSMA Intelligence
The mobile market in Europe has come to the end of its main phase of investments in 4G networks and operators are now turning their attention to 5G. Services have already been launched in some markets, for example in Switzerland and the UK, with more markets expected to launch in 2020.
Despite these positive steps, industry analysts do not expect a rapid rollout of 5G in Europe. On the contrary, most expect 5G deployments in European markets to lag behind countries such as the US, China, South Korea and Japan. The reason is that delivering 5G services will require large additional investments, and these will be a lot harder to justify in European markets that have recently delivered lower profit margins than other parts of the world.
With this largely subdued investment climate, what can be done so Europe doesn’t lose out on the 5G opportunity?
One thing which could change this is competition dynamics. More concentrated market structures (for example, with fewer players) can deliver economies of scale and more efficient utilisation of assets (such as sites and spectrum), and can enable large investments in 5G networks. However, concentrated markets can also raise flags with regulatory and competition authorities, which may be concerned about higher consumer prices.
Understanding the relationship between market structure and the quality, innovation and prices consumers can expect is, therefore, crucial. There is vigorous debate about the competition dynamics that will deliver the best value for consumers in the 5G era. As arguments can be made in both directions, it is important to look at data from the recent past to draw lessons that can inform decisions going forward.
What do we know from 4G?
This is precisely what we did. In a recent study, we evaluated how market structures impacted consumers during the 4G era in Europe. We looked at data covering the period from 2011 to 2018 for 29 European countries, combining coverage and other publicly available data from operators with network-quality measurement data from Ookla, a global leader in mobile and broadband network intelligence, testing applications and technology.
And what did we find?
Overall, the 4G era was a positive and expansive one for European mobile consumers everywhere. While mobile performance is still far from perfect in all places, by 2016, 90 per cent of consumers were already covered by 4G networks. Since then, operators have delivered greater speeds and lower latencies (signal delay), resulting in a far superior consumer experience today. Download speeds increased on average from 2Mb/s in 2011 to 37Mb/s in 2018. The average price per MB also dropped sharply as mobile data became cheaper and users consumed ever-increasing volumes, with average monthly data usage increasing more than twelve-fold.
But while all European consumers experienced improvements during the 4G era, the study shows those in three-player markets benefitted the most from higher quality and innovation.
By the end of 2018, three-player markets were outperforming four-player markets by 4.5Mb/s in download speeds, and three quarters of that difference (around 3.5Mb/s) can be attributed to the role of market structure in three-player markets. In particular, operators in more concentrated markets were able to utilise assets more efficiently (especially spectrum) and generate higher returns that allowed them to invest more in their networks. This is an important insight when considering the best ways to unlock the full potential of 5G networks, including advanced applications that require very small signal delay, high speeds and plenty of network capacity.
These are encouraging results. But, you might ask, did they come at the expense of higher prices?
On the basis of the pricing data we were able to analyse, it did not. In addition to general improvements in performance, prices also decreased across Europe in the 4G era, indicating more efficiency and better value for consumers over time. Implicit unit prices (i.e. revenue per MB and revenue per user) decreased similarly in both three- and four-player markets.
In other words, during the 4G era, a European consumer in a three-player market typically experienced a better quality mobile broadband service while paying similar prices per MB of data to a consumer in a four-player market.
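The mechanics of these implicit unit prices are straightforward to illustrate. Below is a minimal sketch using entirely hypothetical figures (the revenue, subscriber and usage numbers are assumptions for illustration, not data from the study), showing how a twelve-fold rise in usage against flat revenue translates into a twelve-fold fall in implicit price per MB:

```python
# Illustrative calculation of implicit unit prices from hypothetical
# operator financials; none of these figures come from the study itself.

def implicit_unit_prices(recurring_revenue, total_traffic_mb, subscribers):
    """Return (revenue per MB, revenue per user) for a given period."""
    revenue_per_mb = recurring_revenue / total_traffic_mb
    revenue_per_user = recurring_revenue / subscribers
    return revenue_per_mb, revenue_per_user

# Hypothetical market: revenue roughly flat while monthly usage grows
# twelve-fold, mirroring the usage trend described above.
revenue = 1_000_000_000               # assumed annual recurring revenue
subs = 10_000_000                     # assumed subscriber base, held constant
usage_2011 = subs * 0.5 * 1024 * 12   # 0.5 GB/month per user, in MB per year
usage_2018 = usage_2011 * 12          # twelve-fold usage growth by 2018

p11, _ = implicit_unit_prices(revenue, usage_2011, subs)
p18, _ = implicit_unit_prices(revenue, usage_2018, subs)
print(f"Implicit price per MB fell {p11 / p18:.0f}x")  # 12x with flat revenue
```

The same arithmetic explains why unit prices can fall sharply even without headline tariff cuts: the denominator (traffic) grows far faster than the numerator (revenue).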
Lessons for 5G
Does this, therefore, mean more consolidation in European markets is the only solution to deliver the right investments for 5G? Not necessarily.
An option often touted as an alternative to full consolidation is increased network sharing. Our analysis showed that, in the 4G era, progressively deeper levels of network integration delivered improved performance, although they fell short of full integration in terms of network quality. Network sharing could, therefore, also help promote faster deployments of high-performing 5G networks in Europe over the coming years.
Ultimately, every case and country needs to be considered on the basis of its own merits and situation. What works in one country does not necessarily work in another, and operator incentives and consumer attitudes to products and services will differ from market to market.
But there is one key lesson from the 4G era that does apply to all countries equally: to support the delivery of high performing 5G networks, policymakers should fully consider all aspects of consumer welfare when assessing the relative advantages of more concentrated markets in merger control, antitrust policy and spectrum management.
– Pau Castells – director, economic analysis, GSMA Intelligence
With MWC traditionally being a showcase for network vendor innovations, keeping up with all launches from all key players is never an easy feat. That’s why suppliers often take time to update the media and analyst communities just before the show begins. MWC20 might have been cancelled, but those pre-briefs still went ahead.
Taken as a whole, these events might seem to present a picture of how networks will evolve. They don’t. Instead, they simply provide insight into how specific suppliers see the market. And, while it’s important to understand what each vendor is doing to move the market forward, Huawei’s analyst event from last week stands out for a few reasons:
Huawei is the market’s biggest networking player.
With network infrastructure, device, and enterprise businesses, it brings a unique perspective.
Winning a total of six GLOMO Awards suggests impressive R&D efforts; and
The status of its annual analyst conference (an opportunity to dig deep into its product and technology roadmaps, traditionally scheduled for April in China) remains uncertain.
Returning, then, to the idea that a vendor’s messaging (and products) reflects its view of market realities, we can ask “what have we learned from Huawei?” What does it see as major market priorities and drivers? What products follow? And, most importantly, do these views align with market realities?
5G credibility: momentum and solutions
Message and credentials
Like most vendors, Huawei is quick to point out its 5G successes (91 commercial contracts) and the breadth of its 5G offering. Including RAN, core, transport, IT and cloud infrastructure, and devices, the offer is certainly comprehensive.
While end-to-end solutions and references are a staple of vendor marketing, operators don’t necessarily prioritise them in their purchasing decisions: our latest research shows security, pricing and performance being more important. That said, if the value of early momentum and broad assets can be explained, it will be convincing.
Spectrum: 5G versus fragmentation
Message and credentials
With 5G best deployed in large bandwidths (100+ MHz), Huawei highlights the need for operators to leverage disparate spectrum assets, something few in the industry would dispute. To cope with this, Huawei introduced wideband radio assets (up to 400MHz across the C-band), an all-in-one active antenna unit (AAU) supporting multiple sub-6GHz bands, and played up the use of software algorithms to optimise 5G performance in a given amount of spectrum along with its CloudAir Dynamic Spectrum Sharing solution.
As a scarce (and costly) resource, operators continue to view any technologies which maximise the use of their spectrum as critical: in the 5G era, new spectrum allocations are one of the top investment considerations. Disappointingly, operators do not yet see the use of AI and automation for spectrum management being as important as other use cases, making messaging on this front all the more important.
Spectrum: super uplink
Message and credentials
Use of TDD spectrum has long promised an ability to tune uplink versus downlink capacity to meet market demands. Unfortunately, FDD spectrum allocations remain the norm in most markets and, regardless of allocation flexibility, propagation in higher frequencies represents a fundamental uplink barrier. Allowing TDD/FDD coordination and TDD/TDD coordination for improved uplink, Huawei highlighted its Super Uplink technology as a solution for improved uplink speeds and latency.
The Super Uplink solution isn’t new: it was released in June 2019. New messaging around end-to-end support, however, adds credibility, helping to move the technology from a concept to something operators can understand how to deploy. Perhaps more importantly, a reference highlighting that the 3GPP has “officially accepted this innovative technology” takes it from a proprietary offer to something that could be supported by a broader ecosystem.
5G Services: slicing and the core
Message and credentials
Improved uplink and latency performance speaks directly to the specific requirements of enterprise and industrial 5G use cases. And supporting consumers alongside enterprise verticals (leveraging a common network) speaks to the value of network slicing, with Huawei spotlighting its end-to-end offer including service management, slice management, performance management and AI-based operations tools.
Like its Super Uplink solution, talking up network slicing in support of industrial digital transformation is not new for Huawei. And Huawei wasn’t alone in doing this: Nokia announced its own end-to-end network slicing offer just this week, claiming to be first for 4G and 5G support. Regardless, added details around the components included in Huawei’s slicing offer (CSMF, NSMF, etc.) and the network components touched (UE, RAN, transport, core) paint a picture of a more complete, and credible, offer. Compared with capabilities like URLLC support and a simplified network architecture, network slicing does not rank high as a standalone 5G benefit. If, however, end-to-end solutions (including simple operations and management) can bring network slicing closer to a scalable, commercial reality for operators, then that prognosis could improve.
Green operations: power consumption versus data consumption
Message and credentials
With the launch of every new mobile technology generation, a renewed focus on network operations energy efficiency is only natural. After all, electricity bills add to overall costs. In 2020, with signs of climate change registering across the globe, energy-efficient network operations are a greater priority than ever. For its part, Huawei highlighted AI and automation tools which regulate network operations (turning off channels, carriers or symbols as appropriate) across technologies and spectrum bands, while outlining how 5G paired with its massive MIMO innovations can deliver much more data (50 times that of 4G) for a given amount of power consumption.
There can be no doubt that green network operations are a key operator priority. But, where the headline energy efficiency stories from Huawei (automation, and 5G data-versus-energy consumption) are also likely to be claimed by competitors, a host of other innovations tell a more complete story: silicon-level integration for improved efficiency, gallium nitride (GaN) power amplifiers, use of green technologies (solar, wind) in rural areas, and energy efficiency included as a consideration in site survey tools. Competitors will, doubtless, make similar claims. Regardless, treating energy efficiency holistically signals it is more than just a passing fad or narrow marketing message.
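The 50-times data-for-power claim can be reframed as energy per bit, a common way to compare generations. A minimal sketch follows, with assumed (not vendor-published) site power and throughput figures:

```python
# Energy-per-bit comparison; the power and throughput figures are
# illustrative assumptions, not vendor-published numbers.

def joules_per_bit(power_watts, throughput_bps):
    """Energy cost of delivering one bit at a given sustained throughput."""
    return power_watts / throughput_bps

# Hypothetical site drawing the same power under 4G and 5G, with 5G
# delivering 50x the data, as in the claim above.
power_w = 1_000          # assumed site power draw, both generations
lte_bps = 100e6          # assumed sustained 4G throughput, 100 Mb/s
nr_bps = lte_bps * 50    # 5G delivering 50x the data for the same power

e_4g = joules_per_bit(power_w, lte_bps)
e_5g = joules_per_bit(power_w, nr_bps)
print(f"4G: {e_4g:.2e} J/bit, 5G: {e_5g:.2e} J/bit ({e_4g / e_5g:.0f}x better)")
```

Under these assumptions, "50 times the data for the same power" and "one fiftieth the energy per bit" are the same claim stated two ways; total site energy still rises if traffic grows faster than efficiency improves.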
Enterprise communications: Wi-Fi for the campus
Message and credentials
In a departure from the discussion around 5G and mobile networks, Huawei took time to announce its HiCampus solution for Wi-Fi connectivity across enterprise campuses. Why? In part, because the vendor can claim its new Wi-Fi 6 products borrow from its 5G R&D. In part, because the enterprise campus is a critical centre of business activity. In part, because of some impressive product and performance claims: ten new Wi-Fi 6 APs, 10 milliseconds latency, 10Gb/s throughput, 100 Mb/s guaranteed across the deployment.
As a part of their 5G deployment planning, integration with Wi-Fi is not a major focus for operators: it’s one of the least important 5G RAN priorities. For Huawei, however, it’s an important asset. It helps the vendor differentiate against others which don’t have the same depth of enterprise assets or scale. And, while Huawei’s Wi-Fi messaging could raise questions about use cases around Wi-Fi versus private networks versus network slicing, any discussions on this front simply provide solution suppliers with another opportunity to engage (and sell to) their customers.
Of course, these products and messages are just a sample of what was discussed at Huawei’s product and solution showcase last week, and a fraction of what it will likely announce over the coming weeks. But, if we expect 5G roll-outs to pick up pace this year, then there’s no doubt competition across 5G network vendors will pick up as well. Understanding the strategic priorities of leading vendors, then, provides a glimpse into how that competition will play out over the year.
– Peter Jarich – head of GSMA Intelligence
Two months ago, it felt nearly profound to say we were living through unprecedented times as the enormity of the Covid-19 (coronavirus) pandemic became clearer. Today, it feels like a trite understatement.
Around the world, the first order of business has been to stop the spread of the novel coronavirus in an effort to support the health of national populations and healthcare systems. The strategy has included quarantines, stay-at-home orders and shutting down (most) economic activity, to the extent that no sector of society or the economy will remain unaffected.
The impact on mobile and telecoms businesses (in the near- and long-term) will be discussed and analysed for months, if not years to come. Personally, my favourite coverage includes the impact tracking done by Mobile World Live along with the analyses my team has put together looking at diverse markets and topics, including:
5G Uptake and Adoption 
Mobile Network Resiliency 
Fixed and Pay TV Markets
Digital Consumer Markets
IoT and Enterprise Services
Supply and demand dynamics will, doubtless, conspire to hold back 5G adoption in the near-term. And, where an economic downturn impacts small businesses disproportionately, the impact on IoT and enterprise wireless could be profound. Long-term, however, the impacts are a broad set of unknowns. Will post-lockdown pent-up demand drive a wave of 5G adoption? Will several months of Netflix bingeing and online gaming drive greater usage in the long-term? Will working from home (and the associated increase in home broadband usage) become the new normal? Will the April Fools’ joke of Disney buying F1 become a reality as live sporting events struggle under the burden of social distancing? Or will social distancing give AR and VR the boost they need to be successful?
One thing that is for certain? In the here and now, mobile operators are leveraging their networks, services, and businesses to support people and societies struggling against the pandemic. You see this in the news as operators (and regulators) race to put new spectrum assets to use, work to spin up proximity tracking tools and promise to keep people connected.
As an analyst, I find these anecdotes compelling, but of limited value. By definition, anecdotes only tell part of the story. That’s why, when pulled into an effort to survey operators about their efforts, I jumped at the chance. Anecdotes are one thing. Getting input on the breadth of activities from a significant share of the world’s mobile operators? That’s something you can draw conclusions from (see chart, below, click to enlarge).
Of course, there is an upside to anecdotes. Selecting the anecdotes you highlight allows you to craft the story you want to tell, the surprising sort of story that grabs attention. Surveys, on the other hand, often confirm what we already know. Like the fact that the primary efforts from operators revolve around educating the public, ramping capacity where needed, and ensuring connectivity via zero-rating or other service schemes. Not too surprising, right? But when the survey asks sentiment questions, those which aren’t reflected by what we see in the news and press releases? That’s when things get fun. And that’s why we asked which efforts operators expect to benefit them going forward.
This doesn’t mean the results are any more surprising.
Expectations of long-term benefits from new data collaboration and analytics efforts are understandable as operators get pulled into proximity tracking efforts. Telemedicine, long before the promise of 5G remote surgery is realised, is a no-brainer during a global pandemic. And as network and service attacks spike, taking advantage of health concerns and increased network usage, anything operators can do to combat them should yield expertise and reputational benefits.
What’s more important here, however, is a reminder the Covid-19 pandemic will come to an end. Shops will re-open. People will go back to work. Societies will slowly go back to normal, whether or not an economic downturn proves lasting. And when this happens, the way we engage with communications service providers, and the way they engage with their customers, will not be the same.
Today, helping one another remain healthy and safe is of paramount importance. But a post-Covid world will provide opportunities for operators to execute on their strengths, if they can imagine what that world looks like and begin planning for it.
– Peter Jarich – head of GSMA Intelligence
In 2019 we saw great progress for the open source telecom networking community, with lots of announcements coming out of the TIP Summit in November, and trials and deployments across the year with some key examples including:
Using open RAN gear, Internet para Todos, a Telefonica investment, connected about half a million customers in Peru to 4G services for the first time.
MTN committing to deploy Parallel Wireless open RAN solutions to around 5 per cent of its networks in Africa.
Vodafone Group’s game-changer tender which opens up its European footprint for open RAN solutions.
Rakuten Mobile, a greenfield operator building its 5G networks using open and virtual RAN architecture.
(For further insights on the above, check out TIP Summit: unlocking vendor lock-in with Open RAN)
Today, while we are still in the first few weeks of 2020, new and exciting developments in the open source networking ecosystem are spurring more activity: two, in particular, stand out as instructive.
O2 UK’s open RAN partnerships boost a host of network enhancements
This month Telefonica’s UK arm O2 announced a partnership with three challenger vendors planning to commercially deploy open RAN solutions across the country.
DenseAir – to power 5G V2V and V2X testing
Working with DenseAir, O2 will focus on 4G and 5G networks over the open RAN solution. At the Millbrook Proving Ground, a technology park for vehicle testing in the UK, the partners will provide 5G public and private networks enabling testing and development of CAV technology. This collaboration will power the testing for ITS networks and enable the first V2V and V2X testing over open RAN.
Mavenir – to enhance coverage and capacity
This partnership will see O2’s coverage and capacity in high-density environments in London enhanced. Locations will include shopping centres and stadiums.
WaveMobile – to cover not-spots
The pair will develop solutions to provide community-based mobile services in so-called not-spots. The vendor is already providing O2 with open RAN networks at several sites across the UK, including sites carrying live O2 customer traffic.
With O2 expecting the work to accelerate over the next 18 to 24 months, these partnerships highlight the growing vendor ecosystem providing open source networking solutions and showcase a vast range of capabilities, while also making it clear O2 is still putting a diverse set of vendors through their paces.
Open source networking to get a boost from Washington
In light of persistent pressure from the US to exclude some vendors from global 5G networks, politicians around the world have taken notice of the small set of suppliers the telecom industry has been relying on to build mobile networks.
Of course, operators have long sought agile and cost-effective ways to build and maintain their infrastructure in response to persistent cost or cash flow pressures, and a broad set of suppliers is important on that front. But recent supply chain security concerns and efforts have brought awareness about the limited number of mobile network vendors to the fore.
Against this backdrop, the open source networking movement could receive an unusual boost. In the early weeks of 2020, the US Senate introduced bipartisan legislation which (luckily for open source networking) could see the Federal Communications Commission provide more than $1 billion to help develop western-based alternative vendor technologies.
The largest share of the monies, around $750 million or up to 5 per cent of annual auction proceeds sourced from new auctioned spectrum licences, would be reserved for open architecture (including open RAN) and software-defined development models.
While security concerns may have ignited this move amongst politicians, operators understand the open source model offers benefits beyond security including cost saving, leaner operations and a reduced vendor reliance (to name a few). This is the second key catalyst for open RAN in 2020, providing a financial boost for research and development of open source solutions.
What could slow down achieving scale?
While these developments showcase the immense accomplishments of operators and challenger vendors alike, there remain further barriers to scaling up which must be addressed.
From our recent work with operators to better understand their network transformation strategies, a limited vendor ecosystem was highlighted as the greatest barrier to deployment of open source and open networking technologies. The good news? The progress we’ve seen in the first few weeks of 2020 alone. O2’s announcement showcased three challenger vendors with strengths which will support coverage and capacity enhancement, provide connections to not-spots, and power V2V and V2X. And while the O2 plans show the deployment of open RAN in targeted settings, rather than large (national/regional) scale, the general path of adoption makes sense as the following two points illustrate:
High-density deployments rely on expensive small cells. Thanks to the open interfaces of open RAN solutions, operators can share the cost of a given mast with peers and financial partners, reducing the effective cost.
Coverage of not-spots, or sparsely populated areas with weak coverage, has traditionally relied on fibre rollouts which, again, are often commercially unviable. An operator which can subcontract the capex to an open RAN supplier, as in the O2-WaveMobile partnership, can retain the customer relationship while leasing spectral capacity (the local network) to its supplier, saving money and protecting assets from being stranded.
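The capex-subcontracting logic in the second point can be sketched as a simple total-cost comparison. All figures below are hypothetical assumptions for illustration, not drawn from the O2-WaveMobile arrangement:

```python
# Illustrative not-spot economics: operator self-build versus subcontracting
# capex to an open RAN supplier and leasing back capacity. All figures are
# hypothetical assumptions made for the sake of the comparison.

def net_cost_self_build(capex, annual_opex, years):
    """Operator funds the build upfront and runs the site itself."""
    return capex + annual_opex * years

def net_cost_subcontracted(annual_lease, years):
    """Supplier funds the build; the operator pays a capacity lease."""
    return annual_lease * years

years = 10
self_build = net_cost_self_build(capex=500_000, annual_opex=40_000, years=years)
leased = net_cost_subcontracted(annual_lease=70_000, years=years)

print(f"Self-build: {self_build:,}; subcontracted: {leased:,}")
# Under these assumptions the lease model is cheaper over ten years and
# shifts upfront capex risk to the supplier, while the operator retains
# the customer relationship.
```

The point of the sketch is the risk transfer, not the specific numbers: even where total costs are similar, the operator avoids sinking capex into a site that may never pay back.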
These building blocks, while small-scale steps, should lead to larger deployments as more companies try out the concept and report back successful best practices, hopefully creating a snowball effect that attracts more operators, and a larger share of their networks, to open solutions.
Further, funding support out of countries such as the US (and one can be hopeful more will follow the example) will further enable the industry to build a diverse supply chain. And, as other operators such as Telefonica and Vodafone Group continue to spearhead the movement, concerns around RoI and tech maturity will diminish over time.
As for the lack of internal ownership and expertise, this highlights the greater role that industry partners will need to play, including system integrators such as IBM and vendors with teams providing end-to-end services, such as Mavenir. Their contribution lies not only in stitching together the technologies provided, but also in coordinating, or even educating, internal staff as part of a handover process to enable greater ownership.
– Armita Satari – analyst, core mobility, GSMA Intelligence