Thursday, June 26, 2014

LTE World Summit 2014

This year's 10th edition of the conference seems to have found a new level of maturity. While VoLTE, RCS and IMS are still subjects of interest, we seem at last to be past the hype (see last year), with a more pragmatic outlook towards implementation and monetization.

I was happy to see that most operators now recognize the importance of managing video experience for monetization. Du UAE's VP of Marketing, Vikram Chadha, seems to get it:
"We are transitioning our pricing strategy from bundles and metering to services. We are introducing email, social media, enterprise packages and are looking at separating video from data as a LTE monetization strategy."
As a result, the keynotes were more prosaic than in past editions, focusing on the cost of spectrum acquisitions and regulatory pressure in the European Union preventing operators from mounting any defensible position against the OTT assault on their networks. Much of the show's agenda focused on pragmatic subjects such as roaming, pricing, policy management, heterogeneous networks and wifi/cellular handover. Nothing obviously earth-shattering on these subjects, but steady progress as the technologies transition from lab to commercial trials and deployment.

As an example, there was a great presentation by Bouygues Telecom's EVP of Strategy, Frederic Ruciak, highlighting the company's strategy for the launch of LTE in France, a very competitive market, and how the company was able to achieve the number-one spot in LTE market share despite being the number-three "challenger" in 2G and 3G.

The next buzzword on the hype cycle to rear its head is NFV, with many operator CTOs publicly hailing the new technology as the magic bullet that will allow them to "launch services in days or weeks rather than years". I am getting quite tired of hearing that rationalization as an excuse for the multimillion-dollar investments made in this space, especially when no one seems to know what these new services will be. Right now, the only arguable benefit is capex containment, and I have seen little evidence that it will move past this stage in the mid term. Like the teenage sex joke, no one seems to know what it is, but everybody claims to be doing it.
There is still much to be resolved on this matter, and the discussion will continue for some time. The interesting new positioning I heard at the show is appliance vendors referring to their offerings as PNFs (physical network functions), in contrast to, and as enablers for, VNFs. Although it sounds like a marketing trick, it makes a lot of sense for vendors to illustrate how NFV inserts itself into a legacy network, leading inevitably to a hybrid network architecture.

The consensus here seems to be that there are two prevailing strategies for the introduction of virtualized network functions.

  1. The first, "cap and grow", sees existing infrastructure equipment being capped at a certain capacity and, little by little, complemented by virtualized functions, allowing incremental traffic to find its way onto the virtualized infrastructure. A variant might be "cap and burst", where a function subject to bursty traffic is dimensioned on physical assets for mean peak traffic, with all exceeding traffic diverted to a virtualized function.
  2. The second favours the creation of vertical virtualized networks for greenfield market or traffic segments, M2M and VoLTE being the most cited examples.
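The "cap and burst" variant above can be sketched as a simple routing rule. This is a hypothetical illustration only: the function name, the 40 Gbps cap and the PNF/VNF labels are my own assumptions, not any vendor's implementation.

```python
def route_session(current_load_gbps: float,
                  physical_capacity_gbps: float = 40.0) -> str:
    """Cap-and-burst: the physical appliance handles traffic up to its
    dimensioned peak; anything beyond bursts to virtualized instances.
    (The 40 Gbps capacity figure is a hypothetical example.)"""
    if current_load_gbps <= physical_capacity_gbps:
        return "PNF"   # stays on the legacy physical network function
    return "VNF"       # overflow is diverted to a virtualized function

print(route_session(25.0))  # -> PNF
print(route_session(55.0))  # -> VNF
```

In practice the "cap" would be enforced per function (e.g. per gateway or per DPI element) rather than globally, but the decision logic is the same.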

Both strategies have advantages and flaws that I am exploring in my upcoming report on "NFV & virtualization in mobile networks 2014". Contact me for more information.



Wednesday, June 18, 2014

Are we ready for experience assurance? part II

Many vendors’ reporting capabilities are just fine when it comes to troubleshooting issues associated with connectivity or health of their system. Their capability to infer, beyond observation of their own system, the health of a connection or the network is oftentimes limited. 

Analytics, by definition, require a large dataset, ideally covering several systems and elements, to provide correlation and pattern recognition on otherwise seemingly random events. In an environment as complex as a mobile network, it is extremely difficult to understand what a user's experience on their phone actually is. There are means to extrapolate and infer the state of a connection, a cell or a service by looking at fluctuations in network connections.

Traffic management vendors routinely report on the state of a session by measuring the TCP connection and its changes. Being able to associate with that session the device type, time of day, location and service being used is good, but a far cry from analytics.
Most systems can detect that a connection went wrong and a user had a sub-par experience. Being able to tell why is where analytics' value lies. Being able to prevent it is big data territory.
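The kind of TCP-level inference described above might look like the following sketch. The thresholds and the three-state classification are illustrative assumptions of mine, not any vendor's product logic.

```python
def session_health(rtt_ms: float, retransmit_rate: float,
                   throughput_kbps: float) -> str:
    """Rough inference of session experience from TCP-level signals.
    All thresholds here are hypothetical examples."""
    if retransmit_rate > 0.05 or rtt_ms > 300:
        return "degraded"     # likely congestion or radio impairment
    if throughput_kbps < 500:
        return "constrained"  # too little bandwidth for video playback
    return "good"

print(session_health(rtt_ms=80, retransmit_rate=0.01,
                     throughput_kbps=4000))  # -> good
```

This classifies a single session; the analytics step is correlating many such classifications with device, location and time to explain *why* sessions degrade.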
So what is experience assurance? How does (should) it work?

For instance, a client calls the call center to complain about a poor video experience: the video was sluggish to start, beginning 7 seconds after pressing play, and started buffering after 15 seconds of playback.
A DPI engine would be able to identify whether TCP and HTTP traffic were running efficiently at the time of the connection.
A probe in the RAN would be able to report a congestion event in a specific location.
A video reporting engine would be able to look at whether the definition and encoding of the video was compatible with the network speed at the time.
The media player on the device would be able to report whether there were enough local resources to decode, buffer, process and play the video.
A video gateway should be able to detect the connection impairment in real time and provide the means to correct it, or elegantly notify the user of the impending state of the video before they experience a negative QoE.
A big data analytics platform should be able to point out that the poor experience is the result of congestion in that cell that occurs nearly daily at the same time, because the antenna serving that cell covers a train station and every day rush hour brings throngs of people connecting to that cell at roughly the same moment.
An experience assurance framework would be able to provide feedback instructions to the policy framework, forcing downloads, emails and non-real-time data traffic to be delayed to account for short bursts of video usage until the congestion passes. It should also allow the operator to decide what the minimum level of quality should be for video and data traffic, in terms of delivery, encoding speed, picture quality, start-up time, etc., and proactively manage the video traffic to that target when the network "knows" that congestion is likely.
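The feedback loop described above could be sketched as follows: correlate signals from the various systems, then emit instructions for the policy framework. Every name, field and threshold here is an illustrative assumption; no real product exposes this interface.

```python
def assurance_decision(signals: dict) -> list:
    """Hypothetical experience-assurance step: turn correlated
    network signals into policy actions. All keys and action names
    are illustrative assumptions."""
    actions = []
    if signals.get("ran_congested") and signals.get("video_sessions", 0) > 0:
        # Delay elastic traffic (downloads, email) to protect video bursts.
        actions.append("defer-background-traffic")
    if signals.get("video_bitrate_kbps", 0) > signals.get("cell_fair_share_kbps", 0):
        # Ask the video gateway to target a sustainable encoding rate.
        actions.append("cap-video-bitrate")
    return actions or ["no-op"]

decision = assurance_decision({
    "ran_congested": True,          # probe report from the RAN
    "video_sessions": 120,          # DPI / video gateway session count
    "video_bitrate_kbps": 3500,     # measured delivery bitrate
    "cell_fair_share_kbps": 1200,   # estimated per-user fair share
})
print(decision)  # -> ['defer-background-traffic', 'cap-video-bitrate']
```

The point of the sketch is the direction of the flow: detection and analytics feed the decision, and the decision feeds back into policy and traffic management, closing the loop.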

Experience assurance is a concept that is making its debut when it comes to data and video services. To be effective, a proper solution should ideally be able to gather real-time events from the RAN, the core, the content, the service provider and the device, and decide in real time what the nature of the potential impairment is, what the possible courses of action are to reduce or negate it, and what the means are to notify the user of a sub-optimal experience. No single vendor, to my knowledge, is able to achieve this use case at this point, either on its own or through partnerships. The technology vendors are too specialized, and the elements involved in the delivery and management of data traffic too loosely integrated, to offer real experience assurance today.

Vendors who want to provide experience assurance should first focus on the data. Most systems create event or call logs, registering hundreds of parameters for every session, every second. Properly representing what is happening on the platform itself is quite difficult. It is an exercise in interpreting and representing what is relevant and actionable versus what is merely interesting. This is an exercise in small data: understanding relevance and discriminating good data from over-engineered logs is key.


A good experience assurance solution must rely on strong detection, analytics and traffic management. When it comes to video, this means a video gateway that is able to perform deep media inspection and extract data points that can be exported into a reporting engine. The exported data cannot be just a dump of every event in every session: the reporting engine is only going to be as good as the quality of the data fed into it. This is why traffic management products must be designed with analytics in mind from the ground up if they are to be efficiently integrated within an experience assurance framework.

Tuesday, June 17, 2014

Are we ready for experience assurance? part I




As mentioned before, Quality of Experience (QoE) was a major theme in 2012-2013. How to detect, measure and manage various aspects of the customer experience has in many cases taken precedence over savings or monetization rhetoric at vendors and operators alike.

As illustrated in a recent telecoms.com survey, operators see network quality as the most important differentiator in their market. In their overwhelming majority, they would like to implement business models where they receive a revenue share for a guaranteed level of quality. The problem comes with defining what quality means in a mobile network.


It is clear that many network operators in 2014 have come to the conclusion that they are ill-equipped to understand the consumer's experience when it comes to data services in general and video in particular. It is not rare for a network operator's customer care center to receive complaints about the quality of the video service when no alarm, failure or even congestion has been detected. Obviously, serving your clients when you are blind to their experience is a recipe for churn.

As a result, many operators spent much of 2013 requesting information and evaluating various vendors' capabilities to measure video QoE. We have seen (here and here) the different types of video QoE measurement.

This line of questioning has spurred a flurry of product launches, partnerships and announcements in the field of analytics. Here is a list of announcements in the field in the last few months:
  • Procera Networks partners with Avvasi
  • Citrix partners with Zettics and launches ByteMobile Insight
  • Kontron partners with Vantrix and launches cloud based analytics
  • Sandvine launches the Real Time Entertainment Dashboard
  • Guavus partners with Opera Skyfire
  • Alcatel Lucent launches Motive Big Network Analytics
  • Huawei partners with Actix to deliver customer experience analytics…

Suddenly, everyone who has a web GUI and a reporting engine delivers delicately crafted analytics, surfing the wave of big data, Hadoop and NFV as a means to satisfy operators' ever-growing need for actionable insight.

Unfortunately, in some cases, the operator will find itself with a collection of ill-fitting dashboards providing anecdotal or contradictory data. This is likely to lead to more confusion than problem solving. So what is (or should be) experience assurance? The answer in tomorrow's post.


Tuesday, June 10, 2014

Cisco VNI global IP 2014: we live in a video world

As is now usual, after the February mobile Visual Networking Index, Cisco releases the global IP version in June. Here are a few interesting measurements and forecasts, with some associated thoughts.

Global IP traffic's growth is slowing down somewhat, having grown fivefold in the last five years and being anticipated to grow threefold over the next five. This is not overly surprising: a 21% CAGR is the sign of a maturing market.

CDN

Surprisingly, more than half of the traffic next year should be local (as opposed to long-haul), which underlines the growing importance of CDNs for delivering content at the edge of the network.
CDNs delivered 36% of data traffic in 2013 and are set to deliver 55% by 2018, with growth spurred by OTT video. CDNs will deliver 67% of all video traffic, up from 53% today.
I think mobile CDNs are not represented here, which is unsurprising since most of the movement in that space has happened only recently. Mobile carriers' CDNs will add to these numbers.


Mobile

Cisco predicts that mobile traffic (including wifi) will exceed fixed traffic by 2018 (61% to 39%, vs. 44% to 56% in 2013). Again, not so surprising, except that in my opinion fixed could be overtaken even earlier than that. Machine-to-machine traffic over wireless, I think, is quite systematically underestimated.

Mobile data traffic, unsurprisingly, still shows a 61% CAGR and will increase elevenfold by 2018.
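The elevenfold figure is consistent with a 61% CAGR over the five-year forecast window, as a quick sanity check shows:

```python
# Sanity check: compounding a 61% CAGR over five years (2013-2018)
# gives roughly an elevenfold increase in mobile data traffic.
cagr = 0.61
years = 5
growth_multiple = (1 + cagr) ** years
print(round(growth_multiple, 1))  # -> 10.8, i.e. roughly elevenfold
```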

Video

Video traffic will account for 79% of overall IP traffic by 2018 (66% today). If we add TV and video, we are looking at 90%...

OTT video viewed on connected TVs, consoles, sticks, etc. doubled in 2013 and is set to quadruple by 2018. I believe this is also under-evaluated: I think 4K will weigh heavily on these media, and H.265/VP9 will be late to ease the burden.



All in all, no great surprises this year; a confirmation of last year's trends. I believe that 4K, together with changes in so-called "net neutrality" provisions, will accelerate most trends by one to two years.