Wednesday, April 27, 2016

NFV costs expectation gap

I am fresh off an interesting week in sunny San Jose at the NFV World Congress, where I chaired the operations stream on the first day.

As usual, it is a week where operators and vendors jostle to show off their progress since last year and highlight the challenges ahead. Before I get to the new and cool developments in stateless VNFs, open source orchestration, containers, Kubernetes and unikernels, I felt the need to share some observations regarding the diverging expectations of traditional telecoms vendors, VNF vendors, systems integrators and operators.

While a large part of the presentations showed a renewed focus on operations in NFV, a picture started to emerge in my mind of the gap in expectations between vendors, systems integrators and operators.

Hardware
Essentially, everyone expects that the hardware bill for a virtualized network will shrink, thanks to the transition to x86 hardware. While this transition might mean less efficiency in the short term, all players seem to think that it will resolve itself over the next few years. In the meantime, DPDK and SR-IOV are used to close the performance gap between virtualization and traditional appliances, even at the cost of agility. By my estimate, the hardware cost reduction demonstrated by VNF vendors and systems integrators still falls short of operators' expectations. Current figures place it at around a 30% cost reduction vs. the traditional model, whereas operators' expectations hover between 50 and 66%.
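To make that expectation gap concrete, here is a minimal arithmetic sketch. The $10M appliance baseline is a purely hypothetical assumption; only the 30% and 50-66% reduction figures come from the observations above.

```python
# Purely illustrative arithmetic: assume a $10M hardware bill under the
# traditional appliance model, then compare demonstrated vs. expected savings.
appliance_hw_cost = 10_000_000          # hypothetical baseline ($)

demonstrated_reduction = 0.30           # ~30% shown by VNF vendors / integrators
expected_low, expected_high = 0.50, 0.66  # operators' expected reduction range

demonstrated_cost = appliance_hw_cost * (1 - demonstrated_reduction)
expected_cost_best = appliance_hw_cost * (1 - expected_high)
expected_cost_worst = appliance_hw_cost * (1 - expected_low)

print(f"Demonstrated NFV hardware cost: ${demonstrated_cost:,.0f}")
print(f"Operator-expected range:        ${expected_cost_best:,.0f} - ${expected_cost_worst:,.0f}")
print(f"Expectation gap:                ${demonstrated_cost - expected_cost_worst:,.0f} or more")
```

On these assumed numbers, vendors are demonstrating a $7M hardware bill while operators expect something between $3.4M and $5M: a gap of at least $2M on a $10M baseline.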

Software
This is an area where we see sharp variations in expectations between all actors in the value chain.
VNF vendors expect to be able to capture some of the hardware savings and translate them into additional license fees. This thinking is boosted by the need for an internal business case to transition from appliance to software, to virtualized and eventually to orchestrated VNFs. We are still very early in the market, and software licensing models for VNFs are all over the place: in many cases simply translated from the appliance model, in other cases built from scratch but with little understanding of the value of specific functions in the overall service chain. Increased competition and market entry by non-traditional telco vendors will level the licensing structure over time.

Systems integrators are increasingly looking at VNFs as disposable. Operators tell them that they want little dependency on vendors and the ability to replace VNFs and vendors as needed, even running different vendors for the same function in different settings or slices. Systems integrators are buying into this rationale and are privileging their own VNFs, putting the emphasis (and a price premium) on their NFVI (infrastructure) and VNFM (management). Of course, this also leads to the conclusion that while VNFs (and VNF vendors) should be interchangeable, the NFV MANO (management and orchestration) function will be very sticky and will likely remain a single-vendor proposition in a given network. As a result, some are predicting an era of orchestrator wars, which certainly feels timely after the SDN management war (winner: OpenStack), the southbound interface war (winner: OpenFlow), the hypervisor war (winner: KVM)...
I have spoken at length about the danger operators expose themselves to if they vacate the orchestration field and leave systems integrators to rule it. This view seems to have gained some traction, with open source orchestration projects being pushed into standards. In any case, VNF vendors expect a growth in software licensing vs. the appliance model, whereas integrators and operators expect a reduction.

Professional services
This is the area where everyone seems to agree that an increase is inevitable. SDN and NFV provide layers upon layers of abstraction and, while standards and open source projects are not fully defined, much integration and "enhancement" is necessary to make a service on NFV work.
VNF vendors and operators who do not want to perform the integration themselves usually expect a 50% increase vs. appliance projects, whereas integrators budget a robust 100% increase on average. This, of course, increases even further if the integrator is managing the infrastructure / service itself.

Maintenance and support
Vendors and integrators expect the ratio of these to be essentially comparable to appliance models, whereas operators expect a sharp reduction, in light of all the professional services being expended on integration and automation.

Total
VNF vendors, behind closed doors, will usually admit that, in the short term, the cost of rolling out a new VNF function / service might be a little higher than an appliance, due to the performance gap and the increase in professional services. There are sharp variations between traditional vendors that are porting their solutions to NFV and new vendors that are cloud-native and have designed their solutions for a software-defined, virtualized environment.
Systems integrators can show an overall cost reduction, but usually because of proprietary "enhancements and optimizations".
All are confident, though, that automation and orchestration make operating existing services much cheaper and ramping up new ones much faster. Expectations are that a VNF architecture will be much more cost effective than appliances on a 3-to-5-year TCO model. Operators, on their end, expect an NFV architecture to yield savings from day one compared to appliances, and to further widen that gap over a 3-year period.
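As a rough illustration of why the 3-to-5-year TCO view and the savings-from-day-one view can both feel "right", here is a minimal cumulative TCO sketch. Every cost figure in it is a hypothetical assumption, and the crossover year is an artifact of those assumptions rather than a forecast.

```python
# Illustrative multi-year TCO comparison, appliance vs. NFV.
# All figures are hypothetical assumptions for the sake of the example.
years = 5

# Appliance model: stable hardware/licence/support costs.
appliance = {"capex": 10.0, "opex_per_year": 4.0}

# NFV model: cheaper hardware but heavier year-1 integration (professional
# services), with opex declining as automation and orchestration kick in.
nfv = {"capex": 7.0, "integration": 5.0, "opex_by_year": [4.5, 3.5, 2.5, 2.0, 2.0]}

def cumulative_tco(model, years):
    """Return cumulative total cost of ownership per year for one model."""
    if "opex_per_year" in model:
        return [model["capex"] + model["opex_per_year"] * (y + 1) for y in range(years)]
    totals, running = [], model["capex"] + model["integration"]
    for y in range(years):
        running += model["opex_by_year"][y]
        totals.append(running)
    return totals

for year, (a, n) in enumerate(zip(cumulative_tco(appliance, years),
                                  cumulative_tco(nfv, years)), start=1):
    cheaper = "NFV cheaper" if n < a else "appliance cheaper"
    print(f"Year {year}: appliance {a:5.1f}  nfv {n:5.1f}  ({cheaper})")
```

Under these invented numbers, the NFV roll-out is more expensive in years 1 to 3 (the vendors' closed-door admission) and only pulls ahead in years 4 and 5, which is exactly the pattern that disappoints operators expecting savings from day one.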

Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speeds and ultra-low latency to enable the Internet of Things, autonomous vehicles, augmented, mixed and virtual reality, and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses the latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network.


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are, and how this potential multi-billion-dollar technology can change how OTTs, operators, enterprises and machines enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator's perspective.

The table of contents and executive summary can be downloaded here.

Tuesday, April 19, 2016

Net neutrality, meet lawful interception

This post is written from the NFV World Congress, where I am chairing the first-day track on operations. Many presentations in the pre-show workshop day point to an increased effort from standards bodies (ETSI, 3GPP...) and open source organizations (OpenStack, OpenDaylight...) to address security by design in next-generation network architectures.
Law enforcement agencies are increasingly invited to contribute to or advise on the standardization work to ensure their needs are baked into the design of these networks. Unfortunately, it seems that there is a large gap between law enforcement agencies' requirements, standards and regulatory bodies. Many of the trends we are observing in mobile networks, from software defined networking to network functions virtualization and 5G, assume that operators will be able to intelligently route traffic and apportion resources elastically. Lawful interception regulations mandate that operators, upon a lawful request, provide the means to monitor, intercept and transcribe any electronic communication for security agencies.

It has been hard to escape the headlines lately when it comes to mobile networks, law enforcement and privacy. On one hand, privacy is an inalienable right to which we should all be entitled; on the other hand, we elect governments with the expectation that they will be able to protect us from harm, physical or digital.

Digital harm, until recently, was mostly illustrated by misrepresentation, scams or identity theft. Increasingly, though, it translates into the physical world, as attacks can impact not only one's reputation and credit rating but also one's job, banking and, soon, cars and connected devices.

I have written at length about the erroneous assumptions underlying much of the discourse of net neutrality advocates.
In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining quality of experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page loading time…). These KPIs are at the heart of the operator’s business. As a result, operators try to improve or control user experience by deploying an array of traffic management functions.
  • Content providers assume that the highest quality of content (HD video, for instance) equals the maximum experience for the subscriber and therefore try to capture as much network resource as possible to deliver it. Browser, app and phone manufacturers also assume that more speed equals a better user experience, and therefore try to commandeer as much capacity as possible. A reaction to operators performing traffic management is to encrypt traffic to obfuscate it.
The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behavior leads to a network where resources can be in contention and all end-points vie for priority and maximum resource allocation. From this perspective, one can understand that there is no such thing as "net neutrality", at least not in wireless networks.

When network resources are over-subscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes: who should be in a position to make these decisions? Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When resources are in contention, traffic is governed by obscure rules buried in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface-level thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.
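For the sake of argument, here is a toy sketch of what explicit, rule-based apportioning could look like, as opposed to whatever the boxes happen to do under load: a weighted max-min fair allocation of a cell's capacity. The flow names, weights and the 100 Mbps capacity are invented for illustration, not drawn from any real network.

```python
def weighted_fair_share(demands, weights, capacity):
    """Weighted max-min fair allocation of `capacity` across competing flows."""
    allocation = {flow: 0.0 for flow in demands}
    unsatisfied = dict(demands)          # flow -> remaining demand
    leftover = float(capacity)
    while unsatisfied and leftover > 1e-9:
        total_weight = sum(weights[f] for f in unsatisfied)
        # Provisional share for each remaining flow, proportional to its weight
        share = {f: leftover * weights[f] / total_weight for f in unsatisfied}
        fully_served = [f for f in unsatisfied if share[f] >= unsatisfied[f]]
        if not fully_served:
            # Nobody's demand is met: split the leftover by weight and stop
            for f, s in share.items():
                allocation[f] += s
            break
        # Satisfied flows take only what they asked for; the rest is re-divided
        for f in fully_served:
            granted = unsatisfied.pop(f)
            allocation[f] += granted
            leftover -= granted
    return allocation

# Hypothetical contention on a 100 Mbps cell sector
demands = {"video": 80, "web": 30, "background": 40}   # Mbps requested
weights = {"video": 2, "web": 3, "background": 1}       # who decides these?
print(weighted_fair_share(demands, weights, capacity=100))
```

The point is not the particular algorithm but that someone has to choose the weights; whoever sets them, whether operator, regulator or content provider, is the one actually deciding what "neutral" means under contention.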

Now, who said access to wireless should be fair and equal? Unless the networks are nationalized and become government assets, I do not see why private companies, in a competitive market, couldn't manage their resources in order to optimize their utilization.


If we transport ourselves to a world where all traffic becomes encrypted overnight, networks lose the ability to manage traffic beyond allowing or blocking it and attaching high-level QoS metrics to specific services. That would lead to network operators being forced to charge exclusively for traffic tonnage. At that point, everyone has to pay per byte transmitted. The cost to users would become prohibitive as more and more video of ever-higher resolution flows through the networks. It would also mean that these video providers could asphyxiate the other services... More importantly, it would mean that user experience would become the fruit of a fight between content providers over their ability to monopolize network capacity, which would go against any net neutrality principle. A couple of content providers could dominate not only services but access to these services as well.

The problem is that encryption makes most traffic management and lawful interception provisions extremely difficult, or at the very least inefficient. Privacy is an important facet of net neutrality advocates' discourse. It is indeed the main reason many content and service providers invoke for encrypting traffic. In many cases, this might be a genuine concern, but it is hard to reconcile with the fact that many provide encryption keys and certificates to third-party networks or CDNs, for instance to improve caching ratios, perform edge packaging or insert advertising. There is nothing that would prevent this model from being extended to wireless networks to perform similar operations. Commercial interest has so far prevented these types of models from emerging.

If encryption continues to grow and service providers deny operators the capability to decrypt traffic, the traditional burden of lawful interception might be transferred to the service providers themselves. Since many providers are transnational, what is defined as lawful interception is likely to be unenforceable. At this stage, we might have to choose, as societies, between digital security and privacy.
In all likelihood, though, one can hope that regulatory bodies will up their technical game and understand the nature of digital traffic in the 21st century. This should lead to lawful interception mandates being applied equally to all parts of the delivery chain, which will force collaborative behavior between the actors.

Monday, April 4, 2016

MEC 2016 Executive Summary

2016 sees a sea change in the fabric of the mobile value chain. Google reports that mobile search revenue now exceeds desktop, whereas 47% of Facebook members are now exclusively on mobile, which generates 78% of the company’s revenue. It has taken time, but most OTT services that were initially geared towards the internet are rapidly transitioning towards mobile.

The impact is still to be felt across the value chain.

OTT providers have a fundamentally different view of services and value different things than mobile network operators. While mobile networks have been built on the premise of coverage, reliability and ubiquitous access to metered network-based services, OTTs rely on free, freemium, ad-sponsored or subscription-based services where fast access and speed are paramount. Increases in latency impact page loads and search times and can cost OTTs billions in revenue.

The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary and will lead to new network architectures.

Traditional mobile networks were originally designed to deliver content and services hosted on the network itself. The first mobile data applications (WAP, multimedia messaging…) were deployed in the core network, as a means to be as close as possible to the user while remaining centralized to avoid replication and synchronization issues.
3G and 4G networks still bear the design associated with this antiquated distribution model. As technology and user behaviours have evolved, a large majority of the content and services accessed on cellular networks today originate outside the mobile network. Although content is now stored in and accessed from clouds, caches, CDNs and the internet, a mobile user still has to go through the internet, the core network, the backhaul and the radio network to get to it. Each of these steps sees a substantial decrease in throughput capacity, from hundreds of Gbps down to Mbps or less. Additionally, each hop adds latency to the process. This is why networks continue to invest in increasing throughput and capacity. Streaming a large video or downloading a large file from a cloud or the internet is a little bit like trying to suck ice cream through a 3-foot bendy straw.
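A back-of-the-envelope sketch of how those hops add up makes the point; the per-hop latency figures below are purely illustrative assumptions, not measurements.

```python
# Illustrative only: assumed round-trip latency contribution (ms) of each hop
# on the path from a mobile user to centrally hosted content.
hops_centralized = {
    "radio access network": 20,
    "backhaul": 10,
    "core network": 10,
    "internet / peering": 20,
    "origin cloud / CDN": 15,
}

# Same user, but content pre-positioned at the radio edge (MEC scenario).
hops_edge = {
    "radio access network": 20,
    "edge compute at eNodeB / aggregation site": 2,
}

def total_latency(hops):
    """Sum the assumed per-hop contributions into an end-to-end figure."""
    return sum(hops.values())

print("centrally hosted content:", total_latency(hops_centralized), "ms")
print("content served at the edge:", total_latency(hops_edge), "ms")
```

Whatever the exact numbers, the structure of the sum is the argument: every hop you remove between the user and the content is latency you never have to pay.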

Throughput and capacity will certainly grow tremendously with the promise of 5G networks, but latency remains an issue. Reducing latency requires reducing the distance between the consumer and where content and services are served. CDNs and specialized commercial caches (Google, Netflix…) have been helping reduce latency in fixed networks by caching content as close as possible to where it is consumed, through the propagation and synchronization of content across Points of Presence (PoPs). Mobile networks’ equivalent of PoPs are the eNodeB, the RNC and cell aggregation points. These network elements, part of the Radio Access Network (RAN), are highly proprietary, purpose-built platforms that route and manage mobile radio traffic. Topologically, they are the closest elements mobile users interact with when they access mobile content. Positioning content and services there, right at the edge of the network, would substantially reduce latency.
For the first time, there is an opportunity for network operators to offer OTTs what they value most: ultra-low latency, which will translate into a premium user experience and increased revenue. This will come at a cost, as physical and virtual real estate at the edge of the network will be scarce. Net neutrality will not work at the scale of an eNodeB, as commercial law will dictate which few application and service providers will be able to pre-position their content.

Mobile Edge Computing provides the ability to deploy commercial-off-the-shelf (COTS) IT systems right at the edge of the cellular network, enabling ultra-low latency, geo-targeted delivery of innovative content and services. More importantly, MEC is designed to create a unique competitive advantage for network operators derived from their best assets, the network and the customers’ behaviour. This report reviews the opportunity and timeframe associated with the emergence of this nascent technology and its potential impact on mobile networks and the mobile value chain.