Friday, 26 August 2016

Google Fiber: technicolour bunny no longer in the pink

Google Fiber has been an iconic presence in debates over superfast broadband and FTTH. Initially announced in 2010, the Google Fiber bunny lolloped into its first market, Kansas City, in 2012. It has since expanded to five other cities in the US, offering symmetric 1 Gbps service at low prices.

"No new internet product has generated as much excitement in the technology world as Google Fiber", said the FTTH Council in 2014. Indeed, for fiber enthusiasts, Google Fiber has been emblematic of what should be possible. Here was a preeminent technology company, backing fiber all the way to the home. The fusty old telcos and cablecos simply didn't get it.

What a difference a couple of years make.

The Wall Street Journal recently reported [$] that Google is "rethinking its high-speed internet business after initial rollouts proved more expensive and time consuming than anticipated". Today come reports that Google Fiber is laying off half its staff. The company is also changing its strategy, with a shift to using wireless technologies to connect homes, rather than fiber.

I suspect we'll hear rather less about Google Fiber from the likes of the FTTH Council. But actually there are some valuable lessons here:

  • It's way too glib to say FTTH is 'future proof'. New technologies (such as the millimetre-wave wireless Google is now moving to) are proving quicker and cheaper to deploy, while delivering similar performance to FTTH
  • A further technology problem for FTTH is that cable can now deliver speeds of 1 Gbps and more. This means that there's less reason for consumers to switch to a new entrant (where cable's available), and also puts the new entrant at a significant cost disadvantage. Upgrading the cable plant is far cheaper than deploying new fiber. Cable operators in Google Fiber cities have done exactly this, almost certainly reducing Google's uptake
  • It's easy to underestimate the 'friction' involved in doing any work that touches the last few meters to people's houses. Google has been accused of causing flooding in Austin, for example. Less dramatically, issues of home access, damage to gardens and so on all drive up costs

There's no reason to scorn the Google Fiber bunny. Google is a company that proceeds by experimentation, and not all experiments work. In the light of their experience, and of developing technology, they are revising their strategy. Utterly sensible - and a useful case study for those who continue to believe that it must be FTTH or nothing.

Monday, 11 July 2016

What do apps require? Shall we ask Marketing or Legal?

The new net neutrality guidelines currently being considered by BEREC (the European regulators group) propose that ISPs be required to publish:
"an explanation of speeds, examples of popular applications that can be used with a sufficient quality, and an explanation of how such applications are influenced by the limitations of the provided [service]"
In other words, they're going to need to make a formal disclosure of which applications are possible with which bandwidths.

Similar documents are already provided by a number of ISPs in the US, and they make interesting reading, not least because they often have a rather different emphasis from the language used in marketing materials. Here for instance is the legal disclosure of United Communications (a Tennessee telco). They say - accurately - that their 25 Mbps ADSL service can support email, browsing, social media, HD video streaming, video calling and multiplayer online gaming.

For all their fibre products (up to 1 Gbps) they simply say "All the applications listed above". That is, no matter how fast the fibre is, it doesn't actually enable any applications not possible over ADSL.

Contrast this with the same company's marketing, which says:
"United’s fiber internet lets you stream movies and TV shows with no stutters or slowdowns... it gives you the power to handle the next generation of awesomeness: 'massively multiplayer online games'"
The Marketing Department's language isn't false. Fibre does let you do those things. But the Legal Department's filing shows it is incomplete. You don't need fibre to do those things.

Tuesday, 24 May 2016

NBN and the missing superfast customers


Australia’s national broadband network (NBN) is the Kim Kardashian of government broadband plans – bigger, bolder and more exposed than any other. (Though to be fair, it has left ‘breaking the internet’ to Telstra)

The NBN represents a renationalisation of the access network, as well as a significant upgrade to fibre - initially FTTP, now mixed technology. Whether or not government ownership is a good idea, it does at least bring a degree of openness on operational detail. This is very handy for broadband watchers around the world.

For instance, NBN publishes the mix of speeds its customers take, which are as follows:


Source: Various NBN co publications
Note: Excludes plans above 100/40, with trivial take-up

The most popular speed tier is 25 Mbps down, 5 Mbps up. At the end of March, 46% of customers were on this plan. Certainly such speeds would be beyond an ADSL connection, so NBN has provided some benefit.

However, a moderate uplift vs ADSL was never the goal for the NBN. When it was announced in 2009, its primary ambition was to offer “up to 100 Megabits per second”. In this regard, consumers seem to have been less impressed. Only 15% of those on FTTP are taking the 100/40 product, a percentage that has been steadily falling since 2012. Despite the fact that the premium for moving from 25/5 to 100/40 is just £10/US$14, the great majority of consumers simply aren’t willing to pay the extra. (The premium from 12/1 to 25/5 is half this).

Based on the European 30 Mbps threshold, just 19% of NBN FTTP customers (or 155,000) are on superfast. The original NBN plan expected roughly double this percentage (and a far higher absolute number).

One consequence of this change in speed mix is that the average capacity of an NBN line is actually falling over time – from 36 to 34 Mbps between June 2014 and December 2015, for example. However, traffic per line has grown appreciably in the same period – yet more evidence that traffic and bandwidth growth are two very different things.

Source: Author's analysis of data from various NBN co publications
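
As a rough illustration of the arithmetic behind that falling average, here is a minimal sketch of the take-up-weighted mean. The 46% (25/5) and 15% (100/40) shares come from the figures above; the 12/1 and 50/20 shares are illustrative guesses chosen so the blend comes out near the reported ~34 Mbps.

```python
# Sketch: average NBN line capacity as a take-up-weighted mean of speed tiers.
# The 25 and 100 Mbps shares match the post; the 12 and 50 Mbps shares are
# illustrative guesses, not NBN's exact published figures.

tier_mix = {
    # downstream Mbps : share of customers
    12:  0.32,   # illustrative
    25:  0.46,   # from the post
    50:  0.07,   # illustrative
    100: 0.15,   # from the post
}

average_capacity = sum(speed * share for speed, share in tier_mix.items())
print(f"Average downstream capacity: {average_capacity:.0f} Mbps")
# As the 25 Mbps tier grows at the expense of 100 Mbps, this average falls
# even while traffic per line keeps rising.
```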


Oh, and demand for speeds beyond 100 Mbps, striving into a gigabit future? Just 65 customers have taken them – 40 on 250/100, 3 on 500/200 and 22 on 1000/400. Clearly Australians are not bitten by the gigabit bug.

So what does this mean for policy makers? I’d suggest:

  • Don’t overestimate willingness-to-pay for higher speeds
  • Leave room for price differentiation of fibre products – lots of customers are happy at the low end, and you need to enable appropriately low prices for them
  • Report equivalent data, so the debate about demand for bandwidth can be anchored in reality


(Note that all the above figures are unconnected with the change of the NBN plan to a mixed technology model. They are for customers who are on the FTTP portion of the network).

Thursday, 5 November 2015

Web page weights, and the rise of the baby hippo

Web pages, like our large friend on the left there, are big and getting bigger. Once upon a time, web pages were just text, but these days they may include high-res images, JavaScript, fonts and many other elements that all contribute to the total amount of data that needs to be transferred to create the web page. This is leading to concerns over 'web bloat'.

Not all of these files need to be downloaded before you start using the page. 'Below the fold' content (which initially sits off the bottom of your screen) can be downloaded while you're reading the content at the top.

For some sites, below the fold content is massive. The 'height' of the Daily Mail homepage is 5.16 meters, with less than 10% of the content initially visible. In one sense this approach is quite wasteful of internet traffic - the Daily Mail will send you all 5.16 meters, even if you never scroll past the top 30cm (assuming you don't click away elsewhere). But internet traffic is cheap, so the Mail isn't unduly worried.
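
For a sense of scale, here's a back-of-the-envelope calculation, on the simplifying assumption that page weight is spread evenly down the page:

```python
# Back-of-the-envelope: how much of the Daily Mail homepage does a
# non-scrolling reader actually look at? Assumes page weight is spread
# evenly down the page, which is a simplification.

page_height_m = 5.16   # reported 'height' of the homepage
viewed_m = 0.30        # roughly one screenful at the top

viewed_fraction = viewed_m / page_height_m
print(f"Viewed: {viewed_fraction:.1%}, delivered but unseen: {1 - viewed_fraction:.1%}")
# ~5.8% viewed, ~94% potentially wasted - cheap for the Mail, but it all adds
# up as traffic on the network.
```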

The net result of larger and richer pages has been steady growth in 'page weights' - the amount of data that makes up a web page. They are now averaging a little over 2MB on the desktop:

Source: HTTP Archive
Technical change in Oct 2012 means data on either side not comparable

It's a toss-up whether this growth is exponential (20-25% a year) or linear (+345 KB/year), but either way it's substantial and ongoing. That means more traffic for networks to carry, and more bandwidth needed to ensure web pages load briskly. (In practice, for technical reasons, latency is often a more important factor than bandwidth, and beyond 5 Mbps there seem to be diminishing returns).
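
For the curious, here is a minimal sketch of how one might test 'exponential or linear' against a page-weight series. The yearly figures below are placeholders standing in for the HTTP Archive desktop averages, not the actual data:

```python
# Fit both a linear and an exponential (constant % growth) model to a yearly
# page-weight series and compare the residuals. Placeholder data only.

import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
kb    = np.array([ 700,  900, 1100, 1450, 1800, 2100])  # illustrative values

t = years - years[0]

# Linear model: weight = a*t + b
lin_coef = np.polyfit(t, kb, 1)
lin_resid = kb - np.polyval(lin_coef, t)

# Exponential model: log(weight) = slope*t + intercept, i.e. constant % growth
exp_coef = np.polyfit(t, np.log(kb), 1)
exp_resid = kb - np.exp(np.polyval(exp_coef, t))

print(f"Linear:      +{lin_coef[0]:.0f} KB/year, RMS error {np.sqrt(np.mean(lin_resid**2)):.0f} KB")
print(f"Exponential: {np.expm1(exp_coef[0]):.0%}/year, RMS error {np.sqrt(np.mean(exp_resid**2)):.0f} KB")
# With only a few noisy years of data the two fits look similar - hence the toss-up.
```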

However, the growth in desktop website page weights is not the whole story - the mother hippo has been joined by a baby hippo. In recent years there has been a massive shift to mobile consumption, and page views from mobile devices now represent almost 40% of the total. (In Africa it's over 60%).

This matters in the context of page weights because mobile pages tend to be much lighter - roughly half the weight of fixed pages. When designing for mobile devices, web designers need to be conscious of higher consumer data charges, smaller screens and so on. Consequently both the number and size of files transferred are lower.

Source: HTTP Archive, StatCounter, author's analysis
Weighted average based on UK traffic mix

Clearly mobile page weights are growing steadily too, but because they start so much lower than desktop page weights, the shift to mobile is suppressing the growth in average consumed page weight, just as that hippo calf has reduced the average hippo weight in the enclosure. While desktop pages have been growing at +345KB/year, the average consumed page is only growing at +230KB/year.
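
The blended figure is just a traffic-mix-weighted average. A quick sketch, using round numbers close to those above:

```python
# Blended 'average consumed page weight': a traffic-mix-weighted mean of
# desktop and mobile page weights. Round numbers for illustration.

desktop_kb = 2100    # average desktop page weight (a little over 2MB)
mobile_kb  = 1050    # mobile pages roughly half the weight
mobile_share = 0.40  # share of page views from mobile devices

blended_kb = mobile_share * mobile_kb + (1 - mobile_share) * desktop_kb
print(f"Blended average page weight: {blended_kb:.0f} KB")
# As mobile_share rises, the blend grows more slowly than desktop pages alone -
# the baby hippo dragging down the average.
```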

However, baby hippos don't stay baby hippos, and once the transition to mobile devices is complete, the growth in average page weight will accelerate again - unless of course we've shifted all our usage to apps by then, which are even lighter than mobile pages.

Thursday, 22 October 2015

4K TV: 0.004K TV after compression

High resolution video is often cited as a driver for ultra-fast broadband. Here, for example, is Hyperoptic (a UK ISP) suggesting that if you want to watch 4K TV, you need 1 Gbps. 100 Mbps supposedly won't be enough.

A 4K TV isn't for everyone - apart from anything else it's very large, as you can see (though it isn't mandatory to install two Korean ladies with each set). However, it's certainly becoming more popular, and by 2020 over a third of West European households are expected to have a 4K set.

But what is frequently glossed over in broadband discussions is how little bandwidth is currently required for 4K TV, and how much less will be required in future. To be sure, how much bandwidth is needed for 4K is not a simple question. It depends on (at least) three things: the resolution of the video, the nature of the content and the time you have to compress it.

Uncompressed, high quality 4K can require 3 Gbps or more. However, in practice 4K is never delivered to consumers uncompressed. A compression algorithm (codec) is used to convert the raw digital video into a far smaller data stream. Many techniques are used in such algorithms. For instance, if a portion of the image is unchanged since the previous frame, the algorithm may (effectively) say 'for this portion of the screen, same again'. This requires far less data than retransmitting each pixel in that part of the screen. Or, if a large part of the image is all the same colour, then the algorithm may transmit the boundaries of the colour block, rather than separately transmitting the colour of each pixel within it.
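
As a toy illustration (and nothing like a real codec), here is a sketch of the 'same again' idea: send only the blocks that changed since the previous frame.

```python
# Toy inter-frame compression: transmit only the blocks of the frame that
# changed since the previous one; everything else is 'same again'. Real codecs
# are vastly more sophisticated - this just shows why static regions cost
# almost nothing.

import numpy as np

def encode_delta(prev, curr, block=16):
    """Return a list of (row, col, block_pixels) for blocks that changed."""
    changed = []
    h, w = curr.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            a = prev[r:r+block, c:c+block]
            b = curr[r:r+block, c:c+block]
            if not np.array_equal(a, b):
                changed.append((r, c, b.copy()))
    return changed

# Two 1080p-sized greyscale frames that differ only in a small 32x32 patch
prev = np.zeros((1080, 1920), dtype=np.uint8)
curr = prev.copy()
curr[500:532, 900:932] = 255

delta = encode_delta(prev, curr)
raw_bytes = curr.size                          # resend every pixel
delta_bytes = sum(b.size for _, _, b in delta) # only the changed blocks
print(f"Raw frame: {raw_bytes} bytes, delta: {delta_bytes} bytes "
      f"({raw_bytes / delta_bytes:.0f}x smaller)")
```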

The effectiveness of such techniques depends on many things, including the sophistication of the algorithm, the available processing power & time and the nature of the content (content with lots of movement is inherently more difficult, for instance).

However, the reduction in bandwidth is generally dramatic. Netflix, who know as much about 4K streaming as anyone, say they average 15.6 Mbps. However, sports content (which has lots of movement and must be compressed in real-time) can require more. BT's 4K Sport currently uses 20-30 Mbps.

Thus even today 4K is well within the capabilities of sub-FTTH broadband, and it is baffling that Hyperoptic think 100 Mbps is insufficient. Moreover, 4K's requirements are only going to fall. Moore's Law means we have ever more processing power to play with, which can be traded off against bandwidth to maintain picture quality while using fewer Mbps. In addition, compression algorithms grow ever more sophisticated. As a result, roughly 9% less bandwidth has been needed each year to support a given picture quality. And because video is such an important component of traffic these days, investment in codecs appears to be growing, meaning that the 9% rate may actually accelerate.
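
To see what a sustained 9% annual improvement implies, here is a rough projection starting from Netflix's 15.6 Mbps average. It is a straight-line extrapolation, not a forecast:

```python
# Compounding the ~9%/year codec efficiency gain: roughly how much bandwidth
# the same 4K picture quality might need in future, starting from Netflix's
# ~15.6 Mbps average today.

start_mbps = 15.6
annual_reduction = 0.09

for years_ahead in (0, 5, 10):
    needed = start_mbps * (1 - annual_reduction) ** years_ahead
    print(f"In {years_ahead:2d} years: ~{needed:.1f} Mbps")
# ~15.6 Mbps today, ~9.7 Mbps in 5 years, ~6.1 Mbps in 10 - comfortably within
# reach of ordinary ADSL/VDSL lines.
```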

Companies are already claiming dramatically lower bandwidths for 4K in trials. For instance, V-Nova has reported streaming 4K at just 6 Mbps in a trial with EE (the UK's largest mobile operator). Tveon, a Canadian start-up, is even more aggressive, suggesting that with their technology 2 Mbps will be enough. (That's better than a 1000:1 compression of the raw stream).

While these claims will need to be proven out, they nonetheless suggest the potential for dramatic improvement. Indeed, even at double V-Nova's 6 Mbps, most ADSL lines would be able to support 4K TV.

Your future TV may or may not be 4K, and you may or may not be able to see the difference even if it is. However, that monster TV won't be a justification for bringing fibre to your front door.

Monday, 12 January 2015

Killer Gigabit Apps - and why 1,259 experts are wrong

Sandy Lindsay, Master of Balliol College Oxford (1924-49), was once locked in debate with the fellows (professors) at the college on a contentious issue. It came to a final vote, in which the fellows, to a man, voted against the Master. He scowled around the room, saying “Gentlemen, we appear to have reached an impasse.”

In this post I’m going to take a similarly hubristic approach, by disagreeing with 1,259 experts. The 1,259 experts are cited in a recent report from the Pew Research Center, Killer Apps in the Gigabit Age. The Pew Research Center is a US non-partisan body which publishes much valuable material on media and the internet (among other topics). I’ve frequently cited their work. This report too is full of interesting ideas – my main problem with it is its title, for reasons I’ll come on to.

For the report Pew took responses from 1,464 experts, of whom 1,259 said they believed major new applications would capitalise on a significant rise in US bandwidth in the years ahead – the Gigabit Age of the title.

Pew also asked the experts what those applications might be – and here's where it gets interesting. The experts had many, many responses – Pew needs almost 50 pages just to summarise them. But almost none of the proposed applications need gigabit speeds or anything like them.

To take one example, telepresence is a recurring theme in the responses. This may or may not become widespread in the future - but the key point is that it does not require a gigabit. Even professional telepresence systems with a screen down the middle of the conference table, seating six at your end and another six in Timbuktu (or wherever your counterparts are), require just 18 Mbps according to Cisco and Polycom, who make such systems. So if you decide to chop your dining table in two and install multiple hi-def screens so you can have permanent telepresence with your Auntie Ethel, bandwidth will be the least of your worries.

Virtual reality is also oft mentioned in Pew's report. Oculus Rift is the closest we have to usable VR. It's at an advanced prototype stage, and is already impressive. The official verdict of this 90-year-old tester (having a virtual tour of Tuscany) is 'holy mackerel!'

I haven't been able to track down official views on the bandwidth required for Oculus Rift, but the displays are 1,000 x 1,000 pixels per eye. In combination that's about a quarter of the resolution of a 4K TV (with similar frame rates). Given that 4K requires 16 Mbps, this suggests that VR may actually be a relatively low bandwidth application.
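
For what it's worth, here's the rough arithmetic, on the crude assumption that required bitrate scales with pixel count at similar frame rates:

```python
# Rough scaling of a likely Oculus Rift bitrate from the 4K figure, assuming
# (crudely) that required bitrate scales with pixel count at similar frame
# rates. Illustrative arithmetic only.

rift_pixels = 2 * 1000 * 1000   # 1,000 x 1,000 per eye, two eyes
uhd_pixels  = 3840 * 2160       # 4K UHD
uhd_mbps    = 16                # 4K streaming figure used in the post

rift_mbps = uhd_mbps * rift_pixels / uhd_pixels
print(f"Rift is ~{rift_pixels / uhd_pixels:.0%} of 4K resolution "
      f"-> ~{rift_mbps:.1f} Mbps")   # roughly a quarter of 4K, about 4 Mbps
```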

Some of the experts mentioned holographic displays. Bandwidth for these? Who knows. We'll put them in the 'maybe' category.

A number of the experts mentioned e-health, including monitoring vital signs, remote consultation and so on. Again, these are not high speed apps – they require kilobits or a few megabits at most. Several of the respondents cited that old chestnut, remote surgery. Does anyone seriously think this is enabled by improved home bandwidth?

Wearable computing, the internet of things, life logging and a wide array of other possibilities were mentioned in the report – but again, there is no reason to expect these to need gigabit speeds or anything like it.

So the real story here is not that there's a cornucopia of apps that require gigabits. Rather it is that a respected research institute could ask over a thousand experts and still not find a single clear case of an application requiring gigabit speeds. Change the title to 'Lack of Killer Apps for a Gigabit Age', and the Pew report is spot on.

Wednesday, 28 May 2014

Do we need the Delphic Oracle to make sensible telecoms investments?

The Delphic Oracle was the leading seer of ancient Greece. This was reliably established by King Croesus (as in 'rich as'), who had his messengers ask a sample of seers at a pre-agreed time what he was doing at that very moment. The Delphic Oracle correctly said he was making lamb and tortoise stew.

However, the Oracle's statements were rarely this unambiguous -  they tended to be a bit more, well, Delphic. For instance, asked about a prospective military expedition, she replied: "You will go you will return never in war will you perish" - place punctuation at your own risk.

It is sometimes argued that, absent reliable seers, we have to invest in superfast broadband because of the unknown unknowns - the applications that are surely coming, but which we just lack the foresight to predict today.

There are many problems with this argument, but one of them is that its proponents tend to underestimate our foresight. Here for instance is the view of my old friends the FTTH Council on how little we knew in 2000 about the drivers of demand for today's broadband:


Just how true is it that these things were unforeseen in 2000?

Videoconferencing with Skype
Skype wasn't founded until 2003, but video calls over the internet have been long discussed - at least since 1995, when Stewart Loken of the Berkeley Lab said "internet videoconferencing is about to become commonplace". Obviously to know what bandwidth you might need, you don't need to know the name of the company that's going to be most successful, you just need to know what the application is, so the fact that Skype didn't exist in 2000 is neither here nor there.

HD-TVs with 42" and more in 3D
The first formal HDTV research programme began in 1970. Consumer sets went on sale in the US in 1998, and some of them had 55" displays. 3D TV was trialled as early as 1994.

Facebook
We certainly didn't know about Facebook in 2000, it wasn't founded until 2004. (Though you can discuss with the Winklevoss twins exactly when it was conceived). However social media had been around for a long time - GeoCities, an early example, was founded in 1994. And again, we don't need to know the name of the provider to know the necessary bandwidth. Facebook uses text, pictures and a bit of video - all well understood as internet media in 2000.

Online shops
Amazon was founded in 1994. 'nuff said.

Google
By 2000 Google was already available in 10 languages (and had hired its first chef a year prior)

Digital Photography
The first consumer digital cameras were released in 1990 (the same year as Photoshop). Webshots, founded in 1999, was one of the first web-based photo sharing sites, but consumers had been uploading photos to BBSs (not always savoury ones) for some years before that.

iPad and Smartphones
The Palm VII, one of the first PDAs with wireless capability for internet access, shipped in 1999. I'll give the FTTH Council the iPad, which wasn't widely anticipated. Of course, it doesn't need particularly high bandwidth (though it has driven more traffic by extending hours of internet use in the home).


So, of the Council's seven things "we did not know in 2000" it turns out we did know 6½ of them. Their view of our ignorance is a bit ... ignorant.

The vast majority of things we do with the internet today were in fact anticipated in 2000, at least in broad brush strokes. That's why it's particularly problematic for FTTH fans that there are (in their own words) 'no really compelling applications yet' for FTTH. In 2000 we knew (roughly) what we would do with broadband speeds. In 2014 we have no real idea what we might do with superfast.