Sunday, 31 July 2011

The Power of Openness

I was going to title this post “Why Samsung TVs are much better than Sony’s”, but that might sound out of place in a technical blog focused on computing and telco technologies.

I have blogged recently about openness versus closedness – e.g. Android vs. iOS – and the same issue manifests itself in the broader consumer world, such as flat-screen TVs.

With the Australian government’s decision to move from analogue TV to digital, new flat-screen TVs are selling like hot cakes. The choices are dazzling – each boasts its own technical advantages in picture quality, energy saving, sound quality and so on. The average consumer cannot be bothered with the technical mumbo jumbo; all they want is a TV! But is that all there is to it?

As display technology advances and matures, all the major manufacturers from all the major countries can produce decent display units, so there is little point analysing which technology is better in this regard – at the end of the day, whatever looks and sounds pleasing to you should win. Do those manufacturers differ in any way, then? Yes they do – they differ in how open they are in embracing different technologies.

Nowadays, it is common to record your favourite HD TV programs and movies to USB memory sticks or hard disks and view them later. An HD video can take anywhere from 1GB to 10GB of space. The major file systems supported by TV manufacturers are FAT/FAT32 and NTFS. FAT32 imposes an upper limit on file size of 4GB minus one byte, which basically rules out FAT or FAT32 as a valid format for your USB storage. This is where openness matters – manufacturers such as Sony and Panasonic support nothing other than FAT/FAT32 on their TVs and Blu-ray players (or USB 1 devices), whereas Samsung and most Chinese brands also support NTFS.
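The 4GB-minus-one-byte ceiling comes from FAT32 storing file sizes in a 32-bit field. A minimal sketch of the check (the file sizes below are hypothetical examples, not from any particular recording):

```python
# FAT32 records a file's size in a 32-bit field, so the largest
# representable file is 2**32 - 1 bytes (4GB minus one byte).
FAT32_MAX_FILE_SIZE = 2**32 - 1  # 4,294,967,295 bytes

def fits_on_fat32(size_bytes: int) -> bool:
    """Return True if a file of this size can be stored on FAT32."""
    return size_bytes <= FAT32_MAX_FILE_SIZE

# A typical HD recording easily exceeds the limit (hypothetical sizes):
short_clip = 700 * 1024**2   # 700MB clip
hd_movie = 8 * 1024**3       # 8GB recording

print(fits_on_fat32(short_clip))  # True
print(fits_on_fat32(hd_movie))    # False
```

Any HD recording over roughly 4GB therefore needs NTFS (or another modern file system) on the USB drive.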

There is a good reason for Sony to be restrictive: it owns media companies too – Sony Pictures. It is in their interest to make recording or backing up your movies in one convenient place harder. It is the same theme that has played out in the computing industry for decades. Consumers can cast their vote with their hard-earned currency – the classic supply-and-demand interplay.

Friday, 22 July 2011

Getting Cloudy

Last week I attended the Amazon Web Services APAC Tour in Sydney. It was quite helpful to see AWS in action and hear customer testimonials first-hand.

There is no doubt that the cloud environment can help many IT departments and internet-oriented businesses. The main benefits are cost reduction and improved scalability and reliability. Unlike many IT fads of previous years, there is real momentum behind, and real benefit in, adopting the cloud approach for many businesses and government organisations alike.

When it comes to telco BSS and OSS applications, the use case is quite different from many other industries’. At the bottom level, many BSS and OSS applications need to be connected to the network in real time – real-time charging and service control for BSS, and real-time network performance management for OSS. These real-time interfaces are usually dedicated links using telco standards (e.g. SS7-based), which conventional cloud providers do not support, so you have no choice but to host these applications locally.

For the non-real-time (i.e. offline) telco applications, many need access to a vast repository of usage records, transaction records and network events. A telco may choose to host only the front-office web applications in the cloud and have them retrieve the necessary data, stored locally, over the internet; or take advantage of cloud storage and frequently ship all the transaction data to the cloud over the internet in batch mode. Either way, the bandwidth requirement on the WAN link between the telco and the cloud data centers is very high. Assume 1 million subscribers at 0.5 BHCA (busy-hour call attempts per subscriber) and a CDR size of 750 bytes (CDRs from the 3GPP Bi interface may be much larger): that is 500,000 calls, or roughly 375MB of CDR data, in the busy hour alone – daily volumes will be several times higher. When it comes to OSS network management, the alarm repository can easily run into gigabytes and terabytes.
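The back-of-envelope arithmetic above can be checked in a few lines. The subscriber count, BHCA and CDR size are the figures assumed in the text:

```python
# Back-of-envelope CDR volume estimate, using the figures from the text.
subscribers = 1_000_000
bhca_per_sub = 0.5   # busy-hour call attempts per subscriber
cdr_bytes = 750      # 3GPP Bi-interface CDRs may be much larger

busy_hour_calls = subscribers * bhca_per_sub
busy_hour_mb = busy_hour_calls * cdr_bytes / 1_000_000

print(f"{busy_hour_calls:,.0f} calls -> {busy_hour_mb:.0f} MB in the busy hour")
# 500,000 calls -> 375 MB in the busy hour
```

Scaling the busy-hour figure up to a full day, and adding OSS events and alarms on top, makes it clear why the WAN link quickly becomes the bottleneck.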

There may be a compelling business case for small telcos that do not have large amounts of data and have a slow growth model. Still, a reliable, high-bandwidth WAN link is required. Such links are not cheap and are charged on an ongoing basis – unlike local servers, which are a one-off capital expenditure.

Conversely, large telcos have no problem with network links because, in many cases, they own the network. Yet their data volumes can be orders of magnitude higher than the small telcos’. And if a telco is not physically located in the same country or continent as its cloud provider’s data centers, a reliable WAN link becomes even more expensive, if it is possible at all – submarine cables are owned by consortiums, not by any single telco.

Also, beware of the cloud providers’ pricing models. They charge for each virtual machine instance, each I/O, storage, and so on. The other day, I played with AWS EC2 by creating a supposedly free Linux-based micro-instance and snooping around – changing directories, running ‘ls’, etc. – before terminating the instance. To my surprise, there was a total of 13,473 I/Os, which I could not account for, and I was charged a total of 5 cents for less than an hour of activity (mostly idle anyway). So take a close look at your cloud provider’s pricing model; it may turn out to be much cheaper to buy your own Windows servers to host Outlook than to do it in the cloud.
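The billing dimensions mentioned above (instance-hours, I/O requests, storage) can be combined into a simple cost sketch. All the rates below are hypothetical placeholders, not actual AWS prices – always check the provider’s current price list:

```python
# Sketch of a usage-based cloud bill.
# All rates are hypothetical placeholders, NOT real AWS prices.
def estimate_bill(instance_hours, io_requests, storage_gb_months,
                  hourly_rate=0.02, per_million_io=0.10, per_gb_month=0.10):
    """Sum charges across the three billing dimensions."""
    instance_cost = instance_hours * hourly_rate
    io_cost = io_requests / 1_000_000 * per_million_io
    storage_cost = storage_gb_months * per_gb_month
    return instance_cost + io_cost + storage_cost

# Even a near-idle hour accrues charges on every dimension,
# e.g. one instance-hour plus the 13,473 I/Os observed above:
print(f"${estimate_bill(1, 13_473, 0.01):.4f}")
```

The point is not the exact figure but the shape of the model: every dimension is metered, so “idle” is never free.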

A middle-ground solution is for a telco to host its own cloud environment and use it both for its own BSS/OSS systems and to provide cloud services to its customers. Quite a few telcos have already taken this route. Of course, hosting a cloud is expensive in terms of CAPEX and OPEX, so there must be a good business case for doing so.
