Sunday, 30 October 2011

On RSA Hack …

Recently our IT system admin surveyed the company about some changes to how we would use RSA SecurID. As I was totally oblivious to the RSA hacking incident, I did not pay much attention to the survey. Then earlier this month, while I was doing a product demo at a client site, my company VPN login stopped working halfway through the demo, and it remained so for the next couple of days until I rang up IT support and got it reset. Obviously, as a precautionary measure, the sysadmin had changed the RSA RADIUS policy …

Although the RSA hacking incident and the related APT attacks did not affect my company, it was enough to get the IT department worried. In the Open Letter to RSA Customers, RSA called the attack ‘extremely sophisticated’. However, knowing that this comes from the Chairman of RSA, you have to take it with a grain of salt. In the blog Anatomy of an Attack by Uri Rivner from RSA, we gain some insight into what happened, at a very high level. I find nothing new in the method of the attack, though. Still, there are a couple of points that bring a smirk or two.

First is the use of the reverse-connect mode of the backdoor software to circumvent the firewall at the remote end. This reminds me of the bad/good old days before the proliferation of the internet. My job was to develop IVRs for customers across many industries. To maintain the systems, we needed to connect to them remotely. In those days, the connection method of choice was the good old PSTN line modem, plugged into the IVR system. The ‘normal’ mode of connection was to dial up the modem and log in (using the UNIX cu command). However, to save on long-distance call costs, we usually buried a hidden option in the IVR menu. Upon entering a code, the IVR would invoke a shell script to get the remote modem to dial back to the office modem and give a login prompt upon connection – using the UNIX ct command. This use of reverse connection is totally legit, by the way.

The second and more obvious point is the use of social engineering as the starting point. We have seen from spy movies time and time again that humans are the weakest link in the line of defence. Social engineering is made extremely easy and cheap today, thanks to the massive adoption of social networking services. This is part of the reason I doubt the words ‘extremely sophisticated’ coming from the RSA Chairman. It is extremely easy to find out someone’s personal details from their social network pages, along with a list of friends they trust. This makes it extremely easy to spoof an email with a backdoor attached and have a high probability of it getting viewed and opened. Once the backdoor is in, all bets are off. You don’t have to be ‘extremely sophisticated’ to snoop around and gain more data – e.g. most people use their mailbox as a file repository, so once someone has access to the Outlook PST file, they can see a history of everything. This is partly the reason I never use thick mail clients and use webmail instead – after all, it is the era of the cloud, man!

Another interesting statistic comes from looking at Who Else Was Hit by the RSA Attackers. Apparently, similar attacks have hit hundreds of other companies recently, and close to 90% of the attacks came from Beijing. No wonder people conclude that the attacks were state-sponsored! China is a country where censorship reaches into absolutely every medium. The vast majority of its netizens are subjected to the GFW of China, which dynamically blocks destinations based on a myriad of rules, including a government-issued list of sensitive keywords. So in a country where vast numbers of web addresses are inaccessible to ordinary citizens (including many social websites, and even this blogger site), only the privileged and well-resourced can carry out such ‘extremely sophisticated’ attacks.

Friday, 16 September 2011

Oracle vs. Google Lawsuit

I have read news about Oracle’s lawsuit against Google over its Android platform, alleging that Google copied from Java. Some have even concluded that Android is in jeopardy. I had not paid much attention to the details of the lawsuit until today, when I came across Google’s rebuttal by a technical expert, Prof. Owen Astrachan of Duke University. The report is published on Groklaw – it’s really fun to read.

The central issue of the lawsuit seems to be whether the Java API (or the API of any language or library, even C header files) is copyrightable. Google’s argument is that APIs are not creative expression and hence not copyrightable, but the implementations of the APIs are. Prof. Astrachan gave strong arguments backed by facts and analogies.

Surely, anyone who has done software programming would not consider interfaces copyrightable. What do you think? Click your preference in the poll in the upper right-hand corner of this page to express your view.

Updated 2011-10-30: the verdict is out. Total votes: 6; Yes vote: 5 (83%); No vote: 1 (16%).

Sunday, 31 July 2011

The Power of Openness

I was going to name this post “Why Samsung TVs Are Much Better Than Sony’s”. However, that might sound out of place in a technical blog focusing on computing and telco technologies.

As I have blogged recently about the issue of openness vs. closedness – e.g. Android vs. iOS – the same issue manifests itself in the broader consumer world, such as flat-screen TVs.

With the Australian government’s decision to move from analogue TV to digital, new flat-screen TVs are selling like hot cakes. The available choices are dazzling – each boasts its own technical advantages in terms of picture quality, energy saving, sound quality, etc. Your average consumer could not care less about the technical mumbo jumbo; all they want is a TV! But is that all there is to it?

As display technology advances and matures, all the major manufacturers from all the major countries can produce decent display units/screens. So there is no point trying to analyse which technology is better in this regard – at the end of the day, whatever looks and sounds pleasing to you should win. So do those manufacturers differ in any way? Yes they do – they differ in how open they are in embracing different technologies.

Nowadays, it is common to record and view your favourite HD TV programs and movies from USB memory sticks or hard disks. An HD video can take anywhere from 1GB to 10GB of space. The major file systems supported by the TV manufacturers are FAT/FAT32 and NTFS. With FAT32, there is an upper limit on file size: 4GB minus 1 byte. This basically rules out FAT or FAT32 as a valid format for your USB storage. This is where it matters – manufacturers such as Sony and Panasonic do not support anything other than FAT/FAT32 on their TVs and Blu-ray players (or USB 1 devices); whereas Samsung and most Chinese brands do.
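Since FAT32 stores a file’s size in a 32-bit field, that ceiling is easy to verify – a quick sketch (the 10GB figure is just an illustrative recording size):

```java
// FAT32 keeps file sizes in a 32-bit field, so the largest
// representable file is 2^32 - 1 bytes: 4GB minus one byte.
public class Fat32Limit {
    public static void main(String[] args) {
        long fat32Max = (1L << 32) - 1;              // 4,294,967,295 bytes
        long hdRecording = 10L * 1024 * 1024 * 1024; // a 10GB HD recording

        System.out.println(fat32Max);                // 4294967295
        System.out.println(hdRecording > fat32Max);  // true - too big for FAT32
    }
}
```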

There is good reason for Sony to be restrictive: it owns media companies too – Sony Pictures. It’s in their interest to stop you from conveniently recording or backing up your movies in one place. So it is the same theme that has been playing out in the computing industry for decades. Consumers can cast their vote with their hard-earned currency – the classic supply-demand interplay.

Friday, 22 July 2011

Getting Cloudy

Last week I attended the Amazon Web Services APAC Tour in Sydney. It was quite helpful to see AWS in action and hear real-life customer testimonials.

There is no doubt that the cloud environment can help many IT departments and internet-oriented businesses. The main benefits are cost reduction and improved scalability and reliability. Unlike many IT fads of previous years, there is real momentum behind, and real benefit in, adopting the cloud approach for businesses and government organisations alike.

When it comes to telco BSS and OSS applications, the use case is quite different from many other industries. At the bottom level, many BSS and OSS applications need to be connected to the network in real time: real-time charging and service control for BSS, and real-time network performance management for OSS. These real-time interfaces are usually dedicated links using telco standards (e.g. SS7-based), which conventional cloud providers do not support. So you have no choice but to host these applications locally.

For the non-real-time (i.e. offline) telco applications, many need access to the vast repository of usage records, transaction records and network events. A telco may choose to host only the front-office web applications in the cloud and have them retrieve the necessary data stored locally over the internet; or it may take advantage of cloud storage and frequently ship all the transaction data to the cloud in batch mode. Either way, the demands on the WAN link between the telco and the cloud data centers are very high. Let’s assume 1 million subscribers with 0.5 BHCA and a CDR size of 750 bytes (note that CDRs from the 3GPP Bi interface may be much larger). This produces 375MB of CDR data a day. When it comes to OSS network management, the alarm repository can easily run into gigabytes and terabytes.
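The 375MB figure follows directly from those assumptions – a back-of-envelope sketch:

```java
// Rough CDR volume estimate using the figures from the text.
public class CdrVolume {
    public static void main(String[] args) {
        long subscribers = 1_000_000;
        double callsPerSub = 0.5;   // 0.5 BHCA per subscriber
        long cdrBytes = 750;        // average CDR size in bytes

        double dailyBytes = subscribers * callsPerSub * cdrBytes;
        System.out.println(dailyBytes / 1_000_000 + " MB");  // 375.0 MB
    }
}
```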

There may be a compelling business case for small telcos that do not have large amounts of data and have a slow growth model. Still, a reliable WAN link with high bandwidth is required. These links may not be cheap, and they are charged on an ongoing basis – unlike local servers, which are a one-off expenditure.

By contrast, large telcos do not have a problem with network links because in many cases they own the network. Yet their data volumes can be orders of magnitude higher than the small ones’. And if a telco is not physically located in the same country or continent as its cloud provider’s data centers, a reliable WAN link becomes even more expensive, if it is possible at all – because the submarine cables are owned by consortiums, not any single telco.

Also, beware of the pricing models of the cloud providers. They charge for each virtual machine instance, each I/O, storage, etc. The other day, I played with AWS EC2 by creating a supposedly free Linux-based micro instance and snooping around – changing directories, doing ‘ls’, etc. – before terminating the instance. To my surprise, there was a total of 13,473 I/Os, which I could not account for. I got charged a total of 5 cents for less than an hour of activity (mostly idle anyway). So take a close look at your cloud provider’s pricing model; it may turn out to be much cheaper to buy your own Windows servers to host your Outlook than to do it in the cloud.

So a middle-ground solution is for a telco to host its own cloud environment and use it for its own BSS/OSS systems as well as to provide cloud services to its customers. Quite a few telcos have already taken this route. Of course, hosting a cloud is expensive in terms of CAPEX and OPEX, so there must be a good business case for doing so.


Wednesday, 22 June 2011

Apple Development

When it comes to software development, I am absolutely an omnivore. I embrace any platform, programming language and framework, and I use them in my pet projects, which form my basis for evaluating the technology.

However, I had never touched any Apple technology until last month, when my wife twisted my arm to buy an iPhone 4. Not because I dislike Apple’s philosophy and its snobbish ways, but because it was so irrelevant to me. But after I bought the iPhone 4, I got the urge to develop for iOS and the iPhone so that my $1000+ investment in the phone was not a total waste.

So I happily registered as an Apple Developer. Then I needed to get the SDK, an iOS simulator and an IDE. Xcode 4 was about the only tool available. When I logged in, ready to download Xcode 4, I was told:

Hi Romen,

You must be an iOS or Mac Developer Program member to download Xcode 4 or you can purchase Xcode 4 from the Mac App Store.

The membership costs $99 a year – not an option when I just want to try it out. Xcode 4 costs $4.99 from the App Store – hmm, not bad, though the customer reviews were not very flattering… Then I realised that Xcode only runs on Mac OS X, which means I would have to fork out another $1000+ to buy an Apple computer just to use Xcode. I don’t mind an Apple computer, but there is no justification for a price double that of other brands, with a gross margin seven times that of others. I do have a Mac OS X virtual machine image on VMware, but it was too slow just running itself, let alone Xcode. This reminds me of the early 1990s, when you had to pay left and right just to learn a programming language – especially tough for students like me.

Also, once you finish developing your app, you have to go through the App Store to sell it and get exploited by Apple again. Apple does not do a good job of keeping the quality of what it sells high anyway – we had only had the iPhone 4 for three weeks and had already experienced buggy apps: buttons on the default SMS app and the Facebook app were greyed out when they shouldn’t have been, and this could only be fixed by restarting the app or rebooting the phone. (By the way, I see no rationale in Apple’s decision not to let the user exit each app.)

The barrier to entry seems too high for a private user. I am used to the open-source environments of many innovative technologies (led by Java, perhaps). For me personally, buying into Apple’s closed-door philosophy is just too much. So I have to ditch Apple technology, as I always have.

There is plenty to explore in the Android domain.

Friday, 3 June 2011

Must-Haves On A New ₳₱₱£€ iPhone 4

The whole ₳₱₱£€ experience from start to end – from developer to user, from manufacturer to partner – is just a big money-grabbing exercise by Apple at every step. ₳₱₱£€ not only asks you to pay left and right but also restricts what you can do and how you do it. To get back some sanity when using the new iPhone 4, here are some things that have to be done:

  1. Jailbreak it – for iPhone 4 running iOS 4.3.3 (8J2) here is a good starter.
  2. Change the root password – after jailbreaking, the phone (which runs a BSD-derived Unix) still has the default root password ‘alpine’. This must be changed. Note that the MobileTerminal app does not work on iOS 3 and 4 at the time of this post. So it is better to install OpenSSH, connect to the iPhone as root using a client such as PuTTY, and use the passwd Unix command to change the root password.
  3. Ditch iTunes – iTunes forces you to sync apps between the iPhone and the computer. This is not only a waste of time and disk space, but also creates problems when there are multiple computers. CopyTrans is a good solution to this.
  4. Get a decent GPS app – the default map application assumes the phone is connected to the internet all the time, which can be extremely costly with many carriers and impractical in many countries. So there is a need for a GPS app that can store map files on the iPhone locally. xGPS seems to fit the bill, although it cannot be used fully offline (unlike TomTom or Nokia Maps) – you still need to be connected to Google when searching for addresses and working out routes.

As I am totally new to ₳₱₱£€ products, any free app suggestions are welcome.

Tuesday, 31 May 2011

RIP Symbian, Hello iPhone 4

The joint announcement from Nokia’s Kai Öistämö and Microsoft’s Andy Lees last month formally declared the death of Symbian. The new N8s and E7s may well be the last emperors of the Symbian dynasty.

In reality, the Symbian application community has been languishing for a long time, far behind the newcomers. Trying to find a decent internet radio app today, the only option I could find was Nokia Internet Radio. I installed it on my N95, and it soon crashed my phone and caused it to reboot. Such problems were reported back in 2008, yet it seems nothing has been done.

On the other hand, there is far more software in the iPhone and Android communities. Take internet radio for example: TuneIn supports just about any mobile OS except Symbian. It has access to over 40,000 stations and runs very smoothly on my new iPhone 4 – a far cry from the pathetic Nokia Internet Radio.

I had never owned an Apple product until yesterday when, after some nagging, I bought a white iPhone 4 for Rose. I always thought Apple products were for women and children, for two reasons:

  1. they look cool
  2. they are easy to use

But after playing with the iPhone 4, I changed my mind – it’s not easy to use at all! It’s just as bad as any other product, if not worse.

First of all, you cannot even start using the iPhone without a MicroSIM card. At least my trusted N95 could work without a SIM card and function normally except for telco services. So the first hurdle for me was to cut my normal SIM card down with scissors to make it into a MicroSIM. I was amazed that it actually worked!

After that, I had to register on the App Store and hand over my private information. Even if you only want to install free software, you are still required to put in your credit card details – what the heck?!

Then I found that the iPhone refused to load any of the music files I had accumulated over the years – not even the polyphonic ringtone I had been using for years. That was the final straw. I had no option but to jailbreak, after which the whole user experience became much more acceptable. At least the process was much easier and quicker than cracking my N95.

Now the women and children in my household can better enjoy the over hyped iPhone 4 before they get bored with it, just like the old XM5800.

Tuesday, 17 May 2011

Throttling
A while ago, I blogged about NBN Australia and how much benefit city folks like myself will (not) receive. There is another little quirk that NBN Co. never mentions – throttling.

Throttling refers to traffic control measures applied by ISPs and telcos to limit the speed/bandwidth available to end users – squeezing the pipe, if you like.

Have you ever wondered why your Limewire/Frostwire bandwidth is so low (as low as a few kbps – lower than dial-up speed) although you have allowed the highest settings in the software? That is because ISPs throttle much P2P traffic, including Limewire/Frostwire. Such traffic policy control has been standardised by the telco industry body 3GPP under the topics of PCRF and PCEF (TS 23.203), where DPI is typically used to detect the type of traffic (e.g. Limewire, IMAP, FTP, Skype) and apply service policies and charging to individual traffic sessions. These standards are implemented by all the major network equipment vendors.
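Conceptually, the bandwidth limit a PCEF applies to a classified flow can be pictured as a token bucket. The sketch below is a toy illustration with made-up numbers, not any vendor’s actual implementation:

```java
// Toy token-bucket limiter: a flow may send `rate` bytes/sec,
// with bursts of up to `burst` bytes. Packets that find too few
// tokens are queued or dropped - i.e. the flow is throttled.
public class TokenBucket {
    private final double rate;   // bytes per second allowed
    private final double burst;  // maximum burst size in bytes
    private double tokens;
    private double lastTime;

    TokenBucket(double rate, double burst) {
        this.rate = rate;
        this.burst = burst;
        this.tokens = burst;     // start with a full bucket
        this.lastTime = 0.0;
    }

    // Returns true if a packet of nBytes may pass at time t (seconds).
    boolean allow(double t, int nBytes) {
        tokens = Math.min(burst, tokens + (t - lastTime) * rate);
        lastTime = t;
        if (nBytes <= tokens) {
            tokens -= nBytes;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Suppose DPI has classified this flow as P2P and the
        // policy says 4 kB/s with an 8 kB burst allowance.
        TokenBucket p2p = new TokenBucket(4_000, 8_000);
        System.out.println(p2p.allow(0.0, 8_000)); // true  (initial burst)
        System.out.println(p2p.allow(0.1, 8_000)); // false (only 400 bytes refilled)
    }
}
```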

Throttling is applied because such P2P traffic can amount to more than 80% of an ISP’s total traffic, which is not surprising considering an HD movie can be tens of gigabytes. So it is pretty certain that even if you have an unlimited internet plan with a 1Gbps burst rate, a throttled P2P download can still take days or weeks for that movie you are dying to see.

Sunday, 8 May 2011

Cheating Red Alert 3

I recently started to play Red Alert 3 (v1.0) just for amusement. I am not a serious gamer, so naturally I needed to cheat my way through some chapters.

Unlike games such as WarCraft 3, which have built-in cheat codes, RA3 has none (or at least I could not find any on the net). Fortunately, there is CheatEngine – a very handy open-source bundle of tools built especially for cheating in games! Since CE has been built by hackers for hackers, documentation is scarce and not easy to find.

From a quick search, there are a couple of ways to cheat in RA3:

  1. Modify money – use CE’s memory scanner to search for the memory addresses that store the money amount. Once found, modify them to give yourself virtually unlimited money. This approach has been demonstrated on YouTube. The video is blurry, but the method is exactly the same as the first tutorial of CE, which is bundled with the CE installation.
  2. God Mode, Unlimited Resources and Quick Research – this is done by code injection. The assembly source code is available on the CE Forum. However, the forum does not say how to apply the hack, so here is a step-by-step guide.

Once both RA3 and CE are running (it doesn’t matter which starts first):

  1. Go to CE, select the process named ‘…’ and open it.
  2. Click the ‘Memory view’ button on CE to open the memory view window.
  3. From the Memory Viewer window, select the menu Tools –> Auto Assemble (Ctrl-A).  This will pop up the Auto assemble window.
  4. Paste the assembly code found in the CE Forum (also listed below) into the Auto assemble window.
  5. Press the Execute button. Click Yes on the confirmation message to inject the code. You should get another message saying the injection was successful.

That’s it. When you switch back to RA3, you will find that your money sits at about 100,000, research is very fast, and your units can take a constant pounding for several minutes without losing health.

The assembly code is shown below. Notice the three ‘dd 1’ lines in the Variables section – to turn a particular cheat off, change its value to 0. Also note that the code for later versions of RA3 is different; those versions can be found on the CE Forum as well.

// Command and Conquer - Red Alert 3
// Game Version  : 1.0.3174.697
// Script Version: 1.0
// CE Version    : 5.4
// Resource, Research and GodMode
// 08-Nov-2008

// Hacking Points (jumps placed at the four hooked instructions;
// the _Back* labels are the return points in the game code)
 jmp _MonResource
 jmp _GodMode
 jmp _MonRPoints
 jmp _MonPlayerID

// Monitor Resource (section and variable labels below are
// reconstructed from the jump targets and references)
_MonResource:
 push eax
 mov eax,[iPlayerID]
 cmp eax,[ecx+20]           // Player's?...
 pop eax
 mov ecx,[ecx+000000e4]     // Original code
 mov [pResource],ecx        // Save ptr for debugging
 jne _ExitMR                // ...Jump if false

 cmp dword ptr [iEnableMR],0
 je _ExitMR                 // Jump if Monitor Resource is disabled

 mov ecx,[ecx]

 cmp dword ptr [ecx+04],#100000
 jge _ExitMR                // Jump if greater than 100000

 mov dword ptr [ecx+04],#100000

_ExitMR:
 mov ecx,[pResource]
 jmp _BackMR                // Back to main code

// God Mode
_GodMode:
 push eax

 mov eax,[esi-08]           // Get ptr to Unit
 or eax,eax                 // Null ptr?
 jz _ExitGM                 // Jump if true

 mov eax,[eax+00000418]     // Get ptr to Player
 mov eax,[eax+20]           // Get ID

 cmp eax,[iPlayerID]        // Player's?...
 jne _ExitGM                // Jump if false

 mov [pLastOne],esi         // Save ptr for debugging

 mov eax,[esi+30]
 cmp eax,00000070           // Is it an effect?
 je _ExitGM                 // Jump if true

 cmp dword ptr [iEnableGM],0
 je _ExitGM                 // Jump if God Mode is disabled

 movss xmm0,[esi+0c]        // Get Maximum HP

_ExitGM:
 movss [esi+04],xmm0        // Original code

 pop eax
 test eax,eax               // Restore EFLAGS
 jmp _BackGM                // Back to main code

// Quick Research
_MonRPoints:
 push edx

 movss [esi+2c],xmm0        // Original code

 cmp dword ptr [iEnableMRP],0
 je _ExitMRP                // Jump if Quick Research is disabled

 mov edx,[esi+28]
 mov edx,[edx+20]
 cmp edx,[iPlayerID]        // Player's research?
 jne _ExitMRP               // Jump if false

 mov edx,43af0000           // 350.0
 cmp edx,[esi+2c]
 jle _ExitMRP

 mov [esi+2c],edx

_ExitMRP:
 pop edx
 jmp _BackMRP               // Back to main code

// Monitor Player ID
_MonPlayerID:
 mov eax,[edi+00000080]
 mov [iPlayerID],eax        // Save Player ID for further use
 jmp _BackMPI               // Back to main code

// Variables (names inferred from the references above)
iPlayerID:
 dd 0
pResource:
 dd 0
pLastOne:
 dd 0
iEnableMR:
 dd 1
iEnableGM:
 dd 1
iEnableMRP:
 dd 1

// Original Codes (the instructions overwritten at the four hook points)

 mov ecx,[ecx+000000e4]
 movss [esi+04],xmm0
 movss [esi+2c],xmm0
 mov eax,[edi+00000080]



Monday, 18 April 2011

IE9 and MIME Type Handling

I upgraded my Internet Explorer from v8 to v9 a couple of weeks ago and immediately discovered that the SyntaxHighlighter I use on my blog no longer worked. I turned on the console view of IE9 (by pressing F12), which showed that the SyntaxHighlighter Javascript file loaded from my Google Site is actually named syntaxhighlighter2.txt, while the declared type of the link was text/javascript.

This mismatch between the file name extension and the specified MIME type upsets IE9, so it refuses to load my SyntaxHighlighter script. I tried to turn off the MIME handling feature in IE9 by changing the Windows registry, following the instructions in several articles. None of them worked.

So I took the simple approach – I changed my SyntaxHighlighter file extension from .txt to .js – and it worked.

Thursday, 24 March 2011

Hacking LinkedIn API – Take 2

I had taken a long holiday from work and from the internet, so I hadn’t touched this blog until now. I discovered that many of the visits to my blog were to one of my posts, Hacking LinkedIn API.

When the LinkedIn APIs first came out, I played with them and bypassed the convoluted and stupid OAuth manual process. I documented the approach in my blog post. However, soon after I posted it, LinkedIn changed the login page and the access code page so that the HTML posting and scraping code no longer worked. I promised to update my Java code, so here it is.

This time, I am using a later version of the Java wrapper for the LinkedIn APIs – LinkedIn-J 1.0.361. The Java API has totally changed from the previous one I used.

The main differences in my 2nd attempt at hacking the APIs are as follows:

Using HTMLEditorKit

Once you go to the authorisation URL returned by LinkedIn, it displays a login form (you must clear your browser’s cookies to disable the auto-login). In this login form, there are a number of hidden fields, just like before. However, this time LinkedIn has added a few more fields, including a dynamic one named csrfToken. When we submit the form, we must include all the hidden field values as well, so we need to parse the HTML to retrieve the dynamic field values. I used the HTMLEditorKit library because it is part of Java Swing, so no external JARs are required.


So to retrieve the field values, I added an HTML parser callback class.

class ReportAttributes extends HTMLEditorKit.ParserCallback {
 public String csrfToken, sourceAlias;

 public void handleStartTag(HTML.Tag tag, MutableAttributeSet attributes, int position) {
  listAttributes(attributes);
 }
 public void handleSimpleTag(HTML.Tag tag, MutableAttributeSet attributes, int position) {
  listAttributes(attributes);
 }
 private void listAttributes(AttributeSet attributes) {
  // the hidden inputs are identified by their id attributes
  if (attributes.containsAttribute(HTML.Attribute.ID, "csrfToken-oauthAuthorizeForm")) {
   csrfToken = (String) attributes.getAttribute(HTML.Attribute.VALUE);
  } else if (attributes.containsAttribute(HTML.Attribute.ID, "sourceAlias-oauthAuthorizeForm")) {
   sourceAlias = (String) attributes.getAttribute(HTML.Attribute.VALUE);
  }
 }
}

Enabling Cookies

It turned out that you must enable cookies, otherwise LinkedIn will complain when you try to submit the login form. So here is the snippet for enabling cookies.

CookieManager manager = new CookieManager();
CookieHandler.setDefault(manager);

The overall structure of the code is pretty similar to before. Here is the full source code. Just fill in your own API key, secret key, login and password at the top of Main, and it should just work for you.

package com.laws.LinkedIn;

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.HttpURLConnection;
import java.net.URL;

import javax.swing.text.AttributeSet;
import javax.swing.text.MutableAttributeSet;
import javax.swing.text.html.HTML;
import javax.swing.text.html.HTMLEditorKit;

// LinkedIn-J classes
import com.google.code.linkedinapi.client.LinkedInApiClient;
import com.google.code.linkedinapi.client.LinkedInApiClientFactory;
import com.google.code.linkedinapi.client.oauth.LinkedInAccessToken;
import com.google.code.linkedinapi.client.oauth.LinkedInOAuthService;
import com.google.code.linkedinapi.client.oauth.LinkedInOAuthServiceFactory;
import com.google.code.linkedinapi.client.oauth.LinkedInRequestToken;
import com.google.code.linkedinapi.schema.Person;

// expose HTMLEditorKit's protected getParser()
class ParserGetter extends HTMLEditorKit {
   public HTMLEditorKit.Parser getParser() {
     return super.getParser();
   }
}

// collects the dynamic hidden-field values from the login form
class ReportAttributes extends HTMLEditorKit.ParserCallback {
 public String csrfToken, sourceAlias;

 public void handleStartTag(HTML.Tag tag, MutableAttributeSet attributes, int position) {
  listAttributes(attributes);
 }
 public void handleSimpleTag(HTML.Tag tag, MutableAttributeSet attributes, int position) {
  listAttributes(attributes);
 }
 private void listAttributes(AttributeSet attributes) {
  if (attributes.containsAttribute(HTML.Attribute.ID, "csrfToken-oauthAuthorizeForm")) {
   csrfToken = (String) attributes.getAttribute(HTML.Attribute.VALUE);
  } else if (attributes.containsAttribute(HTML.Attribute.ID, "sourceAlias-oauthAuthorizeForm")) {
   sourceAlias = (String) attributes.getAttribute(HTML.Attribute.VALUE);
  }
 }
}

public class Main {
 static final String apiKey="your api key";
 static final String secretKey="your secret key";
 static final String login="";
 static final String password="password";

 public static String getPin(String authUrl, String token) {
  DataOutputStream dataOut;
  ParserGetter kit = new ParserGetter();
  HTMLEditorKit.Parser parser = kit.getParser();
  ReportAttributes callback = new ReportAttributes();
  // must enable cookies, otherwise LinkedIn will not give you the access code
  CookieManager manager = new CookieManager();
  CookieHandler.setDefault(manager);

  try {
   // this section gets the LinkedIn login form.
   URL url = new URL(authUrl);
   HttpURLConnection con = (HttpURLConnection)url.openConnection();
   con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
   // SSLException thrown here if server certificate is invalid
   InputStreamReader reader = new InputStreamReader(con.getInputStream());
   parser.parse(reader, callback, true);

   // POST the login form and get the access/verification code.
   url = new URL("");
   con = (HttpURLConnection)url.openConnection();
   con.setDoOutput(true);
   con.setRequestProperty("User-Agent", "Mozilla/4.0");
   con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
   dataOut = new DataOutputStream(con.getOutputStream());

   // include the dynamic hidden-field values collected by the parser callback
   String s="session_login="+login
    +"&session_password="+password
    +"&duration=0&authorize=Ok%2C%20I'll%20Allow%20It&extra=&access=-3&agree=true&oauth_token="+token
    +"&csrfToken="+callback.csrfToken
    +"&sourceAlias="+callback.sourceAlias;
   System.out.println("writing bytes: "+s);
   dataOut.writeBytes(s);
   dataOut.flush();

   // SSLException thrown here if server certificate is invalid
   String returnedHtml=convertStreamToString(con.getInputStream());
   /* extract the pin from the html string.
    * It turns out that the whole html string only contains one div with
    * class="access-code", and it seems that the pin is always 5 digits long,
    * so we will just crudely detect that string and get the pin out.
    * A proper HTML parser should be used in a real application. */
   int i=returnedHtml.indexOf("access-code\">");
   String pin = returnedHtml.substring(i+13, i+13+5);
   System.out.println("pin="+pin);
   return pin;
  } catch (Exception e) {
   e.printStackTrace();
   return null;
  }
 }

 public static void main(String[] args) {
  final LinkedInOAuthService oauthService =
   LinkedInOAuthServiceFactory.getInstance().createLinkedInOAuthService(apiKey, secretKey);
  LinkedInRequestToken requestToken = oauthService.getOAuthRequestToken();
  System.out.println("request token: ");
  System.out.println(" auth URL: "+requestToken.getAuthorizationUrl());
  System.out.println(" token: "+requestToken.getToken());
  System.out.println(" token secret: "+requestToken.getTokenSecret());
  System.out.println(" expiration time: "+requestToken.getExpirationTime());

  // get the access code
  String pin = getPin(requestToken.getAuthorizationUrl(), requestToken.getToken());
  LinkedInAccessToken accessToken = oauthService.getOAuthAccessToken(requestToken, pin);

  final LinkedInApiClientFactory factory = LinkedInApiClientFactory.newInstance(apiKey, secretKey);
  final LinkedInApiClient client = factory.createLinkedInApiClient(accessToken);

  // now we can call the LinkedIn APIs.
  Person profile = client.getProfileForCurrentUser();
  System.out.println("I am "+profile.getFirstName()+" "+profile.getLastName());
 }

 // Stolen liberally from
 public static String convertStreamToString(InputStream is) {
  /* To convert the InputStream to a String we use the BufferedReader.readLine()
   * method. We iterate until the BufferedReader returns null, which means
   * there's no more data to read. Each line is appended to a StringBuilder
   * and returned as a String. */
  BufferedReader reader = new BufferedReader(new InputStreamReader(is));
  StringBuilder sb = new StringBuilder();
  String line = null;
  try {
   while ((line = reader.readLine()) != null) {
    sb.append(line + "\n");
   }
  } catch (IOException e) {
   e.printStackTrace();
  } finally {
   try {
    is.close();
   } catch (IOException e) {
    e.printStackTrace();
   }
  }
  return sb.toString();
 }
}

Sunday, 9 January 2011

Month without Internet

I had a wonderful month-long holiday in the Philippines. At my home in the province (as they call the countryside in Pinoy lingo), I did not have any fixed-line phone service, hence no wired (ADSL or cable) internet access. So I purchased wireless broadband USB dongles from both Smart and Globe (both made by Huawei). They claim to be 3G broadband, but there was no 3G coverage from Smart where I lived, and the signal from Globe varied – sometimes 3G, sometimes HSDPA and sometimes just GPRS. Even with 3G coverage, the data speeds were pathetic – the overall speed was worse than dial-up modems in the old days – definitely narrowband (ranging from 0 to about 40kbps most of the time).

So instead of feeling frustrated with Philippine telcos (as I usually do), I decided to stay away from the internet altogether. Without internet or piano, I found I had so much time and finally got to do what I had wanted to do so badly for so long. I managed to read a Cisco CCNA book and practise the labs using Cisco Packet Tracer and GNS3.

More rewardingly, I managed to start doing watercolour painting. I had wanted to learn and practise watercolour for many years – over time I had bought several books, some cheap brushes, colours and a palette – but I never had a chance to start painting until then. Once I started, I did not want to stop. I was painting at home, on tour, on other islands, and ended up with over a dozen nature and still-life paintings. It was the first time I used proper gear – watercolour paper and brushes (absolute bargains from the National Book Store in Tacloban – the same brushes would cost 10 times more in Sydney). Now that I am back in Sydney, I will keep painting and keep my online time to a minimum.