Thursday, 31 December 2009

Happy New Year!

As I look back on the year that was 2009, I must say that I can't complain much. I achieved not far from what I had expected. I can summarise my achievements in 2009 as follows:

  • Work – I still got my job!
  • After Work – I learnt and used the ZK framework quite extensively and got inducted into the ZK Hall of Fame and ZK Blogosphere.
  • Hobby – My piano playing has improved a lot compared to last year (when I had just started learning). I rewarded myself with a brand new Kawai K3.
  • Family – I had always travelled extensively in my previous jobs. So 2009 was the year that I spent the most time with my family since I started a family!

Have a happy and prosperous 2010!


Wednesday, 23 December 2009

Enterprise Architect

In any IT forum, the topic of the roles and skill sets of architects is sure-fire flame bait. There are many roles with the title of 'architect' in the IT industry and they can be quite different. I summarise IT architect jobs into the following categories:

  1. glorified software engineers/business analysts – some employers and employment agencies alike beef up the job position to attract more experienced applicants; of course, some employees/applicants do the same to their profiles to attract better pay.
  2. domain focused solution designers – these architects/designers focus on specific domains of the IT space. The boundaries may vary depending on how you slice and dice the space: infrastructure, security, integration; service fulfilment, network performance, inventory management, etc.
  3. enterprise-wide architects – responsible for IT governance and for aligning IT strategy with the business strategy. This role overarches the various domains mentioned in point 2 above.

The Enterprise Architect falls into the 3rd category. Not all organisations have or need to have such a dedicated role – a travel agency with 3 staff may well outsource their IT operations and the role of EA to an external party. The role of EA is a product of a mature organisation (or mature-org wannabe) that understands the importance of IT to its business. Wikipedia has a pretty detailed description of what EA is. It uses the analogy of the city planner (the EA) versus the domain-specific designers and engineers in the building industry. This is a very good analogy and one that I often use. After all, the very term 'architect' is borrowed from the built environment discipline.

It is understandable that many people, including IT professionals, do not understand the roles of architects, especially the EA. The main reason for this lack of understanding is that in many organisations the role of architect is shared with others – e.g. a development team lead taking on the responsibility of system architect. When it comes to the EA, the role becomes even more elusive because not all companies have one. So it's no wonder we see silly questions in various forums like whether an EA should be doing coding (by the way, my answer to this question is that an EA should not code as his/her day job; but it certainly helps an EA to learn new technologies in order to understand them better – so coding after hours is great for EAs).

The IT industry is very young and immature compared to other disciplines like medicine or the built environment. As it matures, it is inevitable that the industry will become more and more specialised. Therefore, roles and jobs will become more fine-grained, focused and well-defined. No doubt, as IT becomes more mature, the role of the architect, and especially the EA, will become better defined and clearer to more people.

Tuesday, 15 December 2009

My Carbon Footprint

As Copenhagen becomes the hotspot of the global warming spat for the second week, it seems increasingly unlikely that anything substantial will be achieved by the summit. However, one positive effect is certain – it has raised awareness of the global warming issue across the world.

Just a few months ago there was a blackout in my house due to an unseasonal thunderstorm. I was home alone and suddenly felt so bored – without electricity there was no form of entertainment in the house! No TV, music, internet, game consoles and not even cooking! Such blackouts were quite frequent where I lived during my childhood, yet I quite enjoyed them, for we got to light candles and play shadow puppets… So I decided to get a good old traditional form of entertainment by buying a piano.

Out of curiosity, I calculated my carbon footprint using an Ecological Footprint calculator based on the Australian model. It turns out that it would take 2.4 planet Earths to support my lifestyle if everyone on Earth lived like me, which is below the Australian average of 3 according to WWF Australia.


However, if I factor in the fact that much of my resource consumption is actually shared with my family, it takes less than one Earth, which is ideal.

Looking at the result, about half of the footprint comes from food. I tried recalculating it with a totally vegan diet, and the result was actually 8% worse. So my love of meat is actually good for the planet :)

Another thing is that I fly a lot for work. If I did not fly at all, my carbon footprint would be reduced by about 8%. The good thing is that I don't have to commute to work every day, which offsets my extra carbon emissions.

There are many things people can do to improve the situation. I think it all boils down to 3 things: be frugal, share, recycle. These concepts are all familiar to IT professionals like myself because we have to create systems that are performant and cheap at the same time, although we call them by different names like algorithm efficiency, resource pooling, package reuse, etc. Sharing (e.g. taking public transport) and recycling have received a lot of public attention. But one thing that is often overlooked is frugality (even in the IT world, as hardware becomes cheaper and cheaper). Frugality is a virtue in East Asian traditional cultures. However, due to western influence and the economic boom in the region, this virtue is in danger of being lost. When everything is labelled with and measured in monetary units, it is very easy to miss the actual impact of wastage.

Wednesday, 11 November 2009

The Wall

My last visit to the Berlin Wall was in 1998, when sections of the wall were still standing and Checkpoint Charlie was a tourist attraction. Today, as the west celebrates the 20th anniversary of the fall of the Berlin Wall, Chinese netizens are joking about the Great Firewall of China (or GFW for short). I used to be sceptical about censorship in China - sure, they block a few sites, but who cares? The Chinese can still visit the majority of websites, right? My view changed during my visit to China last month and my personal experience of the wrath of the GFW.

I first tried to access my blogger site and my picasa photo albums in an internet cafe in Tianjin. It turned out that they were not accessible. In fact, the whole blogger.com was inaccessible! I thought this might be unique to that internet cafe, or Tianjin city. But I was wrong.

After a 25-hour train ride, I arrived in the southern city of GuangZhou. I experienced the same problems! People told me that even for unblocked sites, certain content (i.e. parts of a web page) can still be blocked.

Now I understand why my blogger site has so few hits from China. The few red dots shown on the ClustrMaps are probably the result of automated search engines outside the GFW.

Curiously, MSN sites (blog, photos, file sharing, etc.) are not blocked by the GFW. Maybe Microsoft had better political connections than Google. Now I have to create new photo albums on MSN just for the Chinese!

Saturday, 24 October 2009

Bulk Photo Scaler

I came back from my holiday with hundreds of photos. I want to upload some of them to my online photo album. Before uploading, I need to scale the photos to a smaller size to speed up the file transfer and save my internet bandwidth. A popular bulk image processor - BIMP - can be found at Cerebral Synergy.

I have some pretty specific requirements for bulk scaling these photos. They were captured using different cameras - N95, XM5800, Canon, Olympus, Pentax, etc. - each having different image resolutions and therefore sizes, so I cannot just apply a single factor to all the pictures. Also, I have some panorama pictures shot with Panoman which are very wide, so I cannot apply a single size to all pictures either - I need to keep the aspect ratio. I decided to write a little bulk image scaler myself.

The basic image resizer is pretty simple using Java:

...
 // requires: java.awt.Image, java.awt.image.BufferedImage, java.io.File, javax.imageio.ImageIO
 BufferedImage img=ImageIO.read(new File(fileName));
 if(img==null)
  return;  // not a readable image - skip it

 int w=(int)(img.getWidth()*factor);
 int h=(int)(img.getHeight()*factor);
 String imageType=getFileSuffix(fileName);  // e.g. "jpg" or "png", reused as the output format
 Image scaledImg=img.getScaledInstance(w, h, Image.SCALE_DEFAULT);
 BufferedImage bi=new BufferedImage(w, h, img.getType());
 bi.createGraphics().drawImage(scaledImg, 0, 0, null);
 ImageIO.write(bi, imageType, new File(newFileName));
...

It is not the most efficient way to do this, but it is simple and it works. Keeping the aspect ratio is simply a matter of using the new width and the original picture's aspect ratio to calculate the new height:

 double ratio=(double)img.getHeight() / img.getWidth();
 int h=(int) (ratio*w);

The full source code is in PhotoScaler.java and ScaleParameters.java. It still needs some refactoring and cleaning up. Usage of the utility:

Usage: java com.laws.photo.PhotoScaler directory_name [-f scale_factor] | [-s w h] | [-k w]
-f scale by factor
-s scale by size, i.e. width and height
-k keep aspect ratio, use w for width and calculate the height
e.g. java PhotoScaler c:\temp\photos -f 0.5
     java PhotoScaler /tmp/photos -s 800 600
     java PhotoScaler c:\temp\photos -k 800
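
The real PhotoScaler handles the -f/-s/-k switches; as a minimal, self-contained sketch of the overall flow, the driver below just walks a directory and applies a single scale factor to every JPEG/PNG it finds (the class name and output file naming here are illustrative, not taken from PhotoScaler.java itself):

import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class BulkScaler {

    public static void main(String[] args) throws Exception {
        File dir = new File(args[0]);                              // e.g. /tmp/photos
        double factor = args.length > 1 ? Double.parseDouble(args[1]) : 0.5;
        File[] files = dir.listFiles();
        if (files == null) return;                                 // not a directory
        for (File f : files) {
            String name = f.getName().toLowerCase();
            if (!f.isFile() || !(name.endsWith(".jpg") || name.endsWith(".png")))
                continue;                                          // only process image files
            scale(f, new File(dir, "scaled_" + f.getName()), factor);
        }
    }

    // same approach as the snippet above: read, getScaledInstance, redraw, write
    static void scale(File in, File out, double factor) throws Exception {
        BufferedImage img = ImageIO.read(in);
        if (img == null) return;                                   // unreadable image - skip it
        int w = (int) (img.getWidth() * factor);
        int h = (int) (img.getHeight() * factor);
        Image scaled = img.getScaledInstance(w, h, Image.SCALE_DEFAULT);
        BufferedImage bi = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        bi.createGraphics().drawImage(scaled, 0, 0, null);
        String suffix = in.getName().substring(in.getName().lastIndexOf('.') + 1);
        ImageIO.write(bi, suffix, out);
    }
}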

Tuesday, 22 September 2009

Confession of a Piano Shopper

I have been piano shopping for the last few weeks. During this time I have inundated myself with all sorts of information from every source I could find - dealers, community forums, pianists and educators. I have been working in the telco industry for many years, some of them in the sales process. I found striking similarities between my piano shopping experience and the telco sales process (or perhaps sales in any industry). Much of the behaviour and psychology that I have been critical of also manifested in me, and I have no intention of correcting it.

Wants vs. Needs

The purpose of buying a piano is two-fold: entertainment for me and education for my 7yo son. I am a self-taught beginner and have no plan of achieving any certification/qualification in music whatsoever. I just enjoy playing. Chances are that I will remain a beginner forever. My son is also a beginner but taught 'properly' in a piano school. He likes playing and needs a proper piano to practise on.

So for beginners like us, a new entry-level (110cm - 118cm) acoustic or a digital piano would be sufficient. However, I don't feel content to have the basic model. I want to have a 'professional' (>120cm) one to start with. Well, my excuse for having a professional model is that there are two players at home and I don't want to upgrade any time soon, so I might as well leave room for potential growth. But does this justify doubling the price?

Bag a Bargain

So I need to squeeze the vendors to drop the price as much as I can. Normally (in Australia), the RRP of any new piano is inflated by at least 20%. So without any effort, I can get the price down by 20%. Anything below that, I will have to work for. The easiest way is to have two vendors outbid each other.

But what about the value of the piano - the materials, the labor, the quality, the beauty and the enjoyment we can get out of it? Are they worth a few thousand dollars? My opinion is a resounding 'yes'. A piece of furniture or even a mobile phone can cost thousands of dollars these days. However, being a buyer, I cannot help exercising my power and trying to squeeze those last few dollars, an adjustable stool or an extra tuning from the vendor. Of course, to negotiate effectively, I need to equip myself with knowledge about pianos and the piano market.

Knowledge is Power?

Knowing your products definitely helps in negotiation. I have learnt so much about pianos from all sorts of sources in the last few days - information as well as misinformation. I had narrowed down my choices to the Yamaha T121 and the Kawai K3. A Yamaha dealer told me that the new T121 is fully made in Japan: if anyone tells you it's made in Taiwan, don't believe them, because the Taiwanese factory was closed down a few years ago. When I checked online I did find some comments about the T121 being partially made in Taiwan, but all of those comments are from 2006 or earlier. Also, you have to take any online information with a grain of salt (the best online info about pianos I have found so far is the Piano World Forum). When I visited a Kawai dealer, he told me again that the T121 was made in Taiwan and that is why its price is lower. He also cited The Piano Book (an authoritative book on buying pianos) and, true enough, the book said so too! When I checked the date of the book, it was last edited in 2000. So I knew, in this case, who was giving me accurate information.

Conversely, I quite like the Millennium III action invented by Kawai (which uses carbon fibre for certain parts of the action), which can increase playing speed by up to 16% compared to a traditional wooden action. But coming from a Yamaha dealer's mouth, it took on an entirely different flavour. He argued that 'the instrument needs to breathe just like leather shoes vs. synthetic leather shoes... have you ever seen any musical instrument made from plastic?... carbon fibre may be good for making spaceships or boats, but that does not mean it's appropriate for a piano... if plastic is so good, why doesn't Steinway use it...' Looking at input from both sides, it is very clear to me that the MIII does have its advantages and the Yamaha dealer's argument does not hold water. Both Yamaha and Kawai are reputable, big piano makers. However, Yamaha is the no. 1 player in the market - 80% of pianos used in musical institutions in Japan are Yamahas. So naturally, as the no. 2 player, Kawai has to work harder by employing new technologies to achieve better results and lower cost. Having said all that, does the 16% faster response mean anything to me? I would say there is an 80% chance it doesn't, since I will never reach the skill level required to enjoy the benefit; maybe the odds are better for my son.

Does all this product knowledge help when choosing a piano? A real player will tell you 'no'. Piano appreciation is very personal. The only way to pick one is to listen, look and play. All the new technologies, great designs, country of origin and fine-grained spruce from Alaska will not help if you don't like the sound or touch of the overall product.

Apples and Oranges

Having considered the old and the new; Japanese brands, Korean brands and some German brands, I narrowed the list down to the Kawai K3 and Yamaha T121. The shortlisting process was extremely painful because many of the pianos I excluded were very good ones, just outside my budget. It is also unfair to directly compare most of the models side by side because they were not made to be at the same level. For example, although all three models have the same height, the T121, K3 and U1 are very different products targeting different market segments. The specifications for the T121 and U1 on Yamaha's web site show almost identical data, yet the U1 is almost double the price of the T121 and many players swear that it has better sound. It just does not make sense to claim which one is 'better', especially when it's all about personal experience.

Conclusion

With the K3 at the top of my list and the T121 as a backup, I am going to drive a hard bargain after my holiday. Hopefully, I will have a brand new K3 in my living room in a few weeks' time.

Thursday, 6 August 2009

GPS Navigators on Symbian

As soon as Nokia XpressMusic 5800 came out in Australia, my wife conveniently lost her HTC touch smartphone. So I bought the XM 5800 for her from Bangkok (and it turned out to be genuine ). Then on the same day she conveniently found her HTC in her friend's car...

So I was charged with installing all the must-have software onto the new phone. Since both the XM 5800 and my old N95 support GPS, a GPS navigator was at the top of the list. I have tried Garmin Mobile XT, Route 66, TomTom and Nokia Map on my N95. I quickly dismissed Garmin since it refused to start the trial period, claiming an error while contacting the server over the internet (and I am sure my wifi settings were correct and it did get through to the internet to attempt to connect to the Garmin server).

I had been using R66 Mobile 7 and then 8 for a while, until I successfully installed TomTom v6.02 on my N95. The R66 maps simply don't look as good as those of TT and Nokia Map.

So now I am using TT and Nokia Ovi Map (v3.01) at the same time.

Compared to Nokia Map, TT is better at navigation. As soon as you type a letter, it intelligently prompts a list of names. When I typed Dolls Point, it automatically pointed to the beach, which is exactly what I wanted. The lane guidance and camera alerts work perfectly.

The disadvantage of TT compared to Nokia Map is its map browsing capability - perhaps rightly so, considering the fact that TT is a navigator, not a map system.

Nokia Map shines in its usability - you can easily download and configure maps and voice packs in different languages. Its maps are quite pretty and detailed. Here is a screenshot of Nokia Map showing where I am stranded (due to a mechanical failure of a Royal Brunei Airlines flight).

However, the version that I am using does not seem to include camera alert or lane guidance.

When it comes to the XM 5800, there is not much choice because it is running S60 v5 and not equipped with any keyboard. The only choice on XM 5800 for me is Nokia Map.

Sunday, 26 July 2009

Using XML/SWF Gauge

I have been looking for flash widgets to show data in a dashboard. The XML/SWF Gauge has become my best choice so far. It only requires one gauge.swf file, which takes in an XML configuration file to instruct it what and how to draw the gauge. I used it to implement a Radar View as part of my dashboard.

I like the simplicity of the XML/SWF Gauge when using it. Yet it generates versatile and visually pleasing results. It also supports dynamic data display and scheduled update/refresh.

Since I used it in a ZK web application, naturally I used ZK to generate the input XML file dynamically. By default, XML files are not passed to the ZK Loader to handle. To force them to be passed to the ZK Loader, I had to add a servlet mapping to the web.xml file:


<servlet-mapping>
    <servlet-name>zkLoader</servlet-name>
    <url-pattern>*.xml</url-pattern>
</servlet-mapping>

I can use the same gauge.swf to generate all sorts of views - radar, meter, dial... all I need to do is generate the appropriate XML file to feed gauge.swf, as shown in the following ZUL file segment.
...

 if (view.getName().equals(event.getData())) {
  // recalculate view attributes...
  // refresh:
  gaugeFlash.invalidate();
 }

...

I do have the following complaints about the XML/SWF Gauge:
  1. It does not support concurrent/nested animation.
  2. It does not support event-driven updates (vs. scheduled updates).
If you have come across any good flash gauge packages, please leave a comment.

Wednesday, 1 July 2009

Of Twitter, Clouds and Google Goo

I have always thought Twitter is a time waster. However, I have also noticed that many companies and individuals use it as a marketing channel to reach the global masses, not to mention as a propaganda tool, as witnessed during the recent Iranian demonstrations. So instead of subscribing to RSS feeds, following tweets is now the "in" thing to do.

Using a social network for commercial and political gain is not new. We have seen it on Facebook and Second Life. However, using tweets as input for trading decisions sounds dubious. Much of what is tweeted is just noise, or even garbage.

Today, out of curiosity, I subscribed to Twitter. Within minutes of opening my new Twitter account, I already had a follower. To be honest, I was pleasantly surprised and even flattered. Yet when I checked, it turned out to be a prostitute or cyber-pimp pushing some porn site. So to make use of tweets for trading, the system would have to identify which users to follow and filter out the fake, malicious and manipulative users - much like virus scanners rely on their virus databases. This is a time-consuming and even labor-intensive heuristic process relying on large volumes of data. Then it has to filter and analyse the millions of messages per day to extract the useful information.

Let's put aside the ethical issues behind the practice of 'trading on rumors'. To say that a machine can determine market sentiment by reading tweets is at best an overstatement. Even human beings have trouble reading sentiment in cyberspace, which is why people have to add all sorts of smileys, emoticons and internet etiquette to assist the reader of the message. Also, the same words can have drastically different intentions and reactions depending on cultural, religious and circumstantial backgrounds. The idea of trawling through the internet to extract marketing information is not new. People have been attempting it on RSS feeds, newsgroups, user forums, etc. for a few years now. However, those efforts are very focused/targeted on certain types of content and are not realtime in nature - certainly not as ambitious as making trading decisions in real time.

Applying complex fuzzy logic algorithms to large amounts of (current as well as historical statistical) data is very CPU-, memory- and data-intensive. Such jobs are best suited to cloud computing, which many big players are pushing - Sun, Microsoft, Amazon and Google. A couple of days ago I stumbled upon some cloud-computing PR articles and interviews and found this one - 谷雪梅谈云计算 (Gu Xuemei on cloud computing). I realised that the Google chief engineer interviewed in that video was my high school classmate. We affectionately called her 'Goo' back then. I guess now I have to call her the 'Goo of Google'. In that interview, a question was asked about how Google makes money. Well, I believe in 'knowledge is power' - more so in the information age - and that is how Google makes money. Compared to the newcomers (e.g. Bing, WolframAlpha), Google's search algorithm is quite lazy and unsophisticated - it relies on external links, and the more you pay the higher you appear in the list - yet Google has accumulated a vast amount of historical data and trained its systems to give better results. So Google has now taken further steps to sell infrastructure services and technologies such as GWT, cloud computing, Chrome and Google Wave. It seems Google has better things to do than to recycle the Twitter garbage - for now.

Monday, 22 June 2009

JAXB Custom Data Binding

In a previous post I experimented with consuming WCF web services using various Java WS frameworks and tools. As pointed out by Alex, the one that I missed was wsimport, which is bundled as part of Java SE 6.

Like many other tools, it supports both Ant task and command line interface (CLI). The CLI for wsimport is quite simple - in my case I generated the source code and client stub library like so:

D:\Program Files\Java\jdk1.6.0_11\bin>wsimport -d /temp/generated -s /temp/gensrc -keep http://localhost/PromoService.svc?wsdl
parsing WSDL...


generating code...

D:\Program Files\Java\jdk1.6.0_11\bin>
The rest is similar to the IntelliJ results shown in my previous post. There are two problems with the generated PromoInfo.java, which is a data/value object:
  1. string fields are generated as JAXBElement<String>
  2. dateTime fields are generated as XMLGregorianCalendar
I want to use core Java data types on the data objects so that they can be easily integrated with other frameworks without having to do conversions. Examining my schema (at http://localhost/PromoService.svc?xsd=xsd2), the PromoInfo complex type is defined as:


<xs:complexType name="PromoInfo">
  <xs:sequence>
    <xs:element minOccurs="0" name="PromoDateTime" type="xs:dateTime"/>
    <xs:element minOccurs="0" name="PromoDescription" nillable="true" type="xs:string"/>
    <xs:element minOccurs="0" name="PromoName" nillable="true" type="xs:string"/>
    <xs:element minOccurs="0" name="PromoVenue" nillable="true" type="xs:string"/>
  </xs:sequence>
</xs:complexType>


It is obvious that the xs:string and xs:dateTime types were not converted into the desired Java types. To solve these problems, I specified customised JAXB binding rules in an external file - custombinding.xml - like so:

<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb" xmlns:xs="http://www.w3.org/2001/XMLSchema" ... >
    <jaxb:globalBindings generateElementProperty="false">
        <jaxb:javaType name="java.util.Date" xmlType="xs:dateTime" ... />
    </jaxb:globalBindings>
</jaxb:bindings>

The attribute generateElementProperty="false" on line 2 tells wsimport not to generate JAXBElement but to generate native java data types instead.

The javaType element on line 3 defines the binding between "xs:dateTime" and "java.util.Date", because by default the XML dateTime type binds to javax.xml.datatype.XMLGregorianCalendar, as shown here.

Once the binding is defined, rerunning the wsimport tool with the -b switch will produce the desired output:
D:\Program Files\Java\jdk1.6.0_11\bin>wsimport -d /temp/generated -s /temp/gensrc -b /temp/custombinding.xml http://localhost/PromoService.svc?wsdl
parsing WSDL...


generating code...
Note: D:\temp\gensrc\org\w3\_2001\xmlschema\Adapter1.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

This time the generated PromoInfo.java looks much better:
...
public class PromoInfo {

    @XmlElement(name = "PromoDateTime", type = String.class)
    @XmlJavaTypeAdapter(Adapter1 .class)
    @XmlSchemaType(name = "dateTime")
    protected Date promoDateTime;
    @XmlElement(name = "PromoDescription", nillable = true)
    protected String promoDescription;
    @XmlElement(name = "PromoName", nillable = true)
    protected String promoName;
    @XmlElement(name = "PromoVenue", nillable = true)
    protected String promoVenue;
...

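As a quick sanity check that the customised binding carries through to client code, a small snippet like the one below should now compile against the regenerated class - the getter/setter names follow standard JAXB generation from the fields shown above, and the sample values are of course made up:

import java.util.Date;

public class PromoInfoCheck {
    public static void main(String[] args) {
        PromoInfo promo = new PromoInfo();
        promo.setPromoName("Summer Sale");        // plain String - no JAXBElement wrapping
        promo.setPromoDateTime(new Date());       // java.util.Date thanks to the javaType binding

        System.out.println(promo.getPromoName() + " runs on " + promo.getPromoDateTime());
    }
}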

Thursday, 18 June 2009

A Racist Job Market

My friend forwarded me an interesting article today - Ethnic names hinder job seeking: report. I have also seen similar reports on Channel 7 two days in a row.

The result of the research shows that job applicants with Anglo-Saxon names received more calls than those with Indigenous, Chinese, Middle Eastern or Italian names. Whether the business owners/employment agencies/HR people are consciously racist or not is debatable. However, it does show that, collectively, the job market in Australia is racist.

The rationale behind this phenomenon is complex and multifaceted. Let's assume people are not intentionally or consciously racist for a while. There could be several reasons behind it:

Typecasting - Once someone sees a 'foreign' name, he/she automatically assigns a profile to it. Of course, there can be good and bad aspects to this profile depending on the reviewer's personal experiences and beliefs. The biggest disadvantage of such profiles for foreign names is perhaps the assumption that the applicant does not speak English well. Hence we see many job ads with requirements like "excellent communication skills" to deter non-English speakers from even applying. True, Australia is a country of immigrants and there are many new immigrants who cannot speak English well. However, the majority of them are quick and keen learners, especially those who are actively seeking jobs. And let's not forget the vast number of 2nd-, 3rd- and later-generation immigrants who identify themselves as true-blue Aussies.

"Not Made Here" syndrome - People are more comfortable with what they are already familiar with. A foreign name automatically triggers off a level of fear and set off the defense mechanism subconsciously (or unconsciously if you prefer Freudism). Many people tend to prefer interacting with people with similar backgrounds and interests forgetting the advantages of diversity, especially in a work environment. There are a large number of small businesses in Australia which have only a handful of employees in the workplace. Hiring people with the same traits/profile that everyone else is comfortable with becomes an important criterion.

Media - The mass media is the most powerful brainwashing machine in the world, and often, by criticising their countries of origin, the media casts a negative light on the ethnic minorities who live in Australia.

Although we have been told not to judge a book by its cover, people cannot easily shake off the racial prejudice that has been intrinsically wired into our brains and hearts as a result of millions of years of evolution. To break down the racial barrier, we have to actively and consciously broaden our horizons; seek more interactions with all walks of life; treat individuals as individuals; and do unto others as we would have them do unto us.

Thursday, 11 June 2009

One Year On

I started this blog one year ago, in June 2008. I have been using Google Analytics to track the site. Here are some visitor trends (mostly from IT and development communities) collected over the last year or so.

Browsers used:

OS used:

Browser + OS used:

Java support:

As of June 2009 the top 5 visited posts on this blog site are:

  1. Consuming WCF Web Service Using Java Client, July 2008
  2. Consuming Web Services from Android, August 2008
  3. IE8 + Outlook Web Access = Problems, May 2009
  4. WPF Splash with ProgressBar, July 2008
  5. How To Add Unicode Fonts to N95, September 2008

There are a couple of trends worth noticing:

  1. Firefox has a bigger market share than IE among technical users, and rightly so. It will be interesting to see how Chrome measures up against the top two. Chrome certainly has great momentum, considering it was launched only 9 months ago and is already in 3rd place.
  2. Java support is not as ubiquitous as I first thought. I wouldn't develop my next RIA business application on JavaFX any time soon. So right now I will stick to ZK (and maybe SmartGWT) and wait for HTML5 to take over.
  3. It took less than a month for the IE8 + Outlook Web Access = Problems post to claim the no. 3 spot. It just shows how bad IE8 is.
  4. The post in the no. 5 spot has far more comments than the rest. It proves once again that non-technical people are far more social.

Friday, 22 May 2009

IE8 + AVG = Problem?

My IE8 fails to connect to any site from time to time. This happens sporadically and on certain new tabs only. After trawling through the suggestions in this discussion group, I found that the following solutions from fufufufu worked for me.
  1. Workaround: if one tab does not connect, try opening another one. Some tabs will eventually work.
  2. Solution: disable my AVG Anti-Virus Free (v8.5)'s Link Scanner feature (done in AVG's GUI client)
Interestingly, none of these problems had ever occurred with my trusted Firefox.

Tuesday, 19 May 2009

IE8 + Outlook Web Access = Problems

A couple of days ago I upgraded my old IE6 to IE8 because IE6 refused to display one of my ZK pages and spat out an error message box.

As I had to use Outlook Web Access (OWA) daily (which is the only reason why I still have IE on my machine), I ran into trouble straight away after the upgrade:

  1. Any editing screen in OWA - e.g. editing a mail - was incredibly slow. This was fixed by opening the Options view in OWA, scrolling down to the E-mail Security section and clicking the Re-install button, which installed the latest S/MIME ActiveX control.
  2. When downloading Office 2007 file types (docx, xlsx, pptx, etc.), OWA attempts to download them as .zip files. This was fixed by adding my OWA web site URL to the Trusted sites list in IE8 (Internet Options -> Security -> Trusted sites) as suggested here.
  3. Cannot send or reply any more! - After working for a couple of days, OWA played up again: when replying to mail I got Error (0x8000ffff): Catastrophic failure; when sending new mail, I got an error message box saying access denied... Digging through Google, I found others who have experienced the same problem, but no solutions. :( The workaround is to uninstall the OWA S/MIME component (installed in point 1 above)!
Now I know what makes dogs run around and around chasing their own tails - it's their breakfast!

Thursday, 14 May 2009

Server Push + Event in ZK

I recently implemented a dashboard-type web application in ZK to display a collection of KPI metrics using various views, for example:
  1. Dashboard view - by using some flash dial/meter widgets
  2. Tree view - see screenshot below
  3. Google map view - showing geographical metrics on the map

I want to use ZK's server push feature to dynamically update the views only when the corresponding metric value has changed, as detected on the back end/server side. The examples available in the ZK small talks invariably pass the widget(s) to be updated to the worker thread and have them updated from the server side. This approach does not quite work for my application, for the following reasons:

  1. The various views (see screenshot above) are opened/closed by the user dynamically. Therefore, I don't know which widgets are visible and should be updated;
  2. There are too many widgets to be updated - there could be dozens or hundreds of metrics displayed in the view. One approach could be to pass the whole view to the server side and have it figure out which widgets to update. I feel that the server side shouldn't be bothered with such a responsibility, and the sheer number/volume of widgets involved in the view could render this infeasible.

So I took an alternative approach by combining the server push and event posting features available in the ZK framework.

The main page (main.zul file) contains the code to turn on and off server push as part of its zscript:

void startServerPush() {
 if(!desktop.isServerPushEnabled()){
     desktop.enableServerPush(true);
 }
 MetricsUpdateThread mut = new MetricsUpdateThread(desktop);
 mut.start();
}

void endServerPush() {
 if(desktop.isServerPushEnabled()){
     desktop.enableServerPush(false);
 }
}
The MetricsUpdateThread class is shown below:
public class MetricsUpdateThread extends Thread {
 private static final int DELAY_TIME=2000; // 2 seconds.
 private Desktop _desktop;
 private boolean _ceased;
 
 public MetricsUpdateThread(Desktop desktop) {
  this._desktop=desktop;
 }
 public void updateChangedMetrics() {
  HashSet<Metric> metrics=new HashSet<Metric>();
  // find all changed metrics and put them in the metrics Set
  ...
  if(metrics.size()>0)
    Events.postEvent(new Event("onMetricChanged", null, metrics));
 }
 
 public void run() {
  if (!_desktop.isServerPushEnabled())
   _desktop.enableServerPush(true);
  try {
   while (!_ceased) {
    Executions.activate(_desktop);
    try {
     updateChangedMetrics();
    } finally {
     Executions.deactivate(_desktop);
    }
    Threads.sleep(DELAY_TIME); // Update delay time
   }
  } catch (DesktopUnavailableException due) {
   //System.out.println("Browser exited.");
  } catch (InterruptedException ex) {
   //System.out.println("Server push interrupted.");
  } catch (IllegalStateException ie) {
   //System.out.println("Server push ends.");
  } finally {
   if (_desktop.isServerPushEnabled()) {
    _desktop.enableServerPush(false);
    //System.out.println("Server push disabled.");
   }
  }
 }
}

Notice the Events.postEvent() in the updateChangedMetrics() method, which broadcasts an event if there are any changed metrics. These events are then handled by the corresponding view's zscript. For example, the treeview above has the event handler at its root component like so:
...
<attribute name="onMetricChanged">
 metrics = event.getData();
 for(Metric metric : metrics) {
  tcMovement = win.getFellowIfAny(metric.getId()+"_move");
  if(tcMovement!=null) {
   tcMovement.setImage(Util.constructMovementImage(metric));

   tcValue = win.getFellow(metric.getId()+"_value");
   tcValue.setLabel(metric.getValueAsString(metric.getCurrentValue()));
  }
 }
</attribute>
...
This approach of combining Server Push and Event broadcasting achieves the effect that I wanted. However, I can't help feeling that it is a bit complicated. So I wonder whether there is a better, simpler or more standard approach to achieve the same user experience in ZK.

Sunday, 26 April 2009

Master-Detail View in ZK

In my latest ZK application, I implemented a typical master-detail view: a split bar in the middle; a tree view on the left and a table/grid on the right. Whenever an item in the treeview is clicked, the grid on the right-hand-side is updated to display the details of the clicked treeview item. The only thing 'special' about this application is that I factored the master (treeview) and details (grid) views as separate files and the main window/page uses <include> to put them together:
<?page title="" contentType="text/html;charset=UTF-8"?>
<zk>
<window border="none" width="100%" height="100%">
 <hbox spacing="0" width="100%" height="100%">
  <include src="analytical.zul"/>
  <splitter collapse="before"/>
  <include src="metricDetails.zul"/>
 </hbox>
</window>
</zk>

The <include> complicates things slightly: the master view which is firing the event has no visibility of who is going to handle the event. Therefore, the event's target field is set to null so that the event is broadcast to all root-level components on the page - including the included details view.

The event sending code is shown below. Note that the treeview of the master view is built using model and renderers and the event sending code is embedded in the renderer.

public class MetricTreeitemRenderer implements TreeitemRenderer {

 @Override
 public void render(Treeitem item, Object data) throws Exception {
  SimpleTreeNode t = (SimpleTreeNode)data;
  Metric metric = (Metric)t.getData();
  ... // construct Treecells
  Treerow tr = null;
  ... // construct the Treerow
  
  final Event e=new Event("onMetricClick", null, metric);
  tr.addEventListener(Events.ON_CLICK, new EventListener(){

   @Override
   public void onEvent(Event arg0) throws Exception {
    Events.postEvent(e);
   }
   
  });
 }
}
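
For completeness, the renderer above is attached to the tree together with its model, roughly as follows (a sketch in zscript style; the component id and the rootNode variable are illustrative, not from the original source):

 Tree tree = (Tree) win.getFellow("metricTree");
 tree.setModel(new SimpleTreeModel(rootNode));            // rootNode: the SimpleTreeNode hierarchy of Metrics
 tree.setTreeitemRenderer(new MetricTreeitemRenderer());
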
The event handling side is part of the included details view file:
<?page title="Metric Details" contentType="text/html;charset=UTF-8"?>
<?init class="org.zkoss.zkplus.databind.AnnotateDataBinderInit" ?>
<zk>
<window id="win" title="Metric Details" border="none" width="100%" height="100%">
<attribute name="onMetricClick">
... handle the event by populating the grid 
</attribute>
<grid id="grid" vflex="true" width="100%" height="100%">

  <columns sizable="true">
   <column label=""/>
   <column label=""/>
  </columns>
  <rows>
...
  </rows>
 </grid>
</window>
</zk>
Note that even though the <window> is inside of the <zk> tag, it is still the root component (since it has no parent component in the .zul file). Also, although the .zul file has been included in the main file, it seems that it still has its own life cycle and its root component is unchanged.

Thursday, 9 April 2009

Error: Not enough storage...

For the last few days I have been having problems with my IE6. Every time I used Outlook WebMail to open a modal window - e.g. bringing up the address book - I would be greeted with an error message box saying 'not enough storage is available to complete this operation'. I tolerated the problem all this time because I could still send mails, attach files, etc. But today I had to use IE to create a travel request for my trip to Delhi, and this problem was stopping me from filling in the request form (the travel request web application does not work on Firefox). So I was stuck, and the clock was ticking because I needed to finish the travel request today.

After some digging through Google I found the solution: the error was caused by the User Agent string being more than 260 characters long. Huh? I was gobsmacked when I saw this. But it really works: after I deleted all entries under 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform' (don't forget to close all instances of IE afterwards), my problem disappeared. Then I did some experiments to figure out what was happening.

First of all, it is well known among web designers/programmers that different browsers render pages differently, although HTML and CSS have been standardised for many years. Hence, the User Agent string is checked by many web applications to find out which browser is being used. In IE, the user agent can be retrieved using a simple JavaScript expression: navigator.userAgent. The way Microsoft IE builds the user agent string is by appending all the values under the above registry entry. You can see this by creating some new values under that registry entry. You may get a user agent string like: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; New Value #1; lsfjsafj sf;dsfj sakf lksjf salkfjas lkfjlksfj safkjsa lkfsajf ;lksjflksajf lksa;fj salkfjsa kfdsalk). As more values are inserted into the entry, the user agent string gets longer and longer, until it exceeds 260 characters, at which point IE simply returns: Mozilla/4.0 (compatible; MSIE 6.0) - obviously some default value, without appending any values from the Windows registry.

That would have been fine if there were no other side effects. Unfortunately, it seems some other Microsoft JScript libraries have not catered for this case and produce the 'not enough storage...' error.

In his blog, James Thompson blamed toolbars and spyware for the extra values in the registry entry. In my case, all the values in the registry entry were from Microsoft - it looks like every time I upgraded .NET, a new value was created.

So if you have been doggedly upgrading .NET all the way from 1.0 to 3.5 like me, then you are probably experiencing the same problem as well. It looks like Microsoft does a more thorough job than those spyware vendors.



Tuesday, 17 February 2009

Fontrouter Open Sourced

Two days ago I learned that Fontrouter has been open sourced under the Apache License 2.0. The download site has moved to http://code.google.com/p/fontrouter/. This is both good news and bad news.

The good news is that more people or organisations may work on the project, hopefully giving it the resources it deserves.

The bad news is that oasisfeng may not provide as much support to the general public as he used to - and his support had been excellent in the past. The once very active Fontrouter Forum is now in ruins - real support activity has dwindled and spam is rampant.

I sincerely hope that the Fontrouter project will continue and grow.


Thursday, 12 February 2009

SyntaxHighlighter 2

I have been using SyntaxHighlighter 1.5 on my blog site from day one. Since I don't have my own file server to host the SyntaxHighlighter files, I have been borrowing the URLs from java.dzone.

A few days ago, I noticed that none of the code snippets on my blog were being processed. I first thought it was a temporary glitch from blogger.com or java.dzone. Today, I realised that SyntaxHighlighter had released v2.0 and that java.dzone had obviously upgraded promptly. So I upgraded my blog site to use v2.0 as well, by following the official instructions. I also noticed that the author of SyntaxHighlighter is kind enough to host the files from the package, which is good for bloggers like me (who do not have their own file servers on the internet).

Wednesday, 11 February 2009

Facebook API in Erlang

I have heard of Facebook APIs and Facebook applications so today I decided to check out what they can do.

Based on the documentation on Facebook, an Application is something that is hosted on an external server (such as your ISP or company web/application server) and can be invoked/accessed from a Facebook page. This seems very disappointing - I first thought a Facebook Application was something you could create using Facebook widgets and host on Facebook servers. This model is not appealing to me - then again, I am not an advertiser.

Nevertheless, I decided to give the Facebook API a try. Looking through the list of supported languages, official Java support has been discontinued by Facebook (what do you expect from a PHP shop?!), so the natural choice is Erlang, using the Erlang2Facebook client library. Before using the library, I had to prepare my environment:

  1. Install the latest Erlang - OTP R12B (v5.6.5). My old OTP R11B would not work because the library uses some new functions from the standard library (e.g. decode_packet).
  2. Download mochiweb_util.erl and mochijson2.erl as they are used by the erlang_facebook library.
  3. Download the erlang_facebook.erl

The Facebook APIs are RESTful services which support both GET and POST methods. Most API calls require a signature input parameter, which is an MD5 hash of all the input parameters concatenated in alphabetical order. This is explained in the Facebook Developers Wiki. Also, many API calls require the uid or session_key as input parameters. It is a bit convoluted to get the session key:

  1. Create your own Application by following the instructions from Facebook so that you will get your own API Key, Application Secret, etc.
  2. To get a session_key value, you have to get an auth_token by accessing the URL: http://www.facebook.com/login.php?api_key=1f5f..., which will forward to your application host's URL with an input parameter for the auth_token. In my case, it forwards to the URL: http://romenlaw.blogspot.com/?auth_token=e1761... So now I have an auth_token.
  3. Once I have the auth_token, I can call the facebook.auth.getSession API to get the session_key and uid.

In Erlang, this is shown below:

Erlang (BEAM) emulator version 5.6.5 [smp:2] [async-threads:0]

Eshell V5.6.5  (abort with ^G)
1> c("/projects/facebook_erl/erlang_facebook.erl").
{ok,erlang_facebook}
2> 
2> c("/projects/facebook_erl/mochiweb_util.erl").
{ok,mochiweb_util}
3> 
3> c("/projects/facebook_erl/mochijson2.erl").
{ok,mochijson2}
4> 
4> [ApiKey, Secret, AppId]=["1fef...", "99f2...", "41..."].
...
9> erlang_facebook:custom(ApiKey, Secret, "facebook.auth.getSession", [{"auth_token", "c92b9...this is the auth_token copied from step 2 above"}]).
{struct,[{<<"session_key">>,
          <<"2.Ryk_v_nVtG...">>},
         {<<"uid">>,109...},
         {<<"expires">>,123...}]}
...
19> erlang_facebook:custom(ApiKey, Secret, "facebook.users.getLoggedInUser", [{"session_key", SessionKey}]).
109...(same as the uid returned by getSession call above)
28> [Fid]=erlang_facebook:custom(ApiKey, Secret, "facebook.friends.get", [{"uid", "1092201851"}]).
[598...]
31> erlang_facebook:custom(ApiKey, Secret, "facebook.friends.get", [{"uid", "598..."}]).            
{struct,[{<<"error_code">>,10},
         {<<"error_msg">>,
          <<"Application does not have permission for this action">>},
         {<<"request_args">>,
...
36> erlang_facebook:custom(ApiKey, Secret, "facebook.users.getInfo", [{"uids", "109..."},{"fields", "uid, first_name, last_name, name, sex, birthday, affiliations, locale, profile_url, proxied_email"}]).
[{struct,[{<<"affiliations">>,
           [{struct,[{<<"nid">>,67...},
                     {<<"name">>,<<"Australia">>},
                     {<<"type">>,<<"region">>},
                     {<<"status">>,<<>>},
                     {<<"year">>,0}]}]},
          {<<"birthday">>,null},
          {<<"first_name">>,<<"Romen">>},
          {<<"last_name">>,<<"Law">>},
          {<<"name">>,<<"Romen Law">>},
          {<<"sex">>,null},
          {<<"uid">>,109...},
          {<<"locale">>,<<"en_US">>},
          {<<"profile_url">>,
           <<"http://www.facebook.com/people/Romen-Law/109...">>},
          {<<"proxied_email">>,
           <<"apps+419..."...>>}]}]

Friday, 23 January 2009

Happy 牛 Year!

Happy New Year of the Ox!

Source code available here.

Wednesday, 21 January 2009

JavaFX MediaPlayer Memory Leak?

I wrote an e-card using JavaFX 1.0 to celebrate the upcoming Chinese New Year. It's a typical little multimedia applet with some animation, music and sound effects - supposedly perfect for JavaFX. However, I found that the memory usage steadily climbs even when there is no activity (animation) happening on the canvas. I refactored, double-checked and triple-checked my source code several times to make sure that there was no unnecessary object creation and that objects were reused (by changing their opacity) every time - but to no avail.

Then I did a little experiment and found that the number-one culprit could be javafx.scene.media.MediaPlayer. The test program has a MediaPlayer and a blank canvas with two buttons - a Start button to play the media/music and a Stop button to stop it. The source code for this simple test is shown below.

package testjavafx;

import javafx.animation.Interpolator;
import javafx.animation.KeyFrame;
import javafx.animation.Timeline;
import javafx.ext.swing.SwingButton;
import javafx.scene.media.Media;
import javafx.scene.media.MediaPlayer;
import javafx.scene.media.MediaView;
import javafx.scene.Scene;
import javafx.scene.text.Font;
import javafx.scene.text.Text;
import javafx.stage.Stage;

var m=MediaPlayer {
    autoPlay: false
    repeatCount: MediaPlayer.REPEAT_FOREVER
    media: Media {
        source: "{__DIR__}bubugao.mp3"
    }
};
Stage {
    title: "Application title"
    width: 250
    height: 250
    scene: Scene {
        content: [
            MediaView {
                mediaPlayer: m
            }
            SwingButton {
                translateY:100
                text: "Start"
                action: function() {  
                    m.play();
                }
            },
            SwingButton {
                translateY: 150
                text: "Stop"
                action: function() {  
                    m.stop();
                }
            }
        ]
    }
}

Profiling the application, I found the same memory usage pattern: memory usage climbs steadily every time the media is played (i.e. the Start button is pressed). The heap graphs below were captured from NetBeans 6.5.

The notes on the first graph (on the left) are explained below:

  1. play - the Start button was pressed
  2. GC - garbage collection was forced by pressing the GC icon several times in NetBeans
  3. stop - the Stop button was pressed
  4. GC - garbage collection was forced

The notes on the second graph (on the right) are explained below:

  1. GC - garbage collection was forced by pressing the GC icon several times in NetBeans
  2. play - the Start button was pressed several times quickly
  3. stop - the Stop button was pressed
  4. GC - garbage collection was forced

The MediaPlayer also has very limited support for media formats - it does not support wave files or MPEG-2.5 sound files... so I couldn't use most of the sound-effect files available on the internet. And this is Sun's solution for multimedia applications?!

Wednesday, 7 January 2009

Moon Monsters in JavaFX

The Moon Monsters demo shows up as the first sample in Microsoft's Silverlight 1.0 Gallery. I thought it'd be great to test-drive JavaFX by porting this demo from Silverlight to JavaFX 1.0.

This seemingly simple demo actually touches on quite a few areas in the core strengths of the JavaFX APIs - 2D graphics, data binding and input event handling. While the JavaFX port of Moon Monsters is a pretty faithful implementation of the original features, it is not 100% complete. The following are not implemented here:

  1. The graphics for paintbrush and keyboard are not included because it is too laborious to copy the coordinates into the JavaFX script.
  2. The HTML Overlay feature is not implemented. I don't know how to do this in JavaFX because unlike Silverlight 1.0, JavaFX is not Javascript based and it is not bound to HTML either. If anyone knows how to do this in JavaFX, please leave a comment.
The source files are available here: alien.zip


Friday, 2 January 2009

Problem Mixing SmartGWT and GWT Widgets

I set out implementing my Address GUI using SmartGWT 1.0b1 together with GWT 1.5.3 for Windows - both the latest releases available for download. SmartGWT does not have any formal documentation; information is scattered across the Javadoc, the showcase and the developer/user forums. Although not ideal, the documentation is generally adequate.

I quickly ran into a problem with SmartGWT. I wanted to have a menu bar at the top of my application, as shown in the following diagram (which is a screenshot of the GWT-Ext implementation of the same application).

SmartGWT does have a MenuBar in its API. However, this class does not seem to be fully implemented - there is no method to add menus (the addMenus() method is missing from the download although it appears in the online javadoc). As a workaround, I decided to use GWT's own MenuBar and MenuItem in combination with SmartGWT's ToolStrip and TabSet widgets, as shown below.
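
The combination I tried looks roughly like the following sketch (illustrative only - the class and widget names are mine, and the exact SmartGWT 1.0b1 API may differ slightly):

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Command;
import com.google.gwt.user.client.ui.MenuBar;
import com.google.gwt.user.client.ui.RootPanel;
import com.smartgwt.client.widgets.layout.VLayout;
import com.smartgwt.client.widgets.tab.Tab;
import com.smartgwt.client.widgets.tab.TabSet;
import com.smartgwt.client.widgets.toolbar.ToolStrip;

public class AddressGui implements EntryPoint {
    public void onModuleLoad() {
        // GWT's own menu bar, since SmartGWT's MenuBar offers no way to add menus
        MenuBar fileMenu = new MenuBar(true);
        fileMenu.addItem("Exit", new Command() {
            public void execute() { /* ... */ }
        });
        MenuBar menuBar = new MenuBar();
        menuBar.addItem("File", fileMenu);

        // SmartGWT widgets for the rest of the layout
        ToolStrip toolStrip = new ToolStrip();
        toolStrip.setWidth100();
        TabSet tabSet = new TabSet();
        tabSet.addTab(new Tab("Addresses"));

        VLayout layout = new VLayout();
        layout.setWidth100();
        layout.setHeight100();
        layout.addMember(toolStrip);
        layout.addMember(tabSet);

        RootPanel.get().add(menuBar);   // plain GWT widget at the top
        RootPanel.get().add(layout);    // SmartGWT layout below - the open menu drops behind this
    }
}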

I have done the same thing with the GWT-Ext version of the Address GUI (as shown in the first diagram above), using GWT's Menu with GWT-Ext's ToolBar and TabPanel widgets, without any problems. However, in SmartGWT, the drop-down menu appears behind the SmartGWT widgets and gets obscured by them. So I cannot really use the menu items any more. This result is shown in the above diagram.

It seams that SmartGWT is not so smart after all. [Update 2009-01-03]: Just joking!