Friday, April 27, 2007

Interesting panel records from TSSJS2007

TheServerSide has posted recordings of three interesting panels from TSSJS 2007. They are worth watching.
SOA Technology Panel - TSSJS 2007
The Next Application Platform - TSSJS 2007
High Performance Computing Panel - TSSJS 2007

The keywords discussed are SOA, EDA, EAI, high-performance infrastructure, ESB, Ajax, mashups, management, and scalability (scaling out).


Wednesday, April 11, 2007

Reload a page in Firefox

A weird problem happened to Firefox on my desktop today. Some Wikipedia pages could NOT be loaded correctly because of a CSS problem. 'Reload the current page' just did not work, and neither did F5. Restarting Firefox did not help either. Then I figured out that I needed to reload the page and override the cache, which is Ctrl+F5. It is better to learn all the shortcuts and find more tricks.

Thursday, April 5, 2007

Created a calendar for University of Saskatchewan academic schedule

I always need to visit this page to get the university schedule, such as closure days and deadlines. I searched on Google Calendar and did not find a similar calendar, so I decided to create one that may be useful for everyone. I included only the highlighted events, closure dates, and important days for graduate students. I will try to remember to update the calendar when the schedule for 2007-2008 is available. The calendar's title is University of Saskatchewan Academic Schedule. The iCal link of the calendar is http://www.google.com/calendar/ical/rqor6pie0u4vq3fhfpvmpsu5cc%40group.calendar.google.com/public/basic.ics
Or you can subscribe to the calendar directly in Google Calendar.



Wednesday, April 4, 2007

Got a booklet of UofS centennial stamps

I found an intra-campus envelope in my mailbox in the department office this morning, and it turned out to be a beautiful gift from the university: a booklet of stamps celebrating the centennial of our university.



Wednesday, March 21, 2007

Evolution of eBay platform architecture

Thanks to Michael Platt's blog, I downloaded the slides on eBay's architecture and just finished reading them. The points that interest me are as follows.

1. Scalability and maintainability are very important for large-scale distributed systems. The scalability problem has been the major driver of the evolution of eBay's platform, and maintainability is the major consideration for future development.
2. Scaling out is the proper long-term solution for handling increasing load.
3. The principle of separation of concerns benefits scalability as well as ease of development and maintenance.
4. Combining synchronous and asynchronous interactions is a good practice.
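The fourth point can be sketched in a few lines of JavaScript. This is only an illustration under my own assumptions, not eBay's actual design: the `placeBid` function and the in-process `queue` array (standing in for a real message bus) are hypothetical names I made up.

```javascript
// Sketch: combine a synchronous interaction (the caller gets an
// immediate answer) with an asynchronous one (slower, non-critical
// work is queued for a background consumer to pick up later).
// "queue" is an in-process array standing in for a message bus.
var queue = [];

function placeBid(item, amount) {
  // Synchronous part: validate and record the bid right away.
  if (amount <= item.highBid) {
    return { accepted: false };
  }
  item.highBid = amount;
  // Asynchronous part: defer notification work instead of doing it inline.
  queue.push({ type: 'notifyOutbidUsers', item: item.id });
  return { accepted: true };
}
```

The caller is never blocked on the slow work; whatever drains the queue can run at its own pace.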


Tuesday, March 20, 2007

Who understands the links?

In his post Links are for Humans Only? I Don't Think So, Stefan argues that

as long as Web services don’t actually add something more to the Web than a single endpoint per service, they are not “on the Web”, but are indeed dead ends.

I agree with his opinion. It also reminds me of a question that came to my mind last night: why is it more difficult to collect visit statistics for an XML feed than for the web page of the same resource, when there is no means of accessing the server log? For web pages, just a snippet of script can make each visit recorded by a service on the web. But we cannot put the script in XML feeds, because feed readers will do nothing with it. In effect, web browsers automate the browser-to-machine interaction that the script describes.
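One workaround, since feed readers render HTML but ignore scripts, is to embed a tiny tracking image in each entry so that every view still triggers one request to a stats service. This is a minimal sketch; the `addBeacon` helper and the stats endpoint URL are hypothetical, not part of any real service.

```javascript
// Sketch: append a 1x1 tracking image to a feed entry's HTML body.
// Feed readers fetch images even though they ignore <script>, so the
// stats server sees one hit per rendered view. The endpoint is made up.
function addBeacon(entryHtml, entryId) {
  var beacon = '<img src="http://stats.example.com/hit?entry=' +
               encodeURIComponent(entryId) +
               '" width="1" height="1" alt="" />';
  return entryHtml + beacon;
}
```

This image-beacon trick is roughly how hosted feed-statistics services of that era worked.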

If the scripts are removed from web pages, what is left is the content meant for representation in web browsers. The links are among those representations. So who understands the links? Here is a paragraph from Fielding's dissertation:

Semantics are a byproduct of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI—they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.


For sure, humans can understand the links. However, they may be misled by what they see on the page; they take a risk by clicking. Web applications can also try to understand the links, although they do not need to. This requires a prior, agreed-upon description of what the links do. As Stefan pointed out, this can also be done in XML for machine-to-machine interaction, e.g. Atom publishing. The condition is that the application accessing the XML knows the Atom protocol. Similarly, web services supporting WS-Addressing are not dead ends; they can reply with 'links' to other services. Again, the condition is that the service consumers know WS-Addressing.
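That prior agreement can be made concrete with a tiny sketch. A client "understands" an Atom-style link only because it knows in advance what a relation type like "edit" means; the `findLink` helper below and its sample data are my own illustration, not part of any library.

```javascript
// Sketch: given Atom-style link objects ({rel, href}), pick the href
// whose relation matches what the client already knows to look for.
// The semantics live in the prior agreement on "rel" values, not in
// the link itself.
function findLink(links, rel) {
  for (var i = 0; i < links.length; ++i) {
    if (links[i].rel === rel) {
      return links[i].href;
    }
  }
  return null; // the client has no agreement covering this relation
}
```

A client that has never heard of a given relation type simply cannot follow it, which is the machine-side version of a human guessing at an unlabeled link.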

Sunday, March 11, 2007

Using JSON to access Google data

Google supports JSON-format data for most services, including Google Reader shared items. I just tried using JSON to read the entries from my shared items in Google Reader and display them in the sidebar of the index page using the blog's native CSS. The code is as follows.


<div class="module-info module">
  <h2 class="module-header">
    My reading</h2>
  <div id="reading" class="module-content">
  </div>
</div>
<script>
function listEntries(root) {
  var html = ['<ul class="module-list">'];
  for (var i = 0; i < root.items.length; ++i) {
    var entry = root.items[i];
    var title = entry.title;
    var link = entry.alternate.href;
    html.push('<li class="module-list-item">',
              '<a href="' + link + '" target="_blank">' + title + '</a>',
              '</li>');
  }
  html.push('</ul>');
  document.getElementById("reading").innerHTML = html.join("");
}
</script>
<script src="http://www.google.com/reader/public/javascript/user/00357859579464014466/state/com.google/broadcast?n=5&callback=listEntries">
</script>
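The second script tag is a JSON-P request: the server responds with executable JavaScript that invokes the function named by the callback parameter, passing the data as its argument. A minimal sketch of what such a response looks like, with a hypothetical `buildJsonp` helper simulating the server side:

```javascript
// Sketch: a JSON-P server wraps its JSON payload in a call to the
// client-supplied callback name, so a plain <script src="..."> tag
// can deliver cross-domain data. No network involved here.
function buildJsonp(callbackName, data) {
  return callbackName + '(' + JSON.stringify(data) + ');';
}
```

So the browser fetches the URL, gets back something like `listEntries({"items": [...]});`, and executes it, which is why `listEntries` must already be defined when the second script loads.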