Wednesday, March 21, 2007

Evolution of eBay platform architecture

Thanks to Michael Platt's blog, I downloaded the slides on eBay's architecture and just finished reading them. The points that interested me most are as follows.

1. Scalability and maintainability are crucial for large-scale distributed systems. The scalability problem has been the major driver of the evolution of eBay's platform, and maintainability is the major consideration for future development.
2. Scaling out is the proper long-term solution for increasing load.
3. The principle of separation of concerns benefits scalability as well as ease of development and maintenance.
4. Combining synchronous and asynchronous interactions is a good practice; a small sketch follows below.
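
The fourth point is easy to picture even in plain browser JavaScript. The sketch below is not from the eBay slides; the URLs and function names are made up purely to show the pattern: the call whose result is needed right away is made synchronously, while the non-critical notification is fired asynchronously and nothing waits for its reply.

<script type="text/javascript">
// Hypothetical example: placing an order must complete before the page
// continues, so it is sent synchronously; logging activity is not critical,
// so it is sent asynchronously and no one waits for the reply.
function placeOrder(orderData) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/orders", false);        // false = synchronous request
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(orderData);
  return xhr.responseText;                   // the caller blocks until this arrives
}

function logActivity(eventData) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/activity-log", true);   // true = asynchronous request
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.send(eventData);                       // fire and forget, no handler attached
}

var receipt = placeOrder("item=123&qty=1");  // synchronous: the result is needed now
logActivity("event=order-placed&item=123");  // asynchronous: the result is not needed
</script>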


Tuesday, March 20, 2007

Who understands the links?

Stefan argued that
as long as Web services don’t actually add something more to the Web than a single endpoint per service, they are not “on the Web”, but are indeed dead ends.

in his post Links are for Humans Only? I Don't Think So.

I agree with his opinion, and it reminds me of a question that came to mind last night: why is it more difficult to collect visit statistics for an XML feed than for the web page of the same resource, when there is no means to access the server log? For web pages, just a snippet of script can have each visit recorded by a service on the web. But we cannot put such a script in XML feeds, because feed readers will do nothing with it. In fact, web browsers automate the browser-to-machine interaction that the script describes.
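
Something like the following is what I mean by "a snippet of script"; the counter URL and parameters are made up for illustration. The browser runs it and silently reports the visit to the counter service, while a feed reader fetching the same content as XML simply ignores it.

<script type="text/javascript">
// Build a beacon request carrying the visited page and the referrer;
// when the browser fetches the image, the counter's server logs the hit.
var beacon = new Image();
beacon.src = "http://counter.example.com/hit?page=" +
             encodeURIComponent(location.href) +
             "&ref=" + encodeURIComponent(document.referrer);
</script>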

If the scripts are removed from web pages, then what is left is the representation rendered by the web browser, and the links are part of that representation. So who understands the links? Here is a paragraph from Fielding's dissertation:

Semantics are a byproduct of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI—they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.


For sure, humans can understand the links. However, they may be misled by what they see on the page; they take a risk by clicking them. Web applications can also try to understand the links, although they do not need to; this requires a prior description of what the links do. As Stefan pointed out, this can also be done in XML for machine-to-machine interaction, e.g. with Atom publishing; the condition is that the application accessing the XML knows the Atom Publishing Protocol. Similarly, web services supporting WS-Addressing are not dead ends, since they can reply with 'links' (endpoint references) to other services; again, the condition is that the service consumers know WS-Addressing.
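
As a sketch of what "understanding the links" means for a machine, here is a small client that fetches an Atom entry and looks for the link with rel="edit", which the Atom Publishing Protocol designates as the URI for updating the entry. The entry URI is made up and the parsing is deliberately naive.

<script type="text/javascript">
// Fetch an Atom entry and hand the href of its rel="edit" link to a callback.
function findEditLink(entryUri, onFound) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", entryUri, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState != 4) return;
    var links = xhr.responseXML.getElementsByTagName("link");
    for (var i = 0; i < links.length; ++i) {
      if (links[i].getAttribute("rel") == "edit") {
        onFound(links[i].getAttribute("href"));  // the URI for updating the entry
        return;
      }
    }
  };
  xhr.send(null);
}

findEditLink("/blog/atom/entry-1", function (editUri) {
  // The client can now PUT a revised entry to editUri.
});
</script>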

Sunday, March 11, 2007

Using JSON to access Google data

Google supports JSON-formatted data for most of its services, including Google Reader shared items. I just tried using JSON to read the entries from my shared items in Google Reader and display them in the sidebar of the index page, reusing the blog's existing CSS. The code is as follows.


<div class="module-info module">
  <h2 class="module-header">
    My reading</h2>
  <div id="reading" class="module-content">
  </div>
</div>
<script type="text/javascript">
// Callback invoked by the Google Reader JSON feed loaded below: it builds
// a list of links from the shared items and puts it into the sidebar.
function listEntries(root) {
  var html = ['<ul class="module-list">'];
  for (var i = 0; i < root.items.length; ++i) {
    var entry = root.items[i];
    var title = entry.title;
    var link = entry.alternate.href;
    html.push('<li class="module-list-item">',
              '<a href="' + link + '" target="_blank">' + title + '</a>',
              '</li>');
  }
  html.push('</ul>');  // close the list
  document.getElementById("reading").innerHTML = html.join("");
}
</script>
<script src="http://www.google.com/reader/public/javascript/user/00357859579464014466/state/com.google/broadcast?n=5&callback=listEntries">
</script>
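
A note on the second script tag above: the callback=listEntries parameter asks Google's server to wrap the JSON data in a call to listEntries, so the data arrives through an ordinary script include rather than XMLHttpRequest. As far as I understand, this is what lets it work across domains, since the same-origin policy would block an XMLHttpRequest from this blog to google.com.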




Thursday, March 8, 2007

Sharing my reading by Google Reader

Google Reader has a feature that generates a page and a feed for your shared items. My shared items are here, and the feed is here as well. It can also generate a clip in JavaScript; I added the clip to the sidebar of this blog's index page.

Tuesday, March 6, 2007

About the "Web of Services for Enterprise Computing" workshop

Thanks to Steve's post, I read most of the position papers and slides presented at the workshop. It is interesting to know what people from both the REST and WS sides think about services and the Web. Eric has posted two summaries of the workshop, and Paul posted a summary on his blog as well.

Thursday, March 1, 2007

REST vs ?

It is nice to read the blogs that I have missed for almost two months and to write something of my own. I found that many people have raised yet more tons of discussion about REST and web services. As I remember, they compared REST with SOAP at first, then REST with web services, and then REST with SOA. I do not know what will be compared with REST next.

Many like to predict the technical changes in the coming year. In his post, Carlos predicted that
"WS-* and its corresponding specifications like SOAP and WSDL will be declared dead by year end."

It is interesting that the technology is considered hopeless while the developers of SOAP engines are still struggling to improve its performance. What I hope is that by the end of 2007 the debate about "REST vs ?" will stop. Obviously, the "death of web services" cannot guarantee that.

Mark Nottingham listed the "real and imagined issues" here. I agree with most of them. However, I still think the following is a real issue when interpreting REST as the technological style of the Web.
"False Choice: Machines vs. People
There’s an insistence from some quarters that somehow, HTTP and REST are only good for people sitting behind browsers. I think that this has been solidly and obviously disproven, and find it difficult to believe that such continued, vigorous assertions are anything other than FUD. *shrug*"

Why? Because "one of the deeper REST constraints is using hypertext as the engine of application state". The Web is a space of information, or a virtual state machine of web pages. Mark Baker believes "the Web, since its inception, has always been about services, and therefore that 'Web services' are redundant." Of course, that depends on how "service" is defined. The success of the Web results from its architecture, REST, or more concretely, from client-server, request-response messaging, and loose coupling through HTML, XML, and widely accepted scripts. To make all this happen, the browsers are the heroes in the background: they are the agents working for people, and people with browsers trigger the state transfers of the Web as a virtual state machine. In the service scenario, every service can itself trigger state transfers in the virtual state machine of services. If a very long URL with all the request information encoded in it is tolerable, and no MEPs other than request-response are needed, then fine, let's just call it a "service" rather than a "web service" and make it work just like a web page.
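
To make the "engine of application state" idea concrete, here is a small sketch of a client that behaves the way a person with a browser does: it knows only one entry URI, and every further request comes from a link found in the previous representation. The JSON format and the URIs are made up for illustration.

<script type="text/javascript">
// Fetch a representation and hand it to a handler; the handler decides
// the next state by picking one of the links found in the representation.
function follow(uri, handleState) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", uri, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
      handleState(eval("(" + xhr.responseText + ")"));  // era-typical JSON parsing
    }
  };
  xhr.send(null);
}

// The only URI the client is given in advance.
follow("/orders/42", function (order) {
  // The representation tells the client what it can do next;
  // the client follows a link instead of constructing a URI itself.
  if (order.links && order.links.payment) {
    follow(order.links.payment, function (payment) {
      // ...the next application state, again driven by a link.
    });
  }
});
</script>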