Thursday, November 29, 2007

Jetty 6 authentication via configuration XML without web.xml

I spent about a week porting NetKernel 3.3 from Jetty 5 to Jetty 6. It is done, and I am now testing the non-blocking I/O (NIO) of Jetty 6 together with the asynchronous processing feature of NetKernel. I also figured out how to configure Jetty 6 security in the configuration XML file. According to the Jetty documentation on realms, the security of an application can be configured in web.xml. However, there is no web.xml in my case: requests received by Jetty are handled by a specific handler, which calls another handler-like facility for processing. The Jetty 5 solution does not work here, since the corresponding APIs of org.mortbay.jetty.Server have been removed in Jetty 6. I got some hints from the documentation on configuring security for embedded Jetty, but it still assumes a web application via the WebAppContext class. Then I wondered whether org.mortbay.jetty.security.SecurityHandler would help (it has a promising name). It does. The trick is a somewhat longer XML configuration that creates a HashUserRealm and an array of ConstraintMappings; see the details in the following snippet.

<Set name="handler">
  <New id="Handlers" class="org.mortbay.jetty.handler.HandlerCollection">
    <Set name="handlers">
      <Array type="org.mortbay.jetty.Handler">
        <Item>
          <New id="BackendSecurity"
               class="org.mortbay.jetty.security.SecurityHandler" />
        </Item>
        <Item>
          <New id="BackendNetkernel"
               class="org.ten60.transport.jetty.HttpHandler" />
        </Item>
      </Array>
    </Set>
  </New>
</Set>
<!-- =========================================================== -->
<!-- Configure BackendSecurity -->
<!-- Add a Realm and a ConstraintMappings to it. See -->
<!-- http://docs.codehaus.org/display/JETTY/How+to+Configure+Security+with+Embedded+Jetty -->
<!-- =========================================================== -->
<Ref id="BackendSecurity">
  <Set name="UserRealm">
    <New class="org.mortbay.jetty.security.HashUserRealm">
      <Set name="name">Test Realm</Set>
      <Set name="config">
        <SystemProperty name="bootloader.basepath" default=".." />/etc/realm.properties</Set>
    </New>
  </Set>
  <Set name="AuthMethod">DIGEST</Set>
  <Set name="ConstraintMappings">
    <Array type="org.mortbay.jetty.security.ConstraintMapping">
      <Item>
        <New id="BSConstraintMapping"
             class="org.mortbay.jetty.security.ConstraintMapping">
          <Set name="Constraint">
            <New class="org.mortbay.jetty.security.Constraint">
              <Set name="Name">allSite</Set>
              <Set name="Roles">
                <Array type="java.lang.String">
                  <Item>admin</Item>
                </Array>
              </Set>
              <Set name="Authenticate">true</Set>
            </New>
          </Set>
          <Set name="PathSpec">/</Set>
        </New>
      </Item>
    </Array>
  </Set>
</Ref>
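
For anyone doing the same wiring programmatically instead of through the configuration XML, the snippet above maps fairly directly onto plain Java against the Jetty 6 API. The following is only a rough, untested sketch of that mapping; the setter names are inferred from the <Set> elements above, while the port number and file path are made-up examples.

import org.mortbay.jetty.Handler;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.handler.HandlerCollection;
import org.mortbay.jetty.security.Constraint;
import org.mortbay.jetty.security.ConstraintMapping;
import org.mortbay.jetty.security.HashUserRealm;
import org.mortbay.jetty.security.SecurityHandler;

public class BackendSecuritySketch {
    public static void main(String[] args) throws Exception {
        Server server = new Server(1060); // example port

        // Realm backed by a properties file, as in <Set name="UserRealm"> above.
        HashUserRealm realm = new HashUserRealm();
        realm.setName("Test Realm");
        realm.setConfig("../etc/realm.properties");

        // Require the "admin" role on every path, as in the ConstraintMapping above.
        Constraint constraint = new Constraint();
        constraint.setName("allSite");
        constraint.setRoles(new String[] { "admin" });
        constraint.setAuthenticate(true);

        ConstraintMapping mapping = new ConstraintMapping();
        mapping.setConstraint(constraint);
        mapping.setPathSpec("/");

        SecurityHandler security = new SecurityHandler();
        security.setUserRealm(realm);
        security.setAuthMethod("DIGEST");
        security.setConstraintMappings(new ConstraintMapping[] { mapping });

        // The security handler comes before the NetKernel transport handler.
        HandlerCollection handlers = new HandlerCollection();
        handlers.setHandlers(new Handler[] {
                security,
                new org.ten60.transport.jetty.HttpHandler() });
        server.setHandler(handlers);
        server.start();
    }
}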


Tuesday, November 27, 2007

Securing the NetKernel backend with a Jetty realm

NetKernel uses Jetty for HTTP transport. By default, it opens two ports: one for services and one for management. A fresh installation does not secure the management port, the backend. However, you can configure it to do so using Jetty's HTAccessHandler, as described at http://www.1060.org/forum/topic/265/2 . I ran into problems doing that on Windows, so I tried using a Jetty realm instead, and it works. Here is the configuration file.

<?xml version="1.0" encoding="utf-8"?>
<httpConfig>
  <!--
  *****************
  Jetty HTTP Server
  *****************
  -->
  <Configure class="org.mortbay.jetty.Server">
    <!--
    *************
    Add Listeners
    *************
    -->
    <!-- Start addListeners -->
    <!-- Add SocketListener with default port 1060 -->
    <Call name="addListener">
      <Arg>
        <New class="org.mortbay.http.SocketListener">
          <Set name="Port">1060</Set>
          <Set name="MinThreads">5</Set>
          <Set name="MaxThreads">50</Set>
          <Set name="MaxIdleTimeMs">30000</Set>
          <Set name="LowResourcePersistTimeMs">5000</Set>
        </New>
      </Arg>
    </Call>
    <!-- End addListeners -->
    <Call name="addRealm">
      <Arg>
        <New class="org.mortbay.http.HashUserRealm">
          <Arg>Admin Realm</Arg>
          <Put name="admin">yourpasshere</Put>
          <Call name="addUserToRole">
            <Arg>admin</Arg>
            <Arg>server-administrator</Arg>
          </Call>
        </New>
      </Arg>
    </Call>
    <!--
    *******************
    Add Server Contexts
    *******************
    -->
    <!-- Default context at root / -->
    <Call name="addContext">
      <Arg>/</Arg>
      <Set name="realmName">Admin Realm</Set>
      <Set name="authenticator">
        <New class="org.mortbay.http.BasicAuthenticator" />
      </Set>
      <Call name="addHandler">
        <Arg>
          <New class="org.mortbay.http.handler.SecurityHandler" />
        </Arg>
      </Call>
      <Call name="addSecurityConstraint">
        <Arg>/</Arg>
        <Arg>
          <New class="org.mortbay.http.SecurityConstraint">
            <Arg>Admin</Arg>
            <Arg>server-administrator</Arg>
          </New>
        </Arg>
      </Call>
      <Call name="addHandler">
        <Arg>
          <New class="org.ten60.transport.jetty.HttpHandler">
            <Set name="Name">BackendHTTPTransport</Set>
          </New>
        </Arg>
      </Call>
    </Call>
  </Configure>
</httpConfig>

Jetty also provides a HashUserRealm that reads a properties file in which user names and passwords can be specified.
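
To use that variant instead of hard-coding the password in the configuration, the realm can be pointed at such a file. Below is a minimal, untested sketch based on the Jetty 6 HashUserRealm class from the post above; the file path is made up, and the commented lines show the usual Jetty "username: password,role" entry format.

import org.mortbay.jetty.security.HashUserRealm;

public class RealmFromFileSketch {
    public static void main(String[] args) throws Exception {
        // Assumed contents of etc/realm.properties, one user per line:
        //   admin: yourpasshere,server-administrator
        HashUserRealm realm = new HashUserRealm();
        realm.setName("Admin Realm");
        realm.setConfig("etc/realm.properties");
        // The realm can then be attached to a SecurityHandler or the server
        // in the same way as in the XML configurations above.
    }
}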

Tuesday, October 30, 2007

The Web is Agreement

Wonderful work by Paul Downey.
Tower of WS-Babel, exactly.



The Day The Routers Died...

Via Stefan Tilkov via James Snell, The Day The Routers Died...,
"a song performed by the secret-wg in the closing plenary of the RIPE 55 conference"


Tuesday, September 25, 2007

Google Reader supports search now

This morning I finally found that a search interface has appeared on the Google Reader page. It is time to retire the Google custom search engine I made for my subscribed blogs.



Friday, September 21, 2007

A solution for window resizing and moving on MS Windows with multiple monitors

I have tried several free tools on Windows for resizing and moving windows across multiple monitors. They were not that good, and some did not work at all with two monitors of different resolution and layout (one portrait and one landscape). However, WindowPad surprised me with how easily this can be done using AutoHotKey. The original version of WindowPad is available at http://www.autohotkey.com/forum/topic21703.html. I made two small changes:


  1. Set NumLock to be always on while the script is running;

  2. Restore a maximized window before moving it to the next monitor, and then maximize it again.



The first change fixes the problem that NumLock gets turned off when I connect to the box remotely from my laptop at home. The second change makes the script work when moving a maximized window from a high-resolution monitor to a low-resolution one. The original script has since been updated to fix the problem mentioned here and other bugs, so please check the latest version.

Tuesday, August 28, 2007

Notes on installation and configuration of latex2html with MiKTeX

"Installing LATEX2HTML with MiKTEX" and "Tools for Publishing LaTeX Documents on the Web" are two good reference for dealing with latex2html. The following are a list of notes I wrote down when installing and configuring latex2html with MiKTeX.

  1. It is important to install MiKTeX, Ghostscript, and latex2html in paths with no spaces; the default location under "Program Files" will cause problems.

  2. If you install NetPbm for Windows using the Binaries release, do not forget to grab the Dependencies as well, so you will not get error messages about missing DLLs.

  3. Change the locations where Ghostscript, NetPbm, and latex2html are installed in prefs.pm. The line numbers are not exactly the same as in "Installing LATEX2HTML with MiKTEX" due to version differences.

  4. Change $TMP in l2hconf.pm to a path with no spaces as well.

  5. While most configuration in "Installing LATEX2HTML with MiKTEX" is done by updating l2hconf.pm, I suggest using $INIT_FILE_NAME for the customizations you like. You need to change $INIT_FILE_NAME to a file name that Windows can handle, like 'dot.latex2html-init'; the default '.latex2html-init' is not accepted by Windows.

  6. Jos has a latex2html-init file that fixes several bugs in the one that comes with the latex2html distribution. I found it very helpful. I reused all the Perl functions in that file, but did not use his style part.

  7. In order to get a white background for the generated images, I set

    @IMAGE_TYPES = qw(png gif);
    $IMAGE_TYPE = $IMAGE_TYPES[0];

    which means using the PNG format.
    In the latex2html-init file, I also set

    $TRANSPARENT_FIGURES = 0; # default = 1
    $LOAD_LATEX_COLOR = "\\usepackage[dvips]{color}";
    $LATEX_COLOR = "\\pagecolor[gray]{1}"; # 1 means white and 0 means black


  8. In order to change the personal information in $ADDRESS and in the "About this document" section, I set

    $DONGHOME = "http://homepage.usask.ca/~dol142";
    $DONG = "<A href=\"$DONGHOME\"> Dong Liu</A>";
    $address_data[0] = $DONG; # My real name
    $ADDRESS = "Copyright &#169; <I>$address_data[1]</I> <I>$DONG</I>"; # rewrite $ADDRESS




Thursday, August 9, 2007

Grammar check of documents generated by LaTeX

LaTeX is good for formatting academic papers and theses with the provided style files. Spell checking LaTeX documents is never an issue with a tool like Aspell or with your text editor's support. However, grammar checking still looks like an impossible task. For Windows users, this may be the strongest excuse to say that Word is still better for everyday writing. Yes ... so why not use Word for that task on LaTeX documents? Word does not interpret LaTeX, but how about RTF or HTML?

I have tried several applications that convert PDF files to RTF, including Acrobat Professional, and the generated files are messy. So I tried tools that generate HTML instead; the one I am using now is latex2html. Open the generated HTML files, let Word spell- and grammar-check them (F7), and then modify the .tex file accordingly.


Monday, July 30, 2007

Search blogs subscribed in Google Reader

Basically, there are two ways to search all the blogs in your Google Reader subscriptions. One is to use a Google custom search engine, and the other is to use Google Gears and a Greasemonkey script.

The details of the latter option are described in Raúl's blog. The offline function of Google Reader works for me, but the Greasemonkey search script halts every time I try to start an offline search. I cannot figure out what the problem is.

What I currently use is the first option: create a custom search engine that has the links of all the subscribed blogs in its site list. The details are described on the Google Operating System blog. The search function really helps when you want to verify some 'déjà vu' from your reading.

Thursday, May 31, 2007

MS's new desktop

Just checked out the advertisement for MS Surface. It's cool. That's what a group of HCI folks have been investigating. Will it become standard equipment in luxury hotels, stores, and offices in the future?



Tuesday, May 15, 2007

HPCS2007 poster: Towards an HTTP-based Service Platform with High Scalability

This year HPCS comes to Saskatoon, and I submitted a poster titled "Towards an HTTP-based Service Platform with High Scalability". The abstract is as follows.

HTTP is the message transport used by the majority of so-called web services, or services on the web, whether they are implemented with SOAP or POX/JSON. The use of Ajax has dramatically increased the load on service platforms through long-lived connections and frequent polling. For most multithreaded platforms implementing a thread-per-request policy, this raises thread scalability problems, since the performance of each thread degrades as the total number of threads increases. Service orchestrations can also introduce thread scalability problems when they are long-lived and involve message exchanges with multiple high-latency partner services.

This poster discusses the issues an HTTP-based service platform faces in achieving high scalability. We compare implementation options for I/O, thread pools, asynchrony, and continuations, and propose a design for a highly scalable platform supporting both atomic services and service orchestrations.




Friday, April 27, 2007

Interesting panel records from TSSJS2007

TheServerSide has posted recordings of three interesting panels from TSSJS 2007. They are worth watching.
SOA Technology Panel - TSSJS 2007
The Next Application Platform - TSSJS 2007
High Performance Computing Panel - TSSJS 2007

The keywords discussed include SOA, EDA, EAI, high-performance infrastructure, ESB, Ajax, mashups, management, and scalability (scaling out).


Wednesday, April 11, 2007

Reload a page in Firefox

A weird problem happened with Firefox on my desktop today. Some Wikipedia pages could NOT be loaded correctly because of a CSS problem. 'Reload the current page' just did not work, and neither did F5. Restarting Firefox did not help either. Then I figured out that I needed to reload and override the cache, which is Ctrl+F5. It is worth learning all the shortcuts and finding more tricks like this.

Thursday, April 5, 2007

Created a calendar for the University of Saskatchewan academic schedule

I always need to visit this page to check the university schedule, such as closure days and the last day to do certain things. I searched Google Calendar and did not find a similar one, so I decided to create a calendar that may be useful for everyone. I included only the highlighted events, closure dates, and important dates for graduate students. I will try to remember to update the calendar when the schedule for 2007-2008 is available. The calendar's title is University of Saskatchewan Academic Schedule. The iCal link of the calendar is http://www.google.com/calendar/ical/rqor6pie0u4vq3fhfpvmpsu5cc%40group.calendar.google.com/public/basic.ics
You can also subscribe to the calendar directly in Google Calendar.



Wednesday, April 4, 2007

Got a booklet of UofS centennial stamps

I found an intra-campus envelope in my mailbox in the department office this morning, and it turned out to contain a beautiful gift from the university: a booklet of stamps celebrating its centennial.



Wednesday, March 21, 2007

Evolution of eBay platform architecture

Thanks to Michael Platt's blog, I downloaded the slides on eBay's architecture and have just finished reading them. The points that interested me are as follows.

1. Scalability and maintainability are very important for large-scale distributed systems. Scalability problems were the major driver of the evolution of eBay's platform, and maintainability is the major consideration for future development.
2. Scaling out is the proper long-term solution for increasing load.
3. The principle of separation of concerns benefits scalability as well as ease of development and maintenance.
4. Combining synchronous and asynchronous interactions is a good practice.


Tuesday, March 20, 2007

Who understands the links?

Stefan considered that
as long as Web services don’t actually add something more to the Web than a single endpoint per service, they are not “on the Web”, but are indeed dead ends.

in the post Links are for Humans Only? I Don't Think So.

I agree with his opinion, and it reminds me of a question that came to my mind last night: why is it more difficult to collect visit statistics for an XML feed than for the web page of the same resource when there is no way to access the server log? For web pages, a small snippet of script can get each visit recorded by a service on the web. But we cannot put such a script in XML feeds, because feed readers will do nothing with it. In fact, web browsers automate this browser-to-machine interaction described by the script.

If the scripts are removed from web pages, what is left are the parts used for representation in web browsers, and the links are among those representations. So who understands the links? Here is a paragraph from Fielding's paper:

Semantics are a byproduct of the act of assigning resource identifiers and populating those resources with representations. At no time whatsoever do the server or client software need to know or understand the meaning of a URI—they merely act as a conduit through which the creator of a resource (a human naming authority) can associate representations with the semantics identified by the URI. In other words, there are no resources on the server; just mechanisms that supply answers across an abstract interface defined by resources. It may seem odd, but this is the essence of what makes the Web work across so many different implementations.


For sure, humans can understand the links. However, they may be deceived by what they see on the page, and they take a risk when clicking them. Web applications can also try to understand the links, although they do not need to; this requires their behavior to be prescribed in advance. As Stefan pointed out, this can also be done in XML for machine-to-machine interaction, e.g. Atom publishing, on the condition that the application accessing the XML knows the Atom protocol. Similarly, web services supporting WS-Addressing are not dead ends; they can reply with 'links' to other services. Again, the condition is that the service consumers know WS-Addressing.

Sunday, March 11, 2007

Using JSON to access Google data

Google supports JSON-formatted data for most of its services, including Google Reader shared items. I just tried using JSON to read the entries from my shared items in Google Reader and display them in the sidebar of the index page using the blog's native CSS. The code is as follows.


<div class="module-info module">
  <h2 class="module-header">
    My reading</h2>
  <div id="reading" class="module-content">
  </div>
</div>
<script type="text/javascript">
function listEntries(root) {
  var html = ['<ul class="module-list">'];
  for (var i = 0; i < root.items.length; ++i) {
    var entry = root.items[i];
    var title = entry.title;
    var link = entry.alternate.href;
    html.push('<li class="module-list-item">',
              '<a href="' + link + '" target="_blank">' + title + '</a>',
              '</li>');
  }
  html.push('</ul>'); // close the list
  document.getElementById("reading").innerHTML = html.join("");
}
</script>
<script src="http://www.google.com/reader/public/javascript/user/00357859579464014466/state/com.google/broadcast?n=5&callback=listEntries">
</script>




Thursday, March 8, 2007

Sharing my reading by Google Reader

Google Reader has a feature that generates a page and a feed for your shared items. My shared items are here, and the feed is here as well. It can also generate a clip in JavaScript, which I have added to the sidebar of this blog's index page.

Tuesday, March 6, 2007

About the "Web of Services for Enterprise Computing" workshop

Thanks to Steve's post, I read most of the position papers and slides presented at the workshop. It is interesting to know what people from both the REST and WS sides think about services and the Web. Eric has posted two summaries of the workshop, and Paul posted a summary on his blog as well.

Thursday, March 1, 2007

REST vs ?

It is nice to read the blogs I have missed for almost two months and to write something of my own. I found that many people have raised yet another ton of discussion about REST and web services. As I remember, they compared REST and SOAP at first, then REST and web services, and then REST and SOA. I do not know what will be compared with REST next.

Many people like to predict the technical changes of the coming year. In his post, Carlos predicted that
"WS-* and its corresponding specifications like SOAP and WSDL will be declared dead by year end."

It is interesting that the technology is considered hopeless while the developers of SOAP engines are struggling to improve its performance. What I hope is that by the end of 2007 the debate about REST vs. ? will stop. Obviously, the "death of web services" cannot guarantee that.

Mark Nottingham listed the "real and imagined issues" here. I agree with most of them. However, I still think the following really is an issue when interpreting REST as the technological style of the Web.
"False Choice: Machines vs. People
There’s an insistence from some quarters that somehow, HTTP and REST are only good for people sitting behind browsers. I think that this has been solidly and obviously disproven, and find it difficult to believe that such continued, vigorous assertions are anything other than FUD. *shrug*"

Why? Because "one of the deeper REST constraints is using hypertext as the engine of application state". The Web is a space of information, or a virtual state machine of web pages. Mark Baker believes that "the Web, since its inception, has always been about services, and therefore that "Web services" are redundant." Of course, that depends on how "service" is defined. The success of the Web results from its architecture, REST, or more concretely, client-server, request-response messaging, and loose coupling through HTML, XML, and widely accepted scripts. To make all this happen, browsers are the heroes in the background: they are the agents working for people, and people with browsers trigger the state transfers of the Web as a virtual state machine. In service scenarios, every service can trigger the state transfers of the virtual state machine of services. If a very long URL with all the request information encoded inside is tolerable and no MEPs other than request-response are needed, fine, let's just call it a "service" rather than a "web service", and make it just like a web page.

Wednesday, February 28, 2007

Intense discussion about performance of Java SOAP stacks

Stefan Tilkov posted an article on InfoQ about the discussion of Java SOAP stack performance. It seems that some developers of the three major Java SOAP stack projects, Axis2, XFire, and JAX-WS, are really concerned about the topic. Although part of it was not so comfortable, the discussion can still help improve the development of those projects. I think Steve did point out the essentials of SOAP performance that developers need to care about.

I have tried to learn all three frameworks. To me, the learning curve and tooling are also important for an open-source project to be accepted by developers, besides code reliability.



Tuesday, February 27, 2007

Switching to Google Reader

Not surprisingly, I have switched to Google Reader as my tool for reading blog feeds. I used to use the Sage plugin for Firefox, whose features were enough for my needs. However, it is difficult to synchronize my reading (both subscriptions and reading progress) across machines; from time to time I had to connect remotely to the machine at the office to synchronize. Google Reader is the solution if you need to read feeds from different machines. Switching from Sage to Google Reader is very easy: just export the OPML file from Sage and import it into Google Reader, and then you can start enjoying the new reader.

Wednesday, February 7, 2007

Interesting points from Grady Booch

I guess everyone who uses UML knows Booch. Several years ago, he joined IBM when Rational was sold. Gervas Douglas just posted some tidbits from his discussion with Booch.
Functional programming languages (like LISP, Scheme and SML) failed largely because they made it very easy to do very difficult things, but it was too hard to do the easy things.

Recently I have been reviewing and writing about continuations and their application to service orchestration and workflow implementations. Continuations are natively supported by LISP, Scheme, and SML, but not by Java. I have no experience with those older languages, but I suspect they must deal with the highly abstract nature of continuations.
The next big challenge in software architecture is concurrency. Raw clock speed has just about reached its physical limit. Chip companies are now putting multiple copies of the same CPU onto a single chip. The result is that applications can no longer just be run faster. They have to be run in parallel in some way.

I think concurrency and asynchrony are both the major power and the major challenge of the SOA world.