Saturday, June 11, 2022

RFC 9110 HTTP Semantics - a refresher and must-read for everyone working on HTTP-based applications

RFC 9110 HTTP Semantics, authored by Fielding, Nottingham, and Reschke, was just published. It is a great, refreshing summary of what we have learned about the web protocol as it has evolved over three versions. As it states, the core semantics of HTTP did not change. That core and the ideas behind it were described by Fielding in his thesis and are known as REST.

The document is long, and I have only gone through it once. It is a great technical document in terms of precision and self-description.

The Hypertext Transfer Protocol (HTTP) is a family of stateless, application-level, request/response protocols that share a generic interface, extensible semantics, and self-descriptive messages to enable flexible interaction with network-based hypertext information systems.

What a precise description. 

Wednesday, August 1, 2018

Idempotentise String Sanitizers

I just made up a word, idempotentise — to make a given function idempotent. I am neither an XSS expert nor an expert sanitizer user, but I have had to deal with sanitizers on both the server side and the client side, either to prevent injection or to fix an injection incident. I did a little research before writing this note and found that idempotence is a standard characteristic of sanitizers. However, the sanitizer in your system (either home-brewed or third-party) might not be idempotent when your regular work or sleep is interrupted by a P0 security incident, in which a smart hacker and her/his robots have carefully constructed a new attack string that is not in the cheat sheets.

What is idempotence anyway?

I kept having difficulty spelling the word correctly until I read the Wikipedia page and realized that idem means identical or the same, and potence means power. So in mathematical notation, it means the output does not change no matter how many times a function is applied to the input, or
f(x) = f(f(x)),
where f is the function and x is the input. Obviously, if the output does not change when the function is applied twice, it will not change when the function is applied more times.

Idempotence issue for string sanitizers

Unlike mathematical functions, whose idempotence can be proved, it is very difficult to prove the idempotence of a sanitizer by testing. This is because we construct sanitizers in a case-based way. We will only see an idempotence issue when a problematic string instance is input. The issue becomes more complicated when a string goes through several sanitizers on its way from user input to browser rendering.

Idempotentise it

The solution turns out to be really simple. We just need a wrapper function for any given sanitizer such that

w(w(s(x))) = w(s(x)),

where s is the sanitizer function and w is the wrapper function. For any sanitizer, the wrapper can be implemented by applying s repeatedly, up to k times:

w(x) = s(s(s(…s(x))))

such that s(w(x)) = w(x). If s(w(x)) != w(x) after k applications, then let w(x) = the empty string. We will want to log the input string and harden the sanitizer function so that it converges within k steps.
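
A minimal sketch of such a wrapper in JavaScript, assuming a sanitize function, a bound k (maxIterations here), and a logger; all the names are illustrative:

    // Wrap any sanitizer so the result is idempotent: apply it until a
    // fixed point is reached, or fail closed to the empty string.
    function idempotentise(sanitize, maxIterations = 5, log = console.warn) {
      return function wrapped(input) {
        let current = sanitize(input);
        for (let i = 1; i < maxIterations; i += 1) {
          const next = sanitize(current);
          if (next === current) {
            return current; // fixed point reached: s(w(x)) = w(x)
          }
          current = next;
        }
        // No convergence within k applications: return the empty string,
        // and log the input so the sanitizer itself can be hardened later.
        log('sanitizer did not converge within ' + maxIterations + ' steps:', input);
        return '';
      };
    }

For example, const safeEscape = idempotentise(myHtmlEscaper); where myHtmlEscaper is whatever sanitizer you already use.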

Thursday, November 2, 2017

The multiton pattern on client side

A while ago, I wrote a post about the multiton pattern on the server side, which utilizes a fixed amount of computing resources to serve a large number of concurrent requests for the same representation that has to be generated on demand. The pattern helps avoid the race conditions that concurrent request processing could lead to.

Each resource instance in the registry or map works like a proxy. The proxy's interface is the same for every consumer, no matter whether it arrives early or late. Inside the proxy, the representation is produced on demand, or retrieved from a cache or a persistent copy.

A client-side multiton can reduce not only the client-side load but also the server-side load and the traffic between client and server. Consider a client-side JS component that depends on a remote resource. When loading, the component issues an AJAX call to the remote resource and renders when the resource representation is available. A problem arises when there are many such component instances on a page targeting the same remote resource. The server sees multiple concurrent requests for the same resource representation when the client loads such a page, which keeps the backend services busy for a while. While a server-side multiton can help, a client-side multiton will further reduce the client-side load on the page and also reduce the number of connections and the network traffic to the server.
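
A minimal sketch of a client-side multiton in JavaScript. The registry caches the in-flight Promise per resource URL, so all concurrent components asking for the same URL share one request; the URL and the JSON handling are illustrative assumptions:

    // Registry mapping resource URLs to shared Promises (the proxies).
    const registry = new Map();

    function getResource(url) {
      if (!registry.has(url)) {
        // The first consumer creates the proxy; later consumers reuse it,
        // whether the Promise is still pending or already resolved.
        registry.set(url, fetch(url).then((res) => res.json()));
      }
      return registry.get(url);
    }

    // Every component instance on the page can call this; only one
    // request per URL ever reaches the server:
    // getResource('/api/widgets/42').then(render);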

Tuesday, November 1, 2016

YouTube video play across linked devices and RESTful service composition

When I am asked to explain RESTful service composition and what I mean by staged computation and baton passing, I always need to find a concrete example instead of using abstract terms. Watching YouTube videos across devices is a good case for this purpose.

A user can find a video on YouTube on her/his mobile device and then continue playing it on any other TV device linked to YouTube. S/he can also stop, resume, or skip on the mobile device or on the currently playing device. It is also easy to switch between linked active TV devices.

We can think of the video play as a service composition. The goal of the composition is to play the video to the audience. The composition consists of YouTube, the mobile device, and the linked devices. The audience is the consumer of the composition.

Watching a video across devices can be viewed as a RESTful service composition. The computation is the playing of the video, from the moment the user starts watching to the moment s/he stops. It can have several stages, in each of which the video is played on one device.

When the video play is switched from one device to another, the computational baton is passed. The next device continues the computation according to where the play should resume and, perhaps, the audience's other preferences. But the computation has to be adjusted to the current device's resources, such as screen size and bandwidth.
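
As a purely hypothetical illustration of what such a baton might carry (none of these names come from YouTube's actual protocol), consider:

    // The state a receiving device needs in order to continue the computation.
    const baton = {
      videoId: 'abc123',                   // which video to continue
      positionSeconds: 372,                // where the play should resume
      preferences: { captions: true, playbackRate: 1.0 },
    };

    // The receiving device adapts the same baton to its own resources,
    // e.g. picking a resolution that fits its screen and bandwidth.
    function resumeOn(device, baton) {
      device.play(baton.videoId, baton.positionSeconds, {
        ...baton.preferences,
        resolution: device.maxResolution,  // assumed device capability
      });
    }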

So is there a central conductor service? Maybe the user's mobile device works like a conductor, but the playing of the video is in fact carried out by the linked devices and the mobile device itself. A linked device can also stop, resume, or skip at any time.

Thursday, October 13, 2016

From User-Generated Content to User-Defined Applications

In short: I think an interesting thing we can achieve on the Web is user-defined applications. The technologies, the services, and, most importantly, the capabilities of web users will enable it.

You can't connect the dots looking forward; you can only connect them looking backwards.
---Steve Jobs, You've got to find what you love

We drive into the future using only our rearview mirror.
---Marshall McLuhan

We learned how to use email instantly

Ray Tomlinson will be remembered for his invention of email in 1971. @ became one of the best-known special characters, used every day. Although email at first imitated paper mail, it has been eliminating paper mail and, furthermore, has changed the way humans communicate. Still, in 1994, about 20 years after the invention of email, I knew nothing about email and did not know how to send one when I went to a university of about 50 thousand students. As far as I knew, at that time there were only two computers on campus that could be used to send emails. About one year after CERNET became available on campus, every student I met in the computer classrooms had at least two email addresses. Everyone learned how to use email in a night. An interesting story is that Zhang Xiaolong, the creator of Foxmail, a popular email client in China, recently led the development of the WeChat application.

People know how to surf the Web

The first thing I would do when I used my booked computer time (in Chinese universities, it was called "machine time") was to open a browser. There were not many websites in China at that time. I needed to write several popular domain names and interesting links in my paper notebook and take it with me when I went to the computer classrooms. There were no real "personal" computers. Yahoo and other similar websites were big helpers for users like me back then. Later, when Google became known, all the portal websites started their dying process. The new generation of web users in China, like my father, started to surf the web on their smartphones.

We like sharing

httpd was the starting point of the whole Apache Foundation, and in a sense of the open-source movement as well. httpd gave individuals and organizations an online space to share content about themselves. Almost everyone in the academic research community had a personal homepage where they talked about their teaching, research, and personal life. They easily got attention from colleagues and students.

Blogs and wikis soon emerged as tools to encourage sharing, discussion, and collaboration. At the beginning, only experienced computer users knew how to set up blogs or wikis on organization servers or on personal computers connected to research or commercial networks. There was a lot of blog software; the one I tried was written in Perl. Blog services soon pushed the self-hosted blog software aside and became the major players, and most bloggers migrated their blogs from personal or organization servers to those services. Blog readers, like Google Reader, rose with the hype as bloggers generated a large amount of content daily. The Reader was later killed by Google to give way to Google+, Google's answer to the emerging social content and networking market led by Facebook.

We create media

If writing blogs and wikis takes a while, wording a sentence, taking a picture, or recording a video seems easier. The success of YouTube and the iPhone generated an explosion of user-created media, which naturally made sharing digital media the No. 1 feature of the new generation of social applications. The traditional way of content distribution saw the end of its life. Newspapers and magazines pale when every ordinary person can publish and distribute their stories in the media. The reception of that media, whether positive or negative, makes everyone realize how deeply they are connected to the rest of the world.

We created applications on the Web 

An application helps its users perform a task. Hypermedia itself is an application. When creating hypermedia, we have already created applications.

We put links in our pages to guide the readers to explore related concepts and stories. When a page is carefully crafted, the author knows where the readers will land.

We add forms to the pages to accept readers' input. Forms provide a way to start interactions between peers, including the one who initially created the content. Comments and ratings are all forms.

Can we do more

Yes, people can do much more on the web, and in fact we have already started. We can easily define personal websites, including blogs and wikis. We can easily embed content created and hosted by different providers. We can easily stream media from the web across devices. We can easily set up personal shops without worrying about payment and shipping details. Can we go even further?

What is a user-defined application

A user-defined web application is a web application generated by a platform from user-defined specifications. It provides both a basic graphical user interface (GUI) and an application programming interface (API). Ideally, the platform provides an environment for users to compose and test the application specifications. A user defines the following (a hypothetical sketch appears after the list):
  • the structure and types of data that the application will store,
  • the representation of the data for human and other applications, 
  • the generation of new data and corresponding new representations that will be triggered by user or application interaction, and 
  • who or what applications can interact with the application. 
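
A purely hypothetical sketch in JavaScript of what such a specification could look like. The platform, the field names, and the example "book review" application are all invented for illustration:

    // A made-up application specification covering the four aspects above:
    // data, representations, interactions, and access.
    const bookReviewApp = {
      data: {
        review: { title: 'string', rating: 'number', body: 'text' },
      },
      representations: {
        review: ['text/html', 'application/json'],  // for humans and for other applications
      },
      interactions: {
        onCreateReview: 'notify-followers',         // new data triggers new representations
      },
      access: {
        read: 'anyone',
        write: 'registered-users',
      },
    };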

Monday, September 26, 2016

Observations of Netflix's Zuul 2 experience

An engineering team from Netflix just published their experience of rebuilding Zuul, moving it from synchronous blocking I/O to asynchronous non-blocking I/O. Here are some interesting observations from my perspective.

1. Asynchronous programming is hard.

Concurrency might be one of the most challenging problems in programming. In Java, multithreading has long been the norm for concurrency. The Java concurrency package made programmers' lives easier, but it did not change the nature of concurrency. Testing and debugging multithreaded programs is difficult. Reproducing races, lock issues, and suspicious execution sequences is tricky.

If a programmer likes to be the boss in her/his program, then she/he will be disappointed by asynchronous programming. No execution sequence is guaranteed by the sequence in the code. The routines say "don't call me, I'll call you" right after the initial call. The way of testing and debugging is different from that for synchronous code. The biggest challenge is that programmers have to change their mindset from sequential to concurrent, from a single-player game to a multi-player game, and accept the fact that there is no NOW!
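
A tiny JavaScript illustration of the point (the URLs are placeholders): the statement after the two calls runs first, and the two completions can arrive in either order.

    // Nothing about the order of these lines guarantees the order of completion.
    fetch('/api/a').then(() => console.log('a done'));
    fetch('/api/b').then(() => console.log('b done'));
    console.log('both requests started; completion order is unknown');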

2. For CPU-intensive independent jobs, there is no big performance difference between multithreaded synchronous blocking I/O and event-based asynchronous non-blocking I/O.

When discussing performance and scalability, we should first identify the bottleneck. We see only one bottleneck at a time, and a new one emerges when the current one disappears. When the jobs are CPU-intensive, the bottleneck is the CPU, and the throughput is decided by the CPU. Although whether the I/O is blocking or non-blocking affects how long the processes wait, its impact on the throughput is overshadowed by the processing time on the CPU. However, it definitely has an impact on memory. I hope the Netflix team can compare the memory profiles.

In fact, multithreading does not help accelerate CPU-intensive jobs. In theory, multiple processes with non-blocking I/O feeding a job queue will do best, with the number of processes decided by the number of cores. A timeout might be required when a job takes too long to finish.

3. Async is the natural way to work with non-blocking I/O (or async I/O), and it introduces a capacity gain for I/O-intensive jobs.

For I/O-intensive jobs, the time to finish the I/O is significant. If resources are held while waiting for the I/O to finish, they are wasted: they do not contribute to the throughput but result in high utilization. By contrast, async frees resources while the I/O is in progress and recaptures them only when the I/O is done and data is available. The C10K problem is the best resource discussing this at the OS level.

4. Contention and Coherency of a system are decided by both the characteristics of jobs and the system's implementation.

In the Universal Scalability Law (USL), contention is the coefficient of the first-order penalty for concurrency, and coherency is the coefficient of the second-order penalty. Contention represents the part of a job that cannot be parallelized; when there are N requests in a system, each request's contention affects the other (N-1) requests. Coherency represents the part of each job that can generate new contention, which results in the second-order effect. Thrashing typically occurs when the coherency penalty is dominant. One example of coherency occurs when a request that updates a DB record finds its copy of the data is stale. Obviously, the request needs to wait and retry after its copy is refreshed. The refresh time can potentially be affected by all other requests that update the DB. The same can occur with memory shared by multiple threads.
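
For reference, the USL expresses the relative capacity at concurrency level N as

X(N) = N / (1 + α(N - 1) + βN(N - 1)),

where α is the contention coefficient and β is the coherency coefficient. The second-order N(N - 1) term is what eventually makes throughput decrease, i.e., thrash, as N grows.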

In order to achieve better scalability, we need to design the system so that contention and coherency are reduced. Async can do nothing about the contention on the CPU, but it can reduce the contention and coherency around I/O.

Wednesday, June 1, 2016

Refactoring an express.js application without IDE

Refactoring is scary.

Without support from an IDE, it becomes even more difficult. However, git can help. I have refactored an express.js web application several times recently. I summarize the following rules to make the process less scary.

Rule 1. branch before refactoring

If the refactoring is not going well, you can always go back. In most cases, I have to touch the model, the view, and the controller. If you named your files according to RESTful resource names, it is very likely you will need to update the file names.

Rule 2. rename files and then commit before touching the code inside the files

After you add the new files and rm the old ones, git will recognize the changes as renames. Rename detection is based on content similarity, so if you also change the code inside a renamed file in the same commit, the rename can be lost in the git history.
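
A minimal sketch of rules 1 and 2 together; the branch and file names are made up for illustration:

    # Rule 1: branch first, so you can always go back.
    git checkout -b refactor/rename-user-resource

    # Rule 2: rename only, then commit, so git records a clean rename.
    git mv models/member.js models/user.js
    git commit -m "Rename member model to user (no content changes)"

    # Only now start editing the code inside the renamed files.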

Rule 3. refactor the model first, then the views and client-side JavaScript files, and finally the controller

In this sequence, you will have a testable application after each code modification. When the model, views, and controller are all updated and all tests pass, you can merge the refactoring branch back into the original one.