A colleague of mine recently had to implement a Web app that accesses a set of REST services hosted by another Web service. Being a little stale in the current tools - because they change yearly - he had to learn a set of new frameworks. He got up to speed quickly and things went pretty well until he tried to access the REST service directly from the Javascript side (bypassing his own Web service) - at that point he hit the "CORS" wall: the REST service did not set the "Access-Control-Allow-Origin" header.
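To make the failure concrete, here is roughly what he ran into (the origins and endpoint below are made up):

    // Hypothetical sketch: a page served from https://myapp.example
    // calling a REST service on a different origin.
    async function loadCustomers(): Promise<void> {
      // The browser attaches an Origin header to this cross-origin request.
      const response = await fetch("https://api.other.example/customers");
      // Unless the service replies with
      //   Access-Control-Allow-Origin: https://myapp.example   (or *)
      // the browser withholds the response: fetch() rejects with an opaque
      // TypeError, and the CORS explanation appears only in the console.
      console.log(await response.json());
    }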
He worked around that and things went fine until he tried to use a REST method that required some form parameters along with a file attachment. He ended up wading through headers and the "multipart/form-data" versus "application/x-www-form-urlencoded" mess. It took him a week to figure out what the problem actually was and to coax his framework into formatting things the way the REST service expected.
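The distinction that cost him the week looks roughly like this (the endpoint and field names are hypothetical):

    async function submit(): Promise<void> {
      // application/x-www-form-urlencoded: simple key=value pairs;
      // fetch() sets the Content-Type header for this body type itself.
      const params = new URLSearchParams({ name: "Alice", plan: "basic" });
      await fetch("https://api.other.example/signup", { method: "POST", body: params });

      // multipart/form-data: required once a file is attached. The body is
      // split into parts separated by a generated boundary string, and
      // fetch() emits "Content-Type: multipart/form-data; boundary=...".
      // Setting that header by hand (and omitting the boundary) is a classic
      // way to break the request - exactly the kind of trap described above.
      const form = new FormData();
      form.append("name", "Alice");
      form.append("attachment", new Blob(["file contents"]), "notes.txt");
      await fetch("https://api.other.example/signup", { method: "POST", body: form });
    }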
It doesn't have to be this way. Frankly, the foundation of the Web - HTTP - is a horrendous mess. From a computer science and software engineering perspective, it violates core principles of encapsulation, information hiding, and maintainability. HTTP mixes together directives for encoding with directives for control, and it is a forest of special cases and optional features defined in a never-ending sequence of add-on standards. The main challenge in using HTTP is that you cannot easily discover the things you don't know that nevertheless matter for what you are doing. Case in point: my friend did not even know about CORS until his Javascript request failed - and then he had to Google the error responses, which contained references to CORS, then search out what that was, and eventually look at headers (control information). Figuring out exactly what the server wanted was a matter of trial and error - the REST interface does not define a clear spec for which headers are required across the range of possible usage scenarios.
Many of the attacks that are possible on the Web result from the fact that browsers exchange application-level information (HTML) that places control constructs side by side with rendering constructs - it is this mixing that makes Javascript injection possible.
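A minimal sketch of that mixing, with a hypothetical comment-rendering function:

    // Because HTML carries control and data in one syntax, user data can
    // smuggle in control: a comment of
    //   <img src=x onerror="stealCookies()">
    // executes the attacker's script when assigned to innerHTML.
    function renderComment(comment: string): void {
      const container = document.getElementById("comments")!;
      container.innerHTML += comment; // injectable: data becomes control
      // The safe channel keeps data as data:
      // const p = document.createElement("p");
      // p.textContent = comment; // never interpreted as markup
      // container.appendChild(p);
    }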
Yet it could have been like this: imagine that one wants to send a request to a server, asking for data, and that the request could be written in a programming language, such as:

    getCustomerAddress(customerName: string) : array of string

Of course, one would run this through a compiler to generate the code that performs the message formatting and byte-level encoding - application-level programmers should not have to think about those things.
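In today's terms that might look like the sketch below: the interface is what the application programmer writes, and everything else is hypothetical plumbing that a generator could emit (the endpoint and wire format are assumptions, not any existing standard).

    // What the application programmer writes and the compiler checks:
    interface CustomerService {
      getCustomerAddress(customerName: string): Promise<string[]>;
    }

    // What a generator could emit (sketch): the programmer never sees
    // URLs, verbs, headers, or byte-level encodings.
    function bindCustomerService(endpoint: string): CustomerService {
      return {
        async getCustomerAddress(customerName: string): Promise<string[]> {
          const response = await fetch(endpoint, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ method: "getCustomerAddress", args: [customerName] }),
          });
          return (await response.json()) as string[];
        },
      };
    }

    // Application code: one typed call.
    // const svc = bindCustomerService("https://api.example/rpc");
    // const addresses = await svc.getCustomerAddress("Alice");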
Yet today, an application programmer has to get down into the details of how the URL is constructed (the REST "endpoint"), the HTTP headers (of which there are many - all defined in different RFCs!), which HTTP method to use, and the data encodings - along with the many attacks that become possible if one is not very careful about those encodings.
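For contrast with the generated stub above, here is roughly what the same call costs by hand today (the endpoint shape is made up; every detail below is the application programmer's problem):

    async function getCustomerAddressByHand(name: string): Promise<string[]> {
      // 1. Build the "endpoint" URL, percent-encoding the parameter by hand.
      const url =
        "https://api.example/customers/" + encodeURIComponent(name) + "/address";
      // 2. Pick the verb and the headers (which RFC defined Accept, again?).
      const response = await fetch(url, {
        method: "GET",
        headers: { Accept: "application/json" },
      });
      // 3. Hope the status code and the body encoding match expectations.
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return (await response.json()) as string[];
    }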
The result is terrible productivity for Web app development - especially when someone learns a new framework, which is a frequent activity nowadays.
The problem traces back to the origin of the Internet, and the use of RFCs - essentially suggestions for standards. It appears that early RFCs did not give much thought to how the Internet would be used by programmers. From the beginning, all the terrible practices that I describe here were in use. Even the concept of Web pages and hyperlinking - something that came about much later - is terribly conceived: the RFC for URLs talks about "unsafe" characters in URLs. Instead, it should have defined an API function for constructing an encoded URL - making it unnecessary for application programmers to worry about the matter. The behavior of that function could be defined in a separate spec - one that most programmers would never have to read. Information hiding. Encapsulation of function. Separation of control and data. The same is true for HTTP and all of the other myriad specs that the IETF and W3C have pumped out - they all suffer from over-complexity and a failure to separate what tool programmers need to know from what application programmers need to know.
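Such a function takes only a few lines to provide - here is a sketch of the kind of API the spec could have defined, built for illustration on today's URL class (which, decades later, approximates exactly this):

    // Callers pass raw strings; the function owns the "unsafe character"
    // rules, so application programmers never think about percent-encoding.
    function buildURL(
      base: string,
      path: string[],
      query: Record<string, string>,
    ): string {
      const url = new URL(base);
      url.pathname = path.map((segment) => encodeURIComponent(segment)).join("/");
      for (const [key, value] of Object.entries(query)) {
        url.searchParams.set(key, value); // escaping handled internally
      }
      return url.toString();
    }

    // buildURL("https://api.example", ["customers", "A & B Ltd"], { page: "2" })
    // => "https://api.example/customers/A%20%26%20B%20Ltd?page=2"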
Today's younger programmers do not know that it could be better, because they have never seen better. I remember the Object Management Group's attempt to bring order to the task of distributed computing - and how all that progress got swept away by XML-based hacks created to get through firewalls by hiding remote calls in HTTP. Today, more and more layers get heaped on the bad foundation that we have - more headers, more frameworks, more XML-based standards - except that now we have JSON, which is almost as bad. (Why is JSON bad? Because you don't find out that your JSON is wrong until runtime.) We really need a clean break: a typesafe, statically verifiable messaging API standard as an alternative to the HTTP/REST/XML/JSON tangle, and a standard set of API-defined functions built on top of the messaging layer.
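The JSON complaint, in one hypothetical example: the parse succeeds, and the misspelling waits until runtime to explode, whereas a typed message definition would catch it at compile time.

    // JSON.parse returns `any`, so the compiler cannot see that the field
    // name below is misspelled; the mistake surfaces only when this runs.
    const message = JSON.parse('{"customerName": "Alice"}');
    console.log(message.custmerName.toUpperCase()); // compiles fine; crashes at runtime

    // A typed message definition rejects the same mistake statically:
    interface GetAddressRequest {
      customerName: string;
    }
    const request: GetAddressRequest = { customerName: "Alice" };
    // request.custmerName; // compile error: property does not exist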