why WADL when you can RUN?
the recent posting of Google's V3 Python Client Design for "discovery-based APIs", coupled w/ similar twitter convos and REST-Discuss threads, reminds me of something i'd forgotten: the idea that WADL [can|should] be leveraged when building a system based on the REST network architectural style will never die.
there is no question that distributed online applications need some form of linking between states of the application (as seen from each client's point of view). for example, HTML uses a handful of hypermedia controls that allow developers to design response representations that contain all the metadata needed to advance to the next state. these controls include the ability to send request filters and write data to the server. other hypermedia media types (SMIL, VoiceXML, etc.) have similar application controls.
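to make this concrete, here's a minimal sketch (the markup and URIs are hypothetical) of an HTML response that carries its own application controls, and a client that discovers them by parsing the representation alone:

```python
# an HTML response carrying its own application controls: a link to advance
# state, a GET form for filtered requests, a POST form for writing data.
from html.parser import HTMLParser

RESPONSE = """
<div>
  <a href="/orders?page=2">next page</a>
  <form action="/orders" method="get"><input name="status" /></form>
  <form action="/orders" method="post"><input name="item" /></form>
</div>
"""

class ControlFinder(HTMLParser):
    """collect the hypermedia controls embedded in the response."""
    def __init__(self):
        super().__init__()
        self.controls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.controls.append(("link", attrs.get("href")))
        elif tag == "form":
            self.controls.append((attrs.get("method", "get"), attrs.get("action")))

finder = ControlFinder()
finder.feed(RESPONSE)
# the client now knows every available transition -- from the response alone
```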
REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state.
however, some commonly-used media types do not have these hypermedia controls built in. for example, XML and JSON have no native hypermedia application controls defined. that means developers who design response representations using these "un-hypermedia" types have a problem when it comes to telling clients how to change application state.
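for example, here's a hypothetical raw-JSON response; it parses just fine, but the bytes themselves say nothing about the available state transitions:

```python
# a plain "un-hypermedia" JSON response -- field names are hypothetical.
import json

RESPONSE = '{"orders": [{"id": 1, "status": "open"}, {"id": 2, "status": "shipped"}]}'
data = json.loads(RESPONSE)

# the data parses fine, but nothing in it tells the client whether it can
# page through results, filter by status, or create a new order -- that
# knowledge has to come from somewhere outside the message.
transitions = [k for k in data if k in ("links", "link", "href")]
```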
one approach is to provide the hypermedia information in a separate document; a kind of "key to finding the treasure" pattern. this allows developers to emit response representations that contain no application control information; no links that tell clients how to advance the state of the application. instead, clients are implemented to first discover and load an associated "map key" that contains a static list of all the connections, the rules for sending filtered requests, writing data to the server, etc.
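a rough sketch of this pattern (the resource names and map structure are hypothetical, standing in for a parsed WADL description):

```python
# the "key to the treasure" pattern: the client first loads a static,
# out-of-band service description and only then knows what it may do
# with the raw-data responses.
SERVICE_MAP = {  # built once, at design time, from the server's description
    "orders": {
        "read":   {"method": "GET",  "uri": "/orders", "params": ["status"]},
        "create": {"method": "POST", "uri": "/orders"},
    },
}

def allowed_requests(resource):
    """look the resource up in the static map; the response itself is mute."""
    return sorted(SERVICE_MAP.get(resource, {}))

# if the server later adds, removes, or restricts a transition at runtime,
# this map is already stale -- the client breaks, or guesses.
```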
"WADL is designed to provide a machine process-able description of HTTP-based Web applications."
the advantage of this approach is that developers can focus on emitting "raw data" responses and not be bothered with the details of how these data-only representations relate to each other or how they [can|should] be used in the current temporal context ("can i filter this data?", "how can i write data to the server?", etc.). this can speed along the process of implementing a server but, in the end, makes implementing clients more complicated. how complicated? just ask anyone who has attempted to implement a common RDF browser in the last several years.
but the downside is that someone, somewhere still has to deal with these hypermedia issues. using WADL|WSDL|etc. to handle all the details of application control for all possible users in all possible cases for any future scenario is a weighty task; one that is rarely handled by a human, but instead is usually auto-generated from a single-instance, static model of the server-side application. and once that static model is locked in, it's even more complicated to implement a dynamic application that allows for varying response representations based on client preferences, user rights, and general changes in the state of shared data on one or more servers.
even more challenging is the possibility of client-side mashups. in these cases, raw-data responses delivered from multiple servers contain no hints of what can be filtered or written by the consuming client and there is no external "map key" to use as a guide. this means clients are either relegated to read-only status or are left simply to guess at what filtering and writing is possible.
a better way to solve this problem is to employ Runtime Unambiguous Navigation (RUN) within the response itself. basically, use media types that already contain native application controls or, if you don't want to use any existing hypermedia types, define (and document) hypermedia controls for un-hypermedia data formats (XML, JSON, etc.). that is how Atom and AtomPub are designed and i covered some other possibilities in a series of blog posts earlier this year.
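a sketch of what this can look like in JSON, using hypothetical Atom-style link objects (rel/href pairs) embedded right in the response; the exact vocabulary is up to the media-type designer:

```python
# the RUN approach: the same JSON data, but with documented hypermedia
# controls embedded in the representation itself.
import json

RESPONSE = """
{
  "orders": [{"id": 1, "status": "open"}],
  "links": [
    {"rel": "next",   "href": "/orders?page=2"},
    {"rel": "filter", "href": "/orders{?status}"},
    {"rel": "create", "href": "/orders", "method": "POST"}
  ]
}
"""
data = json.loads(RESPONSE)

def find_rel(doc, rel):
    """resolve a transition by its rel name at runtime, not design time."""
    return next((l for l in doc.get("links", []) if l["rel"] == rel), None)

# the client asks the response -- not an external map -- what it can do next
```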
"Hypermedia is defined by the presence of application control information embedded within, or as a layer above, the presentation of information."
by using the RUN approach, any possible temporal variances need not be mapped out in advance and application dynamism can be dealt with at runtime within the response representations themselves instead of via external static maps created at design-time. this means the introduction of additional variables (e.g. changes in clients and/or changes in servers) is less likely to cause breaks in the runtime application, and these changes are more easily supported as the application evolves over time. it also means that any mashup representations will have the application control information available at runtime to provide hints to the consumer.
yes, there's a downside: responses must be designed using hypermedia types, not raw-data. and clients must be implemented as state machines that are prepared to "understand" hypermedia application controls. but, believe me, the work pays off. just ask anyone who uses the most popular state-machine web client today: the common Web browser.
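a toy state-machine client along these lines (the in-memory pages dict stands in for real HTTP responses; all names are hypothetical):

```python
# a minimal state-machine client, in the spirit of a web browser: it starts
# at one entry URI and advances only via the controls found in each response.
PAGES = {
    "/orders?page=1": {"items": [1, 2], "links": [{"rel": "next", "href": "/orders?page=2"}]},
    "/orders?page=2": {"items": [3],    "links": []},  # no "next": terminal state
}

def crawl(start):
    """follow "next" links until the representation offers no more transitions."""
    items, uri = [], start
    while uri is not None:
        page = PAGES[uri]                    # stands in for an HTTP GET
        items.extend(page["items"])
        nxt = next((l for l in page["links"] if l["rel"] == "next"), None)
        uri = nxt["href"] if nxt else None   # state change driven by hypermedia
    return items
```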
i have to ask...
the tools are available; the patterns and practices have been around for years. yet, i still see developers ignoring the established, proven path and making their own lives (and the lives of fellow developers) harder by passing up the power of hypermedia application controls within response representations.
when i see this continue, i have to ask: why WADL when you can RUN?