John Ousterhout's talk on why threads are a bad idea for most programming, and why events are instead a more easily understood paradigm (slides available in PostScript or PowerPoint), shifted my mind out of its parallel programming mode in 1996, in search of the simpler, less error-prone event-oriented paradigm that I've since been doing my thesis work on.
Rereading his slides, I can recall the rational arguments he made against thread synchronization, which burdens the programmer with deadlocks, livelocks, preemptions, race conditions, and subtle causality errors. Furthermore, debugging threads is painful. Threads are a general-purpose mechanism for managing concurrency, but they should only be used when true CPU concurrency is required. With an event loop, by contrast, an independent (short-lived) event handler can be invoked explicitly in a new thread for each incoming event, so the concurrency, though somewhat limited, comes free of any worry about deadlocks and synchronization errors.
The main problem is sharing state across event handlers: then you have to introduce locks, bringing in all the baggage associated with them. But my intuition is that we can unify the API for event models at different levels of the application stack -- from GUIs to OS events to events between components of a document to events traveling across the Internet -- and in so doing garner the same kind of win with regard to adaptability that events in Ousterhout's talk had over threads.
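The handler-per-event arrangement described above can be sketched in a few lines of Python (all names here are illustrative, not from any of the systems discussed); note that the handlers stay independent precisely because they share no mutable state beyond an append-only log:

```python
import queue
import threading

# A minimal event loop: one dispatcher thread pulls events off a queue and
# spawns a short-lived handler thread per event. Handlers touch only their
# own event, so there is nothing to lock.

events = queue.Queue()
handled = []          # list.append is atomic in CPython; fine for a sketch

def handle(event):
    # An independent, short-lived handler: no shared state, no deadlocks.
    handled.append(event)

def dispatch():
    workers = []
    while True:
        event = events.get()
        if event is None:                    # sentinel: shut the loop down
            break
        t = threading.Thread(target=handle, args=(event,))
        t.start()
        workers.append(t)
    for t in workers:                        # wait for the handlers to finish
        t.join()

loop = threading.Thread(target=dispatch)
loop.start()
for e in ["mouseDown", "keyPress", "mouseUp"]:
    events.put(e)
events.put(None)
loop.join()
print(sorted(handled))   # → ['keyPress', 'mouseDown', 'mouseUp']
```

The moment two handlers need to share mutable state, the locks (and their baggage) come back, which is exactly the problem the next paragraph names.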
Threads and events are not mutually exclusive. I'm a strong believer in event-driven programming, but there are times when life is just a lot simpler with two or three threads to play with instead of forcing everything into one thread. I am definitely NOT of the religion that events should always be waited for separately, one thread per event.
Your homework assignment is to understand and explain the various COM threading models -- single threaded, multi-threaded, and mixed. You can start with http://support.microsoft.com/support/kb/articles/Q150/7/77.asp. It all seems overly complicated, but I haven't wanted to investigate too deeply because I've been afraid of finding out that all that complexity was actually justified...
Roy brought Computing Surveys (v17 n4) to read me the gripping lede of Tanenbaum and van Renesse's classic: "Distributed operating systems have many aspects in common with centralized ones, but also differ in certain ways." Also for MsgList: "Another way of expressing the same idea is to say that the user views the system as a 'virtual uniprocessor', not as a collection of distinct machines. This is easier said than done."
The difference between requests and notifications: requests are things you wait for; notifications are the result of subscribing to an event. You can implement both on the same technology, with the same syntax. Events vs. messages? An event is a state change, which may or may not emit messages. But, in my world, if a tree falls without sending a message, it's a nonevent.
The implementation choice to block or not to block is separate again from the expectation of response, which differentiates requests (nonresponse is an error) from subscriptions (nonresponse is a sign nothing happened!). Hmmm. That implies notification requires a strictly more reliable network (a more powerful abstraction) -- which might torpedo the equivalence theory. Failure on a physically connected local network is now different from failure across the public Internet. Or do we need heartbeats to make the two equivalent?
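The request/subscription asymmetry above can be made concrete with a small Python sketch (the channel and the timeout value are my own invention, just to show the two interpretations of silence):

```python
import queue

# A request treats silence (timeout) as an error; a subscription treats
# silence as "nothing happened." Same channel, same syntax, different
# expectation of response.

def request(channel, timeout=1.0):
    try:
        return channel.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError("no response: for a request, silence is an error")

def poll_subscription(channel):
    try:
        return channel.get_nowait()
    except queue.Empty:
        return None    # for a subscription, silence means no event yet

chan = queue.Queue()
print(poll_subscription(chan))   # → None: nothing happened, not a failure
chan.put("unitsOnHandChanged")
print(poll_subscription(chan))   # → unitsOnHandChanged
```

A heartbeat, in these terms, is just a way of converting subscription silence back into something a requester can treat as an error.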
"Events vs messages? An event is a state change, which may or may not emit messages." Sounds like as reasonable a definition as I have ever heard. Notifications are CAUSAL (that is, IF you subscribe to the polling service, THEN you will get event notifications) whereas requests are more CASUAL (that is, IF you want an event, THEN you have to pull it when you feel like doing so).
Document Object Model (DOM) events (inside a document) and Internet-scale event notification are similar problems, in my mind. I've been trying to reconcile whether event passing mechanisms on many different levels actually do work similarly. I still have a lot to read.
My intuition is that if we could unify the API for the event models at several different layers of the application stack -- from the way the GUI sends events to "documents", to how documents send events among the components (and scripts) embedded within those documents, to how the components in documents subscribe to (and notify) the components and scripts in other documents across the Internet -- well, then we'd have something truly powerful.
Not necessarily more efficient mind you, but more powerful to programmers adapting application events to new contexts: sending the "unitsOnHandChanged" event to a local HTML ticker, to a spreadsheet in another process on the same machine, or to a database store across the Internet. My hunch is that by unifying the notification models for different event sizes and performance requirements, we can simplify the programming model by abstracting that there are just components and a bus, and the components themselves can be local or distributed. Then, as many middleware researchers do, we can focus on intelligently supporting the kinds of performance constraints and distribution topologies necessary below this service level.
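A minimal sketch of the "components and a bus" abstraction, under the assumption that a subscriber is just a callable (a remote component would simply be a callable that forwards the event over the network); all names are illustrative:

```python
from collections import defaultdict

# The bus doesn't know or care whether a component is local or distributed:
# it just delivers named events to whoever subscribed.

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_name, component):
        self.subscribers[event_name].append(component)

    def publish(self, event_name, payload):
        for component in self.subscribers[event_name]:
            component(event_name, payload)

bus = EventBus()
ticker_log = []
# A local HTML ticker, a remote spreadsheet, and a database store would all
# subscribe the same way; here we stand in one local subscriber.
bus.subscribe("unitsOnHandChanged", lambda name, p: ticker_log.append(p))
bus.publish("unitsOnHandChanged", {"units": 42})
print(ticker_log)   # → [{'units': 42}]
```

The performance constraints and distribution topologies then live in how `publish` routes to each subscriber, below this service level.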
To be sure, this vision isn't a pachinko machine for distributing opaque balls-o-bits from event sources to sinks: it relies on a more principled format for describing data. If events were unified in an XML wrapper, then perhaps filtering, aggregation, and type conversion make sense as tree automata transformations. The way a "mouseDown" event percolates up the stack to become a "sellStock" transaction event in the database is akin to ascending a Jacob's Ladder of transformations, step by step.
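One way to picture that Jacob's Ladder: each rung is a transformation over an XML-wrapped event. The element names and the rung functions below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Each rung maps a lower-level XML-wrapped event to a higher-level one,
# step by step: raw device event -> widget event -> domain event.

def wrap(name, **attrs):
    return ET.Element("event", {"name": name, **attrs})

def gui_to_widget(e):
    # Rung 1: a raw mouse event on the sell button becomes a symbolic click.
    if e.get("name") == "mouseDown" and e.get("target") == "sellButton":
        return wrap("sellClicked", symbol=e.get("symbol"))
    return e

def widget_to_domain(e):
    # Rung 2: the symbolic click becomes an application-level transaction.
    if e.get("name") == "sellClicked":
        return wrap("sellStock", symbol=e.get("symbol"))
    return e

event = wrap("mouseDown", target="sellButton", symbol="MSFT")
for rung in (gui_to_widget, widget_to_domain):
    event = rung(event)
print(event.get("name"), event.get("symbol"))   # → sellStock MSFT
```

Because every rung consumes and produces the same wrapper format, filtering and aggregation can be inserted between rungs without either side knowing.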
The Document Object Model's event model could be designed in such a way that it matters not where the "components" of a document really are: they could be linked to elsewhere from within a document rather than be explicitly embedded within the document (XML allows this), and the event model would still work properly even though events must physically travel over the Internet. Components anywhere could send events to components anywhere without tunneling or other hacky tricks.
True, the DOM working group has a much more limited horizon: they want to make Web documents (*ML) interoperate with different scripting languages. So they're less interested in generic event models than in *specific event sets*: will it be mouseDown or buttonDown? Will we send pre-peering events (raw) or symbolic events (checkboxTicked)? What's the right API to an event-driven parser? It may very well be the case that some variant of SAX will be perfectly sufficient for the DOM.
But I foresee a (potential) future where scripts, style sheets, parts of documents, and processors are completely separated across the public Internet. In this latency-is-no-longer-a-problem future, the browser-centered event model will need to be ported to incorporate decentralized notification protocols.
I would like my thesis to be about the unification of event models, from low-latency components talking among themselves locally on a client, through potentially high-latency components communicating with each other across the Internet. Beyond that, I suppose, anything is game still.
My intuition is that if we could unify the API for the event models at several different layers of the application stack -- from the way the GUI sends events to "documents", to how documents send events among the components (and scripts) embedded within those documents, to how the components in documents subscribe to (and notify) the components and scripts in other documents across the Internet -- well, then we'd have something truly powerful.
Er, well, you'd either have a very big API or a very generic API. I'm a big believer in generic APIs for Internet-scale messaging. HTTP, for example, is just a very generic API, but it is a network API rather than a programming API. The Web uses that genericity in order to transfer state representations using a consistent set of semantics (the Web's messaging paradigm).
That is also why the Web is not a distributed object system, and why a well-designed distributed object system doesn't have the same performance characteristics as a well-designed Web application. [It is possible to implement a distributed object system using HTTP, but that would be a poor design: it would suffer from both poor Internet-scale performance characteristics and a less optimal transfer syntax for fine granularity messages.]
Not necessarily more efficient mind you, but more powerful to programmers adapting application events to new contexts: sending the "unitsOnHandChanged" event to a local HTML ticker, to a spreadsheet in another process on the same machine, or to a database store across the Internet. My hunch is that by unifying the notification models for different event sizes and performance requirements, we can simplify the programming model by abstracting that there are just components and a bus, and the components themselves can be local or distributed. Then, as many middleware researchers do, we can focus on intelligently supporting the kinds of performance constraints and distribution topologies necessary below this service level.
Ergo, CORBA. The problem with that theory is that messaging is one component of an overall system. A system will fail if the wrong messaging paradigm is used, just as a system relying on "unitsOnHandChanged" events being a representation of time will fail across the Internet. The messaging paradigm determines things like granularity, round-trips per action, fault tolerance, client-side vs server-side state, caching vs replication, etc.
I have yet to see an API-based middleware that allowed the programmer to select the appropriate messaging paradigm according to the system needs. If you can do that, color me interested.
To be sure, this vision isn't a pachinko machine for distributing opaque balls-o-bits from event sources to sinks: it relies on a more principled format for describing data. If events were unified in an XML wrapper, then perhaps filtering, aggregation, and type conversion make sense as tree automata transformations. The way a "mouseDown" event percolates up the stack to become a "sellStock" transaction event in the database is akin to ascending a Jacob's Ladder of transformations, step by step.
I'd worry about how well such a system would scale.
But I foresee a (potential) future where scripts, style sheets, parts of documents, and processors are completely separated across the public Internet. In this latency-is-no-longer-a-problem future, the browser-centered event model will need to be ported to incorporate decentralized notification protocols.
I would like my thesis to be about the unification of event models, from low-latency components talking among themselves locally on a client, through potentially high-latency components communicating with each other across the Internet. Beyond that, I suppose, anything is game still.
You lost me here. How is this a "latency-is-no-longer-a-problem future"? It sounds more like a latency-is-THE-problem future. Systems without an understanding of latency tend to do things like multiple round-trips for simple actions and strong typing of interfaces: things that cause the system to keel over and die when faced with an uncontrolled or high-latency network. A browser designed without an understanding of user-perceived performance is an unusable browser.
I think it would be possible to create a generic event interface, such that events can be registered, filtered, and received through a single API. I'd do it with a typed link abstraction using URIs for the type, filter, and notification method/recipient, and then a media type for the notification message syntax (I'm still an XML heretic). That would allow the messaging paradigm to be selected by the registrant according to the expected latency of each action and the context in which it is made.
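A rough sketch of what such a registration record might look like as data; the URIs, field names, and `register` function are my assumptions for illustration, not anything defined by RVP, SIP, or GENA:

```python
from dataclasses import dataclass

# A subscription as a typed link: URIs name the event type, the filter,
# and the notification recipient; a media type names the message syntax.

@dataclass
class Registration:
    type_uri: str        # what kind of event
    filter_uri: str      # which events to deliver
    notify_uri: str      # where/how to deliver them
    media_type: str      # syntax of the notification message

registry = []

def register(type_uri, filter_uri, notify_uri, media_type):
    reg = Registration(type_uri, filter_uri, notify_uri, media_type)
    registry.append(reg)
    return reg

reg = register(
    type_uri="http://example.org/events/unitsOnHandChanged",
    filter_uri="http://example.org/filters/warehouse-7",
    notify_uri="mailto:inventory@example.org",
    media_type="text/xml",
)
print(reg.media_type)   # → text/xml
```

Because every field is a URI or a media type rather than a typed code interface, the registrant can pick a delivery scheme (mailto:, http:, or something else) to match the expected latency of each action.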
Sounds like an API that could be built on RVP -- with the addition that RVP does use XML for the more complex stuff, such as ACLs and detailed kinds of notifications on an object. MIME is used to specify the notification message format. I'm a co-author on the RVP spec.
Not that I'm necessarily saying that RVP is a generic notifications tool. While we did try to keep it generic, we also had a very specific application in mind: people subscribing to people status.
I agree with you, Roy, that what we really want is a single generic API for subscribing to notifiers and receiving events (with hooks for aggregation, filtering, forwarding, ...) in which the user can customize message characteristics such as message size and performance.
You lost me here. How is this a "latency-is-no-longer-a-problem future"?
Poor word choice on my part. I foresee a future where latency is no longer a problem, not because latency issues go away, but because we know how to deal with latency differences, with the aggregation of requests and notifications, with filtering, and so on.
I think the generic event API could be built atop RVP, as Lisa suggested, or atop SIP, which also tried to stay generic but likewise was designed with a specific rendezvous application in mind: inviting people to, establishing, and controlling multimedia sessions or calls. Eve's here right now and she said "event notifications are like ACL's++", which I thought was cool.
Alternatively, the single generic event API could be built atop HTTP, which (I think) is the goal of GENA (Generic Event Notification Architecture). Any of these alternatives seem better than just writing up a new one from scratch.
I have yet to see an API-based middleware that allowed the programmer to select the appropriate messaging paradigm according to the system needs. If you can do that, color me interested.
This doesn't add anything to the conversation, but two of my odds-on favorites are BEA's Tuxedo and NCSA's Habanero.
Tuxedo is a high performance distributed transaction support system that allows you to define abstractions (which they call domains) for determining how and when messages are sent and what type of priority you can assign to each. French, and very corporate.
Habanero is a framework of collaborative tools whose main contribution is to take care of some of the messaging and fine-grained notification services for people collaborating on an activity. You can compose their 'services' into larger applications. They also have a good page on the lore of red hot chili peppers. Grad-schoolish, and very research.
First off, see An Extensible Java Distributed Component Framework by Ron Resnick and Mark Baker.
[Adam wrote] from what I recall reading your paper a few months ago -- correct me if I'm wrong -- you weren't calling for, as Roy says, generic API-based middleware that allows the user to tweak the messaging paradigm based on his own system conditions and personal preferences.
I think that's what we were calling for, but I'm not really sure I completely understand Roy's point. What's the difference between a "network API" and a "programming API"?
We *were* calling for a unification of all these siloed event/message/RPC mechanisms, just as Adam said: "[...] if we could unify the API for the event models at several different layers of the application stack [...]".
That's been our complaint with some of the funkier middleware we've seen. iBus, for all its really fabulous composable protocol stack stuff, didn't do anything to attempt to bridge its comms model with the Beans event model. That's half the work.
And the stuff that does do that, like T-Bone (built on Voyager), doesn't bother with the composable protocol stacks.
Bad terminology I guess. Instead of "network API", let's call it a "network-based API", and we can call the other a "library-based API" for lack of a better word.
A library-based API provides a set of code entry points and associated symbol/parameter sets so that a programmer can use someone else's code to do the dirty work of maintaining the actual interface between like systems, provided that the programmer obeys the architectural and language restrictions that come with that code. The assumption is that all sides of the communication use the same API, and therefore the internals of the interface are only important to the API developer and not the application developer.
A network-based API is an interface that exists on the network itself: a gateway through which different applications can communicate by restricting themselves to a set of well-known semantics (syntax too, but that's the easy part).
A library-based API does a lot more for the programmer, but in doing so creates a great deal more complexity and baggage than is needed by any one system, is less portable in a heterogeneous network, and always results in genericity being preferred over performance. As a side-effect, it also leads to lazy development (blaming the API code for everything) and failure to account for non-cooperative behavior by other parties in the communication.
A network-based API does not place any restrictions on the application code aside from the need to read/write to the network, but does place much larger restrictions on the set of semantics that can be effectively communicated across the interface. On the plus side, performance is only bounded by the protocol design and not by any particular implementation of that design.
Mind you, there are various layers involved here -- systems like the Web use one library API (sockets) in order to access several network-based APIs (e.g., HTTP and FTP), but the socket API itself is below the application-layer. Likewise, libwww is an interesting cross-breed in that it is a library-based API for accessing a network-based API, and thus provides reusable code without the assumption that the other communicating applications are using libwww as well. Contrast this with CORBA, which only allows communication via an ORB, thereby leading IIOP to assume too much about what the parties are communicating.
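The library-based/network-based distinction in miniature, in Python: the socket calls below are a library-based API (code entry points), while the HTTP request line written through them is a network-based API (well-known semantics on the wire, shared by no code at all). The one-shot server is purely illustrative:

```python
import socket
import threading

# Any client in any language can talk to this server, because the
# interface lives on the network, not in shared library code.

def serve_once(server):
    conn, _ = server.accept()
    request = conn.recv(1024).decode()
    if request.startswith("GET "):
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    else:
        conn.sendall(b"HTTP/1.0 405 Method Not Allowed\r\n\r\n")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0 = pick any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# The client restricts itself to the network-based API's semantics: a
# request line, headers, a blank line. No shared code with the server.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply.split("\r\n")[0])   # → HTTP/1.0 200 OK
```

Note that the socket module here plays the below-application-layer role described above: it is a library API used to reach a network-based API, not the interface between the two applications.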
On a related note, does anyone know where I can get a readable (Word or PS) copy of the "real DCOM spec, as implemented". I've got draft-brown-dcom-v1-spec-03.txt, but the text conversion sucks. Likewise, the ridiculous font override on http://www.microsoft.com/oledev/olecom/title.htm makes me snowblind.
Adam Rifkin, http://www.ifindkarma.com/attic/
PhD-Related Documents, Caltech Infospheres Project
Last modified: Fri May 8 06:17:33 PDT 1998