HTTP and concurrency

2009-10-05 @ 04:51

concurrency-checking is essential when handling writes and deletes in any multi-user application. anyone who implements a multi-user application w/o concurrency-checking is taking extraordinary chances. when a customer finds out that concurrency-checking is missing, there will be big trouble - especially if the way the customer discovers the problem is that important data gets clobbered by a concurrency conflict between users.

pessimistic concurrency

pessimistic concurrency locks resources and holds them against any updates except those from the party that requested the lock. there are write-locks and read-locks; for this discussion, write-locks are the interesting part. they prevent writes from any party other than the one that holds the write-lock.
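a minimal sketch of the write-lock idea, using an in-process lock in python (the single shared record and function names here are just illustrations - a real system would lock rows in a database or files on disk):

```python
import threading

# a single shared record guarded by a write-lock
record = {"value": "initial"}
write_lock = threading.Lock()

def update_record(new_value, timeout=1.0):
    # only the party holding the write-lock may update;
    # every other party blocks (or gives up) until it is released
    acquired = write_lock.acquire(timeout=timeout)
    if not acquired:
        return False  # another party currently holds the write-lock
    try:
        record["value"] = new_value
        return True
    finally:
        write_lock.release()
```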

pessimistic concurrency works well for local, homogeneous networks. networks where all the parties share the same data space. where the distance between parties is small and the overhead to manage locking details is minimal.

optimistic concurrency

optimistic concurrency uses a check-value to compare the record that was read against the record that is currently stored. if the values match, the write is allowed. if they don't match, the write is rejected. check-value mismatches occur when the party attempting the write was working from an old version of the data - in other words, some other party wrote an update in the meantime.
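a minimal in-memory sketch of the check-value idea in python - the record layout and the version counter here are illustrative stand-ins for whatever check-value a real store uses:

```python
# each record carries a version number that acts as the check-value
store = {"record-1": {"data": "initial", "version": 1}}

def write(record_id, new_data, expected_version):
    record = store[record_id]
    # the write is allowed only if the caller read the current version
    if record["version"] != expected_version:
        return False  # someone else wrote an update in the meantime
    record["data"] = new_data
    record["version"] += 1
    return True
```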

optimistic concurrency works well for widely-distributed, heterogeneous networks. networks where the parties involved are not always the same (different code, different purposes) and where the data may be far from the party attempting to write. managing writes using optimistic concurrency has very little overhead.

HTTP and concurrency

HTTP is an application protocol for widely-distributed networks. using pessimistic concurrency patterns over HTTP introduces needless overhead. a common example is the attempt to implement classic two-phase commit (2PC) over HTTP. while possible, 2PC over HTTP adds complexity and overhead, and exposes the parties to additional failure modes in every HTTP conversation.
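by contrast, HTTP already supports optimistic concurrency directly through entity tags: read the resource, note the ETag, and send it back in an If-Match header on the write. a sketch using the python requests library (the URL and payload are hypothetical):

```python
import requests

url = "http://example.org/customers/42"  # hypothetical resource

# read the current representation and note its check-value (ETag)
response = requests.get(url)
etag = response.headers["ETag"]

# attempt the write, conditional on the ETag still matching
update = requests.put(
    url,
    json={"name": "updated name"},
    headers={"If-Match": etag},
)

if update.status_code == 412:
    # 412 Precondition Failed: someone else updated the resource first;
    # re-read, reconcile, and try again
    print("concurrency conflict - write rejected")
```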

are you handling concurrency properly?

the important point is this: are you handling concurrency properly in your HTTP applications? are you using optimistic concurrency to protect all write operations? are you attempting to execute actions over HTTP that depend on pessimistic concurrency patterns?

are you providing your customers with the proper level of protection for their data?

if not, you've got some work to do - ASAP.
