Tuesday, February 24, 2009

Unit Testing with Dates and Times

When running unit tests or regression tests, you should know exactly what all of your inputs are and be able to control them. If you have methods that use the current date or time, you need to do a bit of extra work to do that.



If you have a method that calls System.currentTimeMillis() or new java.util.Date(), that method will be receiving a different value when you run it on a different day or time. This contradicts the need to have exact control over all inputs for unit and regression testing. You need to refactor your method so that you can control the date or time value used by your method and thus be able to have repeatable and controllable tests.

To start, create a utility class with a method corresponding to each of the time-variant calls: System.currentTimeMillis(), new Date(), and any other call whose value is a function of the current date and time.
import java.util.Date;

public class TimeVariant {
    public static long currentTimeMillis() {
        return System.currentTimeMillis();
    }

    public static Date newDate() {
        return new Date();
    }
}
Modify your code to replace all calls to new Date() by calls to TimeVariant.newDate(), replace all calls to System.currentTimeMillis() by calls to TimeVariant.currentTimeMillis(), etc.
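For example, a method that checks whether a record has expired would change its direct call to System.currentTimeMillis() into a call through the utility class. A minimal sketch (the ExpiryChecker class is illustrative, not from the post; TimeVariant is repeated so the example stands alone):

```java
import java.util.Date;

//Utility class from above, repeated here so the example is self-contained.
class TimeVariant {
    public static long currentTimeMillis() { return System.currentTimeMillis(); }
    public static Date newDate() { return new Date(); }
}

class ExpiryChecker {
    private static final long MS_PER_DAY = 24L * 60 * 60 * 1000;

    //Before: return System.currentTimeMillis() - createdAt > days * MS_PER_DAY;
    //After: the time-variant call goes through TimeVariant instead.
    public static boolean isExpired(long createdAt, int days) {
        return TimeVariant.currentTimeMillis() - createdAt > days * MS_PER_DAY;
    }
}
```

Behavior is unchanged; the only difference is that the time source is now centralized.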

If your application uses a database and takes advantage of any automatic timestamping by the database server, you will have to figure out a way to deal with those values. I'm not going to try to cover unit testing with a database; that is a big enough topic to warrant a separate post.

Upon completing this step, you have not changed any of the functionality of your code, but you have centralized all of the time-variant calls into a single location. Now we can modify the TimeVariant class to make it do what we want.

Test Hook

The general goal is to have methods in TimeVariant that behave as defined above during normal operation, but for which we can control the return value during testing. In order to keep the testing code out of the main code, we want to make minimal changes to TimeVariant and put as much of the support code as possible into another class that we use only in our testing environment.

We start by defining an interface that duplicates the functionality of the TimeVariant class:
import java.util.Date;

public interface ITimeVariant {
    public long currentTimeMillis();
    public Date newDate();
}
We will be writing a test class TestTimeVariant that implements this interface. If you are already using an injection framework such as Spring, you can modify our original TimeVariant class so that it implements the ITimeVariant interface with non-static methods, make your code use the instance of ITimeVariant supplied by your injection framework, and configure the framework to use an instance of TimeVariant in normal use and TestTimeVariant during testing.

If you are not using an injection framework, one way to allow using the methods in TestTimeVariant during testing is to modify TimeVariant to allow turning it into a proxy for an instance of ITimeVariant. The changes from the previous version of this class are the tvDelegate field, the setDelegate method, and the delegate check in each time method.
import java.util.Date;

public class TimeVariant {
    private static ITimeVariant tvDelegate = null;

    /** This method is for unit testing setup. */
    protected static void setDelegate(ITimeVariant tv) {
        tvDelegate = tv;
    }

    public static long currentTimeMillis() {
        if (tvDelegate != null)
            return tvDelegate.currentTimeMillis();
        else
            return System.currentTimeMillis();
    }

    public static Date newDate() {
        if (tvDelegate != null)
            return tvDelegate.newDate();
        else
            return new Date();
    }
}
From the setup method for your unit test you set the hook:
ITimeVariant tv = new TestTimeVariant();
TimeVariant.setDelegate(tv);
Once the hook is in place to invoke our test version of the time variant calls, we are done with the changes to the production code and we can move on to the implementation of the test class.

This post uses Java for all of the code examples, but the same approach can be used in other languages such as Ruby or C#, whose language features can make setting up the hook and delegate more elegant than in Java.

Controlled Values

Recall that the primary purpose of the test class is to ensure that we return a known and controlled value for each call to one of our methods. In the simplest case, we can just return a constant value, in which case our TestTimeVariant class might look like this:
import java.util.Date;

public class TestTimeVariant implements ITimeVariant {
    private long simulatedTime = 1234567890L * 1000;  //a recent timestamp

    public long currentTimeMillis() {
        return simulatedTime;
    }

    public Date newDate() {
        return new Date(simulatedTime);
    }
}
The above code satisfies our desire to have a controlled value, and it may be perfectly satisfactory for testing simple methods that call newDate() and use only the date portion or that call currentTimeMillis() only once, but it can be a bit unrealistic when our methods are called more than once.

Changing Values

We can modify the methods in our TestTimeVariant class to return different values on each call.

To start, we add a method that allows us to set the simulated current time. This allows us to set the date and time to specific values in order to test corner cases on our code, such as February 29 or December 31 on a leap year, or whatever other conditions are considered special by the method under test.
import java.util.Date;

public class TestTimeVariant implements ITimeVariant {
    private long simulatedTime = 1234567890L * 1000;  //a recent timestamp

    public void setNow(long t) {
        simulatedTime = t;
    }

    public long currentTimeMillis() {
        return simulatedTime;
    }

    public Date newDate() {
        return new Date(simulatedTime);
    }
}
With the setNow test method, we can create a unit test that alternately calls production code and changes the time. This is suitable when testing methods that contain a single call to one of our time methods, but if a method makes more than one call before returning, it will get the same return value every time.
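Such a test might look like the following sketch, with the supporting classes from above repeated so it stands alone (the StopWatch class is illustrative, not from the post):

```java
import java.util.Date;

interface ITimeVariant {
    long currentTimeMillis();
    Date newDate();
}

//Production hook class, as in the Test Hook section.
class TimeVariant {
    private static ITimeVariant tvDelegate = null;
    static void setDelegate(ITimeVariant tv) { tvDelegate = tv; }
    public static long currentTimeMillis() {
        return (tvDelegate != null) ? tvDelegate.currentTimeMillis()
                                    : System.currentTimeMillis();
    }
    public static Date newDate() {
        return (tvDelegate != null) ? tvDelegate.newDate() : new Date();
    }
}

//Test class with the setNow method.
class TestTimeVariant implements ITimeVariant {
    private long simulatedTime = 1234567890L * 1000;
    public void setNow(long t) { simulatedTime = t; }
    public long currentTimeMillis() { return simulatedTime; }
    public Date newDate() { return new Date(simulatedTime); }
}

//Illustrative production code under test: measures elapsed time.
class StopWatch {
    private long start;
    public void start() { start = TimeVariant.currentTimeMillis(); }
    public long elapsed() { return TimeVariant.currentTimeMillis() - start; }
}
```

Remember to clear the delegate when the test is done so that other tests see real time again.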

To return different values when there are multiple calls between which we are not able to call setNow, we modify our test code to add a fixed delta to the returned value on each call.
import java.util.Date;

public class TestTimeVariant implements ITimeVariant {
    private long simulatedTime = 1234567890L * 1000;  //a recent timestamp
    private long deltaTime = 10;

    public void setNow(long t) {
        simulatedTime = t;
    }

    private void bumpTime() {
        simulatedTime += deltaTime;
    }

    public long currentTimeMillis() {
        long t = simulatedTime;
        bumpTime();
        return t;
    }

    public Date newDate() {
        return new Date(currentTimeMillis());
    }
}
If we want more precise control, we can add the ability to preset a list of values to be returned, or equivalently a list of delta values to be added on each call:
import java.util.Date;

public class TestTimeVariant implements ITimeVariant {
    private long simulatedTime = 1234567890L * 1000;  //a recent timestamp
    private int simCount = 0;
    private long[] deltaTimes = { 10 };  //default delta

    public void setNow(long t) {
        simulatedTime = t;
    }

    public void setDeltaTimes(long[] deltas) {
        if (deltas == null || deltas.length == 0)
            throw new IllegalArgumentException("Must have at least one delta");
        deltaTimes = deltas;  //Could make a copy if you are concerned about caller changing it
        simCount = 0;
    }

    private void bumpTime() {
        simulatedTime += deltaTimes[simCount];
        if (simCount < deltaTimes.length - 1)
            simCount++;
    }

    public long currentTimeMillis() {
        long t = simulatedTime;
        bumpTime();
        return t;
    }

    public Date newDate() {
        return new Date(currentTimeMillis());
    }
}
We have defined our bumpTime method so that if we call one of our time methods more times than we have deltas, we just reuse the final delta for any additional calls. Depending on your desires, you could use other approaches, such as cycling back to the beginning of the list of deltas or using an algorithmic approach. For example, you could choose to use a pseudo-random number within some range as your delta, and as long as you seed your random number generator with the same seed each time you will get the same sequence of time values.
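A bumpTime based on a seeded java.util.Random sketches the pseudo-random approach; the same seed always yields the same sequence of time values (the class name, seed, and 1-100 ms range here are arbitrary illustrative choices):

```java
import java.util.Random;

class RandomDeltaClock {
    private long simulatedTime;
    private final Random rand;

    //Same seed => same sequence of deltas => repeatable tests.
    public RandomDeltaClock(long start, long seed) {
        simulatedTime = start;
        rand = new Random(seed);
    }

    public long currentTimeMillis() {
        long t = simulatedTime;
        simulatedTime += 1 + rand.nextInt(100);  //delta between 1 and 100 ms
        return t;
    }
}
```

Two instances constructed with the same seed return identical sequences, which is all that repeatability requires.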

Capturing Values

A more realistic series of time values is likely to be one captured from an actual run of your method. We can instrument our test class so that we can capture all of the returned time values during a test run, then play them back for our controlled testing. This adds a significant amount of additional code to our test class.

You may not need this level of functionality for your tests. In fact, you may need nothing more than returning a single fixed value, as shown in the Controlled Values section. There is no need to implement more functionality in the test code than you need for your situation, so you should think about why you need changing values in the first place.

Note that none of this code is useful for any kind of timing test, since it does precisely the opposite: no matter how much real time elapses between calls, the same values are returned from the methods in TestTimeVariant. This is exactly the right behavior for repeatability, and exactly the wrong behavior for ensuring that the timing (and thus the performance) of your application has not changed. Running timing and performance tests on your application is a separate concern from running the type of functional unit and regression tests we are concerned with here.
import java.io.FileReader;
import java.io.LineNumberReader;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Date;

public class TestTimeVariant implements ITimeVariant {
    private boolean capturing = false;
    private ArrayList<Long> captureTimes = null;
    private long simulatedTime = 1234567890L * 1000;  //a recent timestamp
    private int simCount = 0;
    private long[] deltaTimes = { 10 };  //default delta

    public void setNow(long t) {
        simulatedTime = t;
    }

    public void setDeltaTimes(long[] deltas) {
        if (deltas == null || deltas.length == 0)
            throw new IllegalArgumentException("Must have at least one delta");
        deltaTimes = deltas;  //Could make a copy if you are concerned about caller changing it
        simCount = 0;
    }

    //Starts capture mode. Call saveCapture when done.
    public void startCapture() {
        capturing = true;
        captureTimes = new ArrayList<Long>();
    }

    //Save the captured timestamps to a file for later loading by loadCapture.
    //The first line is the initial timestamp, the rest are deltas.
    public void saveCapture(String fileName) throws Exception {
        //You can use a more precise exception list if you want
        PrintWriter pw = new PrintWriter(fileName);
        long lastTime = 0;
        for (Long t : captureTimes) {
            long delta = t - lastTime;
            pw.println(delta);
            lastTime = t;
        }
        pw.close();
    }

    //Load a file created by saveCapture.
    //Sets simulatedTime and deltaTimes array.
    public void loadCapture(String fileName) throws Exception {
        //You can use a more precise exception list if you want
        LineNumberReader lr = new LineNumberReader(new FileReader(fileName));
        String line = null;
        boolean haveInitial = false;
        ArrayList<Long> dTimes = new ArrayList<Long>();
        while ((line = lr.readLine()) != null) {
            long d = Long.valueOf(line);
            if (haveInitial)
                dTimes.add(d);
            else {
                simulatedTime = d;
                haveInitial = true;
            }
        }
        lr.close();
        //Transfer into long[] to call setDeltaTimes
        long[] da = new long[dTimes.size()];
        int n = 0;
        for (long t : dTimes)
            da[n++] = t;
        setDeltaTimes(da);
    }

    private void bumpTime() {
        simulatedTime += deltaTimes[simCount];
        if (simCount < deltaTimes.length - 1)
            simCount++;
    }

    public long currentTimeMillis() {
        if (capturing) {
            long now = System.currentTimeMillis();
            captureTimes.add(now);  //autoboxed
            return now;
        } else {
            long t = simulatedTime;
            bumpTime();
            return t;
        }
    }

    public Date newDate() {
        return new Date(currentTimeMillis());
    }
}


Here is a little Main program that shows how the above classes work.
public class Main {
    public static void main(String[] args) throws Exception {
        TestTimeVariant tv = new TestTimeVariant();
        if (args.length == 0)
            appMethod();  //run without any test code
        else if (args[0].equals("-test")) {
            TimeVariant.setDelegate(tv);
            long[] zeroDelta = { 0 };
            tv.setDeltaTimes(zeroDelta);  //simulate fixed-time case
            appMethod();
        } else if (args[0].equals("-delta")) {
            TimeVariant.setDelegate(tv);
            //without setting any deltas, default is 10ms
            appMethod();
        } else if (args[0].equals("-save")) {
            TimeVariant.setDelegate(tv);
            tv.startCapture();
            appMethod();
            tv.saveCapture(args[1]);
        } else if (args[0].equals("-load")) {
            TimeVariant.setDelegate(tv);
            tv.loadCapture(args[1]);
            appMethod();
        } else {
            System.out.println("unknown argument " + args[0]);  //unrecognized
        }
    }

    private static void sleep(int ms) {
        try {
            Thread.sleep(ms);
        } catch (Exception ex) {}
    }

    //This is the application method under test
    public static void appMethod() {
        long startTime = TimeVariant.currentTimeMillis();
        sleep(100);
        long midTime = TimeVariant.currentTimeMillis();
        int n = 2;
        for (int i = 0; i < 1000000; i++) {
            n = n * n + i;  //lots of overflows
        }
        long endTime = TimeVariant.currentTimeMillis();
        System.out.println("App: start at " + startTime);
        long delta1Time = (midTime - startTime);
        System.out.println("App: delta1 is " + delta1Time);
        long delta2Time = (endTime - midTime);
        System.out.println("App: delta2 is " + delta2Time);
    }
}
To run this demo, copy out the definitions for ITimeVariant and TimeVariant from the Test Hook section above, the definition for TestTimeVariant from the Capturing Values section above, and Main from this section, each into an appropriately named file in a test directory. Compile all files, then run as follows to see the different cases:
  1. Run java Main to run the application as it normally runs, without any test code. The application prints out some timing values.
  2. Run java Main -test to wire in the test harness with no delta times, which behaves the same as the first example in the Controlled Values section above.
  3. Run java Main -delta to wire in the test harness with a fixed delta value, as you would have with the first example in the Changing Values section above.
  4. Run java Main -save filename to wire in the test harness to use real times and save the results to the specified file. The file is a text file, so you can view it to confirm the values it contains.
  5. Run java Main -load filename to wire in the test harness and load the values from the specified file.

Sunday, February 15, 2009

Concurrent Editing Without Locking

If you write an application that allows a user to edit data, you should consider what happens when multiple users attempt to edit the same data at the same time. If you don't, you are likely to end up with some unhappy users when they unexpectedly lose some of their data.



The approach described here can be applied to other situations, but here we focus on one particular kind of common application: a form-based web app that allows the user to edit a record in a database. In this kind of application, the user typically opens a record for editing, which downloads the data from an app server to his web browser (the client application), which displays it in an editable form. The user edits the data in the form as desired, then presses the Submit button. The web browser sends the form data back to the app server, which saves it to the database.

Now consider this problematic sequence of events, where two users concurrently try to edit the same record:
  1. User A opens record X for editing.
  2. User B opens record X for editing.
    At this point both users see the same data on their screens.
  3. User A submits his changes for record X.
  4. User B submits his changes for record X.
    How do these two submit operations interact?

The Other Ways

There are a number of approaches typically taken to handle concurrent editing.
  • Last in wins: Each user is allowed to open the record for editing and sees the same data. When User A submits his changes, they are saved to the database. When User B submits his changes, they are saved to the database. Whatever data User B submitted will overwrite the values User A submitted for those fields, so User A's changes will be lost.
  • Pessimistic locking: When User A opens a record for editing, the server locks the record. When User B asks to open the same record for editing, the server denies the request. When User A submits his changes, the server unlocks the record. User B can not edit that record until User A is done.
  • Optimistic locking (first in wins): Each user is allowed to open the record for editing and sees the same data. When User A submits his changes, they are saved to the database. When User B submits his changes, the request is denied because the record changed after User B first opened that record.

Last In Wins

If you don't think about concurrent access, you will most likely end up with a last-in-wins system. Worse, without an explicit review of the code for potential concurrent-access problems, and depending on the back end, you may also be vulnerable to data corruption as well as data loss.

The big disadvantage of this approach is pretty obvious: User A's changes are lost (referred to as a lost update), and nobody knows until later, when something else goes wrong. User A submitted his data, everything looked good, he might even have gone on to another step that used his changes, and he would have seen them in the database as long as he completed those tasks before User B pushed the Submit button.

Pessimistic Locking

A naive implementation of pessimistic locking is probably the simplest method of doing concurrent access control. You simply don't let two people open the same record for editing at the same time. When the first person opens the record for editing, you lock it. When the second person attempts to edit the same record, he gets an error message and is not allowed to edit. This is a typical solution, but it has some problems, especially for web apps. The problem arises when one person opens a record for editing, then takes a long time editing that record. During that long period, nobody else can update anything in that record. The epitome of this problem is when the user who is editing a record takes a break and walks away from his computer, gets sidetracked, and ends up going home (or out to dinner and a movie) without ever getting out of the editing screen for the locked record.

As an application designer, you have to decide how to handle this situation. Leaving the record locked as long as the user has it open is not a viable option. If the user not only went home but also went on vacation (or got sick), you could be waiting a very long time to unlock that record. So you have to break that lock at some point.

How do you decide how long to wait before breaking the lock? You could hold the lock until someone else wants to edit the record, then let the second person break the lock if it is older than some amount of time, but you still have to decide what that time limit should be. The longer you let the first person hold the lock, the more disruptive it is to other users who want to edit the same record, but if you break the lock too soon, what happens to User A when he finally does submit his changes? User A thinks he has a lock on the record, so should be able to submit, but if the lock has been broken you can't just let him submit his changes, or you are back to last-in-wins.

You have to decide what to do with User A after breaking the lock. The simplest thing to do is to reject the submit request from User A, but this is not a very pleasing solution to User A, especially if the reason it took so long to make the changes was because they were difficult to do.

There are things you can do to improve the situation when using pessimistic locking. You could add some code to try to detect when the user navigates away from the page, such as hooks on other pages of your web site that clear locks. You could have a client-side timer that warns the user shortly before his lock is about to expire and gives him the opportunity to reset the timeout, which would include sending a message back to the server with that request. You could notify the user when the lock has been broken, so that he can get annoyed right then rather than editing for another hour and getting even more annoyed when his submit is rejected. But every such change moves you further away from the simplicity that was one of the reasons for choosing pessimistic locking in the first place.

The bottom line is that pessimistic locking ends up being a lot more complicated than it might originally seem once you add everything you need to deal with the lock timeout problem.

Optimistic Locking

"Optmistic Locking" is a bit of a misnomer, since it doesn't necessarily lock. It is more accurately called optimistic concurrency, but calling it optimistic locking helps to draw the functional parallel to pessimistic locking.

With optimistic locking, multiple users are allowed to edit a record at the same time. When the first user edits the record, the server says "OK". When the second user edits the record while the first user is still editing, the server says "OK" again. Whichever user submits first wins; when the other user submits, it will be denied by the server because of the first submitter's changes. With some optimistic systems, the server will at least inform the second user that someone else is already editing the record when he first opens it, so the second user has the option of canceling right away and trying again later. On the other hand, similar to how some people react to a yellow light, the second user might interpret that warning as an exhortation to edit quickly so as to get his changes in before the first user.

Optimistic locking is called that because the application is being optimistic that it can let both users have what they want. This is based on the assumption that at least one of the users will exit the editing screen without making any changes. Assuming that happens, there are no problems caused by allowing both users to edit the same record, because one of them does not actually edit it.

Some systems do not give the user the option to view a record except by opening the record for editing. In such a system, there is a good chance that a user who opens a record for editing will not actually edit the record, so optimistic locking may seem like a reasonable approach. On the other hand, it would not be that difficult to add a View screen that allows the user to see what is in the record without editing it. You could put an Edit button right on the View screen to put the user into the Edit screen, or even automatically switch the user into Edit mode if he changes any fields on the form.

If you have a system in which the user can express an intent to edit rather than view a record, then there is a high probability that a user who opens a record for editing will actually submit some changes. In such a system an optimistic locking strategy is likely to be overly optimistic.

As a user you can ask yourself: Would it be more annoying to try to edit a record and be told right away that you can't because it is locked (pessimistic locking), or to spend time putting in your data, only to be told when you submit that you are not allowed to edit that record and have to start again (optimistic locking)?

Some people use the term "optimistic locking" to refer to a mechanism more like what I describe below, including both the database's rejection of the update request and the following resolving step. My usage of the term above is more typical when applied specifically to transactional database updates, in which only the database request is considered. This is also sometimes called Optimistic Offline Locking.

A Better Way

Let's reexamine our goals:
  1. User changes are never inadvertently lost.
  2. Users are never locked out of editing a record.
  3. Users are never required to throw away their edits.
None of the three methods described above satisfies all of these goals. Here's another approach. It's not Pessimistic Locking, and it's more than simple Optimistic Concurrency - let's call it Optimistic Resolving Concurrency. If you google for phrases like "optimistic concurrency" and "optimistic offline locking" and browse around, you will find many descriptions that discuss in some detail the work of detecting update conflicts, after which they pretty much all punt on how those conflicts should be resolved, simply stating that it is the responsibility of your business logic. Below I describe one approach your business logic can take to resolve those conflicts.

If you think about the way many modern source code control systems work, such as Subversion or Git, they satisfy all of the goals listed just above. Their approach is straightforward:
  1. Always allow a user to edit a file.
  2. When a user submits changes, accept them if there are no conflicts with other changes.
  3. If there are conflicts, provide the user with tools to resolve the conflicts and allow the user to resubmit the changes after the conflicts have been resolved.
We can apply this approach to our editing web app example. Here are the basic steps:
  1. When a client application (web browser) loads a record for the user to edit, the server sends all of the data for the record, but does not lock the record.
  2. When the user pushes the submit button to save his changes, the client sends back to the server all of the original data along with the user's desired changes.
  3. The server matches the original data sent back by the client against the current contents of the database for the record being edited. If there have been no other changes, then the user's desired changes are saved and the server returns an indication of success to the client.
  4. If the record has been changed, as evidenced by a failure to match all of the original data returned by the client, then a resolving step must be undertaken. This is described in more detail below.
Without the resolving step, you could consider the above to be a description of optimistic concurrency. Personally, I consider the resolving step to be an integral part of the concurrency solution.

The code on the server side is relatively straightforward because we are pushing all of the work of resolving the differences to the client. But there is one interesting detail, which concerns how the server determines that the original record has not changed since the data for that record was sent to the client.

App Server Updating

We need to check if the data currently in the database for the row submitted by the client still matches what the client was originally given, and only perform the update with the user's desired changes if there were no other intervening conflicting changes. The obvious way to do this is to use a transaction. Within the transaction, we SELECT all of the fields from the row in question using a FOR UPDATE modifier so that the row is locked for the duration of the transaction. We then compare that data to the original data returned to us by the client requesting an update. If there were no changes we execute a database UPDATE on the row and commit the transaction, otherwise we simply abort the transaction (or commit it, since we only did a read).
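The check-then-update logic can be sketched against a simple in-memory map standing in for the database row. This is only an illustration of the comparison step; a real implementation would run the SELECT ... FOR UPDATE and the UPDATE inside a database transaction as described above (the RecordStore class is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

//In-memory stand-in for one database row, keyed by column name.
class RecordStore {
    private final Map<String, String> row = new HashMap<>();

    public RecordStore(Map<String, String> initial) { row.putAll(initial); }

    public Map<String, String> read() { return new HashMap<>(row); }

    //Update only if every field still matches the client's original copy.
    //Returns true on success, false if someone else changed the row first.
    public synchronized boolean updateIfUnchanged(Map<String, String> original,
                                                  Map<String, String> desired) {
        if (!row.equals(original)) return false;  //intervening change detected
        row.putAll(desired);
        return true;
    }
}
```

The synchronized keyword plays the role of the row lock here: the compare and the write happen atomically.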

In most situations the above approach would be a perfectly satisfactory solution. Our app server, which is presumably a client to the database server, will be locking one row, but the operation should be very fast and there is nothing else it needs to wait on in order to complete the transaction (in particular, we are not waiting for any user actions at this point). But there is a small chance that our app server will, for whatever reason, hang during the transaction, which could leave the row locked for longer than expected. And besides, the title of this article does say "without locking".

There is a simple trick you can use in this situation so that you don't have to separately read the current data from the database in the app server in order to do the comparison against the client's original data. Instead of having the app server do that comparison, we make the database server do it. We do this by having our app server execute a single UPDATE statement for the row to be updated, and we include a WHERE clause that not only includes our row identifier, but also matches each column against the client's original values. If none of the fields in that row have been changed since the client received the original field data, then the WHERE clause in the UPDATE statement will be true and the update will be executed on our row (and only our row, since we include our row identifier in the WHERE clause). If any of the fields have changed, then the WHERE clause will be false for our row (as well as for all others), so no rows will be updated.

We can read back the row count of updated rows for the operation and check to see if it is 0 or 1, which tells us whether our row was updated. If 1, then we return a success indicator to our client; if 0, the update failed, in which case we need to do a SELECT on the record to read the current contents of all the fields and return that to the client for use in the resolving process.
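As a sketch, the WHERE-qualified UPDATE can be generated from the list of column names, with ? placeholders for the desired and original values to be bound as JDBC parameters (the UpdateBuilder class and the table and column names in the usage are illustrative, not from the post):

```java
class UpdateBuilder {
    //Builds a WHERE-qualified UPDATE with ? placeholders for JDBC binding.
    //The row is updated only if it still holds the client's original values.
    public static String build(String table, String idColumn,
                               Iterable<String> columns) {
        StringBuilder set = new StringBuilder();
        StringBuilder where = new StringBuilder(idColumn + " = ?");
        for (String col : columns) {
            if (set.length() > 0) set.append(", ");
            set.append(col).append(" = ?");                    //desired new value
            where.append(" AND ").append(col).append(" = ?");  //original value
        }
        return "UPDATE " + table + " SET " + set + " WHERE " + where;
    }
}
```

After executing the statement, PreparedStatement.executeUpdate() returns the row count: 1 means our update went in, 0 means someone else changed the row first.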

With the WHERE clause in our UPDATE statement, we no longer need to lock the record in our app server. How the database server handles the write is its business. Any locking done within the database server would still be better than locking in our app server, and in any case is outside our scope. I think it is fair to say this is an approach without locking.

Rather than using all of the data fields in the WHERE clause of the UPDATE, you could maintain a single extra field which is a version number or an update timestamp. As long as that field is changed to a new value (never used before) every time any field in that row is changed, it works just fine as a check to prevent an UPDATE when someone else has modified the row. But we want to have all of the old field data around anyway so we can use it in our resolve step described below, so we might as well just use that data in our WHERE clause rather than adding a surrogate change field.
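Sketched the same way, the version-column variant needs only the row identifier and the version in the WHERE clause, and bumps the version as part of the SET so that every successful update produces a new value (class and column names are illustrative):

```java
class VersionedUpdateBuilder {
    //A single version column replaces matching every field in the WHERE clause.
    //Bumping version in the SET guarantees a new value on every change.
    public static String build(String table, String idColumn, String setClause) {
        return "UPDATE " + table + " SET " + setClause
             + ", version = version + 1"
             + " WHERE " + idColumn + " = ? AND version = ?";
    }
}
```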

Defining Conflicts

If any of the fields of the target row have been changed since the submitting client received his original data, then there is the potential of a conflict between the data written by the previous updater and the data to be written for the submitting client. For each field of the record, we have three values:
  1. The original value of the field as of when the client opened the record for editing.
  2. The current value of the field in the database.
  3. The desired new value of the field as submitted by the client.
If the client does not pass in a desired new value for a field, then we assume the desired value is the same as the original value.

Here are the possible combinations of values for each field:

  Original  Current  Desired  Scenario
  X         X        X        1. No change
  X         X        Y        2. We want a change, no one else changed
  X         Y        Y        3. Someone else changed, we want the same change
  X         Y        X        4. Someone else changed, we did not
  X         Y        Z        5. Someone else changed, we want a different change

If, for every field, we find ourselves in scenario 1 or 2, then we know no one else has made any changes, so we can save the desired new values without worrying about losing any data. If any field is in scenario 5, we know we have a conflict.

If scenario 3 arises, you might think it is no problem, since both users want the same change. On the other hand, you might want the second user to know that someone else modified the record, even if that modification was to put in the same data. If you choose to accept this scenario as a non-conflict, you can easily implement this in your WHERE-qualified UPDATE statement by modifying the WHERE clause so that each field can match either the original or the desired values.

Scenario 4 is a little bit trickier. It may be the case that two different people have edited the same record, each one changing a different field. If the fields are truly independent, then there should be no conflict. But if the fields are related, you may have a problem. For example, the two fields in question might be two different telephone numbers. Perhaps the data was originally entered incorrectly and the same phone number was put into both fields. Person A edits the record to correct it and updates the phone number in the first field, while person B concurrently edits the record and updates the phone number in the second field. If both changes go in, then both copies of the original phone number are lost. Thus the conservative approach is to treat this scenario as a conflict, the same as scenario 5. However, if you choose to treat this scenario as a non-conflict, you can easily do this by modifying the WHERE clause of the update to include only the fields in which the client submitted a new desired value.
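The five scenarios can be written down as a small classifier over the three values for a field (a sketch; the Scenario enum is illustrative, and it keeps scenarios 3, 4, and 5 distinct so the caller can apply either the conservative or the lenient policy):

```java
import java.util.Objects;

enum Scenario {
    NO_CHANGE, WE_CHANGED, SAME_CHANGE, THEY_CHANGED, CONFLICT;

    //Classify one field from its original, current, and desired values.
    static Scenario of(Object original, Object current, Object desired) {
        boolean theyChanged = !Objects.equals(original, current);
        boolean weChanged = !Objects.equals(original, desired);
        if (!theyChanged) return weChanged ? WE_CHANGED : NO_CHANGE;      //1, 2
        if (!weChanged) return THEY_CHANGED;                              //4
        return Objects.equals(current, desired) ? SAME_CHANGE : CONFLICT; //3, 5
    }
}
```

Objects.equals is used so that null field values compare safely.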

Resolving Conflicts

When the app server determines that there is a conflict, it must tell the client which fields were in conflict and what their current values are. The client can then display that information to the user in a Resolve form that allows the user to determine how to resolve the conflict.

At a high level, the user has three options:
  1. Cancel his change request, discarding all his changes.
  2. Resolve the conflicts and resubmit the request.
  3. Postpone resolving the conflicts but continue editing.
Recall that the client has two pieces of information that it maintains and sends back to the server for each field on a submit request: the original value and the desired value. If the user wants to resolve a field conflict immediately, he needs to select which value should become the new desired value. This is typically either the current value or the previous desired value. The client's original value that it is storing for the field should be updated to be the current value as returned by the app server, so that on the next submit it will match the current value in the database.

If, however, the user would like to postpone resolving the conflict for a field, he can do so by keeping the old original value for that field. The next time he submits, that field will once again fail to match the current value in the database, so he will be reminded of the conflict and can take care of it then.

So for each field, the user should be able to specify whether to keep the old original value or update it to the current value, and whether to use the current value as the desired value or keep the previous desired value.
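The client-side bookkeeping for a single field might look like this. The class is a sketch of my own, not from any framework:

```java
public class FieldState {
    public String original;  // value when the user began editing (the baseline)
    public String desired;   // value the user wants to save

    public FieldState(String original, String desired) {
        this.original = original;
        this.desired = desired;
    }

    /** Resolve a conflict for this field: record the chosen desired value
     *  and, unless the user wants to be reminded again, adopt the server's
     *  current value as the new baseline for the next submit. */
    public void resolve(String currentFromServer, String newDesired, boolean resolveLater) {
        this.desired = newDesired;
        if (!resolveLater) {
            this.original = currentFromServer;
        }
        // If resolveLater is true, the old original is kept, so the next
        // submit will flag this field as conflicting again.
    }
}
```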

One possible way to display all of this information is to show a table with one row for each data field and five columns:
  • The first three columns are the original, current and desired values at the time of the submit. These are all buttons; the user can click one of them to pick which value to use as the new desired value in the fifth column.
  • The fourth column is a column of check boxes labeled "Resolve Later", unchecked by default. For any field whose box is checked, the client keeps the old original value for that field; for any field left unchecked, the client updates its original value to the current value.
  • The fifth column is the new desired value, which could also be directly editable.
The form would have three submit buttons: Cancel, Submit, and Continue Editing (which is the "resolve later" option).

Once the user has filled out all of the fields in the Resolve form, the client now has updated values for both the original and the desired values for each field, ready for either continued editing or submitting to the app server. Submitting the form goes through the same process as before: if there are any conflicts remaining, the user will be put back into the Resolve form to take care of them. If the user has resolved all conflicts, but someone else happens to have made more changes in the meantime, the submit will fail again and the user will have to resolve the new conflicts. If the user chooses to continue editing, we just put him back into the regular editing form. In either case, once we leave the Resolve form we are back in the regular code flow; we don't have to remember that we were in the Resolve form.

Theoretically it is possible for the user never to be able to save because there is always someone else making new edits to the record. I suspect this is not a problem for most real-world situations. If it is, then you might have to resort to pessimistic locking. Better still, redesign your system so that you don't have so much contention for that one record.


Defining Conflicts

When defining what constitutes a conflict, the simplest solution is to treat every field as an independent value, so that there is a conflict whenever the same field is changed by two concurrent clients, and there is no conflict if no common fields have been changed. However, as suggested above, a more sophisticated definition may be more appropriate.

In most databases there are implicit relationships between fields such that you might want to consider changes to any of those fields to be in conflict with changes to any other of those fields. Sets of phone numbers, email addresses or similar lists are a good example of this. Note that these data may not all be stored in one table or shown on one screen, even when submitted by the client in one update request. For example, if you are editing an order with a list of order items, chances are good that each order item is stored in a separate record; but all of the order items are related, and you probably want to treat a change to any order item as potentially conflicting with a change to any other order item, just as with the phone number example above.

Similarly, if your application reads data from one record and is using that data to modify another record or create a new record, you might want to ensure that the first record has not changed before writing the other record, even if you are not changing the first record. In this situation you might not be able to use the UPDATE ... WHERE trick, in which case you can fall back on using a transaction in which you read and validate the old data before updating or inserting the new data.

In the other direction, there may be cases where you want to allow certain changes to one field by two clients to be treated as no conflict. For example, you may be storing a text field, and you may decide that each line in the text field should be treated as an independent entity for the purpose of conflicts. If User A changes line 3 of field X and User B changes line 5 of field X, by this definition those changes do not conflict, so should be allowed. Of course this requires some extra coding: you need to check for this situation in that field, merge the changes if they don't conflict, and provide tools for the user to merge changes that do conflict.
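A very simplified version of that per-line merge could look like the following. This sketch assumes the line count is unchanged by both edits; handling insertions and deletions would require a real diff algorithm.

```java
import java.util.ArrayList;
import java.util.List;

public class LineMerge {
    /** Three-way merge of a text field, line by line, for two concurrent
     *  edits of the same base text. Returns the merged lines, or null when
     *  the same line was changed by both users to different values (a real
     *  conflict the user must resolve). */
    public static List<String> merge(List<String> base, List<String> a, List<String> b) {
        List<String> merged = new ArrayList<>();
        for (int i = 0; i < base.size(); i++) {
            boolean aChanged = !base.get(i).equals(a.get(i));
            boolean bChanged = !base.get(i).equals(b.get(i));
            if (aChanged && bChanged && !a.get(i).equals(b.get(i))) {
                return null; // both edited line i differently: conflict
            }
            merged.add(aChanged ? a.get(i) : b.get(i));
        }
        return merged;
    }
}
```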

If you are storing and updating blobs in a database that does not allow blobs to be used in comparisons in WHERE clauses, you will not be able to use the UPDATE ... WHERE trick. I mentioned earlier how you can use a version or update timestamp field in the record as a surrogate to know when any data has changed; you can use that technique here, or you can be a bit more precise and add a version or timestamp field associated with the blob field, updating that version or timestamp field only when the corresponding blob field is modified. You can then use that surrogate field in your WHERE clause. Alternatively, you can fall back on the approach of using a transaction within which you read the current database data and compare to the original data in the application.
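The per-blob surrogate might be used like this; the table and column names are hypothetical. The UPDATE bumps the blob's version column and matches on the version the client originally read, so it succeeds only if no one has modified the blob in the meantime:

```java
public class BlobUpdate {
    /** Build an UPDATE that replaces a blob and increments its associated
     *  version column, succeeding only if the version still matches the
     *  value the client read. The '?' placeholders stand for the new blob,
     *  the record key, and the originally-read version. */
    public static String sql(String table, String blobCol, String versionCol) {
        return "UPDATE " + table
             + " SET " + blobCol + " = ?, " + versionCol + " = " + versionCol + " + 1"
             + " WHERE id = ? AND " + versionCol + " = ?";
    }
}
```

An executeUpdate returning zero rows then signals a conflict on the blob, without ever comparing the blob itself in the WHERE clause.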


Summary

On the app server side:
  1. When a client edits a record, send the data and do not lock the record.
  2. When a client requests an update, use an UPDATE with a WHERE clause to make it an atomic update that succeeds only if there were no changes. If the update fails, load the current data from the record and send that back to the client.
On the client side:
  1. Maintain the original value for each field as well as the desired value for changes. Send both to the app server on a submit of an update.
  2. When the app server rejects an update, provide a Resolve form (with Cancel, Submit and Continue Editing buttons) allowing the user to choose, for each field in conflict, whether to accept the previous change or update with his desired value, and whether to keep the old original value for a later resolve or to update to the current value.
Here are some systems that use the UPDATE ... WHERE trick I describe above, although these references don't mention what the systems do when a conflict is detected:
  • Apple Enterprise Objects Framework Optimistic Locking
  • Apache Cayenne
  • Microsoft's LINQ to SQL detects conflicts and provides some support for how to write the selected values once that decision has been made, but does not include the mechanism to allow the user to resolve conflicts.
Some systems that implement the basic conflict resolving approach I describe above:

Friday, February 6, 2009

The Internet Made Me Stupid

I used to be a pretty smart guy. I took three years of college calculus and two of physics, including relativity and quantum mechanics. I knew how a computer worked inside out. I could hold up my end of the conversation with just about everyone I knew. It was a rare day when there was a discussion that I could not follow. Occasionally I would be reminded that there were still things about which I knew almost nothing, such as when I cracked open "Superstring Theory" (Volume 1, Introduction) by Green, Schwarz and Witten, and got lost on page three. But it was pretty easy to put that book back on my bookshelf and ignore it.

Today, that's not so easy to do. The Internet has made immense amounts of information available at the click of a hyperlink, and most of those links point to things of which I am mostly ignorant.

While trying to understand monads, I went to Wikipedia. It said monads as used in computer science derive from category theory. I followed that link, and within a few hours was amazed at how much there was about category theory and related subjects that I did not know. When I backed up to review my group theory, a subject about which I thought I at least knew a little, there were far more links to things I did not know about than to things I did know about. Even when chasing down topics in computer science, my college major, such as binary trees, there were all sorts of little pieces with which I was not familiar. Each little piece was no problem to read and understand, but it's not like a book where you can read through it and be done. With hyperlinks, they just go on and on and on.... no matter how far you go, there are always more links leading to little corners that you don't know about, from which the links sometimes open out into huge unfamiliar areas. So much to learn!

The Internet has also made it much easier for me to find people smarter than me - or at least people with way more knowledge about certain topics. I recently started paying a lot more attention to functional languages, and found a couple of groups in my area that get together occasionally to discuss topics in functional programming. There are generally five to ten people in these meetings, with one person making an informal presentation and the rest of the people asking intelligent questions. I'm not used to being the stupidest person in the room, but that's how I feel at these meetings. I struggle even to ask cogent questions. Either these are really some of the smartest people in the area, or I am not nearly as smart as I once thought I was.

I have joined some mailing lists for specialized subjects, including the Scala mailing list. These can be even more humbling than going to the local meetings. Likewise when reading blogs and responses. There are some really smart people in the world; mailing lists and blogs make it seem like they are all living on my block, and I'm just the local slow boy trying to tag along with them.

Other people have also wondered if the internet makes people stupid. Atlantic Magazine had an article in their July/August 2008 issue by Nicholas Carr titled "Is Google Making Us Stupid?". The article says the Internet provides people with little snippets of information, and that people no longer have the patience to sit down and read a book. The author claims this makes people's knowledge superficial, because they don't delve deeply into a subject as they would with a book. I don't know who he talked to about that, but it wasn't me. I have spent hours digging into technical subjects on the Internet that I never would have bothered to pursue if I had to go out and get a book on the subject. When I say the Internet has made me stupid, I am saying something quite different from what Mr. Carr said in that Atlantic article.

The Internet has humbled me, but that's pretty easy to live with when I consider how much of an opportunity it has given me to continue my education. Rephrasing the words of many others, Einstein said "As our circle of knowledge expands, so does the circumference of darkness surrounding it." The Internet has helped me see a whole new magnitude of darkness.

Update 2009-05-27: In his article "The importance of stupidity in scientific research", Martin A. Schwartz writes about learning "that the scope of things I didn't know ... was, for all practical purposes, infinite." He learned it through research; I learned it from the Internet.