In the case where only one transaction is writing to a unique index, the process is straightforward. Postgres simply looks up the tuple we're attempting to insert in the unique index. If it already exists, our insertion will fail because we violated a unique constraint. Here's the relevant function in the Postgres source.

If two transactions are writing to the index concurrently, the situation is more complex. For the purposes of our example we'll have transaction a and transaction b. Both transactions attempt to insert value v into the same table. Suppose transaction a has inserted v first but has not yet committed.

Transaction b inserts v, which causes Postgres to check the unique index. A Postgres index stores not only committed data, but also data written by ongoing transactions. Postgres will look for the tuple we're attempting to insert in both the committed and "dirty" (not-yet-committed) sections of the index. In this case, it will find that another in-progress transaction (transaction a) has already inserted v.

Postgres handles this situation by having transaction b wait until transaction a completes. Every transaction holds an exclusive lock on its own transaction ID while the transaction is in progress. If you want to wait for another transaction to finish, you can attempt to acquire a lock on that transaction ID, which will be granted when that transaction finishes. In our example, Postgres will determine the transaction ID of the other transaction (transaction a), and transaction b will attempt to acquire a lock on that transaction ID. Transaction b is now blocked waiting for transaction a to finish.

There are 3 possible outcomes, ordered best to worst:

1. Transaction a commits, and transaction b fails with the message duplicate key value violates unique constraint ".".

2. If the two transactions are each inserting multiple rows into the table, transaction a may attempt to insert a key previously inserted by transaction b. This causes transaction a to attempt to acquire a lock on transaction b. Now b is waiting on a and a is waiting on b. Deadlock! Postgres detects this after deadlock_timeout and one of the transactions is aborted with this somewhat confusing deadlock error:

ERROR: deadlock detected
DETAIL: Process 81941 waits for ShareLock on transaction 924071; blocked by process 81944.
Process 81944 waits for ShareLock on transaction 924069; blocked by process 81941.
CONTEXT: while inserting index tuple (0,20) in relation "."

3. If transaction a is never committed, transaction b will remain stalled: infinite wait, connection pileup, or other bad things™.

The worst case is easy to hit at the application level. Suppose application thread 1 opens transaction 1 and inserts v, and application thread 2 opens transaction 2 and attempts to insert the same v. Transaction 2 and application thread 2 are now both blocked until transaction 1 is committed or rolled back. If application thread 1 now waits for something from application thread 2, you have an application-level deadlock. The Postgres deadlock detector cannot save you. In most cases you will leak database connections until your application hangs. This seems far-fetched, but I've seen it multiple times in practice in seemingly reasonable code. A similar behavior can occur if application thread 1 is blocked for an unrelated reason.

If you were a bit luckier, you could have created a deadlock within Postgres instead. The Postgres deadlock detector will detect the deadlock after deadlock_timeout (typically one second)² and cancel one of the transactions. It's not uncommon to be in the worst case scenario described above until, by sheer luck, insert ordering triggers the deadlock detector and cancels the transactions.
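The blocking behavior described above is easy to reproduce in two concurrent psql sessions. Here is a minimal sketch, assuming an illustrative table t with a unique constraint on column v (the names are mine, not from the original example):

```sql
-- Illustrative schema (assumed):
CREATE TABLE t (v integer UNIQUE);

-- Session 1 (transaction a):
BEGIN;
INSERT INTO t (v) VALUES (1);   -- inserts v; the row is not yet committed

-- Session 2 (transaction b):
BEGIN;
INSERT INTO t (v) VALUES (1);   -- blocks here, waiting on transaction a's
                                -- transaction ID lock

-- Session 1:
COMMIT;
-- Session 2 immediately unblocks and fails with:
-- ERROR:  duplicate key value violates unique constraint
```

Rolling back session 1 instead of committing would let session 2's insert proceed, since the conflicting "dirty" index entry never becomes committed data.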
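The deadlock outcome can likewise be sketched as two sessions inserting the same two keys in opposite order. This again assumes an illustrative table t with a unique constraint on v:

```sql
-- Session 1 (transaction a):
BEGIN;
INSERT INTO t (v) VALUES (1);

-- Session 2 (transaction b):
BEGIN;
INSERT INTO t (v) VALUES (2);
INSERT INTO t (v) VALUES (1);   -- blocks, waiting on transaction a's xid lock

-- Session 1:
INSERT INTO t (v) VALUES (2);   -- blocks, waiting on transaction b's xid lock
-- a now waits on b and b waits on a; after deadlock_timeout one session
-- is aborted with:
-- ERROR:  deadlock detected
```

Whichever session survives still holds its locks, so the aborted session's application code must roll back and retry.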