Comparing Optimistic and Pessimistic Locking With GO and PostgreSQL

by Yuda Prasetiya Haqqy, September 24th, 2022




Locking Mechanisms in SQL Databases

Locking mechanisms manage access to resources shared between users (databases, tables, pages, and rows) to guarantee the consistency of the data. There are two types of locking in databases: optimistic locking and pessimistic locking.

Optimistic Locking.

Optimistic locking is a technique for SQL database applications that do not hold row locks between selecting and updating or deleting a row. If the row does change, the update or delete will fail, and the application logic handles such failures by, for example, retrying the whole process.⁽¹⁾

Pessimistic Locking.

The pessimistic locking model prevents simultaneous updates to records. As soon as one user starts to update a record, a lock is placed on it. Other users who attempt to update this record are informed that another user has an update in progress. The other users must wait until the first user has finished committing their changes, releasing the record lock. Only then can another user make changes based on the previous user's changes.⁽²⁾

Implementation Locking using GO and PostgreSQL

For this implementation, we will create a use case with two repository implementations, one using the pessimistic approach and the other the optimistic approach, to simulate a deposit action on the account table. For the sake of learning, we use two steps: first we select the account record, then we modify it in Go, then we update it in PostgreSQL.
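The article does not show the account table itself, so the code below assumes a schema along these lines (column names are taken from the queries used; the exact types are a guess):

```sql
CREATE TABLE account (
    id       BIGSERIAL PRIMARY KEY,
    username TEXT    NOT NULL UNIQUE,
    balance  NUMERIC NOT NULL DEFAULT 0,
    version  BIGINT  NOT NULL DEFAULT 0  -- bumped on every update, used by the optimistic check
);
```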

Pessimistic Implementation

In the pessimistic implementation, the SQL looks like this:

BEGIN;
SELECT * FROM account WHERE username = $1 LIMIT 1 FOR NO KEY UPDATE;
UPDATE account SET balance = $1, version = version+1 WHERE id = $2;
COMMIT;

FOR NO KEY UPDATE tells the database to lock the row for an update that only modifies non-key fields; it is slightly weaker than FOR UPDATE.
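For comparison, PostgreSQL's stronger row-lock variant looks like this. FOR UPDATE also conflicts with FOR KEY SHARE locks (such as those taken by foreign-key checks), while FOR NO KEY UPDATE does not:

```sql
-- Strongest row lock: conflicts with FOR UPDATE, FOR NO KEY UPDATE,
-- FOR SHARE, and FOR KEY SHARE lockers on the same row.
SELECT * FROM account WHERE username = $1 LIMIT 1 FOR UPDATE;

-- Weaker: appropriate when the planned update leaves key columns
-- untouched; concurrent FOR KEY SHARE lockers are not blocked.
SELECT * FROM account WHERE username = $1 LIMIT 1 FOR NO KEY UPDATE;
```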


Golang implementation with the sqlx library:

const (
	selectAccountLock = "SELECT * FROM account WHERE username = $1 LIMIT 1 FOR NO KEY UPDATE;"
	depositAccount    = "UPDATE account SET balance = $1, version = version+1 WHERE id = $2;"
)

func (a *accountRepository) Deposit(ctx context.Context, request *entity.DepositRequest) error {
	// start the transaction; the row lock below is held until commit/rollback
	tx, err := a.db.Beginx()
	if err != nil {
		fmt.Println(err)
		return entity.ErrInternalServer
	}

	// select and lock the account row
	rows, err := tx.QueryxContext(ctx, selectAccountLock, request.Username)
	if err != nil {
		tx.Rollback()
		return entity.ErrInternalServer
	}

	var account entity.Account
	for rows.Next() {
		if err = rows.StructScan(&account); err != nil {
			rows.Close()
			tx.Rollback()
			return entity.ErrInternalServer
		}
	}
	rows.Close()

	// apply the deposit and write the new balance back
	account.Balance += request.Amount
	if _, err = tx.ExecContext(ctx, depositAccount, account.Balance, account.ID); err != nil {
		tx.Rollback()
		fmt.Println(err)
		return entity.ErrInternalServer
	}

	if err = tx.Commit(); err != nil {
		fmt.Println(err)
		return entity.ErrInternalServer
	}
	return nil
}

Optimistic Implementation

In the optimistic implementation, the SQL looks like this:


SELECT * FROM account WHERE username = $1 LIMIT 1;
UPDATE account SET balance = $1, version = version + 1 WHERE id = $2 AND version = $3;


If we look at the update query, there is a version = $3 in the WHERE clause; it checks whether the row was modified by someone else between our select and update queries. If the update query returns zero rows affected, it means the row changed in between, so we have to retry both queries.


Our optimistic locking Golang implementation needs a retry mechanism with a maximum number of retries; it looks like this:


const (
	maxRetries              = 3
	selectAccountByUsername = "SELECT * FROM account WHERE username = $1 LIMIT 1;"
	depositAccountOpt       = "UPDATE account SET balance = $1, version = version+1 " +
		"WHERE id = $2 AND version = $3;"
)

func (a *accountRepositoryOpt) Deposit(ctx context.Context, request *entity.DepositRequest) error {
	return a.depositOpt(ctx, request.Username, request.Amount, 0)
}

func (a *accountRepositoryOpt) depositOpt(ctx context.Context, username string, amount float64, retry int) error {
	// give up once the retry budget is spent
	if retry >= maxRetries {
		return entity.ErrMaxRetry
	}

	// select the account; no lock is taken here
	rows, err := a.db.QueryxContext(ctx, selectAccountByUsername, username)
	if err != nil {
		return entity.ErrInternalServer
	}

	var account entity.Account
	for rows.Next() {
		if err = rows.StructScan(&account); err != nil {
			rows.Close()
			return entity.ErrInternalServer
		}
	}
	rows.Close()
	if account.ID == 0 { // no row scanned; assumes IDs start at 1
		return entity.ErrAccountByUsernameNotFound(username)
	}

	// add the deposit amount to the balance we just read
	account.Balance += amount

	// write back only if the version is still the one we read
	result, err := a.db.ExecContext(ctx, depositAccountOpt, account.Balance, account.ID, account.Version)
	if err != nil {
		fmt.Println(err)
		return entity.ErrInternalServer
	}

	rowsAffected, err := result.RowsAffected()
	if err != nil {
		fmt.Println(err)
		return entity.ErrInternalServer
	}
	// zero rows affected means another transaction changed the row: retry
	if rowsAffected == 0 {
		return a.depositOpt(ctx, username, amount, retry+1) // recursion
	}
	return nil
}


We use a recursive call to retry when zero rows are affected, and we terminate the recursion when retry reaches the maximum number of retries (maxRetries).
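The same select, modify, compare-and-swap cycle can be shown without a database. This is a minimal standalone sketch, not the article's code: an in-memory int64 stands in for the row, and atomic.CompareAndSwapInt64 plays the role of the `UPDATE ... WHERE version = $3` check. It also shows the iterative alternative to the recursive retry:

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var errMaxRetry = errors.New("max retries exceeded")

// depositCAS retries an optimistic update in a loop instead of by
// recursion. LoadInt64 is the "SELECT"; CompareAndSwapInt64 only
// succeeds if the value is still the one we read, mirroring the
// version check in the UPDATE's WHERE clause.
func depositCAS(balance *int64, amount int64, maxRetries int) error {
	for retry := 0; retry < maxRetries; retry++ {
		old := atomic.LoadInt64(balance)
		if atomic.CompareAndSwapInt64(balance, old, old+amount) {
			return nil
		}
		// another writer got in between our read and write; retry
	}
	return errMaxRetry
}

func main() {
	var balance int64 = 100
	if err := depositCAS(&balance, 50, 3); err != nil {
		fmt.Println(err)
	}
	fmt.Println(balance) // 150
}
```

A loop avoids growing the call stack on repeated conflicts; otherwise it behaves exactly like the recursive version.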

Comparison

We will create some tests to compare the performance of the optimistic and pessimistic locking mechanisms. Each test performs 1000 deposits, spread across a varying number of users: 100, 200, and 300.


It also uses a semaphore to limit concurrency to 100 processes at a time, due to the PostgreSQL connection limit.
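The buffered-channel semaphore pattern can be sketched in isolation (a standalone example, not the article's test code). Sending into a full channel blocks, which caps concurrency at the channel's capacity:

```go
package main

import (
	"fmt"
	"sync"
)

// runWithSemaphore starts `jobs` goroutines but allows at most `limit`
// of them inside the critical section at once, using a buffered channel
// as a counting semaphore. It returns the peak observed concurrency.
func runWithSemaphore(limit, jobs int) int {
	sem := make(chan struct{}, limit)
	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		current int
		peak    int
	)
	for i := 0; i < jobs; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire: blocks while all `limit` slots are taken
			mu.Lock()
			current++
			if current > peak {
				peak = current
			}
			mu.Unlock()
			// ... the deposit call would go here ...
			mu.Lock()
			current--
			mu.Unlock()
			<-sem // release the slot
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	peak := runWithSemaphore(3, 50)
	fmt.Println(peak <= 3) // always true: the semaphore caps concurrency at 3
}
```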

The test will look like this:


func Test_accountUseCase_Deposit(t *testing.T) {
	type fields struct {
		accountRepo accountRepo
	}
	ctx := context.Background()
	tests := []struct {
		name    string
		fields  fields
		wantErr bool
	}{
		{
			name: "pessimistic",
			fields: fields{
				accountRepo: repository.NewAccountRepository(db),
			},
			wantErr: false,
		},
		{
			name: "optimistic",
			fields: fields{
				accountRepo: repository.NewAccountRepositoryOpt(db),
			},
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			a := &accountUseCase{
				accountRepo: tt.fields.accountRepo,
			}
			start := time.Now()
			if err := testSaveBulkData(a, ctx); (err != nil) != tt.wantErr {
				t.Errorf("Deposit() error = %v, wantErr %v", err, tt.wantErr)
			}
			fmt.Println("Execution time", time.Since(start))
		})
	}
}

func testSaveBulkData(a *accountUseCase, ctx context.Context) error {
	sem := make(chan struct{}, 100) // at most 100 concurrent deposits
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		// usernames cycle through "1".."100" (the 100-user run;
		// the 200- and 300-user runs would use a larger modulus)
		go save(a, ctx, strconv.Itoa((i%100)+1), sem, &wg)
	}
	wg.Wait()
	return nil
}

func save(a *accountUseCase, ctx context.Context, username string, sem chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	sem <- struct{}{}
	err := a.Deposit(ctx, &entity.DepositRequest{Username: username, Amount: 1000})
	if err != nil {
		fmt.Println("Error", err)
	}
	<-sem
}

100 users, 1000 iterations

Optimistic: [execution-time screenshot]

Pessimistic: [execution-time screenshot]

In this test, optimistic and pessimistic locking performed similarly. However, with optimistic locking we got around 20-25 failed executions due to the retry limit being reached.

200 users, 1000 iterations

Optimistic: [execution-time screenshot]

Pessimistic: [execution-time screenshot]

In this test, the optimistic implementation is faster than the pessimistic one, but we still see an occasional failed transaction with optimistic locking, though rarely: 0 to 2 failed executions per run.

300 users, 1000 iterations

Optimistic: [execution-time screenshot]

Pessimistic: [execution-time screenshot]

In this test, the performance gap grows to about 100 ms. Failed executions with optimistic locking are now rare, although you can occasionally still see one unlucky failure.

Conclusion

As the tests above show, optimistic and pessimistic locking each have their advantages. Optimistic locking is a useful approach when concurrent updates to the same records are expected to be infrequent, or when locking overhead is high. Pessimistic locking, in contrast, is useful when the next update can be delayed until the previous update completes, or when concurrent updates of the same records are expected to occur frequently.

References

⁽¹⁾ https://www.ibm.com/docs/en/db2/11.5?topic=overview-optimistic-locking

⁽²⁾ https://www.ibm.com/docs/en/rational-clearquest/7.1.0?topic=clearquest-optimistic-pessimistic-record-locking