
Read-write lock implementation for Ruby


Problem

I recently answered a question on Stack Overflow where someone asked for advice about making a Hash thread-safe. They indicated that performance was important and that there would be a heavy read bias. I suggested a read-write lock, but couldn't find a Ruby implementation available anywhere. The only thing I found was a blog entry from 2007 where someone said they had built one, but the link was broken.

I thought that a freely available read-write lock implementation for Ruby would be a good thing for the community, so I tried coming up with one. If you can find any bugs, or see any way to increase performance, please comment!

The latest version of this file is here. There is a forked version here, which makes queued readers and writers interleave, rather than preferring to run the queued writers first. I haven't decided whether this is a good idea or not. If I determine that it is better, I will merge the change into 'master'.

```
# Ruby read-write lock implementation
# Allows any number of concurrent readers, but only one concurrent writer
# (And if the "write" lock is taken, any readers who come along will have to wait)

# If readers are already active when a writer comes along, the writer will wait for
# all the readers to finish before going ahead
# But any additional readers who come when the writer is already waiting, will also
# wait (so writers are not starved)

# Written by Alex Dowad
# Thanks to Doug Lea for java.util.concurrent.ReentrantReadWriteLock (used for inspiration)

# Usage:
# lock = ReadWriteLock.new
# lock.with_read_lock { data.retrieve }
# lock.with_write_lock { data.modify! }

# Implementation note: A goal for this implementation is to make the "main" (uncontended)
# path for readers lock-free
# Only if there is reader-writer or writer-writer contention, should locks be used

require 'atomic' # must install 'atomic' gem
require 'thread'

class ReadWriteLock
  def initialize
    @counter = Atomic.new(0) # single integer
    # ... (snippet truncated in the original post)
```

Solution

I am not an expert in multithreading, but I reviewed the code and here are my thoughts:

The following code leads to deadlock:

```
l = ReadWriteLock.new
l.with_write_lock { puts 'passed' }
```


This happens because the writer waits for itself to release the lock: it increments @counter by WRITER_INCREMENT and then waits because @counter.value > 0:

```
while(true)
  c = @counter.value
  if @counter.compare_and_swap(c,c+WRITER_INCREMENT)
    @writer_mutex.synchronize do
      @writer_q.wait(@writer_mutex) if @counter.value > 0 # here
    end
    break
  end
end
```
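One way to see the bug in isolation: after the writer's own compare_and_swap succeeds, `@counter.value > 0` is trivially true even when there are no readers at all, so an uncontended writer parks forever. A minimal sketch (the `Atomic` stand-in and the `WRITER_INCREMENT` bit layout below are my assumptions for the demo, not the original code):

```ruby
require 'thread'

# Minimal stand-in for the 'atomic' gem's Atomic class (only #value and
# #compare_and_swap are needed here), so the bug can be shown without gems.
class Atomic
  def initialize(value)
    @value = value
    @mutex = Mutex.new
  end

  def value
    @mutex.synchronize { @value }
  end

  def compare_and_swap(old, new)
    @mutex.synchronize do
      return false unless @value == old
      @value = new
      true
    end
  end
end

WRITER_INCREMENT = 1 << 15 # assumed bit layout: reader count in the low bits

counter = Atomic.new(0)
c = counter.value
counter.compare_and_swap(c, c + WRITER_INCREMENT)

# After a lone writer's own CAS, "@counter.value > 0" is already true, so the
# writer would park on @writer_q waiting for itself -- even though there are
# no readers it actually needs to wait for:
puts counter.value > 0                      # => true (the deadlock trigger)
puts counter.value & (WRITER_INCREMENT - 1) # => 0 (no active readers)
```

A check along the lines of `@counter.value & (WRITER_INCREMENT - 1) > 0` (wait only while readers are active) would presumably avoid the self-wait, though a real fix also has to account for other queued writers.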


The increments and decrements of @counter are atomic operations, but a check of @counter followed by an action on the result is not atomic, so a race condition is possible. For example:

```
# let's say we have 1 writer working and 1 reader trying to do the job:
while(true)
  c = @counter.value
  raise "Too many reader threads!" if (c & MAX_READERS) == MAX_READERS
  if c >= WRITER_INCREMENT # here @counter == WRITER_INCREMENT
    @reader_mutex.synchronize do
      if @counter.value >= WRITER_INCREMENT # here @counter == WRITER_INCREMENT
        # <- but here the writer decrements WRITER_INCREMENT and does
        #    @reader_q.broadcast (there is no synchronization to protect from this)
        @reader_q.wait(@reader_mutex) # then we wait for the writer's broadcast,
                                      # but it will never come -> deadlock
      end
    end
  else
    break if @counter.compare_and_swap(c,c+1)
  end
end
```
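The usual cure for this kind of lost-wakeup race is to make the state change and the broadcast happen under the same mutex that the reader checks and waits under; then the reader's check-then-wait cannot interleave with the writer's decrement-then-broadcast. A generic sketch of the pattern (not the original code) using Ruby's stdlib ConditionVariable:

```ruby
require 'thread'

# Generic sketch of the fix pattern: the waiter's check-then-wait and the
# signaller's change-then-broadcast share one mutex, so the wakeup cannot
# be lost between the check and the wait.
mutex = Mutex.new
cond  = ConditionVariable.new
writer_active = true

reader = Thread.new do
  mutex.synchronize do
    # The check and the wait are atomic with respect to the writer below.
    cond.wait(mutex) while writer_active
  end
  :read_ok
end

sleep 0.01 # give the reader a chance to park (the pattern is correct either way)

mutex.synchronize do
  writer_active = false # state change and wakeup under the same mutex...
  cond.broadcast        # ...so the reader cannot re-check and then miss it
end

puts reader.value # => :read_ok
```

Note this works regardless of timing: if the writer gets the mutex first, the reader sees `writer_active == false` and never waits; if the reader waits first, it is guaranteed to receive the broadcast.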


In your benchmark code, I believe you wanted to write something like this:

```
lock.with_write_lock do
  value = data + 1
  sleep(writer_sleep)
  data  = value + 1
end
```


instead of:

```
lock.with_write_lock do
  value = data
  data = value + 1
  sleep(writer_sleep)
  data  = value + 1 # assigns `value + 1` to `data` twice
end
```
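With the corrected writer block, each iteration adds exactly 2 to `data`, so the final value is fully determined when the lock actually excludes concurrent writers. A quick self-contained check (using a plain Mutex as a stand-in for `with_write_lock`, since only the arithmetic is at issue here):

```ruby
require 'thread'

# With proper mutual exclusion, N writers each doing "value = data + 1;
# sleep; data = value + 1" add exactly 2 per iteration, so the result
# is deterministic: threads * iterations * 2.
lock = Mutex.new # stand-in for with_write_lock, just for this demo
data = 0

threads = 10.times.map do
  Thread.new do
    5.times do
      lock.synchronize do
        value = data + 1
        sleep(0.0001) # writer_sleep stand-in
        data  = value + 1
      end
    end
  end
end
threads.each(&:join)

puts data # => 100 (10 threads * 5 iterations * +2 each)
```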


I think it is too early to do performance tuning of the implementation, but here are the benchmark results on an Intel(R) Core(TM)2 Duo CPU P7570 @ 2.26GHz machine:

```
# MRI 1.9.2 (with global interpreter lock)
  0.090000   0.140000   0.230000 (  1.419659)
Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
  0.050000   0.090000   0.140000 (  2.356187)
Testing FreeAndEasy with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
BAD!!! Writers overlapped!
  0.010000   0.030000   0.040000 (  0.063878)

# MRI 1.9.3 (with global interpreter lock)
  0.110000   0.100000   0.210000 (  1.405219)
Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
  0.030000   0.070000   0.100000 (  2.292269)
Testing FreeAndEasy with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
BAD!!! Writers overlapped!
  0.010000   0.010000   0.020000 (  0.076124)

# JRuby 1.6.5 (with native threads)
  1.783000   0.000000   1.783000 (  1.783000)
Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
  2.484000   0.000000   2.484000 (  2.484000)
Testing FreeAndEasy with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
BAD!!! Writers overlapped!
  0.151000   0.000000   0.151000 (  0.151000)
```


Note: the MRI benchmarks frequently fail with a "deadlock detected" error (or hang on JRuby).
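For context, the SimpleMutex baseline these numbers compare against is, I assume, just a single Mutex wrapped in the same read/write interface; that explains why it cannot let readers overlap and loses badly on read-heavy workloads:

```ruby
require 'thread'

# Assumed shape of the SimpleMutex baseline (my reconstruction, not the
# original harness code): one Mutex, and both readers and writers funnel
# through it, so readers serialize against each other too.
class SimpleMutex
  def initialize
    @mutex = Mutex.new
  end

  def with_read_lock(&blk)
    @mutex.synchronize(&blk) # readers block each other -- no concurrency
  end

  def with_write_lock(&blk)
    @mutex.synchronize(&blk)
  end
end

lock = SimpleMutex.new
result = lock.with_read_lock { 40 + 2 }
puts result # => 42
```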

Sidenote: I think it would be a good idea to put this project into a GitHub repo so other people can participate with pull requests and bug reports. I would participate :)

UPDATE after fix

I ran the benchmarks after your fix and here are the results (I ran it several times and did not encounter a deadlock error):

```
READ INTENSIVE (80% read, 20% write):
Testing ReadWriteLock with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
0.070000 0.080000 0.150000 ( 0.557659)
WRITE INTENSIVE (80% write, 20% read):
Testing ReadWriteLock with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
0.140000 0.080000 0.220000 ( 2.015721)
BALANCED (50% read, 50% write):
Testing ReadWriteLock with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
0.090000 0.080000 0.170000 ( 1.286458)
READ INTENSIVE (80% read, 20% write):
Testing SimpleMutex with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
0.080000 0.070000 0.150000 ( 2.401325)
WRITE INTENSIVE (80% write, 20% read):
Testing SimpleMutex with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time
0.050000 0.090000 0.140000 ( 2.361558)
BALANCED (50% read, 50% write):
Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers it
```


Context

StackExchange Code Review Q#9038, answer score: 2
