ios - What advantage(s) does dispatch_sync have over @synchronized?


Let's say we want to make this code thread-safe:

    - (void) addThing:(id)thing { // can be called from different threads
        [_myArray addObject:thing];
    }

GCD seems to be the preferred way of achieving this:

    - (void) addThing:(id)thing {
        dispatch_sync(_myQueue, ^{  // _myQueue is serial.
            [_myArray addObject:thing];
        });
    }

What advantage(s) does it have over the traditional method?

    - (void) addThing:(id)thing {
        @synchronized(_myArray) {
            [_myArray addObject:thing];
        }
    }

Wow. OK -- my original performance assessment was flat out wrong. Color me stupid.

Not stupid. My performance test was wrong. Fixed. Along with a deep dive into the GCD code.

Update: The code for the benchmark can be found here: https://github.com/bbum/stackoverflow Hopefully, it is correct now. :)

Update 2: Added a 10 queue version of each kind of test.

OK, rewriting the answer:

• @synchronized() has been around for a long time. It is implemented as a hash lookup to find a lock that is then locked. It is "pretty fast" -- generally fast enough -- but it can be a burden under high contention (as can any synchronization primitive).

• dispatch_sync() doesn't require a lock, nor does it require that the block be copied. Specifically, in the fastpath case, dispatch_sync() will call the block directly on the calling thread without copying the block. Even in the slowpath case, the block won't be copied, as the calling thread has to block until execution anyway (the calling thread is suspended until whatever work is ahead of the dispatch_sync() is finished, then the thread is resumed). The one exception is invocation on the main queue/thread; in that case, the block still isn't copied (because the calling thread is suspended and, therefore, using a block from the stack is OK), but there is a bunch of work done to enqueue on the main queue, execute, and then resume the calling thread.

• dispatch_async() requires that the block be copied, as it cannot execute on the current thread nor can the current thread be blocked (because the block may lock on a thread-local resource only made available on the line of code after the dispatch_async()). While expensive, dispatch_async() moves the work off the current thread, allowing it to resume execution immediately.

End result -- dispatch_sync() is faster than @synchronized, but not by a meaningful amount (on a '12 iMac, nor on an '11 Mac mini -- the #s between the two are very different, btw... joys of concurrency). Using dispatch_async() is slower than both in the uncontended case, but not by much. However, dispatch_async() is significantly faster when the resource is under contention.

    @synchronized uncontended add: 0.14305 seconds
    dispatch sync uncontended add: 0.09004 seconds
    dispatch async uncontended add: 0.32859 seconds
    dispatch async uncontended add completion: 0.40837 seconds
    synchronized, 2 queue: 2.81083 seconds
    dispatch sync, 2 queue: 2.50734 seconds
    dispatch async, 2 queue: 0.20075 seconds
    dispatch async 2 queue add completion: 0.37383 seconds
    synchronized, 10 queue: 3.67834 seconds
    dispatch sync, 10 queue: 3.66290 seconds
    dispatch async, 10 queue: 0.19761 seconds
    dispatch async 10 queue add completion: 0.42905 seconds

Take the above with a grain of salt; it is a micro-benchmark of the worst kind, in that it does not represent any real-world common usage pattern. The "unit of work" is as follows, and the execution times above represent 1,000,000 executions.

    - (void) synchronizedAdd:(NSObject*)anObject
    {
        @synchronized(self) {
            [_a addObject:anObject];
            [_a removeLastObject];
            _c++;
        }
    }

    - (void) dispatchSyncAdd:(NSObject*)anObject
    {
        dispatch_sync(_q, ^{
            [_a addObject:anObject];
            [_a removeLastObject];
            _c++;
        });
    }

    - (void) dispatchAsyncAdd:(NSObject*)anObject
    {
        dispatch_async(_q, ^{
            [_a addObject:anObject];
            [_a removeLastObject];
            _c++;
        });
    }
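The results above also list "add completion" timings for variants that aren't shown here. A minimal sketch of what such a variant might look like, assuming it simply hops to a second queue to record completion (_completionQueue and _completionCount are hypothetical names, not from the original benchmark):

    - (void) dispatchAsyncAddCompletion:(NSObject*)anObject
    {
        dispatch_async(_q, ^{
            [_a addObject:anObject];
            [_a removeLastObject];
            _c++;
            // hypothetical: signal completion by enqueueing on a separate serial queue
            dispatch_async(_completionQueue, ^{
                _completionCount++;
            });
        });
    }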

(_c is reset to 0 at the beginning of each pass and asserted to be == the # of test cases at the end, to ensure that the code is actually executing all the work before spewing the time.)

For the uncontended case:

    start = [NSDate timeIntervalSinceReferenceDate];
    _c = 0;
    for(int i = 0; i < TESTCASES; i++ ) {
        [self synchronizedAdd:o];
    }
    end = [NSDate timeIntervalSinceReferenceDate];
    assert(_c == TESTCASES);
    NSLog(@"@synchronized uncontended add: %2.5f seconds", end - start);
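The snippet references a few names that aren't declared in this excerpt; a plausible setup, with everything here assumed rather than taken from the original harness (the 1,000,000 figure matches the execution count stated above):

    #define TESTCASES 1000000

    NSTimeInterval start, end;
    NSObject *o = [NSObject new];  // assumed: a single shared object added/removed each pass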

For the contended, 2 queue, case (serial1 and serial2 are serial queues):

    #define TESTCASE_SPLIT_IN_2 (TESTCASES/2)

    start = [NSDate timeIntervalSinceReferenceDate];
    _c = 0;
    dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        dispatch_apply(TESTCASE_SPLIT_IN_2, serial1, ^(size_t i){
            [self synchronizedAdd:o];
        });
    });
    dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        dispatch_apply(TESTCASE_SPLIT_IN_2, serial2, ^(size_t i){
            [self synchronizedAdd:o];
        });
    });
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    end = [NSDate timeIntervalSinceReferenceDate];
    assert(_c == TESTCASES);
    NSLog(@"synchronized, 2 queue: %2.5f seconds", end - start);
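group, serial1, and serial2 are likewise undeclared in the excerpt; a plausible setup (the queue labels are made up):

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t serial1 = dispatch_queue_create("serial1", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t serial2 = dispatch_queue_create("serial2", DISPATCH_QUEUE_SERIAL);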

The above was simply repeated for each work unit variant (no tricksy runtime-y magic in use; copypasta FTW!).
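The 10 queue variant from Update 2 isn't shown either; presumably it generalizes the same pattern across ten serial queues, roughly like this sketch (all names here are assumed):

    #define QUEUE_COUNT 10
    #define TESTCASE_SPLIT_IN_10 (TESTCASES/QUEUE_COUNT)

    dispatch_queue_t queues[QUEUE_COUNT];
    for (int q = 0; q < QUEUE_COUNT; q++) {
        queues[q] = dispatch_queue_create("serialN", DISPATCH_QUEUE_SERIAL);
    }
    start = [NSDate timeIntervalSinceReferenceDate];
    _c = 0;
    for (int q = 0; q < QUEUE_COUNT; q++) {
        dispatch_queue_t serialQ = queues[q];
        // each of the 10 serial queues hammers the same shared resource
        dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
            dispatch_apply(TESTCASE_SPLIT_IN_10, serialQ, ^(size_t i){
                [self synchronizedAdd:o];
            });
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    end = [NSDate timeIntervalSinceReferenceDate];
    assert(_c == TESTCASES);
    NSLog(@"synchronized, 10 queue: %2.5f seconds", end - start);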


With all of that in mind:

• Use @synchronized() if you like how it looks. The reality is that if your code is contending on that array, you probably have an architecture issue. Note: using @synchronized(someObject) may have unintended consequences, in that it may cause additional contention if the object internally uses @synchronized(self)! See the sketch below.
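To illustrate that note, a hypothetical example (SomeClass and the method names are made up):

    // Hypothetical: SomeClass takes @synchronized(self) inside its own methods...
    @implementation SomeClass
    - (void) doInternalWork {
        @synchronized(self) {
            // ... internal critical section ...
        }
    }
    @end

    // ...so external code that synchronizes on the same instance shares --
    // and adds contention to -- the very same lock:
    - (void) addThing:(id)thing withHelper:(SomeClass*)someObject {
        @synchronized(someObject) {  // same lock as someObject's @synchronized(self)
            [_myArray addObject:thing];
        }
    }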

• Use dispatch_sync() with a serial queue if that is your thing. There is no overhead -- it is actually faster in both the contended and uncontended case -- and using queues is both easier to debug and easier to profile, in that Instruments and the Debugger both have excellent tools for debugging queues (and they are getting better all the time) whereas debugging locks can be a pain.

• Use dispatch_async() with immutable data for heavily contended resources. I.e.:

    - (void) addThing:(NSString*)thing {
        thing = [thing copy];
        dispatch_async(_myQueue, ^{
            [_myArray addObject:thing];
        });
    }

Finally, it really shouldn't matter which one you use for maintaining the contents of an array. The cost of contention is exceedingly high for the synchronous cases. For the asynchronous case, the cost of contention goes way down, but the potential for complexity or weird performance issues goes way up.

When designing concurrent systems, it is best to keep the boundary between queues as small as possible. A big part of that is ensuring that as few resources as possible "live" on both sides of a boundary.
