Possible Issue in connection:didReceiveData #329

Closed
couchdeveloper opened this Issue May 4, 2012 · 17 comments


@couchdeveloper

I'm worried about this implementation in AFURLConnectionOperation.m - please check:

- (void)connection:(NSURLConnection *)__unused connection 
    didReceiveData:(NSData *)data
{
    self.totalBytesRead += [data length];
    if ([self.outputStream hasSpaceAvailable]) {
        const uint8_t *dataBuffer = (uint8_t *) [data bytes];
        [self.outputStream write:&dataBuffer[0] maxLength:[data length]];
    }
    if (self.downloadProgress) {
        self.downloadProgress((long long)[data length], self.totalBytesRead, self.response.expectedContentLength);
    }
}

It seems the bytes in data will only be written if the stream's buffer has space available. There is also no check of whether the data was actually written to the stream completely.

I've only glanced quickly over the code - but using streams this way seems horribly wrong. Please prove me wrong ;)

@mattt
Contributor
mattt commented May 7, 2012

You're correct in pointing out that this method may benefit from an else branch for when there isn't any space available. But otherwise, this is what I would expect for using NSOutputStream; every use I've seen in documentation and in the wild includes this hasSpaceAvailable check. If you can point to something that explicitly calls out this behavior as wrong, please do so.

@couchdeveloper

Hi Matt!

It's not wrong to invoke hasSpaceAvailable in general. But you would use hasSpaceAvailable only if you use streams with a "polling" approach. The alternative is the "run-loop scheduling" approach. The latter is preferred, because it avoids a couple of shortcomings inherent to polling.

Let me explain what can happen in the current code (which is neither polling nor event-driven):
Say the consumer of the stream executes on a different thread (which is generally a good idea). The producer (the connection) can nonetheless produce data faster than the consumer is able to process it. In this case, the NSStream object's internal buffer - which can be of fixed size for certain streams - will eventually fill up, and hasSpaceAvailable will return NO. When this happens, data is lost in the void.

In order to avoid this in the polling approach, you would need to loop - possibly indefinitely - until all input data (provided by the NSData argument) has actually been written into the stream. That is, you can only return from connection:didReceiveData: once all data has been successfully written to the stream. The write method returns the number of bytes actually written, so you can check the result.
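
The same loop can be sketched in plain C over a POSIX file descriptor (a hedged illustration; `write_all` is a made-up helper, not AFNetworking API). The point is that a single write call may accept fewer bytes than requested, so only a loop guarantees the whole buffer is written:

```c
#include <stddef.h>
#include <sys/types.h>
#include <unistd.h>

/* Keep writing until every byte of 'len' has been accepted, or an error
   occurs. A single write() may accept fewer bytes than requested - exactly
   the case that an unchecked, single write ignores. */
ssize_t write_all(int fd, const unsigned char *buf, size_t len) {
    size_t written = 0;
    while (written < len) {
        ssize_t n = write(fd, buf + written, len - written);
        if (n <= 0) {
            return -1; /* error: the caller must decide how to recover */
        }
        written += (size_t)n;
    }
    return (ssize_t)written;
}
```

The same structure applies to -[NSOutputStream write:maxLength:], whose return value must be checked the same way.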

However, "polling" has a number of shortcomings:
It wastes CPU cycles. Furthermore, the polling approach can easily break - for instance, if both the consumer and the producer execute on the same thread, polling can deadlock. It's generally tricky to make it run smoothly under all conditions (e.g. using a temporary in-memory buffer when the stream's buffer is full, instead of busy-waiting).

IMHO, polling has so many disadvantages and is so tricky to get right that I wouldn't consider it at all. The docs recommend the "Run Loop approach" as well.

Now, how about implementing the "Run-Loop approach"?

Basically, you schedule an NSStream on a run loop. A stream delegate receives messages sent from the stream on the run loop's thread. That is, the stream plays the role of the "proactor" and the stream delegate is the "reactor".

Unfortunately, this pattern doesn't fit well with NSURLConnection - the connection delegate wants to play the "proactor" role as well; that is, it wants to decide when to write data into the stream (in connection:didReceiveData:). But it is actually the other way around: the stream tells its delegate when it is ready to receive data. Hm ...

Well, it's possible to "work around" this problem - but honestly, it doesn't look nice.

There are other solutions - but these would avoid NSStream altogether and would require something different, something new to Cocoa, called a "Synchronous Queue". This approach would have some major advantages: easy to implement, faster, reliable.
However, it would require implementing such a Synchronous Queue (quite easy) - and it would require that the producer (the connection delegate) and the consumer (the part which processes the data) execute on different threads.

Please see also:
Stream Programming Guide: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Streams/Streams.html

Synchronous Queue (for lack of a better description on the net): http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/SynchronousQueue.html
and http://www.cs.rice.edu/~wns1/papers/2006-PPoPP-SQ.pdf

Regards

@mattt
Contributor
mattt commented May 22, 2012

Thanks for your well-reasoned and informative response, @couchdeveloper. I really appreciate you taking the time to explain this all to me.

By default, when used as an in-memory buffer, the output stream is scheduled in the same run loop as the connection, which sends the delegate methods. If I'm understanding this correctly, wouldn't this preclude the connection from getting ahead of the output stream, since the write runs synchronously? Also, since in this case the stream is writing to an in-memory buffer, it would be rather unlikely for network data to outpace that, no?

You definitely raise an interesting point about all of this; I just want to make sure that the internals of NSURLConnection and / or NSOutputStream don't already have mechanisms to reconcile this, and that the current setup is indeed at risk.

It would seem that the cleanest solution would be to abstract the synchronous queue into an NSOutputStream subclass, which could transparently be substituted.

Thoughts?

@couchdeveloper

Hi Matt!

Thank you for replying.

On May 22, 2012, at 6:03 PM, Mattt Thompson wrote:

Thanks for your well-reasoned and informative response, @couchdeveloper. I really appreciate you taking the time to explain this all to me.

By default, when used as an in-memory buffer, the output stream is scheduled in the same run loop as the connection, which sends the delegate methods. If I'm understanding this correctly, wouldn't this preclude the connection from getting ahead of the output stream, since the write runs synchronously? Also, since in this case the stream is writing to an in-memory buffer, it would be rather unlikely for network data to outpace that, no?

If the NSOutputStream is implemented as a dynamic buffer - that is, using an NSMutableData internally - then you are correct: the bytes will always be written completely and synchronously in one go, and this would all be pretty fast. Where the stream is scheduled doesn't matter, since no delegate is involved.

This approach is nothing more than using an NSMutableData buffer explicitly and appending the data from the connection.

You definitely raise an interesting point about all of this;
Yes, actually. At the end of the day, you will probably conclude that in this scenario, where we use NSURLConnection, there is no real benefit to using NSStream at all :/

Well, in fact, I came to that conclusion: NSStream and NSURLConnection don't fit well together. NSStream would make sense if you used a tied stream pair with a fixed-size buffer. That is, an output stream which the connection writes into, and a "consumer" which uses the tied input stream to simultaneously read the data - either running on two threads, or carefully scheduled on the same thread. In this case, the streams should be scheduled on a run loop - and not used with a polling approach.

However, this is very difficult to accomplish with NSURLConnection.

I just want to make sure that the internals of NSURLConnection and / or NSOutputStream don't already have mechanisms to reconcile this, and that the current setup is indeed at risk.
As mentioned, NSURLConnection uses a "proactive" pattern. That is, an NSURLConnection itself decides when data is available and ready to be consumed.
A run-loop-scheduled output stream uses a "proactive" pattern as well: it is the one that decides when data can be written to it.
Because of this, the two don't fit well together.

If you use an output stream of type File, there is no risk.
If you use a dynamic memory buffer, you have a relative risk - depending on the amount of available memory and the size of the data to be downloaded.
If you use a fixed-size buffer in a tied stream pair configuration, it likely does not work.
If there is a custom stream, your current approach will have bugs - possibly resulting in loss of data, or in crashes due to memory issues.

In any case you gain nothing but a tiny abstraction over using a file or an NSMutableData. Alternatively, you could use a temporary file directly, or an NSMutableData directly.

What else you get, whether you want it or not:
In case of a dynamic buffer, you run the risk of running out of memory. The whole download needs to be stored in the mutable data in memory.
In case of a file, the user needs to set up and manage the file (which is elaborate).

You don't gain performance either. Data needs to be copied back and forth, or written to a file, and the NSMutableData needs to be reallocated …

It would seem that the cleanest solution would be to abstract the synchronous queue into an NSOutputStream subclass, which could transparently be substituted.

Thoughts?

Likely, such a "SyncQueueStream" is possible. It certainly needs further investigation.

You certainly won't use it with the run-loop approach. You just invoke the read or write methods in a synchronous manner - while pretending to follow a polling approach, which would actually never poll (in order to be compatible with other stream types).

You also need to ensure that "producer" (the NSURLConnection) and the "consumer" (say, an XML or JSON parser) execute on different threads!

The write:maxLength: method would become blocking and synchronous - and would always write all data. Internally, it would create an NSData, wait for the queue to become free, put the NSData object, and then wait again until the data has been consumed.

Invoking write:maxLength: in connection:didReceiveData: would then be simple to get right (while still pretending the polling approach).

On the other side there is the input stream. The method getBuffer:length: seems difficult to implement, though.
By the very definition of what a stream is, this method also makes no sense at all: from the view of a client, there is no buffer. A stream has a "forward iterator"; that is, you can read one byte, then advance and read the next, and so forth. You can never read the same byte twice or go back. Otherwise, you would reveal implementation details - which in turn would make the stream concept useless. This method implies that there is a buffer ... ts ts.

The method read:maxLength: seems implementable, though. The "SyncQueueStream" needs to internally manage a "current pointer". Once an NSData buffer has been read completely, it removes it. This causes the write method on the other side of the queue to be resumed.

Note: following the "rules of streams" - once you have read a byte from a stream - there is no way to read it again. A SyncQueueStream will utilize this definition.

The cool thing here is that a SyncQueueStream is actually an output stream and an input stream at once. Strictly speaking, you wouldn't need to "tie a pair of streams". However, in order to subclass properly, you need two separate classes, an OutSyncQueueStream and an InSyncQueueStream, sharing the same SyncQueue object.

Scheduling on the "Run loop" must be implemented as well. This can become cumbersome.

What you actually can ensure in the AFNetworking framework is that the connection delegate gets scheduled on a "private" secondary thread - which is required since the sync queue is blocking. That way, the user won't execute the consumer on that thread.

The disadvantage is that performance may suffer a bit compared to using a SyncQueue directly. The SyncQueueStream class would have to create NSData objects and copy bytes on every write, and copy data to an external buffer on every read.

Why not use a SyncQueue directly?

I've implemented one. Additionally, it has a timeout parameter in its put and get messages, and the queue can be *canceled*. Imagine the user switches to another view where the currently active download no longer makes sense. Reliably canceling a connection is already difficult, and canceling a blocked thread is kinda tricky. Canceling (from any thread) will effectively cause waiting consumers and producers to be resumed and to recognize that they have been canceled. This works 99.9999% of the time - except for corner cases which cannot be handled without taking too many penalties, say, regarding the time a "cancel" actually takes to finish.
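
For illustration, the rendezvous semantics described here (put blocks until the item has been taken; cancel wakes all waiters) can be sketched in C with pthreads. This is my own sketch of the idea, not the author's implementation; the names are invented, and the timeout variant is omitted:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical single-slot synchronous queue: sq_put() blocks until the item
   has been taken by sq_get(); sq_cancel() wakes all waiters. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    void *item;
    bool  has_item;
    bool  canceled;
} sync_queue;

void sq_init(sync_queue *q) {
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cond, NULL);
    q->item = NULL;
    q->has_item = false;
    q->canceled = false;
}

/* Blocks until the consumer has taken the item (rendezvous) or the queue is
   canceled. Returns false if canceled before the hand-over. */
bool sq_put(sync_queue *q, void *item) {
    pthread_mutex_lock(&q->lock);
    while (q->has_item && !q->canceled)   /* wait until the slot is free */
        pthread_cond_wait(&q->cond, &q->lock);
    if (q->canceled) {
        pthread_mutex_unlock(&q->lock);
        return false;
    }
    q->item = item;
    q->has_item = true;
    pthread_cond_broadcast(&q->cond);     /* wake a waiting consumer */
    while (q->has_item && !q->canceled)   /* wait until it has been taken */
        pthread_cond_wait(&q->cond, &q->lock);
    bool ok = !q->has_item;               /* taken, even if canceled later */
    pthread_mutex_unlock(&q->lock);
    return ok;
}

/* Blocks until an item is available or the queue is canceled. */
bool sq_get(sync_queue *q, void **out) {
    pthread_mutex_lock(&q->lock);
    while (!q->has_item && !q->canceled)
        pthread_cond_wait(&q->cond, &q->lock);
    if (!q->has_item) {                   /* canceled while empty */
        pthread_mutex_unlock(&q->lock);
        return false;
    }
    *out = q->item;
    q->item = NULL;
    q->has_item = false;
    pthread_cond_broadcast(&q->cond);     /* resume the blocked producer */
    pthread_mutex_unlock(&q->lock);
    return true;
}

/* May be called from any thread; blocked producers and consumers resume and
   observe the canceled state. */
void sq_cancel(sync_queue *q) {
    pthread_mutex_lock(&q->lock);
    q->canceled = true;
    pthread_cond_broadcast(&q->cond);
    pthread_mutex_unlock(&q->lock);
}
```

The blocking sq_put is what would back the connection delegate's write, and sq_get is what the consumer (e.g. a parser) would call from its own thread.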

Uff, was a long write ;)

Regards
Andreas (couchdeveloper)



@tonymillion
Contributor

Why are you even using an NSStream at all? Since it's an in-memory buffer, why aren't you just using an NSMutableData and appending to the end? You create an NSData from the contents of the NSStream anyway in connectionDidFinishLoading - which, by the way, has the potential to duplicate the data in memory (once for the NSStream and once for the freshly created NSData). While it's 'probably not' going to happen, since it's an opaque structure you can't rule it out.

I believe the "simplest" way to solve this is to remove all the NSOutputStream stuff entirely, use a NSMutableData and call [self.responseData appendData:data]; .

you can see an example of that here:

https://github.com/tonymillion/TMHTTPRequest/blob/master/TMGETRequest.m

(which is an old library I wrote that I dumped for AFNetworking, however, sometimes when I see code like this I wonder…. )

@couchdeveloper

Tony, it depends on what you are trying to solve.

An NSMutableData and an NSOutputStream using a dynamic memory buffer are effectively the same. It's true that in this case an NSStream would not make things better.

However, a simple approach using a dynamic buffer (either an NSStream or an NSMutableData) requires that ALL the data to be downloaded fits into memory. If this is not the case, the app will crash. IMHO, a library should provide more flexibility and more robustness - that is, it should be scalable.

Another simple approach would be to download into a (private) temporary file, and then return to the user an NSData object which has been memory-mapped to the file. This approach is robust, but requires managing the temporary files.

With both approaches, the data can be processed only after all of it has been downloaded. This is suboptimal, performance-wise. For large data sets, it would take a few seconds less if it were possible to download and process the data simultaneously.

For instance, a 25 MByte JSON file takes roughly 5 seconds to parse (including the creation of a Foundation representation) on an iPad 2. The download takes about 7 s over WiFi. If you could do both simultaneously, the total would be about 7 s (the iPad 2 has two CPUs); with the "temporary file" or "dynamic buffer" approach, it takes 12 seconds.

So, a "SynchronousQueue" has these advantages:

  • You can produce and consume simultaneously - running on two separate threads (utilizing available CPUs).
  • The producer will be blocked when the consumer is slow (preventing excessive memory consumption).
    That is, the NSURLConnection's delegate method connection:didReceiveData: will be blocked - and thus only one NSData object is floating around at any time. This will also eventually cause the underlying network layer to stop reading incoming data - and thus, the NSURLConnection eventually stops generating NSData objects which require memory. When the data has been consumed, everything proceeds.
  • Consumer and producer both operate on a "blocking" interface - as opposed to requiring polling or a run loop.

What does that mean?

Well, the consumer's and the producer's implementations become very simple:

- (void) connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [_syncQueue putData:data];
} 

Suppose the consumer is a "recursive descent parser" (e.g. JSONKit). A recursive descent parser requires an input which can be read sequentially in one go. Unfortunately, an NSURLConnection delivers the data in "chunks" - that is, the concatenation of the individual NSData objects comprises the input. Can, for example, JSONKit parse partial data? No - it cannot.

Unsolvable?

Not at all! With the blocking interface of the SynchronousQueue's getData method you can accomplish this quite easily. That is, the parser consumes the first NSData buffer (possibly blocking until one is available) and iterates over its contents. Once it reaches the end of the buffer, it tries to consume the next NSData buffer - which may again block. That way, from the point of view of the parser, the input appears as one contiguous sequence of bytes.
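
The "contiguous view over chunks" idea can be sketched in a few lines of C (names are illustrative; here the chunks come from a pre-filled array, whereas in the real scenario fetching the next chunk would block on the synchronous queue):

```c
#include <stddef.h>

/* A cursor that presents a sequence of separate chunks as one contiguous
   byte stream - the view a recursive descent parser needs. */
typedef struct {
    const unsigned char **chunks; /* array of chunk pointers */
    const size_t *lengths;        /* matching chunk lengths  */
    size_t nchunks;
    size_t chunk_idx;             /* current chunk           */
    size_t byte_idx;              /* position within chunk   */
} chunk_cursor;

/* Returns the next byte of the concatenated input, or -1 at end of input.
   Crossing a chunk boundary is invisible to the caller - in the real case,
   "advance to the next chunk" is where the consumer would block on the
   queue until the connection delivers more data. */
int cursor_next(chunk_cursor *c) {
    while (c->chunk_idx < c->nchunks && c->byte_idx >= c->lengths[c->chunk_idx]) {
        c->chunk_idx++;           /* current chunk exhausted: advance */
        c->byte_idx = 0;
    }
    if (c->chunk_idx >= c->nchunks) {
        return -1;
    }
    return c->chunks[c->chunk_idx][c->byte_idx++];
}
```

Note that this is a pure forward iterator: once a byte has been consumed, there is no way to go back, matching the stream semantics discussed above.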

@couchdeveloper

(sorry closed accidentally)

@tonymillion
Contributor

The problem is over-engineering.

If you are downloading a 25mb JSON blob via the API call on which this is based, then you are doing it wrong.

And in the case I mentioned above, downloading to a memory-based NSStream and THEN converting it to an NSData object is potentially worse, or at the very least unnecessarily more complicated, than simply appending to a once-allocated NSMutableData object.

Streaming the data from the URLConnection through a parser in realtime during the download is a noble goal, but it is highly specialized behaviour which I believe is unsuitable and incompatible for the goals AFNetworking is trying to achieve.

@tonymillion
Contributor

I take back a little of what I said - I see it's not converting the NSStream, it's grabbing the underlying NSData object right out of the NSOutputStream, which completely mitigates the potential memory duplication.

Further, given that the NSOutputStream looks like it can be swapped out for an alternative (which could potentially write to disk, be a loop-back stream going to a secondary thread, or even be used as the input to NSJSONSerialization), this really 'solves' all the problems (with the original exception that [self.outputStream hasSpaceAvailable] could return NO, at which point the data would be lost).

@couchdeveloper

Thank you Tony for responding!

First, I would like to apologize, I'm not very familiar with AFNetworking. So, my ideas may sound somewhat strange - occasionally.

However, IMO the current implementation of connection:didReceiveData: has a potential issue. This was the reason for my first post.

Well, it is probably a good idea to list the requirements for the core of a hypothetical library serving the same purpose as AFNetworking:

If this includes:

  1. Devices: equal or newer than iPhone 3G
  2. Downloadable content can be unrestricted in size (larger than internal memory).

Info: on my old iPhone 3G, and depending on the app, it may crash (due to memory issues) well before 3 MByte when the data is held in memory.

If this further includes:

  3. The process of downloading shall maintain a low memory footprint.

then, you might agree that a simple approach using a dynamic buffer is not sufficient.

While this is not the average case, a data set of 25 MByte is not huge at all. How would one accomplish the initial "import" of a database from a web service?

I remember that once, in a former company where I worked, we figured a possible data size of 2 GByte. We didn't use NSURLConnection, though. Sure, the subsequent synchronizations may require much smaller data sets. But how would you accomplish the partitioning on the server? Would you split the download into, say, 1000 connections? That would cause a significant performance penalty on the device.

IMHO, a framework should provide "built-in means" by which a user can download data of any size.

If we can agree that this is a reasonable requirement for AFNetworking, too, then we need something more advanced than an NSMutableData or an NSOutputStream with a dynamic buffer.

Regards
Andreas

On May 23, 2012, at 12:53 PM, Tony Million wrote:

I take back a little of what I said - I see its not converting the NSStream its grabbing the underlying NSData object rightout of the NSOutputStream, which completely mitigates the potential memory duplication.

Further, given the NSOutputStream looks like it can be swapped out for an alternative (which could potentially write to disk, or be a loop-back stream going to a secondary thread or even used as the input to NSJSONSerialization) then this really 'solves' all the problems (with the original exception that [self.outputStream hasSpaceAvailable] could return NO at which point the data would be lost).



@mattt
Contributor
mattt commented Jun 19, 2012

Thanks so much for all of the discussion around this.

Based on conversations I had about internals like this at WWDC, I'm pretty confident with the current implementation. If it is demonstrated that this issue causes reproducible problems, we can look into solutions.

@mattt mattt closed this Jun 19, 2012
@abillingsley

Hi All

I might have the start of a use case where using an NSOutputStream could start causing problems.
I am working on an Objective-C implementation of SignalR, which is a pub/sub library and provides a nice abstraction over a variety of transports (long polling, server-sent events, and eventually websockets). Under the covers the library uses AFNetworking, and in general it works really well. Recently I came across an issue when using Server-Sent Events: with this transport I access the NSOutputStream belonging to an AFHTTPRequestOperation. Since SSE rarely ever closes its connection to the server, I need to access the received data as it arrives. To do this I set the NSOutputStream's delegate and handle the NSStreamEventHasSpaceAvailable case, where I do something like this:

    NSData *buffer = [(NSOutputStream *)stream propertyForKey:NSStreamDataWrittenToMemoryStreamKey];
    buffer = [buffer subdataWithRange:NSMakeRange(_offset, [buffer length] - _offset)];

    NSInteger read = [buffer length];            
    if(read > 0)
    {
        _offset = _offset + read;
        //Do something interesting with the new data received
    }

The problem that I am experiencing seems to stem from the fact that AFURLConnectionOperation schedules the NSOutputStream on the run loop of the AFNetworking thread, which is different from the thread that my NSStreamDelegate is called on - and from the fact that NSRunLoop is not thread-safe. If I comment out the NSOutputStream's run-loop scheduling in AFURLConnectionOperation and instead schedule it myself, on the same thread as the NSStreamDelegate, then all the issues I was experiencing seem to go away.

At this point I am looking for any thoughts on how I might read the contents of the NSOutputStream in a thread-safe way. I really hope that I am overlooking something and that there is a really easy way to do this within the current AFNetworking implementation, since AFNetworking appears to be in a very stable state and moving away from the NSOutputStream might cause bigger problems than it solves.

Any thoughts are appreciated

Alex

@couchdeveloper

I don't believe you are using NSStreams the way they should be used ;) And, as already pointed out, there is rarely a use case where it is beneficial to use an NSStream object - and in the sole case where it would be useful (a paired output/input stream using an underlying ring buffer), it is difficult to use with NSURLConnection.

Fortunately, you could approach your problem differently in a much easier way if you find a way to override and implement the following method:

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    dispatch_async(workerQueue, ^{
        // process partial data
        ...
    });
}

The approach above will work very well for any data size, provided "process partial data" is faster than the rate at which the connection receives data (amortized over a couple of buffers). If processing is slower, you will queue up an increasing number of NSData objects - which consume memory, and your app may eventually crash.

You may also notice that the size of the buffers is quite small, ca. 7000 to 14000 bytes, which is a multiple of the network's window size. You may want to accumulate a few small buffers into a buffer with a certain minimum size before processing them.
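
Such accumulation could look like the following C sketch (MIN_FLUSH and the callback signature are illustrative assumptions, not part of AFNetworking):

```c
#include <stddef.h>
#include <string.h>

#define MIN_FLUSH 16384 /* illustrative minimum processing size */

typedef struct {
    unsigned char data[MIN_FLUSH];
    size_t used;
} accumulator;

/* Appends an incoming chunk; invokes flush() whenever the accumulator
   reaches MIN_FLUSH bytes. Bytes that don't fill a whole flush unit stay
   buffered until more data arrives. */
void accumulate(accumulator *a, const unsigned char *chunk, size_t len,
                void (*flush)(const unsigned char *, size_t)) {
    while (len > 0) {
        size_t space = MIN_FLUSH - a->used;
        size_t n = len < space ? len : space;
        memcpy(a->data + a->used, chunk, n);
        a->used += n;
        chunk += n;
        len -= n;
        if (a->used == MIN_FLUSH) {
            flush(a->data, a->used); /* hand a full buffer to the processor */
            a->used = 0;
        }
    }
}
```

At end of download, any remaining a->used bytes would be flushed as a final partial buffer.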

Your data processing method must also be able to deal with partial data - which I assume is the case.

You may also choose to process the data synchronously. But consider that if you process the data slowly, you may block the thread on which other connection delegates may be scheduled as well.

@abillingsley

Thanks for the response. It seems reasonable that your suggestion would work, and it will probably simplify things a bit, but I was hoping to be able to access this buffer without creating my own subclass of AFURLConnectionOperation.

In the short term I will probably move to something similar to what you suggest, but if anyone has a suggestion about how I can access this buffer through the use of NSStream, or by some other means that doesn't require creating my own subclass, I am interested in that too :)

@couchdeveloper

The approach you are seeking can be accomplished using a "paired" input stream/output stream. This would require being able to set the output stream of the AFHTTPRequestOperation, which is currently not possible. The output stream is private and is created internally by other means.

Anyway, what you need is a paired input/output stream, which would be created as follows:

        CFReadStreamRef istream; 
        CFWriteStreamRef ostream;
        CFStreamCreateBoundPair(kCFAllocatorDefault, &istream, &ostream, 8*1024);

This creates both the input stream and the output stream, operating on a shared ring buffer of size 8*1024 bytes.

So, for a hypothetical interface of a "Producer" - which is represented by the AFHTTPRequestOperation - we would set the output stream as follows (assuming ARC):

    requestOp.outputStream = CFBridgingRelease(ostream);

And accordingly for the "Consumer" (your data processor):

    signalRhandler.inputStream = CFBridgingRelease(istream);

Getting this to work is actually tricky, though. First, you need a thread on which to schedule the delegates, which must be implemented by both the Consumer and the Producer. Secondly, the read and write methods of the streams can block. That is, if you produce more data than fits into the ring buffer, you need to temporarily queue the remaining data somewhere, since you need to relinquish the thread (that is, return to the run loop in order to let the consumer read the data from the ring buffer). Then you need to remember somewhere that you saved data elsewhere, and ensure that the Producer's delegate actually receives an extra call in order to take the necessary actions to read from the temporary buffer and hand the data over to the consumer.

Furthermore, if the consumer is slow, you have to temporarily save whole buffers somewhere as well. And of course, you need to wrap enough code around all this to keep things flowing. Getting this to work without running into deadlocks or losing data is not that easy.

You might consider using dedicated threads for the Producer and the Consumer, which is less prone to deadlocks - for example, letting the producer use the network thread.

With this thread setup, things become simpler. For connection:didReceiveData::

    NSUInteger remainingBytes = [data length];
    const uint8_t *buffer = [data bytes];
    while (remainingBytes > 0) {
        NSInteger count = [_ostream write:buffer maxLength:remainingBytes];
        if (count <= 0) {
            break;  // stream error or stream closed - handle appropriately
        }
        buffer += count;
        remainingBytes -= count;
    }

Note: write will block until space is available in the stream's ring buffer. It may block indefinitely if the consumer stalls!

And for the consumer, running on a different thread:

- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
{
    uint8_t buffer[1024];
    switch (streamEvent)
    {
        case NSStreamEventHasBytesAvailable: {
            NSInteger count = 0;
            while ([(NSInputStream *)theStream hasBytesAvailable]
                   && (count = [(NSInputStream *)theStream read:buffer maxLength:sizeof(buffer)]) > 0) {
                _bytesRead += count;
                // process partial buffer with size 'count'
            }
            break;
        }
        ...
    }
}

Note: read will block if there is no data available! Thus, we check hasBytesAvailable first.

Edit:
We can see that in this case we don't use delegate messages for the "Producer". The producer uses the -write:maxLength: method in a synchronous fashion. This method will block the thread when there is no space available in the ring buffer of the stream pair.

The "Consumer" uses the delegate protocol, employing an asynchronous style. But if its thread is dedicated to this consumer only, we could also use the -read:maxLength: method directly, using a synchronous or "blocking" approach:

@implementation Consumer

- (void)run {
    @autoreleasepool {
        _istream.delegate = nil;
        [_istream open];
        const int BUFFER_SIZE = 8*1024;
        uint8_t buffer[BUFFER_SIZE];
        NSInteger count = 0;
        while ((count = [_istream read:buffer maxLength:BUFFER_SIZE]) > 0) {
            _bytesRead += count;
            // process partial data
        }
        [_istream close];
    }
}
@end

-read:maxLength: will block the thread until data is available in the stream's ring buffer.

We then execute the Consumer's -run method on a dedicated thread:

[NSThread detachNewThreadSelector:@selector(run) toTarget:consumer withObject:nil];        

This executes the -run method until -read:maxLength: returns zero - that is, until the stream has finished delivering data.

The "blocking" approach makes it possible to utilize "data processors" which aren't capable of handling partial data - for example, recursive descent parsers - which need to view their input as one contiguous sequence of bytes from start to end.

The drawback of the Producer employing a synchronous blocking approach is that it is difficult - if not impossible - to cancel a connection while it is blocked on its delegate thread, if the cause of the blocking cannot be resolved otherwise.

@JDeokule

I'm not sure if this is related but I'm having very strange and inconsistent behavior (I'm new to AFNetworking).

I was previously using Three20's TTImageView to load a network image. Since upgrading to iOS6 I've been replacing Three20 with Nimbus/AFNetworking.

I have 2 UITableViewCell's on the screen, each having its own network image.

I'm inconsistently getting what I think is a deadlock. Two threads both freeze at the same point after trying to write to the output stream, between the two "OSSpinLockLock" lines below.

libsystem_c.dylib`OSSpinLockLock$VARIANT$mp + 4:
0x94d97354:  xorl   %eax, %eax
0x94d97356:  orl    $-1, %edx
0x94d97359:  lock   
0x94d9735a:  cmpxchgl%edx, (%ecx)
0x94d9735d:  jne    0x94d97360                ; OSSpinLockLock$VARIANT$mp + 16
0x94d9735f:  ret    
0x94d97360:  xorl   %eax, %eax
0x94d97362:  movl   $1000, %edx
0x94d97367:  pause  
0x94d97369:  cmpl   %eax, (%ecx)
0x94d9736b:  je     0x94d97356                ; OSSpinLockLock$VARIANT$mp + 6
0x94d9736d:  decl   %edx
0x94d9736e:  jne    0x94d97367                ; OSSpinLockLock$VARIANT$mp + 23
0x94d97370:  pushl  $1
0x94d97372:  pushl  $1
0x94d97374:  pushl  $0
0x94d97376:  pushl  $0
0x94d97378:  movl   $4294967235, %eax
0x94d9737d:  int    $-127
0x94d9737f:  addl   $16, %esp
0x94d97382:  jmp    0x94d97350                ; spin_lock$VARIANT$mp
@couchdeveloper

It sounds strange that two different threads write to one output stream. AF uses a private shared thread for scheduling the connection delegate methods. And even if each connection had its own thread, there should be only one output stream per connection.

Could you please provide more information? :)
