Wednesday, October 27, 2010

Important Wcf performance issue + workaround

@YaronNaveh

I have written about WCF performance issues before, but this one seems to be the biggest. Valery published an interesting performance issue in the WCF forum. In short, a WCF client tries to consume a non-WCF service where the contract looks something like this:

class Foo
{
    byte[] picture;
}


In SOAP, byte arrays are encoded as base64 strings, so the element can look like this:

<picture>/9j/4AAQSkZJReV6R8MLi7nW6UUUViWf/Z.....</picture>

or with line breaks after every 73 characters, like this:

<picture>/9j/4AAQSkZJReV6R8MLi7nW61+58zBz5Q+7Xpdj
/PK/4AAQSkPOIeV6R8MLi7nW61+58zBz5Q+7Xpdj
/9R/4AAQSkZJReV6R8MLi7nW6VZ788zBz5Q+7Xpdj
4U4wVoqwUUUViWf/Z</picture>

both options are valid according to the base64 RFC:

Implementations MUST NOT add line feeds to base-encoded data unless
the specification referring to this document explicitly directs base
encoders to add line feeds after a specific number of characters.

OK, so the RFC does not really advocate this... But it is a fact that many SOAP stacks still use this MIME-originated format, and WCF supports it as well.

So what is the problem?
It seems that when WCF gets a message which contains base64 with CRLF line breaks, the processing is slower by a few seconds(!). A drill-down shows that the problem is in the DataContract serializer. Take a look at this program:

using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class Foo
{
    [DataMember]
    public byte[] picture;
}

class Program
{
    static void Main(string[] args)
    {
        var t1 = getTime(@"C:\temp\base64_with_line_breaks.txt");
        var t2 = getTime(@"C:\temp\base64_without_line_breaks.txt");

        Console.WriteLine("Time with breaks: " + t1 + " seconds");
        Console.WriteLine("Time with no breaks: " + t2 + " seconds");

        Console.ReadKey();
    }

    // Deserializes the given file 40 times and returns the total time in seconds.
    static double getTime(string path)
    {
        var ser = new DataContractSerializer(typeof(Foo));
        using (var stream = new FileStream(path, FileMode.Open))
        {
            var start = DateTime.Now;

            for (int i = 0; i < 40; i++)
            {
                ser.ReadObject(stream);
                stream.Position = 0;
            }

            var end = DateTime.Now;
            var t = end - start;
            return t.TotalSeconds;
        }
    }
}

For those of you who are interested in testing this, the files are here and here.
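In case the linked files are not at hand, a minimal sketch like the following can generate equivalent test files. It reuses the Foo contract and file paths from the program above; the one-megabyte random payload and the 76-character wrapping produced by Base64FormattingOptions.InsertLineBreaks are only stand-ins for whatever the original service returned:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

static class TestFileGenerator
{
    // Writes the two sample files that getTime() above expects.
    public static void Generate()
    {
        // ~1 MB of arbitrary bytes standing in for the picture.
        var bytes = new byte[1024 * 1024];
        new Random(42).NextBytes(bytes);

        // Let the DataContractSerializer produce the "clean" XML -
        // it writes byte[] members as base64 without line breaks.
        var ser = new DataContractSerializer(typeof(Foo));
        string plainXml;
        using (var ms = new MemoryStream())
        {
            ser.WriteObject(ms, new Foo { picture = bytes });
            plainXml = Encoding.UTF8.GetString(ms.ToArray());
        }

        // Build the "wrapped" variant by swapping the base64 payload for the
        // MIME-style version (InsertLineBreaks wraps after 76 characters).
        string plain = Convert.ToBase64String(bytes);
        string wrapped = Convert.ToBase64String(bytes, Base64FormattingOptions.InsertLineBreaks);
        string wrappedXml = plainXml.Replace(plain, wrapped);

        File.WriteAllText(@"C:\temp\base64_without_line_breaks.txt", plainXml);
        File.WriteAllText(@"C:\temp\base64_with_line_breaks.txt", wrappedXml);
    }
}

Calling TestFileGenerator.Generate() once before running the measurements should reproduce a similar gap.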

The output is:

Time with breaks: 10.8998196 seconds
Time with no breaks: 0.0029994 seconds

This clearly reveals a performance problem.

Why does this happen?

While debugging the .Net source code, I found this in the XmlBaseReader class (the code comments were in the source - they are not mine):


int ReadBytes(...)
{
    try
    {
        ...
    }
    catch (FormatException exception)
    {
        // Something was wrong with the format, see if we can strip the spaces
        int i = 0;
        int j = 0;
        while (true)
        {
            while (j < charCount && XmlConverter.IsWhitespace(chars[j]))
                j++;
            if (j == charCount)
                break;
            chars[i++] = chars[j++];
        }
        ...
    }
}

So the data contract serializer tries to read the base64 string, but for some reason it succeeds only if the string has no white space inside it (we could debug further to see exactly why, but that is exhausting for one post :). The serializer then removes all the white space (which requires copying the buffer again) and tries again. This is definitely a performance issue.

Notes:

  • This happens with both .Net framework 3.5 and 4.0.

  • This is a DataContract-specific issue - it does not happen when you use other .Net mechanisms such as Convert.FromBase64String (see the sketch right after these notes).

  • I have reported this on Microsoft Connect; you are welcome to vote this issue up.
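As a quick illustration of the Convert.FromBase64String note, here is a minimal standalone sketch that decodes the same payload 40 times, once without and once with the MIME-style line breaks; since FromBase64String ignores white space directly, both variants should finish in comparably little time:

using System;
using System.Diagnostics;

class FromBase64Check
{
    static void Main()
    {
        var bytes = new byte[1024 * 1024];
        new Random(42).NextBytes(bytes);

        // The same payload in both wire forms.
        string plain = Convert.ToBase64String(bytes);
        string wrapped = Convert.ToBase64String(bytes, Base64FormattingOptions.InsertLineBreaks);

        Console.WriteLine("No breaks:   " + Time(plain) + " seconds");
        Console.WriteLine("With breaks: " + Time(wrapped) + " seconds");
    }

    // Decodes the string 40 times and returns the total time in seconds.
    static double Time(string base64)
    {
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < 40; i++)
            Convert.FromBase64String(base64);
        watch.Stop();
        return watch.Elapsed.TotalSeconds;
    }
}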

Workarounds

There are a few workarounds. The trade-off is generally API convenience (or "where you prefer to put the 'dirty' stuff").

1. As Valery noticed, you can change the contract to use String instead of byte[]; Convert.FromBase64String will then give you the byte array (a sketch follows this list).

2. Change your contracts to use the XmlSerializer instead of the DataContract serializer; the former does not suffer from this issue. The XmlSerializer is generally slower (when base64 is not involved, that is), so this is what you lose. You get a better API here, as clients do not need to manipulate the base64 string.

3. The best option, of course, is to change the service implementation to return base64 without line breaks. Also, if large binaries are returned anyway, it may be better to employ MTOM.

4. A WCF custom encoder can strip the spaces from the message before it is deserialized. However, this also involves copying strings, so it is beneficial only in rare cases.
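For workaround 1, a minimal sketch of what the adjusted contract might look like (the pictureBase64 and GetPicture names are only illustrative; the wire element name is kept as "picture" via the DataMember Name property):

using System;
using System.Runtime.Serialization;

[DataContract]
public class Foo
{
    // Receive the raw base64 text exactly as it appears on the wire,
    // line breaks included, so the serializer never parses it as binary.
    [DataMember(Name = "picture")]
    public string pictureBase64;

    // Decode on demand; Convert.FromBase64String skips the embedded
    // CR/LF characters without the costly retry.
    public byte[] GetPicture()
    {
        return Convert.FromBase64String(pictureBase64);
    }
}

Clients then call GetPicture() wherever they previously read the byte array.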
@YaronNaveh


    13 comments:

    Nen said...

    Wow

    Moshe said...

    Wow, it's a really deep dive!!!
    Thank you!!!

    Valéry Letroye said...

    This is really a great analysis!

Thanks a lot for investigating this issue so deeply and reporting your findings.

You suggest good alternatives to solve the immediate performance issue. But long term, for a company with hundreds of developers creating business applications which consume web services, I still hope for a fix in WCF, to fully benefit from the out-of-the-box automatic code generation.

    Dmitry Ornatsky said...

    Great job debugging this.
I'm surprised that a problem like that has survived for so long, especially keeping in mind that the workaround is so obviously hacky.

    Anonymous said...

    Thanks Man for the great investigations... !! You Rock !!
    -Honest Indian

    Valéry Letroye said...

    A fix won't be provided by Microsoft before the release of .Net 4.5.

Here is the answer from Microsoft's Distributed Services team: "The current release of 4.0 and 3.5 does not include a fix for this issue, and the dev team is looking into this for a future release which would be either 4.5 or post-4.5 release. I don’t have a date for this though."

    Yaron Naveh (MVP) said...

    Thanks Valéry for the follow up!

    Sim said...

Has anyone tried the XmlSerializer workaround? For me it didn't work: one megabyte of base64-encoded data still takes more than a minute to deserialize.
To make sure I did things right: I simply added XmlSerializer as the contents of the Serializer tag in the svcmap file.

    Yaron Naveh (MVP) said...

    Sim

When you add the service reference (or use svcutil), you need to specify that the XmlSerializer should be used.

    Sim said...

But isn't that exactly what I do by adding XmlSerializer as the contents of the Serializer tag in the svcmap file?

Reference.svcmap:
...
<Serializer>XmlSerializer</Serializer>
...

    Yaron Naveh (MVP) said...

That file is not meant to be hand-edited, so even if it is related, there is another way. In particular, some attributes need to be added to the proxy.

Try to use svcutil with this flag:

    svcutil "w.wsdl" /serializer:XmlSerializer

    Sim said...

    Yaron

I generated the proxy with svcutil and I can see all the XmlSerializer attributes in the generated code. However, it still takes multiple minutes to decode just one megabyte of base64 data. I can see in Fiddler that the reply has already been received, and I can see in VS that 'System.Runtime.Serialization.resources.dll' is being loaded at just that moment.

Anyway, we will switch to MTOM soon, when our server is capable of it.

    Unknown said...

4 years later this performance problem is still alive and rocking.

Found the reason thanks to your post. The string workaround worked well: 600 KB of binary data took ~3 min; with the workaround, <1 sec.