Wednesday, December 18, 2013

Consuming EBS-EDT SOAP service from WCF

@YaronNaveh

If you want personal guidance with EBS-EDT feel free to mail me at yaronn01@gmail.com

A while ago the Ontario Ministry of Health and Long-Term Care published this document, which explains how to consume their new SOAP web service. (For the benefit of Google, the exact title is "Technical Specification for Medical Claims Electronic Data Transfer (MCEDT) Service via Electronic Business Services (EBS) Ministry of Health and Long-Term Care".) I have received over a dozen questions about how to consume this service with WCF. Unfortunately it is not a simple task, since the service uses a complex configuration which is not available in any of the built-in WCF bindings. However, it is possible with some custom code. Below I describe the general scheme to make this work. I know some community members are preparing a simple wrapper for this, so I will publish it here once it is ready.


The Errors
Depending on which path you choose for the implementation, the most common error message you are likely to receive is the dreadful:

The incoming message was signed with a token which was different from what used to encrypt the body. This was not expected.

There are other possible errors as well, and some consumers may not even know where to start.

The Solution
1. Since the client needs to send both a UsernameToken and an X.509 certificate (and sign with the latter), we need to build the binding in code:

var b = new CustomBinding();

// asymmetric (X.509) message security, plus an additional signed UsernameToken
var sec = (AsymmetricSecurityBindingElement)SecurityBindingElement.CreateMutualCertificateBindingElement(
    MessageSecurityVersion.WSSecurity10WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10);
sec.EndpointSupportingTokenParameters.Signed.Add(new UserNameSecurityTokenParameters());
sec.MessageSecurityVersion =
    MessageSecurityVersion.WSSecurity10WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10;
sec.IncludeTimestamp = true;
sec.MessageProtectionOrder = System.ServiceModel.Security.MessageProtectionOrder.SignBeforeEncrypt;
sec.EnableUnsecuredResponse = true; // key to the manual decryption in the next steps

b.Elements.Add(sec);
b.Elements.Add(new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8));
b.Elements.Add(new HttpsTransportBindingElement());

// the endpoint address and DNS identity below are placeholders - use the real service values
var c = new ServiceReference1.SimpleServiceSoapClient(
    b,
    new EndpointAddress(new Uri("https://www.bankhapoalim.co.il/"),
        new DnsEndpointIdentity("WSE2QuickStartServer"),
        new AddressHeaderCollection()));

c.ClientCredentials.UserName.UserName = "yaron";
c.ClientCredentials.UserName.Password = "1234";
c.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode =
    System.ServiceModel.Security.X509CertificateValidationMode.None;
c.ClientCredentials.ServiceCertificate.DefaultCertificate =
    new X509Certificate2(@"C:\Program Files\Microsoft WSE\v2.0\Samples\Sample Test Certificates\Server Public.cer");
c.ClientCredentials.ClientCertificate.Certificate =
    new X509Certificate2(@"C:\Program Files\Microsoft WSE\v2.0\Samples\Sample Test Certificates\Client Private.pfx", "wse2qs");

One thing to notice in this code is that it contains the username and password, so change them to your own credentials.
Another thing to notice is that the client certificate is loaded from disk. You could change that to load it from the Windows certificate store if you wish (a sketch follows after these notes). As for the server certificate, you can put any dummy certificate there, including the same one as the client certificate (it will not actually be used, but WCF needs something in this setting).
Also note the EnableUnsecuredResponse = true. It is key to the next steps.
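
If you prefer the Windows certificate store over a file on disk, the client certificate line could be replaced with something like this sketch (the store location and subject name here are assumptions - adjust them to wherever and however your certificate is installed):

// assumption: the certificate was imported into CurrentUser\My and
// "my client" is its subject name - change both to match your setup
c.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.CurrentUser, StoreName.My,
    X509FindType.FindBySubjectName, "my client");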

2. Since the request needs to be signed only (not encrypted), let's configure the contract in Reference.cs with the ProtectionLevel attribute:

using System.Net.Security;
[ServiceContract(..., ProtectionLevel=ProtectionLevel.Sign)]
public interface ServicePortType {...}

3. WCF is reluctant to decrypt the response in this setup, so we need to do the decryption manually. This is the hardest part, but I give most of the code here so hopefully it will be easier. You need to implement a custom message encoder and configure the binding above to use it instead of the text message encoder. Read here on how to implement an encoder.
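
If you have never written an encoder before, here is a rough skeleton of the wrapping approach, as a sketch only: it delegates everything to the standard text encoder and only intercepts ReadMessage. DecryptAndStripSecurity is a hypothetical helper you would implement from the decryption code in the next steps.

using System;
using System.IO;
using System.ServiceModel.Channels;
using System.Text;

public class DecryptingEncodingBindingElement : MessageEncodingBindingElement
{
    readonly MessageEncodingBindingElement inner =
        new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8);

    public override MessageVersion MessageVersion
    {
        get { return inner.MessageVersion; }
        set { inner.MessageVersion = value; }
    }

    public override BindingElement Clone()
    {
        return new DecryptingEncodingBindingElement();
    }

    public override MessageEncoderFactory CreateMessageEncoderFactory()
    {
        return new DecryptingEncoderFactory(inner.CreateMessageEncoderFactory());
    }

    public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
    {
        // standard plumbing so the transport picks up this encoder
        context.BindingParameters.Add(this);
        return context.BuildInnerChannelFactory<TChannel>();
    }
}

class DecryptingEncoderFactory : MessageEncoderFactory
{
    readonly MessageEncoderFactory inner;
    public DecryptingEncoderFactory(MessageEncoderFactory inner) { this.inner = inner; }
    public override MessageEncoder Encoder { get { return new DecryptingEncoder(inner.Encoder); } }
    public override MessageVersion MessageVersion { get { return inner.MessageVersion; } }
}

class DecryptingEncoder : MessageEncoder
{
    readonly MessageEncoder inner;
    public DecryptingEncoder(MessageEncoder inner) { this.inner = inner; }

    public override string ContentType { get { return inner.ContentType; } }
    public override string MediaType { get { return inner.MediaType; } }
    public override MessageVersion MessageVersion { get { return inner.MessageVersion; } }

    public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager, string contentType)
    {
        // grab the raw bytes, decrypt the body and strip the security header
        // (steps 4 and 5 below), then let the regular text encoder parse the result
        byte[] raw = new byte[buffer.Count];
        Buffer.BlockCopy(buffer.Array, buffer.Offset, raw, 0, buffer.Count);
        bufferManager.ReturnBuffer(buffer.Array);
        byte[] clean = DecryptAndStripSecurity(raw); // hypothetical helper - see below
        return inner.ReadMessage(new MemoryStream(clean), int.MaxValue, contentType);
    }

    public override Message ReadMessage(Stream stream, int maxSizeOfHeaders, string contentType)
    {
        return inner.ReadMessage(stream, maxSizeOfHeaders, contentType);
    }

    public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
    {
        return inner.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
    }

    public override void WriteMessage(Message message, Stream stream)
    {
        inner.WriteMessage(message, stream);
    }

    byte[] DecryptAndStripSecurity(byte[] raw)
    {
        // implement with the decryption code in steps 4 and 5
        throw new NotImplementedException();
    }
}

In the binding from step 1 you would then add this element instead of the TextMessageEncodingBindingElement.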

4. You need to override the ReadMessage method of the encoder and decrypt the response message in it.

This code shows how to decrypt a message (not necessarily in the context of an encoder):

//decrypt the encryptedKey to get the session key:
//==================================================
string wrappingKey = "put here the value base64 CipherValue under the encryptedKey element";
X509Certificate2 serverCert = new X509Certificate2(File.ReadAllBytes(@"c:\temp\xws-security-server.p12"), "changeit");
RSACryptoServiceProvider rsa = (RSACryptoServiceProvider)serverCert.PrivateKey;
// true = OAEP padding; the session key is wrapped with our public key
var enckey = rsa.Decrypt(Convert.FromBase64String(wrappingKey), true);
return enckey; // this is the AES session key used below

//decrypt the soap body using the session key (aes128):
//=====================================================
internal static byte[] ExtractIVAndDecrypt(SymmetricAlgorithm algorithm, byte[] cipherText, int offset, int count)
{
    byte[] plainText;
    if (cipherText == null)
        throw new ArgumentNullException("cipherText");
    if ((count < 0) || (count > cipherText.Length))
        throw new ArgumentOutOfRangeException("count");
    if ((offset < 0) || (offset > (cipherText.Length - count)))
        throw new ArgumentOutOfRangeException("offset");

    // the IV is the first cipher block
    int ivSize = algorithm.BlockSize / 8;
    byte[] iv = new byte[ivSize];
    Buffer.BlockCopy(cipherText, offset, iv, 0, iv.Length);

    algorithm.Padding = PaddingMode.ISO10126;
    algorithm.Mode = CipherMode.CBC;
    try
    {
        using (ICryptoTransform transform = algorithm.CreateDecryptor(algorithm.Key, iv))
        {
            plainText = transform.TransformFinalBlock(cipherText, offset + iv.Length, count - iv.Length);
        }
    }
    catch (CryptographicException e)
    {
        throw new Exception("failed to decrypt the cipher text", e);
    }
    return plainText;
}

//the IV is the first 16 bytes (in our case) of the encrypted cipher (not of the key)
private byte[] GetIV(byte[] cipher)
{
    byte[] iv = new byte[16];
    Array.Copy(cipher, iv, 16);
    return iv;
}

//note: the cipher still contains the IV as its first block - we do not strip it,
//since ExtractIVAndDecrypt expects the full cipher and skips the IV itself
byte[] cipher = Convert.FromBase64String("put here the base64 encrypted cipherValue under the encrypted body");
byte[] key = this.key; //we are assumed to already know the session key, e.g. we decrypted it with the code above
byte[] iv = GetIV(cipher);
var aes = new AesCryptoServiceProvider();
aes.KeySize = 128;
aes.Key = key;
aes.IV = iv;
var body = ExtractIVAndDecrypt(aes, cipher, 0, cipher.Length);
Console.WriteLine("decrypted body is: " + Encoding.UTF8.GetString(body));

This code needs access to your private key so it can extract the session key from the message, and it also needs some elements from the response. Once you have the decrypted message, you can replace the encrypted body part in the message provided to the encoder with the decrypted content.
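
A sketch of how that replacement might look with plain XmlDocument manipulation (requires System.Xml; rawSoapXml and decryptedBodyXml are hypothetical variables holding the raw response and the decrypted body fragment, and your response structure may differ slightly):

// sketch: swap the <EncryptedData> inside the soap body with the decrypted xml
var doc = new XmlDocument();
doc.PreserveWhitespace = true;
doc.LoadXml(rawSoapXml); // the response as received from the network

var ns = new XmlNamespaceManager(doc.NameTable);
ns.AddNamespace("s", "http://schemas.xmlsoap.org/soap/envelope/");
ns.AddNamespace("xenc", "http://www.w3.org/2001/04/xmlenc#");

XmlNode body = doc.SelectSingleNode("/s:Envelope/s:Body", ns);
XmlNode encryptedData = body.SelectSingleNode("xenc:EncryptedData", ns);

// parse the decrypted body fragment and import it into the response document
var fragment = new XmlDocument();
fragment.LoadXml(decryptedBodyXml);
XmlNode imported = doc.ImportNode(fragment.DocumentElement, true);
body.ReplaceChild(imported, encryptedData);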

5. The last mission to accomplish in the encoder is to delete the <Security> element (and all of its child nodes) from the response message before you return it to WCF. Otherwise WCF will try to decrypt the message, which is redundant since we have just decrypted it ourselves (and WCF's decryption would fail anyway). Remember the EnableUnsecuredResponse flag from step #1? It tells WCF not to expect any security, so stripping these elements out is safe.
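
Continuing the sketch above (same doc and ns objects), removing the header could look like this:

// sketch: drop the <Security> header (and everything under it) from the same
// XmlDocument before handing the xml back to the text encoder
ns.AddNamespace("wsse",
    "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd");
XmlNode security = doc.SelectSingleNode("/s:Envelope/s:Header/wsse:Security", ns);
if (security != null)
    security.ParentNode.RemoveChild(security);

byte[] fixedXml = Encoding.UTF8.GetBytes(doc.OuterXml); // feed this back to the text encoder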

Information on some possible errors in this process is available here.

MIME Attachments

Hopefully by now you have a working client. Some of the operations also return an attachment from the service. The attachment is in SwA (SOAP with Attachments), a MIME format which is a little different from the MTOM that WCF knows about. To extract the attachment you could run some kind of MIME parser library as the first step of your encoder (apply it over the raw bytes from the network). Copy the first MIME part to the Message object (this is the SOAP). The second part is the attachment, which you can keep as a property on the custom encoder or in some other context available to your application code.
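
As a very naive illustration only, the split could look like this sketch. It assumes the attachment part is textual (e.g. base64) and that you already parsed the boundary value out of the response's Content-Type header; a production client should use a real MIME parser, especially for binary attachments.

// naive SwA split: parts[0] is the SOAP envelope, parts[1] is the attachment
static void SplitSwa(string rawResponse, string boundary, out string soapXml, out string attachment)
{
    string[] parts = rawResponse.Split(
        new[] { "--" + boundary }, StringSplitOptions.RemoveEmptyEntries);

    soapXml = StripPartHeaders(parts[0]);
    attachment = StripPartHeaders(parts[1]);
}

static string StripPartHeaders(string mimePart)
{
    // the body of a MIME part starts after the first blank line
    int bodyStart = mimePart.IndexOf("\r\n\r\n", StringComparison.Ordinal) + 4;
    return mimePart.Substring(bodyStart).Trim();
}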

Fault Contract
Since there is no formal fault contract in the WSDL, you should inspect any incoming SOAP fault using a custom message inspector.
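
A minimal sketch of such an inspector (wire it up to the client endpoint with a custom IEndpointBehavior, not shown here):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// log any incoming soap fault, since the WSDL declares no fault contract
public class FaultInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return null; // nothing to correlate
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        if (!reply.IsFault)
            return;

        // buffer the message so the channel can still consume it after we peek at it
        MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
        reply = buffer.CreateMessage();

        MessageFault fault = MessageFault.CreateFault(buffer.CreateMessage(), int.MaxValue);
        Console.WriteLine("soap fault: " + fault.Reason.GetMatchingTranslation().Text);
    }
}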

To sum up, consuming EBS-EDT from WCF is not easy, but it is doable. Good luck!

@YaronNaveh

What's next? Get this blog's RSS updates or register for mail updates!

Tuesday, June 25, 2013

Validating Windows Mobile App Store Receipts Using Node.js

@YaronNaveh

When your Windows Phone user performs an in-app purchase you can get its receipt using the API:

<Receipt Version="1.0" ReceiptDate="2012-08-30T23:10:05Z" CertificateId="b809e47cd0110a4db043b3f73e83acd917fe1336" ReceiptDeviceId="4e362949-acc3-fe3a-e71b-89893eb4f528">
<AppReceipt Id="8ffa256d-eca8-712a-7cf8-cbf5522df24b" AppId="55428GreenlakeApps.CurrentAppSimulatorEventTest_z7q3q7z11crfr" PurchaseDate="2012-06-04T23:07:24Z" LicenseType="Full" />
<ProductReceipt Id="6bbf4366-6fb2-8be8-7947-92fd5f683530" ProductId="Product1" PurchaseDate="2012-08-30T23:08:52Z" ExpirationDate="2012-09-02T23:08:49Z" ProductType="Durable" AppId="55428GreenlakeApps.CurrentAppSimulatorEventTest_z7q3q7z11crfr" />
<Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
<SignedInfo>
<CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
<SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
<Reference URI="">
<Transforms>
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
</Transforms>
<DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
<DigestValue>cdiU06eD8X/w1aGCHeaGCG9w/kWZ8I099rw4mmPpvdU=</DigestValue>
</Reference>
</SignedInfo>
<SignatureValue>SjRIxS/2r2P6ZdgaR9bwUSa6ZItYYFpKLJZrnAa3zkMylbiWjh9oZGGng2p6/gtBHC2dSTZlLbqnysJjl7mQp/A3wKaIkzjyRXv3kxoVaSV0pkqiPt04cIfFTP0JZkE5QD/vYxiWjeyGp1dThEM2RV811sRWvmEs/hHhVxb32e8xCLtpALYx3a9lW51zRJJN0eNdPAvNoiCJlnogAoTToUQLHs72I1dECnSbeNPXiG7klpy5boKKMCZfnVXXkneWvVFtAA1h2sB7ll40LEHO4oYN6VzD+uKd76QOgGmsu9iGVyRvvmMtahvtL1/pxoxsTRedhKq6zrzCfT8qfh3C1w==</SignatureValue>
</Signature>
</Receipt>

You need to send this receipt to your backend system to validate the signature. If that backend happens to be in C#, your life is easy, as the official documentation provides exact validation instructions. If you use another platform, be aware that there are a few gotchas in the validation process.
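
Roughly, the C# path boils down to something like this sketch (not the official sample verbatim; the file paths are placeholders, and the certificate is assumed to have been downloaded from the store already and saved to disk):

using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;
using System.Xml;

class ReceiptValidator
{
    static bool Validate(string receiptPath, string certificatePath)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(receiptPath);

        X509Certificate2 cert = new X509Certificate2(certificatePath);

        SignedXml signedXml = new SignedXml(doc);
        XmlNodeList signatures = doc.GetElementsByTagName("Signature", "http://www.w3.org/2000/09/xmldsig#");
        signedXml.LoadXml((XmlElement)signatures[0]);

        // true = verify the signature only, without validating the certificate chain
        return signedXml.CheckSignature(cert, true);
    }
}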

Based on several requests I have checked the feasibility of using my xml-crypto node.js module to perform this validation. There were two challenges that required patching xml-crypto. Both of them caused the signature digest calculation to differ from the digest stated by the store, which failed the validation process.

White spaces
As you can see, the sample receipt above contains white space. White space in XML is meaningful, and once an XML document has been signed it is not legal to remove or alter it. However, take a look at these lines from Microsoft's C# validation instructions:

XmlDocument xmlDoc = new XmlDocument();
xmlDoc.Load("..\\..\\receipt.xml");

The receipt is loaded into an XmlDocument. Since it is not set otherwise, the PreserveWhitespace property defaults to false. This means the actual validation code later on runs not on the actual XML received from the network/disk, but on an altered version of it with all white space stripped. This is non-standard and confusing. Anyway, if you use node.js, remember to first remove the white space before processing the document further. Unfortunately xmldom, the module xml-crypto uses for XML processing, does not provide this option. I did a quick patch for it and for now have put it in my private xmldom repo. Just initialize the parser with the ignoreWhiteSpace flag:

var dom = require('xmldom').DOMParser
var doc = new dom({ignoreWhiteSpace: true}).parseFromString(xmlstr);


Debugging this was quite painful, which is why I posted this tweet shortly after:



Implicit Exclusive Xml Canonicalization
Canonicalization is probably one of the most confusing topics in XML digital signatures. In a nutshell, before processing XML (either when signing it or when validating a signature) we need to transform it to a canonical form in terms of attribute order, namespace declarations, white space, etc. There are multiple standards for doing so, and they can be chained together, so the signed XML should state which standard(s) it uses:

<Transforms>
<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
<Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
</Transforms>

This is the transformation stated by the Windows Store receipt:

<Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />

In practice, when trying to validate according to this transformation, xml-crypto shows this validation error:

[ 'invalid signature: for uri calculated digest is aBE4HWVLJ8EyKMu7Q3moFzcT72NPx9OngF0E1X4dRJw= but the xml to validate supplies digest Uvi8jkTYd3HtpMmAMpOm94fLeqmcQ2KCrV1XmSuY1xI=' ]

I built a simple C# app to do this validation according to the Microsoft sample - it worked. The sample uses the high-level SignedXml class. I then built a C# snippet to do the validation in a lower-level way by applying the enveloped-signature transformation myself - this failed.

I had to dig deep with Reflector to find that internally the SignedXml .NET class always applies the Exclusive XML Canonicalization standard in addition to any explicitly defined transformations. This happens in the TransformChain.TransformToOctetStream() method, which is used internally when the processing starts from SignedXml:

internal Stream TransformToOctetStream(object inputObject, Type inputType, XmlResolver resolver, string baseUri)
{
    ...
    CanonicalXml xml4 = new CanonicalXml((XmlDocument) output, resolver);
    return new MemoryStream(xml4.GetBytes());
}

The CanonicalXml class has an internal property m_c14nDoc which holds the canonicalized version.

Once I forced xml-crypto to always apply c14n, the validation succeeded.

I have not found any evidence in the xml digital signature or exclusive xml canonicalization specs that the way SignedXml works complies with the definitions.

All Together Now
I have committed the changes to the xml-crypto windows-store branch and have not published them to npm, since this is not necessarily the correct behavior for other platforms. So you should take xml-crypto from there (a quick way is to "npm install xml-crypto" and then overwrite the created folder with the windows-store branch zip; there is a way to tell npm to install directly from GitHub, but I am not sure it works from a branch). This is the usage example:

var crypto = require('xml-crypto')
  , Dom = require('xmldom').DOMParser
  , fs = require('fs')

var xml = fs.readFileSync('./windows_store_signature.xml', 'utf-8');
var doc = new Dom({ignoreWhiteSpace: true}).parseFromString(xml);
xml = doc.firstChild.toString()

var signature = crypto.xpath(doc, "//*//*[local-name(.)='Signature' and namespace-uri(.)='http://www.w3.org/2000/09/xmldsig#']")[0];
var sig = new crypto.SignedXml();
sig.keyInfoProvider = new crypto.FileKeyInfo("./windows_store_certificate.pem");
sig.loadSignature(signature.toString());

var result = sig.checkSignature(xml);
if (result)
  console.log("signature is valid")
else
  console.log("signature is invalid: " + sig.validationErrors)

The .pem file contains the raw content of the certificate, which you download from the URL provided by the Windows Store:

-----BEGIN CERTIFICATE-----
MIIDyTCCArGgAwIBAgIQNP+YKvSo8IVArhlhpgc/xjANBgkqhkiG9w0BAQsFADCB
jjELMAkGA1UEBhMCVVMxEzARBgNVBAgMCldhc2hpbmd0b24xEDAOBgNVBAcMB1Jl
ZG1vbmQxHjAcBgNVBAoMFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UECwwN
V2luZG93cyBTdG9yZTEgMB4GA1UEAwwXV2luZG93cyBTdG9yZSBMaWNlbnNpbmcw
HhcNMTExMTE3MjMwNTAyWhcNMzYxMTEwMjMxMzQ0WjCBjjELMAkGA1UEBhMCVVMx
EzARBgNVBAgMCldhc2hpbmd0b24xEDAOBgNVBAcMB1JlZG1vbmQxHjAcBgNVBAoM
FU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEWMBQGA1UECwwNV2luZG93cyBTdG9yZTEg
MB4GA1UEAwwXV2luZG93cyBTdG9yZSBMaWNlbnNpbmcwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQCcr4/vgqZFtzMqy3jO0XHjBUNx6j7ZTXEnNpLl2VSe
zVQA9KK2RlvroXKhYMUUdJpw+txm1mqi/W7D9QOYTq1e83GLhWC9IRh/OSmSYt0e
kgVLB+icyRH3dtpYcJ5sspU2huPf4I/Nc06OuXlMsD9MU4Ug9IBD2HSDBEquhGRo
xV64YuEH4645oB14LlEay0+JZlkKZ/mVhx/sdzSBfrda1X/Ckc7SOgnTSM3d/DnO
5DKwV2WYn+7i/rBqe4/op6IqQMrPpHyem9Sny+i0xiUMA+1IwkX0hs0gvHM6zDww
TMDiTapbCy9LnmMx65oMq56hhsQydLEmquq8lVYUDEzLAgMBAAGjITAfMB0GA1Ud
DgQWBBREzrOBz7zw+HWskxonOXAPMa6+NzANBgkqhkiG9w0BAQsFAAOCAQEAeVtN
4c6muxO6yfht9SaxEfleUBIjGfe0ewLBp00Ix7b7ldJ/lUQcA6y+Drrl7vjmkHQK
OU3uZiFbCxTvgTcoz9o+1rzR/WPXmqH5bqu6ua/UrobGKavAScqqI/G6o56Xmx/y
oErWN0VapN370crKJvNWxh3yw8DCl+W0EcVRiWX5lFsMBNBbVpK4Whp+VhkSJilu
iRpe1B35Q8EqOz/4RQkOpVI0dREnuSYkBy/h2ggCtiQ5yfvH5zCdcfhFednYDevS
axmt3W5WuHz8zglkg+OQ3qpXaXySRlrmLdxEmWu2MOiZbQkU2ZjBSQmvFAOy0dd6
P1YLS4+Eyh5drQJc0Q==
-----END CERTIFICATE-----

I hope your app will be successful and you will use this procedure a lot...

@YaronNaveh

What's next? Get this blog's RSS updates or register for mail updates!

Sunday, March 10, 2013

Test-Creep: Selective Test Execution for Node.js

@YaronNaveh

Check out test-creep to run your tests 10x faster.

They tell us to write tests before, during and after development. But they don't tell us what to do with this forever-running and ever-growing list of tests. If your tests take a double-digit number of seconds to execute, then you're doing it wrong. Maybe you have already split your tests into fast unit tests that you run all the time and slow integration tests that you run as needed. Wouldn't it be great to cherry-pick the 2-3 integration tests that are relevant to the change you just made and run only them? Wouldn't it be great to make unit tests run even faster by executing just the few that are affected by your current work? test-creep automatically runs just the subset of tests that are affected by your current work. Best part: this is done with seamless Mocha integration, so you work as usual.

What is selective test execution?
Selective test execution means running just the relevant subset of your tests instead of all of them. For example, if you have 200 tests, and 10 of them are related to some feature, then if you make a change to this feature you should run only the 10 tests and not the whole 200. test-creep automatically chooses the relevant tests based on istanbul code coverage reports. All this is done for you behind the scenes and you can work normally with just Mocha.

Installation and usage
1. You should use Mocha in your project to run tests. You should use git as source control.
2. You need to have Mocha installed locally and run it locally rather than globally:

$> npm install mocha
$> ./node_modules/mocha/bin/mocha ./tests

3. You need to install test-creep:

$> npm install test-creep

4. When you run mocha specify to run the special test 'first.js' before all other tests:

$> ./node_modules/mocha/bin/mocha ./node_modules/test-creep/first.js ./tests

first.js is bundled with test-creep and monkey-patches Mocha with the required instrumentation (via istanbul).

In addition, it is recommended to add .testdeps_.json to .gitignore (more on this file below).

How does this work?
The first time you execute the command, all tests run. first.js monkey-patches Mocha with istanbul code coverage and tracks the coverage per test (rather than per the whole process). Based on this information test-creep creates a test dependency file in the root of your project (.testdeps_.json). The file specifies, for each test, which files it uses:

{
    "should alert when dividing by zero": [
        "tests/calc.js",
        "lib/calc.js",
        "lib/exceptions.js"
    ],
    "should multiply with negative numbers": [
        "tests/negative.js",
        "lib/calc.js"
    ]
}


The next time you run the tests (assuming you add first.js to the command), test-creep runs 'git status' to see which files were added/deleted/modified since the last commit. Then test-creep searches the dependency file to see which tests may be affected and instructs Mocha to run only these tests. In the example above, if you have uncommitted changes only in lib/exceptions.js, then only the first test will be executed.

At any moment you can run Mocha without the first.js parameter, in which case all tests, not just the relevant ones, will run.

When to use test-creep?
test-creep's sweet spot is long-running test suites, where it can save many seconds or minutes each time you run tests. If you have a test suite that runs super fast (< 2 seconds), then test-creep will probably add more overhead than help. However, whenever tests run for longer than that, test-creep can save you time.

More information
On GitHub, or ask me on Twitter.

@YaronNaveh

What's next? Get this blog's RSS updates or register for mail updates!