Friday, January 24, 2014

More EBS-EDT errors

In a recent post I discussed some of the basic errors you may get when trying to consume EBS-EDT with WCF. Here are some more errors you should be aware of:

Error 1:

policy rejected

If the server returns the above error, it typically means there is a problem with the values of some elements in your request. Either you are using an older WSDL version, or you have supplied wrong values in some of the fields (e.g. auditID). It can also be caused by mixing authentication formats (MSA and IDP) instead of using just one of them and setting the other to null.

Error 2:

The algorithm '' is not accepted for operation 'Digest' by algorithm suite Basic128Sha256Rsa15


The algorithm '' is not accepted for operation 'Digest' by algorithm suite Basic128Rsa15

This error is thrown by WCF when it tries to validate the response signature. It happens because the signature uses a mix of SHA1 and SHA256 hash algorithms, and there is nothing you can do to make WCF accept that. Instead, implement a custom encoder (which you probably do anyway if you have read my last post), validate the signature yourself in the encoder, and then remove it from the SOAP.
EDIT: Dwayne McKnight has commented that there is a way around a custom encoder.

What's next? get this blog rss updates or register for mail updates!

Wednesday, December 18, 2013

Consuming EBS-EDT SOAP service from WCF

A while ago the Ontario Ministry of Health and Long-Term Care published this document, which explains how to consume their new SOAP web service. (For the benefit of search engines, the exact title is "Technical Specification for Medical Claims Electronic Data Transfer (MCEDT) Service via Electronic Business Services (EBS) Ministry of Health and Long-Term Care"). I have received over a dozen questions about how to consume this service with WCF. Unfortunately it is not a simple task, since the service uses a complex security configuration that is not available in any of the built-in WCF bindings. However, it is possible with some custom code. Below I describe the general scheme to make this work. I know some community members are preparing a simple wrapper for this, so I will publish it here once it is ready.

The Errors
Depending on which path you choose for the implementation, the most common error message you are likely to receive is the dreadful:

The incoming message was signed with a token which was different from what used to encrypt the body. This was not expected.

There are other possible errors as well, and some consumers may not even know where to start.

The Solution
1. Since the client needs to send both a username token and an X.509 certificate (and sign with the latter), we need to build the binding in code:

One thing to notice in this code is that it contains the username and password, so change them to your own credentials.
Another thing to notice is that the client certificate is loaded from disk. You could change that to the Windows certificate store if you wish. As for the server certificate, you can put any dummy certificate there, including the same one as the client certificate (it will not be used, but WCF needs something in this setting).
Also note EnableUnsecuredResponse=true. It is key to the next steps.

2. Since the request needs to be signed but not encrypted, let's configure the contract in Reference.cs with the ProtectionLevel attribute:

3. WCF is unable to decrypt the response by itself, so we need to do the decryption manually. This is the hardest part, but I give most of the code here, so hopefully it will be easier. You need to implement a custom message encoder and configure the binding above to use your encoder instead of the text message encoder. Read here how to implement an encoder.

4. You need to override the ReadMessage method of the encoder and decrypt the response message in it.

This code shows how to decrypt a message (not necessarily in the context of an encoder):

This code needs access to your private key so it can extract the session key from the message, and it also needs some elements from the response. Once you have the decrypted message, you can replace the encrypted body part in the message provided by the encoder with the decrypted one.

5. The last task for the encoder is to delete the <security> element (and all of its child nodes) from the response message before returning it to WCF. Otherwise WCF will try to decrypt the message, which is redundant since we just decrypted it ourselves (and WCF decryption would fail anyway). Remember the EnableUnsecuredResponse flag from step #1? It tells WCF not to expect any security, so stripping these elements out is safe.
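Conceptually, the stripping looks like this (sketched in JavaScript on the raw SOAP string for brevity; the real encoder should remove the node from the parsed XML document, which is more robust than a regex):

```javascript
// Drop the wsse:Security header block (whatever its namespace prefix) so the
// runtime sees an unsecured response. Fragile on purpose: a regex will also
// match elements like SecurityTokenReference, so prefer a DOM-based removal.
function stripSecurityHeader(soap) {
  return soap.replace(/<(\w+:)?Security[\s\S]*?<\/(\w+:)?Security>/, '');
}
```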

Information on some possible errors in this process is available here.

MIME Attachments

Hopefully by now you have a working client. Some of the operations also receive an attachment from the service. This attachment is in SwA (SOAP with Attachments), a MIME format slightly different from the MTOM format which WCF knows about. To extract the attachment you can run some kind of MIME parser library as the first step of your encoder (apply it over the raw bytes from the network). Copy the first MIME part into the Message object (this is the SOAP). The second part is the attachment, which you can keep as a property on the custom encoder or in some other context available to your application code.
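The splitting itself is simple in concept. A rough JavaScript sketch with simplified boundary handling (a real client should use a proper MIME parser):

```javascript
// Split a SwA multipart payload into its part bodies: the first part is the
// SOAP envelope, the second is the attachment. Boundary handling is naive and
// for illustration only.
function splitSwA(raw, boundary) {
  var parts = raw.split('--' + boundary)
    .map(function (p) { return p.trim(); })
    .filter(function (p) { return p && p !== '--'; });   // drop preamble/terminator
  // Each part is MIME headers, a blank line, then the body.
  return parts.map(function (p) {
    var idx = p.indexOf('\r\n\r\n');
    if (idx === -1) idx = p.indexOf('\n\n');
    return p.slice(idx).trim();
  });
}
```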

Fault Contract
Since there is no formal fault contract in the WSDL, you should inspect any incoming SOAP fault using a custom message inspector.

To sum up, consuming EBS-EDT from WCF is not easy, but it is doable. Good luck!


Tuesday, June 25, 2013

Validating Windows Mobile App Store Receipts Using Node.js

When your Windows Phone user performs an in-app purchase, you can get its receipt using the API:

You need to send this receipt to your backend system to validate the signature. If that backend system happens to be in C#, your life is easy, as the official documentation provides exact validation instructions. If you use another platform, be aware that there are a few gotchas in this validation process.

Based on several requests, I have checked the feasibility of using my xml-crypto node.js module to perform this validation. There were two challenges that required patching xml-crypto. Both of them caused the signature digest calculation to differ from the one stated by the store, which failed the validation process.

White spaces
As you can see in the sample receipt above, it contains white space. White space in XML is meaningful, and once an XML document has been signed it is not legal to remove or alter it. However, take a look at these lines from Microsoft's C# validation instructions:

The receipt is loaded into an XmlDocument. Since it is not set otherwise, the PreserveWhitespace property defaults to false. This means the actual validation code later on runs not against the actual XML received from the network/disk, but against an altered version of it with all white space stripped. This is non-standard and confusing. Anyway, if you use node.js, remember to first remove the white space before processing the document further. Unfortunately xmldom, the module xml-crypto uses for XML processing, does not provide this utility. I did a quick patch for it and for now have put it in my private xmldom repo. Just initialize the parser with the ignoreWhiteSpace flag:
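If you cannot use the patched parser, the effect can be approximated up front by stripping the white space between elements. A rough sketch, assuming no significant text node contains white space adjacent to tags:

```javascript
// Reduce a receipt to the same whitespace-free form that .NET validates when
// PreserveWhitespace is false: collapse runs of white space between tags.
function stripInterElementWhitespace(xml) {
  return xml.replace(/>\s+</g, '><').trim();
}
```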

Debugging this was quite painful, which is why I posted this tweet shortly after:

Implicit Exclusive Xml Canonicalization
Canonicalization is probably one of the most confusing topics in XML digital signatures. In a nutshell, before processing XML (either when signing it or when validating a signature), we need to transform it into a canonical form in terms of attribute order, namespace definitions, white space, and so on. There are multiple standards for doing so, and they can be chained together, so the signed XML should state which standard(s) it uses:

This is the transformation stated by the Windows Store receipt:
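A receipt's Transforms element declaring the enveloped-signature transform looks roughly like this (a reconstruction for illustration, not copied from an actual receipt):

```xml
<Transforms>
  <Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
</Transforms>
```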

In practice, when trying to validate according to this transformation, xml-crypto shows this validation error:

I built a simple C# app to do this validation according to the Microsoft sample, and it worked. The sample uses the high-level SignedXml class. I then built a C# snippet to do the validation in a more low-level way, by applying the enveloped-signature transformation myself, and this failed.

I had to dig deep in the reflector to find that internally the .NET SignedXml class always applies the Exclusive XML Canonicalization standard in addition to any explicitly defined transformations. This happens in the TransformChain.TransformToOctetStream() method, which is used internally in the signing process:

The CanonicalXml class has an internal property m_c14nDoc which holds the canonicalized version.

Once I forced xml-crypto to use c14n at all times, the validation was successful.

I have not found any evidence in the xml digital signature or exclusive xml canonicalization specs that the way SignedXml works complies with the definitions.

All Together Now
I have committed the changes to the xml-crypto windows-store branch but have not published them to npm, since this is not necessarily the correct behavior for other platforms. So you should use xml-crypto from there (a quick way is to "npm install xml-crypto" and then override the created folder with the windows-store branch zip; there is a way to tell npm to install directly from GitHub, but I am not sure it works from a branch). This is the usage example:
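A sketch of what the validation can look like with the xml-crypto API of that era (file names here are illustrative, and the exact API may differ between versions):

```javascript
var select = require('xml-crypto').xpath
  , SignedXml = require('xml-crypto').SignedXml
  , FileKeyInfo = require('xml-crypto').FileKeyInfo
  , Dom = require('xmldom').DOMParser
  , fs = require('fs');

var xml = fs.readFileSync('receipt.xml', 'utf8');
var doc = new Dom().parseFromString(xml);

// find the enveloped <Signature> element in the receipt
var signature = select(doc, "//*[local-name(.)='Signature']")[0];

var sig = new SignedXml();
sig.keyInfoProvider = new FileKeyInfo('store-certificate.pem');  // the store cert
sig.loadSignature(signature.toString());
var valid = sig.checkSignature(xml);
if (!valid) console.log(sig.validationErrors);
```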

The pem file contains the raw content of the certificate, which you get from the URL provided by the Windows Store:

I hope your app will be successful and you will use this procedure a lot...


Sunday, March 10, 2013

Test-Creep: Selective Test Execution for Node.js

Check out test-creep to run your tests 10x faster.

They tell us to write tests before, during and after development. But they don't tell us what to do with this forever-running and ever-growing list of tests. If your tests take a double-digit number of seconds to execute, you're doing it wrong. Maybe you have already split your tests into fast unit tests that you run all the time, and slow integration tests that you run as needed. Wouldn't it be great to cherry-pick the 2-3 integration tests that are relevant to the change you just made and run only them? Wouldn't it be great to make unit tests run even faster by executing just the few that are affected by your current work? Test-Creep automatically runs just the subset of tests affected by your current work. Best part: it integrates seamlessly with Mocha, so you work as normal.

What is selective test execution?
Selective test execution means running just the relevant subset of your tests instead of all of them. For example, if you have 200 tests, and 10 of them are related to some feature, then if you make a change to this feature you should run only the 10 tests and not the whole 200. test-creep automatically chooses the relevant tests based on istanbul code coverage reports. All this is done for you behind the scenes and you can work normally with just Mocha.

Installation and usage
1. You should use Mocha to run your project's tests, and git for source control.
2. You need to have Mocha installed locally and run it locally rather than globally:

$> npm install mocha
$> ./node_modules/mocha/bin/mocha ./tests

3. You need to install test-creep:

$> npm install test-creep

4. When you run mocha, specify the special test 'first.js' before all other tests:

$> ./node_modules/mocha/bin/mocha ./node_modules/test-creep/first.js ./tests

first.js is bundled with test-creep and monkey patches mocha with the required instrumentation (via istanbul).

In addition, it is recommended to add .testdeps_.json to .gitignore (more on this file below).

How does this work?
The first time you execute the command, all tests run. first.js monkey patches mocha with istanbul code coverage and tracks coverage per test (rather than for the whole process). Based on this information, test-creep creates a test dependency file in the root of your project (.testdeps_.json). The file specifies, for each test, which files it uses:
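Its shape is roughly like this (the test names and dependency lists are illustrative):

```json
{
  "should translate exceptions to user friendly messages": [
    "lib/exceptions.js",
    "lib/messages.js"
  ],
  "should save a user document": [
    "lib/db.js"
  ]
}
```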

The next time you run the tests (assuming you add first.js to the command), test-creep runs 'git status' to see which files were added/deleted/modified since the last commit. Then test-creep searches the dependency file to see which tests may be affected and instructs mocha to run only those tests. In the example above, if you have uncommitted changes only to lib/exceptions.js, then only the first test will be executed.

At any moment you can run mocha without the 'first.js' parameter, in which case all tests, not just the relevant ones, will run.

When to use test-creep?
test-creep's sweet spot is long-running test suites, where it can save many seconds or minutes each time you run tests. If you have a test suite that runs super fast (< 2 seconds), test-creep will probably add more overhead than help. However, whenever tests run for longer than that, test-creep can save you time.

More information
On GitHub, or ask me on Twitter.


Monday, October 8, 2012

Crazy Social Analytics for C# Nuget

I'm happy to announce that C# NuGet projects get some GitMoon love! This means you can get crazy social analytics about NuGet projects. Check out your favorite ones:


Or check out famous head to head comparisons:

SignalR vs. ServiceStack
Mono.Cecil vs. LibGit2Sharp
sqlite-net vs. FluentMongo
TweetSharp vs. Facebook
Hammock vs. EasyHttp


Thursday, October 4, 2012

10 Caveats Neo4j users should be familiar with

UPDATE: Michael Hunger from neo4j responds to some of my items in a comment.

Recently I used the Neo4j graph database in GitMoon. I have gathered some of the tricky things I learned the hard way, and I recommend every Neo4j user take a look.

1. Execution guard
By default queries can run forever. This means that if you have accidentally (or on purpose) sent the server a long-running query with many nested relationships, your CPU may be busy for a while. The solution is to configure Neo4j to terminate all queries that run longer than some threshold. Here's how you do this:

in add this:


Then in add this:


where the limit value is in milliseconds, so the above will terminate any query that runs for over 20 seconds. Your CPU will thank you!
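For the 1.8-era community edition, the two files involved are conf/neo4j.properties and conf/neo4j-server.properties, and the settings look roughly like this (the property and file names here are from memory, so verify them against the docs for your version):

```
# conf/neo4j.properties
execution_guard_enabled=true

# conf/neo4j-server.properties
org.neo4j.server.webserver.limit.executiontime=20000
```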

2. ID (may) be volatile
Each node has a unique ID assigned to it by Neo4j, so in your cypher you could do something like:

START n=node(1020) RETURN n

START n=node(*) WHERE ID(n)=1020 RETURN n

where both cyphers return the same node.

Early on I was tempted to use this ID in the URLs of my app:


This was very convenient since I did not have a numeric natural key for nodes and I did not want the hassle of encoding strings in URLs.

Bad idea. IDs are volatile. In theory, when you restart the db, all nodes may be assigned different IDs. IDs of deleted nodes may be reused for new nodes. In practice I have not seen this happen, and I believe that with current Neo4j versions it never will. However, you should not take it as guaranteed, and you should always come up with your own unique way to identify nodes.

3. ORDER BY lower case
There is no built-in function that lets you order results by the lower-cased value of a field. You have to maintain a shadow field with the lower-case value. For example:

START n=node(*) RETURN n ORDER BY n.name_lower
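Maintaining the shadow field is a write-time concern. For example, in JavaScript (the property names are illustrative):

```javascript
// Whenever a node's name is set, also store the lower-cased copy
// that ORDER BY will use.
function withShadowField(props) {
  props.name_lower = props.name.toLowerCase();
  return props;
}
```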

4. Random rows
There is no built in mechanism to return a random row.

The easiest way is two-phase randomization: first select the COUNT of available rows, then SKIP a random number of rows to reach one of them:

START n=node(*)
WHERE n.type='project'
RETURN count(*)

// result is 1000
// now in your app code you make a draw and the random number is 512

START n=node(*)
WHERE n.type='project'
RETURN n
SKIP 512 LIMIT 1

An alternative is to use statistical randomization:

START n=node(*)
WHERE n.type='project' AND ID(n)%20=0
RETURN n

Where 20 is a number you generated in your code. Of course this is never fully random, and it also requires some knowledge of the value distribution, but for many cases it may be good enough.
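The two-phase approach can be sketched in JavaScript like this, where query is a hypothetical driver helper taking (cypher, params, callback), similar to what node-neo4j exposes:

```javascript
// Two-phase random row: first count the matching rows, then draw a random
// offset and fetch the single row at that offset.
function randomProject(query, cb) {
  query("START n=node(*) WHERE n.type='project' RETURN count(*) AS c", {}, function (err, rows) {
    if (err) return cb(err);
    var skip = Math.floor(Math.random() * rows[0].c);   // the draw
    query("START n=node(*) WHERE n.type='project' RETURN n SKIP {skip} LIMIT 1",
          { skip: skip }, cb);
  });
}
```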

5. Use the WITH clause when cypher has expensive conditions
Take a look at this cypher:

START n=node(...), u=node(...)
MATCH p = shortestPath( n<-[*..5]-u) WHERE n.forks>20 AND length(p)>2
RETURN n, u, p

Here we calculate the shortest path for all nodes. This is a CPU-intensive operation. How about separating the concerns like this:

START n=node(...), u=node(...)
WHERE n.forks>20
WITH n, u
MATCH p = shortestPath( n<-[*..5]-u ) WHERE length(p)>2
RETURN n, u, p

Now the path is only calculated for the relevant nodes, which is much cheaper.

6. Arbitrary depth is evil
Always strive to limit the maximum depth of the queries you perform, since each depth level increases the query complexity:

MATCH (n)<-[depends_on*0..4]-(x)

7. Safe shutdown on windows
When you run Neo4j on Windows in interactive mode (i.e. not as a service), do not close the console with the X button. Instead, always use CTRL+C and then wait a few seconds until the db is safely closed and the window disappears. If by mistake you did not safely close it, the next start will be slower (it can take a few minutes or more) since Neo4j will run recovery. In that case the log (see #8) will show this message:

INFO: Non clean shutdown detected on log [C:\Program Files (x86)\neo4j-community-1.8.M03\data\graph.db\index\lucene.log.1]. Recovery started ...

8. The log is your best friend
When crap hits the fan, always turn to /data/log. Especially if Neo4j does not start, you may find out that you have misconfigured some setting or that recovery has started (see #7).

9. Prevent cypher injection
Take a look at this code:

"START n=node(*) WHERE n='"+search+"' RETURN n"

if "search" comes from an interactive user then you can imagine what kind of injections are possible. The correct way is to use cypher parameters which any driver should expose an api for. If you use the awesome node-neo4j api by aseemk you could do it like this:

qry = "START n=node(*) WHERE n={search} RETURN n"
db.query qry, {search: "async"}

10. Where to get help
The Neo4j Google group and the community GitHub project are very friendly and responsive.


Wednesday, October 3, 2012

How I Built GitMoon

I got some questions about how I built GitMoon, so I decided to come up with this list on BestVendor:

How I Built a Viral Node.js App in Just One Weekend

You can read there about the technology and tool choices I made and why. It got a cool cover image too. Check it out on BestVendor.
