I too agree with Harry. Jennifer is the expert in many areas!

I'm not as knowledgeable as she is, but I'll add my comments:
There are several quantitative measures that can be used. Here are a couple that came to mind:
1. You can have them complete a test before the training, at the end of the training, and then possibly again some period of time after the training. You can then statistically compare the scores across those tests.
2. You can have people evaluate them on the job before and after. It may work better to have multiple evaluators and then average their evaluations; this improves inter-rater reliability. The evaluations can be subjective judgments that are quantified: a 1-5 (Likert-style) scale, a simple +/- rating, or a 0-100 score. It's also possible to have an objective evaluator make estimates (possibly based on viewing a benchmark case) and score from those. Keep in mind that personality/subjectivity issues can creep in with a single evaluator.
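For point 1, here's a minimal sketch of comparing pre- and post-training scores with a paired t statistic in plain Python. The scores are made-up illustration data, not real benchmarks:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical test scores for five trainees (same person, same test, paired).
pre  = [60, 55, 70, 65, 58]
post = [75, 68, 80, 77, 70]

# Paired t statistic: mean improvement divided by its standard error.
diffs = [b - a for a, b in zip(pre, post)]
mean_gain = mean(diffs)
t_stat = mean_gain / (stdev(diffs) / sqrt(len(diffs)))

print(f"mean gain = {mean_gain:.1f}, t = {t_stat:.2f}")
# To get a p-value, compare t_stat against a t distribution with n-1
# degrees of freedom (scipy.stats.ttest_rel does all of this for you).
```

The point of pairing is that each trainee serves as their own control, so between-person differences cancel out of the comparison.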
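For point 2, a small sketch of averaging several raters' 1-5 Likert scores per employee (the names and ratings here are made up):

```python
from statistics import mean

# Hypothetical 1-5 Likert ratings from three evaluators per employee.
ratings = {
    "employee_a": [4, 5, 4],
    "employee_b": [3, 3, 2],
    "employee_c": [5, 4, 5],
}

# Average across raters to damp any single rater's bias.
averaged = {name: round(mean(scores), 2) for name, scores in ratings.items()}
print(averaged)
```

With more raters you'd also want a formal agreement statistic (e.g., Cronbach's alpha or an intraclass correlation) before trusting the averages.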
There are some things to keep in mind. First, you need to establish that the training was effective in the first place; that usually means a test shortly after the training.
Next, there should be motivation for maintaining the knowledge. Is the employee rewarded for retaining what was taught? You might run an evaluation 3-6 months after the training session to see how many people retain the knowledge. If few do, it's possible the training was not effective, or that there was not enough incentive for individuals to retain it.
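That 3-6 month follow-up can be summarized as a simple retention rate. A sketch with made-up scores and a made-up 90% cutoff for "retained":

```python
# Hypothetical scores right after training vs. at a 3-6 month follow-up.
post_training = {"a": 80, "b": 75, "c": 90, "d": 70}
follow_up     = {"a": 78, "b": 60, "c": 88, "d": 50}

THRESHOLD = 0.90  # "retained" = kept at least 90% of the post-training score

retained = [p for p in post_training
            if follow_up[p] >= THRESHOLD * post_training[p]]
rate = len(retained) / len(post_training)
print(f"retention rate: {rate:.0%}")
```

Where you set the threshold is a judgment call; the useful part is tracking the same number across training cohorts over time.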
Training measurement is a big old field. What type of training do you want to do? Have you assessed the objectives of the training? How long is the training session, and when do you expect measurable results?
Also, all of this assumes that appropriate tests/objective measurements have been developed, so that measurement error doesn't enter the equation.