Eventually it dawned on me that the entire certification process is an insurance policy of sorts. Firstly, it confirms that the mesh is within the ASTM or ISO spec -- even though the spec allows for significant variation. Secondly, it satisfies the traceability demands of ISO quality mandates.
However, the inspection reports are only a weak predictor of a sieve's actual performance. I recall a situation in which a customer with a high-powered QC program had trouble matching the performance of two certified sieve batches.
The customer then used a procedure that compared the performance of the two batches, and that process finally pinpointed the problem. This, I think, is what calibration is all about -- ensuring predictable performance in the operating environment.
Calibration techniques vary; one common approach is to compare a sieve's results against a master set of sieves (a Master Stack).
Another approach to calibration uses calibration spheres or beads, comparing a specific sieve's performance against a traceable, high-precision standard. It yields a single quantitative measure of expected sieve performance: a mean sieve opening size.
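To make the idea concrete, here is a minimal sketch of how a mean opening size might be derived from bead results. This is my own illustration, not any vendor's actual procedure: it assumes that for each bead size class you have measured the fraction of beads that passed the sieve, and it takes the mean aperture as the bead diameter at which 50% would pass, found by linear interpolation.

```python
def mean_aperture(bead_sizes_um, fraction_passing):
    """Estimate a sieve's mean opening from calibration-bead results.

    bead_sizes_um    -- bead diameters in microns
    fraction_passing -- fraction (0..1) of each size class that passed
                        the sieve; decreases as beads get larger
    Returns the interpolated diameter at which 50% of beads pass.
    """
    pairs = sorted(zip(bead_sizes_um, fraction_passing))
    for (d1, f1), (d2, f2) in zip(pairs, pairs[1:]):
        # find the size interval that brackets the 50% crossing
        if f1 >= 0.5 >= f2:
            if f1 == f2:
                return (d1 + d2) / 2
            # linear interpolation between the two size classes
            return d1 + (f1 - 0.5) * (d2 - d1) / (f1 - f2)
    raise ValueError("50% crossing not bracketed by the data")

# Hypothetical data for a nominal 150-micron sieve:
sizes = [140, 145, 150, 155, 160]
passing = [0.98, 0.80, 0.55, 0.20, 0.03]
print(round(mean_aperture(sizes, passing), 1))  # prints 150.7
```

The single number that falls out -- here about 150.7 microns against a 150-micron nominal -- is exactly the kind of quantitative, comparable result the bead method is after.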
Given that the high-precision beads are traceable to an ISO-recognized standard, this calibration method also satisfies the traceability demands mentioned above.
Whitehouse Scientific has a process that provides a sieve calibration reporting a mean sieve opening size to approximately +/- one micron. Developing that traceable calibration process took them approximately three years. For more on this, see Sieve Calibration.
I hope this rant stimulates some questions and discussion.
Thanks for reading this.
A still perplexed,