First, let’s revisit how a pressure decay test works. The part under test is pressurized (i.e. filled with air or evacuated until it reaches a set pressure or vacuum), then isolated from the supply pressure. The pressure within the part is then monitored using a pressure sensor. As the air leaks out, the pressure drops. The leak rate can then be calculated based on the change in pressure over a certain period of time.
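The calculation behind that last step can be sketched in a few lines. This is a minimal, illustrative sketch only, assuming an isothermal test and a known, fixed internal volume; the specific volume, pressure drop, and timing values are hypothetical, not from the article.

```python
# Sketch of the leak-rate calculation behind a pressure decay test.
# Assumes an isothermal test and a known, fixed test volume; the
# example numbers are illustrative.

P_ATM_PSI = 14.696  # standard atmospheric pressure, psia

def leak_rate_sccm(volume_cc: float, delta_p_psi: float, test_time_s: float) -> float:
    """Convert a measured pressure drop into a leak rate in sccm.

    volume_cc   -- internal volume under test (part + plumbing), in cc
    delta_p_psi -- pressure lost during the measure phase, in psi
    test_time_s -- duration of the measure phase, in seconds
    """
    # Gas lost, expressed at standard conditions (standard cc):
    lost_scc = volume_cc * delta_p_psi / P_ATM_PSI
    return lost_scc / (test_time_s / 60.0)  # standard cc per minute

# Example: a 50 cc part that loses 0.05 psi over a 10 s measure phase
rate = leak_rate_sccm(50.0, 0.05, 10.0)  # ~1.02 sccm
```

Note that the same pressure drop in a larger volume corresponds to a larger leak rate, which is the central point the article returns to below.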
Using pressure change or decay results
A significant number of users, particularly those who perform medical device testing, test only to a pressure change or pressure decay result. In these cases, they elect not to perform an empirical study to determine what volumetric leak rate actually causes problems with their part.
Instead, they execute a simple design of experiments (DOE), trialing various timer settings until they achieve highly consistent final pressure decay values on a given part or small sample of parts. They then fix those timer settings, test a number of known good parts (anywhere from 10 to 10,000), and develop a distribution of results that yields both a mean and a standard deviation. The user may then set reject limits anywhere from three to six standard deviations from the mean. In essence, they have characterized “typical” good parts and set a window around them.
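The statistical windowing described above can be sketched with the standard library alone. The sample readings and the four-sigma multiplier below are illustrative assumptions, not values from the article.

```python
# Minimal sketch of setting statistical reject limits from a sample of
# known good parts. The readings and the 4-sigma multiplier are
# illustrative; real users would test far more parts.
import statistics

# Final pressure decay readings (psi) from known good parts:
good_parts = [0.021, 0.019, 0.023, 0.020, 0.022, 0.018, 0.021, 0.020]

mean = statistics.mean(good_parts)
sigma = statistics.stdev(good_parts)  # sample standard deviation

k = 4  # user-chosen multiplier, typically between 3 and 6
low_limit, high_limit = mean - k * sigma, mean + k * sigma

def passes(decay_reading: float) -> bool:
    """Accept a part whose decay falls inside the statistical window."""
    return low_limit <= decay_reading <= high_limit
```

A part whose decay reading falls outside the window is rejected as atypical, even though no leak rate in sccm was ever established.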
In contrast to a pressure change, which is measured internally within the part, leak rate is the volume of air actually exiting the part. Despite this distinction, the method of test is exactly the same for both techniques when using the pressure decay test method.
Moving to leak rate measurement
Once a target leak rate is selected, converting from a pressure loss reject limit to a leak rate is quite simple.
It starts with using a known, non-leaking part, and effectively teaching the leak test instrument the volume under test with no leak present. The instrument memorizes this decay value from a typical non-leaking part and enters that into memory as zero sccm.
We then retest, using the same known non-leaking test piece, along with an NIST-traceable certified leak of a known value in sccm. The instrument is given this certified leak value and when it executes the next test, it memorizes the additional decay that leak standard is responsible for.
The system now has two calibrated points: zero sccm and, ideally, if the leak standard was built near the leak limit for that part, the leak limit threshold itself in sccm.
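The two-point calibration above amounts to a linear mapping from measured decay to leak rate. Here is a hedged sketch of that mapping; the function name and the example decay values are hypothetical, not taken from any instrument's actual firmware.

```python
# Sketch of the two-point calibration described above: the instrument
# maps a measured decay value to sccm using a zero reading (known good
# part) and a reading taken with an NIST-traceable leak standard
# attached. All numeric values are illustrative.

def make_decay_to_sccm(zero_decay: float, std_decay: float, std_sccm: float):
    """Return a function converting a decay reading into a leak rate.

    zero_decay -- decay measured on a known non-leaking part (defines 0 sccm)
    std_decay  -- decay measured with the certified leak standard attached
    std_sccm   -- certified value of the leak standard, in sccm
    """
    slope = std_sccm / (std_decay - zero_decay)  # sccm per unit of decay
    return lambda decay: (decay - zero_decay) * slope

# Example: 0.020 psi decay taught as 0 sccm; 0.065 psi with a
# 1.5 sccm certified leak standard attached
to_sccm = make_decay_to_sccm(0.020, 0.065, 1.5)
```

Because the map is linear between the two taught points, accuracy is best when the leak standard's value is close to the reject limit, which is why the article recommends building the standard near that limit.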
Why it’s better to implement a leak rate
There are multiple reasons. It is important to remember that, per the Ideal Gas Law (PV = nRT), the pressure change in any given test depends strongly on the pressurized volume being tested.
- If a user has a family of parts used similarly in clinical function but having measurably different internal volumes, implementing a leak rate specification allows the user to apply the same reject criteria (same effective hole size) across the entire family of products, including all relevant volumes under test.
- Over time, replacements of components such as seals, tooling, and connective hoses will be required due to wear. Each replacement has the potential to affect the total volume under test.
- As production levels increase, manufacturers may have to move from manual to more automated processes. The transition most often requires a change in the proximity of the test instrument to the part/fixture which inherently changes the volume under test.
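The volume dependence behind all three bullets can be made concrete with a short sketch. This inverts the usual decay-to-rate formula to show the pressure drop a fixed leak produces; the volumes, leak rate, and timing are illustrative assumptions.

```python
# Illustration of the volume dependence from PV = nRT: the same leak
# rate produces a smaller pressure drop in a larger volume, so a fixed
# pressure-decay limit implies different effective hole sizes across a
# part family. All numbers are illustrative.

P_ATM_PSI = 14.696  # standard atmospheric pressure, psia

def pressure_drop_psi(leak_sccm: float, volume_cc: float, test_time_s: float) -> float:
    """Pressure drop produced by a given leak over the measure phase."""
    lost_scc = leak_sccm * test_time_s / 60.0   # gas lost, standard cc
    return lost_scc * P_ATM_PSI / volume_cc     # Ideal Gas Law, isothermal

# The same 1.0 sccm leak over a 10 s measure phase, across a part family:
for vol in (25.0, 50.0, 100.0):
    dp = pressure_drop_psi(1.0, vol, 10.0)
    print(f"{vol:6.1f} cc -> {dp:.4f} psi drop")
```

Doubling the volume halves the observed pressure drop for the same leak, which is why seal wear, fixture changes, or automation-driven plumbing changes silently shift a pressure-decay reject limit while a leak rate limit stays meaningful.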