When a new test is developed or taken into use in your laboratory, you need to perform a statistical evaluation of its performance characteristics. This means testing quite a lot of samples just to check that the test performs as well as it should.
Right now, due to the COVID-19 situation, both test manufacturers and diagnostic laboratories are under unprecedented pressure to deliver results fast, which rarely helps anyone concentrate on quality. The required reagents are scarce, and each sample used in validation or verification consumes them just as a routine sample would. Getting a good set of positive samples for verification can also be difficult.
So there is an understandable temptation to do this with as few samples as possible.
You cannot fight the statistics on how much data is needed to draw conclusions, but there is a small trick that can help you make preliminary conclusions as you go, and to make educated decisions about when you have gathered enough data.
Look at the confidence interval
There are two ways to minimize the number of samples. One is to select a number that has previously been used for a similar purpose, and sort of externalize the responsibility for the choice to the person who used that number before you.
A smarter way is to look at the confidence intervals of your data.
For example, if you use 20 positive samples and your test gives you one false negative, that yields 95% sensitivity. But if you look at the 95% confidence interval, it tells you that the true sensitivity can be anything between 76.4% and 99.1%.
76.4% is of course way better than tossing a coin, but it would mean that out of every ten infected people tested, two or three would get false negative results. Out of 1000 infected patients, we could expect about 236 false negatives.
On the other hand, if the setup with 20 positive samples gives you three false negatives, that yields 85% sensitivity, which does not sound very trustworthy. But again, if you look at the confidence interval, the upper limit of 94.8% is not so far from the 95% that would be considered ok.
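The limits quoted above match the Wilson score interval for a binomial proportion; that method is an assumption on my part, as other interval methods (such as Clopper–Pearson) give slightly different numbers. A minimal sketch of the calculation:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (z=1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# 20 positive samples, 1 false negative: 95% observed sensitivity
lo, hi = wilson_interval(19, 20)
print(f"{lo:.1%} - {hi:.1%}")  # 76.4% - 99.1%

# 20 positive samples, 3 false negatives: 85% observed sensitivity
lo, hi = wilson_interval(17, 20)
print(f"{lo:.1%} - {hi:.1%}")  # 64.0% - 94.8%
```

The point of the formula is that with only 20 samples, the margin term is large relative to the observed proportion, so the interval stays wide no matter what result you observe.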
Validation Manager cannot make these small sample sets give any better results than that, but it lets you easily see the confidence intervals related to your results. Simply upload your data and view the report to see the values.
If you could not trust the test even if its performance were at the better end of the confidence interval, there is reason to suspect a problem with the test or your measurement setup that needs to be fixed before there is any point in continuing the verification.
On the other hand, if the better end of the confidence interval is ok but the poorer end is not, you should measure more samples before deciding whether the test is good enough. After each set of measurements, you can simply upload the new data to Validation Manager. The report is automatically updated to show all your data, so you can adjust your decisions during the verification.
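To see why measuring more samples helps, it is worth noting how the interval narrows when the observed sensitivity stays the same but the sample count grows. The sample counts below are hypothetical, and the Wilson score interval is again my assumption for the method:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion (z=1.96 -> 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# The same 95% observed sensitivity at growing (hypothetical) sample counts:
for false_negatives, n in [(1, 20), (2, 40), (5, 100)]:
    lo, hi = wilson_interval(n - false_negatives, n)
    print(f"n={n}: {lo:.1%} - {hi:.1%}")
# n=20:  76.4% - 99.1%
# n=40:  83.5% - 98.6%
# n=100: 88.8% - 97.8%
```

Each new batch of measurements pulls the lower limit up toward the observed value, which is exactly the information you need for deciding when to stop.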
Measuring sensitivity is of course not enough on its own, but the same principle can be applied to other performance indicators as well, and you'll get them out of Validation Manager just as easily.