Why is it so hard to make real comparisons?
The short answer is "statistics". For a longer answer, read the tale of woe which follows...
Imagine a shooter with a performance distribution as labelled "Technique 1" in the above diagram. (S)he's not exactly a bad shooter, but can be a bit variable sometimes, so resolves to make a change. The change results in the performance distribution labelled "Technique 2", which is actually worse than the previous approach. The change has been detrimental; however, the shooter has no way of knowing this, as (s)he hasn't tested it yet. In due time, the shooter goes and fires another competition and entirely by chance gets a 50! Elated with the apparent positive impact on performance, (s)he goes and has a celebratory shandy on the North London RC verandah and announces to all and sundry that everything is sorted. During the next day's shooting, regression to the mean* rears its ugly head and scores are abysmal.
You get the idea. The point here is that even though the change is detrimental to the shooter's performance, there is still a finite probability that they will get a score which is above average for them using their previous technique. The shooter needs to make several measurements to have a better understanding of the performance profile which results from the change. The more measurements, the better the understanding.
In summary, it is entirely possible to make a detrimental change and yet have one or more better-than-average shoots afterwards. This is something best avoided.
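The effect described above can be sketched with a quick Monte Carlo simulation. The means and standard deviations below are invented purely for illustration; the point is only that a genuinely worse technique still produces an above-average single shoot fairly often, while the average of many shoots is much harder to fool:

```python
import random

random.seed(42)

# Hypothetical performance distributions (scores out of 50) -- invented numbers.
# Technique 1: mean 47.0, sd 1.5.  Technique 2 is genuinely worse: mean 46.0, sd 2.0.
OLD_MEAN, OLD_SD = 47.0, 1.5
NEW_MEAN, NEW_SD = 46.0, 2.0

TRIALS = 100_000

# Probability that one shoot with the worse technique still beats the
# old technique's *average* score.
lucky = sum(random.gauss(NEW_MEAN, NEW_SD) > OLD_MEAN for _ in range(TRIALS))
p_single = lucky / TRIALS

# Probability that the average of ten shoots with the worse technique
# beats the old average -- far smaller, which is why more measurements
# give a better picture.
def mean_of_shoots(n):
    return sum(random.gauss(NEW_MEAN, NEW_SD) for _ in range(n)) / n

lucky10 = sum(mean_of_shoots(10) > OLD_MEAN for _ in range(TRIALS))
p_ten = lucky10 / TRIALS

print(f"P(one lucky shoot beats the old average): {p_single:.2f}")
print(f"P(average of 10 shoots beats it):         {p_ten:.3f}")
```

With these (made-up) distributions, roughly three shoots in ten look like an improvement on their own, but an average of ten shoots is fooled only a few percent of the time.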
How do we use SCATT to make good comparisons?
One of the key uses of SCATT is the comparison of different techniques, positions and equipment prior to testing them in a livefire training session. The feedback that SCATT, Noptel, RIKA or another similar shooter training system can provide is extremely valuable, for a number of reasons:
Firstly, dry firing with such a system provides quantitative feedback in addition to the qualitative "feel" of trying something new in a simple dry fire or even a live fire environment. The trace length, shot release and shot aim data allow direct comparison of the different elements of groups of dry shots fired under different circumstances.
Secondly, the marginal cost of dry firing with a SCATT is essentially zero, so the only limit on the number and quality of tests that can be done is the amount of time that you are willing to spend upon testing. This allows much more reliable and detailed comparison than one done on a rifle range with live ammunition, where the cost of ammunition, target time and barrel life becomes a significant factor very rapidly.
There are five guidelines to making a good comparison using a shooter training system:
- Build a baseline / control
- Change one thing at a time (but use your common sense)
- Record everything
- Do enough shots to make the comparison meaningful
- Understand the limitations of the system
In any good experiment, one change and one change only is made between tests. If you make two changes and then test, you can't tell what effect each individual change had. So if you want to make two changes, you should test each change individually and then both changes together (as well as the tests needed to establish your baseline). Of course, this presents us with an issue: if you want to play with lots of different settings, such as when you've just bought a new rifle with lots of spiffy adjustments to make, in theory you can end up with hundreds or even thousands of tests to do. To get around this, you should apply some common sense to a) reduce the number of tests you need to run, e.g. you can probably test eye relief as a discrete test once you've picked the remaining settings for your position, and b) eliminate the obvious no-hopers, e.g. with some handstop settings your sling may be obviously too tight or obviously too loose.
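To get a feel for how quickly the number of tests explodes, here's a rough back-of-envelope calculation. The setting names and the number of candidate values for each are invented for illustration:

```python
# Hypothetical adjustment ranges for a new rifle -- the setting names and
# counts below are invented for illustration.
settings = {
    "handstop position": 4,
    "butt length": 3,
    "cheekpiece height": 3,
    "eye relief": 5,
}

# Testing every combination (a full factorial design) explodes quickly:
full_factorial = 1
for count in settings.values():
    full_factorial *= count
print(full_factorial)  # 4 * 3 * 3 * 5 = 180 test sessions

# Varying one setting at a time around a sensible baseline needs far fewer:
one_at_a_time = sum(settings.values())
print(one_at_a_time)  # 15 sessions, one per candidate value
```

Four modest adjustments already mean 180 sessions if tested exhaustively, which is exactly why the "common sense" shortcuts above matter.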
It is critically important that you record everything, for two main reasons: firstly, if something doesn't work out, e.g. a handstop position, you need to be able to revert to your baseline settings; secondly, so that you can keep an accurate track of the changes you made and the results you got. Reproducibility is the key; there's no point doing a bunch of comparison tests to find the best position for you if you can't reproduce it once you've identified that it is the best position.
The key limitation to understand is that SCATT and similar systems are not perfect in their modelling of shots, so it is a very real possibility that you're not going to get the perfect position for live shooting from SCATT training alone; what it does allow you to do, however, is make useful comparisons.
Real life example: comparing underlayers
OK, so Kat, the kids and I moved to NZ. I had some (but not all) of my shooting kit with me, and my new jacket arrived. I didn't have any underlayers with me for the Nationals, so I shot wearing a hoodie under my new Creedmoor jacket. In the fullness of time, the rest of our stuff arrived in boxes, including all my old underlayers. This left me with a dilemma: did I continue shooting with the hoodie or did I revert to using my Kurt Thune coldwinner top?
Enter SCATT and Excel.
I fired a bunch of shots under as near identical conditions as I could make them, changing only the top I shot in. After doing a little calculation, I obtained the following results:
| Top Worn | Shots | Average Tracelength | Standard Deviation | Standard Error | Average Shot Release | Standard Deviation | Standard Error |
|---|---|---|---|---|---|---|---|
What this tells me is that shooting with the hoodie appears to provide a more consistent hold than the coldwinner top, all else being equal, because the average tracelength and shot release values are both lower (i.e. Better) for the hoodie. I can see that this is statistically very significant for the tracelength because the difference between the two average tracelength values is much greater than the standard error values. The difference between the two shot release values is not demonstrated to be statistically significant, as the difference between the two values is only about 1 standard error. Comparing more values may resolve this issue.
I'll be shooting in the hoodie at this year's Imperial Meeting. I also ran a similar comparison of low and high cheekpiece positions and found that the high cheekpiece position provided a better consistency of aim.
tl;dr version for company directors and other individuals with the attention spans of goldfish
SCATT is really good for trying out changes to your equipment, position or technique.
When using SCATT to compare changes you need to:
- Make sure you know your current standard of performance to act as a baseline
- Change only one thing at a time
- Keep an accurate note of the changes you've made
- Fire lots of shots with the change to make sure the comparison is a good one
- Recognise that SCATT isn't perfect