Oh, it’s definitely legit. I ran some tests modeling some of the SSCI feature matches and I was surprised how well it performed given that I’m using a few hacks and haven’t yet implemented all the features I’d like.
It basically works by using the hypergeometric distribution to initialise a set of probabilities over the possible agenda counts in the opening hand (and updating them as more cards are drawn). The secret sauce comes in how those probabilities are updated whenever cards are accessed (from R&D, Archives or HQ). Given the result of that access (hit or miss), the probabilities are updated using Bayes' rule.
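As a rough sketch of the initialisation step (the deck size, agenda count and hand size here are assumed for illustration, not taken from any particular deck):

```python
from math import comb

def hypergeom_pmf(k: int, N: int, K: int, n: int) -> float:
    """P(exactly k agendas among n cards drawn from a deck of N cards, K of them agendas)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Assumed example: 49-card Corp deck with 9 agendas, 5-card opening hand.
N, K, HAND = 49, 9, 5
prior = [hypergeom_pmf(k, N, K, HAND) for k in range(HAND + 1)]
# prior[k] is the initial belief that HQ holds exactly k agendas.
```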
So suppose you run HQ, access one card and don’t hit an agenda. The model can then eliminate the possibility that all 5 cards in HQ are agendas. Similarly, since a miss is quite unlikely if there were 4 agendas in hand, we can “penalise” the probability of 4 (without eliminating it completely), while shifting probability mass towards the low counts like 0 and 1, since they are more consistent with the result.
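A minimal sketch of that single-access update, again assuming a 49-card deck with 9 agendas and a 5-card hand: the likelihood of missing on one random access when k of the 5 cards are agendas is (5 − k)/5, so k = 5 gets zero weight and k = 4 gets heavily penalised.

```python
from math import comb

N, K, HAND = 49, 9, 5  # assumed deck size, agenda count and hand size
prior = [comb(K, k) * comb(N - K, HAND - k) / comb(N, HAND) for k in range(HAND + 1)]

# Likelihood of a miss on one random access when k of the 5 cards are agendas.
likelihood = [(HAND - k) / HAND for k in range(HAND + 1)]

# Bayes' rule: posterior is proportional to prior * likelihood, then renormalise.
unnormalised = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalised)
posterior = [p / total for p in unnormalised]
# posterior[5] is now exactly 0, posterior[4] shrinks, and posterior[0] grows.
```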
The model uses Kalman filtering to update itself at each step, so if you were to keep accessing and missing in HQ, the probabilities of the larger agenda counts would keep being pushed down. Similarly, if you were to repeatedly access R&D and miss, the model can infer that its initial belief about HQ was probably erroneous, and that there are likely more agendas in HQ than it first thought.
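Iterating the miss update shows that filtering behaviour for HQ. A sketch with the same assumed numbers as above, treating each access as an independent random card from the 5-card hand (the joint HQ/R&D inference is omitted here):

```python
from math import comb

N, K, HAND = 49, 9, 5  # assumed deck size, agenda count and hand size

def update_on_miss(belief):
    """One Bayes update after accessing a random card from hand and missing."""
    weighted = [p * (HAND - k) / HAND for k, p in enumerate(belief)]
    total = sum(weighted)
    return [p / total for p in weighted]

belief = [comb(K, k) * comb(N - K, HAND - k) / comb(N, HAND) for k in range(HAND + 1)]
p_high = [sum(belief[3:])]  # P(3 or more agendas in HQ), tracked after each miss
for _ in range(4):
    belief = update_on_miss(belief)
    p_high.append(sum(belief[3:]))
# p_high decreases monotonically: each miss pushes the belief towards fewer agendas.
```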