It's hard to know at a glance whether a variation is winning because it's actually better, or simply because it randomly got a few more clicks than another variation. It would be useful to show statistical significance (as a % confidence that the winning variation is actually better than any of the others). This is pretty standard in most A/B testing tools and frameworks. Here are a few articles with more detail:

- https://blog.asmartbear.com/easy-statistics-for-adwords-ab-testing-and-hamsters.html - a good starting point, but overly simplistic.
- https://www.evanmiller.org/how-not-to-run-an-ab-test.html - more advanced; in particular, the "Bayesian A/B Testing" sub-article seems to best fit how thumbnailtest is currently designed. A minimal sketch of that approach follows below.
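For reference, here is a minimal sketch of the Bayesian approach the second article describes: model each variation's click-through rate as a Beta posterior and estimate, by Monte Carlo sampling, the probability that each variation is truly the best. The function name, the example numbers, and the uniform Beta(1, 1) prior are illustrative assumptions, not a prescription for how thumbnailtest should implement it.

```python
import random

def prob_best(variations, samples=100_000):
    """Estimate, for each variation, the probability that its true
    click-through rate is the highest, given observed clicks/impressions.

    `variations` maps a name to (clicks, impressions). Each CTR is
    modeled as a Beta(clicks + 1, impressions - clicks + 1) posterior,
    i.e. a uniform Beta(1, 1) prior updated with the observed data.
    """
    wins = {name: 0 for name in variations}
    for _ in range(samples):
        # Draw one plausible CTR per variation from its posterior.
        draws = {
            name: random.betavariate(clicks + 1, impressions - clicks + 1)
            for name, (clicks, impressions) in variations.items()
        }
        # The variation with the highest draw "wins" this round.
        wins[max(draws, key=draws.get)] += 1
    return {name: count / samples for name, count in wins.items()}

# Hypothetical data: thumbnail A got 120 clicks from 4,000 impressions,
# thumbnail B got 100 clicks from 3,800 impressions.
print(prob_best({"A": (120, 4000), "B": (100, 3800)}))
# Prints roughly {'A': 0.84, 'B': 0.16}: A looks better, but the
# evidence is well short of the ~95% confidence you'd usually want.
```

The appeal for a tool like thumbnailtest is that the output reads directly as "X% chance this variation is the best," which is exactly the at-a-glance number requested above, and it stays valid even while the test is still running.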