I noticed that for some of the problems, N solutions will be listed on the problem list, but fewer than N solutions will appear when looking at the "Solutions" page. For example, Board Game Sequences shows 10 total solvers, but the solutions page will only ever list the same 9. As another, less obvious example, Squirrels vs Acorns shows 49 total solvers, but the solutions page will only ever list the same 48 (capped at a random 15 per refresh). This is also the case for Long Decimal Fractions (13/14), Number Base Palindrome (31/32), Binary Split Guessing Game (12/13), The Shredder Conundrum (15/16), and Image Cutting (50/51), where the numbers are listed solutions / total solvers. I suppose for the ones with more than 15 solutions I can't be certain that I'm not just getting terrible luck with page refreshes... but for the ones with <=15 solutions we can be certain.
Oddly, Whisky Blending errs in the opposite direction, showing 39 solvers yet listing at least 40! Same for Equal Hamming Distance (22/21), 'Growth of Micro-Organisms' (7/5), Revoltle (19/18), Easter Bunnies 2D (24/23), E2C2S - try harder! (9/8), and Number of steps in Euclidean Algorithm (60/59).
Is there a reason for this? I was making a small program that collects data from the site (à la Social Web Scraper), but these mismatches were causing errors in it, and now I'm curious :)
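For reference, here is roughly the shape of the check my script does. This is just a sketch: the base URL, page paths, and CSS selectors below are placeholders I invented, not the site's actual markup.

```python
# Sketch only: URL pattern and HTML selectors are placeholders.
import requests
from bs4 import BeautifulSoup

BASE = "https://example.com"  # placeholder for the site's base URL

def count_listed_solutions(problem_id: int) -> int:
    """Fetch the solutions page and count the solution entries shown."""
    page = requests.get(f"{BASE}/solutions/{problem_id}", timeout=10)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")
    # hypothetical markup: each listed solution sits in a div.solution
    return len(soup.select("div.solution"))

def reported_solvers(problem_id: int) -> int:
    """Read the 'total solvers' number from the problem page."""
    page = requests.get(f"{BASE}/problems/{problem_id}", timeout=10)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")
    # hypothetical markup: solver count in span.solver-count
    return int(soup.select_one("span.solver-count").text)

listed, solvers = count_listed_solutions(42), reported_solvers(42)
if listed != solvers:
    print(f"mismatch: {listed} listed vs {solvers} solvers")
```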
Sometimes it's the difference between solvers and solutions by language. For example, if I submit solutions in two different languages for the same problem, say Java and Python, that counts as one solver but two solutions.
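A toy illustration of the distinction, with made-up data:

```python
# Made-up submissions: (user, language) pairs for one problem.
submissions = [
    ("alice", "Java"),
    ("alice", "Python"),  # same solver, second language
    ("bob", "Python"),
]

solutions = len(submissions)                      # 3 submissions
solvers = len({user for user, _ in submissions})  # 2 distinct users
print(solvers, solutions)  # 2 3 -> solutions can exceed solvers
```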
I guess Mathias correctly points out the reason for solutions > solvers, while for the initial question, where solutions < solvers, I vaguely recollect we added filtering which prevents showing solutions that are obviously blank or something like that (less than 13 characters - I checked just now). Some people prefer not to send their code :(
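In other words the filter is roughly like this (a sketch of the idea only; the function name is mine, and whether whitespace is stripped first is my guess):

```python
MIN_LENGTH = 13  # threshold mentioned above

def is_effectively_blank(source: str) -> bool:
    """True for submissions too short to be a real solution.
    Stripping whitespace first is an assumption of this sketch."""
    return len(source.strip()) < MIN_LENGTH

# Such solutions still count toward the solver total but are
# hidden from the Solutions page, hence listed < solvers.
```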
To satisfy your curiosity (and actually I thought it may be useful anyway) I added two optional GET parameters to the page, so you can try adding withblanks=1 and/or limit=50 to the URL (limit is capped at 100 anyway, just to save us from some unlucky requests).
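So for example a request could look like this (the base URL is just for illustration; withblanks and limit are the parameters described above):

```python
import requests

# base URL is a placeholder; withblanks=1 includes the short/blank
# submissions, limit raises the per-page cap (server caps it at 100)
resp = requests.get(
    "https://example.com/solutions/42",
    params={"withblanks": 1, "limit": 50},
    timeout=10,
)
resp.raise_for_status()
print(resp.url)  # e.g. .../solutions/42?withblanks=1&limit=50
```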
"I was making a small program that collects data from the site"
Hm-m-m... that should work; I don't remember any anti-curiosity measures there :) I remember doing some scraping myself to calculate statistics from rosalind.info, for example...
Thank you both! That clears up the confusion, and now I'm able to handle the exceptions I was seeing.
And thank you Rodion for implementing those GET parameters so quickly! That definitely makes things easier, avoiding having to reload the page over and over...