Randomizer Theory

Thread in 'Research & Development' started by colour_thief, 12 Jan 2007.

  1. Can someone experimentally generate the single flood (two consecutive occurrences of the same piece) rate for 7-bag? It requires generating a very large number of pieces using the 7-bag and counting how many adjacent pairs of the same piece there are (there can't be any three consecutive occurrences in 7-bag anyway).
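A quick Monte Carlo sketch of this experiment (the function names are mine) — generate a long 7-bag sequence and count adjacent identical pairs. A flood can only happen at a bag boundary, so the rate should land near 1/49:

```python
import random

def bag7_sequence(n_bags, pieces="IJLOSTZ"):
    # concatenate n_bags independently shuffled 7-bags
    seq = []
    for _ in range(n_bags):
        bag = list(pieces)
        random.shuffle(bag)
        seq.extend(bag)
    return seq

def flood_rate(seq):
    # fraction of adjacent positions holding the same piece
    pairs = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    return pairs / (len(seq) - 1)

random.seed(1)
rate = flood_rate(bag7_sequence(200_000))
print(rate)  # should come out near 1/49 ~ 0.0204
```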
     
  2. Sorry for the triple posting :oops:. I managed to calculate the various values based on the transition diagram for the bag randomizer (using the composite states that this randomizer is amenable to).

    The first step was to draw the diagram (13 states and 7 levels). The second step was to calculate the stable state probabilities (which are effectively the relative frequencies of the states when you run the randomizer for a very long time). The third step was to complete the transition table, which is clumsy (169 entries) but not too tedious, because calculating the values is quite straightforward.

    Once you have these three things, you can use them to calculate various values. Calculating the long-term probability that, given an I-piece has just occurred, the next I-piece occurs after a given number of steps leads to the values:
    A = 49
    P(0) = 1/A; P(1) = 2/A; P(2) = 3/A; P(3) = 4/A; P(4) = 5/A; P(5) = 6/A; P(6) = 7/A; P(7) = 6/A; P(8) = 5/A; P(9) = 4/A; P(10) = 3/A; P(11) = 2/A; P(12) = 1/A
    These are the same values as have been calculated before.

    Calculating long term probabilities without assuming anything leads to:
    B = 7^3 = 343
    P(0) = 49/B; P(1) = 48/B; P(2) = 46/B; P(3) = 43/B; P(4) = 39/B; P(5) = 34/B; P(6) = 28/B; P(7) = 21/B; P(8) = 15/B; P(9) = 10/B; P(10) = 6/B; P(11) = 3/B; P(12) = 1/B

    These lead to the numbers posted by farter for the 7-bag above. The numbers I get after nearest-integer rounding are (from n = 0 to 12):
    142857, 139942, 134111, 125364, 113703, 99125, 81633, 61224, 43732, 29155, 17493, 8746, 2915

    The first set of probability values above is calculated by updating the stable state probabilities once an I-piece occurs. The second set of values is calculated directly from the stable state probabilities.
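The two sets can be cross-checked exactly with exact fractions: a random position lies in a given spot of an interval with probability 1/7, so each value of the second set is (1/7) times a tail sum of the first set (a sketch; variable names are mine):

```python
from fractions import Fraction

A = 49
# first set: interval distribution just after an I-piece has occurred
first = [Fraction(k, A) for k in (1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1)]
assert sum(first) == 1

# second set: P(n) = (1/7) * (tail sum of the first set from n onward)
second = [Fraction(1, 7) * sum(first[n:]) for n in range(13)]

B = 343
expected = [Fraction(k, B) for k in (49, 48, 46, 43, 39, 34, 28, 21, 15, 10, 6, 3, 1)]
assert second == expected
```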

    What you are saying is correct. My re-wording for this would be that the probabilities are about the very next I-piece being exactly the nth piece (what happens after that doesn't affect these probabilities).

    However, when you relate the pmf (probability mass function) and the values calculated by Zaphod, I don't understand the direct logical link. Perhaps I am missing something. For example,
    This is saying that the very next I-piece must exactly be the 11th one (or come exactly after an interval of 10 other pieces). This is assuming you interpret it the way you mentioned above.
    11th -- I-piece
    12th -- any
    13th -- any

    Literal translation is that the very next I-piece must exactly be the 11th, 12th, or 13th.
    So we are saying that the very next I-piece must be the 11th or later.
    11th -- I-piece
    12th -- any
    13th -- any
    OR
    11th -- not I
    12th -- I-piece
    13th -- any
    OR
    11th -- not I
    12th -- not I
    13th -- I-piece

    The two interpretations don't seem to be the same.

    Interpretation aside, there is also the fact that the first expression doesn't assume anything about our position in the randomizer's output, apart from the assumption that we have been running it for a very long time (which causes the relative frequencies of states to reflect the stable state probabilities). The second expression (the pmf), on the other hand, additionally assumes that an I-piece has just occurred. This alters the probability distribution over states.

    Perhaps I am missing something, but I don't see a direct logical link between the two expressions.

    ----

    Another thing is that, using my method, I am calculating the probability of a single flood (two consecutive occurrences of the same piece) in a 3-piece single bag to be 1/27. The same calculation for a 7-piece single bag gives 1/343.

    The flood probability for a single specific tetromino should be 1/7^3. Given the complete symmetry of the bag, each of the flood strings (LL, JJ, II, OO, SS, TT, ZZ) has the same probability of 1/343, so adding them all up gives 1/7^2.
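The 3-piece bag case is small enough to check exhaustively: a flood can only straddle the seam between two bags, so enumerating every ordered pair of consecutive bags is enough (a sketch; names are mine):

```python
from fractions import Fraction
from itertools import permutations

bags = list(permutations("ABC"))
# each of the 3 positions in a bag is an equally likely place to stand,
# over all 6*6 equally likely (bag, next bag) pairs
total_positions = len(bags) ** 2 * 3

# flood starting at a given position is only possible at the bag boundary
specific = sum(1 for b1 in bags for b2 in bags if b1[-1] == "A" and b2[0] == "A")
any_piece = sum(1 for b1 in bags for b2 in bags if b1[-1] == b2[0])

assert Fraction(specific, total_positions) == Fraction(1, 27)   # one named piece
assert Fraction(any_piece, total_positions) == Fraction(1, 9)   # any piece: 3 * 1/27
```

The same counting for a 7-piece bag gives 1/343 per named piece and 7 * 1/343 = 1/49 overall.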

    ----

    Anyway, if the transition-based method is correct, then it has the virtue of scaling quite well for automated calculations, for 7n-bag in particular (making an actual program would be a pain though).

    Although it is a general method, assuming only that there is no further structure to the state multiplicities (so it won't apply to the TGM randomizers), time feasibility would, in general, become an issue.

    edit:corrections
     
    Last edited: 5 Dec 2012
  3. Re: the applicability of transition-based methods, I think it's the other way around: TGM and other history-based randomisers are the best candidates, because the state is naturally modelled by the contents of the history (the number of states can be cut down using symmetry arguments, and most of the entries of the transition matrix are zeroes anyway). Although you can apply such a method to bag randomisers as well, it is probably easier to calculate the probabilities directly for each position in the bag and average them.

    At any rate, I've done my own independent calculations and so far they all agree with colour_thief's. I might analyse a few more randomisers if anyone wants to know the info.
     
  4. I was referring to higher values of n in the 7n-bag. The transition method should work quite well for those, as long as it is automated (doing it by hand would become far too tedious after the first few values).

    But it wouldn't be surprising at all if there was a simple pattern for kn-bag in general (as n increases). Might be instructive to use k=3 for that purpose and then extrapolate the results to 7n-bag.

    You are probably right. I am not good with probability and was mixing things up. The long-term values, I think, should be completely independent of the underlying implementation of randomness, and depend only on the probabilities attached to the transitions.

    Symmetry does seem to work well for a lot of basic randomizers.

    I realized that strict history (of 4 pieces) is quite easily handled with just 5 states. Might check whether roll-4 and roll-6 can be handled by using appropriate modifications.
     
    Last edited: 5 Dec 2012
  5. Arcorann and wwksrdg, you two sound like some people I would very much like to get to know better. :D For whatever reason, I am "randomly" passionate about randomizers.

    Believe it or not, I don't have formal training in this sort of thing. I've worked out a lot of randomizers' interval probability curves, calculating without knowing the conventional approaches and making up my methods as I go along. I would definitely be very interested in learning more about this topic, so if you can recommend websites, search terms, or even textbooks on the subject I would appreciate it. "Transition-based methods" in particular sound useful to me.

    Now a bunch of answers to various things relevant to my interests:

    This has already been done and it is consistent with my calculations.

    My methods are rather different (not transition based), but I've made a program that can calculate 7n-bag probabilities, and it has been verified to be consistent with data at least as far as n = 9 aka 63-bag. I can share the code if there's any interest. As a direct result, the lowest common denominator of all the piece interval probability fractions is given by the formula:
    n(7n choose n)^2

    I've also worked out d+1 bag, where the algorithm is that there are d distinct pieces (eg. Tetris has d=7) with one of each in the bag plus a random piece. Confirmed with actual data as well. The denominator in general is:
    (d^2 + 2d)(d+1 choose 2)

    See now this is EXACTLY what I'm talking about. I get all tingly reading this because I swear I'm doing this exact kind of thing, cutting down states using symmetries and everything. Believe it or not, I only recently figured out that the "flow" of probabilities had an analogous system of equations that could be solved to find the relative probabilities of the different states as exact fractions. Previously, I had applied the probabilities iteratively until the individual state probabilities had all stabilized to many decimal places. It's kind of cool to see it work iteratively, as you can start with any initial distribution and it'll always give the same answer. You can see for yourself in my spreadsheet below.

    I would love a second opinion on the relative proportion of TGM history diversities:

    4 distinct, eg. LTOZ
    3 distinct, eg. IJTJ
    2 distinct, eg. SZSZ
    1 distinct, eg. TTTT

    I reduced the number of states to 8 with my calculation. And it's possible to solve TGM1 and TAP at the same time if you define the transitional probabilities with placeholder terms such as "IN_4" "IN_3" "IN_2" "IN_1" to denote the odds of getting a piece that is in the history given a history diversity of 4 through 1. Then you can plug in TGM or TAP's "IN_*" probabilities as the last step.

    For reference, my calculations are here:
    https://docs.google.com/spreadsheet/ccc?key=0AlZZx1grN1ngcHFHeUg5VlFwVXRkUFhiaEpYZmNuVWc

    Also, a question that has been nagging me for years:

    Given a known history diversity occurrence, is it probabilistically sound to simply weight the 4 different piece interval calculations according to these occurrence rates?

    I mean specifically when considering the TGM/TAP randomizers, but if there are cases where the answer is different that would be interesting to know too.

    Incidentally, TGM3's randomizer has 7!*7^39 states, or 8*7^35 assuming I've worked out the symmetry properly. I don't even know where to start with calculations.
     
    Last edited: 6 Dec 2012
  6. Regarding TGM history diversity proportions... I did that super long ago and forget all the symmetries I found. But looking at it again I think the minimal reduction might actually be 10 states. So yeah, a second opinion is definitely welcomed on that front.
     
  7. Look up Markov chains. A Markov chain is a system which can be in one of many states, and the probability of moving from one state to another is dependent only on the current state. Modelling the randomiser using a Markov chain is basically what I meant by "transition-based", though a lot of the stuff there you seem to have worked out independently already.

    My modelling actually used 15 states; I'm interested to see how your 8 states relate to my fifteen. The stationary probabilities for each of my fifteen states actually work out very elegantly, since the seven states with 2 history diversity all have the same probability, as do the six with 3 history diversity, and the ratios of these probabilities are also very elegant.

    Specifically, let p_k be the probability of being in one of the states where there are k distinct pieces in the history, and n be the number of rolls. Then for k = 1,2,3:
    p_(k+1) = p_k * (7^n - k^n) / (k+1)^(n-1)

    Thus, calculating exact values for the proportion of each diversity becomes a matter of computation (noting that there are seven states with 2 history diversity and six with 3 history diversity):

    TGM (4 rolls):
    1: 1/1121726 ~ 8.9148*10^-7
    2: 2100/1121726 = 1050/560863 ~ 0.0018721149
    3: 159000/1121726 = 79500/560863 ~ 0.1417458452
    4: 960625/1121726 ~ 0.8563811483

    TAP (6 rolls):
    1: 256/54739840949 ~ 4.6767*10^-9
    2: 6588288/54739840949 ~ 0.0001203564
    3: 2732570880/54739840949 ~ 0.0499192331
    4: 52000681525/54739840949 ~ 0.9499604059

    These match the figures in your spreadsheet to enough decimal places (8 for TAP, 5 for TGM) that I am reasonably confident that your method is giving correct answers.
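That recurrence and the listed proportions can be replayed mechanically with exact fractions (a sketch; the function name is mine — there are 1, 7, 6 and 1 states of diversity 1 to 4 respectively):

```python
from fractions import Fraction

def diversity_proportions(n):
    # n = number of rolls (4 for TGM, 6 for TAP)
    p = [Fraction(1)]                      # p_1, unnormalised per-state probability
    for k in (1, 2, 3):
        # the recurrence p_(k+1) = p_k * (7^n - k^n) / (k+1)^(n-1)
        p.append(p[-1] * Fraction(7**n - k**n, (k + 1)**(n - 1)))
    counts = [1, 7, 6, 1]                  # states per history diversity
    totals = [c * pk for c, pk in zip(counts, p)]
    s = sum(totals)
    return [t / s for t in totals]

tgm = diversity_proportions(4)
assert tgm == [Fraction(1, 1121726), Fraction(2100, 1121726),
               Fraction(159000, 1121726), Fraction(960625, 1121726)]

tap = diversity_proportions(6)
assert tap == [Fraction(256, 54739840949), Fraction(6588288, 54739840949),
               Fraction(2732570880, 54739840949), Fraction(52000681525, 54739840949)]
```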

    I'm not sure what you mean by the "4 different piece interval calculations". Could you clarify this a little bit?

    I was under the impression that TGM3 and TAP had the same randomiser. Care to explain any differences?
     
  8. First of all, hello.

    ...Oh fuck, where to start... I had this explained to me last night, and I'm not even sure I could explain it.

    Those are some interesting numbers. CT just asked me if it was possible to peek under the hood of my implementation of the TAP randomizer at the history states. With a slight hack, I was able to output this as "how many of the six pairings (01, 02, 03, 12, 13, 23) in the history have the same piece in them?" expressed as a piece (in alphabetical order). Some additional hacking later (because everyone loves SIGSEGVs caused by trying to write beyond the end of an array), and I was able to arrive at this output: http://www.twentysixfourteen.net/randomizers/taptrap-2bil.txt

    So it looks like we have this correct.

    Your 15 states... are they these, by any chance?
    Code:
    AAAA
    
    AAAB
    AABB
    ABBB
    ABAA
    AABA
    ABBA
    ABAB
    
    AABC
    ABAC
    ABCA
    ABBC
    ABCB
    ABCC
    
    ABCD
     
  9. Most excellent, thank you for working those out Arcorann! I'm super glad that my calculations match yours as well. If I may ask, what tools did you use to work those out? Just pen and paper, or is there software that can make it easier? I'm finding it a little tedious to work out my system of equations by hand, even with only 8 states. I will definitely be reading up on Markov chains, thank you for the suggestion.

    Ok, I think I remember what I did. The first step takes you down from the 7^4 states the randomizer can naturally hold to a mere 15 by removing the piece types from the equation. Personally, I did this by labelling them A through D in the order in which they are encountered. This gives:

    ABCD
    AABC
    ABAC
    ABCA
    ABBC
    ABCB
    ABCC
    AAAB
    AABA
    ABAA
    AABB
    ABAB
    ABBA
    ABBB
    AAAA

    I'm almost certain this must be the same 15 you used. One detail to note is the piece A I'm starting with is the one that has just entered the history. Now comparing a few in particular we notice something interesting:

    ABCA
    ABCB
    ABCC

    You'll notice that on the next iteration, all of these will be reduced to ABC! This makes the 4th piece uninformative, as all 3 cases behave identically. As long as we know that the history diversity is 3 and not 4, there is no room for mistaking these for ABCD. You can apply the same technique to AAB*, ABA*, and ABB*. This eliminates 5 possibilities, dropping the total to 10.

    There is a lesson to be learned from the previous reduction, allowing it to be generalized further: what matters is where things are going. Changing to a different notation, one that describes the diversity "contribution" over 4 iterations, reveals the symmetry. So the original 15 can be converted like so:

    ABCD = (4,3,2,1)
    AABC = (3,2,1,1)
    ABAC = (3,2,2,1)
    ABCA = (3,3,2,1)
    ABBC = (3,2,2,1) x
    ABCB = (3,3,2,1) x
    ABCC = (3,3,2,1) x
    AAAB = (2,1,1,1)
    AABA = (2,2,1,1)
    ABAA = (2,2,2,1)
    AABB = (2,2,1,1) x
    ABAB = (2,2,2,1) x
    ABBA = (2,2,2,1) x
    ABBB = (2,2,2,1) x
    AAAA = (1,1,1,1)

    Or if you're more visual or just don't like counting, you can equivalently place a * wherever there's a duplicate within a history, and it identifies the same symmetry. Either way you're left with the 8 cases below:

    ABCD (unchanged)
    A*BC (from AABC)
    AB*C (from ABAC, ABBC)
    ABC* (from ABCA, ABCB, ABCC)
    A**B (from AAAB)
    A*B* (from AABA, AABB)
    AB** (from ABAA, ABAB, ABBA, ABBB)
    A*** (from AAAA)
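Both reductions can be confirmed by brute force over all 7^4 histories (a sketch; function names are mine):

```python
from itertools import product

def relabel(hist):
    # label pieces A, B, C, ... in order of first appearance
    order = {}
    return "".join(order.setdefault(p, "ABCDEFG"[len(order)]) for p in hist)

def star(hist):
    # replace any piece already seen earlier in the history with '*'
    seen, out = set(), []
    for p in hist:
        out.append("*" if p in seen else "ABCDEFG"[len(seen)])
        seen.add(p)
    return "".join(out)

histories = list(product(range(7), repeat=4))
assert len({relabel(h) for h in histories}) == 15   # the 15 relabelled patterns
assert len({star(h) for h in histories}) == 8       # the 8 star patterns above
```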

    How does that compare?

    If you're calculating the probability of getting a piece repeat of a certain interval (eg. for the probability mass function), the history diversity affects the calculation. There are 4 possibilities for the history diversity, and above you have calculated their proportion. With the goal being to produce a final probability that accurately describes the randomizer's general character, my question is this:

    For a given interval size, is it ok to use a constant history diversity for the whole calculation, do it 4 times to cover the 4 history diversities, and then combine them with a weighting according to the history diversity proportions? Or does the probability have to branch, to account for the history diversities, with each piece dealt in the interval calculation? I've been doing a weighting for my probability curve, but I've always had a nagging feeling that that wasn't ok.

    That is what was believed for years, and they do share important algorithmic features. However there are substantial differences.

    There is still a history of 4 with 6 rolls.
    The game keeps track of the order of the 7 piece types from least to most recently received.
    The game has a bag of 35 pieces, initialized to contain 5 of every piece type.

    Instead of rolling from 1 to 7, the game rolls from 1 to 35 and removes the corresponding piece from the bag. It then places the least recently received piece in the bag. Even if the game rejects a roll, each individual attempt will still do this bag removal/replacement so it can happen multiple times per piece dealt.

    ...and there are also 2 separate bugs where under some very specific circumstances it fails to properly do the removal/replacement. I don't remember them off the top of my head but I'll look them up when I get a chance and report back.
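Based purely on the description above, a sketch of that roll loop might look like the following. All names are mine, the initial history is an assumption, and the two removal/replacement bugs mentioned are NOT modelled — so this should not be taken as the game's actual implementation:

```python
import random

PIECES = "IJLOSTZ"

class TGM3Randomizer:
    # Sketch of the algorithm as described: history of 4 with 6 rolls,
    # a 35-piece bag (5 of each type), and a least-recently-dealt ordering.
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.history = ["S", "Z", "S", "Z"]   # assumed initial history
        self.bag = list(PIECES) * 5           # 35 pieces, 5 of every type
        self.recency = list(PIECES)           # least -> most recently dealt

    def next_piece(self):
        for _ in range(6):                    # up to 6 rolls
            i = self.rng.randrange(35)
            piece = self.bag[i]
            # every attempt, even a rejected one, swaps the rolled piece
            # for the least recently dealt piece
            self.bag[i] = self.recency[0]
            if piece not in self.history:
                break                         # accept; else reroll (last roll always accepted)
        self.recency.remove(piece)            # mark as most recently dealt
        self.recency.append(piece)
        self.history = self.history[1:] + [piece]
        return piece
```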
     
  10. Re: the states that I used, apart from a difference in notation the set that both you and Pineapple posted was indeed the set that I came up with.

    Re: your interval calculations, I don't think what you've been doing is correct, though I'll need to look into it a little further. I do know that, compared to my results, yours differ in every entry except the first for TAP, though that might be because your proportions aren't exact.

    Re: TGM3: Wow, that looks complicated. I think you've overestimated the number of states though - since the piece order inside the 35-piece bag is not important, the number of possibilities is only 41C6 = 4496388 as opposed to your 7^35. This is still completely unfeasible to calculate theoretically, though.
     
  11. Edo a.k.a. FSY

    You're missing the 1/7.

    For convenience, here's the expression I gave:
    Code:
    P(X=x)= (1/7) * P(D>=x),    for x= 1, 2, ... , 13
    and the example calculation:
    Code:
    P(X=11)= (1/7) * P(D>=11)
           = (1/7) * ( P(D=11) + P(D=12) + P(D=13) )
           = (1/7) * ( (3/49) + (2/49) + (1/49) )
           = (1/7) * (6/49)
    The only way you can be at some unknown point in the sequence and have to wait exactly 11 pieces for your next I, is if you are in a drought of at least 11, i.e. D>=11. (It should be clear that a drought of size D<11 could not possibly accommodate a wait of X=11.) However, in each of the possible drought sizes that could possibly accommodate a wait of 11, it is required that you are indeed exactly 11 pieces from the next I; the probability of this occurring is 1/7.


    If you're still not convinced, imagine we have a sequence of 49n bags; that's 49n*7 tetrominos in total, and 49n*7 possible positions in the sequence at which we can be. For large n, the number of "droughts" or intervals between I's is ~49n, and the drought sizes are distributed according to the pmf. Now, there will be n droughts where D=13, which contain a single position that is 11 away from the next I. There will also be 2n droughts where D=12, which similarly contain a single position that is 11 away from the next I, and so on. Hence, of all the 49n*7 possible positions at which we could be, there will be n+2n+3n positions at which we would be 11 away from the next I. The probability of having to wait X=11 for the next I follows. (If you like, you can drop the n for convenience, it might make things clearer.)
     
  12. While I don't precisely get what you are saying, I think I have gotten the gist of your basic argument. I suppose it goes like this:
    Suppose you have a very long string of length X generated by the randomizer. Let
    Going from I-piece to I-piece directly:
    d(11) = number of times there were exactly 11 other pieces between two I's
    d(12) = number of times there were exactly 12 other pieces between two I's

    Going from start to end step by step:
    n(11) = number of times exactly 11 pieces between current position and the very next I were detected

    It becomes reasonably clear after some trial and error that:
    n(11) ~ d(12) + d(11)

    For example, we could keep temporary counters for n(11), d(12) and d(11) (initialized to 0) and move from I-piece to I-piece. We can simply discard all cases where the in-between distance is 10 or less (that is, none of the counters gets a +1). However, when an in-between distance of 11 is detected, both the counters d(11) and n(11) must be increased by 1. When an in-between distance of 12 is detected, similarly both the counters d(12) and n(11) must be increased by 1.

    When we reach the end the result n(11) ~ d(12) + d(11) will hold true. Since:
    P(D=11) = d(11) / (X/7) (with slight change of convention for my terminology)

    This is because in a very long sequence of length X there will be ~X/7 I's, and those are the ones tested for the pmf.

    Similarly P(X=11) = n(11)/X

    A similar equation for P(D=12) and this leads to the equation you described.

    So this is what you were saying, I suppose? What you are saying definitely makes sense, but I don't think it is trivial by any means. For one, the way I have been thinking about it is placing a very large number of samples across the states and leaving them long enough to settle. In the long term, their proportions balance out according to the stable state probabilities.
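The counting identity n(11) ~ d(11) + d(12) can in fact be checked exactly on a generated sequence: between the first and last I, each I-to-I gap with 11 or 12 in-between pieces contributes exactly one position that is 11 pieces from the next I. A sketch using 7-bag (names are mine):

```python
import random

def bag7(n_bags, rng):
    seq = []
    for _ in range(n_bags):
        bag = list("IJLOSTZ")
        rng.shuffle(bag)
        seq.extend(bag)
    return seq

rng = random.Random(7)
seq = bag7(50_000, rng)
i_pos = [i for i, p in enumerate(seq) if p == "I"]

# d(k): number of I-to-I gaps with exactly k other pieces in between
gaps = [b - a - 1 for a, b in zip(i_pos, i_pos[1:])]
d11 = gaps.count(11)
d12 = gaps.count(12)

# n(11): positions (between the first and last I) with exactly 11 pieces
# between them and the very next I
next_i = {}
nxt = None
for i in reversed(range(len(seq))):
    if nxt is not None:
        next_i[i] = nxt        # nearest I strictly after position i
    if seq[i] == "I":
        nxt = i
n11 = sum(1 for i in range(i_pos[0], i_pos[-1]) if next_i[i] - i - 1 == 11)

assert n11 == d11 + d12        # exact, since no 7-bag gap exceeds 12
```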
     
  13. I now have an answer to this: Definitely not. The reason is that the proportions of each diversity are altered given the knowledge that the last I was 2/3/4/etc. pieces ago. For example, given that the last I was 2 pieces ago (i.e. the second-last piece), we know that the probability of diversity 1 must be zero, since the last piece is different from the second-last piece. Thus, we cannot determine the piece intervals in the way described.

    Also, looking at some earlier posts, this caught my eye:
    I thought a bit about it and realised that the interval distribution for a doubly (or higher) recursive randomiser is the same as that for a singly recursive randomiser. In fact, the probabilities are almost as easy to calculate, because the only real change is that the 7 interval becomes slightly less likely.
     
  14. You guys should team up, write a paper and publish in a journal.
     
  15. Muf

    Would be a breath of fresh air compared to the dime-a-dozen faux scientific articles written about Tetris up till now.
     
  16. Thank you for your clear explanation of what was wrong. I think I've worked out the correct way to do it. If I'm right, you need a separate equation for intervals 1, 2, 3, and 4, but intervals 5 and up can share a general equation. Could you check if your numbers are in agreement with my latest? Here's a sample from TAP:

    Interval 1 = 0.00837
    Interval 2 = 0.00835
    Interval 3 = 0.00832
    Interval 4 = 0.00827
    Interval 5 = 0.30736
    Interval 6 = 0.20898
    Interval 7 = 0.14246
    Interval 8 = 0.09731
    Interval 9 = 0.06656
    Interval 10 = 0.04552
    Interval 11 = 0.03113
    Interval 12 = 0.02129
    Interval 13 = 0.01456

    That's really cool! I can kind of see why that might be the case (a duplicate bag will give 7 intervals of 7) but I'd love to read a more detailed explanation.
     
  17. The values you are posting are for the pmf, I am assuming? Putting the string query I described before in a more precise format, we can write it:
    P(n) = (n non-1 blocks).1 (for n>=0)
    What happens after 1 is considered to be irrelevant so I just don't write anything after that.

    I think it is possible to relate P(n) and pmf(n) directly if we use a slight variant of equation posted by Edo before. If it is correct (it seems to be quite plausible) it would save the drudgery of calculating state values again after a 1 occurs.
    P(n) = (1/7) pmf(>=n)
    This is because 1 is supposed to occur about 1/7 times in a very long interval. Now writing for n+1 we have
    P(n+1) = (1/7) pmf(>=n+1)

    From this we get:
    pmf(n) = 7*[P(n) - P(n+1)]
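Plugging in the 7-bag numbers from earlier in the thread, this inversion checks out exactly as pmf(n) = 7*[P(n) - P(n+1)] (note the order of the terms, since P(n) decreases in n); a quick sketch with exact fractions:

```python
from fractions import Fraction

# I-to-I interval distribution and steady-state wait distribution for 7-bag,
# indexed by n = 0..12 other pieces before the next I
pmf = [Fraction(k, 49) for k in (1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1)]
P = [Fraction(k, 343) for k in (49, 48, 46, 43, 39, 34, 28, 21, 15, 10, 6, 3, 1)]
P.append(Fraction(0))  # P(13) = 0: no wait can exceed 12

assert all(pmf[n] == 7 * (P[n] - P[n + 1]) for n in range(13))
assert P[0] == Fraction(1, 7)
```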

    Another point: I don't think that using the 15 states (posted above) is sound. I will try to explain.

    You see it is technically a correct way to encode all the 7^4 states of the randomizer. In fact, if you add up the number of states represented by each of the states above your result would add up to 7^4.

    It is also a correct way to calculate the sum of stationary probabilities for all the states with history diversity 1,2,3 and 4. Informally using symmetry (don't know of what kind :confused:) we can argue that each of the states that have history diversity 1 (in the original full state diagram with 7^4 states) have equal stationary probability. Similarly we can say the same for states having history diversity of 2 and so on. So if we say that:
    w = sum of prob for all states with diversity 1
    x = sum of prob for all states with diversity 2
    y = sum of prob for all states with diversity 3
    z = sum of prob for all states with diversity 4
    You would note that w,x,y and z are related to the values posted by Arcorann above:
    w = p_(1)
    x = 7*p_(2)
    y = 6*p_(3)
    z = p_(4)

    The issue is different. I don't believe that you can enter a simple string query to this 15-state diagram. "A" for example represents a completely different piece in different states in the original diagram. Same is true of every other transition label. There is no 'constant transition label' that represents the 'same piece' for every state in the original diagram (with 7^4 states).

    For example, if I wanted to post the query above (n non-1 blocks).1, there seems to be no simple way to do it.

    So if the results happen to be good, it is either because:
    a) a very sophisticated reason
    b) being numerically close to the actual results

    However, if (b) is true, that doesn't make this a sound method in my opinion.

    Anyway, a good simple check is to calculate the value of P(0). If you are getting that value 1/7 exactly, then it seems plausible that it might be correct. However, if you aren't getting exact 1/7, then using 15-states is definitely not sound for the kind of string query I posted.
     
  18. This is essentially what I'm doing. Except that I don't have the "7*" in there. For me, they don't sum to 1/7, they sum to 1. Personally I think the math is a whole lot cleaner when you look at things not from a particular piece type but from a more general perspective. The reason I ask for confirmation is that my method, by definition, will sum to 1. Summing to 1 is therefore not a useful confirmation that it has worked.

    About the 15 states: They definitely work. You can get away with a lot knowing that the piece types are equally probable, like removing the 1/7 above. If you tried actually working out the probability flow of 7^4 states, I'm sure you'd realise the patterns and consolidate them into 15 states in a hurry. For example, it doesn't matter what type exactly piece A or B are in AAAA -> BAAA, the event is equally probable.

    And incidentally, we've analyzed the results of long generated sequences and the history diversities are detected in appropriate proportions.
     
    Last edited: 9 Dec 2012
  19. This is quite interesting.

    At any rate, I made a slightly different model using 52 states, which is not very different from the 15-state model above, apart from using 1 as an extra constant label and having the history run from right to left (in contrast to left to right as in your case). I completed the table in a semi-automated way using a spreadsheet. So, in a sense, I have the complete transition diagram.

    I got equivalent results for history diversity proportions (just had to form a few selected equations using my transition table). As long as the basic encoding of states is correct, and there is no mixing up of two or more different history diversities in a single state, I think the results should be definitely correct regardless of specific encoding used.

    I also checked the probability P(0) using the spreadsheet and got exactly 1/7. The downside of having so many states, though, is that getting a precise algebraic form becomes too difficult. In any case, I will make a basic bare-bones program (by hard-coding the states from the spreadsheet into the program) to calculate numerical results for queries related to the pmf, general stationary probabilities and floods. If correct, they should, most likely, agree with the experimental numbers posted on the previous page for roll-4 and roll-6 (as my bag calculations also agreed).

    By the way, you can compare your results with strict history. Whether your results are in numerical or algebraic form, you should definitely see the difference between the two approach 0 as you increase n.

    Your motivation for using 15 states seems interesting (and also reasonably plausible at a glance). Indeed if the results are true this would also happen to be quite an elegant solution. Would be interesting to look into the reason for it being correct in more detail.

    In any case, I will make up my program first. Would take me more than a few hours given my poor programming skills :oops: ... but would provide good practice too :)
     
  20. +10000
     
