Randomizer Theory

Thread in 'Research & Development' started by colour_thief, 12 Jan 2007.

  1. OK, I'm back. I've been trying to rigorously define "predictableness", and I think I have it. Let X_n be the nth piece and S_0 be the initial state of the system, which may or may not be fixed.

    \(P = \lim_{n \rightarrow \infty} \frac{1}{n} E\left(\sum^n_{k=1} p_k(\mathcal{F}_{k-1})\right)\), where \(p_n(\mathcal{F}_{n-1}) = \max_x P(X_n = x | \mathcal{F}_{n-1})\) and \(\mathcal{F}_{n-1} = \{S_0, X_1, ..., X_{n-1}\}\)

    In layman's terms, F_{n-1} is the initial state together with the entire piece history up to the (n-1)th piece. p_n(F_{n-1}) is the maximum, over all pieces, of the probability that the nth piece is that piece; in other words, the probability of guessing the nth piece correctly using the optimal guessing strategy, given F_{n-1}. The predictability is the limit of the average of the expected values of p_1 to p_n, taken as n goes to infinity.

    Why do I take the limit of the mean and not simply the limit of p_n? If p_n converges (as with history), the mean will also converge; if p_n is periodic (as with bag) this gives the average value.

    The predictability of some common randomisers:

    Memoryless: 1/7 = 14.29%
    TGM: 28.96%
    TAP: 31.80%
    7-bag: (1/7+1/6+1/5+1/4+1/3+1/2+1)*1/7 = 37.04%

    Computing the predictability of other randomisers will be more involved. I hope to have some results in the coming days.
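    If anyone wants to sanity-check the 7-bag figure numerically, here's a minimal Monte Carlo sketch in Python. It assumes the guesser knows the bag phase, and that the optimal guess is simply any piece still left in the current bag:

    Code:
    import random

    def estimate_predictability(n=1_000_000):
        """Guess optimally against 7-bag and count how often the guess is right."""
        correct, bag = 0, []
        for _ in range(n):
            if not bag:
                bag = list("IJLOSTZ")       # start a fresh bag
            guess = random.choice(bag)      # every remaining piece is an equally good guess
            piece = random.choice(bag)      # the piece the randomiser actually deals
            bag.remove(piece)
            correct += (guess == piece)
        return correct / n

    print(estimate_predictability())        # ~0.3704 = (1/7+1/6+...+1)*1/7; memoryless is exactly 1/7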
     
    Last edited by a moderator: 22 Jun 2015
    clincher, Qlex, Kitaru and 1 other person like this.
  2. Would be nice to see how TGM3 compares to 7bag.
     
  3. I really like your definition, Arcorann.

    Something that I think is interesting to consider is TGM3's dynamic piece probabilities. The exact probabilities of each piece type are hidden information, and the optimal choice is much more nuanced as a result. The predictableness assuming complete knowledge of the hidden information is certainly an upper bound. A lower bound is going to be TAP. But the real answer, which lies somewhere in between, will first require determining the optimal heuristic for guessing the hidden information, which is a beefy problem in and of itself.

    It's crazy how much of the preceding piece sequence can significantly alter the best guess of the hidden information. I am curious how many pieces on average it takes to completely rewrite every index of the 35 piece array.
     
  4. Took a stab at calculating the predictableness of 8-bag, came out to 28.51% for me.
     
  5. I am assuming that these are the same values one would get if we multiplied the long-term probability of each state (which I have been calling stable probability) by the highest single-piece probability in that state?

    If that is so, then would the values necessarily be the same if the initial state of the system was not sharp/fully known (by fully known I mean probability 1 for some state)? Specifically, for those randomisers (they should exist?) where, even given the full history, the probability of pinning down the exact state doesn't converge to 1 as time increases.

    I didn't fully understand the definition, so this is a really rough/uneducated guess, but I can't resist the itch to point it out :p. What if the average of the expected values diverges? Almost certainly a precise definition like this could be made for randomisers that we are abstracting with state machines, but I am talking about the case in which we are abstracting a randomiser in a more general sense, as any programmable sequence.

    To be honest though, I haven't really thought of any examples for any of these two points.
     
  6. Having considered it for a bit longer I realise that my first formulation was correct - I got a bit confused over the information that can be observed. Also, the state S_0 should actually be the observable properties of the initial state.

    @colour_thief, you're absolutely right. Even given all dealt pieces we can't see the probabilities, so getting any reasonable information from TGM3 would require simulation. I lost my simulation code but reconstructing it shouldn't be a problem, though it's not a priority right now.

    I had a look at 8-bag and I think it's 32.86%. I'm not sure what the difference in our workings is.

    @wksrdg: Re: sharp vs. weak initial states: As mentioned before, I did need to modify my definition. This only becomes an issue if the dependence on the initial state doesn't reduce to zero over time, and I can't think of any randomiser that would satisfy this. Even if there was one, the piece sequence would give information on the initial state, so as n goes to infinity it could be guessed with arbitrary confidence. I'm not sure how the analysis would work in that case, though.

    Re: undefined predictability: Since the probabilities are bounded, a situation like that would imply that the partial averages were diverging by bounded oscillation, and thus the probabilities would appear as increasingly long sequences of relatively low predictability and relatively high predictability. I don't think that would be a desirable property in any randomiser.

    (Also, are those math tags new?)
     
    Last edited: 26 Jun 2015
  7. @Arcorann I played around a little with your definition of predictability, except, instead of predicting the next piece, I tried to predict the piece n pieces in the future. It seems like an interesting way of looking at things.

    [image: best-guess probability for the nth future piece under strict 4-history]

    You'll have to forgive me, as I just did strict 4-history to keep the math simple, but the randomizer has an interesting pattern. I was surprised by how resilient predictability is. You have a 1/3 chance of guessing the 5th next piece right!

    This got me thinking about possible shortcomings of this metric. There is, for example, an enormous difference between 1/3 predictability with 3 candidates (e.g. n=1) and 1/3 predictability with only 1 good candidate (e.g. n=5). Calling it 1/3 the whole way through seems unintuitive, like it's winning by some technicality. Could it possibly be more appropriate to consider the standard deviation of the probabilities of all 7 candidates rather than the single best candidate probability? I graphed it for fun, at least:

    [image: standard deviation of the 7 candidate probabilities for the nth future piece]
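    To make the distinction concrete, here's a toy comparison in Python (the 1/9 figures for the n=5 case are just assumed round numbers for illustration, not computed from the randomizer):

    Code:
    from statistics import pstdev

    # Two hypothetical distributions over the 7 pieces, both with best guess 1/3:
    d1 = [1/3, 1/3, 1/3, 0, 0, 0, 0]   # n=1: three live candidates
    d5 = [1/3] + [1/9] * 6             # n=5: one good candidate, six weak ones

    print(max(d1), max(d5))            # 0.333..., 0.333...  predictability can't tell them apart
    print(pstdev(d1), pstdev(d5))      # ~0.165 vs ~0.078    the spread can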

    Another thing is that the exact timing of a piece is usually not a big deal. In a practical sense I think most of us are (perhaps subconsciously) planning in a way similar to "within the next n pieces I'll probably get x and y, and probably not get z". I do still like the existing predictability metric, but it's with the above in mind that another potentially interesting measure comes to mind:

    The probability of correctly guessing n elements from the set of the next {x} pieces.

    Unfortunately this isn't a single statistic, nor a 2-dimensional collection of statistics as graphed above, but rather a 3-dimensional collection of statistics. I'm not necessarily suggesting we go nuts calculating a million stats; there's probably a handful that are most useful.

    Something interesting is, assuming you have a fair algorithm (all piece types occur in equal proportions, i.e. not GB Tetris lol) and you are trying to guess the complete unordered set of the next 7 pieces, I conjecture that the best guess regardless of algorithm will be one of each piece. Because of this, it is possible to scan a long sequence, in complete ignorance of the algorithm which generated it, and count the occurrences of 7 consecutive distinct piece types. Then you can calculate the statistic of how often you can predict the next 7 (unordered) pieces! You have to be willing to assume it's a fair algorithm though.

    I can think of similar data-driven ways of calculating the probability of guessing n elements from {7x} for any n and x, for unknown algorithms.
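    Here's a rough sketch of that data-driven approach in Python (7-bag is used as the example generator, but any sequence source could be swapped in):

    Code:
    import random
    from collections import Counter

    def seven_bag_sequence(n):
        """Generate n pieces from 7-bag, just as an example sequence source."""
        seq = []
        while len(seq) < n:
            bag = list("IJLOSTZ")
            random.shuffle(bag)
            seq += bag
        return seq[:n]

    def window_diversity(seq):
        """Count, for each window of 7 consecutive pieces, how many distinct types it holds."""
        return Counter(len(set(seq[i:i + 7])) for i in range(len(seq) - 6))

    seq = seven_bag_sequence(1_000_000)
    counts = window_diversity(seq)
    total = sum(counts.values())
    for k in sorted(counts):
        print(k, counts[k] / total)
    # The fraction at k == 7 is the "guess the next 7 unordered pieces" statistic.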
     
    Last edited: 27 Jun 2015
    Qlex likes this.
  8. Muf

    They were requested a long time ago; I promised I'd put them in after moving to the new forum software, and your post reminded me. :) I can't seem to get strikethrough to work on the LaTeX though: the forum strikethrough tag is incompatible with the processing MathJax does, and LaTeX formula strikethrough is nonstandard.
     
  9. [​IMG] [​IMG]

    I played around with my idea of "what % of the time are 7 consecutive pieces 7 distinct types?"; actually, I worked it out for all 7 possible piece diversities. It's interesting. I'm curious what the actual TGM randomizers would look like. As is, it suggests that while 7-bag has fewer theoretical piece sequences, TGM randomizers may have a smaller number of commonly dealt sequences. Perhaps the nature of this statistic is skewing things, as it relates specifically to sequences of 7, which is pretty much as good as it's going to get for 7-bag's 7n-sequence piece diversity.

    By the way, I probably messed up my 8-bag calc; I will re-attempt later and confirm.
     
    Qlex likes this.
  10. Interesting indeed! I always had that feeling when playing TGM2+ where it seems like there's always a line piece after a T block... I should investigate :D
     
  11. Not sure if you're joking but just to be clear, when considering sequences of 2 pieces with TGM randomizers TT is uncommon but TI and all the others have the same probability.
     
  12. I found my error and now agree with your calculation of 32.86%.

    However, I think there may be a problem with the definition: it is not correctly accounting for the lack of independence. For example, 7-bag has a predictableness of 37.04%. One might think that the probability of guessing 7 consecutive pieces correctly is roughly (0.3704)^7; however, it is pretty clear that it should be 1/7! no matter where you start in the bag.
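    (One way to see the 1/7! claim: if you start k pieces into a bag, guessing the rest of that bag in order succeeds with probability \(\frac{1}{(7-k)!}\), and guessing the first k pieces of the next bag in order succeeds with probability \(\frac{(7-k)!}{7!}\), so the product is \(\frac{1}{7!}\) for every k.)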

    This is actually a big deal because (1/7!)^(1/7) ~= 29.59%
    TGM randomizers are in the vicinity of strict 4 history, which has predictableness 33.33%
    37.04% vs 29.59% is the difference between saying 7-bag is more predictable or less predictable than TGM randomizers. (I would actually conclude it is less predictable personally.)

    I'm not entirely sure how to reformulate the definition. I would expect that 8-bag is less predictable than (1/(7*7!))^(1/8) ~= 27.01%, because the position of the random piece is unknown. However, no matter how I calculate it, I seem to find it is more predictable.
     
  13. I don't think there is any specific reason why the probability of guessing 7 (or n) consecutive pieces should be easily calculable from the predictability percentage. That statistic just gives what it is supposed to: the long-term percentage of correct guesses under the best guessing strategy.

    Let's denote by p(n) the probability of guessing exactly n consecutive pieces correctly (over many trials). One thing to note is that p(m), for any specific m, isn't itself an atomic stat: its value might differ across states, and it has to be weighted properly based on the relative long-term frequencies of those states.

    Another thing is that I think this is a bit different from what you mentioned, which would correspond to getting 7 or more pieces correct (as compared to getting exactly 7 correct). I will try to confirm whether my understanding of this point is correct, though.

    I suppose it should relate to the predictability stat in the following way:
    The summation of p(0), p(1), p(2), p(3) and so on (an infinite sum) should be equal to 1. The expectation value of the distance over which we keep guessing correctly before we get one guess wrong should then relate to predictability, I suppose. By expectation value of distance I mean the sum of the terms p(m)*m for m = 0 to infinity.

    Now suppose we have a predictability of 40% for some case. That means:
    40 right predictions and 60 wrong predictions
    2/3 right predictions and 1 wrong prediction

    For any value x of predictability:
    x right predictions and 100-x wrong predictions
    x/(100-x) right predictions and 1 wrong prediction

    I think that last value, x/(100-x), is the one that should equal the expectation value of the distance we keep guessing right before we make a mistake.

    Perhaps arcorann could confirm whether this relation is correct or whether there is some mistake here.
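    (For what it's worth, that relation does check out at least in the memoryless case, where successive guesses really are independent; here's a small Python check. For bag-like randomisers the guesses aren't independent, which is colour_thief's point, so there you'd need the full p(m) distribution.)

    Code:
    import random

    def average_run_length(p_correct=1/7, n_runs=200_000):
        """Average number of consecutive correct guesses before the first miss,
        when each guess is independently right with probability p_correct."""
        total = 0
        for _ in range(n_runs):
            while random.random() < p_correct:
                total += 1
        return total / n_runs

    print(average_run_length())   # ~1/6, i.e. x/(100-x) with x = 14.29 for memoryless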

    I think it is worth mentioning that regular randomisers form a very natural set. But beyond that, an important point is that games run at a fixed frequency, so the time the computer has to calculate the next piece is always the same.

    Given a fixed number of pieces/symbols, a regular randomiser (as long as the program is sanely written) should take the same amount of time to calculate the next piece (or at least an amount bounded from above). For a general programmable randomiser that isn't regular, this property won't hold in nearly all (or all?) cases.

    So the point is that unless the game runs for a short enough time, the growing calculation time of a non-regular randomiser would eventually overtake the fixed time the game allows for computation.

    ---

    One point that is fairly obvious but wouldn't hurt to clarify: above I wrote programmable sequence instead of programmable randomiser, and I suppose the latter is the more appropriate usage. Suppose we have the symbol set S = {1, 2, ..., 7}. By a sequence we would mean a deterministic sequence over S, whereas by a randomiser we would mean a deterministic sequence where each element is a non-empty subset of S (in other words, a deterministic sequence where each symbol is chosen from 1 to 127 instead of 1 to 7).

    As for randomiser-related hints that might actually help the player during the game, I don't know; it is something interesting to consider, though. For example, in a downstack-heavy game, if you placed a red block (in the background) on every cell of the field where placing a piece would lead to obstruction, I am pretty sure that would help most players up to mid-level play. For high-level players, though, the constantly changing background might be more confusing than helpful.
     
    Last edited: 20 Jul 2015
  14. Actually, on second thought, the part of the definition I highlighted is fallacious. The correct definition would be slightly more involved (and more interesting) than this. I will try to give a correct definition for the general case.

    ------

    I think this is how the definition should be. I am just editing this post instead of triple posting. Firstly, let's discard the part I quoted above completely.

    Here is how I use the terms in the specific context:
    sequence and randomiser (same terms)
    deterministic sequence (any single finite or infinite string of atomic symbols)

    So we have some base set of symbols/alphabet. Let's say S = {1,2,3,4,5,6,7}, or for example S = {a,b,c,d,e,f,g}.

    Now we can say that a given infinite subset of S* defines a sequence/randomiser R when the following conditions are satisfied:
    (1) If some string s belongs to the sequence, then all substrings of s also belong to the sequence.
    (2) If some string s belongs to the sequence, then there is a string t in the sequence such that:
    a) s is a substring of t
    b) s is not equal to t

    ------

    I think we should define something like the "characteristic function" of a randomiser. Then every randomiser would correspond to a unique characteristic function and vice versa.

    Let's introduce some terminology.
    S(n) = the set of all strings of length n (formed from symbols in S)

    So we define a function f as follows:
    f : N (the non-negative integers) ---> finite subsets of S*

    We say that the function f is the characteristic function of a given sequence/randomiser when:
    (1) f(n) is a subset of S(n)
    (2) the union of the values f(n), over all n, is exactly the sequence/randomiser R


    That seems reasonably satisfactory to me unless I made a mistake somewhere again. Introducing probability would take a little more modification, I guess.

    I think this should give some justification to the term "regular sequence/randomiser". We can say that this phrase corresponds to all regular languages (for a finite symbol/alphabet set) which satisfy the definition of a sequence/randomiser.
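    As a concrete sketch in Python, here is what f looks like for 7-bag under one simplifying assumption, namely that strings are observed starting from a bag boundary; the full set of substrings observable mid-sequence is a bit larger (e.g. the same piece repeated across a bag seam), so this only illustrates the idea:

    Code:
    PIECES = "IJLOSTZ"

    def f(n):
        """f(n): all length-n strings 7-bag can deal, starting at a bag boundary."""
        strings = {""}
        for _ in range(n):
            strings = {s + p for s in strings
                       for p in PIECES
                       if p not in s[7 * (len(s) // 7):]}   # pieces left in the current bag
        return strings

    for n in range(1, 9):
        print(n, len(f(n)))
    # 7, 42, 210, 840, 2520, 5040, 5040, 35280; every string in f(n) extends to one
    # in f(n+1), in line with condition (2) above.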
     
    Last edited: 20 Jul 2015
  15. Back, read the last few posts, and something occurred to me that seems so natural that I'm not sure why I didn't think of it earlier.

    Basically, it seems that what we want is to evaluate the entropy of a randomiser per piece. The definition is right there on the Wikipedia page. (Note that in the below I will be using log to base 2).

    Calculating the entropy per piece of a bag randomiser is easy, since we can consider the average over each bag. For example, a 7k-bag randomiser has entropy \(\frac{\log(\frac{(7k)!}{(k!)^7})}{7k} = \frac{\log((7k)!) - 7\log(k!)}{7k}\), which evaluates as follows:

    7-bag: 1.757 bits
    14-bag: 2.096 bits
    28-bag: 2.352 bits
    63-bag: 2.551 bits

    Memoryless is log(7) = 2.807 bits. 8-bag is 2.138 bits, while 6-bag is 2.050 bits. Strict 3-history would be 2 bits exactly. I am in the process of checking TGM and TAP.
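    For anyone who wants to reproduce the bag numbers, here is a small Python sketch of the formula above (lgamma is just used to keep the factorials manageable):

    Code:
    from math import lgamma, log

    def bag_entropy_bits(k):
        """Entropy per piece, in bits, of a 7k-bag: log2((7k)! / (k!)^7) / (7k)."""
        log2_fact = lambda m: lgamma(m + 1) / log(2)
        return (log2_fact(7 * k) - 7 * log2_fact(k)) / (7 * k)

    for k in (1, 2, 4, 9):
        print(f"{7 * k}-bag: {bag_entropy_bits(k):.3f} bits")
    # 7-bag: 1.757, 14-bag: 2.096, 28-bag: 2.352, 63-bag: 2.551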
     
  16. Might I suggest using base 7? Then things will range from 0 (completely predictable) to 1 (pure random). Not sure if I got all my calculations right but I was very much plotting something similar when I was doing the above. Though I wasn't thinking in terms of entropy, I was thinking in terms of the growth of the sequence space on a log7 scale.

    Entropy makes a lot more sense, I'm curious to see if my calculations are consistent.
     
  17. Back, again. Sorry for the delay.

    Base 7 is a good idea. Converting base 2 to base 7 logarithms is a simple multiplication, so I might as well do that now.

    6-bag: 0.7302 pieces
    7-bag: 0.6259 pieces
    8-bag: 0.7617 pieces
    14-bag: 0.7466 pieces
    28-bag: 0.8377 pieces
    63-bag: 0.9087 pieces

    For the history randomisers, we multiply the entropy given a particular number of distinct pieces in the history by the probability of each state:

    2-roll: 0.9348 pieces
    TGM: 0.7590 pieces
    TAP: 0.6516 pieces
    Strict 3-history: 0.7124 pieces

    Anything else, just ask.
     
    colour_thief and Qlex like this.
  18. I could use some help understanding the subtleties of calculating entropy properly. Let me use NES as an example.

    You probably know the algorithm but to refresh it's something like this:
    • randomly roll one of 8 possibilities (1-7 for each piece, plus an extra 8th)
    • if you roll the most recent history or the 8th dummy piece, reroll 1-7 and accept that piece
    So at any given time the recent history piece has probability 2/56 and the 6 others have probability 9/56.
    Using the entropy formula that gives: 96.71%


    I'm pretty sure the above is the correct way to calculate it. However if I rephrase the algorithm:
    • 25% chance of pure random (ie. 1/7)
    • 75% chance of strict 1-history (ie. 1/6)
    Pure random has entropy of 100%.
    Strict one history has entropy of 92.08%.
    If I blend them together I get 94.06%.

    So:
    1. What am I missing?
    2. Do your calculations for TGM and TAP suffer from the same issue?
     
  19. Hey. Not very experienced in the field but I can give this boy a try.

    Today I learned! I thought NES was simply a one-history, one-reroll type of randomizer, which would make the previous probabilities 1/49 and 8/49 respectively. It's a bit hard to see it the new way now haha.

    I'll try to formalize the whole thing even more so that I don't get too confused. The word "blend" might not describe how it actually works, but I'll do the calculation again.

    Let A be a set of six pieces (the tetrominoes, none of which are in the history), and B a set of two pieces (the tetromino in the history and the dummy piece). In this case, B1 can eventually be drawn but B2 cannot.

    A = {A1, A2, A3, A4, A5, A6}
    B = {B1, B2}

    p(A first) = 3/4
    p(B first) = 1/4

    Let's calculate some more ("/X" means "knowing that X is true")
    p(B1/A first) = 0 (if I have picked A first, I won't pick an element in B, let alone B1 itself)
    p(B1/B first) = 1/7 (if I have picked B first, I reroll again)

    p(B1) = p(A first)*p(B1/A first) + p(B first)*p(B1/B first) (law of total probability)
    = 3/4*0 + 1/4*1/7
    p(B1) = 1/28

    By the same token, for n in {1..6}:

    P(An) = 3/4*1/6 + 1/4*1/7
    P(An) = 9/56

    ... That was hard. It corresponds, yay!

    I'll assume you calculated the entropy correctly.

    Now, I think the way you phrased the next one is:

    • 1/4 chance that it WILL be pure random
    • 3/4 that it HAS BEEN 1-history
    So you only have this choice:
    • 1/4 of pure random
    1. 1/7 of which WILL be some sort of random (???)
    2. the remaining 6/7 having behaved like 1-history
    • 3/4 of 1-history
    And it does not work well... I think you need to look at this: https://en.wikipedia.org/wiki/Conditional_entropy and tell me if something is more enlightening. Maybe the concept needs to involve a different entropy result.

    Also, I don't think that blending means taking an average of these guys. I think in the realm of entropy it behaves with a different operator, but I don't know this for a fact.

    Hope this helped!
     
    colour_thief likes this.
  20. Entropy is a measure of information content. If we generate a piece sequence, how much information does it contain?

    The issue with your method of dividing the calculation into strict history and pure random is that the division itself is also part of the information contained in the piece dealt, rather than being determined from the previous pieces (as the history is in the correct calculation), so averaging the two entropies gives the wrong answer.
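    A numerical way to see this, sketched in Python using the 1/4 pure-random / 3/4 strict-1-history split from above: the "blend" value is the conditional entropy of the piece given the branch, and the gap to the true entropy is exactly the information the dealt piece carries about which branch was used.

    Code:
    from math import log

    def log7(x):
        return log(x, 7)

    p_branch = {"random": 0.25, "history": 0.75}
    p_piece_given_branch = {
        "random": [1/7] * 7,           # any of the 7 pieces
        "history": [0] + [1/6] * 6,    # index 0 is the piece in the history
    }

    def H(dist):
        return -sum(p * log7(p) for p in dist if p > 0)

    # entropy of the actual piece distribution (the 2/56, 9/56 calculation)
    p_piece = [sum(p_branch[b] * p_piece_given_branch[b][i] for b in p_branch) for i in range(7)]
    H_X = H(p_piece)                                                                # ~0.9671

    # weighted average of the branch entropies (the "blend" calculation)
    H_X_given_B = sum(p_branch[b] * H(p_piece_given_branch[b]) for b in p_branch)   # ~0.9406

    # information the dealt piece carries about the hidden branch choice
    I_XB = 0.0
    for b in p_branch:
        for i in range(7):
            q = p_branch[b] * p_piece_given_branch[b][i]   # joint probability
            if q > 0:
                I_XB += q * log7(q / (p_branch[b] * p_piece[i]))

    print(H_X, H_X_given_B, H_X_given_B + I_XB)   # the last two sum to the first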

    DTET (?): 0.8543 pieces
    7 from 8: 0.8093 pieces
    7-bag, seams have different pieces: log_7(6*6!)/7 = 0.6145 pieces
     
