# Randomizer Theory

Thread in 'Research & Development' started by colour_thief, 12 Jan 2007.

1. ### Zaphod77 (Resident Misinformer)

I have no idea how it morphed into a 2 before, because I had it with the n the first time and I copy/pasted it.

I also did not call anyone wrong in my initial post.

I apologize, but I do tend to lose it a little when I'm called wrong without people thinking it through. The calculations I worked out fully were 100% correct for what I was calculating, which turned out not to be the same thing ct was calculating.

The semantic issue that caused all the confusion is that exactly what a drought is was never precisely defined. His definition is the length of the interval between two consecutive Is. I calculated for a more general case, which I believed was more useful; I figured that with bag, there are much more useful calculations that can be made if you have the knowledge for it, and I provided the zero-knowledge one.

For memoryless, because games are finite and there's no guarantee you will see that I before the game ends, I again used a different definition of drought, which yielded a simpler calculation.

Dividing by 7 does convert interval >= n to interval = n.

But the statement appears to be complete nonsense to me.

The probability that you will be starved of at least one piece is 100% for any interval less than 7 regardless of randomizer. And I don't see the usefulness of that calculation. Dividing that by 7 most certainly does not produce the probability for one specific piece.

2. ### wksrdg

I think that the drought should simply be defined as number of steps before you get the very next of a specific piece. This means it can be defined for any point in the game. You could say the same for flood (although you would have to define the length and be precise whether you are talking about one specific piece or any of the seven).

It is true that you can basically ask many different kinds of questions, some of which might even be more directly relevant in terms of the game. For example, you can ask: at any random time in the long term (using stationary probability values for states), suppose you just got an I-piece and the next three pieces don't show one (assume the game's next-piece preview shows three pieces). What, in general, would be the probability of getting an I-piece then?

Questions can be posed for more general piece strings of interest and so on, perhaps with some extra description as described above. But I do think that questions about the pmf and floods are among the most natural ones.

In general you could emulate the behavior of the randomizer if you wanted a numerical answer with respect to an arbitrary question. Solving for stationary probabilities of states, in general, would be a problem though for complex randomizers.

Apart from stationary positions in the randomizer, you could ask for the most likely piece to occur given the whole history of pieces and whatever the preview shows (at any point in the actual game). Although for 7-bag, strict history, and roll, these questions become quite easy to answer. There are probably more kinds of questions that I am missing.

Anyway, my point is simply that the pmf and floods are just one specific kind of question. Very natural and important ones, but definitely not the only ones.

---

Anyway, I have figured out a method to calculate the pmf values (and hence the general probabilities) for the general case of roll-n algebraically (purely in terms of formulas, that is). It is quite tedious but still tractable (due to symmetry reasons) with pen and paper. The values become a simple formula after the first 13 or 14 steps have passed (not important practically, but still interesting). The first 13 or 14 steps have to be calculated quite carefully. Flood values, in contrast to the pmf, are quite easy to calculate though.

My method is different from the 15-state one, because I am not entirely sure yet about the correctness of that kind of state reduction. In any case, this would give a slightly different view on the same problem.

Getting direct expressions is much more desirable compared to directly emulating the randomizer for numerical values (which is kind of cheating). Even if you put these direct expressions in a spreadsheet, anyone can read them off and basically do the same calculations with a calculator.

Here are the only two values that I have calculated so far (these are for roll-4).
pmf(0) = 0.024427534

This is close to the experimental result if we take (look at page-7 for reference)
P(0) ~ 0.1430960000000
P(1) ~ 0.1395650000000

using pmf(n) = 7*[P(n)-P(n+1)]
pmf(0) ~ 7*[P(0)-P(1)]
pmf(0) ~ 0.024717

single flood probability (for one piece) = (1/7)*pmf(0) = 0.003489648
Single flood simply means getting two consecutive of one type. Getting three consecutive of one type would count as two single floods. This gives roughly 1 in ~287 pieces. Note that, unlike pmf, the single flood probability here does not make any assumptions about whether a certain piece has occurred or not.

If you were calculating flood for any of the seven piece types, you would multiply the value by 7.
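As a sanity check on numbers like these, pmf(0) and the single-flood probability can be estimated by simulation. Below is a sketch, assuming a plain TGM-style scheme (reroll on a hit against the history of the last 4 pieces, final roll kept regardless) with an empty initial history; the exact initial history and first-piece special cases of the real games are ignored here, since they wash out in the long run.

```python
import random
from collections import deque

def history_randomizer(n_pieces, rolls, rng, history_len=4):
    # TGM-style sketch: roll up to `rolls` times, rejecting any piece that
    # appears in the history of the last `history_len` pieces; the final
    # roll is kept regardless of the history.
    hist = deque(maxlen=history_len)
    out = []
    for _ in range(n_pieces):
        for _ in range(rolls):
            p = rng.randrange(7)
            if p not in hist:
                break
        hist.append(p)
        out.append(p)
    return out

seq = history_randomizer(2_000_000, rolls=4, rng=random.Random(0))

# pmf(0) = probability that the next piece equals the current one;
# dividing by 7 gives the single-flood probability for one specific piece.
pmf0 = sum(1 for a, b in zip(seq, seq[1:]) if a == b) / (len(seq) - 1)
flood_one = pmf0 / 7
```

With `rolls=4` this should land near the 0.0244 quoted above, and switching to `rolls=6` should reproduce the roll-6 figures that come up later in the thread.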

I will report back whenever I get to calculate the full algebraic results for roll-n.

4. ### wksrdg

It just seems to be the difference in the value of n. The above numbers were for roll-4. Copying the relevant numbers in the file here:
I Distance 1: 2513088/299997083, 0.008377041
total sample = 2100000000

So we have pmf(0) ~ 0.008377041
single flood prob ~ 2513088/2100000000 ~ 0.001196709

I switched the values of n in my file to 6 and got:
pmf(0) = 0.008371458
single flood prob = 0.001195923

Also, by the way, if you have a general formula for your results, you could try to check its internal consistency in another way: calculate the expectation (average) value of the distance at which the next I is supposed to occur (after one has already occurred) using just your calculated pmf values (keep increasing the interval). With my convention it should approach exactly 6, and with your convention it should be 7, I think.

5. ### colour_thief

My bad, I read your post in a hurry before work and missed the bit about 4 rolls. Pretty exciting that you've worked it out to formulas! Would you mind giving a brief description of your method?

My method for the history diversity proportions is essentially a Markov chain using 8 states. Though I only iterated the transition a bunch of times, and never solved the system of equations to find the exact theoretical proportions.

My method for the pmf is similar to a Markov chain using the same general transition matrix, but the rows of the transition matrix sum to less than a probability of 1. Specifically I am removing the terms that correspond to the forbidden event. So it's "leaking" probability, basically. It takes 5 transition matrices:

1. All events corresponding to rolling the 1st most recent piece in history are removed.
2. All events corresponding to rolling the 2nd most recent piece in history are removed.
3. All events corresponding to rolling the 3rd most recent piece in history are removed.
4. All events corresponding to rolling the 4th most recent piece in history are removed.
5. The event of rolling 1 piece outside the history is removed.

I start with input history diversity proportions as derived from the complete transition matrix, which gives me my P(1). Then I multiply by the first transition, giving me P(2). I continue like this, and once I hit the 5th transition I just keep using it forever.

Because this "leaking probability" approach is always telling you how much probability is left, you have to take the difference of consecutive steps to find out the pmf's value. To put it your way:

pmf(n) = [P(n) - P(n+1)]
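As a toy illustration of this leaking idea (not the actual 8-state matrix, which isn't reproduced here), take a strict history-1 randomizer, where the last piece is never repeated, and compute the pmf of the gap between consecutive I pieces by deleting every transition that deals an I:

```python
# States: 0 = "just dealt an I", 1 = "just dealt a non-I".
# Full chain: from I the next piece is uniformly one of the 6 others;
# from a non-I it is I with probability 1/6, non-I with probability 5/6.
FULL = [[0.0, 1.0],
        [1/6, 5/6]]

# "Leaky" matrix: every transition that deals an I is removed, so the
# rows sum to less than 1 and probability drains away over time.
Q = [[0.0, 1.0],
     [0.0, 5/6]]

def step(v, M):
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

v = [1.0, 0.0]            # start immediately after an I was dealt
survival = [1.0]          # survival[k] = P(no I among the next k pieces)
for _ in range(200):
    v = step(v, Q)
    survival.append(sum(v))

# The mass that leaks out at each step is exactly the probability of the
# forbidden event, so differences of consecutive steps give the pmf:
pmf = [survival[k] - survival[k + 1] for k in range(200)]
```

Here `pmf[0] = 0` (a strict history-1 randomizer can never repeat an I immediately), `pmf[1] = 1/6`, and the whole pmf sums to 1, which is the consistency check the difference-of-consecutive-steps trick relies on.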

6. ### Arcorann

The argument between colour_thief and Zaphod has gotten very confusing, so I've decided to summarise the conclusions so that we can get back to more interesting matters.

Firstly, I'll define some notation. Define the random variable W to be the number of pieces until the next I appears. Define the random variable I to be the number of pieces until another copy of the current piece appears. Thus, if the nth to (n+2)th pieces are JIJ, W = 1 and I = 2. Also define D = I - 1. Assume that sufficiently many pieces have been dealt for the system to be in the stationary regime.

Let us now relate I and W. Consider an I interval of length k, where the first I but not the last I is retained. Thus, there are k pieces in the interval. Within this set of pieces, there is exactly one piece with W equalling each integer from 1 to k. Thus, in a long sequence of n pieces that starts with an I piece and ends just before an I piece, for every I piece with I >= k (i.e. D >= k-1), there is one piece with W equalling k. Hence, #(W = k) = #(I >= k & current piece = I-piece). Taking limits as n goes to infinity, P(W = k) = P(I >= k & current piece = I-piece) = P(I >= k)/7 by symmetry. This is what colour_thief meant when he said "If you took my probabilities and divide them by 7, you would get the drought probabilities for one specific piece (eg. I)."; the above result was already stated by Edo, incidentally. Now that we've cleared up the difference between these two variables, let's move on.
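The relation P(W = k) = P(I >= k)/7 is easy to check empirically. A minimal sketch using a memoryless randomizer (pure uniform rolls, with piece 0 standing in for the I piece):

```python
import random

rng = random.Random(1)
N = 1_000_000
seq = [rng.randrange(7) for _ in range(N)]

INF = N + 1
next_of = [INF] * 7     # index of the next occurrence of each piece
W = [0] * N             # pieces until the next 0 (the "I piece")
I = [0] * N             # pieces until the next copy of the current piece
for t in range(N - 1, -1, -1):
    W[t] = next_of[0] - t
    I[t] = next_of[seq[t]] - t
    next_of[seq[t]] = t

valid = range(N - 100)  # drop the tail, where the next copy may not exist
diffs = []
for k in range(1, 6):
    p_w = sum(1 for t in valid if W[t] == k) / len(valid)
    p_i = sum(1 for t in valid if I[t] >= k) / len(valid)
    diffs.append(abs(p_w - p_i / 7))
# each entry of diffs is within Monte Carlo noise of zero
```

The same check works for any history-based randomizer by swapping in a different sequence generator; symmetry between the 7 piece types is the only thing the argument uses.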

Regarding the applicability of the 15-state model for calculating intervals, colour_thief has basically described what I have done; I feel that the 15-state model is more robust because there can be some confusion using the 8-state model due to the stars representing two or more possibilities.

If you're not convinced about this, I have some formulae that I have derived based on my analysis:

Define the following:
C_1 = 2^(n-1)*3^(n-1)*4^(n-1)
C_2 = (7^n-1)*3^(n-1)*4^(n-1)
C_3 = (7^n-1)*(7^n-2^n)*4^(n-1)
C_4 = (7^n-1)*(7^n-2^n)*(7^n-3^n)
Let C_T = C_1 + 7*C_2 + 6*C_3 + C_4. It follows that C_k/C_T = P(H = k) where H is the random variable representing the history diversity after the current piece.

Then:
P(I >= 1) = 1 (trivial)
P(I >= 2) = (4*C_2 + 5*C_3 + C_4)/C_T, so P(I = 1) = (C_1 + 3*C_2 + C_3)/C_T
P(I >= 3) = (2*C_2 + 4*C_3 + C_4)/C_T, so P(I = 2) = (2*C_2 + C_3)/C_T
P(I >= 4) = (C_2 + 3*C_3 + C_4)/C_T, so P(I = 3) = (C_2 + C_3)/C_T
P(I >= 5 and H_4 = 1) = C_1*(7^n - 1)/(C_T*7^n) = C_2*2^(n-1)/(C_T*7^n)
P(I >= 5 and H_4 = 2) = 7*C_2*(7^n - 2^n)/(C_T*7^n) = 7*C_3*3^(n-1)/(C_T*7^n)
P(I >= 5 and H_4 = 3) = 6*C_3*(7^n - 3^n)/(C_T*7^n) = 6*C_4*4^(n-1)/(C_T*7^n)
P(I >= 5 and H_4 = 4) = C_4*(7^n - 4^n)/(C_T*7^n)
where H_k is the random variable representing the history diversity k pieces after the current piece.
Hence after a little bit of simplification, P(I >= 5) = (C_2*(7^n - 2^(n-1))/7^n + 2*C_3 + C_4)/C_T and P(I = 4) = (C_2*2^(n-1)/7^n + C_3)/C_T.

Incidentally, the last two of these do not agree with colour_thief's most recently posted results; my results are consistent with wksrdg's values for P(I = 1), however.
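For anyone who wants to check these formulae numerically, here is a short sketch that evaluates the C_k for TAP (n = 6) in exact integer arithmetic and compares P(I = 1) and P(I = 2) against the decimal values quoted in this thread:

```python
from fractions import Fraction

n = 6  # TAP: 6 rolls
C1 = 2**(n-1) * 3**(n-1) * 4**(n-1)
C2 = (7**n - 1) * 3**(n-1) * 4**(n-1)
C3 = (7**n - 1) * (7**n - 2**n) * 4**(n-1)
C4 = (7**n - 1) * (7**n - 2**n) * (7**n - 3**n)
CT = C1 + 7*C2 + 6*C3 + C4

P_I1 = Fraction(C1 + 3*C2 + C3, CT)   # P(I = 1)
P_I2 = Fraction(2*C2 + C3, CT)        # P(I = 2)
# float(P_I1) ≈ 0.008371458 and float(P_I2) ≈ 0.00835426,
# matching wksrdg's roll-6 value and the TAP table further down.
```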

Finally, I've done some preliminary testing of the TGM3 randomiser using Excel. The conclusions made so far:

Intervals of length 1 to 4 have indeterminate change.
Intervals of length 5 to 6 are all less likely than in TAP.
Intervals of length 7 to 11 are all more likely than in TAP.
Intervals of length above 11 are all less likely than in TAP.

Interestingly, if you implement only the variable probability part without the history, the result is actually worse than memoryless.

7. ### PetitPrince

muf > please implement a tex parser/interpreter, this would make this thread even more epic
(this or that)

8. ### Muf

Sounds like the perfect thing for you to do after you've added TeDiGe support to the new forum. (I don't want to add, change or update anything in the current vBulletin installation because it will just break when we move to the new forum)

9. ### colour_thief

Yeah, I'm almost certain I goofed up some of my "leaky" transition matrices (is there a proper name for those?). I'm convinced now that a lot of things get simpler with 15 states instead of 8. I'll give it another shot.

This is very interesting. I'll place my bet right now that I expect intervals 1 to 4 will be more likely than TAP. I don't suppose you'd be willing to share your preliminary Excel file?

At any rate, I did look up the Ti randomizer bugs, so here's a statement of the complete algorithm:

Additionally, the first piece of the game is handled specially:

It is strictly forbidden from being S, Z, or O.
It will update the history of 4.
It does not update the piece drought order or the bag of 35 pieces.

I know I said there were 2 bugs, but they kind of cancel each other out when they both happen. So the most succinct way of stating them is to combine them together.

When these 3 conditions are met:

1. The randomizer has just chosen the most droughted piece
2. A reroll happened during the choice of that piece.
3. Excluding the first piece of the game, and including this chosen piece, you have received at least one of each of the 7 piece types.

Then the roll that chose that most droughted piece will not update the bag of 35 pieces.
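For concreteness, here is a rough sketch of the core Ti scheme as commonly documented (history of 4, up to 6 rolls, a 35-piece bag whose rolled slot is refilled with the most droughted piece). The piece ordering, initial history, and initial drought order below are assumptions, and the first-piece special casing and the bug described above are deliberately left out to keep the sketch short:

```python
import random
from collections import deque

PIECES = "IZSJLOT"   # assumed ordering; the game's internal order may differ

def ti_stream(n, seed=0):
    rng = random.Random(seed)
    bag = list(PIECES) * 5                 # 35-piece bag, 5 of each type
    history = deque("SZSZ", maxlen=4)      # assumed initial history
    drought = list(PIECES)                 # most droughted piece first (assumed)
    out = []
    for _ in range(n):
        for roll in range(6):
            i = rng.randrange(35)
            piece = bag[i]
            if piece not in history or roll == 5:
                break
            bag[i] = drought[0]            # rejected slot refilled with most droughted
        bag[i] = drought[0]                # chosen slot refilled too (the bug skips this)
        drought.remove(piece)
        drought.append(piece)              # chosen piece becomes least droughted
        history.append(piece)
        out.append(piece)
    return out

seq = ti_stream(200_000)
freqs = {p: seq.count(p) / len(seq) for p in PIECES}
```

In the long run every piece should come out near 1/7; it is the interval proportions, reported further down the thread, that distinguish this scheme from TAP.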

11. ### Arcorann

I've completed a major simulation of the TGM3 randomiser, bug included. After cleaning up the raw data a bit (the Matlab export didn't label any of the variables and came in scientific notation) here are my results:

History diversity:
1: 0/208873655
2: 61211/208873655 ~ 0.0002930528
3: 13903970/208873655 ~ 0.0665664131
4: 194908474/208873655 ~ 0.9331405342

Interval proportions:
1: 2530621/208873649 ~ 0.0121155589138006
2: 2205482/208873649 ~ 0.0105589288575123
3: 2023565/208873649 ~ 0.00968798606089368
4: 1912958/208873649 ~ 0.00915844583152756
5: 56718759/208873649 ~ 0.271545785078902
6: 36441354/208873649 ~ 0.174466018928027
7: 29659846/208873649 ~ 0.141998984275896
8: 26356603/208873649 ~ 0.12618443315461
9: 20729229/208873649 ~ 0.0992429112013072
10: 13767626/208873649 ~ 0.0659136567293848
11: 8253445/208873649 ~ 0.0395140556959389
12: 4456208/208873649 ~ 0.021334467135201
13: 2196960/208873649 ~ 0.0105181290723752
14: 986414/208873649 ~ 0.00472253922274322
15: 402847/208873649 ~ 0.00192866358168521
16: 152130/208873649 ~ 0.000728335051972018
17: 54129/208873649 ~ 0.00025914709806214
18: 17948/208873649 ~ 0.0000859275456043764
19: 5479/208873649 ~ 0.0000262311690643179
20: 1499/208873649 ~ 0.00000717658741146424
21: 413/208873649 ~ 0.00000197727191523331
22: 108/208873649 ~ 0.000000517058999625175
23: 18/208873649 ~ 0.0000000861764999375292
24: 8/208873649 ~ 0.0000000383006666389019
25+: 0/208873649

I guess colour_thief was right about intervals 1-4. The results for 5-11 agree with my preliminary results, though it is interesting to see how close the results for 7 and 12 (the two crossover points) are to the respective numbers for TAP. In addition, after the crossover at 12 the probabilities decline a lot faster - P(I >= 15) equals approx. 0.003.

I never did explain my claim for recursive 7-bag, so here it is:
The next piece that is the same as the last piece must be in the next bag. The probability of the piece being in a given position in the next bag is determined solely by whether the current bag and the next bag are in the same meta-bag, since we know that one-seventh of all bags within a meta-bag have the piece in that given position.

If the current bag is the last of the meta-bag, the chance of the piece being in a given position in the next bag is 1/7. If the current bag is not the last of the meta-bag, the chance of the piece being in the same position in the next bag as in this bag equals 719/5039, as there are 720 - 1 = 719 bags remaining with the same position out of 5040 - 1 = 5039 remaining in total, whereas there are 720 bags remaining for each other position. It follows that P(I = 7) = (5039/5040*719/5039 + 1/5040*1/7) = (5034/35280), while P(I = k) = (5039/5040*(7-abs(7-k))/7*720/5039 + 1/5040*(7-abs(7-k))/7*1/7) = (5041*(7-abs(7-k))/246960) for k = 1 to 6 or 8 to 13.
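This is straightforward to check by simulation. A sketch, assuming "recursive 7-bag" means a meta-bag holding all 7! = 5040 distinct bags in shuffled order:

```python
import itertools
import random

rng = random.Random(0)
ALL_BAGS = list(itertools.permutations(range(7)))   # the 5040 distinct bags

seq = []
for _ in range(100):             # 100 meta-bags of 5040 bags each
    meta = ALL_BAGS[:]
    rng.shuffle(meta)            # a meta-bag deals every bag exactly once
    for bag in meta:
        seq.extend(bag)

# Measure the interval I: distance from each piece to its next copy.
last_seen = {}
counts = {}
for t, p in enumerate(seq):
    if p in last_seen:
        d = t - last_seen[p]
        counts[d] = counts.get(d, 0) + 1
    last_seen[p] = t

total = sum(counts.values())
p7 = counts[7] / total           # should approach 5034/35280 ≈ 0.14269
```

Note that intervals here can only range from 1 to 13, since the next copy of a piece must be in the current or the next bag; the simulation also confirms that.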

12. ### wksrdg

I don't know about Markov chains. Only vaguely heard about them. But it seems you are doing roughly similar calculation with matrices that I am doing visually.

As for my method, basically you list all the states with zero 1s, one 1, two 1s, three 1s and four 1s. You would get a total of 52 states. After that, I organized the states based on their form as follows:
x -- any block
o -- any block other than 1
1 -- the block 1

1st level: 1xxx (15 states) -- this is the initial point after a 1 just occurs
2nd level: o1xx (10 states)
3rd level: oo1x (7 states)
4th level: ooo1 (5 states)
5th level: oooo (15 states)

Now, after a 1 just occurs, I re-calculated the new state probability distribution first (based on the stationary distribution for states that was posted before). Then you draw five separate diagrams showing the links from the 1st level to the 2nd, the 2nd to the 3rd, and so on. The fifth diagram shows the links within the 5th level.

I changed my history queue convention to left-->right from right-->left because it makes things simpler to see. The diagrams joining these levels might seem impossible to draw but become fairly easy once you adopt a slightly more sophisticated convention of using blocks.

@Arcorann
I haven't had the chance to read the previous posts thoroughly. I have calculated values for P from I=1 to I=4. They are almost* exactly the same as yours. Can you also list your values for I = 5, 6, 7, 8 and so on, and also your general formula for the pmf for an arbitrary I=n? For me, it seems that the pmf should become a simple formula after I=6 or 7.

I have done spreadsheet calculations so far (complete for flood, and up to I=4 for the pmf). I will calculate algebraic expressions for the pmf and flood after that. Most likely, though, my pmf formulas are going to turn out to be exactly equivalent to yours.

Also, if you have calculated the flood values (single 1, two 1's, three consecutive 1's and so on) or formulas, can you also list them.

*Apparently there is a difference of one digit for I=2,3,4 at the 25th decimal place or something. I don't have any term that is even close to that order. It seems to me that it is more likely an issue with Excel (an old version) handling rationals as floating-point numbers. I wasted a few hours just trying to see what was going on, and apparently even expressions that are logically equal also differ on a few occasions after the 20th decimal place or so.

13. ### Arcorann

I didn't analyse flood originally. However:

P(last 2 entries are the same) = (C_1 + 3*C_2 + C_3)/C_T
P(last 3 entries are the same) = (C_1 + C_2)/C_T
P(last 4 entries are the same) = C_1/C_T
P(last k entries are the same) = C_1/(C_T*7^(n*(k-4))), k >= 5.

The first three come immediately from the steady-state probabilities in my model, the fourth follows very easily. P(last k entries are I) of course equals 1/7 of these values.

For TAP (i.e. n = 6):
P(last 2 entries are the same) = 458252288/54739840949
P(last 3 entries are the same) = 941440/54739840949
P(last 4 entries are the same) = 256/54739840949
P(last 2 entries are I) = 458252288/383178886643
P(last 3 entries are I) = 941440/383178886643
P(last 4 entries are I) = 256/383178886643
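These flood fractions can be reproduced exactly from the C_k defined earlier in the thread, e.g. with exact rationals in Python:

```python
from fractions import Fraction

n = 6  # TAP
C1 = 2**(n-1) * 3**(n-1) * 4**(n-1)
C2 = (7**n - 1) * 3**(n-1) * 4**(n-1)
C3 = (7**n - 1) * (7**n - 2**n) * 4**(n-1)
C4 = (7**n - 1) * (7**n - 2**n) * (7**n - 3**n)
CT = C1 + 7*C2 + 6*C3 + C4

same2 = Fraction(C1 + 3*C2 + C3, CT)   # P(last 2 entries are the same)
same3 = Fraction(C1 + C2, CT)          # P(last 3 entries are the same)
same4 = Fraction(C1, CT)               # P(last 4 entries are the same)
# same2 == 458252288/54739840949, same3 == 941440/54739840949,
# same4 == 256/54739840949, exactly as listed above
```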

I attempted to derive a formula for P(I >= 5) and decided it wasn't worth the hassle. From my spreadsheet, for TAP, the values for P(I = k), k from 1 to 24 are:

0.008371458
0.00835426
0.008337066
0.008319877
0.307338112
0.208964347
0.142451775
0.097304602
0.066558256
0.045520932
0.031131197
0.021289967
0.014559811
0.009957193
0.006809545
0.004656925
0.003184787
0.002178018
0.001489508
0.001018647
0.000696635
0.000476416
0.000325812
0.000222817

For TGM1:

0.024427534
0.024159198
0.023891753
0.023625199
0.261288156
0.184714184
0.131202588
0.093498852
0.066766369
0.047655524
0.034009577
0.024270655
0.017320917
0.012361231
0.008821698
0.006295676
0.00449296
0.003206437
0.0022883
0.001633064
0.001165449
0.000831732
0.000593572
0.000423608

14. ### wksrdg

I completed my worksheet and I am getting exactly the same values for the pmf (up to the number of decimal places to which you posted your values). I also had the same values for floods, up to the factor of 7 between our conventions. I also extrapolated the pmf values up to an interval of 200. The expectation value of the distance to the next 1 falls to 6 (to 30 decimal places) after 100 or so entries.
Yeah, maybe so. It seems that after four steps have passed (that is, on the 5th level at oooo), the state probability values become simple inductive relations (t+1 values given by t values) with given initial values. So once you have calculated the state probabilities directly as functions of the initial values (which seems a matter of plugging in values to see the pattern, unless I am missing something of course), you have to multiply these values by the 'out' probabilities for a 1 occurring.

Simplification of the resulting expression might be quite tedious though. But even though there are 15 different states, a lot of them stay equal over time, so in effect there seem to be only 8 different values.

It should be easy to compare the resulting expression directly with calculated values. Another interesting point would be to see how the resulting expression equals the expression for strict history as we increase n.

I don't think I am gonna calculate explicit formulas though, mainly because of burn out.

15. ### colour_thief

New algorithm worked out by Kitaru and myself: Arika's Tetris with Cardcaptor Sakura Eternal Heart

It has a history of 6, and up to 5 rolls.
The default history is empty.
The 5th (final) roll is especially weird. At that point, instead of rolling for all pieces equally, it chooses 50:50 between the 2nd and 6th most recent pieces in the history. If the chosen history piece is not yet populated, which would happen if you've received fewer than 2 or 6 pieces respectively, then the game will fall back to one of the 7 pieces chosen randomly.
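A sketch of that algorithm, with the first four rolls assumed to work like the usual reroll-on-history-hit scheme (the post only pins down the 5th roll):

```python
import random
from collections import deque

def ccs_next(history, rng):
    # First 4 rolls: uniform over the 7 pieces, rerolling on a history hit
    # (assumed; only the 5th roll is described explicitly in the post).
    for _ in range(4):
        p = rng.randrange(7)
        if p not in history:
            return p
    # 5th roll: 50:50 between the 2nd and 6th most recent history pieces;
    # if that slot is not yet populated, fall back to a uniform piece.
    idx = rng.choice([1, 5])       # history[0] is the most recent piece
    if idx < len(history):
        return history[idx]
    return rng.randrange(7)

rng = random.Random(7)
history = deque(maxlen=6)          # the default history is empty
counts = [0] * 7
N = 200_000
for _ in range(N):
    p = ccs_next(history, rng)
    history.appendleft(p)
    counts[p] += 1
# by symmetry, every piece still comes out near 1/7 in the long run
```

The pmf of this scheme is where the strangeness shows up, not the overall piece proportions, since the 50:50 rule treats all piece types symmetrically.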

The probability mass function looks very bizarre, though I've not attempted theoretical calculations yet.

16. ### Arcorann

Re: Tetris with Cardcaptor Sakura, seems like an interesting, if complex, randomiser to look at. I might take a stab at it later.

I wanted to check a couple of things regarding the workings of the Game Boy randomiser. Firstly, is it three rolls or four? Kitaru's post in the other thread here seemed to imply four, while pineapple's recent post on Hard Drop claimed it was three. Second, it seems that the orientation is stored with the current piece. Would this affect the OR check?

Alternatively, does anyone have the location of the piece generation code within the ROM so that I can check for myself?

17. ### colour_thief

I believe it is 3 rolls.

Randomizer code is from $2007 to $2070 if you would like to check.

18. ### Kitaru

I think it is three rolls, and the rotation info gets masked away with & 0xFC.

19. ### colour_thief

Time for a bump!!

I've been looking at the Game Boy version's randomizer a bit and have come to some interesting results. A refresher on the algo:

1. History of 2
2. Up to 3 rolls
3. Reroll condition is when 3-in-a-row is obtained but this is implemented improperly by misusing a bitwise OR

So the reroll condition takes the bitwise OR of the locking piece, the preview piece, and the rolled piece and compares this to the locking piece. If these are equal, then it rerolls.

This wouldn't necessarily be broken, depending on the binary representation of the piece types. If the pieces were all powers of 2 (eg. 0000001, 0000010, ..., 1000000), there would not be a problem.

The issue comes with the pieces being directly numbered 0 through 6, which in binary give you:

000, 001, 010, 011, 100, 101, 110

This opens up the possibility of pieces being mistaken for other pieces:

Pieces with 1 bit set to 1 (eg. 001) have a "false positive" (eg. 000).
Pieces with 2 bits set to 1 (eg. 011) have 3 "false positives" (eg. 000, 001, 010).

Any number OR'd with 0 gives the original number. Piece 0 is mistaken for every other piece!

The net effect of wasting rerolls on all these false positives is that 3-in-a-rows are not nearly as inhibited as the suggested design intended. But more seriously than that, false positives also cause a bias against certain piece types. In a way, you can say that yes, the game is not giving you enough I pieces, because it is trending below 1/7.

I initially calculated the approximate ratios using a 49x49 transition matrix, going from every possible piece pair, to every possible piece pair. However with a bit of thought I was able to reduce it to an 8x8 transition matrix:

Code:
	2,2 M	2,1 M	2,0 M	1,1 M	1,0 M	0,0 M	*,2 NM	*,1 NM
2,2 M	4.7%	0.00	0.00	0.00	0.00	0.00	14.3%	0.00
2,1 M	9.3%	0.00	0.00	0.00	0.00	0.00	28.6%	0.00
2,0 M	4.7%	0.00	0.00	0.00	0.00	0.00	14.3%	0.00
1,1 M	0.00	4.7%	0.00	1.2%	0.00	0.00	0.00	14.3%
1,0 M	0.00	4.7%	0.00	1.2%	0.00	0.00	0.00	14.3%
0,0 M	0.00	0.00	4.7%	0.00	1.2%	0.3%	0.00	0.00
*,2 NM	54.2%	58.9%	58.9%	58.6%	58.6%	49.9%	28.6%	42.9%
*,1 NM	27.1%	31.8%	36.4%	39.1%	40.2%	49.9%	14.3%	28.6%
The shorthand used for labeling the rows/columns is "X,Y Z" where:

X = the count of 1s in the locking piece's number
Y = the count of 1s in the preview piece's number
Z = whether the current pair is a "match" with a bitwise OR

If you multiply the matrix by 343 you'll see all the numerators of the fractions producing those percentages.

Anyhow, the results of the 8-state matrix were lining up perfectly with 49-state matrix, but more importantly the new matrix is much smaller and more reasonable to work with. So I was able to turn it into exact fractions!

two 1s in piece number = 551/3429
one 1 in piece number = 88217/644652
zero 1s in piece number = 7693/71628

Or to make it more comprehensible:

O, S, T = 16.1%
J, I, Z = 13.7%
L = 10.7%

For reference 1/7 = 14.3%
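These proportions are easy to confirm with a quick Monte Carlo re-implementation of the algorithm as described (a sketch; it assumes "up to 3 rolls" means three rolls in total, with the last kept unconditionally, and numbers the pieces 0 through 6 as above):

```python
import random

rng = random.Random(42)

def gb_next(lock, prev):
    # Up to 3 rolls; reroll while the buggy bitwise-OR check
    # (lock | prev | roll) == lock reports a "3-in-a-row".
    for _ in range(3):
        p = rng.randrange(7)
        if (lock | prev | p) != lock:
            break
    return p

lock, prev = rng.randrange(7), rng.randrange(7)
N = 1_000_000
counts = [0] * 7
for _ in range(N):
    p = gb_next(lock, prev)
    counts[p] += 1
    lock, prev = prev, p         # old preview locks; new piece becomes preview

freq = [c / N for c in counts]
# piece 0 (binary 000) is starved; pieces 3, 5, 6 (two bits set) are favoured
```

The frequencies come out close to the 16.1% / 13.7% / 10.7% split, grouped purely by how many 1 bits the piece number has.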

The piece proportions are a cool first step, but there are other interesting patterns produced by this buggy randomizer also. If nobody beats me to it I will explain what I find after I've explored a little more.

<EDIT> Also some empirical evidence (re-implemented algo) which supports my theoretical calculations:
http://www.twentysixfourteen.net/randomizers/gb-2bil.txt

Last edited: 2 May 2013
20. ### Xkeeper

Oh my god fuck this board. I forgot my password, tried to reset it, and it took over an hour for the e-mail to send... then I log in, change my password, and apparently it didn't change right (even though I made sure it was exactly what it should be!) and now I'm locked out again, fuck. xkeeper@gmail.com if anybody can reset the fucking thing.

Anyway, I got curious and wrote a script to test the game's randomness from inside the game, and here are the drop stats I'm looking at:

Code:
T   53340   21.34%
J   39402   15.76%
I   38093   15.24%
S   34778   13.91%
O   34696   13.88%
Z   26291   10.52%
L   23400    9.36%
250000 drops
It seems worse than what you predicted:
The fact that you get L pieces at half the rate of T is pretty staggering.