With Game Boy Tetris, it's possible to know when the (bitwise OR) 3-in-a-row re-rolling is impossible. If the 2 previous pieces already have clashing bit patterns, then the first roll will definitely be accepted by the algorithm -- the next piece will be purely random. Likewise, if you know that the 2 pieces have matching bit patterns, then you know that certain pieces will be aggressively (up to 3 rolls) avoided. In TGM1 and TAP, even with just 1 piece preview, you develop a sixth sense for the "probable 2nd preview". When bitwise matches occur in GB Tetris, the same potential manifests itself. Perhaps players are already subconsciously catching on to the patterns. But even if not, they are orderly enough to be learned without too much trouble.

Here is a diagram that illustrates the bitwise relationships:
https://www.dropbox.com/s/67ipbq2aqypy70v/GB-Tetris-Circle.png?dl=0

To use this diagram:
-look at the white shape that contains your active piece
-follow the extension of this shape towards the middle
-if your next piece is contained in this extended shape, you have a match
-if you have a match, then pieces outside the extended shape are more probable, while pieces inside the shape are less probable

Ex 1: O piece with J coming next
-O's white shape extends into a circle, encompassing O, I, L, J
-this is a match with the J coming next
-you will very likely receive one of T, S, Z afterwards

Ex 2: I piece with L coming next
-I's white shape extends into an almond, encompassing I, L
-this is a match with the L
-you will almost definitely not get one of I or L afterwards

Ex 3: L piece with Z coming next
-L's shape is the middle already, and includes only itself
-Z clashes with the L
-the next piece will be unpredictable

Now call them what you will: circle/2-bit matches, almond/1-bit matches, triangle/0-bit matches, and clashes. It should be clear that all possible active+next piece combinations fall into one of those 4 categories.
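For anyone who'd rather read the relationships as code than as a diagram, here's a minimal sketch. The letter-to-bit-pattern mapping below is an assumption on my part (one commonly quoted encoding from the disassembly, with each piece ID divided by 4 so only the two relevant bits remain); the circle/almond/triangle naming follows the diagram above.

```python
# Sketch of the match/clash classification described above.
# ASSUMPTION: this letter -> bit-pattern mapping is illustrative,
# taken from one commonly quoted reading of the disassembly.
PIECE_BITS = {'L': 0, 'J': 1, 'I': 2, 'O': 3, 'Z': 4, 'S': 5, 'T': 6}

def classify(active, nxt):
    """Return which of the 4 categories an active+next pair falls into."""
    a, n = PIECE_BITS[active], PIECE_BITS[nxt]
    if n | a != a:
        return 'clash'          # next piece adds a bit the active piece lacks
    # next piece's bits are contained in the active piece's pattern: a match,
    # and the shape of the match depends on how many bits the active piece has
    shapes = {2: 'circle (2-bit match)',
              1: 'almond (1-bit match)',
              0: 'triangle (0-bit match)'}
    return shapes[bin(a).count('1')]

# The three examples from the post:
assert classify('O', 'J') == 'circle (2-bit match)'   # Ex 1
assert classify('I', 'L') == 'almond (1-bit match)'   # Ex 2
assert classify('L', 'Z') == 'clash'                  # Ex 3
```

Under this mapping the circle/almond/triangle categories are just the popcount of the active piece's pattern, which is a tidy way to see why every pair falls into exactly one of the 4 categories.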
And the particular probabilities of the likely/unlikely pieces in these specific scenarios are straightforward to calculate. I'll leave that as an exercise to the reader.

Now, the way I calculated the theoretical piece distributions earlier in the thread gives some other cool info for free. You might be wondering: is this all worth it? How often do you get clashes and how often do you get matches?

Matches: 36211/107442 ~= 33.7%
Clashes: 71231/107442 ~= 66.3%

So events become predictable roughly a third of the time. Probably not game changing, and perhaps even borderline useless. But cool to know nevertheless.

Now, we know that the randomness feeding the algorithm is biased, so the specifics of what I've just outlined don't apply exactly. But even with the bias, the algorithm is pushing things in this direction, and so the patterns should still hold to an extent.

I've made some progress in understanding some of the random bias in the GB Tetris randomizer. The 2 rerolls happen a predictable number of cycles after the initial roll, and the DIV register that provides the randomness increments at a fixed rate of once every 256 clock cycles. There is an added unknown in that we don't know exactly how close the DIV register is to its next increment. You can think of it like a number 0x##.## at the time of the initial roll: the roll exposes the integer portion but not the "decimals". One important thing we do know about the decimals, however, is that since all opcodes on the GB take a multiple of 4 clock cycles, the value will be a multiple of 4. So rather than 256 possibilities there, there are a mere 64.

For the purpose of this second take at calculating the probabilities, I've assumed that the DIV register is a random number (0 through 255), and the fractional cycles are a random multiple of 4 (0 through 252). This gives 256*64 = 16384 possible sets of 3 rolls.

To predict what the 2nd and 3rd rolls are following the 1st roll, I have mapped out the cycles of the algorithm. It's important to understand the structure of the modulus code. It's a loop that, each iteration, decrements the random number by one and cycles a variable 0 -> 6 over and over, one step at a time. This includes a branch for when that variable hits 7 and has to be reset to 0.

100 clock cycles required in the best case (mod 7 loop is not executed)
56 clock cycles required for the first 6 iterations of the modulus loop
52 clock cycles required for every 7th iteration of the modulus loop

Now I've done my best to count the cycles, and triple checked them, but I really would appreciate it if someone who actually knows what they are doing confirmed these cycle counts. The bias of the randomizer changes if even one of my counts is off. Using these counts it's possible to calculate how many cycles elapse between rolls with a simple formula.
It's basically:

100 + (roll x 56) - (floor(roll/7) x 4)

For 255 it would take 14236 clock cycles.

To get the second roll:

(initial roll) + floor( ( (initial fractional cycles) + (roll 1 cycle calculation) ) / 256 )

...and the third:

(initial roll) + floor( ( (initial fractional cycles) + (roll 1 cycle calculation) + (roll 2 cycle calculation) ) / 256 )

Doing this for all 16384 initial states gives the probability of all the 7^3 = 343 possible sequences of 3 rolls. Examining the 343 sequences (some have a 0/16384 chance btw!) and interpreting which of the 3 rolls would be chosen in the different bitwise OR scenarios, you can reduce the probabilities to the following table:

Code:
  | 0 Match | 1 Match | 2 Match | 3 Match | 4 Match | 5 Match | 6 Match | Clash
0 |      16 |     113 |     272 |     802 |      41 |     889 |     830 |  2368
1 |    2825 |     222 |    3218 |     820 |    3356 |     572 |    4331 |  2368
2 |    2703 |    3289 |     180 |     702 |    3112 |    4688 |     746 |  2368
3 |    2817 |    3195 |    3258 |     766 |    3254 |    4176 |    4623 |  2368
4 |    2672 |    3289 |    3176 |    4500 |     244 |    1034 |     652 |  2304
5 |    2671 |    3097 |    3115 |    4432 |    3019 |     606 |    4361 |  2304
6 |    2680 |    3179 |    3165 |    4362 |    3358 |    4419 |     841 |  2304

Where for scenario Column Label, you have probability Table Entry of getting piece Row Label. Probabilities are expressed as the numerators of fractions of 16384. I don't have theoretical piece proportions worked out for this yet, but hopefully Stella can whip up a simulation to see how it compares to Xkeeper's observed proportions.
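To make the bookkeeping concrete, here's a small sketch of the enumeration described above. Function and variable names are my own; it assumes the quoted cycle counts (100 base, 56 per mod-7 iteration, 4 fewer on every 7th), and I wrap the running sum at 256 since DIV is an 8-bit register.

```python
from collections import Counter

def roll_cycles(roll):
    # clock cycles consumed by one piece roll, given the raw DIV byte rolled
    # (every 7th iteration of the mod-7 loop is 4 cycles cheaper: 52 vs 56)
    return 100 + roll * 56 - (roll // 7) * 4

def three_rolls(div, frac):
    # div:  DIV byte at the initial roll (0-255)
    # frac: fractional cycles toward the next DIV increment (multiple of 4)
    c1 = roll_cycles(div)
    second = (div + (frac + c1) // 256) % 256
    c2 = roll_cycles(second)
    third = (div + (frac + c1 + c2) // 256) % 256
    return (div % 7, second % 7, third % 7)

# enumerate all 256 * 64 = 16384 possible initial states
dist = Counter()
for div in range(256):
    for frac in range(0, 256, 4):
        dist[three_rolls(div, frac)] += 1

assert roll_cycles(255) == 14236    # matches the worked example above
assert sum(dist.values()) == 16384  # every initial state accounted for
```

`dist` then holds the probability (as a count out of 16384) of each of the up-to-343 possible roll triples, which is the input to the scenario table above.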

To double check code, it might be useful to test with the following table. It should be equivalent to the original naive description of the GB algorithm, which assumes unbiased randomness (as in, it's a way different implementation with the exact same randomizer properties; you should get the same ratios).

Code:
  | 0 Match | 1 Match | 2 Match | 3 Match | 4 Match | 5 Match | 6 Match | Clash
0 |       1 |       4 |       4 |      16 |       4 |      16 |      16 |    49
1 |      57 |       4 |      67 |      16 |      67 |      16 |      93 |    49
2 |      57 |      67 |       4 |      16 |      67 |      93 |      16 |    49
3 |      57 |      67 |      67 |      16 |      67 |      93 |      93 |    49
4 |      57 |      67 |      67 |      93 |       4 |      16 |      16 |    49
5 |      57 |      67 |      67 |      93 |      67 |      16 |      93 |    49
6 |      57 |      67 |      67 |      93 |      67 |      93 |      16 |    49

The denominator for this table is 343.
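If it helps anyone verify, this naive table can be generated mechanically: for a scenario where the previous pieces' OR pattern is k, enumerate all 7^3 equally likely roll triples and accept the first roll whose bits aren't contained in k, falling back to the third roll when all three match. A hedged sketch (piece IDs 0-6 treated as bit patterns, per the earlier discussion):

```python
from itertools import product

def naive_column(k):
    """Counts (out of 343) of each final piece, given match pattern k."""
    counts = [0] * 7
    for rolls in product(range(7), repeat=3):
        chosen = rolls[2]            # fall back to the third roll
        for r in rolls:
            if r | k != k:           # r adds a new bit: a clash, accepted
                chosen = r
                break
        counts[chosen] += 1
    return counts

# Two columns from the table above:
assert naive_column(0) == [1, 57, 57, 57, 57, 57, 57]
assert naive_column(3) == [16, 16, 16, 16, 93, 93, 93]
```

The closed forms drop out of this too: with s pieces suppressed, a suppressed piece appears with probability (s/7)^2 * (1/7) and each other piece with (49 + 7s + s^2)/343, which reproduces the 1/4/16 and 57/67/93 entries.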

Consistent with my analysis. The table you posted differs from mine, though:

Code:
     *h    0l    1l    2l    3l    4l    5l    6l
0  2368    55   190   257   780   100   869   838
1  2368  2854   288  3197   829  3339   707  4298
2  2368  2769  3359   140   632  3198  4675   881
3  2368  2803  3273  3421   843  3264  4262  4631
4  2304  2614  3175  3141  4600   167   972   552
5  2304  2631  2986  3110  4368  3047   490  4314
6  2304  2658  3113  3118  4332  3269  4409   870
  16384 16384 16384 16384 16384 16384 16384 16384

I'm still working on the proportions, by the way. I might edit them in later.

Thanks for double checking! I don't have access to my previous spreadsheet right now, but I rebuilt one from scratch and got your exact table, so I think those are the correct numbers. Comparing the table with the naive pure-random version, I don't think it'll change the piece ratios enough to match observation. In fact, it may make things worse, even if it's more accurate on some level. If this is the case, then the remaining assumption of randomness (the DIV state at the time of the initial roll) must be contributing the majority of the bias. But let's hope not, because that is significantly trickier to analyse.

Since it's somewhat relevant to this thread, I figured I'd spit out some numbers I've generated for NES's randomizer. Due to a routine running on the Level Select screen that cycles the randomizer until it finds two values in the range 0-9 (which I presume is related to the unfinished 2P Versus functionality, B-Type garbage generation, or both), there are only 7599 possibilities for the first two pieces. Here are the rankings:

Code:
744 : T L, 700 : T J, 687 : S O, 491 : I S, 487 : I T, 483 : Z I, 480 : L T,
477 : J Z, 474 : L S, 466 : J I, 356 : O O, 354 : O L, 348 : O Z, 343 : O T,
241 : Z J, 234 : Z L, S I,

I also did a quick total of which seed+history combinations yield which pieces. Things were looking quite balanced when empty history was included in the total, but I realized that wasn't a very good indicator of the character of the randomizer once the game gets going. It's a bit less balanced when limited to filled history:

Code:
Piece  Count  Share           Absent Frames
T      29687  12.9428998688%  3895
J      32421  14.1348656532%  3714
Z      32278  14.0725206981%  4246
O      32235  14.0537736137%  3759
S      36164  15.7667339527%  3075
L      34783  15.1646473586%  3600
I      31801  13.8645588549%  4274

For reference, there are 32767 seed states * 7 pieces = 229369 positions. The "absent frames" column indicates how many seeds from which a piece simply cannot be generated, regardless of history. Usually this means doubling up on one yield (25543 instances in my output where two different histories lead to the same yield), but occasionally there is tripling (510 instances). I haven't done much in the way of applied testing yet, but we could probably rig my PyNEStris thing for that at some point.

EDIT: Oh yeah, it might be worth mentioning a little about the version included in the Nintendo World Championships 1990 competition cart. The randomizer is mostly the same, but made deterministic by removing the RNG cycling every vblank.
There is a seeding loop before Tetris begins: for i in range(middle byte of Super Mario score), do random.next(). Testing with FCEUX indicates a race condition bug that may or may not mangle the seed state for Mario scores above (if I recall correctly) xx30xx, but otherwise the seed loop acts a bit like an offset -- some minor differences may be expressed at the beginning of the sequence, but otherwise they all converge due to re-rolling. The general length of a sequence is ~26,500 pieces before reaching a loop point.

EDIT2: So it occurred to me that I managed to overlook a very important detail when generating the overall tables. Pieces generated since start-up (stored in a byte) are also a factor in the selection method, so instead of seed+history (32767 * 7 = 229369 positions) it would be more like seed+history+pieces (229369 * 256 = 58718464 positions) to memoize.

EDIT3: Here is a big old lump sum of data that takes into account the pieces-generated byte:

Code:
68379 : L T, 64743 : T Z, 60991 : T I, 60656 : I S, 56841 : O Z, 53415 : L O,
53312 : Z L, 53262 : I J, 53248 : L S, 53223 : Z J, 53167 : J L, 53141 : S J,
49332 : O T, 49193 : O L, 45766 : T O, 45697 : S I, 45671 : I Z, 45451 : J S,
45428 : T S, 45407 : J T, 45385 : Z T, 41941 : J I, 41938 : J Z, 41832 : T L,
38207 : I O, 38109 : Z O, 38095 : S T, 38093 : O I, 38046 : I T, 38042 : S L,
37975 : T J, 37970 : I L, 37923 : Z I, 37866 : Z S, 37813 : S Z, 37812 : S O,
33918 : L J, 30448 : O J, 30438 : O S, 30345 : J O, 30341 : L Z, 30313 : L I,
22714 : S S, 19124 : O O, 15178 : J J, 7704 : T T, 7653 : Z Z, 3798 : L L,

Piece  Count    Share           Absent Frames
T      8128192  13.8426509249%  0
J      8782624  14.9571759915%  0
Z      8127264  13.8410705021%  0
O      7864096  13.3928843915%  0
S      9954784  16.9534134953%  0
L      7340928  12.5019074068%  0
I      8520576  14.5108972878%  0

Unsurprisingly, this change makes the old "absent frames" column useful for absolutely nothing, whee.
Anyway, what we do see from this is:

-the full spread for starting roll variations (while the old data should still stand for possibilities from an initial power-on state)
-starting with doubles is predictably the least likely, and, interestingly, double I's on start never occurs*
-the total pool of possible rolls during the body of the game is still skewed

*(I very much considered the possibility that it was a problem in my logging code, given the relative abundance of other repeat starts by comparison, but it's not looking like it thus far. Later, when I've got a fresh head, maybe I can try to think about the nature of the numbers and why consecutive seed double I's would be an impossibility -- its position in the 7th slot could have a lot to do with it, but that's just a conjecture.)

So, that's kind of neat I guess. I'm not sure it's necessarily more than that, but that's fine. I'm tempted to ponder what would come of clamping things to realistic min/max frame spreads given the game's timing and frame rules, but currently I feel pretty secure in saying that any simple model would fail to be all that meaningful, and that any sufficiently accurate model would be needlessly time consuming or ridiculous compared to just sampling the game itself as Xkeeper has for GB and TDX. Maybe it'll be an exciting failed experiment for another day, if I feel particularly keen to entertain some dumb ideas.

...so, this post got pretty huge since I kept adding stuff to it over time, and it's not 100% related to the thread anyway since it's a lot less "theory" and a lot more "yoooo let's generate a lot of numbers and data since info on the RNG implementation gives me the faculty to do so," so uh, sorry about that, haha. Perhaps it should be split elsewhere.
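For anyone who wants to reproduce numbers like these, the NES RNG step itself is small enough to sketch. This is my reading of the commonly documented routine (a 16-bit shift register whose new top bit is the XOR of bits 9 and 1); the power-on seed constant is an assumption, and any errors are mine:

```python
def nes_rng_step(value):
    # 16-bit LFSR: XOR bit 9 with bit 1, shift right, new bit enters bit 15
    bit = ((value >> 9) ^ (value >> 1)) & 1
    return (bit << 15) | (value >> 1)

# Count the period starting from the (assumed) power-on value.
seed = 0x8988
state = nes_rng_step(seed)
period = 1
while state != seed and period < 1 << 17:   # bounded, just in case
    state = nes_rng_step(state)
    period += 1

# should land on the 32767 seed states quoted above
assert period == 32767
```

Iterating this from the seed visits every usable state exactly once per cycle, which is where the "32767 seed states * 7 pieces" position count comes from.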

i thought it might be handy to have some reference randomizers coded up. maybe have a shared repository somewhere? here's my stab at a Ti randomizer (any comments on its accuracy are very welcome)

Code:
# PETE ROOLZ
#
# you can get a quick sequence out with
# python -c 'import tgm; r = tgm.TiRandomizer(); print "".join([r.next() for i in range(70)])'
#
# sample 1000-piece sequence:
#
# jlosiztjsozlijsotilsjtzoijtzlojsiztoljstiltzosjtzoiljzoslitjslizjsotlziosjlztsj
# itlojztisozlitosjlizojtlzsotjislotjzlitsjoziljsoztlijsotilzstojzziiljsztljiotsj
# iolszjotszjiloztisojlsztoiszljitzosltjijzosilzjtioszltjisotjilzostlzjoistjzilos
# jtliozstjolstziljlsotslzjisozjitoilstzolitzjsioljztoljsztoisjltiszloijztolszito
# jzilsjztisjltozsijoltzsoijlztijsotliszjloitzjoltsizjlsioojtzsloijszoitjzsltoijz
# ltisoltjzsilitosjizoltjzosltjoisztljsitlzjosiltozijltosiztjsilojztlsjiozstjizsl
# oitjslotzjliozjstlziosljtoszijtlziojlzstilozjstiljsziojtsloitzsosizjtolzjsoizjt
# sliotzjlstiolstzojloizjtsiljzsoijzstoljsitzoslijoztsijoztllsitzlosjzijtlsojtilz
# osijtzlsotzisojtilozjtisllozsjlotijsolztijlsozjtsizjositljsztijlzoitsjozltjsilo
# zjstiolzstjlosiztlojlsitzosjizltojsilzjsilztsoijozltjsolzitsljioszjitolzsitljsz
# toljsitzolijszotjlsizjoltisojtzslztislzoiojztilsjtolsjztiosljziotjzisotzisjtlzs
# oiljtzoijtszloostijoslzitjoolszijosztloijtsziljositjzsltjoilzzositzosjtzoijlzsi
# tlojstzliotslzjozisltjisztljizotjisolzjistljoisltozo

import random
import collections

# On a Ti randomizer bug:
#
# > When these 3 conditions are met:
# >
# > 1. The randomizer has just chosen the most droughted piece
# > 2. A reroll happened during the choice of that piece.
# > 3. Excluding the first piece of the game, and including this chosen piece, you have received at least one of each of the 7 piece types.
# >
# > Then the roll that chose that most droughted piece will not update the bag of 35 pieces.
#
# -- colour_thief, http://tetrisconcept.net/threads/randomizer-theory.512/page-7

def TiRandomizer():
    bag = ['j', 'i', 'z', 'l', 'o', 't', 's'] * 5
    history = collections.deque(['s', 'z', 's', 'z'])
    drought_order = ['j', 'i', 'z', 'l', 'o', 't', 's']
    count = { 'j': 0, 'i': 0, 'z': 0, 'l': 0, 'o': 0, 't': 0, 's': 0 }
    n = 1

    # first piece is special
    first = random.choice(['j', 'i', 'l', 't'])
    history.popleft()
    history.append(first)
    yield first

    while True:
        for roll in range(6):
            i = random.randint(0, 34)
            piece = bag[i]
            if piece not in history:
                break
            if roll < 5:
                bag[i] = drought_order[0]

        count[piece] += 1

        emulate_bug = all([
            piece == drought_order[0],
            roll > 0,
            0 not in count.values()
        ])
        if not emulate_bug:
            bag[i] = drought_order[0]

        drought_order.remove(piece)
        drought_order.append(piece)

        history.popleft()
        history.append(piece)
        yield piece

if __name__ == '__main__':
    r = TiRandomizer()
    print "".join([r.next() for i in range(70)])

rehosted (possibly out of date) at http://codepad.org/Twb8rbTZ

Looks good to me! I guess if I want to be a shitter, I would say that random.randint is not exactly what happens in the code, since it's deterministic once you've hit start. My guess is that it saves the frame you've hit start on and uses it in a big not-random function that returns a huge number modulo 35. So if I'm right, that would imply that before the loop you'd use

frame = random.randint(0, 162345)
numberOfPieces = 0

and then in the loop

i = ((frame + 1000) * numberOfPieces) % 35
numberOfPieces += 1

But that does not make a difference here, so... eh. I did give it a thought during Eindhoven anyway, but I have no idea if people have managed to work out precisely what happens (the NVRAM has to be used some way, for instance). Colour_thief?

i agree, it would be great to have an identical Ti randomizer reference, like Eirstt is for TAP. it'd need someone better at disassembly than me to chip in though. i kind of want to start a github repo and collect a bunch of these kinds of things

I'd like to help, I'm trying to emulate all the big randomizers, with and without bugs. In your Ti code, `drought_order` is already defined full, should it not start empty?

i'm not sure, depends how it works in real Ti. in the long term it doesn't matter, because we only emulate the bug if you've had one of each piece already (in which case drought_order will be in a correct order), but it does mean early-piece rerolls might be inaccurate. we do need to initialize it to something for the reroll-fill, however, assuming Ti always does the bag replacement and doesn't skip that phase until the drought order is completed or such. ideally we'd know how Ti does this - last I heard Altimor was looking at reverse-engineering Ti's custom rand() implementation, so possibly he or someone else knows and can chip in on the drought ordering thing

The ordering is really a list of drought sizes. Functionally they search through it for the largest drought; it's not an ordered list. I think they default to 5 (any tie is functionally equivalent).
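If it helps the reference implementations, that description could be modelled something like this. This is a hedged sketch of my own, not a confirmed Ti implementation; the default of 5 and the argmax selection follow the post above, and the names are mine:

```python
# one drought counter per piece, defaulting to 5 as described above
droughts = {p: 5 for p in 'jizlots'}

def most_droughted():
    # search for the largest drought; any tie is functionally equivalent
    return max(droughts, key=droughts.get)

def account(dealt):
    # bump every counter, then reset the piece that was just dealt
    for p in droughts:
        droughts[p] += 1
    droughts[dealt] = 0

fill = most_droughted()   # piece written into the bag slot on a reroll
account('j')
```

The exact ordering of the counter update relative to the bag write is a guess on my part; only someone with the disassembly can pin that down.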

Inspired by the new thread on randomisers on Hard Drop, I recreated my randomiser spreadsheet. After that, I wanted to sort the twenty randomisers in my new spreadsheet by Var(I), so...

Memoryless: 42
63-bag: 34 + 82/275 = 34.29818181...
NES (ideal): 32 + 2/3 = 32.66666666...
Cultris II (sample): ~27.65
28-bag: 27 + 13/25 = 27.52
4 history 2 roll: ~20.56697
14-bag: 19
5-bag: 18
10-bag: 16 + 32/35 = 16.91428571...
7 from 8: 15 + 82/175 = 15.46857142...
1+7: 12 + 13/14 = 12.92857142...
6-bag: 12 + 5/6 = 12.83333333...
3 history strict: 12
8-bag: 11 + 43/56 = 11.76785714...
4 history 4 roll (TGM1): 10.13757557...
7-bag: 8
7-bag with seam match check: 7.5
4 history 6 roll (TGM2): 7.34494156...
4 history strict: 6
TGM3 (sample): ~5.31

Use this as you will.

Variance of the interval between the current piece and the next time the current piece will occur. This should give an idea of how variable the interval lengths are. If you want any other stats, just ask.
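Two of those values can be double checked by hand, which may help anyone validating their own spreadsheet. For memoryless, the interval is geometric with p = 1/7, so Var = (1-p)/p^2 = 42. For 7-bag, the interval between occurrences of a piece is (7 - p1) + p2, where p1 and p2 are its independent uniform positions in consecutive bags, so Var = 2 * Var(U{1..7}) = 8. A quick sketch:

```python
from fractions import Fraction
from statistics import pvariance

# memoryless: geometric interval, Var = (1 - p) / p^2
p = Fraction(1, 7)
assert (1 - p) / p**2 == 42

# 7-bag: interval = (7 - position in bag k) + (position in bag k+1),
# with the two positions independent and uniform on 1..7
intervals = [(7 - p1) + p2 for p1 in range(1, 8) for p2 in range(1, 8)]
assert pvariance(intervals) == 8
```

Both match the list above exactly, which is a nice sanity check on the spreadsheet.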

I'd be curious to see the predictableness statistic. Like, the % of your next piece guesses you'd expect to get right given perfect piece history knowledge. I imagine it would shake up the ordering a bit. Also curious to see anything else you've put together, and I'd totally geek out in your spreadsheet if you feel like sharing it.

Yeah I think variance alone isn't a complete indicator. The probability distributions are going to be pretty wildly different shapes for different randomisers and so variance alone isn't ever a complete like-for-like comparison. Would be interesting to see a few of them actually plotted out as a comparison.

I have forgotten most of the discussion from last time, but there are a few points that came to my mind. Essentially, we calculate stats based on some given, let's say, randomiser state (values of probabilities assigned to all the states). I think that knowledge of history would alter a lot of things in general. For example, we have to be fairly reserved with the application of the static/stable state to calculate stats. What the stable configuration assumes is that the system is left for a very long time and the details of what block sequence was given in that time are not known. If we know the past block sequence (from a given initial randomiser state), even if the system is left for a long time, we know a lot more. For example, starting with an empty bag, after 140 blocks we will always reach the empty bag again. After 141 blocks (with, say, the last block known), we still know very well where we are. If the initial randomiser state wasn't sharp, things would be a bit different, I guess.

For a specific block, we will have something like this (as an example):

    S1(0.1)  S2(0.2)  S3(0.3)  S4(0.4)
0   ---      ---      ---      ---
1   ---      ---      ---      ---
2   ---      ---      ---      ---
3   ---      ---      ---      ---
4   ---      ---      ---      ---
......

The numbers in brackets along with the states represent the probability of that state (in a given randomiser state). And the numbers corresponding to 0, 1, 2, 3... represent the probability that the given block will be next, or occur after 1 block, or 2, or 3 blocks (and so on). Now it should be easy enough to calculate the average value at which we expect to see the given block next.

I think what RostiLFC meant with a plot was a graph of (something like) the length of the interval constructed around the average (on the x-axis) versus the percentage of values occurring within that interval (on the y-axis).
What I mean is that if the average is, say, 5 and the interval is 2, then we want to know the percentage of values occurring between 3 and 7 inclusive (the actual plot would be like a pictograph; we could smooth it out if we wanted to).

------

Now, the stable state does correctly represent the long-term relative frequency of the states (I don't know how to prove that). So, in that sense, it makes sense to use it to calculate averages (which would be 6 for reasonable randomisers) and to plot the variations (as I described in the above paragraph). However, something still seems missing to me. In one sense, I am trying to think of the deficiency of the plots for the stable state. In another sense, I am trying to think of what other useful values can be extracted (for more specific situations). A few things I can think of:

1) The initial randomiser state. How long before it gets reasonably close to the variation plot for the stable state (and the evolution of the variation plot during that time, if we want to get very detailed).

2) If, for example, we are talking about a multiplayer game, it is possible that, the way the game is programmed, the randomiser resets after every round (I don't know what actually happens). This will affect things drastically: the importance of the short-term behavior of the randomiser will increase significantly. One thing one could do would be to measure the avg. number of blocks used per match (for average play, top-level play etc.) and then do a calculation based upon that.

3) Knowing the block sequence (given a well-defined start state). What kind of related useful values can we think of?

If we are given a specific randomiser and a given tetromino (say "I" as an example), then we can associate the following values, given that we can see the full sequence as it comes:

-- best prediction possible
-- worst prediction possible
-- expected prediction value (for the long term)

What I am trying to say is that before each piece the randomiser hands out, we have to guess with a yes/no answer. Yes would mean that the next piece will be an "I", and no would mean that the next piece won't be an "I". The above three values depend on what method we use to calculate the yes/no answer. Let's think about the optimum method first.

For memoryless we have one state and we can only start there. The optimum method seems to be to select No for each turn. Then we would have the following stats:

best prediction possible = 100%
worst prediction possible = 0%
expected prediction value = (6/7)*100 = 85.7%

For bag, let's suppose for the sake of simplicity/clarity that we start out with a full bag (with all 7 pieces). We would have:

best prediction possible = 100%
worst prediction possible = (6/7)*100 = 85.7%

The reason for the second value is that in the worst scenario, the randomiser can only trick us once in any 7 pieces. For example, if we selected 'No' for the first piece (as we should in an optimal method) and the randomiser just gave the piece we said 'No' about, then we know for sure that we can just select 'No' for the next six times and get it right.

What about the long-term expected value? That would depend on the stable probability values of the states. But first we need to select a 'Yes/No' answer for each state (to find the optimal method).
When the bag has 7 pieces --- answer = No (I probability is only 1/7)
When the bag has 6 pieces (I handed out) -- answer = No (I probability is 0)
When the bag has 6 pieces (I not handed out) -- answer = No (I probability is 1/6)
When the bag has 5 pieces (I handed out) -- answer = No (I probability is 0)
When the bag has 5 pieces (I not handed out) -- answer = No (I probability is 1/5)
When the bag has 4 pieces (I handed out) -- answer = No (I probability is 0)
When the bag has 4 pieces (I not handed out) -- answer = No (I probability is 1/4)
When the bag has 3 pieces (I handed out) -- answer = No (I probability is 0)
When the bag has 3 pieces (I not handed out) -- answer = No (I probability is 1/3)
When the bag has 2 pieces (I handed out) -- answer = No (I probability is 0)
When the bag has 2 pieces (I not handed out) -- answer = Yes/No (I probability is 1/2)
When the bag has 1 piece (I handed out) -- answer = No (I probability is 0)
When the bag has 1 piece (I not handed out) -- answer = Yes (I probability is 1)

Based on the stable probabilities of these 13 states we can calculate the long-term prediction value. It should not be difficult, but I don't have it written down beforehand, so I will have to re-calculate these.

That might not be too different from what I described based on one specific piece. Unless I am missing something, for each state I think we would want to choose the piece that has the highest probability (to be optimal), as compared to choosing whether a given piece has probability more or less than 0.5. For example, for bag (assuming again that we start with a full bag and hence can follow the trail accurately), if we have m pieces left in it then our guess is correct with probability 1/m. Assuming that the long-term probability of having a given number of pieces in the bag (1 to 7) is 1/7, the long-term prediction value should be:

(1/7 + 1/6 + 1/5 + 1/4 + 1/3 + 1/2 + 1) * 1/7 = 37%

The optimal method would just be to guess any piece that is still in the bag (not taken out).

p.s.
We were assuming sharp start states (and hence that we can follow the exact states given a sequence) for calculating the best/worst/long-term prediction values. If they weren't sharp, could that make a difference?
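The 37% figure above checks out exactly as a fraction, under the stated assumption that the bag-fill levels 1 through 7 are equally likely. A quick sanity-check sketch:

```python
from fractions import Fraction

# optimal guess: name any piece still in the bag; with m pieces left the
# guess is right with probability 1/m, and each m in 1..7 is assumed
# equally likely in the long run
rate = sum(Fraction(1, m) for m in range(1, 8)) / 7

assert rate == Fraction(363, 980)
assert abs(float(rate) - 0.370408) < 1e-6   # ~37%, as computed above
```

Working in exact fractions avoids any rounding questions when comparing against other randomisers' prediction values.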