Randomizer Theory

Thread in 'Research & Development' started by colour_thief, 12 Jan 2007.

  1. Yeah deep bag concepts are neat, I should explore them more than I have. I have a framework where, even for highly experimental randomizers, I can automatically calculate exact drought curves, entropy, conditional entropy, perplexity, etc. I keep meaning to properly write up the stuff I've discovered but have been perpetually sidetracked.

    I don't think I've posted this here but this is probably as close as I'll get to a proper exposition for a while:
    twitch.tv/videos/1085579617
    When you internalize my methods a bit you start getting pretty nice intuitions about how these different randomizers will play out. :)

    I'm definitely eternally passionate about this stuff so if you check it out and have questions hit me up!
     
  2. I dug up the link I got from you a couple of years back and read through the stuff there; I see you've added some things since then.

    Speaking of things from a couple of years back, another thing I talked about on Discord back then but never publicly was Tetris with Cardcaptor Sakura's randomiser in Easy. Based on my notes, they tried to do a 3-history 4-roll randomiser with biased piece probabilities, but the reroll code was bugged so the history was effectively ignored. Unfortunately I seem to have forgotten to write down the exact probabilities of each piece type...
     
  3. That's pretty neat that they experimented with a couple different randomizers in there. Hopefully someone digs into it again at some point.
     
  4. I tried to implement what I'm calling (as a temporary name) the (1,6) bag stack (using the principle I described the other day with two bags of size 1 and size 6, with the 1 on the bottom) and it appears to have exposed a flaw in your entropy calculation -- running the calculation gave me an entropy of 0.00%, which is obviously wrong (exercise: show that this bag stack is equivalent to a strict 1 history randomiser). Are you interested in having a look to see whether you get the same result?
     
  5. Sure, I'll give it a shot. Like post #260, yeah? What's the setup exactly (what bag sizes)? Just a single piece in the bottom layer? I must be missing something, because that sounds equivalent to pure random with forbidden repeats.
     
  6. It looks to me like you have the right model in mind. I picked a simple case for testing (the (2,5) and (3,4) cases are more interesting, as are those where we have 14 or more pieces in the system) and the fact that it's equivalent to strict 1 history makes it really easy to derive the correct values for everything. However, it also has the property that given the state the next piece is known exactly, which seems to be tripping up your entropy calculation.
     
  7. I tried to implement it today and I think I see the issue. To use my tools, you have to represent a randomizer as a digraph with a finite number of states. Here's a practical test for figuring out when that's the case:

    If you know the current state and the piece dealt, and can deduce with certainty what the next state is, then these methods work. (example: NES, TGM1, TAP, 7-bag etc.)

    If, even if you know the current state and piece dealt, the next state is ambiguous, then these methods don't work. (example: TGM3 has not-player-visible rerolls that change the probabilities of piece types by weighting them differently in the bag of 35) But if you can express the same randomizer with unambiguous states then you're good.


    If you're familiar with some finite automata theory, you'll know that automata with multidigraph structures can be transformed into equivalent simple digraph structures using a "powerset construction". E.g. if your automaton has n states, then you can simply create a new automaton with 2^n states for the powerset of states.

    The issue with the powerset approach in our case (stochastic automata) is that instead of a powerset of all states (each a simple binary 1 or 0 for set inclusion), we have probabilities. And if you work out some examples by hand, you'll see that there are actually infinitely many states when you try to represent the multidigraph this way. This doesn't mean there's no way to take a multidigraph and transform it into an equivalent digraph, but this is a very hard general problem and I don't have a function that can do it yet.

    To go into specific detail about your deep bag example... You're getting 0 entropy because (I'm assuming) you programmed it in a way that implies you always know what's in the front bag. If you do, then (1,6) truly has 0 entropy. In reality, even if you knew your current state, after seeing a piece dealt you would only know the 6 possible next states, each of probability 1/6. But you could go arbitrarily long without, say, moving the I piece into the front bag. And with each piece dealt, the probability that the I piece is in the front bag can only stay the same or increase, getting arbitrarily close to 100%. So we have an infinite state explosion problem when you try to represent the randomizer in graph form using a "stochastic powerset" vector.

    This stuff is pretty interesting to me and I'd like to handle these cases more elegantly but it is a little over my head and I'm not even entirely certain that it's not an unsolved problem in theoretical computer science.
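    For what it's worth, the (1,6) bag stack itself is easy to simulate directly. Here's a sketch under my assumed mechanics (deal the single bottom piece, refill the bottom slot from the 6-piece top bag, then return the dealt piece to the top bag), which makes the strict 1-history equivalence visible:

```python
import random

PIECES = "IJLOSTZ"

def bag_stack_1_6():
    # (1,6) bag stack sketch -- mechanics assumed from the thread: deal the
    # single piece in the bottom bag, refill the bottom slot from the
    # 6-piece top bag, then return the dealt piece to the top bag
    top = set(PIECES)
    bottom = top.pop()  # arbitrary starting piece in the bottom bag
    while True:
        dealt = bottom
        bottom = random.choice(sorted(top))  # refill before the dealt piece returns
        top.remove(bottom)
        top.add(dealt)
        yield dealt
```

    Because the dealt piece only re-enters the top bag after the bottom slot has been refilled, a direct repeat is impossible and each of the other 6 pieces is equally likely: exactly a strict 1-history randomizer.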
     
  8. Zaphod77 (Resident Misinformer)

    Interestingly, Heboris U.E. has an odd family of randomizers called hebo2 through hebo7 that aren't used by default (it normally uses 4 history with 6 rerolls).

    These randomizers draw the listed number of pieces from the bag, but will put the piece back in the bag and reroll if it matches the previous one.

    Additionally, they never start you with O, S, or Z by default.

    So hebo7 is even kinder than 7-bag, because it never gives a direct repeat. It can't give you SS or ZZ, or even SZZ, SSZ, ZZS, or ZSS. It can still give you SZSZ and ZSZS, though. It does make tetris tetris tetris combos even harder.

    hebo6 will never give you a repeat either, but it can starve you of a piece for longer than bag can, so it's more unpredictable, with the lower numbers capable of starving you of more pieces at once.
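    One plausible reading of this description (my assumption, not verified against the Heboris source): heboN shuffles a full 7-piece bag, deals only the first N pieces from it, rerolling any draw that matches the previous piece, then discards the remainder and refills. A sketch:

```python
import random

PIECES = "IJLOSTZ"

def hebo_n(n, prev=None):
    # assumed reading: shuffle a full 7-bag, deal only its first n pieces
    # (rerolling any draw that matches the previous piece), discard the
    # rest, refill, repeat
    while True:
        bag = list(PIECES)
        random.shuffle(bag)
        dealt = 0
        while dealt < n:
            piece = bag.pop(0)
            if piece == prev:
                bag.append(piece)  # put it back and reroll
                continue
            prev = piece
            dealt += 1
            yield piece
```

    Under this reading hebo7 never repeats a piece directly, matching the description above, while hebo6 and below can starve a piece because the undealt remainder gets reshuffled away.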
     
  9. Zaphod77 (Resident Misinformer)

    I have finally sussed out Bloxeed's piece randomizer precisely.

    It's a variant of Sega Tetris, which has been reversed already.

    Instead of generating a 1k sequence, it runs the main generator once (identical to sega except that the seed is one higher), and stores all 32 bits of the result as the seed for the piece generator.

    Instead of taking the lower 6 bits of the result and doing a modulo 7 to pick a piece, it takes the lower SEVEN bits of the result. This introduces a slight bias towards I and Z, compared to the original Sega version's slight bias for I.

    If the upper three of those seven bits are all set (i.e. if ((result & 0x70) == 0x70)), then a flag is set to turn S and Z into I for that roll.

    This means that compared to Sega Tetris, which had a slight bias for I and I only (a 10 in 64 chance, vs 9 in 64 for the others), Bloxeed has a bigger bias towards I and hands out fewer S and Z pieces. The piece probabilities are:

    I:3/16
    S:1/8
    Z:1/8
    J:9/64
    L:9/64
    O:9/64
    T:9/64

    Those last four are the same as Sega Tetris.

    While this isn't an exact translation of the assembly code, it produces identical results, and replicates the power-on pattern exactly.

    This is to make up for the advancing garbage. Only correcting a roll of 127 was required to bring the results in line with original Sega Tetris, but this additional adjustment was clearly intended to make it not give out as many stupid snake sequences (and there are some really stupid ones in Sega's power-on pattern).

    It also doesn't loop. It instead backs up the piece seed so it can turn back the clock when you continue.

    This is of course now in my Hebo U.E. fork.
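    The piece-selection logic described above can be sketched as follows. The residue-to-piece mapping is an assumption except that I = 0 and Z = 1 (the two values favoured by the mod-7 bias); placing S at residue 2 is purely for illustration:

```python
I, Z, S = 0, 1, 2  # residue->piece mapping partly assumed (see above)

def bloxeed_piece(result):
    # result: one 32-bit output of the main generator
    r = result & 0x7F        # lower SEVEN bits (Sega Tetris used six)
    piece = r % 7
    if (r & 0x70) == 0x70:   # upper three of the seven bits all set
        if piece in (S, Z):  # turn S and Z into I for this roll
            piece = I
    return piece
```

    Counting over all 128 values of the low seven bits reproduces the probabilities above: I gets 24/128 = 3/16, S and Z get 16/128 = 1/8 each, and the remaining four pieces get 18/128 = 9/64 each.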
     
  10. Anyone have any randomizer algorithms they're interested in simulating? I managed to code up the Master of Blocks 4 algorithm and it seems to be going well (it's an interesting mix of bag and history).
     
  11. I have a request actually! The distribution from my graph method looks weird, so I'd appreciate confirmation. The idea is from TPM, kind of like a conveyor belt:

    Start with 14-bag, however this bag won't be refilled in the normal way. Dealt pieces are placed on a "conveyor belt" for 7 pieces, after which they get added back to the bag. So typically there are 7 pieces in the bag, with each piece generation taking 1 and refilling 1.

    Personally I don't like these "garden of Eden" initial states, so I initialized it by placing a 7-bag in the main bag and another 7-bag on the conveyor. If you wanted to be cute like TGM, you could allocate the initial 14-bag differently, like putting all the S, Z, and O pieces on the conveyor.
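    For anyone who wants to check independently, here's a sketch of the setup as I understand it (initialization as described: one shuffled 7-bag in the main bag, another on the belt):

```python
import random
from collections import deque

def conveyor_randomizer():
    # 14 pieces total: a 7-piece bag we deal from, plus a 7-piece conveyor
    # belt; each dealt piece rides the belt for 7 deals before returning
    bag = list(range(7))
    random.shuffle(bag)
    belt_init = list(range(7))
    random.shuffle(belt_init)
    belt = deque(belt_init)
    while True:
        piece = bag.pop(random.randrange(len(bag)))  # deal from the bag
        belt.append(piece)                           # dealt piece joins the belt
        bag.append(belt.popleft())                   # oldest belt piece refills the bag
        yield piece
```

    One easy sanity check: since a dealt copy rides the belt for 7 deals, the same copy can't reappear within 8 deals, so no piece type can occur 3 times in any window of 8 consecutive deals.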
     
  12. You're right, your implementation has the wrong probabilities, and I was able to confirm this using my bag stack code and an independent simulation. I'm currently staring at your code trying to find where the bug is.

    EDIT: I found the bug comparing my code to yours; it's something I ran into with a different randomiser. It's the part where you reorganise the history and bag so that the last piece is now 0. Since the history isn't strict, we need to take this into account:

    Bad code:
    Code:
    for i in range(7):
        if new_history[i] < roll:
            new_history[i] += 1
    new_history[0] = 0
    
    Good code (well, non-buggy code):
    Code:
    for i in range(7):
        if new_history[i] < roll:
            new_history[i] += 1
        elif new_history[i] == roll:
            new_history[i] = 0
    
    Anyway, the exact drought probabilities can be derived very simply:
    * For 0 to 6 (assuming the earliest is 0), 1/13
    * For 7 onwards, 6/13 * 2/7 * (5/7)^(d - 7).

    This is because for 0 to 6 we don't know where the other piece of the same type is. If we distinguish the two copies of each piece, then a different piece copy gets dealt for each of the next 7 pieces, and by symmetry the chance of each one being the other copy of our piece is 1/13. After that, both copies of the piece are in the 7-bag, so we get a geometric distribution.
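    As a sanity check on these numbers, exact arithmetic confirms the distribution is properly normalised, with P(drought <= 6) coming out to exactly 7/13:

```python
from fractions import Fraction

def drought_prob(d):
    # the distribution derived above: 1/13 for d = 0..6,
    # then 6/13 * 2/7 * (5/7)^(d-7) for d >= 7
    if d <= 6:
        return Fraction(1, 13)
    return Fraction(6, 13) * Fraction(2, 7) * Fraction(5, 7) ** (d - 7)

head = sum(drought_prob(d) for d in range(7))    # mass of the flat part
mass = sum(drought_prob(d) for d in range(300))  # nearly all of the mass
```

    head is exactly 7/13, and the truncated total is within (5/7)^293 of 1, so the geometric tail carries the remaining 6/13 as claimed.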
     
    Last edited: 9 Jul 2022
  13. Right on, that rules! Can't believe I made that mistake; I must have written that code snippet with the 0 case properly handled dozens of times. Mostly this randomizer stuff feels like screaming into the void, so seeing someone else understand it well enough to spot a bug made my whole day. Thank you so much!

    I agree with the probabilities but I'm having trouble following the intuitive argument for the case of 0 to 6. When I calculate it out everything just seems to magically balance out:

    First, there are 13 possibilities of where the 'other' piece could be.
    Case 0, there are 7 possibilities of 1/7 draw chance (when it's in the 7-bag) and then (on the conveyor) 6 possibilities where there is 0 chance. This sums to 7*1/7 + 0 = 1, giving us 1/13.
    Case 1, there are 7 possibilities of 6/7 * 1/7 = 6/49 draw chance, 1 possibility with 1 * 1/7 chance, and 6 possibilities with 0 chance. This sums to 7*6/49 + 1/7 + 0 = 1 giving us 1/13 again.
    Case 2, there are 7 possibilities of (6/7)^2 * 1/7 = 36/343 draw chance, 1 possibility with 1 * 6/7 * 1/7 chance, 1 possibility with 1 * 1/7 chance, and 5 possibilities with 0 chance. This sums to 7*36/343 + 6/49 + 1/7 + 0 = 1 giving us 1/13 again.
    etc.

    I lack formal education and often have funny knowledge gaps, so any direction that can help me fill them in is appreciated. Where would this sort of stuff get developed? Combinatorics or somewhere more basic?
     
  14. It's a symmetry argument (I think it counts as combinatorics?). The idea is that if we just picked an I (for example), then the other I can be either in the 7-bag or in the 6 slots of the conveyor belt that aren't the most recent piece. Now, of those 13 pieces, 7 of them will be dealt in the next 7 pieces, and after that the I piece we just picked will join the other 6 in the 7-bag. The other I can be in any of those 13 slots with equal probability, by symmetry; thus the 1/13 probability.

    ----

    A thought I keep coming back to is how to make balanced randomisers that deal pieces with desired non-uniform long-run probabilities (e.g. giving S and Z half the probability of the other pieces). For obvious reasons, we can do this easily with bag. Meanwhile, trying to modify a history randomiser is much more difficult -- simply putting in the probabilities we want for each roll won't work. This was the motivation behind Malte Skarupke and Jason Frank's algorithm (not designed for Tetris specifically), which basically "throws" pieces down a line (as I like to think of it) -- we move along the line until we hit a piece, then throw it further down a random distance. By choosing appropriate distributions for the throws, we can change the probabilities easily.
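    Here's a minimal sketch of the "throw it down a line" idea (not their exact algorithm; the throw distribution is my own choice): give each piece a position on a line, always deal the nearest piece, then rethrow it forward by a random distance with mean 1/weight. By the renewal theorem, the long-run deal frequency of each piece is then proportional to its weight, and a bounded throw distribution (uniform here, rather than the memoryless exponential) provides the history-like smoothing:

```python
import random

def line_randomizer(weights):
    # each piece sits at a position on an infinite "line"
    pos = {p: random.uniform(0.0, 1.0 / w) for p, w in weights.items()}
    while True:
        piece = min(pos, key=pos.get)  # the next piece we "hit"
        w = weights[piece]
        # rethrow it further down the line; mean distance 1/w, so pieces
        # with twice the weight come up twice as often in the long run
        pos[piece] += random.uniform(0.5 / w, 1.5 / w)
        yield piece

# e.g. S and Z at half the probability of the other pieces:
weights = {'I': 1, 'J': 1, 'L': 1, 'O': 1, 'T': 1, 'S': 0.5, 'Z': 0.5}
```

    In a long simulated sequence with these weights, I comes up almost exactly twice as often as S, as intended.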
     
    Last edited: 24 Mar 2023
  15. I actually made a randomizer like this once. Have an array of "distances" for each piece, initialized with a random float in range [0,7). Then on each step, decrease all of the distances by 1, while looking for the lowest. Deal the piece with the lowest distance, then pull its new distance from the Normal Distribution, with mean and variance both 7. Never tried applying this to unbalanced randomizers, though.
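    If I've understood that description right, it's something like this sketch (piece indices 0-6; the redraw is Normal with mean 7 and variance 7, i.e. standard deviation sqrt(7)):

```python
import random

def distance_randomizer(n=7):
    # one "distance" per piece, initialised uniformly in [0, 7)
    dist = [random.uniform(0, 7) for _ in range(n)]
    while True:
        # decrease every distance by 1 and deal the piece with the lowest
        for i in range(n):
            dist[i] -= 1
        piece = min(range(n), key=dist.__getitem__)
        # redraw the dealt piece's distance: Normal, mean 7, variance 7
        dist[piece] = random.gauss(7, 7 ** 0.5)
        yield piece
```

    Since the redrawn distance averages 7 and everything drops by 1 per step, each piece gets dealt about once every 7 steps, so the long-run frequencies stay very close to uniform.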

    Looking back at my code for this, it feels almost surreal that I wrote so many of these randomizers in C. Learning Ruby for just over a year has spoiled me heavily...

    A wasteful and wonky possibility: store 12 counters, one each for S and Z and two for each other piece. This has drawbacks, such as not doing very much to stop the two copies of each other piece from appearing next to each other. It also doesn't scale well at all for the moments when you want a more nuanced piece bias.

    I am going to have to do some reading of this algo you linked. It looks interesting!
     
  16. I found this thread super helpful and interesting, so I wanted to share some experiments I've run myself. This is a wait time plot created from 100,000 iterations each of a few different randomizers. The randomizers compared are memoryless, a 7-bag generator, a history-based generator modeled after TGM2, and two I came up with. This plot is similar to the one colour_thief shared showing drought intervals, but using my own code and some different randomizers.

    WaitTimes.png

    The two "new" methods I came up with are "choice of k" and "multiplicative weights". These are both designed to be somewhere between the bag method and the history method in terms of behavior, but with less state than something like TGM3's randomizer. Based on the plots, they seem to accomplish this goal, but I've not yet tried them in game. These are probably easiest to explain by sharing the python code:

    Code:
    import numpy as np

    def choice_of_k_generator(N, k):
        # time since each of the N shapes was last generated
        last_generated = np.zeros(N)
        while True:
            # sample k candidate shapes, keep the least recently dealt one
            choices = np.random.randint(0, N, k)
            least_recent = np.argmax(last_generated[choices])
            shape = choices[least_recent]
            last_generated += 1
            last_generated[shape] = 0
            yield shape + 1

    def multiplicative_weights_generator(N, alpha):
        # time since each of the N shapes was last generated
        last_generated = np.zeros(N)
        while True:
            # weight each shape by alpha^(time since last dealt), normalise
            weights = alpha ** last_generated
            probabilities = weights / np.sum(weights)
            shape = np.random.choice(N, p=probabilities)
            last_generated += 1
            last_generated[shape] = 0
            yield shape + 1
    
    The "choice of k" method is inspired by M. Mitzenmacher's paper "The Power of Two Choices in Randomized Load Balancing", where this idea was used for randomized load balancing. I think the "multiplicative weights" method is similar to methods already discussed here, but it's also very similar to an algorithm of that name used in machine learning, hence the name. In the plots I'm using k=4 and alpha=2.

    I'd be happy to share the code I used to make this plot in a github gist or similar if anyone wants it. I'd also be interested to see if these methods can be plugged into colour_thief's analysis framework. Both these methods use internal counts for the shapes, so I'm not sure if this invisible state excludes the method from the kind of analysis colour_thief is doing.
     
