# Category Archives: Math

## Math and Campus Maps

It’s been a few days since I’ve posted as I’ve been pretty busy with frosh week events here at Waterloo. Since I’m going into software engineering, I’m technically part of both the engineering faculty and math faculty and therefore participate in events from both faculties. At this point, all “softies” have earned their engineering hard hats and pink math ties:

The other night, one of the Comp Sci guys on my floor came to my room with a pretty cool math problem that would have fit well on Project Euler. It asks how many downward-pointing triangles are present in a pyramid made of triangles. These downward-pointing triangles can also be built from multiple smaller triangles; for a 4-layer pyramid the answer is 7, because one larger downward triangle is formed out of several small ones:

Arriving at a computer-friendly answer is very easy, requiring only a simple nested for-loop. I also found a slightly more elegant solution with only a single for-loop, but this too was fairly easy to come by. The hard part began when a few more CS/Eng students arrived and asked if we could find a single “calculator-friendly” equation that would work for all cases, i.e. without any looping. This was tougher.

We worked for several hours that night and didn’t really produce any solid results. I decided to try again earlier this morning and I’ve finally come up with an extremely ugly solution for cases where the number of layers (n) is greater than 4:

I’m still not satisfied, but I’m putting it away because I’ve already put way too much time into it.
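For anyone who wants to play with the problem, the loop approach can be sketched in Java as below. This is my reconstruction, not the code from that night, and the closed form included is one known compact formula, not necessarily the ugly one mentioned above:

```java
public class DownwardTriangles {
    // Count downward-pointing triangles in a triangular grid of n layers.
    // For each size k (measured in unit triangles per side), there are
    // (n - 2k + 1)(n - 2k + 2) / 2 inverted triangles, as long as 2k <= n.
    static long countLoop(int n) {
        long total = 0;
        for (int k = 1; 2 * k <= n; k++) {
            total += (long) (n - 2 * k + 1) * (n - 2 * k + 2) / 2;
        }
        return total;
    }

    // A loop-free, "calculator-friendly" form: floor(n(n+2)(2n-1)/24).
    // Integer division handles the floor for positive n.
    static long countClosedForm(int n) {
        return (long) n * (n + 2) * (2 * n - 1) / 24;
    }

    public static void main(String[] args) {
        System.out.println(countLoop(4));       // 7, matching the post
        System.out.println(countClosedForm(4)); // 7
    }
}
```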

The second item I mentioned in my title is an idea for an app I’ve started working on. I used my iPhone + Google Maps to get around campus for the first few days, but this required some looking back and forth between the phone and the actual campus map. I think it would be handy to have a campus map superimposed onto Google Maps. In addition, this map could take building names as start/end locations and plot the optimal route. I’m hoping to have this done by the time class starts on Monday. 😛
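The routing part can be prototyped as a plain shortest-path search over a graph of buildings. Here’s a minimal sketch in Java, assuming hypothetical building codes and walkway connections; it uses BFS on an unweighted graph, so a real version would want distances as edge weights:

```java
import java.util.*;

public class CampusRouter {
    // Adjacency list keyed by building code (the connections are made up).
    static Map<String, List<String>> campus = new HashMap<>();

    static void connect(String a, String b) {
        campus.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
        campus.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
    }

    // Breadth-first search: fewest-hops route between two buildings.
    static List<String> route(String start, String end) {
        Map<String, String> prev = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        prev.put(start, null);
        while (!queue.isEmpty()) {
            String cur = queue.poll();
            if (cur.equals(end)) break;
            for (String next : campus.getOrDefault(cur, List.of())) {
                if (!prev.containsKey(next)) {
                    prev.put(next, cur);
                    queue.add(next);
                }
            }
        }
        if (!prev.containsKey(end)) return List.of(); // no path found
        LinkedList<String> path = new LinkedList<>();
        for (String at = end; at != null; at = prev.get(at)) path.addFirst(at);
        return path;
    }

    public static void main(String[] args) {
        connect("MC", "DC");
        connect("DC", "E2");
        connect("E2", "CPH");
        System.out.println(route("MC", "CPH")); // [MC, DC, E2, CPH]
    }
}
```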

T

## Rubik’s Cube Map – Part 2

Earlier in the week I mentioned that I was working on “cube mapping” one of the many ways to solve a Rubik’s Cube. I’ve finished rendering out the final video of the process, starting with a scrambled cube and ending with a solved one. The movement in the cube is admittedly a bit harder to follow than I expected it to be, but the results were still quite interesting.

I’ve uploaded two versions of the video. One’s rendered at 24 fps and the other at 8 fps:

Rubik’s Cube Map Fast (24 fps)

Rubik’s Cube Map Slow (8 fps)

T

## Rubik’s Cube Map – Part 1

As mentioned in my Chapters post, I’ve recently picked up a new classic 3x3x3 Rubik’s Cube. I’ve relearned a fairly simple solving method that only requires 6 unique move types to complete the entire cube, although it does take a lot longer than more advanced methods. Out of curiosity, I decided to try mapping the entire solving process into a series of images. The most logical way to do this was with a cube map, i.e. “unfolding” the cube into 6 connected squares. A solved cube would look something like this:

There were really only two options: solve the cube step by step and take 6 photos after each turn, or build a digital model in Blender3D. I chose the latter, and built a digital Rubik’s Cube from 27 unique cube objects. Animating the cube was simple; all I had to do was move the cubes I needed and insert them into new keyframes. I didn’t want any sort of rotational movement in the cube, just changing colors, so the keyframes are all one frame apart.

The first step was to “mess up” the digital cube. I rendered out an animation of this process, which is uploaded here:

http://vimeo.com/28371240

I realized quickly that moving the cube around digitally wasn’t going to be as easy as using the physical cube. To make the process easier, I decided to solve the physical cube simultaneously and make sure the digital model matched. This worked very well, until I messed up on the physical cube and couldn’t undo the error without bringing the two models out of sync. I ended up solving the physical cube completely and then following the steps from the digital cube’s animation to get back to where I needed to be. This was also around the same time when I began to wonder how good of an idea this whole project was. 😛

Eventually I got all 167 frames keyed in, including the scrambling process and a small delay after it. It was already pretty late, so I decided to leave things for the night and do the rendering and compositing today instead.

I ran the rendering process this morning, and I’ve now got a nice stack of 1,002 images that need to be merged into cube maps. Unfortunately I have no idea how to do this, aside from manually making each cube map. The Photoshop batch job system should be able to handle the task, I think, but I don’t know enough about it yet to make it do what I need. I’m looking into this now.
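The compositing could also be scripted instead of done as a Photoshop batch. Since 167 frames × 6 faces = 1,002 renders, each frame yields one cube map. A sketch in Java using `BufferedImage`, assuming equal-sized face images and a 4x3 “unfolded cross” layout; the face order and layout here are my guesses, not necessarily the ones used in the post:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class CubeMapComposer {
    // Paste six equal-sized face images into a 4x3 unfolded-cross layout:
    //        [U]
    //     [L][F][R][B]
    //        [D]
    // Face order assumed: up, left, front, right, back, down (hypothetical).
    static BufferedImage compose(BufferedImage[] faces) {
        int s = faces[0].getWidth(); // faces assumed square and equal-sized
        BufferedImage map = new BufferedImage(4 * s, 3 * s, BufferedImage.TYPE_INT_RGB);
        int[][] cells = {{1, 0}, {0, 1}, {1, 1}, {2, 1}, {3, 1}, {1, 2}}; // (col,row) per face
        Graphics2D g = map.createGraphics();
        for (int i = 0; i < 6; i++) {
            g.drawImage(faces[i], cells[i][0] * s, cells[i][1] * s, null);
        }
        g.dispose();
        return map;
    }

    public static void main(String[] args) {
        // Dummy 512x512 faces stand in for the rendered frames.
        BufferedImage[] faces = new BufferedImage[6];
        for (int i = 0; i < 6; i++) faces[i] = new BufferedImage(512, 512, BufferedImage.TYPE_INT_RGB);
        BufferedImage map = compose(faces);
        System.out.println(map.getWidth() + "x" + map.getHeight()); // 2048x1536
    }
}
```

Looping that over the 167 frames with `ImageIO.read`/`ImageIO.write` would produce the whole stack of cube maps in one run.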

T

## Monopoly Math Theory

Like I said in the last post, I spent some time working out the actual probabilities for each square in Monopoly. First things first: Chance/Chest cards suck.

About half of the Chance/Chest cards can be classified as “move cards”, as they instruct the player to move to a square. Let’s say you draw the “Go to jail” card from Chance and move to jail. That card now goes to the bottom of the stack, so the probability of anyone drawing it is 0 for the next 16 Chance draws (there are 17 Chance cards) and 1 on the 17th draw. The landing probabilities are therefore affected by the initial order of the C/C cards, since it’s common for not all the cards to be drawn during shorter games.

The number of players also affects things: obviously more players go through the C/C cards faster, but it also complicates the “get out of jail” cards. I never pay the \$50 to leave jail, so I either need a “get out of jail” card or a doubles roll. If two other players already hold the freedom cards, then when I leave jail it’ll be on a doubles roll. Since doubles always total an even number, I can’t land on the squares 3, 5, 7, 9, or 11 spaces past jail on a jail-exit move. Though the effects are probably negligible in the grand scheme of things, they’re definitely present.

The only other factor, aside from standard two-dice probabilities, is that rolling three doubles in a row immediately moves the player to jail.

With all these factors considered, solving things gets pretty complex. Example: You’re on Water Works, and roll double 4s. You’re now on Chance. This Chance turns out to be a “Go Back 3 Spaces” card, putting you on Community Chest. The Community Chest card is an “Advance to Go” card. Since you rolled double 4s, you get to roll again; doing so gives you double 1s, putting you on Community Chest. Go To Jail! By the end of it, you’ve visited four different squares on a single turn. On the other hand, the same outcome could have occurred if double 1s were rolled from Water Works, as this would have put the player on the “Go To Jail” square.

Such complex move sequences are very rare, but they are certainly possible. Therefore, instead of trying to come up with an equation, I decided to calculate the potential pathways to each square and the probability of said pathways. I also took some liberties with the C/C cards and simply treated them as a random draw. I’ve attached the final Excel sheet as usual: monopoly_final_paths
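As a sanity check on pathway calculations like these, the board can also be brute-forced with a Monte Carlo simulation. The sketch below is heavily simplified (no Chance/Chest moves at all, no jail-exit mechanics), so it only approximates the real calculation, but it does model the Go To Jail square and the triple-doubles rule:

```java
import java.util.Random;

public class MonopolySim {
    // Simplified landing-frequency simulation: two dice, the Go To Jail
    // square (index 30 sends you to jail at index 10), and the
    // three-doubles-in-a-row rule. Chance/Chest cards are ignored.
    static long[] simulate(long rolls, long seed) {
        Random rng = new Random(seed);
        long[] landings = new long[40];
        int pos = 0;
        int doublesRun = 0;
        for (long r = 0; r < rolls; r++) {
            int d1 = rng.nextInt(6) + 1;
            int d2 = rng.nextInt(6) + 1;
            if (d1 == d2 && ++doublesRun == 3) {
                pos = 10;        // third consecutive double: straight to jail
                doublesRun = 0;
            } else {
                if (d1 != d2) doublesRun = 0;
                pos = (pos + d1 + d2) % 40;
                if (pos == 30) pos = 10; // Go To Jail square
            }
            landings[pos]++;
        }
        return landings;
    }

    public static void main(String[] args) {
        long[] landings = simulate(1_000_000, 2011);
        for (int sq = 0; sq < 40; sq++)
            System.out.printf("square %2d: %d%n", sq, landings[sq]);
    }
}
```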

And, a screenshot of the final ordering and game board composition:

There are definitely some differences between the theory and the experimental results, but the general trends are fairly similar. Railways and orange still appear as the most popular properties. There’s a lot more “smoothness” in the theory though, especially in the middle-range properties. The one last thing to do with this data would be to factor in the costs of buying and rent, to see which properties are the best price-wise. I’m sick of this game and data though, so maybe another day. 🙂

T

## Monopoly Math

So it was family game night tonight, and we decided to play Monopoly. Because the game can take a while, we generally set a time limit, something between 1.5 to 3 hours depending on how late we start the game. My strategy for playing is pretty straightforward: collect as many railways and utilities as possible, and go for the orange, yellow, and dark blue property groups. It tends to work pretty well for me, so I decided to do some data recording this game. I tallied up every square landed on in Excel, including Chance/Chest cards like “Go directly to jail”, “Advance to Boardwalk”, “Advance to Go”, and so on. We don’t play with any custom rules that affect moving, so the data reflects standard Monopoly procedures.

Unfortunately the game only lasted around 1 hour 20 minutes (I had a lucky start & early win), so the dataset isn’t very big. I wasn’t about to ask for a rematch, though, and we probably won’t play Monopoly again for a week or two, so I decided to work with what I had. Here’s the full organized data and analysis; click for full-size of course:

Haha, go Orange and Railways! I knew I was backing the right horse. 🙂 I think the high frequency of orange landings comes from the fact that orange sits just past jail. Some of the most probable rolls on two dice (6, 8, and 9) will land you on orange from jail. There are also Chance/Chest cards that take the player to the pink squares right before orange, and to the railway right before orange. Railways/corners are obviously going to be popular because of the aforementioned cards, and because they’re well spread around the board. The same goes for the Chance/Chest squares, which combined make up 6/40, or 15%, of the game board. Also, I find it funny that the Tax squares received the fewest hits.
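That reasoning is easy to check by enumerating the 36 equally likely dice outcomes: from jail (square 10), rolls of 6, 8, and 9 reach the orange squares at 16, 18, and 19, which is 5 + 5 + 4 = 14 of the 36 outcomes, or about 39%. A quick sketch:

```java
public class DiceProbabilities {
    // Number of ways to roll each sum 2..12 on two six-sided dice,
    // out of 36 equally likely (d1, d2) outcomes.
    static int[] ways() {
        int[] w = new int[13];
        for (int d1 = 1; d1 <= 6; d1++)
            for (int d2 = 1; d2 <= 6; d2++)
                w[d1 + d2]++;
        return w;
    }

    public static void main(String[] args) {
        int[] w = ways();
        // From jail (square 10), orange is squares 16, 18, 19: rolls of 6, 8, 9.
        System.out.println("Ways to reach orange from jail: " + (w[6] + w[8] + w[9]) + "/36");
        // prints "Ways to reach orange from jail: 14/36"
    }
}
```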

There’s a lot more I could do with this as far as comparison to theory goes, but the problem is calculating the theoretical probabilities for each square. Since there are a lot of different routes to each square, with the possibility of going to jail, getting a Chance/Chest card that moves the player, etc., things get complicated. I do intend to sit down and work through the math later tonight/tomorrow, so I’ll post again with an update at that point. And, as always, if anyone wants the data to play with: Monopoly.xlsx. The first sheet is the raw data, and the second is the analysis page shown above.

T

## Array Sort

Earlier in the week I was working on something completely unrelated to programming (sorting some school work actually) and I had an idea about sorting integers using array indices. I finally got around to implementing and benchmarking it in Java this morning.

The process is fairly simple, though it might be kind of hard to explain. To start with, I’m going to describe a simplified version that uses a bit more array space than my optimized version.

Imagine you have an array of integers, something like `int myArray[] = {5,9,3,8,4,10,2,8,6,4};` for example. The first thing the code does is loop through the whole array once to determine the maximum value, in this case 10. Using this value, we create a second empty array: `int sorted[] = new int[max+1];`

Here’s where the trick comes in. Rather than using the array’s contents to sort the data, we use the array’s indices. We loop through the original myArray and for each value we encounter, we do `sorted[value]++;`

Once we’re done looping through myArray, we’re left with:

```
sorted[0]=0   sorted[1]=0   sorted[2]=1   sorted[3]=1   sorted[4]=2
sorted[5]=1   sorted[6]=1   sorted[7]=0   sorted[8]=2   sorted[9]=1
sorted[10]=1
```

Since array indices are inherently in order, all we need to do now is loop through the array one more time and print out the indices that have non-zero values. If an index’s count is greater than 1, we print that index out that many times. This gives us 2,3,4,4,5,6,8,8,9,10. And look, it’s sorted!
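The scheme described above is essentially what the literature calls counting sort. Here’s a sketch of the full three-pass version in Java; this is my reconstruction, not the benchmarked code, and it assumes non-negative integers:

```java
public class ArraySort {
    // Counting sort: use array indices as the values themselves.
    // Assumes all inputs are non-negative.
    static int[] arraySort(int[] input) {
        int max = 0;
        for (int v : input) if (v > max) max = v; // pass 1: find the maximum
        int[] counts = new int[max + 1];
        for (int v : input) counts[v]++;          // pass 2: tally each value
        int[] result = new int[input.length];
        int i = 0;
        for (int v = 0; v <= max; v++)            // pass 3: read counts back in index order
            for (int c = 0; c < counts[v]; c++)
                result[i++] = v;
        return result;
    }

    public static void main(String[] args) {
        int[] data = {5, 9, 3, 8, 4, 10, 2, 8, 6, 4};
        System.out.println(java.util.Arrays.toString(arraySort(data)));
        // [2, 3, 4, 4, 5, 6, 8, 8, 9, 10]
    }
}
```

The running time is O(n + max) rather than QuickSort’s O(n log n), at the cost of O(max) extra memory, which lines up with the benchmark behaviour described below for large arrays of bounded values.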

With a small array like this, QuickSort is much faster. However, with datasets of around 50,000 elements or more, the “ArraySort” method starts looking very nice. I recorded and graphed some running times at intervals of 100,000, between 100,000 and 3,000,000 elements. Each array contains n elements generated by a for loop, with each element set to `(int) (Math.random() * 1000000);`

Here’s the resulting graph:

I couldn’t believe the results. I changed the ArraySort program to print out the sorted array at the end of the run, just to make sure it was actually sorting, and it was. I guess the difference here is that 1) ArraySort only uses a constant 3 loops and 2) QuickSort is recursive, so with more things to sort, more method calls have to be made.

I plan on trying this again using C++ instead of Java later today to see how the performance compares.

T

## A Penny for Your Thoughts?

I’ve been saving all my cent coins in a jar for the last few months, with the purpose of counting up the years on them to see what sort of distribution there is. Today I counted my jar of 185 pennies, and the results were not entirely what I expected. I should mention that these are Canadian coins, as I am Canadian after all.

I’d gone into this assuming that 2010 would have the most, then 2009, then 2008, and so on. Some sort of exponential function would make sense. I wasn’t sure about 2011 because I don’t know when the mints make coins, or if they’re consistently minted throughout the year or if there’s one day/week/month where the penny machines get cranked up to the 11 setting.

To start with, here’s the raw data in case anyone else wants to take a look: raw_penny_data. In addition to the year:count data it also includes a cumulative count column and a weighted average column. Feel free to do whatever you want with it, citing it if you use it somewhere else of course.

And here’s the resulting graph from Excel. I’ve added labels in Photoshop to some of the relative peaks:

I knew 2010 would be high, but holy shit that’s a big peak. And what’s up with 1994? I’ve heard before that after 5 years, 50% of the pennies from a given year have dropped out of circulation. 1994 has held in there pretty well, and 1995 is right up there too. I wonder if there was a surge in penny production over that timeframe? Also, 1987 looks relatively large when compared with the other counts around it. The overall trend looks exponential though, as predicted.
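That 5-year rule of thumb corresponds to an exponential with a 5-year half-life, i.e. an expected surviving fraction of 0.5^(age/5); comparing the observed counts against that curve is what makes spikes like 1994 stand out. A quick sketch, assuming the rule holds exactly:

```java
public class PennyDecay {
    // Expected surviving fraction of pennies of a given age, assuming a
    // 5-year half-life (the "50% gone after 5 years" rule of thumb).
    static double survivingFraction(int ageYears) {
        return Math.pow(0.5, ageYears / 5.0);
    }

    public static void main(String[] args) {
        // 17 years back from 2011 is 1994, the surprising spike year.
        for (int age : new int[]{0, 5, 10, 17})
            System.out.printf("age %2d: %.3f%n", age, survivingFraction(age));
    }
}
```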

After seeing the results, I think I’m going to continue saving my pennies and add the new data at the end of the year, to see what a larger sample size will produce. I’m curious whether the 1994 spike will get absorbed with a larger sample size, or if there’s actually some reason for there to be more pennies from that era.

Also, not relating to the data, I found a single penny that completely lacks a date. I’ve never seen that before. It looks like a 1995–2005-ish penny based on the level of wear and tear, and with the exception of the missing date it’s identical to all the other pennies. I couldn’t find any information on this type of penny either; maybe it’s some sort of defect? At any rate, it’s not a common feature on Canadian coins, so it might be worth something. I intend to hang onto this dateless penny.

Here’s a photo of the full penny pile:

And the dateless penny. The date is normally right above the “CA” in “CANADA”, but is nowhere to be found on this coin:

That’s all for now.

T