Hosting An Advanced Yii2 Application on Heroku

Yii2 is still not production ready, but I really wanted to take a crack at getting a test project up and running on Heroku, my host of choice. I think I've gotten a pretty good setup for the advanced application, but this is my first crack at it. I'll keep the blog updated with things I learn.

What is the Advanced Application Template?

This looks advanced.

Good question. Yii2 comes with a few application templates: basic and advanced. The basic template is much like Yii 1.x; you'll find it very familiar. The advanced template is a little bit different in that it separates the frontend and the backend components of your site. (Think of the frontend as your site, and the backend as the admin area.) At first I didn't like this method because it felt like there were too many unnecessary configuration files, but I've come to see it as both powerful and flexible. Use the advanced template, you'll like it.

Here's the problem though: in the docs they have you set up your local server in the following way:

  • for frontend /path/to/yii-application/frontend/web/ and using the URL http://frontend/

  • for backend /path/to/yii-application/backend/web/ and using the URL http://backend/

This is all well and good on your local machine, but when it comes time to push it to a virtual host, you are going to run into a few problems because you won't be able to do that. We're going to get around that by using the .htaccess file. We'll put "frontend" at the root and "backend" at "/admin".


Now, I'm no expert in .htaccess rules, but there are tons of resources out there that will help. The htaccess tester was super handy in trying to figure out all my stupid mistakes. That's just one resource, albeit my most used one, there are plenty of others that are a short Google search away. 

Let me first show you my htaccess file, and then we'll walk through it.

<IfModule mod_rewrite.c>
Options +FollowSymlinks
RewriteEngine On

# deal with admin first
RewriteCond %{REQUEST_URI} ^/(admin)
RewriteRule ^admin/assets/(.*)$ backend/web/assets/$1 [L]
RewriteRule ^admin/css/(.*)$ backend/web/css/$1 [L]

RewriteCond %{REQUEST_URI} !^/backend/web/(assets|css)/
RewriteCond %{REQUEST_URI} ^/(admin)
RewriteRule ^.*$ backend/web/index.php [L]

RewriteCond %{REQUEST_URI} ^/(assets|css)
RewriteRule ^assets/(.*)$ frontend/web/assets/$1 [L]
RewriteRule ^css/(.*)$ frontend/web/css/$1 [L]

RewriteCond %{REQUEST_URI} !^/(frontend|backend)/web/(assets|css)/
RewriteCond %{REQUEST_URI} !index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^.*$ frontend/web/index.php
</IfModule>


The IfModule wrapper just tells Apache to turn the rewrite engine on. The real important stuff starts after the "deal with admin first" comment. What we want to do is tell the server that anytime "/admin" is requested, we really are requesting "/backend/web/". There are three scenarios that we'll need to cover:

  1. web/assets
  2. web/css
  3. everything else.

The first rewrite condition tests to see if the Request URI starts with "/admin". If it does, those two rules rewrite "admin/assets" and "admin/css" to "backend/web/assets" and "backend/web/css". This ensures that our admin site is pulling the correct assets. We'll talk about how to get Yii to request those URLs in a moment, for now, let's make sure the URLs are going to the correct place on the server.

The next set of rules makes sure that the Request URI isn't a backend asset URL and that it starts with "/admin". If those conditions are met, it reroutes the request to "backend/web/index.php", which is the Yii bootstrap file for the backend. Now we're all set up so that any request we make to "/admin" gets transferred to Yii. Off to a good start.


Now to the frontend. We want the frontend to live at the root domain. Again, the first condition checks to see if it's an asset URL, which will get rewritten to "frontend/web/assets" or "frontend/web/css" if it is. Then we move on to the "everything else" category. There are a lot of conditions there to make sure we haven't already rewritten the URL to either a frontend or backend asset, and that it doesn't contain index.php. If it passes all of those tests, we rewrite it to "frontend/web/index.php", which will pass the request off to Yii.

What about the [L] Flag?


Yeah, so this is one of the things I learned when dealing with htaccess. The [L] flag does indeed prevent any more rules from being executed, but only for that iteration. What I mean to say is this: if a rule is matched and an [L] flag encountered, no more rules will be processed and a new, rewritten URI will be in effect. This new, rewritten URI will then go back through all the rules. So the [L] flag does not mean "stop forever", it means "stop for this cycle". That's why we need to put conditions in to make sure that we don't either a) end up in an infinite loop or b) rewrite the URIs in a way that we aren't expecting. Confusing? I was certainly confused as to why it wasn't working, but now I know that it was working, I was just ignorant.

Getting Yii to Understand What the F Is Going On

Poor Yii. We're really confusing it at this point, so let's help it out. After a lot of digging into the core Yii code, I've found a fairly simple way to make it work without mucking about with the core Yii (which means you can still use Composer).

The way we're going to do this is to hijack the Request component and inject some of our own functions. Below you'll find the code, which we'll go through.


namespace common\components;

class Request extends \yii\web\Request {
    public $web;
    public $adminUrl;

    public function getBaseUrl(){
        return str_replace($this->web, "", parent::getBaseUrl()) . $this->adminUrl;
    }

    /*
        If you don't have this function, the admin site will 404 if you leave off
        the trailing slash.

        Wouldn't work: /admin
        Would work: /admin/

        Using this function, both will work.
    */
    public function resolvePathInfo(){
        if($this->getUrl() === $this->adminUrl){
            return "";
        }
        return parent::resolvePathInfo();
    }
}

Put this in your "common/components" folder and name it "Request.php". The component has two methods: "getBaseUrl" and "resolvePathInfo". The getBaseUrl method takes the URL from the parent class and replaces the web variable with an empty string. In practice, this means that "frontend/web/assets/test.png" gets turned into "assets/test.png", which is what we want. Our htaccess file will take that URL and transform it *back* into "frontend/web/assets/test.png". Lots of shenanigans going on, but it makes it work and it makes it pretty.

The second method takes care of a curious phenomenon whereby if you request "/admin", Yii explodes. If you request "/admin/", you get to where you hope to be. I dug into the resolvePathInfo method on the parent and found that it was returning false instead of a blank string. This little bootstrap function will take care of that.

Installing the Component

Installation is pretty straightforward. In the main.php file of your frontend, add the following code to the components section:

'request' => [
    'class' => 'common\components\Request',
    'web' => '/frontend/web'
],

You'll want your 'web' to point to the location of your frontend web folder, which is "frontend/web" by default.

In the main.php file of your backend, add the following code to your components section:

'request' => [
    'class' => 'common\components\Request',
    'web' => '/backend/web',
    'adminUrl' => '/admin'
],

You can change the 'adminUrl' to whatever you want it to be. If you do change it, you *must* change it in your htaccess file too, but that's super easy.

Dats All

Now you can push your repository up to Heroku and you'll have access to both your front and backends on the same domain. Like I said, this is my first crack at this and I may come across better ways to do it, or ways I've done it horribly. Please feel free to leave any suggestions in the comments.

Automatic Parameter Binding to Post Variables in Yii2

I've been working a lot with the alpha of Yii2 recently and have been loving it, but have run into a few issues. I'm going to try to post them here as a help to those of you who may have the same issues down the line.

Automatic Parameter Binding

What is automatic parameter binding? If you read the Yii 1.1 guide to Controllers, you'll see that if you define an action with a parameter of $id, and then add a querystring of ?id=1 to your request, that parameter will automatically be bound to the $id parameter. It's a nice convenience. For some reason, Yii2 only automatically binds GET variables, not POST. I wanted to restrict certain actions to the POST verb, pass data through, and have it automatically bound. But it only binds GET vars. Dang.

Fixing It.

It's super easy, actually. All you have to do is intercept the runAction method in your controller of choice and do a little tweaking. Here's the code: 

public function runAction($id, $params = array()){
    $params = array_merge($_POST, $params);
    return parent::runAction($id, $params);
}

You can either add this to your particular controller that you'd like to automatically bind POST vars to, or you can create a base controller and extend all your other controllers off of that.


Happy coding. 

Encrypting and Encoding Information in URLs with PHP



 You can skip down to "Guidelines" if you don't care why I needed encrypted URLs. Similarly, if you only care about the code, you can get it on github here.

Mantis is coming along at a pretty slow pace, partially because we have a stakeholder problem. What I mean to say is that the business owners (who decide to use Mantis) really want the information and the accuracy that comes with it. The workers, however, don't want to be bothered to keep their time, they'd rather just continue guesstimating. And in many cases the person that's in Mantis day-to-day is not the business owner but more of an office admin, and therefore has less invested in ensuring that the workers stay compliant.

Bosses want it, workers can't be bothered, admins don't care so much.

We're trying a few different things to address this issue, one of which is delivering daily reports to the business owner detailing yesterday's hours worked, payroll, etc. Push the information to them and get them addicted, right? I remembered reading Thomas Fuchs's write-up about how they did this for their time-tracking tool for freelancers, Freckle. I decided to take the same route for Mantis. We are going to be sending them daily charts as images and needed a way to store secure information in the URL as opposed to the database. Thomas open sourced his code for how he did it in Ruby but since Mantis is PHP, I set off to port it over (not knowing any Ruby, mind you). It was an exercise partially for utility, and partially just to learn a thing or two.


I wanted basically the same thing out of my URLs that Thomas and Amy had for Freckle:

  • clean: only alphanumeric characters. Looks prettier that way.
  • no vowels: prevents curse words that can get caught in spam filters.
  • securely encrypted.

Let the Porty Begin

The first thing I needed to do was set up my table of allowable characters, which are the letters a-z with no vowels, and digits 0-9. This leaves us with exactly 32 characters, which is perfect for what we'll need later down the road. Below is the code used to set up the "table". Vowels are now numbers, extra numbers go at the end.

static $table = "1bcd2fgh3jklmn4pqrstAvwxyz567890";

These are the characters from which we'll be forming our "pretty URL". That's the easy part, the tricky part is condensing a character set that is much larger than 32 (the encrypted string) down to a mere 32 characters. Please note that I'll be mostly covering the encoding, as opposed to the encrypting. You can read lots of better, more in-depth articles about encryption.

Fiddling with Bits

Here's where it actually gets fun. (Yes, truly.) Once I wrapped my head around the basic premise, it actually makes a lot of sense and is particularly clever. Hats off to Thomas. We're going to loop through our string, character by character and do the following:

  • convert the character to its ASCII representation (a base 10 digit)
  • convert the base 10 digit into its binary representation (0's and 1's)
  • pad the binary representation with 0's, ensuring its length is exactly 8
  • shove this 8 character string of 1's and 0's on the end of a master string
  • go to the next character

Now when we're done, we're left with a master string of 1's and 0's that is 8*[Length of String To Encode] characters long. Put another way: 8 bits for every character in your original string. 
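The master-string step, sketched in Python just to make the bits concrete (the actual project is PHP; this is only an illustration):

```python
# build the master bit string for "abc": 8 zero-padded bits per character
bits = "".join(format(ord(c), "08b") for c in "abc")

print(bits)       # 011000010110001001100011
print(len(bits))  # 24, i.e. 8 * len("abc")
```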

Below is me doing my best to figure out what *exactly* I was trying to accomplish. Pseudo-code is often very helpful. I suggest you try to understand the concept as well, as it's exceedingly interesting.

Trying to think. It's hard sometimes. 

Hang On, Friend.

You may be asking yourself at this point: WTF, why are you not using proper integers to keep track of all those bits? You should be bit shifting and generally doing a better job. Fair point. Unfortunately, PHP has a max integer value of either 2 billion or 9E18, depending on your version of PHP and your OS. (Ruby, seemingly, has no limit.) Either way, both of those values are quite restrictive when you're no longer using the bits to keep track of numbers, as we're not. And therein lies the rub. While it may seem obscene to use a string to keep track of "bits", that's what has to be done, as far as I can tell.

Suggestions welcome.

The Magic of 31

Remember when I said, regarding our allowable characters, that 32 was the perfect number? That's because 31 is the highest number that can be represented in 5 bits (0b11111 = 31), meaning that we can fit 32 distinct numbers into 5 bits (0-31). What this means for us, practically, is that we are now going to take that long string of bits, originally 8 per character, and chop them up into 5 bit chunks. Then we'll convert those 5 bits *back* into a decimal, knowing that it has to result in a number between 0 and 31. After we have it as a base 10 digit (a decimal), we simply go to our table and pick the character that lies at that index. If the decimal is "0", we select the 0th element from our table: "1". If our decimal is "1", then we select the 1st element from our table: "b". (Recalling, of course, that arrays are 0-based, meaning that the first element has an index of 0. The second, 1. Etc.)

Perhaps a picture would help. The example below assumes we are encoding a string "abcde", which yields the encoded string "mfrggzdf". Pretty neat.



You Cheated

It's true, my example is contrived. I picked a string that was exactly five characters long, such that 5 (characters) * 8 (bits) = 40. Forty is a great number because it can be split evenly into 8 chunks of 5 or 5 chunks of 8. What if your string is *six* characters long, leaving you with a 48 bit string, a number not evenly divisible by 5? Simple, we just pad it to the right with 0's until we reach 50, a number that is happily divisible by 5. When we decode, we work from left to right and the extraneous 0's are simply ignored. Bingo.
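To tie the whole scheme together, here's a rough Python sketch of the encode/decode pair described above. (The real implementation is PHP and lives in the repo; this covers only the encoding, not the encryption.)

```python
TABLE = "1bcd2fgh3jklmn4pqrstAvwxyz567890"  # our 32 allowable characters

def encode(s):
    # 8 bits per input character, shoved onto one master bit string
    bits = "".join(format(ord(c), "08b") for c in s)
    # pad to the right with 0's so the length divides evenly by 5
    if len(bits) % 5:
        bits += "0" * (5 - len(bits) % 5)
    # each 5-bit chunk is a number 0-31, used as an index into TABLE
    return "".join(TABLE[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

def decode(s):
    # 5 bits per encoded character
    bits = "".join(format(TABLE.index(c), "05b") for c in s)
    # read 8 bits at a time; any leftover padding on the right is ignored
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8))

print(encode("abcde"))     # mfrggzdf
print(decode("mfrggzdf"))  # abcde
```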

The Last Little Bit (Pun Intended)

Now you know how to use PHP to encode information in URLs. You can use this to encode non-sensitive information, or encrypt *and* encode sensitive information. (This would be a great way to generate URLs with an expiry date, for example.) Keep in mind that URLs should generally not exceed 2,000 characters. Since every 5 bits of input becomes a full 8-bit character, we can calculate our overhead factor to be 1.6x, meaning that our new string is 1.6 times as long as our old string (8 bits / 5 bits = 1.6).

I'm always happy to accept pull requests if you can improve upon what I've done here. This was a great learning experience for me and I'm under no delusions that the code is perfect.


Github repo: 


Remaking the Cellphone Game Snake, in Microsoft Excel



Surely you remember that old cell phone game, "Snake"? The one where you had to eat the apples but not run into yourself? Gah, that game was a blast, and I spent a ton of time playing it in my younger years. It'd been a while since I had played, so I thought I would remake it in Excel. You know.. for fun.


I used to work at an accounting firm, so I spent a lot of time in Excel. I picked up a book about programming VBA for Excel, because it really helped me do my job much better. (As a sidenote: learn to code not to know how to code, but so you can use it to accomplish some other thing. I love coding, but I love *doing things* with code more.)

I was at a training event for the firm once, and on the flight home rather than watching TV or playing a game, I thought I'd try to make Snake in Excel.  



It's Alive!

It seemed like a good idea, just because the old cell phone screens were so grid based, and Excel is the same way. Why not just use cells as giant "pixels" and use them to draw the game? So that's what I did. The first thing the game does is resize the columns and rows to make the cells square. Then it draws up the borders, drops in a snake and an apple, and you're off to the races. I had to import a few libraries to get it working; the main one is the timer library that allows the program to "run". Each time the timer ticks, it moves the snake in whichever direction it's going and checks whether it has collided with anything.

The snake, the apple, and the walls are simply cells that are formatted in different, defined ways. Every time the snake moves, it checks to see if it hit an apple, if so, it shows the apple moving through the snake's body and then extends the snake by one cell. If it's a wall, the borders all turn red, and you're dead.


After a while, the main level gets boh-ring. It is just a box, after all. So why don't we mix it up?


Like the snake and the apple, the walls are just cells that have been formatted in a certain, defined way. The "apple" algorithm will place the apple randomly in the cage, but never on the walls. That would be frustrating, huh. But how do you make levels? Easy, I made a way for you.


With the level editor, all you really have to know how to do is use the "Format Painter." You can add rows and columns to make the level wider or taller and then use the format painter to add walls. To place the snake, you just put an "X" in the cell you want the head to be in and choose the starting direction. (Make sure you leave enough room for the snake's body!)

You can share and import levels too, right from the front-end, no coding required. 

If you want to get really funky, you can look at the "Levels" sheet to see how all that data is stored.


The scoring is weighted based on your chosen speed, and the "softness" of the walls you choose. I used a formula to determine the points-per-apple: 

Score = ApplesEaten * (1 + (Speed-3)*.3 + (3-Forgiveness)*.2)

If you set the speed and forgiveness factors to 3, apples will be worth 1 point each. If you turn the speed all the way up and the forgiveness all the way down, you'll get a value of 2 points per apple.  The speed is weighted more heavily than the forgiveness, so crank up that speed and soften up those walls.
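If you want to sanity-check the formula, here's a quick Python version of it. (I'm assuming both the speed and forgiveness dials run from 1 to 5, with 3 as the default; the weights come straight from the formula above.)

```python
def points_per_apple(speed, forgiveness):
    # weights from the formula above; both settings assumed to run 1-5
    return round(1 + (speed - 3) * 0.3 + (3 - forgiveness) * 0.2, 2)

print(points_per_apple(3, 3))  # 1.0 -- the default settings
print(points_per_apple(5, 1))  # 2.0 -- max speed, least forgiving walls
```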


Every time you open the workbook, it will check for a text file on Amazon's S3 that has a version number and the URL to the latest workbook, should I ever update the thing. 


You can find the workbook at The code is wide open so feel free to look at it and offer any suggestions! I will probably put it on GitHub before too long. 

Need Help?

Are you trying to write macros that actually accomplish business needs, rather than ones that waste time? I could help with that too. Reach out for my availability for your project! 

Using MySQL Triggers To Ensure Immutability

Thank goodness.

Developers are not always database admins, and database admins are not always developers; bootstrappers often have to be both. As we're building products, we're writing the code and designing the database, hopping back and forth between the two. It's fairly often that we just use the tool we know to accomplish the purpose, rather than using the optimal tool. Sure, you can pound a nail in with a set of vice grips, but wouldn't a hammer make more sense?

Let's talk about databases.

Solving it With Yii

In building the invoicing piece of Mantis, I've created a "user status history" table that keeps track of when employees change plans, are no longer employed, or are no longer hourly workers (therefore are not billed). Anytime an employee's attributes change, a new record is created in the status history table that shows all the employee's current attributes and the old record is completed by inserting the end time of the previous plan.

Take a look at the picture below to get a better understanding. 

[Screenshot: the tbl_user_status_history table]

The employee's current state is always the record with the NULL value in the `to` field, indicating that that state has not yet come to an end. Employee #1 went from hourly to non-hourly (records 1&2). Employee #6 went from active to inactive (records 5&10).

I use this data to build my invoices. If an employee got let go (went inactive) halfway through the month, I'm not going to charge them for a whole month, so I need to know when they went inactive. Because this is a historical table that I'll be basing invoicing on, I want to ensure that this data can *never* change.

My first thought was to copy the data over to another table that would separate the data from the status history table and put it in a different table that is only used to run invoices. That may seem like an OK solution, but it's not. You really shouldn't ever be duplicating data in your database. 

I kept trying to think up a way to solve the problem with Yii, because that was the hammer that I had, the one that I use every day. The solution really lies in the database itself, below the application altogether. I needed to get some advice from someone who really knows databases, so I called my Dad. He's been doing this stuff for years.

Enter: Triggers

My dad educated me about Triggers in MySQL. Triggers are like hooks that allow you to execute logic based on events, a lot like Yii has "before" and "after" filters.  These triggers allow you to keep some basic data integrity logic out of your application layer and put that burden on the database, keeping your code uncluttered and your data safe. Let me show you the SQL and then I'll walk you through it.



DELIMITER $$

CREATE TRIGGER lock_down BEFORE UPDATE ON tbl_user_status_history
FOR EACH ROW
BEGIN
    -- completed records (non-NULL `to`) can never change
    IF (OLD.`to` IS NOT NULL) THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Completed history records cannot be changed';
    END IF;

    -- open records: only the `to` field may be set
    IF ( != OR
        OLD.user_id != NEW.user_id OR
        OLD.`from` != NEW.`from` OR != OR
        OLD.hourly != NEW.hourly OR
        OLD.plan != NEW.plan) THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'Only the `to` field may be updated';
    END IF;
END$$

CREATE TRIGGER no_delete BEFORE DELETE ON tbl_user_status_history
FOR EACH ROW
BEGIN
    SIGNAL SQLSTATE '45000'
        SET MESSAGE_TEXT = 'History records cannot be deleted';
END$$

DELIMITER ;


The first thing this does is set the delimiter to "$$" so we can safely use the semicolon elsewhere. We set it back to the semicolon at the end. 

The create trigger syntax is pretty straightforward: you give the trigger a name, declare the event that the trigger is based on, the table, and then the body of the trigger. In my case I've named my first trigger "lock_down", the action is "BEFORE UPDATE", and the table is "tbl_user_status_history". Now: the body. The body of the statement starts by checking if the old `to` field was NULL before the update. If the old `to` field was not NULL, that means that record should be set in stone, so we throw an error by using the SIGNAL statement. So now we've taken care of the case where the record has been completed: it can't be changed.

The other case we need to worry about is when the `to` field IS NULL: we need to make sure that only the `to` field can be updated. We don't want the status to be changed once it has been set; we just want to be able to close out that particular row. That's what the second IF statement does: it compares every old value to every new value (except the `to` field, obviously) and if any of them don't match, we throw an error. This way, any value change creates an error, but you are still allowed to update the `to` field.

The second trigger we need to create is a trigger that makes sure you can't delete a row. This is our BEFORE DELETE trigger, and it's super simple. We just throw an error, every time. Pretty simple.

Now our history records are truly historical records: they cannot change. I'm confident that I can calculate an invoice upon these records and the invoices will always be the same. This keeps our PHP code nice and clean, leaving it up to the DB engine to enforce these (extremely important) rules. It also gives us a sense of security, knowing that no matter what happens, these records will always be correct.

Yii and the Asset Pipeline: Part 2

A while back, I wrote a post called Yii, Heroku, and the Asset Pipeline where I tried to come up with a reasonable way to manage publishing assets with Yii on Heroku. (Note, this applies not only to Heroku, but to any host that runs on top of Amazon EC2, or Amazon EC2 itself.) What I came up with in that first post worked decently well. I had a system that would allow me to work locally with files that were always up to date and then change a version number and publish them all for my staging and production environments. There were still problems though, things that totally bugged me.

The Problems

Let me quickly rehash the problems with the basic Yii pipeline, as far as I'm concerned, and then I'll talk about the problems with the system I was using up until yesterday. 

Problems With Yii's Pipeline

The problem with the Yii asset pipeline is not really Yii's fault, but stems from the fact that Yii wasn't built with an ephemeral filesystem host in mind. The problem we run into is that an ephemeral filesystem will occasionally blow away your assets, because the filesystem can't be trusted to persist. This pushes us to an external static asset store, like Amazon S3. This, it turns out, is a good thing. Cookieless domains reduce overhead and decrease page load times. My first system fixed those problems, but it had problems of its own.

Problems With My (First) Solution

So. much. manual. labor. 

My first solution was far from perfect. What I hated about it most was that it left me with a lot of manual processes (which means a lot of potential mistakes) and a subpar experience for my users.

When it came time to publish, I had to manually combine and minify all my CSS and JS. More than one time I forgot to do that. More than forgetting to do it a couple times, the painful part was having to do it every time I made a small CSS tweak. I looked at a few minifiers that minify on the fly, but I was never really that comfortable with the way they worked. I wanted everything to be done locally and not have that load placed on the server to do the minifying.

The other problem with my system was mass versioning. Any time I tweaked a CSS file or added a bit of JS code, every asset got invalidated when I incremented my version number. Way less than ideal for my end users who have their caches all invalidated when the assets go up a version. That's what I set out to fix with the Mantis Asset Manager. 

The Solution: The Mantis Manager - Overview

If you're wondering where the Mantis Manager name comes from, it's from the SaaS app I'm building for timecard management for small businesses: Mantis. I built the Manager specifically for Mantis, to hopefully solve my asset issue once and for all. (And I think I have!)

You can get all the code and a pretty technical explanation of the Mantis Manager over at the github repo: I'll give an overview here.

The Good Stuff

Let me just first say what the Mantis Manager gets you, and then I'll tell you more about it. The Mantis Manager combines, minifies, and publishes assets based on the SHA of their contents, publishing only files that have changed. It gives you a way to work locally and remotely without muddying the waters between the two. Here's the money shot:


Look how awesome that is. Mantis is happily and automatically looping through my assets folder checking files, ignoring stuff, minifying, updating, and publishing. This is the result, let's talk about the process. 

The Two Types of Assets

In my mind, Yii has two types of assets. The first type is our assets: the assets we use to build our sites, the CSS we write, the images we need, and the JavaScript we code. The second type is, obviously, not our assets. These are the Yii JS files, component CSS/JS files, or extension files. These are assets that our sites use, but we don't write or maintain. We are going to treat these two sets of assets differently, because they are indeed quite different.

Our assets are likely to change fairly often as we build. When we build a new feature or fix a bug, it's likely that some of our assets are going to change. It's totally different with other people's assets: they are not likely to change. How often do the Yii JS files change? Hardly ever. Only when we update the Yii framework, I would suspect. So almost never. We'll treat these assets differently, then. For our assets, we'll use the Mantis Manager; for other people's assets, we'll rely on the traditional publish() call. We'll need the two systems to work happily together.

One of my many pages of possible solutions.

The Solution: Details


Let's take a look at how I managed to solve the problem. The first thing I wanted to do was to publish only changed assets. I looked at using the modified time of the file, but I found that to be less reliable than I'd like. There are times when the modified time could change, but the contents haven't changed at all. I settled on using PHP's sha1_file() function, which digests the file and spits out a 40 character hex string. This seemed to be the best method of ensuring that a file has changed.

Before I can go all SHA crazy on the files, I need a list of files. I like to store my files in  "protected/assets" because it isn't web accessible, which means I get to explicitly determine what gets published and what doesn't. If you look at the code on GitHub, you can see that this is a configurable option.

Now that Mantis knows where to look, it'll loop through every file in that folder and do its thing. You can see from the photo above that some files are ignored; you can set ignore patterns in the config. As it loops through, it calculates the SHA of each file and stores the list in an array. When you run the command a second time, you'll see that it compares the SHAs to see if anything has changed. If nothing has changed, it moves on to the next asset.
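The heart of that loop is simple enough to sketch. Below is a Python version of the idea (the real Manager is PHP, and the function and file names here are made up for illustration): walk the folder, SHA each file, diff against the manifest from the last run, and save the new manifest.

```python
import hashlib
import json
import os

def files_to_publish(asset_dir, manifest_path):
    """Return the files whose contents changed since the last run."""
    try:
        with open(manifest_path) as f:
            old_shas = json.load(f)   # the SHA cache from last time
    except (IOError, ValueError):
        old_shas = {}                 # first run: publish everything

    new_shas = {}
    changed = []
    for root, _, names in os.walk(asset_dir):
        for name in sorted(names):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                sha = hashlib.sha1(f.read()).hexdigest()
            new_shas[path] = sha
            if old_shas.get(path) != sha:  # new file, or contents changed
                changed.append(path)

    with open(manifest_path, "w") as f:
        json.dump(new_shas, f)        # becomes the cache for the next run
    return changed
```

Run it twice in a row without touching anything and the second run returns nothing to publish, which is exactly the behavior described above.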

Minifying and Combining

If you'll recall, something else I wanted was automatic minifying and combining of files so I only had to serve one file of each type rather than several. Mantis takes care of this by leveraging a few open source libraries: minify for CSS and JShrink for JavaScript. Mantis will create and publish a minified version of your file, leaving your original (development) version of the file untouched and totally readable. To combine files, you simply feed Mantis an array of the files you'd like to combine and where you'd like the new file to be created, and the Mantis Manager will take care of the rest.

Busting Cache

It's a best practice to have a CDN sit in front of your asset server so that your assets are as close to your end users as possible, cached and ready to go. You could use CloudFront to sit in front of your S3 bucket. With that comes a problem, though: it could take a few hours to have asset updates pushed through to every node of the CDN. The best way around this is to just change the URL of the requested asset altogether. The MM does this by sticking a version in the front of the path; your image could end up having assets/2/d4k8d33/image.png as its path, for example. That 2 represents the version it's currently on. That way the freshest possible version of each asset is being served, but we're not serving new versions of assets that haven't changed.

Referencing Assets In CSS

Having explained that, you may be thinking: "How can I reference my images from my CSS if all the assets' paths are constantly changing?". Good question. You can't just say ../images/image.png in your CSS, because you have no idea where that image is actually going to end up. Definitely not in the folder you think it's going to end up in, that's for sure. In your CSS, you can simply include this template {{asset("/images/image.png")}} and the MM will take care of the rest. Part of the processing loop is to look through the CSS files and replace references to assets with their current versions.
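Conceptually, that CSS pass is a regex substitution over the stylesheet. An illustrative Python sketch (the template syntax matches the {{asset("...")}} form above; the resolver function is a made-up stand-in for the Mantis Manager's lookup):

```python
import re

# Matches templates of the form {{asset("/images/image.png")}}
ASSET_TEMPLATE = re.compile(r'\{\{asset\("([^"]+)"\)\}\}')

def rewrite_css(css, resolve):
    """Replace each asset template in a stylesheet with the current
    published URL for that asset. `resolve` maps an original asset
    path to its versioned URL."""
    return ASSET_TEMPLATE.sub(lambda m: resolve(m.group(1)), css)
```

During the processing loop, the resolver would return whatever versioned path the asset was most recently published under, so the CSS always points at a live URL.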

Referencing Assets in Yii

I've created a modified version of the standard CController that all my controllers inherit from that has an asset() method that you can call to get the appropriate reference to an asset. When you are in a view, you simply call $this->asset("/original/path/to/image.png")  and Mantis will return the appropriate URL for the latest version of that asset.

Local vs Remote

You aren't going to want to publish to S3 for every little development change you make, so there is an option on the mantis command to specify local or remote (local is the default):

$ ./yiic mantis --type=local
$ ./yiic mantis --type=remote

Local and remote keep their own separate caches with the SHAs, so you can publish 100 times locally and then run publish once to remote and all the files will be compared to the last time you published to remote. The local publish makes use of the standard Yii CAssetManager, while the remote publish relies on a heavily modified version of the S3AssetManager.


The Mantis Manager took me a couple of days to write; it shouldn't take you that long to implement, but I will warn you: this is not a drop-in replacement for any AssetManager class you are already using, it is much more integrated than that. There are likely things I haven't covered here that may be important, so feel free to reach out or raise an issue over on the GitHub repo. I'm happy to do what I can.

If nothing else, I hope you can at least use pieces of my code to implement your own smart asset manager. 

The only thing I'd still like to do is integrate this into a pre-commit git hook, so that before I push code to my staging/production sites, all my assets are ensured to be up to date.

I'll save that for another day though!


My Aperture Backup Strategy

Something like this.


I had a bad experience when I was a kid. I was taking apart my computer to heat-shrink the cables, like all 14 year olds do. I had taken the plastic end cap off of the hard drive power cable and was extremely careful to keep the cables in the correct order. What I wasn't careful about, however, was the orientation of the plastic cap. I put it back on upside down, reversing the order of the cables. When I plugged the cable back into the drive, 12V was sent to the board instead of the 5V it expected, and the drive started smoking. I tried everything to get the data back, including expensive data recovery, all to no avail. Not a good day: I had permanently lost a lot of photos and projects. Ever since, I've had a pretty extensive backup strategy that has saved me several times.

I'm going to talk mainly about how I back up my Aperture photos, because it's a huge amount of data and they are extremely important to me.

Reference or Manage?

In Aperture you have two options when it comes to dealing with your images, you can either let Aperture manage them, or you can reference them off your hard drive. I much prefer the latter. With referenced images, you have full control over the directory structure of your master images and it makes it a lot easier to do your backups. 

First Level - Online Backup

I store all of my originals on an external hard drive that is attached to my iMac. When I first import my photos, I store them on my local drive in an "Aperture Holding" folder. I use the cloud backup service Backblaze to back up my computer to the cloud. Backblaze is great because they offer unlimited backup for five bucks, including external drives! I've got a couple of terabytes in their cloud. Thanks for all the space, Backblaze!

If I ever lose any of my photos, I can log in to Backblaze, hit restore, and bring down a zip. Because photos can be so big, that's not an ideal solution for restoring a whole library, but it has saved my hide in scenarios where I realized I'd accidentally deleted some photos. You'd think I'd learn.

Second Level - Physical Media

Here's where it gets interesting. The second level of backup I have is DVDs. Good ol' optical media. One of the glorious, life-saving features of my optical media strategy is that failures can't cascade. I had an instance where some of my photos got corrupted and Backblaze happily backed them up (as it should have), but suddenly that backup was useless. The DVDs, however, were sitting in my closet, unaware that they should be corrupted. I popped in the matching DVD and overwrote the corrupted files and everything went back to normal.

The key to backing up to physical media is having a system. And boy do I have a system. I make use of Aperture tags, AppleScript, Automator, and a little bit of bash. You can totally implement it too. Let's take a look.


The "Not Backed Up" Album

After a photo is backed up, it is tagged with the "DVD" tag. That way I have a smart album that simply selects every photo that isn't tagged with "DVD". I know that those photos are in mortal danger and need to be backed up pretty soon. But as a geek, there's no way I'm going to do that manually every time. So how do the photos get backed up and tagged with "DVD"? Enter Automator.


Automator is an extremely powerful, underused Apple program. It allows you to automate a wide range of programs so that you don't have to repeat actions. The first Automator action I use is really just an AppleScript that tells you the size of all the images you have selected. The way I use this: I go into my @Not Backed Up album, select a bunch of images, click the "Calculate Size of Selected Items" service, and try to get as close to 4 gigabytes as possible. (A single-layer DVD holds 4.7 GB of data; I like to leave a little wiggle room, just in case.) Once I get around the 4 GB mark, I've got my selected set of images for the new DVD backup.
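The "get close to 4 GB" step is really just greedy accumulation: keep adding files while they still fit. A sketch of the idea in Python (file names and sizes invented for the example):

```python
def select_for_disc(files_with_sizes, capacity=4 * 1024**3):
    """Greedily pick (name, size) pairs until adding the next file
    would exceed the disc capacity, leaving wiggle room below the
    physical 4.7 GB limit of a single-layer DVD."""
    batch, total = [], 0
    for name, size in files_with_sizes:
        if total + size <= capacity:
            batch.append(name)
            total += size
    return batch, total
```

With a tiny capacity of 4 units and files of size 3, 2, and 1, the function takes the first and third files and skips the one that would overflow, which is exactly the manual selection dance described above.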


With all of those images selected (totaling ~4 GB), I select the "Burn Masters to DVD" service and sit back and marvel at the power of Automator. (You can download the Automator actions here: Place them in your Users > YourName > Library > Services folder.) Below is a photo of the main action. Don't freak out, we'll go over it.


Whew. OK. Right off the bat, it's going to ask you to make sure a DVD is inserted. Everything will go horribly wrong if you try to burn a DVD when no DVD is inserted. The next thing it does is read a value out of a file. This is just a simple text file with a single number in it. You'll need to create a .txt file and put the number 1 in it. That's all.

Once you've done this, change the reference in step two to the path of your new text file, not mine. The AppleScript will read that value, pad it with zeros to make a three-character string like "001" or "010", and then pass it to the variable "Name". This is going to be the name of your DVD and your backup tag.

The next couple of steps take all the selected images and export the masters to a folder that you designate, ready to be burned to a disc. A "Get Folder Contents" action takes all the items from that temporary folder and passes them to the "Burn a Disc" action, which does the actual burning. Once the disc is burned, we throw away the temporary folder (the masters still exist; it's only the temp folder that is gone) and eject the disc.

The last thing we need to do is tag all those images in Aperture with "DVD" and the backup number. The AppleScript will go through and tag them all for you. In addition to the number (from number.txt), it will look for a file called letter.txt and put its contents at the front, so make sure you create a letter.txt file and update the reference to its location. I just put "A" in my file, but you can put anything you want: put "backup-" and the tag will come out as "backup-001". Mine are "A001, A002, ...". That's just my system though. If I ever get to 999 I'll change letter.txt to "B" and reset number.txt to "1". Unlikely to ever happen though!

Once the tags are finished, a (hacky) shell script tries to increment the number in the file by one, getting you ready for your next burn. (There is probably a wayyy cleaner script for that part, but hey, it works!)

Should Something Go Wrong

I've found that the tagging part can sometimes fail, so I made another action that does just the tagging. Don't deselect the photos until the tagging is done; that way, if it fails, you can simply hit the "Tag Selected With Backup #" action. Once you deselect the photos, they'll disappear from the @Not Backed Up album, leaving you with any other photos that still need to run through the system.

The Discs

I write the name of the backup ("A029") and then the date on the disc. I buy the discs by the hundred and store them on the spools they originally come on. I take one off the top, burn it, write on it, and then put it on the bottom of the stack. Easy enough.

Every single photo in my Aperture library is tagged with "DVD" and "A0XX". Every photo I look at, I know there is a corresponding DVD in my closet with a copy of the photo.

Third Level - Classic Backup

For the third level, I use a classic backup strategy: replicating my external drive to another external drive. You could use Time Machine or any number of other backup tools like that. This is probably (definitely) the least interesting level, so I won't really talk about it.

Am I Crazy?

Maybe. But hey, I had a bad experience. Eventually you will lose something too; it will happen. I guess I'm lucky that I lost stuff at 14, when I had no kids, no serious professional work, and no client data. What about you? What important things do you have, and how do you ensure they'll never disappear? You do ensure that, right?


Scheduling Jobs With Yii


Sometimes in your app you'll have a long-running or computationally expensive job that really shouldn't be handled by your web server while your user waits, but should instead be processed in the background. You might also want to delay a job until a later time, like sending the user a report at the end of the day. The solution to both of these scenarios is to have a job queue and a separate, non-web server processing that queue.

Heroku Scheduler

Heroku offers a free add-on called "Heroku Scheduler" that can run a job every 10 minutes, every hour, or once a day. We are going to use this scheduler to call our Yii Console command.

The Queue

I'm going to use a MySQL table as a queue, even though some people think that's a bad design pattern. For what I'm doing, I won't run into any of the issues mentioned in that article. You could use any of the *MQ add-ons if you like.

The Queue Table Migration

The migration for the queue table is pretty simple. We have a timestamp that the job should execute after, when the job was actually executed, whether or not it succeeded, the action we want to perform, the parameters, and an optional execution result. We'll talk about how these fields are used when we get to the console command. (You can find the code for this and the other code in this post at

$this->createTable('tbl_cron_jobs', array(
    'id' => 'pk',
    'execute_after' => 'timestamp',
    'executed_at' => 'timestamp NULL',
    'succeeded' => 'boolean',
    'action' => 'string NOT NULL',
    'parameters' => 'text',
    'execution_result' => 'text',
));

The CronJob Model

The CronJob model is the standard Gii generated model, except for the following two methods.

public function beforeValidate()
{
    // Serialize the parameters unless they are already a string
    if (gettype($this->parameters) !== "string") {
        $this->parameters = serialize($this->parameters);
    }
    return parent::beforeValidate();
}

public function afterFind()
{
    // Unserialize the parameters on the way back out of the DB
    $this->parameters = unserialize($this->parameters);
    return parent::afterFind();
}

The beforeValidate method serializes any parameters, and the afterFind method unserializes them. Say, for example, that we want to send a welcome email two hours after the user signs up; we could create a new CronJob and pass an array with the user's id. The model will serialize it, save it to the DB, and when we get it back it will be unserialized and ready to go.
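For illustration, here's the same round-trip sketched in Python, with JSON standing in for PHP's serialize()/unserialize() and a dictionary standing in for the CronJob row (field names mirror the migration above):

```python
import json
from datetime import datetime, timedelta, timezone

def make_cron_job(action, parameters, delay_hours=0):
    """Build a queue row the way beforeValidate() would: parameters
    are serialized on the way in (JSON here, standing in for PHP's
    serialize()), and the job only becomes eligible to run once
    execute_after has passed."""
    run_at = datetime.now(timezone.utc) + timedelta(hours=delay_hours)
    return {
        "execute_after": run_at.isoformat(),
        "executed_at": None,
        "action": action,
        "parameters": json.dumps(parameters),
    }

def load_parameters(row):
    """Mirror of afterFind(): deserialize before use."""
    return json.loads(row["parameters"])
```

Queuing the welcome email from the example is then make_cron_job("sendWelcomeEmail", {"user_id": 42}, delay_hours=2): the parameters go in serialized and come back out ready to use.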

The Yii Console Command

Now that we have a model to work off of, we can look at the actual CronCommand that we'll be running. (Again, you can get the code at I'll only cover the relevant parts here.

The entry point for our script in this example is going to be the Index action. This is what we'll call from the Heroku Scheduler, and then we'll let the index action determine what needs to be processed. (This is only one way of doing it. I like to do it this way because I only want to deal with the logic in one place. If I add a new command I need to process, all I have to do is add it to the CronCommand and the next time Heroku calls my index action, it can process the new command. The alternative would be to create an action{New} for every new action and then add a schedule on Heroku to call that new action. I prefer having one entry point: actionIndex.)

The first thing we do is get a list of jobs that need to be processed. We select any jobs where Now is greater than the execute_after time (meaning the job should be executed) and that haven't already been executed. We sort by ID, ascending, so that the oldest jobs come first. (If we had a really active queue, we might never get to the oldest jobs if we didn't select the oldest ones first.)

$jobs = CronJob::model()->findAll('execute_after < :now AND executed_at IS NULL ORDER BY id ASC', array(':now' => $now));

We then start to loop through all the jobs and process them. The first thing we need to do is make sure that we have a method to handle it. If we do, we call that method.

if (method_exists($this, $job->action)) {
    $result = $this->{$job->action}($job->parameters);
}

So if you save a CronJob model with "testJob" as the action, the CronCommand is going to call $this->testJob($job->parameters) when it processes that particular job. If the result is FALSE, we happily skip it and process it again next time. Otherwise, you should return an array with 'succeeded' as a boolean and, if you like, an 'execution_result'.
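The dispatch pattern translates to any language: look up a handler by the stored action name, call it with the parameters, and record the result. An illustrative Python sketch (the testJob handler is a made-up example, as in the text):

```python
class CronProcessor:
    """Dispatch queued jobs to same-named methods, recording whether
    each one succeeded. Mirrors the method_exists() check above."""

    def run(self, jobs):
        for job in jobs:
            handler = getattr(self, job["action"], None)
            if handler is None:
                continue  # no method for this action; leave it queued
            result = handler(job["parameters"])
            if result is False:
                continue  # skip for now; it'll be retried next run
            job["succeeded"] = result.get("succeeded", True)
            job["execution_result"] = result.get("execution_result")

    # a hypothetical job handler, named to match the stored action
    def testJob(self, parameters):
        return {"succeeded": True, "execution_result": f"got {parameters}"}
```

A job whose action has no matching method is simply left alone, so adding a new job type is just a matter of adding one method to the processor.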

Making It Run

Now that you have your Cron command set up, you can test it by running ./yiic from your Terminal in the protected folder. You should see "cron" as an option. Now try running "./yiic cron". If you have any jobs stored in your table already, you should see them processing. 

One last thing we need to do to get it running on Heroku is set up a bash script to call it. It's a super simple two-liner:

export LD_LIBRARY_PATH=/app/php/ext
bin/php www/protected/yiic.php cron

The first line sets a path required for PHP to run properly from the command line (thanks to Norbert Kéri for pointing that out) and the second line calls our yiic.php with the cron command. Save this as "" or whatever you want to call it and then add it to your Heroku scheduler by entering the following command:


Now your actionIndex will be run every 10 minutes, grabbing items off the queue, processing them, and saving the results back.


Let me know if you have any issues, I'll do my best to help.

Further Reading

If you want to read more about scheduling jobs in Yii, a great in-depth explanation is available in the Yii Rapid Application Development book (chapter 7), written by my friends over at Plum Flower Software. I highly recommend the book, and their services, should you need a Yii application built.

Automating Yii Migrations on Heroku

Yii migrations are quite useful, allowing you to keep your database version controlled along with your code. It's all pretty easy to work with locally: you write your migration, migrate up, and then write the code to take advantage of that new database structure.

Catch 22


When it comes time to push your code to a remote server, however, you run into a catch-22. Your migration is stored in the code, and you've already written code against the new structure of the database. If you push the code and then use some sort of web GUI to migrate, there could be a window between when the code is pushed and when the migration runs during which the database and code are not on the same version. In the same way, if you run the migration before you push the code, your code and database are out of sync again.

The ideal solution to this problem is to have the migration execute and the code go live at the exact same time.

How Heroku Works

As we move toward our solution, it's important to understand how Heroku works. When you push your code to Heroku, it spins up a new server, waits for it to come online, and then routes all requests over to that server, taking your old one offline. This ensures that you have no downtime while your new code is going up. It's actually a pretty slick process.

The Heroku server uses a script called "" to set variables and start the Apache server. This is what we're going to hook into to execute our migrations as the server starts up, before traffic is directed to the new server.


The Heroku Procfile is basically the set of instructions that Heroku uses to know what processes it needs to run as it boots up. The standard (web-only) Procfile for PHP looks like this:

web:    sh

That's it. All it does is define the web process as, which is the boot script supplied by Heroku. At the time of writing, that file looks like this:

for var in `env|cut -f1 -d=`; do
    echo "PassEnv $var" >> /app/apache/conf/httpd.conf;
done
touch /app/apache/logs/error_log
touch /app/apache/logs/access_log
tail -F /app/apache/logs/error_log &
tail -F /app/apache/logs/access_log &
export LD_LIBRARY_PATH=/app/php/ext
export PHP_INI_SCAN_DIR=/app/www
echo "Launching apache"
exec /app/apache/bin/httpd -DNO_DETACH

It passes in some config variables, sets up some logs, and launches Apache. We're going to modify this file to also run our migrations.


In Yii, the migration command is 

./yiic migrate

That command comes back with the "do you want to migrate?" question, to which you have to respond "Y/N". For the remote server, we need to run this automatically without the question. There is a flag we can use to turn this off; by setting "interactive" to "0", it will run automatically. (

Now our command looks like this:

./yiic migrate --interactive=0

It'll happily migrate without your intervention. One step closer. Now we need to put this into our modified Copy the standard and name it something clever, like Create a file called "Procfile" in the root of your app, and change the name of the boot script to, like so:

web:    sh

The last thing we have to do is modify our new to include the Yii migrate command. This is what you should end up with:

for var in `env|cut -f1 -d=`; do
    echo "PassEnv $var" >> /app/apache/conf/httpd.conf;
done
touch /app/apache/logs/error_log
touch /app/apache/logs/access_log
tail -F /app/apache/logs/error_log &
tail -F /app/apache/logs/access_log &
export LD_LIBRARY_PATH=/app/php/ext
export PHP_INI_SCAN_DIR=/app/www
echo "Launching apache"
bin/php www/protected/yiic.php migrate --interactive=0
exec /app/apache/bin/httpd -DNO_DETACH

Notice the new line

bin/php www/protected/yiic.php migrate --interactive=0

That's the line that uses PHP to run the migrate action in the yiic.php file without interactions. Now, every time you push code to the server, your migrations will automatically run. Programmer's paradise.

Wrangling Timezones in PHP, MySQL, and Yii

Not to mention a dose of problems with hosted database solutions, yay hooray!

One of the key features of Mantis (actually, the whole product) relies on keeping track of time. Specifically, the time an employee clocks in and the time they clock out. If I'm going to be keeping track of hours worked, that little bit of information would be key, you'd think.

You'd be right.

This is your mind after encountering timezones, daylight savings time, and leap seconds in PHP/MySQL. A total wasteland. (Salvador Dali)


Best Practices

There are a few best practices for dates on the PHP/MySQL stack. One of the biggest: always store times in the database as UTC/GMT (+00:00). You want a baseline, something against which you can always define truth. In this case, I'm always certain that times in the database are timezone agnostic, i.e. they are always GMT. Timezones are messy; this gives me a warm fuzzy feeling that I at least know one thing is true. It goes downhill from here.

This means that whenever I get the data out of the database, I'll have to convert it to local time in PHP at the last possible moment. (Another best practice: convert at the very end.)

The Process

In Mantis, we allow users to define a timezone for their company (eventually we'll allow timezones per team). This is another bit of truth I have, thank goodness. I don't have to ask the server what timezone we need to use, which is great because it will often lie. Since I now have two bits of truth (that all database times are GMT, and that all displayed times are in a user-specified zone we'll call CST), I can start doing some interactions.

Putting the Data In

To put the data into the database, all you have to do is use the UTC_TIMESTAMP() MySQL function. This will give us exactly what we want: the current time in GMT. In Yii, that looks like this:

$model->in_time = new CDbExpression('UTC_TIMESTAMP()');

That sets up the query so that when the row is inserted, MySQL evaluates UTC_TIMESTAMP() to the current GMT time.

Getting the Data Out

Getting the data out, you'll need to convert it to the user's timezone. When the user logs in, I store their timezone in a session variable. To properly convert the date, use PHP's glorious DateTime object. The DateTime constructor has a parameter that lets you define the timezone, and the object has a method that lets you change it. How convenient!

// gets the local timezone out of the session
$local_timezone = $this->getLocalTimezone();

// create a new DateTime object with the time from the database
// and the timezone of GMT
$date = new DateTime($gmttime, new DateTimeZone('GMT'));

// translate to the user's timezone
$date->setTimezone(new DateTimeZone($local_timezone));

The first thing to do is get the timezone of the company (we'll talk about where those come from in a bit). Then, since I know that the data came out as GMT, I set the second parameter to a GMT timezone object. The last thing you have to do is translate it to the local timezone by calling the setTimezone method.

Comparisons and Calculations In MySQL

We've covered the PHP side of things, but sometimes we want to perform aggregate calculations in MySQL rather than pulling everything out and looping over it in PHP. For example, Mantis offers a report that displays hours worked by day across the entire company. If I were to just group by day in the database, I'd end up with a sum of hours per GMT day rather than per local day. That's no good. We've got to get those times into the local timezone.

To convert the date in MySQL, use the CONVERT_TZ() function. CONVERT_TZ takes three parameters: the time, the current timezone, and the desired timezone. For the current and desired timezones, you can pass either GMT offsets (-05:00) or named timezones. Use named timezones. Named timezones give you the huge benefit of handling daylight saving time for you, not to mention all the other strange discrepancies throughout history, of which there are many. Here's how that part of the query looks in my code:

DATE_FORMAT(CAST(CONVERT_TZ(`in`, "GMT", :mysql_timezone) AS DATE), '%m/%d') as `date`

Working from the inside out, the first function called is CONVERT_TZ, which is passed the mysql_timezone parameter (covered momentarily). This converts the time from GMT to local. After that, I just cast it as a date and format it as MM/DD for grouping. Now we have a query that returns a sum-by-day on local days rather than GMT days.

Named Timezones

Named timezones cover a multitude of sins, daylight savings primarily. We need to get a hold of those names.


PHP's named timezones are pretty simple to implement, and they have a complete reference list on their site:


The Money Tables

The Money Tables

This one is a little trickier. To use named zones in the CONVERT_TZ function, you have to have several tables set up properly in your installation of MySQL. I didn't have any of those tables... of course. Why should things be easy?

(Further reading:

There is a utility to install those tables, but seeing as I wouldn't have the same level of access on my database-as-a-service provider, I didn't want to go that route. Thankfully, the MySQL docs site offers those tables for download. Download them, add them to your mysql database, and your CONVERT_TZ function will magically start working properly.

Making Them Work Together

To link up the timezones across both PHP and MySQL, I've set up a table that has the PHP name, the MySQL name, and a user-friendly name to display to the end user. When the user selects their timezone, we automatically have the right PHP and MySQL names for that zone.
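That lookup table can be sketched like this (example rows only, shown in Python for illustration; in Mantis it's a database table, and here the PHP and MySQL names happen to coincide because both use the IANA identifiers):

```python
# Each row links one user-facing timezone to the identifiers that
# PHP and MySQL each expect. Example rows only; a real table covers
# every zone you intend to offer.
TIMEZONES = [
    {"display": "Central Time (US)", "php": "America/Chicago", "mysql": "America/Chicago"},
    {"display": "Eastern Time (US)", "php": "America/New_York", "mysql": "America/New_York"},
]

def lookup(display_name):
    """Return the (php, mysql) names for a user-selected timezone."""
    for row in TIMEZONES:
        if row["display"] == display_name:
            return row["php"], row["mysql"]
    raise KeyError(display_name)
```

When the user picks "Central Time (US)" from a dropdown, one lookup hands back the right identifier for the DateTime conversion and the right one for CONVERT_TZ.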

Database as a Service Woes

Once I finally figured out how to do everything on my local machine, it was time to take it to my staging environment to see if I could get it up and running there. Obviously I couldn't. I think I've mentioned before, either here or on my podcast, that I had settled on ClearDB as a hosted MySQL solution. I'd been pretty happy with them up until last night. I queried my database to see if they had the time_zone tables installed and was greeted with the happy answer of: NO. I couldn't even get at the tables. I emailed support and asked if there was any way to add those tables or give me access to them, and the reply was:

Thanks for reaching out. After looking up your account with us via your email address, I found that your ClearDB database is running on our multi-tenant platform. Unfortunately, access to the mysql database (including timezone tables) is restricted on our multi-tenant clusters. We apologize for any inconvenience that this may cause. We do however support timezone capabilities via MySQL for our dedicated cluster customers. Please let us know if you would like to have a conversation about moving your existing ClearDB database over to a dedicated ClearDB cluster.
Fine then, let's look at the dedicated cluster. FOUR HUNDRED DOLLARS a month. No way I'm paying $400 for timezone support.

Heroku has another hosted MySQL add-on: xeround. With xeround, I can deal with those tables directly... for free. Problem solved.

Hopefully this will help you avoid hours of searching and struggling, trying to wrangle timezones. Feel free to leave any corrections/questions below!


Daylight Savings Time

Damn You Ben Franklin!