Jun 3 2014

Firefox Accounts OAuth Explorations

As we build Firefox Accounts, a key part of the whole experience is allowing a user to divvy out information to apps. We’ll be doing so with an OAuthish experience.


The first obvious place to look was the OAuth2 spec. We’ve based most of our experience on this model. Using the spec, a flow for an imaginary website Cuddly Foxes would look like this:

  1. Cuddly Foxes would register with our oauth server, supplying a redirect_uri, and we’d give them back a client_id and client_secret.
  2. Cuddly Foxes will make their login button redirect the browser to our server, passing the client_id, a random state, and desired scope, such as profile:email.
  3. Our server will show the user some information about who is asking and what info is being asked for, and ask them to confirm. They can uncheck any scopes they don’t want to give out.
  4. The server will generate a code representing the current auth request, and redirect back to Cuddly Foxes’ redirect_uri, including the random state and a code parameter.
  5. Cuddly Foxes first verifies that the returned state is one they sent, and then sends the code back to our server, along with the client_secret they received at registration.
  6. Our server will verify the client_secret matches what is associated with the code, and then will send back a token and the scopes the token has been approved for.
  7. Cuddly Foxes would then use that token whenever asking for the user’s email address, from our profile server, since that’s the scope that was asked for.

So far, standard OAuth2. The client_secret, code, and token are 32-byte random hex strings, and we store a hash of them in our database, reducing the damage done if it gets compromised. Now, let’s add in another service provider: FoxCoin, the newest hotness in privacy-respecting crypto-currency. Cuddly Foxes wants to set up a recurring subscription to send new Foxes every month to users.

That means that they ask our OAuth server for a token with scopes ‘profile:email’ and ‘foxcoin’. With the token in hand, they ask the Profile server for the user’s email, providing said token as proof that they can receive profile information, and they receive it. But! The profile server just received a token that it can use to access the user’s FoxCoin information, acting as Cuddly Foxes. Yikes!

Of course, we can assume the Profile server wouldn’t do anything so nefarious, but having that power is still dangerous. And imagine as we add more 3rd-party attached services, which are inherently less trustworthy. Additionally, with the recent discovery in OpenSSL, we don’t want to trust TLS alone to protect against sniffing the data as it passes. So, passing around a Bearer token in plain text is unacceptable.1


The next step was to consider using a secret token to sign a request, so that the original token is never revealed. This has been excellently explored already by the Hawk scheme. The short of it is that 2 parties who share a secret can sign the request with an HMAC, proving that the request and its payload came from one of them. The receiver just computes the same HMAC, and compares signatures. The original secret is never leaked to anyone. Many cookies were had by all.

Adapting that to our OAuth flow, we would return a random token like before, and Cuddly Foxes would use it to generate a Hawk authorization header, and send it to our Profile server. The Profile server, not knowing the secret token, would tediously need to send the various bits of the request making up the signature, plus the authorization header, to our OAuth server. The OAuth server would look up the secret token, compute the HMAC, and return whether it was valid.2

This is an improvement, since the secret token is never visible on the wire, nor does the Profile server receive it. However, a downside is that for this to work, the OAuth server needs to keep the original secret token in plain text. Before, we were keeping a hashed copy of it, which meant that a snapshot of our database would not reveal everyone’s secret tokens. We didn’t like this disadvantage, and so continued to explore.

OAuth with Public Key Signing

We wanted to keep the request signature, since that doesn’t leak the secret to anyone else, while not having to retain the original secret ourselves. It turns out, there is a technology that does exactly this: asymmetric public key cryptography. However, using RSA or DSA keys has its problems: signing and verifying is slow, generating new keys is slow, and sending public keys with each request is a lot of bytes. That’s when my colleague Brian Warner brought up the newest hotness: elliptic curve public keys. Particularly, Ed25519. It’s super fast to create keys, signing and verifying are fast, and public keys are 32-byte strings. The secret keys are likewise 32 bytes, and completely random, so brute force guessing takes longer than any human could ever wait.

So what’s that look like for Firefox Accounts? The updated flow looks like this:

  1. Stays the same.
  2. Stays the same.
  3. Stays the same.
  4. Stays the same.
  5. Cuddly Foxes first verifies that the returned state is one they sent. They generate a new ed25519 keypair for this user+scope, and then send the pubkey, the code, and the client_secret they received at registration to the server. This registers that public key with our OAuth service.
  6. Our server will verify the client_secret matches what is associated with the code, save the public key, and return the scopes that have been approved.
  7. Cuddly Foxes would then use that private key to sign a request asking for the user’s email address, from our Profile server, since that’s the scope that was asked for.

Afterwards, the Profile server can verify the signature by itself, since the request contains the public key. This removes the need for each attached service to figure out what parts of a request to forward to the OAuth server. It also means that all service providers will handle their own hash computing, reducing strain on our OAuth server. Once a signature is verified, the Profile server can simply ask the OAuth server what scopes are approved for that public key, and then act accordingly.

Here’s an example request:

GET /v1/email
Authorization: 'Gryphon pubkey="461d65b867d02ddf7f0d0bf3c2746c823605dec5e9f221ca7f451113fcddaf9f", ts="1400641081466", nonce="992022dd", sig="f1pIEz5y9sN6Bsc00iIy9YcEBFRLqCAtkTspvqQPb4FKUIMwrXxXiqBYXJbdAXc0FM1R6H9bdD+Pkx8klFUNCA=="'

The signature proves that the request originated from the owner of the pubkey, and the payload hasn’t been modified.

There be Gryphons

The authorization scheme in the example above is “Gryphon”. It was partly influenced by Hawk, but felt like a more powerful version. Mozilla has a habit of naming projects after mythological creatures. Most importantly, gryphons “are known for guarding treasures or priceless possessions.” Certainly, user data is a priceless possession.

Gryphon isn’t complete. It’s currently in a proof-of-concept stage. There’s a working branch of our oauth server using it. However, we’d like to get more eyeballs on it before feeling confident about shipping. Are there pain points we’ve missed, or use cases not covered? Send me a comment, or write up some analysis and send me the link, or come chat in #fxa, or anything, really.

  1. This issue doesn’t appear in all OAuth models. The issue comes from us having multiple mutually-distrusting services, gated by our OAuth server. We plan to allow clients, such as your website, to request data from a service provider, run by your digital neighbor, about a user.

    In most cases, all the data comes from the same entity that runs the OAuth server, and so there’s no worry that it will mishandle the power it gives itself. 

  2. A downside here is that this means the OAuth server is doing all hashing for all requests, which puts a requirement on our OAuth server using more resources. 

Apr 1 2014

Please Replace Credit Cards

Technology has greatly improved things this past decade. It’s peculiar that messaging nonsense has seen so much work, but something that quite literally costs people money continues to be so flawed. I’m talking about credit cards. It’s worth pointing out that I’m not a security researcher, just a concerned citizen.1

The flaw is a fundamental part of the design: for every charge, we must give the entire card number to the seller. That card number is everything. It gives all the information and power to charge as much money as the recipient wants. Holy carps. The sellers don’t charge as much as they want, because that would be illegal, and they’d lose their merchant account. Still, employees could keep the number and sell it. Or, more likely, as sellers keep a record of the number, hackers can steal them, and get all the moneys. Until you notice, report fraud, and the banks just swallow it.

One supposed fix is the Chip-and-PIN enhancement. This helps prevent copying of a card at a Point-of-Sale terminal. However, advances in magic (the Internet) have greatly increased the amount of online shopping. Want a new spiked collar for Fluffles? Just open your browser, and type in your credit card number. Nothing to worry about, I’m sure they won’t record it. Oh what’s that? An email from Exotic Collars that their database was hacked, and they actually did have your credit card. Time to call the bank, and fix up all your auto-billing subscriptions.

You get a token, and you get a token…

We can try to create rules around how to store credit cards, but just like passwords, it’s really hard to do correctly. Also, just as with passwords, merchants should never receive such powerful information in the first place. A solution could be providing the merchant a one-time-use token authorized for a specific amount for a specific merchant. The merchant charges the token, and it then becomes useless. It’s impossible to charge more than the agreed amount. Stealing the token is useless, because it only works for that merchant, it expires, and a credit account will only accept any token once.
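A toy model of those single-use token rules. Every name here is made up; a real system would involve signatures and expiry, as sketched later.

```javascript
// Tokens already redeemed; each token id may be charged exactly once.
var redeemed = new Set();

function charge(token, merchantId, amount) {
  if (redeemed.has(token.id)) return false;          // one-time use
  if (token.merchant !== merchantId) return false;   // bound to one merchant
  if (amount > token.amount) return false;           // capped at agreed amount
  redeemed.add(token.id);
  return true;
}
```

A stolen token is useless to anyone but the named merchant, and only once.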

If the source card or key which is used to generate tokens is compromised, a user can contact their credit provider, and generate a new private key. No merchants are affected, since they never had your key to begin with.

Recurring charges would be trickier. It’d require more cooperation among banks, standardizing some sort of unique account ID. Tokens could include the account ID, and merchants could safely hold onto that. When they need to make a new charge, they could request a new token for a certain amount. The user can approve the charge, which sends a new token, or perhaps mark that a certain amount from a specific merchant every so often is auto-approved. I’m sure plenty of things could be done here to make the user experience easy. And there’s incentive to make it as easy as possible: easier means users will spend more.

Credit accounts could provide apps to their users to make sending and approving tokens easy from our phones. Additionally, the app could also optionally prompt for approval when a merchant charges a token, to ensure there was no mistake or the token wasn’t somehow hijacked2.

Stand back, I don’t know crypto

I’m certain smarter people than myself could make a really secure design, but I’ll entertain you by stumbling around with mine. The implementation could look something like Persona. The tokens passed around could be JWTs. It could follow something like these steps:

  1. A merchant could ask a user account for a token, including details like items purchased, total amount, and a merchant ID, with the blob signed by their private key.
  2. The user sees the charge request, sees the details match the signed blob, and approves it.
  3. The user’s account bundles the original request and the user’s ID into a blob signed with the user’s private key. This JWT would be sent to the merchant account.
  4. The merchant account would then submit the token to the credit company.
  5. The credit company would verify the user’s blob against their current public key, and verify the merchant’s blob against their current public key.
  6. Optionally request final approval from user.
  7. Transfer specified amount of money from user’s account to merchant’s account.

It would take a lot of work to move the world over to this system, but the end result should be much more secure. It should mean much less fraud, and much fewer stories like what recently happened with Target. Can we please do this?

  1. Or, I have no idea what I’m talking about. 

  2. The design reduces the risk of a stolen token, since it’s generated for a specific merchant. However, it could be that a hacker gets control of a merchant account, or their private key, and can claim to be the merchant. 

Mar 25 2014

Your Password is Insecure

We know that you should have a unique, sufficiently-long, sufficiently-randomized password for every property that requires one. We also know that you most likely don’t do this. There’s no way we’re going to change users’ habits. This is why we need to get rid of passwords.1

You may think the danger is someone guessing your password at your bank, or your email account. But those websites have teams of professionals who spend their whole working day keeping out hackers. That’s not where the danger starts. The danger starts at a tiny e-commerce site, or webforum, or other small-scale site. Some site where you’d think “I don’t care if this account is stolen.” Those are the dangerous sites. Even if you think your password is a pretty good one, because it doesn’t contain any personalized information, and looks like gibberish: if you use the same password, then your password is as weak as the weakest site you use it at.

What really happens: a mom-and-pop shop that sells honey decides to sell more via a website, and has you log in to remember your shipping address. They’re not security experts. They didn’t hire any either. A hacker aims for sites like those. The hacker only has to get past the minimal security of Honey Buns, to find a list of e-mails and passwords. Maybe the passwords aren’t even hashed; they’re just sitting there in plain text. You shouldn’t be worried that the hacker can ship an insane amount of honey to your house. They wouldn’t bother. Instead, they will take that list, and try each e-mail/password combo on important sites: Wells Fargo, Bank of America, Gmail, Paypal, etc. You used the same email and password on one of those sites as you did with Honey Buns? Then the hacker has just successfully logged in as you, and it mostly looks like a normal login. They then transfer money to their account, and carry on.2

  1. I’ve been explaining this to anyone who has asked me about Persona and passwords, and figured it’d be nice to have it in a linkable quotable location. 

  2. Of course, those sites try to protect against this too. They might notice the IP address is from a completely different part of the world. And they might prevent dangerous actions from that IP until you’ve confirmed another e-mail challenge. But the point still stands. 

Sep 30 2013


A couple months ago, I was blagging on about logging libraries in nodejs. After pointing out how annoying it is to use logging libraries with 3rd party modules, I declared that all modules should simply use console.log(), and let applications hijack the console. Then, I looked around the npms, and couldn’t find an excellent example of a logging library that embraced that idea.

So, clearly, that meant I needed to write my own. It’s called intel. Why another logging library? Well duh, cause this one is better.

Loggers get names

Before nodejs, I came from Pythonia. Over yonder, they have an excellent logging module as part of the standard library. It’s glorious. It uses hierarchical named loggers. Winston, the library we currently use in Persona, while being perfectly awesome, doesn’t have support for this. It does allow you to define Containers and Categories, but it’s not as powerful as I’d like.

Specifically, intel adds 2 noteworthy features to Loggers: hierarchy and humane setup.

  1. Loggers use a hierarchy based on their names: a logger named foo.bar is a child of the logger named foo. When a logger receives a log message, after handling it itself, it passes the message up to its parents.

    For instance, you could decide that all log messages from foo should go to the terminal. However, your foo/bar.js module has some critical code you want to keep tabs on. You could add a handler to the foo.bar logger that sends all messages of WARN and greater to email. After that handler runs, foo.bar will hand the message to its parent foo, which also sends it to the terminal.

  2. Of the other libraries I could find that supported multiple named loggers, they all required that you instantiate the loggers with all options ahead of time. This adds friction to having a named logger per module in your app. Instead, intel makes it super easy for you to get a new named logger.

     var logger = intel.getLogger('foo.bar');

    You don’t have to have added any handlers to your newly minted logger. It will just pass the messages right up to its parents. The messages will still keep the name of the originating logger, though.
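The propagation idea can be shown with a toy implementation. This is an illustration of name-based hierarchy only, not intel’s actual API or internals.

```javascript
// Toy hierarchical loggers: 'foo.bar' propagates to 'foo', and so on.
var loggers = {};
var records = [];

function getLogger(name) {
  if (!loggers[name]) loggers[name] = { name: name, handlers: [] };
  return loggers[name];
}

function log(logger, message, origin) {
  // messages keep the name of the originating logger as they bubble up
  origin = origin || logger.name;
  logger.handlers.forEach(function(h) { h(origin, message); });
  var dot = logger.name.lastIndexOf('.');
  if (dot !== -1) log(getLogger(logger.name.slice(0, dot)), message, origin);
}

// A handler on 'foo' sees everything logged in its subtree.
getLogger('foo').handlers.push(function(name, msg) {
  records.push(name + ': ' + msg);
});

log(getLogger('foo.bar'), 'hello');
```

Note that foo.bar itself has no handlers, yet the message still reaches foo with its original name attached.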

Powerful configuration

Named loggers allow for really powerful yet easy-to-use logging. Combined with a little bit of configuration, it’s all magical. You can set up a couple root loggers, with various levels pumping messages to various handlers (Console, File, Email, Database, etc). You can see an example in the docs.

Infiltrating the console

The motivating reason I started intel was to do exactly this. I want my apps to have the power to configure logging just the way I want, and I want all my dependencies to play along with my logging rules. So, after you setup your loggers and handlers, you can inject intel into the global console object, and watch as any dependencies that use console.log follow your rules, and automatically get assigned the correct names.

 // in express.js
 console.log('new request'); // automatically gets assigned to the appropriate logger.


I’m starting with a 0.1 release of intel. Any bugs or feature requests can be filed in the issue tracker. I don’t want intel to be one of those libraries that is forever sub-1.0. After some use in the wild, with bugs being fixed and possibly APIs being made better, I’d like to get to a 1.0 soon.

So, try swapping out your current logger for intel. Name some loggers. You’ll come around.

Aug 8 2013

Gmail Bridge for Persona

Since shifting to the Identity team last year, I’ve been working hard on making Persona a true solution to the login problem of the web. As I said then:

If we do our job right, eventually when my friends ask me what I do, I can say: I helped make it so you no longer need to use passwords everywhere. I helped make your online identity more secure. I helped make signing into the Internet awesomer.

We’re getting closer.

What is the Gmail Bridge?

Today, we’re announcing to the world that our Gmail Identity Bridge is online. Excuse me. What? No, I’m fine. It’s alright, it’s actually quite simple.

The way Persona normally works, after checking to see if your email provider natively supports the protocol, is that Persona will fall back to what we call a secondary provider. This is the point where most users end up creating a password for Persona, and then going to their email to verify to us that they really own their email address. If the email provider did support the protocol, they would get sent over to them to authenticate, and we’d step out of the way.

So, we made an Identity Bridge that we host, which uses Google’s OpenID endpoint to verify the user. The experience is pretty much exactly what it should feel like if there were native support from Google.

Why this matters

With both Gmail and Yahoo bridges online, over half of all users are just a couple clicks away from logging in with Persona.

So how does this affect you? If you have a website that has user accounts, you can switch to using Persona as your authentication system. In most cases, it should be a better experience for your users, and easier for you.

If you don’t have a website, you can still help. Find a website you log in to frequently, and ask them to implement Persona. Tell them about this new bridging. Push for the change.

Soon, everyone will notice: we made signing into the Internet awesomer.

Jul 25 2013

console.log() all the things!

Let’s talk about logging. We like logging things. Logging to console, to files, to syslog, maybe even sending some over email. Formatting our logging is pretty fun. Oh, the colors! Of course, we must colorize our logs. Or not, if you’re colorblind. But there’s a problem with logging. There is? Yup. We can’t agree how everything should log its logs.

Let’s look at this from two levels: libraries and apps.

If you follow The Node Way, then you make and use focused packages or libraries from npm to accomplish specific tasks. Sometimes, those tasks need to tell the developer what happened, or otherwise record for eternity their minor quibbles. So we end up with libraries that want to log things.

The modules are no good without plugging them together to make an app. Many apps are web apps, but they could be anything, like flying nodecopters or some such. So you have a bunch of app specific code tying together libraries, and you want to log all over the place. You want to know response codes, response times, when errors occur, when people attack your app, and when unicorns invade.

So what’s the problem?

At the end of the day, you want to ship an app. We all ship apps. And while you were devving away on your machine, logging to console was perfectly fine. But once you start shipping, you can’t just watch the console blast by on hundreds of production machines like The Matrix. This is typically when you decide that you need a configurable logging library, so you can rotate log files, send some to syslog, email exception logs, and fill them full of colors.

To use a real example, we’ll play with winston. To use winston, we create a logger, specify some transports, and then pass the logger all over the application.

const winston = require('winston');
var logger = new (winston.Logger)({ transports: [
    new (winston.transports.File)({
        filename: '/var/log/app.log',
        colorize: true
    }),
    new (winston.transports.Console)({ colorize: true })
] });
module.exports = logger;

Then, elsewhere, you would require that module we just defined, and log stuff.

const logger = require('./lib/logger');
logger.info('blast off');

This works for a while in your own app, but you’ll notice that you can’t make any of your dependencies use your logger. It’s also pretty terrible when you decide you want to uplift one of your lib modules into a standalone package. You realize you still want the logging messages, but now you can’t depend on the app giving you a logger. There goes another unicorn.

We’re never going to agree on a library that all apps and libs should use, and we shouldn’t! Competition, blah blah, etc. I’ll walk through probably the most obvious solution, show why it’s not good enough, and then propose the real solution.

Log, log, pass?

Libraries could accept a logger option. This works fine when the library provides a constructor. It’s terrible when the library simply provides a handful of static utility functions. Those that do have constructors could allow a logger option, but default to console, and still benefit from the real solution below.
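The logger-option pattern looks something like this. Fetcher is a hypothetical library, invented purely for illustration:

```javascript
// A library constructor accepting an optional logger, defaulting to console.
function Fetcher(options) {
  options = options || {};
  this.logger = options.logger || console; // fall back to the global console
}

Fetcher.prototype.fetch = function(url) {
  this.logger.info('fetching ' + url);
  return 'response from ' + url;
};
```

An app can pass in its fancy logger, while everyone else gets console for free.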

I propose, instead, that all libraries just use console.log and logging libraries overload console.log. Well, clearly, overload all of console. Craziness? Maybe, but then we already get paid to write JavaScript.


In node.js, the console object comes with debug, log, info, warn, and error methods already. So a library can depend on this universal global, and log things at the correct levels. The application, at the main module, can create a fancy pants logger, and overload console with it. Now, all library code is pumping its logs through your fancy pants, and is none the wiser. How fancy.

Here’s how we’d do that with our winston logger:

const winston = require('winston');

var logger = new (winston.Logger)({
  // transports and whatever loggy stuffs
});

var log = console.log;
console.log = function hijacked_log(level) {
  if (arguments.length > 1 && level in this) {
    log.apply(this, arguments);
  } else {
    // no level was passed; treat the whole call as an 'info' message
    var args = Array.prototype.slice.call(arguments);
    args.unshift('info');
    log.apply(this, args);
  }
};

So say we all?

Oct 29 2012

An Expandable PC

Recently, there’s been this idea that we’re in a “post-PC” era. That the PC is no longer relevant, it’s all mobile now. I’d argue that the most “PC” PC I’ve ever had is my phone: it’s a computer I have on my person at all times. And while I agree that our PCs are now these hand-held devices, I’ve always loved having my monster desktop ready to crunch all computing challenges I could throw at it.

As computers have gotten smaller, we’ve also seen that desktop computer performance doesn’t need to improve any more. When friends ask me what to look for in a laptop, I tell them the processor and such don’t matter; look for battery life, weight, and quality of input devices (keyboard and trackpad). We just don’t need more power to answer our email, click on Like buttons, watch cat videos, and fill out spreadsheets. The power on hand-held devices, however, is starting to reach that threshold where it’s adequate to do everything as well.

So when I see other people mention the same ideas bouncing around in my head, I’m sure we have to be getting close:

Jeff Atwood:

Our phones are now so damn fast and capable as personal computers that I’m starting to wonder why I don’t just use the thing I always have in my pocket as my “laptop”, plugging it into a keyboard and display as necessary.

David Pierce:

I can drop my Series 7 tablet into a dock, add a Bluetooth mouse and keyboard, and connect a monitor — poof, I’ve got a full-fledged dual-monitor setup going. When I want to go somewhere, I just pick the device up out of the dock, and walk out the door tablet in hand.

This is exactly what I want. I love my smartphone, because it’s always with me. And I love my desktop because it’s so much easier to type on a full keyboard, have decent speakers, and use multiple big monitors. I don’t necessarily need the giant Tower part in the middle1. What if my phone or my tablet was the computer at my desktop, with all the useful peripherals plugged into it?

Microsoft’s Surface and Windows 8 feel like a prime candidate for this experiment. It’s still got Windows, so I can still do my usual work of testing in browsers, writing code, playing with git, and the like. There would be no obstacles to productivity even, since I’ll be using my laptop and monitors. And then, when I’m done, and want to go into the living room, or hang out in Starbucks to get my people dose for the week, I can just pick it up and go.

I realize that many people have been doing that with laptops for years, but I have never been impressed by it. I certainly don’t bring my laptop to the couch to casually use between commercials. And the extra space a laptop needs means more to carry (and a power cord) when I leave the house. Plus it weighs at least double, more like quadruple, what a tablet weighs.

The Surface is now out, and while it remains to be seen if it is the right fit for this expandable PC, it certainly looks like the closest product yet. We’re not too far away from a time when tablets replace everything.

  1. I do, actually, since I compile code, and run massive test suites, and I don’t want those going any slower than they do. Plus, gaming. 

Feb 21 2012

The Shipyard Mindset

I’ve been working quite a bit on this little JavaScript MVC framework called Shipyard. Those who know me may recall that I used to write a lot about MooTools, and may wonder why I’ve moved off it and am writing my own framework instead. I figured I’d take the time to explain why I felt the need existed for Shipyard, and the goals it tries to accomplish:

  • Being truly modular
  • Command line tests run after each save
  • Easy access to data from various sources
  • Auto-updating views

Let’s get to it.


Truly Modular

JavaScript has a tendency to turn into spaghetti pretty quickly: between the callback parties and people saying it’s “just a scripting language”, applications tend to lack structure. MooTools has had a modular design from the beginning, by separating pieces of functionality into individual Classes. This concept has carried over into Shipyard, but Shipyard does so much more.

With other frameworks, such as MooTools or YUI, developers are asked to pick the components that they want to use when downloading the library. The goal is that people only ever download the JavaScript they need for their application. Unfortunately, people usually just download the full “default” build, that contains everything, because they don’t know what they want yet. It’s certainly a pain to try to be smart about your selections at first, and then later realize you need to download a couple more modules (plus all their dependencies) mid-development. So most people download the entire thing, and never chop out the unneeded afterwards.

Shipyard says that you should download the entire thing from the start. Download the entire repo, all the modules. As you write your application, you specify your dependencies naturally in each module (using require), and so you never need to look at a list and pick what you think you might one day sort of use. It’s all there on your computer, and you just use it naturally. When it comes time to ship to production, the build step (you’re already minimizing anyway, right?) that comes with Shipyard only bundles the dependencies you specifically used. Shipyard takes an active stance in reducing the amount of wasted bytes that your users will have to download.


Command Line Tests

Testing is great. It’s the law. It’d be a good idea. Something like that, right? In many other frameworks, it’s very easy to write test suites for your applications, yet JavaScript applications are strikingly absent of tests. Part of the reason is that many test frameworks make it difficult to test. I need to be able to test with the simplest of commands, even on a pre-commit hook, or per save. That means testing needs to be easy, and fast. If it’s not, it just won’t happen.

Automatic tests run after each save in JavaScript, you say? But it’s hard to test JavaScript from a command line, because you need to test in browsers, right? Well, yes, you should do that too, but so much of our JavaScript applications nowadays is code that has nothing to do with the browser. So much of it can be isolated and tested in units. And while you should browser test your application as well, if a test breaks on the command-line, you know it will break in the browser before even having to load the test page. Fail faster.

Shipyard helps do this by making its test runner run with nodejs. Because Shipyard strictly uses its dom module whenever touching anything global and DOM-related, the test runner can back that module with jsdom under nodejs. So you can actually test expected DOM behavior from the command line, each time you save your file.

You could even put your JavaScript test suite on CI, similar to how Shipyard’s own test suite runs on travis-ci with every commit.

Model Syncs

Getting into the more MVC part of Shipyard, I had explored this idea of various sync locations with Models before. The idea is that applications have data, and we structure it with Models. The data comes from somewhere, and while it used to only ever come from the host server, increasingly it is coming from various sources. A common example that would benefit from this is an application with offline mode. You need the data of your models to sync with the server, but if the user is offline, you want the data to save locally, perhaps in localStorage or IndexedDB, and then be able to send the data to the server at a later point. Perhaps you want to cache the data in localStorage, and so when the user comes back to your site, you first look there, and then fall back to asking the server for the data.

It should be as simple as:

Recipe.find().addListener('complete', function(recipes) {
    if (recipes.length === 0) {
        Recipe.find({ using: 'server' }).addListener('complete', listRecipes);
    } else {
        listRecipes(recipes);
    }
});
Shipyard makes it so.
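Under the hood, a local-first lookup with a server fallback boils down to something like this plain-JavaScript sketch. The cache object and fetchFromServer function are invented for illustration; this is not Shipyard’s actual sync API:

```javascript
// A hypothetical in-memory cache standing in for localStorage/IndexedDB.
var cache = {};

function fetchFromServer(key, callback) {
    // Stand-in for a real XHR; in a browser this would hit the host server.
    callback([{ id: 1, title: 'Fox Stew' }]);
}

// Look locally first, fall back to the server, and cache the result
// so the next lookup never leaves the device.
function findWithFallback(key, callback) {
    if (cache[key] && cache[key].length) {
        callback(cache[key]);
    } else {
        fetchFromServer(key, function(results) {
            cache[key] = results;
            callback(results);
        });
    }
}

findWithFallback('recipes', function(recipes) {
    console.log('got ' + recipes.length + ' recipes');
});
```

The value of baking this into the framework is that model code calls one find method, and the storage decision stays out of application logic.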

Automatic Views

The DOM sucks. It’s powerful, but it’s complicated and has inconsistencies. I don’t think developers should have to touch the DOM, in most cases. Instead, Shipyard exposes Views. They’re kind of like Elements, but far more powerful. Specifically, a View can be made up of several elements without bothering you about it. Likewise, you don’t have to fret over which of those elements needs event handlers; you just listen to events from the View itself.

Even cooler is the idea of binding data to Views. My first foray into JavaScript MVC had me re-rendering entire sections of the DOM when things changed. Not only is that weak performance-wise, but it’s boilerplate that I had to worry about. You can end up with several places in your UI that reflect the same data, shown in slightly different ways, and then you’re left with remembering each of those places whenever you offer a new way to alter the data. Of course the UI should update when the data changes; I just shouldn’t have to remember that myself. I am only human, after all. Other frameworks have this (I first met it when using Adobe’s Flex Builder), and so does Shipyard.
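The binding idea can be illustrated with a tiny observable in plain JavaScript. This is a sketch of the concept only, with invented names, not Shipyard’s actual API:

```javascript
// A minimal observable property: every registered listener is
// notified whenever the value changes.
function Observable(value) {
    this.value = value;
    this.listeners = [];
}

Observable.prototype.set = function(value) {
    this.value = value;
    for (var i = 0; i < this.listeners.length; i++) {
        this.listeners[i](value);
    }
};

Observable.prototype.bind = function(listener) {
    this.listeners.push(listener);
    listener(this.value); // sync the "view" immediately
};

// Two "views" of the same data stay in sync without manual wiring.
var title = new Observable('Cuddly Foxes');
var header = '', sidebar = '';
title.bind(function(v) { header = v; });
title.bind(function(v) { sidebar = v.toUpperCase(); });
title.set('Shipyard');
// header is now 'Shipyard', sidebar is 'SHIPYARD'
```

Once every place in the UI is a listener, there is nothing left for you to remember when a new code path alters the data.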


If you were nodding when reading the above, then get on the Shipyard train. Development continues strongly, it already powers a big application, and I hope it works out for you.

Aug 4 2011

Good Things Come to Those Who Ask

Maybe you’ve heard the saying “good things come to those who wait.” That’s all well and good, but I’d like to take some time to point out that good things also come to those who look for them. I changed jobs at the beginning of this year, and several people I know were curious as to how I managed it. It’s because I asked. I sought. For months.1

In and Out

Most of my professional life has been like this. At my first job, I happened to work at In’N’Out. I thought I was going to be in the service industry for the rest of my life. It was my living, and every pay raise meant more of a living. It took me 4 months to get 3 promotions. The next promotion took longer only because the skill required was much tougher. There were some other employees around me who would complain and murmur. Most of them had been working there much longer than I had, and yet I quickly passed them by. They had no idea why I had been promoted so quickly.

I asked for them.

In order to get promotions, you had to work on the next skill. You had to be good at it (which only really took hard work), and you had to have a manager write up a review. You needed 4 passing reviews to be eligible for the next promotion. So every day, at a less busy hour, I would ask my manager to put me on the next station, and I would ask for a review. They would often forget to pay attention enough to give me a review, but since I would ask so often, I rather quickly gained all my reviews for each level. The other employees? They just sat around, some working hard, some hardly working, thinking that the manager would one day put them on the next position. They thought they’d get their promotions eventually, by waiting.

Entering the Tech World

Eventually, I started to wonder if I could put my programming knowledge to use in a professional way. I scoured Craigslist, and eventually found a nice listing that didn’t require me to have a degree, instead only requiring that I pass some programming challenges. I showed up and passed all the challenges. However, the CEO was busy, and didn’t pay much attention to my application status. So, I called the office’s executive assistant and asked them to remind the CEO of my application. Every day. Finally, one of those days, the assistant replied that my persistence had paid off: the CEO had considered my application, and was preparing an offer letter.

'Expect to be a slave'

Fast forward a little, and I was at my previous company. I loved it there. The guys rock, and my job was almost always interesting. Only a few things about it killed me. It tried to be a SaaS company, and I had done a lot of work on that application. I love application development. However, since subscription revenue was growing too slowly, the company had to revert to servicing clients to pay the bills. That meant my job had largely changed from application development to brochure-website CSS development. I personally find that less interesting, and the clients tend to be frustrating. At the same time, while we had put a lot of work into the software platform, very few people were using it, and hardly any were appreciating it.

I spoke to someone during the summer of last year about wanting to be in a different place professionally. I had hopes and dreams about how we as a company (myself included) could focus on areas that would make us better, and have more fun at the same time. I was advised that I’m young, and I’ve got several years to go before I should expect to be at a good place. I’m in the years of having to slave away. That isn’t the first time I’ve heard such a notion.

As I left school and headed into the working class, my father mentioned that right now I should expect to be a slave to my work, so that I could eventually be in a place where I don’t have to be. Friends, roommates, and colleagues have tried to pound this idea into me ever since, and I’ve just been too stubborn to believe it.

Moving On

After a severe car accident, the combination of the higher bills, my boredom, and uncertainty of my job security got me looking for a different job. I researched companies that I would love to work at, sent out resumes, customized cover letters, and did plenty of phone interviews. After several months of looking, I started to wonder if I should just stay content with what I had, because it takes a lot of effort to continuously job search.

Thankfully, I kept looking and asking, and found a new rockin’ place to work at Mozilla. I started contracting in January 2011, and was hired full-time in April. I work on Add-on Builder, so my desire to make an application that many people use is satisfied. I work with very awesome people, and for a company that wants its products to be the awesomest they can be.

Don’t be afraid to ask

I don’t say all of this to brag, or say “look at me”. I want to give an example of how it is possible to get what you want. People who get what they want, get it because they aren’t quiet about wanting it. Don’t be afraid to ask about what you need to do to get to the position you want. Ask your manager what you could improve on to get a promotion. Or ask that company you’ve been eyeing to hire you. You have to assume you are a good fit for the job, and then ask for it2.

  1. What follows is a bit of life story, including only the bits where I asked and asked for something better. 

  2. It should go without saying, but I must say it anyways: you must also be hard working and decent at what you do. 

Sep 29 2010

The Galaxy in Review

I’ve had my Galaxy S (the T-Mobile Vibrant model) for about two months now, and I wanted to spell out my impressions.

The Screen

The screen is gorgeous. Granted, it’s not as bright as the iPhone 4, but I, personally, don’t really notice the difference in the Retina Display that the 4 carries. Its Super AMOLED screen beautifully renders the copy of Avatar that comes on the phone, and the pretty wallpapers that come on the device. Contrary to my wife’s myTouch, I can operate with the brightness at the lowest setting most of the time. I only ever turn it up when I’m outside with my sunglasses on. The good thing is, turning the brightness to the max makes it very easy to see everything on the screen while boldly confronting the elements.

It’s also a fantastic 4 inches. Which is, again, great for viewing movies, but also great at viewing websites, or just plain reading. Reading is quite the pleasure on this phone. I’ve picked up a nightly habit of reading from my Instapaper account1 before falling asleep.

I’m one of those people who doesn’t break his stuff: none of my previous phones broke, and I only have a small handful of scratches from dropping my Sidekick once or twice. So I don’t have a screen protector. The screen still has no scratches at all. Nothing. According to more abusive readers, it seems to stand up to a lot of punishment.


Multi-Touch

As with all Android phones since Google turned on multi-touch support, the Galaxy supports it. And from the videos I’ve seen of it on the Nexus One, it seems like the Galaxy does it better. The Nexus showed some issues with tracking multi-touch when your fingers crossed axes. No hiccups here. Pinching and pushing feels like gliding.

Touch Buttons

So far the touch buttons seem to work out just fine. I don’t have the same issue that Nexus One owners have reported, where touching near the bottom of the screen sometimes triggers the touch buttons. My only gripe is that sometimes, mostly when reading at night, the LEDs turn off after a few seconds without input, and I need to touch the main screen before they reappear. I fail too often when guessing where a button is while the LEDs are off. But I guess I can’t call that much of a complaint.

Otherwise, they’ve been perfect.


Size and Feel

Again, with the 4 inches, it feels great. It doesn’t feel too big; it feels just right. Picking up an iPhone 4 feels just a little too small. I can’t imagine typing on a smaller screen.

It’s also ultra thin, only 0.4 inches. It fits perfectly in my pocket.


Weight

The Galaxy weighs a mere 4.16 ounces. According to Apple, the iPhone 4 is only 0.6 ounces heavier, but I recently compared both at the Apple store, and the iPhone felt noticeably heavier. Granted, the iPhone also feels more solid. Some people really like that, but of the two, I enjoy a lighter device, and have no worries about breaking my Galaxy.


TouchWiz

TouchWiz is what Samsung calls their custom UI. While I don’t understand why hardware companies try to write software better than a software company, this one isn’t too terrible. They have a cool gesture in the Contacts list: swipe left on a contact to call, and swipe right to text. The way the application drawer works is pretty neat as well, with multiple pages of apps instead of one giant scrollable view.

That’s all the positive words I can muster. I said it wasn’t terrible, but I almost immediately switched to a different home launcher.2

I’m not the biggest fan of the entire UI being given a blue theme, but I don’t hate it that much either. That’s mostly just minor aesthetic taste. Yet, with their questionable surgery, they’ve ruined some of the core Android applications. Contacts no longer has a Favorites tab. Instead, they have an Updates tab. They also ruined the Calendar app. I mean, it’s not horrible, but the way they show full-day events is hard to notice, and in the stock calendar, you could tap on an event in the week view to reveal a small tooltip of the title. Samsung decided you should skip that tooltip.

Speaking of the Updates tab, it seems that most manufacturers feel the need to make the Android UI integrate more with social media. It makes me die a little with each new “custom UI”. Maybe less nerdy people than myself will just setup the Feeds & Updates widget to pull in Facebook and Twitter, but I find it ugly and irrelevant. I don’t need my social media bleeding through every orifice of my phone. When I want Twitter, I’ll launch the Twitter app.

They also stupidly took away several of the default widgets in favor of their own concoctions, which suck hard. Most importantly, I miss having the stock calendar widget, which shows you the next event on your calendar in an attractive way. In its place is the abysmal Daily Briefing, which, alongside your recent calendar events, pulls in weather from AccuWeather, stocks from whatever, and mobile news from Yahoo, and looks terrible to boot. Crazier still, these widgets are only available from the TwLauncher3. Having switched to ADWLauncher, I still don’t get the stock Android widgets, nor Samsung’s crappy replacements. I had to download a simple calendar widget, and I honestly couldn’t find one that looked similar to the stock one.


Crapware

The phone comes with a lot of crapware installed by default. Some applications are free trials that want you to buy the full version; some are just free apps that I don’t particularly need. None of them can be deleted. And yet, it doesn’t bother me that much, because I’ve found workarounds, and I hardly pop open the Applications Drawer anyways.

Ease of Rooting

For some, part of the appeal of Android is gaining root access and doing whatever you want with your device. This isn’t something that a casual consumer would do, but considering that some phones try vigorously to prevent rooting, it’s worth noting that rooting a Samsung Vibrant is beyond easy. It’s almost as if Samsung was fully expecting people to do so.

Google Suite

Gmail is my portal to all my email; I see all the email from all my various email accounts in Gmail. So I’m very excited that Gmail integrates so well in Android. Gmail’s list view is excellent and looks nice to boot. The email view leaves something to be desired, though. Specifically, the reply and forward buttons are very weird to use: they don’t look or feel like buttons, and tapping them doesn’t feel smooth. Yet, I can’t stress how pleased I am with my phone’s integration with Gmail. My previous phone, a Sidekick 2008, tried to connect through IMAP to my Gmail account. It tried like the Little Engine That Could, or rather, like the Sidekick That Couldn’t. Even with IMAP, it constantly would not sync my read and deleted mail from my computer, so I would have to deal with it on my phone and on the computer. With the Galaxy, even though my phone fetches the email immediately, it also immediately removes it if I dealt with it on my desktop computer. Update: It seems with Android 2.2 and a new version of Gmail, the message view is much more pleasant.

My wife and I share our various calendars through Google Calendar. The stock Android calendar, which my wife gets to use, is great. It has a pleasing aesthetic, shows a tooltip when you tap an event for a summary, and has a decent widget. For mine, however, Samsung seemed to think they could do better. So they inverted the colors (the background is now all black), removed the tooltip summary, and made all-day events harder to recognize. Sigh.

The other Google apps work as expected: I’ve been using GTalk more and more, and had been using Google Voice as my business line for a while. Android offers to easily manage all your regular voice mail through Google Voice as well, and automatically uses Voice if I make an international call. Awesome.

Maps is maps. And, boy, is it maps. I tend to keep my printer dry of ink, just to prove that I don’t need to print as much as I think I do, so I hated having to look up directions, then copy and paste the directions into an email and send it to my Sidekick. Sometimes, my Sidekick wouldn’t sync the new mail until I had painfully found my way to my destination through trial and error. Now with my Galaxy I can get directions to anywhere, at anytime. More importantly, I prefer to look at the map view, since I’m a visually oriented guy. Directions are too abstract; I like to see where I am. Ta-da.


Calculator

This is really minor, but I noticed it when I was out with some friends that have iPhones. We had gotten a bill after eating some excellent sushi, and they both pulled out their phones to split the bill. They both, in that short minute of calculating, cursed the iPhone Calculator app. In contrast, I pulled out my phone, intrigued by their grief, and checked the calculator that came with my Galaxy. It was more than I could ask for. I’m used to most basic calculator apps providing numbers, 4 operators, and a single display of the most recently outputted number. To my delight, this app contained a scrolling output area showing previous calculations, and listed the entire equation out, similar to the TI-83 I used in calculus class. Turn the phone horizontal, and the Calculator goes all Scientific Mode on you.


Why Android

I think iOS is a great piece of software. The iPhone really is an awesome device. Yet, now that Android hardware can compare to Apple hardware, I’m also glad that my phone runs Android. Why? Is it because of the famed “openness”? Well, not primarily.

The Android platform is certainly more open. Not 100% open, but far more open than the Apple App Store. I like that I can side-load apps without jailbreaking or rooting, because Google lets me. I’m glad that developers can create apps that compete with the default Android applications, providing me with possibly better choices than the defaults, like a better SMS client, or a browser with tabs. More so, I’m glad that I can write my own software for my phone without worrying that some overlord is going to deny my hard work, nor do I have to pay for a developer license to do it. Yet, as I said, the openness isn’t what I like most about Android.

I actually love how the Android OS lets users interact with applications. Here are the things that really shine for me:

  • Notifications: Everyone always mentions Android notifications in a good light, and that’s because they work great. They’re out of the way, only in the top status bar, and can be accessed from inside applications by just pulling down from the top of the phone. While I like the red pills in iOS, the notifications in Android let you peek at the title of the notification and clear them out if they turn out to be not as important. In iOS, you have no such liberty.

  • The Activity Stack and Back: In iOS, it’s up to the application developer to add a Back button to any given screen. And usually, it doesn’t truly mean “Back”, but more like “Up One Folder”. Worse still, if an application opens another, like Safari or Mail, there is no “Back” to the previous application4. On Android, the Back button goes to previous applications and Activities. This is because in Android, every action is an Activity. Activities can start new ones, which get added to the top of the Activity stack until an Activity has finished. Pressing Back finishes the current Activity and heads to the previously active one.

  • Apps can communicate with Intents: Providing a generic way for applications to send messages to each other via Intents means that no matter what RSS Reader application you use, and whether you use Instapaper or ReadItLater, they can talk to each other on your phone. So if you use something a little less popular, you aren’t doomed to never having other applications support it.

  • Real honest-to-goodness multi-tasking: It can really multi-task. No, not that fake crap iOS claims to do. Real multi-tasking.
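The Activity stack described above is, at heart, a simple pushdown stack. A toy model in JavaScript (all names invented for illustration; real Android Activities are Java objects with lifecycles):

```javascript
// A toy model of Android's Activity stack: starting an Activity
// pushes it; pressing Back finishes the top one and reveals
// whatever was underneath, even across applications.
var activityStack = [];

function startActivity(name) {
    activityStack.push(name);
}

function pressBack() {
    activityStack.pop(); // finish the current Activity
    return activityStack[activityStack.length - 1] || 'home screen';
}

startActivity('RSS Reader');
startActivity('Browser'); // the reader opened a link
startActivity('Mail');    // the browser shared a page
pressBack(); // back to the Browser
pressBack(); // back to the RSS Reader
```

Because the stack spans applications, Back always means “where I just was,” not “up one folder” in whatever app happens to be open.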

I love the Galaxy.

  1. I currently access my articles through the Hard Copy app, from Tony Cosentini. 

  2. I don’t need 7 home screens, so I dropped down to 3, but the programmers determine the middle as screen 4, no matter what. Did no one ever teach them how to find the middle of something? 

  3. TouchWiz default launcher. 

  4. In iOS 4, if the application has been re-compiled to use fast-app switching, you can switch back to an app now. 
