
Jun 3 2014

Firefox Accounts OAuth Explorations

As we build Firefox Accounts, a key part of the whole experience is allowing a user to divvy out information to apps. We’ll be doing so with an OAuthish experience.

OAuth2

The first obvious place to look was the OAuth2 spec. We’ve based most of our experience on this model. Using the spec, a flow for an imaginary website Cuddly Foxes would look like this:

  1. Cuddly Foxes would register with our OAuth server, supplying a redirect_uri, and we’ll give them back a client_id and client_secret.
  2. Cuddly Foxes will make their login button redirect the browser to our server, passing the client_id, a random state, and desired scope, such as profile:email.
  3. Our server will show the user some information about who is asking and what info is being asked for, and ask them to confirm. They can uncheck any scopes they don’t want to give out.
  4. The server will generate a code representing the current auth request, and redirect back to Cuddly Foxes’ redirect_uri, including the random state and a code parameter.
  5. Cuddly Foxes first verifies that the returned state is one they sent, and then sends the code back to our server, along with the client_secret they received at registration.
  6. Our server will verify the client_secret matches what is associated with the code, and then will send back a token and the scopes the token has been approved for.
  7. Cuddly Foxes would then use that token whenever asking for the user’s email address, from our profile server, since that’s the scope that was asked for.
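To make step 2 concrete, here’s roughly what building that redirect could look like. This is only a sketch: the endpoint path and all the values are illustrative, not our final API.

var querystring = require('querystring');

// A sketch of step 2, assuming a standard OAuth2-style authorization
// endpoint; the path and parameter values here are made up.
var authUrl = 'https://oauth.accounts.firefox.com/v1/authorization?' +
  querystring.stringify({
    client_id: '5901bd09376fadaa',  // from registration in step 1
    state: 'cf1ab8acfe5e8e04',      // random; checked again in step 5
    scope: 'profile:email'
  });
// Cuddly Foxes' login button sends the browser to authUrl.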

So far, standard OAuth2. The client_secret, code, and token are 32-byte random hex strings, and we store a hash of them in our database, reducing the damage done if the database is compromised.
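As a sketch of how those tokens could be minted and stored (assuming SHA-256 for the hash; the exact digest is an implementation detail):

var crypto = require('crypto');

// 32 random bytes, rendered as a 64-character hex string
var token = crypto.randomBytes(32).toString('hex');

// Only the hash is persisted, so a database snapshot reveals no usable tokens.
var hash = crypto.createHash('sha256').update(token).digest('hex');
saveToken({ tokenHash: hash, scopes: ['profile:email'] }); // saveToken is hypothetical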

Now, let’s add in another service provider: FoxCoin, the newest hotness in privacy-respecting crypto-currency. Cuddly Foxes wants to set up a recurring subscription to send new Foxes every month to users. That means they ask our OAuth server for a token with scopes ‘profile:email’ and ‘foxcoin’. With the token in hand, they ask the Profile server for the user’s email, providing said token as proof that they can receive profile information, and they receive it. But! The Profile server just received a token that it can use to access the user’s FoxCoin information, acting as Cuddly Foxes. Yikes!

Of course, we can assume the Profile server wouldn’t do anything so nefarious, but having that power is still dangerous. And imagine as we add in more 3rd-party attached services, which are inherently less trustworthy. Additionally, with the recent Heartbleed discovery in OpenSSL, we don’t want to trust TLS alone to protect the data from sniffing as it passes. So, passing around a Bearer token in plain text is unacceptable.1

OAuth2 HMAC

The next step was to consider using a secret token to sign a request, so that the original token is never revealed. This has been excellently explored already by the Hawk scheme. The short of it is that 2 parties who share a secret can sign the request with an HMAC, proving that the request and its payload came from one of them. The receiver just computes the same HMAC, and compares signatures. The original secret is never leaked to anyone. Many cookies were had by all.
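The core of the idea fits in a few lines. This sketch is heavily simplified; real Hawk covers timestamps, nonces, and payload hashes in a precisely specified normalized string.

var crypto = require('crypto');

// Both parties know `secret`; the HMAC covers the request parts that matter.
function sign(secret, method, path, host, ts, nonce) {
  var normalized = [method, path, host, ts, nonce].join('\n');
  return crypto.createHmac('sha256', secret)
               .update(normalized)
               .digest('base64');
}

// The receiver recomputes the HMAC and compares: a match proves the sender
// knows the secret, without the secret ever appearing on the wire.
var mac = sign('shared-secret', 'GET', '/v1/email',
               'profile.accounts.firefox.com', Date.now(), 'abc123');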

Adapting that to our OAuth flow, we would return a random token like before, and Cuddly Foxes would use it to generate a Hawk authorization header, and send it to our Profile server. The Profile server, not knowing the secret token, would tediously need to send the various bits of the request making up the signature, plus the authorization header, to our OAuth server. The OAuth server would look up the secret token, compute the HMAC, and return whether it was valid.2

This is an improvement, since the secret token is never visible on the wire, nor does the Profile server receive it. However, a downside is that for this to work, the OAuth server needs to keep the original secret token in plain text. Before, we were keeping a hashed copy of it, which meant that a snapshot of our database would not reveal everyone’s secret tokens. We didn’t like this disadvantage, and so continued to explore.

OAuth with Public Key Signing

We wanted to keep the request signature, since that doesn’t leak the secret to anyone else, while not having to retain the original secret ourselves. It turns out, there is a technology that does exactly this: asymmetric public key cryptography. However, using RSA or DSA keys has its problems: signing and verifying is slow, generating new keys is slow, and sending public keys with each request is a lot of bytes. That’s when my colleague Brian Warner brought up the newest hotness: elliptic curve public keys. Particularly, Ed25519. It’s super fast to create keys, signing and verifying are fast, and public keys are 32-byte strings. The secret keys are likewise 32 bytes, and completely random, so brute force guessing takes longer than any human could ever wait.
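To give a feel for how little ceremony is involved, here’s a sketch using the tweetnacl module. The library choice is mine for illustration; note that it expands the 32-byte seed into a 64-byte secret key internally.

var nacl = require('tweetnacl');

var keys = nacl.sign.keyPair();  // fast: no primality testing needed
// keys.publicKey is 32 bytes; keys.secretKey is the expanded 64-byte form

var msg = new Uint8Array(new Buffer('GET /v1/email', 'utf8'));
var sig = nacl.sign.detached(msg, keys.secretKey);

// Anyone holding only the public key can verify:
nacl.sign.detached.verify(msg, sig, keys.publicKey); // true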

So what’s that look like for Firefox Accounts? The updated flow looks like this:

  1. Stays the same.
  2. Stays the same.
  3. Stays the same.
  4. Stays the same.
  5. Cuddly Foxes first verifies that the returned state is one they sent. They generate a new ed25519 keypair for this user+scope, and then send the pubkey, the code, and the client_secret they received at registration to the server. This registers the public key with our OAuth service.
  6. Our server will verify the client_secret matches what is associated with the code, save the public key, and return the scopes that have been approved.
  7. Cuddly Foxes would then use that private key to sign a request asking for the user’s email address, from our Profile server, since that’s the scope that was asked for.

Afterwards, the Profile server can verify the signature by itself, since the request contains the public key. This removes the need for each attached service to figure out what parts of a request to forward to the OAuth server. It also means that all service providers will handle their own hash computing, reducing strain on our OAuth server. Once a signature is verified, the Profile server can simply ask the OAuth server what scopes are provided for that public key, and then act accordingly.

Here’s an example request:

GET /v1/email HTTP/1.1
Host: profile.accounts.firefox.com
Authorization: Gryphon pubkey="461d65b867d02ddf7f0d0bf3c2746c823605dec5e9f221ca7f451113fcddaf9f", ts="1400641081466", nonce="992022dd", sig="f1pIEz5y9sN6Bsc00iIy9YcEBFRLqCAtkTspvqQPb4FKUIMwrXxXiqBYXJbdAXc0FM1R6H9bdD+Pkx8klFUNCA=="

The signature proves that the request originated from the owner of the pubkey, and the payload hasn’t been modified.
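Verification on the Profile server could then look something like this sketch (again using tweetnacl for illustration). The signing string below is a stand-in; the real scheme’s normalized string is part of what still needs nailing down.

var nacl = require('tweetnacl');

// auth = { pubkey, ts, nonce, sig } parsed from the Authorization header
function verifyGryphon(auth, method, path, host) {
  var signingString = [method, path, host, auth.ts, auth.nonce].join('\n');
  return nacl.sign.detached.verify(
    new Uint8Array(new Buffer(signingString, 'utf8')),
    new Uint8Array(new Buffer(auth.sig, 'base64')),
    new Uint8Array(new Buffer(auth.pubkey, 'hex'))
  );
}
// If the signature checks out, the Profile server asks the OAuth server
// (a single lookup, no hashing on its end) which scopes the pubkey has.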

There be Gryphons

The authorization scheme in the example above is “Gryphon”. It was partly influenced by Hawk, but felt like a more powerful version. Mozilla has a habit of naming projects after mythological creatures. Most importantly, gryphons “are known for guarding treasures or priceless possessions.” Certainly, user data is a priceless possession.

Gryphon isn’t complete. It’s currently in a proof-of-concept stage. There’s a working branch of our oauth server using it. However, we’d like to get more eyeballs on it before feeling confident about shipping it. Are there pain points we’ve missed, or use cases not covered? Send me a comment, or write up some analysis and send me the link, or come chat in #fxa, or anything, really.


  1. This issue doesn’t appear in all OAuth models. The issue comes from us having multiple mutually-distrusting services, being gated by our OAuth server. We plan to allow clients, such as your website, to request data from a service provider, run by your digital neighbor, about a user.

    In most cases, all the data comes from the same entity that runs the OAuth server, and so there’s no worry that it will mishandle the power it gives itself. 

  2. A downside here is that this means the OAuth server is doing all hashing for all requests, which puts a requirement on our OAuth server using more resources. 

Apr 1 2014

Please Replace Credit Cards

Technology has greatly improved things this past decade. It’s peculiar that messaging nonsense has seen so much work, but something that quite literally costs people money continues to be so flawed. I’m talking about credit cards. It’s worth pointing out that I’m not a security researcher, just a concerned citizen.1

The flaw is a fundamental part of the design: for every charge, we must give the entire card number to the seller. That card number is everything. It gives all the information and power to charge as much money as the recipient wants. Holy carps. The sellers don’t charge as much as they want, because that would be illegal, and they’d lose their merchant account. Still, employees could keep the number and sell it. Or, more likely, as sellers keep a record of the number, hackers can steal them, and get all the moneys. Until you notice, report fraud, and the banks just swallow it.

One supposed fix is the Chip-and-PIN enhancement. This helps prevent copying of a card at a Point-of-Sale terminal. However, advances in magic (the Internet) have greatly increased the amount of online shopping. Want a new spiked collar for Fluffles? Just open your browser, and type in your credit card number. Nothing to worry about, I’m sure they won’t record it. Oh what’s that? An email from Exotic Collars saying their database was hacked, and they actually did have your credit card. Time to call the bank, and fix up all your auto-billing subscriptions.

You get a token, and you get a token…

We can try to create rules around how to store credit cards, but just like passwords, it’s really hard to do correctly. Also, just as with passwords, merchants should never receive such powerful information in the first place. A solution could be providing the merchant a one-time-use token authorized for a specific amount for a specific merchant. The merchant charges the token, and it then becomes useless. It’s impossible to charge more than the agreed-upon amount. Stealing the token is useless, because it only works for that merchant, it expires, and a credit account will only accept a given token once.

If the source card or key used to generate tokens is compromised, a user can contact their credit provider and generate a new private key. No merchants are affected, since they never had your key to begin with.

Recurring charges would be trickier. It’d require more cooperation among banks, standardizing some sort of unique account ID. Tokens could include the account ID, and merchants could safely hold onto that. When they need to make a new charge, they could request a new token for a certain amount. The user can approve the charge, which sends a new token, or perhaps mark that a certain amount from a specific merchant every so often is auto-approved. I’m sure plenty of things could be done here to make the user experience easy. And there’s incentive to make it as easy as possible: easier means users will spend more.

Credit accounts could provide apps to their users to make sending and approving tokens easy from our phones. Additionally, the app could optionally prompt for approval when a merchant charges a token, to ensure there was no mistake and the token wasn’t somehow hijacked2.

Stand back, I don’t know crypto

I’m certain smarter people than myself could make a really secure design, but I’ll entertain you by stumbling around with mine. The implementation could look something like Persona. The tokens passed around could be JWTs. It could follow something like these steps:

  1. A merchant could ask a user account for a token, including details like items purchased, total amount, and a merchant ID, with the blob signed by their private key.
  2. The user sees the charge request, sees the details match the signed blob, and approves it.
  3. The user’s account wraps the original request and the user’s ID into a blob signed with the user’s private key. This JWT would be sent to the merchant account.
  4. The merchant account would then submit the token to the credit company.
  5. The credit company would verify the user’s blob against the user’s current public key, and the merchant’s blob against the merchant’s.
  6. Optionally, request final approval from the user.
  7. Transfer the specified amount of money from the user’s account to the merchant’s account.
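Here’s a sketch of steps 1 through 5 using the jsonwebtoken module and RSA keys. Everything in it is illustrative: the payload fields are invented, and a real design would still need to pin down key distribution and revocation.

var crypto = require('crypto');
var jwt = require('jsonwebtoken');

function keypair() { // RSA for the sketch; any signing keys would do
  return crypto.generateKeyPairSync('rsa', {
    modulusLength: 2048,
    publicKeyEncoding: { type: 'spki', format: 'pem' },
    privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
  });
}
var merchant = keypair();
var user = keypair();

// 1. The merchant signs the charge request.
var request = jwt.sign(
  { merchantId: 'exotic-collars', amount: 1999, items: ['spiked collar'] },
  merchant.privateKey, { algorithm: 'RS256' });

// 2-3. The user approves, wrapping the request and their ID in their own
// signed blob. This is the one-time token sent to the merchant.
var token = jwt.sign(
  { userId: 'fluffles-owner', request: request },
  user.privateKey, { algorithm: 'RS256' });

// 4-5. The credit company verifies both layers against the public keys it
// has on file for each account before moving any money.
var outer = jwt.verify(token, user.publicKey, { algorithms: ['RS256'] });
jwt.verify(outer.request, merchant.publicKey, { algorithms: ['RS256'] });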

It would take a lot of work to move the world over to this system, but the end result should be much more secure. It should mean much less fraud, and far fewer stories like what recently happened with Target. Can we please do this?


  1. Or, I have no idea what I’m talking about. 

  2. The design reduces the risk of a stolen token, since it’s generated for a specific merchant. However, it could be that a hacker gets control of a merchant account, or their private key, and can claim to be the merchant. 

Mar 25 2014

Your Password is Insecure

We know that you should have a unique, sufficiently-long, sufficiently-randomized password for every property that requires one. We also know that you most likely don’t do this. There’s no way we’re going to change users’ habits. That’s why we need to get rid of passwords.1

You may think the danger lies in someone guessing your password at your bank, or your email account. But those websites have teams of professionals who spend their whole working day keeping out hackers. That’s not where the danger starts. The danger starts at a tiny e-commerce site, or webforum, or other small-scale site. Some site where you’d think “I don’t care if this account is stolen.” Those are the dangerous sites. Even if you think your password is a pretty good one, because it doesn’t contain any personalized information and looks like gibberish: if you use the same password, then your password is as weak as the weakest site you use it at.

What really happens: a mom-and-pop shop that sells honey decides to sell more via a website, and has you log in to remember your shipping address. They’re not security experts. They didn’t hire any either. A hacker aims for sites like those. The hacker only has to get past the minimal security of Honey Buns to find a list of e-mails and passwords. Maybe the passwords aren’t even hashed; they’re just sitting there in plain text. You shouldn’t be worried that the hacker can ship an insane amount of honey to your house. They wouldn’t bother. Instead, they will take that list, and try each e-mail/password combo on important sites: Wells Fargo, Bank of America, Gmail, Paypal, etc. You used the same email and password on one of those sites as you did with Honey Buns? Then the hacker has just successfully logged in as you, and it mostly looks like a normal login. They then transfer money to their account, and carry on.2


  1. I’ve been explaining this to anyone who has asked me about Persona and passwords, and figured it’d be nice to have it in a linkable quotable location. 

  2. Of course, those sites try to protect against this too. They might notice the IP address is from a completely different part of the world. And they might prevent dangerous actions from that IP until you’ve confirmed another e-mail challenge. But the point still stands. 

Mar 11 2014

Persona is dead, long live Persona

The transition period was really tough for me. It felt like we were killing Persona. But more like tying a rope around it and dragging it behind us as we road tripped to Firefox OS Land. I first argued against this. Then, eventually I said let’s at least be humane, and take off the rope, and put a slug in its head. Like an Angel of Death. That didn’t happen either. The end result is one where Persona fights on.

Persona is free open source software, and has built up a community who agree that decentralized authentication is needed on the Internet. I still think Persona is the best answer in that field, and the closest to becoming the answer. And it’s not going away. We’re asking that the Internet help us make the Internet better.

Firefox Accounts

In the meantime I’ll be working on our Firefox Accounts system, which understandably could not rely entirely on Persona1. We need to keep Firefox competitive, since it’s what pays for us to do all the other awesomizing we do. Plus, as the Internet becomes more mobile and more multi-device, we need to make sure there is an alternative that puts users first. A goal of Firefox Accounts is to be pluggable, and to integrate with other services on the Web. Why should your OS demand you use their siloed services? If you want to use Box instead of iCloud, we want you to use it.

How does this affect Persona? We’re actually using browserid assertions within our account system, since it’s a solved problem that works well. We’ll need to work on a way to get all sorts of services working with your FxAccount, and it might include proliferating browserid assertions everywhere2. As we learn, and grow the service so that millions of Firefox users have accounts, we can explore easing them into easily and automatically being Persona users. This solves part of the chicken-egg problem of Persona, by having millions of users ready to go.

I’d definitely rather this have ended up differently, but I can also think of far worse endings. The upside is, Persona still exists, and could take off more so with the help of Firefox. Persona is dead, long live Persona!


  1. Sync needs a “secret” to encrypt your data before it’s sent to our servers. The easiest solution for users is to provide us a password, and we’ll stretch that and make a secret out of it (so, we don’t actually know your password). Persona doesn’t give us passwords, so we can’t use it. 

  2. Where “browserid” assertions are accepted, Persona support can also be found. 

Feb 11 2014

intel v0.5

intel is turning into an even more awesome logging framework than before, as if that was possible! Today, I released version 0.5.0, and am now here to hawk its newness. You can check out the full changelog yourself, but I want to highlight a couple bits.

JSON function strings

intel.config is really powerful when coupled with some JSON config files, but Formatters and Filters were never 100% in config, because you could pass a custom function to either to customize to your little kidney’s content. It’s not possible to include typical functions in JSON. Much sad face. So, the formatFn and filterFn options allow you to write a function in a string, and intel will try to parse it into a function. Such JSON.
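Here’s my guess at what that looks like in practice; the layout follows intel’s existing config shape, but double-check the docs for exact option names.

require('intel').config({
  formatters: {
    'shouty': {
      // a function written as a string, parsed into a real function
      'formatFn': "function (record) { return record.message.toUpperCase(); }"
    }
  },
  handlers: {
    'console': {
      'class': 'intel/handlers/console',
      'formatter': 'shouty'
    }
  },
  loggers: {
    'myapp': { 'handlers': ['console'] }
  }
});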

Logger.trace

A new lowest level was introduced, lower than even VERBOSE, and that’s TRACE. Likewise, Logger.trace behaves like console.trace, providing a stack trace with your message. If you don’t enable loggers with TRACE level logging, then no stacks will be traced, and everything will choo-choo along snappy-like.
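Usage is what you’d expect, assuming the level constant is exposed like the others:

var intel = require('intel');
intel.setLevel(intel.TRACE);         // opt in to the new lowest level
intel.trace('how did we get here?'); // logs the message plus a stack trace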

Full dbug integration

This is the goods. intel is an awesome application logging library, since it lets you powerfully and easily be a logging switchboard: everything you want to know goes everywhere you want. However, stand-alone libraries have no business deciding where logs go. Libraries should simply provide logging when asked to, and shut up otherwise. That’s why libraries should use something like dbug. Since v0.4, intel has been able to integrate somewhat with dbug, but with 0.5, it can hook directly into it, meaning fewer bugs and better performance. Examples!

// hood/lib/csp.js
var dbug = require('dbug')('hood:csp');

exports.csp = function csp(options) {
    dbug('csp options:', options);
    if (!options.policy) {
        dbug.warn('no policy provided. are you sure?');
    }
    // ...
};

// myapp/server.js
var intel = require('intel');
intel.console({ debug: 'hood' });
// will see: 'myapp.node_modules.hood.csp:DEBUG csp options: {}'

Dare I say, using intel and dbug together gives the best logging solution for libraries and apps.

Jan 2 2014

syslogger

When recently writing an intel-syslog library, I noticed that somehow, npm was lacking a sane syslog library. The popular one, node-syslog, is a giant singleton, meaning it’s impossible to have more than one instance available. That felt wrong. Plus, it’s a native module, and for something so simple, I’d rather not have to compile anything.

That’s where syslogger comes in. It’s pure JavaScript, and has a simple API to allow you to create as many instances as you’d like.

var SysLogger = require('syslogger');
var logger = new SysLogger({
  name: 'myApp',
  facility: 'user',
  address: '127.0.0.1',
  port: 514
});

logger.log(severity, message);
// or
logger.notice(message); //etc

Enjoy.

Dec 16 2013

intel 0.4

I released v0.4 of intel last month, but never got around to writing up what’s new.

debug/dbug

I started out all this logging business claiming we should console.log() all the things. intel can easily handle any libraries using console.log. However, I started to see how frustrating it could be for libraries to be spilling logs all over stdout without being asked. dbug is a utility for libraries to provide logging, but to be quiet by default.

intel v0.4 includes the ability to play nice with dbug (and debug). You can call intel.console({ debug: someStr }) to turn on dbug in libraries matching your string. Due to how debug works, by checking an environment variable right when it is required, you’ll need to run intel.console() before requiring any library that uses debug.
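So, in practice (with hood standing in for any dbug/debug-using library):

var intel = require('intel');
intel.console({ debug: 'hood' }); // must run first: debug reads its env var at require time
var hood = require('hood');       // hood's dbug logs now flow through intel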

performance

As with every release, things get faster. According to meaningless benchmarks, we’re now ~2x faster than winston. Whatever that means.

syslog

Not part of the actual release, but released at the same time, is an intel-syslog library. This provides a simple syslog handler that you can easily use in your app.

require('intel').config({

  handlers: {
    'syslog': {
      'class': 'intel-syslog',
      'address': '127.0.0.1',
      'facility': 'user' //etc...
    }
  },

  loggers: {
    'myapp': {
      'handlers': ['syslog']
    }
  }

});

I’ve created a wiki page to contain handlers such as intel-syslog that work with intel. If you create or find a library for intel, please add it to the list so we all can be happier logging things.
