Font Points and the Web

When sizing fonts with CSS, there’s a rule of thumb that states:

1em = 12pt = 16px = 100%

There are even handy tools that help you calculate sizes based on this rule.

But this rule of thumb leaves out a very important piece of information. And without that information, you could be left wondering why the size of your type doesn’t look right.

Testing Points and Ems

To illustrate, let’s take a look at some work I’ve been doing recently to measure text accurately on the web. I have a simple test page that displays text in a single font at various sizes:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <title>Em Test</title>
  <meta name="generator" content="BBEdit 10.5" />
  <style>
    body {
      font-family: "Helvetica Neue", sans-serif;
      font-size: 100%;
      line-height: 1;
    }
    p {
      padding: 0;
      margin: 16px 0 16px 0;
    }
  </style>
</head>
<body>

<p style="font-size: 2em; background-color: #eee;">MjṎ @ 24pt / 2em</p>

</body>
</html>

You can view a slightly more complex version of the page here.

The W3C recommends specifying fonts using either ems or pixels. Since high resolution displays are becoming ever more common, that really leaves you with just one choice: the em.

(Note: In the examples that follow, I’m using text set at 2em since it’s easier to see the problem I’m going to describe. The effect, however, will happen with any em setting.)

The body is set in Helvetica Neue at 100% so 1em will be the equivalent of 16px when the user’s browser is using a default zoom setting. The line-height is 1 so there’s no additional padding around the text: the default of “normal” would make the line approximately 20% taller.
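Put another way, here’s the arithmetic the browser performs with these settings (a quick recap of the styles above, assuming the default 16px base):

body { font-size: 100%; } /* 1em = 16px at the browser’s default zoom */
p    { font-size: 2em;  } /* 2 × 16px = 32px */
/* line-height: 1 keeps each line exactly one em tall, so the paragraph box is 32px */
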

When you display the test page, everything looks good because the sizes are all scaled correctly relative to each other. A quick check of the light gray background with xScope shows that the height of the paragraph element is 32 pixels (2 ems):

[Image: Browser]

But then things got confusing.

Points aren’t Points

I had just been measuring some text in TextEdit and noticed something was amiss. Comparing the two “24 point” fonts looked like this:

[Image: TextEdit versus Browser]

I’m no typography expert, but there’s something clearly wrong here: a 24pt font in TextEdit was noticeably smaller than the same size in my web browser.

I confirmed this behavior in three browsers running on my Mac: Safari and Chrome (both rendering with WebKit) and Firefox (rendering with Gecko) all displayed the larger version of 24 point text. Why were the sizes different?

After some experimentation, it appeared that the rendered glyphs were from a 32pt font:

[Image: 72 DPI]

What the hell?

A Brief History of Type

When confronted with a problem like this, it’s always a good idea to question your assumptions. Was I measuring the right thing? So I started learning more about type…

In general, points are meaningless: they’re a relic of the days of metal type, where the size specified the height of the metal, not the mark it made on the page. Many people mistake the size of a glyph (shown below with red bars) for the size of a point (green bars):

[Image: Point versus Glyph]

(Photo by Daniel Ullrich. Modifications by yours truly.)

Even things like the word “Em” got lost in translation when moving to the web: it originally referred to the width of a typeface at a given point size. This worked in the early days of printing because the metal castings for the letter “M” were square. Thankfully, we’re no longer working in the 16th century.

In modern fonts, like Helvetica Neue, a glyph for a capital “M” may be square, but the flexibility of digital type means that the width and height of a point rarely match.

Thanks Microsoft!

After doing this research, I was sure I was measuring the right thing: glyphs at the same point size should be the same size, no matter which app renders them. Something else was clearly in play here.

Let’s just say I spent a lot of quality time with Google before eventually stumbling across a hint on Microsoft’s developer site. The document talks about using a default setting of 96 DPI. I’ve been spending a lot of time lately with the Mac’s text system, so I knew that TextEdit was using 72 DPI to render text.

I quickly opened a new Photoshop document and set the resolution to 96 pixels per inch (DPI). And guess what?

[Image: 96 DPI]

All the text got larger and it matched what I saw in my browser. Mystery solved!

72 DPI on the Mac is 75% of the 96 DPI used by the web. So the 24pt font I was seeing in TextEdit was 75% of the size of the 32pt font used in the browser.

Further research about how 96 DPI is used on the web turned up this Surfin’ Safari post on CSS units written by Dave Hyatt back in 2006:

This is why browsers use the 96 dpi rule when deciding how all of the absolute units relate to the CSS pixel. When evaluating the size of absolute units like pt browsers simply assume that the device is running at 96 CSS pixels per inch. This means that a pt ends up being 1.33 CSS pixels, since 96/72 = 1.33. You can’t actually use the physical DPI of the device because it could make the Web site look horribly wrong.

That’s another way to think about this problem: text at a given point size on your Mac will be 1.33 times larger in your browser.
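
To make that arithmetic concrete, here’s a minimal sketch of how the units relate in CSS (the class names are just for illustration):

/* Browsers assume 96 CSS pixels per inch for absolute units, */
/* so one point is always 96 ÷ 72 = 1.333 CSS pixels.         */
.points { font-size: 24pt; } /* 24 × 1.333 = 32 CSS pixels */
.pixels { font-size: 32px; } /* renders at exactly the same size */
.ems    { font-size: 2em;  } /* also 32px, given the default 16px base */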

Now What?

Now that we know what caused the problem, how can we avoid it?

The key piece of information that’s missing in the “1em = 12pt = 16px = 100%” rule is resolution. If you’re specifying point sizes without also specifying resolution, you can end up with widely varying results. For example, each of these is “24 points tall”:

[Image: Comparison]

If you’re lucky enough to have a Retina Display on your Mac, you’ll be rendering text at 144 DPI (2 × 72 DPI). That’s 20% larger than the last example above (Windows’ 120 DPI setting.) Display resolution is increasing, and so is the number of pixels needed to represent “a point”.
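
For reference, here’s what “24 points” works out to at each of these resolutions (pixels = points × DPI ÷ 72):

24 × (72 ÷ 72) = 24 pixels (the Mac’s text system)
24 × (96 ÷ 72) = 32 pixels (the web, and the Windows default)
24 × (120 ÷ 72) = 40 pixels (Windows’ 120 DPI setting)
24 × (144 ÷ 72) = 48 pixels (a Retina Display)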

Note that you can’t “fix” this problem by specifying sizes in pixels. If you specify a font size of 16px, your browser will still treat that as 12pt and display it accordingly.

As a designer or developer, you’ll want to make sure that any layout you’re doing for the web takes these size differences into account. Some apps, like Photoshop, allow you to specify the resolution of a document (that’s how I performed my sizing tests.) Make sure it’s set to 96 DPI. Resolution may not matter for exporting images from Photoshop, but it will make a difference in the size of the text in your design comps.

Most other Mac apps will rely on Cocoa’s text system to render text, so if there’s no setting to change the document’s resolution, a default of 72 DPI should be assumed.

An alternative that would suck is to start doing your design and development work on Windows where the default is 96 DPI. That setting also explains why web browsers use this resolution: remember when Internet Explorer had huge market share?

And finally, expect some exciting new features for measuring, inspecting and testing text in your favorite tool for design and development. I’m doing all this precise measurement for a reason :-)

Updated February 25, 2014: It turns out Todd Fahrner first identified this problem back in the late 1990s. His work with the W3C was instrumental in standardizing on 96 DPI. What’s depressing is that I knew about this work: Jeffrey Zeldman and I even developed the Photoshop filters mentioned on his site. Maybe I should send in my AARP membership form now.

Fingerprints

It looks like my hunch about the iPhone invite was right: new phones are likely to have “silver rings” that are fingerprint sensors embedded into the home button.

So what does this mean? Most people assume that it’s just going to be easier to access your home screen (without entering a passcode.) But I think it goes much deeper than that.

iCloud services are a huge part of Apple’s next decade. Everything the company is doing these days has some kind of connection to data that is stored remotely. They’re investing heavily in new data centers.

And anytime you want to access this data, you’re logging into iCloud. Wouldn’t it be great if you could skip the part where you have to type in your Apple ID?

It’s clear to me that your unique fingerprint will be tied to your unique Apple ID. Once this connection between your physical and online presences is established, some very interesting things become possible. Let’s take a look at a few things I think might happen:

Protecting access to apps

From the beginning, I’ve wanted a way to protect my personal information when sharing a device with friends and family. But any secure solution to that problem would be a pain in the butt. Typing a password before launching an app? No thanks!

But imagine if opening your favorite Twitter app only required a fingerprint scan before proceeding. Everyone’s favorite Twitter prank would thankfully die. And more importantly, we’d all feel a lot better about using things like online banking apps.

Corporate security

Most corporate networks are protected by VPN. The profiles that enable this network configuration often specify that a user must use a passcode lock. And it’s rarely a simple passcode. And it kicks in immediately.

Imagine needing to type an eight-character password with letters and numerals just to check the current weather. That’s a reality for millions of people who use their device for both personal and business tasks.

A fingerprint scanner that avoids this complex password will sell a lot of devices.

Multiple accounts

There are many scenarios where you want to let someone do whatever they want with your personal device: a partner providing directions while you drive, a kid playing games in a waiting room, your parents looking at vacation photos. All these people have something different than you: a fingerprint.

Entering a login and password has always seemed out of place in the simplified world of iOS. But detecting which account to use when tapping on the home button actually makes it easier for members of your family to use your personal device: they don’t even have to slide to unlock.

And once everyone has their own personal space on the device, no one can mess with it. This is important in many contexts: even something simple like a game has sensitive information as soon as a sibling comes along and screws up high scores or play position.

Borrowing devices

Most of us back up our data to iCloud. That data is restored to a device when it’s first put into service or if something goes wrong.

Now imagine if you had the ability to restore your data onto any device with the touch of your finger. Borrow an iPad and make it yours instantly. Businesses that strive to make a customer feel at home, like hotels and airlines, would love this capability.

Personal identification

If you think Apple is going to give developers access to this biometric information, think again. Google would love this data, so you know it’s not going to happen.

Slowly but surely

Don’t expect all these things to appear on September 10th. Apple will start out simply and continue to improve the software that uses this new sensor. That’s how they roll.

Acknowledgments: The genesis of these ideas came from a conversation with my colleague Sean Heber. He’s the one that first made the connection between iCloud and your finger. Thanks also go to Ryan Jones for the links about the sensor.

Cheap as in Computing

There’s a good case for an iPhone that costs less. But with this lower cost, developers fear that device specifications will suffer.

As someone who’s actively working on an iOS 7 update, I’m noticing a definite pattern emerging: we’re removing a lot of shadows, gradients, and transparency. A lot of views that were previously required to make an app look at home on iOS 6 are no longer needed.

The visual simplification of iOS has led directly to a simplified implementation. As every developer knows, the less work your app does on a mobile device, the better it performs. It’s a lot easier now to make an app that feels fluid and uses less CPU and GPU resources.

While everyone focuses on what Jony Ive has put on the screen, he’s also made the hardware under that screen able to do more with less. And yet again, Apple’s tight integration of hardware and software is going to kick everyone’s ass.

Updated August 14th, 2013: A lot of Twitter followers are saying, “But what about the live blurs? Aren’t those CPU/GPU intensive?”

Yes, they are. And you should also note that access to those features is strictly through private API. An iPhone 4 turns off blur; an iPhone 5C could do the same if necessary.

(If you look closely at the blur behind a toolbar, you notice that there’s some kind of sub-sampling of the screen image. Because this implementation is private, the algorithm could also be adapted for other devices.)

There’s also the question of all the new dynamics in the UI (like bouncy messages.) The highest costs in a GPU, both in computation and in hardware components, come from dealing with a lot of textures. The math for a physics engine is relatively easy to handle.

AMBER Alert Usability

If you live in the State of California, you got an AMBER Alert last night just before 11 PM. If you have an iOS device, you saw something like this on your lock screen:

Or like this in Notification Center:

And if you’re like most people, you had no idea what it meant. Considering it’s a broadcast to locate missing children, that’s a bad thing. Let’s examine the usability of this alert and think about how this system can be improved.

Updated August 6th, 2013: Lex Friedman has written a great summary of AMBER Alerts at Macworld. Among the revelations: the text of the messages is limited to just 90 characters!

First Impressions

In our household, there are many iOS devices. At the time of the alert, we were in the living room with two iPads and an iPhone. The alert’s sound is designed to get your attention, and that it did!

Both my wife and I gave each other that “what the hell is wrong” look as I grabbed my iPhone from my pocket. Turns out we weren’t the only ones who were frightened by the sound:

My wife’s first question as I looked at my phone was “Are we having a tsunami?” (we’ve had these kinds of emergency broadcasts before.) I replied with, “No, it’s an AMBER Alert”. To which she replied, “What’s that?”

And therein lies the first problem: I had no idea.

Unlike all other notifications on my iPhone, I couldn’t interact with the alert. There was no way to slide the icon for more information, or to tap it in Notification Center for additional details. Through a combination of Google and my Twitter timeline, I eventually figured it out:

But I was also seeing a lot of people on Twitter whose response to the confusion was to ask how to turn the damn thing off. And since AMBER Alerts aren’t affected by the “Do Not Disturb” settings, a lot of people went to Settings > Notification Center so they wouldn’t get woken again in the future.

That’s exactly what you don’t want to happen when a child is abducted.

Alert Text

In looking at the message itself, it’s hard to tell what it’s about. Starting an alert with an acronym may make sense to the government, but there’s wording that could resonate much more effectively with normal folks:

It’s also hard to tell if this is a problem with “Boulevard, CA” or a “Nissan Versa”. And where is Boulevard? And what does a Nissan Versa look like? Who do I call if I see license 6WCU986? In summary, this notification raises more questions than it answers.

This one image has answers to all of these questions and more. Why can’t I see that image after tapping on the notification?

The Alert Sound

As I mentioned above, the sound definitely let us know that something was wrong. But we were sitting comfortably in our living room. A friend of mine was driving at the time and probably listening to music from their iPhone on a car stereo. Being startled at high speed is dangerous:

Unlike the devices that existed when the AMBER Alert system was first introduced in 1996, our iPhones and iPads do a lot more than calls and texts. Music and video are immersive activities and hearing a loud horn can be a cure that’s worse than the disease.

Also, we’re all conditioned to immediately look at our phone when the normal alert sounds are used: a simple ding would have gotten just as much attention.

Do Not Disturb

And what the heck is up with this crazy sound happening if Do Not Disturb is turned on? Dammit, it’s my phone and I get to tell it what to do. Stupid Apple!

Well, take a look at your government first:

And if you want the details, it’s only going to cost you $205 to download:

Bottom line: these bugs aren’t going to be easy fixes.

(And if you think getting woken up because of an AMBER Alert is such a terrible thing, why don’t you explain your pitiful situation to this boy’s parents.)

File A Radar

Even though these bugs probably aren’t Apple’s direct problem, I’m still going to file a bug report. If you have a developer account, please feel free to duplicate that Radar.

Apple is in a unique position to address these issues:

  • It has direct access to millions of customers and their mobile devices.
  • It employs many people with a deep understanding of how mobile devices are affecting our lives.

This is clearly a problem where cooperation between Apple, the Department of Justice, and the public can improve a system where everyone benefits. Better usability with AMBER Alerts is a case where “think of the children” isn’t a trite platitude.

The Origin of Tweet

Last week, one of my colleagues informed me that the word “tweet” was now included in the Oxford English Dictionary (see “Quiet announcement” at the end of the page.)

The noun and verb tweet (in the social-networking sense) has just been added to the OED. This breaks at least one OED rule, namely that a new word needs to be current for ten years before consideration for inclusion. But it seems to be catching on.

John Simpson
Chief Editor, Oxford English Dictionary

It’s not every day that a word you helped create gets added to this prestigious publication, so I thought I’d share a bit of the early history of the word “tweet.”

In the early days of Twitter, the service used “twittering” as a verb and “twitter-ers” as a noun. The December 2006 newsletter is a great example of the lingo that was in use back then:

From: "Biz Stone"  
Date: Wed, 06 Dec 2006 07:51:31 +1100
Subject: Twitter Now Supports AIM and More

Hello Twitter-ers!

Things are really heating up at Twitter headquarters these days.  Last month, the number of Twitter users doubled and it looks like it's going to double again before we run out of December. We  thought the best way to react to all this positive activity would be to keep on making improvements. So come on by and check out  http://twitter.com!

…

That's it for now, Happy Twittering!
Biz Stone and The Twitter Team
http://twitter.com/biz

I started using Twitter at the beginning of December. Like John Gruber and my colleagues at the Iconfactory, I loved our new “water cooler for the Internet.” I was, however, unhappy with using Twitter via the website or Dashboard widgets.

While taking a shower in the middle of December, an idea struck me: it wouldn’t be hard to hook up Twitter’s new API to the Cocoa networking classes and display a table with tweets. So I dried off and started prototyping: the next day I had the world’s first Twitter client running on my Mac.

A few days later, I checked all my code into our repository and Twitterrific was born:

r174 | craig | 2006-12-20 17:54:11 -0800 (Wed, 20 Dec 2006) | 1 line

Initial import

There was a problem, however.

As I started to implement the user interface, it was clear that nouns and verbs were needed. Menu items with labels like “Post a Twitter Update” were both wordy and boring. And as someone who loves language, using a phrase like “Refresh Twitterings” made the hairs on the back of my neck stand up.

So we started calling them “twits”, more as a placeholder than anything. Using the term in January 2007 felt just as awkward as it does today:

@Dan: my wife was using the Mac when your twit popped up, followed by “there’s a guy named Dan who’s looking for love”. Classic.

(Thanks to Jordan Kay at Twitter for digging that tweet up. Also, the “Dan” is Dan Benjamin—in those early days we hadn’t figured out screen names, and as far as I knew, there was only one Dan on Twitter!)

Luckily, things were about to change. On the 3rd of January, I checked in a temporary application icon. I always check in something crappy as an inspiration for the talented folks I work with:

r215 | craig | 2007-01-03 13:46:11 -0800 (Wed, 03 Jan 2007) | 1 line

Added a temporary application icon. Updated preferences panel (with quit and about info.) Tweaked on window size. Added expanded size to preferences. 

Less than 24 hours later, David Lanham sent me something much better.

I couldn’t check it in quickly enough:

r225 | craig | 2007-01-04 13:21:01 -0800 (Thu, 04 Jan 2007) | 2 lines

Added new app icon

We didn’t know it at the time, but this is also the moment when a bird became synonymous with Twitter. Prior to that point in time, Twitter’s only identity was a logotype.

Work was proceeding at a very fast pace during the first week of January 2007. Beta releases were frequent and widely distributed. Fortunately, the folks at Twitter were using our app with its snazzy new bird icon. One of our beta testers was an API engineer named Blaine Cook who sent me the following email:

From: "Blaine Cook" <[REDACTED]>
To: "Craig Hockenberry" <[REDACTED]>
Date: Thu, 11 Jan 2007 18:29:01 -0800
Subject: Twitterific

Hey Craig,

I work on Twitter with Jack, and just wanted to ping you re: Twitterific. First off, great work - I love it, and I think so does everyone else in the office! :-)

Two thoughts; first, how about changing "twit" to "tweet" - the "official noun" is "Twitter Update", but that's boring...? 

Second, I'm in the process of building a real (re: flickr-style) authentication API, so that you'll be able to obtain tokens for users without having to store their usernames and passwords. I don't want its pending release to stall your development or release date, since we will continue to support basic auth for existing clients, but would you be interested in beta-testing the api machinations? 

Thanks again for Twitterific!

blaine.

(In retrospect, this email is also the first hint that we spelled the app’s name wrong!)

It’s rare to have unanimous agreement when naming things in software, but in this case everyone loved the word “tweet”. I quickly updated both the app and the website:

r263 | craig | 2007-01-12 11:46:22 -0800 (Fri, 12 Jan 2007) | 1 line

Changed "twits" to "tweets". Removed auto launch of Twitterrific web page. Added link to Twitterrific web page in config window.
r2183 | craig | 2007-01-12 11:44:07 -0800 (Fri, 12 Jan 2007) | 1 line

Changed "twits" to "tweets"

And not a moment too soon! We released the first version of Twitterrific just three days later with the following copy on the product website:

About Twitterrific

Twitterrific is a fun little application that lets you both read and publish posts or “tweets” to the Twitter community website. The application’s user interface is clean, concise and designed to take up a minimum of real estate on your Mac’s desktop.

This is the first time “tweet” appeared on the Internet in reference to something you do on Twitter.

Our new app was a hit with users everywhere. We even got a mention in Twitter’s next newsletter:

From: "Biz Stone"  
Date: Wed, 17 Jan 2007 07:37:07 +1100
Subject: Things Are Heating Up At Twitter

Hello Twitter-ers!

…

Continued Hotness: Twitterrific

The folks over at The Icon Factory have created a brilliant little application that lets you both read and post updates to Twitter. The application's user interface is clean, concise and designed to take up a minimum of real estate on your Mac's desktop. Twitter headquarters is sporting this little jobbie on every monitor in the house! Download it for free from The Icon Factory web site.

Twitterrific: http://iconfactory.com/software/twitterrific

…

Things are heating up!
-Biz Stone and The Twitter Team
http://twitter.com/biz

Unfortunately, the work I had done to expunge “twit” from our app was not complete: I had forgotten to remove it from some button tooltips. After the release of version 1.1 on January 25th, “twit” was gone for good.

Even though the use of the word “tweet” spread quickly, Twitter took a while to embrace the new terminology. Even after several months, “twitter-ers” were still “twittering”.

As far as I know, the first time “tweet” appeared on Twitter’s site was when they referred to our product. (And yes, there was a point in time when the founder of Twitter would send us feature suggestions!)

Almost a year later, “tweet” was used on the company blog (but again, in a quote by a third party).

The word “tweet” didn’t appear unquoted on Twitter’s blog until June of 2008. Coincidentally, it was in reference to the WWDC where we won an Apple Design Award for Twitterrific!

It still feels strange to hear a word I helped create be mentioned over and over again in the media. It’s a great word to go along with a great service, and in the end, I’m just happy we’re not calling each other twits!

Addendum – July 13th, 2013

I dug around through my Twitter archive and came up with some interesting tweets that show how and when the first Twitter client came to be.

The first notion that Twitter didn’t belong solely on the web happened on December 5th, 2006 (a couple of weeks before the first check-in in the Twitterrific repository.) This line of thinking was inspired by a tweet John Gruber had posted about the Dashboard widget he was using:

Less than an hour later, I posted this:

A couple of days later, I started poking around with the new Twitter API:

(Note that in the early days of Twitter, the website was designed around @jack’s original idea of posting status messages. The user name was implied in the post, hence “Craig Hockenberry” “is experimenting”.)

By the 20th of December, I felt confident enough in the state of the project to go public (that’s also the day that the prototype code got checked into the repository):

At the Iconfactory, everyone gets a two week Christmas break. I’d rather code than shop :-)

Unlike many products, we knew the name of what we were building very early on:

And from this point on, I never posted from the website again:

Here’s the first tweet with a source parameter (that tells Twitter which client posted the tweet.) We worked with Twitter engineering to implement this, and for a long time Twitterrific was the only source besides “web”. Note that this happened just two days before the release!

Of course on January 14th, 2007, the release was announced on Twitter: