A tweet this morning by James Duncan Davidson got me thinking about the future of Retina images on the web. Before I start talking about the future, let’s take a quick look at the past.
There was a time, not too long ago, when many of us were trapped behind 56kbps modems and 256-color displays. Much time and effort was spent on hacks that made images work well within those constraints. High compression settings and dithering with limited color palettes combined to get the best quality from the hardware of the day.
Thankfully, as hardware became more capable, these hacks have become a distant memory. Our “solutions” completely ignored the effect of increased bandwidth and didn’t age gracefully.
Now, let’s look at the present. We’re at another tipping point with network speeds and display resolutions. Mobile networks are getting a lot faster and displays have reached a point where we can’t see individual pixels. And attempts to work around current limitations feel dangerously close to those of the past.
The main problem is that it’s very difficult to determine what type of image is best for the viewer. Screen size is no longer a good metric: my iPad 3 has more pixels than my desktop computer. Similarly, my iPhone’s LTE is nearly an order of magnitude faster than the wired DSL connection in my office. As the device and network ecosystems become more diverse, this problem will continue to get harder to solve. (Remember how much fun sniffing User Agent strings was?)
The only sensible way to move forward is to use a “one size fits all” approach.
Of course, your visitors with the most capable hardware and networks won’t be served the optimal image. Nor will those who have older screens on slower connections.
Your job is to figure out what aggregate size works best for your audience, not to try to outguess what kind of device or network I’m using. Use image compression to create a payload that keeps everyone happy, and as needs change, adjust that size accordingly.
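One commonly cited way to act on this (not prescribed by this post, just an illustration) is to serve a single image at high-density dimensions but with aggressive JPEG compression: the artifacts are hard to see when the image is scaled down or packed into a dense display, and the payload often stays comparable to a normal-quality 1x image. Here’s a rough sketch using Pillow; the dimensions, quality settings, and the synthetic gradient standing in for a photo are all assumptions for demonstration.

```python
from io import BytesIO
from PIL import Image

def jpeg_bytes(img, quality):
    """Encode an image as JPEG at the given quality and return the raw bytes."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

# A synthetic 800x600 "2x" source image; a real photo would go here.
src = Image.linear_gradient("L").resize((800, 600)).convert("RGB")

# One aggressively compressed image at full (2x) dimensions...
retina = jpeg_bytes(src, quality=40)

# ...versus a normal-quality image at half (1x) dimensions.
standard = jpeg_bytes(src.resize((400, 300)), quality=85)

# Compare the two payload sizes; with real photos the 2x/low-quality
# version is often close to the 1x/normal-quality one.
print(len(retina), len(standard))
```

The point isn’t the exact numbers (they depend heavily on the image content) but that a single file can be tuned by eye and by payload size until it serves both ends of the audience acceptably.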