Fear China

I’ve been using the Internet in one form or another since the mid-’80s. In that time, I’ve seen a lot of strange stuff happening on our global network. On Tuesday, I experienced something extraordinary.

It all started with a text message from my partner Ged at 8:30 AM:

Mail server down. Please take a look when you can, thx.

I verified that the mail server was down from the west coast as well as the east coast, then started poking around to see what was wrong. When I looked at the server traffic, there was only one thing I could say:

[Graph: daily bandwidth in and out of the Iconfactory’s main server]

“Holy shit.”

Unless you’re a network engineer, that graph won’t mean much. The data shown is the amount of bandwidth into the Iconfactory’s main server. The blue line is the number of megabits per second for requests and the green area is the amount for responses to those requests. Normally, the blue line is much smaller than the green area: a small HTTP request returns larger HTML, CSS and images.

Request traffic peaked at 52 Mbps. Let’s put that number in perspective: Daring Fireball is notorious for taking down sites by sending them about 500 Kbps of traffic. What we had just experienced was roughly the equivalent of 100 fireballs.

If each of those requests were 500 bytes, that’s 13,000 requests per second. That’s about a third of Google’s global search traffic. Look at how much careful planning went into handling Kim Kardashian’s butt at 8,000 requests per second.
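
That back-of-the-envelope math is easy to verify in the shell: 52 million bits per second, divided by 8 bits per byte, divided by 500 bytes per request:

$ echo $(( 52 * 1000 * 1000 / 8 / 500 ))
13000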

All of this traffic directed at one IP address backed by a single server with a four core CPU.

Like I said, “Holy shit.”

Regaining Control

The first order of business was to regain control of the server. Every service on the machine was unresponsive, including SSH. The only thing to do was perform a remote restart and wait for things to come back online.

As soon as I got a shell prompt, I disabled the web server since that was the most likely source of the traffic. I was right: things quieted down as soon as traffic on ports 80 and 443 was rejected. It was 9:30 AM (and you can see it in the graph above.)
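
One way to reject those ports with ipfw (the firewall that shows up again later in this story) looks something like this — a sketch, with arbitrary rule numbers that just need to land before any allow rules:

# drop all inbound HTTP and HTTPS traffic
$ sudo ipfw add 100 deny tcp from any to me 80 in
$ sudo ipfw add 110 deny tcp from any to me 443 in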

The first log I looked at showed a kernel panic at 3:03 AM in zalloc. This was right at the time of the biggest spike. The system.log showed similar problems: the flood of traffic was spawning too many processes and causing all kinds of memory issues.

As a test, I turned the server back on for a minute and immediately maxed out Apache’s MaxClients. Our server simply isn’t capable of handling thousands of Apache child processes (it normally runs with fewer than a hundred.)
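
For reference, that limit lives in the prefork MPM configuration — MaxClients in Apache 2.2, renamed MaxRequestWorkers in 2.4. A sketch with illustrative values, not our actual settings:

<IfModule mpm_prefork_module>
    # how many child processes to start and keep idle
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    # the hard cap on simultaneous child processes
    MaxClients          100
    MaxRequestsPerChild 500
</IfModule>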

So where the hell was all this traffic coming from?

Triage

Since I knew the traffic was from the web, it was likely that Apache’s logs would tell me something. Given that our Apache logs are usually in the 10 MB range, the current 100 MB log file surely contained a lot of useful information.

The first thing I noticed was a lot of requests being returned with a 403 status code. The paths for those requests also made no sense at all: one of the most common began with “/announce”. But there were also a lot of requests that looked like they were intended for CDNs, YouTube, Facebook, Twitter and other places that were not the Iconfactory.

As a test, I updated Apache’s CustomLog configuration with %{Host}i so it would show me the host headers being sent with the requests. I then turned the web server back on for 30 seconds and collected data. Indeed, the traffic we were seeing on our server was destined for someplace else.
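
The change is a one-liner: tack %{Host}i onto the log format. Something like this, where the combined_host name is my own, not a standard one:

# the stock “combined” format, plus the Host: header from each request
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Host}i\"" combined_host
CustomLog "/var/log/apache2/access_log" combined_host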

The CHOCK was pretty proud to be serving traffic for cdn.gayhotlove.com, but I sure wasn’t.

Clearly there was some kind of problem with traffic being routed to the wrong place. The most likely candidate would, of course, be DNS. While looking at IP addresses in my logs, I noticed something interesting: all of this traffic was coming from China.
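
Tallying the client addresses makes the pattern hard to miss. A quick sketch — the whois target below is a placeholder, not a real offender:

# count requests per client IP, busiest first
$ awk '{ print $1 }' access_log | sort | uniq -c | sort -rn | head

# then see where a suspicious address is registered
$ whois 1.2.3.4 | grep -i country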

The pieces were starting to fall into place: DNS queries inside China were being answered with bogus results, and traffic meant for other sites was being pointed at our IP address.

(Note: “GFW” refers to China’s Great Firewall. I’m pretty good with acronyms, but this was a new one to me!)

Now all I had to do was find a way to deal with the traffic.

Apache Configuration

My first thought was to deal with the load by handling the HTTP traffic more efficiently.

We host several sites on our server and use VirtualHost to route traffic on a single IP address to multiple websites. Virtual hosts rely on the “Host:” header in the HTTP request to determine where the traffic should head, and as we’ve seen above, the host information was totally bogus.

One thing I learned is that Apache can have problems figuring out which virtual host to use in some cases:

If no ServerName is specified, then the server attempts to deduce the hostname by performing a reverse lookup on the IP address.

Remember that millions of requests had a host name that would need to be looked up. After consulting the documentation, I set up a virtual host that would quickly return a 404 error for any request and display a special message at the root directory. Here’s what it looks like:

<VirtualHost _default_:80>
    # catch requests that don’t match one of our real sites
    ServerName default
    DocumentRoot "/Web/Sites/default"
    <Directory "/Web/Sites/default">
        # keep the handling as cheap as possible
        Options None
        AllowOverride None
        DAV Off
    </Directory>
    LogLevel warn
</VirtualHost>

If you run a server, take a second right now to make sure that it’s doing the right thing when presented with a bad header — you should get your default site back, not one of the real ones:

$ curl -H "Host: facebook.com" http://199.192.241.217

All of this helped deal with the traffic, but it only delayed how quickly Apache maxed out its child processes. A Twitter follower in China also reminded me that their day was just beginning and traffic would be picking up. At 8 PM, the trend for traffic didn’t look good, so I turned off the web services and had a very stiff drink.

Then something strange happened at 11:30 PM: the inbound requests started to die off. Someone in China had flipped a switch.

I was tempted to bring the web server back up, but experience told me to leave things as they were. Michter’s and bash don’t make a good pair.

This problem would have to wait for another day.

Hello BitTorrent

The next morning, I tried bringing up the web server. Things ran fine for a while, but after 10 minutes or so, Apache processes started climbing again.

Most of the traffic was to the BitTorrent /announce URL. BitTorrent clients in China still thought my server was a tracker and noticed that port 80 was alive again.
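
If you just want Apache to shed those requests cheaply, a mod_rewrite rule along these lines would do it — a sketch, and not what we ultimately relied on:

# immediately return 403 Forbidden for tracker announces
RewriteEngine On
RewriteRule ^/announce - [F]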

And it’s not like there are just a couple of people using BitTorrent in China.

The direct traffic from DNS may have gone away, but secondary traffic from cached information was still killing us. At this point, the only recourse was to block IP addresses.

Blocking China

I’m a big believer in the power of an open and freely accessible Internet: I don’t take blocking traffic from innocent people lightly. But in this case, it was the only thing that worked. If you get hit with a DDoS like the one I’ve described above, this should be the first thing you do.

The first step is to get a list of all the IP address blocks in the country. At present that’s 5,244 separate zones. You’ll then need to feed them to your firewall.
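
Fetching the list is a one-liner; at the time of writing, ipdeny’s country files live under /ipblocks/data/countries/:

$ curl -O http://www.ipdeny.com/ipblocks/data/countries/cn.zone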

In our case, we use ipfw, so I wrote a script to create a list of rules from the cn.zone file:

#!/bin/sh

# cn.zone comes from http://www.ipdeny.com/ipblocks/
#
# build the rules with:
#
# $ build_rules > /tmp/china_rules
#
# apply rules with:
#
# $ sudo ipfw /tmp/china_rules 

r=1100
while read line; do
    echo "add $r deny ip from $line to any in"
    r=$(( r + 1 ))
done < cn.zone

You’ll want to adjust the starting rule number (1100 above) to one that comes before the rule that allows traffic on port 80.
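
Once the rules are loaded, a quick sanity check makes sure they’re in place and in the right order:

$ sudo ipfw list | head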

After setting these new rules, traffic on our server immediately returned to normal.

Digging Deeper

Now that I had my server back, I could take some time to look at logs more closely and see if anyone else had seen similar issues.

First Hits

Grepping for BitTorrent /announce traffic turned up a few clues. I had noticed a few 5 Mbps spikes in our request traffic late on Thursday, January 15th and on the following Saturday:

[Graph: weekly request traffic showing the first 5 Mbps spikes]

Initially, I just chalked it up to random bullshit traffic on the Internet, much like the packets from Romania looking for phpMyAdmin. In retrospect, that was dumb.

If you look at the origins of those first packets, you’ll see that it’s not a regional problem: the IP addresses are physically located all the way from densely populated Hong Kong to the remoteness of Xinjiang province (north of Tibet.)

Was this traffic a probe or an unintentional screwup? I don’t know.

(Note: I have archived all of the logs mentioned above. If you have legitimate reason to analyze these logs, please get in touch.)

We’re Not Alone

More concerning is that other site owners were seeing similar behavior starting in early January. I took some comfort in knowing that we weren’t alone on the 20th.

But at the end of the day, every machine in China has the potential to be part of a massive DDoS attack on innocent sites. As my colleague Sean quipped, “They have weaponized their entire population.”

Conclusion

Will this happen again? For everyone’s sake, I hope not. The people of China will only end up being banned from more websites, and site owners will waste many hours in total panic.

But if it does happen, I hope this document helps you deal with China’s formidable firehose.

Other Resources

If you’re using nginx instead of Apache, here are some instructions for blocking BitTorrent requests from China.

For those of you using iptables on Linux, here’s a tutorial for blocking IPs on that platform. It’s also interesting to note that Matt’s site is running on Linode: don’t assume that big providers will offer any protection upstream.

This thread has a good discussion with other site owners experiencing the BitTorrent traffic.

Another option to consider is moving the server’s IP address. You’ll have to deal with the normal DNS propagation and reconfigure reverse DNS (especially if you’re running a mail server on the box), but this may be a quick and effective way to avoid the firehose.

Updated January 24th, 2015: Added the section above with additional resources for those of you who are experiencing the problem. Good luck!

Updated January 28th, 2015: I’ve written some opinions on the incident described above.

Clearing the Icon Services cache in Yosemite

After installing or updating a system to Yosemite, I have seen blank (missing) icons on several Macs. I have also seen cases where the desktop doesn’t get updated after a change to the document icons in the application itself (which happens as developers update their apps to use the new icon guidelines.)

This is an example of just another cut:

[Screenshot: xScope’s icon bug]

If you’re seeing any weird behavior with icons in Yosemite, the chances are good that the Icon Services cache is corrupted and needs to be reset. Here’s how you do it using the Terminal:

# remove each user’s Dock icon cache
$ sudo find /private/var/folders/ \
  -name com.apple.dock.iconcache -exec rm {} \;

# remove each user’s Icon Services cache
$ sudo find /private/var/folders/ \
  -name com.apple.iconservices -exec rm -rf {} \;

# remove the system-wide Icon Services store
$ sudo rm -rf /Library/Caches/com.apple.iconservices.store

After running those commands, you’ll need to restart your Mac.

Ideally, there would be some way to do this using the underlying iconservicesd or iconservicesagent commands, but that didn’t work out too well. Hopefully Apple will find a better way to do this than using the Terminal.

Death by a thousand cuts

Dear Tim,

I’m writing today about the latest kerfuffle over the state of Apple’s business.

Marco is a brilliant developer and I’m proud to call him a friend. I also know that, like many of us geeks, sometimes his words verge on hyperbole.

You’ve always struck me as someone who relies on facts for your decision making, so I’d like to provide some background from a geek’s point-of-view. No hyperbole. No bullshit.

The reason Marco’s piece got a lot of initial attention is that its basic message resonated with a lot of people in our community. But as usual, idiots with ulterior motives twisted the original message to suit their own purposes.

The good news is that none of the problems we geeks are seeing are show stoppers. We’re not complaining about software quality because things are completely broken. There’s still a lot to love about OS X and iOS.

But this good news is also bad news. Our concerns come from seeing the start of something pernicious: our beloved platform is becoming harder to use because of a lot of small software failures.

It’s death by a thousand tiny cuts.

Apple may not be aware of the scope of these issues because many of these annoyances go unreported. I’m guilty of this when I open a Finder window on a network share. While the spinner in the window wastes my time, I think about writing a Radar, but a minute later it’s forgotten. Until the next time.

As a developer, I know that Radar is my best channel to give Apple constructive feedback, but writing a report is a time consuming process. Those links above consumed about two days of my productivity. Nonetheless, I’ll keep harping on my 20K+ Twitter followers to do their duty. These problems won’t fix themselves.

I’m also wary of looking at past OS X versions with rose colored glasses: there always have been bugs, there always will be bugs. But I have a pretty simple metric for the current state of Apple’s software: prior to the release of Yosemite, I could go months between restarts (usually only to install updates.) Lately, I feel lucky to go a week without some kind of problem that requires a complete reset. I experience these problems on a variety of machines: from a 2009 Mac Pro to the latest Retina iMac.

A bigger concern than my own productivity is how these quality issues affect Apple’s reputation. Geeks are early adopters and effusive when things work well. But when we encounter software that’s broken, we have a tendency to vent publicly. Yesterday’s post by Marco was the latter (spend some time on his blog and you’ll see plenty of the former, as well.)

Because a lot of regular folks look to us for guidance with Apple products, our dissatisfaction is amplified as it trickles down. When we’re not happy, Apple loses leverage.

It also leads to a situation where your product adoption slows. To give you some personal, and anecdotal, examples:

  • Every holiday season, my wife and I make sure that everyone’s computer is up-to-date and running smoothly. This year, for the first time ever, we didn’t install the latest version of OS X. The problems with Screen Sharing were a particular sticking point: it’s how we do tech support throughout the year.
  • My wife hasn’t updated to Yosemite because others at her workplace (a large, multi-national corporation) have encountered issues with Wi-Fi. Problems like this spread like wildfire in organizations that are accustomed to avoiding issues with Microsoft Windows.

Before I close this letter, I’d like to offer a final observation. Apple is a manufacturing powerhouse: the scale of your company’s production line is an amazing accomplishment. Unfortunately, software development is still a craft: one that takes time and effort to achieve the fit and finish your customers expect.

Apple would never ship a device that was missing a few screws. But that’s exactly what’s happening right now with your software products.

I’ve always appreciated the access Apple’s executive team provides via email, so thanks for your time and attention to this matter. Please feel free to contact me for any further information or clarification.

Yours truly,

Craig Hockenberry

Bezel and xScope

Look at your wrist: notice something missing?

Yeah, it’s “Early 2015” and you still don’t have an Apple Watch. Damn!

Luckily, my colleague Troy Gaul has just released something that can tide you over while you work on your wrist apps: a developer tool called Bezel. Things get even better when you display Bezel’s window with the Mirror in xScope. You’ll have a development environment where you can get instant feedback about the physical aspects of your design.

It’s very likely that the display on the Apple Watch is 326 ppi and that’s conveniently the same as the iPhone’s Retina Display. That means you can get a pixel perfect layout of your wrist app on your iOS device. Here is a photo comparing a printout of Thibaut Sailly’s layout PDF with an image generated by Bezel:

[Photo: a printout of the layout PDF next to an image generated by Bezel]

You can even take it a step further and pretend that your app is running on your wrist:

[Photo: the layout mirrored to an iPhone worn on the wrist]

Don’t be afraid to get creative with rubber bands or other types of fasteners to attach your phone to your arm. Note that xScope’s Mirror works fine in landscape orientation if you’re experimenting like this.

And if you’re wondering why xScope doesn’t send taps back to Bezel or the iOS Simulator, I only have two words to say about that: Application Sandbox.

Twitter Nostalgia

December 1st, 2006. Something important in my life began rather inauspiciously:

[Embedded tweet: my first tweet]

Little did I realize that these tweets would become a log of important events in my life. And now, thanks to Twitter’s new search capabilities, I can revisit that past. Please indulge me as I sift through these moments and get nostalgic…

It turns out I was the sixth person to mention “iPhone” on Twitter. My colleague Corey beat me by a few hours and the guy who started Twitter was first. There must have been something in the water at the Iconfactory that day. I wish all my predictions on Twitter were so prescient!

(Thanks to Dan Frommer for doing the legwork on this one.)

Interestingly, the very next tweet in my timeline was the start of the world’s first Twitter client:

These two tweets, separated by only a few hours, are an amazing summary of what was about to happen.

But first, another important event transpired: I started writing publicly. Twitter was clearly an inspiration here: I loved those 140 characters, but found that I needed another venue to expand upon my thoughts:

Note the date on that last tweet: the day before the original iPhone went on sale. My first post stated that I didn’t know where I was going with the blog. A few days later, I had a pretty good idea:

I had just bought an iPhone.

And remember that “video iPhone nano gaming system”? Here I am being the first person to display a Twitter timeline on it:

Worlds were colliding: Twitter, iPhone, and a place to talk about both.

Twitter was always an outlet for my strange sense of humor. Depending on your point-of-view, April Fool’s in 2008 was either the best or worst day ever:

It’s now commonly known as the CHOCK LOCK, but it took almost five months for someone to christen it:

And amazingly, just six minutes later:

Both Seth and Michael were spurred on by Dan Wood, so I guess we can blame him!

The iPhone SDK was released in February 2008 and a lot of that early hacking I did on the iPhone was finally turning into a real product. It’s likely that this affection for capital letters was triggered by a shitload of coding.

But all that hard work paid off:

I tweeted that just after being handed an Apple Design Award. Those colliding worlds were good to me.

I’m a firm believer of looking forward in your work, but there’s also value in remembering how you got to where you are today.

And speaking of today, guess when the bulk of this post was written?

Some things never change.