A while ago I was trying to get descriptors from USB devices asynchronously. In some situations, like catching a Mass Storage device while it's busy servicing a CDB, the device can drop control pipe requests on the floor, causing DeviceIoControl to block until Windows loses patience with the device and resets it (about five seconds).

The naïve approach is to use an OVERLAPPED structure with DeviceIoControl, since that's how you do async IO on Windows, but this doesn't work. It's up to the device driver to determine whether the call will be completed synchronously or not, and the Windows USB drivers complete all calls synchronously unless the pipe gets stalled by the device (normally between a URB submission and completion the device just NAKs until the result is ready). This is extremely uncommon, and impossible for the control pipe (which is where descriptor requests go) because the control pipe is used to clear stalls. Stall the control pipe and you wedge the device, leaving a device reset as the only option. In five seconds.

The solution I wound up using is not one I'm particularly proud of, but it does have the advantage of working: I didn't mind a synchronous call (in fact, it made things easier), but I didn't want to deal with getting wedged when a device went out to lunch, so I spawned a thread1 and used WaitForSingleObject to enforce my timeout. Plan on waiting 80ms or so per quarter kilobyte of expected data. Measured very unscientifically, a config descriptor without interfaces (nine bytes) takes less than 10ms, average 4.7ms, standard deviation 0.00831.

Note that you want to open the HANDLE you pass to DeviceIoControl inside the thread too—if there's another request in flight, opening the file handle will block until that request completes, which is exactly what we're trying so hard to avoid. You shouldn't really be requesting another descriptor from the same device right after a failure anyway, since the device is likely about to be reset by Windows.
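The pattern is simple enough to sketch. Here it is in Python rather than the original C, purely for brevity; on Windows you'd use CreateThread and WaitForSingleObject, and `blocking_call` stands in for opening the HANDLE and issuing the DeviceIoControl request (both names here are illustrative, not from the original code):

```python
import threading

def with_timeout(blocking_call, timeout_s):
    """Run blocking_call on a worker thread; give up after timeout_s seconds.

    Mirrors the spawn-a-thread + WaitForSingleObject approach. If we time
    out, the worker is simply abandoned (you can't safely kill it), so it
    must not touch shared state after we walk away.
    """
    result = {}

    def worker():
        # On Windows, this is where you'd open the device HANDLE and call
        # DeviceIoControl. The handle is opened *inside* the thread so a
        # wedged in-flight request can't block the caller.
        result["value"] = blocking_call()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if t.is_alive():
        return None  # timed out; the device is probably about to be reset
    return result.get("value")
```

The caller stays synchronous, which is the whole point: you get a plain return value or a timeout, and the wedged request is somebody else's problem.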

Bonus pitfall: For configuration and string descriptors, you don't know up front how big the descriptor is. You can either grab a portion of it, read the bLength (wTotalLength for configuration descriptors), and use that to fetch the rest of the descriptor, or you can just request UINT16_MAX and cross your fingers.

It turns out that neither of these approaches works for all devices. To save some space while keeping the number of round trips to the device low, I always used a 256-byte buffer in my requests and failed over to a larger buffer if the descriptor was too big. Unfortunately, in the classic "just bang on it until it starts working" approach of USB vendors, some devices will simply not respond if you make a request with a buffer size that is not the exact size of the descriptor or sizeof UsbConfigurationDescriptor (nine bytes). The reason for this appears to be one of convenience—Windows itself only requests either nine bytes or the entirety of descriptors, so some device firmwares were written with only these two cases in mind.
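The resulting two-step fetch looks roughly like this. The nine-byte configuration descriptor header layout is little-endian with wTotalLength at offset 2 (per the USB 2.0 spec); `read_descriptor` is a hypothetical stand-in for the actual DeviceIoControl request, taking a buffer size and returning that many bytes:

```python
import struct

CONFIG_HEADER_SIZE = 9  # sizeof UsbConfigurationDescriptor

def config_descriptor_total_length(header: bytes) -> int:
    """Extract wTotalLength from a 9-byte configuration descriptor header.

    Layout (USB 2.0 spec, little-endian):
      bLength, bDescriptorType, wTotalLength, bNumInterfaces,
      bConfigurationValue, iConfiguration, bmAttributes, bMaxPower
    """
    b_length, b_type, w_total_length = struct.unpack_from("<BBH", header, 0)
    assert b_type == 0x02, "not a configuration descriptor"
    return w_total_length

def fetch_config_descriptor(read_descriptor):
    # Step 1: ask for exactly nine bytes; even picky firmwares handle this.
    header = read_descriptor(CONFIG_HEADER_SIZE)
    # Step 2: re-request with the exact total size from wTotalLength,
    # the other buffer size every device was written to expect.
    return read_descriptor(config_descriptor_total_length(header))
```

Two round trips instead of one, but both use a buffer size the firmware authors actually tested.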


1 more accurately: reused a vthread, but the internal implementation of our threading library is not germane to this article

In the physical world the race to the bottom is won by mass manufacturers in third world countries paying their workers 2¢ an hour. That's bad. And in the race for lowest per-unit cost, low-quality products get shipped.

In the software world there are no significant manufacturing costs, so commoditization doesn't rely on reducing per-unit product cost and actually results in higher product quality. This is achieved when someone builds and releases a functioning1 Thing for free with the source code available, and other people take interest and offer to help. Game over: this Thing is free and getting attention, and if it doesn't exactly fit your needs, you can adapt it at a lower cost than writing another Thing from scratch. The result is always better than something you built yourself. This allows you to concentrate on the truly new and unexplored work behind the great products of today. The flywheel of software development gets a little bit faster.2

Open source is not the Only Way. People are entitled to be rewarded for coming up with new and interesting things that other people want. Successful free software tends to be commoditized software—stuff that other people have written many times before. These projects benefit most from the "collective bargaining" development style of open source, with the mistakes of all those previous implementations in mind and addressed in a package that anyone can use.

OpenSSL is a great example of this. Encryption software is terrifically hard to write, and everyone should be using it. Any bug could mean that someone owns your sensitive data, and even if it runs correctly, you're probably vulnerable to side channel attacks that monitor timing, power usage, sound, or math faults. Everyone who writes their own security routines falls for this stuff, including the open source libraries.

The best part about commoditization in software: you're better off using OpenSSL instead of DIYSSL because they've already run into these problems. Early versions had security vulnerabilities, but those have been fixed and the library is under constant analysis and attack by others and you benefit from it directly just by consuming it3. You stand on the shoulders of giants.

It's not just security either. Want high performance code? Someone probably wrote a free library that does what you want. And someone else came along later and improved it. This iteration is what makes open source software such a powerful force.

Open source software has a place in the world, and everyone in the ecosystem relies on it. As the industry matures, it becomes impossible to build a compelling new product without it4.


1 many open source projects fail at "functioning", a sign that the technology they're attempting to implement has not yet become sufficiently commoditized.
2 this is easy to prove: every modern product with a microprocessor contains at least one open source component.
3 as long as you don't screw with it
4 but you should probably bring something of your own to the table, too.

in Safari, this is the context menu for a link:
[screenshot: Safari's context menu for a link]

wouldn't it be nice if, when interacting with a link that will open in a new window, the "Open Link in New Window" option were "Open Link in Current Tab/Window" instead?


nowadays only one thing holds me back on OS X, and as much as I try to get over it, it still trips me up. that thing is application-centric switching.

the interface that does exist has some pretty maddening inconsistencies. ⌘ + ⇥ can be thought of as moving an app to the top of your attention stack: when you select an app, its neighbours change, and switching becomes most-recently-used. unfortunately, ⌘ + ` works more like traversing a list; the neighbours never change. these two actions should behave the same!

application switching is almost never what I want. thanks to Spaces, nowadays I can break tasks up by desktop and most spaces will not have more than one window per application, but at the end of the day what I really want is something like this:

only show visible windows. obvious which app they belong to. easy to tell what windows they are. if I want to go to a window or app that is not visible, I can use the Dock.

I'd accept being able to override the ⌘ + ⇥ key binding, being given the necessary WindowServer access to get the needed information (without resorting to dirty hacks), and writing it myself. that would be ok.