Well, no sense in waiting. Now that I had a list of the startup procedure, I might as well get cracking.
I had covered the first two entries in the startup list already -- registering to be notified when matching hardware was found or inserted. (That part I hadn't tried to port from the Linux code; I just replaced it with the OS X equivalent.)
So the next step was iwl_trans_pcie_alloc, which had the basic PCIe setup code.
Right at the top, there were three things I needed to deal with: spin locks, a mutex, and a wait queue. Well, OS X has spin locks and mutexes covered pretty easily. I was going to have to take a peek at the syntax and see whether I could use a macro substitution or whether I needed a code replacement, but that should be easy enough either way.
Better, I thought, to start with the harder part, the wait queue. I wasn't that familiar with wait queues, but I found a nice introduction here.
Still, it wasn't exactly obvious to me what this was used for in the PCIe setup. The wait queue I was looking at was used to wait on a firmware write to complete. In comparison, the firmware read code used a completion variable, and other code waited on the completion variable to be set. So what's the difference between a wait queue and a completion variable? (Research ensues...) Ahh. There is no difference. A completion variable is a wrapper around a wait queue. (See the definition of struct completion in here.)
OK, so the bottom line is, one or more threads can end up waiting on some condition. When the condition arrives, whatever code processes that can wake up one or all of the waiting threads.
Now, what would I use on OS X to achieve this?
IOKit Options
It seems like the IOKit lock, a.k.a. IOLock, has that capability: IOLockSleep or IOLockSleepDeadline, and then IOLockWakeup. Functionally, it mostly seemed like it would work, with one key difference:
Linux Wait Queues
For a Linux wait queue, you can specify a boolean condition, and the thread will keep sleeping until that condition is true. Because the implementation relies on macros, it checks the condition with the code you provide every time the thread is awakened.
Code such as this:
wait_event(some_wait_queue, foo == bar);
Gets transformed by a macro that I'm seriously abbreviating here:
#define ___wait_event(wq, condition, ...) \
({ \
for (;;) { \
prepare_to_wait_event(wq, ...); \
if (condition) \
break; \
schedule(); \
} \
finish_wait(wq, ...); \
})
Into code that I gather looks something like this:
({
for (;;) {
prepare_to_wait_event(some_wait_queue, ...);
if (foo == bar)
break;
schedule();
}
finish_wait(some_wait_queue, ...);
})
In that transformed code, the prepare_to_wait_event call puts the thread into a suspended state (e.g. TASK_UNINTERRUPTIBLE), the finish_wait call sets it back to the usual TASK_RUNNING state, and in between, every time the thread wakes up, if the condition isn't yet true it goes right back to sleep, thanks to the schedule() call, which lets other threads run since this one is no longer in a runnable state.
IOLock Waiting
The IOLock calls, by contrast, take an event object instead of a condition to test, and only wake the thread if the event object passed to the wake call is the same as the one passed to the sleep call:
IOLockLock(some_lock);
IOLockSleep(some_lock, foo, THREAD_UNINT);
...
IOLockUnlock(some_lock);
// Then in some other thread:
IOLockWakeup(some_lock, foo, true);
This is a lot more straightforward, on the surface. But some of the differences are:
- The thread that might sleep must acquire the lock first
- Instead of a separate wait queue for every situation, you could use one lock with multiple event types
- There's no obvious support for a boolean wait condition
Now in practice, the third issue may or may not matter. In the code I was looking at, the wait condition was a simple boolean variable, and the only time wake_up was called was immediately after that boolean was set to true. In other words, I could skip the condition part altogether in that case. But I saw four wait queues in the project, so if I went this way I either had to convince myself that was true of all four (and pay attention to any future updates), or go ahead and put some kind of macro around the sleep call to check the condition, like in Linux. Well, I'd probably need a macro to acquire the lock anyway, so I guess I could just roll it all in there.
But since this is all part of a lock rather than a native wait queue, and would need some massaging in any case, it wasn't obvious that it would be the best fit.
Mach Options
Since IOLock wasn't quite a drop-in replacement, I figured I'd better look further. A little poking around revealed a Mach wait_queue_t and a Mach mutex type, lck_mtx_t.
I actually found these by inspecting the definitions of the IOLock functions above. I saw, for instance, IOLockGetMachLock, which returns a lck_mtx_t. Then I followed IOLockWakeup to thread_wakeup_prim to, e.g., wait_queue_wakeup_all, which used wait_queue_t, and so on.
So then I thought maybe I shouldn't be using the higher-level IOLock wrapper. I could just use a lck_mtx_t directly, and call lck_mtx_sleep and... well, then I'd have to call thread_wakeup_prim or something, which sort of spoiled it, because it wasn't very symmetrical and seemed like I would be mixing a mutex API with a thread scheduling API.
So maybe instead of mutexes, I should be looking directly at the Mach wait queues, which perhaps corresponded more closely to the Linux wait queues. But the Mach wait queues seemed really low-level. There were obvious wakeup methods, but the way to make the current thread wait (sleep) on a wait queue was not obvious. There were both 32-bit and 64-bit versions. I could probably have worked something out, but it was starting to look like IOLock was the convenient abstraction over all this stuff that I wanted.
However, I wasn't ready to commit just yet:
BSD Options
Having found both an IOKit option and some Mach options, I naturally assumed there must be a BSD option. (Why have one API when you can have three, for thrice the price?)
Ah, yes. Google led me to a FreeBSD man page, which gave me some search terms for the XNU source, and I ended up with things like sleep and wakeup. Those inverted the parameters a bit, in that you provided a "channel" first, which was just a unique address, and a wakeup on a specified channel would wake the threads that went to sleep on that channel.
But there was also a priority, which I didn't really want to deal with. I could set it to 0 ("don't change"), except that I wasn't really sure whether or when I should want the thread priority to change. And for waiting with a timeout, you had to specify the timeout as a struct. And there was a char * message parameter, which the XNU source was silent on, but the BSD man page said should be no longer than 6 characters.
All of a sudden there were a lot of decisions to make, for an API that had initially looked simpler. And for all I know there are other BSD options -- this was a pretty cursory search.
All in all, it seemed like IOLock might be a pretty convenient abstraction.
Final Decision
No surprise here -- I decided that I'd try IOLock first. I still haven't decided whether to handle the condition argument, though I will probably take a stab at it.
But this whole thing made me a little grumpy. Why should there be three or four different APIs for this, each a little different? And why isn't there a page somewhere saying, "Here are all your options, and here's when you should use each one"?
Mainly, it's not so much this one case as the fear that I'll have to go through this whole process again for every Linux function I need to translate.
And I feel like I dodged a bullet. All the IOLock functions are defined in IOLocks.cpp. Fortunately, the entire thing is wrapped in an extern "C" block, so the functions work just fine when called from a C program. But what if the next one is really a C++ class instead? I gather I could just change file extensions and compile all the driver code as C++ in order to call other C++ code, but there's no way I'm about to do that. At a minimum, it seems like it would change many of the struct sizes, which would make a huge mess. So for the moment, C++ is to be avoided, and I only narrowly managed.
I guess we'll see what the future holds.