16.4. Some Other Details

This section covers a few other aspects of the block layer that may be of interest for advanced drivers. None of the following facilities need to be used to write a correct driver, but they may be helpful in some situations.

16.4.1. Command Pre-Preparation

The block layer provides a mechanism for drivers to examine and preprocess requests before they are returned from elv_next_request. This mechanism allows drivers to set up the actual drive commands ahead of time, decide whether the request can be handled at all, or perform other sorts of housekeeping.

If you want to use this feature, create a command preparation function that fits this prototype:

typedef int (prep_rq_fn) (request_queue_t *queue, struct request *req);

The request structure includes a field called cmd, which is an array of BLK_MAX_CDB bytes; this array may be used by the preparation function to store the actual hardware command (or any other useful information). This function should return one of the following values:

BLKPREP_OK

Command preparation went normally, and the request can be handed to your driver's request function.

BLKPREP_KILL

This request cannot be completed; it is failed with an error code.

BLKPREP_DEFER

This request cannot be completed at this time. It stays at the front of the queue but is not handed to the request function.

The preparation function is called by elv_next_request immediately before the request is returned to your driver. If this function returns BLKPREP_DEFER, the return value from elv_next_request to your driver is NULL. This mode of operation can be useful if, for example, your device has reached the maximum number of requests it can have outstanding.

To have the block layer call your preparation function, pass it to:

void blk_queue_prep_rq(request_queue_t *queue, prep_rq_fn *func);

By default, request queues have no preparation function.
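The pieces above fit together as in the following sketch. The `mydev` structure, the `MYDEV_MAX_OUTSTANDING` limit, and the `mydev_build_hw_command` helper are all hypothetical stand-ins for whatever your driver actually uses; the point is simply where each `BLKPREP_` return value applies.

```c
#include <linux/blkdev.h>

/* A possible command preparation function.  It builds the hardware
 * command into req->cmd ahead of time and decides whether the request
 * can proceed at all. */
static int mydev_prep_fn(request_queue_t *queue, struct request *req)
{
	struct mydev *dev = queue->queuedata;

	if (dev->outstanding >= MYDEV_MAX_OUTSTANDING)
		return BLKPREP_DEFER;	/* device is full; elv_next_request
					 * returns NULL to the caller */
	if (mydev_build_hw_command(dev, req, req->cmd) < 0)
		return BLKPREP_KILL;	/* cannot be satisfied; fail it */
	return BLKPREP_OK;		/* ready for the request function */
}

/* At queue setup time: */
blk_queue_prep_rq(dev->queue, mydev_prep_fn);
```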

16.4.2. Tagged Command Queueing

Hardware that can have multiple requests active at once usually supports some form of tagged command queueing (TCQ). TCQ is simply the technique of attaching an integer "tag" to each request so that when the drive completes one of those requests, it can tell the driver which one. In previous versions of the kernel, block drivers that implemented TCQ had to do all of the work themselves; in 2.6, a TCQ support infrastructure has been added to the block layer for all drivers to use.

If your drive performs tagged command queueing, you should inform the kernel of that fact at initialization time with a call to:

int blk_queue_init_tags(request_queue_t *queue, int depth, 
                        struct blk_queue_tag *tags);

Here, queue is your request queue, and depth is the number of tagged requests your device can have outstanding at any given time. tags is an optional pointer to an array of struct blk_queue_tag structures; there must be depth of them. Normally, tags can be passed as NULL, and blk_queue_init_tags allocates the array. If, however, you need to share the same tags between multiple devices, you can pass the tags array pointer (stored in the queue_tags field) from another request queue. You should never actually allocate the tags array yourself; the block layer needs to initialize the array and does not export the initialization function to modules.

Since blk_queue_init_tags allocates memory, it can fail; it returns a negative error code to the caller in that case.
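A typical initialization call might look like the following sketch, which assumes (purely for illustration) a device that can handle 16 outstanding tagged commands and lets the block layer allocate the tag array:

```c
#include <linux/blkdev.h>

/* Set up tagged command queueing at initialization time.  Passing NULL
 * as the tags array tells blk_queue_init_tags to allocate it for us. */
static int mydev_setup_tags(struct mydev *dev)
{
	int ret = blk_queue_init_tags(dev->queue, 16, NULL);

	if (ret < 0)
		return ret;	/* allocation failed; the driver could fall
				 * back to untagged operation here */
	return 0;
}
```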

If the number of tags your device can handle changes, you can inform the kernel with:

int blk_queue_resize_tags(request_queue_t *queue, int new_depth);

The queue lock must be held during the call. This call can fail, returning a negative error code in that case.

The association of a tag with a request structure is done with blk_queue_start_tag, which must be called with the queue lock held:

int blk_queue_start_tag(request_queue_t *queue, struct request *req);

If a tag is available, this function allocates it for this request, stores the tag number in req->tag, and returns 0. It also dequeues the request from the queue and links it into its own tag-tracking structure, so your driver should take care not to dequeue the request itself if it's using tags. If no more tags are available, blk_queue_start_tag leaves the request on the queue and returns a nonzero value.
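A request function using tags might be structured as follows. Since the kernel calls the request function with the queue lock held, the locking requirement of blk_queue_start_tag is already satisfied; `mydev_issue_command` is a hypothetical routine that hands the tagged command to the hardware.

```c
#include <linux/blkdev.h>

/* Sketch of a tag-aware request function.  Note that the request is
 * never dequeued by hand; blk_queue_start_tag does that for us. */
static void mydev_request(request_queue_t *queue)
{
	struct request *req;

	while ((req = elv_next_request(queue)) != NULL) {
		if (blk_queue_start_tag(queue, req))
			return;	/* no tag free; req stays on the queue and
				 * we will be called again later */
		/* req is now dequeued, and req->tag holds its tag */
		mydev_issue_command(queue->queuedata, req);
	}
}
```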

When all transfers for a given request have been completed, your driver should return the tag with:

void blk_queue_end_tag(request_queue_t *queue, struct request *req);

Once again, you must hold the queue lock before calling this function. The call should be made after end_that_request_first returns 0 (meaning that the request is complete) but before calling end_that_request_last. Remember that the request is already dequeued, so it would be a mistake for your driver to do so at this point.

If you need to find the request associated with a given tag (when the drive reports completion, for example), use blk_queue_find_tag:

struct request *blk_queue_find_tag(request_queue_t *queue, int tag);

The return value is the associated request structure, unless something has gone truly wrong.
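The completion side can thus be sketched as follows; assume this hypothetical `mydev_complete` is called, with the queue lock held, when the device reports that the command carrying a given tag has finished:

```c
#include <linux/blkdev.h>

/* Look up the request for a completed tag and finish it off.  The
 * request was dequeued by blk_queue_start_tag, so we must not try to
 * dequeue it again here. */
static void mydev_complete(request_queue_t *queue, int tag, int uptodate)
{
	struct request *req = blk_queue_find_tag(queue, tag);

	if (req == NULL)
		return;		/* unknown tag; something has gone wrong */
	if (end_that_request_first(req, uptodate, req->hard_nr_sectors))
		return;		/* more sectors remain to be transferred */
	blk_queue_end_tag(queue, req);	/* return the tag to the pool */
	end_that_request_last(req);
}
```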

If things really do go wrong, your driver may find itself having to reset or perform some other act of violence against one of its devices. In that case, any outstanding tagged commands will not be completed. The block layer provides a function that can help with the recovery effort in such situations:

void blk_queue_invalidate_tags(request_queue_t *queue);

This function returns all outstanding tags to the pool and puts the associated requests back into the request queue. The queue lock must be held when you call this function.
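A recovery path after a device reset might therefore look like this sketch, which abandons all in-flight tagged commands and relies on the request function to reissue them:

```c
#include <linux/blkdev.h>

/* After resetting the hardware, return every outstanding tag to the
 * pool and requeue the associated requests for reissue. */
static void mydev_recover(request_queue_t *queue)
{
	unsigned long flags;

	spin_lock_irqsave(queue->queue_lock, flags);
	blk_queue_invalidate_tags(queue);
	spin_unlock_irqrestore(queue->queue_lock, flags);
}
```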
