Commits on Jan 2, 2009
  1. @fujita

    [SCSI] block: make blk_rq_map_user take a NULL user-space buffer for WRITE

    fujita authored James Bottomley committed
    
    The commit 8188276 (block: make
    blk_rq_map_user take a NULL user-space buffer) extended
    blk_rq_map_user to accept a NULL user-space buffer with a READ
    command. It was necessary to convert sg to use the block layer mapping
    API.
    
    This patch extends blk_rq_map_user again for a WRITE command. This is
    necessary to convert the st and osst drivers to use the block layer
    mapping API.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
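
    A hedged sketch of what this enables for st/osst: the driver hands its
    own pages to blk_rq_map_user via struct rq_map_data and passes a NULL
    user-space buffer, now for WRITE as well as READ. Variable names and
    the rq_map_data field set below are illustrative assumptions; only the
    argument order (queue, request, map_data, user buffer, length, gfp
    mask) follows the signature established by this series.

        struct rq_map_data map_data = {
                .pages      = drv_pages,        /* pages the driver reserved itself */
                .page_order = 0,
                .nr_entries = drv_nr_pages,
        };
        int ret;

        /* NULL user buffer: just build the request and bios over .pages */
        ret = blk_rq_map_user(q, rq, &map_data, NULL, len, GFP_KERNEL);
        if (ret)
                goto out_put_request;           /* hypothetical error path */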
  2. @fujita

    [SCSI] block: fix the partial mappings with struct rq_map_data

    fujita authored James Bottomley committed
    This fixes bio_copy_user_iov to properly handle partial mappings
    with struct rq_map_data (which only sg uses for now, but st and osst
    will shortly). It adds the offset member to struct rq_map_data and
    changes blk_rq_map_user to update it so that bio_copy_user_iov can add
    an appropriate page frame via bio_add_pc_page().
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
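
    A hedged sketch of the mechanism described above; the loop structure
    and variable names are illustrative, not the exact bio_copy_user_iov
    code.

        /*
         * Resume inside the caller's reserved pages at map_data->offset and
         * hand page fragments to the bio.  If the bio fills up, the rest is
         * mapped by a later call, which is why offset must stay up to date.
         */
        unsigned int i = map_data->offset / PAGE_SIZE;
        unsigned int page_off = map_data->offset % PAGE_SIZE;

        while (len) {
                unsigned int bytes = min_t(unsigned int, PAGE_SIZE - page_off, len);

                if (bio_add_pc_page(q, bio, map_data->pages[i], bytes, page_off) < bytes)
                        break;                  /* bio full: partial mapping */

                len -= bytes;
                page_off = 0;
                i++;
        }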
Commits on Oct 9, 2008
  1. @fujita

    block: make blk_rq_map_user take a NULL user-space buffer

    fujita authored Jens Axboe committed
    This patch changes blk_rq_map_user to accept a NULL user-space buffer
    with a READ command if rq_map_data is not NULL. Thus a caller can pass
    page frames to blk_rq_map_user to just set up a request and bios with
    page frames properly. bio_uncopy_user (called via blk_rq_unmap_user)
    doesn't copy data to user space for such a request.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
  2. @fujita

    block: add blk_rq_aligned helper function

    fujita authored Jens Axboe committed
    This adds a blk_rq_aligned helper function to check whether the
    alignment and padding requirements are satisfied for a DMA transfer.
    It also converts blk_rq_map_kern and __blk_rq_map_user to use the
    helper function.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
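
    A hedged sketch of the check blk_rq_aligned performs, assuming the
    queue's DMA alignment and padding masks are combined as shown; the
    exact parameter types of the helper may differ.

        /*
         * Both the buffer address and the length must satisfy the queue's DMA
         * alignment and padding requirements for a direct mapping to be safe.
         */
        static inline int blk_rq_aligned(struct request_queue *q,
                                         unsigned long addr, unsigned int len)
        {
                unsigned long mask = queue_dma_alignment(q) | q->dma_pad_mask;

                return !(addr & mask) && !(len & mask);
        }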
  3. @fujita

    block: introduce struct rq_map_data to use reserved pages

    fujita authored Jens Axboe committed
    This patch introduces struct rq_map_data to enable bio_copy_user_iov()
    to use reserved pages.
    
    Currently, bio_copy_user_iov allocates bounce pages but
    drivers/scsi/sg.c wants to allocate pages by itself and use
    them. struct rq_map_data can be used to pass allocated pages to
    bio_copy_user_iov.
    
    The current users of bio_copy_user_iov simply pass NULL (they don't
    want to use pre-allocated pages).
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Cc: Douglas Gilbert <dougg@torque.net>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
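
    A hedged sketch of what the structure carries; the exact field set is
    an assumption based on this series (the offset member, for instance,
    arrives with the partial-mapping fix listed above).

        /*
         * Passes a caller's pre-allocated pages into bio_copy_user_iov so the
         * block layer uses them instead of allocating bounce pages itself.
         */
        struct rq_map_data {
                struct page **pages;    /* pages reserved by the caller (e.g. sg) */
                int page_order;         /* allocation order of each entry */
                int nr_entries;         /* number of entries in pages[] */
                unsigned long offset;   /* resume point for partial mappings */
        };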
  4. @fujita

    block: add gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov

    fujita authored Jens Axboe committed
    Currently, blk_rq_map_user and blk_rq_map_user_iov always do
    GFP_KERNEL allocation.
    
    This adds a gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov
    so that sg can use it (sg always does GFP_ATOMIC allocation).
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Douglas Gilbert <dougg@torque.net>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
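
    A hedged sketch of the resulting calls, written against the final
    signature from this series; the buffer and length variables are
    placeholders.

        /* sg maps in contexts where it must not sleep, so it can now pass
         * GFP_ATOMIC; existing callers simply keep passing GFP_KERNEL. */
        ret = blk_rq_map_user(q, rq, md, ubuf, len, GFP_ATOMIC);    /* sg */
        ret = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);  /* other callers */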
Commits on Jul 26, 2008
  1. @fujita @torvalds

    block/blk-map.c: use the new object_is_on_stack() helper

    fujita authored torvalds committed
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: Jens Axboe <jens.axboe@oracle.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Commits on Jul 4, 2008
  1. @fujita

    block: blk_rq_map_kern uses the bounce buffers for stack buffers

    fujita authored Jens Axboe committed
    blk_rq_map_kern is used for kernel-internal I/O. Some callers use
    this function with stack buffers, but DMA to/from stack buffers
    leads to memory corruption on non-coherent platforms.
    
    This patch makes blk_rq_map_kern use bounce buffers if a caller
    passes a stack buffer (on all platforms, for simplicity).
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
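
    A hedged sketch of the decision this adds inside blk_rq_map_kern; the
    stack test is written with object_is_on_stack() for clarity, the
    helper the blk-map.c cleanup listed above later switches to.

        /* Copy through temporary (bounce) pages whenever the caller's buffer
         * lives on its stack; DMA straight to the stack is unsafe on
         * non-coherent platforms, so copy on all platforms for simplicity. */
        if (object_is_on_stack(kbuf))
                bio = bio_copy_kern(q, kbuf, len, gfp_mask, reading);
        else
                bio = bio_map_kern(q, kbuf, len, gfp_mask);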
Commits on Jul 3, 2008
  1. @fujita

    block: add bounce support to blk_rq_map_user_iov

    fujita authored Jens Axboe committed
    blk_rq_map_user_iov can't handle bounce buffers (that is, the
    bio_map_user_iov path doesn't work with an LLD that needs GFP_DMA).
    
    This patch fixes blk_rq_map_user_iov to support the bounce buffer.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
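
    A hedged sketch of the fix: after the bio is built from the user
    iovec, it is pushed through the bounce machinery so an LLD restricted
    to low or DMA-able memory gets pages it can actually reach. The
    bio_map_user_iov argument list here is abbreviated.

        bio = bio_map_user_iov(q, NULL, iov, iov_count, read);
        if (IS_ERR(bio))
                return PTR_ERR(bio);

        /* The added step: let the bounce layer swap in reachable pages for
         * any page the device cannot address. */
        blk_queue_bounce(q, &bio);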
Commits on Apr 29, 2008
  1. @fujita

    block: add dma alignment and padding support to blk_rq_map_kern

    fujita authored Jens Axboe committed
    This patch adds bio_copy_kern, similar to bio_copy_user.
    blk_rq_map_kern uses bio_copy_kern instead of bio_map_kern when
    necessary.
    
    bio_copy_kern uses temporary pages, and the bi_end_io callback frees
    these pages. bio_copy_kern saves the original kernel buffer in
    bio->bi_private; it doesn't use something like struct bio_map_data to
    store information about the caller.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Tejun Heo <htejun@gmail.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
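
    A hedged sketch of the completion side described above; the function
    name and the segment iterator are illustrative.

        /* bi_private holds the caller's original kernel buffer; on completion
         * of a read, the temporary pages are copied back into it, then freed. */
        static void bio_copy_kern_endio(struct bio *bio, int err)
        {
                const int read = bio_data_dir(bio) == READ;
                char *p = bio->bi_private;
                struct bio_vec *bvec;
                int i;

                bio_for_each_segment(bvec, bio, i) {
                        char *addr = page_address(bvec->bv_page);

                        if (read && !err)
                                memcpy(p, addr, bvec->bv_len);

                        __free_page(bvec->bv_page);     /* temporary bounce page */
                        p += bvec->bv_len;
                }

                bio_put(bio);
        }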
Commits on Apr 21, 2008
  1. @fujita

    block: move the padding adjustment to blk_rq_map_sg

    fujita authored Jens Axboe committed
    blk_rq_map_user adjusts bi_size of the last bio. This breaks the rule
    that req->data_len (the true data length) is equal to sum(bio), and it
    broke the SCSI command completion code.
    
    Commit e97a294 was introduced to fix
    the above issue. However, the partial completion code doesn't work
    with it. The commit is also a layer violation (the SCSI mid-layer
    should not know about the block layer's padding).
    
    This patch moves the padding adjustment to blk_rq_map_sg (suggested by
    James). The padding works like the drain buffer. This patch breaks the
    rule that req->data_len is equal to sum(sg); however, the drain buffer
    already broke it. So this patch just restores the rule that
    req->data_len is equal to sum(bio) without breaking anything new.
    
    Now when a low level driver needs padding, blk_rq_map_user and
    blk_rq_map_user_iov guarantee there's enough room for padding.
    blk_rq_map_sg can safely extend the last entry of a scatter list.
    
    blk_rq_map_sg must extend the last entry of a scatter list only for a
    request that went through bio_copy_user_iov. This patch introduces a
    new REQ_COPY_USER flag.
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
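
    A hedged sketch of the padding step as it might look at the end of
    blk_rq_map_sg; the REQ_COPY_USER test is the point, the arithmetic is
    illustrative (extra_len is the field added by the March 2008 patch
    listed below).

        /* Only requests built via bio_copy_user_iov carry REQ_COPY_USER, and
         * only those are guaranteed to have room after the last segment, so
         * only those get the final scatterlist entry stretched. */
        if ((rq->cmd_flags & REQ_COPY_USER) && (rq->data_len & q->dma_pad_mask)) {
                unsigned int pad_len = (q->dma_pad_mask & ~rq->data_len) + 1;

                sg->length += pad_len;          /* sg is the last mapped entry */
                rq->extra_len += pad_len;
        }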
  2. @fujita

    block: add bio_copy_user_iov support to blk_rq_map_user_iov

    fujita authored Jens Axboe committed
    With this patch, blk_rq_map_user_iov uses bio_copy_user_iov when a low
    level driver needs padding or a buffer in sg_iovec isn't aligned. That
    is, it uses temporary kernel buffers instead of mapping user pages
    directly.
    
    When an LLD needs padding, blk_rq_map_sg later needs to extend the last
    entry of a scatter list. bio_copy_user_iov guarantees that there is
    enough space for padding by using temporary kernel buffers instead of
    user pages.
    
    blk_rq_map_user_iov needs buffers in sg_iovec to be aligned. The
    comment in blk_rq_map_user_iov indicates that drivers/scsi/sg.c also
    needs buffers in sg_iovec to be aligned. Actually, drivers/scsi/sg.c
    works with unaligned buffers in sg_iovec (it always uses temporary
    kernel buffers).
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Tejun Heo <htejun@gmail.com>
    Cc: Mike Christie <michaelc@cs.wisc.edu>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
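
    A hedged sketch of the decision this introduces; the argument lists
    match this point in the series and omit the map_data and gfp_mask
    parameters added by the October 2008 patches above.

        /* Fall back to copying through kernel pages when the LLD needs padding
         * or any iovec element misses the queue's DMA alignment; otherwise map
         * the user pages directly as before. */
        for (i = 0; i < iov_count; i++) {
                unsigned long uaddr = (unsigned long)iov[i].iov_base;

                if (uaddr & queue_dma_alignment(q)) {
                        unaligned = 1;
                        break;
                }
        }

        if (unaligned || q->dma_pad_mask)
                bio = bio_copy_user_iov(q, iov, iov_count, read);
        else
                bio = bio_map_user_iov(q, NULL, iov, iov_count, read);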
Commits on Mar 4, 2008
  1. @fujita

    block: restore the meaning of rq->data_len to the true data length

    fujita authored Jens Axboe committed
    The meaning of rq->data_len was changed to the length of an allocated
    buffer from the true data length. It breaks SG_IO friends and
    bsg. This patch restores the meaning of rq->data_len to the true data
    length and adds rq->extra_len to store an extended length (due to
    drain buffer and padding).
    
    This patch also removes the code to update bio in blk_rq_map_user
    introduced by the commit 40b01b9.
    The commit adjusts bio according to memory alignment
    (queue_dma_alignment). However, memory alignment is NOT padding
    alignment. This adjustment also breaks SG_IO friends and bsg. Padding
    alignment needs to be fixed in a proper way (by a separate patch).
    
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Jens Axboe <axboe@carl.home.kernel.dk>
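
    A hedged sketch of the split this restores; the variable names are
    placeholders.

        /* rq->data_len again means the true transfer length (what SG_IO and
         * bsg report to user space), while drain and padding bytes live in
         * rq->extra_len for code that sizes the hardware transfer. */
        unsigned int true_len = rq->data_len;
        unsigned int hw_len   = rq->data_len + rq->extra_len;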