static void
ptoninvalidate(struct cam_periph *periph)
{
	struct pt_softc *softc;
	struct bio *q_bio;
	struct buf *q_bp;

	softc = (struct pt_softc *)periph->softc;

	/*
	 * De-register any async callbacks.
	 */
	xpt_register_async(0, ptasync, periph, periph->path);

	softc->flags |= PT_FLAG_DEVICE_INVALID;

	/*
	 * Return all queued I/O with ENXIO.
	 * XXX Handle any transactions queued to the card
	 *     with XPT_ABORT_CCB.
	 */
	while ((q_bio = bioq_takefirst(&softc->bio_queue)) != NULL) {
		q_bp = q_bio->bio_buf;
		q_bp->b_resid = q_bp->b_bcount;
		q_bp->b_error = ENXIO;
		q_bp->b_flags |= B_ERROR;
		biodone(q_bio);
	}

	xpt_print(periph->path, "lost device\n");
}
static int
adaclose(struct disk *dp)
{
	struct cam_periph *periph;
	struct ada_softc *softc;
	union ccb *ccb;
	int error;

	periph = (struct cam_periph *)dp->d_drv1;
	if (periph == NULL)
		return (ENXIO);

	cam_periph_lock(periph);
	if ((error = cam_periph_hold(periph, PRIBIO)) != 0) {
		cam_periph_unlock(periph);
		cam_periph_release(periph);
		return (error);
	}

	softc = (struct ada_softc *)periph->softc;
	/* We only sync the cache if the drive is capable of it. */
	if ((softc->flags & ADA_FLAG_CAN_FLUSHCACHE) != 0 &&
	    (softc->flags & ADA_FLAG_PACK_INVALID) == 0) {
		ccb = cam_periph_getccb(periph, CAM_PRIORITY_NORMAL);
		cam_fill_ataio(&ccb->ataio,
		    1,
		    adadone,
		    CAM_DIR_NONE,
		    0,
		    NULL,
		    0,
		    ada_default_timeout*1000);

		if (softc->flags & ADA_FLAG_CAN_48BIT)
			ata_48bit_cmd(&ccb->ataio, ATA_FLUSHCACHE48, 0, 0, 0);
		else
			ata_28bit_cmd(&ccb->ataio, ATA_FLUSHCACHE, 0, 0, 0);
		cam_periph_runccb(ccb, /*error_routine*/NULL, /*cam_flags*/0,
		    /*sense_flags*/0, softc->disk->d_devstat);

		if ((ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP)
			xpt_print(periph->path, "Synchronize cache failed\n");

		if ((ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
			cam_release_devq(ccb->ccb_h.path,
			    /*relsim_flags*/0,
			    /*reduction*/0,
			    /*timeout*/0,
			    /*getcount_only*/0);
		xpt_release_ccb(ccb);
	}

	softc->flags &= ~ADA_FLAG_OPEN;
	cam_periph_unhold(periph);
	cam_periph_unlock(periph);
	cam_periph_release(periph);
	return (0);
}
static void
pmpcleanup(struct cam_periph *periph)
{
	struct pmp_softc *softc;

	softc = (struct pmp_softc *)periph->softc;

	xpt_print(periph->path, "removing device entry\n");
	cam_periph_unlock(periph);

	/*
	 * If we can't free the sysctl tree, oh well...
	 */
	if ((softc->flags & PMP_FLAG_SCTX_INIT) != 0 &&
	    sysctl_ctx_free(&softc->sysctl_ctx) != 0) {
		xpt_print(periph->path, "can't remove sysctl context\n");
	}

	free(softc, M_DEVBUF);
	cam_periph_lock(periph);
}
static void
ptdtor(struct cam_periph *periph)
{
	struct pt_softc *softc;

	softc = (struct pt_softc *)periph->softc;

	xpt_print(periph->path, "removing device entry\n");
	devstat_remove_entry(softc->device_stats);
	cam_periph_unlock(periph);
	destroy_dev(softc->dev);
	cam_periph_lock(periph);
	free(softc, M_DEVBUF);
}
static void
ptdtor(struct cam_periph *periph)
{
	struct pt_softc *softc;

	softc = (struct pt_softc *)periph->softc;

	devstat_remove_entry(&softc->device_stats);
	cam_extend_release(ptperiphs, periph->unit_number);
	xpt_print(periph->path, "removing device entry\n");
	dev_ops_remove_minor(&pt_ops, periph->unit_number);
	kfree(softc, M_DEVBUF);
}
static void
pmponinvalidate(struct cam_periph *periph)
{
	struct cam_path *dpath;
	int i;

	/*
	 * De-register any async callbacks.
	 */
	xpt_register_async(0, pmpasync, periph, periph->path);

	for (i = 0; i < 15; i++) {
		if (xpt_create_path(&dpath, periph,
		    xpt_path_path_id(periph->path),
		    i, 0) == CAM_REQ_CMP) {
			xpt_async(AC_LOST_DEVICE, dpath, NULL);
			xpt_free_path(dpath);
		}
	}
	pmprelease(periph, -1);
	xpt_print(periph->path, "lost device\n");
}
static void
ptoninvalidate(struct cam_periph *periph)
{
	struct pt_softc *softc;

	softc = (struct pt_softc *)periph->softc;

	/*
	 * De-register any async callbacks.
	 */
	xpt_register_async(0, ptasync, periph, periph->path);

	softc->flags |= PT_FLAG_DEVICE_INVALID;

	/*
	 * Return all queued I/O with ENXIO.
	 * XXX Handle any transactions queued to the card
	 *     with XPT_ABORT_CCB.
	 */
	bioq_flush(&softc->bio_queue, NULL, ENXIO);

	xpt_print(periph->path, "lost device\n");
}
static void
ptdone(struct cam_periph *periph, union ccb *done_ccb)
{
	struct pt_softc *softc;
	struct ccb_scsiio *csio;

	softc = (struct pt_softc *)periph->softc;
	csio = &done_ccb->csio;
	switch (csio->ccb_h.ccb_state) {
	case PT_CCB_BUFFER_IO:
	case PT_CCB_BUFFER_IO_UA:
	{
		struct bio *bp;

		bp = (struct bio *)done_ccb->ccb_h.ccb_bp;
		if ((done_ccb->ccb_h.status & CAM_STATUS_MASK) != CAM_REQ_CMP) {
			int error;
			int sf;

			if ((csio->ccb_h.ccb_state & PT_CCB_RETRY_UA) != 0)
				sf = SF_RETRY_UA;
			else
				sf = 0;

			error = pterror(done_ccb, CAM_RETRY_SELTO, sf);
			if (error == ERESTART) {
				/*
				 * A retry was scheduled, so
				 * just return.
				 */
				return;
			}
			if (error != 0) {
				if (error == ENXIO) {
					/*
					 * Catastrophic error.  Mark our device
					 * as invalid.
					 */
					xpt_print(periph->path,
					    "Invalidating device\n");
					softc->flags |= PT_FLAG_DEVICE_INVALID;
				}

				/*
				 * Return all queued I/O with EIO, so that
				 * the client can retry these I/Os in the
				 * proper order should it attempt to recover.
				 */
				bioq_flush(&softc->bio_queue, NULL, EIO);
				bp->bio_error = error;
				bp->bio_resid = bp->bio_bcount;
				bp->bio_flags |= BIO_ERROR;
			} else {
				bp->bio_resid = csio->resid;
				bp->bio_error = 0;
				if (bp->bio_resid != 0) {
					/* Short transfer ??? */
					bp->bio_flags |= BIO_ERROR;
				}
			}
			if ((done_ccb->ccb_h.status & CAM_DEV_QFRZN) != 0)
				cam_release_devq(done_ccb->ccb_h.path,
				    /*relsim_flags*/0,
				    /*reduction*/0,
				    /*timeout*/0,
				    /*getcount_only*/0);
		} else {
			bp->bio_resid = csio->resid;
			if (bp->bio_resid != 0)
				bp->bio_flags |= BIO_ERROR;
		}

		/*
		 * Block out any asynchronous callbacks
		 * while we touch the pending ccb list.
		 */
		LIST_REMOVE(&done_ccb->ccb_h, periph_links.le);

		biofinish(bp, softc->device_stats, 0);
		break;
	}
	case PT_CCB_WAITING:
		/* Caller will release the CCB */
		wakeup(&done_ccb->ccb_h.cbfcnp);
		return;
	}
	xpt_release_ccb(done_ccb);
}