// Theory of operation for waitForConditionOrInterruptNoAssertUntil and markKilled:
//
// An operation indicates to potential killers that it is waiting on a condition variable by setting
// _waitMutex and _waitCV, while holding the lock on its parent Client. It then unlocks its Client,
// unblocking any killers, which are required to have locked the Client before calling markKilled.
//
// When _waitMutex and _waitCV are set, killers must lock _waitMutex before setting the _killCode,
// and must signal _waitCV before releasing _waitMutex. Unfortunately, they must lock _waitMutex
// without holding a lock on Client to avoid a deadlock with callers of
// waitForConditionOrInterruptNoAssertUntil(). So, in the event that _waitMutex is set, the killer
// increments _numKillers, drops the Client lock, acquires _waitMutex and then re-acquires the
// Client lock. We know that the Client, its OperationContext and _waitMutex will remain valid
// during this period because the caller of waitForConditionOrInterruptNoAssertUntil will not return
// while _numKillers > 0 and will not return until it has itself reacquired _waitMutex. Instead,
// that caller will keep waiting on _waitCV until _numKillers drops to 0.
//
// In essence, when _waitMutex is set, _killCode is guarded by _waitMutex and _waitCV, but when
// _waitMutex is not set, it is guarded by the Client spinlock. Changing _waitMutex is itself
// guarded by the Client spinlock and _numKillers.
//
// When _numKillers does drop to 0, the waiter will null out _waitMutex and _waitCV.
//
// This implementation adds a minimum of two spinlock acquire-release pairs to every condition
// variable wait.
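The killer-side half of this handshake can be modeled in isolation with plain standard-library primitives. The sketch below is a simplified, hypothetical model, not MongoDB's actual markKilled: the `Op` struct and its fields merely mirror the names in the comment (a plain `std::mutex` stands in for the Client spinlock), and a deterministic first wait on `killCode` is added so the example is testable.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Hypothetical, simplified model of the waiter/killer handshake described
// above. Names mirror the comment (_waitMutex, _waitCV, _numKillers,
// _killCode) but none of this is the real OperationContext implementation.
struct Op {
    std::mutex clientLock;            // stands in for the Client spinlock
    std::mutex* waitMutex = nullptr;  // set while the op waits on a CV
    std::condition_variable* waitCV = nullptr;
    int numKillers = 0;
    int killCode = 0;
};

// Killer side: follows the protocol from the theory-of-operation comment.
void markKilled(Op& op, int code) {
    std::unique_lock<std::mutex> client(op.clientLock);
    if (!op.waitMutex) {
        // Not waiting on a CV: killCode is guarded by the client lock alone.
        op.killCode = code;
        return;
    }
    // The op is waiting: pin the wait state while we drop the client lock,
    // because the required lock order is waitMutex before clientLock.
    ++op.numKillers;
    std::mutex* wm = op.waitMutex;
    std::condition_variable* wcv = op.waitCV;
    client.unlock();
    {
        std::unique_lock<std::mutex> wait(*wm);
        client.lock();
        op.killCode = code;
        --op.numKillers;
        wcv->notify_all();  // signal before releasing waitMutex, per the comment
    }
}

// Waiter side: publishes its wait state, waits until killed, then refuses to
// tear the state down while any killer still holds pointers to it.
int waitUntilKilled(Op& op, std::condition_variable& cv, std::mutex& m) {
    std::unique_lock<std::mutex> lk(m);
    {
        std::lock_guard<std::mutex> client(op.clientLock);
        op.waitMutex = &m;
        op.waitCV = &cv;
    }
    cv.wait(lk, [&] {
        std::lock_guard<std::mutex> client(op.clientLock);
        return op.killCode != 0;
    });
    // Continue waiting until no other thread is attempting to kill this one.
    cv.wait(lk, [&] {
        std::lock_guard<std::mutex> client(op.clientLock);
        if (op.numKillers == 0) {
            op.waitMutex = nullptr;
            op.waitCV = nullptr;
            return true;
        }
        return false;
    });
    return op.killCode;
}
```

Regardless of whether the killer observes the op before or after it publishes `waitMutex`, the waiter eventually observes the kill code and clears its wait state only once `numKillers` is back to 0.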
StatusWith<stdx::cv_status> OperationContext::waitForConditionOrInterruptNoAssertUntil(
    stdx::condition_variable& cv, stdx::unique_lock<stdx::mutex>& m, Date_t deadline) noexcept {
    invariant(getClient());
    {
        stdx::lock_guard<Client> clientLock(*getClient());
        invariant(!_waitMutex);
        invariant(!_waitCV);
        invariant(0 == _numKillers);

        // This interrupt check must be done while holding the client lock, so as not to race with a
        // concurrent caller of markKilled.
        auto status = checkForInterruptNoAssert();
        if (!status.isOK()) {
            return status;
        }
        _waitMutex = m.mutex();
        _waitCV = &cv;
    }

    if (hasDeadline()) {
        deadline = std::min(deadline, getDeadline());
    }

    const auto waitStatus = [&] {
        if (Date_t::max() == deadline) {
            cv.wait(m);
            return stdx::cv_status::no_timeout;
        }
        return getServiceContext()->getPreciseClockSource()->waitForConditionUntil(cv, m, deadline);
    }();

    // Continue waiting on cv until no other thread is attempting to kill this one.
    cv.wait(m, [this] {
        stdx::lock_guard<Client> clientLock(*getClient());
        if (0 == _numKillers) {
            _waitMutex = nullptr;
            _waitCV = nullptr;
            return true;
        }
        return false;
    });

    auto status = checkForInterruptNoAssert();
    if (!status.isOK()) {
        return status;
    }

    if (hasDeadline() && waitStatus == stdx::cv_status::timeout && deadline == getDeadline()) {
        // It's possible that the system clock used in stdx::condition_variable::wait_until
        // is slightly ahead of the FastClock used in checkForInterrupt. In this case,
        // we treat the operation as though it has exceeded its time limit, just as if the
        // FastClock and system clock had agreed.
        markKilled(ErrorCodes::ExceededTimeLimit);
        return Status(ErrorCodes::ExceededTimeLimit, "operation exceeded time limit");
    }

    return waitStatus;
}
/**
 * This must be called whenever a new thread is started, both so that active threads can be
 * tracked and so that each thread has a Client object in TLS.
 */
void Client::initThread(const char* desc, AbstractMessagingPort* mp) {
    invariant(currentClient.get() == 0);

    string fullDesc;
    if (mp != NULL) {
        fullDesc = str::stream() << desc << mp->connectionId();
    } else {
        fullDesc = desc;
    }

    setThreadName(fullDesc.c_str());
    mongo::lastError.initThread();

    // Create the client obj, attach to thread
    Client* client = new Client(fullDesc, mp);
    client->setAuthorizationSession(new AuthorizationSession(
        new AuthzSessionExternalStateMongod(getGlobalAuthorizationManager())));
    currentClient.reset(client);

    // This makes the client visible to maintenance threads
    boost::lock_guard<boost::mutex> clientLock(clientsMutex);
    clients.insert(client);
}
bool Client::shutdown() {
    if (!inShutdown()) {
        boost::lock_guard<boost::mutex> clientLock(clientsMutex);
        clients.erase(this);
    }
    return false;
}
CurOp::~CurOp() {
    if (_wrapped) {
        boost::mutex::scoped_lock clientLock(Client::clientsMutex);
        _client->_curOp = _wrapped;
    }
    _client = 0;
}
bool GlobalEnvironmentMongoD::killOperation(AtomicUInt opId) {
    scoped_lock clientLock(Client::clientsMutex);
    bool found = false;

    // XXX clean up
    {
        for (set<Client*>::const_iterator j = Client::clients.begin();
             !found && j != Client::clients.end();
             ++j) {
            for (CurOp* k = (*j)->curop(); !found && k; k = k->parent()) {
                if (k->opNum() != opId)
                    continue;
                k->kill();
                for (CurOp* l = (*j)->curop(); l; l = l->parent()) {
                    l->kill();
                }
                found = true;
            }
        }
    }
    if (found) {
        interruptJs(&opId);
    }
    return found;
}
Client::~Client() {
    if (!inShutdown()) {
        // we can't clean up safely once we're in shutdown
        boost::lock_guard<boost::mutex> clientLock(clientsMutex);
        clients.erase(this);
    }
}
void ServiceContextMongoD::setKillAllOperations() {
    stdx::lock_guard<stdx::mutex> clientLock(_mutex);
    _globalKill = true;
    for (const auto listener : _killOpListeners) {
        try {
            listener->interruptAll();
        } catch (...) {
            std::terminate();
        }
    }
}
bool Client::shutdown() {
    _shutdown = true;
    if (inShutdown())
        return false;
    {
        boost::lock_guard<boost::mutex> clientLock(clientsMutex);
        clients.erase(this);
    }
    return false;
}
void GlobalEnvironmentMongoD::setKillAllOperations() {
    boost::lock_guard<boost::mutex> clientLock(Client::clientsMutex);
    _globalKill = true;
    for (size_t i = 0; i < _killOpListeners.size(); i++) {
        try {
            _killOpListeners[i]->interruptAll();
        } catch (...) {
            std::terminate();
        }
    }
}
bool GlobalEnvironmentMongoD::killOperation(unsigned int opId) {
    boost::lock_guard<boost::mutex> clientLock(Client::clientsMutex);
    for (ClientSet::const_iterator j = Client::clients.begin(); j != Client::clients.end(); ++j) {
        Client* client = *j;
        bool found = _killOperationsAssociatedWithClientAndOpId_inlock(client, opId);
        if (found) {
            return true;
        }
    }
    return false;
}
AutoStatsTracker::AutoStatsTracker(OperationContext* opCtx,
                                   const NamespaceString& nss,
                                   Top::LockType lockType,
                                   boost::optional<int> dbProfilingLevel)
    : _opCtx(opCtx), _lockType(lockType) {
    if (!dbProfilingLevel) {
        // No profiling level was determined, attempt to read the profiling level from the Database
        // object.
        AutoGetDb autoDb(_opCtx, nss.db(), MODE_IS);
        if (autoDb.getDb()) {
            dbProfilingLevel = autoDb.getDb()->getProfilingLevel();
        }
    }
    stdx::lock_guard<Client> clientLock(*_opCtx->getClient());
    CurOp::get(_opCtx)->enter_inlock(nss.ns().c_str(), dbProfilingLevel);
}
Client::~Client() {
    if (!inShutdown()) {
        // we can't clean up safely once we're in shutdown
        {
            boost::lock_guard<boost::mutex> clientLock(clientsMutex);
            if (!_shutdown)
                clients.erase(this);
        }
        CurOp* last;
        do {
            last = _curOp;
            delete _curOp;
            // _curOp may have been reset to _curOp->_wrapped
        } while (_curOp != last);
    }
}
void KillCurrentOp::blockingKill(AtomicUInt opId) {
    bool killed = false;
    LOG(1) << "KillCurrentOp: starting blockingkill" << endl;

    boost::scoped_ptr<scoped_lock> clientLock(new scoped_lock(Client::clientsMutex));
    boost::unique_lock<boost::mutex> lck(_mtx);

    bool foundId = _killImpl_inclientlock(opId, &killed);
    if (!foundId) {
        // don't wait if not found
        return;
    }

    clientLock.reset(NULL);  // unlock client since we don't need it anymore

    // block until the killed operation stops
    LOG(1) << "KillCurrentOp: waiting for confirmation of kill" << endl;
    while (killed == false) {
        _condvar.wait(lck);
    }
    LOG(1) << "KillCurrentOp: kill syncing complete" << endl;
}
void ServiceContext::setKillAllOperations() {
    stdx::lock_guard<stdx::mutex> clientLock(_mutex);

    // Ensure that all newly created operation contexts will immediately be in the interrupted state
    _globalKill.store(true);

    // Interrupt all active operations
    for (auto&& client : _clients) {
        stdx::lock_guard<Client> lk(*client);
        auto opCtxToKill = client->getOperationContext();
        if (opCtxToKill) {
            killOperation(opCtxToKill, ErrorCodes::InterruptedAtShutdown);
        }
    }

    // Notify any listeners who need to react to the server shutting down
    for (const auto listener : _killOpListeners) {
        try {
            listener->interruptAll();
        } catch (...) {
            std::terminate();
        }
    }
}
bool GlobalEnvironmentMongoD::killOperation(unsigned int opId) {
    boost::mutex::scoped_lock clientLock(Client::clientsMutex);
    bool found = false;

    // XXX clean up
    {
        for (ClientSet::const_iterator j = Client::clients.begin();
             !found && j != Client::clients.end();
             ++j) {
            for (CurOp* k = (*j)->curop(); !found && k; k = k->parent()) {
                if (k->opNum() != opId)
                    continue;
                k->kill();
                for (CurOp* l = (*j)->curop(); l; l = l->parent()) {
                    l->kill();
                }
                found = true;
            }
        }
    }
    if (found) {
        for (size_t i = 0; i < _killOpListeners.size(); i++) {
            try {
                _killOpListeners[i]->interrupt(opId);
            } catch (...) {
                std::terminate();
            }
        }
    }
    return found;
}
StatusWith<stdx::cv_status> OperationContext::waitForConditionOrInterruptNoAssertUntil(
    stdx::condition_variable& cv, stdx::unique_lock<stdx::mutex>& m, Date_t deadline) noexcept {
    invariant(getClient());
    {
        stdx::lock_guard<Client> clientLock(*getClient());
        invariant(!_waitMutex);
        invariant(!_waitCV);
        invariant(0 == _numKillers);

        // This interrupt check must be done while holding the client lock, so as not to race with a
        // concurrent caller of markKilled.
        auto status = checkForInterruptNoAssert();
        if (!status.isOK()) {
            return status;
        }
        _waitMutex = m.mutex();
        _waitCV = &cv;
    }

    // If the maxTimeNeverTimeOut failpoint is set, behave as though the operation's deadline does
    // not exist. Under normal circumstances, if the op has an existing deadline which is sooner
    // than the deadline passed into this method, we replace our deadline with the op's. This means
    // that we expect to time out at the same time as the existing deadline expires. If, when we
    // time out, we find that the op's deadline has not expired (as will always be the case if
    // maxTimeNeverTimeOut is set) then we assume that the incongruity is due to a clock mismatch
    // and return _timeoutError regardless. To prevent this behaviour, only consider the op's
    // deadline in the event that the maxTimeNeverTimeOut failpoint is not set.
    bool opHasDeadline = (hasDeadline() && !MONGO_FAIL_POINT(maxTimeNeverTimeOut));
    if (opHasDeadline) {
        deadline = std::min(deadline, getDeadline());
    }

    const auto waitStatus = [&] {
        if (Date_t::max() == deadline) {
            Waitable::wait(_baton.get(), getServiceContext()->getPreciseClockSource(), cv, m);
            return stdx::cv_status::no_timeout;
        }
        return getServiceContext()->getPreciseClockSource()->waitForConditionUntil(
            cv, m, deadline, _baton.get());
    }();

    // Continue waiting on cv until no other thread is attempting to kill this one.
    Waitable::wait(_baton.get(), getServiceContext()->getPreciseClockSource(), cv, m, [this] {
        stdx::lock_guard<Client> clientLock(*getClient());
        if (0 == _numKillers) {
            _waitMutex = nullptr;
            _waitCV = nullptr;
            return true;
        }
        return false;
    });

    auto status = checkForInterruptNoAssert();
    if (!status.isOK()) {
        return status;
    }

    if (opHasDeadline && waitStatus == stdx::cv_status::timeout && deadline == getDeadline()) {
        // It's possible that the system clock used in stdx::condition_variable::wait_until
        // is slightly ahead of the FastClock used in checkForInterrupt. In this case,
        // we treat the operation as though it has exceeded its time limit, just as if the
        // FastClock and system clock had agreed.
        if (!_hasArtificialDeadline) {
            markKilled(_timeoutError);
        }
        return Status(_timeoutError, "operation exceeded time limit");
    }

    return waitStatus;
}
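The deadline-clamping and clock-mismatch rule in that function can be distilled into two small pure functions. This is a hypothetical sketch with invented names (the real code works with Date_t, failpoints, and _hasArtificialDeadline); deadlines are plain integers here for clarity.

```cpp
#include <algorithm>
#include <cassert>

// Invented for illustration; not the OperationContext API.
enum class WakeReason { kSignaled, kTimedOut };

// Mirrors deadline = std::min(deadline, getDeadline()): the caller's deadline
// is clamped to the op's own deadline when the op has one in effect.
long long effectiveDeadline(long long callerDeadline, bool opHasDeadline, long long opDeadline) {
    return opHasDeadline ? std::min(callerDeadline, opDeadline) : callerDeadline;
}

// The op is treated as having exceeded its time limit only when the wait
// actually timed out AND the deadline it timed out on was the op's own.
// This deliberately covers the case where the condition variable's system
// clock fires slightly ahead of the FastClock used by the interrupt check.
bool exceededTimeLimit(bool opHasDeadline,
                       WakeReason reason,
                       long long usedDeadline,
                       long long opDeadline) {
    return opHasDeadline && reason == WakeReason::kTimedOut && usedDeadline == opDeadline;
}
```

For example, a caller deadline of 100 with an op deadline of 50 waits until 50, and a timeout at 50 counts as exceeding the op's time limit, while a timeout on a caller-supplied deadline shorter than the op's does not.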
void ServiceContextMongoD::registerKillOpListener(KillOpListenerInterface* listener) {
    stdx::lock_guard<stdx::mutex> clientLock(_mutex);
    _killOpListeners.push_back(listener);
}
void GlobalEnvironmentMongoD::registerKillOpListener(KillOpListenerInterface* listener) {
    boost::lock_guard<boost::mutex> clientLock(Client::clientsMutex);
    _killOpListeners.push_back(listener);
}
bool ImageClientSingle::UpdateImage(ImageContainer* aContainer, uint32_t aContentFlags) {
    AutoLockImage autoLock(aContainer);
    Image* image = autoLock.GetImage();
    if (!image) {
        return false;
    }

    if (mLastPaintedImageSerial == image->GetSerial()) {
        return true;
    }

    if (image->GetFormat() == PLANAR_YCBCR) {
        EnsureTextureClient(TEXTURE_YCBCR);
        PlanarYCbCrImage* ycbcr = static_cast<PlanarYCbCrImage*>(image);
        if (ycbcr->AsSharedPlanarYCbCrImage()) {
            AutoLockTextureClient lock(mTextureClient);
            SurfaceDescriptor sd;
            if (!ycbcr->AsSharedPlanarYCbCrImage()->ToSurfaceDescriptor(sd)) {
                return false;
            }
            if (IsSurfaceDescriptorValid(*lock.GetSurfaceDescriptor())) {
                GetForwarder()->DestroySharedSurface(lock.GetSurfaceDescriptor());
            }
            *lock.GetSurfaceDescriptor() = sd;
        } else {
            AutoLockYCbCrClient clientLock(mTextureClient);
            if (!clientLock.Update(ycbcr)) {
                NS_WARNING("failed to update TextureClient (YCbCr)");
                return false;
            }
        }
    } else if (image->GetFormat() == SHARED_TEXTURE) {
        EnsureTextureClient(TEXTURE_SHARED_GL_EXTERNAL);
        SharedTextureImage* sharedImage = static_cast<SharedTextureImage*>(image);
        const SharedTextureImage::Data* data = sharedImage->GetData();
        SharedTextureDescriptor texture(data->mShareType, data->mHandle, data->mSize,
                                        data->mInverted);
        mTextureClient->SetDescriptor(SurfaceDescriptor(texture));
    } else if (image->GetFormat() == SHARED_RGB) {
        EnsureTextureClient(TEXTURE_SHMEM);
        nsIntRect rect(0, 0, image->GetSize().width, image->GetSize().height);
        UpdatePictureRect(rect);

        AutoLockTextureClient lock(mTextureClient);
        SurfaceDescriptor desc;
        if (!static_cast<SharedRGBImage*>(image)->ToSurfaceDescriptor(desc)) {
            return false;
        }
        mTextureClient->SetDescriptor(desc);
    } else {
        nsRefPtr<gfxASurface> surface = image->GetAsSurface();
        MOZ_ASSERT(surface);
        EnsureTextureClient(TEXTURE_SHMEM);
        nsRefPtr<gfxPattern> pattern = new gfxPattern(surface);
        pattern->SetFilter(mFilter);
        AutoLockShmemClient clientLock(mTextureClient);
        if (!clientLock.Update(image, aContentFlags, pattern)) {
            NS_WARNING("failed to update TextureClient");
            return false;
        }
    }

    Updated();

    if (image->GetFormat() == PLANAR_YCBCR) {
        PlanarYCbCrImage* ycbcr = static_cast<PlanarYCbCrImage*>(image);
        UpdatePictureRect(ycbcr->GetData()->GetPictureRect());
    }

    mLastPaintedImageSerial = image->GetSerial();
    aContainer->NotifyPaintedImage(image);
    return true;
}
bool DeprecatedImageClientSingle::UpdateImage(ImageContainer* aContainer, uint32_t aContentFlags) {
    AutoLockImage autoLock(aContainer);
    Image* image = autoLock.GetImage();
    if (!image) {
        return false;
    }

    if (mLastPaintedImageSerial == image->GetSerial()) {
        return true;
    }

    if (image->GetFormat() == PLANAR_YCBCR && EnsureDeprecatedTextureClient(TEXTURE_YCBCR)) {
        PlanarYCbCrImage* ycbcr = static_cast<PlanarYCbCrImage*>(image);
        if (ycbcr->AsDeprecatedSharedPlanarYCbCrImage()) {
            AutoLockDeprecatedTextureClient lock(mDeprecatedTextureClient);
            SurfaceDescriptor sd;
            if (!ycbcr->AsDeprecatedSharedPlanarYCbCrImage()->ToSurfaceDescriptor(sd)) {
                return false;
            }
            if (IsSurfaceDescriptorValid(*lock.GetSurfaceDescriptor())) {
                GetForwarder()->DestroySharedSurface(lock.GetSurfaceDescriptor());
            }
            *lock.GetSurfaceDescriptor() = sd;
        } else {
            AutoLockYCbCrClient clientLock(mDeprecatedTextureClient);
            if (!clientLock.Update(ycbcr)) {
                NS_WARNING("failed to update DeprecatedTextureClient (YCbCr)");
                return false;
            }
        }
    } else if (image->GetFormat() == SHARED_TEXTURE &&
               EnsureDeprecatedTextureClient(TEXTURE_SHARED_GL_EXTERNAL)) {
        SharedTextureImage* sharedImage = static_cast<SharedTextureImage*>(image);
        const SharedTextureImage::Data* data = sharedImage->GetData();
        SharedTextureDescriptor texture(data->mShareType, data->mHandle, data->mSize,
                                        data->mInverted);
        mDeprecatedTextureClient->SetDescriptor(SurfaceDescriptor(texture));
    } else if (image->GetFormat() == SHARED_RGB && EnsureDeprecatedTextureClient(TEXTURE_SHMEM)) {
        nsIntRect rect(0, 0, image->GetSize().width, image->GetSize().height);
        UpdatePictureRect(rect);

        AutoLockDeprecatedTextureClient lock(mDeprecatedTextureClient);
        SurfaceDescriptor desc;
        if (!static_cast<DeprecatedSharedRGBImage*>(image)->ToSurfaceDescriptor(desc)) {
            return false;
        }
        mDeprecatedTextureClient->SetDescriptor(desc);
#ifdef MOZ_WIDGET_GONK
    } else if (image->GetFormat() == GONK_IO_SURFACE &&
               EnsureDeprecatedTextureClient(TEXTURE_SHARED_GL_EXTERNAL)) {
        nsIntRect rect(0, 0, image->GetSize().width, image->GetSize().height);
        UpdatePictureRect(rect);

        AutoLockDeprecatedTextureClient lock(mDeprecatedTextureClient);
        SurfaceDescriptor desc = static_cast<GonkIOSurfaceImage*>(image)->GetSurfaceDescriptor();
        if (!IsSurfaceDescriptorValid(desc)) {
            return false;
        }
        mDeprecatedTextureClient->SetDescriptor(desc);
    } else if (image->GetFormat() == GRALLOC_PLANAR_YCBCR) {
        EnsureDeprecatedTextureClient(TEXTURE_SHARED_GL_EXTERNAL);
        nsIntRect rect(0, 0, image->GetSize().width, image->GetSize().height);
        UpdatePictureRect(rect);

        AutoLockDeprecatedTextureClient lock(mDeprecatedTextureClient);
        SurfaceDescriptor desc =
            static_cast<GrallocPlanarYCbCrImage*>(image)->GetSurfaceDescriptor();
        if (!IsSurfaceDescriptorValid(desc)) {
            return false;
        }
        mDeprecatedTextureClient->SetDescriptor(desc);
#endif
    } else {
        nsRefPtr<gfxASurface> surface = image->GetAsSurface();
        MOZ_ASSERT(surface);
        EnsureDeprecatedTextureClient(TEXTURE_SHMEM);
        MOZ_ASSERT(mDeprecatedTextureClient, "Failed to create texture client");
        AutoLockShmemClient clientLock(mDeprecatedTextureClient);
        if (!clientLock.Update(image, aContentFlags, surface)) {
            NS_WARNING("failed to update DeprecatedTextureClient");
            return false;
        }
    }

    Updated();

    if (image->GetFormat() == PLANAR_YCBCR) {
        PlanarYCbCrImage* ycbcr = static_cast<PlanarYCbCrImage*>(image);
        UpdatePictureRect(ycbcr->GetData()->GetPictureRect());
    }

    mLastPaintedImageSerial = image->GetSerial();
    aContainer->NotifyPaintedImage(image);
    return true;
}