CoordinateReference::CoordinateReference() :
    Reference_<double>(),
    _coordinateValueFunction(_coordinateValueFunctionProp.getValueObjPtrRef()),
    _defaultWeight(_defaultWeightProp.getValueDbl())
{
    setAuthors("Ajay Seth");
    _names.resize(getNumRefs());
    _names[0] = getName();
}
LLPointer<LLImageRaw> LLImageRaw::duplicate()
{
    if (getNumRefs() < 2)
    {
        return this; // nobody else references this image, no need to duplicate.
    }
    // make a duplicate
    LLPointer<LLImageRaw> dup = new LLImageRaw(getData(), getWidth(), getHeight(), getComponents());
    return dup;
}
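// Hypothetical usage sketch (not taken from the sources above; the helper name and the
// assumption that LLPointer/LLImageRaw are available are mine): duplicate() gives
// copy-on-write style behaviour -- a new LLImageRaw is only allocated when the image is
// shared (getNumRefs() >= 2); otherwise the image itself is returned.
static LLPointer<LLImageRaw> getWritableCopy(LLPointer<LLImageRaw>& image)
{
    LLPointer<LLImageRaw> writable = image->duplicate();
    // 'writable' may alias 'image' when nobody else holds a reference,
    // so treat 'image' as read-only from here on.
    return writable;
}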
// Called from LLVolumeMgr::cleanup
bool LLVolumeLODGroup::cleanupRefs()
{
    bool res = true;
    if (mRefs != 0)
    {
        llwarns << "Volume group has remaining refs:" << getNumRefs() << llendl;
        mRefs = 0;
        for (S32 i = 0; i < NUM_LODS; i++)
        {
            if (mLODRefs[i] > 0)
            {
                llwarns << " LOD " << i << " refs = " << mLODRefs[i] << llendl;
                mLODRefs[i] = 0;
                mVolumeLODs[i] = NULL;
            }
        }
        llwarns << *getVolumeParams() << llendl;
        res = false;
    }
    return res;
}
void MarkersReference::updateInternalWeights() const
{
    // if weights are not being changed, do not rebuild list of weights.
    if (isObjectUpToDateWithProperties())
        return;

    // Begin by assigning default weight to each. Markers that do not have a
    // weight specified in the marker_weights property use the default weight.
    _weights.assign(getNumRefs(), get_default_weight());

    // Next fill in the marker weights that were specified in the
    // marker_weights property
    int wix = -1;
    int ix = 0;
    // Build flat lists of marker weights in the same order as the marker names
    for (const std::string &name : _markerNames) {
        wix = get_marker_weights().getIndex(name, wix);
        // Associate user weights (as specified in the marker_weights property)
        // with the corresponding marker by order of marker names
        if (wix >= 0)
            _weights[ix++] = get_marker_weights()[wix].getWeight();
    }
}
/** get the weighting (importance) of meeting this Reference */
void CoordinateReference::getWeights(const SimTK::State &s, SimTK::Array_<double> &weights) const
{
    weights.resize(getNumRefs());
    weights[0] = _defaultWeight;
}
/** get the values of the CoordinateReference */
void CoordinateReference::getValues(const SimTK::State &s, SimTK::Array_<double> &values) const
{
    SimTK::Vector t(1, s.getTime());
    values.resize(getNumRefs());
    values[0] = _coordinateValueFunction->calcValue(t);
}
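// Hypothetical usage sketch (the helper name and the 'coordRef'/'state' parameters are
// assumptions, not taken from the sources above): a CoordinateReference tracks a single
// coordinate, so getValues() and getWeights() fill arrays of size getNumRefs() == 1 with
// the desired coordinate value at the state's time and the weight of tracking it.
static void queryCoordinateReference(const OpenSim::CoordinateReference& coordRef,
                                     const SimTK::State& state)
{
    SimTK::Array_<double> values, weights;
    coordRef.getValues(state, values);   // values[0]: desired coordinate value at state.getTime()
    coordRef.getWeights(state, weights); // weights[0]: weight (importance) of meeting this reference
}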
void AIStateMachine::multiplex(event_type event)
{
  // If this fails then you are using a pointer to a state machine instead of an LLPointer.
  llassert(event == initial_run || getNumRefs() > 0);

  DoutEntering(dc::statemachine(mSMDebug), "AIStateMachine::multiplex(" << event_str(event) << ") [" << (void*)this << "]");

  base_state_type state;
  state_type run_state;

  // Critical area of mState.
  {
    multiplex_state_type_rat state_r(mState);

    // If another thread is already running multiplex() then it will pick up
    // our need to run (by us having set need_run), so there is no need to run
    // ourselves.
    llassert(!mMultiplexMutex.isSelfLocked());          // We may never enter recursively!
    if (!mMultiplexMutex.tryLock())
    {
      Dout(dc::statemachine(mSMDebug), "Leaving because it is already being run [" << (void*)this << "]");
      return;
    }

    //===========================================
    // Start of critical area of mMultiplexMutex.

    // If another thread already called begin_loop() since we needed a run,
    // then we must not schedule a run because that could lead to running
    // the same state twice. Note that if need_run was reset in the mean
    // time and then set again, then it can't hurt to schedule a run since
    // we should indeed run, again.
    if (event == schedule_run && !sub_state_type_rat(mSubState)->need_run)
    {
      Dout(dc::statemachine(mSMDebug), "Leaving because it was already being run [" << (void*)this << "]");
      return;
    }

    // We're at the beginning of multiplex, about to actually run it.
    // Make a copy of the states.
    run_state = begin_loop((state = state_r->base_state));
  }
  // End of critical area of mState.

  bool keep_looping;
  bool destruct = false;
  do
  {
    if (event == normal_run)
    {
#ifdef CWDEBUG
      if (state == bs_multiplex)
        Dout(dc::statemachine(mSMDebug), "Running state bs_multiplex / " << state_str_impl(run_state) << " [" << (void*)this << "]");
      else
        Dout(dc::statemachine(mSMDebug), "Running state " << state_str(state) << " [" << (void*)this << "]");
#endif

#ifdef SHOW_ASSERT
      // This debug code checks that each state machine steps precisely through each of its states correctly.
      if (state != bs_reset)
      {
        switch(mDebugLastState)
        {
          case bs_reset:      llassert(state == bs_initialize || state == bs_killed); break;
          case bs_initialize: llassert(state == bs_multiplex || state == bs_abort); break;
          case bs_multiplex:  llassert(state == bs_multiplex || state == bs_finish || state == bs_abort); break;
          case bs_abort:      llassert(state == bs_finish); break;
          case bs_finish:     llassert(state == bs_callback); break;
          case bs_callback:   llassert(state == bs_killed || state == bs_reset); break;
          case bs_killed:     llassert(state == bs_killed); break;
        }
      }
      // More sanity checks.
      if (state == bs_multiplex)
      {
        // set_state is only called from multiplex_impl and therefore synced with mMultiplexMutex.
        mDebugShouldRun |= mDebugSetStatePending;
        // Should we run at all?
        llassert(mDebugShouldRun);
      }
      // Any previous reason to run is voided by actually running.
      mDebugShouldRun = false;
#endif

      mRunMutex.lock();
      // Now we are actually running a single state.
      // If abort() was called at any moment before, we execute that state instead.
      bool const late_abort = (state == bs_multiplex || state == bs_initialize) && sub_state_type_rat(mSubState)->aborted;
      if (LL_UNLIKELY(late_abort))
      {
        // abort() was called from a child state machine, from another thread, while we were
        // already scheduled to run normally from an engine.
        // What we want to do here is pretend we detected the abort at the end of the *previous* run.
        // If the state is bs_multiplex then the previous state was either bs_initialize or bs_multiplex,
        // both of which would have switched to bs_abort: we set the state to bs_abort instead and just
        // continue this run.
        // However, if the state is bs_initialize we can't switch to bs_killed because that state isn't
        // handled in the switch below; it's only handled when exiting multiplex() directly after it is set.
        // Therefore, in that case we have to set the state BACK to bs_reset and run it again. This duplicated
        // run of bs_reset is not a problem because it happens to be a NoOp.
        state = (state == bs_initialize) ? bs_reset : bs_abort;
#ifdef CWDEBUG
        Dout(dc::statemachine(mSMDebug), "Late abort detected! Running state " << state_str(state) << " instead [" << (void*)this << "]");
#endif
      }
#ifdef SHOW_ASSERT
      mDebugLastState = state;
      // Make sure we only call ref() once and in balance with unref().
      if (state == bs_initialize)
      {
        // This -- and the call to ref() (and the test when we're about to call unref()) -- is all done in the critical area of mMultiplexMutex.
        llassert(!mDebugRefCalled);
        mDebugRefCalled = true;
      }
#endif
      switch(state)
      {
        case bs_reset:
          // We're just being kick started to get into the right thread
          // (possibly for the second time when a late abort was detected, but that's ok: we do nothing here).
          break;
        case bs_initialize:
          ref();
          initialize_impl();
          break;
        case bs_multiplex:
          llassert(!mDebugAborted);
          multiplex_impl(run_state);
          break;
        case bs_abort:
          abort_impl();
          break;
        case bs_finish:
          sub_state_type_wat(mSubState)->reset = false; // By default, halt state machines when finished.
          finish_impl();                                // Call run() from finish_impl() or the call back to restart from the beginning.
          break;
        case bs_callback:
          callback();
          break;
        case bs_killed:
          mRunMutex.unlock();
          // bs_killed is handled when it is set. So, this must be a re-entry.
          // We can only get here when being called by an engine that we were added to before we were killed.
          // This should already have been set to NULL to indicate that we want to be removed from that engine.
          llassert(!multiplex_state_type_rat(mState)->current_engine);
          // Do not call unref() twice.
          return;
      }
      mRunMutex.unlock();
    }

    {
      multiplex_state_type_wat state_w(mState);

      //=================================
      // Start of critical area of mState

      // Unless the state is bs_multiplex or bs_killed, the state machine needs to keep calling multiplex().
      bool need_new_run = true;
      if (event == normal_run || event == insert_abort)
      {
        sub_state_type_rat sub_state_r(mSubState);

        if (event == normal_run)
        {
          // Switch base state as function of sub state.
          switch(state)
          {
            case bs_reset:
              if (sub_state_r->aborted)
              {
                // We have been aborted before we could even initialize, no de-initialization is possible.
                state_w->base_state = bs_killed;
                // Stop running.
                need_new_run = false;
              }
              else
              {
                // run() was called: call initialize_impl() next.
                state_w->base_state = bs_initialize;
              }
              break;
            case bs_initialize:
              if (sub_state_r->aborted)
              {
                // initialize_impl() called abort.
                state_w->base_state = bs_abort;
              }
              else
              {
                // Start actually running.
                state_w->base_state = bs_multiplex;
                // If the state is bs_multiplex we only need to run again when need_run was set again in the meantime or when this state machine isn't idle.
                need_new_run = sub_state_r->need_run || !sub_state_r->idle;
              }
              break;
            case bs_multiplex:
              if (sub_state_r->aborted)
              {
                // abort() was called.
                state_w->base_state = bs_abort;
              }
              else if (sub_state_r->finished)
              {
                // finish() was called.
                state_w->base_state = bs_finish;
              }
              else
              {
                // Continue in bs_multiplex.
                // If the state is bs_multiplex we only need to run again when need_run was set again in the meantime or when this state machine isn't idle.
                need_new_run = sub_state_r->need_run || !sub_state_r->idle;
                // If this fails then the run state didn't change and neither idle() nor yield() was called.
                llassert_always(!(need_new_run && !sub_state_r->skip_idle && !mYieldEngine && sub_state_r->run_state == run_state));
              }
              break;
            case bs_abort:
              // After calling abort_impl(), call finish_impl().
              state_w->base_state = bs_finish;
              break;
            case bs_finish:
              // After finish_impl(), call the call back function.
              state_w->base_state = bs_callback;
              break;
            case bs_callback:
              if (sub_state_r->reset)
              {
                // run() was called (not followed by kill()).
                state_w->base_state = bs_reset;
              }
              else
              {
                // After the call back, we're done.
                state_w->base_state = bs_killed;
                // Call unref().
                destruct = true;
                // Stop running.
                need_new_run = false;
              }
              break;
            default: // bs_killed
              // We never get here.
              break;
          }
        }
        else // event == insert_abort
        {
          // We have been aborted, but we're idle. If we'd just schedule a new run below, it would re-run
          // the last state before the abort is handled. What we really need is to pick up as if the abort
          // was handled directly after returning from the last run. If we're not running anymore, then
          // do nothing, as the state machine already ran and things should be processed normally
          // (in that case this is just a normal schedule which can't do any harm because we can't
          // accidentally re-run an old run_state).
          if (state_w->base_state == bs_multiplex)      // Still running?
          {
            // See the switch above for case bs_multiplex.
            llassert(sub_state_r->aborted);             // abort() was called.
            state_w->base_state = bs_abort;
          }
        }
#ifdef CWDEBUG
        if (state != state_w->base_state)
          Dout(dc::statemachine(mSMDebug), "Base state changed from " << state_str(state) << " to " << state_str(state_w->base_state) <<
              "; need_new_run = " << (need_new_run ? "true" : "false") << " [" << (void*)this << "]");
#endif
      }

      // Figure out in which engine we should run.
      AIEngine* engine = mYieldEngine ? mYieldEngine : (state_w->current_engine ? state_w->current_engine : mDefaultEngine);
      // And the current engine we're running in.
      AIEngine* current_engine = (event == normal_run) ? state_w->current_engine : NULL;

      // Immediately run again if yield() wasn't called and it's OK to run in this thread.
      // Note that when it's OK to run in any engine (mDefaultEngine is NULL) then the last
      // compare is also true when current_engine == NULL.
      keep_looping = need_new_run && !mYieldEngine && engine == current_engine;
      mYieldEngine = NULL;

      if (keep_looping)
      {
        // Start a new loop.
        run_state = begin_loop((state = state_w->base_state));
        event = normal_run;
      }
      else
      {
        if (need_new_run)
        {
          // Add us to an engine if necessary.
          if (engine != state_w->current_engine)
          {
            // engine can't be NULL here: it can only be NULL if mDefaultEngine is NULL.
            engine->add(this);
            // Mark that we're added to this engine, and at the same time, that we're not added to the previous one.
            state_w->current_engine = engine;
          }
#ifdef SHOW_ASSERT
          // We are leaving the loop, but we're not idle. The statemachine should re-enter the loop again.
          mDebugShouldRun = true;
#endif
        }
        else
        {
          // Remove this state machine from any engine,
          // causing the engine to remove us.
          state_w->current_engine = NULL;
        }
#ifdef SHOW_ASSERT
        // Mark that we stop running the loop.
        mThreadId.clear();
        if (destruct)
        {
          // We're about to call unref(). Make sure we call that in balance with ref()!
          llassert(mDebugRefCalled);
          mDebugRefCalled = false;
        }
#endif

        // End of critical area of mMultiplexMutex.
        //=========================================

        // Release the lock on mMultiplexMutex *first*, before releasing the lock on mState,
        // so that tryLock() can never be called and fail while this thread is no longer
        // BEFORE the critical area of mState!
        mMultiplexMutex.unlock();
      }
      // Now it is safe to leave the critical area of mState as the tryLock won't fail anymore.
      // (Or, if we didn't release mMultiplexMutex because keep_looping is true, then this
      // end of the critical area of mState is equivalent to the first critical area in this
      // function.)

      // End of critical area of mState.
      //================================
    }
  } while (keep_looping);

  if (destruct)
  {
    unref();
  }
}