OsisSegmentStart::OsisSegmentStart(QDomElement& osisElement, QString& elementName, QObject *parent)
    : QObject(parent)
    , OsisData(OsisSegmentStart::staticMetaObject, osisElement, elementName)
    , SegmentId(GetAttributeInt(Segment_ID))
    , CategoryId(GetAttributeInt(Category_ID))
{
}
UINT CXmlConfig::GetProfileInt(LPCTSTR lpszSection, LPCTSTR lpszEntry, int nDefault)
{
    bool bResult = false;
    UINT uiVal = GetAttributeInt(CString(lpszSection) + _T("\\") + lpszEntry, nDefault, &bResult);
    if (bResult)
        return uiVal;
    return nDefault;
}
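// Standalone sketch of the same "probe, then fall back to a default" lookup
// pattern that CXmlConfig::GetProfileInt wraps. It is written against a plain
// std::map rather than the CXmlConfig store, and the key syntax is only an
// illustration, so treat it as an assumption-laden example, not that class's API.
#include <cstdlib>
#include <map>
#include <string>

static unsigned int GetIntOrDefault(const std::map<std::string, std::string>& cfg,
                                    const std::string& key, unsigned int defValue)
{
    auto it = cfg.find(key);        // probe the store first
    if (it == cfg.end())
        return defValue;            // missing entry: return the caller's default
    return static_cast<unsigned int>(std::strtoul(it->second.c_str(), nullptr, 10));
}

// Possible usage: GetIntOrDefault(cfg, "Window\\Width", 800);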
/**
 * Returns the index of the stage with the specified type and name or -1
 * if the stage does not exist.
 * @param stageType The type of stage to find.
 * @param name The name of the stage to find.
 */
int MaterialDoc::FindStage(int stageType, const char* name)
{
    for (int i = 0; i < editMaterial.stages.Num(); i++) {
        int type = GetAttributeInt(i, "stagetype");
        idStr localname = GetAttribute(i, "name");
        if (stageType == type && !localname.Icmp(name))
            return i;
    }
    return -1;
}
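// Hedged, self-contained sketch of the lookup pattern documented above: scan a
// stage list for a matching type and (case-insensitive) name and return its
// index, or -1 when nothing matches. The Stage struct and EqualsIgnoreCase
// helper are stand-ins for the editor's idStr/stage types, not the real ones.
#include <cctype>
#include <string>
#include <vector>

struct Stage { int type; std::string name; };

// Case-insensitive comparison standing in for idStr::Icmp().
static bool EqualsIgnoreCase(const std::string& a, const char* b)
{
    size_t i = 0;
    for (; i < a.size() && b[i] != '\0'; ++i) {
        if (std::tolower((unsigned char)a[i]) != std::tolower((unsigned char)b[i]))
            return false;
    }
    return i == a.size() && b[i] == '\0';
}

static int FindStageIndex(const std::vector<Stage>& stages, int stageType, const char* name)
{
    for (size_t i = 0; i < stages.size(); ++i) {
        if (stages[i].type == stageType && EqualsIgnoreCase(stages[i].name, name))
            return static_cast<int>(i);   // index of the first matching stage
    }
    return -1;                            // no stage with that type and name
}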
int CWrapEngine::GetWidthFromCache(const char* name)
{
    if (!m_use_cache)
        return 0;

    if (!name || !*name)
        return 0;

    // We have to synchronize access to layout.xml so that multiple processes don't write
    // to the same file or one is reading while the other one writes.
    CInterProcessMutex mutex(MUTEX_LAYOUT);

    wxFileName file(COptions::Get()->GetOption(OPTION_DEFAULT_SETTINGSDIR), _T("layout.xml"));
    TiXmlElement* pDocument = GetXmlFile(file);

    if (!pDocument)
        return 0;

    TiXmlElement* pElement = pDocument->FirstChildElement("Layout");
    if (!pElement) {
        delete pDocument->GetDocument();
        return 0;
    }

    wxString language = wxGetApp().GetCurrentLanguageCode();
    if (language.empty())
        language = _T("default");

    TiXmlElement* pLanguage = FindElementWithAttribute(pElement, "Language", "id", language.mb_str());
    if (!pLanguage) {
        delete pDocument->GetDocument();
        return 0;
    }

    TiXmlElement* pDialog = FindElementWithAttribute(pLanguage, "Dialog", "name", name);
    if (!pDialog) {
        delete pDocument->GetDocument();
        return 0;
    }

    int value = GetAttributeInt(pDialog, "width");

    delete pDocument->GetDocument();

    return value;
}
bool GenerateId(uint32_t &id)
{
    static const char * MGMT_ID = "MgmtId";
    if (GetAttributeInt(HEADER_CLUSTER, HEADER_PROC, MGMT_ID, (int *) &id) < 0) {
        id = 2; // Id 1 is reserved for the Scheduler
    }
    if (SetAttributeInt(HEADER_CLUSTER, HEADER_PROC, MGMT_ID, (int) ++id)) {
        return false;
    }
    return true;
}
/**
 * Sets an attribute int in the material or a stage.
 * @param stage The stage or -1 for the material.
 * @param attribName The name of the attribute.
 * @param value The value to set.
 * @param addUndo Flag that specifies if the system should add an undo operation.
 */
void MaterialDoc::SetAttributeInt(int stage, const char* attribName, int value, bool addUndo)
{
    //Make sure we need to set the attribute
    int orig = GetAttributeInt(stage, attribName);
    if (orig != value) {

        idDict* dict;
        if (stage == -1) {
            dict = &editMaterial.materialData;
        } else {
            assert(stage >= 0 && stage < GetStageCount());
            dict = &editMaterial.stages[stage]->stageData;
        }

        dict->SetInt(attribName, value);

        manager->AttributeChanged(this, stage, attribName);

        OnMaterialChanged();
    }
}
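// Minimal sketch of the "write only on change, then notify" idea behind
// MaterialDoc::SetAttributeInt, using a std::map as a stand-in attribute
// dictionary and a callback in place of the editor's change notifications;
// the names and types here are assumptions for illustration only.
#include <functional>
#include <map>
#include <string>

struct AttrStore {
    std::map<std::string, int> values;
    std::function<void(const std::string&)> onChanged;   // hypothetical notification hook

    void SetInt(const std::string& name, int value) {
        auto it = values.find(name);
        if (it != values.end() && it->second == value)
            return;                 // unchanged: skip both the write and the notification
        values[name] = value;
        if (onChanged)
            onChanged(name);        // notify listeners only when the value really changed
    }
};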
/**
 * Writes a single stage.
 * @param stage The stage to write.
 * @param file The file where the stage should be written.
 */
void MaterialDoc::WriteStage(int stage, idFile_Memory* file)
{
    //idStr stageName = GetAttribute(stage, "name");
    int type = GetAttributeInt(stage, "stagetype");

    //if(!stageName.Icmp("diffusemap") || !stageName.Icmp("specularmap") || !stageName.Icmp("bumpmap")) {
    if (type == STAGE_TYPE_SPECIALMAP) {
        WriteSpecialMapStage(stage, file);
        return;
    }

    file->WriteFloatString( "\t{\n" );
    idStr name = GetAttribute(stage, "name");
    if (name.Length() > 0) {
        file->WriteFloatString("\t\tname\t\"%s\"\n", name.c_str());
    }
    WriteMaterialDef(stage, file, MaterialDefManager::MATERIAL_DEF_STAGE, 2);
    file->WriteFloatString( "\t}\n" );
}
/**
 * Writes a set of material attributes to a file.
 * @param stage The stage to write or -1 for the material.
 * @param file The file where the stage should be written.
 * @param type The attribute grouping to use.
 * @param indent The number of tabs to indent the text.
 */
void MaterialDoc::WriteMaterialDef(int stage, idFile_Memory* file, int type, int indent)
{
    idStr prefix = "";
    for (int i = 0; i < indent; i++) {
        prefix += "\t";
    }

    MaterialDefList* defs = MaterialDefManager::GetMaterialDefs(type);
    for (int i = 0; i < defs->Num(); i++) {
        switch ((*defs)[i]->type) {
        case MaterialDef::MATERIAL_DEF_TYPE_STRING:
            {
                idStr attrib = GetAttribute(stage, (*defs)[i]->dictName);
                if (attrib.Length() > 0) {
                    if ((*defs)[i]->quotes)
                        file->WriteFloatString("%s%s\t\"%s\"\n", prefix.c_str(), (*defs)[i]->dictName.c_str(), attrib.c_str());
                    else
                        file->WriteFloatString("%s%s\t%s\n", prefix.c_str(), (*defs)[i]->dictName.c_str(), attrib.c_str());
                }
            }
            break;
        case MaterialDef::MATERIAL_DEF_TYPE_BOOL:
            {
                if (GetAttributeBool(stage, (*defs)[i]->dictName))
                    file->WriteFloatString("%s%s\t\n", prefix.c_str(), (*defs)[i]->dictName.c_str());
            }
            break;
        case MaterialDef::MATERIAL_DEF_TYPE_FLOAT:
            {
                float val = GetAttributeFloat(stage, (*defs)[i]->dictName);
                file->WriteFloatString("%s%s\t%f\n", prefix.c_str(), (*defs)[i]->dictName.c_str(), val);
            }
            break;
        case MaterialDef::MATERIAL_DEF_TYPE_INT:
            {
                int val = GetAttributeInt(stage, (*defs)[i]->dictName);
                file->WriteFloatString("%s%s\t%d\n", prefix.c_str(), (*defs)[i]->dictName.c_str(), val);
            }
            break;
        }
    }
}
bool GetSubmitterId(const char *name, uint64_t &id)
{
    uint32_t mgmtId;
    if (GetAttributeInt(HEADER_CLUSTER, HEADER_PROC, name, (int *) &mgmtId) < 0) {
        if (!GenerateId(mgmtId)) {
            // Failed to generate a new id, this seems fatal
            return false;
        }
        if (SetAttributeInt(HEADER_CLUSTER, HEADER_PROC, name, (int) mgmtId)) {
            // Failed to record the new id, this seems fatal
            return false;
        }
    }
    // The ((uint64_t) 0) << 32 id space is reserved for us
    id = (uint64_t) mgmtId;
    return true;
}
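// Standalone sketch of the persistent-counter idea used by GenerateId() and
// GetSubmitterId() above: look up a stored id for a name and, if none exists,
// mint the next id from a shared counter and record it. The std::map stands in
// for the job-queue attributes, so this is an illustration, not HTCondor code.
#include <cstdint>
#include <map>
#include <string>

struct IdStore {
    std::map<std::string, uint32_t> attrs;   // stand-in for persistent attributes
    uint32_t counter = 2;                    // seed as in GenerateId(); id 1 stays reserved

    uint32_t GetOrAssign(const std::string& name) {
        auto it = attrs.find(name);
        if (it != attrs.end())
            return it->second;               // already recorded for this submitter
        uint32_t id = ++counter;             // mint the next id
        attrs[name] = id;                    // record it so later lookups reuse it
        return id;
    }
};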
//---------------------------------------------------------------------------
void DagmanClassad::InitializeMetrics()
{
    Qmgr_connection *queue = OpenConnection();
    if ( !queue ) {
        return;
    }

    int parentDagmanCluster;
    if ( GetAttributeInt( _dagmanId._cluster, _dagmanId._proc,
                          ATTR_DAGMAN_JOB_ID, &parentDagmanCluster ) != 0 ) {
        debug_printf( DEBUG_DEBUG_1, "Can't get parent DAGMan cluster\n" );
        parentDagmanCluster = -1;
    } else {
        debug_printf( DEBUG_DEBUG_1, "Parent DAGMan cluster: %d\n",
                      parentDagmanCluster );
    }

    CloseConnection( queue );

    DagmanMetrics::SetDagmanIds( _dagmanId, parentDagmanCluster );
}
bool CWrapEngine::LoadCache()
{
	// We have to synchronize access to layout.xml so that multiple processes don't write
	// to the same file or one is reading while the other one writes.
	CInterProcessMutex mutex(MUTEX_LAYOUT);

	wxFileName file(COptions::Get()->GetOption(OPTION_DEFAULT_SETTINGSDIR), _T("layout.xml"));
	CXmlFile xml(file);
	TiXmlElement* pDocument = xml.Load();

	if (!pDocument) {
		m_use_cache = false;
		wxMessageBox(xml.GetError(), _("Error loading xml file"), wxICON_ERROR);

		return false;
	}

	bool cacheValid = true;

	TiXmlElement* pElement = pDocument->FirstChildElement("Layout");
	if (!pElement)
		pElement = pDocument->LinkEndChild(new TiXmlElement("Layout"))->ToElement();

	const wxString buildDate = CBuildInfo::GetBuildDateString();
	if (GetTextAttribute(pElement, "Builddate") != buildDate) {
		cacheValid = false;
		SetTextAttribute(pElement, "Builddate", buildDate);
	}

	const wxString buildTime = CBuildInfo::GetBuildTimeString();
	if (GetTextAttribute(pElement, "Buildtime") != buildTime) {
		cacheValid = false;
		SetTextAttribute(pElement, "Buildtime", buildTime);
	}

	// Enumerate resource file names
	// -----------------------------

	TiXmlElement* pResources = pElement->FirstChildElement("Resources");
	if (!pResources)
		pResources = pElement->LinkEndChild(new TiXmlElement("Resources"))->ToElement();

	wxString resourceDir = wxGetApp().GetResourceDir();
	wxDir dir(resourceDir);

	wxLogNull log;

	wxString xrc;
	for (bool found = dir.GetFirst(&xrc, _T("*.xrc")); found; found = dir.GetNext(&xrc)) {
		if (!wxFileName::FileExists(resourceDir + xrc))
			continue;

		wxFileName fn(resourceDir + xrc);
		wxDateTime date = fn.GetModificationTime();
		wxLongLong ticks = date.GetTicks();

		TiXmlElement* resourceElement = FindElementWithAttribute(pResources, "xrc", "file", xrc.mb_str());
		if (!resourceElement) {
			resourceElement = pResources->LinkEndChild(new TiXmlElement("xrc"))->ToElement();
			resourceElement->SetAttribute("file", xrc.mb_str());
			resourceElement->SetAttribute("date", ticks.ToString().mb_str());
			cacheValid = false;
		}
		else {
			const char* xrcNodeDate = resourceElement->Attribute("date");
			if (!xrcNodeDate || strcmp(xrcNodeDate, ticks.ToString().mb_str())) {
				cacheValid = false;
				resourceElement->SetAttribute("date", ticks.ToString().mb_str());
			}
		}
	}

	if (!cacheValid) {
		// Clear all languages
		TiXmlElement* pLanguage = pElement->FirstChildElement("Language");
		while (pLanguage) {
			pElement->RemoveChild(pLanguage);
			pLanguage = pElement->FirstChildElement("Language");
		}
	}

	// Get current language
	wxString language = wxGetApp().GetCurrentLanguageCode();
	if (language == _T(""))
		language = _T("default");

	TiXmlElement* languageElement = FindElementWithAttribute(pElement, "Language", "id", language.mb_str());
	if (!languageElement) {
		languageElement = pElement->LinkEndChild(new TiXmlElement("Language"))->ToElement();
		languageElement->SetAttribute("id", language.mb_str());
	}

	// Get static text font and measure sample text
	wxFrame* pFrame = new wxFrame;
	pFrame->Create(0, -1, _T("Title"), wxDefaultPosition, wxDefaultSize, wxFRAME_TOOL_WINDOW);

	wxStaticText* pText = new wxStaticText(pFrame, -1, _T("foo"));

	wxFont font = pText->GetFont();
	wxString fontDesc = font.GetNativeFontInfoDesc();

	TiXmlElement* pFontElement = languageElement->FirstChildElement("Font");
	if (!pFontElement)
		pFontElement = languageElement->LinkEndChild(new TiXmlElement("Font"))->ToElement();

	if (GetTextAttribute(pFontElement, "font") != fontDesc) {
		SetTextAttribute(pFontElement, "font", fontDesc);
		cacheValid = false;
	}

	int width, height;
	pText->GetTextExtent(_T("Just some test string we are measuring. If width or heigh differ from the recorded values, invalidate cache. 1234567890MMWWII"), &width, &height);

	if (GetAttributeInt(pFontElement, "width") != width ||
		GetAttributeInt(pFontElement, "height") != height)
	{
		cacheValid = false;
		SetAttributeInt(pFontElement, "width", width);
		SetAttributeInt(pFontElement, "height", height);
	}

	pFrame->Destroy();

	// Get language file
	const wxString& localesDir = wxGetApp().GetLocalesDir();
	wxString name = GetLocaleFile(localesDir, language);

	if (name != _T("")) {
		wxFileName fn(localesDir + name + _T("/filezilla.mo"));
		wxDateTime date = fn.GetModificationTime();
		wxLongLong ticks = date.GetTicks();

		const char* languageNodeDate = languageElement->Attribute("date");
		if (!languageNodeDate || strcmp(languageNodeDate, ticks.ToString().mb_str())) {
			languageElement->SetAttribute("date", ticks.ToString().mb_str());
			cacheValid = false;
		}
	}
	else
		languageElement->SetAttribute("date", "");

	if (!cacheValid) {
		TiXmlElement* dialog;
		while ((dialog = languageElement->FirstChildElement("Dialog")))
			languageElement->RemoveChild(dialog);
	}

	if (COptions::Get()->GetOptionVal(OPTION_DEFAULT_KIOSKMODE) == 2) {
		m_use_cache = cacheValid;
		return true;
	}

	wxString error;
	if (!xml.Save(&error)) {
		m_use_cache = false;

		wxString msg = wxString::Format(_("Could not write \"%s\": %s"), file.GetFullPath().c_str(), error.c_str());
		wxMessageBox(msg, _("Error writing xml file"), wxICON_ERROR);
	}

	return true;
}
bool CXmlUtil::GetAttributeBool(const XMLDOMElementPtr& ele, const wchar_t* name, BOOL defValue)
{
    return GetAttributeInt(ele, name, defValue) != 0;
}
TimeEntry::EntryType PlanEntryModel::GetEntryType()
{
    return static_cast<EntryType>(GetAttributeInt(TYPE));
}
int do_Q_request(ReliSock *syscall_sock,bool &may_fork) { int request_num = -1; int rval; syscall_sock->decode(); assert( syscall_sock->code(request_num) ); dprintf(D_SYSCALLS, "Got request #%d\n", request_num); switch( request_num ) { case CONDOR_InitializeConnection: { // dprintf( D_ALWAYS, "InitializeConnection()\n" ); bool authenticated = true; // Authenticate socket, if not already done by daemonCore if( !syscall_sock->triedAuthentication() ) { if( IsDebugLevel(D_SECURITY) ) { MyString methods; SecMan::getAuthenticationMethods( WRITE, &methods ); dprintf(D_SECURITY,"Calling authenticate(%s) in qmgmt_receivers\n", methods.Value()); } CondorError errstack; if( ! SecMan::authenticate_sock(syscall_sock, WRITE, &errstack) ) { // Failed to authenticate dprintf( D_ALWAYS, "SCHEDD: authentication failed: %s\n", errstack.getFullText().c_str() ); authenticated = false; } } if ( authenticated ) { InitializeConnection( syscall_sock->getOwner(), syscall_sock->getDomain() ); } else { InitializeConnection( NULL, NULL ); } return 0; } case CONDOR_InitializeReadOnlyConnection: { // dprintf( D_ALWAYS, "InitializeReadOnlyConnection()\n" ); // Since InitializeConnection() does nothing, and we need // to record the fact that this is a read-only connection, // but we have to do it in the socket (since we don't have // any other persistent data structure, and it's probably // the right place anyway), set the FQU. // // We need to record if this is a read-only connection so that // we can avoid expanding $$ in GetJobAd; simply checking if the // connection is authenticated isn't sufficient, because the // security session cache means that read-only connection could // be authenticated by a previous authenticated connection from // the same address (when using host-based security) less than // the expiration period ago. syscall_sock->setFullyQualifiedUser( "read-only" ); // same as InitializeConnection but no authenticate() InitializeConnection( NULL, NULL ); may_fork = true; return 0; } case CONDOR_SetEffectiveOwner: { MyString owner; int terrno; assert( syscall_sock->get(owner) ); assert( syscall_sock->end_of_message() ); rval = QmgmtSetEffectiveOwner( owner.Value() ); terrno = errno; syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() ); char const *fqu = syscall_sock->getFullyQualifiedUser(); dprintf(D_SYSCALLS, "\tSetEffectiveOwner\n"); dprintf(D_SYSCALLS, "\tauthenticated user = '******'\n", fqu ? 
fqu : ""); dprintf(D_SYSCALLS, "\trequested owner = '%s'\n", owner.Value()); dprintf(D_SYSCALLS, "\trval %d, errno %d\n", rval, terrno); return 0; } case CONDOR_NewCluster: { int terrno; assert( syscall_sock->end_of_message() );; errno = 0; rval = NewCluster( ); terrno = errno; dprintf(D_SYSCALLS, "\tNewCluster: rval = %d, errno = %d\n",rval,terrno ); if ( rval > 0 ) { dprintf( D_AUDIT, *syscall_sock, "Submitting new job %d.0\n", rval ); } syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; dprintf(D_FULLDEBUG,"schedd: NewCluster rval %d errno %d\n",rval,terrno); return 0; } case CONDOR_NewProc: { int cluster_id = -1; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->end_of_message() );; errno = 0; rval = NewProc( cluster_id ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); if ( rval > 0 ) { dprintf( D_AUDIT, *syscall_sock, "Submitting new job %d.%d\n", cluster_id, rval ); } syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; dprintf(D_FULLDEBUG,"schedd: NewProc rval %d errno %d\n",rval,terrno); return 0; } case CONDOR_DestroyProc: { int cluster_id = -1; int proc_id = -1; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->end_of_message() );; errno = 0; rval = DestroyProc( cluster_id, proc_id ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; dprintf(D_FULLDEBUG,"schedd: DestroyProc cluster %d proc %d rval %d errno %d\n",cluster_id,proc_id,rval,terrno); return 0; } case CONDOR_DestroyCluster: { int cluster_id = -1; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->end_of_message() );; errno = 0; rval = DestroyCluster( cluster_id ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; return 0; } #if 0 case CONDOR_DestroyClusterByConstraint: { char *constraint=NULL; int terrno; assert( syscall_sock->code(constraint) ); assert( syscall_sock->end_of_message() );; errno = 0; rval = DestroyClusterByConstraint( constraint ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } free( (char *)constraint ); assert( syscall_sock->end_of_message() );; return 0; } #endif case CONDOR_SetAttributeByConstraint: case CONDOR_SetAttributeByConstraint2: { char *attr_name=NULL; char *attr_value=NULL; char *constraint=NULL; int terrno; SetAttributeFlags_t flags = 0; assert( syscall_sock->code(constraint) ); dprintf( D_SYSCALLS, " constraint = %s\n",constraint); assert( syscall_sock->code(attr_value) ); assert( syscall_sock->code(attr_name) ); if( request_num == CONDOR_SetAttributeByConstraint2 ) { assert( 
syscall_sock->code( flags ) ); } assert( syscall_sock->end_of_message() );; if (strcmp (attr_name, ATTR_MYPROXY_PASSWORD) == 0) { errno = 0; dprintf( D_SYSCALLS, "SetAttributeByConstraint (MyProxyPassword) not supported...\n"); rval = 0; terrno = errno; } else { errno = 0; rval = SetAttributeByConstraint( constraint, attr_name, attr_value, flags ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); if ( rval == 0 ) { dprintf( D_AUDIT, *syscall_sock, "Set Attribute By Constraint %s, " "%s = %s\n", constraint, attr_name, attr_value); } } syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } free( (char *)constraint ); free( (char *)attr_value ); free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_SetAttribute: case CONDOR_SetAttribute2: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; char *attr_value=NULL; int terrno; SetAttributeFlags_t flags = 0; const char *users_username; const char *condor_username; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_value) ); assert( syscall_sock->code(attr_name) ); if( request_num == CONDOR_SetAttribute2 ) { assert( syscall_sock->code( flags ) ); } users_username = syscall_sock->getOwner(); condor_username = get_condor_username(); if (attr_name) dprintf(D_SYSCALLS,"\tattr_name = %s\n",attr_name); if (attr_value) dprintf(D_SYSCALLS,"\tattr_value = %s\n",attr_value); assert( syscall_sock->end_of_message() );; // ckireyev: // We do NOT want to include MyProxy password in the ClassAd (since it's a secret) // I'm not sure if this is the best place to do this, but.... if (attr_name && attr_value && strcmp (attr_name, ATTR_MYPROXY_PASSWORD) == 0) { errno = 0; dprintf( D_SYSCALLS, "Got MyProxyPassword, stashing...\n"); rval = SetMyProxyPassword (cluster_id, proc_id, attr_value); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); } else { errno = 0; rval = SetAttribute( cluster_id, proc_id, attr_name, attr_value, flags ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); // If we're modifying a previously-submitted job AND either // the client's username is not HTCondor's (i.e. not a // daemon) OR the client says we should log... 
if( (cluster_id != active_cluster_num) && (rval == 0) && ( strcmp(users_username, condor_username) || (flags & SHOULDLOG) ) ) { dprintf( D_AUDIT, *syscall_sock, "Set Attribute for job %d.%d, " "%s = %s\n", cluster_id, proc_id, attr_name, attr_value); } } free( (char *)attr_value ); free( (char *)attr_name ); if( flags & SetAttribute_NoAck ) { if( rval < 0 ) { return -1; } } else { syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() ); } return 0; } case CONDOR_SetTimerAttribute: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; int duration = 0; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) ); if (attr_name) dprintf(D_SYSCALLS,"\tattr_name = %s\n",attr_name); assert( syscall_sock->code(duration) ); dprintf(D_SYSCALLS,"\tduration = %d\n",duration); assert( syscall_sock->end_of_message() );; errno = 0; rval = SetTimerAttribute( cluster_id, proc_id, attr_name, duration ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); dprintf( D_AUDIT, *syscall_sock, "Set Timer Attribute for job %d.%d, " "attr_name = %s, duration = %d\n", cluster_id, proc_id, attr_name, duration); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_BeginTransaction: { int terrno; assert( syscall_sock->end_of_message() );; errno = 0; rval = 0; // BeginTransaction returns void (sigh), so always success BeginTransaction( ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_AbortTransaction: { int terrno; assert( syscall_sock->end_of_message() );; errno = 0; rval = 0; // AbortTransaction returns void (sigh), so always success AbortTransaction( ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_CommitTransactionNoFlags: case CONDOR_CommitTransaction: { int terrno; int flags; if( request_num == CONDOR_CommitTransaction ) { assert( syscall_sock->code(flags) ); } else { flags = 0; } assert( syscall_sock->end_of_message() );; errno = 0; CondorError errstack; rval = CheckTransaction( flags, & errstack ); terrno = errno; dprintf( D_SYSCALLS, "\tflags = %d, rval = %d, errno = %d\n", flags, rval, terrno ); if( rval >= 0 ) { errno = 0; CommitTransaction( flags ); // CommitTransaction() never returns on failure rval = 0; terrno = errno; dprintf( D_SYSCALLS, "\tflags = %d, rval = %d, errno = %d\n", flags, rval, terrno ); } syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); const CondorVersionInfo *vers = syscall_sock->get_peer_version(); if (vers && vers->built_since_version(8, 3, 4)) { // Send a classad, for less backwards-incompatibility. 
int code = 1; const char * reason = "QMGMT rejected job submission."; if( errstack.subsys() ) { code = 2; reason = errstack.message(); } ClassAd reply; reply.Assign( "ErrorCode", code ); reply.Assign( "ErrorReason", reason ); assert( putClassAd( syscall_sock, reply ) ); } } assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetAttributeFloat: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; float value = 0.0; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) ); assert( syscall_sock->end_of_message() );; errno = 0; if( QmgmtMayAccessAttribute( attr_name ) ) { rval = GetAttributeFloat( cluster_id, proc_id, attr_name, &value ); } else { rval = -1; } terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( syscall_sock->code(value) ); } free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetAttributeInt: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; int value = 0; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) ); dprintf( D_SYSCALLS, " attr_name = %s\n", attr_name ); assert( syscall_sock->end_of_message() );; errno = 0; if( QmgmtMayAccessAttribute( attr_name ) ) { rval = GetAttributeInt( cluster_id, proc_id, attr_name, &value ); } else { rval = -1; } terrno = errno; if (rval < 0) { dprintf( D_SYSCALLS, "GetAttributeInt(%d, %d, %s) not found.\n", cluster_id, proc_id, attr_name); } else { dprintf( D_SYSCALLS, " value: %d\n", value ); dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); } syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( syscall_sock->code(value) ); } free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetAttributeString: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; char *value = NULL; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) ); assert( syscall_sock->end_of_message() );; errno = 0; if( QmgmtMayAccessAttribute( attr_name ) ) { rval = GetAttributeStringNew( cluster_id, proc_id, attr_name, &value ); } else { rval = -1; } terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( syscall_sock->code(value) ); } free( (char *)value ); free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetAttributeExpr: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) 
); assert( syscall_sock->end_of_message() );; char *value = NULL; errno = 0; if( QmgmtMayAccessAttribute( attr_name ) ) { rval = GetAttributeExprNew( cluster_id, proc_id, attr_name, &value ); } else { rval = -1; } terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); if ( !syscall_sock->code(rval) ) { free(value); return -1; } if( rval < 0 ) { if ( !syscall_sock->code(terrno) ) { free(value); return -1; } } if( rval >= 0 ) { if ( !syscall_sock->code(value) ) { free(value); return -1; } } free( (char *)value ); free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetDirtyAttributes: { int cluster_id = -1; int proc_id = -1; ClassAd updates; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->end_of_message() );; errno = 0; rval = GetDirtyAttributes( cluster_id, proc_id, &updates ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); if ( !syscall_sock->code(rval) ) { return -1; } if( rval < 0 ) { if ( !syscall_sock->code(terrno) ) { return -1; } } if( rval >= 0 ) { assert( putClassAd(syscall_sock, updates) ); } assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_DeleteAttribute: { int cluster_id = -1; int proc_id = -1; char *attr_name=NULL; int terrno; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->code(attr_name) ); assert( syscall_sock->end_of_message() );; errno = 0; rval = DeleteAttribute( cluster_id, proc_id, attr_name ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } free( (char *)attr_name ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetJobAd: { int cluster_id = -1; int proc_id = -1; ClassAd *ad = NULL; int terrno; bool delete_ad = false; assert( syscall_sock->code(cluster_id) ); dprintf( D_SYSCALLS, " cluster_id = %d\n", cluster_id ); assert( syscall_sock->code(proc_id) ); dprintf( D_SYSCALLS, " proc_id = %d\n", proc_id ); assert( syscall_sock->end_of_message() );; // dprintf( D_ALWAYS, "(%d.%d) isAuthenticated() = %d\n", cluster_id, proc_id, syscall_sock->isAuthenticated() ); // dprintf( D_ALWAYS, "(%d.%d) getOwner() = %s\n", cluster_id, proc_id, syscall_sock->getOwner() ); errno = 0; // Only fetch the jobad for legal values of cluster/proc if( cluster_id >= 1 ) { if( proc_id >= 0 ) { const char * fqu = syscall_sock->getFullyQualifiedUser(); if( fqu != NULL && strcmp( fqu, "read-only" ) != 0 ) { // expand $$() macros in the jobad as required by GridManager. // The GridManager depends on the fact that the following call // expands $$ and saves the expansions to disk in case of // restart. ad = GetJobAd_as_ClassAd( cluster_id, proc_id, true, true ); delete_ad = true; // note : since we expanded the ad, ad is now a deep // copy of the ad in memory, so we must delete it below. } else { ad = GetJobAd_as_ClassAd( cluster_id, proc_id, false, false ); } } else if( proc_id == -1 ) { // allow cluster ad to be queried as required by preen, but // do NOT ask to expand $$() macros in a cluster ad! 
ad = GetJobAd_as_ClassAd( cluster_id, proc_id, false, false ); } } terrno = errno; rval = ad ? 0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE) ); } // If we called GetJobAd() with the third bool argument set // to True (expandedAd), it does a deep copy of the ad in the // queue in order to expand the $$() attributes. So we must // delete it. if (delete_ad) delete ad; assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetJobByConstraint: { char *constraint=NULL; ClassAd *ad; int terrno; assert( syscall_sock->code(constraint) ); assert( syscall_sock->end_of_message() );; errno = 0; ad = GetJobByConstraint_as_ClassAd( constraint ); terrno = errno; rval = ad ? 0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE) ); } FreeJobAd(ad); free( (char *)constraint ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetNextJob: { ClassAd *ad; int initScan = 0; int terrno; assert( syscall_sock->code(initScan) ); dprintf( D_SYSCALLS, " initScan = %d\n", initScan ); assert( syscall_sock->end_of_message() );; errno = 0; ad = GetNextJob( initScan ); terrno = errno; rval = ad ? 0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE) ); } FreeJobAd(ad); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetNextJobByConstraint: { char *constraint=NULL; ClassAd *ad; int initScan = 0; int terrno; assert( syscall_sock->code(initScan) ); dprintf( D_SYSCALLS, " initScan = %d\n", initScan ); if ( !(syscall_sock->code(constraint)) ) { if (constraint != NULL) { free(constraint); constraint = NULL; } return -1; } assert( syscall_sock->end_of_message() );; errno = 0; ad = GetNextJobByConstraint( constraint, initScan ); terrno = errno; rval = ad ? 0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE) ); } FreeJobAd(ad); free( (char *)constraint ); assert( syscall_sock->end_of_message() );; return 0; } case CONDOR_GetNextDirtyJobByConstraint: { char *constraint=NULL; ClassAd *ad; int initScan = 0; int terrno; assert( syscall_sock->code(initScan) ); dprintf( D_SYSCALLS, " initScan = %d\n", initScan ); if ( !(syscall_sock->code(constraint)) ) { if (constraint != NULL) { free(constraint); constraint = NULL; } return -1; } assert( syscall_sock->end_of_message() ); errno = 0; ad = GetNextDirtyJobByConstraint( constraint, initScan ); terrno = errno; rval = ad ? 
0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE) ); } FreeJobAd(ad); free( (char *)constraint ); assert( syscall_sock->end_of_message() ); return 0; } case CONDOR_SendSpoolFile: { char *filename=NULL; int terrno; assert( syscall_sock->code(filename) ); assert( syscall_sock->end_of_message() );; errno = 0; rval = SendSpoolFile( filename ); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); #if 0 syscall_sock->encode(); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } assert( syscall_sock->end_of_message() );; #endif free( (char *)filename ); return 0; } case CONDOR_SendSpoolFileIfNeeded: { int terrno; ClassAd ad; assert( getClassAd(syscall_sock, ad) ); assert( syscall_sock->end_of_message() );; errno = 0; rval = SendSpoolFileIfNeeded(ad); terrno = errno; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); return 0; } case CONDOR_GetAllJobsByConstraint: { char *constraint=NULL; char *projection=NULL; ClassAd *ad; int terrno; int initScan = 1; classad::References proj; if ( !(syscall_sock->code(constraint)) ) { if (constraint != NULL) { free(constraint); constraint = NULL; } return -1; } if ( !(syscall_sock->code(projection)) ) { if (projection != NULL) { free(constraint); free(projection); projection = NULL; } return -1; } dprintf( D_SYSCALLS, " constraint = %s\n", constraint ); dprintf( D_SYSCALLS, " projection = %s\n", projection ? projection : ""); assert( syscall_sock->end_of_message() );; // if there is a projection, convert it into a set of attribute names if (projection) { StringTokenIterator list(projection); const std::string * attr; while ((attr = list.next_string())) { proj.insert(*attr); } } syscall_sock->encode(); do { errno = 0; ad = GetNextJobByConstraint( constraint, initScan ); initScan=0; // one first time through, otherwise 0 terrno = errno; rval = ad ? 0 : -1; dprintf( D_SYSCALLS, "\trval = %d, errno = %d\n", rval, terrno ); assert( syscall_sock->code(rval) ); if( rval < 0 ) { assert( syscall_sock->code(terrno) ); } if( rval >= 0 ) { assert( putClassAd(syscall_sock, *ad, PUT_CLASSAD_NO_PRIVATE, proj.empty() ? NULL : &proj) ); FreeJobAd(ad); } } while (rval >= 0); assert( syscall_sock->end_of_message() );; free( (char *)constraint ); free( (char *)projection ); return 0; } case CONDOR_CloseSocket: { assert( syscall_sock->end_of_message() );; return -1; } } /* End of switch */ return -1; } /* End of function */
//-------------------------------------------------------------
VCNResID VCNMeshLoader::LoadMeshElementTextureCoordXML( XMLNodePtr elementNode, VCNCacheType coordType )
{
    XMLNodePtr node = NULL;

    // Fetch the node we need from the element node
    switch( coordType )
    {
    case VT_DIFFUSE_TEX_COORDS:
        elementNode->selectSingleNode( (VCNTChar*)kNodeDiffuseTexCoords, &node );
        break;
    case VT_NORMAL_TEX_COORDS:
        elementNode->selectSingleNode( (VCNTChar*)kNodeNormalTexCoords, &node );
        break;
    default:
        VCN_ASSERT( false && "Trying to load unrecognized coord type!" );
    }

    // If we didn't find it, we don't have it
    if( node == NULL )
        return kInvalidResID;

    // Get the expected size of the array
    VCNUInt size = 0;
    GetAttributeUInt( node, kAttrVertexTexCoordsSize, size );

    // If we don't have any, leave.
    if( size == 0 )
        return kInvalidResID;

    // Create an array to contain all of this (2 floats per position)
    VCNUInt stride = size * kTexCoordFloats;
    VCNFloat* buffer = new VCNFloat[ stride ];

    // Create some tools...
    VCNFloat* ptrFloat = buffer;
    VCNInt safety = 0;

    // Read the XML and fill the array!
    XMLNodeListPtr textureCoords = 0;
    node->selectNodes( (VCNTChar*)kNodeVertexTexCoord, &textureCoords );

    VCNLong textureCoordsLength = 0;
    textureCoords->get_length( &textureCoordsLength );
    VCN_ASSERT( textureCoordsLength == size && "FILE IS CORRUPTED!" );

    for( VCNLong i=0; i<textureCoordsLength; i++ )
    {
        // Get the first one
        XMLNodePtr textureCoordNode = 0;
        textureCoords->get_item( i, &textureCoordNode );

        // Read the U
        GetAttributeFloat( textureCoordNode, kAttrVertexTexCoordU, *ptrFloat );
        ptrFloat++;

        // Read the V
        GetAttributeFloat( textureCoordNode, kAttrVertexTexCoordV, *ptrFloat );
        ptrFloat++;

        // Verify the safety to make sure we're reading in the right order
        GetAttributeInt( textureCoordNode, kAttrVertexTexCoordID, safety );
        VCN_ASSERT( safety==i && "VERTEX AREN'T READ IN ORDER!" );
    }

    // Now give the information to the cache manager
    // (he'll take care of making this data API specific)
    VCNResID cacheID = VCNRenderCore::GetInstance()->CreateCache( coordType, buffer, stride*sizeof(VCNFloat) );

    // Clear the buffer
    delete [] buffer;

    // Return the cache ID
    return cacheID;
}
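// Compact sketch of the buffer-filling loop above: two floats per vertex are
// appended to a flat array, and a per-vertex id is checked against the loop
// index to catch out-of-order data. The UV struct and assert are stand-ins for
// the engine's XML accessors and VCN_ASSERT, so this is illustrative only.
#include <cassert>
#include <vector>

struct UV { float u, v; int id; };

static std::vector<float> PackTexCoords(const std::vector<UV>& coords)
{
    std::vector<float> buffer;
    buffer.reserve(coords.size() * 2);        // 2 floats per texture coordinate
    for (size_t i = 0; i < coords.size(); ++i) {
        assert(coords[i].id == static_cast<int>(i) && "vertices not read in order");
        buffer.push_back(coords[i].u);
        buffer.push_back(coords[i].v);
    }
    return buffer;
}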
int CXmlNode::ReadAttributeInt(const CString& str, const int& nDef)
{
    return GetAttributeInt(str, nDef);
}
//-------------------------------------------------------------
VCNResID VCNMeshLoader::LoadMeshElementPositionXML( XMLNodePtr elementNode, VCNSphere* bounding, VCNAabb* aabb /*= NULL*/ )
{
    // Fetch the node we need from the element node
    XMLNodePtr node = 0;
    elementNode->selectSingleNode( (VCNTChar*)kNodeVertexPositions, &node );
    VCN_ASSERT( node != NULL && "No positions in mesh!" );

    // Get the expected size of the array
    VCNUInt size = 0;
    GetAttributeUInt( node, kAttrVertexPositionsSize, size );

    // If we don't have any, leave.
    if( size == 0 )
        return kInvalidResID;

    // Create an array to contain all of this (3 floats per position)
    VCNUInt stride = size * kPositionFloats;
    VCNFloat* buffer = new VCNFloat[ stride ];

    // Create some tools...
    VCNFloat* ptrFloat = buffer;
    VCNInt safety = 0;

    // Keep track of the min and max
    VCNFloat minX, maxX;
    VCNFloat minY, maxY;
    VCNFloat minZ, maxZ;
    minX = minY = minZ = kMaxFloat;
    maxX = maxY = maxZ = kMinFloat;

    // Read the XML and fill the array!
    XMLNodeListPtr positions = 0;
    node->selectNodes( (VCNTChar*)kNodeVertexPosition, &positions );
    VCN_ASSERT( positions != 0 && "FILE IS CORRUPTED!" );

    VCNLong positionsLength = 0;
    positions->get_length( &positionsLength );
    VCN_ASSERT( positionsLength == size && "FILE IS CORRUPTED!" );

    for( VCNLong i=0; i<positionsLength; i++ )
    {
        // Get the element's node
        XMLNodePtr positionNode = 0;
        positions->get_item( i, &positionNode );

        // Read the X
        GetAttributeFloat( positionNode, kAttrVertexPositionX, *ptrFloat );
        if( *ptrFloat < minX ) minX = *ptrFloat;
        if( *ptrFloat > maxX ) maxX = *ptrFloat;
        ptrFloat++;

        // Read the Y
        GetAttributeFloat( positionNode, kAttrVertexPositionY, *ptrFloat );
        if( *ptrFloat < minY ) minY = *ptrFloat;
        if( *ptrFloat > maxY ) maxY = *ptrFloat;
        ptrFloat++;

        // Read the Z
        GetAttributeFloat( positionNode, kAttrVertexPositionZ, *ptrFloat );
        if( *ptrFloat < minZ ) minZ = *ptrFloat;
        if( *ptrFloat > maxZ ) maxZ = *ptrFloat;
        ptrFloat++;

        // Verify the safety to make sure we're reading in the right order
        GetAttributeInt( positionNode, kAttrVertexPositionID, safety );
        VCN_ASSERT( safety==i && "VERTEX AREN'T READ IN ORDER!" );
    }

    // Now give the information to the cache manager
    // (he'll take care of making this data API specific)
    VCNResID cacheID = VCNRenderCore::GetInstance()->CreateCache( VT_POSITION, buffer, stride*sizeof(VCNFloat) );

    // Clear the buffer
    delete [] buffer;

    Vector3 minVect ( minX, minY, minZ );
    Vector3 maxVect ( maxX, maxY, maxZ );
    Vector3 diagonal = (maxVect - minVect) / 2.0f;

    // If he wants us to fill the AABB, we'll do it for him
    if( bounding )
    {
        VCNSphere tmpSphere( diagonal.Length(), minVect + diagonal );
        *bounding = tmpSphere;
    }

    if (aabb)
    {
        VCNAabb tempAabb(minVect, maxVect);
        *aabb = tempAabb;
    }

    // Return the cache ID
    return cacheID;
}
//-------------------------------------------------------------
/// A lighting array is composed of normals and colors
//-------------------------------------------------------------
VCNResID VCNMeshLoader::LoadMeshElementLightingXML( XMLNodePtr elementNode )
{
    // Fetch the normals from the element node
    XMLNodePtr normals = 0;
    elementNode->selectSingleNode( (VCNTChar*)kNodeVertexNormals, &normals );
    bool hasNormals = (normals != NULL);

    // Fetch the colors from the element node
    XMLNodePtr colors = 0;
    elementNode->selectSingleNode( (VCNTChar*)kNodeVertexColors, &colors );
    bool hasColors = (colors != NULL);

    // Get the expected size of the normals
    VCNUInt normalSize = 0;
    if( hasNormals )
    {
        GetAttributeUInt( normals, kAttrVertexNormalsSize, normalSize );
        if( normalSize == 0 )
            hasNormals = false;
    }

    // Get the expected size of the colors
    VCNUInt colorSize = 0;
    if( hasColors )
    {
        GetAttributeUInt( colors, kAttrVertexColorsSize, colorSize );
        if( colorSize == 0 )
            hasColors = false;
    }

    // If we have neither, then no lighting information at all
    if( !hasColors && !hasNormals )
        return kInvalidResID;

    // If we have both, they MUST be of same size!
    if( hasColors && hasNormals && (normalSize != colorSize) )
    {
        VCN_ASSERT_FAIL( "LIGHTING REJECTED!" );
        return kInvalidResID;
    }

    // Now just retain one of the sizes
    VCNLong size = (VCNLong)(hasNormals?normalSize:colorSize);

    // Create an array to contain all of this (6 floats per vertex)
    VCNUInt stride = size * (kNormalFloats+kColorFloats);
    VCNFloat* buffer = new VCNFloat[ stride ];

    // Create some tools...
    VCNFloat* ptrFloat = buffer;
    VCNInt safety = 0;

    // Pick out the nodes of every normal (if we have them)
    XMLNodeListPtr normalElements;
    if( hasNormals )
    {
        normalElements = 0;
        normals->selectNodes( (VCNTChar*)kNodeVertexNormal, &normalElements );

        VCNLong normalElementsLength = 0;
        normalElements->get_length( &normalElementsLength );
        VCN_ASSERT( normalElementsLength == size && "FILE IS CORRUPTED!" );
    }

    // Pick out the nodes of every color (if we have them)
    XMLNodeListPtr colorsElements;
    if( hasColors )
    {
        colorsElements = 0;
        colors->selectNodes( (VCNTChar*)kNodeVertexColor, &colorsElements );

        VCNLong colorElementsLength = 0;
        colorsElements->get_length( &colorElementsLength );
        VCN_ASSERT( colorElementsLength == size && "FILE IS CORRUPTED!" );
    }

    // Now read it in!
    for( VCNLong i=0; i<size; i++ )
    {
        // Normals
        if( hasNormals )
        {
            // Get the element's node
            XMLNodePtr normalNode = 0;
            normalElements->get_item( i, &normalNode );

            // Read the X
            GetAttributeFloat( normalNode, kAttrVertexNormalX, *ptrFloat );
            ptrFloat++;

            // Read the Y
            GetAttributeFloat( normalNode, kAttrVertexNormalY, *ptrFloat );
            ptrFloat++;

            // Read the Z
            GetAttributeFloat( normalNode, kAttrVertexNormalZ, *ptrFloat );
            ptrFloat++;

            // Verify the safety to make sure we're reading in the right order
            GetAttributeInt( normalNode, kAttrVertexNormalID, safety );
            VCN_ASSERT( safety==i && "VERTEX AREN'T READ IN ORDER!" );
        }
        else
        {
            // Put three zeros instead
            *ptrFloat = 0.0f; ptrFloat++;
            *ptrFloat = 0.0f; ptrFloat++;
            *ptrFloat = 0.0f; ptrFloat++;
        }

        // Then colors
        if( hasColors )
        {
            // Get the element's node
            XMLNodePtr colorNode = 0;
            colorsElements->get_item( i, &colorNode );

            // Read the R
            GetAttributeFloat( colorNode, kAttrVertexColorR, *ptrFloat );
            ptrFloat++;

            // Read the G
            GetAttributeFloat( colorNode, kAttrVertexColorG, *ptrFloat );
            ptrFloat++;

            // Read the B
            GetAttributeFloat( colorNode, kAttrVertexColorB, *ptrFloat );
            ptrFloat++;

            // Verify the safety to make sure we're reading in the right order
            GetAttributeInt( colorNode, kAttrVertexColorID, safety );
            VCN_ASSERT( safety==i && "VERTEX AREN'T READ IN ORDER!" );
        }
        else
        {
            // Put three ones instead (white)
            *ptrFloat = 1.0f; ptrFloat++;
            *ptrFloat = 1.0f; ptrFloat++;
            *ptrFloat = 1.0f; ptrFloat++;
        }
    }

    // Now give the information to the cache manager
    // (he'll take care of making this data API specific)
    VCNResID cacheID = VCNRenderCore::GetInstance()->CreateCache( VT_LIGHTING, buffer, stride*sizeof(VCNFloat) );

    // Clear the buffer
    delete [] buffer;

    // Return the cache ID
    return cacheID;
}
void doContactSchedd()
{
	int rc;
	Qmgr_connection *schedd;
	BaseJob *curr_job;
	ClassAd *next_ad;
	char expr_buf[12000];
	bool schedd_updates_complete = false;
	bool schedd_deletes_complete = false;
	bool add_remove_jobs_complete = false;
	bool update_jobs_complete = false;
	bool commit_transaction = true;
	int failure_line_num = 0;
	bool send_reschedule = false;
	std::string error_str = "";
	StringList dirty_job_ids;
	char *job_id_str;
	PROC_ID job_id;
	CondorError errstack;

	dprintf(D_FULLDEBUG,"in doContactSchedd()\n");

	initJobExprs();

	contactScheddTid = TIMER_UNSET;

	// vacateJobs
	/////////////////////////////////////////////////////
	if ( pendingScheddVacates.getNumElements() != 0 ) {
		std::string buff;
		StringList job_ids;
		VacateRequest curr_request;

		int result;
		ClassAd* rval;

		pendingScheddVacates.startIterations();
		while ( pendingScheddVacates.iterate( curr_request ) != 0 ) {
			formatstr( buff, "%d.%d", curr_request.job->procID.cluster,
					   curr_request.job->procID.proc );
			job_ids.append( buff.c_str() );
		}

		char *tmp = job_ids.print_to_string();
		if ( tmp ) {
			dprintf( D_FULLDEBUG, "Calling vacateJobs on %s\n", tmp );
			free(tmp);
			tmp = NULL;
		}

		rval = ScheddObj->vacateJobs( &job_ids, VACATE_FAST, &errstack );
		if ( rval == NULL ) {
			formatstr( error_str, "vacateJobs returned NULL, CondorError: %s!",
					   errstack.getFullText().c_str() );
			goto contact_schedd_failure;
		} else {
			pendingScheddVacates.startIterations();
			while ( pendingScheddVacates.iterate( curr_request ) != 0 ) {
				formatstr( buff, "job_%d_%d", curr_request.job->procID.cluster,
						   curr_request.job->procID.proc );
				if ( !rval->LookupInteger( buff.c_str(), result ) ) {
					dprintf( D_FULLDEBUG, "vacateJobs returned malformed ad\n" );
					EXCEPT( "vacateJobs returned malformed ad" );
				} else {
					dprintf( D_FULLDEBUG, " %d.%d vacate result: %d\n",
							 curr_request.job->procID.cluster,
							 curr_request.job->procID.proc, result );
					pendingScheddVacates.remove( curr_request.job->procID );
					curr_request.result = (action_result_t)result;
					curr_request.job->SetEvaluateState();
					completedScheddVacates.insert( curr_request.job->procID,
												   curr_request );
				}
			}
			delete rval;
		}
	}

	schedd = ConnectQ( ScheddAddr, QMGMT_TIMEOUT, false, NULL, myUserName, CondorVersion() );
	if ( !schedd ) {
		error_str = "Failed to connect to schedd!";
		goto contact_schedd_failure;
	}

	// CheckLeases
	/////////////////////////////////////////////////////
	if ( checkLeasesSignaled ) {

		dprintf( D_FULLDEBUG, "querying for renewed leases\n" );

		// Grab the lease attributes of all the jobs in our global hashtable.

		BaseJob::JobsByProcId.startIterations();

		while ( BaseJob::JobsByProcId.iterate( curr_job ) != 0 ) {
			int new_expiration;

			rc = GetAttributeInt( curr_job->procID.cluster,
								  curr_job->procID.proc,
								  ATTR_TIMER_REMOVE_CHECK,
								  &new_expiration );
			if ( rc < 0 ) {
				if ( errno == ETIMEDOUT ) {
					failure_line_num = __LINE__;
					commit_transaction = false;
					goto contact_schedd_disconnect;
				} else {
					// This job doesn't have a lease from
					// the submitter. Skip it.
					continue;
				}
			}
			curr_job->UpdateJobLeaseReceived( new_expiration );
		}

		checkLeasesSignaled = false;
	}	// end of handling check leases

	// AddJobs
	/////////////////////////////////////////////////////
	if ( addJobsSignaled || firstScheddContact ) {
		int num_ads = 0;

		dprintf( D_FULLDEBUG, "querying for new jobs\n" );

		// Make sure we grab all Globus Universe jobs (except held ones
		// that we previously indicated we were done with)
		// when we first start up in case we're recovering from a
		// shutdown/meltdown.
		// Otherwise, grab all jobs that are unheld and aren't marked as
		// currently being managed and aren't marked as not matched.
		// If JobManaged is undefined, equate it with false.
		// If Matched is undefined, equate it with true.
		// NOTE: Schedds from Condor 6.6 and earlier don't include
		// "(Universe==9)" in the constraint they give to the gridmanager,
		// so this gridmanager will pull down non-globus-universe ads,
		// although it won't use them. This is inefficient but not
		// incorrect behavior.
		if ( firstScheddContact ) {
			// Grab all jobs for us to manage. This expression is a
			// derivative of the expression below for new jobs. We add
			// "|| Managed =?= TRUE" to also get jobs our previous
			// incarnation was in the middle of managing when it died
			// (if it died unexpectedly). With the new term, the
			// "&& Managed =!= TRUE" from the new jobs expression becomes
			// superfluous (by boolean logic), so we drop it.
			sprintf( expr_buf,
					 "%s && %s && ((%s && %s) || %s)",
					 expr_schedd_job_constraint.c_str(),
					 expr_not_completely_done.c_str(),
					 expr_matched_or_undef.c_str(),
					 expr_not_held.c_str(),
					 expr_managed.c_str() );
		} else {
			// Grab new jobs for us to manage
			sprintf( expr_buf,
					 "%s && %s && %s && %s && %s",
					 expr_schedd_job_constraint.c_str(),
					 expr_not_completely_done.c_str(),
					 expr_matched_or_undef.c_str(),
					 expr_not_held.c_str(),
					 expr_not_managed.c_str() );
		}
		dprintf( D_FULLDEBUG,"Using constraint %s\n",expr_buf);
		next_ad = GetNextJobByConstraint( expr_buf, 1 );
		while ( next_ad != NULL ) {
			PROC_ID procID;
			BaseJob *old_job;
			int job_is_matched = 1;		// default to true if not in ClassAd

			next_ad->LookupInteger( ATTR_CLUSTER_ID, procID.cluster );
			next_ad->LookupInteger( ATTR_PROC_ID, procID.proc );
			bool job_is_managed = jobExternallyManaged(next_ad);
			next_ad->LookupBool(ATTR_JOB_MATCHED,job_is_matched);

			if ( BaseJob::JobsByProcId.lookup( procID, old_job ) != 0 ) {

				JobType *job_type = NULL;
				BaseJob *new_job = NULL;

				// job had better be either managed or matched! (or both)
				ASSERT( job_is_managed || job_is_matched );

				if ( MustExpandJobAd( next_ad ) ) {
					// Get the expanded ClassAd from the schedd, which
					// has the GridResource filled in with info from
					// the matched ad.
					delete next_ad;
					next_ad = NULL;

					next_ad = GetJobAd(procID.cluster,procID.proc);
					if ( next_ad == NULL && errno == ETIMEDOUT ) {
						failure_line_num = __LINE__;
						commit_transaction = false;
						goto contact_schedd_disconnect;
					}
					if ( next_ad == NULL ) {
						// We may get here if it was not possible to expand
						// one of the $$() expressions. We don't want to
						// roll back the transaction and blow away the
						// hold that the schedd just put on the job, so
						// simply skip over this ad.
						dprintf(D_ALWAYS,"Failed to get expanded job ClassAd from Schedd for %d.%d. errno=%d\n",procID.cluster,procID.proc,errno);
						goto contact_schedd_next_add_job;
					}
				}

				// Search our job types for one that'll handle this job
				jobTypes.Rewind();
				while ( jobTypes.Next( job_type ) ) {
					if ( job_type->AdMatchFunc( next_ad ) ) {
						// Found one!
						dprintf( D_FULLDEBUG, "Using job type %s for job %d.%d\n",
								 job_type->Name, procID.cluster, procID.proc );
						break;
					}
				}

				if ( job_type != NULL ) {
					new_job = job_type->CreateFunc( next_ad );
				} else {
					dprintf( D_ALWAYS, "No handlers for job %d.%d\n",
							 procID.cluster, procID.proc );
					new_job = new BaseJob( next_ad );
				}

				ASSERT(new_job);
				new_job->SetEvaluateState();
				dprintf(D_ALWAYS,"Found job %d.%d --- inserting\n",
						new_job->procID.cluster,new_job->procID.proc);
				num_ads++;

				if ( !job_is_managed ) {
					rc = tSetAttributeString( new_job->procID.cluster,
											  new_job->procID.proc,
											  ATTR_JOB_MANAGED,
											  MANAGED_EXTERNAL);
					if ( rc < 0 ) {
						failure_line_num = __LINE__;
						commit_transaction = false;
						goto contact_schedd_disconnect;
					}
				}

			} else {

				// We already know about this job, skip
				// But also set Managed=true on the schedd so that it won't
				// keep signalling us about it
				delete next_ad;
				rc = tSetAttributeString( procID.cluster, procID.proc,
										  ATTR_JOB_MANAGED, MANAGED_EXTERNAL );
				if ( rc < 0 ) {
					failure_line_num = __LINE__;
					commit_transaction = false;
					goto contact_schedd_disconnect;
				}

			}

contact_schedd_next_add_job:
			next_ad = GetNextJobByConstraint( expr_buf, 0 );
		}	// end of while next_ad
		if ( errno == ETIMEDOUT ) {
			failure_line_num = __LINE__;
			commit_transaction = false;
			goto contact_schedd_disconnect;
		}

		dprintf(D_FULLDEBUG,"Fetched %d new job ads from schedd\n",num_ads);
	}	// end of handling add jobs

	// RemoveJobs
	/////////////////////////////////////////////////////

	// We always want to perform this check. Otherwise, we may overwrite a
	// REMOVED/HELD/COMPLETED status with something else below.
	{
		int num_ads = 0;

		dprintf( D_FULLDEBUG, "querying for removed/held jobs\n" );

		// Grab jobs marked as REMOVED/COMPLETED or marked as HELD that we
		// haven't previously indicated that we're done with (by setting
		// JobManaged to "Schedd").
		sprintf( expr_buf, "(%s) && (%s) && (%s == %d || %s == %d || (%s == %d && %s =?= \"%s\"))",
				 ScheddJobConstraint, expr_not_completely_done.c_str(),
				 ATTR_JOB_STATUS, REMOVED,
				 ATTR_JOB_STATUS, COMPLETED,
				 ATTR_JOB_STATUS, HELD,
				 ATTR_JOB_MANAGED, MANAGED_EXTERNAL );

		dprintf( D_FULLDEBUG,"Using constraint %s\n",expr_buf);
		next_ad = GetNextJobByConstraint( expr_buf, 1 );
		while ( next_ad != NULL ) {
			PROC_ID procID;
			BaseJob *next_job;
			int curr_status;

			next_ad->LookupInteger( ATTR_CLUSTER_ID, procID.cluster );
			next_ad->LookupInteger( ATTR_PROC_ID, procID.proc );
			next_ad->LookupInteger( ATTR_JOB_STATUS, curr_status );

			if ( BaseJob::JobsByProcId.lookup( procID, next_job ) == 0 ) {
				// Should probably skip jobs we already have marked as
				// held or removed
				next_job->JobAdUpdateFromSchedd( next_ad, true );
				num_ads++;

			} else if ( curr_status == REMOVED ) {

				// If we don't know about the job, act like we got an
				// ADD_JOBS signal from the schedd the next time we
				// connect, so that we'll create a Job object for it
				// and decide how it needs to be handled.
				// TODO The AddJobs and RemoveJobs queries should be
				// combined into a single query.
				dprintf( D_ALWAYS,
						 "Don't know about removed job %d.%d. "
						 "Will treat it as a new job to manage\n",
						 procID.cluster, procID.proc );
				addJobsSignaled = true;

			} else {

				dprintf( D_ALWAYS,
						 "Don't know about held/completed job %d.%d. "
						 "Ignoring it\n",
						 procID.cluster, procID.proc );
			}
			delete next_ad;
			next_ad = GetNextJobByConstraint( expr_buf, 0 );
		}
		if ( errno == ETIMEDOUT ) {
			failure_line_num = __LINE__;
			commit_transaction = false;
			goto contact_schedd_disconnect;
		}

		dprintf(D_FULLDEBUG,"Fetched %d job ads from schedd\n",num_ads);
	}

	if ( RemoteCommitTransaction() < 0 ) {
		failure_line_num = __LINE__;
		commit_transaction = false;
		goto contact_schedd_disconnect;
	}

	add_remove_jobs_complete = true;

	// Retrieve dirty attributes
	/////////////////////////////////////////////////////
	if ( updateJobsSignaled ) {
		dprintf( D_FULLDEBUG, "querying for jobs with attribute updates\n" );

		sprintf( expr_buf, "%s && %s && %s && %s",
				 expr_schedd_job_constraint.c_str(),
				 expr_not_completely_done.c_str(),
				 expr_not_held.c_str(),
				 expr_managed.c_str() );
		dprintf( D_FULLDEBUG,"Using constraint %s\n",expr_buf);
		next_ad = GetNextDirtyJobByConstraint( expr_buf, 1 );
		while ( next_ad != NULL ) {
			ClassAd updates;
			char str[PROC_ID_STR_BUFLEN];
			next_ad->LookupInteger( ATTR_CLUSTER_ID, job_id.cluster );
			next_ad->LookupInteger( ATTR_PROC_ID, job_id.proc );
			if ( GetDirtyAttributes( job_id.cluster, job_id.proc, &updates ) < 0 ) {
				dprintf( D_ALWAYS, "Failed to retrieve dirty attributes for job %d.%d\n", job_id.cluster, job_id.proc );
				failure_line_num = __LINE__;
				delete next_ad;
				goto contact_schedd_disconnect;
			}
			else {
				dprintf (D_FULLDEBUG, "Retrieved updated attributes for job %d.%d\n", job_id.cluster, job_id.proc);
				dPrintAd(D_JOB, updates);
			}
			if ( BaseJob::JobsByProcId.lookup( job_id, curr_job ) == 0 ) {
				curr_job->JobAdUpdateFromSchedd( &updates, false );
				ProcIdToStr( job_id, str );
				dirty_job_ids.append( str );
			}
			else {
				dprintf( D_ALWAYS, "Don't know about updated job %d.%d. "
						 "Ignoring it\n",
						 job_id.cluster, job_id.proc );
			}
			delete next_ad;
			next_ad = GetNextDirtyJobByConstraint( expr_buf, 0 );
		}
	}
	update_jobs_complete = true;

//	if ( BeginTransaction() < 0 ) {
	errno = 0;
	BeginTransaction();
	if ( errno == ETIMEDOUT ) {
		failure_line_num = __LINE__;
		commit_transaction = false;
		goto contact_schedd_disconnect;
	}

	// requestJobStatus
	/////////////////////////////////////////////////////
	if ( pendingJobStatus.getNumElements() != 0 ) {
		JobStatusRequest curr_request;

		pendingJobStatus.startIterations();
		while ( pendingJobStatus.iterate( curr_request ) != 0 ) {

			int status;

			rc = GetAttributeInt( curr_request.job_id.cluster,
								  curr_request.job_id.proc,
								  ATTR_JOB_STATUS, &status );
			if ( rc < 0 ) {
				if ( errno == ETIMEDOUT ) {
					failure_line_num = __LINE__;
					commit_transaction = false;
					goto contact_schedd_disconnect;
				} else {
					// The job is not in the schedd's job queue. This
					// probably means that the user did a condor_rm -f,
					// so return a job status of REMOVED.
					status = REMOVED;
				}
			}
			// return status
			dprintf( D_FULLDEBUG, "%d.%d job status: %d\n",
					 curr_request.job_id.cluster,
					 curr_request.job_id.proc, status );
			pendingJobStatus.remove( curr_request.job_id );
			curr_request.job_status = status;
			daemonCore->Reset_Timer( curr_request.tid, 0 );
			completedJobStatus.insert( curr_request.job_id,
									   curr_request );
		}

	}

	// Update existing jobs
	/////////////////////////////////////////////////////
	ScheddUpdateRequest *curr_request;
	pendingScheddUpdates.startIterations();

	while ( pendingScheddUpdates.iterate( curr_request ) != 0 ) {

		curr_job = curr_request->m_job;
		dprintf(D_FULLDEBUG,"Updating classad values for %d.%d:\n",
				curr_job->procID.cluster, curr_job->procID.proc);
		const char *attr_name;
		const char *attr_value;
		ExprTree *expr;
		bool fake_job_in_queue = false;
		curr_job->jobAd->ResetExpr();
		while ( curr_job->jobAd->NextDirtyExpr(attr_name, expr) == true &&
				fake_job_in_queue == false ) {
			attr_value = ExprTreeToString( expr );

			dprintf(D_FULLDEBUG," %s = %s\n",attr_name,attr_value);
			rc = SetAttribute( curr_job->procID.cluster,
							   curr_job->procID.proc,
							   attr_name,
							   attr_value);
			if ( rc < 0 ) {
				if ( errno == ETIMEDOUT ) {
					failure_line_num = __LINE__;
					commit_transaction = false;
					goto contact_schedd_disconnect;
				} else {
					// The job is not in the schedd's job queue. This
					// probably means that the user did a condor_rm -f,
					// so pretend that all updates for the job succeed.
					// Otherwise, we'll never make forward progress on
					// the job.
					// TODO We should also fake a job status of REMOVED
					// to the job, so it can do what cleanup it can.
					fake_job_in_queue = true;
					break;
				}
			}
		}

	}

	if ( RemoteCommitTransaction() < 0 ) {
		failure_line_num = __LINE__;
		commit_transaction = false;
		goto contact_schedd_disconnect;
	}

	schedd_updates_complete = true;

	// Delete existing jobs
	/////////////////////////////////////////////////////
	errno = 0;
	BeginTransaction();
	if ( errno == ETIMEDOUT ) {
		failure_line_num = __LINE__;
		commit_transaction = false;
		goto contact_schedd_disconnect;
	}

	pendingScheddUpdates.startIterations();

	while ( pendingScheddUpdates.iterate( curr_request ) != 0 ) {

		curr_job = curr_request->m_job;
		if ( curr_job->deleteFromSchedd ) {
			dprintf(D_FULLDEBUG,"Deleting job %d.%d from schedd\n",
					curr_job->procID.cluster, curr_job->procID.proc);
			rc = DestroyProc(curr_job->procID.cluster,
							 curr_job->procID.proc);
			// NOENT means the job doesn't exist. Good enough for us.
			if ( rc < 0 && rc != DESTROYPROC_ENOENT) {
				failure_line_num = __LINE__;
				commit_transaction = false;
				goto contact_schedd_disconnect;
			}
		}
	}

	if ( RemoteCommitTransaction() < 0 ) {
		failure_line_num = __LINE__;
		commit_transaction = false;
		goto contact_schedd_disconnect;
	}

	schedd_deletes_complete = true;

 contact_schedd_disconnect:
	DisconnectQ( schedd, commit_transaction );

	if ( add_remove_jobs_complete == true ) {
		firstScheddContact = false;
		addJobsSignaled = false;
	} else {
		formatstr( error_str, "Schedd connection error during Add/RemoveJobs at line %d!", failure_line_num );
		goto contact_schedd_failure;
	}

	if ( update_jobs_complete == true ) {
		updateJobsSignaled = false;
	} else {
		formatstr( error_str, "Schedd connection error during dirty attribute update at line %d!", failure_line_num );
		goto contact_schedd_failure;
	}

	if ( schedd_updates_complete == false ) {
		formatstr( error_str, "Schedd connection error during updates at line %d!", failure_line_num );
		goto contact_schedd_failure;
	}

	// Clear dirty bits for all jobs updated
	if ( !dirty_job_ids.isEmpty() ) {
		ClassAd *rval;
		dprintf( D_FULLDEBUG, "Calling clearDirtyAttrs on %d jobs\n",
				 dirty_job_ids.number() );
		dirty_job_ids.rewind();
		rval = ScheddObj->clearDirtyAttrs( &dirty_job_ids, &errstack );
		if ( rval == NULL ) {
			dprintf(D_ALWAYS, "Failed to notify schedd to clear dirty attributes. CondorError: %s\n", errstack.getFullText().c_str() );
		}
		delete rval;
	}

	// Wake up jobs that had schedd updates pending and delete job
	// objects that wanted to be deleted
	pendingScheddUpdates.startIterations();

	while ( pendingScheddUpdates.iterate( curr_request ) != 0 ) {

		curr_job = curr_request->m_job;
		curr_job->jobAd->ClearAllDirtyFlags();

		if ( curr_job->deleteFromGridmanager ) {

			// If the Job object wants to delete the job from the
			// schedd but we failed to do so, don't delete the job
			// object yet; wait until we successfully delete the job
			// from the schedd.
			if ( curr_job->deleteFromSchedd == true &&
				 schedd_deletes_complete == false ) {
				continue;
			}

			// If wantRematch is set, send a reschedule now
			if ( curr_job->wantRematch ) {
				send_reschedule = true;
			}

			pendingScheddUpdates.remove( curr_job->procID );
			pendingScheddVacates.remove( curr_job->procID );
			pendingJobStatus.remove( curr_job->procID );
			completedJobStatus.remove( curr_job->procID );
			completedScheddVacates.remove( curr_job->procID );
			delete curr_job;

		} else {

			pendingScheddUpdates.remove( curr_job->procID );

			if ( curr_request->m_notify ) {
				curr_job->SetEvaluateState();
			}
		}

		delete curr_request;
	}

	// Poke objects that wanted to be notified when a schedd update completed
	// successfully (possibly minus deletes)
	int timer_id;
	scheddUpdateNotifications.Rewind();
	while ( scheddUpdateNotifications.Next( timer_id ) ) {
		daemonCore->Reset_Timer( timer_id, 0 );
	}
	scheddUpdateNotifications.Clear();

	if ( send_reschedule == true ) {
		ScheddObj->reschedule();
	}

	// Check if we have any jobs left to manage. If not, exit.
if ( BaseJob::JobsByProcId.getNumElements() == 0 ) { dprintf( D_ALWAYS, "No jobs left, shutting down\n" ); daemonCore->Send_Signal( daemonCore->getpid(), SIGTERM ); } lastContactSchedd = time(NULL); if ( schedd_deletes_complete == false ) { error_str = "Problem using DestroyProc to delete jobs!"; goto contact_schedd_failure; } scheddFailureCount = 0; // For each job that had dirty attributes, re-evaluate the policy dirty_job_ids.rewind(); while ( (job_id_str = dirty_job_ids.next()) != NULL ) { StrToProcIdFixMe(job_id_str, job_id); if ( BaseJob::JobsByProcId.lookup( job_id, curr_job ) == 0 ) { curr_job->EvalPeriodicJobExpr(); } } dprintf(D_FULLDEBUG,"leaving doContactSchedd()\n"); return; contact_schedd_failure: scheddFailureCount++; if ( error_str == "" ) { error_str = "Failure in doContactSchedd"; } if ( scheddFailureCount >= maxScheddFailures ) { dprintf( D_ALWAYS, "%s\n", error_str.c_str() ); EXCEPT( "Too many failures connecting to schedd!" ); } dprintf( D_ALWAYS, "%s Will retry\n", error_str.c_str() ); lastContactSchedd = time(NULL); RequestContactSchedd(); return; }
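The gridmanager routine above relies on a consistent error split around the qmgmt calls: a GetAttributeInt failure with errno == ETIMEDOUT aborts the whole transaction and schedules a retry, while any other failure is taken to mean the job has already left the queue (for example after condor_rm -f). A minimal, self-contained sketch of that decision follows; GetAttributeIntSim, QueryJobStatus, and the numeric status code used for REMOVED are illustrative assumptions, not the real qmgmt API.

#include <cerrno>
#include <cstdio>

// Hypothetical stand-in for the schedd's GetAttributeInt(): returns a
// negative value on failure and leaves errno == ETIMEDOUT when the
// connection to the schedd timed out.
static int GetAttributeIntSim(int /*cluster*/, int /*proc*/,
                              const char * /*name*/, int *value)
{
    *value = 2;   // pretend the job is still running
    return 0;
}

enum class QueryResult { Ok, RetryLater, JobGone };

// Sketch of the error split used above: a timeout aborts the transaction
// and retries later, while "attribute not found" means the job is gone
// from the queue and is treated as removed.
static QueryResult QueryJobStatus(int cluster, int proc, int &status)
{
    errno = 0;
    if (GetAttributeIntSim(cluster, proc, "JobStatus", &status) < 0) {
        if (errno == ETIMEDOUT)
            return QueryResult::RetryLater;   // disconnect, keep state, retry
        status = 3;   // REMOVED in the job-status codes assumed here
        return QueryResult::JobGone;
    }
    return QueryResult::Ok;
}

int main()
{
    int status = 0;
    if (QueryJobStatus(1, 0, status) == QueryResult::Ok)
        std::printf("job 1.0 status: %d\n", status);
    return 0;
}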
//------------------------------------------------------------- VCNResID VCNMeshLoader::LoadMeshElementFaceXML( XMLNodePtr elementNode ) { // Fetch the node we need from the element node XMLNodePtr node = 0; elementNode->selectSingleNode( (VCNTChar*)kNodeFaces, &node ); // It might very well be that we aren't using indexes if( node == NULL ) return kInvalidResID; // Get the expected size of the array VCNUInt size = 0; GetAttributeUInt( node, kAttrFacesSize, size ); // If we don't have any, leave. if( size == 0 ) return kInvalidResID; // Create an array to contain all of this (3 indexes per face) VCNUInt stride = kFaceUShorts * kCacheStrides[VT_INDEX]; VCNUInt numBytes = size * stride; VCNByte* buffer = new VCNByte[numBytes]; // Create some tools... VCNUShort* ptrFaces = (VCNUShort*)buffer; VCNInt safety = 0; // Read the XML and fill the array! XMLNodeListPtr faces = 0; node->selectNodes( (VCNTChar*)kNodeFace, &faces ); VCNLong facesLength = 0; faces->get_length( &facesLength ); VCN_ASSERT( facesLength == size && "FILE IS CORRUPTED!" ); for( VCNLong i=0; i<facesLength; i++ ) { // Get the element's node XMLNodePtr faceNode = 0; faces->get_item( i, &faceNode ); // Read the X GetAttributeUShort( faceNode, kAttrFace1, *ptrFaces ); ptrFaces++; // Read the Y GetAttributeUShort( faceNode, kAttrFace2, *ptrFaces ); ptrFaces++; // Read the Z GetAttributeUShort( faceNode, kAttrFace3, *ptrFaces ); ptrFaces++; // Verify the safety counter to make sure we're reading in the right order GetAttributeInt( faceNode, kAttrVertexPositionID, safety ); VCN_ASSERT( safety==i && "VERTICES AREN'T READ IN ORDER!" ); } // Now give the information to the cache manager // (it'll take care of making this data API-specific) VCNResID cacheID = VCNRenderCore::GetInstance()->CreateCache( VT_INDEX, buffer, numBytes ); // Free the buffer delete [] buffer; // Return the cache ID return cacheID; }
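The face loop above packs three indices per <Face> element into a flat buffer and uses the per-face id attribute purely as an ordering check. A simplified sketch of the same packing over already-parsed records; ParsedFace and PackIndexBuffer are hypothetical stand-ins for the VCN XML helpers, not part of that engine.

#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical already-parsed face record; stands in for the XML node and
// its kAttrFace1/2/3 and kAttrVertexPositionID attributes.
struct ParsedFace {
    uint16_t a, b, c;
    int      id;   // expected to equal the face's position in the list
};

// Sketch of the loop above: pack three indices per face into a flat
// buffer and assert that faces arrive in order.
static std::vector<uint16_t> PackIndexBuffer(const std::vector<ParsedFace> &faces)
{
    std::vector<uint16_t> indices;
    indices.reserve(faces.size() * 3);   // 3 indices per face
    for (size_t i = 0; i < faces.size(); ++i) {
        assert(faces[i].id == static_cast<int>(i) && "faces not in order");
        indices.push_back(faces[i].a);
        indices.push_back(faces[i].b);
        indices.push_back(faces[i].c);
    }
    return indices;
}

int main()
{
    std::vector<ParsedFace> faces = { {0, 1, 2, 0}, {2, 1, 3, 1} };
    std::vector<uint16_t> buffer = PackIndexBuffer(faces);
    return buffer.size() == 6 ? 0 : 1;
}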
void update_job_status( struct rusage *localp, struct rusage *remotep ) { int status = -1; double utime = 0.0; double stime = 0.0; int tot_sus=0, cum_sus=0, last_sus=0; char buf[1024*50]; // If the job completed, and there is no HISTORY file specified, // then don't bother to update the job ClassAd since it is about to be // flushed into the bit bucket by the schedd anyway. char *myHistoryFile = param("HISTORY"); if ((Proc->status == COMPLETED) && (myHistoryFile==NULL)) { return; } if (myHistoryFile) { free(myHistoryFile); } if (!JobAd) { EXCEPT( "update_job_status(): No job ad"); } JobAd->LookupInteger(ATTR_TOTAL_SUSPENSIONS, tot_sus); JobAd->LookupInteger(ATTR_CUMULATIVE_SUSPENSION_TIME, cum_sus); JobAd->LookupInteger(ATTR_LAST_SUSPENSION_TIME, last_sus); //new syntax, can use filesystem to authenticate if (!ConnectQ(schedd, SHADOW_QMGMT_TIMEOUT) || GetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_STATUS, &status) < 0) { EXCEPT("Failed to connect to schedd!"); } job_report_update_queue( Proc ); if( status == REMOVED ) { dprintf( D_ALWAYS, "update_job_status(): Job %d.%d has been removed " "by condor_rm\n", Proc->id.cluster, Proc->id.proc ); } else { SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_TOTAL_SUSPENSIONS, tot_sus); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_CUMULATIVE_SUSPENSION_TIME, cum_sus); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_LAST_SUSPENSION_TIME, last_sus); update_job_rusage( localp, remotep ); Proc->image_size = ImageSize; SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_IMAGE_SIZE, ImageSize); // For standard universe, MemoryUsed==ImageSize; no need to param this one // because imagesize is already the best measure of memory usage. SetAttribute(Proc->id.cluster, Proc->id.proc, ATTR_MEMORY_USAGE, "((ImageSize+1023)/1024)"); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_EXIT_STATUS, JobExitStatus); rusage_to_float( Proc->local_usage, &utime, &stime ); SetAttributeFloat(Proc->id.cluster, Proc->id.proc, ATTR_JOB_LOCAL_USER_CPU, utime); SetAttributeFloat(Proc->id.cluster, Proc->id.proc, ATTR_JOB_LOCAL_SYS_CPU, stime); rusage_to_float( Proc->remote_usage[0], &utime, &stime ); SetAttributeFloat(Proc->id.cluster, Proc->id.proc, ATTR_JOB_REMOTE_USER_CPU, utime); SetAttributeFloat(Proc->id.cluster, Proc->id.proc, ATTR_JOB_REMOTE_SYS_CPU, stime); dprintf(D_FULLDEBUG,"TIME DEBUG 3 USR remotep=%lu Proc=%lu utime=%f\n", remotep->ru_utime.tv_sec, Proc->remote_usage[0].ru_utime.tv_sec, utime); dprintf(D_FULLDEBUG,"TIME DEBUG 4 SYS remotep=%lu Proc=%lu stime=%f\n", remotep->ru_stime.tv_sec, Proc->remote_usage[0].ru_stime.tv_sec, stime); if( sock_RSC1 ) { float TotalBytesSentUpdate = TotalBytesSent + sock_RSC1->get_bytes_sent() + BytesSent; float TotalBytesRecvdUpdate = TotalBytesRecvd + sock_RSC1->get_bytes_recvd() + BytesRecvd; SetAttributeFloat( Proc->id.cluster, Proc->id.proc, ATTR_BYTES_SENT, TotalBytesSentUpdate ); SetAttributeFloat( Proc->id.cluster, Proc->id.proc, ATTR_BYTES_RECVD, TotalBytesRecvdUpdate ); float RSCBytesSentUpdate = sock_RSC1->get_bytes_sent() + RSCBytesSent; float RSCBytesRecvdUpdate = sock_RSC1->get_bytes_recvd() + RSCBytesRecvd; SetAttributeFloat( Proc->id.cluster, Proc->id.proc, ATTR_RSC_BYTES_SENT, RSCBytesSentUpdate ); SetAttributeFloat( Proc->id.cluster, Proc->id.proc, ATTR_RSC_BYTES_RECVD, RSCBytesRecvdUpdate ); } if( ExitReason == JOB_CKPTED || ExitReason == JOB_NOT_CKPTED ) { SetAttributeInt( Proc->id.cluster, Proc->id.proc, ATTR_LAST_VACATE_TIME, time(0) ); } if( ExitReason == JOB_CKPTED || 
LastCkptTime > LastRestartTime ) { int uncommitted_suspension_time = 0; JobAd->LookupInteger(ATTR_UNCOMMITTED_SUSPENSION_TIME, uncommitted_suspension_time); if( uncommitted_suspension_time > 0 ) { int committed_suspension_time = 0; GetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_COMMITTED_SUSPENSION_TIME, &committed_suspension_time); committed_suspension_time += uncommitted_suspension_time; SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_COMMITTED_SUSPENSION_TIME, committed_suspension_time); } } // if we had checkpointed, then save all of these attributes as well. if (LastCkptTime > LastRestartTime) { SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_LAST_CKPT_TIME, LastCkptTime); CommittedTime=0; GetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_COMMITTED_TIME, &CommittedTime); CommittedTime += LastCkptTime - LastRestartTime; SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_COMMITTED_TIME, CommittedTime); LastRestartTime = LastCkptTime; SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_NUM_CKPTS, NumCkpts); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_NUM_RESTARTS, NumRestarts); if (Executing_Arch) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_CKPT_ARCH, Executing_Arch); } if (Executing_OpSys) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_CKPT_OPSYS, Executing_OpSys); } // If we wrote a checkpoint, store the location in the // LastCkptServer attribute. If we didn't use a checkpoint // server (i.e., we stored it locally), then make sure // no LastCkptServer attribute is set. if (LastCkptServer) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_LAST_CKPT_SERVER, LastCkptServer); } else { DeleteAttribute(Proc->id.cluster, Proc->id.proc, ATTR_LAST_CKPT_SERVER); } if (LastCkptPlatform) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_LAST_CHECKPOINT_PLATFORM, LastCkptPlatform); } } // if the job completed, we should include the run-time in // committed time, since it contributed to the completion of // the job. Also, commit the exit code/signal stuff, plus any // core filenames. if (Proc->status == COMPLETED) { int exit_code, exit_signal, exit_by_signal; int pending; // update the time. CommittedTime = 0; GetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_COMMITTED_TIME, &CommittedTime); CommittedTime += Proc->completion_date - LastRestartTime; SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_JOB_COMMITTED_TIME, CommittedTime); // if there is a core file, update that too. if (JobAd->LookupString(ATTR_JOB_CORE_FILENAME, buf, sizeof(buf))) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_JOB_CORE_FILENAME, buf); } // only new style ads have ATTR_ON_EXIT_BY_SIGNAL, so only // SetAttribute for those types of ads if (JobAd->LookupInteger(ATTR_ON_EXIT_BY_SIGNAL, exit_by_signal)==1) { SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_ON_EXIT_BY_SIGNAL, exit_by_signal); if (exit_by_signal == 1) /* exited via signal */ { JobAd->LookupInteger(ATTR_ON_EXIT_SIGNAL, exit_signal); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_ON_EXIT_SIGNAL, exit_signal); } else { JobAd->LookupInteger(ATTR_ON_EXIT_CODE, exit_code); SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_ON_EXIT_CODE, exit_code); } } // and now, let's try and mark this job as a terminate pending // job. If the job already is, then fine. We'll mark it again. 
if (JobAd->LookupBool(ATTR_TERMINATION_PENDING, pending)) { SetAttribute(Proc->id.cluster, Proc->id.proc, ATTR_TERMINATION_PENDING, pending?"TRUE":"FALSE"); } else { // if it isn't in the job ad, then add it to the saved ad in the // schedd. SetAttribute(Proc->id.cluster, Proc->id.proc, ATTR_TERMINATION_PENDING, "TRUE"); } // store the reason why the job is marked completed. if (JobAd->LookupString(ATTR_TERMINATION_REASON, buf, sizeof(buf))) { SetAttributeString(Proc->id.cluster, Proc->id.proc, ATTR_TERMINATION_REASON, buf); } // Set up the exit code the shadow was about to exit with to // help support the terminate pending "state". SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_TERMINATION_EXITREASON, ExitReason); // Put the job status as created by waitpid() into the job ad // itself. This is to implement the terminate_pending feature. It // is done like this because EVERYWHERE in this codebase we do // stuff like WIFEXITED(JobStatus) and it turns out there are no // user level macros to will one of those status values as returned // by waitpid() into existence. So, we'll put it directly into the // job ad to avoid having to reimplement a few large functions // which deal with JobStatus directly--as it is sadly a global // variable. SetAttributeInt(Proc->id.cluster, Proc->id.proc, ATTR_WAITPID_STATUS, JobStatus); } } if (!DisconnectQ(0)) { EXCEPT("Failed to commit updated job queue status!"); } }
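Several updates in update_job_status() follow the same read-modify-write shape: GetAttributeInt() fetches the running total from the schedd (a missing attribute is treated as zero), the newly committed interval is added, and SetAttributeInt() writes the sum back. A minimal sketch of that pattern against a hypothetical in-memory attribute store; the *Sim helpers below are stand-ins for illustration, not the real qmgmt calls.

#include <cstdio>
#include <map>
#include <string>

// Hypothetical in-memory stand-in for the schedd's per-job attributes;
// the real code goes through GetAttributeInt()/SetAttributeInt().
static std::map<std::string, int> jobAttrs;

static int GetAttributeIntSim(const char *name, int *value)
{
    auto it = jobAttrs.find(name);
    if (it == jobAttrs.end())
        return -1;        // attribute not set yet
    *value = it->second;
    return 0;
}

static void SetAttributeIntSim(const char *name, int value)
{
    jobAttrs[name] = value;
}

// Sketch of the read-modify-write used above for CommittedTime and
// CommittedSuspensionTime: a missing attribute simply counts as zero.
static void AddCommittedTime(int newlyCommittedSeconds)
{
    int committed = 0;
    GetAttributeIntSim("JobCommittedTime", &committed);   // stays 0 if unset
    committed += newlyCommittedSeconds;
    SetAttributeIntSim("JobCommittedTime", committed);
}

int main()
{
    AddCommittedTime(120);
    AddCommittedTime(60);
    int total = 0;
    GetAttributeIntSim("JobCommittedTime", &total);
    std::printf("committed: %d\n", total);   // prints 180
    return 0;
}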
bool CImportDialog::ImportLegacySites(TiXmlElement* pSitesToImport, TiXmlElement* pExistingSites) { for (TiXmlElement* pImportFolder = pSitesToImport->FirstChildElement("Folder"); pImportFolder; pImportFolder = pImportFolder->NextSiblingElement("Folder")) { wxString name = GetTextAttribute(pImportFolder, "Name"); if (name == _T("")) continue; wxString newName = name; int i = 2; TiXmlElement* pFolder; while (!(pFolder = GetFolderWithName(pExistingSites, newName))) { newName = wxString::Format(_T("%s %d"), name.c_str(), i++); } ImportLegacySites(pImportFolder, pFolder); } for (TiXmlElement* pImportSite = pSitesToImport->FirstChildElement("Site"); pImportSite; pImportSite = pImportSite->NextSiblingElement("Site")) { wxString name = GetTextAttribute(pImportSite, "Name"); if (name == _T("")) continue; wxString host = GetTextAttribute(pImportSite, "Host"); if (host == _T("")) continue; int port = GetAttributeInt(pImportSite, "Port"); if (port < 1 || port > 65535) continue; int serverType = GetAttributeInt(pImportSite, "ServerType"); if (serverType < 0 || serverType > 4) continue; int protocol; switch (serverType) { default: case 0: protocol = 0; break; case 1: protocol = 3; break; case 2: case 4: protocol = 4; break; case 3: protocol = 1; break; } bool dontSavePass = GetAttributeInt(pImportSite, "DontSavePass") == 1; int logontype = GetAttributeInt(pImportSite, "Logontype"); if (logontype < 0 || logontype > 2) continue; if (logontype == 2) logontype = 4; if (logontype == 1 && dontSavePass) logontype = 2; wxString user = GetTextAttribute(pImportSite, "User"); wxString pass = DecodeLegacyPassword(GetTextAttribute(pImportSite, "Pass")); wxString account = GetTextAttribute(pImportSite, "Account"); if (logontype && user == _T("")) continue; // Find free name wxString newName = name; int i = 2; while (HasEntryWithName(pExistingSites, newName)) { newName = wxString::Format(_T("%s %d"), name.c_str(), i++); } TiXmlElement* pServer = pExistingSites->LinkEndChild(new TiXmlElement("Server"))->ToElement(); AddTextElement(pServer, newName); AddTextElement(pServer, "Host", host); AddTextElement(pServer, "Port", port); AddTextElement(pServer, "Protocol", protocol); AddTextElement(pServer, "Logontype", logontype); AddTextElement(pServer, "User", user); AddTextElement(pServer, "Pass", pass); AddTextElement(pServer, "Account", account); } return true; }
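The legacy-site import accepts an integer attribute only when it falls in a known range (port 1-65535, server type 0-4, logon type 0-2) and skips the whole entry otherwise. A small sketch of that validation step, assuming a hypothetical ReadIntAttribute() in place of GetAttributeInt() on a TinyXML element.

#include <cstdio>
#include <optional>
#include <string>

// Hypothetical reader: in the snippet above this role is played by
// GetAttributeInt() on the imported site's XML element.
static int ReadIntAttribute(const std::string & /*name*/)
{
    return 21;   // pretend this is the stored value
}

// Sketch of the range checks used when importing legacy sites: a value
// outside the accepted range causes the whole entry to be skipped.
static std::optional<int> ReadBoundedInt(const std::string &name, int lo, int hi)
{
    int value = ReadIntAttribute(name);
    if (value < lo || value > hi)
        return std::nullopt;
    return value;
}

int main()
{
    auto port = ReadBoundedInt("Port", 1, 65535);      // 21 is in range
    auto serverType = ReadBoundedInt("ServerType", 0, 4);   // 21 is not
    if (!port || !serverType) {
        std::printf("skipping malformed site entry\n");
        return 0;
    }
    std::printf("port %d, server type %d\n", *port, *serverType);
    return 0;
}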
OsisSegmentRunning::OsisSegmentRunning(QDomElement& osisElement, QString& elementName, QObject *parent) : QObject(parent) , OsisData(OsisSegmentRunning::staticMetaObject, osisElement, elementName) , SegmentID(GetAttributeInt(Segment_ID)) { }
OsisElement::OsisElement(QDomElement& osisElement, QString& elementName, QObject *parent) : QObject(parent) , OsisData(OsisElement::staticMetaObject, osisElement, elementName) , Ind(GetAttributeInt(Index)) { }
int CXmlNode::ReadAttributeInt(const std::wstring& str, const int& _default) { return GetAttributeInt(str, _default); }
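ReadAttributeInt() is a thin forwarding wrapper; its value is that callers can state their fallback inline instead of checking for a missing attribute themselves. A hypothetical usage sketch follows; ConfigNode is an illustrative stand-in for CXmlNode, not the real class.

#include <cstdio>
#include <map>
#include <string>

// Hypothetical config node mirroring the wrapper above: the reader falls
// back to the caller-supplied default when the attribute is missing.
class ConfigNode {
public:
    int ReadAttributeInt(const std::wstring &name, int def) const
    {
        auto it = values_.find(name);
        return it == values_.end() ? def : it->second;
    }
    void Set(const std::wstring &name, int value) { values_[name] = value; }

private:
    std::map<std::wstring, int> values_;
};

int main()
{
    ConfigNode node;
    node.Set(L"timeout", 30);

    // Present attribute: stored value wins; missing attribute: default wins.
    std::printf("timeout=%d retries=%d\n",
                node.ReadAttributeInt(L"timeout", 10),
                node.ReadAttributeInt(L"retries", 3));
    return 0;
}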