LoadResult DiagLoader::readLocation(CXLoadedDiagnosticSetImpl &TopDiags,
                                    RecordData &Record, unsigned &offset,
                                    CXLoadedDiagnostic::Location &Loc) {
  // A serialized location occupies four slots: file ID, line, column, offset.
  if (Record.size() < offset + 4) {
    reportInvalidFile("Corrupted source location");
    return Failure;
  }

  unsigned fileID = Record[offset++];
  if (fileID == 0) {
    // Sentinel value.
    Loc.file = 0;
    Loc.line = 0;
    Loc.column = 0;
    Loc.offset = 0;
    return Success;
  }

  const FileEntry *FE = TopDiags.Files[fileID];
  if (!FE) {
    reportInvalidFile("Corrupted file entry in source location");
    return Failure;
  }
  Loc.file = (void *)FE;
  Loc.line = Record[offset++];
  Loc.column = Record[offset++];
  Loc.offset = Record[offset++];
  return Success;
}
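// readLocation is a cursor-style decoder: 'offset' is passed by reference and
// advanced past each field consumed, so successive calls pull consecutive
// values out of one record (readDiagnosticBlock below relies on this to read
// a location followed by category and flag fields). A self-contained sketch
// of the same pattern, with illustrative types rather than the libclang ones:
#include <cstdint>
#include <vector>

struct SimpleLoc { uint64_t file, line, column, offset; };

// Returns false on a truncated record; advances 'pos' past what it reads.
bool decodeLoc(const std::vector<uint64_t> &Rec, unsigned &pos,
               SimpleLoc &Out) {
  if (Rec.size() < pos + 4) // file, line, column, offset
    return false;
  Out.file = Rec[pos++];
  if (Out.file == 0) { // sentinel: "no location"
    Out.line = Out.column = Out.offset = 0;
    return true;
  }
  Out.line = Rec[pos++];
  Out.column = Rec[pos++];
  Out.offset = Rec[pos++];
  return true;
}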
unsigned SDiagsWriter::getEmitCategory(unsigned int category) {
  if (Categories.count(category))
    return category;

  Categories.insert(category);

  // We use a local version of 'Record' so that we can be generating
  // another record when we lazily generate one for the category entry.
  RecordData Record;
  Record.push_back(RECORD_CATEGORY);
  Record.push_back(category);
  StringRef catName = DiagnosticIDs::getCategoryNameFromID(category);
  Record.push_back(catName.size());
  Stream.EmitRecordWithBlob(Abbrevs.get(RECORD_CATEGORY), Record, catName);

  return category;
}
void Database::replace(const RecordID& id, const RecordData& record) {
  Dbt key(id.data(), id.size());

  // Keep the serialized string alive for the duration of the put().
  const std::string str = record.data();
  Dbt data(const_cast<char*>(str.c_str()), str.size());

  dbMain_.put(nullptr, &key, &data, /*flags*/ 0);
}
// Insert multiple records and verify their contents by calling dataFor()
// on each of the returned RecordIds.
TEST(RecordStoreTestHarness, DataForMultiple) {
    unique_ptr<HarnessHelper> harnessHelper(newHarnessHelper());
    unique_ptr<RecordStore> rs(harnessHelper->newNonCappedRecordStore());

    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        ASSERT_EQUALS(0, rs->numRecords(opCtx.get()));
    }

    const int nToInsert = 10;
    RecordId locs[nToInsert];
    for (int i = 0; i < nToInsert; i++) {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        {
            stringstream ss;
            ss << "record----" << i;
            string data = ss.str();

            WriteUnitOfWork uow(opCtx.get());
            StatusWith<RecordId> res =
                rs->insertRecord(opCtx.get(), data.c_str(), data.size() + 1, false);
            ASSERT_OK(res.getStatus());
            locs[i] = res.getValue();
            uow.commit();
        }
    }

    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        ASSERT_EQUALS(nToInsert, rs->numRecords(opCtx.get()));
    }

    for (int i = 0; i < nToInsert; i++) {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        {
            stringstream ss;
            ss << "record----" << i;
            string data = ss.str();

            RecordData record = rs->dataFor(opCtx.get(), locs[i]);
            ASSERT_EQUALS(data.size() + 1, static_cast<size_t>(record.size()));
            ASSERT_EQUALS(data, record.data());
        }
    }
}
void CRecordManager::AddFileChildRecord(string& strFileName, vector<string>& vecRef)
{
    if (!m_mapNewRecord.count(strFileName))
        m_mapNewRecord[strFileName] = new RecordData();
    RecordData* pdata = m_mapNewRecord[strFileName];

    FILETIME time;
    GetFileAmendTime(strFileName, &time);

    if (time.dwLowDateTime == pdata->ftLastWriteTime.dwLowDateTime &&
        time.dwHighDateTime == pdata->ftLastWriteTime.dwHighDateTime)
    {
        // The file is unchanged since we last saw it: append the new
        // references to the existing child list.
        pdata->AddFile_toChildMuster(vecRef);
    }
    else
    {
        // The file was modified: remember the new write time and rebuild
        // the child list from scratch.
        pdata->ftLastWriteTime = time;
        pdata->SetChildFileMuster(vecRef);
    }
}
TEST( RocksRecordStoreTest, Snapshots1 ) {
    unittest::TempDir td( _rocksRecordStoreTestDir );
    scoped_ptr<rocksdb::DB> db( getDB( td.path() ) );

    DiskLoc loc;
    int size = -1;

    {
        RocksRecordStore rs( "foo.bar", db.get(), db->DefaultColumnFamily(),
                             db->DefaultColumnFamily() );
        string s = "test string";
        size = s.length() + 1;

        MyOperationContext opCtx( db.get() );
        {
            WriteUnitOfWork uow( opCtx.recoveryUnit() );
            StatusWith<DiskLoc> res = rs.insertRecord( &opCtx, s.c_str(), s.size() + 1, -1 );
            ASSERT_OK( res.getStatus() );
            loc = res.getValue();
        }
    }

    {
        MyOperationContext opCtx( db.get() );
        MyOperationContext opCtx2( db.get() );

        RocksRecordStore rs( "foo.bar", db.get(), db->DefaultColumnFamily(),
                             db->DefaultColumnFamily() );

        rs.deleteRecord( &opCtx, loc );

        RecordData recData = rs.dataFor( loc/*, &opCtx */ );
        ASSERT( !recData.data() && recData.size() == 0 );

        // XXX this test doesn't yet work, but there should be some notion of snapshots,
        // and the op context that doesn't see the deletion shouldn't know that this data
        // has been deleted
        RecordData recData2 = rs.dataFor( loc/*, &opCtx2 */ );
        ASSERT( recData2.data() && recData2.size() == size );
    }
}
CollectionOptions MMAPV1DatabaseCatalogEntry::getCollectionOptions(OperationContext* txn,
                                                                   RecordId rid) const {
    CollectionOptions options;

    if (rid.isNull()) {
        return options;
    }

    RecordStoreV1Base* rs = _getNamespaceRecordStore();
    invariant(rs);

    RecordData data;
    invariant(rs->findRecord(txn, rid, &data));

    // releaseToBson() hands ownership of the buffer to the returned BSONObj,
    // so call it once and reuse the result rather than releasing twice.
    BSONObj entry = data.releaseToBson();
    if (entry["options"].isABSONObj()) {
        Status status = options.parse(entry["options"].Obj());
        fassert(18523, status);
    }

    return options;
}
void KVCatalog::init(OperationContext* opCtx) {
    // No locking needed since called single threaded.
    scoped_ptr<RecordIterator> it(_rs->getIterator(opCtx));
    while (!it->isEOF()) {
        RecordId loc = it->getNext();
        RecordData data = it->dataFor(loc);
        BSONObj obj(data.data());

        // No locking needed since can only be called from one thread.
        // No rollback since this is just loading already committed data.
        string ns = obj["ns"].String();
        string ident = obj["ident"].String();
        _idents[ns] = Entry(ident, loc);
    }

    // In the unlikely event that we have used this _rand before, generate a new one.
    while (_hasEntryCollidingWithRand()) {
        _rand = _newRand();
    }
}
Status Collection::aboutToDeleteCapped(OperationContext* txn,
                                       const RecordId& loc,
                                       RecordData data) {
    /* check if any cursors point to us.  if so, advance them. */
    _cursorManager.invalidateDocument(txn, loc, INVALIDATION_DELETION);

    BSONObj doc = data.releaseToBson();
    _indexCatalog.unindexRecord(txn, doc, loc, false);

    return Status::OK();
}
unsigned SDiagsWriter::getEmitFile(const char *FileName) {
  if (!FileName)
    return 0;

  unsigned &entry = Files[FileName];
  if (entry)
    return entry;

  // Lazily generate the record for the file.
  entry = Files.size();
  RecordData Record;
  Record.push_back(RECORD_FILENAME);
  Record.push_back(entry);
  Record.push_back(0); // For legacy.
  Record.push_back(0); // For legacy.
  StringRef Name(FileName);
  Record.push_back(Name.size());
  Stream.EmitRecordWithBlob(Abbrevs.get(RECORD_FILENAME), Record, Name);

  return entry;
}
unsigned SDiagsWriter::getEmitFile(SourceLocation Loc) {
  SourceManager &SM = Diags.getSourceManager();
  assert(Loc.isValid());
  const std::pair<FileID, unsigned> &LocInfo = SM.getDecomposedLoc(Loc);
  const FileEntry *FE = SM.getFileEntryForID(LocInfo.first);
  if (!FE)
    return 0;

  unsigned &entry = Files[FE];
  if (entry)
    return entry;

  // Lazily generate the record for the file.
  entry = Files.size();
  RecordData Record;
  Record.push_back(RECORD_FILENAME);
  Record.push_back(entry);
  Record.push_back(FE->getSize());
  Record.push_back(FE->getModificationTime());
  StringRef Name = FE->getName();
  Record.push_back(Name.size());
  Stream.EmitRecordWithBlob(Abbrevs.get(RECORD_FILENAME), Record, Name);

  return entry;
}
unsigned SerializedDiagnosticConsumer::getEmitFile(StringRef Filename) {
  // NOTE: Using Filename.data() here relies on SourceMgr using
  // const char* as buffer identifiers.  This is fast, but may
  // be brittle.  We can always switch over to using a StringMap.
  unsigned &entry = State->Files[Filename.data()];
  if (entry)
    return entry;

  // Lazily generate the record for the file.  Note that in
  // practice we only expect there to be one file, but this is
  // general and is what the diagnostic file expects.
  entry = State->Files.size();
  RecordData Record;
  Record.push_back(RECORD_FILENAME);
  Record.push_back(entry);
  Record.push_back(0); // For legacy.
  Record.push_back(0); // For legacy.
  Record.push_back(Filename.size());
  State->Stream.EmitRecordWithBlob(State->Abbrevs.get(RECORD_FILENAME), Record,
                                   Filename.data());
  return entry;
}
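// The three getEmitFile variants above share one lazy-ID idiom: operator[]
// default-constructs the map entry to 0 (the "not yet emitted" sentinel), and
// taking the map's size *after* that insertion yields a stable 1-based ID, so
// the RECORD_FILENAME record is emitted exactly once per file. A minimal
// self-contained sketch of the idiom (hypothetical names, not from the
// original sources):
#include <map>
#include <string>

unsigned getOrAssignId(std::map<std::string, unsigned> &Ids,
                       const std::string &Key) {
  unsigned &entry = Ids[Key]; // first lookup inserts the value 0
  if (entry)
    return entry;             // record already emitted for Key
  entry = Ids.size();         // map already contains Key, so IDs start at 1
  // ... emit the record describing Key here, exactly once ...
  return entry;
}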
BSONObj KVCatalog::_findEntry(OperationContext* opCtx, StringData ns, RecordId* out) const {
    RecordId dl;
    {
        stdx::lock_guard<stdx::mutex> lk(_identsLock);
        NSToIdentMap::const_iterator it = _idents.find(ns.toString());
        invariant(it != _idents.end());
        dl = it->second.storedLoc;
    }

    LOG(3) << "looking up metadata for: " << ns << " @ " << dl;
    RecordData data;
    if (!_rs->findRecord(opCtx, dl, &data)) {
        // Since the in-memory metadata isn't managed with MVCC, it's possible
        // for different transactions to see slightly different things, which
        // is OK via the locking above.
        return BSONObj();
    }

    if (out)
        *out = dl;

    return data.releaseToBson().getOwned();
}
// Insert a record and verify its contents by calling dataFor()
// on the returned RecordId.
TEST(RecordStoreTestHarness, DataFor) {
    unique_ptr<HarnessHelper> harnessHelper(newHarnessHelper());
    unique_ptr<RecordStore> rs(harnessHelper->newNonCappedRecordStore());

    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        ASSERT_EQUALS(0, rs->numRecords(opCtx.get()));
    }

    string data = "record-";
    RecordId loc;
    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        {
            WriteUnitOfWork uow(opCtx.get());
            StatusWith<RecordId> res =
                rs->insertRecord(opCtx.get(), data.c_str(), data.size() + 1, false);
            ASSERT_OK(res.getStatus());
            loc = res.getValue();
            uow.commit();
        }
    }

    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        ASSERT_EQUALS(1, rs->numRecords(opCtx.get()));
    }

    {
        unique_ptr<OperationContext> opCtx(harnessHelper->newOperationContext());
        {
            RecordData record = rs->dataFor(opCtx.get(), loc);
            ASSERT_EQUALS(data.size() + 1, static_cast<size_t>(record.size()));
            ASSERT_EQUALS(data, record.data());
        }
    }
}
std::vector<std::string> KVCatalog::getAllIdents(OperationContext* opCtx) const {
    std::vector<std::string> v;

    scoped_ptr<RecordIterator> it(_rs->getIterator(opCtx));
    while (!it->isEOF()) {
        RecordId loc = it->getNext();
        RecordData data = it->dataFor(loc);
        BSONObj obj(data.data());
        v.push_back(obj["ident"].String());

        BSONElement e = obj["idxIdent"];
        if (!e.isABSONObj())
            continue;
        BSONObj idxIdent = e.Obj();

        BSONObjIterator sub(idxIdent);
        while (sub.more()) {
            BSONElement e = sub.next();
            v.push_back(e.String());
        }
    }

    return v;
}
RecordID Database::add(const RecordData& record) {
  RecordID newId;

  // DB_APPEND has Berkeley DB assign the next record number and write it
  // back into the caller-supplied (DB_DBT_USERMEM) key buffer.
  Dbt key(newId.data(), 0);
  key.set_flags(DB_DBT_USERMEM);
  key.set_ulen(RecordID::size());

  // Keep the serialized string alive for the duration of the put().
  const std::string str = record.data();
  Dbt data(const_cast<char*>(str.c_str()), str.size());

  const int err = dbMain_.put(nullptr, &key, &data, DB_APPEND);
  assert(err == 0);
  assert(key.get_size() == RecordID::size());

  return newId;
}
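// A minimal round-trip sketch for the two Database methods above. The
// RecordData(std::string) constructor used here is an assumption about the
// surrounding interface, not taken from the original sources:
void roundTripExample(Database& db) {
  RecordData rec("first payload");      // hypothetical string constructor
  const RecordID id = db.add(rec);      // Berkeley DB assigns the recno key
  db.replace(id, RecordData("second")); // overwrite the same key in place
}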
void SDiagsRenderer::emitNote(SourceLocation Loc, StringRef Message) {
  Writer.Stream.EnterSubblock(BLOCK_DIAG, 4);
  RecordData Record;
  Record.push_back(RECORD_DIAG);
  Record.push_back(DiagnosticsEngine::Note);
  Writer.AddLocToRecord(Loc, Record, SM);
  Record.push_back(Writer.getEmitCategory());
  Record.push_back(Writer.getEmitDiagnosticFlag(DiagnosticsEngine::Note));
  Record.push_back(Message.size());
  Writer.Stream.EmitRecordWithBlob(Writer.Abbrevs.get(RECORD_DIAG), Record,
                                   Message);
  Writer.Stream.ExitBlock();
}
// Insert a record and try to perform an in-place update on it.
TEST( RecordStoreTestHarness, UpdateWithDamages ) {
    scoped_ptr<HarnessHelper> harnessHelper( newHarnessHelper() );
    scoped_ptr<RecordStore> rs( harnessHelper->newNonCappedRecordStore() );

    if (!rs->updateWithDamagesSupported())
        return;

    {
        scoped_ptr<OperationContext> opCtx( harnessHelper->newOperationContext() );
        ASSERT_EQUALS( 0, rs->numRecords( opCtx.get() ) );
    }

    string data = "00010111";
    RecordId loc;
    const RecordData rec(data.c_str(), data.size() + 1);
    {
        scoped_ptr<OperationContext> opCtx( harnessHelper->newOperationContext() );
        {
            WriteUnitOfWork uow( opCtx.get() );
            StatusWith<RecordId> res = rs->insertRecord( opCtx.get(),
                                                         rec.data(),
                                                         rec.size(),
                                                         false );
            ASSERT_OK( res.getStatus() );
            loc = res.getValue();
            uow.commit();
        }
    }

    {
        scoped_ptr<OperationContext> opCtx( harnessHelper->newOperationContext() );
        ASSERT_EQUALS( 1, rs->numRecords( opCtx.get() ) );
    }

    {
        scoped_ptr<OperationContext> opCtx( harnessHelper->newOperationContext() );
        {
            // Each damage event copies 'size' bytes from the damage source
            // ("00010111") at 'sourceOffset' into the record at 'targetOffset':
            //   bytes 5..6 ("11")  -> offsets 0..1
            //   bytes 3..5 ("101") -> offsets 2..4
            //   bytes 0..2 ("000") -> offsets 5..7
            // which rewrites "00010111" into "11101000".
            mutablebson::DamageVector dv( 3 );
            dv[0].sourceOffset = 5;
            dv[0].targetOffset = 0;
            dv[0].size = 2;
            dv[1].sourceOffset = 3;
            dv[1].targetOffset = 2;
            dv[1].size = 3;
            dv[2].sourceOffset = 0;
            dv[2].targetOffset = 5;
            dv[2].size = 3;

            WriteUnitOfWork uow( opCtx.get() );
            ASSERT_OK( rs->updateWithDamages( opCtx.get(), loc, rec, data.c_str(), dv ) );
            uow.commit();
        }
    }

    data = "11101000";
    {
        scoped_ptr<OperationContext> opCtx( harnessHelper->newOperationContext() );
        {
            RecordData record = rs->dataFor( opCtx.get(), loc );
            ASSERT_EQUALS( data, record.data() );
        }
    }
}
LoadResult DiagLoader::readDiagnosticBlock(llvm::BitstreamCursor &Stream,
                                           CXDiagnosticSetImpl &Diags,
                                           CXLoadedDiagnosticSetImpl &TopDiags) {
  if (Stream.EnterSubBlock(clang::serialized_diags::BLOCK_DIAG)) {
    reportInvalidFile("malformed diagnostic block");
    return Failure;
  }

  OwningPtr<CXLoadedDiagnostic> D(new CXLoadedDiagnostic());
  RecordData Record;

  while (true) {
    unsigned blockOrCode = 0;
    StreamResult Res = readToNextRecordOrBlock(Stream, "Diagnostic Block",
                                               blockOrCode);
    switch (Res) {
    case Read_EndOfStream:
      llvm_unreachable("EndOfStream handled in readToNextRecordOrBlock");
    case Read_Failure:
      return Failure;
    case Read_BlockBegin: {
      // The only blocks we care about are subdiagnostics.
      if (blockOrCode != serialized_diags::BLOCK_DIAG) {
        if (!Stream.SkipBlock()) {
          reportInvalidFile("Invalid subblock in Diagnostics block");
          return Failure;
        }
      } else if (readDiagnosticBlock(Stream, D->getChildDiagnostics(),
                                     TopDiags)) {
        return Failure;
      }
      continue;
    }
    case Read_BlockEnd:
      Diags.appendDiagnostic(D.take());
      return Success;
    case Read_Record:
      break;
    }

    // Read the record.
    Record.clear();
    StringRef Blob;
    unsigned recID = Stream.readRecord(blockOrCode, Record, &Blob);

    if (recID < serialized_diags::RECORD_FIRST ||
        recID > serialized_diags::RECORD_LAST)
      continue;

    switch ((serialized_diags::RecordIDs)recID) {
    case serialized_diags::RECORD_VERSION:
      continue;
    case serialized_diags::RECORD_CATEGORY:
      if (readString(TopDiags, TopDiags.Categories, "category", Record, Blob,
                     /* allowEmptyString */ true))
        return Failure;
      continue;
    case serialized_diags::RECORD_DIAG_FLAG:
      if (readString(TopDiags, TopDiags.WarningFlags, "warning flag", Record,
                     Blob))
        return Failure;
      continue;
    case serialized_diags::RECORD_FILENAME: {
      if (readString(TopDiags, TopDiags.FileNames, "filename", Record, Blob))
        return Failure;
      if (Record.size() < 3) {
        reportInvalidFile("Invalid file entry");
        return Failure;
      }
      const FileEntry *FE =
          TopDiags.FakeFiles.getVirtualFile(TopDiags.FileNames[Record[0]],
                                            /* size */ Record[1],
                                            /* time */ Record[2]);
      TopDiags.Files[Record[0]] = FE;
      continue;
    }
    case serialized_diags::RECORD_SOURCE_RANGE: {
      CXSourceRange SR;
      if (readRange(TopDiags, Record, 0, SR))
        return Failure;
      D->Ranges.push_back(SR);
      continue;
    }
    case serialized_diags::RECORD_FIXIT: {
      CXSourceRange SR;
      if (readRange(TopDiags, Record, 0, SR))
        return Failure;
      llvm::StringRef RetStr;
      if (readString(TopDiags, RetStr, "FIXIT", Record, Blob,
                     /* allowEmptyString */ true))
        return Failure;
      D->FixIts.push_back(std::make_pair(SR, createCXString(RetStr, false)));
      continue;
    }
    case serialized_diags::RECORD_DIAG: {
      D->severity = Record[0];
      unsigned offset = 1;
      if (readLocation(TopDiags, Record, offset, D->DiagLoc))
        return Failure;
      D->category = Record[offset++];
      unsigned diagFlag = Record[offset++];
      D->DiagOption = diagFlag ? TopDiags.WarningFlags[diagFlag] : "";
      D->CategoryText = D->category ? TopDiags.Categories[D->category] : "";
      D->Spelling = TopDiags.makeString(Blob);
      continue;
    }
    }
  }
}
/// \brief Read the declaration at the given offset from the PCH file.
Decl *PCHReader::ReadDeclRecord(uint64_t Offset, unsigned Index) {
  // Keep track of where we are in the stream, then jump back there
  // after reading this declaration.
  SavedStreamPosition SavedPosition(DeclsCursor);

  // Note that we are loading a declaration record.
  LoadingTypeOrDecl Loading(*this);

  DeclsCursor.JumpToBit(Offset);
  RecordData Record;
  unsigned Code = DeclsCursor.ReadCode();
  unsigned Idx = 0;
  PCHDeclReader Reader(*this, Record, Idx);

  Decl *D = 0;
  switch ((pch::DeclCode)DeclsCursor.ReadRecord(Code, Record)) {
  case pch::DECL_ATTR:
  case pch::DECL_CONTEXT_LEXICAL:
  case pch::DECL_CONTEXT_VISIBLE:
    assert(false && "Record cannot be de-serialized with ReadDeclRecord");
    break;
  case pch::DECL_TRANSLATION_UNIT:
    assert(Index == 0 && "Translation unit must be at index 0");
    D = Context->getTranslationUnitDecl();
    break;
  case pch::DECL_TYPEDEF:
    D = TypedefDecl::Create(*Context, 0, SourceLocation(), 0, 0);
    break;
  case pch::DECL_ENUM:
    D = EnumDecl::Create(*Context, 0, SourceLocation(), 0, SourceLocation(), 0);
    break;
  case pch::DECL_RECORD:
    D = RecordDecl::Create(*Context, TagDecl::TK_struct, 0, SourceLocation(),
                           0, SourceLocation(), 0);
    break;
  case pch::DECL_ENUM_CONSTANT:
    D = EnumConstantDecl::Create(*Context, 0, SourceLocation(), 0, QualType(),
                                 0, llvm::APSInt());
    break;
  case pch::DECL_FUNCTION:
    D = FunctionDecl::Create(*Context, 0, SourceLocation(), DeclarationName(),
                             QualType(), 0);
    break;
  case pch::DECL_OBJC_METHOD:
    D = ObjCMethodDecl::Create(*Context, SourceLocation(), SourceLocation(),
                               Selector(), QualType(), 0, 0);
    break;
  case pch::DECL_OBJC_INTERFACE:
    D = ObjCInterfaceDecl::Create(*Context, 0, SourceLocation(), 0);
    break;
  case pch::DECL_OBJC_IVAR:
    D = ObjCIvarDecl::Create(*Context, 0, SourceLocation(), 0, QualType(), 0,
                             ObjCIvarDecl::None);
    break;
  case pch::DECL_OBJC_PROTOCOL:
    D = ObjCProtocolDecl::Create(*Context, 0, SourceLocation(), 0);
    break;
  case pch::DECL_OBJC_AT_DEFS_FIELD:
    D = ObjCAtDefsFieldDecl::Create(*Context, 0, SourceLocation(), 0,
                                    QualType(), 0);
    break;
  case pch::DECL_OBJC_CLASS:
    D = ObjCClassDecl::Create(*Context, 0, SourceLocation());
    break;
  case pch::DECL_OBJC_FORWARD_PROTOCOL:
    D = ObjCForwardProtocolDecl::Create(*Context, 0, SourceLocation());
    break;
  case pch::DECL_OBJC_CATEGORY:
    D = ObjCCategoryDecl::Create(*Context, 0, SourceLocation(),
                                 SourceLocation(), SourceLocation(), 0);
    break;
  case pch::DECL_OBJC_CATEGORY_IMPL:
    D = ObjCCategoryImplDecl::Create(*Context, 0, SourceLocation(), 0, 0);
    break;
  case pch::DECL_OBJC_IMPLEMENTATION:
    D = ObjCImplementationDecl::Create(*Context, 0, SourceLocation(), 0, 0);
    break;
  case pch::DECL_OBJC_COMPATIBLE_ALIAS:
    D = ObjCCompatibleAliasDecl::Create(*Context, 0, SourceLocation(), 0, 0);
    break;
  case pch::DECL_OBJC_PROPERTY:
    D = ObjCPropertyDecl::Create(*Context, 0, SourceLocation(), 0,
                                 SourceLocation(), QualType());
    break;
  case pch::DECL_OBJC_PROPERTY_IMPL:
    D = ObjCPropertyImplDecl::Create(*Context, 0, SourceLocation(),
                                     SourceLocation(), 0,
                                     ObjCPropertyImplDecl::Dynamic, 0);
    break;
  case pch::DECL_FIELD:
    D = FieldDecl::Create(*Context, 0, SourceLocation(), 0, QualType(), 0, 0,
                          false);
    break;
  case pch::DECL_VAR:
    D = VarDecl::Create(*Context, 0, SourceLocation(), 0, QualType(), 0,
                        VarDecl::None);
    break;
  case pch::DECL_IMPLICIT_PARAM:
    D = ImplicitParamDecl::Create(*Context, 0, SourceLocation(), 0, QualType());
    break;
  case pch::DECL_PARM_VAR:
    D = ParmVarDecl::Create(*Context, 0, SourceLocation(), 0, QualType(), 0,
                            VarDecl::None, 0);
    break;
  case pch::DECL_FILE_SCOPE_ASM:
    D = FileScopeAsmDecl::Create(*Context, 0, SourceLocation(), 0);
    break;
  case pch::DECL_BLOCK:
    D = BlockDecl::Create(*Context, 0, SourceLocation());
    break;
  case pch::DECL_NAMESPACE:
    D = NamespaceDecl::Create(*Context, 0, SourceLocation(), 0);
    break;
  }

  assert(D && "Unknown declaration reading PCH file");
  LoadedDecl(Index, D);
  Reader.Visit(D);

  // If this declaration is also a declaration context, get the
  // offsets for its tables of lexical and visible declarations.
  if (DeclContext *DC = dyn_cast<DeclContext>(D)) {
    std::pair<uint64_t, uint64_t> Offsets = Reader.VisitDeclContext(DC);
    if (Offsets.first || Offsets.second) {
      DC->setHasExternalLexicalStorage(Offsets.first != 0);
      DC->setHasExternalVisibleStorage(Offsets.second != 0);
      DeclContextOffsets[DC] = Offsets;
    }
  }
  assert(Idx == Record.size());

  // If we have deserialized a declaration that has a definition the
  // AST consumer might need to know about, notify the consumer
  // about that definition now or queue it for later.
  if (isConsumerInterestedIn(D)) {
    if (Consumer) {
      DeclGroupRef DG(D);
      Consumer->HandleTopLevelDecl(DG);
    } else {
      InterestingDecls.push_back(D);
    }
  }

  return D;
}
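// SavedStreamPosition, used at the top of ReadDeclRecord, is an RAII guard:
// it records the cursor's bit offset on construction and jumps back to it on
// destruction, so the JumpToBit(Offset) above never leaves the cursor moved
// after the function returns. A sketch of the guard, roughly as it appears in
// PCH readers of this era (treat the exact details as an assumption):
#include "llvm/Bitcode/BitstreamReader.h"

struct SavedStreamPosition {
  explicit SavedStreamPosition(llvm::BitstreamCursor &Cursor)
      : Cursor(Cursor), Offset(Cursor.GetCurrentBitNo()) {}
  ~SavedStreamPosition() { Cursor.JumpToBit(Offset); }

private:
  llvm::BitstreamCursor &Cursor;
  uint64_t Offset;
};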
Status MultiIndexBlock::init(const std::vector<BSONObj>& indexSpecs) {
    WriteUnitOfWork wunit(_txn);

    invariant(_indexes.empty());
    _txn->recoveryUnit()->registerChange(new CleanupIndexesVectorOnRollback(this));

    const string& ns = _collection->ns().ns();

    Status status = _collection->getIndexCatalog()->checkUnfinished();
    if ( !status.isOK() )
        return status;

    for ( size_t i = 0; i < indexSpecs.size(); i++ ) {
        BSONObj info = indexSpecs[i];
        string pluginName = IndexNames::findPluginName( info["key"].Obj() );

        // CUSTOM: experimental in-memory R-tree built from the "loc" field of
        // every document in the collection. You should be able to make the
        // index here.
        if (pluginName == "test") {
            RecordIterator* ri = _collection->getIterator(_txn);
            std::vector<Entry*> initialEntries;
            while (!ri->isEOF()) {
                RecordData recordData = ri->dataFor(ri->curr());

                std::vector<double> lower;
                std::vector<double> upper;
                bool foundOK = false;

                // A point: use (lng, lat) as a degenerate bounding box.
                if (recordData.toBson().getFieldDotted("loc")["lng"].ok()) {
                    log() << "ELE IS A POINT: "
                          << recordData.toBson().getFieldDotted("loc")["lng"].Double()
                          << " AND "
                          << recordData.toBson().getFieldDotted("loc")["lat"].Double();
                    lower.push_back(recordData.toBson().getFieldDotted("loc")["lng"].Double());
                    lower.push_back(recordData.toBson().getFieldDotted("loc")["lat"].Double());
                    upper.push_back(recordData.toBson().getFieldDotted("loc")["lng"].Double());
                    upper.push_back(recordData.toBson().getFieldDotted("loc")["lat"].Double());
                    foundOK = true;
                }

                // A GeoJSON polygon: take opposite corners of the outer ring
                // as the bounding box.
                if (recordData.toBson().getFieldDotted("loc")["type"].ok()) {
                    if (recordData.toBson().getFieldDotted("loc")["type"].String() == "Polygon") {
                        lower.push_back(recordData.toBson().getFieldDotted("loc")["coordinates"]
                                            .Array().at(0).Array().at(0).Array().at(0).Double());
                        lower.push_back(recordData.toBson().getFieldDotted("loc")["coordinates"]
                                            .Array().at(0).Array().at(0).Array().at(1).Double());
                        upper.push_back(recordData.toBson().getFieldDotted("loc")["coordinates"]
                                            .Array().at(0).Array().at(2).Array().at(0).Double());
                        upper.push_back(recordData.toBson().getFieldDotted("loc")["coordinates"]
                                            .Array().at(0).Array().at(2).Array().at(1).Double());
                        foundOK = true;
                    }
                }

                if (foundOK) {
                    std::unordered_map<int, std::string> newDoc;
                    BoundingBox I = BoundingBox(lower, upper);
                    Entry* myEnt = new Entry(I, newDoc);
                    initialEntries.push_back(myEnt);
                }
                ri->getNext();
            }

            int dimensions = 2;
            int max = 6;
            int min = 3;
            log() << "RTree creation has begun!";

            // Create a new Node; this will be the root node.
            std::vector<Entry*> newV;
            Node* R = new Node(dimensions, newV, max, min, true);
            // Create a new RTree and insert the entries we created into it.
            RTree myIndex = RTree(dimensions, R, max, min);
            for (size_t i = 0; i < initialEntries.size(); i++) {
                myIndex.insert(initialEntries.at(i));
            }
            log() << "inserted all initial entries!";
            myIndex.theTree = &myIndex;

            // Search a fixed bounding box as a smoke test; just leave this
            // for now.
            double rand1 = -100;
            double rand2 = -100;
            double rand3 = 100;
            double rand4 = 100;
            std::vector<double> lowerBB;
            lowerBB.push_back(rand1);
            lowerBB.push_back(rand2);
            std::vector<double> upperBB;
            upperBB.push_back(rand3);
            upperBB.push_back(rand4);
            // This is the bounding box we will be searching for.
            BoundingBox* IBB = new BoundingBox(lowerBB, upperBB);
            std::vector<Entry*> overlapping = myIndex.search(IBB);

            return Status::OK();
        } // CUSTOM

        if ( pluginName.size() ) {
            Status s = _collection->getIndexCatalog()
                           ->_upgradeDatabaseMinorVersionIfNeeded(_txn, pluginName);
            if ( !s.isOK() )
                return s;
        }

        // Any foreground indexes make all indexes be built in the foreground.
        _buildInBackground = (_buildInBackground && info["background"].trueValue());
    }

    for ( size_t i = 0; i < indexSpecs.size(); i++ ) {
        BSONObj info = indexSpecs[i];
        StatusWith<BSONObj> statusWithInfo =
            _collection->getIndexCatalog()->prepareSpecForCreate( _txn, info );
        Status status = statusWithInfo.getStatus();
        if ( !status.isOK() )
            return status;
        info = statusWithInfo.getValue();

        IndexToBuild index;
        index.block = boost::make_shared<IndexCatalog::IndexBuildBlock>(_txn,
                                                                        _collection,
                                                                        info);
        status = index.block->init();
        if ( !status.isOK() )
            return status;

        index.real = index.block->getEntry()->accessMethod();
        status = index.real->initializeAsEmpty(_txn);
        if ( !status.isOK() )
            return status;

        if (!_buildInBackground) {
            // Bulk build process requires foreground building as it assumes nothing is
            // changing under it.
            index.bulk.reset(index.real->initiateBulk(_txn));
        }

        const IndexDescriptor* descriptor = index.block->getEntry()->descriptor();

        index.options.logIfError = false; // logging happens elsewhere if needed.
        index.options.dupsAllowed = !descriptor->unique() || _ignoreUnique ||
            repl::getGlobalReplicationCoordinator()->shouldIgnoreUniqueIndex(descriptor);

        log() << "build index on: " << ns << " properties: " << descriptor->toString();
        if (index.bulk)
            log() << "\t building index using bulk method";

        // TODO SERVER-14888 Suppress this in cases we don't want to audit.
        audit::logCreateIndex(_txn->getClient(), &info, descriptor->indexName(), ns);

        _indexes.push_back( index );
    }

    // This is so that operations examining the list of indexes know there are more keys
    // to look at when doing things like in-place updates, etc...
    _collection->infoCache()->addedIndex(_txn);

    if (_buildInBackground)
        _backgroundOperation.reset(new BackgroundOperation(ns));

    wunit.commit();
    return Status::OK();
}
void StoreData(int CurrentId, int Type, HINSTANCE InstHndl, HWND WndHndl,
               UINT message, WPARAM wParam, LPARAM lParam)
{
    static int SelectedVarId = 0;
    static int VarId[2000];
    static int MaxIndex = 0;
    static int SelectedType = 0;
    static TCHAR SpecifiedPrefix[32];
    static int SpecifiedStartNum = 0;
    static TCHAR Msg1[100];
    static TCHAR Msg2[100];

    if (Type == 2) {
        lstrcpyn(Msg1, MyMsgProc::GetMsg(MyMsgProc::PROP_LOAD_VAR), 100);
        lstrcpyn(Msg2, MyMsgProc::GetMsg(MyMsgProc::PROP_LOAD_CONT), 100);
    }
    else {
        lstrcpyn(Msg1, MyMsgProc::GetMsg(MyMsgProc::PROP_STORE_VAR), 100);
        lstrcpyn(Msg2, MyMsgProc::GetMsg(MyMsgProc::PROP_STORE_CONT), 100);
    }

    RECT Rect;
    GetClientRect(WndHndl, &Rect);

    if (message == WM_CREATE) {
        StDtRdoBtn1 = CreateWindow(_T("BUTTON"), Msg1,
                                   WS_CHILD | WS_VISIBLE | BS_RADIOBUTTON,
                                   Rect.left + 10, 110, Rect.right - 20, 20,
                                   WndHndl, (HMENU)IDC_LOADSTORE_RADIO1, InstHndl, NULL);
        CreateWindow(_T("STATIC"), MyMsgProc::GetMsg(MyMsgProc::PROP_DATA_COMM),
                     WS_CHILD | WS_VISIBLE,
                     30, 142, 150, 20, WndHndl, NULL, InstHndl, NULL);
        StDtVarHndl = CreateWindowEx(WS_EX_CLIENTEDGE, _T("COMBOBOX"), _T(""),
                                     WS_CHILD | WS_VISIBLE | CBS_DROPDOWNLIST | WS_VSCROLL,
                                     200, 140, 230, 200,
                                     WndHndl, (HMENU)IDC_LOADDATA_VAR, InstHndl, NULL);
        StDtRdoBtn2 = CreateWindow(_T("BUTTON"), Msg2,
                                   WS_CHILD | WS_VISIBLE | BS_RADIOBUTTON,
                                   Rect.left + 10, 200, Rect.right - 20, 20,
                                   WndHndl, (HMENU)IDC_LOADSTORE_RADIO2, InstHndl, NULL);
        CreateWindow(_T("STATIC"), MyMsgProc::GetMsg(MyMsgProc::PROP_DATA_PREF),
                     WS_CHILD | WS_VISIBLE,
                     60, 232, 180, 20, WndHndl, NULL, InstHndl, NULL);
        StDtPrefixHndl = CreateWindowEx(WS_EX_CLIENTEDGE, _T("EDIT"), _T(""),
                                        WS_CHILD | WS_VISIBLE | ES_AUTOHSCROLL,
                                        300, 230, 150, 24, WndHndl, NULL, InstHndl, NULL);
        CreateWindow(_T("STATIC"), MyMsgProc::GetMsg(MyMsgProc::PROP_DATA_NUM),
                     WS_CHILD | WS_VISIBLE,
                     60, 262, 230, 20, WndHndl, NULL, InstHndl, NULL);
        StDtNumHndl = CreateWindowEx(WS_EX_CLIENTEDGE, _T("EDIT"), _T(""),
                                     WS_CHILD | WS_VISIBLE | ES_AUTOHSCROLL,
                                     300, 260, 100, 24, WndHndl, NULL, InstHndl, NULL);

        SendMessage(StDtPrefixHndl, EM_SETLIMITTEXT, (WPARAM)26, (LPARAM)0);
        SendMessage(StDtNumHndl, EM_SETLIMITTEXT, (WPARAM)5, (LPARAM)0);

        // Add the communication variables as combo box items.
        MaxIndex = 0;
        RecordData* VarRecs = VarCon_GetVariableRecords();
        RecordData* CurVarRec = VarRecs;
        while (CurVarRec != NULL) {
            ColumnDataInt* VarTypeCol = (ColumnDataInt*)CurVarRec->GetColumn(3);
            int VarType = VarTypeCol->GetValue();
            if (VarType == 0) {
                ColumnDataInt* VarIdCol = (ColumnDataInt*)CurVarRec->GetColumn(0);
                ColumnDataWStr* VarNameCol = (ColumnDataWStr*)CurVarRec->GetColumn(1);
                TCHAR* VarName = VarNameCol->GetValue();
                VarId[MaxIndex] = VarIdCol->GetValue();
                MaxIndex++;
                SendMessage(StDtVarHndl, CB_ADDSTRING, 0, (LPARAM)VarName);
            }
            CurVarRec = CurVarRec->GetNextRecord();
        }
        delete VarRecs;

        SelectedVarId = GetStoreVarId(CurrentId);
        for (int Loop = 0; Loop < MaxIndex; Loop++) {
            if (VarId[Loop] == SelectedVarId) {
                SendMessage(StDtVarHndl, CB_SETCURSEL, Loop, 0);
            }
        }

        // Initialize the operation type.
        SelectedType = GetLoadStoreType(CurrentId);
        ChangeSettingType(SelectedType);

        // Set the prefix.
        GetLoadStorePrefix(CurrentId, SpecifiedPrefix);
        SendMessage(StDtPrefixHndl, WM_SETTEXT, (WPARAM)0, (LPARAM)SpecifiedPrefix);

        // Set the starting number used for numbering after the prefix.
        SpecifiedStartNum = GetLoadStoreStartNum(CurrentId);
        TCHAR Buf[10];
        wsprintf(Buf, _T("%d"), SpecifiedStartNum);
        SendMessage(StDtNumHndl, WM_SETTEXT, (WPARAM)0, (LPARAM)Buf);
    }

    if (message == WM_COMMAND) {
        if (HIWORD(wParam) == CBN_SELCHANGE) {
            if (LOWORD(wParam) == IDC_LOADDATA_VAR) {
                int TmpVarIndex = (int)SendMessage(StDtVarHndl, CB_GETCURSEL, 0, 0);
                SelectedVarId = VarId[TmpVarIndex];
            }
        }
        if (HIWORD(wParam) == BN_CLICKED) {
            if (LOWORD(wParam) == IDC_BTNOK) {
                SetStoreVarId(CurrentId, SelectedVarId);
                SetLoadStoreType(CurrentId, SelectedType);
                // WM_GETTEXT must be told the real capacity of the buffer
                // (32 TCHARs), not more.
                SendMessage(StDtPrefixHndl, WM_GETTEXT, (WPARAM)32, (LPARAM)SpecifiedPrefix);
                SetLoadStorePrefix(CurrentId, SpecifiedPrefix);
                TCHAR Buf[10];
                SendMessage(StDtNumHndl, WM_GETTEXT, (WPARAM)10, (LPARAM)Buf);
                SpecifiedStartNum = StrToInt(Buf);
                SetLoadStoreStartNum(CurrentId, SpecifiedStartNum);
            }
            if (LOWORD(wParam) == IDC_LOADSTORE_RADIO1) {
                ChangeSettingType(0);
                SelectedType = 0;
            }
            if (LOWORD(wParam) == IDC_LOADSTORE_RADIO2) {
                ChangeSettingType(1);
                SelectedType = 1;
            }
        }
    }
}
Status RecordStoreValidateAdaptor::validate(const RecordId& recordId,
                                            const RecordData& record,
                                            size_t* dataSize) {
    BSONObj recordBson = record.toBson();

    const Status status = validateBSON(
        recordBson.objdata(), recordBson.objsize(), Validator<BSONObj>::enabledBSONVersion());
    if (status.isOK()) {
        *dataSize = recordBson.objsize();
    } else {
        return status;
    }

    if (!_indexCatalog->haveAnyIndexes()) {
        return status;
    }

    IndexCatalog::IndexIterator i = _indexCatalog->getIndexIterator(_opCtx, false);

    while (i.more()) {
        const IndexDescriptor* descriptor = i.next();
        const std::string indexNs = descriptor->indexNamespace();
        int indexNumber = _indexConsistency->getIndexNumber(indexNs);
        ValidateResults curRecordResults;

        const IndexAccessMethod* iam = _indexCatalog->getIndex(descriptor);

        if (descriptor->isPartial()) {
            const IndexCatalogEntry* ice = _indexCatalog->getEntry(descriptor);
            if (!ice->getFilterExpression()->matchesBSON(recordBson)) {
                (*_indexNsResultsMap)[indexNs] = curRecordResults;
                continue;
            }
        }

        BSONObjSet documentKeySet = SimpleBSONObjComparator::kInstance.makeBSONObjSet();
        BSONObjSet multikeyMetadataKeys = SimpleBSONObjComparator::kInstance.makeBSONObjSet();
        MultikeyPaths multikeyPaths;
        iam->getKeys(recordBson,
                     IndexAccessMethod::GetKeysMode::kEnforceConstraints,
                     &documentKeySet,
                     &multikeyMetadataKeys,
                     &multikeyPaths);

        if (!descriptor->isMultikey(_opCtx) &&
            iam->shouldMarkIndexAsMultikey(documentKeySet, multikeyMetadataKeys, multikeyPaths)) {
            std::string msg = str::stream() << "Index " << descriptor->indexName()
                                            << " is not multi-key, but a multikey path"
                                            << " is present in document " << recordId;
            curRecordResults.errors.push_back(msg);
            curRecordResults.valid = false;
        }

        for (const auto& key : multikeyMetadataKeys) {
            _indexConsistency->addMultikeyMetadataPath(makeWildCardMultikeyMetadataKeyString(key),
                                                       indexNumber);
        }

        const auto& pattern = descriptor->keyPattern();
        const Ordering ord = Ordering::make(pattern);
        bool largeKeyDisallowed = isLargeKeyDisallowed();

        for (const auto& key : documentKeySet) {
            if (largeKeyDisallowed &&
                key.objsize() >= static_cast<int64_t>(KeyString::TypeBits::kMaxKeyBytes)) {
                // Index keys >= 1024 bytes are not indexed.
                _indexConsistency->addLongIndexKey(indexNumber);
                continue;
            }

            // We want to use the latest version of KeyString here.
            KeyString ks(KeyString::kLatestVersion, key, ord, recordId);
            _indexConsistency->addDocKey(ks, indexNumber);
        }
        (*_indexNsResultsMap)[indexNs] = curRecordResults;
    }
    return status;
}
/// \brief Reads attributes from the current stream position.
Attr *PCHReader::ReadAttributes() {
  unsigned Code = DeclsCursor.ReadCode();
  assert(Code == llvm::bitc::UNABBREV_RECORD && "Expected unabbreviated record");
  (void)Code;

  RecordData Record;
  unsigned Idx = 0;
  unsigned RecCode = DeclsCursor.ReadRecord(Code, Record);
  assert(RecCode == pch::DECL_ATTR && "Expected attribute record");
  (void)RecCode;

#define SIMPLE_ATTR(Name)                                                     \
  case Attr::Name:                                                            \
    New = ::new (*Context) Name##Attr();                                      \
    break

#define STRING_ATTR(Name)                                                     \
  case Attr::Name:                                                            \
    New = ::new (*Context) Name##Attr(*Context, ReadString(Record, Idx));     \
    break

#define UNSIGNED_ATTR(Name)                                                   \
  case Attr::Name:                                                            \
    New = ::new (*Context) Name##Attr(Record[Idx++]);                         \
    break

  Attr *Attrs = 0;
  while (Idx < Record.size()) {
    Attr *New = 0;
    Attr::Kind Kind = (Attr::Kind)Record[Idx++];
    bool IsInherited = Record[Idx++];

    switch (Kind) {
    default:
      assert(0 && "Unknown attribute!");
      break;
    STRING_ATTR(Alias);
    UNSIGNED_ATTR(Aligned);
    SIMPLE_ATTR(AlwaysInline);
    SIMPLE_ATTR(AnalyzerNoReturn);
    STRING_ATTR(Annotate);
    STRING_ATTR(AsmLabel);
    SIMPLE_ATTR(BaseCheck);

    case Attr::Blocks:
      New = ::new (*Context) BlocksAttr(
          (BlocksAttr::BlocksAttrTypes)Record[Idx++]);
      break;

    SIMPLE_ATTR(CDecl);

    case Attr::Cleanup:
      New = ::new (*Context) CleanupAttr(
          cast<FunctionDecl>(GetDecl(Record[Idx++])));
      break;

    SIMPLE_ATTR(Const);
    UNSIGNED_ATTR(Constructor);
    SIMPLE_ATTR(DLLExport);
    SIMPLE_ATTR(DLLImport);
    SIMPLE_ATTR(Deprecated);
    UNSIGNED_ATTR(Destructor);
    SIMPLE_ATTR(FastCall);
    SIMPLE_ATTR(Final);

    case Attr::Format: {
      std::string Type = ReadString(Record, Idx);
      unsigned FormatIdx = Record[Idx++];
      unsigned FirstArg = Record[Idx++];
      New = ::new (*Context) FormatAttr(*Context, Type, FormatIdx, FirstArg);
      break;
    }

    case Attr::FormatArg: {
      unsigned FormatIdx = Record[Idx++];
      New = ::new (*Context) FormatArgAttr(FormatIdx);
      break;
    }

    case Attr::Sentinel: {
      int sentinel = Record[Idx++];
      int nullPos = Record[Idx++];
      New = ::new (*Context) SentinelAttr(sentinel, nullPos);
      break;
    }

    SIMPLE_ATTR(GNUInline);
    SIMPLE_ATTR(Hiding);

    case Attr::IBActionKind:
      New = ::new (*Context) IBActionAttr();
      break;

    case Attr::IBOutletKind:
      New = ::new (*Context) IBOutletAttr();
      break;

    SIMPLE_ATTR(Malloc);
    SIMPLE_ATTR(NoDebug);
    SIMPLE_ATTR(NoInline);
    SIMPLE_ATTR(NoReturn);
    SIMPLE_ATTR(NoThrow);

    case Attr::NonNull: {
      unsigned Size = Record[Idx++];
      llvm::SmallVector<unsigned, 16> ArgNums;
      ArgNums.insert(ArgNums.end(), &Record[Idx], &Record[Idx] + Size);
      Idx += Size;
      New = ::new (*Context) NonNullAttr(*Context, ArgNums.data(), Size);
      break;
    }

    case Attr::ReqdWorkGroupSize: {
      unsigned X = Record[Idx++];
      unsigned Y = Record[Idx++];
      unsigned Z = Record[Idx++];
      New = ::new (*Context) ReqdWorkGroupSizeAttr(X, Y, Z);
      break;
    }

    SIMPLE_ATTR(ObjCException);
    SIMPLE_ATTR(ObjCNSObject);
    SIMPLE_ATTR(CFReturnsNotRetained);
    SIMPLE_ATTR(CFReturnsRetained);
    SIMPLE_ATTR(NSReturnsNotRetained);
    SIMPLE_ATTR(NSReturnsRetained);
    SIMPLE_ATTR(Overloadable);
    SIMPLE_ATTR(Override);
    SIMPLE_ATTR(Packed);
    UNSIGNED_ATTR(PragmaPack);
    SIMPLE_ATTR(Pure);
    UNSIGNED_ATTR(Regparm);
    STRING_ATTR(Section);
    SIMPLE_ATTR(StdCall);
    SIMPLE_ATTR(TransparentUnion);
    SIMPLE_ATTR(Unavailable);
    SIMPLE_ATTR(Unused);
    SIMPLE_ATTR(Used);

    case Attr::Visibility:
      New = ::new (*Context) VisibilityAttr(
          (VisibilityAttr::VisibilityTypes)Record[Idx++]);
      break;

    SIMPLE_ATTR(WarnUnusedResult);
    SIMPLE_ATTR(Weak);
    SIMPLE_ATTR(WeakRef);
    SIMPLE_ATTR(WeakImport);
    }

    assert(New && "Unable to decode attribute?");
    New->setInherited(IsInherited);
    New->setNext(Attrs);
    Attrs = New;
  }
#undef UNSIGNED_ATTR
#undef STRING_ATTR
#undef SIMPLE_ATTR

  // The list of attributes was built backwards. Reverse the list
  // before returning it.
  Attr *PrevAttr = 0, *NextAttr = 0;
  while (Attrs) {
    NextAttr = Attrs->getNext();
    Attrs->setNext(PrevAttr);
    PrevAttr = Attrs;
    Attrs = NextAttr;
  }

  return PrevAttr;
}
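// The loop at the end of ReadAttributes is the classic three-pointer,
// in-place reversal of a singly linked list. A standalone sketch of the
// idiom (the Node type here is hypothetical, not from the sources above):
struct Node {
  Node *Next;
};

Node *reverse(Node *Head) {
  Node *Prev = nullptr;
  while (Head) {
    Node *Next = Head->Next; // remember the rest of the list
    Head->Next = Prev;       // point the current node backwards
    Prev = Head;             // advance both pointers
    Head = Next;
  }
  return Prev; // the old tail is the new head
}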