void ParallelBlockCommunicator2D::duplicateOverlaps(MultiBlock2D& multiBlock, modif::ModifT whichData) const
{
    MultiBlockManagement2D const& multiBlockManagement = multiBlock.getMultiBlockManagement();
    PeriodicitySwitch2D const& periodicity = multiBlock.periodicity();

    // Implement a caching mechanism for the communication structure.
    if (overlapsModified) {
        overlapsModified = false;
        LocalMultiBlockInfo2D const& localInfo = multiBlockManagement.getLocalInfo();
        std::vector<Overlap2D> overlaps(localInfo.getNormalOverlaps());
        for (pluint iOverlap=0; iOverlap<localInfo.getPeriodicOverlaps().size(); ++iOverlap) {
            PeriodicOverlap2D const& pOverlap = localInfo.getPeriodicOverlaps()[iOverlap];
            if (periodicity.get(pOverlap.normalX, pOverlap.normalY)) {
                overlaps.push_back(pOverlap.overlap);
            }
        }
        delete communication;
        communication = new CommunicationStructure2D (
                overlaps,
                multiBlockManagement, multiBlockManagement,
                multiBlock.sizeOfCell() );
    }

    communicate(*communication, multiBlock, multiBlock, whichData);
}
void SerialBlockCommunicator2D::copyOverlap (
        Overlap2D const& overlap,
        MultiBlock2D const& fromMultiBlock, MultiBlock2D& toMultiBlock,
        modif::ModifT whichData ) const
{
    MultiBlockManagement2D const& fromManagement = fromMultiBlock.getMultiBlockManagement();
    MultiBlockManagement2D const& toManagement = toMultiBlock.getMultiBlockManagement();
    plint fromEnvelopeWidth = fromManagement.getEnvelopeWidth();
    plint toEnvelopeWidth = toManagement.getEnvelopeWidth();
    SparseBlockStructure2D const& fromSparseBlock = fromManagement.getSparseBlockStructure();
    SparseBlockStructure2D const& toSparseBlock = toManagement.getSparseBlockStructure();

    plint originalId = overlap.getOriginalId();
    plint overlapId  = overlap.getOverlapId();
    SmartBulk2D originalBulk(fromSparseBlock, fromEnvelopeWidth, originalId);
    SmartBulk2D overlapBulk(toSparseBlock, toEnvelopeWidth, overlapId);

    Box2D originalCoords(originalBulk.toLocal(overlap.getOriginalCoordinates()));
    Box2D overlapCoords(overlapBulk.toLocal(overlap.getOverlapCoordinates()));

    PLB_PRECONDITION(originalCoords.x1-originalCoords.x0 == overlapCoords.x1-overlapCoords.x0);
    PLB_PRECONDITION(originalCoords.y1-originalCoords.y0 == overlapCoords.y1-overlapCoords.y0);

    AtomicBlock2D const* originalBlock = &fromMultiBlock.getComponent(originalId);
    AtomicBlock2D* overlapBlock = &toMultiBlock.getComponent(overlapId);
    plint deltaX = originalCoords.x0 - overlapCoords.x0;
    plint deltaY = originalCoords.y0 - overlapCoords.y0;

    overlapBlock->getDataTransfer().attribute(overlapCoords, deltaX, deltaY, *originalBlock, whichData);
}
void saveFull( MultiBlock2D& multiBlock, FileName fName, IndexOrdering::OrderingT ordering )
{
    global::profiler().start("io");
    SparseBlockStructure2D blockStructure(multiBlock.getBoundingBox());
    Box2D bbox = multiBlock.getBoundingBox();
    if (ordering==IndexOrdering::forward) {
        plint nBlocks = std::min(bbox.getNx(), (plint)global::mpi().getSize());
        std::vector<std::pair<plint,plint> > ranges;
        util::linearRepartition(bbox.x0, bbox.x1, nBlocks, ranges);
        for (pluint iRange=0; iRange<ranges.size(); ++iRange) {
            blockStructure.addBlock (
                    Box2D( ranges[iRange].first, ranges[iRange].second, bbox.y0, bbox.y1 ),
                    iRange );
        }
    }
    else if (ordering==IndexOrdering::backward) {
        plint nBlocks = std::min(bbox.getNy(), (plint)global::mpi().getSize());
        std::vector<std::pair<plint,plint> > ranges;
        util::linearRepartition(bbox.y0, bbox.y1, nBlocks, ranges);
        for (pluint iRange=0; iRange<ranges.size(); ++iRange) {
            blockStructure.addBlock (
                    Box2D( bbox.x0, bbox.x1, ranges[iRange].first, ranges[iRange].second ),
                    iRange );
        }
    }
    else {
        // Sparse ordering not defined.
        PLB_ASSERT( false );
    }

    plint envelopeWidth = 1;
    MultiBlockManagement2D adjacentMultiBlockManagement (
            blockStructure, new OneToOneThreadAttribution, envelopeWidth );
    MultiBlock2D* multiAdjacentBlock = multiBlock.clone(adjacentMultiBlockManagement);

    std::vector<plint> offset;
    std::vector<plint> myBlockIds;
    std::vector<std::vector<char> > data;
    bool dynamicContent = false;
    dumpData(*multiAdjacentBlock, dynamicContent, offset, myBlockIds, data);

    if (ordering==IndexOrdering::backward && myBlockIds.size()==1) {
        PLB_ASSERT( data.size()==1 );
        Box2D domain;
        blockStructure.getBulk(myBlockIds[0], domain);
        plint sizeOfCell = multiAdjacentBlock->sizeOfCell();
        PLB_ASSERT( domain.nCells()*sizeOfCell == (plint)data[0].size() );
        transposeToBackward( sizeOfCell, domain, data[0] );
    }

    plint totalSize = offset[offset.size()-1];
    writeOneBlockXmlSpec(*multiAdjacentBlock, fName, totalSize, ordering);
    writeRawData(fName, myBlockIds, offset, data);

    delete multiAdjacentBlock;
    global::profiler().stop("io");
}
void ParallelBlockCommunicator2D::communicate (
        std::vector<Overlap2D> const& overlaps,
        MultiBlock2D const& originMultiBlock,
        MultiBlock2D& destinationMultiBlock, modif::ModifT whichData ) const
{
    PLB_PRECONDITION( originMultiBlock.sizeOfCell() == destinationMultiBlock.sizeOfCell() );
    CommunicationStructure2D communication (
            overlaps,
            originMultiBlock.getMultiBlockManagement(),
            destinationMultiBlock.getMultiBlockManagement(),
            originMultiBlock.sizeOfCell() );
    global::profiler().start("mpiCommunication");
    communicate(communication, originMultiBlock, destinationMultiBlock, whichData);
    global::profiler().stop("mpiCommunication");
}
void ParallelBlockCommunicator2D::communicate (
        CommunicationStructure2D& communication,
        MultiBlock2D const& originMultiBlock,
        MultiBlock2D& destinationMultiBlock, modif::ModifT whichData ) const
{
    bool staticMessage = whichData == modif::staticVariables;

    // 1. Non-blocking receives.
    communication.recvComm.startBeingReceptive(staticMessage);

    // 2. Non-blocking sends.
    for (unsigned iSend=0; iSend<communication.sendPackage.size(); ++iSend) {
        CommunicationInfo2D const& info = communication.sendPackage[iSend];
        AtomicBlock2D const& fromBlock = originMultiBlock.getComponent(info.fromBlockId);
        fromBlock.getDataTransfer().send (
                info.fromDomain,
                communication.sendComm.getSendBuffer(info.toProcessId),
                whichData );
        communication.sendComm.acceptMessage(info.toProcessId, staticMessage);
    }

    // 3. Local copies which require no communication.
    for (unsigned iSendRecv=0; iSendRecv<communication.sendRecvPackage.size(); ++iSendRecv) {
        CommunicationInfo2D const& info = communication.sendRecvPackage[iSendRecv];
        AtomicBlock2D const& fromBlock = originMultiBlock.getComponent(info.fromBlockId);
        AtomicBlock2D& toBlock = destinationMultiBlock.getComponent(info.toBlockId);
        plint deltaX = info.fromDomain.x0 - info.toDomain.x0;
        plint deltaY = info.fromDomain.y0 - info.toDomain.y0;
        toBlock.getDataTransfer().attribute (
                info.toDomain, deltaX, deltaY, fromBlock, whichData,
                info.absoluteOffset );
    }

    // 4. Finalize the receives.
    for (unsigned iRecv=0; iRecv<communication.recvPackage.size(); ++iRecv) {
        CommunicationInfo2D const& info = communication.recvPackage[iRecv];
        AtomicBlock2D& toBlock = destinationMultiBlock.getComponent(info.toBlockId);
        toBlock.getDataTransfer().receive (
                info.toDomain,
                communication.recvComm.receiveMessage(info.fromProcessId, staticMessage),
                whichData, info.absoluteOffset );
    }

    // 5. Finalize the sends.
    communication.sendComm.finalize(staticMessage);
}
void SerialBlockCommunicator2D::duplicateOverlaps(MultiBlock2D& multiBlock, modif::ModifT whichData) const
{
    MultiBlockManagement2D const& multiBlockManagement = multiBlock.getMultiBlockManagement();
    LocalMultiBlockInfo2D const& localInfo = multiBlockManagement.getLocalInfo();

    // Non-periodic communication.
    for (pluint iOverlap=0; iOverlap<localInfo.getNormalOverlaps().size(); ++iOverlap) {
        copyOverlap(localInfo.getNormalOverlaps()[iOverlap], multiBlock, multiBlock, whichData);
    }

    // Periodic communication.
    PeriodicitySwitch2D const& periodicity = multiBlock.periodicity();
    for (pluint iOverlap=0; iOverlap<localInfo.getPeriodicOverlaps().size(); ++iOverlap) {
        PeriodicOverlap2D const& pOverlap = localInfo.getPeriodicOverlaps()[iOverlap];
        if (periodicity.get(pOverlap.normalX, pOverlap.normalY)) {
            copyOverlap(pOverlap.overlap, multiBlock, multiBlock, whichData);
        }
    }
}
void dumpData( MultiBlock2D& multiBlock, bool dynamicContent,
               std::vector<plint>& offset, std::vector<plint>& myBlockIds,
               std::vector<std::vector<char> >& data )
{
    MultiBlockManagement2D const& management = multiBlock.getMultiBlockManagement();
    std::map<plint,Box2D> const& bulks = management.getSparseBlockStructure().getBulks();
    plint numBlocks = (plint) bulks.size();

    // Map each (possibly sparse) block id to a contiguous index.
    std::map<plint,plint> toContiguousId;
    std::map<plint,Box2D>::const_iterator it = bulks.begin();
    plint pos = 0;
    for (; it != bulks.end(); ++it) {
        toContiguousId[it->first] = pos;
        ++pos;
    }

    // Serialize the bulk of each locally owned block; record its size under
    // its contiguous index, leaving zero for blocks owned by other processes.
    std::vector<plint> const& myBlocks = management.getLocalInfo().getBlocks();
    myBlockIds.resize(myBlocks.size());
    data.resize(myBlocks.size());
    std::vector<plint> blockSize(numBlocks);
    std::fill(blockSize.begin(), blockSize.end(), 0);
    for (pluint iBlock=0; iBlock<myBlocks.size(); ++iBlock) {
        plint blockId = myBlocks[iBlock];
        SmartBulk2D bulk(management, blockId);
        Box2D localBulk(bulk.toLocal(bulk.getBulk()));
        AtomicBlock2D const& block = multiBlock.getComponent(blockId);
        modif::ModifT typeOfVariables = dynamicContent ? modif::dataStructure : modif::staticVariables;
        block.getDataTransfer().send(localBulk, data[iBlock], typeOfVariables);
        plint contiguousId = toContiguousId[blockId];
        myBlockIds[iBlock] = contiguousId;
        blockSize[contiguousId] = (plint)data[iBlock].size();
    }
#ifdef PLB_MPI_PARALLEL
    global::mpi().allReduceVect(blockSize, MPI_SUM);
#endif
    // After the all-reduce, every process knows every block's size; the
    // prefix sum yields the end offset of each block in the output file.
    offset.resize(numBlocks);
    std::partial_sum(blockSize.begin(), blockSize.end(), offset.begin());
}
void writeOneBlockXmlSpec( MultiBlock2D& multiBlock, FileName fName, plint dataSize,
                           IndexOrdering::OrderingT ordering )
{
    fName.defaultExt("plb");
    MultiBlockManagement2D const& management = multiBlock.getMultiBlockManagement();
    std::vector<std::string> typeInfo = multiBlock.getTypeInfo();
    std::string blockName = multiBlock.getBlockName();
    PLB_ASSERT( !typeInfo.empty() );

    XMLwriter xml;
    XMLwriter& xmlMultiBlock = xml["Block2D"];
    xmlMultiBlock["General"]["Family"].setString(blockName);
    xmlMultiBlock["General"]["Datatype"].setString(typeInfo[0]);
    if (typeInfo.size()>1) {
        xmlMultiBlock["General"]["Descriptor"].setString(typeInfo[1]);
    }
    xmlMultiBlock["General"]["cellDim"].set(multiBlock.getCellDim());
    bool dynamicContent = false;
    xmlMultiBlock["General"]["dynamicContent"].set(dynamicContent);

    Array<plint,4> boundingBox = multiBlock.getBoundingBox().to_plbArray();
    xmlMultiBlock["Structure"]["BoundingBox"].set<plint,4>(boundingBox);
    xmlMultiBlock["Structure"]["EnvelopeWidth"].set(management.getEnvelopeWidth());
    xmlMultiBlock["Structure"]["NumComponents"].set(1);

    xmlMultiBlock["Data"]["File"].setString(FileName(fName).setExt("dat"));
    if (ordering == IndexOrdering::forward) {
        xmlMultiBlock["Data"]["IndexOrdering"].setString("zIsFastest");
    }
    else {
        xmlMultiBlock["Data"]["IndexOrdering"].setString("xIsFastest");
    }

    XMLwriter& xmlBulks = xmlMultiBlock["Data"]["Component"];
    xmlBulks.set<plint,4>(multiBlock.getBoundingBox().to_plbArray());
    xmlMultiBlock["Data"]["Offsets"].set(dataSize);
    xml.print(FileName(fName).defaultPath(global::directories().getOutputDir()));
}
void writeXmlSpec( MultiBlock2D& multiBlock, FileName fName,
                   std::vector<plint> const& offset, bool dynamicContent )
{
    fName.defaultExt("plb");
    MultiBlockManagement2D const& management = multiBlock.getMultiBlockManagement();
    std::map<plint,Box2D> const& bulks = management.getSparseBlockStructure().getBulks();
    PLB_ASSERT( offset.empty() || bulks.size()==offset.size() );
    std::vector<std::string> typeInfo = multiBlock.getTypeInfo();
    std::string blockName = multiBlock.getBlockName();
    PLB_ASSERT( !typeInfo.empty() );

    XMLwriter xml;
    XMLwriter& xmlMultiBlock = xml["Block2D"];
    xmlMultiBlock["General"]["Family"].setString(blockName);
    xmlMultiBlock["General"]["Datatype"].setString(typeInfo[0]);
    if (typeInfo.size()>1) {
        xmlMultiBlock["General"]["Descriptor"].setString(typeInfo[1]);
    }
    xmlMultiBlock["General"]["cellDim"].set(multiBlock.getCellDim());
    xmlMultiBlock["General"]["dynamicContent"].set(dynamicContent);
    xmlMultiBlock["General"]["globalId"].set(multiBlock.getId());

    Array<plint,4> boundingBox = multiBlock.getBoundingBox().to_plbArray();
    xmlMultiBlock["Structure"]["BoundingBox"].set<plint,4>(boundingBox);
    xmlMultiBlock["Structure"]["EnvelopeWidth"].set(management.getEnvelopeWidth());
    xmlMultiBlock["Structure"]["GridLevel"].set(management.getRefinementLevel());
    xmlMultiBlock["Structure"]["NumComponents"].set(bulks.size());

    xmlMultiBlock["Data"]["File"].setString(FileName(fName).setExt("dat"));
    XMLwriter& xmlBulks = xmlMultiBlock["Data"]["Component"];
    std::map<plint,Box2D>::const_iterator it = bulks.begin();
    plint iComp=0;
    for (; it != bulks.end(); ++it) {
        Box2D bulk = it->second;
        xmlBulks[iComp].set<plint,4>(bulk.to_plbArray());
        ++iComp;
    }
    if (!offset.empty()) {
        xmlMultiBlock["Data"]["Offsets"].set(offset);
    }

    // The following prints a unique list of dynamics-id pairs for all dynamics
    // classes used in the multi-block. This is necessary, because dynamics
    // classes may be ordered differently from one compilation to the other,
    // or from one compiler to the other.
    //
    // Given that the dynamics classes are unique, they can be indexed by their
    // name (which is not the case for the data processors below).
    std::map<std::string,int> dynamicsDict;
    multiBlock.getDynamicsDict(multiBlock.getBoundingBox(), dynamicsDict);
    if (!dynamicsDict.empty()) {
        XMLwriter& xmlDynamicsDict = xmlMultiBlock["Data"]["DynamicsDict"];
        for ( std::map<std::string,int>::const_iterator it = dynamicsDict.begin();
              it != dynamicsDict.end(); ++it )
        {
            xmlDynamicsDict[it->first].set(it->second);
        }
    }

    // This is the only section in which actual content is stored outside the
    // binary blob: the serialization of the data processors. This
    // serialization was chosen to be in ASCII, because it takes little space
    // and can be somewhat complicated.
    //
    // It is important that the processors are indexed by a continuous index
    // "iProcessor". They cannot be indexed by the class name ("Name") or
    // static id ("id"), because several instances of the same class may occur.
    XMLwriter& xmlProcessors = xmlMultiBlock["Data"]["Processor"];
    std::vector<MultiBlock2D::ProcessorStorage2D> const& processors = multiBlock.getStoredProcessors();
    for (plint iProcessor=0; iProcessor<(plint)processors.size(); ++iProcessor) {
        int id = processors[iProcessor].getGenerator().getStaticId();
        if (id>=0) {
            Box2D domain;
            std::string data;
            processors[iProcessor].getGenerator().serialize(domain, data);
            xmlProcessors[iProcessor]["Name"].set(meta::processorRegistration2D().getName(id));
            xmlProcessors[iProcessor]["Domain"].set<plint,4>(domain.to_plbArray());
            xmlProcessors[iProcessor]["Data"].setString(data);
            xmlProcessors[iProcessor]["Level"].set(processors[iProcessor].getLevel());
            xmlProcessors[iProcessor]["Blocks"].set(processors[iProcessor].getMultiBlockIds());
        }
    }
    xml.print(FileName(fName).defaultPath(global::directories().getOutputDir()));
}