void ParallelBlockCommunicator2D::duplicateOverlaps (
        MultiBlock2D& multiBlock, modif::ModifT whichData ) const
{
    MultiBlockManagement2D const& multiBlockManagement = multiBlock.getMultiBlockManagement();
    PeriodicitySwitch2D const& periodicity = multiBlock.periodicity();

    // Implement a caching mechanism for the communication structure: rebuild
    // it only when the overlaps have changed since the last communication.
    if (overlapsModified) {
        overlapsModified = false;
        LocalMultiBlockInfo2D const& localInfo = multiBlockManagement.getLocalInfo();
        std::vector<Overlap2D> overlaps(localInfo.getNormalOverlaps());
        // Append the periodic overlaps for those directions in which
        // periodicity is actually enabled.
        for (pluint iOverlap=0; iOverlap<localInfo.getPeriodicOverlaps().size(); ++iOverlap) {
            PeriodicOverlap2D const& pOverlap = localInfo.getPeriodicOverlaps()[iOverlap];
            if (periodicity.get(pOverlap.normalX, pOverlap.normalY)) {
                overlaps.push_back(pOverlap.overlap);
            }
        }
        delete communication;
        // Origin and destination use the same block management, because
        // overlaps are duplicated within a single multi-block.
        communication = new CommunicationStructure2D (
                overlaps, multiBlockManagement, multiBlockManagement,
                multiBlock.sizeOfCell() );
    }
    communicate(*communication, multiBlock, multiBlock, whichData);
}
void ParallelBlockCommunicator2D::communicate (
        std::vector<Overlap2D> const& overlaps,
        MultiBlock2D const& originMultiBlock,
        MultiBlock2D& destinationMultiBlock,
        modif::ModifT whichData ) const
{
    // Origin and destination blocks must hold cells of identical size.
    PLB_PRECONDITION( originMultiBlock.sizeOfCell() == destinationMultiBlock.sizeOfCell() );
    // Build a one-shot communication structure for the provided overlaps;
    // this overload performs no caching.
    CommunicationStructure2D communication (
            overlaps,
            originMultiBlock.getMultiBlockManagement(),
            destinationMultiBlock.getMultiBlockManagement(),
            originMultiBlock.sizeOfCell() );
    global::profiler().start("mpiCommunication");
    communicate(communication, originMultiBlock, destinationMultiBlock, whichData);
    global::profiler().stop("mpiCommunication");
}
void saveFull( MultiBlock2D& multiBlock, FileName fName, IndexOrdering::OrderingT ordering )
{
    global::profiler().start("io");
    SparseBlockStructure2D blockStructure(multiBlock.getBoundingBox());
    Box2D bbox = multiBlock.getBoundingBox();
    if (ordering==IndexOrdering::forward) {
        // Slice the domain into x-stripes, at most one per MPI process.
        plint nBlocks = std::min(bbox.getNx(), (plint)global::mpi().getSize());
        std::vector<std::pair<plint,plint> > ranges;
        util::linearRepartition(bbox.x0, bbox.x1, nBlocks, ranges);
        for (pluint iRange=0; iRange<ranges.size(); ++iRange) {
            blockStructure.addBlock (
                    Box2D(ranges[iRange].first, ranges[iRange].second, bbox.y0, bbox.y1),
                    iRange );
        }
    }
    else if (ordering==IndexOrdering::backward) {
        // Slice the domain into y-stripes instead.
        plint nBlocks = std::min(bbox.getNy(), (plint)global::mpi().getSize());
        std::vector<std::pair<plint,plint> > ranges;
        util::linearRepartition(bbox.y0, bbox.y1, nBlocks, ranges);
        for (pluint iRange=0; iRange<ranges.size(); ++iRange) {
            blockStructure.addBlock (
                    Box2D(bbox.x0, bbox.x1, ranges[iRange].first, ranges[iRange].second),
                    iRange );
        }
    }
    else {
        // Sparse ordering is not defined for full-domain output.
        PLB_ASSERT( false );
    }
    plint envelopeWidth = 1;
    MultiBlockManagement2D adjacentMultiBlockManagement (
            blockStructure, new OneToOneThreadAttribution, envelopeWidth );
    // Redistribute the data into the stripe decomposition defined above.
    MultiBlock2D* multiAdjacentBlock = multiBlock.clone(adjacentMultiBlockManagement);
    std::vector<plint> offset;
    std::vector<plint> myBlockIds;
    std::vector<std::vector<char> > data;
    bool dynamicContent = false;
    dumpData(*multiAdjacentBlock, dynamicContent, offset, myBlockIds, data);
    if (ordering==IndexOrdering::backward && myBlockIds.size()==1) {
        // For backward (column-major) ordering, transpose the local chunk
        // in place before it is written.
        PLB_ASSERT( data.size()==1 );
        Box2D domain;
        blockStructure.getBulk(myBlockIds[0], domain);
        plint sizeOfCell = multiAdjacentBlock->sizeOfCell();
        PLB_ASSERT( domain.nCells()*sizeOfCell == (plint)data[0].size() );
        transposeToBackward( sizeOfCell, domain, data[0] );
    }
    plint totalSize = offset[offset.size()-1];
    writeOneBlockXmlSpec(*multiAdjacentBlock, fName, totalSize, ordering);
    writeRawData(fName, myBlockIds, offset, data);
    delete multiAdjacentBlock;
    global::profiler().stop("io");
}