void
StupidAllocator::syncRegister(LInstruction *ins, RegisterIndex index)
{
    if (registers[index].dirty) {
        LMoveGroup *input = getInputMoveGroup(ins->id());
        LAllocation *source = new LAllocation(registers[index].reg);

        uint32_t existing = registers[index].vreg;
        LAllocation *dest = stackLocation(existing);
        input->addAfter(source, dest);

        registers[index].dirty = false;
    }
}
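
// A minimal standalone sketch (not part of the allocator; all names below
// are hypothetical) of the dirty-bit write-back scheme syncRegister
// implements: a register's value is flushed to its vreg's canonical stack
// slot only if it has been modified since the last sync, and the register
// is then marked clean so repeated syncs emit no redundant moves.
#include <cstdint>

struct SketchRegisterEntry {
    uint32_t vreg;  // virtual register currently held in this register
    bool dirty;     // in-register value differs from the stack slot
};

static void sketchSyncRegister(SketchRegisterEntry& entry, uint32_t regValue,
                               uint32_t* stackSlots)
{
    if (entry.dirty) {
        stackSlots[entry.vreg] = regValue;  // the reg -> stack move
        entry.dirty = false;                // subsequent syncs are no-ops
    }
}
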
void
StupidAllocator::syncForBlockEnd(LBlock *block, LInstruction *ins)
{
    // Sync any dirty registers, and update the synced state for phi nodes at
    // each successor of a block. We cannot conflate the storage for phis with
    // that of their inputs, as we cannot prove the live ranges of the phi and
    // its input do not overlap. The values for the two may additionally be
    // different, as the phi could be for the value of the input in a previous
    // loop iteration.

    for (size_t i = 0; i < registerCount; i++)
        syncRegister(ins, i);

    LMoveGroup *group = nullptr;

    MBasicBlock *successor = block->mir()->successorWithPhis();
    if (successor) {
        uint32_t position = block->mir()->positionInPhiSuccessor();
        LBlock *lirsuccessor = graph.getBlock(successor->id());
        for (size_t i = 0; i < lirsuccessor->numPhis(); i++) {
            LPhi *phi = lirsuccessor->getPhi(i);

            uint32_t sourcevreg = phi->getOperand(position)->toUse()->virtualRegister();
            uint32_t destvreg = phi->getDef(0)->virtualRegister();

            if (sourcevreg == destvreg)
                continue;

            LAllocation *source = stackLocation(sourcevreg);
            LAllocation *dest = stackLocation(destvreg);

            if (!group) {
                // The moves we insert here need to happen simultaneously with
                // each other, yet after any existing moves before the instruction.
                LMoveGroup *input = getInputMoveGroup(ins->id());
                if (input->numMoves() == 0) {
                    group = input;
                } else {
                    group = new LMoveGroup(alloc());
                    block->insertAfter(input, group);
                }
            }

            group->add(source, dest);
        }
    }
}
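
// Why the phi moves above must share one LMoveGroup: moves within a group are
// semantically parallel, and executing them naively in sequence clobbers a
// source whenever two phis exchange values (e.g. {A -> B, B -> A}). A minimal
// standalone illustration (hypothetical names) of breaking such a cycle with
// a temporary, which is what a move-group resolver must ultimately do:
static void sketchResolveSwapCycle(int& slotA, int& slotB)
{
    // Sequential "slotB = slotA; slotA = slotB;" would lose slotB's value.
    int tmp = slotA;
    slotA = slotB;
    slotB = tmp;
}
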
void
RegisterAllocator::dumpInstructions()
{
#ifdef DEBUG
    fprintf(stderr, "Instructions:\n");

    for (size_t blockIndex = 0; blockIndex < graph.numBlocks(); blockIndex++) {
        LBlock* block = graph.getBlock(blockIndex);
        MBasicBlock* mir = block->mir();

        fprintf(stderr, "\nBlock %lu", static_cast<unsigned long>(blockIndex));
        for (size_t i = 0; i < mir->numSuccessors(); i++)
            fprintf(stderr, " [successor %u]", mir->getSuccessor(i)->id());
        fprintf(stderr, "\n");

        for (size_t i = 0; i < block->numPhis(); i++) {
            LPhi* phi = block->getPhi(i);

            fprintf(stderr, "[%u,%u Phi] [def %s]",
                    inputOf(phi).bits(),
                    outputOf(phi).bits(),
                    phi->getDef(0)->toString());
            for (size_t j = 0; j < phi->numOperands(); j++)
                fprintf(stderr, " [use %s]", phi->getOperand(j)->toString());
            fprintf(stderr, "\n");
        }

        for (LInstructionIterator iter = block->begin(); iter != block->end(); iter++) {
            LInstruction* ins = *iter;

            fprintf(stderr, "[");
            if (ins->id() != 0)
                fprintf(stderr, "%u,%u ", inputOf(ins).bits(), outputOf(ins).bits());
            fprintf(stderr, "%s]", ins->opName());

            if (ins->isMoveGroup()) {
                LMoveGroup* group = ins->toMoveGroup();
                for (int i = group->numMoves() - 1; i >= 0; i--) {
                    // Use two printfs, as LAllocation::toString is not reentrant.
                    fprintf(stderr, " [%s", group->getMove(i).from()->toString());
                    fprintf(stderr, " -> %s]", group->getMove(i).to()->toString());
                }
                fprintf(stderr, "\n");
                continue;
            }

            for (size_t i = 0; i < ins->numDefs(); i++)
                fprintf(stderr, " [def %s]", ins->getDef(i)->toString());

            for (size_t i = 0; i < ins->numTemps(); i++) {
                LDefinition* temp = ins->getTemp(i);
                if (!temp->isBogusTemp())
                    fprintf(stderr, " [temp %s]", temp->toString());
            }

            for (LInstruction::InputIterator alloc(*ins); alloc.more(); alloc.next()) {
                if (!alloc->isBogus())
                    fprintf(stderr, " [use %s]", alloc->toString());
            }

            fprintf(stderr, "\n");
        }
    }

    fprintf(stderr, "\n");
#endif // DEBUG
}
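
// The "two printfs" comments above exist because LAllocation::toString is
// not reentrant: it presumably hands back a pointer into shared storage, so
// calling it twice within one argument list yields the same text for both
// conversions. A standalone illustration (hypothetical function) of that
// failure mode:
#include <cstdio>

static const char* sketchToString(int v)
{
    static char buf[32];
    snprintf(buf, sizeof(buf), "v%d", v);
    return buf;  // every call overwrites the same static buffer
}
// printf("%s -> %s", sketchToString(1), sketchToString(2)) prints one of the
// two values twice (both arguments are evaluated before printf runs), hence
// each conversion above gets its own printf call.
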
void
AllocationIntegrityState::dump()
{
#ifdef DEBUG
    fprintf(stderr, "Register Allocation Integrity State:\n");

    for (size_t blockIndex = 0; blockIndex < graph.numBlocks(); blockIndex++) {
        LBlock* block = graph.getBlock(blockIndex);
        MBasicBlock* mir = block->mir();

        fprintf(stderr, "\nBlock %lu", static_cast<unsigned long>(blockIndex));
        for (size_t i = 0; i < mir->numSuccessors(); i++)
            fprintf(stderr, " [successor %u]", mir->getSuccessor(i)->id());
        fprintf(stderr, "\n");

        for (size_t i = 0; i < block->numPhis(); i++) {
            const InstructionInfo& info = blocks[blockIndex].phis[i];
            LPhi* phi = block->getPhi(i);
            CodePosition input(block->getPhi(0)->id(), CodePosition::INPUT);
            CodePosition output(block->getPhi(block->numPhis() - 1)->id(), CodePosition::OUTPUT);

            fprintf(stderr, "[%u,%u Phi] [def %s] ",
                    input.bits(), output.bits(), phi->getDef(0)->toString());
            for (size_t j = 0; j < phi->numOperands(); j++)
                fprintf(stderr, " [use %s]", info.inputs[j].toString());
            fprintf(stderr, "\n");
        }

        for (LInstructionIterator iter = block->begin(); iter != block->end(); iter++) {
            LInstruction* ins = *iter;
            const InstructionInfo& info = instructions[ins->id()];

            CodePosition input(ins->id(), CodePosition::INPUT);
            CodePosition output(ins->id(), CodePosition::OUTPUT);

            fprintf(stderr, "[");
            if (input != CodePosition::MIN)
                fprintf(stderr, "%u,%u ", input.bits(), output.bits());
            fprintf(stderr, "%s]", ins->opName());

            if (ins->isMoveGroup()) {
                LMoveGroup* group = ins->toMoveGroup();
                for (int i = group->numMoves() - 1; i >= 0; i--) {
                    // Use two printfs, as LAllocation::toString is not reentrant.
                    fprintf(stderr, " [%s", group->getMove(i).from()->toString());
                    fprintf(stderr, " -> %s]", group->getMove(i).to()->toString());
                }
                fprintf(stderr, "\n");
                continue;
            }

            for (size_t i = 0; i < ins->numDefs(); i++)
                fprintf(stderr, " [def %s]", ins->getDef(i)->toString());

            for (size_t i = 0; i < ins->numTemps(); i++) {
                LDefinition* temp = ins->getTemp(i);
                if (!temp->isBogusTemp()) {
                    fprintf(stderr, " [temp v%u %s]",
                            info.temps[i].virtualRegister(), temp->toString());
                }
            }

            size_t index = 0;
            for (LInstruction::InputIterator alloc(*ins); alloc.more(); alloc.next()) {
                fprintf(stderr, " [use %s", info.inputs[index++].toString());
                if (!alloc->isConstant())
                    fprintf(stderr, " %s", alloc->toString());
                fprintf(stderr, "]");
            }

            fprintf(stderr, "\n");
        }
    }

    // Print discovered allocations at the ends of blocks, in the order they
    // were discovered.
    Vector<IntegrityItem, 20, SystemAllocPolicy> seenOrdered;
    seenOrdered.appendN(IntegrityItem(), seen.count());

    for (IntegrityItemSet::Enum iter(seen); !iter.empty(); iter.popFront()) {
        IntegrityItem item = iter.front();
        seenOrdered[item.index] = item;
    }

    if (!seenOrdered.empty()) {
        fprintf(stderr, "Intermediate Allocations:\n");

        for (size_t i = 0; i < seenOrdered.length(); i++) {
            IntegrityItem item = seenOrdered[i];
            fprintf(stderr, "  block %u reg v%u alloc %s\n",
                    item.block->mir()->id(), item.vreg, item.alloc.toString());
        }
    }

    fprintf(stderr, "\n");
#endif
}
bool
AllocationIntegrityState::checkIntegrity(LBlock* block, LInstruction* ins,
                                         uint32_t vreg, LAllocation alloc,
                                         bool populateSafepoints)
{
    for (LInstructionReverseIterator iter(block->rbegin(ins)); iter != block->rend(); iter++) {
        ins = *iter;

        // Follow values through assignments in move groups. All assignments in
        // a move group are considered to happen simultaneously, so stop after
        // the first matching move is found.
        if (ins->isMoveGroup()) {
            LMoveGroup* group = ins->toMoveGroup();
            for (int i = group->numMoves() - 1; i >= 0; i--) {
                if (*group->getMove(i).to() == alloc) {
                    alloc = *group->getMove(i).from();
                    break;
                }
            }
        }

        const InstructionInfo& info = instructions[ins->id()];

        // Make sure the physical location being tracked is not clobbered by
        // another instruction, and that if the originating vreg definition is
        // found that it is writing to the tracked location.
        for (size_t i = 0; i < ins->numDefs(); i++) {
            LDefinition* def = ins->getDef(i);
            if (def->isBogusTemp())
                continue;
            if (info.outputs[i].virtualRegister() == vreg) {
                MOZ_ASSERT(*def->output() == alloc);

                // Found the original definition, done scanning.
                return true;
            } else {
                MOZ_ASSERT(*def->output() != alloc);
            }
        }

        for (size_t i = 0; i < ins->numTemps(); i++) {
            LDefinition* temp = ins->getTemp(i);
            if (!temp->isBogusTemp())
                MOZ_ASSERT(*temp->output() != alloc);
        }

        if (ins->safepoint()) {
            if (!checkSafepointAllocation(ins, vreg, alloc, populateSafepoints))
                return false;
        }
    }

    // Phis are effectless, but change the vreg we are tracking. Check if there
    // is one which produced this vreg. We need to follow back through the phi
    // inputs as it is not guaranteed the register allocator filled in physical
    // allocations for the inputs and outputs of the phis.
    for (size_t i = 0; i < block->numPhis(); i++) {
        const InstructionInfo& info = blocks[block->mir()->id()].phis[i];
        LPhi* phi = block->getPhi(i);
        if (info.outputs[0].virtualRegister() == vreg) {
            for (size_t j = 0, jend = phi->numOperands(); j < jend; j++) {
                uint32_t newvreg = info.inputs[j].toUse()->virtualRegister();
                LBlock* predecessor = block->mir()->getPredecessor(j)->lir();
                if (!addPredecessor(predecessor, newvreg, alloc))
                    return false;
            }
            return true;
        }
    }

    // No phi which defined the vreg we are tracking, follow back through all
    // predecessors with the existing vreg.
    for (size_t i = 0, iend = block->mir()->numPredecessors(); i < iend; i++) {
        LBlock* predecessor = block->mir()->getPredecessor(i)->lir();
        if (!addPredecessor(predecessor, vreg, alloc))
            return false;
    }

    return true;
}
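
// checkIntegrity scans backwards from a use toward its definition, fanning
// out across predecessors via addPredecessor. For that traversal to
// terminate on cyclic control flow, each (block, vreg, allocation) triple
// must be enqueued at most once. A sketch of that dedup pattern, using
// standard containers rather than the engine's own (hypothetical names;
// this is an assumption about addPredecessor's bookkeeping, not its code):
#include <cstdint>
#include <set>
#include <tuple>
#include <vector>

using SketchItem = std::tuple<uint32_t /*block*/, uint32_t /*vreg*/, uint32_t /*alloc*/>;

static bool sketchAddPredecessor(std::set<SketchItem>& seen,
                                 std::vector<SketchItem>& worklist,
                                 const SketchItem& item)
{
    if (!seen.insert(item).second)
        return true;           // already seen: re-checking proves nothing new
    worklist.push_back(item);  // check later with another backwards scan
    return true;
}
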
bool
GreedyAllocator::allocateRegisters()
{
    // Allocate registers bottom-up, such that we see all uses before their
    // definitions.
    for (size_t i = graph.numBlocks() - 1; i < graph.numBlocks(); i--) {
        LBlock *block = graph.getBlock(i);

        IonSpew(IonSpew_RegAlloc, "Allocating block %d", (uint32)i);

        // All registers should be free.
        JS_ASSERT(state.free == RegisterSet::All());

        // Allocate stack for any phis.
        for (size_t j = 0; j < block->numPhis(); j++) {
            LPhi *phi = block->getPhi(j);
            VirtualRegister *vreg = getVirtualRegister(phi->getDef(0));
            allocateStack(vreg);
        }

        // Allocate registers.
        if (!allocateRegistersInBlock(block))
            return false;

        LMoveGroup *entrySpills = block->getEntryMoveGroup();

        // We've reached the top of the block. Spill all registers by inserting
        // moves from their stack locations.
        for (AnyRegisterIterator iter(RegisterSet::All()); iter.more(); iter++) {
            VirtualRegister *vreg = state[*iter];
            if (!vreg) {
                JS_ASSERT(state.free.has(*iter));
                continue;
            }
            JS_ASSERT(vreg->reg() == *iter);
            JS_ASSERT(!state.free.has(vreg->reg()));
            allocateStack(vreg);

            LAllocation *from = LAllocation::New(vreg->backingStack());
            LAllocation *to = LAllocation::New(vreg->reg());
            if (!entrySpills->add(from, to))
                return false;

            killReg(vreg);
            vreg->unsetRegister();
        }

        // Before killing phis, ensure that each phi input has its own stack
        // allocation. This ensures we won't allocate the same slot for any phi
        // as its input, which technically may be legal (since the phi becomes
        // the last use of the slot), but we avoid for sanity.
        for (size_t i = 0; i < block->numPhis(); i++) {
            LPhi *phi = block->getPhi(i);
            for (size_t j = 0; j < phi->numOperands(); j++) {
                VirtualRegister *in = getVirtualRegister(phi->getOperand(j)->toUse());
                allocateStack(in);
            }
        }

        // Kill phis.
        for (size_t i = 0; i < block->numPhis(); i++) {
            LPhi *phi = block->getPhi(i);
            VirtualRegister *vr = getVirtualRegister(phi->getDef(0));
            JS_ASSERT(!vr->hasRegister());
            killStack(vr);
        }
    }
    return true;
}
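
// The block loop above iterates in reverse with an unsigned index: when i
// reaches 0, "i--" wraps around to SIZE_MAX, which fails the
// "i < graph.numBlocks()" test and terminates the loop. A standalone
// illustration of the idiom (it also handles numBlocks == 0, since the
// initial wraparound fails the test immediately):
#include <cstddef>
#include <cstdio>

static void sketchReverseIteration(size_t numBlocks)
{
    for (size_t i = numBlocks - 1; i < numBlocks; i--)
        printf("block %zu\n", i);  // visits numBlocks-1 down to 0
}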