void JIT::compilePutByIdSlowCase(int baseVReg, Identifier* ident, int, Vector<SlowCaseEntry>::iterator& iter, unsigned propertyAccessInstructionIndex)
{
    linkSlowCaseIfNotJSCell(iter, baseVReg);
    linkSlowCase(iter);

    emitPutJITStubArgConstant(reinterpret_cast<unsigned>(ident), 2);
    emitPutJITStubArg(X86::eax, 1);
    emitPutJITStubArg(X86::edx, 3);
    JmpSrc call = emitCTICall(Interpreter::cti_op_put_by_id);

    // Track the location of the call; this will be used to recover repatch information.
    m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].callReturnLocation = call;
}
void JIT::compileOpCallSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter, unsigned, OpcodeID opcodeID)
{
    int callee = instruction[1].u.operand;
    int argCount = instruction[2].u.operand;
    int registerOffset = instruction[3].u.operand;

    linkSlowCaseIfNotJSCell(iter, callee);
    linkSlowCase(iter);

    JITStubCall stubCall(this, opcodeID == op_construct ? cti_op_construct_NotJSConstruct : cti_op_call_NotJSFunction);
    stubCall.addArgument(callee);
    stubCall.addArgument(JIT::Imm32(registerOffset));
    stubCall.addArgument(JIT::Imm32(argCount));
    stubCall.call();

    sampleCodeBlock(m_codeBlock);
}
void JIT::compileOpCallVarargsSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
{
    int callee = instruction[1].u.operand;

    linkSlowCaseIfNotJSCell(iter, callee);
    Jump notCell = jump();
    linkSlowCase(iter);
    // Need to restore the cell tag in regT1 because it was clobbered.
    move(TrustedImm32(JSValue::CellTag), regT1);
    notCell.link(this);

    JITStubCall stubCall(this, cti_op_call_NotJSFunction);
    stubCall.addArgument(regT1, regT0);
    stubCall.addArgument(regT3);
    stubCall.addArgument(regT2);
    stubCall.call();

    sampleCodeBlock(m_codeBlock);
}
void JIT::compileGetByIdSlowCase(int resultVReg, int baseVReg, Identifier* ident, Vector<SlowCaseEntry>::iterator& iter, unsigned propertyAccessInstructionIndex)
{
    // As for the hot path of get_by_id above, we ensure that we can use an architecture-specific offset
    // so that we need only track one pointer into the slow case code - we track a pointer to the location
    // of the call (which we can use to look up the repatch information), but should an array-length or
    // prototype access trampoline fail we want to bail out back to here. To do so we can subtract back
    // the distance from the call to the head of the slow case.
    linkSlowCaseIfNotJSCell(iter, baseVReg);
    linkSlowCase(iter);

#ifndef NDEBUG
    JmpDst coldPathBegin = __ label();
#endif
    emitPutJITStubArg(X86::eax, 1);
    emitPutJITStubArgConstant(reinterpret_cast<unsigned>(ident), 2);
    JmpSrc call = emitCTICall(Interpreter::cti_op_get_by_id);
    ASSERT(X86Assembler::getDifferenceBetweenLabels(coldPathBegin, call) == repatchOffsetGetByIdSlowCaseCall);
    emitPutVirtualRegister(resultVReg);

    // Track the location of the call; this will be used to recover repatch information.
    m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].callReturnLocation = call;
}