void DamageDealt(Unit* /*victim*/, uint32& damage, DamageEffectType /*damageType*/)
{
    auto tyrannus = this->tyrannus();
    if (!tyrannus)
        return;

    // Mirror the damage onto Tyrannus' current target via the brand.
    if (Unit* target = tyrannus->getVictim())
        me->CastCustomSpell(SPELL_OVERLORD_BRAND_DAMAGE, SPELLVALUE_BASE_POINT0, damage, target, true, NULL, NULL, tyrannus->GetGUID());
}
void Babality::onPreFinish(std::string name)
{
    SoundManager::Instance()->playSoundOnce("pre_babality", 0);
    SDL_Delay(2000);

    Character* victim = getVictim(name);
    Character* winner = getWinner(name);

    winner->setMovement("");
    winner->isBabality = true;

    victim->setMovement(BABALITY_MOVEMENT);
    victim->isLazy = false;
    victim->setCurrentSprite();
    victim->isBabality = true;
}
void initLog()
{
    uart_print("Initializing Write Log Space...\r\n");
    uart_print("Initializing clean list...");
    //testCleanList();
    cleanListInit(&cleanListDataWrite, CleanList(0), LOG_BLK_PER_BANK);
    uart_print("done\r\n");

    //int off = __builtin_offsetof(LogCtrlBlock, increaseLpn);

    for (int bank = 0; bank < NUM_BANKS; bank++)
    {
        adaptiveStepDown[bank] = initStepDown;
        adaptiveStepUp[bank] = initStepUp;
        nStepUps[bank] = 0;
        nStepDowns[bank] = 0;

        for (int lbn = 0; lbn < LOG_BLK_PER_BANK; lbn++)
        {
            cleanListPush(&cleanListDataWrite, bank, lbn);
        }

        UINT32 lbn = cleanListPop(&cleanListDataWrite, bank);

        hotLogCtrl[bank] = (LogCtrlBlock) {
            .logLpn = lbn * PAGES_PER_BLK,
            .lpnsListAddr = LPNS_BUF_BASE_1(bank),
            .logBufferAddr = HOT_LOG_BUF(bank),
            .chunkPtr = 0,
            .increaseLpn = increaseLpnHotBlkFirstUsage,
            .updateChunkPtr = updateChunkPtr,
            .nextLowPageOffset = INVALID,
            .allChunksInLogAreValid = TRUE,
            .useRecycledPage = FALSE,
            .precacheDone = TRUE,
        };

        for (int chunk = 0; chunk < CHUNKS_PER_PAGE; ++chunk)
        {
            hotLogCtrl[bank].dataLpn[chunk] = INVALID;
            hotLogCtrl[bank].chunkIdx[chunk] = INVALID;
        }

        lbn = cleanListPop(&cleanListDataWrite, bank);

        coldLogCtrl[bank] = (LogCtrlBlock) {
            .logLpn = lbn * PAGES_PER_BLK,
            .lpnsListAddr = LPNS_BUF_BASE_2(bank),
            .logBufferAddr = COLD_LOG_BUF(bank),
            .chunkPtr = 0,
            .increaseLpn = increaseLpnColdBlk,
            .updateChunkPtr = updateChunkPtr,
            .nextLowPageOffset = INVALID,
            .allChunksInLogAreValid = TRUE,
            .useRecycledPage = FALSE,
            .precacheDone = TRUE,
        };

        for (int chunk = 0; chunk < CHUNKS_PER_PAGE; ++chunk)
        {
            coldLogCtrl[bank].dataLpn[chunk] = INVALID;
            coldLogCtrl[bank].chunkIdx[chunk] = INVALID;
        }

        nValidChunksFromHeap[bank] = INVALID;
    }
}

static void findNewLpnForColdLog(const UINT32 bank, LogCtrlBlock* ctrlBlock)
{
    uart_print("findNewLpnForColdLog bank ");
    uart_print_int(bank);

    if (cleanListSize(&cleanListDataWrite, bank) > 2)
    {
        uart_print(" use clean blk\r\n");
        uart_print("cleanList size = ");
        uart_print_int(cleanListSize(&cleanListDataWrite, bank));
        uart_print("\r\n");

        UINT32 lbn = cleanListPop(&cleanListDataWrite, bank);
        ctrlBlock[bank].logLpn = lbn * PAGES_PER_BLK;
        ctrlBlock[bank].increaseLpn = increaseLpnColdBlk;
    }
    else
    {
        if (reuseCondition(bank))
        {
#if PrintStats
            uart_print_level_1("REUSECOLD\r\n");
#endif
            uart_print(" second usage\r\n");

            UINT32 lbn = getVictim(&heapDataFirstUsage, bank);
            UINT32 nValidChunks = getVictimValidPagesNumber(&heapDataFirstUsage, bank);
            resetValidChunksAndRemove(&heapDataFirstUsage, bank, lbn, CHUNKS_PER_LOG_BLK_FIRST_USAGE);
            resetValidChunksAndRemove(&heapDataSecondUsage, bank, lbn, CHUNKS_PER_LOG_BLK_SECOND_USAGE);
            resetValidChunksAndRemove(&heapDataCold, bank, lbn, nValidChunks);

            ctrlBlock[bank].logLpn = (lbn * PAGES_PER_BLK) + 2;
            ctrlBlock[bank].increaseLpn = increaseLpnColdBlkReused;

            // Read the lpns list from the max low page (125) where it was
            // previously written by increaseLpnHotBlkFirstUsage.
            nand_page_ptread(bank, get_log_vbn(bank, lbn), 125, 0,
                             (CHUNK_ADDR_BYTES * CHUNKS_PER_LOG_BLK + BYTES_PER_SECTOR - 1) / BYTES_PER_SECTOR,
                             ctrlBlock[bank].lpnsListAddr, RETURN_WHEN_DONE);
        }
        else
        {
            uart_print(" get new block\r\n");

            UINT32 lbn = cleanListPop(&cleanListDataWrite, bank);
            ctrlBlock[bank].logLpn = lbn * PAGES_PER_BLK;
            ctrlBlock[bank].increaseLpn = increaseLpnColdBlk;

            while (cleanListSize(&cleanListDataWrite, bank) < 2)
            {
#if PrintStats
                uart_print_level_1("GCCOLD\r\n");
#endif
                garbageCollectLog(bank);
            }
        }
    }
}

void increaseLpnColdBlkReused(UINT32 const bank, LogCtrlBlock* ctrlBlock)
{
    uart_print("increaseLpnColdBlkReused bank ");
    uart_print_int(bank);
    uart_print("\r\n");

    UINT32 lpn = ctrlBlock[bank].logLpn;
    UINT32 pageOffset = LogPageToOffset(lpn);

    if (pageOffset == UsedPagesPerLogBlk - 1)
    {
        UINT32 lbn = get_log_lbn(lpn);
        nand_page_ptprogram(bank, get_log_vbn(bank, lbn), PAGES_PER_BLK - 1, 0,
                            (CHUNK_ADDR_BYTES * CHUNKS_PER_LOG_BLK + BYTES_PER_SECTOR - 1) / BYTES_PER_SECTOR,
                            ctrlBlock[bank].lpnsListAddr, RETURN_WHEN_DONE);
        mem_set_dram(ctrlBlock[bank].lpnsListAddr, INVALID, (CHUNKS_PER_BLK * CHUNK_ADDR_BYTES));
        insertBlkInHeap(&heapDataCold, bank, lbn);
        findNewLpnForColdLog(bank, ctrlBlock);
    }
    else
    {
        // Reused blocks only program low pages, so advance by 2.
        ctrlBlock[bank].logLpn = lpn + 2;
    }

    uart_print("increaseLpnColdBlkReused (bank=");
    uart_print_int(bank);
    uart_print(") new lpn ");
    uart_print_int(ctrlBlock[bank].logLpn);
    uart_print("\r\n");
}
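The expression `(CHUNK_ADDR_BYTES * CHUNKS_PER_LOG_BLK + BYTES_PER_SECTOR - 1) / BYTES_PER_SECTOR`, repeated at every `nand_page_ptread`/`nand_page_ptprogram` call site above, is the standard ceiling-division idiom for turning a byte count into a whole number of sectors. A minimal sketch of the idiom as a helper; the sector size of 512 and the helper name are assumptions for illustration, not values from this firmware:

```c
#include <stdint.h>

/* Assumed sector size for illustration only; the real value comes from the
 * platform configuration headers. */
#define BYTES_PER_SECTOR 512u

/* Smallest number of whole sectors that can hold `bytes` bytes
 * (ceiling division, as used for the lpns-list transfers above). */
static uint32_t sectors_for_bytes(uint32_t bytes)
{
    return (bytes + BYTES_PER_SECTOR - 1u) / BYTES_PER_SECTOR;
}
```

Factoring the expression into a helper like this would avoid repeating the long formula at each NAND call site.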
void findNewLpnForHotLog(const UINT32 bank, LogCtrlBlock* ctrlBlock)
{
    uart_print("findNewLpnForHotLog bank ");
    uart_print_int(bank);

    if (cleanListSize(&cleanListDataWrite, bank) > 2)
    {
        uart_print(" use clean blk\r\n");
        uart_print("cleanList size = ");
        uart_print_int(cleanListSize(&cleanListDataWrite, bank));
        uart_print("\r\n");

        UINT32 lbn = cleanListPop(&cleanListDataWrite, bank);
        ctrlBlock[bank].logLpn = lbn * PAGES_PER_BLK;
        ctrlBlock[bank].increaseLpn = increaseLpnHotBlkFirstUsage; // we are not using a recycled block anymore
        ctrlBlock[bank].updateChunkPtr = updateChunkPtr;           // we are not using a recycled block anymore
        ctrlBlock[bank].useRecycledPage = FALSE;
    }
    else
    {
        //if ((heapDataFirstUsage.nElInHeap[bank] > 0) && ((float)validMin > tot))
        //if (heapDataFirstUsage.nElInHeap[bank] > hotFirstAccumulated[bank])
#if AlwaysReuse
        if (reuseConditionHot(bank))
#else
        if (reuseCondition(bank))
#endif
        {
#if PrintStats
            uart_print_level_1("REUSEHOT\r\n");
#endif
            uart_print(" second usage\r\n");

            UINT32 lbn = getVictim(&heapDataFirstUsage, bank);
            UINT32 nValidChunks = getVictimValidPagesNumber(&heapDataFirstUsage, bank);
            resetValidChunksAndRemove(&heapDataFirstUsage, bank, lbn, CHUNKS_PER_LOG_BLK_FIRST_USAGE);
            //resetValidChunksAndRemove(&heapDataSecondUsage, bank, lbn, CHUNKS_PER_LOG_BLK_SECOND_USAGE);
            resetValidChunksAndRemove(&heapDataSecondUsage, bank, lbn, nValidChunks);
            resetValidChunksAndRemove(&heapDataCold, bank, lbn, CHUNKS_PER_LOG_BLK_SECOND_USAGE);

            ctrlBlock[bank].logLpn = lbn * PAGES_PER_BLK;
            ctrlBlock[bank].increaseLpn = increaseLpnHotBlkSecondUsage;
            ctrlBlock[bank].updateChunkPtr = updateChunkPtrRecycledPage;

            // Read the lpns list from the max low page (125) where it was
            // previously written by increaseLpnHotBlkFirstUsage.
            nand_page_ptread(bank, get_log_vbn(bank, lbn), 125, 0,
                             (CHUNK_ADDR_BYTES * CHUNKS_PER_LOG_BLK + BYTES_PER_SECTOR - 1) / BYTES_PER_SECTOR,
                             ctrlBlock[bank].lpnsListAddr, RETURN_WHEN_DONE);

            printValidChunksInFirstUsageBlk(bank, ctrlBlock, lbn);

            if (canReuseLowPage(bank, 0, ctrlBlock))
            {
                // Reuse page 0, prefetching immediately.
                precacheLowPage(bank, ctrlBlock);
                ctrlBlock[bank].updateChunkPtr = updateChunkPtrRecycledPage;
                ctrlBlock[bank].useRecycledPage = TRUE;
                ctrlBlock[bank].precacheDone = TRUE;
                ctrlBlock[bank].nextLowPageOffset = 0;
                return;
            }

            ctrlBlock[bank].logLpn++;

            if (canReuseLowPage(bank, 1, ctrlBlock))
            {
                // Reuse page 1, prefetching immediately.
                precacheLowPage(bank, ctrlBlock);
                ctrlBlock[bank].updateChunkPtr = updateChunkPtrRecycledPage;
                ctrlBlock[bank].useRecycledPage = TRUE;
                ctrlBlock[bank].precacheDone = TRUE;
                ctrlBlock[bank].nextLowPageOffset = 1;
                return;
            }
            else
            {
                ctrlBlock[bank].updateChunkPtr = updateChunkPtr;
                ctrlBlock[bank].useRecycledPage = FALSE;
                ctrlBlock[bank].precacheDone = FALSE;
                ctrlBlock[bank].nextLowPageOffset = INVALID;
                increaseLpnHotBlkSecondUsage(bank, ctrlBlock);
            }
        }
        else
        {
            uart_print(" get new block\r\n");
            uart_print("No blks left for second usage\r\n");

            UINT32 lbn = cleanListPop(&cleanListDataWrite, bank);
            ctrlBlock[bank].logLpn = lbn * PAGES_PER_BLK;
            ctrlBlock[bank].increaseLpn = increaseLpnHotBlkFirstUsage; // we are not using a recycled block anymore
            ctrlBlock[bank].updateChunkPtr = updateChunkPtr;           // we are not using a recycled block anymore
            ctrlBlock[bank].useRecycledPage = FALSE;

            while (cleanListSize(&cleanListDataWrite, bank) < cleanBlksAfterGcHot)
            {
#if PrintStats
                uart_print_level_1("GCHOT\r\n");
#endif
                garbageCollectLog(bank);
            }
        }
    }
}
void initGC(UINT32 bank)
{
#if PrintStats
    uart_print_level_1("CNT ");
    uart_print_level_1_int(bank);
    uart_print_level_1(" ");
    uart_print_level_1_int(cleanListSize(&cleanListDataWrite, bank));
    uart_print_level_1(" ");
    uart_print_level_1_int(heapDataFirstUsage.nElInHeap[bank]);
    uart_print_level_1(" ");
    uart_print_level_1_int(heapDataSecondUsage.nElInHeap[bank]);
    uart_print_level_1(" ");
    uart_print_level_1_int(heapDataCold.nElInHeap[bank]);
    uart_print_level_1("\r\n");
#endif

    nValidChunksInBlk[bank] = 0;

    // note(fabio): this version of the GC cleans only completely used blocks (from heapDataSecondUsage).
    UINT32 validCold = getVictimValidPagesNumber(&heapDataCold, bank);
    UINT32 validSecond = getVictimValidPagesNumber(&heapDataSecondUsage, bank);

    uart_print("Valid cold ");
    uart_print_int(validCold);
    uart_print(" valid second ");
    uart_print_int(validSecond);
    uart_print("\r\n");

    if (validCold < ((validSecond * secondHotFactorNum) / secondHotFactorDen))
    {
        uart_print("GC on cold block\r\n");
        nValidChunksFromHeap[bank] = validCold;
        victimLbn[bank] = getVictim(&heapDataCold, bank);
#if PrintStats
#if MeasureGc
        uart_print_level_1("COLD ");
        uart_print_level_1_int(bank);
        uart_print_level_1(" ");
        uart_print_level_1_int(validCold);
        uart_print_level_1("\r\n");
#endif
#endif
    }
    else
    {
        uart_print("GC on second hot block\r\n");
        nValidChunksFromHeap[bank] = validSecond;
        victimLbn[bank] = getVictim(&heapDataSecondUsage, bank);
#if PrintStats
#if MeasureGc
        uart_print_level_1("SECOND ");
        uart_print_level_1_int(bank);
        uart_print_level_1(" ");
        uart_print_level_1_int(validSecond);
        uart_print_level_1("\r\n");
#endif
#endif
    }

    victimVbn[bank] = get_log_vbn(bank, victimLbn[bank]);

    uart_print("initGC, bank ");
    uart_print_int(bank);
    uart_print(" victimLbn ");
    uart_print_int(victimLbn[bank]);
    uart_print(" valid chunks ");
    uart_print_int(nValidChunksFromHeap[bank]);
    uart_print("\r\n");

#if PrintStats
    { // print the Hot First Accumulated parameters
        uart_print_level_1("HFMAX ");
        for (int i = 0; i < NUM_BANKS; ++i)
        {
            uart_print_level_1_int(hotFirstAccumulated[i]);
            uart_print_level_1(" ");
        }
        uart_print_level_1("\r\n");
    }
#endif

    { // Insert new value at position 0 in adaptive window and shift all others
        for (int i = adaptiveWindowSize - 1; i > 0; --i)
        {
            adaptiveWindow[bank][i] = adaptiveWindow[bank][i - 1];
        }
        adaptiveWindow[bank][0] = nValidChunksFromHeap[bank];
    }

    if (nValidChunksFromHeap[bank] > 0)
    {
        // Read twice the lpns list size because there might be the recycled lpns list appended.
        nand_page_ptread(bank, victimVbn[bank], PAGES_PER_BLK - 1, 0,
                         (CHUNK_ADDR_BYTES * CHUNKS_PER_LOG_BLK + BYTES_PER_SECTOR - 1) / BYTES_PER_SECTOR,
                         VICTIM_LPN_LIST(bank), RETURN_WHEN_DONE);
        gcOnRecycledPage[bank] = FALSE;
        pageOffset[bank] = 0;
        gcState[bank] = GcRead;
    }
    else
    {
        resetValidChunksAndRemove(&heapDataFirstUsage, bank, victimLbn[bank], CHUNKS_PER_LOG_BLK_FIRST_USAGE);
        resetValidChunksAndRemove(&heapDataSecondUsage, bank, victimLbn[bank], CHUNKS_PER_LOG_BLK_SECOND_USAGE);
        resetValidChunksAndRemove(&heapDataCold, bank, victimLbn[bank], CHUNKS_PER_LOG_BLK_SECOND_USAGE);
        nand_block_erase(bank, victimVbn[bank]);
        cleanListPush(&cleanListDataWrite, bank, victimLbn[bank]);
#if MeasureGc
        uart_print_level_2("GCW ");
        uart_print_level_2_int(bank);
        uart_print_level_2(" ");
        uart_print_level_2_int(0);
        uart_print_level_2(" ");
        uart_print_level_2_int(nValidChunksFromHeap[bank]);
        uart_print_level_2("\r\n");
#endif
        gcState[bank] = GcIdle;
    }
}
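The adaptive-window update in initGC (shift every sample right by one, store the newest at index 0, drop the oldest) can be isolated as a small helper for clarity. A sketch of that scheme; the fixed WINDOW_SIZE and the function name are illustrative stand-ins, not the firmware's real adaptiveWindowSize or API:

```c
#include <stdint.h>

#define WINDOW_SIZE 4 /* illustrative stand-in for adaptiveWindowSize */

/* Shift the window right by one and store the newest sample at index 0,
 * dropping the oldest -- the same scheme initGC uses for adaptiveWindow[bank]. */
static void window_push(uint32_t window[WINDOW_SIZE], uint32_t sample)
{
    for (int i = WINDOW_SIZE - 1; i > 0; --i)
        window[i] = window[i - 1];
    window[0] = sample;
}
```

A ring buffer with a head index would avoid the O(n) shift, but for a window this small the copy is negligible and keeps index 0 always the most recent sample.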