void AdaptiveEnhanceLuong::GetBlockStatistics()
{
   for (int y = mHalfBlockSize; y < mHeight - mHalfBlockSize; y++)
   {
      for (int x = mHalfBlockSize; x < mWidth - mHalfBlockSize; x++)
      {
         int xTopLeft     = x - mHalfBlockSize;
         int yTopLeft     = y - mHalfBlockSize;
         int xBottomRight = x + mHalfBlockSize;
         int yBottomRight = y + mHalfBlockSize;

         mpMeanGrid->SetValue( x, y,
                               NumberGridTools<double>::ComputeLocalMean( mpLastStepGrid,
                                                                          xTopLeft, yTopLeft,
                                                                          xBottomRight, yBottomRight ) );
         mpVarianceGrid->SetValue( x, y,
                                   NumberGridTools<double>::ComputeLocalVariance( mpLastStepGrid,
                                                                                  xTopLeft, yTopLeft,
                                                                                  xBottomRight, yBottomRight,
                                                                                  mpMeanGrid->GetValue( x, y ) ) );
      }
   }

   double minVariance, maxVariance;
   ArrayGrid<double>* pCroppedVarianceGrid = GridExtender<double>::CropBorder( mpVarianceGrid, mWindowSize, mWindowSize );
   NumberGridTools<double>::GetMinMax( pCroppedVarianceGrid, minVariance, maxVariance );

   bool useDataMinMax = true;
   int binsize = (int)( ( maxVariance - minVariance ) / 500.0 );
   IntHistogram* pHistogram = new IntHistogram( pCroppedVarianceGrid, useDataMinMax, 0.1, 0.9, binsize, 0 );
   mLowerBound = pHistogram->GetLowerBound();
   mUpperBound = pHistogram->GetUpperBound();

   delete pHistogram;
   delete pCroppedVarianceGrid;
}
void HRInto_G1RemSet::print_summary_info() {
  G1CollectedHeap* g1 = G1CollectedHeap::heap();

#if CARD_REPEAT_HISTO
  gclog_or_tty->print_cr("\nG1 card_repeat count histogram: ");
  gclog_or_tty->print_cr("  # of repeats --> # of cards with that number.");
  card_repeat_count.print_on(gclog_or_tty);
#endif

  if (FILTEROUTOFREGIONCLOSURE_DOHISTOGRAMCOUNT) {
    gclog_or_tty->print_cr("\nG1 rem-set out-of-region histogram: ");
    gclog_or_tty->print_cr("  # of CS ptrs --> # of cards with that number.");
    out_of_histo.print_on(gclog_or_tty);
  }
  gclog_or_tty->print_cr("\n Concurrent RS processed %d cards",
                         _conc_refine_cards);
  DirtyCardQueueSet& dcqs = JavaThread::dirty_card_queue_set();
  jint tot_processed_buffers =
    dcqs.processed_buffers_mut() + dcqs.processed_buffers_rs_thread();
  gclog_or_tty->print_cr("  Of %d completed buffers:", tot_processed_buffers);
  gclog_or_tty->print_cr("     %8d (%5.1f%%) by conc RS threads.",
                         dcqs.processed_buffers_rs_thread(),
                         100.0*(float)dcqs.processed_buffers_rs_thread()/
                         (float)tot_processed_buffers);
  gclog_or_tty->print_cr("     %8d (%5.1f%%) by mutator threads.",
                         dcqs.processed_buffers_mut(),
                         100.0*(float)dcqs.processed_buffers_mut()/
                         (float)tot_processed_buffers);
  gclog_or_tty->print_cr("  Conc RS threads times(s)");
  PrintRSThreadVTimeClosure p;
  gclog_or_tty->print("     ");
  g1->concurrent_g1_refine()->threads_do(&p);
  gclog_or_tty->print_cr("");

  if (G1UseHRIntoRS) {
    HRRSStatsIter blk;
    g1->heap_region_iterate(&blk);
    gclog_or_tty->print_cr("  Total heap region rem set sizes = " SIZE_FORMAT "K."
                           "  Max = " SIZE_FORMAT "K.",
                           blk.total_mem_sz()/K, blk.max_mem_sz()/K);
    gclog_or_tty->print_cr("  Static structures = " SIZE_FORMAT "K,"
                           " free_lists = " SIZE_FORMAT "K.",
                           HeapRegionRemSet::static_mem_size()/K,
                           HeapRegionRemSet::fl_mem_size()/K);
    gclog_or_tty->print_cr("    %d occupied cards represented.",
                           blk.occupied());
    gclog_or_tty->print_cr("    Max sz region = [" PTR_FORMAT ", " PTR_FORMAT " )"
                           ", cap = " SIZE_FORMAT "K, occ = " SIZE_FORMAT "K.",
                           blk.max_mem_sz_region()->bottom(),
                           blk.max_mem_sz_region()->end(),
                           (blk.max_mem_sz_region()->rem_set()->mem_size() + K - 1)/K,
                           (blk.max_mem_sz_region()->rem_set()->occupied() + K - 1)/K);
    gclog_or_tty->print_cr("    Did %d coarsenings.",
                           HeapRegionRemSet::n_coarsenings());
  }
}
void ct_freq_update_histo_and_reset() {
  for (size_t j = 0; j < ct_freq_sz; j++) {
    card_repeat_count.add_entry(ct_freq[j]);
    ct_freq[j] = 0;
  }
}
void G1RemSet::print_summary_info(G1RemSetSummary* summary, const char* header) {
  assert(summary != NULL, "just checking");

  if (header != NULL) {
    gclog_or_tty->print_cr("%s", header);
  }

#if CARD_REPEAT_HISTO
  gclog_or_tty->print_cr("\nG1 card_repeat count histogram: ");
  gclog_or_tty->print_cr("  # of repeats --> # of cards with that number.");
  card_repeat_count.print_on(gclog_or_tty);
#endif

  summary->print_on(gclog_or_tty);
}
void HRInto_G1RemSet::concurrentRefineOneCard_impl(jbyte* card_ptr, int worker_i) {
  // Construct the region representing the card.
  HeapWord* start = _ct_bs->addr_for(card_ptr);
  // And find the region containing it.
  HeapRegion* r = _g1->heap_region_containing(start);
  assert(r != NULL, "unexpected null");

  HeapWord* end   = _ct_bs->addr_for(card_ptr + 1);
  MemRegion dirtyRegion(start, end);

#if CARD_REPEAT_HISTO
  init_ct_freq_table(_g1->g1_reserved_obj_bytes());
  ct_freq_note_card(_ct_bs->index_for(start));
#endif

  UpdateRSOopClosure update_rs_oop_cl(this, worker_i);
  update_rs_oop_cl.set_from(r);
  FilterOutOfRegionClosure filter_then_update_rs_oop_cl(r, &update_rs_oop_cl);

  // Undirty the card.
  *card_ptr = CardTableModRefBS::clean_card_val();
  // We must complete this write before we do any of the reads below.
  OrderAccess::storeload();
  // And process it, being careful of unallocated portions of TLAB's.
  HeapWord* stop_point =
    r->oops_on_card_seq_iterate_careful(dirtyRegion,
                                        &filter_then_update_rs_oop_cl);

  // If stop_point is non-null, then we encountered an unallocated region
  // (perhaps the unfilled portion of a TLAB.)  For now, we'll dirty the
  // card and re-enqueue: if we put off the card until a GC pause, then the
  // unallocated portion will be filled in.  Alternatively, we might try
  // the full complexity of the technique used in "regular" precleaning.
  if (stop_point != NULL) {
    // The card might have gotten re-dirtied and re-enqueued while we
    // worked.  (In fact, it's pretty likely.)
    if (*card_ptr != CardTableModRefBS::dirty_card_val()) {
      *card_ptr = CardTableModRefBS::dirty_card_val();
      MutexLockerEx x(Shared_DirtyCardQ_lock,
                      Mutex::_no_safepoint_check_flag);
      DirtyCardQueue* sdcq =
        JavaThread::dirty_card_queue_set().shared_dirty_card_queue();
      sdcq->enqueue(card_ptr);
    }
  } else {
    out_of_histo.add_entry(filter_then_update_rs_oop_cl.out_of_region());
    _conc_refine_cards++;
  }
}
bool G1RemSet::concurrentRefineOneCard_impl(jbyte* card_ptr, int worker_i,
                                            bool check_for_refs_into_cset) {
  // Construct the region representing the card.
  HeapWord* start = _ct_bs->addr_for(card_ptr);
  // And find the region containing it.
  HeapRegion* r = _g1->heap_region_containing(start);
  assert(r != NULL, "unexpected null");

  HeapWord* end   = _ct_bs->addr_for(card_ptr + 1);
  MemRegion dirtyRegion(start, end);

#if CARD_REPEAT_HISTO
  init_ct_freq_table(_g1->max_capacity());
  ct_freq_note_card(_ct_bs->index_for(start));
#endif

  assert(!check_for_refs_into_cset || _cset_rs_update_cl[worker_i] != NULL,
         "sanity");

  UpdateRSOrPushRefOopClosure update_rs_oop_cl(_g1,
                                               _g1->g1_rem_set(),
                                               _cset_rs_update_cl[worker_i],
                                               check_for_refs_into_cset,
                                               worker_i);
  update_rs_oop_cl.set_from(r);

  TriggerClosure trigger_cl;
  FilterIntoCSClosure into_cs_cl(NULL, _g1, &trigger_cl);
  InvokeIfNotTriggeredClosure invoke_cl(&trigger_cl, &into_cs_cl);
  Mux2Closure mux(&invoke_cl, &update_rs_oop_cl);

  FilterOutOfRegionClosure filter_then_update_rs_oop_cl(r,
                        (check_for_refs_into_cset ?
                                (OopClosure*)&mux :
                                (OopClosure*)&update_rs_oop_cl));

  // Undirty the card.
  *card_ptr = CardTableModRefBS::clean_card_val();
  // We must complete this write before we do any of the reads below.
  OrderAccess::storeload();

  // And process it, being careful of unallocated portions of TLAB's.

  // The region for the current card may be a young region. The
  // current card may have been a card that was evicted from the
  // card cache. When the card was inserted into the cache, we had
  // determined that its region was non-young. While in the cache,
  // the region may have been freed during a cleanup pause, reallocated
  // and tagged as young.
  //
  // We wish to filter out cards for such a region but the current
  // thread, if we're running concurrently, may "see" the young type
  // change at any time (so an earlier "is_young" check may pass or
  // fail arbitrarily). We tell the iteration code to perform this
  // filtering when it has been determined that there has been an actual
  // allocation in this region, making it safe to check the young type.
  bool filter_young = true;

  HeapWord* stop_point =
    r->oops_on_card_seq_iterate_careful(dirtyRegion,
                                        &filter_then_update_rs_oop_cl,
                                        filter_young);

  // If stop_point is non-null, then we encountered an unallocated region
  // (perhaps the unfilled portion of a TLAB.)  For now, we'll dirty the
  // card and re-enqueue: if we put off the card until a GC pause, then the
  // unallocated portion will be filled in.  Alternatively, we might try
  // the full complexity of the technique used in "regular" precleaning.
  if (stop_point != NULL) {
    // The card might have gotten re-dirtied and re-enqueued while we
    // worked.  (In fact, it's pretty likely.)
    if (*card_ptr != CardTableModRefBS::dirty_card_val()) {
      *card_ptr = CardTableModRefBS::dirty_card_val();
      MutexLockerEx x(Shared_DirtyCardQ_lock,
                      Mutex::_no_safepoint_check_flag);
      DirtyCardQueue* sdcq =
        JavaThread::dirty_card_queue_set().shared_dirty_card_queue();
      sdcq->enqueue(card_ptr);
    }
  } else {
    out_of_histo.add_entry(filter_then_update_rs_oop_cl.out_of_region());
    _conc_refine_cards++;
  }

  return trigger_cl.value();
}