A repository to store my stuff for Belle2 analysis. Stefano Lacaprara
Some literature:
- BelleII http://dx.doi.org/10.1007/JHEP10(2014)165
- BaBar http://journals.aps.org/prd/abstract/10.1103/PhysRevD.79.052003
Analysis channels:
1. eta' -> eta(->gg) pi+ pi-, Ks -> pi+ pi-
2. eta' -> eta(->gg) pi+ pi-, Ks -> pi0 pi0
3. eta' -> eta(->gg) pi+ pi-, KL (not yet)
4. eta' -> eta(->pi+ pi- pi0) pi+ pi-, Ks -> pi+ pi-
5. eta' -> eta(->pi+ pi- pi0) pi+ pi-, Ks -> pi0 pi0
6. eta' -> eta(->pi+ pi- pi0) pi+ pi-, KL (not yet)
- Starting with your own generated files: create the signal root file
  1. `cd steering_files`
  2. edit `GenSimAndReco-withoutBeamBkg.py` and select the correct `dec` file and the output file, as well as the wanted number of events
  3. run `basf2 GenSimAndReco-withoutBeamBkg.py`: the time needed depends on the number of events generated
  4. check the output file with `basf2 LoadMCParticles.py`
  5. I keep the produced root files in `root_files`
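The steps above can be wrapped in a small driver script. A minimal dry-run sketch follows: the `run` helper and the `would run:` prefix are illustrative (drop the `echo` inside `run` to actually execute the commands), while the steering-file names are the ones used above.

```shell
#!/bin/sh
# Dry-run sketch of the generation workflow above.
# "run" only prints each command; remove the echo to execute for real.
STEERING=GenSimAndReco-withoutBeamBkg.py

run() { echo "would run: $*"; }

run cd steering_files
run basf2 "$STEERING"             # time depends on the number of events
run basf2 LoadMCParticles.py      # sanity-check the produced file
run mv '*.root' ../root_files/    # keep the produced root files
```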
- Run on the officially produced MC files
  1. the dataset list (MC5) is available at https://belle2.cc.kek.jp/~twiki/bin/view/Computing/MC5Release4Physics
  2. the first step is to skim the large dataset; the selection is then performed on the skim
  3. the skim selection is `SkimEtaPrK0.py`, which skims for all the four analysis channels
  4. there are a number of file lists (`ls *list`) in which all the files produced in the MC5 campaign are listed; these lists are used as input for the actual skimming
  5. run with `basf2 SkimEtaPrK0.py` (e.g. `1 0 ccbar`)
  6. a script `submit_lsf_skim.sh` processes the files from the above lists on lsf. WARNING: check the file carefully, especially the loops, before executing it, since at full throttle it will submit a ton of jobs
  7. another script `skimSummary.sh` loops through the logs produced by lsf and counts the input/output events
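The bookkeeping done by `skimSummary.sh` amounts to summing event counts over all job logs. A self-contained sketch of that pattern follows; the log-line format (`processed N events` / `written M events`) and the fake log files are stand-ins for illustration, not the actual basf2/lsf log format:

```shell
#!/bin/sh
# Sketch of a skimSummary.sh-style loop: sum input/output event counts
# over a directory of lsf job logs. The log format here is a made-up stand-in.
logdir=$(mktemp -d)
printf 'processed 2000 events\nwritten 100 events\n' > "$logdir/job1.log"
printf 'processed 1000 events\nwritten 50 events\n'  > "$logdir/job2.log"

total_in=0
total_out=0
for log in "$logdir"/*.log; do
    nin=$(awk '/processed/ {print $2}' "$log")
    nout=$(awk '/written/ {print $2}' "$log")
    total_in=$((total_in + nin))
    total_out=$((total_out + nout))
done
echo "input events:  $total_in"
echo "output events: $total_out"
rm -r "$logdir"
```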
- Run the selection, which reconstructs the wanted decay and produces a flat ntuple with the reconstructed information:
  - run `basf2 SelectEtaPrK0_ch1.py` (with ch1,2,3,4,5,6 depending on the channel)
  - it produces an output file with a name like `B0_etapr-eta-gg2pi_KS-pi+pi-_output_signal.root` (or similar for the other channels)
  - as before, I keep the output files in `root_files`
  - if you run on the skim, the script `submit_lsf.sh` submits the needed jobs to lsf. WARNING: as before, the loops inside can create a ton of jobs; use with care
  - another script `selectSummary.sh` loops through the logs produced by lsf and counts the input/output events
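The fan-out performed by `submit_lsf.sh` is one job per (channel, skim file) combination, which is why the loops deserve a careful look before submitting. A dry-run sketch of that pattern follows; the skim file names are illustrative, and `echo` stands in for the actual `bsub` submission:

```shell
#!/bin/sh
# Dry-run sketch of a submit_lsf.sh-style loop: one lsf job per
# (channel, skim file) pair. Only prints the commands; at full throttle
# the real script submits one job per combination.
njobs=0
for ch in 1 2 3 4 5 6; do
    for skim in skim_ccbar_0.root skim_uubar_0.root; do   # illustrative names
        echo "bsub basf2 SelectEtaPrK0_ch${ch}.py $skim"
        njobs=$((njobs + 1))
    done
done
echo "would submit $njobs jobs"
```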
- Run the analysis, namely the selection of the best candidate among the many in each event, and produce some histograms:
  - go to the macro directory: `cd ../macro`
  - run `root loop_ch1.C` (with ch1,2,3,4,5,6 depending on the channel)
  - it creates an output file with the histograms, `Histo_chX.root`
  - it creates a number of canvases with some reference histograms, properly drawn, which are also saved as `pdf` and `png` for use in slides or documents
  - the macro `plot_ch.C` can be used to plot the histograms: the function `plot(int channel)` does the work, and `plotAll()` plots all the histograms for all the channels (suggested to run root in batch mode: `root -b`)
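Processing all six channels in batch can be scripted with root's `-b` (batch, no graphics) and `-q` (quit after executing the macro) options. A dry-run sketch, with `echo` standing in for the actual invocation:

```shell
#!/bin/sh
# Dry-run sketch: run loop_chX.C for every channel with root in batch mode.
# Remove the echo to actually invoke root.
nmacros=0
for ch in 1 2 3 4 5 6; do
    echo root -b -q "loop_ch${ch}.C"
    nmacros=$((nmacros + 1))
done
```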
- Fit and signal extraction
  - everything is in the `fit` directory
  - still setting up the machinery...
- run the code on the officially produced signal files from MC5
  - done for ch 1, 2, 4, and partially for 5 (jobs too long) DONE
- run also on background
  - continuum: which one? https://belle2.cc.kek.jp/~twiki/bin/viewauth/Computing/MC5Release4PhysicsGenericContinuum
    - first test on uu/dd/ss/cc DONE
    - need to perform some sort of base skimming in order to reduce the number of input events (see Alessandro's mail) DONE
    - understand how many events/files/subs correspond to how many fb-1 DONE
  - peaking (if any: which one?) TODO
- understand how to use the continuum suppression variables TODO/Doing (but must understand the issue with CosTBTO first)
- understand how to run the actual flavour tagging algorithm TODO
- perform a multi-dimensional likelihood fit of signal+background to extract the CPV parameters TODO/Doing
- document everything