Single code base for anything that has to do with the mnk-game.

Folders:
========

src:
	Model implementations.
	src/matlab - MATLAB scripts.

scripts:
	prof.sh - a poor man's profiler.

data:
	Data sets from experiments. Each file F has a corresponding F.times file
	with the number of times each move should be tried (see the example below).

jobs:
	HPC jobs.

agents:
	Text description files of models.
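
For example (hypothetical file names), an experiment stored as data/exp1 would
be paired with data/exp1.times:

	data/exp1        - the recorded moves
	data/exp1.times  - how many times to try each move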

Model fitting:
==============
For model fitting, build the MCS executable with 'make modelfitting' and run a fitting job.
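
A minimal sketch of that sequence, assuming jobs are submitted with a
SLURM-style sbatch (substitute your scheduler and the actual job file name):

	make modelfitting
	sbatch jobs/<fitting-job>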

Cross validate:
===============
Cross validation takes the parameter values from the optimization and executes the cross_validate.m script via an HPC job. Its output is a file named testXX.dat, where XX is the subject number.
Each .dat file contains a matrix of (currently) 30 log-likelihood values, followed by the parameter values that got the best fitting results, followed by the mean log likelihood as the last line of the file.
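
Since the mean log likelihood sits on the last line, a quick way to collect it
per subject is (a sketch, assuming the testXX.dat naming above):

	for f in test*.dat; do echo "$f $(tail -n 1 "$f")"; done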

Model comparison plots:
=======================
Look for loglik.csv files under $scratch. Create a file with one line per entry in the format "<model> <subject> <loglik>", then use ./scripts/plot_loglik.py to create the plot.
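
One way to build that file (a sketch: the path layout
$scratch/<model>/<subject>/loglik.csv, the single-value file contents, the
loglik_all.csv name, and plot_loglik.py taking the file as its argument are
all assumptions; adjust to your setup):

	find $scratch -name loglik.csv | while read f; do
		model=$(basename $(dirname $(dirname "$f")))
		subject=$(basename $(dirname "$f"))
		echo "$model $subject $(cat "$f")"
	done > loglik_all.csv
	./scripts/plot_loglik.py loglik_all.csv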


Format for out.txt:
===================
Trial_Id, N (guess that succeeded), Sum(1/n), estimation, trials-remaining
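
A hypothetical example row (all values invented purely for illustration):

	17, 4, 2.08, 0.48, 12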


Performing model/parameter recovery:
======================================

./scripts/model_recovery.py <model_file> <stimuli> <num_of_data_sets>

This script finds the fitted parameters in the given model_file (e.g., lines like "par=?1{0,100,0.5}").
It creates "points" for each parameter; currently the only method implemented is a "grid". A line like "par=?1{0,100,0.5}" means that par is a parameter
that goes from 0 to 100 in steps of 0.5. The grid is n-dimensional, with n the number of fitted parameters.
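
For example (hypothetical parameter names, and assuming both endpoints are
included), a model file containing

	a=?1{0,1,0.5}
	b=?2{0,10,5}

yields a 2-dimensional grid of 3 x 3 = 9 points: a in {0, 0.5, 1} crossed with b in {0, 5, 10}.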

Phases (an end-to-end sketch of the whole sequence follows this list):

1) Generating the model files: 
	For each point the script model_recovery.py generates a model file under "../agents/<model_file>___fake_<point>"

2) Fake data generation:
	The script generates a fake-data running script called "generate_fake_data_.sh"
	For each point the fake-data script executes '../src/states -a <generating-model-point> -s <stimuli>' a number of times, according to the <num_of_data_sets> parameter.
	The output goes to ../data/<agent>_DS<DS#> (it may contain multiple subjects).

	** Data generation does not run automatically; you need to run "generate_fake_data_.sh" yourself. **

3) Fitting:
	A fitting job is also generated and placed under the ../jobs folder. Execute this job to fit the model to the generated data sets.

4) Parameter recovery:
	After the fitting job completes, ./scripts/parameter_recovery.sh generates the params.csv file with the actual and fitted parameter values of the models.

5) Plotting:
	You can generate scatter plots with the plotting script 'plot_param_recovery_scatter.py'

6) Model recovery:
	Generates a data file with the following columns: Generating_model, Point, DS, Fitting_model, Measurement (AIC/BIC/etc.), value.
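
Putting the phases together, an end-to-end sketch of the recovery workflow
(sbatch, the location of generate_fake_data_.sh, and the plot script taking
params.csv as its argument are assumptions; check each script's usage):

	./scripts/model_recovery.py <model_file> <stimuli> <num_of_data_sets>  # phase 1
	./generate_fake_data_.sh                                               # phase 2 (manual)
	sbatch ../jobs/<generated-fitting-job>                                 # phase 3
	./scripts/parameter_recovery.sh                                        # phase 4
	./scripts/plot_param_recovery_scatter.py params.csv                    # phase 5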



