Interpreter::Interpreter(ConsoleWidget *console, VideoWidget *video, MonParameterDB *data, const QString &initScript) :
    m_mutexProg(QMutex::Recursive)
{
    m_initScript = initScript;
    m_initScript.remove(QRegExp("^\\s+")); // remove initial whitespace
    m_console = console;
    m_video = video;
    m_pixymonParameters = data;
    m_pc = 0;
    m_programming = false;
    m_localProgramRunning = false;
    m_waiting = false;
    m_fastPoll = true;
    m_notified = false;
    m_running = -1; // set to bogus value to force update
    m_chirp = NULL;

    m_renderer = new Renderer(m_video, this);

    connect(m_console, SIGNAL(textLine(QString)), this, SLOT(command(QString)));
    connect(m_console, SIGNAL(controlKey(Qt::Key)), this, SLOT(controlKey(Qt::Key)));
    connect(this, SIGNAL(textOut(QString, QColor)), m_console, SLOT(print(QString, QColor)));
    connect(this, SIGNAL(error(QString)), m_console, SLOT(error(QString)));
    connect(this, SIGNAL(enableConsole(bool)), m_console, SLOT(acceptInput(bool)));
    connect(this, SIGNAL(prompt(QString)), m_console, SLOT(prompt(QString)));
    connect(this, SIGNAL(consoleCommand(QString)), m_console, SLOT(command(QString)));
    connect(this, SIGNAL(videoInput(VideoWidget::InputMode)), m_video, SLOT(acceptInput(VideoWidget::InputMode)));
    connect(m_video, SIGNAL(selection(int,int,int,int)), this, SLOT(handleSelection(int,int,int,int)));

    prompt();

    m_run = true;
}
Interpreter::Interpreter(ConsoleWidget *console, VideoWidget *video) :
    m_mutexProg(QMutex::Recursive)
{
    m_console = console;
    m_video = video;
    m_pc = 0;
    m_programming = false;
    m_localProgramRunning = false;
    m_rcount = 0;
    m_waiting = false;
    m_fastPoll = true;
    m_notified = false;
    m_pendingCommand = NONE;
    m_running = -1; // set to bogus value to force update
    m_chirp = NULL;

    m_renderer = new Renderer(m_video);

    connect(m_console, SIGNAL(textLine(QString)), this, SLOT(command(QString)));
    connect(m_console, SIGNAL(controlKey(Qt::Key)), this, SLOT(controlKey(Qt::Key)));
    connect(this, SIGNAL(textOut(QString, QColor)), m_console, SLOT(print(QString, QColor)));
    connect(this, SIGNAL(error(QString)), m_console, SLOT(error(QString)));
    connect(this, SIGNAL(enableConsole(bool)), m_console, SLOT(acceptInput(bool)));
    connect(this, SIGNAL(prompt(QString)), m_console, SLOT(prompt(QString)));
    connect(this, SIGNAL(videoInput(VideoWidget::InputMode)), m_video, SLOT(acceptInput(VideoWidget::InputMode)));
    connect(m_video, SIGNAL(selection(int,int,int,int)), this, SLOT(handleSelection(int,int,int,int)));

    m_run = true;
    start();
}
void Interpreter::unwait()
{
    QMutexLocker locker(&m_mutexInput);
    if (m_waiting)
    {
        m_waitInput.wakeAll();
        m_key = Qt::Key_Escape;
        emit videoInput(VideoWidget::NONE);
    }
}
void Interpreter::getSelection(RectA *region)
{
    emit videoInput(VideoWidget::REGION);

    m_mutexInput.lock();
    m_waiting = true;
    m_waitInput.wait(&m_mutexInput);
    m_waiting = false;
    *region = m_selection;
    m_mutexInput.unlock();
}
void Interpreter::getSelection(Point16 *point)
{
    emit videoInput(VideoWidget::POINT);

    m_mutexInput.lock();
    m_waiting = true;
    m_waitInput.wait(&m_mutexInput);
    m_waiting = false;
    point->m_x = m_selection.m_xOffset;
    point->m_y = m_selection.m_yOffset;
    m_mutexInput.unlock();
}
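Both `getSelection()` overloads use the same hand-off: the interpreter thread blocks on a wait condition while holding the input mutex, and the GUI thread later stores the selection and wakes it. A minimal Qt-free sketch of that pattern, using `std::condition_variable` instead of `QWaitCondition` (names like `SelectionGate` are illustrative, not PixyMon code):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch of the getSelection()/handleSelection() hand-off:
// a worker blocks until the "GUI" side delivers coordinates.
struct SelectionGate {
    std::mutex m;
    std::condition_variable cv;
    bool waiting = false;
    int x = 0, y = 0;

    void get(int *px, int *py)          // worker side, cf. getSelection()
    {
        std::unique_lock<std::mutex> lock(m);
        waiting = true;
        cv.wait(lock, [this] { return !waiting; }); // blocks, releasing the lock
        *px = x;
        *py = y;
    }

    void deliver(int nx, int ny)        // GUI side, cf. handleSelection()
    {
        std::lock_guard<std::mutex> lock(m);
        if (waiting) {
            x = nx;
            y = ny;
            waiting = false;
            cv.notify_all();
        }
    }
};
```

Note the predicate passed to `wait()`: unlike the bare `m_waitInput.wait()` above, it protects against spurious wakeups.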
int Interpreter::call(const QStringList &argv, bool interactive)
{
    ChirpProc proc;
    ProcInfo info;
    int args[20];
    int i, j, k, n, base, res;
    bool ok;
    uint type;
    ArgList list;

    // not allowed
    if (argv.size()<1)
        return -1;

    // check modules to see if they handle this command, if so, skip to end
    emit enableConsole(false);
    for (i=0; i<m_modules.size(); i++)
    {
        if (m_modules[i]->command(argv))
            return 0;
    }

    // a procedure needs extension info (arg info, etc) in order for us to call...
    if ((proc=m_chirp->getProc(argv[0].toLocal8Bit()))>=0 &&
        m_chirp->getProcInfo(proc, &info)>=0)
    {
        memset(args, 0, sizeof(args)); // zero args
        getArgs(&info, &list);
        n = strlen((char *)info.argTypes);

        // if we have fewer args than required...
        if ((int)list.size()>argv.size()-1)
        {
            // if we're interactive, ask for values
            if (interactive && argv.size()>0)
            {
                QStringList cargv = argv;
                QString pstring, pstring2;
                for (i=cargv.size()-1; i<(int)list.size(); i++)
                {
                    if (info.argTypes[i]==CRP_TYPE_HINT)
                    {
                        if (n>i+4)
                        {
                            type = *(uint *)&info.argTypes[i+1];
                            if (type==FOURCC('R','E','G','1'))
                            {
                                emit videoInput(VideoWidget::REGION);
                                pstring2 = "(select region with mouse)";
                            }
                            if (type==FOURCC('P','N','T','1'))
                            {
                                emit videoInput(VideoWidget::POINT);
                                pstring2 = "(select point with mouse)";
                            }
                        }
                    }
                    k = i;
                    pstring = printArgType(&info.argTypes[i], i) + " " + list[k].first +
                            (list[k].second=="" ? "?" : " (" + list[k].second + ")?") + " " + pstring2;

                    emit enableConsole(true);
                    emit prompt(pstring);
                    m_mutexInput.lock();
                    m_waiting = true;
                    m_waitInput.wait(&m_mutexInput);
                    m_waiting = false;
                    m_mutexInput.unlock();
                    emit prompt(PROMPT);
                    emit enableConsole(false);

                    if (m_key==Qt::Key_Escape)
                        return -1;
                    cargv << m_command.split(QRegExp("\\s+"));
                }
                // call ourselves again, now that we have all the args
                return call(cargv, true);
            }
            else
            {
                emit error("too few arguments.\n");
                return -1;
            }
        }

        augmentProcInfo(&info);

        // if we have all the args we need, parse, put in args array
        for (i=0, j=0; m_argTypes[i]; i++)
        {
            if (argv.size()>i+1)
            {
                if (m_argTypes[i]==CRP_INT8 || m_argTypes[i]==CRP_INT16 || m_argTypes[i]==CRP_INT32)
                {
                    args[j++] = m_argTypes[i];
                    if (argv[i+1].left(2)=="0x")
                        base = 16;
                    else
                        base = 10;
                    args[j++] = argv[i+1].toInt(&ok, base);
                    if (!ok)
                    {
                        emit error("argument didn't parse.\n");
                        return -1;
                    }
                }
#if 0
                else if (m_argTypes[i]==CRP_STRING)
                {
                    args[j++] = m_argTypes[i];
                    // string goes where? can't cast pointer to int...
                }
#endif
                else
                {
                    // deal with non-integer types
                    return -1;
                }
            }
        }
#if 0
        // print helpful chirp argument string
        if (interactive && argv.size()>1)
        {
            QString callString = "Chirp arguments for " + argv[0] +
                    " (ChirpProc=" + QString::number(proc) + "): ";
            for (i=1; i<argv.size(); i++)
            {
                if (i>1)
                    callString += ", ";
                j = i;
                callString += printArgType(&m_argTypes[i-1], i) + "(" + argv[j] + ")";
            }
            emit textOut(callString + "\n");
        }
#endif

        // make chirp call
        res = m_chirp->callAsync(proc, args[0], args[1], args[2], args[3], args[4], args[5], args[6],
                args[7], args[8], args[9], args[10], args[11], args[12], args[13], args[14], args[15],
                args[16], args[17], args[18], args[19], END_OUT_ARGS);

        // check for cable disconnect
        if (res<0 && !m_notified) //res==LIBUSB_ERROR_PIPE)
        {
            m_notified = true;
            emit connected(PIXY, false);
            return res;
        }
        // get response if we're not programming, save text if we are
        if (m_programming)
            addProgram(argv);
        else
            m_chirp->serviceChirp();
    }
    else
    {
        emit error("procedure unsupported.\n");
        return -1;
    }

    return 0;
}
void Interpreter::command(const QString &command)
{
    QMutexLocker locker(&m_mutexInput);

    if (m_localProgramRunning)
        return;

    if (m_waiting)
    {
        m_command = command;
        m_command.remove(QRegExp("[(),\\t]"));
        m_key = (Qt::Key)0;
        m_waitInput.wakeAll();
        return;
    }

    QStringList words = command.split(QRegExp("[\\s(),\\t]"), QString::SkipEmptyParts);

    if (words.size()==0)
        goto end;

    if (words[0]=="do")
    {
        clearLocalProgram();
        beginLocalProgram();
    }
    else if (words[0]=="done")
    {
        endLocalProgram();
        runLocalProgram();
        return;
    }
    else if (words[0]=="list")
        listProgram();
    else if (words[0].left(4)=="cont")
    {
        if (runLocalProgram()>=0)
            return;
    }
    else if (words[0]=="rendermode")
    {
        if (words.size()>1)
            m_renderer->setMode(words[1].toInt());
        else
            emit textOut("Missing mode parameter.\n");
    }
    else if (words[0]=="region")
    {
        emit videoInput(VideoWidget::REGION);
        m_argvHost = words;
    }
#if 0
    else if (words[0]=="set")
    {
        if (words.size()==3)
        {
            words[1].remove(QRegExp("[\\s\\D]+"));
            m_renderer->m_blobs.setLabel(words[1], words[2]);
        }
    }
#endif
    else
    {
        handleCall(words);
        return; // don't print prompt
    }

end:
    prompt();
}
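`command()` tokenizes console input by splitting on whitespace, tabs, parentheses, and commas, discarding empty tokens (`QString::SkipEmptyParts`), so `runprog(0, 1)` and `runprog 0 1` parse identically. A small Qt-free sketch of equivalent tokenization (the `tokenize` helper is illustrative, not part of this codebase):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Split on the same separator set command() uses: whitespace, tab, '(', ')', ','.
// Empty tokens are skipped, mirroring QString::SkipEmptyParts.
std::vector<std::string> tokenize(const std::string &line)
{
    std::vector<std::string> words;
    std::string cur;
    for (char c : line) {
        if (c == ' ' || c == '\t' || c == '(' || c == ')' || c == ',') {
            if (!cur.empty()) {
                words.push_back(cur);
                cur.clear();
            }
        } else
            cur += c;
    }
    if (!cur.empty())
        words.push_back(cur);
    return words;
}
```

With this helper, `tokenize("runprog(0, 1)")` yields the three tokens `runprog`, `0`, `1`.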
/**
 * \fn void addVideos(std::string bddName, std::string activity, int nbVideos, std::string* videoPaths)
 * \brief Adds new videos to the chosen activity of the specified BDD.
 * \param[in] bddName The name of the BDD.
 * \param[in] activity The name of the activity.
 * \param[in] nbVideos The number of videos we want to add.
 * \param[in] videoPaths The different paths to the videos.
 */
void addVideos(std::string bddName, std::string activity, int nbVideos, std::string* videoPaths){
  std::string path2bdd("bdd/" + bddName);
  //int desc = getDescID(path2bdd);
  //int dim = getDim(desc);

  // Loading the bdd
  IMbdd bdd(bddName,path2bdd);
  bdd.load_bdd_configuration(path2bdd.c_str(),"imconfig.xml");

  // Saving its parameters
  int maxPts = bdd.getMaxPts();
  int scale_num = bdd.getScaleNum();
  std::string descriptor = bdd.getDescriptor();
  int dim = bdd.getDim();

  // Loading the mapping file to get the video label
  activitiesMap *am;
  int nbActivities = mapActivities(path2bdd,&am);
  int i = 0;
  // Check the bound before dereferencing am[i]
  while(i < nbActivities && am[i].activity.compare(activity) != 0)
    i++;
  if(i == nbActivities){
    std::cerr << "Activity not found!" << std::endl;
    exit(EXIT_FAILURE);
  }
  int label = am[i].label;
  delete []am;

  // Import videos into the selected database
  string strlabel = inttostring(label);
  std::string copypath(path2bdd + "/" + strlabel + "/avi");
  int nbFiles = nbOfFiles(copypath);
  int j = nbFiles + 1;
  for(int i=0 ; i<nbVideos ; i++){
    string idFile = inttostring(j);
    string cmd("cp " + videoPaths[i] + " " + copypath + "/" + strlabel + idFile + ".avi");
    system(cmd.c_str());
    j++;
  }

  // Extract STIPs from the videos and save them in the directory /path/to/bdd/label/fp
  string fpointspath(path2bdd + "/" + strlabel + "/fp");
  j = nbFiles + 1;
  for(int i=0 ; i<nbVideos ; i++){
    KMdata dataPts(dim,maxPts);
    string idFile = inttostring(j);
    string videoInput(copypath + "/" + strlabel + idFile + ".avi");
    string fpOutput(fpointspath + "/" + strlabel + "-" + idFile + ".fp");
    int nPts;
    nPts = extract_feature_points(videoInput, scale_num, descriptor, dim, maxPts, dataPts);
    if(nPts != 0){
      dataPts.setNPts(nPts);
      exportSTIPs(fpOutput, dim, dataPts);
    }
    j++;
  }
}
/**
 * \fn void im_refresh_folder(const IMbdd& bdd, std::string folder)
 * \brief Deletes all files except videos and extracts STIPs again.
 *
 * \param[in] bdd The BDD configuration (activities, scale number, descriptor, dimension, maximum number of points).
 * \param[in] folder The path to the folder containing the videos.
 */
void im_refresh_folder(const IMbdd& bdd, std::string folder){
  std::vector<std::string> activities = bdd.getActivities();
  int scale_num = bdd.getScaleNum();
  std::string descriptor(bdd.getDescriptor());
  int dim = bdd.getDim();
  int maxPts = bdd.getMaxPts();

  // Deleting all feature points
  for(std::vector<std::string>::iterator activity = activities.begin() ;
      activity != activities.end() ;
      ++activity){
    string rep(folder + "/" + *activity + "/fp");
    DIR * repertoire = opendir(rep.c_str());
    std::cout << rep << std::endl;
    if (repertoire == NULL){
      std::cerr << "Impossible to open the fp folder for deletion!" << std::endl;
      exit(EXIT_FAILURE);
    }
    struct dirent *ent;
    while ( (ent = readdir(repertoire)) != NULL){
      if(strcmp(ent->d_name,".") != 0 && strcmp(ent->d_name,"..") != 0){
        // remove() needs the full path, not just the directory entry name
        std::string fpFile(rep + "/" + ent->d_name);
        remove(fpFile.c_str());
      }
    }
    closedir(repertoire);
  }

  // Extracting feature points for each video
  for(std::vector<std::string>::iterator activity = activities.begin() ;
      activity != activities.end() ;
      ++activity){
    std::string avipath(folder + "/" + *activity + "/avi");
    std::string FPpath(folder + "/" + *activity + "/fp");
    DIR * repertoire = opendir(avipath.c_str());
    if (repertoire == NULL){
      std::cerr << "Impossible to open the avi folder for extraction!" << std::endl;
      exit(EXIT_FAILURE);
    }
    struct dirent * ent;
    int j = 1;
    while ( (ent = readdir(repertoire)) != NULL){
      std::string file = ent->d_name;
      if(file.compare(".") != 0 && file.compare("..") != 0){
        string idFile = inttostring(j);
        // Extract feature points from the video and save them in /path/to/folder/activity/fp
        KMdata dataPts(dim,maxPts);
        string videoInput(avipath + "/" + file);
        string stipOutput(FPpath + "/" + *activity + "-" + idFile + ".fp");
        int nPts;
        nPts = extract_feature_points(videoInput, scale_num, descriptor, dim, maxPts, dataPts);
        if(nPts != 0){
          dataPts.setNPts(nPts);
          exportSTIPs(stipOutput, dim, dataPts);
        }
        j++;
      }
    }
    closedir(repertoire);
    // The extraction of this activity's video feature points is complete.
  }
  im_concatenate_bdd_feature_points(bdd.getFolder(), bdd.getPeople(), bdd.getActivities());
}
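Both database helpers above walk directories with the same `opendir()`/`readdir()` loop, skipping the `.` and `..` entries; `nbOfFiles()` presumably does the same to count videos (an assumption, since its body is not shown here). A minimal standalone sketch of that loop shape:

```cpp
#include <dirent.h>   // opendir, readdir, closedir (POSIX)
#include <string>

// Count the entries of a directory, skipping "." and ".." --
// the readdir() loop shape used by im_refresh_folder() above.
// Returns -1 if the directory cannot be opened.
int countEntries(const std::string &path)
{
    DIR *dir = opendir(path.c_str());
    if (dir == NULL)
        return -1;                  // caller decides how to report the error

    int n = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        std::string name = ent->d_name;
        if (name != "." && name != "..")
            n++;
    }
    closedir(dir);
    return n;
}
```

Returning an error code instead of calling `exit(EXIT_FAILURE)` keeps the helper reusable; the functions above abort directly, which is fine for a batch tool but hostile to library callers.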