/**
 * Trains the perceptron.
 * @param x - input vector
 * @param y - expected output vector
 * @param speed - learning rate
 */
void Perceptron::Teach(QVector<double> x, QVector<int> y, int speed) {
    QVector<int> t = Recognize(x);
    while (t != y) {
        // Adjust the weights of every neuron.
        for (int i = 0; i < _neurons.size(); i++) {
            int delta = y[i] - t[i];
            _neurons[i]->ChangeWeights(speed, delta, x);
        }
        t = Recognize(x);
    }
}
BOOL MzBarDecoder::DecodeFromFile(PCTSTR FileName, BYTE* pcode, DWORD* pnsize, DECODEPARA_ptr pPara) {
    BOOL bRet = FALSE;
    if (FileName == NULL || pcode == NULL)
        return bRet;
    DECODEPARA_t param;
    if (pPara == NULL) {
        // No parameters supplied: default threshold, scan the whole image.
        param.dwThrehold = 128;
        param.dwScanRegion.left = 0;
        param.dwScanRegion.right = m_dib.GetImageWidth();
        param.dwScanRegion.top = 0;
        param.dwScanRegion.bottom = m_dib.GetImageHeight();
    } else {
        param.dwThrehold = pPara->dwThrehold;
        param.dwScanRegion = pPara->dwScanRegion;
        if (param.dwScanRegion.right == 0)
            param.dwScanRegion.right = m_dib.GetImageWidth();
        if (param.dwScanRegion.bottom == 0)
            param.dwScanRegion.bottom = m_dib.GetImageHeight();
    }
    if (LoadImage(FileName)) {
        GrayImage(&param.dwScanRegion);
        BinaryImage(param.dwThrehold);
        if (PreProcess())
            bRet = Recognize(pcode, pnsize);
    }
    return bRet;
}
int VoiceWriteRecognition::ProcessMsg(int msg) {
    switch (msg) {
        case EH_RECORD:
            return Record();
        case EH_SAVE:
            return Save();
        case EH_STOP:
            return Stop();
        case EH_PLAY:
            return Play();
        case EH_RECOGNIZEBTN:
            return Recognize();
        default:
            return 0;
    }
}
int Perceptron::GetClass(QVector<double> x) {
    QVector<int> reaction = Recognize(x);
    for (int i = 0; i < reaction.size(); i++) {
        if (reaction[i] == 1)
            return (i + 1);
    }
    return 0;
}
// Low-level function to recognize the current global image to a string.
char* TessBaseAPI::RecognizeToString() {
    BLOCK_LIST block_list;
    FindLines(&block_list);
    // Now run the main recognition.
    PAGE_RES* page_res = Recognize(&block_list, NULL);
    return TesseractToText(page_res);
}
TInt CCmdRemark::ProcessL(const TDesC& aCommand) {
    // Complete the test machine - it will then fetch the next command.
    Machine()->CompleteRequest();
    if (!Recognize(aCommand))
        return Error(KErrArgument, TFR_KFmtErrBadCmd, &Keyphrase());
    // A remark does nothing further.
    return KErrNone;
}
TInt CCmdListAll::ProcessL(const TDesC& aCommand) {
    // Complete the test machine - it will then fetch the next command.
    Machine()->CompleteRequest();
    if (!Recognize(aCommand))
        return Error(KErrArgument, TFR_KFmtErrBadCmd, &Keyphrase());
    // List the commands on the console.
    Family()->ListAll(Console());
    return KErrNone;
}
void RunParserTest(std::vector<Token>& tokens) {
    try {
        RemoveWhitespaceAndComments(tokens);
        Recognize(tokens);
        printf("Parsing Successful\n");
    } catch (ParsingException&) {
        printf("Parsing Failed\n");
    }
}
char* TessBaseAPI::TesseractRectUNLV(const unsigned char* imagedata,
                                     int bytes_per_pixel, int bytes_per_line,
                                     int left, int top,
                                     int width, int height) {
    if (width < kMinRectSize || height < kMinRectSize)
        return NULL;  // Nothing worth doing.
    // Copy/threshold the image to the tesseract global page_image.
    CopyImageToTesseract(imagedata, bytes_per_pixel, bytes_per_line,
                         left, top, width, height);
    BLOCK_LIST block_list;
    FindLines(&block_list);
    // Now run the main recognition.
    PAGE_RES* page_res = Recognize(&block_list, NULL);
    return TesseractToUNLV(page_res);
}
int main(int argc, char** argv) {
    int key;
    double* Features;

    // Set up video recording of the annotated frames.
    CvVideoWriter* writer = 0;
    int isColor = 1;
    int fps = 10;
    int frameW = 640;
    int frameH = 480;
    writer = cvCreateVideoWriter("out.avi", CV_FOURCC('D','I','V','X'), fps,
                                 cvSize(frameW, frameH), isColor);

    CvMat* mat = cvCreateMat(8, 8, CV_64FC1);
    char* filenamesCov = "../../../fourier/fourier/Variances";
    char* FileMeans = "../../../fourier/fourier/Means.txt";
    int numGestures = 11;
    int numTraining = 80;
    int numFeatures = 8;
    //CvArr** invCovMatrices = (CvArr**)malloc(sizeof(CvArr*) * numGestures);
    CvArr** eigenVects = (CvArr**)malloc(sizeof(CvArr*) * numGestures);
    CvArr** Means = (CvArr**)malloc(sizeof(CvArr*) * numGestures);
    CvArr** eigenVals = (CvArr**)malloc(sizeof(CvArr*) * numGestures);

    // Gesture classes: 5, a, b, c, caps, g, l, LC, p, RC, v.
    int indx[] = {6, 11, 12, 14, 15, 22, 27, 28, 33, 36, 42};

    CvCapture* capture = NULL;
    cascade = (CvHaarClassifierCascade*)cvLoad("../NewTrained.xml", 0, 0, 0);
    storage = cvCreateMemStorage(0);
    IplImage* img;
    IplImage* img1;
    IplImage* img2;
    IplImage* imb;

    if (NULL == (capture = cvCaptureFromCAM(1))) {
        printf("\nError on cvCaptureFromCAM");
        return -1;
    }

    train(Means, eigenVects, eigenVals, numGestures, 19, numFeatures, indx);
    fprintf(stderr, "blahblah\n");
    //ReadInData(filenamesCov, FileMeans, invCovMatrices, Means, numGestures, numFeatures);

    cvNamedWindow("Capture", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Capture2", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Capture3", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Window", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("Capture", 550, 250);
    cvMoveWindow("Capture2", 850, 50);
    cvMoveWindow("Capture3", 100, 500);
    cvMoveWindow("Capture4", 500, 600);

    for (;;) {
        if (NULL == (img = cvQueryFrame(capture))) {
            printf("\nError on cvQueryFrame");
            break;
        }
        img1 = detect_and_draw(img, 1);
        img2 = binary_threshold_hsl(img1);
        cvShowImage("Capture3", img2);
        cvShowImage("Capture", img1);
        cvShowImage("Capture2", img);
        if (HAND == 1) {
            Features = computeFDFeatures(img2, 8);
            fprintf(stderr, "Found a hand. The gesture recognized is : %c\n",
                    (char)Gest[Recognize(Features, eigenVects, eigenVals, Means,
                                         numGestures, numFeatures, 6)]);
            //imb = cvCreateImage("", 8, 1);
        }
        cvWriteFrame(writer, img1);
        key = cvWaitKey(10);
        if (key == 0x1b)  // Esc quits.
            break;
    }

    cvReleaseCapture(&capture);
    cvDestroyWindow("Capture");
    cvDestroyWindow("Capture2");
    cvDestroyWindow("Capture3");
    // Note: img comes from cvQueryFrame and is owned by the capture,
    // so it must not be released here.
    cvReleaseImage(&img1);
    cvWaitKey(0);
}
void test(CvArr** Means, CvArr** eigenVects, CvArr** eigenVals, int numGestures,
          int numTesting, int numFeatures, int* indx) {
    char num[] = "01-01";
    char* fo = ".png";
    char* filename;
    if (TESTING) {
        // Reserve room for the path plus the "NN-NN.png" suffix.
        filename = (char*)malloc(strlen(test_path) + sizeof(num) + strlen(fo) + 1);
        filename[0] = '\0';
        strcat(filename, test_path);
    } else {
        filename = (char*)malloc(strlen(train_path) + sizeof(num) + strlen(fo) + 1);
        filename[0] = '\0';
        strcat(filename, train_path);
    }
    int stats[numGestures];
    int i, j, t;
    double* Features;
    IplImage* src;
    for (i = 0; i < numGestures; i++)
        stats[i] = 0;
    for (i = 0; i < numGestures; i++) {
        for (j = 0; j < numTesting; j++) {
            // Truncate filename back to the directory prefix.
            if (TESTING)
                filename[21] = '\0';
            else
                filename[22] = '\0';
            num[0] = '0' + indx[i] / 10;
            num[1] = '0' + indx[i] % 10;
            if ((j + 1) > 9) {
                num[3] = '0' + (j + 1) / 10;
                num[4] = '0' + (j + 1) % 10;
                num[5] = '\0';
            } else {
                num[3] = '0' + j + 1;
                num[4] = '\0';
            }
            strcat(filename, num);
            strcat(filename, fo);
            //fprintf(stderr, "i=%d j=%d %s\n", i, j, filename);
            src = cvLoadImage(filename, CV_LOAD_IMAGE_GRAYSCALE);
            Features = computeFDFeatures(src, 8);
            t = Recognize(Features, eigenVects, eigenVals, Means,
                          numGestures, numFeatures, 5);
            free(Features);
            if (t == i)
                stats[i]++;
        }
    }
    int sum = 0;
    for (i = 0; i < numGestures; i++)
        sum = sum + stats[i];
    fprintf(stderr, "Percent Accuracy %f\n",
            (float)sum / (numGestures * numTesting));
    /*for (i = 0; i < numGestures; i++) {
        fprintf(stderr, "num_incorrect for %d=%d\n", i + 1, stats[i]);
    }*/
}
// Recognize the member char sample as a phrase
WordAltList *CubeObject::RecognizePhrase(LangModel *lang_mod) {
    return Recognize(lang_mod, false);
}
// Recognize the member char sample as a word
WordAltList *CubeObject::RecognizeWord(LangModel *lang_mod) {
    return Recognize(lang_mod, true);
}