TextAdder::TextAdder(const size_t wSpace, const size_t hSpace, const CvScalar color, const double scale)
    : wSpace_(wSpace),
      hSpace_(hSpace),
      scale_(scale),
      font_(cvFont(scale_)),          // NOTE: members initialize in declaration order, not list order;
      color_(color),                  // scale_ must be declared before font_ for this to be well-defined
      fontHeight_(getFontHeight())
{ }
void Chess_recognition::drawPoint(IplImage *src, vector<Chess_point> point) {
    // Display the points in an image.
    // The font must be a named object: cvPutText takes a CvFont*, and taking
    // the address of the cvFont(1.0) temporary is ill-formed C++.
    CvFont font = cvFont(1.0);
    char buf[32];
    for (size_t i = 0; i < point.size(); i++) {
        cvCircle(src, point.at(i).Cordinate, 5, cvScalar(0, 255), 2);
        snprintf(buf, sizeof(buf), "(%d,%d)", point.at(i).index.x, point.at(i).index.y);
        cvPutText(src, buf, point.at(i).Cordinate, &font, cvScalar(255, 0, 0));
    }
}
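The fixed `char buf[32]` + `sprintf` pattern above can be wrapped in a small bounded helper; `formatPoint` is a hypothetical name introduced here for illustration, not part of the original class.

```cpp
#include <cstdio>
#include <string>

// Hypothetical helper: format a grid index as "(x,y)" with a bounded
// write, so a long value can never overflow the buffer.
std::string formatPoint(int x, int y) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "(%d,%d)", x, y);
    return std::string(buf);
}
```

The drawing loop could then pass `formatPoint(...).c_str()` straight to `cvPutText`.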
// --------------------------------------------------------------------------
// main(Number of arguments, Value of arguments)
// Description  : This is the main function.
// Return value : SUCCESS:0  ERROR:-1
// --------------------------------------------------------------------------
int main(int argc, char **argv)
{
    // AR.Drone class
    ARDrone ardrone;

    // Initialize
    if (!ardrone.open()) {
        printf("Failed to initialize.\n");
        return -1;
    }

    // Recording flag
    int rec = 0;
    printf("Press 'R' to start/stop recording.\n");

    // Main loop
    while (!GetAsyncKeyState(VK_ESCAPE)) {
        // Update
        if (!ardrone.update()) break;

        // Get an image
        IplImage *image = ardrone.getImage();

        // Video recording start / stop
        if (KEY_PUSH('R')) {
            if (rec) { ardrone.stopVideoRecord();  rec = 0; }
            else     { ardrone.startVideoRecord(); rec = 1; }
        }

        // Show recording state
        if (rec) {
            static CvFont font = cvFont(1.0);
            cvPutText(image, "REC", cvPoint(10, 20), &font, CV_RGB(255, 0, 0));
        }

        // Display the image
        cvShowImage("camera", image);
        cvWaitKey(1);
    }

    // See you
    ardrone.close();
    return 0;
}
// --------------------------------------------------------------------------
// cvDrawText(Image, Drawing point, Messages)
// Description  : Draw the specified text.
// Return value : NONE
// --------------------------------------------------------------------------
inline void cvDrawText(IplImage *image, CvPoint point, const char *fmt, ...)
{
    // Font
    static CvFont font = cvFont(1.0);

    // Apply format (vsnprintf bounds the write; vsprintf could overflow text[])
    char text[256];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(text, sizeof(text), fmt, ap);
    va_end(ap);

    // Draw the text
    cvPutText(image, text, point, &font, CV_RGB(0, 255, 0));
}
// --------------------------------------------------------------------------
// main(Number of arguments, Argument values)
// Description  : This is the entry point of the program.
// Return value : SUCCESS:0  ERROR:-1
// --------------------------------------------------------------------------
int main(int argc, char **argv)
{
    // AR.Drone class
    ARDrone ardrone;

    // Initialize
    if (!ardrone.open()) {
        printf("Failed to initialize.\n");
        return -1;
    }

    // Recording flag
    bool rec = false;
    printf("Press 'R' to start/stop recording.\n");

    // Main loop
    while (1) {
        // Key input (accept both cases; the prompt advertises 'R')
        int key = cvWaitKey(1);
        if (key == 0x1b) break;

        // Update
        if (!ardrone.update()) break;

        // Get an image
        IplImage *image = ardrone.getImage();

        // Video recording start / stop
        if (key == 'r' || key == 'R') {
            rec = !rec;
            ardrone.setVideoRecord(rec);
        }

        // Show recording state
        if (rec) {
            static CvFont font = cvFont(1.0);
            cvPutText(image, "REC", cvPoint(10, 20), &font, CV_RGB(255, 0, 0));
        }

        // Display the image
        cvShowImage("camera", image);
    }

    // See you
    ardrone.close();
    return 0;
}
void RenderMsg(IplImage *display)
{
    char msg[1000];
    CvFont font;  // fully set up by cvInitFont; the original cvFont(1.0, 1) value was overwritten anyway
    cvInitFont(&font, CV_FONT_HERSHEY_PLAIN, 1.5, 1.5, 0, 2, CV_AA);

    const char *label = 0;
    if (stroketype == _strokebg)      label = "Stroke: +background";
    else if (stroketype == _strokefg) label = "Stroke: +foreground";
    else if (stroketype == _strokeu)  label = "Stroke: eraser";
    if (label)
        cvPutText(display, label, cvPoint(0, display->height - 1), &font, CV_RGB(255, 255, 255));

    sprintf(msg, "brush size: %3d", strokewidth);
    cvPutText(display, msg, cvPoint(0, 20), &font, CV_RGB(0, 255, 0));
}
void ChessRecognition::DrawPoints(IplImage *Source, vector<ChessPoint> Point) {
    if (_EnableThread != false) {
        // Mark the chessboard intersections and each intersection's index on the source image.
        char buf[32];
        CvFont font = cvFont(1.0);  // hoisted out of the loop; 'register' dropped (deprecated)
        for (size_t i = 0; i < Point.size(); i++) {
            cvCircle(Source, Point.at(i).Cordinate, 4, cvScalar(0, 255), 1);
            snprintf(buf, sizeof(buf), "(%d,%d)", Point.at(i).Index.x, Point.at(i).Index.y);
            cvPutText(Source, buf, Point.at(i).Cordinate, &font, cvScalar(255, 0, 0));
        }
    } else {
        // Not initialized / thread not enabled.
    }
}
void GlcmJNode::judge(const Result* result) {
    Result& myResult = this->result;
    myResult.setImage(result->getImage());

    // Convert to a 256-level grayscale image
    cvCvtColor(result->getImage(), grayImage, CV_BGR2GRAY);

    // Build the gray-level co-occurrence matrix (45-degree offset)
    cvSetZero(grayMat);
    for (int i = 1; i < height; ++i)
        for (int j = 0; j + 1 < width; ++j) {
            // pair (i, j) <--> (i - 1, j + 1)
            int a = cvGetReal2D(grayImage, i, j);
            int b = cvGetReal2D(grayImage, i - 1, j + 1);
            cvmSet(grayMat, a, b, cvmGet(grayMat, a, b) + 1);
            //cvmSet(grayMat, b, a, cvmGet(grayMat, b, a) + 1);
        }

    // Normalize the co-occurrence matrix
    for (size_t i = 0; i < GRAY_LEVEL; ++i)
        for (size_t j = 0; j < GRAY_LEVEL; ++j)
            cvmSet(grayMat, i, j, cvmGet(grayMat, i, j) / R);

    // Energy (currently unused):
    //double energy = 0;
    //for(size_t i = 0; i < GRAY_LEVEL; ++i) for(size_t j = 0; j < GRAY_LEVEL; ++j)
    //    energy += cvmGet(grayMat, i, j) * cvmGet(grayMat, i, j);
    // Entropy (currently unused):
    //double entropy = 0;
    //for(size_t i = 0; i < GRAY_LEVEL; ++i) for(size_t j = 0; j < GRAY_LEVEL; ++j)
    //    if(fabs(cvmGet(grayMat, i, j)) > 0.000001)
    //        entropy += -cvmGet(grayMat, i, j) * log(cvmGet(grayMat, i, j));

    // Homogeneity ("local calm"). The difference must be taken in a signed or
    // floating type: with size_t indices, (i - j) wraps for j > i and those
    // terms collapse to ~0.
    double stable = 0;
    for (size_t i = 0; i < GRAY_LEVEL; ++i)
        for (size_t j = 0; j < GRAY_LEVEL; ++j) {
            double d = (double)i - (double)j;
            stable += cvmGet(grayMat, i, j) / (1 + d * d);
        }

    myResult.setValue(stable < stable_max ? 1 : 0);

    if (show_info) {
        CvFont font = cvFont(1.5, 1);
        char buf[256];
        snprintf(buf, sizeof(buf), "Local calm:%f %s", stable,
                 stable < stable_max ? "!FIRE!" : "|---|");
        cvPutText(grayImage, buf, cvPoint(0, height - 8), &font, cvScalarAll(255));
        cvShowImage(name.c_str(), grayImage);
    }
}
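The homogeneity loop above can be sketched without OpenCV as a plain function over a row-major normalized matrix. `glcmHomogeneity` is an illustrative name; note the explicitly signed index difference.

```cpp
#include <cstddef>
#include <vector>

// Sketch (plain C++, no OpenCV): homogeneity ("local calm") of a normalized
// n x n co-occurrence matrix P, sum over i,j of P(i,j) / (1 + (i-j)^2),
// mirroring the final feature loop in judge() above.
double glcmHomogeneity(const std::vector<double>& P, std::size_t n) {
    double h = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            // take the difference in double, never in std::size_t
            double d = static_cast<double>(i) - static_cast<double>(j);
            h += P[i * n + j] / (1.0 + d * d);
        }
    return h;
}
```

Mass concentrated on the diagonal gives homogeneity near 1; mass far from the diagonal pushes it toward 0.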
void th_put_text(IplImage* img, const char* text, CvPoint p, CvScalar color, float scale)
{
    CvFont font = cvFont(scale, 1);
    cvPutText(img, text, p, &font, color);
}
CvFont cvFont_wrap(double scale, int thickness)
{
    return cvFont(scale, thickness);
}
int main( int argc, char** argv ) { CvSize board_size = {0,0}; float square_size = 1.f, aspect_ratio = 1.f; const char* out_filename = "out_camera_data.yml"; const char* input_filename = 0; int i, image_count = 10; int write_extrinsics = 0, write_points = 0; int flags = 0; CvCapture* capture = 0; FILE* f = 0; char imagename[1024]; CvMemStorage* storage; CvSeq* image_points_seq = 0; int elem_size, flip_vertical = 0; int delay = 1000; clock_t prev_timestamp = 0; CvPoint2D32f* image_points_buf = 0; CvFont font = cvFont( 1, 1 ); double _camera[9], _dist_coeffs[4]; CvMat camera = cvMat( 3, 3, CV_64F, _camera ); CvMat dist_coeffs = cvMat( 1, 4, CV_64F, _dist_coeffs ); CvMat *extr_params = 0, *reproj_errs = 0; double avg_reproj_err = 0; int mode = DETECTION; int undistort_image = 0; CvSize img_size = {0,0}; const char* live_capture_help = "When the live video from camera is used as input, the following hot-keys may be used:\n" " <ESC>, 'q' - quit the program\n" " 'g' - start capturing images\n" " 'u' - switch undistortion on/off\n"; if( argc < 2 ) { // calibration -w 6 -h 8 -s 2 -n 10 -o camera.yml -op -oe [<list_of_views.txt>] printf( "This is a camera calibration sample.\n" "Usage: calibration\n" " -w <board_width> # the number of inner corners per one of board dimension\n" " -h <board_height> # the number of inner corners per another board dimension\n" " [-n <number_of_frames>] # the number of frames to use for calibration\n" " # (if not specified, it will be set to the number\n" " # of board views actually available)\n" " [-di <disk_images> # Number of disk images before triggering undistortion\n" " [-d <delay>] # a minimum delay in ms between subsequent attempts to capture a next view\n" " # (used only for video capturing)\n" " [-s <square_size>] # square size in some user-defined units (1 by default)\n" " [-o <out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters\n" " [-op] # write detected feature points\n" " [-oe] # write extrinsic 
parameters\n" " [-zt] # assume zero tangential distortion\n" " [-a <aspect_ratio>] # fix aspect ratio (fx/fy)\n" " [-p] # fix the principal point at the center\n" " [-v] # flip the captured images around the horizontal axis\n" " [input_data] # input data, one of the following:\n" " # - text file with a list of the images of the board\n" " # - name of video file with a video of the board\n" " # if input_data not specified, a live view from the camera is used\n" "\n" ); printf( "%s", live_capture_help ); return 0; } for( i = 1; i < argc; i++ ) { const char* s = argv[i]; if( strcmp( s, "-w" ) == 0 ) { if( sscanf( argv[++i], "%d", &board_size.width ) != 1 || board_size.width <= 0 ) return fprintf( stderr, "Invalid board width\n" ), -1; } else if( strcmp( s, "-h" ) == 0 ) { if( sscanf( argv[++i], "%d", &board_size.height ) != 1 || board_size.height <= 0 ) return fprintf( stderr, "Invalid board height\n" ), -1; } else if( strcmp( s, "-s" ) == 0 ) { if( sscanf( argv[++i], "%f", &square_size ) != 1 || square_size <= 0 ) return fprintf( stderr, "Invalid board square width\n" ), -1; } else if( strcmp( s, "-n" ) == 0 ) { if( sscanf( argv[++i], "%d", &image_count ) != 1 || image_count <= 3 ) return printf("Invalid number of images\n" ), -1; } else if( strcmp( s, "-di") == 0) { if( sscanf( argv[++i], "%d", &images_from_file) != 1 || images_from_file < 3) return printf("Invalid di, must be >= 3\n"), -1; } else if( strcmp( s, "-a" ) == 0 ) { if( sscanf( argv[++i], "%f", &aspect_ratio ) != 1 || aspect_ratio <= 0 ) return printf("Invalid aspect ratio\n" ), -1; } else if( strcmp( s, "-d" ) == 0 ) { if( sscanf( argv[++i], "%d", &delay ) != 1 || delay <= 0 ) return printf("Invalid delay\n" ), -1; } else if( strcmp( s, "-op" ) == 0 ) { write_points = 1; } else if( strcmp( s, "-oe" ) == 0 ) { write_extrinsics = 1; } else if( strcmp( s, "-zt" ) == 0 ) { flags |= CV_CALIB_ZERO_TANGENT_DIST; } else if( strcmp( s, "-p" ) == 0 ) { flags |= CV_CALIB_FIX_PRINCIPAL_POINT; } else if( strcmp( s, 
"-v" ) == 0 ) { flip_vertical = 1; } else if( strcmp( s, "-o" ) == 0 ) { out_filename = argv[++i]; } else if( s[0] != '-' ) input_filename = s; else return fprintf( stderr, "Unknown option %s", s ), -1; } if( input_filename ) { capture = cvCreateFileCapture( input_filename ); if( !capture ) { f = fopen( input_filename, "rt" ); if( !f ) return fprintf( stderr, "The input file could not be opened\n" ), -1; image_count = -1; } mode = CAPTURING; } else capture = cvCreateCameraCapture(0); if( !capture && !f ) return fprintf( stderr, "Could not initialize video capture\n" ), -2; if( capture ) printf( "%s", live_capture_help ); elem_size = board_size.width*board_size.height*sizeof(image_points_buf[0]); storage = cvCreateMemStorage( MAX( elem_size*4, 1 << 16 )); image_points_buf = (CvPoint2D32f*)cvAlloc( elem_size ); image_points_seq = cvCreateSeq( 0, sizeof(CvSeq), elem_size, storage ); cvNamedWindow( "Image View", 1 ); cvNamedWindow( "Undistort",1); int disk_image_cnt = 0; for(;;) { IplImage *view = 0, *view_gray = 0; int count = 0, found, blink = 0; CvPoint text_origin; CvSize text_size = {0,0}; int base_line = 0; char s[100]; int key; if( f && fgets( imagename, sizeof(imagename)-2, f )) { int l = strlen(imagename); if( l > 0 && imagename[l-1] == '\n' ) imagename[--l] = '\0'; if( l > 0 ) { if( imagename[0] == '#' ) continue; view = cvLoadImage( imagename, 1 ); disk_image_cnt++; } } else if( capture ) { IplImage* view0 = cvQueryFrame( capture ); if( view0 ) { view = cvCreateImage( cvGetSize(view0), IPL_DEPTH_8U, view0->nChannels ); if( view0->origin == IPL_ORIGIN_BL ) cvFlip( view0, view, 0 ); else cvCopy( view0, view ); } } if( !view || (disk_image_cnt == images_from_file)) { if( image_points_seq->total > 0 ) { image_count = image_points_seq->total; goto calibrate; } break; } if( flip_vertical ) cvFlip( view, view, 0 ); img_size = cvGetSize(view); found = cvFindChessboardCorners( view, board_size, image_points_buf, &count, CV_CALIB_CB_ADAPTIVE_THRESH ); #if 1 // improve 
the found corners' coordinate accuracy view_gray = cvCreateImage( cvGetSize(view), 8, 1 ); cvCvtColor( view, view_gray, CV_BGR2GRAY ); cvFindCornerSubPix( view_gray, image_points_buf, count, cvSize(11,11), cvSize(-1,-1), cvTermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 )); cvReleaseImage( &view_gray ); #endif if( mode == CAPTURING && found && (f || clock() - prev_timestamp > delay*1e-3*CLOCKS_PER_SEC) ) { cvSeqPush( image_points_seq, image_points_buf ); prev_timestamp = clock(); blink = !f; #if 1 if( capture ) { sprintf( imagename, "view%03d.png", image_points_seq->total - 1 ); cvSaveImage( imagename, view ); } #endif } cvDrawChessboardCorners( view, board_size, image_points_buf, count, found ); cvGetTextSize( "100/100", &font, &text_size, &base_line ); text_origin.x = view->width - text_size.width - 10; text_origin.y = view->height - base_line - 10; if( mode == CAPTURING ) { if( image_count > 0 ) sprintf( s, "%d/%d", image_points_seq ? image_points_seq->total : 0, image_count ); else sprintf( s, "%d/?", image_points_seq ? image_points_seq->total : 0 ); } else if( mode == CALIBRATED ) sprintf( s, "Calibrated" ); else sprintf( s, "Press 'g' to start" ); cvPutText( view, s, text_origin, &font, mode != CALIBRATED ? CV_RGB(255,0,0) : CV_RGB(0,255,0)); if( blink ) cvNot( view, view ); //Rectify or Undistort the image if( mode == CALIBRATED && undistort_image ) { IplImage* t = cvCloneImage( view ); cvShowImage("Image View", view); cvUndistort2( t, view, &camera, &dist_coeffs ); cvReleaseImage( &t ); cvShowImage( "Undistort", view ); cvWaitKey(0); } else{ cvShowImage( "Image View", view ); key = cvWaitKey(capture ? 
50 : 500); } if( key == 27 ) break; if( key == 'u' && mode == CALIBRATED ){ undistort_image = !undistort_image; } if( capture && key == 'g' ) { mode = CAPTURING; cvClearMemStorage( storage ); image_points_seq = cvCreateSeq( 0, sizeof(CvSeq), elem_size, storage ); } if( mode == CAPTURING && (unsigned)image_points_seq->total >= (unsigned)image_count ) { calibrate: if(disk_image_cnt == images_from_file) undistort_image = !undistort_image; cvReleaseMat( &extr_params ); cvReleaseMat( &reproj_errs ); int code = run_calibration( image_points_seq, img_size, board_size, square_size, aspect_ratio, flags, &camera, &dist_coeffs, &extr_params, &reproj_errs, &avg_reproj_err ); // save camera parameters in any case, to catch Inf's/NaN's save_camera_params( out_filename, image_count, img_size, board_size, square_size, aspect_ratio, flags, &camera, &dist_coeffs, write_extrinsics ? extr_params : 0, write_points ? image_points_seq : 0, reproj_errs, avg_reproj_err ); if( code ) mode = CALIBRATED; else mode = DETECTION; } if( !view ) break; cvReleaseImage( &view ); } if( capture ) cvReleaseCapture( &capture ); return 0; }
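`run_calibration()` is not shown in this snippet. Assuming its `avg_reproj_err` output is the mean Euclidean distance between detected and reprojected corners, the quantity could be sketched as follows (`Pt` and `avgReprojError` are illustrative names, not part of the sample):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Sketch: mean per-corner reprojection error, i.e. the average Euclidean
// distance between the detected corners and the corners reprojected
// through the estimated camera model.
double avgReprojError(const std::vector<Pt>& detected,
                      const std::vector<Pt>& reprojected) {
    double sum = 0.0;
    for (std::size_t i = 0; i < detected.size(); ++i) {
        double dx = detected[i].x - reprojected[i].x;
        double dy = detected[i].y - reprojected[i].y;
        sum += std::sqrt(dx * dx + dy * dy);
    }
    return detected.empty() ? 0.0 : sum / detected.size();
}
```

A value well below one pixel is what a successful chessboard calibration typically reports.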
void hand() { bool useavepalm = true; if(palm->total <= 2) { useavepalm = false; CvFont Font1=cvFont(3,3); cvPutText(frame,"Error Palm Position!!",cvPoint(10,50),&Font1,CV_RGB(255,0,0)); //savepic = true; CvPoint *temp,*additional,*palmtemp; CvMemStorage* palm2storage = cvCreateMemStorage(0); CvSeq* palm2 = cvCreateSeq(CV_SEQ_ELTYPE_POINT,sizeof(CvSeq),sizeof(CvPoint),palm2storage); for(int i=0;i<palm->total;i++) { palmtemp = (CvPoint*)cvGetSeqElem(palm,i); for(int j=1;j<contours->total;j++) { temp = (CvPoint*)cvGetSeqElem(contours,j); if(temp->y == palmtemp->y && temp->x == palmtemp->x) { additional = (CvPoint*)cvGetSeqElem(contours,(int)(j+((contours->total)/2))%(contours->total)); if(additional->y <= palmtemp->y) cvCircle(frame,*additional,10,CV_RGB(0,0,255),-1,8,0); cvSeqPush(palm2,additional); } } } for(int i=0;i<palm2->total;i++) { temp = (CvPoint*)cvGetSeqElem(palm2,i); cvSeqPush(palm,temp); } for(int i=1;i<contours->total;i++) { temp = (CvPoint*)cvGetSeqElem(contours,1); if(temp->y <= additional->y) additional = temp; } cvCircle(frame,*additional,10,CV_RGB(0,0,255),-1,8,0); cvSeqPush(palm,additional); } ////////////////////////////////////////////////////////////////////////////// cvMinEnclosingCircle(palm,&mincirclecenter,&radius); mincirclecenter2.x = cvRound(mincirclecenter.x); mincirclecenter2.y = cvRound(mincirclecenter.y); if(useavepalm){ CvPoint avePalmCenter,distemp; int lengthtemp,radius2; avePalmCenter.x = 0; avePalmCenter.y = 0; for(int i=0;i<palm->total;i++) { CvPoint *temp = (CvPoint*)cvGetSeqElem(palm,i); avePalmCenter.x += temp->x; avePalmCenter.y += temp->y; } avePalmCenter.x = (int)(avePalmCenter.x/palm->total); avePalmCenter.y = (int)(avePalmCenter.y/palm->total); radius2 = 0; for(int i=0;i<palm->total;i++) { CvPoint *temp = (CvPoint*)cvGetSeqElem(palm,i); distemp.x = temp->x - avePalmCenter.x; distemp.y = temp->y - avePalmCenter.y; lengthtemp = sqrtf(( distemp.x* distemp.x)+(distemp.y*distemp.y)); radius2 += lengthtemp; } radius2 = 
(int)(radius2/palm->total); radius = ((0.5)*radius + (0.5)*radius2); mincirclecenter2.x = ((0.5)*mincirclecenter2.x + (0.5)*avePalmCenter.x); mincirclecenter2.y = ((0.5)*mincirclecenter2.y + (0.5)*avePalmCenter.y); } ////////////////////////////////////////////////////////////////////////////////////// palmposition[palmpositioncount].x = cvRound(mincirclecenter2.x); palmposition[palmpositioncount].y = cvRound(mincirclecenter2.y); palmpositioncount = (palmpositioncount+1)%3; if(palmpositionfull) { float xtemp=0,ytemp=0; for(int i=0;i<3;i++) { xtemp += palmposition[i].x; ytemp += palmposition[i].y; } mincirclecenter2.x = cvRound(xtemp/3); mincirclecenter2.y = cvRound(ytemp/3); } if(palmpositioncount == 2 && palmpositionfull == false) { palmpositionfull = true; } cvCircle(frame,mincirclecenter2,10,CV_RGB(0,255,255),4,8,0); //cvCircle(virtualhand,mincirclecenter2,10,CV_RGB(0,255,255),4,8,0); ////////////////////////////////////////////////////////////////////////////////////// palmsize[palmsizecount] = cvRound(radius); palmsizecount = (palmsizecount+1)%3; if(palmcountfull) { float tempcount=0; for(int i=0;i<3;i++) { tempcount += palmsize[i]; } radius = tempcount/3; } if(palmsizecount == 2 && palmcountfull == false) { palmcountfull = true; } cvCircle(frame,mincirclecenter2,cvRound(radius),CV_RGB(255,0,0),2,8,0); cvCircle(frame,mincirclecenter2,cvRound(radius*1.2),CV_RGB(200,100,200),1,8,0); //cvCircle(virtualhand,mincirclecenter2,cvRound(radius),CV_RGB(255,0,0),2,8,0); //cvCircle(virtualhand,mincirclecenter2,cvRound(radius*1.3),CV_RGB(200,100,200),1,8,0); ////////////////////////////////////////////////////////////////////////////////////// int fingercount = 0; float fingerlength; CvPoint tiplength,*point; for(int i=0;i<fingerseq->total;i++) { point = (CvPoint*)cvGetSeqElem(fingerseq,i); tiplength.x = point->x - mincirclecenter2.x; tiplength.y = point->y - mincirclecenter2.y; fingerlength = sqrtf(( tiplength.x* tiplength.x)+(tiplength.y*tiplength.y)); 
if((int)fingerlength > cvRound(radius*1.2)) { fingercount += 1; cvCircle(frame,*point,6,CV_RGB(0,255,0),-1,8,0); } } cvClearSeq(fingerseq); cvClearSeq(palm); }
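The palm centre and radius above are smoothed with a 3-sample history (`palmposition`, `palmsize`) that only kicks in once filled. That pattern can be isolated as a small ring-buffer averager; `Smoother3` is a hypothetical illustration of the same idea, not code from the original.

```cpp
#include <cstddef>

// Sketch of the 3-sample smoothing used for the palm centre/radius above:
// a fixed ring buffer whose running mean is returned on every push.
struct Smoother3 {
    double buf[3] = {0.0, 0.0, 0.0};
    std::size_t next = 0;   // slot to overwrite
    std::size_t count = 0;  // how many slots hold real samples (max 3)

    double push(double v) {
        buf[next] = v;
        next = (next + 1) % 3;
        if (count < 3) ++count;
        double s = 0.0;
        for (std::size_t i = 0; i < count; ++i) s += buf[i];
        return s / count;
    }
};
```

Unlike the original (which averages only after the buffer fills), this variant averages whatever samples exist, so there is no warm-up special case.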
int main() { if(run_tests_only) { MyLine3D::runTest(); return 0; } //CvMat *camera_inner_calibration_matrix; bool show_surf_example=false; bool show_calibration_from_camera_and_undistortion=false; if(show_calibration_from_camera_and_undistortion) { CvMat *object_points_all=0; CvMat *image_points_all=0; CvMat *points_count_all=0; CvMat *camera_matr=0; CvMat *distor_coefs=0; CvMat *rotation_vecs=0; CvMat *transpose_vecs=0; vector<CvPoint2D32f> qu_calibr_points; IplImage* frameCam1; cvNamedWindow("WindowCam1",CV_WINDOW_KEEPRATIO); CvCapture *captureCam1=cvCreateCameraCapture(0); IplImage *quarterFrame; CvPoint2D32f *cornersFounded= new CvPoint2D32f[100]; int cornersCount=0; int result_Found=0; // getting snapshots for inner camera calibration from video camera bool capture_flag=false; while(true) { frameCam1=cvQueryFrame(captureCam1); quarterFrame=cvCreateImage(cvSize((frameCam1->width),(frameCam1->height)),IPL_DEPTH_8U,3); cvCopy(frameCam1,quarterFrame); if(capture_flag) { result_Found=cvFindChessboardCorners(quarterFrame,cvSize(chess_b_szW,chess_b_szH),cornersFounded,&cornersCount);//,CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FILTER_QUADS |CV_CALIB_CB_FAST_CHECK); cvDrawChessboardCorners(quarterFrame,cvSize(chess_b_szW,chess_b_szH),cornersFounded,cornersCount,result_Found); if(result_Found>0) AddPointsToInnerCalibrate(qu_calibr_points,cornersFounded,cornersCount); capture_flag=false; cvShowImage("WindowCam1",quarterFrame); if(result_Found>0) cvWaitKey(0); } char c=cvWaitKey(33); if(c==27) break; if(c==32 || c=='y' || c=='Y') capture_flag=true; cvShowImage("WindowCam1",quarterFrame); cvReleaseImage(&quarterFrame); } cvReleaseImage(&quarterFrame); cvReleaseCapture(&captureCam1); cvDestroyWindow("WindowCam1"); PrintAllPointsForInnerCalibrate(qu_calibr_points,chess_b_szW*chess_b_szH); InitCvMatPointsParametersForInnerCallibration_part1(qu_calibr_points,chess_b_szW*chess_b_szH,object_points_all,image_points_all,points_count_all,chess_b_szW,chess_b_szH); 
InitOtherCameraParametersForInnerCallibration_part2(qu_calibr_points.size()/(chess_b_szW*chess_b_szH),camera_matr,distor_coefs,rotation_vecs,transpose_vecs); double calibration_error_result=cvCalibrateCamera2(object_points_all, image_points_all, points_count_all, cvSize(imgW,imgH), camera_matr, distor_coefs, rotation_vecs, transpose_vecs, CV_CALIB_FIX_PRINCIPAL_POINT|CV_CALIB_FIX_ASPECT_RATIO|CV_CALIB_ZERO_TANGENT_DIST ); WriteMatrixCoef(camera_matr); WriteMatrixCoef(distor_coefs); //camera_inner_calibration_matrix=cvCreateMat(3,3,CV_32FC1); //cvCopy(camera_matr,camera_inner_calibration_matrix); cvSave("camera_calibration_inner.txt",camera_matr,"camera_inner_calibration_matrix"); cvSave("camera_calibration_dist.txt",distor_coefs,"distor_coefs","coeficients of distortions"); cout<<"Total Error:"<<calibration_error_result<<endl; cout<<"Average Calibration Error :"<<(calibration_error_result)/qu_calibr_points.size()<<endl; //undistortion example IplImage *frame_cur; IplImage *undistor_image; cvNamedWindow("cameraUndistor",CV_WINDOW_KEEPRATIO); CvCapture *captureCam2=cvCreateCameraCapture(0); bool undist_flag=false; while(true) { frame_cur= cvQueryFrame(captureCam2); undistor_image=cvCreateImage(cvSize((frame_cur->width),(frame_cur->height)),IPL_DEPTH_8U,3); if(undist_flag) { cvUndistort2(frame_cur,undistor_image,camera_matr,distor_coefs); } else { cvCopy(frame_cur,undistor_image); } cvShowImage("cameraUndistor",undistor_image); char c=cvWaitKey(33); if(c==27) break; if(c=='u'||c=='U') undist_flag=!undist_flag; cvReleaseImage(&undistor_image); } cvReleaseImage(&undistor_image); cvReleaseCapture(&captureCam2); cvDestroyWindow("cameraUndistor"); }//ending undistortion_example if(show_surf_example) { //using SURF initModule_nonfree();// added at 16.04.2013 CvCapture* capture_cam_3=cvCreateCameraCapture(0); cvNamedWindow("SURF from Cam",CV_WINDOW_KEEPRATIO); cvCreateTrackbar("Hessian Level","SURF from Cam",0,1000,onTrackbarSlide1); IplImage* buf_frame_3=0; IplImage* 
gray_copy=0;
    IplImage* buf_frame_3_copy=0;
    CvSeq *kp1,*descr1;
    CvMemStorage *storage=cvCreateMemStorage(0);
    CvSURFPoint *surf_pt;
    bool surf_flag=false;
    while(true) {
        buf_frame_3=cvQueryFrame(capture_cam_3);
        if(surf_flag) {
            surf_flag=false;
            gray_copy=cvCreateImage(cvSize((buf_frame_3->width),(buf_frame_3->height)),IPL_DEPTH_8U,1);
            buf_frame_3_copy=cvCreateImage(cvSize((buf_frame_3->width),(buf_frame_3->height)),IPL_DEPTH_8U,3);
            cvCvtColor(buf_frame_3,gray_copy,CV_RGB2GRAY);
            //cvSetImageROI(gray_copy,cvRect(280,200,40,40));
            cvExtractSURF(gray_copy,NULL,&kp1,&descr1,storage,cvSURFParams(0.0,0));
            cvReleaseImage(&gray_copy);
            re_draw=true;
            while(true) {
                if(re_draw) {
                    cvCopy(buf_frame_3,buf_frame_3_copy);
                    double pi=acos(-1.0);
                    for(int i=0;i<kp1->total;i++) {
                        surf_pt=(CvSURFPoint*)cvGetSeqElem(kp1,i);
                        if(surf_pt->hessian<min_hessian) continue;
                        int pt_x=(int)(surf_pt->pt.x);
                        int pt_y=(int)(surf_pt->pt.y);
                        int sz=surf_pt->size;
                        // keep the angle in floating point: storing it in an int
                        // truncates the radians and collapses all directions to 0..6
                        double rad_angle=(surf_pt->dir*pi)/180;
                        cvCircle(buf_frame_3_copy,cvPoint(pt_x,pt_y),1/*sz*/,CV_RGB(0,255,0));
                        cvLine(buf_frame_3_copy,cvPoint(pt_x,pt_y),cvPoint(pt_x+sz*cosl(rad_angle),pt_y-sz*sinl(rad_angle)),CV_RGB(0,0,255));
                    }
                    cvShowImage("SURF from Cam",buf_frame_3_copy);
                }
                char c=cvWaitKey(33);
                if(c==27) { break; }
            }
            cvReleaseImage(&buf_frame_3_copy);
        }
        cvShowImage("SURF from Cam",buf_frame_3);
        char ch=cvWaitKey(33);
        if(ch==27) break;
        if(ch==32) surf_flag=true;
    }
    if(gray_copy!=0) cvReleaseImage(&gray_copy);
    cvReleaseCapture(&capture_cam_3);
    cvDestroyWindow("SURF from Cam");
}//ending SURF_example
CvFont my_font=cvFont(1,1);
cvInitFont(&my_font,CV_FONT_HERSHEY_SIMPLEX,1.0,1.0);
cvNamedWindow("twoSnapshots",CV_WINDOW_KEEPRATIO);
cvCreateTrackbar("Select LLine","twoSnapshots",0,1000,onTrackbarSlideSelectLine);
CvCapture *capture_4 = 0;
IplImage* left_img=0;
IplImage* right_img=0;
IplImage* cur_frame_buf=0;
IplImage* gray_img_left=0;
IplImage* gray_img_right=0;
IplImage* merged_images=0;
IplImage* merged_images_copy=0;
CvMat *fundamentalMatrix = 
0; vector<KeyPoint> key_points_left; Mat descriptors_left; vector<KeyPoint> key_points_right; Mat descriptors_right; //CvMemStorage *mem_stor=cvCreateMemStorage(0);*/ float min_hessian_value=1001.0f; double startValueOfFocus = 350; char* left_image_file_path = "camera_picture_left.png"; char* right_image_file_path = "camera_picture_right.png"; Array left_points, right_points; left_points.init(1,1); right_points.init(1,1); Array forReconstructionLeftPoints, forReconstructionRightPoints; forReconstructionLeftPoints.init(1,1); forReconstructionRightPoints.init(1,1); while(true) { char ch=cvWaitKey(33); if(ch==27) break; // open left and right images if(ch == 'o' || ch == 'O') { openTwoImages(left_image_file_path, right_image_file_path, left_img, right_img ); MergeTwoImages(left_img,right_img,merged_images); } // save both left and right images from camera if(ch == 's' || ch == 'S') { if( left_img != 0 ) cvSaveImage(left_image_file_path, left_img); if( right_img != 0) cvSaveImage(right_image_file_path, right_img); } if(ch=='l'||ch=='L') { if(capture_4 == 0) { capture_4=cvCreateCameraCapture(0); } cur_frame_buf=cvQueryFrame(capture_4); if(left_img==0) left_img=cvCreateImage(cvSize(cur_frame_buf->width,cur_frame_buf->height),IPL_DEPTH_8U,3); cvCopy(cur_frame_buf,left_img); if(right_img == 0) { right_img=cvCreateImage(cvSize(cur_frame_buf->width,cur_frame_buf->height),IPL_DEPTH_8U,3); cvCopy(cur_frame_buf,right_img); } MergeTwoImages(left_img,right_img,merged_images); } if(ch=='r'||ch=='R') { if(capture_4 == 0) { capture_4=cvCreateCameraCapture(0); } cur_frame_buf=cvQueryFrame(capture_4); if(right_img==0) right_img=cvCreateImage(cvSize(cur_frame_buf->width,cur_frame_buf->height),IPL_DEPTH_8U,3); cvCopy(cur_frame_buf,right_img); if(left_img == 0) { left_img=cvCreateImage(cvSize(cur_frame_buf->width,cur_frame_buf->height),IPL_DEPTH_8U,3); cvCopy(cur_frame_buf,left_img); } MergeTwoImages(left_img,right_img,merged_images); } if(ch=='b'||ch=='B') { if(capture_4 == 0) { 
capture_4=cvCreateCameraCapture(0); } cur_frame_buf=cvQueryFrame(capture_4); cvCopy(cur_frame_buf,left_img); cvCopy(cur_frame_buf,right_img); } if(ch=='q'||ch=='Q' && left_img!=0) { //proceed left extractFeaturesFromImage(left_img, min_hessian_value, gray_img_left, key_points_left, descriptors_left); } if(ch=='w'||ch=='W' && right_img!=0) { //proceed right extractFeaturesFromImage(right_img, min_hessian_value, gray_img_right, key_points_right, descriptors_right); } if(ch=='m'||ch=='M' && left_img!=0 && right_img!=0) { //merge two images in to bigger one MergeTwoImages(left_img,right_img,merged_images); } if(ch=='c'||ch=='C' && merged_images!=0) { //comparison of two images if(fundamentalMatrix != 0) { cvReleaseMat(& fundamentalMatrix); fundamentalMatrix = 0; } left_to_right_corresponding_points.clear(); right_to_left_corresponding_points.clear(); GetCorrespondingPointsForSURF(key_points_left,descriptors_left,key_points_right,descriptors_right,left_to_right_corresponding_points,right_to_left_corresponding_points); } if(ch == 'E' || ch == 'e') { //drawing lines for corresponding points KeyPoint *leftPoint,*rightPoint,*leftPoint2,*rightPoint2; int width_part=merged_images->width>>1; /*for(int iL=0;iL<left_to_right_corresponding_points.size();iL++) { leftPoint=(CvSURFPoint*)cvGetSeqElem(key_points_left,left_to_right_corresponding_points[iL].first); rightPoint=(CvSURFPoint*)cvGetSeqElem(key_points_right,left_to_right_corresponding_points[iL].second); cvLine(merged_images,cvPoint(leftPoint->pt.x,leftPoint->pt.y),cvPoint(rightPoint->pt.x+width_part,rightPoint->pt.y),CV_RGB(255,0,0)); }*/ int sizeOfAccepptedLeftToRightCorrespondings = left_to_right_corresponding_points.size(); bool* acceptedLeftToRightCorrespondings = 0; getAcceptedCorrespondingsForFindingModelParameters(left_to_right_corresponding_points, key_points_left, key_points_right, fundamentalMatrix, acceptedLeftToRightCorrespondings, sizeOfAccepptedLeftToRightCorrespondings); while(true) { 
merged_images_copy=cvCreateImage(cvSize(merged_images->width,merged_images->height),merged_images->depth,3); cvCopy(merged_images,merged_images_copy); int iL=selectedLeftLine; int iR=iL; if(iL>=left_to_right_corresponding_points.size()) iL=left_to_right_corresponding_points.size()-1; if(iR>=right_to_left_corresponding_points.size()) iR=right_to_left_corresponding_points.size()-1; char str[100]={0}; if(iL >= 0 ) { bool isLeftToRightLineIsAccepted = acceptedLeftToRightCorrespondings[iL]; // difference value sprintf(str,"%f",left_to_right_corresponding_points[iL].comparer_value); cvPutText(merged_images_copy,str,cvPoint(0,merged_images_copy->height-40),&my_font,CV_RGB(0,255,0)); // count of Matches sprintf(str,"%d",left_to_right_corresponding_points[iL].counterOfMatches); cvPutText(merged_images_copy,str,cvPoint(200,merged_images_copy->height-40),&my_font,CV_RGB(255,255,0)); // median of compared values sprintf(str,"%lf",left_to_right_corresponding_points[iL].medianOfComparedMatches); cvPutText(merged_images_copy,str,cvPoint(250,merged_images_copy->height-40),&my_font,CV_RGB(255,0,0)); // Variance of compared values sprintf(str,"V=%lf",left_to_right_corresponding_points[iL].Variance()); cvPutText(merged_images_copy,str,cvPoint(0,merged_images_copy->height-80),&my_font,CV_RGB(0,255,0)); // Standard deviation of compared values sprintf(str,"SD=%lf",sqrt( left_to_right_corresponding_points[iL].Variance() )); cvPutText(merged_images_copy,str,cvPoint(250,merged_images_copy->height-80),&my_font,CV_RGB(0,255,0)); double SD = sqrt( left_to_right_corresponding_points[iL].Variance() ) ; double median = left_to_right_corresponding_points[iL].medianOfComparedMatches; double compValue = left_to_right_corresponding_points[iL].comparer_value; double mark_1_5 = median - 1.5 * SD - compValue; // Mark 1.5 sprintf(str,"m1.5=%lf", mark_1_5); cvPutText(merged_images_copy,str,cvPoint(0,merged_images_copy->height-120),&my_font,CV_RGB(0,255,0)); sprintf(str,"angle=%lf", 
            left_to_right_corresponding_points[iL].degreesBetweenDeltaVector);
        cvPutText(merged_images_copy, str, cvPoint(0, merged_images_copy->height - 150), &my_font, CV_RGB(0,255,0));
        leftPoint  = &(key_points_left [left_to_right_corresponding_points[iL].comp_pair.first ]);
        rightPoint = &(key_points_right[left_to_right_corresponding_points[iL].comp_pair.second]);
        cvLine(merged_images_copy, cvPoint(leftPoint->pt.x, leftPoint->pt.y),
               cvPoint(rightPoint->pt.x + width_part, rightPoint->pt.y), CV_RGB(0,255,0));
        drawEpipolarLinesOnLeftAndRightImages(merged_images_copy,
                                              cvPoint(leftPoint->pt.x, leftPoint->pt.y),
                                              cvPoint(rightPoint->pt.x, rightPoint->pt.y),
                                              fundamentalMatrix);
        CvScalar color = CV_RGB(255, 0, 0);
        if (isLeftToRightLineIsAccepted) {
            color = CV_RGB(0, 255, 0);
        }
        cvCircle(merged_images_copy, cvPoint(leftPoint->pt.x, leftPoint->pt.y), 5, color);
        cvCircle(merged_images_copy, cvPoint(rightPoint->pt.x + width_part, rightPoint->pt.y), 5, color);
    }
    //cvLine(merged_images_copy,cvPoint(leftPoint->pt.x,leftPoint->pt.y),cvPoint(rightPoint->pt.x,rightPoint->pt.y),CV_RGB(255,0,255));

    if (iR >= 0) {
        sprintf(str, "%f", right_to_left_corresponding_points[iR].comparer_value);
        cvPutText(merged_images_copy, str, cvPoint(width_part, merged_images_copy->height - 40), &my_font, CV_RGB(255,0,0));
        rightPoint2 = &(key_points_right[right_to_left_corresponding_points[iR].comp_pair.first ]);
        leftPoint2  = &(key_points_left [right_to_left_corresponding_points[iR].comp_pair.second]);
        cvLine(merged_images_copy, cvPoint(leftPoint2->pt.x, leftPoint2->pt.y),
               cvPoint(rightPoint2->pt.x + width_part, rightPoint2->pt.y), CV_RGB(255,0,0));
    }
    //cvLine(merged_images_copy,cvPoint(leftPoint2->pt.x+width_part,leftPoint2->pt.y),cvPoint(rightPoint2->pt.x+width_part,rightPoint2->pt.y),CV_RGB(255,0,255));

    cvShowImage("twoSnapshots", merged_images_copy);
    cvReleaseImage(&merged_images_copy);

    char ch2 = cvWaitKey(33);
    if (ch2 == 27)
        break;
    if (ch2 == 'z' && selectedLeftLine > 0)    { selectedLeftLine--; }
    if (ch2 == 'x' && selectedLeftLine < 1000) { selectedLeftLine++; }
    if (ch2 == 'a' || ch2 == 'A') { acceptedLeftToRightCorrespondings[selectedLeftLine] = true;  }
    if (ch2 == 'd' || ch2 == 'D') { acceptedLeftToRightCorrespondings[selectedLeftLine] = false; }
} // end of while(true)

SaveAcceptedCorresspondings(left_to_right_corresponding_points, right_to_left_corresponding_points,
                            key_points_left, key_points_right,
                            acceptedLeftToRightCorrespondings, sizeOfAccepptedLeftToRightCorrespondings);
ConvertAcceptedCorresspondingsToMyArray(left_to_right_corresponding_points, right_to_left_corresponding_points,
                                        key_points_left, key_points_right,
                                        acceptedLeftToRightCorrespondings, sizeOfAccepptedLeftToRightCorrespondings,
                                        left_points, right_points);
delete[] acceptedLeftToRightCorrespondings;
}

if (ch == 'T' || ch == 't') {
    clock_t startTime = clock();
    openTwoImages(left_image_file_path, right_image_file_path, left_img, right_img);

    // Extract features from the left and right images.
    extractFeaturesFromImage(left_img,  min_hessian_value, gray_img_left,  key_points_left,  descriptors_left);
    extractFeaturesFromImage(right_img, min_hessian_value, gray_img_right, key_points_right, descriptors_right);

    // Compare the two images.
    if (fundamentalMatrix != 0) {
        cvReleaseMat(&fundamentalMatrix);
        fundamentalMatrix = 0;
    }
    left_to_right_corresponding_points.clear();
    right_to_left_corresponding_points.clear();
    GetCorrespondingPointsForSURF(key_points_left, descriptors_left, key_points_right, descriptors_right,
                                  left_to_right_corresponding_points, right_to_left_corresponding_points);

    // Find the fundamental matrix and the corresponding points used for reconstruction.
    findFundamentalMatrixAndCorrespondingPointsForReconstruction(
        left_to_right_corresponding_points, right_to_left_corresponding_points, fundamentalMatrix,
        key_points_left, key_points_right, descriptors_left, descriptors_right,
        left_img, right_img, gray_img_left, gray_img_right,
        forReconstructionLeftPoints, forReconstructionRightPoints, min_hessian_value, 450);

    // Select points for estimating the model parameters.
    int sizeOfAccepptedLeftToRightCorrespondings = left_to_right_corresponding_points.size();
    bool *acceptedLeftToRightCorrespondings = 0;
    getAcceptedCorrespondingsForFindingModelParameters(left_to_right_corresponding_points,
        key_points_left, key_points_right, fundamentalMatrix,
        acceptedLeftToRightCorrespondings, sizeOfAccepptedLeftToRightCorrespondings);
    ConvertAcceptedCorresspondingsToMyArray(left_to_right_corresponding_points, right_to_left_corresponding_points,
        key_points_left, key_points_right,
        acceptedLeftToRightCorrespondings, sizeOfAccepptedLeftToRightCorrespondings,
        left_points, right_points);
    delete[] acceptedLeftToRightCorrespondings;

    // Determine the model parameters and reconstruct the scene.
    cv::Mat mat_left_img(left_img, true);
    cv::Mat mat_right_img(right_img, true);
    mainLevenbergMarkvardt_LMFIT(startValueOfFocus, "currentPLYExportFile", left_points, right_points,
                                 mat_left_img, mat_right_img,
                                 forReconstructionLeftPoints, forReconstructionRightPoints);
    mat_left_img.release();
    mat_right_img.release();
    cout << "Code execution time: " << double(clock() - startTime) / (double)CLOCKS_PER_SEC << " seconds." << endl;
}

if (ch == 'I' || ch == 'i') {
    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(descriptors_left, descriptors_right, matches);

    //double max_dist = 0; double min_dist = 100;
    ////-- Quick calculation of max and min distances between keypoints
    //for( int i = 0; i < descriptors_left.rows; i++ )
    //{   double dist = matches[i].distance;
    //    if( dist < min_dist ) min_dist = dist;
    //    if( dist > max_dist ) max_dist = dist;
    //}
    //printf("-- Max dist : %f \n", max_dist );
    //printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist,
    //-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
    //-- small)
    //-- PS.- radiusMatch can also be used here.
    //std::vector< DMatch > good_matches;
    left_to_right_corresponding_points.clear();
    right_to_left_corresponding_points.clear();
    for (int i = 0; i < descriptors_left.rows; i++) {
        //if( matches[i].distance <= max(2*min_dist, 0.02) )
        {
            //good_matches.push_back( matches[i]);
            left_to_right_corresponding_points.push_back(
                ComparedIndexes(matches[i].distance, pair<int, int>(i, matches[i].trainIdx)));
        }
    }
    cout << "Count of good matches: " << left_to_right_corresponding_points.size() << endl;
    stable_sort(left_to_right_corresponding_points.begin(), left_to_right_corresponding_points.end(),
                my_comparator_for_stable_sort);
}

//if( ch == 'K' || ch == 'k')
//{
//    CvSURFPoint *leftPoint;
//    // proceed left
//    gray_img_left = cvCreateImage(cvSize((left_img->width),(left_img->height)), IPL_DEPTH_8U, 1);
//    cvCvtColor(left_img, gray_img_left, CV_RGB2GRAY);
//    cvExtractSURF(gray_img_left, NULL, &key_points_left, &descriptors_left, mem_stor, cvSURFParams(min_hessian_value,0));
//    cv::Mat mat_gray_leftImage(gray_img_left, true);
//    cvReleaseImage(&gray_img_left);
//    // proceed right
//    gray_img_right = cvCreateImage(cvSize((right_img->width),(right_img->height)), IPL_DEPTH_8U, 1);
//    cvCvtColor(right_img, gray_img_right, CV_RGB2GRAY);
//    cv::Mat mat_gray_rightImage(gray_img_right, true);
//    cvReleaseImage(&gray_img_right);
//    vector<Point2f> LK_left_points;
//    vector<Point2f> LK_right_points;
//    LK_right_points.resize(key_points_left->total);
//    for( int i = 0; i < key_points_left->total; i++)
//    {
//        leftPoint = (CvSURFPoint*)cvGetSeqElem(key_points_left, i);
//        LK_left_points.push_back(Point2f( leftPoint->pt.x, leftPoint->pt.y));
//    }
//
//    vector<uchar> status;
//    vector<float> err;
//    cv::calcOpticalFlowPyrLK(
//        mat_gray_leftImage,
//        mat_gray_rightImage,
//        LK_left_points,
//        LK_right_points,
//        status,
//        err);
//    int width_part = merged_images->width >> 1;
//
//    float minErr = err[0];
//    for(int k = 0; k < err.size(); k++)
//    {
//        if(status[k] && err[k] < minErr)
//        {
//            minErr = err[k];
//        }
//    }
//
//    cout << "Lucas-Kanade min error: " << minErr << endl;
//    int i = 0;
//    merged_images_copy = cvCreateImage(cvSize(merged_images->width, merged_images->height), merged_images->depth, 3);
//    cvCopy(merged_images, merged_images_copy);
//    for(; i < LK_left_points.size(); ++i)
//    {
//        if(err[i] < 5 * minErr && status[i])
//        {
//            cvLine(merged_images_copy, cvPoint(LK_left_points[i].x, LK_left_points[i].y),
//                   cvPoint(LK_right_points[i].x + width_part, LK_right_points[i].y),
//                   CV_RGB(100 + ((i*3) % 155), 100 + ((i*7) % 155), 100 + ((i*13) % 155)));
//        }
//    }
//    cvShowImage("twoSnapshots", merged_images_copy);
//
//    while(true)
//    {
//        char ch2 = cvWaitKey(33);
//        if(ch2 == 27)
//            break;
//    }
//
//    cvReleaseImage(&merged_images_copy);
//    status.clear();
//    err.clear();
//    LK_left_points.clear();
//    LK_right_points.clear();
//    mat_gray_leftImage.release();
//    mat_gray_rightImage.release();
//}

if (ch == 'F' || ch == 'f') {
    findFundamentalMatrixAndCorrespondingPointsForReconstruction(
        left_to_right_corresponding_points, right_to_left_corresponding_points, fundamentalMatrix,
        key_points_left, key_points_right, descriptors_left, descriptors_right,
        left_img, right_img, gray_img_left, gray_img_right,
        forReconstructionLeftPoints, forReconstructionRightPoints, min_hessian_value);
}

if (ch == 'P' || ch == 'p') {
    cv::Mat mat_left_img(left_img, true);
    cv::Mat mat_right_img(right_img, true);
    mainLevenbergMarkvardt_LMFIT(startValueOfFocus, "currentPLYExportFile", left_points, right_points,
                                 mat_left_img, mat_right_img,
                                 forReconstructionLeftPoints, forReconstructionRightPoints);
    mat_left_img.release();
    mat_right_img.release();
}

if (merged_images != 0) {
    cvShowImage("twoSnapshots", merged_images);
}
}
int main()
{
    int mode;
    int camera_idx;
    const char *ini_path = "./setting.ini";
    char file_path[256];
    IplImage *src_img = NULL, *dst_img = NULL;
    CvCapture *Capture = NULL;
    IplImage *Capture_img;

    while (1) {
        printf("\n==========================================================================\n");
        printf("Select Menu\n");
        printf("1. Settings\n");
        printf("2. Photo correction\n");
        printf("3. Video correction\n");
        printf("4. Live cam\n");
        printf("5. Color blindness simulation\n");
        printf("6. Color weakness correction & color blindness simulation\n");
        printf("7. Exit\n");
        printf("==========================================================================\n\n");
        printf("menu : ");
        scanf("%d", &mode);

        switch (mode) {
        case 1: // settings
        {
            cvNamedWindow("Color weakness test");
            cvCreateTrackbar("factor", "Color weakness test", &temp_factor, 100, SettingTrackbar);
            IplImage *test_img = cvLoadImage("test_img.jpg");
            IplImage *modify_img = cvCreateImage(cvGetSize(test_img), IPL_DEPTH_8U, 3);
            while (1) {
                Refine_img(test_img, modify_img, modification_factor, MODE_CORRECTION);
                cvShowImage("Origin", test_img);
                cvShowImage("Color weakness test", modify_img);
                if (cvWaitKey(33) == 27)
                    break;
            }
            cvReleaseImage(&test_img);
            cvReleaseImage(&modify_img);
            cvDestroyAllWindows();
        }
        break;

        case 2: // photo correction
            printf("Image path : ");
            scanf("%s", file_path);
            src_img = cvLoadImage(file_path);
            if (src_img == NULL) { // cvLoadImage returns NULL on failure; checking src_img->width would dereference NULL
                printf("ERROR : File not found\n");
                break;
            } else {
                dst_img = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, 3);
                Refine_img(src_img, dst_img, modification_factor, MODE_CORRECTION);
                cvShowImage("Source Image", src_img);
                cvShowImage("Destination Image", dst_img);
                cvWaitKey(0);
                cvReleaseImage(&src_img);
                cvReleaseImage(&dst_img);
                src_img = NULL;
                dst_img = NULL;
                cvDestroyAllWindows();
            }
            break;

        case 3: // video correction
            printf("Video path : ");
            scanf("%s", file_path);
            Capture = cvCaptureFromAVI(file_path);
            if (Capture == NULL) {
                printf("ERROR : Video not found\n");
                break;
            } else {
                double nFPS = cvGetCaptureProperty(Capture, CV_CAP_PROP_FPS);
                //nFPS = 1000 / nFPS;
                int nTotalFrame = (int)cvGetCaptureProperty(Capture, CV_CAP_PROP_FRAME_COUNT);
                int frameCount = 1;
                Capture_img = cvQueryFrame(Capture);
                dst_img = cvCreateImage(cvGetSize(Capture_img), IPL_DEPTH_8U, 3);
                IplImage *small_src = cvCreateImage(cvSize(Capture_img->width / 2, Capture_img->height / 2), IPL_DEPTH_8U, 3);
                IplImage *small_dst = cvCloneImage(small_src);
                while (1) {
                    int _TICK = GetTickCount();
                    Capture_img = cvQueryFrame(Capture);
                    cvResize(Capture_img, small_src);
                    Refine_img(small_src, small_dst, modification_factor, MODE_CORRECTION);
                    cvResize(small_dst, dst_img);
                    //cvShowImage("Source Image", src_img);
                    frameCount++;
                    if (cvWaitKey(1) == 27 || frameCount >= nTotalFrame - 1) {
                        cvDestroyAllWindows();
                        break;
                    }
                    _TICK = GetTickCount() - _TICK;
                    float fps = 1000.0f / (float)_TICK;
                    char buf[32];
                    sprintf(buf, "%.2f fps", fps);
                    static CvFont font = cvFont(1.0); // &cvFont(1.0) took the address of a temporary, which is invalid C++
                    cvPutText(dst_img, buf, cvPoint(30, 30), &font, cvScalar(0, 0, 255));
                    cvShowImage("Destination Image", dst_img);
                }
                cvReleaseImage(&small_src);
                cvReleaseImage(&small_dst);
                cvReleaseImage(&dst_img);
                cvReleaseCapture(&Capture); // was leaked: Capture was nulled without being released
                Capture = NULL;
                src_img = NULL;
                dst_img = NULL;
            }
            break;

        case 4: // live-camera correction
            printf("Select Camera index : ");
            scanf("%d", &camera_idx);
            Capture = cvCaptureFromCAM(camera_idx);
            if (Capture == NULL) {
                printf("ERROR : Camera not found\n");
                break;
            } else {
                while (1) {
                    Capture_img = cvQueryFrame(Capture);
                    src_img = cvCloneImage(Capture_img);
                    dst_img = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, 3);
                    Refine_img(src_img, dst_img, modification_factor, MODE_CORRECTION);
                    cvShowImage("Source Image", src_img);
                    cvShowImage("Destination Image", dst_img);
                    cvReleaseImage(&src_img);
                    cvReleaseImage(&dst_img);
                    if (cvWaitKey(10) == 27) {
                        cvDestroyAllWindows();
                        break;
                    }
                }
                cvReleaseCapture(&Capture);
                Capture = NULL;
                src_img = NULL;
                dst_img = NULL;
            }
            break;

        case 5: // color blindness simulation
            printf("Image path : ");
            scanf("%s", file_path);
            src_img = cvLoadImage(file_path);
            if (src_img == NULL) {
                printf("ERROR : File not found\n");
                break;
            } else {
                // settings window
                cvNamedWindow("Color weakness test");
                cvCreateTrackbar("factor", "Color weakness test", &temp_factor, 100, SettingTrackbar);
                IplImage *modify_img = cvCreateImage(cvGetSize(src_img), IPL_DEPTH_8U, 3);
                while (1) {
                    Refine_img(src_img, modify_img, modification_factor, MODE_DYSCHROMATOPSA);
                    cvShowImage("Color weakness test", modify_img);
                    if (cvWaitKey(33) == 27)
                        break;
                }
                cvReleaseImage(&src_img);
                cvReleaseImage(&modify_img);
                cvDestroyAllWindows();
            }
            break;

        case 6: // correction plus simulated view of the corrected image
        {
            // settings windows
            cvNamedWindow("Color weakness test");
            cvNamedWindow("Inverse");
            cvCreateTrackbar("factor", "Color weakness test", &temp_factor, 100, SettingTrackbar);
            cvCreateTrackbar("factor", "Inverse", &t_inverse_factor, 100, inverse_SettingTrackbar);
            IplImage *test_img = cvLoadImage("test_img.jpg");
            IplImage *modify_img = cvCreateImage(cvGetSize(test_img), IPL_DEPTH_8U, 3);
            IplImage *inverseImg = cvCreateImage(cvGetSize(test_img), IPL_DEPTH_8U, 3);
            while (1) {
                Refine_img(test_img, modify_img, modification_factor, MODE_CORRECTION);
                Refine_img(modify_img, inverseImg, inverse_factor, MODE_DYSCHROMATOPSA);
                cvShowImage("Origin", test_img);
                cvShowImage("Inverse", inverseImg);
                cvShowImage("Color weakness test", modify_img);
                if (cvWaitKey(33) == 27)
                    break;
            }
            cvReleaseImage(&inverseImg);
            cvReleaseImage(&test_img);
            cvReleaseImage(&modify_img);
            cvDestroyAllWindows();
        }
        break; // was missing: without it, case 6 fell through into case 7 and exited the program

        case 7:
            return 0;

        default:
            printf("ERROR : select correct menu\n");
            break;
        }
    }
    return 0;
}