MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);

    connect(ui->actionOpen, SIGNAL(triggered()), this, SLOT(open()));
    connect(ui->actionReset, SIGNAL(triggered()), this, SLOT(reset()));
    connect(ui->btn_Left, SIGNAL(clicked()), this, SLOT(turnLeft()));
    connect(ui->btn_Right, SIGNAL(clicked()), this, SLOT(turnRight()));
    connect(ui->actionRgb2gray, SIGNAL(triggered()), this, SLOT(rgb2gray()));
    connect(ui->actionRgb2bw, SIGNAL(triggered()), this, SLOT(rgb2bw()));
    connect(ui->actionNegative, SIGNAL(triggered()), this, SLOT(negative()));
    connect(ui->actionStretch, SIGNAL(triggered()), this, SLOT(stretch()));
    connect(ui->actionLog, SIGNAL(triggered()), this, SLOT(log()));
    connect(ui->actionHistogramEqualize, SIGNAL(triggered()), this, SLOT(histogramEqualize()));
    connect(ui->actionHistogramExactSpecifiedEqualize, SIGNAL(triggered()), this, SLOT(histogramExactSpecifiedEqualize()));
    connect(ui->actionSpatialFilter, SIGNAL(triggered()), this, SLOT(spatialFilter()));
    connect(ui->actionMedianFilter, SIGNAL(triggered()), this, SLOT(medianFilter()));
    connect(ui->actionFFT, SIGNAL(triggered()), this, SLOT(makeFFT()));
    connect(ui->actionOilPaint, SIGNAL(triggered()), this, SLOT(oilPaint()));
    connect(ui->actionRelief, SIGNAL(triggered()), this, SLOT(relief()));
    connect(ui->actionEdgeExtraction, SIGNAL(triggered()), this, SLOT(edgeExtraction()));
    connect(ui->actionGaussianBlur, SIGNAL(triggered()), this, SLOT(gaussianBlur()));
    connect(ui->actionOpenOperate, SIGNAL(triggered()), this, SLOT(openOp()));
    connect(ui->actionCloseOperate, SIGNAL(triggered()), this, SLOT(closeOp()));
    connect(ui->actionExpansion, SIGNAL(triggered()), this, SLOT(expansionOp()));
    connect(ui->actionCorrosion, SIGNAL(triggered()), this, SLOT(corrosionOp()));
    connect(ui->checkBox, SIGNAL(toggled(bool)), this, SLOT(saveCheck(bool)));
    connect(ui->actionSave, SIGNAL(triggered()), this, SLOT(save()));

    this->ui->graphicsView->setScene(Q_NULLPTR);
    this->pixmapItem = Q_NULLPTR;
    this->directory = new QDir();
    this->imageProcessor = Q_NULLPTR;

    // disable the processing actions until an image is opened
    this->ui->actionReset->setEnabled(false);
    this->ui->btn_Left->setEnabled(false);
    this->ui->btn_Right->setEnabled(false);
    this->ui->actionRgb2gray->setEnabled(false);
    this->ui->actionRgb2bw->setEnabled(false);
    this->ui->actionNegative->setEnabled(false);
    this->ui->actionStretch->setEnabled(false);
    this->ui->actionLog->setEnabled(false);
    this->ui->actionHistogramEqualize->setEnabled(false);
    this->ui->actionHistogramExactSpecifiedEqualize->setEnabled(false);
    this->ui->actionSpatialFilter->setEnabled(false);
    this->ui->actionMedianFilter->setEnabled(false);
    this->ui->actionFFT->setEnabled(false);
    this->ui->actionOilPaint->setEnabled(false);
    this->ui->actionRelief->setEnabled(false);
    this->ui->actionEdgeExtraction->setEnabled(false);
    this->ui->actionGaussianBlur->setEnabled(false);
    this->ui->actionSave->setEnabled(false);
}
/**
 * colorMagnify - Eulerian color magnification
 */
void VideoProcessor::colorMagnify()
{
    // set filters
    setSpatialFilter(GAUSSIAN);
    setTemporalFilter(IDEAL);

    // create a temp file
    createTemp();

    cv::Mat input;      // current frame
    cv::Mat output;     // output frame
    cv::Mat motion;     // motion image
    cv::Mat temp;       // temp image
    std::vector<cv::Mat> frames;            // video frames
    std::vector<cv::Mat> downSampledFrames; // down-sampled frames
    std::vector<cv::Mat> filteredFrames;    // filtered frames
    cv::Mat videoMat;   // concatenation of all the down-sampled frames
    cv::Mat filtered;   // concatenated filtered image

    // if no capture device has been set
    if (!isOpened())
        return;

    // set the modify flag to be true
    modify = true;
    // is processing
    stop = false;

    // save the current position
    long pos = curPos;
    // jump to the first frame
    jumpTo(0);

    // 1. spatial filtering
    while (getNextFrame(input) && !isStop()) {
        input.convertTo(temp, CV_32FC3);
        frames.push_back(temp.clone());
        // spatial filtering
        std::vector<cv::Mat> pyramid;
        spatialFilter(temp, pyramid);
        downSampledFrames.push_back(pyramid.at(levels-1));
        // update progress
        std::string msg = "Spatial Filtering...";
        emit updateProcessProgress(msg, floor((fnumber++) * 100.0 / length));
    }
    if (isStop()) {
        emit closeProgressDialog();
        fnumber = 0;
        return;
    }
    emit closeProgressDialog();

    // 2. concatenate all the frames into a single large Mat,
    //    where each column is a reshaped single frame
    //    (for processing convenience)
    concat(downSampledFrames, videoMat);

    // 3. temporal filtering
    temporalFilter(videoMat, filtered);

    // 4. amplify the color variations
    amplify(filtered, filtered);

    // 5. de-concatenate the filtered image into filtered frames
    deConcat(filtered, downSampledFrames.at(0).size(), filteredFrames);

    // 6. amplify each frame by adding the frame image and the motion,
    //    and write the result into the video
    fnumber = 0;
    for (int i = 0; i < length - 1 && !isStop(); ++i) {
        // up-sample the motion image
        upsamplingFromGaussianPyramid(filteredFrames.at(i), levels, motion);
        resize(motion, motion, frames.at(i).size());
        temp = frames.at(i) + motion;
        output = temp.clone();
        double minVal, maxVal;
        minMaxLoc(output, &minVal, &maxVal); // find minimum and maximum intensities
        output.convertTo(output, CV_8UC3,
                         255.0 / (maxVal - minVal),
                         -minVal * 255.0 / (maxVal - minVal));
        tempWriter.write(output);
        std::string msg = "Amplifying...";
        emit updateProcessProgress(msg, floor((fnumber++) * 100.0 / length));
    }
    if (!isStop()) {
        emit revert();
    }
    emit closeProgressDialog();

    // release the temp writer
    tempWriter.release();
    // switch to the processed video
    setInput(tempFile);
    // jump back to the original position
    jumpTo(pos);
}
/**
 * motionMagnify - Eulerian motion magnification
 */
void VideoProcessor::motionMagnify()
{
    // set filters
    setSpatialFilter(LAPLACIAN);
    setTemporalFilter(IIR);

    // create a temp file
    createTemp();

    cv::Mat input;   // current frame
    cv::Mat output;  // output frame
    cv::Mat motion;  // motion image
    std::vector<cv::Mat> pyramid;
    std::vector<cv::Mat> filtered;

    // if no capture device has been set
    if (!isOpened())
        return;

    // set the modify flag to be true
    modify = true;
    // is processing
    stop = false;

    // save the current position
    long pos = curPos;
    // jump to the first frame
    jumpTo(0);

    while (!isStop()) {
        // read the next frame, if any
        if (!getNextFrame(input))
            break;
        input.convertTo(input, CV_32FC3, 1.0 / 255.0f);

        // 1. convert to Lab color space
        cv::cvtColor(input, input, CV_BGR2Lab);

        // 2. spatial filtering of one frame
        cv::Mat s = input.clone();
        spatialFilter(s, pyramid);

        // 3. temporal filtering of one frame's pyramid,
        //    then amplify the motion
        if (fnumber == 0) {
            // first frame: initialize the low-pass filters
            lowpass1 = pyramid;
            lowpass2 = pyramid;
            filtered = pyramid;
        } else {
            for (int i = 0; i < levels; ++i) {
                curLevel = i;
                temporalFilter(pyramid.at(i), filtered.at(i));
            }

            // amplify each spatial frequency band
            // according to Figure 6 of the paper
            cv::Size filterSize = filtered.at(0).size();
            int w = filterSize.width;
            int h = filterSize.height;

            delta = lambda_c / 8.0 / (1.0 + alpha);
            // the factor to boost alpha above the bound
            // (for better visualization)
            exaggeration_factor = 2.0;

            // compute the representative wavelength lambda
            // for the lowest spatial frequency band of the Laplacian pyramid
            lambda = sqrt(w * w + h * h) / 3; // 3 is an experimental constant

            for (int i = levels; i >= 0; i--) {
                curLevel = i;
                amplify(filtered.at(i), filtered.at(i));
                // going one level down the pyramid
                // halves the representative lambda
                lambda /= 2.0;
            }
        }

        // 4. reconstruct the motion image from the filtered pyramid
        reconImgFromLaplacianPyramid(filtered, levels, motion);

        // 5. attenuate the chrominance (a and b) channels
        attenuate(motion, motion);

        // 6. combine the source frame and the motion image
        if (fnumber > 0) // don't amplify the first frame
            s += motion;

        // 7. convert back to BGR color space and CV_8UC3
        output = s.clone();
        cv::cvtColor(output, output, CV_Lab2BGR);
        output.convertTo(output, CV_8UC3, 255.0, 1.0 / 255.0);

        // write the frame to the temp file
        tempWriter.write(output);

        // update progress
        std::string msg = "Processing...";
        emit updateProcessProgress(msg, floor((fnumber++) * 100.0 / length));
    }
    if (!isStop()) {
        emit revert();
    }
    emit closeProgressDialog();

    // release the temp writer
    tempWriter.release();
    // switch to the processed video
    setInput(tempFile);
    // jump back to the original position
    jumpTo(pos);
}