bool ConvertToGrayscaleInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();

   StandardStatus status;
   image.SetStatusCallback( &status );

   Console().EnableAbort();

   image.SetColorSpace( ColorSpace::Gray );

   return true;
}
bool ConvertToRGBColorInstance::ExecuteOn( View& view )
{
   /*
    * A grayscale image cannot be masked with an RGB image, so remove this
    * image as a mask from any grayscale image that is currently using it.
    */
   ImageWindow window = view.Window();
   Array<ImageWindow> windows = ImageWindow::AllWindows();
   for ( size_type i = 0; i < windows.Length(); ++i )
      if ( windows[i].Mask() == window && !windows[i].MainView().IsColor() )
         windows[i].RemoveMask();

   AutoViewLock lock( view );

   ImageVariant image = view.Image();

   StandardStatus status;
   image.SetStatusCallback( &status );

   Console().EnableAbort();

   image.SetColorSpace( ColorSpace::RGB );

   return true;
}
bool FluxCalibrationInstance::CanExecuteOn( const View& view, pcl::String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "FluxCalibration cannot be executed on complex images.";
      return false;
   }

   if ( view.Image()->IsColor() )
   {
      whyNot = "FluxCalibration cannot be executed on color images.";
      return false;
   }

   FITSKeywordArray inputKeywords;
   view.Window().GetKeywords( inputKeywords );

   if ( FluxCalibrationEngine::KeywordExists( inputKeywords, "FLXMIN" ) ||
        FluxCalibrationEngine::KeywordExists( inputKeywords, "FLXRANGE" ) ||
        FluxCalibrationEngine::KeywordExists( inputKeywords, "FLX2DN" ) )
   {
      whyNot = "FluxCalibration cannot be executed on an already flux-calibrated image.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool DebayerInstance::CanExecuteOn( const View& view, String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
      whyNot = "Debayer cannot be executed on complex images.";
   else if ( view.Image().Width() < 6 || view.Image().Height() < 6 )
      whyNot = "Debayer needs an image of at least 6 by 6 pixels";
   else
   {
      whyNot.Clear();
      return true;
   }

   return false;
}
bool AnnotationInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();

   StandardStatus status;
   image.SetStatusCallback( &status );

   Console().EnableAbort();

   if ( !image.IsComplexSample() )
   {
      if ( image.IsFloatSample() )
         switch ( image.BitsPerSample() )
         {
         case 32: AnnotationEngine::Apply( static_cast<pcl::Image&>( *image ), *this ); break;
         case 64: AnnotationEngine::Apply( static_cast<pcl::DImage&>( *image ), *this ); break;
         }
      else
         switch ( image.BitsPerSample() )
         {
         case  8: AnnotationEngine::Apply( static_cast<pcl::UInt8Image&>( *image ), *this ); break;
         case 16: AnnotationEngine::Apply( static_cast<pcl::UInt16Image&>( *image ), *this ); break;
         case 32: AnnotationEngine::Apply( static_cast<pcl::UInt32Image&>( *image ), *this ); break;
         }
   }

   return true;
}
bool PhotometricSuperflatInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      return false;

   StandardStatus status;
   image.SetStatusCallback( &status );

   Console().EnableAbort();

   if ( image.IsFloatSample() )
      switch ( image.BitsPerSample() )
      {
      case 32: PhotometricSuperflatEngine::Apply( static_cast<Image&>( *image ), *this ); break;
      case 64: PhotometricSuperflatEngine::Apply( static_cast<DImage&>( *image ), *this ); break;
      }
   else
      switch ( image.BitsPerSample() )
      {
      case  8: PhotometricSuperflatEngine::Apply( static_cast<UInt8Image&>( *image ), *this ); break;
      case 16: PhotometricSuperflatEngine::Apply( static_cast<UInt16Image&>( *image ), *this ); break;
      case 32: PhotometricSuperflatEngine::Apply( static_cast<UInt32Image&>( *image ), *this ); break;
      }

   return true;
}
bool CropInstance::ExecuteOn( View& view )
{
   if ( !view.IsMainView() )
      return false; // should not happen!

   if ( p_margins == 0.0 )
   {
      Console().WriteLn( "<end><cbr><* Identity *>" );
      return true;
   }

   AutoViewLock lock( view );

   ImageWindow window = view.Window();
   ImageVariant image = view.Image();

   Crop C( p_margins );
   C.SetMode( static_cast<Crop::crop_mode>( p_mode ) );
   C.SetResolution( p_resolution.x, p_resolution.y );
   C.SetMetricResolution( p_metric );
   C.SetFillValues( p_fillColor );

   // Dimensions of target image
   int w0 = image.Width();
   int h0 = image.Height();

   // Dimensions of transformed image
   int width = w0, height = h0;
   C.GetNewSizes( width, height );

   if ( width < 1 || height < 1 )
      throw Error( "Crop: Invalid operation: Null target image dimensions" );

   // On 32-bit systems, make sure the resulting image requires less than 4 GB.
   if ( sizeof( void* ) == sizeof( uint32 ) )
   {
      uint64 sz = uint64( width )*uint64( height )*image.NumberOfChannels()*image.BytesPerSample();
      if ( sz > uint64( uint32_max-256 ) )
         throw Error( "Crop: Invalid operation: Target image dimensions would exceed four gigabytes" );
   }

   DeleteAstrometryMetadataAndPreviewsAndMask( window );

   Console().EnableAbort();

   StandardStatus status;
   image.SetStatusCallback( &status );

   C >> image;

   if ( p_forceResolution )
   {
      Console().WriteLn( String().Format( "Setting resolution: h:%.3lf, v:%.3lf, u:px/%s",
                                          p_resolution.x, p_resolution.y, p_metric ? "cm" : "inch" ) );
      window.SetResolution( p_resolution.x, p_resolution.y, p_metric );
   }

   return true;
}
void AdaptiveStretchCurveGraphInterface::__Click( Button& sender, bool checked )
{
   if ( sender == GUI->Render_ToolButton )
   {
      ImageWindow window( m_width, m_height,
                          3,      // numberOfChannels
                          8,      // bitsPerSample
                          false,  // floating point
                          true ); // color

      if ( !m_gridBitmap.IsNull() )
      {
         View mainView = window.MainView();
         ImageVariant v = mainView.Image();
         static_cast<UInt8Image&>( *v ).Blend( m_gridBitmap );
         if ( !m_curveBitmap.IsNull() )
            static_cast<UInt8Image&>( *v ).Blend( m_curveBitmap );
      }

      window.BringToFront();
      window.Show();
      window.ZoomToFit( false/*allowMagnification*/ );
   }
   else if ( sender == GUI->Edit_ToolButton )
   {
      CurvesTransformationInstance curves( TheCurvesTransformationProcess );

      float ux = 1.0/(m_curve.Length() - 1);
      float m0 = 0;
      for ( int i = 0, j = 1; j < m_curve.Length(); ++j )
      {
         float dy = Abs( m_curve[j] - m_curve[i] );
         if ( dy > 0.01 )
         {
            float dx = ux*(j - i);
            float m = dy/dx;
            if ( Abs( m - m0 )/m > 0.05 )
            {
               m0 = m;
               i = j;
               curves[CurveIndex::RGBK].Add( ux*i, m_curve[i] );
            }
         }
         else if ( 1 + dy == 1 )
         {
            for ( ; ++j < m_curve.Length(); ++j )
            {
               dy = Abs( m_curve[j] - m_curve[i] );
               if ( 1 + dy > 1 )
               {
                  m0 = 0;
                  i = j-1;
                  curves[CurveIndex::RGBK].Add( ux*i, m_curve[i] );
                  break;
               }
            }
         }
      }

      curves.LaunchInterface();
   }
}
bool ConvertToGrayscaleInstance::CanExecuteOn( const View& view, pcl::String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "ConvertToGrayscale cannot be executed on complex images.";
      return false;
   }

   if ( view.Image().ColorSpace() == ColorSpace::Gray )
   {
      whyNot = "ConvertToGrayscale cannot be executed on grayscale images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool ChannelCombinationInstance::CanExecuteOn( const View& v, String& whyNot ) const
{
   if ( v.Image().IsComplexSample() )
   {
      whyNot = "ChannelCombination cannot be executed on complex images.";
      return false;
   }

   if ( v.Image()->ColorSpace() != ColorSpace::RGB )
   {
      whyNot = "ChannelCombination can only be executed on RGB color images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool AnnotationInstance::CanExecuteOn( const View& view, pcl::String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "Annotation cannot be executed on complex images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool BinarizeInstance::CanExecuteOn( const View& view, pcl::String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "Binarize cannot be executed on complex images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool PhotometricSuperflatInstance::CanExecuteOn( const View& view, String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "PhotometricSuperflat cannot be executed on complex images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool CurvesTransformationInstance::CanExecuteOn( const View& view, pcl::String& whyNot ) const
{
   if ( view.Image().IsComplexSample() )
   {
      whyNot = "CurvesTransformation cannot be executed on complex images.";
      return false;
   }

   whyNot.Clear();
   return true;
}
bool FluxCalibrationInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      throw Error( "FluxCalibration cannot be executed on complex images." );

   StandardStatus status;
   image->SetStatusCallback( &status );
   image->Status().Initialize( "Flux calibration", image->NumberOfPixels() );

   Console().EnableAbort();

   if ( image.IsFloatSample() )
      switch ( image.BitsPerSample() )
      {
      case 32: FluxCalibrationEngine::Apply( static_cast<Image&>( *image ), view, *this ); break;
      case 64: FluxCalibrationEngine::Apply( static_cast<DImage&>( *image ), view, *this ); break;
      }
   else
      switch ( image.BitsPerSample() )
      {
      case  8:
      case 16:
         {
            // 8/16-bit integer images are calibrated on a 32-bit floating
            // point working copy, then the result is copied back.
            ImageVariant tmp;
            tmp.CreateFloatImage( 32 );
            tmp.CopyImage( image );
            FluxCalibrationEngine::Apply( static_cast<Image&>( *tmp ), view, *this );
            image.CopyImage( tmp );
         }
         break;
      case 32:
         {
            // 32-bit integer images require a 64-bit floating point working copy.
            ImageVariant tmp;
            tmp.CreateFloatImage( 64 );
            tmp.CopyImage( image );
            FluxCalibrationEngine::Apply( static_cast<DImage&>( *tmp ), view, *this );
            image.CopyImage( tmp );
         }
         break;
      }

   return true;
}
bool RescaleInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      return false;

   Console().EnableAbort();

   StandardStatus status;
   image.SetStatusCallback( &status );

   switch ( mode )
   {
   default:
   case RescalingMode::RGBK:
      image->SelectNominalChannels();
      image.Rescale();
      break;

   case RescalingMode::RGBK_Individual:
      for ( int c = 0; c < image->NumberOfNominalChannels(); ++c )
      {
         image->SelectChannel( c );
         image.Rescale();
      }
      break;

   case RescalingMode::CIEL:
      {
         ImageVariant L;
         image.GetLightness( L );
         L.Rescale();
         image.SetLightness( L );
      }
      break;

   case RescalingMode::CIEY:
      {
         ImageVariant Y;
         image.GetLuminance( Y );
         Y.Rescale();
         image.SetLuminance( Y );
      }
      break;
   }

   return true;
}
bool BinarizeInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      return false;

   Console().EnableAbort();

   StandardStatus status;
   image.SetStatusCallback( &status );

   BinarizeEngine::Apply( image, *this );

   return true;
}
bool LarsonSekaninaInstance::ExecuteOn( View& view )
{
   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      return false;

   StandardStatus status;
   image.SetStatusCallback( &status );

   Console().EnableAbort();

   ImageVariant sharpImg;
   sharpImg.CreateFloatImage( (image.BitsPerSample() > 32) ? image.BitsPerSample() : 32 );
   sharpImg.AllocateImage( image->Width(), image->Height(), 1, ColorSpace::Gray );

   if ( useLuminance && image->IsColor() )
   {
      ImageVariant L;
      image.GetLightness( L );
      Convolve( L, sharpImg, interpolation, radiusDiff, angleDiff, center, 0 );
      ApplyFilter( L, sharpImg, amount, threshold, deringing, rangeLow, rangeHigh, false, 0, highPass );
      image.SetLightness( L );
   }
   else
   {
      for ( int c = 0, n = image->NumberOfNominalChannels(); c < n; ++c )
      {
         image->SelectChannel( c );
         if ( n > 1 )
            Console().WriteLn( "<end><cbr>Processing channel #" + String( c ) );
         Convolve( image, sharpImg, interpolation, radiusDiff, angleDiff, center, c );
         ApplyFilter( image, sharpImg, amount, threshold, deringing, rangeLow, rangeHigh, disableExtension, c, highPass );
      }
   }

   return true;
}
bool ChannelCombinationInstance::ExecuteOn( View& view )
{
   ImageWindow sourceWindow[ 3 ];
   ImageVariant sourceImage[ 3 ];

   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      throw Error( "ChannelCombination cannot be executed on complex images." );
   if ( image->ColorSpace() != ColorSpace::RGB )
      throw Error( "ChannelCombination requires a RGB color image." );

   Console().EnableAbort();

   StandardStatus status;
   image->SetStatusCallback( &status );

   String baseId;
   Rect r;
   int w0, h0;

   if ( view.IsPreview() )
   {
      ImageWindow w = view.Window();
      View mainView = w.MainView();
      baseId = mainView.Id();
      r = w.PreviewRect( view.Id() );
      mainView.GetSize( w0, h0 );
   }
   else
   {
      baseId = view.Id();
      r = image->Bounds();
      w0 = r.Width();
      h0 = r.Height();
   }

   int numberOfSources = 0;

   for ( int i = 0; i < 3; ++i )
      if ( channelEnabled[i] )
      {
         String id = channelId[i];
         if ( id.IsEmpty() )
            id = baseId + '_' + ColorSpaceId::ChannelId( colorSpace, i );

         sourceWindow[i] = ImageWindow::WindowById( id );
         if ( sourceWindow[i].IsNull() )
            throw Error( "ChannelCombination: Source image not found: " + id );

         sourceImage[i] = sourceWindow[i].MainView().Image();
         if ( !sourceImage[i] )
            throw Error( "ChannelCombination: Invalid source image: " + id );
         if ( sourceImage[i]->IsColor() )
            throw Error( "ChannelCombination: Invalid source color space: " + id );
         if ( sourceImage[i]->Width() != w0 || sourceImage[i]->Height() != h0 )
            throw Error( "ChannelCombination: Incompatible source image dimensions: " + id );

         ++numberOfSources;
      }

   if ( numberOfSources == 0 )
      return false;

   const char* what = "";
   switch ( colorSpace )
   {
   case ColorSpaceId::RGB:    what = "RGB channels"; break;
   case ColorSpaceId::CIEXYZ: what = "normalized CIE XYZ components"; break;
   case ColorSpaceId::CIELab: what = "normalized CIE L*a*b* components"; break;
   case ColorSpaceId::CIELch: what = "normalized CIE L*c*h* components"; break;
   case ColorSpaceId::HSV:    what = "normalized HSV components"; break;
   case ColorSpaceId::HSI:    what = "normalized HSI components"; break;
   }
   image->Status().Initialize( String( "Combining " ) + what, image->NumberOfPixels() );

   if ( image.IsFloatSample() )
      switch ( image.BitsPerSample() )
      {
      case 32: CombineChannels( static_cast<Image&>( *image ), colorSpace, baseId, r,
                                sourceImage[0], sourceImage[1], sourceImage[2] ); break;
      case 64: CombineChannels( static_cast<DImage&>( *image ), colorSpace, baseId, r,
                                sourceImage[0], sourceImage[1], sourceImage[2] ); break;
      }
   else
      switch ( image.BitsPerSample() )
      {
      case  8: CombineChannels( static_cast<UInt8Image&>( *image ), colorSpace, baseId, r,
                                sourceImage[0], sourceImage[1], sourceImage[2] ); break;
      case 16: CombineChannels( static_cast<UInt16Image&>( *image ), colorSpace, baseId, r,
                                sourceImage[0], sourceImage[1], sourceImage[2] ); break;
      case 32: CombineChannels( static_cast<UInt32Image&>( *image ), colorSpace, baseId, r,
                                sourceImage[0], sourceImage[1], sourceImage[2] ); break;
      }

   return true;
}
bool ChannelCombinationInstance::ExecuteGlobal()
{
   ImageWindow sourceWindow[ 3 ];
   ImageVariant sourceImage[ 3 ];

   int numberOfSources = 0;
   int width = 0, height = 0;
   bool floatSample = false;
   int bitsPerSample = 0;

   for ( int i = 0; i < 3; ++i )
      if ( channelEnabled[i] && !channelId[i].IsEmpty() )
      {
         sourceWindow[i] = ImageWindow::WindowById( channelId[i] );
         if ( sourceWindow[i].IsNull() )
            throw Error( "ChannelCombination: Source image not found: " + channelId[i] );

         sourceImage[i] = sourceWindow[i].MainView().Image();
         if ( !sourceImage[i] )
            throw Error( "ChannelCombination: Invalid source image: " + channelId[i] );
         if ( sourceImage[i]->IsColor() )
            throw Error( "ChannelCombination: Invalid source color space: " + channelId[i] );

         if ( sourceImage[i].IsFloatSample() )
            floatSample = true;
         if ( sourceImage[i].BitsPerSample() > bitsPerSample )
            bitsPerSample = sourceImage[i].BitsPerSample();

         if ( width == 0 )
         {
            width = sourceImage[i]->Width();
            height = sourceImage[i]->Height();
         }
         else
         {
            if ( sourceImage[i]->Width() != width || sourceImage[i]->Height() != height )
               throw Error( "ChannelCombination: Incompatible source image dimensions: " + channelId[i] );
         }

         ++numberOfSources;
      }

   if ( numberOfSources == 0 )
      throw Error( "ChannelCombination: No source image(s)." );

   ImageWindow w( width, height, 3, bitsPerSample, floatSample, true, true );
   if ( w.IsNull() )
      throw Error( "ChannelCombination: Unable to create target image." );

   View mainView = w.MainView();
   AutoViewLock lock( mainView );

   try
   {
      ImageVariant image = mainView.Image();

      Console().EnableAbort();

      StandardStatus status;
      image->SetStatusCallback( &status );

      const char* what = "";
      switch ( colorSpace )
      {
      case ColorSpaceId::RGB:    what = "RGB channels"; break;
      case ColorSpaceId::CIEXYZ: what = "normalized CIE XYZ components"; break;
      case ColorSpaceId::CIELab: what = "normalized CIE L*a*b* components"; break;
      case ColorSpaceId::CIELch: what = "normalized CIE L*c*h* components"; break;
      case ColorSpaceId::HSV:    what = "normalized HSV components"; break;
      case ColorSpaceId::HSI:    what = "normalized HSI components"; break;
      }
      image->Status().Initialize( String( "Combining " ) + what, image->NumberOfPixels() );

      String baseId = mainView.Id();
      Rect r = image->Bounds();

      if ( image.IsFloatSample() )
         switch ( image.BitsPerSample() )
         {
         case 32: CombineChannels( static_cast<Image&>( *image ), colorSpace, baseId, r,
                                   sourceImage[0], sourceImage[1], sourceImage[2] ); break;
         case 64: CombineChannels( static_cast<DImage&>( *image ), colorSpace, baseId, r,
                                   sourceImage[0], sourceImage[1], sourceImage[2] ); break;
         }
      else
         switch ( image.BitsPerSample() )
         {
         case  8: CombineChannels( static_cast<UInt8Image&>( *image ), colorSpace, baseId, r,
                                   sourceImage[0], sourceImage[1], sourceImage[2] ); break;
         case 16: CombineChannels( static_cast<UInt16Image&>( *image ), colorSpace, baseId, r,
                                   sourceImage[0], sourceImage[1], sourceImage[2] ); break;
         case 32: CombineChannels( static_cast<UInt32Image&>( *image ), colorSpace, baseId, r,
                                   sourceImage[0], sourceImage[1], sourceImage[2] ); break;
         }

      w.Show();

      return true;
   }
   catch ( ... )
   {
      w.Close();
      throw;
   }
}
bool RotationInstance::ExecuteOn( View& view )
{
   if ( !view.IsMainView() )
      return false; // should never reach this point!

   AutoViewLock lock( view );

   ImageVariant image = view.Image();
   if ( image.IsComplexSample() )
      return false;

   double degrees = Round( Deg( p_angle ), 4 );
   if ( degrees == 0 )
   {
      Console().WriteLn( "<end><cbr><* Identity *>" );
      return true;
   }

   ImageWindow window = view.Window();

   window.RemoveMaskReferences();
   window.RemoveMask();
   window.DeletePreviews();

   Console().EnableAbort();

   StandardStatus status;
   image.SetStatusCallback( &status );

   if ( p_optimizeFast )
      switch ( TruncI( degrees ) )
      {
      case 90:
         Rotate90CCW() >> image;
         return true;
      case -90:
         Rotate90CW() >> image;
         return true;
      case 180:
      case -180:
         Rotate180() >> image;
         return true;
      default:
         break;
      }

   AutoPointer<PixelInterpolation> interpolation( NewInterpolation(
         p_interpolation, 1, 1, 1, 1, true, p_clampingThreshold, p_smoothness, image ) );

   Rotation T( *interpolation, p_angle );

   /*
    * On 32-bit systems, make sure the resulting image requires less than 4 GB.
    */
   if ( sizeof( void* ) == sizeof( uint32 ) )
   {
      int width = image.Width(), height = image.Height();
      T.GetNewSizes( width, height );
      uint64 sz = uint64( width )*uint64( height )*image.NumberOfChannels()*image.BytesPerSample();
      if ( sz > uint64( uint32_max-256 ) )
         throw Error( "Rotation: Invalid operation: Target image dimensions would exceed four gigabytes" );
   }

   T.SetFillValues( p_fillColor );

   T >> image;

   return true;
}