zl Programming Tutorials


Object Tracking Techniques

Tags: tracking, object, technique
Published: 2023-09-27 14:26:36
The following BoofCV example detects moving objects seen from a moving camera: a KLT point tracker estimates the 2D image motion (a homography) between frames, and a moving background model segments the pixels that do not follow that motion. The original listing was missing the `MediaManager`/`SimpleImageSequence` declarations, a closing brace, and the generic type parameters; those are restored below. Import paths match BoofCV 0.26-era releases and may differ slightly in newer versions.

```java
import boofcv.abst.feature.detect.interest.ConfigGeneralDetector;
import boofcv.abst.feature.tracker.PointTracker;
import boofcv.abst.sfm.d2.ImageMotion2D;
import boofcv.alg.background.BackgroundModelMoving;
import boofcv.alg.distort.PointTransformHomography_F32;
import boofcv.core.image.GConvertImage;
import boofcv.factory.background.ConfigBackgroundBasic;
import boofcv.factory.background.ConfigBackgroundGaussian;
import boofcv.factory.background.FactoryBackgroundModel;
import boofcv.factory.feature.tracker.FactoryPointTracker;
import boofcv.factory.sfm.FactoryMotion2D;
import boofcv.gui.binary.VisualizeBinaryData;
import boofcv.gui.image.ImageGridPanel;
import boofcv.gui.image.ShowImages;
import boofcv.io.MediaManager;
import boofcv.io.UtilIO;
import boofcv.io.image.SimpleImageSequence;
import boofcv.io.wrapper.DefaultMediaManager;
import boofcv.struct.image.*;
import georegression.struct.homography.*;

import java.awt.image.BufferedImage;

public class ExampleBackgroundRemovalMoving {
	public static void main(String[] args) {
		String fileName = UtilIO.pathExample("D:\\JavaProject\\Boofcv\\example\\tracking\\chipmunk.mjpeg");

		// Comment/Uncomment to switch the input image type
		ImageType imageType = ImageType.single(GrayF32.class);
//		ImageType imageType = ImageType.il(3, InterleavedF32.class);
//		ImageType imageType = ImageType.il(3, InterleavedU8.class);

		// Configure the corner detector used to seed the tracker
		ConfigGeneralDetector confDetector = new ConfigGeneralDetector();
		confDetector.threshold = 10;
		confDetector.maxFeatures = 300;
		confDetector.radius = 6;

		// KLT tracker
		PointTracker tracker = FactoryPointTracker.klt(new int[]{1, 2, 4, 8}, confDetector, 3,
				GrayF32.class, null);

		// This estimates the 2D image motion
		ImageMotion2D<GrayF32, Homography2D_F64> motion2D =
				FactoryMotion2D.createMotion2D(500, 0.5, 3, 100, 0.6, 0.5, false, tracker, new Homography2D_F64());

		ConfigBackgroundBasic configBasic = new ConfigBackgroundBasic(30, 0.005f);

		// Configuration for the Gaussian model. Note that the threshold changes depending
		// on the number of image bands: 12 = gray scale and 40 = color
		ConfigBackgroundGaussian configGaussian = new ConfigBackgroundGaussian(12, 0.001f);
		configGaussian.initialVariance = 64;
		configGaussian.minimumDifference = 5;

		// Comment/Uncomment to switch background model
		BackgroundModelMoving background =
				FactoryBackgroundModel.movingBasic(configBasic, new PointTransformHomography_F32(), imageType);
//				FactoryBackgroundModel.movingGaussian(configGaussian, new PointTransformHomography_F32(), imageType);

		// Comment/Uncomment to switch between a video file and a live camera
		MediaManager media = DefaultMediaManager.INSTANCE;
		SimpleImageSequence video =
				media.openVideo(fileName, background.getImageType());
//				media.openCamera(null, 640, 480, background.getImageType());

		// Storage for the segmented image. Background = 0, foreground = 1
		GrayU8 segmented = new GrayU8(video.getNextWidth(), video.getNextHeight());
		// Gray scale image that's the input for motion estimation
		GrayF32 grey = new GrayF32(segmented.width, segmented.height);

		// Coordinate frames
		Homography2D_F32 firstToCurrent32 = new Homography2D_F32();
		Homography2D_F32 homeToWorld = new Homography2D_F32();
		homeToWorld.a13 = grey.width/2;
		homeToWorld.a23 = grey.height/2;

		// Create a background image twice the size of the input image.
		// Tell it that the home frame is in the center
		background.initialize(grey.width*2, grey.height*2, homeToWorld);

		BufferedImage visualized = new BufferedImage(segmented.width, segmented.height, BufferedImage.TYPE_INT_RGB);
		ImageGridPanel gui = new ImageGridPanel(1, 2);
		gui.setImages(visualized, visualized);

		ShowImages.showWindow(gui, "Detections", true);

		double fps = 0;
		double alpha = 0.01; // smoothing factor for FPS

		while (video.hasNext()) {
			ImageBase input = video.next();

			long before = System.nanoTime();
			GConvertImage.convert(input, grey);

			if (!motion2D.process(grey)) {
				throw new RuntimeException("Should handle this scenario");
			}

			Homography2D_F64 firstToCurrent64 = motion2D.getFirstToCurrent();
			UtilHomography.convert(firstToCurrent64, firstToCurrent32);

			background.segment(firstToCurrent32, input, segmented);
			background.updateBackground(firstToCurrent32, input);
			long after = System.nanoTime();

			fps = (1.0 - alpha)*fps + alpha*(1.0/((after - before)/1e9));

			VisualizeBinaryData.renderBinary(segmented, false, visualized);
			gui.setImage(0, 0, (BufferedImage) video.getGuiImage());
			gui.setImage(0, 1, visualized);
			gui.repaint();

			System.out.println("FPS = " + fps);

			try { Thread.sleep(5); } catch (InterruptedException ignored) {}
		}
	}
}
```
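The `ConfigBackgroundBasic(30, 0.005f)` call above sets a per-pixel difference threshold (30) and a learning rate (0.005). Conceptually, the "basic" model keeps a running average of each pixel: a pixel is labeled foreground when it differs from the average by more than the threshold, and the average is then nudged toward the new value. The sketch below is a simplified illustration of that idea, not BoofCV's actual implementation:

```java
public class BasicBackgroundSketch {
    // Label one pixel (1 = foreground, 0 = background) and update the
    // running-average background model in place.
    static int segmentAndUpdate(float[] model, int i, float pixel,
                                float threshold, float learnRate) {
        int label = Math.abs(pixel - model[i]) > threshold ? 1 : 0;
        model[i] = (1f - learnRate)*model[i] + learnRate*pixel;
        return label;
    }

    public static void main(String[] args) {
        float[] model = {100f};                                      // current background estimate
        int a = segmentAndUpdate(model, 0, 150f, 30f, 0.005f);       // |150-100| > 30 -> foreground
        int b = segmentAndUpdate(model, 0, 105f, 30f, 0.005f);       // small change   -> background
        System.out.println(a + " " + b);                             // prints "1 0"
    }
}
```

The small learning rate is why a briefly stationary object does not immediately get absorbed into the background.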
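The `homeToWorld` homography in the listing is just an identity transform plus a translation (`a13`, `a23`) that places the first frame at the center of the double-sized background image. As a minimal sketch in plain Java (not the BoofCV/georegression API), applying a 3x3 homography to a pixel works like this, shown for a hypothetical 320x240 frame:

```java
public class HomographyDemo {
    // Apply a 3x3 homography H to point (x, y): p' = H * [x, y, 1]^T,
    // then divide by the homogeneous coordinate w.
    static double[] apply(double[][] H, double x, double y) {
        double xp = H[0][0]*x + H[0][1]*y + H[0][2];
        double yp = H[1][0]*x + H[1][1]*y + H[1][2];
        double w  = H[2][0]*x + H[2][1]*y + H[2][2];
        return new double[]{xp/w, yp/w};
    }

    public static void main(String[] args) {
        // Identity rotation/scale plus a translation of (width/2, height/2),
        // mirroring homeToWorld.a13/a23 for a 320x240 input.
        double[][] homeToWorld = {
                {1, 0, 160},
                {0, 1, 120},
                {0, 0, 1}
        };
        double[] p = apply(homeToWorld, 0, 0);
        System.out.println("(0,0) maps to (" + p[0] + ", " + p[1] + ")"); // (160.0, 120.0)
    }
}
```

In the example, `firstToCurrent32` plays the same role per frame: it tells the background model where each incoming pixel lands in the stabilized "world" image.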
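The frame-rate readout in the loop uses an exponentially weighted moving average with smoothing factor `alpha = 0.01`, so one unusually slow frame barely moves the reported FPS. A standalone sketch of that smoothing (with a larger `alpha` so the effect is visible in two updates):

```java
public class FpsSmoother {
    private final double alpha;
    private double fps = 0;

    FpsSmoother(double alpha) { this.alpha = alpha; }

    // Fold one frame's elapsed time (in seconds) into the smoothed estimate,
    // exactly as the loop above does: fps = (1-alpha)*fps + alpha*(1/elapsed).
    double update(double elapsedSeconds) {
        fps = (1.0 - alpha)*fps + alpha*(1.0/elapsedSeconds);
        return fps;
    }

    public static void main(String[] args) {
        FpsSmoother s = new FpsSmoother(0.5);
        System.out.println(s.update(0.1)); // halfway from 0 toward 10 -> 5.0
        System.out.println(s.update(0.1)); // halfway from 5 toward 10 -> 7.5
    }
}
```

With the example's `alpha = 0.01`, the estimate instead converges slowly, giving a stable number suitable for printing every frame.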

(Screenshots: the input video frame alongside the binary motion-segmentation output.)

