Motion detection with OpenCV background subtraction

Many motion detection techniques are built on the simple idea of background subtraction: assuming the camera's exposure and the scene's lighting are stable, each new frame the camera captures can be subtracted from a reference image, and the absolute value of that difference gives a per-pixel measure of motion. If any region of the frame differs significantly from the reference image, we treat that region as containing a moving object.
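
As a minimal sketch of that idea (not the background-subtractor API used below), a naive version can be written with cv2.absdiff against a fixed reference frame. The file name, function name, and threshold value here are placeholders for illustration only:

import cv2

# assumed: a grayscale reference image of the empty scene
reference = cv2.imread('reference.jpg', cv2.IMREAD_GRAYSCALE)

def naive_motion_mask(frame_gray, reference, diff_threshold=30):
    # absolute per-pixel difference between the current frame and the reference
    diff = cv2.absdiff(frame_gray, reference)
    # pixels that changed by more than diff_threshold are marked as motion (255)
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return mask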

A commonly used subtractor is MOG2:

import cv2

fgbg = cv2.createBackgroundSubtractorMOG2(
    detectShadows=False,   # disable shadow detection for speed
    # history=100,         # number of frames used to model the background, default 500
    # varThreshold=32,     # smaller is more sensitive, default 16
)

KNN is another option:

fgbg = cv2.createBackgroundSubtractorKNN(
    detectShadows=False,    # disable shadow detection for speed
    # history=100,          # number of frames used to model the background, default 500
    # dist2Threshold=200,   # lower is more sensitive, default 400
)

Applying the subtraction is simple; you just call:

fgmask = fgbg.apply(frame)

Each call also updates the internal background model and returns a mask in which foreground pixels are white (255), shadow pixels are gray (127, only when shadow detection is enabled), and background pixels are black (0).
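
For context, a minimal capture loop around apply() might look like the sketch below; the camera index 0, the window name, and the q-to-quit handling are assumptions for illustration:

import cv2

cap = cv2.VideoCapture(0)   # assumed: default webcam
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = fgbg.apply(frame)   # update the model and get this frame's mask
    cv2.imshow('fgmask', fgmask)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # imshow needs waitKey to refresh; q quits
        break

cap.release()
cv2.destroyAllWindows()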

In my use case I only need to know whether there is any motion at all, so a simple threshold on the number of foreground pixels is enough:

count = cv2.countNonZero(fgmask)   # number of foreground pixels
if count > MOVE_THRESHOLD:
    pass   # motion detected
else:
    pass   # still
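
MOVE_THRESHOLD is whatever pixel count works for the scene; one sketch, assuming you want to flag motion when a fixed fraction of the frame is foreground (the 1% here is an arbitrary illustration, not a recommended value):

h, w = frame.shape[:2]
MOVE_THRESHOLD = int(0.01 * h * w)   # assumed: flag motion when >1% of pixels change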

If you need to draw boxes around the moving regions, you can take it further with contour detection:

# keep only solid foreground pixels (drops the gray 127 shadow pixels)
_, thresh = cv2.threshold(fgmask, 244, 255, cv2.THRESH_BINARY)

# clean up noise, then grow the remaining blobs; kernel sizes are example values
erode_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
dilate_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
cv2.erode(thresh, erode_kernel, thresh, iterations=2)
cv2.dilate(thresh, dilate_kernel, thresh, iterations=2)

contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)

# draw a bounding box around each sufficiently large contour
for c in contours:
    if cv2.contourArea(c) > 1000:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 255, 0), 2)

cv2.imshow('detection', frame)

 
