Lava Palaver using Camera

23rd October 2019, by Neil Stevenson

Part of the design philosophy of Dangle is “no unnecessary sensors”. Since we already have a camera in the design for other challenges, why not use the same camera for Lava Palaver? How hard can this be?

The approach we're trying out is essentially a classic "white line follower", but using OpenCV for the image recognition and for determining where to move next. This saves us the work of creating specialised sensors for this challenge.

Getting the image

To start we need to install the OpenCV and PiCamera Python 3 packages. As installation is covered elsewhere, I'll pick up the story from how we've coded the image acquisition.

To get the image as a continuous stream, we can start by doing:

import time
import numpy as np
import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (160, 120)
camera.framerate = 30
rawCapture = PiRGBArray(camera, size=camera.resolution)
# Loop capturing images ready for processing
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
	# grab the next frame as a numpy array
	original = frame.array
	# Next, process the image

So by this point, we get an image into the original variable. If we run this on the Lava Palaver track, we get an image such as:

Raw image from camera

Once we have this, we can use OpenCV to process the image. The approach we’re taking here is to simply find the brightest areas, which should be the white line down the centre of the track.

We first convert to greyscale and apply a blur to remove noise and minor reflections. Once we've done this, we slice the picture into a number of horizontal strips and find the brightest point in each slice.

	# Convert to greyscale and apply a Gaussian blur to the image in order 
	# to make more robust against noise and reflections
	gray = cv2.cvtColor(original.copy(), cv2.COLOR_BGR2GRAY)
	# ensure blur radius is odd and slightly bigger than the white line
	radius = 31
	gray = cv2.GaussianBlur(gray, (radius, radius), 0)

	# Chop the image into a number of horizontal slices
	numSlices = 20
	slices = np.array_split(gray, numSlices)

	offset = 0
	points = []
	assessment = original.copy()
	# For each slice, determine the brightest point
	for bit in range(len(slices)):
		(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(slices[bit])
		point = (maxLoc[0], maxLoc[1]+offset)
		points.append(point)
		offset += len(slices[bit])

By this point, we have a list of points representing the line we've detected. If we plot these over the original image, we get quite a nice correlation.

Detected track (red)

We can then fit a "best line" to these points to get the angle we need to steer to get back on track. Using the Huber distance (cv2.DIST_HUBER) for the fit should make it robust against a small number of rogue points.

	# Fit a straight line to the brightest point on each slice
	vx, vy, x0, y0 = cv2.fitLine(np.array(points, dtype=np.int32), cv2.DIST_HUBER, 0, 0.1, 0.1)
	# ensure the arrow is always pointing forwards (up the image)
	if vy > 0:
		vy = -vy
		vx = -vx
	# Draw an arrow representing the brightest points
	vx *= camera.resolution[0]//3
	vy *= camera.resolution[0]//3
	# OpenCV wants integer pixel coordinates for drawing
	cv2.arrowedLine(assessment, (int(x0-vx), int(y0-vy)), (int(x0+vx), int(y0+vy)), (0, 255, 0), 2)
	# Calculate an angle from where we are to the approximate mid point
	currentPosition = (camera.resolution[0]//2, camera.resolution[1])
	desiredPosition = (int(x0), int(y0))
	angle = np.degrees(np.arctan2(currentPosition[0]-desiredPosition[0], currentPosition[1]-desiredPosition[1]))
	cv2.arrowedLine(assessment, currentPosition, desiredPosition, (255, 255, 0), 2)
	# display the results
	cv2.putText(assessment, f"{float(angle):.1f} degrees", (5, camera.resolution[1]-4), cv2.FONT_HERSHEY_DUPLEX, 0.4, (0, 255, 0))
	cv2.imshow("assessed direction", assessment)
	key = cv2.waitKey(1) & 0xFF
	# Clear the stream ready for the next frame, as capture_continuous
	# requires with PiRGBArray
	rawCapture.truncate(0)

This gives the fitted line as the green arrow and the estimated course correction to get back onto the line as the cyan arrow:

Resulting course assessment (cyan)

All we need to do now is integrate it into the Dangle motor controller code!
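That integration is beyond this post, but as a rough illustration the angle could feed a proportional differential-drive mix along these lines; the function, gain and speed range here are hypothetical, not Dangle's actual motor API:

```python
def steer_from_angle(angle, base_speed=0.6, gain=0.02):
    """Hypothetical proportional steering: map the course-correction
    angle (degrees, positive = line is to our left) onto left/right
    motor speeds clamped to the range -1.0..1.0."""
    correction = gain * angle
    left = max(-1.0, min(1.0, base_speed - correction))
    right = max(-1.0, min(1.0, base_speed + correction))
    return left, right

# Line dead ahead: both motors run at the base speed
print(steer_from_angle(0.0))  # → (0.6, 0.6)
# Line off to our left: slow the left motor, speed up the right
print(steer_from_angle(10.0))
```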