Measuring social distancing using the TensorFlow Object Detection API

Daniel Rojas Ugalde
3 min read · May 6, 2020

These days, keeping the right distance from other people is very important. Due to the COVID-19 situation, we’ve learned a lot about social distancing and how it can help us fight the virus.

Using TensorFlow, you can measure how far one person is from another. This will be a relative measure, given that pictures can be taken from different angles and perspectives.

This example is based on the Object Detection Google Colab notebook. You can try it out with a few tweaks. In the following paragraphs, I’ll explain the changes needed to run the example.

The final result will look something like the following picture. You’ll have some lines and a list of the distances. You can also play with a threshold so that only lines below that threshold are shown (when people are too close).

First of all, you need to add some imports to the import section. You’ll need these to draw the lines and to build the centroid pairs:

from PIL import Image, ImageDraw
import itertools
from itertools import compress
import math  # needed below for the Euclidean distance

We will use the show_inference method as a starting point. We’ll also have to create some methods. The main idea is to complete the following steps:

  • Get every detected object from Tensorflow.
  • Filter them by class (we only want people) and score (we only want objects with a confidence higher than 50%).
  • Calculate centroids for the boxes. This is a fancy word that means the center of a rectangle.
  • Calculate permutations between all the centroids.
  • Calculate the distance for each permutation.
  • Apply a threshold to the permutations based on the distance.
  • Draw the lines.
  • Show the picture.

With the following method, we can calculate the centroid of a bounding box (a fancy word for rectangle):

def calculate_centroid(bounding_box):
    # Boxes come from the model as [ymin, xmin, ymax, xmax] in normalized
    # coordinates; return the center point as (x, y).
    return (((bounding_box[3]-bounding_box[1])/2)+bounding_box[1],
            ((bounding_box[2]-bounding_box[0])/2)+bounding_box[0])
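As a quick sanity check (the box values below are made up for illustration), the centroid lands where you’d expect:

box = (0.2, 0.4, 0.6, 0.5)  # hypothetical [ymin, xmin, ymax, xmax]
print(calculate_centroid(box))  # (0.45, 0.4) -> (x_center, y_center)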

To calculate the permutations, I found a very neat trick on Stack Overflow. It finds every permutation while avoiding the inverse ones. For example, if you have two points (A, B), you only need a line from A to B or from B to A, not both:

def calculate_permutations(detection_centroids):
    permutations = []
    for current_permutation in itertools.permutations(detection_centroids, 2):
        # Skip (B, A) when (A, B) is already in the list.
        if current_permutation[::-1] not in permutations:
            permutations.append(current_permutation)
    return permutations
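With three made-up centroids, you get three pairs instead of six:

centroids = [(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)]
print(calculate_permutations(centroids))
# [((0.1, 0.1), (0.5, 0.5)), ((0.1, 0.1), (0.9, 0.9)), ((0.5, 0.5), (0.9, 0.9))]

Incidentally, itertools.combinations(detection_centroids, 2) would produce the same pairs in a single call.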

To calculate the distance, we’ll use the Euclidean distance. The formula looks very similar to the one for the hypotenuse of a triangle.

def calculate_centroid_distances(centroid1, centroid2):
    return math.sqrt((centroid2[0]-centroid1[0])**2 + (centroid2[1]-centroid1[1])**2)
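A classic 3-4-5 triangle makes an easy sanity check:

print(calculate_centroid_distances((0, 0), (3, 4)))  # 5.0
# math.hypot(3, 4) would compute the same value in one call.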

The following method calculates the distances for a group of permutations:

def calculate_all_distances(centroids):
    # Each item here is a pair of centroids from calculate_permutations.
    distances = []
    for centroid in centroids:
        distances.append(calculate_centroid_distances(centroid[0], centroid[1]))
    return distances
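Chaining it with the previous helpers (again with made-up points):

pairs = calculate_permutations([(0.1, 0.1), (0.4, 0.5), (0.9, 0.9)])
print(calculate_all_distances(pairs))  # [0.5, 1.131..., 0.640...]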

To draw the lines, we’ll need to normalize the centroids to pixel coordinates using the image width and height.

def normalize_centroids(centroids, im_width, im_height):
    # Scale normalized (0-1) coordinates up to pixel coordinates.
    newCentroids = []
    for centroid in centroids:
        newCentroids.append((centroid[0]*im_width, centroid[1]*im_height))
    return newCentroids
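For a hypothetical 640x480 image:

print(normalize_centroids([(0.45, 0.4), (0.5, 0.5)], 640, 480))
# [(288.0, 192.0), (320.0, 240.0)]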

Once we have everything in place, here is the full method:

def show_inference_calculating_distance(model, image_path):
    distance_threshold = 0.5
    person_class = 1
    score_threshold = 0.5
    # The array-based representation of the image will be used later in order
    # to prepare the result image with boxes and labels on it.
    image_np = np.array(Image.open(image_path))
    # Actual detection.
    output_dict = run_inference_for_single_image(model, image_np)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks_reframed', None),
        use_normalized_coordinates=True,
        line_thickness=8,
        min_score_thresh=score_threshold)
    # Keep only persons with a score higher than 50%.
    boolPersons = output_dict['detection_classes'] == person_class
    boolScores = output_dict['detection_scores'] > score_threshold
    boolCombined = np.logical_and(boolPersons, boolScores)
    output_dict['detection_scores'] = output_dict['detection_scores'][boolCombined]
    output_dict['detection_classes'] = output_dict['detection_classes'][boolCombined]
    output_dict['detection_boxes'] = output_dict['detection_boxes'][boolCombined]
    output_dict['detection_centroids'] = [calculate_centroid(x) for x in output_dict['detection_boxes']]
    # Get image width and height for the later centroid normalization.
    im = Image.fromarray(image_np)
    im_width, im_height = im.size
    # Calculate permutations and distances.
    output_dict['detection_permutations'] = calculate_permutations(output_dict['detection_centroids'])
    output_dict['detection_centroids_distances'] = calculate_all_distances(output_dict['detection_permutations'])
    # Filter permutations based on the distance threshold.
    boolDistances = np.array(output_dict['detection_centroids_distances']) < distance_threshold
    output_dict['detection_centroids'] = normalize_centroids(output_dict['detection_centroids'], im_width, im_height)
    output_dict['detection_permutations'] = calculate_permutations(output_dict['detection_centroids'])
    output_dict['detection_permutations'] = list(compress(output_dict['detection_permutations'], boolDistances))
    # Draw a line between each pair of centroids that passed the threshold.
    draw = ImageDraw.Draw(im)
    for centroid in output_dict['detection_permutations']:
        draw.line((centroid[0], centroid[1]), fill=255, width=3)
    print(output_dict['detection_centroids_distances'])
    display(im)
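To try it out, call it the same way the notebook calls show_inference (this assumes detection_model and TEST_IMAGE_PATHS are already defined in earlier notebook cells):

# detection_model and TEST_IMAGE_PATHS come from earlier notebook cells.
for image_path in TEST_IMAGE_PATHS:
    show_inference_calculating_distance(detection_model, image_path)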

You can also find the whole code in this repository.
