code/web/package-lock.json
code/web/node_modules/**
/code/venv
/code/deep_sort_yolov4/model_data/yolov4.weights
/code/deep_sort_yolov4/model_data/yolo4_weight.h5
# YOLOv4 + Deep_SORT
<img src="https://github.com/yehengchen/Object-Detection-and-Tracking/blob/master/OneStage/yolo/deep_sort_yolov4/output/comparison.png" width="81%" height="81%"> <img src="https://github.com/yehengchen/video_demo/blob/master/video_demo/output.gif" width="40%" height="40%"> <img src="https://github.com/yehengchen/video_demo/blob/master/video_demo/TownCentreXVID_output.gif" width="40%" height="40%">
__Object Tracking & Counting Demo - [[BiliBili]](https://www.bilibili.com/video/BV1Ug4y1i71w#reply3014975828) [[Chinese Version]](https://blog.csdn.net/weixin_38107271/article/details/96741706)__
## Requirements
__Development Environment: [Deep-Learning-Environment-Setup](https://github.com/yehengchen/Ubuntu-16.04-Deep-Learning-Environment-Setup)__
* OpenCV
* scikit-learn
* pillow
* numpy 1.15.0
* torch 1.3.0
* tensorflow-gpu 1.13.1
* CUDA 10.0
***
It uses:
* __Detection__: [YOLOv4](https://github.com/yehengchen/Object-Detection-and-Tracking/tree/master/OneStage/yolo/Train-a-YOLOv4-model) to detect objects in each video frame - the linked guide shows how to train a YOLOv4 model on your own data
* __Tracking__: [Deep_SORT](https://github.com/nwojke/deep_sort) to track those objects over different frames.
*This repository contains code for Simple Online and Realtime Tracking with a Deep Association Metric (Deep SORT). We extend the original SORT algorithm to integrate appearance information based on a deep appearance descriptor. See the [arXiv preprint](https://arxiv.org/abs/1703.07402) for more information.*
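Putting the two together, each frame runs detection, appearance-feature extraction, and a track update. A minimal per-frame sketch using the `Detection` and `Tracker` classes from this repository (the detector and encoder calls are illustrative placeholders, not the exact API of main.py):
```
boxes, confidences = detect(frame)            # YOLOv4 boxes in (x, y, w, h)
features = encoder(frame, boxes)              # deep appearance descriptors
detections = [Detection(bbox, conf, feat)
              for bbox, conf, feat in zip(boxes, confidences, features)]
tracker.predict()                             # Kalman prediction for every track
tracker.update(detections)                    # association + track management
```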
## Quick Start
__0. Install the requirements:__
```
$ pip install -r requirements.txt
```
__1. Download the code to your computer:__
```
$ git clone https://github.com/yehengchen/Object-Detection-and-Tracking.git
```
__2. Download [[yolov4.weights]](https://drive.google.com/file/d/1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT/view) [[Baidu]](https://pan.baidu.com/s/1jRudrrXAS3DRGqT6mL4L3A ) - `mnv6`__ and place it in `deep_sort_yolov4/model_data/`
*Here you can download my trained [[yolo4_weight.h5]](https://pan.baidu.com/s/1JuT4KCUFaE2Gvme0_S37DQ ) - `w17w` weights for detecting person/car/bicycle, etc.*
__3. Convert the Darknet YOLO model to a Keras model:__
```
$ python convert.py model_data/yolov4.cfg model_data/yolov4.weights model_data/yolo.h5
```
__4. Run the YOLO_DEEP_SORT:__
```
$ python main.py -c [CLASS NAME] -i [INPUT VIDEO PATH]
$ python main.py -c person -i ./test_video/testvideo.avi
```
__5. To track a different object class, change `Line 100` in [deep_sort_yolov4/yolo.py]:__
*The Deep SORT pre-trained re-identification weights were trained on person-ReID datasets, so appearance matching is reliable only for the `person` class.*
```
# keep only the class passed on the command line via -c:
if predicted_class != args["class"]:
    continue
# or hard-code the classes to keep instead:
if predicted_class != 'person' and predicted_class != 'car':
    continue
```
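To keep several classes at once, a minimal variant (same `predicted_class` variable; class names must match `model_data/coco_classes.txt`) is:
```
if predicted_class not in {'person', 'car', 'bicycle'}:
    continue
```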
## Train on Market1501 & MARS
*People Re-identification model*
[cosine_metric_learning](https://github.com/nwojke/cosine_metric_learning) for training a metric feature representation to be used with the deep_sort tracker.
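Inside this project the learned embedding is consumed through the `NearestNeighborDistanceMetric` class included below; a minimal hookup (the threshold and budget values are illustrative, not tuned) looks like:
```
metric = NearestNeighborDistanceMetric("cosine", matching_threshold=0.3, budget=100)
tracker = Tracker(metric)
```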
## Citation
### YOLOv4
```
@misc{bochkovskiy2020yolov4,
  title={YOLOv4: Optimal Speed and Accuracy of Object Detection},
  author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
  year={2020},
  eprint={2004.10934},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
### Deep_SORT
```
@inproceedings{Wojke2017simple,
  title={Simple Online and Realtime Tracking with a Deep Association Metric},
  author={Wojke, Nicolai and Bewley, Alex and Paulus, Dietrich},
  booktitle={2017 IEEE International Conference on Image Processing (ICIP)},
  year={2017},
  pages={3645--3649},
  organization={IEEE},
  doi={10.1109/ICIP.2017.8296962}
}

@inproceedings{Wojke2018deep,
  title={Deep Cosine Metric Learning for Person Re-identification},
  author={Wojke, Nicolai and Bewley, Alex},
  booktitle={2018 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2018},
  pages={748--756},
  organization={IEEE},
  doi={10.1109/WACV.2018.00087}
}
```
## Reference
#### Github:deep_sort@[Nicolai Wojke nwojke](https://github.com/nwojke/deep_sort)
#### Github:deep_sort_yolov3@[Qidian213 ](https://github.com/Qidian213/deep_sort_yolov3)
#### Github:Deep-SORT-YOLOv4@[LeonLok](https://github.com/LeonLok/Deep-SORT-YOLOv4)
import os
import colorsys
import numpy as np
from keras import backend as K
from keras.models import load_model
from keras.layers import Input
from yolo4.model import yolo_eval, yolo4_body
from yolo4.utils import letterbox_image
from PIL import Image, ImageFont, ImageDraw
from timeit import default_timer as timer
import matplotlib.pyplot as plt
from operator import itemgetter
from keras.utils import multi_gpu_model  # used below when gpu_num >= 2
class Yolo4(object):
def get_class(self):
classes_path = os.path.expanduser(self.classes_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def get_anchors(self):
anchors_path = os.path.expanduser(self.anchors_path)
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
return np.array(anchors).reshape(-1, 2)
def load_yolo(self):
model_path = os.path.expanduser(self.model_path)
assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
self.class_names = self.get_class()
self.anchors = self.get_anchors()
num_anchors = len(self.anchors)
num_classes = len(self.class_names)
# Generate colors for drawing bounding boxes.
hsv_tuples = [(x / len(self.class_names), 1., 1.)
for x in range(len(self.class_names))]
self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
self.colors = list(
map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
self.colors))
self.sess = K.get_session()
# Load model, or construct model and load weights.
self.yolo4_model = yolo4_body(Input(shape=(608, 608, 3)), num_anchors//3, num_classes)
# Read and convert darknet weight
print('Loading weights.')
weights_file = open(self.weights_path, 'rb')
major, minor, revision = np.ndarray(
shape=(3, ), dtype='int32', buffer=weights_file.read(12))
if (major*10+minor)>=2 and major<1000 and minor<1000:
seen = np.ndarray(shape=(1,), dtype='int64', buffer=weights_file.read(8))
else:
seen = np.ndarray(shape=(1,), dtype='int32', buffer=weights_file.read(4))
print('Weights Header: ', major, minor, revision, seen)
convs_to_load = []
bns_to_load = []
for i in range(len(self.yolo4_model.layers)):
layer_name = self.yolo4_model.layers[i].name
if layer_name.startswith('conv2d_'):
convs_to_load.append((int(layer_name[7:]), i))
if layer_name.startswith('batch_normalization_'):
bns_to_load.append((int(layer_name[20:]), i))
convs_sorted = sorted(convs_to_load, key=itemgetter(0))
bns_sorted = sorted(bns_to_load, key=itemgetter(0))
bn_index = 0
for i in range(len(convs_sorted)):
print('Converting ', i)
if i == 93 or i == 101 or i == 109:
#no bn, with bias
weights_shape = self.yolo4_model.layers[convs_sorted[i][1]].get_weights()[0].shape
bias_shape = self.yolo4_model.layers[convs_sorted[i][1]].get_weights()[0].shape[3]
filters = bias_shape
size = weights_shape[0]
darknet_w_shape = (filters, weights_shape[2], size, size)
weights_size = np.product(weights_shape)
conv_bias = np.ndarray(
shape=(filters, ),
dtype='float32',
buffer=weights_file.read(filters * 4))
conv_weights = np.ndarray(
shape=darknet_w_shape,
dtype='float32',
buffer=weights_file.read(weights_size * 4))
conv_weights = np.transpose(conv_weights, [2, 3, 1, 0])
self.yolo4_model.layers[convs_sorted[i][1]].set_weights([conv_weights, conv_bias])
else:
#with bn, no bias
weights_shape = self.yolo4_model.layers[convs_sorted[i][1]].get_weights()[0].shape
size = weights_shape[0]
bn_shape = self.yolo4_model.layers[bns_sorted[bn_index][1]].get_weights()[0].shape
filters = bn_shape[0]
darknet_w_shape = (filters, weights_shape[2], size, size)
weights_size = np.product(weights_shape)
conv_bias = np.ndarray(
shape=(filters, ),
dtype='float32',
buffer=weights_file.read(filters * 4))
bn_weights = np.ndarray(
shape=(3, filters),
dtype='float32',
buffer=weights_file.read(filters * 12))
bn_weight_list = [
bn_weights[0], # scale gamma
conv_bias, # shift beta
bn_weights[1], # running mean
bn_weights[2] # running var
]
self.yolo4_model.layers[bns_sorted[bn_index][1]].set_weights(bn_weight_list)
conv_weights = np.ndarray(
shape=darknet_w_shape,
dtype='float32',
buffer=weights_file.read(weights_size * 4))
conv_weights = np.transpose(conv_weights, [2, 3, 1, 0])
self.yolo4_model.layers[convs_sorted[i][1]].set_weights([conv_weights])
bn_index += 1
weights_file.close()
self.yolo4_model.save(self.model_path)
if self.gpu_num>=2:
self.yolo4_model = multi_gpu_model(self.yolo4_model, gpus=self.gpu_num)
self.input_image_shape = K.placeholder(shape=(2, ))
self.boxes, self.scores, self.classes = yolo_eval(self.yolo4_model.output, self.anchors,
len(self.class_names), self.input_image_shape,
score_threshold=self.score)
def __init__(self, score, iou, anchors_path, classes_path, model_path, weights_path, gpu_num=1):
self.score = score
self.iou = iou
self.anchors_path = anchors_path
self.classes_path = classes_path
self.weights_path = weights_path
self.model_path = model_path
self.gpu_num = gpu_num
self.load_yolo()
def close_session(self):
self.sess.close()
if __name__ == '__main__':
model_path = 'model_data/yolo4_weight.h5'
anchors_path = 'model_data/yolo_anchors.txt'
classes_path = 'model_data/coco_classes.txt'
weights_path = 'model_data/yolov4.weights'
score = 0.5
iou = 0.5
model_image_size = (608, 608)
yolo4_model = Yolo4(score, iou, anchors_path, classes_path, model_path, weights_path)
yolo4_model.close_session()
# vim: expandtab:ts=4:sw=4
import numpy as np
class Detection(object):
"""
This class represents a bounding box detection in a single image.
Parameters
----------
tlwh : array_like
Bounding box in format `(x, y, w, h)`.
confidence : float
Detector confidence score.
feature : array_like
A feature vector that describes the object contained in this image.
Attributes
----------
tlwh : ndarray
Bounding box in format `(top left x, top left y, width, height)`.
confidence : ndarray
Detector confidence score.
feature : ndarray | NoneType
A feature vector that describes the object contained in this image.
"""
def __init__(self, tlwh, confidence, feature):
self.tlwh = np.asarray(tlwh, dtype=np.float)
self.confidence = float(confidence)
self.feature = np.asarray(feature, dtype=np.float32)
def to_tlbr(self):
"""Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,
`(top left, bottom right)`.
"""
ret = self.tlwh.copy()
ret[2:] += ret[:2]
return ret
def to_xyah(self):
"""Convert bounding box to format `(center x, center y, aspect ratio,
height)`, where the aspect ratio is `width / height`.
"""
ret = self.tlwh.copy()
ret[:2] += ret[2:] / 2
ret[2] /= ret[3]
return ret
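# Worked example (illustrative values) for the two conversions above:
#   tlwh (10, 20, 30, 60) -> to_tlbr() gives (10, 20, 40, 80)
#   tlwh (10, 20, 30, 60) -> to_xyah() gives (25, 50, 0.5, 60)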
# vim: expandtab:ts=4:sw=4
import numpy as np
class Detection_YOLO(object):
"""
This class represents a bounding box detection in a single image.
Parameters
----------
tlwh : array_like
Bounding box in format `(x, y, w, h)`.
confidence : float
Detector confidence score.
feature : array_like
A feature vector that describes the object contained in this image.
    Attributes
----------
tlwh : ndarray
Bounding box in format `(top left x, top left y, width, height)`.
confidence : ndarray
Detector confidence score.
feature : ndarray | NoneType
A feature vector that describes the object contained in this image.
"""
def __init__(self, tlwh, confidence, cls):
self.tlwh = np.asarray(tlwh, dtype=np.float)
self.confidence = float(confidence)
self.cls = cls
def to_tlbr(self):
"""Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,
`(top left, bottom right)`.
"""
ret = self.tlwh.copy()
ret[2:] += ret[:2]
return ret
def to_xyah(self):
"""Convert bounding box to format `(center x, center y, aspect ratio,
height)`, where the aspect ratio is `width / height`.
"""
ret = self.tlwh.copy()
ret[:2] += ret[2:] / 2
ret[2] /= ret[3]
return ret
# vim: expandtab:ts=4:sw=4
from __future__ import absolute_import
import numpy as np
from . import linear_assignment
def iou(bbox, candidates):
"""Computer intersection over union.
Parameters
----------
bbox : ndarray
A bounding box in format `(top left x, top left y, width, height)`.
candidates : ndarray
A matrix of candidate bounding boxes (one per row) in the same format
as `bbox`.
Returns
-------
ndarray
The intersection over union in [0, 1] between the `bbox` and each
candidate. A higher score means a larger fraction of the `bbox` is
occluded by the candidate.
"""
bbox_tl, bbox_br = bbox[:2], bbox[:2] + bbox[2:]
candidates_tl = candidates[:, :2]
candidates_br = candidates[:, :2] + candidates[:, 2:]
tl = np.c_[np.maximum(bbox_tl[0], candidates_tl[:, 0])[:, np.newaxis],
np.maximum(bbox_tl[1], candidates_tl[:, 1])[:, np.newaxis]]
br = np.c_[np.minimum(bbox_br[0], candidates_br[:, 0])[:, np.newaxis],
np.minimum(bbox_br[1], candidates_br[:, 1])[:, np.newaxis]]
wh = np.maximum(0., br - tl)
area_intersection = wh.prod(axis=1)
area_bbox = bbox[2:].prod()
area_candidates = candidates[:, 2:].prod(axis=1)
return area_intersection / (area_bbox + area_candidates - area_intersection)
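# Worked example (illustrative): iou(np.array([0., 0., 10., 10.]),
# np.array([[5., 5., 10., 10.]])) yields 25 / 175 ~= 0.143.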
def iou_cost(tracks, detections, track_indices=None,
detection_indices=None):
"""An intersection over union distance metric.
Parameters
----------
tracks : List[deep_sort.track.Track]
A list of tracks.
detections : List[deep_sort.detection.Detection]
A list of detections.
track_indices : Optional[List[int]]
A list of indices to tracks that should be matched. Defaults to
all `tracks`.
detection_indices : Optional[List[int]]
A list of indices to detections that should be matched. Defaults
to all `detections`.
Returns
-------
ndarray
Returns a cost matrix of shape
len(track_indices), len(detection_indices) where entry (i, j) is
`1 - iou(tracks[track_indices[i]], detections[detection_indices[j]])`.
"""
if track_indices is None:
track_indices = np.arange(len(tracks))
if detection_indices is None:
detection_indices = np.arange(len(detections))
cost_matrix = np.zeros((len(track_indices), len(detection_indices)))
for row, track_idx in enumerate(track_indices):
if tracks[track_idx].time_since_update > 1:
cost_matrix[row, :] = linear_assignment.INFTY_COST
continue
bbox = tracks[track_idx].to_tlwh()
candidates = np.asarray([detections[i].tlwh for i in detection_indices])
cost_matrix[row, :] = 1. - iou(bbox, candidates)
return cost_matrix
# vim: expandtab:ts=4:sw=4
import numpy as np
import scipy.linalg
"""
Table for the 0.95 quantile of the chi-square distribution with N degrees of
freedom (contains values for N=1, ..., 9). Taken from MATLAB/Octave's chi2inv
function and used as Mahalanobis gating threshold.
"""
chi2inv95 = {
1: 3.8415,
2: 5.9915,
3: 7.8147,
4: 9.4877,
5: 11.070,
6: 12.592,
7: 14.067,
8: 15.507,
9: 16.919}
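# Example: chi2inv95[4] (= 9.4877) is the gating threshold when comparing against
# full (x, y, a, h) measurements; chi2inv95[2] applies when gating on position only.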
class KalmanFilter(object):
"""
A simple Kalman filter for tracking bounding boxes in image space.
The 8-dimensional state space
x, y, a, h, vx, vy, va, vh
contains the bounding box center position (x, y), aspect ratio a, height h,
and their respective velocities.
Object motion follows a constant velocity model. The bounding box location
(x, y, a, h) is taken as direct observation of the state space (linear
observation model).
"""
def __init__(self):
ndim, dt = 4, 1.
# Create Kalman filter model matrices.
self._motion_mat = np.eye(2 * ndim, 2 * ndim)
for i in range(ndim):
self._motion_mat[i, ndim + i] = dt
self._update_mat = np.eye(ndim, 2 * ndim)
# Motion and observation uncertainty are chosen relative to the current
# state estimate. These weights control the amount of uncertainty in
# the model. This is a bit hacky.
self._std_weight_position = 1. / 20
self._std_weight_velocity = 1. / 160
def initiate(self, measurement):
"""Create track from unassociated measurement.
Parameters
----------
measurement : ndarray
Bounding box coordinates (x, y, a, h) with center position (x, y),
aspect ratio a, and height h.
Returns
-------
(ndarray, ndarray)
Returns the mean vector (8 dimensional) and covariance matrix (8x8
dimensional) of the new track. Unobserved velocities are initialized
to 0 mean.
"""
mean_pos = measurement
mean_vel = np.zeros_like(mean_pos)
mean = np.r_[mean_pos, mean_vel]
std = [
2 * self._std_weight_position * measurement[3],
2 * self._std_weight_position * measurement[3],
1e-2,
2 * self._std_weight_position * measurement[3],
10 * self._std_weight_velocity * measurement[3],
10 * self._std_weight_velocity * measurement[3],
1e-5,
10 * self._std_weight_velocity * measurement[3]]
covariance = np.diag(np.square(std))
return mean, covariance
def predict(self, mean, covariance):
"""Run Kalman filter prediction step.
Parameters
----------
mean : ndarray
The 8 dimensional mean vector of the object state at the previous
time step.
covariance : ndarray
The 8x8 dimensional covariance matrix of the object state at the
previous time step.
Returns
-------
(ndarray, ndarray)
Returns the mean vector and covariance matrix of the predicted
state. Unobserved velocities are initialized to 0 mean.
"""
std_pos = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-2,
self._std_weight_position * mean[3]]
std_vel = [
self._std_weight_velocity * mean[3],
self._std_weight_velocity * mean[3],
1e-5,
self._std_weight_velocity * mean[3]]
motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))
mean = np.dot(self._motion_mat, mean)
covariance = np.linalg.multi_dot((
self._motion_mat, covariance, self._motion_mat.T)) + motion_cov
return mean, covariance
def project(self, mean, covariance):
"""Project state distribution to measurement space.
Parameters
----------
mean : ndarray
The state's mean vector (8 dimensional array).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
Returns
-------
(ndarray, ndarray)
Returns the projected mean and covariance matrix of the given state
estimate.
"""
std = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-1,
self._std_weight_position * mean[3]]
innovation_cov = np.diag(np.square(std))
mean = np.dot(self._update_mat, mean)
covariance = np.linalg.multi_dot((
self._update_mat, covariance, self._update_mat.T))
return mean, covariance + innovation_cov
def update(self, mean, covariance, measurement):
"""Run Kalman filter correction step.
Parameters
----------
mean : ndarray
The predicted state's mean vector (8 dimensional).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
measurement : ndarray
The 4 dimensional measurement vector (x, y, a, h), where (x, y)
is the center position, a the aspect ratio, and h the height of the
bounding box.
Returns
-------
(ndarray, ndarray)
Returns the measurement-corrected state distribution.
"""
projected_mean, projected_cov = self.project(mean, covariance)
chol_factor, lower = scipy.linalg.cho_factor(
projected_cov, lower=True, check_finite=False)
kalman_gain = scipy.linalg.cho_solve(
(chol_factor, lower), np.dot(covariance, self._update_mat.T).T,
check_finite=False).T
innovation = measurement - projected_mean
new_mean = mean + np.dot(innovation, kalman_gain.T)
new_covariance = covariance - np.linalg.multi_dot((
kalman_gain, projected_cov, kalman_gain.T))
return new_mean, new_covariance
def gating_distance(self, mean, covariance, measurements,
only_position=False):
"""Compute gating distance between state distribution and measurements.
A suitable distance threshold can be obtained from `chi2inv95`. If
`only_position` is False, the chi-square distribution has 4 degrees of
freedom, otherwise 2.
Parameters
----------
mean : ndarray
Mean vector over the state distribution (8 dimensional).
covariance : ndarray
Covariance of the state distribution (8x8 dimensional).
measurements : ndarray
An Nx4 dimensional matrix of N measurements, each in
format (x, y, a, h) where (x, y) is the bounding box center
position, a the aspect ratio, and h the height.
only_position : Optional[bool]
If True, distance computation is done with respect to the bounding
box center position only.
Returns
-------
ndarray
Returns an array of length N, where the i-th element contains the
squared Mahalanobis distance between (mean, covariance) and
`measurements[i]`.
"""
mean, covariance = self.project(mean, covariance)
if only_position:
mean, covariance = mean[:2], covariance[:2, :2]
measurements = measurements[:, :2]
cholesky_factor = np.linalg.cholesky(covariance)
d = measurements - mean
z = scipy.linalg.solve_triangular(
cholesky_factor, d.T, lower=True, check_finite=False,
overwrite_b=True)
squared_maha = np.sum(z * z, axis=0)
return squared_maha
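# Minimal usage sketch (values are illustrative):
#   kf = KalmanFilter()
#   mean, cov = kf.initiate(np.array([320., 240., 0.5, 100.]))  # new track from (x, y, a, h)
#   mean, cov = kf.predict(mean, cov)                           # advance one time step
#   mean, cov = kf.update(mean, cov, np.array([322., 241., 0.5, 102.]))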
# vim: expandtab:ts=4:sw=4
from __future__ import absolute_import
import numpy as np
# sklearn.utils.linear_assignment_ was removed in scikit-learn >= 0.23, which
# requirements.txt pins; the SciPy equivalent provides the same assignment.
from scipy.optimize import linear_sum_assignment
from . import kalman_filter
INFTY_COST = 1e+5
def min_cost_matching(
distance_metric, max_distance, tracks, detections, track_indices=None,
detection_indices=None):
"""Solve linear assignment problem.
Parameters
----------
distance_metric : Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
The distance metric is given a list of tracks and detections as well as
a list of N track indices and M detection indices. The metric should
return the NxM dimensional cost matrix, where element (i, j) is the
association cost between the i-th track in the given track indices and
the j-th detection in the given detection_indices.
max_distance : float
Gating threshold. Associations with cost larger than this value are
disregarded.
tracks : List[track.Track]
A list of predicted tracks at the current time step.
detections : List[detection.Detection]
A list of detections at the current time step.
track_indices : List[int]
List of track indices that maps rows in `cost_matrix` to tracks in
`tracks` (see description above).
detection_indices : List[int]
List of detection indices that maps columns in `cost_matrix` to
detections in `detections` (see description above).
Returns
-------
(List[(int, int)], List[int], List[int])
Returns a tuple with the following three entries:
* A list of matched track and detection indices.
* A list of unmatched track indices.
* A list of unmatched detection indices.
"""
if track_indices is None:
track_indices = np.arange(len(tracks))
if detection_indices is None:
detection_indices = np.arange(len(detections))
if len(detection_indices) == 0 or len(track_indices) == 0:
return [], track_indices, detection_indices # Nothing to match.
cost_matrix = distance_metric(
tracks, detections, track_indices, detection_indices)
cost_matrix[cost_matrix > max_distance] = max_distance + 1e-5
    row_indices, col_indices = linear_sum_assignment(cost_matrix)
    indices = np.column_stack((row_indices, col_indices))
matches, unmatched_tracks, unmatched_detections = [], [], []
for col, detection_idx in enumerate(detection_indices):
if col not in indices[:, 1]:
unmatched_detections.append(detection_idx)
for row, track_idx in enumerate(track_indices):
if row not in indices[:, 0]:
unmatched_tracks.append(track_idx)
for row, col in indices:
track_idx = track_indices[row]
detection_idx = detection_indices[col]
if cost_matrix[row, col] > max_distance:
unmatched_tracks.append(track_idx)
unmatched_detections.append(detection_idx)
else:
matches.append((track_idx, detection_idx))
return matches, unmatched_tracks, unmatched_detections
def matching_cascade(
distance_metric, max_distance, cascade_depth, tracks, detections,
track_indices=None, detection_indices=None):
"""Run matching cascade.
Parameters
----------
distance_metric : Callable[List[Track], List[Detection], List[int], List[int]) -> ndarray
The distance metric is given a list of tracks and detections as well as
a list of N track indices and M detection indices. The metric should
return the NxM dimensional cost matrix, where element (i, j) is the
association cost between the i-th track in the given track indices and
the j-th detection in the given detection indices.
max_distance : float
Gating threshold. Associations with cost larger than this value are
disregarded.
cascade_depth: int
        The cascade depth, should be set to the maximum track age.
tracks : List[track.Track]
A list of predicted tracks at the current time step.
detections : List[detection.Detection]
A list of detections at the current time step.
track_indices : Optional[List[int]]
List of track indices that maps rows in `cost_matrix` to tracks in
`tracks` (see description above). Defaults to all tracks.
detection_indices : Optional[List[int]]
List of detection indices that maps columns in `cost_matrix` to
detections in `detections` (see description above). Defaults to all
detections.
Returns
-------
(List[(int, int)], List[int], List[int])
Returns a tuple with the following three entries:
* A list of matched track and detection indices.
* A list of unmatched track indices.
* A list of unmatched detection indices.
"""
if track_indices is None:
track_indices = list(range(len(tracks)))
if detection_indices is None:
detection_indices = list(range(len(detections)))
unmatched_detections = detection_indices
matches = []
for level in range(cascade_depth):
if len(unmatched_detections) == 0: # No detections left
break
track_indices_l = [
k for k in track_indices
if tracks[k].time_since_update == 1 + level
]
if len(track_indices_l) == 0: # Nothing to match at this level
continue
matches_l, _, unmatched_detections = \
min_cost_matching(
distance_metric, max_distance, tracks, detections,
track_indices_l, unmatched_detections)
matches += matches_l
unmatched_tracks = list(set(track_indices) - set(k for k, _ in matches))
return matches, unmatched_tracks, unmatched_detections
def gate_cost_matrix(
kf, cost_matrix, tracks, detections, track_indices, detection_indices,
gated_cost=INFTY_COST, only_position=False):
"""Invalidate infeasible entries in cost matrix based on the state
distributions obtained by Kalman filtering.
Parameters
----------
kf : The Kalman filter.
cost_matrix : ndarray
The NxM dimensional cost matrix, where N is the number of track indices
and M is the number of detection indices, such that entry (i, j) is the
association cost between `tracks[track_indices[i]]` and
`detections[detection_indices[j]]`.
tracks : List[track.Track]
A list of predicted tracks at the current time step.
detections : List[detection.Detection]
A list of detections at the current time step.
track_indices : List[int]
List of track indices that maps rows in `cost_matrix` to tracks in
`tracks` (see description above).
detection_indices : List[int]
List of detection indices that maps columns in `cost_matrix` to
detections in `detections` (see description above).
gated_cost : Optional[float]
Entries in the cost matrix corresponding to infeasible associations are
set this value. Defaults to a very large value.
only_position : Optional[bool]
If True, only the x, y position of the state distribution is considered
during gating. Defaults to False.
Returns
-------
ndarray
Returns the modified cost matrix.
"""
gating_dim = 2 if only_position else 4
gating_threshold = kalman_filter.chi2inv95[gating_dim]
measurements = np.asarray(
[detections[i].to_xyah() for i in detection_indices])
for row, track_idx in enumerate(track_indices):
track = tracks[track_idx]
gating_distance = kf.gating_distance(
track.mean, track.covariance, measurements, only_position)
cost_matrix[row, gating_distance > gating_threshold] = gated_cost
return cost_matrix
# vim: expandtab:ts=4:sw=4
import numpy as np
def _pdist(a, b):
"""Compute pair-wise squared distance between points in `a` and `b`.
Parameters
----------
a : array_like
An NxM matrix of N samples of dimensionality M.
b : array_like
An LxM matrix of L samples of dimensionality M.
Returns
-------
ndarray
        Returns a matrix of size len(a), len(b) such that element (i, j)
contains the squared distance between `a[i]` and `b[j]`.
"""
a, b = np.asarray(a), np.asarray(b)
if len(a) == 0 or len(b) == 0:
return np.zeros((len(a), len(b)))
a2, b2 = np.square(a).sum(axis=1), np.square(b).sum(axis=1)
r2 = -2. * np.dot(a, b.T) + a2[:, None] + b2[None, :]
r2 = np.clip(r2, 0., float(np.inf))
return r2
def _cosine_distance(a, b, data_is_normalized=False):
"""Compute pair-wise cosine distance between points in `a` and `b`.
Parameters
----------
a : array_like
An NxM matrix of N samples of dimensionality M.
b : array_like
An LxM matrix of L samples of dimensionality M.
data_is_normalized : Optional[bool]
If True, assumes rows in a and b are unit length vectors.
        Otherwise, a and b are explicitly normalized to length 1.
Returns
-------
ndarray
        Returns a matrix of size len(a), len(b) such that element (i, j)
        contains the cosine distance between `a[i]` and `b[j]`.
"""
if not data_is_normalized:
a = np.asarray(a) / np.linalg.norm(a, axis=1, keepdims=True)
b = np.asarray(b) / np.linalg.norm(b, axis=1, keepdims=True)
return 1. - np.dot(a, b.T)
def _nn_euclidean_distance(x, y):
""" Helper function for nearest neighbor distance metric (Euclidean).
Parameters
----------
x : ndarray
A matrix of N row-vectors (sample points).
y : ndarray
A matrix of M row-vectors (query points).
Returns
-------
ndarray
A vector of length M that contains for each entry in `y` the
smallest Euclidean distance to a sample in `x`.
"""
distances = _pdist(x, y)
return np.maximum(0.0, distances.min(axis=0))
def _nn_cosine_distance(x, y):
""" Helper function for nearest neighbor distance metric (cosine).
Parameters
----------
x : ndarray
A matrix of N row-vectors (sample points).
y : ndarray
A matrix of M row-vectors (query points).
Returns
-------
ndarray
A vector of length M that contains for each entry in `y` the
smallest cosine distance to a sample in `x`.
"""
distances = _cosine_distance(x, y)
return distances.min(axis=0)
class NearestNeighborDistanceMetric(object):
"""
A nearest neighbor distance metric that, for each target, returns
the closest distance to any sample that has been observed so far.
Parameters
----------
metric : str
Either "euclidean" or "cosine".
matching_threshold: float
The matching threshold. Samples with larger distance are considered an
invalid match.
budget : Optional[int]
If not None, fix samples per class to at most this number. Removes
the oldest samples when the budget is reached.
Attributes
----------
samples : Dict[int -> List[ndarray]]
A dictionary that maps from target identities to the list of samples
that have been observed so far.
"""
def __init__(self, metric, matching_threshold, budget=None):
if metric == "euclidean":
self._metric = _nn_euclidean_distance
elif metric == "cosine":
self._metric = _nn_cosine_distance
else:
raise ValueError(
"Invalid metric; must be either 'euclidean' or 'cosine'")
self.matching_threshold = matching_threshold
self.budget = budget
self.samples = {}
def partial_fit(self, features, targets, active_targets):
"""Update the distance metric with new data.
Parameters
----------
features : ndarray
An NxM matrix of N features of dimensionality M.
targets : ndarray
An integer array of associated target identities.
active_targets : List[int]
A list of targets that are currently present in the scene.
"""
for feature, target in zip(features, targets):
self.samples.setdefault(target, []).append(feature)
if self.budget is not None:
self.samples[target] = self.samples[target][-self.budget:]
self.samples = {k: self.samples[k] for k in active_targets}
def distance(self, features, targets):
"""Compute distance between features and targets.
Parameters
----------
features : ndarray
An NxM matrix of N features of dimensionality M.
targets : List[int]
A list of targets to match the given `features` against.
Returns
-------
ndarray
Returns a cost matrix of shape len(targets), len(features), where
element (i, j) contains the closest squared distance between
`targets[i]` and `features[j]`.
"""
cost_matrix = np.zeros((len(targets), len(features)))
for i, target in enumerate(targets):
cost_matrix[i, :] = self._metric(self.samples[target], features)
return cost_matrix
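# Typical lifecycle inside the tracker (see Tracker.update below):
#   metric = NearestNeighborDistanceMetric("cosine", matching_threshold=0.2, budget=100)
#   metric.partial_fit(features, targets, active_targets)    # after each frame
#   cost_matrix = metric.distance(query_features, targets)   # during association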
# vim: expandtab:ts=4:sw=4
import numpy as np
import cv2
def non_max_suppression(boxes, max_bbox_overlap, scores=None):
"""Suppress overlapping detections.
Original code from [1]_ has been adapted to include confidence score.
.. [1] http://www.pyimagesearch.com/2015/02/16/
faster-non-maximum-suppression-python/
Examples
--------
>>> boxes = [d.roi for d in detections]
>>> scores = [d.confidence for d in detections]
>>> indices = non_max_suppression(boxes, max_bbox_overlap, scores)
>>> detections = [detections[i] for i in indices]
Parameters
----------
boxes : ndarray
Array of ROIs (x, y, width, height).
max_bbox_overlap : float
        ROIs that overlap more than this value are suppressed.
scores : Optional[array_like]
Detector confidence score.
Returns
-------
List[int]
Returns indices of detections that have survived non-maxima suppression.
"""
if len(boxes) == 0:
return []
boxes = boxes.astype(np.float)
pick = []
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2] + boxes[:, 0]
y2 = boxes[:, 3] + boxes[:, 1]
area = (x2 - x1 + 1) * (y2 - y1 + 1)
if scores is not None:
idxs = np.argsort(scores)
else:
idxs = np.argsort(y2)
while len(idxs) > 0:
last = len(idxs) - 1
i = idxs[last]
pick.append(i)
xx1 = np.maximum(x1[i], x1[idxs[:last]])
yy1 = np.maximum(y1[i], y1[idxs[:last]])
xx2 = np.minimum(x2[i], x2[idxs[:last]])
yy2 = np.minimum(y2[i], y2[idxs[:last]])
w = np.maximum(0, xx2 - xx1 + 1)
h = np.maximum(0, yy2 - yy1 + 1)
overlap = (w * h) / area[idxs[:last]]
idxs = np.delete(
idxs, np.concatenate(
([last], np.where(overlap > max_bbox_overlap)[0])))
return pick
# vim: expandtab:ts=4:sw=4
class TrackState:
"""
Enumeration type for the single target track state. Newly created tracks are
classified as `tentative` until enough evidence has been collected. Then,
the track state is changed to `confirmed`. Tracks that are no longer alive
are classified as `deleted` to mark them for removal from the set of active
tracks.
"""
Tentative = 1
Confirmed = 2
Deleted = 3
class Track:
"""
A single target track with state space `(x, y, a, h)` and associated
velocities, where `(x, y)` is the center of the bounding box, `a` is the
aspect ratio and `h` is the height.
Parameters
----------
mean : ndarray
Mean vector of the initial state distribution.
covariance : ndarray
Covariance matrix of the initial state distribution.
track_id : int
A unique track identifier.
n_init : int
Number of consecutive detections before the track is confirmed. The
track state is set to `Deleted` if a miss occurs within the first
`n_init` frames.
max_age : int
The maximum number of consecutive misses before the track state is
set to `Deleted`.
feature : Optional[ndarray]
Feature vector of the detection this track originates from. If not None,
this feature is added to the `features` cache.
Attributes
----------
mean : ndarray
Mean vector of the initial state distribution.
covariance : ndarray
Covariance matrix of the initial state distribution.
track_id : int
A unique track identifier.
hits : int
Total number of measurement updates.
age : int
        Total number of frames since first occurrence.
time_since_update : int
Total number of frames since last measurement update.
state : TrackState
The current track state.
features : List[ndarray]
A cache of features. On each measurement update, the associated feature
vector is added to this list.
"""
def __init__(self, mean, covariance, track_id, n_init, max_age,
feature=None):
self.mean = mean
self.covariance = covariance
self.track_id = track_id
self.hits = 1
self.age = 1
self.time_since_update = 0
self.state = TrackState.Tentative
self.features = []
if feature is not None:
self.features.append(feature)
self._n_init = n_init
self._max_age = max_age
def to_tlwh(self):
"""Get current position in bounding box format `(top left x, top left y,
width, height)`.
Returns
-------
ndarray
The bounding box.
"""
ret = self.mean[:4].copy()
ret[2] *= ret[3]
ret[:2] -= ret[2:] / 2
return ret
def to_tlbr(self):
"""Get current position in bounding box format `(min x, miny, max x,
max y)`.
Returns
-------
ndarray
The bounding box.
"""
ret = self.to_tlwh()
ret[2:] = ret[:2] + ret[2:]
return ret
def predict(self, kf):
"""Propagate the state distribution to the current time step using a
Kalman filter prediction step.
Parameters
----------
kf : kalman_filter.KalmanFilter
The Kalman filter.
"""
self.mean, self.covariance = kf.predict(self.mean, self.covariance)
self.age += 1
self.time_since_update += 1
def update(self, kf, detection):
"""Perform Kalman filter measurement update step and update the feature
cache.
Parameters
----------
kf : kalman_filter.KalmanFilter
The Kalman filter.
detection : Detection
The associated detection.
"""
self.mean, self.covariance = kf.update(
self.mean, self.covariance, detection.to_xyah())
self.features.append(detection.feature)
self.hits += 1
self.time_since_update = 0
if self.state == TrackState.Tentative and self.hits >= self._n_init:
self.state = TrackState.Confirmed
def mark_missed(self):
"""Mark this track as missed (no association at the current time step).
"""
if self.state == TrackState.Tentative:
self.state = TrackState.Deleted
elif self.time_since_update > self._max_age:
self.state = TrackState.Deleted
def is_tentative(self):
"""Returns True if this track is tentative (unconfirmed).
"""
return self.state == TrackState.Tentative
def is_confirmed(self):
"""Returns True if this track is confirmed."""
return self.state == TrackState.Confirmed
def is_deleted(self):
"""Returns True if this track is dead and should be deleted."""
return self.state == TrackState.Deleted
# vim: expandtab:ts=4:sw=4
from __future__ import absolute_import
import numpy as np
from . import kalman_filter
from . import linear_assignment
from . import iou_matching
from .track import Track
class Tracker:
"""
This is the multi-target tracker.
Parameters
----------
metric : nn_matching.NearestNeighborDistanceMetric
A distance metric for measurement-to-track association.
max_age : int
        Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of consecutive detections before the track is confirmed. The
track state is set to `Deleted` if a miss occurs within the first
`n_init` frames.
Attributes
----------
metric : nn_matching.NearestNeighborDistanceMetric
The distance metric used for measurement to track association.
max_age : int
        Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of frames that a track remains in initialization phase.
kf : kalman_filter.KalmanFilter
A Kalman filter to filter target trajectories in image space.
tracks : List[Track]
The list of active tracks at the current time step.
"""
def __init__(self, metric, max_iou_distance=0.7, max_age=30, n_init=3):
self.metric = metric
self.max_iou_distance = max_iou_distance
self.max_age = max_age
self.n_init = n_init
self.kf = kalman_filter.KalmanFilter()
self.tracks = []
self._next_id = 1
def predict(self):
"""Propagate track state distributions one time step forward.
This function should be called once every time step, before `update`.
"""
for track in self.tracks:
track.predict(self.kf)
def update(self, detections):
"""Perform measurement update and track management.
Parameters
----------
detections : List[deep_sort.detection.Detection]
A list of detections at the current time step.
"""
# Run matching cascade.
matches, unmatched_tracks, unmatched_detections = \
self._match(detections)
# Update track set.
for track_idx, detection_idx in matches:
self.tracks[track_idx].update(
self.kf, detections[detection_idx])
for track_idx in unmatched_tracks:
self.tracks[track_idx].mark_missed()
for detection_idx in unmatched_detections:
self._initiate_track(detections[detection_idx])
self.tracks = [t for t in self.tracks if not t.is_deleted()]
# Update distance metric.
active_targets = [t.track_id for t in self.tracks if t.is_confirmed()]
features, targets = [], []
for track in self.tracks:
if not track.is_confirmed():
continue
features += track.features
targets += [track.track_id for _ in track.features]
track.features = []
self.metric.partial_fit(
np.asarray(features), np.asarray(targets), active_targets)
def _match(self, detections):
def gated_metric(tracks, dets, track_indices, detection_indices):
features = np.array([dets[i].feature for i in detection_indices])
targets = np.array([tracks[i].track_id for i in track_indices])
cost_matrix = self.metric.distance(features, targets)
cost_matrix = linear_assignment.gate_cost_matrix(
self.kf, cost_matrix, tracks, dets, track_indices,
detection_indices)
return cost_matrix
# Split track set into confirmed and unconfirmed tracks.
confirmed_tracks = [
i for i, t in enumerate(self.tracks) if t.is_confirmed()]
unconfirmed_tracks = [
i for i, t in enumerate(self.tracks) if not t.is_confirmed()]
# Associate confirmed tracks using appearance features.
matches_a, unmatched_tracks_a, unmatched_detections = \
linear_assignment.matching_cascade(
gated_metric, self.metric.matching_threshold, self.max_age,
self.tracks, detections, confirmed_tracks)
# Associate remaining tracks together with unconfirmed tracks using IOU.
iou_track_candidates = unconfirmed_tracks + [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update == 1]
unmatched_tracks_a = [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update != 1]
matches_b, unmatched_tracks_b, unmatched_detections = \
linear_assignment.min_cost_matching(
iou_matching.iou_cost, self.max_iou_distance, self.tracks,
detections, iou_track_candidates, unmatched_detections)
matches = matches_a + matches_b
unmatched_tracks = list(set(unmatched_tracks_a + unmatched_tracks_b))
return matches, unmatched_tracks, unmatched_detections
def _initiate_track(self, detection):
mean, covariance = self.kf.initiate(detection.to_xyah())
self.tracks.append(Track(
mean, covariance, self._next_id, self.n_init, self.max_age,
detection.feature))
self._next_id += 1
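# Typical per-frame driver loop (main.py in this repo follows this pattern;
# the time_since_update threshold of 1 is the usual display choice, not mandated):
#   tracker.predict()
#   tracker.update(detections)
#   for track in tracker.tracks:
#       if not track.is_confirmed() or track.time_since_update > 1:
#           continue
#       bbox = track.to_tlbr()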
person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401
{"schema":{"fields":[{"name":"index","type":"integer"},{"name":"total","type":"string"},{"name":"now","type":"string"},{"name":"time","type":"string"},{"name":"s","type":"number"}],"primaryKey":["index"],"pandas_version":"0.20.0"},"data":[{"index":0,"total":0,"now":0,"time":"2020-06-17 03:27:10.288851","s":0.0},{"index":1,"total":11,"now":7,"time":"2020-06-17 03:27:11.051125","s":0.7622739018},{"index":2,"total":13,"now":8,"time":"2020-06-17 03:27:11.813399","s":1.5245478036},{"index":3,"total":15,"now":10,"time":"2020-06-17 03:27:12.575673","s":2.2868217054},{"index":4,"total":21,"now":13,"time":"2020-06-17 03:27:13.337947","s":3.0490956072},{"index":5,"total":22,"now":13,"time":"2020-06-17 03:27:14.100221","s":3.811369509},{"index":6,"total":23,"now":12,"time":"2020-06-17 03:27:14.862494","s":4.5736434109},{"index":7,"total":24,"now":11,"time":"2020-06-17 03:27:15.624768","s":5.3359173127},{"index":8,"total":24,"now":7,"time":"2020-06-17 03:27:16.387042","s":6.0981912145},{"index":9,"total":25,"now":8,"time":"2020-06-17 03:27:17.149316","s":6.8604651163},{"index":10,"total":26,"now":9,"time":"2020-06-17 03:27:17.911590","s":7.6227390181},{"index":11,"total":26,"now":10,"time":"2020-06-17 03:27:18.673864","s":8.3850129199},{"index":12,"total":26,"now":7,"time":"2020-06-17 03:27:19.436138","s":9.1472868217},{"index":13,"total":29,"now":8,"time":"2020-06-17 03:27:20.198412","s":9.9095607235},{"index":14,"total":31,"now":11,"time":"2020-06-17 03:27:20.960686","s":10.6718346253},{"index":15,"total":31,"now":10,"time":"2020-06-17 03:27:21.722960","s":11.4341085271},{"index":16,"total":32,"now":13,"time":"2020-06-17 03:27:22.485233","s":12.1963824289},{"index":17,"total":33,"now":11,"time":"2020-06-17 03:27:23.247507","s":12.9586563307},{"index":18,"total":33,"now":11,"time":"2020-06-17 03:27:24.009781","s":13.7209302326},{"index":19,"total":35,"now":11,"time":"2020-06-17 03:27:24.772055","s":14.4832041344},{"index":20,"total":37,"now":9,"time":"2020-06-17 03:27:25.534329","s":15.2454780362},{"index":21,"total":38,"now":10,"time":"2020-06-17 03:27:26.296603","s":16.007751938},{"index":22,"total":39,"now":11,"time":"2020-06-17 03:27:27.058877","s":16.7700258398},{"index":23,"total":41,"now":12,"time":"2020-06-17 03:27:27.821151","s":17.5322997416},{"index":24,"total":42,"now":11,"time":"2020-06-17 03:27:28.583425","s":18.2945736434},{"index":25,"total":42,"now":12,"time":"2020-06-17 03:27:29.345699","s":19.0568475452},{"index":26,"total":42,"now":10,"time":"2020-06-17 03:27:30.107972","s":19.819121447},{"index":27,"total":42,"now":8,"time":"2020-06-17 03:27:30.870246","s":20.5813953488},{"index":28,"total":43,"now":9,"time":"2020-06-17 03:27:31.632520","s":21.3436692506},{"index":29,"total":45,"now":11,"time":"2020-06-17 03:27:32.394794","s":22.1059431525},{"index":30,"total":45,"now":9,"time":"2020-06-17 03:27:33.157068","s":22.8682170543},{"index":31,"total":46,"now":11,"time":"2020-06-17 03:27:33.919342","s":23.6304909561},{"index":32,"total":46,"now":11,"time":"2020-06-17 03:27:34.681616","s":24.3927648579},{"index":33,"total":46,"now":11,"time":"2020-06-17 03:27:35.443890","s":25.1550387597},{"index":34,"total":47,"now":10,"time":"2020-06-17 03:27:36.206164","s":25.9173126615},{"index":35,"total":47,"now":9,"time":"2020-06-17 03:27:36.968438","s":26.6795865633},{"index":36,"total":48,"now":10,"time":"2020-06-17 03:27:37.730711","s":27.4418604651},{"index":37,"total":48,"now":10,"time":"2020-06-17 03:27:38.492985","s":28.2041343669},{"index":38,"total":50,"now":10,"time":"2020-06-17 
03:27:39.255259","s":28.9664082687},{"index":39,"total":51,"now":10,"time":"2020-06-17 03:27:40.017533","s":29.7286821705},{"index":40,"total":52,"now":9,"time":"2020-06-17 03:27:40.779807","s":30.4909560724},{"index":41,"total":52,"now":8,"time":"2020-06-17 03:27:41.542081","s":31.2532299742},{"index":42,"total":53,"now":7,"time":"2020-06-17 03:27:42.304355","s":32.015503876},{"index":43,"total":55,"now":8,"time":"2020-06-17 03:27:43.066629","s":32.7777777778},{"index":44,"total":56,"now":8,"time":"2020-06-17 03:27:43.828903","s":33.5400516796},{"index":45,"total":56,"now":10,"time":"2020-06-17 03:27:44.591177","s":34.3023255814},{"index":46,"total":56,"now":9,"time":"2020-06-17 03:27:45.353450","s":35.0645994832},{"index":47,"total":57,"now":10,"time":"2020-06-17 03:27:46.115724","s":35.826873385},{"index":48,"total":57,"now":8,"time":"2020-06-17 03:27:46.877998","s":36.5891472868},{"index":49,"total":58,"now":10,"time":"2020-06-17 03:27:47.640272","s":37.3514211886},{"index":50,"total":58,"now":8,"time":"2020-06-17 03:27:48.402546","s":38.1136950904},{"index":51,"total":60,"now":8,"time":"2020-06-17 03:27:49.164820","s":38.8759689922},{"index":52,"total":63,"now":12,"time":"2020-06-17 03:27:49.927094","s":39.6382428941},{"index":53,"total":63,"now":12,"time":"2020-06-17 03:27:50.689368","s":40.4005167959},{"index":54,"total":63,"now":8,"time":"2020-06-17 03:27:51.451642","s":41.1627906977},{"index":55,"total":63,"now":10,"time":"2020-06-17 03:27:52.213916","s":41.9250645995},{"index":56,"total":64,"now":11,"time":"2020-06-17 03:27:52.976190","s":42.6873385013},{"index":57,"total":64,"now":9,"time":"2020-06-17 03:27:53.738463","s":43.4496124031},{"index":58,"total":64,"now":10,"time":"2020-06-17 03:27:54.500737","s":44.2118863049},{"index":59,"total":64,"now":9,"time":"2020-06-17 03:27:55.263011","s":44.9741602067},{"index":60,"total":64,"now":8,"time":"2020-06-17 03:27:56.025285","s":45.7364341085},{"index":61,"total":64,"now":9,"time":"2020-06-17 03:27:56.787559","s":46.4987080103},{"index":62,"total":65,"now":8,"time":"2020-06-17 03:27:57.549833","s":47.2609819121},{"index":63,"total":66,"now":9,"time":"2020-06-17 03:27:58.312107","s":48.023255814},{"index":64,"total":67,"now":10,"time":"2020-06-17 03:27:59.074381","s":48.7855297158},{"index":65,"total":68,"now":10,"time":"2020-06-17 03:27:59.836655","s":49.5478036176},{"index":66,"total":68,"now":9,"time":"2020-06-17 03:28:00.598929","s":50.3100775194},{"index":67,"total":69,"now":9,"time":"2020-06-17 03:28:01.361202","s":51.0723514212},{"index":68,"total":69,"now":8,"time":"2020-06-17 03:28:02.123476","s":51.834625323},{"index":69,"total":69,"now":10,"time":"2020-06-17 03:28:02.885750","s":52.5968992248},{"index":70,"total":70,"now":8,"time":"2020-06-17 03:28:03.648024","s":53.3591731266},{"index":71,"total":71,"now":9,"time":"2020-06-17 03:28:04.410298","s":54.1214470284},{"index":72,"total":73,"now":11,"time":"2020-06-17 03:28:05.172572","s":54.8837209302},{"index":73,"total":73,"now":10,"time":"2020-06-17 03:28:05.934846","s":55.645994832},{"index":74,"total":74,"now":9,"time":"2020-06-17 03:28:06.697120","s":56.4082687339},{"index":75,"total":75,"now":10,"time":"2020-06-17 03:28:07.459394","s":57.1705426357},{"index":76,"total":75,"now":7,"time":"2020-06-17 03:28:08.221668","s":57.9328165375},{"index":77,"total":77,"now":10,"time":"2020-06-17 03:28:08.983941","s":58.6950904393}]}
{"schema":{"fields":[{"name":"index","type":"string"},{"name":"total","type":"string"},{"name":"now","type":"string"},{"name":"time","type":"string"},{"name":"s","type":"string"}],"primaryKey":["index"],"pandas_version":"0.20.0"},"data":[]}
{"schema":{"fields":[{"name":"index","type":"string"},{"name":"x","type":"string"},{"name":"y","type":"string"},{"name":"id","type":"string"},{"name":"time","type":"string"},{"name":"s","type":"string"}],"primaryKey":["index"],"pandas_version":"0.20.0"},"data":[]}
Keras==2.3.1
tensorflow-gpu==1.13.1
numpy==1.15.0
opencv-python==4.2.0.34
scikit-learn==0.23.1
scipy==1.4.1
Pillow
torch==1.4.0
torchvision==0.5.0
import cv2
import os
video = "00046"
#input_imgs
im_dir = '/home/cai/Desktop/yolo_dataset/t1_video/t1_video_'+video+'/'
#im_dir =
#output_video
video_dir = '/home/cai/Desktop/yolo_dataset/t1_video/test_video/det_t1_video_'+video+'_test_q.avi'
#fps
fps = 50
#num_of_imgs
num = 310
#img_size
img_size = (1920,1080)
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
#opencv3
videoWriter = cv2.VideoWriter(video_dir, fourcc, fps, img_size)
for i in range(num):
    #im_name = os.path.join(im_dir,'frame-' + str(i) + '.png')
    im_name = os.path.join(im_dir, 't1_video_' + video + '_' + "%05d" % i + '.jpg')
    frame = cv2.imread(im_name)
    if frame is None:
        # skip frames that failed to load instead of passing None to write()
        print('missing: ' + im_name)
        continue
    #frame = cv2.resize(frame, (480, 320))
    #frame = cv2.resize(frame,(520,320), interpolation=cv2.INTER_CUBIC)
    videoWriter.write(frame)
    print(im_name)
videoWriter.release()
print('finish')
import os
import errno
import argparse
import numpy as np
import cv2
import tensorflow as tf
def _run_in_batches(f, data_dict, out, batch_size):
data_len = len(out)
num_batches = int(data_len / batch_size)
s, e = 0, 0
for i in range(num_batches):
s, e = i * batch_size, (i + 1) * batch_size
batch_data_dict = {k: v[s:e] for k, v in data_dict.items()}
out[s:e] = f(batch_data_dict)
if e < len(out):
batch_data_dict = {k: v[e:] for k, v in data_dict.items()}
out[e:] = f(batch_data_dict)
def extract_image_patch(image, bbox, patch_shape):
"""Extract image patch from bounding box.
Parameters
----------
image : ndarray
The full image.
bbox : array_like
The bounding box in format (x, y, width, height).
patch_shape : Optional[array_like]
This parameter can be used to enforce a desired patch shape
(height, width). First, the `bbox` is adapted to the aspect ratio
of the patch shape, then it is clipped at the image boundaries.
If None, the shape is computed from :arg:`bbox`.
Returns
-------
ndarray | NoneType
An image patch showing the :arg:`bbox`, optionally reshaped to
:arg:`patch_shape`.
Returns None if the bounding box is empty or fully outside of the image
boundaries.
"""
bbox = np.array(bbox)
if patch_shape is not None:
# correct aspect ratio to patch shape
target_aspect = float(patch_shape[1]) / patch_shape[0]
new_width = target_aspect * bbox[3]
bbox[0] -= (new_width - bbox[2]) / 2
bbox[2] = new_width
# convert to top left, bottom right
bbox[2:] += bbox[:2]
bbox = bbox.astype(np.int)
# clip at image boundaries
bbox[:2] = np.maximum(0, bbox[:2])
bbox[2:] = np.minimum(np.asarray(image.shape[:2][::-1]) - 1, bbox[2:])
if np.any(bbox[:2] >= bbox[2:]):
return None
sx, sy, ex, ey = bbox
image = image[sy:ey, sx:ex]
image = cv2.resize(image, tuple(patch_shape[::-1]))
return image
class ImageEncoder(object):
def __init__(self, checkpoint_filename, input_name="images",
output_name="features"):
self.session = tf.Session()
with tf.gfile.GFile(checkpoint_filename, "rb") as file_handle:
graph_def = tf.GraphDef()
graph_def.ParseFromString(file_handle.read())
tf.import_graph_def(graph_def, name="net")
self.input_var = tf.get_default_graph().get_tensor_by_name(
"net/%s:0" % input_name)
self.output_var = tf.get_default_graph().get_tensor_by_name(
"net/%s:0" % output_name)
assert len(self.output_var.get_shape()) == 2
assert len(self.input_var.get_shape()) == 4
self.feature_dim = self.output_var.get_shape().as_list()[-1]
self.image_shape = self.input_var.get_shape().as_list()[1:]
def __call__(self, data_x, batch_size=32):
out = np.zeros((len(data_x), self.feature_dim), np.float32)
_run_in_batches(
lambda x: self.session.run(self.output_var, feed_dict=x),
{self.input_var: data_x}, out, batch_size)
return out
def create_box_encoder(model_filename, input_name="images",
output_name="features", batch_size=32):
image_encoder = ImageEncoder(model_filename, input_name, output_name)
image_shape = image_encoder.image_shape
def encoder(image, boxes):
image_patches = []
for box in boxes:
patch = extract_image_patch(image, box, image_shape[:2])
if patch is None:
print("WARNING: Failed to extract image patch: %s." % str(box))
patch = np.random.uniform(
0., 255., image_shape).astype(np.uint8)
image_patches.append(patch)
image_patches = np.asarray(image_patches)
return image_encoder(image_patches, batch_size)
return encoder
def generate_detections(encoder, mot_dir, output_dir, detection_dir=None):
"""Generate detections with features.
Parameters
----------
encoder : Callable[image, ndarray] -> ndarray
The encoder function takes as input a BGR color image and a matrix of
bounding boxes in format `(x, y, w, h)` and returns a matrix of
corresponding feature vectors.
mot_dir : str
Path to the MOTChallenge directory (can be either train or test).
output_dir
Path to the output directory. Will be created if it does not exist.
detection_dir
Path to custom detections. The directory structure should be the default
MOTChallenge structure: `[sequence]/det/det.txt`. If None, uses the
standard MOTChallenge detections.
"""
if detection_dir is None:
detection_dir = mot_dir
try:
os.makedirs(output_dir)
except OSError as exception:
if exception.errno == errno.EEXIST and os.path.isdir(output_dir):
pass
else:
raise ValueError(
"Failed to created output directory '%s'" % output_dir)
for sequence in os.listdir(mot_dir):
print("Processing %s" % sequence)
sequence_dir = os.path.join(mot_dir, sequence)
image_dir = os.path.join(sequence_dir, "img1")
image_filenames = {
int(os.path.splitext(f)[0]): os.path.join(image_dir, f)
for f in os.listdir(image_dir)}
detection_file = os.path.join(
detection_dir, sequence, "det/det.txt")
detections_in = np.loadtxt(detection_file, delimiter=',')
detections_out = []
frame_indices = detections_in[:, 0].astype(np.int)
min_frame_idx = frame_indices.astype(np.int).min()
max_frame_idx = frame_indices.astype(np.int).max()
for frame_idx in range(min_frame_idx, max_frame_idx + 1):
print("Frame %05d/%05d" % (frame_idx, max_frame_idx))
mask = frame_indices == frame_idx
rows = detections_in[mask]
if frame_idx not in image_filenames:
print("WARNING could not find image for frame %d" % frame_idx)
continue
bgr_image = cv2.imread(
image_filenames[frame_idx], cv2.IMREAD_COLOR)
features = encoder(bgr_image, rows[:, 2:6].copy())
detections_out += [np.r_[(row, feature)] for row, feature
in zip(rows, features)]
output_filename = os.path.join(output_dir, "%s.npy" % sequence)
np.save(
output_filename, np.asarray(detections_out), allow_pickle=False)
def parse_args():
"""Parse command line arguments.
"""
parser = argparse.ArgumentParser(description="Re-ID feature extractor")
parser.add_argument(
"--model",
default="resources/networks/mars-small128.pb",
help="Path to freezed inference graph protobuf.")
parser.add_argument(
"--mot_dir", help="Path to MOTChallenge directory (train or test)",
required=True)
parser.add_argument(
"--detection_dir", help="Path to custom detections. Defaults to "
"standard MOT detections Directory structure should be the default "
"MOTChallenge structure: [sequence]/det/det.txt", default=None)
parser.add_argument(
"--output_dir", help="Output directory. Will be created if it does not"
" exist.", default="detections")
return parser.parse_args()
def main():
args = parse_args()
encoder = create_box_encoder(args.model, batch_size=32)
generate_detections(encoder, args.mot_dir, args.output_dir,
args.detection_dir)
if __name__ == "__main__":
main()
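# Typical invocation (script name and paths are illustrative):
#
#     $ python generate_detections.py \
#         --model resources/networks/mars-small128.pb \
#         --mot_dir ./MOT16/train --output_dir ./detections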
import cv2
import os

# Dump every frame of the clip into image_folder as IMG_<n>.jpg.
image_folder = './mask_face'
video_name = './cut_test.m4v'

os.makedirs(image_folder, exist_ok=True)
vc = cv2.VideoCapture(video_name)
c = 1
rval = vc.isOpened()
while rval:
    rval, frame = vc.read()
    if not rval:
        break  # end of video or failed read; do not write a None frame
    cv2.imwrite(os.path.join(image_folder, 'IMG_%d.jpg' % c), frame)
    c += 1
vc.release()
import colorsys
import numpy as np
from keras import backend as K
from keras.models import load_model
from yolo4.model import yolo_eval, Mish
from yolo4.utils import letterbox_image
import os
from keras.utils import multi_gpu_model
class YOLO(object):
def __init__(self):
self.model_path = os.getcwd() + '/../deep_sort_yolov4/model_data/yolo4_weight.h5'
self.anchors_path = os.getcwd() + '/../deep_sort_yolov4/model_data/yolo_anchors.txt'
self.classes_path = os.getcwd() + '/../deep_sort_yolov4/model_data/coco_classes.txt'
self.gpu_num = 1
self.score = 0.5
self.iou = 0.5
self.class_names = self._get_class()
self.anchors = self._get_anchors()
self.sess = K.get_session()
self.model_image_size = (608, 608) # fixed size or (None, None)
self.is_fixed_size = self.model_image_size != (None, None)
self.boxes, self.scores, self.classes = self.generate()
def _get_class(self):
classes_path = os.path.expanduser(self.classes_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def _get_anchors(self):
anchors_path = os.path.expanduser(self.anchors_path)
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
anchors = np.array(anchors).reshape(-1, 2)
return anchors
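    # model_data/yolo_anchors.txt is expected to contain one comma-separated
    # line of width,height pairs, e.g. the YOLOv4 defaults:
    #   12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401
    # which _get_anchors() reshapes into an (N, 2) array.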
def generate(self):
model_path = os.path.expanduser(self.model_path)
assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
self.yolo_model = load_model(model_path, custom_objects={'Mish': Mish}, compile=False)
print('{} model, anchors, and classes loaded.'.format(model_path))
# Generate colors for drawing bounding boxes.
hsv_tuples = [(x / len(self.class_names), 1., 1.)
for x in range(len(self.class_names))]
self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
self.colors = list(
map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
self.colors))
np.random.seed(10101) # Fixed seed for consistent colors across runs.
np.random.shuffle(self.colors) # Shuffle colors to decorrelate adjacent classes.
np.random.seed(None) # Reset seed to default.
# Generate output tensor targets for filtered bounding boxes.
self.input_image_shape = K.placeholder(shape=(2, ))
if self.gpu_num>=2:
self.yolo_model = multi_gpu_model(self.yolo_model, gpus=self.gpu_num)
boxes, scores, classes = yolo_eval(self.yolo_model.output, self.anchors,
len(self.class_names), self.input_image_shape,
score_threshold=self.score, iou_threshold=self.iou)
return boxes, scores, classes
def detect_image(self, image):
if self.is_fixed_size:
assert self.model_image_size[0]%32 == 0, 'Multiples of 32 required'
assert self.model_image_size[1]%32 == 0, 'Multiples of 32 required'
boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
else:
new_image_size = (image.width - (image.width % 32),
image.height - (image.height % 32))
boxed_image = letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype='float32')
# print(image_data.shape)
image_data /= 255.
image_data = np.expand_dims(image_data, 0) # Add batch dimension.
out_boxes, out_scores, out_classes = self.sess.run(
[self.boxes, self.scores, self.classes],
feed_dict={
self.yolo_model.input: image_data,
self.input_image_shape: [image.size[1], image.size[0]],
K.learning_phase(): 0
})
return_boxes = []
return_scores = []
return_class_names = []
for i, c in reversed(list(enumerate(out_classes))):
predicted_class = self.class_names[c]
if predicted_class != 'person': # Modify to detect other classes.
continue
box = out_boxes[i]
score = out_scores[i]
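            # out_boxes are (top, left, bottom, right); convert to (x, y, w, h).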
x = int(box[1])
y = int(box[0])
w = int(box[3] - box[1])
h = int(box[2] - box[0])
if x < 0:
w = w + x
x = 0
if y < 0:
h = h + y
y = 0
return_boxes.append([x, y, w, h])
return_scores.append(score)
return_class_names.append(predicted_class)
return return_boxes, return_scores, return_class_names
def close_session(self):
self.sess.close()
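# Minimal usage sketch (assumes the model_data files referenced above exist;
# detect_image expects a PIL.Image, matching letterbox_image):
#
#     from PIL import Image
#     yolo = YOLO()
#     boxes, scores, names = yolo.detect_image(Image.open("frame.jpg"))
#     yolo.close_session()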
"""Miscellaneous utility functions."""
from functools import reduce
from PIL import Image
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
def compose(*funcs):
"""Compose arbitrarily many functions, evaluated left to right.
Reference: https://mathieularose.com/function-composition-in-python/
"""
# return lambda x: reduce(lambda v, f: f(v), funcs, x)
if funcs:
return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
else:
raise ValueError('Composition of empty sequence not supported.')
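# Example: compose(f, g)(x) == g(f(x)); functions are applied left to right,
# matching the docstring above.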
def letterbox_image(image, size):
'''resize image with unchanged aspect ratio using padding'''
iw, ih = image.size
w, h = size
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
image = image.resize((nw,nh), Image.BICUBIC)
new_image = Image.new('RGB', size, (128,128,128))
new_image.paste(image, ((w-nw)//2, (h-nh)//2))
return new_image
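# Example: fitting a 1280x720 frame into (608, 608) gives scale = 0.475, so
# the frame is resized to 608x342 and centered with 133 rows of gray padding
# above and below.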
def rand(a=0, b=1):
return np.random.rand()*(b-a) + a
def get_random_data(annotation_line, input_shape, random=True, max_boxes=100, jitter=.3, hue=.1, sat=1.5, val=1.5, proc_img=True):
'''random preprocessing for real-time data augmentation'''
line = annotation_line.split()
image = Image.open(line[0])
iw, ih = image.size
h, w = input_shape
box = np.array([np.array(list(map(int,box.split(',')))) for box in line[1:]])
if not random:
# resize image
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
dx = (w-nw)//2
dy = (h-nh)//2
image_data=0
if proc_img:
image = image.resize((nw,nh), Image.BICUBIC)
new_image = Image.new('RGB', (w,h), (128,128,128))
new_image.paste(image, (dx, dy))
image_data = np.array(new_image)/255.
# correct boxes
box_data = np.zeros((max_boxes,5))
if len(box)>0:
np.random.shuffle(box)
if len(box)>max_boxes: box = box[:max_boxes]
box[:, [0,2]] = box[:, [0,2]]*scale + dx
box[:, [1,3]] = box[:, [1,3]]*scale + dy
box_data[:len(box)] = box
return image_data, box_data
# resize image
new_ar = w/h * rand(1-jitter,1+jitter)/rand(1-jitter,1+jitter)
scale = rand(.25, 2)
if new_ar < 1:
nh = int(scale*h)
nw = int(nh*new_ar)
else:
nw = int(scale*w)
nh = int(nw/new_ar)
image = image.resize((nw,nh), Image.BICUBIC)
# place image
dx = int(rand(0, w-nw))
dy = int(rand(0, h-nh))
new_image = Image.new('RGB', (w,h), (128,128,128))
new_image.paste(image, (dx, dy))
image = new_image
# flip image or not
flip = rand()<.5
if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT)
# distort image
hue = rand(-hue, hue)
sat = rand(1, sat) if rand()<.5 else 1/rand(1, sat)
val = rand(1, val) if rand()<.5 else 1/rand(1, val)
x = rgb_to_hsv(np.array(image)/255.)
x[..., 0] += hue
x[..., 0][x[..., 0]>1] -= 1
x[..., 0][x[..., 0]<0] += 1
x[..., 1] *= sat
x[..., 2] *= val
x[x>1] = 1
x[x<0] = 0
image_data = hsv_to_rgb(x) # numpy array, 0 to 1
# correct boxes
box_data = np.zeros((max_boxes,5))
if len(box)>0:
np.random.shuffle(box)
box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx
box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy
if flip: box[:, [0,2]] = w - box[:, [2,0]]
box[:, 0:2][box[:, 0:2]<0] = 0
box[:, 2][box[:, 2]>w] = w
box[:, 3][box[:, 3]>h] = h
box_w = box[:, 2] - box[:, 0]
box_h = box[:, 3] - box[:, 1]
box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box
if len(box)>max_boxes: box = box[:max_boxes]
box_data[:len(box)] = box
return image_data, box_data
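# Each annotation line is expected in the form
#     path/to/image.jpg x_min,y_min,x_max,y_max,class_id [x_min,y_min,...]
# and box_data is zero-padded to max_boxes rows of
# (x_min, y_min, x_max, y_max, class_id).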
absl-py==0.9.0
asgiref==3.2.7
astor==0.8.1
astroid==2.3.3
cachetools==4.1.0
certifi==2020.4.5.1
chardet==3.0.4
colorama==0.4.3
cycler==0.10.0
Django==3.0.5
et-xmlfile==1.0.1
gast==0.2.2
google-auth==1.14.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.28.1
h5py==2.10.0
idna==2.9
image==1.5.31
isort==4.3.21
jdcal==1.4.1
joblib==0.14.1
Keras==2.3.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
lazy-object-proxy==1.4.3
Markdown==3.2.1
matplotlib==3.2.1
mccabe==0.6.1
numpy==1.18.3
oauthlib==3.1.0
opencv-python==4.2.0.34
openpyxl==3.0.3
opt-einsum==3.2.1
pandas==1.0.3
Pillow==7.1.1
protobuf==3.11.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pylint==2.4.4
pyparsing==2.4.7
python-dateutil==2.8.1
pytz==2019.3
PyYAML==5.3.1
requests==2.23.0
requests-oauthlib==1.3.0
rsa==4.0
scikit-learn==0.22.2.post1
scipy==1.4.1
seaborn==0.10.1
six==1.14.0
sklearn==0.0
sqlparse==0.3.1
tensorboard==1.15.0
tensorflow==1.15.2
tensorflow-estimator==1.15.1
tensorflow-gpu==1.15.2
tensorflow-gpu-estimator==2.1.0
termcolor==1.1.0
typed-ast==1.4.1
urllib3==1.25.9
Werkzeug==1.0.1
wrapt==1.11.2
xlrd==1.2.0
@@ -6,12 +6,14 @@
"start": "node ./bin/www"
},
"dependencies": {
"child_process": "^1.0.2",
"cookie-parser": "~1.4.4",
"debug": "~2.6.9",
"ejs": "~2.6.1",
"express": "~4.16.1",
"http-errors": "~1.6.3",
"morgan": "~1.9.1",
"multer": "^1.4.2"
"multer": "^1.4.2",
"python-shell": "^2.0.1"
}
}
@@ -2,10 +2,13 @@ var express = require('express');
var router = express.Router();
var multer = require("multer");
var path = require("path");
+var PythonShell = require('python-shell');
+var spawn = require("child_process").spawn;
var storage = multer.diskStorage({
destination: function(req, file, callback) {
callback(null, "upload/")
callback(null, "../deep_sort_yolov4/")
},
filename: function(req, file, callback) {
var extension = path.extname(file.originalname);
@@ -21,20 +24,33 @@ var upload = multer({
// View page route
router.get('/', function(req, res, next) {
res.render("index")
res.render('index', { title: 'Express' });
});
// 2. Handle the file upload
-router.post('/create', upload.single("File"), async(req, res) => {
+router.post('/create', upload.single("File"), function(req, res) {
    // 3. File object
-    var file = req.file
-    // 4. File info
-    var result = {
-        originalName: file.originalname,
-        size: file.size,
-    res.json(result);
+    var file = req.file;
+    var options = {
+        mode: 'text',
+        pythonPath: __dirname + '/../../venv/Scripts/python.exe',
+        pythonOptions: ['-u'],
+        scriptPath: __dirname + '/../../deep_sort_yolov4/',
+        args: [file.originalname]
+    };
+    var shell1 = new PythonShell.PythonShell('main.py', options);
+    shell1.end(function(err) {
+        if (err) throw err;
+        else {
+            res.render('result', { file_name: file.originalname.split('.')[0] });
+        }
+    });
+    // PythonShell.PythonShell.run('main.py', options, (err, results) => {
+    //     if (err) throw err;
+    //     PythonShell.PythonShell.run('kmean.py', options2, (err, results) => {
+    //         if (err) throw err;
+    //         res.render('result', { file_name: file.originalname.split('.')[0] });
+    //     });
+    // });
});
module.exports = router;
\ No newline at end of file
@@ -32,7 +32,8 @@
<div class="sidebar-brand-icon rotate-n-15">
<i class="fas fa-thumbs-up"></i>
</div>
<div class="sidebar-brand-text mx-3">유동 인구 분석</div>
<div class="sidebar-brand-text mx-3">유동 인구 분석
</div>
</a>
<!-- Divider -->
@@ -86,10 +86,10 @@
<!-- /.container-fluid -->
<div class="container row">
<div class="container-video col-md-6">
<video src="data/output2.avi" width="400" controls autoplay></video>
<video src="data/<%=file_name%>.mp4" width="400" controls autoplay></video>
</div>
<div class="container-kmeans col-md-6">
<img src="data/test3_kmeans.png" style="width: 100%;" alt="Kmeans Image">
<img src="data/<%=file_name%>_kmeans.png" style="width: 100%;" alt="Kmeans Image">
</div>
</div>
<div class="container-kibana"></div>
@@ -127,9 +127,6 @@
<!-- Custom scripts for all pages-->
<script src="javascripts/sb-admin-2.min.js"></script>
<!-- Page level plugins -->
<script src="vendor/chart.js/Chart.min.js"></script>
<!-- Page level custom scripts -->
<!-- <script src="javascripts/demo/chart-pie-demo.js"></script> -->
</body>