Computación y Sistemas

Online version ISSN 2007-9737; print version ISSN 1405-5546

Comp. y Sist. vol. 25 no. 2, Ciudad de México, Apr./Jun. 2021; Epub 11-Oct-2021

https://doi.org/10.13053/cys-25-2-3783 

Articles

Real Time Vision Based Overtaking Assistance System for Drivers at Night on Two-Lane Single Carriageway

Gouranga Mandal1  *

Diptendu Bhattacharya1 

Parthasarathi De1 

1 National Institute of Technology Agartala, Computer Science and Engineering Department, India. gourangamandal@yahoo.com, diptendu1@gmail.com, parthasarathide76@gmail.com


Abstract

This article presents an effective solution to assist a driver in making overtaking decisions under dark night-time conditions on a two-lane single-carriageway road. It considers the difficult situation in which one vehicle is directly in front of the test vehicle, travelling in the same direction, while another vehicle approaches from the opposite direction. Because the environment is very dark, only the headlights and taillights of other vehicles are visible, and estimating distance and speed with good accuracy under such conditions is a challenging task. The proposed assistance system estimates the actual and relative speed and the distance of the slow vehicle ahead and of the oncoming vehicle by observing their taillights and headlights, respectively. It then estimates the gap, road condition level, speed, and acceleration required for safe overtaking, and finally makes the overtaking decision so that no collision can occur between vehicles. Several real-time experiments show that the estimation achieves high accuracy and safe decisions compared with state-of-the-art techniques, while using only a low-cost 2D camera.

Keywords: Two-lane single carriageway; headlights and taillights; low cost 2D camera; distance and speed estimation; overtaking decision

1 Introduction

Driving at night is difficult because the road is dark and other vehicles are not clearly visible; under such conditions, overtaking is nearly impossible.

Sometimes the vehicle in front, travelling in the same direction, moves very slowly, forcing the following vehicle to crawl behind it. Running a fast vehicle behind a slow one unnecessarily wastes valuable time.

Overtaking combined with rash driving on dark roads increases the probability of road accidents. It is observed that 44% of all road-accident deaths are due to rash driving, typically involving excessive speed, overtaking, and incorrect predictions about other vehicles [1].

Almost 45% of traffic accidents happen at night, and the largest share of road accidents (18.6%) is recorded between 6:00 pm and 9:00 pm [2].

Real-time estimation of vehicle-to-vehicle distance, and of relative and actual speed, from another moving vehicle is particularly difficult at night: objects and vehicles are not clearly visible, and only taillights and headlights can be seen (see figure 1).

Fig. 1 Only the Headlights and taillights are visible at night 

The estimation becomes even more difficult in the presence of an oncoming vehicle. By watching only its headlights, it is not possible to estimate a safe driving trajectory, so deciding whether an overtake is safe or risky is very hard.

A few researchers have tried to address the uncertainty of overtaking decisions during the daytime. Deciding whether an overtake is safe or risky is comparatively easy in daylight, when the road is clearly visible; at night it is much harder.

This situation confuses the driver, and if the driver takes the risk of overtaking, a serious road accident results in most cases. The distance and speed of an oncoming vehicle cannot be judged from far away at night, which is dangerous because its headlights alone do not reveal the gap between vehicles or the acceleration required for safe overtaking. The primary contributions of this paper are summarized as follows:

  • — The proposed method can estimate vehicle-to-vehicle distance, relative and actual speed of any vehicle from a moving test vehicle at night.

  • — The approach is designed for complex traffic situations where both oncoming and ongoing vehicles are running on a two-lane road.

  • — The vision-based assistance is made by observing only the visual parameters of taillights and headlights.

  • — Sensitive vision-based parameters of headlights and taillights are used to estimate distance and both actual and relative speed accurately.

  • — Parameters like road condition level, required speed and acceleration are used to take a safe overtaking decision for drivers.

  • — Unlike state-of-the-art techniques, no high-cost sensors are used for the estimations; a low-cost 2D camera is used in the proposed method.

  • — The proposed approach aims to provide assistance in minimum processing time.

The rest of the article is structured as follows. Section 2 describes the literature related to the proposed approach. Section 3 provides a brief overview of the general overtaking maneuver. Section 4 presents the proposed approach. Section 5 presents an evaluation of real-time empirical performance. Lastly, Section 6 presents the conclusion and describes the next steps for future research.

2 Literature Survey

A few scholars have attempted to estimate the distance and speed of vehicles on the road in real time. However, most attempts were made in the daytime and used high-cost sensors in fixed positions. To our knowledge, no study is available that assists a driver in making overtaking decisions at night. In this article, a novel and effective solution is presented to assist a driver in making a correct overtaking decision at night. A few daytime studies are discussed below.

Great et al. [3] and Wicaksono et al. [4] used the Gaussian Mixture Model along with filtering techniques and post-processing steps to extract the foreground from recorded video; the vehicle location is then determined in each frame and the speed is estimated from the distance travelled between frames.

Hua et al. [5] combined modern deep learning with classic computer vision approaches to predict vehicle speed efficiently. The speeds of vehicles are estimated using a static camera from the corner point movement within the track associated with the vehicle.

Giannakeris et al. [6] introduced a fully automatic camera calibration algorithm to estimate the speed of vehicles.

Koyuncu et al. [7] detected the speed of vehicles using a simple camera and image-processing software; the speed is calculated from the real-world distance covered within the camera field of view (FOV). Qi et al. [8] measured the distance between two vehicles based on the vehicle pose information obtained by monocular vision.

Javadi et al. [9] presented a video-based speed estimation method of vehicles using the movement-pattern vector as an input variable.

Weber et al. [10] analyzed and discussed the influence of the road anomalies and the vehicle suspension for tracking and distance estimation of vehicles.

Kim et al. [11] estimated the distance of a vehicle driving in front by extracting the aggregated channel features of frames using a single black-box camera. Moazzam et al. [12] presented a vehicle speed determination method from video using the "boundary box" technique.

Bastos et al. [13] and Bell et al. [14] used Convolutional Neural Networks (CNNs) and the "You Only Look Once" (YOLO) algorithm to detect vehicles and their distances.

Liu et al. [15] introduced a lightweight network to amend the feature extraction layer of YOLOv3 to estimate the distance of vehicles.

Vakili et al. [16] and Ali et al. [17] presented a single view geometry-based relative distance estimation algorithm using a camera and the geometric information of the camera system.

Zaarane et al. [18] introduced an inter-vehicle distance estimation method for self-driving based on a camera view field angle using a stereo camera, installed behind the rear-view mirror of the host vehicle.

A few researchers have attempted to solve the overtaking-decision problem for two-lane roads during the daytime; their work is discussed here.

Milanés et al. [19], Naranjo [20] and Basjaruddin et al. [21] proposed a fuzzy logic based automatic overtaking system using artificial intelligence and computer vision.

Shamir [22] presented an overtaking maneuver method that provides a minimum trajectory for changing lane during overtaking. The overtaking trajectory depends on enough initial velocity, the minimum distance to cover and minimum time needed by explicit formulas.

Sezer [23] offered a new design to solve the overtaking problem for bi-directional roads using Markov Decision Procedure that works based on Mixed Observability.

Groza et al. [24] and Vieira et al. [25] used a warning alert for drivers during overtaking maneuvers. The system depends on vehicle-to-vehicle communication technologies based on a vehicular Adhoc network (VANETS) and multi-agent systems.

Yu et al. [26] and Ngai et al. [27] used reinforcement learning practices to solve the problem of cooperative overtaking.

Vasic et al. [28] presented an algorithm based on a sensor fusion technique and cooperative tracking to take an overtaking decision for intelligent vehicles that are connected with a network.

Patra et al. [29] introduced an ITS based overtaking assistance system for drivers that provides real-time live video streaming from the front vehicle. Even if the driver is unable to view the road view due to any blocked vehicle in the front, this system provides a clear vision of the road ahead.

Ghumman et al. [30] presented a novel overtaking approach by changing the lane for an autonomous vehicle based on the on-line trajectory-generation method.

Raghavan et al. [31] and Wang et al. [32] presented the solution of the classic car overtaking problem by a control algorithm which minimizes the probability of collision with cars.

Németh et al. [33] introduced an overtaking driver assistance system for autonomous vehicles using a hierarchical function.

Barmpounakis et al. [34] developed an unconventional overtaking pattern for drivers who use powered two-wheeler by using a decision tree that is meta-optimized.

Zhou et al. [35] introduced Gap Acceptance Based Safety Assessment of Autonomous Overtaking Function. Elleuch et al. [36] proposed numerous cooperative overtaking assistance systems based on Vehicular Ad hoc Networks (VANET) to prevent collisions.

3 General Overtaking Maneuver

A normal overtaking maneuver is used to pass a slower or stationary vehicle in the same lane. It can be performed on bi-directional roads, including single carriageways where one wide lane carries vehicles in both directions. An overtaking maneuver consists of three phases:

  • a) diverting from the actual lane,

  • b) driving forward in the adjacent lane until the slower vehicle has been passed, and

  • c) returning to the actual lane (see figure 2(a)).

Fig. 2 Overtaking maneuver with the new position of the vehicle after the overtake: (a) normal overtaking maneuver; (b) overtaking maneuver in a difficult situation where another vehicle is coming from the opposite direction

This three-phase overtaking maneuver is quite simple if only one slow vehicle is in front of the overtaking vehicle and no other vehicle is present in either direction. However, if another vehicle is coming from the opposite direction (see figure 2(b)), the overtaking decision becomes difficult.

4 Proposed Approach

In this article, a novel overtaking assistance system is proposed for drivers. The proposed approach is completely vision-based and depends on the analysis of visual parameters of headlights and taillights at night.

This approach helps a driver make a safe and correct overtaking decision at night in the difficult situation where a slow vehicle is in front and another vehicle is approaching from the opposite direction. The overall framework is shown in figure 3.

Fig. 3 Overall framework of the proposed system 

4.1 Intensity and Color Level Slicing

A real-time frame at night is considered as the driver's view. A night-time frame of a busy two-lane single carriageway contains several headlights and taillights. To extract the headlights and taillights from the frames, intensity and color level slicing are applied. Color level slicing segments areas with a defined color, and intensity level slicing segments a defined range of light intensities in a frame [37,38]. The graphical illustration of intensity level slicing is shown in figure 4.

Fig. 4 Intensity level slicing to display lights with high-intensity range by a slicing plane 

For headlights, an intensity threshold T(int) = 220 is used. Similarly, for taillights a threshold on the amount of red in the RGB frame, T(red) = 200, is used. The luminosity technique (modelling human perception) is applied to evaluate the intensity of a pixel, where the green channel is given a higher weight than the red and blue channels. The luminosity is calculated as follows:

Intensity of a pixel: I(x,y) = Int(x,y) = 0.21R + 0.72G + 0.07B. (1)

The output frame is calculated as follows:

J(x,y) = I(x,y), if Int(x,y) >= T(int) = 220 or R(x,y) >= T(red) = 200; J(x,y) = 0, otherwise. (2)

Hence, the frame is transformed into a new frame that comprises only high-intensity lights (headlights) and red-colored lights (taillights).
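As an illustration, the slicing step of Eqs. (1)-(2) can be written compactly with NumPy. The sketch below assumes a BGR frame as delivered by OpenCV; the function name slice_lights and the returned boolean mask are illustrative choices, not part of the original method.

```python
import numpy as np

# Thresholds from the paper: T(int) = 220 for headlights, T(red) = 200 for taillights.
T_INT = 220
T_RED = 200

def slice_lights(frame_bgr):
    """Keep only pixels that are either very bright (headlight candidates)
    or strongly red (taillight candidates); zero out everything else (Eqs. 1-2)."""
    frame = frame_bgr.astype(np.float32)
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    # Luminosity method, Eq. (1): I = 0.21 R + 0.72 G + 0.07 B
    intensity = 0.21 * r + 0.72 * g + 0.07 * b
    mask = (intensity >= T_INT) | (r >= T_RED)      # Eq. (2)
    out = np.zeros_like(frame_bgr)
    out[mask] = frame_bgr[mask]
    return out, mask
```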

4.2 Clustering of High Intensity and Red Pixels

After finding the bright and red-colored pixels, clustering is applied to extract the exact positions of headlights and taillights along with their centroids. A traditional clustering method such as K-Means is not applicable here because the number of headlights, and hence the number of clusters, is unknown in advance. Therefore, DBSCAN [39] clustering is chosen. To avoid false headlights (e.g., street lights), the frame is divided into two zones by the skyline.

All the lights above the skyline are considered false lights, and the lights below the skyline are either taillights or headlights. In the region below the skyline, DBSCAN is applied with the following conditions:

  • i) The minimum radius of a cluster = 4 pixels.

  • ii) The minimum area (pixels) of a cluster = 8.

After clustering, three components are available: a) the set of clusters of all headlights (S1), b) the set of clusters of all taillights (S2), and c) the centroids of all clusters. Two-lane single-carriageway roads in India (driving style: "drive on the left") are selected for the real-time trials. Hence, it can be assumed that all ongoing vehicles are in the left lane and all oncoming vehicles are in the right lane. Thus, to extract features of headlights and taillights, the focus should be on the right and left halves of the frame, respectively.
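A minimal sketch of this clustering step is given below using scikit-learn's DBSCAN. Mapping the paper's minimum cluster radius and area onto the eps and min_samples parameters, and the skyline_row argument, are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_lights(mask, skyline_row):
    """Cluster bright/red pixels below the skyline into candidate lights.
    mask: boolean HxW array from the slicing step; skyline_row: assumed row index
    separating sky (ignored) from road. Returns a list of (centroid_row, centroid_col, area)."""
    ys, xs = np.nonzero(mask)
    keep = ys > skyline_row                  # lights above the skyline are treated as false lights
    pts = np.column_stack([ys[keep], xs[keep]])
    if len(pts) == 0:
        return []
    # eps ~ minimum cluster radius (4 px), min_samples ~ minimum cluster area (8 px) from the paper
    labels = DBSCAN(eps=4, min_samples=8).fit_predict(pts)
    clusters = []
    for lab in set(labels) - {-1}:           # label -1 is DBSCAN noise
        member = pts[labels == lab]
        cy, cx = member.mean(axis=0)
        clusters.append((cy, cx, len(member)))
    return clusters
```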

4.3 Feature Extraction

Some significant features are extracted from the clusters of headlights and taillights and their centroids. A pair of lights can be confirmed as either headlights or taillights when the two lights of a vehicle move identically (i.e., the two lights lie nearly on a horizontal line and move in the same direction). All pairs of lights are checked: if the horizontal distance between the two centroids changes by less than 10 pixels and the vertical distance by less than 5 pixels, the movement is called identical, and the pair is considered to be either a pair of headlights or a pair of taillights.

Now, a pair of lights can be identified explicitly as headlights by the following criteria:

  • i) The intensity of the lights should be very high (I >= T1, where T1 = 220 is used as the threshold value).

    Likewise, a pair of lights can be identified explicitly as taillights by the following criteria:

  • ii) The color of the lights should be reddish (R in RGB >= T2, where T2 = 200 is used as the threshold value).

    After distinguishing the headlights and taillights, the following three features are calculated as follows.

  • 1) The horizontal distance (Euclidean distance between two centroids) between every pair of lights is calculated as:

dist(a,b) = sqrt((bx - ax)² + (by - ay)²), (3)

  • where a = (ax, ay) and b = (bx, by) are the two centroids. A greater distance between the lights indicates that the vehicle is closer.

  • 2) The height of a pair of lights from the baseline is calculated as follows.

Height = r_max - r_centroid, (4)

  • where the baseline indicates the ground line, and r_max and r_centroid represent the extreme (bottom) row of the frame and the row of the centroid, respectively. A smaller height indicates that the vehicle is closer.

  • 3) Area of all the headlights and taillights:

A counter variable counts the high-intensity pixels in a headlight cluster. A larger pixel count indicates that the light occupies a bigger area in the frame and that the vehicle is closer.

Three sets of rules are defined to calculate the stated features of headlights and taillights. The rules defined for estimating the distance of a vehicle coming from the opposite direction using headlight position are shown in figure 5.

Fig. 5 Three defined rules for vehicle distance estimation in a frame with 1280×786 resolution; (a) Horizontal distance (centroid's Euclidean distance) between the two headlights; (b) Heights of two headlights in a frame and their equivalent distance from the baseline of the road; (c) Area of a headlight (i.e.-number of high-intensity pixels available) 
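The pairing test and the three per-pair cues described above can be sketched as follows. The representation of a light as a (row, column, area) tuple and the use of the bottom frame row as the baseline are assumptions made for illustration.

```python
import math

def is_identical_movement(prev_a, prev_b, cur_a, cur_b):
    """Pairing test from Section 4.3: the pair is kept if the horizontal separation
    of the two centroids changes by < 10 px and the vertical separation by < 5 px
    between consecutive frames (i.e., both lights move together). Each argument is (row, col)."""
    dh_prev, dv_prev = prev_b[1] - prev_a[1], prev_b[0] - prev_a[0]
    dh_cur,  dv_cur  = cur_b[1] - cur_a[1],  cur_b[0] - cur_a[0]
    return abs(dh_cur - dh_prev) < 10 and abs(dv_cur - dv_prev) < 5

def pair_features(light_a, light_b, frame_height):
    """Compute the three cues of Section 4.3 for a confirmed light pair.
    Each light is (row, col, area) as produced by the clustering step (an assumed representation).
    Returns (centroid_distance, height_from_baseline, mean_area)."""
    (ra, ca, area_a), (rb, cb, area_b) = light_a, light_b
    # Eq. (3): Euclidean distance between the two centroids; larger => vehicle is closer
    dist = math.hypot(cb - ca, rb - ra)
    # Eq. (4): height above the baseline, taking the frame's bottom row as the ground line
    r_centroid = (ra + rb) / 2.0
    height = (frame_height - 1) - r_centroid     # smaller => vehicle is closer
    return dist, height, (area_a + area_b) / 2.0
```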

4.4 Distance and Speed Estimation of Vehicles

After calculating the above three features, the distance of an oncoming vehicle can be estimated. The distance is estimated in the following three ways:

  • i) d1 = estimated distance from way1 (i.e., from horizontal distance between the two headlights),

  • ii) d2 = estimated distance from way2 (i.e., from heights of two headlights from baseline),

  • iii) d3 = estimated distance from way3 (i.e., from the area of the headlight).

The estimated distances may not be the same for all three procedures. Therefore, the three estimates are combined as follows:

final_distance (D) = (3 × avg_pair + single) / 4, (5)

where avg_pair = the average of the two closest estimates among d1, d2, and d3, and single = the estimate that does not belong to the closest pair.
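A small sketch of the fusion rule of Eq. (5) is given below; choosing the "closest pair" as the two estimates with the smallest gap between them is an interpretation of the text.

```python
def fuse_distance(d1, d2, d3):
    """Eq. (5): average the two closest of the three per-cue distance estimates,
    then blend that average with the remaining one using a 3:1 weight."""
    estimates = sorted([d1, d2, d3])
    # The pair with the smallest gap is taken as the "closest pair"
    if estimates[1] - estimates[0] <= estimates[2] - estimates[1]:
        pair, single = estimates[:2], estimates[2]
    else:
        pair, single = estimates[1:], estimates[0]
    avg_pair = sum(pair) / 2.0
    return (3.0 * avg_pair + single) / 4.0
```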

The speed of the vehicles is then estimated. To compute the relative speed of two vehicles, the vehicle-to-vehicle distance is estimated over the last ten samples, taken at intervals of ten frames. The difference between the distances obtained in the nth frame (current frame) and the (n+10)th frame is computed as follows:

D(n)difference = D(nth) + D((n+10)th) for oncoming vehicles; D(n)difference = D(nth) - D((n+10)th) for ongoing vehicles. (6)

Relative speed:

β = (Σ(n=1..10) D(n)difference / 10) × 3 × (60 × 60 / 1000) km/hr, (7)

where D(n)difference indicates the change of distance at every 10th frame. The relative speed is calculated for 30 fps video: a 10-frame interval corresponds to 1/3 second (hence the factor 3), and the factor 60 × 60 / 1000 converts metres per second into km/hr. The actual speeds of ongoing and oncoming vehicles are calculated as follows:

Actual speed: α = β + αT for ongoing vehicles; α = β - αT for oncoming vehicles, (8)

where β and αT indicate the relative speed and the actual speed of the test vehicle, respectively.

Let us consider a complex traffic situation with three vehicles at night. Vehicles "A" and "B" travel in the same direction and "C" is an oncoming vehicle in the adjacent lane, as shown in figure 6. The actual speeds of 'B' and 'C' are calculated as follows:

αB = βA,B + αA km/hr. (9)

Similarly, the actual speed of 'C':

αC = βA,C - αA km/hr. (10)
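The speed computation of Eqs. (6)-(10) can be sketched as follows, assuming 30 fps video and distance samples in metres taken every tenth frame. Using the magnitude of each distance change, rather than the per-direction sign convention of Eq. (6), is a simplification made for this illustration; the sign handling of Eq. (8) is kept as stated in the text.

```python
FPS = 30
FRAME_GAP = 10          # distance is sampled every 10th frame

def relative_speed_kmh(distances):
    """Eqs. (6)-(7): relative speed from ten distance samples taken FRAME_GAP frames apart.
    distances: list of 11 vehicle-to-vehicle distances in metres (current frame first).
    Returns the magnitude of the relative speed in km/h."""
    diffs = [abs(distances[i] - distances[i + 1]) for i in range(10)]   # change per 10-frame step
    metres_per_step = sum(diffs) / 10.0
    metres_per_second = metres_per_step * FPS / FRAME_GAP               # 10 frames = 1/3 s at 30 fps
    return metres_per_second * 3.6                                      # m/s -> km/h

def actual_speed_kmh(relative_kmh, test_vehicle_kmh, oncoming):
    """Eq. (8): recover the other vehicle's own speed from the relative speed and the
    test vehicle's speed (read directly from the host vehicle)."""
    return relative_kmh - test_vehicle_kmh if oncoming else relative_kmh + test_vehicle_kmh
```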

4.5 Features Formulation

After calculating all the distances and speeds, the mathematical formulation of the overtaking decision is constructed as follows.

Let us consider the adverse night-time situation of three vehicles (one vehicle in front in the same direction and another coming in the adjacent lane from the opposite direction of the test vehicle), as represented in figure 6.

Fig. 6 Overtaking in an adverse situation where one vehicle is in front of another and a third vehicle is coming in the adjacent lane from the opposite direction

Let the speed of a vehicle be represented by α and the relative speed of two vehicles by β. Let the relative speed of vehicles A and C be x km/hr. So:

βA,C = αA + αC = x km/hr, (11)

and let the relative speed of A and B be y km/hr, i.e.,

βA,B = αA - αB = y km/hr, (12)

so:

(αA + αC) - (αA - αB) = (x - y) km/hr, (13)

or:

αA + αC - αA + αB = (x - y) km/hr,

or:

αB + αC = (x - y) km/hr = (x - y) × 1000 / (60 × 60) m/sec, (14)

or:

αB + αC = (x - y) × 5/18 m/sec. (15)

This is the relative speed of B and C, i.e., the sum of the speeds of B and C.

Let the distances from A to C, dist(A,C), and from A to B, dist(A,B), be δ1 meters and δ2 meters, respectively. So:

dist(B,C) = dist(A,C) - dist(A,B) = δ1 - δ2, taken as (δ1 - δ2 + k1) meters, (16)

where k1 is a constant added for safety; here, k1 = 10 meters is used.

Accordingly, the time needed for B and C to cover the (δ1 - δ2 + k1) meter gap at a closing speed of 5(x - y)/18 meters/second is calculated from (15) and (16) as follows:

Required time T1 = ((δ1 - δ2 + k1) × 18) / ((x - y) × 5) seconds. (17)

Therefore, vehicle A has to overtake vehicle B within T1 seconds. The relative distance that needs to be covered is:

Relative distance to cover: δcover = δ2 + k2 meters, (18)

where k2 is an extra safety constant (the vehicle needs to cover the additional length of the two vehicles plus some angular distance for changing lane and returning to the previous lane). Here, k2 = 8 meters is used for safe overtaking. Therefore, the minimum relative speed of vehicle A over vehicle B is estimated as follows:

Required relative speed of A and B

βA,B1 = (δ2 + k2) / T1 m/s = ((δ2 + k2) × 18) / (T1 × 5) km/hr. (19)

So, vehicle A has to run βA,B1 km/hr faster than B's speed. The minimum speed required for A is calculated as follows:

Required speed of A = speed of B + βA,B1 km/hr. (20)

However, the current relative speed of A and B is y km/hr, so A is only y km/hr faster than B. The required increment of speed for vehicle A is therefore:

Required increment of speed = βA,B1 - y km/hr. (21)

If βA,B1 - y is negative, there is no need to increase the speed; otherwise, the speed must be increased by βA,B1 - y above the current speed throughout the trajectory for the overtake to be possible. Since the current speed of A is lower than required, some acceleration is needed. The amount of uniform acceleration is calculated from the standard equation of uniformly accelerated motion:

x = ut + (1/2)at², (22)

where x is the displacement, a the acceleration, t the time between the initial and final positions, and u the initial speed of vehicle A. The initial speed of vehicle A is known because A is the overtaking vehicle: since the system is installed in vehicle A, its speed can be retrieved easily. From (17), (18), and (22) we get:

Required acceleration a = 2(x - ut) / t² = 2((δ2 + k2) - T1 × speed of A) / T1² m/s². (23)
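The kinematic quantities of Eqs. (16)-(23) can be combined into one helper, sketched below. Converting the test vehicle's speed to m/s before applying Eq. (23), and assuming x > y (so the closing speed is positive), are choices made for this illustration.

```python
def overtaking_kinematics(delta1, delta2, x_kmh, y_kmh, speed_a_kmh, k1=10.0, k2=8.0):
    """Eqs. (16)-(23): time available, extra relative speed and uniform acceleration
    needed by test vehicle A to overtake B before meeting oncoming vehicle C.
    delta1, delta2: dist(A,C) and dist(A,B) in metres; x_kmh, y_kmh: relative speeds
    beta(A,C) and beta(A,B) in km/h; k1, k2: safety margins in metres from the paper."""
    closing_speed_ms = (x_kmh - y_kmh) * 5.0 / 18.0             # Eq. (15); assumes x > y
    t1 = (delta1 - delta2 + k1) / closing_speed_ms              # Eq. (17): time before B and C meet
    required_rel_kmh = (delta2 + k2) / t1 * 18.0 / 5.0          # Eq. (19): relative speed A needs over B
    speed_increment_kmh = max(0.0, required_rel_kmh - y_kmh)    # Eq. (21): extra speed beyond current
    u_ms = speed_a_kmh * 5.0 / 18.0
    # Eq. (23): uniform acceleration from x = u*t + a*t^2/2, with x = delta2 + k2 and t = T1
    acceleration = 2.0 * ((delta2 + k2) - u_ms * t1) / (t1 ** 2)
    return t1, required_rel_kmh, speed_increment_kmh, acceleration
```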

4.6 Road Condition Detection

For a correct overtaking decision, detection of the road condition is also essential. Here, the road condition level is estimated from the vertical displacement of the lights. A maximum displacement of 200 pixels is assumed when the road condition is extremely bad, with many speed breakers and potholes. The road condition level is mapped to the range 0 to 10 by normalizing the average displacement of the light centroids against 200, where 0 indicates a smooth road with no potholes and 10 indicates an extremely bad road with many potholes. The estimation is summarized in Pseudo Code 1.

Pseudo Code 1. Estimation of Road Condition 
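Since the body of Pseudo Code 1 is not reproduced here, the following sketch only illustrates the described computation: the average frame-to-frame vertical displacement of a tracked light centroid is normalised against the assumed 200-pixel worst case and mapped onto the 0-10 scale.

```python
MAX_DISPLACEMENT_PX = 200   # vertical jitter assumed to correspond to the worst road (level 10)

def road_condition_level(centroid_rows):
    """Rate the road from 0 (smooth) to 10 (very bad) from the frame-to-frame vertical
    displacement of a tracked light centroid.
    centroid_rows: row coordinate of one light centroid over consecutive frames."""
    if len(centroid_rows) < 2:
        return 0.0
    displacements = [abs(centroid_rows[i + 1] - centroid_rows[i])
                     for i in range(len(centroid_rows) - 1)]
    avg = sum(displacements) / len(displacements)
    # Normalise the average jitter against the assumed worst case and map onto 0-10
    return min(10.0, 10.0 * avg / MAX_DISPLACEMENT_PX)
```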

4.7 Overtaking Decision Based on Fuzzy Logic

The decision-making system is designed based on fuzzy logic using the Mamdani method [40]. The input membership functions of the proposed system are the road condition and the required acceleration.

The required acceleration is divided into ranges, where a value greater than 4 indicates a high acceleration that may be risky for driving. Figures 7(a) and 7(b) represent the membership functions of the required acceleration and the road condition.

Fig. 7 (a) Membership functions of required acceleration, (b) Membership functions of road condition level, (c) Membership functions of output 

Finally, three types of final decision are generated: a) overtake with confidence, b) overtake with caution, and c) don't overtake. Figure 7(c) represents the output membership functions.

The rules used in the decision-making system are listed in Table 1. A 3D visualization of the relationship between the inputs and the output of the decision-making system is shown in figure 8.

Table 1 Fuzzy rules for fuzzy decision 

Road Condition Level \ Required Acceleration | Below 2.0 (low) | 2.0 to 4.0 (medium) | Above 4.0 (high)
Below 2.5 (good) | Overtake | Overtake with Caution | Don't Overtake
2.5 to 7 (medium) | Overtake with Caution | Don't Overtake | Don't Overtake
Above 7 (bad) | Overtake with Caution | Don't Overtake | Don't Overtake

Fig. 8 3D visualization of the relationship of input and output 
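The paper implements this decision with a Mamdani fuzzy system; the sketch below is only a crisp lookup that reproduces the rule table of Table 1 under the Set 1 thresholds, not the fuzzy inference itself.

```python
def overtaking_decision(required_accel, road_level,
                        accel_bounds=(2.0, 4.0), road_bounds=(2.5, 7.0)):
    """Crisp approximation of Table 1 (Set 1 thresholds): map the required acceleration (m/s^2)
    and the road condition level (0-10) to one of the three output decisions."""
    accel = ("low" if required_accel < accel_bounds[0]
             else "medium" if required_accel <= accel_bounds[1]
             else "high")
    road = ("good" if road_level < road_bounds[0]
            else "medium" if road_level <= road_bounds[1]
            else "bad")
    rules = {
        ("good",   "low"):    "overtake",
        ("good",   "medium"): "overtake with caution",
        ("medium", "low"):    "overtake with caution",
        ("bad",    "low"):    "overtake with caution",
    }
    # every remaining combination maps to "don't overtake" in Table 1
    return rules.get((road, accel), "don't overtake")
```

For example, with the values of the 4th situation in Table 5 (required acceleration 2.85, road condition level 6), the lookup returns "don't overtake", matching the Set 1 fuzzy decision reported there.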

5 Real-Time Performance Evaluation

The proposed approach is tested in real time at night on two-lane single-carriageway roads, where the two lanes have no separation and vehicles run in both directions. Empirical tests are conducted in complex traffic situations with cars running in both directions. The accuracy, in terms of speed and distance estimation errors, is shown in Table 2.

Table 2 Accuracy of Distance and Speed Estimation 

Quantity | Vehicle | Estimated value | Ground-truth value (GPS) | Error | Mean error
Distance | 1 | 32 m | 32 m | 0 m | 1.1 m
Distance | 2 | 95 m | 94 m | -1 m |
Distance | 3 | 126 m | 128.5 m | +2.5 m |
Distance | 4 | 51 m | 50 m | -1 m |
Distance | 5 | 76 m | 77 m | +1 m |
Speed | 1 | 30 km/hr | 31 km/hr | +1 km/hr | 1.4 km/hr
Speed | 2 | 42 km/hr | 41 km/hr | -1 km/hr |
Speed | 3 | 50 km/hr | 52 km/hr | +2 km/hr |
Speed | 4 | 61 km/hr | 59 km/hr | -2 km/hr |
Speed | 5 | 54 km/hr | 53 km/hr | -1 km/hr |

No existing methods can accurately estimate the real-time distance and speed of a vehicle at night. However, a comparison between the proposed method and state-of-the-art techniques for distance and speed estimation (in terms of average percentage error over 25 frames) in real time is given in Table 3.

Table 3 Comparison of the proposed method with state-of-the-art techniques for distance and speed estimation 

Error (average of 25 frames, in %) of various distance estimation methods:
Javadi et al. [9]: 3.455 | Kim et al. [11]: 3.426 | Liu et al. [15]: 2.713 | Vakili et al. [16]: 2.713 | Ali et al. [17]: 2.374 | Zaarane et al. [18]: 1.84

Error (average of 25 frames, in %) of various speed estimation methods:
Great et al. [3]: 4.872 | Hua et al. [5]: 3.032 | Giannakeris et al. [6]: 4.945 | Koyuncu et al. [7]: 3.715 | Javadi et al. [9]: 2.707 | Moazzam et al. [12]: 2.425

A sample frame from a real-time experiment in a complex traffic scenario, where the distances and speeds of both ongoing and oncoming vehicles are estimated, is shown in figure 9 in two different view modes.

Fig. 9 a) A real-time input frame b) Distance and speed estimation c) Special mode with processed frame 

The proposed system has been tested in dark night-time conditions on a two-lane single-carriageway road with no separation between the lanes and vehicles running in both directions. In this test, a particular situation is considered where cars A and B run in the same direction and car C drives in the opposite direction in the right lane. The situation is expected to allow car A to overtake car B while accounting for car C.

The accuracy obtained in real-time observation is shown in Table 4. Over several real-time experiments, a total of 46 realistic decisions have been observed; a few of them are shown in Table 5.

Table 4 Real-time observation of overtaking accuracy 

Threshold value used | Total overtaking situations observed | Fuzzy decisions to overtake (Overtake + Overtake with caution) | Realistic safe overtaking scenarios observed | Failed estimations | Accuracy in %
Set 1 | 46 | 34 | 32 | 2 | 94.12%
Set 2 | 46 | 34 | 34 | 0 | 100%

Table 5 Real-time observation of overtaking decision 

Situation | Required acceleration (m/s²) | Road condition level | Fuzzy decision (Set 1) | Realistic scenario observed (Set 1) | Estimation (Set 1) | Fuzzy decision (Set 2) | Realistic scenario observed (Set 2) | Estimation (Set 2)
1st | 5.65 | 6 | Don't overtake | Safe, as no overtake | success | Don't overtake | Safe, as no overtake | success
2nd | 2.67 | 2 | Overtake with Caution | Safe, as overtaken carefully | success | Overtake with Caution | Safe, as overtaken carefully | success
3rd | 1.02 | 1 | Overtake | Safe overtake | success | Overtake | Safe overtake | success
4th | 2.85 | 6 | Don't overtake | Safe, as no overtake | success | Overtake with Caution | Safe, as overtaken carefully | success
5th | 3.95 | 2.4 | Overtake with Caution | Risky condition during overtaking | failure | Don't overtake | Safe, as no overtake | success
6th | 4.50 | 7 | Don't overtake | Safe, as no overtake | success | Don't overtake | Safe, as no overtake | success

The two sets of threshold values used in the real-time experiments are:

Set 1. Required acceleration: less than 2.0 (low), 2.0 to 4.0 (medium) and above 4.0 (high),

Road Condition Level: less than 2.5 (good), 2.5 to 7.0 (medium) and above 7.0 (bad).

Set 2. Required acceleration: less than 2.0 (low), 2.0 to 3.0 (medium) and above 3.0 (high),

Road Condition Level: less than 2.5 (good), 2.5 to 6.0 (medium) and above 6.0 (bad).

5.1 Analysis of Computation Time

Computation time is a decisive factor for driving assistance. Numerous real-time experiments reveal that the computation time is low enough for drivers to see the distance and speed of other vehicles in real time without any noticeable delay. The processing time is shown in figure 10.

Fig. 10 Processing time observation (CPU configuration: Intel(R) Core (TM) i7-10510U @ 2133 MHz, 8GB RAM, and 500 GB SSD) 

6 Conclusion and Future Work

A dark road at night is dangerous for driving because the driver cannot see other vehicles properly. Making a correct overtaking decision from the headlights and taillights of other vehicles alone is complicated for humans, and even more so for a machine. Until now, no decision-making algorithm has existed that can make a reliable overtaking decision at night in such adverse situations.

The proposed system is a novel vision-based overtaking decision-making technique for adverse night-time situations. Different situations are considered with three vehicles: a slower front vehicle, an oncoming vehicle from the opposite direction, and the test vehicle. The proposed system can estimate the vehicle-to-vehicle distance, vehicle speeds, road condition level, and the gap and acceleration required for safe overtaking in real time.

The system can make a fast and accurate overtaking decision in real time, based on the distance and speed of other vehicles and the required gap in the presence of an oncoming vehicle at night. The proposed system will be very helpful for safe overtaking on two-lane bidirectional roads during night-time driving, benefiting autonomous vehicles as well as manual driving.

As future work, more complex evaluation scenarios can be considered, such as additional traffic travelling on multiple lanes in different directions, or traffic participants other than cars (e.g., bikes). Moreover, the night-time vehicle-tracking algorithm would need to be extended to handle dark vehicles (e.g., parked vehicles) without any lights on.

Acknowledgments

The authors gratefully acknowledge the Research Laboratory and the Department of Computer Science & Engineering, National Institute of Technology Agartala, India, for providing an excellent research environment for conducting this research.

References

1. World Health Organization (2018). Violence and injury prevention and world health organization: Global status report on road safety 2018. Supporting a Decade of Action, Geneva.

2. Transport Research Wing (2017). Government of India, Road Accidents in India - 2017. New Delhi.

3. Great, J., Sopiak, D., Oravec, M., Pavlovicova, J. (2017). Vehicle speed detection from camera stream using image processing methods. 59th International Symposium ELMAR-2017, pp. 18-20. DOI: 10.23919/ELMAR.2017.8124468.

4. Wicaksono, D.W., Setiyono, B. (2017). Speed estimation on moving vehicle based on digital image processing. International Journal of Computing Science and Applied Mathematics, Vol. 3, No. 1, pp. 21-26. DOI: 10.12962/j24775401.v3i1.2117.

5. Hua, S., Kapoor, M., Anastasiu, D.C. (2018). Vehicle tracking and speed estimation from traffic videos. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 153-160. DOI: 10.1109/CVPRW.2018.00028.

6. Giannakeris, P., Kaltsa, V., Avgerinakis, K., Briassouli, A., Vrochidis, S., Kompatsiaris, I. (2018). Speed estimation and abnormality detection from surveillance cameras. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 93-936. DOI: 10.1109/CVPRW.2018.00020.

7. Koyuncu, H., Koyuncu, B. (2018). Vehicle speed detection by using camera and image processing software. International Journal of Engineering and Science, Vol. 7, No. 9, pp. 64-72. DOI: 10.9790/18130709036472.

8. Qi, S.H., Li, J., Sun, Z.P., Zhang, J.T., Sun, Y. (2019). Distance estimation of monocular based on vehicle pose information. Journal of Physics Conference Series, Vol. 1168, No. 3, pp. 1-7. DOI: 10.1088/1742-6596/1168/3/032040.

9. Javadi, Md.S., Dahl, M., Pettersson, M.I. (2019). Vehicle speed measurement model for video-based systems. Computers & Electrical Engineering, Vol. 76, pp. 238-248. DOI: 10.1016/j.compeleceng.2019.04.001.

10. Weber, Y., Kanarachos, S. (2019). The correlation between vehicle vertical dynamics and deep learning-based visual target state estimation: A sensitivity study. Sensors, Vol. 19, No. 22, pp. 1-28. DOI: 10.3390/s19224870.

11. Kim, J.B. (2019). Efficient vehicle detection and distance estimation based on aggregated channel features and inverse perspective mapping from a single camera. Symmetry, Vol. 11, No. 10, pp. 1-20. DOI: 10.3390/sym11101205.

12. Moazzam, G., Haque, M.R., Uddin, M.S. (2019). Image-based vehicle speed estimation. Journal of Computer and Communications, Vol. 7, No. 6, pp. 1-5. DOI: 10.4236/jcc.2019.76001.

13. da Silva-Bastos, M.E., Fidelis-Freitas, V.Y., Teles de Menezes, R.S., Maia, H. (2020). Vehicle speed detection and safety distance estimation using aerial images of Brazilian highways. Seminário Integrado de Software e Hardware (SEMISH), Sociedade Brasileira de Computação, pp. 258-268. DOI: 10.5753/semish.2020.11334.

14. Bell, D., Xiao, W., James, P. (2020). Accurate vehicle speed estimation from monocular camera footage. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. V-2-2020, pp. 419-426. DOI: 10.5194/isprs-annals-V-2-2020-419-2020.

15. Liu, J., Zhang, R. (2020). Vehicle detection and ranging using two different focal length cameras. Journal of Sensors, Vol. 2020, pp. 1-14. DOI: 10.1155/2020/4372847.

16. Vakili, E., Shoaran, M., Sarmadi, M.R. (2020). Single-camera vehicle speed measurement using the geometry of the imaging system. Multimedia Tools and Applications, Vol. 79, pp. 19307-19327. DOI: 10.1007/s11042-020-08761-5.

17. Ali, A., Hassan, A., Ali, A.R., Ullah-Khan, H., Kazmi, W., Zaheer, A. (2020). Real-time vehicle distance estimation using single view geometry. IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, pp. 1111-1120. DOI: 10.1109/WACV45572.2020.9093634.

18. Zaarane, A., Slimani, I., Okaishi, W.A., Atouf, I., Hamdoun, A. (2020). Distance measurement system for autonomous vehicles using stereo camera. Array, Vol. 5, pp. 1-7. DOI: 10.1016/j.array.2020.100016.

19. Milanés, V., Llorca, D.F., Villagrá, J., Pérez, J., Fernández, C., Parra, I., González, C., Sotelo, M.A. (2012). Intelligent automatic overtaking system using vision for vehicle detection. Expert Systems with Applications, Vol. 39, No. 3, pp. 3362-3373. DOI: 10.1016/j.eswa.2011.09.024.

20. Naranjo, J.E., González, C., García, R., de Pedro, T. (2008). Lane-change fuzzy control in autonomous vehicles for the overtaking manoeuvre. IEEE Transactions on Intelligent Transportation Systems, Vol. 9, No. 3, pp. 438-450. DOI: 10.1109/TITS.2008.922880.

21. Basjaruddin, N.C., Kuspriyanto, K., Saefudin, D., Rakhman, E., Ramadlan, A.M. (2015). Overtaking assistant system based on fuzzy logic. Telkomnika, Vol. 13, No. 1, pp. 76-84. DOI: 10.12928/TELKOMNIKA.v13i1.499.

22. Shamir, T. (2004). How should an autonomous vehicle overtake a slower moving vehicle: Design and analysis of an optimal trajectory. IEEE Transactions on Automatic Control, Vol. 49, No. 4, pp. 607-610. DOI: 10.1109/TAC.2004.825632.

23. Sezer, V. (2018). Intelligent decision making for overtaking manoeuvre using mixed observable Markov decision process. Journal of Intelligent Transportation Systems, Vol. 22, No. 3, pp. 201-217. DOI: 10.1080/15472450.2017.1334558.

24. Groza, A., Cara, C., Zaporojanz, S., Calmicov, I. (2016). Assisting drivers during overtaking using car-2-car communication and multi-agent systems. IEEE 12th International Conference on Intelligent Computer Communication and Processing (ICCP), pp. 293-299. DOI: 10.1109/ICCP.2016.7737162.

25. Vieira, A.S.S., Celestino Jr., J., Patel, A., Taghavi, M. (2013). Driver assistance system towards overtaking in vehicular Ad Hoc Network. The Ninth Advance Conference on Telecommunication (AICT), pp. 100-107.

26. Yu, C., Wang, X., Hao, J., Feng, Z. (2019). Reinforcement learning for cooperative overtaking. 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 341-349.

27. Ngai, D.C.K., Yung, N.H.C. (2011). A multiple-goal reinforcement learning method for complex vehicle overtaking manoeuvres. IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 2, pp. 509-522. DOI: 10.1109/TITS.2011.2106158.

28. Vasic, M., Lederrey, G., Navarro, I., Martinoli, A. (2016). An overtaking decision algorithm for networked intelligent vehicles based on cooperative perception. IEEE Intelligent Vehicles Symposium, pp. 1054-1059. DOI: 10.1109/IVS.2016.7535519.

29. Patra, S., Calafate, C.T., Canoz, J.C., Manzonix, P. (2015). An ITS solution providing real-time visual overtaking assistance using smartphones. 40th Annual IEEE Conference on Local Computer Networks, Clearwater Beach, pp. 270-278. DOI: 10.1109/LCN.2015.7366320.

30. Ghumman, U., Kunwar, F., Benhabib, B. (2008). Guidance-based on-line motion planning for autonomous highway overtaking. International Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 2, pp. 549-571. DOI: 10.21307/ijssis-2017-307.

31. Raghavan, A., Wei, J., Baras, J.S., Johansson, K.H. (2018). Stochastic control formulation of the car overtake problem. International Federation of Automatic Control, Vol. 51, No. 9, pp. 124-129. DOI: 10.1016/j.ifacol.2018.07.021.

32. Wang, F., Yang, M., Yang, R. (2009). Conflict probability estimation based overtaking for intelligent vehicles. IEEE Transactions on Intelligent Transportation Systems, Vol. 10, No. 2, pp. 366-370. DOI: 10.1109/TITS.2009.2020200.

33. Németh, B., Gáspár, P., Hegedus, T. (2018). Optimal control of overtaking manoeuvre for intelligent vehicles. Journal of Advanced Transportation, Vol. 2018, No. 1, pp. 1-11. DOI: 10.1155/2018/2195760.

34. Barmpounakis, E.N., Vlahogianni, E.I., Golias, J.C. (2018). Identifying predictable patterns in the unconventional overtaking decisions of PTW for cooperative ITS. IEEE Transactions on Intelligent Vehicles, Vol. 3, No. 1, pp. 102-111. DOI: 10.1109/TIV.2017.2788195.

35. Zhou, J., Tkachenko, P., del Re, L. (2019). Gap acceptance based safety assessment of autonomous overtaking function. IEEE Intelligent Vehicles Symposium (IV), pp. 2113-2118. DOI: 10.1109/IVS.2019.8814141.

36. Elleuch, I., Makni, A., Bouaziz, R. (2019). Cooperative overtaking assistance system based on V2V communications and RTDB. The Computer Journal, Vol. 62, No. 10, pp. 1426-1449. DOI: 10.1093/COMJNL/BXZ026.

37. Gaba, G.S., Singh, P., Singh, G. (2012). Implementation of image enhancement techniques. IOSR Journal of Electronics and Communication Engineering (IOSRJECE), Vol. 1, No. 2, pp. 20-23.

38. Jayaraman, S., Esakkirajan, S., Veerakumar, T. (2009). Digital image processing. McGraw Hill Education, New Delhi.

39. Ester, M., Kriegel, H.P., Sander, J., Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the 2nd Int. Conf. on Knowledge Discovery and Data Mining (KDD'96), pp. 226-231.

40. Mamdani, E.H. (1974). Application of fuzzy algorithms for control of a simple dynamic plant. Proceedings of the IEE, Vol. 121, No. 12, pp. 1585-1588. DOI: 10.1049/piee.1974.0328.

Received: October 20, 2020; Accepted: April 02, 2021

* Corresponding author: Gouranga Mandal. gourangamandal@yahoo.com

This is an open-access article distributed under the terms of the Creative Commons Attribution License.