Computación y Sistemas

On-line version ISSN 2007-9737; print version ISSN 1405-5546

Comp. y Sist. vol. 28, no. 2, Ciudad de México, Apr./Jun. 2024; Epub Oct 31, 2024

https://doi.org/10.13053/cys-28-2-5025 

Articles of the thematic section

Prescribed-Time Trajectory Tracking Control of Wheeled Mobile Robots Using Neural Networks and Robust Control Techniques

Jesús A. Rodríguez-Arellano1 

Víctor D. Cruz1 

Luis T. Aguilar1  * 

Roger Miranda-Colorado2  3 

1 Instituto Politécnico Nacional, Tijuana, Mexico. jrodriguez@citedi.mx, vcruz@citedi.mx.

2 Instituto Politécnico Nacional, CITEDI, Mexico. rmirandaco@conacyt.mx.

3 Cátedras CONAHCyT, Mexico.


Abstract:

This research presents a novel trajectory generation algorithm and the design of a prescribed-time controller for trajectory tracking tasks in autonomous vehicles. The trajectory generation algorithm combines computer vision techniques with intelligent lane detection methods using an on-board camera. Based on this information, a feasible trajectory that the vehicle should follow is generated. A prescribed-time controller is then developed and implemented to track the trajectory generated by the proposed methodology. The controller uses a hybrid structure in which a time-varying feedback controller transitions into a fixed-time controller. This approach achieves stabilization in the prescribed time regardless of the initial conditions. To address the trajectory design, a scaled autonomous vehicle simulator was used, and the prescribed-time controller was then evaluated against a finite-time controller and a dynamic feedback controller. The simulation results demonstrate the effectiveness of the trajectory generation and trajectory tracking control algorithms in real-world scenarios by examining two situations: the unperturbed and perturbed cases.

Keywords: Prescribed-time stabilization; trajectory generation; neural networks

1 Introduction

Autonomous vehicles are intensively researched due to their increasing scope of application, including surveillance tasks, space exploration, delivery, and transportation [1, 2, 3, 4]. These vehicle systems require information about their environment to correctly perform their tasks and avoid collisions with obstacles. The tasks these vehicles perform are classified into three main problems: trajectory tracking, path tracking, and point stabilization [5, 6].

All this information is collected by the sensors these systems are equipped with, such as LiDAR, ultrasonic sensors, GPS, and cameras. The information about the environment is processed and interpreted to perform the above-mentioned navigation tasks and avoid accidents and collisions [4, 7]. Therefore, the main focus of this study is the precise control of the vehicle's movement along a desired path, known as trajectory tracking, as well as the generation of such paths, known as trajectory generation. These topics have attracted significant attention in the field of autonomous vehicles [8], and a wide range of studies have been carried out on the generation of trajectories.

Prior studies [9, 10, 11, 12] have shown that it is possible to implement intelligent techniques for generating trajectories and segmenting lanes. A technique for lane line identification is proposed by the authors in [13]. This approach utilizes the RANSAC algorithm to aid in the detection of lane lines and is predicated on an adaptive region of interest extraction strategy.

In addition, convolutional neural network-based algorithms for lane line detection are proposed by Haixia and colleagues [14]. For training and validation purposes, their algorithms employ the TuSimple dataset. The implementation of a Neural Network (NN) using an on-board camera in [9] leads to favorable outcomes in the trajectory generation process.

Hart et al. [10] describe the implementation of intelligent methodologies to create a viable trajectory generator; their findings indicate that it is feasible to create trajectories using intelligent methodologies. In turn, Bellusci et al. [11] present a new approach to lane segmentation using neural networks and computer vision techniques to create a map of the surroundings; nevertheless, it does not explain the process of generating trajectories.

Besides, the system in [15] utilizes a convolutional neural network along with an auxiliary layer to detect the borders of lanes. The authors also suggest a straightforward algorithm to rectify the vehicle's orientation by utilizing the centroid of the drivable area. Unfortunately, no trajectory generation process is provided.

In addition, an autonomous navigation system using machine learning and computer vision techniques was developed in [16] for a scaled vehicle.

The system also incorporates a depth camera for localization. However, the algorithm's performance degrades in low-brightness scenarios. Furthermore, a study conducted by Neven et al. [17] focuses on the development of a control system for a Car-Like robot equipped with a vision system.

This system enables the robot to detect and monitor lanes on a road. The study’s findings indicate that the utilization of vision techniques leads to effective lane detection in practical situations.

After addressing the trajectory generation problem, the subsequent critical task is solving trajectory tracking. Numerous studies have tackled this challenge, but there are lingering issues in the field. Various control schemes, including feedback control strategies [18], Sliding Mode Control (SMC) [19], and decoupled approaches [20], have been explored.

Furthermore, these controllers consider different scenarios that encompass diverse convergence rates [20, 21], the effect of disturbances [19, 22, 23], and different kinematic models [18]. Cui et al. [24] introduced an adaptive control law within SMC, demonstrating exponential convergence to the trajectory.

The proposed methodology implements a decoupled approach to position and orientation tracking. Experimental results indicated its effectiveness for trajectory tracking despite disturbances. Nevertheless, that methodology relies on a simplified kinematic model, and the convergence rate is low. Lu et al. [21] proposed a fixed-time controller coupled with an observer under kinematic disturbances.

The controller accounted for signal saturation to prevent slipping, yielding satisfactory tracking results. However, the convergence was slow, the kinematic model employed was the simplified version, and the control structure is complex. In [25], the authors presented a prescribed-time containment controller coupled with a prescribed-time observer to achieve leader-follower tasks.

That study accounts for uncertainties and external disturbances by modeling the system as a chain of integrators. The results demonstrated good tracking performance and disturbance rejection. Research on trajectory tracking has thus been tackled from different perspectives, such as finite-time stability, fixed-time stability, and simplified kinematic models.

However, prescribed-time stability has not been widely studied for wheeled mobile robots (WMRs), although [25] proved that this methodology can be implemented in these systems, achieving a fast convergence rate and low tracking errors. It is therefore important to tackle this problem for WMRs, because these systems must attain a fast response in scenarios where convergence time is crucial.

Furthermore, trajectory generation has been studied from different perspectives; however, few studies address this crucial task with on-board cameras.

1.1 Contribution

Based on the previous literature review, the contribution of this research is to address the trajectory generation and trajectory tracking tasks for autonomous vehicles.

To address the trajectory generation problem, we develop a novel algorithm that combines computer vision techniques and NNs, enabling us to segment lanes and then design a feasible trajectory using an on-board camera.

We generate the trajectory using the Autominy simulator and propose a novel methodology. Furthermore, we design a new prescribed-time controller that drives the vehicle to the desired trajectory despite the effects of the disturbances.

This controller is composed of two stages: initially, a time-varying feedback control drives the system to a neighborhood of the origin; then it switches to a twisting controller that converges in fixed time to the origin.

To implement the proposed controller, we perform a coordinate transformation of the complete kinematic model of a Car-Like robot. Then, a series of simulations compares the proposed controller against a finite-time controller and a feedback controller. The tracked trajectory is the reference signal generated by the proposed algorithm.

The results demonstrate that the generated trajectory is a good option for trajectory generation problems, and that the proposed controller is a feasible option for trajectory tracking, showing superior performance against the compared control schemes. The main contributions are:

  • Development of a novel trajectory generation algorithm combining NNs and computer vision techniques for WMRs using an on-board camera.

  • Design of a novel prescribed-time controller using the complete kinematic model of a WMR that attains trajectory tracking despite the effect of kinematic disturbances.

  • Validation of the trajectory generated by the proposed algorithm by comparing the proposed controller with control schemes from the literature.

  • An exhaustive qualitative and quantitative study that demonstrates the superiority of the proposed controller against the finite-time and dynamic feedback controllers.

1.2 Organization

The subsequent sections are arranged in the following order: Section 2 describes the novel approach for generating trajectories, detailing both the intelligent method and the vision techniques. Section 3 provides a detailed explanation of the kinematic model of a WMR, including a coordinate transformation that allows for the implementation of a hybrid control scheme.

Section 4 develops the controller design that attains prescribed-time stabilization. The outcomes of the trajectory generation methodology are showcased in Section 5, alongside the evaluation of the controller formulated in Section 4. Finally, Section 6 provides the conclusions of this manuscript.

1.3 Notation

Trigonometric functions are abbreviated as cψ, sψ, tψ, which correspond to cos(ψ), sin(ψ), and tan(ψ), respectively. The function sign(σ) = σ/|σ| if σ ≠ 0, and sign(0) ∈ [−1, 1]. ℝ⁺ represents the positive real numbers.

2 Trajectory Generation

The generation of trajectories is an essential task for WMRs, owing to its diverse uses in real-world situations and the constraints it must satisfy for its tracking to be feasible. To this end, we propose the methodology presented in Fig. 1, which integrates computer vision techniques and neural networks, and we describe the proposed strategy in detail below.

Fig. 1 Trajectory generation methodology 

2.1 HybridNets Neural Network

The Neural Network employed is HybridNets [27, 28], an end-to-end perception network based on PyTorch. Its objective is to address the multi-task problem through segmentation and box-detection classification networks. Its main architecture comprises two networks, as depicted in Fig. 2.

Fig. 2 HybridNets architecture [29] 

The initial part of the system is the backbone, which uses the EfficientNet-B3 convolutional neural network architecture to extract characteristics from the input. This architecture scales the dimensions of depth, width, and resolution using a composite coefficient to obtain feature maps of the image.

The information extracted by the backbone is then passed on to the neck network, called EfficientDet, which uses a Weighted Bi-directional Feature Pyramid Network (BiFPN) module for image segmentation and object detection. The BiFPN module achieves this by creating bidirectional interconnections between network nodes. Each input feature is assigned an additional weight, allowing the network to determine the individual significance of each feature.
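As an illustration of this weighted ("fast normalized") fusion idea, the following minimal PyTorch sketch mirrors the BiFPN concept generically; it is a sketch of the mechanism, not HybridNets' actual implementation:

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fast normalized fusion of several same-shape feature maps, in the
    spirit of BiFPN: each input gets a learnable non-negative weight, so
    the network learns the individual significance of each feature."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)          # keep weights non-negative
        w = w / (w.sum() + self.eps)          # normalize so they sum to ~1
        return sum(wi * fi for wi, fi in zip(w, features))

# Fusing two feature maps from different pyramid levels
# (already resized to a common shape):
fuse = WeightedFusion(num_inputs=2)
p1, p2 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
fused = fuse([p1, p2])                        # shape (1, 64, 32, 32)
```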

2.2 Proposed Algorithm

To attain the trajectory generation task, we perform a series of steps detailed in Fig. 1, which allows us to finally obtain a feasible trajectory for a WMR using an on-board camera.

To this end, we provide a detailed description of this algorithm in this section, using the Gazebo simulator of the Autominy vehicle [31].

Image acquisition. The first step entails reading the input image captured by the vehicle’s built-in camera, as shown in Fig. 3.

Fig. 3 Image acquired from the on-board camera of the Autominy simulator 

This step can be realized through the available camera topics of the Robot Operating System (ROS), which is also used to control the vehicle.
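A minimal ROS sketch of this acquisition step follows; the node and topic names are assumptions for illustration, since the actual Autominy topics may differ:

```python
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
latest_frame = None  # most recent camera image as an OpenCV BGR array

def image_callback(msg):
    """Convert the incoming ROS image message to an OpenCV array."""
    global latest_frame
    latest_frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

rospy.init_node("trajectory_generator")
# Topic name is an assumption; check the simulator's published topics.
rospy.Subscriber("/sensors/camera/color/image_raw", Image, image_callback)
rospy.spin()
```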

Intelligent Methods for Lane Detection. The HybridNets NN is used for lane segmentation and lane detection. This scheme identifies and separates the lane observed by the vehicle's on-board camera. An important advantage of this NN is its ability to accurately identify the lane immediately in front of the vehicle, as shown in Fig. 4.

Fig. 4 Output image with HybridNets NN 
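According to the footnotes, the NN was converted to an ONNX model to speed up inference. A hedged sketch of that inference step with onnxruntime is shown below; the model file name, the 640×384 input size, the normalization, and the output ordering are assumptions:

```python
import cv2
import numpy as np
import onnxruntime as ort

# File name is an assumption; see the HybridNets ONNX repository.
session = ort.InferenceSession("hybridnets.onnx")
input_name = session.get_inputs()[0].name

def segment_lane(frame_bgr):
    """Run the network and return a per-pixel segmentation class map."""
    img = cv2.resize(frame_bgr, (640, 384))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    img = (img - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]  # ImageNet stats
    blob = img.transpose(2, 0, 1)[None].astype(np.float32)       # NCHW layout
    outputs = session.run(None, {input_name: blob})
    seg = outputs[-1]                  # assumed: last output = segmentation logits
    return np.argmax(seg[0], axis=0)   # class index per pixel
```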

Perspective Transformation. The output NN image undergoes a perspective transformation technique called “bird’s eye view”. This technique simulates the image being viewed from a higher angle, similar to a bird’s viewpoint [32].

Leveraging this transformation provides a comprehensive perspective of the lane and keeps both lane borders parallel, which simplifies the subsequent detection and segmentation processes. Figure 5 illustrates the bird's-eye-view technique.

Fig. 5 Bird-Eye-View perspective transformation 
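A minimal OpenCV sketch of this warp follows; the four source points are placeholders that must be calibrated for the actual camera mounting:

```python
import cv2
import numpy as np

def birds_eye_view(img):
    """Warp the on-board camera image to a top-down view so that both
    lane borders become parallel. src marks a trapezoid on the road
    plane; dst is the rectangle it should map to (illustrative values)."""
    h, w = img.shape[:2]
    src = np.float32([[0.10*w, h], [0.45*w, 0.60*h],
                      [0.55*w, 0.60*h], [0.90*w, h]])
    dst = np.float32([[0.20*w, h], [0.20*w, 0],
                      [0.80*w, 0], [0.80*w, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (w, h)), M  # keep M to invert later
```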

Maximum Drivable Area Acquisition. To determine the largest navigable region, we first examine the area segmented by the NN and identify the track with the largest extent, from which the desired path is generated. To achieve this, a mask is applied using the segmentation color produced by the intelligent method (the blue area).

All resulting segmented areas are then found, and the one with the maximum extent is computed. The largest area is selected and highlighted in green to proceed with the trajectory generation method.
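A sketch of this masking and largest-area selection with OpenCV follows; the HSV thresholds for the blue segmentation color are illustrative assumptions:

```python
import cv2
import numpy as np

def largest_drivable_area(seg_bgr):
    """Mask the segmentation color (blue), find all segmented regions,
    and highlight the one with the maximum area in green."""
    hsv = cv2.cvtColor(seg_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))   # blue range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return seg_bgr, None
    largest = max(contours, key=cv2.contourArea)              # maximum area
    out = seg_bgr.copy()
    cv2.drawContours(out, [largest], -1, (0, 255, 0), 2)      # green contour
    return out, largest
```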

Centroid Calculation and Reference Point Determination. The next step determines the centroid of the drivable area identified by the green contour in the segmented results, as depicted in Fig. 6. Afterward, five points are positioned to create the intended trajectory. First, two fixed points are placed at the same height as the ArUco marker on the Autominy vehicle; they serve as the initial positions for displaying the trajectory. In addition, two extra points are included, one located below and one above the calculated centroid. The reference points are adjusted according to the area determined by the NN, and they help determine the trajectory when navigating a curved lane based on the captured image.

Fig. 6 Procedure for determining the centroid of the available drivable area 
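The centroid and the five reference points can be computed along the following lines; the horizontal spread of the two fixed points and the vertical offsets are illustrative assumptions:

```python
import cv2

def reference_points(contour, aruco_y, offset=40):
    """Centroid of the drivable area (from image moments) plus the five
    points that define the intended trajectory."""
    m = cv2.moments(contour)
    if m["m00"] == 0:                    # degenerate contour: no area
        return []
    cx = int(m["m10"] / m["m00"])
    cy = int(m["m01"] / m["m00"])
    return [
        (cx - offset, aruco_y),          # fixed point at the marker's height
        (cx + offset, aruco_y),          # second fixed point, same height
        (cx, cy + offset),               # point below the centroid
        (cx, cy),                        # centroid of the drivable area
        (cx, cy - offset),               # point above the centroid
    ]
```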

Afterward, the trajectory corresponding to the five points mentioned earlier is visualized in Fig. 7. Additionally, a homography transformation converts the aerial-perspective image back to the initial image perspective [33].

Fig. 7 Reference points for path visualization 

This transformation depends on the ArUco position, which serves as a reference point. The procedure uses the marker's location and its pixel coordinates in the bird's-eye-view image to accurately convert them into the corrected-perspective pixel coordinates.

Figure 8 illustrates the process of translating the first trajectory to its equivalent on the vehicle’s perspective.

Fig. 8 Perspective correction 
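A sketch of this back-projection is given below; for simplicity it inverts the bird's-eye-view warp matrix directly, whereas the paper anchors the homography on the ArUco marker's position:

```python
import cv2
import numpy as np

def to_camera_perspective(points, M_bev):
    """Map trajectory points from bird's-eye-view pixel coordinates back
    to the original camera perspective using the inverse warp."""
    M_inv = np.linalg.inv(M_bev)                 # invert the 3x3 homography
    pts = np.float32(points).reshape(-1, 1, 2)   # shape required by OpenCV
    return cv2.perspectiveTransform(pts, M_inv).reshape(-1, 2)
```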

Path generation. Ultimately, we derive the equation that defines the reference trajectory. The outcomes of the trajectory generation phase utilizing our suggested approach are depicted in Fig. 9. The trajectory is determined by analyzing the image captured by the on-board camera.

Fig. 9 Results for trajectory generation 

Figure 10 illustrates the reference trajectory produced using the proposed methodology in conjunction with the Autominy simulator. The controller receives this trajectory as the reference input.

Fig. 10 Path generated using the proposed trajectory generation method 
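A minimal sketch of this fitting step with NumPy polynomials closes the pipeline; the degree and the time span are illustrative choices (the polynomials actually obtained are listed in Table 3):

```python
import numpy as np

def fit_reference_trajectory(points, t_span=(0.0, 1.13), degree=4):
    """Fit smooth polynomials x_d(t), y_d(t) through the reference points,
    parameterized over one time interval."""
    pts = np.asarray(points, dtype=float)
    t = np.linspace(t_span[0], t_span[1], len(pts))
    x_poly = np.polynomial.Polynomial.fit(t, pts[:, 0], degree)
    y_poly = np.polynomial.Polynomial.fit(t, pts[:, 1], degree)
    return x_poly, y_poly

# Usage: with five points, degree 4 interpolates them exactly;
# x_poly(0.5), y_poly(0.5) evaluate the reference at t = 0.5.
```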

3 Kinematic Model and Control Design

To tackle the trajectory tracking problem, we aim to design a controller that achieves prescribed-time stability to the desired trajectory. To this end, let us consider the full kinematic model of a Car-Like robot depicted in Fig. 11, where q(t) = [x(t), y(t), θ(t), ϕ(t)]ᵀ ∈ ℝ⁴ is the system's configuration vector, x(t), y(t) denote the position on the plane with respect to the world frame {x, y}, θ(t) is the orientation of the vehicle with respect to the x axis, and ϕ(t) is the steering angle of the front wheels. The kinematic model of the WMR is described as:

$$\dot{q}(t)=S(q)v(t)+d(t), \tag{1}$$

$$S(q)=\begin{bmatrix} c_\theta & 0\\ s_\theta & 0\\ \dfrac{t_\phi}{l} & 0\\ 0 & 1 \end{bmatrix},\qquad v(t)=\begin{bmatrix} v_1(t)\\ v_2(t)\end{bmatrix}, \tag{2}$$

Fig. 11 Description of the WMR’s kinematic model in the x, y plane 

where v(t) = [v1(t), v2(t)]ᵀ ∈ ℝ² is the control input vector, with v1(t), v2(t) the linear and angular velocities, and d(t) = [d1(t), d2(t), d3(t), d4(t)]ᵀ ∈ ℝ⁴ encompasses the disturbances, which are bounded and smooth up to their first time derivative [34].

Furthermore, the actuators generate bounded control signals. Hence, based on the previous statement we are able to consider the following assumption.

Assumption 1. There exist some positive constants D̄, d̄, V1⁺, V2⁺, and Φ such that:

$$\|d(t)\|\le \bar{D},\quad \|\dot{d}(t)\|\le \bar{d},\quad |v_1(t)|\le V_1^+,\quad |v_2(t)|\le V_2^+,\quad |\phi(t)|\le \Phi<\pi/2. \tag{3}$$

Then, we perform a coordinate transformation by defining the new output variable:

$$\zeta(t)=\begin{bmatrix}\zeta_1(t)\\ \zeta_2(t)\end{bmatrix}=\begin{bmatrix}x+l\,c_\theta+\delta\,c_{(\theta+\phi)}\\ y+l\,s_\theta+\delta\,s_{(\theta+\phi)}\end{bmatrix}, \tag{4}$$

with an arbitrary δ ≠ 0, which will be used to design the controller. Furthermore, to address the trajectory tracking problem, a reference kinematic model is required, described by:

$$\dot{q}_d(t)=S(q_d)v_d(t),\qquad S(q_d)=\begin{bmatrix} c_{\theta_d} & 0\\ s_{\theta_d} & 0\\ \dfrac{t_{\phi_d}}{l} & 0\\ 0 & 1\end{bmatrix},\qquad v_d(t)=\begin{bmatrix} v_{d1}(t)\\ v_{d2}(t)\end{bmatrix}, \tag{5}$$

where qd(t) = [xd(t), yd(t), θd(t), ϕd(t)]ᵀ ∈ ℝ⁴, and vd1(t), vd2(t) are the reference signals for q(t) and v1(t), v2(t), respectively. Moreover, we perform the same coordinate transformation as in (4):

$$\zeta_d(t)=\begin{bmatrix}\zeta_{d1}(t)\\ \zeta_{d2}(t)\end{bmatrix}=\begin{bmatrix}x_d+l\,c_{\theta_d}+\delta\,c_{(\theta_d+\phi_d)}\\ y_d+l\,s_{\theta_d}+\delta\,s_{(\theta_d+\phi_d)}\end{bmatrix}. \tag{6}$$
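For intuition (a step the paper delegates to [20]), differentiating the transformation (4) once along the disturbance-free kinematics (1)-(2), i.e., with d(t) = 0, gives:

$$\dot{\zeta}_1=v_1\Big(c_\theta-s_\theta t_\phi-\frac{\delta}{l}\,s_{(\theta+\phi)}t_\phi\Big)-\delta v_2\,s_{(\theta+\phi)},$$

$$\dot{\zeta}_2=v_1\Big(s_\theta+c_\theta t_\phi+\frac{\delta}{l}\,c_{(\theta+\phi)}t_\phi\Big)+\delta v_2\,c_{(\theta+\phi)},$$

so both control inputs act on ζ̇ whenever δ ≠ 0 and |ϕ| < π/2, which is what allows the tracking error defined next to obey second-order dynamics driven by the inputs.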

By following the procedure described in [20], along with the coordinate transformations (4) and (6) and the reference model (5), we define the tracking error ζ̃(t) = ζ(t) − ζd(t), which yields the following error dynamics:

$$\ddot{\tilde{\zeta}}(t)=A(\theta,\phi)\bar{u}(t)+\bar{A}(\theta,\phi)v(t)+\dot{\Gamma}(t)-A(\theta_d,\phi_d)\bar{u}_d(t)-\dot{A}(\theta_d,\phi_d)v_d(t), \tag{7}$$

where the structures of A(θ,ϕ), Ā(θ,ϕ), ū(t), Γ̇(t), ūd(t), Ȧ(θd,ϕd), and vd(t) can be consulted in [23].

We can now state the control objective: implement a hybrid control scheme that makes the tracking error ζ̃(t) converge to zero in a prescribed time despite disturbances.

Control objective. Considering the WMR's kinematic model (1), the coordinate transformation (4), and the error dynamics (7), design a control input v(t) such that the tracking error converges to zero in a prescribed time.

4 Control Design

In order to design the proposed controller, depicted in Fig. 12, we begin by presenting a general structure for the controller, which is then applied to the kinematic model (1) by employing the tracking error ζ̃(t) and the error dynamics (7) to achieve prescribed-time stability to the desired trajectory.

Fig. 12 General scheme of the proposed methodology 

4.1 General Structure

The control problem encompasses two stages. First, the system is directed towards an arbitrarily small attraction zone by means of a time-varying state feedback control law.

Once the system enters the attraction zone, a twisting controller is executed to ensure that it converges to the equilibrium point in prescribed time [35].

In order to address this issue, we begin by formulating a general second-order system with the following structure:

$$\dot{x}_1(t)=x_2(t),\qquad \dot{x}_2(t)=u(t,x)+w(t,x), \tag{8}$$

where x(t) = [x1, x2]ᵀ ∈ ℝ² is the state vector, u(t,x) is the control input, and w(t,x) encompasses smooth and bounded disturbances, such that |w(t,x)| ≤ W⁺, with W⁺ being the upper bound of the disturbances. To attain prescribed-time stability, we structure the control input as follows:

$$u=\begin{cases}u_1(t,x), & t<T_1\ \text{and}\ x\in\mathbb{R}^2\setminus G_R,\\ u_2(x), & \text{otherwise},\end{cases} \tag{9}$$

where GR = {x : V(x) ≤ R} is the attraction domain, and:

$$u_1(t,x)=-l_1(t)x_1(t)-l_2(t)x_2(t), \tag{10}$$

$$u_2(x)=-c_1\,\mathrm{sign}(x_1)-c_2\,\mathrm{sign}(x_2), \tag{11}$$

$$\begin{aligned} l_1(t)&=\mu^2(t)(2+m)(3+m)+k_1\mu(t)(2+m)+k_2\mu^{3+m}(t)(2+m)+k_1k_2\mu^{2+m}(t),\\ l_2(t)&=2\mu(t)(2+m)+k_1+k_2\mu^{2+m}(t), \end{aligned} \tag{12}$$

with m, k1, k2 ∈ ℝ⁺, and the function:

$$\mu(t)=\frac{1}{T_1+t_0-t}. \tag{13}$$

The control signal u1(t,x) employs the time-varying gains l1(t) and l2(t), which act during the first stage of the control strategy for t < T1. Then the twisting controller, represented by u2(x) with gains c1 > c2 > W⁺ > 0, is introduced once the trajectories of the system enter the attraction domain GR, defined by the parameter R and the time T1. Furthermore, this domain is set by the non-strict Lyapunov function:

$$V(x)=c_1|x_1(t)|+\frac{1}{2}x_2^2(t), \tag{14}$$

which ensures that the equilibrium point is finite-time stable, according to [36, Theorem 5.1]. The proposed controller attains prescribed-time stabilization by combining two methodologies: the first stage is a control structure with the time-varying gains (12) that drives the trajectories of the system to a vicinity of the origin defined by the time T1 and the attraction domain GR.

Once the trajectories reach this vicinity, the twisting controller (11) is introduced to reach the origin in fixed time under an admissible disturbance (for more details, refer to [35]). Based on the previous information, the controller's implementation for the WMR is described in the following subsection.
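To make the two-stage law concrete, the following minimal Python sketch simulates the general scheme (8)-(14) on the double integrator; all gains, the level R, and the disturbance are illustrative assumptions, not the tuned parameters used later for the WMR:

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's tuned gains)
T1, t0, m, k1, k2 = 1.0, 0.0, 1, 5.0, 5.0   # prescribed time, stage-1 gains
c1, c2, R = 4.0, 2.0, 0.5                   # twisting gains, attraction level

def mu(t):
    # Time-varying function (13); it blows up as t -> t0 + T1
    return 1.0 / (T1 + t0 - t)

def stage1_gains(t):
    # Time-varying gains (12)
    u = mu(t)
    l1 = u**2*(2+m)*(3+m) + k1*u*(2+m) + k2*u**(3+m)*(2+m) + k1*k2*u**(2+m)
    l2 = 2*u*(2+m) + k1 + k2*u**(2+m)
    return l1, l2

def V(x):
    # Non-strict Lyapunov function (14) defining the attraction domain
    return c1*abs(x[0]) + 0.5*x[1]**2

def u_hybrid(t, x):
    # Hybrid law (9): time-varying feedback outside the attraction domain
    # and before T1; twisting controller (11) otherwise
    if t < T1 and V(x) > R:
        l1, l2 = stage1_gains(t)
        return -l1*x[0] - l2*x[1]
    return -c1*np.sign(x[0]) - c2*np.sign(x[1])

# Forward-Euler run of (8) with a bounded disturbance |w| <= 0.1 < c2
dt, x = 1e-5, np.array([1.0, -0.5])
for k in range(int(3.0/dt)):
    t = k*dt
    w = 0.1*np.sin(2*t)
    x = x + dt*np.array([x[1], u_hybrid(t, x) + w])
print(x)  # the state should end near the origin
```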

4.2 Wheeled Mobile Robot Controller

To implement the hybrid control scheme (9), we first define the following state variables:

$$x_1(t)=\tilde{\zeta}_1(t),\qquad x_2(t)=\dot{\tilde{\zeta}}_1(t), \tag{15}$$

$$x_3(t)=\tilde{\zeta}_2(t),\qquad x_4(t)=\dot{\tilde{\zeta}}_2(t). \tag{16}$$

Thus, we can rewrite the error dynamics (7) by using definitions (15) and (16), which yields two decoupled second-order systems:

$$\Upsilon_1:\begin{cases}\dot{x}_1(t)=x_2(t),\\ \dot{x}_2(t)=u_1(t,x)+w_1(t,x),\end{cases} \tag{17}$$

$$\Upsilon_2:\begin{cases}\dot{x}_3(t)=x_4(t),\\ \dot{x}_4(t)=u_2(t,x)+w_2(t,x),\end{cases} \tag{18}$$

where u1(t,x) and u2(t,x) are the control inputs retrieved from:

$$u(t)=A(\theta,\phi)\bar{u}(t), \tag{19}$$

being:

$$u=[u_1(t,x),u_2(t,x)]^T,$$

$$w_1=\dot{a}_{11}v_1+\dot{a}_{12}v_2+\dot{\lambda}_1-a_{d11}\dot{v}_{d1}-a_{d12}\dot{v}_{d2}-\dot{a}_{d11}v_{d1}-\dot{a}_{d12}v_{d2},$$

$$w_2=\dot{a}_{21}v_1+\dot{a}_{22}v_2+\dot{\lambda}_2-a_{d21}\dot{v}_{d1}-a_{d22}\dot{v}_{d2}-\dot{a}_{d21}v_{d1}-\dot{a}_{d22}v_{d2}, \tag{20}$$

where the wi(t,x) are considered smooth and bounded disturbances, such that |wi(x,t)| ≤ Mi⁺ with i ∈ {1, 2}. According to (9), the hybrid controller is structured as [35]:

$$u_i=\begin{cases}u_{1i}, & t<T_{1i}\ \text{and}\ x\in\mathbb{R}^2\setminus G_{Ri},\\ u_{2i}, & \text{otherwise},\end{cases} \tag{21}$$

where T1i > 0 is a design parameter, and:

$$u_{11}=-l_{11}(t)x_1(t)-l_{12}(t)x_2(t), \tag{22}$$

$$u_{12}=-l_{21}(t)x_3(t)-l_{22}(t)x_4(t), \tag{23}$$

$$u_{21}=-c_{11}\,\mathrm{sign}(x_1)-c_{12}\,\mathrm{sign}(x_2), \tag{24}$$

$$u_{22}=-c_{21}\,\mathrm{sign}(x_3)-c_{22}\,\mathrm{sign}(x_4), \tag{25}$$

$$G_{R1}=\{x: V(x_1,x_2)\le R_1\}, \tag{26}$$

$$G_{R2}=\{x: V(x_3,x_4)\le R_2\}. \tag{27}$$

The control u1i(t,x) is a linear time-varying state feedback with positive time-varying gains l1i(t), l2i(t) with the structure defined in (12), and the control structure u2i(x) attains finite-time stability to the origin according to the stability analysis developed with the non-strict Lyapunov function (14) [36].

Finally, in order to recover the control inputs v1(t) and v2(t), we use the definitions of A(θ,ϕ) and v̇(t) from (19), which yields:

$$v(t)=\begin{bmatrix}v_1(t)\\ v_2(t)\end{bmatrix}=\begin{bmatrix}\int_0^{t}\dot{v}_1(\tau)\,d\tau\\ \int_0^{t}\dot{v}_2(\tau)\,d\tau\end{bmatrix}, \tag{28}$$

where ū(t) = A⁻¹(θ,ϕ)u(t). Therefore, we synthesize the stated controller in the following theorem.

Theorem 1 [35]. Consider the error dynamics (7) and assume that Assumption 1 holds. Then the controller (19), (21)-(28), with the time-varying gains (12) and the function (13), ensures trajectory tracking in prescribed time.

5 Numerical Results

To assess the proposed approaches for trajectory generation and trajectory tracking tasks, we conducted simulations using MATLAB/SIMULINK® for trajectory tracking and the Autominy simulator for trajectory generation [4]. The trajectory generated via the methodology described in Section 2 is illustrated in Fig. 13.

Fig. 13 Reference points for path visualization 

To assess the performance of the proposed methodology, named PTC, we consider the controllers described in references [18] and [20]. We choose the dynamic feedback controller [18] (named DFC) due to its simple structure and because it implements the complete kinematic model of a Car-Like robot.

Furthermore, we also employ [20] (referred to as FTC) because it utilizes the full kinematic model of the Car-Like robot, attains the trajectory tracking problem by using a decoupling approach, and achieves finite-time stability. The following cases are considered for the validation of the PTC controller:

  • C1: No kinematic disturbances act on the model, with initial conditions [x(0), y(0), θ(0), ϕ(0)]ᵀ = [1, 2, arctan(π/2), 0]ᵀ and l = 0.255 [m].

  • C2: The effect of kinematic disturbances is introduced, with d1(t) = 0.05 + 0.05 sin(2t), d2(t) = 0.05 − 0.05 cos(2t), d3(t) = 0.05, d4(t) = 0.05, under the initial conditions of case C1.

The gains considered for the DFC are kpi = 343, kvi = 147, kai = 21, and for the FTC, k1 = 35, k2 = 25, k3 = 5, and k4 = 7. Finally, the parameters for the PTC's time-varying gains are T1i = 1, k11 = 5, k12 = 5, k21 = 25, k22 = 15, mi = 1, and for the twisting controller, c11 = 275, c12 = 495, c21 = 770, c22 = 715, with δ = 0.03. The simulation in MATLAB/Simulink was performed with a sampling time of 1×10⁻⁴ seconds and the Runge-Kutta algorithm as the solver.

Fig. 13 depicts the final trajectory generated with the procedure detailed in Section 2. The red marks are the samples gathered from the Autominy simulator, and the blue line represents the smooth trajectory generated from the time intervals and the red marks. The equations that represent the motion in each time interval are shown in Table 3 (see Appendix A) and are depicted in generalized coordinates in Fig. 14.

Table 1 Performance indexes IAE, ITSE and ISV for the WMR for case C1

Controller | IAE x̃ | IAE ỹ | IAE θ̃ | IAE ϕ̃ | ISV
PTC | 0.653 | 0.168 | 0.21 | 0.245 | 149.2
DFC | 1.415 | 0.392 | 0.314 | 0.299 | 28.44
FTC | 0.939 | 0.26 | 0.522 | 1.18 | 45.03

Controller | ITSE x̃ | ITSE ỹ | ITSE θ̃ | ITSE ϕ̃
PTC | 0.119 | 0.006 | 0.008 | 0.148
DFC | 0.29 | 0.038 | 0.019 | 0.111
FTC | 0.264 | 0.033 | 0.207 | 3.242

Table 2 Performance indexes IAE, ITSE and ISV for the WMR for case C2

Controller | IAE x̃ | IAE ỹ | IAE θ̃ | IAE ϕ̃ | ISV
PTC | 0.697 | 0.268 | 0.62 | 0.455 | 190.1
DFC | 1.373 | 0.447 | 1.01 | 2.03 | 169.5
FTC | 0.936 | 0.371 | 0.544 | 1.147 | 44.47

Controller | ITSE x̃ | ITSE ỹ | ITSE θ̃ | ITSE ϕ̃
PTC | 0.133 | 0.017 | 0.253 | 0.312
DFC | 0.49 | 0.04 | 1.35 | 16.26
FTC | 0.263 | 0.171 | 0.159 | 2.59

Table 3 Polynomials for trajectory reference

$0\le t<1.13$:
$x_d(t)=0.0776t^7+0.4971t^6-1.7644t^5+1.3442t^4+0.1t+1.668$
$y_d(t)=0.0074t^7+0.0471t^6-0.167t^5+0.1274t^4+9\times10^{-3}t+0.4869$

$1.13\le t<2.14$:
$x_d(t)=0.004t^7+0.88t^6-8.37t^5+31.73t^4-62.0045t^3+66.23t^2-36.73t+9.68$
$y_d(t)=5\times10^{-4}t^7+752\times10^{-4}t^6-707.7\times10^{-3}t^5+2.638t^4-5.069t^3+5.328t^2-2.909t+1.1545$

$2.14\le t<3.23$:
$x_d(t)=0.01t^6+0.1552t^5-0.97t^4+3.187t^3-5.860t^2+5.92t-1.2013$
$y_d(t)=15.8\times10^{-3}t^6+246\times10^{-3}t^5-1.569t^4+5.262t^3-9.797t^2+9.625t-3.38$

$3.21\le t<4.32$:
$x_d(t)=2\times10^{-4}t^7+0.175t^6-3.68t^5+31.51t^4-141.87t^3+353.27t^2-464.53t+251.09$
$y_d(t)=15.2\times10^{-3}t^6+0.327t^5-2.91t^4+13.672t^3-35.83t^2+49.682t-27.951$

$4.32\le t<5.31$:
$x_d(t)=0.5t^6+13.8t^5+156.6t^4-942.8t^3+3172.7t^2-5659.4t+4181.8$
$y_d(t)=36\times10^{-4}t^6-0.1t^5+1.161t^4-7.117t^3+24.431t^2-44.536t+34.212$

$5.31\le t<6.81$:
$x_d(t)=0.1t^6-2.9t^5+41.1t^4-308.7t^3+1294.2t^2-2870.9t+2635.1$
$y_d(t)=4\times10^{-4}t^6+12.2\times10^{-3}t^5-0.1618t^4+1.1245t^3-4.292t^2+8.483t-6.1832$

$6.81\le t<7.83$:
$x_d(t)=19t^5+324t^4-2994t^3+15515t^2-42699t+48755$
$y_d(t)=19t^5+324t^4-2994t^3+15515t^2-42699t+48755$

$7.83\le t<9$:
$x_d(t)=8t^5+152t^4-1638t^3+9883t^2-31714t+42282$
$y_d(t)=0.2t^5+4.4t^4-48.1t^3+292.1t^2-944.8t+1270.5$

$9\le t<10.03$:
$x_d(t)=3t^5+57t^4-672t^3+4435t^2-15499t+22397$
$y_d(t)=0.2t^5-4.1t^4+47.1t^3-296.7t^2+980.6t-1321$

$10.03\le t<11.25$:
$x_d(t)=0.8t^5+19.1t^4-242.5t^3+1712.4t^2-6342.8t+9588.1$
$y_d(t)=2t^5+64t^4-896t^3+7053t^2-29575t+51615$

$11.25\le t<12.26$:
$x_d(t)=80t^4+1460t^3-14040t^2+71590t-150960$
$y_d(t)=10t^5+210t^4-3150t^3+26780t^2-121230t+228190$

$12.26\le t<13.34$:
$x_d(t)=3t^4+79t^3-2077t^2+17320t-51070$
$y_d(t)=2t^5+64t^4-915t^3+7138t^2-28147t+42243$

$13.34\le t<14.51$:
$x_d(t)=t^5-21t^4+344t^3-3102t^2+14569t-27559$
$y_d(t)=t^5-21t^4+344t^3-3102t^2+18543t-35323$

Fig. 14 Desired trajectory represented in generalized coordinates x(t), y(t), θ(t) and ϕ(t) 

Fig. 15 depicts the tracking errors in generalized coordinates. It can be observed that the FTC and PTC controllers converge faster than the DFC. Also, the proposed controller reaches the vicinity of the origin in t < 1 [s], as highlighted by the orange area; during this transitory stage the time-varying feedback control scheme is used, which afterwards switches to the twisting controller.

Fig. 15 Tracking errors in x˜, y˜, θ˜ and ϕ˜ for case C1 

This behavior is also present in y˜, where the FTC and the PTC present the fastest responses, followed by the DFC. The PTC controller exhibits a slight overshoot; however, it approaches the vicinity of the origin at t<1[s] within the orange region.

Finally, in ϕ̃ the PTC controller obtains the fastest convergence compared with the DFC and FTC controllers; however, it also exhibits the highest overshoot. Furthermore, in 10 ≤ t ≤ 14.15 the proposed controller achieves the lowest overshoots compared with the FTC and DFC controllers; this behavior can be attributed to the alterations in the reference signal for ϕ(t), depicted in Fig. 14, during this specific period.

Furthermore, Fig. 16 depicts the control signals generated by the controllers in comparison with the desired control inputs v1(t) and v2(t), represented by black dashed lines. In v1(t), the FTC controller exhibits the highest overshoot compared with the PTC and DFC controllers; this behavior is associated with the fastest convergence of the FTC's tracking x̃ in Fig. 15, attributed to the presence of the SMC in the x coordinate.

Fig. 16 Control signals v1(t) and v2(t) generated for case C1 

However, the PTC achieves the fastest tracking response. For v2(t), the PTC controller generates a large control signal before the commutation to the twisting controller, after which it generates a noisy control signal that is directly related to the tracking error of ϕ(t) in Fig. 15. Moreover, the PTC controller transitions to the twisting control scheme in less than 1 second.

As observed in the signal, the PTC controller exhibits chattering due to the presence of the twisting controller. In addition, all the control signals behave similarly after the transient stage.

Figure 17 depicts the tracking errors for case C2 in the x, y, θ and ϕ coordinates. It can be observed that x̃ behaves similarly to case C1, with the FTC and PTC controllers achieving the fastest response even when disturbances are considered. Regarding ỹ(t), the FTC and PTC controllers also achieve the fastest response; nevertheless, the performance of the FTC and DFC controllers is diminished by greater oscillations.

Fig. 17 Tracking errors in x˜, y˜, θ˜ and ϕ˜ for case C2 

Furthermore, it is noteworthy that the time-varying stage of the PTC commutes to the twisting controller before the orange-colored area ends, where the twisting control signal is introduced; thereafter, the error signal remains in a neighborhood of zero.

For θ̃, the fastest response is achieved by the PTC and FTC control schemes, but the PTC controller exhibits fewer oscillations. It can also be observed that the DFC controller generates a greater error at the end of the simulation.

For ϕ˜(t), the largest overshoot was generated by the PTC due to the time-varying stage and the control signal v2(t), which is observed at the beginning of the orange-colored area.

The PTC maintains the lowest tracking errors in this coordinate with the lowest oscillations even in the presence of disturbances.

Conversely, the DFC and FTC controllers present large oscillations during the trajectory tracking. The tracking error ϕ̃(t) is associated with the control signal v2(t) depicted in Fig. 18, where it is corroborated that the DFC controller generated large control signals. On the other hand, the PTC controller generated the largest overshoot while the time-varying gains were active, i.e., in the orange-colored area at t < 1 [s]; afterwards, the proposed scheme exhibits chattering due to the properties of the twisting controller and the SMC; however, its magnitude is small.

Fig. 18 Control signals v1(t) and v2(t) generated for case C2 

For v1(t), the PTC controller achieved the fastest response, with a slight overshoot compared with the FTC controller. The DFC controller generated the lowest overshoot and the slowest response. In addition, it is clear that the proposed PTC controller achieves the fastest response and the lowest oscillations in comparison with the DFC and FTC controllers.

Hence, the aforementioned findings illustrate that the PTC controller achieves the fastest response and the lowest oscillations, despite the existence of disturbances. Therefore, it can be inferred that the PTC controller excels over the DFC and FTC controllers.

In addition, a quantitative analysis was conducted to supplement the qualitative analysis. Tables 1 and 2 report the IAE (Integral of Absolute Error) and ITSE (Integral of Time-weighted Squared Error) for evaluating the tracking error of the controllers, while the ISV (Integral of Squared control signal) quantifies the control signals [20]. Table 1 displays the quantitative tracking errors and ISV for case C1, while Table 2 showcases the performance indexes for case C2.
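For reference, the sketch below computes the three indexes from sampled signals; the trapezoidal rule is our assumption, since the paper does not specify the quadrature:

```python
import numpy as np

def performance_indexes(t, e, v):
    """IAE and ITSE for a tracking-error signal e(t), and ISV for a
    control signal v(t), all sampled on the time grid t."""
    iae = np.trapz(np.abs(e), t)    # Integral of Absolute Error
    itse = np.trapz(t * e**2, t)    # Integral of Time-weighted Squared Error
    isv = np.trapz(v**2, t)         # Integral of Squared control signal
    return iae, itse, isv
```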

Regarding case C1, the PTC controller achieves the lowest tracking errors in every coordinate, except for ϕ(t), where, in terms of ITSE, the DFC controller achieves a lower value (see Table 1). These quantities indicate that the PTC controller stands out with the best performance, achieving the lowest tracking errors with respect to the DFC and FTC controllers.

In addition, for case C2, the FTC controller demonstrated the smallest tracking error in θ(t), while the PTC controller exhibited the smallest tracking errors in the remaining coordinates, thus demonstrating its superior performance in comparison with the other controllers.

Based on the preceding qualitative and quantitative analysis, we can deduce that the PTC controller demonstrates superior performance in both disturbed and undisturbed scenarios, achieving the fastest response and lowest oscillations with respect to the DFC and FTC controllers. Moreover, the proposed control strategy achieved the lowest tracking errors in terms of ITSE and IAE in general, thus underscoring its performance. Furthermore, we have observed that the trajectory generation algorithm is an effective solution for the task of generating trajectories.

This is because it successfully fulfills the non-holonomic constraints of WMRs. The simulations conducted with the controllers have demonstrated that the generated trajectories can be tracked in real-world scenarios, even in the presence of disturbances.

Appendix A. Trajectory Generation

The trajectory generation algorithm obtained specific points from the lane detections, which were subsequently analyzed to calculate two polynomials in the x, y plane. For the trajectory to be feasible for a Car-Like robot [17], these polynomials must be smooth and continuous. The procedure for designing the equations is described in [37], resulting in the polynomial segments presented in Table 3.

6 Conclusions

The manuscript introduced an innovative approach for generating trajectories in vehicular systems by utilizing an on-board camera, computer vision techniques, and intelligent algorithms.

The proposed methodology was evaluated by employing three controllers that successfully achieved the trajectory tracking task. In addition, a prescribed-time controller was introduced for the purpose of trajectory tracking tasks. The performance of this controller was evaluated by comparing it to two controllers previously discussed in the literature.

Employing the simulation capabilities of MATLAB/SIMULINK, the prescribed-time controller demonstrated superior performance, showcasing its resilience and adaptability in maintaining the desired trajectory irrespective of the presence of disturbances.

The proposed methodology demonstrated its key feature of adjusting the convergence rate and its ability to withstand disturbances. Therefore, this study introduced a novel framework that integrates two components to tackle significant challenges in vehicular systems: the prescribed-time controller and the trajectory generation algorithm.

Future research ought to emphasize enhancing the prescribed-time controller by reducing or modifying the control structure to mitigate chattering and minimize the overshoot of the control signal v2(t).

Furthermore, the trajectory generation algorithm can be improved by utilizing optimization algorithms to refine the desired control signals in relation to the generalized coordinates.

Acknowledgments

This work was supported in part by Instituto Politécnico Nacional under Grant SIP 2024-0018 and the program “Investigadoras e Investigadores por México”, Cátedras CONAHCYT, Project No. 537.

References

1. Zhang, J., Li, S., Meng, H., Li, Z., Sun, Z. (2023). Variable gain based composite trajectory tracking control for 4-wheel skid-steering mobile robots with unknown disturbances. Control Engineering Practice, Vol. 132, pp. 105428. DOI: 10.1016/j.conengprac.2022.105428.

2. Li, L., Cao, W., Yang, H., Geng, Q. (2022). Trajectory tracking control for a wheel mobile robot on rough and uneven ground. Mechatronics, Vol. 83, pp. 102741. DOI: 10.1016/j.mechatronics.2022.102741.

3. Gao, H., Chen, C., Ding, L., Li, W., Yu, H., Xia, K., Liu, Z. (2017). Tracking control of WMRs on loose soil based on mixed H2/H∞ control with longitudinal slip ratio estimation. Acta Astronautica, Vol. 140, pp. 49–58. DOI: 10.1016/j.actaastro.2017.07.037.

4. Olayode, I. O., Du, B., Severino, A., Campisi, T., Alex, F. J. (2023). Systematic literature review on the applications, impacts, and public perceptions of autonomous vehicles in road transportation system. Journal of Traffic and Transportation Engineering (English Edition), Vol. 10, No. 6, pp. 1037–1060. DOI: 10.1016/j.jtte.2023.07.006.

5. Wang, C. (2011). A novel variable structure theory applied in design for wheeled mobile robots. Artificial Life and Robotics, Vol. 16, No. 3, pp. 378–382. DOI: 10.1007/s10015-011-0955-3.

6. Yan, K., Ma, B. (2023). Global posture stabilization for the kinematic model of a rear-axle driven car-like mobile robot considering obstacle avoidance. IEEE Robotics and Automation Letters, Vol. 8, No. 9, pp. 5568–5575. DOI: 10.1109/lra.2023.3296351.

7. Kocsis, M., Schultz, A., Zöllner, R., Mogan, G. L. (2016). A method for transforming electric vehicles to become autonomous vehicles. CONAT 2016 International Congress of Automotive and Transport Engineering, pp. 752–761. DOI: 10.1007/978-3-319-45447-4_83.

8. Han, X., Zhao, X., Xu, X., Mei, C., Xing, W., Wang, X. (2024). Trajectory tracking control for underactuated autonomous vehicles via adaptive dynamic programming. Journal of the Franklin Institute, Vol. 361, No. 1, pp. 474–488. DOI: 10.1016/j.jfranklin.2023.12.003.

9. Cruz, V. D., Rodriguez, J. A., Aguilar, L. T., Colorado, R. M. (2023). Trajectory tracking control of wheeled mobile robots using neural networks and feedback control techniques. Studies in Computational Intelligence, pp. 381–393. DOI: 10.1007/978-3-031-28999-6_24.

10. Hart, P., Rychly, L., Knoll, A. (2019). Lane-merging using policy-based reinforcement learning and post-optimization. IEEE Intelligent Transportation Systems Conference, pp. 3176–3181. DOI: 10.1109/itsc.2019.8917002.

11. Bellusci, M., Cudrano, P., Mentasti, S., Cortelazzo, R. E. F., Matteucci, M. (2024). Semantic interpretation of raw survey vehicle sensory data for lane-level HD map generation. Robotics and Autonomous Systems, Vol. 172, pp. 104513. DOI: 10.1016/j.robot.2023.104513.

12. Han, Z., Gu, J., Feng, Y. (2023). Blind lane detection and following for assistive navigation of vision impaired people. International Conference on Advanced Robotics and Mechatronics, pp. 721–726. DOI: 10.1109/icarm58088.2023.10218843.

13. Chen, Y., Wong, P. K., Yang, Z. (2021). A new adaptive region of interest extraction method for two-lane detection. International Journal of Automotive Technology, Vol. 22, No. 6, pp. 1631–1649. DOI: 10.1007/s12239-021-0141-0.

14. Haixia, L., Xizhou, L. (2021). Flexible lane detection using CNNs. International Conference on Computer Technology and Media Convergence Design. DOI: 10.1109/ctmcd53128.2021.00057.

15. Khan, M. A., Kee, S., Sikder, N., Mamun, M. A. A., Zohora, F. T., Hasan, M. T., Bairagi, A. K., Nahid, A. (2021). A vision-based lane detection approach for autonomous vehicles using a convolutional neural network architecture. Joint 10th International Conference on Informatics, Electronics and Vision, pp. 1–10. DOI: 10.1109/ICIEVicIVPR52578.2021.9564229.

16. Peregrina-Ochoa, S. A. (2019). Sistema de navegación autónomo para un vehículo a escala mediante aprendizaje automático y visión por computadora. Tesis de Maestría en Ciencias en Ingeniería de Cómputo, Centro de Investigación en Computación, Instituto Politécnico Nacional.

17. Neven, D., Brabandere, B. D., Georgoulis, S., Proesmans, M., Gool, L. V. (2018). Towards end-to-end lane detection: an instance segmentation approach. IEEE Intelligent Vehicles Symposium, pp. 286–291. DOI: 10.1109/IVS.2018.8500547.

18. Luca, A. D., Oriolo, G., Samson, C. (1998). Feedback control of a nonholonomic car-like robot. Robot Motion Planning and Control, pp. 171–253. DOI: 10.1007/bfb0036073.

19. Liu, D., Tang, M., Fu, J. (2022). Robust adaptive trajectory tracking for wheeled mobile robots based on Gaussian process regression. Systems and Control Letters, Vol. 163, pp. 105210. DOI: 10.1016/j.sysconle.2022.105210.

20. Rosas-Vilchis, A. J. (2021). Algoritmos de observación y control robustos para el vehículo autónomo. Tesis de Maestría en Ciencias en Sistemas Digitales, Centro de Investigación en Computación, Instituto Politécnico Nacional.

21. Lu, Q., Chen, J., Wang, Q., Zhang, D., Sun, M., Su, C. (2022). Practical fixed-time trajectory tracking control of constrained wheeled mobile robots with kinematic disturbances. ISA Transactions, Vol. 129, pp. 273–286. DOI: 10.1016/j.isatra.2021.12.039.

22. Dixon, W. E., Dawson, D. M., Zergeroglu, E. (2000). Tracking and regulation control of a mobile robot system with kinematic disturbances: a variable structure-like approach. Journal of Dynamic Systems, Measurement, and Control, Vol. 122, No. 4, pp. 616–623. DOI: 10.1115/1.1316795.

23. Rodríguez-Arellano, J. A., Miranda-Colorado, R., Aguilar, L. T., Negrete-Villanueva, M. (2023). Trajectory tracking nonlinear H∞ controller for wheeled mobile robots with disturbances observer. ISA Transactions, Vol. 142, pp. 372–385. DOI: 10.1016/j.isatra.2023.07.037.

24. Cui, M., Liu, H., Wang, X., Liu, W. (2023). Adaptive control for simultaneous tracking and stabilization of wheeled mobile robot with uncertainties. Journal of Intelligent and Robotic Systems, Vol. 108, No. 3. DOI: 10.1007/s10846-023-01908-0.

25. Chang, S., Wang, Y., Zuo, Z., Yang, H., Luo, X. (2023). Robust prescribed-time containment control for high-order uncertain multi-agent systems with extended state observer. Neurocomputing, Vol. 559, pp. 126782. DOI: 10.1016/j.neucom.2023.126782.

26. GitHub (2004). TuSimple dataset. http://github.com/TuSimple/tusimple-benchmark.

27. Vu, D., Ngo, B., Phan, H. (2022). HybridNets: end-to-end perception network. arXiv. DOI: 10.48550/ARXIV.2203.09035.

28. GitHub (2022). ONNX-HybridNets-Multitask-Road-Detection: Python scripts for performing road segmentation and car detection using the HybridNets multitask model in ONNX. http://github.com/ibaiGorordo/ONNX-HybridNets-Multitask-Road-Detection.

29. GitHub (2020). PINTO model repository for storing models that have been inter-converted between various models. http://github.com/PINTO0309/PINTO_model_zoo.

30. Marutotamtama, J. C., Setyawan, I. (2021). Physical distancing detection using YOLO v3 and bird's eye view transform. Proceedings of the 2nd International Conference on Innovative and Creative Information Technology, pp. 50–56. DOI: 10.1109/ICITech50181.2021.9590157.

31. Sihombing, D. P., Nugroho, H. A., Wibirama, S. (2015). Perspective rectification in vehicle number plate recognition using 2D-2D transformation of planar homography. Proceedings of the International Conference on Science in Information Technology, pp. 237–240. DOI: 10.1109/ICSITech.2015.7407810.

32. Yoo, S. J. (2013). Adaptive neural tracking and obstacle avoidance of uncertain mobile robots with unknown skidding and slipping. Information Sciences, Vol. 238, pp. 176–189. DOI: 10.1016/j.ins.2013.03.013.

33. Kairuz, R. I. V., Orlov, Y., Aguilar, L. T. (2021). Prescribed-time stabilization of controllable planar systems using switched state feedback. IEEE Control Systems Letters, Vol. 5, No. 6, pp. 2048–2053. DOI: 10.1109/LCSYS.2020.3046682.

34. Orlov, Y. (2020). Nonsmooth Lyapunov analysis in finite and infinite dimensions. Springer Cham. DOI: 10.1007/978-3-030-37625-3.

35. Biagiotti, L., Melchiorri, C. (2009). Trajectory planning for automatic machines and robots. Springer Berlin.

The TuSimple dataset, comprising 6408 images of highways in the United States presented at a resolution of 1280x720, was utilized to train HybridNets [26].

To implement the intelligent method, the NN was converted to an Open Neural Network Exchange (ONNX) model to enhance its inference. The codes can be found in [29], while the weights can be found in [30].

Received: January 22, 2023; Accepted: April 24, 2024

* Corresponding author: Luis T. Aguilar, e-mail: laguilarb@ipn.mx

This is an open-access article distributed under the terms of the Creative Commons Attribution License.