Article

Data-Driven Adaptive Controller Based on Hyperbolic Cost Function for Non-Affine Discrete-Time Systems with Variant Control Direction

by
Miriam Flores-Padilla
* and
Chidentree Treesatayapun
Department of Robotic and Advanced Manufacturing, Cinvestav, Av. Industrial Metalurgia, Ramos Arizpe 25900, Mexico
*
Author to whom correspondence should be addressed.
Appl. Syst. Innov. 2024, 7(3), 38; https://doi.org/10.3390/asi7030038
Submission received: 15 February 2024 / Revised: 3 April 2024 / Accepted: 18 April 2024 / Published: 28 April 2024

Abstract

As technology evolves, increasingly complex non-affine systems are created. These systems are hard to model, whereas most controller designs require information about the system. This information is especially hard to obtain for systems with varying control directions. Therefore, this study introduces a novel data-driven estimator and controller tailored for single-input single-output non-affine discrete-time systems. The approach focuses on cases where the control direction varies over time and the mathematical model of the system is completely unknown. The estimator and controller are constructed using a Multiple-input Fuzzy Rules Emulated Network (MiFREN) framework. The weight vectors are updated through gradient descent optimization, which employs a unique cost function that multiplies the error by its hyperbolic tangent. The stability analyses, based on Lyapunov functions, demonstrate that both the estimation and tracking errors are uniformly ultimately bounded (UUB). To validate the results, we show experimental force-control tests executed on the z-axis of a driver-controlled 3D scanning robot, a system with a varying control direction, and we provide comparison results with a state-of-the-art controller. The results show a mean absolute percentage tracking error smaller than one percent in the steady state and the expected variation in the system's control direction.

1. Introduction

In the continually growing landscape of control systems, adaptive controllers have garnered significant attention owing to their adeptness in handling the intricate dynamics of modern systems, which are often unknown and highly non-linear [1,2]. With these sophisticated systems, there has been a surge in the availability of system status information, enabling adaptive controllers to rely less on system knowledge. This shift has led to the emergence of data-driven controllers (DDCs) and model estimators [3,4]. Researchers typically categorize the adaptation of DDCs into online learning, offline learning, and hybrid approaches that combine both online and offline learning strategies.
Among the most popular online DDC methods, model-free adaptive control (MFAC) [5,6,7,8] has the advantages of a low computational cost compared with other online methods and of requiring no information about the system besides the control direction. This method has mainly been used for non-linear systems. Regarding offline methods, iterative feedback tuning (IFT) [9,10] and virtual reference feedback tuning (VRFT) [11,12] focus on parameter adaptation/identification; IFT adapts over iterations according to the gradient descent method, whereas VRFT searches for the global minimum over the available data. Both require controlled experiments on the system to gather behavioral data before controlling it. Offline methods usually have lower computational costs than online methods but require more information about the system; no offline method has reported results for systems with varying control directions. Hybrid methods include the popular iterative learning control (ILC) [13,14], which is commonly used for systems with repetitive tasks. This method has the advantage of decreasing the error in each cycle but requires some prior knowledge of the system. All of these controllers share a common disadvantage: they need to know the control direction of the system.
In 1983, Roger D. Nussbaum [15] proposed an adaptive control solution for systems where the control direction is unknown. His controller deals with the unknown control direction by adapting a control gain, and the proposed function slowly adapts to the unknown control direction. It has been used to solve problems pertaining to non-affine systems [16,17], non-linear systems [18], switched systems [19], reduction of the computational cost [20,21], industrial applications [22,23], and time-varying control gains with no sign change [24]. Unfortunately, to the authors' knowledge, only a few articles address the problem of systems with varying control directions (a control gain with sign changes) [25,26]. Both articles use a Nussbaum-type function alongside fuzzy observers and controllers. Their focus is on affine non-linear systems; they only report simulation results to validate their theoretical analysis, and one of them does not present a graph of the control-related parameter estimation.
Unlike affine systems, non-affine systems have a non-linear relation between the system output and the control input [27,28,29]. This property makes it very difficult to find an exact solution for them. Non-affine systems have countless applications, such as active magnetic bearings, aircraft dynamics, biochemical processes, pendulum control, underwater vehicles, and so on [30,31]. The common approach to controlling non-affine systems is adaptive control, mostly involving neural networks, where the control direction is either known or estimated with the Nussbaum gain [32,33,34]. Some of these systems also show a time-varying sign in the correlation between the system output and the control input. Therefore, our focus is the development of an adaptive controller for non-affine systems with a time-varying control direction.
This work presents two significant contributions. First, we introduce a novel controller capable of effectively managing non-affine, non-linear discrete-time systems with varying control directions. This adaptive controller showcases remarkable versatility in handling such systems, enabling precise control even amidst changing control directions. Then, we propose a novel cost function, employing a hyperbolic tangent of the error multiplied by the error, diverging from traditional quadratic or absolute functions. Investigations reveal that this innovative cost function facilitates faster responses to aggressive system changes while ensuring smoother control laws and estimated function responses. These enhancements significantly improve the overall system performance and may find valuable applications in redundancy scenarios for future works.
The rest of this work is organized as follows: Section 2 outlines the requirements and assumptions for the systems to be controlled. Section 3 introduces the model estimator and its update law according to the gradient descent method; here we propose an unusual cost function, the hyperbolic tangent of the error multiplied by the error, and provide the stability proof of the estimator at the end of the section. Section 4 develops a model-free adaptive controller, whose weight vector is also updated according to the gradient descent method with a cost function similar to the estimator's; the stability proof of the closed-loop system is provided at the end of Section 4. Section 5 shows the performance of the controller and estimator for a highly non-linear system with a changing control direction and provides a comparison of the experimental results with a state-of-the-art controller. Finally, Section 6 offers suggestions for future work and concluding remarks.

2. Problem Formulation

A single-input single-output non-affine non-linear discrete-time system is described as
$y(k+1) = F\big(y(k), \ldots, y(k-n_y), u(k), \ldots, u(k-n_u)\big)$,  (1)
with unknown orders $n_y$ and $n_u$. It admits the following affine representation:
$y(k+1) = f\big(y(k), \ldots, y(k-n_y), u(k-1), \ldots, u(k-n_u)\big) + g\big(y(k), \ldots, y(k-n_y), u(k-1), \ldots, u(k-n_u)\big)\,u(k) + \varepsilon_h\big(y(k), \ldots, y(k-n_y), u(k), \ldots, u(k-n_u)\big) = f(k) + g(k)u(k) + \varepsilon_h(k)$,  (2)
where $f(k)$ and $g(k)$ are unknown functions and $\varepsilon_h(\cdot)$ is a bounded residual error, provided the non-affine system (1) meets the following assumptions:
Assumption 1. 
The non-linear function $F(\cdot)$ in (1) must be continuous with respect to the control input $u(k)$. This implies that
$\dfrac{\partial y(k+1)}{\partial u(k)} \triangleq g(k)$,  (3)
where $0 < |g(k)| \le g_M$ and $g_M$ is an unknown positive constant. Therefore, the system (1) is controllable with unknown and varying control directions.
According to Assumption 1, the control direction is determined by utilizing the sign function of $g(k)$ in (3) as
$\operatorname{sign}\!\left(\dfrac{\Delta y(k+1)}{\Delta u(k)}\right) = \operatorname{sign}\big(g(k)\big)$,  (4)
where $\Delta u(k) \neq 0$.
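As a concrete illustration of (4), the following sketch estimates the control direction from finite differences of measured input-output data. The plant and excitation sequence are hypothetical stand-ins, chosen only so that the effective gain changes sign during operation; they are not the experimental system of Section 5.

```python
import math

def control_direction(y_next, y, u, u_prev):
    """Estimate sign(g(k)) from finite differences as in Eq. (4); requires Delta u(k) != 0."""
    du = u - u_prev
    if du == 0.0:
        raise ValueError("Delta u(k) must be nonzero")
    return 1 if (y_next - y) / du >= 0.0 else -1

# Toy non-affine plant whose effective gain cos(0.05*k) changes sign over time
# (an illustrative stand-in, not the paper's robot).
def plant(y, u, k):
    return 0.6 * math.sin(y) + math.cos(0.05 * k) * u + 0.1 * math.tanh(u)

y, u_prev = 0.0, 0.0
directions = []
for k in range(80):
    u = u_prev + 0.1              # hypothetical monotone excitation, so Delta u != 0
    y_next = plant(y, u, k)
    directions.append(control_direction(y_next, y, u, u_prev))
    y, u_prev = y_next, u
# 'directions' contains both +1 and -1, reflecting the sign change of the gain.
```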
Therefore, the equivalent model based on MiFREN [35] is developed in the following section.

3. Model Estimator

In this work, the class of non-linear discrete-time systems described in (1) and (2), where $\delta(k) \triangleq \big[y(k), \ldots, y(k-n_y), u(k-1), \ldots, u(k-n_u)\big]$, $f(k) \triangleq f\big(\delta_f(k)\big)$, $g(k) \triangleq g\big(\delta_g(k)\big)$, and $\varepsilon_h(k) \triangleq \varepsilon_h\big(\delta_\varepsilon(k)\big)$, is rewritten as
$y(k+1) = f(k) + g(k)u(k) + \varepsilon_h(k)$,  (5)
where $\varepsilon_h(k)$ is a bounded residual error such that $|\varepsilon_h(k)| \le \varepsilon_M$. It is noticeable that $f(k)$ and $g(k)$ are unknown functions. Therefore, the adaptive network MiFREN is used to estimate those functions as
$f(k) = \varphi^T(k)\beta_f^*$,  (6)
$g(k) = \varphi^T(k)\beta_g^*$,  (7)
where $\varphi(k)$ is the multidimensional vector of the membership functions and $\beta_f^*$ and $\beta_g^*$ are the unknown ideal weight vectors.
By utilizing the equivalent model based on MiFREN, the dynamics in (5)–(7) are estimated as
$\hat y(k+1) = \hat f(k) + \hat g(k)u(k)$.  (8)
Therefore, the MiFREN implementation leads to
$\hat f(k) = \varphi^T(k)\beta_f(k)$,  (9)
and
$\hat g(k) = \varphi^T(k)\beta_g(k)$,  (10)
where $\beta_f(k)$ and $\beta_g(k)$ are iterative weight vectors used to estimate the functions $\hat f(k)$ and $\hat g(k)$, respectively. This implementation is illustrated in the diagram of Figure 1. The diagram shows that the inputs of the estimator are the system output and the estimation error at the $k$th iteration, to avoid causality issues. Both inputs enter simple fuzzy membership functions $\mu$ and are later combined in the multidimensional membership function $\varphi(k)$. Then, the estimation proceeds as in Equations (8)–(10).
The weight vectors $\beta_f(k)$ and $\beta_g(k)$ are updated by the gradient descent method. Thus, the estimation error $\hat e(k+1)$ is introduced as
$\hat e(k+1) = y(k+1) - \hat y(k+1)$.  (11)
Recalling (8)–(10) with (11) yields
$\hat e(k+1) = \varphi^T(k)\beta_f^* + \varphi^T(k)\beta_g^* u(k) + \varepsilon_h(k) - \varphi^T(k)\beta_f(k) - \varphi^T(k)\beta_g(k)u(k) = \varphi^T(k)\tilde\beta_f(k) + \varphi^T(k)\tilde\beta_g(k)u(k) + \varepsilon_h(k)$,  (12)
where $\tilde\beta_f(k) = \beta_f^* - \beta_f(k)$ and $\tilde\beta_g(k) = \beta_g^* - \beta_g(k)$.
The cost function $\hat E(k+1)$ is selected as a positive semi-definite function:
$\hat E(k+1) = \tanh\big(\hat e(k+1)\big)\,\hat e(k+1)$.  (13)
It is worth observing that the proposed cost function (13) is developed here to reduce high-frequency behavior, which will be discussed in the experimental results.
Therefore, the update laws for the weight parameters are formulated by the gradient descent method as follows:
$\beta_f(k+1) = \beta_f(k) - \eta_f \dfrac{\partial \hat E(k+1)}{\partial \beta_f(k)}$,  (14)
and
$\beta_g(k+1) = \beta_g(k) - \eta_g \dfrac{\partial \hat E(k+1)}{\partial \beta_g(k)}$,  (15)
where $\eta_f$ and $\eta_g$ are the learning rates for $\beta_f$ and $\beta_g$, respectively.
Using the chain rule, the derivative of the cost function (13) is obtained as
$\dfrac{\partial \hat E(k+1)}{\partial \beta_f(k)} = \dfrac{\partial \hat E(k+1)}{\partial \hat e(k+1)} \dfrac{\partial \hat e(k+1)}{\partial \hat y(k+1)} \dfrac{\partial \hat y(k+1)}{\partial \beta_f(k)} = -h\big(\hat e(k+1)\big)\varphi(k)$,  (16)
where
$h\big(\hat e(k+1)\big) \triangleq \operatorname{sech}^2\big(\hat e(k+1)\big)\,\hat e(k+1) + \tanh\big(\hat e(k+1)\big)$.  (17)
Then, the derivative of the cost function (13) with respect to $\beta_g(k)$ is calculated as
$\dfrac{\partial \hat E(k+1)}{\partial \beta_g(k)} = \dfrac{\partial \hat E(k+1)}{\partial \hat e(k+1)} \dfrac{\partial \hat e(k+1)}{\partial \hat y(k+1)} \dfrac{\partial \hat y(k+1)}{\partial \beta_g(k)} = -h\big(\hat e(k+1)\big)\varphi(k)\,u(k)$.  (18)
Thus, substituting (16) into (14) and (18) into (15), the update laws for the weight parameters can be formulated as follows:
$\beta_f(k+1) = \beta_f(k) + \eta_f\,h\big(\hat e(k+1)\big)\varphi(k)$,  (19)
and
$\beta_g(k+1) = \beta_g(k) + \eta_g\,h\big(\hat e(k+1)\big)\varphi(k)\,u(k)$.  (20)
It is seen that $\eta_f$ and $\eta_g$ can play an important role in the model performance. Thus, $\eta_f$ and $\eta_g$ are determined as in the following theorem.
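To make the estimator concrete, the following sketch implements the update laws (19) and (20) on a toy plant, using normalized Gaussian membership functions as a hypothetical stand-in for the MiFREN vector $\varphi(k)$ and a common learning rate as in (21). The plant, centers, and learning rate are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def h(e):
    """h(e) = sech^2(e)*e + tanh(e), the gradient factor of the cost tanh(e)*e (Eq. (17))."""
    return (1.0 / np.cosh(e)) ** 2 * e + np.tanh(e)

def phi(y, centers=np.linspace(-2, 2, 5), width=1.0):
    """Hypothetical normalized Gaussian membership vector standing in for phi(k)."""
    m = np.exp(-((y - centers) / width) ** 2)
    return m / m.sum()

def estimator_step(beta_f, beta_g, y, u, y_next, eta=0.1):
    """One iteration of (8), (11), (19), (20) with eta_f = eta_g = eta_T (Eq. (21))."""
    p = phi(y)
    y_hat = p @ beta_f + (p @ beta_g) * u      # Eq. (8)
    e_hat = y_next - y_hat                     # Eq. (11)
    beta_f = beta_f + eta * h(e_hat) * p       # Eq. (19)
    beta_g = beta_g + eta * h(e_hat) * p * u   # Eq. (20)
    return beta_f, beta_g, e_hat

# Illustrative run on an assumed affine-form plant y+ = 0.5*sin(y) + 1.2*u.
beta_f, beta_g = np.zeros(5), np.zeros(5)
y, errs = 0.2, []
for k in range(200):
    u = np.sin(0.1 * k)
    y_next = 0.5 * np.sin(y) + 1.2 * u
    beta_f, beta_g, e_hat = estimator_step(beta_f, beta_g, y, u, y_next)
    errs.append(abs(e_hat))
    y = y_next
# The estimation error |e_hat| shrinks as the weight vectors adapt.
```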
Theorem 1. 
A class of non-affine non-linear discrete-time systems (1) that can be represented in an affine way (5), meeting the three assumptions made in Section 2, can also be estimated by (8) based on MiFREN. The estimation error, along with the internal signals, is convergent when the estimator parameters are designed following the conditions
$\eta_f = \eta_g = \eta_T$,  (21)
$0 < \eta_T \le \dfrac{2\xi\big(\hat e(k+1)\big)}{h_{max}^2\,\varphi_{max}^2\,\big(1 + u^2(k)\big)}$.  (22)
Proof. 
To verify the convergence of the estimation error and the internal signals of the estimator, let us select the Lyapunov function
$L_{\hat y}(k+1) = \tilde\beta_f^2(k+1) + \tilde\beta_g^2(k+1)$.  (23)
Therefore, the Lyapunov function differentiation is calculated as
$\Delta L_{\hat y}(k+1) = \tilde\beta_f^2(k+1) - \tilde\beta_f^2(k) + \tilde\beta_g^2(k+1) - \tilde\beta_g^2(k)$.  (24)
Utilizing the learning laws developed in (19) and (20), we obtain
$\tilde\beta_f(k+1) = \beta_f^* - \beta_f(k+1) = \tilde\beta_f(k) - \eta_f\,h\big(\hat e(k+1)\big)\varphi(k)$,  (25)
and
$\tilde\beta_g(k+1) = \beta_g^* - \beta_g(k+1) = \tilde\beta_g(k) - \eta_g\,h\big(\hat e(k+1)\big)\varphi(k)\,u(k)$,  (26)
respectively. By substituting (25) and (26) into (24), we have
$\Delta L_{\hat y}(k+1) = \big[\tilde\beta_f(k) - \eta_f h\big(\hat e(k+1)\big)\varphi(k)\big]^2 - \tilde\beta_f^2(k) + \big[\tilde\beta_g(k) - \eta_g h\big(\hat e(k+1)\big)\varphi(k)u(k)\big]^2 - \tilde\beta_g^2(k) = -2\eta_f\tilde\beta_f^T(k)\varphi(k)h\big(\hat e(k+1)\big) + \eta_f^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k) - 2\eta_g\tilde\beta_g^T(k)\varphi(k)h\big(\hat e(k+1)\big)u(k) + \eta_g^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)u^2(k)$.  (27)
According to (21), where $\eta_f = \eta_g = \eta_T$, the relation in (27) can be rewritten as
$\Delta L_{\hat y}(k+1) = -2\eta_T\big[\tilde\beta_f^T(k)\varphi(k) + \tilde\beta_g^T(k)\varphi(k)u(k)\big]h\big(\hat e(k+1)\big) + \eta_T^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big)$.  (28)
From the definition of the estimation error (12), we have
$\hat e(k+1) - \varepsilon_h(k) = \varphi^T(k)\tilde\beta_f(k) + \varphi^T(k)\tilde\beta_g(k)u(k)$.  (29)
By employing (29) in (28), it yields
$\Delta L_{\hat y}(k+1) = -2\eta_T\,\hat e(k+1)\,h\big(\hat e(k+1)\big) + 2\eta_T\,\varepsilon_h(k)\,h\big(\hat e(k+1)\big) + \eta_T^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big)$.  (30)
Let us define $\xi\big(\hat e(k+1)\big) \triangleq \hat e(k+1)\,h\big(\hat e(k+1)\big)$; thus, from the last equation, we have
$\Delta L_{\hat y}(k+1) = -2\eta_T\,\xi\big(\hat e(k+1)\big) + 2\eta_T\,\varepsilon_h(k)\,h\big(\hat e(k+1)\big) + \eta_T^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big)$.  (31)
Figure 2 shows the cost function $E\big(\hat e(k)\big)$, a positive semi-definite function of the estimation error $\hat e(k)$. Furthermore, Figure 3 shows that $h\big(\hat e(k+1)\big)$ is a bounded function regardless of the value of $\hat e(k+1)$, with the limits $-1.2 \le h\big(\hat e(k+1)\big) \le 1.2$.
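The bound on $h(\cdot)$ can also be checked numerically: the extrema of $h(e) = \operatorname{sech}^2(e)\,e + \tanh(e)$ occur where $e\tanh(e) = 1$ (near $e \approx \pm 1.1997$), giving a supremum just under 1.2, consistent with the limits above. A minimal sketch:

```python
import numpy as np

def h(e):
    """h(e) = sech^2(e) * e + tanh(e), the bounded gradient factor (Eq. (17))."""
    return (1.0 / np.cosh(e)) ** 2 * e + np.tanh(e)

# Dense grid over a wide range; h(e) -> +/-1 as |e| grows, so the
# extremes are interior, near e = +/-1.1997 where e*tanh(e) = 1.
e = np.linspace(-50.0, 50.0, 2_000_001)
peak = float(np.max(np.abs(h(e))))   # just under 1.2
```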
Given that $\varepsilon_h(k)h\big(\hat e(k+1)\big)$ is a bounded function with an unknown sign, the Lyapunov function differentiation (31) can be bounded as
$\Delta L_{\hat y}(k+1) \le -2\eta_T\,\xi\big(\hat e(k+1)\big) + \eta_T^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big) + 2\big|\eta_T\,\varepsilon_h(k)\,h\big(\hat e(k+1)\big)\big|$,  (32)
since, for positive values $a$ and $b$ with an unknown value $c$, we can say that
$a + b + c \le a + b + |c|$.  (33)
Limits for the learning rate $\eta_T$ are derived by requiring the known terms to satisfy
$-2\eta_T\,\xi\big(\hat e(k+1)\big) + \eta_T^2 h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big) < 0$,  (34)
which gives
$0 < \eta_T \le \dfrac{2\xi\big(\hat e(k+1)\big)}{h_{max}^2\,\varphi_{max}^2\,\big(1 + u_{max}^2\big)} \le \dfrac{2\xi\big(\hat e(k+1)\big)}{h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big)}$,
where $h_{max} \ge \big|h\big(\hat e(k+1)\big)\big|$, $u_{max} \ge |u(k)|$, and $\varphi_{max}^2 \ge \varphi^T(k)\varphi(k)$, $\forall k \ge 0$. It is suggested to set the learning rate as
$\eta_T = \dfrac{2\xi\big(\hat e(k+1)\big)}{h_{max}^2\,\varphi_{max}^2\,\big(1 + u^2(k)\big)}$,
when no offline learning has taken place, to accelerate the error convergence to a bounded compact set. In this case, a zero initial weight vector is recommended if there is no human knowledge of the system behavior to pass into the neural network. If the initial parameters of the estimator are obtained by offline learning or previous behavioral knowledge of the system, the learning rate can be set as
$\eta_T = \dfrac{2\xi\big(\hat e(k+1)\big)}{h_{max}^2\,\varphi_{max}^2\,\big(1 + u_{max}^2\big)}$.
Recalling the differentiation of the Lyapunov function (31), and with the previous statements, it can be seen that the differentiation is negative semi-definite when the estimation-error-related parameter $\xi\big(\hat e(k+1)\big)$ is bounded as
$\xi\big(\hat e(k+1)\big) > \Omega_\xi$,
where $\Omega_\xi \triangleq \frac{1}{2}\eta_T h^2\big(\hat e(k+1)\big)\varphi^T(k)\varphi(k)\big(1 + u^2(k)\big) + \big|\varepsilon_h(k)h\big(\hat e(k+1)\big)\big|$. This boundary concludes the stability proof, where the estimation error and the estimator's internal signals are established as UUB according to the proposed Lyapunov function (for more information, see the Lyapunov extension Theorem 2.5.7 in [36]). □
The following section proposes an adaptive controller that can deal with varying and unknown control directions, based on information provided by the estimator.

4. Data-Based Model-Free Adaptive Controller

An adaptive data-driven controller is proposed with a direct control scheme (Figure 4). The adaptive controller depends on the closed-loop system tracking error and the desired trajectory. The upgrade of the weight vector β ( k ) depends on the model estimator as described in this section.
Defining the system's tracking error as
$e(k+1) \triangleq r(k+1) - y(k+1)$,  (35)
where $r(k+1)$ is the desired trajectory, we propose a MiFREN-based adaptive controller as
$u(k) = \varphi_c^T(k)\beta_c(k)$,  (36)
where $\varphi_c(k)$ is a multidimensional membership-function vector and $\beta_c(k)$ is the weight vector of the controller. The fundamental difference of the proposed controller lies in the weight-vector update method. The update is performed with the gradient descent method and a novel cost function: the hyperbolic tangent of the tracking error multiplied by the tracking error. It is worth noticing that this is a model-free adaptive controller, and no further information about the system is required at this point. The update method for the weight vector $\beta_c(k)$ is discussed in more detail in this section.
The update law of the weight vector is defined according to the gradient descent method as
$\beta_c(k+1) = \beta_c(k) - \eta\,\dfrac{\partial E(k+1)}{\partial \beta_c(k)}$,  (37)
with the cost function
$E(k+1) = \tanh\big(e(k+1)\big)\,e(k+1)$.  (38)
As was stated for the estimator cost function, this type of function has the advantage of being smooth near the origin, unlike functions with absolute values.
The partial derivative needed for the update law (37) is obtained with the chain rule,
$\dfrac{\partial E(k+1)}{\partial \beta_c(k)} = \dfrac{\partial E(k+1)}{\partial e(k+1)}\dfrac{\partial e(k+1)}{\partial \beta_c(k)} = \Big[\operatorname{sech}^2\big(e(k+1)\big)e(k+1) + \tanh\big(e(k+1)\big)\Big]\dfrac{\partial e(k+1)}{\partial \beta_c(k)}$,
and the partial derivative of the tracking error with respect to the weight vector is obtained as
$\dfrac{\partial e(k+1)}{\partial \beta_c(k)} = \dfrac{\partial e(k+1)}{\partial y(k+1)}\dfrac{\partial y(k+1)}{\partial u(k)}\dfrac{\partial u(k)}{\partial \beta_c(k)} = -\dfrac{\partial y(k+1)}{\partial u(k)}\varphi_c(k)$.
The term $\dfrac{\partial y(k+1)}{\partial u(k)}$, according to Theorem 1, is estimated as
$\dfrac{\partial y(k+1)}{\partial u(k)} = \dfrac{\partial\big[f(k) + g(k)u(k) + \varepsilon_h(k)\big]}{\partial u(k)} \approx g(k) \approx \hat g(k)$.  (39)
Then, the partial derivative of the cost function is approximated as
$\dfrac{\partial E(k+1)}{\partial \beta_c(k)} \approx -\Big[\operatorname{sech}^2\big(e(k+1)\big)e(k+1) + \tanh\big(e(k+1)\big)\Big]\hat g(k)\varphi_c(k)$.
Substituting the last equation into the update law (37), we obtain a feasible update law,
$\beta_c(k+1) = \beta_c(k) + \eta\,h\big(e(k+1)\big)\hat g(k)\varphi_c(k)$,  (40)
with $h\big(e(k+1)\big) \triangleq \operatorname{sech}^2\big(e(k+1)\big)e(k+1) + \tanh\big(e(k+1)\big)$. Per the stability proof of Theorem 2, the update law can also be stated as
$\beta_c(k+1) = \beta_c(k) + \eta_c\,h\big(e(k+1)\big)\operatorname{sign}\big(\hat g(k)\big)\hat g_{min}\,\varphi_c(k)$,  (41)
with $|\hat g(k)| \ge \hat g_{min}$, $\forall k \ge 0$.
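The following sketch illustrates why the update law (41) tolerates sign changes in the control gain: only $\operatorname{sign}(\hat g(k))$ enters the update, scaled by the lower bound $\hat g_{min}$. The first-order plant, membership centers, and gains are illustrative assumptions, and the estimator is idealized by feeding in the true gain sign; this is not the experimental system of Section 5.

```python
import numpy as np

def h(e):
    """h(e) = sech^2(e)*e + tanh(e): gradient factor of the cost tanh(e)*e."""
    return (1.0 / np.cosh(e)) ** 2 * e + np.tanh(e)

def phi_c(x, centers=np.linspace(-2, 2, 5), width=1.0):
    """Hypothetical normalized membership vector for the controller."""
    m = np.exp(-((x - centers) / width) ** 2)
    return m / m.sum()

def controller_update(beta_c, e_next, g_hat, p_c, eta_c=1.0, g_hat_min=0.2):
    """Update law (41): only the *sign* of the estimated gain is used,
    scaled by the lower bound g_hat_min, which is what tolerates sign changes."""
    return beta_c + eta_c * h(e_next) * np.sign(g_hat) * g_hat_min * p_c

# Closed-loop pass on a toy plant y+ = 0.5*y + g*u whose gain g flips
# sign halfway through (assumed dynamics for illustration only).
beta_c = np.zeros(5)
y, r, errs = 0.0, 1.0, []
for k in range(400):
    g = 1.0 if k < 200 else -1.0
    p_c = phi_c(r - y)
    u = p_c @ beta_c                                     # controller (36)
    y = 0.5 * y + g * u
    e_next = r - y                                       # tracking error (35)
    beta_c = controller_update(beta_c, e_next, g, p_c)   # idealized: g_hat sign = true sign
    errs.append(abs(e_next))
# The tracking error converges, jumps when the gain flips at k = 200,
# and then converges again under the new control direction.
```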
Theorem 2. 
A class of non-affine non-linear discrete-time systems (1), represented in an affine way (2), is estimated as (8) if the original system (1) follows the assumptions described in Section 2. The tracking error, along with the internal signals, is convergent with the system estimator designed according to Theorem 1, the controller (36), and the update law (40) or (41), if the parameters are designed following the next conditions:
  • $g(k) \approx \hat g(k)$;
  • $\operatorname{sign}\big(g(k)\big) = \operatorname{sign}\big(\hat g(k)\big)$;
  • $|\varepsilon_h(k)| \ll 1$;
  • $0 < \eta \le \dfrac{1}{\hat g_{max}^2\,\|\varphi_{c,max}\|_2^2} \le \dfrac{1}{\hat g^2(k)\,\|\varphi_c(k)\|_2^2}$.
Proof. 
To verify the convergence of the closed-loop tracking error and of the system's internal signals, the Lyapunov function is selected as
$L(k+1) = \tilde\beta_c^T(k+1)\tilde\beta_c(k+1) + e^2(k+1)$.  (42)
Therefore, the Lyapunov function differentiation is calculated as
$\Delta L(k+1) = \tilde\beta_c^T(k+1)\tilde\beta_c(k+1) - \tilde\beta_c^T(k)\tilde\beta_c(k) + e^2(k+1) - e^2(k)$.  (43)
Defining $\tilde\beta_c(k+1)$ as the difference between the ideal weight vector $\beta_c^*$ and the current iteration $\beta_c(k+1)$, it is established that
$\tilde\beta_c(k+1) = \beta_c^* - \beta_c(k+1)$.
Substituting the update law (40) into the last equation,
$\tilde\beta_c(k+1) = \beta_c^* - \beta_c(k) - \eta\,h\big(e(k+1)\big)\hat g(k)\varphi_c(k)$,
the weight-vector error is also described as
$\tilde\beta_c(k+1) = \tilde\beta_c(k) - \eta\,h\big(e(k+1)\big)\hat g(k)\varphi_c(k)$.  (44)
In a similar sense, considering that the ideal weight vector produces the ideal controller and no tracking error, it is inferred that
$e^*(k+1) = 0 = r(k+1) - y^*(k+1)$, with $y^*(k+1) = f(k) + g(k)\beta_c^{*T}\varphi_c(k)$;
hence,
$r(k+1) = f(k) + g(k)u^*(k)$.
Substituting the last equation into the tracking error (35),
$e(k+1) = f(k) + g(k)\beta_c^{*T}\varphi_c(k) - f(k) - g(k)\beta_c^T(k)\varphi_c(k) - \varepsilon(k)$,
which is rearranged as
$e(k+1) = g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)$.  (45)
With Equations (44) and (45), the Lyapunov function differentiation (43) is rewritten as
$\Delta L(k+1) = \big[g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)\big]^2 - e^2(k) + \big[\tilde\beta_c(k) - \eta\,h\big(e(k+1)\big)g(k)\varphi_c(k)\big]^2 - \tilde\beta_c^2(k)$,  (46)
and with some mathematical manipulation takes the form
$\Delta L(k+1) = \big[g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)\big]^2 - e^2(k) - 2\eta\,\varphi_c^T(k)\tilde\beta_c(k)h\big(e(k+1)\big)g(k) + \eta^2 h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)$,
and finally,
$\Delta L(k+1) = \big[g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)\big]^2 - e^2(k) - \eta\Big[2\varphi_c^T(k)\tilde\beta_c(k)h\big(e(k+1)\big)g(k) - \eta\,h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)\Big]$.
From the last equation, the learning rate $\eta$ needs to be positive. The term multiplied by $\eta$ also needs to be positive, i.e.,
$2\varphi_c^T(k)\tilde\beta_c(k)h\big(e(k+1)\big)g(k) - \eta\,h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k) > 0$.
To ensure the last inequality, the learning rate is bounded as
$\eta < \dfrac{2\varphi_c^T(k)\tilde\beta_c(k)h\big(e(k+1)\big)g(k)}{h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)}$.
From (45), it is inferred that
$\tilde\beta_c^T(k)\varphi_c(k) = \dfrac{e(k+1) + \varepsilon(k)}{g(k)}$;
then, the boundary of the learning rate becomes
$\eta < \dfrac{2\big[e(k+1) + \varepsilon(k)\big]h\big(e(k+1)\big)}{h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)}$.
Considering that $2e(k+1) \ge h\big(e(k+1)\big)$ and that $\varepsilon(k)$ is small enough to be negligible, the learning-rate boundary is rewritten as
$\eta \le \dfrac{h^2\big(e(k+1)\big)}{h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)} < \dfrac{2\big[e(k+1) + \varepsilon(k)\big]h\big(e(k+1)\big)}{h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)}$,
and the final boundary of the learning rate becomes
$0 < \eta \le \dfrac{1}{g_{max}^2\,\varphi_{c,max}^2} \le \dfrac{1}{g^2(k)\varphi_c^2(k)}$,
where $g_{max} \ge |\hat g(k)|$ and $\varphi_{c,max}^2 \ge \varphi_c^2(k)$, $\forall k \ge 0$.
In a similar sense, since $\operatorname{sign}\big(g(k)\big)\frac{g_{min}}{g(k)} \le 1$, we can say that
$\operatorname{sign}\big(g(k)\big)\dfrac{g_{min}}{g(k)}\dfrac{1}{g_{max}^2\,\varphi_{c,max}^2} \le \dfrac{1}{g_{max}^2\,\varphi_{c,max}^2}$;
hence, if we set
$\eta = \operatorname{sign}\big(g(k)\big)\dfrac{g_{min}}{g(k)}\dfrac{1}{g_{max}^2\,\varphi_{c,max}^2}$,
and substitute it into the update law (40), we obtain
$\beta_c(k+1) = \beta_c(k) + \operatorname{sign}\big(g(k)\big)\dfrac{g_{min}}{g(k)}\dfrac{1}{g_{max}^2\,\varphi_{c,max}^2}\,h\big(e(k+1)\big)\hat g(k)\varphi_c(k)$,
which, according to (39), is rearranged as
$\beta_c(k+1) = \beta_c(k) + \dfrac{1}{g_{max}^2\,\varphi_{c,max}^2}\,h\big(e(k+1)\big)\operatorname{sign}\big(g(k)\big)g_{min}\,\varphi_c(k)$.  (50)
If we define
$\eta_c = \dfrac{1}{g_{max}^2\,\varphi_{c,max}^2}$,
and substitute it into (50), we obtain the update law in (41).
From the previous boundary, it is derived that the Lyapunov differentiation (46) can be rewritten as
$\Delta L(k+1) = \big[g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)\big]^2 - e^2(k) - 2\eta\big|\varphi_c^T(k)\tilde\beta_c(k)h\big(e(k+1)\big)g(k)\big| + \eta^2 h^2\big(e(k+1)\big)g^2(k)\varphi_c^2(k)$.
With the knowledge of the boundaries of the function $h\big(e(k+1)\big)$, it is also deduced that $\big|h\big(e(k+1)\big)\big| \le 1.2$. Replacing the boundaries, the last equation is rearranged as
$\Delta L(k+1) \le \big[g(k)\tilde\beta_c^T(k)\varphi_c(k) - \varepsilon(k)\big]^2 - e^2(k) - 2.4\,\eta\big|\varphi_c^T(k)\tilde\beta_c(k)g(k)\big| + 1.44\,\eta^2 g^2(k)\varphi_c^2(k)$.
For negative or positive constants $a$ and $b$, the inequality $(a+b)^2 \le 2a^2 + 2b^2$ is always met. With that property in mind, the last equation can be rewritten as
$\Delta L(k+1) \le 2g^2(k)\tilde\beta_c^2(k)\varphi_c^2(k) + 2\varepsilon^2(k) - e^2(k) - 2.4\,\eta\big|\varphi_c^T(k)\tilde\beta_c(k)g(k)\big| + 1.44\,\eta^2 g^2(k)\varphi_c^2(k) < 0$.
To ensure stability as in the previous equation, we must analyze the boundaries of the different terms. The boundary of the tracking error $e(k+1)$ is established from the inequality
$-e^2(k) + 1.44\,\eta^2 g^2(k)\varphi_c^2(k) + 2\varepsilon^2(k) < 0$,
and is defined as
$e^2(k) > \Omega_e$,
with $\Omega_e \triangleq 1.44\,\eta^2 g^2(k)\varphi_c^2(k) + 2\varepsilon^2(k)$.
On the other hand, the boundary of the weight-vector error $\tilde\beta_c(k)$ is defined from the inequality
$2\big[g^2(k)\tilde\beta_c^2(k)\varphi_c^2(k) - 1.2\,\eta\big|\varphi_c^T(k)\tilde\beta_c(k)g(k)\big|\big] + 1.44\,\eta^2 g^2(k)\varphi_c^2(k) + 2\varepsilon^2(k) < 0$,
where completing the square (adding and subtracting $(1.2\eta)^2/2$) means some terms can be rearranged into a square binomial:
$2\Big(\big|\varphi_c^T(k)\tilde\beta_c(k)g(k)\big| - 0.6\,\eta\Big)^2 + 1.44\,\eta^2 g^2(k)\varphi_c^2(k) + 2\varepsilon^2(k) - \dfrac{(1.2\eta)^2}{2} < 0$.
Isolating the term $\tilde\beta_c(k)$, it is bounded as
$\|\tilde\beta_c(k)\|_1 > \Omega_{\beta_c}$,
where $\Omega_{\beta_c} \triangleq \dfrac{1.2\,\eta + \sqrt{1.44\,\eta^2 g^2(k)\varphi_c^2(k) + 2\varepsilon^2(k) + (1.2\eta)^2}}{2\,|g(k)|}$.
This concludes the stability proof with the boundedness of the closed-loop system’s tracking error and internal signals. The next section shows experimental results to validate the proposed controller performance. □

5. Validation Results

For experimentation, we used a Cartesian robot whose motor speed is controlled by a driver (with pulse frequency and direction as the inputs) and whose output is the sensed force; both the input and output correspond to the z-axis of the robot. The robot was designed at Cinvestav Saltillo. It uses servo-motors with a terminal voltage of 60 VDC, a continuous torque of 0.353 Nm, and an incremental encoder (AMT102), controlled with a generic driver. The driver is connected to a computer through an NI DAQ Multifunction SCB-68, which also controls the Agilent 33220A pulse generator and the B&K Precision 1666 power supply. The TW Transducer 9105-TW-MINI58 force sensor is connected to the PC with an ATI Industrial Automation 9620-05-DAQ. The control algorithm runs on MATLAB 2013, and the computer has a GenuineIntel 7 processor, 2 GB of RAM, and an integrated hard disk of 150 MB. Figure 5 shows a picture of the experimental setup, and Figure 6 shows a diagram of how the system works.
For performance comparison, the controller proposed by M. L. Corradini in the article A Robust Sliding-Mode Based Data-Driven Model-Free Adaptive Controller [37] was replicated.
The error metrics presented are the Sum Square Error (SSE), defined as
$SSE = \sum_{k=1}^{n} e^2(k)$,
and the Mean Absolute Percentage Error (MAPE),
$MAPE = \dfrac{1}{n}\sum_{k=1}^{n}\dfrac{|r(k) - y(k)|}{|r(k)|}$.
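These metrics can be sketched in a few lines (Python is used here for illustration; the experiments themselves ran in MATLAB):

```python
import numpy as np

def sse(r, y):
    """Sum Square Error over a tracking run."""
    e = np.asarray(r, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sum(e ** 2))

def mape(r, y):
    """Mean Absolute Percentage Error; r(k) must be nonzero.
    Multiply by 100 to express it as a percentage."""
    r = np.asarray(r, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean(np.abs(r - y) / np.abs(r)))

# Illustrative values: sse -> 0.1^2 + 0.1^2 + 0 = 0.02,
#                      mape -> (0.1/1 + 0.1/2 + 0)/3 = 0.05
r = [1.0, 2.0, 4.0]
y = [1.1, 1.9, 4.0]
```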
The control algorithms are written in MATLAB 2013. Given the type of system, the control law needs to be separated into the direction of the motor and the pulse speed; the control-law information is divided into the motor direction,
$d_u(k) = \begin{cases} 1, & u(k) \ge 0, \\ 0, & u(k) < 0, \end{cases}$
and the frequency sent to the driver,
$u_f(k) = |u(k)|$.
This means that if the control law equals $u(k) = 5$, the motor direction is $d_u(k) = 1$ (moving to the right) and the driver is sent a pulse frequency of $u_f(k) = 5$ kHz. On the other hand, if the control law equals $u(k) = -5$, the motor direction is $d_u(k) = 0$ (moving to the left) and the driver is sent a pulse frequency of $u_f(k) = 5$ kHz.
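The direction/frequency split described above can be sketched as follows (a hypothetical helper, in Python for illustration):

```python
def split_control_law(u_khz):
    """Split the signed control law u(k) into motor direction d_u(k)
    (1: right, 0: left) and pulse frequency u_f(k) in kHz for the driver."""
    d_u = 1 if u_khz >= 0 else 0
    u_f = abs(u_khz)
    return d_u, u_f

# u(k) = 5  -> direction 1 (right), 5 kHz
# u(k) = -5 -> direction 0 (left),  5 kHz
```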
Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the controllers' performance, and Table 1 shows the metric results for both controllers. As seen in both the figures and the table, the proposed controller has a larger tracking error at the beginning than the comparison controller but performs better in the final cycle, and it performs better in both the initial and final cycles of the estimation. It can also be seen that both controllers exhibit high-frequency disturbances in both the estimator and the system performance, and that the proposed controller settles more slowly.
Figure 12 shows the PPD estimation $\hat g(k)$ as proposed, in comparison with $\hat\phi_1(k)$ of the comparison controller. The PPD is usually approximated as $\frac{\partial y(k+1)}{\partial u(k)} \approx \frac{\Delta y(k+1)}{\Delta u(k)}$, with $\Delta y(k+1) = y(k+1) - y(k)$ and $\Delta u(k) = u(k) - u(k-1)$. Figure 12 shows this approximation according to the system performance and controllers. It can be seen that the high-frequency behavior of both controllers causes the estimation to be scattered. Figure 13 shows an approximation of the PPD of the system with each controller, where it can be seen that the comparison controller produces more sign changes. In contrast, considering how the control direction and the PPD should behave with a smooth input to the system, as shown in Figure 14 and Figure 15, the sign of the function $\hat g(k)$ behaves as expected during the experiment, unlike that of the comparison controller.

6. Conclusions

Our proposed controller and estimator address the challenges posed by unknown non-affine discrete-time systems with varying control directions. They incorporate novel cost functions, which play a crucial role in adapting to changes in the control direction and ensuring system stability. Through rigorous analysis based on Lyapunov theory, we proved the convergence of these methods, which instills confidence in their effectiveness.
Experimental validation conducted on a force-feedback control system, characterized by its time-varying control direction, demonstrates the practical utility of the proposed estimator and controller. The results illustrate the smooth and adaptive nature of the system’s response to changes in the control direction, highlighting the efficacy of the proposed methods in real-world scenarios.
While the proposed controller may initially exhibit a slower response compared to state-of-the-art alternatives, its adaptive nature ultimately enables a remarkable performance. By continuously estimating and adjusting the control direction, the system achieves an impressive tracking accuracy and robustness over time. Other systems can implement the proposed estimator and controller by acquiring data for estimator training and spending some time training with the actual system. Additionally, it is worth noting that due to the MiFREN base of these methods, human knowledge can be transferred into the system through the initialization of weight vectors, which will be enhanced with the learning algorithm.

Author Contributions

Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Mexican Research Organization CONAHCyT scholarship #811678 and was developed at Cinvestav Saltillo.

Data Availability Statement

The experimental code and generated data are available upon request. Please contact the corresponding author.

Acknowledgments

This study was developed at Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Unidad Saltillo.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DDC — Data-driven control
FREN — Fuzzy Rules Emulated Network
IFT — Iterative feedback tuning
ILC — Iterative learning control
MAPE — Mean absolute percentage error
MiFREN — Multi-input Fuzzy Rules Emulated Network
PPD — Pseudo-partial derivative
SSE — Sum of squared errors
UUB — Uniformly ultimately bounded
VRFT — Virtual reference feedback tuning

References

  1. Ma, Y.S.; Che, W.W.; Deng, C.; Wu, Z.G. Distributed model-free adaptive control for learning nonlinear MASs under DoS attacks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 1146–1155. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, X.; Guo, H.; Cheng, X.; Du, J.; Ma, J. A Robust Design of the Model-Free-Adaptive-Control-Based Energy Management for Plug-in Hybrid Electric Vehicle. Energies 2022, 15, 7467. [Google Scholar] [CrossRef]
  3. Chapagain, K.; Gurung, S.; Kulthanavit, P.; Kittipiyakul, S. Short-term electricity demand forecasting using deep neural networks: An analysis for Thai data. Appl. Syst. Innov. 2023, 6, 100. [Google Scholar] [CrossRef]
  4. Roman, R.C.; Precup, R.E.; Petriu, E.M.; Dragan, F. Combination of data-driven active disturbance rejection and Takagi-Sugeno fuzzy control with experimental validation on tower crane systems. Energies 2019, 12, 1548. [Google Scholar] [CrossRef]
  5. Barth, J.M.; Condomines, J.P.; Bronz, M.; Moschetta, J.M.; Join, C.; Fliess, M. Model-free control algorithms for micro air vehicles with transitioning flight capabilities. Int. J. Micro Air Veh. 2020, 12, 1756829320914264. [Google Scholar] [CrossRef]
  6. Hou, Z.; Jin, S. Model Free Adaptive Control: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  7. Zhou, L.; Li, Z.; Yang, H.; Fu, Y.; Yan, Y. Data-driven model-free adaptive sliding mode control based on FFDL for electric multiple units. Appl. Sci. 2022, 12, 10983. [Google Scholar] [CrossRef]
  8. Ahsan, M.; Salah, M.M.; Saeed, A. Adaptive Fast-Terminal Neuro-Sliding Mode Control for Robot Manipulators with Unknown Dynamics and Disturbances. Electronics 2023, 12, 3856. [Google Scholar] [CrossRef]
  9. Heertjes, M.F.; Van der Velden, B.; Oomen, T. Constrained iterative feedback tuning for robust control of a wafer stage system. IEEE Trans. Control. Syst. Technol. 2015, 24, 56–66. [Google Scholar] [CrossRef]
  10. Roman, R.C.; Precup, R.E.; Hedrea, E.L.; Preitl, S.; Zamfirache, I.A.; Bojan-Dragos, C.A.; Petriu, E.M. Iterative feedback tuning algorithm for tower crane systems. Procedia Comput. Sci. 2022, 199, 157–165. [Google Scholar] [CrossRef]
  11. Duan, L.; Hou, Z.; Yu, X.; Jin, S.; Lu, K. Data-driven model-free adaptive attitude control approach for launch vehicle with virtual reference feedback parameters tuning method. IEEE Access 2019, 7, 54106–54116. [Google Scholar] [CrossRef]
  12. Roman, R.C.; Precup, R.E.; Bojan-Dragos, C.A.; Szedlak-Stinean, A.I. Combined model-free adaptive control with fuzzy component by virtual reference feedback tuning for tower crane systems. Procedia Comput. Sci. 2019, 162, 267–274. [Google Scholar] [CrossRef]
  13. Zhuang, Z.; Tao, H.; Chen, Y.; Stojanovic, V.; Paszke, W. An optimal iterative learning control approach for linear systems with nonuniform trial lengths under input constraints. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 3461–3473. [Google Scholar] [CrossRef]
  14. Maqsood, K.; Luo, J.; Yang, C.; Ren, Q.; Li, Y. Iterative learning-based path control for robot-assisted upper-limb rehabilitation. Neural Comput. Appl. 2021, 1–13. [Google Scholar] [CrossRef]
  15. Nussbaum, R.D. Some remarks on a conjecture in parameter adaptive control. Syst. Control. Lett. 1983, 3, 243–246. [Google Scholar] [CrossRef]
  16. Arefi, M.M.; Zarei, J.; Karimi, H.R. Adaptive output feedback neural network control of uncertain non-affine systems with unknown control direction. J. Frankl. Inst. 2014, 351, 4302–4316. [Google Scholar] [CrossRef]
  17. Mawlani, P.; Arbabtafti, M. Observer-based self-organizing adaptive fuzzy neural network control for non-linear, non-affine systems with unknown sign of control gain and dead zone: A case study of pneumatic actuators. Trans. Inst. Meas. Control 2022, 44, 2214–2234. [Google Scholar] [CrossRef]
  18. Xia, J.; Zhang, J.; Feng, J.; Wang, Z.; Zhuang, G. Command filter-based adaptive fuzzy control for nonlinear systems with unknown control directions. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 1945–1953. [Google Scholar] [CrossRef]
  19. Cao, Y.; Zhao, N.; Xu, N.; Zhao, X.; Alsaadi, F.E. Minimal-approximation-based adaptive event-triggered control of switched nonlinear systems with unknown control direction. Electronics 2022, 11, 3386. [Google Scholar] [CrossRef]
  20. Kamalamiri, A.; Shahrokhi, M.; Mohit, M. Adaptive finite-time neural control of non-strict feedback systems subject to output constraint, unknown control direction, and input nonlinearities. Inf. Sci. 2020, 520, 271–291. [Google Scholar] [CrossRef]
  21. Yu, J.; Shi, P.; Lin, C.; Yu, H. Adaptive neural command filtering control for nonlinear MIMO systems with saturation input and unknown control direction. IEEE Trans. Cybern. 2019, 50, 2536–2545. [Google Scholar] [CrossRef]
  22. Habibi, H.; Nohooji, H.R.; Howard, I. Adaptive PID control of wind turbines for power regulation with unknown control direction and actuator faults. IEEE Access 2018, 6, 37464–37479. [Google Scholar] [CrossRef]
  23. Ren, H.P.; Wang, X.; Fan, J.T.; Kaynak, O. Adaptive backstepping control of a pneumatic system with unknown model parameters and control direction. IEEE Access 2019, 7, 64471–64482. [Google Scholar] [CrossRef]
  24. Wang, S.; Fu, M.; Wang, Y. Robust adaptive steering control for unmanned surface vehicle with unknown control direction and input saturation. Int. J. Adapt. Control Signal Process. 2019, 33, 1212–1224. [Google Scholar] [CrossRef]
  25. Askari, M.R.; Shahrokhi, M.; Talkhoncheh, M.K. Observer-based adaptive fuzzy controller for nonlinear systems with unknown control directions and input saturation. Fuzzy Sets Syst. 2017, 314, 24–45. [Google Scholar] [CrossRef]
  26. Boulkroune, A.; M’saad, M. On the design of observer-based fuzzy adaptive controller for nonlinear systems with unknown control gain sign. Fuzzy Sets Syst. 2012, 201, 71–85. [Google Scholar] [CrossRef]
  27. Bai, W.; Liu, P.X.; Wang, H. Neural-network-based adaptive fixed-time control for nonlinear multiagent non-affine systems. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 570–583. [Google Scholar] [CrossRef]
  28. Wang, S.; Liu, Y.; Yu, H.; Chen, Q. Approximation-Free Control for Nonaffine Nonlinear Systems with Prescribed Performance. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25–27 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 2277–2282. [Google Scholar]
  29. Zhao, Y.; Niu, B.; Zong, G.; Zhao, X.; Alharbi, K.H. Neural network-based adaptive optimal containment control for non-affine nonlinear multi-agent systems within an identifier-actor-critic framework. J. Frankl. Inst. 2023, 360, 8118–8143. [Google Scholar] [CrossRef]
  30. Binazadeh, T.; Rahgoshay, M.A. Robust output tracking of a class of non-affine systems. Syst. Sci. Control Eng. 2017, 5, 426–433. [Google Scholar] [CrossRef]
  31. Liu, Y.J.; Wang, W. Adaptive fuzzy control for a class of uncertain nonaffine nonlinear systems. Inf. Sci. 2007, 177, 3901–3917. [Google Scholar] [CrossRef]
  32. Hu, Y.; Zhang, C.; Wang, B.; Zhao, J.; Gong, X.; Gao, J.; Chen, H. Noise-Tolerant ZNN-Based Data-Driven Iterative Learning Control for Discrete Nonaffine Nonlinear MIMO Repetitive Systems. IEEE/CAA J. Autom. Sin. 2024, 11, 344–361. [Google Scholar] [CrossRef]
  33. Wang, M.; Zhang, Y.; Wang, C. Learning from neural control for non-affine systems with full state constraints using command filtering. Int. J. Control 2020, 93, 2392–2406. [Google Scholar] [CrossRef]
  34. Zhang, F.; Chen, Y.Y. Fuzzy adaptive output consensus tracking control of multiple nonaffine nonlinear pure-feedback systems. In Proceedings of the 2022 IEEE 17th International Conference on Control & Automation (ICCA), Naples, Italy, 27–30 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 648–653. [Google Scholar]
  35. Treesatayapun, C. The knowledge-based fuzzy rules emulated network and its applications on direct adaptive on nonlinear control systems. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2005, 13, 653–672. [Google Scholar] [CrossRef]
  36. Lewis, F.; Jagannathan, S.; Yesildirak, A. Neural Network Control of Robot Manipulators and Non-Linear Systems; CRC Press: Boca Raton, FL, USA, 1998. [Google Scholar]
  37. Corradini, M.L. A robust sliding-mode based data-driven model-free adaptive controller. IEEE Control Syst. Lett. 2021, 6, 421–427. [Google Scholar] [CrossRef]
Figure 1. Estimator diagram.
Figure 2. Cost function E(ê(k)).
Figure 3. Function h_ê(ê(k)).
Figure 4. Model-free controller diagram.
Figure 5. Cartesian robot setup picture.
Figure 6. Cartesian robot setup diagram.
Figure 7. Cartesian robot system performance: [—] desired trajectory, [- -] proposed controller, and [- · -] comparison controller.
Figure 8. Cartesian robot tracking error: [—] proposed controller and [- -] comparison controller.
Figure 9. Cartesian robot system estimation: [—] proposed controller and [- -] comparison controller.
Figure 10. Cartesian robot estimation error: [—] proposed controller and [- -] comparison controller.
Figure 11. Cartesian robot control law: [—] proposed controller and [- -] comparison controller.
Figure 12. Cartesian robot PPD estimation: [—] proposed controller g^(k) and [- -] comparison controller ϕ^1(k).
Figure 13. Cartesian robot PPD approximation Δy(k+1)/Δu(k): [—] proposed controller and [- -] comparison controller.
Figure 14. System performance y(k) and input u(k).
Figure 15. Smooth input PPD approximation Δy(k+1)/Δu(k).
Table 1. Performance metrics—Cartesian robot.

                    Proposed Controller          Comparison Controller
                    T_i            T_f           T_i            T_f
MAPE   e(k)         28.27%         0.98%         9.89%          2.01%
       ê(k)         72.21%         2.50%         163.13%        4.18%
SSE    e(k)         5.53 × 10^4    19.71         1.26 × 10^4    107.25
       ê(k)         4.12 × 10^3    221.11        1.04 × 10^5    489.34
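For reference, the MAPE and SSE metrics reported in Table 1 can be computed over a given time window as sketched below. This is a generic sketch: the exact split of the experiment into the initial window T_i and the final window T_f is assumed, and the reference signal is assumed nonzero on the window.

```python
import numpy as np

def mape(error, reference):
    # Mean absolute percentage error of a tracking error signal,
    # expressed relative to the desired trajectory (assumed nonzero).
    error = np.asarray(error, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * float(np.mean(np.abs(error / reference)))

def sse(error):
    # Sum of squared errors over the window.
    error = np.asarray(error, dtype=float)
    return float(np.sum(error ** 2))
```

Applying both functions separately to the samples falling in T_i and in T_f yields the four columns of Table 1.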