An LQR (Linear Quadratic Regulator) controller is an optimal control strategy that operates a dynamic system so as to minimize a defined cost function. The cost function represents a trade-off between deviations of the state variables (e.g., position, velocity) and the control inputs (e.g., forces, torques), and is mathematically expressed as:

J = \int_0^\infty \left( x^T Q x + u^T R u \right) \, dt
where x is the state vector, u is the control input, Q is a positive semi-definite matrix that penalizes the state, and R is a positive definite matrix that penalizes the control effort. The LQR approach assumes that the system can be described by linear state-space equations, \dot{x} = A x + B u, making it suitable for a variety of engineering applications, including robotics and aerospace. The solution yields a feedback control law of the form:

u = -K x
where K = R^{-1} B^T P is the gain matrix, with P obtained by solving the algebraic Riccati equation. This feedback mechanism ensures that the system behaves optimally, balancing performance and control effort effectively.
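For concreteness, here is a minimal sketch of how the gain K can be computed numerically, assuming NumPy and SciPy are available (scipy.linalg.solve_continuous_are solves the algebraic Riccati equation). The double-integrator plant and the particular Q and R weights below are illustrative choices, not values from the text above.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant: x = [position, velocity], u = force.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Example weights: Q (positive semi-definite) penalizes state deviation,
# R (positive definite) penalizes control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation for P.
P = solve_continuous_are(A, B, Q, R)

# Optimal feedback gain K = R^{-1} B^T P, giving the control law u = -K x.
K = np.linalg.solve(R, B.T @ P)

# Closed-loop dynamics x_dot = (A - B K) x; the eigenvalues should have
# negative real parts, confirming a stabilizing controller.
eigvals = np.linalg.eigvals(A - B @ K)
print("Gain K:", K)
print("Closed-loop eigenvalues:", eigvals)
```

Larger entries in Q drive the corresponding states to zero more aggressively, while a larger R makes the controller more conservative with control effort; tuning this trade-off is the practical core of LQR design.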