There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming in Discrete Time approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and the techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization, tracking and games benefit from the incorporation of optimal control methods:
* infinite-horizon control, for which the difficulty of solving partial differential Hamilton-Jacobi-Bellman equations directly is overcome, with proof provided that the iterative value-function sequence converges to the infimum of all value functions obtained by admissible control law sequences;
* finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually gained from infinite-horizon control;
* nonlinear games, for which a pair of mixed optimal policies is derived both when the saddle point does not exist and, when it does, while avoiding the saddle point's existence conditions. Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability and minimize the individual performance functions, yielding a Nash equilibrium.
To make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming in Discrete Time:
* establishes the fundamental theory clearly, with each chapter devoted to an identifiable control paradigm;
* demonstrates convergence proofs of the ADP algorithms, deepening understanding of how stability and convergence are derived with the iterative computational methods used; and
* shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here a source of powerful methods for furthering their study.
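The infinite-horizon iteration summarized above — updating the value function by V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))] from V_0 = 0 — can be sketched on a toy discretized example. The dynamics, cost, grids and iteration count below are illustrative assumptions for this sketch, not taken from the book:

```python
# Minimal value-iteration sketch for a discrete-time nonlinear system.
# Toy system (assumed, not from the book): x_{k+1} = 0.8*sin(x_k) + u_k,
# stage cost U(x, u) = x^2 + u^2, state and control discretized on grids.
import math

X = [i / 10 for i in range(-20, 21)]   # state grid, -2.0 .. 2.0
U = [i / 10 for i in range(-10, 11)]   # control grid, -1.0 .. 1.0

def f(x, u):                            # toy nonlinear dynamics
    return 0.8 * math.sin(x) + u

def cost(x, u):                         # stage cost U(x, u)
    return x * x + u * u

def nearest(x):                         # project a successor state onto the grid
    return min(X, key=lambda g: abs(g - x))

V = {x: 0.0 for x in X}                 # V_0 = 0, the standard initialization
for _ in range(50):                     # V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))]
    V = {x: min(cost(x, u) + V[nearest(f(x, u))] for u in U) for x in X}

# Greedy control law extracted from the converged value function
policy = {x: min(U, key=lambda u: cost(x, u) + V[nearest(f(x, u))]) for x in X}
```

Starting from V_0 = 0, the sequence V_i is nondecreasing and, under the admissibility conditions the book establishes, converges to the optimal value function; the greedy policy above is the corresponding (sub)optimal feedback law on the grid.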
Orellfuessli.ch
No. A1031703383. Shipping costs: delivery times outside Switzerland 3 to 21 working days; available immediately as a download; plus shipping costs (EUR 17.55).
(*) "Out of print" means the book is not available on any of the associated platforms we search.
hive.co.uk (PDF)
No. 9781447147572. Shipping costs: in stock, despatched same working day before 3pm; plus shipping costs.
This book approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). It shows readers how to derive necessary stability and convergence criteria for their own systems. (EPUB; Transworld)
hive.co.uk (EPUB)
No. 9781447147572. Shipping costs: in stock, despatched same working day before 3pm; plus shipping costs.
1 Since some platforms do not transmit shipping conditions, and these may depend on the delivery country, purchase price, item weight and size, possible platform membership, direct delivery by the platform or via a third-party supplier (Marketplace), etc., the shipping costs indicated by terralivro may not correspond to those of the offering platform.
Bibliographic data of the best matching book
Detailed book data - Adaptive Dynamic Programming for Control
EAN (ISBN-13): 9781447147572; ISBN (ISBN-10): 144714757X; publication year: 2013; publisher: Springer London; 423 pages; language: English
Book in this database since 2012-04-28T15:23:01-03:00 (Sao Paulo); detail page last modified on 2023-09-21T14:24:44-03:00 (Sao Paulo); ISBN/EAN: 9781447147572
ISBN, alternative spellings: 1-4471-4757-X, 978-1-4471-4757-2. Related search terms - book author: bellman, emma curtis; book title: dynamic stability, dynamic programming
Publisher data
Author: Huaguang Zhang; Derong Liu; Yanhong Luo; Ding Wang. Title: Communications and Control Engineering; Adaptive Dynamic Programming for Control - Algorithms and Stability. Publisher: Springer; Springer London. 424 pages. Publication date: 2012-12-14, London, GB. Language: English. 149.79 € (DE); 177.00 CHF (CH). Available. XVI, 424 p.
eBook. Keywords: technology/electronics, electrical engineering, communications engineering; control engineering; Adaptive Dynamic Programming; Finite-horizon Control; Infinite-horizon Control; Reinforcement Learning; Zero-sum Game; Control and Systems Theory; Optimization; Artificial Intelligence; Computational Intelligence; Systems Theory, Control; Engineering; cybernetics and systems theory
Contents: Optimal Stabilization Control for Discrete-time Systems.- Optimal Tracking Control for Discrete-time Systems.- Optimal Stabilization Control for Nonlinear Systems with Time Delays.- Optimal Tracking Control for Nonlinear Systems with Time Delays.- Optimal Feedback Control for Continuous-time Systems via ADP.- Several Special Optimal Feedback Control Designs Based on ADP.- Zero-sum Games for Discrete-time Systems Based on Model-free ADP.- Nonlinear Games for a Class of Continuous-time Systems Based on ADP.- Other Applications of ADP.
* Convergence proofs of the algorithms presented teach readers how to derive necessary stability and convergence criteria for their own systems.
* Establishes the fundamentals of ADP theory so that student readers can extrapolate their learning into control, operations research and related fields.
* Application examples show how the theory can be made to work in real example systems.
* Includes supplementary material: sn.pub/extras
Other books that could be very similar to this one: