
| Dimitri P. Bertsekas^{[2]} | |
|---|---|
| Born | 1942 |
| Nationality | Greek |
| Citizenship | American, Greek |
| Alma mater | National Technical University of Athens (1968)^{[3]} |
| Known for | Nonlinear programming, convex optimization, dynamic programming, approximate dynamic programming, stochastic systems and optimal control, data communication network optimization |
| Awards | 1997 INFORMS Computing Society (ICS) Prize; 1999 Greek National Award for Operations Research; 2001 ACC John R. Ragazzini Education Award; 2001 Member of the United States National Academy of Engineering; 2009 INFORMS Expository Writing Award; 2014 AACC Richard E. Bellman Control Heritage Award; 2014 INFORMS Khachiyan Prize; 2015 SIAM/MOS Dantzig Prize; 2018 INFORMS John von Neumann Theory Prize; 2022 IEEE Control Systems Award |
| **Scientific career** | |
| Fields | Optimization, mathematics, control theory, and data communication networks |
| Institutions | The George Washington University; Stanford University; University of Illinois at Urbana-Champaign; Massachusetts Institute of Technology |
| Thesis | *Control of Uncertain Systems with a Set-Membership Description of the Uncertainty* (1971) |
| Doctoral advisor | Ian Burton Rhodes^{[1]} |
| Other academic advisors | Michael Athans |
| Doctoral students | Steven E. Shreve; Paul Tseng; Asuman Özdağlar^{[1]} |

**Dimitri Panteli Bertsekas** (born 1942, Athens; Greek: Δημήτρης Παντελής Μπερτσεκάς) is an applied mathematician, electrical engineer, and computer scientist. He is a McAfee Professor in the Department of Electrical Engineering and Computer Science of the School of Engineering at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, and also a Fulton Professor of Computational Decision Making at Arizona State University, Tempe.

Bertsekas was born in Greece and spent his childhood there. He studied for five years at the National Technical University of Athens, then for about a year and a half at The George Washington University, Washington, D.C., where he obtained his M.S. in electrical engineering in 1969, and for about two years at MIT, where he obtained his doctorate in system science in 1971. Before joining the MIT faculty in 1979, he taught for three years in the Engineering-Economic Systems Department of Stanford University, and for five years in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. In 2019, he was appointed a full-time professor at the School of Computing and Augmented Intelligence at Arizona State University, Tempe, while maintaining a research position at MIT.^{[4]}^{[5]}

He is known for his research work and for his twenty textbooks and monographs in theoretical and algorithmic optimization and control, in reinforcement learning, and in applied probability. His work ranges from theoretical and foundational contributions, to algorithmic analysis and design for optimization problems, to applications such as data communication and transportation networks and electric power generation. He is featured among the top 100 most cited computer science authors^{[6]} in the CiteSeer academic search engine^{[7]} and digital library.^{[8]} He is also ranked within the top 40 scientists in the world (top 20 in the USA) in the field of Engineering and Technology, and within the top 50 scientists in the world (top 30 in the USA) in the field of Mathematics.^{[9]}^{[10]} In 1995, he co-founded a publishing company, Athena Scientific, which, among other titles, publishes most of his books.

In the late 1990s Bertsekas developed a strong interest in digital photography. His photographs have been exhibited on several occasions at MIT.^{[11]}

Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science^{[12]} for his book "Neuro-Dynamic Programming" (co-authored with John N. Tsitsiklis); the 2000 Greek National Award for Operations Research; and the 2001 ACC John R. Ragazzini Education Award for outstanding contributions to education.^{[13]} In 2001, he was elected to the US National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks".^{[14]} In 2009, he received the INFORMS Expository Writing Award for his ability to "communicate difficult mathematical concepts with unusual clarity, thereby reaching a broad audience across many disciplines."^{[15]}
In 2014 he received the Richard E. Bellman Control Heritage Award from the American Automatic Control Council,^{[16]}^{[17]} and the Khachiyan Prize for lifetime achievements in the area of optimization from the INFORMS Optimization Society.^{[18]} He also received the 2015 Dantzig Prize from SIAM and the Mathematical Optimization Society,^{[19]} the 2018 INFORMS John von Neumann Theory Prize (jointly with Tsitsiklis) for the books "Neuro-Dynamic Programming" and "Parallel and Distributed Computation",^{[15]} and the 2022 IEEE Control Systems Award for "fundamental contributions to the methodology of optimization and control" and "outstanding monographs and textbooks".^{[20]}

- *Dynamic Programming and Optimal Control* (1996)
- *Data Networks* (1989, co-authored with Robert G. Gallager)
- *Nonlinear Programming* (1996)
- *Introduction to Probability* (2003, co-authored with John N. Tsitsiklis)
- *A Course in Reinforcement Learning* (2023)

- "Stochastic Optimal Control: The Discrete-Time Case" (1978, co-authored with S. E. Shreve), a mathematically complex work, establishing the measure-theoretic foundations of dynamic programming and stochastic control.
- "Constrained Optimization and Lagrange Multiplier Methods" (1982), the first monograph that addressed comprehensively the algorithmic convergence issues around augmented Lagrangian and sequential quadratic programming methods.
- "Parallel and Distributed Computation: Numerical Methods" (1989, co-authored with John N. Tsitsiklis), which, among other contributions, established the fundamental theoretical structures for the analysis of distributed asynchronous algorithms.
- "Linear Network Optimization" (1991) and "Network Optimization: Continuous and Discrete Models" (1998), which, among other topics, comprehensively discuss the class of auction algorithms for assignment and network flow optimization, developed by Bertsekas over a period of 20 years starting in 1979.
- "Neuro-Dynamic Programming" (1996, co-authored with Tsitsiklis), which laid the theoretical foundations for suboptimal approximations of highly complex sequential decision-making problems.
- "Convex Analysis and Optimization" (2003, co-authored with A. Nedic and A. Ozdaglar) and "Convex Optimization Theory" (2009), which provided a new line of development for optimization duality theory, a new connection between the theory of Lagrange multipliers and nonsmooth analysis, and a comprehensive development of incremental subgradient methods.
- "Abstract Dynamic Programming" (2013), which aims at a unified development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. A 3rd edition of this monograph, which extends the framework for applications to sequential zero-sum games and minimax problems, was published in 2022.
- "Reinforcement Learning and Optimal Control" (2019), which aims to explore the common boundary between dynamic programming/optimal control and artificial intelligence, and to form a bridge that is accessible to workers with a background in either field.
- "Rollout, Policy Iteration, and Distributed Reinforcement Learning" (2020), which focuses on the fundamental idea of policy iteration, its one-iteration counterpart, rollout, and their distributed and multiagent implementations. Some of these methods have been the backbone of high-profile successes in games such as chess, Go, and backgammon.^{[21]}^{[22]}^{[23]}
- "Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control" (2022), which introduces a new conceptual framework for reinforcement learning, based on off-line training and on-line play algorithms, which are designed independently of each other but operate in synergy through the powerful mechanism of Newton's method.
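To illustrate the auction algorithms for assignment discussed in the network optimization monographs above, here is a minimal sketch of a forward auction for a small dense n×n assignment problem (maximizing total benefit). The function name and the fixed bidding increment `eps` are illustrative choices, not taken from the books, and refinements such as ε-scaling are omitted.

```python
def auction_assignment(benefit, eps=1e-3):
    """Forward auction for the n x n assignment problem (n >= 2).

    benefit[i][j] is the benefit of assigning person i to object j.
    Returns assigned, where assigned[i] is the object held by person i.
    """
    n = len(benefit)
    prices = [0.0] * n        # current price of each object
    owner = [None] * n        # owner[j]: person currently holding object j
    assigned = [None] * n     # assigned[i]: object currently held by person i
    unassigned = list(range(n))

    while unassigned:
        i = unassigned.pop()
        # Person i bids for its best object at current prices.
        values = [benefit[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=lambda j: values[j])
        best_val = values[best]
        second_val = max(v for j, v in enumerate(values) if j != best)
        # Raise the price by the bid increment plus eps; the eps term
        # guarantees termination (each bid raises a price by at least eps).
        prices[best] += best_val - second_val + eps
        # The previous owner of the object, if any, becomes unassigned.
        prev = owner[best]
        owner[best] = i
        assigned[i] = best
        if prev is not None:
            assigned[prev] = None
            unassigned.append(prev)
    return assigned
```

With eps small enough, the resulting assignment is optimal within n·eps of the best total benefit; for example, `auction_assignment([[10, 1], [1, 10]])` pairs each person with their high-benefit object.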