Fuzzy policy gradient reinforcement learning for leader-follower systems

Dongbing Gu, Erfu Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution book

2 Citations (Scopus)


This paper presents a policy gradient multi-agent reinforcement learning algorithm for leader-follower systems. In this algorithm, the cooperative dynamics of leader-follower control are modelled as an incentive Stackelberg game, and a linear incentive mechanism connects the leader and follower policies. Policy gradient reinforcement learning explicitly explores the policy parameter space to search for the optimal policy. Fuzzy logic controllers serve as the policies, and their parameters are improved by the policy gradient algorithm.
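The combination the abstract describes — a fuzzy logic controller whose parameters are tuned by policy gradient ascent — can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's formulation: a one-input fuzzy controller with fixed Gaussian membership functions, a toy one-step tracking reward `-(u + x)^2` (so the ideal control is `u = -x`), and a REINFORCE-style score-function gradient that updates the rule consequents `theta`.

```python
import math
import random

# Illustrative sketch only: the paper's actual controllers, dynamics, and
# incentive mechanism are not reproduced here. A Gaussian exploration
# policy is centred on the fuzzy controller output, and the consequent
# parameters theta are updated along a score-function gradient estimate.

CENTERS = [-1.0, 0.0, 1.0]   # fixed membership-function centres (assumed)
WIDTH = 0.5                  # fixed membership-function width (assumed)

def memberships(x):
    """Normalized Gaussian membership degrees of state x in each fuzzy set."""
    w = [math.exp(-((x - c) / WIDTH) ** 2) for c in CENTERS]
    s = sum(w)
    return [v / s for v in w]

def fuzzy_mean(theta, x):
    """Controller output: membership-weighted sum of rule consequents."""
    return sum(t * m for t, m in zip(theta, memberships(x)))

def reward(x, u):
    """Toy one-step tracking reward (assumed); best action at x is u = -x."""
    return -(u + x) ** 2

def train(theta, episodes=2000, sigma=0.2, lr=0.05, seed=0):
    """REINFORCE in policy parameter space: sample u ~ N(mu(x), sigma),
    then ascend (r - baseline) * d log pi / d theta."""
    rng = random.Random(seed)
    baseline = 0.0
    theta = list(theta)
    for _ in range(episodes):
        x = rng.uniform(-1.0, 1.0)            # sample a training state
        m = memberships(x)
        mu = sum(t * mi for t, mi in zip(theta, m))
        u = rng.gauss(mu, sigma)              # exploratory action
        r = reward(x, u)
        baseline += 0.05 * (r - baseline)     # running baseline (variance cut)
        score = (u - mu) / sigma ** 2         # d log pi / d mu
        # chain rule: d mu / d theta_i is the membership degree m_i
        theta = [t + lr * (r - baseline) * score * mi
                 for t, mi in zip(theta, m)]
    return theta

theta = train([0.0, 0.0, 0.0])
```

Because the controller output is linear in the consequents, the gradient with respect to each `theta_i` is simply the score term weighted by that rule's membership degree, which is what makes the fuzzy controller a convenient differentiable policy class for this kind of algorithm.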

Original language: English
Title of host publication: 2005 IEEE International Conference on Mechatronics & Automation
Subtitle of host publication: Conference Proceedings
Editors: Jason Gu, Peter X. Liu
Place of Publication: Piscataway, NJ
Number of pages: 5
ISBN (Print): 078039044X
Publication status: Published - 1 Jul 2005
Event: IEEE International Conference on Mechatronics and Automation, ICMA 2005 - Niagara Falls, ON, Canada
Duration: 29 Jul 2005 - 1 Aug 2005


Conference: IEEE International Conference on Mechatronics and Automation, ICMA 2005
Country/Territory: Canada
City: Niagara Falls, ON


  • incentive Stackelberg game
  • multi-agent reinforcement learning
  • policy gradient reinforcement learning
  • control engineering computing
  • fuzzy logic
  • game theory
  • learning (artificial intelligence)
  • multi-agent systems
