SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Su B, Gutierrez-Farewik EM. Front. Neurorobotics 2023; 17: e1244417.

Copyright

(Copyright © 2023, Frontiers Research Foundation)

DOI

10.3389/fnbot.2023.1244417

PMID

37901705

PMCID

PMC10601656

Abstract

INTRODUCTION: Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like if the algorithm does not incorporate a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we integrated a reinforcement learning algorithm with a musculoskeletal model comprising trunk, pelvis, and leg segments to develop control models that drive the musculoskeletal model to walk.
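As an illustration of what such an integration can look like, the sketch below exposes a musculoskeletal walking task through a gym-style interface whose actions are muscle excitations and whose observations are joint and muscle states. This is a minimal sketch only, not the authors' implementation; the class name, observation layout, and reward terms are assumptions.

# Minimal sketch (not the authors' code) of a gym-style musculoskeletal
# walking environment. A real implementation would advance a physics /
# musculoskeletal engine (trunk, pelvis, and leg segments with
# muscle-tendon units) inside step().
import numpy as np

class MusculoskeletalWalkingEnv:
    def __init__(self, n_muscles=22, dt=0.01, target_speed=None):
        self.n_muscles = n_muscles
        self.dt = dt
        self.target_speed = target_speed  # None = let the model self-select a speed

    def reset(self):
        """Return the initial observation (e.g., joint angles and velocities,
        muscle lengths and velocities, pelvis state)."""
        raise NotImplementedError

    def step(self, excitations: np.ndarray):
        """Apply one vector of muscle excitations in [0, 1], advance the
        musculoskeletal simulation by dt, and return
        (observation, reward, done, info). The reward might combine forward
        progress, speed tracking when a target speed is imposed, an effort
        penalty, and an alive bonus -- all illustrative assumptions here."""
        raise NotImplementedError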

METHODS: We first simulated human walking without imposing a target walking speed, allowing the model to settle on a stable walking speed of its own, which was 1.45 m/s. A range of other walking speeds was then imposed on the simulation, based on this self-developed walking speed. All simulations were generated by solving the Markov decision process problem with the covariance matrix adaptation evolution strategy (CMA-ES), without any reference motion data.
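The optimization step described here, CMA-ES applied to the episodic return of the Markov decision process without reference motion data, could look roughly like the following. This is a sketch under assumptions: it uses the pycma package for CMA-ES, and simulate_gait() is a hypothetical stand-in for rolling out a parameterized controller in an environment such as the one sketched above.

# Sketch of optimizing controller parameters with CMA-ES (pycma).
# simulate_gait() is a hypothetical placeholder for one walking episode
# with the given controller parameters; it returns the episode's
# cumulative reward.
import numpy as np
import cma

def simulate_gait(params, target_speed=None):
    """Hypothetical: roll out the walking environment with a controller
    parameterized by `params` and return the cumulative reward."""
    raise NotImplementedError

def optimize_controller(n_params=40, target_speed=None):
    x0 = np.zeros(n_params)      # initial controller parameters
    sigma0 = 0.5                 # initial CMA-ES step size
    es = cma.CMAEvolutionStrategy(x0, sigma0, {"popsize": 16})
    while not es.stop():
        candidates = es.ask()    # sample a population of parameter vectors
        # CMA-ES minimizes, so use the negative episodic reward as the cost
        costs = [-simulate_gait(np.asarray(p), target_speed) for p in candidates]
        es.tell(candidates, costs)
        es.disp()
    return es.result.xbest       # best-found controller parameters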

RESULTS: Simulated hip and knee kinematics agreed well with experimental observations, but ankle kinematics were less well predicted.

DISCUSSION: Finally, we demonstrated that our reinforcement learning framework also has the potential to model and predict pathological gait resulting from muscle weakness.
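One common way muscle weakness is represented in musculoskeletal simulation (an assumption here, not a detail given in the abstract) is to scale down a muscle's maximum isometric force and then re-optimize the controller for the weakened model:

# Hypothetical sketch: simulate weakness by scaling a muscle's maximum
# isometric force, then re-run the controller optimization on the
# weakened model to predict the resulting gait. The parameter-dict layout
# and key naming are illustrative assumptions.
def weaken_muscle(model_params: dict, muscle_name: str, strength_factor: float) -> dict:
    """Return a copy of the model parameter dict with the named muscle's
    maximum isometric force scaled by strength_factor (between 0 and 1)."""
    weakened = dict(model_params)
    weakened[f"{muscle_name}_max_isometric_force"] *= strength_factor
    return weakened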


Language: en

Keywords

kinematics; CMA-ES; human and humanoid motion analysis; motion synthesis; optimal control; optimization; reflex-based control
