Keynote Speakers

Anticipatory Control of Human-Robot Interaction: Towards Autonomous yet Truly Collaborative Robots

Arash Ajoudani

Director of HRI2 Laboratory, Istituto Italiano di Tecnologia, Genova, Italy

Abstract:

Human-centric and collaborative robotic systems are most often designed to coexist with humans and to safely share a working space, tolerating accidental collisions or occasional contacts, but they are rarely designed to enter into direct socio-physical contact with humans in order to perceive, understand, and react to their distress or needs. To respond to this challenge, anticipatory human models and autonomous robot controllers, for both fixed and mobile bases, need to be effectively merged to bring human-robot interaction and collaboration to bear on a much wider class of problems. In this talk, I will give an overview of our research activities on human modeling, effort allocation, and autonomous robot loco-manipulation control, as key components for reaching this goal.

Bio:

Arash Ajoudani is a tenured senior scientist at the Italian Institute of Technology (IIT), where he leads the Human-Robot Interfaces and physical Interaction (HRI²) laboratory. He also coordinates the Robotics for Manufacturing (R4M) lab of the Leonardo labs, and is a principal investigator of the IIT-Intellimech JOiiNT lab. He is a recipient of the European Research Council (ERC) Starting Grant 2019 (Ergo-Lean), the coordinator of the Horizon 2020 project SOPHIA, and the co-coordinator of the Horizon 2020 project CONCERT. He is a recipient of the IEEE Robotics and Automation Society (RAS) Early Career Award 2021, and winner of the Amazon Research Awards 2019, the Solution Award 2019 (MECSPE2019), the KUKA Innovation Award 2018, the WeRob best poster award 2018, and the best student paper award at ROBIO 2013. His PhD thesis was a finalist for the Georges Giralt PhD Award 2015 for the best European PhD thesis in robotics. He was also a finalist for the Solution Award 2020 (MECSPE2020), the best conference paper award at Humanoids 2018, the best interactive paper award at Humanoids 2016, the best oral presentation award at Automatica (SIDRA) 2014, and the best manipulation paper award at ICRA 2012. He is the author of the book “Transferring Human Impedance Regulation Skills to Robots” in the Springer Tracts in Advanced Robotics (STAR) series, and of several publications in journals, international conferences, and book chapters. He currently serves as an elected IEEE RAS AdCom member (2022-2024) and as chair and representative of the IEEE RAS Young Professionals Committee. He is a scholar of the European Lab for Learning and Intelligent Systems (ELLIS).

Shaping Robotic Assistance through Structured Robot Learning

Georgia Chalvatzaki

Assistant Professor, TU Darmstadt, Germany

Abstract:

Future intelligent robotic assistants are expected to perform various tasks in unstructured and human-inhabited environments. These robots should support humans in everyday activities as personal assistants or collaborate with them in work environments like hospitals and warehouses. In this talk, I will briefly describe my research on robotic assistants that help and support humans in need, developing specific human-robot interaction behaviors by combining classical robotics and machine learning approaches. I will then explain why mobile manipulation robots, thanks to their body structure and sensory equipment, are currently the most promising embodied AI systems for learning to execute a range of assistive tasks. On top of this, I will point out some key challenges that hinder autonomous mobile manipulation for intelligent assistance, and discuss how structured robot learning can pave the way toward generalizable robot behaviors. Structured robot learning refers to all learning methods at the intersection of classical robotics and machine learning that leverage structure in data and algorithms to scale robot behaviors to complex tasks. Finally, this talk will give insights into how my team and I leverage structured representations, priors, and task descriptions, together with learning and planning, in some challenging (mobile) manipulation tasks on our path toward creating general-purpose intelligent robotic assistants.

Bio:

Georgia Chalvatzaki is an Assistant Professor and research leader of the Intelligent Robotic Systems for Assistance (iROSA) group at TU Darmstadt, Germany. She received her Diploma and Ph.D. in Electrical and Computer Engineering from the National Technical University of Athens, Greece. Her research interests lie at the intersection of classical robotics and machine learning, developing behaviors that enable mobile manipulator robots to solve complex tasks in domestic environments with humans in the loop of the interaction process. She holds an Emmy Noether grant for AI methods from the German Research Foundation. She is co-chair of the IEEE RAS Technical Committee on Mobile Manipulation, co-chair of the IEEE RAS Women in Engineering Committee, and was voted an “AI-Newcomer” for 2021 by the German Informatics Society.

Launching Socially-Aware Mobile Manipulation Robots in Hospitals

Vivian Chu

CTO and Co-Founder, Diligent Robotics, USA

Abstract:

As robotic technology advances, so does the push to launch robots in the real world. Over the years, robots have made huge strides in mobile transport as well as warehouse automation. However, mobile manipulation robots operating around people in semi-structured environments are still few and far between. At Diligent Robotics, we’re pushing the boundaries of socially-aware mobile manipulation by deploying robots into the hospital environment. This talk will cover the challenges a startup faces in putting Moxi, a socially-aware mobile manipulation platform, into a semi-structured environment with people (i.e., hospitals). It will include lessons learned and key takeaways, as well as insights into healthcare automation given the rise of COVID-19 and the impact of labor shortages.

Bio:

Vivian is the CTO and co-founder of Diligent Robotics, where they build robot assistants that help clinical staff with non-patient-facing tasks so they have more time for patient care. She is an expert roboticist who specializes in robot learning from people and human-robot interaction. She has received top-tier industry recognition, including: 2019 MIT TR35, Google Anita Borg Memorial Scholar, Stanford EECS Rising Star, the Best Cognitive Robotics Paper Award at ICRA, and a feature on Robohub’s “25 women in robotics you need to know” list. Vivian has applied her HRI and machine learning expertise on several robotic platforms, including Moxi, PR2, the Meka robot, and the Kinova Jaco2. She has also worked at Google[X], Honda Research Institute, and IBM Almaden Research.

Toward Scalable Autonomy

Aleksandra Faust

Senior Staff Research Scientist, Google Research, Mountain View, USA

Abstract:

Training autonomous agents and systems that perform complex tasks in a variety of real-world environments remains a challenge. While reinforcement learning (RL) is a promising technique, training RL agents is an expensive, human-in-the-loop process that requires heavy engineering and often yields suboptimal results. In this talk, we explore two main directions toward scalable reinforcement learning and autonomy. First, we discuss several methods for zero-shot sim2real transfer for mobile and aerial navigation, including visual navigation and fully autonomous navigation on a severely resource-constrained nano UAV. Second, we view the interaction between the human engineer and the agent under training as a decision-making process performed by the human, and consequently automate training by learning that decision-making policy. With that insight, we focus on zero-shot generalization and discuss a compositional task curriculum that generalizes to unseen tasks of evolving complexity. We show that, across different applications, these learning methods improve reinforcement learning agents’ generalization and performance, and raise questions about nurture vs. nature in training autonomous systems.

Bio:

Aleksandra Faust is a Senior Staff Research Scientist and Reinforcement Learning research team co-founder at Google Brain Research. Previously, Aleksandra founded and led Task and Motion Planning research in Robotics at Google, and machine learning for self-driving car planning and controls at Waymo. She earned a Ph.D. in Computer Science at the University of New Mexico (with distinction) and a Master’s in Computer Science from the University of Illinois at Urbana-Champaign. Her research interests include learning for safe and scalable autonomy, reinforcement learning, and learning to learn for autonomous systems. Aleksandra won the IEEE RAS Early Career Award for Industry, the Tom L. Popejoy Award for the best doctoral dissertation at the University of New Mexico in the period 2011-2014, and was named Distinguished Alumna by the University of New Mexico School of Engineering. Her work has been featured in the New York Times, PC Magazine, ZDNet, and VentureBeat, and was awarded Best Paper in Service Robotics at ICRA 2018, Best Paper in Reinforcement Learning for Real Life (RL4RL) at ICML 2019, and Best Paper of IEEE Computer Architecture Letters in 2020.

AI-Robotic Systems for Scientific Discovery -Role of Robotic Technologies-

Kanako Harada

Associate Professor, Center for Disease Biology and Integrative Medicine (CDBIM), Graduate School of Medicine, The University of Tokyo, Japan

Abstract:

Artificial intelligence, robotics, and automation technologies are being incorporated into scientific exploration and are contributing to scientific discovery. Currently, the mainstream of these technologies aims at collecting and analyzing big data by automating repetitive experimental operations, much as in factory automation. Our project aims to develop AI-robot systems that can autonomously perform scientific experiments on valuable, small, and fragile samples whose individual differences mean that identical operations do not work. To achieve this goal, the AI-robot systems need to autonomously formulate a scientific hypothesis, develop a strategy for moving their bodies, perform manipulations on the samples, observe and interpret the samples’ reactions, and generate new hypotheses in a circular loop. A large-scale interdisciplinary collaborative research project has been designed to demonstrate this loop, and the latest achievements will be presented in the talk.

Bio:

Kanako Harada is an Associate Professor at the Center for Disease Biology and Integrative Medicine (CDBIM), Graduate School of Medicine, The University of Tokyo, Japan, and she also belongs to the Department of Bioengineering and the Department of Mechanical Engineering, Graduate School of Engineering. She serves as a Project Manager for one of the national flagship “Moonshot” projects of the Cabinet Office. She obtained her M.Sc. in Engineering from The University of Tokyo in 2001 and her Ph.D. in Engineering from Waseda University in 2007. She worked for Hitachi Ltd., the Japan Association for the Advancement of Medical Equipment, and Scuola Superiore Sant’Anna, Italy, before joining The University of Tokyo. She also served as a Program Manager for the ImPACT program of the Cabinet Office (2016-2019). Her research interests include surgical robotic systems, automation of robots for medical applications, skills assessment, patient models, virtual-reality simulators, and regulatory science.

Swarms for People

Sabine Hauert

Associate Professor in Swarm Engineering, University of Bristol, UK

Abstract:

As tiny robots become individually more sophisticated, and larger robots easier to mass produce, a breakdown of conventional disciplinary silos is enabling swarm engineering to be adopted across scales and applications, from nanomedicine to treat cancer, to cm-sized robots for large-scale environmental monitoring or intralogistics. This convergence of capabilities is facilitating the transfer of lessons learned from one scale to the other. Larger robots that work in the 1000s may operate in a way similar to reaction-diffusion systems at the nanoscale, while sophisticated microrobots may have individual capabilities that allow them to achieve swarm behaviour reminiscent of larger robots with memory, computation, and communication. Although the physics of these systems are fundamentally different, much of their emergent swarm behaviours can be abstracted to their ability to move and react to their local environment. This presents an opportunity to build a unified framework for the engineering of swarms across scales that makes use of machine learning to automatically discover suitable agent designs and behaviours, digital twins to seamlessly move between the digital and physical world, and user studies to explore how to make swarms safe and trustworthy. Such a framework would push the envelope of swarm capabilities, towards making swarms for people.

Bio:

Sabine Hauert is Associate Professor of Swarm Engineering at the University of Bristol. She leads a team of 20 researchers working on making swarms for people, across scales, from nanorobots for cancer treatment to larger robots for environmental monitoring or logistics (https://hauertlab.com/). Before joining the University of Bristol, Sabine engineered swarms of nanoparticles for cancer treatment at MIT and deployed swarms of flying robots at EPFL. She is PI or Co-I on more than 20M GBP in grant funding and has served on national and international committees, including the UK Robotics Growth Partnership, the Royal Society Working Group on Machine Learning and Data Community of Interest, and several IEEE boards. She is President and Executive Trustee of the non-profits robohub.org and aihub.org, which connect the robotics and AI communities to the public. As an expert in science communication, she is often invited to speak with media and at conferences (over 50 invited talks).

Robust Localization and Mapping toward Long-term Navigation

Ayoung Kim

Seoul National University, Korea

Abstract:

Achieving long-term robustness of Simultaneous Localization and Mapping (SLAM) for robot navigation has been studied thoroughly for decades. In this talk, I would like to examine robustness from the perception and representation perspectives. Securing images at the camera acquisition phase, followed by image enhancement, improves front-end visual SLAM in visually degraded environments. Complementary exploitation of structural lines and points further alleviates degradation. The majority of studies have relied on cameras and lidars, but extending sensing capability beyond the visible spectrum allows us to achieve all-day and all-weather perception. Leveraging nonconventional sensors, including radars and thermal cameras, yields substantial improvement once their challenges are adequately handled. I would also like to introduce a map representation that handles potential changes occurring in the environment. Our recent work on long-term 3D map management enables robots to navigate reliably in the nonstationary real world. For long-term mapping, efficiency in both memory and computation cost is critical.

Bio:

Ayoung Kim has been an associate professor in the Department of Mechanical Engineering at Seoul National University (SNU) since September 2021. Before joining SNU, she was with the Department of Civil and Environmental Engineering at the Korea Advanced Institute of Science and Technology (KAIST) from 2014 to 2021. She received B.S. and M.S. degrees in mechanical engineering from SNU in 2005 and 2007, and an M.S. degree in electrical engineering and a Ph.D. degree in mechanical engineering from the University of Michigan (UM), Ann Arbor, in 2011 and 2012, respectively.

The Future of Intelligent Machines: Combining the Safety of Model-based Design with the Scalability of Data-Driven Algorithms

James Kuffner

Chief Digital Officer, Member of the Board of Directors, Operating Officer, Toyota Motor Corporation, Japan
CEO, Woven Planet Holdings, Inc., Japan

Abstract:

High-performance networking, deep learning, and cloud computing are radically transforming all aspects of human society, and are poised to disrupt the state of the art in the development of intelligent machines. Specifically, advanced safety and automated driving powered by connected, distributed fleet intelligence (i.e. “cloud robotics”) will enable future mobility systems that will dramatically alter the design and evolution of our cities. For robot systems and automotive products to be viable in the market, they must incorporate model-based designs with functional safety in order to provide explainability and minimize risks to people’s lives. However, increasingly complex models of intelligent behavior are often difficult or impossible to design manually, such as building reliable real-world perception, planning, and behavior prediction of traffic at scale for automated driving applications. This talk will explore methods to combine the best of both data-driven and analytical modeling techniques in order to create safe, high-performance intelligent machines – technology products whose ultimate purpose is to support human happiness and well-being.

Bio:

Dr. James Kuffner is the Chief Executive Officer (CEO) and Representative Director of Woven Planet, and a Member of the Board of Directors and Operating Officer of Toyota Motor Corporation (TMC). Dr. Kuffner also serves as the Representative Director of Woven Core, and the President and Representative Director of Woven Alpha. He has also been serving as Chief Digital Officer of TMC. Dr. Kuffner received a Ph.D. from the Stanford University Dept. of Computer Science Robotics Laboratory in 2000, and was a Japan Society for the Promotion of Science (JSPS) Postdoctoral Research Fellow at the University of Tokyo working on software and planning algorithms for humanoid robots. He joined the faculty at Carnegie Mellon University’s Robotics Institute in 2002. Dr. Kuffner is best known as a co-inventor of the Rapidly-exploring Random Tree (RRT) algorithm, which has become a key standard benchmark for robot motion planning. He has published over 125 technical papers, holds more than 50 patents, and received the Okawa Foundation Award for Young Researchers in 2007. Dr. Kuffner was a Research Scientist and Engineering Director at Google from 2009 to 2016. He was part of the initial engineering team that built Google’s self-driving car. In 2010, he introduced the term “Cloud Robotics” to describe how network-connected robots could take advantage of distributed computation and data stored in the cloud. He was appointed head of Google’s Robotics division in 2014. He joined the Toyota Research Institute (TRI) as CTO in 2016. Dr. Kuffner continues to serve as an Adjunct Associate Professor at the Robotics Institute, Carnegie Mellon University, and as an Executive Advisor to TRI.

Sensorimotor Control Meets Surgical Robotics – A Model of the Surgeon Can Benefit Patients

Ilana Nisky

Associate Professor, Department of Biomedical Engineering, Ben-Gurion University of the Negev, Israel

Abstract:

In robot-assisted minimally invasive surgery (RAMIS), a surgeon manipulates a pair of joysticks that teleoperate instruments inside a patient’s body to achieve precise control of movement, tissue manipulation, and perception. Despite many advantages for both the patient and the surgeon, the full potential of RAMIS and other teleoperation applications is yet to be realized. During everyday interaction with the external world, our brain gracefully deals with a similar task – fine manipulation and perception with outdated and noisy information that arrives from multiple sensors. Hence, I posit that employing models and theories about how our sensorimotor system performs these tasks could help bridge major gaps currently impeding the realization of RAMIS’s full potential. I will present recent results of our human behavioral and machine learning studies to uncover the kinematic signatures of human movements while executing surgical tasks with virtual and real objects, and how they change across different time scales following adaptation and skill acquisition. I will then discuss how we harness these findings to eventually improve the control of surgical robots, the assessment and advancement of surgical skill, and ultimately, the well-being of patients.

Bio:

Prof. Ilana Nisky received all her academic degrees in Biomedical Engineering from Ben-Gurion University of the Negev. After a postdoctoral fellowship at Stanford University as a Marie Curie International Outgoing Fellow, she returned to BGU, where she is now an Associate Professor of Biomedical Engineering and established the Biomedical Robotics Lab. Recently she also joined the Negev Translational Neurorehabilitation Lab as the principal investigator for rehabilitation with haptic interfaces. She is the recipient of the 2019 IEEE Robotics and Automation Society Early Academic Career Award, the 2021 Neural Control of Movement Society Early Career Award, and the Alon fellowship from the Israeli Council for Higher Education, and was selected as one of 40 promising young Israelis by TheMarker magazine. Her research interests include human motor control, haptics, robotics, human and machine learning, teleoperation, and robot-assisted surgery, and she hopes that this research will improve the quality of treatment for patients, facilitate better training of surgeons, advance the technology of teleoperation and haptics, and advance our understanding of the brain. Nisky has authored more than 80 scientific publications in peer-reviewed journals and conference proceedings, and numerous abstracts at international conferences. She is a Senior Member of IEEE, was an executive committee member of the EuroHaptics Society, and is a board member of the Israeli Society for Medical and Biological Engineering.

Inventing Robotic Mechanisms

Kenjiro Tadakuma

Tohoku University, Japan

Abstract:

Conventional omnidirectional wheel mechanisms are limited in their ability to climb steps and cross gaps. The Omni-Ball, consisting of two connected hemispherical wheels, overcomes these limitations by enabling the crossing of higher obstacles and larger gaps than previously possible. By elongating the Omni-Ball longitudinally into a cylinder shape, we obtained the Omni-Crawler, which enables omnidirectional mobility on rough terrain. In addition, transforming the cylinder shape into a torus with inner-outer membrane motion not only enables robotic mobility in murky water but also makes it possible to further transition from Omni-Crawler to Omni-Gripper. Conventional soft grippers are not suitable for objects with sharp sections such as broken valves and glass shards, but the torus shape solves this problem by using a three-layered variable-stiffness skin-bag made of cut-resistant cloth. A similar function can also be achieved using a string of titanium beads, which can grip objects of almost any shape, even when they are on fire. To build on these gripper mechanisms from the viewpoint of bioinspired robotics, we also developed a structure inspired by the proboscis (mouthpart) of Nemertea, also known as the ribbon worm, and combined it with self-healing materials to realize a robotic blood vessel with active self-healing properties. Through the addition of repair mechanisms, we expect it to be possible to achieve the active transformation of one’s own body, thereby creating the ultimate robotic mechanism.

Bio:

Kenjiro Tadakuma holds an Associate Professorship at Tohoku University in the field of robotics, where he has been leading the Plus Ultra Mechanism Group since 2015. Throughout his career, he has made outstanding contributions to the design of novel robotic mechanisms. As a Ph.D. student at Tokyo Tech (2004 – 2007), he invented the first omnidirectional mechanism, known as the “Omni-Ball”. This brought him to MIT’s Field and Space Robotics laboratory as a post-doctoral researcher (2007), where he went on to contribute to the Mars hopper project and developed a polymer-based mechanical device for medical applications. Back in Japan, he held positions at Tohoku University, the University of Electro-Communications, and Osaka University (2008 – 2015), where he expanded on the concept of omnidirectional mechanisms with successful applications in mobile robotics and gripping mechanisms, such as the “Omni-Crawler” and “Omni-Gripper”. At Tohoku University, he is further aiming to extract the essence of biological mechanisms and express them as robotic mechanisms. Notably, his team won the IEEE ICRA Best Paper Award on Mechanisms and Design in 2019. The nature of his inventions illustrates his deep focus on pioneering the field of robotic mechanisms as a fundamental science.

Responsible & Empathetic Human Robot Interactions

Pascale Fung

Professor at the Department of Electronic & Computer Engineering and Department of Computer Science & Engineering at The Hong Kong University of Science & Technology (HKUST)

Bio:

Pascale Fung is a Professor at the Department of Electronic & Computer Engineering and the Department of Computer Science & Engineering at The Hong Kong University of Science & Technology (HKUST), and a visiting professor at the Central Academy of Fine Arts in Beijing. She is an elected Fellow of the Association for Computational Linguistics (ACL) for her “significant contributions towards statistical NLP, comparable corpora, and building intelligent systems that can understand and empathize with humans”. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for her “contributions to human-machine interactions” and an elected Fellow of the International Speech Communication Association for “fundamental contributions to the interdisciplinary area of spoken language human-machine interactions”. Prof. Fung is the Director of the HKUST Centre for AI Research (CAiRE), an interdisciplinary research centre spanning all four schools at HKUST. She co-founded the Human Language Technology Center (HLTC) and is the founding chair of the Women Faculty Association at HKUST. She is an expert on the Global Future Council, a think tank for the World Economic Forum. Prof. Fung represents HKUST on the Partnership on AI to Benefit People and Society. She is a member of the IEEE Working Group developing an IEEE standard – Recommended Practice for Organizational Governance of Artificial Intelligence.

Robot-assisted remote minimally invasive surgery: the fusion of 5G and AI

Shuxin Wang

Tianjin University, China

Abstract:

Remote surgery helps to balance medical resources effectively. However, robot-assisted minimally invasive surgery faces time-delay and safety problems in remote surgical environments. To address these problems, this talk will introduce the design of a minimally invasive robotic surgery system for remote operations. 5G technology is adopted to reduce the robot’s time delay, and artificial intelligence algorithms are combined to improve the safety of the system. The “MicroHand” minimally invasive surgical robot, which can assist remote surgery, has been successfully developed. The system has obtained the approval of China’s National Medical Products Administration, and 51 remote clinical trials have been successfully carried out.

Bio:

Shuxin Wang is a professor at Tianjin University, China. His research field is medical robotics. He successfully developed a minimally invasive surgical robot (called MicroHand) and obtained NMPA approval for it. He is currently committed to research on robot-assisted telesurgery. He is the chief scientist of the Institute of Medical Robots and Intelligent Systems (IMRIS) of Tianjin University. He was elected academician of the Chinese Academy of Engineering in 2021. He also won the ASME DED Leonardo Da Vinci Award in 2021.

Advances in High-Power-Density Dielectric Elastomer Artificial Muscles and their Applications

Huichan Zhao

Department of Mechanical Engineering, Tsinghua University, China

Abstract:

The landmark papers by Pelrine et al. marked the beginning of the use of dielectric elastomer (DE) actuators (and sensors) for soft robotics. They showed that a voltage applied through the thickness of a soft material could produce large strains as a result of the Maxwell stress between charges on the two electrodes. Subsequent works have shown that DE-based actuators are particularly attractive for many soft robotic applications because they exhibit large energy densities with large strains and muscle-like response times. In recent years, we have been focusing on developing new configurations, models, fabrication, and design methods for long-life-cycle, high-power-density dielectric elastomer artificial muscles. I will present some of our recent work on these aspects and their applications in wearables, microrobotics, and other devices that require compact, high-power actuators.

Bio:

Huichan Zhao is an Associate Professor in the Department of Mechanical Engineering at Tsinghua University. She received her Bachelor’s degree in Mechanical Engineering from Tsinghua University in 2012 and her PhD degree from Cornell University in 2017. During 2017-2018, she was a postdoc at Harvard University. Her research interests include soft robotics, bioinspired robotics, smart materials, and flexible sensors and actuators. Some of her work has been published in Nature, Science Robotics, Nature Communications, IEEE Transactions on Robotics, and Advanced Functional Materials, among others. She was listed in Forbes China’s 30 Under 30 in 2018 and MIT Technology Review’s Innovators Under 35 China in 2020. She received the Xiong Youlun Zhihu Excellent Young Scientist Award and the Damo Academy Young Fellow Award in 2021.

Wearable Robotics with Smart Fluid Devices: Progress and Possibilities

Modar Hassan

Assistant Professor, Department of Intelligent and Mechanical Interaction Technologies, University of Tsukuba, Japan

Abstract:

Wearable robotics has the potential to augment human physical function, support the physical rehabilitation of patients with musculoskeletal disorders, and improve the quality of life of persons living with physical disabilities. A MagnetoRheological (MR) fluid is a smart material that can change its apparent viscosity in response to a magnetic field. MR fluid devices are especially promising in wearable robotics due to their fast response time and high material performance. In this talk, I will outline the development of MR devices at the University of Tsukuba for wearable robotics. I will demonstrate the function and construction of the devices, and their application to muscle training devices, robotic ankle-foot orthoses, and back-support exoskeletons. I will then lay out some further possibilities for research on and application of MR devices in wearable robotics.

Bio:

Modar Hassan is an Assistant Professor at the Department of Intelligent and Mechanical Interaction Technologies, University of Tsukuba, and a co-investigator at the Artificial Intelligence Laboratory at the same institute. His research interests include augmented human technology, esports and para-esports, assistive robotics, orthotics, prosthetics, biomechanics, human performance, and motor control. Currently, he is involved in wearable robotics, esports, and para-esports research aimed at improving human performance in physical and cyber spaces and increasing inclusivity in society.

Safe Learning in Robotics

Angela Schoellig

Technical University of Munich & University of Toronto

Abstract:

TBD.

Bio:

TBD.
