Abstract: |
We have reached the point where robots prominently enter our daily lives, be it as a versatile tool at work or as an obedient household helper. This development is especially exciting for people with physical limitations, since dedicated assistive robots harbour a large potential to enhance their users' autonomy and quality of life. Following this line of thought, the field of assistive robotics introduces mechanical assistants to people who would otherwise struggle with activities of daily living. However, this necessitates adequate and potentially personalised control methods. Focussing on wheelchair-mounted robotic arms, this thesis discusses the methods currently applied in the field, evaluates directly applicable manual alternatives, and proposes a novel shared control based on adaptive degrees of freedom. Adhering to a participatory design approach, each element is developed and evaluated in close collaboration with the target group, thus allowing for appropriate integration and realistic assessment.
For the analysis of contemporary manual controls, users evaluated the default manufacturer-provided input device against a gamepad, a 3D mouse, and a command-based voice control. Overall, these studies with the target group (N_1 = 26, N_2 = 15) show large potential for improving on the standard in terms of usability and versatility. In particular, the necessity of mode switches in the robot's default control was criticised. At the same time, participants expressed an eagerness for personalised adaptability, as well as an explicit willingness to train in the use of more complex but more capable systems, such as the 3D mouse.
Heeding this, the thesis introduces the novel shared control approach of Adaptive Degrees of Freedom: A camera-based sensor system probabilistically analyses the current situation to generate the most likely directions of robot motion. These directions are then mapped onto the user's input device, effectively replacing the classically available cardinal Degrees of Freedom (DoFs) (e.g. up, left, roll). In the end, this enables users to control a robot along arbitrarily complex DoFs with any input device, explicitly including very low-DoF interfaces (e.g. chin joysticks), thereby making robots more accessible to people with very limited mobility. To the user, the system appears to anticipate their next move without ever taking over control: it simply offers a selection of movement directions tailored to the current situation.
This novel control is introduced mathematically and conceptually, and its usability is verified in preliminary studies. Preparing for a contemporary data-based realisation, a mixed-reality development framework was created and used to record an extensive dataset of user-controlled robots in assistive settings, both in simulation and in reality. Both the framework and the dataset have been published open-source and free of charge.
The dataset was intended to train a state-of-the-art deep neural network that predicts DoFs end-to-end from image data. While this approach succeeded in an initial 2D baseline scenario, the training of machine-learned models in 3D was unsuccessful. This startling result seemingly contradicts the research community's recent achievements with similar methods, which is why this thesis includes an extensive analysis of its causes.
As an alternative, the author presents a probabilistic behaviour-based integration that is able to generate the adaptive DoFs. This implementation was evaluated in multiple studies focussing on its general applicability, human-computer interaction, and usability. Finally, a study conducted solely with the target group (people with limited upper body mobility, N_3 = 24) evaluated the fully integrated system, showcasing high user acceptance, a steep learning curve, and high success rates in example trials.