These are some projects I am currently working on or have an interest in. They vary in complexity and scope. For more information, get in touch at lucas.fonseca@nottingham.ac.uk
Develop and analyse a dynamic model of a single human joint (e.g., knee or elbow), and validate it using inertial measurement unit (IMU) data. The project combines musculoskeletal simulation (e.g., OpenSim, MuJoCo, or a custom model) with experimental data collection using wearable sensors during controlled joint movements with simple loading (e.g., small weights, bodyweight tasks). It is suitable for students interested in biomechanics, simulation, and motion analysis, and can be scaled from short projects to more in-depth research.
Objectives
Build and run a simple dynamic model of a human joint and estimate joint torques from IMU-derived kinematics.
Collect IMU data from controlled movements and compare simulation results against torque or kinematic estimates from experimental data.
Required skills
Basic programming (e.g., Python, MATLAB, or similar).
Familiarity with vectors, basic mechanics, and numerical computation.
Bonus skills
Experience with biomechanical modelling or tools such as OpenSim or MuJoCo.
Prior experience with wearable sensors or motion capture systems.
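As a flavour of the modelling side, the sketch below estimates net joint torque for a single segment treated as a compound pendulum about a fixed joint axis. The segment parameters (mass, centre-of-mass distance, moment of inertia) are placeholder values, not anthropometric data; in the project they would come from literature tables or subject-specific scaling.

```python
import math

def joint_torque(theta, alpha, m=1.5, l_c=0.18, I=0.06, g=9.81):
    """Net joint torque (N*m) for a single rigid segment rotating about a
    fixed joint axis, modelled as a compound pendulum.

    theta : segment angle from vertical (rad), e.g. from IMU orientation
    alpha : angular acceleration (rad/s^2), e.g. differentiated gyro data
    m, l_c, I : segment mass (kg), centre-of-mass distance (m), and moment
                of inertia about the joint (kg*m^2) -- placeholder values
    """
    # tau = I*alpha + m*g*l_c*sin(theta): inertial term plus gravity moment
    return I * alpha + m * g * l_c * math.sin(theta)

# Static hold at 90 degrees from vertical: only the gravity moment remains
print(joint_torque(math.pi / 2, 0.0))  # ~= 1.5 * 9.81 * 0.18 ~= 2.65 N*m
```

IMU-derived kinematics enter through `theta` and `alpha`; comparing these torque estimates against the simulation output is one route to the validation objective.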
Design and implement a human motion interface that uses wearable IMU data to control a mobile robot in simulation (e.g., robot moving forward, backward, or turning based on body orientation or arm gestures). The focus is on robust signal processing, mapping movement to robot commands, and software integration in a simulation environment (e.g., ROS + Gazebo, Webots, or similar). The project can range from a proof-of-concept interface to more advanced studies on usability and robustness.
Objectives
Design and implement mappings from IMU-derived orientation/gestures to high-level mobile robot commands in simulation.
Evaluate responsiveness and robustness of the interface in simple navigation tasks.
Required skills
Basic programming (Python and/or C++).
Interest in robotics and signal processing.
Bonus skills
Experience with ROS or other robotics middleware.
Familiarity with IMU sensor fusion and basic control systems.
Investigate how surface electromyography (EMG) and/or IMU can be used to infer user intent for controlling a robotic manipulator, primarily in simulation (e.g., simulated Kinova Lite arm or other arm models), with optional supervised tests on real hardware. The project focuses on collecting IMU/EMG for movements/contractions corresponding to discrete commands (e.g., open/close, up/down), processing these signals, and mapping them to control actions via simple rules or machine learning.
Objectives
Design and implement a wearables-based intent recognition pipeline for a set of discrete manipulator commands.
Integrate the intent recognition module with a simulated robotic arm and evaluate command accuracy and responsiveness.
Required skills
Basic programming (Python or MATLAB).
Basic understanding of signals (time series, filtering).
Bonus skills
Experience with IMU, EMG, or other biosignals.
Familiarity with robotics simulation (e.g., ROS, Gazebo, or other simulators).
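The "simple rules" end of the mapping spectrum can be as small as the sketch below: an RMS amplitude estimate per EMG window, plus a hysteresis rule so the command does not chatter near the threshold. The threshold values and command names are illustrative; in practice thresholds are calibrated per user and channel.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def classify_intent(emg_window, on_thresh=0.2, off_thresh=0.1, active=False):
    """Hysteresis rule: a contraction above on_thresh switches the command
    on, and it only switches off once activity drops below off_thresh.
    Thresholds are placeholders. Returns (command, new_active_state)."""
    a = rms(emg_window)
    if not active and a > on_thresh:
        return "close_gripper", True
    if active and a < off_thresh:
        return "open_gripper", False
    return "no_change", active

cmd, state = classify_intent([0.3, -0.4, 0.35, -0.3], active=False)
print(cmd)  # close_gripper
```

The machine-learning variant of the project replaces `classify_intent` with a trained classifier over windowed features, while the integration with the simulated arm stays the same.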
Develop a system that recognises hand and/or arm gestures from IMU and/or EMG signals and uses them to control software (e.g., GUI events, shortcuts, simple games). The project involves defining a gesture set, collecting labelled sensor data, implementing feature extraction, and training classification models. It can be approached as an introductory applied ML project or extended into a more sophisticated multimodal gesture recognition system.
Objectives
Build a gesture recognition pipeline from data collection through feature extraction to classification for a small set of gestures.
Demonstrate gesture-based control of a simple software interface or application.
Required skills
Programming skills in Python (or equivalent).
Basic knowledge of machine learning (classification, training/testing, evaluation metrics).
Bonus skills
Experience with IMU and/or EMG hardware.
Familiarity with user interface programming or game engines.
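The feature-extraction-plus-classification pipeline can be prototyped in a few lines before moving to a proper ML library. The sketch below uses a nearest-centroid classifier as a stand-in for, e.g., an SVM or random forest, and a deliberately tiny feature vector; the gesture labels and signals are made-up examples.

```python
import math
import statistics

def features(signal):
    """Tiny feature vector for one gesture recording: mean, standard
    deviation, and peak absolute value. Real pipelines would add per-axis
    and frequency-domain features."""
    return (statistics.mean(signal), statistics.pstdev(signal),
            max(abs(x) for x in signal))

def train_centroids(labelled):
    """labelled: list of (signal, label) pairs. Returns label -> mean
    feature vector (a minimal nearest-centroid classifier)."""
    by_label = {}
    for sig, lab in labelled:
        by_label.setdefault(lab, []).append(features(sig))
    return {lab: tuple(statistics.mean(col) for col in zip(*fs))
            for lab, fs in by_label.items()}

def predict(centroids, signal):
    """Assign the label whose centroid is nearest in feature space."""
    f = features(signal)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

data = [([0, 0.1, -0.1, 0], "rest"), ([0.05, -0.05, 0.1, 0], "rest"),
        ([1.0, 1.2, 0.9, 1.1], "swipe"), ([0.9, 1.1, 1.0, 1.2], "swipe")]
model = train_centroids(data)
print(predict(model, [1.05, 0.95, 1.1, 1.0]))  # swipe
```

The `predict` output is what would be wired to GUI events or game inputs; swapping the classifier for scikit-learn models is a natural extension once labelled data exists.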
Collect IMU data from simple functional movements (e.g., walking, sit-to-stand, stair climbing, reaching) and explore methods for visualising and classifying them. The project focuses on reconstructing basic kinematics (segment orientations, simple joint angles), creating informative visualisations, and applying machine learning to distinguish between movement types. It can be tuned towards data visualisation, biomechanics, or ML depending on the student’s interests.
Objectives
Develop kinematic visualisations (e.g., trajectories, phase plots, joint-angle curves) that characterise different movement types.
Train and evaluate simple classification models to distinguish between movement classes using IMU-derived features.
Required skills
Programming in Python or MATLAB.
Basic understanding of statistics and machine learning.
Bonus skills
Interest or background in biomechanics or movement science.
Experience with data visualisation libraries (e.g., matplotlib, Plotly, ggplot).
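The "simple joint angles" mentioned above reduce, in the sagittal plane, to the difference between the orientations of the two adjoining segments. The sketch below computes such a joint-angle curve from two segment-orientation series; the numbers are illustrative, and a full 3D analysis would use relative rotations (quaternions) instead.

```python
def joint_angle(proximal_deg, distal_deg):
    """Sagittal-plane joint angle as the difference between the two
    adjoining segments' orientations (each, e.g., an IMU tilt estimate).
    A 2D simplification of the general relative-rotation computation."""
    return distal_deg - proximal_deg

def angle_curve(proximal, distal):
    """Per-sample joint-angle curve from two segment-orientation series,
    ready for plotting (e.g., knee flexion over a movement cycle)."""
    return [joint_angle(p, d) for p, d in zip(proximal, distal)]

thigh = [0, 5, 10, 15, 10]     # illustrative thigh orientations (deg)
shank = [0, 25, 60, 55, 20]    # illustrative shank orientations (deg)
print(angle_curve(thigh, shank))  # [0, 20, 50, 40, 10]
```

Curves like this one are the raw material both for the visualisations (phase plots, joint-angle curves) and for the features fed to the movement classifiers.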
Build and evaluate supervised machine learning models to recognise everyday activities (e.g., walking, sitting, standing, using stairs, desk work) from wearable IMU data. The project includes designing a data collection protocol, extracting features from sensor data, and comparing different models (classical ML and/or simple deep learning). It naturally scales from small datasets and basic models to more advanced pipelines, making it suitable for different levels of study.
Objectives
Create a labelled IMU dataset of everyday activities and implement a full activity recognition pipeline.
Compare and evaluate different supervised models (e.g., classical ML vs simple deep learning) for activity classification.
Required skills
Solid programming skills in Python.
Introductory knowledge of machine learning and data handling (Pandas, scikit-learn or similar).
Bonus skills
Experience with deep learning frameworks (e.g., PyTorch, TensorFlow).
Prior exposure to time-series analysis or human activity recognition.
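The front end of most activity recognition pipelines is sliding-window segmentation followed by per-window features. A minimal sketch, with made-up accelerometer values and deliberately small windows:

```python
def windows(signal, size, step):
    """Fixed-length sliding windows over one sensor channel; overlapping
    windows (step < size) are common in activity recognition."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(w):
    """Per-window features; mean and range alone can separate, e.g.,
    sitting from walking on a single accelerometer axis."""
    mean = sum(w) / len(w)
    return {"mean": mean, "range": max(w) - min(w)}

accel_z = [9.8, 9.7, 9.9, 9.8, 11.2, 8.1, 12.0, 7.5]  # still, then walking
feats = [window_features(w) for w in windows(accel_z, size=4, step=2)]
print(len(feats))                              # 3 overlapping windows
print(feats[0]["range"] < feats[2]["range"])   # True: later windows are dynamic
```

These feature rows, paired with activity labels, are exactly what the classical-ML models (and, reshaped into raw windows, the deep-learning models) would be trained on.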
Compare different orientation estimation (sensor fusion) algorithms for wearable IMUs during human movement. The project involves implementing or integrating several filters (e.g., complementary filter, Madgwick, Mahony, EKF-based methods), collecting IMU data during simple movements, and optionally using a reference system such as Vicon for validation. It is well suited to students interested in algorithms, signal processing, and sensor fusion.
Objectives
Implement and compare multiple IMU-based orientation estimation methods on human motion data.
Quantify and analyse differences in accuracy and robustness, optionally using a motion capture system as reference.
Required skills
Programming skills in Python, MATLAB, or C++.
Good mathematical background (linear algebra, basic probability, understanding of rotations).
Bonus skills
Familiarity with quaternions, Kalman filtering, or control theory.
Experience with motion capture systems or embedded sensing.
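The complementary filter, the simplest method on the list, fits in a few lines and illustrates the core trade-off the project investigates: the gyroscope is accurate over short horizons but drifts, while the accelerometer tilt estimate is noisy but drift-free. The blending weight below is a placeholder, and this is the one-axis version only.

```python
def complementary_filter(gyro, accel_angle, dt=0.01, alpha=0.98, theta0=0.0):
    """One-axis complementary filter: integrate the gyroscope for
    short-term accuracy and blend in the accelerometer tilt angle to
    cancel drift. gyro: angular rates (rad/s); accel_angle: tilt from the
    accelerometer (rad); alpha: weight on the gyro path (placeholder)."""
    theta = theta0
    out = []
    for w, a in zip(gyro, accel_angle):
        theta = alpha * (theta + w * dt) + (1 - alpha) * a
        out.append(theta)
    return out

# Biased gyro (constant 0.5 rad/s offset) while the segment is actually
# still: pure integration would drift to 10 rad over these 20 s, but the
# accelerometer term bounds the error.
est = complementary_filter([0.5] * 2000, [0.0] * 2000)
print(abs(est[-1]) < 0.3)  # True
```

Madgwick, Mahony, and EKF-based filters generalise this idea to full 3D orientation with principled weighting, which is what the comparison in this project would quantify.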
Create a simple digital twin of an upper limb in a game engine (e.g., Unity, Unreal) driven by IMU data. The project focuses on mapping sensor frames to avatar joint frames, streaming live or recorded orientation data into the engine, and designing an interactive visualisation or task (e.g., virtual hand reaching to targets). It combines real-time systems, graphics/game programming, and basic kinematics.
Objectives
Implement real-time or near real-time mapping from wearable IMU data to a virtual upper-limb model in a game engine.
Demonstrate and evaluate a simple interactive application (e.g., target-reaching or object manipulation) using the digital twin.
Required skills
Programming experience in C# (for Unity) or C++/Blueprints (for Unreal), or willingness to learn.
Basic understanding of 3D coordinate systems and rotations.
Bonus skills
Experience with IMU hardware and streaming data.
Prior work in game development, VR/AR, or interactive graphics.
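The sensor-to-avatar frame mapping usually starts with a neutral-pose calibration: capture the sensor orientation while the user holds a known posture, then express every later sample relative to it. The quaternion algebra is sketched below in Python for clarity; in the project the same operation would live in engine-side code (e.g., C# in Unity), where the result is assigned to a bone's local rotation.

```python
def q_mul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    """Conjugate; equals the inverse for unit quaternions."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def calibrated(q_ref, q_now):
    """Express the current sensor orientation relative to the orientation
    q_ref captured during a neutral-pose calibration."""
    return q_mul(q_conj(q_ref), q_now)

q0 = (0.7071, 0.0, 0.7071, 0.0)   # arbitrary sensor mounting orientation
print(calibrated(q0, q0))          # ~ (1, 0, 0, 0): identity at calibration
```

At the calibration instant the relative rotation is the identity regardless of how the sensor is mounted, which is the point of the procedure.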
Design and run a small human movement study using IMUs and a motion capture system (e.g., Vicon) to validate simple biomechanical models. The project includes defining a set of movements (e.g., walking and sit-to-stand), collecting data from participants, computing IMU-based kinematics (segment orientations, joint angles), and comparing them with Vicon-derived kinematics. It offers strong hands-on experience in experimental design, sensor integration, and quantitative analysis.
Objectives
Implement an end-to-end pipeline from experimental design and data collection to IMU-based kinematic estimation.
Validate IMU-derived kinematics against a motion capture reference and analyse accuracy and error sources.
Required skills
Basic programming (Python or MATLAB).
Comfortable with experimental work (lab work, data collection, following protocols).
Bonus skills
Background or interest in biomechanics or gait analysis.
Experience with Vicon or other motion capture systems.
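The accuracy analysis in the validation step typically reduces to error metrics between time-aligned angle series. A minimal sketch, assuming the IMU and Vicon signals have already been resampled onto a common time base (the made-up values below stand in for one joint-angle trace):

```python
import math

def rmse(imu_angles, ref_angles):
    """Root-mean-square error between an IMU-derived joint-angle series
    and the motion-capture reference, after both have been resampled onto
    a common time base (assumed done upstream)."""
    errs = [a - b for a, b in zip(imu_angles, ref_angles)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

imu = [0.0, 10.2, 20.5, 29.4, 40.8]    # illustrative IMU joint angles (deg)
vicon = [0.0, 10.0, 20.0, 30.0, 40.0]  # illustrative Vicon reference (deg)
print(round(rmse(imu, vicon), 3))      # 0.508
```

Reporting RMSE (often alongside correlation and Bland-Altman limits) per movement type is a common way to present the accuracy and error-source analysis.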
Develop a prototype system for remote assessment of simple upper-limb rehabilitation exercises, using video-based pose estimation and/or wearable IMU data. The project focuses on tracking limb motion, computing metrics such as range of motion, repetition count, and tempo, and classifying repetitions as acceptable or not based on rules or supervised models. It can be oriented more towards software and algorithms, or towards human factors and usability, depending on interest.
Objectives
Implement a system that tracks upper-limb exercises remotely and computes key performance metrics (e.g., range, repetitions, tempo).
Automatically classify exercise repetitions as correct or incorrect and provide simple feedback to the user.
Required skills
Programming in Python (or similar) and basic data processing.
Interest in computer vision and/or sensor-based motion tracking.
Bonus skills
Experience with pose-estimation libraries (e.g., MediaPipe, OpenPose) or IMU processing.
Background or interest in rehabilitation, healthcare technology, or human–computer interaction.
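One of the metrics above, repetition counting, illustrates the rule-based end of the classification spectrum: two thresholds on the joint angle with hysteresis, so partial movements are not counted. The threshold values below are placeholders and would be set per exercise.

```python
def count_reps(angle_deg, up_thresh=120.0, down_thresh=40.0):
    """Count exercise repetitions from a joint-angle signal using two
    thresholds with hysteresis: a rep is registered each time the angle
    rises above up_thresh after having dropped below down_thresh.
    Thresholds are placeholder values."""
    reps, armed = 0, True   # armed: the signal has returned below down_thresh
    for a in angle_deg:
        if armed and a > up_thresh:
            reps += 1
            armed = False
        elif not armed and a < down_thresh:
            armed = True
    return reps

# Two full flexion-extension cycles, plus a partial one that should not count
signal = [10, 60, 130, 70, 20, 55, 140, 30, 90, 100]
print(count_reps(signal))  # 2
```

The same angle signal feeds the range-of-motion and tempo metrics, and rules like this one provide a baseline against which supervised "acceptable / not acceptable" models can be compared.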