Open Access
30 May 2013 Lower body kinematics evaluation based on a multidirectional four-dimensional structured light measurement
Janusz Lenar, Marcin Witkowski, Vincenzo Carbone, Sjoerd Kolk, Marcin Adamczyk, Robert Sitnik, Marjolein van der Krogt, Nico Verdonschot
Abstract
We report on a structured light-scanning system, the OGX|4DSCANNER, capable of capturing the surface of a human body with 2 mm spatial resolution at a 60 Hz frame-rate. The performance of modeling the human lower body dynamics is evaluated by comparing the system with the current gold standard, i.e., the VICON system. The VICON system relies on the application of reflective markers on a person’s body and tracking their positions in three-dimensional space using multiple cameras [optical motion capture (OMC)]. For the purpose of validation of the 4DSCANNER, a set of “virtual” markers was extracted from the measured surface. A set of musculoskeletal models was built for three subjects based on the trajectories of real and virtual markers. Next, the corresponding models were compared in terms of joint angles, joint moments, and the activity of a number of major lower body muscles. Analyses showed a good overall agreement of the modeling outcome. We conclude that the 4DSCANNER, within its limitations, has the potential to be used in clinical gait analysis instead of optical marker-based systems. The advantage of the 4DSCANNER over OMC solutions is that it does not burden patients with time-consuming marker application. This study demonstrates the versatility of this measurement technique.

1.

Introduction

Kinematic information on human motion is needed in numerous applications in both the entertainment industry and the clinical environment. Most modern three-dimensional (3-D) animated movies and action games use motion capture (mocap) techniques to design the characters’ movements, e.g., for real-subject choreography or the registration of facial expressions. In a similar manner, clinicians and researchers can evaluate a person’s locomotive capabilities and disabilities based on the subject’s recorded 3-D motion.

In the TLEMsafe project1 we develop, validate, and clinically implement an ICT-based patient-specific surgical navigation system that integrates modeling, simulation, and visualization tools. This system will help surgeons safely reach the optimal functional result for patients undergoing major surgical intervention of the lower extremity. The starting point is the Twente Lower Extremity Model (TLEM),2 based on the first comprehensive and consistent anatomical dataset, in which many relevant parameters were measured from a single cadaveric specimen. The TLEM is personalized for every patient, with individual parameter sets extracted from medical imaging measurements and from functional kinematic and dynamic tests. The model is implemented in the AnyBody Modeling System (AMS).3 The 3-D trajectories of points fixed to each kinematic segment of the body, along with ground reaction forces (GRF), registered during activities of daily living (normal, slow, and fast walking; sitting down and getting up from a chair; stair climbing) performed by the patients, are used to feed the inverse dynamic optimization process. Within the scope of the project, the model is used to drive simulations of the patient’s performance before and after lower limb surgery. Simulations aid surgeons during the surgery planning stage and provide quantitative information about the patient’s expected performance in different surgical scenarios.

Within TLEMsafe, as in many other patient studies, gait is analyzed as (one of) the most informative activities of daily living. A reliable musculoskeletal (M-S) model reproducing the performance during walking requires registration of the pelvis, upper legs, lower legs, and feet to enable analysis of the hip, knee, and ankle joints. Therefore, the method used to record the kinematics needs to capture a person’s lower body on a walkway performing at least one cycle of comfortable-speed gait (a volume of ca. 2.5×1.5×1.5 m) at a frame-rate of at least 60 Hz, with sufficient quality to build a credible M-S model and simulate motion patterns. Patient comfort should be strongly taken into consideration, because many orthopedic patients experience pain while walking. The use of heavy equipment or tedious procedures is therefore discouraged.

There are a number of techniques to acquire kinematic information for M-S modeling. They are based on tracking physical markers, the subject’s contours, or the deformation of a projected pattern.

The most popular solution is the optical motion capture technique,4 in which a set of retroreflective or optically active markers is fixed to the body surface in anatomically meaningful locations. The markers are observed by a number of cameras, and their 3-D positions are tracked at frame-rates of 120 Hz up to 1 kHz. Such systems have a long tradition of successful application in both the entertainment and the medical field, as they provide high precision, good scalability with additional hardware (more markers and/or more cameras significantly improve the results), and the ability to measure in a large volume. However, clinically relevant limitations of this technique include time-consuming marker preparation and occasional marker drop-off, which fatigue elderly or motor-impaired patients. Furthermore, subjective marker placement5 and skin movement relative to the bone6 introduce noticeable uncertainty factors. In addition, optical motion capture solutions require sophisticated and expensive hardware,7 which can be a decisive factor in some applications.

Other techniques, such as visual hull reconstruction,8,9 also make use of multiview registration of the subject. The core idea is that the object occupies a hull that is the intersection of cones formed by projecting the object pixels of each camera. Corazza et al.10 applied this approach to track the kinematics of a walking human subject. A subject-specific geometrical model was fitted to a subject measured simultaneously with the VICON system by Sigal and Black (HumanEva II dataset11), which included joint center positions. After fitting the model to a visual hull for each recorded frame, sequences of joint centers were evaluated and compared with the marker-driven joint centers of the VICON system. The comparison shows a mean absolute difference of 16 mm for the lower extremities. The main source of this relatively large error is the lack of information on the object pixels that lie inside the contour, because the method tracks only the contours. The system recovers the outer hull of the object’s surface and introduces artificial convexities.

Identifiable information on the surface is desirable for more reliable surface measurement. A common method of optically encoding the surface of an arbitrary object is to illuminate the object with a structured pattern. A number of mature structured light solutions12,13 are available that allow real-time 3-D shape registration. Single-frame methods seem to make the most of the hardware capabilities in terms of measurement frequency. The shape can be extracted from the object-induced deformation of a uniform binary line grid (Sagawa et al.,14 1 kHz acquisition) or a sine fringe pattern (Zhang and Su,15 1 kHz). In order to register the full kinematics, a method should be capable of multiview operation. Neither of these methods has such a capability. Sagawa et al.14 encode their projected grid with two color values, which causes crosstalk between measurement directions. Zhang and Su15 use a relative phase evaluation method, which makes it impossible to refer two measurement directions to each other and merge the results.

In the current paper we present the OGX|4DSCANNER, an implementation of the markerless motion capture (MMC) concept using the structured light technique that is able to capture the 3-D shape of the human lower body surface at 60 Hz. The solution is novel16 and has significant advantages over other available methods. The 4DSCANNER registers the surface of the object and does not require physical markers, which eliminates all marker-related problems of OMC. The cost factor is much less significant than in the case of OMC solutions, because the system is built from commonly available components. The 4DSCANNER is much more accurate than the visual hull method [2 mm spatial resolution and an uncertainty of 2 mm (Ref. 17)], because it takes into account the entire visible region of the subject’s body surface. The absolute phase evaluation method, along with the optical-aberration-aware camera calibration, allows for merging results from different viewpoints, which would not be possible with Zhang and Su’s15 or Sagawa et al.’s14 solutions.

It is important to note that the recent work of Zhang et al.17 has the potential to provide an impressive measurement frequency to the proposed solution. This method implements a 3-point temporal phase shifting (TPS) system based on a defocused digital light processing (DLP) projector that displays a binary pattern. Because this system exploits the digital micro-mirror device switching frequency of 2 kHz, the 3-point TPS-based shape reconstruction method provides output at a frame-rate of 667 Hz. Although the frequency is extraordinarily high, there are significant limitations to this solution. The level of defocus changes with depth, resulting in (1) a decrease in contrast of the fringe pattern and (2) a high likelihood of the “appearance” of higher-order harmonics when the object is closer to the focal plane. It might be possible to fix the projection focus within a required depth of 2 m as long as the projection system is positioned ca. 10 m from the object. Considering problems indicated by the authors18 regarding light source inefficiency at extreme acquisition frequencies, moving the projection system this far from the object would most likely force researchers to decrease the frequency by one to two orders of magnitude.

The purpose of this study was to explain the MMC method and to demonstrate its equivalence and shortcomings in predicting kinematics, joint moments, and muscle activity levels relative to those predicted using the gold-standard method (VICON) and kinetics.

2.

Methods

The proposed 4DSCANNER technique employs a single modified sine pattern projection, synchronized image registration, the spatial carrier phase shifting (SCPS) method for phase map evaluation, and 3-D shape reconstruction using a calibrated camera model. Thanks to the unique synchronization solution and the binding of detection to a specific projected channel, it is possible to combine multiple projector-camera pairs. In addition, the loose coupling of projectors and cameras enables one projector to “feed” multiple cameras. Hence, a two-projector, four-detector system has been used to capture gait at a 60 Hz frame-rate with 2 mm spatial resolution and an uncertainty of 2 mm.17

2.1.

Principle of the 4DSCANNER

The projected sine pattern is modulated by the object in both amplitude (by the texture) and phase (by the shape), and its image is registered by the camera. For maximum measurement frequency, a single fringe image is analyzed during the reconstruction of the phase modulation. The phase recovery is based on a 7-point SCPS method,19 which investigates 7 consecutive pixels to evaluate the local phase at the central pixel. In the subsequent step, we perform phase unwrapping to eliminate the 2π ambiguity. We guide the spanning tree algorithm to favor pixels that have high contrast, a low variation of the local fringe period, and are far from the object’s border, in order to avoid erroneous phase evaluation at the boundaries of the object (i.e., surface discontinuities), in high-curvature areas (a nonconstant period breaks SCPS), and in regions with texture features (SCPS requires a constant fringe amplitude).
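As an illustration, the local phase at each pixel can be recovered by least-squares fitting a sampled sinusoid over a 7-pixel window. The sketch below assumes a known, constant carrier phase step per pixel and stands in for the exact 7-point SCPS formula of Ref. 19; the function and parameter names are ours:

```python
import numpy as np

def scps_phase(scanline, carrier_step):
    """Estimate the wrapped local phase at each pixel of a fringe scanline
    by least-squares fitting I(n) = a + c*cos(n*d) + s*sin(n*d) over a
    7-pixel window centred on the pixel (d = carrier phase step per pixel).
    A generic stand-in for the paper's 7-point SCPS formula (Ref. 19)."""
    n = np.arange(-3, 4)                       # window offsets -3..3
    A = np.column_stack([np.ones(7),
                         np.cos(n * carrier_step),
                         np.sin(n * carrier_step)])
    solver = np.linalg.pinv(A)                 # 3x7 least-squares solver, reused
    phase = np.full(len(scanline), np.nan)
    for i in range(3, len(scanline) - 3):
        a, c, s = solver @ scanline[i - 3:i + 4]
        # I = a + b*cos(phi + n*d) gives c = b*cos(phi), s = -b*sin(phi)
        phase[i] = np.arctan2(-s, c)
    return phase
```

Border pixels are left undefined here; in the real system they are handled by the quality-guided unwrapping stage rather than by the fit itself.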

Because the desired phase distribution consists of absolute values, information on the fringe numbers needs to be carried onto the object. A single fringe of known absolute phase is modulated transversely so that its recognition in the image is straightforward. A random pixel on this stripe is chosen as the initial point of the spanning tree method. During unwrapping, all phase values are shifted by the absolute phase value.
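In one dimension, this two-step idea (unwrap, then anchor to the marked fringe) can be sketched as follows; the actual system uses a quality-guided spanning tree on the full 2-D phase map, and `marker_col`/`phi_marker` are hypothetical names for the marked fringe’s column and known absolute phase:

```python
import numpy as np

def absolute_phase(wrapped, marker_col, phi_marker):
    """Unwrap a 1-D wrapped-phase scanline and anchor it to the fringe of
    known absolute phase -- a 1-D stand-in for the quality-guided spanning
    tree unwrapping used on full phase maps."""
    unwrapped = np.unwrap(wrapped)                  # remove 2*pi jumps
    return unwrapped + (phi_marker - unwrapped[marker_col])
```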

The phase distribution is further converted to xyz coordinates with the use of the camera calibration,20 which is called the local calibration. In the local calibration procedure, each camera pixel is assigned to a line in the global coordinate system, and a monotonic phase-to-depth distribution is then evaluated for each of these lines. As a result, for each camera pixel there is a known mapping from an absolute phase value to a position on its line, and thus to a certain 3-D point. Using this information, the phase map is transformed into a single-directional cloud of points, which represents the surface visible to a single camera.
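A minimal sketch of this per-pixel mapping, assuming the calibration supplies, for each pixel, a tabulated monotonic phase-to-depth curve and a viewing ray (all names are hypothetical):

```python
import numpy as np

def pixel_to_point(phi, phase_table, depth_table, origin, direction):
    """Map one pixel's absolute phase to a 3-D point: interpolate the
    pixel's calibrated monotonic phase-to-depth table, then step that
    distance along the pixel's calibrated viewing ray."""
    z = np.interp(phi, phase_table, depth_table)   # monotonic lookup
    return origin + z * direction / np.linalg.norm(direction)
```

Applying this to every valid pixel of the phase map yields the single-directional cloud of points described above.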

A multiview measurement is possible due to the global calibration. This is performed using the same calibration board that was used for the local calibration. The common global frame is fixed for all cameras, and all resulting clouds of points can be merged into a single multidirectional cloud.

Figures 1 to 3 present exemplary intermediate results for the evaluated 3-D surface of a single frame: the key steps in phase evaluation (Fig. 1), the fringe images of an object observed from various points of view (Fig. 2), and the output clouds of points (Fig. 3).

Fig. 1

Phase modulation evaluation: (a) input fringe image, (b) wrapped phase (7-point SCPS), (c) quality map for spanning tree phase unwrapping algorithm, and (d) unwrapped phase distribution.


Fig. 2

Four simultaneous views of the object. The measured surfaces are merged into a single model.


Fig. 3

Resulting cloud of points: (a) single-directional cloud of points and (b) a complete cloud of points consisting of four single-directional clouds (note different coloring of each cloud) in a common coordinate system.


2.2.

Hardware Configuration

In this study, we used two digital projectors (Casio XJ-A230V) and four industrial cameras (FL3-FW-03S1M-C from Point Grey Research). One projector illuminated the subject with blue light from the front and the other with red light from the back; a pair of cameras registered the illuminated frontal surface of the body and another pair registered the rear surface, as shown in Fig. 4(b). Cameras and projectors were rigidly fixed to each other and to the ground [see Fig. 4(a)], and then calibrated in the 2-m-long section of the walkway between them.

Fig. 4

Hardware setup for gait registration: (a) dual-camera measurement head (single projector and two detectors) and (b) side-view configuration schematic. Lower cameras register the higher part of calibrated volume and upper cameras register the lower part, allowing measurement of both feet and trunk.


Image acquisition was performed at an approximate 60 Hz frame-rate. All cameras were triggered externally by a single custom microcontroller synchronized with one of the projectors using a photodiode. The microcontroller fired acquisition in a manner that guaranteed both full exploitation of available illumination and isolation of opposite cameras (see Fig. 5). Cameras on the side of the red-illuminating projector were triggered together as soon as the negative slope of the blue illumination was registered and the cameras on the side of the blue illumination were triggered after an appropriate amount of time to capture the following blue illumination.

Fig. 5

The concept of projector-camera synchronization that enables the isolation of cameras. Both identical DLP projectors are fed with the same video signal, so they work synchronously. Cameras are triggered by the microcontroller at fixed time intervals relative to the negative slope of the blue illumination for a fixed time period (i.e., shutter time), thus working synchronously with associated projector.


2.3.

Measurement Protocol

In a modestly darkened laboratory, the 4DSCANNER was calibrated within a volume of 1.5×2.0×2.0 m³. Each of the four cameras was calibrated independently with the use of a 1.5×2.0 m² calibration board. The common coordinate system was established by placing the board in the center of the measurement volume so that it was visible to all the cameras.

For the sake of comparison with the optical motion capture technique, the 4DSCANNER was calibrated inside the calibrated volume of the VICON system. Both systems were adjusted to work at the same acquisition frequency. To achieve synchronization (i.e., temporal alignment) of the data, a single marker was dropped on the floor during each recorded sequence. During the processing stage, the frame of the impact was evaluated for both the marker trajectories and the fringe video. To achieve spatial alignment, the 4DSCANNER was calibrated to have the same origin as the VICON system, so that the GRF measured with the force plates were expressed in coordinates aligned with the output data of both motion capture systems.
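The impact frame of the dropped marker can be located automatically from the marker’s vertical trajectory; the simple sketch below picks the first frame at which the height settles near its minimum (the tolerance value is an assumption, and a real recording may need filtering for marker bounce):

```python
import numpy as np

def impact_frame(z, floor_tol=2.0):
    """Return the index of the first frame at which a falling marker's
    vertical coordinate z (mm) comes within floor_tol of its minimum --
    a simple way to pick the synchronization event in both recordings."""
    return int(np.argmax(z <= z.min() + floor_tol))
```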

This study included three healthy subjects performing a comfortable-speed walking exercise (see Table 1). During the exercise, both the VICON system and the 4DSCANNER recorded the movement simultaneously, as did a pair of force plates that measured the GRF. Optical markers were labeled on the VICON recordings and their positions were evaluated. From the 4DSCANNER fringe image sequences, the animated representation of the object surface was calculated. Based on the surface data, an equivalent, synthetic set of markers (so-called virtual markers) was built. The markers provided by both systems are the lower-extremity markers suggested by VICON’s Plug-In Gait (PIG) analysis module.21 As presented in Fig. 6, the markers are:

  • 1. Anterior superior iliac spine marker, left and right (LASI and RASI)

  • 2. Posterior superior iliac spine marker, left and right (LPSI and RPSI)

  • 3. Lateral knee joint marker, left and right (LKNE and RKNE)

  • 4. Lateral thigh marker, left and right (LTHI and RTHI)

  • 5. Lateral ankle marker, left and right (LANK and RANK)

  • 6. Lateral shank marker, left and right (LTIB and RTIB)

  • 7. Forefoot marker, left and right (LTOE and RTOE)

  • 8. Heel marker, left and right (LHEE and RHEE)

Table 1

General information on the subjects used for validation.

            Age (years)   Height (m)   Weight (kg)   # of gait trials
Subject 1   27            1.90         85.2          5
Subject 2   28            1.77         77.4          4
Subject 3   34            1.76         93.8          4

Fig. 6

Markers chosen for the validation, forming a sufficient constellation in terms of musculoskeletal (M-S) modeling.


2.4.

Kinematics Extraction

In the TLEMsafe project, the M-S modeling is performed in the AMS.3 The generic M-S model in AMS software is personalized based on 3-D trajectories of markers attached to body segments coupled with plots of GRF.

For the purpose of a preliminary assessment of the applicability of the presented 4-D scanner for kinematics evaluation, a method for emulating virtual markers based on the 4-D point cloud sequence was developed (Fig. 7). The virtual markerset complies with the PIG markerset, and the intention is to estimate the positions of the virtual markers in close proximity to the physical ones. The method developed for locating virtual markers is composed of the following steps, which have to be performed for each time-frame of the measured sequence:

  • 1. Slicing the point cloud across the global vertical direction and creating groups of points based on a distance criterion.

  • 2. Segmentation of the point cloud into pelvis, left leg, and right leg segments. The pelvic segment cut-off threshold is the crotch level, which is defined based on group width.

  • 3. The points in the pelvic region are used to calculate the LPSI, RPSI, LASI, and RASI virtual markers. LPSI and RPSI may be located as in Ref. 22. However, for comparison with the marker-based system, an alternative method was developed, because the shape of the lower back is occluded by the real markers. This method uses the projection of the dorsal surface contour and locates the virtual markers at the level of the contour inflection point between the buttocks and the lumbar back. The horizontal offset of the markers is based on manual measurement, and the height distance is proportional to the buttocks height difference.

  • 4. Based on the calculated virtual LPSI and RPSI positions and the vector normal to the surface in the neighborhood of these markers, a reference frame of the pelvis is calculated. In order to estimate the LASI and RASI positions, two rays are projected from the LPSI/RPSI midpoint at fixed directions in the pelvic reference frame. The intersection points of these rays with the frontal surface of the point cloud define the virtual LASI and RASI markers. The directions of ray projection were chosen manually in an iterative process and are equal for all subjects.

  • 5. The next steps define the process of locating virtual markers on the surface of the lower limbs. The first operation in this stage is to divide each leg into an upper leg (thigh) and a lower leg (shank and foot) segment. To achieve this, each leg is sliced along its long axis, and circles are fitted to each slice (circle fitting is sufficient for our application as long as mainly the thigh and shank are under consideration). Analysis of the dorsal points of the circles is used to find the level of the popliteal fossa (the knee pit). The leg slices located above the knee pit level are assigned to the thigh, and the slices located below it are assigned to the shank and foot segment.

  • 6. Both previously defined segments are sliced along their own long axes to make further analysis independent of the knee flexion angle. Again, a circle is fitted to each slice. Due to knee flexion, some of the slices consist of points that belong only to the frontal or the dorsal surface; for such slices, the circles are disregarded. The knee joint is located halfway between the center of the knee pit circle and the center of the lowest full circle of the thigh. This is a rough correction for the relationship between the knee joint axis and the knee pit.

  • 7. Assuming a one-degree-of-freedom (1-DOF) knee joint, we estimate the knee joint axis to be perpendicular to the thigh and shank long axes. The LKNE and RKNE markers lie on the joint axes, translated laterally by the radius of the knee pit circle. The LTHI and RTHI markers are translated (arbitrarily) upwards along the vertical axes of the thighs.

  • 8. The ankle joint is detected based on the diameter of the circles fitted to the lower leg. The ankle border is assumed to be where the diameter begins to rise when checking downwards from half of the segment’s height. The long axis of the shank is estimated as the line connecting the center of the ankle border circle with the knee joint center. As the distance between these two points changes along the measured sequence, an average is taken over the gait cycle, which gives the shank length. (It is also possible to use a manually measured knee-to-ankle distance.) The ankle joint center is located on the long axis of the shank, at the shank length distance from the knee joint center.

  • 9. Assuming a 1-DOF ankle joint, we estimate the ankle joint axis to be perpendicular to the plane spanned by the thigh and shank long axes. The LANK and RANK markers lie on the joint axes, translated laterally by the radius of the circle fitted at the ankle joint center. The LTIB and RTIB markers are translated (arbitrarily) upwards along the vertical axes of the shanks.

  • 10. The locations of the foot markers are estimated under the assumption that the distance between the ankle joint center and the TOE marker, as well as the distance between the ankle center and the HEE marker, is constant along the sequence. These distances may be set manually or measured in the application by the operator. The TOE and HEE markers are located on the foot surface, on the plane defined by the thigh and shank long axes, at the aforementioned distances from the ankle center.
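Several of the steps above rely on fitting circles to horizontal slices of the leg point cloud. One standard choice for such a fit (not necessarily the one used in the system) is the algebraic Kåsa least-squares fit, sketched here for a single slice:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to the 2-D points of one
    leg slice. Rewrites (x-cx)^2 + (y-cy)^2 = r^2 as the linear system
    2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2 and solves it.
    Returns (cx, cy, r)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)
```

The fit remains well conditioned on partial arcs, which matters here because a slice seen by the frontal cameras covers roughly half of the leg circumference.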

Fig. 7

Kinematics extraction. (a) Input point cloud (point colors—cyan, green, magenta, and yellow—represent cameras that acquired each point), (b) initial slicing along the vertical axis and segmentation, (c) point cloud with thigh and shank+foot segments sliced along their midlines, (d) circles fitted to leg segments, (e) virtual markers (magenta) as well as thigh and shank midlines (black), and (f) virtual markers and segment midlines location with regard to the point cloud.


The analysis described above is applied to every frame in the sequence, resulting in a virtual markerset estimated for every point cloud. The trajectories of the virtual markers are then exported to a C3D file23 and, together with the GRF data, fed into the AMS for biomechanical analysis.

2.5.

Biomechanical Analysis

Clinical validation of the measurement system aims to assess whether the kinematic information acquired by the 4DSCANNER is sufficient to build an M-S model of reasonable quality. The reference kinematic data were acquired with the current gold standard, i.e., the VICON system7 (VICON Nexus version 1.6.1.57351). A set of M-S models was fed with kinematics from both systems using the AMS3 (version 5.2). A comparison of these models and their outcome parameters is considered highly relevant, since M-S models carry the full information on the motor performance of the subject.

2.5.1.

Musculoskeletal model

Our M-S model was based on the TLEM implemented in the AMS. The model consisted of nine body segments: upper body (head, arms, trunk, and pelvis), right and left femur, patella, tibia, and foot. The fibula was considered as one unit with the tibia. Eight joints were modeled: the left and right hip, knee, patella/femur, and ankle joints. Hip joints were modeled as ball-and-socket joints, defined by a rotation center and three orthogonal rotation axes. The knee and ankle joints were defined as hinges, with a fixed rotation center and axis. The patella could rotate with respect to the femur around a rotation axis with a fixed rotation center. The patellar ligament was defined as a nondeformable element that connected the patella to the tibia. Thus, without introducing an extra degree of freedom, the orientation and position of the patella depended solely on the knee flexion angle. The orientation and position of the center of mass of the pelvis with respect to a 3-D global frame, together with the joint rotations of the hip, knee, and ankle joints, resulted in a model with 16 degrees of freedom (DOFs) and 10 joint axes (Fig. 8). Each leg contained 56 muscle-tendon parts, each represented by a three-element Hill-type muscle in series with a tendon.24

Fig. 8

(a) M-S model based on the Twente Lower Extremity Model (TLEM). (b) Inverse dynamics simulation, based on movement tracked by 4DSCANNER virtual markers and ground reaction forces (GRF) recorded by force plates.


2.5.2.

Data analysis

M-S simulations were based on 3-D motion analysis and force plate data. The model was scaled to match the anthropometry of the subjects, derived from the marker positions relative to each other. Inverse kinematics25 was used to calculate the time histories of the joint angles during the gait cycle based on the VICON and 4DSCANNER markers. Inverse dynamics was used to calculate the time histories of the joint moments needed to reproduce the tracked gait movement. Then a static optimization problem was solved to calculate the muscle forces needed to produce the necessary joint moments.26
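The static optimization step can be sketched as a constrained minimization; the cubed-activation cost below is one commonly used criterion and stands in for the exact formulation of Ref. 26 (the moment-arm matrix `R` and the other names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def muscle_forces(R, tau, f_max, p=3):
    """Static optimization sketch: find non-negative muscle forces f that
    reproduce the joint moments (R @ f = tau) while minimizing the sum of
    activations raised to the power p, with activation = f / f_max."""
    n = len(f_max)
    res = minimize(
        lambda f: np.sum((f / f_max) ** p),       # polynomial activation cost
        x0=np.full(n, 1.0),
        constraints=[{"type": "eq", "fun": lambda f: R @ f - tau}],
        bounds=[(0, None)] * n,                   # muscles only pull
        method="SLSQP",
    )
    return res.x
```

With this cost, load is shared among redundant muscles in proportion to their strength, which is the qualitative behavior the simulations rely on.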

In order to compare the quality of marker position based on VICON and 4DSCANNER measurements and to quantify how the potential differences affect M-S model predictions, three different categories of output were analyzed:

  • Kinematics: joint angles for hip, knee, and ankle flexion in the sagittal plane.27

  • Dynamics: joint moments supplied by muscles for hip, knee, and ankle flexion in the sagittal plane.

  • Muscle activity (normalized muscle force) for the main muscles (prime movers) responsible for hip flexion (iliacus), hip extension (gluteus maximus), knee flexion (biceps femoris caput longum), knee extension (rectus femoris), ankle dorsiflexion (tibialis anterior), and ankle plantarflexion (soleus, gastrocnemius).

The differences between model predictions were quantified using the maximum absolute error, mean absolute error, and root mean square error as a basic statistical error description.
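These three error measures, computed between corresponding VICON-based and 4DSCANNER-based curves sampled at the same time points, can be expressed as:

```python
import numpy as np

def trajectory_errors(ref, test):
    """Maximum absolute error, mean absolute error, and root mean square
    error between a reference curve (e.g., a VICON-based joint angle) and
    a test curve (4DSCANNER-based) sampled at the same time points."""
    e = np.asarray(test) - np.asarray(ref)
    return np.max(np.abs(e)), np.mean(np.abs(e)), np.sqrt(np.mean(e**2))
```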

3.

Results

The technical requirements of the measurement were met. The 4DSCANNER was able to capture the 3-D shape of the lower body surface at a rate of 60 Hz. The spatial resolution of the resulting clouds of points was approximately 2 mm. The 4DSCANNER measurements did not require the subjects to carry or wear any equipment except light, comfortable clothing.

For each of the subjects, four to five single-cycle gait trials were recorded using both the 4DSCANNER and the VICON systems. Each gait trial consisted of marker trajectories provided by VICON, trajectories provided by the 4DSCANNER, and GRF provided by the force plates. For each trial, two M-S models were built, one for each system. Three aspects of these models, i.e., kinematics (Fig. 9), dynamics (Fig. 10), and muscle activity (Fig. 11), were compared (summary in Table 2).

Fig. 9

Average hip, knee, and ankle joint angles. Red (dark) represents the VICON-based model output; blue (light) represents the 4DSCANNER-based model output. Time base corresponds to an entire gait cycle of the right leg, from toe off to toe off.


Fig. 10

Average hip, knee, and ankle joint moments. Red (dark) represents the VICON-based model output; blue (light) represents the 4DSCANNER-based model output. Time base corresponds to an entire gait cycle of the right leg, from toe off to toe off.


Fig. 11

Average activity of main muscles in lower extremities. Red (dark) represents the VICON-based model output; blue (light) represents the 4DSCANNER-based model output. Time base corresponds to an entire gait cycle of the right leg, from toe off to toe off.


Table 2

Error values for all analyzed features of the 4DSCANNER-based musculoskeletal (M-S) models with respect to the models based on VICON measurement.

(Max = maximum absolute error, Mean = mean absolute error, RMSE = root mean square error.)

                                 Subject 1 (5 trials)         Subject 2 (4 trials)         Subject 3 (4 trials)
                                 Max      Mean     RMSE       Max      Mean     RMSE       Max      Mean     RMSE
Joint kinematics (rad)
  Hip flexion                    0.1775   0.0917   0.0993     0.1381   0.0660   0.0755     0.3967   0.1964   0.2080
  Knee flexion                   0.1242   0.0689   0.0760     0.1414   0.0811   0.0883     0.1973   0.1043   0.1188
  Ankle plantar flexion          0.2387   0.0525   0.0833     0.1539   0.0650   0.0740     0.2122   0.0669   0.0867
Joint moments (Nm)
  Hip flexion                    34.8156  7.2280   10.5318    27.4858  7.9264   10.4976    30.6741  8.0703   9.9405
  Knee flexion                   18.9706  8.6587   10.5160    18.7436  6.7394   8.7107     33.1805  13.3625  17.7186
  Ankle plantar flexion          6.0298   1.3421   1.9424     10.6641  4.4067   5.9074     23.6252  7.7599   10.5257
Muscle activity (%)
  Iliacus                        6.55     1.78     2.53       14.54    2.15     4.10       18.15    2.81     5.00
  Gluteus maximus                17.68    4.39     6.58       19.97    4.68     7.27       19.47    3.47     5.17
  Biceps femoris caput longum    17.62    6.72     8.89       18.72    5.59     7.03       58.50    14.29    19.30
  Rectus femoris                 14.65    2.83     4.64       19.87    2.48     4.63       26.02    4.64     7.67
  Tibialis anterior              42.45    4.23     8.76       79.22    12.70    22.84      89.05    15.45    26.12
  Soleus                         10.42    1.35     2.64       14.77    3.22     5.53       12.08    3.19     5.36
  Gastrocnemius                  10.83    2.36     3.62       26.88    5.83     10.40      100.8    16.13    28.83

3.1.

Kinematics: Joint Angles

The predicted time histories of the hip and knee flexion angles showed comparable patterns for all three subjects. The 4DSCANNER-based hip flexion angle was generally lower than the VICON-based one, with a mean absolute error varying from 0.0660 rad (3.78 deg) for Subject 2 to 0.1964 rad (11.25 deg) for Subject 3. The 4DSCANNER-based knee flexion showed good agreement around 30% to 40% of the gait cycle (heel strike); otherwise it was generally lower than the VICON-based one, with a mean absolute error varying from 0.0689 rad (3.94 deg) for Subject 1 to 0.1043 rad (5.98 deg) for Subject 3. The 4DSCANNER-based ankle dorsiflexion angle showed good agreement in the second half of the gait cycle (stance phase), while larger differences were present around 30% to 50% of the gait cycle (heel strike), with a maximum absolute error of up to 0.2387 rad (13.68 deg) for Subject 1.

3.2. Dynamics: Joint Moments

The 4DSCANNER-based hip flexion moment showed good agreement during the swing phase (0% to 30% of the gait cycle) and the stance phase (50% to 100% of the gait cycle), but larger differences from the VICON-based moment were present at heel strike (30% to 40% of the gait cycle), with a maximum absolute error of up to 35 Nm for Subject 1.

The 4DSCANNER-based knee flexion moment likewise showed good agreement during the swing phase (0% to 30% of the gait cycle); however, its difference from the VICON-based moment increased during the stance phase (40% to 90% of the gait cycle), with a maximum absolute error of up to 33 Nm for Subject 3.

In contrast, the 4DSCANNER-based ankle plantarflexion moment showed good agreement with the VICON-based moment, with a mean absolute error ranging from 1.3 Nm (Subject 1) to 7.8 Nm (Subject 3).

3.3. Muscle Activity

The 4DSCANNER-based muscle activity of iliacus, gluteus maximus, biceps femoris caput longum, and rectus femoris showed good agreement with the VICON-based activity, with mean absolute errors lower than 7%. The only exception was the biceps femoris caput longum activity for Subject 3, with a mean absolute error of 14% and a maximum absolute error of 59%.

Tibialis anterior showed larger differences and variability, with a maximum absolute error varying from 42% for Subject 1 up to 89% for Subject 3.

The 4DSCANNER-based muscle activity of soleus also showed good agreement with the VICON-based activity, with mean absolute errors lower than 4%, while the 4DSCANNER-based activity of gastrocnemius showed unrealistic values (around 200%) for Subject 3, with a maximum absolute error equal to 100%.

4. Discussion

4.1. Summary

The OGX|4DSCANNER is a recently developed system for motion capture based on full-field 3-D surface imaging. In particular, it is capable of registering the kinematics of the whole lower body during gait and of feeding the TLEM M-S model, with satisfactory outcomes for most joints. The main conclusion of the described research is that the 4DSCANNER could potentially be used in clinical gait analysis instead of the optical marker-based VICON system. An essential advantage is that it does not require any physical artifacts fixed to the body and, as an optical measurement method, it is completely noninvasive. In addition, the hardware used for the measurement is relatively inexpensive (unlike current commercial motion capture systems) because it mostly consists of off-the-shelf components.

4.2. Limitations

The current implementation of the 4DSCANNER requires significant darkening of the room due to the short camera exposure time. This may be a source of discomfort for patients, but it should be overcome in the future with more powerful projectors or more sensitive detectors.

The main limitation of the 4DSCANNER applied to gait analysis concerns capturing the feet around the heel-strike phase of gait. None of the cameras used for the measurement can capture the foot in this phase: the high ankle dorsiflexion results in self-occlusion of the top foot surface. This issue mostly affects the ankle flexion angle and the tibialis anterior activity. It can be addressed by employing an additional projector-camera pair to measure the feet from a higher position.

Hip flexion angles predicted by the M-S models showed very similar patterns for the 4DSCANNER and the VICON system, but also presented an offset that could probably be eliminated by improving the evaluation of pelvic tilt. In this case, a better estimation of the virtual front pelvis markers, LASI and RASI, or the back pelvis markers, LPSI and RPSI, is required.
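For context, sagittal-plane pelvic tilt is commonly derived from the angle between the line joining the PSIS and ASIS marker midpoints and the horizontal, which is why the accuracy of these four virtual markers drives the offset. A minimal Python sketch follows; the coordinate convention, sign convention, and function name are our assumptions, not the study's implementation:

```python
import math

def pelvic_tilt_deg(lasi, rasi, lpsi, rpsi):
    """Sagittal-plane pelvic tilt from the four pelvis markers.
    Markers are (x, y, z) with x pointing forward and z pointing up
    (an assumed convention). Positive angle = anterior tilt."""
    asis_mid = [(a + b) / 2 for a, b in zip(lasi, rasi)]
    psis_mid = [(a + b) / 2 for a, b in zip(lpsi, rpsi)]
    dx = asis_mid[0] - psis_mid[0]  # forward component of PSIS->ASIS line
    dz = asis_mid[2] - psis_mid[2]  # vertical component
    # ASIS midpoint below PSIS midpoint (dz < 0) gives a positive (anterior) tilt
    return math.degrees(math.atan2(-dz, dx))
```

Under this sketch, a constant error in the estimated height of either marker pair translates directly into a constant angular offset, consistent with the systematic offset observed in the hip flexion angle.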

The largest difference between the models based on the two systems was found for the predicted activity of gastrocnemius, a biarticular muscle acting in both ankle plantar flexion and knee flexion. The unrealistic predicted values of around 200% in the 4DSCANNER-based results were caused by an overestimated knee flexion moment. This issue needs to be addressed in the future.

It is also important to point out the image-acquisition bandwidth limitation present in the current configuration of the 4DSCANNER. The projector used in the study displayed 120 frames/s, so the developed technique would allow a measurement frequency of 120 Hz. However, using two workstations to handle four IEEE 1394b cameras at the full acquisition speed of 120 Hz revealed a bandwidth shortage, and we were forced to capture every second frame. This problem can be overcome by employing additional workstations or cameras with a higher-bandwidth interface, e.g., USB 3.0.
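The shortage can be checked with back-of-the-envelope arithmetic. The sensor resolution and pixel depth below are illustrative assumptions (the camera specifications are not restated in this section); IEEE 1394b provides a nominal 800 Mbit/s per bus:

```python
MEGA = 1_000_000

def camera_bitrate(width_px, height_px, bits_per_px, fps):
    """Raw bit rate (bit/s) of one uncompressed camera stream."""
    return width_px * height_px * bits_per_px * fps

BUS_1394B = 800 * MEGA  # nominal IEEE 1394b bus bandwidth, bit/s

# Assumed, for illustration only: a 1-megapixel, 8-bit monochrome sensor.
rate_120 = camera_bitrate(1280, 800, 8, 120)  # ~983 Mbit/s: exceeds one 1394b bus
rate_60 = camera_bitrate(1280, 800, 8, 60)    # ~492 Mbit/s: fits, hence every second frame
```

Under these assumptions, even a single camera at 120 Hz saturates a 1394b bus, while halving the frame rate brings the stream within budget, matching the workaround described above.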

4.3. Further Research

In this study, a set of virtual markers was evaluated in order to assess the method for gait analysis; this uses only a fraction of the data the system provides. Since the surface is registered as a cloud of thousands of points (with potential for increase in the future), a "swarm" of virtual markers could be evaluated and fed into a more reliable M-S model. We expect models built upon one to two orders of magnitude more markers to show smaller variation across trials. The geometrical results also carry volumetric information about the object, which could be exploited in the future to enhance M-S models.

The measurement frequency and spatial resolution should also be improved. A higher resolution will enable the extraction and tracking of curvature features and will improve the evaluation of the virtual markers. A higher frequency will make it possible to track surface transitions over time without relying on curvature features, moving the method onto firmer statistical ground by exploiting the large amount of data being registered.

4.4. Conclusion

We conclude that the 4DSCANNER has the potential to replace optical marker-based systems in clinical gait analysis, provided that the overall accuracy is improved, particularly around the foot area. The advantage of the 4DSCANNER over OMC solutions is that it does not burden patients with time-consuming marker application, and this study demonstrates the versatility of this multidirectional 4-D structured light measurement technique.

Acknowledgments

The TLEMsafe project,1 under which the described solution has been developed, is funded by the European Commission’s 7th Framework Programme.

References

1. "TLEMsafe project official website," www.tlemsafe.eu (May 2013).
2. M. D. Klein Horsman et al., "Morphological muscle and joint parameters for musculoskeletal modelling of the lower extremity," Clin. Biomech. 22(2), 239–247 (2007). http://dx.doi.org/10.1016/j.clinbiomech.2006.10.003
3. M. Damsgaard et al., "Analysis of musculoskeletal systems in the AnyBody Modeling System," Simul. Model. Pract. Theory 14(8), 1100–1111 (2006). http://dx.doi.org/10.1016/j.simpat.2006.09.001
4. D. F. J. Perales, "Human motion analysis and synthesis using computer vision and graphics techniques. State of art and applications," Cybern. Inf. (2001).
5. J. L. McGinley et al., "The reliability of three-dimensional kinematic gait measurements: a systematic review," Gait Posture 29(3), 360–369 (2009). http://dx.doi.org/10.1016/j.gaitpost.2008.09.003
6. A. Peters et al., "Quantification of soft tissue artifact in lower limb human motion analysis: a systematic review," Gait Posture 31(1), 1–8 (2010). http://dx.doi.org/10.1016/j.gaitpost.2009.09.004
7. "Vicon motion systems," http://www.vicon.com/ (May 2013).
8. A. Laurentini, "The visual hull concept for silhouette-based image understanding," IEEE Trans. Pattern Anal. Mach. Intell. 16(2), 150–162 (1994). http://dx.doi.org/10.1109/34.273735
9. S. Corazza et al., "A markerless motion capture system to study musculoskeletal biomechanics: visual hull and simulated annealing approach," Ann. Biomed. Eng. 34(6), 1019–1029 (2006). http://dx.doi.org/10.1007/s10439-006-9122-8
10. S. Corazza et al., "Markerless motion capture through visual hull, articulated ICP and subject specific model generation," Int. J. Comput. Vision 87(1–2), 156–169 (2010). http://dx.doi.org/10.1007/s11263-009-0284-3
11. L. Sigal and M. J. Black, "HumanEva: synchronized video and motion capture dataset for evaluation of articulated human motion" (2006).
12. J. Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photon. 3(2), 128–160 (2011). http://dx.doi.org/10.1364/AOP.3.000128
13. Z. H. Zhang, "Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques," Opt. Lasers Eng. 50(8), 1097–1106 (2012). http://dx.doi.org/10.1016/j.optlaseng.2012.01.007
14. R. Sagawa et al., "Dense 3D reconstruction method using a single pattern for fast moving object," in Proc. IEEE 12th Int. Conf. on Computer Vision, 1779–1786 (2009).
15. Q. Zhang and X. Su, "High-speed optical measurement for the drumhead vibration," Opt. Express 13(8), 3110–3116 (2005). http://dx.doi.org/10.1364/OPEX.13.003110
16. J. Lenar, R. Sitnik and M. Witkowski, "Multidirectional four-dimensional shape measurement system," Proc. SPIE 8290, 82900W (2012). http://dx.doi.org/10.1117/12.907706
17. S. Zhang, D. van der Weide and J. Oliver, "Superfast phase-shifting method for 3-D shape measurement," Opt. Express 18(9), 9684–9689 (2010). http://dx.doi.org/10.1364/OE.18.009684
18. R. Sitnik, "Four-dimensional measurement by a single-frame structured light method," Appl. Opt. 48(18), 3344–3354 (2009). http://dx.doi.org/10.1364/AO.48.003344
19. K. Larkin and B. Oreb, "Design and assessment of symmetrical phase-shifting algorithms," J. Opt. Soc. Am. A 9(10), 1740–1748 (1992). http://dx.doi.org/10.1364/JOSAA.9.001740
20. R. Sitnik, "New method of structure light measurement system calibration based on adaptive and effective evaluation of 3D-phase distribution," Proc. SPIE 5856, 109–117 (2005). http://dx.doi.org/10.1117/12.613017
21. "VICON PlugIn Gait PIGManualver1," www.scribd.com/doc/54778915/VICON-PlugIn-Gait-PIGManualver1 (May 2013).
22. J. Michoński et al., "Automatic recognition of surface landmarks of anatomical structures of back and posture," J. Biomed. Opt. 17(5), 056015 (2012). http://dx.doi.org/10.1117/1.JBO.17.5.056015
23. "The 3D biomechanics data standard," www.c3d.org (May 2013).
24. F. E. Zajac, "Muscle and tendon: properties, models, scaling, and application to biomechanics and motor control," Crit. Rev. Biomed. Eng. 17(4), 359–411 (1989).
25. M. S. Andersen, M. Damsgaard and J. Rasmussen, "Kinematic analysis of over-determinate biomechanical systems," Comput. Meth. Biomech. Biomed. Eng. 12(4), 371–384 (2009). http://dx.doi.org/10.1080/10255840802459412
26. A. Erdemir et al., "Model-based estimation of muscle forces exerted during movements," Clin. Biomech. 22(2), 131–154 (2007). http://dx.doi.org/10.1016/j.clinbiomech.2006.09.005
27. G. Wu and P. R. Cavanagh, "ISB recommendations for standardization in the reporting of kinematic data," J. Biomech. 28(10), 1257–1261 (1995). http://dx.doi.org/10.1016/0021-9290(95)00017-C
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE). 1083-3668/2013/$25.00
Janusz Lenar, Marcin Witkowski, Vincenzo Carbone, Sjoerd Kolk, Marcin Adamczyk, Robert Sitnik, Marjolein van der Krogt, and Nico Verdonschot "Lower body kinematics evaluation based on a multidirectional four-dimensional structured light measurement," Journal of Biomedical Optics 18(5), 056014 (30 May 2013). https://doi.org/10.1117/1.JBO.18.5.056014
Published: 30 May 2013
Keywords: Kinematics, Cameras, Gait analysis, 3D modeling, Clouds, Calibration, Projection systems