3.1 Feature Extraction
As described in Chap. 2, there are 30 data samples in total for the subjective evaluation, and every sample has 24 channels (including the time channel). Because the start and end times differ between measurement samples, the time-domain data cannot be compared directly to analyze the relationship between the subjective and objective evaluations. Hence, features of the sample channel data need to be extracted.
The vehicle angular accelerations can be calculated from the geometry of the sensor positions, as shown in Fig. 3, where LF is the driver's foot floor sensor data, RF is the co-driver's foot floor sensor data, LR is the second-row left foot floor sensor data, L1 is the lateral distance between LF and RF, and L2 is the longitudinal distance between LF and LR.
Additional roll and pitch acceleration channels were therefore obtained by Eqs. (1) and (2):

roll_acc = (LF_Z - RF_Z) / L1  (1)
pitch_acc = (LF_Z - LR_Z) / L2  (2)

where roll_acc is the roll acceleration channel data, pitch_acc is the pitch acceleration channel data, and *_Z is the Z-direction acceleration data at the corresponding sensor position.
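Under a small-angle assumption, Eqs. (1) and (2) amount to dividing the difference of vertical accelerations by the sensor spacing. A minimal sketch of this computation (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def angular_accelerations(lf_z, rf_z, lr_z, l1, l2):
    """Approximate roll and pitch angular accelerations from the vertical
    accelerations at three floor sensor positions.

    lf_z : driver's foot floor acceleration, Z direction
    rf_z : co-driver's foot floor acceleration, Z direction
    lr_z : second-row left foot floor acceleration, Z direction
    l1   : lateral distance between LF and RF
    l2   : longitudinal distance between LF and LR
    """
    lf_z, rf_z, lr_z = map(np.asarray, (lf_z, rf_z, lr_z))
    roll_acc = (lf_z - rf_z) / l1    # Eq. (1): lateral Z difference over L1
    pitch_acc = (lf_z - lr_z) / l2   # Eq. (2): longitudinal Z difference over L2
    return roll_acc, pitch_acc
```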
Before extracting the features, all the channel data were filtered by a low-frequency (0.05–5 Hz) band-pass Butterworth filter to remove the influence of high-frequency content. Considering the required accuracy and comprehensiveness of the analysis, 433 features were extracted based on engineering experience and general mathematical techniques: 395 acceleration features and 38 angle features for each sample. The detailed acceleration and angle features are listed in Table 3, where PSD is the abbreviation of power spectral density, the envelope is the absolute value of the Hilbert transform of the channel data, decentralization removes the central trend of the channel data, and the crossed hull is the 2-D convex hull of the points formed by two selected channels.
Table 3
Detailed Acceleration and Angle Features
| Features | Channel Types & Directions |
| Maximum | Acceleration: X, Y, Z, roll, pitch. Angle: roll, pitch, roll-pitch |
| Minimum | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Maximum of Derivative Data | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Minimum of Derivative Data | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Range | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Variance | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Root Mean Square | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Range of Envelope | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Area of Envelope | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Maximum of PSD | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Area of PSD | Acceleration: X, Y, Z, roll, pitch, X-Z, roll-pitch. Angle: roll, pitch, roll-pitch |
| Range of Decentralization Data | Acceleration: X, Y, Z, roll, pitch. Angle: roll, pitch |
| Variance of Decentralization Data | Acceleration: X, Y, Z, roll, pitch. Angle: roll, pitch |
| Crossed Hull Area | Acceleration: X-Z, roll-pitch. Angle: roll-pitch |
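The filtering step and several of the Table 3 features can be sketched with SciPy. This is an illustration, not the paper's code: only the 0.05–5 Hz Butterworth band-pass and the feature definitions (envelope as the absolute Hilbert transform, PSD, etc.) come from the text; the function names, filter order, and PSD estimator are our own choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch
from scipy.integrate import trapezoid

def bandpass(x, fs, lo=0.05, hi=5.0, order=4):
    """Zero-phase Butterworth band-pass (0.05-5 Hz as in the text)."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def basic_features(x, fs):
    """A subset of the per-channel features listed in Table 3."""
    env = np.abs(hilbert(x))          # envelope: |Hilbert transform|
    f, psd = welch(x, fs=fs)          # power spectral density estimate
    return {
        "max": np.max(x),
        "min": np.min(x),
        "range": np.ptp(x),
        "variance": np.var(x),
        "rms": np.sqrt(np.mean(x ** 2)),
        "max_derivative": np.max(np.diff(x) * fs),
        "min_derivative": np.min(np.diff(x) * fs),
        "range_envelope": np.ptp(env),
        "area_envelope": trapezoid(env, dx=1.0 / fs),
        "max_psd": np.max(psd),
        "area_psd": trapezoid(psd, f),
    }
```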
3.2 Correlation and Cluster Analysis
The extracted features are too numerous to be used directly for modeling. Therefore, a correlation analysis is needed to determine which features are important and sufficiently correlated with the subjective scores, so that a relevant subset can be selected.
Using statistical methods, the Pearson correlation coefficients [11–13] between the subjective evaluation scores and the extracted features were calculated. By setting an appropriate threshold on the correlation coefficient, features with low correlation can be eliminated. Fig. 4 shows the relationship between the threshold and the number of chosen features. Taking both feature retention and the stability of the feature count into consideration, the threshold was set to 0.7, which retains 87 features. With this correlation analysis, the feature set for modeling was reduced to 20% of its original size.
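The threshold-based screening can be sketched as follows; a minimal illustration, assuming the features are columns of a matrix X and the subjective scores are a vector y (names are ours):

```python
import numpy as np
from scipy.stats import pearsonr

def screen_features(X, y, threshold=0.7):
    """Keep features whose |Pearson r| with the subjective scores
    meets the threshold.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) subjective evaluation scores
    Returns the kept column indices and their correlation coefficients.
    """
    keep, corrs = [], []
    for j in range(X.shape[1]):
        r, _ = pearsonr(X[:, j], y)
        if abs(r) >= threshold:       # eliminate low-correlation features
            keep.append(j)
            corrs.append(r)
    return np.array(keep), np.array(corrs)
```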
These chosen features correlate strongly with the subjective evaluation results, but they may also correlate strongly with each other; that is, several features may capture the same underlying property. Hence, the chosen features should be clustered so that similar features fall into one group, and the most relevant feature in each group can then be selected for effective modeling.
The cluster distance between each pair of features is calculated by formula (3):

D = 1 - abs(cc)  (3)

where D is the cluster distance and abs(cc) is the absolute value of the correlation coefficient between the two features.
The hierarchical clustering [13, 14] result with the Ward method is shown in Fig. 5; all 87 chosen features can be classified into 6 groups.
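The clustering step can be sketched with SciPy. One caveat: SciPy's Ward linkage formula strictly assumes Euclidean distances, so feeding it the correlation distance D = 1 - abs(cc) is an approximation of the paper's procedure, shown here only for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_features(X, n_clusters=6):
    """Group correlated features by Ward hierarchical clustering on the
    pairwise distance D = 1 - |corr|.

    X : (n_samples, n_features) feature matrix
    Returns one cluster label per feature column.
    """
    corr = np.corrcoef(X, rowvar=False)   # feature-by-feature correlations
    dist = 1.0 - np.abs(corr)             # formula (3)-style distance
    np.fill_diagonal(dist, 0.0)           # guard against round-off on the diagonal
    condensed = squareform(dist, checks=False)
    Z = linkage(condensed, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```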
The most relevant element of each group, i.e., the feature with the strongest correlation with the subjective evaluation within that group, is taken as a modeling feature. This also ensures that each selected feature has low correlation with the other feature groups. The selected features of each group are listed in Table 4. The result indicates that lateral data have little relation to the motion control performance assessment, and that the effective data mainly come from the sensors at the driver's location.
Table 4
Selected Features of Cluster Analysis
No. | Selected Feature | Channel | Direction |
1 | Crossed Hull Area | Driver's Seat Track Acceleration | X-Z |
2 | Range of Envelope | Driver's Seat Bottom Acceleration | X-Z |
3 | Minimum of Derivative Data | Second Row Left Foot Floor Acceleration | Z |
4 | Minimum | Geometry Complex Acceleration | roll-pitch |
5 | Maximum of Derivative Data | Driver's Foot Floor Acceleration | X |
6 | Maximum | Driver's Seat Bottom Acceleration | X |
3.3 Objective Evaluation Modeling
With the selected features of the sample data and the subjective evaluation scores, a regression method can be used for objective evaluation modeling. For simplicity and efficiency, and given the sample size, a linear regression was applied. Compared with common least-squares regression, the LASSO algorithm [15, 16] is more appropriate, because LASSO minimizes the usual sum of squared errors with a bound on the sum of the absolute values of the coefficients. It is a shrinkage and selection method for linear regression, which can be expressed as Eq. (4).
β = argmin_β { ||y - Xβ||₂² + λ||β||₁ }  (4)

where β is the vector of regression coefficients, y is the vector of subjective evaluation scores, X is the matrix of selected features, ||·||₂ is the Euclidean norm (L2 norm), λ is the penalty parameter, and ||β||₁ is the sum of the absolute values of the coefficients.
In order to ensure the generalization of the objective evaluation model and prevent overfitting, the LOOCV (leave-one-out cross-validation) method was used to estimate the MSE (mean squared error) on the sample features. The simulation result is shown in Fig. 6.
From the simulation result, the optimal penalty parameter λ can be obtained. The objective evaluation LASSO model was then established with this best penalty parameter. The model can be expressed as Eq. (5), and the regression coefficients, corresponding to the Table 4 features, are listed in Table 5.
S = β₀ + Σᵢ βᵢFᵢ  (5)

where S is the objective evaluation score, Fᵢ is the i-th selected feature, βᵢ is the corresponding LASSO model regression coefficient, and β₀ is the intercept.
Table 5
LASSO Model Regression Coefficients
Table 4 Relative No. | Item | Coefficient |
1 | Selected Feature1 | 11.389 |
2 | Selected Feature2 | 9.788 |
3 | Selected Feature3 | 478.221 |
4 | Selected Feature4 | -23.116 |
5 | Selected Feature5 | -459.556 |
6 | Selected Feature6 | -11.730 |
N/A | Intercept | 1.577 |
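The modeling procedure of Eqs. (4)–(5) can be sketched with scikit-learn: pick the penalty by leave-one-out cross-validated MSE, then refit on all samples. This is a hedged illustration (the candidate penalty grid and function names are our own; without the paper's data it cannot reproduce the Table 5 coefficients).

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score

def fit_lasso_loocv(X, y, alphas=(0.001, 0.01, 0.1, 1.0)):
    """Select the LASSO penalty by LOOCV mean squared error, then refit.

    X : (n_samples, n_features) selected features
    y : (n_samples,) subjective evaluation scores
    Returns the fitted model, the chosen penalty, and its LOOCV MSE.
    """
    best_alpha, best_mse = None, np.inf
    for a in alphas:
        # cross_val_score returns negated MSE; flip the sign back.
        mse = -cross_val_score(Lasso(alpha=a, max_iter=50000), X, y,
                               cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error").mean()
        if mse < best_mse:
            best_alpha, best_mse = a, mse
    model = Lasso(alpha=best_alpha, max_iter=50000).fit(X, y)
    return model, best_alpha, best_mse
```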
3.4 Comparison
With the established cross-validated LASSO model, the objective evaluation score of each sample can be obtained. The objective evaluation results are shown in Table 6. The objective evaluation scores fluctuate slightly between different samples with the same combination status. This is mainly due to measurement variance caused by deviations in velocity, the position driven through the road, etc.
Table 6
Objective Evaluation of Each Sample
| Combination Status | Sample No. | Objective Evaluation |
| Base | 1 | 0.068 |
|  | 2 | -0.033 |
|  | 3 | 0.036 |
|  | 4 | 0.159 |
|  | 5 | 0.040 |
|  | 6 | -0.160 |
|  | 7 | 0.178 |
|  | 8 | -0.200 |
|  | 9 | -0.058 |
| Larger Stiffness Springs | 10 | 0.168 |
|  | 11 | 0.252 |
| Smaller Stiffness Springs | 12 | -0.539 |
|  | 13 | -0.488 |
| Lower Damping Shock Absorbers | 14 | -1.487 |
|  | 15 | -1.898 |
| Longer Jounce Bumpers | 16 | 0.103 |
|  | 17 | 0.281 |
| Shorter Jounce Bumpers | 18 | -0.085 |
|  | 19 | -0.238 |
| Thicker Dimension Tires | 20 | -0.130 |
|  | 21 | -0.046 |
|  | 22 | 0.022 |
| Lower Tire Pressures | 23 | 0.353 |
|  | 24 | -0.455 |
| Suspension without Watt-Links | 25 | 0.144 |
|  | 26 | -0.024 |
|  | 27 | 0.123 |
|  | 28 | 0.079 |
| Suspension without Watt-Links & Lower Tire Pressures | 29 | 0.224 |
|  | 30 | -0.170 |
To reduce the influence of measurement error and measurement conditions, the mean objective evaluation score of the samples with the same combination status is used for comparison with the subjective evaluation scores. For a convenient and clear comparison, a column plot is adopted, shown in Fig. 7.
From Fig. 7, we can see that although there is some gap between the two types of scores for specific combination statuses, the variation trend of the objective evaluation scores matches the subjective scores well, especially for the "4 - Lower Damping Shock Absorbers" combination status, which produces the largest motion control performance change relative to the base. The differences between the two types of scores are probably caused by two factors: the sample combinations cannot create sufficiently obvious distinctions in motion control performance, and the sample size is not large enough for accurate modeling. Overall, the analysis result is acceptable. The objective evaluation method can predict ride motion control performance consistently with the subjective evaluation. The process is feasible, and the method can serve as a complement to ISO 2631.