What is model bias?
In a classification context, model bias is the tendency of the classifier to make the same systematic errors regardless of which training set is drawn: the model's expected prediction, averaged over training sets, differs from the true label.

What is model variance?
Model variance is the tendency of different training sets to produce different models or predictions with the same type of learner.
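This instability can be demonstrated with a small sketch (synthetic data; the linear target, degree-9 polynomial, and noise level are illustrative assumptions): the same flexible learner, trained on two independent samples from one distribution, gives noticeably different predictions at the same test point.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_set(n=20):
    """Draw a training set from the same underlying process: y = 2x + noise."""
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 0.3, n)
    return x, y

# Train the same flexible learner (degree-9 polynomial) on two
# independently drawn training sets and predict at the same point.
fits = []
for _ in range(2):
    x, y = sample_training_set()
    coeffs = np.polyfit(x, y, deg=9)
    fits.append(np.polyval(coeffs, 0.5))

# The two predictions at x = 0.5 typically differ noticeably, even though
# both training sets came from the same distribution: that gap is variance.
print(fits)
```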

Which of the following is more harmful to performance on the test set than on the training set?

Model variance is high when different randomly sampled training sets lead to very different predictions on the test set. High variance indicates that the model overfits the training set: the training error may keep decreasing, but the test error will increase.
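A minimal numerical sketch of this effect (the synthetic sine data, noise level, and polynomial degrees are illustrative assumptions): as the model grows more flexible, training error falls while test error rises.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    """Noisy samples from an underlying sine function."""
    x = np.linspace(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)

def mse(x, y, coeffs):
    """Mean squared error of a polynomial fit on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

train_errors, test_errors = [], []
for deg in (1, 14):  # a simple model vs. a near-interpolating one
    coeffs = np.polyfit(x_train, y_train, deg)
    train_errors.append(mse(x_train, y_train, coeffs))
    test_errors.append(mse(x_test, y_test, coeffs))

# The degree-14 fit drives training error toward zero but does far
# worse on the test set: the signature of overfitting.
```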
During training, if your model shows significantly different performance across different training sets, which of the following is NOT a valid way to reduce this variance?

Improving your optimisation algorithm would decrease the bias, not the variance. The other three options would all help to reduce variance. To decrease model complexity, you can reduce the number of features or apply regularisation.
Given a model y = θ0 + θ1x, after adding more basis functions to this model, it becomes y = θ0 + θ1x + … + θn x^n. Adding more basis functions can:
Adding more basis functions would increase the model complexity, which decreases model bias and increases variance. The following figure illustrates the relationship between model complexity, model bias, and variance.

What are the possible solutions to reduce evaluation variance?
To reduce evaluation variance, we can increase the size of the test set, or evaluate multiple times using repeated random subsampling or K-fold cross-validation and average the performance across runs. Stratification generates training and test sets that contain approximately the same distribution of class labels as the overall dataset; it helps to reduce evaluation bias rather than variance.
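A minimal sketch of stratified K-fold splitting (pure NumPy; the round-robin fold assignment and the imbalanced toy labels are illustrative assumptions) shows that each fold preserves the overall class ratio:

```python
import numpy as np

rng = np.random.default_rng(3)

def stratified_kfold_indices(labels, k):
    """Yield (train_idx, test_idx) pairs where each fold keeps roughly
    the overall class proportions (stratification)."""
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        # Shuffle this class's examples, then deal them out round-robin.
        idx = rng.permutation(np.where(labels == cls)[0])
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    for i in range(k):
        test_idx = np.array(folds[i])
        train_idx = np.array([j for f in folds[:i] + folds[i + 1:] for j in f])
        yield train_idx, test_idx

labels = np.array([0] * 40 + [1] * 20)  # imbalanced two-class toy problem
fold_ratios = [np.mean(labels[test] == 1)
               for _, test in stratified_kfold_indices(labels, k=5)]
# Each test fold keeps the overall minority-class fraction (1/3),
# unlike a naive random split, which can drift on small folds.
```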