Radial basis function (RBF) kernel

Radial basis function network
sklearn.preprocessing
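A quick sketch of the RBF kernel in scikit-learn. Note that the kernel function itself lives in `sklearn.metrics.pairwise` rather than `sklearn.preprocessing`; the toy inputs below are made up for illustration:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

# RBF kernel: K(x, z) = exp(-gamma * ||x - z||^2)
X = np.array([[0.0], [1.0], [2.0]])
K = rbf_kernel(X, X, gamma=0.5)

# The same matrix computed by hand for 1-D inputs
K_manual = np.exp(-0.5 * (X - X.T) ** 2)
print(np.allclose(K, K_manual))  # prints True
```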
support vector regression (SVR)
https://www.saedsayad.com/support_vector_machine_reg.htm
simple linear regression

multiple linear regression

polynomial regression
X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
y = [45000, 50000, 60000, 80000, 110000, 150000, 200000, 300000, 500000, 1000000]
Polynomial regression is a form of linear regression in which the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y|x).
https://www.geeksforgeeks.org/python-implementation-of-polynomial-regression/
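Following the GeeksforGeeks link above, the fit can be sketched with scikit-learn on the data shown; the choice of degree 4 here is an assumption, not something fixed by the data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Salary-style data from the notes
X = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]])
y = np.array([45000, 50000, 60000, 80000, 110000, 150000,
              200000, 300000, 500000, 1000000])

# Expand x into [1, x, x^2, x^3, x^4], then fit ordinary least squares on it
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)
model = LinearRegression().fit(X_poly, y)

# Predict at an intermediate level, e.g. 6.5
pred = model.predict(poly.transform([[6.5]]))
print(pred)
```

The model is still "linear" in the sense that it is linear in the coefficients; only the features are polynomial.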

support vector regression (SVR)
X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
y = [45000, 50000, 60000, 80000, 110000, 150000, 200000, 300000, 500000, 1000000]
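A sketch of SVR on the same data, assuming an RBF kernel. SVR is sensitive to feature scale, so both X and y are standardized before fitting and the prediction is mapped back afterwards:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

X = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]], dtype=float)
y = np.array([45000, 50000, 60000, 80000, 110000, 150000,
              200000, 300000, 500000, 1000000], dtype=float)

# Standardize both the feature and the target
sx, sy = StandardScaler(), StandardScaler()
X_s = sx.fit_transform(X)
y_s = sy.fit_transform(y.reshape(-1, 1)).ravel()

model = SVR(kernel="rbf").fit(X_s, y_s)

# Predict on the original scale by inverting the target scaling
pred_s = model.predict(sx.transform([[6.5]]))
pred = sy.inverse_transform(pred_s.reshape(-1, 1)).ravel()
print(pred)
```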
Decision Tree regression
X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
y = [45000, 50000, 60000, 80000, 110000, 150000, 200000, 300000, 500000, 1000000]
Decision tree builds regression or classification models in the form of a tree structure. It breaks down a dataset into smaller and smaller subsets while at the same time an associated decision tree is incrementally developed. The final result is a tree with decision nodes and leaf nodes.
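For regression, each leaf of the tree predicts the mean target of the training samples that fall into it. A minimal sketch on the data above:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]])
y = np.array([45000, 50000, 60000, 80000, 110000, 150000,
              200000, 300000, 500000, 1000000])

# Grown to full depth on this tiny set, every leaf holds exactly one
# sample, so the tree reproduces the training targets exactly
model = DecisionTreeRegressor(random_state=0).fit(X, y)
print(model.predict([[6.5]]))
```

Because the prediction is piecewise constant, any query between two training points returns one of the training values rather than an interpolation.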

Random Forest Regression
X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
y = [45000, 50000, 60000, 80000, 110000, 150000, 200000, 300000, 500000, 1000000]
Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set.
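The averaging described above can be sketched with scikit-learn; the choice of 10 trees is arbitrary:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.array([[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]])
y = np.array([45000, 50000, 60000, 80000, 110000, 150000,
              200000, 300000, 500000, 1000000])

# 10 trees, each trained on a bootstrap sample of the data;
# the forest's prediction is the mean of the trees' predictions
model = RandomForestRegressor(n_estimators=10, random_state=0).fit(X, y)
print(model.predict([[6.5]]))
```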

logistic regression
https://christophm.github.io/interpretable-ml-book/logistic.html

Confusion matrix
https://www.geeksforgeeks.org/confusion-matrix-machine-learning/

A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values, broken down by class. The confusion matrix shows the ways in which a classification model is confused when it makes predictions: it gives insight not only into the errors a classifier makes but, more importantly, into the types of errors being made.
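A minimal sketch with scikit-learn; the true and predicted labels below are made up for illustration:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true vs. predicted labels for a binary classifier
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred)
print(cm)  # [[3 1]
           #  [1 3]]
```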
Supervised learning
Unsupervised learning
Logistic Regression
Python code
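The "Python code" note above can be filled in with a scikit-learn sketch; the hours-studied data is made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied -> pass (1) / fail (0)
X = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)
print(model.predict([[2], [7]]))      # predicted class labels
print(model.predict_proba([[4.5]]))   # probabilities of fail/pass
```

Unlike linear regression, the model outputs a probability squashed through the logistic function, which is then thresholded at 0.5 to produce a class label.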

k-nearest neighbors (k-NN) algorithm
https://www.saedsayad.com/k_nearest_neighbors.htm
K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., distance functions). KNN has been used in statistical estimation and pattern recognition as a non-parametric technique since the beginning of the 1970s.
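The "classify by distance to stored cases" idea can be sketched with scikit-learn; the 2-D points below are made up:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated clusters of toy 2-D points
X = np.array([[1, 1], [1, 2], [2, 1], [6, 6], [6, 7], [7, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# Classify each query point by majority vote among its 3 nearest
# training points (Euclidean distance by default)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[2, 2], [6, 5]]))  # [0 1]
```

There is no real "training" step: fit() just stores the data, and all the work happens at prediction time.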
SVM and Kernel SVM
https://towardsdatascience.com/svm-and-kernel-svm-fed02bef1200
Logistic Regression

K-Nearest Neighbors

Support Vector Machine

Kernel Support Vector Machine

Naive Bayes
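In the spirit of the towardsdatascience link above, a minimal contrast between a linear and an RBF-kernel SVM; the XOR-style data and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style toy data: no straight line separates the two classes
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X, y)

print(linear.score(X, y))  # below 1.0: a linear boundary cannot fit XOR
print(rbf.score(X, y))     # 1.0: the RBF kernel separates it
```

The kernel implicitly maps the points into a higher-dimensional space where a linear separator exists, without ever computing that mapping explicitly.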
