part1:
idea of my.grad.desc() (gradient descent):
X_{n+1} <- X_n - lambda * f'(X_n)   # lambda is small
# number of iterations = large
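A minimal sketch of this idea in R; the body of my.grad.desc() below is my assumption, not the course's exact function:

# gradient-descent sketch (assumed implementation)
my.grad.desc <- function(grad, x0, lambda = 0.01, n.iter = 10000) {
  x <- x0
  for (i in 1:n.iter) {
    x <- x - lambda * grad(x)   # X_{n+1} <- X_n - lambda * f'(X_n)
  }
  x
}

# example: minimise f(x) = (x - 3)^2, so f'(x) = 2*(x - 3)
my.grad.desc(function(x) 2 * (x - 3), x0 = 0)   # -> close to 3 after many iterations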
nlm() = non-linear minimization
Newton's method: X_{n+1} <- X_n - (f''(X_n))^-1 * f'(X_n)
# i.e. gradient descent with lambda = 1/f''(X_n)
# number of iterations = small
# cost per iteration is high (second derivatives needed)
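For comparison, a 1-D Newton sketch (the helper name is my assumption), plus base R's nlm(), which does Newton-type minimisation for us:

# 1-D Newton's method sketch (assumed helper)
newton <- function(grad, hess, x0, n.iter = 20) {
  x <- x0
  for (i in 1:n.iter) {
    x <- x - grad(x) / hess(x)   # X_{n+1} <- X_n - f'(X_n)/f''(X_n)
  }
  x
}
newton(function(x) 2 * (x - 3), function(x) 2, x0 = 0)   # quadratic: exact in one step

nlm(function(x) (x - 3)^2, p = 0)$estimate   # built-in non-linear minimization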
knn:
k-fold CV error is estimated several times (repeated k-fold CV);
after that we summarise the resulting set of k-fold error estimates (e.g. average them), as in the sketch below
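A sketch of repeated k-fold CV error for kNN; the data set, neighbour count, and fold/repeat counts are my assumptions, not from the notes:

library(class)   # for knn()

# repeated k-fold CV error for kNN (sketch)
rep.kfold.knn <- function(X, y, k.nn = 5, k.folds = 10, n.rep = 20) {
  replicate(n.rep, {
    folds <- sample(rep(1:k.folds, length.out = nrow(X)))  # random fold labels
    errs <- sapply(1:k.folds, function(f) {
      test <- folds == f
      pred <- knn(X[!test, ], X[test, ], y[!test], k = k.nn)
      mean(pred != y[test])                                # error on held-out fold
    })
    mean(errs)                                             # one k-fold CV estimate
  })
}

cv.errs <- rep.kfold.knn(iris[, 1:4], iris$Species)
mean(cv.errs)   # summarise the set of repeated k-fold error estimates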
readr
provides a faster tabular importing framework (part of the tidyverse)
fread
is not assessed: not a part of the tidyverse (it comes from data.table)
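A quick usage comparison; the demo file is a placeholder I write to a temp path:

library(readr)
library(data.table)

tmp <- tempfile(fileext = ".csv")   # placeholder file for the demo
write_csv(head(iris), tmp)

df <- read_csv(tmp)   # readr: tidyverse importer, returns a tibble
dt <- fread(tmp)      # data.table's fread: fast, but outside the tidyverse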