// Run CLARANS 20 times with k = 6 clusters (maxNeighbor = 10) and keep the best result.
CLARANS<double[]> clusters = PartitionClustering.run(20, () -> CLARANS.fit(x, new EuclideanDistance(), 6, 10));
// Project the data to 2D with PCA for visualization.
PCA pca = PCA.fit(x);
pca.setProjection(2);
double[][] y = pca.project(x);
// Scatter plot of the projected points, colored by cluster label.
Canvas plot = ScatterPlot.of(y, clusters.y, '-').canvas();
Hi all. I'm having trouble getting started with basic linear algebra operations in Smile. I wonder if someone could help? In particular, I don't think I understand how symmetric matrices work.
val m2 = matrix(c(3.0,1.0),c(1.0,2.0)) // create a matrix, which is symmetric
m2.isSymmetric // returns false
m2.cholesky() // fails
If I create a symmetric matrix, isSymmetric nevertheless returns false, so naturally cholesky fails. Is there something I need to do to tell Smile that the matrix is symmetric? Thanks.
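For what it's worth, the 2x2 matrix above is symmetric positive definite, so a Cholesky factorization does exist mathematically; the snag is presumably on the API side. A plain-Java sketch, independent of Smile's API (the class and method names here are purely illustrative), confirms the factorization:

```java
// Plain-Java Cholesky of the 2x2 matrix from the question, [[3,1],[1,2]],
// confirming it is symmetric positive definite and therefore factorizable.
public class CholeskyDemo {
    // Returns lower-triangular L with A = L * L^T, for a small SPD matrix.
    static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int j = 0; j < n; j++) {
            double d = a[j][j];
            for (int k = 0; k < j; k++) d -= l[j][k] * l[j][k];
            if (d <= 0.0) throw new ArithmeticException("not positive definite");
            l[j][j] = Math.sqrt(d);
            for (int i = j + 1; i < n; i++) {
                double s = a[i][j];
                for (int k = 0; k < j; k++) s -= l[i][k] * l[j][k];
                l[i][j] = s / l[j][j];
            }
        }
        return l;
    }

    public static void main(String[] args) {
        double[][] a = {{3.0, 1.0}, {1.0, 2.0}};
        double[][] l = cholesky(a);
        // L = [[sqrt(3), 0], [1/sqrt(3), sqrt(5/3)]]
        System.out.printf("%.4f %.4f %.4f%n", l[0][0], l[1][0], l[1][1]);
        // prints 1.7321 0.5774 1.2910
    }
}
```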
val mat = matrix(c(3.0,3.5),c(2.0,2.0),c(0.0,1.0))
mat.qr().Q
returns a matrix with columns that are not orthonormal.
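For comparison, orthonormal columns of Q should satisfy Q^T Q = I. A textbook modified Gram-Schmidt on that same 3x2 matrix (plain Java, independent of Smile; names are illustrative) shows what to expect:

```java
// Modified Gram-Schmidt on the 3x2 matrix from the question, as a sanity
// check of what "orthonormal columns" means: Q^T Q should be the identity.
public class GramSchmidtDemo {
    // Orthonormalizes the columns of a (m x n, m >= n, full column rank).
    static double[][] orthonormalize(double[][] a) {
        int m = a.length, n = a[0].length;
        double[][] q = new double[m][n];
        for (int i = 0; i < m; i++) System.arraycopy(a[i], 0, q[i], 0, n);
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < j; k++) {
                double dot = 0.0;                 // project out the q_k component
                for (int i = 0; i < m; i++) dot += q[i][k] * q[i][j];
                for (int i = 0; i < m; i++) q[i][j] -= dot * q[i][k];
            }
            double norm = 0.0;
            for (int i = 0; i < m; i++) norm += q[i][j] * q[i][j];
            norm = Math.sqrt(norm);
            for (int i = 0; i < m; i++) q[i][j] /= norm;
        }
        return q;
    }

    public static void main(String[] args) {
        // Rows (3, 3.5), (2, 2), (0, 1), as in the question.
        double[][] q = orthonormalize(new double[][]{{3.0, 3.5}, {2.0, 2.0}, {0.0, 1.0}});
        // Print Q^T Q; tiny values are clamped to 0 for clean output.
        for (int j = 0; j < 2; j++) {
            for (int k = 0; k < 2; k++) {
                double dot = 0.0;
                for (int i = 0; i < 3; i++) dot += q[i][j] * q[i][k];
                System.out.printf("%.6f ", Math.abs(dot) < 1e-12 ? 0.0 : dot);
            }
            System.out.println();
        }
        // prints the 2x2 identity: 1.000000 0.000000 / 0.000000 1.000000
    }
}
```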
jupyterlab.sh to bootstrap and install the almond kernel (haifengl/smile#672)
The xh initialization should be inside the loop as a copy of x, like:
default double g(double[] x, double[] gradient) {
    double fx = f(x);
    int n = x.length;
    for (int i = 0; i < n; i++) {
        double[] xh = x.clone();
        double xi = x[i];
        double h = EPSILON * Math.abs(xi);
        if (h == 0.0) {
            h = EPSILON;
        }
        xh[i] = xi + h; // trick to reduce finite-precision error.
        h = xh[i] - xi;
        double fh = f(xh);
        xh[i] = xi;
        gradient[i] = (fh - fx) / h;
    }
    return fx;
}
f(x1 + h1, 0, 0) - f(x1, x2, x3), then f(x1 + h1, x2 + h2, 0) - f(x1, x2, x3), which seems wrong... or am I missing something?
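Lifted out of the interface into a standalone class for a quick sanity check, the clone-inside-the-loop version recovers the correct partials on a function with a known gradient. The class name, EPSILON value, and test function below are illustrative assumptions, not Smile's actual definitions:

```java
// Standalone sketch of the per-coordinate forward-difference gradient
// discussed above, with a fresh copy of x per coordinate so each partial
// perturbs exactly one component.
public class NumericalGradient {
    // Illustrative step-size constant; Smile defines its own EPSILON.
    static final double EPSILON = Math.sqrt(Math.ulp(1.0));

    // Example objective f(x) = x0^2 + 3*x1, whose exact gradient is (2*x0, 3).
    static double f(double[] x) {
        return x[0] * x[0] + 3.0 * x[1];
    }

    static double g(double[] x, double[] gradient) {
        double fx = f(x);
        for (int i = 0; i < x.length; i++) {
            double[] xh = x.clone();
            double xi = x[i];
            double h = EPSILON * Math.abs(xi);
            if (h == 0.0) h = EPSILON;
            xh[i] = xi + h;   // trick to reduce finite-precision error
            h = xh[i] - xi;   // the step actually taken
            gradient[i] = (f(xh) - fx) / h;
        }
        return fx;
    }

    public static void main(String[] args) {
        double[] x = {2.0, -1.0};
        double[] gradient = new double[2];
        double fx = g(x, gradient);
        System.out.printf("f = %.4f, grad = (%.4f, %.4f)%n", fx, gradient[0], gradient[1]);
        // prints f = 1.0000, grad = (4.0000, 3.0000)
    }
}
```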
java.lang.ArithmeticException: LAPACK GETRS error code: -8
at smile.math.matrix.Matrix$LU.solve(Matrix.java:2219)
at smile.math.matrix.Matrix$LU.solve(Matrix.java:2189)
at smile.math.BFGS.subspaceMinimization(BFGS.java:875)
at smile.math.BFGS.minimize(BFGS.java:647)
Kindly look into this issue regarding "Formula.lhs" of RandomForest. As my dataset goes through several transformations, I end up with this:
var xtrain: Array[Array[Double]] = xtrainx
var ytrain: Array[Int] = bc_ytrainSet.value.map(x=>scala.math.floor(x).toInt)
var xtest: Array[Array[Double]] = xtestx
var ytest: Array[Int] = bc_ytestSet.value.map(x=>scala.math.floor(x).toInt)
//var nn: KNN[Array[Double]] =KNN.fit(xtrain, ytrain, 5)
var rf = RandomForest.fit(Formula.lhs(?), xtrain)
var pred = rf.predict(xtest)
var accu = Accuracy.of(ytest, pred)
Actually, I want to know what to write inside Formula.lhs(?) in the absence of any header. For KNN it works fine without any header.