Mateusz Paszynski · add polyreg and linreg · 72d9d59

models · add polyreg and linreg
- · 1.52 kB · initial commit

KNNInsuranceModel.joblib · 75.5 kB · publish website
Detected Pickle imports (7):
- "numpy.dtype"
- "_codecs.encode"
- "sklearn.neighbors._regression.KNeighborsRegressor"
- "numpy.core.multiarray._reconstruct"
- "numpy.ndarray"
- "joblib.numpy_pickle.NumpyArrayWrapper"
- "__main__.KNNInsuranceModel"
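
The scan above shows that KNNInsuranceModel.joblib pickles a class stored under `__main__`, so loading it in a fresh process raises an AttributeError unless a class of that name is resolvable in `__main__` first. Below is a minimal sketch of one way to satisfy that requirement; the placeholder class body and the `sys.modules` alias are assumptions for illustration, not the repository's actual code.

```python
import sys
import joblib


class KNNInsuranceModel:
    """Placeholder carrying the pickled class's name; the real attributes
    and methods live in the original training script, not here."""
    pass


# Make the name visible under __main__ even if this code is imported as a
# module rather than run directly as the main script.
sys.modules["__main__"].KNNInsuranceModel = KNNInsuranceModel

model = joblib.load("KNNInsuranceModel.joblib")
print(type(model))
```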

- · 80.9 kB · add polyreg and linreg

NuSVRInsuranceModel.joblib · 90.9 kB · publish website
Detected Pickle imports (12):
- "sklearn.pipeline.Pipeline"
- "__main__.NuSVRInsuranceModel"
- "sklearn.compose._column_transformer.ColumnTransformer"
- "__main__.NuSVRInsuranceModel.MultiplyScaler"
- "numpy.dtype"
- "_codecs.encode"
- "numpy.core.multiarray._reconstruct"
- "numpy.float64"
- "numpy.ndarray"
- "joblib.numpy_pickle.NumpyArrayWrapper"
- "sklearn.preprocessing._data.StandardScaler"
- "sklearn.preprocessing._encoders.OneHotEncoder"
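
The import list for NuSVRInsuranceModel.joblib points at a scikit-learn Pipeline whose ColumnTransformer combines StandardScaler and OneHotEncoder, alongside a custom MultiplyScaler nested in the wrapper class. The sketch below shows how that kind of preprocessing stack is typically assembled; the column names, the NuSVR estimator, and all parameters are assumptions (the custom MultiplyScaler step is omitted), not the repository's code.

```python
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import NuSVR

numeric_cols = ["age", "bmi", "children"]       # assumed numeric features
categorical_cols = ["sex", "smoker", "region"]  # assumed categorical features

# Scale numeric columns, one-hot encode categorical ones, as the scan suggests.
preprocess = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), numeric_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ]
)

pipeline = Pipeline(steps=[
    ("preprocess", preprocess),
    ("regressor", NuSVR()),  # assumed from the filename
])

# pipeline.fit(X_train, y_train)
# joblib.dump(pipeline, "NuSVRInsuranceModel.joblib")
```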

- · 83 kB · add polyreg and linreg
- · 329 Bytes · initial commit

RandomForestInsuranceModel.joblib · 225 kB · publish website
Detected Pickle imports (11):
- "sklearn.pipeline.Pipeline"
- "numpy.dtype"
- "sklearn.compose._column_transformer.ColumnTransformer"
- "_codecs.encode"
- "numpy.core.multiarray._reconstruct"
- "__main__.RandomForestInsuranceModel"
- "numpy.float64"
- "numpy.ndarray"
- "joblib.numpy_pickle.NumpyArrayWrapper"
- "sklearn.preprocessing._data.StandardScaler"
- "sklearn.preprocessing._encoders.OneHotEncoder"
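
As with the other models, the scan flags "__main__.RandomForestInsuranceModel": pickle records a class by module and name, so a wrapper defined inside the training script is stored under `__main__` and can only be reloaded where that name exists. The sketch below illustrates the mechanism with a hypothetical wrapper; the RandomForestRegressor attribute and the method names are assumptions based on the filename.

```python
import joblib
from sklearn.ensemble import RandomForestRegressor


class RandomForestInsuranceModel:
    """Hypothetical wrapper; defined in a script run as the main program,
    so pickle records it as '__main__.RandomForestInsuranceModel'."""

    def __init__(self):
        self.estimator = RandomForestRegressor()  # assumed, based on the filename

    def fit(self, X, y):
        self.estimator.fit(X, y)
        return self

    def predict(self, X):
        return self.estimator.predict(X)


if __name__ == "__main__":
    # Moving the class into an importable module (e.g. a hypothetical
    # insurance_models.py) would make the pickle reference that module
    # instead of __main__, so other code could load it with a plain import.
    joblib.dump(RandomForestInsuranceModel(), "RandomForestInsuranceModel.joblib")
```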

XGBoostInsuranceModel.joblib · 184 kB · publish website
Detected Pickle imports (7):
- "sklearn.impute._base.SimpleImputer"
- "numpy.dtype"
- "_codecs.encode"
- "numpy.core.multiarray._reconstruct"
- "numpy.ndarray"
- "joblib.numpy_pickle.NumpyArrayWrapper"
- "__main__.XGBoostInsuranceModel"
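
For XGBoostInsuranceModel.joblib the scan lists SimpleImputer and the `__main__` wrapper but no sklearn Pipeline, which suggests the wrapper holds its preprocessing and regressor as plain attributes. The sketch below is only a guess at that shape; the use of xgboost.XGBRegressor, the imputer strategy, and the method names are assumptions.

```python
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor


class XGBoostInsuranceModel:
    """Hypothetical shape of the wrapper: imputer and regressor as attributes."""

    def __init__(self):
        self.imputer = SimpleImputer(strategy="median")  # assumed strategy
        self.regressor = XGBRegressor()                  # assumed estimator

    def fit(self, X, y):
        # Impute missing values before fitting the gradient-boosted trees.
        self.regressor.fit(self.imputer.fit_transform(X), y)
        return self

    def predict(self, X):
        return self.regressor.predict(self.imputer.transform(X))
```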

- · 4.89 kB · add polyreg and linreg
- · 84 kB · add polyreg and linreg
- · 1.1 kB · added requirements.txt