First, run the following to set up the environment and get the official model code:

# Clone the official repo
git clone git@github.com:facebookresearch/HighResCanopyHeight.git

# Install dependencies
pip install "stac-model[torch]"

# Download the official pretrained checkpoints
mkdir checkpoints && aws s3 --no-sign-request sync s3://dataforgood-fb-data/forests/v1/models/saved_checkpoints/ checkpoints/
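
# Verify the checkpoint used in the export script below was downloaded
ls checkpoints/SSLhuge_satellite.pth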

Export the model using the following:

from pathlib import Path
import sys
sys.path.append("HighResCanopyHeight")

import torch
import torch.nn as nn
import torchvision.transforms.v2 as T
from stac_model.torch.export import export, package

import src.transforms
from inference import SSLAE


# Create model and load checkpoint
class TreeCanopyHeightModel(nn.Module):
    def __init__(self, classify=True, huge=True):
        super().__init__()
        self.model = SSLAE(pretrained=None, classify=classify, huge=huge, n_bins=256)

    def forward(self, x):
        outputs = self.model(x)
        pred = 10 * outputs + 0.001
        return pred.relu()

path = "checkpoints/SSLhuge_satellite.pth"
ckpt = torch.load(path, map_location="cpu", weights_only=False)
state_dict = {f"model.{k}": v for k, v in ckpt["state_dict"].items()}
model = TreeCanopyHeightModel()
model.load_state_dict(state_dict)
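
# Optional: switch to eval mode before export so any dropout/normalization
# layers run deterministically during tracing
model.eval()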

# Create exportable transforms
original_transform = src.transforms.SSLNorm().Trans
norm = original_transform.transforms[-1]

transforms = nn.Sequential(
    T.Normalize(mean=[0], std=[255]),  # replace ToTensor() with normalize to 0-1
    T.Normalize(mean=norm.mean, std=norm.std)
)
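
# Optional sanity check (an assumption: SSLNorm's pipeline is just ToTensor()
# followed by Normalize, so the Sequential above should reproduce it exactly)
import numpy as np
dummy_chip = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
expected = original_transform(dummy_chip)  # ToTensor() scales to [0, 1], then normalizes
actual = transforms(torch.from_numpy(dummy_chip).permute(2, 0, 1).float())
assert torch.allclose(expected, actual, atol=1e-6)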

# Export and save to pt2
model_program, transforms_program = export(
    input_shape=[-1, 3, 224, 224],
    model=model,
    transforms=transforms,
    device="cpu",
    dtype=torch.float32,
)
package(
    output_file=Path("model.pt2"),
    model_program=model_program,
    transforms_program=transforms_program,
    metadata_properties=None,
    aoti_compile_and_package=False
)
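
As a quick sanity check, you can run a dummy batch through the two exported programs. This is a minimal sketch, assuming export() returns torch.export.ExportedProgram objects whose .module() gives a callable graph:

dummy_batch = torch.randint(0, 256, (1, 3, 224, 224)).float()  # raw 0-255 RGB chip
pixels = transforms_program.module()(dummy_batch)   # normalize to the model's input range
heights = model_program.module()(pixels)             # per-pixel canopy height in meters
print(heights.shape, float(heights.min()), float(heights.max()))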