Upload folder using huggingface_hub
- LICENSE +1 -1
- README.md +4 -93
- example/APA/GM12878_250M_chr151617_loops.pileup.png +0 -0
- example/CLI_walkthrough.ipynb +8 -7
- example/README.md +5 -0
- example/loop_annotation/GM12878_250M_chr151617_loop_score.bedpe +0 -0
- example/loop_annotation/GM12878_250M_chr151617_loops.bedpe +0 -0
- example/loop_annotation/GM12878_250M_chr151617_loops_method2.bedpe +0 -0
- example/loop_annotation/loop_annotation.ipynb +0 -0
- polaris/loop.py +2 -3
- polaris/loopLF.py +218 -0
- polaris/model/sft_loops.pt +3 -0
- polaris/polaris.py +2 -2
- polaris/version.py +1 -1
- setup.py +2 -2
- setup.sh +43 -0
LICENSE
CHANGED
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c)
+Copyright (c) 2024 ai4nucleome
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
README.md
CHANGED
@@ -19,101 +19,12 @@ tags:
 <!-- <img src="https://img.shields.io/badge/dependencies-tested-green"> -->
 </a>
 
+### See https://github.com/ai4nucleome/Polaris for more details.
 
 🌟 **Polaris** is a versatile and efficient command line tool tailored for rapid and accurate chromatin loop detection from contact maps generated by various assays, including bulk Hi-C, scHi-C, Micro-C, and DNA SPRITE. Polaris is particularly well-suited for analyzing **sparse scHi-C data and low-coverage datasets**.
 
-
-
-</div>
-
-
-- Usage examples for single-cell Hi-C and bulk Hi-C loop annotation are under the [**example folder**](https://github.com/ai4nucleome/Polaris/tree/master/example).
-- The scripts and data to **reproduce our analysis** can be found at: [**Polaris Reproducibility**](https://zenodo.org/records/14294273).
-
-> ❗️<b>NOTE❗️:</b> We suggest users run Polaris on <b>GPU</b>.
-> You can run Polaris on CPU for loop annotations, but it is much slower than on GPU.
-
-> ❗️**NOTE❗️:** If you encounter a `CUDA OUT OF MEMORY` error, please:
-> - Check your GPU's status and available memory.
-> - Reduce the --batchsize parameter. (The default value of 128 requires approximately 36GB of CUDA memory. Setting it to 24 will reduce the requirement to less than 10GB.)
-
-## Documentation
-📝 **Extensive documentation** can be found at: [Polaris Doc](https://nucleome-polaris.readthedocs.io/en/latest/).
-
-## Installation
-Polaris is developed and tested on Linux machines with python3.9 and relies on several libraries including pytorch, scipy, etc.
-We **strongly recommend** that you install Polaris in a virtual environment.
-
-We suggest using [conda](https://anaconda.org/) to create a virtual environment for it (it should also work without conda, i.e. with pip). You can run the command snippets below to install Polaris:
-
-```bash
-git clone https://github.com/ai4nucleome/Polaris.git
-cd Polaris
-conda create -n polaris python=3.9
-conda activate polaris
-```
-------
-### ❗️Important Note❗️: Downloading Polaris Network Weights
-
-The Polaris repository utilizes Git Large File Storage (Git-LFS) to host its pre-trained model weight files. Standard `git clone` operations **will not** automatically download these large files unless Git-LFS is installed and configured.
-
-To resolve this, please follow one of the methods below:
-
-#### Method 1: Manual Download via Browser
-
-1. Directly download the pre-trained model weights (`sft_loop.pt`) from the [Polaris model directory](https://github.com/ai4nucleome/Polaris/blob/master/polaris/model/sft_loop.pt).
-2. Save the file to the directory:
-```bash
-Polaris/polaris/model/
-```
-#### Method 2: Install Git-LFS
-1. Install Git-LFS by following the official instructions: [Git-LFS Installation Guide](https://git-lfs.com/).
-
-2. After installation, either:
-
-   Re-clone the repository:
-
-   ```bash
-   git clone https://github.com/ai4nucleome/Polaris.git
-   ```
-   OR, if the repository is already cloned, run:
-
-   ```bash
-   git lfs pull
-   ```
-This ensures all large files, including model weights, are retrieved.
----------
-
-Install [PyTorch](https://pytorch.org/get-started/locally/) as described on their website. It might be the following command depending on your cuda version:
-
-```bash
-pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu121
-```
-Install Polaris:
-```bash
-pip install --use-pep517 --editable .
-```
-If this fails, please try `python setup.py build` and `python setup.py install` first.
-
-The installation requires network access to download libraries. Usually, the installation will finish within 5 minutes. The installation time is longer if network access is slow and/or unstable.
-
-## Quick Start for Loop Annotation
-```bash
-polaris loop pred -i [input mcool file] -o [output path of annotated loops]
-```
-It outputs predicted loops from the input contact map at 5kb resolution.
-### Output format
-It contains tab-separated fields as follows:
-```
-Chr1 Start1 End1 Chr2 Start2 End2 Score
-```
-| Field         | Detail                                           |
-|:-------------:|:------------------------------------------------:|
-| Chr1/Chr2     | chromosome names                                 |
-| Start1/Start2 | start genomic coordinates                        |
-| End1/End2     | end genomic coordinates (i.e. End1=Start1+resol) |
-| Score         | Polaris's loop score [0~1]                       |
-
+## [📝Documentation](https://nucleome-polaris.readthedocs.io/en/latest/)
+**Detailed documentation** can be found at: [Polaris Doc](https://nucleome-polaris.readthedocs.io/en/latest/).
 
 ## Citation:
 Yusen Hou, Audrey Baguette, Mathieu Blanchette*, & Yanlin Zhang*. __A versatile tool for chromatin loop annotation in bulk and single-cell Hi-C data__. _bioRxiv_, 2024. [Paper](https://doi.org/10.1101/2024.12.24.630215)
@@ -128,6 +39,6 @@ Yusen Hou, Audrey Baguette, Mathieu Blanchette*, & Yanlin Zhang*. __A versatile
 ```
 
 ## 📩 Contact
-A GitHub issue is preferable for all problems related to using Polaris.
+A [GitHub issue](https://github.com/ai4nucleome/Polaris/issues) is preferable for all problems related to using Polaris.
 
 For other concerns, please email Yusen Hou or Yanlin Zhang ([email protected], [email protected]).
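
The Quick Start material removed from the README above documents a seven-column, tab-separated bedpe output. As a quick illustration (a sketch only; it assumes the bedpe has no header line and uses the example file shipped in this commit), high-confidence loops can be pulled out with standard shell tools:

```bash
# Keep loops with Polaris score > 0.9 (column 7 of the tab-separated bedpe).
awk -F '\t' '$7 > 0.9' example/loop_annotation/GM12878_250M_chr151617_loops.bedpe | head
```
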
example/APA/GM12878_250M_chr151617_loops.pileup.png
CHANGED
example/CLI_walkthrough.ipynb
CHANGED
@@ -27,8 +27,8 @@
 "\n",
 " Polaris\n",
 "\n",
-" A Versatile
-" Data\n",
+" A Versatile Framework for Chromatin Loop Annotation in Bulk and Single-cell\n",
+" Hi-C Data\n",
 "\n",
 "Options:\n",
 " --help Show this message and exit.\n",
@@ -80,10 +80,10 @@
 " --help Show this message and exit.\n",
 "\n",
 "Commands:\n",
-"
-"
-"
-"
+" pool Call loops from loop candidates by clustering\n",
+" pred Predict loops from input contact map directly\n",
+" score Predict loop score for each pixel in the input contact map\n",
+" scorelf *development* Score Pixels for Very Large mcool (>30GB) ...\n"
 ]
 }
 ],
@@ -102,7 +102,7 @@
 },
 {
 "cell_type": "code",
-"execution_count":
+"execution_count": 3,
 "metadata": {},
 "outputs": [
 {
@@ -120,6 +120,7 @@
 "\n",
 "Commands:\n",
 " cool2bcool covert a .mcool file to a .bcool file\n",
+" depth Calculate intra-chromosomal contacts with bin distance >=...\n",
 " pileup 2D pileup contact maps around given foci\n"
 ]
 }
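
The notebook cells above capture the CLI help pages; after installing this version they can presumably be regenerated with:

```bash
polaris --help
polaris loop --help   # now lists pool, pred, score, and the development command scorelf
polaris util --help   # now lists cool2bcool, depth, and pileup
```
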
example/README.md
CHANGED
@@ -10,6 +10,11 @@ You can re-run **Polaris** to reproduce these results by following the commands
 
 ## Loop Prediction on GM12878 (250M Valid Read Pairs)
 
+You can download the example data from the [Hugging Face repo of Polaris](https://huggingface.co/rr-ss/Polaris/resolve/main/example/loop_annotation/GM12878_250M.bcool?download=true) by running:
+```bash
+wget https://huggingface.co/rr-ss/Polaris/resolve/main/example/loop_annotation/GM12878_250M.bcool?download=true -O "./loop_annotation/GM12878_250M.bcool"
+```
+Then run the following command to annotate loops from the example data:
 ```bash
 polaris loop pred --chrom chr15,chr16,chr17 -i ./loop_annotation/GM12878_250M.bcool -o ./loop_annotation/GM12878_250M_chr151617_loops.bedpe
 ```
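
For a quick sanity check of the downloaded file (a sketch, assuming the cooler package and its CLI are installed; Polaris itself opens `.bcool` files through cooler at `::/resolutions/<res>`, as polaris/loopLF.py below shows):

```bash
# List the resolutions stored in the example container and inspect the 5 kb layer.
cooler ls ./loop_annotation/GM12878_250M.bcool
cooler info ./loop_annotation/GM12878_250M.bcool::/resolutions/5000
```
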
example/loop_annotation/GM12878_250M_chr151617_loop_score.bedpe
CHANGED
The diff for this file is too large to render.

example/loop_annotation/GM12878_250M_chr151617_loops.bedpe
CHANGED
The diff for this file is too large to render.

example/loop_annotation/GM12878_250M_chr151617_loops_method2.bedpe
CHANGED
The diff for this file is too large to render.

example/loop_annotation/loop_annotation.ipynb
CHANGED
The diff for this file is too large to render.
polaris/loop.py
CHANGED
@@ -36,7 +36,7 @@ def rhoDelta(data,resol,dc,radius):
     except ValueError as e:
         if "Found array with 0 sample(s)" in str(e):
             print("#"*88,'\n#')
-            print("#\033[91m Error!!! The data is too sparse. Please
+            print("#\033[91m Error!!! The data is too sparse. Please decrease the value of: [t]\033[0m\n#")
             print("#"*88,'\n')
             sys.exit(1)
         else:
@@ -76,12 +76,11 @@ def rhoDelta(data,resol,dc,radius):
         else:
             data['rhos']=[]
             data['deltas']=[]
-
     return data
 
 def pool(data,dc,resol,mindelta,t,output,radius,refine=True):
     ccs = set(data.iloc[:,0])
-
+
     if data.shape[0] == 0:
         print("#"*88,'\n#')
         print("#\033[91m Error!!! The file is empty. Please check your file.\033[0m\n#")
polaris/loopLF.py
ADDED
@@ -0,0 +1,218 @@
+import torch
+import click
+import cooler
+import warnings
+import numpy as np
+from torch import nn
+from tqdm import tqdm
+from torch.cuda.amp import autocast
+from importlib_resources import files
+from polaris.utils.util_loop import bedpewriter
+from polaris.model.polarisnet import polarisnet
+from scipy.sparse import coo_matrix
+from scipy.sparse import SparseEfficiencyWarning
+warnings.filterwarnings("ignore", category=SparseEfficiencyWarning)
+
+def getLocal(mat, i, jj, w, N):
+    if i >= 0 and jj >= 0 and i+w <= N and jj+w <= N:
+        mat = mat[i:i+w,jj:jj+w].toarray()
+        # print(f"global: {mat.shape}")
+        return mat[None,...]
+    # pad_width = ((up, down), (left, right))
+    slice_pos = [[i, i+w], [jj, jj+w]]
+    pad_width = [[0, 0], [0, 0]]
+    if i < 0:
+        pad_width[0][0] = -i
+        slice_pos[0][0] = 0
+    if jj < 0:
+        pad_width[1][0] = -jj
+        slice_pos[1][0] = 0
+    if i+w > N:
+        pad_width[0][1] = i+w-N
+        slice_pos[0][1] = N
+    if jj+w > N:
+        pad_width[1][1] = jj+w-N
+        slice_pos[1][1] = N
+    _mat = mat[slice_pos[0][0]:slice_pos[0][1],slice_pos[1][0]:slice_pos[1][1]].toarray()
+    padded_mat = np.pad(_mat, pad_width, mode='constant', constant_values=0)
+    # print(f"global: {padded_mat.shape}",slice_pos, pad_width)
+    return padded_mat[None,...]
+
+def upperCoo2symm(row,col,data,N=None):
+    # print(np.max(row),np.max(col),N)
+    if N:
+        shape=(N,N)
+    else:
+        shape=(row.max() + 1,col.max() + 1)
+
+    sparse_matrix = coo_matrix((data, (row, col)), shape=shape)
+    symm = sparse_matrix + sparse_matrix.T
+    diagVal = symm.diagonal(0)/2
+    symm = symm.tocsr()
+    symm.setdiag(diagVal)
+    return symm
+
+def processCoolFile(coolfile, cchrom, raw=False):
+    extent = coolfile.extent(cchrom)
+    N = extent[1] - extent[0]
+    if raw:
+        ccdata = coolfile.matrix(balance=False, sparse=True, as_pixels=True).fetch(cchrom)
+        v='count'
+    else:
+        ccdata = coolfile.matrix(balance=True, sparse=True, as_pixels=True).fetch(cchrom)
+        v='balanced'
+    ccdata['bin1_id'] -= extent[0]
+    ccdata['bin2_id'] -= extent[0]
+
+    ccdata['distance'] = ccdata['bin2_id'] - ccdata['bin1_id']
+    d_means = ccdata.groupby('distance')[v].transform('mean')
+    ccdata[v] = ccdata[v].fillna(0)
+
+    ccdata['oe'] = ccdata[v] / d_means
+    ccdata['oe'] = ccdata['oe'].fillna(0)
+    ccdata['oe'] = ccdata['oe'] / ccdata['oe'].max()
+    oeMat = upperCoo2symm(ccdata['bin1_id'].ravel(), ccdata['bin2_id'].ravel(), ccdata['oe'].ravel(), N)
+
+    return oeMat, N
+
+@click.command()
+@click.option('-b','--batchsize', type=int, default=128, help='Batch size [128]')
+@click.option('-C','--cpu', type=bool, default=False, help='Use CPU [False]')
+@click.option('-G','--gpu', type=str, default=None, help='Comma-separated GPU indices [auto select]')
+@click.option('-c','--chrom', type=str, default=None, help='Comma separated chroms [all autosomes]')
+@click.option('-t','--threshold', type=float, default=0.5, help='Loop Score Threshold [0.5]')
+@click.option('-s','--sparsity', type=float, default=0.9, help='Allowed sparsity of submatrices [0.9]')
+@click.option('-md','--max_distance', type=int, default=3000000, help='Max distance (bp) between contact pairs [3000000]')
+@click.option('-r','--resol',type=int,default=5000,help ='Resolution [5000]')
+@click.option('--raw',type=bool,default=False,help ='Raw matrix or balanced matrix')
+@click.option('-i','--input', type=str,required=True,help='Hi-C contact map path')
+@click.option('-o','--output', type=str,required=True,help='.bedpe file path to save loop candidates')
+def scorelf(batchsize, cpu, gpu, chrom, threshold, sparsity, max_distance, resol, input, output, raw, image=224):
+    """ *development* Score Pixels for Very Large mcool (>30GB) ...
+    """
+    print('\npolaris loop scorelf START :) ')
+
+    center_size = image // 2
+    start_idx = (image - center_size) // 2
+    end_idx = (image + center_size) // 2
+    slice_obj_pred = (slice(None), slice(None), slice(start_idx, end_idx), slice(start_idx, end_idx))
+    slice_obj_coord = (slice(None), slice(start_idx, end_idx), slice(start_idx, end_idx))
+
+    loopwriter = bedpewriter(output,resol,max_distance)
+
+    if cpu:
+        assert gpu is None, "\033[91m QAQ The CPU and GPU modes cannot be used simultaneously. Please check the command. \033[0m\n"
+        gpu = ['None']
+        device = torch.device("cpu")
+        print('Using CPU mode... (This may take significantly longer than using GPU mode.)')
+    else:
+        if torch.cuda.is_available():
+            if gpu is not None:
+                print("Using the specified GPU: " + gpu)
+                gpu=[int(i) for i in gpu.split(',')]
+                device = torch.device(f"cuda:{gpu[0]}")
+            else:
+                gpuIdx = torch.cuda.current_device()
+                device = torch.device(gpuIdx)
+                print("Automatically selected GPU: " + str(gpuIdx))
+                gpu=[gpu]
+        else:
+            device = torch.device("cpu")
+            gpu = ['None']
+            cpu = True
+            print('GPU is not available!')
+            print('Using CPU mode... (This may take significantly longer than using GPU mode.)')
+
+
+    coolfile = cooler.Cooler(input + '::/resolutions/' + str(resol))
+    modelstate = str(files('polaris').joinpath('model/sft_loop.pt'))
+    _modelstate = torch.load(modelstate, map_location=device.type)
+    parameters = _modelstate['parameters']
+
+    if chrom is None:
+        chrom =coolfile.chromnames
+    else:
+        chrom = chrom.split(',')
+
+    # for rmchr in ['chrMT','MT','chrM','M','Y','chrY','X','chrX','chrW','W','chrZ','Z']: # 'Y','chrY','X','chrX'
+    #     if rmchr in chrom:
+    #         chrom.remove(rmchr)
+
+    print(f"Analysing chroms: {chrom}")
+
+    model = polarisnet(
+        image_size=parameters['image_size'],
+        in_channels=parameters['in_channels'],
+        out_channels=parameters['out_channels'],
+        embed_dim=parameters['embed_dim'],
+        depths=parameters['depths'],
+        channels=parameters['channels'],
+        num_heads=parameters['num_heads'],
+        drop=parameters['drop'],
+        drop_path=parameters['drop_path'],
+        pos_embed=parameters['pos_embed']
+    ).to(device)
+    model.load_state_dict(_modelstate['model_state_dict'])
+    if not cpu and len(gpu) > 1:
+        model = nn.DataParallel(model, device_ids=gpu)
+    model.eval()
+
+    badc=[]
+    chrom_ = tqdm(chrom, dynamic_ncols=True)
+    for _chrom in chrom_:
+        chrom_.desc = f"[Analyzing {_chrom}]"
+
+        oeMat, N = processCoolFile(coolfile, _chrom, raw)
+        start_point = -(image - center_size) // 2
+        joffset = np.repeat(np.linspace(0, image, image, endpoint=False, dtype=int)[np.newaxis, :], image, axis=0)
+        ioffset = np.repeat(np.linspace(0, image, image, endpoint=False, dtype=int)[:, np.newaxis], image, axis=1)
+        data, i_list, j_list = [], [], []
+        count=0
+        for i in range(start_point, N - image - start_point, center_size):
+            for j in range(0, max_distance//resol, center_size):
+                jj = j + i
+                # if jj + w <= N and i + w <= N:
+                _oeMat = getLocal(oeMat, i, jj, image, N)
+                if np.sum(_oeMat == 0) <= (image*image*sparsity):
+                    data.append(_oeMat)
+                    i_list.append(i + ioffset)
+                    j_list.append(jj + joffset)
+
+                while len(data) >= batchsize or (i + center_size > N - image - start_point and len(data) > 0):
+                    count += len(data)
+
+                    bin_i = torch.tensor(np.stack(i_list[:batchsize], axis=0)).to(device)
+                    bin_j = torch.tensor(np.stack(j_list[:batchsize], axis=0)).to(device)
+                    targetX = torch.tensor(np.stack(data[:batchsize], axis=0)).to(device)
+                    bin_i = bin_i*resol
+                    bin_j = bin_j*resol
+
+                    data = data[batchsize:]
+                    i_list = i_list[batchsize:]
+                    j_list = j_list[batchsize:]
+
+                    # print(targetX.shape)
+                    # print(bin_i.shape)
+                    # print(bin_j.shape)
+
+                    with torch.no_grad():
+                        with autocast():
+                            pred = torch.sigmoid(model(targetX.float().to(device)))[slice_obj_pred].flatten()
+                        loop = torch.nonzero(pred>threshold).flatten().cpu()
+                        prob = pred[loop].cpu().numpy().flatten().tolist()
+                        frag1 = bin_i[slice_obj_coord].flatten().cpu().numpy()[loop].flatten().tolist()
+                        frag2 = bin_j[slice_obj_coord].flatten().cpu().numpy()[loop].flatten().tolist()
+
+                    loopwriter.write(_chrom,frag1,frag2,prob)
+        if count == 0:
+            badc.append(_chrom)
+
+    if len(badc)==len(chrom):
+        raise ValueError("polaris loop scorelf FAILED :( \nThe '-s' value needs to be increased for more sparse data.")
+    else:
+        print(f'\npolaris loop scorelf FINISHED :)\nLoopscore file saved at {output}')
+        if len(badc)>0:
+            print(f"But the size of {badc} are too small or their contact matrix are too sparse.\nYou may need to check the data or run these chr respectively by increasing -s.")
+
+if __name__ == '__main__':
+    scorelf()
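
With `scorelf` registered under the `loop` group (see polaris/polaris.py below), a plausible invocation, using the option names declared in the click decorators above, would be (file names are placeholders):

```bash
polaris loop scorelf -c chr15,chr16,chr17 -t 0.5 -r 5000 \
    -i large_sample.mcool -o large_sample_loop_scores.bedpe
```
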
polaris/model/sft_loops.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cae9e9a28e5c3ff0d328934c066d275371d5301db084a914431198134f66ada2
+size 547572280
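
Since the weight file is tracked with Git-LFS (the three-line pointer above), one way to confirm it was actually materialized after cloning is (assuming git-lfs is installed):

```bash
git lfs ls-files                        # should list polaris/model/sft_loops.pt
sha256sum polaris/model/sft_loops.pt    # should match the oid recorded in the pointer above
```
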
polaris/polaris.py
CHANGED
@@ -7,7 +7,7 @@
 
 import click
 from polaris.loopScore import score
-from polaris.
+from polaris.loopLF import scorelf
 from polaris.loopPool import pool
 from polaris.loop import pred
 from polaris.utils.util_cool2bcool import cool2bcool
@@ -42,7 +42,7 @@ def util():
 
 loop.add_command(pred)
 loop.add_command(score)
-loop.add_command(
+loop.add_command(scorelf)
 loop.add_command(pool)
 
 util.add_command(depth)
polaris/version.py
CHANGED
@@ -1 +1 @@
-__version__ = '1.
+__version__ = '1.1.0'
setup.py
CHANGED
@@ -10,14 +10,14 @@ Setup script for Polaris.
 A Versatile Framework for Chromatin Loop Annotation in Bulk and Single-cell Hi-C Data.
 """
 
-from setuptools import setup
+from setuptools import setup
 
 with open("README.md", "r") as readme:
     long_des = readme.read()
 
 setup(
     name='polaris',
-    version='1.0
+    version='1.1.0',
     author="Yusen HOU, Audrey Baguette, Mathieu Blanchette*, Yanlin Zhang*",
     author_email="[email protected]",
     description="A Versatile Framework for Chromatin Loop Annotation in Bulk and Single-cell Hi-C Data",
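
Both polaris/version.py and setup.py now declare 1.1.0; after installation the bump can be checked with a one-liner (a sketch; `__version__` is defined in polaris/version.py as shown above):

```bash
python -c "from polaris.version import __version__; print(__version__)"   # expect 1.1.0
```
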
setup.sh
ADDED
@@ -0,0 +1,43 @@
+#!/bin/bash
+
+# Configuration: Model file path and expected SHA-256 checksum
+MODEL_PATH="polaris/model/sft_loop.pt"
+EXPECTED_HASH="cae9e9a28e5c3ff0d328934c066d275371d5301db084a914431198134f66ada2"
+
+# Pre-check: Verify if the model file exists with valid checksum
+if [ -f "$MODEL_PATH" ]; then
+    # Calculate current file hash
+    ACTUAL_HASH=$(sha256sum "$MODEL_PATH" | awk '{print $1}')
+
+    # Hash validation logic
+    if [ "$ACTUAL_HASH" = "$EXPECTED_HASH" ]; then
+        echo "✅ Valid model file detected, skipping download"
+        pip install --use-pep517 --editable .
+        echo "✅ Polaris installation completed"
+        exit 0
+    else
+        # Security measure: Remove corrupted/invalid file
+        echo "⚠️ Invalid file hash detected, triggering re-download"
+        rm -f "$MODEL_PATH"
+    fi
+fi
+
+# Model download process
+echo "⏳ Downloading model from Hugging Face..."
+wget -O "$MODEL_PATH" "https://huggingface.co/rr-ss/Polaris/resolve/main/polaris/model/sft_loop.pt?download=true"
+
+# Post-download verification
+ACTUAL_HASH=$(sha256sum "$MODEL_PATH" | awk '{print $1}')
+if [ "$ACTUAL_HASH" != "$EXPECTED_HASH" ]; then
+    # Error handling for failed verification
+    rm -f "$MODEL_PATH"
+    echo "❌ Download failed: Checksum mismatch (Actual: $ACTUAL_HASH)"
+    echo "Manual download required:"
+    echo "wget -O polaris/model/sft_loop.pt \"https://huggingface.co/rr-ss/Polaris/resolve/main/polaris/model/sft_loop.pt?download=true\""
+    exit 1
+else
+    # Success workflow
+    pip install --use-pep517 --editable .
+    echo "✅ Model saved to: $MODEL_PATH"
+    echo "✅ Polaris installed successfully"
+fi
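
The relative MODEL_PATH suggests the script is meant to be run from the repository root; a likely end-to-end use (the clone step mirrors the README instructions removed above) is:

```bash
git clone https://github.com/ai4nucleome/Polaris.git
cd Polaris
bash setup.sh   # downloads and verifies sft_loop.pt, then installs Polaris in editable mode
```
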